Discussion of artificial intelligence in geopolitics usually centers on competition between the United States and China. While reports are inconclusive about which country will ultimately win (if that is even the right term), there is a glaring shortcoming in the United States that China will largely avoid.

No, it’s not funding. In 2017 China accounted for 48% of global AI venture capital while the US accounted for only 38%. But just two years later the trend reversed, possibly because of headwinds from the trade war.

Nor is it talent acquisition and a brain drain. For years, analysts have pointed to the decline in STEM (science, technology, engineering and mathematics) graduates at American universities. These studies are coupled with the observation that foreign students (especially from China and India) comprise a large proportion of these graduates. Yet talent acquisition is more complicated than university statistics suggest. And recent studies show that the US is still the largest destination for researchers, graduates, and students in the AI field.

Nor, finally, is it any structural difference between the US and Chinese political economies. These arguments, while fascinating from a macro-historical lens, do little to clarify industry specifics.

It’s demographics.

The central social problem with artificial intelligence in the United States is bias and discrimination. This isn’t about dystopian robots or technology singularities. It’s about advanced analytics solutions for recruitment, advertising, marketing, biometrics, health insurance and law enforcement.

And because China’s Han majority comprises 91.6% of the population, the large-scale integration of robust AI services in the digital economy will happen faster and more efficiently in China than in the United States.

Policy discussions in the US

Concerns about the potential discriminatory effects of AI are growing. One component of this is the publicity surrounding major PR blunders involving the algorithms of companies such as Google, Facebook and Apple.

For instance, Google’s image-recognition technology has come under attack for mislabeling black people as gorillas. Facebook has likewise agreed to multiple settlements for failing to correct the discriminatory bias in the company’s AdTech platform.

Another component is the growing emphasis on this issue in public-policy discussions. Civil-rights advocacy groups such as the American Civil Liberties Union are beginning to focus heavily on the negative implications of facial-recognition technology in law enforcement in their outreach efforts. Moreover, the US state of Illinois has already passed a bill to regulate artificial intelligence, and the US Congress, when it actually does anything productive, may consider the Algorithmic Accountability Act introduced this year.

Throughout Think Tank Land USA, the topic is also becoming increasingly popular: the number of papers, events, and funding streams devoted to ethical AI design has shot up considerably this year.

Digital race to the bottom

What this means is that AI deployment and innovation in the United States will face constraints during the short-term market-diffusion stage.

Big Data amplifies Metcalfe’s network effect in economics – the idea that the value of a network grows roughly with the square of its number of users, so each marginal connection adds outsized utility. In concrete terms, as a company adds individuals to its network, the chances of other individuals joining that network increase considerably.

The outcome is that a company that gains that network advantage early on stands a better chance of outcompeting others and acquiring market share.
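The advantage of an early start can be made concrete with a back-of-the-envelope sketch. The toy model below is purely illustrative (the growth rates, user counts, and the assumption that value scales with the square of the user base per Metcalfe's law are all hypothetical, not drawn from any real market data): two identical networks grow at the same compounding rate, but one starts four periods earlier.

```python
# Illustrative sketch of Metcalfe's law: network value scales with the
# square of the user count, so a head start compounds into an outsized
# value gap. All parameters here are made-up for illustration.

def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Network value proportional to the square of the user count."""
    return k * users * users

def simulate(start_users: int, growth_rate: float, periods: int) -> list:
    """User counts over time. Growth compounds with the existing base --
    a crude stand-in for 'more users attract more users'."""
    users = start_users
    history = [users]
    for _ in range(periods):
        users = int(users * (1 + growth_rate))
        history.append(users)
    return history

# Early mover gets 10 growth periods; late entrant only 6.
early = simulate(start_users=1000, growth_rate=0.30, periods=10)
late = simulate(start_users=1000, growth_rate=0.30, periods=6)

# A ~2.9x lead in users becomes a ~8x lead in network value.
print(f"user lead:  {early[-1] / late[-1]:.1f}x")
print(f"value lead: {metcalfe_value(early[-1]) / metcalfe_value(late[-1]):.1f}x")
```

A modest head start in users translates into a far larger gap in network value, which is exactly why companies race to capture a new digital market first.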

This creates a classic race to the bottom whenever a new digital “industry” pops up. And companies are becoming wiser about exploiting this phenomenon. As CBInsights repeatedly points out, market disruption at an alarming rate is the quintessential trend in AI.

History complicates technological development

The effect: Companies have consistently ignored long-term risk in the design and programming stage of AI product development. And since this has become the general practice in the United States, the standard for evaluating “national competitiveness” in AI is now set at a problematic level.

Future innovation will be essential to the interests of the United States. And innovation in a market economy generally means letting companies do what they want and make the decisions they must to stay ahead of the curve, with little government interference.

All of this means that as the specter of regulation looms in the shadows of public policy, companies will play the innovation card in order to avoid algorithmic accountability and transparency.

Moreover, because of the unique history of systemic racial discrimination and level of demographic diversity in the United States, the push toward regulation will not and cannot go away. In turn, this will affect risk assessment and decision-making for the biggest AI innovators.

To be sure, it may be possible to find a way to promote healthy AI design without inhibiting the market. One example is shifting the incentive structure away from cost externalization so that companies bear the responsibility of internalizing those costs themselves.

But such an outcome is unlikely given current trends in AI programming and increased geopolitical competition across the world. And in terms of competition, China will simply not have to deal with the problems of discrimination and bias that the US faces.

Eventually China will have to confront these problems when it attempts to export its AI products and services to countries that do have demographic diversity. But by that time, AI capacity within its borders will be so well established and efficient that the innovation of today will look as primitive as dial-up in comparison.