What Does 'Winning' The AI Tech Race Between the U.S. and China Actually Mean?

Published February 6th, 2019 - 10:33 GMT
(Rami Khoury/Al Bawaba)

 

By Ty Joplin

 

The geopolitical giants of the 20th century, the U.S. and the Soviet Union, locked themselves into a space race that resembles the current quest for AI dominance between the U.S. and China.

Both states funnelled billions into their respective spaceflight programs in the hope of being the first to develop advanced spaceflight technology. The U.S. eventually ‘won’ by successfully landing the Apollo 11 lunar module on the surface of the moon.

Now, policy analysts and commentators are framing AI development in the U.S. and China as the next ‘tech race,’ following benchmarks of progress to determine who is winning. Beating the world’s top chess player was considered a major benchmark for machine learning; another was winning at the ancient Chinese game of ‘Go.’

China and the U.S. are also racing to design AI and machine learning algorithms at breakneck speed while trying to slow each other’s progress, much as the U.S. and USSR did in the mid-20th century.

But this type of coverage and framing misses the more fundamental and pressing questions of the AI tech race, ones that were implicitly understood in determining the ‘winner’ of the earlier race to spaceflight.

The point for each state was never simply to ‘win the race’ but to demonstrate that it had the superior model for organizing society and the state: one that was more efficient, more prosperous and better at engineering progress past each benchmark. Getting into space first and landing on the moon were just ways of showing off one side’s perceived societal superiority.

 

 

Current coverage of the race to AI dominance, however, hides the more fundamental questions lurking beneath: namely, the political visions driving AI’s development into the future.

What are the socio-political visions of those creating AI and algorithms? What types of societies do they help engender, and how do they affect a person’s relationship to the state and to corporations? Do China and the U.S. have different ideas for how AI ought to be used?

‘Winning’ the AI race may be a vague goal, but the real trophy rests with the government that can successfully use AI to serve its strategic priorities.

As AI becomes more advanced and is relied upon as a tool of governance, it is increasingly important to understand the political visions of its coders.

The emerging cold war between the U.S. and China includes a vitally under-reported difference in the way each government is attempting to utilize AI.

For China, AI is an instrument being used to centralize state power over citizens and to augment the authoritarian government’s hold on power.

The U.S., so far, lacks a central organizing principle for the political use of AI, but is trying to develop one that advances its military edge.

 

The CCP’s Big Data Dreams

(Megvii)

The biggest difference between China’s and the U.S.’ AI ambitions is that China has a unified one and is structuring its technology sector around it. The U.S., meanwhile, with its array of private Silicon Valley interests and Washington policymakers lacking a consensus, has yet to develop a cohesive strategy for AI.

Zhuang Rongwen, the new head of the Cyberspace Administration of China, laid out the country’s hopes for the new AI-powered digital age in a manifesto published in September 2018.

“In the current era, the Internet has already become the main channel for producing, disseminating, and obtaining information. Its ability to mobilize society is ever greater, increasingly becoming a transmitter and amplifier of all kinds of risks,” he writes.

“Whoever masters the Internet holds the initiative of the era,” Zhuang writes, “and whoever does not take the Internet seriously will be cast aside by the times.”

The manifesto decries the lack of control the CCP has over the internet as the central organizing space, and portrays the digital realm’s chaotic lack of governance as having “created major difficulties for the Party ideological departments’ uniform leadership over ideology and theory propagation…”

Totalizing control over cyberspace and the flow of information, Zhuang explains, “has become a major and urgent task set in front of us.”

China’s leader, Xi Jinping, put it more bluntly: “We need to enhance the combination of AI and social governance and develop AI systems for government services and decision-making,” he explained at a Politburo "group study session" regarding AI. Chinese academics and tech workers call this state embrace of AI governance a “military-civil fusion.”

The new manifesto reflects the CCP’s general desire to use AI as a form of digital governance that compels, incentivizes and coerces acquiescence while punishing, surveilling and censoring dissent.

It’s a continuation of the CCP’s general style of authoritarian governance, only it adds a flashy new tool to accomplish these tasks.

 

Xi Jinping (AFP/FILE)

To harness the technology under the auspices of the CCP, China has organized much of the technology sector around the ruling party’s agenda.

Its biggest technology companies are led by CCP loyalists and party members, and state-owned investment firms ensure that any promising tech company developing strategically valuable AI is generously funded and incubated.

Tencent is one of the biggest companies in the world, responsible for multi-billion-dollar investments in other firms as well as social media apps, games and internet services, among other products. Its founder, CEO and chairman, Ma Huateng, is a delegate in China’s National People’s Congress. At a conference in Singapore, Ma explained that free speech was potentially dangerous, saying “We are a great supporter of the government in terms of the information security. We try to have a better management and control of the Internet.”

Tencent’s prized all-purpose app, WeChat, reflects this position. WeChat is a credit card, social media app, messaging service, appointment booker, logistical assistant, search engine and mobile calling device all in one application. Because of its ubiquitous functionality, the app helps further China’s so-called Golden Shield of censorship by filtering out messages and images that may contain anti-regime sentiment.

 

 

Alibaba, China’s other major tech company, competes with Tencent for profits but serves the CCP’s AI goals just as efficiently. Its founder and chairman, Jack Ma, was recently revealed to be a member of the CCP.

Alibaba is one of the country’s biggest financiers of emergent tech that relies on AI to bolster predictive policing programs deployed throughout China.

In early 2018, Alibaba invested $327 million in Megvii, which helped develop a face-scanning payment program for Alibaba. Megvii specializes in facial recognition technology, and one of its most advanced products, Face++, is being developed and deployed as part of a multi-pronged police state in northwestern China.

Later, in April 2018, Alibaba led a $620 million fundraising effort for Megvii’s competitor, SenseTime, which also produces state-of-the-art facial recognition tech to be used in monitoring and predictive policing.

State-owned venture capital and investment firms are also pouring billions into tech projects which promise to provide the CCP innovative ways to centralize control over its people and cyberspace.

To add to Megvii’s value, Bank of China Group Investment, a state-owned private equity firm, pledged $200 million to the tech startup. 

According to the Reuters report which broke news of the firm’s investment intentions, “Megvii’s latest fundraising comes amid Beijing’s plans to build a ubiquitous closed-circuit television (CCTV) surveillance network and become an international leader in AI, a technology that is increasingly becoming key to various sectors.”

Another state-owned investment body was established in 2016 by the CCP and given a $30 billion budget to invest in tech startups and AI. A Global Risk Insights report on the new fund details that “although referred to as ‘venture capital’ in name, the fund exists first and foremost to serve the needs of the government.”

 

 

The overall level of coordination, cooperation and strategic alignment of China’s ruling government and its private firms is striking. A real-time illustration of this joint public-private effort to develop AI towards centralizing control can be found in the province of Xinjiang.

Much has been reported on Xinjiang’s multifaceted, layered police state, but the basics are these: the CCP is building a state-operated, privately supplied surveillance network aimed at the region’s ethnic and religious minorities. Over a million Turkic Muslims have reportedly been sent to detention centers, while millions of others have their every movement, conversation and purchase tracked by an exhaustive, unified monitoring system.

Maya Wang, a researcher for Human Rights Watch, explained the goal of China's AI-powered police state: “The goal is to mass engineer the identity of the Muslims--which are too different from Hans [the main ethnicity in China], from the state's perspective-- so they become loyal, obedient subjects of the CCP. This is done through pervasive surveillance, political indoctrination and control--particularly over their movement--over the Muslims of that region."

Xinjiang is the realized dream of the CCP’s AI ambitions. It’s also serving as a human testing ground to further augment and perfect the tech that’s being tested there.

 

The U.S.’ Chaotic Assemblage of AI Interests

(AFP/FILE)

The U.S., on the other hand, has comparatively little coordination between its various public and private interests.

While China has the overarching CCP, party-aligned tech giants and investment firms acting in unison towards a single objective, the U.S. has independent Silicon Valley companies and a federal government desperately trying to entice them with lucrative government contracts.

AI’s application in the U.S.’ public sphere is defined by local, conflicting interests rather than nationally developed strategies. For example, some local police departments use AI-driven programs to identify which neighborhoods to patrol or which suspects to charge. AI experts have warned that these programs simply reinforce the racial prejudice of their coders and undermine efforts at police accountability. Some startups are now designing AI to police the police by analyzing officers’ body camera footage; others are developing AI-powered body cameras that enhance officers’ ability to carry out surveillance.

Private U.S. companies, like those in China, have an outsized ability to gather massive amounts of data, but they often process that data internally to further their own agendas or sell it to other private companies. When the U.S. federal government demands access to some of this information, private pushback is often fierce.

For example, in 2016 the FBI tried to force Apple to unlock the phone of a suspect in a terror attack; Apple refused, and the issue went to court.

Further, to counteract the buildup of Big Data in the hands of powerful U.S. corporations and police, scientists and engineers at MIT are developing advanced cryptographic methods to improve transparency over when and what data is released, and to ensure accountability for those who improperly gather data.

All of this paints a chaotic picture of AI’s socio-political developments, with its many players positioning themselves against one another and no central authority or blueprint regulating or guiding them.

In May 2018, then-U.S. Defense Secretary Jim Mattis wrote a memo to Trump pleading with him to create a unified framework for harnessing the strategic power of AI.

In the memo, Mattis said the U.S. was lagging behind the progress of Chinese AI development. Quoting a recent Atlantic piece penned by Henry Kissinger, Mattis urged Trump to create a presidential commission on AI that would “ensure the U.S. is a leader not just in matters of defense but in the broader ‘transformation of the human condition’” that AI will bring about.

According to cybersecurity experts Al Bawaba spoke with, despite this plea, the U.S. federal government has yet to develop such a blueprint.

 

 

“The U.S. still don’t have a cohesive, national AI strategy, for instance, and policymakers who discuss AI focus too heavily on its military applications,” said Justin Sherman, a Cybersecurity Policy Fellow at New America.

While the federal government lags behind, the Department of Defense is attempting to create a military-backed tech incubator of its own. In June 2018, the Pentagon announced the creation of the Joint Artificial Intelligence Center (JAIC), nicknamed ‘Jake,’ which will provide funding and oversight to firms developing military tech.

The memo that announced JAIC describes the center as one that revolves around ‘National Mission Initiatives,’ the first of which is reportedly Project Maven.

Project Maven is an ongoing program to enhance the Pentagon’s ability to target people in drone strikes. As such, it relies on developing an advanced AI program that can distinguish between people and objects.

Google signed on to build the AI and reportedly outsourced part of the work to a crowd-working website that pays people as little as $1 an hour to perform short, menial tasks such as identifying objects in videos and pictures. The workers did not know that the data they were sifting through and labeling was going into a military AI program to be used around the world.

 

U.S.’ armed ‘Grey Eagle’ Drone (U.S. Military)

When Google’s employees learned of their company’s efforts to build an AI to be used in drone strikes, 3,100 of them signed a petition demanding the company end its dealings with the Pentagon, something that could never happen in China.

Google’s executives relented and withdrew from the project, but internal emails acquired by Gizmodo show an executive team that “saw a huge opportunity for growth in the possibility of lucrative business with the Pentagon and projects that could ultimately lead to a cutting-edge AI-powered system capable of surveilling entire cities.”

Despite the controversy, Project Maven’s AI program “has already been deployed to half a dozen locations in the Middle East and Africa, where it is helping military analysts sort through the mountains of data their sensors and drones soak up."

The U.S. may have developed the most sophisticated and advanced AI systems, capable of conquering increasingly complex problems, but China is putting its own to brutally efficient socio-political ends that end up being far more impactful.

On this point, Al Bawaba spoke with Dan McQuilliam, an experimental physicist and theorist on AI’s applications at Goldsmiths, University of London, and asked him what it means that AI, arguably the most promising technology being developed in the world today, is being honed as a tool to police and surveil populations.

“I think it says we're in serious trouble,” he said bluntly.


© 2000 - 2019 Al Bawaba (www.albawaba.com)
