Artificial Intelligence Isn't an Agent of Oppression, but in China It's Already the Tool

Published January 31st, 2019 - 01:46 GMT
(AFP/FILE)

By Ty Joplin

Since their entrance into mainstream political consciousness, Artificial Intelligence (AI) and Big Data have been harbingers of political doom and of sweeping geopolitical shifts to come.

Movies, TV series, think pieces and tech reports paint an increasingly grim picture of power being handed over by governments and citizens to amorphous algorithms that govern with no transparency.

The most dramatic depiction is the all-out, data-driven apocalypse of the Terminator universe, but subtler, more intimate glimpses of our Data Hell come from Black Mirror, whose episodes shed light on people, relationships and societies that have sacrificed their subjectivity in the name of optimization. In the political sphere, a mainstream position in the U.S. Democratic Party is that Russia stole the 2016 presidential election with advanced hacking tools and trolls.

The emerging race between the U.S. and China to develop the most advanced AI is being called the Cold War of the 21st century: the central power struggle that will define an era for the world.

Technological developments in AI and algorithms aimed at governance are simultaneously viewed as a naturally evolving phenomenon and an imminent political peril. This framing, popular as it is, misunderstands who the agent in AI and Big Tech really is. It is not the algorithms themselves, but those who design them and define their goals.

It’s not Big Tech and AI that are threatening democracy and sparking a new Cold War; it’s the political vision of those who design their code, and the unchecked hold on power that allows them to make those visions a reality. AI and algorithms, the experts interviewed by Al Bawaba say, are merely tools used to maintain and extend status quo political landscapes.

Authoritarian states may use these tools to centralize control, while democracies can use them to expand and modernize people-power.

Continuing to misattribute the power of algorithms to the code itself, cybersecurity experts argue, guarantees that the states and corporations behind them will continue to shape AI-driven governance with no transparency.

AI and Algorithms Are Not Self-Generating a Data-Driven Technostate

(The Royal Society of Arts)

To understand the power of AI and the algorithms that define it, it’s important to start by defining what these algorithms are and what they aren’t.

Cathy O’Neil, a data scientist and author of the book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” recently spoke about a central misconception about algorithms: “I went into data science and I was struck by what I thought was essentially a lie. Namely that algorithms were being presented and marketed as objective fact,” she said.

“A much more accurate description of an algorithm is that it’s an opinion embedded in math. An algorithm is a very general concept. It’s something that we do in our heads every day. To build an algorithm, we need only two things essentially: historical data and a definition of success.”

An illustrative example of an algorithm, O’Neil says, is cooking dinner for her family. Her data set is her ingredients, and she assesses success by whether her children are eating. If they’re eating vegetables, the dinner was a success. On the flip side, if the children were the ones defining success, the dinner algorithm would fail whenever it produced vegetables, O’Neil quips.
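To make O’Neil’s point concrete, here is a minimal sketch in Python (the dishes, names and point of the exercise are invented for illustration, not drawn from her book): the same historical data is judged a success or a failure depending entirely on whose definition of success the algorithm encodes.

    # A toy "dinner algorithm": the same data, two opposing definitions of success.
    meals = [
        {"dish": "stir-fry", "has_vegetables": True,  "eaten": True},
        {"dish": "pasta",    "has_vegetables": False, "eaten": True},
    ]

    def parent_success(meal):
        # The parent's opinion: success means vegetables were eaten.
        return meal["eaten"] and meal["has_vegetables"]

    def child_success(meal):
        # The children's opinion: success means no vegetables appeared.
        return meal["eaten"] and not meal["has_vegetables"]

    for judge in (parent_success, child_success):
        print(judge.__name__, [judge(m) for m in meals])

    # parent_success [True, False]
    # child_success  [False, True]   <- identical data, opposite verdicts

Neither verdict is wrong as a matter of math; each simply mechanizes a different opinion.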

The facial recognition algorithms that power predictive policing programs function in the same way, though their political consequences can be grave.

In 2016, a man named Willie Lynch was convicted of selling crack in Florida: his case hinged on a photo taken at the scene of a drug deal with undercover agents, which was then run through facial recognition software. That same software has come under fire for its racial biases, which many claim stem from the fact that it is written largely by members of one race and so cannot be accurately used on another. Facial recognition software has notoriously confused black people with gorillas, as coders imprinted their own biases about others’ physicality into the systems.

Combine that with policing practices that disproportionately target vulnerable communities of color, and you have an algorithm just as biased as its programmers, one that embeds racialized policing protocols in the language of scientific objectivity.

“If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” U.S. Representative Elijah Cummings, now Chair of the House Oversight Committee, said in 2017.

“That’s a hell of a combination.”

(Shutterstock)

Crime-predicting algorithms also reportedly target poorer neighborhoods with large minority populations, as these are the areas already monitored most heavily by police.

These algorithms recommend a heavier police presence, which exacerbates the bias and drives arrest rates up. “The utilitarian bean-counters will clock this up as a success. But the data will continue to feed back into the model, creating a self-perpetuating loop of growing inequality and algorithm-driven racial targeting,” explains Jamie Bartlett, a senior fellow at Demos.
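A toy simulation makes the loop Bartlett describes visible. All the numbers below are hypothetical: two neighborhoods have an identical underlying crime rate, but neighborhood A starts with slightly more recorded arrests, so a naive model keeps flagging it as the hotspot and sending patrols there, and more patrols generate more arrests.

    # Hypothetical feedback loop: patrols follow past arrests, arrests follow patrols.
    TRUE_CRIME_RATE = 0.05              # identical in both neighborhoods
    recorded = {"A": 55, "B": 45}       # slightly biased historical arrest data

    for year in range(1, 6):
        hotspot = max(recorded, key=recorded.get)
        patrols = {n: (70 if n == hotspot else 30) for n in recorded}
        for n in recorded:
            # Arrests track patrol presence, not any real difference in crime.
            recorded[n] += patrols[n] * TRUE_CRIME_RATE * 10
        print(year, hotspot, {n: round(v) for n, v in recorded.items()})

    # A is flagged as the hotspot every year; a small initial bias hardens
    # into a permanent disparity in both patrols and arrest records.

The model never observes the real crime rates at all; it only ever confirms its own deployment decisions.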

“We embed our values into algorithms,” data scientist O’Neil concluded. To put it simply, an algorithm mechanizes a way to accomplish a goal set by humans, whose values are likewise defined by humans and reliant on pre-existing social or political conditions. It is a tool, and it functions in service of those who make it.

Yet this dynamic has done little to inform how the global community conceptualizes the power of AI. Instead, experimental physicist Dan McQuillan observes, “AI has become a hyperobject so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints.”

In other words, AI is becoming an inaccessible, transcendent entity: untouchable, unanalyzable and ungovernable, sitting in a black box of false objectivity even as it slowly takes over governance, policing and surveillance.

“It's vital we don't allow the AI-evangelists or product managers to dominate this conversation: it can't just be 'oh this is too confusing!' or 'oh the march of AI is inevitable,'” Alex Krasodomski-Jones, director of the Centre for the Analysis of Social Media at the London-based Demos, told Al Bawaba.

“Neither of those things are true.”

“Unchecked, AI is likely to augment existing powers, particularly in states with weak democratic safeguards. Increasingly invasive surveillance, for instance, or the use of machine-learning models in law enforcement, are worrying precedents for the use of AI under repressive regimes,” Krasodomski-Jones added.

So What Do AI and Big Data Do Politically?

An emerging consensus among data scientists and tech analysts is that AI and its constituent algorithms reinforce pre-existing political orders with new, automated ways to govern people.

“A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal,” McQuillan writes.

This emerging type of society is particularly evident in places ruled by authoritarian states, where governments can exert total control over the development, goals and use of Big Data tech.

“Authoritarian regimes can easily abuse such new powers and will be tempted to use them to control and suppress dissent,” said Adrian Zenz, a social researcher with the European School of Culture & Theology whose work has helped to expose a dystopian techno-police state in China.

“It can certainly be said that the increase of surveillance technologies and predictive algorithms is likely to make daily life in connection with a securitized state less transparent, and could lead to more frequent and more automated infringements on private rights, both by governments and companies,” he added.

Some examples of tools that can be exploited to enhance a state’s power over its people: predictive policing programs that mass-monitor millions of people and pick out those deemed most likely to commit crimes; hate speech monitoring programs whose parameters can be tweaked to jail dissidents; and algorithms that block information and sites potentially critical of the state.

All of these types of programs are being developed by Chinese companies aligned with the governing Chinese Communist Party (CCP).

China’s Social Credit System

Beijing sunset (AFP/FILE)

The most striking example of precisely how this tech can enhance state power is China’s social credit system, which is currently being refined throughout the country.

The credit system aims to give every Chinese citizen a standardized trustworthiness score, and every recorded act, in public or in private, may affect that score. If your score falls below certain thresholds, you are slowly edged out of society: your rent may increase, you may be denied certain forms of public transportation, your passport may be invalidated and your face may be plastered in public to shame you.

“The impact of AI will be mostly felt on the domestic scene with the significant amount of data that is collected - for instance, the social credit system that China is putting in place is possible because of the extensive data that the state is able to collect on its citizens and others living within its boundaries,” Dr. Rajeswari Pillai Rajagopalan, a distinguished fellow and head of the Nuclear and Space Policy Initiative at the Observer Research Foundation, told Al Bawaba.

“The facial recognition and social tracking that China has established to a large extent will aid in strengthening policing, law and order mechanisms and counter-terrorism but the very same technologies and capabilities can be put to use in a negative manner which can give way to what is known as digital authoritarianism,” she added.

China’s social credit system is still in its early stages, but the residents of one of the system’s pilot towns have already felt its effects. Donghuo Tangzhai, in China’s Rongcheng region, has a population of about 3,000 people. Zhou Aini, a retired resident, has been hired by the government to be an ‘Information Collector’ for the town.

Zhou is paid to walk around the town, talk to people on the street about what others are doing, and record their lives in a notebook she carries everywhere. She writes down the details of people littering on the street or helping elders throughout the small town.

In this town, everyone starts off with 1,000 points. The deeds recorded in Zhou’s notebook are presented to a monitoring officer, who assigns a positive or negative numerical value to each of them and publicizes the results on a community signpost, where everyone can see just how good or bad their neighbors have been.
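As a purely illustrative sketch (the deeds, point values and resident names below are invented, not taken from Rongcheng’s actual rules), the ledger works something like this:

    # Hypothetical version of the town ledger described above.
    START_SCORE = 1000

    # Signed values a monitoring officer might assign to reported deeds.
    DEED_VALUES = {"littering": -5, "helping an elder": +5}

    scores = {}  # resident -> running score, posted publicly

    def record_deed(resident, deed):
        """Apply the officer's value for a reported deed; return the new score."""
        scores.setdefault(resident, START_SCORE)
        scores[resident] += DEED_VALUES[deed]
        return scores[resident]

    record_deed("resident_1", "littering")         # -> 995
    record_deed("resident_2", "helping an elder")  # -> 1005

    # Signpost-style publication of everyone's running total:
    for name, score in sorted(scores.items()):
        print(name, score)

The mechanics are trivial; the power lies entirely in who writes the table of point values and who does the reporting.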

“Now in our community, neighbors get along very well. There are no fights. Not at all. Now life is good. There is no reason to fight,” Wang Fengbo, the appointed director of the town’s credit system, told a reporter.

Wang shared the story of a resident with a penchant for alcohol who used to physically abuse his wife. “Although this is a family issue, it’s a bad influence,” said Wang. “We also deduct the points.”

Zhang Yingjie, another resident, co-signed a loan with someone who was unable to pay his share, and was subsequently docked social points. Because of his new low score, he was unable to secure any more loans, take high-speed transit around the vast country or buy certain products.

Zhang Yingjie (Vice News)

To restore his score to the point where it no longer inhibited his quality of life, Zhang had to go to a local government office and pay money the government insists goes to charity. The workers he gave the money to recorded his deed, and slowly his score went back up. Zhang explained that he also donated blood and did volunteer work in his spare time to help nurture his score.

The CCP’s experiment with a unified social credit system has already impacted tens of millions of people, who find themselves paying more interest on loans or for utilities, or are unable to travel.

Judging from the stories shared from Donghuo Tangzhai, residents are careful to watch their own behavior even in private, lest their actions be recorded and penalized.

As currently designed, the credit system is more or less a worst-case scenario for techno-dystopians, who fear that people power is being ceded to impersonal Big Data algorithms that govern their lives.

But again, that framing mischaracterizes the system as the agent of the oppression rather than the tool. It risks excusing the very human political responsibility the government bears for deploying the program, while helping turn the social credit system itself into an intangible ‘hyperobject.’

The social credit system isn’t the beginning of a technocracy but an indication that a central government has found a way for its citizens to acquiesce to its system of controls so thoroughly that they now police themselves. It’s a proof-of-concept policy that reveals the use of Big Data and AI as tools that help a state wield power over its people.

The tools may seem frightening and unfamiliar, but the perpetrators remain the same.

“Artificial intelligence threatens to centralise political decision-making power, disempower citizens, alienate them from participating in democratic decision-making, reduce the power of workers in a workforce and so on,” noted Alex Krasodomski-Jones of Demos.

AI isn't a new harbinger of doom, but another political tool in the age-old quest to consolidate power. Demanding transparency in the code-making process may be the most effective means to understand its uses and abuses.

According to Krasodomski-Jones, “for us to make the best of AI, we need two things: first, the political will, powers and scrutiny, and likely regulation, required to ensure that AI is being used in a way that is fair and democratic. Second, a commitment by AI developers to ensure that the products they are building are open to this level of scrutiny."

