A Guide to Actually Understanding the Political Impact of AI

(AFP/FILE, edited by Rami Khoury/Al Bawaba)
Published August 19th, 2018 - 08:49 GMT
Since their entrance into mainstream political consciousness, Artificial Intelligence (AI) and Big Data have been seen as harbingers of either political doom or revolution.

Movies, TV series, think pieces and tech reports paint an increasingly grim picture of power being handed over by governments and citizens to amorphous algorithms that govern with no transparency.

The most dramatic depiction is the all-out data-driven apocalypse of the Terminator universe, but subtler, more intimate insights into our Data Hell come from Black Mirror, whose episodes shed light on people, relationships and societies that have sacrificed their subjectivity in the name of optimization.

Speculating about AI is now its own sub-industry, with constellations of anointed experts roaming conferences, summits and working groups, trading their dreams and cautions about the promise of AI.

In the political sphere, a mainstream position in the Democratic party of the U.S. is that Russia stole the 2016 Presidential Election with advanced hacking tools and troll factories.

The emerging race between the U.S. and China to develop the most advanced AI is being called the Cold War of the 21st century; the central power struggle that defines an era for the world.

Technological developments in AI and algorithms aimed at governance are simultaneously viewed as a naturally evolving phenomenon and an imminent political peril. This framing, popular as it is, misunderstands who the agent is in AI and Big Tech.

It is not the algorithms themselves, but those who design them and define their goals.

It’s not Big Tech and AI that are threatening democracy and sparking a new Cold War; it’s the political vision of those who design their code, and the unchecked hold on power that allows them to make those visions a reality. AI and algorithms are merely tools used to maintain and extend the political status quo.

Authoritarian states may use these tools to centralize control, while democracies can use them to expand and modernize people-power.

AI has been decided upon as the way of the future, for better or for worse. And in this environment, distinguishing the actual impacts of AI in governance from prophetic polemics is becoming harder; the task grows more inaccessible even as the stakes for understanding it rise by the day.

Continuing to misattribute the power of algorithms to code itself, cybersecurity experts argue, guarantees the states and corporations behind them will continue to shape AI-driven governance with no transparency.

Part One: Understanding how AI works

AI and Algorithms Are Not Self-Generating a Data-Driven Technostate

To understand the power of AI and the algorithms that define it, it’s important to start by defining what these algorithms are and what they aren’t.

Cathy O’Neil, a data scientist and author of the book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” recently spoke about a central misconception about algorithms: “I went into data science and I was struck by what I thought was essentially a lie. Namely that algorithms were being presented and marketed as objective fact,” she said.

“A much more accurate description of an algorithm is that it’s an opinion embedded in math. An algorithm is a very general concept. It’s something that we do in our heads every day. To build an algorithm, we need only two things essentially: historical data and a definition of success.”

An illustrative example of an algorithm, for O’Neil, is cooking dinner for her family. Her data set is her ingredients, and she assesses success by whether her children are eating. If they’re eating vegetables, the dinner was successful. On the flipside, if the children were the ones who defined the success of the dinner algorithm, it would be a failure if they encountered any vegetables, O’Neil quips.
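O’Neil’s two ingredients can be made concrete in a few lines of code. The sketch below is a toy illustration of our own, not anything from her book: the same “historical data” yields opposite verdicts depending on whose definition of success is embedded.

```python
# Toy illustration of "an opinion embedded in math": the same data,
# judged by two different definitions of success. All values invented.

dinners = [
    {"meal": "pasta with broccoli", "vegetables_eaten": True},
    {"meal": "pizza", "vegetables_eaten": False},
]

def parent_success(dinner):
    # O'Neil's definition: dinner succeeds if the kids ate vegetables.
    return dinner["vegetables_eaten"]

def child_success(dinner):
    # The kids' definition: dinner succeeds if no vegetables appeared.
    return not dinner["vegetables_eaten"]

for d in dinners:
    print(f"{d['meal']}: parent verdict {parent_success(d)}, "
          f"kids' verdict {child_success(d)}")
```

Same data, different goal, opposite verdict: the "objectivity" lives entirely in whoever wrote the success function.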

Facial recognition algorithms that feed predictive policing programs function in the same way, though their political consequences can be grave.

In 2016, a man named Willie Lynch was convicted of selling crack in Florida: his case hinged on a photo taken at the scene of a drug deal with undercover agents, which was then run through facial recognition software. That same software has come under fire for racial bias, which many claim stems from the fact that software written and trained by members of one race cannot accurately be used on another. Facial recognition software has had issues confusing black people with gorillas, because coders imprinted their own biases about others’ physicality into the programs.

Combine that with policing laws that disproportionately target vulnerable community members of color, and you have an algorithm just as biased as its programmers, which is now embedding racial policing protocols in the language of scientific objectivity.

“If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” U.S. congressman Elijah Cummings said in 2017.

“That’s a hell of a combination.”

(Shutterstock)

Crime-predicting algorithms also reportedly target poorer neighborhoods with dominant minority populations, since those are the ones already monitored most heavily by police.

These algorithms recommend a heavier police presence, which exacerbates the bias and drives arrest rates up. “The utilitarian bean-counters will clock this up as a success. But the data will continue to feed back into the model, creating a self-perpetuating loop of growing inequality and algorithm-driven racial targeting,” explains Jamie Bartlett, a senior fellow at Demos.
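The loop Bartlett describes can be sketched in a few lines of Python. The numbers below are invented for illustration, not drawn from any real police department: two neighborhoods with identical underlying crime rates, differing only in how heavily they are policed at the start.

```python
# Toy model of a self-perpetuating predictive-policing loop. Both
# neighborhoods have the same true crime rate; only the initial patrol
# allocation differs. All numbers are invented for illustration.

true_rate = {"A": 0.10, "B": 0.10}   # identical by construction
patrols = {"A": 80, "B": 20}         # historical bias: A is over-policed

for year in range(5):
    # Arrests happen only where police are looking, so recorded "crime"
    # tracks patrol intensity, not the underlying rate.
    arrests = {n: patrols[n] * true_rate[n] for n in patrols}
    total = sum(arrests.values())
    # The model allocates next year's patrols by arrest share, feeding
    # its own output back in as input.
    patrols = {n: round(100 * arrests[n] / total) for n in arrests}
    print(f"year {year}: arrests={arrests}, next patrols={patrols}")

# The 80/20 split reproduces itself indefinitely: the model can never
# discover that the two neighborhoods are, in fact, identical.
```

The "prediction" simply launders the original patrol bias back into the data, year after year.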

“We embed our values into algorithms,” data scientist O'Neil concluded. Put simply, an algorithm mechanizes a way to accomplish a goal set by humans; its definition of success is likewise set by humans and reliant on pre-existing social or political conditions. It is a tool, and it works in service of those who make it.

Instead of this dynamic informing how the global community conceptualizes the power of AI, experimental physicist Dan McQuillan observes that “AI has become a hyperobject so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints.”

AI is becoming an inaccessible, transcendent entity that is untouchable, unanalyzable, and ungovernable, sitting in a black box of fake objectivity as it slowly takes over governance, policing and surveillance.

“It's vital we don't allow the AI-evangelists or product managers to dominate this conversation: it can't just be 'oh this is too confusing!' or 'oh the march of AI is inevitable,'” Alex Krasodomski-Jones, director of the Center for the Analysis of Social Media at the London-based Demos, told Al Bawaba.

“Neither of those things are true.”

“Unchecked, AI is likely to augment existing powers, particularly in states with weak democratic safeguards. Increasingly invasive surveillance, for instance, or the use of machine-learning models in law enforcement, are worrying precedents for the use of AI under repressive regimes,” Krasodomski-Jones added.

So What Do AI and Big Data Do Politically?

An emerging consensus among data scientists and tech analysts is that AI and its constituent algorithms reinforce pre-existing political orders with new, automated ways to govern people.

“A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal,” McQuillan writes.

This emerging type of society is particularly evident in places ruled by authoritarian states, where governments have the ability to totalize control over the development, goals and utilization of Big Data tech.

“Authoritarian regimes can easily abuse such new powers and will be tempted to use them to control and suppress dissent,” said Adrian Zenz, a social researcher with the European School of Culture & Theology whose work has helped to expose a dystopian techno-police state in China.

“It can certainly be said that the increase of surveillance technologies and predictive algorithms is likely to make daily life in connection with a securitized state both less transparent and could lead to a more frequent and more automated infringement on private rights, both by governments and companies,” he added.

Examples that can be exploited to enhance a state’s power over its people include predictive policing programs that mass-monitor millions of people and pick out those most likely to commit crimes; hate speech monitoring programs whose parameters can be tweaked to jail dissidents; and algorithms that block information and sites potentially critical of the state.

All of these types of programs are being developed by Chinese companies aligned with the governing party, the CCP.

China’s Social Credit System

(AFP/FILE)

The standout example of precisely how this tech can enhance state power is China’s social credit system, which is currently being perfected throughout the country.

The credit system aims to give every Chinese citizen a standardized score of trustworthiness, and every act recorded in public or in private may have an impact on that score. If you fall below certain thresholds, you are slowly edged out of society: your rent may increase, you may be denied certain forms of public transportation, your passport may be invalidated and your face may be plastered in public as a way to shame you.

“The impact of AI will be mostly felt on the domestic scene with the significant amount of data that is collected - for instance the social credit system that China is putting in place is possible because of the extensive data that the state is able to collect on its citizens and others living within the boundaries,” Dr. Rajeswari Pillai Rajagopalan, a distinguished fellow and head of the Nuclear and Space Policy Initiative told Al Bawaba.

“The facial recognition and social tracking that China has established to a large extent will aid in strengthening policing, law and order mechanisms and counter-terrorism but the very same technologies and capabilities can be put to use in a negative manner which can give way to what is known as digital authoritarianism,” she added.

China’s social credit system is still in its primitive stages, but the residents of one of the system’s pilot towns have already felt its effects. Donghuo Tangzhai in China’s Rongcheng region has a population of about 3,000 people. Zhou Aini, a retired resident, has been hired as an ‘Information Collector’ for the town.

Aini is paid to walk around the town, talk to people on the street about what others are doing and record their lives in a notebook she carries with her everywhere. She writes down who litters in the street and who helps elders throughout the small town.

In this town, everyone starts off with 1,000 points. The deeds recorded in Aini’s notebook are presented to a monitoring officer, who then assigns a positive or negative numerical value to each of those deeds and publicizes the actions on a community signpost, where everyone can see just how good or bad their neighbors have been.
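As a toy illustration of those mechanics (the deeds and point values below are invented; the reporting from the town does not specify them), a Rongcheng-style ledger might look like this:

```python
# Hypothetical sketch of the pilot town's ledger: everyone starts at
# 1,000 points, and a monitoring officer assigns a value to each deed
# from the information collector's notebook. All values are invented.

START_SCORE = 1000

notebook = [
    ("littering on the street", -5),
    ("helping an elder", +5),
    ("co-signing a loan that went unpaid", -50),
]

def current_score(deeds, start=START_SCORE):
    """Sum the officer's point assignments on top of the starting score."""
    return start + sum(points for _, points in deeds)

# The running total is then publicized on a community signpost.
print("published score:", current_score(notebook))   # -> 950
```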

“Now in our community, neighbors get along very well. There are no fights. Not at all. Now life is good. There is no reason to fight,” Wang Fengbo, the appointed director of the town’s credit system, exclaimed to a reporter.

Fengbo shared the story of a resident with a penchant for alcohol who used to physically abuse his wife. “Although this is a family issue, it’s a bad influence,” said Fengbo. “We also deduct the points.”

Zhang Yingjie, another resident, co-signed on a loan with someone who was unable to pay his share, and was subsequently docked social points. Because of his new low score, he was unable to secure any more loans, take high-speed transit around the vast country or buy certain products.

Zhang Yingjie (Vice News)

In order to restore his score to the point where it no longer inhibited his quality of life, he went to a local government office, where he paid money the government insists goes to charity. The workers he gave the money to recorded his deed, and slowly his score went back up. Yingjie explained that he also donated blood and did volunteer work in his spare time to help nurture his score.

The CCP’s experiment with a unified social credit system has already impacted tens of millions of people, who find themselves paying more interest on loans or for utilities, or are unable to travel.

Judging from the stories shared from Donghuo Tangzhai, residents are careful to watch their own behavior even in private, lest their actions be recorded and penalized.

As it is currently designed, the credit system is more or less a worst-case scenario for techno-dystopians, who fear people-power is being ceded to impersonal Big Data algorithms that govern their lives.

But again, that mischaracterizes the system as the agent of the oppression rather than the tool. Doing so may excuse the very human political responsibility the government bears for deploying the program, while helping the social credit system itself congeal into an intangible ‘hyperobject.’

The social credit system isn’t the beginning of a technocracy but an indication that a central government has found a way for its citizens to acquiesce to its system of controls so thoroughly that they now police themselves. It’s a proof-of-concept policy that reveals the use of Big Data and AI as tools that help a state wield power over its people.

The tools may be scary and unfamiliar, but the perpetrators remain the same.

“Artificial intelligence threatens to centralize political decision-making power, disempower citizens, alienate them from participating in democratic decision-making, reduce the power of workers in a workforce and so on,” noted Alex Krasodomski-Jones of Demos.

AI isn't a new harbinger of doom, but another political tool in the age-old quest to consolidate power. Demanding transparency in the codemaking process may be the most effective means to understand its uses and abuses.

According to Krasodomski-Jones, “for us to make the best of AI, we need two things: first, the political will, powers and scrutiny, and likely regulation, required to ensure that AI is being used in a way that is fair and democratic. Second, a commitment by AI developers to ensure that the products they are building are open to this level of scrutiny."

Part Two: What are the political visions AI services?

The geopolitical giants of the 20th century, the U.S. and the Soviet Union, locked themselves into a space race that resembles the current quest for AI dominance between the U.S. and China.

Both states funneled billions into their respective spaceflight programs in the hopes of being the first to develop advanced spaceflight technology. The U.S. eventually ‘won’ by landing the Apollo 11 module on the surface of the Moon.

Now, policy analysts and commentators are comparing the technological developments in AI between the U.S. and China as the next ‘tech race,’ and following benchmarks of progress to determine who is winning. Beating the world’s top chess player was considered a major benchmark for machine learning; another was winning in the ancient Chinese game of 'Go.'

China and the U.S. are further attempting to design AI and machine learning algorithms at breakneck speed while trying to slow each other’s progress, as the U.S. and USSR did in the mid-20th century.

But this type of coverage and framing misses the more fundamental and pressing questions of the AI tech race, ones that were implicitly understood in determining the ‘winner’ of the earlier race to spaceflight.

The point for each state was never simply to ‘win the race’ but to demonstrate that it had the superior structure for organizing society around a state: one that was more efficient, more prosperous and better at engineering progress past certain benchmarks. Getting into space first and landing on the moon were just ways of showing off one side’s perceived societal superiority.

Current coverage of the race to AI dominance, however, hides the more fundamental questions lurking beneath: namely, the political visions driving AI’s development into the future.

What are the socio-political visions of those creating AI and algorithms? What types of societies do they help engender, and how do they affect a person’s relationship to the state and to corporations? Do China and the U.S. have different ideas for how AI ought to be used?

‘Winning’ the AI race may be a vague goal, but the real trophy rests with the government that can successfully use AI to service its strategic priorities.

As AI becomes more advanced and is relied upon as a tool of governance, it is increasingly important to understand the political visions of its coders.

The Cold War between the U.S. and China includes a vitally under-reported difference in the way each government is attempting to utilize AI.

For China, AI is an instrument being used to centralize state power over citizens; to augment authoritarian governments’ hold on power.

The U.S., so far, lacks a central organizing principle for the political use of AI, but is trying to develop one that advances its military edge.

The CCP’s Big Data Dreams


The biggest difference between China’s and the U.S.’ AI ambitions is that China has one and is structuring its technology sector around it. Meanwhile the U.S., with its array of private Silicon Valley interests and Washington policymakers lacking a consensus, has yet to develop a cohesive strategy for AI.

Zhuang Rongwen, the new head of the Cyberspace Administration of China, laid out the country’s hopes for the new AI-powered digital age in a manifesto published in September 2018.

“In the current era, the Internet has already become the main channel for producing, disseminating, and obtaining information. Its ability to mobilize society is ever greater, increasingly becoming a transmitter and amplifier of all kinds of risks,” he writes.

“Whoever masters the Internet holds the initiative of the era,” Rongwen pens, “and whoever does not take the Internet seriously will be cast aside by the times.”

The manifesto decries the lack of control the CCP has over the internet as the central organizing space, and portrays the digital realm’s chaotic lack of governance as having “created major difficulties for the Party ideological departments’ uniform leadership over ideology and theory propagation…”

Totalizing control over cyberspace and the flow of information, Rongwen explains, “has become a major and urgent task set in front of us.”

China’s leader, Xi Jinping, put it more bluntly: “We need to enhance the combination of AI and social governance and develop AI systems for government services and decision-making,” he explained at a Politburo "group study session" regarding AI. Chinese academics and tech workers call this state embrace of AI governance a “military-civil fusion.”

The new manifesto reflects the CCP’s general desire to use AI as a form of digital governance that compels, incentivizes and coerces acquiescence while punishing, surveilling and censoring dissidence.

It’s a continuation of the CCP’s general style of authoritarian governance, only it adds a flashy new tool to accomplish these tasks.

(AFP/FILE)

To harness the technology under the auspices of the CCP, China has organized much of the technology sector around the ruling party’s agenda.

Its biggest technology companies are led by CCP loyalists and party members; state-owned investment firms ensure that any and all promising tech companies that present strategically valuable AI are generously funded and incubated.

Tencent, one of the biggest companies in the world, is responsible for multi-billion-dollar investments in other firms, social media apps, games and internet services, among other products. Its founder, CEO and chairman, Ma Huateng, is a delegate in China’s National People’s Congress. At a conference in Singapore, Huateng suggested that free speech was potentially dangerous, saying: “We are a great supporter of the government in terms of the information security. We try to have a better management and control of the Internet.”

Tencent’s prized all-purpose app, WeChat, reflects this position. WeChat is a credit card, social media app, messaging service, appointment booker, logistical assistant, search engine and mobile calling device all in one application. Because of its ubiquitous functionality, the app helps further China’s so-called Golden Shield of censorship by filtering out messages and images that may contain anti-regime sentiment.
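At its simplest, the kind of filtering attributed to WeChat amounts to checking messages against a blocklist before delivery. The sketch below is a heavily simplified, hypothetical version: the terms are placeholders, and real deployments reportedly also match image hashes and vary filtering by region and chat type.

```python
# Minimal, hypothetical sketch of blocklist-based message filtering.
# The terms are placeholders; real systems are far more elaborate.

BLOCKLIST = {"placeholder-banned-term", "another-banned-term"}

def would_deliver(message: str) -> bool:
    """Return False if any blocklisted term appears in the message."""
    text = message.lower()
    return not any(term in text for term in BLOCKLIST)

for msg in ["hello there", "this contains placeholder-banned-term"]:
    status = "delivered" if would_deliver(msg) else "silently dropped"
    print(f"{msg!r} -> {status}")
```

Notably, reporting on WeChat suggests filtered messages are dropped without notifying the sender, so the censorship itself stays invisible.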

Alibaba, the other major tech company in China, competes with Tencent for profits but services the CCP’s AI goals with as much efficiency. Its founder and chairman, Jack Ma, was recently outed as a member of the CCP.

Alibaba is one of the country’s biggest financiers of emergent tech that relies on AI to bolster predictive policing programs being utilized throughout the country.

In early 2018, Alibaba invested $327 million into Megvii, which helped to develop a face-scanning payment program for Alibaba. Megvii specializes in facial recognition technology, and one of its most advanced products, Face++, is being developed and utilized as part of a multi-pronged police state in northwestern China.

Later, in April 2018, Alibaba led a $620 million fundraising effort for Megvii’s competitor, SenseTime, which also produces state-of-the-art facial recognition tech to be used in monitoring and predictive policing.

State-owned venture capital and investment firms are also pouring billions into tech projects which promise to provide the CCP innovative ways to centralize control over its people and cyberspace.

To add to Megvii’s value, Bank of China Group Investment, a state-owned private equity firm, pledged $200 million to the tech startup.

According to the Reuters report which broke news of the firm’s investment intentions, “Megvii’s latest fundraising comes amid Beijing’s plans to build a ubiquitous closed-circuit television (CCTV) surveillance network and become an international leader in AI, a technology that is increasingly becoming key to various sectors.”

Another state-owned investment body was established in 2016 by the CCP and given a $30 billion budget to invest in tech startups and AI. A Global Risk Insights report on the new fund details that “although referred to as ‘venture capital’ in name, the fund exists first and foremost to serve the needs of the government.”

The overall level of coordination, cooperation and strategic alignment of China’s ruling government and its private firms is striking. A real-time illustration of this joint public-private effort to develop AI towards centralizing control can be found in the province of Xinjiang.

Much has been reported on Xinjiang’s multifaceted, layered police state, but the basics are this: the CCP is building a state-operated, privately supplied surveillance network aimed against the region’s ethnic and religious minorities. Over a million Turkic Muslims have been reportedly sent to detention centers while millions of others have their every movement, conversation and purchase tracked by an exhaustive yet unitary monitoring system.

Maya Wang, a researcher for Human Rights Watch, explained the goal of China's AI-powered police state: “The goal is to mass engineer the identity of the Muslims, which are too different from Hans [the main ethnicity in China] from the state's perspective, so they become loyal, obedient subjects of the CCP. This is done through pervasive surveillance, political indoctrination and control, particularly over their movement, over the Muslims of that region."

Xinjiang is the realized dream of the CCP’s AI ambitions. It’s also serving as a human testing ground to further augment and perfect the tech that’s being tested there.

The U.S.’ Chaotic Assemblage of AI Interests


(AFP/FILE)

The U.S., on the other hand, has comparatively little coordination between its various public and private interests.

While China has the overarching CCP, party-aligned tech giants and investment firms acting in unison towards a single objective, the U.S. has independent Silicon Valley companies and a federal government desperately trying to entice them with lucrative government contracts.

AI’s application in the U.S.’ public sphere is defined by local, conflicting interests rather than nationally developed strategies. For example, some local police departments use AI-driven programs to identify which neighborhoods to patrol or which suspects to charge. AI experts warned that these programs simply reinforce the racial prejudice of their coders and hurt efforts for police accountability. Start-ups are now designing AI to police the police by attempting to analyze their body camera footage. Other startups are developing AI-powered body cameras that enhance their ability to carry out surveillance.

Private U.S. companies, like their Chinese counterparts, have an outsized ability to gather mass amounts of data, but they often process the data internally to further their own agendas or sell the information to other private companies. When the U.S. federal government demands some elements of this information, private pushback is often fierce.

For example, in 2016 the FBI tried to force Apple to unlock the phone of a suspect in a terror attack; Apple refused, and the issue went to court.

Further, to counteract the buildup of Big Data in the hands of powerful U.S. corporations and police, scientists and engineers from MIT are developing advanced cryptographic methods to improve transparency for when and what data is released and ensure accountability for those who improperly gather data.

All of this paints a chaotic picture of AI’s socio-political developments, with its many players positioning themselves against one another and no central authority or blueprint regulating or guiding them.

In May 2018, then-U.S. Defense Secretary Jim Mattis wrote a memo to Trump pleading with him to create a unified framework for harnessing the strategic power of AI.

In the memo, Mattis said the U.S. was lagging behind the progress of Chinese AI development. Quoting a recent Atlantic piece penned by Henry Kissinger, Mattis urged Trump to create a presidential commission on AI that would “ensure the U.S. is a leader not just in matters of defense but in the broader ‘transformation of the human condition’” that AI will cause.

According to cybersecurity experts Al Bawaba spoke with, despite this plea, the U.S. federal government has yet to develop such a blueprint.

“The U.S. still doesn't have a cohesive, national AI strategy, for instance, and policymakers who discuss AI focus too heavily on its military applications,” said Justin Sherman, a Cybersecurity Policy Fellow at New America.

Those who hoped Trump would create one were disappointed by a recent executive order that failed to articulate any concrete steps the U.S. will take to develop a comprehensive AI framework.

While the federal government lags behind, the Department of Defense is attempting to create a military-backed tech incubator of its own. In June 2018, the Pentagon announced the creation of the Joint Artificial Intelligence Center (JAIC), nicknamed ‘Jake,’ which will provide funding and oversight services to firms developing military tech.

The memo that announced JAIC describes the center as one that revolves around ‘National Mission Initiatives,’ the first of which is reportedly Project Maven.

Project Maven is an ongoing program to enhance the Pentagon’s ability to target people in drone strikes. As such, it relies on developing an advanced AI program that can distinguish between people and objects.

Google signed on to build the AI, and reportedly outsourced part of the task to a crowd-working website, which pays people as little as $1 an hour to perform short, menial tasks such as identifying objects in videos and pictures. The workers did not know that the data they were sifting through and labeling was going into a military AI program to be used around the world.

 

U.S.’ armed ‘Grey Eagle’ Drone (U.S. Military)

When Google’s employees learned of their company’s efforts to build an AI to be used in drone strikes, 3,100 of them signed a petition demanding the company end its dealings with the Pentagon, something that could never happen in China.

Google’s executives relented and withdrew from the project, but internal emails acquired by Gizmodo show an executive team who “saw a huge opportunity for growth in the possibility of lucrative business with the Pentagon and projects that could ultimately lead to a cutting-edge AI-powered system capable of surveilling entire cities.”

Despite the controversy, Project Maven’s AI program “has already been deployed to half a dozen locations in the Middle East and Africa, where it is helping military analysts sort through the mountains of data their sensors and drones soak up."

The U.S. may have developed the most complicated and advanced AI systems, capable of conquering increasingly complex problems, but China is putting its systems to brutally efficient socio-political ends that are far more impactful.

To this, Al Bawaba spoke with Dan McQuillan, an experimental physicist and theorist on AI’s applications at Goldsmiths, University of London, and asked him what it means that AI, arguably the most promising technology being developed in the world today, is being honed as a tool to police and surveil populations.

“I think it says we're in serious trouble,” he said bluntly.

Part Three: To embrace or regulate AI’s role in politics

A growing number of artificial intelligence experts are warning that the technology may render democracy obsolete.

Data-driven policing programs are bringing more cops to poor neighborhoods; algorithms are tracking over a billion Chinese citizens in their everyday movements, virtually monitoring borders between countries and determining access to welfare and living expenses.

They are managing information flows, creating penetrative surveillance states while turning phones into listening devices corporations can use to target consumers based on their conversations. They are radicalizing entire populations through source-filtering and are now determining who lives and dies in drone strikes. But advancements are also revolutionizing the practice of medicine, civil engineering and infrastructure development: AI systems are beginning to read medical images better than imperfect human eyes and are creating better-organized infrastructures for cities.

Politically, AI’s development is blurring lines of governance, making some authoritarian regimes even less accountable while expanding the power of corporations over citizens, making the biggest ones more like quasi-governmental bodies without any typical standards of governmental transparency applied to them.

Applications of AI are, quietly and without deliberation, rewriting countries’ social contracts.

As evidence of AI’s power in shaping social and political life mounts, AI developers and governments are struggling to formulate a central strategy around which to define its uses and limits.

Two distinct schools of thought, however, are steadily forming: one that calls for AI to be tightly controlled and regulated, and another that calls for its utter embrace.

The former hopes the type of analogue democracy crafted in the Enlightenment era and idealized ever since can be preserved. The latter seeks the reinvention of democracy into a fully fledged ‘nanocracy,’ or advocates the automation of authoritarianism.

While the E.U. is busy attempting to clarify AI’s place in governance, China is weaponizing its use, signalling a full statist embrace of the technology.

With AI-driven governance tempting more countries throughout the world, time is running out to ensure the technology's ethical use.

Regulating AI to be “Human-Centric”

German Chancellor Angela Merkel speaking at a digital conference in Nuremberg (AFP/FILE)

The breakneck speed of AI’s progress and use in governance has caused some to slow down and ask for its place in society to be clarified and carefully regulated.

The basic contention informing this approach is that the type of democracy and social contract idealized by current democracies in Europe and North America are worth preserving. States have obligations to their citizens to maintain basic levels of transparency, while corporations ought to have little, if any, role in governance. States further have obligations to protect the welfare of their citizens, while citizens give up a certain level of their freedom in order to be governed.

Because AI threatens to destabilize this fragile balance by giving states and corporations non-democratic, opaque governance tools that infringe upon certain agreed upon rights like freedom of speech, freedom to associate or organize and freedom from arbitrary detention or discrimination, its power should be limited.

The central way to control AI is to make sure its progress always meets ethical standards. Some are calling this the ‘ethical algorithm,’ or ethical AI.

The E.U. recently assembled a commission of academics and AI developers to draft a framework defining ethical AI and laying down a foundation for its use.

The assembly wrote, “having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure to follow the road that maximizes the benefits of AI while minimizing its risks.”

“To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology,” the report adds.

The phrase ‘human-centric approach to AI’ immediately stands out. It implicitly articulates the stakes involved in having a non-human-centric approach: the subversion of human well-being, and the careless development of artificial intelligence with no thought to whether it constitutes actual social progress.

The report later defines the human-centric approach to AI as one that “strives to ensure that human values are always the primary consideration, and forces us to keep in mind that the development and use of AI should not be seen as a means in itself, but with the goal of increasing citizens' well-being.”

Interestingly enough, the report does not lay out a rigorous definition of what ‘human values’ are, taking it as a given that human values and their supposed opposite, presumably AI-centric values, are antagonistic towards one another.

Data scientist and author of the book “Weapons of Math Destruction,” Cathy O'Neil, calls for AI developers to create and obey a digital Hippocratic Oath, pledging their programs will do no harm.

O’Neil explained in an interview that taking such an oath is made difficult by the fact that data scientists and other AI developers are assigned to produce the most accurate, optimized algorithm, but have no decision-making power over the fairness or equity of the task to which they are assigned.

She cites a conversation she had with a statistician developing a recidivism risk algorithm for a state prison system. “Do you ever use race as an attribute to determine recidivism risk?” she asked him, to which he responded, “Oh, no. I would never do that.” However, when she asked him if he used zip codes as a way to assess the risk of recidivism, he admitted he does, exposing that the algorithm he builds is likely discriminatory against disproportionately poor, minority-dominated neighborhoods.
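The statistician’s mistake can be demonstrated with a toy dataset. In the sketch below (all records are invented), race is never given to the model, yet because zip code and race are correlated in a segregated city, a “race-blind” risk score still produces racially skewed predictions.

```python
# Toy demonstration of proxy discrimination: race is not a model input,
# but zip code correlates with race, so a zip-code-based risk score
# encodes race anyway. All records are invented for illustration.

records = [
    # (zip_code, race): in a segregated city, the two correlate
    ("10001", "black"), ("10001", "black"), ("10001", "white"),
    ("20002", "white"), ("20002", "white"), ("20002", "black"),
]

def risk_score(zip_code: str) -> int:
    """A 'race-blind' model that flags one zip code as high risk."""
    return 1 if zip_code == "10001" else 0

by_race = {}
for zip_code, race in records:
    by_race.setdefault(race, []).append(risk_score(zip_code))

for race, scores in by_race.items():
    print(f"{race}: mean predicted risk = {sum(scores) / len(scores):.2f}")
# black residents average 0.67, white residents 0.33, with race unused.
```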

“I think the biggest thrust of a Hippocratic oath would be to realise that we have the ability and the potential to have an enormous amount of influence on society but without the wisdom to understand the true impact of this influence,” O’Neil explains.

To answer the lingering question of the acquisition of massive amounts of data, often without the consumer knowing, some are proposing that consumers get paid for the data they share. In February 2019, California Governor Gavin Newsom argued the case for "a new data dividend" law, which would require corporations to pay users for the data they are willing to share.

"California's consumers should also be able to share in the wealth that is created from their data," Newson said. In theory, this regulation would help to create a two-way street for data sharing, revealing exactly what data is or isn’t being vacuumed up by data-seeking corporations like Facebook, Google and Amazon.

 

How Do You Regulate Artificial Intelligence?

(AFP/FILE)

To have a world with ‘ethical AI’ sounds safe enough, but creating the regulations necessary to achieve that end is an incredibly complex task that may stifle the promises of data-driven technology entirely.

Stanford University convened a panel of AI experts as part of a “One Hundred Year Study on Artificial Intelligence,” who all concluded that regulation wasn’t the answer. “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains,” they wrote collectively.

The panel further warned against regulations that stymie innovation, the cornerstone of AI’s rapid development. Indeed, advanced AI systems promise revolutions in the fields of medicine and infrastructure. Justin Sherman, a Cybersecurity Policy Fellow at New America, a think tank focused on security, tells Al Bawaba that AI promises “massive gains in economic power and the quality of public health by developing, say, better cancer-detecting neural nets.”

Already, AI is being used to detect kidney disease, among other diagnoses, and to read medical images. So-called ‘Smart Cities’ designed by AI-driven algorithms also promise more efficient means of civil administration that save time and money.

The progress of these developments may be slowed down by legislative bodies that are designed to make sure no algorithms infringe upon people’s rights.

More basically, on the Stanford panel’s first point: there is no actionable definition of AI with which regulations could be written. A broad definition of AI may encourage governments to over-impose and mandate unwieldy, unenforceable regulations on programming in general.

To this, the Stanford panel encourages a different approach to governing AI: “regulators can strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance,” they argue.

“Policies should be evaluated as to whether they democratically foster the development and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few,” they continue.

Put simply, Ethical AI is best served not by instituting hard-line rules and demanding compliance by AI developers, but by encouraging transparency.

While this may not stop the shift in a social contract away from a traditional democracy, it will at least explicate the new moving parts, and publicly show the new relationship people have to states using AI to govern them and corporations creating mountains of user-generated data.

 

Embracing AI: The Nanocracy and the Digital Authoritarian

Others welcome the socio-political revolution AI promises to bring.

They consider relegating the technology to a mere instrument of current political systems to be a squandering of AI’s transformative power.

Within those who emphasize regulation less and the transformative power of AI more, there are two camps: those who claim AI can revolutionize democracy, and those who use AI to centralize state power.

Jamie Susskind, author of the book “Future Politics,” is among those who believe fully integrating data-driven AI into democracies can save them from obsolescence.

“Increasingly, digital technology is eroding the assumptions and conditions that have underpinned democracy for centuries,” he writes in his book.

“But in the future, we’ll have to grapple with the much more significant idea of AI Democracy, asking which decisions could and should be taken by powerful digital systems, and whether such systems might better represent the people than the politicians we send to Congress.”

Susskind argues that democracy’s core function is “to unleash the information and knowledge contained in people’s minds and put it to political use,” and claims that current democracies’ emphasis on periodic voting and elections doesn’t actually achieve that result very well.

 

U.S. voters cast their ballots in the 2018 midterms (AFP/FILE)

“A vote on a small number of questions–usually which party or candidate to support–produces only a small number of data points. Put in the context of an increasingly quantified society, the amount of information generated by the democratic process–even when private polling is taken into account–is laughably small,” he explains.

Rather than slow the collection of data, as the ‘Ethical AI’ crew might advocate, Susskind says voters and consumers ought to feed more data to administrative bodies. These bodies would then be tasked with reading real-time measurements of political sentiment and with having AI systems cast votes, or ‘nano-ballots,’ thousands of times a day “without having to disturb us at all.”

To acquire the political imagination necessary to comprehend this profound shift, Susskind emphasizes that humans already give AI systems outsized power to determine what they are exposed to, what kind of news articles they read, what kind of information they give of themselves and receive about the world and the people around them. Is it really that much of a jump for AI systems to take an extra step and take information about us, and rework it as a ballot?
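As a thought experiment only (Susskind describes a concept, not an implementation; every issue name, score and rule below is hypothetical), a nano-ballot agent might reduce to something like this:

```python
# Purely speculative sketch of Susskind's 'nano-ballot' idea: an agent
# converts inferred preferences into many tiny votes, without asking the
# citizen. Issues, scores and the voting rule are all hypothetical.

inferred_sentiment = {
    "expand_bus_routes": 0.8,    # strong inferred support
    "new_parking_levy": -0.4,    # inferred opposition
    "rename_city_hall": 0.05,    # too weak a signal to act on
}

def cast_nano_ballots(sentiment, threshold=0.2):
    """Vote 'for' or 'against' on every issue where the signal is clear."""
    return {
        issue: ("for" if score > 0 else "against")
        for issue, score in sentiment.items()
        if abs(score) >= threshold
    }

print(cast_nano_ballots(inferred_sentiment))
# -> {'expand_bus_routes': 'for', 'new_parking_levy': 'against'}
```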

The idea is provocative and radical, but feels far away from being implemented or taken seriously as a counter-framework to the ‘human-centric approach’ the EU is advancing.

More pragmatically, some private AI companies are advertising systems that can augment people-power using newly developed AI. For instance, Avantgard promises to “supercharge social movements and political campaigns” by identifying key opportunities to grow a movement’s size and sway.

The scope of this endeavour, however, will be limited to those who can pay to gain access to the private tech in the first place: many grassroots movements would likely be left out of the digital revolution.

A much more immediate all-out embrace of AI in governance is happening right now: that of governments currently integrating AI as a tool to automate authoritarian policies.

China, as Al Bawaba has extensively documented, is the biggest advocate for this approach, and is currently looking to export the AI systems it has perfected to other aspiring authoritarian states. And since a technology’s aims are limited to those of its creator, China isn’t embracing AI designed inside democratic countries: it has periodically or permanently blocked AI-driven apps like Twitter and Facebook and companies like Google.

“While China itself has been in the process of making itself a police state with extensive snooping and surveillance carried out on its population, the dangers of China exporting such technologies and practices to other states are far greater,” Rajeswari Pillai Rajagopalan, who heads the Nuclear and Space Policy Initiative, told Al Bawaba.

“Such export and the net result of creating insecure states and societies especially in regions and countries with poor rule of law, weak human rights track record, can be a slippery slope, slowly eroding democratic practices such as freedom of expression, free and open media and so on.”

Already, Chinese companies are selling AI systems designed to surveil citizens en masse to countries like Zimbabwe, Venezuela, Malaysia and Germany.

In Venezuela, Chinese telecom giant ZTE is integrating surveillance technology into the country’s ‘fatherland card’, which is part ID, part credit card. Infusing the card with a ‘social credit score’ similar to the one being piloted in China would tie Venezuelans’ access to vital social welfare, upon which many depend for their survival, to how high their score is.

Analysts immediately pointed out how this would be used to punish dissident protesters and allow loyalists disproportionate access to goods and services.

 

A Venezuelan woman holds up her so-called ‘fatherland’ card (AFP/FILE)

By tying together a nightmarish mix of AI-powered policing and access to social welfare, Venezuela is adopting a social contract that more or less goes like this: you are considered a full citizen if your habits, thinking and behavior do not threaten the state. If your beliefs or actions run against the state’s adopted ideology, you are liable to be slowly removed from society and state protections.

Your personhood is then decided by the ideology of the coders working for the state.

The Zimbabwean government, too, has been working to saturate the country with a network of Chinese-made CCTV cameras outfitted with state-of-the-art facial recognition technology, a network that would track the movements of all the country’s inhabitants.

What these examples illustrate is that AI is a tool for states to accomplish the agendas they have already set out for themselves. The E.U.’s pledge to rein in the unfettered power of AI, in this light, is not so much a radical proposal as a continuation of the intergovernmental organization’s dedication to ideals designed in the pre-AI age. Likewise, China’s weaponized use of AI reflects its own desire to totalize control over its citizens.

As advances in AI remain steady, and the technology becomes an ever-more potent tool in governance, time is running out for countries to clarify its role in their respective societies.

States and corporations looking to reap the technology’s benefits are quietly harnessing AI for non-democratic means. In the absence of conscious deliberation as to the social role of AI, it is steadily being naturalized as a means to augment the power of the already-powerful, and cement the place of the powerless.

