To Embrace or to Regulate: How Will the World Adapt to AI?

Published February 14th, 2019 - 10:08 GMT
A facial recognition system used by police in China is showcased (AFP/FILE)

By Ty Joplin

A growing number of artificial intelligence experts are warning that the technology may render democracy obsolete.

Data-driven policing is bringing more cops into poor neighborhoods; AI systems are tracking over a billion Chinese citizens in their everyday movements, virtually monitoring borders between countries and determining access to welfare and living expenses.

They are managing information flows, creating pervasive surveillance states and turning phones into listening devices that corporations can use to target consumers based on their conversations. They are radicalizing entire populations through source-filtering and are now helping determine who lives and dies in drone strikes. But these advancements are also revolutionizing the practice of medicine, civil engineering and infrastructure development: AI systems are beginning to read medical images better than imperfect human eyes and are designing better-organized infrastructure for cities.

AI has been anointed as the way of the future, for better or for worse.

Politically, AI’s development is blurring lines of governance: it makes some authoritarian regimes even less accountable while expanding the power of corporations over citizens, turning the biggest firms into quasi-governmental bodies to which none of the typical standards of governmental transparency apply.

Applications of AI are, quietly and without deliberation, rewriting countries’ social contracts.

As evidence of AI’s power in shaping social and political life mounts, AI developers and governments are struggling to formulate a central strategy that defines its uses and limits.

Two distinct schools of thought, however, are steadily forming: one that calls for AI to be tightly controlled and regulated, and another that calls for its utter embrace.

The former hopes the type of analogue democracy crafted in the Enlightenment era and idealized ever since can be preserved. The latter seeks the reinvention of democracy into a fully fledged ‘nanocracy,’ or advocates the automation of authoritarianism.

While the E.U. is busy attempting to clarify AI’s place in governance, China is weaponizing its use, signalling a full statist embrace of the technology.

With AI-driven governance tempting more countries throughout the world, time is running out to ensure the technology's ethical use.

Regulating AI to be “Human-Centric”

German Chancellor Angela Merkel speaking at a digital conference in Nuremberg (AFP/FILE)

The breakneck speed of AI’s progress and use in governance has caused some to slow down and ask for its place in society to be clarified and carefully regulated.

The basic contention informing this approach is that the type of democracy and social contract idealized by current democracies in Europe and North America are worth preserving. States have obligations to their citizens to maintain basic levels of transparency, while corporations ought to have little, if any, role in governance. States further have obligations to protect the welfare of their citizens, while citizens give up a certain level of their freedom in order to be governed.

Because AI threatens to destabilize this fragile balance, giving states and corporations non-democratic, opaque governance tools that infringe upon agreed-upon rights like freedom of speech, freedom to associate or organize, and freedom from arbitrary detention or discrimination, its power should be limited.

The central way to control AI is to make sure its progress always meets ethical standards. Some are calling this the ‘ethical algorithm,’ or ethical AI.

The E.U. recently assembled a commission of academics and AI developers to draft a framework defining ethical AI and laying a foundation for its use.

The assembly wrote, “having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure to follow the road that maximises the benefits of AI while minimising its risks.”

“To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology,” the report adds.

The phrase ‘human-centric approach to AI’ immediately stands out. It implicitly articulates the stakes of the alternative: the subversion of human well-being, and the careless development of artificial intelligence with no thought to whether it constitutes actual social progress.

The report later defines the human-centric approach to AI as one that “strives to ensure that human values are always the primary consideration, and forces us to keep in mind that the development and use of AI should not be seen as a means in itself, but with the goal of increasing citizen's well-being.”

Interestingly enough, the report does not lay out a rigorous definition of what ‘human values’ are, taking it as a given that human values and their supposed opposite, presumably AI-centric values, are antagonistic towards one another.

Cathy O'Neil, data scientist and author of the book “Weapons of Math Destruction,” has called for AI developers to create and obey a digital Hippocratic Oath, pledging that their programs will do no harm.

O’Neil explained in an interview that taking such an oath is made difficult by the fact that data scientists and other AI developers are tasked with producing the most accurate, optimized algorithm possible, but have no decision-making power over the fairness or equity of the task itself.

She cites a conversation she had with a statistician developing a recidivism risk algorithm for a state prison system. “Do you ever use race as an attribute to determine recidivism risk?” she asked him, to which he responded, “Oh, no. I would never do that.” However, when she asked whether he used zip codes to assess recidivism risk, he admitted that he did, exposing that his algorithm is likely discriminatory against disproportionately poor, minority-dominated neighborhoods.
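
To see how such a proxy works, consider the minimal sketch below. Everything in it is invented for illustration: the zip codes, the uniform 30-percent reoffense rate and the unequal arrest probabilities. The point is only that a model trained on arrest records and neighborhood can reproduce a policing disparity without ever seeing a race attribute.

```python
# Minimal sketch of proxy discrimination: synthetic data, invented numbers.
import random

random.seed(0)

# Hypothetical city: zip 10001 is a heavily policed neighborhood,
# zip 10002 a lightly policed one. True reoffense rates are identical.
def draw_record(zip_code):
    reoffended = random.random() < 0.30                # same true rate everywhere
    arrest_prob = 0.9 if zip_code == "10001" else 0.4  # unequal policing
    recorded = reoffended and random.random() < arrest_prob
    return zip_code, recorded

records = [draw_record(random.choice(["10001", "10002"])) for _ in range(100_000)]

# The simplest possible "model": predicted risk = observed arrest rate per zip.
for zc in ("10001", "10002"):
    outcomes = [recorded for z, recorded in records if z == zc]
    print(zc, f"predicted risk: {sum(outcomes) / len(outcomes):.2f}")

# Prints roughly 0.27 for 10001 and 0.12 for 10002: the zip code has
# silently encoded the policing disparity, not any difference in behavior.
```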

“I think the biggest thrust of a Hippocratic oath would be to realise that we have the ability and the potential to have an enormous amount of influence on society but without the wisdom to understand the true impact of this influence,” O’Neil explains.

To address the lingering question of how massive amounts of data are acquired, often without the consumer knowing, some are proposing that consumers get paid for the data they share. In February 2019, California Governor Gavin Newsom argued the case for a new "data dividend" law, which would require corporations to pay users for the data they are willing to share.

"California's consumers should also be able to share in the wealth that is created from their data," Newsom said. In theory, this regulation would help create a two-way street for data sharing, revealing exactly what data is or isn’t being vacuumed up by data-seeking corporations like Facebook, Google and Amazon.

How Do You Regulate Artificial Intelligence?

A Chinese police officer shows off sunglasses outfitted with facial recognition technology (AFP/FILE)

To have a world with ‘ethical AI’ sounds safe enough, but creating the regulations necessary to achieve that end is an incredibly complex task that may stifle the promises of data-driven technology entirely.

Stanford University convened a panel of AI experts as part of its “One Hundred Year Study on Artificial Intelligence,” and the panel concluded that regulation wasn’t the answer. “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains,” they wrote collectively.

The panel further warned against regulations that stymie innovation, the cornerstone of AI’s rapid development. Indeed, advanced AI systems promise revolutions in the fields of medicine and infrastructure. Justin Sherman, a Cybersecurity Policy Fellow at New America, a think tank focused on security, tells Al Bawaba that AI promises “massive gains in economic power and the quality of public health by developing, say, better cancer-detecting neural nets.”

Already, AI is being used in medical imaging and to detect kidney disease, among other diagnoses. So-called ‘Smart Cities’ designed by AI-driven algorithms also promise more efficient means of civil administration that save time and money.

The progress of these developments may be slowed by legislative bodies designed to make sure no algorithm infringes upon people’s rights.

More basically, on the Stanford panel’s first point: there is no actionable definition of AI with which regulations could be written. A definition broad enough to cover the whole field might encourage governments to impose unwieldy, unenforceable regulations on programming in general.

To this, the Stanford panel encourages a different approach to governing AI: “regulators can strengthen a virtuous cycle of activity involving internal and external accountability, transparency, and professionalization, rather than narrow compliance,” they argue.

“Policies should be evaluated as to whether they democratically foster the development and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few,” they continue.

Put simply, ethical AI is best served not by instituting hard-line rules and demanding compliance from AI developers, but by encouraging transparency.

While this may not stop the social contract’s shift away from traditional democracy, it will at least lay bare the new moving parts, publicly showing the new relationship people have to the states using AI to govern them and the corporations amassing mountains of user-generated data.

Embracing AI: The Nanocracy and the Digital Authoritarian

China is weaponizing AI as a tool of authoritarian governance (AFP/FILE)

Others welcome the socio-political revolution AI promises to bring.

To them, relegating the technology to a mere instrument of current political systems squanders AI’s transformative power.

Among those who emphasize regulation less and AI’s transformative power more, there are two camps: those who claim AI’s promotion can revolutionize democracy, and those who use AI to centralize state power.

Jamie Susskind, author of the book “Future Politics,” is among those who believe fully integrating data-driven AI into democracies can save them from obsolescence.

“Increasingly, digital technology is eroding the assumptions and conditions that have underpinned democracy for centuries,” he writes in his book.

“But in the future, we’ll have to grapple with the much more significant idea of AI Democracy, asking which decisions could and should be taken by powerful digital systems, and whether such systems might better represent the people than the politicians we send to Congress.”

Susskind argues that democracy’s core function is “to unleash the information and knowledge contained in people’s minds and put it to political use,” and claims that current democracies’ emphasis on periodic voting and elections doesn’t actually achieve that result very well.

U.S. voters cast their ballots in the 2018 midterms (AFP/FILE)

“A vote on a small number of questions–usually which party or candidate to support–produces only a small number of data points. Put in the context of an increasingly quantified society, the amount of information generated by the democratic process–even when private polling is taken into account–is laughably small,” he explains.

Rather than slow the collection of data, as the ‘Ethical AI’ camp might advocate, Susskind says voters and consumers ought to feed more data to administrative bodies. These bodies would then be tasked with reading real-time measurements of political sentiment and having AI systems cast votes, or ‘nano-ballots,’ thousands of times a day “without having to disturb us at all.”
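
Susskind offers no implementation, but the mechanics are easy to sketch. Everything in the snippet below is hypothetical: the signal format, the quorum threshold and the reduction of sentiment to a yes/no micro-vote are all invented for the example.

```python
# Purely illustrative sketch of 'nano-ballots'; signal format, quorum
# and the yes/no reduction are all invented for the example.
from dataclasses import dataclass

@dataclass
class SentimentSignal:
    citizen_id: int
    issue: str      # e.g. "transit_funding"
    stance: float   # -1.0 (oppose) to +1.0 (support), inferred from behavior

def cast_nano_ballots(signals, quorum=1000):
    """Aggregate passively collected sentiment into per-issue micro-votes."""
    by_issue = {}
    for s in signals:
        by_issue.setdefault(s.issue, []).append(s.stance)
    ballots = {}
    for issue, stances in by_issue.items():
        if len(stances) >= quorum:  # only act on well-sampled issues
            mean = sum(stances) / len(stances)
            ballots[issue] = "yes" if mean > 0 else "no"
    return ballots
```

The catch sits in the stance field: it must be inferred from behavioral data, which is precisely the opaque step the ethical-AI camp objects to.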

To acquire the political imagination necessary to comprehend this profound shift, Susskind emphasizes that humans already give AI systems outsized power to determine what they are exposed to: what news articles they read, what information they give out about themselves and receive about the world and the people around them. Is it really that much of a jump for AI systems to take an extra step, taking information about us and reworking it into a ballot?

The idea is provocative and radical, but feels far from being implemented or taken seriously as a counter-framework to the ‘human-centric approach’ the E.U. is advancing.

More pragmatically, some private AI companies are advertising systems that can augment people-power using newly developed AI. For instance, Avantgard promises to “supercharge social movements and political campaigns” by identifying key opportunities to grow a movement’s size and sway.

The scope of this endeavour, however, will be limited to those who can pay for access to the private tech in the first place: many grassroots movements would likely be left out of the digital revolution.

A much more immediate, all-out embrace of AI in governance is already underway: governments are integrating AI as a tool to automate authoritarian policies.

China, as Al Bawaba has extensively documented, is the biggest advocate for this approach, and is currently looking to export the AI systems it has perfected to other aspiring authoritarian states. Since the technology serves the aims of its creator, China isn’t embracing AI designed inside democratic countries: it has periodically or permanently blocked AI-driven apps like Twitter and Facebook and companies like Google.

“While China itself has been in the process of making itself a police state with extensive snooping and surveillance carried out on its population, the dangers of China exporting such technologies and practices to other states are far greater,” Rajeswari Pillai Rajagopalan, who heads the Nuclear and Space Policy Initiative, told Al Bawaba.

“Such export and the net result of creating insecure states and societies especially in regions and countries with poor rule of law, weak human rights track record, can be a slippery slope, slowly eroding democratic practices such as freedom of expression, free and open media and so on.”

Already, Chinese companies are selling AI systems designed to surveil citizens en masse to countries like Zimbabwe, Venezuela, Malaysia and Germany.

In Venezuela, Chinese telecom giant ZTE is integrating surveillance technology into the country’s ‘fatherland card’, which is part ID, part credit card. Infusing the card with a ‘social credit score’ similar to the one being piloted in China would tie Venezuelans’ access to vital social welfare, upon which many depend for their survival, to how high their score is.
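
The card’s internals are not public, but the gating logic such a score implies is simple to sketch; the thresholds, benefit names and score scale below are entirely hypothetical.

```python
# Hypothetical sketch of score-gated welfare: all thresholds and
# benefit names below are invented, not taken from the actual system.
def welfare_access(social_credit_score):
    """Map a citizen's score to the benefits their card unlocks."""
    if social_credit_score >= 500:
        return ["food_subsidy", "fuel_subsidy", "medicine"]  # full access
    if social_credit_score >= 300:
        return ["food_subsidy"]  # partial access for middling scores
    return []  # low scorers are effectively cut off from state support
```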

Analysts immediately pointed out how this would be used to punish dissident protesters and allow loyalists disproportionate access to goods and services.

A Venezuelan woman holds up her so-called ‘fatherland’ card (AFP/FILE)

By tying together a nightmarish mix of AI-powered policing and access to social welfare, Venezuela is adopting a social contract that goes more or less like this: you are considered a full citizen if your habits, thoughts and behavior do not threaten the state. If your beliefs or actions run against the state’s adopted ideology, you are liable to be slowly removed from society and state protections.

Your personhood is then decided by the ideology of the coders working for the state.

The Zimbabwean government, too, has been working to saturate the country with a network of Chinese-made CCTV cameras outfitted with state-of-the-art facial recognition technology that would track the movements of all its inhabitants.

What these examples illustrate is that AI is a tool for states to accomplish the agendas they have already set for themselves. The E.U.’s pledge to rein in the unfettered power of AI, in this light, is not so much a radical proposal as a continuation of the organization’s dedication to ideals designed in the pre-AI age. Likewise, China’s weaponized use of AI reflects its own desire to totalize control over its citizens.

As advances in AI remain steady, and the technology becomes an ever-more potent tool in governance, time is running out for countries to clarify its role in their respective societies.

States and corporations looking to reap the technology’s benefits are quietly harnessing AI for non-democratic means. In the absence of conscious deliberation as to the social role of AI, they are slowly naturalizing its place as a means to augment the power of the already-powerful, and cement the place of the powerless.


© 2000 - 2019 Al Bawaba (www.albawaba.com)
