Facebook is considering adding facial recognition to its smart glasses set to launch this year, if it can legally do so. Andrew Bosworth, Facebook’s vice president of augmented and virtual reality, announced the proposal at a company-wide meeting last week.
The potential move comes amidst global debate about privacy in the age of big tech.
Facial recognition technology in particular is often deployed in the name of “safety”, but it has been linked to oppressive and genocidal practices, racist policing, and a range of everyday security risks.
Facebook itself faces broad criticism and skepticism over its privacy practices. In 2019, the Federal Trade Commission fined Facebook a record $5 billion to settle a data privacy probe linked to the Cambridge Analytica scandal, only one episode in the company’s long and troubled privacy record.
“The real question is whether we will be able to recognize any faces at all, and we don’t know. Legally, the answer might be no, if you’re familiar with BIPA [Biometric Information Privacy Act] in Illinois ... people are making face recognition illegal,” said Bosworth, according to a BuzzFeed News report. His comments suggested that Facebook’s primary concern was the legality of such software rather than its ethical and privacy implications, though the company’s chief diversity officer said Facebook would also have to consider the potential for discrimination and other harmful outcomes.
“Face recognition is a hugely controversial topic and for good reason and I was speaking about how we are going to have to have a very public discussion about the pros and cons,” he said.
So what are some of the debates, and what are their implications in an increasingly digitised world?
What are the concerns?
The main concerns include racist and discriminatory policing; the potential for state abuse; stalking; and broader safety, civil liberties, and privacy risks.
Wearable facial recognition technology for everyday use can hand a great deal of power to abusers or anyone with nefarious intentions, since it lets a wearer identify, track, and learn about someone from their face alone. This paves the way for stalking and endangers women and children in particular.
One of the most extreme examples can be found in China, where facial recognition technology is used to surveil society writ large: every citizen is logged in the state system, which is used to track and monitor their activities and behaviour. The Muslim Uighur population is targeted in particular, and the technology, along with other tools, is used as part of a mass campaign against the minority group, a campaign that has been called a genocide by human rights organisations and by some states, including Canada.
In the United States, facial recognition has been used by law enforcement, but it is rife with controversy, not only over privacy concerns but also because of racist and discriminatory algorithms and deployment.
There is also the matter of storing and securing all the biometric data. Facebook, along with other tech companies, has a terrible track record on privacy, with new scandals seemingly every day over security vulnerabilities, sloppy data handling, and cover-ups of breaches. All of this raises additional ethical questions, particularly around privacy, theft, and exploitation.
So, are there any benefits?
Yes, some, but the insufficient regulation of the technology can open the way for human rights violations, including violations of the rights to privacy and non-discrimination.
For instance, facial recognition technology can help those with prosopagnosia, or face blindness, a rare neurological condition that prevents people from recognising faces. Bosworth also noted that it can help in potentially awkward social situations where you may forget a colleague’s name at an event.
Government agencies underline its potential usefulness in law enforcement, for helping identify criminals, and for airport security. In 2018, facial recognition technology was used to help
identify the stalkers of international pop star Taylor Swift.
Some activists are also using the technology to identify law enforcement officials in cases of violence or misconduct.
What protections do you have?
The technology is unregulated for the most part, but some jurisdictions have placed limits on its use.
In the US, some cities like Boston and San Francisco have banned its use in local law enforcement, and the state of Illinois has one of the strictest privacy laws in the country.
The European Union has the toughest data protection laws in the world, and the European Parliament this year called for a new legal framework covering artificial intelligence and biometric surveillance applications such as facial recognition, including a call for a moratorium on their use
until “the technical standards can be considered fully fundamental rights-compliant, the results derived are non-biased and non-discriminatory, and there are strict safeguards against misuse that ensure the necessity and proportionality of using such technologies.”
Rights organisation Amnesty International has called for a ban on the technology through its Ban the Scan campaign, which launched in New York and will expand to other parts of the world later this year.
“Facial recognition risks being weaponized by law enforcement against marginalized communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights,” said Matt Mahmoudi, AI and Human Rights Researcher at Amnesty International.
This article has been adapted from its original source.