Don’t want your face to appear in Clearview AI’s database?
The company’s CEO doesn’t seem to care.
“All the information we collect is collected legally and it is all publicly available information,” Hoan Ton-That said Monday during DW’s Global Media Forum (GMF), addressing criticism that the firm’s controversial technology infringes on the privacy of hundreds of millions of people.
Privacy activists recently lodged data protection complaints against Clearview AI in five European countries. They argue that the software — a search engine for faces combing through billions of photos — violates the UK’s and the EU’s strict privacy rules.
The controversy highlights how, as artificial intelligence technology matures, it could give rise to surveillance on an unprecedented scale.
Amos Toh, a senior researcher at NGO Human Rights Watch, warned during the GMF that governments and companies increasingly deploy facial recognition to spy on their citizens and customers. The technology uses AI to identify individuals in images.
“We have seen facial recognition being used in Russia to detain peaceful protesters, we have seen facial recognition being used on children in Argentina,” Toh told the conference, which was held mostly online due to the pandemic.
He added that facial recognition technology is often inaccurate and prone to discriminate against minorities.
“And even if it is accurate, there is also immense potential for … human rights abuses,” he said.
How Clearview AI surveillance works
Hundreds of companies around the world are working on facial recognition software. Analysts estimate that the global market was worth around $10 billion (€8.25 billion) last year.
And yet no other firm has sparked as much backlash as Clearview AI.
The firm’s technology is built on a biometric database of billions of photos scraped from websites including Facebook, Twitter, and Instagram.
When a paying customer uploads a photo, the program returns every other image it has of that person, along with information about who he or she likely is.
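Clearview AI has not published its internals, but face search engines of this kind typically convert each photo into a numeric "embedding" vector and return the stored photos whose vectors sit closest to the query's. The sketch below illustrates that general idea only, using made-up random vectors in place of real face embeddings; none of the names or numbers come from Clearview AI.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, database, top_k=3):
    """Rank stored photos by similarity to the query embedding."""
    scored = [(name, cosine_similarity(query_vec, vec))
              for name, vec in database.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy database: photo name -> hypothetical 128-dimensional embedding.
# Real systems would derive these vectors from a face-recognition model.
rng = np.random.default_rng(0)
db = {f"photo_{i}": rng.normal(size=128) for i in range(5)}

# A query photo of the same person would yield a nearby vector,
# so the matching stored photo should rank first.
query = db["photo_2"] + rng.normal(scale=0.05, size=128)
results = search(query, db)
```

At the scale reported for Clearview AI (billions of photos), this brute-force comparison would be replaced by an approximate nearest-neighbor index, but the principle is the same.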
Amos Toh of Human Rights Watch says that facial recognition technology can lead to human rights abuses
Law enforcement agencies have defended using the software as a tool to identify victims and perpetrators of child abuse or to fight terrorism. US police have deployed it, for instance, to identify some of the rioters involved in the storming of the US Capitol on January 6.
CEO Ton-That said during the GMF that his company, which has reportedly shut down all the accounts it had with private companies, “only sells to law enforcement at this time.”
He added that “thousands and thousands of crimes have been solved that otherwise wouldn’t have been.”
But Human Rights Watch’s Amos Toh shot back that nonetheless, “the potential for abuse in many cases outweighs any purported benefits.”
“We know that facial recognition certainly has been very effective in facilitating human rights abuses here [in the US] and abroad,” he told the conference.
Enter the privacy watchdogs
Since media reports about Clearview AI’s software first emerged in early 2020, regulators from Australia to the US have taken a closer look at the company. Last week, Canada’s privacy commissioner ruled that its use by the country’s police was a serious violation of privacy laws.
But nowhere has the backlash been as vocal as in Europe — a region that often prides itself as a global leader in data protection.
Last summer, the European Union’s privacy watchdog said that the use of Clearview AI’s software by police would, “as it stands, likely not be consistent with the EU data protection regime.”
A German regional data protection authority ruled in January 2021 that the company had to delete the biometric profile of a privacy researcher, who had filed a complaint. And in February, Sweden’s privacy watchdog said that the country’s police had violated the law when it used the software.
But none of those decisions have set sufficient precedent to ban the software’s use across the entire bloc, said Ioannis Kouvakas, a legal officer at NGO Privacy International.
That’s why in May, the organization, together with four other NGOs, filed additional complaints with data watchdogs in Austria, France, Greece, Italy, and the United Kingdom.
They argue that information about European residents in Clearview AI’s database was collected without their knowledge or consent.
The five cases are pending; some decisions could come within the next few weeks, according to the activists.