Why the Clearview AI affair likely won't spur meaningful change
An obscure AI vendor sparked controversy for scraping social media for data to power a facial recognition platform that it sells to law enforcement agencies around the world.
Clearview AI, a secretive startup, has drawn criticism for creating and selling a facial recognition platform to law enforcement agencies, populated with data about millions of people.
But even amid controversy over the small vendor's methods and technology -- which social media giants have rejected as intrusive and dangerous -- the affair centering on Clearview AI is unlikely to spur significant regulation of facial recognition technology.
The privately held company, founded in 2017 and based in New York, is doing what many other technology companies have declined to do for years -- scraping social media platforms to create a database of more than three billion images of ordinary people, then using that database to power a potent facial recognition tool.
The platform, already being used by hundreds of local and federal law enforcement agencies in the U.S., can supposedly draw on its extensive database to identify people from just a single image.
The New York Times last month broke the story on Clearview AI, which had previously operated in relative secrecy. Since then, critics have blasted the company for allegedly violating the privacy of countless people, even as law enforcement officials have said the facial recognition platform is a powerful weapon against crime because it can identify suspects faster than ever before.
Public unlikely to press for change
Still, said Alan Pelz-Sharpe, founder of market advisory and research firm Deep Analysis, while Clearview AI's practices should be a warning signal to the public, he doubts people will react strongly to it.
"Many people already accept that they have [to] give up a right to privacy and ownership of their data," he said.
Since the New York Times' story, the state of New Jersey has barred its police officers from using Clearview AI. Social media and tech giants including Facebook, Twitter, Google, LinkedIn and Venmo have also sent cease-and-desist letters demanding that Clearview stop scraping their platforms.
Over the last few years, several cities across the U.S., including Somerville, Mass., and San Francisco, had already banned their police forces from using facial recognition technology in general.
Despite the outcry, the legality of Clearview AI's practices remains a murky area. And most law enforcement use of facial recognition technology has been relatively uncontroversial.
Some vendors have come under attack, however, over alleged privacy violations and bias, including Amazon with its Rekognition system.
But far-reaching regulation of how police agencies use facial recognition may not happen anytime soon.
"As of now, a policy is little more than deciding whether or not to use such technology at a local level," Pelz-Sharpe said. "At a federal level I don't see much happening."
The Clearview AI controversy "falls into the same chasm that many criminal justice issues do -- only wrongdoers should worry about it, 'so why should I worry?'" he said. "Of course, the problem here is much wider as this kind of technology affects everyone and can be used by anyone for whatever purpose they wish."
A Clearview AI competitor's perspective
Jon Gacek, head of government, legal, and compliance at Veritone, an AI vendor that sells a competing facial recognition platform, said too much regulation would be a mistake.
"It's unfortunate that the Clearview article has created such a firestorm," he said.
Any regulation of facial recognition by law enforcement needs to be "really thoughtful," Gacek continued. The government shouldn't ban all facial recognition platforms, he maintained.
Veritone, unlike Clearview, only includes mugshot images in its platform, which it developed with help from the Anaheim, Calif., police department. Gacek said Veritone has run into Clearview a few times. He said he didn't want to speak negatively about its platform but noted that "when you have a massive database, you will get a lot of false positives, and too many false positives make the tool not usable."
Clearview AI did not respond to a request for comment.
Clearview AI seemingly stands out from competing vendors due to its willingness to scrape images and information from social media to build its platform. Google, for example, claims it has long been able to do the same thing but has declined to do so, and other vendors, like Veritone, have opted to use publicly available mugshots for their platforms.
Yet, Pelz-Sharpe asserted, other vendors are likely doing things similar to what Clearview does, just out of public view.
"The goal, if there is one, is to continue to work on this technology and perfect it, assuming that by then any discussions about its ethical or legal status will be moot," he said. "There are good reasons for keeping quiet, as the practice raises questions of ownership, misuse, accuracy and of course equally important issues regarding the future use of such applications."
Pelz-Sharpe said he thinks such platforms will be used to "target, identify and discriminate." Activists have expressed the same concerns over the last few years: that authoritarian governments could use facial recognition to target government protestors, for example.
But, "no doubt those using the technology will somehow attempt to justify their actions if challenged," he said.