Sunstein Insights

Facial Recognition: A Clear View to Dystopia

Katherine W. Soule | Attorney

Katherine is a member of our Litigation Practice Group.

In the wake of an alarming exposé published by The New York Times in January, Clearview AI, Inc., a New York startup, faces a slew of lawsuits. Since the article’s publication, the Vermont Attorney General has filed a complaint against Clearview, and class actions have been filed against the company in Illinois and California. All three lawsuits raise the specter of a “dystopian future” enabled by Clearview’s technology and its seeming disregard for privacy rights.

Before the exposé, Clearview lurked in the shadows, invisible to the general public. The Times revealed that Clearview had covertly screen-scraped millions of websites, such as Facebook, Twitter, Venmo and Google, to collect and catalog approximately three billion images of individual faces to build a facial recognition database.

Clearview used artificial intelligence algorithms to analyze biometric information and generate a biometric template from every image it scraped from the internet. The templates matched each individual to personal information such as his or her name, address, workplace, and friends. Clearview assembled its database without notice to the people whose images it had collected and without their consent.
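To make the alleged pipeline concrete, here is a minimal, hypothetical sketch in Python of how a scraped image might be reduced to a numeric template and paired with identifying information. The embedding function, names, and URL below are placeholders for illustration, not Clearview’s actual method.

```python
import numpy as np

# Hypothetical sketch: reduce a scraped image to a fixed-length "template"
# and store it alongside identifying details. The embedding step below is a
# placeholder; a real system would use a trained face-embedding model.

def extract_template(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: returns a 128-dimensional vector."""
    flat = image_pixels.astype(np.float32).ravel()
    vec = np.resize(flat, 128)
    return vec / (np.linalg.norm(vec) + 1e-9)  # normalize for later comparison

# Each database record pairs the template with scraped personal information.
database = []

def add_record(image_pixels: np.ndarray, name: str, source_url: str) -> None:
    database.append({
        "template": extract_template(image_pixels),
        "name": name,
        "source_url": source_url,
    })

# Example: index a synthetic 64x64 grayscale image "scraped" from a profile page.
add_record(np.random.rand(64, 64), "Jane Doe", "https://example.com/profile/jane")
```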

Clearview sold access to the database to law enforcement agencies, private individuals, and companies for commercial gain. A purchaser could upload a photograph and, through facial recognition matching, instantly identify the person in the photo, retrieve that individual’s personal information, and view a dossier of every photo of that person that had been posted online.

Clearview’s technology reportedly includes programming that would allow it to pair with augmented reality glasses, enabling a user walking down the street to identify the individuals they pass and pull up sensitive personal information about them.

Facial recognition technology uses computers and algorithms to extract unique individual characteristics from photographs based on the features and geometry of individuals’ faces. It then stores the unique identifiers as “faceprints” in a searchable database to allow rapid identification of an individual based on a photograph or a video.
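As a rough illustration of the matching step described above, the following hypothetical Python sketch treats each stored “faceprint” as a fixed-length vector and identifies a query photo by nearest-neighbor distance. The vectors and the threshold are illustrative stand-ins, not drawn from any real system.

```python
import numpy as np

# Illustrative sketch of faceprint matching: identification is a
# nearest-neighbor search over stored vectors. The data here is random,
# standing in for embeddings produced by a face-recognition model.

rng = np.random.default_rng(0)
stored_faceprints = rng.normal(size=(1000, 128))          # 1,000 enrolled people
stored_names = [f"person_{i}" for i in range(1000)]

def identify(query_faceprint: np.ndarray, threshold: float = 1.0):
    """Return the closest stored identity if it falls within the match threshold."""
    distances = np.linalg.norm(stored_faceprints - query_faceprint, axis=1)
    best = int(np.argmin(distances))
    if distances[best] <= threshold:
        return stored_names[best], float(distances[best])
    return None, float(distances[best])

# Query with a slightly perturbed copy of an enrolled faceprint: it should match.
query = stored_faceprints[42] + rng.normal(scale=0.01, size=128)
print(identify(query))   # ('person_42', small distance well under the threshold)
```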

Unlike social security numbers, which can be changed if they have been compromised, biometric information remains constant throughout an individual’s life. Once a person is added to a facial recognition database, he or she loses an enormous amount of anonymity and privacy.

The Vermont Attorney General’s complaint acknowledged the potential benefits of facial recognition technology, such as the ability to assist law enforcement agencies. But all three complaints note that businesses and policymakers have been particularly cautious about facial recognition technology. For example, the complaint in California quotes Senator Edward Markey, who has warned that the widespread use of the technology “could facilitate dangerous behavior and could effectively destroy individuals’ ability to go about their daily lives anonymously.”

The complaints filed in California and Vermont cite a statement made in 2011 by Google’s then-CEO that, as far as he knew, “it’s the only technology Google has built and, after looking at it, we decided to stop” because it could be used “in a very bad way.”

Some states and municipalities have enacted or considered bans on the use of such technology, and the Human Rights Council of the United Nations considers electronic surveillance to interfere with privacy and to repress the right to freedom of expression. The European Data Protection Supervisor wrote, on February 21, that the use of facial recognition to identify an individual is highly intrusive, subject to “function creep,” and may involve poor-quality datasets that can result in bias or discrimination.

As the Vermont complaint warns, “Easily accessible facial recognition would permit governments, stalkers, predators, and con artists to instantly identify any stranger and, combined with other readily available data sources, know extensive details about their family, address, workplace, and other characteristics.” The information could be used to stalk romantic partners, to discover information to use as blackmail, and to pry into the lives of private citizens with no probable cause.

Clearview reported a data breach on February 26, 2020, in which its client lists and other data were stolen. Clearview stated that “[u]nfortunately, data breaches are part of life in the 21st century.” Clearview said that its facial template database was not compromised, but its seeming indifference to the breach did not inspire confidence in its commitment to protecting sensitive information.

Clearview’s facial recognition database has been developed in violation of many websites’ terms of service and privacy policies. Facebook, for example, does not permit unauthorized data-scraping, and its users expect that pictures of themselves, their friends, and their children are not subject to mass collection for automated analysis and inclusion in a facial recognition database.

Even individuals who do not use social media or post any photographs online are exposed to Clearview’s collection, since anyone who appears in a photo posted by any user may be captured in the screen scraping and added to Clearview’s database.

Clearview’s technology and practices violate numerous state laws. Under the California Consumer Privacy Act (“CCPA”), a business that collects a consumer’s personal information, including biometric information, “shall, at or before the point of collection, inform consumers as to the categories of personal information to be collected and the purposes for which the categories of personal information shall be used” (emphasis added). Clearview failed to so inform consumers at any point, much less before it began collecting data.

Similarly, under the Illinois Biometric Information Privacy Act, a company may not collect, capture, purchase, receive through trade, or otherwise obtain a person’s biometric identifier without prior notice to and consent of that individual. BIPA also prohibits selling or otherwise profiting from a person’s biometric data and requires companies collecting such data to develop a publicly available written policy establishing a retention schedule and guidelines for permanently destroying the biometric data. Of course, Clearview did not comply with these requirements.

Residents of states other than California and Illinois may have protections under state consumer protection laws. For example, the Vermont Attorney General alleges the following unfair and deceptive practices in commerce in violation of Vermont’s consumer protection laws: screen scraping without consent; collecting, storing, analyzing, and distributing photographs of minors without consent; failing to provide adequate data security; exposing sensitive personal data to theft; violating individuals’ rights to display and distribute photographs; misrepresenting the accuracy of the technology; misrepresenting the use of the technology; and misleading individuals as to privacy rights.

The use of facial recognition technology will have widespread effects on individuals’ privacy and safety. The pending lawsuits stand to highlight the grave concerns associated with this technology and motivate legislative and regulatory action on a federal level.
