Facial Recognition: A New Trend in State Regulation

12 Min Read By: E. Barlow Keener

Ten years ago, the average person did not know what facial recognition was. Now, especially after its use in locating persons involved in the January 6, 2021, riots at the US Capitol, almost everyone knows its utility and power to find anyone who shows up in a video or “snap.” Many on both the left and the right sides of the aisle see its unregulated use as an intrusion on individual privacy. State legislators, as explained below, are exercising their power to regulate the use of facial recognition by law enforcement and by private companies. The states are taking facial recognition regulation into their own hands while the federal government is at a standstill on passing privacy laws curbing the use of this powerful new software tool.

We pose and smile for selfies with friends and put them on Facebook, TikTok, Instagram, and Snapchat. We look up as we walk outside and see cameras on every street intersection pole, or at the city park. We believe they are looking for cars going through red lights or watching out for crime. What we may not realize is that our favorite apps and ever-present street cameras are using facial recognition to identify us and, using advanced A.I. software, tag us as we move from location to location. We also may not be aware that cameras can identify us by our gait and body movement, as well as our face. “Walk that way” has a new meaning.

Since 2017, New York City police have reportedly run facial recognition searches 22,000 times, drawing on a network of more than 15,000 cameras.[1] Fear of crime is driving us, or being used to drive us, to give up our privacy by allowing law enforcement to use those ubiquitous street cameras to identify where we are, and even to listen to our words to recognize us. This technique, commonly called “voiceprint” identification, lets surveillance equipment instantly turn our words into searchable text as we walk down the street.

The legal issue of advanced technologies taking away our right of privacy is not new. In 1890, a young Boston lawyer, Louis Brandeis, co-wrote a Harvard Law Review article asserting that privacy was a fundamental right even if not listed as a right in the US Constitution. Brandeis was upset that two new inventions, the Kodak camera and the Edison dictating machine, were invading our private lives, exposing them to the public without our consent:

Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” [2]

In 1928, almost four decades later, then-Supreme Court Justice Brandeis penned his famous Olmstead v. United States dissent on the issue of privacy. The case involved law enforcement wiretapping a new device located on the sidewalk: the public telephone. Brandeis explained:

Whenever a telephone line is tapped, the privacy of the persons at both ends of the line is invaded, and all conversations between them upon any subject, and although proper, confidential, and privileged, may be overheard.[3]

Justice Brandeis advocated limiting law enforcement’s use of wiretapping. His views on regulating privacy rights eventually became law. Nine decades later, state legislators are again working to rein in the use of new technology: the pervasive placement of high-quality cameras and corresponding use of A.I. software. The concept of facial and biometric recognition has been around since the 1960s. However, the technology to make facial recognition accurate and fast has only been achieved in the last two decades with improvements in “computer vision” algorithms, faster processors, ubiquitous broadband, and inexpensive cameras. Law enforcement showed the world the effectiveness of the cameras and biometric A.I. software after the January 6th insurrection by accurately identifying hundreds of perpetrators within days.

Several states and municipalities are seeking to protect persons from abuse of biometrics by private companies and by law enforcement. The new laws generally attempt to limit private firms from using facial recognition without opt-in consent, or to limit law enforcement’s use of biometric identification tools.

Illinois Law Allows a Private Right of Action

Illinois led the way in this legislative trend by limiting private firms’ ability to collect biometric data without consent. In 2008, the state passed the Biometric Information Privacy Act, or BIPA. BIPA arose in response to a software company that collected fingerprint data at cash registers to allow for easy checkout but then, when the company went bankrupt, attempted to sell the customers’ fingerprint data as a bankruptcy asset. BIPA defines a biometric identifier as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” The law requires written consent before an entity may collect, capture, purchase, receive, disclose, or disseminate biometric information. Most significantly, it gives a person a private right of action “against an offending party.” Statutory damages are set per violation: $1,000 for a negligent violation and $5,000 for an intentional or reckless one.

The private right of action is one of the most controversial aspects of the various privacy laws being proposed around the country. With a private right of action, plaintiffs’ attorneys enforce the privacy law by constantly seeking out potential defendants who are allegedly violating it. Without a private right of action, state attorneys general must decide whom to sue, whether there are resources to sue, and whether suing is politically a good move. Companies are often adamantly opposed to laws creating a private right of action, as such suits can result in large, complex class actions lasting for years and, potentially, very large judgments and settlements.

One result of BIPA’s private right of action is that many online and brick-and-mortar companies are either stopping their use of biometric identification or more carefully obtaining opt-in consent from their customers and employees. The door opened for class actions and large judgments in 2019, when the Illinois Supreme Court ruled in Rosenbach v. Six Flags that BIPA did not require a showing of actual damages, only a showing that a violation occurred.[4] Then in February 2022, the Illinois Supreme Court held in McDonald v. Symphony Bronzeville Park that the Illinois Workers’ Compensation Act does not protect companies from statutory BIPA damages. The McDonald case involved a nursing home collecting employees’ fingerprints without their consent, and the court found that the BIPA claims for statutory damages were not barred by the exclusivity provisions of the Illinois Workers’ Compensation Act.

In 2021, Facebook paid $650 million in a historic settlement of a BIPA lawsuit. Class members are to be awarded at least $345 each, though the payments have been delayed. Notably, Facebook announced it would stop using facial recognition just a few months later. Other plaintiffs and their attorneys also sued other web platforms including TikTok, Snapchat, and Google under BIPA. In 2021, TikTok announced that it settled an Illinois class action for $92 million.[5] Shortly thereafter, in June 2021, TikTok changed its privacy policy to state that TikTok “may collect biometric identifiers” including “faceprints and voiceprints.” Plaintiffs filed a class action suit against Snapchat in 2020 for violations of BIPA.[6] The case is currently before the Seventh Circuit on the issue of whether the minor plaintiff is subject to the Snapchat terms and conditions’ arbitration requirement. Microsoft, Amazon, and Shutterfly have also been sued for alleged BIPA violations. Reportedly, these cases involved photos uploaded from Flickr that were later used by IBM to “train” facial recognition software to help accurately identify people of color. The project was called “Diversity of Faces.” The IBM training database was then used by Microsoft and Amazon to improve their facial recognition systems.

Non-web firms have also been sued under BIPA. In 2021, for example, Six Flags settled the Rosenbach class action for $36 million over fingerprints collected without consent.

Other States Take Action

Other states have also passed statutes limiting companies’ biometric use, but none with the “teeth” of a private right of action like Illinois’s BIPA. Texas is one of those states. In 2009, Texas passed the “Capture or Use of Biometric Identifier Act,” or CUBI. CUBI imposes a penalty of “not more than” $25,000 for each violation, but, unlike Illinois, provides no private right of action. In February 2022, Texas Attorney General Ken Paxton took action under CUBI and filed suit against Facebook, claiming that the company owed the state billions for collecting the biometric data of more than 20 million Texas residents without obtaining their consent.

Still other states have passed laws limiting law enforcement’s use of facial recognition and biometric data. In October 2020, Vermont passed the “Moratorium on Facial Recognition Technology,” prohibiting law enforcement from using facial recognition. The Moratorium provides “a law enforcement officer shall not use facial recognition technology or information acquired through the use of facial recognition technology unless the use would be permitted with respect to drones….” Notably, the Vermont law expanded the definition of facial recognition to include recognition of “sentiment”:

“Facial recognition” means… the automated or semiautomated process by which the characteristics of a person’s face are analyzed to determine the person’s sentiment, state of mind, or other propensities, including the person’s level of dangerousness.

The COVID pandemic has been a busy time for new facial recognition laws. In 2021, Virginia enacted the “Facial recognition technology; authorization of use by local law-enforcement agencies” legislation (HB 2031) prohibiting local law enforcement and campus police from “purchasing or deploying” facial recognition. The Virginia statute did not prevent local law enforcement from using facial recognition deployed by others. Also, by prohibiting just “local law-enforcement agencies,” the law allowed other Virginia law enforcement agencies to use the technology. Interestingly, the law addressed only facial recognition and not the recognition of gait, fingerprints, voiceprints, or state of mind.

The same year, Massachusetts passed the “Facial and Other Remote Biometric Recognition” legislation limiting state law enforcement’s use of facial recognition. The law expressly included in the definition of facial recognition the “characteristics of an individual’s face, head or body to infer emotion, associations, activities or the location of an individual… gait, voice or other biometric characteristic.” Before law enforcement may use facial recognition, the law requires a court order, or an immediate emergency involving a risk of harm to a person. It also covers all law enforcement agencies in the state, not just local law enforcement as in Virginia.

In 2021, Maine passed the “Act To Increase Privacy and Security by Prohibiting the Use of Facial Surveillance by Certain Government Employees and Officials,” which is similar to the “Facial and Other Remote Biometric Recognition” legislation in Massachusetts. However, Maine’s law applies to all government employees, not just law enforcement. Maine also allows government employees to use facial recognition without a court order as long as the employee is investigating a “serious crime” and believes there is “probable cause to believe that an unidentified individual in an image has committed the serious crime,” or under a limited number of additional exceptions. Massachusetts, by contrast, requires an order issued by a court that issues criminal warrants. Utah passed a law similar to Maine’s in 2021, limiting the government’s use of facial recognition except for investigations where there is a “fair probability” the individual is connected to the crime.

In other states:

  • New York passed a 2021 law prohibiting facial recognition in schools.
  • Washington state passed a law prohibiting government agencies from using facial recognition except with a warrant or in an emergency.
  • In 2014, New Hampshire limited government agencies from using biometric data but allowed them to use it to solve a crime without a warrant.
  • A 2020 Maryland law prohibits employers from using facial recognition during interviews without a signed consent.
  • California passed a new law that banned law enforcement from using facial recognition in their body cameras but not in other police surveillance cameras. The law expires on January 1, 2023.
  • Similarly, Oregon limited law enforcement from using facial recognition on body cameras.

While there appears to be a new trend in privacy rights among states, the majority of states—like Colorado and Montana—have failed in attempts to enact facial recognition legislation. Today, as when Justice Brandeis opined on the topic 94 years ago, we are still balancing our right of privacy against our fear of crime and the need to allow law enforcement to act freely. In addition, while Illinois, Texas, and California are limiting private companies from using biometric data without prior opt-in consent, most states have not enacted regulation to prevent private firms from using the technology, for now.

While the federal government is not addressing the thorny issue of facial recognition, states appear to be on a roll and are taking matters into their own hands. It is clear that both the left and the right of the political spectrum are seeking to curb the use of facial recognition and biometric software by law enforcement. Also, the implementation of a private right of action by Illinois has produced results in terms of keeping companies in line with regard to privacy rights. We should expect to see more state legislation granting private rights of action in cases related to violations of limitations on facial recognition and biometric data use, particularly in states with strong plaintiffs’ bars.


  1. Amnesty International, “Surveillance city: NYPD can use more than 15,000 cameras to track people using facial recognition in Manhattan, Bronx and Brooklyn” (June 3, 2021), https://www.amnesty.org/en/latest/news/2021/06/scale-new-york-police-facial-recognition-revealed/

  2. Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review, Vol. IV, No. 5 (1890), https://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html

  3. Olmstead v. United States, 277 U.S. 438 (1928)

  4. Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197 (Ill. 2019)

  5. In re TikTok, Inc., Consumer Privacy Litigation, No. 1:20-cv-04699 (N.D. Ill.); Joe Walsh, “TikTok Settles Privacy Lawsuit For $92 Million,” Forbes (February 25, 2021), https://www.forbes.com/sites/joewalsh/2021/02/25/tiktok-settles-privacy-lawsuit-for-92-million/

  6. Clark v. Snap, Inc., No. 3:21-cv-00009-DWD (S.D. Ill. 2021)