Bradford K. Newman
Chair of North America
Trade Secrets Practice
600 Hansen Way
Palo Alto, CA 94304
Co-Chair Global Technology Transactions
300 E. Randolph St., Suite 5000
Chicago, IL 60601
Contributor, Legislation Section
Senior Associate, IPTech
1900 N. Pearl Street, Suite 1500
Dallas, TX 75201
We are pleased to present the inaugural Chapter on Artificial Intelligence. For years, I have been at the forefront of advocating for rational federal regulation of AI and have published on AI ethics and fairness. I frequently represent clients with legal issues related to both commercial and embedded AI. Additionally, I am the host of the ABA’s AI.2day Podcast. In 2018, my proposed legislation, The AI Data Protection Act, was formalized into a House of Representatives Draft Discussion Bill. In 2021, the ABA will publish my book, which is designed to be an AI field guide for business lawyers.
So I was thrilled when the Section agreed that the Annual Review should include a new Chapter devoted entirely to AI. Before any substantive federal legislation is enacted, many legal issues related to AI will play out in state and federal courts around the country. As applications of artificial intelligence, including machine learning, continue to be deployed in myriad ways that impact our health, work, education, sleep, security, social interaction, and every other aspect of our lives, many critical questions do not yet have clear-cut answers. Companies, counsel, and the courts will, at times, struggle to grasp technical concepts and apply existing law in a uniform way to resolve business disputes. Thus, tracking and understanding the emerging body of law is critically important for business lawyers called on to advise clients in this area. As with other areas of emerging technology, the courts will be faced with applying legal doctrines in new ways in view of the nature of the technology, ranging from the use of AI in criminal cases to the impact of AI on patentable subject matter.
The goal of this Chapter is to serve as a useful tool for those business attorneys who seek to be kept up to date on a national basis concerning how the courts are deciding cases involving AI. Micro and macro trends can only be identified by surveying cases around the country. We confidently predict the cases we report will increase exponentially year over year.
As this is our first installment of the AI Chapter in a burgeoning field, we made some editorial decisions: (i) we included a few cases older than the past year; (ii) unlike other Chapters, we have included cases of note recently filed in the lower courts which we will track in subsequent editions; and (iii) we included legislation and pending legislation in our summary.
We also made certain judgments as to what should be included. A notable example is facial recognition. Due to the nature and complexity of the underlying technology, facial recognition necessarily involves issues of algorithmic/artificial intelligence. However, we did not include every case that references facial recognition when the issue at bar pertained to procedural aspects such as class certification (e.g., class action lawsuits filed under the Illinois Biometric Information Privacy Act (BIPA) (740 ILCS 14)).
Finally, I want to thank my two colleagues, Adam Aft and Yoon Chae, for their assistance in preparing this inaugural chapter. Adam and Yoon are both knowledgeable and accomplished AI attorneys with whom I frequently collaborate. We are excited to add many colleagues from other firms around the country to next year’s Chapter.
We hope this Chapter provides useful guidance to practitioners of varying experience and expertise and look forward to tracking the trends in these cases and presenting the cases arising in the next several years.
United States Supreme Court
There were no qualifying decisions by the United States Supreme Court. We note the Court has heard a number of cases foreshadowing the types of issues that will soon arise with respect to artificial intelligence, such as United States v. Am. Library Ass’n (539 U.S. 194 (2003)), in which a plurality of the Court upheld the constitutionality of filtering software that libraries had to implement pursuant to the Children’s Internet Protection Act, and Gill v. Whitford (138 S. Ct. 1916 (2018)), in which, had the plaintiffs established standing, the Justices might have had to evaluate the use of sophisticated software in redistricting (a point noted again in Justice Kagan’s express reference to machine learning in her dissent in Rucho v. Common Cause (139 S. Ct. 2484 (2019))). The Court had previously concluded that a “people search engine” site presenting incorrect information that prejudiced a plaintiff’s job search was a cognizable injury under the Fair Credit Reporting Act in Spokeo, Inc. v. Robins (136 S. Ct. 1540 (2016)). These cases are representative of the many cases likely to make their way to the Court in the near future that will require the Justices to contemplate artificial intelligence, machine learning, and the impact of the use of these technologies.
There were no qualifying decisions within the First Circuit.
Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). Victims, estates, and family members of victims of terrorist attacks in Israel alleged that Facebook was a provider of terrorist postings because it developed and used algorithms designed to match users’ information with other users and content. The court held that Facebook was a publisher protected by Section 230 of the Communications Decency Act and that the term “publisher” under the Act was not so limited that Facebook’s use of algorithms to match information with users’ interests changed Facebook’s role as a publisher.
Additional Cases of Note
Calderon v. Clearview AI, Inc., 2020 U.S. Dist. LEXIS 94926 (S.D.N.Y. 2020) (stating the court’s intent to consolidate cases against Clearview based on a January 2020 New York Times article alleging defendants scraped over 3 billion facial images from the internet, scanned biometric identifiers, and used those scans to create a searchable database, access to which defendants then allegedly sold to law enforcement, government agencies, and private entities without complying with BIPA); see also Mutnick v. Clearview AI, Inc., 2020 U.S. Dist. LEXIS 109864 (N.D. Ill. 2020).
People v. Wakefield, 175 A.D.3d 158 (N.Y. App. Div. 2019) (concluding no violation of the confrontation clause where the creator of artificial intelligence software was the declarant, not the “sophisticated and highly automated tool powered by electronics and source code”); see also People v. H.K., 2020 NY Slip Op 20232, 130 N.Y.S.3d 890 (Crim. Ct. 2020) (following Wakefield in concluding that, where software was “acting as a highly sophisticated calculator,” the analyst using the software was still a declarant and the right to confrontation was preserved).
Vigil v. Take-Two Interactive Software, Inc., 235 F. Supp. 3d 499 (S.D.N.Y. 2017) (affirmed in relevant part by Santana v. Take-Two Interactive Software, Inc., 717 F. App’x 12 (2d Cir. 2017)) (concluding that BIPA does not create a concrete interest in the form of a right to information, but instead operates to support the statute’s data protection goal; therefore, defendant’s bare violations of the notice and consent provisions of BIPA were dismissed for lack of standing).
LivePerson, Inc. v. 24/7 Customer, Inc., 83 F. Supp. 3d 501 (S.D.N.Y. 2015) (determining plaintiff adequately pleaded possession and misappropriation of a trade secret where plaintiff alleged its “predictive algorithms” and “proprietary behavioral analysis methods” were based on many years of expensive research and were secured by patents, copyrights, trademarks and contractual provisions).
Zaletel v. Prisma Labs, Inc., No. 16-1307-SLR, 2017 U.S. Dist. LEXIS 30868 (D. Del. Mar. 6, 2017). The plaintiff had a “Prizmia” photo editing app. The plaintiff alleged trademark infringement based on the defendant’s “Prisma” photo transformation app. In reviewing the Third Circuit’s likelihood of confusion factors, the court considered the competition and overlap factor. The court concluded that, “while plaintiff broadly describes both apps as distributing photo filtering apps, the record demonstrates that defendant’s app analyzes photos using artificial intelligence technology and then redraws the photos in a chosen artistic style, resulting in machine generated art. Given these very real differences in functionality, it stands to reason that the two products are directed to different consumers.”
Sevatec, Inc. v. Ayyar, 102 Va. Cir. 148 (Va. Cir. Ct. 2019). The court noted that matters such as data analytics, artificial intelligence, and machine learning are complex enough that expert testimony is proper and helpful and such testimony does not invade the province of the jury.
Aerotek, Inc. v. Boyd, 598 S.W.3d 373 (Tex. App. 2020). The court expressly acknowledged that one day courts may have to determine whether machine learning and artificial intelligence resulted in software altering itself and inserting an arbitration clause after the fact.
Additional Cases of Note
Bertuccelli v. Universal City Studios LLC, No. 19-1304, 2020 U.S. Dist. LEXIS 195295 (E.D. La. Oct. 21, 2020) (denying a motion to disqualify an expert the court concluded was competent to testify in a copyright infringement case after having performed an “artificial intelligence assisted facial recognition analysis” of the plaintiff’s mask and the alleged infringing mask).
Delphi Automotive PLC v. Absmeier, 167 F. Supp. 3d 868 (E.D. Mich. 2016). Plaintiff employer alleged defendant former employee breached his contractual obligations by terminating his employment with the plaintiff and accepting a job with Samsung in the same line of business. Defendant worked for the plaintiff as director of its labs in Silicon Valley, managing engineers and programmers on work related to autonomous driving. Defendant had signed a confidentiality and noninterference agreement. The court concluded that the plaintiff had a strong likelihood of success on the merits of its breach of contract claim. Therefore, the court granted the plaintiff’s motion for preliminary injunction with certain modifications (namely, limiting the applicability of the non-compete provision to the field of autonomous vehicle technology for one year because the court determined that autonomous vehicle technology is a “small and specialized field that is international in scope” and, therefore, a global restriction was reasonable).
Additional Cases of Note
In re C.W., 2019-Ohio-5262 (Oh. Ct. App. 2019) (noting that “[p]roving that an actual person is behind something like a social-networking account becomes increasingly important in an era when Twitter bots and other artificial intelligence troll the internet pretending to be people”).
Bryant v. Compass Group USA, Inc., 958 F.3d 617 (7th Cir. 2020). Plaintiff vending machine customer filed class action against vending machine owner/operator, alleging violation of BIPA when it required her to provide a fingerprint scan before allowing her to purchase items. The district court found defendant’s alleged violations were mere procedural violations that caused no concrete harm to plaintiff and, therefore, remanded the action to state court. The Court of Appeals held that a violation of § 15(a) (requiring development of a written and public policy establishing a retention schedule and guidelines for destroying biometric identifiers and information) of BIPA did not create a concrete and particularized injury and plaintiff lacked standing under Article III to pursue the claim in federal court. In contrast, the Court of Appeals held that a violation of § 15(b) (requiring private entities make certain disclosures and receive informed consent from consumers before obtaining biometric identifiers and information) of BIPA did result in a concrete injury (plaintiff’s loss of the power and ability to make informed decisions about the collection, storage and use of her biometric information) and she, therefore, had standing and her claim could proceed in federal court.
Rosenbach v. Six Flags Entertainment Corporation, 129 N.E.3d 1197 (Ill. 2019). Rosenbach is a key Supreme Court of Illinois case answering whether one qualifies as an “aggrieved” person for purposes of BIPA and may seek damages and injunctive relief if she hasn’t alleged some actual injury or adverse effect beyond a violation of her rights under the statute. Plaintiff purchased a season pass for her son to defendant’s amusement park. Plaintiff’s son was asked to scan his thumb into defendant’s biometric data capture system and neither plaintiff nor her son were informed of the specific purpose and length of term for which the son’s fingerprint had been collected. Plaintiff brought suit alleging violation of BIPA. The Supreme Court of Illinois held that an individual need not allege some actual injury or adverse effect, beyond violation of his or her rights under BIPA, to qualify as an “aggrieved” person under the statute and be entitled to seek damages and injunctive relief. The court reasoned that requiring individuals to wait until they’ve sustained some compensable injury beyond violation of their statutory rights before they can seek recourse would be antithetical to BIPA’s purposes. The court found that BIPA codified individuals’ right to privacy in and control over their biometric identifiers and information. Therefore, the court found also that a violation of BIPA is not merely “technical,” but rather the “injury is real and significant.”
Additional Cases of Note
Kloss v. Acuant, Inc., 2020 U.S. Dist. LEXIS 89411 (N.D. Ill. 2020) (applying Bryant v. Compass Group (summarized in this chapter) and concluding that the court lacked subject-matter jurisdiction over plaintiff’s BIPA § 15(a) claims because a violation of § 15(a) is procedural and, thus, does not create a concrete and particularized Article III injury).
Acaley v. Vimeo, 2020 U.S. Dist. LEXIS 95208 (N.D. Ill. June 1, 2020) (concluding that parties made an agreement to arbitrate because defendant provided reasonable notice of its terms of service to users by requiring users to give consent to its terms when they first opened the app and when they signed up for a free subscription plan, but the BIPA violation claim alleged by the plaintiff was not within the scope of the parties’ agreement to arbitrate because the “Exceptions to Arbitration” clause excluded claims for invasion of privacy).
Heard v. Becton, Dickinson & Co., 2020 U.S. Dist. LEXIS 31249 (N.D. Ill. 2020) (concluding that, for § 15(b) to apply, an entity must at least take an active step to “collect, capture, purchase, receive through trade, or otherwise obtain” biometric data and the plaintiff did not adequately plead that defendant took any such active step where the complaint omitted specific factual detail and merely parroted BIPA’s statutory language and the plaintiff failed to adequately plead possession because he failed to sufficiently allege that defendant “exercised any dominion or control” over his fingerprint data).
Rogers v. CSX Intermodal Terminals, Inc., 409 F. Supp. 3d 612 (N.D. Ill. 2019) (denying defendant’s motion to dismiss and relying on the Illinois Supreme Court’s holding in Rosenbach (summarized in this chapter) to conclude that plaintiff’s right to privacy in his fingerprint data included “the right to give up his biometric identifiers or information only after receiving written notice of the purpose and duration of collection and providing informed written consent”).
Neals v. PAR Technology Corp., 419 F. Supp. 3d 1088 (N.D. Ill. 2019) (concluding that BIPA does not exempt a third-party non-employer collector of biometric information when an action arises in the employment context, rejecting defendant’s argument that a third-party vendor couldn’t be required to comply with BIPA because only the employer has a preexisting relationship with the employees).
Ocean Tomo, LLC v. PatentRatings, LLC, 375 F. Supp. 3d 915, 957 (N.D. Ill. 2019) (determining that Ocean Tomo training its machine learning algorithm on PatentRatings’ patent database violated a requirement in a license agreement between the parties that prohibited Ocean Tomo from using the database (which was designated as PatentRatings confidential information) to develop a product for anyone except PatentRatings).
Liu v. Four Seasons Hotel, Ltd., 2019 IL App (1st) 182645, 138 N.E.3d 201 (Ill. App. Ct. 2019) (noting that “simply because an employer opts to use biometric data, like fingerprints, for timekeeping purposes does not transform a complaint into a wages or hours claim”).
There were no qualifying decisions within the Eighth Circuit.
Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019). Facebook moved to dismiss plaintiff users’ complaint for lack of standing on the ground that the plaintiffs hadn’t alleged any concrete injury as a result of Facebook’s facial recognition technology. The court concluded that BIPA protects concrete privacy interests, and violations of BIPA’s procedures actually harm or pose a material risk of harm to those privacy interests.
WeRide Corp. v. Kun Huang, 379 F. Supp. 3d 834 (N.D. Cal. 2019). Autonomous vehicle companies brought, inter alia, trade secret misappropriation claims against former director and officer and his competing company. The court determined the plaintiff showed it was likely to succeed on the merits of its trade secret misappropriation claims where it developed source code and algorithms for autonomous vehicles over 18 months with investments of over $45M and restricted access to its code base to on-site employees or employees who use a password-protected VPN. Plaintiff identified its trade secrets with particularity where it described the functionality of each trade secret and named numerous files in its code base because plaintiff was “not required to identify the specific source code to meet the reasonable particularity standard.”
Additional Cases of Note
Hatteberg v. Capital One Bank, N.A., No. SA CV 19-1425-DOC-KES, 2019 U.S. Dist. LEXIS 231235 (C.D. Cal. Nov. 20, 2019) (relying on advances in technology, including use of artificial intelligence to “deepfake” audio, as a basis for denying defendant’s argument that a plaintiff must plead to a higher standard alleging specific indicia of automatic dialing to survive a motion to dismiss in a Telephone Consumer Protection Act case).
Williams-Sonoma, Inc. v. Amazon.com, Inc., No. 18-cv-07548-EDL, 2019 U.S. Dist. LEXIS 226300, at *36 (N.D. Cal. May 2, 2019) (denying Amazon’s motion to dismiss Williams-Sonoma’s service mark infringement case noting “it would not be plausible to presume that Amazon conducted its marketing of Williams-Sonoma’s products without some careful aforethought (whether consciously in the traditional sense or via algorithm and artificial intelligence)”).
Nevarez v. Forty Niners Football Co., LLC, No. 16-cv-07013-LHK (SVK), 2018 U.S. Dist. LEXIS 182255 (N.D. Cal. Oct. 16, 2018) (determining that protections exist such as protective orders and the Federal Rules of Evidence that prohibit a party from using artificial intelligence to identify non-responsive documents without identifying a “cut-off” point for some manner of reviewing the alleged non-responsive documents).
There were no qualifying decisions within the Tenth Circuit.
There were no qualifying decisions within the Eleventh Circuit.
Elec. Privacy Info. Ctr. v. Nat’l Sec. Comm’n on Artificial Intelligence, No. 1:19-cv-02906 (TNM), 2020 U.S. Dist. LEXIS 95508 (D.D.C. June 1, 2020). The court concluded that the National Security Commission on Artificial Intelligence is subject to both the Freedom of Information Act and the Federal Advisory Committee Act.
Court of Appeals for the Federal Circuit
McRO, Inc. v. Bandai Namco Games America, Inc., 837 F.3d 1299 (Fed. Cir. 2016). This patent litigation involved a patent claiming a method of using a computer to automate the realistic syncing of lip and facial expressions in animated characters. The plaintiff owners of the patents brought infringement actions, and defendants argued the claims were unpatentable algorithms that merely took a preexisting process and made it faster by automating it on a computer. The court held that the patent claim was not directed to ineligible subject matter where the claim involved the use of automation algorithms and was specific enough that the claimed rules would not broadly preempt all rules-based means of automating facial animation.
- Kraus v. Cegavske, No. 82018, 2020 Nev. Unpub. LEXIS 1043 (Nov. 3, 2020). A lawsuit filed on behalf of President Trump challenging the use of AI to authenticate ballot signatures.
- Williams-Sonoma Inc. v. Amazon.com, Inc. (N.D. Cal. 3:18-cv-07548). Williams-Sonoma asserted a copyright infringement claim against Amazon related to how Amazon sells Williams-Sonoma’s products. Amazon argued that Williams-Sonoma didn’t state a claim for direct copyright infringement because it didn’t plead that Amazon engaged in “volitional conduct” where the algorithm chooses the disputed images. Williams-Sonoma argued that the Copyright Act covers “anyone” who violates it and the term encompasses artificial intelligence and “software agents.”
- Asif Kumandan et al. v. Google LLC (N.D. Cal. 5:19-cv-04286). Plaintiff Google Assistant users filed a wiretapping class action against Google, alleging that the artificial intelligence voice recognition program recorded their conversations without their consent or knowledge even though they never uttered the trigger words.
- Vance et al. v. Amazon.com, Inc. (W.D. Wash. 2:20-cv-01084); Vance et al. v. Facefirst, Inc. (C.D. Cal. 2:20-cv-06244); Vance et al. v. Google LLC (N.D. Cal. 5:20-cv-04696); and Vance et al. v. Microsoft Corporation (W.D. Wash. 2:20-cv-01082). Chicago residents Steven Vance and Tim Janecyk filed four nearly identical proposed class actions against Amazon.com, Inc., Google LLC, Microsoft Corp., and a fourth company called Facefirst Inc., alleging the companies violated Illinois’ Biometric Information Privacy Act by “unlawfully collecting, obtaining, storing, using, possessing and profiting from the biometric identifiers and information” of plaintiffs without their permission. Plaintiffs allege that the tech companies used the dataset containing their geometric face scans to train computer programs how to better recognize faces. These companies, in an attempt to win an “arms race,” are working to develop the ability to claim a low identification error rate. Allegedly, the four tech giants obtained plaintiffs’ face scans by purchasing a dataset created by IBM Corp. (the subject of another suit brought by Janecyk).
- Janecyk v. IBM Corp. (Cook County Cir. Ct. Ill. 2020CH00833). IBM Corp. was accused in an Illinois state court lawsuit of violating the state’s biometrics law when it allegedly collected photographs to develop its facial recognition technology without obtaining consent from the subjects to use biometric information. Plaintiff Janecyk, a photographer, said that at least seven of his photos appeared in IBM’s “diversity in faces” dataset. The photos were used to generate unique face templates that recognized the subjects’ gender, age and race, and were given to third parties without consent. IBM allegedly created, collected and stored millions of face templates—highly detailed geometric maps of the face—from about a million photos that make up the “diversity in faces” database. Janecyk claimed that IBM obtained the photos from Flickr, a website where users upload their photos. IBM obtained photos depicting people Janecyk had photographed in the Chicago area, whom he had assured that he was taking their photos only as a hobbyist and that their images would not be used by other parties or for a commercial purpose.
- Jordan Stein v. Clarifai, Inc. (Cook County Cir. Ct. Ill. 2020CH01810). Clarifai, Inc., an artificial intelligence company, allegedly violated Illinois’ privacy law when it captured and profited from the profile photos of OKCupid Inc. users without their permission or knowledge, according to a lawsuit filed in Illinois state court. The company allegedly harvested the profile photos of tens of thousands of users, scanned the facial geometry to create face templates, and used the data to develop and train its facial recognition technology.
- K. et al v. Google, LLC (N.D. Cal. 5:20-cv-02257). A proposed class action filed in California federal court alleged that Google violated federal privacy laws by selling and distributing Chromebooks that collect and store students’ facial and voice data. The complaint alleged that Google violated BIPA and the federal Children’s Online Privacy Protection Act (COPPA). Chromebooks come with a “G Suite for Education” platform through which Google collects face templates, or scans of a person’s face, as well as voice data, location data and search histories without permission, the complaint says. Google never informed the parents of the purpose and length of term for which their children’s biometric identifiers and information would be collected, stored and used. The complaint proposes two classes: (1) a BIPA class and (2) a COPPA class. The BIPA class seeks an injunction requiring Google to comply with BIPA and destroy data it has collected, plus monetary damages. The COPPA class seeks an injunction requiring Google to obtain parental consent to collect biometric data and delete data already collected without consent. The plaintiffs have moved to dismiss the case without prejudice.
- Williams et al. v. PersonalizationMall.com LLC (N.D. Ill. 1:20-cv-00025). An online gift platform, PersonalizationMall.com, owned by Bed Bath & Beyond, moved to dismiss or stay an action accusing the online retailer of violating plaintiffs’ rights under BIPA. Plaintiffs allege that they were never informed in writing that PersonalizationMall.com was capturing, collecting, storing or using their biometric information and they never signed a release consenting to such collection. The company moved for dismissal or, in the alternative, moved for the court to stay the case pending Illinois’ Appellate Court decision on whether the Illinois Workers’ Compensation Act (IWCA) preempts claims under BIPA. The company argues that plaintiffs’ claims clearly arose from their employment because they are challenging PersonalizationMall.com’s requirement that warehouse workers use their fingerprints to track hours and breaks.
We organize the enacted and proposed legislation into (i) policy (e.g., executive orders); (ii) algorithmic accountability (e.g., legislation aimed at responding to public concerns regarding algorithmic bias and discrimination); (iii) facial recognition; (iv) transparency (e.g., legislation primarily directed at promoting transparency in use of AI); and (v) other (e.g., other pending bills such as federal bills on governance issues for AI).
- [Fed] Maintaining American Leadership in AI (Feb 2019). Executive Order 13859 (Feb. 2019) launching the “American AI Initiative,” intended to help coordinate federal resources to support development of AI in the U.S.
- [Fed] H.R. Res. 153 (Feb 2019). Legislation to support the development of guidelines for ethical development of artificial intelligence.
- [Fed / NIST] US Leadership in AI (Aug 2019). NIST to establish standards to support reliable, robust and trustworthy AI.
- [CA] Res on 23 Asilomar AI Principles (Sep 2018). Adopted state resolution ACR 215 (Sept. 2018) expressing legislative support for the Asilomar AI Principles as “guiding values” for AI development.
- [Fed] Algorithmic Accountability Act (Apr 2019). Bills S. 1108 and H.R. 2231 (Apr. 2019) intended to require “companies to regularly evaluate their tools for accuracy, fairness, bias, and discrimination.”
- [NJ] New Jersey Algorithmic Accountability Act (May 2019). Require that certain businesses conduct automated decision system and data protection impact assessments of their automated decision system and information systems.
- [CA] AI Reporting (Feb 2019). Require California business entities with more than 50 employees and associated contractors and vendors to each maintain a written record of the data used relating to any use of artificial intelligence for the delivery of the product or service to the public entity.
- [WA] Guidelines for Gov’t Procurement and Use of Auto Decision Systems (Jan 2019). Establish guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.
- [NY] NYC (Jan 2018). “A local law in relation to automated decision systems used by agencies” (Int. No. 1696-2017) required the creation of a task force for providing recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address situations where people are harmed by such agency automated decision systems.
Facial Recognition Technology
- [Fed] Commercial Facial Recognition Privacy Act (Mar 2019). Bill S 847 (Mar. 2019) intended to provide people information and control over how their data is shared with companies using facial recognition technology.
- [Fed] FACE Protection Act (July 2019). Restrict federal government from using a facial recognition technology without a court order.
- [Fed] No Biometric Barriers to Housing Act (July 2019). Prohibiting owners of certain federally assisted rental units from using facial recognition, physical biometric recognition, or remote biometric recognition technology in any units, buildings or grounds of such project.
- [CA] Body Camera Account Act (Feb 2019). Bill A.B. 1215 was introduced to prohibit law enforcement agencies and officials from using any “biometric surveillance system,” including facial recognition technology, in connection with an officer camera or data collected by the camera.
- [MA] An Act Establishing a Moratorium on Face Recognition (Jan 2019). Senate Bill 1385 was introduced to establish a moratorium on the use of face recognition systems by state and local law enforcement.
- [NY] Prohibits Use of Facial Recog. Sys. (May 2019). Senate Bill 5687 was introduced to propose a temporary stop to the use of facial recognition technology in public schools.
- [SF and Oakland, CA] City ordinances were passed to ban the use of facial recognition software by the police and other government agencies (June, July 2019).
- [Somerville, MA] City ordinance was passed to ban the use of facial recognition technology by government agencies (July 2019).
- [CA] B.O.T. Act – SB 1001 (effective July 2019). Enacted bill SB 1001 (eff. July 2019) intended to “shed light on bots by requiring them to identify themselves as automated accounts.”
- [CA] Anti-Eavesdropping Act (Assemb. May 2019). Prohibiting a person or entity from providing the operation of a voice recognition feature within the state without prominently informing the user during the initial setup or installation of a smart speaker device.
- [IL] AI Video Interview Act (effective Jan 2020). Provide notice and explainability requirements for recorded video interviews.
- [Fed] FUTURE of AI Act (Dec 2017). Requiring the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence.
- [Fed] AI JOBS Act (Jan 2019). Promoting a 21st century artificial intelligence workforce.
- [Fed] GrAITR Act (Apr 2019). Legislation directed to research on cybersecurity and algorithm accountability, explainability and trustworthiness.
- [Fed] AI in Government Act (May 2019). Instructing the General Services Administration’s AI Center of Excellence to advise and promote the efforts of the federal government in developing innovative uses of AI to benefit the public, and improve cohesion and competency in the use of AI.
- [Fed] AI Initiative Act (May 2019). Requiring federal government activities related to AI, including implementing a National Artificial Intelligence Research and Development Initiative.
 As noted in our introduction, we made certain judgment calls with respect to which cases to include. For example, we omitted certain BIPA cases that did not add information beyond those we have presented in this Chapter. See, e.g., Darty v. Columbia Rehabilitation and Nursing Center, LLC, 2020 U.S. Dist. LEXIS 110574 (N.D. Ill. 2020); Figueroa v. Kronos Incorporated, 2020 U.S. Dist. LEXIS 64131 (N.D. Ill. 2020); Namuwonge v. Kronos, Inc., 418 F. Supp. 3d 279 (N.D. Ill. 2019); Treadwell v. Power Solutions International Inc., 427 F. Supp. 3d 984 (N.D. Ill. 2019); Kiefer v. Bob Evans Farm, LLC, 313 F. Supp. 3d 966 (C.D. Ill. 2018); Rivera v. Google Inc., 238 F. Supp. 3d 1088 (N.D. Ill. 2017); In re Facebook Biometric Information Privacy Litigation, 185 F. Supp. 3d 1155 (N.D. Cal. 2016); Norberg v. Shutterfly, Inc., 152 F. Supp. 3d 1103 (N.D. Ill. 2015).
 As noted in our introduction, we made certain judgment calls with respect to which cases to include. For example, we omitted several patent cases directed to subject-matter eligibility that we felt did not provide additional insight beyond those we have presented in this Chapter. See, e.g., Kaavo Inc. v. Amazon.com, Inc., 323 F. Supp. 3d 630 (D. Del. 2018); Hyper Search, LLC v. Facebook, Inc., No. 17-1387, 2018 U.S. Dist. LEXIS 212336 (D. Del. Dec. 18, 2018); Purepredictive, Inc. v. H20.AI, Inc., No. 17-cv-03049, 2017 U.S. Dist. LEXIS 139056 (N.D. Cal. Aug. 29, 2017); Power Analytics Corp. v. Operation Tech., Inc., No. SA CV16-01955 JAK, 2017 U.S. Dist. LEXIS 216875 (C.D. Cal. July 13, 2017); Nice Sys. v. Clickfox, Inc., 207 F. Supp. 3d 393 (D. Del. 2016); eResearch Tech., Inc. v. CRF, Inc., 186 F. Supp. 3d 463 (W.D. Pa. 2016); Neochloris, Inc. v. Emerson Process Mgmt. LLP, 140 F. Supp. 3d 763 (N.D. Ill. 2015).