300 E. Randolph St.
Chicago, IL 60601
§ 1.1. Introduction
We are pleased to present the second edition of this rapidly growing Chapter on Artificial Intelligence.
A few years ago, in my capacity as the Founder and Chair of the AI Subcommittee, I made the original suggestion that the Annual Review include a new Chapter devoted entirely to AI because I understood from clients that this was an area in which they were hungry for guidance. Over the last decade, AI and Machine Learning have become my passion. I continue to be fascinated by the various AI/ML issues my clients ask me to advise them on, and I have watched with interest as US regulators and the plaintiffs' bar have begun to train their sights on commercial and embedded AI. The pace of corporate deals involving companies that count AI among their innovations has also increased substantially. I have tried hard to keep up with the rapid pace of change: I have published on many aspects of the field, formulated proposed federal AI legislation that in 2018 became a House of Representatives Draft Discussion Bill, and been invited to speak and teach on AI at many institutions, including MIT/Sloan, NYU, and Berkeley Law School.
In the absence of substantive federal legislation, case law plays an outsized role in shaping the contours of the emerging legal issues associated with widespread adoption of AI and Machine Learning. Tracking relevant case developments from around the country is essential. Last year, we confidently predicted that developments in this area would increase substantially year over year. Our prognostication has proven correct: this year's Chapter marks a notable increase in reported cases in the field.
The goal of this Chapter is to serve as a useful tool for business attorneys who seek to stay up to date, on a national basis, on how courts are deciding cases involving AI. As in the first edition, we included relevant enacted and pending legislation, and we applied the same editorial judgments as to what should be covered. A notable example is facial recognition (FR). Because of the nature and complexity of the underlying technology, FR necessarily involves issues of algorithmic/artificial intelligence. However, we did not include every case that references facial recognition where the issue at bar pertained to procedural aspects such as class certification (e.g., class action lawsuits filed under the Illinois Biometric Information Privacy Act (BIPA) (740 ILCS 14)).
Finally, I want to thank my colleagues, Adam Aft and Alex Crowley, for their assistance in preparing this year’s Chapter. Adam is a knowledgeable and accomplished AI attorney with whom I frequently collaborate, and Alex is a new joiner to our team with a noted exuberance for AI.
We hope this Chapter provides useful guidance to practitioners of varying experience and expertise and look forward to tracking the trends in these cases and presenting the cases arising in the next several years.
Palo Alto, California
§ 1.2. Cases
§ 1.2.1. United States Supreme Court
Van Buren v. United States, 141 S. Ct. 1648 (Jun. 3, 2021). The dispute underlying this case arose when a police officer violated department policy by using the computer in his patrol car to access information in a law enforcement database for a non-law-enforcement purpose. The Court held that a computer user "exceeds authorized access" under the Computer Fraud and Abuse Act of 1986 (CFAA) "when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off-limits to him." Here, the police officer was authorized to access his patrol-car computer and the law enforcement database. Under the Court's holding, the officer's purpose in accessing the computer and database, albeit improper, was not relevant to determining liability under the CFAA. This holding resolved a circuit split about how broadly to interpret the CFAA. It avoided making "millions of otherwise law-abiding citizens" into criminals simply on the basis that they used their computers in a technically unauthorized way, such as to send personal email from a work laptop.
There were no other qualifying decisions by the United States Supreme Court. We note the Court has heard a number of cases foreshadowing the types of issues that will soon arise with respect to artificial intelligence, such as United States v. Am. Library Ass'n (539 U.S. 194 (2003)), in which a plurality of the Court upheld the constitutionality of filtering software that libraries had to implement pursuant to the Children's Internet Protection Act, and Gill v. Whitford (138 S. Ct. 1916 (2018)), in which, had the plaintiffs had standing, the Justices might have had to evaluate the use of sophisticated software in redistricting (a point noted again in Justice Kagan's express reference to machine learning in her dissent in Rucho v. Common Cause (139 S. Ct. 2484 (2019))). The Court had previously considered whether a "people search engine" site presenting incorrect information that prejudiced a plaintiff's job search inflicted a cognizable injury under the Fair Credit Reporting Act in Spokeo, Inc. v. Robins (136 S. Ct. 1540 (2016)). These cases are representative of the many cases likely to make their way to the Court in the near future that will require the Justices to contemplate artificial intelligence, machine learning, and the impact of the use of these technologies.
§ 1.2.2. First Circuit
There were no qualifying decisions within the First Circuit.
§ 1.2.3. Second Circuit
Flores v. Stanford, 2021 U.S. Dist. LEXIS 185700 (S.D.N.Y. 2021) (compelling disclosure of information related to the COMPAS software (used to assess the likelihood of recidivism and relied on by courts to inform bail amounts and sentencing) as relevant to the plaintiffs' class certification, given that transparency and explainability regarding that information and the operation of the applicable algorithm would be potentially central to the plaintiffs' assertions that the manner in which the COMPAS software informed their sentencing subjected them to unconstitutional practices).
Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). Victims, estates, and family members of victims of terrorist attacks in Israel alleged that Facebook enabled terrorist postings by developing and using algorithms designed to match users' information with other users and content. The court held that Facebook was a publisher protected by Section 230 of the Communications Decency Act and that the term "publisher" under the Act was not so limited that Facebook's use of algorithms to match content with users' interests changed Facebook's role as a publisher.
§ 1.2.3.1. Additional Cases of Note
Clark v. City of New York, 2021 U.S. Dist. LEXIS 177534 (S.D.N.Y. 2021) (denying motion to dismiss First Amendment and state law religious discrimination claims against New York City for requiring Muslim women to remove their hijabs for booking photographs after arrest. A primary motivation for the women's complaint was that forcing them to remove their hijabs for a picture would cause the women to violate their religious belief that men outside of their immediate family were prohibited from seeing the women without their hijabs, even if only via pictures stored in facial recognition databases. The court found that "requiring the removal of a hijab does not rationally advance the City's valid interest in readily identifying arrestees," including via facial recognition databases.)
Nat’l Coalition on Black Civic Participation v. Wohl, 2021 U.S. Dist. LEXIS 177589, 2021 WL 4254802 (S.D.N.Y. 2021) (holding that a robocall service provider that allowed users to upload messages to a website for distribution via the service provider’s automated phone calling (i.e., robocall) system was not entitled to neutral publisher immunity under Section 230 because it was not a provider or user of an interactive computer service and the service provider allegedly knew of the discriminatory and false content of the messages and actively helped the users determine where to distribute the messages).
Nuance Communs., Inc. v. IBM, 2021 U.S. Dist. LEXIS 115228 (S.D.N.Y. 2021) (noting that “This is a breach of contract case arising under New York law. But more than that, this case is a contemporary window into the brave new world of artificial intelligence (“AI”) commercial applications” and finding after a bench trial that IBM had breached its implied covenant of good faith by updating a product outside of the scope of the parties’ agreement in order to avoid making the updates available to Nuance in relation to the product within the scope of the agreement).
Calderon v. Clearview AI, Inc., 2020 U.S. Dist. LEXIS 94926 (S.D.N.Y. 2020) (stating the court's intent to consolidate cases against Clearview based on a January 2020 New York Times article alleging that defendants scraped over 3 billion facial images from the internet, scanned their biometric identifiers, and used those scans to create a searchable database, access to which defendants then allegedly sold to law enforcement, government agencies, and private entities without complying with BIPA); see also Mutnick v. Clearview AI, Inc., 2020 U.S. Dist. LEXIS 109864 (N.D. Ill. 2020).
People v. Wakefield, 175 A.D.3d 158 (N.Y. App. Div. 2019) (concluding no violation of the confrontation clause where the creator of artificial intelligence software was the declarant, not the “sophisticated and highly automated tool powered by electronics and source code.”); see also People v. H.K., 2020 NY Slip Op 20232, 130 N.Y.S.3d 890 (Crim. Ct. 2020) (following Wakefield in concluding that where software was “acting as a highly sophisticated calculator” the analyst using the software was still a declarant and the right to confrontation was preserved).
Vigil v. Take-Two Interactive Software, Inc., 235 F. Supp. 3d 499 (S.D.N.Y. 2017) (affirmed in relevant part by Santana v. Take-Two Interactive Software, Inc., 717 Fed. Appx. 12 (2d Cir. 2017)) (concluding that BIPA does not create a concrete interest in the form of a right to information, but instead operates to support the statute's data protection goal; therefore, defendant's bare violations of the notice and consent provisions of BIPA were dismissed for lack of standing).
LivePerson, Inc. v. 24/7 Customer, Inc., 83 F. Supp. 3d 501 (S.D.N.Y. 2015) (determining plaintiff adequately pled possession and misappropriation of a trade secret where plaintiff alleged its "predictive algorithms" and "proprietary behavioral analysis methods" were based on many years of expensive research and were secured by patents, copyrights, trademarks, and contractual provisions).
§ 1.2.4. Third Circuit
Zaletel v. Prisma Labs, Inc., No. 16-1307-SLR, 2017 U.S. Dist. LEXIS 30868 (D. Del. Mar. 6, 2017). The plaintiff had a “Prizmia” photo editing app. The plaintiff alleged trademark infringement based on the defendant’s “Prisma” photo transformation app. In reviewing the Third Circuit’s likelihood of confusion factors, the court considered the competition and overlap factor. The court concluded that “while plaintiff broadly describes both apps as distributing photo filtering apps, the record demonstrates that defendant’s app analyzes photos using artificial intelligence technology and then redraws the photos in a chosen artistic style, resulting in machine generated art. Given these very real differences in functionality, it stands to reason that the two products are directed to different consumers.”
§ 1.2.4.1. Additional Cases of Note
McGoveran v. Amazon Web Servs., 2021 U.S. Dist. LEXIS 189633 (D. Del. 2021) (granting motion to dismiss a claim under Illinois’ Biometric Information Privacy Act (BIPA) brought by residents of Illinois against non-Illinois-based companies Amazon Web Services (AWS) and Pindrop Security for collecting callers’ “voiceprints,” which can be used to identify the speaker, when the residents made calls from Illinois using Illinois phone numbers to a company that used AWS and Pindrop services. The court found no “allegations involving conduct that occurred ‘primarily and substantially’ in Illinois” and that “BIPA does not apply extraterritorially.”)
Thomson Reuters Enter. Ctr. GmbH v. ROSS Intelligence Inc., 2021 U.S. Dist. LEXIS 59945 (D. Del. 2021) (denying a motion to dismiss claim of copyright infringement and tortious interference with contract against ROSS Intelligence, a legal research services company, related to ROSS’s alleged obtaining, via a third party contracted with Thomson Reuters, and use of certain Westlaw materials, notably Westlaw’s Headnotes and Key Number System, when developing ROSS’s own artificial intelligence-based legal research software. While ROSS argued “that Westlaw Content is not copyrightable under the government edicts doctrine,” the court nonetheless held that Thomson Reuters at least had a plausible claim for copyright infringement based on Thomson Reuters’ efforts to register its content with the US Copyright Office and a plausible claim of tortious interference with contract due to the manner in which ROSS allegedly obtained the Westlaw content.)
In re Valsartan, Losartan, & Irbesartan Prods. Liab. Litig., 337 F.R.D. 610 (D.N.J. 2020) (requiring the defendants to use an eDiscovery document review protocol that the parties had mostly agreed on rather than letting Teva unilaterally implement its own machine-learning-based document review protocol, suggesting that eDiscovery implementation is a collaborative effort requiring transparency in how document analysis is performed regardless of which technologies are used to conduct the analysis.)
§ 1.2.5. Fourth Circuit
Thaler v. Hirshfeld, 2021 U.S. Dist. LEXIS 167393 (E.D. Va. 2021) (holding that an artificial intelligence machine cannot be considered an "inventor" under the US Patent Act because a plain reading of the relevant provisions and statutes indicates that inventors must be natural persons.)
TruGreen Ltd. P'ship v. Allegis Global Sols., Inc., 2021 U.S. Dist. LEXIS 33587 (D. Md. 2021) (granting motions to dismiss counts of negligent misrepresentation and promissory estoppel based on claims that defendant failed to perform under the contract. The failure of defendant's AI chatbot recruiting tool to perform as promised was one way in which defendant failed to meet its contractual obligations.)
Sevatec, Inc. v. Ayyar, 102 Va. Cir. 148 (Va. Cir. Ct. 2019). The court noted that matters such as data analytics, artificial intelligence, and machine learning are complex enough that expert testimony is proper and helpful and such testimony does not invade the province of the jury.
§ 1.2.6. Fifth Circuit
Aerotek, Inc. v. Boyd, 598 S.W.3d 373 (Tex. App. 2020). The court expressly acknowledged that one day courts may have to determine whether machine learning and artificial intelligence resulted in software altering itself and inserting an arbitration clause after the fact.
§ 1.2.6.1. Additional Cases of Note
Bertuccelli v. Universal City Studios LLC, No. 19-1304, 2020 U.S. Dist. LEXIS 195295 (E.D. La. 2020) (denying a motion to disqualify an expert who the court concluded was qualified to testify in a copyright infringement case after having performed an "artificial intelligence assisted facial recognition analysis" of the plaintiff's mask and the allegedly infringing mask). But see Bertuccelli v. Universal City Studios LLC, 2021 U.S. Dist. LEXIS 77784 (E.D. La. 2021) (later excluding a portion of the expert witness's testimony on the basis that plaintiff Bertuccelli failed to timely respond to defendants' request for additional information about the expert witness's initial report.)
§ 1.2.7. Sixth Circuit
Cahoo v. Fast Enters. LLC, 508 F. Supp. 3d 162 (E.D. Mich. 2020) (finding that the plaintiff class had sufficiently demonstrated injury-in-fact due to “fraud determinations based on rigid application of UIA’s logic trees coupled with inadequate notice procedures.” The application of the logic trees was too rigid in that such application resulted in significant outcomes—determination of fraud—solely on the basis of plaintiff’s failure to respond to a questionnaire. Whether or not the software using the logic trees constituted artificial intelligence was of little consequence.)
Delphi Auto. PLC v. Absmeier, 167 F. Supp. 3d 868 (E.D. Mich. 2016). Plaintiff employer alleged defendant former employee breached his contractual obligations by terminating his employment with the plaintiff and accepting a job with Samsung in the same line of business. Defendant worked for the plaintiff as director of its labs in Silicon Valley, managing engineers and programmers on work related to autonomous driving. Defendant had signed a confidentiality and noninterference agreement. The court concluded that the plaintiff had a strong likelihood of success on the merits of its breach of contract claim. Therefore, the court granted the plaintiff's motion for preliminary injunction with certain modifications (namely, limiting the applicability of the non-compete provision to the field of autonomous vehicle technology for one year because the court determined that autonomous vehicle technology is a "small and specialized field that is international in scope" and therefore a global restriction was reasonable).
§ 1.2.7.1. Additional Cases of Note
In re C.W., 2019-Ohio-5262 (Oh. Ct. App. 2019) (noting that “[p]roving that an actual person is behind something like a social-networking account becomes increasingly important in an era when Twitter bots and other artificial intelligence troll the internet pretending to be people.”).
§ 1.2.8. Seventh Circuit
King v. PeopleNet Corp., 2021 U.S. Dist. LEXIS 207694 (N.D. Ill. 2021) (remanding plaintiff's BIPA § 15(a) and (c) claims to state court, and denying defendant's motion to dismiss plaintiff's BIPA § 15(b) claim. Regarding the BIPA § 15(a) and (c) claims, the court found that plaintiff lacked Article III standing because she alleged only a general injury not particular to her, rather than a concrete and particularized injury. Regarding the BIPA § 15(b) claim, the court found that plaintiff suffered a concrete and particularized injury when defendant, a third-party technology provider, actively collected plaintiff's biometric facial scans without obtaining plaintiff's informed consent, thereby violating § 15(b).)
Kislov v. Am. Airlines, Inc., 2021 U.S. Dist. LEXIS 194911 (N.D. Ill. 2021) (applying Bryant and Fox, among other cases, to hold that plaintiffs lacked Article III standing for their BIPA § 15(a) claim because, like the plaintiff in Bryant but unlike the plaintiff in Fox, the plaintiffs only alleged that defendant American Airlines "failed to make publicly available any policy addressing its biometric retention and destruction policies" without further alleging a failure to comply with those policies (which the plaintiff in Fox did). The court remanded the case to state court.)
Jacobs v. Hanwha Techwin Am., Inc., 2021 U.S. Dist. LEXIS 139668 (N.D. Ill. 2021) (dismissing claims brought under BIPA § 15(a), (b), and (d) against a third-party technology manufacturer. Regarding the BIPA § 15(b) claim, the court found that defendant was not engaged in illegal collection of facial recognition data because it merely manufactured the camera and did not take any active steps to use the camera to collect or retain the data. Regarding the BIPA § 15(a) and (d) claims, the court found no evidence plausibly suggesting that defendant, as a mere third-party technology provider, actually possessed or disclosed plaintiff's biometric data.)
United States v. Bebris, 4 F.4th 551 (7th Cir. 2021) (affirming the district court's decision quashing Bebris's subpoena, which was premised on the claim that Bebris's Fourth Amendment rights were violated because Facebook acted as a government agent when providing results of image recognition analysis to the National Center for Missing and Exploited Children (NCMEC) in compliance with 18 U.S.C. § 2258A(a). The court found that the district court's holding that Facebook did not act as a government agent was not clearly erroneous because Facebook voluntarily provided the images to the NCMEC (a quasi-governmental organization), no government entity contacted Facebook about Bebris or directed Facebook to take any actions with respect to Bebris, and Facebook had an "independent business purpose in keeping its platform free of child pornography.")
Hazlitt v. Apple Inc., 2021 U.S. Dist. LEXIS 110556 (S.D. Ill. 2021) (applying Bryant and Fox to hold that plaintiffs had Article III standing for their BIPA § 15(a) and (b) claims, and applying Thornley to hold that plaintiffs lacked Article III standing for their BIPA § 15(c) claim because they had merely alleged a regulatory violation). Compare Hazlitt v. Apple Inc., 500 F. Supp. 3d 738 (S.D. Ill. 2020) (vacated for reconsideration after the Fox and Thornley decisions were published).
Kalb v. Gardaworld Cashlink LLC, 2021 U.S. Dist. LEXIS 81325 (C.D. Ill. 2021) (finding that plaintiff had Article III standing for his BIPA § 15(a) claim because, like the plaintiff in Fox and unlike the plaintiff in Bryant, plaintiff alleged not only that defendant had failed to publish a data retention and destruction policy, but also that defendant had no such policy at all. Thus, under Fox, plaintiff had alleged a concrete and particularized injury sufficient for Article III standing.)
Stein v. Clarifai, Inc., 2021 U.S. Dist. LEXIS 49516 (N.D. Ill. 2021) (finding no personal jurisdiction for a set of claims alleging that Clarifai violated BIPA § 15 by obtaining images of Illinois users from OKCupid user profiles to use in training facial recognition software. The court found that plaintiff had not alleged sufficient contacts with Illinois to bring a BIPA claim given that the only evidence of Clarifai’s contact with Illinois was obtaining a data set from an investor based in Chicago.)
Wilcosky v. Amazon.com, Inc., 517 F. Supp. 3d 751 (N.D. Ill. 2021) (holding that plaintiffs Wilcosky, Gunderson, and E.G. (a minor) had Article III standing for their BIPA § 15(a) and (b) claims against Amazon's collection, use, and storage of plaintiffs' voice biometric data, i.e., "voiceprints," via the speech and voice recognition capabilities of Amazon's Alexa virtual assistant. Under Bryant, Amazon's failure to obtain plaintiffs' informed consent to Amazon's collection and storage of their voiceprints was a sufficiently concrete and particularized injury-in-fact under BIPA § 15(b). Under Fox, Amazon's failure to publish and comply with a voiceprint data retention policy was a sufficiently concrete and particularized injury under BIPA § 15(a). The court also held that plaintiff Wilcosky's and Gunderson's claims were subject to arbitration given that they had agreed to arbitration when purchasing products from Amazon's website.)
Thornley v. Clearview AI, Inc., 984 F.3d 1241 (7th Cir. 2021) (affirming that plaintiffs did not have Article III standing to pursue their BIPA § 15(c) claim in federal court against Clearview AI, a facial recognition company that plaintiffs alleged included the plaintiffs’ biometric identifiers or information in Clearview AI’s database. Significantly, the plaintiffs’ only alleged injury was general, statutory aggrievement under BIPA § 15(c). Because the plaintiffs had not alleged a concrete and particularized injury, the court remanded the case back to state court. This case was the court’s first opportunity to consider BIPA § 15(c).)
Fox v. Dakkota Integrated Sys., LLC, 980 F.3d 1146 (7th Cir. 2020) (finding that plaintiff had standing under BIPA § 15(a) because defendant violated plaintiff’s legal right by failing to “comply with data retention and destruction policies—resulting in the wrongful retention of her biometric data after her employment ended, beyond the time authorized by law.” The court distinguished Bryant from this case on the basis that Bryant was focused only on public disclosure of data retention and destruction protocols while this case also relied on an evaluation of compliance with those protocols. Further, the court held that unlawful retention of biometric data was a concrete and particularized injury just like unlawful collection of biometric data.)
Marquez v. Google LLC, 2020 U.S. Dist. LEXIS 199098 (N.D. Ill. 2020) (finding that plaintiff Marquez lacked Article III standing in federal court because he did not plead any particularized harm arising under defendant’s alleged violation of BIPA; rather, he had merely alleged that Google committed a public harm under BIPA § 15(a) by not publishing data retention policies. Thus, under Bryant, the court remanded the § 15(a) claim back to Illinois state court.)
Bryant v. Compass Group USA, Inc., 958 F.3d 617 (7th Cir. 2020). Plaintiff vending machine customer filed a class action against the vending machine owner/operator, alleging violation of BIPA when it required her to provide a fingerprint scan before allowing her to purchase items. The district court found defendant's alleged violations were mere procedural violations that caused no concrete harm to plaintiff and therefore remanded the action to state court. The Court of Appeals held that a violation of § 15(a) of BIPA (requiring development of a written and public policy establishing a retention schedule and guidelines for destroying biometric identifiers and information) did not create a concrete and particularized injury, and plaintiff therefore lacked standing under Article III to pursue the claim in federal court. In contrast, the Court of Appeals held that a violation of § 15(b) of BIPA (requiring private entities to make certain disclosures and receive informed consent from consumers before obtaining biometric identifiers and information) did result in a concrete injury (plaintiff's loss of the power and ability to make informed decisions about the collection, storage, and use of her biometric information), and she therefore had standing and her claim could proceed in federal court.
Rosenbach v. Six Flags Entertainment Corporation, 129 N.E.3d 1197 (Ill. 2019). Rosenbach is a key Supreme Court of Illinois case answering whether one qualifies as an “aggrieved” person for purposes of BIPA and may seek damages and injunctive relief if she hasn’t alleged some actual injury or adverse effect beyond a violation of her rights under the statute. Plaintiff purchased a season pass for her son to defendant’s amusement park. Plaintiff’s son was asked to scan his thumb into defendant’s biometric data capture system and neither plaintiff nor her son were informed of the specific purpose and length of term for which the son’s fingerprint had been collected. Plaintiff brought suit alleging violation of BIPA. The Supreme Court of Illinois held that an individual need not allege some actual injury or adverse effect, beyond violation of his or her rights under BIPA, to qualify as an “aggrieved” person under the statute and be entitled to seek damages and injunctive relief. The court reasoned that requiring individuals to wait until they’ve sustained some compensable injury beyond violation of their statutory rights before they can seek recourse would be antithetical to BIPA’s purposes. The court found that BIPA codified individuals’ right to privacy in and control over their biometric identifiers and information. Therefore, the court found also that a violation of BIPA is not merely “technical,” but rather the “injury is real and significant.”
§ 1.2.8.1. Additional Cases of Note
Kloss v. Acuant, Inc., 2020 U.S. Dist. LEXIS 89411 (N.D. Ill. 2020) (applying Bryant v. Compass Group (summarized in this chapter) and concluding that the court lacked subject-matter jurisdiction over plaintiff's BIPA § 15(a) claims because a violation of § 15(a) is procedural and thus does not create a concrete and particularized Article III injury).
Acaley v. Vimeo, 2020 U.S. Dist. LEXIS 95208 (N.D. Ill. June 1, 2020) (concluding that parties made an agreement to arbitrate because defendant provided reasonable notice of its terms of service to users by requiring users to give consent to its terms when they first opened the app and when they signed up for a free subscription plan, but the BIPA violation claim alleged by the plaintiff was not within the scope of the parties’ agreement to arbitrate because the “Exceptions to Arbitration” clause excluded claims for invasion of privacy).
Heard v. Becton, Dickinson & Co., 2020 U.S. Dist. LEXIS 31249 (N.D. Ill. 2020) (concluding that for § 15(b) to apply, an entity must at least take an active step to “collect, capture, purchase, receive through trade, or otherwise obtain” biometric data and the plaintiff did not adequately plead that defendant took any such active step where the complaint omitted specific factual detail and merely parroted BIPA’s statutory language and the plaintiff failed to adequately plead possession because he failed to sufficiently allege that defendant “exercised any dominion or control” over his fingerprint data).
Rogers v. CSX Intermodal Terminals, Inc., 409 F. Supp. 3d 612 (N.D. Ill. 2019) (denying defendant's motion to dismiss, relying on the Illinois Supreme Court's holding in Rosenbach (summarized in this chapter) to conclude that plaintiff's right to privacy in his fingerprint data included "the right to give up his biometric identifiers or information only after receiving written notice of the purpose and duration of collection and providing informed written consent.").
Neals v. PAR Technology Corp., 419 F. Supp. 3d 1088 (N.D. Ill. 2019) (concluding that BIPA does not exempt a third-party non-employer collector of biometric information when an action arises in the employment context, rejecting defendant's argument that a third-party vendor couldn't be required to comply with BIPA because only the employer has a preexisting relationship with the employees).
Ocean Tomo, LLC v. Patentratings, LLC, 375 F. Supp. 3d 915, 957 (N.D. Ill. 2019) (determining that Ocean Tomo's training of its machine learning algorithm on PatentRatings' patent database violated a requirement in a license agreement between the parties that prohibited Ocean Tomo from using the database (which was designated as PatentRatings' confidential information) to develop a product for anyone except PatentRatings).
Liu v. Four Seasons Hotel, Ltd., 2019 IL App (1st) 182645, 138 N.E.3d 201 (Ill. App. Ct. 2019) (noting that "simply because an employer opts to use biometric data, like fingerprints, for timekeeping purposes does not transform a complaint into a wages or hours claim.").
§ 1.2.9. Eighth Circuit
There were no qualifying decisions within the Eighth Circuit.
§ 1.2.10. Ninth Circuit
Klein v. Facebook, Inc., 2021 U.S. Dist. LEXIS 175738 (N.D. Cal. 2021) (resolving disputes between the parties related to the electronically stored information (ESI) protocol to use as part of e-discovery. Notably, the court required the parties to disclose their intent to use technology-assisted review (TAR), predictive coding, or machine learning for e-discovery, to discuss how those tools would be used, and, if needed, to defend the decisions made in using the tools to produce a sufficient set of documents for review. As part of its opinion, the court cited In re Valsartan, Losartan, & Irbesartan Prods. Liab. Litig., 337 F.R.D. 610 (D.N.J. 2020), which was discussed previously in this chapter.)
Gonzalez v. Google LLC, 2 F.4th 871 (9th Cir. 2021) (evaluating multiple complaints alleging that Google and other social media companies such as Facebook and Twitter were directly and secondarily liable for acts of terrorism committed by ISIS because the companies’ platforms facilitated ISIS recruiting and messaging. The court held that the defendants retained publisher immunity under 47 U.S.C.S. § 230. Notably, the court stated that it did “not hold that ‘machine-learning algorithms can never produce content within the meaning of Section 230.’ We only reiterate that a website’s use of content-neutral algorithms, without more, does not expose it to liability for content posted by a third-party. Under our existing case law, § 230 requires this result.” The court also held that the plaintiffs for two of the three complaints failed to state an adequate claim that the companies were liable for aiding and abetting ISIS.)
United States v. Nelson, 2021 U.S. Dist. LEXIS 71421 (N.D. Cal. 2021) (denying a motion, brought on Federal Rule of Evidence 702 and Daubert grounds, to exclude an expert witness’s testimony about the function of an AI-based software program, because expert witnesses do not have to be experts in the algorithms used in certain software to reliably testify about the software’s outputs.)
In re Facebook Biometric Info. Privacy Litig., 522 F. Supp. 3d 617 (N.D. Cal. 2021) (approving a $650 million settlement for the Facebook biometric information privacy litigation, which involved BIPA § 15(a) and (b) claims against Facebook’s collection and retention of Illinois residents’ facial scans (biometric data) for facial recognition purposes.)
Lopez v. Apple, Inc., 519 F. Supp. 3d 672 (N.D. Cal. 2021) (dismissing claims that Apple violated multiple federal and state privacy laws when its artificial intelligence-based virtual assistant “Siri” was accidentally triggered to “listen” to conversations intended to be private. The court held that the plaintiffs lacked Article III standing because their claims were based entirely on a news article that claimed to reveal details about accidental triggering of Siri and the resulting recordings of private conversations. The court also dismissed each of the plaintiffs’ claims for a variety of reasons, including that the plaintiffs’ allegations were conclusory or outside the bounds of a given law.)
Williams-Sonoma, Inc. v. Amazon.com, Inc., 2020 U.S. Dist. LEXIS 163066 (N.D. Cal. 2020) (holding that Williams Sonoma had adequately alleged copyright infringement by Amazon because Amazon’s algorithm selects the “most attractive photos irrespective of rights” and then publishes those photos on its website without input from any other party.)
Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019). Facebook moved to dismiss plaintiff users’ complaint for lack of standing on the ground that the plaintiffs had not alleged any concrete injury as a result of Facebook’s facial recognition technology. The court concluded that BIPA protects concrete privacy interests and that violations of BIPA’s procedures actually harm or pose a material risk of harm to those privacy interests.
WeRide Corp. v. Kun Huang, 379 F. Supp. 3d 834 (N.D. Cal. 2019). Autonomous vehicle companies brought, inter alia, trade secret misappropriation claims against former director and officer and his competing company. The court determined the plaintiff showed it was likely to succeed on the merits of its trade secret misappropriation claims where it developed source code and algorithms for autonomous vehicles over 18 months with investments of over $45M and restricted access to its code base to on-site employees or employees who use a password-protected VPN. Plaintiff identified its trade secrets with particularity where it described the functionality of each trade secret and named numerous files in its code base because plaintiff was “not required to identify the specific source code to meet the reasonable particularity standard.”
§ 1.2.10.1. Status of the Vance et al. line of cases
(i) Complaints filed in 2020
Vance et al. v. Amazon.com, Inc. (W.D. Wash. 2:20-cv-01084); Vance et al. v. Facefirst, Inc. (C.D. Cal. 2:20-cv-06244); Vance et al. v. Google LLC (N.D. Cal. 5:20-cv-04696); and Vance et al. v. Microsoft Corporation (W.D. Wash. 2:20-cv-01082). Chicago residents Steven Vance and Tim Janecyk filed four nearly identical proposed class actions against Amazon.com Inc., Google LLC, Microsoft Corp., and a fourth company, Facefirst Inc., alleging the companies violated Illinois’ Biometric Information Privacy Act by “unlawfully collecting, obtaining, storing, using, possessing and profiting from the biometric identifiers and information” of plaintiffs without their permission. Plaintiffs allege that the tech companies used the dataset containing their geometric face scans to train computer programs to better recognize faces. These companies, in an attempt to win an “arms race,” are working to develop the ability to claim a low identification error rate. Allegedly, the four tech giants obtained plaintiffs’ face scans by purchasing a dataset created by IBM Corp. (the subject of another suit brought by Janecyk).
Janecyk v. IBM Corp. (Cook County Cir. Ct. Ill. 2020CH00833). IBM Corp. was accused in an Illinois state court lawsuit of violating the state’s biometrics law when it allegedly collected photographs to develop its facial recognition technology without obtaining consent from the subjects to use biometric information. Plaintiff Janecyk, a photographer, said that at least seven of his photos appeared in IBM’s “diversity in faces” dataset. The photos were used to generate unique face templates that recognized the subjects’ gender, age, and race, and were given to third parties without consent. IBM allegedly created, collected, and stored millions of face templates – highly detailed geometric maps of the face – from about a million photos that make up the “diversity in faces” database. Janecyk claimed that IBM obtained the photos from Flickr, a website where users upload their photos. IBM obtained photos depicting people Janecyk had photographed in the Chicago area, whom he had assured that he was taking their photos only as a hobbyist and that their images would not be used by other parties or for a commercial purpose. See also Vance v. IBM Corp. (N.D. Ill. 1:20-cv-00577; January 24, 2020) (initial class action complaint); Vance v. IBM (N.D. Ill. 1:20-cv-00577; March 12, 2020) (second amended class action complaint, included both Steven Vance and Tim Janecyk).
(ii) Judicial decisions in 2021
Vance v. Amazon.com Inc., 2021 U.S. Dist. LEXIS 72294 (W.D. Wash. 2021) (denying Amazon’s motion to dismiss the plaintiffs’ BIPA § 15(c) claim and unjust enrichment claim. Regarding the BIPA § 15(c) claim, the court found that the plaintiffs’ allegations that Amazon’s Rekognition software was used by customers such as law enforcement agencies to monitor certain individuals support inferences “that the biometric data is itself so incorporated into Amazon’s product that by marketing the product, it is commercially disseminating the biometric data” and “Amazon received some benefit from the biometric data through increased sales of its improved products.” Nonetheless, the court recognized that additional factual development may reveal that plaintiffs’ allegations are false. Regarding the unjust enrichment claim, the court applied Illinois law and held that the plaintiffs had sufficiently stated an unjust enrichment claim by alleging increased risk of privacy harm and loss of control over their biometric data, aligning with the Northern District of Illinois’ holding in Vance v. IBM, 2020 U.S. Dist. LEXIS 168610, 2020 WL 5530134.)
Vance v. Microsoft Corp., 2021 U.S. Dist. LEXIS 72286 (W.D. Wash. 2021) (applying reasoning similar to Vance v. Amazon.com Inc., 2021 U.S. Dist. LEXIS 72294 to the facts of this case, the court dismissed the plaintiffs’ BIPA § 15(c) claim and denied Microsoft’s motion to dismiss the plaintiffs’ unjust enrichment claim. Regarding the BIPA § 15(c) claim, the court found that the plaintiffs did not allege enough facts to infer that Microsoft “disseminated or shared access to biometric data through its products” or sold the data. Regarding the unjust enrichment claim, the court applied Illinois law and held that the plaintiffs had sufficiently stated an unjust enrichment claim by alleging increased risk of privacy harm and loss of control over their biometric data, aligning with the Northern District of Illinois’ holding in Vance v. IBM, 2020 U.S. Dist. LEXIS 168610, 2020 WL 5530134.)
Vance v. Amazon.com, Inc., 525 F. Supp. 3d 1301 (W.D. Wash. 2021) (holding that there was not enough factual information at that time to (1) dismiss BIPA claims on the basis of extraterritorial effect and (2) determine whether applying BIPA would violate the Dormant Commerce Clause; that BIPA applies to facial scans captured from photographs; and that BIPA § 15(b) applied to downloading biometric data from IBM and using it to improve the downloader’s products. The court requested additional briefing on whether Amazon had profited from the biometric data it possessed and which state law should govern the plaintiffs’ unjust enrichment claim.) See also Vance v. Microsoft Corp., 525 F. Supp. 3d 1287 (W.D. Wash. 2021) (same).
Vance v. Google LLC, 2021 U.S. Dist. LEXIS 27546, *1, 2021 WL 534363 (N.D. Cal. 2021) (granting Google’s motion to stay pending the resolution of Vance v. International Business Machines Corp. The court held that the balance of hardships weighed in favor of granting a stay. The case is stayed until the earlier of February 12, 2022 and the resolution of the IBM action.) Vance v. Facefirst, Inc., 2021 U.S. Dist. LEXIS 212756, *1, 2021 WL 5044010 (C.D. Cal. 2021) (similar reasoning, except the case is stayed until the earlier of February 11, 2022 and the resolution of the IBM action).
Vance v. IBM, 2020 U.S. Dist. LEXIS 168610, 2020 WL 5530134 (N.D. Ill. 2020) (among other holdings, the court denied IBM’s motion to dismiss plaintiff’s BIPA claim. The court rejected IBM’s argument that “BIPA expressly excludes photographs and biometric information derived from photographs.”)
§ 1.2.10.2. Additional Cases of Note
Hatteberg v. Capital One Bank, N.A., No. SA CV 19-1425-DOC-KES, 2019 U.S. Dist. LEXIS 231235 (C.D. Cal. Nov. 20, 2019) (relying on advances in technology, including use of artificial intelligence to “deepfake” audio as a basis for denying defendant’s argument that a plaintiff must plead to a higher standard alleging specific indicia of automatic dialing to survive a motion to dismiss in a Telephone Consumer Protection Act case).
Williams-Sonoma, Inc. v. Amazon.com, Inc., No. 18-cv-07548-EDL, 2019 U.S. Dist. LEXIS 226300, at *36 (N.D. Cal. May 2, 2019) (denying Amazon’s motion to dismiss Williams-Sonoma’s service mark infringement case, noting “it would not be plausible to presume that Amazon conducted its marketing of Williams-Sonoma’s products without some careful aforethought (whether consciously in the traditional sense or via algorithm and artificial intelligence).”).
Nevarez v. Forty Niners Football Co., LLC, No. 16-cv-07013-LHK (SVK), 2018 U.S. Dist. LEXIS 182255 (N.D. Cal. Oct. 16, 2018) (determining that protections exist, such as protective orders and the Federal Rules of Evidence, that prohibit a party from using artificial intelligence to identify non-responsive documents without identifying a “cut-off” point for some manner of reviewing the alleged non-responsive documents).
§ 1.2.11. Tenth Circuit
There were no qualifying decisions within the Tenth Circuit.
§ 1.2.12. Eleventh Circuit
There were no qualifying decisions within the Eleventh Circuit.
§ 1.2.13. DC Circuit
Elec. Privacy Info. Ctr. v. Nat’l Sec. Comm’n on Artificial Intelligence, No. 1:19-cv-02906 (TNM), 2020 U.S. Dist. LEXIS 95508 (D.D.C. June 1, 2020). The court concluded that the National Security Commission on Artificial Intelligence is subject to both the Freedom of Information Act and the Federal Advisory Committee Act.
§ 1.2.14. Court of Appeals for the Federal Circuit
McRO, Inc. v. Bandai Namco Games America, Inc., 837 F.3d 1299 (Fed. Cir. 2016). Patent litigation over a patent that claimed a method of using a computer to automate the realistic syncing of lip and facial expressions in animated characters. The plaintiff owners of the patents brought infringement actions, and defendants argued the claims were unpatentable algorithms that merely took a preexisting process and made it faster by automating it on a computer. The court held that the patent claim was not directed to ineligible subject matter where the claim involved the use of automation algorithms and was specific enough that the claimed rules would not broadly preempt all rules-based means of automating facial animation.
§ 1.3. Legislation
We organize the enacted and proposed legislation into (i) policy (e.g., executive orders); (ii) algorithmic accountability (e.g., legislation aimed at responding to public concerns regarding algorithmic bias and discrimination); (iii) facial recognition; (iv) transparency (e.g., legislation primarily directed at promoting transparency in use of AI); and (v) other (e.g., other pending bills such as federal bills on governance issues for AI).
§ 1.4. Policy
§ 1.4.1. 2021
- [Fed] National Artificial Intelligence Initiative Office (Jan. 2021) Established pursuant to the National Artificial Intelligence Initiative Act of 2020 to lead AI education, research, and development efforts on behalf of the Executive Branch.
- [Fed] Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning (Mar. 2021). Initiative by federal agencies to research use of AI and machine learning by financial institutions.
- [Fed] Aiming for truth, fairness, and equity in your company’s use of AI (Apr. 2021). Guidance from the FTC about ensuring truth, fairness, and equity when using AI.
- [Fed] Launch of the National Artificial Intelligence Research Resource Task Force (Jun. 2021). Established pursuant to the National Artificial Intelligence Initiative Act of 2020 to develop a roadmap for US AI research.
- [Fed] Call for Nominations to Serve on the National Artificial Intelligence Advisory Committee and Call for Nominations To Serve on the Subcommittee on Artificial Intelligence and Law Enforcement (Sep. 2021). Establishing the National Artificial Intelligence Advisory Committee (NAIAC) pursuant to the National Artificial Intelligence Initiative Act of 2020, to be comprised of members from academic institutions, private companies, nonprofit organizations, and Federal laboratories to help guide US AI education, research, and development efforts.
- [Fed] Notice of Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies (Oct. 2021). Inviting public comment on use of biometric technologies for purposes related to identification and inference of individual attributes.
- [Fed] U.S. Equal Employment Opportunity Commission Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 2021). Researching use and impact of AI in hiring and other employment decisions.
§ 1.4.2. 2020
- [Fed] Maintaining American Leadership in AI (Feb. 2019). Executive Order 13859 launching the “American AI Initiative,” intended to help coordinate federal resources to support development of AI in the US.
- [Fed] H.R. Res. 153 (Feb. 2019). Legislation to support the development of guidelines for ethical development of artificial intelligence.
- [Fed / NIST] US Leadership in AI (Aug. 2019). NIST to establish standards to support reliable, robust, and trustworthy AI.
- [Cal] Res. on 23 Asilomar AI Principles (Sep. 2018). Adopted state resolution ACR 215 expressing legislative support for the Asilomar AI Principles as “guiding values” for AI development.
§ 1.5. Algorithmic Accountability
§ 1.5.1. 2021
- [Fed] Algorithmic Justice and Online Platform Transparency Act. Bills H.R.3611, S.1896 (In committee May 2021). Seeks to prevent discrimination by algorithmic processes and increase algorithmic transparency.
- [CA] Automated Decision Systems Accountability Act. Bill A.B. 13 (In committee Aug. 2021). Restricts state agencies’ use of automated decision-making systems to avoid algorithmic discrimination.
- [CO] Restrict Insurers’ Use Of External Consumer Data. Bill S.B. 169 (Enacted Jul. 2021). Prohibits algorithmic discrimination in insurance practices.
- [IL] Video Interview Demographic. Bill H.B. 53 (Effective Jan. 2022). Seeks to avoid algorithmic discrimination in first-pass hiring interviews conducted using AI.
- [MA] An Act Establishing a Commission on Automated Decision-Making By Government in the Commonwealth. Bill H.119 (similar to S.60) (In committee Mar. 2021). Establishes a committee to help prevent algorithmic discrimination and improve algorithmic transparency in state agencies’ use of automated decision systems.
- [MA] An Act Related to Data Privacy. Bill H.136 (In committee Mar. 2021). Proposes a state data privacy law that includes creating a Massachusetts Data Accountability and Transparency Agency and ensuring algorithmic accountability.
- [MA] An Act Establishing the Massachusetts Information Privacy Act. Bill H.142 (In committee Mar. 2021). Proposes a state data privacy law that includes restrictions on covered entities’ and data processors’ processing of personal information and use of automated decision systems; and provides data rights for subjects.
- [MI] Michigan employment security act. Bill H.B. 4439 (In committee Mar. 2021). Requires auditing the source code, algorithms, and logic formulas of the unemployment agency computer systems.
- [NJ] Prohibits certain discrimination by automated decision systems. Bill S.B. 1943 (Introduced Feb. 2020). Prohibits algorithmic discrimination with respect to the provision of financial services, insurance, and healthcare services.
- [VT] An act relating to State development, use, and procurement of automated decision systems. Bill H.263 (In committee Feb. 2021). Requires Secretary of Digital Services to ensure automated decision systems used by the State do not lead to algorithmic discrimination.
- [VT] An act relating to the creation of the Artificial Intelligence Commission. Bill H.410 (In committee Mar. 2021). Establishes commission to promote ethical AI use and development.
- [VT] An act relating to establishing an advisory group to address bias in State-used software. Bill H.429 (In committee Mar. 2021). Seeks to prevent bias in software.
§ 1.5.2. 2020
- [Fed] Algorithmic Accountability Act (Apr. 2019). Bills S. 1108, H.R. 2231 intended to require “companies to regularly evaluate their tools for accuracy, fairness, bias, and discrimination.”
- [NJ] New Jersey Algorithmic Accountability Act (May 2019). Require that certain businesses conduct automated decision system and data protection impact assessments of their automated decision system and information systems.
- [CA] AI Reporting (Feb. 2019). Require California business entities with over 50 employees and associated contractors and vendors to each maintain a written record of the data used relating to any use of artificial intelligence for the delivery of the product or service to the public entity.
- [WA] Guidelines for Gov’t Procurement and Use of Auto Decision Systems (Jan 2019). Establish guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.
- [NY] NYC (Jan. 2018). “A Local Law in relation to automated decision systems used by agencies” (Int. No. 1696-2017) required the creation of a task force for providing recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address situations where people are harmed by such agency automated decision systems.
§ 1.6. Facial Recognition Technology
§ 1.6.1. 2021
- [Fed] Facial Recognition and Biometric Technology Moratorium Act of 2021. Bill S.2052 (In committee Jun. 2021). Requires Federal agencies or officials to receive Congressional approval (i.e., legislation) to use biometric surveillance systems or information derived therefrom.
§ 1.6.2. 2020
- [Fed] Commercial Facial Recognition Privacy Act (Mar. 2019). Bill S. 847 intended to provide people information and control over how their data is shared with companies using facial recognition technology.
- [Fed] FACE Protection Act (July 2019). Restricting the Federal government from using facial recognition technology without a court order.
- [Fed] No Biometric Barriers to Housing Act (July 2019). Prohibiting owners of certain federally assisted rental units from using facial recognition, physical biometric recognition, or remote biometric recognition technology in any units, buildings, or grounds of such project.
- [CA] Body Camera Account Act (Feb 2019). Bill A.B. 1215 was introduced to prohibit law enforcement agencies and officials from using any “biometric surveillance system,” including facial recognition technology, in connection with an officer camera or data collected by the camera.
- [MA] An Act Establishing a Moratorium on Face Recognition (Jan 2019). Senate Bill 1385 was introduced to establish a moratorium on the use of face recognition systems by state and local law enforcement.
- [NY] Prohibits Use of Facial Recog. Sys. (May 2019). Senate Bill 5687 was introduced to propose a temporary stop to the use of facial recognition technology in public schools.
- [SF and Oakland, CA] City ordinances were passed to ban the use of facial recognition software by the police and other government agencies (June, July 2019).
- [Somerville, MA] City ordinance was passed to ban the use of facial recognition technology by government agencies (July 2019).
§ 1.7. Transparency
§ 1.7.1. 2021
- [Fed] Mind Your Own Business Act of 2021. Bill S.1444 (In committee Apr. 2021). Seeks to prevent algorithmic bias in high-risk information systems and automated-decision systems, and enables consumers to opt out of tracking by covered entities.
- [Fed] Filter Bubble Transparency Act. Bill S.2024 (In committee Jun. 2021). Requires online platform operators that use algorithms to customize what users see to allow users to opt out of the use of those algorithms.
- [Fed] GOOD AI Act of 2021. Bill S.3035 (In committee Oct. 2021). Establishes AI Hygiene Working Group to ensure that Federal acquisition contracts for AI ensure protection of privacy, civil rights, civil liberties, and data security.
- [WA] Establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability. Bill S.B. 5116 (In committee Feb. 2021).
§ 1.7.2. 2020
- [CA] BOT Act, S.B. 1001 (Effective July 2019). Enacted bill intended to “shed light on bots by requiring them to identify themselves as automated accounts.”
- [CA] Anti-Eavesdropping Act (Assemb. May 2019). Prohibiting a person or entity from providing the operation of a voice recognition feature within the state without prominently informing the user during the initial setup or installation of a smart speaker device.
- [IL] AI Video Interview Act (Effective Jan. 2020). Provides notice and explainability requirements for recorded video interviews.
§ 1.8. Other
§ 1.8.1. 2021
- [Fed] Democracy Technology Partnership Act. Bill S.604 (In committee Mar. 2021). Seeks to develop international partnerships to develop regimes for technology governance, including for AI and machine learning.
- [Fed] Information Transparency & Personal Data Control Act. Bill H.R. 1816 (In committee Mar. 2021). Requires Federal Trade Commission to regulate the protection of sensitive personal information.
- [Fed] AI Scholarship-for-Service Act. Bill S.1257 (In committee Apr. 2021). Creates Federal scholarship to develop AI professionals in government.
- [Fed] Consumer Data Privacy and Security Act of 2021. S. 1494 (In committee Apr. 2021). Federal data protection law.
- [Fed] Artificial Intelligence Capabilities and Transparency (AICT) Act of 2021. Bill S.1705 (Introduced May 2021). Seeks to promote research and development of AI in the US for economic and national security purposes.
- [Fed] Artificial Intelligence for the Military Act of 2021. Bill S.1776 (Introduced May 2021). Requires AI training for certain military personnel.
- [Fed] Next Generation Computing Research and Development Act of 2021. Bill H.R.3284 (In committee May 2021). Promotes Department of Energy research in advanced scientific computing.
- [Fed] SELF DRIVE Act. Bill H.R.3711 (In committee Jun. 2021). Regulates highly automated vehicles to ensure their safe operation.
- [Fed] Consumer Safety Technology Act. Bill H.R.3723 (Passed House Jun. 2021). Requires various Federal agencies to research impact of technologies such as AI and blockchain on consumer product safety and protection.
- [Fed] United States Innovation and Competition Act of 2021. Bill S.1260, H.R.2731 (Passed Senate Jun. 2021). Creates a Directorate for Technology and Innovation in the National Science Foundation tasked with promoting research, innovation, and commercialization of technologies such as AI.
- [Fed] Fellowships and Traineeships for Early-Career AI Researchers Act. Bill H.R.3844 (In committee Jun. 2021). Provides financial support, via university traineeships, to graduate students studying AI.
- [Fed] Data Protection Act of 2021. Bill S.2134 (In committee Jun. 2021). Creates a Federal Data Protection Agency that, among other things, will research and analyze the use of automated decision systems in collecting and processing personal data.
- [Fed] AI for Agency Impact Act. Bill H.R.4468 (In committee Jul. 2021). Requires Executive agencies to implement trustworthy-AI strategies and implementation plans.
- [Fed] AI in Counterterrorism Oversight Enhancement Act. Bill H.R.4469 (In committee Jul. 2021). Enables the Privacy and Civil Liberties Oversight Board to oversee use of AI for counterterrorism.
- [Fed] SAFE DATA Act. Bill S.2499 (In committee Jul. 2021). Federal data protection law.
- [Fed] Intelligence Authorization Act for Fiscal Year 2022. Bill S.2160. (introduced Aug. 2021). Director of National Intelligence to develop modern digital ecosystem plan that includes use of AI for intelligence purposes.
- [Fed] Digital Defense Leadership Act. Bill H.R.4985 (In committee Aug. 2021). Implements National Security Commission on Artificial Intelligence recommendations, including for AI research, development, and use.
- [Fed] Deepfake Task Force Act. Bill S.2559 (In committee Aug. 2021). Creates task force to research and propose methods to reduce digital content forgeries.
- [Fed] Department of Defense Artificial Intelligence Metrics Act of 2021. Bill S.2904 (In committee Sep. 2021). Creates performance objectives and metrics for AI implementation by the Department of Defense.
- [Fed] Healthy Technology Act of 2021. Bill H.R.5467 (In committee Sep. 2021). Allows AI and machine learning technologies to prescribe drugs if approved by the relevant State and the Food and Drug Administration.
- [Fed] United States–Israel Artificial Intelligence Center Act. Bill H.R.5148, S.2120 (In committee Oct. 2021). Promotes AI research and development partnership between US and Israel.
- [Fed] AI Training Act. Bill S.2551 (Introduced Oct. 2021). Requires AI training for certain Executive Branch employees.
- [Fed] Financial Transparency Act of 2021. Bill H.R.2989 (In committee Oct. 2021). Seeks to improve and standardize regulation-imposed financial data collection to, among other things, enable more effective use of AI.
- [Fed] Protecting Sensitive Personal Data Act of 2021. Bill (Introduced Oct. 2021). Empower Committee on Foreign Investment in the United States to regulate the security of sensitive personal information.
- [Fed] Infrastructure Investment and Jobs Act. Bill H.R.3684 (Enacted Nov. 2021). Provides funding for assessing the use of AI and machine learning in mining research and development activities, digital climate solutions, and in manufacturing.
- [AL] Technology, Alabama Council on Advanced Technology, estab., to advise Governor and Legislature, members, duties. Bill S.B. 78 (Passed Apr. 2021). Creates council that advises Alabama’s Governor and Legislature about advanced technology and AI.
- [HI] Relating To Taxation. Bill H.B. 454 (Introduced Jan. 2021). Provide income tax credit for investment in cybersecurity and AI development businesses.
- [IL] Future of Work Task Force. Bill H.B. 645 (Effective Aug. 2021). Creates a task force to study how to help Illinois workers adapt to new technologies such as AI.
- [MS] Computer science curriculum; require State Department of Education to implement in K-12 public schools. Bill HB 633 (Effective Jul. 2021). Mandates K-12 computer science education in Mississippi, including about AI.
- [NC] An Act to Establish the Study Committee on Automation and the Workforce. Bill S.B. 600 (Introduced Apr. 2021). Creates committee that researches how to help workers adapt to new technologies such as AI.
- [NJ] Requires Commissioner of Labor and Workforce Development to conduct study and issue report on impact of artificial intelligence on growth of State’s economy. Bill A.B. 195 (Received in Senate after passing Assembly Mar. 2021).
- [NJ] Establishes Edison Innovation Science and Technology Fund in EDA to provide grants for certain science and technology-based university research; appropriates $5 million. Bill A.B. 2172 (In committee Jan. 2020). Includes grants for AI research.
- [NJ] 21st Century Integrated Digital Experience Act. Bill A.B. 2614, S.B. 2723 (In committee(?) Feb. 2021). Requires State Executive Branch to leverage new technologies such as AI and machine learning to modernize government service delivery.
- [NY] Establishes the commission on the future of work. Bill A.B. 2414 (In committee Jan. 2021). Creates commission that researches how to help workers adapt to new technologies such as AI
- [VA] Consumer Data Protection Act. Bills S.B. 1392, H.B. 2307 (Enacted Mar. 2021). Virginia data protection law.
§ 1.8.2. 2020
- [Fed] FUTURE of AI Act (Dec. 2017). Requiring the Secretary of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence.
- [Fed] AI JOBS Act (Jan. 2019). Promoting a 21st century artificial intelligence workforce.
- [Fed] GrAITR Act (Apr. 2019). Legislation directed to research on cybersecurity and algorithm accountability, explainability, and trustworthiness.
- [Fed] AI in Government Act (May 2019). Instructing the General Services Administration’s AI Center of Excellence to advise and promote the efforts of the federal government in developing innovative uses of AI to benefit the public, and improve cohesion and competency in the use of AI.
- [Fed] AI Initiative Act (May 2019). Requiring federal government activities related to AI, including implementing a National Artificial Intelligence Research and Development Initiative.
In addition to the Federal cases we note, a number of states are dealing with similar evidentiary and explainability issues. See, e.g., Green v. Geico Gen. Ins. Co., No. N17C-03-242 EMD CCLD, 2021 Del. Super. LEXIS 308 (Super. Ct. Mar. 24, 2021) (granting declaratory relief to plaintiff due to GEICO’s use of automated, rules-based insurance claims analysis tools that were based on “antiquated” rules; the court stated, “until…a system [with rules updated based on relevant case law] is in place, human judgment should not be eliminated from the process.” Note that the court was careful not to “eliminat[e] the ability for insurers to use automated systems to make the claims process more efficient” but wanted those systems to be based on up-to-date rules.); R.L.G. v. State, 322 So. 3d 721 (Fla. Dist. Ct. App. 2021) (holding that a statement made by a machine without any facts provided about how that statement was made (automatically, with human input or interpretation, etc.) was inadmissible hearsay on appeal because the facts on the record did not support a determination otherwise.); People v. Reyes, 2020 NY Slip Op 20258, 69 Misc. 3d 963, 133 N.Y.S.3d 433 (Sup. Ct.) (denying defendant’s motion to preclude testimony that was based on the use of facial recognition software to identify defendant. The court observed that the use of and reliability of facial recognition to date suggested that facial recognition is among “a growing number of scientific and near-scientific techniques that may be used as tools for identifying or eliminating suspects, but that do not produce results admissible at a trial.”); Matter of Phila. Ins. Indem. Co. v. Kendall, 197 A.D.3d 75, 2021 N.Y. App. Div. LEXIS 4393, 2021 NY Slip Op 04284, 151 N.Y.S.3d 392 (observing, “Just as a party may attack a hardcopy settlement offer or acceptance as a forgery, a party that claims an email was the product of a hacker (or of artificial intelligence, or of some other source) may rebut its authenticity.”).
Demonstrating the wealth of caselaw that has developed related to Bryant and Fox, the court cited as supporting cases “Roberson v. Maestro Consulting Servs. LLC, 507 F. Supp. 3d 998, 1008 (S.D. Ill. 2020) (denying motion to remand Section 15(a) claim where plaintiff alleged that defendant failed to comply with its retention schedule and destruction guidelines); Marsh v. CSL Plasma Inc., 503 F. Supp. 3d 677, 682-83 (N.D. Ill. 2020) (same); Neals v. ParTech, Inc., No. 19-cv-05660, 2021 U.S. Dist. LEXIS 24542, 2021 WL 463100, at *5 (N.D. Ill. Feb. 9, 2021) (same); Heard v. Becton, Dickinson & Co., 2021 U.S. Dist. LEXIS 44160, 2021 WL 872963, at *3-4 (N.D. Ill. Mar. 9, 2021) (same); Wilcosky v. Amazon.com, Inc., 517 F. Supp. 3d 751, 761-62 (N.D. Ill. 2021) (same); Hazlitt v. Apple Inc., 2021 U.S. Dist. LEXIS 110556, 2021 WL 2414669, at *4-5 (S.D. Ill. June 14, 2021) (same); Fernandez v. Kerry, Inc., No. 17-cv-08971, 2020 U.S. Dist. LEXIS 223075, 2020 WL 7027587, at *7-8 (N.D. Ill. Nov. 30, 2020) (concluding that plaintiffs had standing to bring Section 15(a) claims against their employer not only because they alleged unlawful retention of their biometric data, but also because union members have a concrete interest in collective bargaining over biometric data usage) (citing Miller v. Southwest Airlines Co., 926 F.3d 898, 902 (7th Cir. 2019)).”
As noted in our introduction, we made certain judgment calls with respect to which cases to include. For example, we omitted certain BIPA cases that added no information beyond that presented in this Chapter. See, e.g., Darty v. Columbia Rehabilitation and Nursing Center, LLC, 2020 U.S. Dist. LEXIS 110574 (N.D. Ill. 2020); Figueroa v. Kronos Incorporated, 2020 U.S. Dist. LEXIS 64131 (N.D. Ill. 2020); Namuwonge v. Kronos, Inc., 418 F. Supp. 3d 279 (N.D. Ill. 2019); Treadwell v. Power Solutions International Inc., 427 F. Supp. 3d 984 (N.D. Ill. 2019); Kiefer v. Bob Evans Farm, LLC, 313 F. Supp. 3d 966 (C.D. Ill. 2018); Rivera v. Google Inc., 238 F. Supp. 3d 1088 (N.D. Ill. 2017); In re Facebook Biometric Information Privacy Litigation, 185 F. Supp. 3d 1155 (N.D. Cal. 2016); Norberg v. Shutterfly, Inc., 152 F. Supp. 3d 1103 (N.D. Ill. 2015).
As noted in our introduction, we made certain judgment calls with respect to which cases to include. For example, we omitted several patent cases directed to subject-matter eligibility that we felt did not provide substantive additional insight beyond the cases presented in this Chapter. See, e.g., Kaavo Inc. v. Amazon.com Inc., 323 F. Supp. 3d 630 (D. Del. 2018); Hyper Search, LLC v. Facebook, Inc., No. 17-1387, 2018 U.S. Dist. LEXIS 212336 (D. Del. Dec. 18, 2018); Purepredictive, Inc. v. H2O.AI, Inc., No. 17-cv-03049, 2017 U.S. Dist. LEXIS 139056 (N.D. Cal. Aug. 29, 2017); Power Analytics Corp. v. Operation Tech., Inc., No. SA CV16-01955 JAK, 2017 U.S. Dist. LEXIS 216875 (C.D. Cal. July 13, 2017); Nice Sys. v. Clickfox, Inc., 207 F. Supp. 3d 393 (D. Del. 2016); eResearch Tech., Inc. v. CRF, Inc., 186 F. Supp. 3d 463 (W.D. Pa. 2016); Neochloris, Inc. v. Emerson Process Mgmt. LLP, 140 F. Supp. 3d 763 (N.D. Ill. 2015).