On June 6, 2024, the U.S. Department of the Treasury issued a request for information (“RFI”) seeking public input and information on the use of artificial intelligence (“AI”) in the financial services sector. The RFI asks that written comments be submitted on or before August 12, 2024.
Through this RFI, the Treasury Department seeks to increase its understanding of how AI is being used within the financial services sector and the opportunities and risks presented by the development and application of AI. The Treasury Department is relying on the definition of AI used in President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and the National Artificial Intelligence Initiative: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” Notably, however, the first question posed in the RFI asks whether this definition is appropriate for financial institutions.
The focus of the RFI is on the following uses of AI by financial institutions:
- Provision of products and services (e.g., how is AI being used to offer financial products or services? How is AI being used for financial forecasting products and pattern recognition tools?)
- Risk management (e.g., how is AI being used for risk management and asset-liability management?)
- Capital markets (e.g., how is AI being used to identify investment opportunities and provide financial advisory services?)
- Internal operations (e.g., how is AI being used to manage payroll, HR functions, training, and software development?)
- Customer service (e.g., how is AI being used to help handle complaints or manage a website?)
- Regulatory compliance (e.g., how is AI being used to assist with regulatory reporting or disclosure requirements?)
- Marketing (e.g., how is AI being used to market to consumers?)
As part of the RFI, the Treasury Department has posed a series of questions geared toward a broad set of stakeholders in the financial services ecosystem (including consumer and small business advocates, nonprofits, academics, and others) to understand the benefits and risks of AI. These questions are use-case focused and seek to ferret out how AI could benefit stakeholders or pose risks to them. In addition, the Treasury Department is seeking specific information on how financial institutions are protecting against “dark patterns” and predatory targeting, which could lead to bias and fair lending issues; mimicry of biometric data (e.g., a consumer’s voice), which could affect fraud detection and prevention tools such as multi-factor authentication; and unfair or deceptive acts or practices. The Treasury Department also asks about the privacy impact of AI, noting that AI can enhance a firm’s ability to infer attributes and behavior about an individual in ways that could “undermine privacy (including the privacy of others) and dilute the power of existing ‘opt-out’ privacy protections.”
The Treasury Department’s RFI is one of several federal requests for information on AI. Other agencies are seeking or have sought similar input: the Office of the Comptroller of the Currency, the Federal Reserve Board, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration jointly issued an interagency RFI in 2021 on financial institutions’ use of AI. This most recent RFI is a reminder that AI remains a hot topic for regulators, and the heat does not appear to be dissipating any time soon.