Authors’ note: For this report, the authors use the definition of artificial intelligence (AI) from the 2020 National Defense Authorization Act, which established the National Artificial Intelligence Initiative.1 This definition was also used by the 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”2 Similarly, this report makes repeated reference to “Appendix I: Purposes for Which AI is Presumed to be Safety-Impacting and Rights-Impacting” of the 2024 OMB M-24-10 memo, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”3
Read the fact sheet
The accompanying fact sheet lists all of the recommendations detailed in this chapter of the report.
Artificial intelligence (AI) is poised to affect every aspect of the U.S. economy and play a significant role in the U.S. financial system, leading financial regulators to take various steps to address the impact of AI on their areas of responsibility. The economic risks of AI to the U.S. financial system include everything from the potential for consumer and institutional fraud to algorithmic discrimination and AI-enabled cybersecurity risks. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators. While Governing for Impact (GFI) and the Center for American Progress have extensively researched these existing authorities in consultation with numerous subject matter experts, the goal is to provoke a generative discussion about the following proposals, rather than outline a definitive executive action agenda. Each potential recommendation will require further vetting before agencies act. Even if additional AI legislation is needed, this menu of potential recommendations to address AI demonstrates that there are more options for agencies to explore beyond their current work and that agencies cannot and should not wait to utilize existing authorities to address AI.
The October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” assigned executive branch financial regulators AI-related tasks4 and specifically encouraged independent regulatory agencies, which cannot be directly tasked by the president, to address the risks of AI:
Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.5
In March 2024, the U.S. Treasury Department issued a report on AI specific cybersecurity risks in financial services that included the following summary of the AI regulatory landscape:
Financial regulatory agencies generally do not issue regulations or guidance on specific technologies, but instead address the importance of effective risk management, governance, and controls regarding the use of technology, including AI, and the business activities that those technologies support. Regulators have emphasized that it is important that financial institutions and critical infrastructure organizations manage the use of AI in a safe, sound, and fair manner, in accordance with applicable laws and regulations, including those related to consumer and investor protection. Controls and oversight over the use of AI should be commensurate with the risk of the business processes supported by AI. Regulators have noted that it is important for financial institutions to identify, measure, monitor, and manage risks arising from the use of AI, as they would for the use of any other technology. Advances in technology do not render existing risk management and compliance requirements or expectations inapplicable. Various existing laws, regulations, and supervisory guidance are applicable to financial institutions’ use of AI. Although existing laws, regulations, and supervisory guidance may not expressly address AI, the principles contained therein can help promote safe, sound, and fair implementation of AI.6
As noted in the Treasury Department’s report, existing laws and regulations clearly apply to the use of AI in the financial services sector. This report for financial regulators highlights 11 relevant existing authorities and the numerous agencies that oversee them in detail below, along with recommendations on how to potentially utilize those authorities to address AI. It should be noted that there is some repetition and overlap in the recommendations for financial services regulators due to the multiple parallel existing statutory authorities. Additionally, these recommendations align with or draw from the AI best practices recommended by the Biden administration’s AI Bill of Rights, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the 2023 AI executive order, and the Office of Management and Budget (OMB) M-24-10 memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” issued in March 2024.7
In this report, the term “U.S. financial regulatory agencies” includes the federal banking and credit union agencies, financial markets regulators, and executive branch agencies. Specifically, in this report, these agencies include the Treasury Department, the Office of the Comptroller of the Currency, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Commodity Futures Trading Commission, the National Credit Union Administration, the Securities and Exchange Commission (SEC), the Consumer Financial Protection Bureau, the Financial Stability Oversight Council, which is chaired by the secretary of the treasury, and, to some extent, the Financial Industry Regulatory Authority, the self-regulatory organization for securities brokers, which is overseen by the SEC. It should be noted that other federal agencies not listed in this report also have financial regulation responsibilities and authorities that could potentially be used to address AI.
AI risks and opportunities
AI may affect financial services consumers and the U.S. and international banking and financial systems in various known and unknown ways.8 The risks and opportunities of AI for financial services start with similar broad concerns as other areas discussed in this report, including the need for safe and secure systems with clear safeguards to address and mitigate risk, the potential for algorithmic discrimination that perpetuates or exacerbates existing historical inequalities, the potential for fraud and harm to consumers, and the possibility of affecting essential systems.
Several areas of concern are detailed below:
- Prevention of access to financial services: AI-powered systems may prevent consumers from accessing critical financial services9 by illegally discriminating against customers, generating incorrect information for their credit reports, or using faulty AI systems to execute transactions. The OMB M-24-10 AI guidance lists AI used by federal agencies for “[a]llocating loans; determining financial-system access; credit scoring; determining who is subject to a financial audit; making insurance determinations and risk assessments; determining interest rates; or determining financial penalties” as potentially rights-impacting.10
- Algorithmic discrimination that may exacerbate historical inequalities: Massive amounts of data are required to train and run AI-powered systems.11 In the financial services world, such historical data may dangerously reflect long-embedded systemic inequalities, such as redlining, unfair credit denials, and other discriminatory practices. AI systems trained on these historic data run the substantial risk of incorporating these inequities if not addressed proactively.
- AI-enabled fraud: AI is already embraced as a tool to enable advanced fraud against consumers and financial institutions. The use of AI voice cloning12 and AI-generated fake accounts13 are just the tip of the iceberg when it comes to future AI-enabled financial fraud.
- Failure to comply with anti-money laundering requirements: The Bank Secrecy Act and Treasury Department regulations require institutions to submit suspicious activity reports (SARs) whenever customers engage in activity that may be money laundering.14 Black-box AI systems may fail to report otherwise suspicious activities, leaving banks in violation of the Bank Secrecy Act.
- Threats to safe, secure, and stable financial systems: Integrating AI systems into financial services may pose a risk to the operation of these critical systems, as their sophistication grows along with the lack of transparency into proprietary black-box AI systems and algorithms that provide essential services and upkeep. The 2008 financial crisis proved how important the stability of the broader financial system is for a growing economy; yet AI and the commercial cloud computing that provides advanced AI pose risks that could negatively affect financial stability. Indeed, the Financial Stability Oversight Council has identified AI as a “vulnerability” within the U.S. financial system.15 For example, a bank’s use of the same or similar data for AI-based risk management models, AI-enabled network effects, or unregulated AI service providers may pose systemic risks.16
Although certainly not exhaustive, these known risks affect at least three main categories of stakeholders in the financial sector:
- Customers: Banks and other financial services providers may illegally discriminate against customers when making lending decisions with unknowingly biased AI systems.17 Banks’ and lenders’ retail and institutional customers are also at risk of faulty AI systems that fail to accurately respond to their inquiries, accurately assess their credit worthiness, or execute transactions.18 Similarly, brokers’ customers face losses from transactions that AI systems fail to execute.19 Financial institutions also serve as a wealth of information about customers, which is necessary for AI systems to operate, and may be liable for customer losses stemming from AI-enabled fraud.20
- Banks: The core purpose of bank regulation is to ensure banks’ safety and soundness,21 and AI could put this at risk. Banks face potential operational failures from AI-enabled cyberattacks that can evade their information technology (IT) defenses,22 runs from depositors’ use of AI for treasury management,23 and losses from banks’ own opaque and faulty AI-based risk management systems.24
- Securities brokers and futures commission merchants, securities and derivatives exchanges, and other market intermediaries: In addition to banks, the nonbank financial institutions that comprise the capital markets are also poised to use AI systems that may pose risks to firms’ financial health and that of markets overall. Brokers may be liable for trades that AI systems failed to execute or misexecuted, and investment advisers and brokers may be liable for AI systems that fail to offer conflict-free advice or advice in the clients’ best interests.25 Exchanges may face operational failures from their AI-based matching software or experience flash crashes stemming from erroneous high-frequency trading.26 Additionally, clearinghouses relying on AI systems that fail may be unable to novate trades, putting the markets at risk of requiring bailouts.27
Current state
The 2022 White House AI Bill of Rights, which was the basis of much of the 2023 executive order on AI, noted that AI or automated systems could “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services” and that critical resources or services included financial services.28
The 2023 AI executive order outlines eight policies and principles for AI for the Biden administration’s approach to AI, including that AI must be “safe and secure,” “[promote] responsible innovation, competition, and collaboration,” and “[advance] equity and civil rights,” as AI “systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.” The guidance specifically highlights the need to “enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI,” emphasizing the need for protections in “financial services.”29
The executive order also required the secretary of the treasury to “issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks” and provides financial services and housing directives for the CFPB.30 Finally, the order highlights the direction it hopes independent regulatory agencies not under the direct authority of the president will take on AI, noting:
Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.31
The OMB M-24-10 AI guidance notes that AI used by federal agencies should be automatically presumed rights-impacting if used for “[a]llocating loans; determining financial-system access; credit scoring; determining who is subject to a financial audit; making insurance determinations and risk assessments; determining interest rates; or determining financial penalties (e.g., garnishing wages or withholding tax returns).”32
The financial regulatory agencies have been working on addressing AI in a variety of ways.
The Consumer Financial Protection Bureau (CFPB) has been one of the most proactive federal agencies on the issue.33 Director Rohit Chopra has made statements warning about the myriad risks of AI, including that its need for large datasets and computing power could result in a natural oligopoly: “There could be a handful of firms, and just to be honest, a handful of individuals who ultimately have enormous control over decisions made throughout the world.”34 Chopra has also expressed concern that AI “magnifies disruptions in a market that turn tremors into earthquakes”35 and that AI could be used for illegal and discriminatory lending decisions.36
Accordingly, the CFPB has provided market participants with various guidance about how AI may and may not be used. The CFPB explained that federal law does “not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions.”37 It has also warned that creditors may not “rely on overly broad or vague reasons to the extent that they obscure the specific and accurate reasons relied upon.”38 The CFPB has criticized credit reporting agencies’ use of AI screening tools.39 In conjunction with the U.S. Department of Justice, Equal Employment Opportunity Commission, and Federal Trade Commission, the CFPB warned that AI systems “have the potential to produce outcomes that result in unlawful discrimination” and that “[e]xisting legal authorities apply to the use of [AI] just as they apply to other practices.”40 The CFPB has also penalized firms for relying on faulty automated compliance systems. The bureau ordered Wells Fargo to pay $3.7 billion for compliance failures that resulted in wrongful home foreclosures, car repossessions, and lost benefit payments41 and ordered Hello Digit to pay a $2.7 million fine for causing users to be charged overdraft fees.42 It is reportedly increasing examinations of AI systems.43
At the Treasury Department, Graham Steele, while serving as assistant secretary for financial institutions in October 2023, gave a speech detailing how AI can affect banking, consumer finance, and insurance markets and emphasizing the importance of AI providers engaging in responsible innovation.44 In addition, the Treasury Department appointed a chief artificial intelligence officer as required by the 2023 executive order on AI.45 The Financial Stability Oversight Council (FSOC), which is chaired by the treasury secretary, has identified AI as a potential risk to the financial system and has issued recommendations to the other regulators to monitor AI’s development in their respective jurisdictions.46 In a February 2024 testimony before the U.S. House Committee on Financial Services, Treasury Secretary Janet Yellen noted that the FSOC was “closely monitoring the increasing use of artificial intelligence in financial services, which brings potential benefits such as reducing costs and improving efficiencies and potential risks like cyber and model risk.”47 And in March 2024, the Department of the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection issued a report in response to requirements from the 2023 executive order on AI, entitled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.”48 While focusing on the AI-specific risk of cybersecurity, the “Next Steps: Challenges & Opportunities” chapter contains a small section that notes “Regulation of AI in Financial Services Remains an Open Question” according to those interviewed for the report.49
The federal banking agencies have also begun tackling AI, albeit at a slower pace.50 The Office of the Comptroller of the Currency (OCC) formed an Office of Financial Technology.51 The Federal Deposit Insurance Corporation (FDIC) created FDITech, a tech lab, though it recently reduced its public-facing role.52 Four federal reserve banks—San Francisco, New York, Atlanta, and Boston—have also set up offices to study financial innovation and AI.53 These efforts are intended to focus, in part, on how regulators can use AI to assist in regulating financial institutions as well as to better understand how banks are using AI in their activities. These agencies have also jointly issued a request for information on financial institutions’ uses of AI54 and have proposed a rule to impose heightened standards for the use of home appraisals conducted using algorithms.55
The Securities and Exchange Commission (SEC) is quickly evaluating how regulated institutions use AI in capital markets. Chairman Gary Gensler has given a plethora of speeches discussing the possible harms of AI,56 including in a March 2024 interview with Politico in which he warned of a potential financial crisis caused in part by AI.57 In addition, the agency has launched a Strategic Hub for Innovation and Financial Technology (FinHub) that focuses, in part, on AI generally in the securities markets.58 The SEC proposed a rule to address risks posed to investors from conflicts of interest associated with using predictive data analytics.59 With regard to investment advisers, the SEC’s examinations division has begun soliciting information about advisers’ uses of AI.60 SEC staff have issued guidance61 and a risk alert62 addressing robo-advisers that use algorithms to make investment recommendations.
The Financial Industry Regulatory Authority (FINRA), the self-regulatory organization for securities brokers,63 formed an Office of Financial Innovation to coordinate fintech efforts that include AI64 and published a white paper on AI in the securities industry.65
The Commodity Futures Trading Commission (CFTC) issued “A Primer on Artificial Intelligence in Financial Markets” in 2019 that discusses, among other things, how the CFTC could leverage AI to better regulate its markets.66 More recently, the agency created an enforcement division task force focused on emerging technologies, including AI,67 and its Technology Advisory Committee created a panel to evaluate “responsible artificial intelligence.”68 The CFTC’s commissioners have given speeches on the need for the agency to regulate AI.69
The Biden administration’s work on AI is ongoing, but the AI Bill of Rights, the NIST AI Risk Management Framework, the 2023 executive order on AI, and the OMB M-24-10 AI guidance have highlighted key AI risk mitigation practices to be further developed.70 Due to parallel statutory authorities across multiple agencies, many of these recommendations are referenced repeatedly in the sections below.
These efforts include, but are not limited to, the following:
Relevant statutory authorities
This section explains how various statutes enforced by the federal financial regulators could be used to regulate AI. This list is by no means exhaustive.
Bank Secrecy Act
Relevant agencies: Treasury Department, Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration, Securities and Exchange Commission, Commodity Futures Trading Commission
The Bank Secrecy Act (BSA), enacted in 1970, is designed to combat money laundering and financial crimes.85 The BSA and regulations promulgated thereunder require financial institutions to maintain records and report certain transactions indicative of money laundering or other illicit activities.86 Under these regulations, banks and other financial institutions must verify the identity of all customers, keep detailed records of cash transactions exceeding $10,000, and report suspicious transactions to the Financial Crimes Enforcement Network (FinCEN).87 By mandating these reporting requirements, the BSA aims to enhance transparency in financial dealings, detect potential illegal activities, and safeguard the financial system’s integrity.88
The broad statutory authority allows the treasury secretary and banking and financial regulators to promulgate regulations requiring institutions to create and implement a wide variety of anti-money laundering programs.89
Recommendations
Using these authorities, the Federal Reserve, OCC, FDIC, SEC, and CFTC could consider the following actions:
- Regulate how institutions’ customer identification and suspicious activity reporting programs use AI. As AI becomes more integrated into financial systems, it can help institutions monitor and analyze transactions for BSA compliance more effectively, detecting anomalies or patterns indicative of illicit activities. However, regulators must be cognizant of the harms of offloading such an important law enforcement task to AI systems and should outline best practices for implementing AI systems and require institutions to develop standards for how they use AI to automate anti-money laundering tasks.
- Require banks to periodically review their BSA systems to ensure accuracy and explainability. Accurate and timely reports of suspicious activities must be balanced against financial privacy and FinCEN’s ability to review the reports it receives. Regulators must ensure the AI institutions’ BSA systems use is accurate and can explain why activities are suspicious and therefore flagged. Regulators should require institutions to periodically review their AI—perhaps by hiring outside reviewers—to ensure continued accuracy and explainability to expert and lay audiences. Examiners must be able to review source code and dataset acquisition protocols.
Gramm-Leach-Bliley Act: Disclosure of nonpublic personal information
Relevant agencies: Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration, Securities and Exchange Commission, Commodity Futures Trading Commission, Consumer Financial Protection Bureau
Enacted in 1999, the Gramm-Leach-Bliley Act (GLBA) proclaimed it “the policy of the Congress that each financial institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information.”90 Accordingly, 15 U.S.C. § 6802 provides that “a financial institution may not … disclose to a nonaffiliated third party any nonpublic personal information” unless it has first provided consumers notice.91
The GLBA requires the banking and financial regulators to “establish appropriate … administrative, technical, and physical safeguards” for institutions that 1) “insure the security and confidentiality of customer records and information”; 2) “protect against any anticipated threats or hazards to the security or integrity of such records”; and 3) “protect against unauthorized access to or use of [customer information].”92 Under this authority, the federal banking regulators have implemented interagency guidelines for establishing information security standards93 and issued IT and cybersecurity risk management guidance.94
Recommendations
The regulators should make further use of this authority to ensure resiliency against AI-designed cyber threats, including the following actions:
- Require third-party AI audits for all institutions. AI audits should be required for all institutions. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming but still need to mitigate against AI risk. Accordingly, small institutions should undergo AI security audits by qualified outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. This includes risks that cybercriminals could use AI to impersonate clients such that institutions inadvertently release customer information erroneously, believing that they are interacting with their clients. Regulators should set out guidelines for appropriate conflict checks and firewall protocols for auditors.
- Require red-teaming of AI for the largest institutions. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”95 The largest firms should already be utilizing red-teaming for their AI products. In addition, they should be running red team/blue team exercises, and the agencies should require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed at which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.96 Firms must know how malicious actors can use AI to attack their infrastructure to defend against it effectively. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities.
- Require disclosure of annual resources on AI cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in banking operations, the potential vulnerabilities and risks associated with cyber threats amplify significantly. By mandating such disclosures, stakeholders, including customers, regulators, and investors, gain valuable insights into a bank’s commitment to mitigating cyber risks through AI.
Equal Credit Opportunity Act
Relevant agency: Consumer Financial Protection Bureau
The Equal Credit Opportunity Act (ECOA) was enacted to prevent discrimination in credit granting. The ECOA makes it “unlawful for any creditor to discriminate against any applicant” for credit “on the basis of race, color, religion, national origin, sex or marital status, or age” or “because all or part of the applicant’s income derives from any public assistance program.”97 The ECOA requires creditors to provide reasons for credit denials and grants applicants the right to challenge any decision perceived as discriminatory. Its fundamental goal is to promote fair and equal access to credit for all qualified individuals, fostering a more inclusive and equitable financial landscape.
The ECOA allows the Consumer Financial Protection Bureau to “prescribe regulations to carry out the purposes of [the act],” including those it believes “are necessary or proper to effectuate [the ECOA’s purposes], to prevent circumvention or evasion thereof, or to facilitate or substantiate compliance therewith.”98 It also requires firms to provide “[e]ach applicant against whom adverse action is taken [with] a statement of reasons for such action.”99
Recommendations
Using these authorities, the CFPB could consider the following actions:
- Require lenders to periodically review their lending systems to ensure explainability and that no new discriminatory activity applies. Research suggests that AI-based systems may result in lending decisions that have a disparate impact,100 which is a violation of the ECOA.101 The CFPB has already indicated in guidance that AI-based lending systems cannot be used when those systems “cannot provide the specific and accurate reasons for adverse actions.”102 Nevertheless, the CFPB should require lenders making lending decisions using AI to periodically review those systems—perhaps by hiring outside reviewers—to ensure explainability to expert and lay audiences and to confirm that discrimination does not inadvertently creep in as new data are used. Examiners must review source code and dataset acquisition protocols.
- Prohibit lenders from using third-party credit scores and models developed with unexplainable AI. Many lenders use credit scores or other sources of information from third parties, which themselves may use AI to create those ratings.103 The CFPB should prohibit lenders from using unexplainable scores or models to avoid fair lending requirements and require all lenders subject to the ECOA to obtain information about the explainability of their third-party service providers’ AI.
- Require lenders to employ staff with AI expertise. As described above, many lenders rely on third-party models for lending decisions. Given the pitfalls of algorithmic lending decisions, these firms must maintain diverse teams that include individuals with AI expertise to understand how such models operate and can introduce bias into firms’ lending decisions. These experts are necessary to identify and mitigate potential biases or unintended consequences of algorithmic decision-making. The 2023 executive order on AI required federal agencies to appoint chief artificial intelligence officers (CAIOs),104 whose duties were further outlined in the OMB M-24-10 AI guidance.105 The CFPB should follow that model to require firms to similarly designate a CAIO or designate an existing official to assume the duties of a CAIO.
Fair Credit Reporting Act
Relevant agency: Consumer Financial Protection Bureau
Recognizing that “the banking system is dependent upon fair and accurate credit reporting” and that “inaccurate credit reports directly impair the efficiency of the banking system,” Congress enacted the Fair Credit Reporting Act (FCRA) in 1970.106 The FCRA generally covers all entities that help create, provide, and use consumer reports and allows the Consumer Financial Protection Bureau to regulate those activities. For example, the FCRA prohibits entities that furnish information to consumer reporting agencies (CRAs) from reporting information that they know have errors, mandating they correct and update false information, and allows the CFPB to craft regulations prescribing policies and procedures that must be followed.107 For CRAs themselves, the FCRA excludes particular information from reports and requires agencies to describe to users the key factor in credit score information.108 And for users of consumer reports—which include lenders, employers, and landlords—the FCRA prescribes responsibilities if they take adverse actions based on report information and allows the CFPB to regulate how users provide consumers with credit decision notices and the information contained in such notices.109 The CFPB may also regulate the procedures for instances where consumers wish to dispute the accuracy of information in reports.110 The Federal Trade Commission, CFPB, and other agencies have administrative enforcement authority.111
Using this regulatory authority, the CFPB has issued regulations creating and requiring firms to use model forms and disclosures,112 requiring furnishers of information to “establish and implement reasonable written policies and procedures regarding the accuracy and integrity of the information [provided to credit reporting agencies],”113 and requiring users of credit reports to disclose to consumers when their credit report has been used as a means for determining their risk.114
Recommendations
As it relates to AI, the CFPB should consider using these authorities to take the following actions:
- Require credit reporting agencies to describe whether and to what extent AI was involved in formulating reports and scores. Although the CFPB has issued guidance making clear that the ECOA requires lenders to make their AI systems explainable,115 it has yet to do the same with credit reporting agencies. Given that AI-based systems may result in the creation of credit scores that will result in a disparate impact, the CFPB should use its authority over credit reporting agencies to make clear that the AI used to generate credit scores should describe the extent to which AI was used and ensure the scores are explainable.
- Require credit reporting agencies to periodically review their AI systems to ensure explainability and that no new discriminatory activity applies. Beyond simply requiring credit reporting agencies’ AI systems to be explainable to expert and lay audiences, the CFPB should also require the agencies to periodically review their systems to ensure continued explainability as new data are introduced. CFPB examiners must be able to review source code and dataset acquisition protocols.
- Require credit reporting agencies to provide for human review of information that consumers contest as inaccurate. As part of U.S.C. § 1681i “reasonable reinvestigation” mandate, credit reporting agencies should be required to have a human conduct the reinvestigation of AI systems’ determinations and inputs.116 Since AI-based systems may use black-box algorithms to determine credit scores or inputs that create credit scores, individually traceable data are required for adequate human review. As noted above, general explainability is important but would not be sufficient to allow human reviewers to correct potentially erroneous information under the FCRA.
- Given the preceding recommendation, require users of credit reports to inform consumers of their right to human review of inaccuracies in AI-generated reports in adverse action notices, per 15 U.S.C. § 1681(m)(4)(B).
- Update model forms and disclosures to incorporate disclosure of AI usage. Given the CFPB’s mandate that credit reporting agencies and users of credit reports use model forms and disclosures, the CFPB should update those forms to include spaces for model form users to describe their AI usage.
Importantly, “consumer reports” under the FCRA include those that provide information used “in establishing the consumer’s eligibility for … employment purposes.”117 “Employment purposes” includes the “purpose of evaluating a consumer for employment, promotion, reassignment or retention as an employee.”118 The CFPB should consider several policy changes to explicitly address electronic surveillance and automated management (ESAM) used by employers:
- Require purveyors of workplace surveillance technologies to comply with the FCRA. As AI firms become increasingly used to mine data provided by employers, it is important that ESAM software companies be considered credit reporting agencies and comply with the corresponding restrictions. The CFPB should consider adding such companies to its list of credit reporting agencies119 and issue supervisory guidance explaining the circumstances under which ESAM companies act as CRAs and the corresponding responsibilities that they entail for ESAM companies and employers.
- Ensure ESAM technologies used by employers comply with the FCRA. If the CFPB provides that these technology providers are CRAs, the CFPB must also make clear that users of their software comply with the FCRA. Accordingly, it should consider modifying the “Summary of Consumer Rights” issued by the CFPB to include information about employee FCRA rights concerning employers’ use of ESAM technologies.120 It should also consider modifying “Appendix E to Part 1022” to identify how employers furnishing employee data to ESAM technology companies and data brokers must ensure the accuracy of their furnished information.121
Community Reinvestment Act
Relevant agencies: Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation
Enacted to undo the pernicious effects of redlining,122 the Community Reinvestment Act (CRA) encourages banks to meet the credit needs of the communities in which they operate, particularly low- and moderate-income neighborhoods. The CRA requires banks to actively engage in lending, investment, and service activities in these underserved communities by mandating periodic evaluations of banks’ performance in meeting the community’s credit needs.123 The CRA grants federal banking regulators the authority to regulate banks’ compliance with the law.124
The CRA does not allow regulators to change banks’ lending decisions, only to decide how it will evaluate whether banks comply with the act. The regulators’ rules allow banks to submit strategic plans for complying with the CRA125 and establish assessment areas for determining compliance.126
Recommendation
The federal banking regulators should consider using their authority to:
- Require banks to indicate whether they use AI to comply with CRA regulations and, if so, require those systems to be explainable. Given AI systems’ abilities to wade through mountains of information and identify the most profitable outcomes, banks may use them to game CRA regulations. For example, banks may use AI to help determine the most optimal assessment areas for profitability purposes. Regulators should require banks to disclose if they use AI to comply with the CRA or with regulations promulgated thereunder. In addition, these AI systems should be required to be explainable to expert and lay audiences to ensure that designated assessment areas are logical. Examiners must be able to review source code and dataset acquisition protocols.
Consumer Financial Protection Act: UDAAP authority
Relevant agency: Consumer Financial Protection Bureau
Following the great financial crisis of 2007–2008, Congress enacted the Consumer Financial Protection Act (CFPA) as Title X of the Dodd-Frank Wall Street Reform and Consumer Protection Act in 2010. Among other things, the CFPA created the Consumer Financial Protection Bureau to ensure fairness, transparency, and accountability in providing consumer financial products and services by regulating those products and services and enforcing the nation’s consumer financial protection laws.127 The Consumer Financial Protection Bureau regulates various financial sectors, including banks, credit unions, mortgage servicers, payday lenders, and debt collectors, striving to educate consumers and monitor financial practices.
One of the most potent authorities provided to the CFPB is its authority to “take any action authorized … to prevent a covered person or service provider from committing or engaging in an unfair, deceptive, or abusive act or practice under Federal law in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.”128 Under this so-called UDAAP authority, the CFPB may also write regulations “identifying as unlawful” particular acts or practices and “may include requirements for the purpose of preventing such acts or practices.”129 In other words, the CFPB can regulate consumer financial service providers to ensure their activities are not unfair, deceptive, or abusive.
Recommendations
Using this authority, the CFPB should consider the following actions:
- Require financial institutions’ consumer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict consumer protection standards, periodically reviewing consumer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems must provide consumers with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customers’ money transfers or asset purchases—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and comply with firms’ policies. The CFPB must ensure that institutions’ consumer-facing AI systems are accurate in all respects and require, through rulemaking, periodic review of their systems to ensure accuracy.
- Require AI red-teaming and red team/blue team exercises for the largest institutions. The CFPB’s UDAAP authority can be used to prohibit the inadvertent disclosure of consumers’ information at institutions not subject to the GLBA.130 Nonbank consumer financial service providers hold a wealth of information about customers that malicious AI systems feed off, and they may be liable for customer losses stemming from AI-enabled fraud.131 With AI red-teaming132 or red team/blue team exercises, the red team attempts to attack a company’s information technology infrastructure while the blue team defends against such hacks. The largest firms should already be utilizing AI red-teaming and red team/blue team exercises, but given that real-world attackers have AI at their disposal, the agencies should require this. Having teams use AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.133 Firms must understand how malicious actors can use AI to attack their infrastructure and defend against it. Institutions must conduct AI red-teaming and red team/blue team exercises leveraging AI to fortify their cyber defenses and proactively identify vulnerabilities.
- Require third-party AI audits for all institutions. AI audits should be required by all institutions. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming or red team/blue team exercises134 but still need to mitigate against AI risk. Accordingly, small institutions should be required to undergo AI security audits by outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. The CFPB may require such audits because failure to do so while claiming accurate and secure systems is unfair. Regulators should set guidelines for appropriate conflict checks and firewall protocols for auditors.
- Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Requiring nonbank consumer financial service providers to disclose their annual resources dedicated to cybersecurity and AI risk management and compliance is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial institution operations,135 the potential vulnerabilities and risks associated with cyber threats amplify significantly. The CFPB could enact regulations mandating such resource disclosures for spending on cybersecurity and AI risk management and compliance. By mandating such disclosures, stakeholders, including customers, regulators, and investors, would gain valuable insights into the extent of an institution’s commitment to mitigating cyber risks through AI.
Federal Deposit Insurance Act, Federal Credit Union Act, and Bank Holding Company Act
Relevant agencies: Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration
The Federal Deposit Insurance Act (FDIA) and the Federal Credit Union Act (FCUA) are two of the core statutes that permit banking and credit union regulators to ensure the safety and soundness of institutions under their respective jurisdictions.136 The Bank Holding Company Act (BHCA) similarly provides the Federal Reserve with many of the same authorities for bank holding companies. Under these statutes, banking regulators are required to prescribe standards relating to “internal controls, information systems, and internal audit systems” as well as any “other operational and managerial standards as the agency determines to be appropriate.”137 The National Credit Union Administration (NCUA) is required to “promulgate rules establishing minimum standards … of security devices and procedures.”138 Regulators may also enforce prohibitions against activities that are unsafe or unsound.139
Pursuant to these authorities, regulators have issued a wide array of regulations and guidance designed to ensure financial institutions adhere to the highest operational standards. For example, they have issued guidelines establishing standards for safety and soundness covering loan documentation, credit underwriting, and asset quality.140 They have also issued information security standards “for developing and implementing administrative, technical, and physical safeguards to protect the security, confidentiality, and integrity of customer information.”141 Regulators routinely examine institutions to ensure adherence to heightened standards and to identify unsafe or unsound activities and issue a host of guidance identifying risky acts and practices that institutions may consider addressing.142
Recommendations
Using these authorities, the Federal Reserve, FDIC, OCC, and NCUA should consider the following actions:
- Require financial institutions’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict standards and require those institutions to periodically review their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems provide customers with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customers’ money transfers or asset purchases—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. Regulators must ensure that institutions’ customer-facing AI systems are accurate and require periodic reviews of their systems to ensure accuracy.
- Ensure banks’ capital structures can withstand sudden and deep withdrawals of customer deposits or losses from banks’ risk management processes. Banks’ corporate clients are likely to begin using AI systems for treasury management—including bank deposits—and there are likely to be only a small number of providers of such systems, given the large computing power necessary for effective AI.143 AI-based treasury management systems may automatically move all firms’ cash, simultaneously creating significant movements of cash between financial institutions in short periods of time that result in sudden and significant drops in customer deposits. Regulators must ensure that banks maintain sufficient shareholder capital and high-quality liquid assets that enable them to withstand such shifts without failing.
- Require that AI systems that are parts of banks’ capital, investment, and other risk management models be explainable. Banks today use various systems to automate their capital management strategies, evaluate investment opportunities, and otherwise mitigate risk. They will inevitably use AI for these and other purposes that have significant effects on their profitability and stability. The banking agencies already review firms’ risk management practices regarding the various models they use, and regulators should do the same with AI. Specifically, all AI systems must be explainable to expert and lay audiences. Examiners must be allowed to review source code and dataset acquisition protocols.
- Ensure firms may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not develop their own systems in-house; instead, they will license software from a few competing nonfinancial institutions.144 Financial firms must be able to move between different and competing AI systems to avoid lock-in. Accordingly, regulators should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. Regulators must ensure that there are many—for example, at least five—providers of AI software for banks that provide for base interoperability, so that not all institutions are using the same one or two pieces of software.
- Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in banking operations, the potential vulnerabilities and risks associated with cyber threats amplify significantly. By mandating such disclosures, stakeholders, including customers, regulators, and investors, gain valuable insights into the extent of a bank’s commitment to mitigating cyber risks through AI. Bank and credit union annual disclosures could provide these disclosures.
Dodd-Frank Act: Systemic risk designation
Relevant agency: Financial Stability Oversight Council
The Dodd-Frank Act (DFA), enacted following the great financial crisis of 2007–2008, created the Financial Stability Oversight Council to “identify risks to the financial stability of the United States” and “respond to emerging threats to the stability of the United States financial system.”145 Among the authorities the DFA granted to the FSOC is the ability to designate financial market utilities (FMUs) as systemically important and subject to supervision and regulation by the Federal Reserve. Under statute, FMUs are “any person that manages or operates a multilateral system for the purpose of transferring, clearing, or settling payments, securities, or other financial transactions among financial institutions or between financial institutions and the person.”146
To designate FMUs, the FSOC can merely determine that they “are, or are likely to become, systemically important.”147 To make this determination, the FSOC is statutorily required to consider five factors: 1) “the aggregate monetary value of transactions processed by the financial market utility”; 2) “the aggregate exposure of the financial market utility … to its counterparties”; 3) “the relationship, interdependencies, or other interactions of the financial market utility … with other financial market utilities or payment, clearing, or settlement activities”; 4) “the effect that the failure of or a disruption to the financial market utility … would have on critical markets, financial institutions, or the broader financial system”; and 5) “any other factors that the Council deems appropriate.”148 In the FSOC’s rules detailing its process for designating FMUs, it provides that it makes “two critical determinations” in deciding whether to act: first, “whether the failure of or a disruption to the functioning of the FMU now or in the future could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets”; second, “whether the spread of such liquidity or credit problems among financial institutions or markets could threaten the stability of the financial system of the United States.”149 Using this authority, the FSOC has designated eight FMUs, all clearinghouses.150
In a March 2024 interview with Politico, SEC Chair Gary Gensler warned about the dangers of concentration with AI and financial systems: “We have set up a lot of our systems of oversight and rules around regulating individual entities or activities, whether it’s bank regulators, insurance regulators, securities regulators, commodities regulators.” Gensler added that it was important to be “thinking about [AI] across all the entities — are they potentially all using the same base model or base data?”151 He also noted the threat of AI concentration in the financial system, saying: “I would be quite surprised if in the next 10 or 20 years a financial crisis happens and there wasn’t somewhere in the mix some overreliance on one single data set or single base model somewhere.”152
While AI usage has yet to reach levels that justify designation as FMUs, if AI has the impact and widespread adoption predicted by some, then that future designation may be warranted.
Recommendations
Using this FMU designation authority, the FSOC should consider the following actions in the event that major providers of AI services reach a level of systemic importance to warrant oversight under these authorities:
- Designate major providers of AI services to financial institutions as systemically important if they reach an adoption level that creates vulnerability. It may appear incongruous at first glance to designate AI service providers as not only systemically important but also as systemically important FMUs. They do not facilitate payments, are not clearinghouses, do not provide for settlement of financial transactions, nor do they engage in significant financial transactions with counterparties. However, providers of AI services to the largest and most systemically important financial institutions could still meet the FSOC’s two determinations if they become so important to traders and market makers that, if the AI systems stop working for those firms, it “could create, or increase, the risk of significant liquidity or credit problems [in the markets].”153
Consider, for example, that market makers such as investment banks use AI systems to facilitate trades. If those systems stop working or execute faulty trades, significant liquidity could be removed from the markets, causing asset prices to drop precipitously along with financial instability. Similar arguments may be made for brokers using AI to manage their funding needs: If AI systems stop working, those brokers could lose access to funding sources, causing them to collapse. And the same is potentially true for high-frequency traders using AI to manage their trades—as faulty AI systems could result in flash crashes. Accordingly, the FSOC should monitor which AI systems are relied on by significant players in the markets and consider designating them as systemically important if their failure could threaten the stability of the U.S. financial system.
-
Designate the cloud service providers to those firms designated as systemically important. AI systems rely on cloud service providers, such as Amazon Web Services or Microsoft Azure, to operate; thus, if these cloud providers fail, AI systems also fail.154 Indeed, AI programs run on cloud providers’ servers and require cloud providers’ computing power to conduct the large-scale language processing required for AI. To the extent that AI software is of systemic importance to the financial system and may pose systemic risks if it fails, the fact that AI software cannot operate without cloud providers means that cloud providers are also of systemic importance to the financial system and may pose systemic risks themselves. This is not a new idea; members of Congress and advocacy organizations have previously called for such designation.155 However, the rise of AI gives this proposal new urgency. Accordingly, once the FSOC identifies which AI systems are systemically important, it should determine the cloud providers on which they rely and consider designating them as systemically important.
Securities Exchange Act of 1934
Relevant agency: Securities and Exchange Commission
The Securities Exchange Act of 1934, or “1934 Act,” is a cornerstone of securities regulation in the United States, enacted to ensure transparency, integrity, and fairness within the securities markets.156 The 1934 Act created the Securities and Exchange Commission to regulate the markets and enacted rules governing the secondary trading of securities. It aims to protect investors by mandating the disclosure of crucial financial information, preventing fraudulent practices such as insider trading and market manipulation, and overseeing the operations of securities exchanges.
The 1934 Act governs, and allows the SEC to regulate, brokers, exchanges and alternative trading systems, and clearinghouses, among other institutions. It broadly enables the SEC “to make such rules and regulations as may be necessary or appropriate to implement [the act].”157 In addition, the 1934 Act provides the SEC with authority to enact regulations specific to different market participants or registered entities. For example, Section 15 of the act, permits the SEC to “establish minimum financial responsibility requirements” and “standards of operational capability” for brokers,158 which it has used to enact net capital requirements,159 risk management practices,160 and an array of information technology standards.161 Furthermore, the combination of sections 6, 11A, 15A, and 17A permits the SEC to “facilitate the establishment of a national market system for securities” by allowing it to enact rules requiring exchanges and clearinghouses to “[have] the capacity to . . . carry out the purposes of [the act].”162 Under these authorities, the SEC enacted Regulation Systems Compliance and Integrity, a comprehensive information technology regulation that requires these entities to “establish written policies and procedures” that “ensure that their systems have levels of capacity, integrity, resiliency, availability, and security” and “[create] business continuity and disaster recovery plans.”163
Recommendations
Using these authorities, the SEC should consider the following actions:
- Require that AI systems that are parts of brokers’ capital, investment, and other risk management models be explainable. Brokers use a variety of systems to automate their capital management strategies, evaluate investment opportunities, and mitigate risk. They will inevitably use AI for these and other purposes that significantly affect their profitability and stability. The SEC already regulates brokers’ risk management models,164 and it should do the same with AI. Specifically, all AI systems must be explainable to expert and lay audiences. The SEC should also ensure that it and FINRA’s examiners may review source code and dataset acquisition protocols.
- Require brokers’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards, with those brokers periodically reviewing their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems must provide clients with accurate information about their accounts, their policies and procedures, and the law. In addition, as these AI systems are used for more than simply providing information—such as executing customer trades—it is critical that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. The SEC must ensure that brokers’ customer-facing AI systems undergo periodic review to ensure accuracy through third-party audits.
- Require brokers using AI systems to make investment recommendations to ensure those systems are explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the SEC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests.165 Among other things, AI systems must be explainable to expert and lay audiences. Brokers must also be able to explain why their recommendations are not provided based on conflicts of interest. Furthermore, the SEC should require brokers using AI to make investment recommendations to periodically review those systems and ensure that examiners may review source code and dataset acquisition protocols.
- Require red-teaming of AI for exchanges, alternative trading systems, and clearinghouses. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”166 The largest firms should already be utilizing red teaming for their AI products. In addition, they should be running red team/blue team exercises, and the agencies should require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.167 Firms must be aware of how malicious actors can use AI to attack their infrastructure to be able to defend against it. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities. Given the systemic importance of these firms, the SEC should not allow third-party audits to suffice, but rather deploy multiple steps to ensure security and protection.
- Ensure firms may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not develop their own systems in-house; instead, they will license software from a few competing nonfinancial institutions.168 It will be imperative that financial firms be able to move between different and competing AI systems to avoid lock-in. Accordingly, the SEC should make it a prerequisite of using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. The SEC could require that brokers, exchanges, alternative trading systems, and clearinghouses ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions are using the same one or two pieces of software.
- Require disclosure of annual resources dedicated to cybersecurity spending and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial services, the potential vulnerabilities and risks associated with cyber threats amplify significantly. The SEC should, accordingly, mandate brokers, exchanges, and clearinghouses to disclose their annual expenditures on cybersecurity and AI risk management and compliance. By mandating such disclosures, the SEC can gain valuable insights into the extent of a firm’s commitment to mitigating AI risk management.
Investment Advisers Act of 1940
Relevant agency: Securities and Exchange Commission
The Investment Advisers Act (IAA) regulates the activities of firms providing investment advice to clients. Under the IAA, investment advisers must register with the Securities and Exchange Commission if they manage assets above certain thresholds, becoming registered investment advisers (RIAs); comply with SEC regulations; and adhere to a fiduciary duty vis-à-vis their clients. Under the IAA, the SEC may regulate how firms safeguard client assets over which they have custody169 and may “promulgate rules prohibiting or restricting certain sales practices, conflicts of interest, and compensation schemes for brokers, dealers, and investment advisers that the Commission deems contrary to the public interest and the protection of investors.”170
Recommendations
Accordingly, the SEC should consider the following actions:
- Require that RIAs’ AI systems used to make investment recommendations are explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the SEC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests. Among other things, RIAs’ AI systems must be explainable to both expert and lay audiences and explain why their recommendations are not provided based on conflicts of interest. Furthermore, the SEC should require RIAs that use AI to make investment recommendations to periodically review those systems and ensure that examiners may review source code and dataset acquisition protocols.
- Require RIAs’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards, with RIAs periodically reviewing their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems provide clients with accurate information about their accounts, their firms’ policies and procedures, and the law in a manner that is not misleading. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customer trades—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only legal transactions within firms’ policies. The SEC must ensure that RIAs’ customer-facing AI systems are accurate and require periodic reviews of their systems to ensure accuracy.
- Ensure RIAs may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not be developing their systems in-house; instead, they will license software from a small number of competing nonfinancial institutions.171 It is imperative that RIAs are able to move between different and competing AI systems to avoid lock-in. Accordingly, the SEC should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. The SEC must require that RIAs ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions are using the same one or two pieces of software.
Commodity Exchange Act
Relevant agency: Commodity Futures Trading Commission
The Commodity Exchange Act (CEA) regulates the trading of commodity futures and other derivatives to ensure fair and efficient markets while preventing fraud and manipulation. The CEA created the Commodity Futures Trading Commission to oversee these markets, which requires the registration and regulation of various registrants, trading platforms, and clearinghouses. Originally enacted to protect farmers and ranchers in hedging their risks, the CEA now also covers trades worth trillions of dollars of value.
The CEA allows the CFTC to “make and promulgate such rules and regulations as, in the judgment of the Commission, are reasonably necessary to effectuate any of the provisions or to accomplish any of the purposes of [the act].”172 In addition, the CFTC has specific grants of regulatory authority over different market participants. For example, the CFTC may “prescribe rules applicable to swap dealers and major swap participants,” including rules explicitly related to “business conduct standards” and “minimum capital requirements.”173 For futures commission merchants (FCMs), the CFTC may enact “minimum financial requirements”174 and regulations governing how these entities handle customer assets.175 Exchanges must comply with acceptable business practices;176 “have adequate financial, operational, and managerial resources”; “[create risk management programs] to identify and minimize sources of operational risk”; and “establish and maintain emergency procedures, backup facilities, and a plan for disaster recovery”—and the CFTC may prescribe rules governing all of these activities.177 Similar requirements apply to clearinghouses, which the CFTC can regulate similarly.178 With these authorities, the CFTC has enacted various regulations, including the first rules on algorithmic trading.179 The agency has also recently proposed a rule for cyber and operational resilience.180
Recommendations
Using these myriad authorities, the CFTC should consider the following actions:
- Require AI systems that are parts of futures commission merchants’, swap dealers’, or major swap participants’ capital, investment, or other risk management models to be explainable. Today, these entities use a variety of systems to automate their capital management strategies, evaluate investment opportunities, and mitigate risk. They will inevitably begin using AI for these and other purposes that significantly affect their profitability and stability. The CFTC should regulate its AI models and ensure that all AI systems are explainable to expert and lay audiences. The CFTC should also ensure that it and the National Futures Association’s examiners may review source code and dataset acquisition protocols.
- Require futures commission merchants’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards. As institutions begin using AI chatbots to communicate with customers, these systems provide clients with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customer trades—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. The CFTC must ensure that FCMs’ customer-facing AI systems are accurate in all respects and require periodic reviews of those systems to ensure accuracy and explainability.
- Require that FCMs’ AI systems used to make investment recommendations be explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the CFTC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests. Among other things, AI systems must be explainable to expert and lay audiences and explain why recommendations are not provided based on conflicts of interest. Furthermore, the CFTC should require FCMs using AI to make investment recommendations, to periodically review those systems, and to ensure that examiners can review source code and dataset acquisition protocols.
- Require red-teaming of AI for swap dealers, exchanges, and clearinghouses. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”181 The largest firms should use red-teaming for their AI products. In addition, they should run red team/blue team exercises and require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.182 Firms must be aware of how malicious actors can use AI to attack their infrastructure to be able to defend against it. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities.
- Require third-party AI audits for all institutions. All institutions should require AI audits. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming but still need to mitigate against AI risk. Accordingly, small institutions should be required to undergo AI security audits by outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. Regulators should set out guidelines for appropriate conflict checks and firewall protocols for auditors.
- Ensure firms can move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not be developing their systems in-house; instead, they will license software from a few competing nonfinancial institutions.183 It is imperative that financial firms are able to move between different and competing AI systems to avoid lock-in. Accordingly, the CFTC should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for an easy transition to a competing system upon the contract’s expiration. The CFTC must require that all registrants and registered entities ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions use the same one or two pieces of software.
- Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial services, the potential vulnerabilities and risks associated with cyber threats amplify significantly. Accordingly, the CFTC should mandate that registrants and registered entities disclose their annual expenditures on cybersecurity and AI risk management and compliance. By mandating such disclosures, the CFTC can gain valuable insights into the extent of a firm’s commitment to mitigating AI risks.
Conclusion
The numerous U.S. financial regulators have ample statutory authority to address concerns AI may pose to customers, banks, securities brokers and futures commission merchants, securities and derivatives exchanges, and other market intermediaries. U.S. financial regulators must begin to address these challenges now with their existing authorities and tools to ensure the success and stability of the U.S. financial system in the AI age. GFI and CAP hope this chapter will offer thoughtful options to regulators as they undertake their AI work.
Read the fact sheet
The accompanying fact sheet lists all of the recommendations detailed in this chapter of the report.