This fact sheet collects the recommendations from Chapter 1: “The White House” of the joint report from Governing for Impact (GFI) and the Center for American Progress, “Taking Further Agency Action on AI: How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence.” The chapter outlines how the White House and its subordinate agencies, including the Office of Management and Budget (OMB) and the Office of Information and Regulatory Affairs (OIRA), could address potential artificial intelligence (AI) risks and opportunities beyond the October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”1 The White House can draw on existing regulations and executive actions to do so, including the administration of federal grants and federal contracts, the Defense Production Act, and emergency powers such as the International Emergency Economic Powers Act (IEEPA). These recommendations are meant to spark a generative discussion rather than to outline a definitive executive action agenda. This menu of options demonstrates that agencies have more tools to explore beyond their current work and that they should immediately use existing authorities to address AI.
The Office of Management and Budget
Uniform guidance for federal awards
The OMB could consider the following actions:
- Develop guidance that adapts the recent OMB M-24-10 AI guidance2 to cover AI use by other recipients of federal funds, including grants, loans, and other forms of financial assistance. Drawing on the framework in the OMB M-24-10 AI guidance,3 the new guidance could direct agencies to assess safety-impacting and rights-impacting uses of AI and to mitigate the associated risks through minimum practices for AI risk management. The guidance could also urge agencies to impose conditions on federal funds to the extent the statutory sources of those funds allow such conditions.
- Update the uniform guidance for federal awards at 2 C.F.R. Part 200, pursuant to 31 U.S.C. §§ 6307 and 503(a)(2), to incorporate AI risk assessment, along with the steps that applicants are taking to mitigate risks, into agencies’ consideration of applications for federal funding, as permitted by the statutory sources for such funding. Specifically, the OMB could update 2 C.F.R. § 200.206(b)(2) to include an assessment of AI risk within its risk evaluation requirements; update 2 C.F.R. § 200.204(c) to require or suggest that the full text of funding opportunity announcements include any AI risk evaluation requirements; and update 2 C.F.R. § 200.211 to require or recommend that federal award publications include the results of AI risk analyses produced during the application process. The current risk evaluation section permits a federal agency to consider the “applicant’s ability to effectively implement statutory, regulatory, or other requirements imposed on non-Federal entities.”4 A revised uniform guidance could explicitly suggest that federal agencies consider how grantees’ use of AI could affect their ability to comply with such requirements, as well as the impact AI use could have on the other categories of risk specified in the current guidance.
Updates to regulatory review
The president, OMB, and OIRA could consider the following actions:
- Add a requirement to the regulatory review process that agencies include a brief assessment of 1) the potential effects of significant regulatory actions on AI development, risks, harms, and benefits, and 2) the current and anticipated use of AI by regulated entities and how that use is likely to affect the ability of any proposed or final rule to meet its stated objectives. This requirement could follow the format of the benefit-cost analysis required by the current Executive Order 12866. The modification to the regulatory review process could take the form of a new executive order, a presidential memorandum,5 or an amendment to Executive Order 12866 that adds a subsection to § 1(b) and/or § 6(a).
- Issue a presidential memorandum directing agencies, and encouraging independent agencies, to review their existing statutory authorities to address known AI risks and to consider whether addressing AI use by regulated entities through new or ongoing rulemakings would help ensure that this use does not undermine core regulatory or statutory goals. Such a memorandum would primarily give general direction, similar to the Obama administration’s behavioral sciences action,6 rather than require a specific analysis for every regulation.
The presidential memorandum could direct executive departments and agencies, or perhaps even each agency’s chief AI officer, a role established in the October 2023 executive order on AI and further detailed in the OMB M-24-10 AI guidance,7 to:
- Identify whether their policies, programs, or operations could be undermined or impaired by the private sector use of AI tools.
- Comprehensively complete the inventory of statutory authorities first requested in OMB Memorandum M-21-06,8 which directed agencies to evaluate their existing authorities to regulate AI applications in the private sector.
- Outline strategies for deploying such statutory authorities to achieve agency goals in the face of identified private sector AI applications.
Federal contracting
Federal procurement policy and Federal Property and Administrative Services Act (FPASA)
As the OMB prepares the forthcoming procurement guidance mentioned in the OMB M-24-10 AI guidance,9 it may also want to consider whether it can include standards that:
- Ensure baseline levels of competition and interoperability, such that agencies do not get locked into using the services of a single AI firm.
Under its FPASA authority, the Federal Acquisition Regulatory Council,10 which is chaired by OMB’s administrator for federal procurement policy, can promulgate a rule outlining AI-related protections for all employees at firms that hold federal contracts, potentially including the following actions:
- Extend the presumed safety-impacting and rights-impacting uses of AI from the OMB M-24-10 AI guidance to federal contractors and their use of AI systems for workplace management.11
- Require federal contractors that employ automated systems to use predeployment testing and ongoing monitoring to ensure safety, to ensure that workers are paid for all compensable time, and to mitigate other harmful impacts.
- Establish specific requirements regarding pace of work, quotas, and worker input to reduce the safety and health impacts of electronic surveillance and automated management.
- Mandate disclosure requirements when employees are subject to automation or other AI tools.
- Provide discrimination protections related to algorithmic tools, including ensuring that automated management tools can be adjusted to make reasonable accommodations for workers with disabilities.
- Ensure privacy protections for employees and users of AI.
The Executive Office of the President
International Emergency Economic Powers Act (IEEPA), the Communications Act, and Federal Procurement Policy
To prepare the government to use the above powers in the event that an AI system poses an emergency threat to the United States, the White House could consider the following actions:
- Direct the National Security Council to develop a memorandum that outlines scenarios wherein AI applications could pose an emergency threat to the country and identifies actions that the president could take, through existing statutory schemes and the president’s inherent executive authority under Article II of the Constitution, to resolve the threat. The memorandum should survey the landscape of imaginable AI applications and devise criteria that would trigger emergency governmental action. Such a memorandum could complement or be incorporated as part of the National Security Memorandum required by the October 2023 executive order on AI.12 The memorandum’s design could echo the National Response Plan, originally developed after 9/11 to formalize rapid government response to terrorist attacks and other emergency scenarios.13 The memorandum could consider authorities:
- Inherent to the president’s constitutional prerogative to protect the nation: For example, the memorandum could identify circumstances in which it could be appropriate for the president to take military or humanitarian action without prior congressional authorization because immediate action is required to prevent imminent loss of life or property damage.14
- Under the IEEPA: For example, the memorandum could consider the administration’s authority to expand the policies established in the August 2023 IEEPA executive order, using the statute to freeze assets associated with AI technologies and countries of concern that contribute to the crisis at hand.15 Follow-up executive action could identify new countries of concern as they arise. As another example, the memorandum could identify triggers for pursuing sanctions under 50 U.S.C. § 1708(b) on foreign persons who support the use of proprietary data to train AI systems or who steal proprietary AI source code from sources in the United States. The memorandum could also explore the president’s authority to investigate, regulate, or prohibit certain transactions or payments related to runaway or dangerous AI models in cases where the models are trained or operate on foreign-made semiconductors and the president determines that such action is necessary to “deal with” a national security threat. Even if a model is deployed domestically or developed by a domestic entity, it may still fall within reach of the IEEPA’s potent § 1702 authorities if, per 50 U.S.C. § 1701, the model: 1) poses an “unusual or extraordinary threat,” and 2) “has its source in whole or substantial part outside the United States.” The administration can explore whether AI models’ dependence on foreign-made semiconductors for training and continued operation meets this second requirement. Indeed, scholars have previously argued that the interconnectedness of the global economy likely subjects an array of domestic entities to the IEEPA in the event sufficiently exigent conditions arise.16
- Under the Communications Act: For example, the memorandum could identify scenarios in which the president could consider suspending or amending regulations under 47 U.S.C. § 606(c) regarding wireless devices to respond to a national security threat.17 The bounds of this authority are quite broad, covering an enormous number of everyday devices, including smartphones, which emit electromagnetic radiation.18
- To modify federal contracts: For example, the memorandum could identify possibilities for waiving procurement requirements in a national emergency if quickly contracting with a particular entity would help develop capabilities to combat a rapidly spreading and destructive AI.19
- To take other statutorily or constitutionally authorized actions: The memorandum could organize a process through which the White House and national security apparatus would, once the criteria outlined in the memorandum are met, assess an emergent AI-related threat, develop a potential response, implement that response, and notify Congress and the public of that response.20 It could also request a published opinion from the Office of Legal Counsel on the legality of the various response scenarios and decision-making processes drawn up pursuant to the recommendations above. This would help ensure that the president can act swiftly but responsibly in an AI-related emergency.
- Share emergency AI plans with the public: The administration should share the emergency processes and memoranda it develops with Congress, relevant committees, and the public where possible.