Authors’ note: For this report, the authors use the definition of artificial intelligence (AI) from the 2020 National Defense Authorization Act, which established the National Artificial Intelligence Initiative.1 This definition was also used by the 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”2 Similarly, this report makes repeated reference to “Appendix I: Purposes for Which AI is Presumed to be Safety-Impacting and Rights-Impacting” of the 2024 OMB M-24-10 memo, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”3
Read the fact sheet
The accompanying fact sheet lists all of the recommendations detailed in this chapter of the report.
The Executive Office of the President (the White House), including its subordinate agencies, can use existing regulations and executive actions—including the administration of federal grants and federal contracts, the Defense Production Act, and the use of emergency powers such as the International Emergency Economic Powers Act (IEEPA)—to potentially address the challenges and opportunities of artificial intelligence (AI). Governing for Impact (GFI) and the Center for American Progress have extensively researched these existing authorities in consultation with numerous subject matter experts. However, the goal is to provoke a generative discussion about the following proposals, rather than outline a definitive executive action agenda. Each potential recommendation will require further vetting before agencies act. Even if additional AI legislation is needed, this menu of potential recommendations demonstrates that agencies have more options to explore beyond their current work and that they cannot, and should not, wait to use their existing authorities to address AI.
The White House contains numerous agencies and offices that address issues that intersect with AI, including the Office of Science and Technology Policy (OSTP), the National Economic Council (NEC), the National Security Council, and the Office of the National Cyber Director, among many others. Among the most critical is the Office of Management and Budget (OMB), which is responsible for implementing the president’s policies and contains the Office of Information and Regulatory Affairs (OIRA), the government’s regulatory review apparatus.
The White House has already taken action on AI, including the 2022 White House “Blueprint for an AI Bill of Rights”;4 the October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”;5 new OMB AI guidance for federal agencies finalized in March 2024 on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (OMB M-24-10 AI guidance);6 and the agency inventories and AI use cases7 required by the Advancing American AI Act.8 Much of the AI work produced by the White House has focused on broad principles, targeted efforts by agencies, and guiding the federal government’s use of AI.
The Office of Management and Budget
As the entity responsible for implementing the president’s agenda across the executive branch,9 the OMB will play a critical role in coordinating federal agencies as they work to mitigate the known risks posed by AI. This section explains how the OMB and the president can continue to protect Americans from those risks, including by issuing new guidance for agencies in their disbursement of federal funds and through an updated regulatory review process.
AI risks and opportunities
Government spending constituted more than a quarter of the nation’s gross domestic product in 2022.10 It is essential that such spending does not operate at cross purposes with the government’s efforts to mitigate the risks associated with AI. The government should avoid inadvertently or intentionally providing federal money to projects that could supercharge the negative consequences of AI. Relatedly, the government should take steps to ensure that its regulatory efforts—whether they are directly or indirectly related to AI—do not produce unintended consequences that amplify AI risks to the public.
The OMB M-24-10 AI guidance implemented a directive from the executive order on AI to guide “required minimum risk-management practices for Government uses of AI that impact people’s rights or safety.”11 The OMB M-24-10 AI guidance outlined 28 broad purposes for which the federal government’s use of AI is “presumed to be safety-impacting” or “rights-impacting.”12 The Biden administration has identified these categories as those that should be subject to heightened scrutiny and required minimum practices.
Of course, as the OMB recognized in its draft AI guidance for federal agencies, responsibly implemented AI has immense potential to improve operations across the federal government.13 For example, AI could assist citizens and businesses in navigating everyday interactions with federal agencies.14 Additionally, as the October 2023 executive order notes, AI could help identify and remediate cybersecurity vulnerabilities or aid in health care research and development.15 The OMB’s approach can appropriately balance the need to mitigate the risks of AI use with the potentially immense upsides.
Current state
The OMB has already incorporated AI risk mitigation into the government’s daily operations.16 In 2020, the OMB issued Memorandum M-21-06, which directed agencies to, among other things, describe the statutes that direct or authorize them to issue regulations related to the development or use of AI.17 However, with the notable exception of the Department of Health and Human Services (HHS),18 agencies generally failed to comply with this directive.19
Most recently, following President Joe Biden’s issuance of the October 2023 executive order on AI, the OMB released new draft AI guidance for federal agencies20 and finalized that guidance in March 2024 as the OMB M-24-10 AI guidance.21 This AI guidance established new requirements for agencies’ use of AI tools, including “specific minimum risk management practices for uses of AI that impact the rights and safety of the public.”22 These minimum practices include but are not limited to: completing an AI impact assessment (including the provenance and quality of data used in the AI); testing the AI for performance in a real-world context; independent evaluation; ongoing monitoring and periodic human review; ensuring human decision-making is kept in the loop; plain-language documentation; reducing algorithmic bias and using representative data; consulting affected groups; and maintaining opt-out options where practicable.23 Importantly, this guidance focused primarily on agencies’ procurement and use of AI, and not on their regulatory actions to mitigate AI risks created by private actors,24 although CAP and other groups25 have urged the OMB to redouble its efforts to collect agencies’ inventory of statutory authorities that could apply to AI, as required by Executive Order 13859.26
Relevant statutory authorities
The OMB should consider using its statutory authority regarding federal awards, regulatory review, and federal contracting to address key AI issues within its jurisdiction and to direct the federal government’s AI efforts.
Uniform guidance for federal awards
As part of its mission to harmonize and improve operations across agencies, the OMB has the authority to issue guidance to federal agencies on how to disburse awards of federal financial assistance.27
At 31 U.S.C. § 6307, the U.S. Code authorizes the OMB to “issue supplementary interpretative guidelines to promote consistent and efficient use of procurement contracts, grant agreements, and cooperative agreements.”28 At 31 U.S.C. § 503(a)(2), the OMB is directed to “establish governmentwide financial management policies for executive agencies” and “[p]rovide overall direction and leadership to the executive branch on financial management matters by establishing financial management policies and requirements, and by monitoring the establishment and operation of Federal Government financial management systems.”29
Under this authority, the OMB issued the “Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards,” or uniform guidance, in 2014, which is codified at 2 C.F.R. Part 200. The uniform guidance sets forth procedural and substantive guidelines that federal agencies must follow and may consider when disbursing federal awards to nonfederal entities.30 Among other things, the uniform guidance requires federal agencies to publish a notice of funding opportunity for each award, establish a merit review process for applications, and consider the risks associated with making an award, taking into account the awardee’s financial stability, management controls and methods, and history of performance.31 Importantly, federal agencies may make exceptions to the uniform guidance’s requirements in their grant processes, and must do so when required by the federal statute governing a particular award.32
In addition to the uniform guidance, the OMB often issues guidance in the form of memoranda and circulars to agencies, advising them on how they should disburse federal financial assistance. For example, the OMB issued a 2020 memorandum to agency heads detailing how they could change and relax administrative requirements for grant recipients during the COVID-19 public health emergency.33 Additionally, in 2023, it released another memorandum applying the “Buy America” provisions from a 2021 executive order and the Infrastructure Investment and Jobs Act to federal grant awardees and subawardees.34
Recommendations
Based on the above-cited authority, the OMB could consider the following actions:
- Develop guidance that adapts the recent OMB M-24-10 AI guidance35 to apply to AI use by other recipients of federal funds, including grants, loans, and other forms of financial assistance. The guidance could establish a similar framework for agencies to assess the safety- and rights-impacting purposes of AI from the OMB M-24-10 AI guidance36 and to mitigate the harmful consequences of those risks through minimum practices for AI risk management. The guidance could urge agencies to impose conditions on federal funds to the extent the statutory sources of those funds allow such conditions.
- Update the uniform guidance for federal awards at 2 C.F.R. Part 200, pursuant to 31 U.S.C. §§ 6307 and 503(a)(2), to incorporate AI risk assessment—and the steps that applicants are taking to mitigate risks—into agencies’ consideration of applications for federal funding, as permitted by the statutory sources for such funding. Specifically, the OMB could update 2 C.F.R. § 200.206(b)(2) to include an assessment of AI risk within its risk evaluation requirements; update 2 C.F.R. § 200.204(c) to require or suggest that the full text of funding opportunity announcements include any AI risk evaluation requirements; and update 2 C.F.R. § 200.211 to require or recommend that federal award publications include the results of AI risk analyses produced during the application process. The current risk evaluation section permits a federal agency to consider the “applicant’s ability to effectively implement statutory, regulatory, or other requirements imposed on non-Federal entities.”37 A revised uniform guidance could explicitly suggest that federal agencies consider the potential for grantees’ use of AI to impact their ability to comply with such requirements and the impact AI use could have on the other categories of risk specified in the current guidance.
These proposals could help prevent federal funds from going toward projects that might accelerate the proliferation of AI harms that affect the safety of the public or the rights of individuals. Further study is needed to determine the exact form that AI risk analysis in federal awards should take.
Updates to regulatory review
Presidents since Richard Nixon have implemented systematic reviews of rulemakings to ensure consistency with statutes and presidential priorities.38 President Ronald Reagan’s Executive Order 12291 centralized regulatory review in the OIRA, an office within the OMB, and required that agencies conduct detailed benefit-cost analyses of proposed regulatory actions.39 And President Bill Clinton’s Executive Order 12866 reduced the scope of regulatory review to only those regulatory actions deemed “significant.”40
President Biden most recently revised Executive Order 12866 in April 2023.41 Among other changes, the revision increased the threshold for “significance,” directed federal agencies to engage underrepresented communities during rulemaking processes, and directed the OMB to make corresponding changes to Circular A-4, which implements the regulatory review process.42
AI has the potential to impact every aspect of our economy, government, and society—as evidenced by the expansive scope of the October 2023 executive order on AI,43 the myriad safety-impacting and rights-impacting government uses of AI in the OMB M-24-10 AI guidance,44 and the wide range of topics contemplated in the 2023 OSTP request for information for a national AI strategy.45 It is thus reasonable for all regulatory agencies to begin considering how AI affects their existing and future ability to carry out their regulatory mandates.
Recommendations
Based on the above-cited authority, the president, OMB, and OIRA could consider the following actions:
- Issue a new requirement in the regulatory review process directing agencies to include a brief assessment of 1) the potential effects of significant regulatory actions on AI development, risks, harms, and benefits, and 2) the current and anticipated use of AI by regulated entities and how that use is likely to affect the ability of any proposed or final rule to meet its stated objectives. This requirement could follow the format of the benefit-cost analysis required by the current Executive Order 12866. The modification to the regulatory review process could take the form of a new executive order, a presidential memorandum,46 or an amendment to Executive Order 12866 that adds a subsection to § 1(b) and/or § 6(a).
- Issue a presidential memorandum directing agencies and encouraging independent agencies to review their existing statutory authorities to address known AI risks and consider whether addressing AI use by regulated entities through new or ongoing rulemakings would help ensure that this use does not undermine core regulatory or statutory goals. Such a presidential memorandum would primarily give general direction, similar to the Obama administration’s behavioral sciences action,47 rather than require a specific analysis of every regulation. The presidential memorandum could direct executive departments and agencies, or perhaps even the chief AI officer established in the 2023 executive order on AI and further detailed in the OMB M-24-10 AI guidance,48 to:
- Identify whether their policies, programs, or operations could be undermined or impaired by the private sector use of AI tools.
- Comprehensively complete the inventory of statutory authorities first requested in OMB Memorandum M-21-06,49 which directed agencies to evaluate their existing authorities to regulate AI applications in the private sector.
- Outline strategies for deploying such statutory authorities to achieve agency goals in the face of identified private sector AI applications.
Federal contracting
Among its recent AI initiatives, the Biden administration has taken steps to address AI in federal contracting. The October 2023 executive order on AI encouraged the U.S. Department of Labor (DOL) to develop nonbinding nondiscrimination guidance for federal contractors using AI in their hiring processes,50 which the DOL issued in April 2024.51 Additionally, the OMB M-24-10 AI guidance both offers and anticipates additional guidance concerning a distinct issue: agencies’ procurement of AI tools.52
This section proposes more forceful action. Through the Federal Property and Administrative Services Act (FPASA),53 the federal government retains the authority to impose binding conditions that promote economy and efficiency in federal procurement54 on federal contractors,55 who collectively employ 1 in 5 U.S. workers.56 This section explains why and how the administration could issue binding regulations to protect the federal contracting workforce from nefarious or poorly developed AI management tools, including but not limited to preventing discrimination in hiring. It also explains why the logic underpinning recent adverse FPASA court decisions would not apply to FPASA conditions on using AI management tools.
AI risks and opportunities
AI harms in the workplace are well documented,57 and government contractors are not immune to these common problems. Many of these harms are explored in more depth in Chapter V, which discusses AI harms affecting all workers. These include discrimination, safety and health, wage and hour compliance, misclassification of employee roles, worker power and datafication, and workforce training and displacement.58 In the federal contracting context, several harms present unique challenges:
- Discrimination: For example, as highlighted in the AI Bill of Rights, automated workplace algorithms, which often rely on AI models, have been shown to produce bias in hiring, retention, and firing processes.59 The OMB M-24-10 AI guidance highlighted that government use of AI to “[d]etermin[e] the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination,” should be presumed rights-impacting.60 In one reported case, a now-discontinued hiring tool built and used by Amazon downgraded women applicants by penalizing resumes that included the word “women’s” when ranking candidates.61 (A stylized sketch of how such bias can arise appears after this list.)
- Physical and mental health harms: Automated management increases worker physical and mental health risks62 and has dire implications for employee privacy.63 The OMB M-24-10 AI guidance highlighted that government use of AI to incorporate “time-on-task tracking; or conducting workplace surveillance or automated personnel management” should be presumed rights-impacting.64
- Privacy breaches: Of particular importance to government contracting, AI technologies may increase government vulnerability to privacy breaches when contractors are tasked with handling sensitive data or tasks.65
- Wage and hour compliance: As technology blurs the line between work and nonwork time, it may become more difficult to assess what time is compensable and therefore should be considered in producing pay determinations. Other risks include opacity and manipulation in algorithmic wage-setting technologies66 and digital wage theft enabled by timesheet rounding.67
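To make the discrimination mechanism described above concrete, the following minimal sketch, using entirely synthetic data, shows how a screening model trained on biased historical hiring decisions can learn to penalize a proxy feature, such as the word “women’s” on a resume, even though that feature says nothing about skill. Every feature, parameter, and number in the code is hypothetical and does not describe any real vendor’s system.

```python
# Hypothetical illustration only: synthetic data, invented parameters.
# A logistic screening model fit to biased historical hiring labels
# learns a negative weight on a proxy feature unrelated to skill.
import math
import random

random.seed(0)

def make_resume():
    skill = random.gauss(0.0, 1.0)             # true qualification
    proxy = 1 if random.random() < 0.5 else 0  # e.g., "women's" appears on the resume
    # Biased historical label: past reviewers discounted resumes with the proxy.
    hired = 1 if (skill - 1.0 * proxy + random.gauss(0.0, 0.5)) > 0 else 0
    return skill, proxy, hired

data = [make_resume() for _ in range(2000)]

# Fit logistic regression with plain batch gradient descent.
w_skill = w_proxy = bias = 0.0
lr = 0.5
for _ in range(200):
    g_skill = g_proxy = g_bias = 0.0
    for skill, proxy, hired in data:
        p = 1.0 / (1.0 + math.exp(-(w_skill * skill + w_proxy * proxy + bias)))
        err = p - hired
        g_skill += err * skill
        g_proxy += err * proxy
        g_bias += err
    n = len(data)
    w_skill -= lr * g_skill / n
    w_proxy -= lr * g_proxy / n
    bias -= lr * g_bias / n

print(f"learned weight on skill:       {w_skill:+.2f}")
print(f"learned weight on proxy term:  {w_proxy:+.2f}  (negative = penalized)")
```

Because the proxy feature correlates with the biased historical labels, the fitted model reproduces the discrimination; this is the dynamic that the predeployment testing and bias audits recommended later in this section are meant to surface.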
Of course, AI offers opportunities to promote the interests of the federal contracting workforce as well. For example, AI tools could potentially allow compliance officers to better identify violations of preexisting FPASA standards.
Current state
The executive order on AI required the DOL to issue guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.68 The DOL has recently finalized that guidance.69 The guidance explains how federal contractors and subcontractors who use AI, algorithms, and automated systems may be at risk of violating the Equal Employment Opportunity Act and provides examples of how contractors can meet their compliance obligations.70 Importantly, the guidance states that federal contractors cannot delegate compliance responsibilities to outside entities, including vendors, and provides several promising practices to maintain compliance.71
Separately, the OMB M-24-10 AI guidance has proposed standards for agencies’ procurement of AI technology and promises to “develop an initial means to ensure that Federal contracts for the acquisition of an AI system or service align with the guidance in this memorandum”72 in accordance with the Advancing American AI Act,73 which was signed into law in December 2022,74 and the 2023 executive order on AI.75 On March 29, 2024, the OMB posted a request for information on “Responsible Procurement of Artificial Intelligence in Government” to help develop that guidance.76
Despite these important steps, neither the executive order on AI nor the OMB M-24-10 AI guidance appears likely to cover the AI tools federal contractors may be using to manage their workforces outside of hiring, and the forthcoming AI procurement guidance announced in the OMB M-24-10 AI guidance seems unlikely to do so either.
Relevant statutory authority
The FPASA authorizes the president to “prescribe policies and directives that the President considers necessary to carry out this subtitle,” namely the FPASA’s goal of promoting economy or efficiency in federal procurement.77
Past administrations have invoked the FPASA to regulate federal contracting in various ways. In the 1970s, courts held that the FPASA authorized the federal government to require contractors to abide by certain anti-discrimination policies.78 Other administrations have invoked the FPASA to require federal contractors to comply with certain workplace standards, including wage and price standards,79 regulations concerning project labor agreements,80 and requirements that contractors provide employees notice of their rights to opt out of joining a union or paying mandatory dues outside of representational activities.81 The federal government has also promulgated FPASA rules requiring contractors to disclose known violations of federal criminal law or of the civil False Claims Act,82 to create business ethics awareness and compliance programs,83 and to use the E-Verify system to confirm the employment eligibility of workers.84 In 2011, the Obama administration used the FPASA to mandate that contractors implement screening systems to prevent employee conflicts of interest.85 And in 2016, the Obama administration relied on its FPASA authority to require that federal contractors provide paid sick leave.86
More recently, the Biden administration has deployed its FPASA authority in two high-profile cases: 1) to impose a vaccine-or-test mandate on the federal contracting workforce and 2) to raise the minimum wage for federal contractors’ employees to $15 per hour in 2022.87 Challengers have won injunctions against both rules in federal courts—although, as explained below, for reasons that do not apply to this proposal.88
Recommendations
As the OMB prepares the forthcoming procurement guidance mentioned in the OMB M-24-10 AI guidance,89 it may also want to consider whether it can include standards that:
- Ensure baseline levels of competition and interoperability, such that agencies do not get locked into using the services of a single AI firm.
Under its FPASA authority, the Federal Acquisition Regulatory Council,90 which is chaired by the OMB’s administrator for federal procurement policy, can promulgate a rule outlining AI-related protections for all employees at firms that hold a federal contract, potentially including the following actions:
- Extend the presumed safety-impacting and rights-impacting uses of AI from the OMB M-24-10 AI guidance to federal contractors and their use of AI systems for workplace management.91
- Require federal contractors employing automated systems to use predeployment testing and ongoing monitoring to ensure safety, to ensure that workers are paid for all compensable time, and to mitigate other harmful impacts.
- Establish specific requirements regarding pace of work, quotas, and worker input to reduce the safety and health impacts of electronic surveillance and automated management.
- Mandate disclosure requirements when employees are subject to automation or other AI tools.
- Provide discrimination protections related to algorithmic tools, including ensuring that automated management tools can be adjusted to make reasonable accommodations for workers with disabilities.
- Ensure privacy protections for employees and users of AI.
Many of these recommendations follow from the executive order on AI,92 the OMB M-24-10 AI guidance,93 the AI Bill of Rights,94 and the National Institute of Standards and Technology (NIST) AI Risk Management Framework;95 other standards from these documents may also be worth considering.
Regulating the use of AI in government contracts advances the FPASA’s statutory goals of economy and efficiency in several ways. For example, AI hiring tools often rely on data that already suffers from bias,96 and relying on such tools may bake in these biases and mask them from employers. These biases may increase employee turnover and make contractors vulnerable to legal risks, leading to increased costs for contractors and the government. Furthermore, AI-driven practices such as algorithmic management have been linked to safety issues, including increased stress for workers under employer surveillance.97 Worker stress can lead to increased mistakes and safety issues, creating added costs for the government down the line.
These justifications find close analogs in the reasoning that past administrations have used to impose new FPASA obligations that have been upheld in federal court. For example, in Chamber of Commerce v. Napolitano, a federal district court upheld a requirement that contractors ascertain the immigration status of certain new hires using E-Verify, finding that a reasonably close nexus exists so long as the “President’s explanation for how an Executive Order promotes efficiency and economy [is] reasonable and rational.”98 In that case, the court found that President George W. Bush’s conclusion that the E-Verify system would result in fewer immigration enforcement actions, fewer undocumented workers—and “therefore generally more efficient and dependable procurement sources”—was sufficient to meet the nexus requirement.99 The court also held that “[t]here is no requirement … for the President to base his findings on evidence included in a record.”100 Similarly, in this context, regulating the use of AI in government contracts would also lead to a more “dependable procurement” workforce since AI technologies would be tested to root out possible bias or other automation harms. Additionally, some of the earliest exercises of modern presidential procurement power concerned anti-discrimination measures.101
Finally, it is important to note that two high-profile efforts by the Biden administration to impose laudable requirements on federal contractors have suffered setbacks in court. One was an order,102 enjoined by the 5th, 6th, and 11th U.S. Circuit Courts of Appeals,103 obligating contract recipients to require their employees to wear face masks at work and be vaccinated against COVID-19. Another order increased the hourly minimum wage paid by parties who contract with the federal government for workers on or in connection with a federal government contract.104 Despite favorable district court rulings in Arizona and Colorado,105 a court in the Southern District of Texas enjoined the application of the minimum wage rule in three Southern states.106 Recently, however, the 10th U.S. Circuit Court of Appeals upheld the minimum wage rule as applied to seasonal recreational workers, finding that the standard for finding a nexus between the rule and FPASA’s goal of “economy and efficiency” is lenient.107
However, this proposed rule is distinguishable from the minimum wage and COVID-19 rules in several ways. In the COVID-19 case, the 5th Circuit, citing the major questions doctrine, found the FPASA did not clearly authorize the president to impose requirements concerning the conduct of the employees of federal contractors, as opposed to regulating the contractor-employers themselves.108 A rule regulating the use of AI in government contracts would not impose any requirements on employee conduct, even indirectly. Hence, this decision is largely irrelevant to the proposed action.
Even according to the flawed reasoning of the Texas district court’s opinion enjoining the minimum wage rule in three states, the administration could distinguish a rule regulating the use of AI under several theories. For one, regulating the use of AI would not have nearly the same economic ramifications for contractors since it would not require immediate wage increases across the workforce. The proposed rule’s focus would be quality assurance for the use of AI systems, leading to likely savings for the government—the kind of purchasing considerations that fit squarely within the court’s framing of the FPASA as primarily concerned with the “supervisory role of buying and selling of goods.”109
Defense Production Act
The Defense Production Act (DPA) includes a powerful and underutilized subpoena power that may offer the best opportunity for the federal government to get a look inside certain AI models.110
Current state
The executive order on AI laudably invokes the DPA to impose a limited disclosure obligation on the developers of certain new AI models.111 Specifically, the executive order directs the U.S. Department of Commerce to require companies “developing or demonstrating an intent to develop potential dual-use foundation models” to report—on an ongoing basis—training parameters, model weights, and “red-teaming” testing results based on forthcoming NIST guidance.112 According to a news report, these requirements will apply to “all future commercial AI models in the US, but not apply to AI models that have already been launched.”113 The executive order also directs the Department of Commerce to require that people or companies that acquire, develop, or possess “a potential large-scale computing cluster” report the existence and location of those clusters.114
Relevant statutory authority
The executive order’s disclosure directive is well-grounded in statutory authority, as illustrated below. This section seeks to underscore that the president’s DPA authority plausibly extends beyond what the proposal laid out in the executive order.
When it comes to subpoenas, the DPA holds:
The president shall be entitled … to obtain such information from, require such reports and the keeping of such records by, make such inspection of the books, records, and other writings, premises or property of … any person as may be necessary or appropriate, in [the President’s] discretion, to the enforcement or the administration of this chapter and the regulations or orders issued thereunder … [and] to obtain information in order to perform industry studies assessing the capabilities of the United States industrial base to support the national defense.115
This language is quite broad, particularly in the first grant of authority. The second, more qualified grant for industry studies, at least references the terms “industrial base,” which is not defined in the statute, and “national defense,” which is statutorily defined in part as “critical infrastructure protection and restoration.”116 “Critical infrastructure” is defined, in turn, as “any systems and assets, whether physical or cyber-based, so vital to the United States that the degradation or destruction of such systems and assets would have a debilitating impact on national security, including, but not limited to, national economic security and national public health or safety.”117 There exists a presumption of confidentiality, waivable by the president, if the company so attests, per 50 U.S.C. § 4555(d).118
Beyond military applications, then, the DPA’s subpoena power appears to extend, at minimum, to any AI application that poses a serious threat to basic services—for example, the energy grid or water system—the broader economy, or public health. Notably, the executive order’s definition of dual-use foundation models appears to be somewhat coextensive with the DPA’s definition of “critical infrastructure.”119
However, it is worth emphasizing that the DPA empowers the president to take additional action if necessary. For example, nothing in the statute prevents the administration from applying its reporting requirements to existing AI applications, rather than future ones, as reporting indicates is the current plan.120 Indeed, while the executive order envisions creating an ongoing notification and reporting system, the president still retains the statutory authority to demand, on a one-off basis, a broad array of information from companies that own AI applications capable of threatening the statute’s capacious definition of “national defense.” This authority similarly would allow the president to seek relevant information beyond training parameters, model weights, and red-teaming test results.
Emergency powers
As the nation’s chief executive, the president has a constitutional obligation to respond to exigent national security threats and national emergencies.121 Additionally, Congress has enacted specific statutory schemes endowing the president with enhanced powers under certain emergent circumstances.122 This section explains several potential applications of the president’s emergency powers that are relevant to known risks of AI. It suggests that the White House define the criteria that would lead the president to use these authorities. It also proposes drafting an emergency response plan the government can follow once those criteria are met.
AI risks and opportunities
It is possible that some future AI application may suddenly pose risks that demand an exigent response. Examples of such circumstances might include:
- Financial chaos: AI used in stock prediction and financial decision-making may raise the risk of stock market collapse by increasing the homogeneity of stock trading. As Securities and Exchange Commission (SEC) Chair Gary Gensler warned in a 2020 paper, if trading algorithms all make a simultaneous decision to sell the same asset, it could tank the stock market.123 (A stylized simulation of this herding dynamic appears after this list.) Sens. Mark Warner (D-VA) and John Kennedy (R-LA) have introduced legislation to address threats to financial markets from AI, with Sen. Warner noting, “AI has tremendous potential but also enormous disruptive power across a variety of fields and industries – perhaps none more so than our financial markets.”124
- National security and biodefense: Some of the same features that make AI a revolutionary technology with great potential for good—for instance, reducing the cost and complexity of scientific endeavors—may also pose national security threats. AI may make it easier for foreign governments and nonstate actors to achieve breakthroughs in areas such as autonomous weaponry, biological warfare, and mass manipulation through high-quality mis-/dis-/mal-information. Any or all of the above could threaten the nation’s security.125 The 2023 executive order on AI outlined numerous taskings related to addressing AI’s impact on cybersecurity and biosecurity.126
- Corrupted information and weaponized communications: The 2022 National Science and Technology Council (NSTC) report, “Roadmap for Researchers on Priorities Related to Information Integrity Research and Development,” noted four main categories of harms from corrupted information: harms to consumers and companies, individuals and families, national security, and society and the democratic process.127 In particular, experts repeatedly cite rapidly disseminated and weaponized information campaigns as a key threat of greatly expanded AI. AI allows bad actors to create and publish enormous amounts of mis-/dis-/mal-information that are difficult to distinguish from truth.128 Increasingly sophisticated AI will exploit “cognitive fluency bias,” which refers to humans’ tendency to give more weight to information conveyed in well-written text content or compelling visuals.129 This kind of misinformation is already a key strategy of nonstate and state actors in Russia, China, and Iran, among other countries.130 For instance, a crude version of this “deepfake” strategy was deployed in the Russian war against Ukraine, wherein the Russian government published an AI-generated video of Ukrainian President Volodymyr Zelenskyy calling on Ukrainians to lay down their arms.131 In May 2024, before the U.S. Senate Select Committee on Intelligence, Director of National Intelligence Avril Haines testified:
For example, innovations in AI have enabled foreign influence actors to produce seemingly-authentic and tailored messaging more efficiently, at greater scale, and with content adapted for different languages and cultures. In fact, we have already seen generative AI being used in the context of foreign elections.132
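As a concrete illustration of the herding dynamic flagged in the financial chaos bullet above, the following minimal simulation, built on invented numbers and a deliberately crude linear price-impact assumption, contrasts a market in which every trading algorithm shares one sell trigger with a market in which triggers are diverse. It is a sketch of the mechanism, not a model of any real market.

```python
# Hypothetical illustration only: invented numbers, crude linear price impact.
# When every algorithm shares one sell trigger, selling concentrates in a
# single day; diverse triggers spread the same selling over many days.
import random

random.seed(1)

N_AGENTS = 100
IMPACT_PER_SELLER = 0.4 / N_AGENTS  # assumed one-day price impact per selling agent

def worst_one_day_drop(thresholds):
    """Walk a noisy, slowly deteriorating signal; each agent sells once
    when the signal first falls below its threshold. Return the largest
    single-day price drop implied by the impact assumption."""
    signal, sold, worst = 1.0, set(), 0.0
    for _ in range(100):
        signal += random.gauss(-0.02, 0.03)  # downward drift, with noise
        sellers = [i for i, t in enumerate(thresholds)
                   if i not in sold and signal < t]
        sold.update(sellers)
        worst = max(worst, IMPACT_PER_SELLER * len(sellers))
    return worst

# Scenario A: one shared model -> one shared trigger -> simultaneous selling.
homogeneous = [0.5] * N_AGENTS
# Scenario B: diverse models -> staggered triggers -> staggered selling.
heterogeneous = [random.uniform(0.2, 0.8) for _ in range(N_AGENTS)]

print(f"worst one-day drop, identical algorithms: {worst_one_day_drop(homogeneous):.0%}")
print(f"worst one-day drop, diverse algorithms:   {worst_one_day_drop(heterogeneous):.0%}")
```

With identical triggers, all selling lands on the day the shared threshold is crossed; with diverse triggers, the same total selling spreads across many days, which is the homogeneity concern Gensler raises.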
Current state
The national security apparatus has begun to react to the potential threats of AI proliferation. Officials at the U.S. Department of Defense have taken steps to better defend the country’s information ecosystem from rapidly proliferating mis-/dis-/mal-information,133 issued the 2022 “Responsible Artificial Intelligence Strategy and Implementation Pathway” report,134 and spoken publicly about the U.S. military’s AI strategy.135
In August 2023, President Biden signed Executive Order 14105, “Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern.”136 This executive order declared a national emergency based on advances made by “countries of concern” in “sensitive technologies and products critical for the military, intelligence, [and] surveillance.”137 The president issued the executive order pursuant to the International Emergency Economic Powers Act (IEEPA). The executive order included AI in its list of sensitive technologies and directed the U.S. Treasury Department to prohibit certain outbound investments into those countries of concern and to establish notification requirements for others.138 Relatedly, the Commerce Department initiated export controls in October 2022 that restrict the ability of companies to sell certain advanced computing semiconductors or related manufacturing equipment to China.139 The Commerce Department expanded its AI export controls in October 2023.140
The 2023 executive order on AI also recognized the potential national security implications of the spread of AI, and directed agency actions to mitigate AI risks in critical infrastructure and cybersecurity.141 The order highlighted the potential for AI to increase biosecurity risks and directed various stakeholders to produce a study of those risks and potential mitigation options.142 The executive order also tasked the national security adviser with delivering an additional “National Security Memorandum” on AI to the president in 2024.143
As noted above, President Biden declared a national emergency pursuant to the IEEPA in August 2023 with Executive Order 14105,144 which joined other emergencies involving technology declared via executive order. This includes a national emergency declared in President Donald Trump’s May 2019 Executive Order 13873, “Securing the Information and Communications Technology and Services Supply Chain”;145 it was further expanded by President Biden’s June 2021 Executive Order 14034, “Protecting Americans’ Sensitive Data From Foreign Adversaries,”146 and again in his February 2024 Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.”147 Executive Order 14117 directed various federal agencies to issue regulations prohibiting data transfers—through data brokers, employment agreements, investment agreements, and otherwise—to “countries of concern.”148
Relevant authorities
This section identifies ways that the president could exercise their authority in the event of—and in anticipation of—AI systems that may pose a threat to the safety of the American people. Upon the president’s declaration of a national emergency, several authorities throughout the U.S. Code become available.149 These include economic tools such as the IEEPA,150 which authorizes the president to regulate or prohibit international transactions in the event of a national emergency. Since the law’s enactment, presidents have declared 69 emergencies pursuant to the IEEPA.151 At 50 U.S.C. § 1701, the IEEPA authorizes the president to use the statute’s authorities “to deal with any unusual and extraordinary threat, which has its source in whole or substantial part outside the United States, to the national security, foreign policy, or economy of the United States, if the President declares a national emergency with respect to such threat.”152 Subject to some exceptions,153 upon declaration of a national emergency, 50 U.S.C. § 1702 provides the president with authority to take extensive action to “investigate, regulate, or prohibit” a wide range of international transactions and freeze assets of foreign actors.154 At 50 U.S.C. § 1708(b), the IEEPA authorizes the president to “block and prohibit all transactions in all property and interests in property of” foreign persons or entities engaged in or benefiting from “economic or industrial espionage in cyberspace, of technologies or proprietary information developed by United States persons.”155
Available emergency authorities also include infrastructural powers such as those the president possesses over the nation’s communications infrastructure. For example, under the Communications Act at 47 U.S.C. § 606(c), upon “proclamation by the President that there exists war or a threat of war, or a state of public peril or disaster or other national emergency, or in order to preserve the neutrality of the United States,” the president may suspend or amend regulations applicable to any or all stations or devices capable of emitting electromagnetic radiation and may cause the closing of any radio station.156
The president also possesses emergency powers to modify federal contracts. At 41 U.S.C. § 3304, the U.S. Code authorizes executive agencies to use noncompetitive procurement procedures if “it is necessary to award the contract to a particular source to maintain a facility, producer, manufacturer, or other supplier available for furnishing property or services in case of a national emergency or to achieve industrial mobilization.”157
In addition to these and more specific statutory authorities, the president also possesses inherent Article II authority to protect the country from immediate threats in other ways.158 As the U.S. Supreme Court has long recognized, circumstances may arise that demand presidential action in the absence of congressional delegation—particularly, during emergency situations.159 CAP has previously highlighted the need for the administration to prepare to address AI systems that may threaten the safety of the American people.160
Recommendations
To prepare the government to use the above powers in the event of an AI system posing emergency threats to the United States, the White House could consider the following actions:
- Direct the National Security Council to develop a memorandum that outlines scenarios wherein AI applications could pose an emergency threat to the country and identifies actions that the president could take through existing statutory schemes and their inherent executive authority under Article II of the Constitution to resolve the threat. The memorandum should study the landscape of imaginable AI applications and devise criteria that would trigger emergency governmental action. Such a memorandum could complement or be incorporated as part of the National Security Memorandum required by the October 2023 executive order on AI.161 The memorandum’s design could echo the National Response Plan, originally developed after 9/11 to formalize rapid government response to terrorist attacks and other emergency scenarios.162 The memorandum could consider authorities:
- Inherent to the president’s constitutional prerogative to protect the nation: For example, the memorandum could identify when it could be appropriate for the president to take military or humanitarian action without prior congressional authorization when immediate action is required to prevent imminent loss of life or property damage.163
- Under the IEEPA: For example, the memorandum could consider the administration’s authority to expand the policies established in the August 2023 IEEPA executive order, using the statute to freeze assets associated with AI technologies and countries of concern that contribute to the crisis at hand.164 Follow-up executive action could identify new countries of concern as they arise. As another example, the memorandum could identify triggers for pursuing sanctions under 50 U.S.C. § 1708(b) on foreign persons that support the use of proprietary data to train AI systems or who steal proprietary AI source code from sources in the United States. The memorandum could also explore the president’s authority to investigate, regulate, or prohibit certain transactions or payments related to runaway or dangerous AI models in cases where the models are trained or operate on foreign-made semiconductors and the president determines that such action is necessary to “deal with” a national security threat. Even if that model is deployed domestically or developed by a domestic entity, it may still fall within reach of the IEEPA’s potent § 1702 authorities if, per 50 U.S.C. § 1701, the model: 1) poses an “unusual and extraordinary threat,” and 2) “has its source in whole or substantial part outside the United States.” The administration can explore whether AI models’ dependence on foreign-made semiconductors for training and continued operation meets this second requirement. Indeed, scholars have previously argued that the interconnectedness of the global economy likely subjects an array of domestic entities to the IEEPA in the event sufficiently exigent conditions arise.165
- Under the Communications Act: For example, the memorandum could identify scenarios in which the president could consider suspending or amending regulations under 47 U.S.C. § 606(c) regarding wireless devices to respond to a national security threat.166 The bounds of this authority are quite broad, covering an enormous number of everyday devices, including smartphones that can emit electromagnetic radiation.167
- To modify federal contracts: For example, the memorandum could identify possibilities for waiving procurement requirements in a national emergency if quickly making a federal contract with a particular entity would help develop capabilities to combat a rapidly deploying and destructive AI.168
- To take other statutorily or constitutionally authorized actions: The memorandum could organize a process through which the White House and national security apparatus would, upon the presence of the criteria outlined in the memorandum, assess an emergent AI-related threat, develop a potential response, implement that response, and notify Congress and the public of such a response.169 It could also request a published opinion from the Office of Legal Counsel on the legality of the various response scenarios and decision-making processes drawn up pursuant to the recommendations above. This will help ensure that the president can act swiftly but responsibly in an AI-related emergency.
- Share emergency AI plans with the public: The administration should share the emergency processes and memoranda it develops with Congress and its relevant committees, and with the public where possible.
Conclusion
The White House and its subordinate agencies, including the OMB and OIRA, have taken important steps to begin safeguarding government operations and the public from the potential harms of AI. Yet as this section illustrates, policymakers retain a number of untapped tools that deserve further consideration to address AI. As AI control technologies and protocols cohere in the coming years, GFI and CAP hope that the preceding recommendations empower officials to think broadly about how executive action could help build a safe and productive AI ecosystem.