What You Should Know
The development and deployment of artificial intelligence, including generative AI and large language models, is moving swiftly and may outpace any other technological advancement to date.
This technology will affect virtually every public policy issue area, requiring both vertical and horizontal AI policy solutions.
The Biden-Harris administration must take an all-of-government approach to properly harness AI’s benefits to society while simultaneously mitigating its risks.
The White House can lead this effort by outlining a national AI strategy that makes the AI Bill of Rights binding U.S. law, contains a national jobs plan, and addresses the equity and discrimination risks of AI applications.
On July 7, 2023, the Center for American Progress submitted a comment letter in response to the White House Office of Science and Technology Policy’s (OSTP’s) request for information (RFI) on the national priorities for artificial intelligence (AI). AI1 continues to rapidly advance in sophistication and capture public attention, with generative AI from large language models (LLMs)2 quickly reaching 100 million commercial users and a spate of AI tools already available to federal agencies through leading cloud computing services.3 To date, this speed of deployment has entirely outpaced any governance or regulation of the technology, including its proprietary use by the government.
AI will undoubtedly affect nearly every facet of Americans’ lives. Therefore, it is critical that any national AI strategy4 enumerate how to protect Americans’ civil rights and safety; guard against discrimination and the perpetuation of inequities; promote economic growth and good jobs; respond to labor disruptions; and improve the delivery of public services. The federal government must use all of its tools to address the challenges and opportunities of AI, including immediate executive action, maximum utilization of existing authorities, new AI legislation, and international cooperation. The White House, in particular, should meet the AI moment by developing and publishing a multi-issue national AI strategy that addresses as many of these policy areas as possible to ensure the safety, well-being, and security of the American people. This strategy should chiefly make the AI Bill of Rights binding U.S. law, contain a national jobs plan, and address the equity and discrimination risks of AI applications. To that end, CAP recommends the following for a national AI strategy and national AI priorities:
- Take immediate executive action on AI. To address AI’s challenges and opportunities head-on, and to prioritize protecting the American people and safeguarding the country’s values, President Joe Biden should immediately issue an executive order (EO) on AI.5 This new executive order should make the White House “Blueprint for an AI Bill of Rights”6 binding for federal government use of AI, set an end-of-year deadline for a national AI strategy, and immediately establish a new White House Council on Artificial Intelligence, among other things—all actions previously recommended by CAP7 and endorsed by numerous civil, technology, and human rights organizations.8
- Establish principles-based AI legislation and regulation. CAP believes that fundamentally, direct government regulation will be needed to ensure the development and deployment of trustworthy AI.9 Principles-based regulation, especially for high-risk uses, is critical to ensuring that new and existing artificial intelligence technologies are developed and deployed in a trustworthy and safe manner. In addition to codifying some of the AI Bill of Rights into law, the Biden-Harris administration should support legislative proposals that include new or updated laws to address liability related to AI tools, designate high-risk cases or sectors, require pre-approval for deploying the highest-risk AI applications, prohibit unacceptable risk uses, and prevent and respond to threats of catastrophic risk.
- Ensure AI advances equity and strengthens civil rights. For as long as technology has existed, it has been a tool with the potential to strengthen civil rights and advance equity. Yet too often, technology has resulted in inequitable outcomes and the regression of hard-won civil rights. To tip the balance toward equity, any new AI executive order, Office of Management and Budget (OMB) AI guidance, or other AI efforts from the federal government must center equity and civil rights as ordered by 2021 EO 13985, “Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,”10 and 2023 EO 14091, “Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.”11
- Guarantee AI applications protect rights, safety, and national security. To lead on regulating the risks and benefits AI poses to public safety and national security, it is critical that the United States maintain both its technological superiority and its values while promoting its geopolitical competitive advantage. The United States must act as a global model for a human rights-centered approach to AI applications that reinforces democratic principles instead of trading off against them.12 To achieve this, international cooperation among like-minded countries must expand, with the goal of establishing global red lines in developing and deploying AI.13
- Ensure increased AI usage does not negatively affect climate change goals, and that AI advances clean energy and protects the environment. As public investments advance the development of domestic clean energy infrastructure and supply chains, there is an opportunity to leverage AI to improve the availability, efficiency, and affordability of clean energy.14 However, greater use of advanced AI could significantly increase carbon emissions due to the increased computing power required15 and an expected uptick in demand following widespread deployment of generative AI.16 To mitigate this, OSTP should be tasked with assessing whether AI will negatively affect the administration’s climate change goals for 2030 and 2050 and with issuing a report on the positive and negative effects of AI on climate change.
- Deploy AI in ways that promote economic growth and good jobs. AI has the potential to benefit workers by creating jobs, raising worker productivity, lifting wages, boosting economic growth, and increasing living standards.17 However, AI can also harm workers by displacing them, eroding job quality, increasing unemployment, and exacerbating inequities.18 The national AI strategy must contain a plan to address economic impacts from using AI, especially potential job losses. To ensure workers benefit, the government should respond to the needs of workers displaced by AI adoption, including with a job guarantee, and should steer AI generation in a manner to complement—not replace—workers.
- Bolster democracy and protect civic participation from AI-related threats. It is critical that efforts are taken—both by the technology companies that develop and deploy AI and by the federal government—to protect against the threats that AI poses to U.S. democracy. AI that stands to destabilize democratic institutions should not be deployed without meaningful protections and regulatory frameworks. The White House should encourage limiting the use of AI systems in election administration; promote and encourage innovation to detect deep fakes and disinformation campaigns; and enhance cybersecurity measures for election infrastructure while ensuring AI system vendor security standards.
- Use AI to innovate in public services. The Biden-Harris administration has a unique opportunity to use AI to drive innovation in public services. The White House should commission a report that outlines how to advance AI for the public good by expanding access to government services for more Americans, ensuring greater public participation, and continuing to protect rights. The White House should also require federal guidance before enabling widespread deployment of generative AI in office software for federal employees.
The Biden-Harris administration has already undertaken critical steps to address AI, including publishing the Blueprint for an AI Bill of Rights, releasing the National Institute of Standards and Technology (NIST) “AI Risk Management Framework,” and releasing the “National Artificial Intelligence Research and Development Strategic Plan: 2023 Update” prioritizing key AI research projects.19 However, additional immediate actions are required to start an all-of-government effort to address the challenges of AI and set up the entities needed to execute the national AI strategy once it is developed. CAP’s full recommendations for the national AI strategy are outlined below. This is a continuation of ongoing work to address the benefits and risks of AI, outline mitigations through existing and new laws, and propose policy solutions to address a wide-ranging set of issue areas that AI will likely affect. This includes previous proposals to take immediate executive action on AI, have policymakers seize the AI moment, address real AI accountability measures, examine the importance of cloud computing to AI, ensure AI creates good new jobs, address competition in the digital sector, and regulate AI and other online services for the future.20
A revised version of CAP’s July 7, 2023, submission to OSTP in response to a request for comments on national priorities for AI for the general public can be found below.21 The original submission can be read here. CAP was also a contributor and signatory to the submission to the OSTP AI RFI from the Leadership Conference on Civil and Human Rights, which can be read here.22
CAP's comments on a national AI strategy
Executive summary
The Center for American Progress believes that an all-of-government approach is necessary to address the challenges and opportunities of artificial intelligence (AI) in a national AI strategy. The Office of Science and Technology Policy’s (OSTP’s) “Request for Information: National Priorities for Artificial Intelligence” (Docket Number: OSTP-TECH-2023-0007)23 outlines five key issue areas: protecting rights, safety, and national security; advancing equity and strengthening civil rights; bolstering democracy and civic participation; promoting economic growth and good jobs; and innovating in public services. These issue areas are critical to a national AI strategy and overlap with the key CAP priorities of building an economy for all, restoring social trust in democracy, advancing racial equity and justice, and tackling climate change and environmental injustice.24 All of the federal government’s tools must be developed and used to address the challenges and opportunities of AI, including immediate executive action, maximum utilization of existing authorities, crafting new AI legislation, and international cooperation. CAP’s recommendations for doing so are detailed here.
Take immediate executive action on AI
To address AI’s challenges and opportunities head-on and to prioritize protecting the American people and safeguarding the country’s values, the president should immediately issue an executive order (EO) on AI. This new AI EO should make the White House “Blueprint for an AI Bill of Rights”25 binding for federal government use of AI and immediately establish a new White House Council on AI, among other things that CAP has previously recommended.26 The Blueprint for an AI Bill of Rights lays out five principles that are essential for the trustworthy use of AI: “systems that are safe and effective; that protect us from algorithmic discrimination; that protect our data privacy, that allow insight into when and how they are being used; and that offer viable alternatives for opting out of their use.”27 Ensuring the federal government implements the administration’s Blueprint for an AI Bill of Rights would provide clear leadership in an uncertain AI space. Immediate action is required to start an all-of-government effort to begin addressing AI challenges and set up the entities needed to execute the national AI strategy once it is developed.
Policy recommendations
- A new AI EO should require federal agencies to implement the Blueprint for an AI Bill of Rights for their AI usage. This should include a plan due to the new White House Council on AI within 90 days for implementation by 2024.28 More than 63 civil rights organizations, led by the Leadership Conference on Civil and Human Rights (LCCHR), support making the AI Bill of Rights binding,29 and four National Artificial Intelligence Advisory Committee (NAIAC) members “advocated to anchor this Committee’s work in a foundational rights-based framework, like . . . OSTP’s October 2022 Blueprint for an AI Bill of Rights.”30
- A new AI EO should require all AI tools deployed by federal agencies or contractors to be assessed under the National Institute of Standards and Technology (NIST) AI Risk Management Framework and summaries to be publicly released. The president should use his authority under the Federal Property and Administrative Services Act of 1949 to require all federal contractors and subcontractors to assess any AI tools they use or deploy under the NIST AI Risk Management Framework,31 with implementing regulations to be expedited by the Federal Acquisition Regulatory Council, and to release public summaries of those risk assessments.32 This recommendation was also made by the NAIAC in its year one report.33
- The president should issue an AI EO immediately and set an end-of-the-year deadline for the national AI strategy. While the development of a national AI strategy is essential, an all-of-government response to the challenge of AI cannot wait. The president should issue a new AI EO immediately, and OSTP should commit to publicly deliver the national AI strategy no later than the end of 2023.
- Require federal agencies to assess AI’s use in enforcing existing regulations and address AI in future rulemaking to the maximum extent practicable. The president should require federal agencies to determine whether the use of AI by the entities they regulate could implicate their enforcement of existing statutes and regulations, and if appropriate, address that use in future rulemaking to the maximum extent possible under existing authorities.34
- Require that all new federal regulations include an analysis of how the rulemaking would apply to AI tools. The president should amend EO 12866, “Regulatory Planning and Review,”35 to require agencies to provide to OMB—and include in any final rule—an assessment of how any proposed regulations would or would not apply to AI tools.36
- Order all federal agencies to identify all existing authorities that allow them to act on AI and publish a plan within 90 days publicly explaining how they will leverage those laws and regulations to protect Americans from AI harms. While technology from automated systems and AI may be new, it does not mean that existing laws and regulations no longer apply. As Federal Trade Commission (FTC) Chair Lina M. Khan noted, “There is no AI exemption to the laws on the books.”37 Following the example of the joint U.S. Department of Justice, FTC, Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) statement on enforcement efforts against discrimination and bias in automated systems,38 the White House should require all agencies to reevaluate and detail their existing authorities on AI; submit an updated plan detailing how they will aggressively leverage their existing authorities on AI; and publicly post the list detailing their existing authorities and plans to use them.
Establish principles-based AI legislation and regulation
As CAP noted in its response to the June 2023 National Telecommunications and Information Administration’s request for comment around AI accountability, “Fundamentally, direct government regulation will be needed to ensure the development and deployment of trustworthy AI.”39 Principles-based regulation—especially for high-risk uses—is critical to ensuring that new and existing artificial intelligence technologies are developed and deployed in a trustworthy and safe manner. The administration’s Blueprint for an AI Bill of Rights40 seeks to enshrine, at this early stage, timeless principles around the rights of people in a world predicted to be increasingly driven by automated systems. Congress should promptly propose principles-based regulation based on the Blueprint for an AI Bill of Rights41 that defines specific practices that would be explicitly outlawed and enumerates broader principles around which regulators could interpret and craft rules.42 Combining clear guardrails and a principles-based approach offers flexibility to address future problems. Legacy approaches to online trust and safety for social media and content-based harms do not transfer easily to AI technologies. The president should make critical recommendations to Congress as it begins work on AI legislation.
Policy recommendations
- The president should support AI legislation that includes codifying some of the Blueprint for an AI Bill of Rights principles into law. Specifically, the principles of safe and effective systems43 and algorithmic discrimination protections44 should be codified in statute and then interpreted through regulation by federal agencies. The safe and effective systems principle would ensure the right to be protected from automated systems in an ongoing manner as they are designed and deployed, which could be fulfilled with a duty of care provision. Algorithmic discrimination protections that expressly prohibit algorithmic discrimination, while not undermining any existing civil rights laws, are essential given the already widespread deployment of automated systems with known algorithmic disparate impact concerns. In addition to laying out enforceable principles, the legislation should also enumerate specific unlawful practices, and a designated regulator should develop and enforce new rules for specific technologies, practices, or markets.45 Consistent with recent U.S. Supreme Court jurisprudence, the legislation must make specific and clear delegations of authority to avoid challenges under the so-called major questions doctrine.
- Legislative proposals should include new or updated laws to address liability related to AI tools, designate high-risk cases or sectors, require pre-approval for deploying the highest-risk AI applications, prohibit unacceptable risk uses, and prevent and respond to threats of catastrophic risk. Comprehensive AI legislation must address liability concerns, allow the designation of high-risk cases and sectors—the highest-risk cases or sectors should be able to go through a government review before deployment—and prohibit certain dangerous uses.46 The high-risk cases or sectors should not be limited to public safety or national security but should also include areas in which automated systems can have an outsize impact on individual or collective livelihoods. AI liability concerns in areas where the impact on Americans’ lives and bodily autonomy is central—such as health care or criminal justice—necessitate special care to avoid racial and other biases in decision-making, as noted in the Blueprint for an AI Bill of Rights.47 Distinguishing these high-risk cases and sectors for the United States will be critical for future AI legislation. For example, the European Union’s Artificial Intelligence Act designates high-risk categories that include employment and essential private and public services, such as credit scoring; absent U.S. regulation, the act will become the de facto global standard. This therefore represents a likely area for global coordination and harmonization.48
- Congress should ensure that the technical requirements and legal authorities are in place to deal with dangerous AI. To address the potential public safety and national security threats of AI, Congress must provide the necessary technical requirements and legal authority to address threats, such as mandating technical kill switches in AI deployments or granting the president the power to temporarily remove dangerous AI from interstate commerce. (A minimal illustrative sketch of one possible kill-switch pattern follows this list of recommendations.) The challenges faced in acting against TikTok49—which is owned by a foreign company, ByteDance—are illustrative of the even greater challenges a president might face in attempting to act swiftly against a dangerous, domestically developed AI system. Proposed new laws such as the RESTRICT Act, which would permit executive branch action with expedited interventions from the legislative and judicial branches, may provide inspiration.50
- AI regulations should publicly reinforce the importance of holding private companies accountable for their AI systems. AI regulation should include robust accountability mechanisms to hold “designers, developers, and deployers of automated systems”51 to a high standard, whether through first-party or third-party use of AI. This may include certifications, audits, and assessments. These accountability measures can promote trust with external stakeholders and act as a forcing function for AI developers to develop and design their products responsibly, bolstering the internal processes required to build these systems.
- Designate a specialized AI regulatory agency, either by expanding authorities of an existing federal agency or creating a new federal agency. The broad use and specialized nature of AI means that any new law to address key issues will need expanded administrative capacity in the federal government. Any new AI legislation should designate a federal agency to create and enforce new rules. As CAP’s landmark 2021 report “How to Regulate Tech” noted, “Expansion of existing agencies and consideration of new agencies should both be on the table. In either case, these proposals require significant expansion of the U.S. government’s capacity and expertise.”52
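To make the kill-switch idea referenced above concrete, the following is a minimal sketch in Python of one possible deployment-side pattern: the serving process periodically polls an operator-controlled status endpoint and halts model output once a shutdown flag is set. The endpoint, names, and fail-closed behavior are illustrative assumptions, not a specification of how such a mandate would work.

```python
# Minimal sketch of a deployment-side "kill switch." The control endpoint,
# polling interval, and fail-closed behavior are illustrative assumptions.
import json
import time
import urllib.request

CONTROL_URL = "https://example.gov/ai-control/status"  # hypothetical endpoint
POLL_INTERVAL_SECONDS = 60


def shutdown_ordered() -> bool:
    """Return True if the control endpoint has set its shutdown flag."""
    try:
        with urllib.request.urlopen(CONTROL_URL, timeout=5) as resp:
            status = json.load(resp)
        return bool(status.get("shutdown", False))
    except (OSError, ValueError):
        # Fail closed: if the control plane is unreachable or its response
        # is malformed, stop serving rather than continue unsupervised.
        return True


def serve_request(prompt: str) -> str:
    """Placeholder for the actual model inference call."""
    return f"model output for: {prompt!r}"


def main() -> None:
    last_check = 0.0  # forces a check on the first loop iteration
    while True:
        if time.time() - last_check >= POLL_INTERVAL_SECONDS:
            if shutdown_ordered():
                print("Shutdown flag set or control plane unreachable; halting service.")
                return
            last_check = time.time()
        print(serve_request("example prompt"))
        time.sleep(1)


if __name__ == "__main__":
    main()
```

Any real mandate would also have to settle who controls the flag, how the control channel is authenticated, and whether systems fail open or closed when the control plane is unreachable; the sketch above fails closed.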
Ensure AI advances equity and strengthens civil rights
For as long as technology has existed, it has been a tool with the potential to strengthen civil rights and advance equity, but it has too often resulted in inequitable outcomes and the regression of hard-won civil rights. As the LCCHR noted in April 2023, “The growing landscape of AI policy must continue to center equity and civil rights,” and “[r]ather than entrench bias and automate discrimination, technology should create opportunity, safety, and benefits for all.”53 Language from the Trump administration’s executive actions on AI and OMB guidance barely mentions equity or civil rights and should not be the federal government’s primary executive branch AI guidance.54 While it is possible for automated systems to increase equity or strengthen civil rights, it is often difficult to point to clear real-world examples at the scale of the existing harms, much less of predicted future harms. As CAP noted in its comments to the FTC and CFPB on their tenant screening request for information, “Any positive examples of automated systems in tenant screening that address concerns should be highlighted where appropriate, and if there are no positive examples to highlight, that should be a clear warning for regulators and the industry.”55
Policy recommendations
- Ensure that any new AI executive order, new OMB AI guidance, and other AI efforts from the federal government center equity and civil rights as ordered by the existing executive orders to advance racial equity and support for underserved communities. Any new AI EO or OMB guidance should center racial equity, support for underserved communities, and civil rights as laid out in 2021 EO 13985, “Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,”56 and 2023 EO 14091, “Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,”57 and as outlined in CAP’s recommendations to advance equity.58 This guidance should include how agencies can develop shared policy to ensure that their algorithms demonstrate effectiveness—through proper testing and auditing—without compromising anti-discrimination principles prior to use.
- Develop bias and equity training for the federal workforce on AI systems, AI development, and algorithmic use, consistent with diversity, equity, inclusion, and accessibility goals. In line with EO 14035, “Executive Order on Diversity, Equity, Inclusion, and Accessibility in the Federal Workforce,”59 federal agencies should coordinate bias and equity trainings in the use of AI through their agency equity teams, in collaboration with the National Science and Technology Council (NSTC) Subcommittee on Equitable Data.
- Support a federal privacy law and encourage existing privacy rulemaking efforts. AI requires vast amounts of data to train and operate, meaning that data protection legislation is critical. The administration should continue efforts to pass comprehensive federal privacy legislation, such as the 117th Congress’ American Data Privacy and Protection Act,60 to protect data and digital civil rights.61 The Blueprint for an AI Bill of Rights includes a data privacy principle,62 and President Biden has repeatedly endorsed privacy legislation.63 In the meantime, to the maximum extent possible, while respecting the independence of agencies, the administration should encourage the FTC rulemaking efforts on commercial surveillance and data security.64
- Appoint a federal chief accessibility officer and develop new priorities for digital accessibility utilizing AI. The president should appoint a new federal chief accessibility officer at the General Services Administration, which currently assists with Section 508 compliance,65 to coordinate aspects of the federal government’s efforts on accessibility, including digital accessibility, and to promote the development of AI for accessibility. The Biden-Harris administration previously emphasized the importance of diversity, equity, inclusion, and accessibility with EO 14035 in 2021.66 AI has tremendous potential to increase accessibility online and offline for the disability community, such as through the automated generation of closed captioning for online video.67 The federal government has long mandated accessibility in its own websites and services through Section 508; no such mandates exist for nongovernment actors online, a gap that should be a critical focus.68
Guarantee AI applications protect rights, safety, and national security
To lead on regulating the risks and benefits AI poses to public safety and national security, it is critical that the United States maintain both its technological superiority and its values while promoting its geopolitical competitive advantage. The United States must act as a global model for a human rights-centered approach to AI applications that reinforces democratic principles instead of trading off against them.69 As echoed by U.S. Deputy Secretary of Defense Kathleen Hicks in a recent op-ed, it is crucial for U.S. national security forces to utilize AI for defensive and strategic benefits, but those efforts must coincide with promoting a responsible, values-driven approach to AI globally.70 The United States should play a leadership role in addressing potential risks in conjunction with governments around the world. Forums such as the U.S.-EU Trade and Technology Council or the Organization for Economic Cooperation and Development (OECD) enable the United States to regulate AI in lockstep with global partners, which, from a national security perspective, is a wiser approach than having the EU unilaterally regulate AI and the U.S. market through the “Brussels effect.” Additionally, robust accountability mechanisms and legal authorities should be introduced now to help mitigate these future risks.
Policy recommendations
- The president should have the technical and legal authority to turn off domestic and foreign AI systems that may pose a threat to the safety of the American people. The president should direct the National Security Council (NSC) and OSTP to assess and offer potential recommendations, including legislation, to mitigate threats from the most potentially dangerous uses of AI—such as runaway artificial general intelligence71—that may pose a threat to the safety and well-being of the United States and its citizens. To address potential risk, the NSC and OSTP should outline options available to the president among existing authorities and highlight gaps in presidential authority to act on AI. They should provide recommendations to the administration and Congress for new technical requirements and legal authorities.
- Increase international cooperation among like-minded countries to establish global red lines in developing and deploying AI.72 International cooperation on AI must expand, possibly taking inspiration from recent efforts to coordinate globally on climate change, to ensure that AI usage is aligned with human rights, democracy, and freedom of expression. The European Union will soon regulate AI with the passage of the Artificial Intelligence Act, which will establish risk classifications for different AI uses.73 The United States risks leaving Brussels as the primary AI regulator, as it is for large digital platforms with the Digital Services Act and the Digital Markets Act.74 Notably, China has exported AI-driven surveillance architecture and digital infrastructure to 63 other countries, raising risks of this technology being employed by other authoritarian states and posing a direct national security risk for the United States.75 The United States has an opportunity to help shape the global adoption of AI and ensure that it does not increasingly promote rights-infringing systems.76 The United States should lead international convenings on AI and spearhead a global red line approach under which governments align on appropriate AI usage in national security and agree on which AI practices should be prohibited globally. The United States should utilize existing governmental organizations, such as the OECD, to seek an international convention on AI governance standards. In tandem, OSTP, the U.S. Department of Defense, the Department of State, the Department of Commerce, and the NSC should undertake research to determine which AI uses the United States should prioritize for global restriction.
- The United States must invest in AI capabilities—including research and development (R&D)—for military and intelligence defensive purposes. China, the United States’ primary geopolitical competitor, has announced a roadmap to become the world leader in AI by 2030.77 This roadmap encourages China’s commercialization of AI for military usage, reinforcing the importance of how the United States applies AI to defensive security measures. With other governments increasingly utilizing AI within military operations, including AI-enhanced facial recognition software throughout the Russia-Ukraine war,78 it is critical for the United States to maintain a competent technological defense. The U.S. defense budget for fiscal year 2024 prioritizes this with $1.8 billion set aside for AI investment.79 This budget should be carefully allocated, including for Department of Defense-funded R&D that can pinpoint U.S. defensive weaknesses that AI technologies can address. R&D should also focus on how the negative implications of AI—for example, amplification of bias and lack of transparency—can play out militarily. For instance, training databases that underrepresent diverse individuals, combined with AI’s amplification of biases, will likely have serious implications for using AI facial recognition within conflict zones.
Ensure increased AI usage does not negatively affect climate change goals, and that AI advances clean energy and protects the environment
As public investments advance the development of domestic clean energy infrastructure and supply chains, there is an opportunity to leverage AI to improve the availability, efficiency, and affordability of clean energy.80 However, greater use of advanced AI could significantly increase carbon emissions, due to the increase in computing power required81 and an expected uptick in demand following widespread deployment of generative AI.82 Advanced AI technology such as large language models utilizes vastly more computing power,83 and thus more energy and water,84 than traditional forms of cloud computing, which are already of great environmental concern.85 A lack of transparency on advanced AI obscures its carbon footprint and potential impacts on the power grid,86 especially for the few cloud computing providers with AI datacenter infrastructure.87 In addition, generative AI has the potential to further degrade information integrity by enabling hypertargeted climate misinformation campaigns. These campaigns, in combination with the oil and gas industry’s current and potential use of AI to accelerate its operational capabilities,88 may collectively jeopardize the government’s efforts to encourage the combination of public and private investments89 necessary for achieving a net-zero emissions economy by 2050.90
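To make these inputs concrete, the following is a minimal back-of-envelope sketch in Python of how fleet size, per-accelerator power draw, utilization, datacenter overhead, and grid carbon intensity combine into an annual emissions estimate. Every figure is an assumption chosen purely for illustration, not a measurement of any real deployment; the transparency gap noted above concerns precisely these inputs.

```python
# Illustrative back-of-envelope estimate of annual emissions from an AI
# datacenter GPU fleet. All input figures below are assumptions for
# illustration only, not measurements of any real provider or deployment.

GPU_COUNT = 10_000            # assumed number of accelerators in the fleet
POWER_PER_GPU_KW = 0.7        # assumed average draw per accelerator, kW
UTILIZATION = 0.6             # assumed average utilization
PUE = 1.2                     # assumed power usage effectiveness (cooling overhead)
GRID_KG_CO2_PER_KWH = 0.4     # assumed grid carbon intensity, kg CO2 per kWh
HOURS_PER_YEAR = 24 * 365

# Annual energy use scales linearly with every factor above.
energy_kwh = GPU_COUNT * POWER_PER_GPU_KW * UTILIZATION * PUE * HOURS_PER_YEAR
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Estimated energy use: {energy_kwh:,.0f} kWh/year")
print(f"Estimated emissions:  {emissions_tonnes:,.0f} tonnes CO2/year")
```

Because each factor multiplies the total, disclosure of any one of them (fleet size, utilization, or grid mix) without the others tells regulators little, which is why comprehensive transparency requirements matter.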
Policy recommendations
- OSTP should assess whether AI will negatively affect the administration’s climate change goals for 2030 and 2050. It is critical to analyze the impact of AI technology on existing climate commitments from companies or governments and how increased demand for AI is considered in future carbon emission projections. OSTP should also determine a process to help track increased carbon emissions from the expected increase in U.S. government use of AI. This analysis should be modeled on OSTP’s 2022 fact sheet on the climate and energy implications of crypto assets.91
- OSTP should research and issue a report on AI’s positive and negative effects on climate change. OSTP should create a report that seeks to understand the effect of advanced AI such as LLMs on increasing or decreasing carbon emissions, modernizing power grid infrastructure, and using data analytics to develop climate adaptation solutions for vulnerable communities. Additionally, it is critical to analyze the impact of AI technologies, including those that predate LLMs, on existing climate commitments and how increased demand for AI is considered in future carbon emission projections.
- To advance clean energy infrastructure and supply chain initiatives, agencies should ethically and securely leverage AI to expand the government’s operational capacity. Existing guidance already sets a foundation for agencies to utilize AI to expand the government’s operational capacity.92 This could condense timelines for the intricate interagency efforts required to develop clean energy infrastructure and supply chains, including conducting research,93 accessing large amounts of climate94 and environmental data,95 and encouraging public engagement.96
Deploy AI in ways that promote economic growth and good jobs
AI has the potential to benefit workers by creating jobs, raising worker productivity, lifting wages, boosting economic growth, and increasing living standards.97 However, AI also can harm workers by displacing them, eroding job quality, increasing unemployment, and exacerbating inequities.98 Policymakers will shape whether and how AI benefits or harms workers through action or inaction. Past labor market disruptions, including the inadequate policy responses99 to World Trade Organization governance of trade and its associated localized economic and democratic costs,100 demonstrate the need for a proactive, worker-centered strategy. CAP recommends that a central component of the national AI strategy be a plan designed to mitigate worker harms and ensure that workers benefit from AI.101 Senate Majority Leader Chuck Schumer (D-NY) has named security a key component of his SAFE Innovation AI framework, including national and workforce security.102 Accordingly, these policy recommendations include a mix of administrative, legislative, and regulatory ideas.
Policy recommendations
- The national AI strategy must contain a plan to address economic impacts from using AI, especially potential job losses. The national AI strategy must include a plan to address the potential economic and job implications of AI with a proactive, worker-centered strategy to ensure workers benefit from AI and are not solely harmed by it. CAP has previously called for the White House and appropriate agencies to create a national plan of action to address potential economic and job impacts from AI.103
- To ensure workers benefit, the government should respond to the needs of workers displaced by AI adoption, including with a job guarantee. Effective approaches to AI job displacement include adopting a job guarantee to ensure workers can find employment;104 enhancing the social safety net, including by modernizing unemployment insurance (UI) through expanded eligibility,105 increased benefit levels, and more time allowed on UI; or creating a model similar to the U.S. Department of Labor’s trade adjustment assistance, with a focus on job loss due to technological disruption.
- To mitigate potential labor and economic harms, AI generation should be steered in a manner to complement—not replace—workers. The government should enact labor protections, including EU-style laws106 that limit firms’ ability to lay off workers and that ban certain uses and practices, including those that are discriminatory or violate worker privacy. In addition, the federal government must fix the flawed U.S. labor law system that undermines workers’ bargaining rights so that workers who want to form a union can, as well as take additional steps to help workers better use collective bargaining as technology increasingly disrupts industries. Additionally, a national AI laboratory could be created as an extension of the CHIPS and Science Act to create and share labor-augmenting AI tools that help workers.
- Tax policy should be used to direct AI development in a positive direction. This could include a general monopoly tax that would reduce tech giant domination of AI development.107 Conversely, tax credits could reward development and adoption of labor-augmenting AI.
- Expand investment in upskilling, reskilling, and retraining workers, ensuring workers benefit financially and that new jobs are good jobs and are available to a wider pool of workers. As AI grows across sectors of the economy, the workforce involved in developing, training, testing, and using AI will increase proportionately. To ensure that teams, organizations, and governments that use AI include diverse perspectives and backgrounds that can reduce biases and discrimination, federal investment in R&D of AI technologies should provide pathways to education and workforce development opportunities for disadvantaged communities. Similar to collaboration spurred by the CHIPS and Science Act in partnering with historically Black colleges and universities (HBCUs), these investments can place more people from underrepresented backgrounds into the technology sector.108 In addition, registered apprenticeships and other types of labor management training partnerships can help support workforce stability during periods of technology change while ensuring that workers can access paid, high-quality reskilling and retraining programs that serve their needs. This serves the dual purpose of expanding the pool of available workers in new roles and ensuring on-the-job training.
- Adopt active labor market policies. Such policies would target not only workers displaced by AI but also those who could be displaced, ensuring they are prepared for the fallout of technological disruption on work. These can include replicating the paid educational leave models found in the European Union; in Austria, for example, workers can take two months to one year of paid leave for education.109
- Break down barriers to STEM educational attainment and workforce participation for underrepresented groups, such as Black and Hispanic workers and women, as demand for specific STEM skills grows. Efforts should include partnering with HBCUs,110 Hispanic-serving institutions, Tribal colleges and universities,111 and community colleges.112 In addition, ensuring the STEM industry is equitable, safe, and free from harassment must be a priority, along with improving funding for the EEOC and the Office of Federal Contract Compliance Programs to ensure better enforcement of federal anti-discrimination and harassment laws.113
- Support the passage of the Senate tech antitrust bills. Potential self-preferencing means competition may be stifled in the AI market, especially as the largest technology companies also control the commercial cloud computing datacenter infrastructure that most advanced AI runs on.114 CAP has endorsed two Senate tech antitrust bills introduced in the 117th Congress, one of which—the American Innovation and Choice Online Act115—has been reintroduced in the 118th Congress; these bills would give enforcers more tools to ensure fair competition among designated large covered platforms, including in AI commercial cloud computing.
Bolster democracy and protect civic participation from AI-related threats
It is critical that efforts are taken—both by the technology companies that develop and deploy AI and by the federal government—to protect against the threats that AI poses to U.S. democracy. AI that stands to destabilize democratic institutions should not be deployed without meaningful protections and regulatory frameworks. The most fundamental aspect of American democracy is the right to vote, and AI must not be allowed to interfere with this core element of democracy. Therefore, humans must maintain oversight of vote counting and election rolls. Over the past few years, the information ecosystem has proven to be critical to the health of democratic institutions. It is now clear that the threat of disinformation can destabilize democracy, as bad actors—both domestic and foreign—can easily spread disinformation online about candidates, election information, and more. AI can exacerbate the existing problem of disinformation and contribute to this threat on a more sophisticated scale.
Policy recommendations
- Encourage limits on and responsibility in the use of AI systems in election administration. While AI can improve the election process and strengthen the franchise, the potential for harm from AI in election administration is so great that its use must be justified by a highly compelling reason. Any application of AI in elections should be treated as a component of election infrastructure and only be allowed after rigorous steps are taken to ensure it is safe, effective, transparent, auditable, and strengthens the franchise. For example, while signature verification is fraught with risk due to its subjective nature and the often inadequate training to carry it out successfully, it may be enhanced by AI tools. AI may mitigate these inconsistencies, presuming that the tools are appropriately trained, auditable, and demonstrate no significant biases. Election officials, election administrators, the civil rights community, and voting rights advocates should be extensively involved in developing and deploying any AI system that pertains to election administration. Processes should retain certain degrees of human oversight to maintain public trust in the electoral process. AI systems used for federal elections should also be required to undergo a federal certification process administered by agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) and the Election Assistance Commission, as well as testing by national laboratories.
- Promote and encourage innovation to detect deep fakes and disinformation campaigns. OSTP should bring together the private and public sectors to develop high-accuracy detection tools for AI-generated content. This includes coordinating with the private sector on new technologies, allocating additional federal funding to allow the National Science Foundation or others to research the issue, and developing new technology through agencies such as the Defense Advanced Research Projects Agency. With the looming threat of disinformation through coordinated campaigns, which are becoming increasingly difficult to detect, the federal government should determine what tools exist to help identify AI-generated content and how to make them more widely available to state and local election officials.
- Implement the NSTC roadmap for information integrity research and update the report with new recommendations related to generative AI. The administration should move to implement the 2022 NSTC report “Roadmap for Researchers on Priorities Related to Information Integrity Research and Development”116 and issue a follow-up report with new recommendations following the conclusion of public input on generative AI from the President’s Council of Advisors on Science and Technology Working Group on Generative AI.117
- CISA should continue the efforts started in 2020 to correct any misleading information produced and spread by AI about elections and work closely with information sharing and analysis centers on AI-related issues.118 This should include closely working with state and local election officials and administrators to identify and counter dis- and misinformation. The federal government should engage in public information campaigns and reiterate that accurate information is published on .gov websites. Additionally, CISA should utilize the Elections Infrastructure Information Sharing & Analysis Center and Multi-State Information Sharing and Analysis Center119 to increase awareness about the potential impacts of AI systems on election administration and election dis- and misinformation.
- Enhance cybersecurity measures for election infrastructure and ensure AI system vendor security standards. CISA should explore how AI could be used in cyberattacks against vulnerable election infrastructure, share this knowledge with state and local election officials, and publish its findings and recommendations.120 The federal government should enact comprehensive security requirements for election infrastructure and conduct regular risk assessments to ensure its integrity and resilience, implementing guardrails against cyber threats in alignment with the administration’s national security strategy and national cybersecurity strategy.121 Additionally, the federal government should require physical and cybersecurity standards for election infrastructure vendors as a matter of critical infrastructure and national security, including any AI system vendors that provide election administration services.
- Ensure that AI-related election interference is socialized and ready to be tackled by the intelligence community (IC). The federal government should ensure that the Office of the Director of National Intelligence’s new Foreign Malign Influence Center (FMIC)122—which has the primary authority for analyzing and integrating intelligence on foreign influence operations—as well as IC agencies’ foreign influence offices are working on socializing the potential impacts of AI-related foreign malign election interference and have responses and countermeasures in place. The FMIC should also be ready to coordinate the IC’s response in the case of any AI-related interference in the 2024 election cycle and ensure that information on any interference is appropriately shared with election officials, media outlets, and the American public in a timely manner. Lastly, the NSC should be involved, and efforts should be made to share information with state and local election authorities.
Use AI to innovate in public services
The Biden-Harris administration is uniquely positioned to address innovation in public services with the opportunities created by AI. The administration’s efforts and actions should have a few areas of focus, including how to advance AI for the public good with expanded access to government services. RFI Question 27 asks, “What unique opportunities and risks would be presented by integrating recent advances in generative AI into Federal Government services and operations?”123 It is important to understand that integration of generative AI into the federal government is already underway and likely to accelerate long before the national AI strategy is complete. Generative AI is already commercially available to more than 100 million users.124 In addition, Microsoft has announced the availability of OpenAI’s generative AI technology for government, giving Azure Government cloud computing customers access to OpenAI’s GPT-4, GPT-3, and embeddings models.125
Policy recommendations
- Commission a report on advancing AI for the public good for expanded access to government services, ensuring greater public participation and continued protection of rights. President Biden should task the NSTC126 Select Committee on Artificial Intelligence127 with drafting a report articulating a vision for advanced AI for the public good, focusing on leveraging technology to expand access to essential government services and protection of rights while preserving the American public’s privacy. This report should outline how to invest in increasing the capacity of governments to innovate and empower bureaucracies that could better serve needs in housing, health, food security, participatory democracy, and other key citizen engagement points.128
- Require federal guidance before enabling widespread deployment of generative AI in office software. The planned deployment of generative AI through existing services, including the Windows operating system, Office 365, and Google Workspace Suite, means that it will soon be available by default to millions of Americans in their corporate settings.129 Government agencies run versions of this commercial productivity software with additional security and compliance controls,130 and these generative AI tools will be deployed to millions of U.S. government employees131 unless the U.S. government specifically instructs otherwise. The U.S. government should issue guidance on using generative AI in government tools before it is available by default, as well as on how to mitigate the incorporation of sensitive citizen and government information into advanced AI training data.
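To illustrate one narrow mitigation such guidance might cover, below is a minimal sketch in Python of redacting common sensitive identifiers from a prompt before it leaves a government system for a third-party generative AI service. The patterns and names are hypothetical and deliberately simple; pattern-based redaction alone would not be a sufficient safeguard for sensitive citizen or government information.

```python
# Minimal sketch of prompt redaction before a call to an external generative
# AI service. The patterns below are illustrative assumptions and catch only
# the most common formats; a real deployment would need far broader coverage.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt


if __name__ == "__main__":
    text = "Constituent Jane Doe (jane@example.com, SSN 123-45-6789) asked about benefits."
    print(redact(text))
    # -> "Constituent Jane Doe ([REDACTED EMAIL], SSN [REDACTED SSN]) asked about benefits."
```

Guidance would also need to address what happens server-side, such as contractual limits on whether prompts are retained or used for model training, since client-side redaction cannot catch everything.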
Acknowledgments
The authors would like to thank Justine Gluck, Jennifer Lee, Ashleigh Maciolek, Sydney Bryant, Rebecca Mears, Michael Sozan, Tom Moore, Greta Bedekovics, Frances Colon, Cody Hankerson, Rose Khattar, David Madland, Karla Walter, Marc Jarsulic, Heba Malik, Robert Benson, Allison McManus, Justin Dorazio, Christian Rodriguez, Hannah Niles, Dr. Alondra Nelson, Trevor Higgins, Emily Gee, Alan Yu, William Roberts, Ben Olinsky, and Mara Rudman for their contributions to this comment letter.