This column contains an update.
When Vice President Kamala Harris meets with CEOs of the nation’s leading artificial intelligence (AI) companies on Thursday, May 4, she and other senior White House officials will discuss responsible development of the groundbreaking technology and the need for safeguards against bias, deceptive practices, and other risks.
Much of the discussion with the leaders of Alphabet, Anthropic, Microsoft, and OpenAI will likely focus on the Biden administration’s Blueprint for an AI Bill of Rights, released last year to establish key principles for guiding AI development. This is an opportunity for American AI companies to show that they are leaders in responsible AI by committing to implementing the AI Bill of Rights, and other American AI companies should follow suit.
Ahead of the meeting, the White House announced three new actions the federal government is taking on AI—an important first step. But the White House can and should do more.
It’s time for President Joe Biden to issue an executive order that requires federal agencies to implement the Blueprint for an AI Bill of Rights and take other key actions to address the challenges and opportunities of AI. Change is coming too fast to wait.
New AI products have demonstrated the need for an AI Bill of Rights
In the past few months, the public release of new AI technologies to consumers—such as large language models that enable generative AI—has captured the attention of the world. Automated systems like AI are already present in many places in our lives, often without our knowledge. But those existing systems typically operate behind the scenes, and their outputs are mostly data: machines talking to machines.
The recent release of generative AI tools is particularly captivating because what they output—text, images, and video—is easily consumable by people. One can instantly understand the potentially dramatic change that large language models and generative AI could bring to how we work and live. It shows the need to protect our fundamental rights while encouraging responsible innovation in AI.
The White House Blueprint for an AI Bill of Rights issued last October is a strong start to a government response to protect workers, families, and our democracy from the potential pitfalls of AI. The Office of Science and Technology Policy (OSTP) followed up with five principles to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” These five core principles are that automated systems should: 1) be safe and effective; 2) protect against algorithmic discrimination; 3) protect data privacy; 4) provide notice and explanation when they are being used; and 5) allow Americans to opt out in favor of human alternatives. The Blueprint for an AI Bill of Rights does not just articulate these principles; it outlines how they can be put into practice.
American AI companies should adopt the AI Bill of Rights
Each of the companies invited to the White House meeting has publicly pledged to prioritize responsibility and safety in AI. They’ve made public commitments, such as “Responsible AI standards,” “Developing safe & responsible AI,” “AI Principles,” and “Core Views on AI Safety,” even as some of those companies have laid off staff working on AI ethics. These corporate documents are a good first step in the responsible development of new technologies, but more steps are needed to demonstrate companies’ seriousness about responsible and ethical AI. Namely, the AI companies meeting with the White House, as well as other American AI companies, should commit to adopting the AI Bill of Rights.
Industry is not the only stakeholder that can provide leadership on responsible innovation that preserves democratic values and protects rights. The president oversees a federal government that is the largest employer in the country, and the government’s purchasing power gives it enormous market-shaping influence over the development of new technologies. The AI Bill of Rights, therefore, is not just a blueprint for private industry but also a road map for the government’s own approach to artificial intelligence.
The White House can take further steps to address AI concerns
On the day of the meeting with AI companies, the White House announced three new actions the federal government is taking on AI, including NSF funding for the National AI Research Institutes and previewing forthcoming policy guidance from the Office of Management and Budget (OMB) on AI use by the U.S. government. While these are important first steps, more comprehensive action on AI is needed from the White House.
The president should issue a new executive order that requires federal agencies to adopt and implement the Blueprint for an AI Bill of Rights and creates a White House Council on AI. The order should also direct federal agencies and contractors to assess their own automated systems under the government’s National Institute of Standards and Technology (NIST) AI Risk Management Framework; require federal agencies to assess the use of AI in enforcement of existing regulation; and address AI in future rulemaking. In addition, the order should require federal agencies to prepare a national plan to address the economic impacts of AI, especially job losses.
In addition, the president should order the national security apparatus to prepare for potential artificial intelligence systems that may pose a threat to the safety of the American people, including what technical and legal means may be needed to shut down runaway AI systems.
Some federal agencies, such as the Federal Trade Commission and U.S. Department of Justice, are already making clear that just because these are new technologies does not mean they can be used to violate the law. But addressing the challenges and opportunities of AI requires the whole of government and should start with a new AI executive order.
If leading AI companies adopt the Blueprint for an AI Bill of Rights, it will not guarantee that these technologies are developed responsibly, and a new executive order cannot address every challenge presented by AI. But these are critical first steps that can begin today to anchor this new technology in fundamental values and principles while Congress debates the future of AI. Already, Senate Majority Leader Chuck Schumer (D-NY) has announced the development of a new legislative framework on AI, which represents an opportunity to incorporate the principles of the AI Bill of Rights into law.
As President Biden told his Council of Advisors on Science and Technology in April, “AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security.”
The president can and must act now by issuing an executive order on AI.
* Update, May 4, 2023: Shortly after this piece was published, the White House announced three new actions the federal government is taking on AI, and the piece has been updated to reflect that announcement.