Technological innovation can present new possibilities, including longer, healthier lives; safer workplaces; and equitable economic opportunity. But, as we have been reminded in recent months, with the public debut of generative artificial intelligence (AI) applications—and, soon after, the realization of their potentially socially corrosive impacts—these beneficial outcomes are not preordained. They must be boldly envisioned, intentionally designed, and continually pursued by policymakers, developers, industry, researchers, advocates, and the public.
Recent, rapid developments in AI have been, understandably, accompanied by calls for legislators and government to meet the moment by matching the pace of technological development with legislative action and regulatory discretion. And, last month, a group of prominent researchers and industry leaders took things a step further with a well-publicized letter arguing that we should “pause” development of the most advanced generative AI.
Innovation accelerates timelines; lawmaking and regulation take time. While the advent of some new technologies may require similarly novel governance strategies, rules, and laws, the brisk pace of AI development does not mean that there is nothing that can be done to steer emerging technologies onto a path that benefits society. We do not have to rewrite the social compact with the release of each new chatbot or future AI application. But we do need policy innovation: We can start today, with the tools we have now.
Counterintuitively, long-standing laws and regulations sometimes allow the quickest response to fast-moving AI developments. Government can respond to changing technological currents by leaning into the resources, authorities, and laws it already has with urgency and creativity. It can do so by meeting technological innovation with policy innovation—in fact, this has already been set in motion. The U.S. Equal Employment Opportunity Commission (EEOC), for example, has made clear through its Artificial Intelligence and Algorithmic Fairness Initiative that the Americans with Disabilities Act of 1990 applies to the use of automated systems, including in employment screening and hiring; reasonable accommodation; and accessibility. Last month, responding to concerns about copyright and the devaluing of creative work spurred by generative AI, the U.S. Copyright Office issued updated guidance confirming that it will not register works lacking “human authorship.” And in Europe, Italy’s data protection authority invoked the 2018 General Data Protection Regulation to ban OpenAI’s ChatGPT until it can provide transparency about its data collection, processing, and use practices.
While this strategy may not—or may not yet—address the constellation of new issues that may arise with generative AI, including potential worker displacement; chatbot outputs alleged to be defamatory; and new modalities for disinformation, it points to how legislators and regulators can be immediately and innovatively responsive to technological change.
While policymakers wield their existing powers to guide AI innovation in the service of public benefit, the policy community must also explore and develop new approaches. Identifying opportunities for policy innovation was one aim of the Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy in October 2022. President Joe Biden highlighted this last week in remarks before meeting with his science and technology advisors about the “opportunities and risks of artificial intelligence” for “our society, to our economy, to our national security.”
The AI Bill of Rights proposes five principles to guide the design, development, and deployment of automated systems, such as AI. These five key expectations are: systems that are safe and effective; that protect us from algorithmic discrimination; that protect our data privacy; that allow insight into when and how they are being used; and that offer viable alternatives for opting out of their use.
Importantly, the AI Bill of Rights doesn’t stop at the assertion of principles—it highlights how these principles can be enacted through laws, policies, and approaches. There is a federal law requiring employers to report the costs of surveilling employees during labor disputes, which can include the use of automated systems, providing a transparency mechanism to help protect worker organizing. There are tools being developed in industry, academia, and government to mitigate the risks of AI systems, both before deployment and through monitoring over time, such as the National Institute of Standards and Technology’s AI Risk Management Framework. There is wide stakeholder engagement in the design of programs and services in the form of community-based participatory research and participatory technology assessments.
Highlighting these laws, policies, and approaches shows how action can be taken now to address the current challenges that advanced AI poses and to help guide its future use. Succeeding will also require determination from lawmakers and policymakers. Members of Congress, state legislators, enforcement agencies, and regulators should begin studying and implementing these practices without delay. It will require demanding real accountability from companies, including transparency into data collection methods, training data, and model parameters. And it will require more public participation—not simply consumers being brought in as experimental subjects for new generative AI releases but, rather, pathways for meaningful engagement in the development process prerelease.
We’re having an AI moment. It will not last forever. In this window of public intrigue, anxiety, and scrutiny, there is an unprecedented opportunity for political engagement. Six months ago, it would have been hard to imagine an ongoing, rich public debate about the current and future use of artificial intelligence. AI is very complex, but the idea of it is no longer a mere abstraction for many. Now, for the first time, we are having a debate about how AI should exist in our society. And, this discussion is happening among more of us than ever before, including AI experts and developers; tech journalists and policymakers; legislators and civil society; consumers and creators; and those living today with some of the most pernicious harms of AI use, including false arrest and imprisonment and housing and medical discrimination.
Acknowledging the real and immediate harms transpiring today does not require that we ignore the grave future risks posed by AI. Exploring and preparing to address those risks is the job of the president and his national security and human rights policymakers—and Congress should give him additional tools to do so.
Generative AI will impact nearly every facet of human life, in ways both undetectable and dramatic. This means we all deserve a role in shaping and setting the terms of how this powerful new tool will be used—including in conversations about when and how it should not be used. We have an opportunity to democratize technology policy.
We must act rather than pause. Policymakers should start acting now to create a future in which generative AI and other advanced technologies are placed in the service of human thriving and public benefit. And anyone engaged in movement advocacy—from women’s health care rights and civil rights to climate crisis activism and labor activism—should see AI as a tool that may advance their work or frustrate it, and engage accordingly. There must be deliberation and action about AI that includes more voices—we cannot wait.
Author’s note: Dr. Alondra Nelson was principal deputy director for science and society and acting director of the White House Office of Science and Technology Policy (OSTP), where she led the development of the Blueprint for an AI Bill of Rights.