Center for American Progress

Regulating AI in the Evolving Transatlantic Landscape

Amid the differences between the United States’ and the European Union’s approaches to governing artificial intelligence, there is potential for shared leadership in prohibiting its most harmful applications to shape a global framework that balances innovation, democracy, and human rights.

Google CEO Sundar Pichai delivers a speech during the Artificial Intelligence Action Summit in Paris, France, on February 10, 2025. (Getty/Joel Saget/AFP)

See other chapters in CAP’s Report: Trade, Trust, and Transition: Shaping the Next Transatlantic Chapter

This chapter is part of a report written in collaboration between the Center for American Progress and the Foundation for European Progressive Studies (FEPS).

The author, Dr. Alondra Nelson, served as deputy assistant to the president and principal deputy director for science and society at the White House Office of Science and Technology Policy (OSTP) from 2021 to 2023. At OSTP, she drove the Biden-Harris administration’s strategy to create science and technology policy that expands economic opportunity and civil rights, and she architected the Blueprint for an AI Bill of Rights. She now serves as a distinguished senior fellow at The Center for American Progress.  

Artificial intelligence (AI) has become one of the most consequential technological forces of our time, with the power to reshape economies, societies, and the very nature of global governance. The proliferation of large language models, generative AI, and predictive algorithms presents not only immense opportunity but also significant risk. This proliferation is occurring against the backdrop of the Trump administration’s approach to AI, which prioritizes rapid deployment and economic dominance over oversight and accountability. Such an approach marks a distinct departure from the Biden administration’s policies and stands in stark contrast to Europe’s comprehensive governance framework. As the uses and power of AI models, systems, and tools expand, democratic governments face a defining challenge: determining how to harness AI’s potential benefits while safeguarding civil liberties, protecting democratic values, and mitigating harm.

This chapter explores this moment of reckoning for AI governance, as well as the transatlantic divide on this issue that could define the future trajectory of global technology policy.

Strategic context

Over the past decade, the United States and Europe have developed fundamentally different approaches to regulating technology. The European Union has pursued a comprehensive regulatory framework aimed at protecting users and preserving market fairness, even at a potential cost to innovation. The United States, by contrast, has historically favored a more hands-off approach, prioritizing free markets over strict oversight. The widening adoption of AI technologies—coupled with emerging evidence of their current harms and looming risks such as exacerbating bias, catalyzing labor disruption, increasing surveillance, and widening inequality—heightens the urgency for meaningful and enforceable guardrails. Thoughtful regulation can catalyze the development and adoption of new technologies, ensuring that safety need not be traded for progress, nor rights for innovation.

The Biden administration sought to strike this balance, encouraging AI development while establishing responsible guidelines through a cohesive series of policy and discretionary measures, including the Blueprint for an AI Bill of Rights; an executive order on AI paired with guidance from the White House Office of Management and Budget (OMB); an AI Risk Management Framework issued by the National Institute of Standards and Technology; and voluntary commitments from 15 leading technology companies. The AI Bill of Rights, a cornerstone of the Biden administration’s AI policy, was driven by a core conviction: Innovation should be encouraged but not come at the expense of fairness, privacy, and human rights. It also reflected an awareness that unchecked expansion of AI could deepen existing inequalities and further undermine public trust. Yet, consistent with the United States’ traditional restraint in directly regulating technology, the administration primarily relied on existing laws alongside industry buy-in and voluntary adherence to guidelines rather than enacting new regulations.

Conversely, the EU has established a comprehensive, multilayered approach to technology governance through landmark regulations including the General Data Protection Regulation, the Digital Services Act (DSA), the Digital Markets Act, and the Artificial Intelligence Act (AI Act). The AI Act is significant for its framework that prohibits practices deemed harmful to civil liberties, such as AI-driven social scoring—assigning individuals a “score” based on various aspects of their personal online and offline behavior—and certain forms of predictive policing—using technology to analyze massive amounts of information in order to predict and help prevent potential future crimes. By implementing a tiered, risk-based system of oversight, European regulators have demonstrated their commitment to preserving individual rights while setting clear boundaries for technological development. This regulatory architecture reflects the EU’s fundamental philosophy that technological progress must be balanced with robust protections for citizens’ privacy, autonomy, and freedom from systemic discrimination.

Policy continuity and change

For years, many U.S. policymakers have observed Europe’s proactive approach to technology regulation with interest, though adapting it to American contexts has always presented unique challenges. Today, the political landscape in the United States has shifted significantly, further distinguishing the American regulatory philosophy from European models. U.S. Vice President JD Vance underscored this divergent approach at the Artificial Intelligence Action Summit in Paris in February 2025, advocating for accelerated AI development with minimal constraints while characterizing AI governance as an impediment to American competitiveness and dismissing concerns about AI risks as merely “hand-wringing about safety.” This approach reflects a vision of AI policy that prioritizes unbridled advancement and free markets over both the rights- and safety-preserving tactics of the Biden administration and the risk-centered framework favored by European regulators. This posture also reflects the Trump administration’s ideological commitment to U.S. technological supremacy, even at the expense of consumer protection, democratic safeguards, and established relationships with allies and partners.

The Trump administration’s related alignment with private-sector AI leaders raises troubling and complex governance questions. The Department of Government Efficiency (DOGE), led by Elon Musk, has already announced plans to integrate AI into federal government operations. The result is an unprecedented situation in which a single individual controls both a major AI platform and a major social media network while directing U.S. technology policy. This concentration of power and influence raises serious concerns about conflicts of interest and democratic accountability.

Diverging interests and approaches between the United States and the European Union

The Trump administration’s stated hostility toward European-style regulation extends to the transatlantic relationship itself. European regulators are facing growing pressure from American tech companies over compliance with the DSA and the EU AI Act. By signaling opposition to the EU AI Act and other regulatory measures—and suggesting it may defend U.S. tech firms from European oversight—the administration is setting up a potential clash with EU leaders in Brussels. This clash comes against the backdrop of Trump’s new tariff regime announced on April 2, 2025—though global tariffs above 10 percent have been paused until July 8, 2025, and, at the time of publication, physical tech products such as phones, computers, and chips have been temporarily exempted—heightening tensions in an escalating global trade war. European Commission President Ursula von der Leyen warned that if trade talks break down, the EU’s response may include taxing major U.S. tech firms, a threat echoed strongly by European countries such as France. Yet Europe’s continued dependence on American technology exports complicates its regulatory ambitions and its hopes of growing a technology sector of its own.

Despite these headwinds, there are still opportunities to shape AI governance that fosters true innovation and is fundamentally responsible and human centered. While the United States was always unlikely to adopt the EU’s comprehensive regulatory framework, this limitation need not leave it without options. The political landscape, defined by the Trump administration’s deregulatory, market-driven approach, may prevent the passage of comprehensive AI legislation, but alternative mechanisms for responsible oversight and direction still exist and deserve focused attention. U.S. policymakers must leverage policy innovation.

Policy innovation creates pathways for governance that are both responsive and effective, transcending the false dichotomy between public protection and technological development. In the U.S. context, policymakers should strategically leverage existing regulatory frameworks and legal mechanisms, adapting them to address emerging AI challenges rather than waiting for comprehensive new legislation.

Meanwhile, Europe’s implementation of the AI Act offers valuable real-world evidence from which to learn. Those European regulations that successfully foster innovation while preventing harm and building public trust could provide instructive models for future U.S. approaches. In the current environment, there may be opportunities to pursue targeted safeguards capable of winning support across political divides. Even amid polarization, potential exists for consensus around specific AI “red lines”—including some prohibitions that parallel the EU AI Act’s “unacceptable practices.” These include, for example, prohibitions on real-time remote biometric tracking—the use of technologies such as facial recognition to identify individuals in real time, often in public spaces—as well as others that extend even further, such as bans on automated termination of employment or algorithmic denial of health insurance claims. In its recently updated guidance on federal use of AI, the OMB designated a number of “high-impact” purposes of AI that are subject to specific minimum risk management practices, including the production of risk assessments about individuals and the use of biometric identification for one-to-many identification in publicly accessible spaces. This inclusion follows the spirit of the EU’s prohibited practices and signals alignment between the EU and the United States in preventing AI’s worst harms from materializing. Through such focused interventions, the United States can prevent the most harmful applications of AI while maintaining the flexibility needed for beneficial innovation.

Advancing shared agendas

Although broader regulatory philosophies may diverge across the Atlantic, shared concerns about specific AI risks create opportunities for joint action. The United States and the European Union can find common cause in developing targeted prohibitions against clearly harmful applications and establishing robust information-sharing mechanisms, building blocks that could evolve into more comprehensive collaborative governance over time. There are also opportunities to build pathways for true innovation spurred by thoughtful governance.

How the United States and the European Union choose to govern AI will shape not only the trajectory of technological development but also the health of their democracies and the strength of their alliances. The author’s experience developing the AI Bill of Rights made evident that thoughtful constraints often spark more meaningful innovation by directing creative energy toward socially beneficial outcomes. Despite today’s polarized environment, the United States can forge AI governance models that both protect the country’s fundamental values and create space for technological development that truly enhances human potential. Through deliberate and thoughtful governance, U.S. and EU policymakers can guide AI development toward systems that amplify, rather than erode, rights and dignity and can design technologies that democratize opportunity instead of further consolidating power in the hands of a few.

The author would like to thank Megan Shahi for her expertise and stewardship of this project.

The positions of American Progress, and our policy experts, are independent, and the findings and conclusions presented are those of American Progress alone. American Progress would like to acknowledge the many generous supporters who make our work possible.


