Center for American Progress

The Department of Defense’s Conflict With Anthropic and Deal With OpenAI Are a Call for Congress To Act
Article

Both Anthropic and OpenAI are calling on elected leaders to provide new AI protections.

Secretary of Defense Pete Hegseth looks on as President Donald Trump speaks, March 2026. (Getty/Saul Loeb)

Last week, a dramatic conflict erupted between the U.S. Department of Defense (DOD) and the frontier AI lab Anthropic, maker of the popular Claude AI models and products, over the DOD’s demand that the company modify an existing contract to remove its previous restrictions or face significant retribution. Anthropic’s insistence on restrictions barring the DOD from using its products for domestic mass surveillance and fully autonomous weapons culminated in an unprecedented attack on a private company by the federal government.

On Friday, February 27, at 5:14 p.m., Secretary of Defense Pete Hegseth announced on X that he had directed the DOD “to designate Anthropic a Supply-Chain Risk to National Security.” That announcement followed a Truth Social post from President Donald Trump earlier in the day directing the U.S. government to “IMMEDIATELY CEASE all use of Anthropic’s technology.” Anthropic’s statement in response held firm to its previous requirements and promised to challenge the designation in court. Secretary Hegseth’s designation of Anthropic as a “supply chain risk” also stated that any company that does business with the DOD may not “conduct any commercial activity with Anthropic,” which would seemingly prohibit numerous companies essential to Anthropic’s business, such as the cloud service providers like Amazon Web Services (AWS) that host Claude, from having the company as a customer. This statement wildly exceeds any existing legal “supply chain risk” authority and, if upheld by the courts, would likely result in the destruction of the company. Notably, the DOD has not yet taken formal public action to implement the “supply chain risk” designation.

Later that night, rival frontier AI lab OpenAI announced that it had signed a contract with the DOD to deploy its AI models on the department’s classified networks, where Anthropic’s models were reportedly the only frontier AI models previously available. The timing of OpenAI signing a deal with the DOD immediately after the government’s extrajudicial actions against a competitor generated significant attention and skepticism about the protections the deal actually contains. Over the last few days, OpenAI has released some of the language from its contract for further examination, though the full contract will likely never be made publicly available. Similarly, the language from Anthropic’s existing or proposed contracts has never been made public, though the president and the secretary of defense could make all of these contracts public if they so chose and the companies agreed.

Beyond the blur of contract-dispute details, the likely illegal exercise of power by the Trump administration, and the attempted destruction of one of the fastest-growing new companies in American history, one fact stands out: Both leading U.S. frontier AI labs, Anthropic and OpenAI, have stated that they share two fundamental red lines for the use of advanced AI systems, including their own. These systems should not be used for domestic mass surveillance, and they should not be used for fully autonomous weapons systems. The CEOs of both AI labs have cited their discomfort with private entities having restrictive power over democratic governance, and Anthropic CEO Dario Amodei has called on Congress to act to provide more AI protections from mass surveillance.

These events are a call to action for Congress both to investigate the events involving the DOD, Anthropic, and OpenAI and to pass legislation providing protections for citizens against mass surveillance enabled by AI.

Over the weekend, even as the U.S. government designated Anthropic a “supply chain risk,” the company’s Claude AI models were reportedly being used to plan the U.S. military’s strikes against Iran. To help make sense of what unfolded over the last few days, the following background and five crucial takeaways underscore the significance of the DOD-Anthropic conflict, the DOD-OpenAI deal, and the need for Congress to act.

1. DOD designating Anthropic “a supply chain risk” was an unprecedented and likely illegal move

The appropriate action here would have been to terminate Anthropic’s contract with the DOD, a possibility Anthropic said it supported on Thursday, or to take other far less punitive actions that were available.

Instead, on Friday, Secretary Hegseth’s X post announced the DOD’s action against Anthropic:

In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

In its Friday night statement, Anthropic said it had “not yet received direct communication from the Department of War or the White House on the status of our negotiations.” As of Wednesday afternoon, neither the DOD nor the White House has posted any official statements or documents on their websites about actions against Anthropic.

The “supply chain risk” designation has traditionally been used against companies connected to foreign adversaries; it has never been applied to a contract dispute; and Anthropic claims it has never been publicly used against an American company at all. Legal scholars have noted that the designation appears legally questionable and that the government will face an uphill battle defending it in court.

The secretary of defense does have the authority to designate “supply chain risks,” which are defined in both 10 U.S.C. Section 3252 and 41 U.S.C. Section 4713.

The 10 U.S.C. Section 3252 definition is:

The term “supply chain risk” means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system.

The 41 U.S.C. Section 4713 definition is slightly different:

The term “supply chain risk” means the risk that any person may sabotage, maliciously introduce unwanted function, extract data, or otherwise manipulate the design, integrity, manufacturing, production, distribution, installation, operation, maintenance, disposition, or retirement of covered articles so as to surveil, deny, disrupt, or otherwise manipulate the function, use, or operation of the covered articles or information stored or transmitted on the covered articles.

Anthropic’s actions in resisting coerced changes to an existing government contract would not seem to meet either definition of “supply chain risk,” but until the government’s orders against Anthropic are released, we can only make educated guesses.

Additionally, the DOD took an extraordinary step when Secretary Hegseth’s statement added, “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The statutes that authorize a “supply chain risk” designation simply do not give the secretary of defense the authority to order anyone who does business with the DOD to stop conducting commercial activity with Anthropic, which could include having Anthropic as a customer or investing in the company.

The government may claim some other authority or broader national security power to justify the designation, but neither the government nor Anthropic has made public any legal order related to the “supply chain risk,” so we can only guess at what authorities are being invoked. Anthropic has already said that it will sue but has not yet filed any lawsuits.

Because the government has not formally acted against Anthropic, there is still an opportunity for the DOD to de-escalate the situation and withdraw the “supply chain risk” designation or decline to formally implement it. The DOD could simply terminate its contract with a transition period, modify the contract to allow Anthropic to adopt terms similar to those OpenAI agreed to, or take less stringent actions such as enacting new Defense Federal Acquisition Regulation Supplement (DFARS) rules.

2. The DOD is making clear that it wants to hurt or destroy Anthropic

Beyond the questionable use of existing authorities, the addition of “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic” is a clear attempt to hurt or kill Anthropic’s business. This goes beyond the authorities for a designated “supply chain risk” and is more akin to how the U.S. government uses sanctions on foreign companies or individuals, such as the Specially Designated Nationals and Blocked Persons List. When the U.S. government took similar actions against the Chinese telecom company Huawei, it took an act of Congress to include the kind of restrictions that Secretary Hegseth is claiming to impose on Anthropic.

Again, because the government has not made public its actions against Anthropic, we do not know whether it will invoke any additional authorities. But if the government’s position were to be upheld by the courts, this would be the commercial equivalent of the death penalty for Anthropic.

This is because Anthropic’s Claude AI models and products run solely on cloud servers in data centers owned by the commercial cloud computing giants AWS and Google Cloud. Both AWS and Google Cloud are DOD contractors, and if they were forced to drop Anthropic as a customer, no commercial cloud provider that is not also a DOD contractor could take the company as a client. Even if such providers existed, they would not have the capacity to serve Anthropic’s infrastructure needs. The shortage of specialized chips, data centers, and capital, not to mention the time it takes to build those facilities, would mean that Anthropic’s AI models and products would simply go offline, and the company would likely soon follow. This is an attempt by the DOD to deplatform Anthropic, a move of the kind conservatives opposed when AWS kicked the social networking site Parler off its cloud servers following the January 6 insurrection at the U.S. Capitol.

Even if this portion of the government’s actions is overturned by the courts, the very fact that the DOD publicly attempted it sends a clear message to every other AI company about the consequences of disagreeing with the Trump administration or objecting to anything in its contracts.

3. OpenAI’s decision to sign a contract with the DOD is rightly being subject to tremendous scrutiny

A few hours after Secretary Hegseth announced that he was designating Anthropic a “supply chain risk,” OpenAI co-founder and CEO Sam Altman announced that the company had signed a contract with the DOD to bring its AI models to the department’s classified systems. OpenAI has stated that it shares two of Anthropic’s red-line concerns, mass domestic surveillance and direct autonomous weapons systems, and later added a third: “high-stakes automated decisions (e.g. systems such as ‘social credit’).” OpenAI claims that its contract with the DOD “has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s” and that “we protect our red lines through a more expansive, multi-layered approach.”

The news of a rival frontier AI lab signing a deal with the DOD so soon after the government announced unprecedented sanctions against a competitor for requesting similar restrictions came as a surprise and resulted in considerable public backlash against OpenAI. Altman later posted on X that “the reason for rushing is an attempt to de-escalate the situation.” OpenAI and Altman subsequently clarified that they opposed the DOD’s designation of Anthropic as a “supply chain risk.”

On Saturday, OpenAI released more details of its contract with the DOD, including select language regarding limitations on government use and answers to selected questions. Altman also engaged in an hours-long Ask Me Anything (AMA) session on X, stating that “based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.” OpenAI has also pointed to the technical deployment of its models from the cloud and the use of forward-deployed engineers as additional safeguards.

On Monday, OpenAI released further language to address criticisms around allowing mass surveillance, including “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals” and “[f]or the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

It is important that OpenAI released and strengthened the language it did, since the public has still not seen language from Anthropic’s now presumably terminated contract or the proposed modifications to it (some of which will likely emerge in litigation). However, given the tremendous attention and stakes in this situation and the timing of its deal, OpenAI will face questions and skepticism about whether that deal truly protects the public from mass surveillance unless the entire contract is released and can be publicly analyzed.

OpenAI’s answer to the most critical question regarding its contract is as follows:

What happens if the government violates the terms of the contract?

As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.

As Altman posted on Saturday, “[W]e believe the U.S. government is an institution that does its best to follow law and policy.”

Perhaps the most significant question for OpenAI is why it believes it would be allowed to act on any objections to the DOD contract after the government attempted to destroy Anthropic over what OpenAI argues were the same or worse contract terms. Why would OpenAI be allowed to exercise the restrictions it sought rather than simply be threatened with destruction or nationalization by the DOD, as Anthropic was?

4. Congress needs to hold hearings on the Trump administration’s actions with Anthropic and OpenAI

On Friday, the chairs and ranking members of the Senate Armed Services Committee and the Defense Appropriations Subcommittee sent a letter to the DOD and Anthropic “urging them to resolve the issue.” Unfortunately, the issue was probably not resolved in the manner the letter envisioned. Congress must now investigate the events that led to the DOD designating Anthropic a “supply chain risk.”

On Sunday, Under Secretary of Defense for Research and Engineering Emil Michael, who was leading the Pentagon’s negotiations with Anthropic and OpenAI, posted on X that Anthropic CEO Dario Amodei was a liar and that he should “testify UNDER OATH on why he is lying and trying to bring shame on our great military!” Under Secretary Michael is infamous in Silicon Valley for his tenure at Uber, where he was caught suggesting that the company hire opposition researchers to discredit journalists investigating it.

Under Secretary Michael is right. In light of these events, Anthropic CEO Dario Amodei should testify before Congress immediately—as should Secretary of Defense Hegseth and Under Secretary Michael—to help the public understand the events that unfolded between the two parties over the last few weeks. Similarly, OpenAI CEO Sam Altman should testify to help explain the differences between their contract with the DOD compared to Anthropic’s demands.

In addition to those testimonies, Congress should explore these key issues:

  • The prohibitions Anthropic was demanding, including restrictions on “fully autonomous weapons” and “mass domestic surveillance of Americans” and the Trump administration’s objections to those restrictions
  • The kinds of “mass domestic surveillance” Anthropic worried the DOD would attempt using its AI tools
  • The Trump administration’s definition of “all lawful uses” in the context of the agreement that Anthropic was attempting to negotiate with the DOD and the agreement OpenAI successfully negotiated with them
  • Why the DOD was unwilling to simply terminate Anthropic’s contract, given the difficulties in coming to agreement
  • The justification for the DOD’s unprecedented use of the “supply chain risk” designation against an American company following a contract dispute and the authority they are claiming to demand “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
  • The reported threats to invoke the Defense Production Act (DPA) against Anthropic if they could not resolve their contract dispute, especially in light of the DPA’s upcoming reauthorization

5. Congress needs to pass legislation providing protections for citizens against mass surveillance

Anthropic’s restriction red lines and OpenAI’s agreement with the DOD have also started a broad discussion about the authorities the DOD and the U.S. government use to collect information on U.S. persons; publicly available and commercially acquired information; and how AI models may be used to process that information. News articles published on Sunday indicate Anthropic’s position was related to its tools being used to analyze “bulk data collected from Americans” or “unclassified commercial data.” There has been considerable legal analysis of the language OpenAI published, with many analysts believing it would still allow some level of mass surveillance that the U.S. government already considers legal. As noted above, on Monday, OpenAI announced further updates to its contract language with the DOD intended to close loopholes, especially around commercially acquired information. Both CEOs have publicly stated the importance of democratic governance, and Anthropic CEO Dario Amodei has asked Congress to provide protections from AI mass surveillance in law.

The heart of the problem is that the federal government’s use of AI in conducting surveillance is generally unregulated by both Congress and the courts. In addition, the few laws addressing government surveillance predated and did not anticipate the use of AI, leaving it to the ethics of AI companies to provide any constraints on the federal government. The events of the last few days have shown that private sector attempts to constrain government use of AI through terms of use or contracting terms are a difficult, if not impossible, task. Perhaps there is more public trust in Anthropic than in the DOD, given the department’s conduct under Secretary Hegseth. But this episode desperately calls for congressional legislation to protect Americans’ privacy and provide oversight of the DOD.

This episode makes clear that Congress must pass legislation providing protections for citizens against mass surveillance enabled by advanced AI. Prohibiting the government from using these advanced tools for mass surveillance of its citizens is critical to public trust in advanced AI and in the AI companies. Additionally, prohibiting AI-enabled mass domestic surveillance requires examining how to limit the data the government can acquire about Americans, such as through new federal privacy laws that limit the collection, sale, purchase, and use of sensitive personal data by private companies or the government.

Conclusion

The events of the last few days have marked a watershed moment for the independence of private AI companies from the U.S. government and have made clear that, without legislation, the use of these tools for warfare and surveillance is not a question of if but of when. They are also another example of the Trump administration abusing its power and taking likely illegal action to attempt to destroy a frontier AI lab that disagreed with the government. Private sector governance is not sufficient to restrain government use, and potential abuse, of advanced AI. Congress cannot wait to act: It must begin holding hearings to investigate the administration’s actions and crafting legislation to provide protections for citizens against mass surveillance.

The positions of American Progress, and our policy experts, are independent, and the findings and conclusions presented are those of American Progress alone. American Progress would like to acknowledge the many generous supporters who make our work possible.

Author

Adam Conner

Vice President, Technology Policy

Team

Technology Policy

Our team envisions a better internet for all Americans, advancing ideas that protect consumers, defend their rights, and promote equitable growth.
