Secretary of Defense Pete Hegseth is threatening unprecedented retaliation, potentially labeling Anthropic a “supply chain risk.” This designation could destroy Anthropic’s business ahead of its expected initial public offering (IPO). Alternatively, the administration could claim that Anthropic’s AI is so essential to the DOD that it will use the Defense Production Act (DPA) to attempt to compel the company to provide its technology if it does not agree to the government’s demands. It should be explicitly stated that these two threats directly conflict with each other: Either Anthropic is a risk to the DOD and should be expelled from its systems because of that danger, or it is so essential to the DOD that our national security would be at risk without unrestrained access to it. It cannot be both. The Trump administration likely does not believe either of those things. This is a negotiating tactic to get what it wants from Anthropic.
By Friday, February 27, the DOD could essentially declare war not on a foreign nation but on one of America’s most successful frontier AI companies if the company does not bow to its demands. This would be an unprecedented and unnecessary peacetime move, one that signals to other private companies that they must do the Trump administration’s bidding or face existential consequences.
Background
Anthropic has focused on developing Claude, its proprietary AI model and tools, primarily for the enterprise software market for business and government, including offerings specific to the U.S. government’s unclassified and classified networks. Anthropic has received more than $8 billion in investment funding from Amazon and is hosted on Amazon Web Services (AWS), which includes several government-specific cloud computing services across classification levels. Claude is available on U.S. government unclassified networks and is the only frontier AI tool available to U.S. government users for use with information classified up to the secret level. While the exact government contracts have not been made public, Anthropic must have included some kind of terms of service and usage policies in its DOD and General Services Administration (GSA) contracts, or the Pentagon would not be pushing so hard to renegotiate them.
According to Axios, “Anthropic and the Pentagon have held months of contentious negotiations over the terms under which the military can use Claude.” The issue came to a head when it was reported that the DOD used Claude in planning the raid to capture Venezuelan President Nicolás Maduro, raising concerns that this use violated the Anthropic Usage Policy (though it is not clear which part of the existing usage policy the raid would have violated).
The DOD is reportedly insisting “that all AI labs make their models available for ‘all lawful uses’” while “Anthropic is willing to loosen its usage restrictions” except for “the mass surveillance of Americans” and “the development of weapons that fire without human involvement,” which—it should be noted—are only a small fraction of Anthropic’s existing usage policy. Critically, the DOD has not said why it objects to the restriction against using Claude for “the mass surveillance of Americans,” an activity that would not be lawful for the Trump administration to undertake. The DOD has only reiterated its position that it wants the ability to use the AI for “all lawful uses.” Anthropic may be right to be concerned by this phrasing, as numerous federal courts have repeatedly found the Trump administration’s actions to be unlawful.
There is little transparency around how the detection models that identify violations of AI chatbot usage policies are operationalized. A developer’s ability to identify and respond to violations committed by a deployer accessing its AI model through an application programming interface (API) is even more limited, as CAP has written previously. The monitoring of the U.S. government’s use of Claude, especially at the classified level, is almost certainly extremely limited, and Anthropic’s practical ability to discover and restrict the U.S. government’s use of its tools is questionable. In fact, this issue only became part of a broader firestorm when the use of Claude in the Maduro raid was leaked to the press.
Terms of service and usage policies are unlikely to succeed as a means of restricting the use of advanced dual-use foundation models, especially by government users, underscoring the real need for actual laws and regulation of AI. Still, Anthropic’s commitment to its values and its attempt to hold on to some use restrictions is admirable. Setting terms of service for your own products is supposed to be legal for businesses in America. Claude is gaining tremendous momentum amid fierce competition in the private sector, and being the only frontier AI tool available in classified settings has made Claude enormously valuable to the DOD, which clearly wants to keep using the tool.
The threat of unprecedented retaliation
At a meeting on Tuesday, February 24, 2026, Secretary of Defense Hegseth reportedly demanded Anthropic CEO Dario Amodei “give the military a signed document that would grant full access to its artificial intelligence model.” If Anthropic does not comply, DOD officials are reportedly considering declaring Anthropic a “supply chain risk” or invoking the DPA to gain access to Claude without guardrails. This threatened retaliation against an American company is unprecedented and comes at a particularly pivotal moment in Anthropic’s business.
As former Trump administration AI advisor Dean Ball has noted, if the conflict between Anthropic and the DOD were unresolvable, the normal position would be for DOD to cancel its contract with Anthropic. Anthropic would suffer both a monetary loss in business and the loss of the security reputation that having the DOD as a customer brings but would continue to exist as a company.
But the other threatened retaliations against Anthropic could be critically damaging, even fatal, to its business.
According to CBS News, “because officials say they aren’t sure the government can trust Anthropic at this point, the Pentagon may decide to officially designate the company as a ‘supply chain risk’ to push them out of government.” If the DOD were to designate Anthropic a “supply chain risk,” it would be utilizing a process previously applied only to companies located in foreign adversary nations that were considered to be security risks, including Russia’s Kaspersky Lab and China’s telecommunications giants Huawei and ZTE. While the implementation of this could take several different forms—and might be complicated by the fact that Anthropic is an American and not a foreign company—it would ultimately have a significant impact on the company.
Being designated a “supply chain risk” would likely mean that DOD contractors and subcontractors would not be allowed to use Anthropic products and would need to certify that they did not use Anthropic products to build their products.
This comes at a critical moment for Anthropic, as its Claude Code product is booming and becoming the coding agent of choice for many major software companies and corporations, many of which are also suppliers, contractors, or subcontractors for the DOD.
While not every company will give up Claude if Anthropic is designated a “supply chain risk,” using Claude will become an affirmative choice to forego any future U.S. government business, and the easiest thing for a company to do to keep its current or future government business is to stop using Anthropic’s products. On Wednesday, February 25, the DOD upped the pressure on Anthropic by asking defense contracting giants Boeing and Lockheed Martin about their use of Claude related to “a potential supply chain risk declaration.”
This could devastate Anthropic at the moment its products are hitting hockey-stick growth. Anthropic announced a $14 billion revenue run rate in 2026 and is preparing for a possible IPO in the next year or two. Being designated a “supply chain risk” could destroy Anthropic’s business momentum and potential IPO. As Dean Ball notes, “this option could be existential for Anthropic.”
Invoking the DPA would be equally unprecedented, but according to CBS News, it would be used because, “Defense officials want full control of Anthropic’s AI technology for use in its military operations.” As Axios reports, “The idea, the senior Defense official said, would be to force Anthropic to adapt its model to the Pentagon’s needs, without any safeguards.” Dean Ball minces no words in describing this as “the quasi-nationalization of a frontier lab.”
While it is unclear which part of the DPA the administration would attempt to use to compel Anthropic to provide its AI models, Title I of the DPA allows for the president to:
… require that performance under contracts or orders … which he deems necessary or appropriate to promote the national defense … [and can] … allocate materials, services, and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.
Similarly, the definitions of “critical infrastructure” and “critical technology” along with the fact that Anthropic has an existing contract with the DOD could be utilized to help justify the DPA’s invocation.
The Biden administration’s 2023 AI Executive Order attempted to utilize the DPA’s authorities for “the national defense and the protection of critical infrastructure” to require dual-use foundation model developers to provide the government with certain information, drawing significant opposition from industry and AI supporters. The Biden AI EO was later repealed by the Trump administration in early 2025. Those who criticized the Biden administration’s use of the DPA in its AI EO should speak out with equal force against the Trump administration’s far more aggressive threatened use of the same authorities (and some are).
It is not at all clear how the DPA would or could be used by the DOD in this situation, though Alan Rozenshtein posits at Lawfare that the DOD is likely to demand either “Claude Without Contractual Restrictions” or “Forced Retraining,” which would mean “the government compelling Anthropic to retrain Claude—to strip the safety guardrails baked into the model’s training, not merely modify the access terms.” Anthropic is likely to challenge any invocation of the DPA in court but “comply under protest (given the DPA provides for criminal penalties for noncompliance),” and the government’s success in court is far from assured.
Anthropic would almost certainly seek legal relief from the courts if the Trump administration were to declare Anthropic a “supply chain risk” or invoke the DPA. But seeking legal relief takes time, and the damage to Anthropic’s business in the interim could be significant and irreversible. At a moment of fierce competition, and with an IPO nearing, even a government action later reversed by the courts could have devastating consequences for the company’s business, a fact the DOD is almost certainly aware of and attempting to use to its advantage.
Additionally, we know that when the Trump administration decides an institution is an enemy, it can unleash numerous attacks on multiple fronts, as this administration’s war against Harvard University has shown. If the U.S. government decides Anthropic is an enemy—again, a private U.S. company that is one of the most successful young companies in American history—then it has numerous levers with which to make the company’s life miserable.
The Trump administration is trying to make an example of Anthropic
Anthropic has been labeled “woke AI” by Secretary Hegseth and the Trump administration. Anthropic is an AI company founded around concerns for AI safety, whose constitution attempts to embrace and encode certain values into AI. Anthropic has opposed some of the most extreme AI deregulation—including opposing the state AI moratorium in the One Big Beautiful Bill—and is funding a pro-AI regulation super PAC. Anthropic is not a perfect company: For example, it recently revised its leading Responsible Scaling Policy to back off its previous commitments. It would be easy to try to dismiss concern over Anthropic’s conflict with the DOD as ideological, mere opposition to the Trump administration.
However, whether Anthropic shares one’s values is beside the point. This is an unprecedented attempt by the DOD to threaten a successful private American company into bowing to its demands or facing financial destruction or nationalization. Government coercion and threats of this nature toward any U.S. AI company should be opposed. And this is all over the company’s stance that what it believes is the most powerful technology in the history of the world should not be used to build mass surveillance tools, and the Trump administration’s clear rejection of that stance. This is another example of flagrant disregard for the law in the Trump administration’s long line of abuses.
Congress should raise hell about the administration’s attempt to abuse its power and threaten to destroy one of America’s newest and most valuable companies. The Senate and House Armed Services Committees should immediately convene hearings on the DOD’s threatened use of supply chain risk designations and the DPA against a domestic AI company. Members of both parties should demand that the DOD provide Congress with the full terms of its existing contracts with Anthropic and a legal justification for any threatened retaliation. Congress should recognize that the need is real for true AI safeguards and prohibited uses—not just from private companies but codified in law for government use of AI as well.
There may yet be some kind of de-escalation or detente between Anthropic and the Pentagon. But the damage was done the moment the federal government threatened to destroy or conscript an American AI company. This administration also has a long history of ignoring laws; pushing for the sale of companies to its supporters; taking golden shares of companies it considers critical; and more. American AI companies should fear what this administration has threatened and where this might end. The Trump administration has made clear that its position is one of American AI dominance. It turns out that said dominance includes dominance over American AI companies too.