Microsoft has entered the legal fray surrounding Anthropic’s dispute with the US Department of Defense, submitting an amicus brief to a federal court in San Francisco that argues forcefully for a temporary restraining order. The brief highlights the potential disruption to the many companies and government systems that have come to rely on Anthropic’s artificial intelligence products. Alongside Microsoft, technology giants Google, Amazon, Apple, and OpenAI have also thrown their support behind Anthropic through a joint filing.
The origins of the conflict lie in a collapsed $200 million contract that would have seen Anthropic’s AI deployed on classified military systems. Anthropic walked away from the deal after the Pentagon refused to agree to restrictions preventing the use of its technology for mass surveillance or autonomous weapons. Defense Secretary Pete Hegseth subsequently designated the company a supply-chain risk, a label that has triggered the cancellation of Anthropic’s existing government contracts and effectively bars the company from future federal work.
Microsoft’s extensive partnerships with the Pentagon give its court filing particular weight. The company is part of the $9 billion Joint Warfighting Cloud Capability contract and has signed numerous additional agreements for software and enterprise services across defense, intelligence, and civilian agencies. A Microsoft spokesperson said both national security and responsible AI governance were achievable goals and that the entire tech sector had a stake in finding a path forward together.
Anthropic launched two lawsuits on the same day, challenging the designation in both a California federal court and the US Court of Appeals for the DC Circuit. The company argued that the Pentagon’s use of the supply-chain risk label was unconstitutional and amounted to punishment for Anthropic’s publicly held views on AI safety. Court filings revealed that Anthropic itself does not have full confidence in Claude’s ability to operate safely in autonomous lethal warfare scenarios, which it said was precisely why the restrictions were necessary.
The case is unfolding against a backdrop of growing congressional concern about AI in military operations. Lawmakers have formally demanded answers about whether AI targeting systems were used in a strike on an Iranian school that reportedly killed over 175 people. The intersection of this tragedy and Anthropic’s legal battle has elevated the public conversation about the need for clear, enforceable rules governing the use of artificial intelligence in warfare.
