Microsoft is urging a federal court to temporarily block the Department of War’s decision to blacklist artificial intelligence company Anthropic, arguing the move could disrupt critical military technology systems and harm the United States’ competitiveness in advanced AI development.

In a legal filing submitted Tuesday, Microsoft supported Anthropic’s request for a temporary restraining order that would prevent the Department of War from enforcing the ban while the case proceeds in federal court.

Anthropic filed a lawsuit earlier this week in federal court in San Francisco challenging the Trump administration’s decision to designate the company as a national security supply-chain risk. The company argues the designation is unlawful and claims it was imposed in retaliation after Anthropic refused to allow its Claude artificial intelligence model to be used for autonomous lethal warfare or large-scale domestic surveillance.

The government’s designation blocks the Department of War from using Anthropic’s technology and requires defense contractors and vendors to certify that their systems do not rely on Anthropic AI models when working with the department.

Anthropic said the classification is unprecedented for an American technology firm. Such designations are typically applied to companies tied to foreign adversaries, such as Chinese telecommunications company Huawei.

Microsoft, which partners with multiple artificial intelligence developers and provides cloud infrastructure used in government and military systems, warned that the sudden restriction could force contractors and technology providers to rapidly reconfigure systems already deployed for military operations.

In its court filing, the company argued that immediately removing Anthropic’s technology from Department of War systems could disrupt existing deployments of advanced AI tools used for defense operations and analysis.

“This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” Microsoft said in its brief.

The company also cautioned that rapid changes to Department of War software configurations could affect battlefield technology and intelligence systems relied on by American forces. According to the filing, Anthropic’s Claude model is currently the most widely deployed frontier AI system within the Department of War and the only such model operating on certain classified systems.

The dispute comes amid a broader debate over how artificial intelligence should be used in national security applications. Anthropic has said its technology should be developed with strict safeguards and has resisted requests to allow its systems to be used for autonomous lethal weapons or mass domestic surveillance.

Anthropic’s lawsuit claims the federal government retaliated against the company for maintaining those restrictions.

“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle,” the complaint states.

A federal judge will now decide whether to grant the temporary restraining order, which would pause the Department of War’s restrictions while the lawsuit moves forward.