The Department of Defense is escalating pressure on Anthropic, the creator of the Claude artificial intelligence model, by signaling it could end the Pentagon’s relationship with the company unless Anthropic drops usage restrictions that limit how the military can deploy its technology.

According to administration officials briefed on internal discussions, the Pentagon has grown frustrated with Anthropic’s stance during negotiations over terms for military use of advanced AI. The department wants AI partners to allow U.S. forces to apply their tools for “all lawful purposes,” which includes roles in weapons development, intelligence collection and battlefield operations. Anthropic, however, has resisted broadening that access, particularly around fully autonomous weapons and mass domestic surveillance.

The standoff comes after months of talks with several leading AI firms; alongside Anthropic, companies such as OpenAI, Google and xAI are also being pressed on terms for military use. Officials say at least some of those firms have shown more flexibility in discussions over lifting certain restrictions.

Anthropic’s Claude system is currently among the few AI models authorized for use on classified Pentagon networks under a prototype contract reportedly worth up to $200 million. The model has been tapped for national-security work and, according to media reports, was used via a partner in a classified operation targeting the Venezuelan leadership earlier this year. Anthropic has disputed that its executives discussed specific operational deployments with the Pentagon, emphasizing its engagement has focused on policy terms rather than mission details.

Defense officials, including Secretary of Defense Pete Hegseth, have indicated that if Anthropic does not relent on usage terms, the department may not only terminate its contract but also label the company a “supply chain risk.” Such a designation would effectively bar companies that work with the military from using Anthropic’s technology, a move typically reserved for adversarial foreign entities.

The dispute highlights ongoing tensions between national security imperatives and corporate AI governance frameworks. While the Pentagon argues that unfettered access to advanced AI tools across all lawful military functions is essential for operational effectiveness, Anthropic maintains its safeguards are necessary to prevent ethically problematic applications.

As negotiations continue, the Pentagon’s posture signals a hard line on AI conditions in defense partnerships, with potential implications for how commercial AI technology integrates into U.S. military operations going forward.