The National Security Agency is using an artificial intelligence model developed by Anthropic despite a formal supply-chain risk warning issued by the United States Department of Defense, according to a report published Sunday.
The dispute stems from a rare “supply chain risk” designation issued earlier this year by the Pentagon, which effectively barred Anthropic from conducting business with military entities. After the company was cut off in February and contractors were directed to cease working with it, Anthropic filed a lawsuit in March against multiple federal agencies.
Despite that restriction, sources told Axios that the NSA has been using Anthropic’s advanced model, Claude Mythos Preview. Another source indicated the tool may be in broader use across the intelligence community, though details remain limited.
It is not clear how the NSA, an agency that sits within the Defense Department, is deploying the model. However, organizations with access to Mythos have reportedly used it to identify exploitable cybersecurity vulnerabilities, a capability that has raised both interest and concern within national security circles.
Anthropic has tightly restricted access to the model, limiting it to roughly 40 organizations due to what it describes as highly sensitive “offensive cyber” capabilities. Only a dozen of those users have been publicly identified, though one source said the NSA is among the undisclosed participants.
International partners are also involved: officials tied to the United Kingdom’s AI Security Institute have confirmed access to the system.
The issue has drawn attention at the highest levels of government. Anthropic CEO Dario Amodei met Friday with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss the model’s use and the broader dispute. Sources described the talks as productive, with next steps likely focusing on how non-military agencies interact with the technology.
At the core of the conflict is a disagreement over how the AI can be used. The Defense Department has pushed for broader access for “all lawful purposes,” while Anthropic has resisted, citing concerns about applications such as mass domestic surveillance and autonomous weapons development.
Some Pentagon officials argue the company’s restrictions make it an unreliable partner for national defense needs. Others within the administration are pushing to resolve the standoff to leverage the firm’s advanced capabilities.
Neither Anthropic nor the Defense Department has commented publicly on the report.