According to a source familiar with the matter, Anthropic’s technology had already been used in certain military operations connected to Iran. These deployments reportedly contributed to concerns about how the systems might be integrated into defence infrastructure.
Because of those concerns, the Pentagon classified the company as a potential supply chain risk. The designation effectively bars contractors from incorporating Anthropic's AI models into systems tied to military programmes.
Anthropic plans legal challenge
Anthropic chief executive and co-founder Dario Amodei said the company plans to challenge the Pentagon's decision in court. He reiterated that Anthropic remains committed to building artificial intelligence systems with strong safety protections.
The company argues that strict safeguards remain central to its approach to AI development, and it intends to defend that position as the dispute heads toward a possible legal battle.
Growing competition in the AI industry
Anthropic operates in a highly competitive artificial intelligence sector. Several major technology companies are racing to build powerful AI models capable of answering questions, analysing data and automating complex tasks.
Competitors include OpenAI and Elon Musk’s AI company xAI. Their tools, including ChatGPT and Grok, have rapidly gained widespread adoption across multiple industries.
As artificial intelligence technology becomes more capable, governments worldwide are increasingly debating how it should be used. Military applications have become one of the most sensitive areas of that discussion.
The Pentagon’s decision highlights the growing tension between rapid technological innovation and national security concerns. For Anthropic, the designation could influence its relationship with government partners as it continues to expand its AI development efforts.
With inputs from Reuters