US Court Backs Pentagon in Anthropic Blacklisting Dispute
A federal appeals court in Washington, D.C., has declined to block the Pentagon’s decision to blacklist artificial intelligence company Anthropic on national security grounds, marking an early legal victory for the Trump administration. The ruling is not final, however, and the case will continue.
The dispute centres on the Pentagon’s designation of Anthropic as a supply-chain risk, a move that prevents the company from securing defence contracts and could extend to a broader government-wide ban. Anthropic, known for developing the Claude AI assistant, argues that the designation is unlawful and exceeds the authority of Defence Secretary Pete Hegseth.
Legal Battle Over AI Restrictions
Anthropic has challenged the designation in court, claiming it stems from the company’s refusal to remove certain safeguards from its AI systems. These safeguards restrict the use of its technology in sensitive applications, including surveillance and autonomous weapons.
Company representatives argue that the decision could result in severe financial losses and long-term reputational damage. Furthermore, they contend that the government’s action violates constitutional protections, including free speech and due process.
In contrast, the Justice Department maintains that the designation is based on contractual disagreements rather than Anthropic’s stance on AI safety. Officials argue that the company’s restrictions could create uncertainty in military operations and potentially compromise system reliability.
Conflicting Court Decisions Add Complexity
The Washington-based appeals court denied Anthropic’s request to pause the designation while legal proceedings continue. This decision contrasts with a separate ruling from a California federal court, which temporarily blocked a related order issued by the Pentagon.
The California judge indicated that the government may have acted improperly, suggesting the designation could represent retaliation against Anthropic’s views. As a result, the company is pursuing two parallel legal challenges under different statutes.
This divergence between courts highlights the complexity of the case and signals that the final outcome remains uncertain.
Broader Implications for AI and National Security
The Pentagon’s move marks the first time a U.S. company has been publicly labelled a supply-chain risk under laws designed to protect military systems from potential threats. Consequently, the case could set a significant precedent for how artificial intelligence firms interact with government agencies.
At the same time, the dispute underscores growing tensions between technological innovation and national security priorities. While governments seek greater control over advanced AI systems, companies are increasingly emphasising ethical boundaries and responsible use.
Moreover, the case raises important questions about the limits of government authority in regulating private technology firms, particularly when those firms resist participation in military applications.
As the legal process unfolds, the outcome is expected to shape future relationships between the defence sector and the rapidly evolving AI industry.
With inputs from Reuters