Federal Judge Blocks Trump's Ban on Anthropic AI in Government — Rules It Violates First Amendment
A federal judge has ruled that the Trump administration violated First Amendment free-speech protections when it banned Anthropic's AI models from government systems. The ruling marks a significant legal milestone for AI companies operating in the US government sector.
A US federal judge ruled in early April 2026 that the Department of Defense's ban on using Anthropic's Claude AI models — issued as part of broader restrictions on non-NVIDIA AI vendors — constituted First Amendment retaliation. The ruling blocks the ban and orders the DoD to reinstate access to Claude for government contractors and agencies that had been using it.
Background: The Trump Administration's AI Vendor Policy
The Trump administration has been pushing US government agencies to standardize on specific AI vendors — particularly those that rely on NVIDIA hardware supply chains and are headquartered in the US. As part of this policy, several AI models from companies perceived as insufficiently "America First" were restricted or banned from government use. Anthropic — despite being a US company based in San Francisco — was included in these restrictions, allegedly over concerns about its Constitutional AI approach and safety-focused policies, which critics characterized as overly restrictive for government use cases.
The Court's Reasoning
The federal court found that the ban constituted government retaliation for Anthropic's speech — specifically its public advocacy for AI safety regulations and its Constitutional AI training approach, which the judge found to be a form of protected expression embedded in the model's design. The ruling aligns with broader First Amendment jurisprudence holding that the government cannot deny contracts on the basis of viewpoint discrimination.
"The government cannot penalize a private company for the values embedded in its product when those values constitute protected speech. Restricting a company's government contracts because its AI expresses certain viewpoints is viewpoint discrimination." — Federal Court Opinion, April 2026
What This Means for AI in Government
The ruling has significant implications for how the government can regulate AI vendor selection. It establishes that AI model design choices — including safety guidelines, content policies, and Constitutional AI training — may constitute protected speech. That makes it considerably harder for the government to restrict specific AI vendors based on their product philosophies rather than on objective security or performance criteria. For Anthropic, the ruling restores access to the lucrative government and defense contractor market.