Federal Judge Halts Pentagon’s Blacklisting of Anthropic

A federal judge in the United States has issued a temporary injunction against the Pentagon’s blacklisting of Anthropic, marking the latest development in the company’s contentious battle with the military regarding AI safety in combat scenarios. Anthropic, in its lawsuit filed in a California federal court, contends that U.S. Secretary of War Pete Hegseth exceeded his authority by classifying Anthropic as a national security risk in the supply chain. This classification allows the government to identify companies that could potentially expose military systems to infiltration or sabotage by adversaries.

The lawsuit further alleges that the government infringed on Anthropic’s First Amendment right to free speech by retaliating against its stance on AI safety. Anthropic claims it was not afforded the opportunity to challenge the designation, thereby violating its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, concurred with Anthropic’s arguments in a 43-page ruling. However, the injunction will not take immediate effect, as a seven-day period has been granted for the administration to potentially appeal the decision.

The dispute stems from Hegseth’s decision to blacklist Anthropic after the company opposed allowing the military to utilize its AI chatbot, Claude, for surveillance or autonomous weaponry purposes. This action has resulted in Anthropic being barred from certain military contracts, a move that company executives estimate could lead to substantial financial losses and damage to its reputation.

Anthropic contends that AI models lack the necessary reliability for safe deployment in autonomous weapons and opposes domestic surveillance as a violation of rights. While the Pentagon argues that private entities should not have the authority to restrict military operations, it clarified that it has no interest in deploying such technologies for unauthorized purposes.

In her ruling, Judge Lin found that the government’s actions were punitive toward Anthropic rather than driven by national security concerns. She stated, “The record indicates that Anthropic is facing repercussions for criticizing the government’s contracting stance publicly,” emphasizing that penalizing Anthropic for shedding light on the government’s contracting practices constitutes unlawful First Amendment retaliation.

Following the ruling, a spokesperson for Anthropic, Danielle Cohen, expressed satisfaction with the decision and reiterated the company’s commitment to collaborating with the government to ensure the safe and beneficial use of AI technologies for all Americans.

The designation of Anthropic as a supply-chain risk under a government procurement statute, a classification intended to safeguard military systems from foreign sabotage, marks the first time a U.S. company has been publicly labeled this way. Anthropic’s lawsuit challenges the legality of the decision, claiming it lacks factual basis and contradicts the military’s prior positive assessments of Claude.

The Justice Department countered Anthropic’s arguments by suggesting that the company’s refusal to comply with contractual terms could introduce uncertainty within the Pentagon regarding the use of Claude, potentially jeopardizing military operations. The government maintained that the designation was a consequence of Anthropic’s reluctance to adhere to contractual obligations rather than its stance on AI safety.

Anthropic has a separate lawsuit pending in Washington concerning another Pentagon supply-chain risk designation that could result in its exclusion from civilian government contracts.
