Anthropic sues the U.S. Department of Defense for the first time! Court halts the order banning its AI tools, calling it "unconstitutional retaliation".
A federal judge in the Northern District of California has granted Anthropic's application for a temporary restraining order, blocking the U.S. federal government from enforcing its order to disable Anthropic's artificial intelligence technology.
Artificial intelligence (AI) startup Anthropic has temporarily won the court's support in its legal showdown with the U.S. Department of Defense. On March 26, local time, Judge Rita F. Lin of the federal district court for the Northern District of California granted Anthropic's request for a temporary restraining order, preventing the U.S. federal government's order to disable Anthropic's AI technology from taking effect. "Anthropic is not currently restricted and can continue to fulfill its federal contracts," the order states. The restraining order lasts one week to give the federal government a chance to appeal; as the case is still pending, the ruling is not final.
In her opinion, Lin wrote, "The government's broad measures do not appear to be aimed at the national security interests claimed by the government." "If the concern is the integrity of the chain of command, the Department of Defense could simply stop using Claude. Instead, these measures appear to be punitive against Anthropic." Such actions, the judge stated, are "classic retaliatory behavior in violation of the First Amendment."
The standoff between Anthropic and the Department of Defense stems from Anthropic's refusal to allow the department to use its Claude models for fully autonomous lethal weapons systems or domestic mass surveillance. After Anthropic refused to grant the military unrestricted use of its AI, Trump called Anthropic's management "left-wing lunatics," and on March 5 the Department of Defense designated Anthropic a "supply chain risk," making it the first domestic company the U.S. government has listed as such.
On March 9, Anthropic sued the Department of Defense and other federal agencies over the "supply chain risk" designation, which impedes business between Anthropic and defense contractors. In its complaint, Anthropic argued that the designation and other punitive measures could cost the company hundreds of millions, if not billions, of dollars, and that the designation lacks any legal basis.
A spokesperson for Anthropic said at the time that seeking judicial review does not change the company's long-standing commitment to using artificial intelligence to protect national security, but is a necessary step to protect the company's business, customers, and partners. The company will continue to explore all avenues for resolution, including dialogue with the government.
In a statement, Anthropic welcomed the judge's latest ruling. The company stated, "While this lawsuit was necessary to protect Anthropic, our customers, and partners, our focus remains on productive collaboration with the government to ensure that all Americans benefit from secure and reliable artificial intelligence."
Anthropic added that, because it disagrees with the government's position, it is being excluded from government contracts, and that the legal principles at stake in the case will affect any federal contractor that could be excluded for holding unwelcome views. The Trump administration had previously vowed to use legal battles to exclude Anthropic from all U.S. government agencies.
At a hearing earlier this week presided over by Lin, a lawyer for the U.S. government argued that trust is a key component of the relationship between the military and its service providers, and that Anthropic's attempt to influence the Department of Defense's AI usage policy during contract negotiations undermined that trust. The lawyer said the government was concerned about the risk of future "destructive behavior" by Anthropic, including altering the artificial intelligence software the government had purchased from the company.
In the ruling, however, Lin stated that the U.S. Department of Justice lacked "proper grounds" to assert that Anthropic's firm stance on restricting its AI technology would make it a "disruptor."
During the hearing, a lawyer for Anthropic noted that the Department of Defense can review any AI model before deploying it, and that Anthropic cannot prevent a deployed model from running, change how it operates, shut it down, or see how the military uses it.
As part of the legal battle over the restraining order, Anthropic also filed a lawsuit in the court of appeals in Washington, D.C., focusing on a law that governs supply-chain risk mitigation procedures in procurement. In that case, the company argued that the Department of Defense's measures were "arbitrary, capricious, and an abuse of discretion."