Eyes Openers

Business
Anthropic sues US government after being labelled a ‘supply chain risk’ in AI dispute

March 10, 2026

Artificial intelligence company Anthropic has filed an unprecedented lawsuit against the United States government after being formally labelled a “supply chain risk”, escalating a bitter dispute over the military use of advanced AI technology.

The legal action, filed in a federal court in California, challenges a directive issued by the administration of Donald Trump that effectively barred US government agencies from using Anthropic’s AI systems. The company argues the move was politically motivated retaliation after it refused to remove restrictions on how its technology could be deployed by the US military.

Anthropic’s lawsuit claims the decision was “unprecedented and unlawful” and violated constitutional protections around free speech and due process.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the firm said in its complaint. “No federal statute authorises the actions taken here.”

The conflict stems from a disagreement between Anthropic’s chief executive Dario Amodei and US defence officials, including Pete Hegseth, over how the company’s artificial intelligence tools could be used by the Pentagon.

Anthropic has long maintained strict contractual limits on the deployment of its technology, including bans on using its AI models for “lethal autonomous warfare” and for mass domestic surveillance of American citizens.

According to the lawsuit, defence officials demanded that the company remove these restrictions from its government contracts. Anthropic refused, arguing that such safeguards were essential to ensure responsible use of powerful AI systems.

The company said negotiations with the Department of Defense were initially progressing and that both sides had been working toward revised language that would allow continued cooperation while preserving ethical limits.

However, those talks reportedly collapsed after the White House intervened.

Following the breakdown in negotiations, the Pentagon designated Anthropic as a “supply chain risk” — a classification normally applied to companies considered insecure or unreliable partners for government systems.

The designation effectively blocks US government agencies and contractors from using Anthropic’s software tools.

The move was accompanied by public criticism from the Trump administration, with White House officials accusing the company of attempting to dictate military policy.

Liz Huston, a spokesperson for the White House, told reporters that Anthropic was “a radical left, woke company” seeking to impose its own conditions on national defence operations.

“Under the Trump Administration, our military will obey the United States Constitution — not any woke AI company’s terms of service,” Huston said.

Anthropic disputes that characterisation and argues that its restrictions were standard contractual provisions designed to prevent misuse of AI systems.

The legal challenge names a broad list of defendants, including the Executive Office of the President and senior government officials such as Marco Rubio and Howard Lutnick.

The suit also targets 16 federal agencies, including the Departments of Defense, Homeland Security and Energy.

Anthropic claims the directive banning its technology has caused significant reputational and commercial damage.

The company said that both current and prospective commercial contracts were now under threat, potentially jeopardising “hundreds of millions of dollars in the near term”.

It also argued that the decision had created a broader chilling effect across the technology sector by discouraging companies from speaking publicly about the risks associated with advanced AI.

The case has already drawn support from across the technology industry.

Nearly 40 employees from rival companies including Google and OpenAI filed a joint legal brief backing Anthropic’s position, despite the firms being competitors in the rapidly expanding AI sector.

The signatories warned that the deployment of advanced AI systems without safeguards could create serious risks, particularly if used for mass surveillance or autonomous weapons.

“As a group, we are diverse in our politics and philosophies,” the engineers wrote in their submission. “But we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight.”

Anthropic’s flagship AI system, Claude, has become widely used by technology companies and developers for coding, research and enterprise software tasks.

Companies such as Microsoft, Amazon and Meta have confirmed they will continue to use the technology in commercial applications, although not in projects involving US defence agencies.

Anthropic is not seeking financial damages in the case. Instead, it is asking the court to declare the government’s directive unconstitutional and remove the “supply chain risk” designation immediately.

Legal experts believe the dispute could become a landmark case in defining how governments interact with AI developers.

Carl Tobias, a law professor at the University of Richmond, said the case could ultimately reach the US Supreme Court.

“Anthropic may very well win in federal court,” Tobias said. “But this administration is not shy about appealing. It will probably go to the Supreme Court.”

The outcome could have major implications for the fast-growing AI industry, particularly as governments worldwide increasingly rely on private technology firms to supply critical artificial intelligence systems for defence, intelligence and national security operations.

For now, the lawsuit marks a rare moment in which a major technology company is openly challenging government authority over the future deployment of artificial intelligence.

