Dumped, Replaced & Trending: The Anthropic Breakup, the OpenAI Rebound, and Why #QuitGPT is Breaking the Internet
How it started… Anthropic and the U.S. Government
Anthropic's Claude was the first commercial AI model deployed inside the Pentagon's classified network, under contracts valued at up to $200 million. The company had been working with the Department of Defense (rebranded as the "Department of War" under the Trump administration) on classified intelligence and operational tasks since at least 2025 and was positioned to go the distance as the USA's AI partner.
However, Anthropic established two firm ethical "red lines" for how its models could be used:
No mass domestic surveillance of American citizens
No fully autonomous weapons without human oversight
Anthropic argued that existing laws had not kept pace with AI capabilities, and that large language models were "not ready for prime time" in lethal autonomous settings, posing risks to both civilians and American personnel.
The breakup.
In late February 2026, a high-profile confrontation erupted between the Trump administration and Anthropic over the military use of AI technology. The Pentagon, led by Defence Secretary Pete Hegseth and CTO Emil Michael, pushed for contracts requiring AI companies to allow "any lawful use" of their models, language broad enough to encompass surveillance and autonomous weapons under certain post-9/11 legal provisions. Anthropic refused to allow its Claude AI models to be used for mass domestic surveillance or fully autonomous weapons, leading President Trump to order all federal agencies to stop using Anthropic's products and the Pentagon to designate the company a "supply chain risk".
Key events in the escalation:
Months of private negotiations between Anthropic and the Pentagon failed to produce agreement on contract language.
Wednesday, February 25: Hegseth summoned Anthropic CEO Dario Amodei to Washington. While the meeting was reportedly "cordial," two ultimatums followed: Hegseth threatened to invoke the Defense Production Act (a 1950 law allowing the government to commandeer private technology) and to classify Anthropic as a "supply chain risk".
Thursday, February 26: Anthropic rejected what it called a "final offer," stating that compromise language was "paired with legalese that would allow those safeguards to be disregarded at will." Amodei declared he could not "in good conscience accede" to the demands.
Friday, February 27: The Pentagon's deadline of 5:01 PM passed. President Trump posted on Truth Social that all agencies would "immediately cease" using Anthropic. Hegseth branded Anthropic's stance "a master class in arrogance and betrayal" and designated the company a supply chain risk—a classification typically reserved for entities with ties to foreign adversaries.
Trump wrote: "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"
Anthropic was given six months to phase out its technology from all government operations and announced it would legally contest the supply chain risk designation. Hours later, rival OpenAI struck its own deal with the Pentagon.
The New Mistress: OpenAI Steps In with a Pentagon Deal
Just hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced that his company had finalized a deal to deploy its models on the Pentagon's classified network.
Altman stated that the agreement included the same two red lines Anthropic had demanded (prohibitions on mass surveillance and autonomous weapons) and that the Pentagon had accepted these terms. OpenAI also committed to:
Deploying field deployment engineers for real-time safety oversight
Restricting models to cloud-only environments (not edge/autonomous devices)
Maintaining full control over its safety protocols
However, critics raised immediate concerns. A Community Note on Altman's X post claimed that government officials said OpenAI's models could be used for "all lawful purposes", the very language Anthropic had rejected. Anthropic itself pointed out that under certain provisions of the USA PATRIOT Act, "any lawful use" could encompass mass data collection.
The optics were damaging: OpenAI appeared to be capitalizing on a competitor's principled stand while offering functionally similar (or weaker) protections.
Enter the fallout: The QuitGPT Movement
The QuitGPT campaign began in early February 2026 as a grassroots effort on Reddit and Instagram, initially catalysed by revelations that OpenAI president Greg Brockman and his wife each donated $12.5 million to Trump's super PAC, MAGA Inc. Organizers also flagged that ICE uses a résumé screening tool powered by ChatGPT, connecting OpenAI to controversial immigration enforcement operations.
The campaign urged users to delete the ChatGPT app, cancel paid subscriptions, and switch to alternative AI platforms. The movement surged dramatically after the Anthropic ban and OpenAI's Pentagon deal on February 27–28:
A Reddit thread about OpenAI's Pentagon deal accumulated 30,000 upvotes, with top comments urging users to "Cancel and Delete ChatGPT!!!"
The QuitGPT Instagram account gained roughly 10,000 followers in the days following the news
An Instagram post from the campaign received over 36 million views and 1.3 million likes
Actor Mark Ruffalo publicly endorsed the campaign
Pop singer Katy Perry posted a screenshot showing she had purchased a Claude Pro subscription, captioning it "Done"
Beyond the political controversy, QuitGPT also channelled frustration with OpenAI's product decisions:
Dissatisfaction with the performance of GPT-5.2
Criticism of ChatGPT's sycophantic behaviour
The introduction of sponsored links on the platform
The controversial retirement of the popular GPT-4o model
As of March 2026, over 17,000 people have signed pledges on the QuitGPT website to cancel or boycott ChatGPT subscriptions.
The Post-Breakup Glow Up
Paradoxically, the government ban boosted Anthropic's consumer appeal. Claude's app shot to No. 1 in Apple's U.S. App Store by Saturday, February 28—overtaking ChatGPT for the first time. Anthropic reported that its free user base had grown over 60% since January, with daily sign-ups tripling since November and paid subscriptions more than doubling in 2026. Chalk messages appeared outside Anthropic's San Francisco office reading "you give us courage".
Final Thought
This dispute crystallizes several unresolved tensions in the AI industry:
Corporate ethics vs. government authority: Can a private AI company draw ethical boundaries on how the military uses its technology, or must it comply with "any lawful use"?
Legal gaps: Anthropic's core argument, that current law has not kept pace with AI capabilities, remains unaddressed. Provisions like the PATRIOT Act could enable mass data aggregation that technically qualifies as "lawful".
Competitive dynamics: OpenAI's willingness to fill the void left by Anthropic raises questions about whether safety commitments are genuine or negotiable depending on the contract.
Consumer power: The QuitGPT movement tests whether subscription-based AI companies are vulnerable to political boycotts in ways traditional tech firms are not.
The current fallout is not limited to Anthropic. Hundreds of Google employees and dozens of OpenAI employees signed an open letter warning that the Pentagon was attempting to negotiate the same unrestricted terms with their companies:
"They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand… We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands."
Jack Shanahan, a former Pentagon official and associate of ex-Google executive Eric Schmidt, expressed sympathy for Anthropic's position, stating that Claude's red lines are "reasonable" and that large language models are "not ready for prime time in national security settings". The six-month phase-out period for Anthropic's government contracts is now underway. Anthropic has pledged to fight the supply chain risk designation in court, while OpenAI engineers are being deployed to the Pentagon. The outcome will shape how AI companies navigate the intersection of national security, corporate ethics, and public accountability for years to come.

