In this edition: an overview of one of the biggest AI policy issues the industry has faced at a national level.
This is Q&AI, our blog series aimed at keeping you in the know on updates in the rapidly evolving world of AI. Sometimes, these will be quick updates on new developments in the field. Sometimes, they’ll be tips on tactics, features, or functionality. If you haven’t met me yet, hi: I’m Jen Taylor, CI’s Director of AI Strategy & Implementation, and your (very human) AI BFF.
Q: What is going on between Anthropic and the U.S. government? And how does OpenAI factor in?
A: The dispute between Anthropic and the Pentagon is quickly turning into one of the biggest AI policy issues the industry has faced at a national level. At its core is the question of who actually holds decision-making power over generative AI use: the companies building the models, the federal government, or lawmakers who have not yet legislated the space. We are seeing the beginning of something that will likely continue to unfold.
I think the seemingly clear ethics of Anthropic’s stance make it easy to miss the broader implications. A competing perspective is that the government is elected and charged with protecting the public, whether or not I agree with the administration in power. Yet if private AI companies, in this case effectively one CEO, Dario Amodei, can unilaterally set boundaries on national security use, that establishes a powerful precedent.
Here’s what’s been happening:
According to public statements and reporting over the weekend, Anthropic held firm on two sticking points: limits on autonomous weapons and limits on mass domestic surveillance. President Trump then posted on Truth Social that federal agencies would stop working with Claude moving forward. Following that, Defense Secretary Pete Hegseth posted on X designating the company a “supply chain risk.” That designation is typically applied to foreign entities seen as posing national security concerns, which makes its use against a U.S. company highly unusual. All of this communication occurred via social media, and it remains unclear whether a formal supply chain risk review process has officially begun. Anthropic has indicated it will challenge the designation in court.
Critics have framed the move as punitive and politically motivated rather than rooted in traditional supply chain risk criteria. There has also been rhetoric labeling Anthropic as left wing or “woke.” The messaging appears internally inconsistent: the Pentagon has previously described advanced AI systems, and Claude specifically, as critical to government success (including reports that Claude was used over the weekend), even as it labels the provider a risk. To me, this looks like the government exerting extraordinary pressure on a company for refusing to align with its preferred use cases or broader ideological positioning.
Then Sam Altman entered the picture, stating that OpenAI would step in to provide services to the Pentagon. Reporting indicates OpenAI reached an agreement for military use after Anthropic’s refusal. It remains unclear whether the contractual language in OpenAI’s agreement addresses the same restrictions in the same way Anthropic reportedly required.
I don’t think one CEO should be able to handcuff the government. But I do think these are extraordinary times, and the government may be consolidating too much power. At some point, someone has to push back.
Hard Fork dropped an extra 30-minute episode Saturday night breaking down the situation. Worth a listen if you want a quick recap.
TALK TO YOU (VERY) SOON,
Jen
Have a question for a future edition? Submit it here!