A Startup's Guide To The Anthropic Vs. Pentagon Showdown
A dozen defense founders and investors tell Upstarts what they're watching closely in the AI contract battle, from capabilities to ethics.

On Friday night, Kindo founder Ron Williams hit pause on Claude.
His Los Angeles-based startup, which offers an AI-native platform for security engineers, abruptly stopped writing any code using models from AI lab Anthropic.
Just hours before, the U.S. Secretary of War, Pete Hegseth, had responded to Anthropic’s refusal to amend its contract language by publicly accusing Anthropic of delivering a “master class in arrogance and betrayal.” Hegseth had gone further, making the highly unusual move of designating U.S.-based Anthropic a supply-chain risk.
“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote.
At Kindo, which has raised nearly $36 million to build its software, and where some customers sell to the Department of War and wider federal government, Williams says he can’t afford to discount that directive as an empty threat.
If a resolution isn’t reached soon, ripping out months of code would be a “big risk” for Williams’ startup, which has its own aspirations to sell directly to the government.
“We think it’s an unfortunate bind to be in, and it creates a lot of uncertainty on how to proceed in leveraging Anthropic,” he tells Upstarts. “Some of our customers are in the same bind as well.”
Anthropic has disputed that such measures are necessary. The company argued, via a blog post responding to Hegseth’s remarks, that if such a designation were formally adopted (which Anthropic disputes it legally can be), it would only affect Department of War contractors in their work specifically for the DoW.
An Anthropic spokesperson declined further comment.
And for now, it looks like Williams and Kindo might be unusual in taking such defensive measures. But Kindo is far from the only startup having this conversation internally, and it may not be the last to act if the face-off persists. (Anthropic CEO Dario Amodei has reportedly reopened talks.)
The situation is highly fluid, and most of the key players are too big to really count as startups, from Anthropic to OpenAI, whose CEO Sam Altman jumped into the fray by signing, then saying he would amend, his own contract.
But there are a few important takeaways here if you’re on the startup side and not already obsessively ‘monitoring the situation.’
Upstarts spoke to about a dozen founders and investors, many with service backgrounds, and all with some connection to defense tech, to break down what’s going on into five main takeaways.
The TL;DR for busy builders:
This fight only happened because Anthropic is useful
Startups selling to government can’t have it both ways
The DoW’s response hasn’t made such selling easier
Federal staff will probably use AI even more now
There’s a bigger ethical debate to come
We’ll run through each of these points – and what founders and investors are saying – below.
1. The back-door compliment
On one thing nearly every insider agrees: none of this would be happening if Anthropic’s models weren’t valued.
Anthropic’s contract with the Pentagon is just $200 million – or about 1% of its revenue run rate, per Bloomberg – but its use has spread widely across the U.S. Armed Forces.
It’s particularly popular with engineers who favor Claude Code to build their own applications on top, several defense-focused founders and investors say. That’s why Hegseth’s own statement noted that the government would retain access to Claude over a six-month period, they add; it wouldn’t be trivial to cut out.
“The reality is that no one wants to switch, because Claude is the best,” says one defense-focused VC. “Everyone’s waiting and seeing how the negotiations play out.”
While Williams at Kindo was unwilling to take any chances, other startups that expect to work with DoW – and other agencies that might follow its lead – said they planned to continue using Claude until formal guidance is issued, or they were explicitly told to stop.
One defense startup founder says that they typically pit Claude, OpenAI’s Codex and Google’s Gemini models against each other to refine plans and documents while coding, and have no plans to stop.
“Because my service (the deliverable itself) doesn’t use the models, I should be fine,” they say. “I can still develop code with Claude and use it to stress test, just not in the final production.”
Switching costs are another factor in startups opting to wait for the dust to settle, notes another defense investor who asked to speak anonymously.
2. The front-door critique
Investor Ashwin Lalendran, who previously worked at the Air Force Research Lab, stands firmly in the camp that startups should not be dictating ethical value judgments for the government.
“Anthropic tried to be a defense prime, without really accepting what that means,” he says. “I don’t believe the founder or employees should decide how the technology gets deployed. It’s our democracy, the President, Congress, and the courts.”
Andy Markoff, a former U.S. Marine Corps officer and now the co-founder and CEO of Smack Technologies, agrees.
Given that his startup recently raised $32 million to build its own frontier AI lab explicitly for use by the Department of War, that’s no surprise – but Markoff argues that startups selling to the government should trust that the appropriate auditors are already in place. “Someone has that responsibility, but that person is in uniform and took an oath of service to the country,” he says.
Lalendran’s advice for startups looking to sell to the government moving forward: save the closer control for research and development pilots; once your tools are ready to reach production, “get ready for what that means.”
“When it’s in the warfighter’s hands, the latency of decision-making is very important,” he says. “It’s not something that we can call back home to the founder sitting comfortably in San Francisco.”
3. Supply chain uncertainty
Among the supporters of the DoW’s position who spoke to Upstarts, none would go so far as to fully support Hegseth’s supply-chain risk designation.
Other words tossed out anonymously by respondents to describe the supply-chain threat: “scary,” “too extreme,” a “mess.”
Bob Ackerman, co-founder and managing partner of cyber startup incubator DataTribe, notes that the move comes against a backdrop of the current administration attempting to engage more with the tech community.
Additional ambiguity in a historically slow and convoluted process won’t help those efforts, he says, and it’s a reminder to startups that selling to the government isn’t the same as selling to a typical customer.
“It forces companies to consider the complexities of doing business with the U.S. Government,” Ackerman says.
The relationship flows both ways. Ross Fubini, managing partner at XYZ Capital, maintains that startups selling to the government need to believe that it will follow the rule of law. At the same time, the government should also proactively listen to the startups it works with about their views on its appropriate uses to make more informed decisions, he adds.
4. The Streisand effect
Fubini’s biggest takeaway: how much more government employees are using such AI tools than he’d anticipated.
DoW CTO Emil Michael recently claimed that 1.2 million DoW staffers, representing 40% of the department, have adopted use of an AI tool over the last 90 days — up from 80,000.
The current news around Anthropic could lead to a rise in interest in its tools, too. “I think this has made a bunch of people in the government that weren’t paying attention, pay attention,” Fubini says. “Being told you can’t use something, it’s like, ‘wait, what’s that?’”
For officials or policymakers who subscribe to the argument that the U.S. is in a race with China for AI supremacy, finding a way to keep Anthropic in the mix should be a priority, says Ackerman: “Taking one of our best chess pieces off the board is not in our strategic interests.”
The name of the game will increasingly be resiliency and flexibility of options, multiple respondents say: an “AI agnostic” approach that can adjust if another situation like this comes up. “If you’re using a general-purpose model built by one of the frontier labs, you should be striving to be model agnostic,” says Markoff at Smack.
5. The greater debate
Jeff Eggers, founder and managing partner of Rsquared VC, argues that the most important thing happening here isn’t a dispute over the technology itself, but the “concrete manifestation” of what has been until now an abstract debate around ethics in the AI era.
A former Navy SEAL officer and senior director on the National Security Council during the Obama Administration, Eggers argues that the dispute serves as “an explicit and urgent reminder” that a wider framework for the use of AI alongside our “most precious and vital national requirements” is needed.
“We can’t afford for this to be a second Project Maven,” he says, invoking the 2017 initiative to accelerate machine learning in what was then the Department of Defense, and from which Google withdrew the following year. “Nor can we afford to gloss over the real issues here.”
That may sound abstract for a startup founder, but at Kindo, Williams is already thinking along similar lines: where the ideals of AI lab founders – not just Amodei, but Altman, and others – will end up, and whom they might antagonize, or inconvenience, in doing so.
“What other uses of AI will they want to stop when their personal values shift over time?” Williams asks. “The willingness to disrupt their customers with no warning is not a tenable situation to build a business on.”
