Why Are Two Of The Biggest AI Startups Hiring Chemical Weapons Experts?
Inside dueling, and potentially troubling, job listings from model labs Anthropic and OpenAI, on the heels of their pushes deeper into life sciences.
When we wrote last week about Anthropic and OpenAI pushing deeper into health tech, a reader replied with a curious job listing: “Policy Manager, Chemical Weapons and High Yield Explosives”.
The posting came from one of those leading AI labs, Anthropic, and it called for a Ph.D. in chemistry, chemical engineering, or a similar field, with expertise in explosives, “and/or” chemical weapons, and how to combine the two into something very bad:
Have knowledge of high yield explosives application to radiological dispersal devices (dirty bombs) and related radiological weapons
The job would be based in San Francisco and Washington, D.C., but remote-friendly! It would pay $245,000 to $285,000.
Thankfully, the details of Anthropic’s listing made clear that this person should know all about dirty bombs in order to help stop AI models from making them easy to build. But that in itself gave Upstarts pause: is such a possibility suddenly more of a concern?
We checked LinkedIn for jobs focused on chemical weapons and explosives at other AI startups, but couldn’t find any. (If we missed you, let us know.) Next we asked Anthropic’s chatbot, Claude, to help, and promptly triggered its safety guardrails: Claude refused to answer until we narrowed our search. (At least this new hire won’t be starting from scratch!)
But we did find a similar job, currently open for applications, at OpenAI: a “Research Lead, Chemical & Biological Risk.” This role, seemingly a more senior one, comes with a higher salary: $460,000+ with equity. The hire will need to “make decisive calls on technical trade-offs within the bio risk domain,” the company writes.
A caveat: These labs, like other large tech companies, already employ former government agents and counter-terror experts, as well as red teams that probe for all kinds of vulnerabilities and risks, so these jobs aren’t totally out of left field. Google DeepMind, too, posted a listing more than a year ago for a research scientist focused on biosecurity and its high-impact risks.
Still, these quiet job postings raised the question: why now?
We asked around, including the labs themselves, to do our best to answer.