Dario Amodei
CEO and Co-founder of Anthropic. Former VP of Research at OpenAI.
AI-generated profile based on archived statements
Dario Amodei is a biophysicist turned AI CEO who left OpenAI in late 2020 over two disagreements: the lab wasn't taking scaling laws seriously enough, and it wasn't taking safety seriously enough. He won on the first count (OpenAI eventually scaled up) and founded Anthropic to prove the second. His origin story is remarkably consistent across years of interviews: he saw the scaling laws emerge with GPT-2 in 2019, concluded that intelligence was "the product of a chemical reaction" (data plus compute plus model size), and decided that if these systems were going to match human cognition, the people building them had better get safety right. That conviction led to Anthropic's public benefit corporation structure, its Long Term Benefit Trust governance, and its early investments in Constitutional AI and interpretability research. He says he spends roughly 40% of his time on company culture, runs biweekly all-hands meetings he calls "Dario Vision Quests," and claims to avoid what he calls "corpo speak." He has openly said he is uncomfortable with how much power has concentrated in a handful of AI company leaders "almost overnight, almost by accident."
The tension in Amodei's public positions became acute in February 2026. Anthropic dropped the central pledge of its Responsible Scaling Policy (the 2023 commitment to never train a model without guaranteeing adequate safety mitigations in advance). Chief science officer Jared Kaplan told TIME the company didn't think it made sense to make "unilateral commitments if competitors are blazing ahead." METR's policy director called the move evidence that Anthropic had entered "triage mode" on safety, warning of a "frog-boiling" effect where danger ramps up without a single tripwire being triggered. Days later, Amodei was in a standoff with Defense Secretary Pete Hegseth over the Pentagon's demand that Anthropic remove all restrictions on Claude's military use. Amodei refused, drawing two red lines: no mass domestic surveillance of Americans and no fully autonomous weapons. He argued that current AI models aren't reliable enough for autonomous weapons, and that AI-powered mass surveillance would make a mockery of Fourth Amendment protections even where it is technically legal. When Hegseth threatened to designate Anthropic a supply chain risk (a tool previously reserved for foreign adversaries like Kaspersky Lab and Chinese chip suppliers) and invoke the Defense Production Act, Amodei pointed out the contradiction: one threat brands the company a security risk while the other treats its technology as essential to national security. He noted that military officers on the ground told him losing Anthropic's systems would set them back six to twelve months. Anthropic was the last major lab to resist full Pentagon integration, and Amodei said he would challenge any formal designation in court.
Amodei holds two visions of the future simultaneously and insists this isn't a contradiction. His 2024 essay "Machines of Loving Grace" laid out the optimistic case: AI curing disease, accelerating science, expanding human capability. His 2026 essay "The Adolescence of Technology" warned about the downside: AI undermining constitutional rights, concentrating power, and overwhelming society's ability to adapt. He says he started writing the second essay "almost the instant" he finished the first, and both took about a year because he could only complete them on vacation. On timelines, he is aggressive: 90% confident in AGI-level systems by 2035, with a personal hunch that they could arrive in as little as one to two years. He predicts entry-level white-collar jobs will be automated first, with software engineering potentially disrupted even faster because developers adopt new tools quickly. His core worry is speed: previous economic transitions happened over decades or centuries, and this one is measured in "low single-digit numbers of years." He draws an analogy to centaur chess (where human-engine teams briefly outperformed either humans or engines alone, until they didn't) and warns that the centaur phase for software may be very brief. He advocates for AI regulation, specifically transparency laws like California's SB 53 that constrain only the largest labs, and he pushes back hard on the regulatory-capture accusation, noting the bill exempted companies under $500 million in revenue.
Featured in
Amodei vs. Altman: The Pentagon Deal
When the Pentagon demanded unrestricted access to frontier AI, Dario Amodei refused and got blacklisted. Sam Altman said he agreed with Anthropic's red lines, then struck his own deal with the Department of War that same Friday night. The substantive disagreement is narrow but real: Amodei argued that existing law hasn't caught up with AI's ability to aggregate public data into comprehensive surveillance profiles, so the Pentagon's assurance that it would follow current statutes wasn't enough. Altman accepted that assurance, framing the deal as the Pentagon agreeing to OpenAI's principles. Seventy OpenAI employees signed a letter supporting Anthropic before Altman's deal went through. The episode crystallized the difference between the two leaders. Amodei treats safety commitments as constraints that must hold even when they're expensive, though his own company dropped its Responsible Scaling Policy pledge that same month under competitive pressure. Altman treats them as negotiating positions, things you advocate for but ultimately resolve through dealmaking rather than confrontation. Both approaches have costs. Amodei lost a major government contract and faces a supply-chain-risk designation. Altman kept the contract but earned the accusation that OpenAI replaced a blacklisted competitor while claiming solidarity with it.
with Sam Altman
Sutskever vs. the Scaling Consensus
Ilya Sutskever arguably did more than anyone alive to prove that scaling works. He co-authored AlexNet, co-founded OpenAI, and oversaw the GPT series that turned neural scaling from a research curiosity into the dominant paradigm. Then he left and announced that the age of pure scaling was over. His argument: scaling sucked the air out of genuine research. Models trained on massive compute ace benchmarks but fail at generalization, like a student who memorized 10,000 competitive programming problems but can't architect real software. He thinks RL training produces meta-reward-hacking, where the researchers themselves (not just the models) are unconsciously overfitting to evaluation metrics. This puts him directly at odds with his former colleagues. Altman and Brockman are spending hundreds of billions on data centers and custom chips, betting that more compute is the binding constraint. Amodei still describes intelligence as a chemical reaction with known ingredients. Even Hassabis, who shares some of Sutskever's skepticism about pure scaling, is building gigawatt-scale infrastructure. Sutskever's counter-thesis is that AI needs something analogous to human emotions: a learned value function that provides fast, approximate feedback about whether a course of action is promising, rather than the sparse end-of-trajectory reward signals that current RL relies on (a toy sketch of that contrast follows below). He cites a neurological case study of a stroke patient who lost emotional processing but retained full IQ, and who became unable to make even trivial decisions. The implication: without an internal compass, raw intelligence is paralyzed. SSI is his bet that ideas, not compute budgets, are what's actually missing.
with Ilya Sutskever, Sam Altman, Greg Brockman
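To make that contrast concrete, here is a minimal Python sketch of the two feedback regimes: a sparse reward that arrives only when a trajectory ends, versus a learned value function that scores every intermediate state. All names and the toy scoring rule are invented for illustration; this is not code from SSI or any other lab.

```python
# Toy contrast between the two feedback regimes described above. Every name
# here (sparse_reward, ValueFunction, the scoring rule) is an illustrative
# invention, not anyone's actual training code.

def sparse_reward(trajectory: list[str]) -> float:
    """Standard RL signal: one scalar arrives only after the trajectory
    ends, so intermediate decisions get no direct feedback."""
    return 1.0 if trajectory[-1] == "success" else 0.0

class ValueFunction:
    """Stand-in for the 'emotion' analogue: a learned model that cheaply
    scores every intermediate state, giving fast, approximate guidance
    about whether the current course of action is promising."""

    def score(self, state: str) -> float:
        # A real value function would be a trained network; a keyword
        # check keeps the contrast concrete and the sketch runnable.
        return 0.9 if "progress" in state else 0.1

value_fn = ValueFunction()
trajectory = ["start", "some progress", "more progress", "success"]

per_step = [value_fn.score(s) for s in trajectory]  # dense: [0.1, 0.9, 0.9, 0.1]
terminal = sparse_reward(trajectory)                # sparse: 1.0, only at the end
print(per_step, terminal)
```

The design point is the shape of the signal, not its accuracy: the value function is wrong in places, but it is available at every step, which is exactly the trade Sutskever argues biology made with emotion.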
Hassabis vs. Everyone: What Counts as AGI
Demis Hassabis has the strictest definition of AGI among the frontier lab leaders, and it's not close. His test: train a model with a knowledge cutoff of 1911 and see if it can derive general relativity by 1915. Current systems cannot do this. He calls today's models "jagged intelligences" that win math olympiad gold but fail elementary arithmetic when the question is phrased differently. He identifies three specific capabilities still missing: continual learning after deployment, coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general models play chess below weak amateur level, which he considers disqualifying. This stands in sharp contrast to the more expansive definitions others use. Amodei's "country of geniuses in a data center" is an economic metaphor, not a cognitive test. Altman's framing is generational (his son will never be smarter than a computer) rather than technical. Legg's three-tier spectrum (minimal, full, super) is more structured but sets the bar lower at the entry level: minimal AGI is just the point where failures stop being surprising. The practical consequence of Hassabis's stricter definition is that DeepMind invests differently. Rather than racing to declare AGI first, Hassabis bets on specialized tools (AlphaFold, AlphaProteo) orchestrated by general systems (Gemini), arguing that shoving all capabilities into one model degrades performance. His biggest play isn't a chatbot at all. It's Isomorphic Labs, which aims to compress drug discovery from a decade to weeks. That's a bet that AGI's value will be proven in scientific applications, not in passing cognitive benchmarks that the other labs are already designing their models to ace.
with Demis Hassabis, Sam Altman, Shane Legg
The Openness Spectrum: Murati, Clark, and Amodei
Mira Murati left OpenAI and built Thinking Machines Lab around a specific thesis: frontier training knowledge is too concentrated, and democratizing fine-tuning matters more than releasing model weights. Her first product gives researchers full control over RL and supervised learning loops on open models like Llama and Qwen, and her co-founder John Schulman (who led RLHF at OpenAI) frames it as abstracting away distributed training complexity while keeping the user in control (a hypothetical sketch of that kind of interface follows below). Murati's bet is that training methodology, not just model access, is the real bottleneck. Jack Clark, tracking the open-weight ecosystem from inside Anthropic, has observed that by late 2025 local open models handled nearly 89% of common queries. He describes the dynamic in ecological terms: proprietary models are elephants, open-weight models are fast-reproducing organisms colonizing every niche. But Anthropic itself keeps Claude's weights locked and open-sources only safety-focused tools like its circuit-tracing interpretability research. Dario Amodei draws the line explicitly: open-source the safety work, lock down the capabilities. This creates a clear spectrum. Murati wants to open the training process. Clark documents the open ecosystem's growth while his own company withholds its best model. Amodei open-sources selectively, using safety contributions as a form of competitive differentiation that also happens to be genuinely useful. The positions correlate exactly with their business models, which is either reassuring or damning depending on your priors.
with Mira Murati, Jack Clark
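To make the "open the training process" thesis concrete, here is a hypothetical Python sketch of the kind of interface the blurb describes: the researcher writes the actual training loop while distributed execution hides behind a service object. TrainingService and all of its methods are invented for illustration; this is not Thinking Machines' actual product API.

```python
# Hypothetical sketch of "full control over the training loop, distributed
# complexity abstracted away." TrainingService and its methods are invented
# for illustration; this is not Thinking Machines' actual API.

class TrainingService:
    """Pretends to run forward/backward passes on a remote cluster."""

    def forward_backward(self, batch: list[str]) -> float:
        # In the imagined service this call would shard the batch across
        # accelerators and accumulate gradients server-side.
        return 0.42  # placeholder loss

    def optim_step(self, lr: float) -> None:
        # Applies accumulated gradients; the user never touches device
        # placement, sharding, or checkpointing.
        pass

def user_owned_loop(service: TrainingService,
                    batches: list[list[str]],
                    lr: float = 1e-5) -> None:
    """The researcher keeps the loop itself: batching, loss handling,
    schedules, and any RL logic live in user code, per Murati's thesis."""
    for step, batch in enumerate(batches):
        loss = service.forward_backward(batch)
        service.optim_step(lr)
        if step % 10 == 0:
            print(f"step {step}: loss {loss:.3f}")

user_owned_loop(TrainingService(), [["example text"]] * 20)
```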