Daniela Amodei
President, Co-founder at Anthropic. Former VP of Operations at Stripe.
AI-generated profile based on archived statements
Daniela Amodei is the least visible co-founder of the most visible AI safety company. With 34 records in the archive (Dario has roughly four times that), her public footprint is thin by design. She is Anthropic's president, not its CEO, and the division of labor tracks that distinction: Dario writes the essays and testifies before Congress, Daniela runs the business and does the broadcast interviews. Where Dario talks about scaling laws as empirical destiny, Daniela talks about adoption curves, business incentives, and what happens to people. She came to Anthropic from a VP of Operations role at Stripe, not from a research lab, and her background as a literature major shows up in how she frames things: she favors social-media analogies over physics metaphors. Her most consistent argument, made in both a February 2026 ABC News interview and a Sixth Street podcast the same month, is that AI companies should look at what happened to social media and work backward: if you could see where Facebook ended up in 2026 and you were making decisions in 2010, would you have felt good about the path? The answer, she thinks, is obviously no, and she wants Anthropic to make different choices at the equivalent stage.
The clearest example of this logic was Anthropic's 2026 Super Bowl ad and its decision to refuse in-product advertising. Daniela's argument was structural, not moral: ad-supported AI creates an incentive to maximize engagement, engagement-maximized AI encourages sycophancy, and sycophantic AI is particularly dangerous because users share private and sensitive information with chatbots in a way they never did with search engines. She connected this directly to child safety, noting that Anthropic bars users under 18 and has lobbied both California and New York for regulation on AI and minors. She acknowledged that this is a cautious position which may turn out to be unnecessary, but argued the technology is too new to risk being wrong. On the question of job displacement, she was notably more candid than most executives: she admitted the percentage of fully automatable jobs is not zero, especially in customer support, and that Anthropic does not have a perfect solution to the labor disruption problem. She described herself as perhaps overly optimistic but did not pretend the problem away.
What separates Daniela from Dario in public is not ideology but register. She talks about AI through concrete deployments: pilots with the Gates Foundation, work with the Clinton Health Access Initiative, the Economic Index report showing adoption patterns that mirror wealth inequality. She speaks as a parent (her son was four and a half in early 2026) and uses that vantage point to talk about children and AI with a specificity that Dario's essays lack. Her operational focus shows in how she explains Constitutional AI: not as a theoretical advance in alignment, but as a training technique that happened to increase both ethical reasoning and general intelligence, which then drove market demand. She calls this the "race to the top," the idea that safety practices can create commercial advantage and pull competitors upward. She speaks about Anthropic's 300,000 business customers and $350 billion valuation with the ease of someone who built the go-to-market side. She is, in short, the person at Anthropic who translates research ambitions into business logic and policy positions into lobby meetings, and her public record reflects that operational lens more than any grand theory of intelligence.
Recurring themes