What the people building frontier AI actually said

Interviews, blog posts, congressional testimony, tweets, and podcasts from the people behind Anthropic, OpenAI, and Google DeepMind.

Collections

Safety Commitments vs. the Race: Who Blinked First

Anthropic dropped its founding pledge never to train a model without safety guarantees. OpenAI replaced a blacklisted competitor while claiming solidarity with it. Sutskever quit to build safety outside commercial pressure entirely. This collection tracks the specific moments where stated safety commitments collided with competitive reality, and what each leader actually did when forced to choose.

The India Bet: Three Labs Court a Billion Users

Altman met Modi, announced a Tata data center, and called India OpenAI's fastest-growing Codex market. Hassabis launched DeepMind's National Partnerships program at the India AI Impact Summit. Amodei warned Indian audiences about the speed of job displacement. What the frontier labs are promising India, and what they want in return, tells you more about their real strategies than any product launch.

The Sycophancy Problem: When AI Tells You What You Want to Hear

Clark calls it the core alignment failure, comparing it to agreeing with a friend in a manic episode. Daniela Amodei connects it to ad-funded business models (Anthropic's Super Bowl ad was about this). Legg's proposed fix is forcing models through slow, deliberate ethical reasoning instead of pattern-matched agreeableness. The question of whether AI systems should push back on their users turns out to be a question about what we're optimizing for.

Biology as Proof of Concept: From AlphaFold to Digital Cells

Hassabis wants to simulate a yeast cell in silico. Amodei wrote an entire essay ('Machines of Loving Grace') arguing AI will cure diseases within a decade. Altman committed $50 million with the Gates Foundation for AI healthcare in Africa. The most concrete claims about what AI will actually do for the world cluster around biology and medicine, where the gap between ambition and evidence is smallest.

The Verification Bottleneck: Humans Can't Check Fast Enough

Sutskever says that if you still have to verify every answer, the productivity gains stay modest. Clark warns of a 'Hollow Economy' where agents satisfy measurable proxies while violating unmeasured intent. Murati's lab is solving a precondition: deterministic inference, so you can at least get the same answer twice. The emerging consensus is that intelligence isn't the scarce resource anymore. The ability to audit it is.

Three Ways to Look Inside a Model (and Why None of Them May Be Enough)

Anthropic traces neural circuits under a microscope. OpenAI reads unsupervised chains of thought like a patient's dream journal. DeepMind layers monitoring, debate, and ethical reasoning into a defensive stack. Sutskever suspects all three approaches are interpreting artifacts of training rather than genuine understanding. Brockman traces the intuition back to OpenAI's 2017 'sentiment neuron' and admits they still don't know what understanding means.