
Superintelligence

Dario Amodei has become the most aggressive voice on timelines. In February 2026, he told Dwarkesh Patel he puts 90% odds on reaching what he calls "a country of geniuses in a data center" within ten years, and thinks end-to-end coding will be fully automated in one to two years. His main caveat: tasks that can't be easily verified (novel writing, planning a Mars mission) carry more uncertainty than verifiable ones like code. He argues that even without solving continual learning, scaling up models with massive context windows may be enough to be economically transformative. Daniela Amodei echoes the "country of geniuses" framing but focuses on what that means for society rather than when it arrives, asking how AI diagnostics rivaling trained doctors will reshape healthcare and other professions.

Sam Altman frames the transition in generational terms. At IIT Delhi in February 2026, he said his one-year-old son will never know a world in which he is smarter than a computer. He predicts a hugely deflationary economy by 2035 and expects quality-of-life growth rates to more than double, though he thinks GDP in nominal dollars will become a poor metric. Ilya Sutskever, now at SSI, takes the starkest philosophical position: the brain is a biological computer, so a digital computer can do everything we can. He calls superintelligence "the greatest challenge of humanity ever" and warns that even he struggles to emotionally internalize what is coming, despite the logic being clear.

Demis Hassabis is more measured, placing AGI five to eight years out and emphasizing what current systems still lack: continual learning, long-term planning, and consistency across tasks. He points out that models can win math olympiad gold yet still botch elementary arithmetic. Shane Legg, his DeepMind co-founder, has held a 50/50 prediction of minimal AGI by 2028 since 2009, with full AGI a decade after that. He argues superintelligence is essentially inevitable once AGI arrives because silicon signals travel at the speed of light while neurons top out at 30 meters per second, a speed gap of roughly seven orders of magnitude.

Jack Clark tracks the discourse rather than making personal timeline predictions, but his Import AI newsletter treats superintelligence as a when-not-if proposition, highlighting RAND analysis concluding that humanity currently has no effective tools to fight a rogue superintelligence and therefore must never allow the situation to arise. Greg Brockman speaks less about timelines and more about infrastructure, calling the AI buildout "the biggest joint industrial project in human history" and focusing on the chip-design and 10-gigawatt data center partnerships needed to get there. Mira Murati, speaking in mid-2024 while still at OpenAI, placed PhD-level AI intelligence about 18 months out and argued that capability and safety research are not in tension but go hand in hand, comparing it to training a smarter dog versus a dumber one.

The real split is not over whether superintelligence is coming but over how soon and how bumpy the ride will be. Amodei, Altman, and Sutskever cluster at the aggressive end (one to five years to transformative capability). Hassabis and Legg give themselves more room (five to ten years to AGI, with superintelligence following). Clark and Sutskever are the most vocal about existential risk, while Altman and Amodei emphasize the economic upside. Nobody in the group argues that current architectures will plateau before reaching human-level performance. That consensus, which would have seemed extreme even three years ago, is now the baseline assumption across the frontier labs.

People on this topic

Dario Amodei (Anthropic)
Daniela Amodei (Anthropic)
Jack Clark (Anthropic)
Sam Altman (OpenAI)
Greg Brockman (OpenAI)
Ilya Sutskever (SSI)
Mira Murati (Thinking Machines Lab)
Demis Hassabis (Google DeepMind)
Shane Legg (Google DeepMind)

Perspectives

Hassabis vs. Everyone: What Counts as AGI

Demis Hassabis has the strictest definition of AGI among the frontier lab leaders, and it's not close. His test: train a model with a knowledge cutoff of 1911 and see if it can derive general relativity by 1915. Current systems cannot do this. He calls today's models "jagged intelligences" that win math olympiad gold but fail elementary arithmetic when the question is phrased differently. He identifies three specific capabilities still missing: continual learning after deployment, coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general models play chess below weak amateur level, which he considers disqualifying.

This stands in sharp contrast to the more expansive definitions others use. Amodei's "country of geniuses in a data center" is an economic metaphor, not a cognitive test. Altman's framing is generational (his son will never be smarter than a computer) rather than technical. Legg's three-tier spectrum (minimal, full, super) is more structured but sets the bar lower at the entry level: minimal AGI is just the point where failures stop being surprising.

The practical consequence of Hassabis's stricter definition is that DeepMind invests differently. Rather than racing to declare AGI first, Hassabis bets on specialized tools (AlphaFold, AlphaProteo) orchestrated by general systems (Gemini), arguing that shoving all capabilities into one model degrades performance. His biggest play isn't a chatbot at all. It's Isomorphic Labs, which aims to compress drug discovery from a decade to weeks. That's a bet that AGI's value will be proven in scientific applications, not in passing cognitive benchmarks the other labs are already designing their models to ace.
