Predictions
The leaders of the frontier labs have converged on a strikingly similar timeline: AGI arriving somewhere between 2027 and 2035, with most bets clustering around the nearer end. Dario Amodei puts himself at 90% confidence that AI will match human-level cognition within ten years, and says he has a "hunch" it could be one to two years away; he frames the endpoint as a "country of geniuses in a data center." Shane Legg, who has held a 50/50 prediction of AGI by 2028 since 2009, breaks the path into stages: minimal AGI (reliable everyday cognition), full AGI (genius-level performance across domains), and then superintelligence, which he considers physically inevitable given silicon's speed advantages over biological neurons. Demis Hassabis is slightly more cautious, placing AGI "on the horizon, maybe within the next five years," comparing its impact to fire or electricity, and estimating it will be ten times the scale of the Industrial Revolution at ten times the speed. Sam Altman tells students at IIT Delhi that his one-year-old son will never know a world in which he is smarter than a computer. Ilya Sutskever, now at SSI, argues the case from first principles: the brain is a biological computer, so a digital one can do everything it does. He calls AI "the greatest challenge of humanity ever."
Where they differ is less on whether than on how fast the disruption hits. Dario Amodei warns that software engineering may be automated before other white-collar work, simply because developers adopt new tools faster, and draws a parallel to centaur chess, noting that the era of human-AI collaboration may be "very brief." Jack Clark, speaking on the Ezra Klein Show, describes the shift from AI as "talkers" in 2023-2024 to "doers" in 2026-2027, with agents that oversee each other in swarms. Sam Altman predicts GDP will become a terrible metric because AI is so deflationary. Shane Legg offers a blunt heuristic: if your entire job can be done through a screen and keyboard, you are on the front line. The physical trades, plumbing and electrical work among them, are protected longer only because robotics lags behind. Nearly all of them express a version of the same anxiety: previous technological transitions unfolded over decades or centuries, and this one is happening in single-digit years. Society's institutions are not built for that speed.
Perspectives
Sutskever vs. the Scaling Consensus
Ilya Sutskever arguably did more than anyone alive to prove that scaling works. He co-authored AlexNet, co-founded OpenAI, and oversaw the GPT series that turned neural scaling from a research curiosity into the dominant paradigm. Then he left and announced that the age of pure scaling was over. His argument: scaling sucked the air out of genuine research. Models trained on massive compute ace benchmarks but fail to generalize, like a student who has memorized 10,000 competitive programming problems but can't architect real software. He thinks RL training produces meta-reward-hacking, where the researchers themselves (not just the models) unconsciously overfit to evaluation metrics. This puts him directly at odds with his former colleagues. Altman and Brockman are spending hundreds of billions on data centers and custom chips, betting that more compute is the binding constraint. Amodei still describes intelligence as a chemical reaction with known ingredients. Even Hassabis, who shares some of Sutskever's skepticism about pure scaling, is building gigawatt-scale infrastructure. Sutskever's counter-thesis is that AI needs something analogous to human emotions: a learned value function that provides fast, approximate feedback on whether a course of action is promising, rather than the sparse end-of-trajectory reward signals that current RL relies on. He cites a neurological case study of a stroke patient who lost emotional processing but retained full IQ, and became unable to make even trivial decisions. The implication: without an internal compass, raw intelligence is paralyzed. SSI is his bet that ideas, not compute budgets, are what's actually missing.
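To make the distinction concrete, here is a minimal Python sketch of the two feedback regimes the paragraph contrasts: a sparse reward delivered once per trajectory versus a learned value function scoring every step. Everything here, the function names, the toy policy, the heuristic standing in for a trained value function, is invented for illustration and reflects nothing about SSI's actual methods.

```python
import random
from typing import Callable, List

# Illustrative sketch only; none of this reflects SSI's actual work.

def run_episode(policy: Callable[[int], int], horizon: int = 20) -> List[int]:
    """Roll out a toy trajectory: one action per timestep."""
    return [policy(t) for t in range(horizon)]

def sparse_reward(trajectory: List[int]) -> float:
    """End-of-trajectory signal: a single scalar for the whole episode.
    The learner gets no hint about which individual step helped."""
    return 1.0 if sum(trajectory) >= 10 else 0.0

def learned_value(step: int, action: int) -> float:
    """Stand-in for a trained value function -- the 'emotion-like' compass:
    a fast, approximate guess at whether this step is promising.
    Here it is just a fixed heuristic that favors action 1."""
    return 0.9 if action == 1 else 0.1

def dense_feedback(trajectory: List[int]) -> List[float]:
    """Per-step signal: one approximate score for every action taken."""
    return [learned_value(t, a) for t, a in enumerate(trajectory)]

policy = lambda t: random.randint(0, 1)
traj = run_episode(policy)
print("sparse (one number per episode):", sparse_reward(traj))
print("dense  (one number per step):   ", dense_feedback(traj))
```

The contrast is the point: the sparse signal arrives once, after the fact, while the value function hands back a cheap judgment at every step, which is the role Sutskever's stroke-patient analogy assigns to emotion.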
Hassabis vs. Everyone: What Counts as AGI
Demis Hassabis has the strictest definition of AGI among the frontier lab leaders, and it's not close. His test: train a model with a knowledge cutoff of 1911 and see if it can derive general relativity by 1915. Current systems cannot do this. He calls today's models "jagged intelligences" that win math olympiad gold but fail elementary arithmetic when the question is phrased differently. He identifies three specific capabilities still missing: continual learning after deployment, coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general models play chess below weak amateur level, which he considers disqualifying. This stands in sharp contrast to the more expansive definitions others use. Amodei's "country of geniuses in a data center" is an economic metaphor, not a cognitive test. Altman's framing is generational (his son will never be smarter than a computer) rather than technical. Legg's three-tier spectrum (minimal, full, super) is more structured but sets the bar lower at the entry level: minimal AGI is just the point where failures stop being surprising. The practical consequence of Hassabis's stricter definition is that DeepMind invests differently. Rather than racing to declare AGI first, Hassabis bets on specialized tools (AlphaFold, AlphaProteo) orchestrated by general systems (Gemini), arguing that shoving all capabilities into one model degrades performance. His biggest play isn't a chatbot at all. It's Isomorphic Labs, which aims to compress drug discovery from a decade to weeks. That's a bet that AGI's value will be proven in scientific applications, not in passing cognitive benchmarks that the other labs are already designing their models to ace.
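The orchestration pattern described above, a general system routing work to specialized tools rather than answering everything itself, can be sketched in a few lines. This is a hypothetical illustration: the tool names, keyword routing, and function signatures are all invented and bear no relation to how Gemini actually dispatches to AlphaFold-class systems.

```python
from typing import Callable, Dict

# Invented illustration of the generalist-orchestrates-specialists pattern.
# Not DeepMind's actual architecture.

def fold_protein(query: str) -> str:
    return f"[structure prediction for: {query}]"   # AlphaFold-style specialist

def prove_theorem(query: str) -> str:
    return f"[formal proof attempt for: {query}]"   # math-specialist stand-in

def general_answer(query: str) -> str:
    return f"[general model response to: {query}]"  # the generalist fallback

TOOLS: Dict[str, Callable[[str], str]] = {
    "protein": fold_protein,
    "theorem": prove_theorem,
}

def orchestrate(query: str) -> str:
    """Route to a specialist when one matches; otherwise answer generally.
    A real system would use the general model itself to choose the route,
    not keyword matching."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return general_answer(query)

print(orchestrate("Predict the protein structure of hemoglobin"))
print(orchestrate("When did the Industrial Revolution begin?"))
```

The design choice the sketch makes visible is the one Hassabis argues for: capability lives in the specialists, and the general system's job is delegation, not doing everything in one set of weights.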