Demis Hassabis
CEO of Google DeepMind. Co-founder and former CEO of DeepMind.
AI-generated profile based on archived statements
Demis Hassabis founded DeepMind in 2010 with a two-step mission statement that sounded absurd at the time: solve intelligence, then use it to solve everything else. Sixteen years later, as CEO of the merged Google DeepMind, he told the India AI Impact Summit in February 2026 that he considers the project basically on track for around 2030.

His bar for AGI is higher than most. He calls today's foundation models "jagged intelligences" that win gold at the International Math Olympiad but stumble on elementary arithmetic when the question is rephrased. He identifies three specific capabilities still missing: continual learning after deployment (current models are frozen once released), coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general foundation models play chess below weak-amateur level. His proposed test for true AGI: train a model with a knowledge cutoff of 1911 and see whether it can derive general relativity by 1915. Current systems, he says flatly, cannot do this.

On what separates good scientists from great ones, he names "taste" (a blend of intuition and creativity) and says it will probably be the hardest thing for machines to replicate. He studied it himself during his neuroscience PhD at UCL under Eleanor Maguire, researching the hippocampus, and maintains that scientific taste can only be developed through active experimentation, not passive learning.
His biggest bet isn't a chatbot. It's drug discovery. AlphaFold solved protein structure prediction, a 50-year grand challenge in biology, and now has over 2 million researchers using it across nearly every country, with more than 30,000 citations. AlphaFold 3 extended from static structures to molecular interactions (protein-protein, protein-DNA, protein-ligand), and AlphaProteo reverses the process to design novel proteins that don't exist in nature. He spun out Isomorphic Labs to push into full drug design: targeting compounds, toxicity, absorption, and binding properties, with the stated goal of cutting drug discovery from the current average of 10 years down to months or weeks. He calls this "digital biology," a term rooted in his view that biology is fundamentally an information-processing system resisting entropy, and that AI is its natural description language the way mathematics is for physics.

Downstream users include the Fleming Centre (antibiotic resistance), the Drugs for Neglected Diseases initiative (malaria, leishmaniasis, Zika), the Broad Institute (molecular syringes for targeted drug delivery), and the University of Portsmouth (plastic-digesting enzymes). He has also mentioned AlphaGenome for rare genetic diseases and described his long-term dream as a virtual cell, starting with something simple like yeast, where experiments run in silico.

Separately, DeepMind's Genie 3 world model can now generate playable game worlds from a text prompt, though they stay coherent for only about a minute before dissolving. Hassabis sees this as potentially reviving the small-team golden era of game development of the early 1990s, when he was coding Theme Park at Bullfrog at age 16.
On safety, Hassabis draws a clear two-risk framework. First: bad actors repurposing AI for harm, with bio and cyber risk as the most urgent near-term threats (he says current systems are already "pretty good" at cyber, so defenses need to stay ahead). Second: autonomous systems exceeding their guardrails as the agentic era arrives. He told BBC News in February 2026 that more safety research "needs to be done urgently" and called for international minimum standards. He supports the summit model (UK, France, Korea, India) for cross-border dialogue, but flags the open-source recall problem: once a vulnerability is found in a released open-source model, there is no mechanism to patch or recall it.

On architecture, he argues that specialized AI tools (AlphaFold, AlphaProteo) should be orchestrated by general systems (Gemini) rather than everything being crammed into a single model. He frames this as an empirical question: adding coding and math data to a foundation model improves its general reasoning, but adding protein data would likely degrade its language performance.

His deeper intellectual identity is that of a polymath who sees the best advances happening at disciplinary intersections. DeepMind was neuroscience plus engineering plus ML; Isomorphic is ML plus chemistry plus biology. He cites da Vinci and Aristotle as heroes who didn't see boundaries between fields, and says building AI has been "an excuse to learn about other subject areas." He's also increasingly excited about robotics, saying Gemini's multimodal understanding of the physical world makes it a natural fit, and predicting "breakout moments" for robotics within two to three years.
Featured in
Hassabis vs. Everyone: What Counts as AGI
Demis Hassabis has the strictest definition of AGI among the frontier lab leaders, and it's not close. His test: train a model with a knowledge cutoff of 1911 and see if it can derive general relativity by 1915. Current systems cannot do this. He calls today's models "jagged intelligences" that win math olympiad gold but fail elementary arithmetic when the question is phrased differently. He identifies three specific capabilities still missing: continual learning after deployment, coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general models play chess below weak-amateur level, which he considers disqualifying.

This stands in sharp contrast to the more expansive definitions others use. Amodei's "country of geniuses in a data center" is an economic metaphor, not a cognitive test. Altman's framing is generational (his son will never be smarter than a computer) rather than technical. Legg's three-tier spectrum (minimal, full, super) is more structured but sets the bar lower at the entry level: minimal AGI is just the point where failures stop being surprising.

The practical consequence of Hassabis's stricter definition is that DeepMind invests differently. Rather than racing to declare AGI first, Hassabis bets on specialized tools (AlphaFold, AlphaProteo) orchestrated by general systems (Gemini), arguing that shoving all capabilities into one model degrades performance. His biggest play isn't a chatbot at all. It's Isomorphic Labs, which aims to compress drug discovery from a decade to weeks. That's a bet that AGI's value will be proven in scientific applications, not in passing cognitive benchmarks that the other labs are already designing their models to ace.
with Dario Amodei, Sam Altman, Shane Legg