Topic
Regulation
These leaders all agree AI needs some form of oversight, but the specifics reveal deep fault lines. The most dramatic test case came in February 2026, when Dario Amodei refused to give the Pentagon unrestricted access to Claude, drawing red lines on two specific use cases: domestic mass surveillance (government purchase and AI analysis of bulk private data on Americans) and fully autonomous weapons systems. He argued the technology was too unreliable for autonomous kill decisions and that AI-powered surveillance was "getting ahead of the law." The Pentagon gave Anthropic a three-day ultimatum and then designated it a supply chain risk, with President Trump publicly accusing the company of "putting American lives at risk." Amodei maintained that Congress, not private companies, should ultimately legislate these boundaries, but that someone had to hold the line while lawmakers caught up.

Daniela Amodei, meanwhile, has been pushing state-level child safety regulation, disclosing direct engagement with California and New York legislators on age restrictions for AI chatbots and warning that ad-supported AI models create the same perverse engagement incentives that damaged social media.

On the international front, Sam Altman and Ilya Sutskever co-authored a 2023 proposal for an IAEA-style body that would inspect AI systems above a capability threshold, track compute usage, and enforce safety standards. But by February 2026, Altman had softened considerably, saying no country has regulation right yet and expressing gratitude that "different countries are trying different approaches." He explicitly warned against using safety as cover for concentrating power: "You don't get to say concentration of power in the name of safety."
Demis Hassabis, speaking at the AI Impact Summit in Delhi in February 2026, called for "smart regulation" targeting two specific threats: bad actors repurposing AI and the technical challenge of controlling increasingly autonomous systems. He framed the safety question through the lens of Carl Sagan's Contact: "How did you manage to get through this technological adolescence without destroying yourselves?"

Jack Clark took the most structurally political view, predicting a "high chance" of political movements emerging to freeze human jobs "in bureaucratic amber," driven not by reasoned policy but by "the chaotic winds of political forces." He argued this outcome becomes more likely when AI companies fail to generate visible examples of good economic transitions.

Mira Murati struck a more measured note, observing that getting to AGI-level systems "is not just about capability" but about "the entire social infrastructure" those systems will operate in, though she offered fewer specifics on what that infrastructure should look like. Greg Brockman, in a 2023 conversation alongside Elon Musk, emphasized the Einstein-to-nuclear-weapons analogy and cautioned that even the best intentions could produce catastrophic outcomes. The through-line across all of them: everyone wants regulation that constrains the risks they personally find most alarming while preserving their own freedom to build.