
Governance

The most consequential governance fight so far played out over a single weekend in February 2026. Defense Secretary Pete Hegseth gave Anthropic a Friday deadline: let the Pentagon use Claude for any lawful purpose, or be designated a supply chain risk, a label normally reserved for foreign adversaries. Dario Amodei refused, drawing two red lines he called non-negotiable: no mass domestic surveillance and no fully autonomous weapons. His reasoning was specific. On surveillance, he argued that AI makes it possible to buy bulk data on Americans (locations, political affiliations, personal information) and analyze it at scale, something that isn't illegal but was never feasible before, meaning the law hasn't caught up. On autonomous weapons, he said current AI systems are "nowhere near reliable enough" and that removing human soldiers from targeting decisions raises oversight problems nobody has seriously worked through.

The Pentagon designated Anthropic a supply chain risk; Anthropic sued. Sam Altman moved within hours, writing a memo to OpenAI staff claiming the same red lines and saying he'd try to de-escalate. By Friday night, OpenAI had its own Pentagon deal. The critical difference: Altman accepted existing law as sufficient protection ("The DoW agrees with these principles, reflects them in law and policy"), while Amodei argued existing law is inadequate because AI-powered aggregation of legally collected public data escapes the intent of the Fourth Amendment. Some 70 OpenAI employees signed an open letter in solidarity with Anthropic.

Beyond the Pentagon crisis, the leaders diverge sharply on what governance even means. Daniela Amodei has focused on child safety and advertising incentives, arguing that ad-funded AI chatbots will encourage sycophantic behavior (telling users what they want to hear to maximize engagement), recreating social media's worst dynamics but worse, because users share deeply personal information. Anthropic's Super Bowl ad made this case directly.

Jack Clark, writing in Import AI and speaking on Tyler Cowen's podcast, has been thinking about the economic governance problem: what happens when AI can do most jobs. He predicted a "high chance" that political movements will try to freeze human jobs "in bureaucratic amber," driven not by reason but by chaotic political forces, and warned that the fewer examples of good workforce transitions the industry provides, the more likely heavy-handed protectionism becomes.

Demis Hassabis has stayed above the military-contract fight entirely, focusing instead on international cooperation through summits (India AI Impact Summit, February 2026) and government partnerships for science and education. Shane Legg co-authored DeepMind's April 2025 paper outlining four AGI risk categories (misuse, misalignment, accidents, structural risks) and leads the AGI Safety Council, but neither he nor Hassabis has staked out a public position on military use.

Ilya Sutskever, now at SSI, has framed the governance question in starker terms than anyone: AI poses "the greatest challenge of humanity ever," and the only real preparation is paying close attention to what each generation of AI can actually do, building intuition before the crisis arrives.

People on this topic

Dario Amodei (Anthropic), Daniela Amodei (Anthropic), Jack Clark (Anthropic), Sam Altman (OpenAI), Greg Brockman (OpenAI), Ilya Sutskever (SSI), Mira Murati (Thinking Machines Lab), Demis Hassabis (Google DeepMind), Shane Legg (Google DeepMind)

Perspectives

Amodei vs. Altman: The Pentagon Deal

When the Pentagon demanded unrestricted access to frontier AI, Dario Amodei refused and got blacklisted. Sam Altman said he agreed with Anthropic's red lines, then struck his own deal with the Department of War that same Friday night. The substantive disagreement is narrow but real: Amodei argued that existing law hasn't caught up with AI's ability to aggregate public data into comprehensive surveillance profiles, so the Pentagon's assurance that it would follow current statutes wasn't enough. Altman accepted that assurance, framing the deal as the Pentagon agreeing to OpenAI's principles. Seventy OpenAI employees signed a letter supporting Anthropic before Altman's deal went through.

The episode crystallized the difference between the two leaders. Amodei treats safety commitments as constraints that must hold even when they're expensive, though his own company dropped its Responsible Scaling Policy pledge that same month under competitive pressure. Altman treats them as negotiating positions: things you advocate for but ultimately resolve through dealmaking rather than confrontation. Both approaches have costs. Amodei lost a major government contract and faces a supply-chain-risk designation. Altman kept the contract but earned the accusation that OpenAI replaced a blacklisted competitor while claiming solidarity with it.
