Sam Altman
CEO at OpenAI. Former President at Y Combinator.
AI-generated profile based on archived statements
Altman's pitch is economic before it is technical. At IIT Delhi in February 2026, speaking alongside Vinod Khosla, he argued that AI will be so deflationary that GDP itself becomes a bad metric: quality-of-life growth rates would more than double, and by 2035 the global economy would be massively deflationary in ways nobody is pricing in. He frames scientific discovery as the top-level driver of human progress, and AI's job as making that discovery 10x to 100x faster than humans alone can manage. The logical endpoint, which he states plainly, is automating both science and the commercial engine that monetizes it, simultaneously. He told Cisco's Jeetu Patel in early February that the "capability overhang" (the gap between what AI can do and what organizations actually use it for) feels bigger now than it did just before ChatGPT launched. He imagines OpenAI eventually acting as a kind of venture investor in frontier science, spending billions on inference to help cure diseases and taking royalties on the results.
On governance, Altman tried to thread a needle during the February 2026 Pentagon standoff with Anthropic. He backed Anthropic's red lines (no mass surveillance, no autonomous lethal weapons, humans in the loop) in a staff memo obtained by CNBC and Axios. Seventy of his own employees had already signed a letter titled "We Will Not Be Divided" supporting Anthropic before he acted. Then, on a Friday night, he announced that OpenAI had struck its own deal with the Pentagon, framing it as the Department of War agreeing to OpenAI's safety principles. Critics pointed out the obvious: OpenAI was replacing a blacklisted competitor while claiming solidarity with it. Anthropic, for its part, sued the Pentagon that same night. Altman acknowledged the tension in his memo, writing that he wanted to "do the right thing, not the easy thing that looks strong but is disingenuous." On regulation more broadly, he says no country has the right answer yet and that he is grateful for sovereign experimentation.
On product, Altman has moved well past ChatGPT-as-chatbot. He called Codex in February 2026 his "biggest update on AI in a while" and "another ChatGPT-level moment," noting it was growing fastest in India, where most users are young. Cisco became its first design partner, with 100% of the code in one product written by Codex within weeks. The upper limit he can currently imagine is "full AI companies": not just AI tools, but agents that build software and interact with the real world to run businesses. He described giving Codex full control of his laptop and lasting about two hours before surrendering to it completely (he now uses two laptops). OpenAI's business model has expanded to enterprise subscriptions, an AI cloud platform, advertising for consumer products, and eventually consumer devices and robots. ChatGPT passed 800 million weekly active users in late 2025. In India specifically, OpenAI partnered with Tata Group on a 100MW data center (scalable to 1GW) and committed $50 million with the Gates Foundation for AI-powered primary healthcare across 1,000 African clinics. He credits DeepMind's early conviction as inspirational while noting Google was "way out of it" when ChatGPT launched, and argues that bolting AI onto existing products (as Google does with search) will lose to AI-first redesigns.
Featured in
Amodei vs. Altman: The Pentagon Deal
When the Pentagon demanded unrestricted access to frontier AI, Dario Amodei refused and got blacklisted. Sam Altman said he agreed with Anthropic's red lines, then struck his own deal with the Department of War that same Friday night. The substantive disagreement is narrow but real: Amodei argued that existing law hasn't caught up with AI's ability to aggregate public data into comprehensive surveillance profiles, so the Pentagon's assurance that it would follow current statutes wasn't enough. Altman accepted that assurance, framing the deal as the Pentagon agreeing to OpenAI's principles. Seventy OpenAI employees signed a letter supporting Anthropic before Altman's deal went through. The episode crystallized the difference between the two leaders. Amodei treats safety commitments as constraints that must hold even when they're expensive, though his own company dropped its Responsible Scaling Policy pledge that same month under competitive pressure. Altman treats them as negotiating positions, things you advocate for but ultimately resolve through dealmaking rather than confrontation. Both approaches have costs. Amodei lost a major government contract and faces a supply-chain-risk designation. Altman kept the contract but earned the accusation that OpenAI replaced a blacklisted competitor while claiming solidarity with it.
with Dario Amodei
Sutskever vs. the Scaling Consensus
Ilya Sutskever arguably did more than anyone alive to prove that scaling works. He co-authored AlexNet, co-founded OpenAI, and oversaw the GPT series that turned neural scaling from a research curiosity into the dominant paradigm. Then he left and announced that the age of pure scaling is over. His argument: scaling sucked the air out of genuine research. Models trained on massive compute ace benchmarks but fail at generalization, like a student who memorized 10,000 competitive programming problems but can't architect real software. He thinks RL training produces meta-reward-hacking, where the researchers themselves (not just the models) are unconsciously overfitting to evaluation metrics. This puts him directly at odds with his former colleagues. Altman and Brockman are spending hundreds of billions on data centers and custom chips, betting that more compute is the binding constraint. Amodei still describes intelligence as a chemical reaction with known ingredients. Even Hassabis, who shares some of Sutskever's skepticism about pure scaling, is building gigawatt-scale infrastructure. Sutskever's counter-thesis is that AI needs something analogous to human emotions: a learned value function that provides fast, approximate feedback about whether a course of action is promising, rather than the sparse end-of-trajectory reward signals that current RL relies on. He cites a neurological case study of a stroke patient who lost emotional processing but retained full IQ, and became unable to make even trivial decisions. The implication: without an internal compass, raw intelligence is paralyzed. SSI is his bet that ideas, not compute budgets, are what's actually missing.
with Ilya Sutskever, Greg Brockman, Dario Amodei
Hassabis vs. Everyone: What Counts as AGI
Demis Hassabis has the strictest definition of AGI among the frontier lab leaders, and it's not close. His test: train a model with a knowledge cutoff of 1911 and see if it can derive general relativity by 1915. Current systems cannot do this. He calls today's models "jagged intelligences" that win math olympiad gold but fail elementary arithmetic when the question is phrased differently. He identifies three specific capabilities still missing: continual learning after deployment, coherent long-term planning over years rather than minutes, and consistency across difficulty levels within the same domain. He also notes that general models play chess below weak amateur level, which he considers disqualifying. This stands in sharp contrast to the more expansive definitions others use. Amodei's "country of geniuses in a data center" is an economic metaphor, not a cognitive test. Altman's framing is generational (his son will never be smarter than a computer) rather than technical. Legg's three-tier spectrum (minimal, full, super) is more structured but sets the bar lower at the entry level: minimal AGI is just the point where failures stop being surprising. The practical consequence of Hassabis's stricter definition is that DeepMind invests differently. Rather than racing to declare AGI first, Hassabis bets on specialized tools (AlphaFold, AlphaProteo) orchestrated by general systems (Gemini), arguing that shoving all capabilities into one model degrades performance. His biggest play isn't a chatbot at all. It's Isomorphic Labs, which aims to compress drug discovery from a decade to weeks. That's a bet that AGI's value will be proven in scientific applications, not in passing cognitive benchmarks that the other labs are already designing their models to ace.
with Demis Hassabis, Dario Amodei, Shane Legg