Topic
Competition
The frontier AI labs talk about shared values, but the February 2026 Pentagon crisis exposed how quickly solidarity fractures under pressure. When the Department of War blacklisted Anthropic for refusing to let the military use Claude without restrictions on autonomous weapons and mass surveillance, Sam Altman wrote an internal memo declaring that OpenAI shared Anthropic's red lines, and 70 OpenAI employees signed an open letter titled "We Will Not Be Divided." The same week, Altman struck his own deal with the Pentagon, accepting terms the DOD had refused to offer Anthropic.

Anthropic's own safety commitments have bent in a similar direction. In February 2026, the company dropped the core pledge of its Responsible Scaling Policy: the promise never to train a model without adequate safety guarantees in advance. Chief scientist Jared Kaplan told TIME the company "didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead." METR's Chris Painter called it a shift into "triage mode," warning of a "frog-boiling" effect in which danger ramps up without any single moment that sets off alarms. Dario Amodei, who left OpenAI in late 2020 precisely because he wasn't convinced the institution had "a real and serious conviction" about safety, now runs an organization making the same pragmatic trade-offs he once rejected. He told the Dwarkesh Podcast he spends 40% of his time on company culture, arguing it is "the only thing that will win the AI race," a framing that treats safety and competition as the same problem.
The competitive field itself has splintered. Two of OpenAI's most senior alumni have become direct rivals: Mira Murati's Thinking Machines Lab entered talks at a $50 billion valuation (built on an exodus of OpenAI talent, including co-founding employee John Schulman), while Ilya Sutskever founded SSI with the explicit goal of building safe superintelligence entirely outside commercial pressure. Altman, for his part, has been generous toward Google, crediting Demis Hassabis and DeepMind with starting serious AI work "before anyone else in the modern era" and calling their comeback from being "way out of it" after ChatGPT's launch "quite impressive." Hassabis returned the compliment indirectly by drawing a sharp line around the real competition: Chinese labs like DeepSeek and Alibaba are now only months behind the Western frontier, but catching up is not the same as inventing something new. "To invent something is about 100 times harder than it is to copy it," he said, adding that he has not yet seen evidence of that kind of originality from Chinese teams. Greg Brockman framed the bottleneck differently, pointing to compute scarcity and energy as the true constraints and noting that OpenAI now builds its own data centers through Stargate because "the world has not yet adapted to the demand shock." Jack Clark, meanwhile, predicted the political fallout: a movement to freeze human jobs "in bureaucratic amber" as a response to the speed of displacement. The race, as these leaders define it, is less about who ships the next model and more about who controls the infrastructure, the safety norms, and the political will to manage what comes after.
People on this topic
Perspectives
The Openness Spectrum: Murati, Clark, and Amodei
Mira Murati left OpenAI and built Thinking Machines Lab around a specific thesis: frontier training knowledge is too concentrated, and democratizing fine-tuning matters more than releasing model weights. Her first product gives researchers full control over RL and supervised learning loops on open models like Llama and Qwen, and her co-founder John Schulman (who led RLHF at OpenAI) frames it as abstracting away distributed training complexity while keeping the user in control. Murati's bet is that access to training methodology, not just model access, is the real bottleneck. Jack Clark, tracking the open-weight ecosystem from inside Anthropic, has observed that by late 2025, local open models handled roughly 89% of common queries. He describes the dynamic in ecological terms: proprietary models are elephants, open-weight models are fast-reproducing organisms colonizing every niche. But Anthropic itself keeps Claude's weights locked and only open-sources safety-focused tools, such as its circuit-tracing interpretability research. Dario Amodei draws the line explicitly: open-source the safety work, lock down the capabilities.

This creates a clear spectrum. Murati wants to open the training process. Clark documents the open ecosystem's growth while his own company withholds its best model. Amodei open-sources selectively, using safety contributions as a form of competitive differentiation that also happens to be genuinely useful. The positions correlate exactly with their business models, which is either reassuring or damning depending on your priors.