Topic
Open Source
OpenAI was literally founded on the premise that open-sourcing AI would benefit humanity. Ilya Sutskever has said as much directly: the original tactic was to release everything, run as a nonprofit, and let the field grow. But as compute requirements exploded from GPT-1 to GPT-2 to GPT-3, the team concluded a nonprofit couldn't raise enough money, pivoted to a capped-profit structure, and progressively closed the weights on its best models. Sam Altman, speaking in India in February 2025, acknowledged he worries about the US falling behind on open source and said "we should do more," while also calling OpenAI's agent platform Operator "the most exciting thing to happen in the AI space in quite some time": a product that depends on closed, hosted models. OpenAI has released some open-weight models (the GPT-OSS series Jack Clark references in Import AI), but its flagship systems remain API-only. The tension is visible: Altman wants AI to be cheap and abundant for everyone, but the business model requires keeping the best stuff proprietary.
The strongest open-source commitment among this group belongs to Mira Murati. After leaving OpenAI, she built Thinking Machines Lab around the explicit thesis that frontier training capabilities should be accessible to all. Their first product, Tinker (launched October 2025), is a Python API for distributed fine-tuning of open models like Llama and Qwen, giving researchers full control over training loops while abstracting away GPU infrastructure. As former OpenAI researcher Andrej Karpathy put it, Tinker gives "90% algorithmic control and 90% less infrastructure pain." Princeton, Stanford, and Berkeley labs used it in beta to push specialized models well past their default performance.

Anthropic takes a more surgical approach: Dario Amodei's lab open-sourced circuit-tracing interpretability tools in May 2025 so anyone can study what happens inside language models, but Claude's weights stay locked. The open contributions are deliberately about safety, not capability.

Demis Hassabis, meanwhile, makes the historical case: Google published the Transformer paper, AlphaGo in Nature, and much of the foundational deep learning and RL work that built the modern AI industry. But he's visibly frustrated that competitors took all of it without contributing back, and he suggests (with a kind of resigned shrug) that the resulting slowdown from less sharing might actually be good for safety, given we're perhaps 5 to 10 years from AGI.

Jack Clark, tracking the open-weight ecosystem in Import AI, frames it ecologically: proprietary models are lumbering elephants, while open-weight models are fast-reproducing organisms colonizing every digital niche. By late 2025, local open-weight models could handle 88.7% of single-turn queries (up from 23% in 2023). The capability gap is closing fast, but frontier reasoning tasks still belong to the closed models served on cloud infrastructure.
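The division of labor Tinker is described as offering (the researcher writes the training loop; the service owns the GPUs) can be sketched with a toy local stand-in. This is not the real Tinker API: `StubTrainingClient`, its method names, and the one-parameter "model" are all illustrative assumptions chosen to show the shape of the split, nothing more.

```python
# Hypothetical sketch of the "user controls the loop, service controls the
# infrastructure" split described above. StubTrainingClient is a local
# stand-in, NOT the real Tinker API; all names here are assumptions.

class StubTrainingClient:
    """Pretends to be a hosted fine-tuning service for one open model."""

    def __init__(self, base_model: str, lr: float = 0.1):
        self.base_model = base_model
        self.lr = lr
        self.weight = 0.0          # toy 1-parameter "model"
        self._pending_grad = 0.0

    def forward_backward(self, batch):
        # Mean-squared-error loss of weight vs. targets; returns the loss and
        # accumulates the gradient "server-side", as a hosted service would.
        ts = batch["targets"]
        loss = sum((self.weight - t) ** 2 for t in ts) / len(ts)
        self._pending_grad = sum(2 * (self.weight - t) for t in ts) / len(ts)
        return loss

    def optim_step(self):
        # Apply the accumulated gradient: the infrastructure detail the
        # researcher never touches in the hosted setting.
        self.weight -= self.lr * self._pending_grad
        self._pending_grad = 0.0


def finetune(client, batches, steps):
    """The part the researcher owns: the actual training loop."""
    losses = []
    for _ in range(steps):
        for batch in batches:
            losses.append(client.forward_backward(batch))
            client.optim_step()
    return losses


client = StubTrainingClient(base_model="Llama-3.1-8B")  # model name illustrative
batches = [{"targets": [1.0, 3.0]}]                     # toy "dataset"
losses = finetune(client, batches, steps=25)
print(f"first loss {losses[0]:.3f} -> last loss {losses[-1]:.3f}")
```

The point of the shape, per the description above, is that `finetune` stays entirely in the researcher's hands, so custom RL or supervised objectives slot in freely, while everything behind `forward_backward` and `optim_step` could be swapped for remote, distributed execution without the loop changing.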
Perspectives
The Openness Spectrum: Murati, Clark, and Amodei
Mira Murati left OpenAI and built Thinking Machines Lab around a specific thesis: frontier training knowledge is too concentrated, and democratizing fine-tuning matters more than releasing model weights. Her first product gives researchers full control over RL and supervised learning loops on open models like Llama and Qwen, and her co-founder John Schulman (who led RLHF at OpenAI) frames it as abstracting away distributed training complexity while keeping the user in control. Murati's bet is that access to training methodology, not just model access, is the real bottleneck. Jack Clark, tracking the open-weight ecosystem from inside Anthropic, has observed that by late 2025, local open models handled nearly 89% of common queries. He describes the dynamic in ecological terms: proprietary models are elephants, open-weight models are fast-reproducing organisms colonizing every niche. But Anthropic itself keeps Claude's weights locked and only open-sources safety-focused tools like circuit-tracing interpretability research. Dario Amodei draws the line explicitly: open-source the safety work, lock down the capabilities. This creates a clear spectrum. Murati wants to open the training process. Clark documents the open ecosystem's growth while his own company withholds its best model. Amodei open-sources selectively, using safety contributions as a form of competitive differentiation that also happens to be genuinely useful. The positions correlate exactly with their business models, which is either reassuring or damning depending on your priors.
Statements
No statements yet
Content tagged with "open-source" will appear once indexed.