
Mira Murati

Founder, CEO at Thinking Machines Lab. Former CTO at OpenAI.

Topics: fundraising, compute, partnerships, product, API, startups, competition, company culture, business model, training, research, RLHF, constitutional AI, tools, open source

AI-generated profile based on archived statements

Murati's arc is the cleanest departure narrative in the frontier AI space. She joined OpenAI in 2018 as VP of Applied AI and Partnerships, rose to CTO in 2022, oversaw the launches of ChatGPT, DALL-E, and Sora, served as interim CEO for two days during the November 2023 board crisis, then left in September 2024 with a diplomatic note and no public grievances. By February 2025, she had announced Thinking Machines Lab. By July, she had raised $2 billion. Her WIRED interview in December 2024, her first since leaving, was careful: she deflected speculation, said she was still figuring things out, and told the audience to ignore the noise about who was leaving which lab. But the substance was already visible. She defined AGI as human-level performance across all cognitive tasks, said it felt achievable within a decade or two (not sci-fi anymore), and flagged that theoretical alignment and governance infrastructure were lagging far behind practical safety work. She was diplomatic about OpenAI's safety record but made a pointed distinction: market incentives were solving near-term problems like hallucination and bias, but nobody was doing the harder work of building the social and institutional infrastructure for AGI-level systems.

Thinking Machines Lab's founding thesis is legible from its first public statements and products. The company launched in early 2025 with a specific claim: knowledge of how frontier systems are trained is too concentrated in a few labs, limiting both public discourse and practical use. Its first product, Tinker (October 2025), is an API for fine-tuning open-weight models (Llama, Qwen) via reinforcement learning and supervised learning; the bet is that accessible fine-tuning, not just access to weights, is the next leverage point. Beta testers from Redwood Research and Anyscale confirmed it was significantly easier than building RL infrastructure from scratch. Co-founder John Schulman (who led the RLHF work behind ChatGPT at OpenAI) described it as giving users full control over the training loop while abstracting away the details of distributed training. In September 2025, the lab published technical work on batch-invariant inference kernels that eliminate LLM inference nondeterminism: sampling the same prompt 1,000 times at temperature 0 on Qwen3-235B had previously yielded 80 distinct completions, while with the new kernels all 1,000 runs come back identical. Their LoRA research showed MLP-only adaptation outperforming attention-only, contra the original paper's recommendation.
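
To make that last finding concrete: "MLP-only" versus "attention-only" is just a choice of which weight matrices receive the low-rank adapters. Here is a minimal sketch using Hugging Face PEFT with the standard projection-module names for Llama-style models; it illustrates the comparison, not Thinking Machines' own training stack, which isn't public:

```python
# Two LoRA placements expressed as Hugging Face PEFT configs, using the
# standard projection-module names for Llama-style models. Illustrative
# only: this is not Thinking Machines' internal tooling.
from peft import LoraConfig

# Attention-only, following the original LoRA paper's recommendation
# to adapt the query and value projections (Wq, Wv).
attention_only = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
)

# MLP-only, the placement the lab's experiments found stronger.
mlp_only = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["gate_proj", "up_proj", "down_proj"],
)

# Either config attaches the same way:
#   from peft import get_peft_model
#   model = get_peft_model(base_model, mlp_only)
```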

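The determinism work, meanwhile, rests on a numerical fact worth making explicit: floating-point addition is not associative, and standard inference kernels change their reduction order with batch size (that is, with server load), so even a temperature-0 request can see slightly different logits run to run. A self-contained sketch of the root cause in plain PyTorch; the lab's actual fix is a set of batch-invariant GPU kernels, not this demo:

```python
# Why "identical" inference runs can disagree: float addition is not
# associative, so summing the same numbers in a different order gives a
# slightly different result. Kernels pick their reduction order based on
# batch size, which depends on whatever else the server is processing.
import torch

torch.manual_seed(0)
x = torch.randn(4096, dtype=torch.float32)

# Three mathematically identical sums with three reduction orders,
# standing in for one kernel invoked at different batch sizes.
s_a = x.sum()
s_b = x.view(8, 512).sum(dim=1).sum()
s_c = x.view(64, 64).sum(dim=1).sum()

print(f"{s_a.item():.10f}  {s_b.item():.10f}  {s_c.item():.10f}")
# The values typically differ in the trailing digits. Inside a transformer,
# a discrepancy that small can flip an argmax token choice, and the
# divergence then compounds over the rest of the generation. Batch-invariant
# kernels fix the reduction order so results stop depending on batch size.
```
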
Murati's intellectual signature, visible across her OpenAI tenure and the Thinking Machines founding documents, is a preference for human-AI collaboration over full autonomy. At Dartmouth in June 2024, she told the audience GPT-3 was toddler-level, GPT-4 was smart-high-schooler-level, and PhD-level performance was roughly 18 months out. She compared the moment to Vernor Vinge's singularity essay, but grounded it: even a 2% chance of AGI being possible would make it the most important thing she could work on. At Thinking Machines, the mission statement explicitly says they prioritize building multimodal systems that work with people collaboratively, not fully autonomous agents. She frames safety as empirical and iterative rather than theoretical, and insists that products keep research grounded in reality. The subtext of the whole venture is that OpenAI got the research-to-product pipeline right but the knowledge-sharing and customization wrong, and that the fix is making frontier capabilities accessible to a broader base of researchers and builders rather than keeping them locked inside a few labs.

Recurring themes

- Democratizing fine-tuning and training knowledge as the real bottleneck, not model access
- Human-AI collaboration over full autonomy as a design philosophy
- Empirical safety through deployment rather than theoretical alignment in isolation
- Infrastructure reliability and reproducibility (deterministic inference) as preconditions for trust
- The CTO-to-founder pipeline: taking OpenAI's product instincts and stripping out the knowledge concentration

Featured in

The Openness Spectrum: Murati, Clark, and Amodei

Mira Murati left OpenAI and built Thinking Machines Lab around a specific thesis: frontier training knowledge is too concentrated, and democratizing fine-tuning matters more than releasing model weights. Her first product gives researchers full control over RL and supervised learning loops on open models like Llama and Qwen, and her co-founder John Schulman (who led RLHF at OpenAI) frames it as abstracting away distributed training complexity while keeping the user in control. Murati's bet is that access to training methodology, not just model access, is the real bottleneck. Jack Clark, tracking the open-weight ecosystem from inside Anthropic, has observed that by late 2025, local open models handled nearly 89% of common queries. He describes the dynamic in ecological terms: proprietary models are elephants, open-weight models are fast-reproducing organisms colonizing every niche. But Anthropic itself keeps Claude's weights locked and only open-sources safety-focused tools like circuit-tracing interpretability research. Dario Amodei draws the line explicitly: open-source the safety work, lock down the capabilities. This creates a clear spectrum. Murati wants to open the training process. Clark documents the open ecosystem's growth while his own company withholds its best model. Amodei open-sources selectively, using safety contributions as a form of competitive differentiation that also happens to be genuinely useful. The positions correlate exactly with their business models, which is either reassuring or damning depending on your priors.

with Jack Clark, Dario Amodei
