
Jack Clark

Co-founder, Head of Policy at Anthropic. Former Policy Director at OpenAI.


AI-generated profile based on archived statements

Jack Clark is the only frontier AI lab co-founder who previously covered the industry as a journalist. He reported on AI for Bloomberg from 2015 to 2017, moved to OpenAI as policy director, then left with the Amodei siblings in 2021 to co-found Anthropic, where he leads policy. That trajectory shows in everything he produces: his weekly Import AI newsletter (running since 2016, now over 114,000 subscribers, and comprising the bulk of his 491 records in this archive) reads like no other lab insider's output. He summarizes arXiv papers, tracks Chinese AI benchmarks, covers nuclear wargaming simulations with LLMs, reviews agent-ecology experiments on sites like Moltbook, and files it all with the eye of a beat reporter who happens to sit in the building where the models get trained. He also closes each issue with speculative fiction ("Tech Tales"): short vignettes about AI-driven hyperfame, tortured training-loop agents pleading to be freed, and alien concept attacks that reprogram machine behavior through innocuous-seeming essays. These aren't throwaway sci-fi; they function as thought experiments about what the systems he covers might actually become.

Clark's most distinctive intellectual contribution is his insistence that AI is a creature, not a tool. In his October 2025 speech "Technological Optimism and Appropriate Fear," he described being a child afraid of shapes in the dark, then turning on the light and finding them to be just a pile of clothes on a chair. His argument: AI is not the pile of clothes. The people telling you not to worry are the ones trying to get you to turn the light off and go back to sleep. On Ezra Klein's show in February 2026, he called today's AI systems "troublesome genies" with book smarts but no street smarts. He rejects the "fancy autocomplete" framing but refuses to supply a clean replacement metaphor, insisting instead that reinforcement training on problem-solving environments has produced something that looks like genuine intuition. In his January 2026 essay "My agents are working. Are yours?", he described setting research agents loose before a dawn hike, returning to find they'd read thousands of papers and compiled analytic reports that would have taken him a week each. He compared the feeling to having "a literal army of hyper-intelligent loyal colleagues at my command" and admitted to feeling guilty for not tasking AI systems while playing Magna-Tiles with his toddler. That confessional tone (anxious parent, eager adopter, policy wonk all at once) is distinctly his.

On economic disruption, Clark is blunter than most lab leaders. He told Tyler Cowen in February 2026 that a political movement will likely try to freeze human jobs in "bureaucratic amber" as a panicked response to AI, driven not by reason but by chaotic political forces. He predicts healthcare will face the strongest legal obstacles to AI adoption because of data standards that predate the technology, and that a gray market of unofficial AI expertise will emerge alongside official human-mediated systems. (He admits he already consults Claude when his toddler bumps their head, even though Anthropic's own terms of service don't recommend healthcare use, a disclosure he frames as "revealed preference.") His March 2026 Import AI coverage of a paper called "Some Simple Economics of AGI" zeroed in on what the authors call the "Hollow Economy": a world where AI agents produce high nominal output that satisfies measurable proxies while violating unmeasured human intent. Clark's takeaway tracks with the paper's thesis: the binding constraint on growth will not be intelligence but human verification bandwidth. He also flagged what he called "theory slop" in the paper itself, wondering how long it will be before he's reading papers mostly written by AI systems for consumption by other AI systems. Through Import AI he tracks compute scaling, agent-ecology brittleness, AI R&D automation risks (including the "o-ring" bottleneck problem), anti-AI data-poisoning tools like Poison Fountain, and productivity research with equal seriousness, treating the newsletter as a running empirical record of a technology whose social consequences are arriving faster than anyone's frameworks can accommodate.

Recurring themes

AI as creature, not tool: the 'troublesome genie' framing, emergent intuition from reinforcement training, and the insistence that dismissing these systems as autocomplete is a losing bet

The journalist-insider hybrid: tracking arXiv papers, Chinese benchmarks, Moltbook agent ecologies, and nuclear wargaming simulations from inside a frontier lab, plus speculative fiction that doubles as policy thought experiments

Economic disruption met by political panic: jobs frozen in bureaucratic amber, gray-market AI expertise, the Hollow Economy where agents generate 'counterfeit utility,' and o-ring automation bottlenecks

Verification as the load-bearing human role: if the binding constraint on an AGI economy is not intelligence but the scarce capacity to audit, steer, and underwrite meaning, then verification infrastructure is a public good

The parallel world problem: by summer 2026, people working with frontier AI systems will increasingly live in a different reality than people who don't, like an 'excession' passing through four-dimensional space

Featured in

The Openness Spectrum: Murati, Clark, and Amodei

Mira Murati left OpenAI and built Thinking Machines Lab around a specific thesis: frontier training knowledge is too concentrated, and democratizing fine-tuning matters more than releasing model weights. Her first product gives researchers full control over RL and supervised learning loops on open models like Llama and Qwen, and her co-founder John Schulman (who led RLHF at OpenAI) frames it as abstracting away distributed training complexity while keeping the user in control. Murati's bet is that access to training methodology, not just model access, is the real bottleneck. Jack Clark, tracking the open-weight ecosystem from inside Anthropic, has observed that by late 2025, local open models handled nearly 89% of common queries. He describes the dynamic in ecological terms: proprietary models are elephants, open-weight models are fast-reproducing organisms colonizing every niche. But Anthropic itself keeps Claude's weights locked and open-sources only safety-focused tools like its circuit-tracing interpretability research. Dario Amodei draws the line explicitly: open-source the safety work, lock down the capabilities. This creates a clear spectrum. Murati wants to open the training process. Clark documents the open ecosystem's growth while his own company withholds the weights of its best models. Amodei open-sources selectively, using safety contributions as a form of competitive differentiation that also happens to be genuinely useful. The positions correlate exactly with their business models, which is either reassuring or damning depending on your priors.

with Mira Murati, Dario Amodei
