Greg Brockman
Co-founder, President at OpenAI. Former CTO at Stripe.
AI-generated profile based on archived statements
Greg Brockman co-founded OpenAI in 2015 after four years as Stripe's first CTO (employee number four), and the throughline between those jobs is a specific kind of impatience. At Stripe, he turned a Wells Fargo bank integration quoted at nine months into a 24-hour sprint: he failed the first certification attempt, rescheduled two hours later, and passed on the second try. He approaches AI infrastructure the same way. Before OpenAI, he built his own GPU rig with Titan X cards to teach himself machine learning. His intellectual turning point wasn't a conference or a paper on scaling laws; it was reading Turing's 1950 paper and fixating on the "child machine" concept: the idea that you don't program intelligence, you build something that learns. AlexNet's 2012 ImageNet win made that idea look practical. When critics said deep learning had no new ideas and was just throwing bigger computers at old problems, Brockman's response was blunt: yes, and that's the point. Sometimes scaling is the innovation.
That conviction became OpenAI's operating philosophy. Brockman has said that when the company started, the founding team believed AGI would come from the right conceptual breakthrough. By 2017, they'd changed their minds: compute was the binding constraint. His 2022 blog post "Time to Be an ML Engineer" argued that running 100,000 GPUs requires engineering discipline distinct from research, and that OpenAI's progress depended on both. He was candid about the friction this created internally: engineers wanted clean abstractions, but researchers needed to see every detail because tiny parameter changes could break model performance in unpredictable ways. His fix was what he called "technical humility": engineers learning to subordinate their instincts to researcher domain knowledge. Product launches kept testing that balance. ChatGPT hit a million users in five days; GPT-4 image generation reached 100 million users in five. Each time, the team had to pull compute from research to keep services running, a tradeoff Brockman calls "mortgaging the future." He now describes AGI not as a destination but as a continuous process, and openly says AI will "change a lot of fundamentals of the social contract" around employment.
By late 2025, Brockman's focus had shifted to physical infrastructure and vertical integration. The Broadcom partnership, announced in October 2025 after 18 months of development, aims to deploy 10 gigawatts of custom inference chips starting late 2026. OpenAI used its own models to optimize the chip design, finding area reductions that human engineers acknowledged were on their lists but would have taken another month to reach. In January 2026, he announced the SB Energy partnership for a 1.2 GW data center in Milam County, Texas, as part of the $500 billion Stargate commitment. The motivation is concrete: he says teams inside OpenAI have their output directly limited by their compute allocation, and he wants a world where anyone with an idea has the compute to execute it. On the product side, he declared 2025 "the year of agents" and pushed ChatGPT Pulse as an example of proactive AI (agents that act on your behalf without being asked). He frames ChatGPT's future as a "goal amplifier" and envisions hundreds of thousands of semi-autonomous AI agents working as cloud-based co-workers. He also draws a direct analogy between aligning AI and raising children: both learn values from data and feedback, and the techniques for one may well apply to the other.
Featured in
Sutskever vs. the Scaling Consensus
Ilya Sutskever arguably did more than anyone alive to prove that scaling works. He co-authored AlexNet, co-founded OpenAI, and oversaw the GPT series that turned neural scaling from a research curiosity into the dominant paradigm. Then he left and announced that the age of pure scaling is over. His argument: scaling sucked the air out of genuine research. Models trained on massive compute ace benchmarks but fail at generalization, like a student who memorized 10,000 competitive programming problems but can't architect real software. He thinks RL training produces meta-reward-hacking, where the researchers themselves (not just the models) are unconsciously overfitting to evaluation metrics. This puts him directly at odds with his former colleagues. Altman and Brockman are spending hundreds of billions on data centers and custom chips, betting that more compute is the binding constraint. Amodei still describes intelligence as a chemical reaction with known ingredients. Even Hassabis, who shares some of Sutskever's skepticism about pure scaling, is building gigawatt-scale infrastructure. Sutskever's counter-thesis is that AI needs something analogous to human emotions: a learned value function that provides fast, approximate feedback about whether a course of action is promising, rather than the sparse end-of-trajectory reward signals that current RL relies on. He cites a neurological case study of a stroke patient who lost emotional processing but retained full IQ, and became unable to make even trivial decisions. The implication: without an internal compass, raw intelligence is paralyzed. SSI is his bet that ideas, not compute budgets, are what's actually missing.
with Ilya Sutskever, Sam Altman, Dario Amodei