AI History Profiles: The Builders

Sam Altman

From Y Combinator president to OpenAI CEO — the story of the man steering the most prominent AI company in the world through historic breakthroughs and boardroom drama.


The Operator

Born: 22 April 1985, Chicago, Illinois, USA


Sam Altman is not, in the conventional sense, a scientist. He did not derive the equations that underlie large language models. He did not design the architectures that made GPT possible. He studied computer science at Stanford, dropped out after two years to found a company, and spent the formative years of his career not in research labs but in the ecosystems where technology startups are funded, advised, and eventually either succeed or fail. What he brought to OpenAI, which he co-founded alongside Elon Musk and others in 2015 and began leading as CEO in 2019, was something different from technical expertise: an unusual combination of organisational intelligence, fundraising ability, and a particular kind of strategic clarity about how to position a non-profit AI safety organisation as it became, increasingly, something that looked more like a commercial technology company.

That transformation — from non-profit research lab to capped-profit structure to full commercial corporation — is the central story of Altman’s tenure at OpenAI, and it is a story about which reasonable people hold strongly divergent views. His defenders argue that the commercial structure was necessary to generate the capital required to pursue the research, that without Microsoft’s billions GPT-4 would not have been possible, and that the mission of beneficial AI is best served by an organisation that can compete for talent and compute with the best-resourced technology companies. His critics argue that the commercial imperatives fundamentally compromised the mission, that an organisation capable of giving a single investor multi-billion dollar returns cannot plausibly claim to be prioritising humanity’s interests over its own, and that Altman’s rhetorical dexterity in maintaining the original mission framing while pursuing commercial objectives represents a form of dishonesty about what OpenAI actually is.

Both positions capture something real. Altman is, on any assessment, one of the most consequential figures in the history of AI — not because of what he discovered but because of what he built and how he positioned it.


St. Louis, Stanford, and Loopt

Altman grew up in St. Louis, Missouri, in a Jewish family — his father was a real estate developer, his mother a dermatologist. He was, by his own account, a precocious and somewhat difficult child who found mainstream schooling unstimulating and spent significant portions of his adolescence on computers. He came out as gay at the age of seventeen, at a time when this was less socially routine than it would later become, and has spoken about the experience as formative in developing his tolerance for being different from the people around him.

He enrolled at Stanford in 2003 to study computer science and, in 2005, left without completing his degree to co-found Loopt, a location-based social networking application. Location-based social networking was, in 2005, a genuinely novel idea — smartphones were not yet ubiquitous, GPS chips were just becoming standard consumer hardware — and Loopt was among the first companies to explore what it might mean to build social experiences around physical proximity.

Loopt was backed by Y Combinator in its very first cohort, in the summer of 2005, and the connection was consequential. Paul Graham, who ran Y Combinator, recognised in Altman a particular kind of founder intelligence — not technical depth but strategic pattern recognition, the ability to see how pieces fit together and to communicate that vision persuasively to investors, partners, and employees. Loopt never became a large company; it was acquired by Green Dot Corporation in 2012 for about 43 million dollars. But it established Altman in the startup ecosystem and brought him into the orbit of Y Combinator in a way that would define his next decade.


Y Combinator and the Startup Mind

In 2014, Paul Graham stepped back from Y Combinator and Altman became its president. The timing was important. Y Combinator was, by 2014, the most influential startup accelerator in the world, the institution that had funded Airbnb, Dropbox, Stripe, and Coinbase, and whose imprimatur had become one of the most reliable signals of quality in the startup ecosystem. Altman’s four years running it were, by most accounts, a period of significant expansion and professionalisation.

The experience was also, for Altman, an education in the dynamics of technology development that no amount of academic research could provide. He saw hundreds of companies at their earliest stages, pattern-matched across industries and geographies, developed views about what made technology transformative versus incremental, and built the network of relationships in venture capital, in large technology companies, and in government that would become essential when he moved to OpenAI.

He also, during this period, became increasingly focused on AI as the most important technology being developed. He invested in several AI companies, helped assemble the group that founded OpenAI in 2015, and became convinced that the development of artificial general intelligence was not a distant theoretical prospect but an engineering problem that would be solved within decades, and that how it was solved would be among the most consequential decisions in human history. This conviction was not unique to Altman in Silicon Valley in the mid-2010s, but he held it with more intensity and acted on it more directly than most.


OpenAI and the Structure Question

Altman joined OpenAI’s board in 2015, when it was founded, and became CEO in 2019 when OpenAI transitioned from a pure non-profit to a capped-profit structure that allowed it to raise investment capital. The structural change was Altman’s first major decision at the organisation and it set the tone for everything that followed: a willingness to do whatever was institutionally necessary to maintain competitive position, combined with a rhetorical commitment to the original mission that became increasingly strained as the commercial imperatives intensified.

The partnership with Microsoft, which ultimately involved thirteen billion dollars in investment, was Altman’s most significant achievement as CEO and his most significant compromise. Microsoft’s investment came with conditions — Microsoft would receive exclusive commercial access to OpenAI’s technology for its own products, and OpenAI would use Azure for its computing needs — that meant a single commercial partner had enormous leverage over an organisation whose stated mission was to benefit all of humanity.

Altman’s defence of this arrangement is consistent: without the capital, the research would not be possible; without the research, the mission cannot be achieved; the alternative to Microsoft’s investment was either falling behind competitors or shutting down. This is a defensible position. It is also a position that forecloses a certain kind of institutional independence that the non-profit structure was designed to preserve.

The GPT-4 release, in March 2023, marked the moment when OpenAI became not just a research organisation with commercial products but one of the most prominent technology companies in the world. The ChatGPT product, launched in November 2022, had reached a hundred million users faster than any consumer product in history. Altman’s press appearances, Congressional testimony, and public communications throughout this period demonstrated a gift for managing the complexity of being simultaneously a technologist, a businessman, and a public figure representing an organisation that claimed a unique responsibility for the future of humanity.


The Five Days

In November 2023, the board of OpenAI fired Altman. The specific grounds were not publicly stated beyond a reference to a lack of candour with the board. What followed was the most dramatic corporate governance crisis in the history of technology, compressed into five days.

Microsoft immediately offered Altman a role leading a new AI research team. Virtually all of OpenAI’s employees, including Ilya Sutskever who had voted for the firing, signed an open letter threatening to resign unless the board reversed its decision. The board, which had acted without informing Microsoft, found itself without the authority to withstand the combined pressure of the company’s investors, its employees, and its most important commercial partner.

Altman returned as CEO after five days. The board members who had voted to fire him were replaced. A new board was constituted with more conventional corporate governance experience and less apparent commitment to the original non-profit mission structure. The episode ended whatever remained, if it had ever seriously existed, of Altman’s role as a non-profit CEO whose decisions were constrained by mission rather than by commercial logic.

Altman’s own account of the firing has been characteristically opaque. He has described the experience as disorienting but ultimately validating. He has said that it clarified, for him and for the organisation, what OpenAI actually was and what it needed to become. That clarity has meant, in practice, the completion of OpenAI’s transformation into a conventional technology company — a transformation that Altman has overseen with the same combination of pragmatism and rhetorical continuity that has characterised his tenure from the beginning.


The World According to Altman

Altman’s public positions on AI are, like the man himself, difficult to categorise. He has said that ChatGPT may be the most transformative technology in human history. He has also said that we may be building something genuinely dangerous, and that the failure to develop alignment techniques in parallel with capabilities could be catastrophic. He has testified before Congress in favour of AI regulation. He has also lobbied against specific regulatory proposals that he believed would disadvantage OpenAI relative to competitors.

This combination of expressed concern and competitive behaviour is either cynical or genuinely complicated, depending on your prior dispositions. What is consistent across all of Altman’s public statements is a conviction that AGI is coming soon, that it will be transformative in ways both positive and potentially destructive, and that the most important thing is to be in the room when the decisions are made. Whether this conviction makes him the right person to be making those decisions is the question that his career poses, and that history will have to answer.


Key Works & Further Reading

Primary sources:

  • “Planning for AGI and beyond” — Sam Altman (OpenAI blog, 2023). His clearest statement of the mission as he understands it.
  • US Senate testimony on AI oversight (May 2023). His most detailed public engagement with regulatory questions.
  • “Moore’s Law for Everything” — Sam Altman (2021). His vision for AI-driven economic transformation; essential for understanding his broader worldview.

Recommended reading:

  • The Innovators — Walter Isaacson (2014). The history of the computing revolution that produced the ecosystem Altman was formed in.
  • No Filter — Sarah Frier (2020). A case study in what happens when a technology company prioritises growth over mission; illuminating comparison.
  • The Power Law — Sebastian Mallaby (2022). The best account of the venture capital ecosystem that shaped Altman’s career.
  • Troublemakers — Leslie Berlin (2017). The history of Silicon Valley entrepreneurship; essential context for understanding the culture Altman embodies.