AI History Profiles: The Builders

Larry Page & Sergey Brin

How the two Stanford PhD students who built a search engine quietly turned Google into one of the most powerful AI research organisations on the planet.


The Accidental Architects

Larry Page: Born 26 March 1973, East Lansing, Michigan, USA
Sergey Brin: Born 21 August 1973, Moscow, Russia


They did not set out to build an artificial intelligence company. They set out to organise the world’s information, and the two ambitions turned out, over three decades, to be the same thing. Larry Page and Sergey Brin arrived at Stanford in the mid-1990s as PhD students in computer science with no particular intention of founding a company, and left — without completing their degrees — with a search engine that would become, in the space of a decade, the most important information retrieval system in human history, and in the following decade, one of the most important AI research organisations ever assembled.

The story of Google’s relationship to AI is the story of two separate trajectories that converged. In the first trajectory, Google used machine learning incrementally to improve its core products — spam filtering, ad ranking, search result quality. These applications were important commercially but not transformative scientifically. In the second trajectory, beginning around 2011, Google began investing in fundamental AI research at a scale that no university and few other companies could match: recruiting researchers of the calibre of Geoffrey Hinton, redirecting veteran engineers such as Jeff Dean toward deep learning, and acquiring the entire DeepMind team. The result was a body of work — Google Brain, TensorFlow, the transformer architecture, BERT, Gemini — that defined what large-scale AI research looked like.

Page and Brin did not personally design those systems. But they created the organisation, the culture, and, crucially, the financial resources that made them possible. The connection between the PageRank algorithm of 1998 and the Gemini models of 2024 is not accidental. It runs through two founders who were always, at bottom, more interested in intelligence — in the problem of understanding and organising information — than in any particular technology for achieving it.


Page and Brin: Two Minds, One Mission

Larry Page was born in East Lansing, Michigan, in 1973. His father was a professor of computer science at Michigan State University, and Page grew up in a household where computing was a normal part of life, where Lego sets and science fiction novels and Popular Science magazines were standard household objects. He attended the University of Michigan for his undergraduate degree in engineering, graduating in 1995, and arrived at Stanford for his doctorate that year.

Sergey Brin was born in Moscow in 1973, the son of a mathematician father and a NASA scientist mother, in a Jewish family that had experienced the antisemitism of Soviet academic life. The family emigrated to the United States in 1979, when Brin was six, settling in Maryland. His father became a mathematics professor at the University of Maryland, and Brin attended that university for his undergraduate degree, completing it at nineteen and arriving at Stanford for his PhD in 1993.

They met at Stanford in 1995, when Brin, already two years into his doctorate, was showing a group of prospective students, Page among them, around campus. The pairing was, according to later accounts, immediately combative — they disagreed about almost everything they discussed. This productive friction became the characteristic mode of their collaboration. Page was more focused on execution, more willing to push ideas to implementation before they were fully theorised. Brin was more mathematical, more interested in the formal properties of algorithms, more comfortable with abstraction. Together they formed a combination that was, in the specific domain of information retrieval, more capable than either would have been alone.


PageRank and the Birth of Google

Page’s initial doctoral project was not search. It was an analysis of the link structure of the World Wide Web, conceived as a mathematical object. His supervisor suggested studying the web as a graph — pages as nodes, links as edges — and Page became interested in the question of which pages were most important, where importance was defined recursively: an important page was one that was linked to by many important pages.

The algorithm he and Brin developed to operationalise this recursive definition was PageRank, named partly after the web pages it ranked and partly, with characteristic immodesty, after Page himself. It treated the link structure of the web as a voting system — each link from page A to page B was a vote for B’s importance, with the weight of the vote proportional to A’s own importance — and computed the resulting importance scores through iterative multiplication of a matrix representing the link graph.
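The computation is simple enough to sketch. Below is a minimal power-iteration version of the idea, not the production algorithm Google ran: the four-page link matrix is invented for illustration, the damping factor of 0.85 follows the value suggested in the original paper, and the handling of pages with no outgoing links is deliberately crude.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
    """Power-iteration PageRank. adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    # Each page splits its vote evenly among the pages it links to,
    # giving a column-stochastic transition matrix M.
    out_degree = adj.sum(axis=1)
    out_degree[out_degree == 0] = 1.0   # crude handling of pages with no outlinks
    M = (adj / out_degree[:, None]).T

    rank = np.full(n, 1.0 / n)          # start from uniform scores
    for _ in range(max_iter):
        # A page's score is a damped sum of the scores flowing into it:
        # PR(p) = (1 - d)/n + d * sum over q linking to p of PR(q)/outdeg(q)
        new_rank = (1 - damping) / n + damping * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Toy web of four pages: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}, 3 -> {2}
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))  # page 2, the most linked-to page, scores highest
```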

The algorithm was not, in itself, a machine learning system. It was a graph analysis algorithm with a particular, elegant mathematical structure. But the approach it embodied — using the aggregate behaviour of large numbers of agents to infer something about the quality and relevance of information — was deeply continuous with the machine learning approaches that would eventually define Google’s AI work. The idea that intelligence could be extracted from data, rather than encoded by hand, was present in PageRank and remained central to everything Google built afterward.

Google the company was incorporated in 1998, in a garage in Menlo Park. The search engine it operated was, from the beginning, notably better than its competitors — AltaVista, Lycos, Yahoo — in a way that users perceived immediately. Within two years it was the dominant search engine in the English-speaking world. The advertising model that made it commercially successful was developed somewhat separately, by the business team Page and Brin assembled around them, but it was the quality of the search results that drove the user growth that made the advertising valuable.


Building the AI Infrastructure

Google’s internal AI infrastructure developed through a series of decisions that were, individually, unremarkable, but that compounded over two decades into something unprecedented: the hiring of engineering talent on an enormous scale; the investment in custom hardware, notably the Tensor Processing Units that Google developed specifically for neural network computation; the construction of data centres at a scale no academic institution could approach; and the collection, through its products, of training data of a quantity and diversity no competitor could replicate. All of these decisions were made for business reasons, but they produced, as a side effect, the infrastructure of a world-class AI research organisation.

The acquisition of DeepMind in 2014 was the most visible expression of Page’s personal commitment to AI research. He negotiated the acquisition personally, over Brin’s initial scepticism, paying a reported £400 million for an organisation that had no revenue and only a small team. His reasoning, by various accounts, was that DeepMind was pursuing artificial general intelligence with a seriousness and a scientific rigour that he found compelling, and that its work would be important regardless of whether it produced commercial products on any particular timescale.

Google Brain, established in 2011 as an internal effort led by Andrew Ng, Jeff Dean, and Greg Corrado, became the other pole of Google’s AI research. The collaboration between Brain and DeepMind was not always easy — there were tensions about research culture, about publication practices, about the relationship between fundamental research and product development — but it produced, across both organisations, a body of work that included the transformer architecture, BERT, AlphaGo, AlphaFold, and the systems that became Gemini.


The Step Back and What It Meant

In 2019, Page and Brin stepped back from their operational roles at Alphabet, the holding company they had restructured Google into in 2015. Page resigned as Alphabet CEO; Brin resigned as president. Sundar Pichai became CEO of both Google and Alphabet. The founders remained major shareholders and board members, but their day-to-day involvement in the company ended.

The timing was not coincidental. Google was facing intensifying scrutiny over its market power, its data practices, and its content moderation decisions. The regulatory environment in Europe and the United States was becoming more hostile. And the AI capabilities that Google had spent a decade building were beginning to raise questions that went beyond commercial strategy — questions about the appropriate use of systems that could generate human-like text, about the social effects of algorithmic information ranking at global scale, about the responsibility of an organisation that effectively mediated the information environment of billions of people.

Page, by various accounts, had become increasingly uncomfortable with this scrutiny and with the constraints it imposed on the kind of bold, long-horizon bets that he found most interesting. He was more drawn to moonshots — self-driving cars, life extension, stratospheric internet delivery — than to the incremental optimisation and regulatory compliance that had come to dominate Google’s daily operations.

Brin has been somewhat more publicly engaged in the period since their departure, returning to work at Google in 2023 as concerns about competition from OpenAI made the company’s AI strategy suddenly urgent. Page has remained largely invisible.


The Legacy of the Infrastructure

Page and Brin’s contribution to AI history is primarily infrastructural. They did not develop the key algorithmic insights — those came from Hinton and LeCun and Bengio and Vaswani and the research teams at Brain and DeepMind. What they built was the organisation, the data, the compute, and the culture that allowed those insights to be pursued and applied at a scale that no other institution could match.

The transformer architecture, which underlies essentially every large language model in use today, was developed at Google Brain. TensorFlow, the machine learning framework that made large-scale neural network training accessible, was developed at Google and released as open source. The TPU, which provides more efficient neural network computation than GPUs for certain workloads, was developed at Google. The data pipeline practices, the distributed training methods, the hardware design tools — all of these were developed at Google and, through publication and open-source release, diffused through the entire AI research community.
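A sense of scale for that accessibility claim: by the mid-2010s, defining and training a small neural network in TensorFlow took only a few lines. The sketch below uses the Keras API bundled with TensorFlow and invented toy data; it illustrates the style of the framework rather than any actual Google workload.

```python
import numpy as np
import tensorflow as tf

# Invented toy problem: classify 20-dimensional points by an arbitrary rule.
x = np.random.rand(256, 20).astype("float32")
y = (x.sum(axis=1) > 10.0).astype("int32")

# A small two-layer classifier, defined declaratively.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A single call handles batching, gradient computation, and optimisation.
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```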

This infrastructure legacy is, in a meaningful sense, more important than any individual product. The AI systems that other companies — OpenAI, Anthropic, Meta, Mistral — build today rest on foundations that Page and Brin, not always intentionally, constructed.


Key Works & Further Reading

Primary sources:

  • “The Anatomy of a Large-Scale Hypertextual Web Search Engine” — Brin and Page (1998). The original Google paper.
  • “Attention Is All You Need” — Vaswani et al., Google Brain (2017). The transformer paper; the most consequential technical work to emerge from the infrastructure Page and Brin built.
  • “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” — Devlin et al., Google (2018). The paper that established transformer models as the basis for language understanding.

Recommended reading:

  • In the Plex — Steven Levy (2011). The most comprehensive account of Google’s internal culture and development through its first decade.
  • The Age of Surveillance Capitalism — Shoshana Zuboff (2019). The most sustained critical account of the data practices that underlie Google’s commercial model and AI infrastructure.
  • Weapons of Math Destruction — Cathy O’Neil (2016). The consequences of algorithmic systems at scale; essential for understanding what Google’s AI infrastructure enables.
  • How Google Works — Schmidt and Rosenberg (2014). An internal perspective on the management philosophy that Page and Brin instilled.