Elon Musk
Co-founder of OpenAI, founder of xAI, and one of AI's most vocal — and contradictory — public figures. His complicated, outsized role in AI's trajectory.
The Contradictory Force
Born: 28 June 1971, Pretoria, South Africa
No figure in the history of artificial intelligence has more consistently acted against his stated convictions than Elon Musk. He co-founded OpenAI in 2015 to ensure that AI development would be safe and benefit humanity, left the board in 2018, and later sued the organisation he had helped create. He warned publicly and repeatedly that AI was humanity’s greatest existential risk, calling it potentially more dangerous than nuclear weapons, and then founded his own AI company, xAI, to compete directly with the organisations he had called dangerous. He signed an open letter in 2023 calling for a six-month pause on AI development beyond GPT-4 and then, while that letter circulated, accelerated the development of Grok, his own large language model.
This pattern of contradictions is not incidental to Musk’s story. It is the story. He is a man of enormous intelligence, enormous ambition, and enormous impatience with the constraints that his own positions logically imply, and the contradictions between what he says and what he does are a direct consequence of that combination. What makes him significant in AI history is not his consistency but his force — the way his interventions, regardless of their coherence, have shaped the field’s public profile, its institutional landscape, and the political environment in which it operates.
Pretoria, Canada, and Silicon Valley
Musk was born in Pretoria, South Africa, in 1971. His father was an engineer and his mother a model and dietitian. He grew up in South Africa during the apartheid era, an experience he has not engaged with in any detail in public, and taught himself programming from books. At twelve he sold a video game he had written for five hundred dollars. At seventeen he emigrated, first to Canada, then to the United States, partly to avoid mandatory military service in the South African army.
He attended Queen’s University in Canada and then the University of Pennsylvania, graduating with degrees in economics and physics. He enrolled in a PhD programme in energy physics at Stanford and dropped out after two days to co-found Zip2, a web software company, in 1995. The pattern — brief immersion in formal education followed by the conviction that the real work was elsewhere — recurred throughout his early career and became central to the self-mythology he has constructed.
Zip2 was sold to Compaq in 1999 for 307 million dollars. Musk used his proceeds to co-found X.com, an online payment company that merged with Confinity and became PayPal, which was sold to eBay in 2002 for 1.5 billion dollars. He used that capital to found SpaceX and to invest in Tesla, which he joined as chairman and eventually CEO. By 2015, when he co-founded OpenAI, he had already established himself as the most ambitious entrepreneur of his generation — a man operating simultaneously in space launch, electric vehicles, and energy storage, in each case pursuing timelines that mainstream analysts considered unrealistic.
The OpenAI Founding and Departure
Musk’s involvement in OpenAI began in 2015, at a dinner in Silicon Valley where he, Sam Altman, and several AI researchers discussed the need for a safety-focused counterweight to Google’s dominance in AI research. The concern was specific: Google’s acquisition of DeepMind in 2014, following its hiring of Geoffrey Hinton in 2013, had given a single commercial entity access to the best AI researchers in the world and the computational resources to pursue ambitious research without any external accountability.
The solution Musk and Altman proposed was a non-profit research organisation that would pursue AI safety as a central goal, publish its work openly, and not be driven by commercial objectives. Musk was among the founders who collectively pledged one billion dollars; accounts of his own contribution vary, with figures ranging from roughly 45 million to 100 million dollars. He served on the board and was, in the early years, a prominent public face for the organisation’s safety mission.
His departure from the board in 2018 was officially attributed to potential conflicts of interest arising from Tesla’s own AI work on Autopilot, and to disagreements with other board members about the direction of the organisation. The actual reasons were more complicated. By various accounts, Musk had attempted to take operational control of OpenAI, believing that the organisation needed more aggressive leadership to compete with Google, and the other founders and board members had declined to give it to him.
After his departure, Musk’s public comments about OpenAI became increasingly hostile. He criticised its decision to accept Microsoft’s investment and transition to a for-profit structure, arguing that this betrayed the original mission. In 2024 he filed a lawsuit against the organisation and its CEO Sam Altman, alleging that they had violated the terms of the founding agreement. The lawsuit was subsequently withdrawn and then refiled, and legal proceedings were ongoing as of early 2026.
The contradiction at the heart of Musk’s OpenAI story is difficult to resolve charitably: he created an organisation to prevent the concentration of AI development in commercial hands, left it in circumstances that are disputed, and then attacked it from outside while building his own AI company. Whether his criticisms of OpenAI’s commercialisation are principled objections or competitive grievances is a question each observer must answer for themselves.
The AI Warnings and Their Ambiguity
For a period stretching from approximately 2014 to 2023, Musk was the most prominent public voice warning about AI risk. His warnings were striking not because they were technically sophisticated — they generally were not — but because they came from a person of enormous public visibility who was simultaneously funding AI development.
At the Massachusetts Institute of Technology in 2014, he described AI as summoning a demon. At the National Governors Association in 2017, he called it the greatest risk to civilisation. At various other forums he invoked nuclear weapons, Terminator scenarios, and the possibility that AI would achieve goals misaligned with human values in ways that would make correction impossible. He donated to the Machine Intelligence Research Institute and the Future of Life Institute, organisations dedicated to technical AI safety research.
The warnings were taken seriously by many researchers, not because Musk was a technical expert in AI — he was not — but because his prominence meant that concerns that had previously circulated only within the AI safety community were suddenly appearing on the front pages of major newspapers. Whether this public attention was ultimately beneficial for the safety research agenda is debated. Some safety researchers believe Musk’s involvement raised the profile of genuine concerns. Others believe his apocalyptic framing made serious safety research easier to dismiss as science fiction.
The founding of xAI in 2023, and the development of Grok as a competitor to ChatGPT, significantly undermined the coherence of the warning position. Musk argued that his AI company would be safer than its competitors, more committed to free speech and less prone to ideologically motivated restrictions on output. The argument is not obviously wrong but it does not resolve the contradiction: if AI development is genuinely dangerous at the scale of GPT-4, developing a competitor to GPT-4 on safety grounds requires a more careful argument than Musk has publicly provided.
Tesla, Autopilot, and the Production Bet
Musk’s most direct engagement with applied AI has been through Tesla’s Autopilot and Full Self-Driving systems, which he has positioned as central to Tesla’s commercial future since at least 2016. Measured against reality, his predictions about the timeline to full autonomous driving have been among the most consistently inaccurate forecasts in recent technology history. He predicted “complete autonomy” in 2017, in 2019, in 2020, in 2021, and in 2022; as of 2026, Tesla’s autonomous driving capability, while significantly better than in 2016, remains not ready for fully unsupervised operation in all conditions.
The repeated missed predictions are not incidental to an otherwise sound technical programme. They reflect a pattern in Musk’s relationship with AI capability that goes beyond Tesla: a tendency to extrapolate the current rate of progress further and faster than the evidence supports, combined with a genuine belief in the extrapolation that is not straightforwardly dishonest but that has repeatedly misled investors, regulators, and the public about what to expect and when.
The Autopilot system itself is technically impressive and commercially significant. The data flywheel it creates — collecting enormous quantities of real-world driving data from the global Tesla fleet, using that data to improve the neural networks that handle autonomous driving tasks — is a genuine competitive advantage. Whether it is sufficient to achieve the full self-driving capability Musk has repeatedly promised is a question that remains open.
The Political Turn
Musk’s acquisition of Twitter in 2022, his subsequent transformation of it into X, and his increasingly visible alignment with right-wing political movements in the United States and internationally created a context in which his statements about AI became more difficult to evaluate in isolation from his other activities. Many observers noted the tension: the CEO of an AI company, one who had explicitly cited AI’s power to shape information environments as a reason for concern, had acquired a major social media platform.
His role in the Trump administration’s advisory structures in 2025, and the access to government decision-making it provided, gave his companies — including xAI — a proximity to policy that his competitors did not share. That proximity raised questions about competitive fairness going beyond the usual concerns about concentrated corporate power.
Musk’s full legacy in AI is not yet determinable. He is young enough, and active enough, that the most consequential chapters of his involvement may still be ahead. What is already clear is that his interventions in the AI field — the founding of OpenAI, the safety warnings, the founding of xAI, the political engagement — have collectively contributed to making AI one of the most contested technological and political questions of the early twenty-first century. Whether that contestation leads to better outcomes than the alternatives is a question that remains, at this moment, genuinely open.
Key Works & Further Reading
Primary sources:
- “An Open Letter: Pause Giant AI Experiments” — Future of Life Institute (2023). The letter Musk signed; essential for understanding the public debate about AI risk.
- xAI company documentation and Grok release materials (2023–present). The formal expression of his alternative to OpenAI.
- Tesla AI Day presentations (2021, 2022). The most detailed technical account of the Autopilot system he has built.
Recommended reading:
- Elon Musk — Walter Isaacson (2023). The most detailed biographical account; comprehensive if not always critically incisive.
- Power and Progress — Acemoglu and Johnson (2023). The most rigorous academic argument for why AI development concentrated in a small number of powerful actors is problematic; essential counterpoint to Musk’s vision.
- The Coming Wave — Mustafa Suleyman (2023). A perspective from a fellow AI company founder on the tension between capability development and safety.
- Superintelligence — Nick Bostrom (2014). The book that most influenced Musk’s public safety warnings; necessary for understanding the specific scenarios he was concerned about.