Nick Bostrom Portrait

FIG-0122. b. 1973 · Swedish-British

Niklas Boström

Philosophy · Existential Risk · Ethics of AI · Transhumanism

Key Works
  • Superintelligence: Paths, Dangers, Strategies
  • Anthropic Bias: Observation Selection Effects in Science and Philosophy
  • The Simulation Argument

Role in the Project

Bostrom is the philosopher who gave the transhumanist aspiration its most rigorous academic treatment and, in doing so, revealed its theological structure. His Superintelligence argument — that a machine intelligence surpassing human cognitive capacity by a sufficient margin would be, for all practical purposes, an omniscient and potentially omnipotent agent whose values would determine the fate of all life — is a description of a god. The project reads this as the latest chapter in the AI Genealogy: the dream of creating a superior mind, traceable from Iamblichus's animated statues through the Golem to cybernetics, arriving at its most explicit formulation in an Oxford seminar room.


Nick Bostrom

Dates: b. 1973 · Domain: Philosophy, Existential Risk, Ethics of AI

Biography

Nick Bostrom was born in 1973 in Helsingborg, Sweden. He studied philosophy, mathematics, and logic at Stockholm University, the London School of Economics, and King's College London before settling at Oxford, where in 2005 he founded the Future of Humanity Institute (FHI), the first major academic center dedicated to existential risk: the study of threats that could permanently curtail or destroy the future of human civilization.

Superintelligence: Paths, Dangers, Strategies (2014) is the text that put AI existential risk on the public agenda. The argument: once a machine intelligence exceeds human cognitive capacity, its ability to improve itself recursively (the "intelligence explosion") means it could rapidly surpass humanity by a margin so vast that the relationship between human and machine would resemble that between ant and human. The critical variable is not whether such an intelligence would be hostile. It is whether its values, whatever they are, would be compatible with human survival. Bostrom's conclusion: alignment is the central problem, and there is no reason to believe it will be solved before the intelligence explosion occurs.

The Simulation Argument (2003), Bostrom's other major contribution, asks a different but related question: if future civilizations will have the computational power to run detailed simulations of their ancestors, and if there is no compelling reason to believe they would refrain from running them, then the probability that we are living in a simulation rather than in base reality is non-trivial. The argument is structurally parallel to the Gnostic claim that the experienced world is not the ultimate reality.
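The core of the 2003 paper can be sketched as a single fraction (notation paraphrased from Bostrom's paper, not quoted verbatim): the proportion of all observers with human-type experiences who are simulated depends on just two quantities.

```latex
% f_P    : fraction of human-level civilizations that reach a posthuman stage
% \bar{N}: average number of ancestor-simulations run by a posthuman civilization
% f_sim  : fraction of observers with human-type experiences who live in simulations
f_{\mathrm{sim}} \;=\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
```

The trilemma follows: unless almost no civilization reaches a posthuman stage (f_P ≈ 0) or almost none runs ancestor-simulations (N̄ ≈ 0), f_sim is close to 1, and an observer who cannot tell which case obtains should assign non-trivial probability to being simulated.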

FHI closed in 2024 amid institutional disputes at Oxford, but the research program Bostrom inaugurated, the rigorous philosophical analysis of catastrophic and existential risks from advanced technology, now operates across multiple institutions and has shaped the global policy conversation about AI regulation.

Role in the Project

Bostrom belongs to the AI Genealogy track alongside Kurzweil, but occupies a different position. Where Kurzweil is the enthusiast (the Singularity as promise), Bostrom is the analyst (the Singularity as danger). Both share the foundational assumption that machine superintelligence is achievable. Read through the esoteric genealogy, the assumption looks different: the conviction that mind can be created, that intelligence is substrate-independent, and that the creation of superior intelligence is humanity's proper task has a 2,000-year history that neither Kurzweil nor Bostrom acknowledges. The Simulation Argument is the Gnostic demiurge in computational form. The alignment problem is the theurgist's question — how to ensure the animated intelligence serves its creator — restated in the vocabulary of decision theory.

Primary Sources

  • Nick Bostrom, Superintelligence (2014): The AI existential risk argument.
  • Nick Bostrom, "Are You Living in a Computer Simulation?" (Philosophical Quarterly, 2003): The Simulation Argument.