Nick Bostrom
Dates: b. 1973
Domain: Philosophy, Existential Risk, Ethics of AI
Biography
Nick Bostrom was born in 1973 in Helsingborg, Sweden. He studied philosophy, mathematics, and logic at Stockholm University, the London School of Economics, and King's College London before settling at Oxford, where he founded the Future of Humanity Institute (FHI) in 2005. FHI was the first major academic center dedicated to the study of existential risk: threats that could permanently curtail or destroy the future of human civilization.
Superintelligence: Paths, Dangers, Strategies (2014) is the text that put AI existential risk on the public agenda. The argument: once a machine intelligence reaches roughly human cognitive capacity, its ability to improve itself recursively (the "intelligence explosion") means it could rapidly surpass humanity by a margin so vast that the relationship between human and machine would resemble the relationship between ant and human. The critical variable is not whether such an intelligence would be hostile but whether its values, whatever they are, would be compatible with human survival. Bostrom's conclusion: the alignment problem is the central problem, and there is no reason to believe it will be solved before the intelligence explosion occurs.
The Simulation Argument (2003), Bostrom's other major contribution, asks a different but related question. If future civilizations will have the computational power to run detailed simulations of their ancestors, then at least one of three propositions must be true: almost all civilizations at our stage go extinct before reaching technological maturity; almost no mature civilizations choose to run ancestor-simulations; or we are almost certainly living in a simulation. Unless there is a compelling reason to accept one of the first two, the probability that we are living in a simulation rather than in base reality is non-trivial. The argument is formally identical to the Gnostic claim that the experienced world is not the ultimate reality.
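The quantitative core of the 2003 paper can be sketched as follows. This is a compressed restatement under the paper's own simplifying assumptions, not the full derivation; the symbols follow Bostrom's usage:

```latex
% f_P : fraction of human-level civilizations that survive to a posthuman stage
% N   : average number of ancestor-simulations run by a posthuman civilization
% H   : average number of individuals who lived before a civilization reaches that stage
%
% Fraction of all observers with human-type experiences who are simulated:
f_{\mathrm{sim}}
  = \frac{f_P \, \bar{N} \, H}{\bigl(f_P \, \bar{N} \, H\bigr) + H}
  = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
% If the product f_P * N is large, f_sim approaches 1. The trilemma follows
% from asking which factor, if any, is close to zero: f_P (extinction),
% N (disinterest in simulation), or neither (we are probably simulated).
```

Note that H cancels: the conclusion depends only on whether posthuman civilizations exist and whether they simulate, not on population sizes.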
FHI closed in 2024 amid institutional disputes at Oxford, but the research program Bostrom inaugurated, the rigorous philosophical analysis of catastrophic and existential risks from advanced technology, now operates across multiple institutions and has shaped the global policy conversation about AI regulation.
Role in the Project
Bostrom belongs to the AI Genealogy track alongside Kurzweil, but occupies a different position. Where Kurzweil is the enthusiast (the Singularity as promise), Bostrom is the analyst (the Singularity as danger). Both share the foundational assumption that machine superintelligence is achievable. Read through the esoteric genealogy, the assumption looks different: the conviction that mind can be created, that intelligence is substrate-independent, and that the creation of superior intelligence is humanity's proper task has a 2,000-year history that neither Kurzweil nor Bostrom acknowledges. The Simulation Argument is the Gnostic demiurge in computational form. The alignment problem is the theurgist's question — how to ensure the animated intelligence serves its creator — restated in the vocabulary of decision theory.
Primary Sources
- Nick Bostrom, Superintelligence (2014): The AI existential risk argument.
- Nick Bostrom, "Are You Living in a Computer Simulation?" (Philosophical Quarterly, 2003): The Simulation Argument.
