Philosopher Nick Bostrom, best known for his 2003 paper "Are You Living in a Computer Simulation?", has shifted focus from the simulation hypothesis to the existential risks posed by artificial intelligence. His latest working paper concedes that developing advanced AI could result in the extinction of humanity, yet contends that the potential rewards of superintelligence justify the risk.
From Simulation Hypothesis to AI Doomerism
In 2003, while at Oxford, Bostrom published his influential paper proposing that advanced civilizations would likely create simulations of their ancestors. Given enough time, those simulations would spawn their own layers of simulated realities, making it statistically improbable that humans inhabit the "base" reality. The idea sparked widespread debate: figures like Elon Musk endorsed it, while others dismissed it as speculative.
Bostrom later turned his attention to AI, initially sounding alarms about its risks. In 2019, he warned that AI posed a greater threat to humanity than climate change. However, his recent work adopts a more nuanced stance, framing AI as a double-edged sword with profound potential upsides.
"Fretful Optimist" on AI’s Existential Risks
In an interview with Wired’s Steven Levy, Bostrom described himself as a "fretful optimist," balancing excitement about AI’s transformative potential against concern over catastrophic outcomes:
"I am very excited about the potential for radically improving human life and unlocking possibilities for our civilization. That’s consistent with the real possibility of things going wrong."
Bostrom criticized doomers, including public intellectual Eliezer Yudkowsky, for framing AI development as an existential threat while ignoring the cost of the alternative. Refusing to build AI, he argued, carries its own risk of stagnation and decline over the coming millennia.
"I guess I’ve been irked by some of the arguments made by doomers who say that if you build AI, you’re going to kill me and my children and how dare you. Like the recent book ‘If Anyone Builds It, Everyone Dies.’ Even more probable is that if nobody builds it, everyone dies! That’s been the experience for the last several 100,000 years."
Levy countered that the two outcomes differ in kind: in a doomer scenario humanity is wiped out entirely, whereas forgoing AI leaves the species intact and able to progress gradually. Bostrom responded by reframing the question around the people alive today:
"I have obviously been very concerned with that. But in this paper, I’m looking at a different question, which is, what would be best for the currently existing human population like you and me and our families and the people in Bangladesh? It does seem like our life expectancy would go up if we develop AI, even if it is quite risky."
Controversial Stance on AI’s Future
Bostrom’s latest paper challenges conventional wisdom by suggesting that the benefits of AI—such as extended life expectancy and societal advancement—may outweigh the risks of extinction. His arguments continue to provoke debate among technologists, philosophers, and policymakers.