
[Featured image courtesy of Mike MacKenzie, https://www.vpnsrus.com/, and Wikimedia Commons]

Various SAR members have suggested that SAR bring an expert to Santa Fe to talk about the possibilities and risks of deploying artificial intelligence (AI). We’re working on that, but we also wish to note that SAR had the foresight to bring a major figure in the AI debate, Oxford professor Nick Bostrom, to Santa Fe eight years ago.

Below is a reprise of our original 2015 blog post about Bostrom’s lecture and his book that warned of AI’s dangers, Superintelligence: Paths, Dangers, Strategies.


REFLECTIONS ON NICK BOSTROM’S LECTURE, “CAN WE RESHAPE HUMANITY’S DEEP FUTURE?” 7 JUNE 2015

Nick Bostrom speaking to SAR members at the James Little Auditorium, June 2015

As part of a series of occasional lectures that we’re calling Dispatches from the Edge, on June 7 the School for Advanced Research sponsored a public lecture by Professor Nick Bostrom (Future of Humanity Institute, University of Oxford, UK), “Can We Reshape Humanity’s Deep Future? Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies.” Bostrom’s talk was a snapshot of his research on existential risk: large-scale events or processes that could lead either to the complete extinction of humanity or to some form of “permanent stagnation.”

Bostrom opened his lecture with a thumbnail history of our species: our emergence as bipedal primates living in small, mobile groups of foragers; the role of the agricultural revolution in supporting larger populations and fostering the emergence of social hierarchy; the transition, beginning roughly 250 years ago, to industrial economies and their acceleration of technological innovation; and finally, the digital revolution, which, along with the rise of new genetic technologies, makes possible (and, in Bostrom’s view, inevitable) the emergence of “superintelligence,” cognitive assets that surpass those of contemporary human beings.

Although Bostrom couldn’t rule out the possibility that existential risks can arise from natural phenomena such as supervolcanoes or asteroid collisions, he argued that, given the absence of near-extinction events during the last 100,000 years, the odds of such natural catastrophes presenting a significant existential risk are low. Far more salient, he argued, is anthropogenic risk: the possibility that our own technological activities will prove uncontrollable and ultimately lethal to humankind.

Superintelligence could conceivably emerge in human form through systematic use of enhancement technologies that would increase human IQ to levels significantly in excess of current norms.  But Bostrom leans toward machine AI as the more likely site of superintelligence, perhaps emerging as early as 2050. In this scenario, AI agents approaching human cognitive levels launch a self-perpetuating process that would quickly bring them to a point at which they could assert their own survival priorities over those of their human creators.  As the situation was described by Elon Musk in a Washington Post interview, “If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way . . . it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers . . . .”

Can we do anything to stop this? Bostrom’s view is that even modest attention to strategies for preventing or delaying this scenario could have a significant beneficial impact. By reallocating resources from technologies that increase risk toward efforts to control potentially rogue superintelligence, such as algorithms designed to ensure ethical behavior favorable to humankind, the most extreme danger might be averted. One especially amusing slide presented by Bostrom (see below) was a graph showing the relative frequency of published studies of human extinction compared to three other topics.

In response to questions from the audience, Bostrom expressed doubt about prospects for imposing outright prohibitions on certain kinds of AI work perceived as dangerous. He seemed to lean more toward incremental strategies that would buy humanity time to find ways of mitigating risk, so that we would be better prepared for rogue AI when and if it appears. If his lecture was short on concrete solutions, it did make a convincing case for greater attention to the dangers of technologies once praised as utopian but which we must increasingly see as fostering risks whose magnitude we are only now beginning to imagine.

The lecture was followed by a reception on the SAR campus.

One of the issues that arose in discussion with attendees was whether this event represented a new departure for SAR, which is principally known for its major contributions to anthropology, archaeology, and Native American art. My response was that our commitment to the areas of our greatest strength remains undiminished, but that SAR also wants to build on its tradition of contributing to big-picture debates about human futures, social justice, and the expanding frontiers of knowledge.

Nick Bostrom’s 2015 lecture is available on SAR’s YouTube channel.


This event was made possible by the generous support of our underwriters, the Vera R. Campbell Foundation, Susan L. Foote, Merrilee Caldwell, and Marcus P. Randolph.