Can We Reshape Humanity’s Deep Future?

Possibilities & Risks of Artificial Intelligence (AI), Human Enhancement, and Other Emerging Technologies


WHERE: The James A. Little Theater at the New Mexico School for the Deaf.
WHEN: Sunday, June 7, 2015, 2:00 pm
TICKETS: Book your seats now | More info.


Dr. Nick Bostrom spends much of his time calculating the possible rewards and dangers of rapid technological advances — how such advances will likely alter the course of human evolution and life as we know it. One useful concept in untangling this puzzle is existential risk — the question of whether an adverse outcome would end human intelligent life or drastically curtail what we, in the infancy of the twenty-first century, would consider a viable future. Figuring out how to reduce existential risk even slightly brings into play an array of thought-provoking issues. In this engaging lecture, Professor Bostrom will present the factors to be taken into consideration:

  • Future technology and its capabilities
  • Anthropics
  • Population ethics
  • Human enhancement ethics
  • Game theory
  • Fermi paradox

About Nick Bostrom

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center that enables a few exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity.

He is the recipient of a Eugene R. Gannon Award and has been listed on Foreign Policy’s Top 100 Global Thinkers list. He was included on Prospect magazine’s World Thinkers list, the youngest person in the top fifteen from all fields and the highest-ranked analytic philosopher. His writings have been translated into twenty-four languages.

Bostrom’s background includes physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller. He is best known for his work in five areas: existential risk; the simulation argument; anthropics; impacts of future technology; and implications of consequentialism for global strategy. He has been referred to as one of the most important thinkers of our age.


SAR thanks these sponsors for underwriting this lecture:


Slate, Sept. 2014:

You Should Be Terrified of Superintelligent Machines

In the recent discussion over the risks of developing superintelligent machines—that is, machines with general intelligence greater than that of humans—two narratives have emerged. One side argues that if a machine ever achieved advanced intelligence, it would automatically know and care about human values and wouldn’t pose a threat to us. The opposing side argues that artificial intelligence would “want” to wipe humans out, either out of revenge or an intrinsic desire for survival. 

As it turns out, both of these views are wrong. 

Read more >

Aeon Magazine, Feb. 2013:

Omens

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

Read more >

TEDx/YouTube, Apr. 2015:

TEDx Talks: What happens when our computers get smarter than we are?

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. Nick Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?