Lex Fridman interviews Nick Bostrom.

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.

Time Index:

  • 0:00 – Introduction
  • 2:48 – Simulation hypothesis and simulation argument
  • 12:17 – Technologically mature civilizations
  • 15:30 – Case 1: if something kills all possible civilizations
  • 19:08 – Case 2: if we lose interest in creating simulations
  • 22:03 – Consciousness
  • 26:27 – Immersive worlds
  • 28:50 – Experience machine
  • 41:10 – Intelligence and consciousness
  • 48:58 – Weighing probabilities of the simulation argument
  • 1:01:43 – Elaborating on Joe Rogan conversation
  • 1:05:53 – Doomsday argument and anthropic reasoning
  • 1:23:02 – Elon Musk
  • 1:25:26 – What’s outside the simulation?
  • 1:29:52 – Superintelligence
  • 1:47:27 – AGI utopia
  • 1:52:41 – Meaning of life