Lex Fridman interviews Ann Druyan: writer, producer, director, and one of the most important and impactful science communicators of our time.

She co-wrote the 1980 science documentary series Cosmos, hosted by Carl Sagan, whom she married in 1981. Her love for him, recorded with NASA's help as brain waves, was placed on a golden record alongside other offerings of our civilization and launched into space aboard the Voyager 1 and Voyager 2 spacecraft, which, 42 years later, remain active, reaching farther into deep space than any human-made object ever has. This profound and beautiful decision was hers to make as Creative Director of NASA's Voyager Interstellar Message Project. In 2014, she went on to create the second season of Cosmos, called Cosmos: A Spacetime Odyssey, and in 2020, the new third season, Cosmos: Possible Worlds, which is being released this upcoming Monday, March 9. It is hosted, once again, by the fun and brilliant Neil deGrasse Tyson. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:

  • 0:00 – Introduction
  • 3:24 – Role of science in society
  • 7:04 – Love and science
  • 9:07 – Skepticism in science
  • 14:15 – Voyager, Carl Sagan, and the Golden Record
  • 36:41 – Cosmos
  • 53:22 – Existential threats
  • 1:00:36 – Origin of life
  • 1:04:22 – Mortality

Lex Fridman interviews Alex Garland, writer and director of many imaginative and philosophical films from the dreamlike exploration of human self-destruction in the movie Annihilation to the deep questions of consciousness and intelligence raised in the movie Ex Machina.

OUTLINE:
0:00 – Introduction
3:42 – Are we living in a dream?
7:15 – Aliens
12:34 – Science fiction: imagination becoming reality
17:29 – Artificial intelligence
22:40 – The new “Devs” series and the veneer of virtue in Silicon Valley
31:50 – Ex Machina and 2001: A Space Odyssey
44:58 – Lone genius
49:34 – Drawing inspiration from Elon Musk
51:24 – Space travel
54:03 – Free will
57:35 – Devs and the poetry of science
1:06:38 – What will you be remembered for?

Lex Fridman interviews John Hopfield, a professor at Princeton, whose life's work has woven beautifully through biology, chemistry, neuroscience, and physics.

Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He is perhaps best known for his work on associative neural networks, now known as Hopfield networks, which were among the early ideas that catalyzed the development of modern neural network research.

OUTLINE:

  • 0:00 – Introduction
  • 2:35 – Difference between biological and artificial neural networks
  • 8:49 – Adaptation
  • 13:45 – Physics view of the mind
  • 23:03 – Hopfield networks and associative memory
  • 35:22 – Boltzmann machines
  • 37:29 – Learning
  • 39:53 – Consciousness
  • 48:45 – Attractor networks and dynamical systems
  • 53:14 – How do we build intelligent systems?
  • 57:11 – Deep thinking as the way to arrive at breakthroughs
  • 59:12 – Brain-computer interfaces
  • 1:06:10 – Mortality
  • 1:08:12 – Meaning of life

Lex Fridman interviews Marcus Hutter, a senior research scientist at DeepMind and professor at Australian National University.

Throughout his research career, including collaborations with Jürgen Schmidhuber and Shane Legg, he has proposed many interesting ideas in and around the field of artificial general intelligence, including the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
3:32 – Universe as a computer
5:48 – Occam’s razor
9:26 – Solomonoff induction
15:05 – Kolmogorov complexity
20:06 – Cellular automata
26:03 – What is intelligence?
35:26 – AIXI – Universal Artificial Intelligence
1:05:24 – Where do rewards come from?
1:12:14 – Reward function for human existence
1:13:32 – Bounded rationality
1:16:07 – Approximation in AIXI
1:18:01 – Gödel machines
1:21:51 – Consciousness
1:27:15 – AGI community
1:32:36 – Book recommendations
1:36:07 – Two moments to relive (past and future)

Lex Fridman interviews Michael Jordan – not that Michael Jordan.

Michael I. Jordan is a professor at Berkeley and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
3:02 – How far are we in development of AI?
8:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?

Lex Fridman lands an interview with the one and only Andrew Ng.

Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and the AI Fund, and was the Chief Scientist at Baidu. As a Stanford professor, and with Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:23 – First few steps in AI
5:05 – Early days of online education
16:07 – Teaching on a whiteboard
17:46 – Pieter Abbeel and early research at Stanford
23:17 – Early days of deep learning
32:55 – Quick preview: deeplearning.ai, landing.ai, and AI fund
33:23 – deeplearning.ai: how to get started in deep learning
45:55 – Unsupervised learning
49:40 – deeplearning.ai (continued)
56:12 – Career in deep learning
58:56 – Should you get a PhD?
1:03:28 – AI fund – building startups
1:11:14 – Landing.ai – growing AI efforts in established companies
1:20:44 – Artificial general intelligence

Lex Fridman interviews Scott Aaronson, a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT.

His research interests center around the capabilities and limits of quantum computers and computational complexity theory more generally. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
5:07 – Role of philosophy in science
29:27 – What is a quantum computer?
41:12 – Quantum decoherence (noise in quantum information)
49:22 – Quantum computer engineering challenges
51:00 – Moore’s Law
56:33 – Quantum supremacy
1:12:18 – Using quantum computers to break cryptography
1:17:11 – Practical application of quantum computers
1:22:18 – Quantum machine learning, questionable claims, and cautious optimism
1:30:53 – Meaning of life

Lex Fridman just uploaded the second part of his interview with Vladimir Vapnik.

Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T, NEC Labs, and Facebook AI Research; he is now a professor at Columbia University. His work has been cited over 200,000 times. This conversation is part of the Artificial Intelligence podcast.

Lex Fridman interviews Jim Keller as part of his AI Podcast series.

Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He's known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and the HyperTransport interconnect. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:12 – Difference between a computer and a human brain
3:43 – Computer abstraction layers and parallelism
17:53 – If you run a program multiple times, do you always get the same answer?
20:43 – Building computers and teams of people
22:41 – Start from scratch every 5 years
30:05 – Moore’s law is not dead
55:47 – Is superintelligence the next layer of abstraction?
1:00:02 – Is the universe a computer?
1:03:00 – Ray Kurzweil and exponential improvement in technology
1:04:33 – Elon Musk and Tesla Autopilot
1:20:51 – Lessons from working with Elon Musk
1:28:33 – Existential threats from AI
1:32:38 – Happiness and the meaning of life