Lex Fridman interviews Chris Lattner, a world-class software and hardware engineer who has led projects at Apple, Tesla, Google, and SiFive.

OUTLINE:

  • 0:00 – Introduction
  • 2:25 – Working with Elon Musk, Steve Jobs, Jeff Dean
  • 7:55 – Why do programming languages matter?
  • 13:55 – Python vs Swift
  • 24:48 – Design decisions
  • 30:06 – Types
  • 33:54 – Programming languages are a bicycle for the mind
  • 36:26 – Picking what language to learn
  • 42:25 – Most beautiful feature of a programming language
  • 51:50 – Walrus operator
  • 1:01:16 – LLVM
  • 1:06:28 – MLIR compiler framework
  • 1:10:35 – SiFive semiconductor design
  • 1:23:09 – Moore’s Law
  • 1:26:22 – Parallelization
  • 1:30:50 – Swift concurrency manifesto
  • 1:41:39 – Running a neural network fast
  • 1:47:16 – Is the universe a quantum computer?
  • 1:52:57 – Effects of the pandemic on society
  • 2:10:09 – GPT-3
  • 2:14:28 – Software 2.0
  • 2:27:54 – Advice for young people
  • 2:32:37 – Meaning of life

Lex Fridman interviews Scott Aaronson, a quantum computer scientist.

Time index:

  • 0:00 – Introduction
  • 3:31 – Simulation
  • 8:22 – Theories of everything
  • 14:02 – Consciousness
  • 36:16 – Roger Penrose on consciousness
  • 46:28 – Turing test
  • 50:16 – GPT-3
  • 58:46 – Universality of computation
  • 1:05:17 – Complexity
  • 1:11:23 – P vs NP
  • 1:23:41 – Complexity of quantum computation
  • 1:35:48 – Pandemic
  • 1:49:33 – Love

Lex Fridman interviews François Chollet, an AI researcher at Google and creator of Keras, for a second time on his podcast.

OUTLINE:

  • 0:00 – Introduction
  • 5:04 – Early influence
  • 6:23 – Language
  • 12:50 – Thinking with mind maps
  • 23:42 – Definition of intelligence
  • 42:24 – GPT-3
  • 53:07 – Semantic web
  • 57:22 – Autonomous driving
  • 1:09:30 – Tests of intelligence
  • 1:13:59 – Tests of human intelligence
  • 1:27:18 – IQ tests
  • 1:35:59 – ARC Challenge
  • 1:59:11 – Generalization
  • 2:09:50 – Turing Test
  • 2:20:44 – Hutter prize
  • 2:27:44 – Meaning of life

Lex Fridman talks to Russ Tedrake in the latest episode of his AI podcast.

Russ Tedrake is a roboticist, professor at MIT, and vice president of robotics research at the Toyota Research Institute (TRI). He works on control of robots in interesting, complicated, underactuated, stochastic, difficult-to-model situations. This conversation is part of the Artificial Intelligence podcast.

Outline:

  • 0:00 – Introduction
  • 4:29 – Passive dynamic walking
  • 9:40 – Animal movement
  • 13:34 – Control vs Dynamics
  • 15:49 – Bipedal walking
  • 20:56 – Running barefoot
  • 33:01 – Think rigorously with machine learning
  • 44:05 – DARPA Robotics Challenge
  • 1:07:14 – When will a robot become UFC champion?
  • 1:18:32 – Black Mirror Robot Dog
  • 1:34:01 – Robot control
  • 1:47:00 – Simulating robots
  • 2:00:33 – Home robotics
  • 2:03:40 – Soft robotics
  • 2:07:25 – Underactuated robotics
  • 2:20:42 – Touch
  • 2:28:55 – Book recommendations
  • 2:40:08 – Advice to young people
  • 2:44:20 – Meaning of life

Lex Fridman interviews Manolis Kellis, a professor at MIT and head of the MIT Computational Biology Group.

He is interested in understanding the human genome from computational, evolutionary, biological, and other cross-disciplinary perspectives.

This conversation is part of the Artificial Intelligence podcast.

Content outline:

  • 0:00 – Introduction
  • 3:54 – Human genome
  • 17:47 – Sources of knowledge
  • 29:15 – Free will
  • 33:26 – Simulation
  • 35:17 – Biological and computing
  • 50:10 – Genome-wide evolutionary signatures
  • 56:54 – Evolution of COVID-19
  • 1:02:59 – Are viruses intelligent?
  • 1:12:08 – Humans vs viruses
  • 1:19:39 – Engineered pandemics
  • 1:23:23 – Immune system
  • 1:33:22 – Placebo effect
  • 1:35:39 – Human genome source code
  • 1:44:40 – Mutation
  • 1:51:46 – Deep learning
  • 1:58:08 – Neuralink
  • 2:07:07 – Language
  • 2:15:19 – Meaning of life

Lex Fridman interviews Jitendra Malik, a professor at Berkeley and one of the seminal figures in the field of computer vision, both the kind before the deep learning revolution and the kind after.

He has been cited over 180,000 times and has mentored many world-class researchers in computer science. This conversation is part of the Artificial Intelligence podcast.

Content index:

  • 0:00 – Introduction
  • 3:17 – Computer vision is hard
  • 10:05 – Tesla Autopilot
  • 21:20 – Human brain vs computers
  • 23:14 – The general problem of computer vision
  • 29:09 – Images vs video in computer vision
  • 37:47 – Benchmarks in computer vision
  • 40:06 – Active learning
  • 45:34 – From pixels to semantics
  • 52:47 – Semantic segmentation
  • 57:05 – The three R’s of computer vision
  • 1:02:52 – End-to-end learning in computer vision
  • 1:04:24 – 6 lessons we can learn from children
  • 1:08:36 – Vision and language
  • 1:12:30 – Turing test
  • 1:16:17 – Open problems in computer vision
  • 1:24:49 – AGI
  • 1:35:47 – Pick the right problem

Lex Fridman interviews Brian Kernighan in the latest episode of his podcast.

Brian Kernighan is a professor of computer science at Princeton University. He co-authored The C Programming Language with Dennis Ritchie (the creator of C) and has written many books on programming, computers, and life, including The Practice of Programming, The Go Programming Language, and, most recently, UNIX: A History and a Memoir. He co-created AWK, the text-processing language used by Linux folks like myself, and co-designed AMPL, an algebraic modeling language for large-scale optimization. This conversation is part of the Artificial Intelligence podcast.

Outline:

  • 0:00 – Introduction
  • 4:24 – UNIX early days
  • 22:09 – Unix philosophy
  • 31:54 – Is programming art or science?
  • 35:18 – AWK
  • 42:03 – Programming setup
  • 46:39 – History of programming languages
  • 52:48 – C programming language
  • 58:44 – Go language
  • 1:01:57 – Learning new programming languages
  • 1:04:57 – Javascript
  • 1:08:16 – Variety of programming languages
  • 1:10:30 – AMPL
  • 1:18:01 – Graph theory
  • 1:22:20 – AI in 1964
  • 1:27:50 – Future of AI
  • 1:29:47 – Moore’s law
  • 1:32:54 – Computers in our world
  • 1:40:37 – Life

Lex Fridman interviews Sergey Levine in episode 108 of his podcast.

Sergey Levine is a professor at Berkeley and a world-class researcher in deep learning, reinforcement learning, robotics, and computer vision, including the development of algorithms for end-to-end training of neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, and deep RL algorithms. This conversation is part of the Artificial Intelligence podcast.

Episode outline:

  • 0:00 – Introduction
  • 3:05 – State-of-the-art robots vs humans
  • 16:13 – Robotics may help us understand intelligence
  • 22:49 – End-to-end learning in robotics
  • 27:01 – Canonical problem in robotics
  • 31:44 – Commonsense reasoning in robotics
  • 34:41 – Can we solve robotics through learning?
  • 44:55 – What is reinforcement learning?
  • 1:06:36 – Tesla Autopilot
  • 1:08:15 – Simulation in reinforcement learning
  • 1:13:46 – Can we learn gravity from data?
  • 1:16:03 – Self-play
  • 1:17:39 – Reward functions
  • 1:27:01 – Bitter lesson by Rich Sutton
  • 1:32:13 – Advice for students interested in AI
  • 1:33:55 – Meaning of life