Lex Fridman shared this lecture by Andrew Trask in January 2020, part of the MIT Deep Learning Lecture Series.

OUTLINE:

0:00 – Introduction
0:54 – Privacy preserving AI talk overview
1:28 – Key question: Is it possible to answer questions using data we cannot see?
5:56 – Tool 1: remote execution
8:44 – Tool 2: search and example data
11:35 – Tool 3: differential privacy
28:09 – Tool 4: secure multi-party computation
36:37 – Federated learning
39:55 – AI, privacy, and society
46:23 – Open data for science
50:35 – Single-use accountability
54:29 – End-to-end encrypted services
59:51 – Q&A: privacy of the diagnosis
1:02:49 – Q&A: removing bias from data when data is encrypted
1:03:40 – Q&A: regulation of privacy
1:04:27 – Q&A: OpenMined
1:06:16 – Q&A: encryption and nonlinear functions
1:07:53 – Q&A: path to adoption of privacy-preserving technology
1:11:44 – Q&A: recommendation systems
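The third tool in the outline, differential privacy, works by adding calibrated random noise to a query's answer so that any single person's data barely changes the published result. A minimal sketch of that idea, not taken from the lecture (the function name, the positive-count query, and the epsilon value are illustrative), using the Laplace mechanism:

```python
import random

def private_count(values, epsilon=0.5):
    """Differentially private count of positive entries (Laplace mechanism)."""
    # True answer. A count query has sensitivity 1: adding or removing
    # one person's record changes the count by at most 1.
    true_count = sum(1 for v in values if v > 0)
    # Laplace(0, 1/epsilon) noise, sampled as the difference of two
    # exponential draws with rate epsilon. Smaller epsilon = more noise
    # = stronger privacy.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Each individual query is noisy, but the noise has mean zero, so aggregate statistics remain useful while any one record stays hidden.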

Lex Fridman interviews Daniel Kahneman in this thought-provoking conversation.

Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow,” which summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life

In this clip from the opening lecture of the MIT Deep Learning Lecture Series, Lex Fridman explains that the best way to understand the mind is to build it.

Full video: https://www.youtube.com/watch?v=0VH1Lim8gL8

Website: https://deeplearning.mit.edu


Lex Fridman interviews Grant Sanderson, a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.

OUTLINE:

0:00 – Introduction
1:56 – What kind of math would aliens have?
3:48 – Euler’s identity and the least favorite piece of notation
10:31 – Is math discovered or invented?
14:30 – Difference between physics and math
17:24 – Why is reality compressible into simple equations?
21:44 – Are we living in a simulation?
26:27 – Infinity and abstractions
35:48 – Most beautiful idea in mathematics
41:32 – Favorite video to create
45:04 – Video creation process
50:04 – Euler identity
51:47 – Mortality and meaning
55:16 – How do you know when a video is done?
56:18 – What is the best way to learn math for beginners?
59:17 – Happy moment

Donald Knuth is one of the greatest and most impactful computer scientists and mathematicians ever. He received the 1974 Turing Award, considered the Nobel Prize of computing.

He is the author of the multi-volume magnum opus The Art of Computer Programming. He made several key contributions to the rigorous analysis of the computational complexity of algorithms, and he popularized asymptotic notation, which we all affectionately know as big-O notation.

He also created the TeX typesetting system, which most computer scientists, physicists, mathematicians, and engineers use to write technical papers and make them look beautiful.

Lex Fridman interviews him in this video.

EPISODE LINKS:
The Art of Computer Programming (book): https://amzn.to/39kxRwB

OUTLINE:
0:00 – Introduction
3:45 – IBM 650
7:51 – Geeks
12:29 – Alan Turing
14:26 – My life is a convex combination of English and mathematics
24:00 – Japanese arrow puzzle example
25:42 – Neural networks and machine learning
27:59 – The Art of Computer Programming
36:49 – Combinatorics
39:16 – Writing process
42:10 – Are some days harder than others?
48:36 – What’s the “Art” in The Art of Computer Programming?
50:21 – Binary (boolean) decision diagram
55:06 – Big-O notation
58:02 – P=NP
1:10:05 – Artificial intelligence
1:13:26 – Ant colonies and human cognition
1:17:11 – God and the Bible
1:24:28 – Reflection on life
1:28:25 – Facing mortality
1:33:40 – TeX and beautiful typography
1:39:23 – How much of the world do we understand?
1:44:17 – Question for God

Lex Fridman interviews Melanie Mitchell in the latest episode of his Artificial Intelligence podcast.

Melanie Mitchell is a professor of computer science at Portland State University and an external professor at Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture which places the process of analogy making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed a lot of important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans. This conversation is part of the Artificial Intelligence podcast.

EPISODE LINKS:
AI: A Guide for Thinking Humans (book) – https://amzn.to/2Q80LbP
Melanie Twitter: https://twitter.com/MelMitchell1

OUTLINE:
0:00 – Introduction
2:33 – The term “artificial intelligence”
6:30 – Line between weak and strong AI
12:46 – Why have people dreamed of creating AI?
15:24 – Complex systems and intelligence
18:38 – Why are we bad at predicting the future with regard to AI?
22:05 – Are fundamental breakthroughs in AI needed?
25:13 – Different AI communities
31:28 – Copycat cognitive architecture
36:51 – Concepts and analogies
55:33 – Deep learning and the formation of concepts
1:09:07 – Autonomous vehicles
1:20:21 – Embodied AI and emotion
1:25:01 – Fear of superintelligent AI
1:36:14 – Good test for intelligence
1:38:09 – What is complexity?
1:43:09 – Santa Fe Institute
1:47:34 – Douglas Hofstadter
1:49:42 – Proudest moment

Lex Fridman interviews Jim Gates, a theoretical physicist and professor at Brown University working on supersymmetry, supergravity, and superstring theory. He served on former President Obama’s Council of Advisors on Science and Technology.

He is the co-author of a new book titled Proving Einstein Right about the scientists who set out to prove Einstein’s theory of relativity.

This conversation is part of the Artificial Intelligence podcast.

EPISODE LINKS:
Jim Gates wiki: http://bit.ly/2ZlFdv0
Proving Einstein Right (book): https://amzn.to/34WLizp

OUTLINE:
0:00 – Introduction
3:13 – Will we ever venture outside our solar system?
5:16 – When will the first human step foot on Mars?
11:14 – Are we alone in the universe?
13:55 – Most beautiful idea in physics
16:29 – Can the mind be digitized?
21:15 – Does the possibility of superintelligence excite you?
22:25 – Role of dreaming in creativity and mathematical thinking
30:51 – Existential threats
31:46 – Basic particles underlying our universe
41:28 – What is supersymmetry?
52:19 – Adinkra symbols
1:00:24 – String theory
1:07:02 – Proving Einstein right and experimental validation of general relativity
1:19:07 – Richard Feynman
1:22:01 – Barack Obama’s Council of Advisors on Science and Technology
1:30:20 – Exciting problems in physics that are just within our reach
1:31:26 – Mortality

Lex Fridman interviews Michael Stevens, the creator of Vsauce — one of the most popular educational YouTube channels in the world, with over 15 million subscribers and over 1.7 billion views.

His videos often ask and answer questions that are both profound and entertaining, spanning topics from physics to psychology.

As part of his channel, he created three seasons of Mind Field, a series that explored human behavior.

This conversation is part of the Artificial Intelligence podcast.

Lex Fridman interviews Judea Pearl in this conversation.

Judea Pearl is a professor at UCLA and a winner of the Turing Award, that’s generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the field of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian Networks and profound ideas in causality in general. These ideas are important not just for AI, but to our understanding and practice of science. But in the field of AI, the idea of causality, cause and effect, to many, lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. This conversation is part of the Artificial Intelligence podcast.