Lex Fridman interviews Chris Lattner, a world-class software & hardware engineer, leading projects at Apple, Tesla, Google, and SiFive.

Time index:

  • 0:00 – Introduction
  • 2:25 – Working with Elon Musk, Steve Jobs, Jeff Dean
  • 7:55 – Why do programming languages matter?
  • 13:55 – Python vs Swift
  • 24:48 – Design decisions
  • 30:06 – Types
  • 33:54 – Programming languages are a bicycle for the mind
  • 36:26 – Picking what language to learn
  • 42:25 – Most beautiful feature of a programming language
  • 51:50 – Walrus operator
  • 1:01:16 – LLVM
  • 1:06:28 – MLIR compiler framework
  • 1:10:35 – SiFive semiconductor design
  • 1:23:09 – Moore’s Law
  • 1:26:22 – Parallelization
  • 1:30:50 – Swift concurrency manifesto
  • 1:41:39 – Running a neural network fast
  • 1:47:16 – Is the universe a quantum computer?
  • 1:52:57 – Effects of the pandemic on society
  • 2:10:09 – GPT-3
  • 2:14:28 – Software 2.0
  • 2:27:54 – Advice for young people
  • 2:32:37 – Meaning of life

Lex Fridman interviews Scott Aaronson, a quantum computer scientist.

Time index:

  • 0:00 – Introduction
  • 3:31 – Simulation
  • 8:22 – Theories of everything
  • 14:02 – Consciousness
  • 36:16 – Roger Penrose on consciousness
  • 46:28 – Turing test
  • 50:16 – GPT-3
  • 58:46 – Universality of computation
  • 1:05:17 – Complexity
  • 1:11:23 – P vs NP
  • 1:23:41 – Complexity of quantum computation
  • 1:35:48 – Pandemic
  • 1:49:33 – Love

Elon Musk has warned us that AI, and in particular a digital superintelligent AI, might render humanity extinct.

We should therefore proceed very carefully in the development of AI systems. One of the solutions Elon Musk has proposed for the AI control problem is integrating AI with the human brain through a brain-computer interface. That is one of the reasons he founded Neuralink, a company focused on developing implantable brain–machine interfaces.

Neuralink’s BMI technology might be able to overcome the biological limits of our minds and could even expand our intelligence. The symbiosis between AI and humans may greatly benefit our species, and it may also help humanity expand out into space. Despite these possibilities, Musk says he sees the creation of digital superintelligence as a great risk to the existence of humanity, but he also thinks that we must nevertheless pursue its development.

Yannic Kilcher explains why transformers are ruining convolutions.

This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works better, and rant about why double-blind peer review is broken.


  • 0:00 – Introduction
  • 0:30 – Double-Blind Review is Broken
  • 5:20 – Overview
  • 6:55 – Transformers for Images
  • 10:40 – Vision Transformer Architecture
  • 16:30 – Experimental Results
  • 18:45 – What does the Model Learn?
  • 21:00 – Why Transformers are Ruining Everything
  • 27:45 – Inductive Biases in Transformers
  • 29:05 – Conclusion & Comments

Related resources:

  • Paper (Under Review): https://openreview.net/forum?id=YicbFdNTTy
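The core idea behind ViT, as described above, is to treat an image like a sentence: cut it into fixed-size patches, embed each patch as a token, and feed the resulting sequence to a standard Transformer encoder. The patch-embedding step can be sketched in NumPy as follows (the sizes here are illustrative, not the paper's; the paper uses e.g. 16×16 patches on larger images, and the projections are learned rather than random):

```python
import numpy as np

# Illustrative sizes (hypothetical, smaller than the paper's).
H, W, C = 32, 32, 3      # input image: height, width, channels
P = 8                    # patch size
D = 64                   # token embedding dimension
n_patches = (H // P) * (W // P)

rng = np.random.default_rng(0)
image = rng.standard_normal((H, W, C))

# 1. Split the image into non-overlapping P x P patches and flatten each.
patches = image.reshape(H // P, P, W // P, P, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(n_patches, P * P * C)

# 2. Linearly project each flattened patch to a D-dimensional token
#    (a stand-in for the learned embedding matrix).
W_embed = rng.standard_normal((P * P * C, D)) * 0.02
tokens = patches @ W_embed                      # shape (n_patches, D)

# 3. Prepend a [class] token and add position embeddings; this sequence
#    is what a standard Transformer encoder would then consume.
cls_token = rng.standard_normal((1, D)) * 0.02
pos_embed = rng.standard_normal((n_patches + 1, D)) * 0.02
sequence = np.concatenate([cls_token, tokens], axis=0) + pos_embed

print(sequence.shape)   # (17, 64): 16 patch tokens + 1 class token
```

Because the patches enter the model as an unordered set plus position embeddings, the architecture carries far weaker inductive biases than a CNN's locality and translation equivariance, which is why, as the video discusses, it needs large amounts of data to win.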

When you think of “deep learning” you might think of teams of PhDs with petabytes of data and racks of supercomputers.

But it turns out that a year of coding, high school math, a free GPU service, and a few dozen images is enough to create world-class models. fast.ai has made it their mission to make deep learning as accessible as possible.

In this interview fast.ai co-founder Jeremy Howard explains how to use their free software and courses to become an effective deep learning practitioner.

Taking deep learning models to production and doing so reliably is one of the next frontiers of MLOps.

With the advent of Redis modules and the availability of C APIs for the major deep learning frameworks, it is now possible to turn Redis into a reliable runtime for deep learning workloads, providing a simple solution for a model serving microservice.

RedisAI ships with several useful features, such as support for multiple frameworks, CPU and GPU backends, auto-batching, and DAG execution, and will soon gain automatic monitoring capabilities. In this talk, we’ll explore some of these features of RedisAI and see how easy it is to integrate MLflow and RedisAI to build an efficient productionization pipeline.

[Originally aired as part of the Data+AI Online Meetup (https://www.meetup.com/data-ai-online/) and Bay Area MLflow meetup]