Lex Fridman interviews Chris Lattner, a world-class software and hardware engineer who has led projects at Apple, Tesla, Google, and SiFive.

OUTLINE:

  • 0:00 – Introduction
  • 2:25 – Working with Elon Musk, Steve Jobs, Jeff Dean
  • 7:55 – Why do programming languages matter?
  • 13:55 – Python vs Swift
  • 24:48 – Design decisions
  • 30:06 – Types
  • 33:54 – Programming languages are a bicycle for the mind
  • 36:26 – Picking what language to learn
  • 42:25 – Most beautiful feature of a programming language
  • 51:50 – Walrus operator
  • 1:01:16 – LLVM
  • 1:06:28 – MLIR compiler framework
  • 1:10:35 – SiFive semiconductor design
  • 1:23:09 – Moore’s Law
  • 1:26:22 – Parallelization
  • 1:30:50 – Swift concurrency manifesto
  • 1:41:39 – Running a neural network fast
  • 1:47:16 – Is the universe a quantum computer?
  • 1:52:57 – Effects of the pandemic on society
  • 2:10:09 – GPT-3
  • 2:14:28 – Software 2.0
  • 2:27:54 – Advice for young people
  • 2:32:37 – Meaning of life

Lex Fridman interviews Scott Aaronson, a quantum computer scientist.

Time index:

  • 0:00 – Introduction
  • 3:31 – Simulation
  • 8:22 – Theories of everything
  • 14:02 – Consciousness
  • 36:16 – Roger Penrose on consciousness
  • 46:28 – Turing test
  • 50:16 – GPT-3
  • 58:46 – Universality of computation
  • 1:05:17 – Complexity
  • 1:11:23 – P vs NP
  • 1:23:41 – Complexity of quantum computation
  • 1:35:48 – Pandemic
  • 1:49:33 – Love

Elon Musk has warned us that AI, and in particular a digital superintelligent AI, might render humanity extinct.

We should therefore proceed very carefully in the development of AI systems. One of the solutions to the AI control problem proposed by Elon Musk is the integration of AI with the human brain through a brain-computer interface. That is one of the reasons why he founded Neuralink, a company focused on the development of implantable brain–machine interfaces.


Neuralink’s BMI technology might be able to overcome the biological limits of our minds and could even expand our intelligence. The symbiosis between AI and humans may greatly benefit our species, and it may also help humanity expand out into space. In spite of these possibilities, Musk said that he sees the creation of digital superintelligences as a great risk to the existence of humanity, but he also thinks that we must nevertheless pursue their development.

Yannic Kilcher explains why transformers are ruining convolutions.

This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform convolutional neural networks on image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works better, and rant about why double-blind peer review is broken.

OUTLINE:

  • 0:00 – Introduction
  • 0:30 – Double-Blind Review is Broken
  • 5:20 – Overview
  • 6:55 – Transformers for Images
  • 10:40 – Vision Transformer Architecture
  • 16:30 – Experimental Results
  • 18:45 – What does the Model Learn?
  • 21:00 – Why Transformers are Ruining Everything
  • 27:45 – Inductive Biases in Transformers
  • 29:05 – Conclusion & Comments

Related resources:

  • Paper (Under Review): https://openreview.net/forum?id=YicbFdNTTy