Lex Fridman interviews Daphne Koller, a professor of computer science at Stanford University, a co-founder of Coursera with Andrew Ng, and founder and CEO of insitro, a company at the intersection of machine learning and biomedicine.

This conversation is part of the Artificial Intelligence podcast.

Time index:

  • 0:00 – Introduction
  • 2:22 – Will we one day cure all disease?
  • 6:31 – Longevity
  • 10:16 – Role of machine learning in treating diseases
  • 13:05 – A personal journey to medicine
  • 16:25 – Insitro and disease-in-a-dish models
  • 33:25 – What diseases can be helped with disease-in-a-dish approaches?
  • 36:43 – Coursera and education
  • 49:04 – Advice to people interested in AI
  • 50:52 – Beautiful idea in deep learning
  • 55:10 – Uncertainty in AI
  • 58:29 – AGI and AI safety
  • 1:06:52 – Are most people good?
  • 1:09:04 – Meaning of life

Lex Fridman lands an interview with the one and only Andrew Ng.

Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and AI Fund, and was Chief Scientist at Baidu. As a Stanford professor, and through Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me. This conversation is part of the Artificial Intelligence podcast.

Time index:

  • 0:00 – Introduction
  • 2:23 – First few steps in AI
  • 5:05 – Early days of online education
  • 16:07 – Teaching on a whiteboard
  • 17:46 – Pieter Abbeel and early research at Stanford
  • 23:17 – Early days of deep learning
  • 32:55 – Quick preview: deeplearning.ai, landing.ai, and AI fund
  • 33:23 – deeplearning.ai: how to get started in deep learning
  • 45:55 – Unsupervised learning
  • 49:40 – deeplearning.ai (continued)
  • 56:12 – Career in deep learning
  • 58:56 – Should you get a PhD?
  • 1:03:28 – AI fund – building startups
  • 1:11:14 – Landing.ai – growing AI efforts in established companies
  • 1:20:44 – Artificial general intelligence

Donald Knuth is one of the greatest and most impactful computer scientists and mathematicians ever. He received the 1974 Turing Award, considered the Nobel Prize of computing.

He is the author of the multi-volume magnum opus The Art of Computer Programming. He made several key contributions to the rigorous analysis of the computational complexity of algorithms, and he popularized asymptotic notation, which we all affectionately know as big-O notation.
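As a refresher on the notation he popularized, the standard definition (a textbook fact, not a quote from the interview) can be stated as:

```latex
% f(n) = O(g(n)) means that f grows no faster than g, up to a constant factor:
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N}
\ \text{such that}\ |f(n)| \le c\, g(n) \ \text{for all } n \ge n_0.
```

For example, 3n² + 7n + 2 = O(n²), since the lower-order terms are eventually dominated by a constant multiple of n².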

He also created the TeX typesetting system, which most computer scientists, physicists, mathematicians, and engineers use to write technical papers and make them look beautiful.

Lex Fridman interviews him in this video.

The Art of Computer Programming (book): https://amzn.to/39kxRwB

Time index:

  • 0:00 – Introduction
  • 3:45 – IBM 650
  • 7:51 – Geeks
  • 12:29 – Alan Turing
  • 14:26 – My life is a convex combination of English and mathematics
  • 24:00 – Japanese arrow puzzle example
  • 25:42 – Neural networks and machine learning
  • 27:59 – The Art of Computer Programming
  • 36:49 – Combinatorics
  • 39:16 – Writing process
  • 42:10 – Are some days harder than others?
  • 48:36 – What’s the “Art” in The Art of Computer Programming?
  • 50:21 – Binary (boolean) decision diagram
  • 55:06 – Big-O notation
  • 58:02 – P=NP
  • 1:10:05 – Artificial intelligence
  • 1:13:26 – Ant colonies and human cognition
  • 1:17:11 – God and the Bible
  • 1:24:28 – Reflection on life
  • 1:28:25 – Facing mortality
  • 1:33:40 – TeX and beautiful typography
  • 1:39:23 – How much of the world do we understand?
  • 1:44:17 – Question for God

In this video from Microsoft Research, Susan Dumais sits down with Christopher Manning, a Professor of Computer Science and Linguistics at Stanford University.

Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008).

His most recent work has concentrated on probabilistic approaches to natural language processing (NLP) problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. Here’s an interesting talk about building neural networks that can reason.

To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning, as well as achieve good generalization from a modest amount of data. The Neural State Machine (NSM) design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
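The core idea of "attention over symbols, which have distributed representations" can be illustrated with a minimal sketch. This is an illustrative toy, not the actual NSM implementation: a query vector attends over a small vocabulary of symbol embeddings, producing a probability distribution over symbols and a soft, differentiable symbol selection.

```python
# Toy sketch (assumption: not the real NSM code) of attention over a
# vocabulary of symbols, each represented by a distributed embedding.
import numpy as np

rng = np.random.default_rng(0)

num_symbols, dim = 5, 8
# Each row is the distributed representation of one discrete symbol.
symbol_embeddings = rng.normal(size=(num_symbols, dim))

def attend(query, symbols):
    """Softmax attention of `query` over the rows of `symbols`.

    Returns a probability distribution over the symbol vocabulary and
    the attention-weighted ("soft") symbol embedding.
    """
    scores = symbols @ query                  # similarity to each symbol
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ symbols         # soft symbol selection

query = rng.normal(size=dim)
weights, soft_symbol = attend(query, symbol_embeddings)
print(round(weights.sum(), 6))  # -> 1.0: a distribution over symbols
```

The softmax makes the symbol choice differentiable, so the network can be trained end to end while its internal computation remains interpretable as a (soft) selection among discrete symbols.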