Two Minute Papers explains the paper “Local Motion Phases for Learning Multi-Contact Character Movements” in the video below.
GPT-3 has 175 billion parameters (loosely analogous to synapses) and produces astonishing results.
The human brain has about 100 trillion synapses. How much would it cost to train a language model the size of the human brain?
Lex Fridman works through the math.
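The back-of-envelope math can be sketched in a few lines. Both inputs are assumptions: the ~$4.6M GPT-3 training cost is a widely circulated third-party estimate (not an official figure), and linear cost scaling in parameter count is a deliberate simplification.

```python
# Rough estimate: cost to train a 100-trillion-parameter model,
# assuming (hypothetically) that training cost scales linearly with
# parameter count and that GPT-3 cost about $4.6M to train
# (a third-party estimate, not an official figure).

GPT3_PARAMS = 175e9      # 175 billion parameters
BRAIN_SYNAPSES = 100e12  # 100 trillion synapses
GPT3_COST_USD = 4.6e6    # assumed GPT-3 training cost

scale = BRAIN_SYNAPSES / GPT3_PARAMS  # ~571x more parameters
naive_cost = GPT3_COST_USD * scale    # linear-scaling estimate

print(f"Scale factor: {scale:.0f}x")
print(f"Naive cost estimate: ${naive_cost / 1e9:.1f} billion")
```

Under these assumptions the naive answer lands in the low billions of dollars; in practice compute (and thus cost) grows faster than linearly with model size, so treat this as a lower-bound sketch.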
Two Minute Papers explores the paper “StarGAN v2: Diverse Image Synthesis for Multiple Domains.”
Lex Fridman interviews Jitendra Malik, a professor at Berkeley and one of the seminal figures in computer vision, both before and after the deep learning revolution.
He has been cited over 180,000 times and has mentored many world-class researchers in computer science. This conversation is part of the Artificial Intelligence podcast.
- 0:00 – Introduction
- 3:17 – Computer vision is hard
- 10:05 – Tesla Autopilot
- 21:20 – Human brain vs computers
- 23:14 – The general problem of computer vision
- 29:09 – Images vs video in computer vision
- 37:47 – Benchmarks in computer vision
- 40:06 – Active learning
- 45:34 – From pixels to semantics
- 52:47 – Semantic segmentation
- 57:05 – The three R’s of computer vision
- 1:02:52 – End-to-end learning in computer vision
- 1:04:24 – 6 lessons we can learn from children
- 1:08:36 – Vision and language
- 1:12:30 – Turing test
- 1:16:17 – Open problems in computer vision
- 1:24:49 – AGI
- 1:35:47 – Pick the right problem
Two Minute Papers examines the paper “3D Photography using Context-aware Layered Depth Inpainting.”
Two Minute Papers examines the paper “Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis.”
Lex Fridman provides a visual illustration of the connection between neural network architecture, hyperparameters, and dataset characteristics. Explore this connection yourself at: https://playground.tensorflow.org/
In this episode, deeplizard demonstrates the various ways of saving and loading a Sequential model using TensorFlow’s Keras API.
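The core save/load calls covered in the episode can be sketched as follows; the model architecture and file names are placeholders, not taken from the video.

```python
import tensorflow as tf

# A small Sequential model; the architecture is just a placeholder.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# 1) Save everything: architecture, weights, and training configuration.
model.save("seq_model.h5")
restored = tf.keras.models.load_model("seq_model.h5")

# 2) Save only the architecture as a JSON string.
json_config = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_config)

# 3) Save only the weights, then load them into the rebuilt model.
model.save_weights("seq_model.weights.h5")
rebuilt.load_weights("seq_model.weights.h5")
```

Which option to use depends on what you need back: `model.save` round-trips the whole model, while the JSON-plus-weights route is useful when you want to rebuild the architecture in code and attach weights separately.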
deeplizard demonstrates how to create a confusion matrix, which lets us visually assess how well a neural network predicts during inference.
- 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
- 00:34 Plotting a Confusion Matrix
- 02:48 Reading a Confusion Matrix
- 04:56 Collective Intelligence and the DEEPLIZARD HIVEMIND
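The matrix itself can be computed with scikit-learn's `confusion_matrix`; the labels and predictions below are hypothetical stand-ins for a model's inference output.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and model predictions for a binary classifier.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

# Rows are true classes, columns are predicted classes:
# cm[0, 0] = true negatives,  cm[0, 1] = false positives,
# cm[1, 0] = false negatives, cm[1, 1] = true positives.
cm = confusion_matrix(y_true, y_pred)
print(cm)
```

Reading the matrix is then a matter of checking how much mass sits on the diagonal (correct predictions) versus off it (misclassifications).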