deeplizard teaches us how to normalize a dataset. We’ll see how dataset normalization is carried out in code and how normalization affects the neural network training process.

Content index:

  • 0:00 Video Intro
  • 0:52 Feature Scaling
  • 2:19 Normalization Example
  • 5:26 What Is Standardization
  • 8:13 Normalizing Color Channels
  • 9:25 Code: Normalize a Dataset
  • 19:40 Training With Normalized Data
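
As a reference for the code portion of the video, here is a minimal sketch of the usual two-pass pattern in PyTorch: compute the per-channel mean and standard deviation over the training set, then re-create the dataset with a Normalize transform. FashionMNIST and the variable names are assumptions for illustration, not the video’s exact code.

```python
# Minimal sketch of per-channel dataset normalization in PyTorch.
# FashionMNIST is an assumption; the same pattern works for any torchvision dataset.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Pass 1: load the raw dataset and compute per-channel mean and std.
raw_set = datasets.FashionMNIST(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor()
)
loader = DataLoader(raw_set, batch_size=1000)

num_pixels = 0.0
channel_sum = 0.0
channel_sq_sum = 0.0
for images, _ in loader:
    # images has shape (batch, channels, height, width)
    num_pixels += images.numel() / images.shape[1]
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / num_pixels
std = (channel_sq_sum / num_pixels - mean ** 2).sqrt()

# Pass 2: re-create the dataset, normalizing each pixel as (x - mean) / std.
normalized_set = datasets.FashionMNIST(
    root="./data", train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean.tolist(), std.tolist()),
    ])
)
```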

How far can you go with ONLY language modeling?

Can a large enough language model perform NLP tasks out of the box?

OpenAI takes on these and other questions by training a transformer that is an order of magnitude larger than anything built before, and the results are astounding.

Yannic Kilcher explores.

Paper: https://arxiv.org/abs/2005.14165
Code: https://github.com/openai/gpt-3

Time index:

  • 0:00 – Intro & Overview
  • 1:20 – Language Models
  • 2:45 – Language Modeling Datasets
  • 3:20 – Model Size
  • 5:35 – Transformer Models
  • 7:25 – Fine Tuning
  • 10:15 – In-Context Learning
  • 17:15 – Start of Experimental Results
  • 19:10 – Question Answering
  • 23:10 – What I think is happening
  • 28:50 – Translation
  • 31:30 – Winograd Schemas
  • 33:00 – Commonsense Reasoning
  • 37:00 – Reading Comprehension
  • 37:30 – SuperGLUE
  • 40:40 – NLI
  • 41:40 – Arithmetic Expressions
  • 48:30 – Word Unscrambling
  • 50:30 – SAT Analogies
  • 52:10 – News Article Generation
  • 58:10 – Made-up Words
  • 1:01:10 – Training Set Contamination
  • 1:03:10 – Task Examples
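
To make the “out of the box” idea concrete, here is a toy sketch of the few-shot prompting setup the paper calls in-context learning. The prompt mirrors the paper’s English-to-French example; the `language_model.generate` call is a hypothetical stand-in, not a real API.

```python
# Toy illustration of in-context learning: the model's weights stay frozen,
# and the task is specified entirely in the prompt as a few demonstrations
# followed by an unfinished example.
few_shot_prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "plush giraffe => girafe peluche\n"
    "cheese =>"
)

# completion = language_model.generate(few_shot_prompt)  # hypothetical call
# A large enough model continues the text with "fromage", solving the task
# with no gradient updates or fine-tuning.
```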

Lex Fridman interviews Kate Darling in this episode of the AI Show.

Kate Darling is a researcher at MIT, interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and life-like machines, which, for me, is one of the most exciting topics in all of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.

Time index:

  • 0:00 – Introduction
  • 3:31 – Robot ethics
  • 4:36 – Universal Basic Income
  • 6:31 – Mistreating robots
  • 17:17 – Robots teaching us about ourselves
  • 20:27 – Intimate connection with robots
  • 24:29 – Trolley problem and making difficult moral decisions
  • 31:59 – Anthropomorphism
  • 38:09 – Favorite robot
  • 41:19 – Sophia
  • 42:46 – Designing robots for human connection
  • 47:01 – Why is it so hard to build a personal robotics company?
  • 50:03 – Is it possible to fall in love with a robot?
  • 56:39 – Robots displaying consciousness and mortality
  • 58:33 – Manipulation of emotion by companies
  • 1:04:40 – Intellectual property
  • 1:09:23 – Lessons for robotics from parenthood
  • 1:10:41 – Hope for future of robotics

Lex Fridman interviews Ilya Sutskever, co-founder of OpenAI.

Ilya Sutskever is the co-founder of OpenAI, is one of the most cited computer scientists in history with over 165,000 citations, and, to me, is one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

This conversation is part of the Artificial Intelligence podcast.

Time index:

  • 0:00 – Introduction
  • 2:23 – AlexNet paper and the ImageNet moment
  • 8:33 – Cost functions
  • 13:39 – Recurrent neural networks
  • 16:19 – Key ideas that led to success of deep learning
  • 19:57 – What’s harder to solve: language or vision?
  • 29:35 – We’re massively underestimating deep learning
  • 36:04 – Deep double descent
  • 41:20 – Backpropagation
  • 42:42 – Can neural networks be made to reason?
  • 50:35 – Long-term memory
  • 56:37 – Language models
  • 1:00:35 – GPT-2
  • 1:07:14 – Active learning
  • 1:08:52 – Staged release of AI systems
  • 1:13:41 – How to build AGI?
  • 1:25:00 – Question to AGI
  • 1:32:07 – Meaning of life