Lex Fridman interviews Daniel Kahneman in this thought-provoking conversation.

Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow,” which summarizes in an accessible way his decades of research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life

Here’s a talk by Danny Luo on “Pre-training of Deep Bidirectional Transformers for Language Understanding.”

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.

Toronto Deep Learning Series, 6 November 2018

Paper: https://arxiv.org/abs/1810.04805
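
To make the fine-tuning recipe in the abstract concrete, here is a minimal sketch using the Hugging Face transformers library (not something from the talk itself): a pre-trained BERT encoder with a single classification head on top, trained for one step on a sentence pair. The model name, example sentences, label, and learning rate are illustrative assumptions.

```python
# A minimal sketch of the fine-tuning recipe the abstract describes:
# pre-trained BERT plus one additional output layer for a downstream task,
# here sentence-pair classification (hypothetical example, not from the talk).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification is BERT with a single classification
# head on the [CLS] token -- the "one additional output layer".
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# A natural-language-inference-style sentence pair (illustrative).
inputs = tokenizer("A man is playing a guitar.",
                   "A person is making music.",
                   return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label: entailment

# One training step; in practice you would loop over a GLUE-style dataset.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```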

Siraj Raval has a video exploring a paper about genomics and creating reliable machine learning systems.

Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data that’s not in the training set, and do so with high confidence. This has serious real-world consequences! In medicine, this could mean misdiagnosing a patient. In autonomous vehicles, this could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like that, so it’s important that we figure out how to correct this problem! I found a new, relatively obscure yet extremely fascinating paper out of Google Research that tackles this problem head-on. In this episode, I’ll explain the work of these researchers, we’ll write some code, do some math, do some visualizations, and by the end I’ll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!

Likelihood Ratios for Out-of-Distribution Detection paper: https://arxiv.org/pdf/1906.02845.pdf 

The researcher’s code: https://github.com/google-research/google-research/tree/master/genomics_ood
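
For a flavor of the likelihood-ratio idea, here is a minimal toy sketch (my own illustration, not the researchers’ code): a “full” model is fit on in-distribution sequences and a “background” model on randomly perturbed copies, and the out-of-distribution score is the difference of their log-likelihoods, so generic background statistics cancel out. The count-based unigram models and GC-rich toy sequences are simplifying assumptions; the paper itself uses deep generative models of genomic data.

```python
# Toy sketch of likelihood-ratio OOD scoring:
# score(x) = log p_full(x) - log p_background(x),
# where the background model is trained on randomly mutated inputs.
import numpy as np

ALPHABET = "ACGT"
rng = np.random.default_rng(0)

def train_unigram(seqs, smoothing=1.0):
    """Fit a smoothed per-base frequency model over ACGT."""
    counts = np.full(len(ALPHABET), smoothing)
    for s in seqs:
        for ch in s:
            counts[ALPHABET.index(ch)] += 1
    return counts / counts.sum()

def log_likelihood(seq, probs):
    return sum(np.log(probs[ALPHABET.index(ch)]) for ch in seq)

def perturb(seq, mutation_rate=0.3):
    """Randomly mutate bases to build the background training set."""
    return "".join(rng.choice(list(ALPHABET)) if rng.random() < mutation_rate
                   else ch for ch in seq)

# Hypothetical in-distribution training sequences (GC-rich here).
train_seqs = ["GCGCGGCC", "CGGCCGCG", "GGCCGCGC"]
full_model = train_unigram(train_seqs)
background_model = train_unigram([perturb(s) for s in train_seqs])

def llr_score(seq):
    """Higher score means more in-distribution under the likelihood ratio."""
    return log_likelihood(seq, full_model) - log_likelihood(seq, background_model)

print(llr_score("GCGCCGGC"))  # in-distribution-like: relatively high score
print(llr_score("ATATATTA"))  # OOD-like (AT-rich): relatively low score
```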

Lex Fridman explains that the best way to understand the mind is to build it, in this clip from the opening lecture of the MIT Deep Learning lecture series.

Full video: https://www.youtube.com/watch?v=0VH1Lim8gL8

Website: https://deeplearning.mit.edu

Siraj Raval gets back to inspiring people to get into AI and pokes fun at himself.

Almost exactly 4 years ago I decided to dedicate my life to helping educate the world on Artificial Intelligence. There were hardly any resources designed for absolute beginners, and the field was dominated by PhDs. In 2020, thanks to the extraordinary contributions of everyone in this community, all of that has changed. It’s easier than ever before to enter this field, even without an IT background. We’ve seen brave entrepreneurs figure out how to deploy this technology to save lives (medical imaging, automated diagnosis) and accelerate science (AlphaFold). We’ve seen algorithmic advances (deepfakes) and ethical controversies (automated surveillance) that shocked the world. The AI field is now a global, cross-cultural movement that’s not limited to academics alone. And that’s something all of us should be proud of; we’re all a part of this. I’ve packed a lot into this episode! I’ll give my annual lists of the best ML languages and libraries to learn this year, how to learn ML in 2020, as well as 8 predictions about where this field is headed. I had a lot of fun making this, so I hope you enjoy it!

Katherine Bindley of the Wall Street Journal is at CES to take a look at the latest AI-infused cameras on the market.

Two new smart systems use cameras, artificial intelligence, and an assortment of sensors to keep watch over you: Patscan looks for threats in public spaces, while Eyeris monitors the driver and passengers in a car. WSJ’s Katherine Bindley visits CES to explore their advantages, as well as their privacy costs.