Lex Fridman shared this lecture by Andrew Trask in January 2020, part of the MIT Deep Learning Lecture Series.

OUTLINE:

0:00 – Introduction
0:54 – Privacy preserving AI talk overview
1:28 – Key question: Is it possible to answer questions using data we cannot see?
5:56 – Tool 1: remote execution
8:44 – Tool 2: search and example data
11:35 – Tool 3: differential privacy
28:09 – Tool 4: secure multi-party computation
36:37 – Federated learning
39:55 – AI, privacy, and society
46:23 – Open data for science
50:35 – Single-use accountability
54:29 – End-to-end encrypted services
59:51 – Q&A: privacy of the diagnosis
1:02:49 – Q&A: removing bias from data when data is encrypted
1:03:40 – Q&A: regulation of privacy
1:04:27 – Q&A: OpenMined
1:06:16 – Q&A: encryption and nonlinear functions
1:07:53 – Q&A: path to adoption of privacy-preserving technology
1:11:44 – Q&A: recommendation systems

Here’s my talk from the Azure Data Fest Philly 2020 last week!

Neural networks are an essential element of many advanced artificial intelligence (AI) solutions. However, few people understand the core mathematical or structural underpinnings of this concept. In this session, learn the basic structure of neural networks and how to build out a simple neural network from scratch with Python.
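
As a rough companion to that idea, here is a minimal from-scratch sketch in plain NumPy: a two-layer network trained on a toy XOR dataset. The layer sizes, learning rate, and data are illustrative assumptions, not the session's actual example.

```python
import numpy as np

# Toy XOR dataset -- purely illustrative, not from the session
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network predictions

    # Backward pass: gradients of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # predictions should approach [0, 1, 1, 0]
```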

Eric Reiss (FatDUX Group) started working with user experience (UX) long before the term was even coined, and in this talk he speaks about Ethics in AI.

Whenever we say, “That’s not my problem,” or, “My company won’t let me do that,” we are handing over our ethical responsibility to someone else – for better or for worse. Do innocent decisions evolve so that they promote racism or gender discrimination through inadvertent cognitive bias or unwitting apathy? Far too often they do.

We, as technologists, hold incredible power to shape the things to come. I would like to share my thoughts with you so you can use this power to truly build a better world for those who come after us!

Lex Fridman interviews Daniel Kahneman in this thought-provoking conversation.

Daniel Kahneman is winner of the Nobel Prize in economics for his integration of economic science with the psychology of human behavior, judgment and decision-making. He is the author of the popular book “Thinking, Fast and Slow” that summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life

Azure Machine Learning datasets are a great solution for managing your data for machine learning.

With datasets, you can directly access data from multiple sources without incurring extra storage costs, load data for training and inference through a unified interface with built-in support for open-source libraries, and track your data in ML experiments for reproducibility.
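
For a concrete sense of that unified interface, here is a minimal sketch using the azureml-core SDK; the workspace config, datastore path, and dataset name are placeholders rather than anything from the post.

```python
# Sketch: register a tabular dataset and load it back for training.
# Assumes azureml-core is installed and a config.json for your workspace exists;
# the datastore path and dataset name are illustrative placeholders.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()              # loads the workspace from config.json
datastore = ws.get_default_datastore()

# A dataset is a reference to data already in storage, so registering it
# does not copy the files or incur extra storage cost.
dataset = Dataset.Tabular.from_delimited_files(
    path=(datastore, "training-data/iris.csv")   # placeholder path
)
dataset = dataset.register(workspace=ws, name="iris-demo",
                           create_new_version=True)

# Later, any experiment can load the same data through the same interface
df = Dataset.get_by_name(ws, name="iris-demo").to_pandas_dataframe()
print(df.head())
```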


Here’s a talk by Danny Luo on “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”.

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.

Toronto Deep Learning Series, 6 November 2018

Paper: https://arxiv.org/abs/1810.04805
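
As a rough illustration of the “one additional output layer” idea from the abstract, here is what fine-tuning a pre-trained BERT encoder for two-class sentence classification can look like with the Hugging Face transformers library. This is not the talk’s or the paper’s code; the checkpoint name, label count, and example sentences are assumptions.

```python
# Sketch: BERT encoder + one linear classification layer, fine-tuned end to end.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # the added output layer

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(classifier.parameters()), lr=2e-5
)

def forward(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    # The [CLS] token's final hidden state summarizes the whole sequence
    cls = encoder(**batch).last_hidden_state[:, 0, :]
    return classifier(cls)

# One illustrative training step on made-up labels
logits = forward(["a great movie", "a terrible movie"])
labels = torch.tensor([1, 0])
loss = torch.nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```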

There are 250 billion microcontrollers in the world today. 28.1 billion units were sold in 2018 alone, and IC Insights forecasts annual shipment volume to grow to 38.2 billion by 2023.

What if they all became smart? How would that change our world?

From venturebeat.com:

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
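
To make that concrete, here is a minimal sketch of shrinking a small Keras model with TensorFlow Lite post-training quantization, one common route toward microcontroller deployment. The tiny model below is a placeholder, and flashing the resulting buffer onto a specific board with TensorFlow Lite Micro is out of scope here.

```python
# Sketch: convert a small Keras model for microcontroller-class hardware.
# The model is a placeholder, not any particular TinyML application.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),            # e.g. three sensor channels
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# ... model.fit(...) on real sensor data would go here ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

# The resulting flat buffer is what gets compiled into firmware
# (typically as a C array for TensorFlow Lite Micro); here we just save it.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Model size: {len(tflite_model)} bytes")
```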

Will “Network Execubots” decide what films and TV shows get made?

The Hollywood Reporter has just reported that Warner Bros. has signed a deal with a tech company to implement an “AI-driven film management” system.

The system, which may sound more like an administrative tool than an industry game changer, will help the major studio decide which projects receive the proverbial green light: a task that’s daunting for humans, but a potential walk in the park for computer algorithms.

According to THR, the system, created by the Los Angeles-based company Cinelytics, uses “comprehensive data and predictive analytics” to help “guide decision-making at the greenlight stage.” THR also says that Cinelytics’ tech can “assess the value of a star in any territory,” and even predict how well a film will perform in theaters and secondary markets.