Lex Fridman interviews Daniel Kahneman in this thought-provoking conversation.

Daniel Kahneman is the winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He is the author of the popular book “Thinking, Fast and Slow,” which summarizes, in an accessible way, several decades of his research, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and more logical. The book delineates the cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
2:36 – Lessons about human behavior from WWII
8:19 – System 1 and system 2: thinking fast and slow
15:17 – Deep learning
30:01 – How hard is autonomous driving?
35:59 – Explainability in AI and humans
40:08 – Experiencing self and the remembering self
51:58 – Man’s Search for Meaning by Viktor Frankl
54:46 – How much of human behavior can we study in the lab?
57:57 – Collaboration
1:01:09 – Replication crisis in psychology
1:09:28 – Disagreements and controversies in psychology
1:13:01 – Test for AGI
1:16:17 – Meaning of life

Azure Machine Learning datasets are a great solution for managing your data for machine learning.

With datasets, you can directly access data from multiple sources without incurring extra storage costs, load data for training and inference through a unified interface with built-in support for open-source libraries, and track your data in ML experiments for reproducibility.
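
A minimal sketch of that workflow, assuming the azureml-core SDK, a local workspace config.json, and a hypothetical CSV URL (the URL and dataset name are illustrative only):

```python
# Illustrative sketch only: the CSV URL, dataset name, and workspace config are hypothetical.
from azureml.core import Workspace, Dataset

# Connect to the workspace described by a local config.json (assumed to exist).
ws = Workspace.from_config()

# Create a tabular dataset that references the data in place; nothing is copied into extra storage.
dataset = Dataset.Tabular.from_delimited_files(
    path="https://example.com/data/iris.csv"  # hypothetical source URL
)

# Register it so experiments can track exactly which data (and version) they used.
dataset = dataset.register(workspace=ws, name="iris-demo", create_new_version=True)

# Load it through the unified interface, here into pandas for training.
df = dataset.to_pandas_dataframe()
print(df.head())
```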


Here’s a talk by Danny Luo on “Pre-training of Deep Bidirectional Transformers for Language Understanding.”

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.

Toronto Deep Learning Series, 6 November 2018

Paper: https://arxiv.org/abs/1810.04805
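
The “one additional output layer” idea maps directly onto modern libraries. Here is a minimal fine-tuning sketch using the Hugging Face transformers library (not part of the talk; the example sentences and labels are made up):

```python
# Illustrative sketch: fine-tuning setup only, with hypothetical example sentences and labels.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# A classification head (the "one additional output layer") sits on top of the pre-trained encoder.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["the movie was great", "the movie was terrible"]  # hypothetical data
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

# outputs.loss is what a fine-tuning loop would backpropagate; outputs.logits holds class scores.
outputs.loss.backward()
print(outputs.logits.shape)  # (2, 2): one score per class for each sentence
```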

Malte Pietsch delivers this keynote on “Transfer Learning – Entering a new era in NLP” at PyData Warsaw 2019.

Transfer learning has been changing the NLP landscape tremendously since the release of BERT one year ago. Transformers of all kinds have emerged, dominate most research leaderboards, and have made their way into industrial applications. In this talk, we will dissect the paradigm of transfer learning and its effects on pipelines, modelling, and the engineer’s mindset.
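
As one concrete illustration of how low the barrier has become, here is a minimal sketch using the transformers pipeline API (not from the keynote; which checkpoint it downloads is whatever the library’s default sentiment model happens to be):

```python
# Illustrative sketch: reusing a pre-trained transformer for sentiment analysis
# with no task-specific training of our own.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default fine-tuned checkpoint

print(classifier("Transfer learning has changed the NLP landscape."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}] -- exact output depends on the checkpoint
```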

There are 250 billion microcontrollers in the world today. 28.1 billion units were sold in 2018 alone, and IC Insights forecasts annual shipment volume to grow to 38.2 billion by 2023.

What if they all became smart? How would that change our world?

From venturebeat.com:

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
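
A common step toward running such models on microcontrollers is post-training integer quantization with TensorFlow Lite. A minimal sketch, assuming an already-trained Keras model and a representative dataset generator (the model path and input shape here are hypothetical):

```python
# Illustrative sketch: converting a (hypothetical) trained Keras model to an int8 TFLite
# model small enough to deploy on a microcontroller.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Yields samples shaped like the real input so the converter can calibrate int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

model = tf.keras.models.load_model("trained_model.h5")  # hypothetical path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # this flatbuffer can then be embedded as a C array on the device
```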

Since it’s still January, we can make predictions for the year.

2020 will see further democratization of machine learning tools and a lower barrier to entry for their usage.

This will make data science/AI even more commonplace not only among top tech companies, but also among small and medium-sized businesses across various verticals.

However, one potentially underrated aspect of the big trends in the future of data science is the language frameworks used to make everyday data science tasks possible. Today, there are two major choices, R and Python (or, in more pragmatic data science circles, both!). One is praised for having the most beautifully designed data-wrangling syntax and plotting libraries, the other for its expressiveness and for having the best deep learning libraries available today.
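
For concreteness, here is what one of those everyday wrangling tasks looks like in Python’s pandas (purely illustrative data and column names):

```python
# Illustrative sketch: a small, everyday data-wrangling step in pandas with made-up data.
import pandas as pd

sales = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [120, 90, 150, 110],
})

# Group, aggregate, and sort -- the kind of routine task the R vs. Python debate is about.
summary = (
    sales.groupby("region", as_index=False)["revenue"]
         .sum()
         .sort_values("revenue", ascending=False)
)
print(summary)
```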

Will “Network Execubots” decide what films and TV shows get made?

The Hollywood Reporter has just reported that Warner Bros. has signed a deal with a tech company to implement an “AI-driven film management” system.

The system, which may sound more like an administrative tool than an industry game changer, will help the major studio decide which projects receive the proverbial green light: a task that’s daunting for humans, but a potential walk in the park for computer algorithms.

According to THR, the system, created by the Los Angeles-based company Cinelytics, uses “comprehensive data and predictive analytics” to help “guide decision-making at the greenlight stage.” THR also says that Cinelytics’ tech can “assess the value of a star in any territory,” and even predict how well a film will perform in theaters and secondary markets.

This full 6+ hour course provides a complete introduction to graph theory algorithms in computer science; a short Dijkstra sketch follows the course contents below.

Code: https://github.com/williamfiset/algorithms
Slides: https://github.com/williamfiset/Algorithms/tree/master/slides/graphtheory

Course created by William Fiset. Check out his YouTube channel: https://www.youtube.com/channel/UCD8yeTczadqdARzQUp29PJw

⭐️ Course Contents ⭐️
⌨️ (0:00:00) Graph Theory Introduction
⌨️ (0:13:53) Problems in Graph Theory
⌨️ (0:23:15) Depth First Search Algorithm
⌨️ (0:33:18) Breadth First Search Algorithm
⌨️ (0:40:27) Breadth First Search grid shortest path
⌨️ (0:56:23) Topological Sort Algorithm
⌨️ (1:09:52) Shortest/Longest path on a Directed Acyclic Graph (DAG)
⌨️ (1:19:34) Dijkstra’s Shortest Path Algorithm
⌨️ (1:43:17) Dijkstra’s Shortest Path Algorithm | Source Code
⌨️ (1:50:47) Bellman Ford Algorithm
⌨️ (2:05:34) Floyd Warshall All Pairs Shortest Path Algorithm
⌨️ (2:20:54) Floyd Warshall All Pairs Shortest Path Algorithm | Source Code
⌨️ (2:29:19) Bridges and Articulation points Algorithm
⌨️ (2:49:01) Bridges and Articulation points source code
⌨️ (2:57:32) Tarjan’s Strongly Connected Components algorithm
⌨️ (3:13:56) Tarjan’s Strongly Connected Components algorithm source code
⌨️ (3:20:12) Travelling Salesman Problem | Dynamic Programming
⌨️ (3:39:59) Travelling Salesman Problem source code | Dynamic Programming
⌨️ (3:52:27) Existence of Eulerian Paths and Circuits
⌨️ (4:01:19) Eulerian Path Algorithm
⌨️ (4:15:47) Eulerian Path Algorithm | Source Code
⌨️ (4:23:00) Prim’s Minimum Spanning Tree Algorithm
⌨️ (4:37:05) Eager Prim’s Minimum Spanning Tree Algorithm
⌨️ (4:50:38) Eager Prim’s Minimum Spanning Tree Algorithm | Source Code
⌨️ (4:58:30) Max Flow Ford Fulkerson | Network Flow
⌨️ (5:11:01) Max Flow Ford Fulkerson | Source Code
⌨️ (5:27:25) Unweighted Bipartite Matching | Network Flow
⌨️ (5:38:11) Mice and Owls problem | Network Flow
⌨️ (5:46:11) Elementary Math problem | Network Flow
⌨️ (5:56:19) Edmonds Karp Algorithm | Network Flow
⌨️ (6:05:18) Edmonds Karp Algorithm | Source Code
⌨️ (6:10:08) Capacity Scaling | Network Flow
⌨️ (6:19:34) Capacity Scaling | Network Flow | Source Code
⌨️ (6:25:04) Dinic’s Algorithm | Network Flow
⌨️ (6:36:09) Dinic’s Algorithm | Network Flow | Source Code
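
For a flavor of the material, here is a minimal Dijkstra’s shortest path sketch in Python (the course’s own reference implementations live in the repository linked above; the adjacency list and graph below are illustrative):

```python
# Illustrative sketch: lazy Dijkstra's shortest path over an adjacency list, with a made-up graph.
import heapq

def dijkstra(graph, start):
    """graph: dict mapping node -> list of (neighbor, weight); returns dict of shortest distances."""
    dist = {start: 0}
    pq = [(0, start)]  # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already found a shorter path to u
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

example = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(example, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```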

Tim Corey explores Entity Framework, an amazing set of tooling around data access.

With EF Core, that tooling becomes even more powerful. So why does he still not recommend that people use EF Core?

In this video, he walks you through the best practices of Entity Framework and EF Core, points out the pitfalls to avoid, and discusses where the problems are and what to do to resolve them.