Lex Fridman interviews Michael Jordan – not that Michael Jordan.

Michael I. Jordan is a professor at Berkeley and one of the most influential researchers in the history of machine learning, statistics, and artificial intelligence. His work has been cited over 170,000 times, and he has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
3:02 – How far are we in development of AI?
8:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?

Siraj Raval has a video exploring a paper about genomics and creating reliable machine learning systems.

Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data that isn't in the training set, and they do so with high confidence. This has serious real-world consequences! In medicine, this could mean misdiagnosing a patient. In autonomous vehicles, it could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like that, so it's important that we figure out how to correct this problem! I found a relatively obscure yet extremely fascinating new paper out of Google Research that tackles this problem head on. In this episode, I'll explain the researchers' work, we'll write some code, do some math, do some visualizations, and by the end I'll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!
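The overconfidence the video describes is easy to demonstrate with a toy. Below is a minimal sketch (hypothetical weights, not the paper's method): a linear softmax classifier's logits grow with the input, so its confidence saturates toward 1.0 for inputs far outside anything it was trained on.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(x, w=(1.0, -1.0)):
    # Toy two-class linear classifier with made-up weights w.
    return softmax([w[0] * x, w[1] * x])

in_dist = predict(0.5)   # input near the (hypothetical) training range
ood = predict(50.0)      # input far outside it

print(max(in_dist))  # moderate confidence, ~0.73
print(max(ood))      # ~1.0: maximally confident on data it has never seen
```

The point is that softmax confidence measures distance from the decision boundary, not familiarity with the input, which is why out-of-distribution detection needs a separate mechanism.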

Great Learning has provided this free 7 hour course on statistics for Data Science.

This course will be taught by Dr. Abhinanda Sarkar, who holds a Ph.D. in Statistics from Stanford University. He has taught applied mathematics at the Massachusetts Institute of Technology (MIT), served on the research staff at IBM, led Quality, Engineering Development, and Analytics functions at General Electric (GE), and co-founded OmiX Labs.

These are the topics covered in this full course:

Statistics vs Machine Learning – 2:22

Types of Statistics [Descriptive, Prescriptive and Predictive] – 9:05

Types of Data – 1:50:45

Correlation – 2:46:02

Covariance – 2:52:33

Introduction to Probability – 4:26:55

Conditional Probability with Bayes’ Theorem – 5:24:00
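To give a flavor of the last topic above, here is Bayes' theorem applied to a classic screening example with made-up numbers (1% prevalence, a test with 95% sensitivity and 90% specificity):

```python
def bayes_posterior(prior, sensitivity, specificity):
    # P(disease | positive test) = P(+ | D) * P(D) / P(+),
    # where P(+) sums over diseased and healthy cases.
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(posterior, 3))  # ~0.088: a positive result is far from certain
```

The counterintuitive result, that a positive test still means under a 9% chance of disease, is exactly why conditional probability deserves its own section of the course.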

Lex Fridman interviews Grant Sanderson, a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.

OUTLINE:

0:00 – Introduction
1:56 – What kind of math would aliens have?
3:48 – Euler’s identity and the least favorite piece of notation
10:31 – Is math discovered or invented?
14:30 – Difference between physics and math
17:24 – Why is reality compressible into simple equations?
21:44 – Are we living in a simulation?
26:27 – Infinity and abstractions
35:48 – Most beautiful idea in mathematics
41:32 – Favorite video to create
45:04 – Video creation process
50:04 – Euler identity
51:47 – Mortality and meaning
55:16 – How do you know when a video is done?
56:18 – What is the best way to learn math for beginners?
59:17 – Happy moment

Lex Fridman interviews Judea Pearl, a professor at UCLA and a winner of the Turing Award, generally recognized as the Nobel Prize of computing.

Judea Pearl is a professor at UCLA and a winner of the Turing Award, which is generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the fields of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, as well as profound ideas about causality. These ideas are important not just for AI, but for our understanding and practice of science. In the field of AI, the idea of causality, of cause and effect, to many lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. This conversation is part of the Artificial Intelligence podcast.

Current statistical tools place the burden of valid, reproducible statistical analyses on the user. Users must have deep knowledge of statistics to not only identify their research questions, hypotheses, and domain assumptions but also select valid statistical tests for their hypotheses. As quantitative data become increasingly available in all disciplines, data analysis will continue to become a common task for people who may not have statistical expertise. Tea, a high-level declarative language for automating statistical test selection and execution, abstracts the details of analyses from users, empowering them to perform valid analyses by expressing their goals and domain knowledge. In this talk, I will discuss the design and implementation of Tea, lessons learned through the process, and other ongoing work in this vein.
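The core idea, that the user declares variable types and assumptions while the system selects a valid test, can be sketched in a few lines. This is a conceptual illustration only, not Tea's actual syntax or API (the names and the decision rules here are my own simplification):

```python
def select_test(outcome_type, group_type, normal=False, equal_variance=False):
    """Pick a defensible two-group comparison test from declared
    data types and assumptions (a toy version of Tea's idea)."""
    if group_type != "nominal":
        raise ValueError("this sketch only handles a nominal grouping variable")
    if outcome_type == "ratio" and normal and equal_variance:
        return "Student's t-test"
    if outcome_type == "ratio" and normal:
        return "Welch's t-test"
    if outcome_type in ("ratio", "ordinal"):
        # No normality assumption declared: fall back to a rank-based test.
        return "Mann-Whitney U test"
    if outcome_type == "nominal":
        return "Chi-squared test of independence"
    raise ValueError("no applicable test in this sketch")

print(select_test("ratio", "nominal", normal=True))  # Welch's t-test
print(select_test("ordinal", "nominal"))             # Mann-Whitney U test
```

The value of the declarative framing is that invalid choices (e.g. a t-test on ordinal data) are simply unreachable, rather than left to the user's statistical training.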

Worried about a shark attack when you go to the beach? Then you need to watch this video.

From causation and correlation, to relative and absolute risk, Jennifer Rogers explains how to figure out if the stats we are presented in newspapers are accurate.

Jennifer Rogers holds the position of Director of Statistical Consultancy Services at the University of Oxford, having previously worked as a Post-Doctoral Research Fellow in the Department of Statistics, funded by the National Institute for Health Research. She has a special interest in the development and application of novel statistical methodologies, particularly in medicine. Her main area of expertise is the analysis of recurrent events; her recent research has focused on developing and implementing appropriate methodology for the analysis of repeat hospitalisations in patients with heart failure, with further applications in medicine (such as epilepsy and cancer) as well as in retail and engineering. She works alongside other statisticians, clinicians, computer scientists, industry experts, and regulators.
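The relative-versus-absolute-risk distinction the video covers comes down to simple arithmetic. With hypothetical numbers, a risk factor that "doubles your risk" (relative risk of 2.0) can correspond to a vanishingly small absolute increase when the baseline risk is tiny:

```python
def risk_summary(baseline, exposed):
    # baseline: event probability without the risk factor
    # exposed:  event probability with the risk factor
    return {
        "relative_risk": exposed / baseline,
        "absolute_increase": exposed - baseline,
    }

s = risk_summary(baseline=0.0001, exposed=0.0002)
print(s["relative_risk"])      # 2.0 -- the "risk doubles!" headline
print(s["absolute_increase"])  # 0.0001 -- one extra case per 10,000 people
```

This is why a headline quoting only relative risk can be technically accurate yet wildly misleading about the danger to any individual reader.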

Learn the essentials of statistics in this complete (and free!) course from freeCodeCamp.org.

This course introduces the various methods used to collect, organize, summarize, interpret and reach conclusions about data. An emphasis is placed on demonstrating that statistics is more than mathematical calculations. By using examples gathered from real life, students learn to use statistical methods as analytical tools to develop generalizations and meaningful conclusions in their field of study.
