Here’s an interesting session from the SciPy 2020 virtual conference.

This is a foundational tutorial in statistics and Bayesian inference, aimed at Pythonistas interested in gaining a foundational knowledge of probability theory and the basics of parameter estimation. Knowledge of Python, `numpy`, and `matplotlib` is a prerequisite, along with curiosity and an excitement to learn new things!
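
For a taste of the kind of parameter estimation the tutorial covers, here is a minimal sketch using only `numpy` and `matplotlib`. The coin-bias example and all of its numbers are my own stand-in, not material from the session.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical coin-bias example (not from the tutorial): estimate the
# probability of heads, p, from simulated flips with a grid-based Bayes update.
rng = np.random.default_rng(42)
true_p = 0.7
flips = rng.random(50) < true_p                 # 50 simulated flips, True = heads
heads, n = flips.sum(), flips.size

p_grid = np.linspace(0, 1, 201)                 # candidate values of p
prior = np.ones_like(p_grid)                    # flat prior over [0, 1]
likelihood = p_grid**heads * (1 - p_grid)**(n - heads)
posterior = prior * likelihood
posterior /= posterior.sum() * (p_grid[1] - p_grid[0])   # normalize to a density

plt.plot(p_grid, posterior)
plt.axvline(true_p, linestyle="--", label="true p")
plt.xlabel("p (probability of heads)")
plt.ylabel("posterior density")
plt.legend()
plt.show()
```

The posterior concentrates around the true bias as more flips are observed, which is the basic parameter-estimation idea the tutorial builds on.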

The Darwin College Lecture Series hosts Professor Nassim Nicholas Taleb for a talk on the impact of extreme events.

While COVID is not directly referenced, it’s clear that the current pandemic counts as an extreme event.

Nassim Nicholas Taleb spent 21 years as a risk taker before becoming a researcher in philosophical, mathematical, and (mostly) practical problems with probability. Taleb is the author of a multivolume essay, the Incerto (The Black Swan, Fooled by Randomness, and Antifragile), covering broad facets of uncertainty. It has been translated into 36 languages. In addition to his life as a trader, Taleb has also published, as a backup of the Incerto, more than 45 scholarly papers in statistical physics, statistics, philosophy, ethics, economics, international affairs, and quantitative finance, all around the notion of risk and probability. He spent time as a professional researcher (Distinguished Professor of Risk Engineering at NYU’s School of Engineering and Dean’s Professor at U. Mass Amherst). His current focus is on the properties of systems that can handle disorder (“antifragile”). Taleb refuses all honors and anything that “turns knowledge into a spectator sport”.

Lex Fridman interviews Michael Jordan – not that Michael Jordan.

Michael I. Jordan is a professor at Berkeley and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
3:02 – How far are we in development of AI?
8:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?

Great Learning has posted this 11-hour full course, Data Science with Python for Beginners.

Index:

  • Statistics vs Machine Learning – 2:15
  • Types of Statistics – 8:55
  • Types of Data – 1:50:35
  • Correlation – 2:45:50
  • Covariance – 2:52:23
  • Basics of Python – 4:24:36
  • Python Data Structures – 4:43:58
  • Flow Control Statements in Python – 4:55:58
  • Numpy – 5:32:48
  • Pandas – 5:51:30
  • Matplotlib – 6:14:28
  • Linear Regression – 6:38:14
  • Logistic Regression – 9:54:34
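
To give a flavor of the correlation, covariance, and regression sections listed above, here is a minimal sketch using `numpy` and `pandas`. The data are invented for illustration and are not taken from the course.

```python
import numpy as np
import pandas as pd

# Invented data (not from the course): hours studied vs. exam score.
rng = np.random.default_rng(0)
hours = rng.normal(5, 2, size=100)
score = 50 + 6 * hours + rng.normal(0, 5, size=100)
df = pd.DataFrame({"hours": hours, "score": score})

print(df.cov())     # covariance matrix: joint variability, in original units
print(df.corr())    # Pearson correlation: covariance rescaled to [-1, 1]

# The same quantities with plain numpy:
print(np.cov(df["hours"], df["score"]))
print(np.corrcoef(df["hours"], df["score"]))

# A first look at linear regression, also covered later in the course:
slope, intercept = np.polyfit(df["hours"], df["score"], deg=1)
print(f"score ~ {slope:.2f} * hours + {intercept:.2f}")
```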

Siraj Raval has a video exploring a paper about genomics and creating reliable machine learning systems.

Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data that isn’t in the training set, and do so with high confidence. This has serious real-world consequences! In medicine, this could mean misdiagnosing a patient. In autonomous vehicles, this could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like these, so it’s important that we figure out how to correct this problem! I found a new, relatively obscure yet extremely fascinating paper out of Google Research that tackles this problem head-on. In this episode, I’ll explain the work of these researchers, we’ll write some code, do some math, do some visualizations, and by the end I’ll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!

Likelihood Ratios for Out-of-Distribution Detection paper: https://arxiv.org/pdf/1906.02845.pdf 

The researcher’s code: https://github.com/google-research/google-research/tree/master/genomics_ood
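
The core idea of the paper is to score each input with a likelihood ratio between a model trained on in-distribution data and a “background” model trained on perturbed data, rather than with the raw likelihood alone. The toy sketch below uses plain Gaussian densities as stand-ins for the paper’s deep generative models, so it illustrates only the scoring logic, not the actual models from the repository above.

```python
import numpy as np
from scipy.stats import norm

# Toy stand-ins for the paper's models (assumption: the actual work trains deep
# generative models on genomic sequences; here plain Gaussians play both roles).
rng = np.random.default_rng(1)
in_dist = rng.normal(0.0, 1.0, size=5000)                # "real" training data
background = in_dist + rng.normal(0.0, 2.0, size=5000)   # perturbed / noisy copy

full_mu, full_sigma = in_dist.mean(), in_dist.std()
bg_mu, bg_sigma = background.mean(), background.std()

def llr_score(x):
    """Likelihood ratio: log p_full(x) - log p_background(x).
    Higher means more in-distribution; low scores flag possible OOD inputs."""
    return norm.logpdf(x, loc=full_mu, scale=full_sigma) - \
           norm.logpdf(x, loc=bg_mu, scale=bg_sigma)

print(llr_score(0.2))   # near the training data: relatively high score
print(llr_score(6.0))   # far from the training data: much lower score
```

Subtracting the background log-likelihood cancels out generic “background” statistics, which is what lets the ratio separate in-distribution from out-of-distribution inputs better than the raw likelihood.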

Great Learning has provided this free 7-hour course on statistics for Data Science.

This course will be taught by Dr. Abhinanda Sarkar, who holds a Ph.D. in Statistics from Stanford University. He has taught applied mathematics at the Massachusetts Institute of Technology (MIT); been on the research staff at IBM; led Quality, Engineering Development, and Analytics functions at General Electric (GE); and has co-founded OmiX Labs.

These are the topics covered in this full course:

  1. Statistics vs Machine Learning – 2:22
  2. Types of Statistics [Descriptive, Prescriptive and Predictive] – 9:05
  3. Types of Data – 1:50:45
  4. Correlation – 2:46:02
  5. Covariance – 2:52:33
  6. Introduction to Probability – 4:26:55
  7. Conditional Probability with Bayes’ Theorem – 5:24:00
  8. Binomial Distribution – 6:17:01
  9. Poisson Distribution – 6:36:02
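
As a quick taste of the probability topics at the end of that list, here is a short sketch using `scipy.stats`; the numbers are invented for illustration and are not the course’s examples.

```python
from scipy.stats import binom, poisson

# Bayes' theorem with invented numbers: P(disease | positive test).
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
print(p_pos_given_disease * p_disease / p_pos)   # P(disease | positive) ~ 0.161

# Binomial distribution: probability of exactly 7 heads in 10 fair coin flips.
print(binom.pmf(7, n=10, p=0.5))                 # ~ 0.117

# Poisson distribution: probability of 3 events when the mean rate is 2.
print(poisson.pmf(3, mu=2))                      # ~ 0.180
```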

Lex Fridman interviews Grant Sanderson, a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.

OUTLINE:

0:00 – Introduction
1:56 – What kind of math would aliens have?
3:48 – Euler’s identity and the least favorite piece of notation
10:31 – Is math discovered or invented?
14:30 – Difference between physics and math
17:24 – Why is reality compressible into simple equations?
21:44 – Are we living in a simulation?
26:27 – Infinity and abstractions
35:48 – Most beautiful idea in mathematics
41:32 – Favorite video to create
45:04 – Video creation process
50:04 – Euler identity
51:47 – Mortality and meaning
55:16 – How do you know when a video is done?
56:18 – What is the best way to learn math for beginners?
59:17 – Happy moment

Lex Fridman interviews Judea Pearl, a professor at UCLA and a winner of the Turing Award, often called the Nobel Prize of computing.

Judea Pearl is a professor at UCLA and a winner of the Turing Award, which is generally recognized as the Nobel Prize of computing. He is one of the seminal figures in the fields of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas in causality more generally. These ideas are important not just for AI, but for our understanding and practice of science. In the field of AI, the idea of causality, of cause and effect, to many lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. This conversation is part of the Artificial Intelligence podcast.