Lex Fridman interviews Simon Sinek in the latest episode of his podcast.

Simon Sinek is the author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He is one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that solve big, difficult challenges. This conversation is part of the Artificial Intelligence podcast.

OUTLINE:
0:00 – Introduction
3:50 – Meaning of life as an infinite game
10:13 – Optimism
13:30 – Mortality
17:52 – Hard work
26:38 – Elon Musk, Steve Jobs, and leadership

Kurzgesagt – In a Nutshell explores what actually happens when the coronavirus infects a human and what we should all do.

In December 2019 the Chinese authorities notified the world that a virus was spreading through their communities. In the following months it spread to other countries, with cases doubling within days. The virus is “severe acute respiratory syndrome-related coronavirus 2” (SARS-CoV-2), which causes the disease called COVID-19 and which everyone simply calls the coronavirus.

By now, it’s clear that COVID-19 has become a significant threat to public health globally, prompting many governments to undertake draconian measures to contain or curtail the epidemic.

Most governments are relying on travel restrictions, isolation, and social distancing as the primary methods of stopping the spread of the virus.

What if we were to be more surgical in our approach using location data collected from our devices?

We start with the subset of people we know tested positive. Using cellphone tower data, we can figure out where these infected people have been and how long they stayed in each location. Epidemiologists tell us that transmission is most likely to occur between people who are within one meter of each other for 15 minutes or more. We know that infections can also happen because the virus can survive on surfaces, and the analysis could incorporate this observation too, but for simplicity’s sake we leave it out of the analysis here.
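To make the matching step concrete, here is a minimal Python sketch, assuming we already have per-person stay records of the form (person, place, start, end) and a set of known-positive IDs. The record format and function names are illustrative, not an actual contact-tracing API, and real cell-tower data is far coarser than one meter, so “same place” stands in for the proximity criterion here:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical record format: (person_id, place_id, start_time, end_time).
CONTACT_MINUTES = 15  # the epidemiologists' 15-minute exposure threshold

def overlap_minutes(a_start, a_end, b_start, b_end):
    """Minutes during which two stays at the same place overlap in time."""
    start = max(a_start, b_start)
    end = min(a_end, b_end)
    return max(timedelta(0), end - start).total_seconds() / 60

def likely_contacts(stays, positive_ids):
    """Return people who shared a place with a known-positive person
    for at least CONTACT_MINUTES."""
    by_place = defaultdict(list)
    for person, place, start, end in stays:
        by_place[place].append((person, start, end))

    flagged = set()
    for visits in by_place.values():
        for p1, s1, e1 in visits:
            if p1 not in positive_ids:
                continue
            for p2, s2, e2 in visits:
                if p2 in positive_ids:
                    continue
                if overlap_minutes(s1, e1, s2, e2) >= CONTACT_MINUTES:
                    flagged.add(p2)
    return flagged
```

In practice the proximity test would need GPS- or Bluetooth-level resolution rather than tower-level place IDs, but the structure of the analysis stays the same.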

If you’ve ever attended one of my neural network talks, you know that I point out that what neural networks learn is often not what you think they learn.

As we come to rely on AI to make increasingly important decisions, we may want to pause and realize that our training data could be used as an attack vector by bad actors.

The papers, titled “Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning” [PDF] and “Backdooring and Poisoning Neural Networks with Image-Scaling Attacks” [PDF], explore how the preprocessing phase involved in machine learning presents an opportunity to tamper with neural network training in a way that isn’t easily detected. The idea: secretly poison the training data so that the software later makes bad decisions and predictions.
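The core trick is easiest to see with a naive nearest-neighbour downscaler, which reads only a sparse grid of input pixels: an attacker can overwrite just those sampled pixels so the image still looks benign at full resolution but becomes an entirely different image after preprocessing. Here is a rough, self-contained Python sketch of that idea, using toy random “images” and made-up function names rather than the papers’ code:

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Naive nearest-neighbour downscaling: keep only a sparse grid of pixels."""
    in_h, in_w = img.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    return img[rows][:, cols]

def craft_attack_image(benign, target, out_h, out_w):
    """Overwrite only the pixels the downscaler will sample, so the result
    looks like `benign` at full resolution but downscales to `target`."""
    attack = benign.copy()
    in_h, in_w = benign.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    attack[np.ix_(rows, cols)] = target
    return attack

# Toy demo: the attack image differs from the benign one in well under 1% of
# pixels, yet downscales exactly to the attacker's target image.
benign = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
attack = craft_attack_image(benign, target, 64, 64)

assert np.array_equal(nearest_downscale(attack, 64, 64), target)
print("fraction of pixels changed:", np.mean(np.any(attack != benign, axis=-1)))
```

Real image pipelines use more sophisticated resampling than this toy downscaler, and the papers cover those cases and defenses against them; the sketch only illustrates why the preprocessing step, rather than the model itself, is the attack surface.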