Siraj Raval just posted this video on defending AI against adversarial attacks

Machine Learning technology isn’t perfect; it’s vulnerable to many different types of attacks! In this episode, I’ll explain 2 common types of attacks and 2 common types of defenses using various code demos from across the Web. There’s some really dope mathematics involved in adversarial attacks, and it was a lot of fun reading about the ‘cat and mouse’ game between new attack techniques and the new defense techniques that follow them. I encourage anyone new to the field who finds this stuff interesting to learn more about it. I definitely plan to. Let’s look into some math, code, and examples. Enjoy!

Slideshow for this video:
https://colab.research.google.com/drive/19N9VWTukXTPUj9eukeie55XIu3HKR5TT

Demo project:
https://github.com/jaxball/advis.js
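
One classic gradient-based attack in this family is the Fast Gradient Sign Method (FGSM): nudge the input in the direction that most increases the model’s loss. As a minimal sketch (not the video’s code; the weights and data points below are made up for illustration), here it is against a tiny hand-built logistic classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic classifier with hand-picked weights (illustrative only)
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For p = sigmoid(w.x + b) with binary cross-entropy loss,
    # the gradient of the loss w.r.t. the input is dL/dx = (p - y) * w.
    p = predict(x)
    grad = (p - y) * w
    # FGSM step: move each input feature by eps in the sign of the gradient,
    # i.e. the direction that increases the loss the fastest.
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5])   # clean input whose true label is y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.5)
print(predict(x), predict(x_adv))  # model confidence drops on the perturbed input
```

The same one-step idea scales to deep networks: backpropagate the loss to the input pixels instead of the weights, then take a sign step.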


Gradient Descent is the workhorse behind much of Machine Learning. When you fit a machine learning model to a training dataset, you’re almost certainly using Gradient Descent.

The process can optimize parameters in a wide variety of settings. Since it’s so fundamental to Machine Learning, Josh Starmer of StatQuest decided to make a “step-by-step” video that shows exactly how it works.
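
As a minimal sketch of the idea (a toy example, not Josh’s code; the three data points are made up for illustration): fit a line by repeatedly stepping the intercept and slope opposite the gradient of the sum of squared residuals.

```python
import numpy as np

# Three made-up (x, y) points to fit a line through
x = np.array([0.5, 2.3, 2.9])
y = np.array([1.4, 1.9, 3.2])

# Parameters of the line y = intercept + slope * x, starting from a guess
intercept, slope = 0.0, 1.0
lr = 0.01  # learning rate: how big a step to take each iteration

for _ in range(2000):
    pred = intercept + slope * x
    # Partial derivatives of the sum of squared residuals
    d_intercept = -2 * np.sum(y - pred)
    d_slope = -2 * np.sum(x * (y - pred))
    # Step each parameter downhill, against its gradient
    intercept -= lr * d_intercept
    slope -= lr * d_slope

print(round(intercept, 2), round(slope, 2))
```

After enough iterations the parameters settle at the same values a closed-form least-squares fit would give, which is the whole point: the loop needs only gradients, so it works even for models with no closed-form solution.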

Heads up: there is some singing.