Microsoft Research just posted this video on adversarial machine learning.

As ML is being used for increasingly security-sensitive applications and is trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important.

The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years.

In this talk, I’ll survey a number of recent results in this area, both theoretical and applied, covering advances in robust statistics, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems that still perform well in practice.
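To make the robust-statistics angle concrete, here is a minimal sketch (not from the talk itself) of why worst-case noise matters for even the simplest estimation task: a single adversarially placed point can move the sample mean arbitrarily far, while the median barely budges. The data and constants below are illustrative.

```python
# Illustrative sketch: mean vs. median under a data-poisoning outlier.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=99)   # honest samples near 5
poisoned = np.append(clean, 1e6)                  # one adversarial point

print(abs(np.mean(poisoned) - 5.0))   # large: the mean is not robust
print(abs(np.median(poisoned) - 5.0)) # small: the median tolerates it
```

The provably robust algorithms surveyed in the talk generalize this idea to high-dimensional settings, where naive coordinate-wise medians no longer suffice.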

Talk slides:

Siraj Raval just posted this video on defending AI against adversarial attacks.

Machine Learning technology isn’t perfect; it’s vulnerable to many different types of attacks! In this episode, I’ll explain 2 common types of attacks and 2 common types of defenses using various code demos from across the Web. There’s some really dope mathematics involved with adversarial attacks, and it was a lot of fun reading about the ‘cat and mouse’ game between new attack techniques, followed by new defense techniques. I encourage anyone new to the field who finds this stuff interesting to learn more about it. I definitely plan to. Let’s look into some math, code, and examples. Enjoy!
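One of the most common attacks covered in material like this is the fast gradient sign method (FGSM): perturb an input by a small step in the direction of the sign of the loss gradient. Below is a minimal, hedged sketch using a hand-rolled logistic classifier in numpy; the weights, input, and step size are illustrative, not taken from the video's demos.

```python
# Minimal FGSM sketch against a linear logistic classifier.
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Return x perturbed to increase the logistic loss.

    Loss: L = log(1 + exp(-y * (w.x + b))), with label y in {-1, +1}.
    Gradient wrt x: -y * sigmoid(-y * (w.x + b)) * w.
    FGSM steps by eps * sign of that gradient.
    """
    margin = y * (np.dot(w, x) + b)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w  # dL/dx
    return x + eps * np.sign(grad)

# Toy example: a correctly classified point flips after the attack.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, -0.1])   # w.x + b = 0.4 > 0, predicted +1
y = 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(np.dot(w, x) + b)     # positive: correct before the attack
print(np.dot(w, x_adv) + b) # negative: misclassified after
```

Defenses such as adversarial training simply fold points like `x_adv` back into the training set, which is what drives the cat-and-mouse dynamic the video describes.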

Slideshow for this video:

Demo project: