Microsoft Research just posted this video on adversarial machine learning.

As ML is used for increasingly security-sensitive applications and is trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important.

The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years.

In this talk, I’ll survey a number of recent results in this area, both theoretical and applied, covering advances in robust statistics, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems that still perform well in practice.
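To make the robust-statistics theme concrete, here is a minimal sketch (my own illustration, not taken from the talk) of the classical one-dimensional trimmed mean: a baseline estimator that tolerates an eps-fraction of adversarially corrupted samples, whereas the naive mean can be dragged arbitrarily far by the same corruption. The function name and data are hypothetical; the recent algorithms surveyed in the talk improve on this kind of baseline, especially in high dimensions, where naive trimming degrades.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """Coordinate-wise trimmed mean: drop the eps-fraction largest and
    smallest values, then average the rest. Robust to an eps-fraction
    of adversarial outliers in one dimension."""
    samples = np.sort(np.asarray(samples, dtype=float), axis=0)
    n = samples.shape[0]
    k = int(np.ceil(eps * n))
    return samples[k:n - k].mean(axis=0)

rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=900)  # true mean is 5
poison = np.full(100, 1000.0)                     # 10% adversarial outliers
data = np.concatenate([clean, poison])

naive = data.mean()                    # pulled far away from 5
robust = trimmed_mean(data, eps=0.1)   # stays close to 5
```

The trimmed mean stays near the true mean because the 10% of planted outliers fall entirely inside the discarded top fraction, while the naive mean is shifted by roughly 100 by the same corruption.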

Talk slides:
