
If you’ve ever attended one of my neural network talks, you know I like to point out that what neural networks actually learn is not what you think they learn.

As we come to rely on AI to make increasingly important decisions, we may want to pause and realize that our training data can become an attack vector for bad actors.

The papers, titled “Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning” [PDF] and “Backdooring and Poisoning Neural Networks with Image-Scaling Attacks” [PDF], explore how the preprocessing phase of machine learning presents an opportunity to fiddle with neural network training in a way that isn’t easily detected. The idea: secretly poison the training data so that the software later makes bad decisions and predictions.
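To make the core trick concrete, here is a minimal sketch of the image-scaling idea: only the source pixels that survive downscaling carry the hidden payload, while the rest of the full-size image is what a human reviewer sees. This is my own toy illustration with synthetic data and a hand-rolled nearest-neighbour downscaler (the function names `nearest_downscale` and `embed_payload` are mine); the actual attacks described in the papers target the real resize routines in frameworks like OpenCV and TensorFlow, which are more involved than this.

```python
# Toy illustration of an image-scaling attack: the crafted image looks like
# the benign "cover" at full size, but downscales exactly to the hidden payload.
import numpy as np

def nearest_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Simple nearest-neighbour downscaler: keeps one source pixel per output pixel."""
    h, w = img.shape[:2]
    rows = (np.arange(out_h) * h) // out_h   # source rows that get sampled
    cols = (np.arange(out_w) * w) // out_w   # source columns that get sampled
    return img[rows[:, None], cols]

def embed_payload(cover: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite only the pixels of `cover` that the downscaler will sample,
    so the full-size image still looks like `cover` but downscales to `payload`."""
    h, w = cover.shape[:2]
    ph, pw = payload.shape[:2]
    rows = (np.arange(ph) * h) // ph
    cols = (np.arange(pw) * w) // pw
    crafted = cover.copy()
    crafted[rows[:, None], cols] = payload
    return crafted

# Synthetic demo: a 512x512 "benign" cover image and a 64x64 hidden payload.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)
payload = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

crafted = embed_payload(cover, payload)
changed = np.mean(np.any(crafted != cover, axis=-1))
assert np.array_equal(nearest_downscale(crafted, 64, 64), payload)
print(f"Only {changed:.1%} of the cover pixels were touched, "
      "yet the downscaled image is exactly the payload.")
```

Because fewer than two percent of the pixels change in this toy example, the crafted image passes a casual visual inspection, yet every model trained on the downscaled version sees only the attacker’s payload.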
