Facebook recently open-sourced Opacus, a library for training PyTorch models with differential privacy that it says is more scalable than existing methods.

With the release of Opacus, Facebook says it hopes to provide an easier path for engineers to adopt differential privacy in AI and to accelerate differential privacy research in the field.

Typically, differential privacy entails injecting a small amount of noise into the raw data before it is fed into a machine learning model, making it difficult for malicious actors to recover the original data from the trained model. An algorithm is considered differentially private if an observer seeing its output cannot tell whether a particular individual's information was used in the computation.
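For a concrete sense of what this looks like in practice, below is a minimal sketch of a differentially private training loop using Opacus's PrivacyEngine. The toy model, synthetic data, and hyperparameter values are placeholders chosen for illustration, and the exact API surface may differ across Opacus versions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and synthetic data; any standard PyTorch setup works here.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

X = torch.randn(512, 20)
y = torch.randint(0, 2, (512,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)

# PrivacyEngine wraps the model, optimizer, and data loader so that each
# optimizer step clips per-sample gradients and adds calibrated noise.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,  # placeholder: scale of noise added to gradients
    max_grad_norm=1.0,     # placeholder: per-sample gradient clipping bound
)

# The training loop itself is unchanged from ordinary PyTorch.
for epoch in range(3):
    for batch_x, batch_y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()

# Report the privacy budget (epsilon) spent so far for a chosen delta.
print(f"epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

Rather than perturbing the raw inputs, this approach (DP-SGD) clips each example's gradient and adds Gaussian noise to the aggregated gradients, so the privacy guarantee applies to the trained model's parameters.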

Lex Fridman interviews Michael Kearns in the latest episode of his podcast.

Michael Kearns is a professor at the University of Pennsylvania and a co-author of the new book The Ethical Algorithm, which is the focus of much of our conversation, including algorithmic fairness, privacy, and ethics in general. But that is just one of the many fields in which Michael is a world-class researcher; we also touch briefly on several others, including learning theory (the theoretical foundations of machine learning), game theory, algorithmic trading, quantitative finance, computational social science, and more. This conversation is part of the Artificial Intelligence podcast.