Google late last month debuted experimental tests for its TensorFlow Privacy library, designed to measure how much machine learning models leak identifiable personal information from their training data sets, such as models used for biometric facial recognition.

The test module enables developers to “assess the privacy properties of their classification models,” according to Google. The testing technique is known as a membership inference attack: an adversary tries to determine whether a specific example was part of the data set a model was trained on, and the more reliably the attack succeeds, the more the model has memorized, and therefore leaks, about its training data.