Data scientists need the ability to explain their models to executives and stakeholders, so they can understand the value and accuracy of their findings.
The ability to interpret a generated model is crucial to ensure compliance with company policies, industry standards, and government regulations.
Here’s an interesting write-up on Model Interpretability in Azure Machine Learning Services.
During the training phase of the machine learning model development cycle, model designers and evaluators can use the interpretability output of a model to verify hypotheses and build trust with stakeholders. They also use these insights to debug the model, validate that its behavior matches their objectives, and check for bias or insignificant features.
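As a minimal sketch of the "check for insignificant features" step, the snippet below uses permutation importance, a generic model-agnostic technique (not the Azure ML interpretability SDK itself). The dataset and model choices here are illustrative assumptions: features whose importance is near zero contribute little to predictions and are candidates for removal or further debugging.

```python
# Sketch: flag insignificant features via permutation importance.
# This is a generic illustration, not the Azure ML SDK API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure the drop in test accuracy;
# a near-zero drop means the model barely relies on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0)

ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

In practice, the Azure ML interpretability tooling produces similar global and per-feature importance outputs, which evaluators can review with stakeholders before deployment.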