At a recent talk about the basic mechanics of neural networks, the topic of explainable AI came up.

Put simply, no one can yet say for certain why a deep learning model makes the choices it does.

There’s a lot of work being done in this space, and here’s a great roundup of eight tools and frameworks for making AI more transparent.

Because deep learning solutions are so opaque, there has been a lot of discussion about how to build explainability into the ML pipeline. Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be […]

Understanding what your AI models are doing is important from both a functional and an ethical standpoint. In this episode we discuss what it means to develop AI in a transparent way.

Mehrnoosh introduces an awesome interpretability toolkit that lets you apply different state-of-the-art interpretability methods to explain your model’s decisions.

By applying this toolkit during the training phase of the AI development cycle, you can use a model’s interpretability output to verify hypotheses and build trust with stakeholders.

You can also use the insights for debugging, validating model behavior, and checking for bias. The toolkit can even be used at inference time to explain the predictions of a deployed model to end users; a minimal sketch of that workflow follows.
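The episode doesn’t spell out an API here, so as a hedged illustration only, the sketch below uses the open-source SHAP library as a stand-in (an assumption on my part, not necessarily the toolkit Mehrnoosh discusses): global feature attributions for the training-time checks, and a per-prediction explanation for inference time.

```python
# Sketch of a typical interpretability workflow. SHAP is used as a
# stand-in library; it is not necessarily the toolkit from the episode.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train a simple model to have something to explain.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Training-time check: mean |SHAP| per feature shows what the model
# actually relies on -- useful for verifying hypotheses and spotting bias.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:>8}: {score:.3f}")

# Inference-time check: attributions for one deployed prediction,
# suitable for showing an end user why the model said what it said.
row = X_test.iloc[[0]]
print("prediction:", model.predict(row)[0])
print("attributions:", dict(zip(X.columns, explainer.shap_values(row)[0].round(3))))
```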


As a machine learning project grows, so should its infrastructure. In this talk, Alejandro Saucedo covers some of the key trends in machine learning operations, as well as libraries to watch in 2019.

The talk is based on the “Awesome Machine Learning Operations” list maintained by The Institute for Ethical AI & Machine Learning, and focuses on the topics of reproducibility, orchestration and explainability.
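Reproducibility, the first of those topics, is easy to illustrate. As a minimal sketch of my own (not an example from the talk), pinning every source of randomness means the same split, the same model, and the same score come out of every run:

```python
# Minimal reproducibility sketch: pin one seed everywhere randomness enters.
import random

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # a single pinned seed, recorded alongside the experiment

random.seed(SEED)
np.random.seed(SEED)

# Synthetic data; deterministic because the NumPy seed is fixed above.
X = np.random.rand(500, 10)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Pass the seed explicitly so the split and the model are deterministic too.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=SEED)
model = RandomForestClassifier(n_estimators=100, random_state=SEED)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))  # identical every run
```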