Siraj Raval explores why a computer algorithm classifies an image the way it does. This question is critical when AI is applied to diagnostics, driving, or any other form of critical decision making.

In this video, he raises awareness around one technique in particular that I found: "Grad-CAM," or Gradient-weighted Class Activation Mapping.
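
To make the idea concrete, here is a minimal Grad-CAM sketch in Keras/TensorFlow. The model choice (a pretrained VGG16) and the layer name ("block5_conv3", its last conv layer) are assumptions for illustration; the gist is the same for any CNN: take the gradient of the class score with respect to the last convolutional feature maps, average those gradients into per-channel weights, and combine the feature maps into a heatmap.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

model = VGG16(weights="imagenet")

def grad_cam(model, image, conv_layer_name="block5_conv3", class_index=None):
    """Return a heatmap of the regions that drive the class score."""
    # Map the input to both the conv feature maps and the predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])  # top predicted class
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients: one weight per feature channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps; ReLU keeps only positive evidence.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    # Normalize to [0, 1] for display.
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

To visualize the result, resize the heatmap to the input resolution and overlay it on the original (preprocessed 224x224) image; bright regions are the pixels the network leaned on for that class.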

At a recent talk about the basic mechanics of neural networks, the topic of explainable AI came up.

Basically, no one can yet say for certain why a deep learning model makes the choices it does.

There’s a lot of work being done in this space, and here’s a great round-up of eight tools and frameworks for making AI more transparent.

Due to the ambiguity in Deep Learning solutions, there has been a lot of talk about how to make explainability part of the ML pipeline. Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be […]

Generally speaking, Neural Networks are somewhat of a mystery. While you can understand the mechanics and the math that power them, exactly how the network comes to its conclusions is a bit of a black box.

Here’s an interesting story on how researchers are trying to peer into the mysteries of a neural net.

Using an “activation atlas,” researchers can plumb the hidden depths of a neural network and study how it learns visual concepts. Shan Carter, a researcher at Google Brain, recently visited his daughter’s second-grade class with an unusual payload: an array of psychedelic pictures filled with indistinct shapes and warped […]
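
As a rough sketch of the first step behind an activation atlas: collect a layer's activations for many images and project them to 2D, so that images the network "sees" as similar land near each other. The full technique goes further (averaging activations per grid cell and rendering each cell with feature visualization, and the original work uses UMAP rather than t-SNE); the layer name and the placeholder input batch below are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

# Pretrained CNN; expose one conv layer's activations as the output.
model = tf.keras.applications.VGG16(weights="imagenet")
feature_model = tf.keras.Model(
    model.inputs, model.get_layer("block5_conv3").output
)

def collect_activations(images):
    """Spatially average a conv layer's activations: one vector per image."""
    feats = feature_model.predict(images)  # (N, H, W, C)
    return feats.mean(axis=(1, 2))         # (N, C)

# Placeholder data; in practice these are preprocessed 224x224 RGB images.
images = np.random.rand(64, 224, 224, 3).astype("float32")
vectors = collect_activations(images)

# Project the activation vectors to 2D; nearby points correspond to
# inputs that activate the layer in similar ways.
coords = TSNE(n_components=2, perplexity=20).fit_transform(vectors)
```

Plotting `coords` (and thumbnails of the images at each point) gives a crude map of the visual concepts the layer has learned, which is the intuition the atlas builds on.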

Neural networks have proven themselves very capable of performing tasks that have eluded researchers for years. When you find out that no one really knows why neural networks behave the way they do, it only adds to their mystique.

The fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) aim to provide insight into how neural networks come to the conclusions that they do.