Generally speaking, neural networks are somewhat of a mystery. You can understand the mechanics and the math that power them, but exactly how a network arrives at its conclusions is a bit of a black box.

Here’s an interesting story on how researchers are trying to peer into the mysteries of a neural net.

Using an “activation atlas,” researchers can plumb the hidden depths of a neural network and study how it learns visual concepts. Shan Carter, a researcher at Google Brain, recently visited his daughter’s second-grade class with an unusual payload: an array of psychedelic pictures filled with indistinct shapes and warped […]

Neural networks have proven themselves very capable of performing tasks that eluded researchers for years. Learning that no one fully understands why they behave the way they do only adds to their mystique.

The fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) aim to provide insight into how neural networks come to the conclusions that they do.
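
As a rough illustration of what “peering inside” a network can mean in practice, here is a minimal sketch (not taken from the story above) that captures one hidden layer’s activations with a PyTorch forward hook. Those raw activations are the kind of material that tools like activation atlases aggregate and visualize; the tiny model, the layer choice, and the random input are all placeholders for this example.

```python
# Minimal sketch: capture a hidden layer's activations with a forward hook.
import torch
import torch.nn as nn

# Tiny stand-in convnet; a real study would inspect a trained image model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # layer 0
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: the layer we inspect
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

captured = {}

def save_activation(module, inputs, output):
    # Store the layer's output so it can be examined after the forward pass.
    captured["layer2"] = output.detach()

hook = model[2].register_forward_hook(save_activation)

image = torch.randn(1, 3, 32, 32)  # placeholder input image
logits = model(image)

print(captured["layer2"].shape)  # torch.Size([1, 16, 32, 32]): 16 feature maps
hook.remove()
```

Plotting those feature maps for many inputs, and clustering which images light them up, is one simple way to start asking what a layer has learned.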