In this tutorial, learn how to visualize class activation maps for debugging deep neural networks using an algorithm called Grad-CAM.

Then you’ll learn how to implement Grad-CAM using Keras and TensorFlow.

While deep learning has facilitated unprecedented accuracy in image classification, object detection, and image segmentation, one of its biggest problems is model interpretability, a core component of model understanding and model debugging.

Siraj Raval explores why a computer algorithm classifies an image the way it does. This question is critical when AI is applied to diagnostics, driving, or any other form of critical decision making.

In this video, he raises awareness around one technique in particular that I found called “Grad-CAM,” or Gradient-weighted Class Activation Mapping.
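To make the idea concrete before we get to the full implementation, here is a minimal sketch of the Grad-CAM computation using `tf.GradientTape`. The tiny CNN and the layer name `last_conv` are made up for illustration; in practice you would point this at the last convolutional layer of a real classifier.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Tiny stand-in CNN (hypothetical; any conv net with a named conv layer works).
inputs = keras.Input(shape=(28, 28, 1))
x = keras.layers.Conv2D(8, 3, activation="relu", name="last_conv")(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10)(x)  # class logits
model = keras.Model(inputs, outputs)

def grad_cam(model, image, class_index, conv_layer_name="last_conv"):
    """Return a coarse heatmap of regions that drive the score for class_index."""
    # Model mapping the input to (conv feature maps, class logits).
    grad_model = keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, logits = grad_model(image[np.newaxis, ...])
        score = logits[:, class_index]
    grads = tape.gradient(score, conv_out)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                      # keep positive influence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(model, np.random.rand(28, 28, 1).astype("float32"), class_index=3)
```

The heatmap has the spatial size of the conv layer's output; to overlay it on the input image you would upsample it back to 28×28.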

Here’s a great tutorial on using Keras to create a digit recognizer using the classic MNIST dataset.

An artificial neural network is a mathematical model that converts a set of inputs to a set of outputs through a number of hidden layers. Each hidden layer transforms the representation it receives before passing it on. In a typical fully connected network, each node of a layer takes all nodes of the previous layer as input. A model may have one or more hidden layers.
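The structure described above can be sketched in a few lines of Keras. The layer sizes here are arbitrary, chosen only to show the input → hidden → output shape of a fully connected network:

```python
import numpy as np
from tensorflow import keras

# Minimal fully connected network: 4 inputs -> one hidden layer -> 3 outputs.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),    # hidden layer: each node sees all 4 inputs
    keras.layers.Dense(3, activation="softmax"),  # output: a probability per class
])

probs = model.predict(np.random.rand(2, 4).astype("float32"), verbose=0)
```

Each row of `probs` sums to 1 because of the softmax on the output layer.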

What exactly is the difference between Keras and TensorFlow?

Keras is actually integrated into TensorFlow. It’s a wrapper around the TensorFlow backend.

Technically speaking, you could use Keras with a variety of potential backends.

But what exactly does that mean?

Basically, you are able to make any Keras call you need from within TensorFlow.

You get to enjoy the TensorFlow backend while leveraging the simplicity of Keras.
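One small illustration of that interop, assuming nothing beyond stock TensorFlow: a metric written with raw TensorFlow ops (the function name `tf_mae` is ours) dropped straight into a Keras training loop.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# A metric built from plain TensorFlow ops, used like any built-in Keras metric.
def tf_mae(y_true, y_pred):
    return tf.reduce_mean(tf.abs(tf.cast(y_true, y_pred.dtype) - y_pred))

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[tf_mae])

# Toy data just to exercise one training step.
X = np.random.rand(8, 4).astype("float32")
y = np.random.randint(0, 2, size=(8, 1)).astype("float32")
history = model.fit(X, y, epochs=1, verbose=0)
```

Keras tracks the custom metric by its function name alongside the loss.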

Here’s a great article on Medium that walks you through the process of creating a neural network with Keras.

Here’s an in-depth look at doing Natural Language Processing in the three top frameworks: TensorFlow, PyTorch, and Keras.

Before beginning a feature comparison between TensorFlow vs PyTorch vs Keras, let’s cover some soft, non-competitive differences between them. Below we present some differences between the three that should serve as an introduction to TensorFlow vs PyTorch vs Keras. These differences aren’t written in the spirit of […]

Here’s a great article on R-CNN, object detection, and the ins and outs of computer vision.

After exploring CNNs for a while, I decided to try another crucial area in computer vision: object detection. There are several popular methods in this area, including Faster R-CNN, RetinaNet, YOLOv3, and SSD. I tried Faster R-CNN in this article. Here, I want to summarise what I have learned and maybe give you a little inspiration if you are interested in this topic.

Here’s an interesting tutorial for Keras and TensorFlow that predicts employee retention.

In this tutorial, you’ll build a deep learning model that will predict the probability of an employee leaving a company. Retaining the best employees is an important factor for most organizations. To build your model, you’ll use this dataset available at Kaggle, which has features that measure employee satisfaction in a company. To create this model, you’ll use the Keras Sequential API to build the different layers of the model.
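A sketch of that idea, not the tutorial's actual code: the feature count and the random stand-in data are made up here (the real Kaggle dataset has named columns like satisfaction level and hours worked), but the Sequential shape of the model is the point.

```python
import numpy as np
from tensorflow import keras

n_features = 9  # assumed number of employee-satisfaction features

model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability the employee leaves
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data in place of the Kaggle CSV.
X = np.random.rand(32, n_features).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
p = model.predict(X[:4], verbose=0)
```

The sigmoid output keeps each prediction in [0, 1], so it reads directly as a leave probability.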

James McCaffrey recently gave a talk on binary classification in Keras. Here are his thoughts on the topic.

I recently gave a short workshop/talk at the tech company I work for on binary classification using the Keras neural network code library. The goal of a binary classification problem is to predict something that can take on one of just two possible values. For example, you might want […]

Here’s an interesting article on creating and using custom loss functions in Keras. Why would you need to do this?

Here’s one example from the article:

Let’s say you are designing a Variational Autoencoder. You want your model to be able to reconstruct its inputs from the encoded latent space. However, you also want your encoding in the latent space to be (approximately) normally distributed.
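That combination of goals is exactly where a custom loss comes in. Here is a hedged sketch (the function name and the `beta` weight are ours, not the article's): reconstruction error plus the standard closed-form KL divergence between the encoder's Gaussian and a standard normal.

```python
import tensorflow as tf

def vae_style_loss(x, x_recon, z_mean, z_log_var, beta=1.0):
    """Reconstruction error plus a KL penalty pushing the latent code toward N(0, 1)."""
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_recon), axis=-1))
    # Closed-form KL( N(z_mean, exp(z_log_var)) || N(0, 1) ), summed over latent dims.
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
    return recon + beta * kl

# Perfect reconstruction with an exactly standard-normal code gives zero loss.
loss = vae_style_loss(tf.zeros((2, 3)), tf.zeros((2, 3)),
                      tf.zeros((2, 4)), tf.zeros((2, 4)))
```

Tuning `beta` trades reconstruction fidelity against how closely the latent space matches the prior.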

Read more