TensorFlow gets a new release.

Here’s a great round-up of the new features and improvements.

Google just released TensorFlow 2.2.0 with many new features and improvements, including the new Profiler for TensorFlow 2 for CPU/GPU/TPU. TensorFlow 2.2.0 also drops support for Python 2, which reached end of life in January 2020. There are many other features and improvements with this version of […]
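If you want to try the new Profiler, one convenient entry point is the Keras TensorBoard callback, which writes a profiling trace you can inspect in TensorBoard's Profile tab. Below is a minimal sketch; the toy model, data, and log directory are placeholder assumptions, not anything prescribed by the release notes.

```python
import tensorflow as tf

# Hypothetical toy model and data, just to have something to profile.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((1024, 20))
y = tf.random.normal((1024, 1))

# Profile a training batch and write the trace where TensorBoard can find it.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/profile", profile_batch=2)
model.fit(x, y, epochs=2, batch_size=32, callbacks=[tb_callback])
```

After training, pointing TensorBoard at the log directory exposes the captured trace under the Profile tab.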

After AlexNet, the first CNN-based architecture to win the ImageNet 2012 competition, every subsequent winning architecture used more layers in a deep neural network to reduce the error rate.

This works for a small number of layers, but as the number of layers grows, we run into a common deep learning problem: vanishing/exploding gradients.

This causes gradients to shrink toward zero or grow uncontrollably large during backpropagation.

Beyond a certain depth, the training and test error rates also start to increase.

Residual Block:
In order to solve the vanishing/exploding gradient problem, this architecture introduces the residual network. A residual network uses a technique called skip connections: a skip connection bypasses a few layers and feeds its input directly into the output of a later layer.
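The idea is easiest to see in code. Here is a minimal sketch of a residual block using the Keras API; the filter count and layer arrangement are illustrative assumptions, not the exact ResNet configuration. The input is saved as a shortcut, passed through two convolutions, and then added back to their output.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64, kernel_size=3):
    """A minimal residual block: two conv layers plus a skip connection."""
    shortcut = x  # the skip connection: the input bypasses the conv layers
    y = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    # Add the shortcut back to the block output, then apply the activation.
    y = layers.Add()([shortcut, y])
    return layers.ReLU()(y)

# Example: stack a couple of residual blocks on a small input.
inputs = tf.keras.Input(shape=(32, 32, 64))
x = residual_block(inputs)
x = residual_block(x)
model = tf.keras.Model(inputs, x)
```

Because the shortcut carries the input forward unchanged, gradients can flow directly through the addition, which is what lets much deeper networks train without the error rate climbing.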

A colleague of mine, Ayman El-Ghazali, worked through data from the state of Maryland.

Code is available on GitHub.

I chose not to source my data directly from Maryland’s State Government site because the format was not easy to use. The official Maryland Government data basically has each day as a column and the Zip Codes as rows, which is not as easy to work with as the data provided by the site above. So there may be a few discrepancies between the data on a day-to-day basis, but the totals are identical. You can read about their methodologies for retrieving data from various official State Government websites and the quality of each.
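For context on why that wide layout is awkward, here is a sketch of the pandas reshape you would need if you did start from the official format. The column names and values are purely illustrative assumptions, not taken from either dataset.

```python
import pandas as pd

# Hypothetical frame in the official layout: one row per Zip Code,
# one column per day. Names and numbers are made up for illustration.
wide = pd.DataFrame({
    "zip_code": ["21201", "21202"],
    "2020-04-01": [10, 5],
    "2020-04-02": [12, 7],
})

# Reshape to one row per (zip_code, date), which is easier to chart and join.
long = wide.melt(id_vars="zip_code", var_name="date", value_name="cases")
long["date"] = pd.to_datetime(long["date"])
print(long)
```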