AlexNet is one of the most popular convolutional neural network architectures and is widely used in deep learning applications.

In this article, we will employ the AlexNet model provided by PyTorch as a transfer learning framework with pre-trained ImageNet weights. The network will be trained on the CIFAR-10 dataset for a multi-class image classification problem, and we will then analyze its classification accuracy on unseen test images. Our aim is to compare the performance of the AlexNet model when it is used as a transfer learning framework against when it is trained from scratch.

deeplizard shows us how to add batch normalization to a convolutional neural network.

Content index:

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:30 What is Batch Norm?
  • 04:04 Creating Two CNNs Using nn.Sequential
  • 09:42 Preparing the Training Set
  • 10:45 Injecting Networks Into Our Testing Framework
  • 14:55 Running the Tests – BatchNorm vs. NoBatchNorm
  • 16:30 Dealing with Error Caused by TensorBoard
  • 19:49 Collective Intelligence and the DEEPLIZARD HIVEMIND
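The two-network comparison from the video can be sketched with `nn.Sequential`: the networks are identical except that one inserts batch norm layers. The layer sizes below are illustrative (assuming 28×28 single-channel inputs), not necessarily the video's exact architecture.

```python
import torch
import torch.nn as nn

# CNN without batch normalization
network_no_bn = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 12, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(12 * 4 * 4, 120), nn.ReLU(),
    nn.Linear(120, 60), nn.ReLU(),
    nn.Linear(60, 10),
)

# Same CNN, with BatchNorm2d after a conv layer and BatchNorm1d
# after a linear layer
network_bn = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.BatchNorm2d(6),                 # normalizes conv activations
    nn.Conv2d(6, 12, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(12 * 4 * 4, 120), nn.ReLU(),
    nn.BatchNorm1d(120),               # normalizes linear activations
    nn.Linear(120, 60), nn.ReLU(),
    nn.Linear(60, 10),
)
```

Note that `BatchNorm2d` takes the number of channels and `BatchNorm1d` the number of features; both learn a per-channel scale and shift during training.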

deeplizard teaches us how to set up debugging for PyTorch source code in Visual Studio Code.

Content index:

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:27 Visual Studio Code
  • 00:55 Python Debugging Extension
  • 01:30 Debugging a Python Program
  • 03:46 Manual Navigation and Control of a Program
  • 06:34 Configuring VS Code to Debug PyTorch
  • 08:44 Stepping into PyTorch Source Code
  • 10:36 Choosing the Python Environment
  • 12:30 Collective Intelligence and the DEEPLIZARD HIVEMIND
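To step into PyTorch source code (rather than only your own files), the debugger must be told not to skip library code. A minimal `launch.json` sketch with that setting is shown below; this is an assumed configuration, not necessarily the one used in the video.

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}
```

With `"justMyCode": false`, the Step Into command will descend into `torch` internals instead of stepping over them.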

deeplizard debugs the PyTorch DataLoader to see how data is pulled from a PyTorch dataset and normalized.

We see the impact of several of the constructor parameters and how the batch is built.

Content index:

  • 0:00 Welcome to DEEPLIZARD – Go to deeplizard.com
  • 0:45 Overview of Program Code
  • 3:12 How to Use Zen Mode
  • 3:56 Start the Debugging Process
  • 4:38 Initializing the Sampler Based on the Shuffle Parameter
  • 5:35 Debugging next(iter(dataloader))
  • 7:57 Building the Batch Using the Batch Size
  • 10:37 Get the Elements from Dataset
  • 18:43 Tensor to PIL Image
  • 20:41 Thanks for Contributing to Collective Intelligence
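The behavior the video steps through in the debugger can also be reproduced directly. A minimal sketch with a toy `TensorDataset` (the data values here are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 10 samples of shape (1, 4, 4) with integer labels
data = torch.arange(160, dtype=torch.float32).reshape(10, 1, 4, 4)
labels = torch.arange(10)
dataset = TensorDataset(data, labels)

# shuffle=True makes the DataLoader build a RandomSampler internally;
# shuffle=False gives a SequentialSampler
loader = DataLoader(dataset, batch_size=4, shuffle=False)

# next(iter(loader)) pulls batch_size elements one at a time from the
# dataset and collates them into batched tensors
images, targets = next(iter(loader))
print(images.shape)   # torch.Size([4, 1, 4, 4])
print(targets)        # tensor([0, 1, 2, 3])
```

Stepping into `next(iter(loader))` in the debugger shows exactly this sequence: the sampler yields indices, the dataset is indexed once per element, and the default collate function stacks the results into a batch.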

deeplizard teaches us how to normalize a dataset. We’ll see how dataset normalization is carried out in code, and we’ll see how normalization affects the neural network training process.

Content index:

  • 0:00 Video Intro
  • 0:52 Feature Scaling
  • 2:19 Normalization Example
  • 5:26 What Is Standardization
  • 8:13 Normalizing Color Channels
  • 9:25 Code: Normalize a Dataset
  • 19:40 Training With Normalized Data

This may be old news by now, but here’s an interesting write up on OpenAI’s decision to standardize development on PyTorch.

OpenAI has opted to standardise its development on PyTorch, saying the move should make it easier for its developers “to create and share optimised implementations of our models”. The AI non-profit turned profit-making concern with a non-profit arm said the move would help it increase its research productivity […]