Adding GPU compute support to Windows Subsystem for Linux (WSL) has been the #1 most requested feature since the first WSL release.

Learn how Windows and WSL 2 now support GPU-accelerated Machine Learning (GPU compute) using NVIDIA CUDA, including frameworks such as TensorFlow and PyTorch, as well as all the Docker and NVIDIA Container Toolkit support available in a native Linux environment.

Clarke Rahig will explain a bit about what it means to use your GPU to accelerate training of Machine Learning (ML) models, introduce concepts like parallelism, and then show how to set up and run a full ML workflow (including GPU acceleration) with NVIDIA CUDA and TensorFlow in WSL 2.
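
As a quick sanity check before training, a minimal sketch like the following (assuming a CUDA-enabled TensorFlow 2.x build installed inside a WSL 2 distro) confirms that TensorFlow can actually see and use the GPU:

  # Minimal GPU check inside WSL 2; assumes a CUDA-enabled TensorFlow 2.x install.
  import tensorflow as tf

  gpus = tf.config.list_physical_devices("GPU")
  print("GPUs visible to TensorFlow:", gpus)

  if gpus:
      # Run a small matrix multiply on the first GPU to confirm kernels execute.
      with tf.device("/GPU:0"):
          a = tf.random.normal((1024, 1024))
          b = tf.random.normal((1024, 1024))
          print("Checksum:", tf.reduce_sum(tf.matmul(a, b)).numpy())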

Additionally, Clarke will demonstrate how students and beginners can start building knowledge in the Machine Learning (ML) space on their existing hardware by using the TensorFlow with DirectML package.
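
For reference, a rough sketch of what that looks like, assuming the tensorflow-directml package (a TensorFlow 1.15 fork installable with pip install tensorflow-directml) rather than a CUDA build; device naming and the exact API surface should be checked against the package's documentation:

  # Sketch only: assumes the tensorflow-directml package (a TensorFlow 1.15 fork).
  import tensorflow as tf
  from tensorflow.python.client import device_lib

  # List every device TensorFlow can see; a DirectML-capable GPU should appear here
  # alongside the CPU (device naming differs from a CUDA build).
  print(device_lib.list_local_devices())

  # TF 1.x-style session to confirm a simple op runs end to end.
  with tf.Session() as sess:
      a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
      b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
      print(sess.run(tf.matmul(a, b)))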

In this deeplizard episode, learn how to prepare and process our own custom data set of sign language digits, which we'll use to train our fine-tuned MobileNet model in a future episode.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:40 Obtain the Data
  • 01:30 Organize the Data
  • 09:42 Process the Data
  • 13:11 Collective Intelligence and the DEEPLIZARD HIVEMIND
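
For orientation, an illustrative sketch (not the episode's exact code) of the processing step: it assumes the sign language digit images have already been organized into train/valid/test folders with one sub-folder per class, and the directory names are placeholders.

  # Illustrative sketch: paths are placeholders for a train/valid/test layout
  # with one sub-folder per digit class (0-9).
  from tensorflow.keras.preprocessing.image import ImageDataGenerator
  from tensorflow.keras.applications.mobilenet import preprocess_input

  def make_batches(path, shuffle=True):
      # MobileNet expects 224x224 inputs scaled with its own preprocessing function.
      return ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
          directory=path, target_size=(224, 224), batch_size=10, shuffle=shuffle)

  train_batches = make_batches("Sign-Language-Digits-Dataset/train")
  valid_batches = make_batches("Sign-Language-Digits-Dataset/valid")
  test_batches = make_batches("Sign-Language-Digits-Dataset/test", shuffle=False)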

Learn how Azure ML supports Open Source ML Frameworks and MLflow in AzureML.

Walk through a scikit-learn and a PyTorch example to see the built-in support for these ML frameworks.
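
As a flavor of what that support looks like, here is a minimal MLflow tracking sketch with scikit-learn; it is framework-agnostic, and pointing MLflow at an Azure ML workspace's tracking URI is a separate, workspace-specific step that is not shown here:

  # Minimal MLflow tracking sketch with scikit-learn (generic; the Azure ML
  # tracking URI configuration is workspace-specific and omitted).
  import mlflow
  import mlflow.sklearn
  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression

  X, y = load_iris(return_X_y=True)

  with mlflow.start_run():
      model = LogisticRegression(max_iter=200).fit(X, y)
      mlflow.log_param("max_iter", 200)
      mlflow.log_metric("train_accuracy", model.score(X, y))
      mlflow.sklearn.log_model(model, "model")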

deeplizard introduces MobileNets, a class of lightweight deep convolutional neural networks that are vastly smaller in size and faster in performance than many other popular models.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:17 Intro to MobileNets
  • 02:56 Accessing MobileNet with Keras
  • 07:25 Getting Predictions from MobileNet
  • 13:32 Collective Intelligence and the DEEPLIZARD HIVEMIND
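
For a taste of how little code this takes, a short sketch of loading the pretrained ImageNet MobileNet in Keras and classifying a single image (the image path is just a placeholder):

  # Sketch: load the pretrained ImageNet MobileNet and classify one image.
  import numpy as np
  from tensorflow.keras.applications import mobilenet
  from tensorflow.keras.preprocessing import image

  model = mobilenet.MobileNet()  # downloads ImageNet weights on first use

  img = image.load_img("samples/espresso.jpg", target_size=(224, 224))  # placeholder path
  x = mobilenet.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

  preds = model.predict(x)
  print(mobilenet.decode_predictions(preds, top=3))  # top-3 ImageNet classes with scores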

PyTorch, the popular open-source ML framework, has continued to evolve rapidly since the introduction of PyTorch 1.0, which brought an accelerated workflow from research to production.

In this video, take a deep dive into some of the most important new advances, including model-parallel distributed training, model optimization, and on-device deployment, as well as the latest libraries that support production-scale deployment working in concert with MLflow.
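
To ground just one of those topics, an illustrative sketch (a toy model, not the video's code) of exporting a model to TorchScript, one of the paths PyTorch offers for production and on-device deployment:

  # Illustrative only: trace a toy model to TorchScript for deployment.
  import torch
  import torch.nn as nn

  class TinyNet(nn.Module):
      def __init__(self):
          super().__init__()
          self.fc = nn.Linear(8, 2)

      def forward(self, x):
          return torch.relu(self.fc(x))

  model = TinyNet().eval()
  scripted = torch.jit.trace(model, torch.randn(1, 8))  # trace with an example input
  scripted.save("tiny_net.pt")  # archive loadable from the C++ and mobile runtimes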

In this video, Mandy from deeplizard demonstrates how to use the fine-tuned VGG16 Keras model that we trained in the last episode to predict on images of cats and dogs in our test set.

VIDEO SECTIONS

  • 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
  • 00:17 Predict with a Fine-tuned Model
  • 05:40 Plot Predictions with a Confusion Matrix
  • 05:16 Collective Intelligence and the DEEPLIZARD HIVEMIND
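
A rough sketch of those two steps, where the model file name and test directory are placeholders for the fine-tuned VGG16 model and cats-vs-dogs test set prepared earlier in the series:

  # Rough sketch, not the episode's exact code; file and directory names are placeholders.
  import numpy as np
  import tensorflow as tf
  from sklearn.metrics import confusion_matrix
  from tensorflow.keras.preprocessing.image import ImageDataGenerator
  from tensorflow.keras.applications.vgg16 import preprocess_input

  model = tf.keras.models.load_model("vgg16_fine_tuned.h5")

  test_batches = ImageDataGenerator(preprocessing_function=preprocess_input).flow_from_directory(
      directory="data/dogs-vs-cats/test", target_size=(224, 224),
      batch_size=10, shuffle=False)  # shuffle=False keeps labels aligned with predictions

  predictions = model.predict(test_batches, verbose=0)
  cm = confusion_matrix(y_true=test_batches.classes,
                        y_pred=np.argmax(predictions, axis=-1))
  print(cm)  # rows are true classes, columns are predicted classes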

In this tutorial, see how you can train a Convolutional Neural Network in PyTorch and convert it into an ONNX model.

Once the model is in ONNX format, you can import it into other frameworks such as TensorFlow, either for inference or for reusing the model through transfer learning.
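
For reference, a sketch of the export step using a small stand-in CNN (a model trained as in the tutorial would be exported the same way); importing the resulting file into TensorFlow is typically done with a separate converter package:

  # Sketch of the ONNX export step with a toy CNN standing in for the trained model.
  import torch
  import torch.nn as nn

  class SmallCNN(nn.Module):
      def __init__(self):
          super().__init__()
          self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
          self.fc = nn.Linear(8 * 28 * 28, 10)

      def forward(self, x):
          x = torch.relu(self.conv(x))
          return self.fc(x.flatten(start_dim=1))

  model = SmallCNN().eval()
  dummy_input = torch.randn(1, 1, 28, 28)  # export traces the model with an example input
  torch.onnx.export(model, dummy_input, "small_cnn.onnx",
                    input_names=["input"], output_names=["logits"])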

This post is the third in a series of introductory tutorials on the Open Neural Network Exchange (ONNX), an initiative from AWS, Microsoft, and Facebook to define a standard for interoperability across machine learning platforms. See: Part 1, Part 2. In this tutorial, we will train a […]