In this video from deeplizard, we see how to experiment with large numbers of hyperparameter values easily while still keeping the training loop and results organized.
In this latest deeplizard video, they code a training loop RunBuilder class that allows for multiple runs with varying parameters. This aids experimentation with the neural network training process. Remember, it’s called data science, and experimentation is part and parcel of the process.
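A minimal sketch of what such a run builder might look like: a dictionary of hyperparameter lists is turned into one named tuple per combination via a Cartesian product. The class name `RunBuilder` and its `get_runs` method follow the video's description; the exact implementation details here are an assumption.

```python
from collections import OrderedDict, namedtuple
from itertools import product

class RunBuilder:
    # Sketch of a run builder: expands a dict of hyperparameter lists
    # into one named tuple per combination, so a single loop can cover
    # every run. Details are an assumption based on the blurb above.
    @staticmethod
    def get_runs(params):
        Run = namedtuple('Run', params.keys())
        return [Run(*values) for values in product(*params.values())]

params = OrderedDict(lr=[0.01, 0.001], batch_size=[100, 1000])
runs = RunBuilder.get_runs(params)
# Each run carries its own parameter values, e.g. runs[0].lr, runs[0].batch_size
```

Iterating `for run in runs:` then keeps the training loop itself unchanged while every parameter combination is tried.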
This video picks up the continuing deeplizard project of building a deep Q-network to master the cart-pole problem. Learn how to manage the environment and process the images that will be passed to the deep Q-network as input.
deeplizard examines the difference between concatenating and stacking tensors. We’ll look at three examples: one with PyTorch, one with TensorFlow, and one with NumPy.
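The core distinction can be sketched with the NumPy pair from that list (the same idea applies to `torch.cat`/`torch.stack` and `tf.concat`/`tf.stack`): concatenating joins tensors along an existing axis, while stacking inserts a new axis.

```python
import numpy as np

t1 = np.array([1, 2, 3])
t2 = np.array([4, 5, 6])

# Concatenate: join along an EXISTING axis - the rank stays the same.
joined = np.concatenate((t1, t2))   # shape (6,)

# Stack: insert a NEW axis first, then join along it - the rank grows.
stacked = np.stack((t1, t2))        # shape (2, 3)
```

A common rule of thumb: stack when the tensors do not yet have the axis you want to join on (e.g. batching individual images), concatenate when they already do.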
In this video, deeplizard builds out a deep Q-network to tackle the cart-pole problem.
In this video, deeplizard explores how to use TensorBoard to visualize metrics of our PyTorch CNN during the training process.
In this video from deeplizard, learn how to build, plot, and interpret a confusion matrix using PyTorch. They also cover locally disabling PyTorch gradient tracking (i.e., computational graph generation).
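The construction step can be sketched in a few lines; here plain NumPy arrays stand in for the PyTorch predictions the video uses, and the row-equals-true-class, column-equals-predicted-class orientation is an assumption to check against whatever plotting convention you follow.

```python
import numpy as np

def confusion_matrix(targets, preds, num_classes):
    # Rows = true classes, columns = predicted classes (an assumed
    # orientation); each (true, predicted) pair bumps one cell.
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(targets, preds):
        cm[t, p] += 1
    return cm

targets = [0, 1, 2, 2, 1]
preds   = [0, 2, 2, 2, 1]
cm = confusion_matrix(targets, preds, num_classes=3)
# Diagonal cells count correct predictions; off-diagonal cells show
# which classes get confused with which.
```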
In this episode from deeplizard, learn how to build the training loop for a convolutional neural network using Python and PyTorch.
Watch this video on Deep Q-learning to implement your own deep Q-network in code.
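As a hedged illustration of the update a deep Q-network is trained toward, here is the Bellman target computed for a single transition; a tiny tabular Q-table stands in for the network (an assumption for brevity — in a DQN the table is replaced by a neural network and the update by a gradient step on the squared error).

```python
import numpy as np

# Tabular stand-in for the Q-network: 2 states x 2 actions, all zeros.
q = np.zeros((2, 2))
lr, gamma = 0.5, 0.9

# One observed transition: state 0, action 1, reward 1.0, next state 1.
s, a, r, s_next = 0, 1, 1.0, 1

# Bellman target: reward plus discounted best value of the next state.
target = r + gamma * q[s_next].max()

# Move the estimate a fraction (lr) of the way toward the target.
q[s, a] += lr * (target - q[s, a])
```

The deep variant minimizes `(target - Q(s, a))**2` with backpropagation instead of the explicit increment shown here.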