After a blog post on floppy disks, I figured it was time to bring things back into the 21st century with an explanation of Convolutional Neural Networks.
Brandon Rohrer explains Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) in this informative video.
Here’s an interesting talk from the Microsoft Research YouTube channel by Yujia Li about Gated Graph Sequence Neural Networks. Details about the presentation and a link to the paper are below the video.
From the description:
Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
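To make the abstract’s description concrete, here is a minimal NumPy sketch of one propagation step of a gated graph neural network: each node aggregates transformed states from its neighbors, then updates its own state through a GRU-style gated cell. The weight names (`W_msg`, `Wz`, `Uz`, and so on) are illustrative placeholders, not the paper’s notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, adj, W_msg, gru):
    """One propagation step of a gated graph neural network (sketch).

    h     -- (num_nodes, dim) current node states
    adj   -- (num_nodes, num_nodes) adjacency matrix
    W_msg -- (dim, dim) message transform
    gru   -- dict of GRU weight matrices (illustrative names)
    """
    # Each node sums transformed states from its neighbors.
    m = adj @ (h @ W_msg)
    # GRU-style gated update of the node states.
    z = sigmoid(m @ gru["Wz"] + h @ gru["Uz"])          # update gate
    r = sigmoid(m @ gru["Wr"] + h @ gru["Ur"])          # reset gate
    h_tilde = np.tanh(m @ gru["Wh"] + (r * h) @ gru["Uh"])
    return (1 - z) * h + z * h_tilde

# Tiny usage example: a 4-node ring graph with 3-dimensional states.
rng = np.random.default_rng(0)
n, d = 4, 3
h = rng.standard_normal((n, d))
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
W_msg = rng.standard_normal((d, d))
gru = {k: rng.standard_normal((d, d))
       for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}
out = ggnn_step(h, adj, W_msg, gru)
```

Running this step repeatedly lets information flow further across the graph, which is what the paper then extends to output sequences.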
deeplizard has a great video on learnable parameters in PyTorch neural networks.
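As a quick sketch of the idea the video covers, counting a model’s learnable parameters in PyTorch comes down to summing the element counts of every tensor that requires gradients:

```python
import torch.nn as nn

def count_learnable_parameters(model):
    # Sum element counts over all tensors that are updated by training.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# A small example network:
# Linear(4, 3): 4*3 weights + 3 biases = 15
# Linear(3, 2): 3*2 weights + 2 biases = 8
net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))
n_params = count_learnable_parameters(net)  # 15 + 8 = 23
```

Freezing a layer (setting `requires_grad = False` on its tensors) removes it from this count, which is a handy sanity check during fine-tuning.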
Here’s an interesting video posted to the Microsoft Research YouTube channel about graph neural networks. Graph-structured data is a natural representation for many real-world systems, and several architectures have been proposed for applying deep learning methods to these structured objects.
Kaggle has a great video on how to do Deep Learning from scratch.
Here are parts 1 and 2 of a series by The Coding Train about the fundamentals of neural networks.
Part 2: The Perceptron
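The perceptron covered in part 2 is simple enough to fit in a few lines. Here’s a minimal sketch of the classic perceptron learning rule, trained on the AND function (the learning rate and epoch count are just reasonable defaults, not values from the video):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron learning rule on labeled data X, y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred
            # Nudge the weights toward the correct side of the boundary.
            w += lr * err * xi
            b += lr * err
    return w, b

# The AND function is linearly separable, so the perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]  # [0, 0, 0, 1]
```

Swapping in XOR labels shows the perceptron’s famous limitation: no single linear boundary separates the classes, which is exactly what motivates multi-layer networks.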
The YouTube channel “Two Minute Papers” explores how neural networks can help artists be more creative.
In this video, Arxiv Insights continues its dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions!
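The core trick behind many adversarial examples is the fast gradient sign method (FGSM): nudge the input in the direction that increases the model’s loss. Here’s a minimal sketch on a toy logistic-regression “network”, where the gradient has a closed form (the weights, input, and epsilon are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method against a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w; FGSM adds
    eps times the sign of that gradient.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# A toy model that classifies x correctly as class 1 (w.x + b > 0)...
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])
# ...until a small signed perturbation pushes it across the boundary.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)
```

On real image classifiers the same step, with a much smaller epsilon per pixel, can flip a prediction while the change stays invisible to a human.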
This is a continuation of my previous post.