Brandon Rohrer explains backpropagation using plumbing.
Two Minute Papers explores the paper “Adversarial Examples Are Not Bugs, They Are Features” in this video.
Jon Wood has created another video showing how to use ML.NET 1.4 (currently in preview) to build a deep neural network model that classifies images.
In this edition of Crash Course AI, Jabrils explains how neural networks learn.
In this video, learn how an artificial neural network works and how to build one yourself in Python.
Source code is at https://github.com/jonasbostoen/simple-neural-network
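If you want a feel for what such a from-scratch network looks like before watching, here is a minimal sketch of a single-layer network trained with a simple gradient-style update rule, in the same spirit as the linked repo. The toy dataset, seed, and iteration count are my own assumptions, not taken from the video or repo.

```python
import numpy as np

# Sigmoid activation and its derivative (expressed in terms of the
# sigmoid's own output, which is what the backward pass needs).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(s):
    return s * (1.0 - s)

# Toy dataset (an assumption for illustration): the target happens to
# equal the first input column, so the network can learn it exactly.
inputs = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [0, 1, 1]], dtype=float)
targets = np.array([[0, 1, 1, 0]], dtype=float).T

rng = np.random.default_rng(1)
weights = 2 * rng.random((3, 1)) - 1  # random weights in [-1, 1)

for _ in range(10_000):
    output = sigmoid(inputs @ weights)   # forward pass
    error = targets - output             # how far off are we?
    # Backward pass: scale the error by the slope of the sigmoid,
    # then project back onto the inputs to get the weight update.
    weights += inputs.T @ (error * sigmoid_derivative(output))

print(np.round(output.ravel(), 2))  # predictions approach [0, 1, 1, 0]
```

The same forward/backward loop, with more layers and a proper learning rate, is the core of every network discussed in these videos.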
Deep learning has had enormous success on perceptual tasks but still struggles in providing a model for inference. Here’s an interesting talk about making neural networks that can reason.
To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning, as well as achieve good generalization from a modest amount of data. The Neural State Machine (NSM) design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
While this is technically a press release, there could be something to DarwinAI if it really can improve neural network performance by more than 1,600%. We’ll have to keep an eye on this technology. 😉
“The complexity of deep neural networks makes them a challenge to build, run and use, especially in edge-based scenarios such as autonomous vehicles and mobile devices where power and computational resources are limited,” said Sheldon Fernandez, CEO of DarwinAI. “Our Generative Synthesis platform is a key technology in enabling AI at the edge – a fact bolstered and validated by Intel’s solution brief.”
David Bau, an MIT-IBM Watson AI Lab research team member, explains how computers show evidence of learning the structure of the physical world.
MIT has unveiled an artificial intelligence system that it said could make an array of AI techniques more accessible to programmers, while also adding value for experts.
In this episode from deeplizard, learn how to build the training loop for a convolutional neural network using Python and PyTorch.
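For reference, the general shape of such a PyTorch training loop looks like the sketch below. The tiny model, random stand-in data, and hyperparameters are my own assumptions for illustration, not taken from the episode.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny CNN: one conv layer, then a linear classifier over 8x8 inputs.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),  # 1 channel in, 4 out
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 8 * 8, 2),                    # two classes
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in data: 16 grayscale 8x8 images with binary labels.
images = torch.randn(16, 1, 8, 8)
labels = torch.randint(0, 2, (16,))

losses = []
for epoch in range(20):
    optimizer.zero_grad()           # clear gradients from the last step
    logits = model(images)          # forward pass
    loss = loss_fn(logits, labels)  # measure the error
    loss.backward()                 # backpropagate gradients
    optimizer.step()                # update the weights
    losses.append(loss.item())
```

The zero-grad / forward / loss / backward / step sequence is the skeleton the episode fills in with real data loaders and metrics.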