I imagine that this will change visual effects forever.
Here’s a great overview of deep learning, an artificial intelligence technique that imitates the workings of the human brain in processing data and creating patterns for use in decision making.
Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks (ANNs). Its networks are capable of learning from unlabeled or unstructured data. Deep learning is often known as deep neural learning or deep neural networks.
deeplizard demonstrates how to use data augmentation on images using TensorFlow’s Keras API.
- 00:00 Welcome to DEEPLIZARD – Go to deeplizard.com for learning resources
- 00:17 Introduction to Data Augmentation
- 01:32 Image Augmentation with Keras
- 08:16 Collective Intelligence and the DEEPLIZARD HIVEMIND
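The augmentation approach covered in the video can be sketched with Keras’ `ImageDataGenerator`. This is a minimal illustration, not deeplizard’s exact code; the transform ranges and the random dummy image are assumptions chosen for demonstration.

```python
# Minimal sketch of image augmentation with Keras' ImageDataGenerator.
# Transform ranges are illustrative; the "image" is a random stand-in.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(
    rotation_range=15,        # random rotations up to 15 degrees
    width_shift_range=0.1,    # horizontal shifts up to 10% of width
    height_shift_range=0.1,   # vertical shifts up to 10% of height
    horizontal_flip=True,     # random left-right flips
    zoom_range=0.1,           # random zoom in/out
)

# A single dummy image batch of shape (1, height, width, channels)
image = np.random.rand(1, 224, 224, 3)

# Draw several augmented variants of the same image
it = gen.flow(image, batch_size=1)
augmented = [next(it) for _ in range(4)]
print(len(augmented), augmented[0].shape)  # 4 variants, same shape as input
```

Each call to `next(it)` yields a randomly transformed copy of the input, which is how augmentation expands a small image dataset during training.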
Watch Sascha Dittmann and friends build out a real-time object detection system on the Jetson Nano.
Yannic Kilcher explains why transformers are ruining convolutions.
This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this video, he explains the architecture of the Vision Transformer (ViT), the reason why it works better, and rants about why double-blind peer review is broken.
- 0:00 – Introduction
- 0:30 – Double-Blind Review is Broken
- 5:20 – Overview
- 6:55 – Transformers for Images
- 10:40 – Vision Transformer Architecture
- 16:30 – Experimental Results
- 18:45 – What does the Model Learn?
- 21:00 – Why Transformers are Ruining Everything
- 27:45 – Inductive Biases in Transformers
- 29:05 – Conclusion & Comments
- Paper (Under Review): https://openreview.net/forum?id=YicbFdNTTy
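The core idea the video explains, i.e. treating an image as a sequence of patch embeddings for a standard Transformer, can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper’s implementation; the random projection stands in for a learned one, and the patch size and model dimension follow the ViT-Base configuration.

```python
# Sketch of the ViT front end: split an image into fixed-size patches,
# flatten each patch, and project it to an embedding. The resulting
# sequence of patch embeddings is what the Transformer encoder consumes.
import numpy as np

image = np.random.rand(224, 224, 3)   # one input image (random stand-in)
patch = 16                            # ViT-Base uses 16x16 patches

# Reshape into non-overlapping patches: (num_patches, patch*patch*channels)
h = w = 224 // patch                  # 14 x 14 = 196 patches
patches = image.reshape(h, patch, w, patch, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(h * w, patch * patch * 3)

# Linear projection to the model dimension (random stand-in for a
# learned weight matrix)
d_model = 768
W = np.random.rand(patch * patch * 3, d_model)
tokens = patches @ W                  # (196, 768): a "sentence" of patches

print(tokens.shape)
```

From here the paper adds a class token and position embeddings and applies a standard Transformer encoder, with no convolutions involved.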
When you think of “deep learning” you might think of teams of PhDs with petabytes of data and racks of supercomputers.
But it turns out that a year of coding, high school math, a free GPU service, and a few dozen images is enough to create world-class models. fast.ai has made it their mission to make deep learning as accessible as possible.
In this interview fast.ai co-founder Jeremy Howard explains how to use their free software and courses to become an effective deep learning practitioner.
Two Minute Papers examines the paper “Interactive Video Stylization Using Few-Shot Patch-Based Training” in this video.
In this episode, Mandy from deeplizard builds on what we’ve learned about MobileNet, combining it with fine-tuning techniques to adapt MobileNet to a custom image dataset using TensorFlow’s Keras API.
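The fine-tuning workflow the episode walks through can be sketched as follows. This is a hedged outline, not deeplizard’s exact code: the 10-class head, the frozen base, and the compile settings are assumptions for illustration.

```python
# Sketch of fine-tuning MobileNet with the Keras API: load the pretrained
# base, freeze it, and train a new classification head on a custom dataset.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet",        # pretrained ImageNet features
    include_top=False,         # drop the original 1000-class head
    input_shape=(224, 224, 3),
    pooling="avg",             # global average pooling output
)
base.trainable = False         # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10-class head
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)      # (None, 10)

# Then train on your own data, e.g.:
# model.fit(train_batches, validation_data=valid_batches, epochs=5)
```

A common follow-up step is to unfreeze the last few layers of the base (`base.trainable = True` plus a low learning rate) once the new head has converged.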
Two Minute Papers explains why the paper “NeRF in the Wild – Neural Radiance Fields for Unconstrained Photo Collections” is truly revolutionary.