In this video from a recent talk at MIT, Demis Hassabis discusses the capabilities and power of self-learning systems. He illustrates this with reference to some of DeepMind’s recent breakthroughs, and talks about the implications of cutting-edge AI research for scientific and philosophical discovery.

Even more impressive is Demis’s biography. From the video description:

Speaker Biography: Demis is a former child chess prodigy who finished his A-levels two years early before coding the multimillion-selling simulation game Theme Park aged 17. Following graduation from Cambridge University with a Double First in Computer Science, he founded the pioneering video games company Elixir Studios, producing award-winning games for global publishers such as Vivendi Universal. After a decade of experience leading successful technology startups, Demis returned to academia to complete a PhD in cognitive neuroscience at UCL, followed by postdocs at MIT and Harvard, before founding DeepMind. His research into the neural mechanisms underlying imagination and planning was listed in the top ten scientific breakthroughs of 2007 by the journal Science. Demis is a five-time World Games Champion, a Fellow of the Royal Society of Arts, and the recipient of the Royal Society’s Mullard Award and the Royal Academy of Engineering’s Silver Medal.

Neural networks have become a hot topic over the last few years, but finding the most efficient way to build one is still more art than science. In fact, it’s more trial and error than art. However, MIT may have solved that problem.

The NAS (Neural Architecture Search, in this context) algorithm they developed “can directly learn specialized convolutional neural networks (CNNs) for target hardware platforms — when run on a massive image dataset — in only 200 GPU hours,” MIT News reports. This is a massive improvement over the 48,000 GPU hours Google reported taking to develop a state-of-the-art NAS algorithm for image classification. The researchers’ goal is to democratize AI by allowing researchers to experiment with various aspects of CNN design without needing enormous GPU arrays to do the front-end work. If finding state-of-the-art architectures requires 48,000 GPU hours, precious few people, even at large institutions, will ever have the opportunity to try.
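The core idea behind NAS is simple even when the engineering is not: sample candidate architectures from a search space, score each one, and keep the best. The sketch below illustrates that loop with a plain random search over a made-up CNN search space and a toy scoring function; the space, the names, and the proxy score are all illustrative assumptions, not MIT’s or Google’s actual method.

```python
import random

# Hypothetical search space for a small CNN; the knobs and values here
# are illustrative only, not the ones used in the MIT or Google work.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "kernel_size": [3, 5, 7],
    "channels": [16, 32, 64],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for briefly training the candidate and measuring
    validation accuracy; a real NAS spends its GPU hours here."""
    # Toy proxy score: deeper, wider nets with small kernels score higher.
    return arch["num_layers"] * arch["channels"] / arch["kernel_size"]

def random_search(num_trials=20, seed=0):
    """Run the sample-score-keep-best loop for num_trials candidates."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
print(best, score)
```

The expensive part in practice is `evaluate`: every call means training a network. Techniques like the MIT team’s amortize or shortcut that cost, which is where the 48,000-to-200 GPU-hour reduction comes from.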

The MIT Technology Review has an interesting article on one specific way that quantum computing can revolutionize machine learning.

Feature mapping is a technique that converts data into a mathematical representation that lends itself to machine-learning analysis. The quality of the resulting model depends on the efficiency and fidelity of this mapping. Using a quantum computer, it should be possible to perform this mapping on a scale that was hitherto impossible.
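A small classical analogue shows why feature mapping matters: data that is not linearly separable in its raw form can become separable after being mapped into a higher-dimensional space. Quantum feature maps aim to do this in spaces far too large to compute classically; the toy map below (phi(x) = (x, x²)) is purely illustrative.

```python
def feature_map(x):
    """Map a 1-D point into 2-D: phi(x) = (x, x**2)."""
    return (x, x * x)

# Toy dataset on the line: label 1 if |x| > 1, else 0.
# No single threshold on x separates the classes in 1-D.
data = [(-2.0, 1), (-1.5, 1), (-0.5, 0), (0.0, 0), (0.5, 0), (1.5, 1), (2.0, 1)]

# After the map, the second coordinate x**2 separates the classes
# perfectly: class 1 has x**2 > 1, class 0 has x**2 < 1.
mapped = [(feature_map(x), label) for x, label in data]
for (x, x_squared), label in mapped:
    assert (x_squared > 1.0) == (label == 1)
print("linearly separable after mapping: threshold at x**2 = 1")
```

The quantum-computing promise described in the article is the same trick at a very different scale: a quantum circuit can implicitly realize feature maps whose output spaces are exponentially large.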

We typically imagine robots looking like humans, but there’s a real advantage to other “form factors” that mimic pack animals.

For example, check out this new robot that MIT just made: a mini cheetah robot, the first four-legged robot to do a backflip.

At only 20 pounds, the limber quadruped can bend and swing its legs wide, enabling it to walk either right side up or upside down. More practically, the robot can also trot over uneven terrain at about twice an average person’s walking speed.

Oliver Cameron is the Co-Founder and CEO of Voyage. Before that, he led the Udacity Self-Driving Car program, which made ideas in autonomous-vehicle research and development accessible to the world. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow the TensorFlow code tutorials on the GitHub repo.

In this video, Lex Fridman interviews Kyle Vogt, the President and CTO of Cruise Automation. Cruise Automation is leading an effort to solve one of the biggest robotics challenges of our time: vehicle autonomy.

He is the co-founder of two successful companies (Cruise and Twitch) that were each acquired for $1 billion. This conversation is part of the Artificial Intelligence podcast and the MIT course 6.S094: Deep Learning for Self-Driving Cars.