Deep learning and AI are fundamentally changing the way data is used in computation. They enable computing capabilities that will transform almost every industry, scientific domain, and public usage of data and compute.
The recent success of deep learning algorithms can be seen as the culmination of decades of progress in three areas: research in DL algorithms, broad availability of big data infrastructure, and the massive growth of computation power produced by Moore’s law and the advent of parallel compute architectures.
Deep learning has been employed successfully in such diverse areas as healthcare, transportation, industrial IoT, finance, entertainment, and retail, in addition to high-performance computing.
Examples shown in this video illustrate how the approach works and how it complements high-performance data analytics and traditional business intelligence.
Pieter Abbeel, a professor at UC Berkeley, discusses the use of deep learning in training robots to perform tasks.
In my previous post, I featured a video on the Microsoft Cognitive Toolkit (CNTK). If you haven't heard of it, CNTK is a production-grade, open-source deep-learning library. It's the toolkit behind many of Microsoft's AI initiatives.
CNTK embraces fully open development, is available on GitHub, and provides support for both Windows and Linux. The latest release packs in several enhancements: most notably Python/C++ API support, easy-to-onboard tutorials (as Python notebooks) and examples, and an easy-to-use Layers interface.
These enhancements, combined with unparalleled scalability on NVIDIA hardware, were demonstrated by both NVIDIA at Supercomputing 2016 and Cray at NIPS 2016.
These enhancements helped Microsoft achieve its recent breakthrough in speech recognition, reaching human parity on conversational speech.
The toolkit is used across all kinds of deep learning, including image, video, speech, and text data. The speakers discuss the features of the toolkit's current release and its application to deep learning projects.
Siraj Raval explains where Deep Learning is going.
Check out this intriguing description for the video, then watch it.
Back-propagation is fundamental to deep learning. Geoffrey Hinton, one of its pioneers, recently said we should "throw it all away and start over."
What should we do? I'll describe how back-propagation works and how it's used in deep learning, then give seven interesting research directions that could overtake back-propagation in the near term.
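Before the video dives in, it can help to see the mechanics concretely. Below is a minimal sketch of back-propagation for a single sigmoid neuron trained with gradient descent; the function names and the toy threshold task are my own illustration, not anything from the video:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=2000, lr=0.5):
    """Fit y = sigmoid(w*x + b) to (x, target) pairs via back-propagation."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Forward pass
            z = w * x + b
            y = sigmoid(z)
            # Backward pass: chain rule through squared-error loss L = (y - target)^2
            dL_dy = 2.0 * (y - target)
            dy_dz = y * (1.0 - y)      # derivative of the sigmoid
            dL_dz = dL_dy * dy_dz
            # Gradient-descent update (dz/dw = x, dz/db = 1)
            w -= lr * dL_dz * x
            b -= lr * dL_dz
    return w, b

# Learn a simple threshold: output near 1 when x > 0, near 0 when x < 0
w, b = train([(-2, 0), (-1, 0), (1, 1), (2, 1)])
print(sigmoid(w * 1 + b) > 0.5)   # True
print(sigmoid(w * -1 + b) < 0.5)  # True
```

Real networks apply exactly this chain-rule bookkeeping, just layer by layer over millions of weights, which is what makes the technique both powerful and, as Hinton suggests, worth questioning.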
In this documentary from NVIDIA, take a closer look at deep learning: the fastest-growing field in artificial intelligence, helping computers make sense of vast amounts of data in the form of images, sound, and text.
Using multiple layers of neural networks, computers now have the capacity to see, learn, and react to complex situations as well as or better than humans. Every industry will be impacted by deep learning, and many businesses are already delivering new products and services based on this new way of thinking about data and technology.
TensorFlow, Google’s framework for machine learning and neural networks, has recently been open-sourced.
With this new tool, deep learning is transitioning from a research discipline into mainstream software engineering.
In this session, Martin Görner teaches you how to pick the correct neural network type for your problem and how to make it behave.
A PhD or familiarity with differential equations is no longer required.
In this YouTube video, Siraj Raval goes through the steps necessary to install and run the StarCraft II Environment that DeepMind recently open-sourced!
He covers DeepMind’s reinforcement learning history and the configuration steps, then runs a pre-trained Deep Q model at the end that completes the shard-collection mini-game.