Microsoft Research has a new podcast out talking about learning algorithms.

Deep learning methodologies like supervised learning have been very successful in training machines to make predictions about the world. But because they’re so dependent upon large amounts of human-annotated data, they’ve been difficult to scale. Dr. Phil Bachman, a researcher at MSR Montreal, would like to change that, and he’s working to train machines to collect, sort and label their own data, so people don’t have to.

Today, Dr. Bachman gives us an overview of the machine learning landscape and tells us why it’s been so difficult to sort through noise and get to useful information. He also talks about his ongoing work on Deep InfoMax, a novel approach to self-supervised learning, and reveals what a conversation about ML classification problems has to do with Harrison Ford’s face.

You can tell that AI is going mainstream when there’s a need to create benchmarks to compare performance between competing systems.

Industry-standard benchmarks that compare compute elements against specific workloads are far more useful. For example, an image classification engineer could identify multiple options that meet their performance requirements, then whittle them down based on power consumption, cost, etc. Voice recognition designers could use benchmark results to analyze various processor and memory combinations, then decide whether to synthesize speech locally or in the cloud.

Hardware is getting interesting again.

Here’s an interesting paper published in Nature about Neuromorphic Computing.

Abstract below:

Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign.
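To make the “spiking” and “event-driven” ideas in the abstract concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook building block of spiking models. This is an illustrative toy, not code from the paper; the parameter names and values are my own assumptions.

```python
def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters).

    The membrane potential leaks toward rest while integrating input;
    when it crosses v_thresh the neuron emits a spike and resets.
    The resulting spike train is the event-driven encoding: information
    is carried by *when* spikes occur, not by continuous activations.
    """
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration of input
        if v >= v_thresh:
            spikes.append(t)           # emit a spike event at time t
            v = v_reset                # reset membrane potential
    return spikes

# A constant supra-threshold input produces a regular spike train.
spike_times = lif_simulate([1.5] * 200)
```

Because computation only happens when spikes occur, hardware built around this model can stay idle between events, which is where the energy savings the abstract mentions come from.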

Here’s an interesting read on how AI is changing brand management, customer experience, and advertising.

Starbucks uses AI to link up with their rewards members’ accounts and take into account things such as order history, customer preferences, weather conditions, time of day, holidays and even birthdays to make drink and food suggestions. They use weather data at such a granular level that they can predict small, store-by-store variations in demand, then adjust stock and displays and drive sales accordingly.

Here’s an interesting look at what the next decade holds for AI and why hardware is going to be a big part of it.

“What we see happening in the transition to now and toward 2020 is what I call the coming of age of deep learning,” Singer, pictured below with an NNP-I chip, tells The Next Platform. “This is where the capabilities have been better understood, where many companies are starting to understand how this might be applicable to their particular line of business. There’s a whole new generation of data scientists and other professionals who understand the field, there’s an environment for developing new algorithms and new topologies for the deep learning frameworks. All those frameworks like TensorFlow and MXNet were not really in existence in 2015. It was all hand-tooled and so on. Now there are environments, there is a large cadre of people who are trained on that, there’s a better understanding of the mapping, there’s a better understanding of the data because it all depends on who is using the data and how to use the data.”

This episode of the AI Show talks about the new ML-assisted data labeling capability in Azure Machine Learning Studio.

You can create a data labeling project and either label the data yourself or enlist other domain experts to create labels for you. Multiple labelers can use browser-based labeling tools and work in parallel.

As human labelers create labels, an ML model is trained in the background and its output is used to accelerate the data labeling workflow in various ways such as active learning, task clustering, and pre-labeling. Finally, you can export the labels in different formats.
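The active-learning loop described above can be sketched in a few lines: train on the labels collected so far, route the model’s least-confident items to human labelers, and use confident predictions as pre-labels for review. This is a generic illustration with scikit-learn on toy data, assuming nothing about Azure Machine Learning’s actual internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy unlabeled pool: two Gaussian blobs standing in for real data items.
X_pool = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)  # ground truth a human would supply

# Seed the project with a handful of human-provided labels.
labeled = list(range(5)) + list(range(100, 105))
model = LogisticRegression()

for _ in range(5):
    # Background model trains on whatever labels humans have produced so far.
    model.fit(X_pool[labeled], y_true[labeled])

    # Active learning: query the items the model is least confident about.
    probs = model.predict_proba(X_pool)
    uncertainty = 1 - probs.max(axis=1)
    ranked = np.argsort(uncertainty)[::-1]
    queried = [i for i in ranked if i not in labeled][:10]

    # In a real project, human labelers would label the queried items next.
    labeled.extend(queried)

# Pre-labeling: confident predictions become suggested labels for human review.
suggested_labels = model.predict(X_pool)
```

The key design choice is the query strategy: uncertainty sampling is the simplest, but the episode also mentions task clustering, which groups similar items so one labeling decision can cover many of them.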

Learn More:

Learn how Microsoft is simplifying IoT with the evolution of Azure IoT Central.

Step through a live demo of the new IoT Central retail application template with Avneet Singh, Senior Program Manager, IoT Solutions team.

Learn how this IoT app platform keeps devices connected with built-in device management.

Understand how IoT Central makes it easy to integrate with business applications to deliver insights to business decision makers.