Azure Machine Learning has a large library of algorithms from the classification, recommender systems, clustering, anomaly detection, regression and text analytics families.

Each is designed to address a different type of machine learning problem.

In this demo, you will learn how to use Azure Machine Learning designer in a few simple steps and create an end-to-end machine learning pipeline for your data science scenario.

Additional information:

This episode of the AI Show talks about the new ML-assisted data labeling capability in Azure Machine Learning Studio.

You can create a data labeling project and either label the data yourself or enlist domain experts to create labels for you. Multiple labelers can work in parallel using browser-based labeling tools.

As human labelers create labels, an ML model is trained in the background and its output is used to accelerate the data labeling workflow in various ways such as active learning, task clustering, and pre-labeling. Finally, you can export the labels in different formats.

Learn More:

Azure Machine Learning compute instances (formerly Notebook VMs) are a hosted PaaS offering that supports the full lifecycle of inner-loop ML development, from model authoring to model training and model deployment.

Azure ML compute instances are deeply integrated with Azure ML workspaces and provide a first-class experience for model authoring through integrated notebooks using the Azure ML Python and R SDKs.
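As a minimal sketch of that authoring experience, the snippet below submits a training run from a notebook on a compute instance using the Azure ML Python SDK (v1). The experiment name, script name, and compute target are placeholders, not names from this episode.

```python
# Sketch: submitting a training run from a compute instance notebook
# with the Azure ML Python SDK (v1). Experiment name, script, and
# compute target below are hypothetical placeholders.
from azureml.core import Workspace, Experiment, ScriptRunConfig

# Compute instances come with a config.json for the owning workspace,
# so from_config() resolves the workspace without extra arguments.
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="demo-experiment")

run_config = ScriptRunConfig(
    source_directory=".",
    script="train.py",             # hypothetical training script
    compute_target="cpu-cluster",  # hypothetical compute target name
)
run = exp.submit(run_config)
run.wait_for_completion(show_output=True)
```

Running this requires an Azure subscription and a provisioned workspace, so it is illustrative rather than copy-paste runnable.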

Learn More:

The AI Show’s Favorite links:

This episode of the AI Show provides a quick overview of the new batch inference capability, which allows Azure Machine Learning users to get inferences on large-scale datasets in a secure, scalable, performant, and cost-effective way by fully leveraging the power of the cloud.

Learn More:

Batch Inference Documentation

https://aka.ms/batch-inference-documentation

Batch Inference Notebooks

https://aka.ms/batch-inference-notebooks
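Batch inference in Azure ML is typically expressed as a pipeline with a `ParallelRunStep` that fans a dataset out across a compute cluster. The sketch below, using the Azure ML Python SDK (v1), shows the shape of that setup; the dataset name, scoring script, environment, and cluster name are all assumed placeholders.

```python
# Sketch: a batch inference pipeline with ParallelRunStep
# (Azure ML Python SDK v1). Dataset, script, environment, and
# compute target names are hypothetical placeholders.
from azureml.core import Workspace, Environment, Dataset
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

ws = Workspace.from_config()
input_ds = Dataset.get_by_name(ws, "data-to-score")  # hypothetical dataset
output = PipelineData(name="scores", datastore=ws.get_default_datastore())

parallel_config = ParallelRunConfig(
    source_directory=".",
    entry_script="score.py",       # hypothetical script with init()/run(mini_batch)
    mini_batch_size="10",          # items handed to each run() call
    error_threshold=5,             # tolerated failures before the job aborts
    output_action="append_row",    # concatenate results into one output file
    environment=Environment.get(ws, "my-scoring-env"),  # hypothetical environment
    compute_target="cpu-cluster",  # hypothetical cluster
    node_count=2,
)

step = ParallelRunStep(
    name="batch-score",
    parallel_run_config=parallel_config,
    inputs=[input_ds.as_named_input("input_ds")],
    output=output,
)
run = Pipeline(workspace=ws, steps=[step]).submit(
    experiment_name="batch-inference-demo"
)
```

The parallelism knobs (`node_count`, `mini_batch_size`) are what make this scale to large datasets cost-effectively: more nodes, each processing small batches independently.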

In this episode of the AI Show, explore updates to the Azure Machine Learning service model registry that provide more insights about your model.

Also, learn how you can deploy your models easily without going through the effort of creating additional driver and configuration files.
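That driver-free path is Azure ML's no-code deployment: models registered with framework metadata (for example scikit-learn or ONNX) can be deployed without writing a scoring script or environment file. A minimal sketch with the Azure ML Python SDK (v1), where the model and service names are assumed placeholders:

```python
# Sketch: no-code deployment of a registered model (Azure ML Python
# SDK v1). Model and service names are hypothetical placeholders.
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
model = Model(ws, name="sklearn-regression")  # hypothetical registered model

# With no inference_config or driver script supplied, Azure ML uses the
# framework metadata recorded at registration time to build the service.
service = Model.deploy(ws, "demo-service", [model])
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```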

Learn More:

Related links:

Did you know that you can now train machine learning models with Azure ML once and deploy them in the cloud (AKS/ACI) and on the edge (Azure IoT Edge) seamlessly, thanks to the ONNX Runtime inference engine?

In this new episode of the IoT Show, learn about ONNX Runtime, the Microsoft-built inference engine for ONNX models: it is cross-platform, works across training frameworks, and delivers on-par or better performance than existing inference engines.
From the description:
We will show how to train and containerize a machine learning model using Azure Machine Learning, then deploy the trained model to a container service in the cloud and to an Azure IoT Edge device across different hardware platforms: Intel, NVIDIA, and Qualcomm.
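The portability comes from the fact that the same ONNX Runtime inference code runs in a cloud container and on an edge device, with only the execution provider changing per hardware platform. A small sketch, where `model.onnx` and the input shape are assumptions for illustration:

```python
# Sketch: scoring with ONNX Runtime. The same code runs in an
# AKS/ACI container or on an IoT Edge device; only the execution
# provider list changes per hardware. "model.onnx" and the input
# shape are hypothetical.
import numpy as np
import onnxruntime as ort

# On NVIDIA hardware you might list "CUDAExecutionProvider" first;
# "CPUExecutionProvider" is the portable fallback.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumes an image model
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```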