This episode of the AI Show talks about the new ML-assisted data labeling capability in Azure Machine Learning studio.

You can create a data labeling project and either label the data yourself or enlist domain experts to create labels for you. Multiple labelers can use browser-based labeling tools and work in parallel.

As human labelers create labels, an ML model is trained in the background, and its output is used to accelerate the data labeling workflow through techniques such as active learning, task clustering, and pre-labeling. Finally, you can export the labels in different formats.
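One way a background model can accelerate labeling is active learning: route the items the model is least confident about to human labelers first. The sketch below illustrates the least-confidence strategy in plain Python; all names are hypothetical and this is not the Azure ML implementation.

```python
# Conceptual sketch of least-confidence active learning: rank unlabeled
# items so humans label the ones the model is least sure about first.
# All names here are illustrative.

def least_confident_first(unlabeled, predict_proba):
    """Order items by the model's top-class confidence, lowest first."""
    scored = [(max(predict_proba(item)), item) for item in unlabeled]
    scored.sort(key=lambda pair: pair[0])
    return [item for _, item in scored]

# Toy "model": treat each value as the probability of the positive class,
# so values near 0.5 are the least confident.
def toy_proba(x):
    return [x, 1 - x]

items = [0.9, 0.55, 0.1, 0.48]
queue = least_confident_first(items, toy_proba)
print(queue)  # items nearest 0.5 come first: [0.48, 0.55, 0.9, 0.1]
```

Pre-labeling is the complementary idea: once the model is confident enough, its predictions are offered as draft labels for humans to confirm or correct rather than create from scratch.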

Learn More:

Azure Machine Learning compute instances (formerly Notebook VMs) are a hosted PaaS offering that supports the full lifecycle of inner-loop ML development, from model authoring to model training and model deployment.

Compute instances are deeply integrated with Azure Machine Learning workspaces and provide a first-class model authoring experience through integrated notebooks using the Azure Machine Learning Python and R SDKs.

Learn More:

The AI Show’s Favorite links:

This episode of the AI Show compares deep learning vs. machine learning.

You’ll learn how the two concepts compare and how they fit into the broader category of artificial intelligence. The demo also shows how deep learning can be applied to real-world scenarios such as fraud detection, voice and facial recognition, sentiment analysis, and time-series forecasting.

This episode of the AI Show provides a quick overview of the new batch inference capability, which lets Azure Machine Learning users run inference on large-scale datasets in a secure, scalable, performant, and cost-effective way by fully leveraging the power of the cloud.
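At its core, batch inference follows a simple pattern: split a large dataset into mini-batches, score each batch independently (in parallel across many workers in the real service), and collect the results. The sketch below shows that pattern in plain Python; the names are illustrative and this is not the Azure ML API.

```python
# Conceptual sketch of the batch inference pattern: split a large
# dataset into mini-batches, score each independently, then collect
# the results in order. Illustrative only, not the Azure ML API.
from concurrent.futures import ThreadPoolExecutor

def mini_batches(data, batch_size):
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

def score(batch):
    # Stand-in for a trained model: double every value.
    return [x * 2 for x in batch]

data = list(range(10))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(score, mini_batches(data, batch_size=3)))

predictions = [y for batch in results for y in batch]
print(predictions)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because each mini-batch is scored independently, the work scales out horizontally: adding nodes shortens the wall-clock time without changing the scoring logic.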

Learn More:

Batch Inference Documentation

https://aka.ms/batch-inference-documentation

Batch Inference Notebooks

https://aka.ms/batch-inference-notebooks

Azure Open Datasets is a platform for hosting open-domain data on Azure, such as weather, socioeconomic statistics, machine learning samples, open images, and GitHub activity data.

Learn more about why we are hosting open data on Azure, how to explore the datasets, and how to use them in Azure services such as Azure Machine Learning.

Learn More:

The AI Show’s Favorite links:

Azure Machine Learning now offers two editions tailored to your machine learning needs, Enterprise and Basic, making it easy for developers and data scientists to accelerate the end-to-end machine learning lifecycle. The Basic edition is a one-stop destination for open-source developers and data scientists who are comfortable with a code-first experience. The Enterprise edition boosts productivity with no-code machine learning tools for all ML skill levels and is tailored for enterprises of various sizes, developers, data engineers, and data workers.

Learn More:

The new Azure Machine Learning studio is an immersive web experience for managing the end-to-end machine learning lifecycle.

The new web experience brings together the data science capabilities for data scientists and engineers across diverse skill levels, from no-code authoring to code-first experiences, along with their ML assets, in a single web pane to streamline machine learning.

Learn More:

The AI Show’s Favorite links:

With Azure ML Pipelines, all the steps in the data scientist’s lifecycle can be stitched together into a single pipeline, improving inner-loop agility, collaboration, and reuse of data and code while maintaining high reliability.

This video explores Azure Machine Learning Pipelines, the end-to-end job orchestrator optimized for machine learning workloads.
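The core idea of a pipeline is a chain of named steps where each step consumes the previous step's output. The minimal in-process sketch below illustrates that shape; in Azure ML Pipelines the steps would run on managed compute with tracked inputs and outputs, and every name here is illustrative only.

```python
# Minimal in-process sketch of a pipeline: named steps run in order,
# each consuming the previous step's output. Illustrative only; real
# Azure ML pipeline steps run on compute targets with tracked artifacts.

def prepare(raw):
    # Data preparation step: drop missing values.
    return [x for x in raw if x is not None]

def train(clean):
    # Training step: a toy "model" that just remembers the mean.
    return {"mean": sum(clean) / len(clean)}

def evaluate(model):
    # Evaluation step: attach a simple quality check.
    return {"mean": model["mean"], "ok": model["mean"] > 0}

def run_pipeline(steps, data):
    for name, step in steps:
        data = step(data)
        print(f"step '{name}' done")
    return data

result = run_pipeline(
    [("prepare", prepare), ("train", train), ("evaluate", evaluate)],
    [1, None, 2, 3],
)
print(result)  # {'mean': 2.0, 'ok': True}
```

Because each step has explicit inputs and outputs, an orchestrator can cache unchanged steps, rerun only what changed, and let different team members own different steps, which is where the agility and reuse benefits come from.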

Learn More:

The AI Show’s Favorite links:

In this episode of the AI Show, explore updates to the Azure Machine Learning service model registry that provide more insights about your models.

Also, learn how you can deploy your models easily without going through the effort of creating additional driver and configuration files.
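For context on what is being eliminated: a deployment "driver" (scoring script) conventionally exposes an `init()` function that loads the model and a `run()` function that handles each request. The sketch below shows that shape with a local stand-in model so it runs anywhere; a no-code deployment generates this boilerplate for you for supported frameworks.

```python
# Sketch of the init()/run() entry-script pattern a scoring "driver"
# file follows. The model here is a local stand-in so the sketch runs
# without any cloud resources.
import json

model = None

def init():
    # In a real driver this would load the registered model from disk.
    global model
    model = lambda xs: [x + 1 for x in xs]

def run(request_body):
    data = json.loads(request_body)["data"]
    return json.dumps({"result": model(data)})

init()
print(run('{"data": [1, 2, 3]}'))  # {"result": [2, 3, 4]}
```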

Learn More:

Related links:

Understanding what your AI models are doing is important from both functional and ethical perspectives. In this episode we discuss what it means to develop AI in a transparent way.

Mehrnoosh introduces an interpretability toolkit that enables you to use different state-of-the-art interpretability methods to explain your model’s decisions.

By using this toolkit during the training phase of the AI development cycle, you can use the interpretability output of a model to verify hypotheses and build trust with stakeholders.

You can also use the insights for debugging, validating model behavior, and to check for bias. The toolkit can even be used at inference time to explain the predictions of a deployed model to the end users.
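To make the idea of a model-agnostic explanation concrete, here is a sketch of one simple technique, permutation importance: perturb one feature at a time and measure how much the model's error grows. The toolkit in the episode offers far richer methods; everything below is a toy illustration with hypothetical names (a reversal stands in for a random shuffle so the result is deterministic).

```python
# Conceptual sketch of permutation importance: perturb one feature's
# column and see how much the model's error increases. A feature the
# model relies on hurts a lot when scrambled; an ignored feature does not.

def model(row):
    # Toy model that depends only on feature 0 and ignores feature 1.
    return 3 * row[0]

def error(rows, targets):
    return sum(abs(model(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature):
    # Deterministic "permutation": reverse the feature's column.
    column = [r[feature] for r in rows][::-1]
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, column):
        r[feature] = v
    return error(perturbed, targets) - error(rows, targets)

rows = [[1, 10], [2, 20], [3, 30], [4, 40]]
targets = [3, 6, 9, 12]  # exactly 3 * feature 0
imp0 = permutation_importance(rows, targets, 0)
imp1 = permutation_importance(rows, targets, 1)
print(imp0, imp1)  # scrambling feature 0 hurts (6.0); feature 1 does not (0.0)
```

Ranking features this way is one of the simplest checks for unwanted behavior: if a sensitive attribute shows high importance, that is a signal to investigate for bias.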

Learn more: