In this video, learn why and how to track the assets and code you create in an end-to-end machine learning workflow.

Time Index:

  • [01:20] How to track assets and artifacts
  • [04:20] Demo – How to keep track of code
  • [05:58] Why it’s important to manage datasets + Demo
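As a minimal illustration of the asset-tracking ideas covered in the video, the Python sketch below registers a dataset and a model with the Azure Machine Learning SDK (v1, azureml-core) so both are versioned in the workspace. The datastore path, file names, model name, and tag values are assumptions for illustration only, not values from the video.

```python
from azureml.core import Workspace, Dataset, Model

# Connect to the workspace described by a local config.json (assumed to exist).
ws = Workspace.from_config()

# Register the training data as a versioned dataset asset.
datastore = ws.get_default_datastore()
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "training/data.csv"))
dataset = dataset.register(workspace=ws, name="training-data", create_new_version=True)

# Register the trained model, link it to the dataset it was trained on,
# and tag it with the git commit that produced it (hypothetical values).
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",
    model_name="demo-model",
    tags={"git-sha": "<commit-hash>"},
    datasets=[("training data", dataset)],
)
print(model.name, model.version)
```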

For More Info:

In this video, you’ll learn how you can use Azure Event Grid, Azure Machine Learning, and GitHub Actions to create a continuous integration and continuous deployment (CI/CD) workflow. You’ll see how to automate the model training and model deployment process end to end.

Time Index:

  • [00:45] Intro
  • [01:09] Demo – Continuous integration steps
  • [04:43] Demo – Continuous deployment steps
  • [08:15] Demo – Test the endpoint
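A GitHub Actions workflow typically drives these steps by running a script against the workspace. The Python sketch below (Azure ML SDK v1) is one possible shape for that script: submit a training run for the CI step, then register and deploy the resulting model for the CD step. The compute target, script names, and environment file are assumptions, not values from the video.

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment, Model
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
env = Environment.from_conda_specification("train-env", "environment.yml")

# Continuous integration: submit the training script and wait for completion.
run_config = ScriptRunConfig(
    source_directory="src",
    script="train.py",
    compute_target="cpu-cluster",
    environment=env,
)
run = Experiment(ws, "ci-training").submit(run_config)
run.wait_for_completion(show_output=True)

# Continuous deployment: register the model produced by the run and deploy it
# to Azure Container Instances behind a REST endpoint.
run.register_model(model_name="ci-model", model_path="outputs/model.pkl")
model = Model(ws, "ci-model")
service = Model.deploy(
    ws,
    "ci-endpoint",
    [model],
    InferenceConfig(entry_script="score.py", environment=env),
    AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1),
)
service.wait_for_deployment(show_output=True)
print("Scoring URI:", service.scoring_uri)
```

Testing the endpoint, as in the [08:15] demo, then amounts to posting JSON to the reported scoring URI.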

For More Info:

In this video, learn how you can use Azure Event Grid and Azure Machine Learning to trigger and consume machine learning events. We talk about why eventing is important and how you can enable scenarios such as run-failure alerts and model retraining.

Jump To:

  • [00:50] What is Event Grid?
  • [01:32] Why is this useful?
  • [02:32] Demo – How to set up an event subscription
  • [03:40] Demo – How to filter events
  • [05:30] Demo – Logic app example
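For the consuming side, here is a minimal Python sketch of a webhook handler reacting to Azure Machine Learning events delivered by Event Grid. The event type names follow the documented Microsoft.MachineLearningServices.* convention, but the data payload fields used here (runId, runStatus, modelName, modelVersion) are assumptions to check against the actual event schema.

```python
import json

# Event types published by Azure Machine Learning to Event Grid.
RUN_STATUS_CHANGED = "Microsoft.MachineLearningServices.RunStatusChanged"
MODEL_REGISTERED = "Microsoft.MachineLearningServices.ModelRegistered"


def handle_event_grid_event(event: dict) -> None:
    """React to a single Event Grid event delivered to a webhook endpoint."""
    event_type = event.get("eventType")
    data = event.get("data", {})

    if event_type == RUN_STATUS_CHANGED and data.get("runStatus") == "Failed":
        # Run-failure alert scenario: notify the team (placeholder action).
        print(f"ALERT: run {data.get('runId')} failed in {event.get('subject')}")
    elif event_type == MODEL_REGISTERED:
        # Retraining/redeployment scenario: trigger a downstream pipeline (placeholder).
        print(f"Model registered: {data.get('modelName')} v{data.get('modelVersion')}")


# Event Grid delivers a JSON array of events to the subscriber endpoint.
if __name__ == "__main__":
    sample = (
        '[{"eventType": "Microsoft.MachineLearningServices.RunStatusChanged", '
        '"subject": "experiments/my-exp", '
        '"data": {"runId": "abc123", "runStatus": "Failed"}}]'
    )
    for evt in json.loads(sample):
        handle_event_grid_event(evt)
```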

Links:

DevOps emerged as a set of practices that combines development-oriented activities (Dev) with IT operations (Ops) to accelerate the development cycle while maintaining efficient delivery and predictably high quality.

The core principles of DevOps include an Agile approach to software development, with iterative, continuous, and collaborative cycles, combined with automation and self-service concepts.

However, the DevOps approach to machine learning (ML) and AI is limited by the fact that machine learning models differ from traditional application development in many ways.

From a recent article in Forbes:

However, DevOps approaches to machine learning (ML) and AI are limited by the fact that machine learning models differ from traditional application development in many ways. For one, ML models are highly dependent on data: training data, test data, validation data, and of course, the real-world data used in inferencing. Simply building a model and pushing it to operation is not sufficient to guarantee performance. DevOps approaches for ML also treat models as “code” which makes them somewhat blind to issues that are strictly data-based, in particular the management of training data, the need for re-training of models, and concerns of model transparency and explainability.

Now that you’ve built your model, what comes next?

The next step is deployment and, arguably, it’s the most important.

That final stage – the crucial cog in your machine learning or deep learning project – is model deployment. You need to be able to get the model to the end user, right? And here’s the irony – among the majority of courses, influencers, and even experts, hardly anyone espouses the value of model deployment.
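Once a model is deployed behind a REST endpoint (for example, the Azure ML web service from the CI/CD sketch above), getting it to the end user reduces to calling that endpoint. A minimal sketch, with a hypothetical scoring URI and input schema:

```python
import json
import requests

# Hypothetical scoring endpoint and payload; substitute the scoring_uri reported
# by your own deployed service and the input schema your score.py expects.
scoring_uri = "http://<your-service>.azurecontainer.io/score"
payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}

headers = {"Content-Type": "application/json"}
response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
response.raise_for_status()
print("Prediction:", response.json())
```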

Here’s an interesting use case for AI in public transit.

TransLink has announced that following a successful pilot program it will be expanding its artificial intelligence program to improve bus departure estimates. “Customers will be able to better plan their journey on TransLink’s new bus network, with a new machine-learning algorithm,” the company said in an email. The AI […]

MLOps (also known as DevOps for machine learning) is the practice of collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning (ML) lifecycle.

Azure Machine Learning service’s MLOps capabilities provide customers with asset management and orchestration services that enable effective ML lifecycle management.

Learn more about MLOps:
https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-model-management-and-deployment