Gaurav Malhotra joins Scott Hanselman to show how you can run your Azure Machine Learning (AML) service pipelines as a step in your Azure Data Factory (ADF) pipelines.

This enables you to run your machine learning models with data from multiple sources (85+ data connectors supported in ADF).

This seamless integration enables batch prediction scenarios such as identifying possible loan defaults, determining sentiment, and analyzing customer behavior patterns.     
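
The snippet below is a minimal sketch (not from the episode) of what such an ADF pipeline definition might look like when deployed through the Data Factory REST API from Python. It assumes the azure-identity and requests packages, the 2018-06-01 api-version, and placeholder names for the subscription, factory, linked service, and published AML pipeline; verify the exact property names against the Machine Learning Execute Pipeline activity reference.

```python
# Sketch: register an ADF pipeline whose single step runs a published Azure ML pipeline.
# All resource names and IDs below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<data-factory-name>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

pipeline = {
    "properties": {
        "activities": [
            {
                "name": "RunAmlBatchScoring",
                "type": "AzureMLExecutePipeline",  # Machine Learning Execute Pipeline activity
                "linkedServiceName": {
                    "referenceName": "AzureMLServiceLinkedService",  # linked service to the AML workspace
                    "type": "LinkedServiceReference",
                },
                "typeProperties": {
                    "mlPipelineId": "<published-aml-pipeline-id>",
                    "experimentName": "batch-scoring",
                },
            }
        ]
    }
}

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
    f"/pipelines/ScoreLoanApplications?api-version=2018-06-01"
)
resp = requests.put(url, json=pipeline, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```

The same pipeline can be authored visually in the ADF UX; the REST call is shown only to make the activity shape explicit.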

Related Links

Data integration is complex, with many moving parts that span hybrid data environments. Data integration projects typically have upstream and downstream dependencies, which makes dependency handling an important aspect of any job scheduling.

Gaurav Malhotra joins Scott Hanselman to show how you can build dependent pipelines in Azure Data Factory by creating dependencies between the tumbling window triggers in your pipelines. These dependencies ensure that a trigger runs only after the trigger it depends on has executed successfully in your data factory.
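
As a rough illustration (not taken from the episode), the following Python dict sketches the shape of a downstream tumbling window trigger whose windows depend on an upstream trigger. Trigger, pipeline, and schedule values are placeholders; check the schema against the TumblingWindowTrigger documentation before relying on it.

```python
# Sketch of a downstream tumbling window trigger: each hourly window fires only
# after the upstream trigger's matching window has succeeded. Names are placeholders.
downstream_trigger = {
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,                        # one window per hour
            "startTime": "2018-10-01T00:00:00Z",
            "maxConcurrency": 1,
            "dependsOn": [
                {
                    # Dependency on the upstream trigger; offset/size can also be set
                    # to depend on a shifted or wider upstream window.
                    "type": "TumblingWindowTriggerDependencyReference",
                    "referenceTrigger": {
                        "referenceName": "UpstreamHourlyTrigger",
                        "type": "TriggerReference",
                    },
                }
            ],
        },
        "pipeline": {
            "pipelineReference": {
                "referenceName": "TransformSalesData",
                "type": "PipelineReference",
            }
        },
    }
}
```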

Related Links:

Donovan Brown and Gopi Chigakkagari discuss how to integrate Azure Pipelines with various third-party tools to achieve a full DevOps cycle with multi-cloud support. You can continue to use your existing tools and still get the benefits of Azure Pipelines: application release orchestration, deployment, approvals, and full traceability all the way back to the code or issue.



Related resources:

Data integration is complex, with many moving parts, and it helps organizations combine data and complex business processes in hybrid data environments. Failures are common in data integration workflows: data may not arrive on time, pipelines may contain functional code issues, infrastructure may fail, and so on.

A common requirement is the ability to rerun failed activities within data integration workflows. Sometimes you also need to rerun activities to reprocess data because of an error upstream in data processing. Azure Data Factory now enables you to rerun the entire pipeline, or to rerun downstream from a particular activity inside a pipeline.
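
Here is a minimal sketch of the recovery call, assuming the 2018-06-01 Data Factory REST API and its Create Run recovery parameters (referencePipelineRunId, isRecovery, startFromFailure). The pipeline name and run ID are placeholders; the same rerun is available from the Monitor view in the ADF UX.

```python
# Sketch: rerun a failed pipeline run from the point of failure using the
# Pipelines - Create Run REST call. IDs and names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<data-factory-name>"
FAILED_RUN_ID = "<pipeline-run-id-to-recover>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY}"
    f"/pipelines/CopyAndTransform/createRun"
)
params = {
    "api-version": "2018-06-01",
    "referencePipelineRunId": FAILED_RUN_ID,  # the run to recover from
    "isRecovery": "true",                     # reuse the original run's context
    "startFromFailure": "true",               # rerun only the failed activity and everything downstream
}
resp = requests.post(url, params=params, json={}, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print("New run id:", resp.json()["runId"])
```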

Gaurav Malhotra joins Scott Hanselman to discuss the Azure Data Factory visual tools, which enable you to iteratively create, configure, test, deploy, and monitor data integration pipelines. Your feedback has been incorporated into functional, performance, and security improvements to the visual tools.

For more information:

Gaurav Malhotra joins Scott Hanselman to discuss Azure Data Factory (ADF) integration with Azure Monitor, which enables you to route your data factory metrics to Operations Management Suite (OMS). Get the Azure Data Factory Analytics OMS service pack from the Azure Marketplace.
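
The sketch below shows one way this routing might be wired up programmatically: a diagnostic setting on the data factory that sends its metrics and run logs to a Log Analytics (OMS) workspace. The log category names and api-version reflect the Azure Monitor diagnostic settings API as I understand it, and all resource IDs are placeholders; the same setting can be created from the portal.

```python
# Sketch: create a diagnostic setting on the data factory that routes metrics and
# run logs to a Log Analytics (OMS) workspace. Resource IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

FACTORY_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
              "/providers/Microsoft.DataFactory/factories/<data-factory-name>")
WORKSPACE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

setting = {
    "properties": {
        "workspaceId": WORKSPACE_ID,                               # destination workspace
        "metrics": [{"category": "AllMetrics", "enabled": True}],  # factory metrics
        "logs": [
            {"category": "PipelineRuns", "enabled": True},
            {"category": "TriggerRuns", "enabled": True},
            {"category": "ActivityRuns", "enabled": True},
        ],
    }
}

url = (f"https://management.azure.com{FACTORY_ID}"
       f"/providers/Microsoft.Insights/diagnosticSettings/adf-to-oms"
       f"?api-version=2017-05-01-preview")
resp = requests.put(url, json=setting, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```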

For more information:

Gaurav Malhotra discusses how you can operationalize JARs and Python scripts running on Azure Databricks as activity steps in a Data Factory pipeline.
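
For illustration only, here is a sketch of the two activity shapes involved, a Databricks JAR activity chained to a Databricks Python activity, expressed as Python dicts. Class names, DBFS paths, parameter names, and the linked service name are placeholders, so treat the property names as a guide rather than a definitive reference.

```python
# Sketch: two Data Factory activities that operationalize Databricks code, one
# running a JAR main class and one running a Python script after it succeeds.
databricks_ls = {"referenceName": "AzureDatabricksLinkedService", "type": "LinkedServiceReference"}

jar_activity = {
    "name": "RunSparkJar",
    "type": "DatabricksSparkJar",
    "linkedServiceName": databricks_ls,
    "typeProperties": {
        "mainClassName": "com.contoso.etl.SessionizeEvents",   # entry point inside the JAR
        "parameters": ["--input", "dbfs:/mnt/raw/events"],
        "libraries": [{"jar": "dbfs:/mnt/libs/etl-jobs.jar"}],
    },
}

python_activity = {
    "name": "RunPythonScoring",
    "type": "DatabricksSparkPython",
    "linkedServiceName": databricks_ls,
    "dependsOn": [{"activity": "RunSparkJar", "dependencyConditions": ["Succeeded"]}],
    "typeProperties": {
        "pythonFile": "dbfs:/mnt/scripts/score_customers.py",
        # "@{...}" is resolved by ADF at run time; assumes a runDate pipeline parameter.
        "parameters": ["--date", "@{pipeline().parameters.runDate}"],
    },
}
```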

For more information:

Today’s business managers depend heavily on reliable data integration systems that run complex ETL/ELT workflows (extract, transform, and load / extract, load, and transform data).

Gaurav Malhotra joins Scott Hanselman to discuss how you can iteratively build, debug, deploy, and monitor your data integration workflows (including analytics workloads in Azure Databricks) using Azure Data Factory pipelines.
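
As a hedged sketch of the monitoring side, the snippet below kicks off a pipeline run and polls its status with the Create Run and Get Pipeline Run REST calls (2018-06-01 api-version). The factory and pipeline names are placeholders; the ADF monitoring UI surfaces the same run states.

```python
# Sketch: trigger a pipeline run (which may include Databricks activities) and
# poll it until it reaches a terminal state. Names below are placeholders.
import time
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, FACTORY = "<subscription-id>", "<resource-group>", "<data-factory-name>"
BASE = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.DataFactory/factories/{FACTORY}")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Kick off the pipeline run.
run = requests.post(f"{BASE}/pipelines/DailyAnalytics/createRun",
                    params={"api-version": "2018-06-01"}, json={}, headers=headers)
run.raise_for_status()
run_id = run.json()["runId"]

# Poll the run status until it succeeds, fails, or is cancelled.
while True:
    status = requests.get(f"{BASE}/pipelineruns/{run_id}",
                          params={"api-version": "2018-06-01"}, headers=headers).json()["status"]
    print("Run", run_id, "is", status)
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)
```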

For more information:

Gaurav Malhotra shows Donovan Brown how you can now visually build pipelines for Azure Data Factory V2 and be more productive by getting pipelines up and running quickly without writing any code.

For more information, see: