In this episode of Five Things, John Papa and Jeff Hollan bring you five reasons you should check out Azure Functions today. You can also listen to Jeff dive deeper into serverless on his recent episode of Real Talk JavaScript.

Data integration is complex, with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are common in data integration workflows, whether because data doesn't arrive on time, because of code issues in your pipelines, because of infrastructure problems, or for other reasons.

A common requirement is the ability to rerun failed activities within data integration workflows. Sometimes you also need to rerun activities to reprocess data because of an error further upstream. Azure Data Factory now enables you to rerun the entire pipeline, or to rerun it downstream from a particular activity inside the pipeline, as sketched below.
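As a rough illustration (not from the article), the sketch below shows how such a rerun could be triggered from Python with the azure-mgmt-datafactory management SDK, which exposes the rerun parameters of the pipeline "Create Run" operation. The subscription, resource group, factory, pipeline, run ID, and activity name are all hypothetical placeholders, and exact parameter names may vary between SDK versions.

```python
# Minimal sketch, assuming the azure-identity and azure-mgmt-datafactory packages.
# All resource names below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, subscription_id="<subscription-id>")

RESOURCE_GROUP = "my-rg"              # placeholder
FACTORY_NAME = "my-data-factory"      # placeholder
PIPELINE_NAME = "my-pipeline"         # placeholder
FAILED_RUN_ID = "<previous-run-id>"   # run ID of the failed pipeline run

# Rerun the whole pipeline, referencing the failed run.
full_rerun = client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    PIPELINE_NAME,
    reference_pipeline_run_id=FAILED_RUN_ID,
    is_recovery=True,
)

# Or rerun only from a particular activity onward (downstream rerun).
partial_rerun = client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    PIPELINE_NAME,
    reference_pipeline_run_id=FAILED_RUN_ID,
    is_recovery=True,
    start_activity_name="CopySalesData",  # hypothetical activity name
)

print(full_rerun.run_id, partial_rerun.run_id)
```

The same rerun options are also available interactively from the Monitor view in the Azure Data Factory portal, so the SDK route is only needed when you want to automate recovery.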

In this episode of Data Driven, Frank and Andy talk to two guests, Ronald Schmelzer and Kathleen Walch, co-founders of Cognilytica and co-hosts of the AI Today podcast.


This video explores the output of the generative adversarial network (GAN) described in the paper below.

Abstract:

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.

One of the promises of IoT is bringing the intelligence of the Cloud to the Edge to run IoT data analytics as close as possible to the data source. This reduces latency, optimizes performance and response times, supports offline scenarios, helps comply with privacy policies and regulations, reduces data transfer costs, and more.

One thing you really have to consider when bringing Artificial Intelligence to the edge is the hardware you will need to run these powerful algorithms. Ted Way from the Azure Machine Learning team joins Olivier on the IoT Show to discuss hardware acceleration for AI at the Edge. They discuss scenarios and technologies Microsoft develops and uses to accelerate AI in the Cloud and at the Edge, such as graphics cards (GPUs), FPGAs, and CPUs. To illustrate all this, Ted walks us through real-life scenarios and demos IoT Edge running machine learning vision algorithms.

Learn more about hardware acceleration for AI at the Edge: https://docs.microsoft.com/azure/machine-learning/service/concept-accelerate-with-fpgas

Create a Free Account (Azure): https://aka.ms/aft-iot