This video explores the output of the GANs described in this paper.


We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.

One of the promises of IoT is to bring the intelligence of the Cloud to the Edge, running IoT data analytics as close as possible to the data source. This reduces latency, optimizes performance and response times, supports offline scenarios, helps comply with privacy policies and regulations, reduces data transfer costs, and more.

One thing you really have to consider when bringing Artificial Intelligence to the Edge is the hardware you will need to run these powerful algorithms. Ted Way from the Azure Machine Learning team joins Olivier on the IoT Show to discuss hardware acceleration at the Edge for AI. They discuss scenarios and technologies Microsoft develops and uses to accelerate AI in the Cloud and at the Edge, such as graphics cards (GPUs), FPGAs, and CPUs. To illustrate all this, Ted walks us through real-life scenarios and demos Azure IoT Edge running machine learning vision algorithms.

Learn more about hardware acceleration for AI at the Edge:

Create a Free Account (Azure):