In this video, Seth and Noel take a lap through Microsoft Cognitive Services and cover the updates and enhancements made public at the BUILD 2018 conference.
In this video from the Ignite conference last September, learn about the latest additions to the Cognitive Toolkit, which offer a Python API as well as a GUI, providing a seamless experience from data loading through operationalization, with all the steps in between.
The goal is to support classification, object detection, and image similarity use cases. This is a work in progress; in this session, they demonstrate only a classification pipeline.
This video demonstrates how to use the CNTK libraries to build out a simple image classifier using a neural network.
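The video uses the CNTK libraries themselves; as a rough stand-in for what such a classifier boils down to, here is a minimal numpy sketch of the same idea: a single dense softmax layer trained by plain gradient descent on tiny synthetic "images". Everything here (the data, layer size, and learning rate) is illustrative, not taken from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 20 flattened 4x4 grayscale images in 2 classes.
# Class 0 is bright on the left half, class 1 on the right half.
X = np.zeros((20, 16))
y = np.repeat([0, 1], 10)
X[:10, :8] = 1.0
X[10:, 8:] = 1.0
X += 0.1 * rng.standard_normal(X.shape)

# One dense softmax layer: the smallest possible "neural" classifier.
W = 0.01 * rng.standard_normal((16, 2))
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                      # plain gradient descent
    probs = softmax(X @ W + b)
    probs[np.arange(20), y] -= 1          # d(cross-entropy)/d(logits)
    W -= 0.1 * (X.T @ probs) / 20
    b -= 0.1 * probs.mean(axis=0)

pred = (X @ W + b).argmax(axis=1)
print((pred == y).mean())                 # training accuracy
```

A real CNTK pipeline adds convolutional layers and a data reader, but the train-then-predict loop has this same shape.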
In this video, Siraj Raval talks about the technology behind Human Pose Estimation. This technology can, among other things, convert humans depicted in video to 3D models.
Think of what this means for motion capture.
Artificial Intelligence has transformed vision technology! Carl and Richard talk to Tim Huckaby about his latest work with vision systems for retail, security and more. Tim talks about how AI has fundamentally changed the way you implement vision systems, taking away many of the limitations on the number of people tracked, on object and face recognition, and so on. The conversation digs into the demonstration at the Build conference of using regular security cameras to implement a real-time safety tracking system on a construction site – aspirational, but coming soon! And of course, there’s a long conversation about privacy. What is fair, reasonable and wise?
Listen Now →
This week James Montemagno is joined by Jim Bennett, a Cloud Developer Advocate at Microsoft, who shows us how to use AI inside a mobile app to identify his daughters’ toys.
In the video below, he walks through using the Azure Custom Vision service to generate a model that identifies different toys, then shows how you can use these models from inside your app: either remotely, by calling an Azure service, or locally, by running the model on your device using CoreML and TensorFlow.
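For the remote path, the app sends the image bytes to the Custom Vision prediction endpoint. A minimal sketch of assembling that call is below; the URL shape follows the Custom Vision v3.0 REST API, and every identifier (endpoint, project ID, iteration name, key) is a placeholder you would replace with values from the Custom Vision portal.

```python
# All values below are placeholders; the real endpoint, project ID and
# prediction key come from the Custom Vision portal after you train a model.
ENDPOINT = "https://example.cognitiveservices.azure.com"
PROJECT_ID = "00000000-0000-0000-0000-000000000000"
ITERATION = "toyClassifier"          # the published iteration name
PREDICTION_KEY = "your-prediction-key"

def build_classify_request(image_bytes: bytes):
    """Assemble the URL, headers and body for a remote prediction call."""
    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
           f"/classify/iterations/{ITERATION}/image")
    headers = {
        "Prediction-Key": PREDICTION_KEY,
        "Content-Type": "application/octet-stream",  # raw image bytes
    }
    return url, headers, image_bytes

url, headers, body = build_classify_request(b"<image bytes>")
print(url)
```

You would then POST `body` to `url` with those headers (e.g. via `requests.post`) and read the predicted tags from the JSON response; the local CoreML/TensorFlow path skips the network call entirely.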
- Azure Custom Vision service
- Custom Vision service docs
- Sample toy identifier app
- Xamarin plugin to use CoreML and TensorFlow with custom vision models
- Find James on: Twitter, GitHub, Blog, and his weekly development podcast Merge Conflict.
- Follow @JamesMontemagno
- Never Miss an Episode: Follow @TheXamarinShow
- Find Jim on: Twitter, GitHub, Blog, Jim’s book – Xamarin In Action
Siraj Raval takes on YOLO. It’s not what you think.
This time it stands for You Only Look Once, a great object detection method for adding computer vision features to live streaming video.
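Detectors like YOLO emit many overlapping candidate boxes per object and rely on a post-processing step, non-max suppression, to keep only the best one. Here is a minimal numpy sketch of intersection-over-union (IoU) and greedy NMS; the boxes, scores, and threshold are made-up illustrations, not from the video.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against many ([x1, y1, x2, y2])."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: keep the best box, drop heavy overlaps."""
    order = scores.argsort()[::-1]       # indices, highest score first
    keep = []
    while order.size:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep

# Two near-duplicate detections of one object, plus a separate object.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                # the duplicate box is suppressed
```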
In this talk from SciPy 2017, Daniil Pakhomov walks through the theory behind recent state-of-the-art image segmentation methods based on fully convolutional networks (FCNs) and presents his library, which aims to give users a simpler way to apply these methods to their own problems.
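The core idea behind FCN-based segmentation is dense prediction: the network ends in a 1x1 convolution that produces one score map per class, and the mask is the per-pixel argmax over those maps. A minimal numpy sketch of that final step, with random stand-in feature maps instead of a real backbone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake feature maps from a convolutional backbone: C channels on an HxW grid.
C, H, W = 8, 4, 4
features = rng.standard_normal((C, H, W))

# The FCN "head" is a 1x1 convolution: a per-pixel linear map from
# C feature channels to one score per class, preserving spatial layout.
num_classes = 3
w = rng.standard_normal((num_classes, C))
scores = np.einsum("kc,chw->khw", w, features)   # (num_classes, H, W)

# Dense prediction: each pixel gets the class with the highest score.
mask = scores.argmax(axis=0)                     # (H, W) class indices
print(mask.shape)
```

In a real FCN the score maps are also upsampled back to the input resolution before the argmax; that step is omitted here.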
First, play with this online demo of pix2pix, then watch the video below, which explains the academic paper behind it.