Mahesh Yadav, Software Engineer on the Intelligent Edge team, joins the IoT Show to unbox the Microsoft Vision AI DevKit (aka.ms/iotshow/visionaidevkit), a smart camera for the intelligent edge.

The developer kit uses Qualcomm's Vision Intelligence 300 Platform, which runs machine learning with hardware acceleration, delivering results in milliseconds. That makes it a great fit for connected-car and connected-factory scenarios that need low latency or must keep working offline.

In this episode, you will see how easy it is to bring up AI on the edge with Azure IoT Edge and Azure Machine Learning.

The DevKit includes a sample AI model that identifies 183 objects including people, laptops, chairs and more. The highlight of the show is a real-time camera demo that asserts that both Mahesh and Olivier really are people.

And it’s always good when an AI affirms your personhood. 😉

Watch Paul de Carlo, Microsoft Cloud Developer Advocate, step through his Intelligent Edge Hands-on Lab (https://aka.ms/iotshow/intelligentedge), which is open source and available for you to try.

During this lab, Paul deploys an IoT Edge module to an NVIDIA Jetson Nano development board to detect objects in the live video stream of a webcam. In this episode, the webcam scans the floor at Microsoft Ignite 2019, detecting people, backpacks, chairs, and more in real time.

The same setup can detect objects in YouTube video streams, RTSP streams, or HoloLens Mixed Reality Capture, and can stream up to 32 videos simultaneously. Object detection is accomplished using YOLOv3-tiny with Darknet.
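Detectors like YOLOv3-tiny emit many overlapping candidate boxes per frame, which are typically pruned with non-maximum suppression (NMS). Here is a minimal illustrative sketch of that pruning step (not the lab's actual code), assuming boxes are (x1, y1, x2, y2) tuples:

```python
def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    # greedily keep the highest-scoring box, drop boxes that overlap it too much
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

Real pipelines run NMS per class and on GPU-friendly arrays, but the logic is the same.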

Learn why Paul and Olivier are never going to give you up, never going to let you down during this memorable episode.

TensorFlow Lite is a framework for running lightweight machine learning models, and it’s perfect for low-power devices like the Raspberry Pi.

This video shows how to set up TensorFlow Lite on the Raspberry Pi for running object detection models to locate and identify objects in real-time webcam feeds, videos, or images. 
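One step every such pipeline shares is resizing each camera frame to the model's fixed input size (300×300 is common for SSD-MobileNet TFLite models, though sizes vary). A toy nearest-neighbor resize in plain Python, purely to show the idea; real code would use OpenCV or Pillow, and the frame shape here is an assumption (a 2D list of pixel values):

```python
def resize_nearest(frame, out_h, out_w):
    # frame: 2D list (rows of pixel values); nearest-neighbor downscale/upscale
    in_h, in_w = len(frame), len(frame[0])
    return [
        [frame[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]
```

After resizing, the frame is cast to the model's input dtype and fed to the TFLite interpreter, which returns boxes, class IDs, and scores.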

Written version of this guide: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md

Will Kwan spent 50 days creating an AI startup, building the project as part of Y Combinator's Startup School.

You can try it out here: https://omnipost.co.

I’m building a machine learning/SaaS startup. In this video, I share the results of my first 50 days of full-time work, explaining my business strategy, showing the core features I designed and programmed, and summarizing what I learned from my users. I also give an overview of all the programming frameworks and APIs I used.

Here’s an interesting idea that uses AI to make recycling easier.

Arjun and Vayun realized that separating waste is sometimes confusing and cumbersome—something that can derail people’s good intentions to recycle. Using TensorFlow, they built a “Smart Bin” that can identify types of trash and sort them automatically. The Smart Bin uses a camera to take a picture of the object inserted in the tray, then analyzes the picture with a Convolutional Neural Network, a type of machine learning algorithm designed to recognize visual objects.
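The core operation of a convolutional network is sliding a small kernel over the image and summing element-wise products; stacks of these layers learn to pick out edges, textures, and eventually whole object categories. A minimal sketch of that single operation in plain Python (illustrative only; the Smart Bin would use a full framework like TensorFlow):

```python
def conv2d(image, kernel):
    # "valid" cross-correlation: the basic operation inside a convolutional layer
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for x in range(out_w)
        ]
        for y in range(out_h)
    ]
```

A horizontal kernel like `[[1, -1]]` responds strongly wherever pixel intensity changes left-to-right, which is how early CNN layers detect edges.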

Are you interested in Computer Vision, Deep Learning, and OpenCV, but not sure where to start?

Then this step-by-step guide is for you.

Follow these steps to get OpenCV configured/installed on your system, learn the fundamentals of Computer Vision, and graduate to more advanced topics, including Deep Learning, Face Recognition, Object Detection, and more!

Siraj Raval has designed an image classifier template for you to use as a learning tool.

This is an example of how machine learning can be used in a software-as-a-service context; hopefully it gives you some ideas for doing something similar. It combines a few components, including a Python web API, a Flutter mobile app, and a FastAI model-training script.

In this episode, he explains the process of building this template and how all the components fit together.

Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible.

Today, computer vision systems do it with greater than 99 percent accuracy.

How?

Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed.

Watch this amazing live demo, where Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.
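The "only look once" idea is that instead of scanning an image with a sliding window, the network divides it into a grid and predicts boxes and class scores for every cell in a single forward pass; decoding the output is then just a threshold over that grid. A toy decoding step with invented shapes (one prediction per cell), purely to show the structure:

```python
def decode_grid(preds, conf_thresh=0.5):
    # preds[row][col] = (confidence, class_name) for a toy one-box-per-cell grid
    detections = []
    for r, row in enumerate(preds):
        for c, (conf, cls) in enumerate(row):
            if conf >= conf_thresh:
                detections.append({"cell": (r, c), "class": cls, "conf": conf})
    return detections
```

The real network predicts several boxes per cell with coordinates and per-class probabilities, but the single-pass grid structure is what makes YOLO fast.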

Siraj Raval has designed a free curriculum to help anyone learn Computer Vision in the most efficient way possible.

My curriculum starts off with low-level vision techniques and progressively increases in difficulty until we get to high-level analysis techniques, i.e. deep learning. Don’t worry if you’ve never coded before; I’ve included links to help you learn Python as well. Now is the time to build computer vision solutions: the world needs these menial tasks automated to help liberate humans from drudgery. The tools needed are Python, OpenCV, and TensorFlow, all of which have their place, and I’ll explain how they all fit together in this video. Enjoy!