Building Vision AI applications has never been simpler, thanks to Azure IoT Central and IoT Edge.

Today, you can use various technologies to create end-to-end Video Analytics solutions, but assembling them all, from video acquisition and analytics at the edge to managing the cameras and gateways, is not trivial.

The Azure IoT Central team just released a new App Template and IoT Edge modules that will help you do all this in a matter of hours.

Check out this demo-heavy episode of the IoT Show with Nandakishor Basavanthappa, PM on the Azure IoT Central team.

Learn more by reading the blog post at https://aka.ms/iotshow/VisionAIInIoTCentral

Get started today

  • You can use the new Video Analytics for Object & Motion Detection template to build and deploy your live video analytics solution.
  • You can build a Video Analytics solution within hours by leveraging Azure IoT Central, Live Video Analytics, and Intel.
  • You can learn more about Live Video Analytics on IoT Edge here and try out some of the other video analytics scenarios via the quickstarts and tutorials here. These show you how you can leverage open source AI models such as those in the Open Model Zoo repository or YOLOv3, or custom models that you have built, to analyze live video.
  • You can learn more about the OpenVINO™ Inference Server by Intel® in the Azure Marketplace and its underlying technologies here. You can access developer kits to learn how to accelerate edge workloads using Intel®-based accelerators: CPUs, iGPUs, VPUs, and FPGAs. You can select from a wide range of AI models from the Open Model Zoo.

Murtaza’s Workshop – Robotics and AI posted this video explaining how to perform facial recognition with high accuracy.

We will first briefly go through the theory and learn the basic implementation. Then we will create an Attendance project that uses a webcam to detect faces and record attendance live in an Excel sheet.

Link to the Article:
https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78
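To give a rough idea of what such a project looks like in code, here is a minimal sketch using the open-source face_recognition library that the article describes. The reference photos, names, and the attendance.csv output file are hypothetical placeholders, and it logs to a CSV file rather than an Excel sheet for simplicity.

```python
import csv
from datetime import datetime

import cv2
import face_recognition  # pip install face_recognition

# Hypothetical reference photos: one image per known person.
known_people = {"Alice": "alice.jpg", "Bob": "bob.jpg"}
known_names = list(known_people)
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in known_people.values()
]

seen = set()
video = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Find faces in the frame and compare them against the known encodings.
    locations = face_recognition.face_locations(rgb)
    for encoding in face_recognition.face_encodings(rgb, locations):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            name = known_names[matches.index(True)]
            if name not in seen:
                seen.add(name)
                # Append one attendance row per person (hypothetical CSV output).
                with open("attendance.csv", "a", newline="") as f:
                    csv.writer(f).writerow([name, datetime.now().isoformat()])

    cv2.imshow("Attendance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()
```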

Form Recognizer is a Cognitive Service that lets you identify and extract text, key/value pairs, and table data from documents. With Form Recognizer you can train custom models to extract structured data from your forms and documents.

Learn about the latest updates in Azure Form Recognizer, including the Form Recognizer v2.1 Preview.
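For a flavor of the API, here is a minimal sketch using the azure-ai-formrecognizer Python SDK (v3.1, which targets the v2.1 preview service) to pull text lines and table cells out of a document with the prebuilt layout analysis. The endpoint, key, and file name are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient  # pip install azure-ai-formrecognizer

# Placeholder endpoint and key for your Form Recognizer resource.
client = FormRecognizerClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze the layout of a local document (placeholder file name).
with open("invoice.pdf", "rb") as document:
    poller = client.begin_recognize_content(document)
pages = poller.result()

for page in pages:
    # Extracted text lines.
    for line in page.lines:
        print(line.text)
    # Extracted table cells with their row/column positions.
    for table in page.tables:
        for cell in table.cells:
            print(f"[{cell.row_index},{cell.column_index}] {cell.text}")
```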

Learn More:

With applications ranging from classifying objects in self-driving cars to identifying blood cells in the healthcare industry to spotting defective items in the manufacturing industry, image classification is one of the most important applications of computer vision.

How does it work? Which framework should you use?

Here’s a great tutorial.

In this article, we will understand how to build a basic image classification model in PyTorch and TensorFlow. We will start with a brief overview of both frameworks, then take the benchmark MNIST handwritten digit classification dataset and build an image classification model using a CNN (Convolutional Neural Network) in PyTorch and in TensorFlow.
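To make the idea concrete, here is a minimal sketch of the PyTorch side: a small CNN trained for one epoch on MNIST. The architecture and hyperparameters are illustrative, not the article's exact values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small convolutional network for 28x28 grayscale MNIST digits.
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        return self.fc(x.flatten(1))

train_set = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training epoch.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
print("final batch loss:", loss.item())
```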

YOLO, short for You Only Look Once, was proposed as a real-time object detection technique by Joseph Redmon et al. in their research work.

It frames object detection in images as a regression problem to spatially separated bounding boxes and associated class probabilities.

In this approach, a single neural network divides the image into regions and predicts bounding boxes and probabilities for each region.

Here’s a great article on the subject.

In this article, we will learn how to detect objects present in images. For object detection, we will use the YOLO (You Only Look Once) algorithm and demonstrate it on a few images. As a result, we will get images with the detected objects highlighted and labeled with their detection probabilities.
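As a sketch of that workflow, the snippet below runs a pretrained YOLOv3 model through OpenCV's DNN module. The yolov3.cfg, yolov3.weights, coco.names, and image file names are placeholders you would download or supply yourself, and the confidence thresholds are illustrative.

```python
import cv2
import numpy as np

# Placeholder paths: download the YOLOv3 config/weights and COCO class names separately.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]

image = cv2.imread("street.jpg")
h, w = image.shape[:2]

# The network expects a 416x416 RGB blob scaled to [0, 1].
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            # Detections are (center_x, center_y, width, height) relative to the image size.
            cx, cy, bw, bh = detection[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping boxes for the same object.
for i in cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4):
    i = int(np.array(i).flatten()[0])  # handle differing OpenCV return shapes
    x, y, bw, bh = boxes[i]
    label = f"{classes[class_ids[i]]}: {confidences[i]:.2f}"
    cv2.rectangle(image, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.putText(image, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)
```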

Here’s a great tutorial on how to use OpenCV.

Time Index:

  • Introduction to Images: 2:17
  • Installations: 4:37
  • Chapter 1: 9:09
  • Chapter 2: 17:01
  • Chapter 3: 27:31
  • Chapter 4: 34:12
  • Chapter 5: 44:59
  • Chapter 6: 50:04
  • Chapter 7: 56:14
  • Chapter 8: 1:15:37
  • Chapter 9: 1:40:31
  • Project 1: 1:46:03
  • Project 2: 2:15:45
  • Project 3: 2:56:34
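To give a taste of the early chapters, here is a minimal sketch of the OpenCV basics the tutorial walks through: reading an image, grayscale conversion, blurring, edge detection, resizing, and cropping. The image file name is a placeholder.

```python
import cv2

# Placeholder image path.
img = cv2.imread("sample.jpg")

# Basic operations covered in the early chapters.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # grayscale conversion
blur = cv2.GaussianBlur(gray, (7, 7), 0)          # Gaussian blur
edges = cv2.Canny(blur, 100, 200)                 # Canny edge detection
resized = cv2.resize(img, (300, 200))             # resize to 300x200
cropped = img[0:200, 100:300]                     # crop by array slicing

for name, out in [("gray", gray), ("blur", blur), ("edges", edges)]:
    cv2.imshow(name, out)
cv2.waitKey(0)
cv2.destroyAllWindows()
```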

Here’s an interesting computer vision project that should make recycling easier.

Trash Classifier Example Project

The Trash Classifier project, affectionately known as “Where does it go?!”, is designed to make throwing things away faster and more reliable. The Trash Classifier project uses a Machine Learning model trained in Lobe to identify whether an object goes in the garbage, recycling, compost, […]
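As a rough illustration only (the project's actual code and export format may differ), here is a minimal sketch that loads a classifier exported from Lobe as a TensorFlow Lite model and maps its prediction to a bin. The model file name, image file name, label list, and float input assumption are all hypothetical.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical label order; the real project defines its own classes.
LABELS = ["garbage", "recycling", "compost"]

# Load a TensorFlow Lite export of the classifier (hypothetical path).
interpreter = tf.lite.Interpreter(model_path="trash_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the photo to the model's expected input shape and scale to [0, 1]
# (assumes a float32 input tensor).
_, height, width, _ = input_details["shape"]
image = Image.open("item.jpg").convert("RGB").resize((width, height))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details["index"], batch)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]

print("Put it in:", LABELS[int(np.argmax(scores))])
```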