If you’ve ever attended one of my neural network talks, you know I like to point out that what neural networks learn is rarely what you think they’re learning.

As we come to rely on AI for increasingly important decisions, we may want to pause and recognize that our training data can be used as an attack vector by bad actors.

The papers, titled “Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning” [PDF] and “Backdooring and Poisoning Neural Networks with Image-Scaling Attacks” [PDF], explore how the preprocessing phase of machine learning presents an opportunity to tamper with neural network training in a way that isn’t easily detected. The idea: secretly poison the training data so that the software later makes bad decisions and predictions.
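The core trick is easier to see in code. Here is a minimal sketch (my own toy illustration in numpy, not the papers’ implementation): a downscaler that keeps every k-th pixel only ever “sees” a small fraction of the source image, so an attacker can overwrite exactly those pixels with a hidden image while leaving the rest of the picture looking normal to a human reviewer.

```python
import numpy as np

def nearest_downscale(img, k):
    """Toy nearest-neighbor downscale: keep every k-th pixel."""
    return img[::k, ::k]

rng = np.random.default_rng(0)

# A "clean-looking" 64x64 source image and a 16x16 image to hide in it.
source = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
hidden = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)

# Attack: overwrite only the pixels the scaler will sample (1 in 16),
# so the full-size image still looks essentially unchanged to a human.
poisoned = source.copy()
poisoned[::4, ::4] = hidden

# The training pipeline sees the hidden image, not what a reviewer saw.
assert np.array_equal(nearest_downscale(poisoned, 4), hidden)

# Only a small fraction of the pixels were touched.
print(f"fraction of pixels modified: {np.mean(poisoned != source):.3f}")
```

Real scaling libraries use more complex interpolation than this strided slice, but the attack surface is the same: whichever source pixels dominate the scaler’s output are the ones the attacker targets.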

The wide array of options for body scanning have improved patient outcomes, but the process of image interpretation is still labor intensive.

Here’s an interesting article on applying AI to the problem.

Malignant brain tumors are among the deadliest forms of cancer, partly because of the grim prognosis, but also because of the direct consequences of decreased cognitive function and the lasting adverse impact on the patient’s quality of life.

Machine Learning with Phil ponders the question: “is it better to specialize or generalize in artificial intelligence and deep learning?”

The answer depends on your career aspirations. Do you want to be a deep learning research professor?

Do you want to go to work for Google, Facebook, or other global mega corporations?

Or do you want to be your own unicorn start up founder?

Each has its own specialization requirements, which Phil breaks down in this video.

Siraj Raval explores why a computer algorithm classifies an image the way it does. This question is critical when AI is applied to diagnostics, driving, or any other form of high-stakes decision making.

In this video, he raises awareness of one technique in particular: “Grad-CAM,” or Gradient-weighted Class Activation Mapping.
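The math behind Grad-CAM is compact enough to sketch. Below is a minimal numpy illustration of the core computation (a toy sketch with random data, not Siraj’s code): average the gradients of the class score over each feature map to get per-channel weights, take the weighted sum of the maps, and apply a ReLU so only regions that push the score up survive.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradient of the class score w.r.t. those activations.
    Both inputs have shape (C, H, W)."""
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted sum of feature maps, then ReLU.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] for overlaying on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 3 channels of 4x4 activations from a made-up conv layer.
rng = np.random.default_rng(1)
activations = rng.random((3, 4, 4))
grads = rng.random((3, 4, 4))
heatmap = grad_cam(activations, grads)
print(heatmap.shape)  # → (4, 4)
```

In practice the activations and gradients come from a real network (e.g. via a framework’s autodiff on the last conv layer), and the small heatmap is upsampled to the input image’s resolution for display.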

Whether you realize it or not, AI is here.

Object detection now plays a very important role in our lives, from the face detection that unlocks your smartphone to detecting bombs in public places such as airports and bus terminals.

These features are a result of the application of machine learning and artificial intelligence research.

Here’s an article covering how TensorFlow can be used to identify objects.

Security and surveillance cameras are very widely used across organisations, so it is important to enhance their effectiveness while saving manpower costs and preventing human errors. In such scenarios, image/video analytics plays a very important role in performing real-time event detection, post-event analysis, and the extraction of statistical and operational data from the videos. Video analytics (VA) is the general analysis of video images to recognise unusual or potentially dangerous behaviour and events in real time. It can perform three major tasks — provide information, offer assistance, and generate alerts.

Mahesh Yadav, Software Engineer on the Intelligent Edge team, joins the IoT Show to unbox the Microsoft Vision AI DevKit (aka.ms/iotshow/visionaidevkit), a smart camera for the intelligent edge.

The developer kit uses Qualcomm’s Vision Intelligence 300 Platform, which runs hardware-accelerated machine learning and delivers results in milliseconds. That makes it a great fit for connected car or connected factory scenarios, where you need low latency as well as support for offline operation.

In this episode, you will see how easy it is to bring up AI on the edge with Azure IoT Edge and Azure Machine Learning.

The DevKit includes a sample AI model that identifies 183 objects, including people, laptops, and chairs. The highlight of the show is a real-time camera demo that asserts that both Mahesh and Olivier really are people.

And it’s always good when an AI affirms your personhood. 😉

Watch Paul de Carlo, Microsoft Cloud Developer Advocate, step through his Intelligent Edge Hands-on Lab (https://aka.ms/iotshow/intelligentedge), which is open source and available for you to try.

During this lab, Paul deploys an IoT Edge module to an NVIDIA Jetson Nano development board to detect objects in the live video stream from a webcam. In this episode, the webcam is scanning the floor at Microsoft Ignite 2019 and detects people, backpacks, chairs, and more, in real time.

The same setup can detect objects in YouTube video streams, RTSP streams, or HoloLens Mixed Reality Capture and stream up to 32 videos simultaneously. Object Detection is accomplished using YOLOv3-tiny with Darknet.
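Detectors like YOLOv3-tiny emit many overlapping candidate boxes per object, so the standard final step is non-max suppression (NMS). Here is a hedged numpy sketch of greedy NMS (a generic illustration with toy boxes, not Darknet’s actual code): keep the highest-scoring box, drop any remaining box that overlaps it too much, and repeat.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-max suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        # Drop everything that overlaps the winner too much.
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep

# Two near-duplicate detections of one object, plus a distinct one.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]])
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]
```

When you scale this up to 32 simultaneous streams, per-frame NMS like this is what keeps each object reported once instead of dozens of times.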

Learn why Paul and Olivier are never going to give you up, never going to let you down during this memorable episode.

TensorFlow Lite is a framework for running lightweight machine learning models, and it’s perfect for low-power devices like the Raspberry Pi.

This video shows how to set up TensorFlow Lite on the Raspberry Pi for running object detection models to locate and identify objects in real-time webcam feeds, videos, or images. 

Written version of this guide: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md

Will Kwan spent 50 days creating an AI startup, a project that grew out of Y Combinator Startup School.

You can try it out here: https://omnipost.co.

I’m building a machine learning/SaaS startup. In this video, I share the results of my first 50 days of full-time work, explaining my business strategy, showing the core features I designed and programmed, and summarizing what I learned from my users. I also give an overview of all the programming frameworks and APIs I used.