Here’s an interesting article in Forbes on how computer vision is being applied in the real world.

Even though early experiments in computer vision started in the 1950s and it was first put to use commercially to distinguish between typed and handwritten text by the 1970s, today the applications for computer vision have grown exponentially. By 2022, the computer vision and hardware market is expected to […]

I’m glad to see that the Azure Custom Vision Service is getting some press. It’s a simple way to build your own computer vision models without having to train on thousands (or tens of thousands) of images. In fact, as few as 15 images can yield workable results.

Here’s an article on www.itbusiness.ca about the service.

“Customers can train their own custom image classifiers and object detectors,” said Tina Coll, the product marketing manager at Microsoft Corp. “For example, a company could choose to detect their own logo in the video of a sports event to track the impact of their advertising or a student might want to count the number of animals passing in front of a nature camera.”
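
To make that concrete, here’s a rough sketch of what training a classifier on a handful of images looks like with the Custom Vision Python SDK. The endpoint, key, and file paths are placeholders, and the method names are from my reading of the SDK quickstart, so double-check them against the current docs:

```python
# Sketch: train a two-tag image classifier on a handful of images with
# the Azure Custom Vision training SDK. Endpoint, key, and paths are
# placeholders; verify method names against the current SDK docs.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch,
    ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com/"  # placeholder
TRAINING_KEY = "YOUR_TRAINING_KEY"  # placeholder

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

project = trainer.create_project("logo-detector-demo")
logo_tag = trainer.create_tag(project.id, "our-logo")
other_tag = trainer.create_tag(project.id, "no-logo")

# ~15 images per tag is enough for a workable first iteration.
entries = []
for i in range(15):
    with open(f"images/logo/{i}.jpg", "rb") as f:   # placeholder paths
        entries.append(ImageFileCreateEntry(
            name=f"logo_{i}", contents=f.read(), tag_ids=[logo_tag.id]))
    with open(f"images/other/{i}.jpg", "rb") as f:
        entries.append(ImageFileCreateEntry(
            name=f"other_{i}", contents=f.read(), tag_ids=[other_tag.id]))

trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))
iteration = trainer.train_project(project.id)  # poll iteration.status until done
```

Once training finishes, you publish the iteration and call the prediction endpoint the same way, with a prediction key instead of a training key.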

AI is set to disrupt every field and every industry. Healthcare, in particular, seems primed for disruption. Here’s an interesting project out of Stanford.

“One of the really exciting things about computer vision is that it’s this powerful measuring tool,” said Yeung, who will be joining the faculty of Stanford’s department of biomedical data science this summer. “It can watch what’s happening in the hospital setting continuously, 24/7, and it never gets tired.”

Current methods for documenting patient movement are burdensome and ripe for human error, so this team is devising a new way that relies on computer vision technology similar to that in self-driving cars. Sensors in a hospital room capture patient motions as silhouette-like moving images, and a trained algorithm identifies the activity — whether a patient is being moved into or out of bed, for example, or into or out of a chair.
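
The article doesn’t share the team’s actual model, but the pipeline it describes (silhouette clips in, an activity label out) maps onto a fairly standard video-classification setup. Here’s a toy PyTorch sketch of that shape; the dimensions and activity labels are invented for illustration, and this is not the Stanford model:

```python
# Toy sketch of the described pipeline: a short clip of depth-sensor
# silhouettes goes in, an activity label comes out. Dimensions and
# labels are invented for illustration.
import torch
import torch.nn as nn

ACTIVITIES = ["into_bed", "out_of_bed", "into_chair", "out_of_chair"]  # hypothetical

class SilhouetteActivityNet(nn.Module):
    def __init__(self, num_frames=16, num_classes=len(ACTIVITIES)):
        super().__init__()
        # Treat the clip's frames as the input channels of a 2-D CNN.
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip):  # clip: (batch, frames, height, width)
        h = self.features(clip).flatten(1)
        return self.classifier(h)

model = SilhouetteActivityNet()
clip = torch.rand(1, 16, 64, 64)  # one fake 16-frame silhouette clip
pred = model(clip).argmax(dim=1)
print(ACTIVITIES[pred.item()])
```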

Here’s an interesting article on “oscillatory neural networks” and how physicists trained one to perform image recognition.

An oscillatory neural network is a complex interlacing of interacting elements (oscillators) that are able to receive and transmit oscillations of a certain frequency. Receiving signals of various frequencies from preceding elements, the artificial neuron oscillator can synchronize its rhythm with these fluctuations. As a result, […]
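
That description of oscillators locking onto the rhythm of incoming signals is essentially the synchronization behavior of coupled-oscillator models like Kuramoto’s. Here’s a small NumPy sketch of that dynamic; it’s my illustration of the general idea, not the physicists’ actual network:

```python
# Kuramoto-style coupled oscillators: each phase drifts at its own
# natural frequency but is pulled toward its neighbors, so with enough
# coupling the population synchronizes. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, coupling, dt, steps = 50, 2.0, 0.01, 5000
freqs = rng.normal(0.0, 1.0, n)        # natural frequencies
phases = rng.uniform(0, 2 * np.pi, n)  # initial phases

def order_parameter(phases):
    """|mean of e^{i*theta}|: 0 = incoherent, 1 = fully synchronized."""
    return abs(np.exp(1j * phases).mean())

print(f"before: r = {order_parameter(phases):.2f}")
for _ in range(steps):
    # Each oscillator feels the mean-field pull of all the others.
    pull = np.sin(phases[None, :] - phases[:, None]).mean(axis=1)
    phases += dt * (freqs + coupling * pull)
print(f"after:  r = {order_parameter(phases):.2f}")
```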

Analyzing people’s social behavior from images and videos is one of the most popular tasks for AI. Researchers have achieved rather high accuracy in group-level emotion recognition, but until now it has remained impossible to deploy such systems on a mass scale.

The problem was that most video systems require images containing high-resolution face close-ups. Ordinary cameras installed on the street or in a supermarket have resolutions that are too low and are mounted so high that the facial regions in the captured video are too small to work with.

However, this may no longer be the case.

Alexander Tarasov and Andrey Savchenko, researchers from HSE, have developed an algorithm that is comparable to existing group-level emotion recognition techniques in recognition accuracy (75.5%). At the same time, it requires only 5 MB of system memory, processes an image or video frame in just one-hundredth of a second, and can be used with low-quality video data.
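
The article doesn’t detail the algorithm itself, but group-level emotion recognition systems typically score each detected face and then aggregate those scores into a single label for the whole frame. Here’s a toy sketch of that aggregation step; the per-face probabilities are stand-ins for a real model’s output, not the HSE method:

```python
# Toy sketch of the group-level aggregation step: average per-face
# emotion probabilities across everyone in the frame, then take the
# argmax. The per-face scores below are fabricated stand-ins for the
# output of a real (and, in the HSE case, very compact) face model.
import numpy as np

EMOTIONS = ["negative", "neutral", "positive"]  # a common group-level label set

def group_emotion(per_face_probs: np.ndarray) -> str:
    """per_face_probs: (num_faces, num_emotions) softmax scores."""
    group_probs = per_face_probs.mean(axis=0)  # simple average pooling
    return EMOTIONS[int(group_probs.argmax())]

faces = np.array([
    [0.1, 0.2, 0.7],   # face 1 looks positive
    [0.2, 0.5, 0.3],   # face 2 looks neutral
    [0.1, 0.3, 0.6],   # face 3 looks positive
])
print(group_emotion(faces))  # -> "positive"
```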

Geoffrey Hinton, aka the Godfather of AI, has been instrumental in the AI revolution we are now living through. However, he’s not content to rest on his laurels and has dreamed up something new: capsule networks.

Check out this excerpt from an article in the Seattle Times.

With his capsule networks, Hinton aims to finally give machines the same three-dimensional perspective that humans have — allowing them to recognize a coffee cup from any angle after learning what it looks like from only one. This is not something that neural networks can do.
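
The technical details are in Sabour, Frosst, and Hinton’s 2017 paper, “Dynamic Routing Between Capsules.” Two of its core pieces, the squash nonlinearity and routing-by-agreement, are compact enough to sketch in NumPy. This is a bare-bones illustration with arbitrary shapes, not a full CapsNet:

```python
# Bare-bones sketch of two CapsNet pieces from Sabour, Frosst & Hinton
# (2017): the squash nonlinearity, which keeps a capsule's output
# vector's length in [0, 1), and routing-by-agreement, which steers a
# lower capsule's prediction toward the higher capsules that agree
# with it. Shapes are arbitrary illustration values.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    norm_sq = (s ** 2).sum(axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def route(u_hat, iterations=3):
    """u_hat: (num_lower, num_upper, dim) prediction vectors."""
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                      # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper caps
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per upper cap
        v = squash(s)                                         # (num_upper, dim)
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement updates logits
    return v

rng = np.random.default_rng(0)
v = route(rng.normal(size=(32, 10, 16)))  # 32 lower caps -> 10 upper caps
print(np.linalg.norm(v, axis=-1))         # capsule lengths, each < 1
```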

Here’s an interesting article in Forbes on how John Deere is using computer vision to optimize agricultural output.

In just 30 years’ time, it is forecasted that the human population of our planet will be close to 10 billion. Producing enough food to feed these hungry mouths will be a challenge, and demographic trends such as urbanization, particularly in developing countries, will only add to that. Intelligent […]

In something straight out of science fiction, a local subway operator in the high-tech city of Shenzhen is testing facial recognition subway access, powered by a 5G network.

From the article:

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
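
Under the hood, systems like this typically compare a face embedding captured at the gate against embeddings registered at enrollment, and the twins-and-lookalikes question comes down to where the match threshold sits. Here’s a toy sketch of that matching-and-deduction flow, with invented embeddings, accounts, and threshold:

```python
# Toy sketch of a face-recognition fare gate: compare the gate
# camera's face embedding against enrolled riders by cosine
# similarity, and deduct the fare on a sufficiently confident match.
# Embeddings, accounts, and threshold are all invented; twins and
# lookalikes are exactly where a threshold like this struggles.
import numpy as np

FARE = 3.0
THRESHOLD = 0.92  # stricter = fewer lookalike false matches, more rejections

enrolled = {  # rider -> (registered face embedding, account balance)
    "alice": (np.array([0.9, 0.1, 0.4]), 20.0),
    "bob":   (np.array([0.2, 0.8, 0.5]), 5.0),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def enter_station(gate_embedding):
    rider, score = max(
        ((name, cosine(gate_embedding, emb)) for name, (emb, _) in enrolled.items()),
        key=lambda pair: pair[1],
    )
    if score < THRESHOLD:
        return "no confident match - fall back to card or phone"
    emb, balance = enrolled[rider]
    enrolled[rider] = (emb, balance - FARE)
    return f"{rider} admitted, fare deducted, balance {enrolled[rider][1]:.2f}"

print(enter_station(np.array([0.88, 0.12, 0.41])))  # close to alice's enrollment
```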