While this is technically a press release, there could be something to DarwinAI if it really can increase neural network performance by more than 1,600%. We’ll have to keep an eye on this technology. 😉

“The complexity of deep neural networks makes them a challenge to build, run and use, especially in edge-based scenarios such as autonomous vehicles and mobile devices where power and computational resources are limited,” said Sheldon Fernandez, CEO of DarwinAI. “Our Generative Synthesis platform is a key technology in enabling AI at the edge – a fact bolstered and validated by Intel’s solution brief.”

InfoWorld writes a glowing review of TensorFlow 2.

Of all the excellent machine learning and deep learning frameworks available, TensorFlow is the most mature, has the most citations in research papers (even excluding citations from Google employees), and has the best story about use in production. It may not be the easiest framework to learn, but it’s much less intimidating than it was in 2016. TensorFlow underlies many Google services.

Here’s an interesting computer vision / IoT project you can make at home.

The JeVois machine vision sensor can recognize a wide variety of objects and symbols. My own project, Hedley the Robotic Skull, uses one to track me as I walk around in his field of view. The sensor communicates with an Arduino microcontroller, which moves the pan servo to […]

SparkFun, a company well known for its IoT goodies, is now venturing into the AI space with this new TensorFlow-based kit: the SparkFun Artemis.

SparkFun released the company’s first open-source, embedded-systems module, SparkFun Artemis, Engineering Version. The SparkFun Artemis is intended to empower engineers, prototype makers, and R&D teams to integrate the TensorFlow machine-learning platform into any design. Additionally, the SparkFun team has launched three boards with the unshielded module: BlackBoard Artemis; BlackBoard […]

Here’s an interesting idea: an open deep learning compiler stack that compiles deep learning models from different frameworks to CPUs, GPUs, or specialised accelerators. It’s called the Tensor Virtual Machine, or TVM for short.

TVM supports model compilation from a wide range of frontends like TensorFlow, ONNX, Keras, MXNet, Darknet, CoreML and Caffe2. TVM-compiled modules can be deployed on backends like LLVM (JavaScript or WASM, AMD GPU, ARM or x86), NVIDIA GPU (CUDA), OpenCL and Metal. TVM also supports runtime bindings for programming languages like JavaScript, Java, Python, C++ and Golang. With a wide range of frontend, backend and runtime bindings, this deep learning compiler enables developers to integrate and deploy deep learning models from any framework to any hardware, via any programming language.
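To make that workflow concrete, here is a minimal sketch of compiling a small Keras model with TVM’s Relay frontend and running it on a CPU via the LLVM backend. The model, input name and shapes are illustrative assumptions, and exact module paths can vary between TVM releases.

```python
# Sketch: compile a tiny Keras model with TVM and run it on the CPU (LLVM backend).
import numpy as np
import tensorflow as tf
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Any Keras model will do; a tiny MLP stands in for a real network here.
inputs = tf.keras.Input(shape=(10,), name="data")
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Import the model into Relay, TVM's intermediate representation.
shape_dict = {"data": (1, 10)}
mod, params = relay.frontend.from_keras(model, shape_dict)

# Compile for the local CPU; swap "llvm" for "cuda", "opencl" or "metal"
# to target other hardware.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module on random input and fetch the output.
dev = tvm.device(target, 0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("data", np.random.rand(1, 10).astype("float32"))
rt.run()
print(rt.get_output(0).numpy())
```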

TensorFlow’s high-level APIs help you through each stage of your model-building process.

On this episode of TensorFlow Meets, Laurence Moroney talks with TensorFlow Engineering Manager Karmel Allison about how TF 2.0 will make building models much easier.
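As a rough illustration of what those high-level APIs look like in practice, here is a minimal sketch of the tf.keras define–compile–fit–evaluate workflow. The data is random placeholder data, purely for illustration.

```python
# Sketch: the tf.keras model-building stages on placeholder data.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 4).astype("float32")
y = np.random.randint(0, 2, size=(200,))

# Define the model with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compile, train and evaluate in three lines.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
print(model.evaluate(x, y, verbose=0))
```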

This article from Analytics India Magazine lists 10 comparisons between the two top deep learning frameworks: PyTorch and TensorFlow.

Libraries play an important role when developers decide to work in machine learning or deep learning research. According to a survey cited in the article, based on a sample of 1,616 ML developers and data scientists, for every developer using PyTorch there are 3.4 developers using TensorFlow.

Here’s an interesting tutorial for Keras and TensorFlow that predicts employee retention.

In this tutorial, you’ll build a deep learning model that will predict the probability of an employee leaving a company. Retaining the best employees is an important factor for most organizations. To build your model, you’ll use this dataset available at Kaggle, which has features that measure employee satisfaction in a company. To create this model, you’ll use the Keras Sequential API to stack the different layers of the model.
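The sketch below shows the kind of model the tutorial describes: a Keras Sequential binary classifier that outputs the probability of an employee leaving. The filename, column names and layer sizes are illustrative assumptions, not taken from the tutorial itself.

```python
# Sketch: a Keras Sequential classifier for employee attrition (assumed dataset layout).
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

df = pd.read_csv("HR_comma_sep.csv")                    # assumed Kaggle HR dataset filename
X = df.select_dtypes("number").drop(columns=["left"])   # numeric features
y = df["left"]                                          # 1 = employee left the company

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # probability of leaving
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)
print(model.evaluate(X_test, y_test))
```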

Here’s an interesting look at the use of AI and machine learning in the geospatial world. Given the huge datasets found in remote sensing, it’s not surprising to see that field leading the way in cutting-edge data analytics.

From a geospatial perspective, machine learning has long been in wide use. Remote sensing datasets have always been large, so machine learning’s capacity for processing large volumes of data has been a natural fit. For example, classifying satellite images with K-Means or ISODATA clustering algorithms was one of the earliest uses of remote sensing software.
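In the spirit of those early workflows, here is a minimal sketch of unsupervised classification of a satellite scene with K-Means. The filename, band count and number of clusters are illustrative assumptions.

```python
# Sketch: unsupervised (K-Means) classification of a multi-band satellite image.
import numpy as np
import rasterio                       # common library for reading geospatial rasters
from sklearn.cluster import KMeans

with rasterio.open("scene.tif") as src:      # assumed multi-band satellite scene
    bands = src.read()                       # shape: (bands, rows, cols)

n_bands, rows, cols = bands.shape
pixels = bands.reshape(n_bands, -1).T        # one row per pixel, one column per band

kmeans = KMeans(n_clusters=6, random_state=0).fit(pixels)
classified = kmeans.labels_.reshape(rows, cols)   # per-pixel cluster labels
```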