Here’s a great tutorial on how to get started with the new Jetson Nano.

In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including: first boot; installing system packages and prerequisites; configuring your Python development environment; installing Keras and TensorFlow on the Jetson Nano; changing the default camera; and classification and object detection with the Jetson Nano. I’ll […]

In my previous post, I talked about Jetson Nano being the AI dream machine that Makers have been waiting for, but will it replace the noble Raspberry Pi?

It’s an exciting time for small form factor computing. As if the Raspberry Pi weren’t enough of an all-purpose machine, more powerful boards capable of incredible feats keep appearing. The Jetson Nano from Nvidia is a recent addition to the ranks of super powerful, machine-learning-enabled boards. What […]

Nvidia CEO Jensen Huang held up the Jetson Nano, the company’s smallest computer ever, onstage during the GTC keynote address in San Jose, California.

The Jetson Nano is the newest embedded computer in Nvidia’s Jetson line, aimed at developers deploying AI at the edge, and the goal is to make that hardware affordable.

The Jetson Nano developer kit is available today for $99, while the $129 production-ready Jetson Nano module for embedded devices will be available in June.

Soon, AI will make artists of us all — no matter how well (or how poorly) you can draw.

Check out this article on GauGAN.

The neural network isn’t simply replacing doodles and shapes with photorealistic images of rocks, mountains, skies, or water. In addition to taking into account the original shape of the drawing, GauGAN also takes into account other objects in the scene. Turn a patch of grass into a pond and it will create reflections on the surface based on what’s surrounding the new body of water.

This video explores the output of the GAN described in this paper.


We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
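The “scale-specific control” the abstract describes comes from injecting a style at each resolution of the generator, commonly implemented as adaptive instance normalization (AdaIN): each feature map is normalized, then re-scaled and re-shifted by per-channel parameters computed from the latent code. Here is a minimal NumPy sketch of that operation; the array shapes, the latent size, and the affine matrix `A` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization on a (channels, H, W) feature map.

    Each channel is normalized to zero mean / unit variance over its
    spatial dimensions, then modulated by style-derived scale and bias.
    """
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 8, 8))   # toy feature map: 64 channels, 8x8
w = rng.normal(size=(512,))              # intermediate latent (from the mapping net)
A = rng.normal(size=(128, 512)) * 0.01   # hypothetical learned affine: w -> styles
style = A @ w                            # per-channel scale and bias, concatenated
scale, bias = style[:64] + 1.0, style[64:]

out = adain(features, scale, bias)
print(out.shape)
```

Because the normalization wipes out the incoming channel statistics before the style is applied, each AdaIN layer controls only the scale it sits at, which is what makes the coarse-to-fine separation of attributes possible.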