Lex Fridman lands another top-notch interview.

Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/

While readers of this blog may think that computer vision and neural networks have a long history together, the fact is: they don’t. Machine vision encompasses far more than Hot Dog or Not a Hot Dog. Here’s an interesting look at how deep learning has changed machine vision forever.

Underneath the hyperbole, however, the underlying science is simpler to describe. In traditional machine vision systems, for example, it may be necessary to read a barcode on a part, judge its dimensions, or inspect it for flaws. To do so, systems integrators often use off-the-shelf software that offers standard tools which can be deployed to read a data matrix code, for example, or caliper tools set using graphical user interfaces to measure part dimensions.

Here’s an interesting and skeptical walkthrough of neural networks versus deep neural networks and what, if anything, makes them different.

Here’s an excerpt:

The big bang of deep learning – or at least when I heard the boom for the first time – happened in an image recognition project, the ImageNet Large Scale Visual Recognition Challenge, in 2012. In order to recognize images automatically, a convolutional neural network with eight layers – AlexNet – was used. The first five layers were convolutional layers, some of them followed by max-pooling layers, and the last three layers were fully connected layers, all with a non-saturating ReLU activation function. The AlexNet network achieved a top-five error of 15.3%, more than 10.8 percentage points lower than that of the runner-up. It was a great accomplishment!
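As a rough illustration of that layer stack, here’s a minimal Keras sketch of an AlexNet-style network. Filter counts follow the 2012 paper, but this is an assumption-laden simplification: details such as local response normalization, dropout, and the original two-GPU split are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# AlexNet-style layout: five conv layers (some followed by max-pooling),
# then three fully connected layers, all with ReLU activations.
model = models.Sequential([
    layers.Conv2D(96, 11, strides=4, activation="relu",
                  input_shape=(227, 227, 3)),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1000, activation="softmax"),  # 1000 ImageNet classes
])
```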

Here’s an insightful blog post on the future of RL (reinforcement learning): Deep RL and why it’s going to be revolutionary.

Until a few years ago, reinforcement learning techniques were constrained to small, discrete systems. As the state space (the set of parameters describing the system) grows, the required memory and computation increase exponentially, and even continuous systems had to be discretized before reinforcement learning could be applied. Many things are now possible thanks to recent breakthroughs in deep neural networks (DNNs), especially their function approximation capability. By combining reinforcement learning and DNNs, researchers have developed techniques that take advantage of both fields. The new field is called Deep Reinforcement Learning (DRL) and is responsible for remarkable breakthroughs in many domains.
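To make the function-approximation idea concrete, here’s a minimal sketch (names and sizes are purely illustrative) of a Q-network that swaps a discretized lookup table for a small network over a continuous state vector:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical problem sizes, chosen only for illustration.
STATE_DIM, NUM_ACTIONS = 4, 2

# Instead of a table indexed by discretized states, a small network maps a
# continuous state vector directly to one Q-value estimate per action.
q_network = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_ACTIONS),  # one linear output per action: Q(s, a)
])

def act(state, epsilon=0.1):
    """Epsilon-greedy action selection over the network's Q-estimates."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)              # explore
    q_values = q_network(state[None, :].astype("float32"))
    return int(tf.argmax(q_values[0]))                     # exploit
```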

Data Scientist was the hottest job title of the last few years. Recently, a new challenger has risen to the top of the heap: Machine Learning Engineer. Part and parcel of being an ML Engineer is a solid understanding of deep learning.

Edureka has compiled a list of top deep learning interview questions you must know the answers to in order to ace any interview.

From the article:

Artificial Intelligence is going to create 2.3 million jobs by 2020, and to help you crack those job interviews I have come up with a set of Deep Learning Interview Questions. I have divided this article into two sections:

Here’s an interesting article on creating and using custom loss functions in Keras. Why would you need to do this?

Here’s one example from the article:

Let’s say you are designing a Variational Autoencoder. You want your model to be able to reconstruct its inputs from the encoded latent space. However, you also want your encoding in the latent space to be (approximately) normally distributed.
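As a rough sketch of how that combined objective might look in Keras (the z_mean and z_log_var tensors are assumed to come from your encoder; the names are illustrative, not taken from the article):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def make_vae_loss(z_mean, z_log_var, original_dim):
    """Build a custom VAE loss as a closure over the encoder's latent tensors."""
    def vae_loss(y_true, y_pred):
        # Reconstruction term: how well the decoder rebuilds the input.
        reconstruction = original_dim * tf.keras.losses.binary_crossentropy(
            y_true, y_pred)
        # KL term: pulls the latent distribution toward a standard normal.
        kl = -0.5 * K.sum(
            1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return K.mean(reconstruction + kl)
    return vae_loss

# Hypothetical usage, e.g. for 28x28 flattened inputs:
# vae.compile(optimizer="adam", loss=make_vae_loss(z_mean, z_log_var, 784))
```

The closure pattern lets the loss see tensors beyond the standard (y_true, y_pred) pair, which is exactly what the KL term needs.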

Read more at www.kdnuggets.com

TensorFlow Dev Summit 2019 just ended, and here’s a round-up post from Packt Publishing summarizing the big news and coming changes to the TensorFlow ecosystem.

In a Medium blog post, Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist) introduced the TensorFlow Federated framework on the first day. This open source framework is useful for experimenting with machine learning and other computations on decentralized data.
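For a taste of the API, here’s the minimal federated computation from the TFF tutorials (assuming the tensorflow_federated package is installed):

```python
import tensorflow_federated as tff

# Wrap an ordinary Python function as a federated computation that TFF
# can execute across simulated decentralized clients.
@tff.federated_computation
def hello_tff():
    return "Hello, TensorFlow Federated!"

print(hello_tff())
```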

Here’s an interesting look at the cutting-edge technologies on the horizon of AI research and what problems they can potentially solve that current techniques can’t.

Artificial intelligence (AI) is dominated by pattern recognition techniques. Recently, major advances have been made in the fields of image recognition, machine translation, audio processing and several others thanks to the development and refinement of deep learning. But deep learning is not the cure for every problem. In fact, […]