The University of California, San Francisco is developing and training an AI model that could help diagnose tears in the knee's cartilage, the meniscus. A meniscus tear can lead to long-term health challenges and lifestyle changes, ranging from debilitation to limits on activity. One of the keys to mitigating these consequences is identifying and treating tears early. Here’s an interesting look at the research currently underway.

While this goal is pretty simple, the path forward is rather complicated. To diagnose a torn meniscus, clinicians need to review and interpret hundreds of high-resolution 3D magnetic resonance imaging (MRI) slices showing a patient’s knee from different angles. Radiologists then assign a numerical score to indicate the presence of a tear and its severity. This labor-intensive, time-consuming process relies heavily on the skills and availability of clinical specialists, and the interpretation of the images themselves can be rather subjective.

Here’s an interesting article on a deep learning toolkit for NLP.

Why are the results of the latest models so difficult to reproduce? Why is the code that worked fine last year not compatible with the latest release of my deep learning framework? Why is a baseline benchmark meant to be straightforward so difficult to set up? In today’s world, […]

Lex Fridman lands another top-notch interview.

Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/

While readers of this blog may think that computer vision and neural networks have a long history together, the fact is: they don’t. Machine vision encompasses far more than Hot Dog or Not a Hot Dog. Here’s an interesting look at how deep learning has changed machine vision forever.

Underneath this hyperbole, however, the underlying science behind such concepts is simpler to describe. In traditional machine vision systems, for example, it may be necessary to read a barcode on a part, judge its dimensions, or inspect it for flaws. To do so, systems integrators often use off-the-shelf software that offers standard tools that can be deployed to read a data matrix code, for example, or caliper tools set using graphical user interfaces to measure part dimensions.

Here’s an interesting and skeptical walk-through of neural networks vs. deep neural networks, and what, if anything, makes them different.

Here’s an excerpt:

The big bang of deep learning – or at least when I heard the boom for the first time – happened in an image recognition project, the ImageNet Large Scale Visual Recognition Challenge, in 2012. In order to recognize images automatically, a convolutional neural network with eight layers – AlexNet – was used. The first five layers were convolutional layers, some of them followed by max-pooling layers, and the last three layers were fully connected layers, all with a non-saturating ReLU activation function. The AlexNet network achieved a top-five error of 15.3%, more than 10.8 percentage points lower than that of the runner-up. It was a great accomplishment!
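To make that architecture concrete, here is a minimal Keras sketch of an AlexNet-style network: five convolutional layers (some followed by max-pooling), three fully connected layers, and ReLU activations throughout. The 227×227 input size, filter counts, and 1000-class output follow the original ImageNet setup, while details such as local response normalization and the original two-GPU split are omitted, so treat it as an illustration rather than a faithful reproduction.

```python
# A minimal AlexNet-style sketch in Keras (illustrative; not the original code).
from tensorflow.keras import layers, models

def alexnet_like(input_shape=(227, 227, 3), num_classes=1000):
    return models.Sequential([
        # Five convolutional layers, some followed by max-pooling
        layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        # Three fully connected layers
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = alexnet_like()
model.summary()
```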

Here’s an insightful blog post on the future of RL (reinforcement learning): Deep RL and why it’s going to be revolutionary.

Until a few years ago, reinforcement learning techniques were constrained to small, discrete systems. As the state space (the different parameters of the system) grows, the memory and computation required increase exponentially, and even continuous systems had to be discretized before reinforcement learning could be applied. Many things are now possible thanks to recent breakthroughs in Deep Neural Networks (DNNs), especially their approximation capability. By combining reinforcement learning with DNNs, we have developed techniques that take advantage of both fields. The new field is called Deep Reinforcement Learning (DRL), and it is responsible for unimaginable breakthroughs in many domains.
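The core trick is easy to see in code. Below is a minimal sketch (my own illustration, not from the post) of that approximation capability at work: a small neural network stands in for the Q-function over a continuous state space, so no discretization or giant lookup table is needed. The 4-dimensional state, 2 actions, and hyperparameters are arbitrary placeholders.

```python
# Sketch: a neural network as a Q-function approximator over continuous states.
# Assumptions: 4-dimensional state, 2 discrete actions, illustrative hyperparameters.
import numpy as np
from tensorflow.keras import layers, models, optimizers

state_dim, num_actions, gamma = 4, 2, 0.99

# The network maps a raw (continuous) state to one Q-value per action,
# replacing the tabular Q-function that forced discretization.
q_net = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_actions, activation="linear"),
])
q_net.compile(optimizer=optimizers.Adam(1e-3), loss="mse")

def td_target(reward, next_state, done):
    """One-step temporal-difference target: r + gamma * max_a Q(s', a)."""
    if done:
        return reward
    next_q = q_net.predict(next_state[None, :], verbose=0)[0]
    return reward + gamma * np.max(next_q)
```

A full DRL agent layers experience replay, a target network, and an exploration policy on top of this, but the function approximation shown here is what lifts the small-discrete-system constraint.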

Data Scientist was the hottest job title of the last few years. Recently, a new challenger has risen to the top of the heap: Machine Learning Engineer. Part and parcel of being an ML Engineer is a solid understanding of deep learning.

Edureka has compiled a list of top deep learning interview questions you must know the answers to in order to ace any interview.

From the article:

Artificial Intelligence is going to create 2.3 million jobs by 2020, and to help you crack those job interviews I have come up with a set of Deep Learning interview questions. I have divided this article into two sections:

Here’s an interesting article on creating and using custom loss functions in Keras. Why would you need to do this?

Here’s one example from the article:

Let’s say you are designing a Variational Autoencoder. You want your model to be able to reconstruct its inputs from the encoded latent space. However, you also want your encoding in the latent space to be (approximately) normally distributed.
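That combination of objectives is exactly where a custom loss comes in. Here is a minimal sketch of one way to express it in Keras (my own illustration, not the article's code): a reconstruction term plus a KL-divergence term that pulls the latent distribution toward a standard normal. The names `z_mean`, `z_log_var`, and `original_dim` are assumed to come from your encoder.

```python
# Sketch of a custom VAE loss in Keras (illustrative; names are assumptions).
import tensorflow as tf
from tensorflow.keras import backend as K

def vae_loss(z_mean, z_log_var, original_dim):
    def loss(x_true, x_decoded):
        # Reconstruction term: how well the decoder reproduces the input.
        reconstruction = original_dim * tf.keras.losses.binary_crossentropy(
            x_true, x_decoded)
        # KL term: pushes the latent distribution toward a standard normal.
        kl = -0.5 * K.sum(
            1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return K.mean(reconstruction + kl)
    return loss

# Hypothetical usage, closing the loss over the encoder's output tensors:
# vae.compile(optimizer="adam", loss=vae_loss(z_mean, z_log_var, 784))
```

With eager execution in recent TensorFlow versions, adding the KL term via `model.add_loss` is often the more robust route, but the closure above shows the custom-loss idea in its simplest form.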

Read more: www.kdnuggets.com