TensorFlow is already one of the most popular tools for creating deep learning models.

Google this week introduced Neural Structured Learning (NSL) to make this tool even better.

Here’s why NSL is a big deal.

Neural Structured Learning in TensorFlow is an easy-to-use framework for training deep neural networks by leveraging structured signals along with feature inputs. This learning paradigm implements Neural Graph Learning to train neural networks using graphs and structured data. As the researchers note, the graphs can come from multiple sources such as knowledge graphs, medical records, genomic data or multimodal relations. The framework also generalises to adversarial learning, where the “structure” is induced by adversarial perturbations of the input.
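To get a feel for that last point, here is a minimal sketch of NSL’s adversarial-regularization wrapper around an ordinary Keras model. The toy data, the feature/label key names, and the hyperparameter values are placeholder assumptions for illustration, not anything prescribed by the announcement:

```python
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

# Toy data standing in for real features/labels (assumption for this sketch).
x_train = np.random.rand(256, 28, 28).astype(np.float32)
y_train = np.random.randint(0, 10, size=(256,))

# An ordinary Keras model...
base_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# ...wrapped so that adversarial perturbations of the inputs act as
# implicit structured signals during training.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# NSL expects features and labels bundled into a single dict.
adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=2)
```

The appeal is that the wrapper leaves the base model untouched; you opt into the structured/adversarial regularization at compile-and-fit time rather than rewriting the network.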

In this latest deeplizard video, they code a training-loop run builder class that allows for multiple runs with varying parameters. This aids experimentation during the neural network training process. Remember, it’s called data science, and experimentation is part and parcel of the process.
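The core idea is to encode each hyperparameter combination as a named tuple and iterate over the Cartesian product of all parameter values. A minimal sketch along those lines (the class and parameter names here are illustrative rather than the video’s exact code):

```python
from collections import namedtuple
from itertools import product

class RunBuilder:
    """Builds a list of runs, one per combination of parameter values."""
    @staticmethod
    def get_runs(params):
        Run = namedtuple('Run', params.keys())
        return [Run(*values) for values in product(*params.values())]

# Each run is a named tuple, which is handy for logging and for
# tagging TensorBoard experiments.
params = dict(lr=[0.01, 0.001], batch_size=[100, 1000])
for run in RunBuilder.get_runs(params):
    print(run)  # e.g. Run(lr=0.01, batch_size=100)
    # ... build the network and data loader, then train with
    # run.lr and run.batch_size ...
```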

The University of California, San Francisco is developing and training an AI model that could help diagnose tears in the meniscus, the cartilage that cushions the knee. A meniscus tear can lead to long-term health challenges and lifestyle changes, ranging from debilitation to limits on activity. One of the keys to mitigating these consequences is identifying and treating tears early. Here’s an interesting look at the research currently going on.

While this goal is pretty simple, the path forward is rather complicated. To diagnose a torn meniscus, clinicians need to review and interpret hundreds of high-resolution 3D magnetic resonance imaging (MRI) slices showing a patient’s knee from different angles. Radiologists then assign a numerical score to indicate the presence of a tear and its severity. This labor-intensive, time-consuming process relies heavily on the skills and availability of clinical specialists, and the interpretation of the images themselves can be rather subjective.
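The research summary doesn’t spell out the architecture, but stacks of MRI slices naturally suggest a volumetric (3D) convolutional classifier. Purely as an illustrative sketch, with every detail, from the input shape to the sigmoid tear-score output, assumed rather than taken from the UCSF work:

```python
import tensorflow as tf

# Assumed input: 32 MRI slices of 128x128 pixels, one channel.
# Assumed output: a tear score in [0, 1], echoing the numerical
# severity score that radiologists assign by hand.
def build_meniscus_model(depth=32, height=128, width=128):
    inputs = tf.keras.Input(shape=(depth, height, width, 1))
    x = tf.keras.layers.Conv3D(16, 3, activation='relu')(inputs)
    x = tf.keras.layers.MaxPool3D(2)(x)
    x = tf.keras.layers.Conv3D(32, 3, activation='relu')(x)
    x = tf.keras.layers.MaxPool3D(2)(x)
    x = tf.keras.layers.GlobalAveragePooling3D()(x)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    return tf.keras.Model(inputs, outputs)

model = build_meniscus_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```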

Here’s an interesting article on a deep learning toolkit for NLP.

Why are the results of the latest models so difficult to reproduce? Why is the code that worked fine last year not compatible with the latest release of my deep learning framework? Why is a baseline benchmark that is meant to be straightforward so difficult to set up? In today’s world, […]

Lex Fridman lands another top-notch interview.

Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/