Siraj Raval has a video exploring a paper about genomics and creating reliable machine learning systems.

Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data that's not in the training set, and they do so with high confidence. This has serious real-world consequences! In medicine, this could mean misdiagnosing a patient. In autonomous vehicles, this could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like that, so it's important that we figure out how to correct this problem! I found a new, relatively obscure yet extremely fascinating paper out of Google Research that tackles this problem head on. In this episode, I'll explain the work of these researchers, we'll write some code, do some math, do some visualizations, and by the end I'll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!

Likelihood Ratios for Out-of-Distribution Detection paper: https://arxiv.org/pdf/1906.02845.pdf 

The researchers’ code: https://github.com/google-research/google-research/tree/master/genomics_ood
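The paper’s core idea, scoring inputs by a likelihood ratio between a model trained on in-distribution data and a “background” model that captures generic structure, can be sketched with a toy example. The 1-D Gaussians below stand in for the paper’s real autoregressive genomics models and are purely illustrative:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of a 1-D Gaussian; stands in for a model's log-likelihood."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def llr_score(x):
    """Likelihood-ratio OOD score: log p_model(x) - log p_background(x).

    The 'model' is fit to in-distribution data, while the broader 'background'
    model captures generic statistics, so taking the ratio cancels out
    background structure that both models share.
    """
    log_p_model = gaussian_logpdf(x, mu=0.0, sigma=1.0)       # toy in-distribution model
    log_p_background = gaussian_logpdf(x, mu=0.0, sigma=3.0)  # toy broader background model
    return log_p_model - log_p_background

print(llr_score(0.5))  # in-distribution input: high (positive) score
print(llr_score(6.0))  # far-from-training input: low (negative) score, flagged as OOD
```

Inputs with a score below some threshold get flagged as out-of-distribution; the paper picks the threshold from validation data.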

Siraj Raval gets back to inspiring people to get into AI and pokes fun at himself.

Almost exactly 4 years ago I decided to dedicate my life to helping educate the world on Artificial Intelligence. There were hardly any resources designed for absolute beginners and the field was dominated by PhDs. In 2020, thanks to the extraordinary contributions of everyone in this community, all that has changed. It’s easier than ever before to enter this field, even without an IT background. We’ve seen brave entrepreneurs figure out how to deploy this technology to save lives (medical imaging, automated diagnosis) and accelerate Science (AlphaFold). We’ve seen algorithmic advances (deepfakes) and ethical controversies (automated surveillance) that shocked the world. The AI field is now a global, cross-cultural movement that’s not limited to academics alone. And that’s something all of us should be proud of; we’re all a part of this. I’ve packed a lot into this episode! I’ll give my annual lists of the best ML languages and libraries to learn this year, how to learn ML in 2020, as well as 8 predictions about where this field is headed. I had a lot of fun making this, so I hope you enjoy it!

TensorFlow Lite is a framework for running lightweight machine learning models, and it’s perfect for low-power devices like the Raspberry Pi.

This video shows how to set up TensorFlow Lite on the Raspberry Pi for running object detection models to locate and identify objects in real-time webcam feeds, videos, or images. 

Written version of this guide: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Raspberry_Pi_Guide.md
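At a very high level, getting going on the Pi amounts to creating a Python environment, installing a TFLite interpreter, and cloning the repo. The package names and steps below are assumptions sketched from memory, not the guide’s exact commands, so follow the written guide above for the authoritative version:

```shell
# Rough setup sketch for a Raspberry Pi (see the written guide for exact steps)
sudo apt-get update
python3 -m venv tflite-env
source tflite-env/bin/activate
# tflite-runtime is the slim, interpreter-only package; OpenCV handles the webcam feed
python3 -m pip install tflite-runtime opencv-python
git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
```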

TensorFlow 2.0 is all about ease of use, and there has never been a better time to get started.

In this talk, learn about model-building styles for beginners and experts, including the Sequential, Functional, and Subclassing APIs.

We will share complete, end-to-end code examples in each style, covering topics from “Hello World” all the way up to advanced examples. At the end, we will point you to educational resources you can use to learn more.
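The three styles the talk covers can be compared side by side. A minimal sketch, assuming TensorFlow 2.x is installed (layer sizes here are arbitrary, not from the talk):

```python
import numpy as np
from tensorflow import keras

# Sequential API: a simple linear stack of layers
sequential_model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])

# Functional API: layers are called on tensors, allowing arbitrary graphs
inputs = keras.Input(shape=(4,))
hidden = keras.layers.Dense(16, activation="relu")(inputs)
outputs = keras.layers.Dense(3, activation="softmax")(hidden)
functional_model = keras.Model(inputs, outputs)

# Subclassing API: full control by overriding call()
class SubclassedModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = keras.layers.Dense(16, activation="relu")
        self.out = keras.layers.Dense(3, activation="softmax")

    def call(self, x):
        return self.out(self.hidden(x))

subclassed_model = SubclassedModel()

# All three behave the same way on a batch of data
batch = np.zeros((2, 4), dtype="float32")
for model in (sequential_model, functional_model, subclassed_model):
    print(model(batch).shape)  # a batch of 2 inputs -> 2 rows of 3 class probabilities
```

Sequential is the easiest to read, Functional handles multi-input/multi-output graphs, and Subclassing gives full imperative control.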

Presented by: Josh Gordon

View the website → https://goo.gle/36smBfW

O’Reilly and TensorFlow teamed up to present the first TensorFlow World last week.

It brought together the growing TensorFlow community to learn from each other and explore new ideas, techniques, and approaches in machine learning and deep learning.

Presenters in the keynote:

  • Jeff Dean, Google
  • Megan Kacholia, Google
  • Frederick Reiss, IBM
  • Theodore Summe, Twitter
  • Craig Wiley, Google
  • Kemal El Moujahid, Google

O’Reilly has a great round-up post from the first-ever TensorFlow World conference that took place earlier this week.

People from across the TensorFlow community came together in Santa Clara, California for TensorFlow World. Below you’ll find links to highlights from the event:

  • TensorFlow World 2019 opening keynote — Jeff Dean explains why Google open-sourced TensorFlow and discusses its progress.
  • Accelerating ML at Twitter — Theodore Summe offers a […]

Here’s a great tutorial on using Keras to create a digit recognizer using the classic MNIST dataset.

An artificial neural network is a mathematical model that converts a set of inputs to a set of outputs through a number of hidden layers. Each hidden layer applies a learned transformation to the output of the layer before it. In a typical fully connected network, each node of a layer takes all nodes of the previous layer as input, combines them with learned weights, and passes the result through an activation function. A model may have one or more hidden layers.
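The structure described above, where every node combines all nodes of the previous layer, can be sketched as a toy forward pass in plain numpy. The sizes match MNIST (28×28 pixels, 10 digit classes), but the random weights are made up for illustration, not taken from the tutorial:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)

# A tiny network: 784 inputs (28x28 pixels) -> 32 hidden nodes -> 10 digit classes
w_hidden = rng.normal(scale=0.05, size=(784, 32))
b_hidden = np.zeros(32)
w_out = rng.normal(scale=0.05, size=(32, 10))
b_out = np.zeros(10)

x = rng.random(784)                 # one flattened "image"
h = relu(x @ w_hidden + b_hidden)   # every hidden node sees all 784 inputs
probs = softmax(h @ w_out + b_out)  # the output layer turns scores into probabilities

print(probs.sum())          # the 10 class probabilities sum to 1
print(int(probs.argmax()))  # index of the predicted digit
```

Training (with Keras, as in the tutorial) is just learning those weight matrices from labeled examples instead of sampling them at random.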

Siraj Raval just posted this video on defending AI against adversarial attacks

Machine Learning technology isn’t perfect; it’s vulnerable to many different types of attacks! In this episode, I’ll explain 2 common types of attacks and 2 common types of defenses using various code demos from across the Web. There’s some really dope mathematics involved with adversarial attacks, and it was a lot of fun reading about the ‘cat and mouse’ game between new attack techniques, followed by new defense techniques. I encourage anyone new to the field who finds this stuff interesting to learn more about it. I definitely plan to. Let’s look into some math, code, and examples. Enjoy!

Slideshow for this video:
https://colab.research.google.com/drive/19N9VWTukXTPUj9eukeie55XIu3HKR5TT

Demo project:
https://github.com/jaxball/advis.js
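A classic attack in this cat-and-mouse game is the fast gradient sign method (FGSM): nudge every input feature by ε in the direction that increases the model’s loss. A self-contained sketch on a toy logistic-regression “model” (the weights, input, and ε below are invented for illustration, not from the video’s demos):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    """Binary cross-entropy of a logistic-regression model on one example."""
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, y, eps):
    """FGSM: step the input by eps in the sign of the loss gradient w.r.t. x.

    For logistic regression the input gradient has the closed form (p - y) * w,
    so no autodiff is needed for this toy example.
    """
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # toy model weights
x = np.array([0.2, -0.4, 0.1])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm(x, w, y, eps=0.25)
print(loss(x, w, y), loss(x_adv, w, y))  # the adversarial loss is higher
```

The perturbation is small and bounded (at most ε per feature), yet it reliably pushes the loss up, which is exactly why defenses like adversarial training retrain on these perturbed examples.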