Here’s an interesting project involving two of my favorite topics: language and AI.

Some estimates put the number of living languages on the African continent at 2,000 or more. That diversity can stand in the way of communication as well as commerce, and earlier this year it led to the creation of the Masakhane open source project, an effort by African technologists to translate African languages using neural machine translation.

Masakhane works with groups like Translators Without Borders and academics to find language data sets.

In addition to translating native African languages to English, the project will also seek to translate dialects like Nigerian Pidgin English and the varieties of Arabic spoken in northern and central Africa.
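As a rough illustration of the underlying technique, here’s a minimal neural machine translation sketch in Python using the Hugging Face transformers library. The English-to-Swahili checkpoint named below is an assumed stand-in for illustration, not necessarily one of Masakhane’s own models.

```python
# A minimal neural machine translation sketch with Hugging Face `transformers`.
# The checkpoint name is an assumed, publicly available English-to-Swahili
# MarianMT model used purely for illustration.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sw")
result = translator("Good morning, how are you today?")
print(result[0]["translation_text"])
```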

The last few years have seen great strides in the field of computer vision, but what comes next?

Can we teach AI to “feel” something about what it sees? The folks at Getty Images think we can.

At first blush, the idea that AI could “feel” something would seem to be pretty far-fetched. Feelings in general are closely intertwined with our human identities. How any one person feels about something is bound to be different from how another feels. Feelings are, by definition, subjective. In fact, it’s tough to find a more subjective topic than “feelings.” So how does that mesh with the objective functionality of computers?

The solution is relatively straightforward, according to Andrea Gagliano, a senior data scientist with Getty Images.

BERT is one of the most popular models in NLP, known for producing state-of-the-art results across a variety of language modeling tasks.

Built on the transformer architecture that grew out of sequence-to-sequence models, Bidirectional Encoder Representations from Transformers (BERT) is a powerful NLP modeling technique that sits at the cutting edge.

Here’s a great write-up on how to build a BERT classifier model in TF 2.0.
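For orientation, here’s a minimal sketch of what such a classifier can look like, using the Hugging Face transformers library with TensorFlow 2.x. The linked write-up may take a different approach (TF Hub, for example), and the two-example dataset is just a placeholder.

```python
# A minimal sketch of fine-tuning a BERT classifier in TensorFlow 2.x using
# the Hugging Face `transformers` library (an assumption; the linked write-up
# may use a different stack). The inline "dataset" is a toy placeholder.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["a great movie", "a terrible movie"]  # placeholder data
labels = [1, 0]

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)
```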

The success of BERT has not only made it the power behind the top search engine known to mankind, it has also inspired and paved the way for many new and better models. The linked piece rounds up some of the popular NLP models and algorithms that BERT inspired.

Microsoft for Startups shares this highlight reel from the Spring MLADS conference.

In case you’re not familiar with MLADS, check out Data Driven’s coverage of the most recent one.

Twice a year, Microsoft assembles over 4,000 of our top data scientists and engineers for a two-day internal conference to explore the state of the art around machine learning and data science.

Earlier this year, 30 leading startups that are active in the Microsoft for Startups program came to showcase their solutions and engage directly with the engineering teams.

Machine Learning with Phil has another interesting look at deep Q learning as part of a preview of his course.

The two biggest innovations in deep Q learning were the introduction of the target network and the replay memory. One would think that simply bolting a deep neural network onto the Q learning algorithm would be enough for a robust deep Q learning agent, but that isn’t the case. In this video I’ll show you how this naive implementation of the deep Q learning agent fails, and spectacularly at that.

This is an excerpt from my new course, Deep Q Learning From Paper to Code, which you can get on sale with this link:

https://www.udemy.com/course/deep-q-learning-from-paper-to-code/?couponCode=CYBERMONDAY19
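To make those two ideas concrete, here’s a minimal sketch of a replay memory and a target network for a deep Q learning agent. This is not the course’s code; PyTorch and the small fully connected Q-network are assumptions for illustration.

```python
# A minimal sketch (not the course's code) of the two fixes discussed above:
# a replay memory and a separate target network. PyTorch and the tiny
# fully connected Q-network are assumptions for illustration.
import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_states, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, x):
        return self.net(x)

class ReplayMemory:
    """Stores transitions so updates use decorrelated random minibatches."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):  # (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

# The online network is trained every step; the target network is a lagged
# copy used only to compute the bootstrap target, and is re-synced from the
# online network every few hundred steps.
online_net = QNetwork(4, 2)
target_net = QNetwork(4, 2)
target_net.load_state_dict(online_net.state_dict())

def dqn_loss(batch, gamma=0.99):
    states, actions, rewards, next_states, dones = zip(*batch)
    states = torch.stack(states)
    next_states = torch.stack(next_states)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    dones = torch.tensor(dones, dtype=torch.float32)

    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_q * (1 - dones)
    return nn.functional.mse_loss(q, target)
```

The naive agent the video describes corresponds to using the online network for both terms of the update and training only on the most recent transition, which is exactly what the replay memory and target network are there to avoid.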

The Microsoft Research podcast marks its 100th episode with Gurdeep Pall and Dr. Ashish Kapoor talking about autonomous systems.

There’s a lot of excitement around self-driving cars, delivery drones, and other intelligent, autonomous systems, but before they can be deployed at scale, they need to be both reliable and safe. That’s why Gurdeep Pall, CVP of Business AI at Microsoft, and Dr. Ashish Kapoor, who leads research in Aerial Informatics and Robotics, are using a simulated environment called AirSim to reduce the time, cost and risk of the testing necessary to get autonomous agents ready for the open world. 
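If you want to poke at AirSim yourself, here’s a minimal sketch of flying a simulated quadrotor through its Python API. It assumes the airsim pip package is installed and a prebuilt AirSim environment is already running.

```python
# A minimal sketch of driving a simulated quadrotor through AirSim's Python
# API. Assumes the `airsim` pip package is installed and an AirSim simulation
# (for example the prebuilt Blocks environment) is already running.
import airsim

client = airsim.MultirotorClient()  # connect to the running simulator
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, fly to a point, and land, all without risking real hardware.
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, velocity=3).join()
client.landAsync().join()

client.armDisarm(False)
client.enableApiControl(False)
```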

See more on this and other Microsoft Research podcast episodes:
https://www.microsoft.com/en-us/research/blog/category/podcast/

Lex Fridman keeps landing the “big fish” on his podcast.

This time, he sits down with Noam Chomsky.

Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.