MSR’s New York City lab is home to some of the best reinforcement learning research on the planet, but if you ask any of the researchers, they’ll tell you they’re very interested in getting it out of the lab and into the real world.

One of those researchers is Dr. Akshay Krishnamurthy and today, he explains how his work on feedback-driven data collection and provably efficient reinforcement learning algorithms is helping to move the RL needle in the real-world direction.

Microsoft Research has released this podcast on Harvesting Randomness, HAIbrid Algorithms and Safe AI.

Dr. Siddhartha Sen is a Principal Researcher in MSR’s New York City lab, and his research interests are, if not impossible, at least impossible-sounding: optimal decision making, universal data structures, and verifiably safe AI.

Today, he tells us how he’s using reinforcement learning and HAIbrid algorithms to tap the best of both human and machine intelligence and develop AI that’s minimally disruptive, synergistic with human solutions, and safe.

Author and philosopher Nassim Nicholas Taleb delivered a talk at Microsoft Research about how to gain from disorder and chaos, while being protected from fragilities and adverse events.

Taleb argues that many things in life benefit from stress, disorder, volatility, and turmoil. What he calls the antifragile is actually beyond the robust, because it benefits from shocks, uncertainty, and stressors, just as human bones get stronger when subjected to stress and tension. The antifragile needs disorder in order to survive and flourish.

Microsoft’s Project Silica aims to show that glass is the future of long-term data storage.

To prove its usefulness outside the lab, Microsoft partnered with Warner Bros. to write the 1978 Superman film into glass with lasers.

To see the whole process and the Superman glass, CNET visited Microsoft’s Research Lab in Cambridge, England and Warner Bros. Studios in Burbank, California.

What is the universal inference engine for neural networks?

Microsoft Research just posted this video exploring ONNX.

TensorFlow? PyTorch? Keras? There are many popular frameworks out there for working with Deep Learning and ML models, each with their pros and cons for practical usability for product development and/or research. Once you decide what to use and train a model, you then need to figure out how to deploy it onto your platform and architecture of choice. Cloud? Windows? Linux? IoT? Performance sensitive? How about GPU acceleration? With a landscape of 1,000,001 different combinations for deploying a trained model from some chosen framework into a performant production environment for prediction, we can benefit from some standardization.

Here’s a great video from Microsoft Research.

Principles of Intelligence: A Celebration of Colleagues and Collaboration was a fun, once-in-a-lifetime gathering in celebration of colleagues and collaborations on Eric Horvitz’s milestone birthday. The event included short talks from Eric’s beloved colleagues and collaborators from over the decades—with the goal of celebrating their ideas, collaborations, and contributions that were influenced by, or that resonated with, Eric’s pursuit of principles and applications of machine intelligence.

Session 1 features talks from Andreas Krause (California Institute of Technology), Dafna Shahaf (Stanford University), Ashish Kapoor (Microsoft Research), and Mohsen Bayati (Stanford University).

Microsoft Research just posted this video on adversarial machine learning.

As ML is being used for increasingly security-sensitive applications and is trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important.

The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years.

In this talk, I’ll survey a number of recent results in this area, both theoretical and more applied. We will cover recent advances in robust statistics, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems, which still perform well in practice.
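The core phenomenon behind adversarial examples shows up even in the simplest setting. A toy sketch (my own illustration, not from the talk): for a linear classifier, the worst-case perturbation under an L-infinity budget moves every input coordinate against the weight vector, and even a small budget can flip the prediction.

```python
# Linear classifier f(x) = sign(w.x + b); weights and input are made up
# for illustration.
w = [1.0, -2.0, 0.5]
b = 0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

x = [0.3, -0.2, 0.4]   # clean input: score is 1.0, predicted +1
eps = 0.5              # per-coordinate perturbation budget

# Worst-case L-infinity attack: shift each coordinate by -eps * sign(w_i),
# which shrinks the score by eps * ||w||_1 (here 0.5 * 3.5 = 1.75).
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

clean_label = predict(x)      # +1
adv_label = predict(x_adv)    # -1: the bounded perturbation flips the label
```

Provably robust algorithms, in this language, are ones whose predictions are certified not to change under any perturbation within the budget.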

Talk slides: https://www.microsoft.com/en-us/research/uploads/prod/2019/11/Adversarial-Machine-Learning-SLIDES.pdf

Microsoft Research has posted this interesting video:

For an Artificial Intelligence (AI) system to understand the world around us, it needs to be able to interpret and reason about the world we see and the language we speak. In recent years, much attention has been devoted to research at the intersection of vision, temporal reasoning, and language.

One of the major challenges is how to ensure proper grounding and perform reasoning across multiple modalities, given the heterogeneity that resides in the data, when there is little or only weak supervision.

Talk slides: https://www.microsoft.com/en-us/research/uploads/prod/2019/11/Towards-Grounded-Spatio-Temporal-Reasoning-SLIDES.pdf