Microsoft’s Project Silica aims to show that glass is the future of long-term data storage.

To prove its usefulness outside the lab, Microsoft partnered with Warner Bros. to write the 1978 Superman film into glass with lasers.

To see the whole process and the Superman glass, CNET visited Microsoft’s Research Lab in Cambridge, England, and Warner Bros. Studios in Burbank, California.

What is the universal inference engine for neural networks?

Microsoft Research just posted this video exploring ONNX.

TensorFlow? PyTorch? Keras? There are many popular frameworks out there for working with deep learning and ML models, each with its own pros and cons for product development and research. Once you’ve chosen a framework and trained a model, you need to figure out how to deploy it onto your platform and architecture of choice. Cloud? Windows? Linux? IoT? Performance-sensitive? How about GPU acceleration? With a landscape of 1,000,001 different combinations for taking a trained model from some chosen framework into a performant production environment for prediction, we can benefit from some standardization.

Here’s a great video from Microsoft Research.

Principles of Intelligence: A Celebration of Colleagues and Collaboration was a fun, once-in-a-lifetime gathering held in honor of Eric Horvitz’s milestone birthday. The event included short talks from Eric’s beloved colleagues and collaborators from over the decades, with the goal of celebrating the ideas, collaborations, and contributions that were influenced by, or resonated with, Eric’s pursuit of the principles and applications of machine intelligence.

Session 1 features talks from Andreas Krause (California Institute of Technology), Dafna Shahaf (Stanford University), Ashish Kapoor (Microsoft Research), and Mohsen Bayati (Stanford University).

Microsoft Research just posted this video on adversarial machine learning.

As ML is being used for increasingly security-sensitive applications and is trained on increasingly unreliable data, the ability of learning algorithms to tolerate worst-case noise has become more and more important.

The reliability of machine learning systems in the presence of adversarial noise has become a major field of study in recent years.

In this talk, I’ll survey a number of recent results in this area, both theoretical and more applied, covering advances in robust statistics, data poisoning, and adversarial examples for neural networks. The overarching goal is to give provably robust algorithms for these problems that still perform well in practice.
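To make the adversarial-examples problem concrete, here is a minimal NumPy sketch of one classic attack, the fast gradient sign method (FGSM): perturb an input a small amount in the direction that increases the loss. The tiny logistic-regression “model,” its weights, and the inputs are all made up for illustration; the talk itself covers a far broader range of robustness results.

```python
# FGSM sketch against a toy logistic-regression classifier.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])   # fixed, already-"trained" weights (illustrative)
x = np.array([0.5, 0.2])    # a clean input the model classifies correctly
y = 1.0                     # true label

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

# Gradient of the logistic loss with respect to the *input*:
#   dL/dx = (sigmoid(w . x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: move eps in the sign of the gradient to increase the loss.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # prints "1 0": the tiny nudge flips the label
```

A perturbation of only 0.3 per coordinate flips the prediction, which is the worst-case fragility that robust-learning algorithms aim to rule out.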

Talk slides:

Microsoft Research has posted this interesting video:

To develop an Artificial Intelligence (AI) system that can understand the world around us, it needs to be able to interpret and reason about the world we see and the language we speak. In recent years, much research attention has been paid to the intersection of vision, temporal reasoning, and language.

One of the major challenges is how to ensure proper grounding and perform reasoning across multiple modalities, given the heterogeneity of the data, when supervision is weak or absent.

Talk slides:

Microsoft Research features a talk by Wei Wen on Efficient and Scalable Deep Learning (slides)

In deep learning, researchers keep gaining higher performance by using larger models. However, two obstacles block the community from building larger models: (1) training larger models is more time-consuming, which slows down model-design exploration, and (2) inference with larger models is also slow, which prevents their deployment in computation-constrained applications. In this talk, I will introduce some of our efforts to remove those obstacles. On the training side, we propose TernGrad to reduce the communication bottleneck and scale up distributed deep learning; on the inference side, we propose structurally sparse neural networks that remove redundant neural components for faster inference. At the end, I will very briefly introduce (1) my recent efforts to accelerate AutoML, and (2) future work applying my research to overcome scaling issues in Natural Language Processing.
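The core TernGrad trick can be sketched in a few lines of NumPy: each worker quantizes its gradient to three levels {-s, 0, +s} with stochastic rounding, so the quantized gradient is unbiased in expectation while needing only about two bits per element on the wire. This follows the paper’s scheme only loosely, and the function names here are mine, not the authors’.

```python
# Rough sketch of TernGrad-style ternary gradient quantization.
import numpy as np

rng = np.random.default_rng(0)

def ternarize(g):
    """Map gradient g to s * sign(g) * b, with b ~ Bernoulli(|g| / s)."""
    s = np.max(np.abs(g))                    # per-tensor scale
    keep = rng.random(g.shape) < np.abs(g) / s
    return s * np.sign(g) * keep

g = np.array([0.9, -0.1, 0.4, 0.0])
t = ternarize(g)
print(np.isin(t, [-0.9, 0.0, 0.9]).all())    # True: only three levels survive

# Unbiasedness: averaging many independent ternarizations recovers g.
avg = np.mean([ternarize(g) for _ in range(20000)], axis=0)
print(np.allclose(avg, g, atol=0.02))        # True (within sampling noise)
```

Because the quantizer is unbiased, stochastic gradient descent still converges in expectation even though each individual update is coarse; the savings come from shipping a scale plus two bits per weight between workers instead of a full float.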

See more on this talk at Microsoft Research:

Microsoft Research posted this video about Project Silica, a research project that was highlighted earlier this week at Ignite 2019.

Data that needs to be stored long-term is growing exponentially. Existing storage technologies have a limited lifetime, and regular data migration is needed, resulting in high cost. Project Silica is designing a long-term storage system specifically for the cloud, using quartz glass.

Read the blog at
Learn more about the project at

Microsoft Research’s podcast interviews Jenny Sabin, an architectural designer, a professor, a studio principal, and MSR’s current Artist in Residence.

On today’s podcast, Jenny and Asta talk about life at the intersection of art and science; tell us why the Artist in Residence program pushes the boundaries of technology in unexpected ways; and reveal their vision of the future of bio-inspired, human-centered, AI-infused architecture.