Here’s an interesting talk from Microsoft Research on a quantum computing case study conducted in conjunction with the University of Washington.

Case study: Quantum computing curriculum developed with the University of Washington. Recently, our Quantum Software experts partnered with UW to bring a 10-week Introduction to Quantum Computing and Quantum Programming in Q# to the School of Computer Science. Learn how students can get started with hands-on quantum programming quickly by completing a rich collection of quantum programming exercises in Q# (‘coding katas’).
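
The katas themselves are Q# exercises, but as a rough, hypothetical illustration of the kind of task a kata poses (none of these names come from the actual katas), here’s a minimal Python/NumPy sketch that checks a “prepare an equal superposition” exercise by direct state-vector simulation:

```python
import numpy as np

# Single-qubit |0> state and the Hadamard gate.
ZERO = np.array([1.0, 0.0])
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

def learner_solution(state):
    # The kata-style task: map |0> to the equal superposition (|0> + |1>)/sqrt(2).
    return H @ state

def check_solution():
    # The kata harness compares the learner's output state to the expected one.
    expected = np.array([1.0, 1.0]) / np.sqrt(2)
    actual = learner_solution(ZERO)
    assert np.allclose(actual, expected), "state mismatch"
    print("Kata passed: amplitudes", actual)

check_solution()
```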

Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations.
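
To make the spurious-correlation risk concrete, here’s a minimal, hypothetical sketch (synthetic data, not from the panel): a feature that happens to track the label in the training data but not at deployment dominates the model and inflates apparent accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# True signal: a weakly predictive feature.
signal = rng.normal(size=n)
y = (signal + rng.normal(size=n) > 0).astype(int)

# Spurious feature: tracks the label almost perfectly, but only in training.
spurious_train = y + rng.normal(scale=0.1, size=n)
X_train = np.column_stack([signal, spurious_train])
model = LogisticRegression().fit(X_train, y)

# At deployment the correlation breaks: the same feature is pure noise.
X_test = np.column_stack([signal, rng.normal(size=n)])

print("train accuracy:", model.score(X_train, y))
print("test accuracy:", model.score(X_test, y))
print("coefficients:", model.coef_)  # the spurious feature dominates
```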

This has led to a rallying cry for model interpretability.

Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools.

This panel discussion, hosted by Microsoft Research, brings together experts in visualization, machine learning, and human-computer interaction to present their views and discuss these complicated issues.

This Microsoft Research video covers a new research area focusing on “microproductivity”: breaking larger tasks down into manageable components conducive to small moments throughout the day.

From the video description:

In this breakout session, we bring together experts from academia and the product side to share their vision of a future where traditional tasks can be accomplished via both focused attention and microproductivity. We will unpack how microproductivity may manifest across different domains and scenarios, identify key challenges in designing for microproductivity, discuss how expected outcomes may be impacted, and put forward an agenda that can move the field toward real-life adaptation.
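
The session doesn’t prescribe any implementation, but as a loose, hypothetical sketch of the core idea, one way to picture microproductivity is a task decomposed into time-boxed microtasks that can be matched to whatever free moment is available:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MicroTask:
    description: str
    minutes: int  # estimated effort

@dataclass
class Task:
    name: str
    microtasks: List[MicroTask] = field(default_factory=list)

    def next_fitting(self, free_minutes: int) -> Optional[MicroTask]:
        """Return the first pending microtask that fits the available moment."""
        for mt in self.microtasks:
            if mt.minutes <= free_minutes:
                return mt
        return None

report = Task("Write trip report", [
    MicroTask("Pick a title", 2),
    MicroTask("Outline three sections", 5),
    MicroTask("Draft the summary paragraph", 10),
])

# A three-minute wait in line becomes a productive micro-moment.
print(report.next_fitting(3).description)  # -> "Pick a title"
```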

In case you haven’t already noticed it, PowerPoint now includes AI technologies.

They help people create better presentations and become better presenters. Come see how AI helps make creating presentations quicker and easier with Designer and Presenter Coach.

In this video from Microsoft Research, learn how PowerPoint can listen to you practice and provide helpful tips for improvement.
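
Microsoft hasn’t published Presenter Coach’s internals, but here’s a toy sketch of the kind of rehearsal feedback it gives, such as speaking pace and filler words (the word list and thresholds below are made up for illustration):

```python
import re

FILLERS = {"um", "uh", "like", "basically", "actually"}

def rehearsal_feedback(transcript: str, seconds: float) -> list:
    """Toy pace/filler-word analysis in the spirit of rehearsal coaching."""
    words = re.findall(r"[a-z']+", transcript.lower())
    wpm = len(words) / (seconds / 60)
    fillers = [w for w in words if w in FILLERS]
    tips = []
    if wpm > 160:  # illustrative threshold, not Presenter Coach's
        tips.append(f"Pace is {wpm:.0f} words/min; try slowing down.")
    if fillers:
        tips.append(f"Heard {len(fillers)} filler words (e.g., '{fillers[0]}').")
    return tips or ["Nice work; no issues detected."]

print(rehearsal_feedback("So um this is basically our roadmap um for next year", 4.0))
```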

In this video from Microsoft Research, Susan Dumais sits down with Christopher Manning, a Professor of Computer Science and Linguistics at Stanford University.

Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008).

His most recent work has concentrated on probabilistic approaches to natural language processing (NLP) and computational semantics, including statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

With the shift from boxed products to services, rich data is available from all stages of the Software Development Life Cycle.

By leveraging this data, AI can assist software engineers, break down organizational boundaries and make our products more robust.

This video from a recent Microsoft Research event demonstrates several AI-powered features, such as reviewer recommendation, test load reduction, and automated root-causing, that boost developer and infrastructure productivity.
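
The talk doesn’t detail the underlying models, but a common baseline for reviewer recommendation is to score candidates by how often they have previously touched the files in a change. A minimal sketch with toy data and hypothetical names:

```python
from collections import Counter
from typing import Dict, List, Set

def recommend_reviewers(history: Dict[str, Set[str]],
                        changed_files: List[str],
                        k: int = 2) -> List[str]:
    """Score each candidate by the number of changed files they have
    modified before, then return the top-k (a simple baseline heuristic)."""
    scores = Counter()
    for path in changed_files:
        for author in history.get(path, set()):
            scores[author] += 1
    return [name for name, _ in scores.most_common(k)]

# Toy commit history: file path -> authors who have modified it.
history = {
    "src/auth/login.cs": {"alice", "bob"},
    "src/auth/token.cs": {"alice"},
    "src/ui/menu.cs": {"carol"},
}

print(recommend_reviewers(history, ["src/auth/login.cs", "src/auth/token.cs"]))
# -> ['alice', 'bob']
```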

Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. Here’s an interesting talk about making neural networks that can reason.

To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning, as well as achieve good generalization from a modest amount of data. The Neural State Machine (NSM) design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
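
To make “attention over symbols, which have distributed representations” concrete, here’s a minimal NumPy sketch (not the MACnet/NSM code; the embeddings and symbol names are illustrative): a query vector softly selects from a small symbol vocabulary, yielding an explicit, inspectable distribution over symbols.

```python
import numpy as np

def attend_over_symbols(query, symbol_embeddings):
    """Soft attention over a fixed symbol vocabulary: score each symbol's
    distributed representation against the query, softmax the scores, and
    return the distribution plus the attention-weighted summary vector."""
    scores = symbol_embeddings @ query            # (num_symbols,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax
    summary = weights @ symbol_embeddings         # (dim,)
    return weights, summary

rng = np.random.default_rng(0)
symbols = ["red", "blue", "cube", "sphere"]
E = rng.normal(size=(4, 8))                # one 8-d embedding per symbol
query = E[2] + 0.1 * rng.normal(size=8)    # a query close to "cube"

weights, _ = attend_over_symbols(query, E)
print(dict(zip(symbols, weights.round(2))))  # mass concentrates on "cube"
```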

Here’s an interesting talk from the Microsoft Research YouTube channel by Yujia Li about Gated Graph Sequence Neural Networks. Details about the presentation and a link to the paper are below the video.

Link to paper

From the description:

Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.
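
As a rough sketch of the paper’s core modification, replacing the original GNN’s contraction-map propagation with gated recurrent updates, here is one GGNN-style propagation step in NumPy. Dimensions, initialization, and the single shared message matrix are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, Wmsg, Wz, Uz, Wr, Ur, Wh, Uh):
    """One propagation step: each node aggregates messages from its
    neighbors (A is the adjacency matrix), then updates its state with a
    GRU-style gated update."""
    m = A @ (h @ Wmsg)                       # aggregated neighbor messages
    z = sigmoid(m @ Wz + h @ Uz)             # update gate
    r = sigmoid(m @ Wr + h @ Ur)             # reset gate
    h_tilde = np.tanh(m @ Wh + (r * h) @ Uh) # candidate state
    return (1 - z) * h + z * h_tilde         # gated state update

rng = np.random.default_rng(0)
n, d = 5, 4                                    # 5 nodes, 4-d node states
A = (rng.random((n, n)) < 0.4).astype(float)   # random directed edges
h = rng.normal(size=(n, d))
params = [rng.normal(scale=0.5, size=(d, d)) for _ in range(7)]

for _ in range(3):                             # a few rounds of propagation
    h = ggnn_step(h, A, *params)
print(h.round(2))
```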