In this video from Microsoft Research, Susan Dumais sits down with Christopher Manning, Professor of Computer Science and Linguistics at Stanford University.

Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008).

His most recent work has concentrated on probabilistic approaches to natural language processing (NLP) and computational semantics, including statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. Here’s an interesting talk about building neural networks that can reason.

To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and NSM designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning, as well as achieve good generalization from a modest amount of data. The Neural State Machine (NSM) design also emphasizes the use of a more symbolic form of internal computation, represented as attention over symbols, which have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
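The core idea above, iterative reasoning realized as attention over items with distributed representations, can be illustrated with a toy sketch. This is not the MACnet or NSM architecture itself; the cell below is a hypothetical simplification (a single attention-based read followed by a fixed-gate memory update, with random vectors standing in for learned representations), meant only to show the shape of an explicitly iterative, attention-driven computation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def reasoning_step(memory, control, knowledge):
    """One iterative reasoning step (illustrative, not the published cell):
    attend over knowledge items conditioned on the current control vector,
    read a weighted summary, and fold it into the memory state."""
    scores = knowledge @ control              # relevance of each item to this step
    attn = softmax(scores)                    # attention distribution over items/symbols
    retrieved = attn @ knowledge              # soft, differentiable read
    memory = 0.5 * memory + 0.5 * retrieved   # simple fixed-gate write (assumption)
    return memory, attn

rng = np.random.default_rng(0)
knowledge = rng.normal(size=(5, 4))  # 5 items with 4-dim distributed representations
controls = rng.normal(size=(3, 4))   # one control vector per reasoning step
memory = np.zeros(4)

# Explicitly iterative reasoning: the same cell is applied step by step,
# each step steered by its own control vector.
for c in controls:
    memory, attn = reasoning_step(memory, c, knowledge)

print(attn.shape, memory.shape)  # (5,) (4,)
```

The point of the structural prior is visible even in this toy: all interaction with the knowledge happens through a normalized attention distribution, so each step's computation can be inspected (which items it attended to) and the same cell is reused across steps, encouraging modularity and generalization.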