In this video from Microsoft Research, Susan Dumais sits down with Christopher Manning, a Professor of Computer Science and Linguistics at Stanford University.

Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008).

His most recent work has concentrated on probabilistic approaches to natural language processing (NLP) problems and computational semantics, including statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. Here’s an interesting talk about making neural networks that can reason.

To address this gap, we have been developing networks that support memory, attention, composition, and reasoning. Our MACnet and Neural State Machine (NSM) designs provide a strong prior for explicitly iterative reasoning, enabling them to learn explainable, structured reasoning and to generalize well from a modest amount of data. The NSM design also emphasizes a more symbolic form of internal computation, represented as attention over symbols that have distributed representations. Such designs impose structural priors on the operation of networks and encourage certain kinds of modularity and generalization. We demonstrate the models’ strength, robustness, and data efficiency on the CLEVR dataset for visual reasoning (Johnson et al. 2016), on VQA-CP, which emphasizes disentanglement (Agrawal et al. 2018), and on our own GQA (Hudson and Manning 2019). Joint work with Drew Hudson.
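
To make “attention over symbols with distributed representations” concrete, here is a minimal sketch of one soft attention step over a learned symbol vocabulary, iterated for a fixed number of reasoning steps. The sizes, the names (`symbol_embeddings`, `attend_over_symbols`), and the plain dot-product scoring are illustrative assumptions, not the actual MACnet or NSM code.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes; the real MACnet/NSM dimensions differ.
num_symbols, dim = 50, 64

# Each discrete symbol (e.g., an object or attribute concept)
# has a learned distributed representation.
symbol_embeddings = torch.randn(num_symbols, dim)

def attend_over_symbols(query: torch.Tensor) -> torch.Tensor:
    """One soft attention step: score every symbol against the
    current reasoning state, normalize, and return a weighted
    mixture of symbol embeddings (a 'soft' symbolic value)."""
    scores = symbol_embeddings @ query   # (num_symbols,)
    weights = F.softmax(scores, dim=0)   # attention distribution
    return weights @ symbol_embeddings   # (dim,) blended symbol

# Explicitly iterative reasoning: each step refines the state by
# attending over the symbol vocabulary, echoing the multi-step
# structure that MAC-style cells impose.
state = torch.randn(dim)
for _ in range(4):  # fixed number of reasoning steps
    state = attend_over_symbols(state)
```

Constraining each step to a distribution over a fixed symbol set, rather than an unconstrained vector, is what gives the internal computation its more interpretable, symbolic flavor.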

AI is set to disrupt every field and every industry. Healthcare, in particular, seems primed for disruption. Here’s an interesting project out of Stanford.

“One of the really exciting things about computer vision is that it’s this powerful measuring tool,” said Yeung, who will be joining the faculty of Stanford’s department of biomedical data science this summer. “It can watch what’s happening in the hospital setting continuously, 24/7, and it never gets tired.”

Current methods for documenting patient movement are burdensome and prone to human error, so this team is devising a new approach that relies on computer vision technology similar to that used in self-driving cars. Sensors in a hospital room capture patient motions as silhouette-like moving images, and a trained algorithm identifies the activity: whether a patient is being moved into or out of bed, for example, or into or out of a chair.
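
As a rough sketch of the pipeline described above (silhouette-like frames in, an activity label out), the toy classifier below encodes each frame with a small CNN, averages the features over time, and predicts one of the article’s example activities. The architecture, class list, and input shapes are assumptions for illustration, not the Stanford team’s actual model.

```python
import torch
import torch.nn as nn

# Hypothetical activity classes, taken from the article's examples.
ACTIVITIES = ["into_bed", "out_of_bed", "into_chair", "out_of_chair"]

class SilhouetteActivityNet(nn.Module):
    """Toy classifier: a small CNN encodes each single-channel
    silhouette frame, frame features are averaged over time, and
    a linear head maps them to an activity label."""
    def __init__(self, num_classes: int = len(ACTIVITIES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch*time, 32)
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 1, H, W) silhouette frames
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1))  # (b*t, 32)
        feats = feats.view(b, t, -1).mean(dim=1)  # temporal average
        return self.head(feats)                   # activity logits

# Usage: one 8-frame clip of 64x64 silhouettes.
logits = SilhouetteActivityNet()(torch.rand(1, 8, 1, 64, 64))
print(ACTIVITIES[logits.argmax(dim=1).item()])
```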