Related to a previous post on Data-Driven Algorithm Design, this talk is from the 2019 “Geometry of Deep Learning” event at Microsoft Research.
Lex Fridman interviews Keoki Jackson, the CTO of Lockheed Martin.
Through its long history, Lockheed Martin has created some of the most incredible engineering marvels human beings have ever built: planes that fly fast and undetected, defense systems that intercept threats that could take millions of lives in the case of nuclear weapons, and spacecraft that venture out into space, to the Moon, Mars, and beyond, with and without humans on board.
Lex Fridman interviews Gustav Soderstrom, the Chief Research & Development Officer at Spotify. He leads the Product, Design, Data, Technology & Engineering teams. This interview is part of the ongoing Artificial Intelligence podcast.
David Bau, an MIT-IBM Watson AI Lab research team member, explains how computers show evidence of learning the structure of the physical world.
In this video, Lex Fridman interviews Jeff Hawkins, who founded the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005. In his 2004 book On Intelligence, and in his research before and after, he and his team have worked to reverse-engineer the neocortex and propose artificial intelligence architectures, approaches, and ideas inspired by the human brain. These ideas include Hierarchical Temporal Memory (HTM) from 2004 and The Thousand Brains Theory of Intelligence from 2017.
MIT has unveiled an artificial intelligence system that it said could make an array of AI techniques more accessible to programmers, while also adding value for experts.
Lex Fridman lands another top-notch interview.
Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/
The human brain’s ability to recognize objects is remarkable. If you see an object under unusual lighting or from an unexpected direction, there’s a good chance your brain will still recognize it, and it’s considered an anomaly when it doesn’t. This robust and precise object recognition is a holy grail for artificial intelligence developers. How our brains do this, however, is still a mystery. Here’s an interesting article from MIT on how researchers may be on to something powerful in the computer vision space.
Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above: interconnected and not unidirectional. Because it takes only about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain had any role at all in core object recognition.
Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time. For example, the return gutters of the streets help slowly clear it of water and trash, but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.
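The subway-versus-streets analogy can be put in code terms: a purely feedforward pass runs once, front to back, while a recurrent network lets later activity feed back and refine the representation over several time steps. Here is a minimal NumPy sketch of that contrast; the weights, dimensions, and update rule are all made up for illustration and are not the model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical toy weights: one feedforward layer and one lateral/recurrent layer.
W_ff = rng.normal(scale=0.1, size=(8, 8))   # feedforward weights (the "subway line")
W_rec = rng.normal(scale=0.1, size=(8, 8))  # recurrent weights (the "streets above")

def feedforward(x):
    # Single forward pass: the signal moves straight through, station by station.
    return relu(W_ff @ x)

def recurrent(x, steps=5):
    # Start from the feedforward response, then let recurrent connections
    # iteratively refine it: activity at each step mixes the input with
    # the network's own previous state.
    h = feedforward(x)
    for _ in range(steps):
        h = relu(W_ff @ x + W_rec @ h)
    return h

x = rng.normal(size=8)
print(feedforward(x).shape, recurrent(x).shape)  # both (8,)
```

With `steps=0` the recurrent version reduces exactly to the feedforward pass, which mirrors the question in the study: does the extra iteration actually contribute to rapid recognition, or is the first pass enough?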
Lex Fridman of MIT demonstrates Driver Activity Recognition in a self-driving car by playing Black Betty on the guitar. Yes, you read that right. What a time to be alive, amirite?!
Lex Fridman delivers the first lecture on Human-Centered Artificial Intelligence in this video.