In this video, Lex Fridman interviews Jeff Hawkins, who founded the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005. In his 2004 book On Intelligence, and in his research before and since, he and his team have worked to reverse-engineer the neocortex and to propose artificial intelligence architectures, approaches, and ideas inspired by the human brain. These include Hierarchical Temporal Memory (HTM) from 2004 and the Thousand Brains Theory of Intelligence from 2017.

MIT has unveiled an artificial intelligence system that it said could make an array of AI techniques more accessible to programmers, while also adding value for experts.

Researchers said the system, called Gen, is similar to TensorFlow, a set of tools developed by Google for automating AI tasks, principally those involving deep learning and neural networks.

Lex Fridman lands another top-notch interview.

Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/

The human brain’s ability to recognize objects is remarkable. If you see a familiar object under unusual lighting or from an unexpected direction, there’s a good chance that your brain will still recognize it, and it’s considered an anomaly when it doesn’t. This robust and precise object recognition is a holy grail for artificial intelligence developers. How our brains do this, however, is still a mystery. Here’s an interesting article from MIT on how researchers may be on to something powerful in the computer vision space.

Think of feedforward DCNNs (deep convolutional neural networks), and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above, interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear whether these recurrent interconnections in the brain play any role at all in core object recognition.

Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time. For example, the return gutters of the streets help slowly clear the city of water and trash, but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.
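To make the subway-versus-streets analogy concrete, here is a minimal PyTorch-style sketch. It is not the model used in the MIT study; the layer sizes, number of recurrent steps, and class names are illustrative assumptions. It contrasts a purely feedforward stack, where each stage feeds the next exactly once, with a network that adds a small recurrent loop revisiting its intermediate representation for a few extra steps.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: sizes, step counts, and names are assumptions,
# not the architecture used in the study discussed above.

class FeedforwardCore(nn.Module):
    """Purely feedforward 'subway line': each stage feeds the next exactly once."""
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 10),
        )

    def forward(self, x):
        return self.stages(x)

class RecurrentCore(nn.Module):
    """Feedforward stages plus a recurrent loop ('the streets above') that
    refines the intermediate representation over a few extra time steps."""
    def __init__(self, steps=3):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.recur = nn.Conv2d(32, 32, 3, padding=1)   # state -> state, applied repeatedly
        self.readout = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
        self.steps = steps

    def forward(self, x):
        h = self.encode(x)
        for _ in range(self.steps):                    # extra recurrent passes
            h = torch.relu(h + self.recur(h))
        return self.readout(h)

x = torch.randn(1, 3, 64, 64)                          # one fake image
print(FeedforwardCore()(x).shape, RecurrentCore()(x).shape)
```

The intuition the study probes is whether those extra passes in the second model matter for hard-to-recognize images, or whether a single forward sweep, like the first model, is enough.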

Here’s an interesting news article from MIT on research that could advance NLP and, in particular, natural language understanding (NLU).

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence. In computing, learning language is […]

In this video from a recent talk at MIT, Demis Hassabis discusses the capabilities and power of self-learning systems. He illustrates this with reference to some of DeepMind’s recent breakthroughs, and talks about the implications of cutting-edge AI research for scientific and philosophical discovery.

What’s more impressive is Demis’s biography. From the description:

Speaker Biography: Demis is a former child chess prodigy who finished his A-levels two years early before coding the multi-million-selling simulation game Theme Park aged 17. Following graduation from Cambridge University with a Double First in Computer Science, he founded the pioneering video games company Elixir Studios, producing award-winning games for global publishers such as Vivendi Universal. After a decade of experience leading successful technology startups, Demis returned to academia to complete a PhD in cognitive neuroscience at UCL, followed by postdocs at MIT and Harvard, before founding DeepMind. His research into the neural mechanisms underlying imagination and planning was listed in the top ten scientific breakthroughs of 2007 by the journal Science. Demis is a five-time World Games Champion, a Fellow of the Royal Society of Arts, and the recipient of the Royal Society’s Mullard Award and the Royal Academy of Engineering’s Silver Medal.