Microsoft Research explores how the brain begets the mind.

How do molecules, cells, and synapses give rise to reasoning, intelligence, language, and science? Despite dazzling progress in experimental neuroscience, we do not seem to be making progress on this overarching question: the gap is huge, and a completely new approach seems to be required.

As Richard Axel recently put it: “We don’t have a logic for the transformation of neural activity into thought.” What kind of formal system would qualify as this “logic”? I will sketch a possible answer.

(Joint work with Santosh Vempala, Dan Mitropolsky, Mike Collins, Wolfgang Maass, and Larry Abbott.)

Talk slides: https://www.microsoft.com/en-us/research/uploads/prod/2019/09/A-Calculus-for-Brain-Computation-SLIDES.pdf
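The "logic" the talk has in mind, the Assembly Calculus, is built from a few primitive operations on assemblies of neurons. As a rough illustration only (my own sketch under assumed parameters, not code from the talk or slides), here is a toy Python simulation of the projection primitive: a stimulus assembly in one area repeatedly fires into a downstream area, a k-winners-take-all cap selects which neurons respond, and Hebbian plasticity gradually stabilizes the responding set into a new assembly. The values of n, k, p, and beta are illustrative choices, not ones taken from the paper or slides.

```python
import numpy as np

# Illustrative sketch (not from the talk) of the Assembly Calculus
# "projection" primitive: a stimulus assembly in area A projects into
# area B via sparse random synapses, a k-cap, and Hebbian plasticity.

rng = np.random.default_rng(0)

n, k = 1000, 50      # neurons per area, assembly size (the "cap")
p, beta = 0.05, 0.1  # connection probability, Hebbian increment (assumed values)

W_ab = (rng.random((n, n)) < p).astype(float)  # A -> B synapses
W_bb = (rng.random((n, n)) < p).astype(float)  # recurrent B -> B synapses

stimulus = rng.choice(n, size=k, replace=False)  # fixed assembly firing in A
winners = np.array([], dtype=int)                # neurons currently firing in B

for step in range(15):
    # Total input to each B neuron: feedforward from A plus recurrent from B.
    inputs = W_ab[stimulus].sum(axis=0)
    if winners.size:
        inputs += W_bb[winners].sum(axis=0)
    new_winners = np.argsort(inputs)[-k:]        # k-winners-take-all cap
    # Hebbian update: strengthen synapses onto the neurons that just fired.
    W_ab[np.ix_(stimulus, new_winners)] *= 1 + beta
    if winners.size:
        W_bb[np.ix_(winners, new_winners)] *= 1 + beta
    overlap = np.intersect1d(winners, new_winners).size
    print(f"step {step:2d}: overlap with previous step = {overlap}/{k}")
    winners = new_winners
```

Run repeatedly, the set of winning neurons in area B stops changing after a few steps: that stable set is the new assembly, the candidate "unit of thought" in this formalism.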

In this episode of the AI Podcast, Lex Fridman interviews Paola Arlotta.

Paola Arlotta is a professor of stem cell and regenerative biology at Harvard University.

You could say that she studies “naturally intelligent” systems.

Specifically, she is interested in understanding the molecular laws that govern the birth, differentiation and assembly of the human brain’s cerebral cortex. She explores the complexity of the brain by studying and engineering elements of how the brain develops.

It sounds like science fiction: a device that can reconnect a paralyzed person’s brain to his or her body. But that’s exactly what the experimental NeuroLife system does.

Developed by Battelle and Ohio State University, NeuroLife uses a brain implant, an algorithm, and an electrode sleeve to give paralysis patients back control of their limbs. For Ian Burkhart, NeuroLife’s first test subject, the implications could be life-changing.

As someone who has spent the last two years earning as many certifications in data science as I possibly can, I am often asked, “How did you learn so much so fast?”

The answer is simple: study the brain, how it learns, and then test out new ways to learn faster.

In this video, Barbara Oakley explains how the brain is constantly fluctuating between a “learning” mode and an “understanding” mode.

When you’re sitting there reading (and re-reading!) a textbook, unable to make sense of it, your brain is actually learning; it just needs time for the material to be unpacked. This is called the neural chunk theory, and you can learn to use it to your advantage by studying differently: short breaks and bursts of inactivity make a big difference, letting you absorb seemingly difficult information by combining bigger and bigger “chunks” until you understand the big picture. It’s fascinating stuff.

Elon Musk has a new idea he wants to bring to fruition: the BCI, or Brain-Computer Interface.

Are BCIs the next step in human evolution – an inevitable upgrade we’ll need after the age of smartphones?

If done right, they can help us fulfill our wildest fantasies. We’ll be able to learn anything, experience anything, and be anywhere in seconds. I’ll discuss the philosophical, theoretical, and technical aspects behind the idea.

Siraj Raval shares his thoughts in this video.