MIT OpenCourseWare provides this course for free.

Given its importance to neural networks and quantum computing, now is a good time to learn linear algebra.

MIT A 2020 Vision of Linear Algebra, Spring 2020
Instructor: Gilbert Strang
View the complete course: https://ocw.mit.edu/2020-vision
YouTube Playlist: https://www.youtube.com/playlist?list=PLUl4u3cNGP61iQEFiWLE21EJCxwmWvvek

Professor Strang describes independent vectors and the column space of a matrix as a good starting point for learning linear algebra. His outline develops five shorthand descriptions of the key chapters of linear algebra.
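As a quick illustration of these two ideas (a sketch of my own, not an example from the course), here is a minimal NumPy check: the rank of a matrix counts its independent columns, which is also the dimension of its column space.

```python
import numpy as np

# A 3x3 matrix whose third column is the sum of the first two,
# so only two of the three columns are independent.
A = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 2.0],
])

rank = np.linalg.matrix_rank(A)
print(rank)  # 2: the column space of A is a plane in R^3
```

Because the columns are dependent, the column space is a two-dimensional subspace rather than all of R^3.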

Lex Fridman interviews Kate Darling in this episode of his podcast.

Kate Darling is a researcher at MIT interested in social robotics, robot ethics, and, more generally, how technology intersects with society. She explores the emotional connection between human beings and life-like machines, which, for Fridman, is one of the most exciting topics in all of artificial intelligence. This conversation is part of the Artificial Intelligence podcast.

Time Index:

  • 0:00 – Introduction
  • 3:31 – Robot ethics
  • 4:36 – Universal Basic Income
  • 6:31 – Mistreating robots
  • 17:17 – Robots teaching us about ourselves
  • 20:27 – Intimate connection with robots
  • 24:29 – Trolley problem and making difficult moral decisions
  • 31:59 – Anthropomorphism
  • 38:09 – Favorite robot
  • 41:19 – Sophia
  • 42:46 – Designing robots for human connection
  • 47:01 – Why is it so hard to build a personal robotics company?
  • 50:03 – Is it possible to fall in love with a robot?
  • 56:39 – Robots displaying consciousness and mortality
  • 58:33 – Manipulation of emotion by companies
  • 1:04:40 – Intellectual property
  • 1:09:23 – Lessons for robotics from parenthood
  • 1:10:41 – Hope for future of robotics

Lex Fridman interviews Eric Weinstein in the latest episode of his podcast.

Eric Weinstein is a mathematician with a bold and piercing intelligence, unafraid to explore the biggest questions in the universe and shine a light on the darkest corners of our society. He is the host of The Portal podcast, on which he recently released his 2013 Oxford lecture on Geometric Unity, the theory at the center of his lifelong effort to arrive at a theory of everything that unifies the fundamental laws of physics. This conversation is part of the Artificial Intelligence podcast.

Time Index:

  • 0:00 – Introduction
  • 2:08 – World War II and the Coronavirus Pandemic
  • 14:03 – New leaders
  • 31:18 – Hope for our time
  • 34:23 – WHO
  • 44:19 – Geometric unity
  • 1:38:55 – We need to get off this planet
  • 1:40:47 – Elon Musk
  • 1:46:58 – Take Back MIT
  • 2:15:31 – The time at Harvard
  • 2:37:01 – The Portal
  • 2:42:58 – Legacy

Lex Fridman delivers a talk with some advice about life, drawing on his own journey and passion in artificial intelligence.

The talk was delivered to a group of Drexel engineering students, friends, and family in Philadelphia before the outbreak of the coronavirus pandemic.

Time Index:

  • 0:00 – Overview – The Voice poem
  • 6:46 – Artificial intelligence
  • 13:44 – Open problems in AI
  • 14:10 – Problem 1: Learning to understand
  • 17:15 – Problem 2: Learning to act
  • 19:28 – Problem 3: Reasoning
  • 20:44 – Problem 4: Connection between humans & AI systems
  • 23:57 – Advice about life as an optimization problem
  • 24:10 – Advice 1: Listen to your inner voice – ignore the gradient
  • 25:12 – Advice 2: Carve your own path
  • 26:28 – Advice 3: Measure passion, not progress
  • 28:10 – Advice 4: Work hard
  • 29:05 – Advice 5: Forever oscillate between gratitude and dissatisfaction
  • 31:10 – Q&A: Meaning of life
  • 33:11 – Q&A: Simulation hypothesis
  • 36:15 – Q&A: How do you define greatness?

MIT Introduction to Deep Learning 6.S191: Lecture 6 with Ava Soleimany.

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!

Lecture Outline

  • 0:00 – Introduction
  • 0:58 – Course logistics
  • 3:59 – Upcoming guest lectures
  • 5:35 – Deep learning and expressivity of NNs
  • 10:02 – Generalization of deep models
  • 14:14 – Adversarial attacks
  • 17:00 – Limitations summary
  • 18:18 – Structure in deep learning
  • 22:53 – Uncertainty & Bayesian deep learning
  • 28:09 – Deep evidential regression
  • 33:08 – AutoML
  • 36:43 – Conclusion

I always knew that reinforcement learning would teach us more about ourselves than any other kind of AI approach. This feeling was backed up in a paper published recently in Nature.

DeepMind, Alphabet’s AI subsidiary, has once again used lessons from reinforcement learning to propose a new theory about the reward mechanisms within our brains.

The hypothesis, supported by initial experimental findings, could not only improve our understanding of mental health and motivation but also validate the current direction of AI research toward building more human-like general intelligence.

It turns out the brain's reward system works in much the same way as these algorithms, a discovery made in the 1990s and inspired by reinforcement-learning research. When a human or animal is about to perform an action, its dopamine neurons make a prediction about the expected reward.
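The prediction-and-error loop described here is the core of temporal-difference (TD) learning. As an illustrative sketch (my own toy example, not DeepMind's model from the paper), here is a value estimate updated by a reward prediction error, the quantity dopamine neurons are thought to signal:

```python
# Minimal temporal-difference (TD) learning sketch: a single state
# repeatedly yields a reward of 1.0, and the agent's value estimate
# is nudged toward it by the reward prediction error (delta).
alpha = 0.1   # learning rate
value = 0.0   # current prediction of expected reward

for _ in range(100):
    reward = 1.0
    delta = reward - value   # reward prediction error ("dopamine signal")
    value += alpha * delta   # update the prediction toward the observed reward

print(value)  # approaches 1.0 as the prediction error shrinks toward zero
```

Once the prediction matches the reward, the error signal vanishes, which mirrors the classic finding that dopamine neurons stop firing to a fully predicted reward.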

Lex Fridman shared this lecture by Vivienne Sze in January 2020 as part of the MIT Deep Learning Lecture Series.

Website: https://deeplearning.mit.edu
Slides: http://bit.ly/2Rm7Gi1
Playlist: http://bit.ly/deep-learning-playlist

LECTURE LINKS:
Twitter: https://twitter.com/eems_mit
YouTube: https://www.youtube.com/channel/UC8cviSAQrtD8IpzXdE6dyug
MIT professional course: http://bit.ly/36ncGam
NeurIPS 2019 tutorial: http://bit.ly/2RhVleO
Tutorial and survey paper: https://arxiv.org/abs/1703.09039
Book coming out in Spring 2020!

OUTLINE:
0:00 – Introduction
0:43 – Talk overview
1:18 – Compute for deep learning
5:48 – Power consumption for deep learning, robotics, and AI
9:23 – Deep learning in the context of resource use
12:29 – Deep learning basics
20:28 – Hardware acceleration for deep learning
57:54 – Looking beyond the DNN accelerator for acceleration
1:03:45 – Beyond deep neural networks

Lex Fridman explains that the best way to understand the mind is to build it, in this clip from the opening lecture of the MIT Deep Learning lecture series.

Full video: https://www.youtube.com/watch?v=0VH1Lim8gL8

Website: https://deeplearning.mit.edu
