The goal of neuromorphic computing is simple: mimic the neural structure of the brain.
Seeker checks out the current generation of computer chips that’s getting closer to accomplishing this non-trivial engineering task.
BBC Click “scrubs up” to explore the impact of AI on healthcare.
Lex Fridman interviews Alex Filippenko, an astrophysicist and professor of astronomy at Berkeley.
This mini documentary takes a look at Elon Musk and his thoughts on artificial intelligence.
Some interesting thoughts to consider.
Lex Fridman interviews Chris Lattner, a world-class software & hardware engineer who has led projects at Apple, Tesla, Google, and SiFive.
deeplizard demonstrates how to use data augmentation on images using TensorFlow’s Keras API.
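For a rough feel for the technique, here is a minimal sketch of image augmentation with Keras’ `ImageDataGenerator`. The video’s exact parameters may differ, and the `data/train` path is a hypothetical placeholder:

```python
# Minimal sketch: image data augmentation with TensorFlow's Keras API.
# Parameter values are illustrative, not taken from the video.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(
    rotation_range=15,        # randomly rotate up to ±15 degrees
    width_shift_range=0.1,    # randomly shift horizontally by up to 10%
    height_shift_range=0.1,   # randomly shift vertically by up to 10%
    zoom_range=0.1,           # randomly zoom in/out by up to 10%
    horizontal_flip=True,     # randomly mirror images left-right
)

# Stream augmented batches from a directory of class-labeled images.
# "data/train" is a hypothetical path; substitute your own dataset.
batches = gen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32
)
images, labels = next(batches)  # one augmented batch: (32, 224, 224, 3)
```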
Lex Fridman interviews Scott Aaronson, a quantum computer scientist.
Watch Sascha Dittmann and friends build out a real-time object detection system on the Jetson Nano.
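As context, here is a minimal sketch of one common approach to real-time detection on the Nano, using NVIDIA’s jetson-inference Python bindings; the stream’s actual stack, model, and camera setup may well differ:

```python
# Sketch: real-time object detection on a Jetson Nano with
# NVIDIA's jetson-inference library (one possible approach,
# not necessarily the one built in the stream).
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # on-board CSI camera
display = jetson.utils.videoOutput("display://0")  # attached display

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)  # TensorRT-accelerated inference
    display.Render(img)           # overlays boxes drawn by Detect()
    display.SetStatus(f"{net.GetNetworkFPS():.0f} FPS")
```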
Elon Musk has warned us that AI, and in particular a digital superintelligent AI, might render humanity extinct.
We should therefore proceed very carefully in the development of AI systems. One of the solutions to the AI control problem proposed by Elon Musk is the integration of AI with the human brain through a brain-computer interface. That is one of the reasons why he founded Neuralink, a company focused on the development of implantable brain-machine interfaces.
Neuralink’s BMI technology might be able to overcome the biological limits of our minds and could even expand our intelligence. The symbiosis between AI and humans may greatly benefit our species and may also help humanity expand out into space. In spite of these possibilities, Musk has said that he sees the creation of digital superintelligence as a great risk to the existence of humanity, but he also thinks that we must pursue its development nevertheless.
Yannic Kilcher explains why transformers are ruining convolutions.
This paper, under review at ICLR, shows that, given enough data, a standard Transformer can outperform Convolutional Neural Networks on image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT) and the reason why it works better, and rant about why double-blind peer review is broken.
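To make the core idea concrete, here is a minimal sketch (assuming TensorFlow; the variable names are illustrative, not the paper’s or Kilcher’s code) of the ViT front end that turns an image into the token sequence a standard Transformer encoder consumes:

```python
# Sketch of the ViT front end: an image becomes a sequence of patch
# tokens, "16x16 words" that a plain Transformer encoder can process.
import tensorflow as tf

image_size, patch_size, d_model = 224, 16, 768
num_patches = (image_size // patch_size) ** 2      # 14 * 14 = 196 tokens

images = tf.random.uniform((8, image_size, image_size, 3))  # dummy batch

# Split into non-overlapping 16x16 patches and flatten each one.
patches = tf.image.extract_patches(
    images,
    sizes=[1, patch_size, patch_size, 1],
    strides=[1, patch_size, patch_size, 1],
    rates=[1, 1, 1, 1],
    padding="VALID",
)                                                   # (8, 14, 14, 768)
patches = tf.reshape(patches, (8, num_patches, -1)) # (8, 196, 768)

# Linearly project each patch and add learned position embeddings;
# the result looks to the Transformer exactly like a text sequence.
project = tf.keras.layers.Dense(d_model)
positions = tf.Variable(tf.zeros((1, num_patches, d_model)))
tokens = project(patches) + positions               # (8, 196, 768)
```

In the paper, a learned [class] token is prepended to this sequence before the encoder, and its final hidden state feeds the classification head.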