This session is from a recent three-day workshop on understanding the geometrical structure of deep neural networks.

Deep learning is transforming the field of artificial intelligence, yet it lacks solid theoretical underpinnings.

This state of affairs significantly hinders further progress, as exemplified by time-consuming hyperparameter optimization and the extraordinary difficulties encountered in adversarial machine learning.

This problem sits at the confluence of mathematics, computer science, and practical machine learning. We invite leaders in these fields to foster new collaborations and to look for new angles of attack on the mysteries of deep learning.

For this month’s episode we have Maoni Stephens, the owner of the .NET GC.

She has been a deep domain expert in garbage collection for over 15 years and has many interesting lessons to share about building a career toward the architect path.

In this video, Steve and Maoni discuss everything from getting the most out of mentors and building learning into a career to the critical skill of debugging others’ code and developing the confidence and perseverance to pursue her goals.

Siraj Raval wrote a research paper titled “The Neural Qubit,” in which he describes a quantum machine learning architecture inspired by neurons in the human brain.

Code: https://bit.ly/2jYh8u9
Paper: https://bit.ly/2ltxf3b

From the video description:

I’m pretty excited about quantum computing; it gives me a deep sense of wonder & confusion that I really enjoy. I’m so glad to be so confused (again)! I have lots more quantum machine learning papers to read in the coming weeks. In this episode, I describe the nonlinear motivations behind my paper, how I thought through the research process, and how I eventually came to some interesting results + conclusions. With the help of math, code, & manim(!) animations, I’ll give it my best shot explaining some of the complex topics at the very edge of Computer Science that I tackled. I hope you find it useful, enjoy!