Hardware is getting interesting again.

Here’s an interesting paper on neuromorphic computing published in Nature.

Abstract below:

Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign.
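
For readers new to the ‘spiking’ idea, here is a minimal sketch of a leaky integrate-and-fire neuron, the canonical unit behind spike-based encoding; all of the constants are illustrative choices, not values from the paper.

```python
# A minimal leaky integrate-and-fire (LIF) neuron. The constants here are
# illustrative choices, not parameters from the paper.
tau = 20.0                   # membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0 # spike threshold and post-spike reset potential
dt, i_in = 1.0, 1.2          # 1 ms time step; constant input drive above threshold

v, spikes = 0.0, []
for t in range(100):                # simulate 100 ms
    v += (dt / tau) * (-v + i_in)   # membrane leaks toward rest while integrating input
    if v >= v_thresh:               # crossing threshold emits a discrete event
        spikes.append(t)            # event-driven: the output is a list of spike times
        v = v_reset

print("spike times (ms):", spikes)
```

The output is a sparse list of event times rather than a dense activation, which is the property event-driven hardware exploits to cut energy use.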

Here’s an interesting look at what the next decade holds for AI and why hardware is going to be a big part of it.

“What we see happening in the transition to now and toward 2020 is what I call the coming of age of deep learning,” Singer, pictured below with an NNP-I chip, tells The Next Platform. “This is where the capabilities have been better understood, where many companies are starting to understand how this might be applicable to their particular line of business. There’s a whole new generation of data scientists and other professionals who understand the field, there’s an environment for developing new algorithms and new topologies for the deep learning frameworks. All those frameworks like TensorFlow and MXNet were not really in existence in 2015. It was all hand-tooled and so on. Now there are environments, there is a large cadre of people who are trained on that, there’s a better understanding of the mapping, there’s a better understanding of the data because it all depends on who is using the data and how to use the data.”

Tony Gambacorta shows you how to explore your hardware with a serial port.

If you’re interested in hardware but haven’t had a chance to play with any yet, this one’s for you. In this “hello world”-level reversing project we’re checking out a UART (serial port) and using it to access a shell on a *very* soft target. If you decide to try it on your own you’ll find an equipment list, walkthrough references, and some troubleshooting ideas at the link below.
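
If you want a feel for the software side before wiring anything up, here is a minimal sketch using pyserial; the device path and baud rate are assumptions that vary by target, and finding the right ones is part of the exercise.

```python
import serial  # pip install pyserial

# Assumed settings: a USB-to-UART adapter showing up at /dev/ttyUSB0 and a
# 115200 baud console, both common but by no means universal.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

port.write(b"\n")  # a bare newline often coaxes a login or shell prompt out of the device
print(port.read(256).decode(errors="replace"))  # dump whatever the target sends back
port.close()
```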

Siraj Raval interviews Vinod Khosla in the latest edition of his podcast.

Vinod Khosla is an entrepreneur, venture capitalist, and philanthropist. It was an honor to have a conversation with the Silicon Valley legend, whom I’ve admired for many years. Vinod co-founded Sun Microsystems over 30 years ago, a company that grew to over 36,000 employees, created foundational software technology like the Java programming language and NFS, and pretty much mainstreamed the ‘idea’ of open source. After a successful exit, he’s been using his billionaire status to invest in ambitious technologists trying to improve human life. He’s got the coolest investment portfolio I’ve seen yet, and in this hour-long interview we discuss everything from AI to education to startup culture. I know my microphone volume should be higher in this one; I’ll fix that in the next podcast. Enjoy!

Show Notes:

Time markers of our discussion topics below:

2:55 The Future of Education
4:36 Vinod’s Dream of an AI Tutor
5:50 Vinod Offers Siraj a Job
6:35 Choose your Teacher with DeepFakes
8:04 Mathematical Models
9:10 Books Vinod Loves
11:00 What is Learning?
14:00 The Flaws of Liberal Arts Degrees
16:10 Indian Culture
21:11 A Day in the Life of Vinod Khosla
23:50 Valuing Brutal Honesty
24:30 Distributed File Storage
30:30 Where are we Headed?
33:32 Vinod on Nick Bostrom
38:00 Vinod’s Rockstar Recruiting Ability
43:00 The Next Industries to Disrupt
49:00 Vinod Offers Siraj Funding for an AI Tutor
51:48 Virtual Reality
52:00 Contrarian Beliefs
54:00 Vinod’s Love of Learning
55:30 USA vs China

Vinod’s ‘Awesome’ Video:
https://www.youtube.com/watch?v=STtAsDCKEck

Khosla Ventures Blog posts:
https://www.khoslaventures.com/blog/all

Books we discussed:

Scale by Geoffrey West:
https://amzn.to/2rs7UV7

Factfulness by Hans Rosling:
https://amzn.to/2GHUlgg

Mindset by Carol Dweck:
https://amzn.to/2icCNey

36 Dramatic Situations by Mike Figgis:
https://amzn.to/2ol14Vi

Sapiens by Yuval Noah Harari:
https://amzn.to/2amA7J5

21 Lessons for the 21st Century by Yuval Noah Harari:
https://amzn.to/2PKIJZY
 
The Third Pillar by Raghuram Rajan:
https://bit.ly/2ASU98K

Zero to One by Peter Thiel:
https://amzn.to/2ae3NTM

It’s back-to-school season here in the US, and Siraj Raval offers his advice on which laptops programmers (and aspiring programmers) should buy.

If you’re a programmer looking to decide which laptop to get in 2019, I’ve developed a set of recommendations for you based on 3 different budgets! In this episode, I’ll explain my recommendations for the best laptops for programmers in the student, generalist, and professional categories. I judged each of my picks on a wide variety of factors, from GPU speed to keyboard comfort to cooling-fan performance. These are my own picks; none of these companies paid me to promote them. I hope you find my recommendations useful in your journey. Enjoy!

Here’s an interesting investing move from two industry heavyweights.

Microsoft co-founder Bill Gates, Uber co-founder Travis Kalanick’s 10100 fund and current Uber CEO Dara Khosrowshahi have invested in Luminous, a small start-up building an artificial intelligence chip.

The investment shows key figures in the technology industry believe there is still an opportunity for a new standard to emerge when it comes to hardware for AI, which can be incorporated into a variety of software applications.

Deep learning is central to recent innovations in AI. If you don’t want to run your code in the cloud and prefer to build your own local rig optimized for TensorFlow and machine learning, then this post is for you.

One point to note is that TensorFlow has a slightly unusual computation scheme that can be intimidating to novice programmers. Computations are first assembled into a ‘computation graph’, which is then run all at once. To add two variables a and b, TensorFlow first encodes the computation a + b as a node in the graph. If you try to access that node before running the graph, you won’t get a result, because nothing has been computed yet; you’ll just get back a description of the graph. Only after running the graph do you have access to the actual answer. Bear this in mind, as it will clear up confusion in your later explorations with TensorFlow.
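
Here’s a minimal sketch of that deferred-execution model, assuming the TensorFlow 1.x API that was current when this was written (TensorFlow 2.x switched to eager execution, where a + b evaluates immediately):

```python
import tensorflow as tf  # assumes TensorFlow 1.x, where graph mode is the default

a = tf.constant(2)
b = tf.constant(3)
c = a + b  # adds an addition node to the computation graph; nothing runs yet

print(c)  # prints a symbolic Tensor description, not 5

with tf.Session() as sess:  # a Session executes the graph
    print(sess.run(c))      # only now is the addition computed: prints 5
```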