Siraj Raval interviews Vinod Khosla in the latest edition of his podcast.

Vinod Khosla is an entrepreneur, venture capitalist, and philanthropist. It was an honor to have a conversation with a Silicon Valley legend I've admired for many years. Vinod co-founded Sun Microsystems over 30 years ago, a company that grew to over 36,000 employees, invented foundational software technology like the Java programming language and NFS, and pretty much mainstreamed the idea of open source. After a successful exit, he's been using his billionaire status to invest in ambitious technologists trying to improve human life. He's got the coolest investment portfolio I've seen yet, and in this hour-long interview we discuss everything from AI to education to startup culture. I know my microphone volume should be higher in this one; I'll fix that in the next podcast. Enjoy!

Show Notes:

Time markers of our discussion topics below:

2:55 The Future of Education
4:36 Vinod’s Dream of an AI Tutor
5:50 Vinod Offers Siraj a Job
6:35 Choose your Teacher with DeepFakes
8:04 Mathematical Models
9:10 Books Vinod Loves
11:00 What is Learning?
14:00 The Flaws of Liberal Arts Degrees
16:10 Indian Culture
21:11 A Day in the Life of Vinod Khosla
23:50 Valuing Brutal Honesty
24:30 Distributed File Storage
30:30 Where are we Headed?
33:32 Vinod on Nick Bostrom
38:00 Vinod’s Rockstar Recruiting Ability
43:00 The Next Industries to Disrupt
49:00 Vinod Offers Siraj Funding for an AI Tutor
51:48 Virtual Reality
52:00 Contrarian Beliefs
54:00 Vinod’s Love of Learning
55:30 USA vs China

Vinod’s ‘Awesome’ Video:
https://www.youtube.com/watch?v=STtAsDCKEck

Khosla Ventures Blog posts:
https://www.khoslaventures.com/blog/all

Books we discussed:

Scale by Geoffrey West:
https://amzn.to/2rs7UV7

Factfulness by Hans Rosling:
https://amzn.to/2GHUlgg

Mindset by Carol Dweck:
https://amzn.to/2icCNey

36 Dramatic Situations by Mike Figgis:
https://amzn.to/2ol14Vi

Sapiens by Yuval Noah Harari:
https://amzn.to/2amA7J5

21 Lessons for the 21st Century by Yuval Noah Harari:
https://amzn.to/2PKIJZY
 
The Third Pillar by Raghuram Rajan:
https://bit.ly/2ASU98K

Zero to One by Peter Thiel:
https://amzn.to/2ae3NTM

Yannic Kilcher investigates BERT and the paper associated with it: https://arxiv.org/abs/1810.04805

Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
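
To make that "one additional output layer" point concrete, here's a minimal fine-tuning sketch using the Hugging Face transformers library (an assumption of this example; the paper's own released code is TensorFlow-based). The pre-trained encoder is loaded as-is, and a single classification layer on top is trained jointly with it:

```python
# Minimal BERT fine-tuning sketch (Hugging Face transformers, not the paper's code).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification is exactly "BERT + one output layer":
# a single linear classifier on top of the pre-trained encoder.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)

# Toy batch: two sentences with binary sentiment labels.
inputs = tokenizer(["a great movie", "a dull movie"],
                   padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)   # forward pass also computes the loss
outputs.loss.backward()                    # fine-tune all layers jointly
optimizer.step()
```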

Dani, a game developer, recently made a game and decided to train an AI to play it.

A couple of weeks ago I made a video, "Making a Game in ONE Day (12 Hours)", and today I'm trying to teach an A.I. to play my game!

Basically, I'm gonna use neural networks to make the A.I. learn to play my game.

This is something I've always wanted to do, and I'm really happy I finally got around to doing it. Some of the biggest inspirations for this are obviously carykh, Jabrils & Codebullet!
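
Dani doesn't spell out his training setup in the description, but a common recipe for teaching a neural network to play a simple game is neuroevolution: keep a population of tiny policy networks, score each one by letting it play, and mutate the best. Here's a minimal sketch of that idea; the network sizes and fitness function are placeholders, not Dani's actual code:

```python
# Hypothetical neuroevolution sketch: evolve tiny policy networks by fitness.
import numpy as np

OBS_SIZE, HIDDEN, ACTIONS = 8, 16, 3      # placeholder observation/action sizes
rng = np.random.default_rng(42)

def make_policy():
    """A tiny two-layer feedforward policy with random weights."""
    return [rng.standard_normal((OBS_SIZE, HIDDEN)) * 0.5,
            rng.standard_normal((HIDDEN, ACTIONS)) * 0.5]

def act(policy, obs):
    hidden = np.tanh(obs @ policy[0])
    return int(np.argmax(hidden @ policy[1]))    # pick the highest-scoring action

def play_game(policy):
    """Stand-in for running the actual game loop with act(); returns fitness."""
    obs = np.ones(OBS_SIZE)                      # placeholder observation
    return float((np.tanh(obs @ policy[0]) @ policy[1]).max())

population = [make_policy() for _ in range(50)]
for generation in range(20):
    ranked = sorted(population, key=play_game, reverse=True)
    elite = ranked[:10]                          # keep the best players
    population = elite + [                       # mutate elites to refill the pool
        [w + rng.standard_normal(w.shape) * 0.1
         for w in elite[rng.integers(10)]]
        for _ in range(40)
    ]
print("best fitness:", play_game(max(population, key=play_game)))
```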

Siraj Raval explores generative modeling technology.

This innovation is changing the face of the Internet as you read this. It’s now possible to design automated systems that can write novels, act as talking heads in videos, and compose music.

In this episode, Siraj explains how generative modeling works by demoing three examples that you can try yourself in your web browser.
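
To give a taste of what such demos do under the hood, here's a minimal text-generation example you can run locally with Hugging Face's pipeline API. GPT-2 is an assumption here; the episode doesn't name the exact models behind the browser demos:

```python
# Minimal generative-text sketch (GPT-2 via Hugging Face, assumed for illustration).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# The model continues the prompt token by token, sampling from its
# learned distribution over next words.
result = generator("Once upon a time in Silicon Valley,",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```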

TensorFlow is already one of the most popular tools for creating deep learning models.

Google this week introduced Neural Structured Learning (NSL) to make this tool even better.

Here's why NSL is a big deal.

Neural Structured Learning in TensorFlow is an easy-to-use framework for training deep neural networks by leveraging structured signals along with feature inputs. This learning paradigm implements Neural Graph Learning in order to train neural networks using graphs and structured data. As the researchers mention, the graphs can come from multiple sources such as knowledge graphs, medical records, genomic data, or multimodal relations. Moreover, the framework also generalizes to adversarial learning.
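
As a concrete example of the adversarial-learning side, here's a minimal sketch using NSL's Keras wrapper on MNIST, based on the framework's documented AdversarialRegularization API; the hyperparameters are illustrative, not tuned:

```python
# Minimal NSL sketch: wrap a Keras model with adversarial regularization.
import tensorflow as tf
import neural_structured_learning as nsl

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# A plain Keras classifier; the input is named "feature" because NSL
# consumes batches as {feature_name: features, "label": labels} dicts.
base_model = tf.keras.Sequential([
    tf.keras.Input((28, 28), name="feature"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Each batch is additionally trained on adversarially perturbed inputs --
# the "generalizes to adversarial learning" part of the framework.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(base_model,
                                                label_keys=["label"],
                                                adv_config=adv_config)

adv_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
adv_model.fit({"feature": x_train, "label": y_train}, batch_size=32, epochs=5)
```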