Two Minute Papers explores the counter work to deep fakes with the paper “FaceForensics++: Learning to Detect Manipulated Facial Images.”
Siraj Raval interviews Vinod Khosla in the latest edition of his podcast.
Vinod Khosla is an entrepreneur, venture capitalist, and philanthropist. It was an honor to have a conversation with a Silicon Valley legend I’ve admired for many years. Vinod co-founded Sun Microsystems over 30 years ago, a company that grew to over 36,000 employees, created foundational technologies like the Java programming language and NFS, and helped mainstream the idea of open source. After a successful exit, he’s been using his billionaire status to invest in ambitious technologists trying to improve human life. He’s got the coolest investment portfolio I’ve seen yet, and in this hour-long interview we discuss everything from AI to education to startup culture. I know my microphone volume should be higher in this one; I’ll fix that in the next podcast. Enjoy!
Time markers of our discussion topics below:
2:55 The Future of Education
4:36 Vinod’s Dream of an AI Tutor
5:50 Vinod Offers Siraj a Job
6:35 Choose your Teacher with DeepFakes
8:04 Mathematical Models
9:10 Books Vinod Loves
11:00 What is Learning?
14:00 The Flaws of Liberal Arts Degrees
16:10 Indian Culture
21:11 A Day in the Life of Vinod Khosla
23:50 Valuing Brutal Honesty
24:30 Distributed File Storage
30:30 Where are we Headed?
33:32 Vinod on Nick Bostrom
38:00 Vinod’s Rockstar Recruiting Ability
43:00 The Next Industries to Disrupt
49:00 Vinod Offers Siraj Funding for an AI Tutor
51:48 Virtual Reality
52:00 Contrarian Beliefs
54:00 Vinod’s Love of Learning
55:30 USA vs China
Vinod’s ‘Awesome’ Video:
Khosla Ventures Blog posts:
Books we discussed:
Scale by Geoffrey West:
Factfulness by Hans Rosling:
Mindset by Carol Dweck:
36 Dramatic Situations by Mike Figgis:
Sapiens by Yuval Noah Harari:
Zero to One by Peter Thiel:
Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
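The abstract's key practical claim is that fine-tuning only requires one additional output layer on top of the pre-trained representations. Here is a minimal, dependency-free sketch of what that single layer computes: a linear classification head plus softmax over a pooled representation. The vectors and weights below are toy stand-ins, not real BERT outputs (which would be 768-dimensional for BERT-base).

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_head(pooled, W, b):
    # the "one additional output layer": logits[k] = W[k] . pooled + b[k]
    return [sum(w * h for w, h in zip(row, pooled)) + bk
            for row, bk in zip(W, b)]

# toy pooled [CLS] representation (hypothetical values for illustration)
pooled = [0.5, -1.0, 0.25]
W = [[0.1, 0.2, 0.3],    # weights for class 0 (toy values)
     [-0.1, 0.4, 0.0]]   # weights for class 1
b = [0.0, 0.1]

logits = classification_head(pooled, W, b)
probs = softmax(logits)
```

During fine-tuning, both this head and the underlying BERT weights are updated jointly; only `W` and `b` are new task-specific parameters.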
Dani, a game developer, recently made a game and decided to train an AI to play it.
A couple of weeks ago I made a video “Making a Game in ONE Day (12 Hours)”, and today I’m trying to teach an A.I to play my game!
Basically I’m gonna use neural networks to make the A.I. learn to play my game.
This is something I’ve always wanted to do, and I’m really happy I finally got around to doing it. Some of the biggest inspirations for this are obviously carykh, Jabrils & Codebullet!
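This is not Dani's actual code, but as a toy stand-in for the "train an AI to play my game" idea, here is a minimal hill-climbing loop (a bare-bones cousin of the neuroevolution approach popularized by the creators he mentions): a tiny one-weight policy is repeatedly mutated, and mutations that score at least as well on the "game" are kept. The game itself is a made-up stand-in where the agent should press jump exactly when the input is positive.

```python
import random

rng = random.Random(42)

# toy "game": the agent sees x and should jump (output 1) exactly when x > 0
DATA = [(-2, 0), (-1, 0), (0.5, 1), (1, 1), (2, 1), (-0.5, 0)]

def act(w, b, x):
    # a one-neuron policy: jump if w*x + b crosses zero
    return 1 if w * x + b > 0 else 0

def score(w, b):
    # fitness = number of situations handled correctly
    return sum(act(w, b, x) == y for x, y in DATA)

w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
start = score(w, b)
best = start
for _ in range(200):
    # mutate the policy; keep the mutant if it plays at least as well
    nw, nb = w + rng.gauss(0, 0.5), b + rng.gauss(0, 0.5)
    s = score(nw, nb)
    if s >= best:
        w, b, best = nw, nb, s
```

Real game-playing AIs replace the one-neuron policy with a full neural network and the scoring table with actual gameplay, but the mutate-and-keep loop is the same shape.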
Two Minute Papers explores OpenAI’s GPT2
Check out this GPT-2 implementation too (thanks Robert Miles for the link!) – write something, then tab, enter, tab, enter and so on: https://transformer.huggingface.co/doc/gpt2-large
OpenAI’s post: https://openai.com/blog/gpt-2-6-month-follow-up/
Tweet source: https://twitter.com/gdm3000/status/1151469462614368256
Siraj Raval explores generative modeling technology.
This innovation is changing the face of the Internet as you read this. It’s now possible to design automated systems that can write novels, act as talking heads in videos, and compose music.
In this episode, Siraj explains how generative modeling works by demoing 3 examples that you can try yourself in your web browser.
- Demo 1 (Generating Music): https://colab.research.google.com/notebooks/magenta/piano_transformer/piano_transformer.ipynb
- Demo 2 (Generating Faces):
- Demo 3 (Generating 3D Objects):
- Autoencoders explained:
- Generative Adversarial Networks explained:
- Sequence Models explained:
- Generative Modeling explained:
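As the smallest possible illustration of the generative idea behind these demos, here is a character-level Markov chain. It is emphatically not a neural model, but it shows the same core loop the neural systems perform: learn which continuations follow each context, then sample from that distribution to produce new text.

```python
import random

def build_model(text, order=2):
    # map each length-`order` context to the characters observed after it
    model = {}
    for i in range(len(text) - order):
        ctx, nxt = text[i:i + order], text[i + order]
        model.setdefault(ctx, []).append(nxt)
    return model

def generate(model, seed, order=2, length=40, rng=None):
    # repeatedly sample a continuation for the most recent context
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += rng.choice(choices)
    return out

text = "the cat sat on the mat. the cat ran to the man."
model = build_model(text)
sample = generate(model, "th", length=20)
```

Neural generative models (GPT-2, GANs, Music Transformer) replace the lookup table with a learned network and characters with richer tokens, pixels, or notes, but the sample-and-extend loop is the same.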
Here’s a great tutorial on how to build out a neural network with Python in PyTorch.
Jon Wood has created another video showing how to use ML.NET and the (currently in preview) version 1.4 to create a deep neural network model to classify images.
TensorFlow is already one of the most popular tools for creating deep learning models.
Google this week introduced Neural Structured Learning (NSL) to make this tool even better.
Here’s why NSL is a big deal.
Neural Structured Learning in TensorFlow is an easy-to-use framework for training deep neural networks by leveraging structured signals along with feature inputs. This learning paradigm implements Neural Graph Learning in order to train neural networks using graphs and structured data. As the researchers mention, the graphs can come from multiple sources such as knowledge graphs, medical records, genomic data or multimodal relations. Moreover, this framework also generalises to adversarial learning.
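The core idea behind the Neural Graph Learning paradigm described above is to add a regularization term that pulls a sample's learned representation toward those of its graph neighbors. The plain-Python sketch below illustrates that combined loss; it is an assumption-laden toy, not the actual `neural_structured_learning` API, and the alpha weight and distance choice are illustrative.

```python
def graph_regularized_loss(supervised_loss, embedding, neighbor_embeddings, alpha=0.5):
    """Combined loss = supervised loss + alpha * avg. squared distance to neighbors."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    if not neighbor_embeddings:
        # samples with no graph neighbors fall back to the supervised loss alone
        return supervised_loss
    neighbor_term = sum(sq_dist(embedding, n) for n in neighbor_embeddings)
    neighbor_term /= len(neighbor_embeddings)
    return supervised_loss + alpha * neighbor_term

# toy values: one sample's embedding and two graph neighbors
loss = graph_regularized_loss(
    supervised_loss=0.8,
    embedding=[1.0, 0.0],
    neighbor_embeddings=[[1.0, 0.2], [0.8, 0.0]],
    alpha=0.5,
)
```

Training against this combined objective is what lets structure from knowledge graphs or record linkages influence the network even when labels are scarce; the adversarial-learning variant mentioned above swaps explicit graph neighbors for adversarially perturbed copies of each input.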
Lex Fridman interviews the one and only Yann LeCun in the latest episode of his podcast.