Earlier today, I shared Lex Fridman’s discussion of DeepMind’s recent advance in protein folding.

Join DeepMind Science Engineer Kathryn Tunyasuvunakool to explore the hidden world of proteins and why this discovery is a big deal.

These tiny molecular machines underpin every biological process in every living thing, and each one has a unique 3D shape that determines how it works and what it does.

But figuring out the exact structure of a protein is an expensive and often time-consuming process, meaning we know the exact 3D structure of only a tiny fraction of the 200 million proteins known to science.

Being able to accurately predict the shape of proteins could accelerate research in every field of biology.

That could lead to important breakthroughs, such as finding new medicines, or discovering proteins and enzymes that break down industrial and plastic waste or efficiently capture carbon from the atmosphere.

Computers just got a lot better at mimicking human language. Researchers have created programs that can write long passages of coherent, original text.

Language models like GPT-2, Grover, and CTRL create text passages that seem written by someone fluent in the language, but not in the truth.

The AI field behind them, natural language processing (NLP), didn’t exactly set out to create a fake news machine. Rather, these systems are the byproduct of a line of research into massive pretrained language models: machine learning programs that store vast statistical maps of how we use our language.

So far, the technology’s creative uses seem to outnumber its malicious ones. But it’s not difficult to imagine how these text fakes could cause harm, especially as the models become widely shared and deployable by anyone with basic know-how.
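To make that concrete, here is a minimal sketch of generating text with one of these pretrained models. It uses the publicly released GPT-2 weights via the open-source Hugging Face transformers library; the library choice, the prompt, and the sampling settings are illustrative assumptions on my part, not something prescribed by the article.

```python
# A minimal sketch: sampling a continuation from the public GPT-2 model
# using the Hugging Face `transformers` library (an assumption; the
# article doesn't name any particular toolkit).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt. The model extends it one token at a time, each
# token sampled from its learned statistics of how language is used.
prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_length=60,            # total length, prompt included
    do_sample=True,           # sample rather than pick the single likeliest token
    top_k=50,                 # restrict sampling to the 50 most probable tokens
    temperature=0.9,          # slightly soften the probability distribution
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The output reads fluently because the model has absorbed the statistics of enormous amounts of human-written text, but nothing in this process checks whether the generated claims are true. That is exactly the fluent-in-the-language-but-not-in-the-truth problem the article describes.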

Read more here: https://www.vox.com/recode/2020/3/4/21163743/ai-language-generation-fake-text-gpt2