Computers just got a lot better at mimicking human language. Researchers created computer programs that can write long passages of coherent, original text.

Language models like GPT-2, Grover, and CTRL create text passages that seem written by someone fluent in the language, but not in the truth. The AI field behind them, natural language processing (NLP), didn’t set out to build a fake news machine. Rather, these systems are the byproduct of a line of research into massive pretrained language models: machine learning programs that store vast statistical maps of how we use language. So far, the technology’s creative uses seem to outnumber its malicious ones. But it’s not hard to imagine how these text fakes could cause harm, especially as the models become widely shared and deployable by anyone with basic know-how.
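To make the idea of a "statistical map of how we use language" concrete, here is a toy bigram model in plain Python. This is a drastic simplification of transformer models like GPT-2 (which learn billions of parameters rather than raw counts), and the corpus and function names are invented for illustration, but the core idea is the same: record which words tend to follow which, then generate text by sampling from those statistics.

```python
import random
from collections import defaultdict, Counter

# Tiny stand-in corpus; real models train on web-scale text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which: a miniature "statistical map" of usage.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length, rng):
    """Produce a short passage by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length - 1):
        counts = bigrams[words[-1]]
        if not counts:  # dead end: no observed continuation
            break
        nxt, = rng.choices(list(counts), weights=list(counts.values()))
        words.append(nxt)
    return " ".join(words)

print(generate("the", 6, random.Random(0)))
```

The output is fluent-looking but meaning-free word soup, which is exactly the point the article makes: the model mimics the statistics of language, not its truth.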

Read more here: https://www.vox.com/recode/2020/3/4/21163743/ai-language-generation-fake-text-gpt2 

OpenAI raised some eyebrows last month when it announced it had figured out a way to get an AI to write more naturally. The company, however, decided not to release its full research for fear that it could cause havoc.

From an article in The Register.

Last month, researchers at OpenAI revealed they had built software that could perform a range of natural language tasks, from machine translation to text generation. Some of the technical details were published in a paper, though the majority of the materials were withheld for fear that they could be used maliciously to create spam-spewing bots or churn out tons of fake news. Instead, OpenAI released a smaller and less capable version, nicknamed GPT-2-117M.

The one and only John Papa sits down with Brian Clark to talk about all things bots.

What exactly is a bot? And what can bots do?

Watch this video to find out and, if that’s not incentive enough, there may or may not be dancing.