In today’s video, NeuralNine is going to build an intelligent AI chatbot using neural networks and natural language processing in Python.
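For a taste of the general approach these chatbot tutorials take, here is a minimal sketch of a bag-of-words intent classifier in Python with Keras. The intents, layer sizes, and helper names are illustrative assumptions, not NeuralNine’s actual code:

```python
# Minimal intent-classification chatbot sketch (illustrative, not NeuralNine's exact code).
import random
import numpy as np
from tensorflow import keras

# Tiny hypothetical intent set; real tutorials usually load these from a JSON file.
intents = {
    "greeting": (["hi", "hello there", "good morning"], ["Hello!", "Hi there!"]),
    "goodbye":  (["bye", "see you later", "goodbye"], ["Bye!", "See you soon!"]),
}

# Build a bag-of-words vocabulary from all training patterns.
patterns, labels = [], []
for label, (examples, _) in intents.items():
    for text in examples:
        patterns.append(text.lower().split())
        labels.append(label)
vocab = sorted({w for p in patterns for w in p})
classes = sorted(intents)

def bag_of_words(tokens):
    return np.array([1.0 if w in tokens else 0.0 for w in vocab])

X = np.array([bag_of_words(p) for p in patterns])
y = np.array([classes.index(l) for l in labels])

# Small dense network: bag-of-words in, softmax over intents out.
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(len(vocab),)),
    keras.layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

def reply(message):
    probs = model.predict(bag_of_words(message.lower().split())[None, :], verbose=0)[0]
    return random.choice(intents[classes[int(np.argmax(probs))]][1])

print(reply("hello there"))  # e.g. "Hi there!"
```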
Jon Wood explains Latent Dirichlet Allocation (LDA) in ML.NET.
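For readers outside the .NET world, here is a rough Python equivalent of the same topic-modeling idea using scikit-learn rather than ML.NET; the corpus and topic count are made up for illustration:

```python
# Rough scikit-learn equivalent of LDA topic modeling (not Jon's ML.NET code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats make great pets",
    "stock markets fell sharply today",
    "investors worry about market volatility",
]

# LDA works on raw term counts, so use CountVectorizer (not TF-IDF).
counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words per discovered topic.
terms = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]
    print(f"Topic {i}: {top}")
```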
In this episode of Data Driven, Frank and Andy explore voice assistants and the behind-the-scenes technology that makes them tick. Transcript coming soon.
Press the play button below to listen here or visit the show page at DataDriven.tv.
On Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher, and Connor Shorten discuss their takeaways from OpenAI’s GPT-3 language model.
OpenAI trained a 175 BILLION parameter autoregressive language model. The paper demonstrates how self-supervised language modelling at this scale can perform many downstream tasks without fine-tuning.
Paper Links:
Content index:
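To make the “no fine-tuning” claim concrete, here is a minimal sketch of the few-shot, in-context learning setup the paper describes. The translation examples come from the paper itself; the `complete` function is a hypothetical stand-in for a model call, not a real API:

```python
# Sketch of the paper's "in-context learning" idea: instead of fine-tuning,
# you show the model a few examples in the prompt and it infers the task.
# `complete` is a hypothetical stand-in for a call to the language model.

few_shot_prompt = """Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""

def complete(prompt: str) -> str:
    """Placeholder: in practice this would query the 175B-parameter model."""
    raise NotImplementedError

# The model is expected to continue the pattern with "fromage" --
# no gradient updates, no task-specific weights, just conditioning on the prompt.
```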
How far can you go with ONLY language modeling?
Can a large enough language model perform NLP tasks out of the box?
OpenAI takes on these and other questions by training a transformer that is an order of magnitude larger than anything ever built before, and the results are astounding.
Yannic Kilcher explores.
Paper
Time index:
Jon Wood shows us how to produce n-grams from text data in ML.NET.
Code – https://github.com/jwood803/MLNetExamples/blob/master/MLNetExamples/NGrams/Program.cs
N-Gram article – https://blog.xrds.acm.org/2017/10/introduction-n-grams-need/
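Jon’s linked example does this with ML.NET’s text transforms; for comparison, here is a tiny plain-Python sketch of the same idea:

```python
# Plain-Python sketch of word n-gram extraction (same idea as Jon's
# ML.NET example linked above, minus the ML.NET pipeline).

def ngrams(text, n=2):
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("the quick brown fox", 2))
# ['the quick', 'quick brown', 'brown fox']
```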
Computers just got a lot better at mimicking human language. Researchers created computer programs that can write long passages of coherent, original text.
Language models like GPT-2, Grover, and CTRL create text passages that seem written by someone fluent in the language, but not in the truth. The AI field behind them, natural language processing (NLP), didn’t exactly set out to create a fake news machine. Rather, it’s the byproduct of a line of research into massive pretrained language models: machine learning programs that store vast statistical maps of how we use our language. So far, the technology’s creative uses seem to outnumber its malicious ones. But it’s not difficult to imagine how these text-fakes could cause harm, especially as these models become widely shared and deployable by anyone with basic know-how.
Read more here: https://www.vox.com/recode/2020/3/4/21163743/ai-language-generation-fake-text-gpt2
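If you want to poke at one of these pretrained language models yourself, here is a minimal sketch using the Hugging Face transformers library (this assumes the library is installed and the GPT-2 weights can be downloaded; it is not from the article):

```python
# Minimal sketch: sampling text from a pretrained GPT-2 with Hugging Face
# `transformers` (assumes `pip install transformers` and a model download).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The future of natural language processing",
                max_length=50, num_return_sequences=1)
print(out[0]["generated_text"])
```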
Machine Learning with Phil ponders the question: “is it better to specialize or generalize in artificial intelligence and deep learning?”
The answer depends on your career aspirations. Do you want to be a deep learning research professor?
Do you want to go to work for Google, Facebook, or other global mega corporations?
Or do you want to be your own unicorn start up founder?
Each path has its own specialization requirements, which Phil breaks down in this video.
In this Data Point, Frank points out that chatbots are in demand again.
Press the play button below to listen here or visit the show page at DataDriven.tv.
Machine Learning with Phil shows you how to do sentiment analysis with TensorFlow 2 in this natural language processing (NLP) tutorial.
This natural language processing model is relatively straightforward: it’s just an encoder coupled to some bidirectional layers and a couple of dense layers to handle the classification. We’ll compare two different models, one with a single LSTM layer and the other with two LSTM layers and some dropout.
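As a rough sketch of the second model described above, something like the following Keras code captures the architecture; the layer sizes and the TextVectorization encoder setup are illustrative assumptions, not Phil’s exact code:

```python
# Hedged sketch of the two-LSTM variant: a text encoder feeding two
# bidirectional LSTM layers with dropout, then dense layers for classification.
import tensorflow as tf

encoder = tf.keras.layers.TextVectorization(max_tokens=10000)
# encoder.adapt(train_text)  # fit the vocabulary on your training sentences

model = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # single logit: positive vs. negative sentiment
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer="adam", metrics=["accuracy"])
```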