Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
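The "just one additional output layer" idea can be sketched in plain NumPy. This is a toy stand-in, not BERT itself: the `cls_vectors` here are random placeholders for what the pre-trained encoder would produce for the `[CLS]` token, and all shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pre-trained encoder's output: a 768-dim [CLS]
# representation for a batch of 4 sentences. In practice this would
# come from the pre-trained BERT model, not a random generator.
hidden_size, num_labels, batch = 768, 2, 4
cls_vectors = rng.standard_normal((batch, hidden_size))

# The "one additional output layer": a single linear projection from
# the hidden size to the number of task labels, followed by softmax.
W = rng.standard_normal((hidden_size, num_labels)) * 0.02
b = np.zeros(num_labels)

logits = cls_vectors @ W + b
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)  # (4, 2): one probability distribution per sentence
```

During fine-tuning, the gradient flows through this small layer and the whole encoder, which is why no task-specific architecture is needed.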
Microsoft Research interviews Mark Hamilton to see how MMLSpark is helping to serve business and the environment.
If someone asked you what snow leopards and Vincent Van Gogh have in common, you might think it was the beginning of a joke. It’s not, but if it were, Mark Hamilton, a software engineer in Microsoft’s Cognitive Services group, budding PhD student and frequent Microsoft Research collaborator, would tell you the punchline is machine learning. More specifically, Microsoft Machine Learning for Apache Spark (MMLSpark for short), a powerful yet elastic open source machine learning library that’s finding its way beyond business and into “AI for Good” applications such as the environment and the arts.
Today, Mark talks about his love of mathematics and his desire to solve big, crazy, core-knowledge-sized problems; tells us all about MMLSpark and how it’s being used by organizations like the Snow Leopard Trust and the Metropolitan Museum of Art; and reveals how the persuasive advice of a really smart big sister helped launch an exciting career in AI research and development.
Machine Learning can be confusing sometimes.
From esoteric terms to elevated expositions, it can seem like a terribly difficult area to get into.
Seth Juarez, like me, started off as a developer, and he tackles one term that is used all the time in Machine Learning: the elusive “model.”
From the description:
First we set up how machine learning is different, how to think about it, and finally what a model actually is (spoiler alert – think “a function written a different way”). Would love your feedback.
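The "a function written a different way" framing can be made concrete in a few lines of plain Python: a model is just a function whose constants were computed from data instead of typed in by hand. The data and helper name below are made up for illustration.

```python
# Learn f(x) = w*x + b from example points via closed-form least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # secretly generated by y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def model(x):
    """The learned 'model': an ordinary function with learned constants."""
    return w * x + b

print(model(5.0))  # → 11.0
```

Everything that makes this a "model" rather than a hand-written function is in how `w` and `b` were obtained.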
Leila Etaati, a Data Soup Summit speaker, will be in the USA for a month, presenting a one-day workshop at the following locations:
- SQL Saturday Minnesota, 11 October:
- SQL Saturday Atlanta BI Edition, 18 October:
- Power Platform Orlando, 14 October:
MLOps (also known as DevOps for machine learning) is the practice of collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning (ML) lifecycle.
Azure Machine Learning service’s MLOps capabilities provide customers with asset management and orchestration services which enable effective ML lifecycle management.
Jabrils uses an autoencoder to generate a new Pokemon.
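For a sense of how an autoencoder can "generate" something new, here is a minimal linear autoencoder in NumPy. It is purely illustrative: Jabrils works with actual sprite images and a deep network, while this sketch compresses random 8-dimensional toy vectors to a 2-dimensional latent space and decodes a random latent point as a "new" sample.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 8))          # toy "sprites": 64 samples, 8 features

W_enc = rng.standard_normal((8, 2)) * 0.1  # encoder: 8 -> 2 latent dims
W_dec = rng.standard_normal((2, 8)) * 0.1  # decoder: 2 -> 8

def loss(We, Wd):
    """Mean squared reconstruction error."""
    return ((X @ We @ Wd - X) ** 2).mean()

initial_loss = loss(W_enc, W_dec)
lr = 0.05
for _ in range(500):
    Z = X @ W_enc                 # encode
    R = Z @ W_dec                 # decode (reconstruct)
    G = 2 * (R - X) / X.size      # gradient of MSE w.r.t. R
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final_loss = loss(W_enc, W_dec)

# "Generate" something new: decode a random point in latent space.
new_sample = rng.standard_normal(2) @ W_dec
print(final_loss < initial_loss)  # reconstruction error decreased
```

The generation trick is the last line: once the decoder has learned to map latent codes back to data, any latent point decodes to something data-like.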
The TensorFlow team has been on a journey to make training, deploying, managing, and scaling machine learning models as easy as possible.
TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state of the art in machine learning and build scalable ML-powered applications.
This video is also subtitled in Chinese, Indonesian, Italian, Japanese, Korean, Portuguese, and Spanish.
Coding TensorFlow → https://goo.gle/2Y43cN4
Siraj Raval explores generative modeling technology.
This innovation is changing the face of the Internet as you read this. It’s now possible to design automated systems that can write novels, act as talking heads in videos, and compose music.
In this episode, Siraj explains how generative modeling works by demoing 3 examples that you can try yourself in your web browser.
- Demo 1 (Generating Music): https://colab.research.google.com/notebooks/magenta/piano_transformer/piano_transformer.ipynb
- Demo 2 (Generating Faces):
- Demo 3 (Generating 3D Objects):
- Autoencoders explained:
- Generative Adversarial Networks explained:
- Sequence Models explained:
- Generative Modeling explained:
Jon Wood has just posted this video on how to use ML.NET to remove stop words in text data.
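ML.NET ships stop-word removal as a built-in text transform; the underlying idea is simple enough to sketch in plain Python. The word list below is a small illustrative subset, not ML.NET's actual list.

```python
# Stop words are high-frequency words that carry little signal for
# most text tasks, so they are filtered out before featurization.
STOP_WORDS = {"a", "an", "and", "in", "is", "it", "of", "the", "to"}

def remove_stop_words(text):
    """Keep only the words not in the stop-word set."""
    return " ".join(w for w in text.lower().split() if w not in STOP_WORDS)

print(remove_stop_words("The cat is in the garden"))  # → "cat garden"
```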
Here’s a great 90-minute class covering the top 5 machine learning Python libraries.