Here’s a great tutorial on how to use the Azure Machine Learning Designer interface to create machine learning models without code.

You don’t need to write code to build your model, though there’s the option to bring in custom R or Python where necessary. It replaces the original ML Studio tool, adding deeper integration with Azure’s machine learning SDKs and extending support beyond CPU-based models to GPU-powered machine learning, along with automated model training and tuning.
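For those cases where you do drop into code, here’s a minimal sketch of what a custom script for Designer’s Execute Python Script module might look like, assuming the module’s azureml_main entry-point convention (the “price” column is a hypothetical placeholder):

```python
import numpy as np
import pandas as pd

# Sketch of a custom script for Designer's "Execute Python Script" module.
# The module invokes azureml_main with up to two input DataFrames and
# expects a DataFrame (or tuple of DataFrames) back.
def azureml_main(dataframe1: pd.DataFrame, dataframe2: pd.DataFrame = None):
    df = dataframe1.dropna()
    # "price" is a hypothetical column, used only to illustrate a transform.
    df["log_price"] = np.log1p(df["price"])
    return df,
```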

This looks interesting.

Udacity has announced the Machine Learning Engineer for Microsoft Azure Nanodegree Program. Built in collaboration with Microsoft, it offers you the chance to build the practitioner-level skills that companies across industries need. In the program, you’ll strengthen your machine learning skills by training, validating, and evaluating models using Azure Machine Learning, and you’ll complete a series of three real-world projects to add to your portfolio.

In the last several months, MLflow has introduced significant platform enhancements that simplify machine learning lifecycle management.

Expanded autologging capabilities, including a new integration with scikit-learn, have streamlined the instrumentation and experimentation process in MLflow Tracking.
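As a taste of what autologging looks like, here’s a minimal sketch using the scikit-learn integration (the diabetes dataset and Ridge model are arbitrary choices for illustration):

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

# Turn on autologging for scikit-learn: parameters, training metrics,
# and the fitted model are recorded without explicit logging calls.
mlflow.sklearn.autolog()

X, y = load_diabetes(return_X_y=True)

with mlflow.start_run():
    # fit() is instrumented automatically; alpha appears as a logged param.
    Ridge(alpha=1.0).fit(X, y)
```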

Additionally, schema management functionality has been incorporated into MLflow Models, enabling users to seamlessly inspect and control model inference APIs for batch and real-time scoring. 
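Concretely, a model’s input/output schema can be captured as a signature at logging time; a small sketch, again with an arbitrary scikit-learn model:

```python
import mlflow
from mlflow.models.signature import infer_signature
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge(alpha=1.0).fit(X, y)

# Infer the input/output schema from sample data and store it with the
# model, so scoring endpoints can validate and document their inputs.
signature = infer_signature(X, model.predict(X))

with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model", signature=signature)
```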

Yannic Kilcher explains why transformers are ruining convolutions.

This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT) and the reason why it works better, and rant about why double-blind peer review is broken.
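For a rough idea of the core mechanism discussed in the video, here’s a minimal sketch of ViT-style patch embedding in PyTorch (sizes follow the paper’s base configuration: 224x224 RGB input, 16x16 patches, 768-dim tokens):

```python
import torch

# Minimal ViT-style patch embedding for a single image.
image = torch.randn(1, 3, 224, 224)
patch, dim, n = 16, 768, 14  # n = 224 // 16 patches per side

# Cut the image into non-overlapping patches and flatten each one.
patches = image.unfold(2, patch, patch).unfold(3, patch, patch)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, n * n, -1)  # (1, 196, 768)

# Linearly project the patches, prepend a learnable [CLS] token, and add
# position embeddings; the result feeds a standard Transformer encoder.
proj = torch.nn.Linear(3 * patch * patch, dim)
cls_token = torch.nn.Parameter(torch.zeros(1, 1, dim))
pos_embed = torch.nn.Parameter(torch.zeros(1, n * n + 1, dim))

tokens = torch.cat([cls_token, proj(patches)], dim=1) + pos_embed
print(tokens.shape)  # torch.Size([1, 197, 768])
```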

OUTLINE:

  • 0:00 – Introduction
  • 0:30 – Double-Blind Review is Broken
  • 5:20 – Overview
  • 6:55 – Transformers for Images
  • 10:40 – Vision Transformer Architecture
  • 16:30 – Experimental Results
  • 18:45 – What does the Model Learn?
  • 21:00 – Why Transformers are Ruining Everything
  • 27:45 – Inductive Biases in Transformers
  • 29:05 – Conclusion & Comments

Related resources:

  • Paper (Under Review): https://openreview.net/forum?id=YicbFdNTTy

When you think of “deep learning” you might think of teams of PhDs with petabytes of data and racks of supercomputers.

But it turns out that a year of coding, high school math, a free GPU service, and a few dozen images is enough to create world-class models. fast.ai has made it their mission to make deep learning as accessible as possible.

In this interview fast.ai co-founder Jeremy Howard explains how to use their free software and courses to become an effective deep learning practitioner.


Time series are ubiquitous in real-world applications, but often add considerable complications to data science workflows. What’s more, most available machine learning toolboxes (e.g. scikit-learn) are limited to the tabular setting, and cannot easily be applied to time series data.

In this tutorial, you’ll learn how to apply common machine learning techniques to time series and how to extend available toolkits. This is a beginner-friendly tutorial: we assume familiarity with scikit-learn, but no prior experience with time series.

To start, you’ll learn how to distinguish between different kinds of temporal data and associated learning tasks, such as forecasting and time series classification. You’ll then learn how to solve these tasks with machine learning techniques specific to time series data.
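As a taste of the reduction approach this kind of tutorial covers, here’s a minimal sketch that turns a time series classification problem into a tabular one that scikit-learn can handle (the data is synthetic and the summary features are arbitrary choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: 200 univariate series of length 100, two classes.
rng = np.random.default_rng(0)
X_series = rng.normal(size=(200, 100))
y = rng.integers(0, 2, size=200)
X_series[y == 1] += np.linspace(0, 1, 100)  # class 1 drifts upward

# Reduce each series to summary features so a tabular estimator applies.
def summarize(series: np.ndarray) -> np.ndarray:
    return np.column_stack([
        series.mean(axis=1),
        series.std(axis=1),
        series.min(axis=1),
        series.max(axis=1),
        series[:, -1] - series[:, 0],  # crude trend proxy
    ])

X_tab = summarize(X_series)
X_train, X_test, y_train, y_test = train_test_split(X_tab, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```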