Reliably taking deep learning models to production is one of the next frontiers of MLOps.

With the advent of Redis modules and the availability of C APIs for the major deep learning frameworks, it is now possible to turn Redis into a reliable runtime for deep learning workloads, offering a simple solution for a model-serving microservice.

RedisAI ships with several cool features, such as support for multiple frameworks, CPU and GPU backends, auto-batching, and DAG execution, and will soon gain automatic monitoring capabilities. In this talk, we’ll explore some of these features of RedisAI and see how easy it is to integrate MLflow and RedisAI to build an efficient productionization pipeline.

[Originally aired as part of the Data+AI Online Meetup (https://www.meetup.com/data-ai-online/) and Bay Area MLflow meetup]