In this PyData London talk, Kevin Lemagnen covers something that I’ve long wondered about: the maintainability of code created in data science projects.

Notebooks are great: they let you explore your data and prototype models quickly. But they make it hard to follow good software practices. In this tutorial, we will go through a case study. We will see how to refactor our code into a testable and maintainable Python package with entry points to tune, train, and test our model, so it can easily be integrated into a CI/CD flow.
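To make the "entry points" idea concrete, here is a minimal sketch of the kind of CLI a refactored package might expose (the package name, subcommands, and flags are illustrative, not taken from the talk). A CI/CD pipeline can then call each subcommand as a separate step.

```python
# Hypothetical CLI entry point for a refactored model package.
# Subcommand names mirror the tune/train/test workflow mentioned above.
import argparse

def tune(args):
    print(f"tuning hyperparameters with config {args.config}")

def train(args):
    print(f"training model with config {args.config}")

def test(args):
    print(f"evaluating model at {args.model_path}")

def main():
    parser = argparse.ArgumentParser(prog="mymodel")  # hypothetical name
    sub = parser.add_subparsers(dest="command", required=True)

    for name, fn in [("tune", tune), ("train", train)]:
        p = sub.add_parser(name)
        p.add_argument("--config", default="config.yml")
        p.set_defaults(func=fn)

    p = sub.add_parser("test")
    p.add_argument("--model-path", default="model.pkl")
    p.set_defaults(func=test)

    args = parser.parse_args()
    args.func(args)

if __name__ == "__main__":
    main()
```

Registering `main` as a `console_scripts` entry point in the package's setup configuration makes each step callable as a plain shell command, which is what lets CI run `mymodel train` and `mymodel test` as separate pipeline stages.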

Adam Paszke speaks at PyData Warsaw 2018 about PyTorch, one of the main tools used for machine learning research.

It had been developed in beta mode for over two years, but this October a release candidate for version 1.0 was finally released. In this talk, Adam briefly introduces the library, then moves on to showcase the cutting-edge features introduced recently.
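For flavor, here is a minimal sketch of PyTorch's everyday workflow plus the JIT tracer, one of the headline features of the 1.0 release line (the model and shapes are illustrative, not from the talk):

```python
import torch
import torch.nn as nn

# An ordinary eager-mode model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Autograd: gradients flow through plain Python code.
x = torch.randn(1, 4, requires_grad=True)
loss = model(x).sum()
loss.backward()
print(x.grad)

# torch.jit.trace records the operations for an example input,
# producing a serializable module that can run outside Python (e.g. in C++).
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("model.pt")
```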

Implementing and Training Predictive Customer Lifetime Value Models in Python is covered in this talk by Jean-Rene Gauthier and Ben Van Dyke. Customer lifetime value (CLV) models are powerful predictive models that allow analysts and data scientists to forecast how much customers are worth to a business.

CLV models provide crucial inputs to inform marketing acquisition decisions, retention measures, customer care queuing, demand forecasting, etc. They are used and applied in a variety of verticals, including retail, gaming, and telecom.
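As a point of reference, here is a minimal sketch of the textbook discounted-retention CLV formula. This is a deliberate simplification (the talk covers richer probabilistic models), but it shows the basic shape of the calculation:

```python
# Simplified CLV: margin earned each period, weighted by the chance the
# customer is still around, discounted back to present value.
def simple_clv(margin, retention, discount, horizon):
    """Expected value of a customer over `horizon` periods.

    margin:    profit contributed per period while active
    retention: probability the customer is retained each period
    discount:  per-period discount rate for future cash flows
    """
    return sum(
        margin * retention**t / (1 + discount) ** t
        for t in range(horizon)
    )

# e.g. $50/month margin, 80% monthly retention, 1% discount, 3 years
print(round(simple_clv(50, 0.80, 0.01, 36), 2))
```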

Jupyter notebooks are great. They are interactive, customizable and can be made to beautifully illustrate data.

Unfortunately, only a small fraction of data scientists take full advantage of the possibilities they bring. In this talk, Jakub Czakon shows you some of the coolest notebook features that will impress your peers, dazzle your clients, and make your work a lot more enjoyable.
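One example of the kind of feature the talk has in mind (my choice of illustration, assuming the ipywidgets package is installed, not a feature confirmed from the talk itself): turning a plain function into an interactive widget with a single call.

```python
# ipywidgets renders sliders and text boxes for a function's parameters
# directly in the notebook, re-running it as you move the controls.
from ipywidgets import interact  # pip install ipywidgets

def preview(n=5, column="price"):
    # In a real notebook this might slice a DataFrame;
    # here we just echo the chosen parameters.
    return f"showing top {n} rows of {column!r}"

# A tuple produces an integer slider for `n`; a string gives a text box.
interact(preview, n=(1, 50), column="price")
```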