Protecting Your Machine Learning Against Drift: An Introduction

Oliver Cobb

Tags: Beginners, Data Science, Deep Learning, Machine-Learning, failures/mistakes

See in schedule: Thu, Jul 29, 12:00-12:45 CEST (45 min)

Deployed machine learning models can fail spectacularly in response to seemingly benign changes to the underlying process being modelled. Concerningly, when labels are not available, as is often the case in deployment settings, this failure can occur silently and go unnoticed.

This talk is a practical introduction to drift detection, the discipline focused on detecting such changes. We will start by building an understanding of how drift can occur, why it pays to detect it, and how it can be detected in a principled manner. We will then discuss the practicalities and challenges of detecting it as quickly as possible in machine learning deployment settings, where high-dimensional, unlabelled data arrives continuously. We will finish by demonstrating how the theory can be put into practice using the `alibi-detect` Python library.
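As a taste of the demo portion, here is a minimal sketch of what a batch drift check with `alibi-detect` might look like; the choice of detector (a feature-wise Kolmogorov-Smirnov test via `KSDrift`) and the synthetic data are illustrative assumptions, not necessarily what the talk itself uses:

```python
# Minimal sketch: batch drift detection with alibi-detect.
# KSDrift runs a Kolmogorov-Smirnov test per feature against a
# reference sample, with a Bonferroni correction by default.
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)

# Reference data representative of what the model was trained on.
x_ref = rng.normal(size=(1000, 10))
cd = KSDrift(x_ref, p_val=0.05)  # detector choice is an illustrative assumption

# An incoming, unlabelled batch whose mean has drifted.
x_new = rng.normal(loc=0.5, size=(500, 10))
preds = cd.predict(x_new)
print(preds['data']['is_drift'])  # 1 if drift was detected, else 0
print(preds['data']['p_val'])     # per-feature p-values
```

Note that the detector needs only a reference sample and unlabelled incoming batches, which is what makes this style of monitoring viable precisely when deployment labels are unavailable.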

There are no hard prerequisites for understanding this talk, although background knowledge on machine learning and statistical hypothesis testing might be useful.

Type: Talk (45 mins); Python level: Beginner; Domain level: Beginner


Oliver Cobb

Seldon Technologies

Oliver performs research in the areas of outlier and drift detection with particular focus on overcoming impracticalities that arise in machine learning deployment contexts. He previously performed geometric deep learning research at a virtual reality startup and obtained an integrated masters degree in mathematics and statistics from the University of Oxford.