Production ML Monitoring: Outliers, Drift, Explainers & Statistical Performance

Alejandro Saucedo

Tags: Best Practice, Data Science, DevOps, General, Machine-Learning, Scientific Libraries (Numpy/Pandas/SciKit/...)


# Session Description

The lifecycle of a machine learning model only begins once it's in production. In this talk we provide a practical deep dive into best practices, principles, patterns and techniques for production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied to deployed machine learning models, as well as more advanced paradigms for monitoring machine learning models with Python, leveraging concepts such as concept drift detection, outlier detection and explainability.
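To give a flavour of the outlier-detection pattern mentioned above, the sketch below implements a minimal z-score detector in NumPy. This is an illustrative toy, not the implementation used in the talk; the class name and threshold are assumptions, and production systems typically use richer detectors.

```python
import numpy as np


class ZScoreOutlierDetector:
    """Minimal illustration of inference-time outlier detection:
    flag inputs whose features fall far outside the training distribution."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold

    def fit(self, X: np.ndarray) -> "ZScoreOutlierDetector":
        # Record per-feature mean and standard deviation of the training data.
        self.mean_ = X.mean(axis=0)
        self.std_ = X.std(axis=0) + 1e-12  # guard against division by zero
        return self

    def is_outlier(self, x: np.ndarray) -> bool:
        # An input is flagged if any feature deviates from the training mean
        # by more than `threshold` standard deviations.
        z = np.abs((x - self.mean_) / self.std_)
        return bool((z > self.threshold).any())


rng = np.random.default_rng(42)
X_train = rng.normal(size=(1000, 4))
detector = ZScoreOutlierDetector().fit(X_train)

print(detector.is_outlier(np.zeros(4)))       # typical input -> False
print(detector.is_outlier(np.full(4, 10.0)))  # far outside training range -> True
```

In the architectural patterns the talk covers, a component like this sits alongside the deployed model and raises an alert (rather than a boolean) when incoming traffic looks unlike the training data.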

We'll dive into a hands-on example, where we will train an image classification machine learning model from scratch using Tensorflow, deploy it, and introduce advanced monitoring components as architectural patterns with hands-on examples. These monitoring techniques will include AI explainers, outlier detectors, concept drift detectors and adversarial detectors. We will also cover high-level architectural patterns that abstract these complex and advanced monitoring techniques into infrastructural components that enable monitoring at scale, introducing the standardised interfaces required to monitor hundreds or thousands of heterogeneous machine learning models.
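As a simplified sketch of the drift-detection idea (not the tooling demonstrated in the talk), one common approach is a two-sample Kolmogorov-Smirnov test comparing a live feature distribution against a training-time reference; the function name and threshold below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference: np.ndarray, live: np.ndarray,
                 p_threshold: float = 0.05) -> bool:
    """Flag drift when the live feature distribution differs from the
    reference (training-time) distribution under a two-sample KS test."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold


rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=2000)  # training-time sample
same = rng.normal(loc=0.0, scale=1.0, size=500)        # live traffic, no shift
shifted = rng.normal(loc=1.5, scale=1.0, size=500)     # live traffic, shifted

print(detect_drift(reference, same))
print(detect_drift(reference, shifted))  # -> True: the shift is detected
```

A per-feature univariate test like this is only a starting point; the detectors covered in the talk handle multivariate and high-dimensional inputs such as images.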

# Benefits to ecosystem

This talk will benefit the ecosystem by providing cross-functional knowledge, bringing together best practices from data scientists, software engineers and DevOps to tackle the challenge of machine learning monitoring at scale. During this talk we will shed light on best practices in the Python ecosystem that can be adopted for production machine learning, and we will provide a conceptual and practical hands-on deep dive that will allow the community both to tackle these issues and to further the discussion.

Type: Talk (30 mins); Python level: Intermediate; Domain level: Intermediate


Alejandro Saucedo

The Institute for Ethical AI & Machine Learning

Alejandro is the Chief Scientist at the Institute for Ethical AI & Machine Learning, where he leads the development of industry standards on machine learning explainability, adversarial robustness and differential privacy. Alejandro is also the Director of Machine Learning Engineering at Seldon Technologies, where he leads large scale projects implementing open source and enterprise infrastructure for Machine Learning Orchestration and Explainability. With over 10 years of software development experience, Alejandro has held technical leadership positions across hyper-growth scale-ups and has a strong track record building cross-functional teams of software engineers.

LinkedIn: https://linkedin.com/in/axsaucedo
Twitter: https://twitter.com/axsaucedo
Github: https://github.com/axsaucedo
Website: https://ethical.institute/