Reproducible and Deployable Data Science with Open-Source Python

Build reproducible and deployable data pipelines with Kedro, Airflow and Great Expectations

Lim Hoang

Data, Data Science, Development


Data scientists, data engineers and machine-learning engineers often have to work together to create data science code that scales. Data scientists typically prefer rapid iteration, which can cause friction with engineering colleagues who prioritise observability and reliability.

In this talk, we'll show you how to reach consensus using three open-source industry heavyweights: Kedro, Apache Airflow and Great Expectations. We will explain how to iterate rapidly while creating reproducible, maintainable and modular data science code with Kedro, orchestrate it using Apache Airflow with Astronomer, and ensure consistent data quality with Great Expectations.

Kedro is a Python framework for creating reproducible, maintainable and modular data science code. Apache Airflow is an extremely popular open-source workflow management platform. Workflows in Airflow are modelled and organised as DAGs, making it a suitable engine to orchestrate and execute a pipeline authored with Kedro. Finally, Great Expectations helps data teams eliminate pipeline debt through data testing, documentation and profiling.
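To make the division of labour concrete, here is a minimal sketch in plain Python of the three ideas the talk combines: a pipeline of pure-function nodes, a naive orchestrator that runs them in dependency order (the role a DAG engine like Airflow plays), and a simple data-quality gate (the role Great Expectations plays). All function and key names here are hypothetical illustrations, not the actual Kedro, Airflow or Great Expectations APIs.

```python
# Toy sketch only: names are hypothetical, not the real library APIs.

def clean(raw):
    # Node 1: drop records that are missing a "price" field.
    return [r for r in raw if r.get("price") is not None]

def total(cleaned):
    # Node 2: aggregate the cleaned records.
    return sum(r["price"] for r in cleaned)

# A pipeline as a DAG: each node declares the datasets it reads and writes.
pipeline = [
    {"func": clean, "inputs": ["raw"], "outputs": "cleaned"},
    {"func": total, "inputs": ["cleaned"], "outputs": "grand_total"},
]

def expect_no_nulls(records, field):
    # A toy "expectation": fail fast if any record lacks the field.
    assert all(r.get(field) is not None for r in records), f"null {field}"

def run(pipeline, catalog):
    # Naive orchestrator: repeatedly run any node whose inputs are
    # already in the catalog, i.e. execute in topological order.
    pending = list(pipeline)
    while pending:
        node = next(n for n in pending
                    if all(i in catalog for i in n["inputs"]))
        args = [catalog[i] for i in node["inputs"]]
        catalog[node["outputs"]] = node["func"](*args)
        pending.remove(node)
    return catalog

catalog = run(pipeline, {"raw": [{"price": 3}, {"price": None}, {"price": 4}]})
expect_no_nulls(catalog["cleaned"], "price")  # data-quality gate passes
print(catalog["grand_total"])  # 7
```

In the real stack, Kedro supplies the node and catalog abstractions, Airflow schedules and executes the resulting DAG, and Great Expectations replaces the hand-rolled assertion with declarative, documented expectation suites.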

Type: Talk (30 mins); Python level: Intermediate; Domain level: Beginner

Lim Hoang


I am a senior software engineer working with the Kedro team at QuantumBlack. I like working with all sorts of technologies and (programming) languages. In my free time, I try to do stand-up comedy and learn (human) languages. I'm @limdauto pretty much everywhere on the Internet.