
Continuous and on demand benchmarking


EuroSciPy 2022

We all know and love our carefully designed CI pipelines, which test our code and make sure that adding a feature or fixing a bug doesn't introduce a regression into the codebase. But we often don't give benchmarking the same treatment we give correctness: benchmarking tests are usually one-off scripts written to evaluate a specific change. In this talk, we will discuss strategies for catching performance regressions in Python projects using ASV (airspeed velocity).
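ASV discovers benchmarks by naming convention: in a benchmark file, methods prefixed with `time_` are timed and tracked across commits. A minimal sketch of such a file (the class name and workload below are illustrative assumptions, not taken from the talk):

```python
# benchmarks/benchmarks.py -- minimal ASV benchmark sketch.
# ASV times each `time_*` method; `setup` runs before timing starts,
# so the cost of building the data is excluded from the measurement.

class TimeSuite:
    def setup(self):
        # Illustrative workload: a list of 10,000 integers.
        self.data = list(range(10_000))

    def time_sum(self):
        sum(self.data)

    def time_sorted(self):
        sorted(self.data)
```

With an `asv.conf.json` in place, `asv run` executes the suite against a set of commits and stores results for later comparison.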

In this talk, we will discuss how we can use ASV (airspeed velocity) together with on-demand cloud providers to mimic the usual setup of dedicated servers for running benchmarking tests. Many projects currently run their benchmark suites on dedicated servers, which is rarely economical because the hardware sits idle most of the time. By provisioning resources only when they are needed, we can eliminate both the dedicated-hardware setup and the ongoing costs associated with it.

Outline

5 mins:
- Why benchmarking?
- The usual approach: running the benchmark suite on dedicated hardware.
- Using GitHub Actions reliably for "relative benchmark" testing (with a quick shout-out to the blog post by Quansight).

10 mins:
- How container technology lets us set up our tests and run them on demand on "dedicated" hardware, cutting costs and lowering the barrier to entry for projects that want a benchmarking suite.
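The "relative benchmark" idea is that noisy shared runners can still compare two commits run back-to-back on the same machine, even if absolute timings are unreliable. A hedged sketch of such a workflow, assuming the repository already contains an `asv.conf.json` and a benchmark suite (the file path, Python version, and branch names are illustrative):

```yaml
# .github/workflows/benchmarks.yml -- sketch of a relative-benchmark job.
name: benchmarks
on: pull_request

jobs:
  asv:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history, so the base commit can be benchmarked too
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install asv virtualenv
      - run: asv machine --yes    # record machine info non-interactively
      # Benchmark both commits on the same runner and report relative regressions.
      - run: asv continuous origin/${{ github.base_ref }} HEAD
```

`asv continuous` runs the suite on both revisions and prints the benchmarks whose ratio crosses its significance threshold, which is exactly the relative comparison this setup relies on.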

Speakers: Mridul Seth