Scaling scikit-learn: introducing new computational foundations
EuroSciPy 2022

scikit-learn is an open-source scientific library for machine learning in Python. In this talk, we will present the recent work carried out by the scikit-learn core developer team to improve its native performance.

scikit-learn is an open-source scientific library for machine learning in Python. Since its first release in 2010, the library has gained a lot of traction in education, research, and the wider community, and has set several standards for API design in ML software. Today, scikit-learn is one of the most widely used scientific libraries in the world for data analysis, providing reference implementations of many methods and algorithms to a user base of millions. With the renewed interest in machine-learning-based methods in recent years, other libraries providing efficient, highly optimised implementations (such as LightGBM and XGBoost for gradient-boosting methods) have emerged. These libraries have met with similar success and have made performance and computational efficiency top priorities.

In this talk, we will present the recent work carried out by the scikit-learn core developer team to improve the library's native performance. The talk will cover elements of the PyData ecosystem and the CPython interpreter, with an emphasis on their impact on performance. We will then examine computationally expensive patterns before presenting the technical choices behind the new foundational implementations, keeping the project's requirements in mind. Finally, we will take a quick look at future work and collaborations on hardware-specialised computational kernels.
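As an illustration (not taken from the talk itself) of the kind of computationally expensive pattern this work targets, the short sketch below times a pairwise-distance reduction through scikit-learn's public API; the dataset sizes are arbitrary, and scikit-learn and NumPy are assumed to be installed.

# Illustrative sketch only: times a pairwise-distance reduction, one example of
# a computationally expensive pattern in scikit-learn. Dataset sizes are arbitrary.
from time import perf_counter

import numpy as np
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.default_rng(0)
X = rng.random((10_000, 50))   # query points
Y = rng.random((20_000, 50))   # reference points

start = perf_counter()
# For each row of X, find the index of (and distance to) its closest row in Y.
indices, distances = pairwise_distances_argmin_min(X, Y)
elapsed = perf_counter() - start

print(f"argmin-min over {X.shape[0]} x {Y.shape[0]} pairs in {elapsed:.2f} s")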

Speakers: Julien Jerphanion