Live broadcast: https://www.youtube.com/watch?v=oBPNk5qN0L4
At H&M Group, we are increasingly adopting machine learning algorithms and rapidly developing successful use cases. One of these applications is dynamic resource allocation (memory and CPU), which uses data-driven analysis and ML to decrease infrastructure cost.
The objective of this talk is to show how one of H&M's use cases adopted an ML workflow built on Airflow, Kubernetes, and Docker, and how the provisioning problem can be solved with an ML approach.
At H&M we use Airflow and Kubernetes as the main components of our machine learning workflow. The growth of online shopping over the last two years has increased data volumes significantly, and many companies struggle with infrastructure cost when adopting Airflow and Kubernetes/Docker. Anyone interested can join for a high-level explanation of the solution H&M Group has adopted to address this.
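To make the idea concrete, here is a minimal sketch (not H&M's actual implementation) of how ML-predicted resource needs could feed into an Airflow task running on Kubernetes: a hypothetical `predict_resources` helper stands in for a model trained on historical usage, and its output is passed as the pod's resource requests instead of a fixed, over-provisioned default. The image name, helper function, and exact provider module path are assumptions for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from kubernetes.client import models as k8s


def predict_resources(task_id: str) -> dict:
    # Hypothetical placeholder: in practice this would query recent usage
    # metrics for the task and run an ML model that predicts requests
    # with a small safety margin.
    return {"memory": "2Gi", "cpu": "500m"}


with DAG(
    dag_id="dynamic_resource_allocation",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    predicted = predict_resources("feature_engineering")

    feature_engineering = KubernetesPodOperator(
        task_id="feature_engineering",
        name="feature-engineering",
        image="my-registry/feature-engineering:latest",  # placeholder image
        # Requests come from the ML prediction rather than a static default,
        # so the cluster only reserves what the task is expected to need.
        container_resources=k8s.V1ResourceRequirements(
            requests={"memory": predicted["memory"], "cpu": predicted["cpu"]},
            limits={"memory": "4Gi", "cpu": "1"},
        ),
    )
```

In this sketch the prediction runs at DAG parse time for simplicity; a production setup would more likely compute it at scheduling time (for example in a pre-execute hook) so each run reflects the latest usage data.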