Machine learning models have made it to the browser. Virtual backgrounds and background blurs are everywhere! Recent developments, including the TensorFlow WASM backend, smaller ML models, and pre-trained model repositories, have enabled these widely used virtual backgrounds and background blurs.
This talk will explore how a simple background blur works, how developers can code their own blur for a WebRTC call, and, most interestingly, what other ML/AI applications can be built using the same framework.
Machine learning models and inference are now run in the browser and have become very performant. Libraries and frameworks like MediaPipe and BodyPix make it easy to integrate ML-based experiences into a WebRTC call. We're still in the early days of discovering their possibilities in web applications.
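To give a flavor of what the talk covers, the core of a background blur can be sketched as a per-pixel blend: a segmentation model (such as MediaPipe Selfie Segmentation or BodyPix) produces a mask marking which pixels belong to the person, and the output frame mixes the sharp original with a blurred copy according to that mask. The function name and data layout below are illustrative only, not Daily's or MediaPipe's actual API:

```javascript
// Hypothetical compositing step behind a background blur.
// frame, blurred: RGBA pixel data (Uint8ClampedArray, as from canvas ImageData);
// mask: one value per pixel in [0, 1], where 1 means "person" (keep sharp).
function compositeBlur(frame, blurred, mask) {
  const out = new Uint8ClampedArray(frame.length);
  for (let p = 0; p < mask.length; p++) {
    const a = mask[p]; // 1 = sharp foreground, 0 = blurred background
    for (let c = 0; c < 4; c++) {
      const i = p * 4 + c;
      out[i] = Math.round(a * frame[i] + (1 - a) * blurred[i]);
    }
  }
  return out;
}
```

In a real pipeline, the blurred copy would come from a canvas filter or a WebGL shader, and the blend itself is usually done on the GPU rather than in a JavaScript loop.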
In this talk, we will walk through how Daily integrated MediaPipe into its WebRTC library. We will explore:
Hopefully, this leaves the audience with ideas for writing custom video processors that might enable the next generation of video experiences on WebRTC.
Speaker: Ravindhran Sankar