
CNI Automagic: Device discovery for semantic network attachment in Kubernetes
FOSDEM 2023

CNI plugins – we love them for getting our Kubernetes networking untangled. Sometimes, though, we want to manage them in a more cloud-native fashion, using Kubernetes itself. Doug is here to guide you on a tour of a proof-of-concept CNI plugin, one that automagically probes your nodes for devices and allows you (or your Kubernetes controllers! Or your AI/ML!?) to add semantics to your Kubernetes network attachments, helping you answer the question “Which network am I really attaching my pods to?” by letting you express how to map your pod networking to devices, and to the networks themselves.

Today, when you use the reference CNI plugins, such as the macvlan and ipvlan CNI plugins, you’re given a kind of magical power: using native Linux capabilities to network your workloads in Kubernetes. This is low level and powerful. We all know that networking isn’t simple in the real world: there’s lots of existing infrastructure and the grim realities of data centers to contend with. Flexibility is clutch here, and low-level solutions help us build towards these realities. However, Kubernetes is meant for scale, and for expressing intent at a higher level. This approach is an example of how we can keep those low-level definitions of our network attachments while still meeting Kubernetes’ goal of operating at large scale, especially in non-uniform server environments.
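For a taste of what “native Linux capabilities” means here, the following is a rough sketch of what a macvlan-style plugin does under the hood, using the vishvananda/netlink Go library. The device names are illustrative, and a real CNI plugin would read them from its network configuration and move the new link into the pod’s network namespace:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// Roughly what the macvlan reference CNI plugin does under the hood:
// create a macvlan sub-interface on top of an existing host device.
// "eth0" and "macvlan0" are illustrative names only; a real plugin
// takes these from its CNI network configuration, then moves the new
// link into the pod's network namespace and assigns addressing.
func createMacvlan(parentName, ifName string) error {
	parent, err := netlink.LinkByName(parentName)
	if err != nil {
		return fmt.Errorf("failed to look up parent device %q: %w", parentName, err)
	}

	mv := &netlink.Macvlan{
		LinkAttrs: netlink.LinkAttrs{
			Name:        ifName,
			ParentIndex: parent.Attrs().Index,
		},
		Mode: netlink.MACVLAN_MODE_BRIDGE,
	}
	return netlink.LinkAdd(mv)
}

func main() {
	if err := createMacvlan("eth0", "macvlan0"); err != nil {
		fmt.Println("error:", err)
	}
}
```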

This talk will include a demo of a brand-new CNI plugin and auxiliary components which allow you to map devices to networks and associate your pods’ attachments with devices at a higher level, with more automagic, so you don’t have to babysit workload specifications at the lowest possible level and can take advantage of how Kubernetes helps you orchestrate these attachments.
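The abstract doesn’t spell out the PoC’s configuration format, so the following is a purely hypothetical Go sketch of the idea: declaring which discovered devices belong to which named network, so pods can attach by network name rather than by node-specific device names. Every type and field name here is invented for illustration, not taken from the PoC:

```go
package main

import "fmt"

// Hypothetical sketch only: the real PoC's configuration format is not
// described in this abstract. The idea is that an operator (or a
// controller) declares which discovered host devices belong to which
// semantically named network, and pods attach by network name instead
// of by node-specific device names.
type NetworkMapping struct {
	// Network is the semantic name pods attach to, e.g. "storage-fabric".
	Network string
	// DeviceSelector matches devices discovered on the node,
	// e.g. by driver, vendor ID, or interface name.
	DeviceSelector map[string]string
}

func main() {
	mappings := []NetworkMapping{
		{Network: "storage-fabric", DeviceSelector: map[string]string{"driver": "mlx5_core"}},
		{Network: "management", DeviceSelector: map[string]string{"interface": "eno1"}},
	}
	for _, m := range mappings {
		fmt.Printf("network %q -> devices matching %v\n", m.Network, m.DeviceSelector)
	}
}
```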

You’ll learn a lot about CNI! You’ll get take-home instructions for running this CNI plugin in your own lab, which will also give you a test bed for learning how to make your own CNI plugins. Even if you’re not a CNI plugin developer, it’ll help you better understand what your CNI plugins are doing and help you diagnose issues in your own Kubernetes deployment. We’ll also talk about device plugins for Kubernetes, and you’ll learn how device availability impacts workload scheduling.
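As a preview of that scheduling interaction, here’s a minimal, hedged Go sketch against the Kubernetes device plugin API (k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1): the kubelet consumes the plugin’s ListAndWatch stream and advertises the devices as node capacity, which is what lets the scheduler place pods only onto nodes with healthy devices. Registration with the kubelet and the Allocate RPC are omitted, and the device IDs are made up:

```go
package main

import (
	"fmt"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// discoveredDevices wraps raw device IDs in the device plugin API type,
// marking each one healthy. The kubelet turns this list into
// allocatable node resources that the scheduler can count against
// pod resource requests.
func discoveredDevices(ids []string) []*pluginapi.Device {
	devs := make([]*pluginapi.Device, 0, len(ids))
	for _, id := range ids {
		devs = append(devs, &pluginapi.Device{ID: id, Health: pluginapi.Healthy})
	}
	return devs
}

// advertiseDevices sends the current device list down a ListAndWatch
// stream; a real plugin re-sends whenever device health changes.
func advertiseDevices(stream pluginapi.DevicePlugin_ListAndWatchServer, devs []*pluginapi.Device) error {
	return stream.Send(&pluginapi.ListAndWatchResponse{Devices: devs})
}

func main() {
	// Illustrative PCI addresses standing in for real discovered devices.
	for _, d := range discoveredDevices([]string{"0000:3b:00.0", "0000:3b:00.1"}) {
		fmt.Printf("advertising device %s (%s)\n", d.ID, d.Health)
	}
}
```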

While this talk encourages you to get into the nitty-gritty of CNI, Doug really wants to invite you to become a participant in bigger-picture efforts for the Kubernetes networking community, too. There is a nascent Kubernetes-native MultiNetworking effort, the Kubernetes Network Plumbing Working Group, and upcoming work on CNI 2.0. This talk and its PoC project represent some problems that we need to solve as a community. If we don’t represent the community and what we all want, we lose out to commercial efforts that will steamroll the community and leave our (already overtaxed!) sysadmins in the dirt. We all need common ground to work together upon, and it’s up to the community to set those standards and let the commercial efforts follow.

CNI is the “Container Networking Interface”: an API for setting up networking for containers, used by Kubernetes among others. We love it for its simplicity and for the fact that it’s common ground for networking in Kubernetes, and we already know container networking isn’t always easy. What you might not know is that CNI itself is “container orchestration engine agnostic.” For a 1.0, this limitation makes sense, and we need to respect the origins of this thinking: it allows some overarching constructs that are useful for both orchestration engines and container runtimes (like CRI-O and containerd). But we need a step forward in terms of Kubernetes, given how ubiquitous it has become, and continues to become, in the networking space.
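To make the shape of that API concrete: the runtime execs a plugin binary with a command (ADD, DEL, CHECK) in its environment and JSON network configuration on stdin. A minimal Go skeleton built on the upstream skel helper package might look like the following sketch; the actual network setup is elided, and the “about” string is a placeholder:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// NetConf is this plugin's slice of the JSON network configuration the
// runtime passes on stdin; the embedded types.NetConf covers the
// standard fields (cniVersion, name, type, ...).
type NetConf struct {
	types.NetConf
}

// cmdAdd handles the ADD command: parse the config, set up networking
// for the container (omitted here), and print a CNI result to stdout.
func cmdAdd(args *skel.CmdArgs) error {
	conf := &NetConf{}
	if err := json.Unmarshal(args.StdinData, conf); err != nil {
		return fmt.Errorf("failed to parse network config: %w", err)
	}

	// A real plugin would create interfaces inside args.Netns here.
	result := &current.Result{CNIVersion: conf.CNIVersion}
	return types.PrintResult(result, conf.CNIVersion)
}

// cmdDel tears down whatever ADD created; a no-op in this skeleton.
func cmdDel(args *skel.CmdArgs) error { return nil }

// cmdCheck verifies that an earlier ADD is still in place.
func cmdCheck(args *skel.CmdArgs) error { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "a toy CNI plugin skeleton")
}
```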

Doug believes that we need an evolution, or potentially a revolution, for CNI 2.0: one that gives us a layer for speaking to CNI using Kubernetes. If we don’t have this pathway, it encourages developers to ignore CNI and build their own Rube Goldberg machines, and sysadmins have already had to chase the rolling ball down the chute to light the match to drop the mousetrap enough as it is. We need standards, not contraptions!

Speakers: Douglas Smith