The Netmap framework provides a simple and efficient user-space API for direct access to Ethernet NICs and other fast software interfaces (e.g., VALE switches, pipes and monitors). Because of its flexibility, performance and ease of use, Netmap is an attractive solution for implementing high-speed, portable Virtual Network Functions. This talk shows how to write packet processing applications using the Netmap API and run them inside QEMU VMs, over passed-through Netmap interfaces. With Netmap, applications running in two VMs or containers can exchange up to 20-30 Mpps per core at minimum packet size.
The need for alternative mechanisms and APIs for network I/O has been recognized by several OS-bypass projects (DPDK, PF_RING), and stems from the performance limitations of the traditional socket API (and the associated OS implementation) in terms of maximum packet rate. Using the traditional socket API, a single processor core cannot send or receive more than 1-2 million packets per second (Mpps) at minimum packet size (60 bytes), even though modern NICs support 10-100 Mpps. These limitations are largely due to per-packet, size-independent costs: system calls, packet copies across the user/kernel boundary, VFS layer overheads, dynamic (de)allocation of packet metadata (e.g. sk_buff on Linux), NIC register accesses and interrupts. Moreover, moving networking to user space facilitates experimentation and improves portability. The bypass solutions overcome these limitations by pre-allocating packet buffers, mapping those buffers into the application address space, and allowing applications to send and receive multiple packets with a single operation (e.g. a system call or a NIC register access). They also use simple packet representation structures optimized for raw packet I/O rather than for a full-fledged protocol stack. Combined, these techniques allow user-space applications to send and receive tens of millions of packets per second, saturating the NIC capacity even with short packets. Exploring Netmap is a good introduction to these topics, which are common to all such frameworks. However, Netmap brings some additional benefits that are not found elsewhere: it does not force applications to resort to busy-polling, it protects devices from uncontrolled user-space access, and it introduces a common API that can also be used for fast VM networking and Inter-Process Communication. Netmap is available on both Linux and FreeBSD.
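As a rough sketch of what this model looks like in practice, the following minimal receiver uses the nm_open()/nm_nextpkt() helpers from net/netmap_user.h: a single poll() makes a whole batch of packets available in pre-mapped buffers, which are then consumed without further syscalls or allocations. The interface name "netmap:eth0" is only an example; error handling and packet processing are left trivial on purpose.

```c
/* Minimal netmap receive loop (illustrative sketch). */
#define NETMAP_WITH_LIBS          /* enable the nm_open()/nm_nextpkt() helpers */
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    /* Open the interface in netmap mode: packet buffers are preallocated
     * by the kernel and mapped into the application address space. */
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return 1;
    }

    for (;;) {
        struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };

        /* One syscall can make an entire batch of packets available. */
        if (poll(&pfd, 1, -1) < 0) {
            perror("poll");
            break;
        }

        /* Drain the RX rings without further syscalls or allocations. */
        struct nm_pkthdr hdr;
        const unsigned char *buf;
        while ((buf = nm_nextpkt(d, &hdr)) != NULL) {
            /* hdr.len bytes of raw Ethernet frame are available at buf. */
            printf("received %u bytes\n", hdr.len);
        }
    }

    nm_close(d);
    return 0;
}
```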
The Netmap framework has evolved significantly since its inception as a user-space packet I/O interface to NIC hardware in 2011. It is now a flexible network I/O tool that supports many backends (in addition to NICs) and virtualized environments, all accessible through the same API. The VALE programmable switch (part of Netmap) acts as a virtual switch for Virtual Machines (VMs) and physical NICs, supporting hundreds of virtual ports and over 20 Mpps per core between its ports. Netmap pipes are point-to-point virtual links that connect processes or VMs at over 40 Mpps, useful for service function chaining. Netmap has been integrated as a fast network backend into hypervisors such as QEMU, bhyve and VirtualBox. Accelerated network I/O is also possible for lightweight virtualization (containers) by means of native support for Linux veth devices (over 40 Mpps). Finally, a virtual pass-through device allows any Netmap interface (e.g. a VALE port, NIC or pipe endpoint) to be safely exposed inside a VM, enabling unprecedented packet rates (20-40 Mpps) between VMs.
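To illustrate how the same API covers these virtual backends, here is a hedged sketch of a sender attached to a VALE switch port; only the port name passed to nm_open() changes between a NIC, a VALE port or a pipe endpoint. The switch and port names ("vale0", "vm1") and the frame contents are made up for the example.

```c
/* Sending one frame through a VALE switch port (illustrative sketch).
 * The same code works with "netmap:eth0" (NIC) or "netmap:x{1" (pipe
 * master end, whose peer would open "netmap:x}1"). */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <sys/ioctl.h>
#include <string.h>

int main(void)
{
    /* Attach to port "vm1" of the VALE switch "vale0"; both are created
     * on the fly if they do not exist. Another process or a VM backend
     * can attach to e.g. "vale0:vm2" on the same switch. */
    struct nm_desc *d = nm_open("vale0:vm1", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    unsigned char frame[60] = {0};
    memset(frame, 0xff, 6);           /* broadcast destination MAC */

    /* Queue the frame on a TX ring: nm_inject() copies it into a
     * pre-mapped netmap buffer, with no per-packet allocation. */
    nm_inject(d, frame, sizeof(frame));

    /* Ask the kernel to flush the pending TX slots. */
    ioctl(NETMAP_FD(d), NIOCTXSYNC, NULL);

    nm_close(d);
    return 0;
}
```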
These Netmap features constitute the datapath building blocks for Network Function Virtualization (NFV) deployments. We are not aware of other technologies that allow applications running in two VMs or containers to exchange up to 20-30 Mpps at minimum packet size. With such powerful I/O capabilities, we believe Netmap is a prime candidate for implementing NFV applications such as load balancers, Intrusion Detection Systems and firewalls.
Speakers: Vincenzo Maffione