Comparison of Networking Solutions for Kubernetes

Kubernetes requires that each container in a cluster has a unique, routable IP. Kubernetes doesn’t assign IPs itself, leaving the task to third-party solutions.

In this study, our goal was to find the solution with the lowest latency, highest throughput, and lowest setup cost. Since our load is latency-sensitive, we measured high-percentile latencies at relatively high network utilization. We focused in particular on performance at 30–50% of the maximum load, because we think this range best represents the most common use cases of a non-overloaded system.

Competitors

Docker with --net=host

This was our reference setup; all other competitors are compared against it. The --net=host option means that containers inherit the IPs of their host machines, i.e. no network containerization is involved. A priori, no network containerization performs better than any network containerization, which is why we used this setup as a reference.

Flannel

Flannel is a virtual network solution maintained by the CoreOS project. It's a well-tested, production-ready solution, so it has the lowest setup cost. When you add a machine with flannel to the cluster, flannel does three things:

1. Allocates a subnet for the new machine using etcd
2. Creates a virtual bridge interface on the machine (the docker0 bridge)
3. Sets up a packet forwarding backend:
   - aws-vpc: registers the machine's subnet in the Amazon VPC route table. The number of records in this table is limited to 50, i.e. you can't have more than 50 machines in a cluster if you use flannel with aws-vpc. This backend also works only on Amazon AWS.
   - host-gw: creates IP routes to subnets via remote machine IPs. Requires direct layer 2 connectivity between hosts running flannel.
   - vxlan: creates a virtual VXLAN interface.

Because flannel uses a bridge interface to forward packets, each packet goes through two network stacks when travelling from one container to another.

IPvlan

IPvlan is a driver in the Linux kernel that lets you create virtual interfaces with unique IPs without having to use a bridge interface. To assign an IP to a container with IPvlan, you have to:

1. Create a container without a network interface
2. Create an ipvlan interface in the default network namespace
3. Move the interface to the container's network namespace

IPvlan is a relatively new solution, so there are no ready-to-use tools to automate this process. This makes it difficult to deploy IPvlan across many machines and containers, i.e. the setup cost is high.
However, IPvlan doesn’t require a bridge interface and forwards packets directly from the NIC to the virtual interface, so we expected it to perform better than flannel.
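The manual IPvlan assignment described above can be sketched with iproute2 commands. This is a minimal sketch, not the study's exact procedure: a plain network namespace stands in for the container's namespace, and the namespace name, parent NIC (eth0), interface name, and address are illustrative assumptions.

```shell
# Run as root. All names and addresses below are illustrative.
ip netns add ctr1                                # stand-in for the container's netns
ip link add ipvl0 link eth0 type ipvlan mode l2  # ipvlan interface on parent NIC eth0
ip link set ipvl0 netns ctr1                     # move it into the container's namespace
ip netns exec ctr1 ip addr add 10.10.1.2/24 dev ipvl0
ip netns exec ctr1 ip link set ipvl0 up
```

Because the ipvlan interface shares the parent NIC's MAC address and bypasses any bridge, packets flow directly between the NIC and the virtual interface.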

Load Testing Scenario

For each competitor we ran these steps:

1. Set up networking on two physical machines
2. Run tcpkali in a container on one machine, letting it send requests at a constant rate
3. Run Nginx in a container on the other machine, letting it respond with a fixed-size file
4. Capture system metrics and tcpkali results

We ran the benchmark with the request rate varying from 50,000 to 450,000 requests per second (RPS). On each request, Nginx responded with a static file of a fixed size: 350 B (100 B content, 250 B headers) or 4 KB.
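A tcpkali invocation for this kind of scenario might look like the following sketch. The host name, file path, connection count, and rate are illustrative assumptions, not the exact parameters used in the study; consult the tcpkali documentation for the precise semantics of the rate option.

```shell
# Illustrative sketch: drive a fixed request rate at the Nginx box for 60 s.
#   -c  concurrent connections
#   -r  message (request) rate limit
#   -T  test duration
#   -e  unescape \r\n sequences in the message body
tcpkali -c 100 -T 60s -r 50000 \
  -e -m 'GET /100b.txt HTTP/1.1\r\nHost: nginx-host\r\n\r\n' \
  nginx-host:80
```

tcpkali reports latency percentiles and throughput at the end of the run, which is what makes it convenient for capturing the high-percentile latencies this study targets.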