kube-proxy load balance test (en)

1. What is kube-proxy?

Kubernetes is an orchestration tool for containers. kube-proxy is a Kubernetes component that runs on every Kubernetes minion (node). It acts as the network proxy and load balancer for the containers there. For load balancing, it uses the iptables statistic extension module in random mode with probability settings.

In this test, I want to see how evenly kube-proxy balances the traffic load.

This is the test environment: one master and 4 minions.

  • kubemaster: 192.168.24.31
  • kubeminion1: 192.168.24.41
  • kubeminion2: 192.168.24.42
  • kubeminion3: 10.0.24.43
  • kubeminion4: 10.0.24.44

I'll create an nginx deployment with 4 replicas so that one pod runs on each minion.

First, create the nginx deployment YAML manifest file (dep_nginx.yml).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
      volumes:
      - name: documentroot
        hostPath:
          path: /srv/nginx

In the container spec, I mount the /srv/nginx hostPath at /usr/share/nginx/html inside the container, because each nginx server has to return its own HTML page so I can tell which pod served a request.
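
For readers who think in plain Docker terms, the mount is roughly equivalent to the following docker invocation (illustrative only; the test itself runs everything through Kubernetes):

$ docker run -d -p 80:80 -v /srv/nginx:/usr/share/nginx/html nginx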

Create the /srv/nginx directory on each minion.

$ sudo mkdir /srv/nginx

Now I put a simple HTML document at /srv/nginx/index.html on each minion.

$ echo "<h1>kubeminion1's nginx</h1>" | sudo tee /srv/nginx/index.html

Use "kubeminion1's nginx" on kubeminion1, "kubeminion2's nginx" on kubeminion2, and so on.
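
If you have ssh access to every minion, a small loop avoids doing this by hand four times (a sketch, assuming the hostnames kubeminion1 through kubeminion4 resolve and sudo doesn't prompt for a password):

for i in 1 2 3 4; do
  ssh kubeminion$i "sudo mkdir -p /srv/nginx && \
    echo \"<h1>kubeminion${i}'s nginx</h1>\" | sudo tee /srv/nginx/index.html"
done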

No pods are running yet, so I run kubectl to create the nginx pods.

$ kubectl create -f dep_nginx.yml

It takes some time for all 4 replicas to come up. Let's check that all 4 pods are running.

$ kubectl get po -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-1024743661-5rh83   1/1       Running   0          1d        10.100.84.4   192.168.24.41
nginx-1024743661-bhcgs   1/1       Running   0          1d        10.100.27.2   192.168.24.42
nginx-1024743661-gr311   1/1       Running   0          1d        10.100.79.3   10.0.24.44
nginx-1024743661-xhuvs   1/1       Running   0          1d        10.100.69.4   10.0.24.43

Good! The 4 pods are spread across the minions, one per node.

One more thing to do: put a service in front of this deployment so that external clients can reach the nginx web containers.

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  externalIPs: ["192.168.24.41"]
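
Create the service with kubectl just like the deployment (svc_nginx.yml is my name for the manifest file; the original doesn't give one):

$ kubectl create -f svc_nginx.yml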

The externalIPs setting exposes the nginx web port on 192.168.24.41, which is kubeminion1.

Let's check that the service is up.

$ kubectl get svc
NAME            CLUSTER-IP       EXTERNAL-IP     PORT(S)             AGE
nginx-svc       10.100.216.9     192.168.24.41   80/TCP              1d

EXTERNAL-IP is 192.168.24.41. Good~

Check that the iptables rules are set up correctly on each minion.

$ sudo iptables -S -t nat | grep statistic
-A KUBE-SVC-R2VK7O5AFVLRAXSH -m comment --comment "default/nginx-svc:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-GQOPG5ZI7EFB76CL
-A KUBE-SVC-R2VK7O5AFVLRAXSH -m comment --comment "default/nginx-svc:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-NQ634KA75OUBVO7B
-A KUBE-SVC-R2VK7O5AFVLRAXSH -m comment --comment "default/nginx-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TIWXH2ANNGANIYGQ

Good~ The rules use the statistic module in random mode with probability settings, as expected. Note there are only three statistic rules for four pods: the fourth endpoint gets an unconditional jump at the end of the chain, which the grep doesn't match.
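
Here is the arithmetic behind those probabilities (my own working; iptables evaluates the rules in order, so each probability applies only to the traffic that earlier rules didn't take):

endpoint 1: 0.25                        -> 25% of all traffic
endpoint 2: (1 - 0.25) * 0.33333        -> 25%
endpoint 3: (1 - 0.25 - 0.25) * 0.5     -> 25%
endpoint 4: unconditional fall-through  -> 25%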

Everything is set up all right.
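
As a last check before the real test, hit the external IP once with curl (which minion answers is random, so your output will vary):

$ curl -s http://192.168.24.41
<h1>kubeminion2's nginx</h1>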

2. Test

To test how evenly kube-proxy balances the load, I use very basic tools: curl and a little bash script. I know there are better tools, but curl is enough for this test.

This is the script (kube_proxy_lb_test.sh):

#!/bin/bash

# Per-minion response counters
m1=0
m2=0
m3=0
m4=0

for ((c=1; c<=100; c++))
do
  # Fetch the page through the service's external IP
  RES=$(curl -s http://192.168.24.41)

  # Count which minion's page came back
  case "$RES" in
    *kubeminion1*)
      m1=$((m1+1))
      ;;
    *kubeminion2*)
      m2=$((m2+1))
      ;;
    *kubeminion3*)
      m3=$((m3+1))
      ;;
    *kubeminion4*)
      m4=$((m4+1))
      ;;
  esac
done

echo $m1:$m2:$m3:$m4

As you can see, this script is very simple.

  1. Initialize the counters m1, m2, m3, m4.
  2. Run curl against the external IP 192.168.24.41 and save the HTML output in RES.
  3. Increase the counter that matches the output.
  4. Repeat steps 2 and 3 100 times.
  5. Print each counter at the end.
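
Running it looks like this (the output shown is the first run from the results below):

$ chmod +x kube_proxy_lb_test.sh
$ ./kube_proxy_lb_test.sh
27:27:25:21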

3. Result

I expected 25 responses for each nginx pod, but the result betrayed my expectation. Here it is.

test       run   minion1   minion2   minion3   minion4   deviation
100 reqs   1     27        27        25        21        22%
           2     15        34        22        29        56%
           3     20        24        26        30        33%
           4     26        22        27        25        19%
           5     27        25        27        21        22%
           SUM   115       132       127       126       13%

The deviation is too big: up to 56%. (Deviation here is (max - min) / max across the four counters; you can check it against each row.) That's not good. But look at the SUM row: over all 500 requests, the deviation drops to 13%. That suggests there were simply not enough samples per run.

So I increased the request count from 100 to 1200 by raising the loop bound in the script.

Now see what happened.

test       run   minion1   minion2   minion3   minion4   deviation
1200 reqs  1     325       296       278       301       14%
           2     293       289       300       318       9%
           3     298       292       289       321       10%
           4     324       283       280       313       14%
           5     296       276       318       310       13%
           SUM   1536      1436      1465      1563      8%

As you can see, the deviation is a lot better: at most 14%.

Increase the request count to 12000. Now what do you expect?

test        run   minion1   minion2   minion3   minion4   deviation
12000 reqs  1     3003      2977      3019      3001      1%
            2     3067      2894      3025      3014      6%
            3     3025      3000      2941      3034      3%
            SUM   9095      8871      8985      9049      2%

We see a lot better load balancing.

I think that as we gather more samples, the deviation will converge toward 0%.
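
A rough back-of-the-envelope supports this (my own estimate, not part of the original test): each counter is approximately a Binomial(n, 1/4) random variable, so its standard deviation sqrt(n * 0.25 * 0.75) shrinks relative to the mean n/4 as n grows:

for n in 100 1200 12000; do
  # sigma of a Binomial(n, 1/4) counter, and its size relative to the mean n/4
  echo "$n" | awk '{ s = sqrt($1*0.25*0.75);
                     printf "n=%d: sigma=%.1f (%.1f%% of mean)\n", $1, s, 100*s/($1/4) }'
done

This prints roughly 17% for n=100, 5% for n=1200, and 1.6% for n=12000, which tracks the shrinking deviations in the tables above.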

References

  • Kubernetes (https://kubernetes.io/)
  • iptables statistic module (man iptables-extensions)