Experience in adding a HAProxy for NodePort


Guangya Liu

unread,
Mar 8, 2017, 5:08:51 AM3/8/17
to Kubernetes user discussion and Q&A
Hi,

I was doing some performance testing of NodePort and HAProxy. Basically, I want to know how much benefit I can get by adding HAProxy in front of NodePort services.

The test that I did is as follows:

1) Four nodes: one master and three workers, where the worker nodes also act as proxy nodes.
2) Create a Service of type NodePort and a Deployment of nginx with three replicas.

```
apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - image: gyliu/nginxv1:1.0
        imagePullPolicy: IfNotPresent
        name: my-nginx
        ports:
        - containerPort: 80
          protocol: TCP
```
3) Wait until the Service and Deployment are ready.

```
[root@bd002 ~]# kubectl get pods -owide
NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
my-nginx-1427292677-7ctsn   1/1       Running   0          55m       10.1.170.138   9.111.253.115
my-nginx-1427292677-rlnkp   1/1       Running   5          55m       10.1.19.11     9.111.253.114
my-nginx-1427292677-xmrpv   1/1       Running   0          55m       10.1.37.205    9.111.253.118
[root@bd002 ~]# kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.0.0.1     <none>        443/TCP        3d
my-nginx     10.0.0.168   <nodes>       80:30069/TCP   55m
```

4) Set up HAProxy on the master node, fronting the three proxy nodes.

```
listen ingress
  mode tcp
  balance         roundrobin
  timeout client  3h
  timeout server  3h
  option          clitcpka
  server node02 bd003:30069 check inter 5s rise 2 fall 3
  server node03 bd004:30069 check inter 5s rise 2 fall 3
  server node04 bd005:30069 check inter 5s rise 2 fall 3
```
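Note that the snippet above shows only the `listen` section and, as written, has no bind address. Assuming HAProxy should accept traffic on the same port the `ab` test targets (30069), a minimal complete config might look like the following; the `global`/`defaults` values are placeholders, not from the original post:

```
global
  maxconn 50000

defaults
  mode    tcp
  timeout connect 5s

listen ingress
  bind *:30069
  mode tcp
  balance         roundrobin
  timeout client  3h
  timeout server  3h
  option          clitcpka
  server node02 bd003:30069 check inter 5s rise 2 fall 3
  server node03 bd004:30069 check inter 5s rise 2 fall 3
  server node04 bd005:30069 check inter 5s rise 2 fall 3
```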

5) Compare the performance of HAProxy and direct NodePort access using `ab`.

HAProxy:

```
[root@bd002 ~]# ab -r -c 1000 -n 50000 http://localhost:30069/   # localhost is the master node, where HAProxy is running
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests


Server Software:        nginx/1.11.10
Server Hostname:        localhost
Server Port:            30069

Document Path:          /
Document Length:        6459 bytes

Concurrency Level:      1000
Time taken for tests:   4.484 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Total transferred:      334750000 bytes
HTML transferred:       322950000 bytes
Requests per second:    11151.78 [#/sec] (mean)
Time per request:       89.672 [ms] (mean)
Time per request:       0.090 [ms] (mean, across all concurrent requests)
Transfer rate:          72911.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   41 173.3     10    2554
Processing:     3   31  50.7     21    2030
Waiting:        0   22  34.3     16    2015
Total:          8   73 181.1     35    3031

Percentage of the requests served within a certain time (ms)
  50%     35
  66%     40
  75%     43
  80%     46
  90%     56
  95%    239
  98%   1035
  99%   1045
 100%   3031 (longest request)
```

NodePort:
```
[root@bd002 nodeport-ingress]# ab -r -c 1000 -n 50000 http://9.111.253.118:30069/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 9.111.253.118 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests


Server Software:        nginx/1.11.10
Server Hostname:        9.111.253.118
Server Port:            30069

Document Path:          /
Document Length:        6459 bytes

Concurrency Level:      1000
Time taken for tests:   4.946 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Total transferred:      334750000 bytes
HTML transferred:       322950000 bytes
Requests per second:    10109.01 [#/sec] (mean)
Time per request:       98.922 [ms] (mean)
Time per request:       0.099 [ms] (mean, across all concurrent requests)
Transfer rate:          66093.55 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   57 240.4     10    3030
Processing:     0   27  54.0     16    2582
Waiting:        0   23  52.5     12    2577
Total:          0   84 249.4     28    3266

Percentage of the requests served within a certain time (ms)
  50%     28
  66%     36
  75%     42
  80%     46
  90%     65
  95%    272
  98%   1046
  99%   1062
 100%   3266 (longest request)
```

The only difference between the two access modes is that HAProxy distributes the requests across all three proxy nodes, and each proxy node then distributes its share of the requests to the pods via iptables. With direct NodePort access, a single proxy node handles all of the requests and distributes them to the pods.
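As an aside on that iptables distribution: in iptables mode, kube-proxy installs one DNAT rule per endpoint, each (except the last) guarded by a `statistic mode random` match; rule i fires with probability 1/(N-i+1), so each of the N pods ends up with roughly 1/N of the traffic. A small sketch of the probabilities for the three replicas in this test:

```shell
# kube-proxy (iptables mode) load-balances a Service over N endpoints with a
# chain of rules; rule i uses "statistic mode random" with probability
# 1/(N-i+1), and the last rule matches unconditionally, giving each pod ~1/N.
awk 'BEGIN {
  n = 3                                   # three nginx replicas in this test
  for (i = 1; i <= n; i++)
    printf "rule %d -> probability %.5f\n", i, 1 / (n - i + 1)
}'
# prints:
# rule 1 -> probability 0.33333
# rule 2 -> probability 0.50000
# rule 3 -> probability 1.00000
```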

I did several rounds of tests and comparisons, but HAProxy does not seem to help performance much. My understanding is that the pods themselves are the bottleneck: once all of the pods are saturated by the request load, adding HAProxy in front cannot improve things much, because the number of pods is fixed.
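For what it's worth, the gap between the two runs above is about 10% in requests per second; a quick check, with the numbers copied from the `ab` output:

```shell
# Compare requests/sec from the two ab runs above.
haproxy_rps=11151.78
nodeport_rps=10109.01
awk -v a="$haproxy_rps" -v b="$nodeport_rps" \
    'BEGIN { printf "HAProxy ahead by %.1f%% in requests/sec\n", (a - b) / b * 100 }'
# prints: HAProxy ahead by 10.3% in requests/sec
```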

Can anyone share some experience in using HAProxy with NodePort?

Thanks,

Guangya 

Matthias Rampke

unread,
Mar 8, 2017, 5:27:06 AM3/8/17
to Kubernetes user discussion and Q&A
HAProxy adds a hop, so at least in the low-traffic case it won't make things faster.

We essentially use this setup, with auto-generated HAProxy configurations. There are two benefits to this: the network load (which is considerable in our case) is spread over more Kubernetes nodes, and HAProxy handles node downtime automatically. If you were to use the NodePort directly, how would removed / decommissioned / rebooting nodes be handled?
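For reference, a minimal sketch of such auto-generation of the `server` lines. In a real setup the node list would come from the API server (e.g. `kubectl get nodes`); the hard-coded names and port below are just the ones from this thread:

```shell
#!/bin/sh
# Regenerate the HAProxy "server" lines for the NodePort backends.
# In practice the node list would come from e.g.
#   kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
NODEPORT=30069
NODES="bd003 bd004 bd005"
i=2
for node in $NODES; do
  printf '  server node%02d %s:%d check inter 5s rise 2 fall 3\n' "$i" "$node" "$NODEPORT"
  i=$((i + 1))
done
```

Rerunning this on a node add/remove event and reloading HAProxy keeps the backend list in sync with the cluster.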

/MR

--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-use...@googlegroups.com.
To post to this group, send email to kubernet...@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.

Guangya Liu

unread,
Mar 8, 2017, 7:34:11 AM3/8/17
to kubernet...@googlegroups.com
Thanks Matthias!

Yes, using auto-generated HAProxy configurations would be an ideal solution. The problem is that I wasn't sure about the overhead of the extra hop that HAProxy adds.

The reason I'm asking is that I want to get the best performance when accessing some services; this was only for test purposes.

But based on my tests, the HAProxy hop does not seem to impact performance much, so I'm planning to add it to the setup; it can definitely help in cases where nodes are removed or rebooted.


Rodrigo Campos

unread,
Mar 8, 2017, 7:47:44 AM3/8/17
to kubernet...@googlegroups.com
I really doubt you will notice any performance degradation. Several sites use it in front of pretty traffic-intensive services. I think it's worth testing :-)