Hi,
I was doing some performance tests of NodePort and HAProxy. Basically, I want to know how much benefit I can get by putting an HAProxy in front of NodePort services.
The test I ran is as follows:
1) Four nodes: one master and three workers, where the worker nodes also act as proxy nodes.
2) Create a NodePort service and an nginx deployment with three replicas:
```
apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - image: gyliu/nginxv1:1.0
        imagePullPolicy: IfNotPresent
        name: my-nginx
        ports:
        - containerPort: 80
          protocol: TCP
```
3) Wait until the service and the deployment are ready:
```
[root@bd002 ~]# kubectl get pods -owide
NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
my-nginx-1427292677-7ctsn   1/1       Running   0          55m       10.1.170.138   9.111.253.115
my-nginx-1427292677-rlnkp   1/1       Running   5          55m       10.1.19.11     9.111.253.114
my-nginx-1427292677-xmrpv   1/1       Running   0          55m       10.1.37.205    9.111.253.118
[root@bd002 ~]# kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.0.0.1     <none>        443/TCP        3d
my-nginx     10.0.0.168   <nodes>       80:30069/TCP   55m
```
4) Set up HAProxy on the master node, balancing across the three proxy nodes:
```
listen ingress
    bind *:30069    # assumed: the config as posted has no bind line, but `ab` below hits port 30069 on this node
    mode tcp
    balance roundrobin
    timeout client 3h
    timeout server 3h
    option clitcpka
    server node02 bd003:30069 check inter 5s rise 2 fall 3
    server node03 bd004:30069 check inter 5s rise 2 fall 3
    server node04 bd005:30069 check inter 5s rise 2 fall 3
```
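For clarity on what `balance roundrobin` does here: HAProxy simply hands successive connections to the listed backends in turn. A minimal Python sketch of the idea, using the backend names from the config above (an illustration only, not HAProxy's implementation):

```python
from itertools import cycle

# Illustration only (not HAProxy's implementation): "balance roundrobin"
# hands successive connections to the listed backends in turn.
backends = ["bd003:30069", "bd004:30069", "bd005:30069"]
rr = cycle(backends)

# With six incoming connections, each backend is picked exactly twice.
assignments = [next(rr) for _ in range(6)]
for i, backend in enumerate(assignments, 1):
    print(f"connection {i} -> {backend}")
```

Note that each node's kube-proxy then load-balances again across the pods, so a single pod can still receive traffic from all three proxy nodes.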
5) Test the performance of HAProxy versus direct NodePort access with `ab`.
HAProxy (`localhost` is the node where HAProxy is running):
```
[root@bd002 ~]# ab -r -c 1000 -n 50000 http://localhost:30069/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests
Server Software: nginx/1.11.10
Server Hostname: localhost
Server Port: 30069
Document Path: /
Document Length: 6459 bytes
Concurrency Level: 1000
Time taken for tests: 4.484 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 334750000 bytes
HTML transferred: 322950000 bytes
Requests per second: 11151.78 [#/sec] (mean)
Time per request: 89.672 [ms] (mean)
Time per request: 0.090 [ms] (mean, across all concurrent requests)
Transfer rate: 72911.28 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   41  173.3     10    2554
Processing:     3   31   50.7     21    2030
Waiting:        0   22   34.3     16    2015
Total:          8   73  181.1     35    3031

Percentage of the requests served within a certain time (ms)
  50%     35
  66%     40
  75%     43
  80%     46
  90%     56
  95%    239
  98%   1035
  99%   1045
 100%   3031 (longest request)
```
NodePort (same `ab` options, run directly against node 9.111.253.118):
```
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Benchmarking 9.111.253.118 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests
Server Software: nginx/1.11.10
Server Hostname: 9.111.253.118
Server Port: 30069
Document Path: /
Document Length: 6459 bytes
Concurrency Level: 1000
Time taken for tests: 4.946 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 334750000 bytes
HTML transferred: 322950000 bytes
Requests per second: 10109.01 [#/sec] (mean)
Time per request: 98.922 [ms] (mean)
Time per request: 0.099 [ms] (mean, across all concurrent requests)
Transfer rate: 66093.55 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   57  240.4     10    3030
Processing:     0   27   54.0     16    2582
Waiting:        0   23   52.5     12    2577
Total:          0   84  249.4     28    3266

Percentage of the requests served within a certain time (ms)
  50%     28
  66%     36
  75%     42
  80%     46
  90%     65
  95%    272
  98%   1046
  99%   1062
 100%   3266 (longest request)
```
The only difference between the two access modes is that HAProxy spreads the requests across all three proxy nodes, each of which then distributes them to the pods via iptables. With direct NodePort access, all requests are handled by a single proxy node, which distributes them to the pods on its own.
I ran several rounds of tests and comparisons, but HAProxy does not seem to help performance much. My understanding is that the pods are the bottleneck: if all pods are already saturated by the load, then even with HAProxy in front, throughput cannot improve much because the number of pods is fixed.
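For reference, a quick back-of-the-envelope comparison of the two runs above (numbers copied from the `ab` outputs; nothing here is newly measured):

```python
# Deltas between the HAProxy and direct-NodePort ab runs above.
haproxy_rps, nodeport_rps = 11151.78, 10109.01  # Requests per second (mean)
haproxy_ms, nodeport_ms = 89.672, 98.922        # Time per request (mean, ms)

throughput_gain = (haproxy_rps / nodeport_rps - 1) * 100
latency_drop = (1 - haproxy_ms / nodeport_ms) * 100
print(f"HAProxy vs direct NodePort: +{throughput_gain:.1f}% throughput, "
      f"-{latency_drop:.1f}% mean latency")
```

Both deltas come out around 10%, which seems small enough to be within run-to-run noise at this concurrency level.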
Can anyone share some experience in using HAProxy with NodePort?
Thanks,
Guangya