Skupper with custom DNS name in the claims token


Mike Cruzz

Jul 24, 2025, 7:11:38 AM
to Skupper
Just came across this great product; it's quite a different approach, and more network focused than other competitors.

I had a question regarding the claims URL. At the moment, when I run skupper init, the claim has the NLB DNS name in it.

Is there a way to have a private hosted zone name instead so we can redirect it through a proxy when it needs to go from site A to site B?


Thanks for any info.

M

Mike Cruzz

Jul 24, 2025, 2:32:28 PM
to Skupper
I've just been using version 2 and have got it up. In the 1.92 version, skupper network status and skupper service status -v were quite handy for seeing a map of the network on the CLI. Do you think this functionality, or better, will be in version 2?

Thanks again.

Fernando Giorgetti

Jul 26, 2025, 3:34:02 PM
to Mike Cruzz, Skupper
Is there a way to have a private hosted zone name instead so we can redirect it through a proxy when it needs to go from site A to site B?

At present, if you want to provide a custom host, you can use the --ingress-host flag during skupper init.
You will also need to provide a different ingress type (as load-balancer implies using the assigned IP).
You can try nodeport or nginx-ingress-v1, for example.
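For example, something like this should work (the hostname below is just a placeholder for a record in your private hosted zone):

skupper init --ingress nodeport --ingress-host skupper-site-a.internal.example.com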

do you think this functionality or better will be in version 2?

In V2 you can get similar information by looking at the .status.network field of your Site.
You can try this: kubectl get site my-site -o json | jq .status.network
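If jq is not handy, kubectl's jsonpath output gives you the same field:

kubectl get site my-site -o jsonpath='{.status.network}'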




--

Fernando Giorgetti

Red Hat

fgio...@redhat.com   


Mike Cruzz

Jul 26, 2025, 7:58:19 PM
to Skupper
Thank you, Fernando, for your response. So basically, the use case I was testing was using it with AWS PrivateLink.

What happens is that I have cluster 1 connected to cluster 2 via AWS PrivateLink. The problem I faced here is that, in order to get access to the site in cluster 2, the site that generates the token puts the NLB DNS name in the token.

But my PrivateLink endpoints are in cluster 1's VPC... so they need to use the local VPC endpoint DNS name to get to the NLB in account 2.

The way I got around this was in cluster 1's CoreDNS: I added a rewrite from the cluster 2 NLB DNS name to the VPC 1 endpoint, and I was able to establish the site link.
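For reference, the change was basically a rewrite rule added to the Corefile in the coredns ConfigMap of cluster 1, roughly along these lines (both hostnames below are placeholders, not the real ones):

# Cluster 1: rewrite the cluster 2 NLB name to the local VPC endpoint name,
# i.e. a Corefile line like:
#   rewrite name my-nlb.elb.us-west-2.amazonaws.com vpce-xxxx.vpce-svc-xxxx.us-west-2.vpce.amazonaws.com
kubectl -n kube-system edit configmap coredns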

But it would be good if, in the V2 version, there was a way to define in the YAML how custom DNS names should be used to reach remote sites.
The reason is that I can only create the VPC PrivateLink endpoints after I know the NLB DNS name, so it's a chicken-and-egg situation where I won't know which DNS name will be generated. But if, for every site, I could just set an attribute or annotation in the YAML that says "use this DNS name when issuing tokens", then after the sites are set up I could manage everything in Route 53, create an alias in the private hosted zone pointing to the VPC endpoint, and seamlessly define all the sites I need to connect to from one YAML file inside the site definition.
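Purely to illustrate what I mean, something like the following (the annotation key here is made up and does not exist today):

# hypothetical knob: tell the site which DNS name to put into issued tokens
kubectl annotate site my-site skupper.io/token-host=skupper-site-b.private.example.com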

The other problem I was looking to solve was finding a way for cluster 1 to create a hub within the cluster. Then all the local namespaces would create their own namespace Skupper routers, and these would connect to the cluster hub.

Same thing on the cluster 2 side.

Then I could just connect the two hubs together and all namespaces would be able to see each other.

In V2, I got it working by building a leaf-spine setup, so I have cluster 1 as a spine and clusters 2 and 3 as leaves.

Then all namespaces connect to the spines and they can see each other. I went back to the EVPN model as inspiration for connecting all the namespaces instead of a full mesh.
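In case it helps anyone, the wiring was roughly along these lines with the V2 CLI (file names are placeholders and I'm quoting the commands from memory):

# on the spine site
skupper token issue spine-token.yaml

# on each leaf namespace, against that namespace's own site
skupper token redeem spine-token.yaml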

However, there is a network-hop hit: I was doing iperf tests and unfortunately I don't get good performance.


kubectl logs iperf3-max-performance-test-kkgzd -n titan -f
=== MAXIMUM PERFORMANCE TEST SUITE ===
Hardware: AWS t4g.xlarge (4 vCPU, 16GB, Up to 5Gbps burst)
Target: iperf3-server (capella namespace, cluster 3)
Source: titan namespace, cluster 1
=========================================
TEST 1: Baseline TCP single stream (60s)
Connecting to host iperf3-server, port 5201
[  5] local 10.0.1.212 port 36918 connected to 172.20.59.177 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  1.43 GBytes  1.23 Gbits/sec    0    437 KBytes      
[  5]  10.00-20.00  sec  1.43 GBytes  1.23 Gbits/sec    0    437 KBytes      
[  5]  20.00-30.00  sec  1.44 GBytes  1.23 Gbits/sec    0    437 KBytes      
[  5]  30.00-40.00  sec  1.43 GBytes  1.23 Gbits/sec    0    437 KBytes      
[  5]  40.00-50.00  sec  1.37 GBytes  1.17 Gbits/sec    0    437 KBytes      
[  5]  50.00-60.00  sec  1.45 GBytes  1.24 Gbits/sec    0    437 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  8.54 GBytes  1.22 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  8.53 GBytes  1.22 Gbits/sec                  receiver

Server output:
iperf 3.12
Linux iperf3-server-6d7fcf5487-6w4zm 6.1.141-165.249.amzn2023.aarch64 #1 SMP Tue Jul  1 18:00:46 UTC 2025 aarch64
-----------------------------------------------------------
Server listening on 5201 (test #15)
-----------------------------------------------------------
Time: Sat, 26 Jul 2025 12:43:13 GMT
Accepted connection from 10.0.2.207, port 55814
      Cookie: d3hd7lzylpgpqum6ctv3zutagsszaormw64v
      TCP MSS: 0 (default)
[  5] local 10.0.2.62 port 5201 connected to 10.0.2.207 port 55826
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 60 second test, tos 0
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   136 MBytes  1.14 Gbits/sec                  
[  5]   1.00-2.00   sec   141 MBytes  1.18 Gbits/sec                  
[  5]   2.00-3.00   sec   148 MBytes  1.24 Gbits/sec                  
[  5]   3.00-4.00   sec   148 MBytes  1.24 Gbits/sec                  
[  5]   4.00-5.00   sec   146 MBytes  1.22 Gbits/sec                  
[  5]   5.00-6.00   sec   150 MBytes  1.25 Gbits/sec                  
[  5]   6.00-7.00   sec   146 MBytes  1.22 Gbits/sec                  
[  5]   7.00-8.00   sec   148 MBytes  1.24 Gbits/sec                  
[  5]   8.00-9.00   sec   150 MBytes  1.26 Gbits/sec                  
[  5]   9.00-10.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  10.00-11.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  11.00-12.00  sec   144 MBytes  1.21 Gbits/sec                  
[  5]  12.00-13.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  13.00-14.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  14.00-15.00  sec   144 MBytes  1.21 Gbits/sec                  
[  5]  15.00-16.00  sec   150 MBytes  1.25 Gbits/sec                  
[  5]  16.00-17.00  sec   142 MBytes  1.20 Gbits/sec                  
[  5]  17.00-18.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  18.00-19.00  sec   151 MBytes  1.27 Gbits/sec                  
[  5]  19.00-20.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  20.00-21.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  21.00-22.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  22.00-23.00  sec   150 MBytes  1.26 Gbits/sec                  
[  5]  23.00-24.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  24.00-25.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  25.00-26.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  26.00-27.00  sec   145 MBytes  1.22 Gbits/sec                  
[  5]  27.00-28.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  28.00-29.00  sec   141 MBytes  1.18 Gbits/sec                  
[  5]  29.00-30.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  30.00-31.00  sec   146 MBytes  1.22 Gbits/sec                  
[  5]  31.00-32.00  sec   131 MBytes  1.09 Gbits/sec                  
[  5]  32.00-33.00  sec   146 MBytes  1.22 Gbits/sec                  
[  5]  33.00-34.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  34.00-35.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  35.00-36.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  36.00-37.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  37.00-38.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  38.00-39.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  39.00-40.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  40.00-41.00  sec   144 MBytes  1.21 Gbits/sec                  
[  5]  41.00-42.00  sec   143 MBytes  1.20 Gbits/sec                  
[  5]  42.00-43.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  43.00-44.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  44.00-45.00  sec   150 MBytes  1.26 Gbits/sec                  
[  5]  45.00-46.00  sec   151 MBytes  1.27 Gbits/sec                  
[  5]  46.00-47.00  sec   103 MBytes   865 Mbits/sec                  
[  5]  47.00-48.00  sec   120 MBytes  1.00 Gbits/sec                  
[  5]  48.00-49.00  sec   147 MBytes  1.23 Gbits/sec                  
[  5]  49.00-50.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  50.00-51.00  sec   150 MBytes  1.25 Gbits/sec                  
[  5]  51.00-52.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  52.00-53.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  53.00-54.00  sec   149 MBytes  1.25 Gbits/sec                  
[  5]  54.00-55.00  sec   150 MBytes  1.26 Gbits/sec                  
[  5]  55.00-56.00  sec   146 MBytes  1.22 Gbits/sec                  
[  5]  56.00-57.00  sec   148 MBytes  1.24 Gbits/sec                  
[  5]  57.00-58.00  sec   146 MBytes  1.22 Gbits/sec                  
[  5]  58.00-59.00  sec   145 MBytes  1.22 Gbits/sec                  
[  5]  59.00-60.00  sec   150 MBytes  1.26 Gbits/sec                  
[  5]  60.00-60.00  sec   597 KBytes  1.20 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate
[  5] (sender statistics not available)
[  5]   0.00-60.00  sec  8.53 GBytes  1.22 Gbits/sec                  receiver
rcv_tcp_congestion cubic


iperf Done.

===========

I also did iperf with a direct Skupper router site-to-site link cross-cluster, but 2 Gbps was still the max I could get. I was expecting maybe around ~4.7 Gbps considering overhead.

Test Environment
  • Hardware: AWS t4g.xlarge (4 vCPU, 16GB RAM, Up to 5 Gbps network)
  • Test Tool: iperf3 (60-second sustained throughput test)
  • Test Pattern: Single TCP stream, 10-second intervals for 60 sec

+--------------------------+------------+-------------+---------------------+------------------+----------------------------------------------------+
| Test Scenario            | Throughput | Retransmits | Network Utilization | Performance Loss | Architecture                                       |
+--------------------------+------------+-------------+---------------------+------------------+----------------------------------------------------+
| Baseline: Intra-Cluster  | 4.95 Gbps  | 131         | 99%                 | 0% (baseline)    | Pod → K8s networking → Pod                         |
| Raw AWS Cross-Cluster    | 4.80 Gbps  | 724         | 96%                 | 3% vs baseline   | Pod → VPC → Private Link → NLB → VPC → Pod         |
| Skupper Cross-Cluster    | 2.00 Gbps  | 1           | 40%                 | 58% vs raw AWS   | Pod → Skupper → Private Link → NLB → Skupper → Pod |
+--------------------------+------------+-------------+---------------------+------------------+----------------------------------------------------+


What was interesting was when I did 4 parallel streams:

kubectl exec -n titan -it iperf-cross-cluster-client -- iperf3 -c iperf3-server -p 5201 -t 60 -P 4 -i 10
Connecting to host iperf3-server, port 5201
[  5] local 10.0.1.51 port 56028 connected to 172.20.162.91 port 5201
[  7] local 10.0.1.51 port 56042 connected to 172.20.162.91 port 5201
[  9] local 10.0.1.51 port 56046 connected to 172.20.162.91 port 5201
[ 11] local 10.0.1.51 port 56052 connected to 172.20.162.91 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  1.25 GBytes  1.07 Gbits/sec   20    594 KBytes      
[  7]   0.00-10.00  sec  1.25 GBytes  1.07 Gbits/sec   41    926 KBytes      
[  9]   0.00-10.00  sec  1.25 GBytes  1.08 Gbits/sec   18    760 KBytes      
[ 11]   0.00-10.00  sec  1.25 GBytes  1.07 Gbits/sec   33    638 KBytes      
[SUM]   0.00-10.00  sec  4.99 GBytes  4.29 Gbits/sec  112            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  10.00-20.00  sec  1.32 GBytes  1.14 Gbits/sec    1    900 KBytes      
[  7]  10.00-20.00  sec  1.31 GBytes  1.13 Gbits/sec    0    926 KBytes      
[  9]  10.00-20.00  sec  1.32 GBytes  1.13 Gbits/sec    1    760 KBytes      
[ 11]  10.00-20.00  sec  1.31 GBytes  1.12 Gbits/sec    1    638 KBytes      
[SUM]  10.00-20.00  sec  5.26 GBytes  4.52 Gbits/sec    3            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  20.00-30.00  sec  1.33 GBytes  1.14 Gbits/sec    1   1.36 MBytes      
[  7]  20.00-30.00  sec  1.32 GBytes  1.14 Gbits/sec    2    926 KBytes      
[  9]  20.00-30.00  sec  1.32 GBytes  1.13 Gbits/sec    3    813 KBytes      
[ 11]  20.00-30.00  sec  1.32 GBytes  1.13 Gbits/sec    0   1022 KBytes      
[SUM]  20.00-30.00  sec  5.29 GBytes  4.55 Gbits/sec    6            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  30.00-40.00  sec  1.31 GBytes  1.13 Gbits/sec    2   1.36 MBytes      
[  7]  30.00-40.00  sec  1.31 GBytes  1.12 Gbits/sec    1    926 KBytes      
[  9]  30.00-40.00  sec  1.32 GBytes  1.14 Gbits/sec    1    638 KBytes      
[ 11]  30.00-40.00  sec  1.31 GBytes  1.12 Gbits/sec    1   1022 KBytes      
[SUM]  30.00-40.00  sec  5.26 GBytes  4.51 Gbits/sec    5            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  40.00-50.00  sec  1.26 GBytes  1.08 Gbits/sec    1   1.36 MBytes      
[  7]  40.00-50.00  sec  1.25 GBytes  1.07 Gbits/sec    2    926 KBytes      
[  9]  40.00-50.00  sec  1.26 GBytes  1.08 Gbits/sec    0    638 KBytes      
[ 11]  40.00-50.00  sec  1.24 GBytes  1.07 Gbits/sec    3   1022 KBytes      
[SUM]  40.00-50.00  sec  5.01 GBytes  4.30 Gbits/sec    6            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  50.00-60.00  sec  1.31 GBytes  1.12 Gbits/sec    1   1.36 MBytes      
[  7]  50.00-60.00  sec  1.29 GBytes  1.11 Gbits/sec    1   1.49 MBytes      
[  9]  50.00-60.00  sec  1.31 GBytes  1.13 Gbits/sec    2   1.11 MBytes      
[ 11]  50.00-60.00  sec  1.31 GBytes  1.12 Gbits/sec    0   1022 KBytes      
[SUM]  50.00-60.00  sec  5.22 GBytes  4.48 Gbits/sec    4            
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  7.78 GBytes  1.11 Gbits/sec   26             sender
[  5]   0.00-60.01  sec  7.78 GBytes  1.11 Gbits/sec                  receiver
[  7]   0.00-60.00  sec  7.73 GBytes  1.11 Gbits/sec   47             sender
[  7]   0.00-60.01  sec  7.72 GBytes  1.11 Gbits/sec                  receiver
[  9]   0.00-60.00  sec  7.79 GBytes  1.11 Gbits/sec   25             sender
[  9]   0.00-60.01  sec  7.78 GBytes  1.11 Gbits/sec                  receiver
[ 11]   0.00-60.00  sec  7.73 GBytes  1.11 Gbits/sec   38             sender
[ 11]   0.00-60.01  sec  7.73 GBytes  1.11 Gbits/sec                  receiver
[SUM]   0.00-60.00  sec  31.0 GBytes  4.44 Gbits/sec  136             sender
[SUM]   0.00-60.01  sec  31.0 GBytes  4.44 Gbits/sec                  receiver

iperf Done.

=================

And 16 parallel streams:

kubectl exec -n titan -it iperf-cross-cluster-client -- iperf3 -c iperf3-server -p 5201 -t 60 -P 16 -i 10
Connecting to host iperf3-server, port 5201
[  5] local 10.0.1.51 port 49826 connected to 172.20.162.91 port 5201
[  7] local 10.0.1.51 port 49830 connected to 172.20.162.91 port 5201
[  9] local 10.0.1.51 port 49834 connected to 172.20.162.91 port 5201
[ 11] local 10.0.1.51 port 49844 connected to 172.20.162.91 port 5201
[ 13] local 10.0.1.51 port 49858 connected to 172.20.162.91 port 5201
[ 15] local 10.0.1.51 port 49870 connected to 172.20.162.91 port 5201
[ 17] local 10.0.1.51 port 49872 connected to 172.20.162.91 port 5201
[ 19] local 10.0.1.51 port 49878 connected to 172.20.162.91 port 5201
[ 21] local 10.0.1.51 port 49888 connected to 172.20.162.91 port 5201
[ 23] local 10.0.1.51 port 49902 connected to 172.20.162.91 port 5201
[ 25] local 10.0.1.51 port 49914 connected to 172.20.162.91 port 5201
[ 27] local 10.0.1.51 port 49926 connected to 172.20.162.91 port 5201
[ 29] local 10.0.1.51 port 49936 connected to 172.20.162.91 port 5201
[ 31] local 10.0.1.51 port 49946 connected to 172.20.162.91 port 5201
[ 33] local 10.0.1.51 port 49954 connected to 172.20.162.91 port 5201
[ 35] local 10.0.1.51 port 49968 connected to 172.20.162.91 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec   371 MBytes   311 Mbits/sec  373    524 KBytes      
[  7]   0.00-10.00  sec   364 MBytes   305 Mbits/sec  396    498 KBytes      
[  9]   0.00-10.00  sec   369 MBytes   310 Mbits/sec  445    481 KBytes      
[ 11]   0.00-10.00  sec   361 MBytes   302 Mbits/sec  266    551 KBytes      
[ 13]   0.00-10.00  sec   363 MBytes   305 Mbits/sec  297    507 KBytes      
[ 15]   0.00-10.00  sec   355 MBytes   298 Mbits/sec  552    507 KBytes      
[ 17]   0.00-10.00  sec   352 MBytes   295 Mbits/sec  277    498 KBytes      
[ 19]   0.00-10.00  sec   378 MBytes   317 Mbits/sec  311    551 KBytes      
[ 21]   0.00-10.00  sec   353 MBytes   296 Mbits/sec  421    524 KBytes      
[ 23]   0.00-10.00  sec   359 MBytes   301 Mbits/sec  356    507 KBytes      
[ 25]   0.00-10.00  sec   363 MBytes   304 Mbits/sec  327    542 KBytes      
[ 27]   0.00-10.00  sec   358 MBytes   300 Mbits/sec  297    516 KBytes      
[ 29]   0.00-10.00  sec   352 MBytes   296 Mbits/sec  365    542 KBytes      
[ 31]   0.00-10.00  sec   360 MBytes   302 Mbits/sec  351    577 KBytes      
[ 33]   0.00-10.00  sec   370 MBytes   310 Mbits/sec  386    489 KBytes      
[ 35]   0.00-10.00  sec   364 MBytes   306 Mbits/sec  379    498 KBytes      
[SUM]   0.00-10.00  sec  5.66 GBytes  4.86 Gbits/sec  5799            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  10.00-20.00  sec   353 MBytes   296 Mbits/sec  308    472 KBytes      
[  7]  10.00-20.00  sec   351 MBytes   295 Mbits/sec  320    542 KBytes      
[  9]  10.00-20.00  sec   354 MBytes   297 Mbits/sec  250    498 KBytes      
[ 11]  10.00-20.00  sec   351 MBytes   294 Mbits/sec  189    498 KBytes      
[ 13]  10.00-20.00  sec   355 MBytes   298 Mbits/sec  227    507 KBytes      
[ 15]  10.00-20.00  sec   355 MBytes   298 Mbits/sec  167    516 KBytes      
[ 17]  10.00-20.00  sec   352 MBytes   295 Mbits/sec  231    516 KBytes      
[ 19]  10.00-20.00  sec   370 MBytes   311 Mbits/sec  205    507 KBytes      
[ 21]  10.00-20.00  sec   352 MBytes   295 Mbits/sec  194    551 KBytes      
[ 23]  10.00-20.00  sec   360 MBytes   302 Mbits/sec  191    498 KBytes      
[ 25]  10.00-20.00  sec   353 MBytes   297 Mbits/sec  228    489 KBytes      
[ 27]  10.00-20.00  sec   340 MBytes   286 Mbits/sec  197    612 KBytes      
[ 29]  10.00-20.00  sec   350 MBytes   294 Mbits/sec  275    524 KBytes      
[ 31]  10.00-20.00  sec   352 MBytes   295 Mbits/sec  234    498 KBytes      
[ 33]  10.00-20.00  sec   353 MBytes   296 Mbits/sec  249    489 KBytes      
[ 35]  10.00-20.00  sec   359 MBytes   301 Mbits/sec  219    638 KBytes      
[SUM]  10.00-20.00  sec  5.53 GBytes  4.75 Gbits/sec  3684            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  20.00-30.00  sec   338 MBytes   284 Mbits/sec  221    350 KBytes      
[  7]  20.00-30.00  sec   332 MBytes   279 Mbits/sec  161    516 KBytes      
[  9]  20.00-30.00  sec   329 MBytes   276 Mbits/sec  197    568 KBytes      
[ 11]  20.00-30.00  sec   338 MBytes   283 Mbits/sec   99    542 KBytes      
[ 13]  20.00-30.00  sec   345 MBytes   289 Mbits/sec  171    568 KBytes      
[ 15]  20.00-30.00  sec   334 MBytes   280 Mbits/sec  108    402 KBytes      
[ 17]  20.00-30.00  sec   340 MBytes   286 Mbits/sec  172    524 KBytes      
[ 19]  20.00-30.00  sec   339 MBytes   284 Mbits/sec  193    446 KBytes      
[ 21]  20.00-30.00  sec   341 MBytes   286 Mbits/sec  155    419 KBytes      
[ 23]  20.00-30.00  sec   340 MBytes   286 Mbits/sec  139    376 KBytes      
[ 25]  20.00-30.00  sec   337 MBytes   283 Mbits/sec  215    516 KBytes      
[ 27]  20.00-30.00  sec   338 MBytes   284 Mbits/sec  144    559 KBytes      
[ 29]  20.00-30.00  sec   334 MBytes   280 Mbits/sec  252    402 KBytes      
[ 31]  20.00-30.00  sec   332 MBytes   278 Mbits/sec  113    306 KBytes      
[ 33]  20.00-30.00  sec   341 MBytes   286 Mbits/sec  191    376 KBytes      
[ 35]  20.00-30.00  sec   344 MBytes   288 Mbits/sec  134    393 KBytes      
[SUM]  20.00-30.00  sec  5.28 GBytes  4.53 Gbits/sec  2665            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  30.00-40.00  sec   356 MBytes   299 Mbits/sec  151    516 KBytes      
[  7]  30.00-40.00  sec   360 MBytes   302 Mbits/sec  179    516 KBytes      
[  9]  30.00-40.00  sec   356 MBytes   298 Mbits/sec  161    533 KBytes      
[ 11]  30.00-40.00  sec   354 MBytes   297 Mbits/sec  123    524 KBytes      
[ 13]  30.00-40.00  sec   358 MBytes   301 Mbits/sec  122    568 KBytes      
[ 15]  30.00-40.00  sec   360 MBytes   302 Mbits/sec  127    542 KBytes      
[ 17]  30.00-40.00  sec   357 MBytes   300 Mbits/sec  106   69.9 KBytes      
[ 19]  30.00-40.00  sec   342 MBytes   287 Mbits/sec  147    551 KBytes      
[ 21]  30.00-40.00  sec   360 MBytes   302 Mbits/sec  141    157 KBytes      
[ 23]  30.00-40.00  sec   355 MBytes   298 Mbits/sec  149    402 KBytes      
[ 25]  30.00-40.00  sec   363 MBytes   304 Mbits/sec  171    516 KBytes      
[ 27]  30.00-40.00  sec   347 MBytes   291 Mbits/sec  109    577 KBytes      
[ 29]  30.00-40.00  sec   356 MBytes   299 Mbits/sec  162    568 KBytes      
[ 31]  30.00-40.00  sec   358 MBytes   301 Mbits/sec  120    586 KBytes      
[ 33]  30.00-40.00  sec   362 MBytes   303 Mbits/sec  141    516 KBytes      
[ 35]  30.00-40.00  sec   353 MBytes   296 Mbits/sec  127    542 KBytes      
[SUM]  30.00-40.00  sec  5.56 GBytes  4.78 Gbits/sec  2236            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  40.00-50.00  sec   352 MBytes   296 Mbits/sec  221    271 KBytes      
[  7]  40.00-50.00  sec   356 MBytes   298 Mbits/sec  219    341 KBytes      
[  9]  40.00-50.00  sec   354 MBytes   297 Mbits/sec  191    323 KBytes      
[ 11]  40.00-50.00  sec   362 MBytes   304 Mbits/sec  119    385 KBytes      
[ 13]  40.00-50.00  sec   356 MBytes   299 Mbits/sec  121    367 KBytes      
[ 15]  40.00-50.00  sec   356 MBytes   298 Mbits/sec  159    245 KBytes      
[ 17]  40.00-50.00  sec   358 MBytes   300 Mbits/sec  155    393 KBytes      
[ 19]  40.00-50.00  sec   354 MBytes   297 Mbits/sec  138    358 KBytes      
[ 21]  40.00-50.00  sec   358 MBytes   301 Mbits/sec  188    341 KBytes      
[ 23]  40.00-50.00  sec   363 MBytes   305 Mbits/sec  142    385 KBytes      
[ 25]  40.00-50.00  sec   352 MBytes   295 Mbits/sec  199    306 KBytes      
[ 27]  40.00-50.00  sec   358 MBytes   300 Mbits/sec  126    367 KBytes      
[ 29]  40.00-50.00  sec   348 MBytes   292 Mbits/sec  246    332 KBytes      
[ 31]  40.00-50.00  sec   365 MBytes   306 Mbits/sec   95    428 KBytes      
[ 33]  40.00-50.00  sec   354 MBytes   297 Mbits/sec  224    280 KBytes      
[ 35]  40.00-50.00  sec   366 MBytes   307 Mbits/sec  122    446 KBytes      
[SUM]  40.00-50.00  sec  5.58 GBytes  4.79 Gbits/sec  2665            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  50.00-60.00  sec   356 MBytes   299 Mbits/sec  218    280 KBytes      
[  7]  50.00-60.00  sec   361 MBytes   303 Mbits/sec  192    402 KBytes      
[  9]  50.00-60.00  sec   359 MBytes   301 Mbits/sec  204    271 KBytes      
[ 11]  50.00-60.00  sec   358 MBytes   300 Mbits/sec  159    332 KBytes      
[ 13]  50.00-60.00  sec   363 MBytes   305 Mbits/sec  208    280 KBytes      
[ 15]  50.00-60.00  sec   354 MBytes   297 Mbits/sec  218    253 KBytes      
[ 17]  50.00-60.00  sec   366 MBytes   307 Mbits/sec  222    218 KBytes      
[ 19]  50.00-60.00  sec   348 MBytes   292 Mbits/sec  187    315 KBytes      
[ 21]  50.00-60.00  sec   355 MBytes   297 Mbits/sec  192    166 KBytes      
[ 23]  50.00-60.00  sec   366 MBytes   307 Mbits/sec  156    315 KBytes      
[ 25]  50.00-60.00  sec   346 MBytes   290 Mbits/sec  195    184 KBytes      
[ 27]  50.00-60.00  sec   351 MBytes   294 Mbits/sec  246    428 KBytes      
[ 29]  50.00-60.00  sec   361 MBytes   303 Mbits/sec  178    288 KBytes      
[ 31]  50.00-60.00  sec   357 MBytes   299 Mbits/sec  160    385 KBytes      
[ 33]  50.00-60.00  sec   358 MBytes   301 Mbits/sec  207    271 KBytes      
[ 35]  50.00-60.00  sec   360 MBytes   302 Mbits/sec  131    358 KBytes      
[SUM]  50.00-60.00  sec  5.59 GBytes  4.80 Gbits/sec  3073            
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  2.08 GBytes   297 Mbits/sec  1492             sender
[  5]   0.00-60.01  sec  2.07 GBytes   297 Mbits/sec                  receiver
[  7]   0.00-60.00  sec  2.07 GBytes   297 Mbits/sec  1467             sender
[  7]   0.00-60.01  sec  2.07 GBytes   296 Mbits/sec                  receiver
[  9]   0.00-60.00  sec  2.07 GBytes   297 Mbits/sec  1448             sender
[  9]   0.00-60.01  sec  2.07 GBytes   296 Mbits/sec                  receiver
[ 11]   0.00-60.00  sec  2.07 GBytes   297 Mbits/sec  955             sender
[ 11]   0.00-60.01  sec  2.07 GBytes   296 Mbits/sec                  receiver
[ 13]   0.00-60.00  sec  2.09 GBytes   299 Mbits/sec  1146             sender
[ 13]   0.00-60.01  sec  2.08 GBytes   298 Mbits/sec                  receiver
[ 15]   0.00-60.00  sec  2.06 GBytes   296 Mbits/sec  1331             sender
[ 15]   0.00-60.01  sec  2.06 GBytes   295 Mbits/sec                  receiver
[ 17]   0.00-60.00  sec  2.08 GBytes   297 Mbits/sec  1163             sender
[ 17]   0.00-60.01  sec  2.07 GBytes   296 Mbits/sec                  receiver
[ 19]   0.00-60.00  sec  2.08 GBytes   298 Mbits/sec  1181             sender
[ 19]   0.00-60.01  sec  2.08 GBytes   297 Mbits/sec                  receiver
[ 21]   0.00-60.00  sec  2.07 GBytes   296 Mbits/sec  1291             sender
[ 21]   0.00-60.01  sec  2.06 GBytes   295 Mbits/sec                  receiver
[ 23]   0.00-60.00  sec  2.09 GBytes   300 Mbits/sec  1133             sender
[ 23]   0.00-60.01  sec  2.09 GBytes   299 Mbits/sec                  receiver
[ 25]   0.00-60.00  sec  2.07 GBytes   296 Mbits/sec  1335             sender
[ 25]   0.00-60.01  sec  2.06 GBytes   295 Mbits/sec                  receiver
[ 27]   0.00-60.00  sec  2.04 GBytes   293 Mbits/sec  1119             sender
[ 27]   0.00-60.01  sec  2.04 GBytes   292 Mbits/sec                  receiver
[ 29]   0.00-60.00  sec  2.05 GBytes   294 Mbits/sec  1478             sender
[ 29]   0.00-60.01  sec  2.05 GBytes   293 Mbits/sec                  receiver
[ 31]   0.00-60.00  sec  2.07 GBytes   297 Mbits/sec  1073             sender
[ 31]   0.00-60.01  sec  2.07 GBytes   296 Mbits/sec                  receiver
[ 33]   0.00-60.00  sec  2.09 GBytes   299 Mbits/sec  1398             sender
[ 33]   0.00-60.01  sec  2.08 GBytes   298 Mbits/sec                  receiver
[ 35]   0.00-60.00  sec  2.10 GBytes   300 Mbits/sec  1112             sender
[ 35]   0.00-60.01  sec  2.09 GBytes   299 Mbits/sec                  receiver
[SUM]   0.00-60.00  sec  33.2 GBytes  4.75 Gbits/sec  20122             sender
[SUM]   0.00-60.01  sec  33.1 GBytes  4.74 Gbits/sec                  receiver

iperf Done.


=====

Compared with pod 1 on node 1 to pod 2 on node 2 in the same cluster:

kubectl exec -n default -it iperf-baseline-client -- iperf3 -c iperf-baseline-server -p 5201 -t 60 -P 4 -i 10
Connecting to host iperf-baseline-server, port 5201
[  5] local 10.0.1.112 port 38118 connected to 172.20.17.190 port 5201
[  7] local 10.0.1.112 port 38132 connected to 172.20.17.190 port 5201
[  9] local 10.0.1.112 port 38144 connected to 172.20.17.190 port 5201
[ 11] local 10.0.1.112 port 38156 connected to 172.20.17.190 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  1.97 GBytes  1.70 Gbits/sec  190    577 KBytes      
[  7]   0.00-10.00  sec   918 MBytes   770 Mbits/sec  164    367 KBytes      
[  9]   0.00-10.00  sec  1.88 GBytes  1.62 Gbits/sec  234    918 KBytes      
[ 11]   0.00-10.00  sec  1.03 GBytes   886 Mbits/sec  153    516 KBytes      
[SUM]   0.00-10.00  sec  5.78 GBytes  4.97 Gbits/sec  741            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  10.00-20.00  sec  1.97 GBytes  1.69 Gbits/sec  205    839 KBytes      
[  7]  10.00-20.00  sec   930 MBytes   780 Mbits/sec  165    507 KBytes      
[  9]  10.00-20.00  sec  1.89 GBytes  1.62 Gbits/sec  233    769 KBytes      
[ 11]  10.00-20.00  sec  1.01 GBytes   870 Mbits/sec  164    498 KBytes      
[SUM]  10.00-20.00  sec  5.78 GBytes  4.97 Gbits/sec  767            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  20.00-30.00  sec  1.95 GBytes  1.67 Gbits/sec  208    874 KBytes      
[  7]  20.00-30.00  sec   990 MBytes   830 Mbits/sec  152    454 KBytes      
[  9]  20.00-30.00  sec  1.89 GBytes  1.62 Gbits/sec  218    787 KBytes      
[ 11]  20.00-30.00  sec   990 MBytes   831 Mbits/sec  151    463 KBytes      
[SUM]  20.00-30.00  sec  5.77 GBytes  4.95 Gbits/sec  729            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  30.00-40.00  sec  1.96 GBytes  1.69 Gbits/sec  168    804 KBytes      
[  7]  30.00-40.00  sec   926 MBytes   777 Mbits/sec  141    620 KBytes      
[  9]  30.00-40.00  sec  1.89 GBytes  1.63 Gbits/sec  191    795 KBytes      
[ 11]  30.00-40.00  sec  1.02 GBytes   875 Mbits/sec  169    428 KBytes      
[SUM]  30.00-40.00  sec  5.78 GBytes  4.96 Gbits/sec  669            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  40.00-50.00  sec  1.98 GBytes  1.70 Gbits/sec  227    577 KBytes      
[  7]  40.00-50.00  sec   985 MBytes   826 Mbits/sec  155    498 KBytes      
[  9]  40.00-50.00  sec  1.89 GBytes  1.63 Gbits/sec  243    559 KBytes      
[ 11]  40.00-50.00  sec   971 MBytes   814 Mbits/sec  154    437 KBytes      
[SUM]  40.00-50.00  sec  5.78 GBytes  4.97 Gbits/sec  779            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  50.00-60.00  sec  1.97 GBytes  1.69 Gbits/sec  190    629 KBytes      
[  7]  50.00-60.00  sec   988 MBytes   828 Mbits/sec  159    393 KBytes      
[  9]  50.00-60.00  sec  1.89 GBytes  1.62 Gbits/sec  277    551 KBytes      
[ 11]  50.00-60.00  sec   982 MBytes   824 Mbits/sec  154    428 KBytes      
[SUM]  50.00-60.00  sec  5.78 GBytes  4.97 Gbits/sec  780            
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  11.8 GBytes  1.69 Gbits/sec  1188             sender
[  5]   0.00-60.00  sec  11.8 GBytes  1.69 Gbits/sec                  receiver
[  7]   0.00-60.00  sec  5.60 GBytes   802 Mbits/sec  936             sender
[  7]   0.00-60.00  sec  5.60 GBytes   802 Mbits/sec                  receiver
[  9]   0.00-60.00  sec  11.3 GBytes  1.62 Gbits/sec  1396             sender
[  9]   0.00-60.00  sec  11.3 GBytes  1.62 Gbits/sec                  receiver
[ 11]   0.00-60.00  sec  5.94 GBytes   850 Mbits/sec  945             sender
[ 11]   0.00-60.00  sec  5.93 GBytes   850 Mbits/sec                  receiver
[SUM]   0.00-60.00  sec  34.7 GBytes  4.96 Gbits/sec  4465             sender
[SUM]   0.00-60.00  sec  34.7 GBytes  4.96 Gbits/sec                  receiver

iperf Done.


=================

And pod to pod via direct PrivateLink cross-cluster:

kubectl exec -it iperf-cross-cluster-client -n titan -- iperf3 -c vpce-xxx.vpce-svc-xxx.us-west-2.vpce.amazonaws.com -p 5201 -t 60 -P 4 -i 10
Connecting to host vpce-xxx.vpce-svc-xxx.us-west-2.vpce.amazonaws.com, port 5201
[  5] local 10.0.1.51 port 57452 connected to 10.0.1.31 port 5201
[  7] local 10.0.1.51 port 54848 connected to 10.0.0.158 port 5201
[  9] local 10.0.1.51 port 57462 connected to 10.0.1.31 port 5201
[ 11] local 10.0.1.51 port 57470 connected to 10.0.1.31 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-10.00  sec  1.18 GBytes  1.01 Gbits/sec  142    775 KBytes      
[  7]   0.00-10.00  sec  1.48 GBytes  1.27 Gbits/sec  106    808 KBytes      
[  9]   0.00-10.00  sec  1.22 GBytes  1.05 Gbits/sec  139   1003 KBytes      
[ 11]   0.00-10.00  sec  1.91 GBytes  1.64 Gbits/sec  135    840 KBytes      
[SUM]   0.00-10.00  sec  5.79 GBytes  4.97 Gbits/sec  522            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  10.00-20.00  sec  1.10 GBytes   944 Mbits/sec  127    726 KBytes      
[  7]  10.00-20.00  sec  1.54 GBytes  1.32 Gbits/sec   76    734 KBytes      
[  9]  10.00-20.00  sec  1.16 GBytes   995 Mbits/sec  117    669 KBytes      
[ 11]  10.00-20.00  sec  1.98 GBytes  1.70 Gbits/sec  120    848 KBytes      
[SUM]  10.00-20.00  sec  5.77 GBytes  4.96 Gbits/sec  440            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  20.00-30.00  sec  1.54 GBytes  1.32 Gbits/sec  169    881 KBytes      
[  7]  20.00-30.00  sec  1.43 GBytes  1.23 Gbits/sec  131    465 KBytes      
[  9]  20.00-30.00  sec  1.19 GBytes  1.02 Gbits/sec  137    555 KBytes      
[ 11]  20.00-30.00  sec  1.61 GBytes  1.38 Gbits/sec  136    563 KBytes      
[SUM]  20.00-30.00  sec  5.77 GBytes  4.95 Gbits/sec  573            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  30.00-40.00  sec  1.39 GBytes  1.20 Gbits/sec  102    742 KBytes      
[  7]  30.00-40.00  sec  1.39 GBytes  1.19 Gbits/sec  174   1.47 MBytes      
[  9]  30.00-40.00  sec  1.22 GBytes  1.05 Gbits/sec  108    750 KBytes      
[ 11]  30.00-40.00  sec  1.78 GBytes  1.53 Gbits/sec  189    726 KBytes      
[SUM]  30.00-40.00  sec  5.78 GBytes  4.96 Gbits/sec  573            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  40.00-50.00  sec  1.35 GBytes  1.16 Gbits/sec   63   1003 KBytes      
[  7]  40.00-50.00  sec  1.37 GBytes  1.18 Gbits/sec   51    775 KBytes      
[  9]  40.00-50.00  sec  1.20 GBytes  1.03 Gbits/sec  113    595 KBytes      
[ 11]  40.00-50.00  sec  1.86 GBytes  1.59 Gbits/sec  104   1.08 MBytes      
[SUM]  40.00-50.00  sec  5.77 GBytes  4.96 Gbits/sec  331            
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]  50.00-60.00  sec  1.21 GBytes  1.04 Gbits/sec   93    775 KBytes      
[  7]  50.00-60.00  sec  1.60 GBytes  1.37 Gbits/sec   36   1.72 MBytes      
[  9]  50.00-60.00  sec  1.02 GBytes   876 Mbits/sec  135    563 KBytes      
[ 11]  50.00-60.00  sec  1.90 GBytes  1.63 Gbits/sec  119    808 KBytes      
[SUM]  50.00-60.00  sec  5.73 GBytes  4.92 Gbits/sec  383            
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  7.78 GBytes  1.11 Gbits/sec  696             sender
[  5]   0.00-60.00  sec  7.77 GBytes  1.11 Gbits/sec                  receiver
[  7]   0.00-60.00  sec  8.80 GBytes  1.26 Gbits/sec  574             sender
[  7]   0.00-60.00  sec  8.80 GBytes  1.26 Gbits/sec                  receiver
[  9]   0.00-60.00  sec  7.00 GBytes  1.00 Gbits/sec  749             sender
[  9]   0.00-60.00  sec  7.00 GBytes  1.00 Gbits/sec                  receiver
[ 11]   0.00-60.00  sec  11.0 GBytes  1.58 Gbits/sec  803             sender
[ 11]   0.00-60.00  sec  11.0 GBytes  1.58 Gbits/sec                  receiver
[SUM]   0.00-60.00  sec  34.6 GBytes  4.95 Gbits/sec  2822             sender
[SUM]   0.00-60.00  sec  34.6 GBytes  4.95 Gbits/sec                  receiver

iperf Done.



=================

I don't know if there are any more tunings in V2, as I couldn't find much in the documentation since this is a new release.

But overall, it's great to finally have something more network focused than all these sidecars, proxies and VPNs. Great project to have come across! We can finally abstract networking away from IPv4 constraints.

Do you think the bottleneck for a single stream is in the AMQP layer?

Thank you again for the assistance.

Mike Cruzz

Jul 28, 2025, 10:19:36 PM
to Skupper
Hello again

I did another test to check whether iperf was just not the right tool for this, and I came across https://dev.to/pragmagic/testing-service-mesh-performance-in-multi-cluster-scenario-istio-vs-kuma-vs-nsm-4agj

where they used Fortio.

This was skupper leaf -> skupper hub -> skupper leaf

Test 1:
#!/bin/bash

# Skupper Fortio Performance Test Script
# Replicating the original service mesh performance test methodology
# for leaf-to-leaf communication through Skupper hub

set -e

# Configuration (matching original test parameters)
TARGET_QPS=6000
DURATION="60s"
ITERATIONS=10
SERVER_URL="http://fortio-nginx-server-capella" # Skupper-exposed service
RESULTS_DIR="skupper_test_results_$(date +%Y%m%d_%H%M%S)"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Skupper Fortio Performance Test${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Target QPS: $TARGET_QPS"
echo "Duration: $DURATION"
echo "Iterations: $ITERATIONS"
echo "Server URL: $SERVER_URL"
echo "Results Directory: $RESULTS_DIR"
echo ""

# Create results directory
mkdir -p "$RESULTS_DIR"

# Check if we're in the right cluster context
echo -e "${YELLOW}Checking cluster context...${NC}"
CURRENT_CONTEXT=$(kubectl config current-context)
echo "Current context: $CURRENT_CONTEXT"

# Verify Skupper status
echo -e "${YELLOW}Checking Skupper status...${NC}"
skupper status || echo "Warning: Skupper status check failed"
echo ""

# Verify pods are running
echo -e "${YELLOW}Checking pod status...${NC}"
kubectl get pods -l app=fortio-client-titan -o wide
echo ""

# Function to run a single test iteration
run_test_iteration() {
local iteration=$1
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

echo -e "${YELLOW}Running test iteration $iteration at $timestamp${NC}"

# Start port-forward in background (matching original methodology)
echo "Starting port-forward to Fortio client..."
kubectl port-forward pod/fortio-client-titan 8080:8080 &
PF_PID=$!

# Wait for port-forward to establish
sleep 5

# Test connectivity first
echo "Testing connectivity to server..."
if ! curl -s --max-time 10 "http://localhost:8080/fortio/rest/run?qps=1&t=5s&url=${SERVER_URL}/ping" > /dev/null; then
echo -e "${RED}ERROR: Cannot reach server through Fortio. Skipping iteration $iteration.${NC}"
kill $PF_PID 2>/dev/null || true
return 1
fi

# Warm-up run (discard results - matching original methodology)
echo "Running warm-up test..."

# Brief pause
sleep 2

# Actual test run
echo "Running actual performance test..."
local result_file="${RESULTS_DIR}/iteration_${iteration}.json"

# Build the Fortio REST run URL (same base endpoint as the connectivity check above)
local fortio_url="http://localhost:8080/fortio/rest/run"
fortio_url+="?qps=${TARGET_QPS}"
fortio_url+="&t=${DURATION}"
fortio_url+="&url=${SERVER_URL}"

if curl -s --max-time 180 "$fortio_url" > "$result_file"; then
echo -e "${GREEN}Test iteration $iteration completed successfully${NC}"

# Extract and display key metrics using jq (if available)
if command -v jq &> /dev/null; then
local qps=$(jq -r '.ActualQPS // "N/A"' "$result_file")
local avg_latency=$(jq -r '.DurationHistogram.Avg // "N/A"' "$result_file")
local p90=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==90) | .Value // "N/A"' "$result_file")
local p99=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==99) | .Value // "N/A"' "$result_file")
local p999=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==99.9) | .Value // "N/A"' "$result_file")

echo " → QPS: $qps"
echo " → Avg Latency: $avg_latency"
echo " → P90: $p90"
echo " → P99: $p99"
echo " → P99.9: $p999"

# Log summary to CSV
echo "$iteration,$timestamp,$qps,$avg_latency,$p90,$p99,$p999" >> "${RESULTS_DIR}/summary.csv"
else
echo " → Results saved to $result_file (install jq for metric extraction)"
fi
else
echo -e "${RED}ERROR: Test iteration $iteration failed${NC}"
fi

# Cleanup port-forward
kill $PF_PID 2>/dev/null || true
wait $PF_PID 2>/dev/null || true

# Brief pause between iterations
sleep 5
echo ""
}

# Function to analyze results
analyze_results() {
if ! command -v jq &> /dev/null; then
echo -e "${YELLOW}jq not available. Install jq for detailed analysis.${NC}"
return
fi

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Test Results Analysis${NC}"
echo -e "${GREEN}========================================${NC}"

# Summary CSV header is already written in the main section before the iterations run,
# so don't overwrite the file here (that would wipe the per-iteration rows).

local total_qps=0
local total_latency=0
local count=0

for result_file in "${RESULTS_DIR}"/iteration_*.json; do
if [[ -f "$result_file" ]]; then
local qps=$(jq -r '.ActualQPS // 0' "$result_file")
local latency=$(jq -r '.DurationHistogram.Avg // 0' "$result_file")

if [[ "$qps" != "0" && "$latency" != "0" ]]; then
total_qps=$(echo "$total_qps + $qps" | bc -l 2>/dev/null || echo "$total_qps")
total_latency=$(echo "$total_latency + $latency" | bc -l 2>/dev/null || echo "$total_latency")
((count++))
fi
fi
done

if [[ $count -gt 0 ]]; then
local avg_qps=$(echo "scale=2; $total_qps / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_latency=$(echo "scale=4; $total_latency / $count" | bc -l 2>/dev/null || echo "N/A")

echo "Successful iterations: $count/$ITERATIONS"
echo "Average QPS: $avg_qps"
echo "Average Latency: $avg_latency ms"

# Save final summary
cat > "${RESULTS_DIR}/final_summary.txt" << EOF
Skupper Fortio Performance Test Results
=====================================
Test Configuration:
- Target QPS: $TARGET_QPS
- Duration: $DURATION
- Iterations: $ITERATIONS
- Server URL: $SERVER_URL

Results:
- Successful iterations: $count/$ITERATIONS
- Average QPS: $avg_qps
- Average Latency: $avg_latency ms

Cluster Architecture:
- Client: Cluster 1 (Leaf)
- Hub: Cluster 2 (Skupper router)
- Server: Cluster 3 (Leaf)
- Instance Type: g5g.xlarge (4 vCPUs, 16GB RAM, ARM)

- Istio: 496 QPS, 2.01ms avg latency
- Kuma: 886 QPS, 1.13ms avg latency
- NSM: 1,332 QPS, 0.74ms avg latency
EOF
else
echo -e "${RED}No successful test iterations found.${NC}"
fi
}

# Main execution
echo -e "${YELLOW}Starting test sequence...${NC}"
echo ""

# Initialize CSV header
echo "Iteration,Timestamp,QPS,Avg_Latency,P90,P99,P99.9" > "${RESULTS_DIR}/summary.csv"

# Run test iterations
for i in $(seq 1 $ITERATIONS); do
run_test_iteration $i
done

# Analyze results
analyze_results

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Test Completed${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Results saved in: $RESULTS_DIR"
echo ""
echo "To view detailed results:"
echo " ls -la $RESULTS_DIR/"
echo " cat ${RESULTS_DIR}/final_summary.txt"
echo ""
echo "To compare with original paper results:"
echo " - Your Skupper setup (leaf-to-leaf via hub)"
echo " - Original Istio: 496 QPS, 2.01ms"
echo " - Original Kuma: 886 QPS, 1.13ms"
echo " - Original NSM: 1,332 QPS, 0.74ms"



Test iteration 1 completed successfully
  → QPS: 1689.7733437691763
  → Avg Latency: 0.002363649161380045
  → P90: 0.002918676825847542
  → P99: 0.0032936058394160577
  → P99.9: 0.0068275161290323565

-----------


Test 2:
#!/bin/bash

# Optimized Skupper Fortio Performance Test Script
# Enhanced for maximum throughput performance

set -e

# Enhanced Configuration for higher performance
TARGET_QPS=6000
DURATION="60s"
ITERATIONS=10
RESULTS_DIR="skupper_test_results_$(date +%Y%m%d_%H%M%S)"
SERVER_URL="http://fortio-nginx-server-capella" # Skupper-exposed service (same as test 1)

# Fortio optimization parameters
FORTIO_CONNECTIONS=50 # Increase connection pool
FORTIO_THREADS=8 # Match your CPU cores
FORTIO_HTTP_VERSION=1.1 # Use HTTP/1.1 for better connection reuse
FORTIO_TIMEOUT=30s # Longer timeout for high load
FORTIO_BUFFER_SIZE=8192 # Larger buffer size

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Skupper Fortio Performance Test${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Target QPS: $TARGET_QPS"
echo "Duration: $DURATION"
echo "Connections: $FORTIO_CONNECTIONS"
echo "Threads: $FORTIO_THREADS"
echo "HTTP Version: $FORTIO_HTTP_VERSION"
echo "Results Directory: $RESULTS_DIR"
echo ""

# Create results directory
mkdir -p "$RESULTS_DIR"

# Optimize system settings for high throughput
optimize_system() {
echo -e "${YELLOW}Optimizing system settings for high throughput...${NC}"

# Increase file descriptor limits
ulimit -n 65536

# TCP optimizations (if running with privileged access)
if [[ $EUID -eq 0 ]]; then
echo "Applying TCP optimizations..."
sysctl -w net.core.somaxconn=65535 2>/dev/null || echo "Warning: Could not set somaxconn"
sysctl -w net.core.netdev_max_backlog=5000 2>/dev/null || echo "Warning: Could not set netdev_max_backlog"
sysctl -w net.ipv4.tcp_max_syn_backlog=65536 2>/dev/null || echo "Warning: Could not set tcp_max_syn_backlog"
sysctl -w net.ipv4.tcp_keepalive_time=600 2>/dev/null || echo "Warning: Could not set tcp_keepalive_time"
sysctl -w net.ipv4.tcp_keepalive_intvl=60 2>/dev/null || echo "Warning: Could not set tcp_keepalive_intvl"
sysctl -w net.ipv4.tcp_keepalive_probes=3 2>/dev/null || echo "Warning: Could not set tcp_keepalive_probes"
else
echo "Non-root user: Skipping system TCP optimizations"
fi

echo "File descriptor limit: $(ulimit -n)"
}

# Check if we're in the right cluster context
echo -e "${YELLOW}Checking cluster context...${NC}"
CURRENT_CONTEXT=$(kubectl config current-context)
echo "Current context: $CURRENT_CONTEXT"

# Verify Skupper status
echo -e "${YELLOW}Checking Skupper status...${NC}"
skupper status || echo "Warning: Skupper status check failed"
echo ""

# Verify pods are running
echo -e "${YELLOW}Checking pod status...${NC}"
kubectl get pods -l app=fortio-client-titan -o wide
echo ""

# Optimize system settings
optimize_system

# Function to run optimized test iteration
run_optimized_test_iteration() {
local iteration=$1
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

echo -e "${YELLOW}Running optimized test iteration $iteration at $timestamp${NC}"

# Start port-forward in background with increased buffer
echo "Starting optimized port-forward to Fortio client..."
kubectl port-forward pod/fortio-client-titan 8080:8080 &
PF_PID=$!

# Wait for port-forward to establish
sleep 5

# Test connectivity first
echo "Testing connectivity to server..."
if ! curl -s --max-time 10 "http://localhost:8080/fortio/rest/run?qps=1&t=5s&url=${SERVER_URL}/ping" > /dev/null; then
echo -e "${RED}ERROR: Cannot reach server through Fortio. Skipping iteration $iteration.${NC}"
kill $PF_PID 2>/dev/null || true
return 1
fi

# Enhanced warm-up run with optimized parameters
echo "Running enhanced warm-up test..."

# Brief pause to let connections stabilize
sleep 3

# Optimized test run with enhanced parameters
echo "Running optimized performance test..."
local result_file="${RESULTS_DIR}/iteration_${iteration}.json"

# Build optimized Fortio URL with all performance parameters
local fortio_url="http://localhost:8080/fortio/rest/run"
fortio_url+="?qps=${TARGET_QPS}"
fortio_url+="&t=${DURATION}"
fortio_url+="&c=${FORTIO_CONNECTIONS}"
fortio_url+="&timeout=${FORTIO_TIMEOUT}"
fortio_url+="&compression=false"
fortio_url+="&url=${SERVER_URL}"

if curl -s --max-time 180 "$fortio_url" > "$result_file"; then
echo -e "${GREEN}Optimized test iteration $iteration completed successfully${NC}"

# Extract and display key metrics
if command -v jq &> /dev/null; then
local qps=$(jq -r '.ActualQPS // "N/A"' "$result_file")
local avg_latency=$(jq -r '.DurationHistogram.Avg // "N/A"' "$result_file")
local p90=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==90) | .Value // "N/A"' "$result_file")
local p99=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==99) | .Value // "N/A"' "$result_file")
local p999=$(jq -r '.DurationHistogram.Percentiles[]? | select(.Percentile==99.9) | .Value // "N/A"' "$result_file")
local error_rate=$(jq -r '.ErrorsDurationHistogram.Count // 0' "$result_file")
local total_requests=$(jq -r '.DurationHistogram.Count // "N/A"' "$result_file")

echo " → QPS: $qps"
echo " → Avg Latency: $avg_latency"
echo " → P90: $p90"
echo " → P99: $p99"
echo " → P99.9: $p999"
echo " → Total Requests: $total_requests"
echo " → Errors: $error_rate"
echo " → Success Rate: $(echo "scale=2; (($total_requests - $error_rate) / $total_requests) * 100" | bc -l 2>/dev/null || echo "N/A")%"

# Log summary to CSV
echo "$iteration,$timestamp,$qps,$avg_latency,$p90,$p99,$p999,$error_rate,$total_requests" >> "${RESULTS_DIR}/summary.csv"
else
echo " → Results saved to $result_file (install jq for metric extraction)"
fi
else
echo -e "${RED}ERROR: Optimized test iteration $iteration failed${NC}"
fi

# Cleanup port-forward
kill $PF_PID 2>/dev/null || true
wait $PF_PID 2>/dev/null || true

# Brief pause between iterations
sleep 5
echo ""
}

# Enhanced results analysis
analyze_optimized_results() {
if ! command -v jq &> /dev/null; then
echo -e "${YELLOW}jq not available. Install jq for detailed analysis.${NC}"
return
fi

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Test Results Analysis${NC}"
echo -e "${GREEN}========================================${NC}"

# Summary CSV header is already written in the main section before the iterations run,
# so don't overwrite the file here (that would wipe the per-iteration rows).

local total_qps=0
local total_latency=0
local total_errors=0
local total_requests=0
local count=0

for result_file in "${RESULTS_DIR}"/iteration_*.json; do
if [[ -f "$result_file" ]]; then
local qps=$(jq -r '.ActualQPS // 0' "$result_file")
local latency=$(jq -r '.DurationHistogram.Avg // 0' "$result_file")
local errors=$(jq -r '.ErrorsDurationHistogram.Count // 0' "$result_file")
local requests=$(jq -r '.DurationHistogram.Count // 0' "$result_file")

if [[ "$qps" != "0" && "$latency" != "0" ]]; then
total_qps=$(echo "$total_qps + $qps" | bc -l 2>/dev/null || echo "$total_qps")
total_latency=$(echo "$total_latency + $latency" | bc -l 2>/dev/null || echo "$total_latency")
total_errors=$(echo "$total_errors + $errors" | bc -l 2>/dev/null || echo "$total_errors")
total_requests=$(echo "$total_requests + $requests" | bc -l 2>/dev/null || echo "$total_requests")
((count++))
fi
fi
done

if [[ $count -gt 0 ]]; then
local avg_qps=$(echo "scale=2; $total_qps / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_latency=$(echo "scale=4; $total_latency / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_errors=$(echo "scale=0; $total_errors / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_requests=$(echo "scale=0; $total_requests / $count" | bc -l 2>/dev/null || echo "N/A")
local success_rate=$(echo "scale=2; (($avg_requests - $avg_errors) / $avg_requests) * 100" | bc -l 2>/dev/null || echo "N/A")

echo "Successful iterations: $count/$ITERATIONS"
echo "Average QPS: $avg_qps"
echo "Average Latency: $avg_latency ms"
echo "Average Errors per test: $avg_errors"
echo "Average Success Rate: $success_rate%"
echo "Target QPS Achievement: $(echo "scale=1; ($avg_qps / $TARGET_QPS) * 100" | bc -l 2>/dev/null || echo "N/A")%"

# Save enhanced final summary
cat > "${RESULTS_DIR}/final_summary.txt" << EOF
Optimized Skupper Fortio Performance Test Results
================================================
Test Configuration:
- Target QPS: $TARGET_QPS
- Duration: $DURATION
- Iterations: $ITERATIONS
- Connections: $FORTIO_CONNECTIONS
- Threads: $FORTIO_THREADS
- Server URL: $SERVER_URL

Optimization Settings:
- HTTP Version: $FORTIO_HTTP_VERSION
- Connection Pool: $FORTIO_CONNECTIONS connections
- Timeout: $FORTIO_TIMEOUT
- Buffer Size: $FORTIO_BUFFER_SIZE bytes
- File Descriptors: $(ulimit -n)

Results:
- Successful iterations: $count/$ITERATIONS
- Average QPS: $avg_qps
- Average Latency: $avg_latency ms
- Average Success Rate: $success_rate%
- Target Achievement: $(echo "scale=1; ($avg_qps / $TARGET_QPS) * 100" | bc -l 2>/dev/null || echo "N/A")%

Cluster Architecture:
- Client: Cluster 1 (Leaf)
- Hub: Cluster 2 (Skupper router)
- Server: Cluster 3 (Leaf)
- Instance Type: g5g.xlarge (4 vCPUs, 16GB RAM, ARM)

Performance Comparison:
- Your Optimized Result: $avg_qps QPS, $avg_latency ms
- Original Istio: 496 QPS, 2.01ms avg latency
- Original Kuma: 886 QPS, 1.13ms avg latency
- Original NSM: 1,332 QPS, 0.74ms avg latency

Optimization Impact:
- QPS Improvement: $(echo "scale=1; (($avg_qps - 1690) / 1690) * 100" | bc -l 2>/dev/null || echo "N/A")% vs baseline
EOF
else
echo -e "${RED}No successful test iterations found.${NC}"
fi
}

# Main execution
echo -e "${YELLOW}Starting optimized test sequence...${NC}"
echo ""

# Initialize enhanced CSV header
echo "Iteration,Timestamp,QPS,Avg_Latency,P90,P99,P99.9,Errors,Total_Requests" > "${RESULTS_DIR}/summary.csv"

# Run optimized test iterations
for i in $(seq 1 $ITERATIONS); do
run_optimized_test_iteration $i
done

# Analyze optimized results
analyze_optimized_results

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Test Completed${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Results saved in: $RESULTS_DIR"
echo ""
echo "Optimization Summary:"
echo " - Connection Pool: $FORTIO_CONNECTIONS connections"
echo " - Threads: $FORTIO_THREADS threads"
echo " - HTTP Version: $FORTIO_HTTP_VERSION"
echo " - File Descriptors: $(ulimit -n)"
echo ""
echo "To view detailed results:"
echo " ls -la $RESULTS_DIR/"
echo " cat ${RESULTS_DIR}/final_summary.txt"


Optimized test iteration 1 completed successfully
  → QPS: 5999.577142203553
  → Avg Latency: 0.0035976515378000028
  → P90: 0.004332794156013566
  → P99: 0.009241448692152909
  → P99.9: 0.01559895833333353
  → Total Requests: 360000
  → Errors: 0
  → Success Rate: N/A%



------

Test 3:

#!/bin/bash

# Optimized Skupper Fortio Performance Test Script
# Enhanced for maximum throughput performance

set -e

# Enhanced Configuration for higher performance
TARGET_QPS=6000
DURATION="60s"
ITERATIONS=10
RESULTS_DIR="skupper_test_results_$(date +%Y%m%d_%H%M%S)"
SERVER_URL="http://fortio-nginx-server-capella" # Skupper-exposed service (same as test 1)

# Fortio optimization parameters
FORTIO_CONNECTIONS=50 # Increase connection pool
FORTIO_THREADS=8 # Match your CPU cores
FORTIO_HTTP_VERSION=2 # Use HTTP/2 for better multiplexing
FORTIO_TIMEOUT=30s # Longer timeout for high load
FORTIO_BUFFER_SIZE=8192 # Larger buffer size

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Skupper Fortio Performance Test${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Target QPS: $TARGET_QPS"
echo "Duration: $DURATION"
echo "Connections: $FORTIO_CONNECTIONS"
echo "Threads: $FORTIO_THREADS"
echo "HTTP Version: $FORTIO_HTTP_VERSION"
echo "Results Directory: $RESULTS_DIR"
echo ""

# Create results directory
mkdir -p "$RESULTS_DIR"

# Optimize system settings for high throughput
optimize_system() {
echo -e "${YELLOW}Optimizing system settings for high throughput...${NC}"

# Increase file descriptor limits
ulimit -n 65536

# TCP optimizations (if running with privileged access)
if [[ $EUID -eq 0 ]]; then
echo "Applying TCP optimizations..."
sysctl -w net.core.somaxconn=65535 2>/dev/null || echo "Warning: Could not set somaxconn"
sysctl -w net.core.netdev_max_backlog=5000 2>/dev/null || echo "Warning: Could not set netdev_max_backlog"
sysctl -w net.ipv4.tcp_max_syn_backlog=65536 2>/dev/null || echo "Warning: Could not set tcp_max_syn_backlog"
sysctl -w net.ipv4.tcp_keepalive_time=600 2>/dev/null || echo "Warning: Could not set tcp_keepalive_time"
sysctl -w net.ipv4.tcp_keepalive_intvl=60 2>/dev/null || echo "Warning: Could not set tcp_keepalive_intvl"
sysctl -w net.ipv4.tcp_keepalive_probes=3 2>/dev/null || echo "Warning: Could not set tcp_keepalive_probes"
else
echo "Non-root user: Skipping system TCP optimizations"
fi

echo "File descriptor limit: $(ulimit -n)"
}

# Check if we're in the right cluster context
echo -e "${YELLOW}Checking cluster context...${NC}"
CURRENT_CONTEXT=$(kubectl config current-context)
echo "Current context: $CURRENT_CONTEXT"

# Verify Skupper status
echo -e "${YELLOW}Checking Skupper status...${NC}"
skupper status || echo "Warning: Skupper status check failed"
echo ""

# Verify pods are running
echo -e "${YELLOW}Checking pod status...${NC}"
kubectl get pods -l app=fortio-client-titan -o wide
echo ""

# Optimize system settings
optimize_system

# Function to run optimized test iteration
run_optimized_test_iteration() {
local iteration=$1
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

echo -e "${YELLOW}Running optimized test iteration $iteration at $timestamp${NC}"

# Start port-forward in background with increased buffer
echo "Starting optimized port-forward to Fortio client..."
kubectl port-forward pod/fortio-client-titan 8080:8080 &
PF_PID=$!

# Wait for port-forward to establish
sleep 5

# Test connectivity first (simple reachability check of the Fortio client through the port-forward)
echo "Testing connectivity to server..."
if ! curl -s --max-time 10 "http://localhost:8080/fortio/" > /dev/null; then
echo -e "${RED}ERROR: Cannot reach Fortio through the port-forward. Skipping iteration $iteration.${NC}"
kill $PF_PID 2>/dev/null || true
return 1
fi

# Enhanced warm-up run (short, low-QPS run through the same REST endpoint to prime connections)
echo "Running enhanced warm-up test..."
curl -s --max-time 30 "http://localhost:8080/fortio/rest/run?qps=1000&t=5s&c=${FORTIO_CONNECTIONS}&url=${SERVER_URL}" > /dev/null || echo "Warning: warm-up run failed"

# Brief pause to let connections stabilize
sleep 3

# Optimized test run with enhanced parameters
echo "Running optimized performance test..."
local result_file="${RESULTS_DIR}/iteration_${iteration}.json"

# Build optimized Fortio URL with all performance parameters
# (base endpoint assumed to be the Fortio REST run API on the port-forwarded client)
local fortio_url="http://localhost:8080/fortio/rest/run"
fortio_url+="?qps=${TARGET_QPS}"
fortio_url+="&t=${DURATION}"
fortio_url+="&c=${FORTIO_CONNECTIONS}"
fortio_url+="&timeout=${FORTIO_TIMEOUT}"
fortio_url+="&compression=false"
fortio_url+="&http2=true"
fortio_url+="&url=${SERVER_URL}"

if curl -s --max-time 180 "$fortio_url" > "$result_file"; then
echo -e "${GREEN}Optimized test iteration $iteration completed successfully${NC}"

# Extract and display key metrics
if command -v jq &> /dev/null; then
local qps=$(jq -r '.ActualQPS // "N/A"' "$result_file")
local avg_latency=$(jq -r '.DurationHistogram.Avg // "N/A"' "$result_file")
local p90=$(jq -r '[.DurationHistogram.Percentiles[]? | select(.Percentile==90) | .Value][0] // "N/A"' "$result_file")
local p99=$(jq -r '[.DurationHistogram.Percentiles[]? | select(.Percentile==99) | .Value][0] // "N/A"' "$result_file")
local p999=$(jq -r '[.DurationHistogram.Percentiles[]? | select(.Percentile==99.9) | .Value][0] // "N/A"' "$result_file")
local error_rate=$(jq -r '.ErrorsDurationHistogram.Count // 0' "$result_file")
local total_requests=$(jq -r '.DurationHistogram.Count // "N/A"' "$result_file")

echo " → QPS: $qps"
echo " → Avg Latency: $avg_latency"
echo " → P90: $p90"
echo " → P99: $p99"
echo " → P99.9: $p999"
echo " → Total Requests: $total_requests"
echo " → Errors: $error_rate"
echo " → Success Rate: $(echo "scale=2; (($total_requests - $error_rate) / $total_requests) * 100" | bc -l 2>/dev/null || echo "N/A")%"

# Log summary to CSV
echo "$iteration,$timestamp,$qps,$avg_latency,$p90,$p99,$p999,$error_rate,$total_requests" >> "${RESULTS_DIR}/summary.csv"
else
echo " → Results saved to $result_file (install jq for metric extraction)"
fi
else
echo -e "${RED}ERROR: Optimized test iteration $iteration failed${NC}"
fi

# Cleanup port-forward
kill $PF_PID 2>/dev/null || true
wait $PF_PID 2>/dev/null || true

# Brief pause between iterations
sleep 5
echo ""
}

# Enhanced results analysis
analyze_optimized_results() {
if ! command -v jq &> /dev/null; then
echo -e "${YELLOW}jq not available. Install jq for detailed analysis.${NC}"
return
fi

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Test Results Analysis${NC}"
echo -e "${GREEN}========================================${NC}"

# summary.csv (with header) is created in the main section and appended to per iteration,
# so do not re-create it here or the collected rows would be wiped.

local total_qps=0
local total_latency=0
local total_errors=0
local total_requests=0
local count=0

for result_file in "${RESULTS_DIR}"/iteration_*.json; do
if [[ -f "$result_file" ]]; then
local qps=$(jq -r '.ActualQPS // 0' "$result_file")
local latency=$(jq -r '.DurationHistogram.Avg // 0' "$result_file")
local errors=$(jq -r '.ErrorsDurationHistogram.Count // 0' "$result_file")
local requests=$(jq -r '.DurationHistogram.Count // 0' "$result_file")

if [[ "$qps" != "0" && "$latency" != "0" ]]; then
total_qps=$(echo "$total_qps + $qps" | bc -l 2>/dev/null || echo "$total_qps")
total_latency=$(echo "$total_latency + $latency" | bc -l 2>/dev/null || echo "$total_latency")
total_errors=$(echo "$total_errors + $errors" | bc -l 2>/dev/null || echo "$total_errors")
total_requests=$(echo "$total_requests + $requests" | bc -l 2>/dev/null || echo "$total_requests")
count=$((count + 1)) # ((count++)) returns non-zero when count is 0 and would abort under set -e
fi
fi
done

if [[ $count -gt 0 ]]; then
local avg_qps=$(echo "scale=2; $total_qps / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_latency=$(echo "scale=4; ($total_latency / $count) * 1000" | bc -l 2>/dev/null || echo "N/A") # Fortio reports seconds; convert to ms
local avg_errors=$(echo "scale=0; $total_errors / $count" | bc -l 2>/dev/null || echo "N/A")
local avg_requests=$(echo "scale=0; $total_requests / $count" | bc -l 2>/dev/null || echo "N/A")
local success_rate=$(echo "scale=2; (($avg_requests - $avg_errors) / $avg_requests) * 100" | bc -l 2>/dev/null || echo "N/A")

echo "Successful iterations: $count/$ITERATIONS"
echo "Average QPS: $avg_qps"
echo "Average Latency: $avg_latency ms"
echo "Average Errors per test: $avg_errors"
echo "Average Success Rate: $success_rate%"
echo "Target QPS Achievement: $(echo "scale=1; ($avg_qps / $TARGET_QPS) * 100" | bc -l 2>/dev/null || echo "N/A")%"

# Save enhanced final summary
cat > "${RESULTS_DIR}/final_summary.txt" << EOF
Optimized Skupper Fortio Performance Test Results
================================================
Test Configuration:
- Target QPS: $TARGET_QPS
- Duration: $DURATION
- Iterations: $ITERATIONS
- Connections: $FORTIO_CONNECTIONS
- Threads: $FORTIO_THREADS
- Server URL: $SERVER_URL

Optimization Settings:
- HTTP Version: $FORTIO_HTTP_VERSION
- Connection Pool: $FORTIO_CONNECTIONS connections
- Timeout: $FORTIO_TIMEOUT
- Buffer Size: $FORTIO_BUFFER_SIZE bytes
- File Descriptors: $(ulimit -n)

Results:
- Successful iterations: $count/$ITERATIONS
- Average QPS: $avg_qps
- Average Latency: $avg_latency ms
- Average Success Rate: $success_rate%
- Target Achievement: $(echo "scale=1; ($avg_qps / $TARGET_QPS) * 100" | bc -l 2>/dev/null || echo "N/A")%

Cluster Architecture:
- Client: Cluster 1 (Leaf)
- Hub: Cluster 2 (Skupper router)
- Server: Cluster 3 (Leaf)
- Instance Type: g5g.xlarge (4 vCPUs, 16GB RAM, ARM)

Performance Comparison:
- Your Optimized Result: $avg_qps QPS, $avg_latency ms
- Original Istio: 496 QPS, 2.01ms avg latency
- Original Kuma: 886 QPS, 1.13ms avg latency
- Original NSM: 1,332 QPS, 0.74ms avg latency

Optimization Impact:
- QPS Improvement: $(echo "scale=1; (($avg_qps - 1690) / 1690) * 100" | bc -l 2>/dev/null || echo "N/A")% vs baseline
EOF
else
echo -e "${RED}No successful test iterations found.${NC}"
fi
}

# Main execution
echo -e "${YELLOW}Starting optimized test sequence...${NC}"
echo ""

# Initialize enhanced CSV header
echo "Iteration,Timestamp,QPS,Avg_Latency,P90,P99,P99.9,Errors,Total_Requests" > "${RESULTS_DIR}/summary.csv"

# Run optimized test iterations
for i in $(seq 1 $ITERATIONS); do
run_optimized_test_iteration $i || true # keep going even if one iteration fails (set -e)
done

# Analyze optimized results
analyze_optimized_results

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN} Optimized Test Completed${NC}"
echo -e "${GREEN}========================================${NC}"
echo "Results saved in: $RESULTS_DIR"
echo ""
echo "Optimization Summary:"
echo " - Connection Pool: $FORTIO_CONNECTIONS connections"
echo " - Threads: $FORTIO_THREADS threads"
echo " - HTTP Version: $FORTIO_HTTP_VERSION"
echo " - File Descriptors: $(ulimit -n)"
echo ""
echo "To view detailed results:"
echo " ls -la $RESULTS_DIR/"
echo " cat ${RESULTS_DIR}/final_summary.txt"


Optimized test iteration 5 completed successfully
  → QPS: 5999.484497194329
  → Avg Latency: 0.004090626600566666
  → P90: 0.0058742508068234224
  → P99: 0.013394621026894853
  → P99.9: 0.025467032967033423
  → Total Requests: 360000
  → Errors: 0
  → Success Rate: N/A%
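(For reading the numbers above: Fortio's DurationHistogram values are in seconds, so an average of ~0.0041 is roughly 4.1 ms at just under 6,000 QPS.)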


I wanted to understand whether these tests are an accurate way to represent Skupper's performance.

Are there more real-world tests that would give a reliable estimate of true performance? If there are any optimisations I should be making, that would also be most helpful to know.
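One variation I'm considering, to take the kubectl port-forward and the Fortio REST hop out of the measured path, is driving the load from inside the client pod itself. A rough sketch (assuming the fortio-client-titan image ships the fortio CLI, and using the same SERVER_URL value as in the script):

# export SERVER_URL first, or substitute the service URL directly
kubectl exec fortio-client-titan -- \
  fortio load -qps 6000 -t 60s -c 50 -timeout 30s -json - "$SERVER_URL"

Would that be a fairer way to line up against the Istio/Kuma/NSM numbers?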

Thanks again.