TRex CPS on console different from reported CPS on BIG IP Load Balancer


Yusef Skinner

Oct 30, 2023, 11:58:44 AM
to TRex Traffic Generator
I am conducting CPS performance testing on a BIG-IP load balancer.

TRex client ----- BIG IP ----- TRex dest

The issue I am finding is that the targeted CPS is reached on my TRex client, but the CPS reported on the BIG-IP is much lower. This only occurs when the connections are created by TRex; the other test applications I have used do not show this discrepancy.

Here is the current config and the commands used:

clientVM traffic NIC is 192.168.1.4
BIG IP external NIC is 192.168.1.21
BIG IP internal NIC is 192.168.2.21
serverVM traffic NIC is 192.168.2.4 - .19

Client yaml:

- port_limit  : 2
  version : 2
  interfaces : ['2338:00:02.0', 'dummy']
  ext_dpdk_opt : ['--vdev=net_vdev_netvsc0,iface=eth1,ignore=1', 'dummy']
  port_info :
             - ip : 192.168.1.4
               dest_mac : 00:0d:3a:f8:59:4b
             - ip : 0.0.0.0
               dest_mac : 00:0d:3a:f8:59:4b

Dest yaml:

- port_limit      : 2
  version         : 2
  interfaces      : ['dummy', '7ff5:00:02.0']
  ext_dpdk_opt    : ['dummy', '--vdev=net_vdev_netvsc0,iface=eth1,ignore=1']
  port_info       :
             - ip : 0.0.0.0
               dest_mac   : 00:22:48:c2:96:e0
             - ip : 192.168.2.4
               dest_mac   : 00:22:48:c2:96:e0

IP Addresses in http_manual_commands.py on Client:

ip_gen_c = ASTFIPGenDist(ip_range=["192.168.1.4", "192.168.1.4"], distribution="seq")
ip_gen_s = ASTFIPGenDist(ip_range=["192.168.1.21", "192.168.1.21"], distribution="seq")

IP Addresses in http_manual_commands.py on Dest:

ip_gen_c = ASTFIPGenDist(ip_range=["192.168.1.4", "192.168.1.4"], distribution="seq")
ip_gen_s = ASTFIPGenDist(ip_range=["192.168.1.21", "192.168.1.21"], distribution="seq")

Client start TRex Command:
sudo ./t-rex-64 -i -c 1 --no-ofed-check --cfg /etc/trex_cfg_client.yaml -v 7 --astf --astf-client-mask 0x1

start -f astf/http_manual_commands.py -m 500000 -d 60

Server start TRex Command:
sudo ./t-rex-64 -i -c 1 --no-ofed-check --cfg /etc/trex_cfg_server.yaml -v 7 --astf --astf-server-only --no-key

start -f astf/http_manual_commands.py

Results from test run:
TRex-reported CPS: 498k-500k
Connections established: 3,277,379
BIG-IP reported new connections/s: 57k
BIG-IP open connections: 27k <-- expected to be near zero

Concerns:
- TRex-reported CPS not matching the BIG-IP-reported CPS
- Any known limitations when using TRex to test load balancers
- Whether the IPs used are correct for a test scenario like this one

Thank you for any guidance on this!

-Yusef Skinner

hanoh haim

Oct 31, 2023, 10:02:58 AM
to Yusef Skinner, TRex Traffic Generator
Hi Yusef, 
Can you share the TUI Console output from both TRex servers?


Thanks
Hanoh

Yusef Skinner

Oct 31, 2023, 10:54:57 AM
to TRex Traffic Generator
Client:

         |      client       |      server       |
---------------------------+-------------------+-------------------+---------------------------
            m_active_flows |                 0 |                 0 | active open flows
               m_est_flows |                 0 |                 0 | active established flows
              m_tx_bw_l7_r |             0 bps |             0 bps | tx L7 bw acked
        m_tx_bw_l7_total_r |             0 bps |             0 bps | tx L7 bw total
              m_rx_bw_l7_r |             0 bps |             0 bps | rx L7 bw acked
                m_tx_pps_r |             0 pps |             0 pps | tx pps
                m_rx_pps_r |             0 pps |             0 pps | rx pps
                m_avg_size |               0 B |               0 B | average pkt size
                m_tx_ratio |          100.07 % |               0 % | Tx acked/sent ratio
                         - |                   |                   |
                         - |                   |                   |
                       TCP |                   |                   |
                         - |                   |                   |
          tcps_connattempt |           3521087 |                 0 | connections initiated
             tcps_connects |           3291046 |                 0 | connections established
               tcps_closed |           3521087 |                 0 | conn. closed (includes drops)
            tcps_segstimed |          10103179 |                 0 | segs where we tried to get rtt
           tcps_rttupdated |           9873138 |                 0 | times we succeeded
             tcps_sndtotal |          14805385 |                 0 | total packets sent
              tcps_sndpack |           3291046 |                 0 | data packets sent
              tcps_sndbyte |         876750663 |                 0 | data bytes sent by application
           tcps_sndbyte_ok |         819470454 |                 0 | data bytes sent by tcp
              tcps_sndctrl |           3521229 |                 0 | control (SYN|FIN|RST) packets sent
              tcps_sndacks |           7993110 |                 0 | ack-only packets sent
             tcps_rcvtotal |          10071739 |                 0 | total packets received
               tcps_rcvpack |           6582092 |                 0 | packets received in sequence
              tcps_rcvbyte |         421253888 |                 0 | bytes received in sequence
           tcps_rcvackpack |           9873138 |                 0 | rcvd ack packets
           tcps_rcvackbyte |         819470454 |                 0 | tx bytes acked by rcvd acks
        tcps_rcvackbyte_of |           3291046 |                 0 | tx bytes acked by rcvd acks - overflow acked
            tcps_conndrops |            230041 |                 0 | embryonic connections dropped
       tcps_rexmttimeo_syn |           1411016 |                 0 | retransmit SYN timeouts
            tcps_keeptimeo |            223728 |                 0 | keepalive timeouts
            tcps_keepdrops |            223728 |                 0 | connections dropped in keepalive
                         - |                   |                   |
                       UDP |                   |                   |
                         - |                   |                   |
                         - |                   |                   |
               Application |                   |                   |
                         - |                   |                   |
                         - |                   |                   |
                Flow Table |                   |                   |
                         - |                   |                   |
                   err_cwf |             13189 |                 0 | client pkt without flow
                   err_dct |           2478913 |                 0 | duplicate flow - more clients require


Server

        |      client       |      server       |
---------------------------+-------------------+-------------------+---------------------------
            m_active_flows |                 0 |                 0 | active open flows
               m_est_flows |                 0 |                 0 | active established flows
              m_tx_bw_l7_r |             0 bps |             0 bps | tx L7 bw acked
        m_tx_bw_l7_total_r |             0 bps |             0 bps | tx L7 bw total
              m_rx_bw_l7_r |             0 bps |             0 bps | rx L7 bw acked
                m_tx_pps_r |             0 pps |             0 pps | tx pps
                m_rx_pps_r |             0 pps |             0 pps | rx pps
                m_avg_size |               0 B |               0 B | average pkt size
                m_tx_ratio |               0 % |               0 % | Tx acked/sent ratio
                         - |                   |                   |
                         - |                   |                   |
                       TCP |                   |                   |
                         - |                   |                   |
              tcps_accepts |                 0 |           3291046 | connections accepted
             tcps_connects |                 0 |           3291046 | connections established
               tcps_closed |                 0 |           3291084 | conn. closed (includes drops)
            tcps_segstimed |                 0 |           6582092 | segs where we tried to get rtt
           tcps_rttupdated |                 0 |           9873138 | times we succeeded
             tcps_sndtotal |                 0 |           9873896 | total packets sent
              tcps_sndpack |                 0 |           3291046 | data packets sent
              tcps_sndbyte |                 0 |         421253888 | data bytes sent by application
           tcps_sndbyte_ok |                 0 |         421253888 | data bytes sent by tcp
              tcps_sndctrl |                 0 |                17 | control (SYN|FIN|RST) packets sent
              tcps_sndacks |                 0 |           6582833 | ack-only packets sent
             tcps_rcvtotal |                 0 |          13164533 | total packets received
               tcps_rcvpack |                 0 |           6582092 | packets received in sequence
              tcps_rcvbyte |                 0 |         819470454 | bytes received in sequence
           tcps_rcvackpack |                 0 |           9873138 | rcvd ack packets
           tcps_rcvackbyte |                 0 |         421253888 | tx bytes acked by rcvd acks
        tcps_rcvackbyte_of |                 0 |           3291046 | tx bytes acked by rcvd acks - overflow acked
                tcps_drops |                 0 |               142 | connections dropped
            tcps_conndrops |                 0 |                38 | embryonic connections dropped
          tcps_timeoutdrop |                 0 |                17 | conn. dropped in rxmt timeout
       tcps_rexmttimeo_syn |                 0 |                88 | retransmit SYN timeouts
               tcps_badsyn |                 0 |               142 | bogus SYN, e.g. premature ACK
                         - |                   |                   |
                       UDP |                   |                   |
                         - |                   |                   |
                         - |                   |                   |
               Application |                   |                   |
                         - |                   |                   |
                         - |                   |                   |
                Flow Table |                   |                   |
                         - |                   |                   |
                err_no_syn |                 0 |                26 | server first flow packet with no SYN
           err_no_template |                 0 |               484 | server can't match L7 template


Let me know if additional context is needed.
@Hanoch

Yusef Skinner

Nov 2, 2023, 10:01:06 AM
to TRex Traffic Generator
Hi Hanoch, are you able to see the TUI outputs from both TRex servers?

Yusef Skinner

Nov 2, 2023, 10:01:55 AM
to TRex Traffic Generator
Posting again:
On Tuesday, October 31, 2023 at 10:02:58 AM UTC-4 Hanoch Haim wrote:

Yusef Skinner

Nov 8, 2023, 7:01:19 PM
to TRex Traffic Generator
Hi Hanoch,

Do the TUI results point to a configuration error when using TRex to generate traffic through a load balancer?

/Yusef Skinner

On Tuesday, October 31, 2023 at 10:02:58 AM UTC-4 Hanoch Haim wrote:

Dominik Tran

Nov 9, 2023, 6:46:44 AM
to TRex Traffic Generator
Although I am not familiar with the details, could the reason be duplicate flows?

Based on your code snippets, you use the single IP 192.168.1.4 for the client and the single IP 192.168.1.21 for the server. The default L4 destination port is 80, and the L4 source ports are random. Since a flow is typically defined by the IP addresses, L4 ports, and L4 protocol, the only thing that changes per new connection in this setup is the source L4 port.

But there are only about 64k ports. If you generate 500k CPS, you get almost 10 connections per second on any given flow tuple. Some of them can be active at the same time if the previous connection is not terminated before the next one arrives. The err_dct counter from the TUI stats could indicate this.

I do not know the implementation of this load balancer, but maybe it "merges" parallel connections on the same flow tuple and reports them as one? You could try increasing the IP ranges to reduce the err_dct counter, ideally to zero.
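Dominik's arithmetic can be sketched as a back-of-envelope check. Note the ~64k usable port pool and the 1-second average connection lifetime below are assumptions for illustration, not measured values from this setup:

```python
# 5-tuple reuse with one client IP, one server IP, one destination
# port: only the ephemeral source port varies between connections.
EPHEMERAL_PORTS = 65535 - 1024   # approximate usable source-port pool
TARGET_CPS = 500_000             # requested connections per second

# Average rate at which each individual 5-tuple must be reused:
reuse_per_tuple = TARGET_CPS / EPHEMERAL_PORTS   # roughly 7.8 per second

# If a connection (including any TIME_WAIT state held by the DUT)
# lives `lifetime` seconds on average, this many connections are
# expected to overlap on a single tuple:
lifetime = 1.0                   # assumed average lifetime, seconds
overlap = reuse_per_tuple * lifetime

print(f"tuple reuse rate: {reuse_per_tuple:.1f}/s, "
      f"expected overlap per tuple: {overlap:.1f}")
```

Any overlap above zero means the device under test sees a new SYN for a 5-tuple it still considers open, which is consistent with the err_dct counter climbing.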

Yusef Skinner

Nov 9, 2023, 10:43:13 AM
to TRex Traffic Generator
I notice the same behavior at a lower CPS target; for example, at a 60k target the BIG-IP is only "seeing" 35k new conn/s but still keeps 15k "open connections".
The bigger issue I am troubleshooting is that the BIG-IP is holding open a LARGE number of connections. This only occurs with my TRex test setup, presumably because of the traffic pattern being generated (I am not seeing it with other tools).
Also, this type of behavior does not occur in these scenarios (they work successfully):

TRexClient ------- FIREWALL ------- TRexServer
TRexClient -------  TRexServer

New question:
Is there a way to force my TRex client to send a FIN to end a connection?
I want to customize the traffic pattern I am currently using so that I do not have to wait for the connection to be closed "automatically" or when a response is received. I want the connection to close as soon as the client receives the final ACK from the server; the client should then close the TCP connection with a FIN.

-Yusef Skinner

hanoh haim

Nov 10, 2023, 3:54:36 AM
to Yusef Skinner, TRex Traffic Generator
Hi Yusef, 
sorry for the late reply. Yes, there is an issue with the configuration; try reducing the rate to one flow/sec and capturing on both sides to understand the issue:
   
   tcps_conndrops |            230041 |                 0 | embryonic connections dropped
   tcps_rexmttimeo_syn |           1411016 |                 0 | retransmit SYN timeouts
   tcps_keeptimeo |            223728 |                 0 | keepalive timeouts
   tcps_keepdrops |            223728 |                 0 | connections dropped in keepalive



--
Hanoh
Sent from my iPhone

Yusef Skinner

Nov 10, 2023, 10:43:55 AM
to TRex Traffic Generator
Hi Hanoch,

I am looking to change my current configuration. 


New question:
Is there a way to force my TRex client to send a FIN to end a connection?
I want to customize the traffic pattern I am currently using so that I do not have to wait for the connection to be closed "automatically" or when a response is received. I want the connection to close as soon as the client receives the final ACK from the server; the client should then close the TCP connection with a FIN.

-Yusef Skinner

hanoh haim

Nov 12, 2023, 9:36:39 AM
to Yusef Skinner, TRex Traffic Generator
Yes, this is the default behavior. Have a look in the manual at how to tune the connection close.

I suggest first trying this in the lab with a loopback setup (one TRex acting as both client and server), capturing the traffic, and looking into the pcap file.
Thanks
Hanoh

Yusef Skinner

Nov 14, 2023, 10:21:59 AM
to TRex Traffic Generator
Sorry, there was a small typo in my last message.

Overall, I want to "customize" astf/http_manual_commands.py and insert a forced FIN to be sent from the client to the dest. I am not sure where to find an example of this in the manual; could you share a code snippet or a direct link showing how to force-send a FIN?

/Yusef Skinner

hanoh haim

Nov 15, 2023, 8:29:46 AM
to Yusef Skinner, TRex Traffic Generator

Yusef Skinner

Nov 15, 2023, 12:56:59 PM
to TRex Traffic Generator
Thank you Hanoch!
My last query pertains to creating and completing as many TCP connections as possible without requesting or sending any data.

Is it appropriate to just remove the send and recv calls, or is there a script for that as well?

/Yusef Skinner

Yusef Skinner

Nov 15, 2023, 1:07:02 PM
to TRex Traffic Generator
I believe using this flow matches my intention:

prog_c = ASTFProgram()
prog_c.connect()              ## connect
prog_c.reset()                ## send RST from client side

prog_s = ASTFProgram()
prog_s.wait_for_peer_close()  ## wait for client to close the socket
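For reference, that program pair drops into the standard ASTF profile skeleton roughly as follows. This is a sketch modeled on the profiles shipped with TRex (e.g. under astf/); the class name is mine, the IP ranges are the single-address ranges from the thread, and it assumes the trex package from the TRex distribution is on the Python path:

```python
from trex.astf.api import *


class ConnOnlyProfile():
    def get_profile(self, **kwargs):
        # client: open the connection, then reset it immediately (no L7 data)
        prog_c = ASTFProgram()
        prog_c.connect()
        prog_c.reset()

        # server: just wait for the client side to close
        prog_s = ASTFProgram()
        prog_s.wait_for_peer_close()

        # single-address ranges as in the original scripts; widening these
        # spreads the 5-tuples and should shrink the err_dct counter
        ip_gen_c = ASTFIPGenDist(ip_range=["192.168.1.4", "192.168.1.4"],
                                 distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["192.168.1.21", "192.168.1.21"],
                                 distribution="seq")
        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
                           dist_client=ip_gen_c, dist_server=ip_gen_s)

        temp_c = ASTFTCPClientTemplate(program=prog_c, ip_gen=ip_gen)
        temp_s = ASTFTCPServerTemplate(program=prog_s)  # default assoc: port 80
        template = ASTFTemplate(client_template=temp_c, server_template=temp_s)
        return ASTFProfile(default_ip_gen=ip_gen, templates=template)


def register():
    return ConnOnlyProfile()
```

Started with the same `start -f ... -m 500000 -d 60` command as before, this should produce pure connect/reset churn with no HTTP payload.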

hanoh haim

Nov 16, 2023, 2:52:15 AM
to Yusef Skinner, TRex Traffic Generator
You can play with this using the simulator 

Thanks
Hanoh
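For anyone following the thread: the simulator Hanoh mentions ships in the TRex package root as astf-sim and runs a profile without any NICs, writing the simulated client/server exchange to pcap. The flags below are from the trex-core docs and may vary by release; verify with ./astf-sim --help on your build:

```shell
# Run the ASTF profile through the simulator and dump the simulated
# exchange to a pcap file you can open in Wireshark.
cd /opt/trex/current   # wherever your TRex tree is unpacked (path is illustrative)
./astf-sim -f astf/http_manual_commands.py --full -o /tmp/conn_only.pcap
```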
