TRex - Unable to send traffic at line rate


devisr...@gmail.com

Dec 14, 2018, 4:17:47 PM
to TRex Traffic Generator
Dear Team,

On my Intel server I use two 40G ports connected back to back, and I am doing performance testing with TRex using IPv6 packets. I cannot send packets at line rate when the packets are small (size < 1000 bytes); I can only reach line rate when the packet size is larger (size > 1000 bytes).

For a packet size of 152 bytes I should be able to send 28.5 million packets per second; however, over a test period of 60 seconds it only reached 12 million packets per second, with line utilization of ~15 Gbps. Even though the line capacity is 40 Gbps and can carry 28.5 Mpps, it never goes beyond 12 Mpps or 15 Gbps. I am already using 7 cores for TRex.
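As a sanity check on the 28.5 Mpps figure: each Ethernet frame also costs a preamble and an inter-frame gap on the wire, and the quoted number matches a 152-byte frame counted without its 4-byte FCS. A minimal sketch of the arithmetic (the byte counts are the standard Ethernet ones, not something stated in this thread):

```python
def max_pps(frame_len_no_fcs, line_rate_bps=40e9):
    """Theoretical max packets/sec on an Ethernet link.

    frame_len_no_fcs: L2 frame length in bytes, excluding the 4-byte FCS.
    On the wire each frame also costs 4 (FCS) + 8 (preamble) + 12 (IFG) bytes.
    """
    wire_bits = (frame_len_no_fcs + 4 + 8 + 12) * 8
    return line_rate_bps / wire_bits

print(round(max_pps(152) / 1e6, 1))  # 28.4 -- close to the 28.5 Mpps quoted above
```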


One more observation: if I use STLScVmRaw / STLVmFlowVar to create the packet streams, I see an improvement in PPS and line utilization, but it is not a big improvement, only about +5 Mpps.
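For reference, a field-engine stream of the kind mentioned above might look like the following. This is only a sketch in the style of stl/bench.py: it needs a TRex install (trex_stl_lib is not a standalone package), and the addresses, ports, and choice of varied field are illustrative assumptions, not taken from this thread.

```python
# Sketch only -- requires a TRex install (trex_stl_lib); the packet contents
# and the varied field below are illustrative assumptions.
from trex_stl_lib.api import *

def udp_ipv6_stream(size=152):
    base = Ether() / IPv6(src="2001:db8::1", dst="2001:db8::2") / UDP(sport=1025, dport=12)
    pad = max(0, size - len(base)) * 'x'
    # Vary the UDP source port per packet; cache_size pre-builds the rewritten
    # packets so the field engine cost is not paid on every send.
    vm = STLScVmRaw([STLVmFlowVar(name="sport", min_value=1025, max_value=65000,
                                  size=2, op="inc"),
                     STLVmWrFlowVar(fv_name="sport", pkt_offset="UDP.sport")],
                    cache_size=255)
    return STLStream(packet=STLPktBuilder(pkt=base / pad, vm=vm),
                     mode=STLTXCont(percentage=100))
```

With `cache_size` set, TRex pre-builds a ring of rewritten packets instead of running the field engine on every packet, which cuts per-packet CPU cost and may account for part of the PPS improvement observed.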

Is there any configuration or fine tuning I should do to achieve line rate for smaller packet sizes?

Thanks,
Devis Reagan

hanoh haim

Dec 15, 2018, 10:56:56 AM
to devisr...@gmail.com, TRex Traffic Generator
I would compare the numbers to this benchmark document.


Thanks,
Hanoh

--
You received this message because you are subscribed to the Google Groups "TRex Traffic Generator" group.
To unsubscribe from this group and stop receiving emails from it, send an email to trex-tgn+u...@googlegroups.com.
To post to this group, send email to trex...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/trex-tgn/b092e0b9-5f42-4fbb-832e-8bffde6bf9b0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
--
Hanoh
Sent from my iPhone

Devis Reagan

Dec 17, 2018, 10:54:59 AM
to TRex Traffic Generator
Thanks for the update.

I have tried as per the report; below is the summary.

Finding summary: I can see line rate (40G) for packet sizes 1514 and 590 as in the report; however, for packet sizes 128 (multiplier 99.5%) and 64 (multiplier 31.5 Mpps) there is very high loss at the given multipliers. Please find the attached results for 590, 128, and 64.

Please let me know what changes or tuning are needed to achieve the reported line rate and Mpps for packet sizes 128 and 64. I am not sure what I am missing; I followed the report you shared.

My topology is: TRex port 0 [40G] -> 40G QSFP optic -> TRex port 1 [40G].
I used stl/bench.py for traffic generation, the default file in the TRex stl directory. There is no switch between the TRex ports; they are directly connected.

Please check the attached doc for the report.

Thanks,
Devis Reagan

trex-logs.docx

hanoh haim

Dec 17, 2018, 4:03:19 PM
to Devis Reagan, TRex Traffic Generator
In our benchmark we use *one* port from each dual-port NIC (2 out of 4 ports, 2 NICs).
A 2x40Gb NIC can actually deliver only 40Gb, not 80Gb.
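For context on why both 40G ports of one NIC cannot run at line rate simultaneously: the limit is the PCIe slot budget. A quick back-of-the-envelope check, assuming a PCIe Gen3 x8 connection (which is what the XL710 uses) with its 128b/130b encoding:

```python
# Why a dual-port 40G NIC cannot sustain 2x40G: the PCIe budget.
# Assumes a Gen3 x8 slot; Gen3 runs 8 GT/s per lane with 128b/130b encoding.
lanes, gt_per_lane = 8, 8e9
usable = gt_per_lane * 128 / 130     # per-lane payload rate after encoding
total_gbps = lanes * usable / 1e9
print(round(total_gbps, 1))          # 63.0 -- well short of 2x40G = 80G
```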

If this is not the case, please provide your topology, the output of ./dpdk_setup_ports.py -t, and trex_cfg.yaml.

Hanoh


Devis Reagan

Dec 18, 2018, 12:52:19 AM
to hanoh haim, TRex Traffic Generator
Hi, thanks a lot for the update. Yes, I am using one port from each dual-port NIC; I am not using the second port since it would not deliver 80G anyway.
Please find the attached sheet with the dpdk_setup_ports.py output for more clarity, in case the inline output below does not render well.
I hope you saw my earlier attached report showing the TRex output for the 128- and 64-byte tests; the problem is only with 128- and 64-byte packets, and the other packet sizes match your report.

Setup is : Trex Port 0 of NIC 1 <--40G QSFP---> Trex Port 0 of NIC 2

Note: I have modified the MACs below for internal reasons.
[root@Trex scripts]# ./dpdk_setup_ports.py  -i
By default, IP based configuration file will be created. Do you want to use MAC based config? (y/N)y
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| ID | NUMA |   PCI   |        MAC        |                   Name                    | Driver  | Linux IF  |  Active  |
+====+======+=========+===================+===========================================+=========+===========+==========+
| 0  | 0    | 18:00.0 | aa:aa:aa:b5:77:ec | Ethernet Controller X710 for 10GbE SFP+   | i40e    | enp24s0f0 |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 1  | 0    | 18:00.1 | aa:aa:aa:b5:77:ed | Ethernet Controller X710 for 10GbE SFP+   | i40e    | enp24s0f1 |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 2  | 0    | 1a:00.0 | aa:aa:aa:b5:72:b0 | Ethernet Controller X710 for 10GbE SFP+   | i40e    | enp26s0f0 |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 3  | 0    | 1a:00.1 | aa:aa:aa:b5:72:b1 | Ethernet Controller X710 for 10GbE SFP+   | i40e    | enp26s0f1 |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 4  | 0    | 3d:00.0 | aa:aa:aa:51:44:64 | Ethernet Connection X722 for 10GBASE-T    | i40e    | eno1      | *Active* |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 5  | 0    | 3d:00.1 | aa:aa:aa:51:44:65 | Ethernet Connection X722 for 10GBASE-T    | i40e    | eno2      |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 6  | 1    | 86:00.0 | aa:aa:aa:c9:ec:f8 | Ethernet Controller XL710 for 40GbE QSFP+ | igb_uio |           |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 7  | 1    | 86:00.1 |                   | Ethernet Controller XL710 for 40GbE QSFP+ |         |           |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 8  | 1    | af:00.0 | aa:aa:aa:c3:e3:98 | Ethernet Controller XL710 for 40GbE QSFP+ | igb_uio |           |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
| 9  | 1    | af:00.1 |                   | Ethernet Controller XL710 for 40GbE QSFP+ |         |           |          |
+----+------+---------+-------------------+-------------------------------------------+---------+-----------+----------+
Please choose even number of interfaces from the list above, either by ID , PCI or Linux IF
Stateful will use order of interfaces: Client1 Server1 Client2 Server2 etc. for flows.
Stateless can be in any order.
For performance, try to choose each pair of interfaces to be on the same NUMA.
Enter list of interfaces separated by space (for example: 1 3) : 6 8
For interface 6, assuming loopback to it's dual interface 8.
Destination MAC is aa:aa:aa:c3:e3:98. Change it to MAC of DUT? (y/N).n
For interface 8, assuming loopback to it's dual interface 6.
Destination MAC is aa:aa:aa:c9:ec:f8. Change it to MAC of DUT? (y/N).n
Print preview of generated config? (Y/n)y
### Config file generated by dpdk_setup_ports.py ###
- version: 2
  interfaces: ['86:00.0', 'af:00.0']
  port_bandwidth_gb: 40
  port_info:
      - dest_mac: aa:aa:aa:c3:e3:98 # MAC OF LOOPBACK TO IT'S DUAL INTERFACE
        src_mac:  aa:aa:aa:c9:ec:f8
      - dest_mac: aa:aa:aa:c9:ec:f8 # MAC OF LOOPBACK TO IT'S DUAL INTERFACE
        src_mac:  aa:aa:aa:c3:e3:98
  platform:
      master_thread_id: 0
      latency_thread_id: 1
      dual_if:
        - socket: 1
          threads: [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,54,55,56,57,58,59,60,61,62,63]

Save the config to file? (Y/n)y
Default filename is /etc/trex_cfg.yaml
Press ENTER to confirm or enter new file: trex_cfg_new.yaml
Saved to trex_cfg_new.yaml.
[root@Trex scripts]#

The above is the output generated by dpdk_setup_ports.py, but I changed it as below because I have dual sockets; everything else is unchanged.
- port_limit: 2
  version: 2
  interfaces: ['86:00.0', 'af:00.0']
  port_info:
      - dest_mac: aa:aa:aa:c3:e3:98 # MAC OF LOOPBACK TO IT'S DUAL INTERFACE
        src_mac:  aa:aa:aa:c9:ec:f8
      - dest_mac: aa:aa:aa:c9:ec:f8 # MAC OF LOOPBACK TO IT'S DUAL INTERFACE
        src_mac:  aa:aa:aa:c3:e3:98

  platform:
      master_thread_id: 0
      latency_thread_id: 1
      dual_if:
        - socket: 0
          threads: [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35]
        - socket: 1
          threads: [54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71]
System info :
================
[root@Trex scripts]# lscpu | grep CPU
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                72
On-line CPU(s) list:   0-71
CPU family:            6
CPU MHz:               1303.812
CPU max MHz:           3700.0000
CPU min MHz:           1000.0000
NUMA node0 CPU(s):     0-17,36-53
NUMA node1 CPU(s):     18-35,54-71
[root@Trex scripts]#
[root@Trex scripts]# lscpu | grep Core
Core(s) per socket:    18
[root@Trex scripts]# lscpu | grep core
Thread(s) per core:    2
[root@Trex scripts]#

Thanks,
Devis Reagan
trex-forum.txt

hanoh haim

Dec 18, 2018, 1:02:38 AM
to Devis Reagan, TRex Traffic Generator
Hi Devis,
The configuration generated by the script is the right one; try to keep it.
Your NICs are both located on NUMA 1, and you provided NUMA 0 threads for both --> this explains the low performance (it is the worst possible configuration).
You could try one more thing that will improve it even further: move one of the NICs to a slot on NUMA 0, so that one NIC sits on NUMA 0 and one on NUMA 1.
Give all 4 ports to TRex but use only two; this way you will utilize all the resources you have.
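The NUMA mismatch described above can be verified directly from sysfs and the lscpu output quoted later in this thread. A small helper sketch (the CPU list is taken from the lscpu output in this thread; the sysfs path is the standard Linux location for a PCI device's NUMA node):

```python
from pathlib import Path

def parse_cpu_list(spec):
    """Expand an lscpu-style CPU list such as '18-35,54-71' into a list of ints."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.extend(range(lo, hi + 1))
        else:
            cpus.append(int(part))
    return cpus

def nic_numa_node(pci_addr):
    """NUMA node the kernel reports for a PCI device, e.g. '0000:86:00.0'."""
    return int(Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node").read_text())

# From the lscpu output in this thread, NUMA node 1 owns these CPUs;
# the TRex DP threads for the XL710s should come from this set.
node1_cpus = parse_cpu_list("18-35,54-71")
```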

Devis Reagan

Dec 18, 2018, 9:01:20 AM
to hanoh haim, TRex Traffic Generator
Thanks for the info. I could not test today; I will try these changes and report back tomorrow.

Thanks ,
Devis Reagan

Devis Reagan

Dec 19, 2018, 1:48:42 AM
to hanoh haim, TRex Traffic Generator
Hi Hanoh,
I have tried giving all 4 ports in the configuration and used only two ports to send traffic. I also used the config generated by the tool as-is.
I can send at line rate for 590-byte and also 200-byte packets, but the 128-byte size still sees huge loss with ierrors; please refer to the attached screenshots.
One thing I have not done is physically moving the card to NUMA 0. Let me know whether this physical move can increase performance for the 128-byte packet size, and whether there are any other suggestions that could improve performance at 128 bytes.

[root@Trex scripts]# cat /etc/trex_cfg_19th_new.yaml

### Config file generated by dpdk_setup_ports.py ###
- version: 2
  interfaces: ['86:00.0', '86:00.1', 'af:00.0', 'af:00.1']
  port_bandwidth_gb: 40
  port_info:
      - dest_mac: xx:xx:xx:c3:e3:98 # 86:00.0
        src_mac:  xx:xx:xx:c9:ec:f8
      - dest_mac: xx:xx:xx:c3:e3:99 # 86:00.1
        src_mac:  xx:xx:xx:c9:ec:f9
      - dest_mac: xx:xx:xx:c9:ec:f8 # af:00.0
        src_mac:  xx:xx:xx:c3:e3:98
      - dest_mac: xx:xx:xx:c9:ec:f9 # af:00.1
        src_mac:  xx:xx:xx:c3:e3:99
  platform:
      master_thread_id: 0
      latency_thread_id: 1
      dual_if:
        - socket: 1
          threads: [18,19,20,21,22,23,24,54,55,56,57,58,59,60]
        - socket: 1
          threads: [25,26,27,28,29,30,31,32,33,34,35,61,62,63]
[root@Trex scripts]#

Thanks,
Devis Reagan
trex-19th-log.docx

hanoh haim

Dec 19, 2018, 2:56:25 AM
to Devis Reagan, TRex Traffic Generator
Hi Devis,
Very nice and thorough document -- it helped me see the issue.
It looks like you are now fine on NUMA but are hitting the new XL710 firmware issue (capped around 27 Mpps).

Have a look here for more information:
Cisco firmware can reach 42 Mpps, so either try to downgrade or buy from Cisco.

thanks,
Hanoh 


Devis Reagan

Dec 19, 2018, 12:28:27 PM
to hanoh haim, TRex Traffic Generator
Hi Hanoh,

As you pointed out, this may be a firmware issue. I will check on a firmware downgrade and confirm.


Thanks ,
Devis Reagan

Devis Reagan

Jan 8, 2019, 1:17:26 AM
to hanoh haim, TRex Traffic Generator
Hi Hanoh,

I have tried firmware version 5.0.5 and can now see Mpps beyond 27 Mpps, matching your benchmark report. In particular, for 128-byte packets we see ~full line utilization and the reported Mpps. The test results are attached for reference.

Thanks for the support.

Thanks,
Devis Reagan
trex-new-log.docx

hanoh haim

Jan 8, 2019, 1:27:43 AM
to Devis Reagan, TRex Traffic Generator
Hi Devis,
I've just got an answer from Intel and will update the bug (no workaround).
Could you share how you downgraded the Intel firmware, for others' sake?

Thanks,
Hanoh

hanoh haim

Dec 4, 2019, 9:07:33 AM
to Devis Reagan, TRex Traffic Generator
Hi Devis,
Intel provided a fix; would you like to test it (better late than never)?

thanks
Hanoh

