Trex with 25G NICs


rke...@gmail.com

Aug 8, 2018, 7:29:11 PM
to TRex Traffic Generator
Hi,

I am using ENA 25G NICs. With iperf3/nuttcp I can get between 18 Gbps and 20 Gbps with varying packet sizes. With TRex it is always stuck below 10 Gbps. What config needs to be changed for TRex to generate 25 Gbps of traffic? Irrespective of the 'port_bandwidth' value in /etc/trex_cfg.yaml, the following output is seen when TRex is started:

link         :  link : Link Up - speed 10000 Mbps - full-duplex

/etc/trex_cfg.yaml

### Config file generated by dpdk_setup_ports.py ###
- port_limit: 2
  version: 2
  interfaces: ["00:06.0","00:07.0"]
  limit_memory    : 4096
  c                : 1
  port_bandwidth_gb : 40  <<<< Doesn't matter if it is "10" or "25" or "40"
  port_info       :
    - src_mac    :   [0x02,0xe6,0x3c,0x3b,0x09,0x52]
      dest_mac   :   [0x02,0x47,0x59,0x58,0xd4,0x64]
    - src_mac    :   [0x02,0x99,0x33,0x2c,0x90,0x2e]
      dest_mac   :   [0x02,0xba,0x9b,0xe1,0x6e,0x34]
  platform :
      master_thread_id  : 1
      latency_thread_id : 3
      dual_if   :
           - socket   : 0
             threads  : [4,5,6,7]


============Trex startup logs===============
sudo ./t-rex-64 -i -v 8
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
The ports are bound/configured.
Starting  TRex v2.43 please wait  ...
Using configuration file /etc/trex_cfg.yaml
 port limit     :  2
 port_bandwidth_gb    :  40
 if_mask        : None
 is low-end : 0
 stack type : 
 limit_memory        : 4096
 thread_per_dual_if      : 1
 if        :  00:06.0, 00:07.0,
 enable_zmq_pub :  1
 zmq_pub_port   :  4500
 m_zmq_rpc_port    :  4501
 src     : 02:e6:3c:3b:09:52
 dest    : 02:47:59:58:d4:64
 src     : 02:99:33:2c:90:2e
 dest    : 02:ba:9b:e1:6e:34
 memory per 2x10G ports 
 MBUF_64                                   : 16380
 MBUF_128                                  : 8190
 MBUF_256                                  : 8190
 MBUF_512                                  : 8190
 MBUF_1024                                 : 8190
 MBUF_2048                                 : 4095
 MBUF_4096                                 : 128
 MBUF_9K                                   : 512
 TRAFFIC_MBUF_64                           : 65520
 TRAFFIC_MBUF_128                          : 32760
 TRAFFIC_MBUF_256                          : 8190
 TRAFFIC_MBUF_512                          : 8190
 TRAFFIC_MBUF_1024                         : 8190
 TRAFFIC_MBUF_2048                         : 32760
 TRAFFIC_MBUF_4096                         : 128
 TRAFFIC_MBUF_9K                           : 512
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 master   thread  : 1 
 rx  thread  : 3 
 dual_if : 0
    socket  : 0 
   [   4   5   6   7     ] 
CTimerWheelYamlInfo does not exist 
 set driver name net_ena
 driver capability  : TCP_UDP_OFFLOAD
 Number of ports found: 2
zmq publisher at: tcp://*:4500
 wait 1 sec .
port : 0
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0
port : 1
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0
 number of ports         : 2
 max cores for 2 ports   : 1
 max queue per port      : 3
 -------------------------------
RX core uses TX queue number 0 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0 
 -------------------------------

-Per port stats table
      ports |               0 |               1
 -----------------------------------------------------------------------------------------
   opackets |        30320767 |               0
     obytes |     47109808806 |               0
   ipackets |               0 |        29204404
     ibytes |               0 |     45375297750
    ierrors |               0 |         1086020
    oerrors |               0 |               0
      Tx Bw |       9.30 Gbps |       0.00  bps

-Global stats enabled
 Cpu Utilization : 89.8  %  20.7 Gb/core
 Platform_factor : 1.0 
 Total-Tx        :       9.30 Gbps 
 Total-Rx        :       9.01 Gbps 
 Total-PPS       :     748.60 Kpps 
 Total-CPS       :       0.00  cps 

 Expected-PPS    :       0.00  pps 
 Expected-CPS    :       0.00  cps 
 Expected-BPS    :       0.00  bps 

 Active-flows    :        0  Clients :        0   Socket-util : 0.0000 %   
 Open-flows      :        0  Servers :        0   Socket :        0 Socket/Clients :  -nan
 Total_queue_full : 32437998        
 drop-rate       :       0.00  bps  
 current time    : 50.0 sec 
 test duration   : 0.0 sec 


Thanks.

hanoh haim

Aug 9, 2018, 12:03:04 AM
to rke...@gmail.com, TRex Traffic Generator
You should forward this question to the DPDK ML.
Are you getting the iperf3 result while sending traffic from *one* interface?

From what I can see, the DPDK driver reports an interface speed of only 10Gb and is signaling back-pressure to software (queue full).
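As context for the question above, iperf3 results depend heavily on how the test was run. A hedged sketch (the addresses are placeholders, not from this thread) of pinning the test to one interface and comparing a single stream against parallel streams:

```shell
# Placeholder addresses: 198.51.100.1 is bound to the interface under
# test on the sender, 198.51.100.2 on the receiver.
iperf3 -s -B 198.51.100.2 &                          # receiver, bound to one interface
iperf3 -c 198.51.100.2 -B 198.51.100.1 -t 30         # single TCP stream
iperf3 -c 198.51.100.2 -B 198.51.100.1 -t 30 -P 8    # 8 parallel streams (-P)
```

A single TCP stream often cannot saturate a 25G link, so the 18-20 Gbps iperf3 number may already reflect multiple streams or connections.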

Thanks,
Hanoh


--
You received this message because you are subscribed to the Google Groups "TRex Traffic Generator" group.
To unsubscribe from this group and stop receiving emails from it, send an email to trex-tgn+u...@googlegroups.com.
To post to this group, send email to trex...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/trex-tgn/e27d3c41-1819-41a3-a32a-c3c4ce03b1dd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
--
Hanoh
Sent from my iPhone

rke...@gmail.com

Aug 9, 2018, 12:30:52 AM
to TRex Traffic Generator
I am using the same setup and the same interfaces when testing iperf3/nuttcp/TRex. I am not sure what you mean by "Are you getting the iperf3 result while sending traffic from *one* interface?"

AWS ENA NICs do not report link speed; the following ENA discussion has a possible explanation.


Having said that, both iperf3 and nuttcp can detect the NIC speed and pump traffic accordingly on 10G/25G NICs, so why not TRex? Before I post this question on the DPDK ML, I would like to understand it correctly from the TRex perspective. In the configuration file /etc/trex_cfg.yaml I specify the port bandwidth as '25'; shouldn't that take effect irrespective of the link speed reported by the NIC?

hanoh haim

Aug 9, 2018, 3:43:33 AM
to rke...@gmail.com, TRex Traffic Generator
Hi, 

The DPDK driver is a separate code path for interacting with the NIC in user space. It would be very strange for DPDK to be slower than the kernel driver.
The link you sent is for the kernel driver, which may behave totally differently.
With i40vf we could achieve ~10 MPPS with one DP core.
 
see here:

10 MPPS at 1500B/packet should theoretically give ~120 Gb/sec, so there is another bottleneck that is not yet understood (maybe there is a policer/limit on a single tx-queue).
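The arithmetic in the claim above can be verified directly:

```shell
# 10 MPPS at 1500 B/packet, 8 bits per byte:
pps=10000000
bytes_per_pkt=1500
gbps=$(( pps * bytes_per_pkt * 8 / 1000000000 ))
echo "${gbps} Gb/s"   # prints "120 Gb/s"
```

So if the NIC were only limited by the measured packet rate, line rate at 25G would be reached with plenty of headroom, which supports the suspicion of a per-queue policer.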

I would suggest verifying DPDK using pkt-gen, which is part of DPDK, and then asking the question there.
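One way to do this kind of verification, sketched here as an assumption rather than taken from the thread, is with testpmd, the reference forwarding application that ships with DPDK itself (pktgen-dpdk is a separate project). The PCI addresses below match the trex_cfg.yaml shown earlier:

```shell
# Hedged sketch: measure raw TX capability of the ENA ports with DPDK's
# testpmd (DPDK 17.11-era options; -w whitelists PCI devices).
sudo ./testpmd -l 0-3 -n 4 -w 00:06.0 -w 00:07.0 -- -i
# Then at the testpmd> prompt:
#   set fwd txonly
#   start
#   show port stats all
```

If testpmd also tops out near 10 Gbps, the limit is in the PMD or the underlying hardware/hypervisor, not in TRex.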

thanks,
Hanoh 

hanoh haim

Aug 9, 2018, 5:57:41 AM
to rke...@gmail.com, TRex Traffic Generator
Hi, 
BTW, I see that this issue is still open:
https://github.com/amzn/amzn-drivers/issues/45

It might be related to your issue.

thanks,
Hanoh

hanoh haim

Aug 9, 2018, 6:00:08 AM
to rke...@gmail.com, TRex Traffic Generator
And even better (TRex and pkt-gen):

https://github.com/amzn/amzn-drivers/issues/68

rke...@gmail.com

Aug 9, 2018, 7:17:00 AM
to TRex Traffic Generator
Thanks for your help, but I don't think the links you sent for the Amazon driver issues are related to my testing.

I am now building the TRex 2.43 code to understand why the links are reported as 10 Gbps instead of 25 Gbps. However, I run into issues with the TRex 2.43 code I built, while the installed 2.43 binary works fine. In the src/dpdk directory I only see the 'lib' and 'drivers' directories. I would like to enable PMD driver debug logging (which is usually controlled via the DPDK config file). Can you tell me how to enable that?
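Since TRex vendors the DPDK sources without the usual config/common_base build configuration, one blunt approach, offered here purely as a hedged sketch, is to define the ENA debug macros directly before rebuilding. The macro names are taken from DPDK's drivers/net/ena/ena_logs.h for the 17.11-era driver and the paths assume a standard trex-core checkout; verify both against the bundled copy:

```shell
# Hypothetical sketch: force-enable ENA PMD RX/TX debug logging by
# defining the compile-time debug macros at the top of the driver's
# log header, then rebuild TRex with its own build script.
cd trex-core/src/dpdk/drivers/net/ena
sed -i '1i #define RTE_LIBRTE_ENA_DEBUG_RX\n#define RTE_LIBRTE_ENA_DEBUG_TX' ena_logs.h
cd ../../../../../linux_dpdk
./b build
```

This is a local hack, not an upstream-supported switch; revert it after debugging.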

===========Compiled TRex 2.43 code, errors out====================

/home/rkerur/trex-core/scripts# sudo ./t-rex-64 -i -v 8
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
The ports are bound/configured.
Starting  TRex v2.43 please wait  ...
Using configuration file /etc/trex_cfg.yaml
 port limit     :  2
 port_bandwidth_gb    :  25
EAL: Error - exiting with code: 1
  Cause: Cannot configure device: err=-22, port=0



==================Installed v2.43 binary================

/home/rkerur/trex/v2.43# sudo ./t-rex-64 -i -v 8
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
The ports are bound/configured.
Starting  TRex v2.43 please wait  ...
Using configuration file /etc/trex_cfg.yaml
 port limit     :  2
 port_bandwidth_gb    :  25
   opackets |               0 |               0
     obytes |               0 |               0
   ipackets |               0 |               0
     ibytes |               0 |               0
    ierrors |               0 |               0
    oerrors |               0 |               0
      Tx Bw |       0.00  bps |       0.00  bps


hanoh haim

Aug 9, 2018, 8:14:27 AM
to rke...@gmail.com, TRex Traffic Generator


Try running the ./t-rex-64-debug image with -v 7.

If that does not work, try this file

(this is the latest build; it wasn't released with 18.08)

rke...@gmail.com

Aug 9, 2018, 12:19:21 PM
to TRex Traffic Generator
Hi,

I am thinking the problem is with the TRex generator when a 25G ENA NIC is used; explanation below. Unless you tell me that TRex is looking for something specific from the ENA PMD driver, I will check on the DPDK ML. Let me know your inputs.

The DUT (Device Under Test) has 2 DPDK ENA interfaces based on dpdk-17.11.2.
Using iperf3/nuttcp as the traffic generator on the ENA interfaces (one for Tx and one for Rx), I get close to 20 Gbps for TCP/UDP, varying with packet size.

The above tests show that the ENA kernel driver (as traffic generator) and the ENA DPDK PMD (on the DUT) can sustain 20 Gbps.

Replacing iperf3/nuttcp with TRex, the maximum I get is 10 Gbps. I downloaded the TRex code, modified the ENA driver to report 25 Gbps as the link speed (though the DUT has the same DPDK code base and works fine for both 10G and 25G), and increased the number of MBUFs allocated. When I start the modified TRex it shows the following output, but traffic is still below 10 Gbps.

sudo ./t-rex-64 -i -v 8
Killing Scapy server... Scapy server is killed
Starting Scapy server.... Scapy server is started
The ports are bound/configured.
Starting  TRex v2.43 please wait  ...
Using configuration file /etc/trex_cfg.yaml
 port limit     :  2
 port_bandwidth_gb    :  25
 if_mask        : None
 is low-end : 0
 stack type :
 thread_per_dual_if      : 1
 if        :  00:06.0, 00:07.0,
 enable_zmq_pub :  1
 zmq_pub_port   :  4500
 m_zmq_rpc_port    :  4501
 src     : 02:e6:3c:3b:09:52
 dest    : 02:47:59:58:d4:64
 src     : 02:99:33:2c:90:2e
 dest    : 02:ba:9b:e1:6e:34
 memory per 2x10G ports
 MBUF_64                                   : 32760
 MBUF_128                                  : 16380
 MBUF_256                                  : 16380
 MBUF_512                                  : 16380
 MBUF_1024                                 : 16380
 MBUF_2048                                 : 8190
 MBUF_4096                                 : 128
 MBUF_9K                                   : 512
 TRAFFIC_MBUF_64                           : 131040
 TRAFFIC_MBUF_128                          : 65520
 TRAFFIC_MBUF_256                          : 16380
 TRAFFIC_MBUF_512                          : 16380
 TRAFFIC_MBUF_1024                         : 16380
 TRAFFIC_MBUF_2048                         : 65520
 TRAFFIC_MBUF_4096                         : 128
 TRAFFIC_MBUF_9K                           : 512
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 master   thread  : 1
 rx  thread  : 3
 dual_if : 0
    socket  : 0
   [   4   5   6   7     ]
CTimerWheelYamlInfo does not exist
 set driver name net_ena
 driver capability  : TCP_UDP_OFFLOAD
 Number of ports found: 2
zmq publisher at: tcp://*:4500
rte_eth_dev_flow_ctrl_get: Function not supported
rte_eth_led_on: Function not supported
rte_eth_dev_set_link_up: Function not supported
rte_eth_stats_reset: Function not supported
rte_eth_stats_reset: Function not supported
rte_eth_promiscuous_disable: Function not supported
rte_eth_allmulticast_disable: Function not supported
rte_eth_dev_flow_ctrl_get: Function not supported
rte_eth_led_on: Function not supported
rte_eth_dev_set_link_up: Function not supported
rte_eth_stats_reset: Function not supported
rte_eth_stats_reset: Function not supported
rte_eth_promiscuous_disable: Function not supported
rte_eth_allmulticast_disable: Function not supported
 wait 1 sec .
port : 0
------------
link         :  link : Link Up - speed 25000 Mbps - full-duplex
promiscuous  : 0
port : 1
------------
link         :  link : Link Up - speed 25000 Mbps - full-duplex
promiscuous  : 0
 number of ports         : 2
 max cores for 2 ports   : 1
 max queue per port      : 3
 -------------------------------
RX core uses TX queue number 0 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------



hanoh haim

Aug 9, 2018, 1:36:48 PM
to rke...@gmail.com, TRex Traffic Generator
The DUT working at 20Gb means the Rx->Tx path can do 20Gb, but it does not mean the Tx->Rx path will be the same. It is a different path and different code.

The ENA driver guys could help you. The main *problem* from the TRex perspective is the back-pressure from the NIC (the queue-full indication).

Try reducing the rate and TRex CPU utilization will drop dramatically.

TRex can do 200 MPPS; look into the driver limitation.
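Reducing the rate in practice means lowering the traffic multiplier. A hedged sketch using TRex's standard stateful flags (the cap2/dns.yaml profile ships with TRex; the multiplier values are illustrative):

```shell
# -f : traffic profile, -c : DP cores, -m : rate multiplier, -d : duration (s)
sudo ./t-rex-64 -f cap2/dns.yaml -c 1 -m 1 -d 60     # low rate baseline
sudo ./t-rex-64 -f cap2/dns.yaml -c 1 -m 100 -d 60   # step the rate up
```

Watching Total_queue_full while stepping -m up shows the rate at which the NIC starts pushing back, independent of CPU headroom.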

Thanks,
Hanoh

