Throughput tests with ndr_bench_fs_latency


bastien pivot

Apr 6, 2021, 10:33:25 AM
to TRex Traffic Generator
Hi team!

I would like some help interpreting the results of the ndr_bench_fs_latency.py script.
I ran the script 3 times with two packet sizes: imix and 1280 bytes only.
I work in loopback, without a DUT, with an I350 NIC (1 Gb/s).
The output table was different each time. Is there a reason for this?
I can't post screenshots, but:

run 1
imix / Line utilization 43.64% / Total L1 872.89 Mb/s / Total L2 828.15 Mb/s .... / multiplier 43.79%
1280 / Line utilization 43.64% / Total L1 872.89 Mb/s / Total L2 828.15 Mb/s .... / multiplier 43.79%

run 2
imix / Line utilization 99.75% / Total L1 1.99 Gb/s / Total L2 1.89 Gb/s .... / multiplier 100%
1280 / Line utilization 99.74% / Total L1 1.99 Gb/s / Total L2 1.96 Gb/s .... / multiplier 100%

run 3
imix / Line utilization 88.10% / Total L1 1.76 Gb/s / Total L2 1.67 Gb/s .... / multiplier 88.32%
1280 / Line utilization 99.47% / Total L1 1.99 Gb/s / Total L2 1.96 Gb/s .... / multiplier 100%

Is it possible that there is a configuration problem in my DPDK setup?

Also, I would like to know about the line utilization values: is there a link with the theoretical maximum throughput? I don't quite understand Tx utilization.

On the other hand, I have some general questions.
Is there a maximum number of sessions (L7) that can be generated in ASTF mode?
For ASTF tests, I use CapInfo with the l7_percent argument for each stream. Is it better to work with the 'cps' parameter?

I am still discovering how TRex works, so I haven't yet absorbed all of its great features.
Thanks

Bastien
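For reference, the columns in the runs above are related by fixed per-packet overhead: on Ethernet, L1 adds 20 bytes per packet (7B preamble + 1B SFD + 12B inter-frame gap) on top of the L2 frame, and line utilization is total L1 throughput divided by the physical line speed. A minimal sketch of the arithmetic, assuming the 2x 1 Gb/s bi-directional loopback setup described above:

```python
# L1 adds 20 bytes per packet on Ethernet: 7B preamble + 1B SFD + 12B inter-frame gap.
L1_OVERHEAD_BYTES = 20

def l1_from_l2(l2_bps, avg_pkt_bytes):
    """L1 throughput computed from L2 throughput and average packet size."""
    pps = l2_bps / (avg_pkt_bytes * 8.0)          # packets per second
    return l2_bps + pps * L1_OVERHEAD_BYTES * 8.0  # add per-packet overhead bits

def line_utilization_pct(total_l1_bps, line_speed_bps):
    """Line utilization as TRex reports it: total L1 rate over line speed."""
    return 100.0 * total_l1_bps / line_speed_bps

# Run 1 above: Total L1 872.89 Mb/s over two 1 Gb/s ports (bi-directional):
print(round(line_utilization_pct(872.89e6, 2 * 1e9), 2))  # → 43.64
```

This reproduces the 43.64% line utilization shown for run 1, which is why utilization tracks the achieved L1 rate rather than always reading 100%.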

hanoh haim

Apr 6, 2021, 12:36:29 PM
to bastien pivot, TRex Traffic Generator
You probably have random packet drops that stop the script at a different point each time.
Try to run it manually to find the point, and then try to run it using the script.



--
Hanoh
Sent from my iPhone

bastien pivot

Apr 7, 2021, 2:50:53 AM
to TRex Traffic Generator
Ok, thank you Hanoh, I will try this!
If you have time, can you tell me a few words about 'line utilization'? Why isn't it always 100%, and why does it decrease with the size of the tested packets?

Thanks again

Bastien

Besart Dollma

Apr 7, 2021, 10:40:37 AM
to TRex Traffic Generator
NDR means non-drop rate; in other words, we are trying to find the maximal traffic we can transmit without dropping packets.
Clearly we start at 100% of line rate, but if you have drops there, you need to transmit less traffic, hence the line rate drops.
Thanks,
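The search Besart describes can be sketched as a binary search over the transmit rate. Here `has_drops_at` is a hypothetical probe (not part of TRex) that transmits at a given percentage of line rate and reports whether drops occurred:

```python
def find_ndr(has_drops_at, lo=0.0, hi=100.0, resolution=1.0):
    """Binary-search the highest drop-free rate, as a percent of line rate.

    has_drops_at(rate_pct) -> bool is a hypothetical probe: transmit at
    rate_pct of line rate and report whether any packets were dropped.
    """
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        if has_drops_at(mid):
            hi = mid            # drops seen: search lower rates
        else:
            best = lo = mid     # clean run: search higher rates
    return best

# Toy DUT that starts dropping above 88% of line rate:
print(find_ndr(lambda pct: pct > 88.0))   # ≈ 88, within the 1% resolution
```

This is why successive runs that hit random drops at different points (as Hanoh suggested) converge to different utilization figures.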

bastien pivot

Apr 15, 2021, 10:32:45 AM
to TRex Traffic Generator
Hi!
Thanks a lot for your answer; it was not clear to me before.

I have new questions as my project continues.
To tune DPDK, I followed the recommended steps in the DPDK documentation (BIOS and hugepages configuration). Do you have any other advice for DPDK configuration?

Also, I noticed that many packets hit 'queue full' during my throughput tests with the NDR script.
So I decided to start the TRex server with the '--queue-drop' argument, and now all my results are around 95-100%. Is this a good way to fix queue-full packets?
And mainly, I would like to ask: what is the role of the queues, and what does queue-full mean exactly?

Thanks
Regards,

Bastien

Besart Dollma

Apr 17, 2021, 4:52:48 AM
to TRex Traffic Generator
Hi Bastien,
Regarding queue-full: each NIC has queues, and if it can't transmit/receive packets at a given rate, it will queue them and transmit/receive them later. Think of it as buffering. When the queues are full, packets are dropped, since we can't store new ones. There are different approaches to choosing which packets are dropped, for example FIFO. If a packet is dropped because of queue-full, TRex will retransmit it. If you use --queue-drop, it will not.
Anyway, in NDR we offer you the possibility to decide whether queue-full is allowed, and how much of it is allowed. It really depends on what you are trying to test. Check the -q CLI parameter:

parser.add_argument('-q', '--q-full',
                    dest='q_full_resolution',
                    help='Percent of traffic allowed to be queued when transmitting above DUT capability.\n'
                         '0%% q-full resolution is not recommended due to precision issues. [percents 0-100]',
                    default=2.00,
                    type=float)

Thanks,
Bes
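The buffering behaviour Bes describes can be modelled with a toy bounded queue (a sketch for intuition, not TRex code): bursts are absorbed while there is room, and packets that arrive once the queue is full are tail-dropped.

```python
from collections import deque

class ToyNicQueue:
    """Toy model of a NIC queue: buffers bursts, tail-drops when full."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1        # queue full -> packet is dropped
            return False
        self.q.append(pkt)
        return True

    def drain(self, n):
        """The NIC transmits up to n queued packets, freeing slots."""
        for _ in range(min(n, len(self.q))):
            self.q.popleft()

# A burst of 6 packets into a 4-slot queue tail-drops the last 2:
q = ToyNicQueue(capacity=4)
for i in range(6):
    q.enqueue(i)
print(q.dropped)   # → 2
```

In this model, whether a "dropped" packet is retransmitted (TRex's default) or simply forgotten (--queue-drop) is the policy layered on top.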

bastien pivot

Apr 20, 2021, 2:20:51 AM
to TRex Traffic Generator
Hi Bes, 
Thank you for your complete answer. I have indeed changed the value of '--q-full', and my results are closer to the expected ones.
I also noticed important differences in the imix flow test: when I set pdr = 0%, my results are not stable, while if I set pdr = 0.0001%, everything seems consistent.
Can we consider 0.0001% as equivalent to 0%, or is there a real difference for TRex?

Thanks
Have a nice day
Bastien


Besart Dollma

Apr 20, 2021, 6:51:08 AM
to TRex Traffic Generator
Yes, 0.0001 is equivalent to 0. It is stated in the comments and the help text that pdr 0 is not suggested, as it gives unstable results.
Thanks,

bastien pivot

Apr 27, 2021, 10:08:01 AM
to TRex Traffic Generator
Hi Bes , Hanoh,
Thanks to your advice, my tests are working perfectly!
Now I have added a NIC, an Intel 82599ES 10-Gigabit, in order to test at 1 Gb/s and 10 Gb/s over fiber. But during the tests I observed only ''ierrors'' and no ''oerrors'' in the TUI. I launched TRex with no hyper-threading.
So I changed my configuration to:
- version: 2
  c: 4
  interfaces: [PCI1, PCI2]
  port_info:
    - ip: 1.1.1.1
      default_gw: 2.2.2.2
    - ip: 2.2.2.2
      default_gw: 1.1.1.1

  platform:
    master_thread_id: 1
    latency_thread_id: 7
    dual_if:
      - socket: 0
        threads: [2, 3, 4, 5, 6]

Running the TRex server with 5 cores (5 per dual port) reduced the ''ierrors'' values, so I would like to ask: is increasing the number of cores the only way to fix this? I don't have any more cores available at the moment.

Thanks
Regards

Bastien

hanoh haim

Apr 27, 2021, 10:12:10 AM
to bastien pivot, TRex Traffic Generator
I think you are reaching the NIC's limit; it does not support line rate. The X710 supports it.

bastien pivot

Apr 28, 2021, 4:45:12 AM
to TRex Traffic Generator
Hi,
What surprises me is that when I run simple stateless scripts from trex-console, like ''imix.py'' or ''udp_for_benchmarks.py'', I get ierrors even if I launch the scripts with the -m 1% argument. At that point the rate is around 190 Mbps, so the NIC should support this easily.
Do you have any other ideas, please?
To better understand ierrors, can you tell me a few words about how TRex generates these errors?

Thanks 
Regards

Bastien

Besart Dollma

Apr 30, 2021, 2:43:15 AM
to TRex Traffic Generator
Hi Bastien,
ierrors are usually L1 errors like CRC, undersize or oversize packets, jabber, etc.
It might be that the NICs are problematic, or the cable. Another option is that the driver is problematic, but nobody but you has reported that, so the chances are low.
Thanks,

bastien pivot

Apr 30, 2021, 5:48:39 AM
to TRex Traffic Generator
Hi Bes,
Thanks again for your advice!
After checking, I found I didn't have the latest driver version for the NIC, so I upgraded it.
I also changed the cables and SFP modules; the ierrors count decreased, but some remained.
By chance, when I disconnected the fiber and then had to do ARP resolution from trex-console, everything started to work well. I can launch scripts at 100% of line rate (10 Gb/s) from trex-console. I can't exactly explain why it's working now.

Another thing: the NIC supports both 1 and 10 Gb/s. When I run a throughput test with ndr_bench_fs_latency.py, TRex considers 100% line utilization to be 2 Gb/s bi-directional. Is there any way to specify 10 Gb/s before the tests?

Have a good day
Regards 

Bastien

Besart Dollma

Apr 30, 2021, 7:22:29 AM
to TRex Traffic Generator
Hi, 
I understand you are running in STL. The initial rate is 100%, meaning line rate. If TRex sees the interfaces are synced at 10 Gbps, it will transmit at 10 Gbps. In your case, the interfaces are synced at 1 Gbps because of one of the NICs, and as such it transmits at that rate.
Thanks,

bastien pivot

May 10, 2021, 4:32:17 AM
to TRex Traffic Generator
Hi Bes, 
I resolved my issues last week: my computer didn't have enough hugepages and didn't support bi-directional traffic.
Now I'm trying to sync the 10G NIC to 1G, but even if I change the link speed with ethtool, the TRex server starts with 10G link speed port attributes.
Do you know how to tell the server to start with a 1G link speed? I didn't find the information in the TRex docs.

Thanks and have a good day!

Bastien

Besart Dollma

May 10, 2021, 5:37:18 AM
to TRex Traffic Generator
Hi Bastien, 
1) ethtool won't help; ethtool changes apply to the Linux driver, but if you are using DPDK (which I suppose you are), the driver is provided by DPDK.
2) Can you please connect with the console and post the output of portattr -a?
3) Do you need bi-directional traffic at 1 Gbps? If not, simply change the direction.
4) You might want to play with https://doc.dpdk.org/guides/linux_gsg/index.html in order to change the speed of the port after the interface is bound (I haven't tried doing it).
Thanks

bastien pivot

May 10, 2021, 7:29:06 AM
to TRex Traffic Generator
1) Yes, that's what I thought; I am indeed using DPDK.
2) This is the output:
trex>portattr -a
Port Status

     port       |          0           |          1           
----------------+----------------------+---------------------
driver          |      net_ixgbe       |      net_ixgbe       
description     |  82599ES 10-Gigabit  |  82599ES 10-Gigabit  
link status     |          UP          |          UP          
link speed      |       10 Gb/s        |       10 Gb/s        
port status     |         IDLE         |         IDLE         
promiscuous     |         off          |         off          
multicast       |         off          |         off          
flow ctrl       |         none         |         none         
vxlan fs        |          -           |          -           
--              |                      |                      
layer mode      |         IPv4         |         IPv4         
src IPv4        |       1.1.1.1        |       2.2.2.2        
IPv6            |         off          |         off          
src MAC         |  f8:f2:1e:af:XX:X8   |  f8:f2:1e:af:XX:X9   
---             |                      |                      
Destination     |       2.2.2.2        |       1.1.1.1        
ARP Resolution  |  f8:f2:1e:af:XX:X9   |  f8:f2:1e:af:XX:X8   
----            |                      |                      
VLAN            |          -           |          -           
-----           |                      |                      
PCI Address     |     0000:05:00.0     |     0000:05:00.1     
NUMA Node       |          0           |          0           
RX Filter Mode  |    hardware match    |    hardware match    
RX Queueing     |         off          |         off          
Grat ARP        |         off          |         off          
------          |                      |                      

trex>

3) Ideally I would like to run bi-directional tests at 1 Gb/s Ethernet, 1 Gb/s fiber, and 10 Gb/s fiber (not at the same time, of course). That's why I use a 1G and a 10G NIC, with the aim of easily changing the speed of the fiber card.
4) You are probably right, the DPDK docs should offer solutions!

Thanks

Besart Dollma

May 10, 2021, 11:38:57 AM
to TRex Traffic Generator
I see.
Then the best solution is indeed to configure the port speed using DPDK. I hope there is some way to do it; I haven't tried. Worst case, the driver is included in TRex, so you can poke around the code and change the speed.

bastien pivot

May 11, 2021, 3:07:47 AM
to TRex Traffic Generator
Hi !
I will look for solutions with DPDK first. Since the driver is included in TRex, can you tell me where to look in the trex-core-master folder?
Sometimes, when I change the fiber cable (even using the same type of cable), TRex sets the link speed to 1 Gb/s after the command: trex(service)> arp
Would it be possible to modify or add a command in the trex-console code to make the ARP request select a certain speed?

Thanks

Besart Dollma

May 14, 2021, 3:06:23 AM
to TRex Traffic Generator
Hi, 
I don't see how ARP has anything to do with link speed.
As for the drivers you can find them here: https://github.com/cisco-system-traffic-generator/trex-core/tree/master/src/drivers
Thanks,

bastien pivot

May 20, 2021, 2:23:46 AM
to TRex Traffic Generator
Hi Bes,
I found a solution: I just swapped my SFPs for 1G modules. I think this is the simplest way to control the link speed.
Also, I don't have any ierrors now, so you were right: I had some hardware defects, not on the NIC but on my SFP+ modules.

Thanks again for your advice!

Bastien

Besart Dollma

May 20, 2021, 6:17:26 AM
to TRex Traffic Generator
Glad you achieved what you wanted :)
You are welcome.
Bes

bastien pivot

Jun 22, 2021, 3:06:12 AM
to TRex Traffic Generator
Hi Bes,
I'm coming back to you with a simple question: I want to use TRex in ASTF mode and generate TCP sessions from a single PCAP, in order to determine the maximum number of sessions the DUT supports. Are there any scripts already implemented in TRex for that?
Thank you for your interest.

Bastien

Besart Dollma

Jun 25, 2021, 2:43:50 AM
to TRex Traffic Generator
Hi Bastien, 
Create a PCAP file with only one TCP session! After that, TRex can replay that session with different IPs.
Thanks,
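A minimal ASTF profile sketch along those lines, following the shape of the example profiles bundled with TRex. The pcap filename and IP ranges below are illustrative placeholders; the cps value sets how many connections per second the session is replayed at (the alternative is sizing by l7_percent, as discussed earlier in the thread):

```python
from trex.astf.api import (ASTFProfile, ASTFCapInfo, ASTFIPGen,
                           ASTFIPGenDist, ASTFIPGenGlobal)

class SingleSessionReplay:
    def get_profile(self, **kwargs):
        # Client/server IP ranges TRex cycles through to vary the replayed session
        ip_gen = ASTFIPGen(
            glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
            dist_client=ASTFIPGenDist(ip_range=["16.0.0.1", "16.0.0.254"],
                                      distribution="seq"),
            dist_server=ASTFIPGenDist(ip_range=["48.0.0.1", "48.0.0.254"],
                                      distribution="seq"))
        # one_session.pcap is a placeholder: a capture containing exactly one TCP flow
        return ASTFProfile(default_ip_gen=ip_gen,
                           cap_list=[ASTFCapInfo(file="one_session.pcap",
                                                 cps=100)])

def register():
    return SingleSessionReplay()
```

Loading this in ASTF mode and ramping the multiplier up until the DUT starts failing sessions is one way to probe its maximum session count.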