Hi team!
I'd like some help interpreting the results of the script ndr_bench_fs_latency.py.
I ran the script 3 times for two packet sizes: imix and 1280 bytes only.
I am running in loopback, without a DUT, on an I350 NIC (1 Gb/s per port).
The output table was different each time. Is there a reason for this?
I can't post screenshots, but here are the relevant columns:
run 1
imix / Line utilization 43.64% / Total L1 872.89 Mb/s / Total L2 828.15 Mb/s / .... / multiplier 43.79%
1280 / Line utilization 43.64% / Total L1 872.89 Mb/s / Total L2 828.15 Mb/s / .... / multiplier 43.79%
run 2
imix / Line utilization 99.75% / Total L1 1.99 Gb/s / Total L2 1.89 Gb/s / .... / multiplier 100%
1280 / Line utilization 99.74% / Total L1 1.99 Gb/s / Total L2 1.96 Gb/s / .... / multiplier 100%
run 3
imix / Line utilization 88.10% / Total L1 1.76 Gb/s / Total L2 1.67 Gb/s / .... / multiplier 88.32%
1280 / Line utilization 99.47% / Total L1 1.99 Gb/s / Total L2 1.96 Gb/s / .... / multiplier 100%
Could there be a configuration problem on the DPDK side (the kernel driver binding, for example)?
Also, about the line utilization values: how do they relate to the theoretical maximum throughput of the link? I don't quite understand the Tx utilization figure.
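For what it's worth, here is how I currently read the numbers; please correct me if this is wrong. A small sketch, assuming L1 adds 20 bytes per frame (8 B preamble + 12 B inter-frame gap) on top of L2, and that line utilization is total L1 over the combined line rate of the two loopback ports (2 x 1 Gb/s):

# Sketch of my current understanding of the L1/L2/utilization relationship.
# Assumption: L1 = L2 plus 20 bytes of per-frame overhead, and
# utilization = total L1 / total line rate of both ports.
FRAME_BYTES = 1280          # fixed frame size used in the test
OVERHEAD_BYTES = 20         # preamble + inter-frame gap per frame
LINE_RATE_BPS = 2 * 1e9     # two 1 Gb/s ports in loopback

l2_bps = 1.96e9             # total L2 from run 2
pps = l2_bps / (FRAME_BYTES * 8)
l1_bps = l2_bps + pps * OVERHEAD_BYTES * 8
print(f"L1 = {l1_bps / 1e9:.2f} Gb/s")                 # ~1.99 Gb/s
print(f"utilization = {l1_bps / LINE_RATE_BPS:.2%}")   # ~99.5%

With that reading, run 2 and the 1280-byte case of run 3 are essentially at line rate, and run 1 is the outlier.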
Separately, I have some general questions.
Is there a maximum number of L7 sessions that can be generated in ASTF mode?
For ASTF tests, I use ASTFCapInfo with the l7_percent argument for each stream. Would it be better to work with the cps parameter instead? A trimmed-down version of my profile is shown below.
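In case the details matter, this is roughly the shape of my profile (the pcap file names and the percentage values are placeholders, not my real ones; the pattern follows the standard ASTF profile examples):

from trex.astf.api import (ASTFProfile, ASTFCapInfo, ASTFIPGen,
                           ASTFIPGenDist, ASTFIPGenGlobal)

class Prof1():
    def get_profile(self, **kwargs):
        # standard client/server IP generators, as in the bundled examples
        ip_gen_c = ASTFIPGenDist(ip_range=["16.0.0.0", "16.0.0.255"], distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["48.0.0.0", "48.0.255.255"], distribution="seq")
        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
                           dist_client=ip_gen_c, dist_server=ip_gen_s)
        return ASTFProfile(default_ip_gen=ip_gen,
                           cap_list=[
                               # what I do today: weight each pcap by its
                               # share of the total L7 bytes
                               ASTFCapInfo(file="stream_a.pcap", l7_percent=70),
                               ASTFCapInfo(file="stream_b.pcap", l7_percent=30),
                               # the alternative I am asking about: an absolute
                               # connections-per-second rate per pcap
                               # (my understanding is that cps and l7_percent
                               # cannot be mixed in one cap_list)
                               # ASTFCapInfo(file="stream_a.pcap", cps=100),
                           ])

def register():
    return Prof1()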
I am still discovering how TRex works, so I haven't absorbed all of its features yet.
Thanks
Bastien