Today I am running a stateful test on two side-by-side DPDK VMs.
./t-rex-64 -i --astf -f astf/emix2.py -m 1000
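For context, here is the same invocation annotated with what each flag does (flag meanings as I understand them from the TRex manual; the profile path is just from my setup):

```shell
# Launch TRex in interactive ASTF (Advanced Stateful) mode:
#   -i                 interactive mode (controlled via trex-console)
#   --astf             use the Advanced Stateful (ASTF) engine
#   -f astf/emix2.py   ASTF traffic profile to load
#   -m 1000            rate multiplier applied to the profile's baseline CPS
./t-rex-64 -i --astf -f astf/emix2.py -m 1000
```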
As I run this stateful test with a variety of traffic types, here is what I am seeing. Of concern, obviously, are the drop rate and the queue_full count, which is enormous.
I don't yet understand the what and why of the drop rate, but what is queue_full? Does it have something to do with the NIC ring buffers, or with some kind of TRex-internal queuing or buffering?
on VM A:
trex>stats
Global Statistics
connection   : localhost, Port 4501                   total_tx_L2 : 198.73 Mbps       ▼▼▼
version      : ASTF @ v2.97                           total_tx_L1 : 221.7 Mbps        ▼▼▼
cpu_util.    : 91.06% @ 2 cores (1 per dual port)     total_rx    : 143.16 Mbps       ▼▼▼
rx_cpu_util. : 0.0% / 0 pps                           total_pps   : 143.52 Kpps       ▼▼
async_util.  : 0% / 18.93 bps                         drop_rate   : 55.57 Mbps
total_cps.   : 27.03 Kcps ▼                           queue_full  : 46,061,965 pkts
And, on VM B:
trex>stats
Global Statistics
connection   : localhost, Port 4501                   total_tx_L2 : 286.74 Mbps       ▲▲▲
version      : ASTF @ v2.97                           total_tx_L1 : 310.6 Mbps        ▲▲▲
cpu_util.    : 96.2% @ 2 cores (1 per dual port)      total_rx    : 57.21 Mbps
rx_cpu_util. : 0.0% / 0 pps                           total_pps   : 149.13 Kpps       ▼▼
async_util.  : 0% / 58.79 bps                         drop_rate   : 229.52 Mbps
total_cps.   : 60.57 Kcps ▲▲▲                         queue_full  : 439,276,153 pkts
The VMs, by the way, have the following NICs, and only two of the four NICs on each VM are in play, which I presume is because of the way the test profile is written.
Each VM has 4 ports, as follows:
VM A