Can someone explain to me what queue_full is? And how to address it?


Mark Wittling

May 13, 2022, 5:08:38 PM
to TRex Traffic Generator
Today I am running a Stateful test on 2 side-by-side DPDK VMs.

./t-rex-64 -i --astf -f astf/emix2.py -m 1000

As I run this stateful test with a variety of traffic types, this is what I am seeing. The drop rate is an obvious concern, as is this queue_full count, which is enormous.

I don't yet understand the what and why of the drop rate, but what is this queue_full? Does it have something to do with the ring buffer, or some kind of TRex queuing or buffering?

on VM A:
trex>stats
Global Statistics

connection   : localhost, Port 4501                       total_tx_L2  : 198.73 Mbps ▼▼▼
version      : ASTF @ v2.97                               total_tx_L1  : 221.7 Mbps ▼▼▼
cpu_util.    : 91.06% @ 2 cores (1 per dual port)         total_rx     : 143.16 Mbps ▼▼▼
rx_cpu_util. : 0.0% / 0 pps                               total_pps    : 143.52 Kpps ▼▼
async_util.  : 0% / 18.93 bps                             drop_rate    : 55.57 Mbps
total_cps.   : 27.03 Kcps ▼                               queue_full   : 46,061,965 pkts

And, on VM B:
trex>stats
Global Statistics

connection   : localhost, Port 4501                       total_tx_L2  : 286.74 Mbps ▲▲▲
version      : ASTF @ v2.97                               total_tx_L1  : 310.6 Mbps ▲▲▲
cpu_util.    : 96.2% @ 2 cores (1 per dual port)          total_rx     : 57.21 Mbps
rx_cpu_util. : 0.0% / 0 pps                               total_pps    : 149.13 Kpps ▼▼
async_util.  : 0% / 58.79 bps                             drop_rate    : 229.52 Mbps
total_cps.   : 60.57 Kcps ▲▲▲                             queue_full   : 439,276,153 pkts

The VMs, by the way, have the following NICs, and only two of the four NICs on each VM are in play, which I presume is because of the way the test is written.
Each VM has 4 ports, as follows:
VM A

hanoh haim

May 15, 2022, 4:06:44 AM
to Mark Wittling, TRex Traffic Generator
Hi Mark, 
Have you looked into the FAQ?

Thanks
Hanoh


Mark Wittling

May 16, 2022, 9:47:37 AM
to TRex Traffic Generator
You mean this link? https://trex-tgn.cisco.com/trex/doc/trex_faq.html

Searching for the word queue, I found only two mentions of it, and neither was in the context of what the queue_full counter signifies.

Mark Wittling

May 16, 2022, 10:11:31 AM
to TRex Traffic Generator
I ran across this in another post:

"Bigger window is required to overcome the BDP. Higher rate is higher BDP requiring higher window.
However TRex has one core with limited queue size (Tx and Rx) and TCP get to a point that it tries to burst more than the tx queue size and stuck with queue_full.
Try to tune the window size."
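As a rough illustration of the bandwidth-delay product (BDP) that the quote refers to: the TCP window must cover at least rate × RTT worth of bytes to keep the pipe full. The figures below are assumed for the sake of the example, not measured from this test:

```python
# Hypothetical BDP calculation; the 1 Gbps rate and 10 ms RTT are assumed
# example values, not numbers taken from the TRex runs above.

def bdp_bytes(rate_bps, rtt_seconds):
    """Bytes in flight needed to keep a link of rate_bps busy at a given RTT."""
    return rate_bps * rtt_seconds / 8

# Example: 1 Gbps link with a 10 ms round-trip time
bdp = bdp_bytes(1e9, 0.010)   # 1,250,000 bytes in flight
win_kb = bdp / 1024           # the --win tunable below is in KB
print(int(bdp), round(win_kb))
```

If the configured window is much smaller than this, TCP stalls waiting for ACKs; if TRex tries to burst more than its per-core Tx queue can hold, the excess shows up in queue_full.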

I am running the emix2.py test; does this test accept TCP parameters? (i.e. -t mss=5000,size=4000,loop=100000,win=1024,pipe=1)
I tried passing TCP parameters in and it wouldn't take them.

Mark Wittling

May 16, 2022, 10:16:38 AM
to TRex Traffic Generator
Okay, I see that http_eflow2.py takes specific TCP parameters that emix2.py does not.

    def get_profile(self, tunables, **kwargs):
        parser = argparse.ArgumentParser(description='Argparser for {}'.format(os.path.basename(__file__)),
                                         formatter_class=argparse.ArgumentDefaultsHelpFormatter)
        parser.add_argument('--size',
                            type=int,
                            default=1,
                            help='size in KB of each chunk in the loop')
        parser.add_argument('--loop',
                            type=int,
                            default=10,
                            help='how many chunks to download')
        parser.add_argument('--win',
                            type=int,
                            default=32,
                            help='win: in KB, the maximum window size. make it big for BDP')
        parser.add_argument('--pipe',
                            type=int,
                            default=0,
                            help="pipe: don't block on each send; pipeline the sends. Should be 1 for maximum performance.")
        parser.add_argument('--mss',
                            type=int,
                            default=1460,
                            help='the mss of the traffic.')
        args = parser.parse_args(tunables)

        size = args.size
        loop = args.loop
        if loop < 2:
            loop = 2
        mss = args.mss
        assert mss > 0, "mss must be greater than 0"
        win = args.win
        pipe = args.pipe
        return self.create_profile(size, loop, mss, win, pipe)
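Since get_profile runs the tunables through argparse, the parameters arrive as a flat token list rather than the key=value string I was trying. A stand-alone sketch of how that parsing behaves, mirroring the parser definition above (the parser here is a simplified copy, not the actual profile):

```python
import argparse

# Simplified copy of the tunable parser from http_eflow2.py's get_profile,
# to show what token format it actually accepts.
parser = argparse.ArgumentParser(description='http_eflow2.py tunables (sketch)')
parser.add_argument('--size', type=int, default=1)
parser.add_argument('--loop', type=int, default=10)
parser.add_argument('--win',  type=int, default=32)
parser.add_argument('--pipe', type=int, default=0)
parser.add_argument('--mss',  type=int, default=1460)

# Tunables are argparse-style tokens, not comma-separated key=value pairs:
args = parser.parse_args(['--win', '1024', '--pipe', '1', '--mss', '1460'])
print(args.win, args.pipe, args.mss)   # 1024 1 1460
```

With this style, a profile like emix2.py that defines no add_argument calls simply has no tunables to accept, which would explain why passing them fails.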