Latency stats don't show all the packets, though the capture has all the packets

chris...@gmail.com

Sep 29, 2025, 9:21:10 AM
to TRex Traffic Generator
Hi All,

I am using TRex stateless, v3.06.

I am running TRex inside a Kubernetes pod.
My cfg is:
- version: 2
  interfaces: ["<pci id>","dummy"]
  stack     : linux_based
  platform:
    master_thread_id: 4
    latency_thread_id: 5
    dual_if:
      - socket: 0
        threads: [36,37]
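
(For context: TRex is started against this file with the usual ./t-rex-64 -i --cfg /tmp/trex_cfg.yaml invocation; the startup log at the bottom shows the file being picked up.)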


I am sending more packets from TRex than the shaper will pass, to test QoS shaping in the UL direction. After shaping I expect 31,647 packets, but I am seeing only 31,221, even though a pcap taken on the TRex side has all 31,647 packets. This happens when the buffer size on the DUT is 512 or 4096; when the buffer size on the DUT is 64 it works fine.

Expected = 31,647, Received = 31,221, Delta = 426
I checked the last 4 bytes added by TRex and they seem to match in all packets.
In stats -l I see some drops, but the count doesn't add up to 426. What are those seq_too_high drops? Also, what is this error: Errors | 5.73 K? Can someone please help here? (A sketch of reading the same counters from the Python API follows the console output below.)


trex(service)>stats -s
Streams Statistics

  PG ID    |        17
-----------+------------------
Tx pps     |             0 pps
Tx bps L2  |             0 bps
Tx bps L1  |             0 bps
---        |
Rx pps     |             0 pps
Rx bps     |             0 bps
----       |
opackets   |             37460
ipackets   |             31221
obytes     |          56190000
ibytes     |          46956384
-----      |
opackets   |       37.46 Kpkts
ipackets   |       31.22 Kpkts
obytes     |          56.19 MB
ibytes     |          46.96 MB

trex(service)>stats -l
Latency Statistics

   PG ID     |       17
-------------+---------------
TX pkts      |          37460
RX pkts      |          31221
Max latency  |         616084
Min latency  |             16
Avg latency  |         615556
-- Window -- |
Last max     |         616054
Last-1       |              0
Last-2       |              0
Last-3       |              0
Last-4       |              0
Last-5       |              0
Last-6       |              0
Last-7       |              0
Last-8       |              0
Last-9       |              0
Last-10      |              0
Last-11      |              0
Last-12      |              0
Last-13      |              0
---          |
Jitter       |            308
----         |
Errors       |         5.73 K

trex(service)>stats --lh
Latency Histogram

   PG ID     |       17
-------------+---------------
600000       |          28258
500000       |            492
400000       |            492
300000       |            493
200000       |            492
100000       |            492
90000        |             49
80000        |             49
70000        |             49
60000        |             50
50000        |             49
40000        |             49
30000        |             49
20000        |             49
10000        |             50
9000         |              5
8000         |              5
- Counters - |
dropped      |           5726
dup          |              0
out_of_order |              0
seq_too_high |           5726
seq_too_low  |              0

trex(service)>
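
For reference, this is roughly how the same counters can be read from the STL Python API (a minimal sketch, assuming the stats dict layout documented for recent STL releases; the server address is a placeholder):

from trex.stl.api import STLClient

c = STLClient(server='127.0.0.1')   # placeholder address
c.connect()

stats = c.get_stats()
pg = 17

# flow stats -- same numbers as 'stats -s'
flow = stats['flow_stats'][pg]
print('tx total    :', flow['tx_pkts']['total'])
print('rx total    :', flow['rx_pkts']['total'])

# latency error counters -- same numbers as 'stats -l' / 'stats --lh'
err = stats['latency'][pg]['err_cntrs']
print('dropped     :', err['dropped'])
print('seq_too_high:', err['seq_too_high'])
print('seq_too_low :', err['seq_too_low'])

c.disconnect()
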
My stream:
# !!! Auto-generated code !!!
from trex.stl.api import *
from scapy.contrib.gtp import *

class STLS1(object):
    def get_streams(self, port_id = 0, direction = 0, **kwargs):
        streams = []

        packet = (Ether(dst='d2:7f:1f:c3:39:ff', src='ae:a8:65:bd:6a:fa', type=33024) /
                  Dot1Q(vlan=321, type=35153) /
                  <custom header> /
                  <custom header> /
                  GTP /
                  Raw(load=b'a' * 1446))

        vm = STLVM()
        stream = STLStream(packet = STLPktBuilder(pkt = packet, vm = vm),
                           stream_id = 17,
                           name = '<>',
                           mode = STLTXCont(pps = 1000),
                           flow_stats = STLFlowLatencyStats(pg_id = 17))
        streams.append(stream)


        return streams

def register():
    return STLS1()
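
For completeness, the profile can be driven from the STL Python client along these lines (a minimal sketch; the server address and the 30-second run are placeholders, and STLS1 is the class above):

from trex.stl.api import STLClient

c = STLClient(server='127.0.0.1')   # placeholder address; TRex runs in the same pod
c.connect()
c.reset(ports=[0])

# attach the stream defined in the profile above to port 0
c.add_streams(STLS1().get_streams(port_id=0), ports=[0])

# run the continuous stream for a fixed window, then read the flow stats
c.start(ports=[0], duration=30)
c.wait_on_traffic(ports=[0])
print(c.get_stats()['flow_stats'][17])

c.disconnect()
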
TRex startup logs:
The ports are bound/configured.
Starting  TRex v3.06 please wait  ...
Using configuration file /tmp/trex_cfg.yaml
 port limit     :  not configured
 port_bandwidth_gb    :  10
 port_speed           :  0
 port_mtu             :  0
 if_mask        : None
 is low-end : 0
 stack type : linux_based
 thread_per_dual_if      : 1
 if        :  6c:05.6, dummy,
 enable_zmq_pub :  1
 zmq_pub_port   :  4500
 m_zmq_rpc_port    :  4501
 memory per 2x10G ports
 MBUF_64                                   : 16380
 MBUF_128                                  : 8190
 MBUF_256                                  : 8190
 MBUF_512                                  : 8190
 MBUF_1024                                 : 8190
 MBUF_2048                                 : 4095
 MBUF_4096                                 : 128
 MBUF_9K                                   : 512
 TRAFFIC_MBUF_64                           : 65520
 TRAFFIC_MBUF_128                          : 32760
 TRAFFIC_MBUF_256                          : 8190
 TRAFFIC_MBUF_512                          : 8190
 TRAFFIC_MBUF_1024                         : 8190
 TRAFFIC_MBUF_2048                         : 32760
 TRAFFIC_MBUF_4096                         : 128
 TRAFFIC_MBUF_9K                           : 512
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 master   thread  : 15
 rx  thread  : 16
 dual_if : 0
    socket  : 0
   [   47   48     ]
CTimerWheelYamlInfo does not exist
 flags           : 8010f00
 write_file      : 0
 verbose         : 7
 realtime        : 1
 flip            : 0
 cores           : 1
 single core     : 0
 flow-flip       : 0
 no clean close  : 0
 zmq_publish     : 1
 vlan mode       : 0
 client_cfg      : 0
 mbuf_cache_disable  : 0
 cfg file        :
 mac file        :
 out file        :
 client cfg file :
 duration        : 0
 factor          : 1
 mbuf_factor     : 1
 latency         : 0 pkt/sec
 zmq_port        : 4500
 telnet_port     : 4501
 expected_ports  : 2
 tw_bucket_usec  : 20.000000 usec
 tw_buckets      : 1024 usec
 tw_levels       : 3 usec
 port : 0 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 1 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 2 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 3 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 4 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 5 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 6 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 7 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 8 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 9 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 10 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 11 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 12 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 13 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 14 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 15 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 16 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 17 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 18 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 19 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 20 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 21 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 22 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 23 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 24 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 25 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 26 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 27 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 28 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 29 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 30 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 port : 31 dst:00:00:00:01:00:00  src:00:00:00:00:00:00
 Total Memory :
 MBUF_64                                   : 81900
 MBUF_128                                  : 40950
 MBUF_256                                  : 16380
 MBUF_512                                  : 16380
 MBUF_1024                                 : 16380
 MBUF_2048                                 : 36855
 MBUF_4096                                 : 1024
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 get_each_core_dp_flows                    : 524288
 Total memory                              :     248.40 Mbytes
 core_list : 15,16,47
 sockets : 0
 active sockets : 1
 ports_sockets : 1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
 phy   |   virt
 47      1
DPDK args
 xx  -l  15,16,47  -n  4  --log-level  8  --main-lcore  15  -a  <pci id>  --legacy-mem
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:6c:05.6 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
                 input : [pci id, dummy]
                 dpdk : [pci id]
             pci_scan : [pci id]
                  map : [ 0, 255]
 TRex port mapping
 -----------------
 TRex vport: 0 dpdk_rte_eth: 0
 TRex vport: 1 dpdk_rte_eth: 255
 set driver name net_iavf
 driver capability  : TCP_UDP_OFFLOAD  TSO  SLRO
 set dpdk queues mode to ONE_QUE
 DPDK devices 1 : 1
-----
 0 : vdev pc id
-----
 Number of ports found: 2 (dummy among them: 1)


if_index : 0
driver name : net_iavf
min_rx_bufsize : 1024
max_rx_pktlen  : 9728
max_rx_queues  : 256
max_tx_queues  : 256
max_mac_addrs  : 64
rx_offload_capa : 0x8266f
tx_offload_capa : 0x119fbf
rss reta_size   : 64
flow_type_rss   : 0x3ffc
tx_desc_max     : 4096
tx_desc_min     : 64
rx_desc_max     : 4096
rx_desc_min     : 64
zmq publisher at: tcp://*:4500
 rx_data_q_num : 1
 rx_drop_q_num : 0
 rx_dp_q_num   : 0
 rx_que_total : 1
 --
 rx_desc_num_data_q   : 512
 rx_desc_num_drop_q   : 4096
 rx_desc_num_dp_q     : 512
 total_desc           : 512
 --
 tx_desc_num     : 1024
port 0 desc: Ethernet Adaptive Virtual Function
 rx_qid: 0 (512)
iavf_configure_queues(): request RXDID[22] in Queue[0]
 wait 1 sec .
port : 0
------------
link         :  link : Link Up - speed 100000 Mbps - full-duplex
promiscuous  : 0
 number of ports         : 2
 max cores for 2 ports   : 1
 tx queues per port      : 3
 -------------------------------
RX core uses TX queue number 65535 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0
 -------------------------------
base stack ctor
Locking cleanup file /var/lock/trex_cleanup
Cleanup of old namespaces related to Linux-based stack
Cleaning IFs with postfixes T
Cleaning IFs with postfixes L
Cleanup Done
Using netns prefix trex-a-
add port node
add node
Run pending tasks for ticket 0
Running in BG
*******Pretest after resolving ********
Pre test info start ===================
Port 0:
  Sources:
    ip: 0.0.0.0 mac: 0a:09:02:47:5e:1d
  Destinations:
Port 1:
  Sources:
  Destinations:
Pre test info end ===================
 add_node_internal   0a:09:02:47:5e:1d
Namespaced node ctor
Linux node ctor
Pre test statistics for port 0
 opackets                                 : 0
 obytes                                   : 0
 ipackets                                 : 12
 ibytes                                   : 768
 m_tx_arp                                 : 0
 m_rx_arp                                 : 0
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Linux handle_pkt: got broadcast or multicast
Run pending tasks for ticket 0
No pending tasks
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0
Linux handle_pkt: got broadcast or multicast
Packet filtered out by Vlan Filter
sent: 0