Looks like vdev bonding might be broken in TRex v3.05


Haklat haklat

Jul 5, 2024, 8:48:31 AM
to TRex Traffic Generator
Hi,
It looks like DPDK bonding might be broken in the newest TRex version: a config that works with v3.04 fails to start with v3.05.

TRex v3.05:
DPDK args
 xx  -l  5,53,7,55  -n  4  --log-level  7  --main-lcore  5  --vdev=net_bonding0,mode=2,slave=0000:89:01.0,slave=0000:89:09.4,xmit_policy=l34  -a  0000:89:01.0  -a  0000:89:09.4  -m  2048  --file-prefix  trex1  
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/trex1/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:89:01.0 (socket 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:89:09.4 (socket 1)
bond_probe(3802) - Invalid args in mode=2,slave=0000:89:01.0,slave=0000:89:09.4,xmit_policy=l34
vdev_probe(): failed to initialize net_bonding0 device
EAL: Bus (vdev) probe failed.

TELEMETRY: No legacy callbacks, legacy socket not created
  size of interfaces_vdevs 2
Failed to find  net_bonding0 in DPDK vdev
 DPDK devices 2 : 2
-----
 0 : vdev 0000:89:01.0
 1 : vdev 0000:89:09.4
-----
Killing Scapy server... Scapy server is killed
Killing Cmds server... Cmds server is killed


TRex v3.04:
DPDK args
 xx  -l  5,53,7,55  -n  4  --log-level  7  --main-lcore  5  --vdev=net_bonding0,mode=2,slave=0000:89:01.1,slave=0000:89:09.2,xmit_policy=l34  -a  0000:89:01.1  -a  0000:89:09.2  -m  2048  --file-prefix  trex1  
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/trex1/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:89:01.1 (socket 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:89:09.2 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
  size of interfaces_vdevs 2
 ===>>>found net_bonding0 2
 dummy interface skipped
 set driver name net_bonding
 driver capability  : TCP_UDP_OFFLOAD  TSO  SLRO
 set dpdk queues mode to MULTI_QUE
 DPDK devices 3 : 3
-----
 0 : vdev 0000:89:01.1
 1 : vdev 0000:89:09.2
 2 : vdev net_bonding0
-----
 Number of ports found: 2 (dummy among them: 1)
 
 
 
 TRex config file...
 
  cat /etc/trex_cfg.yaml
- version: 2
  stack         : linux_based
  interfaces    : ['--vdev=net_bonding0,mode=2,slave=0000:89:01.1,slave=0000:89:09.2,xmit_policy=l34', 'dummy']   # list of the interfaces to bind run ./dpdk_nic_bind.py --status
  port_mtu      : 9000
  prefix        : trex1
  limit_memory  : 2048
#  rx_desc       : 4096
#  tx_desc       : 4096
  port_info     :  # set e.g. ip,gw,vlan,eth mac addr

                 - ip         : 172.100.59.254
                   default_gw : 172.100.59.1
                   vlan       : 1059
  memory:
      mbuf_64: 256000  # Default need increase for scaling field engine traffic to 100 slice networks with 32 addresses each
  platform:
      master_thread_id: 5
      latency_thread_id: 53
      dual_if:
        - socket: 1
          threads: [7,55]

//Thanks Håkan

Haklat haklat

Jul 5, 2024, 10:35:28 AM
to TRex Traffic Generator
Hi,
I think I found where it broke, after looking at this file in v3.04 and v3.05:
dpdk/drivers/net/bonding/rte_eth_bond_pmd.c
It looks like slave has been renamed to member.

v3.05:
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
        "member=<ifc> "
        "primary=<ifc> "
        "mode=[0-6] "
        "xmit_policy=[l2 | l23 | l34] "
        "agg_mode=[count | stable | bandwidth] "
        "socket_id=<int> "
        "mac=<mac addr> "
        "lsc_poll_period_ms=<int> "
        "up_delay=<int> "
        "down_delay=<int>");

v3.04

RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
        "slave=<ifc> "
        "primary=<ifc> "
        "mode=[0-6] "
        "xmit_policy=[l2 | l23 | l34] "
        "agg_mode=[count | stable | bandwidth] "
        "socket_id=<int> "
        "mac=<mac addr> "
        "lsc_poll_period_ms=<int> "
        "up_delay=<int> "
        "down_delay=<int>");

Changing slave to member in the config gets things further, but it then fails with some offloads missing:

DPDK args
 xx  -l  5,53,7,55  -n  4  --log-level  7  --main-lcore  5  --vdev=net_bonding0,mode=2,member=0000:89:01.5,member=0000:89:09.0,xmit_policy=l34  --no-pci  --no-huge  -m  2048  --file-prefix  trex1  

EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/trex1/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: WARNING: Main core has no memory on local socket!

TELEMETRY: No legacy callbacks, legacy socket not created
  set driver name net_bonding
 driver capability  : SLRO
 set dpdk queues mode to MULTI_QUE
 DPDK devices 1 : 1
-----
0 : vdev net_bonding0
-----
Number of ports found: 2 (dummy among them: 1)


if_index : 0
driver name : net_bonding
min_rx_bufsize : 0
max_rx_pktlen  : 16128
max_rx_queues  : 1024
max_tx_queues  : 1024
max_mac_addrs  : 16
rx_offload_capa : 0x0
tx_offload_capa : 0x0
rss reta_size   : 0
flow_type_rss   : 0x2003ffffc
tx_desc_max     : 65535
tx_desc_min     : 0
rx_desc_max     : 65535
rx_desc_min     : 0
zmq publisher at: tcp://*:4500
rx_data_q_num : 0
 rx_drop_q_num : 0
 rx_dp_q_num   : 2
 rx_que_total : 2
 --  
 rx_desc_num_data_q   : 512
 rx_desc_num_drop_q   : 4096
 rx_desc_num_dp_q     : 512
 total_desc           : 1024
 --  
 tx_desc_num     : 1024
port 0 desc: Unknown
Requested TX offload MULTI_SEGS is not supported
Requested RX offload SCATTER is not supported

Killing Scapy server... Scapy server is killed
Killing Cmds server... Cmds server is killed
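
Note that rx_offload_capa and tx_offload_capa are reported as 0x0 above, so any offload that gets requested on the bonded port will be rejected. A minimal sketch (my illustration, not TRex's actual code; the helper name is made up) of what such a check looks like against the DPDK API:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only: query the offload capabilities
 * a port advertises and compare them with the offloads we would like to
 * enable. With the bonded vdev reporting tx_offload_capa = 0x0 and
 * rx_offload_capa = 0x0, both checks fail, which matches the
 * "Requested ... offload ... is not supported" lines above. */
static void check_offloads(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return;

        if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
                printf("Requested TX offload MULTI_SEGS is not supported\n");

        if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER))
                printf("Requested RX offload SCATTER is not supported\n");
}

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        check_offloads(0); /* port 0 is net_bonding0 in the run above */
        return 0;
}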
 
I will be slow in answering mail for some time now.

BR//Håkan

Shlomo Yaakov

Jul 9, 2024, 7:06:23 AM
to TRex Traffic Generator

Hi,

We appreciate you letting us know about this issue. We have created a patch to address it, which will be included in the upcoming release. In the meantime, you can apply the patch from the following link.

Thanks,
Shlomo Yaakov
