TRex v3.00 with E810-XXV and SR-IOV doesn't work


Oleg Kashtanov

Jan 23, 2023, 8:44:00 AM
to TRex Traffic Generator
Hi!
I'm trying to use TRex v3.00 with an E810-XXV in SR-IOV mode with ASTF.
There are no errors during the TRex daemon start, but during the test I only see an increasing oerrors counter in trex-console, and no RX/TX packets on the Cisco Nexus connected to the TRex host.
[screenshot attachment: Annotation 2023-01-23 162957.png]
Without SR-IOV, TRex works fine on the same physical port.
Any ideas how to debug this?
Thank you in advance!
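(For context, the PF side shows the relevant per-VF settings and, depending on the driver and iproute2 version, per-VF counters; these are standard commands, with ens9f0 being the PF from the ip a output below:)

ip link show dev ens9f0       # per-VF MAC, VLAN, spoof checking, trust state
ip -s link show dev ens9f0    # adds per-VF packet/drop counters when the driver reports them
ethtool -S ens9f0             # PF-level counters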
My /etc/trex_cfg.yaml:
### Config file generated by dpdk_setup_ports.py ###

- version: 2
  interfaces: ['17:01.0', '17:01.1']
  c: 16
  port_info:
      - dest_mac: be:82:92:78:7d:f9 # MAC OF LOOPBACK TO ITS DUAL INTERFACE
        src_mac:  6a:54:e4:d9:12:cd
        vlan: 100
      - dest_mac: 6a:54:e4:d9:12:cd # MAC OF LOOPBACK TO ITS DUAL INTERFACE
        src_mac:  be:82:92:78:7d:f9
        vlan: 200

  platform:
      master_thread_id: 0
      latency_thread_id: 24
      dual_if:
        - socket: 0
          threads: [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]

sudo ./t-rex-64 --astf -f ~/trex-perf/trex_profiles/udp_max_pps.py -c 1 -v 7 -m 1 -d 60 -t size=64
Converting ASTF profile /home/administrator/trex-perf/trex_profiles/udp_max_pps.py to json /tmp/astf.json
The ports are bound/configured.
Starting  TRex v3.00 please wait  ...
Using configuration file /etc/trex_cfg.yaml
 port limit     :  not configured
 port_bandwidth_gb    :  10
 port_speed           :  0
 port_mtu             :  0
 if_mask        : None
 is low-end : 0
 stack type :  
 thread_per_dual_if      : 16
 if        :  17:01.0, 17:01.1,
 enable_zmq_pub :  1
 zmq_pub_port   :  4500
 m_zmq_rpc_port    :  4501
 src     : 6a:54:e4:d9:12:cd
 dest    : be:82:92:78:7d:f9
 src     : be:82:92:78:7d:f9
 dest    : 6a:54:e4:d9:12:cd
 memory per 2x10G ports  
 MBUF_64                                   : 16380
 MBUF_128                                  : 8190
 MBUF_256                                  : 8190
 MBUF_512                                  : 8190
 MBUF_1024                                 : 8190
 MBUF_2048                                 : 4095
 MBUF_4096                                 : 128
 MBUF_9K                                   : 512
 TRAFFIC_MBUF_64                           : 65520
 TRAFFIC_MBUF_128                          : 32760
 TRAFFIC_MBUF_256                          : 8190
 TRAFFIC_MBUF_512                          : 8190
 TRAFFIC_MBUF_1024                         : 8190
 TRAFFIC_MBUF_2048                         : 32760
 TRAFFIC_MBUF_4096                         : 128
 TRAFFIC_MBUF_9K                           : 512
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 master   thread  : 0  
 rx  thread  : 24  
 dual_if : 0
    socket  : 0  
   [   1   2   3   4   5   6   7   8   9   10   11   12   13   14   15   16   17   18   19   20   21   22   23     ]  
CTimerWheelYamlInfo does not exist  
 flags           : 18010f00
 write_file      : 0
 verbose         : 7
 realtime        : 1
 flip            : 0
 cores           : 1
 single core     : 0
 flow-flip       : 0
 no clean close  : 0
 zmq_publish     : 1
 vlan mode       : 1
 client_cfg      : 0
 mbuf_cache_disable  : 0
 cfg file        : /home/administrator/trex-perf/trex_profiles/udp_max_pps.py
 mac file        :  
 out file        :  
 client cfg file :  
 duration        : 60
 factor          : 1
 mbuf_factor     : 1
 latency         : 0 pkt/sec
 zmq_port        : 4500
 telnet_port     : 4501
 expected_ports  : 2
 tw_bucket_usec  : 20.000000 usec
 tw_buckets      : 1024 usec
 tw_levels       : 3 usec
 port : 0 dst:be:82:92:78:7d:f9  src:6a:54:e4:d9:12:cd vlan:100
 port : 1 dst:6a:54:e4:d9:12:cd  src:be:82:92:78:7d:f9 vlan:200
 port : 2 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 3 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 4 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 5 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 6 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 7 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 8 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 9 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 10 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 11 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 12 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 13 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 14 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 15 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 16 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 17 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 18 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 19 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 20 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 21 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 22 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 23 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 24 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 25 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 26 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 27 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 28 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 29 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 30 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 port : 31 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0
 Total Memory :
 MBUF_64                                   : 81900
 MBUF_128                                  : 40950
 MBUF_256                                  : 16380
 MBUF_512                                  : 16380
 MBUF_1024                                 : 16380
 MBUF_2048                                 : 36855
 MBUF_4096                                 : 1024
 MBUF_DP_FLOWS                             : 524288
 MBUF_GLOBAL_FLOWS                         : 5120
 get_each_core_dp_flows                    : 524288
 Total memory                              :     248.40 Mbytes  
 core_list : 0,24,1
 sockets : 0  
 active sockets : 1
 ports_sockets : 1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
 phy   |   virt  
 1      1  
DPDK args
 xx  -l  0,24,1  -n  4  --log-level  8  --main-lcore  0  -a  0000:17:01.0  -a  0000:17:01.1  --legacy-mem  
                 input : [17:01.0, 17:01.1]
                 dpdk : [0000:17:01.0, 0000:17:01.1]
             pci_scan : [0000:17:01.0, 0000:17:01.1]
                  map : [ 0, 1]
 TRex port mapping
 -----------------
 TRex vport: 0 dpdk_rte_eth: 0
 TRex vport: 1 dpdk_rte_eth: 1
 set driver name net_iavf
 driver capability  : TCP_UDP_OFFLOAD  TSO  SLRO
 set dpdk queues mode to ONE_QUE
 DPDK devices 2 : 2
-----
 0 : vdev 0000:17:01.0
 1 : vdev 0000:17:01.1
-----
 Number of ports found: 2


if_index : 0
driver name : net_iavf
min_rx_bufsize : 1024
max_rx_pktlen  : 9728
max_rx_queues  : 256
max_tx_queues  : 256
max_mac_addrs  : 64
rx_offload_capa : 0x9226f
tx_offload_capa : 0x19fbf
rss reta_size   : 64
flow_type_rss   : 0x3ffc
tx_desc_max     : 4096
tx_desc_min     : 64
rx_desc_max     : 4096
rx_desc_min     : 64
zmq publisher at: tcp://*:4500
 rx_data_q_num : 1
 rx_drop_q_num : 0
 rx_dp_q_num   : 0
 rx_que_total : 1
 --  
 rx_desc_num_data_q   : 512
 rx_desc_num_drop_q   : 4096
 rx_desc_num_dp_q     : 512
 total_desc           : 512
 --  
 tx_desc_num     : 1024
port 0 desc: Ethernet Adaptive Virtual Function
 rx_qid: 0 (512)
port 1 desc: Ethernet Adaptive Virtual Function
 rx_qid: 0 (512)
 wait 1 sec .
port : 0
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0
port : 1
------------
link         :  link : Link Up - speed 10000 Mbps - full-duplex
promiscuous  : 0
 number of ports         : 2
 max cores for 2 ports   : 1
 tx queues per port      : 3
 -------------------------------
RX core uses TX queue number 65535 on all ports
 core, c-port, c-queue, s-port, s-queue, lat-queue
 ------------------------------------------
 1        0      0       1       0      0  
 -------------------------------
Using json file /tmp/astf.json
*******Pretest after resolving ********
Pre test info start ===================
Port 0:
  Sources:
    ip: 0.0.0.0 vlan: 100 mac: 6a:54:e4:d9:12:cd
  Destinations:
Port 1:
  Sources:
    ip: 0.0.0.0 vlan: 200 mac: be:82:92:78:7d:f9
  Destinations:
Pre test info end ===================
Pre test statistics for port 0
 opackets                                 : 0
 obytes                                   : 0
 ipackets                                 : 0
 ibytes                                   : 0
 m_tx_arp                                 : 0
 m_rx_arp                                 : 0
Pre test statistics for port 1
 opackets                                 : 0
 obytes                                   : 0
 ipackets                                 : 0
 ibytes                                   : 0
 m_tx_arp                                 : 0
 m_rx_arp                                 : 0

 uname -a
Linux generator 5.10.0-20-amd64 #1 SMP Debian 5.10.158-2 (2022-12-13) x86_64 GNU/Linux

ethtool -i ens9f1
driver: ice
version: 1.10.1.2.2
firmware-version: 2.50 0x800077bd 1.2960.0
expansion-rom-version:
bus-info: 0000:17:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ip a
6: ens9f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether b4:96:91:f6:51:94 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether ee:e4:3c:ac:b8:d0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1     link/ether 02:ef:9d:17:fc:c2 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    altname enp23s0f0

Oleg Kashtanov

Jan 23, 2023, 11:27:45 AM
to TRex Traffic Generator
I want to add that in my case I don't use virtualization.
The idea is to use one physical cable with two VLANs (one for the client side and one for the server side).
Is it a valid use case to run TRex in SR-IOV mode on bare metal, without virtualization?
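(For reference, a minimal sketch of how two VFs on one PF are typically created for this kind of bare-metal setup; ens9f0 is the PF from the ip a output in the first message, and the MAC values just reuse the ones from trex_cfg.yaml for illustration:)

echo 2 > /sys/class/net/ens9f0/device/sriov_numvfs   # create two VFs on the PF
ip link set dev ens9f0 vf 0 mac 6a:54:e4:d9:12:cd    # optionally pin the VF MACs
ip link set dev ens9f0 vf 1 mac be:82:92:78:7d:f9
ip link set dev ens9f0 up                            # with link-state auto, the VF link follows the PF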


Oleg Kashtanov

Jan 25, 2023, 5:04:14 AM
to TRex Traffic Generator
I've upgraded the ice and iavf drivers and the oerrors are gone.
Now I see that traffic flows fine, but only between the two VFs...
It's not clear why it doesn't go through the PF.
I'm using DPDK mode.
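(One thing worth checking: with both VFs on the same PF and each TRex port using the other VF's MAC as its destination, the NIC's internal switch may deliver the frames VF-to-VF without ever putting them on the wire, which would match this symptom; whether that is what happens here is an assumption. Comparing the PF-level counters against the per-VF counters during a run can confirm it, e.g.:)

watch -n 1 'ethtool -S ens9f0 | grep -i tx'   # PF / physical-port TX counters (stat names vary by driver)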


Oleg Kashtanov

Jan 25, 2023, 8:12:03 AM
to TRex Traffic Generator
I decided to try configuring the VLANs on the VFs instead of in TRex:
  • I removed the VLANs from the trex_cfg.yaml config
  • Added the VLANs to the VFs:
    ip link set dev ens9f0 vf 0 vlan 100
    ip link set dev ens9f0 vf 1 vlan 200
  • TRex then failed to start with these errors:
Jan 25 16:07:00 generator systemd[1]: Started TREX Service.
Jan 25 16:07:02 generator t-rex-64[20638]: Trying to bind to vfio-pci ...
Jan 25 16:07:02 generator t-rex-64[20638]: /usr/bin/python3 dpdk_nic_bind.py --bind=vfio-pci 0000:17:01.0 0000:17:01.1
Jan 25 16:07:02 generator t-rex-64[20638]: The ports are bound/configured.
Jan 25 16:07:05 generator t-rex-64[20744]: Starting  TRex v3.00 please wait  ...
Jan 25 16:07:05 generator t-rex-64[20744]:  set driver name net_iavf
Jan 25 16:07:05 generator t-rex-64[20744]:  driver capability  : TCP_UDP_OFFLOAD  TSO  SLRO
Jan 25 16:07:05 generator t-rex-64[20744]: Warning TSO is supported and asked to be disabled by user
Jan 25 16:07:05 generator t-rex-64[20744]: Warning SLRO is supported and asked to be disabled by user
Jan 25 16:07:05 generator t-rex-64[20744]:  set dpdk queues mode to MULTI_QUE
Jan 25 16:07:05 generator t-rex-64[20744]:  Number of ports found: 2
Jan 25 16:07:05 generator t-rex-64[20744]: zmq publisher at: tcp://*:4500
Jan 25 16:07:05 generator t-rex-64[20744]: iavf_execute_vf_cmd(): Return failure -5 for cmd 6
Jan 25 16:07:05 generator t-rex-64[20744]: iavf_configure_queues(): Failed to execute command of VIRTCHNL_OP_CONFIG_VSI_QUEUES
Jan 25 16:07:05 generator t-rex-64[20744]: iavf_dev_start(): configure queues failed
Jan 25 16:07:05 generator xx[20744]: iavf_execute_vf_cmd(): Return failure -5 for cmd 6
Jan 25 16:07:05 generator xx[20744]: iavf_configure_queues(): Failed to execute command of VIRTCHNL_OP_CONFIG_VSI_QUEUES
Jan 25 16:07:05 generator xx[20744]: iavf_dev_start(): configure queues failed
[... the same three iavf error lines repeat for both log entries roughly once per second, from 16:07:06 through 16:07:14, until the retries are exhausted ...]
Jan 25 16:07:15 generator t-rex-64[20744]: EAL: Error - exiting with code: 1
Jan 25 16:07:15 generator t-rex-64[20744]:   Cause: rte_eth_dev_start: err=-1, port=0
Jan 25 16:07:15 generator xx[20744]: EAL: Error - exiting with code: 1
Jan 25 16:07:15 generator xx[20744]:   Cause: rte_eth_dev_start: err=-1, port=0
Jan 25 16:07:16 generator systemd[1]: trex.service: Main process exited, code=exited, status=1/FAILURE
Jan 25 16:07:16 generator systemd[1]: trex.service: Failed with result 'exit-code'.
Jan 25 16:07:16 generator systemd[1]: trex.service: Consumed 4.233s CPU time.

Is this a known issue?

hanoh haim

Jan 29, 2023, 6:15:44 AM
to Oleg Kashtanov, TRex Traffic Generator
Hi Oleg, 
Try adding --software to the command line; this driver is not fully supported.
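(With the command line from the first message, that would be something like:)

sudo ./t-rex-64 --astf --software -f ~/trex-perf/trex_profiles/udp_max_pps.py -c 1 -v 7 -m 1 -d 60 -t size=64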
Thanks
Hanoh

Oleg Kashtanov

Jan 31, 2023, 7:37:01 AM
to TRex Traffic Generator
Hi Hanoh.
I tried low_end mode and it works.


Haklat haklat

May 21, 2024, 7:46:39 AM
to TRex Traffic Generator
Hi,
I have recently had the same issue (v3.04) with E810 SR-IOV.
iavf_configure_queues(): request RXDID[22] in Queue[0]
iavf_configure_queues(): request RXDID[22] in Queue[1]

iavf_execute_vf_cmd(): Return failure -5 for cmd 6
iavf_configure_queues(): Failed to execute command of VIRTCHNL_OP_CONFIG_VSI_QUEUES
iavf_dev_start(): configure queues failed

If the VF is configured for transparent VLAN (i.e., the VLAN config is left out of the VF), it starts without those issues, but with a VLAN filter configured on the VF (like in your case) I hit the same issue.
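(A sketch of the two VF configurations being compared, reusing the commands from earlier in the thread; ens9f0 and the VLAN IDs come from Oleg's messages:)

ip link set dev ens9f0 vf 0 vlan 100   # VLAN filter on the VF -- the case that fails to start
ip link set dev ens9f0 vf 0 vlan 0     # no VF VLAN filter (transparent VLAN) -- the case that starts cleanly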
 
The issue in my case was fixed/worked around by setting port_mtu in trex_cfg.yaml. The default port_mtu of 0 was the problem when using SR-IOV with a VLAN filter.

interfaces : ["replace_pci_left", "replace_pci_right"] # list of the interfaces to bind; run ./dpdk_nic_bind.py --status to find the PCI addresses
port_mtu : 9000 ##### add this line with the wanted MTU size, up to the PF MTU
stack : linux_based
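(On the PF side, the matching MTU can be raised with a standard iproute2 command; the interface name here is just the PF from Oleg's setup, and 9000 matches the example above:)

ip link set dev ens9f0 mtu 9000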

BR//Håkan
