testpmd> set nbcore 2
Number of forwarding cores set to 2
testpmd> show config fwd
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support disabled, MP over anonymous pages disabled
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

link report:
11,289,284,558 sent on id1_NIC.tx -> id1_Virtio.rx (loss rate: 0%)
3,454 sent on id1_Virtio.tx -> id1_NIC.rx (loss rate: 0%)
load: time: 1.00s fps: 345,025 fpGbps: 4.154 fpb: 123 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 345,407 fpGbps: 4.159 fpb: 125 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 344,867 fpGbps: 4.152 fpb: 125 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 344,177 fpGbps: 4.144 fpb: 122 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 343,507 fpGbps: 4.136 fpb: 121 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 344,604 fpGbps: 4.149 fpb: 124 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 331,253 fpGbps: 3.988 fpb: 124 bpp: 1496 sleep: 0 us
load: time: 1.00s fps: 330,553 fpGbps: 3.980 fpb: 121 bpp: 1496 sleep: 0 us

I need some advice to get a better understanding of some performance problems I'm facing with my Intel 82599ES 10G card.
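(As a sanity check on those load lines: 345,025 packets/s x 1496 bytes x 8 bits is roughly 4.13 Gbps, which lines up with the reported fpGbps of ~4.15; the small gap is presumably per-packet overhead in how Snabb accounts the bits. The port configuration referenced in the command below, snabb_port1.cfg, defines two VLAN sub-ports:)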
return {
{ vlan = 21,
mac_address = "12:54:12:34:21:01",
port_id = "p2v21",
},
{ vlan = 22,
mac_address = "12:54:12:34:22:01",
port_id = "p2v22",
},
}

snabb snabbnfv traffic -k 10 -D 0 0000:03:00.0 /root/scripts/snabb_port1.cfg /root/vhost-sockets/vm1.socket
-chardev socket,id=char0,path=/root/vhost-sockets/vm1.socket,server \
-netdev type=vhost-user,id=net0,chardev=char0 \
-device virtio-net-pci,netdev=net0,mac=12:54:12:34:21:01
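For the vhost-user socket to actually carry packets, qemu also needs the guest RAM backed by a shared hugepage file so the snabb process can map it. The full qemu command further down in this thread does it like this (paths and sizes are of course setup-specific):

-object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem \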
root@ubuntu-1:~# dmesg |tail
[ 1005.271388] eth0: bad gso type 139.
[ 1005.276708] eth0: bad gso type 139.
[ 1005.284418] eth0: bad gso type 139.
[ 1005.325198] eth0: bad gso type 139.
[ 1005.329673] eth0: bad gso type 139.
[ 1005.333645] eth0: bad gso type 139.
[ 1005.388844] eth0: bad gso type 139.
[ 1005.391828] eth0: bad gso type 139.
[ 1005.492153] eth0: bad gso type 139.
[ 1005.494391] eth0: bad gso type 139.

root@ubuntu-1:~# dmesg |tail
[ 1169.382930] eth0: bad gso type 22.
[ 1169.782999] skbuff: bad partial csum: csum=35649/29830 len=124
[ 1194.483068] eth0: bad gso type 22.
[ 1194.883119] skbuff: bad partial csum: csum=35649/29830 len=124
[ 1200.985208] eth0: bad gso type 22.
[ 1201.025183] skbuff: bad partial csum: csum=35649/29830 len=124
[ 1203.915220] eth0: bad gso type 22.
[ 1203.955103] skbuff: bad partial csum: csum=35649/29830 len=124
[ 1205.805172] eth0: bad gso type 22.
[ 1205.845151] skbuff: bad partial csum: csum=35649/29830 len=124
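The "bad gso type" and "bad partial csum" lines come from the guest's virtio-net driver: it is receiving packets whose virtio-net header advertises a GSO type or checksum offset it cannot handle, which points at the offload metadata produced on the transmit side rather than at the guest itself. One hedged way to narrow it down is to switch offloads off on the Linux side that generates the traffic (eth0 here is a placeholder for whatever interface feeds the stream):

ethtool -K eth0 tso off gso off tx off

If the messages stop, the problem is in how the vhost backend translates the offload flags.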
On 23 June 2014 14:12, Luke Gorrie <lu...@snabb.co> wrote:
...
FYI the numbers that I see on chur (2GHz Xeon) right now, for traffic being looped through a VM and back onto the network, is:
1514 byte packets: 9.82 Gbps
256 byte packets: 6.50 Gbps
64 byte packets: 1.68 Gbps
and we are focused on improving these scores over the coming week+.
Cheers,
-Luke

May I ask another question?
The VLAN id that we use in the port configuration files: is it a real VLAN tag id? Does it mean I should use tagged frames to send packets to the VMs?
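(If it really is an 802.1Q tag, a quick way to check from another Linux machine on the same wire is a VLAN subinterface matching the configured id; the interface name and address below are made up for the example:

ip link add link eth1 name eth1.21 type vlan id 21
ip addr add 192.168.21.2/24 dev eth1.21
ip link set eth1.21 up

If the VM only answers over the tagged subinterface and not over plain eth1, the id in the port file is a real tag.)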
And I'm really interested: what did that test environment look like?

[ciberkot@chur:~]$ ps -ef | grep kvm
root 544 2 0 13:40 ? 00:00:00 [kvm-irqfd-clean]
root 5514 1 95 21:11 ? 00:27:55 qemu-system-x86_64 -daemonize -drive if=virtio,file=/home/ciberkot/ubuntu-s.qcow2 -M pc -smp 2 --enable-kvm -cpu host -m 1024 -numa node,memdev=mem -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -chardev socket,id=char0,path=/home/ciberkot/vhost-sockets/vm-0-0.socket,server -netdev type=vhost-user,id=net0,chardev=char0 -device virtio-net-pci,netdev=net0,mac=52:54:11:00:00:01 -chardev socket,id=char1,path=/home/ciberkot/vhost-sockets/vm-3-0.socket,server -netdev type=vhost-user,id=net1,chardev=char1 -device virtio-net-pci,netdev=net1,mac=52:54:33:00:00:03 -serial telnet:127.0.0.1:10003,server,nowait,nodelay -serial file:./ubuntu-1_.log -netdev tap,id=hostnet3,ifname=tapMGMT,script=no -device e1000,netdev=hostnet3,id=net3,mac=52:54:77:00:00:07,bus=pci.0,addr=0xf -vnc :1
root 5522 2 0 21:11 ? 00:00:00 [kvm-pit/5514]
ciberkot 7840 7761 0 21:40 pts/5 00:00:00 grep kvm
[ciberkot@chur:~]$ ps -ef | grep traf
root 5415 1 4 21:10 ? 00:01:21 snabb snabbnfv traffic -k 10 -D 0 0000:01:00.0 ./port-0-0.cfg ./vhost-sockets/vm-0-0.socket
root 5420 1 4 21:11 ? 00:01:19 snabb snabbnfv traffic -k 10 -D 0 0000:03:00.0 ./port-3-0.cfg ./vhost-sockets/vm-3-0.socket
ciberkot 7842 7761 0 21:40 pts/5 00:00:00 grep traf
[ciberkot@chur:~]$ sudo snabb packetblaster replay test1.pcap 01:00.0 03:00.0
failed to lock /sys/bus/pci/devices/0000:01:00.0/resource0
lib/hardware/pci.lua:114: assertion failed!
stack traceback:
core/main.lua:116: in function <core/main.lua:114>
[C]: in function 'assert'
lib/hardware/pci.lua:114: in function 'map_pci_memory'
apps/intel/intel10g.lua:89: in function 'open'
apps/intel/loadgen.lua:20: in function 'new'
core/app.lua:165: in function <core/app.lua:162>
core/app.lua:197: in function 'apply_config_actions'
core/app.lua:110: in function 'configure'
program/packetblaster/packetblaster.lua:51: in function 'run'
core/main.lua:56: in function <core/main.lua:32>
[C]: in function 'xpcall'
core/main.lua:121: in main chunk
[C]: at 0x0044f740
[C]: in function 'pcall'
core/startup.lua:1: in main chunk
[C]: in function 'require'
[string "require "core.startup""]:1: in main chunkhi Luke,
I tried to run the packetblaster according to your suggestion, but it fails for some reason (full output above):
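("failed to lock .../resource0" most likely means another process already owns the NIC: snabb takes a lock on the PCI resource file, and the ps output above shows the two snabbnfv traffic instances sitting on 0000:01:00.0 and 0000:03:00.0. Pointing packetblaster at the unused .1 functions instead should avoid the clash, e.g.:

sudo snabb packetblaster replay test1.pcap 01:00.1 03:00.1

which matches the successful run shown below.)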
So I should run it on 01:00.1 and 03:00.1 and push the traffic through the interconnecting cable to the interfaces which are connected to my VM... right?

[root@chur:/home/ciberkot]# tail -f nohup1.out
load: time: 1.00s fps: 854,090 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 0 us
load: time: 1.00s fps: 855,193 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 0 us
load: time: 1.00s fps: 855,802 fpGbps: 0.377 fpb: 6 bpp: 42 sleep: 2 us
load: time: 1.00s fps: 856,801 fpGbps: 0.377 fpb: 6 bpp: 42 sleep: 0 us
link report:
0 sent on id3_NIC.tx -> id3_Virtio.rx (loss rate: 0%)
19,541,020 sent on id3_Virtio.tx -> id3_NIC.rx (loss rate: 0%)
load: time: 1.00s fps: 857,460 fpGbps: 0.377 fpb: 6 bpp: 42 sleep: 0 us
load: time: 1.00s fps: 855,061 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 1 us
load: time: 1.00s fps: 855,151 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 1 us
load: time: 1.00s fps: 854,459 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 2 us
load: time: 1.00s fps: 855,459 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 0 us
load: time: 1.00s fps: 854,774 fpGbps: 0.376 fpb: 6 bpp: 42 sleep: 0 us
load: time: 1.00s fps: 850,750 fpGbps: 0.374 fpb: 6 bpp: 42 sleep: 0 us

[root@chur:/home/ciberkot]# snabb packetblaster replay test1.pcap 01:00.1
Transmissions (last 1 sec):
apps report:
nic1
0000:01:00.1 TXDGPC (TX packets) 12,241,923 GOTCL (TX octets) 783,374,592
Transmissions (last 1 sec):
apps report:
nic1
0000:01:00.1 TXDGPC (TX packets) 14,880,434 GOTCL (TX octets) 952,337,216
Transmissions (last 1 sec):
apps report:
nic1
0000:01:00.1 TXDGPC (TX packets) 14,880,852 GOTCL (TX octets) 952,371,648
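(Those last readings are 10GbE line rate for minimum-size frames: 10^10 bits/s / ((64 + 20) bytes x 8) is about 14,880,952 packets/s, where the extra 20 bytes per frame are the 7-byte preamble, 1-byte start delimiter and 12-byte inter-frame gap. The octet counter agrees with 64-byte frames: 952,371,648 / 14,880,852 is roughly 64 bytes per packet.)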