I have some general questions and am hoping someone might be able to answer them.
QUESTION: When I run http_simple.yaml with latency enabled, I notice that port 0's rx_ok counter stops incrementing while port 1's continues; eventually port 1 stops as well. Both ports' tx_ok counters keep incrementing until the test ends. Can anyone explain this behavior? Is it expected?
---
I have TRex running on the following hardware, using Ubuntu 16.04:
Motherboard: custom
Processor: Intel Atom C2758
OS drive: 32 GB industrial Compact Flash
Memory: 8 GB DDR3
NICs: 6
Network devices using DPDK-compatible driver
============================================
0000:00:14.0 'Ethernet Connection I354' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
0000:00:14.1 'Ethernet Connection I354' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
Network devices using kernel driver
===================================
0000:00:14.2 'Ethernet Connection I354' if=enp0s20f2 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
0000:00:14.3 'Ethernet Connection I354' if=enp0s20f3 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
0000:03:00.0 'I210 Gigabit Network Connection' if=enp3s0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:04:00.0 'I210 Gigabit Network Connection' if=enp4s0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
=====================
<none>
---
I am running:
sudo ./t-rex-64 -f cap2/http_simple.yaml -c 4 -m 1000 -d 100 -l 1000
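Some rough sanity arithmetic on those flags (assumptions, per my reading of the TRex manual: -m multiplies the YAML cps, -d is the duration in seconds, and -l is the latency-stream packet rate per port in pkts/sec):

```python
# Sanity arithmetic for the command-line flags (assumption: -m is the cps
# multiplier, -d the duration in seconds, -l the latency pkts/sec per port).
m = 1000          # -m rate multiplier
d = 100           # -d duration, seconds
l_rate = 1000     # -l latency packets/sec per port
yaml_cps = 2.776  # cps of the single pcap in http_simple.yaml

expected_cps = yaml_cps * m        # expected connections/sec
latency_tx_per_port = l_rate * d   # expected latency packets sent per port

print(round(expected_cps))         # 2776 -> matches Expected-CPS : 2.78 Kcps below
print(latency_tx_per_port)         # 100000 -> close to tx_ok 99978 in the latency table
```

The tx_ok counts in the results line up with this, so the latency stream itself is being transmitted at the expected rate; only the receive side falls behind.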
---
root@dna-ubuntu:/opt/trex/v2.56# cat cap2/http_simple.yaml
- duration : 0.1
  generator :
          distribution : "seq"
          clients_start : "192.168.0.1"
          clients_end   : "192.168.0.255"
          servers_start : "10.10.0.1"
          servers_end   : "10.10.0.240"
          clients_per_gb : 201
          min_clients    : 101
          dual_port_mask : "1.0.0.0"
          tcp_aging      : 0
          udp_aging      : 0
  cap_ipg    : true
  cap_info :
     - name: avl/delay_10_http_browsing_0.pcap
       cps : 2.776
       ipg : 10000
       rtt : 10000
       w   : 1
root@dna-ubuntu:/opt/trex/v2.56#
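As a cross-check of the flow totals in the results below (a sketch; it assumes TRex opens roughly cps × multiplier flows per second and splits them evenly across the 4 cores):

```python
# Cross-check of flow totals (assumption: total opened flows ~= cps * -m * -d,
# split evenly across the -c 4 cores).
cps = 2.776   # from cap_info in http_simple.yaml
m = 1000      # -m multiplier
d = 100       # -d duration, seconds
cores = 4     # -c

total_flows = cps * m * d
print(round(total_flows))          # 277600 -> matches the final Open-flows counter
print(round(total_flows / cores))  # 69400  -> matches m_total_open_flows per core
```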
---
FULL TEST RESULTS:
-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 3977692 | 6470261
obytes | 322120664 | 9266465204
ipackets | 1529927 | 1678111
ibytes | 2186351822 | 135878125
ierrors | 5 | 5
oerrors | 0 | 0
Tx Bw | 25.81 Mbps | 742.63 Mbps
-Global stats enabled
Cpu Utilization : 1.4 % 28.4 Gb/core
Platform_factor : 1.0
Total-Tx : 768.44 Mbps
Total-Rx : 30.35 Kbps
Total-PPS : 104.65 Kpps
Total-CPS : 2.77 Kcps
Expected-PPS : 102.71 Kpps
Expected-CPS : 2.78 Kcps
Expected-BPS : 767.80 Mbps
Active-flows : 366 Clients : 252 Socket-util : 0.0023 %
Open-flows : 277217 Servers : 240 Socket : 366 Socket/Clients : 1.5
drop-rate : 768.41 Mbps
current time : 112.7 sec
test duration : 0.0 sec
-Latency stats enabled
Cpu Utilization : 0.1 %
if| tx_ok , rx_ok , rx check ,error, latency (usec) , Jitter max window
| , , , , average , max , (usec)
----------------------------------------------------------------------------------------------------------------
0 | 99978, 23742, 0, 1, 425 , 14226, 572 | 3185 832 4397 14142 4755 3799 4719 1450 11725 4628 3818 3436 10331
1 | 99978, 42225, 0, 1, 151 , 13292, 91 | 2196 3264 1021 3500 2218 3794 689 557 657 8930 3781 3318 2118
*** TRex is shutting down - cause: 'test has ended'
latency daemon has stopped
All cores stopped !!
==================
interface sum
==================
---------------
port : 0
------------
opackets : 3986687
obytes : 322805342
ipackets : 1529928
ibytes : 2186351886
ierrors : 5
Tx : 16.21 Mbps
port : 1
------------
opackets : 6485087
obytes : 9287897342
ipackets : 1678155
ibytes : 135880941
ierrors : 5
Tx : 468.85 Mbps
Cpu Utilization : 1.4 % 17.9 Gb/core
Platform_factor : 1.0
Total-Tx : 485.06 Mbps
Total-Rx : 19.35 Kbps
Total-PPS : 66.01 Kpps
Total-CPS : 1.71 Kcps
Expected-PPS : 102.71 Kpps
Expected-CPS : 2.78 Kcps
Expected-BPS : 767.80 Mbps
Active-flows : 0 Clients : 252 Socket-util : 0.0000 %
Open-flows : 277600 Servers : 240 Socket : 0 Socket/Clients : 0.0
drop-rate : 485.04 Mbps
==================
==================
interface sum
==================
------------------------
per core stats core id : 1
------------------------
------------------------
per core per if stats id : 1
------------------------
port 0, queue id :0 - client
----------------------------
port 1, queue id :0 - server
----------------------------
------------------------
per core stats core id : 2
------------------------
------------------------
per core per if stats id : 2
------------------------
port 0, queue id :1 - client
----------------------------
port 1, queue id :1 - server
----------------------------
------------------------
per core stats core id : 3
------------------------
------------------------
per core per if stats id : 3
------------------------
port 0, queue id :2 - client
----------------------------
port 1, queue id :2 - server
----------------------------
------------------------
per core stats core id : 4
------------------------
------------------------
per core per if stats id : 4
------------------------
port 0, queue id :3 - client
----------------------------
port 1, queue id :3 - server
----------------------------
==================
generators
==================
normal
-------------
min_delta : 10 usec
cnt : 0
high_cnt : 0
max_d_time : 0 usec
sliding_average : 0 usec
precent : -nan %
histogram
-----------
m_total_bytes : 2.23 Gbytes
m_total_pkt : 2.57 Mpkt
m_total_open_flows : 69.40 Kflows
m_total_pkt : 2567800
m_total_open_flows : 69400
m_total_close_flows : 69400
m_total_bytes : 2399366200
normal
-------------
min_delta : 10 usec
cnt : 0
high_cnt : 0
max_d_time : 0 usec
sliding_average : 0 usec
precent : -nan %
histogram
-----------
m_total_bytes : 2.23 Gbytes
m_total_pkt : 2.57 Mpkt
m_total_open_flows : 69.40 Kflows
m_total_pkt : 2567800
m_total_open_flows : 69400
m_total_close_flows : 69400
m_total_bytes : 2399366200
normal
-------------
min_delta : 10 usec
cnt : 0
high_cnt : 0
max_d_time : 0 usec
sliding_average : 0 usec
precent : -nan %
histogram
-----------
m_total_bytes : 2.23 Gbytes
m_total_pkt : 2.57 Mpkt
m_total_open_flows : 69.40 Kflows
m_total_pkt : 2567800
m_total_open_flows : 69400
m_total_close_flows : 69400
m_total_bytes : 2399366200
normal
-------------
min_delta : 10 usec
cnt : 0
high_cnt : 0
max_d_time : 0 usec
sliding_average : 0 usec
precent : -nan %
histogram
-----------
m_total_bytes : 2.23 Gbytes
m_total_pkt : 2.57 Mpkt
m_total_open_flows : 69.40 Kflows
m_total_pkt : 2567800
m_total_open_flows : 69400
m_total_close_flows : 69400
m_total_bytes : 2399366200
==================
latency
==================
Cpu Utilization : 0.1 %
if| tx_ok , rx_ok , rx check ,error, latency (usec) , Jitter max window
| , , , , average , max , (usec)
----------------------------------------------------------------------------------------------------------------
0 | 100287, 23742, 0, 1, 425 , 14226, 572 | 3185 832 4397 14142 4755 3799 4719 1450 11725 4628 3818 3436 10331
1 | 100287, 42225, 0, 1, 151 , 13292, 91 | 2196 3264 1021 3500 2218 3794 689 557 657 8930 3781 3318 2118
cpu : 0.1 %
port 0
-----------------
counter
-----------
m_tx_pkt_ok : 100287
m_pkt_ok : 23742
m_seq_error : 1
-----------
min_delta : 10 usec
cnt : 288
high_cnt : 288
max_d_time : 14226 usec
sliding_average : 425 usec
precent : 100.0 %
histogram
-----------
h[30] : 1
h[40] : 9
h[50] : 13
h[60] : 19
h[70] : 70
h[80] : 96
h[90] : 109
h[100] : 6995
h[200] : 8923
h[300] : 4729
h[400] : 1440
h[500] : 408
h[600] : 183
h[700] : 88
h[800] : 46
h[900] : 24
h[1000] : 208
h[2000] : 112
h[3000] : 93
h[4000] : 56
h[5000] : 23
h[6000] : 23
h[7000] : 17
h[8000] : 14
h[9000] : 13
h[10000] : 30
jitter : 572
port 1
-----------------
counter
-----------
m_tx_pkt_ok : 100287
m_pkt_ok : 42225
m_unsup_prot : 1
-----------
min_delta : 10 usec
cnt : 420
high_cnt : 420
max_d_time : 13292 usec
sliding_average : 151 usec
precent : 100.0 %
histogram
-----------
h[20] : 15
h[30] : 1569
h[40] : 1786
h[50] : 2199
h[60] : 2134
h[70] : 2229
h[80] : 2194
h[90] : 2228
h[100] : 15657
h[200] : 8038
h[300] : 2501
h[400] : 739
h[500] : 287
h[600] : 137
h[700] : 66
h[800] : 51
h[900] : 30
h[1000] : 138
h[2000] : 80
h[3000] : 45
h[4000] : 19
h[5000] : 13
h[6000] : 13
h[7000] : 12
h[8000] : 11
h[9000] : 9
h[10000] : 25
jitter : 91
rx_checker is disabled
---------------
port : 0
------------
opackets : 3986687
obytes : 322805342
ipackets : 1529928
ibytes : 2186351886
ierrors : 5
Tx : 16.21 Mbps
port : 1
------------
opackets : 6485087
obytes : 9287897342
ipackets : 1678157
ibytes : 135881069
ierrors : 5
Tx : 468.85 Mbps
Cpu Utilization : 1.4 % 17.9 Gb/core
Platform_factor : 1.0
Total-Tx : 485.06 Mbps
Total-Rx : 19.35 Kbps
Total-PPS : 66.01 Kpps
Total-CPS : 1.71 Kcps
Expected-PPS : 102.71 Kpps
Expected-CPS : 2.78 Kcps
Expected-BPS : 767.80 Mbps
Active-flows : 0 Clients : 252 Socket-util : 0.0000 %
Open-flows : 277600 Servers : 240 Socket : 0 Socket/Clients : 0.0
drop-rate : 485.04 Mbps
summary stats
--------------
Total-pkt-drop : 7263689 pkts
Total-tx-bytes : 9610702684 bytes
Total-tx-sw-bytes : 13237884 bytes
Total-rx-bytes : 2322232955 byte
Total-tx-pkt : 10471774 pkts
Total-rx-pkt : 3208085 pkts
Total-sw-tx-pkt : 200574 pkts
Total-sw-err : 0 pkts
Total ARP sent : 6 pkts
Total ARP received : 611 pkts
maximum-latency : 14226 usec
average-latency : 425 usec
latency-any-error : ERROR
root@dna-ubuntu:/opt/trex/v2.56#
I'm still banging my head against a wall. To add more detail: I always seem to get input errors (ierrors) on at least one of ports 0 and 1, and sometimes both.
When the test is done, I also see latency-any-error : ERROR.
I tried with -k 10; when I do, the rx_ok column in the latency stats does not increment at all.
It seems very alarming that Total-Rx drops off so much.
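For what it's worth, the reported drop-rate looks like it is simply Tx minus Rx bandwidth (an assumption about how TRex derives the field, but the numbers line up):

```python
# drop-rate appears to be Total-Tx minus Total-Rx (assumption; numbers match).
total_tx_mbps = 768.44          # Total-Tx from the live stats
total_rx_mbps = 30.35 / 1000    # Total-Rx was 30.35 Kbps
drop_rate = total_tx_mbps - total_rx_mbps
print(f"{drop_rate:.2f} Mbps")  # 768.41 Mbps -> matches the reported drop-rate
```

So essentially all transmitted traffic is counted as dropped, which is consistent with the rx counters stalling.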
I'll continue to post more findings. If anyone can assist, it would be much appreciated.
Cheers!
--
You received this message because you are subscribed to the Google Groups "TRex Traffic Generator" group.
To unsubscribe from this group and stop receiving emails from it, send an email to trex-tgn+u...@googlegroups.com.
To post to this group, send email to trex...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/trex-tgn/6693afc3-20ee-42d1-9009-17c4b92e848f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.