[ASTF] IPv4 vs. IPv6 throughput


Andreas Bourges

Jun 8, 2018, 5:34:05 AM
to TRex Traffic Generator
Hi,

...I'm using an ASTF profile based on astf/sfr_full.py. With IPv4 I can reach ~76 Gbps using two XL710 cards. For IPv6 testing I changed the profile to:

class Prof1():
    def __init__(self):
        pass  # tunables

    def create_profile(self):
        # ip generator
        ip_gen_c = ASTFIPGenDist(ip_range=["172.28.0.1", "172.28.100.255"], distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["172.30.0.1", "172.30.255.255"], distribution="seq")
        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="0.1.0.0"),
                           dist_client=ip_gen_c,
                           dist_server=ip_gen_s)

        c_glob_info = ASTFGlobalInfo()
        c_glob_info.ipv6.src_msb = "2a00:4986:04ff:65a0:0000:1000::"
        c_glob_info.ipv6.dst_msb = "2a00:4986:04ff:65a0:0000:2000::"
        c_glob_info.ipv6.enable = 1

        profile = ASTFProfile(default_ip_gen=ip_gen,
                              default_c_glob_info=c_glob_info,
                              cap_list=[
                                  ASTFCapInfo(file="/opt/trex/CURRENT/avl/delay_10_http_get_0.pcap", cps=102.0, port=8080),
                                  [...]


Using the same command to start the profile, I expected to reach ~75 Gbps as well, but I only get around 2-3 Gbps. Is there a known limitation regarding ASTF/IPv6 speed? The setup is currently in back-to-back mode (ports looped back by a Nexus 5672UP).

Thanks,

Andreas


output of IPv6 run:

bash# sudo ./t-rex-64 --astf -f profiles/PERF01/emix-ipv6.py -c 12 -m 75gbps


-Per port stats table
ports | 0 | 1 | 2 | 3
-----------------------------------------------------------------------------------------
opackets | 16272796 | 8076978 | 16269360 | 8075624
obytes | 2536538093 | 820455540 | 2531500735 | 820357504
ipackets | 8075573 | 16269329 | 8076978 | 16273052
ibytes | 820352401 | 2531496288 | 820455540 | 2536604590
ierrors | 0 | 0 | 0 | 0
oerrors | 0 | 0 | 0 | 0
Tx Bw | 1.06 Gbps | 268.02 Mbps | 1.05 Gbps | 267.99 Mbps

-Global stats enabled
Cpu Utilization : 13.6 % 1.6 Gb/core
Platform_factor : 1.0
Total-Tx : 2.64 Gbps
Total-Rx : 2.64 Gbps
Total-PPS : 2.06 Mpps
Total-CPS : 222.56 Kcps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows : 1034634 Clients : 25848 Socket-util : 0.0636 %
Open-flows : 5970439 Servers : 65520 Socket : 1034634 Socket/Clients : 40.0
drop-rate : 0.00 bps
current time : 30.1 sec
test duration : 3569.9 sec
*** TRex is shutting down - cause: 'CTRL + C detected'
latency daemon has stopped
labadmin@dest515x-lbtrex01:~/TREX$


hanoh haim

Jun 8, 2018, 6:02:52 AM
to Andreas Bourges, TRex Traffic Generator
Can you look into the TCP counters for the issue?

Try changing to c=1.
With IPv6 and RSS (c>1) there is a specific configuration of the RSS mask; we have a regression test on this one, so it should work.

Make sure the issue is not related to that.

Thanks,
Hanoh

--
You received this message because you are subscribed to the Google Groups "TRex Traffic Generator" group.
To unsubscribe from this group and stop receiving emails from it, send an email to trex-tgn+u...@googlegroups.com.
To post to this group, send email to trex...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/trex-tgn/69f05d40-b3ca-4dac-b009-abdc731cdf12%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
--
Hanoh
Sent from my iPhone

hanoh haim

Jun 8, 2018, 6:12:09 AM
to Andreas Bourges, TRex Traffic Generator

Andreas Bourges

Jun 8, 2018, 8:15:39 AM
to TRex Traffic Generator
Hi Hanoch,

thanks for your prompt reply!

-> the IPv6 prefix limitation is not a problem (by pure luck):

Server-PRefix-1 : 2a00:4986:4ff:65a0:0:2000:ac1e:0/112
Server-PRefix-2 : 2a00:4986:4ff:65a0:0:2000:ac1f:0/112

Changing -c *does* improve throughput:

-c 1: 25.47 Gbps
-c 2: 48.83 Gbps
-c 3: 3.65 Gbps
-c 4: 69.63 Gbps
-c 5: 3.65 Gbps
-c 6: 3.65 Gbps

-c 12: 3.62 Gbps
-c 13: 3.62 Gbps
-c 14: 3.62 Gbps

...it seems that "-c 4" gives the best results, but I don't understand why.
For IPv4, "-c 14" results in ~77 Gbps, and queue_full starts increasing later.

regards,

Andreas

hanoh haim

Jun 8, 2018, 8:23:13 AM
to Andreas Bourges, TRex Traffic Generator
Hi, 
Could you look into the TCP counters to figure out the problem? 

It is a functional issue and not a performance issue.

Regarding the number of cores: in the general case (IPv4 too) you should opt for the minimal number of cores.
Adding more cores will not always give better performance.

I've tested c=5 with X710 and it seems to work 

$sudo ./t-rex-64 -f astf/param_ipv6.py -m 10000 -c 5 --astf -l 1 -d 1000


                       |          client   |            server   |  
 -----------------------------------------------------------------------------------------
       m_active_flows  |            5989  |             5990  |  active open flows
          m_est_flows  |            5974  |             5977  |  active established flows
         m_tx_bw_l7_r  |      19.86 Mbps  |        2.56 Gbps  |  tx L7 bw acked
   m_tx_bw_l7_total_r  |      19.86 Mbps  |        2.56 Gbps  |  tx L7 bw total
         m_rx_bw_l7_r  |       2.56 Gbps  |       19.86 Mbps  |  rx L7 bw acked
           m_tx_pps_r  |      49.84 Kpps  |      249.24 Kpps  |  tx pps
           m_rx_pps_r  |     259.20 Kpps  |       59.82 Kpps  |  rx pps
           m_avg_size  |         1.04 KB  |          1.04 KB  |  average pkt size
           m_tx_ratio  |      100.01  %%  |        99.99  %%  |  Tx acked/sent ratio
                    -  |             ---  |              ---  |  
                  TCP  |             ---  |              ---  |  
                    -  |             ---  |              ---  |  
     tcps_connattempt  |          183897  |                0  |  connections initiated
         tcps_accepts  |               0  |           183891  |  connections accepted
        tcps_connects  |          183882  |           183878  |  connections established
          tcps_closed  |          177908  |           177901  |  conn. closed (includes drops)
       tcps_segstimed  |          550669  |           733551  |  segs where we tried to get rtt
      tcps_rttupdated  |          550631  |           732532  |  times we succeeded
          tcps_delack  |          182900  |                0  |  delayed acks sent
        tcps_sndtotal  |          916448  |          4584231  |  total packets sent
         tcps_sndpack  |          183882  |          4217458  |  data packets sent
         tcps_sndbyte  |        45790353  |       5901380532  |  data bytes sent by application
      tcps_sndbyte_ok  |        45786618  |       5885570184  |  data bytes sent by tcp
         tcps_sndctrl  |          183897  |                0  |  control (SYN|FIN|RST) packets sent
         tcps_sndacks  |          548669  |           366773  |  ack-only packets sent 
         tcps_rcvpack  |         4400129  |           366760  |  packets received in sequence
         tcps_rcvbyte  |      5885281100  |         45785622  |  bytes received in sequence
      tcps_rcvackpack  |          366749  |           732532  |  rcvd ack packets
      tcps_rcvackbyte  |        45783630  |       5869701612  |  tx bytes acked by rcvd acks 
   tcps_rcvackbyte_of  |          182879  |           366750  |  tx bytes acked by rcvd acks - overflow acked
         tcps_preddat  |         4033380  |                0  |  times hdr predict ok for data pkts 
                    -  |             ---  |              ---  |  
                  UDP  |             ---  |              ---  |  
                    -  |             ---  |              ---  |  
                    -  |             ---  |              ---  |  
           Flow Table  |             ---  |              ---  |  
                    -  |             ---  |              ---  |  



thanks,
Hanoh 



Andreas Bourges

Jun 8, 2018, 8:30:19 AM
to TRex Traffic Generator

Using -c 4 works for me now. Here's an output from -c 12

sudo ./t-rex-64 --astf -f profiles/PERF01/emix-ipv6.py -c 12 -m 75gbps

| client | server |
-----------------------------------------------------------------------------------------
m_active_flows | 1033407 | 12574 | active open flows
m_est_flows | 160610 | 12105 | active established flows
m_tx_bw_l7_r | 0.00 bps | 0.00 bps | tx L7 bw acked
m_tx_bw_l7_total_r | 0.00 bps | 0.00 bps | tx L7 bw total
m_rx_bw_l7_r | 0.00 bps | 0.00 bps | rx L7 bw acked
m_tx_pps_r | 580.97 Kpps | 580.98 Kpps | tx pps
m_rx_pps_r | 0.00 pps | 0.00 pps | rx pps
m_avg_size | 0.00 B | 0.00 B | average pkt size
m_tx_ratio | 0.00 %% | 0.00 %% | Tx acked/sent ratio


- | --- | --- |
TCP | --- | --- |
- | --- | --- |

tcps_connattempt | 4312560 | 0 | connections initiated
tcps_accepts | 0 | 15678572 | connections accepted
tcps_closed | 3439763 | 15678103 | conn. closed (includes drops)
tcps_segstimed | 4312560 | 15678572 | segs where we tried to get rtt
tcps_sndtotal | 15678647 | 15678572 | total packets sent
tcps_sndbyte | 5806917946 | 0 | data bytes sent by application
tcps_sndctrl | 15678647 | 0 | control (SYN|FIN|RST) packets sent
tcps_sndacks | 0 | 15678572 | ack-only packets sent
tcps_drops | 0 | 15678103 | connections dropped
tcps_conndrops | 3439763 | 0 | *embryonic connections dropped
tcps_rexmttimeo_syn | 11366087 | 0 | *retransmit SYN timeouts
tcps_keeptimeo | 3439763 | 0 | *keepalive timeouts
tcps_keepdrops | 3439763 | 0 | *connections dropped in keepalive


- | --- | --- |
UDP | --- | --- |
- | --- | --- |

udps_accepts | 0 | 2286604 | connections accepted
udps_connects | 2286632 | 0 | connections established
udps_closed | 2126022 | 2274499 | conn. closed (includes drops)
udps_sndbyte | 2756285420 | 138954480 | data bytes transmitted
udps_sndpkt | 5182837 | 2341398 | data packets transmitted
udps_rcvbyte | 0 | 2756243541 | data bytes received
udps_rcvpkt | 0 | 5182761 | data packets received
udps_keepdrops | 2122800 | 61157 | *keepalive drop


- | --- | --- |
Flow Table | --- | --- |
- | --- | --- |

err_cwf | 18019633 | 0 | *client pkt without flow
err_rx_throttled | 0 | 1 | rx thread was throttled

hanoh haim

Jun 8, 2018, 8:35:38 AM
to Andreas Bourges, TRex Traffic Generator
Hi, 

These counters indicate an RSS issue (packets do not get back to the client):

       tcps_conndrops  |         3439763  |                0  | *embryonic connections dropped
  tcps_rexmttimeo_syn  |        11366087  |                0  | *retransmit SYN timeouts
       tcps_keeptimeo  |         3439763  |                0  | *keepalive timeouts
       tcps_keepdrops  |         3439763  |                0  | *connections dropped in keepalive


 
Does  astf/param_ipv6.py work for you?

I wonder what causes this issue. Could you try reducing the profile to something minimal that reproduces it?


thanks,
Hanoh


Andreas Bourges

Jun 8, 2018, 9:35:11 AM
to TRex Traffic Generator
Hi Hanoch,

I used another profile now, http_simple:

from trex_astf_lib.api import *


class Prof1():
    def __init__(self):
        pass

    def get_profile(self, **kwargs):
        # ip generator
        ip_gen_c = ASTFIPGenDist(ip_range=["172.28.0.0", "172.28.100.255"], distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["172.30.0.0", "172.30.255.255"], distribution="seq")

        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="0.1.0.0"),
                           dist_client=ip_gen_c,
                           dist_server=ip_gen_s)

        c_glob_info = ASTFGlobalInfo()
        c_glob_info.ipv6.src_msb = "2a00:4986:04ff:65a0:0000:1000::"
        c_glob_info.ipv6.dst_msb = "2a00:4986:04ff:65a0:0000:2000::"
        c_glob_info.ipv6.enable = 1

        return ASTFProfile(default_ip_gen=ip_gen,
                           default_c_glob_info=c_glob_info,
                           # cap_list=[ASTFCapInfo(file="/opt/trex/CURRENT/avl/delay_10_http_browsing_0.pcap",
                           cap_list=[ASTFCapInfo(file="/opt/trex/CURRENT/cap2/http_post.pcap",
                                                 cps=5000)])


def register():
    return Prof1()

--

sudo ./t-rex-64 --astf -f profiles/PERF03/http_simple-ipv6.py -c 4 -m 55gbps


--> Using "-c 4" gives me 82 Gbps throughput,
--> "-c 12" results in only 2.4 Gbps:


-Per port stats table
ports | 0 | 1 | 2 | 3
-----------------------------------------------------------------------------------------

opackets | 44756452 | 22379489 | 44759164 | 22378297
obytes | 3938570256 | 2193189922 | 3938808572 | 2193073106
ipackets | 22378194 | 44759132 | 22379564 | 44756671
ibytes | 2193063012 | 3938805776 | 2193197272 | 3938589358


ierrors | 0 | 0 | 0 | 0
oerrors | 0 | 0 | 0 | 0

Tx Bw | 773.86 Mbps | 430.91 Mbps | 773.89 Mbps | 430.89 Mbps

-Global stats enabled
Cpu Utilization : 22.9 % 0.9 Gb/core
Platform_factor : 1.0
Total-Tx : 2.41 Gbps
Total-Rx : 2.41 Gbps
Total-PPS : 3.30 Mpps
Total-CPS : 274.80 Kcps

Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows : 1647304 Clients : 25848 Socket-util : 0.1012 %
Open-flows : 11930930 Servers : 65520 Socket : 1647304 Socket/Clients : 63.7
drop-rate : 0.00 bps
current time : 45.9 sec
test duration : 3554.1 sec


| client | server |
-----------------------------------------------------------------------------------------
m_active_flows | 1647312 | 601 | active open flows
m_est_flows | 0 | 0 | active established flows


m_tx_bw_l7_r | 0.00 bps | 0.00 bps | tx L7 bw acked
m_tx_bw_l7_total_r | 0.00 bps | 0.00 bps | tx L7 bw total
m_rx_bw_l7_r | 0.00 bps | 0.00 bps | rx L7 bw acked

m_tx_pps_r | 1.10 Mpps | 1.10 Mpps | tx pps


m_rx_pps_r | 0.00 pps | 0.00 pps | rx pps
m_avg_size | 0.00 B | 0.00 B | average pkt size
m_tx_ratio | 0.00 %% | 0.00 %% | Tx acked/sent ratio
- | --- | --- |
TCP | --- | --- |
- | --- | --- |

tcps_connattempt | 19731277 | 0 | connections initiated
tcps_accepts | 0 | 75959801 | connections accepted
tcps_closed | 18083965 | 75959200 | conn. closed (includes drops)
tcps_segstimed | 19731277 | 75959799 | segs where we tried to get rtt
tcps_sndtotal | 75959948 | 75959799 | total packets sent
tcps_sndbyte | 203863553964 | 0 | data bytes sent by application
tcps_sndctrl | 75959948 | 0 | control (SYN|FIN|RST) packets sent
tcps_sndacks | 0 | 75959800 | ack-only packets sent
tcps_drops | 0 | 75959200 | connections dropped
tcps_conndrops | 18083965 | 0 | *embryonic connections dropped
tcps_rexmttimeo_syn | 56228671 | 0 | *retransmit SYN timeouts
tcps_keeptimeo | 18083965 | 0 | *keepalive timeouts
tcps_keepdrops | 18083965 | 0 | *connections dropped in keepalive


- | --- | --- |
UDP | --- | --- |
- | --- | --- |

- | --- | --- |
Flow Table | --- | --- |
- | --- | --- |

err_cwf | 75959416 | 0 | *client pkt without flow
err_rx_throttled | 16 | 39 | rx thread was throttled

hanoh haim

Jun 8, 2018, 9:50:21 AM
to Andreas Bourges, TRex Traffic Generator
Hi Andreas, 

OK, I managed to reconstruct it, and it seems I was wrong in my calculation of the RSS function (for IPv6).
There is another minor limitation: the MSB of the destination IP address should be zero.

In your case it is: 172 -> 0b10101100 => the first bit is set.
172.30.0.0

Try setting the MSB to zero (a number less than 128 + mask*port).

My 48.x.0.0 worked by chance.
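[Editor's note: the MSB constraint described above can be checked with a small standalone helper. This is a hypothetical sketch, not part of the TRex API; the function name is invented for illustration.]

```python
# Hypothetical helper (not part of TRex): check the constraint above, i.e.
# the most significant bit of the server-side base IPv4 address must be 0.
import ipaddress

def msb_is_zero(ip: str) -> bool:
    """True if the first (most significant) bit of the IPv4 address is 0."""
    return (int(ipaddress.IPv4Address(ip)) >> 31) == 0

# 172 = 0b10101100 -> MSB set: breaks IPv6 RSS distribution on the XL710
print(msb_is_zero("172.30.0.0"))  # False
# 44 = 0b00101100 -> MSB clear: works
print(msb_is_zero("44.30.0.0"))   # True
```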

Could you validate this?


thanks,
Hanoh



Andreas Bourges

Jun 8, 2018, 10:26:10 AM
to TRex Traffic Generator
Hi Hanoch,


...verified: changing 172 to 44 (clearing the 128 bit) fixes the throughput issue even when using 12 cores.

-> This is an IPv6-only limitation, right?

Thanks,

Andreas

hanoh haim

Jun 8, 2018, 10:35:40 AM
to Andreas Bourges, TRex Traffic Generator
Yes. You can mix IPv6 and IPv4 using ASTF.
In the IPv6 case, the destination IPv6 address occupies the location that the destination port of IPv4 has in the RSS hash input (RSS spec). This is why it needs to be zero in the IPv6 case.

Thanks,
Hanoh


Andreas Bourges

Jun 10, 2018, 10:58:42 AM
to TRex Traffic Generator
Hi Hanoch,

...one last question:

Since the test cases and all the documentation currently use the wrong prefix (bit 128 is set), I wonder if it's dangerous to run the test with 4 cores and keep the prefixes, or do you recommend substituting the prefixes and rewriting our docs?

Additionally - will this behaviour be fixed in the future, or does it work as designed?

Thanks,


Andreas

hanoh haim

Jun 10, 2018, 11:14:15 AM
to Andreas Bourges, TRex Traffic Generator
Yes, it can be fixed. The reason it happens with the XL710 is its RSS indirection table of 512 entries rather than 256/128. The 82599 has a 256-entry table; in that case there should be no issue.

c=4 or c=8 (any c where 256 % c == 0) should be a workaround too.
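[Editor's note: a quick sanity check of this workaround arithmetic, assuming the hash effectively indexes a 256-entry table on the 512-entry RETA: only core counts that divide 256 evenly keep the queue mapping consistent.]

```python
# Core counts in a practical range for which 256 % c == 0; these are the
# values expected to work around the 512-entry RETA issue described above.
workable = [c for c in range(1, 15) if 256 % c == 0]
print(workable)  # [1, 2, 4, 8]
```

This matches the measurements Andreas posted earlier: c=1, 2, and 4 give tens of Gbps, while c=3, 5, 6, and 12-14 collapse to ~3.6 Gbps.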

I’m going to CLUS, hope to look into it next week.

Thanks,
Hanoh


hanoh haim

unread,
Jul 10, 2018, 5:02:48 AM7/10/18
to Andreas Bourges, TRex Traffic Generator
Hi Andreas, 

This is the fix for this issue. I've tested it on mlx5/XL710, which have 512-entry RSS RETA tables:



diff --git a/src/main_dpdk.cpp b/src/main_dpdk.cpp
index dc53dfe..7775392 100644
--- a/src/main_dpdk.cpp
+++ b/src/main_dpdk.cpp
@@ -6352,22 +6352,29 @@ void CPhyEthIF::configure_rss_astf(bool is_client,
     uint16_t q;
     uint16_t indx=0;
     for (int j = 0; j < reta_conf_size; j++) {
-        reta_conf[j].mask = ~0ULL;
-        for (int i = 0; i < RTE_RETA_GROUP_SIZE; i++) {
-            while (true) {
-                q=(indx + skip) % numer_of_queues;
-                if (q != skip_queue) {
-                    break;
+        if (j<4) {
+            reta_conf[j].mask = ~0ULL;
+            for (int i = 0; i < RTE_RETA_GROUP_SIZE; i++) {
+                while (true) {
+                    q=(indx + skip) % numer_of_queues;
+                    if (q != skip_queue) {
+                        break;
+                    }
+                    skip += 1;
                 }
-                skip += 1;
+                reta_conf[j].reta[i] = q;
+                indx++;
+            }
+        }else{
+            reta_conf[j].mask = ~0ULL;
+            for (int i = 0; i < RTE_RETA_GROUP_SIZE; i++) {
+                reta_conf[j].reta[i] = reta_conf[j%4].reta[i];
             }
-            reta_conf[j].reta[i] = q;
-            indx++;
         }
     }              
     assert(rte_eth_dev_rss_reta_update(m_repid, &reta_conf[0], dev_info.reta_size)==0);
 
-    #ifdef RSS_DEBUG
+     #ifdef RSS_DEBUG
      rte_eth_dev_rss_reta_query(m_repid, &reta_conf[0], dev_info.reta_size);
      int j; int i;
 
@@ -6375,7 +6382,9 @@ void CPhyEthIF::configure_rss_astf(bool is_client,
      /* verification */
      for (j = 0; j < reta_conf_size; j++) {
          for (i = 0; i<RTE_RETA_GROUP_SIZE; i++) {
-             printf(" R %d  %d \n",(j*RTE_RETA_GROUP_SIZE+i),reta_conf[j].reta[i]);
+             if (reta_conf[j].mask & (1<<i)) {
+                 printf(" R (%d:%d) %d  %d \n",j,i,(j*RTE_RETA_GROUP_SIZE+i),reta_conf[j].reta[i]);
+             }
          }
      }
     #endif
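[Editor's note: an illustrative Python model of what the patched loop computes; this is an assumption-based sketch of the C++ logic above, not TRex code. The first four 64-entry RETA groups (256 entries) are filled round-robin over the data queues while skipping the reserved queue, and groups 4..7 mirror groups 0..3 so entries 256..511 repeat the first 256.]

```python
# Python sketch of the patched configure_rss_astf() RETA fill (illustrative).
RETA_GROUP_SIZE = 64  # entries per rte_eth_rss_reta_entry64 group

def build_reta(reta_groups, num_queues, skip_queue):
    """Fill groups 0..3 round-robin over the queues (skipping the reserved
    one), then replicate them into the remaining groups."""
    reta = []
    skip = 0
    indx = 0
    for j in range(reta_groups):
        if j < 4:
            group = []
            for _ in range(RETA_GROUP_SIZE):
                while True:
                    q = (indx + skip) % num_queues
                    if q != skip_queue:
                        break
                    skip += 1
                group.append(q)
                indx += 1
            reta.append(group)
        else:
            # the fix: mirror the first 256 entries instead of continuing
            # the round-robin across all 512
            reta.append(list(reta[j % 4]))
    return reta

# e.g. 8 groups = a 512-entry table, 5 queues with queue 4 reserved
reta = build_reta(8, 5, 4)
assert reta[4] == reta[0]   # second half mirrors the first half
assert 4 not in reta[0]     # reserved queue never receives RSS traffic
```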



hanoh haim

Jul 10, 2018, 5:06:48 AM
to Andreas Bourges, TRex Traffic Generator