Experiment using utahddc and uky

Raj Thakkar

Apr 21, 2014, 12:51:40 PM
to geni-...@googlegroups.com, Kandah, Farah
Hello,

                 I was running a file-transfer experiment using resources from utahddc and uky connected with stitcher, but the throughput I am getting is not good. I have tried this experiment before and the throughput was all right. Is there any upgrade going on at any of the resources used in the experiment?

Resources used
utahddc-ig
uky-pg

User
urn=urn:publicid:IDN+ch.geni.net+user+kbx429

Slice name: nyuky


Thank you

Regards
Raj Thakkar

Aaron Helsinger

Apr 21, 2014, 12:59:14 PM
to geni-...@googlegroups.com, Raj Thakkar, Kandah, Farah
You didn't say what throughput you are getting, or what you requested. But note that as of Omni 2.5, the capacity you request defaults to 20 Mbps. You can change this with a command-line argument to stitcher, --defaultCapacity. See README-stitching.txt.
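For example, a hypothetical invocation (the slice and RSpec names are placeholders; check README-stitching.txt for the exact syntax of your version):

    stitcher.py createsliver myslice myrequest.rspec --defaultCapacity 100000

The value appears to be in Kbps, so 100000 would request 100 Mbps, consistent with the capacity numbers discussed later in this thread.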

Aaron

Raj Thakkar

Apr 21, 2014, 1:07:38 PM
to Aaron Helsinger, geni-...@googlegroups.com, Kandah, Farah
Thank you, Aaron, for the reply. I was transferring a 50 MB file, and the throughput I am getting is 0.084 MB/s. The attached screenshot from the experiment shows the transfer time in seconds and the throughput when I transfer the 50 MB file.


Thank you

Regards
Raj Thakkar
 
Throughput and Transfer time utahddc-uky.PNG

Leigh Stoller

Apr 22, 2014, 8:18:22 AM
to geni-...@googlegroups.com, Aaron Helsinger, Kandah, Farah
> Thank you, Aaron, for the reply. I was transferring a 50 MB file, and the throughput I am getting is 0.084 MB/s. The attached screenshot from the experiment shows the transfer time in seconds and the throughput when I transfer the 50 MB file.

Hi. It would be helpful if you also provided precise instructions to duplicate your test.

Leigh

Raj Thakkar

Apr 22, 2014, 10:07:46 AM
to geni-...@googlegroups.com

Hello Leigh,

                      I was trying to send a 50 MB file from one node to another using Python and to calculate the transfer time and throughput. I tried this experiment using nodes from utahddc and uky, sending files from utahddc to uky, and got the results I mentioned in the previous email. So are there any updates going on at any of the resources? I have tried this experiment with the same Python code on different resources and got throughput of more than 7 MB/s.
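For reference, a minimal sketch of this kind of measurement, assuming a plain TCP transfer in Python 3 (Raj's actual script is not shown in the thread; the file name and address below are placeholders):

    import os, socket, time

    FILE = "testfile.bin"             # placeholder ~50 MB test file
    DEST = ("192.168.1.1", 5001)      # placeholder data-plane address of the receiver

    size = os.path.getsize(FILE)
    start = time.time()
    s = socket.create_connection(DEST)
    with open(FILE, "rb") as f:
        while True:
            chunk = f.read(64 * 1024)
            if not chunk:
                break
            s.sendall(chunk)          # push the file through the socket
    s.shutdown(socket.SHUT_WR)        # signal end-of-data to the receiver
    s.recv(1)                         # wait until the receiver closes its side
    elapsed = time.time() - start
    s.close()
    print("%d bytes in %.2f s = %.3f MB/s" % (size, elapsed, size / elapsed / 1e6))

A matching receiver would simply accept the connection, read until EOF, and close.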

Thank you

Regards
Raj Thakkar

Sarah Edwards

Apr 22, 2014, 10:23:36 AM
to geni-...@googlegroups.com, Sarah Edwards
Hi Raj,

Can you send us:
 * The RSpec for your setup
 * The name of your slice
 * The output of running iperf [1] on the links of interest

Thanks,
Sarah Edwards

[1] If you don't know how to run iperf, step 5.2 in our intro tutorial should explain it. See here: http://groups.geni.net/geni/wiki/GENIExperimenter/Tutorials/GettingStarted_PartI/Procedure/Execute#a5.2Installanduseiperf
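In brief, a typical iperf invocation looks like this (the address is a placeholder; run the server on one node and point the client at that node's data-plane IP):

    node-1$ iperf -s
    node-2$ iperf -c 192.168.1.1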
*******************************************************************************
Sarah Edwards
GENI Project Office

BBN Technologies
Cambridge, MA
phone:    (617) 873-2329
email:    sedw...@bbn.com

Raj Thakkar

Apr 22, 2014, 2:56:29 PM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah
Hello Sarah,
                      
                           The iperf output and the RSpec used for the experiment are attached to this e-mail. The slice name for the experiment is nyuky. The user is urn=urn:publicid:IDN+ch.geni.net+user+kbx429


Thank you

Regards
Raj Thakkar
iperf -c uky.JPG
iperf -s utahddc.JPG
utahuky.rspec

Sarah Edwards

Apr 24, 2014, 9:13:13 AM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
Hi Raj,

I set up the same stitched topology that you have (except that I made the raw-pcs into Xen nodes; see the attached RSpec), ran iperf, and got very different results from yours, although still not what I would expect.

I get the following:
sedwards@pc-1:~$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 5001 connected with 192.168.1.2 port 60957
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.2 sec  20.0 MBytes  16.5 Mbits/sec


pc-2:~% iperf -c 192.168.1.1
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 60957 connected with 192.168.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  20.0 MBytes  16.6 Mbits/sec

I see two strange things:
 (1) I get much more bandwidth than you do, but not the full 100 Mbps that is set in the RSpec, which seems suspicious. I tried iperf in both directions and tried longer data transfers (using -t) and always got something under 20 Mbps. It looks to me like I got the default capacity of 20 Mbps instead of the 100 Mbps specified in the request, but I don't know why.
 (2) The RSpec sets the image for one node and not the other. Just as a general practice, it is a good idea to set the image for both nodes so you know what you are going to get.

Maybe someone else can shed some light on the situation.

Sarah

utahuky-xen.rspec

Aaron Helsinger

Apr 24, 2014, 9:44:41 AM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
When I try this, the full expanded request clearly says "100000" (in Kbps) in all relevant spots for "capacity". So if some AM decided you only asked for 20000, that is an AM error, not a stitching-tool error. Of course, requested capacity is not the same as actual available capacity.
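For context, each hop in the stitching extension of the expanded request carries a capacity element on its link, roughly like this (an illustrative fragment only, with most attributes and sibling elements omitted):

    <hop id="1">
      <link id="urn:publicid:IDN+...">
        <capacity>100000</capacity>
      </link>
    </hop>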

Aaron

Leigh Stoller

Apr 24, 2014, 9:59:25 AM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
> I set up the same stitched topology that you have (except that I made the raw-pcs into Xen nodes; see the attached RSpec), ran iperf, and got very different results from yours, although still not what I would expect.

If you can tell me which aggregates, I can take a quick look to
see if anything has a funky setting.

Leigh

Raj Thakkar

Apr 24, 2014, 10:10:00 AM
to geni-...@googlegroups.com, Farah Kandah

Thank you, Sarah, for the reply. I will try the experiment using your RSpec and let you know what iperf results I get.


Sarah Edwards

Apr 24, 2014, 11:46:58 AM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
Leigh and Raj,

Luisa reminded me that if you only have one TCP flow in iperf on a Xen VM, the bandwidth will max out at 20 Mbps. She says she doesn't see this with raw-pcs.

So I ran iperf again [1] with `-P 6` and got 96.7 Mbps, which sounds right and is consistent with Luisa's experience testing the racks.

Also, Raj's RSpec connects utahddc and PG Kentucky. He said his slice name was `nyuky`.

Hope this helps,
Sarah


[1] $ iperf -c 192.168.1.2 -t 60 -P 6
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.1 port 43484 connected with 192.168.1.2 port 5001
[  3] local 192.168.1.1 port 43482 connected with 192.168.1.2 port 5001
[  6] local 192.168.1.1 port 43485 connected with 192.168.1.2 port 5001
[  8] local 192.168.1.1 port 43487 connected with 192.168.1.2 port 5001
[  4] local 192.168.1.1 port 43483 connected with 192.168.1.2 port 5001
[  7] local 192.168.1.1 port 43486 connected with 192.168.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  8]  0.0-60.0 sec   120 MBytes  16.8 Mbits/sec
[  4]  0.0-60.0 sec   120 MBytes  16.7 Mbits/sec
[  7]  0.0-60.1 sec   115 MBytes  16.1 Mbits/sec
[  5]  0.0-60.1 sec   120 MBytes  16.7 Mbits/sec
[  3]  0.0-60.1 sec   105 MBytes  14.6 Mbits/sec
[  6]  0.0-60.1 sec   114 MBytes  15.9 Mbits/sec
[SUM]  0.0-60.1 sec   694 MBytes  96.7 Mbits/sec

Raj Thakkar

Apr 24, 2014, 11:51:33 AM
to geni-...@googlegroups.com, Kandah, Farah, Sarah Edwards

The slice is expiring today. I tried renewing the sliver yesterday, but it failed. I am trying the experiment again with raw-pcs and will check the iperf output again.

Thank you

Regards
Raj Thakkar

Nicholas Bastin

Apr 24, 2014, 11:57:08 AM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
On Thu, Apr 24, 2014 at 11:46 AM, Sarah Edwards <sedw...@bbn.com> wrote:
> Luisa reminded me that if you only have one TCP flow in iperf on a Xen VM, the bandwidth will max out at 20 Mbps. She says she doesn't see this with raw-pcs.

The window size is way too small if you want single-flow throughput to be high, particularly given the latency of WAN connections (the resulting bandwidth-delay product will be far larger than iperf's default window sizes).

You simply need to set the window size larger on both the server and the client, and your single-flow performance should be better, regardless of raw-pc or Xen VM. (The default raw-pc window size is likely a lot larger, which hides this problem, but if you want deterministic performance from iperf you should always set the window sizes explicitly; otherwise it guesses at a default that will be highly intolerant of any latency in your connection.)

Barring any other shaping going on, the throughput you're getting with a 23.5 KB window means you have a connection with roughly 12 ms of delay. To get 100 Mbits/sec (or close to it) on this connection, you'd need a window size of 150 KB or larger (and make sure the buffer sizes on both ends can support a window that large).
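For example (hypothetical values, using iperf's -w option to set the socket buffer/window on both ends):

    server$ iperf -s -w 256K
    client$ iperf -c 192.168.1.1 -w 256K

As a sanity check on those numbers: 23.5 KB x 8 bits / 12 ms is about 16 Mbits/sec, which matches the single-flow iperf runs above, and 100 Mbits/sec x 12 ms / 8 is about 150 KB of window.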

--
Nick

Leigh Stoller

Apr 24, 2014, 12:06:47 PM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu
Excellent message, Nick. May I suggest that we start a pinned FAQ at the top of geni-users, and that we start adding things like this to it?

Question: “Why do I see poor bandwidth on my stitched links with iperf?”
Answer:

> The window size is way too small if you want single-flow throughput to be high, particularly given the latency of WAN connections (the resulting bandwidth-delay product will be far larger than iperf's default window sizes).
>
> You simply need to set the window size larger on both the server and the client, and your single-flow performance should be better, regardless of raw-pc or Xen VM. (The default raw-pc window size is likely a lot larger, which hides this problem, but if you want deterministic performance from iperf you should always set the window sizes explicitly; otherwise it guesses at a default that will be highly intolerant of any latency in your connection.)
>
> Barring any other shaping going on, the throughput you're getting with a 23.5 KB window means you have a connection with roughly 12 ms of delay. To get 100 Mbits/sec (or close to it) on this connection, you'd need a window size of 150 KB or larger (and make sure the buffer sizes on both ends can support a window that large).
>
>

Leigh

Brecht Vermeulen

Apr 24, 2014, 12:17:24 PM
to geni-...@googlegroups.com, Sarah Edwards, Kandah, Farah, Raj-T...@mocs.utc.edu

Typically also handy in debugging link performance is to throw in a UDP iperf to see what is possible on the links and where packet loss starts:
server side: iperf -s -u
client side: iperf -c xxx -b 50M

That way you can vary the bandwidth and packet size and see what happens.
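For example, stepping the offered load up past the expected capacity (the address is a placeholder; -l sets the UDP datagram size):

    iperf -c 192.168.1.1 -u -b 50M -l 1200
    iperf -c 192.168.1.1 -u -b 90M -l 1200
    iperf -c 192.168.1.1 -u -b 110M -l 1200

The server-side report includes loss and jitter, so the point where loss starts brackets the usable capacity.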

Brecht

Raj Thakkar

Apr 24, 2014, 12:25:29 PM
to Nicholas Bastin, Kandah, Farah, geni-...@googlegroups.com, Sarah Edwards

Thank you, Nicholas, for the reply. I will go ahead and look into the window size and set it on both the server and the client.
