Test of large number of users concurrently downloading large file -- not a candidate for LoadRunner?


Carter, Rick

Oct 30, 2015, 7:08:08 AM
to LR-Loa...@googlegroups.com
Hi all,

From my reading, I think this is not feasible, but I'm running it past you all anyway.

The proposed test is for a web-based utility that saves all of a user's data from a site (which is going away); several hundred concurrent users may each be downloading a zip file on the order of a few GB in size to their desktops.

From my reading it seems ambiguous, but I gather that simulating a download of that size would consume memory equal to the file size on the LoadRunner machine. True?

And of course, if I actually write the downloads to disk (say, 200 users each downloading 2 GB), it will be more a test of the LoadRunner machine's hard drive than of anything else.

Am I wrong about either one of these conjectures? Any smart ideas of how to do this? I think LoadRunner is probably not the right tool.

- Rick

--
Rick Carter
Application Systems Administrator Senior
Information and Technology Services
University of Michigan
rkca...@umich.edu
734-647-5941

André Luyer

Nov 4, 2015, 10:01:31 AM
to LoadRunner
So you want to test concurrent downloads, and not actually store the files to disk.
Then a normal LoadRunner web script will do, because that is precisely what functions like web_url do: the response body is received and discarded, not written to disk.
No, it will not require memory equal to the file size.

But you need to adjust the TCP receive buffer size. For example:
web_set_sockets_option("TCP_BUFFER_SIZE", "626340");
will allow a single vuser to utilize a full 1 Gbps of bandwidth when the round-trip time (RTT) is 5 ms.
(The default TCP receive buffer size of 12288 bytes is far too small: at most about 20 Mbps at an RTT of 5 ms.)
The optimal setting depends on the bandwidth-delay product (BDP) between the load generator and the web server (and on the number of vusers); see https://en.wikipedia.org/wiki/TCP_tuning .
Over-sizing the TCP buffer is not a problem.

André

James Pulley

Dec 1, 2015, 11:00:37 AM
to LoadRunner
Have you considered that 200 users each downloading a 2 GB file is a wonderful candidate for an internal CDN solution, which would store the file on the first download and let the other users retrieve it locally after that? I could easily see 400 GB of download requests gumming up not only a server but all of the pipes to and from the target and the clients.

If you have Riverbed appliances throughout the campus, then you should be able to configure them to cache your target and file type so the file winds up being served locally in different buildings. If not, then I would consider deploying some small Squid/Varnish servers as front-end proxies and using DNS to point clients to the local proxy instance based on client IP address. You could also use this proxy to speed up all of the static components: style sheets, images, JavaScript files, fonts, etc.

If all of the clients are downloading this file at once and the file is common and can be pre-generated, then take that path: pre-seed the cache with the file an hour before the common request activity begins, such as seeding it at 6am before people start turning on their PCs at 7am. If 200 people need the exact same file at the same time, then I would also consider a multicast model for future versions of the app, allowing an idle thread to listen on the multicast address and receive the file when it is broadcast.
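As one possible shape of the Squid approach sketched above (all hostnames, paths, and sizes here are hypothetical, and directive names should be checked against the Squid version in use):

```
# Minimal squid.conf sketch: reverse-proxy an origin and cache large zips
http_port 3128 accel defaultsite=files.example.edu
cache_peer files.example.edu parent 443 0 no-query originserver ssl name=origin

maximum_object_size 4 GB                        # allow multi-GB zip files in cache
cache_dir ufs /var/spool/squid 500000 16 256    # ~500 GB of disk cache
refresh_pattern -i \.zip$ 1440 100% 10080       # keep cached zips for days
```

Pre-seeding could then be as simple as fetching each file through the proxy before the rush, e.g. with curl's `-x localhost:3128` option, so the first real user already hits a warm cache.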

As André noted, the download is not likely to be an issue software-wise, but total bandwidth to the load generators, yes. You are representing 200 network links coming in and aggregating them onto a handful of links. If I had to guess, your clients are on 1 Gb/s links and your server on at least one 10 Gb/s link. The total aggregate demand of your real clients would be 20x your server's link, whereas even with a minimum of three load generators in your test bed, your generators would have only about 0.3x of your server's bandwidth (3 Gb/s vs. 10 Gb/s). The load generator pipes would likely become the bottleneck in the execution of the test.

James Pulley
LiteSquare/PerfBytes/LoadRunner By The Hour/Cloud Architect/NewCoe LLC

Rick Carter

Dec 1, 2015, 12:29:12 PM
to LR-Loa...@googlegroups.com
James,

Great idea, except each of these files will ideally be downloaded only once. Long story short: we're decommissioning a service that, simplistically, could be thought of as our own private dropbox.com equivalent with no storage quotas (research project data can take a LOT of space, and there's been no restriction on which projects can use it: research, student groups, probably staff fantasy football stuff). After the download, each user decides where to store the data: box.com, Dropbox, or, for many, as likely as not a DVD-R on a bookshelf in a faculty lab somewhere.

I've asked the developers to consider a batch model: the user requests the zip-up and gets emailed when it is ready to download (we could throttle the "OK, it's ready" emails based on the zipping load as well as on how many users are currently downloading). Although it is not urgent for everyone to run at once (the system will be decommissioned with many months' notice), I suspect high loads from the typical "day one" Dudley Do-Rights and the last-day "oops, better make my grad student do this after I've been given 20 warnings" crowd.

- Rick
--
Rick Carter
Application Administrator Senior
Information Technology Services
University of Michigan
+1-734-647-5941
rkca...@umich.edu


