In the past, I used Microsoft Web Application Stress Tool and Pylot to stress test web applications. I'd write a simple home page request, a login script, and a site walkthrough (for an ecommerce site, adding a few items to a cart and checking out).
The reports generated by these tools never made much sense to me, and I would spend many hours trying to figure out what kind of concurrent load the site would be able to support. It was always worth it, though, because the stupidest bugs and bottlenecks would inevitably turn up (for instance, web server misconfigurations).
What have you done, what tools have you used, and what success have you had with your approach? The part that is most interesting to me is coming up with some kind of a meaningful formula for calculating the number of concurrent users an app can support from the numbers reported by the stress test application.
JMeter is an open-source load testing tool, written in Java. It's capable of testing a number of different server types (for example web, web services, and databases; basically anything that works on a request/response basis).
It does, however, have a steep learning curve once you start getting into complicated tests, but it's well worth it. You can get up and running very quickly, and depending on what sort of stress testing you want to do, that might be fine.
I've used The Grinder. It's open source, pretty easy to use, and very configurable. It is Java based and uses Jython for the scripts. We ran it against a .NET web application, so don't think it's a Java-only tool (by its nature, a web stress tool shouldn't be tied to the platform it's built on).
We did some neat stuff with it... we were a web-based telecom application, so one cool use I set up was to mimic dialing a number through our web application, then drive an auto-answer tool we had (which was basically a tutorial app from Microsoft for connecting to their RTC LCS server... which is what Microsoft Office Communicator connects to on a local network... modified to just pick up calls automatically). This let us avoid an expensive telephony tool called The Hammer (or something like that).
Anyway, we also used the tool to see how our application held up under high load, and it was very effective at finding bottlenecks. The tool has built-in reporting to show how long requests are taking, but we never used it. The logs can also store all the responses, and you can add custom logging.
I highly recommend this tool; very useful for the price... but expect to do some custom setup with it (it has a built-in proxy to record a script, but it may need customization for capturing things like sessions... I know I had to customize it to use a unique session per thread).
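For reference, here's roughly what a Grinder 3 Jython script looks like; this is only a minimal sketch, and the URL, test numbers, and credentials are placeholders rather than anything from our setup. Because each worker thread builds its own TestRunner instance, keeping a unique session per thread falls out fairly naturally:

    # Minimal sketch of a Grinder 3 Jython script. Each worker thread
    # constructs its own TestRunner, so per-thread session state is easy.
    from net.grinder.script import Test
    from net.grinder.script.Grinder import grinder
    from net.grinder.plugin.http import HTTPRequest
    from HTTPClient import NVPair

    # Wrapping requests in Test objects makes The Grinder time them.
    login = Test(1, "Login").wrap(HTTPRequest())
    home = Test(2, "Home page").wrap(HTTPRequest())

    class TestRunner:
        def __init__(self):
            # Runs once per worker thread: a natural place to set up a
            # unique identity (and therefore session) for that thread.
            self.user = "loaduser%d" % grinder.threadNumber

        def __call__(self):
            # Runs once per iteration of the thread.
            login.POST("http://test.example.com/login",
                       (NVPair("username", self.user),
                        NVPair("password", "secret")))
            home.GET("http://test.example.com/")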
A little late to this party. I agree that Pylot is the best up-and-coming open source tool out there. It's simple to use and is actively worked on by a great guy (Corey Goldberg). As the founder of OpenQA, I'm also happy that Pylot now is listed on our home page and uses some of our infrastructure (namely the forums).
However, I also recently decided that the entire concept of load testing was flawed: emulating HTTP traffic, with applications as complex as they have become, is a pain in the butt. That's why I created the commercial tool BrowserMob. It's an external load testing service that uses Selenium to control real web browsers when playing back load.
The approach obviously requires a ton more hardware than normal load testing techniques, but hardware is actually pretty cheap when you are using cloud computing. And a nice side effect of this is that the scripting is much easier than normal load testing. You don't have to do any advanced regex matching (like JMeter requires) to extract out cookies, .NET session state, Ajax request parameters, etc. Since you're using real browsers, they just do what they are supposed to do.
Sorry to blatantly pitch a commercial product, but hopefully the concept is interesting to some folks and at least gets them thinking about some new ways to deal with load testing when you have access to a bunch of extra hardware!
Both JMeter and The Grinder will handle HTTP and HTTPS and have a proxy recorder to get you started. Both use a controller model to drive multiple test agents, so scalability is not an issue (given access to the cloud).
It's a hard call, as the learning curve is steep with both tools once you get into the more complicated scripting requirements: URL rewriting, correlation, providing unique data per virtual user, and simulating first-time versus returning users (by manipulating the HTTP headers); there's a rough sketch of those last two chores after the next paragraph.
That said, I would start with JMeter, as this tool has a huge following and there are many examples and tutorials on the web for using it. If and when you hit a 'road block', that is, something you can't easily do with JMeter, then have a look at The Grinder. The good news is that both tools have the same Java requirement, and a 'mix and match' solution is not out of the question.
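To make those last two chores concrete, here's a tool-agnostic sketch in plain Python using the requests library (so the URL, CSV file, and field names are placeholders, not anything JMeter- or Grinder-specific): each virtual user gets its own cookie jar and its own row of test data, and a returning visitor is simulated with a conditional-request header.

    # Tool-agnostic sketch of "unique data per virtual user" and
    # "first-time vs returning visitor". URL, CSV file and field names
    # are placeholders.
    import csv
    import requests

    def run_virtual_user(row, returning=False):
        session = requests.Session()   # one cookie jar per virtual user
        headers = {}
        if returning:
            # A returning visitor revalidates cached assets, so the server
            # can answer 304 Not Modified; a first-time visitor sends no
            # such header and downloads everything in full.
            headers["If-Modified-Since"] = "Thu, 01 Jan 2015 00:00:00 GMT"
        session.post("http://test.example.com/login",
                     data={"user": row["user"], "password": row["password"]})
        session.get("http://test.example.com/account", headers=headers)

    # Feed each virtual user a unique row of test data from a CSV file.
    with open("users.csv") as f:
        for row in csv.DictReader(f):
            run_virtual_user(row, returning=False)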
This is a relatively new approach, because it relies on the availability of resources that can now be provisioned from the cloud. With this approach a Selenium (WebDriver) script is run against a headless browser driver (i.e. WebDriver driver = new HtmlUnitDriver()) in multiple threads.
Scalability is compromised, as more VMs will be needed to drive the load compared with an HTTP driver such as The Grinder or JMeter. That said, if you are looking to drive 500 virtual users, then 20 Amazon Small Instances (at 6 cents an hour each, just $1.20 per hour in total) give you load that is very close to the real user experience.
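As a rough sketch of the pattern, here's the same idea using Selenium's Python bindings with headless Chrome instead of the Java HtmlUnitDriver mentioned above (the URL and thread count are only examples):

    # Each thread drives its own headless browser; scale out across VMs
    # for higher loads. URL and thread count are placeholders.
    import threading
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    VIRTUAL_USERS_PER_VM = 25

    def virtual_user():
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get("http://test.example.com/")
            driver.find_element(By.LINK_TEXT, "Sign in").click()
        finally:
            driver.quit()

    threads = [threading.Thread(target=virtual_user)
               for _ in range(VIRTUAL_USERS_PER_VM)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()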
We recently started using Gatling for load testing, and I would highly recommend trying it out. We had used SOASTA and JMeter in the past. Our main reasons for considering Gatling were the following:
I tried WebLoad; it's a pretty neat tool. It comes with a test-script IDE which allows you to record user actions on a website. It also draws a graph as it performs the stress test on your web server. Try it out; I highly recommend it.
Having tried everything mentioned here, I found curl-loader best for my purposes: a very easy interface, real-time monitoring, and useful statistics, from which I build performance graphs. All the features of libcurl are included.
BlazeMeter has a Chrome extension for recording sessions and exporting them to JMeter (it currently requires a login). You also have the option of paying them to run the tests on their cluster of JMeter servers (their pricing seems much better than LoadImpact's, which I've just stopped using).
You asked this question almost a year ago, and I don't know if you are still looking for another way of benchmarking your website. However, since this question is still not marked as solved, I would like to suggest the free web service LoadImpact (not affiliated, by the way). I just got the link via Twitter and would like to share the find. They produce a reasonably good overview, and for a few bucks more you get the "full impact mode". This probably sounds strange, but good luck pushing and breaking your service :)
It allows you to put scripts together in a test any way you want and configure the number of iterations, the number of users in each iteration, the ramp-up time for introducing each new user, and the delay between iterations. Tests can also be scheduled to run in the future.
We use the Microsoft tool mentioned - Microsoft Web Application Stress Tool. It is the easiest tool I have used. It is limited in many ways, including only being able to hit port 80 on manually created tests. But, its ease of use means it actually gets used.
So, during development, we include very basic multi-user testing (using Selenium), which checks for basic craziness like broken session management, obvious concurrency issues, and obvious resource contention problems. Non-trivial projects include this in the continuous integration process, so we get very regular feedback.
For projects that don't have extreme performance requirements, we include basic performance testing in the test cycle; usually we script the tests using BadBoy and import them into JMeter, replacing the login details and other thread-specific data. We then ramp these up until the server is handling 100 requests per second; if the response time stays under 1 second, that's usually sufficient. We launch and move on with our lives.
For projects with extreme performance requirements, we still use BadBoy and JMeter, but put a lot of energy into understanding the bottlenecks on the servers in our test rig (web and database servers, usually). There's a good tool for analyzing Microsoft event logs which helps a lot with this. We typically find unexpected bottlenecks, which we optimize if possible; that gives us an application that is as fast as it can be on "1 web server, 1 database server". We then usually deploy to our target infrastructure and use one of the "JMeter in the cloud" services to re-run the tests at scale.
There are a lot of good tools mentioned here. But I wonder whether tools are an answer to the question "How do you stress test a web application?" The tools don't really provide a method to stress a Web app. Here's what I know:
Stress testing shows how a Web app fails while serving responses to an increasing population of users. Stress testing shows how the Web app functions while it fails. Most Web apps today - especially social/mobile Web apps - are integrations of services. For example, when Facebook had its outage in May 2011, you could not log onto Pepsi.com's Web app. The app didn't fail entirely; a big portion of its normal function just became unavailable to users.
Performance testing shows a Web app's ability to maintain response times independent of how many users are concurrently using the app. For example, an app that handles 10 transactions per second with 10 concurrent users should handle 20 transactions per second with 20 users. If the app handles fewer than 20 transactions per second, the response times are growing and the app is not achieving linear scalability.
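To the original poster's question about a formula: the standard relationship between these numbers is Little's Law, concurrent users ≈ throughput × (response time + think time). A tiny sketch with example figures (not taken from any particular test):

    # Little's Law relates the numbers a load test reports:
    #   concurrent users ~= throughput * (response time + think time)
    def concurrent_users(throughput_tps, response_time_s, think_time_s=0.0):
        return throughput_tps * (response_time_s + think_time_s)

    print(concurrent_users(10, 1.0))       # 10 TPS at 1 s response -> ~10 users
    print(concurrent_users(10, 1.0, 9.0))  # add 9 s of think time  -> ~100 users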