Is there a faster way to import external data into LoadRunner's analysis tool?


Gaz Davidson

Dec 17, 2010, 10:27:21 AM
to LR-Loa...@googlegroups.com
During my current engagement I couldn't get rstatd working over firewalls. sar doesn't give detailed enough results, but the main information I'm looking for is output by vmstat, so I wrote a little Python script to convert vmstat output to CSV so LoadRunner can import it. It works great:
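(The script itself didn't survive in the archive; the following is a hypothetical reconstruction of such a converter. The Date/Time column format and the header handling are assumptions — adjust to whatever your Analysis import wizard expects.)

```python
#!/usr/bin/env python
# Hypothetical sketch: convert the output of `vmstat <interval>` into a CSV
# with Date/Time columns so LoadRunner Analysis can import it as external
# monitor data. Timestamp format and column handling are assumptions.
import sys
import time

def vmstat_to_csv(lines, start_epoch, interval=5):
    """Yield CSV rows built from raw `vmstat` output lines."""
    header_done = False
    sample = 0
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "procs":            # banner row: skip
            continue
        if fields[0] == "r":                # column-header row
            if not header_done:             # vmstat repeats it; keep one
                yield "Date,Time," + ",".join(fields)
                header_done = True
            continue
        # data row: stamp it relative to when sampling started
        stamp = start_epoch + sample * interval
        sample += 1
        t = time.localtime(stamp)
        yield "%s,%s,%s" % (time.strftime("%m/%d/%Y", t),
                            time.strftime("%H:%M:%S", t),
                            ",".join(fields))

if __name__ == "__main__":
    # e.g.  vmstat 5 | python vmstat2csv.py > stats.csv
    for row in vmstat_to_csv(sys.stdin, time.time()):
        print(row)
```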


However, now I'm actually trying to import this data into the analysis tool, which is an incredibly painful manual task. Here's what it looks like:

1. Open analysis file.
2. Wait for the analysis tool to open
3. Choose "Tools -> External Monitors -> Import data" from the menu
4. Click "Add File" and select my file (which I have moved to %tmp% because LR defaults there every time)
5. Enter the machine name
6. Click next
7. Select the monitor type from the tree
8. Click Finish
9. Wait for the "Importing data" window to finish working
10. Click the close button
11. Wait for the graphs to refresh (deleting all the graphs beforehand speeds this step up)
12. For each machine in the SUT (10 of them), GOTO 3
13. For each test executed, GOTO 1.

Following this process, importing counters manually takes over 400 soul-crushing, spirit-destroying, time-wasting steps. As there's no way to remove external data once it's been imported, if I make a mistake (e.g. a bad machine name) I have to start again from scratch. This is hardly a good use of my time; I feel ashamed of billing the client for time spent doing tasks that a trained monkey could do.

Does anyone have any advice on how to automate this process?

James Pulley

Dec 17, 2010, 11:38:04 AM
to lr-loa...@googlegroups.com

Best way: use SiteScope and its ability to grab all of the stats during the test. It can grab all of the same data elements as vmstat and needs to leverage an SSH connection through the firewall, which it sounds like you might already have if you have terminal access. On 8.x and above you get 500 points of SiteScope with your LR purchase for monitoring foundations.

 

You would think that you could just open up the database and start bringing new elements into the [datapoint meter] table along with its associated tables, but there are so many relations to manage that importing with that model is cumbersome and error-prone. If you're using SQL Server as a backend, and you are in a location where the laws allow for exploring the behavior of a system for integration purposes, then you may want to examine the query logs on SQL Server for the queries the analysis tool uses to import the data, to see how you might automate a direct import into the backend data store. You need to be really careful about jurisdiction on this one: while the EU is a bit more liberal on examining the behavior of software for integration purposes, in the US that tends to run afoul of the reverse-engineering provisions of a license.

 

I well understand your pain on the multiple-external-file front. At one of our clients we have some machines that we monitor indirectly. We use a mechanism similar to yours involving vmstat, with the logs output directly in CSV format at collection time, avoiding the conversion step. It is still a manual pain to import each one by host, however. It would be nice if future versions of the import allowed a single field to define the host; then we could consolidate files into a single import instead of multiple imports.

 

I would press for SiteScope with the SSH access for the time savings (and bill savings) for the client. Barring that, hire an intern at a really cheap rate to do the drudge work. Or chat with your lawyer and accountant about the business value of a Christmas shopping-and-engineering trip to a location with more liberal laws on examining behavior for integration purposes ;)

 

James Pulley, http://www.loadrunnerbythehour.com/PricingMatrix

--
You received this message because you are subscribed to the Google Groups "LoadRunner" group.
To post to this group, send email to LR-Loa...@googlegroups.com
To unsubscribe from this group, send email to
LR-LoadRunne...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/LR-LoadRunner?hl=en

Floris Kraak

Dec 17, 2010, 12:02:11 PM
to lr-loa...@googlegroups.com

You may want to take a look at BIRT - http://www.eclipse.org/birt/phoenix/ - as a way to do the things you'd normally do with the analysis tool. It accepts CSV files as database sources and can generate some really nice graphs. At the very least they look better...

On 17 Dec 2010 16:27, "Gaz Davidson" <garethd...@gmail.com> wrote:

Gaz Davidson

Dec 17, 2010, 1:05:37 PM
to LR-Loa...@googlegroups.com, lr-loa...@googlegroups.com


On Friday, December 17, 2010 4:38:04 PM UTC, James Pulley wrote:

Best way: use SiteScope and its ability to grab all of the stats during the test. It can grab all of the same data elements as vmstat and needs to leverage an SSH connection through the firewall, which it sounds like you might already have if you have terminal access. On 8.x and above you get 500 points of SiteScope with your LR purchase for monitoring foundations.


Good idea. I'll have to try this on the next long-term project I work on. I've not used SiteScope yet, mostly due to the licensing constraints but also because I'm usually battling against time; on a consultancy gig it's difficult to justify time spent on a proof of concept. It's a case of doing as much as possible before a deadline, bolting botch-job workarounds together when things go wrong!

I can't really go making big changes now as I'm pretty much at the end of the engagement: two Windows systems tested, the write-up in progress on one Linux system, and one Linux system still to test; say 10 days of work if all goes well.

You would think that you could just open up the database and start bringing new elements into the [datapoint meter] table along with its associated tables, but there are so many relations to manage that importing with that model is cumbersome and error-prone. If you're using SQL Server as a backend, and you are in a location where the laws allow for exploring the behavior of a system for integration purposes, then you may want to examine the query logs on SQL Server for the queries the analysis tool uses to import the data, to see how you might automate a direct import into the backend data store. You need to be really careful about jurisdiction on this one: while the EU is a bit more liberal on examining the behavior of software for integration purposes, in the US that tends to run afoul of the reverse-engineering provisions of a license.


I'm currently working for a US client and don't have a SQL Server backend to store the results. I usually work in the UK, though, where we have fairly liberal laws regarding reverse engineering. Sounds like a fun project for the next time I'm at a client site for an extended period!

... Then we consolidate files for a single import versus multiple imports.

That's a winning solution. I'll make a simple script to combine the CSV files, which will optimize away the innermost loop but mean I have to filter results manually to make pretty graphs. By naming the columns like "Measurement (Machine)" I can sort alphabetically and easily do manual filters.
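A minimal sketch of such a combining script might look like this. The `<host>.csv` naming convention is an assumption, as is each input file having Date and Time columns followed by the counter columns:

```python
# Hypothetical sketch: merge per-host CSVs (named <host>.csv) into a single
# file, renaming every counter column to "Counter (host)" so one import
# covers all machines and the columns sort alphabetically for filtering.
import csv
import glob
import os
import sys

def combine(pattern, out_path):
    """Join CSVs matching `pattern` on their Date,Time columns."""
    merged = {}      # (date, time) -> {renamed column: value}
    columns = []     # renamed columns, in first-seen order
    for path in sorted(glob.glob(pattern)):
        host = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            reader = csv.DictReader(f)
            renamed = {c: "%s (%s)" % (c, host)
                       for c in reader.fieldnames
                       if c not in ("Date", "Time")}
            columns.extend(renamed.values())
            for row in reader:
                bucket = merged.setdefault((row["Date"], row["Time"]), {})
                for old, new in renamed.items():
                    bucket[new] = row[old]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Date", "Time"] + columns)
        for key in sorted(merged):
            writer.writerow(list(key) +
                            [merged[key].get(c, "") for c in columns])

if __name__ == "__main__" and len(sys.argv) == 3:
    combine(sys.argv[1], sys.argv[2])   # e.g. combine "stats/*.csv" all.csv
```

Rows missing from one host's file are left as empty cells rather than dropped, so samplers that started at slightly different times still line up.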

Great stuff, thanks 'Pulley

Gaz Davidson

Dec 17, 2010, 1:41:06 PM
to LR-Loa...@googlegroups.com, lr-loa...@googlegroups.com
Ooh I'll check this out too, thanks.

Venkat P

Dec 17, 2010, 1:44:00 PM
to LoadRunner
The company that I am currently contracting with has an impressively
designed, in-house tool that is used to 'load' external data into
Analysis.

Essentially one invokes a batch file that uses Perl and Java to
extract the data, format it, save it into a single CSV, and then load
it directly into the LoadRunner results (regardless of the number of
servers being monitored; they are fed in via a text file at one time,
along with start and stop times).

Opening the results in Analysis and selecting a particular system
resource graph shows the external data in one place, after which it
is up to us to filter them out.

The more I look at it, the more I think it's a work of genius, saving
hundreds if not thousands of man-hours every year... HP could learn
something from this tool.

Floris Kraak

Dec 17, 2010, 5:24:31 PM
to lr-loa...@googlegroups.com

If such a tool were open source we would not be held back by the speed at which HP chooses to catch up...

On 17 Dec 2010 19:44, "Venkat P" <vjp...@gmail.com> wrote:

Gaz Davidson

Dec 17, 2010, 9:41:35 PM
to LR-Loa...@googlegroups.com, lr-loa...@googlegroups.com
Rather than supplement deficient proprietary software with open source, it would be better to have a full-featured, free software load testing tool.

I'd write one myself if I didn't have a vested interest in LoadRunner's success ;-)

James Pulley

Dec 18, 2010, 9:34:15 AM
to lr-loa...@googlegroups.com

In the best of free-market traditions, you have found a need, and I encourage you to begin such a project.

Floris Kraak

Dec 18, 2010, 10:23:53 AM
to lr-loa...@googlegroups.com

Replacing one piece at a time works. Add BIRT to this and we have the analysis piece sorted. Integrate that with JMeter or some other decent tool for the scripting part and we might be getting somewhere...

On 18 Dec 2010 03:41, "Gaz Davidson" <garethd...@gmail.com> wrote:

Gaz Davidson

Dec 19, 2010, 6:17:23 AM
to LR-Loa...@googlegroups.com, lr-loa...@googlegroups.com
I successfully used FunkLoad for web and web-services testing when working for Betfair earlier this year. It only has a rudimentary recorder and the reporting isn't great, but it produces XML files from which we managed to get some quite pretty graphs using matplotlib. The biggest problems were that it's memory-hungry, hard to get running on 64-bit platforms (well, Windows anyway), and CPython doesn't really like to use more than one CPU, so you need a few load generators.

One of the Java devs made JMX monitors for the J2EE stuff, and being the great company Betfair are, we got full permission to release the new monitors, fixes, graphing tools and continuous integration tools as open source, though I don't know how much of this has made it back into the wild yet.

I'll certainly use it again if I'm on a project that demands test automation or has licensing constraints; there's something magical about performance reports appearing at the end of the build process as developers check in code!