LoadRunner Analysis

Aroorganesha

Jan 6, 2016, 8:18:24 PM
to LoadRunner
Hi All,

Is there any option to filter by absolute time in LoadRunner Analysis?

I would like to find the server bottleneck for a specific time period (e.g. 3:15 - 4:15 AM EST), but the global filter only offers elapsed time, not absolute time.

Let me know if there is an option or a workaround.

Thanks
Aroor Ganesha

André Luyer

Jan 8, 2016, 8:57:08 AM
to LoadRunner
No, there is no option to filter on absolute time, but since the absolute start time is known (see the Summary Report) that shouldn't be a problem.

Why not simply change the graphs to absolute time (via Display Options)?

André

Aroorganesha

Jan 11, 2016, 10:05:46 AM
to LoadRunner
Agreed, but the problem is that I want to filter the measurements for a specific absolute time window.

Mital Majmundar

Jan 11, 2016, 10:05:47 AM
to LR-Loa...@googlegroups.com

I think there is an option for filtering with absolute time.

@Aroorganesha - can you try using the raw data from the Analysis window?


James Pulley

Jan 11, 2016, 10:31:30 AM
to LoadRunner
You have the start time on the first page of the report. You know your absolute time as an offset from that start time. Be a pragmatist: calculate the offset and use that for your filter. Then, as André notes, change the graphs to absolute time. The pragmatic approach gets you a solution within ten minutes versus five days of email back and forth.
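
For illustration only, a minimal Python sketch of that offset arithmetic; the scenario start time below is a made-up value, read the real one off the Summary Report:

from datetime import datetime

# Absolute scenario start time as shown on the Summary Report
# (hypothetical value, for illustration only).
scenario_start = datetime(2016, 1, 6, 3, 0, 0)   # 3:00:00 AM EST

# The absolute window of interest.
window_start = datetime(2016, 1, 6, 3, 15, 0)    # 3:15 AM EST
window_end = datetime(2016, 1, 6, 4, 15, 0)      # 4:15 AM EST

# Convert to elapsed (scenario) time for the global filter.
elapsed_from = window_start - scenario_start     # 0:15:00
elapsed_to = window_end - scenario_start         # 1:15:00
print(f"Filter on elapsed time from {elapsed_from} to {elapsed_to}")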

Or, you can export all of your timing-record data and datapoint data into a set of name-value fields associated with time, and then use an external tool such as R or Splunk to ingest the data for reporting. This is likely on the plus-five-days end of the scale.
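
A rough Python sketch of the kind of reshaping meant here; the input file and its column names are assumptions, not what Analysis actually emits:

import csv

# Assumed input: raw data exported to CSV, first column elapsed seconds,
# one column per transaction (all names hypothetical).
with open("raw_data.csv", newline="") as src, \
        open("name_value.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["elapsed_seconds", "name", "value"])
    for row in reader:
        t = row.pop("Elapsed Time")       # assumed time column
        for name, value in row.items():   # one name-value row per metric
            if value:                     # skip empty cells
                writer.writerow([t, name, value])

The long name-value shape is what R's data frames and Splunk's field extraction both digest most easily.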

Or, you can use open-source reporting tools like Jasper Reports (Jaspersoft) or Pentaho to design your own reports which use absolute time as a filter condition instead of offset time. To take advantage of this, look at the results database which gets built, particularly the Meta table, which holds the schema for the rest of the database. That structure lends itself quite readily to this type of query: you take the absolute start time from one table and add to it the offset from the beginning for each of the items in the [ * Meter ] tables, which contain the timing records, datapoints, breakdown, etc. You would then have tables with absolute time (epoch timestamps) which could be used for your reporting.

Estimated time to first usable report (installation, plus learning curve, samples, etc.): about 30 days. Time for each report after that: not much more than what is required for running Analysis, generating an .LRA structure, and pointing your Jasper or Pentaho templates at the new data source.
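
To make the epoch arithmetic concrete, a small Python sketch against an in-memory SQLite stand-in; every table and column name here is hypothetical, the real ones must be read from the Meta table:

import sqlite3

# Stand-in for the results database; the schema below is invented to
# show the shape of the query only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Result (ScenarioStartEpoch INTEGER);
    INSERT INTO Result VALUES (1452070800);
    CREATE TABLE EventMeter (EventName TEXT, OffsetSeconds REAL, Value REAL);
    INSERT INTO EventMeter VALUES ('login', 900.0, 1.23),
                                  ('search', 4500.5, 0.87);
""")

# Absolute epoch per record = scenario start + per-record offset; an
# absolute-time filter then becomes a plain BETWEEN on that expression.
rows = con.execute("""
    SELECT e.EventName,
           r.ScenarioStartEpoch + e.OffsetSeconds AS AbsoluteEpoch,
           e.Value
    FROM EventMeter AS e CROSS JOIN Result AS r
    WHERE r.ScenarioStartEpoch + e.OffsetSeconds
          BETWEEN 1452071700 AND 1452075300
""").fetchall()
print(rows)   # only the records inside the absolute window survive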

Consider the pragmatic path. It's almost always faster, but perhaps not as shiny an object as you might desire.

James Pulley
NewCOE/LoadRunnerByTheHour/TheScriptFarm/LiteSquare/PerfBytes



Mital Majmundar

Jan 11, 2016, 10:31:31 AM
to LR-Loa...@googlegroups.com

So if you already have the raw data with absolute time, then just copy it into Excel and get the graph for the specific time.
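
That copy-and-filter step can also be scripted; a sketch with pandas, where the file name and the "Absolute Time" column are assumptions:

import pandas as pd

# Raw data as exported from the Analysis window (names assumed).
df = pd.read_csv("raw_data.csv", parse_dates=["Absolute Time"])

# Keep only the 3:15 - 4:15 AM window, then hand the slice to Excel.
start = pd.Timestamp("2016-01-06 03:15:00")
end = pd.Timestamp("2016-01-06 04:15:00")
window = df[df["Absolute Time"].between(start, end)]
window.to_csv("raw_data_window.csv", index=False)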


Aroor Ganesha

Jan 12, 2016, 9:09:26 AM
to LR-Loa...@googlegroups.com
Yes, thanks for your support on this.

I have exported the raw data into a separate Excel sheet now and am able to produce the graph manually.

The reason for looking for this option is that I have made a VBS script to generate the report automatically from the Analysis file and mail it to the business team as soon as the script execution is completed.
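
For illustration, the mailing step of such a script could look like this in Python (his actual script is VBS; the server, addresses, and report path below are placeholders):

import smtplib
from email.message import EmailMessage
from pathlib import Path

# All names below are placeholders.
report = Path("results/summary_report.html")

msg = EmailMessage()
msg["Subject"] = "Load test interim report"
msg["From"] = "perf-team@example.com"
msg["To"] = "business-team@example.com"
msg.set_content("Interim analysis report attached; deep-dive to follow.")
msg.add_attachment(report.read_bytes(), maintype="text",
                   subtype="html", filename=report.name)

with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)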

I will look into James's third option as well now, for compatibility with my .NET batch application, and will post back.

Regards
AroorGanesh

James Pulley

Jan 12, 2016, 9:37:45 AM
to LoadRunner
So, what you are saying is that you add zero value in terms of analysis of the results against the requirements, or identification of the resource issues which lead to poor performance.

The value is not the test or the test report. The value is the analysis of the test. When you spawn these off as soon as the test ends, you are adding zero value to the output, and very likely you are sending it to people whose perspective of performance testing is, "I get a report. I have no idea what it means, but I get a report." That is an exact quote of what I hear weekly from the people I speak with.

To bring it back to how this impacts you directly:
  • High value = good rates and lowered risk in production
  • Low value = low rates and increased risk in production

Aroorganesha

Jan 13, 2016, 9:32:44 AM
to LoadRunner
No, that's not right. The reason for doing this automation is that a mail gets triggered to our DBA, architect, and the support team with the server metrics for further analysis. They will also send us the AWR report and confirm whether any other (scheduled) process running at the time caused the issue.

Basically, this gives them an interim analysis report, so we can take a call on a deep-dive investigation.

Regards
Ganesh

Kevyland

Jan 13, 2016, 9:43:38 AM
to LoadRunner
+1 to James,

How many times have you sent a report and received just "thanks" and nothing more?
We bury ourselves in useless reporting that generates nothing but expense and lost time, with little understanding and no action.
Great example: I report a bug and it sits in the backlog. A customer reports the same issue and now it's a priority.
See what motivates a management decision to get something fixed.
C'est la vie.

James Pulley

Jan 13, 2016, 10:13:20 AM
to LoadRunner
Now it would appear that we have hit upon the root issue.  

You need to line up external data sources/monitor data, collected outside LoadRunner, with your results, and that external data is on an absolute-time basis versus a time-offset basis. I want to offer a different path: use the integrated monitors, plus the ability to import external datapoints into Analysis, and your issue goes away. Integrated monitor data has been a part of LoadRunner since at least 1995, and the import capability of the Analysis tool (which arrived with version 6.x) has been there, I believe, since its inception. This will allow you to align all of the monitor data along a common time scale. It also eliminates the labor component of having people sit there and watch/collect external data during the test.

If the monitors must be external, then the data can be collected into files from perfmon (or the command-line tools on 'nix) and then integrated into Analysis using the import capability. It is only when you have an integrated view of resources and response times that you can identify issues and add value.
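
For the external-collection case, a Python sketch that reshapes a Windows typeperf CSV (first column a timestamp, one column per counter) into long timestamp-counter-value rows ahead of import; the file names, and the exact layout the import wizard expects, are assumptions:

import csv

# typeperf writes one column per counter; reshape to long rows.
with open("typeperf.csv", newline="") as src, \
        open("monitors_long.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    header = next(reader)              # header[0] is the timestamp column
    writer = csv.writer(dst)
    writer.writerow(["timestamp", "counter", "value"])
    for row in reader:
        ts, values = row[0], row[1:]
        for counter, value in zip(header[1:], values):
            if value.strip():          # typeperf leaves blanks on misses
                writer.writerow([ts, counter, value])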

I take back the observation on the value of distribution to stakeholders, as distributing this type of report without analysis is common. Since you are attempting to collect data on a common timescale, I propose, as above, using the built-in capabilities of the internal monitors and the ability to import external data into Analysis as datapoints.

James Pulley

Jan 13, 2016, 10:13:20 AM
to LoadRunner
I see your point, and accept that this happens often: performance results are simply dismissed as not important, or there is not enough time to fix the findings. I collect the items identified in production, and also those found in test but not fixed, and then try to measure the impact of not fixing them in dollars related to downtime, sales, etc. That should be delivered to management quarterly so your value can be recognized.

There is a cultural item that comes into play these days which is a difficult one to address with many project managers and developers: the issue of "seniority." When performance testing first came upon the scene, only the most seasoned individuals were pulled for the efforts, people with decades of robust foundation skills in development, architecture, networking, database design, systems engineering, etc. Over the past decade and a half there has been a quantum shift, given life by the marketing falsehood that "any business analyst can do this work with tool 'X.'" People are pushed from manual to automated to performance QA as starter jobs right out of school, with little accounting for foundation skills or value delivered. People are then "promoted" from this group into project management and development positions. So the "cultural seniority" part of the problem with PMs and devs dismissing the issue is that they do not see performance testers as peers, but as someone to whom they can say, "Oh, that's nice. I have been where you are and I can tell you that is not important. When you grow up you will understand..."

That dismissive barrier of "seniority," even though I have decades of actual experience beyond that of the young PM or dev, is one often encountered, and it all too often leads to identified issues not being addressed. I can document millions of dollars in lost sales and downtime across multiple companies directly attributable to this dev and PM dismissiveness toward performance defects.

My job is to reduce the risk in deployment.  If someone else increases the risk by not fixing a known issue and that issue happens in production, well, the cost of that defect should be measured against the value of that person who blocked the fix.  