Jenkins Performance Plugin provides all reporting as a cumulation of all runs, not just the relevant run for the report.


Stuart Kenworthy

Jun 18, 2019, 9:12:03 AM
to Jenkins Users
Produced using:
- Jenkins 1.121.2
- Performance Plugin 3.15
- Plot 2.1.0
- Jmeter 5.1

I have googled, searched this group, and searched the Jenkins issue log, and cannot find this being reported, but I cannot imagine I am the only one having this issue.

I may be missing something, but the Jenkins Performance Plugin's "comparison with previous build" graph appears to show the difference between the current report and the previous report. However, both reports are just an accumulation of all runs up to that point, which pretty much makes any of the reporting provided by the plugin useless.

Say your JMeter script has 10 samplers. Run 1 has 0 failures and run 2 has a 100% failure rate; the report looks like this:

URI     | Samples  | Errors
Login   | 20 (+10)      | 50% (+50%)

As a comparison to the previous run, I would expect it to look like this:

URI     | Samples  | Errors
Login   | 10 (10)        | 100% (0%)

Say run 3 was again a clean run; the report looks like this:

URI     | Samples  | Errors
Login   | 30 (+10)      | 33.33% (-16.66%)

Once again, as a comparison to the previous run, I would expect it to look like this, as we do not care about the first run in a comparison against the previous build:

URI     | Samples  | Errors
Login   | 10 (10)        | 0% (100%)

Note I also did not expect a + or - in the bracketed figure, as surely this should just contain the previous run's results.
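To make the arithmetic above explicit, here is a small sketch (not plugin code, just an illustration of the three runs described) showing how the cumulative figures drown out the latest run:

```python
# Three runs of 10 samples each: clean, all-failed, clean.
runs = [(10, 0), (10, 10), (10, 0)]  # (samples, errors) per run

samples = errors = 0
for i, (s, e) in enumerate(runs, start=1):
    samples += s
    errors += e
    cumulative = round(100 * errors / samples, 2)  # what the report shows
    per_run = round(100 * e / s, 2)                # what I expected
    print(f"run {i}: cumulative error rate {cumulative}%, this run {per_run}%")
```

After run 2 the cumulative rate is 50% (not the run's 100%), and after run 3 it is 33.33% (not the run's 0%), matching the tables above.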

It also appears all the graphs are based on the cumulative figures, so they never actually represent the run they seem to represent.

Take this graph for example: we were having downstream issues which resulted in each run hitting a 98% error rate. The last 2 runs were around a 25% error rate, but rather than the percentage of errors dropping to around 25% to reflect those runs, it once again just shows the cumulative error rate of all the runs before them.


Surely it should look like this:
This also breaks things like threshold checking for build failures, which also uses the cumulative figures.

Deleting runs from Jenkins also has undesired effects, such as reducing the number of samples in the next report without adjusting the previous statistics.

Is there any way to fix this reporting?

Thanks

Stuart Kenworthy

Jul 31, 2019, 6:49:22 AM
to Jenkins Users
After a bit more playing around I have identified a workaround, but ultimately it came down to unexpected behaviour from both Jenkins and JMeter.

Firstly, Jenkins was not creating a new workspace for every build and was not clearing down the workspace after each build. This meant that artefacts from previous builds were still present when new builds started.

Secondly, when starting up, JMeter simply appends to any existing results log file instead of creating it from scratch.
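One way to guard against that append behaviour is to remove the old results file before each run (the path result.csv here is an assumption based on this thread, not something universal):

```shell
# Remove any leftover results file so JMeter cannot append to it.
# -f makes this a no-op if the file does not exist.
rm -f result.csv
```

Alternatively, recent JMeter versions have a -f / --forceDeleteResultFile command-line option that deletes an existing results file on startup, which achieves the same thing without an extra step.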

The combination of the two meant that it was the results.csv file on the Jenkins runner that contained all the previous run data, not the perfReport plugin. The perfReport plugin appeared to be doing everything correctly.

I have now added a cleardown section to my Jenkinsfile and everything looks good.
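For reference, a minimal sketch of such a cleardown in a declarative pipeline, assuming the Workspace Cleanup plugin's cleanWs() step is available (core Pipeline's deleteDir() would also work; the test plan and file names are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Load test') {
            steps {
                // Wipe leftover artefacts (e.g. result.csv) from earlier builds
                cleanWs()
                sh 'jmeter -n -t test.jmx -l result.csv'
            }
        }
    }
}
```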

However, I did notice that in failed or unstable builds the graphs were not being populated. This appears to be because those conditions cause the plugin to fail before generating the reports. A quick fix is to have two perfReport entries in your stage: one that generates the reports, then one that applies the pass criteria.

    stage('Publish report') {
        // First call only generates the report, so the graphs are
        // populated even if the thresholds below later mark the
        // build unstable or failed.
        perfReport 'result.csv'
        // Second call applies the pass criteria.
        perfReport sourceDataFiles: 'result.csv',
            filterRegex: '',
            failBuildIfNoResultFile: true,
            errorFailedThreshold: 5,
            errorUnstableThreshold: 1
    }

If anyone has a better solution to this final issue, I am all ears, but I am happy with the way it is running now.