assertThat(currentSpeed, lessThanOrEqualTo(expectedSpeed))

- this has given much better results.
(I was also advised to use the standard deviation if necessary - I haven't applied it yet.)
--
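A minimal sketch of what folding the standard deviation into that assertion might look like - the helper names and the two-sigma tolerance are assumptions for illustration, not the poster's actual code:

```java
import java.util.List;

public class SpeedCheck {
    // Mean of the previous runs' measurements.
    static double mean(List<Double> runs) {
        return runs.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    // Population standard deviation of the previous runs.
    static double stdDev(List<Double> runs) {
        double m = mean(runs);
        double variance = runs.stream()
                .mapToDouble(s -> (s - m) * (s - m))
                .average().orElse(0.0);
        return Math.sqrt(variance);
    }

    // Pass if the current measurement is within the historical mean plus
    // N standard deviations (N = 2 here, an arbitrary choice).
    static boolean withinExpected(double currentSpeed, List<Double> previousRuns) {
        double threshold = mean(previousRuns) + 2 * stdDev(previousRuns);
        return currentSpeed <= threshold;
    }

    public static void main(String[] args) {
        List<Double> previous = List.of(100.0, 105.0, 98.0, 102.0);
        System.out.println(withinExpected(104.0, previous)); // within tolerance
        System.out.println(withinExpected(150.0, previous)); // clearly slower
    }
}
```

The point of the tolerance band is that a single noisy run doesn't fail the build, while a genuine regression still does.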
You received this message because you are subscribed to the Google Groups "Behaviour Driven Development" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/behaviordrivendevelopment/db9073b6-5309-b042-8143-b463304dd424%40iDIAcomputing.com.
Hi Mani,
My question, though, is how we can still write automated tests that help alert us during CI/CD or even developer local machine builds. Basically, letting the concerned parties know within a good scope of time that their work has potentially degraded the performance of the system (if that is the case).
There are two different statements here. “Letting the concerned parties know within a good scope of time” is the goal (technically: noticing soon enough that rectifying the situation doesn’t noticeably impact your flow of value). “Alerting us during CI/CD or even developer local machine builds” is a local optimisation and may even be chasing ghosts, i.e. it isn’t reasonably solvable.
Having a separate perf stage that runs out-of-band of your usual build, either daily, hourly, or whatever works, is a great way to trend your performance over time. You will likely start with something simplistic for round-trip times, and get more sophisticated as the app and your knowledge increase.
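The simplistic starting point could be as small as timing a round trip and appending it to a file the team can graph. A rough sketch, where the measured action and the CSV path are placeholders:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

public class PerfStage {
    // Append one timestamped round-trip measurement to a CSV that can be
    // graphed over time. Run this out-of-band, e.g. from a daily or hourly job.
    static void record(Path csv, long roundTripMillis) throws IOException {
        String line = Instant.now() + "," + roundTripMillis + System.lineSeparator();
        Files.writeString(csv, line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Time an arbitrary action; a real stage would hit an endpoint
    // or drive a representative user scenario here.
    static long timeMillis(Runnable action) {
        long start = System.nanoTime();
        action.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws IOException {
        long elapsed = timeMillis(() -> {
            // placeholder for the real round trip
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
        record(Path.of("perf-trend.csv"), elapsed);
        System.out.println("recorded " + elapsed + " ms");
    }
}
```

One row per run is enough to draw the trend graph discussed below; sophistication (percentiles, per-endpoint breakdowns) can come later as the app and your knowledge grow.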
What you mention about analysing the graph - wouldn't that be similar to some statistical analysis of several runs? When the slope tilts towards a range of numbers, it would mean the performance is improving or degrading? Is this a reasonable way to check as well?
You can start just by checking by eye. Also if someone notices a sudden degradation in performance, you can check the graphs to see if that correlates. On one project someone (an internal user) noticed the app “felt more sluggish”. The team looked at the various performance graphs and saw a massive drop three months before that no one had noticed. Because they had the graph they were able to pinpoint the date the change probably happened, and then they checked the commits that day and found the unintended behaviour. It took a matter of hours to identify, isolate, fix, test and redeploy.
You can get fancy with automated analysis of the graphs, but the simplest approach is just to look at them every week or so.
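If you do want a first step towards automating that weekly look, a least-squares slope over the recent measurements is about the simplest check. A sketch, where the window of data points and the alert threshold are assumptions:

```java
public class TrendCheck {
    // Least-squares slope of y over equally spaced points x = 0..n-1.
    // For round-trip times, a positive slope means performance is degrading.
    static double slope(double[] y) {
        int n = y.length;
        double xMean = (n - 1) / 2.0;
        double yMean = 0;
        for (double v : y) yMean += v;
        yMean /= n;
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (i - xMean) * (y[i] - yMean);
            den += (i - xMean) * (i - xMean);
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[] roundTrips = {101, 103, 99, 110, 115, 121}; // ms, trending up
        double s = slope(roundTrips);
        // Flag if times climb more than 2 ms per run (threshold is arbitrary).
        System.out.println(s > 2.0 ? "degrading" : "stable");
    }
}
```

This would have caught the sudden drop in the story above much sooner than three months, but it's a complement to, not a replacement for, actually looking at the graphs.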
Kind regards,
Daniel