Thanks, Joel. I've benefited from being able to use
perf.html#/comparechooser and look forward to the planned performance
work.
Presenting PGO results separately is very helpful, as is knowing that
higher is better for canvasmark, but the treeherder version is not yet
ready to replace the snarkfest one. It seems to randomly choose which,
and how many, of the results to load, so the comparison changes after
reloads of the page.
> upcoming work:
> 2) continue polishing perfherder graphs, compare-view
Perhaps the above issue is already covered by this work and you know it
will be addressed by next quarter, but if not, could you please keep
the snarkfest version running until it is resolved?
Today, the treeherder version does not load enough results for a
reasonable comparison, while the snarkfest version doesn't seem to have
this problem and presents results almost instantly.
If I may sneak in a request or two: the number of results, or an
estimate of the standard error of the mean, would help in interpreting
the standard deviations. Also, a way to see whether higher or lower is
better, even for those tests without enough data to detect a
statistically significant change, would be helpful.
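For clarity, the relation I have in mind is just SEM = sd / sqrt(n),
which is why the run count matters for reading the standard deviations.
A tiny sketch with made-up numbers (not actual canvasmark results):

```python
import math

def sem(stddev, n):
    """Standard error of the mean: the expected spread of the
    sample mean around the true mean, given n runs."""
    return stddev / math.sqrt(n)

# Hypothetical replicates: the same per-run spread implies a much
# tighter estimate of the mean when more runs are loaded.
print(sem(50.0, 5))   # few runs: SEM is large (~22.4)
print(sem(50.0, 25))  # more runs: SEM shrinks to 10.0
```

So a displayed sd of 50 means something quite different at n=5 than
at n=25, which is hard to judge when n isn't shown.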
I'm keen to find out what is in the "Details" links, but they
currently just ask me to "wait a minute".