If you are subscribed to the GitHub repository, you've already received notifications on the following, but I wanted to share them here as well.
We have work in progress to add a new hardware environment performance score to the results website for Round 19 and beyond. This is a fairly simple computation: filter to a subset of frameworks, then apply some arithmetic to yield a single score for the hardware environment. It will end up looking something like this:
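As a rough sketch of the shape of that computation, here is one way "filter to a subset, then some arithmetic" could look. Every name here (the function, the data layout, and the use of a plain arithmetic mean) is an assumption for illustration, not the project's actual formula.

```python
# Hypothetical hardware environment score: keep only a chosen subset
# of frameworks, then reduce their results to a single number.
def hardware_score(results, subset):
    """results: {framework_name: requests_per_second}; subset: frameworks to keep."""
    kept = [rps for fw, rps in results.items() if fw in subset]
    if not kept:
        raise ValueError("no frameworks from the subset are present in results")
    # Plain arithmetic mean as a stand-in for "some arithmetic".
    return sum(kept) / len(kept)

print(hardware_score({"alpha": 120_000, "beta": 80_000, "gamma": 50_000},
                     {"alpha", "beta"}))  # 100000.0
```

The interesting design choice is which subset to filter to and which reduction to apply; the mean above is only a placeholder.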
Relatedly, we will be adding a composite score feature for frameworks. Only frameworks that implement all test types will be included in the composite scoring. That will look something like this:
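A minimal sketch of that inclusion rule, assuming the six existing test types and an equal-weight average (the real weighting and normalization are open details):

```python
# Hypothetical composite score for a single framework across test types.
# Frameworks missing any test type get no composite score, mirroring the
# rule that only frameworks implementing all test types are scored.
TEST_TYPES = ["json", "plaintext", "db", "query", "update", "fortune"]

def composite_score(framework_results):
    """framework_results: {test_type: normalized score in [0, 1]}."""
    if not all(t in framework_results for t in TEST_TYPES):
        return None  # incomplete implementations are excluded
    # Equal weights as a placeholder; actual weights may differ.
    return sum(framework_results[t] for t in TEST_TYPES) / len(TEST_TYPES)
```

For example, a framework with a normalized 0.5 on every test type would score 0.5, while one implementing only `json` would be excluded entirely.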
Take a look at the "Work in Progress" section of the project Wiki on GitHub for more details on both:
If you have any thoughts, suggestions, or questions on these, I'd love your feedback.
And related to the tagging for the hardware performance rating, we are considering adding a "Verified" tag for maintainers of frameworks. If you have feedback on that idea, you can comment on the related GitHub issue.
Thanks!