Hi All!
TL;DR: This is the first year running with the new rules! We're looking
for feedback as you encounter situations and ambiguities that make
things unfair; please report them either to this list or to Amy, Erick,
and Kamel in person. Also, please record and post videos of your runs
(ideally with a view of the scoresheet at the end) so we can all see how
the competition is going!
Thanks to everyone who was involved in providing feedback for updating
the rules over the past few months! The previous rules served us well
and I'm definitely looking forward to seeing how this latest evolution
lets you demonstrate the new capabilities that your robots have gained
in the intervening years.
Of course, since this is the rules' first encounter with real
competition, there will be things that sounded like a good idea to all
of us and worked when we roleplayed them, but that turn out not to work
quite the way we intended during the competition.
So we're definitely looking for feedback ... things that we liked about
the rules, things that we didn't, and features that we would like to see.
In particular, we're looking for situations where a particular
capability that you think is useful to responders could be better
reflected in the tests, either by adjusting existing tests (apparatus
and/or procedure) or by adding new ones. Of course, we can't promise to
accommodate everything, as we want to keep the tests to a manageable
number and we also need to balance ease of replication, fairness, and
measurement science aspects, but knowing what you want to see in the
rules is always good information that we can try our best to act on.
We're also looking out for unforeseen situations and corner cases where
the rules appear to break down. Kamel Saidi in particular has a lot of
experience with the test methods from which much of RMRC is derived, and
can help evaluate the technical aspects there, such as whether the
behavior of a given robot in a given test is actually intended or an
issue that needs to be addressed.
We are going to keep to the rulebook as much as we can this year, but if
we do see a situation where a particular test or rule just isn't
working, is unfairly penalizing a particular type of robot, or is
ambiguous and being interpreted differently by different people, please
bring it up to Amy, Erick, and/or Kamel. We can then discuss how to
address the issue fairly, both for this competition and as part of an
update for next year.
In support of further rule development, I encourage everyone to record
video of their runs (I suggest including a view of the final scoresheet
at the end of the video), share it, and post links here so we can all
see how the new and modified tests and rules are working. When we
discuss what is working well and what might benefit from further
improvement, this lets us see exactly what people are talking about.
I also encourage everyone to post commentary, suggestions, and
highlights to this list. Given that so many of our regular folks, both
organizers and teams, are unable to travel this year, it would be nice
to keep everyone in the loop on the discussions.
Cheers!
- Raymond
--
https://raymondsheh.org