
Final parcour difficulty


Philipp Hock

Apr 8, 2025, 11:40:20 AM
to The Open Academic Robot Kit
Dear RMRC Community,
I wanted to ask how the final round is set up: specifically, whether it uses only the hard versions of the course elements, which run in the afternoon during the preliminary rounds, or also the normal ones, which run in the morning. While we were participating in the German Open, both options were available in the final round.

Best Regards
Philipp Hock
CJTec Team from Germany

Raymond Sheh

Apr 8, 2025, 5:23:50 PM
to Philipp Hock, The Open Academic Robot Kit, Raymond Sheh
Hi Philipp, All,

Great question!

We are running according to the rules published on 2023-10-15[1] (which were a clarification of the rules under which the 2023 competition was run).

Just as a reminder to everyone, during the preliminary rounds, the arena elements are set up separately so many teams can run their robots at the same time in the individual arena elements. We make sure that there is at least a brief period each morning when the arena elements are at their easier setting, so that, for instance, a team that has technical difficulties on the first day and can't perform any runs will still have the opportunity to try the easier setting.

For the finals, the arena elements are put together into a continuous maze, as a collaborative effort of the finalist teams. The competition organizers mediate between teams to make sure that all teams have input.

Part of this collaborative design process includes deciding on the appropriate difficulty level of each arena element. Of course, we expect that the teams who reach the finals will be the most capable teams, so for arena elements that have a difficulty setting, we expect this to be set to the most difficult option, such as setting the K-rails to the crossover slope and running the hurdle 2 layers high. However, there may be compelling arguments for an exception, such as leaving the labyrinth flat if we're seeing good progress being made in teams building maps. While we are hoping that teams will exhibit both good mapping and good terrain capability, if we're not quite there yet, I'd lean towards treating the labyrinth as a mapping test first.


I hope this answers your question! Everyone, please do feel free to ask further questions here.

Cheers!

- Raymond



Jannis Arnet

Apr 23, 2025, 3:46:14 PM
to The Open Academic Robot Kit
Dear RMRC Community,
at the German Open 2025 there was a special configuration for the "Hurdles" element: one side was set up with two layers and the other with one. If a team drove over the double side, it received an extra point. This made the course fairer for all teams, as the teams that couldn't complete the double side were still able to drive the course, while the teams that did complete it were still rewarded.
This configuration was particularly fair in the final, as some teams would otherwise have had to make long detours.
Because this configuration proved to be very successful at the German Open 2025, I wanted to suggest it here in the forum so that everyone knows about it.
Best regards
Jannis Arnet

Raymond Sheh

Apr 23, 2025, 8:43:19 PM
to Jannis Arnet, The Open Academic Robot Kit, raymo...@gmail.com
Hi Jannis, All,


Great to hear from you! Thanks for your suggestion.


There are many different ways of scoring the various arena elements and certainly one way we have tried in the past is manually weighting the levels of difficulty, as you suggest, particularly in the RoboCupRescue Robot League. Of course we are open to refining this as no scoring system is perfect. Here is a bit of background, just so we know what we're trying to optimize for and why we *mostly* moved away from that.

The issue with manually deciding on weightings is that all of the different arena elements have different levels of difficulty. Some of them are obvious. The single level hurdle is harder than the pinwheel ramps, so surely the single level hurdle should be worth more points. But after that, the choices get harder. How many points are the stairs worth relative to the double hurdle? What is pushing a button in dexterity worth relative to sand and gravel? What is a QR code worth? Unfortunately, in the past this has degenerated into different groups arguing that the thing they did really well was really hard and worth more points than the thing they don't do so well in, and things can become generally unpleasant. More worryingly, we have directly observed this becoming a cultural issue, where teams from cultures that were more used to being vocal dominated and pushed the competition into favoring their approach, while those from cultures more averse to confrontation tended to just disappear.

So it became a priority to reduce the need for such debates and instead have the weightings automatically adjust based on what teams were doing well.


In the preliminaries, we solve this problem by normalizing all scores to the highest performance in that test. It's not a perfect solution, of course, but it has the nice property that there is an incentive for a team to really push performance in something that might have been neglected by other teams.

An important detail to note is in the rulebook[1], page 13, "Setting", paragraph 3, which states:

"Tests with multiple settings are considered separately for the purpose of scoring." 

What this means is that the scores for hurdles at 1 layer should never appear in the same column as the scores for hurdles at 2 layers. It also means that a team that can do hurdles at 2 layers also needs to do the hurdles at 1 layer - because as far as scoring is concerned, they are completely separate tests. We don't consider this a waste of time - perhaps at 1 layer it's a speed run, while at 2 layers it's a really impressive feat of engineering and control. If only one team can do even just a single lap of the 2 layer hurdle, this can result in an incentive that's even more than the suggested doubling of 2 layer hurdle scores[2].
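
To make that concrete, here's a quick sketch of the normalization (the data structures and the 100-points-per-column scale are mine, made up for illustration). Each (test, setting) pair is treated as its own scoring column, and every team's score is normalized to the best score in that column - this reproduces the numbers in footnote [2] below:

    # Sketch of preliminary-round normalization. Each (test, setting)
    # pair is a separate scoring column, per the rulebook quote above.
    from collections import defaultdict

    # raw[(test, setting)][team] = raw score (e.g., laps completed)
    raw = {
        ("hurdles", "1 layer"):  {"A": 10, "B": 10, "C": 10},
        ("hurdles", "2 layers"): {"A": 1},  # only team A attempts 2 layers
    }

    totals = defaultdict(float)
    for column, scores in raw.items():
        best = max(scores.values())
        for team, score in scores.items():
            # each column contributes up to 100 normalized points
            totals[team] += 100.0 * score / best

    print(dict(totals))  # {'A': 200.0, 'B': 100.0, 'C': 100.0}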


We have deliberately chosen the number of permutations of apparatus and setting to be suitable for the number of teams and slots at a typical world championship (although we are also open to suggestions for ones to add or drop). I do realize that sometimes regional opens don't have as many slots available, be it due to a limitation of the time available or the number of experienced judges. In such a situation, the organizers have a choice to make between a few compromise options that include:
1: Removing some options (e.g., only having the harder settings or removing the easier apparatuses altogether). 
2: Weighting the harder setting (as you suggest - it sounds like this was the choice made at the German Open). 
3: Having the number of laps at the harder setting also count as the number of laps at the easier setting. This can be an issue if the harder setting really slows the robots down.

Of course, none of these options are ideal and we don't recommend them if at all avoidable, but if a regional competition is slot-limited, there may be no perfect solution.


The finals are a different type of competition and are actually intended to be a race. According to the original plan, the team places were to be decided by who finishes all of the arena elements first. Points only matter if no one finishes all of the arena elements. So as originally conceived, it doesn't matter if some arena elements are easier than others - all teams have to do them anyway, and if they're easy, then all teams spend less time doing them.

Not being able to traverse a particular arena element in the finals (or only being able to traverse it in one direction) carries a double penalty. The first is that the points for that element aren't available to the team, so they already can't win (unless no team finishes). The second is that the team wastes time having to detour around that element.

So in a sense, the finals already give us a natural 'normalization' that double-penalizes teams for not being able to do something. We don't need to add complication by accounting for how much each element is worth. This also avoids the aforementioned arguments around the relative difficulty of each arena element.

We can still get arguments around how the arena is laid out. This is where the folks administering the competition need to make sure that all teams who reached the finals have a say in the arena design and that all teams are able to advocate for their interests, regardless of their background. I'm particularly open to suggestions for improving this part of the competition! 
[2] For example, if all teams can do 10 laps at difficulty 1, but only one team does even just 1 lap at difficulty 2, that one team gets 200 points (across 2 runs) while the other teams only get 100 points (across 1 run). The only way that team could get 200 points in the weighting system is by also being able to do 10 laps at difficulty 2 (possible, but unlikely if they were only able to do 10 laps at difficulty 1, especially if difficulty 2 is so much harder that only one team could do it). Of course there are always corner cases where one combination of scores is more or less incentivised compared to another scoring method.

Eli Schr

Apr 25, 2025, 12:48:04 PM
to The Open Academic Robot Kit
Dear All,

This thread seems to be about clarifying rules. In this context, we have a question ourselves.
Which variant of the hazmat signs is used? Several variants exist around the world, and we are not sure which one is relevant.

Best Regards,
Elias Schramm - CJTec

Raymond Sheh

Apr 26, 2025, 9:10:29 AM
to Eli Schr, The Open Academic Robot Kit, raymo...@gmail.com
Hi Elias, 

Great to hear from you! Threads are cheap so feel free to make new ones. I've just renamed this one. ;-) 

I agree it can be clearer and it'll be in the updated guide we're currently working on. 

In the meantime, as RMRC is a variation on the broader RoboCupRescue Robot League (RRL), some details defer to the RRL guidance. In the case of the hazmat signs, these are part of the sensor crate (see page 44 of the arena construction guide[1], linked from the page that https://rrl-rmrc.org redirects to). The RRL guidance that is being referenced is available on the main RRL website, specifically see the bottom of https://rrl.robocup.org/forms-guides-labels/

I've now added this information to our main website as well. 

Please keep the questions and suggestions for improvement coming!

Cheers!

- Raymond



Eli Schr

Apr 26, 2025, 11:52:59 AM
to The Open Academic Robot Kit
Hi,

Sorry to ask again. The website lists two packages, and they display different variants of the hazmat signs. Also, the second one has Flammable Liquid and no Fuel Oil, while the first one has Fuel Oil but no Flammable Liquid.



Best Regards,
Elias Schramm - CJTec

Raymond Sheh

Apr 26, 2025, 12:01:14 PM
to Eli Schr, The Open Academic Robot Kit, raymo...@gmail.com
Hi Elias, 

Sorry about the confusion - the PTZ-Combo Chart is for a different test that exists in RRL but not in RMRC (yet!)[1]. So we're going with the individual hazmat labels.

(Additional clarification regarding the sensor box is coming - this has changed recently in RRL but we're keeping it the "old style" for RMRC for now.)

Cheers!

- Raymond



[1] Actually, the 'PTZ-Combo Chart' file is used in the labyrinth in Major. In RMRC we use QR codes only. The reason is that the additional targets in the Combined Visual Acuity Target may need to be scored manually. In Major this is less of a problem because the runs are a lot longer in duration, but because RMRC does 5 minutes per run, it was decided to stick to QR codes only, so we have a different set of targets, as linked in the construction guide.

Eli Schr

Apr 26, 2025, 12:13:42 PM
to The Open Academic Robot Kit
Thanks!

And a final question: Explosives and Blasting Agents do not have to be classified based on their number, do they?
So the hazmat signs that will be used will be identical to the ones inside the "hazmat 2020" folder, won't they? I ask because there also exist variants with black text, etc.



Best Regards,
Elias Schramm - CJTec


Raymond Sheh

Apr 26, 2025, 7:31:50 PM
to Eli Schr, The Open Academic Robot Kit, raymo...@gmail.com
Hi Elias, 

So the answer is 'kind of'. 

The *end goal* is that teams would be given points for identifying each characteristic that makes up any hazmat label. For example, for the hazmat label below, a team would get 1 point for identifying each of the following characteristics of a "standard" hazmat label:
- Background (orange)
- Symbol (pictograph of an explosion)
- Hazard name ("EXPLOSIVES")
- Division ("1.1")
- Class ("1")


Of course, not all standard hazmat labels have all of these elements (e.g., not all labels have the division number), so a point would be awarded for each element that was present and correctly identified (and a point deducted for each incorrect one - or each one reported that doesn't exist).

By this logic, eventually we want to get the competition to the point where we wouldn't provide examples of hazmat labels - we would just say that all labels in the arena will be compliant with 49 CFR. We realize that *right now* this is a bit challenging because, for instance, the pictographs will differ from one manufacturer to another, as will the fonts and so on. It also makes the test very complicated.

Therefore, *for now* we have simplified the competition all the way down. If you have a look at page 29 of the rulebook, we simply say "Successful inspection of a sensor test scores one point." The "sensor tests" are the various targets in the sensor crate including the hazmat label (for the Dexterity and Labyrinth tests) and the Landolt-C optotypes in the Linear Rail (in the Dexterity test). This means that if you correctly identify the hazmat label, which *for now* will be a printout of one of the exact labels we have given you, you score 1 point. This does mean that identifying "BLASTING AGENTS" when in fact the label is "EXPLOSIVES" will yield no points.
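
As an illustration only (the function names and data structures here are mine, not from the rulebook), the difference between the end goal and the current rule might look like this:

    # Sketch of the two scoring modes described above.

    def score_characteristics(reported: dict, actual: dict) -> int:
        """End goal: +1 per characteristic correctly identified, -1 per
        characteristic reported incorrectly or that doesn't exist."""
        return sum(1 if actual.get(k) == v else -1 for k, v in reported.items())

    def score_current(reported_label: str, actual_label: str) -> int:
        """Current rule (rulebook p. 29): exact label match scores 1 point."""
        return 1 if reported_label == actual_label else 0

    actual   = {"background": "orange", "symbol": "explosion",
                "hazard": "EXPLOSIVES", "division": "1.1", "class": "1"}
    reported = dict(actual, hazard="BLASTING AGENTS")  # one wrong element

    print(score_characteristics(reported, actual))         # 3 (4 right - 1 wrong)
    print(score_current("BLASTING AGENTS", "EXPLOSIVES"))  # 0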


I hope that clarifies things! Please let us know if you have any further questions. 

This might also be an opportunity to update our approach to sensing - I'll start another thread.


Cheers!

- Raymond

Philipp Hock

May 4, 2025, 8:34:06 AM
to The Open Academic Robot Kit
Dear RMRC Community,
Just to be clear, in the finals, points don't matter unless no one can pass all course elements? What would happen if, for example, one team finishes all course elements and another doesn't, but the latter has more points from scanning hazmat/Landolt targets or even driving autonomously? Would the team that completed all course elements win, even though it would technically have fewer points than the other team? I would suggest that placement only be based on time if the teams have the same number of points, because otherwise there would be no point in trying to drive or scan autonomously, as it is a lot slower than driving non-autonomously.

Best Regards
Philipp Hock
Team CJTec from Germany

Raymond Sheh

May 4, 2025, 9:06:51 AM
to Philipp Hock, The Open Academic Robot Kit, raymo...@gmail.com
Hi Philipp, 


Great questions! I just realized where the ambiguity is in what I wrote; I'll clarify.


The quick answer: the scoring as you describe is effectively what happens - the most points win, and time is a tiebreaker. This is explained in the "Finals" section starting at the bottom of page 19 of the rulebook, which still holds.

"Arena scores are gained by traversing obstacles and terrains. This represents the ability of a robot to get to where it needs to go to perform its task. In the finals arena, points can also be collected by sensor and dexterity tests. This represents the ability of a robot to perform its task. All points that have been collected completely autonomously are multiplied by 4. During the final, points can be collected both autonomously and in teleoperation."


A couple of things to note. 

- Sensing and manipulation tasks are also arena elements. 

- The original concept of completing all arena elements basically means maxing out the arena, which was plausible with teleop before autonomy was incorporated into the rules (2019 and before). With the addition of the 4x autonomy multiplier, this principle of maxing out the arena becomes collecting all available points (which includes autonomy) - I'm guessing this is the bit that was ambiguous in my previous email, as I flipped from "arena elements" to "points" but forgot to explain the translation in between. Of course, much as doing the whole arena autonomously within the time limit is a nice goal, I fully realize that it isn't likely to happen with current capabilities, so it becomes a case of points with time as the tiebreaker.

- There are some corner cases that we considered, for instance a team might just do all of the 'easy' elements autonomously. Right now, I think this is OK because we want to encourage even simple autonomy. We can revise this later if we find that this becomes a problem. 


Does this help? Of course please do continue asking questions and for clarifications!


Cheers!

- Raymond