
Updates to sensing and dexterity


Raymond Sheh

May 3, 2025, 4:43:48 PM
to The Open Academic Robot Kit, raymo...@gmail.com
Hi All!


As promised, here is the thread to discuss sensing! My apologies, this
is a long one, but it's a complicated topic that I think requires such
a discussion.

The sensing and dexterity rules have been rather open-ended so far
because when the current rulebook was written, the goal was that RMRC
would follow the Major RRL rules, with just the modifications necessary
to make the competition work for us. The idea was that the sensing and
dexterity rules would be mostly a mirror of the Major RRL rules.

Things have moved on a bit in the years since, though, and I think we
want to make sure that this year's RMRC rulebook is complete by itself
and reflects what makes sense for RMRC now. Here are my proposed
changes - please let us know what you think!



Here is the apparatus we have for sensing and dexterity:

- The 5 sets of 5 Landolt-C optotypes in tubes in the linear rail, each
of which starts with a manipulation object.

- The contents of the sensor crate and the door to open.

- The keypad omni.

- The QR codes in the labyrinth.


Here are some open questions that I think are worth considering. For the
most part, this is about being more generous with the points and
allowing teams more opportunities to score and demonstrate what they can
do, but these proposals also add some complexity to the scoring.


- As currently written in the rulebook, the manipulation objects start
inside the linear rail, blocking the Landolt-C optotypes, so to score
these points the team first needs to remove the manipulation object. I
think we should loosen this requirement and allow teams to choose to
start with some/all of the manipulation objects removed, which means
they won't get the point for that object but they can go straight to the
sensing task (just like leaving the door open on the sensor crate).


- The rulebook is ambiguous about how to score a Landolt-C optotype. At
the time it was written, in the RRL a Landolt-C was considered "scored"
if the operator could identify the second ring in (with a gap of about
4.4 mm). Given how far high-resolution cameras have advanced, I propose
we give a point for each ring (including the biggest one with the
lettering in it, which shows you "got there"). This does mean that each
Landolt-C optotype progression is now theoretically worth 5 points, but
I don't think that this will be an issue as I've yet to see a robot
(that isn't a big bomb-squad robot with large lenses on it) able to zoom
that far in.
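
To put some numbers on the lens question, here's a quick
back-of-the-envelope pinhole-camera sketch (the 300 mm viewing distance,
60 degree lens, and 4K sensor are illustrative assumptions on my part,
not rule values):

import math

def pixels_across_gap(gap_mm, distance_mm, hfov_deg, h_res_px):
    # Pinhole model: the horizontal field of view at the target
    # distance maps linearly onto the sensor's pixel columns.
    fov_width_mm = 2 * distance_mm * math.tan(math.radians(hfov_deg) / 2)
    return gap_mm * h_res_px / fov_width_mm

# A 4K camera (3840 px wide) behind a 60 degree lens, 300 mm from the tube:
for gap_mm in (4.4, 1.8, 0.3):
    print(f"{gap_mm} mm gap -> {pixels_across_gap(gap_mm, 300, 60, 3840):.1f} px")

At those assumed numbers the 0.3 mm gap spans only about 3 pixels, which
is marginal - the lens matters as much as the sensor.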


- The rulebook currently states that 1 point is scored for grabbing a
manipulation object and placing it in the designated container. I
propose that we give 1 point for picking up the manipulation object and
putting/dropping it anywhere that isn't the tube that it came from, and
1 more point for putting it into the designated container (that the team
places somewhere on the field). At the time that the previous rules were
written, the Major RRL had the robot carry the container into the arena
but that might be a bit impractical for RMRC robots, at least just yet.


- The keypad omni requires that the center key (the "5" key) be pressed
without touching the surrounding keys to score a point. Teams may score
the same key, on the same side, multiple times in the timeslot but not
consecutively. I propose that we loosen this up, with the "5" worth 2
points and any of the immediately surrounding 8 keys worth 1 point. Once
a key is hit on a keypad, no other key on that keypad counts until a key
is hit on another keypad. Note that this does mean that a team can
bounce back and forth between two keypads. I think this makes sense
given how small the robots are but does anyone think that, instead, we
should oblige teams to touch every keypad before they can score a keypad
for the second time?
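
To make the proposed alternation rule concrete, here's a minimal scoring
sketch (the hit-log format is made up for illustration, and it assumes
only the centre and surrounding keys ever register):

def score_keypad_hits(hits):
    # hits: time-ordered list of (keypad_id, key) events. The "5"
    # scores 2, a surrounding key scores 1, and no key on a keypad
    # counts until a key on a *different* keypad has been hit, so
    # bouncing between two keypads is allowed.
    total, last_pad = 0, None
    for pad, key in hits:
        if pad == last_pad:
            continue  # same keypad as the last scored hit: no points
        total += 2 if key == "5" else 1
        last_pad = pad
    return total

print(score_keypad_hits([("A", "5"), ("B", "4"), ("A", "5")]))  # 2 + 1 + 2 = 5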


- The sensor crate appears on the manipulation field and in the
labyrinth. The rulebook and construction guide just assume we will be
using the same sensor crate as RRL but the RRL sensor crate has evolved
since our rules were written. I propose that we fix our sensor crate to
a simpler version that has the following elements in it:

1: A hazmat label (see previous discussion and proposal below). 1 point
for correctly identifying it. No penalty for incorrectly identifying it.
We probably want to think about expanding this out with points per
element in future years, as per the previous email thread (also see below).

2: A Landolt-C optotype. 1 point for the middle ring (approx. 1.8 mm gap).

3: A thermal source hidden behind something opaque (e.g., a chemical or
USB handwarmer in a paper envelope - where the envelope is much larger
than the handwarmer so it isn't obvious by sight where the handwarmer
is). 1 point for correctly identifying where the thermal source is.

4: Motion. Are we OK with leaving out motion for now? Alternatively, how
about we go for an easier motion source, such as standardizing on 2
wristwatches with white dials at least 35 mm in diameter and a red or
black second hand at least 10 mm long (motion can be continuous or
intermittent), where the task is identifying which one is stopped?

5: A magnet (this simulates anything that requires close-proximity
sensing without the need for a gas source). 1 point for demonstrating
that the robot can detect the polarity (but this maybe doesn't make
sense to score manually - see below).



- The rulebook is unclear about how autonomy interacts with sensing.
Page 19 implies that autonomous points are worth a 4x multiplier. I
propose we clarify this as follows:

1: A hazmat label that is fully identified by autonomous computer vision
is worth a 4x multiplier.

2: A Landolt-C ring where the orientation *relative to the letters
identifying it* is correctly identified fully autonomously by computer
vision is worth a 4x multiplier.

3: A thermal source that is identified by the system correctly putting
an overlay indicator over the warm spot on the display is worth a 4x
multiplier.

4: Automatically detecting which watch is moving yields the 4x
multiplier. It is up to the system to determine that the robot is
stationary and to highlight small moving objects in the scene.

For all of these, the operator is allowed to point the camera at the
label, ring, or thermal source and zoom into it. The operator cannot
tell the system that it exists (e.g., cannot push a button to start
detection; the detection software must be always running). The operator
also cannot tell the system where the label, ring, or thermal source is,
or provide any additional information. Furthermore, only the first
completion of the test will be considered for scoring. For instance, if,
as the robot drives up, the system autonomously (but incorrectly)
identifies the label from far away, the team won't get the 4x multiplier
for that test even if the system identifies it correctly later when it
drives closer. The team can still manually perform the task to score the
point without the multiplier (i.e., there's no risk to trying autonomy;
see the sketch after point 5).

5: By definition the magnet needs to be detected autonomously (but we
allow the human to wave the detector near the magnet) so I propose that
it's always worth 4x.
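
As a sketch of how a judge might apply the "first completion counts"
rule for one element (the detection-log format is hypothetical):

def autonomy_multiplier(detections, ground_truth):
    # detections: time-ordered labels that the always-running detector
    # emitted for this element; only the first one is scored.
    if not detections:
        return 1  # no autonomous attempt - manual scoring only
    return 4 if detections[0] == ground_truth else 1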


- Similarly the rulebook is unclear about how autonomy interacts with
dexterity. I propose that for opening the crate door or picking up a
manipulation object, as long as the robot starts with the arm stowed and
completes the task without operator input (apart from a single button or
command), we award the 4x multiplier. We could define the "stowed"
position as the "minimum bounding box" position of the arm, that is, the
position that the arm takes such that the overall robot fits into the
smallest volume box possible. Note that we wouldn't require a stow
between pick and place as the robot may be unable to stow the arm with
an object in the gripper. Does this sound reasonable? Are there any
corner cases we need to consider? What I don't want to have happen is a
team claiming that "stowed" is with the arm out in a pose that is,
really, more suited for manipulation.
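
One way to make the "minimum bounding box" definition checkable - a
minimal sketch, assuming the team supplies sampled surface points of the
robot in each pose, with a hypothetical 10% volume tolerance:

import numpy as np

def bbox_volume(points):
    # Axis-aligned bounding-box volume of sampled robot surface points.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return float(np.prod(maxs - mins))

def is_stowed(pose_points, declared_stow_points, tolerance=1.10):
    # Accept a pose as "stowed" if its bounding box volume is within
    # 10% of the team's declared minimum-volume pose.
    return bbox_volume(pose_points) <= tolerance * bbox_volume(declared_stow_points)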

For placing the manipulation object, I propose that as long as the
operator didn't move the arm manually between the pick and the place,
apart from a single button or command to initiate the place (but could
drive the robot base manually), they can be awarded the 4x multiplier
for the place point. In theory, this could mean that a robot could be
manually driven with the arm outstretched until the object was over the
container, and then the "autonomous place" would simply be opening the
gripper. I'm less worried about this specific case because the pickup
itself would have had to be autonomous, and the robot would then have
had to drive over the bars of the manipulation station terrain with the
arm out as well.


- We are starting to see good mapping being possible on small platforms
so I think it's time to update the Labyrinth rules to reward this. The
QR codes are detected automatically so I don't think an autonomy
multiplier is useful. Instead, how about we apply a 4x mapping multiplier?

QR codes can only be scored if they are automatically placed on a map of
the labyrinth, relative to the fiducials (the cylinders that are cut in
half and placed either side of the walls). Each QR code is worth 1 point
for appearing in the map and 4 points for being correctly located
relative to the closest 2 observed fiducials. An observed fiducial is
one that appears as at least a half circle. I propose that for this
year, "correctly located" means that the horizontal distance between the
QR code and the two closest fiducials is correct to within 30 cm. Of
course, if the team has no mapping, this basically falls back to 1 point
per QR code if they simply report all their QR codes as being at the
origin of a blank map. Note that the 2 observed fiducials must be
separate fiducials (not the two halves of the same fiducial).
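
As a rough illustration of how the "correctly located" check could be
scored (the map/ground-truth data format is an assumption on my part):

import numpy as np

def qr_score(qr_map_xy, qr_true_xy, fid_map_xy, fid_true_xy, tol_m=0.30):
    # 1 point for the QR code appearing in the team's map, plus 4 if
    # its horizontal distance to each of the two closest observed
    # fiducials matches ground truth to within tol_m (30 cm).
    # fid_map_xy and fid_true_xy are matching lists of (x, y) positions.
    score = 1
    true_d = np.linalg.norm(np.asarray(fid_true_xy) - np.asarray(qr_true_xy), axis=1)
    for i in np.argsort(true_d)[:2]:  # two closest (separate) fiducials
        map_d = np.linalg.norm(np.asarray(fid_map_xy[i]) - np.asarray(qr_map_xy))
        if abs(map_d - true_d[i]) > tol_m:
            return score
    return score + 4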


Revisiting hazmat, Marcus proposed providing different numbers of points
for detecting hazmat labels in increasingly difficult situations. I
think this is a great idea and perhaps we could run this as a separate
technical challenge, with the best performing team being awarded a
certificate. This will give us the necessary information to see how we
might incorporate this into the main rules for next year.

Here's an option that was proposed.

0 = Cannot automatically read Hazmat labels under any conditions.

1 = Can accurately read unobscured hazmat labels while sitting still
at 30 cm when the label is at a height chosen by the team, and the
automatic reading program is manually triggered by the driver. (say, 4
out of 5 correct, when only 1 label at a time is present)

2 = Can read unobscured hazmat labels on the walls of a specific course
at multiple heights set by the organizers, with the automatic reading
program triggered by the driver.

3 = Can read partially obscured hazmat labels (e.g. at the end of a
short pipe as we've done the last two years) at various heights, with
the automatic reading program triggered by the driver.

5 = Can accurately read hazmat labels that appear in multiple courses at
various heights set by the organizers, where the program is constantly
running on all courses (so *not* triggered by the driver), and where
some of the labels are partially obscured.

Perhaps for the purpose of a test/demo competition this year, this could
be implemented by adding full and partially obscured hazmat labels to
the 60 cm K-rail terrain at different heights but only using them for
this specific test (they're ignored when running K-rails normally), with
the points added up over a 5 minute run.



Please let us know what you think!


Cheers!


- Raymond


--
https://raymondsheh.org

Raymond Sheh

May 4, 2025, 11:26:02 AM
to The Open Academic Robot Kit, raymo...@gmail.com
Hi All!


I received some well appreciated feedback just now and wanted to make
some clarifications.


First, my original intent was that folks would put comments in-line in
the thread, but I'm realizing that, especially without headings, this
has the potential to become impractical. Instead, I've created a Google
doc at
https://docs.google.com/document/d/1mVeFTUv6gkA9SU5VKxxotVB3WS4D9m-IGPCyUpDcqsE/edit?usp=sharing
I'd prefer that folks add feedback as comments, but if your feedback
is more clearly given by making a suggested change, do feel free to do
so (but please consider also adding a comment explaining the suggested
change).

If you can't access the Google doc, you're still welcome to put comments
in this email thread. I've added heading numbers to make life a bit
easier if you want to comment at the top rather than in-line. I'll try
and do a bit of cross-posting between the two threads if we end up with
discussions happening on both sides.


Second, we're now 10 weeks from the competition and should have the
rules locked down pretty soon. Some of the points I raise below are
things that would previously have either been deferred to the RRL
rulebook or been judgement calls on-site. I would consider these to be
advance clarifications rather than changes, to make sure that those
joining us for the first time this year are on the same page as those
who have been with us before.

Some of the points are proposals for changes of various degrees that I
*personally* think are probably safe to make this close to the
competition. In general they provide additional granularity to the
scoring (we don't award partial points, so instead we give points for
things that wouldn't have been complete before, and then extra points
for what used to be the complete task), but I'm also definitely open to
feedback from folks who think otherwise. I've noted these to make it
clear. The fallback is the current rulebook (plus clarifications).


In the interests of giving everyone certainty, how about we aim to have
all feedback in by Sunday the 11th of May? I'll have the draft rulebook
out by the 14th for final comment and then aim to lock it down by the 18th.


Cheers!

- Raymond
> 1 (minor proposed change) - Teams may choose to start with some/all of
> the manipulation objects removed from the linear rail, forfeiting
> those object points but going straight to the sensing task.
>
> 2 (clarification) - Landolt-C optotypes in the linear rail (see 5.2
> for the sensor crate) are scored a point per ring, including the
> biggest one with the lettering in it.
>
> 3 (minor proposed change) - 1 point for picking up a manipulation
> object and putting it anywhere other than the tube it came from, and
> 1 more point for putting it into the designated container.
>
> 4 (minor proposed change) - Keypad scoring: the "5" is worth 2 points
> and the 8 surrounding keys 1 point each; once a key is hit on a
> keypad, no other key on that keypad counts until a key is hit on
> another keypad.
>
> 5 (clarification) - A fixed, simpler sensor crate containing: 5.1 a
> hazmat label; 5.2 a Landolt-C optotype (middle ring, approx. 1.8 mm
> gap); 5.3 a hidden thermal source; 5.4 motion (the two-wristwatch
> question); 5.5 a magnet.
>
> 6 (clarification) - 4x autonomy multipliers for sensing: 6.1 hazmat
> labels; 6.2 Landolt-C ring orientation relative to the lettering;
> 6.3 a thermal overlay indicator; 6.4 watch motion; 6.5 the magnet
> (always 4x). Only the first completion of each test counts toward the
> multiplier.
>
> 7 (clarification) - 4x autonomy multiplier for dexterity: the arm
> starts stowed (the "minimum bounding box" position) and the task
> completes on a single button or command; (relating to previous
> proposed change) the place point earns the multiplier if the arm
> isn't moved manually between pick and place.
>
> 8 (proposed change) - A 4x mapping multiplier in the labyrinth: each
> QR code is worth 1 point for appearing in the map and 4 points for
> being correctly located (to within 30 cm) relative to the two closest
> observed fiducials.
>
> 9 (proposed addition) - Revisiting hazmat, Marcus proposed providing [...]

Raymond Sheh

May 10, 2025, 6:08:14 PM
to The Open Academic Robot Kit, raymo...@gmail.com
Hi All!

First, thanks to the folks who have already provided feedback! Please do
keep it coming and for those who have been thinking about it but haven't
yet, now is your chance to have your say!

In particular, maintaining the rules is always a balancing act - how
much we push the advancement of the competition versus keeping the
barrier to entry low, and how we reward the different elements that make
up the overall application challenge. Remember that we are in a somewhat
unique position among high school and undergraduate competitions in that
we are working on things that are open research challenges. This does
mean that we need to be a bit creative about lowering that barrier to
entry without placing too many artificial limits on teams who are in a
position to push the state of the science.

I plan on working on this on Monday so feedback that is provided by
Sunday will have the greatest chance of being considered.

The same goes for commentary on the rules in general.

Cheers!

- Raymond

P.S. I realize we're all being good natured about this and I'm all for
keeping the language informal but I do also want to give a gentle
reminder to folks to maintain a baseline level of professionalism when
it comes to writing in here, and that includes in the comments. :-)

Raymond Sheh

May 10, 2025, 6:47:35 PM
to The Open Academic Robot Kit, raymo...@gmail.com
Hi All!


One discussion I wanted to make sure made it to all y'all ... and that is
how we deal with Landolt-Cs.


I'm proposing that we clarify the scoring of Landolt-Cs to a point per C
(so each set of concentric Landolt-Cs is worth 5 points). The smallest
ring is *very* small (around 0.3 mm), so I think this is worthwhile in
encouraging appropriate use of cameras. Note that simply putting a 4K
camera on your robot isn't enough; you need to pick the right lens.

This also doesn't have to be *expensive* - I've personally built robots
that could (under manual control) see right down to the smallest one
with what would now be a sub-$100 vision system that would fit on an
RMRC robot.

(Hint, this actually requires *two* cameras, but both are cheap and with
cheap lenses ... the hard bit is the user interface that gives an
operator an easy way to point the cameras precisely. And yes I'm still
miffed that a decade later, this isn't readily available on commercial
robots of this size.)


We had a concern raised regarding this scoring potentially swamping
mobility (particularly if a 4x autonomy multiplier is applied). I'm a
little bit less worried about this for 2 reasons.

- If a team with an RMRC-sized robot, in the dexterity field terrain
(remember, the linear rail is at the junction of two terrain beams so
the ground isn't flat), is able to get all 5 pipes in the linear rail,
to a resolution of 0.3 mm, autonomously, in 5 minutes, they not only
deserve 100 points, they deserve a contract with a major robotics company.

- The scores are normalized anyway. It doesn't matter if one test has a
raw max of 1,000 and another has a raw max of 10; the best-performing
team in each gets 100 points.
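
For what it's worth, the usual linear form of that normalization
(assuming - and this is my assumption - that other teams scale pro rata
against the best raw score) looks like:

def normalize(raw_scores):
    # Per-test normalization: the best-performing team gets 100 points,
    # everyone else pro rata against that best raw score (assumed).
    best = max(raw_scores.values())
    return {team: 100 * raw / best for team, raw in raw_scores.items()}

print(normalize({"A": 1000, "B": 250}))  # {'A': 100.0, 'B': 25.0}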


Where this balancing does get iffy *is* for how we deal with autonomy,
in part because I was always in two minds about whether autonomous
Landolt-Cs made any sense. The challenge is supposed to be to get to the
location with a sensor of sufficient resolution; the actual measurement
of the resolution is just logistics. How do we feel about either making
these interchangeable with QR codes (in which case we need to figure out
which size), or giving the autonomy points as long as the operator is
entirely hands-off, the robot takes a single photo for a given
concentric Landolt-C, and the operator is allowed to manually read the
Landolt-Cs off that photo?


If anyone else has any thoughts about this, please do share, either in
this thread or in comments on the Google doc.


Cheers!

- Raymond

Philipp Hock

May 11, 2025, 3:51:52 AM
to The Open Academic Robot Kit
Hi All,
After reading through the same passage a few times, I still don't understand how exactly the autonomous Landolt part works. Is the idea that when it gets scanned autonomously, the operator is allowed to check the result? Could someone please clarify that?

Best Regards
Philipp Hock

Raymond Sheh

May 11, 2025, 8:30:06 AM
to Philipp Hock, The Open Academic Robot Kit, raymo...@gmail.com
Hi Philipp,

All good! 

We're in rule discussion mode, so we're discussing ideas. What I wrote below is still abstract; there is no "exactly" there. The goal is to solicit feedback on whether the abstract idea sounds good (in this example, the abstract idea of focusing the autonomy points on getting to the right spot, rather than the actual identification of the Landolt-Cs) before making the exact rules.

After all, making exact rules is hard and time consuming. We want to make sure that there are no major disagreements before we go to that effort. 


Having said that, here is *one* way in which we could make the idea discussed below ("give the autonomy points as long as the operator is entirely hands-off, for a given concentric Landolt-C the robot takes a single photo, and the operator is allowed to manually read the Landolt-Cs off that photo") into an actual, exact rule. 


Here are the steps to be awarded the 4x autonomy multiplier.
1: Drive to the desired location (manually or autonomously - we test autonomous driving in other tests). 
2: Stow the arm (if it has not already been stowed - we could define "stow" as minimal robot footprint). 
3: Issue a single command (e.g., a single button/key press or a single mouse click) that begins the autonomous behavior. 
4: Wait until the robot stops moving autonomously and presents a single still image for inspection. 
5: Manually report the orientation of the Landolt-Cs visible in that still image.

Clarifications: 
1: The autonomy multiplier is applied on a per-tube basis.
2: The operator may take over teleoperation at any time to cancel or score the tube without the autonomy multiplier. 
3: The operator may attempt each tube multiple times within a timeslot. Each attempt is considered separately with the highest score (including multiplier) for a given tube being taken. For example, a team may inspect a tube manually, see 3 Cs (3 points), and then later re-attempt it autonomously and see only one C (4 points). The 4 points would count as their score for that tube.
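
A minimal sketch of clarification 3 (the attempt-log format is made up
for illustration):

def tube_score(attempts):
    # attempts: list of (rings_read, autonomous) pairs for one tube; an
    # autonomous attempt carries the 4x multiplier, and the
    # highest-scoring attempt is the one that counts.
    return max((rings * (4 if auto else 1) for rings, auto in attempts),
               default=0)

# 3 Cs manually, then 1 C autonomously: the autonomous attempt wins.
print(tube_score([(3, False), (1, True)]))  # 4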


Where I'm a little bit stuck right now is on clarification 3, which has a loophole in it. We want autonomy to be "no risk" - a team should be free to get their points "in the bag" and then, if they have time, try for more points with autonomy. However, what we don't want to have happen is a team manually inspecting a tube (or getting close enough that the additional movement to get to an inspection location would be trivial), teaching the joint angles, stowing the arm, replaying those joint angles, and calling that autonomy. The goal of autonomy in dexterity is to actually have the robot recognize that there is an object of significance in the world (be it the label, the cup, or the rail as a whole), and get to a good location relative to that.

My first idea was to say that once the robot base stops and the arm stows, the operator can only issue the autonomy command. The issue here is that we now need to define how much movement of the base counts. An operator could still teach a position in, wiggle the robot very slightly (so it ends up in exactly the same spot), stop the robot, stow the arm, and then replay the joint angles. We also want to encourage robots with flexible arms and perhaps they could score several tubes without having to move the base.

My second idea was to oblige the team to have had the arm stowed since either the start of the mission, or the previous attempt at a different tube. The problem is that some robots may need to move the arm for mobility (either for balance or because the arm is an intrinsic part of their mobility - imagine, for instance, a legged robot that has a camera in one leg that's also used for inspection). 

My third idea was to oblige a team that has attempted a tube manually to attempt a different tube before coming back to the original one autonomously (including stowing). This idea has two issues though: the first is defining an "attempt" (without getting into tricky robot-specific rules and without disallowing teams from using the arm for mobility); the second is that it *is* possible to score the side and angled tubes without moving the base, and if a robot can do both of those autonomously, or do one manually and the next autonomously, that's not necessarily a bad thing.


In any case, thoughts/ideas on this are welcome! Perhaps we need to wind things right back and insist on autonomy from some start point, with the robot needing to come back with one image per tube that is then read manually by the operator - but perhaps that's too hard right now? Or perhaps we can simply say "don't teach in manual joint angles" and rely on folks just not doing it?


Cheers!

- Raymond



-- 
https://raymondsheh.org