What if ROS2 teleop with keyboard/controller is not the way to go but point and click is?


Ryan D

May 13, 2021, 3:21:27 AM5/13/21
to HomeBrew Robotics Club
I had a thought when I was trying to figure out how to get SLAM running on my robot.

Keyboard/controller input seems like a dead end: it's either going to be too stuttery, or too prone to de-syncing odometry because nothing confirms that a sent movement command was actually executed. So what if, instead of manually sending move instructions several times per second, we sent one every few seconds with something more akin to point-and-click, and let an on-board A*-like pathfinding script figure out how to get from point A to point B based on an initial standing-still SLAM map?

I found this ros package here:

that has a sort of point-in-the-direction-you-want-to-go control, but I'm thinking something more along the lines of: click once and the robot will try to navigate its way there.

It would probably be closer to a full self-driving car that way as well.
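
For concreteness, here's roughly what I mean in ROS 2 terms: instead of a stream of motion commands, a UI click becomes a single goal pose sent to a navigation action server. (An untested sketch; it assumes something like Nav2's NavigateToPose action is available, and the frame/action names may differ on a real setup.)

    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped
    from nav2_msgs.action import NavigateToPose

    class ClickToGoal(Node):
        def __init__(self):
            super().__init__('click_to_goal')
            self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

        def go_to(self, x, y):
            # One "click": build a goal pose in the map frame and hand it off;
            # the navigation stack does the A-to-B planning from here.
            goal = NavigateToPose.Goal()
            pose = PoseStamped()
            pose.header.frame_id = 'map'
            pose.header.stamp = self.get_clock().now().to_msg()
            pose.pose.position.x = x
            pose.pose.position.y = y
            pose.pose.orientation.w = 1.0  # arbitrary heading; a real UI would set one
            goal.pose = pose
            self._client.wait_for_server()
            return self._client.send_goal_async(goal)

    def main():
        rclpy.init()
        node = ClickToGoal()
        node.go_to(2.0, 1.5)  # the "clicked" point, in meters
        rclpy.spin(node)

    if __name__ == '__main__':
        main()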

camp .

May 13, 2021, 10:48:00 AM5/13/21
to hbrob...@googlegroups.com
Teleop is a matter of convenience and practicality. Yes, with true SLAM the robot should be able to map and navigate simultaneously.

With ROS1, mapping and navigation are separate worlds: you need to create the map, save it, and then you can navigate within it.

With ROS2 and nav2_bringup you can run slam_toolbox (mapping) and Nav2 simultaneously, so you can click-and-map your way around as you were suggesting.

https://github.com/ros-planning/navigation2
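
For example, a minimal launch file that runs the Nav2 bringup with SLAM turned on could look roughly like this (a sketch only; 'bringup_launch.py' and its 'slam'/'map' arguments are from nav2_bringup around Foxy and may differ in your distro):

    import os
    from ament_index_python.packages import get_package_share_directory
    from launch import LaunchDescription
    from launch.actions import IncludeLaunchDescription
    from launch.launch_description_sources import PythonLaunchDescriptionSource

    def generate_launch_description():
        nav2_dir = get_package_share_directory('nav2_bringup')
        return LaunchDescription([
            IncludeLaunchDescription(
                PythonLaunchDescriptionSource(
                    os.path.join(nav2_dir, 'launch', 'bringup_launch.py')),
                launch_arguments={
                    'slam': 'True',  # slam_toolbox instead of AMCL + saved map
                    'map': '',       # unused when slam is on (some versions still want the arg)
                    'use_sim_time': 'false',
                }.items()),
        ])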

What I'd be interested in is an "explore" package where the robot wanders around autonomously, creating the map. This would work towards the goal of lifelong mapping. However, when I continuously map and navigate over time, I find the map gets distorted. Navigation is better in the long run with a static (i.e. saved) map, assuming of course the environment hasn't changed.

- Camp

Ryan D

May 13, 2021, 1:42:58 PM5/13/21
to HomeBrew Robotics Club

Ah, I hadn't looked into Navigation2, but looking at its GitHub page I guess that is what I'm describing xD.

If slam_toolbox/Navigation2 has edge detection, the map could probably be published to a new topic that maintains a sort of search pattern/general grasp of the room, and movement intentions could then be sent based on that.

Does navigation2 already have that though?

Martin Spencer

May 13, 2021, 1:51:10 PM5/13/21
to hbrob...@googlegroups.com
Ah, always assume the worst case: a dynamic environment with "living things" to be avoided-

Just rabid about safety around all living things-



> well,
>
> there is no problem trying to avoid hitting humans if there are
> no more humans to hit?
>
> On Thu, May 13, 2021 at 9:58 AM Martin Spencer
>
> > yes, we call that "auto-pilot seek"
> >
> > requires human quick sense/avoid of moving and/or unmapped
> > obstacles to be safety-first-
> >
> > this functionality enables "hunter killer" robots-

Chris Albertson

May 13, 2021, 2:11:01 PM5/13/21
to hbrob...@googlegroups.com
Ryan,

What you describe is the way it is done.  The ROS navigation stack will drive to a waypoint while avoiding obstacles.

You are also mistaken about the way teleop works. It never sends "move commands". What is sent are rate commands, and they are sent something like 20 times per second, so that if one is dropped the robot continues moving at the previously commanded rate. At that rate, missing one does not matter, because the commanded rates would not have changed much over a 0.05-second interval and the next command fixes the problem. The robot will not "stutter".

A "move" command would say "drive 0.25 meters at 1 meter per second" and then the robot instantly accelerates and stops if the next command is dropped.   Butthat is not how it works the command is "run at 1 meter per second while rotating about the z-axis at zero radians per second" and thenthis command is resent continously.  But notice that receiving 1,000 copies ofthis command is the same effect as receiving one copy.  It does not matter how many are sent or if 10% of them are lost.   The robot can't "get out of sync".

The robot knows where it is because its designer equipped it with multiple redundant sensors. The documentation strongly suggests using multiple complementary sources of location data: not just wheel encoders but IMU(s), magnetic compass, GPS, visual odometry, and also the LIDAR and map. All of this gets combined in a Kalman filter. The filter does very well if the errors in the data sources are not correlated. So even if the IMU drifts and the wheels slip, the errors are different, and the slips and drifts are discovered and corrected.
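
Here is a toy one-dimensional version of that fusion step, just to show why uncorrelated errors help. This is the inverse-variance weighting at the heart of a Kalman update, not any particular ROS package, and the numbers are made up:

    # Two independent position estimates as (meters, variance):
    odom = (10.3, 0.5 ** 2)   # wheel odometry; the wheels slipped a bit
    vis  = (10.0, 0.2 ** 2)   # visual odometry; tighter variance

    w_odom = 1.0 / odom[1]
    w_vis  = 1.0 / vis[1]
    fused = (w_odom * odom[0] + w_vis * vis[0]) / (w_odom + w_vis)
    fused_var = 1.0 / (w_odom + w_vis)

    print(f"fused: {fused:.3f} m, variance {fused_var:.4f}")
    # ~10.041 m with variance ~0.034, tighter than either source alone.
    # That only works because the two error sources are independent.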

When you do use the full navigation stack, the base controller is still getting the EXACT same commands that teleop-keyboard sends, with the exact same chance that they might be dropped. But again, it does not matter.

The system works in layers. At the base are the base controller and sensors; then you have the "SLAM" system that uses lidar or stereo vision and can navigate to a waypoint. Above that you have a path planner that is based on a couple of cost maps and, in ROS 2, uses behavior trees. The next step has to be user-implemented: it might be some kind of AI that decides where the waypoints need to be, or they could be manually entered by clicking on a map. Your beer-fetching robot would have to know the beer is in the fridge and the fridge is in the kitchen, so it would assign itself a waypoint in the kitchen.

Yes, we could get closer to a full self-driving car. In fact, there is a ROS 2 based project that has done this: "Autoware" is capable of driving a real car, though I don't know much about how well it works on real roads with real cars. You can download Autoware, run it on a car simulation, and experiment with all of the above concepts without ever having to build a physical robot car.





--

Chris Albertson
Redondo Beach, California

Chris Albertson

May 13, 2021, 2:29:19 PM5/13/21
to hbrob...@googlegroups.com
There are basically two general ways to navigate a robot, and which to use depends on the purpose of the robot:

1) the "clasic" SLAM based system that uses LIDAR to map the environment and then again LIDAR to locate itself within this mapped environment.  This works very well if the purpose of the robot is to drive aroubd in a pre-mapped (mostly indoor) environment and
2) The system used by self-drive cars.   Cars needs to drive "anyplace" can't use a map as described above.  they need to be able to recognise common objects like lane markers, curbs, other cars and stop signs even if they have never seen that exact marker, curb, car or sign  so they rely very heavy on trained neural networks.   This technique works indoors were SLAM is used too but is harder to implement.

#1 is used when the task you give the robot is "go to location (x,y)" and (x,y) is within a pre-mapped space.
#2 is used when the task you give the robot is "follow the sidewalk and make a left turn at the corner".

When it is possible, use #1: it is easier, works well, and can run on a smaller computer.


camp .

May 13, 2021, 5:48:26 PM5/13/21
to HomeBrew Robotics Club
With ROS1 move_base path planning, all you could do was navigate to the goal, and that was it. With ROS2 Nav2 waypoint following, you can have the robot perform custom behaviors at each waypoint, so if you want to interject some different behavior(s) depending on what the robot senses along the way, that can be done with a plugin.
 
[Nav2] Waypoint Follower Executor Plugins!
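
From the client side, a waypoint run is a single action goal; the executor plugins run server-side, configured in the Nav2 parameters. A sketch (assuming Nav2's standard 'follow_waypoints' action):

    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped
    from nav2_msgs.action import FollowWaypoints

    def make_pose(x, y):
        p = PoseStamped()
        p.header.frame_id = 'map'
        p.pose.position.x = x
        p.pose.position.y = y
        p.pose.orientation.w = 1.0
        return p

    rclpy.init()
    node = Node('waypoint_sender')
    client = ActionClient(node, FollowWaypoints, 'follow_waypoints')
    client.wait_for_server()

    goal = FollowWaypoints.Goal()
    goal.poses = [make_pose(1.0, 0.0), make_pose(1.0, 1.0), make_pose(0.0, 1.0)]
    client.send_goal_async(goal)  # plugin behavior fires as each pose is reached
    rclpy.spin(node)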

Sampsa Ranta

May 13, 2021, 10:15:19 PM5/13/21
to hbrob...@googlegroups.com
Hello,

Please remember, you might need to enable SLAM to also do relocalization, as a plain dead-reckoning algorithm cannot do that well.

The rule of thumb is that with plain odometry, heading (angle) accumulates more error than travelled distance.
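
A quick back-of-envelope illustration of that rule of thumb (plain Python, made-up numbers):

    import math

    heading_err_deg = 2.0            # a modest heading bias
    for dist in (1.0, 5.0, 20.0):    # meters travelled
        cross_track = dist * math.sin(math.radians(heading_err_deg))
        print(f"after {dist:5.1f} m: ~{cross_track:.2f} m off course")
    # After 20 m the robot is already ~0.70 m off course, while the
    # distance estimate itself may still be good to a few centimeters.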

So it should not be just navigation and mapping; it's also localization and relocalization, trying to cancel accumulated errors.

- Making loops helps some SLAMs generate the map, as the offset between the dead-reckoning path and the mapped topology can be used to improve the quality of the localization model.
- Some SLAMs can also relocalize your bot in world coordinates after it travels a bit and is able to "recognise" familiar features with confidence.
- So, once the world position is assumed known, the SLAM algorithm can be used to cancel out some of the accumulated error. You just need to have the weights and error estimates right, so you know how reliable each piece of data is.

But to get it perfect, you might need quite some tuning.

I've seen slam_toolbox start accumulating a lot of errors, at least without relocalization, when I tried this out.

And obstacle avoidance can be done with a map or without one. While heading towards the waypoint, the robot can be made to deviate from the costmap-calculated path.

But then it needs to acknowledge the obstacle being there and have an algorithm to avoid it. Some prefer stateless avoidance, some stateful, so it's a bit of a question of taste whether you want to add temporary obstacles into your global map.

Cheers,
 Sampsa


Martin Spencer

May 13, 2021, 10:15:31 PM5/13/21
to Chris Albertson, hbrob...@googlegroups.com
Great theoretical discussion, Chris!

May I respectfully ask what the WCET (worst-case execution time) is for the stack you identify?

Also, what is the accuracy of the sensor fusion for orientation? We achieved 0.5 degrees with no accumulation using a subset of the sensors you describe, which is why I ask.

thanks! 

Martin Spencer

May 13, 2021, 10:15:40 PM5/13/21
to hbrob...@googlegroups.com
Well said-

However, as you know, even with "true" SLAM, one may not have cores/clock sufficient to keep obstacle avoidance from bogging down to a drunken response level.... <g>

Long-term maps get distorted due to x,y drift and orientation loss.

Martin Spencer

May 13, 2021, 10:15:40 PM5/13/21
to Chris Albertson, hbrob...@googlegroups.com
If I may respectfully beg to differ, a little....

Our SLAM began with scanning sonar (20+ years ago), then scanning sonar/IR, then Kinect depth cameras.

LiDAR is intrinsically flawed: it is 2D, too accurate, and generally too slow for the WCET to be human-quick.

Again, great presentation!




Charles de Montaigu

May 13, 2021, 10:15:40 PM5/13/21
to hbrob...@googlegroups.com
Ryan,

The GitHub link you provided is a good starting point, since it is ROS 2 plus multi-modal.

But as you know, I am a big fan of Navigation2 and its use of Behavior Trees, and of how they leverage them to give us a custom, flexible robot nav.

Nav2 has the "Follow a Dynamic Point" TurtleBot-based sample (RViz/Gazebo). It's a great jumping-in point to learn BT Nav2.

I was planning on making a second fly-by discussion on BT at the Sunday ROS 2 meet...

https://navigation.ros.org/tutorials/docs/navigation2_dynamic_point_following.html

Nav2 is a framework, so your robot nav app sits on top of Nav2.

Charles
