Creating a map using rviz/Slam Toolbox


thomasco...@gmail.com

Jan 22, 2022, 10:11:14 AM
to ROS for Arlobot
Hi Chris,

After watching your TwoFlower video several times, it appears that you used the 2D Nav Goal to make your map?

Regards,
TCIII

Christen Lofland

Jan 22, 2022, 12:10:12 PM
to ROS for Arlobot
Yes, that is correct. The method of guiding the robot in Rviz is to select the "2D Nav Goal" option and then draw a "ray" on the map with it: click on a spot on the map without letting go, drag in a direction, then let go. The location you initially click is the destination that the robot will attempt to reach, and the direction of the arrow is the direction the robot will be "facing" when it is done.
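In code terms, that click-and-drag gesture boils down to an (x, y) destination plus a yaw angle, and ROS-style goal messages carry the heading as a quaternion. Here is a plain-Python sketch of the conversion (illustrative only, not code from ArloBot or Rviz):

```python
import math

def click_drag_to_goal(click, release):
    """Convert the Rviz click point and release point into (x, y, yaw).

    The click point is the destination; the drag direction is the
    final heading the robot should face when it arrives.
    """
    x, y = click
    yaw = math.atan2(release[1] - click[1], release[0] - click[0])
    return x, y, yaw

def yaw_to_quaternion(yaw):
    """Express a flat-floor heading as the (z, w) parts of a
    quaternion, the form a ROS goal message's orientation uses
    (x and y are 0 when the robot cannot tilt)."""
    return math.sin(yaw / 2.0), math.cos(yaw / 2.0)

# Click at (1, 2), drag up and to the right: a 45-degree heading.
gx, gy, gyaw = click_drag_to_goal((1.0, 2.0), (2.0, 3.0))
qz, qw = yaw_to_quaternion(gyaw)
```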

Once you do that you should see a yellow and a green line appear. One is the global path, the path it calculates to get all of the way to the destination, and the other is the local path, which is essentially the "next move" as it travels.

You can also toy with switching these off and on to see what they are on the screen:
Map
Global Costmap
Local Costmap

The Global Costmap is used by the global planner, and is essentially the map itself.
The Local Costmap is used by the local planner and is entirely made up of the results of analyzing the live lidar scan input.
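A rough intuition for what those costmaps contain: occupied cells get "inflated" with extra cost around them so the planners keep some clearance from obstacles. A toy plain-Python illustration of that idea (the real navigation-stack costmap layers are far more sophisticated):

```python
# Toy costmap inflation: occupied cells (value 100, the occupancy-grid
# convention) get a ring of high but passable cost around them so a
# planner steers clear of walls. Illustrative only.
def inflate(grid, cost=50):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 100:  # occupied cell
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and out[rr][cc] == 0:
                            out[rr][cc] = cost  # inflated: costly but free
    return out

grid = [
    [0, 0, 0, 0],
    [0, 100, 0, 0],
    [0, 0, 0, 0],
]
costmap = inflate(grid)
```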

Christen Lofland

Jan 22, 2022, 12:13:23 PM
to ROS for Arlobot
One more thing, you may also see the "2D Pose Estimate" option. You shouldn't need to use this, but what it does is tell the robot where on the map it is, so that it can relocalize itself if it got lost.

Slam Toolbox is set up such that it expects you to always load maps with the robot in the exact location that it was when you saved the map, so typically I don't need to use this.

However, if you find the robot somehow got very lost or disoriented, based on where it shows on the map vs. where you know it is and what the lidar looks like, you can use the "2D Pose Estimate" to tell it where it is. It is called "estimate" because you don't have to be exact, and the robot will initially take your word for it, but immediately start localizing itself again.

Christen Lofland

Jan 22, 2022, 12:17:03 PM
to ROS for Arlobot
I should point out also that the "2D Nav Goal" and the "2D Pose Estimate" are the only two inputs you are able to give to the robot from Rviz. Everything else you see here is 100% "read only". That is, checking and unchecking those boxes only affects your view; it does not change what data the robot uses or sees.

It is possible to update other parameters on the robot via Rviz, but not easily. You won't do it by mistake, that is.

Also, an important trick: if the robot is getting away from you or just behaving badly, and you can't get to it physically, put a "2D Nav Goal" right on top of the robot, and it should change its goal to where it is and stop. It seems obvious once you do it, but it is good to remember.

Christen Lofland

Jan 22, 2022, 12:24:48 PM
to ROS for Arlobot
I keep thinking of things...

It is entirely possible to start a map-making session and control the robot via joystick, keyboard, or the input from the web site to make the map. You just need to drive the robot around enough so that it has "seen" the entire room, and then you will see in Rviz that the map is complete.

However, my experience is that localization and map making are often poor using this method. The reason is that your inputs will always be very "coarse": lots of small turns and movements, and those add up to a lot of drift as the robot attempts to merge the odometry from the wheels and the visual input from the lidar.
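That drift accumulation is easy to see in a toy dead-reckoning calculation: give every start-and-stop maneuver a small heading error and compare many short nudges against a few long runs covering the same distance (plain Python, not how Slam Toolbox actually models error):

```python
import math

def dead_reckon(segment_length, segments, heading_error=0.01):
    """Integrate odometry along what should be a straight run, adding
    a small consistent heading error at the start of every motion
    segment. Returns the distance between where odometry thinks the
    robot ended up and the ideal straight-line endpoint."""
    x = y = heading = 0.0
    for _ in range(segments):
        heading += heading_error  # each maneuver adds a little error
        x += segment_length * math.cos(heading)
        y += segment_length * math.sin(heading)
    ideal_x = segment_length * segments
    return math.hypot(x - ideal_x, y)

# Same 10 m of travel: 100 tiny joystick nudges vs. 4 long nav goals.
many_small = dead_reckon(0.1, 100)
few_large = dead_reckon(2.5, 4)
```

With identical per-maneuver error, the hundred small moves end up several meters off while the four long runs stay within a fraction of a meter, which matches the "coarse input" problem described above.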

By using Rviz to give the robot "2D Nav Goals", you allow the robot to control itself, and it operates in large "curves" (often straight lines, but they could be curved in theory) that are easy for it to keep track of, so the map is of a much higher quality. Hence making the map in Rviz with "2D Nav Goals" instead of driving it around by hand.

Once you have made the map, saved it, and loaded it again, the benefit is that you can send the robot to a location, which is the goal: for it to "navigate" rather than just be a remote-control car.

You may also ask, "Why do I need to save the map and then load it?"
Two reasons:
1. If this is your house, it is nice to just load and use the map, rather than having to manually operate the robot to build it every time you start the robot.
2. When in "map making mode" the code is constantly evaluating the room and trying to improve the map. This means that it is easy to mess up the map, too. However, if you load a map, the robot won't change it, so if something "catastrophic" happens, like the robot slipping and thinking it rotated 180 degrees when it didn't, the robot may be a little lost, but it won't edit the map into a mess, and you can use the "2D Pose Estimate" to help the robot know where it is again if it cannot figure it out on its own.
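Part of why a loaded map stays meaningful is that it carries a fixed frame: a saved occupancy grid has a resolution (meters per cell) and an origin, and every robot pose, including a "2D Pose Estimate", is interpreted against them. A small plain-Python sketch of the occupancy-grid cell-to-world convention (illustrative only, not ArloBot code; the example numbers are made up):

```python
def cell_to_world(col, row, resolution, origin):
    """Center of occupancy-grid cell (col, row) in world coordinates.

    Follows the ROS OccupancyGrid convention: `origin` is the world
    position of cell (0, 0) and every cell is `resolution` meters on
    a side, so poses only make sense relative to this fixed frame.
    """
    ox, oy = origin
    return ox + (col + 0.5) * resolution, oy + (row + 0.5) * resolution

# A 0.05 m/cell map whose lower-left corner sits at (-10, -10):
wx, wy = cell_to_world(200, 200, 0.05, (-10.0, -10.0))
```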

thomasco...@gmail.com

Jan 22, 2022, 12:45:03 PM
to ROS for Arlobot
Hi Chris,

Thanks for the detailed response, much appreciated.

I will give it a shot as soon as I get my Xbox One gamepad controller working the way I want with Arlobot.

Regards,
TCIII

thomasco...@gmail.com

Jan 31, 2022, 9:00:28 AM
to ROS for Arlobot
Hi Chris,

What is the best way to display the web interface and rviz?

Do you use VNC on a Windows PC to view the robot computer desktop and the web interface and then use a desktop terminal to start view-navigation.sh?

Or do you browse to the web interface on a Windows PC and then use docker view-navigation.sh to launch rviz?

Suggestions? 

Christen Lofland

Jan 31, 2022, 11:44:18 AM
to ROS for Arlobot
Remember, my current Pi install on my robot doesn't even run a graphical desktop, so it cannot run Rviz or a web browser. I have not gotten around to experimenting with MATE yet.

I use the web interface from a browser on my laptop. That is the point of the web interface: you can easily use it from any computer on the network.

I also use Rviz from my laptop. I was using the Docker version when I ran Ubuntu on my laptop, but now that I found Rviz works so well on Windows (https://github.com/chrisl8/ArloBot/wiki/Running-Rviz-on-Windows) I just use Rviz on Windows directly on my laptop too.

In short, I use both a web browser and Rviz on my Windows laptop to control the robot.

thomasco...@gmail.com

Jan 31, 2022, 1:31:54 PM
to ROS for Arlobot
Hi Chris,

Okay, so I have two choices then:

If I install rviz on my Windows PC, I can browse to the Web Interface and run rviz on the PC, or

Browse to the Web Interface on my Nano 4GB running Ubuntu 18.04 and use 'docker view-navigation.sh' to launch rviz.

By the way have you had a chance to review my proposal to add a docking station to the ArloBot configuration?

Regards,
TCIII
