Yes, anything that shows up in the global cost map affects the initial (global) plan, and anything in the local cost map affects how that plan gets executed. The configuration files cover the easy way to get data into the cost maps, which is usually just feeding in sensor data. I say this because there are harder ways to get data into the cost maps, but probably few people ever use them.
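To make the "sensor data marks the cost map" idea concrete, here is a toy sketch of the core mechanic: a single range reading gets projected from the robot's pose into a grid cell, which is then marked as a lethal obstacle. This is not ROS code, just an illustration; the cost value 254 follows the costmap_2d convention for a lethal obstacle, and the grid size and resolution are made-up numbers.

```python
import math

def mark_obstacle(grid, robot_x, robot_y, bearing_rad, range_m, resolution=0.05):
    """Project one range reading into the grid and mark that cell lethal.

    grid        -- 2D list of costs, indexed [row][col]
    bearing_rad -- direction of the reading in the map frame, radians
    range_m     -- measured distance to the obstacle, meters
    resolution  -- meters per grid cell (0.05 is a common costmap value)
    """
    # World coordinates of the point the sensor hit.
    ox = robot_x + range_m * math.cos(bearing_rad)
    oy = robot_y + range_m * math.sin(bearing_rad)
    # Convert world coordinates to grid indices.
    col = int(ox / resolution)
    row = int(oy / resolution)
    grid[row][col] = 254  # 254 = lethal obstacle in the costmap_2d convention
    return row, col

# A 5 m x 5 m empty map at 5 cm resolution.
grid = [[0] * 100 for _ in range(100)]
# Robot at (1.0, 1.0) sees an obstacle 0.5 m straight ahead (bearing 0).
row, col = mark_obstacle(grid, robot_x=1.0, robot_y=1.0,
                         bearing_rad=0.0, range_m=0.5)
```

A real obstacle layer also ray-traces along the beam to *clear* cells between the robot and the hit point, which is how stale obstacles get erased when the sensor can see through where they used to be.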
Puck currently isn’t using its depth cameras, but I’ll turn them on again later. Even with a depth camera, the field of view is highly limiting. Puck has two depth cameras which together give nearly 180 degrees of view, but only out in front of the robot. Think of a robot that has a single depth camera with a field of view of, say, nearly 90 degrees. This is fine for viewing the world about a meter ahead of the robot, but it is blind to anything that isn’t at least a few inches ahead and roughly centered in front of the robot. And distances beyond a couple of meters, or at the edges of the picture, are pretty inaccurate. That inaccuracy at the left and right edges makes the effective field of view even less than the specs for the camera.
My house has narrow passages nearly everywhere. In the living room, I have a coffee table with a couch right next to it, and chairs that are 2 or 3 feet away. In the kitchen there are chairs that are not tucked tight against the table and that get moved all the time. Whenever the robot makes a turn, it would be blind to nearly everything of interest if I had only a single depth camera. And then there is the whole issue of backing up, which is a normal part of path recovery. My 12 proximity sensors, placed at specially chosen locations, go a long way toward giving the robot a view of the likely nearby obstacles in my house.
Puck’s main purpose is to work on all those safety systems that I haven’t addressed in the previous 15 robots I built. I’m trying to build a robot that I can trust, especially one that I can trust not to harm me or items in the house. Even simply going from the computer room to the living room to get near me is difficult, even when driving the robot with a remote control. With my human vision, I have to stand nearly atop the robot as I use my joystick to maneuver between tables and chairs, or between the piano bench and nearby table legs. The drive wheels are not centered on the body of the robot, so turning, especially turning while backing up, makes it hard to avoid crashing the butt of the robot into walls.
My trademark saying is that “everything about robots is hard.” I’ve been a member of this Home Brew Robotics Club for a fairly long time, and we have demos of robots at nearly every meeting. Over all that time, there still hasn’t been even one robot shown that can reliably do the level one table bot challenge: go from one end of the table to the other and back. Seriously. They all require the robot to be pointed at a reasonable angle, to have good lighting, to have table surfaces that are appropriately reflective, to have no obstacles along the way, and so on. Fifteen years of watching robots, and pretty much none of them work if the robot is initially placed very near the edge of the table, heading at an angle just off parallel to the side of the table. And don’t get me started on the floorbot challenge. It’s all fun and show in the robot club, but I’m trying to do something a bit more serious. And it’s hard. Every damn aspect of it is hard. I can’t even easily get cables to stay reliably connected in the face of vibration. Sure, they work for an hour, but not for a month.
A simple goal I’d like to solve is to have the robot reliably move from an arbitrary place in my house to some other arbitrary place. And the person commanding the move is, say, in bed. No one will be there to help the robot extricate itself if it gets stuck. Every week, Puck gets improved so that the number of common situations that will cause the robot to fail gets smaller and smaller.