Adding SONAR-like sensors to the Nav2 stack

Michael Wimble

Nov 2, 2022, 3:34:09 PM11/2/22
to hbrob...@googlegroups.com
I recently got my 4 SONAR sensors and 8 time-of-flight sensors working on Puck such that they now contribute to the local costmap, in addition to the usual LIDAR sensors (I have 2 now on Puck). I wrote a short article on how you alter your Nav2 YAML configuration file to do this, along with the caveats you should be aware of to make this work correctly.

https://wimblerobotics.wimble.org/wp/2022/11/02/adding-sonar-like-sensors-to-nav2/
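
The gist, for anyone skimming: you add Nav2's range sensor layer to the costmap's plugin list and point it at your sonar/ToF topics. A minimal sketch follows (the parameter names come from the Nav2 range_sensor_layer documentation; the topic names are placeholders for your own sensors, and the article covers the caveats behind choices like clear_on_max_reading):

    local_costmap:
      local_costmap:
        ros__parameters:
          plugins: ["obstacle_layer", "sonar_layer", "inflation_layer"]
          sonar_layer:
            plugin: "nav2_costmap_2d::RangeSensorLayer"
            enabled: true
            # One sensor_msgs/Range topic per sonar or time-of-flight sensor.
            topics: ["/sonar0", "/sonar1", "/sonar2", "/sonar3",
                     "/tof0", "/tof1", "/tof2", "/tof3",
                     "/tof4", "/tof5", "/tof6", "/tof7"]
            phi: 1.2                     # beam-spread parameter
            inflate_cone: 1.0            # fraction of the cone area to mark
            clear_threshold: 0.2         # cells below this probability clear
            mark_threshold: 0.8          # cells above this probability mark
            clear_on_max_reading: true   # treat max-range readings as "nothing seen"
            input_sensor_type: ALL       # fixed and variable range sensors mixed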

camp .

Nov 2, 2022, 7:57:34 PM11/2/22
to hbrob...@googlegroups.com
Does nav2_bt_navigator (I believe this is the module that drives the robot) navigate around obstacles detected through this range configuration?

Great work on the blog, BTW!

Chris Albertson

Nov 3, 2022, 3:13:29 AM11/3/22
to hbrob...@googlegroups.com
From my reading of the blog Michael posted, the planner uses a costmap to plan the least-cost route. The sonars add data to the map, so when the planner sees the map, it does not know whether the data came from a lidar, a ToF sensor chip, or a sonar.

Whether the planner acts on the sonar data would depend on what other sensors have already placed on the map. I would be surprised if a sonar detected something a depth camera missed; I would expect the sonar to be redundant.

Maybe Michael will clarify.

There are two arguments:
1) More sensors are better, because one will see what the other misses, and
2) Tesla sells cars that do not even use depth, just plain, cheap video cameras that you can buy for $20, and its cars don't crash even when moving at freeway speeds in traffic.

I think(?) #2 works only because they use a really big computer.  Having better sensors like LIDAR reduces the amount of computation.



--

Chris Albertson
Redondo Beach, California

Michael Wimble

Nov 3, 2022, 1:37:41 PM11/3/22
to 'Stephen D. Williams' via HomeBrew Robotics Club
Yes, anything that shows up in the global costmap affects the initial (global) plan, and anything in the local costmap affects the execution of the plan. The configuration files are the easy way to get data into the costmaps, usually by just pointing them at sensor topics. I say this because there are harder ways to get data into the costmaps (writing a custom costmap layer, for example), but probably few ever do that.
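
For the easy path, "sensor data" here usually means sensor_msgs/Range messages that the costmap's range layer subscribes to. A minimal sketch of such a publisher, assuming a hypothetical read_sonar_meters() driver call and made-up topic and frame names (the frame must exist in TF so the reading lands in the right place):

    import math
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Range

    class SonarPublisher(Node):
        def __init__(self):
            super().__init__('sonar_publisher')
            self.pub = self.create_publisher(Range, '/sonar0', 10)
            self.timer = self.create_timer(0.1, self.tick)  # 10 Hz

        def read_sonar_meters(self) -> float:
            # Hypothetical: replace with your actual sensor driver.
            return 1.0

        def tick(self):
            msg = Range()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.header.frame_id = 'sonar_0_link'      # must be published in TF
            msg.radiation_type = Range.ULTRASOUND
            msg.field_of_view = math.radians(15.0)    # beam width from the datasheet
            msg.min_range = 0.02
            msg.max_range = 4.0
            msg.range = self.read_sonar_meters()
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(SonarPublisher())

    if __name__ == '__main__':
        main()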

Puck currently isn’t using its depth cameras, but I’ll turn them on again later. Even with a depth camera, the field of view is highly limiting. Puck has two depth cameras which together give nearly 180 degrees of view, but only out in front of the robot. Think of a robot that has a single depth camera with a field of view of, say, nearly 90 degrees. This is fine for viewing the world about a meter ahead of the robot, but it is blind to a lot of things that aren’t more or less directly in front of it. And distances beyond a couple of meters, or at the edges of the picture, are pretty inaccurate. That inaccuracy at the left and right edges makes the effective field of view even narrower than the camera’s specs suggest.

My house has narrow passages nearly everywhere. In the living room, I have a coffee table with a couch right next to it, and chairs that are 2 or 3 feet away. In the kitchen there are chairs that are not pushed in tight against the table and are moved all the time. Whenever the robot makes a turn, it would be blind to nearly everything of interest if I had only a single depth camera. And then there is the whole issue of backing up, which is a normal part of path recovery. My 12 proximity sensors, placed at specially chosen locations, go a long way toward giving me a view of likely nearby obstacles in my house.

Puck’s main purpose is to work on all those safety systems that I haven’t addressed in the previous 15 robots I built. I’m trying to build a robot that I can trust, especially one that I can trust not to harm me or items in the house. Even simply going from the computer room to the living room to get near me is difficult, even when driving the robot with a remote control. With my human vision, I have to stand nearly atop the robot as I use my joystick to maneuver between tables and chairs, or between the piano bench and nearby table legs. The drive wheels are not centered on the body of the robot, so turning, especially turning while backing up, is hard to do without crashing the tail of the robot into walls.

My trademark saying is that “everything about robots is hard”. I’ve been a member of this HomeBrew Robotics Club for a fairly long time, and we have demos of robots at nearly every meeting. Over that time, there still hasn’t been even one robot shown that can do the level-one tablebot challenge (go from one end of the table to the other and back). Seriously. They all require the robot to be pointed at a reasonable angle, to have good lighting, to have table surfaces that are appropriately reflective, to not have any obstacles along the way, and so on. Fifteen years of watching these attempts, and pretty much none of them work if the robot is initially placed very near the edge of the table, heading at an angle just off parallel to the side of the table. And don’t get me started on the floorbot challenge. It’s all fun and show in the robot club, but I’m trying to do something a bit more serious. And it’s hard. Every damn aspect of it is hard. I can’t even easily get cables to stay reliably connected in the face of vibration. Sure, they work for an hour, but not for a month.

A simple goal I’d like to be able to solve is to have the robot reliably move from an arbitrary place in my house to some other arbitrary place, while the person commanding the move is, say, in bed. No one will be there to help the robot extricate itself if it gets stuck. Every week, Puck gets improved, and the number of common situations that will cause the robot to fail gets smaller and smaller.

Dave Everett

Nov 3, 2022, 2:55:01 PM11/3/22
to hbrob...@googlegroups.com
In our robot project down here, we have added sonar to the map. It helps when there are obstacles that are not in the plane of the lidar.

We also have "cliff" sensors, fed in alongside the lidar data, which create no-go zones during mapping and navigation.

As the body of our robot blocks the rear of the lidar, we also added a sonar there for backing up to the charger.
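
The cliff trick can be emulated roughly as follows (a sketch with invented numbers: a downward-looking sensor normally sees the floor; when the reading gets much longer than the floor distance, report a short synthetic range so the costmap marks a no-go cell just ahead of the sensor):

    FLOOR_DIST_M = 0.05     # nominal sensor-to-floor distance (invented)
    CLIFF_MARGIN_M = 0.03   # slack before a long reading counts as a cliff
    SENSOR_MAX_M = 4.0      # the sensor's maximum range

    def cliff_to_costmap_range(reading_m: float) -> float:
        """Convert a downward cliff-sensor reading into the value to
        publish on a Range topic that the costmap consumes."""
        if reading_m > FLOOR_DIST_M + CLIFF_MARGIN_M:
            return 0.10          # pretend an obstacle sits 10 cm out: marks a no-go cell
        return SENSOR_MAX_M      # floor is where it should be: max range clears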

Dave

Chris Albertson

Nov 4, 2022, 3:09:15 PM11/4/22
to hbrob...@googlegroups.com

You are doing what everyone should be doing. I decided to take your approach, but for walking robots. I see so many really bad robots. Only a very few, like Boston Dynamics' "Spot", are done right, and that one has a price of over $75K. (BTW, my next-door neighbor's workplace just paid $200K for a BD Spot with added sensors and programming.)

So my goal is to do animal-like walking for under $100 per joint. This means a quadruped for $1,200. I may not live long enough to complete the software, as I think PID is the wrong kind of controller. PID is reactive and does not plan ahead. What animals do is three orders of magnitude harder. Rather than reacting to imbalance, we plan foot placements to prevent imbalance: we look at the floor, we know we need to make a turn in a step or two, and we plan the next few steps. OK, we also react to whatever residual imbalance we could not remove by planning.

We use different planning methods for different time horizons: for minutes ahead, for seconds ahead, and for milliseconds. Animals continuously adjust all their plans as they get new data. This is, as I said, 1,000 times more complex than a PID loop. And this is just for walking; we are not yet to the point of finding the kitchen or avoiding obstacles.
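
A caricature of the difference, with invented rates and stub functions: the PID function reacts only to the error it sees right now, while the layered loop re-plans at several horizons and hands each plan down to the faster layer below.

    import time

    def pid(error, integral, last_error, kp=1.0, ki=0.1, kd=0.05, dt=0.001):
        # Purely reactive: only the current error (and its history) matters.
        integral += error * dt
        derivative = (error - last_error) / dt
        return kp * error + ki * integral + kd * derivative, integral

    def route_plan(state):             return ["hall", "kitchen"]  # minutes ahead (stub)
    def footstep_plan(state, route):   return [(0.20, 0.10)]       # seconds ahead (stub)
    def balance_control(state, steps): return [0.0] * 12           # milliseconds (stub)

    def walking_loop(get_state, send_torques):
        route = steps = None
        last_route_t = last_step_t = 0.0
        while True:
            now, state = time.monotonic(), get_state()
            if now - last_route_t > 5.0:      # ~0.2 Hz: where are we going?
                route, last_route_t = route_plan(state), now
            if now - last_step_t > 0.5:       # ~2 Hz: where do the feet go?
                steps, last_step_t = footstep_plan(state, route), now
            send_torques(balance_control(state, steps))  # ~1 kHz reactive layer
            time.sleep(0.001)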

Someday I will have this working. Then I will just download your work off your GitHub. (I assume you are putting this out someplace; please do, even if it is a churning, disorganized mess at the time, like my code is.)

I have always wanted to work on a project where the bar is set far higher than we know how to reach. But really it can't be done by one person.   We can each only do a part of it.

BTW, we just got another rescue dog. He is tiny (10 pounds and under a foot tall at the shoulder) and older (13 years old) too. At night I take him out, and he can't see the steps on the porch and is afraid to step down or up. So he senses the ground with one foot; he will feel for the ground before committing. The 10-inch step is like a cliff to him. Humans do this too sometimes; we test footholds if we are unsure of them.

I am thinking about how a robot could do this, using a foot/leg as a sensor. Pressure sensors on the feet would help. I also read a paper where they placed an IMU chip in each foot. No joke, they are (were) cheap and tiny. I am thinking ahead to stair climbing too, and to how to walk over, not around, some obstacles.
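
A sketch of the "feel before committing" idea, assuming a hypothetical leg API with position control and a foot force estimate (all names and thresholds invented):

    CONTACT_FORCE_N = 5.0   # force that counts as "ground found" (invented)
    MAX_PROBE_M = 0.12      # probe depth limit; beyond this, treat it as a cliff
    STEP_M = 0.005          # lower the foot 5 mm at a time

    def probe_step(leg, target_xy) -> bool:
        """Lower the swing foot slowly; commit weight only on solid contact."""
        leg.move_foot_above(target_xy, height=0.05)   # hover over the target
        lowered = 0.0
        while lowered < MAX_PROBE_M:
            leg.lower_foot(STEP_M)
            lowered += STEP_M
            if leg.foot_force() > CONTACT_FORCE_N:    # pressure sensor or IMU jolt
                leg.shift_weight()                    # ground found: commit
                return True
        return False                                  # no ground found: replan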

Your use of a costmap is a good idea. I think it offers a natural way to do sensor fusion. Sonar, ToF, foot pressure sensors, and stereo vision can all contribute to a costmap.
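
Concretely, Nav2's layered costmap gives that fusion for free: each sensor family gets its own layer, and the layers are combined into one grid. A sketch, with arbitrary layer and topic names:

    local_costmap:
      local_costmap:
        ros__parameters:
          plugins: ["lidar_layer", "sonar_layer", "stereo_layer", "inflation_layer"]
          lidar_layer:
            plugin: "nav2_costmap_2d::ObstacleLayer"
            observation_sources: scan
            scan:
              topic: /scan
              data_type: LaserScan
              marking: true
              clearing: true
          sonar_layer:
            plugin: "nav2_costmap_2d::RangeSensorLayer"
            topics: ["/sonar0", "/tof0"]   # sensor_msgs/Range sources
          stereo_layer:
            plugin: "nav2_costmap_2d::ObstacleLayer"
            observation_sources: points
            points:
              topic: /stereo/points
              data_type: PointCloud2
              marking: true
              clearing: true
          inflation_layer:
            plugin: "nav2_costmap_2d::InflationLayer"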

Michael Wimble

Nov 4, 2022, 9:00:20 PM11/4/22
to hbrob...@googlegroups.com
I’m hoping to open up my GitHub repositories soon. I’m working on cleaning up the Teensy Monitor code, as it has a few good tricks and design ideas for using Arduino-class machines for sensor input. I like following your stuff, but I haven’t gotten into walking robots yet. I am interested in hearing how you make progress, though; some of your work will translate into something I can use in my robot.

I’m glad to hear about your work with the rescue dog, and, even more, how you translated that into an opportunity to make your walking robot better. When I work on my gripper, I’ll also be tackling how to sense force feedback to adjust the grip according to what is being picked up.

Thanks for the feedback.

Chris Albertson

Nov 4, 2022, 9:57:23 PM11/4/22
to hbrob...@googlegroups.com
On Fri, Nov 4, 2022 at 6:00 PM Michael Wimble <mwi...@gmail.com> wrote:
 When I work on my gripper, I’ll also be tackling how I can sense force feedback to adjust the grip according to what is being picked up.

Just use a gripper from Yale OpenHand. No one has time to develop something better. They have already invested years of research so we don't have to. They use Dynamixel servos that have torque feedback, and compliant fingers, so you can compute the grip pressure without sensors in the fingertips. They can lift an empty wine glass or a full soup can.
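
A sketch of what that can look like in practice, using the dynamixel_sdk Python package. The register addresses are from the Dynamixel X-series control table (check your servo model), and the current threshold is invented: close slowly and stop when present current, a proxy for grip force, crosses the threshold.

    import time
    from dynamixel_sdk import PortHandler, PacketHandler

    ADDR_TORQUE_ENABLE = 64      # X-series control table addresses
    ADDR_GOAL_POSITION = 116
    ADDR_PRESENT_CURRENT = 126   # 2 bytes, signed, ~2.69 mA per unit
    DXL_ID = 1
    GRIP_CURRENT = 120           # "enough squeeze" threshold (invented)

    port = PortHandler('/dev/ttyUSB0')
    packet = PacketHandler(2.0)
    port.openPort()
    port.setBaudRate(57600)
    packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)

    goal = 2048                  # starting finger position (model-dependent)
    while True:
        raw, _, _ = packet.read2ByteTxRx(port, DXL_ID, ADDR_PRESENT_CURRENT)
        current = raw - 65536 if raw > 32767 else raw   # unsigned -> signed
        if abs(current) >= GRIP_CURRENT:
            break                # current spike = grip force: object is held
        goal += 5                # keep closing a little at a time
        packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, goal)
        time.sleep(0.02)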

Here is a later video of one of the more advanced hands; they have simpler ones too. Just looking at it, it is clear I'm not going to develop anything like it myself.