Curbs / drop offs


Brian Higgins

Sep 3, 2021, 4:11:13 AM
to hbrob...@googlegroups.com
What sensor detects curbs or drop-offs? How does a robot keep from going over the edge and rolling / tipping over?

Brian

Sent from my iPhone 12 Pro Max

Chris Albertson

Sep 3, 2021, 1:47:45 PM
to hbrob...@googlegroups.com
There is debate over what works best and what has the best performance-to-cost ratio.

Elon Musk of Tesla continues to say that you ONLY need stereo vision, and his proof is that humans can drive cars and walk on sidewalks using only stereo vision.

The counter-argument is that the other companies all use multiple 3D LIDAR sensors, and their cars are much better at driving than any Tesla.

So my opinion is that Elon Musk's theory is correct: stereo vision should be all that is needed. But in practical terms, vision-processing software is not so good, LIDAR data is much easier to process, and in 2021 it gives better results.

Again, my opinion:
1) Depth cameras are a really good compromise.
2) No sensor can work if processed one frame at a time.  MANY frames of data must be accumulated to build up an internal model of the world.  So you are not going to detect a curb in a single noisy frame of data.  You need to look for some number of frames and build up an understanding of the local environment.  All humans, dogs, and self-driving cars do this.  So do not expect to ever be able to buy a sensor that will detect every kind of object; sensors provide raw data, and detection requires UNDERSTANDING the data.

With one frame of data you would not be able to know the difference between a wall attached to a building and a bus that was going to run you down.  You have to watch it for half a second to see if it is moving, how fast, and in what direction, and only then can you say "it is a curb because it is low and moving at me at the same speed I am moving forward".

It is critical to know your own speed and heading, or else EVERYTHING looks like a moving object heading at you on a near collision course.  Not until you subtract out your own motion can you see that it is a fixed step.
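
To make that concrete, here is a rough Python sketch of the idea (my own illustration, not anybody's product). It assumes you already have a 2D pose from odometry and ground-plane points from a depth camera; both inputs are placeholders. Once each frame's points are transformed into a fixed world frame, a curb keeps landing in the same grid cells frame after frame, while genuinely moving objects do not.

import numpy as np

def pose_to_matrix(x, y, heading):
    # 2D robot pose (from odometry; a placeholder source here) as a 3x3 homogeneous transform
    c, s = np.cos(heading), np.sin(heading)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

class WorldModel:
    def __init__(self, cell=0.05):
        self.cell = cell   # 5 cm grid cells
        self.hits = {}     # cell -> number of frames it was seen in

    def add_frame(self, points_robot, pose):
        # points_robot: Nx2 ground-plane points in the robot frame (e.g. from a depth camera)
        T = pose_to_matrix(*pose)
        homog = np.hstack([points_robot, np.ones((len(points_robot), 1))])
        world = (T @ homog.T).T[:, :2]   # subtract out ego-motion: express points in the world frame
        for wx, wy in world:
            key = (int(wx / self.cell), int(wy / self.cell))
            self.hits[key] = self.hits.get(key, 0) + 1

    def stable_cells(self, min_frames=5):
        # cells seen in many frames are likely static structure (a curb edge, a wall)
        return [k for k, n in self.hits.items() if n >= min_frames]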

All of the engineering is in the processing.  The argument for lidar over vision is that lidar is easier to process.  My argument is that a depth camera is very much like a lidar but cheaper, and it works well at shorter distances.

My guess: you are going to need a depth camera and a powerful, trained neural network.  The network converts pixels and depth into a list of objects, and the objects then go to a tracker.  Your motion planning must reference the list of tracked objects.  This is not a job for a small microcontroller.
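
As a sketch of that pipeline shape (detector to tracker to planner), something like the following. The detector itself is left out, and every name here is my own placeholder; the point is only that detections carry world coordinates, the tracker keeps a persistent list, and motion planning reads that list rather than raw frames.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "curb"
    xyz: tuple    # position in world coordinates (from depth plus camera pose)

@dataclass
class Track:
    label: str
    xyz: tuple
    velocity: tuple = (0.0, 0.0, 0.0)
    age: int = 0

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

class Tracker:
    # associates each frame's detections with persistent tracks by nearest neighbor
    def __init__(self, max_dist=0.5):
        self.tracks = []
        self.max_dist = max_dist

    def update(self, detections, dt):
        for det in detections:
            best = min(self.tracks, key=lambda t: dist(t.xyz, det.xyz), default=None)
            if best is not None and dist(best.xyz, det.xyz) < self.max_dist:
                best.velocity = tuple((n - o) / dt for n, o in zip(det.xyz, best.xyz))
                best.xyz, best.age = det.xyz, best.age + 1
            else:
                self.tracks.append(Track(det.label, det.xyz))
        return self.tracks   # motion planning works from this list, not from raw pixels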



--

Chris Albertson
Redondo Beach, California

Gmail

Sep 3, 2021, 2:26:38 PM
to hbrob...@googlegroups.com
I’ve seen a caster ball in a sort of a socket arrangement used. I think you can buy them as replacement parts for some robot vacuums. 



Or a downward-facing distance sensor could be used.
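
For what it's worth, the logic for a downward-facing sensor is usually just a threshold test; something like this, where the sensor-reading function and the mounting distance are placeholders:

FLOOR_DISTANCE_M = 0.08   # nominal sensor-to-floor distance; depends on your mounting
CLIFF_MARGIN_M   = 0.04   # anything this much farther than the floor counts as a drop-off

def is_cliff(range_m):
    return range_m > FLOOR_DISTANCE_M + CLIFF_MARGIN_M

# in the control loop, roughly:
# if is_cliff(read_downward_sensor()):   # read_downward_sensor() is a stand-in for your driver
#     stop_motors()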


Thomas

-
Want to learn more about ROBOTS?

dpa

Sep 3, 2021, 2:35:24 PM
to HomeBrew Robotics Club
Hi Brian,

The outdoor robots I'm familiar with are designed to handle curbs, so there is no rolling or tipping over.  Here are a couple of videos of my jBot outdoor robot dealing with curbs.  The first one is an early version of the bot and a low-res, crappy video, but it at least shows the performance (sorry for the slow download speed from our old server):


The second video is better and is a 9-minute YouTube video of a much longer run, but you can skip to around the 7:30 mark to see it dealing with a parking lot full of cars and curbs:


The main sensor on that robot is a 4-element sonar array.

This probably doesn't help you much with your particular problem.  But it does suggest that curbs --- and stairs --- can be handled with the appropriate mechanical design. 

regards,
dpa

Jon Moeller

Sep 3, 2021, 2:37:01 PM
to hbrob...@googlegroups.com
I don’t buy the “only-stereo-vision” approach. 

Eyes move and people move their heads around as they navigate. That feedback loop contributes significantly to sensemaking in the real world and cannot easily be discounted. And it solves for more than just occlusion: yes, I can peek around the car in front of me by moving my head a bit and looking through its windows, but I can also gauge the distance and speed of other objects much better by leveraging the parallax created by head motion.

In other words, today there is no feedback loop from perception back into how the robot looks at the world. 99.999% of robots don't ask themselves "how can I see this better?"; they just see how they see.


--

jon moeller

Chris Albertson

Sep 3, 2021, 4:23:55 PM
to hbrob...@googlegroups.com
Yes, technically a cliff sensor does detect a curb, but I think Brian is wanting something he can use on his bicycle.  It does no good if you have to already be at the cliff, with part of the bike hanging over it, to detect the cliff.  I assume he needs to detect the curb while there is still time to plan an alternate route or, at worst, stop.

If we assume a casual bike rider travels at 10 miles per hour, or just about 4.5 meters per second, and he needs a 2-second warning, then he needs to track the local topography out to about 9 or 10 meters.  Maybe he rides slower or faster by a factor of 2, I don't know, but then he'd need to track the ground from about 2 to 20 meters ahead.

From my own experience on a bicycle: I simply cannot ride fast (20 mph) on a crowded bike path, because I cannot predict the actions of others 20 meters ahead.  I have to slow down or go someplace else.  My point is that the look-ahead distance determines your maximum speed, so I'd hate to have to depend on a downward-looking sensor with a look-ahead of just a few inches.
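
In numbers, the relationship is trivial but worth writing down (the speed and warning time are just the assumptions from above):

MPH_TO_MPS = 0.44704

def lookahead_m(speed_mph, warning_s=2.0):
    # distance you must be able to see ahead to get the given warning time
    return speed_mph * MPH_TO_MPS * warning_s

def max_speed_mph(distance_m, warning_s=2.0):
    # the speed limit implied by a fixed look-ahead distance
    return distance_m / warning_s / MPH_TO_MPS

print(lookahead_m(10))        # about 8.9 m at 10 mph with a 2-second warning
print(max_speed_mph(0.1))     # a few inches (about 0.1 m) of look-ahead allows only about 0.1 mph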

Finally, we don't know Brian's goal.  Is he trying to avoid riding over a curb, or riding up a curb, or maybe riding on the street, drifting too far right, and grazing into a curb?  Or "all of the above"?  Or was his use of "curb" a stand-in for all kinds of minor discontinuities, including potholes?  Writing requirements is HARD, but it is nonetheless needed if the software is to do what is expected.

If you want to mimic what riders with good vision do: they glance at the curb now and then and remember where it is.  They really don't need to track it continuously.

Michael Wimble

Sep 3, 2021, 9:34:17 PM
to hbrob...@googlegroups.com
I’m just beginning to look into semantic segmentation, which seems like it might be one component of a detector network. With proper training, it should often be able to detect the difference between sidewalk, road, and curbing, and maybe also driveways, handicap ramps and, dare I hope, potholes. All it needs is a camera, an AI node, and some training. I’m thinking a few tens of thousands of photos with markup should work to start.
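
For anyone curious what that looks like in code, here is a minimal sketch using torchvision's DeepLabV3. Note that the stock pretrained weights do not know sidewalk/road/curb classes, so this assumes a model you have already fine-tuned yourself; the class indices and weight file name are placeholders.

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4                   # background, sidewalk, road, curb (hypothetical label set)
SIDEWALK, ROAD, CURB = 1, 2, 3    # placeholder indices from your own training

model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("curb_segmenter.pt"))   # your fine-tuned weights (placeholder name)
model.eval()

@torch.no_grad()
def segment(image_tensor):
    # image_tensor: 3xHxW float tensor, normalized the same way as the training data
    logits = model(image_tensor.unsqueeze(0))["out"]      # 1 x NUM_CLASSES x H x W
    return logits.argmax(dim=1).squeeze(0)                # HxW map of class indices

# The boundary between SIDEWALK and ROAD pixels is a first guess at where the curb is.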


Chris Albertson

Sep 4, 2021, 3:45:06 AM
to hbrob...@googlegroups.com
And there is the problem: you have to shoot and label 10,000 photos.  Say it takes a half hour per photo; that is 5,000 hours of very boring work.

One trick I learned in one of the self-driving-car courses: when collecting data you can mount 3 or even 4 cameras on a car, drive it around, and get data 3 or 4 times faster.  It is even better if the cameras are widely separated so they get different views.

You can start with a pre-trained segmenter and do transfer learning and hope that fewer photos will be required.
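
Roughly, that looks like the following with torchvision: freeze the pretrained backbone and retrain only the classification head on the small labeled set. The class count, learning rate, and the data you feed it are all assumptions, not a recipe.

import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from torchvision.models.segmentation.deeplabv3 import DeepLabHead

NUM_CLASSES = 4   # background, sidewalk, road, curb (hypothetical)

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT)
for p in model.backbone.parameters():
    p.requires_grad = False                        # keep the pretrained features as-is

model.classifier = DeepLabHead(2048, NUM_CLASSES)  # new head for our own classes

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(images, masks):
    # images: Bx3xHxW float tensor; masks: BxHxW long tensor of class indices from your labeling tool
    optimizer.zero_grad()
    loss = loss_fn(model(images)["out"], masks)
    loss.backward()
    optimizer.step()
    return loss.item()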

Believe me, having depth information will help a lot when it is time to figure out WHERE those detected objects are.  A normal RGB image gives only a direction angle relative to the camera boresight; having depth allows you to place the object in 3D world coordinates.  Then, when you get the next frame, you have moved, so the angles to nearby objects have changed and you can't use them for matching.  But the 3D world coordinates have not changed, so you match on those.

So, the segmenter returns a polygon in pixel (x, y) coordinates.  This is very helpful, but what is needed is a polynomial equation in world coordinates for the path to be followed.
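
A sketch of that last step, using a plain pinhole back-projection: each curb pixel plus its depth becomes a 3D point, the points get transformed into world coordinates, and a polynomial is fit to the resulting curb line on the ground plane. The intrinsics (fx, fy, cx, cy) and the camera pose (R, t) come from your own calibration and are placeholders here.

import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    # back-project one pixel (u, v) with its depth (meters) into camera coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_world(p_cam, R, t):
    # R (3x3 rotation) and t (length-3 translation) are the camera-to-world pose
    return R @ p_cam + t

def fit_curb_line(world_points, degree=2):
    # fit y = f(x) over the ground-plane projection of the curb-boundary points
    xs, ys = world_points[:, 0], world_points[:, 1]
    return np.polynomial.Polynomial.fit(xs, ys, degree)

The fitted polynomial can then be evaluated at whatever look-ahead distance the path planner cares about.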

Brian Higgins

Sep 4, 2021, 8:31:24 AM
to hbrob...@googlegroups.com
What curb analysis is for:
It's for my bike, and I am moving towards "LOOMO", the robotic Segway, for transportation.  LOOMO is programmable.  In the visually impaired world there are three categories of obstacles: drop-offs, overhangs, and objects directly in front.
Drop-offs are the most recent obstacle I'm studying; I have been studying the other two for a while.

Brian

Sent from my iPhone 12 Pro Max
