Big Orange Gets New Point at Goal and Go Behavior


Jim DiNunzio

Oct 2, 2022, 3:13:53 PM
to RSSC-list, hbrob...@googlegroups.com

Hi All,

I added a new feature to Big Orange: point to a place on the floor and say, “go there,” and Orange will go to that spot. I’m using Google’s MediaPipe BlazePose neural-network models for human pose detection and landmark finding, running on a Luxonis OAK-D stereo camera via the DepthAI software. The robot first adjusts the camera to get the whole body in view. Then, from the shoulder and wrist pose landmarks, I form a ray, find its intersection with the floor plane, convert that point to world space, and send the 2D coordinates to the robot’s navigation stack as the goal.
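For anyone curious, the shoulder-to-wrist ray-to-floor math described above can be sketched in a few lines. This is a minimal illustration assuming a y-up world frame with the floor at y = 0; the function name and frame convention are mine, not from the big-orange code:

```python
import numpy as np

def pointing_target_on_floor(shoulder, wrist, floor_y=0.0):
    """Intersect the shoulder->wrist ray with a horizontal floor plane.

    shoulder, wrist: 3D points (x, y, z) in a y-up world frame, meters.
    Returns (x, z) floor coordinates of the pointed-at spot, or None if
    the arm is pointing level or upward (ray never reaches the floor).
    """
    shoulder = np.asarray(shoulder, dtype=float)
    wrist = np.asarray(wrist, dtype=float)
    direction = wrist - shoulder
    if direction[1] >= 0:  # not pointing downward
        return None
    # Solve shoulder.y + t * direction.y == floor_y for t, then walk the ray
    t = (floor_y - shoulder[1]) / direction[1]
    hit = shoulder + t * direction
    return hit[0], hit[2]

# Example: shoulder at 1.4 m, wrist at 1.0 m, arm angled forward and down
print(pointing_target_on_floor((0.0, 1.4, 0.0), (0.1, 1.0, 0.5)))
```

The resulting (x, z) pair would then be transformed by the robot's localized pose into map coordinates before being handed to the navigation goal.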

 

Main Video:

https://youtu.be/oYBI4hL-16c

 

And a little humor that an error case made possible:

https://youtu.be/XVzjYcGbYmk

 

Orange Go to Goal Presentation from HBRC September meeting:

https://youtu.be/q5bkFvdEoqI?t=4976

 

Up-to-date code for the high-level system is on GitHub here:

https://github.com/jimdinunzio/big-orange

 

Other Links:

 

DepthAI Blazepose

https://github.com/geaxgx/depthai_blazepose

 

Google Mediapipe Pose using BlazePose

https://google.github.io/mediapipe/solutions/pose

https://google.github.io/mediapipe/solutions/pose.html#models

 

BlazePose Paper:

https://arxiv.org/abs/2006.10204

 

Jim

 

Gmail

Oct 2, 2022, 3:24:10 PM
to j...@dinunzio.com, RSSC-list, hbrob...@googlegroups.com
Jim,
That’s really great! How reliable is it?

Also, how well does Orange do with avoiding chair legs and table legs?



Thomas

-
Want to learn more about ROBOTS?

Jim DiNunzio

Oct 2, 2022, 4:35:55 PM
to MJ Chan, j...@dinunzio.com, RSSC-list, hbrob...@googlegroups.com
It runs locally on the OAK-D camera hardware. Currently, all of Orange’s functions run locally on the OAK-D camera, the LattePanda Delta SBC, or a Coral Edge TPU, including good-quality speech-to-text using a 50 MB Vosk model and text-to-speech using Windows SAPI. No dependence on the internet makes demos on location easier. But if I start using an LLM, then at least that part will be cloud-based.

Jim

On Oct 2, 2022, at 12:21 PM, MJ Chan <iglo...@gmail.com> wrote:


Nice. Is the NN running locally? Or from the cloud? 


Jim DiNunzio

Oct 2, 2022, 5:39:57 PM
to Gmail, RSSC-list, hbrob...@googlegroups.com

The full end-to-end feature as I have implemented it still needs some tuning for a reliable live demo. Currently, the biggest impact on reliability is automatically tilting the camera to get most of the person, including the head, in view so the NN can detect the skeleton well; I will revisit that. Reliability of the NN pose model itself is reasonable for poses that are not edge cases or heavily occluded. In short, if the NN can see your head and arm, it works pretty well.

 

Jim

MJ Chan

Oct 2, 2022, 5:41:41 PM
to j...@dinunzio.com, RSSC-list, hbrob...@googlegroups.com
Nice. Is the NN running locally? Or from the cloud? 
--

Best regards,
MJ

Gmail

Oct 2, 2022, 9:19:35 PM
to j...@dinunzio.com, RSSC-list, hbrob...@googlegroups.com
I see. Ok. I find that reliability often takes a large part of development time.

And chair legs?



Thomas


Jim DiNunzio

Oct 2, 2022, 10:31:46 PM
to Gmail, RSSC-list, hbrob...@googlegroups.com

Yes, I agree about reliability. It takes a lot of time and is generally not that fun. But that’s why this is a hobby and not a job (for me): I can choose whether to spend more time on it or not. Reaching live-demo capability is usually still a ways from having a reasonably reliable, complete feature. I find that if I take a break after reaching demo capability and do other things, I can come back to a feature’s reliability work with fresh energy and make it better.

 

Regarding chair legs, Orange uses LiDAR to map the room; chair legs show up as occupied areas on the map, and any path finding during navigation avoids occupied areas.
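As a rough illustration of that idea, here is a toy occupancy-grid goal check. The names, map size, and 5 cm resolution are made up for the example; this is not the actual code in Orange's navigation stack:

```python
import numpy as np

def world_to_cell(x, y, resolution=0.05, origin=(-5.0, -5.0)):
    """Convert world coordinates (meters) to (row, col) grid indices."""
    col = int(round((x - origin[0]) / resolution))
    row = int(round((y - origin[1]) / resolution))
    return row, col

def is_goal_free(grid, x, y, robot_radius=0.25, resolution=0.05):
    """Reject a nav goal if any cell within the robot's radius is occupied."""
    row, col = world_to_cell(x, y, resolution)
    r = int(np.ceil(robot_radius / resolution))
    rows, cols = grid.shape
    for dr in range(-r, r + 1):
        for dc in range(-r, r + 1):
            rr, cc = row + dr, col + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc]:
                return False
    return True

# Toy map: one chair leg marked occupied at world (0.0, 0.0)
grid = np.zeros((200, 200), dtype=bool)
grid[world_to_cell(0.0, 0.0)] = True
print(is_goal_free(grid, 0.0, 0.1))  # False: within the robot's radius of the leg
print(is_goal_free(grid, 1.0, 1.0))  # True: clear floor
```

A real planner additionally inflates obstacles along the whole path, not just at the goal, but the occupied-cell test is the same idea.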

 

Jim

Gmail

Oct 4, 2022, 3:10:22 AM
to j...@dinunzio.com, RSSC-list, hbrob...@googlegroups.com
So it avoids chair legs 99% of the time?




Thomas




Chris Albertson

Oct 4, 2022, 12:02:52 PM
to Gmail, RSSC-list, hbrob...@googlegroups.com
If you are very worried about chair legs, then what you need (in fact, the only thing that can work) is a layered defense that uses perhaps three methods. If each is only 80% effective (a 20% chance of collision), then there is only a 0.8% chance you will hit a leg. Maybe use LiDAR, ultrasound, and finally a bumper with a switch. The detection methods have to be independent if the probabilities are to multiply. I don’t think any single solution will get you even 95%, let alone 99.99%.
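The arithmetic above, assuming the three detection layers fail independently:

```python
# Probability that all three independent layers miss the chair leg.
# Independence is the key assumption: a shared failure mode (e.g., the
# same reflective surface fooling two sensors) would break it.
p_miss_single = 0.20          # each layer is 80% effective on its own
layers = 3
p_collision = p_miss_single ** layers
print(f"{p_collision:.3%}")   # prints 0.800%
```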

But as Jim says, how reliable does a hobby project need to be? When you plan to sell 10,000 robots a year, then they need to be reliable.



--

Chris Albertson
Redondo Beach, California

Gmail

Oct 4, 2022, 12:46:21 PM
to Chris Albertson, RSSC-list, hbrob...@googlegroups.com
Yes, I may have a robot that might be for sale.

Yesterday I was testing a small metal detector for finding metal table legs. It worked very well on its own. I will integrate it with my robot over the weekend and do further testing. I am worried that the robot’s metal chassis will be an insurmountable issue.

I tried just ping sensors, and they failed miserably at detecting a 1” round metal chair leg.

Thomas


Chris Albertson

Oct 4, 2022, 2:10:00 PM
to Gmail, RSSC-list, hbrob...@googlegroups.com
I have to agree with Elon Musk’s logic when he says that vision alone can work. The proof, he says, is that humans can drive cars using only vision.

My guess is that vision is the best method for avoiding chair legs. But vision must be done the way humans and animals use it: we see a chair, know it has legs, and know roughly where those legs are relative to the chair. A robot would have to use a neural network to recognize and avoid situations with hazards. That is how self-driving cars work; no one is smart enough to hand-code a working obstacle-avoidance system. Today these systems are trained, not coded, and it works: there are thousands of cars on the road driving outdoors in traffic, and they (1) don’t crash and (2) handle situations and unmapped places they have never been before. They use vision only.


Jim DiNunzio

Oct 4, 2022, 4:15:39 PM
to Chris Albertson, Gmail, RSSC-list, hbrob...@googlegroups.com

Hi Thomas,
Yes, 99% for that problem. I don’t recall the last time Orange bumped into a chair or table leg in normal use around the house. However, I don’t have any highly reflective (e.g., chrome) chair legs, though I do have some metal ones.

Jim


Gmail

Oct 4, 2022, 9:22:18 PM
to Jim DiNunzio, Chris Albertson, RSSC-list, hbrob...@googlegroups.com
What was the name of your navigation package?




Thomas


