Labrador Systems


camp .

unread,
Jan 4, 2022, 4:01:56 PM1/4/22
to HomeBrew Robotics Club
    I think this is a good product, and I notice that it comes from the guy who was the first product manager for Lego Mindstorms and helped launch "Mint", which became iRobot's Braava. Not a bad track record for mobile robotics.
 
    It fits my vision for a "Smart Table": a tray table that can reliably get from one place to another. It's human-assisted delivery, and I think a great starting place for mobile robotics.
 
Here’s a home robot that actually looks useful: a self-driving shelf
 
 
Enjoy,
Camp

Gmail

unread,
Jan 4, 2022, 5:43:54 PM1/4/22
to hbrob...@googlegroups.com
I can only assume that the people in the ad are actors. I would love to see a real demo. 



Thomas

-
Want to learn more about ROBOTS?

Dave Everett

unread,
Jan 4, 2022, 5:57:23 PM1/4/22
to hbrob...@googlegroups.com
I’m out all day, so I just watched a bit on my phone. As usual, we don’t see a strong case for the robot, just appeals to emotion.

Anyone remember Readybot? They had some good ideas and Labrador reminds me of it a bit.

Dave

Dave Everett

unread,
Jan 5, 2022, 3:19:52 AM1/5/22
to hbrob...@googlegroups.com
Just got back and watched the whole video. I like the idea and think it could have some utility. It still requires someone to load things onto the tray, of course, but it would be useful all the same.

It reminds me of a robot I built some years ago to move washing from the laundry to the lounge room for my mother, who was frail. She still had to load the basket, but the robot would navigate there and back by itself.

The lifting part reminded me strongly of the Readybot, though the Readybot had two arms as well.

Dave

Mike Dooley

unread,
Jan 7, 2022, 11:46:09 AM1/7/22
to HomeBrew Robotics Club
Hello.. we came across your posts and wanted to respond to some of the questions. (And we're big fans of homebrew robotics.)

First, we posted a quick overview on YouTube of the live demo we're running at CES this week.  Hope this gives you a better sense of how the Retriever works in real time.


Second, the four people in our debut video (Madge, Janet, Armando and Tricia) aren't actors. They participated in our in-home pilot studies in 2021. (We connected with them and others through networking with physical therapists and care providers, and through Facebook ads.)

We placed the robots in each of their homes for 5 to 8 weeks, running the robots autonomously and letting the participants use them in whatever way they wanted. Several of the scenarios you see in the video were inspired by what they did with the robots in their homes. (From the laundry and grocery delivery scenes down to the tortilla soup and taquitos ..lol)

Usage ran as high as 100+ times per month among the pilot users (that is, how many times they sent the robot to different places). We didn't have retrieval enabled in those pilots, so the use cases centered on moving things around and keeping items close by. (What we are now calling the Caddie model, but with the lifting feature added.)

And finally.. the reason it's emotional is that we believe it has incredible practical value for their situations. (Hence the high repeat usage rate.)

We really see this as a massively underserved part of our society, and we believe more products like this should be speaking to their needs.

I hope that helps.. happy to answer other questions.

Thanks!

Mike

(the LEGO MindStorms, Braava and now Labrador guy ;)

Scott Monaghan

unread,
Jan 7, 2022, 12:34:44 PM1/7/22
to hbrob...@googlegroups.com
Very cool Mike! Thanks so much for sharing.

Some high-level questions: 
  • Without giving away any secret sauce, how does the bot handle SLAM in tight, often changing spaces?
    • What types of sensors are used and how? 
    • What known SLAM techniques are you using? Are you rolling your own magic for this?
  • How do the bus stops work? Do they have any kind of physical tags that the robot uses to find them, or are their locations fully determined through SLAM?

Mike Dooley

unread,
Jan 7, 2022, 6:20:36 PM1/7/22
to hbrob...@googlegroups.com
Hi Scott,

It's great to be part of the discussion.

For obstacle avoidance, we currently use Intel RealSense D435s to track obstacles and help plan paths around them.
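If anyone wants to experiment with the same sensor, the basic depth read with Intel's standard pyrealsense2 bindings looks something like the sketch below. (This is ordinary SDK usage, not our actual pipeline.)

# Minimal depth read from a RealSense D435 using the official
# pyrealsense2 bindings (pip install pyrealsense2).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 @ 30 fps is a common depth mode on the D435
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters to whatever is at the center pixel
    print("Center distance: %.2f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()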

For SLAM, we're running our own special blend of 3D visual positioning algorithms (similar to what's used in augmented reality), fused with other software and hardware elements to significantly boost performance and reliability, especially in the home and other challenging environments. (We were awarded a grant from the National Science Foundation for this work.)

The bus stops are essentially waypoints, with elastic routes connecting them together. Each bus stop holds the location, orientation and height of the robot. So, for example, the Retriever can raise to countertop height in the kitchen and lower to side table height next to an armchair. (We do that in a way that tolerates significant changes in the environment while still maintaining decent localization.)
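As a rough mental model (a simplified sketch only, not our actual code or data structures), you can think of a bus stop as a small record plus edges connecting the stops:

# Illustrative sketch of a "bus stop" waypoint. All names here are
# hypothetical; this is not Labrador's real code.
from dataclasses import dataclass

@dataclass
class BusStop:
    name: str           # e.g. "kitchen counter"
    x: float            # position in the map frame, meters
    y: float
    heading: float      # robot orientation at the stop, radians
    tray_height: float  # countertop vs. side-table height, meters

# Elastic routes: preferred paths between stops that the planner can
# flex around obstacles while keeping the same endpoints.
routes = {
    ("kitchen counter", "armchair"): ["hallway"],  # via-points
}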

For the product, we do the initial training of the home, but plan on having an app that will let people edit and add bus stops on their own. The product page on our website has a helpful image.


I hope that helps.

Best,

Mike



--
Mike Dooley
Co-founder & CEO
Calabasas, CA

Scott Monaghan

unread,
Jan 7, 2022, 10:30:34 PM1/7/22
to hbrob...@googlegroups.com
All very cool!

Thanks for the detailed answer.

A lot of us weekend warriors were hit hard by the news that Intel is discontinuing their RealSense line. Sounds like you've had to figure that out too. Have you been able to settle on a replacement depth camera module for Labrador?


Gmail

unread,
Jan 8, 2022, 3:24:05 AM1/8/22
to hbrob...@googlegroups.com
Ha! Yeah. Saw that. 




Thomas

-
Want to learn more about ROBOTS?









On Jan 7, 2022, at 7:30 PM, Scott Monaghan <scott.m...@gmail.com> wrote:



Chris Albertson

unread,
Jan 8, 2022, 12:54:45 PM1/8/22
to hbrob...@googlegroups.com
I read Intel's announcement, and I think the RealSense D415, D435 and D455 cameras will be available for a long time to come, but they have shut down development of new products. The "buy" button still works on the Intel RealSense website.

That said, for a new product I would not go with Intel. The OAK-D is a family of cameras that looks like it may be even better at what we want to do.

And there is always the very low-cost option of using just one $20 webcam to get depth: take a photo, move a tiny bit to the left, take another photo, and send the pair to OpenCV to build a depth map. If you are not in a hurry, this works.
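OpenCV's block-matching stereo does the heavy lifting here. A minimal sketch, where the two filenames are just placeholders for your shifted pair:

# Depth from two sideways-shifted photos using OpenCV block matching.
# "left.png"/"right.png" are placeholder filenames; the small shift
# between shots acts as the stereo baseline.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = closer

# Normalize for viewing; real metric depth also needs the camera's
# focal length and the baseline distance you moved between shots.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
cv2.imwrite("depth_map.png", vis)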



--

Chris Albertson
Redondo Beach, California

Scott Monaghan

unread,
Jan 8, 2022, 1:01:02 PM1/8/22
to hbrob...@googlegroups.com
Chris,

Similar to your monocular idea, I was thinking you could compare frames as the bot moves forward.

Since the closest things show the most change between frames, a simple subtraction of consecutive frames would make the nearest objects stand out the most and the furthest the least.

If you wanted to get fancier, you could apply an edge detection filter and use feature detection with a deep learning model.
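A minimal sketch of that subtraction idea, assuming a standard webcam at index 0 (this gives a raw change cue, not a calibrated depth estimate):

# Crude "closer things change more" cue from consecutive frames while
# the robot drives forward. Illustrative only; camera index 0 is a
# placeholder.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Per-pixel change between frames; nearby objects (larger apparent
    # motion) produce bigger differences than distant ones.
    motion = cv2.absdiff(gray, prev)
    prev = gray
    cv2.imshow("proximity cue", motion)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()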


Mike Dooley

unread,
Jan 8, 2022, 4:14:16 PM1/8/22
to hbrob...@googlegroups.com
Hi Chris and Scott, 

Without getting into much detail, those are the "areas" we are interested in.  

We just opened more positions for expanding our engineering team.  If anyone on the board knows someone with a strong background in computer vision and SLAM, please direct them to our website at https://labradorsystems.com/

We also have a position for DevOps where we're looking for someone with a deep background in managing releases with Linux and ROS.  

Thanks!

Mike

Chris Albertson

unread,
Jan 8, 2022, 4:49:47 PM1/8/22
to hbrob...@googlegroups.com
You were asking about frame subtraction... don't reinvent this. It is well studied, and the open source software is decades ahead of what one person could invent. They call it "structure from motion": the idea is that you can drive around an object, see how its 2D look changes as you move, and then use this to compute the 3D shape of the object. It is a generalization of stereo vision to n frames taken from n different locations; stereo becomes the special case where n = 2.

It works two ways: we can learn the 3D structure of the object, or we can learn the path taken by the camera. The latter is the important part for the robot problem.


There is strong agreement that humans and other animals do this. We move through the environment, and as we move we notice cues like parallax, and from this we are able to perceive depth at greater distances than we could if we depended only on interocular distance. There is at least one bird I know of that moves its head quickly from side to side before jumping on its prey; they think it does this to get better depth information. So structure from motion is used in nature and in computer vision too.

The even larger field is called "photogrammetry", and you can buy photogrammetric software from companies like Adobe and Autodesk. With the Adobe software you can take out an iPhone, shoot 20 photos of a building or a car, and the software stitches them into a 3D object. There is open source software for this also.

The bottom line is you don't have to invent this, just sort through the existing software.
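To give a flavor of how little you have to write yourself, here is a minimal two-view sketch built from stock OpenCV calls. The intrinsics matrix K and the filenames are placeholders; a real system would calibrate the camera and track many frames:

# Two-view structure from motion with stock OpenCV: match features,
# estimate the essential matrix, recover camera motion, triangulate.
import cv2
import numpy as np

# Placeholder intrinsics; use your calibrated camera matrix instead.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
p1 = np.float32([k1[m.queryIdx].pt for m in matches])
p2 = np.float32([k2[m.trainIdx].pt for m in matches])

# The essential matrix encodes the camera motion between the views.
E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

# Triangulate the matches into 3D points (reconstruction is up to an
# unknown overall scale with a single camera).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
pts3 = (pts4[:3] / pts4[3]).T
print("Recovered %d 3D points" % len(pts3))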



Alan Federman

unread,
Jan 8, 2022, 10:27:26 PM1/8/22
to hbrob...@googlegroups.com, Chris Albertson
I have had luck using a ROS package called Spencer people detect. It uses a depth camera like a RealSense (I have an Asus Xtion), but it can also incorporate or substitute lidar and stereo cameras. At Stanford they were using it in conjunction with YOLO to navigate in crowded environments. I only tested it on a laptop, but it should work on a Pi 4 or an Nvidia micro.
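Reading its output is just a normal ROS subscriber. Something like this, with the topic and message names as I recall them from the Spencer docs, so double-check against your installed version:

#!/usr/bin/env python
# Minimal ROS 1 subscriber for the Spencer people tracker's output.
# Topic and message names are from memory; verify them against your
# spencer_people_tracking install.
import rospy
from spencer_tracking_msgs.msg import TrackedPersons

def on_people(msg):
    for person in msg.tracks:
        p = person.pose.pose.position
        rospy.loginfo("person %d at (%.2f, %.2f)",
                      person.track_id, p.x, p.y)

rospy.init_node("people_listener")
rospy.Subscriber("/spencer/perception/tracked_persons",
                 TrackedPersons, on_people)
rospy.spin()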

Should be useful in making service robots work.

Steve " 'dillo" Okay

unread,
Jan 9, 2022, 1:37:26 PM1/9/22
to HomeBrew Robotics Club
Yes, this is seriously well-trod ground, with one or more PhDs coming out of this technique.
A bunch of us who were part of the SVROS contest team used RTAB-Map back in 2014. It uses this approach for localization and navigation, and it works great in environments with enough visual difference to find features to "latch onto". I do remember our robot in the Kinect Challenge getting seriously lost in a big ballroom with a heavily repeating carpet pattern (which we fixed by adjusting the FOV to look above the carpet), but navigating with amazing accuracy everywhere else in the hotel we were in.
I've seen some people on the OAK/Luxonis Discord using it with their OAK-D cameras.
I think Cartographer might use this approach as well.
It might be worth another look to see what's new with it in the past few years.

'dillo