That is a great reference. It helps validate whether all of the options have been considered, for the main mechanics anyway. This one only mentions the sensor side.
Stephen D. Williams, Founder: VolksDroid, Blue Scholar Foundation
On 9/30/25 12:24 AM, Dave Everett wrote:
On Tue, 30 Sep 2025 at 17:10, Thomas Messerschmidt <thomas...@gmail.com> wrote:
I personally like the three-finger hand—two fingers and an opposable thumb, with the thumb positioned between the two fingers. That configuration allows gripping almost anything,
Except for a drinking straw, sheet of paper, sock, wire, wet noodle :)
I have strict limits on my parallel gripper. Drink can, jar, mug, plate, gold nuggets.
If you can do 3 fingers well, 4-5 shouldn't be much harder. While the uneven geometry of the human hand seems unnecessary, sometimes it comes in handy. So to speak. In any case, it is an interesting design & engineering problem.
sdw
Dave
Here is an example of how large-scale human hand videos can be used to train a robot hand policy with RL: https://ivl.cs.brown.edu/research/gigahands.html (see the "Application: Motion Retargeting (with Physics)" section at the bottom for the RL-trained policy).
Vision-language-action models are also trained using human hand videos, and these models seem to produce more human-hand-like motions than models trained with gripper videos: https://beingbeyond.github.io/Being-H0/
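As a rough sketch of the retargeting step those projects describe - purely illustrative, not taken from either codebase, and assuming MediaPipe-style 21-keypoint hand poses plus an arbitrary hand-size scale factor - mapping human fingertips to robot fingertip targets might look like:

import numpy as np

FINGERTIP_IDX = [4, 8, 12, 16, 20]  # thumb, index, middle, ring, pinky tips
WRIST_IDX = 0

def retarget_fingertips(human_kpts: np.ndarray, scale: float = 0.8) -> np.ndarray:
    """Map human fingertip positions (21x3 array, meters, camera frame)
    to robot fingertip targets expressed relative to the robot wrist."""
    wrist = human_kpts[WRIST_IDX]
    tips = human_kpts[FINGERTIP_IDX]
    # Express tips relative to the wrist, then scale to the robot's hand size.
    return (tips - wrist) * scale

These targets would then feed an IK solver, or an RL reward term penalizing distance between the robot fingertips and the retargeted points.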
The hardware should not be hard or expensive or bad. But, so far, it mostly still is. Seems like we can bridge that gap shortly. I have some ideas to try now.
The whole point of trying to solve the hardware well is to get to the point where we can concentrate on control without artificial gaps that make it even harder than it fundamentally is. How many existing robotic hands would be competitive with a human typing on a keyboard, playing a piano, or even tying your shoestrings? Even with perfect control software, or with a human controlling remotely.
sdw
Hoping to get this kind of training running on my ML server soon. If anyone has run these, please share your experience.
sdw
RL training can typically be done with a single 4090 (or 5090). ManipTrans (used in GigaHands, https://maniptrans.github.io/) mentions ~2 days of training with a 4090.
VLA training/fine-tuning requires more compute. The Being-H0 paper mentions using 32 x A800-80G GPUs.
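Putting rough numbers on that (the throughput figures below are ballpark public specs, and the comparison ignores memory and interconnect limits), a back-of-envelope sketch:

# Back-of-envelope GPU-hours; TFLOPS figures are ballpark dense BF16 specs,
# not measured numbers.
a800_tflops = 312           # A800 ~ A100-class tensor-core throughput
rtx4090_tflops = 165        # RTX 4090, roughly

# RL (ManipTrans-style): ~2 days on one 4090.
rl_gpu_hours = 2 * 24
print(f"RL policy: ~{rl_gpu_hours} GPU-hours on a 4090")

# VLA (Being-H0-style): 32 x A800; the duration isn't stated above, so
# express it per day of cluster time.
print(f"VLA: ~{32 * 24} A800 GPU-hours per cluster-day")

# Naive FLOPS-only slowdown of the same job on 2 consumer GPUs; faster
# 96GB-class cards would shrink this ratio.
slowdown = (32 * a800_tflops) / (2 * rtx4090_tflops)
print(f"~{slowdown:.0f}x slower on 2 x 4090-class cards (FLOPS only)")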
I don't think there are any copyright issues at all for training an RL model on hand movements & activities from any video that you can legally obtain. So almost everything on the Internet, YouTube, etc. should be fair game. An RL model is not going to duplicate a movie or a book, unless we're talking sign language.
sdw
2 very fast 96GB GPUs + very fast 768GB of RAM + a very fast 64-core/128-thread CPU should compare reasonably well with 32 x A800-80G GPUs - it just might take 12 times as long to compute. Hopefully this kind of training checkpoints well. However, server-class cards with NVLink-shared memory may be much better than more serial processing on a couple of PCIe-linked GPUs. That's the kind of thing I need to find out in detail.
I wonder how feasible it is to do distributed GPU training. The amount of data that needs to be exchanged and how often it needs to be exchanged will control that.
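For scale (the model size, gradient precision, and bandwidth numbers below are all assumptions), the per-step sync traffic in plain data-parallel training can be estimated like this:

# Rough per-step gradient traffic for data-parallel training (the all-reduce
# that PyTorch DDP/NCCL performs each optimizer step).
params = 7e9                 # e.g. a 7B-parameter VLA backbone (assumed)
grad_bytes = params * 2      # BF16 gradients: ~14 GB per step
# A ring all-reduce actually moves ~2x this across the links; this is the
# simplified lower bound.
for name, gbps in [("NVLink (A800-class, ~400 GB/s)", 400),
                   ("PCIe 4.0 x16 (~25 GB/s effective)", 25)]:
    print(f"{name}: ~{grad_bytes / (gbps * 1e9):.2f} s of sync per step")

On those assumptions, NVLink hides the exchange almost entirely while PCIe adds roughly half a second per step, which is the interconnect effect mentioned above.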
Stephen
The hand is an interesting problem because there are so many features and details, but also constraints in most practical situations.
I'd like to get most of the degrees of freedom in the core hardware, but then be able to share motors in simplified / minimized versions. Using spring return for fingers / hand (except perhaps forefinger + thumb) simplifies a lot and isn't too much of a loss as nearly all of human actuation can be mimicked with that.
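As a toy illustration of the spring-return idea (all constants invented for the example, not a real design): one tendon flexes the finger, a torsion spring extends it, and the motor only has to overcome the fingertip load plus the spring.

import math

# Toy static model of a spring-return finger; all constants are
# illustrative assumptions.
r_pulley = 0.008    # tendon moment arm at the joint, m
l_finger = 0.07     # fingertip lever arm from the joint, m
k_spring = 0.02     # torsion spring rate, N*m/rad

def tendon_force(f_tip, flex_rad):
    """Tendon tension (N) needed to hold grip force f_tip (N) at flexion
    angle flex_rad, balancing fingertip load plus spring return torque."""
    return (f_tip * l_finger + k_spring * flex_rad) / r_pulley

# e.g. 5 N at the fingertip with the finger flexed 90 degrees:
print(f"{tendon_force(5.0, math.pi / 2):.0f} N of tendon tension")  # ~48 N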
sdw
If you put tape over your two smallest fingers and use the other three, you can do almost anything you could do with all five fingers. Maybe with practice, you could even be good at it. What makes this possible is that you are still using a human brain to control the three fingers.
Last week, I was looking at E-Nable.org, an open-source organization with various open-source hand models and people willing to do various levels of 3D printing & other support. Definitely interesting overlap.
sdw
For avatar use, there is a human in the loop, so the hard part of the control loop doesn't have to be solved. That is a big use case that people keep expecting to skip completely. I expect it to be a big thing for a while.
One trick for simplifying hand control is to put a camera in the palm or over the knuckles. Then the target is relative to both the hand and the camera at once, eliminating remote sensing and the hand-to-target association problem.
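A sketch of why that helps (frame names and the camera offset are assumptions): with an eye-in-hand camera, one fixed extrinsic calibration puts the observed target directly in the palm frame, with no base-frame hand pose estimate needed.

import numpy as np

# Fixed camera->palm extrinsic: assume the camera sits 2 cm above the palm
# with aligned axes (a one-time calibration in a real setup).
T_palm_cam = np.eye(4)
T_palm_cam[:3, 3] = [0.0, 0.0, 0.02]

def target_in_palm(p_cam):
    """Transform a 3D target point from camera frame to palm frame."""
    return (T_palm_cam @ np.append(p_cam, 1.0))[:3]

# A grasp target seen 10 cm in front of the palm camera:
print(target_in_palm(np.array([0.0, 0.0, 0.10])))
# The servoing error can be computed directly in this hand-local frame.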
Control will be difficult, although RL might rescue us. I'd just be happy to be in that problem space with the hand hardware solved satisfactorily. I have a lot of constraints that I want to solve for that: high-quality*, lightweight, slim, quiet, and cheap or very cheap. Either repairable or so cheap it is just recycled & replaced when it fails.
*many degrees of freedom, precise (in a closed loop), strong, fast.
Stephen
That is a nice design, especially if the goal is maximum lift capacity. It's a good argument for 7 degrees of freedom being enough for most things. Because it is relatively large, they can fit significant gear motors right into the hand; a smaller hand might not work as well. But it simplifies robot design to have the hand be self-contained rather than forearm-driven, as human hands mostly are. Our thumb angle and finger spread use local muscles - a useful observation when designing a hand: an anthropomorphic design would get lifting strength via tendons, with lateral/angle positioning handled by small local motors.
I like the way the thumb/finger converts between being an additional finger and a fully opposable thumb. It seems to be able to align with the middle finger or to oppose the other fingers. That allows configuration for both of those important modes.
It is not going to fit spaces and tools made for human hands very well, except large ones. A more humanoid hand would still be better for those.
Stephen