Fwd: The Role of Open Source in Robotics - Chris Paxton


Stephen Williams

Oct 25, 2025, 2:33:53 PM
to hbrob...@googlegroups.com

So many good things in Chris Paxton's post.

Interesting take on ROS2 alternatives.  I suspect if/when I get into ROS2 more, I'll have the urge to fix structural things.

The AI/ML motion control solutions, SLAM, and RL are the most important things to solve.  Low-cost & lightweight motor + controller + reducer combinations would be nice.  The rest we can solve in a variety of ways.

I've been considering similar magnetic tactile sensing, am aware of some of the audible + camera approaches, and have been exploring a novel optical approach.  It isn't too hard to sense a single touch point using any of these methods.  For a lot of money & headache, you can sense a moderate number of touch points.  That is far from the goal of sensing moderate to many touch points using minimal sensors, mechanics, wires, microcontrollers, and available microcontroller pins.  I have a few approaches that might work that I plan to try "soon".  Goes well with my new humanoid hand design.

If anyone is really good at mechanical design, this is my current puzzle:

Design a 3D printed model, printable as a single component at small scale, that has two flat pieces close together, where a small movement from pressure causes a large horizontal movement.  But as it is 3D printed, the surfaces might not be smooth enough for a friction ramp or similar.  Flexures perhaps?  A diaphragm?




The Role of Open Source in Robotics

Open source software has always played a crucial role in robotics, and that role has continued to evolve with the robotics field

 




XLeRobot is part of a new wave of open-source robot hardware powered by cheaper components, 3D printing, and internet-scale collaboration. Source: Github

Getting started in robotics, as anyone will tell you, is very hard.

Part of the problem is that robotics is multidisciplinary: there's math, coding, hardware, and algorithms; machine learning versus good-old-fashioned software engineering; and so on. It's hard for one person to manage all of that without a team, and without years to build upon. And at the same time, there's no "PyTorch for robotics," and no HuggingFace Transformers either. You can't really just dive in by opening up a terminal and running a couple of commands, something that the broader machine learning community has made beautifully simple.
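For contrast, here is roughly what "a couple of commands" looks like on the machine learning side (a minimal sketch using the Hugging Face Transformers pipeline API; "gpt2" is just an illustrative small model, not anything robotics-specific):

```python
# A minimal sketch of the "couple of commands" experience in mainstream ML.
# Assumes `pip install transformers torch`; downloads the model on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Open source robotics is", max_new_tokens=20)[0]["generated_text"])
```

There is no equivalent two-liner that hands you a working policy on real robot hardware.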

Sure, you could download a simulator like ManiSkill or NVIDIA Isaac Lab, run a couple reinforcement learning demos, and end up with a robot policy in simulation, but you still need a real robot to run it on, like the SO-100 arm.

And yet open robotics, in a way, is at a turning point.

ROS1 — the venerable, old Robot Operating System which raised a generation of roboticists, myself included — is gone. Dead, due to be replaced by ROS2, which has had a mixed reception, to say the least. It’s got a number of issues, and in particular is fairly cumbersome for development.

And, in parallel, the accessibility of high-quality and affordable robot actuators (largely manufactured in China) has collided with the proliferation of 3D printing, to cause a Cambrian explosion of open-source hardware projects. These projects are making robotics accessible at the lower end, letting people outside of well-funded robotics labs and universities experiment with end-to-end robot learning capabilities.

This has all led to a new wave of open-source robotics projects: new robotic hardware and software that fills in the gaps in current tools, works very differently from previous generations, and makes robotics far more accessible.

A Recent History of Open Software



LeRobot is probably the king of the new wave of open-source, learning-first robotics software — though it still has serious gaps compared to the good old Robot Operating System. Source: SO-100 Arm on GitHub

The Robot Operating System, developed largely by the legendary Willow Garage robotics incubator before its dissolution, used to be a center of mass for open robotics. That role has now largely been taken over by a much more Python-centric and diffuse ecosystem, prominently featuring model releases mediated by HuggingFace and its fantastic array of Python packages and open-source robotics code.

ROS, at its heart, is and was middleware: a communications layer that made it easy to coordinate different processes and disparate robotics systems. This was crucial in the era when most robotics development was fragmentary and model-based: if you needed to move your robot around, you needed a navigation stack like Nav2, which grew out of the navigation stack that was formerly a part of ROS.

But as software development has gotten easier, driven in part by the fast-moving python ecosystem and ML culture, as well as by the explosion of open-source and readily usable packages on Github, we’ve seen ROS fade from prominence.

There’s no real replacement, nor should there be. Cleaner software interfaces which just accept Numpy arrays, ZMQ for messaging (as we used in StretchAI), and so on make it very easy to accomplish the goals of the old ROS without the inflexibility.

And this all means that we can now see a vibrant, decentralized open-source ecosystem, largely building off of LeRobot. For example, you can check out ACT-based reward function learning by Ville Kuosmanen, installable via PyPI:

In the end, I think this is a much better model. While pip has its weaknesses, the decentralization and simplicity of the system mean that you can, for example, replace it with something like Astral's uv when it comes along.

Visualization tools from companies like Foxglove and Rerun have also expanded into the niches that ROS's aging RViz is vacating. Rerun in particular is fully open source, and extremely AI-friendly, with powerful, flexible, and easy-to-use APIs that make visualizing learning data easy — presumably why it's also featured in LeRobot.

My own contribution to all of this is Stretch AI, a software package I released last year which makes it possible to do long-horizon mobile manipulation in the home. Part of this is support, from Peiqi Liu, for DynaMem, which allows a robot to move around in a scene and dynamically build a 3D map that can be used for open-vocabulary queries. This is built, in part, on top of many of these tools: ROS2-based robot control software, Python-based custom network code, Rerun visualizations, and a variety of open models and LLMs.

Open Hardware



K-Bot, from K-Scale Labs, an open source humanoid robot. Source: K-Scale Labs

One increasingly fascinating trend has been towards open hardware.

HuggingFace has recently been a great champion of this, building HopeJr, their open-source humanoid robot. And they’re hardly the only one, with K-Scale’s open-source humanoid soon to follow.

But open source hardware is particularly useful where there isn’t a clear scientific consensus on what the correct solution is. This is why, for example, I covered a lot of open-source tactile sensors in my post on giving robots a sense of touch:

We’re seeing the same thing happen with hands. Robot hands are an area that has been sorely in need of improvement; current hands are broadly not very dexterous. New hands, like the Wuji or Sharpa hands, are extremely impressive but are still very expensive and not too broadly available.

This has led to a ton of iteration in the open source space, like the LEAP hand:

We also see the RUKA hand from NYU, which again is cheap, humanlike, and relatively easy to build. Projects like the Yale OpenHand program have been trying to close this gap for a long time.

And we can see a similar thing with robots. There are the SO-100 arms, LeKiwi, and XLeRobot. Other notable projects include OpenArm:

This is a fully open-source robot arm, with a BOM (bill of materials) cost of about $6.5k. Find the OpenArm project website here, or a thread by Jack with more information. And, of course, open-source champions at HuggingFace have been working on a variety of open humanoids.

All in all, robotics is still at a very early point - so it’s great to see people iterating and building in public. These projects can provide a foundation and lots of valuable knowledge for further experimentation, research, and commercialization of robotics down the line.

What’s Still Missing?

There are lots of cool open source projects for hardware and foundation models, but there are still relatively few large open-source data collection efforts. Personally, I hope that organizations like HuggingFace, BitRobot, or AI2 can help with this.

And in addition, I think we still really need more good open-source SLAM tools. SLAM, if you don't know, is Simultaneous Localization and Mapping — it's the process of taking in sensor measurements, building a map of the environment, and estimating the robot's 6DOF pose within it.
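As a toy illustration of the localization half (a sketch only, not tied to any particular SLAM package), dead reckoning composes relative motion estimates into a pose; a real SLAM system adds mapping and loop closure on top to correct the drift this accumulates:

```python
# Toy 2D dead reckoning: compose odometry increments into a pose estimate.
# Real SLAM corrects the accumulated drift by re-observing landmarks / scan matching.
import math

def compose(pose, delta):
    """Compose pose (x, y, theta) with a relative motion (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
for step in [(1.0, 0.0, math.pi / 2)] * 4:   # drive 1 m, turn 90 degrees, four times
    pose = compose(pose, step)

print(pose)  # ideally back at the start; with real, noisy odometry it won't be
```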

Everyone is using iPhones (DexUMI) or Aria glasses (EgoMimic, EgoZero), or just a Quest 3, to do this right now — see the Robot Utility Models work, or DexUMI, which we did a RoboPapers podcast on. A lot of the tools exist — like GT-SLAM — but it's still too hard to just take one and deploy it on a new robot.

The Future of Open Source in Robotics

We need open robotics. Even those of us working in private companies — like myself — will always benefit from having a healthy, strong ecosystem of tools available. All of us move much faster when we work together. And, more importantly, it helps the small players keep up. Not everyone can be Google, a single monolithic company.

As open-source roboticist Ville Kuosmanen wrote:

An open source Physical AI ecosystem offers an alternative to commercial models, and allows thousands of robotics startups around the world to compete on equal footing with Goliaths hundreds of times their size.

Open source is also powering what I think is my favorite trend in robotics lately: you can actually run your own code on others' robots! Physical Intelligence has released its new flagship AI model, pi0.5, on HuggingFace, which has led to fast open-source reproductions:

This video is from Ilia Larchenko on X, who we recently interviewed on RoboPapers (give the podcast a listen!) and who is a fixture of the fast-moving open-robotics community. And you can even deploy open vision-language-action models like SmolVLA on open-source robots like XLeRobot and get some cool results. Even German Chancellor Friedrich Merz is getting in on the open-source robot action!

I’m happy to see how lively and dynamic the modern open-source robotics ecosystem has become, helping make robotics more accessible for students, researchers, startups and hobbyists than ever before.




It Can Think is free and I have no plans to change that, but please comment, share, follow along. The reason I do this is because I enjoy discussing robotics!


You can also follow me on Twitter/X or Bluesky for updates, or click the subscribe button:

© 2025 Chris Paxton



Chris Albertson

Oct 25, 2025, 3:20:42 PM
to hbrob...@googlegroups.com


On Oct 25, 2025, at 11:33 AM, 'Stephen Williams' via HomeBrew Robotics Club <hbrob...@googlegroups.com> wrote:


About open source.   Of course you need it.  None of us would live long enough to reinvent 10,000 wheels. 


Design a 3D printed model, printable as a single component at small scale, that has two flat pieces close together, where a small movement from pressure causes a large horizontal movement.  But as it is 3D printed, the surfaces might not be smooth enough for a friction ramp or similar.  Flexures perhaps?  A diaphragm?


The solution is to not treat 3D printed parts as "finished".  If the ramp is not smooth enough, you hand-lap it with #300 and then #600 wet-and-dry sandpaper, then spray it with a hard filler-primer and sand again with #600 or finer paper.  You can make a mirror finish if you like.  You would need a very hard kind of plastic; metal might be best.  You would need to reduce friction by using ball bearings.

In general, I think of 3D prints as low-precision parts, but you can machine them.  I have a small lathe and can sometimes turn a printed part to clean up a hole and then press-fit a ball bearing unit or a bronze sleeve bearing.  Treat 3D prints as you would a rough sand-cast metal part.

But notice the trap?  It is a nearly universal mistake among beginning engineers to jump to solutions without talking about requirements first.  What problem is being solved here?  What forces are required, and is there a cost, mass, and volume budget?  We see the same with software: beginners will just start writing code before they think about requirements and overall design.  This works for simple projects but fails for larger scale projects.  Don't underestimate the complexity of a humanoid robot.  No one has a humanoid robot yet that is better than a self-balancing motorized mannequin.  The current state of the art is not there yet, and even companies with 8- and 9-figure budgets have not solved it.

Stephen Williams

Oct 25, 2025, 6:03:18 PM
to hbrob...@googlegroups.com, Chris Albertson


On 10/25/25 12:20 PM, Chris Albertson wrote:


On Oct 25, 2025, at 11:33 AM, 'Stephen Williams' via HomeBrew Robotics Club <hbrob...@googlegroups.com> wrote:


About open source.   Of course you need it.  None of us would live long enough to reinvent 10,000 wheels. 


Absolutely!  The tricky balance is that people need to make money, and sometimes you just have to build up resources & momentum to get to higher levels.

Some things should be open source because only with many people working on them will they be vibrant, viable, and able to move forward, without requiring infinite funding for something that can't / shouldn't be captured to pay it back.  Things that have to be commodities should also be open source: we paid far too much for commercial operating systems, office applications, image & design editors, etc., things that all could have and should have been open source.

Also, unless you are truly innovating, you're just making incremental iterations on top of existing ideas.  That has value, but it is much different than having a big insight after a lot of research that warrants an enforced revenue stream.  The culture of giving credit, suggested donations, and providing related services over trying strong legal capture of something is probably better in many cases.  Especially in an important fast evolving area.

This is a good example of trying to think through & choose among those approaches:

Lately, I've been thinking about a good, universal, widely usable robotic limb docking system.  I've looked at many bolt-on and stud-based twist & lock designs, and saw a couple of references to using the Micro Four Thirds lens mount.  Now that I can precisely cut sheet / plate metal, I expanded my design search space.  I am now thinking of a design that combines a few flat metal parts with a 3D printable docking mechanism that would allow a quick, solid, precise, and strong insert-twist-lock connection that could be scaled up or down.  The joint can then be released by twisting a surface ring, untwisting, and pulling the joint free.

This should be able to transmit power, signals, even optical connections that also connect with the twist & lock.  For tendon driven systems, hydraulic + tendon, or similar, there are ways for the interior of the coupling to have mechanical couplings.  I'm considering some options there now that I have a solid, simple design for the main coupling.

So I have a solid coupling design that I'm sure will work well; testing soon.  It will probably have at least one option for mechanical coupling.  Should I just publish & open source the whole thing?  There is a lot of merit to that.  But here we are in a situation where there is no standard mechanism, just some temporary, expensive, proprietary approaches, and certainly no inexpensive, flexible, hackable mechanism.  Is it important to consider whether the responsible thing would be to reduce confusion, incompatibility, and the net cost of robotics by working to create standards?  And getting to the point of producing this, assuming it works and a much better solution doesn't pop up immediately, is very expensive.  Getting my lab to the point of being able to iterate & produce something like this has been very expensive.

Sometimes a large company, or rather people working at a large company, manages to open source something because it just wouldn't be a viable product, isn't their line of business, etc.  It can be beneficial in a variety of ways (public sentiment, recruiting, mutual savings in development costs), while sidestepping the complexity of legally bound employees getting permission & handling the logistics of creating a spin-off company, side business, etc.  But others are not in that situation.

I can provide the parametric design and cut metal parts, and perhaps even be a distributor / retailer for the best pogo or blade type connectors.  Copyright applies, but this kind of solution is probably not patentable, or only weakly so, and probably not worth patenting.

But an important service here could be developing standard sizes & configurations that cover common use cases as they evolve, in addition to an open experimental space.  Part of this could act as a key indicating what standard / module # a limb attachment adheres to, so that only matching units are attached.  We could also laser etch the model, standard, and serial # on each piece and maintain a database + documentation.  Then charge something nominal to manage all of that, maybe through a non-profit.  (I have been considering launching https://BlueScholar.org for several years.  I now have a broad range of things I might put under / through it, things like this.  But maybe keep it simpler.)

Which path is more desirable, better overall?  Assuming my design works well, wouldn't standardizing & managing interoperability become inevitable for someone at some point, to avoid total confusion and enable interoperability between robots & components?

I suspect most commercial solutions that solve all of this cost a lot, probably $50-500 per joint.  My approach will likely be <$10 in mechanical parts (maybe $1 at scale) plus inexpensive connectors, perhaps $4-10.

These are about $1 for 12v 1a: https://www.aliexpress.us/item/3256803564385751.html

These are $5 for 9 pins: https://www.aliexpress.us/item/3256805098127227.html

Anyway, interesting to think through.  What does everyone think?  I could go either way on this.




Design a 3D printed model, printable as a single component at small scale, that has two flat pieces close together, where a small movement from pressure causes a large horizontal movement.  But as it is 3D printed, the surfaces might not be smooth enough for a friction ramp or similar.  Flexures perhaps?  A diaphragm?


The solution is to not treat 3D printed parts as "finished".  If the ramp is not smooth enough, you hand-lap it with #300 and then #600 wet-and-dry sandpaper, then spray it with a hard filler-primer and sand again with #600 or finer paper.  You can make a mirror finish if you like.  You would need a very hard kind of plastic; metal might be best.  You would need to reduce friction by using ball bearings.

In general, I think of 3D prints as low-precision parts, but you can machine them.  I have a small lathe and can sometimes turn a printed part to clean up a hole and then press-fit a ball bearing unit or a bronze sleeve bearing.  Treat 3D prints as you would a rough sand-cast metal part.

This needs to work with zero extra labor, and as close to zero labor as possible.  That is a key design constraint.  See below.



But notice the trap?  It is a nearly universal mistake among beginning engineers to jump to solutions without talking about requirements first.  What problem is being solved here?  What forces are required, and is there a cost, mass, and volume budget?  We see the same with software: beginners will just start writing code before they think about requirements and overall design.  This works for simple projects but fails for larger scale projects.  Don't underestimate the complexity of a humanoid robot.  No one has a humanoid robot yet that is better than a self-balancing motorized mannequin.  The current state of the art is not there yet, and even companies with 8- and 9-figure budgets have not solved it.


I didn't want to bore people with all of my requirements.  When you are exploring a particular approach, the constraints you are operating within are often obvious.  Also, this group has generally been considering many of these constraints already.

After a lot of experience architecting systems and overall designs, considering which requirements are optimal & possible, recalling all the existing designs you can fit into your mental design space, and filtering for what is feasible given budget, tools, and time, one can somewhat implicitly work through all of those requirements & constraints, pick a solution space, and focus on solving the problem.  For hard problems, this becomes an iterative process over long periods of time, so you keep refreshing & exploring with that context.  Possibly, after considering everything that others have tried, and by using honed problem-solving skills and other generally applicable metaskills, you come up with new solutions, new combinations, or something radically outside the box.

I have over 40 years of software development experience, plus a whole range of things around that and various distantly related and unrelated things I was curious about.  I have always been mechanically minded, and I have always visualized systems & mechanisms.  I've been a mechanical designer for a lot less time, although a number of years at this point.  I have found that after learning a lot about existing mechanical approaches, expanding my ability to visualize & mentally simulate various mechanisms, building some of them while also observing many that others have built, and finally learning CAD well enough, I can use my range of metaskills to more or less repeatedly iterate through design solutions.  Thinking about how that is working for me is something I should write about later.


My design constraints for a humanoid robotic hand:

Very capable: strong, fast, precise enough, resilient, lasts long enough (but can wear if cheap enough to replace as needed), lightweight.

Strong with a good strength to weight ratio:

    Ideal: Hand + entire arm should weigh 5 lbs / 2.25 kg, with the ability to lift 22 lbs / 10 kg.

    First pass is probably more like 10 lbs / 4.5 kg lifting 11 lbs / 5 kg or less, depending on motors + reducers.

Inexpensive, quiet, can be aesthetically pleasing.

Should be possible to scale up & down significantly with the same approach.  Is a moderately sized doll / toddler hand possible?  Giant or monster size?

Implications: complex, many-part, and/or labor intensive solutions are not inexpensive.  Needs to be able to be built at scale with minimal finish work.

Strategy: Solve for humanoid hands & limbs first: fewer anthropomorphic gaps, and assume some optimality of the human form.  The hard constraints of solving that complexity in a small package sharpen the design space and probably eliminate some local optima.  Can simplify later for situations that do not need that, such as with a switchable 3 finger / 2 finger + thumb design.


These are my current design goals within that:

Minimize the number of actual parts, assembly, processing, expense, and time with a design that combines many mechanisms into a single FDM print + tendons + some form of sensor 'wiring'.  It might contain a couple of small motors, if directly mimicking human thumb muscles, but that isn't necessary.  A key aspect of designing for FDM while trying to minimize overhead is designing for the imprecision of the resulting print.  Things should mostly work even with that if the design is good.  A really good example is the amazingly precise set of mechanisms built with flexures, such as: https://openflexure.org/

There are a number of elements in this design to solve all of the degrees of freedom, spring return, movement & limit constraints, etc.  Trying to leave room for tactile & position sensing, maybe torque although that can be done on the tendons and/or drive system.  It would be many parts + fasteners in a traditional build of the basic mechanics.  Trying to make use of various ideas to mostly collapse those into this single print.

With the right material & design details, I feel my current design can meet those requirements.  Time to see what I've forgotten or have wrong.  Now have to work through all the details, finish CAD, print, test, iterate.


For tactile sensing, I keep switching attention to completely different approaches.  Right now, I'm entertaining a largely mechanical, very inexpensive approach that could provide all of the basic sensing elements of a humanoid hand.

Related to that, I solved the problem I posed earlier: A bifold flexure spring oriented in the desired direction converts small pressure movements between two plates to larger lateral movement.  Now thinking of simplest ways to amplify that.
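As a rough sanity check of that idea (an idealized rigid-link model of one flexure leg, not the actual design), the expected amplification can be estimated from the geometry:

```python
# Idealized model: a rigid link of length L, tilted slightly from vertical, pinned
# between the two plates by flexures. Pressing the plates together by dv swings the
# link outward by dh; for small tilt angles, dh/dv is roughly 1/tan(tilt).
import math

def lateral_amplification(link_length_mm, tilt_from_vertical_deg, plate_travel_mm):
    a0 = math.radians(tilt_from_vertical_deg)
    v0 = link_length_mm * math.cos(a0)        # initial vertical span of the link
    v1 = v0 - plate_travel_mm                 # plates pressed together by plate_travel
    a1 = math.acos(v1 / link_length_mm)       # new tilt angle
    dh = link_length_mm * (math.sin(a1) - math.sin(a0))
    return dh / plate_travel_mm               # lateral movement per unit of press

# e.g. a 10 mm link tilted 5 degrees, pressed 0.2 mm: roughly 6-7x lateral motion
print(lateral_amplification(10.0, 5.0, 0.2))
```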

Can't wait to share the finished, working result.  Right now, it is still half-baked, not for polite company yet.


scienteer

--

Stephen D. Williams
Founder: VolksDroid, Blue Scholar Foundation

Chris Albertson

Oct 25, 2025, 9:21:26 PM
to Stephen Williams, hbrob...@googlegroups.com
It is going to be really hard to beat the Micro Four Thirds (M4/3) system.  The K-Bot uses this system for interchangeable hands.  Look at the retail price for a set of two extension tubes: that is a total of four mount surfaces, and it comes with plated, spring-loaded contacts for electrical signals.  You could salvage enough parts from one set to use on one robot.  I doubt you could make these for the price you can buy them from B&H.  Apparently these are hard to make, as it is easy to get them either too tight or too loose.

Then in that K-Bot teardown video, they had an arm removed in about 15 seconds using just a power screwdriver.  I doubt any quick-change system can be lighter and more compact than simply using a half dozen M4 screws.

Stephen Williams

Oct 26, 2025, 12:23:18 PM
to hbrob...@googlegroups.com, Chris Albertson

Nice, a source for those, and not an unreasonable price...  Duplicating that machining would be difficult & expensive.  The contacts are nice, although they clearly have limited power capability.  But they slide past each other, so careful measures would be needed to avoid blowing sensing pins with power.  And they are just one size, one strength.  They are meant to be sturdy enough to hold up a heavy lens of a few pounds, but nothing like what you would want a robotic end effector to be able to manipulate.

Just using 4-6 M4 screws + disconnecting a cable for undocking a joint is not terrible.  That's the alternative to a twist-lock joint coupling.  Both have advantages.

So it is a fine choice for a certain range and for experimenting, but it doesn't seem like a long-term solution.  You aren't going to attach the legs of a walking humanoid robot with those.  To use them, you'd need something strong to screw those tiny screws into.  It isn't clear you could reuse the contact points while creating a mating surface with a limb / socket.  Maybe bigger screws all the way through those units?

The essence of it is a metal plate that slides through another metal plate, locking in place when twisted, with contacts that connect at the stopping point and a latch of some kind to release.  My design is scalable; the docking plastic would be a standardized model you add to the end of your parts for the plug / socket shapes.  The contacts should be at an angle so that pins never cross-wire and you don't blow things up even if high amperage is flowing.  Optical connections should be possible.  And you should have per-standard / part / version keying so that only matching parts will dock.

Adding physical connectivity, tendon / rotation points, makes it more interesting.  Those are difficult to deal with across connection points.  To solve that, you first need a docking mechanism that has an open center and good positioning.


Scienteer


Alan Timm

Oct 26, 2025, 1:00:53 PM
to HomeBrew Robotics Club
Man, I just took the time to read through the original article and there are so many great things in there.  XLeRobot being a < $1000 version of the Aloha mobile platform alone is kind of amazing.  I'll need to take a closer look at the BOM to see how they did it.  And there's a kit available?  Wow!