Re: [HBRobotics] Digest for hbrobotics@googlegroups.com - 3 updates in 2 topics

Russ Hall

Feb 15, 2026, 7:26:18 PM
to hbrob...@googlegroups.com
New little robot

Feb. 15: Using the Raspberry Pi 5 (8 GB memory) for high-level processing is working well so far. The Raspi 4 (4 GB memory) handles the robot movement just fine. Both connect to the wireless network via their own little antennas. (I found that the initial wireless connection must be made at close range. Trying it in another bedroom, upstairs, didn't work at first, but with the Raspi next to the router it is willing to make that first SSL connection!)

Two big 3S LiPo batteries power this robot, since the Raspis alone draw about 5 amps! The Oak camera is at least an amp, and the motors probably only a fraction of an amp, though they can spike the current draw. The motors run on the full voltage of the 3S batteries, usually over 12 V. That supply also powers the BEC (Oak camera and headlights) and the Sabrent powered USB hub. I bought a smaller, 5 V powered hub, but it wouldn't power even one Raspi! It takes the bigger, 12 V powered hub to provide the proper current to these SBCs.

Using Rtabmap with Matt's own example Depthai launch file, the robot made a 3D map of the basement very well. I split the launch file: the Raspi 5 runs 3 nodes and the base-station laptop handles the 3 later nodes, including the rtabmapviz module. I haven't fired up the Velodyne yet with this setup, since that will require a good launch file to integrate the Oak-D and the lidar properly. The Velodyne should help make the 3D map accurate and complete. With this setup there are no glitches or Raspis freezing up (like I have seen in past days). WT (wacky tracky) drives very well; one can go slow and rotate slowly, so the mapping doesn't get thrown off.

The robot's lower-level control is the Ekumen Andino software, controlling two BTS7960 controllers. I made it this way so that it could be transferred right over to the big yard robot with big motors and batteries. Andino uses an Arduino to control the motor controllers. It has good odometry, ros2_control and IMU integration, and doesn't publish a lot of unneeded topics. The IMU in the Oak camera works with Rtabmap, as seen in Matt's launch file; he also uses the Madgwick node to smooth the data.

I tried other antennas and even mounting a wireless router on the robot, but that didn't help performance. The Raspi 4 runs ROS Humble and the 5 runs ROS Jazzy, but this doesn't matter at this point. I have also seen buffer overruns in the terminal windows, but the robot still works all right! Some parameters probably need tweaking yet.

(In initial testing, with just one Raspi 4 with 8 GB of memory, 2D lidar processing, mapping, and autonomous driving worked quite well indoors. I was anxious, though, to get into 3D with the Velodyne and depth cameras. I also have an early ZED, but that requires a Jetson, which makes it more complicated. For outdoor use, the ZED may be the better camera. We will see!)
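For reference, here is a minimal sketch of what the base-station half of such a split launch file can look like. The package and executable names follow the standard rtabmap_ros layout for ROS 2; the Oak-D topic remappings are placeholders, not the exact ones from Matt's file.

# Hypothetical sketch: the laptop half of a split rtabmap setup. The Raspi 5
# half would run the camera driver, Madgwick filter, and odometry nodes.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    remaps = [
        ('rgb/image', '/oak/rgb/image_raw'),          # placeholder Oak-D topics
        ('rgb/camera_info', '/oak/rgb/camera_info'),
        ('depth/image', '/oak/stereo/image_raw'),
    ]
    return LaunchDescription([
        # SLAM back end runs on the laptop; image, IMU, and odometry topics
        # arrive from the robot over Wi-Fi.
        Node(package='rtabmap_slam', executable='rtabmap', output='screen',
             remappings=remaps,
             parameters=[{'subscribe_depth': True, 'approx_sync': True}]),
        # Visualization stays off the robot entirely.
        Node(package='rtabmap_viz', executable='rtabmap_viz', output='screen',
             remappings=remaps),
    ])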


[Attached photos: 20260215_170332.jpg, 20260215_170320.jpg, 20260215_170312.jpg]


On Sun, Feb 15, 2026 at 6:02 AM <hbrob...@googlegroups.com> wrote:
Michael Wimble <mwi...@gmail.com>: Feb 15 12:44AM -0800

Yet another "I did it so you don't have to" repo is available.
 
https://github.com/wimblerobotics/sigyn_ai
 
This will eventually support the Pi with AI Hat, OAK-D, and Jetson Orin
Nano deployments.
 
The way it works now:
 
* Take a bunch of pictures with and without objects you want to recognize.
* Upload the pictures to your RoboFlow account. Don't have one? Make a
free one.
* Use RoboFlow to annotate the images. If you choose objects that it
already knows about, it will likely annotate them for you as you go
along and you just have to agree. Otherwise, define some object
classes and draw some boxes.
* Build the appropriate version to download. For NVIDIA and Pi, you
want your images to be 640x640 (stretched). For the OAK-D, best
performance is images at 416x416. I've only used one of the Yolo
models. Use Yolo5 for the OAK-D (although if you get the latest
depthai and use Docker to run ros2/kilted you might be able to use
later Yolo models). I also add:
  o Preprocessing: auto-adjust contrast using adaptive equalization.
  o Augmentations: rotation between -10 and +10 degrees, bounding
    box blur up to 2.5px, bounding box motion blur (length 100px,
    angle 0 degrees, frames 1).
 
You also want to split your image set into train/validate/test. Use
about 80% of the images for training, 10% each for validation and
testing. It's good to have images that don't have the objects of
interest as well.
 
This gives the recognizer extra ability to recognize not-so-perfect
objects.
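If you want to do the 80/10/10 split locally rather than letting RoboFlow generate it, a minimal sketch looks like this (the folder names are assumptions, not part of the repo):

# Hypothetical 80/10/10 train/valid/test split over a folder of captured images.
import random, shutil
from pathlib import Path

src = Path("raw_images")           # assumed: all captured .jpg files live here
dst = Path("dataset")
images = sorted(src.glob("*.jpg"))
random.seed(0)                     # reproducible shuffle
random.shuffle(images)

n = len(images)
splits = {"train": images[:int(0.8 * n)],
          "valid": images[int(0.8 * n):int(0.9 * n)],
          "test":  images[int(0.9 * n):]}

for name, files in splits.items():
    out = dst / name / "images"
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)   # copy, not move, so originals are kept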
 
# Download from RoboFlow
python src/utils/roboflow_download.py --project FCC4 --version 4 --format yolov8
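For the curious, the download step in a script like that typically boils down to a few lines of the roboflow Python package; the workspace and project names below are placeholders, and the repo's actual roboflow_download.py may differ.

# Hypothetical core of a RoboFlow dataset download (not the repo's exact script).
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                      # placeholder key
project = rf.workspace("your-workspace").project("fcc4")   # placeholder names
dataset = project.version(4).download("yolov8")            # writes a local dataset folder
print(dataset.location)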
 
Create a copy of the config file, update it for your needs.
 
# Train with config file
python src/training/train.py --config configs/training/can_detector_pihat.yaml
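Under the hood, training from a YAML config like that usually reduces to a call into the ultralytics API. This is a hypothetical sketch of what such a wrapper might do; the config keys are assumptions, not the repo's actual train.py.

# Hypothetical YAML-driven training wrapper around ultralytics YOLOv8.
import yaml
from ultralytics import YOLO

with open("configs/training/can_detector_pihat.yaml") as f:
    cfg = yaml.safe_load(f)          # assumed keys: base_model, data, imgsz, epochs

model = YOLO(cfg.get("base_model", "yolov8n.pt"))
model.train(data=cfg["data"],            # path to the dataset's data.yaml
            imgsz=cfg.get("imgsz", 640), # 640 for Pi/NVIDIA, 416 for OAK-D
            epochs=cfg.get("epochs", 100))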
 
# Export for Pi 5 + Hailo-8
python src/export/export.py --model models/checkpoints/can_detector_pihat_v1/weights/best.pt --device pi5_hailo8
 
# Export for OAK-D
python src/export/export.py --model models/checkpoints/can_detector_pihat_v1/weights/best.pt -
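As a rough illustration of the export step: with ultralytics, the common first stage is an ONNX export; the device-specific compilation (a Hailo .hef for the Pi 5 AI Hat, a .blob for the OAK-D) is done afterwards with the vendor toolchains and isn't shown here. This is a sketch, not the repo's export.py.

# Hypothetical first stage of export: PyTorch checkpoint -> ONNX.
from ultralytics import YOLO

model = YOLO("models/checkpoints/can_detector_pihat_v1/weights/best.pt")
onnx_path = model.export(format="onnx", imgsz=416)  # 416 for OAK-D, 640 for Pi/NVIDIA
print("ONNX written to", onnx_path)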
 
# Deploy to specific camera
python src/deployment/deploy.py --model can_detector_pihat_v1 --target sigyn --camera gripper_cam
 
This assumes you need to remote deploy. Skip the last step if not. To
remote deploy, set up ssh on both machines. Use ssh-copy-id so that you
can issue commands without needing passwords. Run the script. You may
need to adjust the deployment script as it probably assumes my directory
structure.
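For those wondering what the remote deploy amounts to, it is essentially a copy over ssh plus a remote command. A hypothetical sketch, with hostnames, paths, and the service name as placeholders rather than the repo's deploy.py:

# Hypothetical remote deploy: copy the exported model to the robot, then
# restart the detector. Requires ssh keys set up with ssh-copy-id.
import subprocess

MODEL = "models/exported/can_detector_pihat_v1.blob"   # placeholder path
ROBOT = "ros@sigyn.local"                              # placeholder host
DEST = "/home/ros/models/"                             # placeholder destination

subprocess.run(["scp", MODEL, f"{ROBOT}:{DEST}"], check=True)
subprocess.run(["ssh", ROBOT, "systemctl --user restart gripper_cam_detector"],
               check=True)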
 
I've only done this with the OAK-D on my Sigyn robot so far. I'll be
testing and fixing the Pi and NVIDIA scripts soon.
 
Each device has a lot of tricky stuff to get it to work. This was the
best effort between Claude and myself to simplify this. I think the
script detects if you have an appropriate GPU and uses it. I have an
older 2060 which I'm about to upgrade to a 3060. With my 2060, training
on about 130 images, I think it took about 2 or 3 minutes to train and
deploy.
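The GPU check he mentions is typically just a torch query; a minimal sketch, not necessarily how the repo does it:

# Use the CUDA GPU if one is present, otherwise fall back to CPU.
import torch

device = 0 if torch.cuda.is_available() else "cpu"
print("Training on", torch.cuda.get_device_name(0) if device == 0 else "CPU")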
 
You probably have questions. Good for you. Glad to see you're paying
attention. I may or may not have answers. We can negotiate for
consultation. I need a better robot arm, or home-made chocolate chip
cookies using the Nestlé recipe WITH NO MODIFICATIONS. I know your
mother had some secret changes to the recipe. Good! Keep it a secret, still.
 
Don't forget to STAR my repos--Christmas is coming and I want Santa to
know how good I've been.
Pito Salas <rps...@brandeis.edu>: Feb 15 07:27AM -0400

Hi Michael,
 
That's a great contribution. The code, packaging, and documentation look nice! Can you share the Claude.md's you use in this or other developments? (I'm assuming you had an AI coding assistant here and there.)
 
Best,
 
Pito Salas
 
Boston Robot Hackers &&
 
Computer Science Faculty, Brandeis University
 
Nathan Lewis <rob...@nrlewis.dev>: Feb 14 06:11PM -0800

Hey all!
 
Applications opened a few days ago for an event called OpenSauce. It’s an event focused on active exhibition that attracts a younger audience and is technology/engineering focused, rather than the open-ended “anything generally creative” of Maker Faire.
 
https://opensauce.com/
 
If we are genuinely interested in attracting some fresh blood to the club, I think the club should attend. We’d need to put together an active demo of some of the club’s robots.
 
It’s also at the San Mateo Event Center, the same place RoboGames and Maker Faire used to be at.
 
Any thoughts or ideas?
 
- Nathan