


Michael Wimble <mwi...@gmail.com>: Feb 15 12:44AM -0800
Yet another "I did it so you don't have to" repo is available.
https://github.com/wimblerobotics/sigyn_ai
This will eventually support the Pi with AI Hat, OAK-D, and Jetson Orin
Nano deployments.
The way it works now:
* Take a bunch of pictures with and without objects you want to recognize.
* Upload the pictures to your RoboFlow account. Don't have one? Make a
free one.
* Use RoboFlow to annotate the images. If you choose objects that it
already knows about, it will likely annotate them for you as you go
along and you just have to agree. Otherwise, define some object
classes and draw some boxes.
* Build the appropriate version to download. For NVIDIA and Pi, you
want your images to be 640x640 (stretched). For the OAK-D, best
performance is at 416x416. I've only used one of the YOLO models:
use YOLOv5 for the OAK-D (although if you get the latest DepthAI
and use Docker to run ROS 2 Kilted, you might be able to use later
YOLO models). I also add:
o Preprocessing: auto-adjust contrast using adaptive equalization.
o Augmentations: rotation between -10 and +10 degrees, bounding box
blur up to 2.5px, bounding box motion blur (length 100px, angle 0
degrees, frames 1).
You also want to split your image set into train/validate/test. Use
about 80% of the images for training and 10% each for validation and
testing. It's good to have images that don't contain the objects of
interest as well; this gives the recognizer extra ability to handle
not-so-perfect objects. RoboFlow can do the split for you when it
generates a version, but if you'd rather do it locally, there's a
sketch below.
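A minimal local-split sketch in Python (the directory layout here is hypothetical; adjust to wherever your images live):

# 80/10/10 split done locally. Paths are hypothetical.
# (YOLO label .txt files need the same split; omitted for brevity.)
import random
import shutil
from pathlib import Path

random.seed(42)  # reproducible shuffle
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.8 * n)],
    "valid": images[int(0.8 * n) : int(0.9 * n)],
    "test": images[int(0.9 * n) :],
}
for name, files in splits.items():
    out = Path("dataset") / name / "images"
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)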
# Download from RoboFlow
python src/utils/roboflow_download.py --project FCC4 --version 4 --format yolov8
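For the curious, the download script presumably wraps something like the Roboflow Python SDK. A rough sketch (the workspace name and API-key handling are my guesses, not the repo's actual code):

# Download a dataset version via the roboflow SDK.
# Workspace name and API-key handling are assumptions.
import os
from roboflow import Roboflow

rf = Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])
project = rf.workspace("your-workspace").project("fcc4")
dataset = project.version(4).download("yolov8")  # writes the dataset to disk
print(dataset.location)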
Create a copy of the config file and update it for your needs.
# Train with config file
python src/training/train.py --config configs/training/can_detector_pihat.yaml
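If you want to see roughly what the train script does, YOLOv8 training with the ultralytics package looks something like this (a sketch under the assumption that train.py wraps Ultralytics; paths and hyperparameters are illustrative only):

# YOLOv8 training sketch; paths and hyperparameters are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a pretrained nano checkpoint
model.train(
    data="dataset/data.yaml",  # the data.yaml RoboFlow ships with the export
    imgsz=640,                 # 640x640 for NVIDIA/Pi; 416 for the OAK-D
    epochs=100,
    device=0,                  # first CUDA GPU; use device="cpu" if you have none
)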
# Export for Pi 5 + Hailo-8
python src/export/export.py --model models/checkpoints/can_detector_pihat_v1/weights/best.pt --device pi5_hailo8
# Export for OAK-D
python src/export/export.py --model models/checkpoints/can_detector_pihat_v1/weights/best.pt -
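Both targets generally start from an intermediate ONNX export, which the Hailo and OAK-D toolchains then compile for their hardware. A sketch of that first step (the ONNX route is my assumption about what export.py does, not a description of it):

# Export a trained checkpoint to ONNX with ultralytics.
from ultralytics import YOLO

model = YOLO("models/checkpoints/can_detector_pihat_v1/weights/best.pt")
model.export(format="onnx", imgsz=416)  # 416 for OAK-D; use 640 for Pi/NVIDIA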
# Deploy to specific camera
python src/deployment/deploy.py --model can_detector_pihat_v1 --target sigyn --camera gripper_cam
This assumes you need to deploy remotely; skip the last step if not. To
deploy remotely, set up ssh on both machines and use ssh-copy-id so that
you can issue commands without needing passwords, then run the script.
You may need to adjust the deployment script, as it probably assumes my
directory structure.
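The remote-deploy step boils down to copying the exported model over and checking it landed. A minimal sketch, assuming passwordless ssh is already set up (the host, paths, and file names here are hypothetical):

# Copy an exported model to the robot and verify the copy.
# Host, paths, and file names are hypothetical.
import subprocess

MODEL = "models/exported/can_detector_pihat_v1.blob"  # hypothetical export path
TARGET = "ubuntu@sigyn"                               # hypothetical ssh host
DEST = "~/models/"                                    # hypothetical directory

subprocess.run(["scp", MODEL, f"{TARGET}:{DEST}"], check=True)
subprocess.run(["ssh", TARGET, f"ls -l {DEST}"], check=True)  # sanity check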
I've only done this with the OAK-D on my Sigyn robot so far. I'll be
testing and fixing the Pi and NVIDIA scripts soon.
Each device has a lot of tricky stuff to get it working. This was the
best effort between Claude and myself to simplify it. I think the
script detects whether you have an appropriate GPU and uses it; the
check is probably something like the sketch below. I have an older 2060
which I'm about to upgrade to a 3060. With the 2060, training on about
130 images took, I think, about 2 or 3 minutes to train and deploy.
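The GPU check is probably no more than this (an assumption about the implementation, not the repo's actual code):

# Pick a CUDA GPU if one is present, else fall back to CPU.
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(f"Training on {device}")
if device != "cpu":
    print(torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce RTX 2060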
You probably have questions. Good for you. Glad to see you're paying
attention. I may or may not have answers. We can negotiate for
consultation. I need a better robot arm, or home-made chocolate chip
cookies using the Nestlé recipe WITH NO MODIFICATIONS. I know your
mother had some secret changes to the recipe. Good! Keep it a secret, still.
Don't forget to STAR my repos--Christmas is coming and I want Santa to
know how good I've been.
Pito Salas <rps...@brandeis.edu>: Feb 15 07:27AM -0400
Hi Michael,
That's a great contribution. The code, packaging, and documentation look nice! Can you share the Claude.md files you use in this or other developments? (I'm assuming you had an AI coding assistant here and there.)
Best,
Pito Salas
Boston Robot Hackers &&
Computer Science Faculty, Brandeis University
Nathan Lewis <rob...@nrlewis.dev>: Feb 14 06:11PM -0800
Hey all!
Applications opened a few days ago for an event called OpenSauce. It's an event focused on active exhibition that attracts a younger audience and is technology/engineering focused, rather than the open-ended "anything generally creative" of Maker Faire.
https://opensauce.com/
If we are genuinely interested in attracting some fresh blood to the club, I think the club should attend. We'd need to put together an active demo of some of the club's robots.
It’s also at the San Mateo Event Center, the same place RoboGames and Maker Faire used to be at.
Any thoughts or ideas?
- Nathan