I've just open-sourced a repo that might be useful if you're running computer-vision-based object detection on your robot: wimblerobotics/sigyn_ai
What it does
It's a complete, scripted pipeline that takes you from raw images all the way to a running detector on your robot:
Capture images → RoboFlow dataset → Train on GPU (or Colab) → Export → Compile → Deploy
No Jupyter notebooks required for the normal path — everything is shell scripts and YAML configs.
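As a rough sketch of what a YAML-driven pipeline like this tends to look like (the key names and values below are my own guesses for illustration, not sigyn_ai's actual schema — check the repo's configs for the real one):

```yaml
# Hypothetical training config — keys are illustrative, not the repo's schema.
dataset:
  source: roboflow
  workspace: my-workspace      # your Roboflow workspace (placeholder)
  project: coke-zero-cans      # single-class example from this post
  version: 1
train:
  model: yolov8n               # assumed model family, not confirmed by the repo
  epochs: 100
  batch: 16
  device: 0                    # local GPU; the Colab path would override this
export:
  format: onnx                 # exported artifact consumed by the compile step
```

The point of a config like this is that the capture → train → export steps become repeatable shell invocations instead of notebook cells.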
Who might find it useful
Tested hardware and models
Training a single-class detector (my Coke Zero cans, ~360 images) takes 5–10 minutes on a 3060. A Google Colab path is also documented for those without a local GPU.
One-command pipelines
Related repos (also being open-sourced)
The full capture-to-train loop touches a few other repos I'm also making public:
You don't need any of those to use sigyn_ai for training; they're only relevant if you want the on-robot image-capture workflow.
License: Apache 2.0.
Let me know if things work for you, or could use improvements or clarification. Don't forget to star the repos if you like them.
Don't forget to tip the wait staff.
These repos are intended to be starting points for your own work. I'm open to making changes, as long as they don't cause major headaches for my own use on my robot.
— Michael