An old Jetson Nano for AI?


Sergei Grichine

Feb 26, 2026, 1:50:56 PM
to hbrob...@googlegroups.com

I've been playing with my old Jetson Nano (B01), trying to put it to work on anything AI related.

Anybody who has an old Nano and wants to dust it off - welcome to the club ;-)


Best Regards,
-- Sergei

Sampsa Ranta

unread,
Feb 27, 2026, 8:44:43 AMFeb 27
to hbrob...@googlegroups.com
I was recently able to get accelerated YOLOv11 working on my 4 GB Jetson Nano Developer Kit for inference on camera frames. It worked fine, at around 11 FPS.

It actually supports Jetpack 4.6.7 and nvidia-docker2. I was working with the Jetson Nano on the Duckietown Duckiebot platform.

Here is how it looked,

The thing with the Jetson Nano Jetpack stack is, it's not just a GPU. It's an embedded platform with a software stack built only for Ubuntu 18. I got it working when I rebuilt ROS inside a container based on the NVIDIA l4t-cuda image (though I did most of this work within QEMU).

I recall there was even an Ultralytics Docker image I used as a baseline for how I did this. There was also a benchmark tool for the Jetson Nano. Um, they still seem to support Jetpack 4.x,

The baseline Docker image was NVIDIA l4t-cuda, which provided the base support.

The solution I saw for the Jetson Nano was rather weird, as it exported some of the shared libraries from the base image into the Docker image, and it all had to be Ubuntu 18 based. There was a Python 3.8 package from some repo I used to get things started. My Docker build was incremental and had multiple layers that built up the stack. This was based on how Duckietown had built their stuff as well.
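To make the layered approach concrete, here is a minimal sketch of such an incremental build. Everything here is an assumption for illustration: the base image tag (check NGC for the one matching your JetPack 4.x release), the Python 3.8 source (Sampsa's actual "some repo" is not named in the thread), and the file names; the real build lives in the repos discussed in this thread.

```dockerfile
# Base image tag is an assumption; pick the l4t-cuda tag that matches
# your JetPack 4.x release (this one corresponds to L4T r32.7 / CUDA 10.2).
FROM nvcr.io/nvidia/l4t-cuda:10.2.460-runtime

# Ubuntu 18.04 base. Python 3.8 is taken from the Ubuntu archive here
# purely as an illustration; the thread mentions a third-party repo.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3.8 python3.8-distutils curl && \
    curl -sS https://bootstrap.pypa.io/pip/3.8/get-pip.py | python3.8 && \
    rm -rf /var/lib/apt/lists/*

# Incremental layers: dependencies first, application code last, so a code
# change does not force re-installing everything above it.
COPY requirements.txt /tmp/requirements.txt
RUN python3.8 -m pip install --no-cache-dir -r /tmp/requirements.txt

COPY app/ /app/
CMD ["python3.8", "/app/server.py"]
```

The payoff of the incremental structure is debuggability: each `RUN` step can be gotten working on its own before the next layer is attempted, which matches how Duckietown structures their builds.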

This was the final Docker image payload to run; the Python code that does the YOLO inference is also in this repo,

Starting inference was very slow and memory intensive and caused the camera acquisition pipeline to fail, but once it got running I was able to restart acquisition and get it working.

Cheers for the club!

- Sampsa

To view this discussion visit https://groups.google.com/d/msgid/hbrobotics/CA%2BKVXVMuMrEEwhOu1B-dBO4Tg1w_1dajVx1C2VCtkKQ%2BWmog%3DA%40mail.gmail.com.

Sergei Grichine

Mar 4, 2026, 1:30:42 PM
to hbrob...@googlegroups.com
Sampsa,

Thanks for your detailed response and the links.

Based on your work, I experimented with my Jetson Nano. I put my observations and code in a repository: https://github.com/slgrobotics/jetson_nano_b01

I will keep playing with it for a while. At the moment, I have the image grabber feeding the model engine and saving/updating the frame in a shared folder (a mounted volume).
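One detail worth noting about that shared-folder handoff: if a reader in another container can open the file mid-write, it may see a truncated JPEG. A small sketch of an atomic frame update (the path and function name are illustrative, not from Sergei's repo):

```python
import os
import tempfile

def publish_frame(jpeg_bytes: bytes, out_path: str = "/data/shared/latest.jpg") -> None:
    """Atomically update the latest frame in a shared (mounted) folder.

    Writing to a temp file in the same directory and then os.replace()-ing it
    means a consumer never observes a half-written JPEG.
    """
    out_dir = os.path.dirname(out_path)
    fd, tmp_path = tempfile.mkstemp(dir=out_dir, suffix=".jpg.tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(jpeg_bytes)
        os.replace(tmp_path, out_path)  # atomic on POSIX, same filesystem
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```

The same-directory requirement matters: `os.replace` is only atomic when source and destination are on one filesystem, which is guaranteed here by creating the temp file next to the target.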

I'm not sure where it will ultimately lead me. So far, I've learned quite a bit about Docker and video pipelines. Models to follow ;-)

Let me know if you have any thoughts; everyone is welcome to chime in.

Best Regards,
-- Sergei


David Murphy

Mar 4, 2026, 3:38:01 PM
to hbrob...@googlegroups.com
Along these lines, I have a couple of TX2s, a TX1, and a few TK1s.
I have been contemplating whether the TX2s, at least, are still viable for use, as NVIDIA doesn't really keep software up to date for these. The TK1, well, that's Ubuntu 16 I think, so...

Anyway, any thoughts on whether it's worth the effort to try to run models on the TX2s? I was thinking of them mostly as running vision models.


Sampsa Ranta

Mar 4, 2026, 3:39:39 PM
to hbrob...@googlegroups.com
Hi Sergei,

good stuff; glad to hear you found my work useful.

Took a while to figure it all out. My work was obviously based on the work of others, too; dusty-nv's Jetson Containers was a good source as well. A bit of a hybrid to get it all working.

FYI, jtop also shows usage of the HW accelerators; there are HW encoders for JPEG, for example.

The Duckietown camera acquisition I used from their stack works in a similar way; it just uses some of their own messaging service.

Some platforms needed something extra for the nvargus stuff when used headless. A non-trivial piece of configuration to find unless you know exactly what you're after:
echo "/usr/lib/aarch64-linux-gnu/tegra-egl" > /etc/ld.so.conf.d/nvidia-tegra-egl.conf
ldconfig

As for FPS, reducing the frame size improves inference throughput.

Duckietown's new stack is based on a more recent Ubuntu. But ROS/ROS2 with Docker allows using different environments and dependencies in different containers. It's easier to clean up Docker images than to try to install everything into a single base image. Duckietown uses these to harmonize the student environments.

The Duckietown camera solution is somewhat more oriented toward multiple camera image consumers, so the camera is abstracted as a publisher node.

They had a nice Autolab concept where the environment analyzed the bots' movements and ran competitions for students. There were overhead cameras that localized the bots and plotted telemetry over the map to analyze how good the drive was. The actual bots use machine vision to drive; the model is bundled as a Docker image, so the competition bot downloaded, for example, mine from Docker Hub. Some competitor models drove really well. I would show you the videos, but their AWS S3 bucket ended up too bloated, and I was told bringing the content back online would cost a fortune. Here is something I collected, a video stream of the best driving (sorry, this has some other ML inference bundled; I might have the plain mp4 somewhere),

His solution was ML based. The actual bot had a Jetson Nano, and inference was actually computed in real time on a separate desktop computer. So there was a bit of network latency, plus the ML processing latency on the desktop, before the commands were sent back over WiFi.

For YOLOv11: I used the Ultralytics tooling to train my model and then the Ultralytics Jetpack 4.x image to convert the model to the ONNX model you found in the image. For ML training, the Jetson Nano might not have enough juice; I ran that on my RTX 3090 GPU. You could convert some of the smaller Ultralytics models there if you want to detect persons. Ultralytics ML models are often based on the COCO dataset, which has a bit of class imbalance in the training material: many more pictures of humans than of other classes.

I'm a bit interested in how I could use the Jetson Nano with Frigate and have a decent ML model to detect wildlife. But it seems hard to find ML models that would detect deer in the garden (they seem to think it's an open buffet).

- Sampsa


Sampsa Ranta

Mar 4, 2026, 4:11:42 PM
to hbrob...@googlegroups.com
David,

it's embedded: low power, old CUDA, non-trivial to take into use. The 4 GB of total shared memory mostly allows running only small models, and the number of CUDA cores is quite limited.

If you want to deploy a model for an extended period of time, or have power constraints, it can be a tempting solution.

But for the Frigate use I mentioned, an Intel i3-N100 can also do quite well with small models quantized for the target with OpenVINO. Frigate only runs ML inference after detecting motion with a CPU algorithm.

If you look at Frigate NVR for camera recording and surveillance, they support a few platforms.

Older versions of Frigate also supported the older Jetson Nanos, I think. But that needs a bit of digging.

If you want to try out models, a high-end GPU has much more memory and you will get things done in less time. I have an NVIDIA RTX 3090 with 24 GB of memory, and it can run the bigger Alibaba Qwen models at a decent number of tokens.

Whether it's worth it depends on your use case and the alternatives.

- Sampsa



David Murphy

Mar 4, 2026, 4:16:00 PM
to hbrob...@googlegroups.com
Thanks Sampsa,

I was thinking of these mostly as edge devices / intelligent cameras and mics.
I have a good-sized workstation for more up-to-date stuff. I definitely appreciate the 'non-trivial' comment.
I'll have a look at Frigate.

Cheers
David

Sampsa Ranta

Mar 4, 2026, 4:33:33 PM
to hbrob...@googlegroups.com
David,

for edge intelligence, I think the older Jetson Nano might still be a valid option, depending on the model.

What it might lack most is a good storage option.

But still, it can only be as good as your model.


- Sampsa

Sergei Grichine

Mar 9, 2026, 4:52:43 PM
to hbrob...@googlegroups.com, sampsa...@gmail.com
I managed to tame the Nano and have it work with ROS2 as a "TCP/IP inference server". 

Of course, based on the docker image made by Sampsa Ranta (thanks!).


The Nano provides a decent recognition rate (FPS, a.k.a. server calls per second): around 6 calls per second when called from my development machine.

All standard ROS2 (Jazzy) tools apply.


Best Regards,
-- Sergei


Sampsa Ranta

Mar 9, 2026, 6:24:50 PM
to Sergei Grichine, hbrob...@googlegroups.com
Hi,

good stuff!

The ROS stack for my Docker build is pulled in as a dependency. I think this is partly from dusty-nv and partly from Duckietown, with a bit of salt and pepper,


I think it could be possible to build something similar with ROS2. Duckietown has ROS2 Docker images for the Jetson Nano, but these use a more recent Ubuntu as the base image.

When I did the ROS work, Ubuntu 18 ROS Noetic was no longer available as packages from the repository; EOL had hit it.

So, Duckietown just pulls this in from the ROS2 repository, which is easier when using a newer Ubuntu. I somewhat followed Duckietown in how the final image is built in incremental steps; it was easier to get each step working than to build one monolithic image. For Ubuntu 18, I think dusty-nv might have a more compatible way of building ROS2 with a bootstrap. Trying to build the complete set of packages would fail with issues in other libraries, but a small-scale bootstrap version could be possible,


The environment variables in the ROS bootstrap affected the difficulty quite a bit. dusty-nv has a good skeleton, I think; however, I'm not sure it's kept up to date (i.e., whether it works out of the box).

Another thing that might interest you: a Flask-based MJPEG streaming server with mimetype='multipart/x-mixed-replace; boundary=frame', in case you're not already familiar with how to stream a series of images as video to a web browser.

Something like this allows streaming inference results directly to a web browser,
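In Flask this is a generator function returned in a `Response` with that mimetype. The same multipart trick can be sketched with just the Python standard library; the handler and the `frames` source below are illustrative placeholders, not code from the thread:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def mjpeg_part(jpeg_bytes: bytes, boundary: bytes = b"frame") -> bytes:
    """One part of a multipart/x-mixed-replace stream. The browser replaces
    the previously shown JPEG with each new part, which reads as video."""
    header = (b"--" + boundary + b"\r\n"
              + b"Content-Type: image/jpeg\r\n"
              + b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii")
              + b"\r\n\r\n")
    return header + jpeg_bytes + b"\r\n"

class StreamHandler(BaseHTTPRequestHandler):
    # Assumed placeholder: replace with an iterable yielding JPEG-encoded
    # frames, e.g. frames annotated with inference results.
    frames = []

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        for jpeg in self.frames:
            self.wfile.write(mjpeg_part(jpeg))

# Illustrative startup (blocks forever):
# HTTPServer(("0.0.0.0", 8080), StreamHandler).serve_forever()
```

The browser keeps the single HTTP response open and renders each part as it arrives, so no client-side JavaScript is needed; pointing an `<img>` tag at the URL is enough.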

Yours,
 Sampsa Ranta


Sergei Grichine

Mar 9, 2026, 8:32:48 PM
to Sampsa Ranta, hbrob...@googlegroups.com

Hi Sampsa,

Thanks for the links. At the moment, I am pretty much done with the project, as I believe the Nano can be easily used for basic inference in any ROS2 robot, and it works well (even recognizing giraffe, broccoli and refrigerator among 79 others).

When it comes to ROS, there are several obvious showstoppers that make running ROS (of any kind) on the Jetson Nano counterproductive. The old ROS1 could theoretically be bridged to modern ROS2 (Jazzy or Kilted), but that introduces additional overhead and potential instability. Compiling ROS2 on a Nano under Ubuntu 18 is an exercise better left to truly desperate souls.

To put it bluntly, the Jetson Nano is no longer a general-purpose machine (like an RPi 5, for example). At best, it is now an aging appliance and should be used as such, with minimal architectural complexity.

That is why I am not running anything ROS-related on it — neither in the container nor on the Nano's obsolete host OS (both Ubuntu 18.04).

Instead, I run a simple Python program (server) that listens on a TCP/IP socket for a JPEG image, runs the model, and returns everything the model produces (class IDs/labels and bounding boxes). Simple, fast, reliable.
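The actual wire format is in Sergei's repo; as a sketch of the idea, a minimal length-prefixed variant could look like the following, where `run_model` is a stand-in for the YOLO inference call on the Nano and `infer_remote` is what the client on the development machine would use:

```python
import json
import socket
import struct
import threading

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (a TCP recv() may return a short read)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def serve_once(listener: socket.socket, run_model) -> None:
    """Handle one request: 4-byte big-endian length + JPEG bytes in,
    4-byte length + JSON-encoded detections out."""
    conn, _ = listener.accept()
    with conn:
        (size,) = struct.unpack(">I", recv_exact(conn, 4))
        jpeg = recv_exact(conn, size)
        reply = json.dumps(run_model(jpeg)).encode("utf-8")
        conn.sendall(struct.pack(">I", len(reply)) + reply)

def infer_remote(host: str, port: int, jpeg: bytes) -> list:
    """Client side: send one JPEG, get back the decoded detection list."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">I", len(jpeg)) + jpeg)
        (size,) = struct.unpack(">I", recv_exact(sock, 4))
        return json.loads(recv_exact(sock, size))
```

The length prefix is what makes this robust over TCP: without it, the server cannot tell where one JPEG ends and the next begins.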

The ROS2 node (in my case, running on my development machine) subscribes to an image topic. It sends the JPEG bytes directly to the server on the Nano and then packs the response into a Detection2DArray message.

This keeps the Nano lean while allowing ROS2 to do what it needs to do on the real modern machine without bridging to an obsolete interface.

The Nano’s biggest problem seems to be the lack of RAM—4 GB is already pretty tight for inference tasks. Once it starts swapping (to the SD card), it is effectively dead for all practical purposes.

I obviously used your image `masterhapero/dt-duckpack-yolov11:ente-arm64v8` as the base in my Dockerfile. It is a solid foundation. It also includes ROS1 and your repo, which was useful for my experiments (and for borrowing from your code), but it is a bit heavy for the “lean server” role.

For my purposes, the obvious improvement would be to find or build a leaner image capable of running the model only - in a Python 3.8 environment. I might explore this path once I get to actually deploying my Nano “appliance” for real work.

Meanwhile, if anyone wants to put their Nanos to work - I am here. 

Best Regards,
-- Sergei

Sergei Grichine

Mar 12, 2026, 9:29:15 PM
to hbrob...@googlegroups.com, Sampsa Ranta
I added a section to my notes describing the "headless appliance" mode for the Nano: https://github.com/slgrobotics/jetson_nano_b01/blob/main/README.md#headless-operation

In this mode, the Nano powers up with the TCP/IP server container already running and ready to respond to my ROS2 node, serving images and detections from the Nano’s camera.

Hopefully this helps someone save a Nano or two from the trash can.

Best Regards,
-- Sergei

Sergei Grichine

Mar 13, 2026, 12:16:53 PM
to hbrob...@googlegroups.com
Hardening network access: this might be useful for all our on-board computers, not only for my "Jetson Nano Camera Appliance":


Best Regards,
-- Sergei

Sergei Grichine

Mar 15, 2026, 2:46:13 PM
to hbrob...@googlegroups.com
It's a good time to dust off your old Jetson Nanos (B01, 4 GB RAM only), especially if you have a camera connected. The server code can be used on any other YOLO-capable board with minimal modifications.


Why does it beat the OAK-D Lite hands down? See this ;-)
  • Full control over the AI stack

  • Unlimited model flexibility

  • Network-accessible inference service

  • Much lower bandwidth usage

  • Easier debugging and development

Best Regards,
-- Sergei
