--
You received this message because you are subscribed to the Google Groups "HomeBrew Robotics Club" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hbrobotics+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hbrobotics/890092334.267025.1625882584394%40connect.xfinity.com.
ROS Melodic works fine, but since it uses Python 2, it is not suitable as a development platform. After upgrading the Nano to Ubuntu 20.04, it runs Noetic just fine, but has no access to GPU acceleration.
What I am thinking: for sophisticated robotics development with any GPU-accelerated ROS, you need a dedicated platform ($$$$) that can then deploy those portions of the code to an onboard Jetson. This is the approach used for the Nvidia JetBot. Well, learning is often a painful process.

On 07/09/2021 7:44 PM Kevin Chow <kvnc...@gmail.com> wrote:
Have you tried installing ROS without using a Docker image? I haven't used a Jetson Nano before, but I have used a Jetson TX2 with ROS (not from a Docker image), and it works. (Although I didn't set up that TX2 myself; I think most of the ROS-related stuff was just installed via apt-get.)
On Friday, July 9, 2021 at 7:03:06 PM UTC-7 Alan Federman wrote:
On the Nvidia Nano and Xavier you can only run Foxy and Noetic in Docker. BTW, the install process takes about 5 hours. Docker is designed to work primarily from the CLI, but it can work with a GUI in most situations. Apparently the Nvidia Docker images for ROS don't support the Nvidia accelerated display. I found this out by installing the ROS 2 turtlesim and discovering it can't run on a Nano. So at this point I am reaching the conclusion that the Nvidia Jetson is just not worth it as a ROS development platform.
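For what it's worth, a GUI app inside a ROS Docker container can often be shown on the host's X server by passing the display through. This is a generic X11-forwarding sketch, not the exact setup anyone in this thread used; the image name `ros:noetic` and the turtlesim package are just a common example, and note this does not by itself give the container GPU acceleration:

```shell
# Let local containers talk to the host X server
# (loosens security; undo later with `xhost -local:root`)
xhost +local:root

# Run turtlesim from a stock ROS image, sharing the host display
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ros:noetic \
    bash -c "apt-get update && \
             apt-get install -y ros-noetic-turtlesim && \
             rosrun turtlesim turtlesim_node"
```

Whether the window actually renders (and whether it is accelerated) still depends on the host's X/GPU setup, which is exactly the sticking point on the Jetson images.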
Another option is to run the roscore on the Jetson and set up your PC to connect to that roscore rather than launching one on the PC. Run the nodes that do not need a GUI on the Jetson, and run the nodes that need a GUI, like rviz, on your PC. If you set it up correctly, they should be able to communicate with each other over the network.
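As a sketch of that setup (the IP addresses and the `my_robot base.launch` file here are made up for illustration): start the master on the Jetson and advertise a reachable address, then point the PC's ROS_MASTER_URI at the Jetson before launching rviz:

```shell
# On the Jetson (say 192.168.1.50): start the master and the non-GUI nodes
export ROS_MASTER_URI=http://192.168.1.50:11311
export ROS_IP=192.168.1.50            # address other machines should use to reach this host
roscore &
roslaunch my_robot base.launch        # hypothetical launch file for the headless nodes

# On the PC (say 192.168.1.10): point at the Jetson's master, then run the GUI tools
export ROS_MASTER_URI=http://192.168.1.50:11311
export ROS_IP=192.168.1.10
rviz
```

Setting ROS_IP (or ROS_HOSTNAME) on both ends matters: without it, nodes often advertise an address the other machine can't resolve, and topics silently fail to flow.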
Alan,
Fergs started porting the Openni2 package supporting the Xtions to ROS 2, and I took it and worked out some of the kinks I ran into. Last I played with it, under Foxy on Ubuntu 20.04, I could get images as direct topics, but using Image Transport didn't work. I saw online that there was a fix for the problems with Image Transport in general, but I never got around to trying it. I figured the fixes would be part of Galactic, but I haven't had a chance to work on it lately.
https://github.com/TimCraig/openni2_camera_param
Tim
My goal is to do person detection and navigation on one smaller robot with a single ARM64 computer, in other words, to solve the last-mile problem.
Mike,
I decided to sacrifice "versatility" for a working solution. Since I think most people aren't interested in switching things around all the time, I went with selecting which streams are published through parameters.
Tim
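For anyone trying this, ROS 2 parameters can be set at node startup from the command line. The executable and parameter names below (`rgb_enabled`, etc.) are illustrative guesses, not necessarily what the port actually declares, so check what the node exposes first:

```shell
# Launch the camera node, enabling only the streams you want
# (parameter names here are hypothetical -- verify against the node)
ros2 run openni2_camera openni2_camera_node --ros-args \
    -p rgb_enabled:=true \
    -p depth_enabled:=true \
    -p ir_enabled:=false

# Inspect what the running node actually declares
ros2 param list /openni2_camera_node
```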
Thanks Fergs, I am glad I'm not just hallucinating. I am confused about whether to work with Foxy or Galactic, since both are advertised as working with Ubuntu 20.04. Any reason to use one over the other? Will your openni2 port work with Galactic?
I thought I would add my two cents worth since the ElsaBot project I have been working on (the one I showed at the last club challenge meeting) includes some of the ingredients discussed above…
My robot is running Ubuntu 20.04 and a clone of ROS 2 Rolling from around mid April. I’m running a bunch of nodes including:
I planned on using one RPi4, but found that just the core nodes for robot base control and nav consumed around 75% of the processor, and the other nodes needed around another 45%. To keep the project going, I cloned the RPi4 SD card and just added a second RPi4 to the robot. That allowed plenty of processing headroom.
I grouped the base control and nav-related nodes on one RPi and the rest of the nodes on the other. While developing, I usually run the nodes I am working on from my laptop, since builds are so much quicker there. It's easy to then commit the changes, push to GitHub, fetch those changes on the RPi where they normally run, and then wait for the painfully slow compile to finish.
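That round trip can be scripted. A rough sketch of the loop, where the workspace path, repo name, and package name (`elsabot_nav`) are placeholders, not the project's actual names:

```shell
# On the laptop: build and test quickly, then publish the change
cd ~/ros2_ws/src/elsabot
git add -A && git commit -m "Tune nav params" && git push origin main

# On the RPi where the nodes normally run: pull and rebuild only what changed
cd ~/ros2_ws/src/elsabot
git pull origin main
cd ~/ros2_ws
colcon build --packages-select elsabot_nav    # the painfully slow part
source install/setup.bash
```

`colcon build --packages-select` keeps the slow on-Pi rebuild limited to the packages that actually changed, rather than the whole workspace.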
I switched from Foxy to Rolling because I encountered a bug that was causing a deadlock (https://github.com/ros2/rclcpp/issues/1285). It is handy to build from source in case you want to better understand the code by putting in some log prints or making a minor change to prove out your understanding of things.
The robot uses the OAK-D camera to run a human-pose-detection NN and a MobileNet SSD NN for person detection. Except for the post-processing required for the pose-detection NN, the AI processing doesn't take too much CPU. I've constrained the video output resolution from the OAK-D to 640x360 to reduce the overhead of sending it to the web UI clients that display it.
The OAK-D has an Intel Movidius Myriad X, which can be configured to process the output from one of the on-device cameras or can be used independently of the video sources (like the Google TPU use case). I'm using Python for the vision and tracking nodes, since the post-processing, setup, and image handling are much easier in Python.
I’m thinking of moving to a Celeron-based SBC such as this one (https://www.seeedstudio.com/Odyssey-Blue-J4125-128GB-p-4921.html) so I can get back to one computer. It’s still somewhat power-consumption friendly and can be powered from 12-19 VDC. It also has a built-in Arduino, which would be handy for low-level control. The x86 aspect might also make life a little easier. Anybody tried one of those, or another Celeron-based board, for ROS 2?
Scott Horton