LINOROBOT2 IS OUT


jime...@gmail.com

Sep 22, 2021, 4:14:40 PM
to LINOROBOT
I'm thrilled to announce that linorobot2 is out.

The firmware uses micro-ROS to expose the topics required by Nav2 (ROS2's default navigation stack) and comes with a whole lot of new features. Here's a summary:

- Easier sensor integration. There's a new abstraction class for motor drivers and IMUs, so you can focus on integrating a new sensor's API without worrying about how to wire its output into linorobot2's firmware (there's a rough sketch of this after the list).

- New calibration process for motor drivers and encoders. Debugging misconfigured pin assignments can be frustrating, especially when the motors suddenly spin aggressively and the cause of the weird behavior isn't obvious. I added a minimal calibration firmware (no ROS2 connection required) that walks you through correcting misconfigured pins. The calibration process also measures the motor's COUNTS_PER_REV in case that figure isn't available to the user (see the COUNTS_PER_REV sketch after the list).

- Automatic IMU calibration. I omitted the accelerometers from the odometry calculation and rely solely on the gyroscope's output. This lets the robot calibrate the gyroscope during bootup and cuts out the tedious process of rotating the newly assembled robot about each axis (see the gyro-calibration sketch after the list).

- Improved kinematics library. The refactored library automatically calculates each motor's maximum velocity output from the robot's defined RPM and kinematic properties. Whenever that maximum is reached, the robot caps each motor's velocity to a healthy speed while preserving the motion commanded on /cmd_vel (see the velocity-capping sketch after the list). Controlling the robot is also buttery smooth with this new library, especially with a joystick. You'll love it, I promise! :D

- New laser sensor support:
a. YDLidar
b. LDLidar
c. ZED camera
d. Intel RealSense series

- Parameterized Xacro. You can easily create a URDF from the parameterized xacro available in the linorobot2_description package.

- Simulation Pipeline. You can now simulate the robot you built in Gazebo using the robot properties defined in the parameterized xacro file. This lets users fine-tune high-level applications (e.g. Nav2, AMCL, SlamToolbox) in the virtual world and deploy them on the real robot. You can also use this tool to develop ROS2 applications without a physical robot.

- Robust mapping. Gmapping has been replaced with SlamToolbox, which demonstrated more robust results during testing. It also offers new mapping capabilities, such as loading a saved map to continue mapping new areas.

- Onboard IMU and odometry calculation. In linorobot2, the odometry and IMU data are published directly from the microcontroller. This reduces the number of nodes the robot computer has to run just to relay minimal messages into full-fledged standard IMU/Odometry messages (see the micro-ROS publishing sketch after the list).

- 3D obstacle avoidance. linorobot2 is configured to accept data from RGBD cameras and mark obstacles in the 2D costmap using voxel layers. This lets the robot see obstacles beyond the 2D LIDAR's view, like slippers, chairs, and hanging objects.
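
To make the new sensor abstraction concrete, here's a rough sketch of the IMU side of it. The names below are illustrative rather than the firmware's exact API:

// Illustrative sketch only -- not linorobot2's exact class names.
// A new IMU driver just implements these methods; the rest of the
// firmware talks to the base class and never touches vendor code.
#include <geometry_msgs/msg/vector3.h>

class IMUInterface
{
public:
    virtual ~IMUInterface() {}

    // vendor-specific startup (I2C init, register configuration, etc.)
    virtual bool startSensor() = 0;

    // latest readings in standard units:
    // rad/s for the gyroscope, m/s^2 for the accelerometer
    virtual geometry_msgs__msg__Vector3 readGyroscope() = 0;
    virtual geometry_msgs__msg__Vector3 readAccelerometer() = 0;
};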
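
The COUNTS_PER_REV measurement boils down to counting encoder ticks over one known wheel revolution. Here's a stripped-down Arduino-style illustration of the idea (the pin number and names are placeholders; the actual calibration firmware does more than this):

#include <Arduino.h>

volatile long ticks = 0;

// count rising edges on one encoder channel; a full quadrature
// decoder would multiply this count accordingly
void onEncoderRise() { ticks++; }

void setup()
{
    Serial.begin(115200);
    pinMode(2, INPUT_PULLUP);  // encoder channel A on pin 2 (assumed)
    attachInterrupt(digitalPinToInterrupt(2), onEncoderRise, RISING);
    Serial.println("Rotate the wheel exactly one full turn by hand.");
}

void loop()
{
    Serial.print("ticks so far (your COUNTS_PER_REV after one turn): ");
    Serial.println(ticks);
    delay(500);
}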
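
The automatic gyroscope calibration is essentially bias estimation while the robot sits still during bootup: average a batch of samples, then subtract that bias from every later reading. Simplified here to one axis, with a placeholder driver call:

#include <Arduino.h>

const int N_SAMPLES = 500;
float gyro_z_bias = 0.0;

float readRawGyroZ() { return 0.0; }  // placeholder for the IMU driver call

// call once during bootup, with the robot stationary
void calibrateGyroscope()
{
    float sum = 0.0;
    for (int i = 0; i < N_SAMPLES; i++)
    {
        sum += readRawGyroZ();
        delay(2);  // roughly one second of samples
    }
    gyro_z_bias = sum / N_SAMPLES;
}

// bias-corrected angular velocity (rad/s) used for odometry
float readGyroZ()
{
    return readRawGyroZ() - gyro_z_bias;
}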
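
The velocity capping in the kinematics library works roughly like this, simplified to a differential drive: derive the wheel speed limit from the configured max RPM, and when any wheel would exceed it, scale every wheel by the same factor so the motion commanded on /cmd_vel keeps its shape. All names and numbers below are placeholders:

#include <algorithm>
#include <cmath>

const float MAX_RPM = 150.0f;  // placeholder motor spec
const float MAX_WHEEL_RAD_S = MAX_RPM * 2.0f * M_PI / 60.0f;

void setWheelVelocity(float left, float right) { /* motor driver call */ }

void capAndCommand(float linear_x, float angular_z,
                   float wheel_radius, float wheel_separation)
{
    // differential-drive inverse kinematics
    float left  = (linear_x - angular_z * wheel_separation / 2.0f) / wheel_radius;
    float right = (linear_x + angular_z * wheel_separation / 2.0f) / wheel_radius;

    float fastest = std::max(std::fabs(left), std::fabs(right));
    if (fastest > MAX_WHEEL_RAD_S)
    {
        // shrink both wheels equally so the turn/translate ratio is preserved
        float scale = MAX_WHEEL_RAD_S / fastest;
        left  *= scale;
        right *= scale;
    }
    setWheelVelocity(left, right);
}

Scaling both wheels rather than clamping each one independently is what keeps arcs from flattening out near top speed.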
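
And here's the general shape of publishing odometry straight from the microcontroller with micro-ROS (rclc). The topic and frame names are assumptions, and the node/executor setup is omitted:

#include <math.h>
#include <rcl/rcl.h>
#include <rclc/rclc.h>
#include <nav_msgs/msg/odometry.h>
#include <rosidl_runtime_c/string_functions.h>

rcl_publisher_t odom_publisher;
nav_msgs__msg__Odometry odom_msg;

void initOdomPublisher(rcl_node_t * node)
{
    rclc_publisher_init_default(
        &odom_publisher,
        node,
        ROSIDL_GET_MSG_TYPE_SUPPORT(nav_msgs, msg, Odometry),
        "odom/unfiltered");  // assumed topic name
    rosidl_runtime_c__String__assign(&odom_msg.header.frame_id, "odom");
    rosidl_runtime_c__String__assign(&odom_msg.child_frame_id, "base_footprint");
}

// x, y in meters and heading in radians, integrated from the encoders
void publishOdom(float x, float y, float heading)
{
    odom_msg.pose.pose.position.x = x;
    odom_msg.pose.pose.position.y = y;
    // planar heading as a quaternion (rotation about z only)
    odom_msg.pose.pose.orientation.z = sin(heading / 2.0f);
    odom_msg.pose.pose.orientation.w = cos(heading / 2.0f);
    rcl_publish(&odom_publisher, &odom_msg, NULL);
}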

Deprecation Notice:
- Kinect sensor. I'm open to bringing this feature back, as long as someone is willing to test.
- Ackermann steering kinematics. It can be challenging to write a common configuration for Ackermann steering and differential drive robots.

Lastly, I would like to thank the community for its continuous support of this project. PRs and contributions are always welcome. Feel free to ping me if you have any more questions. A special shout-out to the members who offered their time to beta-test the package.

Thank you!

Ross Lunan

Sep 25, 2021, 5:18:10 PM
to LINOROBOT
Great news! I have all the hardware on hand except a depth sensor, so I can try this right away without one. Now a question: have you looked at the newly announced OAK-D and OAK-D-Lite as substitutes for the RealSense and ZED? Both are very cost-effective 3D cameras with onboard AI processing.

jime...@gmail.com

Sep 26, 2021, 3:28:28 AM
to LINOROBOT
Hey Ross,

I don't have access to these cameras, but if anyone's willing to test, I can integrate the sensor.

Ross Lunan

Sep 26, 2021, 10:30:29 AM
to LINOROBOT
I can do that: I got into the OAK-D-Lite early-bird batch, with shipments expected in Dec 21 or maybe sooner. I'll try the stack with a Raspberry Pi 4B/4G and a 2WD chassis I have, or maybe I should use the 8G? I also have a Nano 4G/B02; is there any advantage to using that?

Phillip Murphy

Sep 27, 2021, 7:31:30 PM
to LINOROBOT
Hello,

I ran the install with the depth sensor field blank and it seemed to work. However, on the hardware setup page I can't find the "config folder" that contains the "lino_base_config.h" file where I should put my parameters.

I searched through the install directories directly and also on the GitHub site, and I can't seem to find it. It's not in my root drive either. Any idea where I might be missing it?

-Phil


Craig Burnett

Sep 27, 2021, 7:58:48 PM
to LINOROBOT
Hi Phil,

have you installed this repository?


Cheers
Craig

Phillip Murphy

Sep 28, 2021, 11:04:02 AM
to LINOROBOT
Stupid me, I didn't realize it's two separate repositories this time. Got it now, thanks.

Phillip Murphy

Oct 4, 2021, 5:00:50 PM
to LINOROBOT
I am getting stuck at the "Visualize the newly created URDF" portion. I am able to edit my description file and run colcon build, but when I launch from the robot computer I get stuck at the attached screen.

"joint_state_publisher: Waiting for robot_description to be published on the robot_description topic"

Any ideas on a resolution? I've been looking into this for a while and reinstalled joint_state_publisher with no luck. RViz on my host computer comes up but is empty, and I see robot_description listed as an unvisualizable topic.

Thanks!



(attachment: Capture.JPG)

Phillip Murphy

Oct 6, 2021, 12:08:57 PM
to LINOROBOT
Trying to add pictures of RViz.
(attachments: 56.PNG, 56_2.jpg)

Phillip Murphy

Oct 7, 2021, 8:21:35 AM
to LINOROBOT
Hello all,

Updating based on further conversations. It seems the topics are being published properly, but it might be something with the launch files / directories I have set up.

When I open RViz as pictured above, the global status shows an error that no base_footprint exists. I hit a similar issue if I skip this step and go into SLAM: RViz opens, but now the map doesn't exist. Any ideas, or are others seeing these issues?

I have checked all my directories and the launch files, and I just can't figure out why the robot computer's output looks fine but RViz on the host computer points to the wrong places. Thanks for any help.

The pub-sub test worked fine on the robot and host computers, so the network looks good. I am using a VMware virtual machine with Ubuntu 20.04 for my host. The robot runs on a Jetson Nano.

Phillip Murphy

Oct 10, 2021, 7:28:31 PM
to LINOROBOT
Following up on my question, in case it helps others: after a lot of guess-and-check, I finally got it to work. Within the VMware settings, I changed the network connection to "Bridged: Connected directly to the physical network".

This allowed ifconfig to show the same network in use on both the host and robot computers. RViz now shows the robot as expected. I will work on confirming that SLAM and the other navigation steps work tomorrow.

Phillip Murphy

Feb 1, 2022, 9:14:32 PM
to LINOROBOT
Hello,

I do have the OAK-D and have been playing around with its object detection and other machine learning algorithms.

I'm wondering if there is any interest in integrating it for depth sensing. Is it possible to still use the RPLIDAR for laser scans, so you keep that 360° view, and then use the camera for additional depth sensing?

My other thought is: can the camera perform the depth sensing but also run a facial-recognition-type algorithm while the robot drives around? It would also be great to hear about any experience with the RealSense cameras.


madgrizz...@gmail.com

Feb 13, 2022, 12:27:24 PM
to LINOROBOT
I've been using LINOROBOT1 with a Kinect on my robot for some time, but while working with my OAK-D camera on object detection/face recognition, I decided to migrate the nodes I created to ROS2. So now I'm looking at using LINOROBOT2 and notice the Kinect isn't supported. With that said, I'm also interested in the possibility of integrating an OAK-x camera as a depth sensor. I have an OAK-D and two Lites, and I planned to move the object detection/face recognition over to one (or more) of the Lites, thereby freeing up the D. I would assume the D would be the best sensor to integrate, as it has the IMU. I'm also contemplating getting the OAK-D-PRO-W when I feel I can spend the money.

As for the question about doing both depth sensing and facial recognition: the Myriad is quite a capable processor, but I think it comes down to the FPS required for depth sensing / navigation. I do YOLOv4 object detection (with spatial coordinates), face detection + face recognition, and object tracking all on-device (no host-side processing for that part of the pipeline). I've set the camera to about 3 FPS and it seems to work fine. My pipeline isn't trying to detect the motion of bouncing balls, etc., so the slow FPS works.

Thanh Trường Phan

Nov 1, 2024, 10:33:20 AM
to LINOROBOT
Hey Phil, 

I have the same problem as you. I also use a VM (VirtualBox).
But setting the network connection to "Bridged" doesn't work for me...
