Can't build planning_scene from the Kinect sensor on my robot


xrn9...@gmail.com

Aug 4, 2017, 4:25:08 PM
to MoveIt! Users
To process images from the Kinect, I am using freenect_registered_xyzrgb.launch from the freenect_launch package, which converts the raw depth image into a point cloud. I do get messages on the topic
/camera/depth_registered/points.
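For reference, I bring the driver up roughly like this (using the launch file named above):

roslaunch freenect_launch freenect_registered_xyzrgb.launch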
Then this is my sensors_kinect.yaml:

sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /camera/depth_registered/points
    max_range: 5.0
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
And this is my sensor_manager.launch:

<param name="octomap_frame" type="string" value="odom_combined" />
<param name="octomap_resolution" type="double" value="0.025" />
<param name="max_range" type="double" value="5.0" />
<rosparam command="load" file="$(find finalasm_moveit_config1)/config/sensors_kinect.yaml" />
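To double-check that these parameters actually load, I think I can query them after launching (assuming they end up under the move_group namespace):

rosparam get /move_group/sensors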
Then I launch everything as usual, but I can't see anything in RViz, and I get this warning that I am not sure matters:
[ WARN] [1501877409.296506615]: MessageFilter [target=/world ]: Dropped 100.00% of messages so far. Please turn the [ros.moveit_ros_perception.message_notifier] rosconsole logger to DEBUG for more information.
[ WARN] [1501877469.305792922]: MessageFilter [target=/world ]: Dropped 100.00% of messages so far. Please turn the [ros.moveit_ros_perception.message_notifier] rosconsole logger to DEBUG for more information.
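Following the warning's hint, I suppose the logger could be switched to DEBUG with the rqt logger-level tool (assuming a standard desktop install), but I have not tried that yet:

rosrun rqt_logger_level rqt_logger_level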

I searched for related problems, but none of them helped. By the way, I also checked the topic /move_group/monitored_planning_scene, and it does have messages published on it.
Any help is appreciated in advance.



xrn9...@gmail.com

Aug 5, 2017, 4:30:37 PM
to MoveIt! Users
Sorry, I re-copied the output and missed the other important message: it says there is no transform between my frame "/world" and the frame "/camera_depth_optical_frame", or something like that. I checked the pr2 launch files and found something about setting frames. Do I also need to set this for my own robot? I didn't find anything about setting frames in the tutorial.

joe polden

Aug 7, 2017, 2:06:59 AM
to MoveIt! Users
For this second issue, you need to create a static transform publisher (http://wiki.ros.org/tf#static_transform_publisher) that relates the "/world" frame of your environment to the "/camera_depth_optical_frame" in which the point cloud is published. This can be done by adding:
 
<node pkg="tf" type="static_transform_publisher" name="kinect_broadcaster" args="0 0 0 0 0 0 1 world camera_depth_optical_frame 100" />

or similar to your launch file. If you are still having problems, check that the Kinect is actually publishing point cloud data on the output topic. With your ROS application running, run this command in a new terminal and see whether any point cloud data comes through:

rostopic echo /camera/depth_registered/points
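Since echoing a full point cloud floods the terminal, a lighter check is to just look at the message rate:

rostopic hz /camera/depth_registered/points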
 
Good luck

xrn9...@gmail.com

Aug 7, 2017, 2:15:53 PM
to MoveIt! Users
Hi Joe,
Thank you for your help; I will try it now. Earlier, while searching for something helpful, I found someone saying that in order to keep everything in a different frame in MoveIt!, I need to define a virtual joint in my SRDF. So I followed the way the pr2 sets its virtual joint, e.g. type=planar, parent_frame='odom_combined'. But I got this problem:
Skipping virtual joint 'vj' because its child frame 'base_link' does not match the URDF frame 'world'
Is that because the static transform is missing?
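For reference, the virtual joint entry I added looks roughly like this (the joint name 'vj' is my own; the rest follows the pr2 example):

<virtual_joint name="vj" type="planar" parent_frame="odom_combined" child_link="base_link" />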
Thanks

xrn9...@gmail.com

Aug 7, 2017, 5:11:09 PM
to MoveIt! Users
Hi Joe,
After I added the static transform publisher, I got a problem from freenect:
[ WARN] [1502139308.105777003]: TF2 exception:
Could not find a connection between 'camera_rgb_optical_frame' and 'camera_depth_optical_frame' because they are not part of the same tree.Tf has two or more unconnected trees.
Actually I don't quite understand this warning.




joe

Aug 7, 2017, 10:02:33 PM
to MoveIt! Users
It seems you are having some difficulty with the key concepts of the TF package, which is an essential part of any ROS application. I strongly suggest you first try to learn more about the TF package from the ROS wiki (http://wiki.ros.org/tf). Perhaps try some of the tutorials (http://wiki.ros.org/tf/Tutorials), and I am sure there are some YouTube videos on the TF package as well.

A quick explanation of what I believe your current problems relate to: in order for RViz to display your point cloud data, it needs to know where in your environment the Kinect sensor is located (relative to the global origin, or 'world'). To do this, you need to build a tree of reference frames from the global origin up to the Kinect sensor (which is done with the TF package). For example:

World origin ----> offset x=100 y=50 rz=90 ---> robot_link_1 ---> offset z = 20 --> kinect_frame

With this information, RViz now knows where the Kinect frame is located and can hence display the point cloud data. If this information is incomplete:

World origin ----> offset x=?? y=?? z=??  --> kinect_frame

Then RViz will not know where the Kinect is located and will refuse to display the point cloud data.

Currently, your system seems to be missing the information that fully defines the location of your Kinect frame in the global world frame. There are a number of ways to provide it, but the most basic, in my opinion, is the static transform publisher method we discussed earlier.
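As a rough sketch of that method (all offsets are placeholders you would replace with the measured pose of each frame on your own setup; the frame names follow the diagram above), the chain could be published from a launch file like this:

<launch>
  <!-- world -> robot_link_1: args are x y z yaw pitch roll, placeholder values -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_robot"
        args="1.0 0.5 0 1.5708 0 0 world robot_link_1 100" />
  <!-- robot_link_1 -> kinect_frame: placeholder z offset -->
  <node pkg="tf" type="static_transform_publisher" name="robot_to_kinect"
        args="0 0 0.2 0 0 0 robot_link_1 kinect_frame 100" />
</launch>

On your robot the last frame would be camera_link or camera_depth_optical_frame rather than kinect_frame.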

For this problem:
 [ WARN] [1502139308.105777003]: TF2 exception:
Could not find a connection between 'camera_rgb_optical_frame' and 'camera_depth_optical_frame' because they are not part of the same tree.Tf has two or more unconnected trees.

I would add another publisher to your launch file:

<node pkg="tf" type="static_transform_publisher" name="kinect_broadcaster" args="0 0 0 0 0 0 1 camera_rgb_optical_frame camera_depth_optical_frame 100" />

And I would also check the current TF tree your system is using with the 'view_frames' functionality (http://wiki.ros.org/tf#view_frames). When the application is running, in a new terminal use the command:

rosrun tf view_frames

and then open the generated PDF file to look at the current TF structure of your system. Hopefully this will give you some additional hints about what information is missing.

Also, did you check whether your point cloud data is being published?

xrn9...@gmail.com

Aug 8, 2017, 12:27:08 PM
to MoveIt! Users
Hi Joe,
Actually, I already wrote this in my first post: I do have messages published on the topics /camera/depth/points and /move_group/monitored_planning_scene.
And I am still kind of confused about frames. I read this link http://www.ros.org/reps/rep-0105.html#coordinate-frames and learned the relationship between earth, map, odom and base_link, but it doesn't seem to apply to my robot arm. Is the relationship between the different frames defined in the URDF, or is it the same for every robot in ROS?
I will try your suggestion and see whether it can solve the problem.
Thanks

xrn9...@gmail.com

Aug 8, 2017, 12:33:06 PM
to MoveIt! Users
I think I need to figure out the whole frame tree of my robot and what the virtual joint is for, since I have problems setting its parent frame and child frame. After that, I think I can figure out between which two frames I actually need the transformation.



xrn9...@gmail.com

Aug 8, 2017, 2:49:39 PM
to MoveIt! Users
I think I now understand what the virtual joint is and why there was a problem setting one for my robot: my URDF already defines a root joint, so I don't need to set a virtual joint again.
I think my understanding of the robot's frames is now right.
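For reference, I believe the relevant part of my URDF already looks something like this (the joint name here is just illustrative), which is why the extra virtual joint clashed:

<link name="world" />
<joint name="world_to_base" type="fixed">
  <parent link="world" />
  <child link="base_link" />
  <origin xyz="0 0 0" rpy="0 0 0" />
</joint>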
I followed the way the pr2 configures an external sensor, but I still got:
[ WARN] [1502216794.320287589]: No transform available between frame 'camera_depth_frame' and planning frame '/world' (Could not find a connection between 'world' and 'camera_depth_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1502216794.320336633]: No transform available between frame 'camera_link' and planning frame '/world' (Could not find a connection between 'world' and 'camera_link' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1502216794.320373158]: No transform available between frame 'camera_depth_optical_frame' and planning frame '/world' (Could not find a connection between 'world' and 'camera_depth_optical_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1502216794.320406033]: No transform available between frame 'camera_rgb_frame' and planning frame '/world' (Could not find a connection between 'world' and 'camera_rgb_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
[ WARN] [1502216794.320435605]: No transform available between frame 'camera_rgb_optical_frame' and planning frame '/world' (Could not find a connection between 'world' and 'camera_rgb_optical_frame' because they are not part of the same tree.Tf has two or more unconnected trees.)
I am confused about why there are so many frames, since I only subscribe to the point cloud on camera/depth/points.
Does that mean I have to set a static transform for every one of these frames?


