Attach link to a robot model and use it in IK

Daniel Maier

Jan 29, 2014, 1:44:57 PM
to moveit...@googlegroups.com
Hi!


My problem is the following: I have a robot model that is initialized
from the URDF/SRDF. I now want to extend the model from C++ by attaching
a link to a JointModelGroup. The transform from the last link in the
JointModelGroup to the new link is determined by some measurement and
can be considered fixed for a certain time interval. I then want MoveIt
to compute an IK solution for the JointModelGroup where I can specify
the target pose for the new link, so the new link effectively becomes
the end-effector of the group. In fact, I do not care about the
orientation of the link as long as the requested position can be
reached.

Concretely, I want the robot to pick something up and then touch a
different object with the tip of the picked-up object.

Is there a way in MoveIt to achieve this? It seems like it is not
intended that the kinematic structure can be changed once the robot
model is created (besides collision objects).

Any comments are appreciated!

Thanks,
Daniel

Dave Hershberger

Jan 29, 2014, 2:51:25 PM
to moveit...@googlegroups.com
This sounds a lot like the AttachedBody functionality. Check out the RobotState::attachBody() functions (http://docs.ros.org/api/moveit_core/html/classmoveit_1_1core_1_1RobotState.html#a5969cc58a617a8c117cbb888e0034b4b).

It does not create a new joint, just adds a shape and a transform (similar to a link) to whatever link you want to attach it to.  You can then remove it with RobotState::clearAttachedBody().

When you call RobotState::setFromIK(), the string parameter "tip" can be the name of a link or the name of an attached body.
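For what it's worth, a minimal sketch of that flow might look roughly like the following. The group name "right_arm", the link names, and the cylinder dimensions are placeholders, not from this thread, and the exact attachBody()/setFromIK() overloads differ a bit between MoveIt versions:

#include <moveit/robot_state/robot_state.h>
#include <geometric_shapes/shapes.h>
#include <eigen_stl_containers/eigen_stl_containers.h>

// Attach a single-shape body to a link and use the body's name as the IK tip.
void attachStickAndSolveIK(robot_state::RobotState& state)
{
  // One shape + one transform, so the body behaves like a fixed "virtual link".
  std::vector<shapes::ShapeConstPtr> stick_shapes;
  stick_shapes.push_back(shapes::ShapeConstPtr(new shapes::Cylinder(0.02, 0.5)));  // radius, length

  // Measured, temporarily-fixed transform from the parent link to the body.
  EigenSTL::vector_Affine3d attach_trans(1, Eigen::Affine3d::Identity());
  attach_trans[0].translation() = Eigen::Vector3d(0.0, 0.0, 0.25);

  std::vector<std::string> touch_links;  // links allowed to touch the body
  touch_links.push_back("r_gripper_l_finger_link");
  touch_links.push_back("r_gripper_r_finger_link");

  state.attachBody("stick", stick_shapes, attach_trans, touch_links, "r_wrist_roll_link");

  // The attached body's name can now be passed as the "tip" parameter.
  const robot_model::JointModelGroup* group =
      state.getRobotModel()->getJointModelGroup("right_arm");
  Eigen::Affine3d target = Eigen::Affine3d::Identity();
  target.translation() = Eigen::Vector3d(0.6, -0.2, 1.0);
  if (state.setFromIK(group, target, "stick"))
    state.update();  // joint values now realize the requested pose of "stick"
}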

Dave

Daniel Maier

Jan 29, 2014, 4:12:35 PM
to moveit...@googlegroups.com
Hi Dave,

Thanks for your answer. I was actually already looking at the
AttachedBody object. What stopped me from trying it was that it
requires a vector of Shapes and a vector of transforms, but only one
string for the frame_id. So if it (potentially) contains multiple
transforms (one for each shape), which one is used as the
tip/end-effector in the IK computation? Sorry if this is a stupid question.

Thanks,
Daniel

Dave Hershberger

Jan 29, 2014, 4:40:43 PM
to moveit...@googlegroups.com
It is not a stupid question; it is (in my opinion) a problem with the API.

However, it is still usable, and in fact the thing you describe is one
thing it is intended for.

Basically, if you only ever add one shape and one transform, it works
the way you would expect. The one transform you set will be used to
determine the pose named by the name of the body.

I don't know the history of the code, but essentially if you just want
to visualize and check for collisions with an attached body, multiple
shapes work fine. If you want to use anything that requires you to
think about the pose of the attached body, some of the code will just
use the first transform and silently ignore the others, and some of the
code will fail if there is more than one transform. I believe all the
relevant code does the right thing if there is just one shape and one
transform.

If I have my way, a future version of the API would define a "container"
shape object which has multiple child shape/transform pairs and a
well-defined coordinate origin. Then we could go back to having
attachBody() only accept one shape and one transform and the API would
be a bit less confusing.

Dave

Daniel Maier

Jan 30, 2014, 10:37:19 AM
to moveit...@googlegroups.com
Oh, this is indeed a very unexpected behavior of the API.

I tried it as you suggested, but it does not work. The reason is that
setFromIK basically changes the target_pose by applying the inverse
transform of the AttachedBody. For 6D IK this is fine, but if one is
only interested in a solution for the position of the tip, this
approach unfortunately does not work as expected.

Daniel

Dave Hershberger

Jan 30, 2014, 12:21:07 PM
to moveit...@googlegroups.com
Have you tried this with position-only IK enabled in kinematics.yaml?

http://docs.ros.org/api/pr2_moveit_tutorials/html/kinematics/src/doc/kinematics_configuration.html#position-only-ik

It is apparently only implemented in the KDL kinematics plugin at this point, but it is there.
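For reference, enabling it for a group in kinematics.yaml looks roughly like this; "right_arm" and the solver parameter values below are placeholders for whatever your configuration already contains:

right_arm:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05
  position_only_ik: True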

Dave

Daniel Maier

Feb 4, 2014, 9:36:56 AM
to moveit...@googlegroups.com
Hi

The problem with the way setFromIK is implemented is that the position
of the end-effector is already determined by the pose of the
AttachedBody. So, depending on the choice of rotation for my
AttachedBody, the end-effector's position is already fixed, regardless
of whether I use position-only IK or not.

I solved the problem by not using setFromIK and instead using a custom
IK solver for my problem.
However, now I have a different problem:

Even though I attached my AttachedBody to the RobotState, it does not
seem to be considered in the collision checking for planning.
Here is essentially what I do:

robot_model = robot_model_loader::RobotModelLoader(...).getModel();
robot_state = new RobotState(robot_model);
robot_state->attachBody(attachedBody);
// ... set joints in robot_state, e.g. from current /joint_states
goal_state = new RobotState(robot_model);
// ... set joints in goal_state from IK solution
goal_state->attachBody(attachedBody); // not sure if this is necessary;
// I tried leaving out this line but it did not help

planning_scene.reset(new planning_scene::PlanningScene(robot_model));
planning_scene->setCurrentState(*robot_state);
MotionPlanRequest req;
const robot_model::JointModelGroup* group =
    robot_model->getJointModelGroup(group_name);
req.group_name = group->getName();
req.goal_constraints.clear();
moveit_msgs::Constraints joint_goal =
    kinematic_constraints::constructGoalConstraints(*goal_state, group);
req.goal_constraints.push_back(joint_goal);

MotionPlanResponse res;
context = planner_instance->getPlanningContext(planning_scene, req,
                                               res.error_code);
context->solve(res);

The planner always finds a solution, even if the starting state is
already in collision with the attachedBody, no matter what I choose for
its touch_links. Also, I chose the goal_state so that the attached body
would collide with other body parts of my robot.
Do I need to enable collision checks first?
Also, if I publish the constructed MotionPlanResponse, the attached body
does not move with the robot but remains at its initial position.
Further,
vector<const robot_state::AttachedBody*> abs;
const robot_state::RobotState & rs = res.trajectory_->getFirstWayPoint();
rs.getAttachedBodies(abs);
ROS_INFO("response hasAttachedBody(): %zu", abs.size());
shows me that there is no attachedBody in the response.

Why doesn't this work as expected?

Thanks for your help.
Daniel

Dave Hershberger

Feb 4, 2014, 12:03:29 PM
to moveit...@googlegroups.com
Hi Daniel,

I'm not that familiar with the planners, so I'm not sure why it's not doing collision checking.  I know for the IK solutions, you need to pass setFromIK() a collision-checking callback function in order to do collision-aware IK.  For the planners though, there seem to be a lot more classes involved and it's not yet clear to me where you would specify the collision-checking.
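For the record, the setFromIK() callback I mean looks roughly like the sketch below. The names (the planning scene pointer, state, group, target_pose, "stick") are placeholders from earlier in the thread, and the exact setFromIK() overload and namespaces vary somewhat between MoveIt versions:

#include <boost/bind.hpp>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_state/robot_state.h>

// Reject IK candidates that put the robot (including attached bodies) in collision.
bool ikStateValid(const planning_scene::PlanningScene* scene,
                  robot_state::RobotState* state,
                  const robot_model::JointModelGroup* group,
                  const double* joint_values)
{
  state->setJointGroupPositions(group, joint_values);
  state->update();
  return !scene->isStateColliding(*state, group->getName());
}

// ... later, inside your solving code, bind the planning scene into the callback:
robot_state::GroupStateValidityCallbackFn callback =
    boost::bind(&ikStateValid, planning_scene.get(), _1, _2, _3);
bool found = state.setFromIK(group, target_pose, "stick", 5, 0.1, callback);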

One problem I do see is that you appear to be attaching the same body object to two different RobotStates.  This is not supported (though I realize it is not documented). (I added this pull request just now to fix that: https://github.com/ros-planning/moveit_core/pull/142.)

I don't know if that is the whole problem, or if you also need to enable collision-checking explicitly.

Ioan or Sachin may be able to answer the question better.

Dave

Ioan Sucan

Feb 4, 2014, 12:56:35 PM
to Daniel Maier, moveit-users
Hello Daniel,

I will add some answers (and some questions :) ) inline:


On Tue, Feb 4, 2014 at 6:36 AM, Daniel Maier <mai...@informatik.uni-freiburg.de> wrote:
Hi

The problem with the way setFromIK is implemented is that the position of the end-effector is already determined by the pose of the AttachedBody. So, depending on the choice of rotation for my AttachedBody, the end-effector's position is already fixed, regardless of whether I use position-only IK or not.

I am not sure I understand this. The assumption in setFromIK is that if you specify a pose for the attached body, all that it is used for is to compute a pose for the EEF, and then it is as if you called setFromIK with the EEF pose. If the IK solver you use just does position IK, it will disregard the orientation passed in (although it is computed from the attached body pose). Maybe I am not understanding the issue though, or maybe there is a bug... Could you elaborate on this issue a bit more?
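In other words, with a single attach transform the call effectively does something like the following before running regular IK (a sketch with hypothetical variable names):

// t_link_body : the fixed attach transform from the EEF link to the attached body
// t_world_body: the pose requested for the attached body
// setFromIK() first converts the body pose into the corresponding EEF link pose:
Eigen::Affine3d t_world_link = t_world_body * t_link_body.inverse();
// ...and then solves ordinary IK for the EEF link with t_world_link as the target.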
A bit off topic: we really should make setPositionIK() a call in KinematicsBase that defaults to an error. Then the KDL plugin could implement it properly and we could have KinematicsPluginLoader set the param from the param server.

 
I solved the problem by not using setFromIK and instead using a custom IK solver for my problem.
However, now I have a different problem:

Even though I attached my AttachedBody to the RobotState, it does not seem to be considered in the collision checking for planning.
Here is essentially what I do:

robot_model = robot_model_loader::RobotModelLoader(...).getModel();
robot_state = new RobotState(robot_model);
robot_state->attachBody(attachedBody);
// ... set joints in robot_state, e.g. from current /joint_states
goal_state = new RobotState(robot_model);
// ... set joints in goal_state from IK solution
goal_state->attachBody(attachedBody); // not sure if this is necessary;
// I tried leaving out this line but it did not help

planning_scene.reset(new planning_scene::PlanningScene(robot_model));
planning_scene->setCurrentState(*robot_state);
MotionPlanRequest req;
const robot_model::JointModelGroup* group =
    robot_model->getJointModelGroup(group_name);
req.group_name = group->getName();
req.goal_constraints.clear();
moveit_msgs::Constraints joint_goal =
    kinematic_constraints::constructGoalConstraints(*goal_state, group);
req.goal_constraints.push_back(joint_goal);

MotionPlanResponse res;
context = planner_instance->getPlanningContext(planning_scene, req,
                                               res.error_code);
context->solve(res);


This is an issue with the motion planning request. A plain motion planning request cannot specify that you grasp an object, so the existence of the attached bodies must be the same at both the start and the goal locations.
The goal location is specified as a set of constraints and cannot represent the existence of an attached body (that is already specified by the starting state). So kinematic_constraints::constructGoalConstraints() will not set the goal constraints to include the attached body -- that information must be specified only for the initial state of the robot.
Your request does not specify a start state, so the one that is currently maintained by move_group is used. This one will not have your attached body. So what you should do is add the attached body to start_state.attached_collision_objects in the motion planning request. That way, the start state will be the current state maintained by move_group plus whatever changes are dictated by the start state in your request; in this case, an attached body.
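A rough sketch of what that could look like follows; the link name, frame, touch link, and cylinder dimensions are placeholders, and req is the MotionPlanRequest from the snippet above:

#include <moveit_msgs/AttachedCollisionObject.h>
#include <shape_msgs/SolidPrimitive.h>
#include <geometry_msgs/Pose.h>

moveit_msgs::AttachedCollisionObject aco;
aco.link_name = "r_wrist_roll_link";               // link the body is attached to
aco.touch_links.push_back("r_gripper_l_finger_link");
aco.object.id = "stick";
aco.object.header.frame_id = "r_wrist_roll_link";  // pose given relative to that link

shape_msgs::SolidPrimitive cylinder;
cylinder.type = shape_msgs::SolidPrimitive::CYLINDER;
cylinder.dimensions.resize(2);
cylinder.dimensions[shape_msgs::SolidPrimitive::CYLINDER_HEIGHT] = 0.5;
cylinder.dimensions[shape_msgs::SolidPrimitive::CYLINDER_RADIUS] = 0.02;

geometry_msgs::Pose attach_pose;
attach_pose.orientation.w = 1.0;
attach_pose.position.z = 0.25;                     // measured offset from the link

aco.object.primitives.push_back(cylinder);
aco.object.primitive_poses.push_back(attach_pose);
aco.object.operation = moveit_msgs::CollisionObject::ADD;

req.start_state.is_diff = true;                    // apply as a diff on the current state
req.start_state.attached_collision_objects.push_back(aco);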

Please let us know if you still run into problems.

Ioan

Jorge Nicho

Feb 4, 2014, 5:57:57 PM
to moveit...@googlegroups.com
Hello,
I also had issues doing collision avoidance with an object attached to the robot. I tried publishing to the /attached_collision_object or /planning_scene topics to set my attached payload and requested a path plan through the move_group interface. Regardless of how I constructed my messages, the attached body (a box) always went right through the obstacles and the octomap as the robot arm moved. I finally stopped trying to get that feature to work and added the box permanently to the robot in the URDF file. Then I would just disable or enable collisions with the box by modifying the allowed collision matrix (see the sketch below).
My solution worked in the end, but it is just a workaround and is consequently quite limiting, since you have to create a new URDF if your payload properties (size, shape, etc.) change. Here is a thread that discusses this issue in more detail:
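The allowed-collision-matrix toggling mentioned above looks roughly like this when you have direct access to a PlanningScene; "payload_box" is a placeholder for the box link's name, and when going through move_group you would send the modified ACM in a planning scene diff instead:

// Temporarily ignore collisions involving the permanently-added box link...
collision_detection::AllowedCollisionMatrix& acm =
    planning_scene->getAllowedCollisionMatrixNonConst();
acm.setEntry("payload_box", true);   // true = collisions with all other entries are allowed
// ...and later re-enable collision checking for it:
acm.setEntry("payload_box", false);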

Daniel Maier

Feb 11, 2014, 10:28:02 AM
to moveit-users, isu...@gmail.com
Hi Ioan,

Thank you for your help. Setting
start_state.attached_collision_objects in the MotionPlanRequest fixed
the problem!
I was under the impression that the start state was determined from the
PlanningScene (using the current state), but that seems not to be the
case. It seems I am losing track of all the different robot and world
representations ;)


I will try to explain the problem with setFromIK and the attachedBody
from my point of view. The main issue is that for position-only IK, the
solution it yields is different from what I would obtain with a
modified URDF.
As an example, imagine I attach a stick to a robot's arm (maybe a
billiard stick) and want the robot to hit a certain point (let's say a
ball) with the stick's tip, but do not care about the angle at which
the ball is hit (ok, that's maybe a stupid idea for playing billiards...).
If I add the stick to the URDF and call setFromIK for the stick's tip,
I will get a solution with the gripper placed somewhere on a sphere
around the ball whose radius is the stick's length (restricted by the
joint limits). So if I do not like the returned solution (as it might
collide with the environment), I can simply use a different seed state
and get a different solution.

On the other hand, if I use an attachedBody to add the stick to the
gripper and call setFromIK, it will first compute a location for the
gripper, determined deterministically by the inverse of the
gripper-to-stick transform and the pose of the tip. Only after that
does it look for a (position-only) IK solution for the gripper at that
specific location. So the solution space is strongly restricted.

There are two problems here: a) The location of the gripper will always
be the same (but might collide with another object or be unreachable).
Setting a different seed will result in the same position for the
gripper (which would still collide or be unreachable).
b) Using position-only IK for the EEF results in arbitrary positions
for the attachedBody, as the orientation of the EEF is ignored. So the
stick might not hit the ball at all.

These problems with the way setFromIK works with an attachedBody mainly
concern position-only IK. However, even for full IK, any tolerance
specified is with respect to the EEF rather than the attachedBody,
which is also different from what the URDF-based approach would do, and
hence counter-intuitive to me.

So, concluding, I would like the attachedBody to behave as if it were
part of the robot model :)

Daniel