Is it possible to change global robot velocity while moving?


Clément BAZERQUE

May 25, 2017, 5:32:29 AM
to MoveIt! Users
Hi,

I am working with a manipulator arm (Sawyer Rethink Robotics).
I have a ROS publisher that tracks a human operator and sends their position on a ROS topic.

In another script, I compute a Cartesian plan using MoveIt! (a simple pick and place) and execute it in a loop.
I also subscribe to the "human position" topic and have defined a callback function (which is currently empty).

I want to reduce the global velocity of the robot as the human gets closer to it.

My question is: is it possible to change the global velocity while the arm is moving, using MoveIt?

The following lines don't work:
group = moveit_commander.MoveGroupCommander("right_arm")
group.set_max_velocity_scaling_factor(0.1)

Clément


Clément BAZERQUE

Jun 2, 2017, 6:48:54 AM
to MoveIt! Users
Hi again!

group.set_max_velocity_scaling_factor() doesn't work with compute_cartesian_path().
Now I use set_pose_target() and group.plan(), and it works.

BUT, group.set_max_velocity_scaling_factor() doesn't instantly change the max velocity when it is called during an execution (it only takes effect on the next execution).

Is there a way to change the velocity scaling factor during execution?

Clément

v4hn

Jun 7, 2017, 9:36:00 AM
to moveit...@googlegroups.com, Clément BAZERQUE
Hey there,

On Fri, Jun 02, 2017 at 03:48:54AM -0700, Clément BAZERQUE wrote:
> *group.set_max_velocity_scaling_factor()* doesn't work with
> *compute_cartesian_path()*.
> Now I use *set_pose_target() *and *group.plan() *and it works.

Yes. With the Cartesian path planning interfaces you currently have
to recompute the time information yourself.
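For reference, a minimal sketch of what "recomputing the time information" can amount to, using plain tuples instead of a real trajectory_msgs/JointTrajectory (the function name and data layout here are illustrative assumptions; newer moveit_commander versions also expose a group.retime_trajectory() helper for this):

```python
# Hedged sketch: rescaling the timing of an already-planned trajectory.
# A real trajectory stores time_from_start per point; we mimic that with
# (time_from_start, positions, velocities) tuples.

def scale_trajectory_speed(points, scaling_factor):
    """Slow down (factor < 1) or speed up (factor > 1) a trajectory by
    stretching time stamps and scaling velocities accordingly."""
    scaled = []
    for t, pos, vel in points:
        scaled.append((t / scaling_factor, pos,
                       [v * scaling_factor for v in vel]))
    return scaled

# Toy one-joint plan: reach 1.0 rad in 2 s, then run it at half speed.
plan = [(0.0, [0.0], [0.0]), (1.0, [0.5], [0.5]), (2.0, [1.0], [0.0])]
slow = scale_trajectory_speed(plan, 0.5)  # now takes 4 s total
```

The same stretch-and-rescale idea applies to the points returned by compute_cartesian_path() before handing them to execute().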

> *BUT*, *group.set_max_velocity_scaling_factor()* doesn't change instantly
> the max_velocity when it is called during an execution (it waits the next
> execution).
>
> Is there a way to change the velocity scaling factor during the execution ?

No. This can't work, for the simple reason that MoveIt has handed control to the
FollowJointTrajectory action server to execute the planned trajectory.
The time information for the whole trajectory is already in the action server goal.

Here are two options you can look into:

- Abort the trajectory and replan with a different time parameterization/scaling factor.

- Implement a FollowJointTrajectory action server yourself (e.g. based on ros_control)
that automatically adjusts the speed of the trajectory it executes in response to detected humans.
Minor detail: in this case you have to tell MoveIt to ignore overly long execution
delays in the controller; otherwise it will abort the trajectory because it takes too long.
The relevant ROS parameters are trajectory_execution/allowed_execution_duration_scaling
and trajectory_execution/allowed_goal_duration_margin.
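To illustrate the minor detail above, a sketch of how those two parameters might be set wherever move_group's parameters are loaded (the values here are illustrative, not recommendations):

```yaml
# Relax MoveIt's execution monitoring so a deliberately slowed-down
# trajectory is not aborted for taking longer than planned.
trajectory_execution:
  allowed_execution_duration_scaling: 10.0  # tolerate up to 10x the planned duration
  allowed_goal_duration_margin: 30.0        # extra seconds before giving up
```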


v4hn

--
Michael Görner, M.Sc. Cognitive Science, PhD Student
Universität Hamburg
Faculty of Mathematics, Informatics and Natural Sciences
Department of Informatics
Group Technical Aspects of Multimodal Systems

Vogt-Kölln-Straße 30
D-22527 Hamburg

Room: F-315
Phone: +49 40 42883-2432

Clément BAZERQUE

Jun 7, 2017, 10:50:39 AM
to MoveIt! Users, clement....@gmail.com, m...@v4hn.de
Thank you very much for your answer v4hn :)

Concerning option 1:
- the robot will shake, make weird noises, and scare the human collaborator (and reducing the publish rate of the user position is dangerous). I would like to keep the movements smooth.
- also, using randomized planners (RRT, for example) would produce different trajectories and sometimes force the robot to change direction (maybe not when forcing linear trajectories with computeCartesianPath).

Concerning option 2 (best hope):
I think this is the best solution. However, I don't know how a FollowJointTrajectory action server works.
I suppose that the server gets the trajectory from MoveIt!, reads the next target point in the trajectory, and sends it to the controllers.
And correct me if I am wrong, your idea is to somehow:
- make the server subscribe to a topic where the position of the user is published
- compute the corresponding speed_ratio in a callback (speed_ratio would be an attribute of the server class)
- just before sending the next target point to the controllers, multiply its velocity by the speed_ratio

If it works like this I think it is feasible.
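The speed_ratio step could be sketched like this; the function name, the distance thresholds, and the msg.distance field are illustrative assumptions, not part of any real API:

```python
# Hedged sketch of the speed_ratio idea: map the tracked human's distance
# to a scaling factor. All names and thresholds are illustrative.

def speed_ratio_from_distance(distance, d_stop=0.5, d_full=2.0):
    """1.0 when the human is far (>= d_full), 0.0 when closer than
    d_stop, and linearly interpolated in between."""
    if distance <= d_stop:
        return 0.0
    if distance >= d_full:
        return 1.0
    return (distance - d_stop) / (d_full - d_stop)

# e.g. inside the action server's subscriber callback:
#   self.speed_ratio = speed_ratio_from_distance(msg.distance)
```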
Do you know where I could find examples of a FollowJointTrajectory action server?
The Joint Trajectory Action Server (JTAS) provided by Rethink Robotics seems to call another server for which I can't access the source code:
https://github.com/RethinkRobotics/intera_sdk/blob/master/intera_interface/scripts/joint_trajectory_action_server.py

Also, thanks for the "Minor detail"; I'll keep that in mind.

v4hn

Jun 7, 2017, 12:15:07 PM
to Clément BAZERQUE, moveit...@googlegroups.com
First off, all of this basically belongs on answers.ros.org and not on the list.
For further concrete questions, please raise them there.

On Wed, Jun 07, 2017 at 07:50:39AM -0700, Clément BAZERQUE wrote:
> *Concerning option 1:*
> - the robot will shake, make weird noise and scare the human collaborator
> (and reducing the publish rate of user position is dangerous). I would like
> to keep smooth movements.
> - also using randomized planners (RRT for example) would produce different
> trajectories and sometimes force the robot change direction (maybe not if
> forcing linear trajectories with computeCartesianPath).

It depends on the low-level implementation of the action server that controls the arm,
but generally, yes, these shortcomings will arise.

> *Concerning option 2 (best hope):*
> However, I don't know how a FollowJointTrajectory action server works.
> I suppose that the server gets the trajectory from MoveIt!, reads the next
> target point in the trajectory, send it to the controllers.

Usually the FollowJointTrajectory action server *is part* of the controller
and tries to control the arm to stay close to the trajectory.

> And correct me if I am wrong, your idea is to somehow:
> - make the server subscribe to a topic where the position of the user is
> published
> - compute the corresponding speed_ratio in a callback (speed_ratio would be
> an attribute of the server class)
> - just before sending the next target point to the controllers, multiply
> its velocity by the speed_ratio

This description is rather specific but the general direction is what I meant, yes.

> If it works like this I think it is feasible.
> *Do you know where I could find examples of FollowJointTrajectory action
> server ?*

The abstract ros_control implementation is here
http://wiki.ros.org/joint_trajectory_controller
but I don't think this will help you a lot.

> The Joint Trajectory Action Server (JTAS) provided by Rethink Robotics
> seems to call another server for which I can't access the source code:
> https://github.com/RethinkRobotics/intera_sdk/blob/master/intera_interface/scripts/joint_trajectory_action_server.py

You will have to (or should) deal with the controller for your robot.
I don't have a Sawyer (sadly, Rethink is still hesitant to give them away for free :)),
so I don't know whether you will have to talk to Rethink for this or whether you can
hook your code somewhere into the pipeline.

Clément BAZERQUE

Jun 8, 2017, 3:09:04 AM
to MoveIt! Users, clement....@gmail.com, m...@v4hn.de
Ok, thank you for all these answers :)
I will talk with Rethink to see what they propose.
If that is too complicated, the last option would be to send only positions to the JTAS;
I know it is possible to control the speed ratio from a topic in that configuration.

dejanira...@gmail.com

Jun 21, 2017, 1:42:21 AM
to MoveIt! Users, clement....@gmail.com, m...@v4hn.de
Hi Clément,

About your first post: you mentioned that you are tracking a human. I was wondering what sort of technique you used for that, e.g. skeleton tracking with a Kinect v2.

I am currently looking at building upon a working skeleton tracker in ROS Kinetic, but the available options do not seem to work (e.g. openni_tracker, openni2_tracker, kinect2_tracker, etc.). Most of them depend on OpenNI 2 and NiTE 2, which are hard to get working with the Kinect v2 drivers in ROS Kinetic and Ubuntu 16.04.

Thank you.

Dejanira

Clément BAZERQUE

Jun 21, 2017, 3:20:20 AM
to MoveIt! Users, clement....@gmail.com, m...@v4hn.de
Hi Dejanira,

Indeed, I am using the Kinect v1 for skeleton tracking.
As I have to work on Ubuntu 14.04 with ROS Indigo, I didn't have many options for tracking.
It seems that for my configuration the only solution is to use OpenNI 1 with NiTE 1.5.2.3.
Unfortunately, I think I read somewhere that the Kinect v2 is not compatible with that version of OpenNI.

Clément