Significant planning slow-down in recent MoveIt! versions?

Stefan Kohlbrecher

Dec 10, 2014, 4:26:24 PM
to moveit...@googlegroups.com
Unfortunately I can't point to a specific commit, but it appears to me that planning has become a lot slower lately with a standard setup generated by the Setup Assistant. Simple planning tasks, like moving from one valid arm configuration to another without obstacle avoidance, took around 0.1 seconds on most robots a few weeks or months ago; now I regularly see planning times exceeding 1 or 2 seconds, about an order of magnitude slower. The output also seems to have changed: messages about parallel planning appear in the terminal that I don't remember seeing when things were still fast. Judging from the terminal output, multiple planning calls also appear to be triggered by a single press of the "Plan" button.
Can somebody explain what is going on, what was changed, and what can be done to get the old, fast planning behavior back?

regards,
Stefan


Michael Ferguson

Dec 10, 2014, 6:21:53 PM
to Stefan Kohlbrecher, moveit-users
Which ROS version and OS? How are you triggering planning? Also, if the robot config is public -- please point us at it.

-Fergs

Sachin Chitta

Dec 10, 2014, 7:03:46 PM
to Stefan Kohlbrecher, moveit-users
Can you post some of the output as well if possible?

Sachin

Stefan Kohlbrecher

Dec 12, 2014, 4:08:40 AM
to moveit...@googlegroups.com, stefan.ko...@gmail.com
Apologies for the delay. I looked into this some more, and it seems that most (though I'm not sure all) of what I observed is related to changed behavior of the rviz MotionPlanning plugin (or of the interaction between MoveIt! and rviz). If I reduce the allowed planning time in the spin box, the planner indeed finishes earlier and behaves much like what I remember seeing before the "mysterious slow-down".

So it looks like what I observed is related to changes in the MotionPlanning plugin setup and in the way the allowed planning time is specified. I'll still try to compare an older MoveIt! version against the current one to confirm there is no other reason for a slow-down, but it might take a little while until I get to it.

Stefan Kohlbrecher

Dec 12, 2014, 7:59:51 AM
to moveit...@googlegroups.com, stefan.ko...@gmail.com
Ok, so I compared the latest .debs on 12.04/Groovy (which ship an older MoveIt! version) with the latest .debs on 14.04/Indigo. On Groovy, planning to random goal configurations with default settings takes about 0.015 seconds, which is basically instantaneous. On Indigo, planning takes much longer, about 0.4 seconds on average. The really bad news is that move_group frequently crashes on Indigo. My complete test setup is available here: https://github.com/skohlbr/simple_test_atlas_moveit

I filed a ticket about move_group crashing with the Indigo .debs: https://github.com/ros-planning/moveit_ros/issues/542

Comparison videos:

12.04/Groovy:

14.04/Indigo:

All of them show move_group crashing during planning at the end. In the first video I used the standard 5-second planning time setting, while in the others I set it to 0.5 seconds (which appears to make the crash happen sooner).

It seems that planning was parallelized at some point to potentially make things faster, but for simple planning problems the user experience with the old Groovy MoveIt! version is currently a lot better, both in terms of speed and stability.

Dave Coleman

Dec 13, 2014, 3:59:12 PM
to Stefan Kohlbrecher, moveit-users
Off the top of my head, without testing, perhaps these changes are due to:

- Changes in OMPL
- Did the build farm change its flags? It currently builds only with -O2. Did you build from source in debug mode?
- Your start and goal states being different enough to cause the change

- dave

Michael Ferguson

Dec 13, 2014, 4:53:17 PM
to Dave Coleman, Stefan Kohlbrecher, moveit-users
A couple of points:

 * The build flags should now be the same between 12.04/Groovy and 14.04/Indigo; however, just to be sure, are you using moveit_core 0.6.12 and moveit_ros 0.6.3 (the absolute latest releases, for which I know -DNDEBUG was set in bloom)?
 * Stefan, can you test on Hydro/12.04? If Hydro runs fast but Indigo does not, that will significantly narrow down the number of commits to consider.

-Fergs

Stefan Kohlbrecher

Dec 15, 2014, 4:20:12 AM
to moveit...@googlegroups.com, davetc...@gmail.com, stefan.ko...@gmail.com
Dave:
- I'd definitely not rule out changes in OMPL; I'm actually not sure where this parallelization change was introduced (is it triggered inside MoveIt!, or is it a change inside OMPL, completely independent of MoveIt!?)
- I haven't built from source yet; everything is from .debs.
- I press "random valid" multiple times and plan. The Groovy version is always fastest, at an observed average planning time of ~0.02 s; Hydro clocks in at about ~0.07 s; and Indigo takes around 0.4 s and always crashes after multiple attempts. I think it is highly unlikely (basically impossible) that over multiple tests my randomized goal configurations are the only cause of these extreme performance differences.

Mike:
- I'm using the latest .debs on all OS/ROS distro combinations.
- I also tested with Hydro. It appears to be 2x-4x slower than the Groovy version, but given that we're talking ~20 ms vs. ~70 ms, the difference isn't noticeable during interaction via the MotionPlanning plugin. The interesting part is that Hydro still shows the "old" single-planning-call output on the terminal, while Indigo spams info about parallel planning, is slower, and crashes very frequently in my test setup.

Note:
- I generated different moveit configs for each distro and pushed them to my test setup: https://github.com/skohlbr/simple_test_atlas_moveit
- The timing difference between Groovy and Hydro might be related to some changes in default planning settings, or my computer might just be a little slower today than last week; I haven't looked into this.

Hydro test video:

Playlist with all the different tests:

Ioan Sucan

Dec 15, 2014, 12:35:11 PM
to Stefan Kohlbrecher, moveit-users, Dave Coleman
I think there were some issues with the build flags that get passed to targets in the .debs. We might need to re-release OMPL to account for that.

Michael Ferguson

Dec 15, 2014, 2:06:51 PM
to Ioan Sucan, Stefan Kohlbrecher, moveit-users, Dave Coleman
Ioan,

I just checked, and both Hydro and Indigo are compiled with the same flags on the farm (-O2 but no -DNDEBUG). Thus, I don't think a difference in speed between Hydro and Indigo is attributable to build flags alone.

-Fergs 

Stefan Kohlbrecher

Dec 23, 2014, 4:56:15 AM
to moveit...@googlegroups.com, isu...@gmail.com, stefan.ko...@gmail.com, davetc...@gmail.com
Some further findings:

It appears that the frequent crashing I'm seeing happens inside OMPL. I attached two semi-informative backtraces to the corresponding ticket: https://github.com/ros-planning/moveit_ros/issues/542. As this basically makes MoveIt! unusable on Indigo, it would be great if someone could take a look at it :)

I played around with the settings in the MotionPlanning plugin some more and observed that the "old" behavior (fast planning, efficient paths) can be restored by setting the planning time to 0 and the number of attempts to 1. One would expect that increasing the planning time and the number of attempts improves plan quality, but counter-intuitively this does not appear to be the case. Instead, increased planning time and more attempts result in plans that frequently exhibit a lot of random motion for multiple seconds before reaching the goal. Given that we're talking about sampling-based planning, I'm aware this can happen, but it almost looks as if, with parallel planning, the least efficient rather than the most efficient plan is selected at the end. Another explanation would be that plan optimization/shortcutting does not work properly in a time-constrained scenario, but I don't know enough about the internal workings, so these are just theories. Other explanations for the observed behavior are welcome.
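
For anyone who wants to reproduce this outside of rviz: the same two settings can be applied programmatically. Below is a minimal sketch against the Indigo-era move_group C++ interface; the group name "arm" is a placeholder for whatever your SRDF defines, and setRandomTarget() only roughly corresponds to the plugin's "random valid" button.

    #include <ros/ros.h>
    #include <moveit/move_group_interface/move_group.h>

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "fast_plan_test");
      ros::AsyncSpinner spinner(1);  // MoveGroup needs a spinner running
      spinner.start();

      // "arm" is a placeholder; use a planning group from your own SRDF.
      moveit::planning_interface::MoveGroup group("arm");

      // Mirror the plugin settings that restore the old behavior:
      // a minimal time budget and a single planning attempt.
      group.setPlanningTime(0.5);
      group.setNumPlanningAttempts(1);

      group.setRandomTarget();
      moveit::planning_interface::MoveGroup::Plan my_plan;
      if (group.plan(my_plan))
        ROS_INFO("Planning succeeded");
      return 0;
    }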

Mark Moll

Dec 23, 2014, 9:08:35 AM
to Stefan Kohlbrecher, moveit...@googlegroups.com, Ioan Alexandru Şucan, davetc...@gmail.com
Stefan,

That's very strange. I think that part of the OMPL code hasn't changed in a long time. Of course, that code could still have a bug that is only triggered by a change in how MoveIt! calls the OMPL planners.
--
Mark Moll

Stefan Kohlbrecher

Dec 27, 2014, 8:55:01 AM
to moveit...@googlegroups.com, stefan.ko...@gmail.com, isu...@gmail.com, davetc...@gmail.com, mm...@rice.edu
I definitely wouldn't rule that out. I just tested with my Atlas test setup and I get the exact same crash (the previous backtraces were recorded with a different, non-public 7-DOF arm), so the segfault can be reproduced with completely unrelated robot models. The quickest way to trigger the crash is to spam-click the "Plan" button after starting demo.launch.

Marco Esposito

Feb 8, 2015, 6:58:45 AM
to moveit...@googlegroups.com, stefan.ko...@gmail.com, isu...@gmail.com, davetc...@gmail.com, mm...@rice.edu
Hi Stefan,

I can definitely confirm the issue, and I think that it has always been a corner case that was made more evident by the planning parallelization.

I have been working with a UR5 for more than a year, and planning has always been problematic, so much so that the authors of the ur5_moveit_config package had to create a URDF of the robot with limited joint ranges precisely to avoid this issue (see https://github.com/ros-industrial/universal_robot/issues/112).

I just moved to a KUKA LWR, which has very good planning times (a rock-solid 0.030 seconds), and as soon as I attached a tool via a fixed joint I ran into the same issue again (50% planning success, and in the successful cases a planning time of 5 seconds and a lot of dancing in the result).

I hope this helps with debugging; in the meantime, thanks a lot for finding this workaround!

Ciao,
Marco

Marco Esposito

Mar 17, 2015, 1:19:43 PM
to moveit...@googlegroups.com, stefan.ko...@gmail.com, isu...@gmail.com, davetc...@gmail.com, mm...@rice.edu
Hi guys,

I did some testing, and I am quite sure that the problem lies in OMPL's path hybridization. Planning times decreased dramatically after disabling it in the ompl_interface::ModelBasedPlanningContext::solve method (it is hard-coded there).

Another source of slow-down is that the planning threads are created and destroyed on every call to ompl::tools::ParallelPlan::solve; pooling them would be much more efficient, and I will try to work on that. But maybe it would make more sense for MoveIt! to disable threading completely: single-threaded planning for the KUKA LWR4+ completes in 0.03 seconds on average, so it makes no sense to create 4 threads for that. At the very least it should be an option.
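
For illustration, here is a self-contained sketch (not MoveIt! code: a trivial 2D state space where every state is valid, assuming a C++11 build of OMPL) of how ompl::tools::ParallelPlan is driven. One thread is spawned per registered planner on every solve() call, and the last argument toggles hybridization:

    #include <ompl/base/ProblemDefinition.h>
    #include <ompl/base/ScopedState.h>
    #include <ompl/base/spaces/RealVectorStateSpace.h>
    #include <ompl/geometric/planners/rrt/RRTConnect.h>
    #include <ompl/tools/multiplan/ParallelPlan.h>

    namespace ob = ompl::base;
    namespace og = ompl::geometric;

    int main()
    {
      // Trivial 2D problem: unit square, everything collision-free.
      ob::StateSpacePtr space(new ob::RealVectorStateSpace(2));
      space->as<ob::RealVectorStateSpace>()->setBounds(-1, 1);
      ob::SpaceInformationPtr si(new ob::SpaceInformation(space));
      si->setStateValidityChecker([](const ob::State *) { return true; });
      si->setup();

      ob::ProblemDefinitionPtr pdef(new ob::ProblemDefinition(si));
      ob::ScopedState<> start(space), goal(space);
      start[0] = start[1] = -0.9;
      goal[0] = goal[1] = 0.9;
      pdef->setStartAndGoalStates(start.get(), goal.get());

      // One planning thread is created (and later joined) per added
      // planner on *every* solve() call -- the overhead described above.
      ompl::tools::ParallelPlan pp(pdef);
      for (int i = 0; i < 4; ++i)
        pp.addPlanner(ob::PlannerPtr(new og::RRTConnect(si)));

      // Wait up to 1 s for between 1 and 4 solutions; the final 'true' is
      // the hybridize flag, i.e. the step that blends the solutions.
      pp.solve(1.0, 1, 4, true);
      return 0;
    }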

Ciao,
Marco

Michael Ferguson

Mar 17, 2015, 2:00:26 PM
to Marco Esposito, moveit-users, Stefan Kohlbrecher, Ioan Alexandru Sucan, Dave Coleman, mm...@rice.edu
Marco,

Can you point me to the lines you disabled in solve?

Thanks,
-Fergs

Marco Esposito

Mar 17, 2015, 2:29:26 PM
to Michael Ferguson, moveit-users, Stefan Kohlbrecher, Ioan Alexandru Sucan, Dave Coleman, mm...@rice.edu
Hi Michael,

I changed the last parameter in the ompl_parallel_plan_.solve(ptc, 1, count, true) call to "false" (lines 548, 567, 580).
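
For context, the edit looks roughly like this (a sketch; the exact surrounding code and line numbers vary across MoveIt! versions):

    // In ompl_interface::ModelBasedPlanningContext::solve(): the last
    // argument of ompl::tools::ParallelPlan::solve() is the 'hybridize'
    // flag, so passing false disables path hybridization.
    ompl_parallel_plan_.solve(ptc, 1, count, false);  // was: true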

Ciao
Marco

Marco Esposito

Mar 23, 2015, 12:37:37 PM
to moveit...@googlegroups.com, mfe...@gmail.com, stefan.ko...@gmail.com, isu...@gmail.com, davetc...@gmail.com, mm...@rice.edu
Hi guys,

I am digging, and it looks like the hybridization slowdown is just an obvious symptom of some other underlying problem. As far as I understand, the hybridization itself works correctly, but of course it is slower when it is fed 15 paths of 100 states each instead of the typical 10 paths of 4 states each: hybridization matches states across pairs of paths, so the cost grows steeply with the total number of states (40 states in the typical case versus 1500 in the abnormal one), and correctly so.

The question is then: why are such abnormal paths generated just after adding a fixed joint with a tool to a robot model that otherwise shows very good planning performance?

A naive optimization would be "backtracking" over the last n fixed joints of the robot and planning only for the joints that can actually move, which should reduce to the original problem. It looks like this optimization is not there.

I'll keep you updated
Marco

Marco Esposito

Mar 24, 2015, 10:26:05 AM
to moveit...@googlegroups.com, mfe...@gmail.com, stefan.ko...@gmail.com, isu...@gmail.com, davetc...@gmail.com, mm...@rice.edu
Further update:

The problem seems to lie in the planner algorithm, or rather in OMPL's choice of default planner algorithm.

Planning performance for the LWR 4+ is very good with the default planner (LBKPIECE1): 0.03 seconds on average, 100% success.
As soon as I attach a tool to the robot, performance degrades: 2-15 seconds, 50% failure, horrible trajectories. But with RRTConnect, performance returns to the previous level.

Digging into the OMPL SelfConfig class, I found out that these two algorithms are the defaults for two different classes of problems (see the getDefaultPlanner method); the latter is the default choice if the StateSpace has no default projection.
My guess is that both in the case of the LWR 4+ with a tool attached and in the case of the UR5 (where I observed the same behavior), the problems are incorrectly classified as having a default projection.
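
In the meantime, selecting RRTConnect explicitly works around the bad default. A minimal sketch via the move_group C++ interface; the id "RRTConnectkConfigDefault" is an assumption based on the names the Setup Assistant conventionally writes to ompl_planning.yaml, so check your own file for the exact id:

    #include <moveit/move_group_interface/move_group.h>

    void planWithRRTConnect(moveit::planning_interface::MoveGroup &group)
    {
      // Bypass OMPL's automatic planner selection for this planning group.
      group.setPlannerId("RRTConnectkConfigDefault");
      moveit::planning_interface::MoveGroup::Plan plan;
      group.plan(plan);
    }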

Are any OMPL guys reading this? I will post over there otherwise. Either way, I think this might be useful information for anyone banging their head against the wall while just trying to move a robot with MoveIt!.

Ciao
Marco