[Cascading 3.0.0-wip-74] significantly longer flow planning time, even in local mode

Cyrille Chépélov

Mar 4, 2015, 12:45:28 PM3/4/15
to cascadi...@googlegroups.com
Hello,

While trying to port an existing, kind of complex batch system from Scala 2.10 / Scalding 0.11.0 / Cascading 2.5.3 to Scala 2.11.5 / Scalding 0.13.2 / Cascading 3.0.0-wip (with a view to giving Tez/Spark a spin, eventually), we encountered a major slowdown of the planning phase.

It went from about a minute (or so) to no less than three hours, with constant hardware on both driver and cluster side (Intel i7-3770K boxes). This is true even if the dataset is reduced to a few lines per input file, and the platform is local instead of hadoop2-mr1, which, at this stage, pretty much kills the "fail fast" ability.

As it happens, we had already noticed the beginning of a planner slowdown while developing our program on 2.5.3. Out of the dozen Scalding jobs within the batch, two went from "several planned per second" to "several seconds each to complete planning" (using timings from "INFO planner.HadoopPlanner: using application jar:" messages). We don't yet have proof, but we highly suspect these same two jobs are the ones that have now exploded into multi-hour planning exercises.

Back to Cascading 3.0.0-wip-74, we noticed the threads seem mostly stuck in this state:
   java.lang.Thread.State: RUNNABLE
    at java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1067)
    at org.jgrapht.graph.MaskEdgeSet$MaskEdgeSetNextElementFunctor.nextElement(MaskEdgeSet.java:128)
    at org.jgrapht.util.PrefetchIterator.getNextElementFromInnerFunctor(PrefetchIterator.java:116)
    at org.jgrapht.util.PrefetchIterator.hasMoreElements(PrefetchIterator.java:156)
    at org.jgrapht.util.PrefetchIterator.hasNext(PrefetchIterator.java:190)
    at org.jgrapht.traverse.CrossComponentIterator.addUnseenChildrenOf(CrossComponentIterator.java:360)
    at org.jgrapht.traverse.CrossComponentIterator.next(CrossComponentIterator.java:242)
    at org.jgrapht.alg.ConnectivityInspector.connectedSetOf(ConnectivityInspector.java:142)
    at org.jgrapht.alg.ConnectivityInspector.pathExists(ConnectivityInspector.java:205)
    at org.jgrapht.alg.RankingPathElementList.isGuardVertexDisconnected(RankingPathElementList.java:343)
    at org.jgrapht.alg.RankingPathElementList.isNotValidPath(RankingPathElementList.java:359)
    at org.jgrapht.alg.RankingPathElementList.addPathElements(RankingPathElementList.java:199)
    at org.jgrapht.alg.KShortestPathsIterator.tryToAddNewPaths(KShortestPathsIterator.java:358)
    at org.jgrapht.alg.KShortestPathsIterator.updateOutgoingVertices(KShortestPathsIterator.java:394)
    at org.jgrapht.alg.KShortestPathsIterator.next(KShortestPathsIterator.java:174)
    at org.jgrapht.alg.KShortestPaths.getPaths(KShortestPaths.java:147)
    at cascading.flow.stream.graph.StreamGraph.allPathsBetweenInclusive(StreamGraph.java:400)
    at cascading.flow.stream.graph.StreamGraph.countAllEventingPathsTo(StreamGraph.java:339)
    at cascading.flow.stream.element.MemorySpliceGate.bind(MemorySpliceGate.java:65)
    at cascading.flow.stream.graph.StreamGraph.bind(StreamGraph.java:166)
    at cascading.flow.local.stream.graph.LocalStepStreamGraph.<init>(LocalStepStreamGraph.java:63)
    at cascading.flow.local.planner.LocalStepRunner.<init>(LocalStepRunner.java:60)
    at cascading.flow.local.planner.LocalFlowStepJob.<init>(LocalFlowStepJob.java:49)
    at cascading.flow.local.LocalFlowStep.createFlowStepJob(LocalFlowStep.java:136)
    at cascading.flow.local.LocalFlowStep.createFlowStepJob(LocalFlowStep.java:39)
    at cascading.flow.planner.BaseFlowStep.getCreateFlowStepJob(BaseFlowStep.java:769)
    at cascading.flow.BaseFlow.initializeNewJobsMap(BaseFlow.java:1281)
    at cascading.flow.BaseFlow.initialize(BaseFlow.java:219)
    at cascading.flow.planner.FlowPlanner.buildFlow(FlowPlanner.java:190)
    at cascading.flow.FlowConnector.connect(FlowConnector.java:456)
    at com.twitter.scalding.ExecutionContext$class.buildFlow(ExecutionContext.scala:47)
    at com.twitter.scalding.ExecutionContext$$anon$1.buildFlow(ExecutionContext.scala:94)
    at com.twitter.scalding.Job$$anonfun$buildFlow$1.apply(Job.scala:230)
Adding some logging at the end of the two highlighted methods in the stack trace above (allPathsBetweenInclusive and countAllEventingPathsTo in StreamGraph) produces this kind of output:
2015-03-04 18:17:19 INFO  Flow:1419 - [REDACTED1] rule registry: LocalRuleRegistry, result was selected
2015-03-04 18:17:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=264b2c34 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 264b2c34
2015-03-04 18:17:19 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=1 collapsedPathCount=0 ; TO DUCT=264b2c34 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=2a9b5828 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 2a9b5828
2015-03-04 18:17:19 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=1 collapsedPathCount=0 ; TO DUCT=2a9b5828 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=1ac0be61 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 1ac0be61
2015-03-04 18:17:19 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=1 collapsedPathCount=0 ; TO DUCT=1ac0be61 SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=6b6e85ee SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 6b6e85ee
2015-03-04 18:17:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 264b2c34 to 6b6e85ee
2015-03-04 18:17:19 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=0 collapsedPathCount=1 ; TO DUCT=6b6e85ee SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=6928613f SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:20 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 6928613f
2015-03-04 18:17:20 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 2a9b5828 to 6928613f
2015-03-04 18:17:20 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=0 collapsedPathCount=1 ; TO DUCT=6928613f SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:20 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, BEGIN counting all eventing paths to duct=707b11cf SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:20 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 4f905c47 to 707b11cf
2015-03-04 18:17:20 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counted all paths between, result=1 paths, from 1ac0be61 to 707b11cf
2015-03-04 18:17:20 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@11431d90, counting all eventing paths, result=1: nonCollapsedPathsCount=0 collapsedPathCount=1 ; TO DUCT=707b11cf SpliceGate{splice=GroupBy(FixedPathTypedDelimited(List(/tmp/sroL/inner/
[REDACTED]
2015-03-04 18:17:20 INFO  Flow:1419 - [REDACTED2] executed rule registry: LocalRuleRegistry, completed in: 00:00.123
2015-03-04 18:17:20 INFO  Flow:1419 - [REDACTED2] rule registry: LocalRuleRegistry, supports assembly with steps: 1, nodes: 1

The first Flow went very well and was planned in about a second or less.

Things start to get a little more complicated further down the road; first, this extract from a job which took 64 seconds overall to plan:
2015-03-04 18:17:30 INFO  Flow:1419 - [REDACTED3] rule registry: LocalRuleRegistry, result was selected

(...)

2015-03-04 18:18:15 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, BEGIN counting all eventing paths to duct=7cf1d02b SpliceGate{splice=CoGroup(_pipe_68*_pipe_69)[by: _pipe_68:[{1}:'key0'] _pipe_69:[{1}:'key1']], role=both}
2015-03-04 18:18:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=32 paths, from 16526d7a to 7cf1d02b
2015-03-04 18:18:19 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=1 paths, from 72407f7 to 7cf1d02b
2015-03-04 18:18:19 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counting all eventing paths, result=2: nonCollapsedPathsCount=1 collapsedPathCount=1 ; TO DUCT=7cf1d02b SpliceGate{splice=CoGroup(_pipe_68*_pipe_69)[by: _pipe_68:[{1}:'key0'] _pipe_69:[{1}:'key1']], role=both}
2015-03-04 18:18:19 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, BEGIN counting all eventing paths to duct=70a7bd3a SpliceGate{splice=CoGroup(_pipe_64*_pipe_65)[by: _pipe_64:[{1}:'key0'] _pipe_65:[{1}:'key1']], role=both}
2015-03-04 18:18:22 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=34 paths, from 16526d7a to 70a7bd3a
2015-03-04 18:18:22 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=1 paths, from 1131c015 to 70a7bd3a
2015-03-04 18:18:22 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counting all eventing paths, result=2: nonCollapsedPathsCount=1 collapsedPathCount=1 ; TO DUCT=70a7bd3a SpliceGate{splice=CoGroup(_pipe_64*_pipe_65)[by: _pipe_64:[{1}:'key0'] _pipe_65:[{1}:'key1']], role=both}
2015-03-04 18:18:22 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, BEGIN counting all eventing paths to duct=2df3ed79 SpliceGate{splice=GroupBy(_pipe_68*_pipe_69)[by:[{1}:'key']], role=both}
2015-03-04 18:18:25 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=32 paths, from 16526d7a to 2df3ed79
2015-03-04 18:18:25 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counted all paths between, result=1 paths, from 7cf1d02b to 2df3ed79
2015-03-04 18:18:25 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@2a5c43ab, counting all eventing paths, result=1: nonCollapsedPathsCount=0 collapsedPathCount=1 ; TO DUCT=2df3ed79 SpliceGate{splice=GroupBy(_pipe_68*_pipe_69)[by:[{1}:'key']], role=both}


(...)
2015-03-04 18:18:35 INFO  Flow:1419 - [REDACTED4] executed rule registry: LocalRuleRegistry, completed in: 00:00.047
But something must be going wrong further down the road; in another job we start to see much more interesting timings:

2015-03-04 18:21:35 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, BEGIN counting all eventing paths to duct=789c8779 SpliceGate{splice=HashJoin(_pipe_190*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:22:04 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=2 paths, from 7e2f7780 to 789c8779
2015-03-04 18:22:04 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=1 paths, from 265ef763 to 789c8779
2015-03-04 18:22:04 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counting all eventing paths, result=2: nonCollapsedPathsCount=1 collapsedPathCount=1 ; TO DUCT=789c8779 SpliceGate{splice=HashJoin(_pipe_190*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:22:04 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, BEGIN counting all eventing paths to duct=5bc9e59a SpliceGate{splice=HashJoin(_pipe_178*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:22:34 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=2 paths, from 7e2f7780 to 5bc9e59a
2015-03-04 18:22:34 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=1 paths, from 49334de6 to 5bc9e59a
2015-03-04 18:22:34 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counting all eventing paths, result=2: nonCollapsedPathsCount=1 collapsedPathCount=1 ; TO DUCT=5bc9e59a SpliceGate{splice=HashJoin(_pipe_178*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:22:34 INFO  StreamGraph:338 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, BEGIN counting all eventing paths to duct=68742487 SpliceGate{splice=GroupBy(_pipe_190*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]
2015-03-04 18:23:08 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=2 paths, from 7e2f7780 to 68742487
2015-03-04 18:23:11 INFO  StreamGraph:405 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counted all paths between, result=1 paths, from 789c8779 to 68742487
2015-03-04 18:23:11 INFO  StreamGraph:388 - in graph cascading.flow.local.stream.graph.LocalStepStreamGraph@24c39843, counting all eventing paths, result=1: nonCollapsedPathsCount=0 collapsedPathCount=1 ; TO DUCT=68742487 SpliceGate{splice=GroupBy(_pipe_190*FixedPathTypedDelimited(List(/tmp/sroL/inner/[REDACTED]

(yes, that is over 30 seconds per countAllEventingPathsTo() call, and the duration is still increasing as I type)
Unfortunately, I cannot share much in the way of code, and it seems the complexity of my workload is part of the problem. Still, getting back to 2.5.3-level planning performance would be very nice.
Is there any more debugging information I could dig out to help make this easier?

I wonder whether it might be worthwhile to cache a map of KShortestPaths<> structures in StreamGraph.allPathsBetweenInclusive(), to avoid rebuilding one each time we visit a new "from" duct. Maybe the memory tradeoff would be worth it on large flows and hardly perceptible on smaller ones? I might try that tomorrow and report the findings.
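A rough sketch of what I have in mind (completely untested, the cache field name is made up, and it only helps if KShortestPaths can actually reuse its internal search state across successive getPaths() calls):

    // inside StreamGraph; assumes java.util.HashMap / java.util.Map are imported
    private final Map<Duct, KShortestPaths<Duct, DuctGraph.Ordinal>> kspCache = new HashMap<>();

    private List<GraphPath<Duct, DuctGraph.Ordinal>> allPathsBetweenInclusive( Duct from, Duct to )
      {
      // reuse one KShortestPaths instance per "from" duct instead of rebuilding it on every call
      KShortestPaths<Duct, DuctGraph.Ordinal> ksp = kspCache.get( from );

      if( ksp == null )
        {
        ksp = new KShortestPaths<>( ductGraph, from, Integer.MAX_VALUE );
        kspCache.put( from, ksp );
        }

      // only a win if getPaths() shares work between successive "to" ducts
      return ksp.getPaths( to );
      }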

    -- Cyrille

PS:  the debugging output was generated using this patch:

diff --git a/cascading-core/src/main/java/cascading/flow/stream/graph/StreamGraph.java b/cascading-core/src/main/java/cascading/flow/stream/graph/StreamGraph.java
index 4ea011b..3e47361 100644
--- a/cascading-core/src/main/java/cascading/flow/stream/graph/StreamGraph.java
+++ b/cascading-core/src/main/java/cascading/flow/stream/graph/StreamGraph.java
@@ -335,6 +335,9 @@ public class StreamGraph
    */
   public int countAllEventingPathsTo( Duct duct )
     {
+        LOG.info("in graph {}, BEGIN counting all eventing paths to duct={} {}",
+                this, Integer.toHexString(System.identityHashCode(duct)), duct);
+
     // find all immediate prior groups/ collapsed
     LinkedList<List<Duct>> allPaths = asPathList( allPathsBetweenInclusive( getHEAD(), duct ) );
 
@@ -382,6 +385,10 @@ public class StreamGraph
 
     int nonCollapsedPathsCount = allPaths.size() - collapsedPaths.size();
 
+    LOG.info("in graph {}, counting all eventing paths, result={}: nonCollapsedPathsCount={} collapsedPathCount={} ; TO DUCT={} {}",
+            this, nonCollapsedPathsCount + collapsedPathsCount, nonCollapsedPathsCount, collapsedPathsCount,
+            Integer.toHexString(System.identityHashCode(duct)), duct);
+
     // incoming == paths + prior
     return nonCollapsedPathsCount + collapsedPathsCount;
     }
@@ -393,7 +400,12 @@ public class StreamGraph
 
   private List<GraphPath<Duct, DuctGraph.Ordinal>> allPathsBetweenInclusive( Duct from, Duct to )
     {
-    return new KShortestPaths<>( ductGraph, from, Integer.MAX_VALUE ).getPaths( to );
+        List<GraphPath<Duct, DuctGraph.Ordinal>> result = new KShortestPaths<>( ductGraph, from, Integer.MAX_VALUE ).getPaths( to );
+
+        LOG.info("in graph {}, counted all paths between, result={} paths, from {} to {} ",
+                this, result.size(), Integer.toHexString(System.identityHashCode(from)), Integer.toHexString(System.identityHashCode(to)));
+
+        return result;
     }
 
   public static LinkedList<List<Duct>> asPathList( List<GraphPath<Duct, DuctGraph.Ordinal>> paths )

Chris K Wensel

Mar 4, 2015, 1:27:58 PM3/4/15
to cascadi...@googlegroups.com
This is great feedback. 

The good news is that you will only see this problem in Cascading local mode. If you planned with MR, the search space will (should) be orders of magnitude smaller, since the eventing-path count only happens within a Node, i.e. a mapper or reducer. In local mode there is only one Node (your whole graph, hence the non-linearity). That doesn't mean you won't see other issues.

Can you set the planner stats property (cascading.planner.stats.path) and send the results? Once for each of the planners (local, mapreduce, and tez?).
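For reference, a minimal sketch of one way to set it, assuming the property is simply passed along with the other properties handed to the FlowConnector (the output path below is just a placeholder; in Scalding you would route it through the job config instead):

    import java.util.Properties;

    import cascading.flow.FlowConnector;
    import cascading.flow.local.LocalFlowConnector;

    public class PlannerStatsExample
      {
      public static void main( String[] args )
        {
        Properties properties = new Properties();

        // hypothetical output location; the planner should write its stats under this path
        properties.setProperty( "cascading.planner.stats.path", "/tmp/planner-stats" );

        FlowConnector connector = new LocalFlowConnector( properties );

        // ... then connector.connect( flowDef ) as usual
        }
      }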

So the right fix is to replace KShortestPaths with a less explosive algorithm. First I'll double check whether caching is an option, but moving to Floyd-Warshall might be the fast track to that.
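To illustrate the idea only (this is not the actual fix, and it uses plain JGraphT 0.9-era types rather than the planner's Duct graph): Floyd-Warshall pays one all-pairs precomputation up front, after which each (from, to) reachability or distance query is a cheap lookup instead of a fresh k-shortest-paths search.

    import org.jgrapht.alg.FloydWarshallShortestPaths;
    import org.jgrapht.graph.DefaultDirectedGraph;
    import org.jgrapht.graph.DefaultEdge;

    public class FloydWarshallIdea
      {
      public static void main( String[] args )
        {
        // tiny stand-in for a duct graph: HEAD -> splice -> tail
        DefaultDirectedGraph<String, DefaultEdge> graph = new DefaultDirectedGraph<>( DefaultEdge.class );

        graph.addVertex( "HEAD" );
        graph.addVertex( "splice" );
        graph.addVertex( "tail" );
        graph.addEdge( "HEAD", "splice" );
        graph.addEdge( "splice", "tail" );

        // one O(V^3) precomputation over the whole graph
        FloydWarshallShortestPaths<String, DefaultEdge> allPairs = new FloydWarshallShortestPaths<>( graph );

        // per-pair queries are then cheap lookups; an infinite distance means "no path"
        boolean reachable = allPairs.shortestDistance( "HEAD", "tail" ) != Double.POSITIVE_INFINITY;

        System.out.println( "HEAD -> tail reachable: " + reachable );
        }
      }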

FWIW, the majority of the work in Cascading 3 was to replace the query planner. But without the 7 years of hardening the prior planner had, we are asking people to please test their apps to help identify this class of problems sooner rather than later. So thanks again for sending feedback.

I’ll update the thread as I make improvements.

ckw

Chris K Wensel

Mar 4, 2015, 11:29:57 PM3/4/15
to cascadi...@googlegroups.com
After a quick look, the #countAllEventingPathsTo method turned out to be a vestigial remainder from 2.x. I replaced it with a much simpler implementation.
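For the curious, and not necessarily what the fix does verbatim: the root of the cost is that the number of distinct paths can grow exponentially with the number of splices, so enumerating them all is explosive, whereas merely counting paths in a DAG only needs a memoized traversal, roughly O(V + E). A self-contained sketch of that counting idea:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class PathCountSketch
      {
      // count distinct paths from 'from' to 'to' in a DAG, memoizing per intermediate vertex
      static long countPaths( Map<String, List<String>> successors, String from, String to, Map<String, Long> memo )
        {
        if( from.equals( to ) )
          return 1L;

        if( memo.containsKey( from ) )
          return memo.get( from );

        long count = 0L;
        List<String> next = successors.get( from );

        if( next != null )
          for( String successor : next )
            count += countPaths( successors, successor, to, memo );

        memo.put( from, count );

        return count;
        }

      public static void main( String[] args )
        {
        // tiny diamond-shaped graph: HEAD -> {a, b} -> tail
        Map<String, List<String>> successors = new HashMap<>();

        successors.put( "HEAD", Arrays.asList( "a", "b" ) );
        successors.put( "a", Collections.singletonList( "tail" ) );
        successors.put( "b", Collections.singletonList( "tail" ) );

        System.out.println( countPaths( successors, "HEAD", "tail", new HashMap<String, Long>() ) ); // prints 2
        }
      }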

There should be a new wip 75 out by morning with the fix.

It would still be good to know if there are any other slowdowns we can tackle; access to the planner stats would still be interesting to me.

ckw


Chris K Wensel

Mar 5, 2015, 4:56:45 PM3/5/15
to cascadi...@googlegroups.com
Just a heads up: 3.0.0 wip-75 went out last night. Would love to go after any other slowdowns.

If you want to share all the planner info, see the below link for capturing planner debugging trace info. This is very helpful in making tests. Feel free to share privately if that's a concern.

ckw


Cyrille Chépélov

Mar 6, 2015, 2:51:13 AM3/6/15
to cascadi...@googlegroups.com
Hello Chris,

Thanks for the quick reply. I can confirm that your modifications in f5bb8759 solve the local planner issue: turnaround time for the complete "no-op" job (with destination files up to date) is below 7 seconds.

I'll nonetheless proceed with the -Dcascading.planner.plan.path= reporting on all planners (I'll send those to you off-list).

    -- Cyrille

--

Cyrille CHÉPÉLOV
Chief Innovation Officer

Transparency Rights Management
15 rue Jean-Baptiste Berlier - Hall B, 75013 Paris
T : +33 1 84 16 52 74 / F : +33 1 84 17 83 34
