Training keeps failing since the data volume increased


MayP

May 14, 2018, 4:26:57 AM
to actionml-user
Hi.

I was running PIO + UR on a single AWS EC2 m5.xlarge instance (16 GiB memory).

My dataset was about 5 million events, roughly 1 GB.

Since I increased the data to over 50 million events (about 10 GB), training keeps failing.

I tried an m5.4xlarge (64 GiB memory), but training still fails.

This is the error output when I run 'pio train -- --driver-memory 4g --executor-memory 4g':

[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: com.actionml.DataSource@481b2f10
[INFO] [Engine$] Preparator: com.actionml.Preparator@5f726750
[INFO] [Engine$] AlgorithmList: List(com.actionml.URAlgorithm@50b46e24)
[INFO] [Engine$] Data sanity check is on.
[INFO] [DataSource] Received events List(read30)
[INFO] [Engine$] com.actionml.TrainingData does not support data sanity check. Skipping check.
[INFO] [Preparator] EventName: read30
[INFO] [Preparator] Dimensions rows : 261376 columns: 58450
[INFO] [Preparator] Number of user-ids after creation: 261376
[INFO] [Engine$] com.actionml.PreparedData does not support data sanity check. Skipping check.
[INFO] [URAlgorithm] Actions read now creating correlators
[INFO] [PopModel] PopModel popular using end: 2018-05-14T08:02:40.978Z, and duration: 315360000, interval: 2008-05-16T08:02:40.978Z/2018-05-14T08:02:40.978Z
[INFO] [PopModel] PopModel getting eventsRDD for startTime: 2008-05-16T08:02:40.978Z and endTime 2018-05-14T08:02:40.978Z
[INFO] [URAlgorithm] Correlators created now putting into URModel
[INFO] [URAlgorithm] Index mappings for the Elasticsearch URModel: Map(rank-read30 -> (float,false), read30 -> (keyword,true))
[INFO] [URModel] Converting cooccurrence matrices into correlators
[INFO] [URModel] Group all properties RDD
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[Stage 32:>   (0 + 0) / 4][Stage 34:>   (0 + 0) / 4][Stage 43:>   (0 + 8) / 8][INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.opencl.GPUMMul solver
[INFO] [RootSolverFactory$] Unable to create class GPUMMul: attempting OpenMP version
[INFO] [RootSolverFactory$] Creating org.apache.mahout.viennacl.openmp.OMPMMul solver
[INFO] [RootSolverFactory$] org.apache.mahout.viennacl.openmp.OMPMMul$
[INFO] [RootSolverFactory$] Unable to create class OMPMMul: falling back to java version
[Stage 32:>   (0 + 4) / 4][Stage 34:>   (0 + 4) / 4][Stage 44:>   (0 + 0) / 2][WARN] [TableRecordReaderImpl] We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.
[WARN] [TableRecordReaderImpl] We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.
[WARN] [TableRecordReaderImpl] We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.
[WARN] [TableRecordReaderImpl] We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.
[Stage 32:>   (0 + 4) / 4][Stage 34:>   (0 + 4) / 4][Stage 44:>   (0 + 0) / 2][WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[Stage 32:>   (0 + 4) / 4][Stage 34:>   (0 + 4) / 4][Stage 44:>   (0 + 0) / 2][WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[ERROR] [Executor] Exception in task 0.0 in stage 32.0 (TID 109)
[ERROR] [Executor] Exception in task 2.0 in stage 32.0 (TID 111)
[ERROR] [Executor] Exception in task 1.0 in stage 32.0 (TID 110)
[WARN] [TaskSetManager] Lost task 0.0 in stage 32.0 (TID 109, localhost, executor driver): org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:403)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:232)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:199)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 39 number_of_rows: 500 close_scanner: false next_call_seq: 0
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2463)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:97)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:214)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
        ... 15 more

[ERROR] [TaskSetManager] Task 0 in stage 32.0 failed 1 times; aborting job
[INFO] [ServerConnector] Stopped Spark@34be065a{HTTP/1.1}{0.0.0.0:4040}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@5f34907b{/stages/stage/kill,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@3d299393{/jobs/job/kill,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@4b1ec694{/api,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@65e22def{/,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@5426cb36{/static,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@20ead579{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@63f9b562{/executors/threadDump,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@6cbbb9c4{/executors/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@daf22f0{/executors,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@54b2fc58{/environment/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@7a6ea47d{/environment,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@158e9f6e{/storage/rdd/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@7645f03e{/storage/rdd,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@6c184d4d{/storage/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@173f1614{/storage,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@51a651c1{/stages/pool/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@40bb4f87{/stages/pool,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@55b5cd2b{/stages/stage/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@220c9a63{/stages/stage,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@479ac2cb{/stages/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@2bc7db89{/stages,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@66223d94{/jobs/job/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@6ec63f8{/jobs/job,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@20999517{/jobs/json,null,UNAVAILABLE,@Spark}
[INFO] [ContextHandler] Stopped o.s.j.s.ServletContextHandler@297c9a9b{/jobs,null,UNAVAILABLE,@Spark}
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[ERROR] [Executor] Exception in task 3.0 in stage 32.0 (TID 112)
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[WARN] [ScannerCallable] Ignore, probably already closed
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2f/temp_shuffle_5fa896a3-ac30-4706-92c4-f24f59eefaf2
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/1a/temp_shuffle_55395223-3856-4389-991c-936254e0c81e
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3a/temp_shuffle_d39b0621-dbe2-4d29-bb45-664f2e4c787c
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/1a/temp_shuffle_55395223-3856-4389-991c-936254e0c81e
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2f/temp_shuffle_5fa896a3-ac30-4706-92c4-f24f59eefaf2
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3a/temp_shuffle_d39b0621-dbe2-4d29-bb45-664f2e4c787c
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3b/temp_shuffle_72adcfd1-2b6b-4059-b275-354d2cccb340
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3b/temp_shuffle_72adcfd1-2b6b-4059-b275-354d2cccb340
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2c/temp_shuffle_33cbbf81-cb3b-4147-8e7e-3b0d3fe55a20
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2c/temp_shuffle_33cbbf81-cb3b-4147-8e7e-3b0d3fe55a20
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/06/temp_shuffle_a5e8815f-6457-4574-a36c-98f5f0dabad5
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/06/temp_shuffle_a5e8815f-6457-4574-a36c-98f5f0dabad5
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/20/temp_shuffle_3c9c3ba0-fd55-4585-acb1-ddf19978a937
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/20/temp_shuffle_3c9c3ba0-fd55-4585-acb1-ddf19978a937
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/07/temp_shuffle_4354aef8-327d-4a7d-b3b4-e232230c6807
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3c/temp_shuffle_cb287ebb-3c03-452c-868d-8df3b6f2241a
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/0f/temp_shuffle_40c6e665-13f3-4a8e-abc3-6b6bc94121fa
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2e/temp_shuffle_29066d04-3153-43b4-b00f-ee7d4c828bd1
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/07/temp_shuffle_4354aef8-327d-4a7d-b3b4-e232230c6807
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2e/temp_shuffle_29066d04-3153-43b4-b00f-ee7d4c828bd1
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/0f/temp_shuffle_40c6e665-13f3-4a8e-abc3-6b6bc94121fa
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3c/temp_shuffle_cb287ebb-3c03-452c-868d-8df3b6f2241a
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/37/temp_shuffle_39493dea-4b0a-4830-b3c7-e00ed09a47e1
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/12/temp_shuffle_53fa40dc-df9b-405b-a1c3-28a229844cce
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/37/temp_shuffle_39493dea-4b0a-4830-b3c7-e00ed09a47e1
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/12/temp_shuffle_53fa40dc-df9b-405b-a1c3-28a229844cce
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/32/temp_shuffle_54ad257b-9c4b-4b15-a990-8d83136a377e
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/31/temp_shuffle_5e7509e0-9133-4abf-9bc8-f8aaf426dabd
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/32/temp_shuffle_54ad257b-9c4b-4b15-a990-8d83136a377e
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/31/temp_shuffle_5e7509e0-9133-4abf-9bc8-f8aaf426dabd
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/06/temp_shuffle_1a574642-c3a8-4ae4-9989-780a0a85dfab
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/06/temp_shuffle_1a574642-c3a8-4ae4-9989-780a0a85dfab
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/1a/temp_shuffle_88432b8c-97d9-42fd-99fb-cda3146fc2f3
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/28/temp_shuffle_5452231d-7be4-4c66-b757-a352f49b3f6f
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/1a/temp_shuffle_88432b8c-97d9-42fd-99fb-cda3146fc2f3
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/28/temp_shuffle_5452231d-7be4-4c66-b757-a352f49b3f6f
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/33/temp_shuffle_4f084754-1403-49a2-8315-a3cdb0bb1aca
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/33/temp_shuffle_4f084754-1403-49a2-8315-a3cdb0bb1aca
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3f/temp_shuffle_060d6767-e875-432f-a382-b504ed97da9d
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/16/temp_shuffle_06cdae69-78ea-46a7-9e64-9b8903680cae
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/3f/temp_shuffle_060d6767-e875-432f-a382-b504ed97da9d
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/16/temp_shuffle_06cdae69-78ea-46a7-9e64-9b8903680cae
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/09/temp_shuffle_36c35f73-f06c-45b8-8a83-09902c2b62f3
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/30/temp_shuffle_10eed573-7aca-4351-b95a-d1befb0fc2b4
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/09/temp_shuffle_36c35f73-f06c-45b8-8a83-09902c2b62f3
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/30/temp_shuffle_10eed573-7aca-4351-b95a-d1befb0fc2b4
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/38/temp_shuffle_af5fe57a-da4b-4524-a181-c7f323299a07
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/24/temp_shuffle_840f8935-b618-4343-a228-2891b8f305e1
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/38/temp_shuffle_af5fe57a-da4b-4524-a181-c7f323299a07
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/24/temp_shuffle_840f8935-b618-4343-a228-2891b8f305e1
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/27/temp_shuffle_a2e691d3-08a5-4b58-9f46-e707e22f328f
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/27/temp_shuffle_a2e691d3-08a5-4b58-9f46-e707e22f328f
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/18/temp_shuffle_9680b1d5-eef6-46d1-b294-18bf24ca43bf
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/18/temp_shuffle_9680b1d5-eef6-46d1-b294-18bf24ca43bf
[ERROR] [DiskBlockObjectWriter] Uncaught exception while reverting partial writes to file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2c/temp_shuffle_076e4442-2169-4f10-b8d6-ed9f19960f92
[ERROR] [BypassMergeSortShuffleWriter] Error while deleting file /tmp/blockmgr-0a6b63d3-e883-4688-b05d-74e1b7394fb1/2c/temp_shuffle_076e4442-2169-4f10-b8d6-ed9f19960f92
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 32.0 failed 1 times, most recent failure: Lost task 0.0 in stage 32.0 (TID 109, localhost, executor driver): org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:403)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:232)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:199)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 39 number_of_rows: 500 close_scanner: false next_call_seq: 0
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2463)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:97)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:214)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
        ... 15 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at com.actionml.URModel.save(URModel.scala:79)
        at com.actionml.URAlgorithm.calcAll(URAlgorithm.scala:367)
        at com.actionml.URAlgorithm.train(URAlgorithm.scala:295)
        at com.actionml.URAlgorithm.train(URAlgorithm.scala:180)
        at org.apache.predictionio.controller.P2LAlgorithm.trainBase(P2LAlgorithm.scala:49)
        at org.apache.predictionio.controller.Engine$$anonfun$18.apply(Engine.scala:690)
        at org.apache.predictionio.controller.Engine$$anonfun$18.apply(Engine.scala:690)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:285)
        at org.apache.predictionio.controller.Engine$.train(Engine.scala:690)
        at org.apache.predictionio.controller.Engine.train(Engine.scala:176)
        at org.apache.predictionio.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:67)
        at org.apache.predictionio.workflow.CreateWorkflow$.main(CreateWorkflow.scala:251)
        at org.apache.predictionio.workflow.CreateWorkflow.main(CreateWorkflow.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:403)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:232)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:199)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 39 number_of_rows: 500 close_scanner: false next_call_seq: 0
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2463)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:97)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:214)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
        ... 15 more
[ERROR] [TaskContextImpl] Error in TaskCompletionListener
[ERROR] [TaskContextImpl] Error in TaskCompletionListener
[ERROR] [Executor] Exception in task 1.0 in stage 44.0 (TID 118)
[ERROR] [Executor] Exception in task 0.0 in stage 44.0 (TID 117)
[WARN] [NettyRpcEnv] RpcEnv already stopped.
[WARN] [NettyRpcEnv] RpcEnv already stopped.


Any opinion on what to check or fix?

Pat Ferrel

unread,
May 14, 2018, 2:04:14 PM5/14/18
to MayP, actionml-user
When you restrict the maximum memory with -- --driver-memory 4g --executor-memory 4g, you are not using the machine's extra memory. Please see the Spark settings in their docs. Set those closer to the size of available memory, remembering that you have other services sharing it.
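For example, a rough sizing sketch might look like this (the 16 GiB headroom for HBase/Elasticsearch/OS and the 1:3 driver/executor split are just illustrative assumptions, not a recommendation for your exact workload):

```shell
# Back-of-envelope Spark memory sizing on an m5.4xlarge (64 GiB RAM).
# Reserve headroom for HBase, Elasticsearch, and the OS; give Spark the rest.
total_gb=64        # instance RAM (m5.4xlarge)
headroom_gb=16     # assumed reserve for co-located services
spark_gb=$(( total_gb - headroom_gb ))

# Split the remainder between driver and executor (1:3 here, purely illustrative).
driver_gb=$(( spark_gb / 4 ))
executor_gb=$(( spark_gb / 4 * 3 ))

echo "pio train -- --driver-memory ${driver_gb}g --executor-memory ${executor_gb}g"
```

With the numbers above that prints a command using 12g for the driver and 36g for the executor; adjust the headroom to whatever your HBase and Elasticsearch heaps actually need.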

BTW, scaling vertically in this way is somewhat wasteful, since Spark is a resource hog and you only use it during training. We run Spark separately in our deployments and stop the AWS Spark machines when not training. This saves $$$. We usually put all services on separate machines, actually, so they can be scaled independently—but that may be overkill for some.
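In that setup, training just points at the external Spark master instead of spark-submit's local mode. A sketch (the hostname and memory figures here are hypothetical placeholders):

```shell
# Hypothetical: train against a standalone Spark cluster that is started
# only for training runs and stopped afterwards to save cost.
SPARK_MASTER="spark://spark-master.internal:7077"   # assumed master URL

echo "pio train -- --master ${SPARK_MASTER} --driver-memory 8g --executor-memory 24g"
```

The pio machine then only needs enough memory for the driver; executors draw on the cluster's RAM.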

MayP

unread,
May 16, 2018, 9:21:55 AM5/16/18
to actionml-user
Thanks Pat.

It was the memory limitation.

Now it works. :D



On Tuesday, May 15, 2018 at 3:04:14 AM UTC+9, pat wrote: