I'm again evaluating various ways to compact data in Hive tables into
larger files. This, in conjunction with ALTER TABLE ... COMPUTE
STATISTICS, can dramatically speed up some types of queries. With the
latest Hive/MR3, I'm getting the error below when running ALTER TABLE
... CONCATENATE. I vaguely recall hitting this "this.reader is null"
error before with another query, but I don't remember how I fixed or
worked around it.
David
set mapreduce.input.fileinputformat.split.minsize=268435456;
set hive.exec.orc.default.block.size=268435456;
alter table mytable partition (day='2024-02-15') concatenate;
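
For context, the statistics step mentioned above follows the
compaction and looks roughly like this (same placeholder table and
partition; a sketch of the pattern, not my exact script, and assuming
the standard ANALYZE TABLE syntax):

```sql
-- Gather basic partition-level stats after compacting the partition
ANALYZE TABLE mytable PARTITION (day='2024-02-15') COMPUTE STATISTICS;
-- Optionally gather column-level stats as well
ANALYZE TABLE mytable PARTITION (day='2024-02-15') COMPUTE STATISTICS FOR COLUMNS;
```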
INFO : Compiling command(queryId=hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa): alter table mytable partition (day='2024-02-15') concatenate
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa); Time taken: 0.116 seconds
INFO : Executing command(queryId=hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa): alter table mytable partition (day='2024-02-15') concatenate
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Run MR3 instead of Tez
INFO : MR3Task.execute(): hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa:169
INFO : Starting MR3 Session...
INFO : Finished building DAG, now submitting: hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa:169
INFO : Status: Running (Executing on MR3 DAGAppMaster): hive_20240223193644_23064ca5-8105-40da-afd1-75f9260669fa:169
INFO : Status: Running
INFO : File Merge: -/-
INFO : File Merge: 0(+269)/269
INFO : File Merge: 2(+269)/269
Traceback (most recent call last):
File "/home/dengel/bin/run-hive-query", line 153, in <module>
run_hql(verbose, cursor, hql)
File "/home/dengel/bin/run-hive-query", line 122, in run_hql
raise Exception(
Exception: query returned abnormal status ERROR_STATE (TGetOperationStatusResp(status=TStatus(statusCode=0, infoMessages=None, sqlState=None, errorCode=None, errorMessage=None), operationState=5, sqlState='08S01', errorCode=2, errorMessage='Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.DDLTask. Terminating unsuccessfully: Vertex failed, vertex_4354_0000_167_00. File Merge 269 tasks 67012 milliseconds: Failed, Some(Task unsuccessful: File Merge, task_4354_0000_167_00_000002, java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:223)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.run(MergeFileRecordProcessor.java:156)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:359)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.processKeyValuePairs(OrcFileMergeOperator.java:180)
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.process(OrcFileMergeOperator.java:74)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:214)
... 16 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.processKeyValuePairs(OrcFileMergeOperator.java:128)
... 18 more
java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:460)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:915)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:908)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:716)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:690)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:649)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:152)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.getMRInput(MergeFileRecordProcessor.java:256)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.init(MergeFileRecordProcessor.java:82)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:358)
... 14 more
java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:460)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:915)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:908)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:716)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:690)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:649)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:152)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.getMRInput(MergeFileRecordProcessor.java:256)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.init(MergeFileRecordProcessor.java:82)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:358)
... 14 more
)', taskStatus='[{"returnValue":2,"errorMsg":"org.apache.hadoop.hive.ql.metadata.HiveException: Terminating unsuccessfully: Vertex failed, vertex_4354_0000_167_00. File Merge 269 tasks 67012 milliseconds: Failed, Some(Task unsuccessful: File Merge, task_4354_0000_167_00_000002, java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:223)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.run(MergeFileRecordProcessor.java:156)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:359)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.processKeyValuePairs(OrcFileMergeOperator.java:180)
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.process(OrcFileMergeOperator.java:74)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:214)
... 16 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.io.orc.Reader.getObjectInspector()" because "this.reader" is null
at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.processKeyValuePairs(OrcFileMergeOperator.java:128)
... 18 more
java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:460)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:915)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:908)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:716)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:690)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:649)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:152)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.getMRInput(MergeFileRecordProcessor.java:256)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.init(MergeFileRecordProcessor.java:82)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:358)
... 14 more
java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:417)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
at com.datamonad.mr3.tez.ProcessorWrapper.run(TezProcessor.scala:63)
at com.datamonad.mr3.worker.LogicalIOProcessorRuntimeTask.$anonfun$run$1(RuntimeTask.scala:316)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.hadoop.hive.ql.plan.MapWork.getPathToPartitionInfo()" because "this.mrwork" is null
at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:460)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:915)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:908)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:716)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:690)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:649)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:152)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.getMRInput(MergeFileRecordProcessor.java:256)
at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.init(MergeFileRecordProcessor.java:82)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:358)
... 14 more
)","beginTime":1708717004780,"endTime":1708717071851,"taskId":"Stage-0","taskState":"FINISHED","taskType":"DDL","name":"DDL","elapsedTime":67071}]', operationStarted=1708717004647, operationCompleted=1708717071880, hasResultSet=False, progressUpdateResponse=TProgressUpdateResp(headerNames=['VERTICES', 'MODE', 'STATUS', 'TOTAL', 'COMPLETED', 'RUNNING', 'PENDING', 'FAILED', 'KILLED'], rows=[['File Merge ', 'container', 'Failed', '269', '2', '266', '1', '3', '0']], progressedPercentage=0.0074349441565573215, status=1, footerSummary='VERTICES: 00/01', startTime=1708717004821)))
--
David Engel
da...@istwok.net