Dear All,
While running a standard benchmark query (Q02 from BigBench, a.k.a. TPCx-BB) on Spark on Dataproc, I get the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 132 in stage 4.0 failed 4 times, most recent failure: Lost task 132.3 in stage 4.0 (TID 3718, internal, executor 31): ExecutorLostFailure (executor 31 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 24.3 GB of 24 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
I am using n1-standard-16 workers (16 vCPUs, 60 GB memory each). Does anyone here know how to fix this error? Thanks in advance!
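The message itself suggests boosting spark.yarn.executor.memoryOverhead, so this is roughly what I'm planning to try next. The sketch below shows where I'd set it when building the session (normally I'd pass these via spark-submit/spark-defaults instead), and the exact numbers are just my guesses, not values I've verified:

    import org.apache.spark.sql.SparkSession

    // Rough sketch of what I'm considering; the sizes below are guesses, not tested values.
    // As I understand it, the 24 GB container limit in the error is roughly
    // spark.executor.memory + spark.yarn.executor.memoryOverhead, so raising the
    // overhead raises that limit (at the cost of fitting fewer executors per node).
    val spark = SparkSession.builder()
      .appName("tpcx-bb-q02")
      .config("spark.executor.memory", "20g")
      .config("spark.yarn.executor.memoryOverhead", "4096") // MiB; default is max(384, 10% of executor memory)
      .config("spark.executor.cores", "4") // fewer concurrent tasks per executor should also ease memory pressure
      .enableHiveSupport()
      .getOrCreate()

If bumping the overhead isn't the right lever here (versus lowering spark.executor.cores or repartitioning), I'd appreciate a pointer in the right direction.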
Thanks,
-Umar