I am running all of these on the master node.
When I set the execution engine to Spark, it fails with the error below:
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j2.properties Async: true
Hive Session ID = 7c8beab1-9ce3-4f12-aa6b-03e8c311e873
hive> set hive.execution.engine=spark
> ;
hive> insert into test values(1,"abc") ;
Query ID = root_20221202151329_f47b306f-17f9-4434-afe1-4e7ba42ad5f4
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 5ede6416-72d6-4f85-9e1d-3ec25a01a68d)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 5ede6416-72d6-4f85-9e1d-3ec25a01a68d
hive>
Hive version: 3.1.2
Spark version: 3.1.3
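For context, Hive on Spark reads the Spark connection settings from hive-site.xml, and return code 30041 is generally raised when the Spark client cannot be launched with those settings. The fragment below is a minimal sketch of the properties this session assumes; the values (yarn master, memory sizes) are illustrative, not taken from my actual config:

```xml
<!-- hive-site.xml: minimal, illustrative Hive-on-Spark settings (values are assumptions) -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <!-- Where the Spark client is submitted; "yarn" is an assumed value -->
  <name>spark.master</name>
  <value>yarn</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Illustrative executor memory; actual sizing depends on the cluster -->
  <name>spark.executor.memory</name>
  <value>2g</value>
</property>
```

If anything here looks off for Hive 3.1.2 with Spark 3.1.3, that may itself be the issue.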
-Nithin