Azkaban is now at version 3.0.0, and the source supports a Spark jobtype, but there is no documentation describing how to use it. Does anyone know how to configure this job type? I would like to know how running with type=spark differs from type=command.
type=spark
dependencies=wordcount1
class=com.test.spark.JavaWordCount
master=local[2]
executor-cores=2
num-executors=2
executor-memory=512M
name=wordcount
conf.spark.serializer=org.apache.spark.serializer.KryoSerializer
main.args=${param.inData} ${param.outData}
force.output.overwrite=true
input.path=${param.inData}
output.path=${param.outData}
==================================================
type=command
command=spark-submit \
--master local[2] \
--jars $LIBJARS \
--class com.test.spark.JavaWordCount \
--executor-cores 2 \
--num-executors 2 \
--executor-memory 512M \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
--name $name \
$JAR_PATH \
$lyd $ldd $lyc $ldc $hour_topic $day_topic
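To make the comparison concrete, here is a small sketch (my own illustration, not Azkaban code) of how the key=value properties in the type=spark block map one-to-one onto the spark-submit flags in the type=command block. The `props_to_submit` helper name is hypothetical; it only shows that both job types end up describing the same spark-submit invocation, the difference being that the spark jobtype builds the command for you from properties.

```shell
#!/bin/sh
# Hypothetical helper (an assumption for illustration, not part of Azkaban):
# translate the type=spark job properties into the equivalent spark-submit
# command line, to show the two job types describe the same launch.

# $1 = properties text (key=value, one per line); prints one spark-submit line
props_to_submit() {
  # _get PROPS KEY -> value of first matching KEY=VALUE line
  _get() { printf '%s\n' "$1" | sed -n "s/^$2=//p" | head -n 1; }
  printf 'spark-submit --master %s --class %s --executor-cores %s --num-executors %s --executor-memory %s --conf spark.serializer=%s\n' \
    "$(_get "$1" master)" \
    "$(_get "$1" class)" \
    "$(_get "$1" executor-cores)" \
    "$(_get "$1" num-executors)" \
    "$(_get "$1" executor-memory)" \
    "$(_get "$1" conf.spark.serializer)"
}

# Properties taken from the type=spark block above
props='master=local[2]
class=com.test.spark.JavaWordCount
executor-cores=2
num-executors=2
executor-memory=512M
conf.spark.serializer=org.apache.spark.serializer.KryoSerializer'

props_to_submit "$props"
```

Running this prints a single spark-submit line with the same flags as the type=command snippet (minus the jar path and program arguments, which the jobtype supplies separately).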