Hi,
I am using
spark-notebook-0.6.2-scala-2.10.4-spark-1.5.2-hadoop-2.6.0-with-parquet
I have created a notebook with the following metadata:
{
"name": "testStandalone",
"user_save_timestamp": "1970-01-01T01:00:00.000Z",
"auto_save_timestamp": "1970-01-01T01:00:00.000Z",
"language_info": {
"name": "scala",
"file_extension": "scala",
"codemirror_mode": "text/x-scala"
},
"trusted": true,
"customLocalRepo": null,
"customRepos": null,
"customDeps": null,
"customImports": null,
"customArgs": null,
"customSparkConf": {
"
spark.app.name": "Notebook",
"spark.master": "spark://gauss:7077",
"spark.executor.memory": "1G",
"spark.deploy.defaultCores": "4"
},
"kernelspec": {
"name": "spark",
"display_name": "Scala [2.10.4] Spark [1.5.2] Hadoop [2.6.0] {Parquet ✓}"
}
}
My notebook can connect to the standalone cluster at spark://gauss:7077.
But the number of cores shown on the Spark dashboard is 0 (please see the attached screenshot).
May I know how to set the number of cores per executor?
Thanks in advance for your assistance!
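In case it helps, this is the customSparkConf I was planning to try next; I am assuming "spark.executor.cores" and "spark.cores.max" are the relevant keys for a standalone cluster, but please correct me if that is wrong:

"customSparkConf": {
  "spark.app.name": "Notebook",
  "spark.master": "spark://gauss:7077",
  "spark.executor.memory": "1G",
  "spark.executor.cores": "2",
  "spark.cores.max": "4"
}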
Shing