🐳 ~ gcloud dataproc clusters create --project broad-gatk-test --bucket broad-gatk-test-cluster test-cluster2 --zone us-central1-c
Waiting on operation [projects/broad-gatk-test/regions/global/operations/fc6bdfe0-3a93-4a0a-a105-4d3e5977a3f7].
Waiting for cluster creation operation...done.
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/broad-gatk-test/regions/global/operations/fc6bdfe0-3a93-4a0a-a105-4d3e5977a3f7] failed: Google Cloud Dataproc Agent reports failure. If logs are available, they can be found in 'gs://broad-gatk-test-cluster/google-cloud-dataproc-metainfo/462a8c1b-b0ce-4eb2-9532-830365f79dc1/test-cluster2-m'.

🐳 ~ gsutil cat gs://broad-gatk-test-cluster/google-cloud-dataproc-metainfo/462a8c1b-b0ce-4eb2-9532-830365f79dc1/test-cluster2-m
CommandException: No URLs matched: gs://broad-gatk-test-cluster/google-cloud-dataproc-metainfo/462a8c1b-b0ce-4eb2-9532-830365f79dc1/test-cluster2-m
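(My guess: the path Dataproc prints is a GCS prefix, not a single object, which is why gsutil cat on it matches nothing. Listing the prefix and then cat-ing the startup-script log under it should work; the exact file name below is an assumption and may differ by image version.)

gsutil ls gs://broad-gatk-test-cluster/google-cloud-dataproc-metainfo/462a8c1b-b0ce-4eb2-9532-830365f79dc1/test-cluster2-m/
# then cat whichever log it lists, e.g. (file name is a guess):
gsutil cat gs://broad-gatk-test-cluster/google-cloud-dataproc-metainfo/462a8c1b-b0ce-4eb2-9532-830365f79dc1/test-cluster2-m/dataproc-startup-script_output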
google-dataproc-startup: E: Could not open file /var/lib/apt/lists/http.debian.net_debian_dists_jessie_main_binary-amd64_Packages - open (2: No such file or directory)
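(Possibly relevant, though this is only a guess: that missing apt lists file usually means apt-get update failed or hadn't finished on the node, so retrying it in the startup/init script sometimes gets past the error. Sketch only, not a confirmed fix:)

# hypothetical init-action snippet: retry apt-get update a few times before installing anything
for i in 1 2 3; do apt-get update && break; sleep 10; done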
gcloud beta dataproc jobs submit spark --project broad-gatk-test --cluster cluster-1 --jar gs://hellbender/test/staging/lb_staging/gatk-all-4.alpha-191-gcacec92-SNAPSHOT-spark_d83f0056fb986bf07efb16e4fb2298cb.jar PrintReadsSpark -I gs://broad-gatk-test-cluster/src/test/resources/large/CEUTrio.HiSeq.WGS.b37.NA12878.20.21.bam -O output.bam --sparkMaster yarn-client
ERROR: (gcloud.beta.dataproc.jobs.submit.spark) Unable to submit job, cluster 'cluster-1' is not in a healthy state.
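(Before resubmitting, a couple of checks that might help; this assumes the project/cluster names above, and on older SDKs the diagnose command may only exist under gcloud beta dataproc:)

gcloud dataproc clusters describe cluster-1 --project broad-gatk-test   # status.state should be RUNNING before submitting jobs
gcloud dataproc clusters diagnose cluster-1 --project broad-gatk-test   # bundles node logs/configs into a tarball in GCS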
google-dataproc-startup: /************************************************************
google-dataproc-startup: SHUTDOWN_MSG: Shutting down NameNode at test-cluster4-m.c.broad-gatk-test.internal/10.128.0.5
google-dataproc-startup: ************************************************************/
Mar 25, 2016 9:13:16 PM com.google.cloud.hadoop.services.agent.hdfs.HdfsAdminClientImpl getStorageReport
INFO: Fetching Datanode storage report
Mar 25, 2016 9:13:17 PM com.google.cloud.hadoop.services.agent.protocol.MetadataGcsClient updateAgent
INFO: New node status: detail: "Insufficient number of data nodes reporting to start cluster"
state: SETUP_FAILED
Mar 25, 2016 9:24:42 PM com.google.cloud.hadoop.services.agent.MasterRequestReceiver$NormalWorkReceiver receivedSystemTask
INFO: Received new taskId '941a844a-03ad-4fdb-8a91-2e8d21cd02cb'
Mar 25, 2016 9:24:42 PM com.google.cloud.hadoop.services.agent.task.AbstractTaskHandler$1 call
INFO: Running EXECUTE_COMMAND task...
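(To see why not enough datanodes reported, one thing I'd try, assuming the workers actually came up and the cluster hasn't been torn down yet, is to SSH to the master and ask HDFS directly; master name/zone below are from this run and would need to match the failing cluster:)

gcloud compute ssh test-cluster2-m --zone us-central1-c --project broad-gatk-test
# then, on the master:
hdfs dfsadmin -report | grep -i 'live datanodes'   # how many datanodes registered with the NameNode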