Generally, Spark will copy anything specified as a Spark property prefixed with "spark.hadoop.*" into the underlying Hadoop configuration after stripping off that prefix. See this code for the behavior:
It doesn't seem to be well documented, and I'm not sure whether there are any plans to ever deprecate the functionality, but a lot of code has probably come to rely on it by now.
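To make the propagation concrete, here's a minimal PySpark sketch (the key name my.custom.setting is just an illustrative placeholder, and sc._jsc.hadoopConfiguration() is the py4j handle I'm assuming for peeking at the underlying Hadoop Configuration):

from pyspark import SparkConf, SparkContext

# Set a property with the "spark.hadoop." prefix; Spark should copy it into
# the Hadoop Configuration with that prefix stripped off.
conf = SparkConf() \
    .setAppName("hadoop-prefix-demo") \
    .set("spark.hadoop.my.custom.setting", "some-value")  # illustrative key

sc = SparkContext(conf=conf)

# Peek at the underlying Hadoop Configuration via the JavaSparkContext.
hadoop_conf = sc._jsc.hadoopConfiguration()
print(hadoop_conf.get("my.custom.setting"))  # expected: "some-value"

sc.stop()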
Of course, different classes may interact with the Hadoop configuration differently, and as far as I can tell, code that constructs its own "new Configuration()" instead of going through SparkContext.hadoopConfiguration() may not pick up those propagated settings, but either way it's worth a try:
Dataproc CLI:
gcloud beta dataproc jobs submit pyspark <your-script.py> --cluster <cluster-name> \
    --properties spark.hadoop.spark.sql.parquet.output.committer.class=org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter
From a direct SSH session or other client:
pyspark --conf spark.hadoop.spark.sql.parquet.output.committer.class=org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter
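If you want to sanity-check that the property actually landed, something like this from inside the resulting pyspark shell should do it (just a sketch; sc._jsc is PySpark's handle on the underlying JavaSparkContext):

# The "spark.hadoop." prefix should have been stripped when the property was
# copied into the Hadoop configuration.
hadoop_conf = sc._jsc.hadoopConfiguration()
print(hadoop_conf.get("spark.sql.parquet.output.committer.class"))
# Expected: org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter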