Spark job on Dataproc failing with Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument

mich.ta...@gmail.com

Dec 20, 2018, 4:38:15 AM
to Google Cloud Dataproc Discussions
Hi,

I am trying a basic Spark job in a Scala program. I compile it with SBT with the following dependencies:

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.1" % "provided"
libraryDependencies += "org.apache.phoenix" % "phoenix-spark" % "4.6.0-HBase-1.0"
libraryDependencies += "org.apache.hbase" % "hbase" % "1.2.6"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.6"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.2.6"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.2.6"
libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" % "2.2.0"
libraryDependencies += "org.mongodb" % "mongo-java-driver" % "3.8.1"
libraryDependencies += "org.apache.spark" %% "spark-streaming-twitter" % "1.6.3"
libraryDependencies += "com.google.cloud.bigdataoss" % "bigquery-connector" % "0.13.4-hadoop3"
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "1.9.4-hadoop3"
libraryDependencies += "com.google.code.gson" % "gson" % "2.8.5"
libraryDependencies += "com.google.guava" % "guava" % "27.0.1-jre"
libraryDependencies += "org.apache.httpcomponents" % "httpcore" % "4.4.8"

It compiles fine and creates the uber jar file. But when I run it, I get the following error.

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
at com.google.cloud.hadoop.io.bigquery.BigQueryStrings.parseTableReference(BigQueryStrings.java:68)
at com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration.configureBigQueryInput(BigQueryConfiguration.java:260)
at simple$.main(simple.scala:150)
at simple.main(simple.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Sounds like there is an incompatibility in Guava versions between compile time and runtime? These are the versions that are used:

  • Java openjdk version "1.8.0_181"
  • Spark version 2.3.2
  • Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
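
As a diagnostic, I can run this in spark-shell on the cluster to list which checkArgument overloads the runtime Guava actually provides (a quick sketch; I believe the failing four-argument overload only exists in newer Guava releases):

// List the checkArgument signatures visible to the running JVM
val ms = classOf[com.google.common.base.Preconditions].getMethods.filter(_.getName == "checkArgument")
ms.foreach(println)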

Appreciate any feedback.

Thanks,

Mich

Dan Sedov

Dec 20, 2018, 1:02:31 PM
to Google Cloud Dataproc Discussions
Hi Mich,

This is a common problem. When your job runs, it gets Hadoop's jars on the classpath, which include an older version of Guava. The solution is to shade/relocate Guava in your distribution.
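
For example, with sbt-assembly, something along these lines in build.sbt relocates Guava into a private namespace (a minimal sketch; the shaded.guava prefix is an arbitrary name of my choosing):

// Guava classes live under com.google.common, so that is the pattern to rename
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.guava.@1").inAll
)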

This post may help:

mich.ta...@gmail.com

Dec 21, 2018, 6:12:16 AM
to Google Cloud Dataproc Discussions
Thanks Dan and Muthu,

I am using a generic SBT file that works fine on classic in-house Hadoop. On my Google Compute Engine server I have accounted for Guava as follows:

lazy val root = (project in file(".")).
  settings(
    name := "${APPLICATION}",
    version := "1.0",
    scalaVersion := "2.11.8",
    mainClass in Compile := Some("myPackage.${APPLICATION}")
  )

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"  % "provided" exclude("org.apache.hadoop", "hadoop-client")
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.7.8"
libraryDependencies += "commons-io" % "commons-io" % "2.4"
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.1" % "provided"
libraryDependencies += "org.apache.phoenix" % "phoenix-spark" % "4.6.0-HBase-1.0"
libraryDependencies += "org.apache.hbase" % "hbase" % "1.2.3"

libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.6"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.2.6"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.2.6"
libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" % "2.2.0"
libraryDependencies += "org.mongodb" % "mongo-java-driver" % "3.8.1"
libraryDependencies += "org.apache.spark" %% "spark-streaming-twitter" % "1.6.3"
libraryDependencies += "com.google.cloud.bigdataoss" % "bigquery-connector" % "0.13.4-hadoop3"
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "1.9.4-hadoop3"
libraryDependencies += "com.google.code.gson" % "gson" % "2.8.5"
libraryDependencies += "com.google.guava" % "guava" % "27.0.1-jre"
libraryDependencies += "org.apache.httpcomponents" % "httpcore" % "4.4.8"
// META-INF discarding
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
assemblyShadeRules in assembly ++= Seq(
  ShadeRule.rename("com.google.guava.**" -> "my_conf.@1")
    .inLibrary("com.google.guava" % "config" % "27.0.1-jre")
    .inProject
)

According to this link the shading should work.

This compiles OK, and in the assembly output I can see:

[warn] Merging 'META-INF/maven/com.google.guava/failureaccess/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/failureaccess/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/guava/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/guava/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/listenablefuture/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/listenablefuture/pom.xml' with strategy 'discard'

However at run-time I still get the same error!

18/12/21 10:43:57 INFO org.spark_project.jetty.server.Server: Started @3012ms
18/12/21 10:43:57 INFO org.spark_project.jetty.server.AbstractConnector: Started ServerConnector@7a389761{HTTP/1.1,[http/1.1]}{0.0.0.0:55555}

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
        at com.google.cloud.hadoop.io.bigquery.BigQueryStrings.parseTableReference(BigQueryStrings.java:68)
        at com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration.configureBigQueryInput(BigQueryConfiguration.java:260)
        at simple$.main(simple.scala:150)
        at simple.main(simple.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


The problem is in this line of code:

BigQueryConfiguration.configureBigQueryInput(conf, fullyQualifiedInputTableId)

So I don't know what is going wrong.
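
One thing I may try next is printing which jar the conflicting class is actually loaded from (a quick diagnostic sketch, added to simple.scala just before the failing call):

// Where does the runtime get com.google.common.base.Preconditions from?
println(classOf[com.google.common.base.Preconditions].getProtectionDomain.getCodeSource.getLocation)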

Thanks,

Mich

mich.ta...@gmail.com

Dec 21, 2018, 3:28:36 PM
to Google Cloud Dataproc Discussions
BTW, if I search for guava under the .ivy2 directory I get:

find ./ -name "*guava*"
./cache/org.glassfish.jersey.bundles.repackaged/jersey-guava
./cache/org.glassfish.jersey.bundles.repackaged/jersey-guava/bundles/jersey-guava-2.22.2.jar
./cache/com.google.guava
./cache/com.google.guava/listenablefuture/ivy-9999.0-empty-to-avoid-conflict-with-guava.xml
./cache/com.google.guava/listenablefuture/jars/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar
./cache/com.google.guava/listenablefuture/ivydata-9999.0-empty-to-avoid-conflict-with-guava.properties
./cache/com.google.guava/listenablefuture/ivy-9999.0-empty-to-avoid-conflict-with-guava.xml.original
./cache/com.google.guava/guava-parent
./cache/com.google.guava/guava
./cache/com.google.guava/guava/bundles/guava-27.0.1-jre.jar
./cache/com.google.guava/guava/bundles/guava-14.0.1.jar

I use guava-27.0.1-jre in my SBT dependency, so I assume guava-14.0.1 is the older version that is pulled in transitively and used at runtime?
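
Perhaps I can trace which dependency drags in the old Guava with the sbt-dependency-graph plugin (a sketch; the plugin version below is a guess and depends on the sbt version):

// project/plugins.sbt
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")

Then from the sbt shell:

> whatDependsOn com.google.guava guava 14.0.1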

mich.ta...@gmail.com

Dec 27, 2018, 3:09:07 PM
to Google Cloud Dataproc Discussions
Hi,

I sorted this problem out. I rewrote the assembly with shade rules so that the old Guava jar files on the classpath are no longer picked up:

lazy val root = (project in file(".")).
  settings(
    name := "${APPLICATION}",
    version := "1.0",
    scalaVersion := "2.11.8",
    mainClass in Compile := Some("myPackage.${APPLICATION}")
  )
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "my_conf.@1").inAll
)
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" % "provided"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"  % "provided" exclude("org.apache.hadoop", "hadoop-client")
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.7.8"
libraryDependencies += "commons-io" % "commons-io" % "2.4"
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0" % "provided"
libraryDependencies += "com.google.cloud.bigdataoss" % "bigquery-connector" % "0.13.4-hadoop3"
libraryDependencies += "com.google.cloud.bigdataoss" % "gcs-connector" % "1.9.4-hadoop3"
libraryDependencies += "com.google.code.gson" % "gson" % "2.8.5"
libraryDependencies += "org.apache.httpcomponents" % "httpcore" % "4.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.4.0"
libraryDependencies += "com.github.samelamin" %% "spark-bigquery" % "0.2.5"

// META-INF discarding
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", "MANIFEST.MF") => MergeStrategy.discard
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
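
To double-check that the relocation really happened, listing the entries of the assembled jar shows the renamed packages (a sketch; the jar path below is hypothetical and depends on the project name):

// Scala REPL sketch: after shading, the uber jar should contain my_conf/...
// entries and no com/google/common/... entries
import java.util.jar.JarFile
import scala.collection.JavaConverters._
val entries = new JarFile("target/scala-2.11/simple-assembly-1.0.jar").entries().asScala.map(_.getName).toList
println(entries.count(_.startsWith("my_conf/")))           // expect > 0
println(entries.count(_.startsWith("com/google/common/"))) // expect 0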


Cheers,

Mich