jupyter cannot find oracle jdbc driver

Lian Jiang

Nov 19, 2018, 3:09:13 PM
to jup...@googlegroups.com
Hi,

I am trying to use the Oracle JDBC driver to load a table from an Oracle DB, but I get an error.

The code:
#######################
import findspark
findspark.init()

from pyspark import SparkContext, SQLContext
import os

os.environ['PYSPARK_SUBMIT_ARGS'] = '--master yarn --deploy-mode client --driver-cores 4 --driver-memory 10g --num-executors 2 --executor-cores 6  --executor-memory 10g --driver-class-path /mnt/data/hdfs/ojdbc8.jar --jars /mnt/data/hdfs/ojdbc8.jar pyspark-shell'
sc = SparkContext(appName="Pi")
sqlCtx = SQLContext(sc)

url = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.3.69)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=myservice.com)))"
properties = {
    "user": "user",
    "password": "password"
}
df = sqlCtx.read.jdbc(url=url, table="myschema.mytable", properties=properties)
df.show(3)
#######################

The error:
#######################
Py4JJavaError: An error occurred while calling o63.jdbc.
: java.sql.SQLException: No suitable driver
	at java.sql.DriverManager.getDriver(DriverManager.java:315)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:85)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:85)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:84)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:34)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
	at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:254)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:745)
##################
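For context on the trace above: `DriverManager.getDriver` asks each registered JDBC driver whether it accepts the given URL, and "No suitable driver" means none did. That happens either because the driver jar never reached the driver JVM's classpath, or because the URL prefix does not match any registered driver. The Oracle thin driver only accepts URLs beginning with `jdbc:oracle:thin:`, so a quick sanity check on the URL from the post (reproduced here) rules out the second cause:

```python
# The JDBC URL from the original post, split for readability.
url = ("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)"
       "(HOST=127.0.3.69)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)"
       "(SERVICE_NAME=myservice.com)))")

# The Oracle thin driver only claims URLs with this prefix; if this check
# fails, DriverManager will report "No suitable driver" regardless of
# whether the jar is on the classpath.
print(url.startswith("jdbc:oracle:thin:"))  # prints True
```

Since the prefix is well-formed here, the problem is almost certainly that the driver class was never registered in the driver JVM.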

My Jupyter version is:
###################

The version of the notebook server is: 5.5.0
The server is running on this version of Python:
Python 2.7.15 |Anaconda, Inc.| (default, May 1 2018, 23:32:55) [GCC 7.2.0]
####################

ojdbc8.jar is available on all namenodes and datanodes.

Any idea? Thanks.

Lian Jiang

Nov 19, 2018, 7:24:51 PM
to jup...@googlegroups.com
Never mind. Problem solved.
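[Editor's note] The poster did not share their solution. For readers who land here with the same error: a common fix for "No suitable driver" with Spark's JDBC reader is to name the driver class explicitly in the connection properties, so Spark registers it before `DriverManager` is consulted. The sketch below shows that change; `oracle.jdbc.OracleDriver` is the standard thin-driver class shipped in ojdbc8.jar, but this is a likely fix, not necessarily the one the poster used:

```python
# Connection properties with the driver class named explicitly.
# Spark passes "driver" through and loads that class on the executors
# and driver before opening the JDBC connection.
properties = {
    "user": "user",
    "password": "password",
    "driver": "oracle.jdbc.OracleDriver",  # thin-driver class in ojdbc8.jar
}

# With a live SparkContext/SQLContext (as in the original post), the read
# call itself is unchanged apart from the extra property:
# df = sqlCtx.read.jdbc(url=url, table="myschema.mytable", properties=properties)
```

The jar still has to be on the driver classpath (`--driver-class-path` / `--jars`, as in the original `PYSPARK_SUBMIT_ARGS`); the `"driver"` property only tells Spark which class inside that jar to load.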