How to add third-party Java JAR files for use in PySpark

Question:

How to add third-party Java JAR files for use in PySpark

I have some third-party database client libraries in Java. I want to access them from PySpark through the Py4J Java gateway.

E.g.: to make the client class (not a JDBC driver!) available to the Python client via the Java gateway:

java_import(gateway.jvm, "org.mydatabase.MyDBClient")

It is not clear where to add the third-party libraries to the JVM classpath. I tried adding them to a configuration file, but that did not seem to work. I get:

Py4JError: Trying to call a package

Also, when comparing to Hive: the Hive JAR files are not loaded via that file, which makes me suspicious. There seems to be some other mechanism for setting up the JVM-side classpath.

Answer #1:

You can add external JARs as arguments to pyspark:

pyspark --jars file1.jar,file2.jar
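Note that --jars takes a comma-separated list of paths with no spaces. A minimal sketch of building that argument programmatically, with placeholder JAR paths:

```python
# Build the comma-separated value expected by pyspark --jars.
# The JAR paths below are placeholders.
jars = ["/path/file1.jar", "/path/file2.jar"]
jars_arg = ",".join(jars)

# Full command as a list suitable for subprocess.run:
cmd = ["pyspark", "--jars", jars_arg]
```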
Answered By: Marl

Answer #2:

You can add the path to the JAR file using the Spark configuration at runtime.

Here is an example:

from pyspark import SparkConf, SparkContext

conf = SparkConf().set("spark.jars", "/path-to-jar/spark-streaming-kafka-0-8-assembly_2.11-2.2.1.jar")
sc = SparkContext(conf=conf)

Refer to the documentation for more information.

Answered By: AAB

Answer #3:

You can add --jars xxx.jar when using spark-submit:

./bin/spark-submit --jars xxx.jar

or set the environment variable SPARK_CLASSPATH:

SPARK_CLASSPATH='/path/xxx.jar:/path/xx2.jar'
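If you set SPARK_CLASSPATH from Python before launching, the entries are joined with the platform path separator; a minimal sketch with placeholder paths:

```python
import os

# Classpath entries are joined with os.pathsep (":" on Linux/macOS, ";" on Windows).
jars = ["/path/xxx.jar", "/path/xx2.jar"]
classpath = os.pathsep.join(jars)
os.environ["SPARK_CLASSPATH"] = classpath
```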

Answered By: Ryan Chou

Answer #4:

None of the above answers worked for me.

What I had to do with pyspark was:

pyspark --py-files /path/to/jar/xxxx.jar

For Jupyter Notebook:

from pyspark.sql import SparkSession

spark = (SparkSession
    .builder
    # spark.jars makes the JAR available to the driver and executors
    .config("spark.jars", "/path/to/jar/xxxx.jar")
    .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
    .config("spark.executor.cores", "4")
    .config("spark.executor.instances", "2")
    .getOrCreate())


Answered By: Gayatri

Answer #5:

  1. Extract the downloaded JAR file.
  2. Edit the system environment variables:
    • Add a variable named SPARK_CLASSPATH and set its value to the path of the extracted JAR file.

Eg: if you extracted the JAR file on the C drive into a folder named sparkts,
its value should be: C:\sparkts

  3. Restart your cluster.
Answered By: Umang singhal

Answer #6:

One more thing you can do is add the JAR to the pyspark jars folder where pyspark is installed, usually /python3.6/site-packages/pyspark/jars.

Be careful if you are using a virtual environment: the JAR needs to go into the pyspark installation inside the virtual environment.

This way you can use the JAR without passing it on the command line or loading it in your code.
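The jars folder can be derived from the package location, which also handles virtual environments; a minimal sketch, where pyspark_jars_dir is a hypothetical helper:

```python
import os

def pyspark_jars_dir(pyspark_init_path):
    """Given the path to pyspark's __init__.py, return the bundled jars folder."""
    return os.path.join(os.path.dirname(pyspark_init_path), "jars")

# In a real session you would pass pyspark.__file__, e.g.:
#   import pyspark, shutil
#   shutil.copy("xxxx.jar", pyspark_jars_dir(pyspark.__file__))
```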

Answered By: Nab

Answer #7:

Apart from the accepted answer, you also have the options below:

  1. if you are in a virtual environment, you can place the JAR in the pyspark jars folder,

    e.g. lib/python3.7/site-packages/pyspark/jars

  2. if you want the JVM to discover it, you can place it in the ext/ directory under your JRE installation

Answered By: D Untouchable

Answer #8:

I worked around this by dropping the JARs into a drivers directory and then creating a spark-defaults.conf file in the conf folder. Steps to follow:

To get to the conf path:
cd ${SPARK_HOME}/conf

vi spark-defaults.conf
spark.driver.extraClassPath /Users/xxx/Documents/spark_project/drivers/*

Then run your Jupyter notebook.
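The steps above can be scripted; a minimal sketch, where add_driver_classpath is a hypothetical helper, the drivers path is the placeholder from this answer, and the demo writes into a temporary directory rather than a real ${SPARK_HOME}/conf:

```python
import os
import tempfile

def add_driver_classpath(conf_dir, drivers_glob):
    """Append a spark.driver.extraClassPath entry to spark-defaults.conf."""
    conf_file = os.path.join(conf_dir, "spark-defaults.conf")
    with open(conf_file, "a") as f:
        f.write("spark.driver.extraClassPath %s\n" % drivers_glob)
    return conf_file

# Write into a temporary directory for demonstration purposes.
conf_dir = tempfile.mkdtemp()
conf_file = add_driver_classpath(conf_dir, "/Users/xxx/Documents/spark_project/drivers/*")
```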

Answered By: Sharvan Kumar
