Python Question

How to add a new library like spark-csv to a prebuilt version of Apache Spark

I have built spark-csv and am able to use it from the pyspark shell using the following command:

bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
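
For reference, bin/spark-shell above launches the Scala shell; the analogous pyspark invocation, assuming --packages is supported for pyspark in this Spark version, would presumably be:

    bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3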

However, I get the following error when saving a DataFrame:

>>> df_cat.save("k.csv","com.databricks.spark.csv")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
    self._jdf.save(source, jmode, joptions)
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-", line 538, in __call__
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-", line 300, in get_return_value

Where should I place the jar file in my prebuilt Spark setup so that I can also access it directly from a Python editor?

Answer

At the time I used spark-csv, I also had to download the commons-csv jar (I'm not sure that is still required). Both jars were placed in the Spark distribution folder.

  1. I downloaded the jars as follows:

    wget http://search.maven.org/remotecontent?filepath=org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar -O commons-csv-1.1.jar
    wget http://search.maven.org/remotecontent?filepath=com/databricks/spark-csv_2.10/1.0.0/spark-csv_2.10-1.0.0.jar -O spark-csv_2.10-1.0.0.jar
  2. then started the pyspark shell with those jars as arguments:

    ./bin/pyspark --jars "spark-csv_2.10-1.0.0.jar,commons-csv-1.1.jar"
  3. and read a Spark DataFrame from a CSV file (a save example follows below):

    from pyspark.sql import SQLContext
    sqlContext = SQLContext(sc)
    df = sqlContext.load(source="com.databricks.spark.csv", path="/path/to/your/file.csv")
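
With the jars loaded this way, the save call from the question should also work; a minimal sketch, reusing the df from step 3:

    # writes through the spark-csv data source; "k.csv" ends up as a directory of part files
    df.save("k.csv", "com.databricks.spark.csv")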
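
As for using the package directly from a Python editor (the second part of the question), one option is to set PYSPARK_SUBMIT_ARGS before the SparkContext is created, so the jars are picked up outside the shell. This is a sketch under assumptions; I have not verified it across Spark versions, and the jar paths are placeholders:

    import os
    # assumption: the jars from step 1 sit in the current directory;
    # the trailing "pyspark-shell" token is required by newer Spark versions
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        "--jars spark-csv_2.10-1.0.0.jar,commons-csv-1.1.jar pyspark-shell"
    )
    # assumes pyspark and py4j are importable (e.g. SPARK_HOME's python dir on PYTHONPATH)
    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    sc = SparkContext("local[2]", "spark-csv-example")
    sqlContext = SQLContext(sc)
    df = sqlContext.load(source="com.databricks.spark.csv", path="/path/to/your/file.csv")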