
Moving a Spark DataFrame from Python to Scala within Zeppelin

I created a Spark DataFrame in a Python paragraph in Zeppelin:

from pyspark.sql import SQLContext

sqlCtx = SQLContext(sc)
spDf = sqlCtx.createDataFrame(df)


where df is a pandas DataFrame:

print(type(df))
<class 'pandas.core.frame.DataFrame'>


What I want to do is move spDf from the Python paragraph to a Scala paragraph. Using z.put looks like a reasonable way to do this:

z.put("spDf", spDf)


and I got this error:

AttributeError: 'DataFrame' object has no attribute '_get_object_id'


Any suggestions on how to fix the error, or another way to move spDf?

Answer

You can put the internal Java object, not the Python wrapper. A pyspark DataFrame is just a thin wrapper around a JVM object, and z.put hands objects over to the JVM side, which is why putting the wrapper itself fails:

%pyspark

df = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
z.put("df", df._jdf)

and then, in the Scala paragraph, cast it back to the correct type:

val df = z.get("df").asInstanceOf[org.apache.spark.sql.DataFrame]
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]
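
Once cast, it behaves like any other DataFrame on the Scala side. A quick sanity check might look like this (a small sketch, reusing the k/v columns from the %pyspark example above):

df.printSchema()
df.filter(df("k") > 1).show()
// should print the single row (2, bar)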

but it is better to register a temporary table:

%pyspark

df.registerTempTable("df")

and use SQLContext.table to read it on the Scala side:

val df = sqlContext.table("df")
// df: org.apache.spark.sql.DataFrame = [k: bigint, v: string]
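
Because the table is registered against the shared SQLContext, it can also be queried with plain SQL from the Scala paragraph (a minimal sketch; the table and column names are the ones used above):

val filtered = sqlContext.sql("SELECT k, v FROM df WHERE k > 1")
filtered.show()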

To convert in the opposite direction, see Zeppelin: Scala Dataframe to python.
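
The same temporary-table trick works in reverse as well, assuming Zeppelin's shared sqlContext as in the examples above; a minimal sketch, where scalaDf stands in for a DataFrame created in a Scala paragraph:

scalaDf.registerTempTable("scalaDf")

and then, in a Python paragraph:

%pyspark

pyDf = sqlContext.table("scalaDf")
print(type(pyDf))
# <class 'pyspark.sql.dataframe.DataFrame'>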