Frederico Oliveira - 2 months ago

'PipelinedRDD' object has no attribute 'toDF' in PySpark

I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML Pipeline module from Spark.
I've just installed a fresh Spark 1.5.0 on Ubuntu 14.04 (no extra configuration).

from pyspark.mllib.util import MLUtils
from pyspark import SparkContext

sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

When I run it, I get the error:

Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'

What I can't understand is that if I run:

data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

directly inside the pyspark shell, it works.


The toDF method is a monkey patch executed inside the SQLContext constructor, so to be able to use it you have to create a SQLContext first:

from pyspark.sql import SQLContext # or from pyspark.sql import HiveContext

rdd = sc.parallelize([("a", 1)])
hasattr(rdd, "toDF")
## False

sqlContext = SQLContext(sc) # or HiveContext
hasattr(rdd, "toDF")
## True

rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## |  a|  1|
## +---+---+
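
Under the hood, toDF is essentially a shortcut for createDataFrame on the active context, so you can call that explicitly instead; a minimal sketch, reusing the rdd and sqlContext from above:

df = sqlContext.createDataFrame(rdd)  # same result as rdd.toDF()
df.show()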

Not to mention that you need a SQLContext to work with DataFrames anyway. This is also why the same call works directly inside the pyspark shell: the shell creates a SQLContext at startup and exposes it as sqlContext.
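
Putting it together, a minimal sketch of a fixed version of the original script (keeping the local file path from the question):

from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.mllib.util import MLUtils

sc = SparkContext("local", "Teste Original")
sqlContext = SQLContext(sc)  # must be created before toDF becomes available

data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
data.printSchema()  # columns inferred from LabeledPoint: features and label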