Jim Hendricks - 3 months ago
Scala Question

Error adding VectorAssembler to Spark ML Pipeline

I'm trying to add a VectorAssembler to the GBT pipeline example, and I get an error that the pipeline cannot find the features field. I'm reading in a sample file instead of a libsvm file, so I needed to transform the feature set.

Exception in thread "main" java.lang.IllegalArgumentException: Field "features" does not exist.

val df = sqlContext.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(...)

val sampleDF = df.sample(false,0.05,987897L)

val assembler = new VectorAssembler()
  .setOutputCol("features")

val labelIndexer = new StringIndexer()

val featureIndexer = new VectorIndexer()
  .fit(sampleDF)

val Array(trainingData, testData) = sampleDF.randomSplit(Array(0.7, 0.3))

val gbt = new GBTClassifier()

val pipeline = new Pipeline()
  .setStages(Array(assembler, labelIndexer, featureIndexer, gbt))

val model = pipeline.fit(trainingData)

val predictions = model.transform(testData)



Why are you calling fit() on featureIndexer?

If you call fit(sampleDF), VectorIndexer will look for a features column in sampleDF, but that dataset doesn't have such a column.

You should not call fit() manually; it will be done when you call fit() on the pipeline. The pipeline will then invoke all of its transformers and estimators in order: it calls fit on the assembler, passes the result to labelIndexer's fit, and passes that step's result on to featureIndexer's fit.

The DataFrame that featureIndexer.fit() receives inside the Pipeline will therefore have all the columns generated by the previous transformers.

Remember that if you are using a standalone VectorIndexer, you must fit the model on one dataset and then use that model to transform new data. If you are using VectorIndexer inside a pipeline, don't call fit() on the VectorIndexer; it will be called during Pipeline.fit().
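As a sketch of the standalone usage (the column names here are illustrative assumptions, not taken from your code):

```scala
import org.apache.spark.ml.feature.VectorIndexer

// Standalone use: fit once on training data, then reuse the fitted model.
val standaloneIndexer = new VectorIndexer()
  .setInputCol("features")          // assumes a "features" vector column exists
  .setOutputCol("indexedFeatures")
  .setMaxCategories(4)

val indexerModel = standaloneIndexer.fit(trainingData) // learns category metadata
val indexedTrain = indexerModel.transform(trainingData)
val indexedTest  = indexerModel.transform(testData)    // same model, new data
```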

In your code, sampleDF doesn't have a features column; however, during Pipeline.fit() that column will be added by the assembler stage.
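A corrected version of the pipeline might look like the sketch below. The assembler's input columns and the label column are placeholders, since they depend on your CSV schema:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.GBTClassifier
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler, VectorIndexer}

// No fit() calls on the individual stages: Pipeline.fit() runs them in order,
// so the "features" column exists by the time featureIndexer is fitted.
val assembler = new VectorAssembler()
  .setInputCols(Array("col1", "col2", "col3")) // placeholder column names
  .setOutputCol("features")

val labelIndexer = new StringIndexer()
  .setInputCol("label")             // placeholder label column
  .setOutputCol("indexedLabel")

val featureIndexer = new VectorIndexer()
  .setInputCol("features")          // produced by the assembler stage above
  .setOutputCol("indexedFeatures")
  .setMaxCategories(4)

val gbt = new GBTClassifier()
  .setLabelCol("indexedLabel")
  .setFeaturesCol("indexedFeatures")

val pipeline = new Pipeline()
  .setStages(Array(assembler, labelIndexer, featureIndexer, gbt))

val model = pipeline.fit(trainingData)  // each stage is fitted/applied in turn
val predictions = model.transform(testData)
```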