I have a JSON server log file that I want to parse with Spark 2.2.0 and the Java API. I convert it to a Dataset using
Dataset<Row> df = spark.read().json(args[0]);
df.printSchema();
root
 |-- timestamp: long (nullable = true)
 |-- results: struct (nullable = true)
 |    |-- entities: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- entity_id: string (nullable = true)
 |    |    |    |-- score: long (nullable = true)
 |    |    |    |-- is_available: boolean (nullable = true)
 |    |-- number_of_results: long (nullable = true)
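For reference, one input record matching this schema looks something like this (values made up for illustration):

{"timestamp": 1509667200, "results": {"entities": [{"entity_id": "a", "score": 3, "is_available": true}, {"entity_id": "b", "score": 1, "is_available": false}], "number_of_results": 2}}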
From each record I want to keep only the available entity with the lowest score, so that the schema becomes:
root
 |-- timestamp: long (nullable = true)
 |-- results: struct (nullable = true)
 |    |-- entity: struct (nullable = true)
 |    |    |-- entity_id: string (nullable = true)
 |    |    |-- score: long (nullable = true)
 |    |    |-- is_available: boolean (nullable = true)
You can apply a user-defined function on your array column:
// Imports used by this snippet
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.ArrayType;
import org.apache.spark.sql.types.DataType;
import scala.collection.JavaConversions;
import scala.collection.Seq;

// UDF that returns the available entity with the lowest score.
// Field indices follow the schema above: 0 = entity_id, 1 = score, 2 = is_available
UDF1<Seq<Row>, Row> getElement = seq -> {
    Row bestRow = null;
    long bestRowScore = Long.MAX_VALUE;
    for (Row r : JavaConversions.seqAsJavaList(seq)) {
        if (r.getBoolean(2) && r.getLong(1) < bestRowScore) {
            bestRow = r;
            bestRowScore = r.getLong(1);
        }
    }
    return bestRow;
};
// The UDF returns one element of the array, so its return type
// is the element type of the "results.entities" column
ArrayType arrayType = (ArrayType) df.select(df.col("results.entities"))
        .schema().fields()[0].dataType();
DataType elementType = arrayType.elementType();
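If you prefer, the element type can also be spelled out explicitly instead of being derived from the DataFrame. This is just a sketch that restates the printed schema using the public DataTypes factory:

import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Same element type as above, built by hand
StructType entityType = DataTypes.createStructType(new StructField[]{
    DataTypes.createStructField("entity_id", DataTypes.StringType, true),
    DataTypes.createStructField("score", DataTypes.LongType, true),
    DataTypes.createStructField("is_available", DataTypes.BooleanType, true)
});
// entityType can then be passed to register(...) in place of elementType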
// Register the UDF under the name used by callUDF below
spark.udf().register("getElement", getElement, elementType);
// Apply the UDF on the dataset; the alias gives the new column a readable name
Dataset<Row> transformedDF = df.select(
        df.col("timestamp"),
        functions.callUDF("getElement", df.col("results.entities")).alias("entity"));
transformedDF.printSchema();
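With the alias added above, the final printSchema() should produce something close to:

root
 |-- timestamp: long (nullable = true)
 |-- entity: struct (nullable = true)
 |    |-- entity_id: string (nullable = true)
 |    |-- score: long (nullable = true)
 |    |-- is_available: boolean (nullable = true)

As a side note, if every record has a distinct timestamp, you can get the same result without a UDF by exploding the array and ranking the entities with a window function. A sketch, under that uniqueness assumption:

import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;

// One row per (record, entity) pair
Dataset<Row> exploded = df.select(
        df.col("timestamp"),
        functions.explode(df.col("results.entities")).alias("entity"));

// Keep available entities only, then take the lowest score within each record
WindowSpec byRecord = Window.partitionBy("timestamp").orderBy(functions.col("entity.score"));
Dataset<Row> best = exploded
        .filter(functions.col("entity.is_available"))
        .withColumn("rank", functions.row_number().over(byRecord))
        .filter(functions.col("rank").equalTo(1))
        .drop("rank");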