Nir Ben Yaacov - 1 year ago
Scala Question

Flattening Rows in Spark

I am doing some testing with Spark using Scala. We usually read JSON files that need to be manipulated, as in the following example:



val test ="test.json")

How can I flatten it so that each element of an array column appears in its own row?


Answer

You can use the explode function:

scala> import org.apache.spark.sql.functions.explode
import org.apache.spark.sql.functions.explode

scala> val test ="""{"a":1,"b":[2,3]}""")))
test: org.apache.spark.sql.DataFrame = [a: bigint, b: array<bigint>]

scala> test.printSchema
 |-- a: long (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: long (containsNull = true)

scala> val flattened = test.withColumn("b", explode($"b"))
flattened: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]

scala> flattened.printSchema
 |-- a: long (nullable = true)
 |-- b: long (nullable = true)

+---+---+
|  a|  b|
+---+---+
|  1|  2|
|  1|  3|
+---+---+
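Conceptually, `explode` duplicates each row once per element of the array column, replacing the array with that single element. As a language-neutral sketch (plain Python, not the Spark API; `explode_rows` and `array_key` are hypothetical names), the semantics look like this:

```python
def explode_rows(rows, array_key):
    """Yield one copy of each row per element of row[array_key],
    with the array column replaced by that element.
    Rows whose array is empty are dropped, matching explode."""
    out = []
    for row in rows:
        for element in row[array_key]:
            new_row = dict(row)       # copy the other columns unchanged
            new_row[array_key] = element
            out.append(new_row)
    return out

rows = [{"a": 1, "b": [2, 3]}]
print(explode_rows(rows, "b"))
# [{'a': 1, 'b': 2}, {'a': 1, 'b': 3}]
```

Note that, like Spark's `explode`, this drops rows whose array is empty; if you need to keep them (with a null in place of the element), Spark provides `explode_outer` in newer versions.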