Jack Lei - 2 months ago
Scala Question

How to filter Spark dataframe if one column is a member of another column

I have a dataframe with two columns (one string and one array of strings):

root
 |-- user: string (nullable = true)
 |-- users: array (nullable = true)
 |    |-- element: string (containsNull = true)


How can I filter the dataframe so that the resulting dataframe only contains rows where user is in users?

Answer

Sure, it's possible and not hard. You can achieve this with a UDF.

import org.apache.spark.sql.functions._

// Assumes a Spark shell, where sc and the DataFrame implicits are already in scope.
val df = sc.parallelize(Array(
  ("1", Array("1", "2", "3")),
  ("2", Array("1", "2", "2", "3")),
  ("3", Array("1", "2"))
)).toDF("user", "users")

// An array column is passed to a UDF as a Seq; the Boolean return type is inferred.
val inArray = udf((id: String, users: Seq[String]) => users.contains(id))

df.where(inArray($"user", $"users")).show()

The output is:

+----+------------+
|user|       users|
+----+------------+
|   1|   [1, 2, 3]|
|   2|[1, 2, 2, 3]|
+----+------------+
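
As a side note, the same filter can usually be expressed without a custom UDF by going through the built-in array_contains SQL function via expr, since the SQL form accepts a column as the value to look up. A minimal sketch, assuming the same df as above:

import org.apache.spark.sql.functions.expr

// array_contains(users, user) is evaluated per row, checking whether the value
// in the user column appears in the users array column.
df.where(expr("array_contains(users, user)")).show()

This should produce the same two rows as the UDF version; the UDF remains handy if you need more elaborate membership logic than a plain contains check.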