Feynman27 - 4 months ago
Scala Question

Applying a map function to all elements of column in a Spark dataframe

I'm trying to apply a function to all elements of a column in a Spark dataframe in Scala. The input is a String that looks like "{count:10}", and I'd like to return only the Int part -- in this example 10. I can do this on a toy example:

val x = List("{\"count\": 107}", "{\"count\": 9}", "{\"count\": 456}")
val _list = x.map(x => x.substring(10,x.length-1).toInt)


But when I try to apply a udf to my dataframe I get an error:

val getCounts: String => Int = _.substring(10,x.length-1).toInt
import org.apache.spark.sql.functions.udf
val myUDF = udf(getCounts)

df.withColumn("post_shares_int", myUDF('post_shares)).show


Error output:

org.apache.spark.SparkException: Task not serializable

at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2060)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:707)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:706)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:706)
at org.apache.spark.sql.execution.ConvertToSafe.doExecute(rowFormatConverters.scala:56)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:187)
at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
....


Any help on how to do this would be extremely appreciated.

Answer

Forget the custom UDF; there is already a built-in function for this task, namely `regexp_extract` (see the Spark SQL functions documentation):

import org.apache.spark.sql.functions.regexp_extract

df.withColumn(
  "post_shares_int",
  regexp_extract(df("post_shares"), "^\\{\\w+:(\\d+)\\}$", 1)
).show
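
Note that `regexp_extract` returns a string column; if an actual `Int` is needed, as in the question, the extracted group can be cast. A minimal sketch, assuming the same column name as above:

// Cast the extracted group to an integer column
df.withColumn(
  "post_shares_int",
  regexp_extract(df("post_shares"), "^\\{\\w+:(\\d+)\\}$", 1).cast("int")
).show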

Following the comment below, it is better to use `get_json_object`, which actually parses JSON strings:

import org.apache.spark.sql.functions.get_json_object

df.withColumn(
  "post_shares_int",
  get_json_object(df("post_shares"), "$.count")
).show
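
For reference, here is a self-contained sketch of the `get_json_object` approach that can be run end-to-end. The column name and sample values are taken from the question; the `SparkSession` entry point assumes Spark 2.x (on older versions the same function is available through `SQLContext`). Since `get_json_object` also returns a string, the result is cast to an integer:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.get_json_object

val spark = SparkSession.builder().master("local[*]").appName("post-shares").getOrCreate()
import spark.implicits._

// Sample rows shaped like the strings in the question (column name assumed)
val df = Seq("""{"count": 107}""", """{"count": 9}""", """{"count": 456}""")
  .toDF("post_shares")

df.withColumn(
  "post_shares_int",
  get_json_object($"post_shares", "$.count").cast("int")
).show()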