Scala Question

How to concatenate values of the same column into a new column with comma delimiters in Spark

The format of input data as follows:

+------------+------+----------+
| date       | user | product  |
+------------+------+----------+
| 2016-10-01 | Tom  | computer |
| 2016-10-01 | Tom  | iphone   |
| 2016-10-01 | Jhon | book     |
| 2016-10-02 | Tom  | pen      |
| 2016-10-02 | Jhon | milk     |
+------------+------+----------+


And the format of output as follows:

+------+---------------------+
| user | products            |
+------+---------------------+
| Tom  | computer,iphone,pen |
| Jhon | book,milk           |
+------+---------------------+


The output shows all products each user bought, ordered by date.

I want to process this data with Spark. Can anyone help me, please? Thank you.

Answer

It's better to use a map/reduceByKey() combination rather than groupByKey(), since reduceByKey() merges values on each partition before shuffling. This also assumes the product values don't themselves contain commas.

// Read the data from a file with: val ordersRDD = sc.textFile("/file/path")
// Here the sample records are built inline instead:
val ordersRDD = sc.parallelize(List(
    ("2016-10-01", "Tom", "computer"),
    ("2016-10-01", "Tom", "iphone"),
    ("2016-10-01", "Jhon", "book"),
    ("2016-10-02", "Tom", "pen"),
    ("2016-10-02", "Jhon", "milk")))

// Key by (user, date), sort so products come out in date order per user,
// then drop the date and reduce by user, concatenating the products
val dtusrGrpRDD = ordersRDD.map(rec => ((rec._2, rec._1), rec._3))
   .sortByKey()
   .map(x => (x._1._1, x._2))
   .reduceByKey((acc, v) => acc + "," + v)

// If needed, convert it to a DataFrame (in a standalone app this requires
// import spark.implicits._; the spark-shell imports it automatically)
scala> dtusrGrpRDD.toDF("user", "products").show()
+----+-------------------+
|user|           products|
+----+-------------------+
| Tom|computer,iphone,pen|
|Jhon|          book,milk|
+----+-------------------+
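
For comparison, here is a minimal sketch of the same aggregation with the DataFrame API (assuming Spark 2.x, where collect_list and concat_ws live in org.apache.spark.sql.functions; note that collect_list does not formally guarantee row order after a shuffle, though sorting first usually works on small data):

import org.apache.spark.sql.functions.{collect_list, concat_ws}

val ordersDF = ordersRDD.toDF("date", "user", "product")

val productsDF = ordersDF
  .sort("date")                                               // order purchases by date
  .groupBy("user")
  .agg(concat_ws(",", collect_list("product")).as("products"))

productsDF.show()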