Scala Question

Find aggregated sum of repetitions in Spark

Input:

Name1      Name2
arjun      deshwal
nikhil     choubey
anshul     pandyal
arjun      deshwal
arjun      deshwal
deshwal    arjun


Code used in Scala:

import org.apache.spark.sql.functions.{count, lit}

val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .load(FILE_PATH)
val result = df.groupBy("Name1", "Name2")
  .agg(count(lit(1)).alias("cnt"))


Output I am getting:

nikhil     choubey    1
anshul     pandyal    1
deshwal    arjun      1
arjun      deshwal    3


Required output:

nikhil     choubey    1
anshul     pandyal    1
deshwal    arjun      4

or

nikhil     choubey    1
anshul     pandyal    1
arjun      deshwal    4

Answer

I would approach this with a Set, which has no ordering and therefore compares only on its contents:

scala> val data = Array(
 |     ("arjun",   "deshwal"),
 |     ("nikhil",  "choubey"),
 |     ("anshul",  "pandyal"),
 |     ("arjun",   "deshwal"),
 |     ("arjun",   "deshwal"),
 |     ("deshwal", "arjun")
 | )
data: Array[(String, String)] = Array((arjun,deshwal), (nikhil,choubey), (anshul,pandyal), (arjun,deshwal), (arjun,deshwal), (deshwal,arjun))

scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:29

scala> val distDataSets = distData.map(tup => (Set(tup._1, tup._2), 1)).countByKey()
distDataSets: scala.collection.Map[scala.collection.immutable.Set[String],Long] = Map(Set(nikhil, choubey) -> 1, Set(arjun, deshwal) -> 4, Set(anshul, pandyal) -> 1)
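To print this in the two-name shape of the required output, you can iterate over the resulting map (a minimal sketch; the order of the two names inside each Set is arbitrary, so a pair may come out as either "arjun   deshwal" or "deshwal   arjun"):

distDataSets.foreach { case (names, cnt) =>
  // names is a Set[String] with the two names, cnt is the Long count
  println(s"${names.mkString("   ")}   $cnt")
}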

Hope this helps.
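If you would rather stay in the DataFrame API from the question, the same order-insensitive idea can be sketched by normalizing each pair before grouping, for example with the built-in least/greatest functions (a sketch, assuming Spark 1.5+ and the same df and column names Name1/Name2 as above; it keys each row by the alphabetically smaller and larger of the two names):

import org.apache.spark.sql.functions.{col, count, greatest, least, lit}

val result = df
  .withColumn("a", least(col("Name1"), col("Name2")))    // alphabetically smaller of the two names
  .withColumn("b", greatest(col("Name1"), col("Name2"))) // alphabetically larger of the two names
  .groupBy("a", "b")
  .agg(count(lit(1)).alias("cnt"))

Because (arjun, deshwal) and (deshwal, arjun) both normalize to the same (a, b) pair, they fall into one group and the count becomes 4.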