
Moving transformations from a Hive SQL query into Spark

val temp = sqlContext.sql(s"SELECT A, B, C, (CASE WHEN (D) in (1,2,3) THEN ((E)+0.000)/60 ELSE 0 END) AS Z from TEST.TEST_TABLE")
val temp1 = temp.map(row => ((row.getShort(0), row.getString(1)), (row.getDouble(2), row.getDouble(3))))
  .reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2))


Instead of the above code, which does the computation (the CASE evaluation) in the Hive layer, I would like to have the transformation done in Scala. How would I do it?

Is it possible to do the same evaluation while filling the data inside the map?

Answer
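Yes. Select the raw columns from Hive and evaluate the CASE condition in Scala while mapping over the rows: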
import org.apache.spark.sql.Row

val temp = sqlContext.sql(s"SELECT A, B, C, D, E FROM TEST.TEST_TABLE")

val tempTransform = temp.map(row => {
  // CASE WHEN D IN (1, 2, 3) THEN E / 60 ELSE 0, evaluated in Scala;
  // z must be a Double (0.0, not 0), or the later getDouble(3) fails at runtime
  val z = if (List[Double](1, 2, 3).contains(row.getDouble(3))) row.getDouble(4) / 60 else 0.0
  Row(row.getShort(0), row.getString(1), row.getDouble(2), z)
})
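Then key and aggregate exactly as before, now reading C and the computed Z from the transformed rows: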

val temp1 = tempTransform.map(row => ((row.getShort(0), row.getString(1)), (row.getDouble(2), row.getDouble(3))))
  .reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2))
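
As an alternative sketch (assuming Spark 1.5 or later, where the when/otherwise and isin column functions are available), the same CASE logic can stay in the DataFrame API instead of dropping to an RDD map; the groupBy/agg at the end is a hypothetical stand-in for the reduceByKey step:

import org.apache.spark.sql.functions.{col, when, sum}

val raw = sqlContext.sql("SELECT A, B, C, D, E FROM TEST.TEST_TABLE")

// CASE WHEN D IN (1, 2, 3) THEN E / 60 ELSE 0 END AS Z, as a column expression
val withZ = raw.withColumn("Z", when(col("D").isin(1, 2, 3), col("E") / 60).otherwise(0.0))

// Sum C and Z per (A, B) key, mirroring the reduceByKey above
val aggregated = withZ.groupBy("A", "B").agg(sum("C"), sum("Z"))

Staying in the DataFrame API lets Catalyst optimize the whole pipeline, whereas the RDD map above opts out of those optimizations.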