
Spark Standalone Mode: How to compress spark output written to HDFS

Related to my other question, but distinct:

someMap.saveAsTextFile("hdfs://HOST:PORT/out")


If I save an RDD to HDFS, how can I tell Spark to compress the output with gzip?
In Hadoop, it is possible to set

mapred.output.compress = true


and choose the compression algorithm with

mapred.output.compression.codec = <<classname of compression codec>>
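In plain MapReduce code these would go on the job configuration, roughly like this (old mapred API, just for comparison):

import org.apache.hadoop.io.compress.GzipCodec
import org.apache.hadoop.mapred.JobConf

// Hadoop-side equivalent: enable output compression and pick gzip.
val conf = new JobConf()
conf.setBoolean("mapred.output.compress", true)
conf.set("mapred.output.compression.codec", classOf[GzipCodec].getCanonicalName)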


How would I do this in Spark? Will these Hadoop settings work there as well?

Edit: I'm using Spark 0.7.2.

Answer

The saveAsTextFile method takes an optional second parameter: the class of the compression codec to use. So for your example, to use gzip, it should be something like this:

someMap.saveAsTextFile("hdfs://HOST:PORT/out", classOf[GzipCodec])
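Note that GzipCodec comes from the Hadoop libraries, so the call needs the corresponding import; a minimal sketch, assuming someMap is an existing RDD of strings:

import org.apache.hadoop.io.compress.GzipCodec

// Each partition is written as a gzip-compressed part file
// (part-00000.gz, part-00001.gz, ...) under the output path.
someMap.saveAsTextFile("hdfs://HOST:PORT/out", classOf[GzipCodec])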

UPDATE

Since you're using 0.7.2, which may not have that overload yet, you might be able to get the same compression behaviour via configuration options set at startup. I'm not sure whether this will work exactly, but you would need to go from these Hadoop-side settings:

// Hadoop JobConf calls that enable compression for map output and job
// output; c is a CompressionCodec class.
conf.setCompressMapOutput(true)
conf.set("mapred.output.compress", "true")
conf.setMapOutputCompressorClass(c)
conf.set("mapred.output.compression.codec", c.getCanonicalName)
conf.set("mapred.output.compression.type", CompressionType.BLOCK.toString)

to something like this:

System.setProperty("spark.hadoop.mapred.output.compress", "true")
System.setProperty("spark.hadoop.mapred.output.compression.codec", "true")
System.setProperty("spark.hadoop.mapred.output.compression.codec", "org.apache.hadoop.io.compress.GzipCodec")
System.setProperty("spark.hadoop.mapred.output.compression.type", "BLOCK")

If you get it to work, posting your config would probably be helpful to others as well.
