Scala Question

Read files recursively from sub directories with Spark, from S3 or the local filesystem

I am trying to read files from a directory that contains many sub directories. The data is in S3, and this is what I am doing:

val rdd = sc.newAPIHadoopFile(data_loc,
classOf[org.apache.hadoop.mapreduce.lib.input.TextInputFormat],
classOf[org.apache.hadoop.io.LongWritable],
classOf[org.apache.hadoop.io.Text])


This does not seem to work.

I'd appreciate any help.

Answer

Yes, it works; it just took a while to figure out how to get at the individual blocks/splits. The trick is to glob down to a specific directory inside every sub directory: s3n://bucket/root_dir/*/data/*/*/*
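
For illustration, here is a minimal sketch of how the full call might look with that glob, assuming sc is an existing SparkContext; the bucket name and directory layout are placeholders taken from the path above:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Glob that descends into a specific directory of every sub directory;
// the bucket and layout are placeholders from the answer above.
val data_loc = "s3n://bucket/root_dir/*/data/*/*/*"

// TextInputFormat produces (byte offset, line) pairs, so the key class
// is LongWritable and the value class is Text.
val rdd = sc.newAPIHadoopFile(data_loc,
  classOf[TextInputFormat],
  classOf[LongWritable],
  classOf[Text])

// Keep only the line contents as Strings.
val lines = rdd.map { case (_, line) => line.toString }

The glob is expanded by Hadoop's FileInputFormat, so the files from every matching leaf directory become splits of a single RDD.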