KayV - 1 year ago
Java Question

HDFS replication property not reflecting as defined in hdfs-site.xml

I am working on HDFS and have set the replication factor to 1 in hdfs-site.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/Users/***/Documnent/hDir/hdfs/datanode</value>
  </property>
</configuration>



But when I tried copying a file from the local filesystem to HDFS, I found that the replication factor for that file was 3. Here is the code that copies the file to HDFS:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithWrite {

    public static void main(String[] args) {
        String localSrc = "/Users/***/Documents/hContent/input/docs/1400-8.txt";
        String dst = "hdfs://localhost/books/1400-8.txt";

        try {
            InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(dst), conf);

            OutputStream out = fs.create(new Path(dst), new Progressable() {
                public void progress() {
                    // report write progress
                    System.out.print(".");
                }
            });

            IOUtils.copyBytes(in, out, 4092, true);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Please see this screenshot showing the replication factor as 3:


Note that I have started Hadoop in pseudo-distributed mode and have updated hdfs-site.xml according to the documentation in the book Hadoop: The Definitive Guide. Any suggestions on why this is happening?
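For files that were already written with the wrong factor, the replication can also be inspected and changed from the command line; a quick sketch (the `/books` paths follow the example above):

```shell
# Check the replication factor (second column of the listing):
hdfs dfs -ls /books

# Lower it for a file that was already written with factor 3
# (-w waits until the change has propagated):
hdfs dfs -setrep -w 1 /books/1400-8.txt

# Or override it for a single upload without touching any config:
hdfs dfs -D dfs.replication=1 -put 1400-8.txt /books/
```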

Answer Source

Deleting the namenode and datanode directories solved the issue for me. I followed this procedure:

  1. Stop the services, viz. DFS, YARN, and MR.

  2. Delete the namenode and datanode directories as specified in the hdfs-site.xml file.

  3. Re-create the namenode and datanode directories.

  4. Restart the services, viz. DFS, YARN, and MR.
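On a pseudo-distributed setup the steps above might look something like this (directory paths are placeholders for whatever hdfs-site.xml specifies; note that wiping the namenode directory destroys all HDFS data and requires reformatting before restart):

```shell
# 1. Stop the services
stop-dfs.sh
stop-yarn.sh
mr-jobhistory-daemon.sh stop historyserver

# 2-3. Delete and re-create the directories from hdfs-site.xml
#      (placeholder paths; this destroys all HDFS data)
rm -rf /path/to/hdfs/namenode /path/to/hdfs/datanode
mkdir -p /path/to/hdfs/namenode /path/to/hdfs/datanode

# A fresh namenode directory must be formatted before use:
hdfs namenode -format

# 4. Restart the services
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
```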
