HDFS hflush vs hsync

hflush: This API flushes all outstanding data (i.e., the current unfinished packet) from the client into the OS buffers on all DataNode replicas.

hsync: This API flushes the data to the DataNodes, like hflush(), but should also force the data to underlying physical storage via fsync (or equivalent). Note that only the current block is flushed to the disk device.
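The pair maps closely onto flush vs. fsync for ordinary local files. The sketch below is a plain-Python local-file analogy (not the HDFS API itself) of what hsync adds on each DataNode beyond hflush:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Local-file analogy of HDFS hsync (not the HDFS API itself):
    flush() alone is the hflush analogue; os.fsync() is the extra
    step hsync adds on each DataNode."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # ~ hflush: out of the app, into OS buffers
        os.fsync(f.fileno())  # ~ hsync: OS buffers forced to the disk device

durable_write("demo.bin", b"payload")
```

After flush() alone, other readers can see the data but a power failure may still lose it; fsync() is what provides the durability guarantee.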

[1] https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java

August 9, 2016

Posted In: dfsoutputstream, hadoop, hdfs, hflush, hsync

Hadoop ApplicationMaster container cannot initialize the user directory when a MapReduce job is submitted by the hdfs user

Today our Cloudera CDH 5.3 cluster threw the error below when we tried to submit a job as the hdfs user.

15/03/16 08:28:18 INFO mapreduce.Job: Job job_1426239544674_0019 running in uber mode : false

15/03/16 08:28:18 INFO mapreduce.Job:  map 0% reduce 0%
15/03/16 08:28:18 INFO mapreduce.Job: Job job_1426239544674_0019 failed with state FAILED due to: Application application_1426239544674_0019 failed 2 times due to AM Container for appattempt_1426239544674_0019_000002 exited with  exitCode: -1000 due to: Not able to initialize user directories in any of the configured local directories for user hdfs
.Failing this attempt.. Failing the application.

After some research into the problem, the root cause turned out to be YARN's deletion of local directories on startup: in practice the deletion fails because of permission problems. The top-level usercache directory is owned by the user but sits inside a directory that is not writable by that user, so deleting the user's usercache directory as that user fails for lack of permissions.

As a fix, delete the contents of the usercache directory under YARN's local directory on each node:

rm -rf /dn/yarn/nm/usercache/*
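The sketch below exercises the same cleanup against a scratch copy of the NodeManager layout, since running rm -rf blindly on a live node is risky. The directory names under usercache are illustrative; on a real cluster, NM_LOCAL_DIR is whatever yarn.nodemanager.local-dirs points at (/dn/yarn/nm in our case), and the NodeManager should be stopped first:

```shell
# Demonstration against a scratch copy of the NodeManager layout
# (hypothetical per-user subdirectories; adjust paths for your cluster).
NM_LOCAL_DIR=$(mktemp -d)
mkdir -p "$NM_LOCAL_DIR/usercache/hdfs/filecache"
mkdir -p "$NM_LOCAL_DIR/usercache/hdfs/appcache"

# The actual fix: wipe every per-user cache so the NodeManager
# recreates it with correct ownership on the next localization.
rm -rf "$NM_LOCAL_DIR"/usercache/*

ls -A "$NM_LOCAL_DIR/usercache"   # empty: caches will be rebuilt on demand
```

The caches are safe to delete because they hold only localized resources (jars, distributed-cache files) that YARN re-downloads as needed.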

March 16, 2015

Posted In: applicationmaster, cdh, cloudera, hadoop, hdfs, mapreduce, yarn
