hadoop - Adding a new Namenode to an existing HDFS cluster


In Hadoop HDFS Federation, the last step of adding a new Namenode to an existing HDFS cluster is:
==> Refresh the Datanodes so they pick up the newly added Namenode by running the following command against all the Datanodes in the cluster:

[hdfs]$ $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes <datanode_host_name>:<datanode_rpc_port>
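For illustration only, a single invocation against one Datanode might look like the line below; the hostname is a made-up placeholder, and the Datanode IPC port depends on your configuration (50020 is the common default in Hadoop 2.x):

[hdfs]$ $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNamenodes dn01.example.com:50020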

Which is the best place to execute this command: the Namenode or the Datanodes?
If I have 1000 Datanodes, is it logical to run it 1000 times?

On the Namenode, run the following command once:

$HADOOP_PREFIX/sbin/slaves.sh hdfs dfsadmin -refreshNamenodes <datanode_host_name>:<datanode_rpc_port>

The slaves.sh script distributes the command to all the slave hosts mentioned in the slaves file (typically placed in $HADOOP_CONF_DIR).
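If you would rather not rely on slaves.sh, the sketch below does something equivalent from the Namenode with a plain shell loop. It is only a sketch under these assumptions: the Datanode hostnames are listed one per line in $HADOOP_CONF_DIR/slaves, and all Datanodes share the same IPC port (50020 here; adjust it to whatever dfs.datanode.ipc.address is set to in your hdfs-site.xml):

# Refresh every Datanode listed in the slaves file so it registers the new Namenode.
# Assumptions: one hostname per line, blank lines allowed, common IPC port on all Datanodes.
DN_IPC_PORT=50020
while read -r dn; do
  [ -z "$dn" ] && continue    # skip blank lines
  "$HADOOP_PREFIX/bin/hdfs" dfsadmin -refreshNamenodes "${dn}:${DN_IPC_PORT}"
done < "$HADOOP_CONF_DIR/slaves"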

