Hadoop single-node start-dfs: why is each node's hostname different?

Why are the hostnames of the three daemons inconsistent?
namenode: hadoop1
datanode: localhost
secondarynamenode: 0.0.0.0

[hadoop@localhost hadoop-2.7.2]$ sbin/start-dfs.sh
Starting namenodes on [hadoop1]
hadoop@hadoop1's password:
hadoop1: starting namenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-namenode-hadoop1.hadoopdomain.out
hadoop@localhost's password:
localhost: starting datanode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-datanode-hadoop1.hadoopdomain.out
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-secondarynamenode-hadoop1.hadoopdomain.out
1 Answer

It keeps prompting you for a password, which very likely means passwordless SSH is not configured. If ssh localhost logs you in without asking for a password, SSH is set up correctly.
You need to append the namenode's public key (e.g. id_rsa.pub) to the authorized_keys file in the .ssh directory of every datanode. On a single-node setup that just means authorizing the key for the local hadoop user, as sketched below.
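
A minimal sketch of that setup on a single node, assuming the hadoop user and the default OpenSSH paths (these commands are illustrative, not from the original post):

[hadoop@localhost ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # key pair with an empty passphrase
[hadoop@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for local logins
[hadoop@localhost ~]$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys  # sshd's StrictModes rejects loose permissions
[hadoop@localhost ~]$ ssh localhost true                               # should now return without a password prompt

Since hadoop1, localhost, and 0.0.0.0 all resolve to the same machine in a single-node setup, one authorized key covers all three login prompts; you may still need to accept each host key fingerprint once.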
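
As for why the three names differ: start-dfs.sh reads each role's host from a different configuration source. The namenode host comes from fs.defaultFS in core-site.xml (hadoop1 here), the datanode list comes from etc/hadoop/slaves, which defaults to a single line reading localhost, and the secondary namenode host comes from dfs.namenode.secondary.http-address, whose Hadoop 2.7 default is 0.0.0.0:50090. A sketch of settings that would make all three print hadoop1 (the port numbers are the usual defaults, not from the original post):

etc/hadoop/core-site.xml:
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>

etc/hadoop/hdfs-site.xml:
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:50090</value>
  </property>

etc/hadoop/slaves (one datanode hostname per line):
hadoop1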
