Environment

  1. Operating system: macOS Mojave 10.14.6
  2. JDK: 1.8.0_211 (installed at: /Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home)
  3. hadoop: 3.2.1

Enable SSH

在"系统偏好设置"->"共享",设置如下:
在这里插入图片描述
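If you prefer the command line, Remote Login can also be switched on with the built-in systemsetup tool (a small optional sketch; it requires administrator rights):

# enable the built-in sshd, then verify its status
sudo systemsetup -setremotelogin on
sudo systemsetup -getremotelogin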

Passwordless login

  1. Run the following command to generate an SSH key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Accept any prompts with the defaults; when the command finishes, id_rsa and id_rsa.pub will have been generated in the ~/.ssh directory.

  2. Run the following command to append the public key to the SSH authorized_keys file, so that ssh logins to this machine no longer require a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  3. Try logging in with ssh; this time no password is required (if it still prompts for one, see the permissions note after this output):
Last login: Sun Oct 13 21:44:17 on ttys000
(base) zhaoqindeMBP:~ zhaoqin$ ssh localhost
Last login: Sun Oct 13 21:48:57 2019
(base) zhaoqindeMBP:~ zhaoqin$
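If ssh localhost still asks for a password, the usual cause is overly loose permissions on the key files; the following is a minimal sketch of the standard fix (assuming the default ~/.ssh layout):

# sshd ignores authorized_keys that other users could modify
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys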

Download hadoop

  1. Download hadoop from: http://hadoop.apache.org/rele...
  2. Extract the downloaded hadoop-3.2.1.tar.gz; here it is extracted to: ~/software/hadoop-3.2.1/

If you only need hadoop standalone mode, you are done at this point; however, standalone mode has no HDFS, so the next step is to configure pseudo-distributed mode.
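The later steps all cd into the bin and sbin directories first; as an optional convenience you can export HADOOP_HOME and put those directories on your PATH instead (a sketch assuming bash and the extraction path above; adjust for zsh or a different location):

# append to ~/.bash_profile (assumed shell: bash)
export HADOOP_HOME=~/software/hadoop-3.2.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin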

Pseudo-distributed mode configuration

Go to the hadoop-3.2.1/etc/hadoop directory and make the following changes:

  1. Open hadoop-env.sh and add the Java path setting:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home
  2. Open core-site.xml and replace the configuration node with the following:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
  3. Open hdfs-site.xml and replace the configuration node with the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
  4. Open mapred-site.xml and replace the configuration node with the following:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
  5. Open yarn-site.xml and replace the configuration node with the following:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
  6. In the hadoop-3.2.1/bin directory, run the following command to initialize (format) HDFS:
./hdfs namenode -format

After the initialization succeeds, you will see output like this:

2019-10-13 22:13:32,468 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-10-13 22:13:32,473 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2019-10-13 22:13:32,474 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zhaoqindeMBP/192.168.50.12
************************************************************/

Startup

  1. Go to the hadoop-3.2.1/sbin directory and run ./start-dfs.sh to start HDFS:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [zhaoqindeMBP]
zhaoqindeMBP: Warning: Permanently added 'zhaoqindembp,192.168.50.12' (ECDSA) to the list of known hosts.
2019-10-13 22:28:30,597 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The warning above does not affect usage.

  2. Visit localhost:9870 in a browser; hadoop's web page appears as shown below:

(screenshot of the HDFS web UI)

  3. Go to the hadoop-3.2.1/sbin directory and run ./start-yarn.sh to start YARN:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
  4. Visit localhost:8088 in a browser; YARN's web page appears as shown below:

(screenshot of the YARN web UI)

  5. Run the jps command to list all Java processes; in the normal case you should see the following:
(base) zhaoqindeMBP:sbin zhaoqin$ jps
2161 NodeManager
1825 SecondaryNameNode
2065 ResourceManager
1591 NameNode
2234 Jps
1691 DataNode
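Besides jps, you can also confirm from the command line that the NodeManager has registered with the ResourceManager (run from the hadoop-3.2.1/bin directory):

./yarn node -list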

At this point, the deployment, configuration, and startup of the hadoop3 pseudo-distributed environment are all complete.
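As a quick end-to-end check, you can create a home directory in HDFS and copy a file into it (a minimal sketch run from the hadoop-3.2.1/bin directory; the directory name simply reuses your local user name):

# create an HDFS home directory, upload a config file as test data, then list it
./hdfs dfs -mkdir -p /user/$(whoami)
./hdfs dfs -put ../etc/hadoop/core-site.xml /user/$(whoami)/
./hdfs dfs -ls /user/$(whoami)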

Stopping the hadoop services

Go to the hadoop-3.2.1/sbin directory and run ./stop-all.sh to stop all hadoop services:

(base) zhaoqindeMBP:sbin zhaoqin$ ./stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as zhaoqin in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [zhaoqindeMBP]
2019-10-13 22:49:00,941 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager

That is the whole process of deploying hadoop3 on macOS; I hope it gives you a useful reference.

You are welcome to follow my WeChat official account: 程序员欣宸

