
9. ZooKeeper Installation Tutorial

Published: 2020-07-28 12:19:07  Source: web  Reads: 398  Author: victor19901114  Category: Big Data

@[TOC]

1. Environment Preparation

1.1 Download ZooKeeper

   The hadoop 2.7.3 documentation shows that zookeeper-3.4.2 is the version used for its high-availability setup, so, following the Hadoop site's guidance, we install zookeeper-3.4.2. Go to the official site and download ZooKeeper 3.4.2.
   Official site: https://zookeeper.apache.org/
(screenshots: on the ZooKeeper site, click Download and select the 3.4.2 release)
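ZooKeeper 3.4.2 has long been removed from the current download page, so it has to be fetched from the Apache archive instead; the URL below follows the archive's directory layout (an assumption worth verifying before relying on it):

```shell
# Build the archive URL for the release used in this tutorial.
version=3.4.2
url="https://archive.apache.org/dist/zookeeper/zookeeper-${version}/zookeeper-${version}.tar.gz"
echo "$url"
# wget "$url"        # uncomment to actually download the tarball
```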

1.3 Install ZooKeeper

#1. Extract the zookeeper tarball into /opt/bigdata/
[root@node1 ~]# tar -xzvf zookeeper-3.4.2.tar.gz -C /opt/bigdata/
#2. Change into the bigdata directory
[root@node1 ~]# cd /opt/bigdata/
#3. As with the hadoop installation, change the owner and group of the zookeeper install directory to hadoop:hadoop
[root@node1 bigdata]# chown -R hadoop:hadoop zookeeper-3.4.2/
#4. Set the permissions on the zookeeper install directory
[root@node1 bigdata]# chmod -R 755 zookeeper-3.4.2/

1.4 Configure ZooKeeper Environment Variables

#1. Switch to the hadoop user's home directory
[root@node1 bigdata]# su - hadoop
Last login: Thu Jul 18 16:07:39 CST 2019 on pts/0
[hadoop@node1 ~]$ cd /opt/bigdata/zookeeper-3.4.2/
[hadoop@node1 zookeeper-3.4.2]$ cd ..
[hadoop@node1 bigdata]$ cd ~
#2. Edit the hadoop user's environment variable file
[hadoop@node1 ~]$ vi .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
JAVA_HOME=/usr/java/jdk1.8.0_211-amd64
HADOOP_HOME=/opt/bigdata/hadoop-2.7.3
SPARK_HOME=/opt/spark-2.4.3-bin-hadoop2.7
M2_HOME=/opt/apache-maven-3.0.5
#3. Add the new ZOOKEEPER_HOME variable for zookeeper
ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.2/
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$M2_HOME/bin
#4. Append $ZOOKEEPER_HOME/bin to PATH
PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin
export JAVA_HOME
export HADOOP_HOME
export M2_HOME
export SPARK_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
#5. Export the zookeeper variable
export ZOOKEEPER_HOME
#6. Save the changes (:wq! then Enter)
:wq!
#7. Reload the environment variables
[hadoop@node1 ~]$ source .bash_profile
#8. Type zk and press Tab
[hadoop@node1 ~]$ zk
#The following completions indicate the zookeeper environment is configured correctly
zkCleanup.sh   zkCli.cmd    zkCli.sh    zkEnv.cmd     zkEnv.sh     zkServer.cmd     zkServer.sh
[hadoop@node1 ~]$ zk
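The Tab-completion check above can also be scripted; a minimal sketch using the same paths as this tutorial:

```shell
# Append ZooKeeper's bin directory to PATH and verify it is present.
ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.2
PATH="$PATH:$ZOOKEEPER_HOME/bin"
case ":$PATH:" in
  *":$ZOOKEEPER_HOME/bin:"*) echo "ZOOKEEPER_HOME/bin is on PATH" ;;
  *) echo "ZOOKEEPER_HOME/bin is missing from PATH" ;;
esac
```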

1.5 Edit the ZooKeeper Cluster Configuration File

   Change into the conf directory under the zookeeper install directory and copy zoo_sample.cfg to zoo.cfg:

[hadoop@node1 ~]$ cd /opt/bigdata/zookeeper-3.4.2/conf/
[hadoop@node1 conf]$ ll
total 12
-rwxr-xr-x 1 hadoop hadoop 535 Dec 22 2011 configuration.xsl
-rwxr-xr-x 1 hadoop hadoop 2161 Dec 22 2011 log4j.properties
-rwxr-xr-x 1 hadoop hadoop 808 Dec 22 2011 zoo_sample.cfg
#1. Copy the zoo_sample.cfg template to the working configuration file zoo.cfg
[hadoop@node1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@node1 conf]$ ll
total 16
-rwxr-xr-x 1 hadoop hadoop 535 Dec 22 2011 configuration.xsl
-rwxr-xr-x 1 hadoop hadoop 2161 Dec 22 2011 log4j.properties
-rwxr-xr-x 1 hadoop hadoop 808 Jul 19 11:20 zoo.cfg
-rwxr-xr-x 1 hadoop hadoop 808 Dec 22 2011 zoo_sample.cfg
[hadoop@node1 conf]$

   Change dataDir to dataDir=/var/lib/zookeeper, then append the following lines at the end of the file (in each server.N entry, port 2888 is used for follower connections to the leader and port 3888 for leader elections):

server.1=node1:2888:3888 
server.2=node2:2888:3888 
server.3=node3:2888:3888

Remember to save the file after editing. The complete zoo.cfg now reads:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
#Remember to save the file after editing
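The timing parameters in zoo.cfg combine as follows: initLimit and syncLimit are measured in ticks, so with tickTime=2000 ms a follower gets initLimit × tickTime = 20 s to connect to and sync with the leader, and may lag at most syncLimit × tickTime = 10 s behind it. A quick check of the arithmetic:

```shell
# Values from the zoo.cfg above.
tickTime=2000   # ms per tick
initLimit=10    # ticks allowed for the initial sync with the leader
syncLimit=5     # ticks allowed between a request and its acknowledgement
echo "initial sync timeout: $((tickTime * initLimit)) ms"
echo "sync timeout: $((tickTime * syncLimit)) ms"
```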

1.6 Create the myid File

In the dataDir directory /var/lib/zookeeper on nodes node1, node2, and node3, create a myid file; the three files contain 1, 2, and 3 respectively. Switch to the root user to create the zookeeper directory under /var/lib, because the hadoop user has no write permission on /var/lib.
#1. Switch to the root user
[hadoop@node1 conf]$ su - root
Password:
Last login: Fri Jul 19 10:53:59 CST 2019 from 192.168.200.1 on pts/0
#2. Create the zookeeper directory
[root@node1 ~]# mkdir -p /var/lib/zookeeper
#3. Change into /var/lib/zookeeper/
[root@node1 ~]# cd /var/lib/zookeeper/
You have new mail in /var/spool/mail/root
#4. Create the myid file
[root@node1 zookeeper]# touch myid
#5. Edit myid and enter 1. This is node1's myid; node2's myid will contain 2 and node3's will contain 3
[root@node1 zookeeper]# vi myid
You have new mail in /var/spool/mail/root
#6. Verify that myid contains 1
[root@node1 zookeeper]# cat myid
1
You have new mail in /var/spool/mail/root
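The myid steps above condense to a few commands. This sketch writes to a local stand-in directory rather than the real /var/lib/zookeeper, and the id must be set to 2 and 3 when run on node2 and node3:

```shell
id=1                       # 1 on node1, 2 on node2, 3 on node3
dir=./zookeeper-data       # stand-in for the real dataDir /var/lib/zookeeper
mkdir -p "$dir"
echo "$id" > "$dir/myid"   # myid holds nothing but the server id
cat "$dir/myid"            # prints 1
```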

1.7 Fix Permissions on the myid Directory

#1. After configuring, fix the owner, group, and permissions of the zookeeper directory
[root@node1 zookeeper]# cd ..
You have new mail in /var/spool/mail/root
#2. Change the owner and group of the zookeeper directory
[root@node1 lib]# chown -R hadoop:hadoop zookeeper/
#3. Set the permissions on the zookeeper directory to 755
[root@node1 lib]# chmod -R 755 zookeeper/
[root@node1 lib]#

2. Copy ZooKeeper to node2 and node3

#1. Copy the zookeeper directory under /var/lib to /var/lib on node2 and node3
[root@node1 lib]# scp -r zookeeper node2:$PWD
[root@node1 lib]# scp -r zookeeper node3:$PWD
#2. Copy the zookeeper install directory to /opt/bigdata on node2 and node3
[root@node1 lib]# scp -r /opt/bigdata/zookeeper-3.4.2/ node2:/opt/bigdata/
[root@node1 lib]# scp -r /opt/bigdata/zookeeper-3.4.2/ node3:/opt/bigdata/
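The four scp commands above can be looped over the target hosts. This sketch only prints the commands rather than running them (remove the echo to execute), since it assumes passwordless ssh from node1 to node2 and node3:

```shell
# Print the copy commands for each target host.
for host in node2 node3; do
  echo scp -r /var/lib/zookeeper "$host:/var/lib/"
  echo scp -r /opt/bigdata/zookeeper-3.4.2/ "$host:/opt/bigdata/"
done
```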

3. Fix ZooKeeper Directory Permissions on node2 and node3

Fix the zookeeper directory permissions on node2:

#1. Fix the owner, group, and permissions of the zookeeper myid (data) directory
[root@node2 lib]# cd ~
[root@node2 ~]# chown -R hadoop:hadoop /var/lib/zookeeper
[root@node2 ~]# chmod -R 755 /var/lib/zookeeper
#2. Fix the owner, group, and permissions of the zookeeper install directory
[root@node2 ~]# chown -R hadoop:hadoop /opt/bigdata/zookeeper-3.4.2/
You have new mail in /var/spool/mail/root
[root@node2 ~]# chmod -R 755 /opt/bigdata/zookeeper-3.4.2/
[root@node2 ~]#

Fix the zookeeper directory permissions on node3:

#1. Fix the owner, group, and permissions of the zookeeper myid (data) directory
[root@node3 bigdata]# cd ~
You have new mail in /var/spool/mail/root
[root@node3 ~]# chown -R hadoop:hadoop /var/lib/zookeeper
[root@node3 ~]# chmod -R 755 /var/lib/zookeeper
#2. Fix the owner, group, and permissions of the zookeeper install directory
[root@node3 ~]# chown -R hadoop:hadoop /opt/bigdata/zookeeper-3.4.2/
You have new mail in /var/spool/mail/root
[root@node3 ~]# chmod -R 755 /opt/bigdata/zookeeper-3.4.2/
[root@node3 ~]#

4. Set the myid Contents on node2 and node3

Set node2's myid to 2:

[root@node2 ~]# vi /var/lib/zookeeper/myid
You have new mail in /var/spool/mail/root
[root@node2 ~]# cat /var/lib/zookeeper/myid
2
[root@node2 ~]#

Set node3's myid to 3:

[root@node3 ~]# vi /var/lib/zookeeper/myid
You have new mail in /var/spool/mail/root
[root@node3 ~]# cat /var/lib/zookeeper/myid
3
[root@node3 ~]#

5. Configure ZooKeeper Environment Variables on node2 and node3

From node1, copy the hadoop user's environment variable file straight to the hadoop user's home directory on node2 and node3.

#1. If you are currently root, switch to the hadoop user; if you are already hadoop, change to the hadoop
#user's home directory before copying the environment variable file.
[root@node1 lib]# su - hadoop
Last login: Fri Jul 19 11:08:44 CST 2019 on pts/0
[hadoop@node1 ~]$ scp .bash_profile node2:$PWD
.bash_profile                100%  681    64.8KB/s   00:00
[hadoop@node1 ~]$ scp .bash_profile node3:$PWD
.bash_profile                100%  681   156.8KB/s   00:00
[hadoop@node1 ~]$

5.1 Apply the Environment Variables on node2 and node3

Apply the hadoop user's environment variables on node2:

#Note: switch to the hadoop user first
#1. Reload the environment variables
[hadoop@node2 ~]$ source .bash_profile
#2. Type zk and press Tab
[hadoop@node2 ~]$ zk
#3. The following command and script completions show that the zookeeper environment is configured correctly
zkCleanup.sh   zkCli.cmd    zkCli.sh    zkEnv.cmd    zkEnv.sh    zkServer.cmd    zkServer.sh
[hadoop@node2 ~]$ zk

Apply the hadoop user's environment variables on node3:

#Note: switch to the hadoop user first
[root@node3 bigdata]# su - hadoop
Last login: Thu Jul 18 15:37:50 CST 2019 on :0
#1. Reload the environment variables
[hadoop@node3 ~]$ source .bash_profile
#2. Type zk and press Tab
[hadoop@node3 ~]$ zk
#3. The following command and script completions show that the zookeeper environment is configured correctly
zkCleanup.sh   zkCli.cmd    zkCli.sh    zkEnv.cmd    zkEnv.sh    zkServer.cmd    zkServer.sh
[hadoop@node3 ~]$ zk

6. Start the ZooKeeper Cluster

6.1 Start ZooKeeper on Each Node

The cluster has to be started manually on each of the three machines in turn; before starting, switch to the hadoop user on all three.
Start zookeeper on node1:

[hadoop@node1 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node1 ~]$

Start zookeeper on node2:

[hadoop@node2 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node2 ~]$

Start zookeeper on node3:

[hadoop@node3 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@node3 ~]$

6.2 Check the Cluster Status

Run zkServer.sh status on each of the three nodes to check their state.
On node1:

[hadoop@node1 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node1 bin]$

On node2:

[hadoop@node2 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node2 bin]$

On node3:

[hadoop@node3 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: leader
[hadoop@node3 bin]$
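One node reports leader and the other two follower; which node wins the election depends on startup order and server ids. A ZooKeeper ensemble of n servers needs a majority (quorum) of ⌊n/2⌋ + 1 to operate, so this three-node cluster tolerates the loss of exactly one node:

```shell
# Quorum arithmetic for the ensemble size used in this tutorial.
n=3
quorum=$((n / 2 + 1))
echo "ensemble size: $n, quorum: $quorum, tolerated failures: $((n - quorum))"
```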

The ZooKeeper cluster installation is now complete.

6.3 Troubleshooting the ZooKeeper Installation

Because the hadoop 2.7.3 documentation points at the rather old zookeeper-3.4.2 release, you may hit the following problem: after starting zookeeper, jps or ps -ef | grep zookeeper shows the zookeeper process running normally, yet zkServer.sh status reports an error, and it does not matter how many times you restart.

[hadoop@node1 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[hadoop@node2 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[hadoop@node3 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

This is typically caused by one of two things:
1. The nc tool is not installed on CentOS 7.
2. The nc command in the zookeeper startup script uses flags that are invalid on some Linux distributions, so the status query fails or returns an empty result.
Fix:
1. Install nc on all three nodes with yum:

yum install nc -y

2. Edit the zkServer.sh script in the bin directory of the zookeeper install directory.
(screenshot of the zkServer.sh edit, not recoverable here)
Once the script is fixed, zkServer.sh status reports the zookeeper state correctly.
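The exact edit shown in the original screenshot is not recoverable, but the commonly reported fix for this version is removing the `-q 1` flag from the nc invocation in zkServer.sh, because the nmap-ncat shipped with CentOS 7 does not support `-q`. This sketch demonstrates the edit on a stand-in file rather than the real script; inspect your actual zkServer.sh before applying anything like it:

```shell
# Create a stand-in file with the kind of status line zkServer.sh 3.4.x uses
# (illustrative only; the real line in your script may differ).
printf 'STAT=`echo stat | nc -q 1 localhost 2181`\n' > zkServer-demo.sh
# Drop the -q flag that CentOS 7's ncat rejects.
sed -i 's/nc -q 1/nc/' zkServer-demo.sh
cat zkServer-demo.sh
```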
