0. Preface
This article walks through building a Hadoop pseudo-distributed environment from scratch on a freshly installed Linux system. The aim is a quick setup that lets you experience what Hadoop has to offer and lays down a base environment for further study.
The system environment used is as follows:
Operating system: CentOS 6.5, 64-bit
Host IP address: 10.0.0.131/24
Hostname: leaf
Username: root
Hadoop version: 2.6.5
JDK version: 1.7
Note that we work directly as root rather than creating a dedicated hadoop user, as most tutorials do; this keeps the setup as quick as possible, since the goal here is simply to get a feel for Hadoop.
To make sure the steps below complete successfully, first confirm that this machine can resolve the hostname leaf; if it cannot, add an entry to the /etc/hosts file manually:
[root@leaf ~]# echo "127.0.0.1 leaf" >> /etc/hosts
[root@leaf ~]# ping leaf
PING leaf (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.046 ms
1. Installing rsync
Install it with the following command:
[root@leaf ~]# yum install -y rsync
2. Installing SSH and configuring passwordless login
(1) Installing SSH
Install it with the following command:
[root@leaf ~]# yum install -y openssh-server openssh-clients
(2) Configuring passwordless SSH login
Because Hadoop uses the SSH protocol to manage its remote daemons, passwordless login needs to be configured.
Disable the firewall and SELinux
To make sure the configuration succeeds, disable the firewall and SELinux before proceeding:
# Disable the firewall
[root@leaf ~]# /etc/init.d/iptables stop
[root@leaf ~]# chkconfig --level 3 iptables off
# Disable SELinux
[root@leaf ~]# setenforce 0
[root@leaf ~]# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
[root@leaf ~]# cat /etc/selinux/config | grep disabled
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
Generate a key pair
[root@leaf ~]# mkdir .ssh
[root@leaf ~]# ssh-keygen -t dsa -P '' -f .ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in .ssh/id_dsa.
Your public key has been saved in .ssh/id_dsa.pub.
The key fingerprint is:
5b:af:7c:45:f3:ff:dc:50:f5:81:4b:1e:5c:c1:86:90 root@leaf
The key's randomart image is:
+--[ DSA 1024]----+
|           .o oo.|
|           E..oo |
|             =...|
|            o = +|
|         S . + oo|
|          o . ...|
|          . ... .|
|          . .. oo|
|             o. =|
+-----------------+
Add the public key to the local trusted list
[root@leaf ~]# cat .ssh/id_dsa.pub >> .ssh/authorized_keys
Verify
Once the three steps above are done, passwordless login is configured. Verify it with the following command:
[root@leaf ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d1:0d:ed:eb:e7:d1:2f:02:23:70:ef:11:14:4e:fa:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Wed Aug 30 04:28:01 2017 from 10.0.0.1
[root@leaf ~]#
The first time you log in you need to type yes; after that, you are logged in directly:
[root@leaf ~]# ssh localhost
Last login: Wed Aug 30 04:44:02 2017 from localhost
[root@leaf ~]#
3. Installing and configuring the JDK
(1) Downloading the JDK
This setup uses JDK 1.7, which can be downloaded from:
http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html
After downloading, upload it to the /root directory with WinSCP:
[root@leaf ~]# ls -lh jdk-7u80-linux-x64.tar.gz
-rw-r--r--. 1 root root 147M Aug 29 12:05 jdk-7u80-linux-x64.tar.gz
(2) Installing the JDK
Extract the JDK into /usr/local and create a symbolic link:
[root@leaf ~]# cp jdk-7u80-linux-x64.tar.gz /usr/local/
[root@leaf ~]# cd /usr/local/
[root@leaf local]# tar -zxf jdk-7u80-linux-x64.tar.gz
[root@leaf local]# ls -ld jdk1.7.0_80/
drwxr-xr-x. 8 uucp 143 4096 Apr 11  2015 jdk1.7.0_80/
[root@leaf local]# ln -s jdk1.7.0_80/ jdk
[root@leaf local]# ls -ld jdk
lrwxrwxrwx. 1 root root 12 Aug 30 04:56 jdk -> jdk1.7.0_80/
(3) Configuring the JAVA_HOME environment variable
The java binary lives in /usr/local/jdk/bin:
[root@leaf local]# cd jdk/bin/
[root@leaf bin]# ls -lh java
-rwxr-xr-x. 1 uucp 143 7.6K Apr 11  2015 java
Configure the Java environment variables. Note that JAVA_HOME should point to the JDK root directory, not to its bin subdirectory, because Hadoop and other tools look for $JAVA_HOME/bin/java:
[root@leaf bin]# echo 'export JAVA_HOME=/usr/local/jdk' >> /etc/profile
[root@leaf bin]# echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
[root@leaf bin]# source /etc/profile
Now the java commands can be used from any directory:
[root@leaf ~]# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
[root@leaf ~]# javac -version
javac 1.7.0_80
4. Installing and configuring Hadoop
(1) Downloading Hadoop
This setup uses Hadoop 2.6.5, which can be downloaded from:
http://hadoop.apache.org/releases.html
Choose the 2.6.5 binary release on that page to reach the download, then upload the archive to the /root directory with WinSCP:
[root@leaf ~]# ls -lh hadoop-2.6.5.tar.gz
-rw-r--r--. 1 root root 191M Aug 29 19:09 hadoop-2.6.5.tar.gz
(2) Installing Hadoop
Extract Hadoop into /usr/local and create a symbolic link:
[root@leaf ~]# cp hadoop-2.6.5.tar.gz /usr/local
[root@leaf ~]# cd /usr/local
[root@leaf local]# tar -zxf hadoop-2.6.5.tar.gz
[root@leaf local]# ls -ld hadoop-2.6.5
drwxrwxr-x. 9 1000 1000 4096 Oct  3  2016 hadoop-2.6.5
[root@leaf local]# ln -s hadoop-2.6.5 hadoop
[root@leaf local]# ls -ld hadoop
lrwxrwxrwx. 1 root root 12 Aug 30 05:05 hadoop -> hadoop-2.6.5
(3) Configuring the Hadoop environment variables
The Hadoop commands live in /usr/local/hadoop/bin and /usr/local/hadoop/sbin:
[root@leaf local]# cd hadoop/bin/
[root@leaf bin]# ls -lh hadoop
-rwxr-xr-x. 1 1000 1000 5.4K Oct  3  2016 hadoop
Configure the Hadoop environment variables. As with JAVA_HOME, HADOOP_HOME should point to the installation root, with its bin and sbin directories appended to PATH:
[root@leaf bin]# echo 'export HADOOP_HOME=/usr/local/hadoop' >> /etc/profile
[root@leaf bin]# echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
[root@leaf bin]# source /etc/profile
Now the hadoop commands can be used from any directory:
[root@leaf ~]# hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
  credential           interact with credential providers
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
(4) Configuring Hadoop
The Hadoop configuration files are in /usr/local/hadoop/etc/hadoop:
[root@leaf ~]# cd /usr/local/hadoop/etc/hadoop/
[root@leaf hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml.template
hadoop-metrics.properties   kms-env.sh               slaves
Configure core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
The fs.default.name property specifies the IP address (or hostname) and port of the NameNode (the HDFS master). The value hdfs://localhost:9000 above means the HDFS NameNode host is localhost and its port is 9000.
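As an aside, fs.default.name is the old Hadoop 1.x property name; 2.x still accepts it but reports it as deprecated in favor of fs.defaultFS. An equivalent 2.x-style snippet (same value, newer name) would be:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>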
Configure hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/root/hdfs-filesystem/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/root/hdfs-filesystem/data</value>
    </property>
</configuration>
dfs.replication specifies how many times each HDFS block is replicated, providing redundant backup of the data; dfs.name.dir is a comma-separated list of directories where the NameNode keeps its metadata, redundantly copied to each of them; dfs.data.dir is a comma-separated list of directories where the DataNode stores its data blocks.
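Likewise, dfs.name.dir and dfs.data.dir are deprecated 1.x names; if you prefer the 2.x property names, the equivalent settings should be:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hdfs-filesystem/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hdfs-filesystem/data</value>
</property>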
Configure mapred-site.xml
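Note from the directory listing above that the distribution ships only mapred-site.xml.template, so the file first needs to be created from the template:

[root@leaf hadoop]# cp mapred-site.xml.template mapred-site.xml

Then add: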
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
The mapred.job.tracker property specifies the IP address (or hostname) and port of the MapReduce JobTracker; here the host is localhost, and 9001 is the JobTracker's RPC port.
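Note that mapred.job.tracker is also a Hadoop 1.x setting: 2.x has no JobTracker, and MapReduce jobs normally run on YARN instead (the YARN daemons are started in section 5). If you want MapReduce jobs submitted to YARN rather than run in local mode, the usual 2.x configuration is:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>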
Configure hadoop-env.sh, setting JAVA_HOME explicitly so the Hadoop scripts can locate the JDK:
export JAVA_HOME=/usr/local/jdk
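One way to apply this, assuming the stock hadoop-env.sh (whose JAVA_HOME line reads export JAVA_HOME=${JAVA_HOME} by default), is a quick in-place edit:

[root@leaf hadoop]# sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh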
5. Starting and testing Hadoop
(1) Format the HDFS distributed filesystem
Run the following command:
[root@leaf ~]# hadoop namenode -format
...
17/08/30 08:41:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/08/30 08:41:29 INFO util.ExitUtil: Exiting with status 0
17/08/30 08:41:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at leaf/127.0.0.1
************************************************************/
Check that the output ends like the above, in particular Exiting with status 0; if so, the format succeeded.
(2) Start the Hadoop services
Run the following command:
[root@leaf ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/08/30 08:53:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-namenode-leaf.out
localhost: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-leaf.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-leaf.out
17/08/30 08:53:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-resourcemanager-leaf.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-leaf.out
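As the deprecation notice suggests, start-dfs.sh followed by start-yarn.sh is the preferred way to start the daemons in 2.x; the matching stop scripts shut everything down when you are finished:

[root@leaf ~]# stop-all.sh    # or, equivalently: stop-yarn.sh && stop-dfs.sh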
(3) Test the Hadoop services
Once startup completes, run the jps command to see the Hadoop daemons:
[root@leaf ~]# jps
4167 SecondaryNameNode
4708 Jps
3907 NameNode
4394 NodeManager
4306 ResourceManager
3993 DataNode
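As an extra sanity check (optional, not part of the original walkthrough), you can exercise HDFS with a few filesystem commands; if the final listing shows the uploaded file, HDFS is reading and writing correctly:

[root@leaf ~]# hadoop fs -mkdir -p /test
[root@leaf ~]# hadoop fs -put /etc/hosts /test
[root@leaf ~]# hadoop fs -ls /test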
You can also visit the web UIs in a browser. The NameNode page is at http://10.0.0.131:50070.
The DataNode page is at http://10.0.0.131:50075.
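If YARN came up correctly, the ResourceManager web UI should also be reachable, by default at http://10.0.0.131:8088.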
6. References
《Hadoop核心技術》 (Hadoop Core Technology)
Note, however, that the book covers Hadoop 1.x, while this article uses 2.x.