
How to Deploy Hadoop in Pseudo-Distributed Mode

Published: 2021-12-09 11:40:56  Source: Yisu Cloud  Reads: 128  Author: Xiaoxin  Category: Big Data

This article explains in detail how to deploy Hadoop in pseudo-distributed mode. The editor finds it quite practical and shares it here for reference; hopefully you will get something out of it.


Deployment modes:
1. Standalone mode: a single Java process
2. Pseudo-distributed mode: for development and learning; multiple Java processes on one machine
3. Cluster mode: for production; multiple Java processes across multiple machines

Pseudo-distributed deployment: HDFS

1. Create a dedicated user for the hadoop service
[root@hadoop02 software]# useradd hadoop
[root@hadoop02 software]# id hadoop
uid=515(hadoop) gid=515(hadoop) groups=515(hadoop)
[root@hadoop02 software]# vi /etc/sudoers
hadoop  ALL=(root)      NOPASSWD:ALL

2. Install Java
Use the Oracle JDK 1.8 (avoid OpenJDK where possible)
[root@hadoop02 jdk1.8.0_45]# which java
/usr/java/jdk1.8.0_45/bin/java
[root@hadoop02 jdk1.8.0_45]#

3. Verify the sshd service is running
[root@hadoop02 ~]# service sshd status
openssh-daemon (pid  1386) is running...
[root@hadoop02 ~]# 

4. Extract Hadoop
[root@hadoop02 software]# tar -xzvf hadoop-2.8.1.tar.gz

How chown -R behaves with symlinks:
chown -R hadoop:hadoop <directory>    --> changes the directory and everything inside it
chown -R hadoop:hadoop <symlink>      --> changes only the symlink itself, not the target's contents
chown -R hadoop:hadoop <symlink>/*    --> leaves the symlink alone, changes only the contents
chown -R hadoop:hadoop hadoop-2.8.1   --> changes the original directory tree
[root@hadoop02 software]# ln -s hadoop-2.8.1 hadoop
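The chown/symlink distinctions above can be sketched with throwaway directories. The paths below are hypothetical stand-ins created via mktemp, and ownership is set to the current user so the commands run unprivileged; the symlink-vs-target behavior described is that of GNU coreutils chown (default -P traversal).

```shell
set -e
base=$(mktemp -d)                         # stand-in for /opt/software
mkdir -p "$base/hadoop-2.8.1/bin"
ln -s "$base/hadoop-2.8.1" "$base/hadoop" # versioned dir behind a stable name

me=$(id -un); grp=$(id -gn)               # chown to ourselves: no root needed
chown -R "$me:$grp" "$base/hadoop"        # symlink argument: only the link itself
chown -R "$me:$grp" "$base/hadoop"/*      # the glob dereferences: only the contents
chown -R "$me:$grp" "$base/hadoop-2.8.1"  # the real directory tree

readlink "$base/hadoop"                   # still points at the versioned directory
```

The stable `hadoop` symlink is what makes later upgrades cheap: repoint the link at a new versioned directory and every script that uses the unversioned path keeps working.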

[root@hadoop02 software]# cd hadoop
[root@hadoop02 hadoop]# rm -f *.txt
[root@hadoop02 hadoop]# ll
total 28
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 bin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 etc
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 include
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 lib
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 libexec
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 sbin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 share
[root@hadoop02 hadoop]#

bin: commands
etc: configuration files
sbin: scripts that start and stop the Hadoop daemons

5. Switch to the hadoop user and configure
[root@hadoop02 hadoop]# su - hadoop
[hadoop@hadoop02 ~]$ ll
total 0
[hadoop@hadoop02 ~]$ cd /opt/software/hadoop
[hadoop@hadoop02 hadoop]$ ll
total 28
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 bin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 etc
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 include
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 lib
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 libexec
drwxr-xr-x. 2 hadoop hadoop 4096 Dec 10 11:54 sbin
drwxr-xr-x. 3 hadoop hadoop 4096 Dec 10 11:54 share
[hadoop@hadoop02 hadoop]$ cd etc/hadoop

hadoop-env.sh    : Hadoop environment settings
core-site.xml    : Hadoop core configuration
hdfs-site.xml    : HDFS configuration --> its daemons read this
[mapred-site.xml : configuration for MapReduce] only needed when running jar computations
yarn-site.xml    : YARN configuration --> its daemons read this
slaves           : host names of the cluster machines

[hadoop@hadoop02 hadoop]$ vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
    </property>
</configuration>

[hadoop@hadoop02 hadoop]$ vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

6. Set up passwordless ssh for the hadoop user
[hadoop@hadoop02 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
5b:07:ff:e5:82:85:f3:41:32:f3:80:05:c9:57:0f:e9 hadoop@rzdatahadoop002
The key's randomart image is:
+--[ RSA 2048]----+
|         ..o..o. |
|          oo. .o |
|          o.=.. .|
|           o OE  |
|        S . = + .|
|         o . * + |
|        .   . + .|
|               . |
|                 |
+-----------------+

[hadoop@hadoop02 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop02 ~]$ chmod 0600 ~/.ssh/authorized_keys
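The same non-interactive key setup can be rehearsed in a throwaway directory (a hypothetical stand-in for ~/.ssh created via mktemp), which is safe to rerun without touching real credentials. Note that sshd refuses authorized_keys files whose permissions are too open, hence the chmod 600.

```shell
set -e
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q     # -N '' = empty passphrase, -q = quiet
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys" # authorize our own public key
chmod 600 "$tmp/authorized_keys"                # sshd ignores group/world-accessible files
stat -c '%a' "$tmp/authorized_keys"             # should print 600
```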

7. Format the NameNode
[hadoop@hadoop002 hadoop]$ bin/hdfs namenode -format
17/12/13 22:22:04 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
17/12/13 22:22:04 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/12/13 22:22:04 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/12/13 22:22:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/12/13 22:22:04 INFO util.ExitUtil: Exiting with status 0
17/12/13 22:22:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at rzdatahadoop002/192.168.137.201
************************************************************/

Storage directory: /tmp/hadoop-hadoop/dfs/name

1. Which configuration controls this default storage path?
2. What does "hadoop-hadoop" mean?
core-site.xml:
hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, so for the user "hadoop" it becomes /tmp/hadoop-hadoop.
hdfs-site.xml:
dfs.namenode.name.dir defaults to file://${hadoop.tmp.dir}/dfs/name
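Because hadoop.tmp.dir defaults to /tmp, which many distributions clear on reboot, a common follow-up is to point it at a persistent directory in core-site.xml. The path /home/hadoop/tmp below is an illustrative choice, not from the original walkthrough; changing it after formatting means the NameNode must be reformatted or the old data moved.

```xml
<property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
</property>
```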

8. Start the HDFS service
[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [localhost]
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 9a:ea:f5:06:bf:de:ca:82:66:51:81:fe:bf:8a:62:36.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 9a:ea:f5:06:bf:de:ca:82:66:51:81:fe:bf:8a:62:36.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
[hadoop@hadoop02 sbin]$ ps -ef|grep hadoop
root     11292 11085  0 21:59 pts/1    00:00:00 su - hadoop
hadoop   11293 11292  0 21:59 pts/1    00:00:00 -bash
hadoop   11822 11293  0 22:34 pts/1    00:00:00 ps -ef
hadoop   11823 11293  0 22:34 pts/1    00:00:00 grep hadoop
[hadoop@rzdatahadoop002 sbin]$ echo $JAVA_HOME
/usr/java/jdk1.8.0_45
So the JAVA_HOME variable is set in the current shell, and yet the HDFS daemons fail to start.
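The cause: start-dfs.sh launches each daemon over ssh, and a non-interactive ssh session does not source the login profile where JAVA_HOME was exported. The effect can be simulated with env -i, which runs a command in an empty environment much as sshd spawns one:

```shell
export JAVA_HOME=/usr/java/jdk1.8.0_45         # visible in the current shell...
env -i sh -c 'echo "JAVA_HOME=[$JAVA_HOME]"'   # ...but empty in a fresh environment
# prints: JAVA_HOME=[]
```

That is why hadoop-env.sh hardcodes JAVA_HOME, as the next step does: the daemons read it regardless of how the shell was spawned.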

[hadoop@hadoop02 sbin]$ vi ../etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_45

[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out

namenode (name node)           : localhost
datanode (data node)           : localhost
secondary namenode             : 0.0.0.0

http://localhost:50070/
The web UI listens on port 50070 by default.
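Besides the web UI, a quick sanity check is that `jps` lists the three HDFS daemons. Since this sketch cannot assume a running cluster, it greps a captured sample of typical jps output (the PIDs are made up); on a live machine you would pipe `jps` itself instead of $sample.

```shell
sample='11905 NameNode
12021 DataNode
12185 SecondaryNameNode
12400 Jps'
for d in NameNode DataNode SecondaryNameNode; do
  echo "$sample" | grep -q " $d$" && echo "$d is up"
done
# prints:
# NameNode is up
# DataNode is up
# SecondaryNameNode is up
```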

9. Using the commands (hadoop, hdfs)
[hadoop@hadoop02 bin]$ ./hdfs dfs -mkdir /user
[hadoop@hadoop02 bin]$ ./hdfs dfs -mkdir /user/hadoop
[hadoop@hadoop02 bin]$ echo "123456" > rz.log
[hadoop@hadoop02 bin]$ ./hadoop fs -put rz.log hdfs://localhost:9000/
[hadoop@hadoop02 bin]$ 
[hadoop@hadoop02 bin]$ ./hadoop fs -ls hdfs://localhost:9000/
Found 2 items
-rw-r--r--   1 hadoop supergroup          7 2017-12-13 22:56 hdfs://localhost:9000/rz.log
drwxr-xr-x   - hadoop supergroup          0 2017-12-13 22:55 hdfs://localhost:9000/user

[hadoop@hadoop02 bin]$ ./hadoop fs -ls /
Found 2 items
-rw-r--r--   1 hadoop supergroup          7 2017-12-13 22:56 hdfs://localhost:9000/rz.log
drwxr-xr-x   - hadoop supergroup          0 2017-12-13 22:55 hdfs://localhost:9000/user

10. Change hdfs://localhost:9000 to hdfs://192.168.137.201:9000
[hadoop@hadoop02 bin]$ ../sbin/stop-dfs.sh
[hadoop@hadoop02 bin]$ vi ../etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://192.168.137.201:9000</value>
    </property>
</configuration>

[hadoop@hadoop02 bin]$ ./hdfs namenode -format
[hadoop@hadoop02 bin]$ ../sbin/start-dfs.sh 
Starting namenodes on [hadoop002]
rzdatahadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
localhost: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out

[hadoop@hadoop02 bin]$ netstat -nlp|grep 9000
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 192.168.137.201:9000        0.0.0.0:*                   LISTEN      14974/java          
[hadoop@hadoop02 bin]$ 
11. Make the HDFS daemons start on hadoop02
namenode: hadoop02
datanode: localhost
secondarynamenode: 0.0.0.0

For the datanode, edit slaves:
[hadoop@hadoop002 hadoop]$ vi slaves
hadoop02

For the secondarynamenode, edit hdfs-site.xml:
[hadoop@hadoop02 hadoop]$ vi hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
            <name>dfs.replication</name>
            <value>1</value>
    </property>
    <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>rzdatahadoop002:50090</value>
    </property>
    <property>
            <name>dfs.namenode.secondary.https-address</name>
            <value>rzdatahadoop002:50091</value>
    </property>
</configuration>

[hadoop@hadoop02 hadoop]$ cd ../../sbin
[hadoop@hadoop02 sbin]$ ./stop-dfs.sh
[hadoop@hadoop02 sbin]$ ./start-dfs.sh 
Starting namenodes on [hadoop02]
hadoop02: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-rzdatahadoop002.out
hadoop02: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-rzdatahadoop002.out
Starting secondary namenodes [rzdatahadoop002]
hadoop02: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-rzdatahadoop002.out

Supplement:
Suppose a service's data directory lives at /a/dfs/data on disk A (500G) with only 10G free,
and a new 2T disk B is added:
1. On disk A: mv /a/dfs /b/
2. On disk B: ln -s /b/dfs /a
3. Check (and fix) the owner and group of the directories on both disks
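The three steps above can be sketched with temporary directories standing in for the two disks (the paths are hypothetical stand-ins created via mktemp; in production you would stop the service before the mv):

```shell
set -e
A=$(mktemp -d); B=$(mktemp -d)       # stand-ins for the full A disk and the new B disk
mkdir -p "$A/dfs/data"
echo block > "$A/dfs/data/blk_0001"  # pretend this is the existing service data

mv "$A/dfs" "$B/"                    # 1. move the data onto the big disk
ln -s "$B/dfs" "$A/dfs"              # 2. symlink the old path to the new home
cat "$A/dfs/data/blk_0001"           # the service keeps reading its old path
# prints: block
```

The service never learns the data moved: its configured path /a/dfs/data still resolves, now through the symlink onto disk B.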

That concludes this article on deploying Hadoop in pseudo-distributed mode. Hopefully the content above is helpful and you learned something from it; if you found the article useful, please share it so more people can see it.
