Environment: IBM DS8K storage + SAN192 switches + Red Hat Enterprise Linux 6.1 + multipath software
The work falls into three major steps: create the zones first, then provision the storage, and finally bind the multipath devices on the host.
I. Create the zones
To support future TSM backups, the mobile POS hosts are zoned with both the DS8700 and the TS3500 tape library; for production safety, the zones are created on each of the two switches separately.
1. Connect to the SAN384 switch and back up the existing config
Set the local workstation's IP to 10.77.77.88/255.255.255.0
telnet 10.77.77.77
Username: admin  Password: xxxx
Backup file name for switch 1: config-san.txt
Backup file name for switch 2: config-san2.txt
IBM_2499_192:FID128:admin> configupload
Protocol (scp, ftp, local) [ftp]: ftp
Server Name or IP Address [host]: 10.77.77.88
User Name [user]: ftp
Path/Filename [<home dir>/config.txt]: /upload/config-san.txt
Section (all|chassis|FID# [all]): all
Password:
configUpload complete: All selected config parameters are uploaded
2. Create zones for the mobile POS hosts
(1) On switch 1
zonecreate "MPOS_SW1_GZDS8K" "1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3"
-- Notes: each zone member is a Domain,Index pair
-- 1 / 2 -- the Domain IDs of the two SAN switches
-- 1,2; 1,3; ... 1,9 -- the storage's connections to the SAN switch, by port index (the Port Index shown by switchshow, not the Port number)
-- 2,0; 2,1; 2,2; 2,3 -- the new hosts' connections to the SAN switch
-- 1,0 and 1,1 -- ports connected to the tape library
zonecreate "TS35a_MPOS1_R1","1,0;2,1"
zonecreate "TS35a_MPOS2_R1","1,0;2,3"
zonecreate "TS35b_MPOS1_R1","1,1;2,1"
zonecreate "TS35b_MPOS2_R1","1,1;2,3"
cfgadd "TYZF_SW1","MPOS_SW1_GZDS8K"
cfgadd "TYZF_SW1","TS35a_MPOS1_R1"
cfgadd "TYZF_SW1","TS35a_MPOS2_R1"
cfgadd "TYZF_SW1","TS35b_MPOS1_R1"
cfgadd "TYZF_SW1","TS35b_MPOS2_R1"
After configuring, save and enable the configuration:
cfgsave
cfgenable "TYZF_SW1"
Check that the status is normal:
switchshow
Verify the switch is online and that each port shows as connected.
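The zonecreate/cfgadd sequence above is repetitive and error-prone to type by hand. A minimal shell sketch (zone names and members taken from the commands above) can emit the command list for review before pasting it into the switch session; it only generates text and does not talk to the switch:

```shell
#!/bin/sh
# Emit the Brocade zoning command sequence for one switch as plain text
# (a copy-paste aid only; this does not connect to the switch).
# Usage: emit_zone_cmds CFGNAME "zonename=member-list" ...
emit_zone_cmds() {
    cfg=$1; shift
    for def in "$@"; do
        name=${def%%=*}          # text before the first '='
        members=${def#*=}        # text after the first '='
        printf 'zonecreate "%s","%s"\n' "$name" "$members"
        printf 'cfgadd "%s","%s"\n' "$cfg" "$name"
    done
    printf 'cfgsave\ncfgenable "%s"\n' "$cfg"
}

# Switch 1 zones from the plan above:
emit_zone_cmds TYZF_SW1 \
    'MPOS_SW1_GZDS8K=1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3' \
    'TS35a_MPOS1_R1=1,0;2,1' \
    'TS35a_MPOS2_R1=1,0;2,3' \
    'TS35b_MPOS1_R1=1,1;2,1' \
    'TS35b_MPOS2_R1=1,1;2,3'
```

The same function with TYZF_SW2 and the TS35c/TS35d zone definitions produces the switch 2 sequence.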
(2) On switch 2
zonecreate "MPOS_SW2_GZDS8K" "1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3"
zonecreate "TS35c_MPOS1_L2","1,0;2,1"
zonecreate "TS35c_MPOS2_L2","1,0;2,3"
zonecreate "TS35d_MPOS1_L2","1,1;2,1"
zonecreate "TS35d_MPOS2_L2","1,1;2,3"
cfgadd "TYZF_SW2","MPOS_SW2_GZDS8K"
cfgadd "TYZF_SW2","TS35c_MPOS1_L2"
cfgadd "TYZF_SW2","TS35c_MPOS2_L2"
cfgadd "TYZF_SW2","TS35d_MPOS1_L2"
cfgadd "TYZF_SW2","TS35d_MPOS2_L2"
After configuring, save and enable the configuration:
cfgsave
cfgenable "TYZF_SW2"
Check that the status is normal:
switchshow
Verify the switch is online and that each port shows as connected.
3. Rollback plan
Restore the backed-up configuration file:
admin>switchdisable
admin>configdownload
Enter the username and the backup file name at the prompts:
config-san.txt
admin>switchenable
admin>switchshow
Verify the switch is online and that each port shows as connected.
II. On the DS8700, create a volume group for the mobile POS hosts' HBA WWPNs and map the new LUNs to it
(1) Log in to the DS8700
The two controllers' IPs are 172.16.0.3 and 172.17.0.4.
The management host obtains its IP via DHCP and reaches the DS8700's management IP, which is then used for login below (verify with ping 172.17.0.4).
Open a console with the client software and run dscli; the dscli> prompt appears:
C:\Program Files\IBM\dscli>dscli
Enter the primary management console IP address:172.17.0.4
Username: admin  Password: xxx
(2) Create the volume group
mkvolgrp -type scsimap256 mpos
-- Rollback: rmvolgrp v9
Query the newly created volume group:
dscli> lsvolgrp
The new volume group's ID should be v9.
(3) Provision the LUNs
Five 200 GB LUNs and four 5 GB LUNs are needed (the exact extent pools to be decided below).
The DS8700 has eight extent pools, P0-P7.
Remaining space:
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
========================================================================================
ext_P0 P0 fb 0 below 1199 62 1199 0 14
ext_P1 P1 fb 1 below 1199 62 1199 0 14
ext_P2 P2 fb 0 below 1199 62 1199 0 14
ext_P3 P3 fb 1 below 1149 64 1149 0 24
ext_P4 P4 fb 0 below 1732 53 1732 0 14
ext_P5 P5 fb 1 below 1732 53 1732 0 14
ext_P6 P6 fb 0 below 1732 53 1732 0 14
ext_P7 P7 fb 1 below 1732 53 1732 0 14
To spread I/O, carve one 200 GB LUN from each of P3-P7 and one 5 GB LUN from each of P4-P7.
Following the existing naming convention and the volume ID sequence:
mkfbvol -extpool P3 -cap 200 -name vol_#h 1321
mkfbvol -extpool P4 -cap 200 -name vol_#h 1417
mkfbvol -extpool P5 -cap 200 -name vol_#h 1514
mkfbvol -extpool P6 -cap 200 -name vol_#h 1614
mkfbvol -extpool P7 -cap 200 -name vol_#h 1714
mkfbvol -extpool P4 -cap 5 -name vol_#h 1418
mkfbvol -extpool P5 -cap 5 -name vol_#h 1515
mkfbvol -extpool P6 -cap 5 -name vol_#h 1615
mkfbvol -extpool P7 -cap 5 -name vol_#h 1715
-- Rollback: rmfbvol -safe 1321
rmfbvol -safe 1417
rmfbvol -safe 1514
rmfbvol -safe 1614
rmfbvol -safe 1714
rmfbvol -safe 1418
rmfbvol -safe 1515
rmfbvol -safe 1615
rmfbvol -safe 1715
Five 200 GB LUNs (1321, 1417, 1514, 1614, 1714) were created in pools P3-P7,
and four 5 GB LUNs (1418, 1515, 1615, 1715) in pools P4-P7.
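As a quick sanity check against the pool free-space table above, the total new capacity is 5 × 200 GB + 4 × 5 GB:

```shell
# Total capacity of the nine new LUNs, in GB.
total=$((5 * 200 + 4 * 5))
echo "${total} GB"   # prints "1020 GB"
```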
(4) Map the LUNs to the host volume group
chvolgrp -action add -volume 1321,1417,1514,1614,1714,1418,1515,1615,1715 v9
-- Rollback: chvolgrp -action remove -volume 1321,1417,1514,1614,1714,1418,1515,1615,1715 v9
Create the array-to-host mappings:
mkhostconnect -wwname 21000024ff50ce3c -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc0
mkhostconnect -wwname 21000024ff50ce3d -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc1
mkhostconnect -wwname 21000024ff50c9dc -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc2
mkhostconnect -wwname 21000024ff50c9dd -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc3
mkhostconnect -wwname 21000024ff50cada -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc0
mkhostconnect -wwname 21000024ff50cadb -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc1
mkhostconnect -wwname 21000024ff50cb78 -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc2
mkhostconnect -wwname 21000024ff50cb79 -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc3
-- Rollback: run lshostconnect -volgrp v9 to get each host_connect_id, then:
rmhostconnect <host_connect_id>
Check the LUNs mapped to v9:
showvolgrp v9
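The eight mkhostconnect commands differ only in WWPN and connection name. A small sketch like this can generate them from a list for review before running them in dscli (it emits text only, it does not invoke dscli):

```shell
#!/bin/sh
# Emit mkhostconnect commands from "wwpn:name" pairs (review aid only).
# Usage: emit_hostconnects VOLGRP wwpn:name ...
emit_hostconnects() {
    volgrp=$1; shift
    for pair in "$@"; do
        wwpn=${pair%%:*}
        name=${pair#*:}
        printf 'mkhostconnect -wwname %s -profile "Intel - Linux RHEL" -volgrp %s -ioport all %s\n' \
            "$wwpn" "$volgrp" "$name"
    done
}

# WWPNs and names from the plan above:
emit_hostconnects v9 \
    21000024ff50ce3c:mpos1_fc0 21000024ff50ce3d:mpos1_fc1 \
    21000024ff50c9dc:mpos1_fc2 21000024ff50c9dd:mpos1_fc3 \
    21000024ff50cada:mpos2_fc0 21000024ff50cadb:mpos2_fc1 \
    21000024ff50cb78:mpos2_fc2 21000024ff50cb79:mpos2_fc3
```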
III. Bind the multipath devices
1. Install multipath and generate /etc/multipath.conf
# fdisk -l | grep sd    -- list the disks; if Linux does not yet see the newly added LUNs, rescan the SCSI bus or reboot the host
Use scsi_id -g -u to look up the WWIDs of the newly provisioned LUNs.
#yum install device-mapper-multipath.x86_64
Generate the multipath.conf file:
#cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
2. Configure /etc/multipath.conf
Fill in the WWIDs found above, following the /etc/multipath.conf used on the other production hosts:
blacklist_exceptions {
    devnode "^(sd)[b-z]"
    devnode "^(dm-)[0-9]"
}
defaults {
    user_friendly_names yes
    path_grouping_policy group_by_prio
    features "1 queue_if_no_path"
    path_checker tur
}
multipaths {
    multipath {
        wwid <wwid>
        alias mpathdsk1
    }
    multipath {
        wwid <wwid>
        alias mpathdsk2
    }
    multipath {
        wwid <wwid>
        alias mpathdsk3
    }
    multipath {
        wwid <wwid>
        alias mpathdsk4
    }
    multipath {
        wwid <wwid>
        alias mpathdsk5
    }
    multipath {
        wwid <wwid>
        alias crsdsk1
    }
    multipath {
        wwid <wwid>
        alias crsdsk2
    }
    multipath {
        wwid <wwid>
        alias crsdsk3
    }
    multipath {
        wwid <wwid>
        alias crsdsk4
    }
}
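The nine multipath stanzas follow a single pattern, so once the real WWIDs are known (from scsi_id), a short script can generate the multipaths section rather than editing each stanza by hand. The WWIDs in the example call are hypothetical placeholders; substitute the values from your hosts:

```shell
#!/bin/sh
# Generate the multipaths { } section of /etc/multipath.conf from
# "wwid:alias" pairs. The WWIDs used below are hypothetical placeholders.
gen_multipaths() {
    echo 'multipaths {'
    for pair in "$@"; do
        wwid=${pair%%:*}
        alias=${pair#*:}
        printf '    multipath {\n        wwid %s\n        alias %s\n    }\n' \
            "$wwid" "$alias"
    done
    echo '}'
}

# Example with two placeholder WWIDs (replace with real scsi_id output):
gen_multipaths 3600507630affc16f0000000000001321:mpathdsk1 \
               3600507630affc16f0000000000001418:crsdsk1
```

The generated text can be reviewed and then pasted into /etc/multipath.conf in place of the multipaths block.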
After /etc/multipath.conf is configured:
# /etc/init.d/multipathd start
# multipath -ll    -- list the multipath devices; then reboot the server and confirm it comes back up cleanly
# ll /dev/mapper/    -- check the multipath devices under this path
Raw-device binding of the disks and the database installation are handled by the system integrator.