This article takes a detailed look at iSCSI, NFS, and Ceph; interested readers may find it a useful reference.
1. Server side
[1] Install administration tools.
[root@dlp ~]# yum -y install scsi-target-utils
[2] Configure the iSCSI Target.
For example, create a disk image under the [/iscsi_disks] directory and set it as a shared disk.
# create a disk image
[root@dlp ~]# mkdir /iscsi_disks
[root@dlp ~]# dd if=/dev/zero of=/iscsi_disks/disk01.img count=0 bs=1 seek=80G
[root@dlp ~]# vi /etc/tgt/targets.conf
# add the following to the end
# to define more devices, add another <target>-</target> section in the same way
# naming rule: [ iqn.year-month.domain:any name ]
<target iqn.2014-08.world.server:target00>
    # provided device as an iSCSI target
    backing-store /iscsi_disks/disk01.img
    # iSCSI Initiator's IP address you allow to connect
    initiator-address 10.0.0.31
    # authentication info ( set any "username" and "password" you like )
    incominguser username password
</target>
[root@dlp ~]# /etc/rc.d/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
[root@dlp ~]# chkconfig tgtd on
# confirm status
[root@dlp ~]# tgtadm --mode target --op show
Target 1: iqn.2014-08.world.server:target00
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 85899 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /iscsi_disks/disk01.img
            Backing store flags:
    Account information:
        username
    ACL information:
        10.0.0.31
2. Client configuration
[1] Configure iSCSI Initiator.
[root@www ~]# yum -y install iscsi-initiator-utils
[root@www ~]# vi /etc/iscsi/iscsid.conf
# line 53: uncomment
node.session.auth.authmethod = CHAP
# line 57,58: uncomment and set the username and password configured on the iSCSI Target
node.session.auth.username = username
node.session.auth.password = password
# discover target
[root@www ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
Starting iscsid: [ OK ]
10.0.0.30:3260,1 iqn.2014-08.world.server:target00
# confirm status after discovery
[root@www ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0-873.10.el6
node.name = iqn.2014-08.world.server:target00
node.tpgt = 1
node.startup = automatic
node.leading_login = No
...
...
...
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
# login to target
[root@www ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2014-08.world.server:target00, portal: 10.0.0.30,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.world.server:target00, portal: 10.0.0.30,3260] successful.
# confirm established session
[root@www ~]# iscsiadm -m session -o show
tcp: [1] 10.0.0.30:3260,1 iqn.2014-08.world.server:target00
# confirm partitions
[root@www ~]# cat /proc/partitions
major minor  #blocks  name
   8        0  209715200 sda
   8        1     512000 sda1
   8        2  209202176 sda2
 253        0  200966144 dm-0
 253        1    8232960 dm-1
   8       16   83886080 sdb
# the new device provided by the target was added as [sdb]
[2] The iSCSI device can then be used as follows.
[root@www ~]# yum -y install parted
# create a label
[root@www ~]# parted --script /dev/sdb "mklabel msdos"
# create a partition
[root@www ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
# format with EXT4
[root@www ~]# mkfs.ext4 /dev/sdb1
# mount
[root@www ~]# mount /dev/sdb1 /mnt
[root@www ~]# df -hT
Filesystem                 Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_dlp-lv_root ext4   189G  1.1G  179G   1% /
tmpfs                      tmpfs  1.9G     0  1.9G   0% /dev/shm
/dev/sda1                  ext4   485M   75M  385M  17% /boot
/dev/sdb1                  ext4    79G  184M   75G   1% /mnt
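Optionally, the mount can be made persistent across reboots. A sketch of an /etc/fstab entry, assuming the same /dev/sdb1 device as above; the _netdev option defers the mount until networking is up:

# /etc/fstab entry (sketch): _netdev delays mounting until the network,
# and hence the iSCSI session, is available
/dev/sdb1  /mnt  ext4  _netdev  0 0

In practice a UUID= identifier is safer than /dev/sdb1, since iSCSI device names can change between boots.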
Additional notes:
After installing the initiator on the client, start the daemon and then run discovery:
/etc/init.d/iscsid start
iscsiadm -m discovery -t st -p 12.123.0.51
iscsiadm -m discovery -T iqn.2015-06.com.oracle:zjxl -p 12.123.0.51:3260 -l
# edit the node configuration file
vim /var/lib/iscsi/send_targets/12.123.0.51\,3260/iqn.2015-06.com.oracle\:oracle\:zjxl\,12.123.0.51\,3260\,1\,default/default
Configure multipath:
Each machine should be connected to at least two storage IPs, so that if one path fails the other can still carry traffic. Edit /etc/multipath.conf:
blacklist {
    devnode "^sda"
}
defaults {
    user_friendly_names yes
    udev_dir /dev
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}
The blacklist section excludes the system disk from multipath.
Then run service multipathd restart; afterwards, cat /proc/partitions shows devices named dm-*, which are the multipath devices.
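A minimal sketch of these verification steps; multipath -ll is the standard way to inspect how the paths were grouped:

# reload multipathd with the new configuration
service multipathd restart
# show the multipath topology: each dm-* device groups several paths
multipath -ll
# the dm-* entries listed here are the multipath devices
cat /proc/partitions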
For Ceph itself, you can simply create a block device with rbd create, map it with rbd map, and check the mapped path with rbd showmapped (e.g. /dev/rbd0). Then configure it in /etc/tgt/targets.conf on the iSCSI server:
<target iqn.2008-09.com.example:server.target11>
    direct-store /dev/rbd0
</target>
The block device can then be used from the iSCSI client.
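For reference, a minimal sketch of the rbd steps just described; the pool and image names are illustrative:

# create a 10 GB image, map it through the kernel module,
# and confirm the resulting device path (e.g. /dev/rbd0)
rbd create mypool/disk01 -s 10240
rbd map mypool/disk01
rbd showmapped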
However, this approach is costly: the image is attached through the ceph rbd kernel module, which causes frequent switching between kernel space and user space and inevitably hurts performance.
This problem has already been solved:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
http://ceph.com/dev-notes/updates-to-ceph-tgt-iscsi-support/
http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/
iSCSI on Ubuntu already supports rbd, and Fedora 20 and later ship iSCSI RPM packages with rbd support, but on CentOS 6.5 an rbd-enabled iSCSI target cannot be configured from the stock packages.
If you have configured this feature as well, feel free to share your experience.
Exporting an iSCSI device from Ceph with the method above works, but because it relies on the rbd kernel module, the frequent kernel/user space switching inevitably hurts performance. Can an RBD block device be exported directly as an iSCSI device? Yes.
First, download the rbd-enabled scsi-target-utils RPM package from the following address:
# wget http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/scsi-target-utils-1.0.38-48.bf6981.ceph.el6.x86_64.rpm
# rpm -ivh scsi-target-utils-1.0.38-48.bf6981.ceph.el6.x86_64.rpm
After installation, check whether the current tgt supports the rbd driver:
# tgtadm --lld iscsi --mode system --op show
System:
    State: ready
    debug: off
LLDs:
    iser: error
    iscsi: ready
Backing stores:
    rbd (bsoflags sync:direct)
    rdwr (bsoflags sync:direct)
    ssc
    null
    bsg
    sg
    sheepdog
Device types:
    passthrough
    tape
    changer
    controller
    osd
    cd/dvd
    disk
iSNS:
    iSNS=Off
    iSNSServerIP=
    iSNSServerPort=3205
    iSNSAccessControl=Off
Create an RBD device:
# rbd create iscsi/tgt1 -s 10240
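Note that this assumes a pool named iscsi already exists. If it does not, it can be created first; a sketch, with an arbitrary placement-group count:

# create the pool referenced above (pg_num 128 is illustrative)
ceph osd pool create iscsi 128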
Edit /etc/tgt/targets.conf to export the RBD device just created:
include /etc/tgt/conf.d/*.conf

<target iqn.2014-11.rbdstore.com:iscsi>
    driver iscsi
    bs-type rbd
    backing-store iscsi/tgt1
</target>
Restart tgt:
# /etc/init.d/tgtd restart
Stopping target framework daemon
Starting target framework daemon
On the iSCSI initiator, connect to the iSCSI target:
[root@ceph-osd-1 ~]# iscsiadm -m discovery -t sendtargets -p 10.10.200.165
Starting iscsid: [ OK ]
10.10.200.165:3260,1 iqn.2014-11.rbdstore.com:iscsi
[root@ceph-osd-1 ~]# iscsiadm -m node -T iqn.2014-11.rbdstore.com:iscsi -l
Logging in to [iface: default, target: iqn.2014-11.rbdstore.com:iscsi, portal: 10.10.200.165,3260] (multiple)
Login to [iface: default, target: iqn.2014-11.rbdstore.com:iscsi, portal: 10.10.200.165,3260] successful.
[root@ceph-osd-1 ~]# fdisk -l

Disk /dev/sdb: 5788.2 GB, 5788206759936 bytes
255 heads, 63 sectors/track, 703709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda: 209.7 GB, 209715068928 bytes
255 heads, 63 sectors/track, 25496 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009a9dd

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         131     1048576   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             131        1176     8392704   82  Linux swap / Solaris
/dev/sda3            1176       25497   195356672   8e  Linux LVM

Disk /dev/mapper/vg_swift-LogVol00: 200.0 GB, 200043134976 bytes
255 heads, 63 sectors/track, 24320 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/docker-253:0-3539142-pool: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 65536 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4194304 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk identifier: 0x00000000
The output above shows the export succeeded: the 10 GB RBD image appears as the new /dev/sdc.
RBD can also be exported over NFS. On one of the Ceph nodes, map a block device with rbd map, then format it and mount it on a directory such as /mnt. Install the NFS RPM package on that node:
yum -y install nfs-utils
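For completeness, a sketch of the preparation step just described, with illustrative pool and image names:

# create, map, format, and mount the RBD image that will be exported
rbd create rbd/nfsimg -s 10240
rbd map rbd/nfsimg          # typically appears as /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt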
Configure the exported directory:
[root@mon0 mnt]# cat /etc/exports
/mnt 192.168.101.157(rw,async,no_subtree_check,no_root_squash)
/mnt 192.168.108.4(rw,async,no_subtree_check,no_root_squash)
Start the services and export:
service rpcbind start
chkconfig rpcbind on
service nfs start
chkconfig nfs on
exportfs -r
Check from the client:
[root@osd2 /]# showmount -e mon0
Export list for mon0:
/mnt 192.168.108.4,192.168.101.157
Then mount it:
mount -t nfs mon0:/mnt /mnt
Note that NFS uses UDP by default here; if the network is unstable, switch to TCP:
mount -t nfs mon0:/mnt /mnt -o proto=tcp -o nolock
On the client machine, configure ceph.repo and install the rbd-fuse RPM package; a pool can then be mounted directly:
rbd-fuse -p test /mnt
The example above mounts the test pool at /mnt on the client, with cephx disabled. The block images in the test pool then appear as files under /mnt, and an image can be attached with losetup, as sketched below.
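A sketch, assuming the test pool contains an image named img01 (the name is illustrative):

# rbd-fuse exposes each image in the pool as a file under /mnt;
# losetup -f --show attaches it and prints the loop device it chose
losetup -f --show /mnt/img01    # e.g. /dev/loop0
mount /dev/loop0 /media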
To unmount, simply run fusermount -u /mnt.