A block is a sequence of bytes (for example, a 512-byte block of data). Block-based storage interfaces are the most common way to store data on rotating media such as hard disks, CDs, floppy disks, and even traditional 9-track tape. The ubiquity of the block device interface makes a virtual block device an ideal candidate for interacting with a mass data storage system like Ceph.
Ceph block devices are thin-provisioned, resizable, and stripe their data across multiple OSDs in the Ceph cluster. They leverage RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with OSDs through the kernel module or the librbd library.
Ceph block devices deliver high performance and virtually unlimited scalability to kernel devices, KVMs such as QEMU, and cloud-based computing systems such as OpenStack and CloudStack. You can use the same cluster to operate the Ceph RADOS Gateway, the Ceph File System, and Ceph block devices simultaneously.
Create a pool and a block device
[root@ceph-node1 ~]# ceph osd pool create block 6
pool 'block' created
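On Luminous and later releases, a pool should also be tagged with the application that will use it before images are created in it; a minimal extra step, assuming a recent Ceph release:
[root@ceph-node1 ~]# ceph osd pool application enable block rbd
[root@ceph-node1 ~]# rbd pool init block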
Create a user for the client and scp the keyring file to the client
[root@ceph-node1 ~]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=block'| tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQA04PpdtJpbGxAAd+lCJFQnDfRlWL5cFUShoQ==
[root@ceph-node1 ~]# scp ceph.client.rbd.keyring root@ceph-client:/etc/ceph
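To confirm the key works from the client, you can query the cluster status with the new user (a quick sanity check; client.rbd only has read access on the monitors, which is enough for this):
[root@ceph-client /]# ceph -s --name client.rbd --keyring /etc/ceph/ceph.client.rbd.keyring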
Create a 2 GB block device on the client
[root@ceph-client /]# rbd create block/rbd0 --size 2048 --name client.rbd
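Optionally, verify the image and inspect its size and feature list with rbd info (the feature list matters for the mapping step below):
[root@ceph-client /]# rbd info block/rbd0 --name client.rbd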
Map the block device on the client
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
/dev/rbd0
[root@ceph-client /]# rbd showmapped --name client.rbd
id pool image snap device
0 block rbd0 - /dev/rbd0
Note: the following error may occur here
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory
There are three ways to resolve this; see my separate blog post on fixing "rbd: sysfs write failed".
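The most common cause is that the image carries features (object-map, fast-diff, deep-flatten) that the kernel RBD client cannot handle; disabling them, as the service script at the end of this post also does, usually lets the map succeed:
[root@ceph-client /]# rbd feature disable block/rbd0 object-map fast-diff deep-flatten --name client.rbd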
Create a filesystem and mount the block device
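The mount point must exist before the device can be mounted; create it first if it is not already there (assuming the /ceph-rbd0 path used below):
[root@ceph-client /]# mkdir -p /ceph-rbd0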
[root@ceph-client /]# fdisk -l /dev/rbd0
Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
[root@ceph-client /]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-client /]# mount /dev/rbd0 /ceph-rbd0
[root@ceph-client /]# df -Th /ceph-rbd0
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0 xfs 2.0G 33M 2.0G 2% /ceph-rbd0
Write some data to test
[root@ceph-client /]# dd if=/dev/zero of=/ceph-rbd0/file count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0674301 s, 1.6 GB/s
[root@ceph-client /]# ls -lh /ceph-rbd0/file
-rw-r--r-- 1 root root 100M Dec 19 10:50 /ceph-rbd0/file
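Since RBD images are thin-provisioned and resizable (see the introduction), the image and the XFS filesystem on it can also be grown online; a quick sketch, assuming the same image and mount point (the new size is in MB):
[root@ceph-client /]# rbd resize block/rbd0 --size 4096 --name client.rbd
[root@ceph-client /]# xfs_growfs /ceph-rbd0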
Turn it into a system service
[root@ceph-client /]# cat /usr/local/bin/rbd-mount
#!/bin/bash
# Pool name where the block device image is stored
export poolname=block
# Disk image name
export rbdimage0=rbd0
# Mount directory
export mountpoint0=/ceph-rbd0
# The mount ("m") or unmount ("u") action is passed in from the systemd service as an argument
if [ "$1" == "m" ]; then
    modprobe rbd
    # Disable features the kernel RBD client may not support, then map the image.
    # Note the pool/image form: without the pool prefix, rbd would look in the default 'rbd' pool.
    rbd feature disable $poolname/$rbdimage0 object-map fast-diff deep-flatten --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
    rbd map $poolname/$rbdimage0 --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
    mkdir -p $mountpoint0
    mount /dev/rbd/$poolname/$rbdimage0 $mountpoint0
fi
if [ "$1" == "u" ]; then
    umount $mountpoint0
    rbd unmap /dev/rbd/$poolname/$rbdimage0
fi
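The script has to be executable before systemd can call it, and it is worth testing both actions by hand once:
[root@ceph-client ~]# chmod +x /usr/local/bin/rbd-mount
[root@ceph-client ~]# /usr/local/bin/rbd-mount m
[root@ceph-client ~]# /usr/local/bin/rbd-mount u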
[root@ceph-client ~]# cat /etc/systemd/system/rbd-mount.service
[Unit]
Description=RADOS block device mapping for rbd0 in pool block
Conflicts=shutdown.target
Wants=network-online.target
After=NetworkManager-wait-online.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u
[Install]
WantedBy=multi-user.target
Mount automatically at boot
[root@ceph-client ~]# systemctl daemon-reload
[root@ceph-client ~]# systemctl enable rbd-mount.service
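To verify without rebooting, start the service and check that the device was mapped and mounted:
[root@ceph-client ~]# systemctl start rbd-mount.service
[root@ceph-client ~]# df -Th /ceph-rbd0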