1. Error 1
[root@ct ceph]# ceph -s
  cluster:
    id:     dfb110f9-e0e0-4544-9f13-9141750ee9f6
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c2, c1
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   3 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     102 active+undersized
             90 stale+active+undersized
Check the OSD status; the OSD on c2 is not connected:
[root@ct ceph]# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| 0 | ct | 1026M | 1022G | 0 | 0 | 0 | 0 | exists,up |
| 1 | c1 | 1026M | 1022G | 0 | 0 | 0 | 0 | exists,up |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
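As an extra check not shown in the output above, ceph osd tree lists every OSD the cluster knows about along with its host and up/down state; an OSD that is missing from the tree or marked down points at the daemon on that host. A minimal sketch, assuming the same ct/c1/c2 layout:
[root@ct ceph]# ceph osd tree                   # the c2 OSD should appear here; if it is absent or down, check its daemon
[root@c2 ~]# systemctl status ceph-osd.target   # confirm whether the OSD service on c2 is actually running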
Solution:
Restart the OSD service on c2:
[root@c2 ~]# systemctl restart ceph-osd.target
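After the restart, a quick way to confirm the fix (a hedged check, assuming the daemon comes back cleanly) is to re-run the status commands and make sure the c2 OSD registers and the undersized-PG warning clears:
[root@c2 ~]# systemctl status ceph-osd.target   # the service should report active
[root@ct ceph]# ceph osd status                 # c2 should now be listed with state exists,up
[root@ct ceph]# ceph -s                         # PGs should move from undersized toward active+clean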
2. Error 2
[root@ct ceph]# ceph -s
  cluster:
    id:     44d72edb-4085-4cfc-8652-eb670472f169
    health: HEALTH_WARN
            clock skew detected on mon.c1, mon.c2

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: c1(active), standbys: c2, ct
    osd: 3 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1023 GiB / 1024 GiB avail
    pgs:
Solution (a verification sketch follows these steps):
(1) Restart the NTP service on the control node:
[root@ct ceph]# systemctl restart ntpd
(2) Re-synchronize the compute node's clock against the control node:
[root@c2 ~]# ntpdate 192.168.100.10
(3) Restart the mon service on the control node:
[root@ct ceph]# systemctl restart ceph-mon.target
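Once the clocks agree again, the monitors re-check skew on their own and the warning should clear. A hedged verification sketch using standard ntp and ceph tools (the commands below are not from the original article):
[root@ct ceph]# ntpq -p                  # confirm the control node's NTP peers are reachable and selected
[root@c2 ~]# ntpdate -q 192.168.100.10   # query-only check of the offset from the control node
[root@ct ceph]# ceph time-sync-status    # per-mon clock skew as seen by the monitor quorum
[root@ct ceph]# ceph health detail       # the clock skew warning should no longer be listed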