GlusterFS is an open-source distributed file system with strong scale-out capability. It can support petabytes of storage and thousands of clients, interconnecting nodes over the network into a single parallel network file system, and is characterized by scalability, high performance, and high availability.
Prerequisite: a GlusterFS cluster must already be deployed in the test environment; this article uses a volume named gv0.
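For reference, a minimal sketch of how such a volume can be created on the GlusterFS nodes, assuming the two nodes 10.0.0.41 and 10.0.0.42 used later and a hypothetical brick path /data/brick1/gv0 (gluster may warn that replica 2 is prone to split-brain):
$ gluster peer probe 10.0.0.42    # run on 10.0.0.41 to form the trusted pool
$ gluster volume create gv0 replica 2 10.0.0.41:/data/brick1/gv0 10.0.0.42:/data/brick1/gv0
$ gluster volume start gv0
$ gluster volume info gv0         # confirm the volume is in "Started" state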
1. Create the Endpoints object
Save the following as glusterfs_ep.yaml:
$ vi glusterfs_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
# list the IP addresses of the GlusterFS cluster nodes
- addresses:
  - ip: 10.0.0.41
  - ip: 10.0.0.42
  ports:
  # GlusterFS brick port
  - port: 49152
    protocol: TCP
Apply the YAML:
$ kubectl create -f glusterfs_ep.yaml
endpoints/glusterfs created
# check the newly created Endpoints
[root@k8s-master01 ~]# kubectl get ep
NAME ENDPOINTS AGE
glusterfs 10.0.0.41:49152,10.0.0.42:49152 15s
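To confirm that port 49152 declared above matches the actual brick port, you can check on one of the GlusterFS nodes (the brick port may differ on your cluster):
$ gluster volume status gv0
# the Port column of each brick should match the port declared in the Endpoints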
2. Create a Service for the Endpoints
The Endpoints object lists the GlusterFS cluster nodes. To make them reachable, create a Service with the same name and no selector, so that Kubernetes associates it with the manually created Endpoints.
$ vi glusterfs_svc.yaml
apiVersion: v1
kind: Service
metadata:
  # this name must match the name of the Endpoints object
  name: glusterfs
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
Apply the YAML:
$ kubectl create -f glusterfs_svc.yaml
service/glusterfs created
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs ClusterIP 10.1.104.145 <none> 49152/TCP 20s
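To verify that the Service has been associated with the manually created Endpoints:
$ kubectl describe svc glusterfs | grep -i endpoints
# should list 10.0.0.41:49152,10.0.0.42:49152, i.e. the addresses defined in step 1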
3. Create a PV for GlusterFS
$ vi glusterfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    # capacity of this PV
    storage: 50Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    # name of the GlusterFS Endpoints object
    endpoints: "glusterfs"
    # path is a volume created in GlusterFS;
    # log in to the GlusterFS cluster and run "gluster volume list" to see existing volumes
    path: "gv0"
    readOnly: false
Apply the YAML:
$ kubectl create -f glusterfs_pv.yaml
persistentvolume/gluster created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gluster 50Gi RWX Retain Available 10s
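If needed, the GlusterFS source recorded in the PV can be inspected as well:
$ kubectl describe pv gluster
# the Source section should show Type: Glusterfs, EndpointsName: glusterfs, Path: gv0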
4. Create a PVC for GlusterFS
$ vi glusterfs_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # named after the PV for clarity (matching names are not required for binding)
  name: gluster
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      # storage requested from the PV
      storage: 20Gi
Apply the YAML:
$ kubectl create -f glusterfs_pvc.yaml
persistentvolumeclaim/gluster created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gluster Bound gluster 50Gi RWX 83s
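Note that the claim requested 20Gi but was bound to the 50Gi PV, so the reported capacity is that of the whole PV (a PV is always bound as a unit). To see which PV the claim bound to:
$ kubectl describe pvc gluster
# the Volume field should show "gluster", the PV created in step 3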
5. Create an nginx Pod and mount the GlusterFS PVC
$ vim nginx-demo.yaml
---
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data-gv0
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data-gv0
    persistentVolumeClaim:
      # reference the PVC created in step 4
      claimName: gluster
Apply the YAML:
$ kubectl create -f nginx-demo.yaml
pod/nginx created
[root@k8s-master01 ~]# kubectl get pods -o wide | grep "nginx"
nginx 1/1 Running 0 2m 10.244.1.222 k8s-node01 <none> <none>
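To verify that the GlusterFS volume is actually mounted inside the container (the nginx:1.13 image is Debian-based and ships the mount utility):
$ kubectl exec nginx -- mount | grep gluster
# should show gv0 mounted on /usr/share/nginx/html with type fuse.glusterfs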
On any client, mount the GlusterFS volume on /mnt and create an index.html file:
$ mount -t glusterfs k8s-store01:/gv0 /mnt/
$ cd /mnt && echo "this nginx store used gluterfs cluster" >index.html
Access the Pod from the master node with curl:
$ curl 10.244.1.222/index.html
this nginx store used gluterfs cluster
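Once the test succeeds, the demo resources can be removed in reverse order if no longer needed:
$ kubectl delete -f nginx-demo.yaml
$ kubectl delete -f glusterfs_pvc.yaml
$ kubectl delete -f glusterfs_pv.yaml
$ kubectl delete -f glusterfs_svc.yaml
$ kubectl delete -f glusterfs_ep.yaml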