Kubernetes, abbreviated as K8s (the "8" stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updates, and maintenance.
Host list:
Hostname | CentOS version | IP | docker version | flannel version | Keepalived version | Host spec | Notes |
---|---|---|---|---|---|---|---|
master01 | 7.6.1810 | 172.27.34.3 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
master02 | 7.6.1810 | 172.27.34.4 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
master03 | 7.6.1810 | 172.27.34.5 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | control plane |
work01 | 7.6.1810 | 172.27.34.93 | 18.09.9 | / | / | 4C4G | worker node |
work02 | 7.6.1810 | 172.27.34.94 | 18.09.9 | / | / | 4C4G | worker node |
work03 | 7.6.1810 | 172.27.34.95 | 18.09.9 | / | / | 4C4G | worker node |
VIP | 7.6.1810 | 172.27.34.130 | 18.09.9 | v0.11.0 | v1.3.5 | 4C4G | floats across the control plane nodes |
client | 7.6.1810 | 172.27.34.234 | / | / | / | 4C4G | client |
There are 7 servers in total: 3 control plane nodes, 3 worker nodes, and 1 client.
k8s versions:
Hostname | kubelet version | kubeadm version | kubectl version | Notes |
---|---|---|---|---|
master01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
master02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
master03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work01 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work02 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
work03 | v1.16.4 | v1.16.4 | v1.16.4 | kubectl optional |
client | / | / | v1.16.4 | client |
This article uses kubeadm to build a highly available k8s cluster. High availability of a k8s cluster is in practice high availability of its core components; an active-standby model is used here, with the following architecture:
Active-standby HA architecture:
Core component | HA mode | HA implementation |
---|---|---|
apiserver | active/standby | keepalived |
controller-manager | active/standby | leader election |
scheduler | active/standby | leader election |
etcd | cluster | kubeadm |
- apiserver: made highly available with keepalived; when a node fails, the keepalived VIP fails over to another node.
- controller-manager: k8s elects a leader internally (controlled by the --leader-elect option, default true); only one controller-manager instance is active in the cluster at any time.
- scheduler: k8s elects a leader internally (controlled by the --leader-elect option, default true); only one scheduler instance is active in the cluster at any time.
- etcd: high availability is achieved by letting kubeadm create the etcd cluster automatically; deploy an odd number of nodes. A 3-node cluster needs a quorum of 2, so it tolerates at most one machine failure.
Perform the steps in this section on both the control plane and worker nodes.
For CentOS 7.6 installation, see: CentOS 7.6 operating system installation and optimization notes.
The firewall and SELinux were disabled and the Alibaba mirror was configured during CentOS installation.
[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
Log out and log back in to see the new hostname, master01.
[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF
[root@master01 ~]# cat /sys/class/net/ens160/address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid
Make sure the MAC address and product_uuid are unique on every node.
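A quick way to compare both values across all nodes is a small loop like the one below (a sketch, not part of the original steps; it assumes root SSH access to every host in /etc/hosts and the interface name ens160):
for h in master01 master02 master03 work01 work02 work03; do
echo "== $h =="
ssh root@$h "cat /sys/class/net/ens160/address; cat /sys/class/dmi/id/product_uuid"
done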
[root@master01 ~]# swapoff -a
To make this persist across reboots, also edit /etc/fstab after disabling swap and comment out the swap entry:
[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
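A quick way to confirm swap is fully off (not part of the original steps; swapon prints nothing and free reports 0 when swap is disabled):
[root@master01 ~]# swapon --show
[root@master01 ~]# free -m | grep -i swap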
This article uses flannel for the k8s network. Flannel requires the kernel parameter bridge-nf-call-iptables=1, and setting that parameter requires the br_netfilter module.
Check for the br_netfilter module:
[root@master01 ~]# lsmod |grep br_netfilter
If the module is not present, run the commands below to add it; otherwise skip this step.
Load br_netfilter temporarily:
[root@master01 ~]# modprobe br_netfilter
This does not persist across reboots.
Load br_netfilter permanently:
[root@master01 ~]# cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- The name in square brackets is the repository id; it must be unique and identifies the repository.
- name: the repository name, user-defined.
- baseurl: the repository URL.
- enabled: whether the repository is enabled; 1 (the default) means enabled.
- gpgcheck: whether to verify the signatures of packages obtained from this repository; 1 means verify.
- repo_gpgcheck: whether to verify the repository metadata (the package list); 1 means verify.
- gpgkey=URL: location of the public key used for signature verification; required when gpgcheck is 1, unnecessary when gpgcheck is 0.
[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache
Configure passwordless SSH from master01 to master02 and master03. This step is performed only on master01.
[root@master01 ~]# ssh-keygen -t rsa
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.4
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.5
[root@master01 ~]# ssh 172.27.34.4
[root@master01 ~]# ssh master03
master01 can now log in to master02 and master03 without entering a password.
Perform the steps in this section on both the control plane and worker nodes.
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
This installs docker version 18.09.9.
[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker
[root@master01 ~]# yum -y install bash-completion
[root@master01 ~]# source /etc/profile.d/bash_completion.sh
Because Docker Hub's servers are overseas, pulling images can be slow, so a registry mirror (accelerator) can be configured. The main options are Docker's official China registry mirror, the Alibaba Cloud accelerator, and the DaoCloud accelerator; this article uses the Alibaba Cloud accelerator as an example.
Log in at https://cr.console.aliyun.com ; register an Alibaba Cloud account first if you do not have one.
Configure the daemon.json file:
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
Restart the service:
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
The registry mirror is now configured.
[root@master01 ~]# docker --version
[root@master01 ~]# docker run hello-world
Verify the docker installation by checking the version and running the hello-world container.
Modify daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@master01 ~]# more /etc/docker/daemon.json
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
The cgroup driver is changed to eliminate this warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
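To confirm the new cgroup driver is in effect, docker info can be checked (a quick verification, not part of the original steps; it should report "Cgroup Driver: systemd"):
[root@master01 ~]# docker info | grep -i cgroup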
Perform the steps in this section on all control plane nodes.
[root@master01 ~]# yum -y install keepalived
keepalived configuration on master01:
[root@master01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master01
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 50
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
}
keepalived configuration on master02:
[root@master02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
}
keepalived configuration on master03:
[root@master03 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id master03
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.27.34.130
}
Start the keepalived service on all control plane nodes and enable it at boot:
[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived
[root@master01 ~]# ip a
The VIP is on master01.
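If you want to confirm which control plane node currently holds the VIP, a small loop like the following can be used (a sketch, not part of the original steps; it assumes root SSH access to all three nodes and the interface name ens160):
for h in master01 master02 master03; do
echo -n "$h: "
ssh root@$h "ip -4 addr show ens160 | grep 172.27.34.130 || echo no vip"
done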
Perform the steps in this section on both the control plane and worker nodes.
[root@master01 ~]# yum list kubelet --showduplicates | sort -r
This article installs kubelet 1.16.4, which supports docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
- kubelet: runs on every node in the cluster and starts Pods, containers, and other objects.
- kubeadm: the command-line tool for initializing and bootstrapping the cluster.
- kubectl: the command line for talking to the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.
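To confirm all three tools are installed at the expected version, a quick check can be run on each node (not part of the original steps; each command should report v1.16.4):
[root@master01 ~]# kubelet --version
[root@master01 ~]# kubeadm version -o short
[root@master01 ~]# kubectl version --client --short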
Start kubelet and enable it at boot:
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
Almost all of Kubernetes' installation components and Docker images are hosted on Google's own servers, which may be unreachable due to network issues. The workaround here is to pull the images from an Alibaba Cloud registry and then re-tag them with the default image names. This article pulls the images by running the image.sh script.
[root@master01 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
url is the Alibaba Cloud registry address, and version is the Kubernetes version being installed.
Run the image.sh script to download the images for the specified version:
[root@master01 ~]# ./image.sh
[root@master01 ~]# docker images
Perform the steps in this section on master01.
[root@master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostname, IP, and VIP of every kube-apiserver node
  - master01
  - master02
  - master03
  - work01
  - work02
  - work03
  - 172.27.34.3
  - 172.27.34.4
  - 172.27.34.5
  - 172.27.34.93
  - 172.27.34.94
  - 172.27.34.95
  - 172.27.34.130
controlPlaneEndpoint: "172.27.34.130:6443"
networking:
  podSubnet: "10.244.0.0/16"
kubeadm-config.yaml is the configuration file used for initialization.
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join output; this command is needed later to join the worker nodes and the other control plane nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Failed initialization:
If initialization fails, run kubeadm reset and then initialize again:
[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
All operations in this article are performed as root. For a non-root user, run the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Create the flannel network on master01:
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Due to network issues this step may fail; the kube-flannel.yml file can also be downloaded at the end of this article and applied locally instead.
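After applying the manifest, the flannel DaemonSet pods should appear in kube-system; a quick check (not part of the original steps; the app=flannel label assumes the stock kube-flannel.yml manifest):
[root@master01 ~]# kubectl get pods -n kube-system -l app=flannel -o wide
All kube-flannel-ds pods should eventually reach the Running state.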
Distribute the certificates from master01:
Run the cert-main-master.sh script on master01 to distribute the certificates to master02 and master03.
[root@master01 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 Jan  2 15:23 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.4 172.27.34.5"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Quote this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
Move the certificates into place on master02:
Run the cert-other-master.sh script on master02 to move the certificates to their target directories.
[root@master02 ~]# pwd
/root
[root@master02 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:29 cert-other-master.sh
[root@master02 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh
Move the certificates into place on master03:
Run the cert-other-master.sh script on master03 as well.
[root@master03 ~]# pwd
/root
[root@master03 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:31 cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh
Run the control plane join command generated by the master initialization on master02 and master03:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane
Load the environment variables on master02 and master03:
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
This step allows kubectl commands to be run on master02 and master03 as well.
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
All control plane nodes are in the Ready state and all system components are healthy.
Run the worker join command generated by the master initialization on work01, work02, and work03:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 44m v1.16.4
master02 Ready master 33m v1.16.4
master03 Ready master 23m v1.16.4
work01 Ready <none> 11m v1.16.4
work02 Ready <none> 7m50s v1.16.4
work03 Ready <none> 3m4s v1.16.4
[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
[root@client ~]# yum install -y kubectl-1.16.4
The installed version matches the cluster version.
[root@client ~]# yum -y install bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.3:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
All steps in this section are performed on the client.
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
If the connection times out, retry a few times. The recommended.yaml file has also been uploaded and can be downloaded at the end of this article.
[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
Because the default image registry is not reachable, the images are switched to the Alibaba mirror.
[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
Configure a NodePort so the Dashboard can be reached externally at https://NodeIp:NodePort; the port here is 30001.
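After the sed edits, the kubernetes-dashboard Service section of recommended.yaml should look roughly like this (a sketch for orientation only; the surrounding metadata and field order follow the upstream v2.0.0-beta8 manifest):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard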
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
This creates a super-admin account for logging in to the Dashboard.
[root@client ~]# kubectl apply -f recommended.yaml
[root@client ~]# kubectl get all -n kubernetes-dashboard
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh2NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
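The token value can also be extracted directly instead of copying it out of the kubectl describe output (a one-liner sketch, not part of the original steps):
[root@client ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d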
Access https://VIP:30001 in the Firefox browser.
Accept the security risk.
Log in with the token.
The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, configuration (ConfigMaps and Secrets), log viewing, and other features.
All steps in this section are performed on the client.
Find the node hosting the apiserver from the VIP, and find the nodes hosting the scheduler and controller-manager from their leader-election records:
[root@master01 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_6caf8003-052f-451d-8dce-4516825213ad","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:23Z","renewTime":"2020-01-03T07:57:55Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_720d65f9-e425-4058-95d7-e5478ac951f7","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:20Z","renewTime":"2020-01-03T07:58:03Z","leaderTransitions":2}'
Component | Node |
---|---|
apiserver | master01 |
controller-manager | master01 |
scheduler | master01 |
Shut down master01:
[root@master01 ~]# init 0
The VIP has floated to master02:
[root@master02 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
The controller-manager and scheduler have also migrated:
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_b3353e8f-a02f-4322-bf17-2f596cd25ba5","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:42Z","renewTime":"2020-01-03T08:06:36Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_e0a2ec66-c415-44ae-871c-18c73258dc8f","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:56Z","renewTime":"2020-01-03T08:06:45Z","leaderTransitions":3}'
Component | Node |
---|---|
apiserver | master02 |
controller-manager | master02 |
scheduler | master03 |
Check the nodes:
[root@client ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady master 22h v1.16.4
master02 Ready master 22h v1.16.4
master03 Ready master 22h v1.16.4
work01 Ready <none> 22h v1.16.4
work02 Ready <none> 22h v1.16.4
work03 Ready <none> 22h v1.16.4
master01 is now in the NotReady state.
Create a new pod:
[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1              # this manifest uses the apps/v1 Kubernetes API
kind: Deployment                 # the resource type to create is Deployment
metadata:                        # metadata for this resource
  name: nginx-master             # name of the Deployment
spec:                            # Deployment spec
  selector:
    matchLabels:
      app: nginx
  replicas: 3                    # 3 replicas
  template:                      # Pod template
    metadata:                    # Pod metadata
      labels:                    # labels
        app: nginx               # label key app, value nginx
    spec:                        # Pod spec
      containers:
      - name: nginx              # container name
        image: nginx:latest      # image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-master-75b7bfdb6b-lnsfh 1/1 Running 0 4m44s 10.244.5.6 work03 <none> <none>
nginx-master-75b7bfdb6b-vxfg7 1/1 Running 0 4m44s 10.244.3.3 work01 <none> <none>
nginx-master-75b7bfdb6b-wt9kc 1/1 Running 0 4m44s 10.244.4.5 work02 <none> <none>
When one control plane node goes down, the VIP fails over and all cluster functions remain unaffected.
Next, with master01 still down, shut down master02 as well to test whether the cluster can still serve requests:
[root@master02 ~]# init 0
[root@master03 ~]# ip a|grep 130
inet 172.27.34.130/32 scope global ens160
The VIP floats to the only remaining control plane node, master03:
[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server 172.27.34.130:6443 was refused - did you specify the right host or port?
The etcd cluster has lost quorum (only one of its three members is left), so the entire k8s cluster can no longer serve requests.
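The quorum loss can be confirmed from master03 by running the etcd health check inside the surviving etcd container (a sketch, not part of the original; the container name filter and certificate paths assume a stock kubeadm stacked-etcd deployment):
[root@master03 ~]# docker exec -e ETCDCTL_API=3 $(docker ps -q -f name=k8s_etcd) etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
endpoint health
With two of the three members down, the check is expected to fail or time out; once master01 and master02 are powered back on, quorum is restored and the cluster recovers.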
For single-node k8s cluster deployment, see: Deploying a k8s (v1.14.2) cluster on CentOS 7.6
For another highly available deployment, see: Deploying a highly available k8s v1.16.4 cluster with lvs+keepalived