This article explains in detail how to install a highly available Kubernetes cluster. The steps are practical, so they are shared here for reference; hopefully you will get something useful out of them.
System requirements: 64-bit CentOS 7.6
Disable the firewall and SELinux
Disable the swap partition (running Kubernetes with swap enabled is not recommended)
Give every node a unique hostname in advance
Configure key-based (passwordless) SSH login from the first master to all nodes, including itself
The installation described in this guide is intended for small-scale deployments
Multi-master mode (at least three masters); keepalived must be installed on every master node
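The prerequisites above can be scripted roughly as follows. This is a sketch to run as root on every node; the hostname `k8s-master1` and the node IPs are placeholders for your own values:

```shell
# Disable the firewall and SELinux (the config edit makes the SELinux change survive reboots)
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Turn swap off now and comment it out of /etc/fstab so it stays off after reboot
swapoff -a
sed -i '/ swap / s/^#*/#/' /etc/fstab

# Give this node a unique hostname (example name; change per node)
hostnamectl set-hostname k8s-master1

# On the first master only: generate a key and push it to every node, itself included
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 172.16.10.114 172.16.10.101 172.16.10.102; do   # placeholder node IPs
    ssh-copy-id root@$ip
done
```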
# Switch to the yum repo directory
cd /etc/yum.repos.d/
# Add the Alibaba Cloud docker-ce mirror
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Add the Alibaba Cloud Kubernetes mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
cat <<EOF > /etc/sysctl.d/ceph.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# Install kubeadm, kubelet, and kubectl
yum install kubeadm kubectl kubelet -y
# Enable docker and kubelet at boot
systemctl enable docker kubelet
# Start docker
systemctl start docker
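Note that a bare `yum install` pulls whatever version is newest in the mirror. To reproduce the v1.14.2 cluster shown in the outputs later in this guide, you can pin the package versions instead (the version number here is an assumption taken from those outputs):

```shell
yum install -y kubeadm-1.14.2 kubelet-1.14.2 kubectl-1.14.2
```

It is normal for kubelet to crash-loop at this point; it will not run successfully until `kubeadm init` (or `kubeadm join`) has written its configuration.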
# If you have an external LB you can skip this step and use the LB address directly
# During installation, run this on the initialization master first so the VIP attaches to it; otherwise stop keepalived on the other masters
# After installation, add health checks as your environment requires
yum install keepalived -y
# Back up the original keepalived config
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Generate a new keepalived config; adjust the commented fields on each master
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id k8s-master1        # hostname of this master
   vrrp_mcast_group4 224.26.1.1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    nopreempt
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.20.1.8                # the VIP address
    }
}
EOF
# Enable keepalived at boot and start it
systemctl enable keepalived
systemctl start keepalived
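Before moving on to `kubeadm init`, it is worth confirming that the VIP has actually attached to the initialization master. A quick check, using the interface name and VIP from the sample config above:

```shell
# On the first master: the VIP should appear on the keepalived interface
ip addr show eth0 | grep 10.20.1.8

# From any other node: the VIP should be reachable
ping -c 2 10.20.1.8
```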
cd && cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "172.29.2.188"                           # change to your VIP address
controlPlaneEndpoint: "172.29.2.188:6443"    # change to your VIP address
imageRepository: registry.cn-hangzhou.aliyuncs.com/peter1009
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
# Initialize using the kubeadm.yaml generated in the previous step
kubeadm init --config kubeadm.yaml
# The output of the previous step looks like this:
root@k8s4:~# kubeadm init --config kubeadm.yaml
I0522 06:20:13.352644    2622 version.go:96] could not fetch a Kubernetes version from ...
......... (output omitted)
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
cat <<'EOF' > copy.sh
CONTROL_PLANE_IPS="172.16.10.101 172.16.10.102"  # change these two IPs to those of your second and third masters
for host in ${CONTROL_PLANE_IPS}; do
    ssh $host mkdir -p /etc/kubernetes/pki/etcd
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF
# Note: the heredoc delimiter is quoted ('EOF') so that ${CONTROL_PLANE_IPS} and ${USER}
# are written into the script literally instead of being expanded while the file is created.
# This step will fail if passwordless SSH login has not been configured.
bash -x copy.sh
# On the current node, run the commands from the init output so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On the other master nodes, run the join command from the init output (only after copy.sh has completed successfully)
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane
# On the non-master (worker) nodes, run the join command from the init output
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
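The bootstrap token in the join command above expires (by default after 24 hours). If you add worker nodes later, generate a fresh join command on the first master instead of reusing the old token:

```shell
# Prints a ready-to-run "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```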
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@k8s4:~# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s4   Ready    master   20m   v1.14.2
root@k8s4:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-8cc96f57d-cfr4j        1/1     Running   0          20m
kube-system   coredns-8cc96f57d-stcz6        1/1     Running   0          20m
kube-system   etcd-k8s4                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s4            1/1     Running   0          19m
kube-system   kube-controller-manager-k8s4   1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-k4q6q    1/1     Running   0          50s
kube-system   kube-proxy-lhjsf               1/1     Running   0          20m
kube-system   kube-scheduler-k8s4            1/1     Running   0          19m
# Remove the master taint so the master can be scheduled; change k8s4 to the node name in your own cluster
kubectl taint node k8s4 node-role.kubernetes.io/master:NoSchedule-
# Create an nginx deployment
root@k8s4:~# kubectl create deploy nginx --image nginx
deployment.apps/nginx created
root@k8s4:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-9sk6z   1/1     Running   0          2m44s
# Expose nginx outside the cluster
root@k8s4:~# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
root@k8s4:~# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        25m
nginx        NodePort    10.104.109.234   <none>        80:32129/TCP   5s
root@k8s4:~# curl 127.0.0.1:32129
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
That covers how to install a highly available Kubernetes cluster. Hopefully the content above is of some help and lets you learn something new. If you found the article useful, please share it so more people can see it.