

How to Deploy a Kubernetes Cluster with kubeadm

Published: 2021-07-30 18:05:52 · Source: Yisu Cloud (億速云) · Author: Leah · Category: Cloud Computing

How do you deploy a Kubernetes cluster with kubeadm? Many people without hands-on experience are at a loss here, so this article walks through the common problems and their solutions; hopefully it helps you work through them.

I. Environment Requirements

RHEL 7.5 is used here.

master, etcd: 192.168.10.101, hostname: master

node1: 192.168.10.103, hostname: node1

node2: 192.168.10.104, hostname: node2

All machines must be able to reach each other by hostname; edit /etc/hosts on every machine:

192.168.10.101 master

192.168.10.103 node1

192.168.10.104 node2

Time must be synchronized on all machines.

Disable the firewall and SELinux on all machines.

The master must be able to SSH to every machine without a password, as sketched below.
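A minimal sketch of these prerequisites, assuming chrony is used for time synchronization (the hostnames and IPs are the ones listed above; the commands in step 2 must be repeated on every machine):

# 1. Hostname resolution (append to /etc/hosts on every machine)
cat >> /etc/hosts <<'EOF'
192.168.10.101 master
192.168.10.103 node1
192.168.10.104 node2
EOF

# 2. Time sync, firewall and SELinux (repeat on every machine)
systemctl enable chronyd && systemctl start chronyd
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# 3. Passwordless SSH from the master to every machine (run on the master)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in master node1 node2; do ssh-copy-id "root@$host"; done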

[Important note]

Both cluster initialization and node joins pull images from Google's registry (k8s.gcr.io), which we cannot reach, so the required images cannot be downloaded directly. I have uploaded the required images to a personal Alibaba Cloud registry instead.

II. Installation Steps

1. etcd cluster: master node only;

2. flannel: all nodes of the cluster;

3. Configure the Kubernetes master (master node only):

kubernetes-master

Services started: kube-apiserver, kube-scheduler, kube-controller-manager

4. Configure each Kubernetes node:

kubernetes-node

Set up and start the docker service first;

Kubernetes services started: kube-proxy, kubelet

kubeadm

1. master, nodes: install kubelet, kubeadm, docker

2. master: kubeadm init

3. nodes: kubeadm join

https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

III. Cluster Installation

1. Master node installation and configuration

(1) yum repository configuration

Version 1.12.0 is used here. Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1120

We install with yum. Configure the yum repos: first the Docker repo, which can be taken straight from Aliyun:

[root@master ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Create the Kubernetes yum repo file:

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Copy these two repo files to /etc/yum.repos.d/ on the other nodes:

[root@master ~]# for i in 103 104; do scp /etc/yum.repos.d/{docker-ce.repo,kubernetes.repo} root@192.168.10.$i:/etc/yum.repos.d/; done

Install the repo GPG key on all machines (here via ansible):

[root@master ~]# ansible all -m shell -a "curl -O https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg && rpm --import rpm-package-key.gpg"

(2) Install docker, kubelet, kubeadm and kubectl

[root@master ~]# yum install docker-ce kubelet kubeadm kubectl -y

(3) Bridge and firewall settings

[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables 
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# ansible all -m shell -a "iptables -P  FORWARD ACCEPT"

Note: these settings are temporary and will be lost after a reboot.

To make them permanent, edit /usr/lib/sysctl.d/00-system.conf or add a drop-in file as sketched below.
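A sketch of the persistent form, using a drop-in file instead of editing the shipped 00-system.conf (the br_netfilter module provides these keys; load it first if they are missing):

modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # reload every sysctl configuration file, including the new drop-in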

(4) Edit the docker unit file and start docker

[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"

Add this line to the [Service] section:
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"

Start docker:

[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl  start docker
[root@master ~]# systemctl enable docker

(5) Enable kubelet at boot

[root@master ~]# systemctl  enable  kubelet

(6) Initialization

Edit the kubelet sysconfig file so the swap check can be ignored:

[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Run the initialization:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@master ~]#

The images cannot be pulled because Google's registry is unreachable. Pull them through another route first, then run the initialization again.

Image download script: https://github.com/yanyuzm/k8s_images_script

I have uploaded the required images to Aliyun; the following script pulls them, retags them with the names kubeadm expects, and removes the mirror tags:

[root@master ~]# vim pull-images.sh
#!/bin/bash
# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io/<image>,
# then drop the mirror tag.
images=(kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 kube-scheduler:v1.12.0 kube-proxy:v1.12.0 pause:3.1 etcd:3.2.24 coredns:1.2.2)

for ima in "${images[@]}"
do
    docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
    docker tag  registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima
    docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done
[root@master ~]# sh pull-images.sh

The images in use:

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]#

Re-run the initialization:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.10.101 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 71.135592 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: qaqahg.5xbt355fl26wu8tg
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47

[root@master ~]#

OK, the initialization succeeded.

The last part of the output above is important: it gives the kubectl setup commands and the kubeadm join command used below.

On the master node, follow the prompt and run:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#

Check the component status:

[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]#

All components are healthy.

Check the cluster nodes:

[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    110m      v1.12.1
[root@master ~]#

Only the master is listed, and it is NotReady because flannel has not been deployed yet.

(7) Install flannel

Project page: https://github.com/coreos/flannel

Run the following command:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]#

This step can take quite a while, because the flannel image has to be downloaded.

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]#

OK, the flannel image is in place. Check the nodes again:

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    155m      v1.12.1
[root@master ~]#

OK, the master is now Ready.

If the flannel image cannot be pulled from quay.io, pull it from Aliyun instead:

docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64

After the pull succeeds, retag the image:

docker tag registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Check the namespaces:

[root@master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    158m
kube-public   Active    158m
kube-system   Active    158m
[root@master ~]#

Check the pods in kube-system:

[root@master ~]# kubectl get pods -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
coredns-576cbf47c7-hfvcq         1/1       Running   0          158m
coredns-576cbf47c7-xcpgd         1/1       Running   0          158m
etcd-master                      1/1       Running   6          132m
kube-apiserver-master            1/1       Running   9          132m
kube-controller-manager-master   1/1       Running   33         132m
kube-flannel-ds-amd64-vqc9h      1/1       Running   3          41m
kube-proxy-z9xrw                 1/1       Running   4          158m
kube-scheduler-master            1/1       Running   33         132m
[root@master ~]#

2. Node installation and configuration

1. Install docker-ce, kubelet and kubeadm

[root@node1 ~]# yum install docker-ce kubelet kubeadm -y
[root@node2 ~]# yum install docker-ce kubelet kubeadm -y

2. Copy the kubelet sysconfig file from the master to the nodes

[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.103:/etc/sysconfig/
kubelet                                                                                                       100%   42    45.4KB/s   00:00    
[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.104:/etc/sysconfig/
kubelet                                                                                                       100%   42     4.0KB/s   00:00    
[root@master ~]#

3. Join the nodes to the cluster

Start docker and kubelet:

[root@node1 ~]# systemctl  start docker kubelet
[root@node1 ~]# systemctl  enable docker kubelet
[root@node2 ~]# systemctl  start docker kubelet
[root@node2 ~]# systemctl  enable docker kubelet

Join node1 to the cluster:

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]#

The join aborts on the bridge-nf-call-iptables check; the echo above fixes that. The IPVS warnings are harmless, but they can be silenced by loading the listed kernel modules, as in the optional sketch below, before retrying the join.
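An optional sketch for the nodes: persist the bridge setting and load the IPVS-related modules named in the warning (kube-proxy then has the option of using the ipvs proxier):

# Run on node1 and node2
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe "$mod"
done
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'    # verify the modules are loaded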

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443"
[discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443"
[discovery] Successfully established connection with API Server "192.168.10.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@node1 ~]#

OK, node1 joined successfully.

[root@node2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node2 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443"
[discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443"
[discovery] Successfully established connection with API Server "192.168.10.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@node2 ~]#

OK, node2 joined successfully.

4. Manually pull the kube-proxy and pause images on the nodes

Run the following on both nodes:

for ima in kube-proxy:v1.12.0 pause:3.1; do
    docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
    docker tag  registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima k8s.gcr.io/$ima
    docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done

5. Check the nodes from the master:

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3h20m     v1.12.1
node1     Ready     <none>    18m       v1.12.1
node2     Ready     <none>    17m       v1.12.1
[root@master ~]#

OK, all nodes are Ready. If a node still does not come up, restart docker and kubelet on that node.

Check the pod details in kube-system:

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
coredns-576cbf47c7-hfvcq         1/1       Running   0          3h21m     10.244.0.3       master    <none>
coredns-576cbf47c7-xcpgd         1/1       Running   0          3h21m     10.244.0.2       master    <none>
etcd-master                      1/1       Running   6          165m      192.168.10.101   master    <none>
kube-apiserver-master            1/1       Running   9          165m      192.168.10.101   master    <none>
kube-controller-manager-master   1/1       Running   33         165m      192.168.10.101   master    <none>
kube-flannel-ds-amd64-bd4d8      1/1       Running   0          21m       192.168.10.103   node1     <none>
kube-flannel-ds-amd64-srhb9      1/1       Running   0          20m       192.168.10.104   node2     <none>
kube-flannel-ds-amd64-vqc9h      1/1       Running   3          74m       192.168.10.101   master    <none>
kube-proxy-8bfvt                 1/1       Running   1          21m       192.168.10.103   node1     <none>
kube-proxy-gz55d                 1/1       Running   1          20m       192.168.10.104   node2     <none>
kube-proxy-z9xrw                 1/1       Running   4          3h21m     192.168.10.101   master    <none>
kube-scheduler-master            1/1       Running   33         165m      192.168.10.101   master    <none>
[root@master ~]#

At this point the cluster is up. These are the images used on each machine:

Master node:

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]#

Node machines:

[root@node1 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
quay.io/coreos/flannel   v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        9 months ago        742kB
[root@node1 ~]# 



[root@node2 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
quay.io/coreos/flannel   v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        9 months ago        742kB
[root@node2 ~]#

IV. Running Applications on the Cluster

Run an nginx deployment:

[root@master ~]# kubectl run nginx-deploy  --image=nginx --port=80 --replicas=1
deployment.apps/nginx-deploy created
[root@master ~]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           10s
[root@master ~]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE      NOMINATED NODE
nginx-deploy-8c5fc574c-d8jxj   1/1       Running   0          18s       10.244.2.4   node2     <none>
[root@master ~]#

From a node, check that the nginx pod is reachable:

[root@node1 ~]# curl -I 10.244.2.4
HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Tue, 16 Oct 2018 12:02:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes

[root@node1 ~]#

It returns 200, so the pod is reachable. Expose the deployment as a service:

[root@master ~]# kubectl expose  deployment nginx-deploy --name=nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   21h
nginx        ClusterIP   10.104.88.59   <none>        80/TCP    51s
[root@master ~]#

Start a busybox client and access the service by name:

[root@master ~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never
If you don't see a command prompt, try pressing enter.
/ #
/ # wget -O - -q http://nginx:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #

Delete the service and recreate it:

[root@master ~]# kubectl delete svc nginx
service "nginx" deleted
[root@master ~]# kubectl expose deployment nginx-deploy --name=nginx
service/nginx exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   22h
nginx        ClusterIP   10.110.52.68   <none>        80/TCP    8s
[root@master ~]#

Create a deployment with multiple replicas:

[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
deployment.apps/myapp created
[root@master ~]# 
[root@master ~]# kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          2         2         2            2           49s
nginx-deploy   1         1         1            1           36m
[root@master ~]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE      NOMINATED NODE
client                         1/1       Running   0          3m49s     10.244.2.6   node2     <none>
myapp-6946649ccd-knd8r         1/1       Running   0          78s       10.244.2.7   node2     <none>
myapp-6946649ccd-pfl2r         1/1       Running   0          78s       10.244.1.6   node1     <none>
nginx-deploy-8c5fc574c-5bjjm   1/1       Running   0          12m       10.244.1.5   node1     <none>
[root@master ~]#

Create a service for myapp:

[root@master ~]# kubectl expose deployment myapp --name=myapp --port=80
service/myapp exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   22h
myapp        ClusterIP   10.110.238.138   <none>        80/TCP    11s
nginx        ClusterIP   10.110.52.68     <none>        80/TCP    9m37s
[root@master ~]#

Scale myapp to 5 replicas:

[root@master ~]# kubectl scale --replicas=5  deployment myapp
deployment.extensions/myapp scaled
[root@master ~]# kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
client                         1/1       Running   0          5m24s
myapp-6946649ccd-6kqxt         1/1       Running   0          8s
myapp-6946649ccd-7xj45         1/1       Running   0          8s
myapp-6946649ccd-8nh9q         1/1       Running   0          8s
myapp-6946649ccd-knd8r         1/1       Running   0          11m
myapp-6946649ccd-pfl2r         1/1       Running   0          11m
nginx-deploy-8c5fc574c-5bjjm   1/1       Running   0          23m
[root@master ~]#

Edit the myapp service:

[root@master ~]# kubectl  edit svc myapp
type: NodePort

Change type to NodePort, then check:

[root@master ~]# kubectl  get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        23h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   35m
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         44m
[root@master ~]#

The node port is 30937; open 192.168.10.101:30937 from the host machine:

(Screenshot: the myapp page served through the NodePort.)

OK, it is accessible.
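The same check can be done from the command line; the port 30937 comes from the kubectl get svc output above, and any node IP works because kube-proxy opens the NodePort on every machine:

curl -I http://192.168.10.101:30937/
curl -I http://192.168.10.103:30937/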

V. Cluster Resources

1. Resource types

Resources, once instantiated, become objects. The main categories:

Workloads: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob, ...

Service discovery and load balancing: Service, Ingress, ...

Configuration and storage: Volume, CSI; special cases include ConfigMap, Secret, and the Downward API

Cluster-level resources: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding

Metadata resources: HPA, PodTemplate, LimitRange

2. How resources are created:

The apiserver accepts resource definitions only in JSON format;

manifests can be written in YAML, which is converted to JSON automatically before being submitted.

Most resource manifests share these fields:

apiVersion: group/version; list the available ones with kubectl api-versions

kind: the resource type

metadata: object metadata (name, namespace, labels, annotations)

Each resource has a reference path of the form /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME, for example: /api/v1/namespaces/default/pods/myapp-6946649ccd-c6m9b (a quick way to exercise this path is shown after this list)

spec: the desired state

status: the current state; this field is maintained by the Kubernetes cluster itself
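The reference path can be exercised directly against the apiserver; kubectl get --raw returns the raw JSON, which is a quick way to confirm that YAML manifests end up stored as JSON objects:

kubectl get --raw /api/v1/namespaces/default/pods | head -c 300; echo
kubectl get pods -n kube-system -o yaml | head -n 20    # the same data rendered back as YAML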

To see the full definition of a resource type, for example a pod:

[root@master ~]# kubectl explain pod
KIND:     Pod
VERSION:  v1

DESCRIPTION:
...

Example pod manifest:

[root@master ~]# mkdir maniteste
[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 5"

Create the resource:

[root@master ~]# kubectl create -f maniteste/pod-demo.yaml
[root@master ~]# kubectl describe pods pod-demo
Name:               pod-demo
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.10.104
Start Time:         Wed, 17 Oct 2018 19:54:03 +0800
Labels:             app=myapp
                    tier=frontend
Annotations:        <none>
Status:             Running
IP:                 10.244.2.26

Access the pod and check its logs:

[root@master ~]# curl 10.244.2.26
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# kubectl logs pod-demo myapp 
10.244.0.0 - - [17/Oct/2018:11:56:49 +0000] "GET / HTTP/1.1">

This single pod runs two containers.

Delete the pod with: kubectl delete -f maniteste/pod-demo.yaml

VI. Pod Controllers

1. View the definition of a pod's containers field: kubectl explain pods.spec.containers

Resource manifests:

Self-managed (standalone) Pod resources.

Manifest format, top-level fields: apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only).

Pod resource:

spec.containers <[]Object>

- name <string>

  image <string>

  imagePullPolicy: Always | Never | IfNotPresent

2. Labels:

key=value; keys consist of letters, digits, underscores, hyphens, and dots.

Values may be empty; they must begin and end with a letter or digit and may use letters, digits, underscores, hyphens, and dots in between.

Add a label:

[root@master ~]# kubectl get pods -l app --show-labels
NAME       READY     STATUS              RESTARTS   AGE       LABELS
pod-demo   0/2       ContainerCreating   0          4m46s     app=myapp,tier=frontend
[root@master ~]# kubectl label pods pod-demo release=haha
pod/pod-demo labeled
[root@master ~]# kubectl get pods -l app --show-labels
NAME       READY     STATUS              RESTARTS   AGE       LABELS
pod-demo   0/2       ContainerCreating   0          5m27s     app=myapp,release=haha,tier=frontend
[root@master ~]#

List pods that carry both the app and release label keys:

[root@master ~]# kubectl get pods -l app,release
NAME       READY     STATUS              RESTARTS   AGE
pod-demo   0/2       ContainerCreating   0          7m43s
[root@master ~]#

Label selectors:

Equality-based: =, ==, !=

e.g. kubectl get pods -l release=stable

Set-based: KEY in (VALUE1,VALUE2,...), KEY notin (VALUE1,VALUE2,...), KEY, !KEY

[root@master ~]# kubectl get pods -l "release notin (stable,haha)"
NAME                           READY     STATUS    RESTARTS   AGE
client                         0/1       Error     0          46h
myapp-6946649ccd-2lncx         1/1       Running   2          46h
nginx-deploy-8c5fc574c-5bjjm   1/1       Running   2          46h
[root@master ~]#

Many resources embed a field for defining the label selector they use:

matchLabels: give key/value pairs directly

matchExpressions: define the selector with expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1, VAL2, ...]}

Operators: In, NotIn (the values list must be non-empty); Exists, DoesNotExist (the values list must be empty). A sketch using matchExpressions follows.
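A minimal sketch of a matchExpressions selector in a Deployment (the names here are made up for illustration; the selector must still match the template labels or the apiserver rejects the object):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selector-demo
spec:
  replicas: 1
  selector:
    matchExpressions:
    - {key: app, operator: In, values: [selector-demo]}
    - {key: release, operator: Exists}
  template:
    metadata:
      labels:
        app: selector-demo
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
EOF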

3. nodeSelector: node label selector

nodeName: pin a pod to a specific node by name

Label a node, for example:

[root@master ~]# kubectl label nodes node1 disktype=ssd
node/node1 labeled
[root@master ~]#

Add a nodeSelector to the YAML file:

[root@master ~]# vim maniteste/pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/sh"
      - "-c"
      - "sleep 5"
  nodeSelector:
    disktype: ssd

Recreate the pod:

[root@master ~]# kubectl delete pods pod-demo
pod "pod-demo" deleted
[root@master ~]# kubectl create -f maniteste/pod-demo.yaml
pod/pod-demo created
[root@master ~]#

4. annotations

Unlike labels, annotations cannot be used to select resource objects; they only attach extra "metadata" to an object.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    haha.com/create_by: "hello world"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/sh"
      - "-c"
      - "sleep 3600"
  nodeSelector:
    disktype: ssd

5. Pod lifecycle

Phases: Pending, Running, Succeeded, Failed, Unknown

Important behaviors in the pod lifecycle: init containers and container probes (liveness, readiness).

restartPolicy: Always, OnFailure, Never. Defaults to Always.

Probe types: ExecAction, TCPSocketAction, HTTPGetAction.

ExecAction example:

[root@master ~]# vim liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3

Create it and watch the restarts:

[root@master ~]# kubectl create -f liveness-exec.yaml 
pod/liveness-exec-pod created
[root@master ~]# kubectl get pods -w
NAME                           READY     STATUS    RESTARTS   AGE
client                         0/1       Error     0          3d
liveness-exec-pod              1/1       Running   3          3m
myapp-6946649ccd-2lncx         1/1       Running   4          3d
nginx-deploy-8c5fc574c-5bjjm   1/1       Running   4          3d
liveness-exec-pod   1/1       Running   4         4m

HTTPGetAction example:

[root@master ~]# vim liveness-httpGet.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
         port: http
         path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

[root@master ~]# kubectl create -f liveness-httpGet.yaml
pod/liveness-httpget-pod created
[root@master ~]#

Readiness probe example:

[root@master ~]# vim readiness-httget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
         port: http
         path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

Container lifecycle, postStart example:

[root@master ~]# vim poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
       postStart:
         exec:
           command: ["/bin/sh","-c","echo Home_Page >> /tmp/index.html"]
    #command: ['/bin/sh','-c','sleep 3600']
    command: ["/bin/httpd"]
    args: ["-f","-h /tmp"]
[root@master ~]# kubectl create -f  poststart-pod.yaml 
pod/poststart-pod created
[root@master ~]#

However, using /tmp as the web root is clearly not a real solution.

6. Pod controllers

There are several controller types:

ReplicaSet: creates the user-specified number of pod replicas, keeps the replica count at the desired state, and supports scaling up and down. A ReplicaSet has three parts:
  (1) the desired number of pod replicas
  (2) a label selector that decides which pods it manages
  (3) a pod template used to create new pods when too few exist
It keeps stateless pods at exactly the target count, but a ReplicaSet is not meant to be used directly; use a Deployment instead.

Deployment: works on top of ReplicaSet and manages stateless applications; currently the controller of choice for them. It supports rolling updates and rollbacks and provides declarative configuration.

DaemonSet: ensures that every node in the cluster runs exactly one copy of a given pod; typically used for system-level background tasks such as log collection for an ELK stack. The workload must be stateless and run as a daemon.

Job: runs pods to completion and then stops; finished pods are not restarted or recreated (a minimal Job sketch follows this list).

CronJob: periodic tasks that do not need to run continuously in the background.

StatefulSet: manages stateful applications.
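Job and CronJob are not demonstrated later in the article, so here is a minimal Job sketch using the busybox image already pulled above; the pod runs once to completion, and restartPolicy must be Never or OnFailure:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    spec:
      containers:
      - name: job-demo
        image: busybox:latest
        command: ["/bin/sh", "-c", "echo hello from a Job; sleep 5"]
      restartPolicy: Never
EOF
kubectl get jobs
kubectl get pods -l job-name=job-demo    # the Job controller adds the job-name label automatically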

ReplicaSet (rs) example:

[root@master ~]# kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp          1         1         1            1           4d
nginx-deploy   1         1         1            1           4d1h
[root@master ~]# kubectl delete deploy myapp
deployment.extensions "myapp" deleted
[root@master ~]# kubectl delete deploy nginx-deploy
deployment.extensions "nginx-deploy" deleted
[root@master ~]# 
[root@master ~]# vim rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
   replicas: 2
   selector:
        matchLabels:
            app: myapp
            release: canary
   template:
       metadata:
            name: myapp-pod
            labels:
               app: myapp
               release: canary
               environment: qa
       spec:
          containers:
          - name: myapp-conatainer
            image: ikubernetes/myapp:v1
            ports:
            - name: http
              containerPort: 80
[root@master ~]# kubectl create -f rs-demo.yaml
replicaset.apps/myapp created

Check the labels:

[root@master ~]# kubectl get pods --show-labels 
NAME                    READY     STATUS    RESTARTS   AGE       LABELS
client                  0/1       Error     0          4d        run=client
liveness-httpget-pod    1/1       Running   1          107m      <none>
myapp-fspr7             1/1       Running   0          75s       app=myapp,environment=qa,release=canary
myapp-ppxrw             1/1       Running   0          75s       app=myapp,environment=qa,release=canary
pod-demo                2/2       Running   0          3s        app=myapp,tier=frontend
readiness-httpget-pod   1/1       Running   0          86m       <none>
[root@master ~]#

Add the label release=canary to pod-demo:

[root@master ~]# kubectl  label pods pod-demo release=canary
pod/pod-demo labeled

Deployment example:

[root@master ~]# kubectl delete rs myapp
replicaset.extensions "myapp" deleted
[root@master ~]#
[root@master ~]# vim deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
       labels:
         app: myapp
         release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master ~]# kubectl create -f deploy-demo.yaml 
deployment.apps/myapp-deploy created
[root@master ~]#                                            
[root@master ~]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
client                          0/1       Error     0          4d20h
liveness-httpget-pod            1/1       Running   2          22h
myapp-deploy-574965d786-5x42g   1/1       Running   0          70s
myapp-deploy-574965d786-dqzpd   1/1       Running   0          70s
pod-demo                        2/2       Running   3          20h
readiness-httpget-pod           1/1       Running   1          21h
[root@master ~]# kubectl get rs
NAME                      DESIRED   CURRENT   READY     AGE
myapp-deploy-574965d786   2         2         2         93s
[root@master ~]#

To change the replica count, edit deploy-demo.yaml and run kubectl apply -f deploy-demo.yaml,

or patch it in place: kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}' (here setting 5 replicas).

Other attributes can be patched the same way, for example the rolling-update strategy:

[root@master ~]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched
[root@master ~]#

Update the image version, pausing the rollout right after the first new pod (a canary):

[root@master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy 
deployment.extensions/myapp-deploy image updated
deployment.extensions/myapp-deploy paused
[root@master ~]#
[root@master ~]# kubectl rollout status deployment myapp-deploy 
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 2 new replicas have been updated...
[root@master ~]# kubectl rollout resume deployment myapp-deploy 
deployment.extensions/myapp-deploy resumed
[root@master ~]#
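Before rolling back it is worth listing the recorded revisions; the numbers below will differ in your cluster:

kubectl rollout history deployment myapp-deploy
kubectl rollout history deployment myapp-deploy --revision=1    # details of a single revision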

Roll back to a previous revision:

[root@master ~]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy
[root@master ~]#

DaemonSet example:

On node1 and node2, pre-pull the image: docker pull ikubernetes/filebeat:5.6.5-alpine

Edit the YAML file:

[root@master ~]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myapp-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
       labels:
         app: filebeat
         release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master ~]# kubectl apply -f ds-demo.yaml 
daemonset.apps/myapp-ds created
[root@master ~]#

Extend the YAML file with a redis Deployment in front of the DaemonSet (separated by ---):

[root@master ~]# vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
       labels:
         app: filebeat
         release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master ~]# kubectl delete -f ds-demo.yaml
[root@master ~]# kubectl apply -f ds-demo.yaml 
deployment.apps/redis created
daemonset.apps/filebeat-ds created
[root@master ~]#

Expose the redis port:

[root@master ~]# kubectl expose deployment redis --port=6379
service/redis exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d20h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   4d21h
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         4d21h
redis        ClusterIP   10.97.196.222    <none>        6379/TCP       11s
[root@master ~]#

Exec into the redis pod:

[root@master ~]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
redis-664bbc646b-sg6wk          1/1       Running   0          2m55s
[root@master ~]# kubectl exec -it redis-664bbc646b-sg6wk -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      
tcp        0      0 :::6379                 :::*                    LISTEN      
/data # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      redis.default.svc.cluster.local
Address 1: 10.97.196.222 redis.default.svc.cluster.local
/data # 
/data # redis-cli -h redis.default.svc.cluster.local
redis.default.svc.cluster.local:6379> keys *
(empty list or set)
redis.default.svc.cluster.local:6379>

Exec into a filebeat pod:

[root@master ~]# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
client                          0/1       Error     0          4d21h
filebeat-ds-bszfz               1/1       Running   0          6m2s
filebeat-ds-w5nzb               1/1       Running   0          6m2s
redis-664bbc646b-sg6wk          1/1       Running   0          6m2s
[root@master ~]# kubectl exec -it filebeat-ds-bszfz -- /bin/sh
/ # printenv
/ # nslookup redis.default.svc.cluster.local
/ # kill -1  1

Update the DaemonSet image: [root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine

VII. Service Resources

A Service is one of the most fundamental resource objects in Kubernetes; it can be thought of as one "microservice" in a microservice architecture.

Simply put, a Service fronts a group of pods, and the Service and its pods are tied together by labels: pods belonging to the same Service carry the same labels. Traffic to a Service is load-balanced across its pods by kube-proxy, and every Service is assigned a globally unique virtual IP, the cluster IP, which stays fixed for the Service's entire lifetime. Kubernetes also runs a DNS service that maps the Service name to its cluster IP.
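The label-to-pod wiring can be inspected through the Endpoints object that Kubernetes maintains for every Service; a quick check against the myapp service created earlier:

kubectl describe svc myapp | grep -E 'Selector|Endpoints'
kubectl get endpoints myapp    # the pod IPs currently backing the service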

kube-proxy modes: userspace, iptables, ipvs

Service types: ExternalName, ClusterIP, NodePort, LoadBalancer

DNS resource records: SVC_NAME.NS_NAME.DOMAIN.LTD.

With the default domain svc.cluster.local., for example: redis.default.svc.cluster.local.

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d20h
myapp        NodePort    10.110.238.138   <none>        80:30937/TCP   4d22h
nginx        ClusterIP   10.110.52.68     <none>        80/TCP         4d22h
redis        ClusterIP   10.97.196.222    <none>        6379/TCP       29m
[root@master ~]# kubectl delete svc redis
[root@master ~]# kubectl delete svc nginx
[root@master ~]# kubectl delete svc myapp 
[root@master ~]# vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
[root@master ~]# kubectl apply -f redis-svc.yaml
service/redis created
[root@master ~]#

NodePort:

[root@master ~]# vim myapp-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
[root@master ~]# kubectl apply -f myapp-svc.yaml
service/myapp created
[root@master ~]# 
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'
service/myapp patched
[root@master ~]#
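A quick check of the ClientIP affinity, assuming the service selector matches the myapp-deploy pods: hostname.html is linked from the image's index page shown earlier and returns the serving pod's name, so repeated requests from the same client should now print the same pod:

for i in 1 2 3 4 5; do
    curl -s http://192.168.10.101:30080/hostname.html
done
# To drop the affinity again:
# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"None"}}'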

Headless Service (no cluster IP):

[root@master ~]# vim myapp-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80
[root@master ~]# kubectl apply -f myapp-svc-headless.yaml
service/myapp-svc created
[root@master ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   5d21h
[root@master ~]# dig -t A myapp-svc.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -t A myapp-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32215
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-svc.default.svc.cluster.local. IN    A

;; ANSWER SECTION:
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.59
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.2.51
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.60
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.58
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.2.52

;; Query time: 2 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Oct 21 19:41:16 CST 2018
;; MSG SIZE  rcvd: 319

[root@master ~]#

VIII. Ingress and the Ingress Controller

Ingress can be loosely understood as an nginx running inside Kubernetes, acting as a load balancer and HTTP router.

Ingress consists of two parts: the Ingress controller and the Ingress resources.

ingress-nginx: https://github.com/kubernetes/ingress-nginx and https://kubernetes.github.io/ingress-nginx/deploy/

1. Download the manifests

[root@master ~]# mkdir ingress-nginx
[root@master ~]# cd ingress-nginx
[root@master ingress-nginx]# for file in  namespace.yaml configmap.yaml rbac.yaml with-rbac.yaml ; do curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/${file};done

2. Create the resources

[root@master ingress-nginx]# kubectl apply -f ./

3. Write the application manifests

[root@master ~]# mkdir maniteste/ingress
[root@master ~]# cd maniteste/ingress
[root@master ingress]# vim deploy-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
       labels:
         app: myapp
         release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
[root@master ingress]# kubectl delete svc myapp
[root@master ingress]# kubectl delete deployment myapp-deploy
[root@master ingress]# kubectl apply -f deploy-demo.yaml 
service/myapp created
deployment.apps/myapp-deploy created
[root@master ingress]#

4. Expose the ingress controller with a Service

If nodePort is not specified, a random port is assigned. A sketch of a NodePort service for the controller follows.
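The downloaded manifests above do not include a Service for the controller, so here is a hedged sketch of one. The nodePort values 30080/30443 are assumptions chosen to match the https://tomcat.haha.com:30443/ URL used later, and the selector labels follow the with-rbac.yaml deployment of that era; verify them with kubectl get pods -n ingress-nginx --show-labels before applying:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
EOF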

5. Create the Ingress for the app

[root@master ingress-nginx]# vim ingress-myapp.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.haha.com
    http:
     paths:
     - path:
       backend:
         serviceName: myapp
         servicePort: 80
[root@master ingress-nginx]# kubectl apply -f ingress-myapp.yaml
[root@master ~]# kubectl get ingresses
NAME            HOSTS            ADDRESS   PORTS     AGE
ingress-myapp   myapp.haha.com             80        58s
[root@master ~]#

Add myapp.haha.com (pointing at a node IP) to the hosts file of your workstation and open it in a browser:

(Screenshot: the myapp page opened at myapp.haha.com through the ingress controller.)

You can check the controller's service and node ports with: kubectl get svc -n ingress-nginx

6. Deploy a Tomcat

[root@master ingress-nginx]# vim tomcat-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: ajp
    port: 8009
    targetPort: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
       labels:
         app: tomcat
         release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
[root@master ingress-nginx]# vim ingress-tomcat.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "tomcat"
spec:
  rules:
  - host: tomcat.haha.com
    http:
     paths:
     - path:
       backend:
         serviceName: tomcat
         servicePort: 8080
[root@master ingress-nginx]# kubectl apply -f tomcat-deploy.yaml
[root@master ingress-nginx]# kubectl apply -f ingress-tomcat.yaml

Check the Tomcat pods:

[root@master ~]# kubectl get pod
NAME                             READY     STATUS             RESTARTS   AGE
myapp-deploy-7b64976db9-5ww72    1/1       Running            0          66m
myapp-deploy-7b64976db9-fm7jl    1/1       Running            0          66m
myapp-deploy-7b64976db9-s6f95    1/1       Running            0          66m
tomcat-deploy-695dbfd5bd-6kx42   1/1       Running            0          5m54s
tomcat-deploy-695dbfd5bd-f5d7n   0/1       ImagePullBackOff   0          5m54s
tomcat-deploy-695dbfd5bd-v5d9d   1/1       Running            0          5m54s
[root@master ~]# kubectl exec tomcat-deploy-695dbfd5bd-6kx42 -- netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 127.0.0.1:8005          0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:8009            0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      
[root@master ~]#

The tomcat:8.5.34-jre8-alpine image can be pre-pulled on the nodes with docker pull to avoid the ImagePullBackOff seen above.

(Screenshot: the Tomcat default page opened at tomcat.haha.com through the ingress controller.)

Create an SSL certificate

Create a private key:

[root@master ingress]# openssl genrsa -out tls.key 2048

Create a self-signed certificate:

[root@master ingress]# openssl req -new -x509 -key tls.key  -out tls.crt -subj /C=CN/ST=Guangdong/L=Guangdong/O=DevOps/CN=tomcat.haha.com

To hand the certificate to the Ingress, it must be stored in the cluster as a Secret of the TLS type:

[root@master ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key 
secret/tomcat-ingress-secret created
[root@master ingress]# kubectl get secret
NAME                    TYPE                                  DATA      AGE
default-token-kcvkv     kubernetes.io/service-account-token   3         8d
tomcat-ingress-secret   kubernetes.io/tls                     2         29s
[root@master ingress]#

The secret type is kubernetes.io/tls.

Configure TLS on the Tomcat Ingress:

[root@master ingress]# vim ingress-tomcat-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - tomcat.haha.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.haha.com
    http:
     paths:
     - path:
       backend:
         serviceName: tomcat
         servicePort: 8080
[root@master ingress]# kubectl apply -f ingress-tomcat-tls.yaml
ingress.extensions/ingress-tomcat-tls created
[root@master ingress]#

Open https://tomcat.haha.com:30443/ in a browser:

(Screenshot: the Tomcat default page served over HTTPS at tomcat.haha.com:30443.)
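The TLS endpoint can also be checked from the command line without touching the hosts file, using curl's --resolve option (-k because the certificate is self-signed):

curl -k -I --resolve tomcat.haha.com:30443:192.168.10.101 https://tomcat.haha.com:30443/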

Having read the above, do you now have a grasp of how to deploy a Kubernetes cluster with kubeadm? If you would like to pick up more skills or read more on the topic, follow the Yisu Cloud industry news channel. Thanks for reading!
