
k8s Deployment --- Multi-Node Deployment and Load Balancing Setup (Part 5)

Published: 2020-08-04 07:13:12  Source: web  Reads: 471  Author: SiceLc  Category: Cloud Computing

Introduction to Multi-Node Deployment

  • In production, high availability has to be considered when building a Kubernetes platform. Kubernetes is managed centrally by the master: the master servers schedule and manage the node servers. In the previous articles we built a single-node (single-master) deployment, so if that one master goes down the whole platform becomes unusable. To make the platform highly available, we now move to a multi-node (multi-master) deployment.

Introduction to Load Balancing

  • With multiple masters running at the same time, if every request is always handled by the same master, that master slows down under load while the remaining masters sit idle, which wastes resources. A load-balancing service spreads the requests across the masters.

  • In this setup, nginx provides layer-4 (TCP) load balancing and keepalived provides the floating virtual IP (failover).

Lab Deployment

Lab Environment

  • lb01: 192.168.80.19 (load balancer)
  • lb02: 192.168.80.20 (load balancer)
  • Master01: 192.168.80.12
  • Master02: 192.168.80.11
  • Node01: 192.168.80.13
  • Node02: 192.168.80.14

Multi-master Deployment

  • Operations on the master01 server (a quick sanity check on master02 follows this transcript)
    [root@master01 kubeconfig]# scp -r /opt/kubernetes/ root@192.168.80.11:/opt     //copy the kubernetes directory straight to master02
    The authenticity of host '192.168.80.11 (192.168.80.11)' can't be established.
    ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
    ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.80.11' (ECDSA) to the list of known hosts.
    root@192.168.80.11's password:
    token.csv                                                                  100%   84    61.4KB/s   00:00
    kube-apiserver                                                             100%  929     1.6MB/s   00:00
    kube-scheduler                                                             100%   94   183.2KB/s   00:00
    kube-controller-manager                                                    100%  483   969.2KB/s   00:00
    kube-apiserver                                                             100%  184MB 106.1MB/s   00:01
    kubectl                                                                    100%   55MB  85.9MB/s   00:00
    kube-controller-manager                                                    100%  155MB 111.9MB/s   00:01
    kube-scheduler                                                             100%   55MB 115.8MB/s   00:00
    ca-key.pem                                                                 100% 1675     2.7MB/s   00:00
    ca.pem                                                                     100% 1359     2.6MB/s   00:00
    server-key.pem                                                             100% 1679     2.5MB/s   00:00
    server.pem                                                                 100% 1643     2.7MB/s   00:00
    [root@master01 kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.11:/usr/lib/systemd/system    //copy the systemd unit files of the three master components
    root@192.168.80.11's password:
    kube-apiserver.service                                                     100%  282   274.4KB/s   00:00
    kube-controller-manager.service                                            100%  317   403.5KB/s   00:00
    kube-scheduler.service                                                     100%  281   379.4KB/s   00:00
    [root@master01 kubeconfig]# scp -r /opt/etcd/ root@192.168.80.11:/opt/    //important: master02 must have the etcd certificates or the apiserver will not start; copy the existing etcd certificates from master01 to master02
    root@192.168.80.11's password:
    etcd                                                                       100%  509   275.7KB/s   00:00
    etcd                                                                       100%   18MB  95.3MB/s   00:00
    etcdctl                                                                    100%   15MB  75.1MB/s   00:00
    ca-key.pem                                                                 100% 1679   941.1KB/s    00:00
    ca.pem                                                                     100% 1265     1.6MB/s   00:00
    server-key.pem                                                             100% 1675     2.0MB/s   00:00
    server.pem                                                                 100% 1338     1.5MB/s   00:00
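  • Optional sanity check on master02 (a sketch; it assumes the /opt/kubernetes/{bin,cfg,ssl} and /opt/etcd layout used throughout this series): confirm the binaries, configs and certificates actually arrived before editing anything.
    [root@master02 ~]# ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl   //binaries, configs and certificates copied from master01
    [root@master02 ~]# ls /opt/etcd/ssl                                                 //etcd certificates required by kube-apiserver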
  • Operations on the master02 server (a component health check follows this transcript)
    [root@master02 ~]# systemctl stop firewalld.service     //stop the firewall
    [root@master02 ~]# setenforce 0                        //disable SELinux enforcement
    [root@master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver     //edit the apiserver config
    ...
    --etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379 \
    --bind-address=192.168.80.11 \       //change to master02's IP address
    --secure-port=6443 \
    --advertise-address=192.168.80.11 \   //change to master02's IP address
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    ...
    :wq
    [root@master02 ~]# systemctl start kube-apiserver.service   //start the apiserver service
    [root@master02 ~]# systemctl enable kube-apiserver.service  //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    [root@master02 ~]# systemctl start kube-controller-manager.service   //start controller-manager
    [root@master02 ~]# systemctl enable kube-controller-manager.service  //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    [root@master02 ~]# systemctl start kube-scheduler.service            //start scheduler
    [root@master02 ~]# systemctl enable kube-scheduler.service           //enable it at boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    [root@master02 ~]# vim /etc/profile       //add the environment variable
    ...
    export PATH=$PATH:/opt/kubernetes/bin/
    :wq
    [root@master02 ~]# source /etc/profile     //reload the profile
    [root@master02 ~]# kubectl get node        //check node information
    NAME            STATUS   ROLES    AGE    VERSION
    192.168.80.13   Ready    <none>   146m   v1.12.3
    192.168.80.14   Ready    <none>   144m   v1.12.3    //the multi-master setup is working
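  • A quick health check on master02 (a sketch; `kubectl get cs` is still available in this v1.12-era cluster): the scheduler, controller-manager and etcd members should all report Healthy.
    [root@master02 ~]# kubectl get cs        //componentstatuses of scheduler, controller-manager and the etcd members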

Load Balancer Deployment

  • Perform the following on both lb01 and lb02. A prepared keepalived configuration file is available for download (extraction code: fkoh). A config validation sketch follows this transcript.

    [root@lb01 ~]# systemctl stop firewalld.service
    [root@lb01 ~]# setenforce 0
    [root@lb01 ~]# vim /etc/yum.repos.d/nginx.repo   //configure the yum repository for nginx
    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/centos/7/$basearch/
    gpgcheck=0
    :wq
    [root@lb01 yum.repos.d]# yum list     //refresh yum and verify the new repo loads
    Loaded plugins: fastestmirror
    base                                                                                  | 3.6 kB  00:00:00
    extras                                                                                | 2.9 kB   00:00:00
    ...
    [root@lb01 yum.repos.d]# yum install nginx -y     //install nginx
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# vim /etc/nginx/nginx.conf    //edit the nginx configuration file
    ...
    events {
        worker_connections  1024;
    }

    stream {                     //add the layer-4 (TCP) proxy block
        log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log /var/log/nginx/k8s-access.log main;

        upstream k8s-apiserver {
            server 192.168.80.12:6443;          //the two master apiserver addresses
            server 192.168.80.11:6443;
        }
        server {
            listen 6443;
            proxy_pass k8s-apiserver;
        }
    }
    
    http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    ...
    :wq
    [root@lb01 yum.repos.d]# systemctl start nginx       //start nginx; the default page can be tested in a browser
    [root@lb01 yum.repos.d]# yum install keepalived -y    //install keepalived
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: mirrors.aliyun.com
    * extras: mirrors.163.com
    ...
    [root@lb01 yum.repos.d]# mount.cifs //192.168.80.2/shares/K8S/k8s02 /mnt/     //mount the shared directory from the host
    Password for root@//192.168.80.2/shares/K8S/k8s02:
    [root@lb01 yum.repos.d]# cp /mnt/keepalived.conf /etc/keepalived/keepalived.conf  //overwrite the stock config with the prepared keepalived configuration file
    cp: overwrite '/etc/keepalived/keepalived.conf'? yes
    [root@lb01 yum.repos.d]# vim /etc/keepalived/keepalived.conf       //edit the configuration file
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    //note: point this at the health-check script location
    }

    vrrp_instance VI_1 {
        state MASTER
        interface ens33            //note: use the actual NIC name
        virtual_router_id 51       //VRRP router ID; unique per VRRP instance
        priority 100               //priority; set 90 on the backup server
        advert_int 1               //VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24       //floating (virtual) IP address
        }
        track_script {
            check_nginx
        }
    }
    //delete everything below this point
    :wq
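  • Before relying on the load balancer it is worth validating the nginx configuration edited above (a sketch; the official nginx.org package ships with stream support):
    [root@lb01 yum.repos.d]# nginx -t                             //syntax check; expect "syntax is ok" and "test is successful"
    [root@lb01 yum.repos.d]# nginx -V 2>&1 | grep -o with-stream     //confirm layer-4 (stream) support is compiled in
    [root@lb01 yum.repos.d]# systemctl restart nginx              //reload so the stream block takes effect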
  • Modify the keepalived configuration file on the lb02 server

    [root@lb02 ~]# vim /etc/keepalived/keepalived.conf
    ...
    vrrp_script check_nginx {
        script "/etc/nginx/check_nginx.sh"    //note: point this at the health-check script location
    }

    vrrp_instance VI_1 {
        state BACKUP           //change the role to BACKUP
        interface ens33        //NIC name
        virtual_router_id 51   //VRRP router ID; unique per VRRP instance
        priority 90            //priority; 90 on the backup server
        advert_int 1           //VRRP advertisement interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.80.100/24       //virtual IP address
        }
        track_script {
            check_nginx
        }
    }
    //delete everything below this point
    :wq
  • Perform the following on both lb01 and lb02 (a note on the health-check logic follows this transcript)

    [root@lb01 yum.repos.d]# vim /etc/nginx/check_nginx.sh   //write the script that checks whether nginx is running
    #!/bin/bash
    count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    :wq
    chmod +x /etc/nginx/check_nginx.sh     //make the script executable
    [root@lb01 yum.repos.d]# systemctl start keepalived     //start keepalived
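  • How the health check works: `ps -ef |grep nginx |egrep -cv "grep|$$"` counts nginx processes while excluding the grep itself and the script's own PID; when the count reaches 0, keepalived is stopped so the VIP can fail over to the peer. A minimal manual test (a sketch):
    [root@lb01 yum.repos.d]# ps -ef |grep nginx |egrep -cv "grep|$$"    //non-zero while the nginx master/worker processes are running
    [root@lb01 yum.repos.d]# bash /etc/nginx/check_nginx.sh; systemctl is-active keepalived    //with nginx up, keepalived should remain active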
  • Operations on the lb01 server
    [root@lb01 ~]# ip a      //check the address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    //the virtual IP is configured successfully
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
  • Operations on the lb02 server
    [root@lb02 ~]# ip a          //check the address information
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever       //no virtual IP here: lb02 is acting as the backup
  • Stop the nginx service on lb01, then check the addresses on lb02 again to see whether the virtual IP fails over successfully (a note on the mechanism follows this transcript)
    [root@lb01 ~]# systemctl stop nginx.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
    [root@lb02 ~]# ip a           //check on the lb02 server
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33      //the floating IP has moved to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
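  • Why the VIP moved: keepalived on lb01 periodically runs check_nginx.sh, which stops keepalived once nginx is gone; lb01 then stops sending VRRP advertisements and lb02 claims the VIP. This can be confirmed on lb01 (a sketch):
    [root@lb01 nginx]# systemctl is-active keepalived    //expected to report inactive: the check script stopped it after nginx went down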
  • Start the nginx and keepalived services again on the lb01 server (a note on preemption follows this transcript)
    [root@lb01 nginx]# systemctl start nginx
    [root@lb01 nginx]# systemctl start keepalived.service
    [root@lb01 nginx]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33     //the floating IP is taken back because lb01 has the higher priority
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
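  • The VIP returns to lb01 because both instances run in the default preemptive mode and lb01 has the higher priority (100 vs 90). If automatic fail-back is not wanted, keepalived supports a non-preemptive setup; a hedged sketch of the change (both nodes declared BACKUP plus the nopreempt option, everything else unchanged):
    vrrp_instance VI_1 {
        state BACKUP          //both lb01 and lb02 use BACKUP in non-preemptive mode
        nopreempt             //whichever node currently holds the VIP keeps it
        ...
    }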
  • Modify the configuration files on every node (a one-line alternative follows this transcript)
    [root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
    ...
    server: https://192.168.80.100:6443
    ...
    :wq
    [root@node01 ~]# systemctl restart kubelet.service    //restart the services
    [root@node01 ~]# systemctl restart kube-proxy.service
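  • The same change can be applied on each node with a single sed pass and then verified (a sketch; the 192.168.80.12 value assumes the nodes previously pointed at master01):
    [root@node01 ~]# sed -i 's#https://192.168.80.12:6443#https://192.168.80.100:6443#' /opt/kubernetes/cfg/{bootstrap,kubelet,kube-proxy}.kubeconfig
    [root@node01 ~]# grep 'server:' /opt/kubernetes/cfg/*.kubeconfig    //every file should now point at the VIP 192.168.80.100:6443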
  • Check the load-balancer access log on the lb01 server (a connectivity check through the VIP follows this transcript)
    [root@lb01 nginx]# tail /var/log/nginx/k8s-access.log
    192.168.80.13 192.168.80.12:6443 - [11/Feb/2020:15:23:52 +0800] 200 1118
    192.168.80.13 192.168.80.11:6443 - [11/Feb/2020:15:23:52 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1119
    192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1120
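  • Connectivity through the VIP can also be checked directly from a node (a sketch; without a client certificate the apiserver will likely answer 401/403, which still proves the path node -> VIP -> nginx -> apiserver works):
    [root@node01 ~]# curl -ks https://192.168.80.100:6443/version    //any HTTP response here means the layer-4 proxying is functional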
  • Test the platform from master01
    [root@master01 ~]# kubectl run nginx --image=nginx     //create an nginx pod (a Deployment is generated)
    kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
    deployment.apps/nginx created
    [root@master01 ~]# kubectl get pods        //check the pod
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-dbddb74b8-sdcpl   1/1     Running   0          33m   //created successfully
    [root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl    //view the pod logs
    Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-sdcpl)    //an error occurs
    [root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous     //fix the log access error
    clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
    [root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl    //view the logs again
    [root@master01 ~]#     //nothing has accessed the pod yet, so there are no log entries
  • Access the nginx page from a node

    [root@master01 ~]# kubectl get pods -o wide   //first check the pod's network info on master01
    NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
    nginx-dbddb74b8-sdcpl   1/1     Running   0          38m   172.17.33.2   192.168.80.14   <none>
    [root@node01 ~]# curl 172.17.33.2     //the pod can be reached directly from a node
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    </style>
    </head>
    <body>
    <h2>Welcome to nginx!</h2>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
  • Back on master01, check the pod logs again
    [root@master01 ~]# kubectl logs nginx-dbddb74b8-sdcpl
    172.17.12.0 - - [12/Feb/2020:06:45:54 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"    //the access record now appears

    The multi-node deployment and load balancing setup is complete.
