k8s High-Availability Cluster (Local Setup)


I. k8s HA cluster planning

Hostname | IP            | VIP
---------|---------------|--------------
my       | 192.168.3.100 | 192.168.3.234
master1  | 192.168.3.224 |
master2  | 192.168.3.225 |
node1    | 192.168.3.222 |
node2    | 192.168.3.223 |

II. Install Docker

1. Add the Docker yum repository on every node

# Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

# Then add a stable repository; the repo definition is saved to /etc/yum.repos.d/docker-ce.repo

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Refresh the yum packages and install Docker CE (this installs the latest Docker version)

yum update -y && yum install docker-ce


2. Configure registry mirrors on every node and switch Docker to the cgroup driver k8s expects (systemd)


If the daemon.json file does not exist, create it yourself.


# Create the /etc/docker directory

mkdir /etc/docker

# Write the daemon.json file

cat > /etc/docker/daemon.json <<EOF

{

  "registry-mirrors": [

        "https://ebkn7ykm.mirror.aliyuncs.com",

        "https://docker.mirrors.ustc.edu.cn",

        "http://f1361db2.m.daocloud.io"

    ],

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF


3. Restart the Docker service

systemctl daemon-reload && systemctl restart docker


4. Configure the hosts file on every node


In a real production environment you would plan internal DNS so that every machine can resolve the hostnames, and the hosts file would not be needed (see the dnsmasq sketch after the hosts file below).


cat > /etc/hosts <<EOF

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.3.100 my

192.168.3.224 master1

192.168.3.225 master2

192.168.3.222 node1

192.168.3.223 node2

199.232.28.133  raw.githubusercontent.com

140.82.114.4 github.com

199.232.69.194 github.global.ssl.fastly.net

185.199.108.153 assets-cdn.github.com

185.199.109.153 assets-cdn.github.com

185.199.110.153 assets-cdn.github.com

185.199.111.153 assets-cdn.github.com

185.199.111.133 objects.githubusercontent.com

EOF
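
For the internal-DNS alternative mentioned above, a minimal dnsmasq sketch (the interface name and the choice of 192.168.3.100 as the DNS host are assumptions; adjust to your environment):

yum install -y dnsmasq

cat > /etc/dnsmasq.d/k8s.conf <<EOF
# dnsmasq answers queries for the names in this server's /etc/hosts by default
domain-needed
bogus-priv
interface=ens33
EOF

systemctl enable dnsmasq && systemctl start dnsmasq

# then point every other node's /etc/resolv.conf at this server, e.g. nameserver 192.168.3.100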



Passwordless SSH (this step can be done on the master only); it prepares for copying files between nodes later.


ssh-keygen

cat .ssh/id_rsa.pub >> .ssh/authorized_keys

chmod 600 .ssh/authorized_keys


# Generate the key pair on the master, then copy it to the node and the other master nodes, e.g.:

scp -r .ssh root@192.168.3.222:/root
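
A sketch to push the key to every other node in one loop (the hostnames assume the /etc/hosts entries above):

for host in master1 master2 node1 node2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done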



III. Prepare the system environment

1. Turn off the firewall on every node and install the required dependencies


yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git lrzsz

systemctl stop firewalld && systemctl disable firewalld

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save



2. Disable SELinux on every node


setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config


3. Disable the swap partition on every node


swapoff -a && sed -i  '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab


4. Synchronize the time on every node


If the machines have direct internet access, just run the commands below.

If they cannot reach the internet, set up one machine as a time server and have all the other servers sync from it (see the chrony sketch after these commands).


yum -y install chrony

systemctl start chronyd.service

systemctl enable chronyd.service

timedatectl set-timezone Asia/Shanghai

chronyc -a makestep
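
For the no-internet case, a minimal chrony sketch (using 192.168.3.100 as the internal time server is an assumption):

# On the machine chosen as the time server, add to /etc/chrony.conf:
#   allow 192.168.3.0/24     # let the other nodes sync from this host
#   local stratum 10         # keep serving time even without upstream NTP servers

# On every other node, replace the default server/pool lines in /etc/chrony.conf with:
#   server 192.168.3.100 iburst

# Then restart chronyd and check the sources:
systemctl restart chronyd && chronyc sources -v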


5. Tune kernel parameters on every node


cat > /etc/sysctl.d/k8s.conf << EOF

net.ipv4.ip_nonlocal_bind = 1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

vm.swappiness=0 # avoid using swap; it is only used when the system is about to OOM

vm.overcommit_memory=1 # do not check whether physical memory is sufficient

vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF


sysctl -p /etc/sysctl.d/k8s.conf


6. Configure the Kubernetes yum repository on every node


cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF


7. Enable the ipvs kernel modules on every node


cat >/etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/sh

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

#modprobe -- nf_conntrack_ipv4 # newer kernels (4.x and later) no longer ship the _ipv4 module

modprobe -- nf_conntrack

EOF


chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
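
Optionally, a sketch to have the same modules loaded at every boot via systemd-modules-load:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF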



8. Configure rsyslogd and systemd journald


mkdir -p /var/log/journal

mkdir -p  /etc/systemd/journald.conf.d/


cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF

[Journal]

# Persist logs to disk

Storage=persistent

# Compress archived logs

Compress=yes

SyncIntervalSec=5m

RateLimitInterval=30s

RateLimitBurst=1000

# Maximum disk usage: 10G

SystemMaxUse=10G

# Maximum size of a single log file: 200M

SystemMaxFileSize=200M

# Keep logs for 2 weeks

MaxRetentionSec=2week

# Do not forward logs to syslog

ForwardToSyslog=no

EOF


systemctl restart systemd-journald


9. Upgrade the system kernel

The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable.


rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install -y kernel-lt

grub2-set-default 'CentOS Linux (5.4.159-1.el7.elrepo.x86_64) 7 (Core)'

## Finally, reboot the system

reboot
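
The menu entry name above is tied to one specific kernel-lt build; a sketch to list the entries actually installed and to verify the kernel after the reboot:

# list the installed GRUB menu entries and pick the new kernel from the output
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

# after the reboot, confirm the running kernel
uname -r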


IV. Install keepalived and haproxy on all master nodes


1. Install the packages on every master node


yum -y install haproxy keepalived


2. Edit the keepalived configuration on the master (my)


The first master (my) is the MASTER; the other master is the BACKUP.


# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

   router_id LVS_DEVEL

# add the following

   script_user root

   enable_script_security

}


vrrp_script check_haproxy {

    script "/etc/keepalived/check_haproxy.sh"         # 检测脚本路径

    interval 3

    weight -2

    fall 10

    rise 2

}


vrrp_instance VI_1 {

    state MASTER            # MASTER

    interface ens33         # local NIC name

    virtual_router_id 51

    priority 100             # priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.3.234      # virtual IP

    }

    track_script {

        check_haproxy       # the vrrp_script defined above

    }

}



3. Edit the keepalived configuration on master1


scp this /etc/keepalived/keepalived.conf file to master1; only the state (BACKUP) and the priority need to be changed.


# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived


global_defs {

   router_id LVS_DEVEL


# add the following

   script_user root

   enable_script_security

}


vrrp_script check_haproxy {

    script "/etc/keepalived/check_haproxy.sh"         # 检测脚本路径

    interval 3

    weight -2

    fall 10

    rise 2

}


vrrp_instance VI_1 {

    state BACKUP            # BACKUP

    interface ens33         # local NIC name

    virtual_router_id 51

    priority 90             # priority 90

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.3.234      # virtual IP

    }

    track_script {

        check_haproxy       # the vrrp_script defined above

    }

}


4. Configure haproxy; the haproxy.cfg file is identical on every master.


vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------

# Example configuration for a possible web application.  See the

# full configuration options online.

#

#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

#

#---------------------------------------------------------------------


#---------------------------------------------------------------------

# Global settings

#---------------------------------------------------------------------

global

    # to have these messages end up in /var/log/haproxy.log you will

    # need to:

    #

    # 1) configure syslog to accept network log events.  This is done

    #    by adding the '-r' option to the SYSLOGD_OPTIONS in

    #    /etc/sysconfig/syslog

    #

    # 2) configure local2 events to go to the /var/log/haproxy.log

    #   file. A line like the following can be added to

    #   /etc/sysconfig/syslog

    #

    #    local2.*                       /var/log/haproxy.log

    #

    log         127.0.0.1 local2


    chroot      /var/lib/haproxy

    pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon


    # turn on stats unix socket

    stats socket /var/lib/haproxy/stats


#---------------------------------------------------------------------

# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#---------------------------------------------------------------------

defaults

    mode                    http

    log                     global

    option                  httplog

    option                  dontlognull

    option http-server-close

    option forwardfor       except 127.0.0.0/8

    option                  redispatch

    retries                 3

    timeout http-request    10s

    timeout queue           1m

    timeout connect         10s

    timeout client          1m

    timeout server          1m

    timeout http-keep-alive 10s

    timeout check           10s

    maxconn                 3000


#---------------------------------------------------------------------

# main frontend which proxys to the backends

#---------------------------------------------------------------------

frontend  kubernetes-apiserver

    mode                        tcp

    bind                        *:16443

    option                      tcplog

    default_backend             kubernetes-apiserver


#---------------------------------------------------------------------

# HAProxy statistics page

#---------------------------------------------------------------------

listen stats

    bind            *:1080

    stats auth      admin:admin

    stats refresh   5s

    stats realm     HAProxy\ Statistics

    stats uri       /admin?stats


#---------------------------------------------------------------------

# round robin balancing between the various backends

#---------------------------------------------------------------------

backend kubernetes-apiserver

    mode        tcp

    balance     roundrobin

    server  my 192.168.3.100:6443 check

    server  master1 192.168.3.224:6443 check

    server  master2 192.168.3.225:6443 check



5. Create the health-check script; it is identical on all master nodes.


cat > /etc/keepalived/check_haproxy.sh <<'EOF'

#!/bin/sh

# HAPROXY down

pid=`ps -C haproxy --no-header | wc -l`

if [ $pid -eq 0 ]

then

    systemctl start haproxy

    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]

    then

        killall -9 haproxy


        # decide your own alerting here, e.g. send an email or an SMS

        echo "HAPROXY down" >>/tmp/haproxy_check.log

        sleep 10

    fi

fi

EOF


6. Make the health-check script executable


chmod 755 /etc/keepalived/check_haproxy.sh


7. Start the haproxy and keepalived services


systemctl enable keepalived && systemctl start keepalived 

systemctl enable haproxy && systemctl start haproxy 


8. Check the VIP address

Since "my" is configured as the MASTER, the VIP is only visible on that machine.


ip addr
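
A quick check (sketch) that the VIP from the keepalived configuration is bound to the expected interface on the MASTER node:

ip addr show ens33 | grep 192.168.3.234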


V. Deploy the cluster


The kubeadm, kubectl and kubelet versions must match the Kubernetes version. After enabling kubelet at boot, do not start it manually or it will report errors; kubelet is started automatically once the cluster is initialized!


1. Install the k8s packages; every node needs them


(wrong advice commonly seen online) # just install the latest

(wrong advice commonly seen online) # yum install -y kubeadm kubectl kubelet


Install the specific version that matches the Kubernetes version:


yum install -y kubelet-1.22.5 kubectl-1.22.5 kubeadm-1.22.5

systemctl enable kubelet    # do not start it manually; kubeadm init/join starts it
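
A small sketch to confirm the installed versions match on every node:

kubeadm version -o short
kubelet --version
kubectl version --client --short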


2. Log in to the master (my) machine and generate the default configuration file


kubeadm config print init-defaults > kubeadm-config.yaml



3. Edit kubeadm-config.yaml as follows:

vim  kubeadm-config.yaml

--------------------------------------------------------------------

apiVersion: kubeadm.k8s.io/v1beta3

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.3.100

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  imagePullPolicy: IfNotPresent

  name: my

  taints: null

---

apiServer:

  timeoutForControlPlane: 4m0s

  certSANs:

  - 192.168.3.100

  - 192.168.3.224

  - 192.168.3.225

apiVersion: kubeadm.k8s.io/v1beta3

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controlPlaneEndpoint: "192.168.3.234:16443"

controllerManager: {}

dns: {}

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

kind: ClusterConfiguration

kubernetesVersion: 1.22.0

networking:

  dnsDomain: cluster.local

  serviceSubnet: 10.96.0.0/12

  podSubnet: 10.244.0.0/16

scheduler: {}

---

apiVersion: kubeproxy.config.k8s.io/v1alpha1

kind: KubeProxyConfiguration

mode: ipvs



4. Pull the required images


kubeadm config images pull --config kubeadm-config.yaml --upload-certs


Check the downloaded images:

kubeadm config images list



5. Initialize the cluster


kubeadm init --config kubeadm-config.yaml


# On success, kubeadm init prints join commands like the following; the first (with --control-plane) is for additional master nodes, the second is for worker nodes:
kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:3f4937786226a046b3d6d67b8697d1b6df2eaf3b29f711831577282a484c67ec \

        --control-plane


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:3f4937786226a046b3d6d67b8697d1b6df2eaf3b29f711831577282a484c67ec

6. Create the etcd pki directory on the other master nodes


mkdir -p /etc/kubernetes/pki/etcd


7. Copy the certificates from the primary master to the other master nodes (repeat for 192.168.3.225)


scp /etc/kubernetes/pki/ca.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.3.224:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.* 192.168.3.224:/etc/kubernetes/pki/etcd/


# this file is needed on both the master and node machines

scp  /etc/kubernetes/admin.conf 192.168.3.224:/etc/kubernetes/


scp  /etc/kubernetes/admin.conf 192.168.3.222:/etc/kubernetes/

scp  /etc/kubernetes/admin.conf 192.168.3.223:/etc/kubernetes/


## Or handle the file distribution in batch with a script

cat > k8s-cluster-other-init.sh <<'EOF'

#!/bin/bash

IPS=(192.168.3.224 192.168.3.225)

JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`


for index in 0 1; do

  ip=${IPS[${index}]}

  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"

  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt

  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key

  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key

  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub

  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt

  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key

  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf

  scp /etc/kubernetes/admin.conf $ip:~/.kube/config


  ssh ${ip} "${JOIN_CMD} --control-plane"

done

EOF
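
Run the script from the first master (a sketch; it relies on the passwordless SSH set up earlier):

bash k8s-cluster-other-init.sh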


8. Join the other master nodes to the cluster


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:206fc3f597db5676739d390e4e2ce6fac7e03c361695613d38363027dcb2c0c3 \

        --control-plane

9. Join the two worker nodes to the cluster


kubeadm join 192.168.3.234:16443 --token abcdef.0123456789abcdef \

        --discovery-token-ca-cert-hash sha256:206fc3f597db5676739d390e4e2ce6fac7e03c361695613d38363027dcb2c0c3 

10. Run the following on all master nodes (optional on the worker nodes)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


11. Check the status of all nodes from the master (my)

kubectl get nodes

------------------------------------------------------------------

NAME      STATUS   ROLES                  AGE     VERSION

master1   Ready    control-plane,master   6h38m   v1.22.5

master2   Ready    control-plane,master   4h45m   v1.22.5

my        Ready    control-plane,master   6h41m   v1.22.5

node1     Ready    <none>                 6h15m   v1.22.5

node2     Ready    <none>                 6h15m   v1.22.5





12. Install the network plugin; run this on the master (my)


Without a proxy the manifest may fail to download (a fallback sketch follows the command).

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
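
If the apply times out, a fallback sketch (it assumes raw.githubusercontent.com is reachable through the /etc/hosts entry added earlier): download the manifest first, then apply the local copy:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml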


13. Check the pod status again

kubectl get pods --all-namespaces

----------------------------------------------------------------------

NAMESPACE              NAME                                        READY   STATUS    RESTARTS       AGE

kube-system            coredns-7d89d9b6b8-22f8g                    1/1     Running   7 (76m ago)    6h40m

kube-system            coredns-7d89d9b6b8-4mm4g                    1/1     Running   7 (76m ago)    6h40m

kube-system            etcd-master1                                1/1     Running   1 (76m ago)    118m

kube-system            etcd-master2                                1/1     Running   2 (76m ago)    4h45m

kube-system            etcd-my                                     1/1     Running   13 (76m ago)   6h41m

kube-system            kube-apiserver-master1                      1/1     Running   35 (76m ago)   6h37m

kube-system            kube-apiserver-master2                      1/1     Running   5 (23m ago)    4h45m

kube-system            kube-apiserver-my                           1/1     Running   22 (76m ago)   6h41m

kube-system            kube-controller-manager-master1             1/1     Running   5 (76m ago)    6h37m

kube-system            kube-controller-manager-master2             1/1     Running   3 (76m ago)    4h45m

kube-system            kube-controller-manager-my                  1/1     Running   13 (76m ago)   6h41m

kube-system            kube-flannel-ds-9rv5t                       1/1     Running   3 (23m ago)    6h15m

kube-system            kube-flannel-ds-n4cz2                       1/1     Running   3 (76m ago)    4h45m

kube-system            kube-flannel-ds-s2t2g                       1/1     Running   5 (76m ago)    6h28m

kube-system            kube-flannel-ds-xhfbh                       1/1     Running   5              6h28m

kube-system            kube-flannel-ds-xtgdb                       1/1     Running   2 (76m ago)    6h15m

kube-system            kube-proxy-7st4z                            1/1     Running   2 (23m ago)    102m

kube-system            kube-proxy-b7wq7                            1/1     Running   1 (76m ago)    102m

kube-system            kube-proxy-sksmw                            1/1     Running   1 (76m ago)    102m

kube-system            kube-proxy-vsml8                            1/1     Running   1              102m

kube-system            kube-proxy-zwklm                            1/1     Running   2 (76m ago)    102m

kube-system            kube-scheduler-master1                      1/1     Running   6 (76m ago)    6h37m

kube-system            kube-scheduler-master2                      1/1     Running   2 (76m ago)    4h45m

kube-system            kube-scheduler-my                           1/1     Running   13 (76m ago)   6h41m

kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-85mf6   1/1     Running   5 (76m ago)    5h57m

kubernetes-dashboard   kubernetes-dashboard-576cb95f94-qp4zb       1/1     Running   5 (76m ago)    5h57m




Everything is healthy only when all pods are Running. If you see states like the ones below, check the logs (tail -f /var/log/messages) and analyze the problem; a common cause is a broken ipvs setup or wrong kernel parameters. A few diagnostic commands follow the example output.


Example of the abnormal state:


kube-system   kube-flannel-ds-28jks                  0/1     Error               1          28s

kube-system   kube-flannel-ds-4w9lz                  0/1     Error               1          28s

kube-system   kube-flannel-ds-8rflb                  0/1     Error               1          28s

kube-system   kube-flannel-ds-wfcgq                  0/1     Error               1          28s

kube-system   kube-flannel-ds-zgn46                  0/1     Error               1          28s

kube-system   kube-proxy-b8lxm                       0/1     CrashLoopBackOff    4          2m15s

kube-system   kube-proxy-bmf9q                       0/1     CrashLoopBackOff    7          14m

kube-system   kube-proxy-bng8p                       0/1     CrashLoopBackOff    6          7m31s

kube-system   kube-proxy-dpkh4                       0/1     CrashLoopBackOff    6          10m

kube-system   kube-proxy-xl45p                       0/1     CrashLoopBackOff    4          2m30s
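
A few diagnostic commands (a sketch; the pod names come from the example output above):

kubectl -n kube-system logs kube-proxy-b8lxm
kubectl -n kube-system describe pod kube-flannel-ds-28jks

# confirm the ipvs kernel modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# follow the system log mentioned above
tail -f /var/log/messages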



14. Install the etcdctl client tool on the master


wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz

tar -zxf etcd-v3.4.14-linux-amd64.tar.gz

mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin

chmod +x /usr/local/bin/etcdctl
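
Verify the client is on the PATH (sketch):

etcdctl version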



15. Inspect the etcd cluster


15.1 Check etcd cluster health


ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.3.100:2379,192.168.3.224:2379,192.168.3.225:2379 endpoint health


+--------------------+--------+------------+-------+

|      ENDPOINT      | HEALTH |    TOOK    | ERROR |

+--------------------+--------+------------+-------+

| 192.168.3.100:2379 |   true | 5.977299ms |       |

| 192.168.3.224:2379 |   true |  6.99102ms |       |

| 192.168.3.225:2379 |   true |  6.99102ms |       |

+--------------------+--------+------------+-------+

15.2 List the etcd cluster members


ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.3.100:2379,192.168.3.224:2379,192.168.3.225:2379 member list


15.3 Check the endpoint status (including which member is the leader)


ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.3.100:2379,192.168.3.224:2379,192.168.3.225:2379 endpoint status



15.4 Log in to the HAProxy web stats page:

Username/password: admin/admin


http://192.168.3.234:1080/admin?stats


VI. Deploy the Kubernetes dashboard

1. Download the recommended.yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml


vim recommended.yaml


## The modified part is as follows:

kind: Service

apiVersion: v1

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kubernetes-dashboard

spec:

  type: NodePort   # NodePort mode

  ports:

    - port: 443

      targetPort: 8443

      nodePort: 30000  # expose the dashboard on port 30000

  selector:

    k8s-app: kubernetes-dashboard

2. Install the dashboard

kubectl apply -f recommended.yaml


2.1 Check the installation result


kubectl get pods -n kubernetes-dashboard

dashboard-metrics-scraper-799d786dbf-j4rv6   1/1     Running   0          3h33m

kubernetes-dashboard-6b6b86c4c5-ls49h        1/1     Running   0          3h33m


(Troubleshooting) Check the flannel network configuration:

cat /run/flannel/subnet.env


Delete the cni0 interface and let it be recreated:


ifconfig cni0 down

ip link delete cni0



2.2 Check the dashboard service

kubectl get service -n kubernetes-dashboard  -o wide

dashboard-metrics-scraper   ClusterIP   10.103.95.138   <none>        8000/TCP        3h34m   k8s-app=dashboard-metrics-scraper

kubernetes-dashboard        NodePort    10.99.186.174   <none>        443:30000/TCP   3h34m   k8s-app=kubernetes-dashboard


3. Create a dashboard administrator

cat > dashboard-admin.yaml <<EOF

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    k8s-app: kubernetes-dashboard

  name: dashboard-admin

  namespace: kubernetes-dashboard

EOF


kubectl apply -f dashboard-admin.yaml


4. Grant the administrator cluster permissions


cat > dashboard-admin-bind-cluster-role.yaml <<EOF

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: dashboard-admin-bind-cluster-role

  labels:

    k8s-app: kubernetes-dashboard

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: dashboard-admin

  namespace: kubernetes-dashboard

EOF


kubectl apply -f dashboard-admin-bind-cluster-role.yaml


5. Get the administrator token


kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

------------------------------------------------------------------------

Name:         dashboard-admin-token-tzp2d

Namespace:    kubernetes-dashboard

Labels:       <none>

Annotations:  kubernetes.io/service-account.name: dashboard-admin

              kubernetes.io/service-account.uid: 37a23381-007c-4bab-a07b-42767a56d859


Type:  kubernetes.io/service-account-token


Data

====

ca.crt:     1099 bytes

namespace:  20 bytes

token:      ayJhbGciOiJSUzI1NiIsImtpZCI6InRFRF9MWlhDLVZ2MkJjT2tXUXQ4QlRhWVowOTVTRTBkZ2tDcF9xaE5qOFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdHpwMmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzdhMjMzODEtMDA3Yy00YmFiLWEwN2ItNDI3NjdhNTZkODU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.TIWkVlu7SrwK9GetIC9eE32sgzuta0Zy52Ta3KkPmlQaINgqZx38I3nrFJ1u_641tENNu_60T3PjCbZweiqmpPTiyazL9Lw8uSQ5sbX3hauSzC5xOA1CX4AH1KEUnBYwWhuI-1VpXeXX-nVn7PoDElNoHBdXZ2l3NNLx2KmmaFoXHiVXAiIzTvSGY4DxJ9y6g2Tyz7GFOlOfOgpKYbVZlKufqrXEiO5SoUE_WndJSlt65UydQZ_zwmhA_6zWSxTDj2jF1o76eYXjpMLT0ioM51k-OzgljnRKZU7Jy67XJzj5VdJuDUdTZ0KADhF2XAkh-Vre0tjMk0867VHq0K_Big


6. Open the dashboard in a browser


Go to: https://192.168.3.234:30000


Choose "Token" and paste the token obtained above.


At this point the highly available k8s cluster is fully deployed. As a test, I shut down "my"; the cluster stayed available the whole time and only the dashboard page flickered briefly.



===================================================================================================================

===================================================================================================================

Appendix


1. How to join the cluster after the kubeadm join token has expired


# Create a token

$ kubeadm token create

ll3wpn.pct6tlq66lis3uhk


# List tokens

$ kubeadm token list

TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS

ll3wpn.pct6tlq66lis3uhk   23h         2022-01-17T14:42:50+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token


# Get the sha256 hash of the CA certificate

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'



# 808d211c70a3553aaf6662ca13f535ad95a955fd11aa2a38d37871690eccdca3




# Join a new node to the cluster (drop --control-plane for a plain worker node)

kubeadm join 192.168.3.234:16443 --token 8pyxdt.c4xb0qu6xzzwdd0z  \

    --discovery-token-ca-cert-hash sha256:808d211c70a3553aaf6662ca13f535ad95a955fd11aa2a38d37871690eccdca3 --control-plane



2. What to do if you forget the kubeadm join command


# Run the following to print the join command

kubeadm token create --print-join-command


3. If ipvs was not enabled at kubeadm init time, how to switch kube-proxy to ipvs mode later


# Change mode: "ipvs" in config.conf of the kube-system/kube-proxy ConfigMap

$ kubectl edit cm kube-proxy -n kube-system

...

apiVersion: v1

data:

  config.conf: |-

    apiVersion: kubeproxy.config.k8s.io/v1alpha1

    bindAddress: 0.0.0.0

    clientConnection:

      acceptContentTypes: ""

      burst: 0

      contentType: ""

      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf

      qps: 0

    clusterCIDR: 10.10.0.0/16

    configSyncPeriod: 0s

    conntrack:

      maxPerCore: null

      min: null

      tcpCloseWaitTimeout: null

      tcpEstablishedTimeout: null

    enableProfiling: false

    healthzBindAddress: ""

    hostnameOverride: ""

    iptables:

      masqueradeAll: false

      masqueradeBit: null

      minSyncPeriod: 0s

      syncPeriod: 0s

    ipvs:

      excludeCIDRs: null

      minSyncPeriod: 0s

      scheduler: ""

      strictARP: false

      syncPeriod: 0s

    kind: KubeProxyConfiguration

    metricsBindAddress: ""

    # change this line

    mode: "ipvs"

    nodePortAddresses: null

    oomScoreAdj: null

    portRange: ""

    udpIdleTimeout: 0s

    winkernel:

      enableDSR: false

      networkName: ""

      sourceVip: ""

...


# List the running kube-proxy pods

$ kubectl get pods -n kube-system | grep kube-proxy


# Delete the existing kube-proxy pods; the controller recreates them automatically

$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'


# Because the kube-proxy configuration was changed via the ConfigMap, nodes added later will use ipvs mode directly.


# Check the running kube-proxy pods again

$ kubectl get pods -n kube-system | grep kube-proxy


# Check a kube-proxy pod's log to make sure it runs in ipvs mode; if the log prints 'Using ipvs Proxier', ipvs mode is enabled.

$ kubectl logs kube-proxy-xxxxx -n kube-system


# Test with ipvsadm; the previously created Services now show up as LVS virtual servers.

$ ipvsadm -Ln



# Completely reset a node (tear down the kubeadm state) if you need to start over:
sudo kubeadm reset -f

sudo rm -rvf $HOME/.kube

sudo rm -rvf ~/.kube/

sudo rm -rvf /etc/kubernetes/

sudo rm -rvf /etc/systemd/system/kubelet.service.d

sudo rm -rvf /etc/systemd/system/kubelet.service

sudo rm -rvf /usr/bin/kube*

sudo rm -rvf /etc/cni

sudo rm -rvf /opt/cni

sudo rm -rvf /var/lib/etcd

sudo rm -rvf /var/etcd

yum remove kubectl kubelet kubeadm






4. Join another master node to the cluster


kubeadm token create --ttl 0 --print-join-command    # the default token validity is 24 hours; use --ttl to extend it, 0 means the token never expires



kubeadm join 192.168.3.234:16443 --token soszky.rsdod8mfiusib2k4 \

        --discovery-token-ca-cert-hash sha256:808d211c70a3553aaf6662ca13f535ad95a955fd11aa2a38d37871690eccdca3 \

        --control-plane

5. Join the two worker nodes to the cluster

kubeadm join 192.168.3.234:16443 --token soszky.rsdod8mfiusib2k4 \

        --discovery-token-ca-cert-hash sha256:808d211c70a3553aaf6662ca13f535ad95a955fd11aa2a38d37871690eccdca3 




















































