
High-Availability k8s Cluster Deployment

2021-12-28
   

Kubernetes (k8s) is currently the most popular cloud platform, and setup tutorials are everywhere online, but guides covering high availability (i.e. production-grade deployments) are relatively scarce. The official documentation does cover it, yet for well-known reasons there are still pitfalls when deploying from inside China, so this post records the whole process.

1. Server Configuration

1.1 Server Requirements

  • Several machines running Ubuntu 16.04+, CentOS 7, or HypriotOS v1.0.1+

  • At least 2 GB of RAM per machine

  • Full network connectivity between all machines in the cluster

  • A unique MAC address and product_uuid on every node (see the check commands after this list)

  • Certain open ports; see the tables below

    • Master nodes
      Port range  | Purpose
      6443*       | Kubernetes API server
      2379-2380   | etcd server client API
      10250       | Kubelet API
      10251       | kube-scheduler
      10252       | kube-controller-manager
      10255       | Read-only Kubelet API (Heapster)
    • Worker nodes
      Port range   | Purpose
      10250        | Kubelet API
      10255        | Read-only Kubelet API (Heapster)
      30000-32767  | Default port range for NodePort Services
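
A quick way to confirm that the MAC address and product_uuid really are unique on every node (a minimal sketch; interface names vary by host):

    # List the MAC addresses of the network interfaces
    ip link show

    # Print the product_uuid; it should differ on every node
    sudo cat /sys/class/dmi/id/product_uuid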

1.2 Configuration Steps

I used six servers running CentOS 7.8: three as masters and three as workers. Details below:

Name IP Role CPU Memory(GB) Storage(GB)
vip 172.22.107.110 Vip - - -
master1 172.22.107.230 Master 4 8 50
master2 172.22.107.226 Master 4 8 50
master3 172.22.107.164 Master 4 9 50
node1 172.22.107.163 Worker 4 8 100
node2 172.22.107.242 Worker 4 8 100
node3 172.22.107.183 Worker 4 8 100
  1. Open the required ports and allow the required services on each node

    # master nodes
    firewall-cmd --zone=public --add-port=6443/tcp --permanent
    firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
    firewall-cmd --zone=public --add-port=10250-10252/tcp --permanent
    firewall-cmd --zone=public --add-port=10255/tcp --permanent
    firewall-cmd --add-service=ntp --permanent # allow the NTP service (for time sync)
    firewall-cmd --add-protocol=vrrp --permanent  # VRRP, used by keepalived
    firewall-cmd --zone=public --add-port=8443/tcp --permanent
    # firewall-cmd --zone=public --add-port=16443/tcp --permanent
    firewall-cmd --reload # apply the configuration immediately
    firewall-cmd --zone=public --list-ports	# list the currently open ports
    6443/tcp 2379-2380/tcp 10250-10252/tcp 10255/tcp
       
    # worker nodes
    firewall-cmd --zone=public --add-port=10250/tcp --permanent
    firewall-cmd --zone=public --add-port=10255/tcp --permanent
    firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
    firewall-cmd --add-service=ntp --permanent	# allow the NTP service (for time sync)
    firewall-cmd --reload	# apply the configuration immediately
    firewall-cmd --zone=public --list-ports	# list the currently open ports
    10250/tcp 10255/tcp 30000-32767/tcp
    
  2. Disable SELinux on all nodes

    sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config	# permanently set SELinux to permissive (warnings only)
    

    Setting SELinux to permissive (effectively disabling it) is required by the pod network, because containers need access to the host filesystem; this remains true until SELinux support in the kubelet improves. The sed command above only takes effect after a reboot; see the sketch below to apply it immediately.
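
    A minimal sketch for switching SELinux to permissive right away, without waiting for the reboot:

    # Put SELinux into permissive mode immediately
    sudo setenforce 0
    # Confirm the current mode (should print "Permissive")
    getenforce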

  3. Disable swap on all nodes by commenting out the swap line in /etc/fstab. A reboot is needed for this to take effect; we reboot later, after the remaining steps (or turn swap off immediately, as shown after the fstab example below).

    #
    # /etc/fstab
    # Created by anaconda on Thu Dec 23 13:34:41 2021
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    UUID=d379326c-41bb-45fe-bfc1-17725a72db8e /boot                   xfs     defaults        0 0
    # /dev/mapper/centos-swap swap                    swap    defaults        0 0
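
    To turn swap off immediately instead of waiting for the reboot, a minimal sketch:

    # Disable all swap devices right now
    sudo swapoff -a
    # Verify: the Swap line should show 0
    free -h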
    
  4. Set the hostname

    hostnamectl set-hostname [master1/master2/master3/node1/node2/node3] # takes effect after reopening the shell
    
  5. Add hosts entries on all nodes

    cat >> /etc/hosts << EOF
    172.22.107.110 vip
    172.22.107.230 master1
    172.22.107.226 master2
    172.22.107.164 master3
    172.22.107.163 node1
    172.22.107.242 node2
    172.22.107.183 node3
    EOF
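
    A quick sketch to confirm that every node can resolve and reach the others by the hostnames defined above:

    # Ping each host once by name; a failure points to a hosts-file or network problem
    for h in master1 master2 master3 node1 node2 node3; do
        ping -c 1 -W 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
    done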
    
  6. Time synchronization

    Reference: https://blog.csdn.net/heian_99/article/details/105631692

    CentOS 7 ships with chrony by default, so we use chrony for time synchronization.

    • Master node configuration
    # 1. Allow the NTP service through the firewall (see step 1)
    # 2. Configure the chrony service
    vim /etc/chrony.conf	# full configuration below; change the server, allow, and local stratum fields
    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server ntp1.aliyun.com iburst
    ...
    # Allow NTP client access from local network.
    #allow 192.168.0.0/16
    allow 172.22.107.0/24
       
    # Serve time even if not synchronized to a time source.
    local stratum 10
    ...
    # 3. Restart the service
    systemctl restart chronyd.service
    systemctl status chronyd.service # check the service status
    systemctl enable chronyd --now # enable at boot

    # force a step of the system clock:
    chronyc -a makestep

    # check statistics of the time sources:
    chronyc sourcestats

    # list the time sources:
    chronyc sources -v
    
    • Worker node configuration
    # 1. Allow the NTP service through the firewall (see step 1)
    # 2. Configure the chrony service
    vim /etc/chrony.conf	# full configuration below; only the server field needs changing
       
    server 172.22.107.230 iburst
    server 172.22.107.226 iburst
    server 172.22.107.164 iburst
       
    # 3. Restart the chrony service
    systemctl restart chronyd.service
    systemctl status chronyd.service # check the service status

    # force a step of the system clock:
    chronyc -a makestep

    # check statistics of the time sources:
    chronyc sourcestats

    # list the time sources:
    chronyc sources -v
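
    To confirm the workers really are syncing against the masters, a quick sketch (output fields vary slightly by chrony version):

    # Show the current reference source and offset;
    # the Reference ID should be one of the master IPs, e.g. 172.22.107.230
    chronyc tracking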
    
  7. Pass bridged IPv4 traffic to iptables chains on all nodes

    cat >> /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
       
    sysctl --system
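
    These sysctl keys only exist once the br_netfilter kernel module is loaded; a minimal sketch to load it now and at every boot (the file name under /etc/modules-load.d/ is my own choice):

    # Load the bridge netfilter module immediately
    sudo modprobe br_netfilter
    # Load it automatically at every boot
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    # The key should now resolve and report 1
    sysctl net.bridge.bridge-nf-call-iptables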
    
  8. Switch the yum repositories on all nodes to a domestic mirror

    Reference: https://www.cnblogs.com/jake-jin/p/12363496.html

    # 1. Back up the original repo file
    cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
    # 2. Download the Aliyun repo file
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    # 3. Add EPEL
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    # 4. Clear the cache and rebuild it
    yum clean all
    yum makecache fast
    
  9. Update packages and reboot the servers

    yum update -y
    reboot
    

2. Environment Installation

2.1 Install Docker on all nodes

Check the Docker versions validated against Kubernetes at this link (https://github.com/kubernetes/kubernetes/blob/master/build/dependencies.yaml). At the time of writing it was 19.03.

References:

  • https://docs.docker.com/engine/install/centos/
  • https://www.runoob.com/docker/centos-docker-install.html
  • https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/
  1. Remove old versions

    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine
    
  2. Install the prerequisites for setting up the repository

     sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
    
  3. Set up a stable repository (the official source is slow, so use the Aliyun mirror)

     sudo yum-config-manager \
        --add-repo \
        http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
  4. Install a specific version of Docker Engine - Community

    # 1. List the versions available in the repository, sorted from newest to oldest
    yum list docker-ce --showduplicates | sort -r
     * updates: mirrors.aliyun.com
    Loading mirror speeds from cached hostfile
    Loaded plugins: fastestmirror, langpacks
     * extras: mirrors.aliyun.com
    docker-ce.x86_64            3:20.10.9-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.8-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.7-3.el7                     docker-ce-stable
        
     # 2. Install a specific version by its fully qualified package name: the package name (docker-ce) plus the version string (second column) from the first colon (:) up to the first hyphen, joined with a hyphen (-), e.g. docker-ce-18.09.1.
     sudo yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io

     # 3. Start docker
     systemctl start docker

     # 4. Verify that docker is installed correctly
     docker run hello-world
    
  5. Configure the registry mirror and the Docker daemon

    # 1. Set the registry mirror and daemon options
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
    	"registry-mirrors":["https://reg-mirror.qiniu.com/"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
       
    # 2. Restart Docker and enable it at boot
    sudo systemctl enable docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker
       
    # 3. Verify that the mirror is in effect
    docker info
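
    The kubelet and Docker must use the same cgroup driver, which is why daemon.json above sets systemd; a quick sketch to confirm that both the mirror and the driver took effect:

    # The mirror should show https://reg-mirror.qiniu.com/
    docker info | grep -A1 "Registry Mirrors"
    # The cgroup driver should be systemd, matching the kubelet
    docker info | grep -i "cgroup driver"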
    

2.2 Install kubeadm, kubelet, and kubectl on all nodes

References:

  • https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
  • https://www.cnblogs.com/zhaobowen/p/13399409.html
  • kubeadm: the command used to bootstrap the cluster.

  • kubelet: runs on every node in the cluster and starts Pods and containers.

  • kubectl: the command-line tool used to talk to the cluster.

# 1. Distribute the k8s repo file to all nodes
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# 2. Find the version to install
yum list kubeadm --showduplicates | sort -r

# 3. Install the specified version
sudo yum install -y kubelet-1.19.5 kubeadm-1.19.5 kubectl-1.19.5 --disableexcludes=kubernetes

# 4. Inspect the installed kubelet unit
systemctl cat kubelet

# 5. Enable at boot
sudo systemctl enable --now kubelet
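
A quick sketch to confirm all three tools are pinned at 1.19.5 (only the kubectl client version can be queried at this point, since there is no API server yet):

kubeadm version -o short          # expect v1.19.5
kubelet --version                 # expect Kubernetes v1.19.5
kubectl version --client --short  # expect Client Version: v1.19.5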

3. Creating a Highly Available Cluster with kubeadm (Cluster Deployment)

References:

  • https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/
  • https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/high-availability/
  • https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
  • https://www.cnblogs.com/zhaobowen/p/13399708.html
  • https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md
  • https://github.com/sskcal/kubernetes/tree/main/vipk8s

Stacked etcd topology

3.1 Prepare the required container images on all nodes

This step is optional in general, but for the well-known reasons it is effectively mandatory inside China. It applies when you do not want kubeadm init and kubeadm join to download the default container images hosted on k8s.gcr.io.

  1. Check the kubeadm version (which determines the image versions)

    kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5", GitCommit:"e338cf2c6d297aa603b50ad3a301f761b4173aa6", GitTreeState:"clean", BuildDate:"2020-12-09T11:16:40Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
    
  2. List the required images

    kubeadm config images list --kubernetes-version=v1.19.5
    W1226 01:51:51.558322   21454 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    k8s.gcr.io/kube-apiserver:v1.19.5
    k8s.gcr.io/kube-controller-manager:v1.19.5
    k8s.gcr.io/kube-scheduler:v1.19.5
    k8s.gcr.io/kube-proxy:v1.19.5
    k8s.gcr.io/pause:3.2
    k8s.gcr.io/etcd:3.4.13-0
    k8s.gcr.io/coredns:1.7.0
    
  3. Pull the images

    • Option 1: pull and re-tag manually
    # k8s.gcr.io is blocked, so pull the images from the Aliyun registry and re-tag them
    # 1. Pull the images from the Aliyun mirror
    docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5
    docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.5
    docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.5
    docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.19.5
    docker pull registry.aliyuncs.com/google_containers/pause:3.2
    docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
    docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
       
    # 2. Re-tag the pulled images
    docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5 k8s.gcr.io/kube-apiserver:v1.19.5
    docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.5 k8s.gcr.io/kube-controller-manager:v1.19.5
    docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.5 k8s.gcr.io/kube-scheduler:v1.19.5
    docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.5 k8s.gcr.io/kube-proxy:v1.19.5
    docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
    docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
    docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
       
    # 3. Remove the original tags
    docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5
    docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.5
    docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.5
    docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.19.5
    docker rmi registry.aliyuncs.com/google_containers/pause:3.2
    docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.13-0
    docker rmi registry.aliyuncs.com/google_containers/coredns:1.7.0
    
    • Option 2: use a script
    #!/bin/bash
    # Pull each image from the Aliyun mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
    images=(kube-apiserver:v1.19.5 kube-controller-manager:v1.19.5 kube-scheduler:v1.19.5 kube-proxy:v1.19.5 pause:3.2 etcd:3.4.13-0 coredns:1.7.0)
    for imageName in ${images[@]} ; do
        docker pull registry.aliyuncs.com/google_containers/$imageName
        docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.aliyuncs.com/google_containers/$imageName
    done
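
    Whichever option you used, a quick sketch to check that every image kubeadm expects is now present locally:

    # All seven images should appear under the k8s.gcr.io prefix
    docker images | grep k8s.gcr.io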
    

3.2 Configure load balancing on the master nodes with keepalived and haproxy

In short, LVS does the load balancing and keepalived provides redundancy, i.e. failover. LVS (Linux Virtual Server) is a virtual server cluster system with three load-balancing modes and eight scheduling algorithms; it works only at layer 4, doing port forwarding, and the classic DR mode is what is usually used. The LVS core is already built into CentOS 7, so installing the ipvsadm management tool is enough to configure it, but doing so purely on the command line is cumbersome. Worse, if a server in an LVS cluster dies, the director keeps forwarding traffic to the dead server. Keepalived was created to solve this: it was originally dedicated to monitoring the health of the servers in an LVS cluster, checking at layers 3, 4, and 5 and removing a failed server from the pool. Since LVS itself can be configured inside keepalived, installing and configuring keepalived is usually all that is needed for an LVS + keepalived architecture.

To provide load balancing from a virtual IP, the keepalived + haproxy combination has been around for a long time and can be considered well known and well tested:

  • The keepalived service provides a virtual IP managed by a configurable health check. Because of the way the virtual IP is implemented, all hosts negotiating it must be in the same IP subnet.
  • The haproxy service can be configured for simple stream-based load balancing, which lets TLS termination be handled by the API server instances behind it.

This combination can run either as services on the operating system or as static pods on the control-plane hosts; the service configuration is identical in both cases. Here they are installed as OS services.

  1. Install keepalived and haproxy

    yum install -y keepalived haproxy
    
  2. Back up the configuration files

    cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
    cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
    
  3. Edit the configuration files

    • keepalived.conf
    ! /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script check_apiserver {
      script "/etc/keepalived/check_apiserver.sh"
      interval 3
      weight -2
      fall 10
      rise 2
    }
       
    vrrp_instance VI_1 {
        state MASTER # one node is MASTER, the rest are BACKUP
        interface ens192 # host network interface name
        virtual_router_id 51 # must be identical on all nodes
        priority 100 # priority; 100-101 is fine (the highest-priority node becomes MASTER)
        authentication {
            auth_type PASS
            auth_pass p@ssw0rd4@uth # password, identical on all nodes
        }
        virtual_ipaddress {
            172.22.107.110 # the virtual load-balancer IP
        }
        }
        track_script {
            check_apiserver
        }
    }
    
    • /etc/keepalived/check_apiserver.sh
    #!/bin/sh
    APISERVER_VIP=172.22.107.110 # the virtual load-balancer IP
    APISERVER_DEST_PORT=8443 # the load-balancer port
       
    errorExit() {
        echo "*** $*" 1>&2
        exit 1
    }
       
    curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
    if ip addr | grep -q ${APISERVER_VIP}; then
        curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
    fi
    
    • /etc/haproxy/haproxy.cfg
    # /etc/haproxy/haproxy.cfg
    # APISERVER_DEST_PORT=8443	# the port the load balancer listens on
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
        log /dev/log local0
        log /dev/log local1 notice
        daemon
       
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        mode                    http
        log                     global
        option                  httplog
        option                  dontlognull
        option http-server-close
        option forwardfor       except 127.0.0.0/8
        option                  redispatch
        retries                 1
        timeout http-request    10s
        timeout queue           20s
        timeout connect         5s
        timeout client          20s
        timeout server          20s
        timeout http-keep-alive 10s
        timeout check           10s
       
    #---------------------------------------------------------------------
    # apiserver frontend which proxys to the control plane nodes
    #---------------------------------------------------------------------
    frontend apiserver
        bind *:8443 # change to the chosen load-balancer port (APISERVER_DEST_PORT)
        mode tcp
        option tcplog
        default_backend apiserver
       
    #---------------------------------------------------------------------
    # round robin balancing for apiserver
    #---------------------------------------------------------------------
    backend apiserver
        option httpchk GET /healthz
        http-check expect status 200
        mode tcp
        option ssl-hello-chk
        balance     roundrobin
            server master1 172.22.107.230:6443 check
            server master2 172.22.107.226:6443 check
            server master3 172.22.107.164:6443 check
            # [...]
    
  4. Enable both services at boot

    systemctl enable keepalived --now
    systemctl enable haproxy --now
    
  5. Test the connection

    # nc -v LOAD_BALANCER_IP PORT
    nc -v 172.22.107.110 8443
    nc: connectx to 172.22.107.110 port 8443 (tcp) failed: Connection refused
    

    A connection refused error is expected, since the apiserver is not running yet. A timeout, however, means the load balancer cannot reach the control-plane nodes; if that happens, fix the load-balancer configuration so it can talk to them.
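
    Before moving on, it is also worth confirming that the VIP actually landed on the keepalived MASTER node and that both services are healthy (a sketch; ens192 is the interface name used in keepalived.conf above):

    # The VIP 172.22.107.110 should appear on exactly one master
    ip addr show ens192 | grep 172.22.107.110
    # Both services should be active (running)
    systemctl status keepalived haproxy --no-pager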

3.3 Bootstrap the cluster

  • The first master node
# 1. Initialize the cluster on master1
kubeadm init \
--control-plane-endpoint vip:8443 \
--kubernetes-version=v1.19.5 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs

# 2. At the end of the run you should see output like this
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.22.107.110:6443 --token irp5m5.suhp6jfudb3br6vj \
    --discovery-token-ca-cert-hash sha256:bc60bd82a71374eea24c8bbe74fd459303ed0889103597947e4d1475cc2282da \
    --control-plane --certificate-key 609e02447875234f6137df702c3ce6b8c594fd983b2a7d896541de384553c28c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.22.107.110:6443 --token irp5m5.suhp6jfudb3br6vj \
    --discovery-token-ca-cert-hash sha256:bc60bd82a71374eea24c8bbe74fd459303ed0889103597947e4d1475cc2282da 
    
# 3. As instructed, run the following on master1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
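
If the other nodes are joined later, note that the uploaded certificates expire after two hours and the bootstrap token after 24 hours; a sketch for regenerating both on master1:

# Print a fresh worker join command (creates a new token)
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print the new certificate key
sudo kubeadm init phase upload-certs --upload-certs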
  • The remaining master nodes
# 1. Join the cluster with the control-plane join command from the output above
kubeadm join 172.22.107.110:6443 --token irp5m5.suhp6jfudb3br6vj \
    --discovery-token-ca-cert-hash sha256:bc60bd82a71374eea24c8bbe74fd459303ed0889103597947e4d1475cc2282da \
    --control-plane --certificate-key 609e02447875234f6137df702c3ce6b8c594fd983b2a7d896541de384553c28c
   
# The following output indicates a successful join
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

# 2. As instructed, run
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Worker nodes
# Join the cluster with the worker join command printed by the first master
kubeadm join 172.22.107.110:6443 --token irp5m5.suhp6jfudb3br6vj \
    --discovery-token-ca-cert-hash sha256:bc60bd82a71374eea24c8bbe74fd459303ed0889103597947e4d1475cc2282da 
# The following output indicates a successful join
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3.4 Configure the CNI network plugin for all nodes

References:

  • https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/
  • https://github.com/flannel-io/flannel#deploying-flannel-manually
# Run on a master node
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Wait a moment, then check the node status on a master
kubectl get nodes
# The following output means the deployment succeeded
NAME      STATUS     ROLES    AGE   VERSION
master1   Ready      master   52m   v1.19.5
master2   Ready      master   51m   v1.19.5
master3   Ready      master   49m   v1.19.5
node1     Ready      <none>   38m   v1.19.5
node2     Ready      <none>   33m   v1.19.5
node3     Ready      <none>   33m   v1.19.5
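
To double-check the cluster components themselves, one quick sketch is to list the system pods; every flannel, coredns, etcd, apiserver, scheduler, and controller-manager pod should be Running:

kubectl get pods --all-namespaces -o wide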

4. Verification

Reference:

  • https://lanvnal.com/2021/12/22/k8s-an-zhuang-yu-pei-zhi-ji-lu/
# nginx-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
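
A sketch for applying the manifest and checking that the Deployment and Service work. Without a cloud provider the LoadBalancer Service stays Pending for an external IP, but it still gets a NodePort, which is read back below:

# Create the Deployment and Service
kubectl apply -f nginx-test.yaml

# Both nginx pods should reach Running, and the Service should list a NodePort
kubectl get pods -l app=nginx
kubectl get svc nginx-service

# Hit nginx through any node IP and the allocated NodePort
NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://172.22.107.163:${NODE_PORT}"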
