Deploying Kubernetes v1.23.3 from binaries with containerd


Why containerd?

Because Kubernetes announced back in 2020 that dockershim would be removed; see the link below for the details.

Dockershim Deprecation FAQ

You will have to accept it sooner or later, so you might as well do it early.

k8s components

Kubernetes Components

  • Master node components

etcd: a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.

kube-apiserver: the single entry point for all resource operations and the coordinator of the other components; it provides authentication, authorization, access control, API registration and discovery. It exposes the HTTP API, and every create/read/update/delete and watch on an object goes through the apiserver before being persisted to etcd.

kube-controller-manager: maintains cluster state (failure detection, auto scaling, rolling updates and other routine background tasks). Each resource has a corresponding controller, and controller-manager is responsible for managing those controllers.

kube-scheduler: handles scheduling, placing Pods onto the appropriate machines according to the configured scheduling policies.

  • Worker node components

kubelet: the master's agent on each worker node. It manages the lifecycle of the containers running on the node: creating containers, mounting volumes into Pods, downloading Secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers and is also responsible for volume (CVI) and network (CNI) management.

kube-proxy: provides in-cluster service discovery and load balancing for Services; it implements the Pod network proxy on each worker node, maintaining network rules and layer-4 load balancing.

container runtime: responsible for image management and for actually running Pods and containers (CRI); docker and containerd are the most commonly used.

cluster networking: the cluster network system; flannel and calico are the most commonly used.

coredns: provides DNS for the whole cluster.

ingress controller: provides the external entry point for services.

metrics-server: provides resource monitoring.

dashboard: provides a GUI.

IP               Role            OS / kernel
192.168.91.19    master/worker   CentOS 7.6 / 3.10.0-957.el7.x86_64
192.168.91.20    worker          CentOS 7.6 / 3.10.0-957.el7.x86_64

Service          Version
etcd             v3.5.1
kubernetes       v1.23.3
cfssl            v1.6.1
containerd       v1.5.9
pause            v3.6
flannel          v0.15.1
coredns          v1.8.6
metrics-server   v0.5.2
dashboard        v2.4.0

cfssl github

etcd github

k8s github

containerd github

runc github

The packages and images used in this deployment have been uploaded to CSDN.

The master node needs at least 2 CPUs / 2 GB of memory; worker nodes can get by with 1 CPU / 1 GB.

Passwordless SSH between the nodes must be set up beforehand; those steps are not spelled out here, but a minimal sketch follows.
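A minimal sketch of that setup, run as root from the node that will drive the deployment (192.168.91.19 here); ssh-copy-id prompts once for each node's password:

# generate a key pair if one does not exist yet (empty passphrase)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# push the public key to every node, including this one
for i in 192.168.91.19 192.168.91.20;do \
ssh-copy-id root@$i; \
done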

Out of laziness... there is only a single master node here.

The following operations only need to be run on one master node that has passwordless SSH access to the other nodes.

If the network is good, the images can simply be pulled automatically; if pulls keep failing, upload the image tarballs and import them into containerd locally. The image-import steps later in this article are optional.

Create directories

Create the path that fits your own environment; it is used to hold the k8s binaries and the image files used in this deployment.

mkdir -p /approot1/k8s/{bin,images,pkg,tmp/{ssl,service}}

Disable the firewall

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl disable firewalld"; \
ssh $i "systemctl stop firewalld"; \
done

Disable SELinux

Temporarily

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "setenforce 0"; \
done

Permanently

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config"; \
done

Disable swap

Temporarily

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "swapoff -a"; \
done

Permanently

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"; \
done

Enable kernel modules

Temporarily

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "modprobe ip_vs"; \
ssh $i "modprobe ip_vs_rr"; \
ssh $i "modprobe ip_vs_wrr"; \
ssh $i "modprobe ip_vs_sh"; \
ssh $i "modprobe nf_conntrack"; \
ssh $i "modprobe nf_conntrack_ipv4"; \
ssh $i "modprobe br_netfilter"; \
ssh $i "modprobe overlay"; \
done

Permanently

vim /approot1/k8s/tmp/service/k8s-modules.conf


ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay

Distribute to all nodes

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/k8s-modules.conf $i:/etc/modules-load.d/; \
done

Enable the systemd service that loads the modules automatically

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl enable systemd-modules-load"; \
ssh $i "systemctl restart systemd-modules-load"; \
ssh $i "systemctl is-active systemd-modules-load"; \
done

A return value of active means the module auto-load service started successfully.
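To double-check that the modules really are loaded on every node, a quick grep over lsmod works (optional):

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "lsmod | egrep 'ip_vs|nf_conntrack|br_netfilter|overlay' | sort"; \
done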

Configure kernel parameters

The following parameters are suitable for 3.x and 4.x kernels.

vim /approot1/k8s/tmp/service/kubernetes.conf

Before editing, it is a good idea to run :set paste in vim first, so that pasted content does not end up differing from the document (extra comments, broken indentation, and so on).

# Enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# Have iptables process bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# Disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# Do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# Upper limit of the socket listen() backlog
net.core.somaxconn=32768
# Maximum number of tracked connections; default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# Avoid swap; only allow it when the system would otherwise OOM
vm.swappiness=0
# Maximum number of memory map areas a process may have
vm.max_map_count=655360
# Maximum number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive settings
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10

Distribute to all nodes

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/kubernetes.conf $i:/etc/sysctl.d/; \
done

Load the kernel parameters

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sysctl -p /etc/sysctl.d/kubernetes.conf"; \
done

Flush iptables rules

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"; \
ssh $i "iptables -P FORWARD ACCEPT"; \
done

Configure the PATH variable

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "echo 'PATH=$PATH:/approot1/k8s/bin' >> $HOME/.bashrc"; \
done
source $HOME/.bashrc

Download binaries

Only one of the nodes needs to do this.

Downloads from GitHub can be slow; alternatively, upload the files into /approot1/k8s/pkg/ from a local copy.

wget -O /approot1/k8s/pkg/kubernetes.tar.gz \
https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz

wget -O /approot1/k8s/pkg/etcd.tar.gz \
https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

Unpack the archives and remove unneeded files

cd /approot1/k8s/pkg/
for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
mv kubernetes/server/bin/ kubernetes/
rm -rf kubernetes/{addons,kubernetes-src.tar.gz,LICENSES,server}
rm -f kubernetes/bin/*_tag kubernetes/bin/*.tar
rm -rf etcd-v3.5.1-linux-amd64/Documentation etcd-v3.5.1-linux-amd64/*.md

Create the CA root certificate

wget -O /approot1/k8s/bin/cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget -O /approot1/k8s/bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
chmod +x /approot1/k8s/bin/*


vim /approot1/k8s/tmp/ssl/ca-config.json


{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}


vim /approot1/k8s/tmp/ssl/ca-csr.json


{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Deploy etcd

Create the etcd certificate

vim /approot1/k8s/tmp/ssl/etcd-csr.json

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

Mind the JSON formatting.

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Manage etcd with systemd

vim /approot1/k8s/tmp/service/kube-etcd.service.192.168.91.19

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

etcd parameters

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/approot1/k8s/data/etcd
ExecStart=/approot1/k8s/bin/etcd \
  --name=etcd-192.168.91.19 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.91.19:2380 \
  --listen-peer-urls=https://192.168.91.19:2380 \
  --listen-client-urls=https://192.168.91.19:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.91.19:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd-192.168.91.19=https://192.168.91.19:2380 \
  --initial-cluster-state=new \
  --data-dir=/approot1/k8s/data/etcd \
  --wal-dir= \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --auto-compaction-mode=periodic \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -m 700 -p /approot1/k8s/data/etcd"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,etcd*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-etcd.service.$i $i:/etc/systemd/system/kube-etcd.service; \
scp /approot1/k8s/pkg/etcd-v3.5.1-linux-amd64/etcd* $i:/approot1/k8s/bin/; \
done

Start the etcd service

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-etcd"; \
ssh $i "systemctl restart kube-etcd --no-block"; \
ssh $i "systemctl is-active kube-etcd"; \
done

A return value of activating means etcd is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-etcd";done again

active means etcd started successfully. With a multi-node etcd cluster it is normal for one member not to return active right away; the cluster can be verified as shown below.

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
        --endpoints=https://${i}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        endpoint health"; \
done

https://192.168.91.19:2379 is healthy: successfully committed proposal: took = 7.135668ms

Output like the above containing successfully means the node is healthy.
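Besides endpoint health, etcdctl can also report endpoint status; a sketch using the same certificates and paths as above:

for i in 192.168.91.19;do \
ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
        --endpoints=https://${i}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        endpoint status --write-out=table"; \
done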

Deploy kube-apiserver

Create the apiserver certificate

vim /approot1/k8s/tmp/ssl/kubernetes-csr.json

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

Mind the JSON formatting.

10.88.0.1 is the Kubernetes service IP; make sure it does not overlap with your existing networks, to avoid conflicts.

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19",
    "10.88.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Create the metrics-server certificate

vim /approot1/k8s/tmp/ssl/metrics-server-csr.json


{
  "CN": "aggregator",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

Manage apiserver with systemd

vim /approot1/k8s/tmp/service/kube-apiserver.service.192.168.91.19

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

The network given to --service-cluster-ip-range must be the same network that 10.88.0.1 in kubernetes-csr.json belongs to.

If etcd has multiple nodes, list all of them in --etcd-servers.

apiserver parameters

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/approot1/k8s/bin/kube-apiserver \
  --allow-privileged=true \
  --anonymous-auth=false \
  --api-audiences=api,istio-ca \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.91.19 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.91.19:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
  --secure-port=6443 \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kubernetes*.pem,metrics-server*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-apiserver.service.$i $i:/etc/systemd/system/kube-apiserver.service; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-apiserver $i:/approot1/k8s/bin/; \
done

Start the apiserver service

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-apiserver"; \
ssh $i "systemctl restart kube-apiserver --no-block"; \
ssh $i "systemctl is-active kube-apiserver"; \
done

A return value of activating means apiserver is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-apiserver";done again

active means apiserver started successfully

curl -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api

A normal response looks like the following, which means the apiserver is running properly.

{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.91.19:6443"
    }
  ]
}

List all Kubernetes kinds (object types)

curl -s -k --cacert /etc/kubernetes/ssl/ca.pem \
--cert /etc/kubernetes/ssl/kubernetes.pem \
--key /etc/kubernetes/ssl/kubernetes-key.pem \
https://192.168.91.19:6443/api/v1/ | grep kind | sort -u


  "kind": "APIResourceList",
      "kind": "Binding",
      "kind": "ComponentStatus",
      "kind": "ConfigMap",
      "kind": "Endpoints",
      "kind": "Event",
      "kind": "Eviction",
      "kind": "LimitRange",
      "kind": "Namespace",
      "kind": "Node",
      "kind": "NodeProxyOptions",
      "kind": "PersistentVolume",
      "kind": "PersistentVolumeClaim",
      "kind": "Pod",
      "kind": "PodAttachOptions",
      "kind": "PodExecOptions",
      "kind": "PodPortForwardOptions",
      "kind": "PodProxyOptions",
      "kind": "PodTemplate",
      "kind": "ReplicationController",
      "kind": "ResourceQuota",
      "kind": "Scale",
      "kind": "Secret",
      "kind": "Service",
      "kind": "ServiceAccount",
      "kind": "ServiceProxyOptions",
      "kind": "TokenRequest",

Configure kubectl

Create the admin certificate
vim /approot1/k8s/tmp/ssl/admin-csr.json


{
  "CN": "admin",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubectl.kubeconfig

Set the client credentials

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig

Set the context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
Distribute the kubeconfig to all master nodes

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p $HOME/.kube"; \
scp /approot1/k8s/pkg/kubernetes/bin/kubectl $i:/approot1/k8s/bin/; \
ssh $i "echo 'source <(kubectl completion bash)' >> $HOME/.bashrc"
scp /approot1/k8s/tmp/ssl/kubectl.kubeconfig $i:$HOME/.kube/config; \
done
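At this point kubectl on the master should be able to reach the apiserver with the new kubeconfig; a quick sanity check (no nodes will be listed yet, since kubelet has not been deployed):

kubectl cluster-info
kubectl get namespaces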

Deploy kube-controller-manager

Create the controller-manager certificate

vim /approot1/k8s/tmp/ssl/kube-controller-manager-csr.json

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

Mind the JSON formatting.

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

Set the client credentials

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

Set the context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

Manage controller-manager with systemd

vim /approot1/k8s/tmp/service/kube-controller-manager.service

Change any 192.168.91.19 occurrences to your own IP; do not blindly copy and paste.

The network given to --service-cluster-ip-range must be the same network that 10.88.0.1 in kubernetes-csr.json belongs to.

--cluster-cidr is the network the Pods run in; it must not overlap with the --service-cluster-ip-range network or with any existing network, to avoid conflicts.

controller-manager parameters

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.88.0.0/16 \
  --use-service-account-credentials=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/kube-controller-manager.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-controller-manager.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-controller-manager $i:/approot1/k8s/bin/; \
done

Start the controller-manager service

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-controller-manager"; \
ssh $i "systemctl restart kube-controller-manager --no-block"; \
ssh $i "systemctl is-active kube-controller-manager"; \
done

A return value of activating means controller-manager is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-controller-manager";done again

active means controller-manager started successfully
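Beyond systemctl is-active, you can also confirm that controller-manager has acquired leader election; a sketch, assuming the lease-based resource lock (the exact lock object can vary with the leader-election settings):

kubectl -n kube-system get leases

# the HOLDER column of the kube-controller-manager lease should name this node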

Deploy kube-scheduler

Create the scheduler certificate

vim /approot1/k8s/tmp/ssl/kube-scheduler-csr.json

Change 192.168.91.19 here to your own IP; do not blindly copy and paste.

Mind the JSON formatting.

{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}


cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-scheduler.kubeconfig

Set the client credentials

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

Set the context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

Manage scheduler with systemd

vim /approot1/k8s/tmp/service/kube-scheduler.service

scheduler parameters

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/approot1/k8s/bin/kube-scheduler \
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --bind-address=0.0.0.0 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kube-scheduler.kubeconfig} $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-scheduler.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-scheduler $i:/approot1/k8s/bin/; \
done

Start the scheduler service

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

for i in 192.168.91.19;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-scheduler"; \
ssh $i "systemctl restart kube-scheduler --no-block"; \
ssh $i "systemctl is-active kube-scheduler"; \
done

A return value of activating means scheduler is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-scheduler";done again

active means scheduler started successfully
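A similar check for the scheduler, plus the componentstatuses view (deprecated, but still served by v1.23), which summarizes scheduler, controller-manager and etcd health in one place:

kubectl -n kube-system get leases kube-scheduler
kubectl get componentstatuses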

Deploy containerd

Download binaries

When downloading containerd from GitHub, pick the file whose name starts with cri-containerd-cni. That archive bundles containerd together with the crictl management tool and the CNI plugins, and the systemd service file, config.toml, crictl.yaml and CNI configs are all pre-configured; a few small tweaks and it is ready to use.

Although cri-containerd-cni also ships runc, that runc is missing dependencies, so download a fresh one from the runc GitHub releases instead.

wget -O /approot1/k8s/pkg/containerd.tar.gz \
https://github.com/containerd/containerd/releases/download/v1.5.9/cri-containerd-cni-1.5.9-linux-amd64.tar.gz
wget -O /approot1/k8s/pkg/runc https://github.com/opencontainers/runc/releases/download/v1.0.3/runc.amd64
mkdir /approot1/k8s/pkg/containerd
cd /approot1/k8s/pkg/
for i in $(ls *containerd*.tar.gz);do tar xvf $i -C /approot1/k8s/pkg/containerd && rm -f $i;done
chmod +x /approot1/k8s/pkg/runc
mv /approot1/k8s/pkg/containerd/usr/local/bin/{containerd,containerd-shim*,crictl,ctr} /approot1/k8s/pkg/containerd/
mv /approot1/k8s/pkg/containerd/opt/cni/bin/{bridge,flannel,host-local,loopback,portmap} /approot1/k8s/pkg/containerd/
rm -rf /approot1/k8s/pkg/containerd/{etc,opt,usr}

Manage containerd with systemd

vim /approot1/k8s/tmp/service/containerd.service

Mind where the binaries are installed.

If the runc binary is not under /usr/bin/, the unit needs an Environment line that adds the runc directory to PATH; otherwise, when k8s starts a Pod you will get exec: "runc": executable file not found in $PATH: unknown.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
Environment="PATH=$PATH:/approot1/k8s/bin"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/approot1/k8s/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

Configure the containerd config file

vim /approot1/k8s/tmp/service/config.toml

root: the container storage path; point it at a path with enough disk space.

bin_dir: the path holding the containerd binaries and the CNI plugins.

sandbox_image: the name and tag of the pause image.

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/approot1/k8s/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

Configure the crictl tool

vim /approot1/k8s/tmp/service/crictl.yaml


runtime-endpoint: unix:///run/containerd/containerd.sock

Configure the CNI network plugin

vim /approot1/k8s/tmp/service/cni-default.conf

The subnet parameter must match the controller-manager's --cluster-cidr.

{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "mynet0",
    "isDefaultGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "172.20.0.0/16"
    }
}

Distribute config files and create the required paths

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /etc/containerd"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/cni/net.d"; \
scp /approot1/k8s/tmp/service/containerd.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/config.toml $i:/etc/containerd/; \
scp /approot1/k8s/tmp/service/cni-default.conf $i:/etc/cni/net.d/; \
scp /approot1/k8s/tmp/service/crictl.yaml $i:/etc/; \
scp /approot1/k8s/pkg/containerd/* $i:/approot1/k8s/bin/; \
scp /approot1/k8s/pkg/runc $i:/approot1/k8s/bin/; \
done

Start the containerd service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable containerd"; \
ssh $i "systemctl restart containerd --no-block"; \
ssh $i "systemctl is-active containerd"; \
done

A return value of activating means containerd is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active containerd";done again

active means containerd started successfully
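Since crictl.yaml pointing at the containerd socket was distributed to /etc/ above, crictl gives a quick sanity check that containerd is answering on its socket (this relies on /approot1/k8s/bin being in PATH, as configured earlier):

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "crictl version"; \
done

# expect RuntimeName: containerd and RuntimeVersion: v1.5.9 in the output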

Import the pause image

Importing images with ctr has one quirk: for an imported image to be usable by k8s, the -n k8s.io flag must be given, and it must be in the form ctr -n k8s.io image import <xxx.tar>; writing ctr image import <xxx.tar> -n k8s.io fails with ctr: flag provided but not defined: -n. A slightly odd behaviour to get used to.

If the image is imported without -n k8s.io, kubelet will try to pull the pause image again when it starts a Pod, and if the configured registry does not have an image with that tag it will fail.

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/pause-v3.6.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/pause-v3.6.tar && rm -f /tmp/pause-v3.6.tar"; \
done

List the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep pause"; \
done

Deploy kubelet

Create the kubelet certificates

vim /approot1/k8s/tmp/ssl/kubelet-csr.json.192.168.91.19

Change 192.168.91.19 here to your own IP; do not blindly copy and paste. Create one JSON file per node, and change the IP inside each file to that worker node's IP; do not duplicate them.

{
    "CN": "system:node:192.168.91.19",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:nodes",
        "OU": "System"
      }
    ]
}


for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubelet-csr.json.$i | cfssljson -bare kubelet.$i; \
done

Create the kubeconfig files

Set the cluster parameters

--server is the apiserver address; change it to your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kubelet.kubeconfig.$i; \
done

Set the client credentials

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:node:$i \
--client-certificate=kubelet.$i.pem \
--client-key=kubelet.$i-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig.$i; \
done

Set the context

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=system:node:$i \
--kubeconfig=kubelet.kubeconfig.$i; \
done

Set the default context

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kubelet.kubeconfig.$i; \
done

Configure the kubelet config file

vim /approot1/k8s/tmp/service/config.yaml

Mind the clusterDNS IP: it must be in the same network as the apiserver's --service-cluster-ip-range but different from the Kubernetes service IP. By convention the Kubernetes service IP is the first address in the range and clusterDNS is the second.

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.88.0.2
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 300Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth
healthzBindAddress: 0.0.0.0
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem

Manage kubelet with systemd

vim /approot1/k8s/tmp/service/kubelet.service.192.168.91.19

Change 192.168.91.19 here to your own IP; do not blindly copy and paste. Create one service file per node, and change the IP inside each file to that worker node's IP; do not duplicate them.

--container-runtime defaults to docker; when using anything other than docker it must be set to remote, and --container-runtime-endpoint must point at the runtime's sock file.

kubelet parameters

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/approot1/k8s/data/kubelet
ExecStart=/approot1/k8s/bin/kubelet \
  --config=/approot1/k8s/data/kubelet/config.yaml \
  --cni-bin-dir=/approot1/k8s/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --hostname-override=192.168.91.19 \
  --image-pull-progress-deadline=5m \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.6 \
  --root-dir=/approot1/k8s/data/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data/kubelet"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/ssl/kubelet.$i.pem $i:/etc/kubernetes/ssl/kubelet.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.$i-key.pem $i:/etc/kubernetes/ssl/kubelet-key.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.kubeconfig.$i $i:/etc/kubernetes/kubelet.kubeconfig; \
scp /approot1/k8s/tmp/service/kubelet.service.$i $i:/etc/systemd/system/kubelet.service; \
scp /approot1/k8s/tmp/service/config.yaml $i:/approot1/k8s/data/kubelet/; \
scp /approot1/k8s/pkg/kubernetes/bin/kubelet $i:/approot1/k8s/bin/; \
done

Start the kubelet service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kubelet"; \
ssh $i "systemctl restart kubelet --no-block"; \
ssh $i "systemctl is-active kubelet"; \
done

A return value of activating means kubelet is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kubelet";done again

active means kubelet started successfully

Check whether the nodes are Ready

kubectl get node

Expect output similar to the following; a STATUS of Ready means the node is healthy.

NAME            STATUS   ROLES    AGE   VERSION
192.168.91.19   Ready    <none>   20m   v1.23.3
192.168.91.20   Ready    <none>   20m   v1.23.3

Deploy kube-proxy

Create the kube-proxy certificate

vim /approot1/k8s/tmp/ssl/kube-proxy-csr.json


{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-proxy",
        "OU": "System"
      }
    ]
}


cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Create the kubeconfig file

Set the cluster parameters

--server is the apiserver address; change it to your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.91.19:6443 \
--kubeconfig=kube-proxy.kubeconfig

Set the client credentials

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

Set the context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
--kubeconfig=kube-proxy.kubeconfig

Configure the kube-proxy config file

vim /approot1/k8s/tmp/service/kube-proxy-config.yaml.192.168.91.19

Change 192.168.91.19 here to your own IP; do not blindly copy and paste. Create one config file per worker node, and change the IP inside each file to that node's IP; do not duplicate them.

clusterCIDR must match the controller-manager's --cluster-cidr.

hostnameOverride must match kubelet's --hostname-override, otherwise you will get node not found errors.

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "172.20.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "192.168.91.19"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"

Manage kube-proxy with systemd

vim /approot1/k8s/tmp/service/kube-proxy.service


[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell in-cluster traffic from external traffic
## once --cluster-cidr or --masquerade-all is set,
## kube-proxy SNATs requests that target Service IPs
WorkingDirectory=/approot1/k8s/data/kube-proxy
ExecStart=/approot1/k8s/bin/kube-proxy \
  --config=/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute certificates and create the required paths

For multiple nodes, simply append the additional IPs after 192.168.91.19, separated by spaces; remember to change 192.168.91.19 to your own IP instead of blindly copying and pasting.

Also make sure the directories match your own layout; if yours differ from mine, adjust them accordingly, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "mkdir -p /approot1/k8s/data//kube-proxy"; \
ssh $i "mkdir -p /approot1/k8s/bin"; \
ssh $i "mkdir -p /etc/kubernetes/ssl"; \
scp /approot1/k8s/tmp/ssl/kube-proxy.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-proxy.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/kube-proxy-config.yaml.$i $i:/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-proxy $i:/approot1/k8s/bin/; \
done

Start the kube-proxy service

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl daemon-reload"; \
ssh $i "systemctl enable kube-proxy"; \
ssh $i "systemctl restart kube-proxy --no-block"; \
ssh $i "systemctl is-active kube-proxy"; \
done

A return value of activating means kube-proxy is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kube-proxy";done again

active means kube-proxy started successfully
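The kube-proxy config above binds a health endpoint on 0.0.0.0:10256 and uses ipvs mode, so a quick functional check is to curl the healthz endpoint on each node (inspecting the ipvs rules with ipvsadm -Ln also works, but ipvsadm may need to be installed separately):

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "curl -s http://127.0.0.1:10256/healthz; echo"; \
done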

Deploy flannel

flannel github

Configure the flannel YAML file

vim /approot1/k8s/tmp/service/flannel.yaml

The Network parameter inside net-conf.json must match the controller-manager's --cluster-cidr.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.20.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Configure the flannel CNI config file

vim /approot1/k8s/tmp/service/10-flannel.conflist


{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

Import the flannel image

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/flannel-v0.15.1.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/flannel-v0.15.1.tar && rm -f /tmp/flannel-v0.15.1.tar"; \
done

List the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep flannel"; \
done

Distribute the flannel CNI config file

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "rm -f /etc/cni/net.d/10-default.conf"; \
scp /approot1/k8s/tmp/service/10-flannel.conflist $i:/etc/cni/net.d/; \
done

After the flannel CNI config has been distributed, the nodes will briefly show NotReady; wait until all nodes are back to Ready before applying the flannel manifest.
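One way to do that is to watch the node status and only apply the manifest once both nodes report Ready again:

kubectl get node -w

# press Ctrl+C once both nodes show STATUS Ready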

Run the flannel component in k8s

kubectl apply -f /approot1/k8s/tmp/service/flannel.yaml

Check that the flannel Pods are running

kubectl get pod -n kube-system | grep flannel

Expect output similar to the following.

flannel runs as a DaemonSet, i.e. Pods that live and die with their nodes: there is one flannel Pod per node, and when a node is deleted its flannel Pod is deleted with it.

kube-flannel-ds-86rrv   1/1     Running       0          8m54s
kube-flannel-ds-bkgzx   1/1     Running       0          8m53s

On SUSE 12 the Pod may end up in Init:CreateContainerError. Run kubectl describe pod -n kube-system <flannel_pod_name> to see the reason; if it is Error: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH, locate the binary with which apparmor_parser, symlink it into the directory that holds the kubelet binary, and restart the Pod (a sketch follows). Note that this symlink is needed on every node running flannel.
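A sketch of that workaround on an affected node, assuming apparmor_parser turns out to live in /sbin (use whatever path which reports) and that kubelet lives in /approot1/k8s/bin as in this deployment:

which apparmor_parser
# e.g. /sbin/apparmor_parser
ln -s /sbin/apparmor_parser /approot1/k8s/bin/apparmor_parser
# delete the affected pod; the DaemonSet controller recreates it automatically
kubectl -n kube-system delete pod <flannel_pod_name>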

Deploy coredns

Configure the coredns YAML file

vim /approot1/k8s/tmp/service/coredns.yaml

The clusterIP must match the clusterDNS parameter in the kubelet config file.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: docker.io/coredns/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.88.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Import the coredns image

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/coredns-v1.8.6.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/coredns-v1.8.6.tar && rm -f /tmp/coredns-v1.8.6.tar"; \
done

Verify the image

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep coredns"; \
done

Run the coredns component in k8s

kubectl apply -f /approot1/k8s/tmp/service/coredns.yaml

Check whether the coredns pod is running

kubectl get pod -n kube-system | grep coredns

The expected output is similar to the following

Because the replicas parameter in the coredns yaml file is 1, there is only one pod here; if you change it to 2, two pods will appear

coredns-5fd74ff788-cddqf   1/1     Running       0          10s
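
Optionally, you can run a quick in-cluster DNS sanity check with a throwaway pod. This step is not part of the original procedure, and it assumes the node can pull the busybox image (or that you imported it the same way as the other images):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default

If coredns is healthy, the output should contain the cluster IP of the kubernetes service.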

Deploy the metrics-server component

Configure the metrics-server yaml file

vim /approot1/k8s/tmp/service/metrics-server.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Import the metrics-server image

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/metrics-server-v0.5.2.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/metrics-server-v0.5.2.tar && rm -f /tmp/metrics-server-v0.5.2.tar"; \
done

Verify the image

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep metrics-server"; \
done

Run the metrics-server component in k8s

kubectl apply -f /approot1/k8s/tmp/service/metrics-server.yaml

Check whether the metrics-server pod is running

kubectl get pod -n kube-system | grep metrics-server

The expected output is similar to the following

metrics-server-6c95598969-qnc76   1/1     Running       0          71s
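
As an extra check that is not in the original steps, you can confirm that the metrics API has been registered and reports as available before running kubectl top:

kubectl get apiservice v1beta1.metrics.k8s.io

The AVAILABLE column should eventually show True; until then, kubectl top will return errors.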

Verify metrics-server functionality

Check node resource usage

kubectl top node

The expected output is similar to the following

metrics-server can be slow to start, depending on the machine's specs; if the output says is not yet or is not ready, wait a moment and run kubectl top node again

NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.91.19   285m         4%     2513Mi          32%
192.168.91.20   71m          3%     792Mi           21%

Check pod resource usage in a specific namespace

kubectl top pod -n kube-system

The expected output is similar to the following

NAME                              CPU(cores)   MEMORY(bytes)
coredns-5fd74ff788-cddqf          11m          18Mi
kube-flannel-ds-86rrv             4m           18Mi
kube-flannel-ds-bkgzx             6m           22Mi
kube-flannel-ds-v25xc             6m           22Mi
metrics-server-6c95598969-qnc76   6m           22Mi
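
If you want a quick overview across all namespaces, kubectl top also supports sorting; this is just an optional convenience, not part of the original procedure:

kubectl top pod -A --sort-by=memory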

Deploy the dashboard component

Configure the dashboard yaml file

vim /approot1/k8s/tmp/service/dashboard.yaml


---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-read-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-read-clusterrole
subjects:
- kind: ServiceAccount
  name: dashboard-read-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/status
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - services/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - statefulsets
  - statefulsets/scale
  - statefulsets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  - horizontalpodautoscalers/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - cronjobs/status
  - jobs
  - jobs/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - ingresses
  - ingresses/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  - poddisruptionbudgets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingresses/status
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
            - --token-ttl=1800
            - --sidecar-host=http://dashboard-metrics-scraper:8000
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Import the dashboard images

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/dashboard-*.tar $i:/tmp/; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-v2.4.0.tar && rm -f /tmp/dashboard-v2.4.0.tar"; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-metrics-scraper-v1.0.7.tar && rm -f /tmp/dashboard-metrics-scraper-v1.0.7.tar"; \
done

Verify the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | egrep 'dashboard|metrics-scraper'"; \
done

Run the dashboard component in k8s

kubectl apply -f /approot1/k8s/tmp/service/dashboard.yaml

Check whether the dashboard pods are running

kubectl get pod -n kube-system | grep dashboard

The expected output is similar to the following

dashboard-metrics-scraper-799d786dbf-v28pm   1/1     Running       0          2m55s
kubernetes-dashboard-9f8c8b989-rhb7z         1/1     Running       0          2m55s

Check the dashboard access port

The Service does not specify a fixed access port for the dashboard, so you need to look it up yourself; alternatively, you can modify the yaml file to pin the port (see the sketch after the example URL below)
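
A command like the following lists the Service and its randomly assigned NodePort (the grep filter is just for convenience):

kubectl get svc -n kube-system | grep kubernetes-dashboard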

The expected output is similar to the following

In my setup, port 30210 is mapped to the pod's port 443

kubernetes-dashboard        NodePort    10.88.127.68    <none>        443:30210/TCP            5m30s

Access the dashboard page with the port you obtained, for example: https://192.168.91.19:30210
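
If you would rather pin the port instead of letting k8s pick one, a minimal sketch of the change to the kubernetes-dashboard Service in dashboard.yaml looks like this (30210 is only an example; NodePort values must fall within the default 30000-32767 range), after which you re-apply the yaml:

spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30210
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort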

Get the dashboard login token

Get the token secret name

kubectl get secrets -n kube-system | grep admin

The expected output is similar to the following

admin-user-token-zvrst                           kubernetes.io/service-account-token   3      9m2s

Get the token content

kubectl get secrets -n kube-system admin-user-token-zvrst -o jsonpath={.data.token}|base64 -d

The expected output is similar to the following

eyJhbGciOiJSUzI1NiIsImtpZCI6InA4M1lhZVgwNkJtekhUd3Vqdm9vTE1ma1JYQ1ZuZ3c3ZE1WZmJhUXR4bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2cnN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYTE3NTg1ZC1hM2JiLTQ0YWYtOWNhZS0yNjQ5YzA0YThmZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.K2o9p5St9tvIbXk7mCQCwsZQV11zICwN-JXhRv1hAnc9KFcAcDOiO4NxIeicvC2H9tHQBIJsREowVwY3yGWHj_MQa57EdBNWMrN1hJ5u-XzpzJ6JbQxns8ZBrCpIR8Fxt468rpTyMyqsO2UBo-oXQ0_ZXKss6X6jjxtGLCQFkz1ZfFTQW3n49L4ENzW40sSj4dnaX-PsmosVOpsKRHa8TPndusAT-58aujcqt31Z77C4M13X_vAdjyDLK9r5ZXwV2ryOdONwJye_VtXXrExBt9FWYtLGCQjKn41pwXqEfidT8cY6xbA7XgUVTr9miAmZ-jf1UeEw-nm8FOw9Bb5v6A
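
If you prefer not to look up the secret name by hand, the two steps above can be combined into a single command; this is just a convenience sketch that assumes the admin-user ServiceAccount created by dashboard.yaml:

kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d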

This concludes the binary deployment of k8s v1.23.3 based on containerd
