kubeadm
kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for standing up a Kubernetes cluster quickly.
Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.
Summary: kubeadm lowers the barrier to entry, but it hides many details, which makes problems hard to troubleshoot. If you want something easier to control, deploying from binary packages is recommended: manual deployment is more work, but you learn a lot about how the components fit together along the way, and it also helps with later maintenance.
Server requirements:

Software   | Version
Linux      | CentOS 7.9.2009
Kubernetes | 1.20.7
Docker     | 20.10.7
Etcd       | 3.4.9
Calico     | 3.19.1
Role                              | IP                                   | Components                                                                             | Hostname
Master01 + Load Balancer (Master) | 172.21.161.110, 172.21.161.120 (VIP) | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, Nginx L4 | master01
Master02 + Load Balancer (Backup) | 172.21.161.111                       | kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, Nginx L4 | master02
etcd01                            | 172.21.161.112                       | etcd                                                                                   | etcd01
etcd02                            | 172.21.161.113                       | etcd                                                                                   | etcd02
etcd03                            | 172.21.161.114                       | etcd                                                                                   | etcd03
Node01                            | 172.21.161.115                       | kubelet, kube-proxy, docker, calico                                                    | node01
Node02                            | 172.21.161.116                       | kubelet, kube-proxy, docker, calico                                                    | node02
public                            | 172.21.161.149                       | jump host                                                                              | public
To keep the walkthrough clear, each role is assigned to its own machine wherever possible, and every machine runs only its own role. For example, certificates are generated on the public jump host and then pushed to the servers that need them.
Single-Master architecture diagram (figure omitted)
Single-Master server plan:

Role     | IP             | Components                                              | Hostname
Master01 | 172.21.161.110 | kube-apiserver, kube-controller-manager, kube-scheduler | master01
etcd01   | 172.21.161.112 | etcd                                                    | etcd01
etcd02   | 172.21.161.113 | etcd                                                    | etcd02
etcd03   | 172.21.161.114 | etcd                                                    | etcd03
Node01   | 172.21.161.115 | kubelet, kube-proxy, docker                             | node01
Node02   | 172.21.161.116 | kubelet, kube-proxy, docker                             | node02
public   | 172.21.161.149 | jump host                                               | public
# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set the hostname (placeholder: use master01, node01, etc. on each machine)
hostnamectl set-hostname <hostname>
# Add hostname mappings on every machine
cat >> /etc/hosts << EOF
172.21.161.110 master01
172.21.161.111 master02
172.21.161.115 node01
172.21.161.116 node02
172.21.161.112 etcd01
172.21.161.113 etcd02
172.21.161.114 etcd03
172.21.161.149 public
EOF
# Pass bridged traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
# Sync time
yum install ntpdate chrony -y
ntpdate time.windows.com
Etcd is a distributed key-value store that Kubernetes uses to persist all of its data, so an Etcd database must be prepared first. To avoid a single point of failure, deploy it as a cluster: the three nodes used here tolerate one machine failure, and a five-node cluster would tolerate two.
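Those fault-tolerance numbers follow from etcd's quorum requirement: a cluster of n members needs floor(n/2) + 1 members alive to commit writes. With n = 3 the quorum is 2, so one failure is tolerated; with n = 5 the quorum is 3, so two failures are tolerated.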
cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.
Pick one server to work on; the public node is used here. All certificate generation and distribution is done on public.
# Run on the public node
mkdir -p /k8s-deploy/cfssl/
cd /k8s-deploy/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
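A quick, optional sanity check that the tools landed on the PATH:
cfssl version    # prints the cfssl version and runtime info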
Why are certificates needed?
All K8s components communicate over HTTPS, and the certificates generally come from two root CAs: one for the K8s components (apiserver etc.) and one for Etcd.
Grouped by who needs what: controller-manager, scheduler, kubelet, kube-proxy, kubectl and the rest all need to reach the apiserver, which takes one set of certificates; the apiserver reaching the etcd cluster takes another, separate set. So the two sets of certificates are issued by two different self-signed CAs.
All certificate signing happens on the public machine.
Create the working directories:
# Run on the public node
mkdir -p /k8s-deploy/cfssl/TLS/{etcd,k8s}
cd /k8s-deploy/cfssl/TLS/etcd
Self-signed CA:
# Run on the public node
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai"
    }
  ]
}
EOF
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This produces the ca.pem and ca-key.pem files.
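To inspect the CA that was just issued, cfssl's bundled inspector can dump it (optional check):
cfssl-certinfo -cert ca.pem    # shows subject, issuer and validity window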
Create the certificate signing request file:
# Run on the public node
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.21.161.112",
    "172.21.161.113",
    "172.21.161.114"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai"
    }
  ]
}
EOF
Note: the IPs in the hosts field above are the internal cluster IPs of all etcd nodes; not one may be missing! To make later scaling easier, you can also list a few spare IPs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
This produces the server.pem and server-key.pem files.
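Before shipping the certificate out, it is worth confirming that all three etcd IPs really made it into the SAN list (assumes openssl is installed):
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'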
Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
The following is done on etcd node 1; to keep things simple, all files generated on node 1 will be copied to nodes 2 and 3 later.
# Run on the etcd01 node
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
# Run on the etcd01 node
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.161.112:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.161.112:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.161.112:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.161.112:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.21.161.112:2380,etcd-2=https://172.21.161.113:2380,etcd-3=https://172.21.161.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# Run on the etcd01 node
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
On public, copy the certificates just generated to the path referenced in the etcd01 config file (this is the only step done on public; everything else is on etcd01):
[root@public etcd]# scp -r /k8s-deploy/cfssl/TLS/etcd/ca*pem /k8s-deploy/cfssl/TLS/etcd/server*pem root@etcd01:/opt/etcd/ssl
# Run on the etcd01 node
systemctl daemon-reload
# The start command will hang: the other two nodes are not configured yet, so etcd keeps waiting for them
systemctl start etcd
systemctl enable etcd
# Run on the etcd01 node
scp -r /opt/etcd/ root@etcd02:/opt/
scp /usr/lib/systemd/system/etcd.service root@etcd02:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@etcd03:/opt/
scp /usr/lib/systemd/system/etcd.service root@etcd03:/usr/lib/systemd/system/
Then, on nodes 2 and 3, change the node name and the current server's IP in etcd.conf (5 changes in total):
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.21.161.112:2380" # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://172.21.161.112:2379" # change to the current server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.161.112:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.161.112:2379" # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://172.21.161.112:2380,etcd-2=https://172.21.161.113:2380,etcd-3=https://172.21.161.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Finally, start etcd on nodes 2 and 3 and enable it at boot, same as above.
[root@etcd01 bin]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.21.161.112:2379,https://172.21.161.113:2379,https://172.21.161.114:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+-----------------------------+--------+-------------+-------+
| https://172.21.161.112:2379 | true | 50.191672ms | |
| https://172.21.161.113:2379 | true | 52.394036ms | |
| https://172.21.161.114:2379 | true | 46.009422ms | |
+-----------------------------+--------+-------------+-------+
If anything goes wrong, check the logs: /var/log/messages or journalctl -u etcd -f
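Besides the health check, membership and leader status can be listed with the same TLS flags, for example:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.21.161.112:2379" member list --write-out=table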
Docker is used as the container engine here; it can be swapped for another, e.g. containerd.
Download: https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz
The following runs on all nodes. The binary tarball is installed here; installing with yum works just as well (this environment was in fact already installed once via yum, so the tarball steps are shown purely as a demonstration).
tar zxvf docker-20.10.7.tgz
mv docker/* /usr/bin
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl start docker
systemctl enable docker
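If the daemon came up cleanly, the mirror configured above should be visible in the runtime info:
docker info | grep -A1 'Registry Mirrors'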
Certificate work happens on the public machine again; this is a second self-signed CA, independent of the etcd one.
# Switch working directory (public)
cd /k8s-deploy/cfssl/TLS/k8s/
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This produces the ca.pem and ca-key.pem files.
Create the certificate signing request file:
# Run on the public node
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "172.21.161.110",
    "172.21.161.111",
    "172.21.161.120",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Note: the IPs in the hosts field above must cover all Master/LB/VIP IPs; not one may be missing! To make later scaling easier, you can also list a few spare IPs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
This produces the server.pem and server-key.pem files.
Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
Note: the page offers many packages; downloading the server package alone is enough, since it contains the binaries for both the Master and the Worker Node.
# Run on the master01 node
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
# Run on the master01 node
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://172.21.161.112:2379,https://172.21.161.113:2379,https://172.21.161.114:2379 \\
--bind-address=172.21.161.110 \\
--secure-port=6443 \\
--advertise-address=172.21.161.110 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Note on the doubled backslashes above: the first is the escape character and the second the line-continuation character; the escape is needed so that the heredoc preserves the continuation character in the output file.
Copy the certificates generated earlier to the paths referenced in the config file:
[root@public k8s]#scp -r /k8s-deploy/cfssl/TLS/k8s/ca*pem /k8s-deploy/cfssl/TLS/k8s/server*pem root@master01:/opt/kubernetes/ssl
Note: because etcd and the master are deployed on different machines, the etcd certificates have to be copied over as well.
# First create the directory on master01
[root@master01 bin]# mkdir /opt/etcd/ssl -p
[root@public k8s]# scp -r /k8s-deploy/cfssl/TLS/etcd/ca*pem /k8s-deploy/cfssl/TLS/etcd/server*pem root@master01:/opt/etcd/ssl
TLS Bootstrapping: once TLS authentication is enabled on the Master apiserver, the kubelet and kube-proxy on each Node must use valid CA-signed client certificates to talk to kube-apiserver. With many Nodes, issuing these client certificates is a lot of work and also complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; it is currently used mainly for the kubelet, while kube-proxy still gets a single certificate that we issue centrally.
TLS bootstrapping workflow (diagram omitted):
Create the token file referenced in the config above:
# Run on the master01 node
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
Format: token, user name, UID, user group
You can also generate a token yourself and substitute it:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
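For example, a hypothetical convenience one-liner that regenerates the token and rewrites token.csv in one step; do this before distributing any bootstrap kubeconfig, since those embed the token:
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv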
# Run on the master01 node
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Run on the master01 node
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
Startup logs an error here; it does not affect operation or the steps that follow and can be ignored. See this issue for details and a workaround:
https://github.com/kubernetes/kubernetes/issues/76956
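To confirm the apiserver is actually serving despite the logged error, check the secure port and, if needed, the journal:
ss -lntp | grep 6443 # kube-apiserver should be listening here
journalctl -u kube-apiserver -f # follow the logs if it is not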
# Run on the master01 node
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
Generate the kube-controller-manager certificate (on public):
# Switch working directory (public; certificate work generally moves back to the public machine)
cd /k8s-deploy/cfssl/TLS/k8s/
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig file (the following are shell commands, run them directly in the terminal):
The certificates were generated on public, so copy them to the matching directory on master01:
[root@public k8s]# scp -r /k8s-deploy/cfssl/TLS/k8s/kube-controller-manager*pem root@master01:/opt/kubernetes/ssl
# Run on the master01 node
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://172.21.161.110:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=/opt/kubernetes/ssl/kube-controller-manager.pem \
--client-key=/opt/kubernetes/ssl/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# The resulting kubeconfig file is not shown here
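If you do want to look at it, kubectl can render the file with the embedded certificates elided:
kubectl config view --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig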
# Run on the master01 node
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Run on the master01 node
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
(Any errors here do not affect the steps that follow; if you have time to spare, you can try to debug them.)
# Run on the master01 node
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
Generate the kube-scheduler certificate:
# Switch working directory (public; certificate work generally moves back to the public machine)
cd /k8s-deploy/cfssl/TLS/k8s/
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig file (the following are shell commands, run them directly in the terminal):
The certificates were generated on public, so copy them to the matching directory on master01:
[root@public k8s]# scp -r /k8s-deploy/cfssl/TLS/k8s/kube-scheduler*pem root@master01:/opt/kubernetes/ssl
# Run on the master01 node
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://172.21.161.110:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=/opt/kubernetes/ssl/kube-scheduler.pem \
--client-key=/opt/kubernetes/ssl/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# Run on the master01 node
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Run on the master01 node
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
If you run into the problem shown below (screenshot omitted), redo the setup from the beginning.
Generate the certificate kubectl uses to connect to the cluster:
# Run on whichever machine should access the k8s cluster; public is used here
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Generate the kubeconfig file:
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://172.21.161.110:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s-deploy/cfssl/TLS/k8s/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
--client-certificate=/k8s-deploy/cfssl/TLS/k8s/admin.pem \
--client-key=/k8s-deploy/cfssl/TLS/k8s/admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Check the current cluster component status with the kubectl tool:
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
Output like the above means the Master components are running normally.
You can also give master01 access at the same time: just copy the config file from public to the master01 node.
[root@master01 ssl]# mkdir /root/.kube
[root@public ~]# scp -r /root/.kube/ root@master01:/root
# Required before adding nodes, otherwise the node kubelet cannot start; this creates a user that is allowed to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Create the working directories on all worker nodes:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Copy from the master to the node:
cd /tools/kubernetes/server/bin/
scp -r kubelet kube-proxy root@node01:/opt/kubernetes/bin/
scp /opt/kubernetes/ssl/ca.pem root@node01:/opt/kubernetes/ssl/
scp /usr/bin/kubectl node01:/usr/bin
# Run on the node01 node
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node01 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizexiong/pause-amd64:3.0"
EOF
--hostname-override: the display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the configuration parameters file
--cert-dir: the directory where kubelet certificates are generated
--pod-infra-container-image: the image for the container that manages the Pod network
If the hostname is changed after the master has approved the node (for whatever reason), the node reports the error below (screenshot omitted).
# Run on the node01 node
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
# the source was cut off here; the values below follow the template commonly used with this binary setup (the 10.0.0.2 DNS IP matches the DNS test later in this guide)
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
# Run on the node01 node
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://172.21.161.110:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# Run on the node01 node
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Run on the node01 node
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
# View kubelet certificate requests
[root@public k8s]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-84G21oPC3hDbyMwZN62ExQDI4D2Xa8IO74zHtlWRhD8 60s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
[root@public k8s]# kubectl certificate approve node-csr-84G21oPC3hDbyMwZN62ExQDI4D2Xa8IO74zHtlWRhD8
certificatesigningrequest.certificates.k8s.io/node-csr-84G21oPC3hDbyMwZN62ExQDI4D2Xa8IO74zHtlWRhD8 approved
[root@public k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
node01 NotReady
# At this point the node status will definitely be NotReady; it will be fine once Calico is installed
Note: an error you may run into
[root@public k8s]# kubectl certificate approve node-csr-84G21oPC3hDbyMwZN62ExQDI4D2Xa8IO74zHtlWRhD8
No resources found
error: no kind "CertificateSigningRequest" is registered for version "certificates.k8s.io/v1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
# Caused by a mismatched client version
[root@public k8s]# kubectl version --short
Client Version: v1.12.7
Server Version: v1.20.7
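A straightforward fix (assuming the v1.20.7 server tarball unpacked on master01 earlier is still around) is to overwrite the stale client binary:
# Run on master01; adjust the path to wherever the tarball was unpacked
scp /tools/kubernetes/server/bin/kubectl root@public:/usr/bin/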
Note: since the network plugin has not been deployed yet, the node will not be ready: NotReady
# Run on the node01 node
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
# Run on the node01 node
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node01
clusterCIDR: 10.0.0.0/24
EOF
# Switch working directory (public)
cd /k8s-deploy/cfssl/TLS/k8s/
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "ShangHai",
      "ST": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Copy the certificates from public to the node
[root@public k8s]# scp /k8s-deploy/cfssl/TLS/k8s/kube-proxy*pem root@node01:/opt/kubernetes/ssl
# Run on the node01 node
# Generate the kubeconfig file:
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://172.21.161.110:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# Run on the node01 node
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Run on the node01 node
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
Startup will log errors; they only clear up once the Calico plugin is installed.
Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes.
Download: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
Deploy Calico:
# Run wherever the yaml file is and the cluster is reachable
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
Once the Calico Pods are all Running, the nodes become ready as well (the images are hosted abroad, so pulling may be slow):
kubectl get node
NAME STATUS ROLES AGE VERSION
node01 Ready
Note: there is a pitfall to watch out for here.
When preparing the environment, never delete (or accidentally remove) the default localhost records in /etc/hosts on the node machines; otherwise you get the error below (screenshot omitted): the pod runs, but its health checks never pass.
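For reference, these are the stock CentOS entries that must remain in /etc/hosts:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6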
Use case: for example kubectl logs, which needs the apiserver to call back into the kubelet.
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
# the rules and subjects bodies were truncated in the source; restored below following the standard system:kube-apiserver-to-kubelet manifest
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
[root@node01 cfg]# scp -r /opt/kubernetes/ root@node02:/opt # everything is included: binaries and certificates; ca.pem is the main thing needed
[root@node01 cfg]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@node02:/usr/lib/systemd/system
# Run on the node02 node
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig # regenerated automatically after the request is approved
rm -f /opt/kubernetes/ssl/kubelet-client*
Note: these files are generated automatically once the certificate request is approved and differ on every Node, so they must be deleted.
# Run on the node02 node
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node01 # change to the real hostname (node02 here)
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node01 # change to the real hostname (node02 here)
# Run on the node02 node
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
# View certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro 89s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro
kubectl get node
NAME STATUS ROLES AGE VERSION
node01 Ready
node02 Ready
# Run wherever the yaml file is and the cluster is reachable
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods,svc -n kubernetes-dashboard
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token printed by the command above.
CoreDNS provides name resolution for Services inside the cluster.
# Run wherever the yaml file is and the cluster is reachable
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5ffbfd976d-j6shb 1/1 Running 0 32s
DNS resolution test:
# Before creating the pod, the apiserver-to-kubelet access enabled earlier must be in place, otherwise you cannot attach to the container
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Resolution works.
With that, a single-Master cluster is complete, and it is plenty for learning and experiments. Next, let's scale out to a multi-Master cluster!
As a container cluster system, Kubernetes provides self-healing for Pods through health checks plus restart policies, distributes Pods across Nodes through its scheduling algorithm while keeping the desired replica count, and restarts Pods on other Nodes when a Node fails, achieving high availability at the application layer.
For the cluster itself, high availability involves two more layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available as a 3-node cluster, so this section explains and implements high availability for the Master nodes.
The Master acts as the control center, maintaining the healthy working state of the whole cluster by continuously talking to the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible through kubectl or the API.
The Master runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. The latter two already achieve high availability through leader election, so Master HA is mainly about kube-apiserver. Since that component serves an HTTP API, making it highly available is like any web server: put a load balancer in front of it, and it can also be scaled horizontally.
Multi-Master architecture diagram (figure omitted):
Now add one new server as the Master2 Node, with IP 172.21.161.111.
To save resources you could also reuse the existing Worker Node1 as the Master2 Node role (i.e. deploy the Master components onto it); that is not done here.
Master02's setup is identical to the already-deployed Master01, so we only need to copy all the K8s files from Master01 over, then adjust the server IP and hostname and start the services.
Install docker, change the hostname, and add the hostname to the /etc/hosts mapping on all machines.
Create the etcd certificate directory on Master02:
# Run on the master02 node
mkdir -p /opt/etcd/ssl
Copy all the K8s files and the etcd certificates from Master1 to Master2:
scp -r /opt/kubernetes root@master02:/opt
scp -r /opt/etcd/ssl root@master02:/opt/etcd
scp /usr/lib/systemd/system/kube* root@master02:/usr/lib/systemd/system
scp /usr/bin/kubectl root@master02:/usr/bin
scp -r /root/.kube root@master02:/root
Delete the kubelet certificate and kubeconfig files:
# Run on the master02 node
# Skip this step if the master has no node components deployed
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
Change the apiserver, kubelet and kube-proxy config files to the local IP (7 changes):
# Run on the master02 node
vi /opt/kubernetes/cfg/kube-apiserver.conf
…
--bind-address=172.21.161.111 \
--advertise-address=172.21.161.111 \
…
vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
server: https://172.21.161.111:6443
vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
server: https://172.21.161.111:6443
vi /root/.kube/config
server: https://172.21.161.111:6443
# Ignore the 2 changes below for now, since the master has no node components deployed yet
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=master02
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: master02
# Run on the master02 node
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
# Run on the master02 node
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
kube-apiserver high-availability architecture diagram (figure omitted):
Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4.
Keepalived is a mainstream high-availability tool that implements active/standby failover for servers through VIP binding. In the topology above, Keepalived decides whether to fail over (float the VIP) based on Nginx's running state: if the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.
Note 1: to save machines, the load balancers share the K8s Master machines here. They can also be deployed outside the k8s cluster, as long as nginx can talk to the apiservers.
Note 2: on public clouds keepalived is generally not supported; use the cloud's load balancer product instead to balance the Master kube-apiservers directly, with the same architecture as above.
Run the following on both Master nodes.
yum install epel-release -y
yum install nginx keepalived -y
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log\_format main '$remote\_addr $upstream\_addr - \[$time\_local\] $status $upstream\_bytes\_sent';
access\_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 172.21.161.110:6443; # Master1 APISERVER IP:PORT
server 172.21.161.111:6443; # Master2 APISERVER IP:PORT
}
server {
listen 16443; # 由于nginx与master节点复用,这个监听端口不能是6443,否则会冲突
proxy\_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access\_log /var/log/nginx/access.log main;
sendfile on;
tcp\_nopush on;
tcp\_nodelay on;
keepalive\_timeout 65;
types\_hash\_max\_size 2048;
include /etc/nginx/mime.types;
default\_type application/octet-stream;
server {
listen 80 default\_server;
server\_name \_;
location / {
}
}
}
EOF
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192 # change to the actual NIC name
    virtual_router_id 51 # VRRP routing ID instance; unique per instance
    priority 100 # priority; set to 90 on the backup server
    advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        172.21.161.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
vrrp_script: the script that checks nginx's working state (failover is decided from nginx's status)
virtual_ipaddress: the virtual IP (VIP)
Prepare the nginx status-check script referenced in the config above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 = working, non-zero = not working).
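You can dry-run the check before trusting keepalived with it; with nginx listening on 16443 it should print 0:
bash /etc/keepalived/check_nginx.sh; echo $?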
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51 # VRRP routing ID instance; unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.21.161.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
Prepare the nginx status-check script referenced in the config above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 = working, non-zero = not working).
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
[root@master01 .kube]# ip addr
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192:
link/ether 00:50:56:ba:aa:a6 brd ff:ff:ff:ff:ff:ff
inet 172.21.161.110/24 brd 172.21.161.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet 172.21.161.120/24 scope global secondary ens192
valid_lft forever preferred_lft forever
inet6 fe80::da18:1a5c:9b1c:9a6f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0:
link/ether 02:42:1e:19:9d:59 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
You can see that the VIP 172.21.161.120 is bound on the ens192 NIC, so it is working correctly.
Shut down Nginx on the master node and test whether the VIP floats to the backup server.
On the Nginx Master, run: pkill nginx
On the Nginx Backup, the ip addr command shows the VIP is now bound there.
From any node in the K8s cluster, curl the K8s version endpoint through the VIP to test:
[root@public ~]# curl -k https://172.21.161.120:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.7",
"gitCommit": "132a687512d7fb058d0f5890f07d4121b3f0a2e2",
"gitTreeState": "clean",
"buildDate": "2021-05-12T12:32:49Z",
"goVersion": "go1.15.12",
"compiler": "gc",
"platform": "linux/amd64"
}
The K8s version info comes back correctly, so the load balancer is working. The request path for this data: curl -> VIP (nginx) -> apiserver
The nginx log also shows the apiserver IPs being forwarded to:
[root@master01 .kube]# tail -f /var/log/nginx/k8s-access.log
172.21.161.110 172.21.161.110:6443 - [13/Jun/2021:11:56:31 +0800] 200 2138
172.21.161.110 172.21.161.111:6443 - [13/Jun/2021:12:01:37 +0800] 200 1711
172.21.161.115 172.21.161.110:6443 - [13/Jun/2021:19:04:08 +0800] 200 1172
172.21.161.115 172.21.161.111:6443 - [13/Jun/2021:19:05:39 +0800] 200 3596
172.21.161.116 172.21.161.110:6443 - [13/Jun/2021:19:06:22 +0800] 200 1173
172.21.161.116 172.21.161.110:6443 - [13/Jun/2021:19:06:28 +0800] 200 1174
172.21.161.116 172.21.161.111:6443 - [13/Jun/2021:19:07:59 +0800] 200 3020
We are not done yet; the most critical step is still ahead.
Think about it: although Master2 Node and a load balancer were added, we scaled out from a single-Master architecture, which means all Worker Node components still connect to Master01. If they are not switched to the VIP behind the load balancer, the Master is still a single point of failure.
So the next step is to change the component config files on all Worker Nodes (the nodes listed by kubectl get node) from the original 172.21.161.110 to 172.21.161.120 (the VIP).
Run on all Worker Nodes:
sed -i 's#172.21.161.110:6443#172.21.161.120:16443#' /opt/kubernetes/cfg/*
Strictly speaking, the command above is only safe on pure node machines: if master01 also runs node components, the same IP appears in other files too, so for precision the exact config files to change are listed below.
[root@node01 cfg]# grep 172.21.161.120 /opt/kubernetes/cfg/*
/opt/kubernetes/cfg/bootstrap.kubeconfig: server: https://172.21.161.120:16443
/opt/kubernetes/cfg/kubelet.kubeconfig: server: https://172.21.161.120:16443
/opt/kubernetes/cfg/kube-proxy.kubeconfig: server: https://172.21.161.120:16443
/root/.kube/config needs the same change, of course.
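For example, the same substitution applied to kubectl's config on the machines it was copied to:
sed -i 's#172.21.161.110:6443#172.21.161.120:16443#' /root/.kube/config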
Finally, restart the services:
systemctl restart kubelet kube-proxy
In practice a master usually also runs kubelet and a few other node processes to host some system-level pods, but earlier each machine was given a single role to keep the deployment steps fine-grained. Now we add the worker components to the Master; it is much like adding a new worker node.
On a node, copy the Worker Node files over to master01:
# Run on the node01 node
scp -r /opt/kubernetes/ root@master01:/opt
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@master01:/usr/lib/systemd/system
# Run on master01: delete the node-specific files that came along with the copy
rm -rf /opt/kubernetes/cfg/kubelet.kubeconfig
rm -rf /opt/kubernetes/ssl/kubelet-client-*
Note: these files are generated automatically once the certificate request is approved and differ on every Node, so they must be deleted.
# Still on master01, change the hostnames
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=master01
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: master01
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
# View certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro 89s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro
[root@public ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready
master02 Ready
node01 Ready
node02 Ready
Additional masters work the same way.