Kubernetes Cluster Installation
Original article date: June 22, 2021

Hostname    IP Address
master      192.168.0.100
node1       192.168.0.101
node2       192.168.0.102

1.1: Operating system: CentOS 8.4.2105

[root@kubernetes-master /]# cat /etc/redhat-release
CentOS Linux release 8.4.2105

1.2: Kubernetes version: v1.19.1

    kube-apiserver            v1.19.1
    kube-controller-manager   v1.19.1
    kube-proxy                v1.19.1
    kube-scheduler            v1.19.1
    etcd                      3.3.10
    pause                     3.1
    coredns                   1.3.1

2.1: Linux installation notes

Install the VMware virtualization software and download the CentOS image. One important point during installation: so that other hosts on the LAN can reach our virtual machines, choose bridged networking mode, which lets the VM behave like a real physical machine. After installation, configure a static IP as follows.

Open the file /etc/sysconfig/network-scripts/ifcfg-ens33 and change it to the following configuration:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=7ab3916f-b1cf-4d45-a39f-f8a7d2447349
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.0.100
PREFIX=24
GATEWAY=192.168.0.1
DNS1=114.114.114.114
IPV6_PRIVACY=no
NETMASK=255.255.255.0

2.2: Preparation

Update the packages:

yum -y update

Because Kubernetes is very sensitive to clock skew, the time on every machine in the cluster must match exactly. Synchronize it with:

systemctl start chronyd
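
To keep the clocks synchronized across reboots as well, the service can also be enabled at boot and its sync state inspected — a small sketch:

```shell
# start the time service at boot too
systemctl enable chronyd
# show the current clock synchronization status
chronyc tracking
```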

File synchronization script for the servers

Create the script file (note: use an editor such as `vi xsync`; `mkdir` would create a directory, not a file).

Add the following content to the script:

#!/bin/bash
# 1. Get the number of arguments; exit immediately if none were given
pcount=$#
if ((pcount == 0)); then
  echo "no args"
  exit 1
fi

# 2. Get the file name
p1=$1
fname=$(basename "$p1")
echo "fname=$fname"

# 3. Resolve the parent directory to an absolute path
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo "pdir=$pdir"

# 4. Get the current user name
user=$(whoami)
host_prefix='192.168.0.'

# 5. Loop over the cluster hosts and sync the file to each
for ((host = 100; host < 103; host++)); do
  rsync -rvl "$pdir/$fname" "$user@$host_prefix$host:$pdir"
done
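
As a sanity check, the path resolution in steps 2–3 can be exercised on its own (a small sketch; the file path here is just an example):

```shell
# resolve a path the same way the xsync script does
p1=/etc/hosts
fname=$(basename "$p1")
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo "$pdir/$fname"   # prints /etc/hosts
```

After `chmod +x xsync`, running `./xsync /etc/hosts` would copy /etc/hosts to the same path on 192.168.0.100–102, assuming passwordless SSH between the nodes is already set up.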

2.3: Stop the firewall and disable it at boot

systemctl stop firewalld
systemctl disable firewalld

2.4: Disable SELinux

vi /etc/selinux/config
Change the following setting:
SELINUX=disabled
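
The same edit can be scripted — a sketch, assuming the default `SELINUX=enforcing` line is present:

```shell
# switch SELinux off permanently (takes effect after a reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# and turn it off for the current session too
setenforce 0
```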

2.5: Disable swap

In /etc/fstab, comment out the swap entry: # /dev/mapper/cl-swap swap swap defaults 0 0
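
Disabling swap can likewise be done in one step — a sketch, where the sed pattern assumes the default CentOS LVM swap entry shown above:

```shell
# turn swap off immediately for the running system
swapoff -a
# comment out the swap entry in /etc/fstab so it stays off after reboot
sed -i 's|^/dev/mapper/cl-swap|#&|' /etc/fstab
```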

2.6: Enable IPVS support

Next, make sure the ipset package is installed on every node:

# yum -y install ipset

To make it easy to inspect the IPVS proxy rules, also install the ipvsadm management tool:

# yum -y install ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS.

Write the kernel modules that need to be loaded into a script file:

cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script file executable:

chmod +x /etc/sysconfig/modules/ipvs.modules

Run the script file:

/bin/bash /etc/sysconfig/modules/ipvs.modules

Verify that the corresponding modules loaded successfully:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2.7: Reboot the servers and verify that the settings disabled above took effect

getenforce   # should print Disabled

free -m      # the Swap row should show all zeros

3.1: Install Docker

Switch to a domestic mirror repository:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

List the Docker versions available from the repository:

yum list docker-ce --showduplicates

Install:

yum install docker-ce
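
To pin one of the versions shown by the listing command instead of taking the latest, the version string can be appended — a sketch; the version number here is only a hypothetical example:

```shell
# install a specific Docker version from the repository listing
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13
```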

3.2: Add a configuration file

By default, Docker uses cgroupfs as its cgroup driver, but Kubernetes recommends systemd instead of cgroupfs.

mkdir /etc/docker

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF

3.3: Start Docker

# start Docker
systemctl start docker
# start Docker at boot
systemctl enable docker

Install the Kubernetes components. Because the Kubernetes package repository is hosted abroad and downloads are slow, switch to a domestic mirror by creating /etc/yum.repos.d/kubernetes.repo with the configuration below:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl:

yum -y install kubeadm-1.19.1 kubectl-1.19.1 kubelet-1.19.1

To switch versions, first remove the installed packages:

yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y

Configure the kubelet cgroup driver:

# edit /etc/sysconfig/kubelet and add the following settings
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Enable kubelet at boot:

systemctl enable kubelet

3.4: Initialize the cluster

Run on the master node:

# The default image registry k8s.gcr.io is unreachable from China, so use the Aliyun mirror registry
# apiserver-advertise-address must be set to your own IP
kubeadm init \
--apiserver-advertise-address=192.168.0.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.19.1 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12

# set up the kubectl tool for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The command for joining the master is printed to the console automatically when kubeadm init finishes:

kubeadm join 192.168.0.100:6443 --token awk15p.t6bamck54w69u4s8 \
--discovery-token-ca-cert-hash sha256:a94fa09562466d32d29523ab6cff122186f1127599fa4dcd5fa0152694f17117

Next, add the nodes to the cluster: run the join command printed above on each node server. Once that finishes, check the result on the master node with:

kubectl get nodes
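
The bootstrap token in the join command expires after 24 hours by default; if it has expired before a node joins, a fresh, complete join command can be generated on the master:

```shell
# print a new join command, including a new token and the CA cert hash
kubeadm token create --print-join-command
```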

3.5: Install the network plugin (on the master node only)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Because the external network can be hard to reach, if the download fails you can use the content below directly. Remember that the file name is kube-flannel.yml and its location is /root/.

Contents of kube-flannel.yml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Start flannel using the configuration file:

kubectl apply -f kube-flannel.yml

Wait for the installation to finish; the cluster nodes should then show a Ready status.

Create an nginx deployment:

kubectl create deployment nginx --image=nginx:1.14-alpine

Expose the port:

kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort

View the service:

kubectl get svc   # or: kubectl get service

View the pods:

kubectl get pod

Test in a browser; the service is reachable on every node of the cluster.
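
The same check can be done from the command line — a sketch; the node port is assigned dynamically, so look it up first:

```shell
# find the NodePort assigned to the nginx service
PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# any node IP should serve the nginx welcome page
curl -s http://192.168.0.101:$PORT | head -n 4
```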

With that, the Kubernetes environment is set up. The first step is done, and we are one step closer on the road to understanding k8s.