Kubernetes Binary Installation (15): Installing and Deploying CoreDNS

On the operations host (mfyxw50.mfyxw.com), prepare the CoreDNS image so it can be deployed into the Kubernetes cluster as a Docker image.

[root@mfyxw50 ~]# docker pull coredns/coredns:1.6.9
[root@mfyxw50 ~]# docker tag coredns/coredns:1.6.9 harbor.od.com/public/coredns:v1.6.9

[root@mfyxw50 ~]# docker login harbor.od.com
[root@mfyxw50 ~]# docker push harbor.od.com/public/coredns:v1.6.9

Log in to https://harbor.od.com (username: admin, password: Harbor12345) to confirm that the coredns image was uploaded successfully.
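To verify from the command line instead, you can pull the image back from the registry on any docker-equipped host that trusts harbor.od.com, for example:

[root@mfyxw30 ~]# docker pull harbor.od.com/public/coredns:v1.6.9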

Run the following on the operations host (mfyxw50.mfyxw.com):

[root@mfyxw50 ~]# cat > /etc/nginx/conf.d/k8s-yaml.od.com.conf << EOF
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
EOF
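Before reloading nginx, you can validate the new configuration:

[root@mfyxw50 ~]# /usr/sbin/nginx -t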

Run the following commands on the operations host (mfyxw50.mfyxw.com).

From now on, all resource manifests will be kept under /data/k8s-yaml on the operations host.

[root@mfyxw50 ~]# mkdir -p /data/k8s-yaml/coredns
[root@mfyxw50 ~]# /usr/sbin/nginx -s reload
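The k8s-yaml.od.com DNS record is only created in the next step, but you can already confirm that nginx serves the directory by passing the Host header explicitly; with autoindex on, this returns an (initially empty) HTML listing:

[root@mfyxw50 ~]# curl -H 'Host: k8s-yaml.od.com' http://127.0.0.1/coredns/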

Run the following commands on the DNS server host (mfyxw10.mfyxw.com):

[root@mfyxw10 ~]# cat > /var/named/od.com.zone << EOF
\$ORIGIN od.com.
\$TTL 600   ; 10 minutes
@       IN  SOA dns.od.com.   dnsadmin.od.com. (
                             ; bump the serial by 1 so this version is seen as newer
                             2020031304 ; serial
                             10800      ; refresh (3 hours)
                             900        ; retry (15 minutes)
                             604800     ; expire (1 week)
                             86400      ; minimum (1 day)
                             )
                      NS   dns.od.com.
\$TTL 60 ;  1 minute
dns             A          192.168.80.10
harbor          A          192.168.80.50   ; harbor record
k8s-yaml        A          192.168.80.50
EOF
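Before restarting named, the zone file can be checked for syntax errors with bind's standard tool:

[root@mfyxw10 ~]# named-checkzone od.com /var/named/od.com.zone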

Run the following commands on the DNS server host (mfyxw10.mfyxw.com):

[root@mfyxw10 ~]# systemctl restart named
[root@mfyxw10 ~]# ping k8s-yaml.od.com
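If the records are in place, querying the DNS server directly should return 192.168.80.50 for both new names:

[root@mfyxw10 ~]# dig -t A k8s-yaml.od.com @192.168.80.10 +short
[root@mfyxw10 ~]# dig -t A harbor.od.com @192.168.80.10 +short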

Run the following on the operations host (mfyxw50.mfyxw.com).

The rbac.yaml file contents are as follows. These objects create the coredns ServiceAccount and grant it permission to list and watch services, endpoints, pods, and namespaces, which CoreDNS needs for in-cluster service discovery:

[root@mfyxw50 ~]# cat > /data/k8s-yaml/coredns/rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
EOF

The configMap.yaml file carries the Corefile that configures CoreDNS: the kubernetes plugin answers queries for the cluster.local zone (plus reverse lookups for the 172.16.0.0/16 service network), forward sends everything else to the upstream DNS server 192.168.80.10, and cache 30 caches answers for 30 seconds:

[root@mfyxw50 ~]# cat > /data/k8s-yaml/coredns/configMap.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 172.16.0.0/16
        forward . 192.168.80.10
        cache 30
        loop
        reload
        loadbalance
    }
EOF
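The health and ready plugins above expose HTTP endpoints, by default :8080/health and :8181/ready. Once the deployment below is running, a quick sanity check from a master host (assuming it can reach the pod network) looks like this:

[root@mfyxw30 ~]# POD_IP=$(kubectl -n kube-system get pod -l k8s-app=coredns -o jsonpath='{.items[0].status.podIP}')
[root@mfyxw30 ~]# curl http://${POD_IP}:8080/health
[root@mfyxw30 ~]# curl http://${POD_IP}:8181/ready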

The deployment.yaml file contents are as follows:

[root@mfyxw50 ~]# cat > /data/k8s-yaml/coredns/deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.9
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
EOF
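Note that the livenessProbe targets port 8080 because that is the default listen address of the health plugin enabled in the Corefile, and the -conf argument points CoreDNS at the Corefile mounted from the ConfigMap at /etc/coredns.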

The svc.yaml file contents are as follows:

[root@mfyxw50 ~]# cat > /data/k8s-yaml/coredns/svc.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 172.16.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
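The clusterIP 172.16.0.2 is not arbitrary: it must match the cluster DNS address the kubelets were configured with, because that is the nameserver written into every pod's /etc/resolv.conf.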

Run on either master host (mfyxw30.mfyxw.com or mfyxw40.mfyxw.com):

[root@mfyxw30 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
[root@mfyxw30 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/configMap.yaml
[root@mfyxw30 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yaml
[root@mfyxw30 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
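CoreDNS may take a moment to pull and start; confirm the pod is Running before testing name resolution:

[root@mfyxw30 ~]# kubectl -n kube-system get pods -l k8s-app=coredns -o wide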

Run on either master host (mfyxw30.mfyxw.com or mfyxw40.mfyxw.com):

[root@mfyxw30 ~]# kubectl get svc
[root@mfyxw30 ~]# dig -t A nginx-ds.default.svc.cluster.local. @172.16.0.2 +short
[root@mfyxw30 ~]# dig -t A kubernetes.default.svc.cluster.local. @172.16.0.2 +short

Run on either master host (mfyxw30.mfyxw.com or mfyxw40.mfyxw.com):

[root@mfyxw30 ~]# curl nginx-ds.default
[root@mfyxw30 ~]# curl nginx-ds.default.svc.cluster.local

From outside the cluster (that is, on the host machines themselves), these curl commands cannot reach the service.

Does curl work from inside a container?

[root@mfyxw30 ~]# kubectl get svc
[root@mfyxw30 ~]# kubectl get pod
[root@mfyxw30 ~]# kubectl exec -it nginx-ns-8sgh4 -- /bin/bash
root@nginx-ns-8sgh4:/# curl nginx-ds.default.svc.cluster.local

Summary: a cluster Service cannot be curled from outside the cluster, but inside a container the service name resolves and curl works normally.

Inside any container, running cat /etc/resolv.conf shows that the search line lists the default search domains, which is why short service names resolve.
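For a pod in the default namespace the file typically looks like the sketch below (exact values depend on the kubelet's cluster DNS and cluster domain settings):

nameserver 172.16.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

Because svc.cluster.local is in the search list, the short name nginx-ds.default expands to the fully qualified nginx-ds.default.svc.cluster.local.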