k8s Pod Basics
Original article read: 2023-07-09

Building an image

The first Pod

Setting up a Harbor registry

Restart policy

Startup command

Basic Pod commands

Setting environment variables

Data persistence and sharing: hostPath

Data persistence and sharing: emptyDir

Writing a Pod file in JSON

ConfigMap

Sharing the host network with a Pod

Pod lifecycle

Pod lifecycle hooks

Getting Pod and container info via environment variables

Getting Pod and container info via file mounts

Health checks and service availability checks

Building an image

A simple Spring Boot project. Its pom.xml:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.datang</groupId>
    <artifactId>dockerdemo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.2.RELEASE</version>
    </parent>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

package com.datang.dockerdemo.controller;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

package com.datang.dockerdemo.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.net.InetAddress;

@RestController
public class Rest {
    @GetMapping("test")
    public String test() {
        String s = "~~";
        try {
            // report the container's IP address and hostname, so we can see which instance answered
            InetAddress localHost = InetAddress.getLocalHost();
            String hostAddress = localHost.getHostAddress();
            String hostName = localHost.getHostName();
            s = s + "--------" + hostAddress + "-------" + hostName;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return s;
    }
}

The Dockerfile

The image is based on centos:7. It installs vim via yum, switches to /usr/local, creates a test directory, copies the application jar and the JDK tarball from the VM into /usr/local/test inside the image, then switches to /usr/local/test, unpacks the JDK, sets the JDK environment variables, and runs the jar. That is everything this Dockerfile does. Note: when setting JAVA_HOME, make sure the directory name matches the actual name of the unpacked JDK directory.

FROM centos:7

RUN yum install -y vim

WORKDIR /usr/local

RUN mkdir test

COPY dockerdemo.jar /usr/local/test
COPY jdk8.tar.gz /usr/local/test

WORKDIR /usr/local/test
RUN tar -zxvf jdk8.tar.gz

ENV JAVA_HOME /usr/local/test/jdk1.8.0_291
ENV PATH $JAVA_HOME/bin:$PATH

# note: in exec form "&" is passed to java as a literal argument, not interpreted by a shell
ENTRYPOINT ["java","-jar","/usr/local/test/dockerdemo.jar","&"]

(On the master node) run docker build -t myapp . in this directory (note the trailing dot).

Test that the image works:

docker run -itd -p 9999:8080 myapp
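Once the container is running, the /test endpoint of the demo controller above can be checked from the host through the published port:

```shell
# 9999 on the host maps to 8080 in the container (see the docker run above);
# the response should contain the container's IP address and hostname
curl http://localhost:9999/test
```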

The first Pod

Write pod.yaml (on the master node)

apiVersion: v1 # API version
kind: Pod # resource type
metadata:
  name: poddemo1 # pod name
  labels: # labels: any key/value pairs will do
    k1: v1
spec: # pod contents
  containers: # container list
  - name: myapp01 # container name
    image: myapp # docker image to run
    imagePullPolicy: Never # image pull policy: Never = use only a local image, Always = always try to pull, IfNotPresent = pull only if not present locally
    ports: # port settings
    - name: tomcatport # port name
      containerPort: 8080 # port the container listens on

Run: kubectl create -f poddemo1.yaml

Check whether the pod was created: kubectl get pod

It reports that the image cannot be found: the pod was scheduled onto a node, and the node really does not have this image.

After importing the image onto the node and re-running, the pod status was still wrong.

Check the pod status:

kubectl describe pod poddemo1

The error message mentioned the Java path, and then it clicked: the unpacked JDK directory name in the image must be wrong. After rebuilding the image and recreating the pod, it succeeded.

Look up the pod's IP:

kubectl get pod -o wide

Then the service can be reached from any node in the k8s cluster.
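For example, with the pod IP reported by kubectl get pod -o wide (substitute your own pod's IP):

```shell
# works from the master or any node, since pod IPs are routable inside the cluster
curl http://<pod-ip>:8080/test
```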

Setting up a Harbor registry

The drawback of the example above is that an image present on the master node is missing on the worker nodes and has to be copied over by hand; nothing is automatic. Running our own Harbor registry solves this. My Harbor registry was set up following the blog post below; the procedure there works.

https://blog.csdn.net/moyuanbomo/article/details/123378825

A few things to note:

1 When creating a project in the Harbor registry, make it public; otherwise extra configuration is needed on the k8s side.

2 Every host in the k8s cluster must edit daemon.json and then log in to the Harbor registry once: docker login ip:port

3 When pulling with docker pull, the full name must be given, e.g. docker pull 192.168.180.129:9999/myharbor/myapp:v1

4 Therefore our pod.yaml must also use the full image path, and the image pull policy changes to IfNotPresent (pull only if the image is not present locally).

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
  labels:
    k1: v1
spec:
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080

Restart policy

Create a pod from a plain ubuntu base image that just echoes two strings on start. With restartPolicy set to OnFailure, the pod finishes as soon as the command completes.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ['/bin/echo','aaaaa','bbbbb']

When restartPolicy is changed to Always, the pod is restarted over and over.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
spec:
  restartPolicy: Always # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ['date']

Startup command

The last line of the Dockerfile specifies the command to run when the container starts. Now delete that line and move the command into pod.yaml instead.

FROM centos:7

RUN yum install -y vim

WORKDIR /usr/local

RUN mkdir test

COPY dockerdemo.jar /usr/local/test
COPY jdk8.tar.gz /usr/local/test

WORKDIR /usr/local/test
RUN tar -zxvf jdk8.tar.gz

ENV JAVA_HOME /usr/local/test/jdk1.8.0_333
ENV PATH $JAVA_HOME/bin:$PATH

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]

In addition, args can be set to supply arguments to command.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
spec:
  restartPolicy: Always # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ['/bin/echo']
    args: ['aaa','bbb','cccc']

The command and args fields correspond to Docker's ENTRYPOINT and CMD, and they override them according to these rules: if neither is set, the image's ENTRYPOINT and CMD are used; if only command is set, it runs alone and the image's ENTRYPOINT and CMD are both ignored; if only args is set, the image's ENTRYPOINT runs with args as its arguments; if both are set, command runs with args as its arguments.
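As an illustration of the args-only case: the pod below leaves command unset, so the ENTRYPOINT baked into the image earlier (java -jar ...) still runs, and args is appended to it as program arguments. The flag value itself is made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: poddemo-argsonly
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    args: ['--demo.flag=hello'] # replaces the image's CMD; the image's ENTRYPOINT is kept
```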

Basic Pod commands

Create a pod: kubectl create -f xxx.yaml

List all pods: kubectl get pod

NAME: the pod name. READY: readiness; the number on the left is the count of ready containers in the pod, the number on the right is the expected count. STATUS: the pod's status. RESTARTS: the pod's restart count. AGE: how long the pod has been running.

Get a single pod: kubectl get pod podname

[root@k8s-master1 home]# kubectl get pod poddemo1
NAME READY STATUS RESTARTS AGE
poddemo1 1/1 Running 0 10m

Get a pod's full definition and status: kubectl get pod podname --output json

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "creationTimestamp": "2022-08-17T14:19:12Z",
    "name": "poddemo1",
    "namespace": "default",
    "resourceVersion": "44895",
    "selfLink": "/api/v1/namespaces/default/pods/poddemo1",
    "uid": "36a6fef7-38bb-46db-afc2-e8ee66cc1b49"
  },
  "spec": {
    "containers": [
      {
        "command": [
          "nohup",
          "java",
          "-jar",
          "/usr/local/test/dockerdemo.jar",
          "\u0026"
        ],
        "image": "192.168.239.134:9999/myharbor/myapp:v1",
        "imagePullPolicy": "IfNotPresent",
        "name": "myapp01",
        "ports": [
          {
            "containerPort": 8080,
            "name": "tomcatport",
            "protocol": "TCP"
          }
        ],
        "resources": {},
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "volumeMounts": [
          {
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
            "name": "default-token-zlxv7",
            "readOnly": true
          }
        ]
      }
    ],
    "dnsPolicy": "ClusterFirst",
    "enableServiceLinks": true,
    "nodeName": "k8s-node1",
    "priority": 0,
    "restartPolicy": "Never",
    "schedulerName": "default-scheduler",
    "securityContext": {},
    "serviceAccount": "default",
    "serviceAccountName": "default",
    "terminationGracePeriodSeconds": 30,
    "tolerations": [
      {
        "effect": "NoExecute",
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "tolerationSeconds": 300
      },
      {
        "effect": "NoExecute",
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "tolerationSeconds": 300
      }
    ],
    "volumes": [
      {
        "name": "default-token-zlxv7",
        "secret": {
          "defaultMode": 420,
          "secretName": "default-token-zlxv7"
        }
      }
    ]
  },
  "status": {
    "conditions": [
      {
        "lastProbeTime": null,
        "lastTransitionTime": "2022-08-17T14:19:12Z",
        "status": "True",
        "type": "Initialized"
      },
      {
        "lastProbeTime": null,
        "lastTransitionTime": "2022-08-17T14:19:16Z",
        "status": "True",
        "type": "Ready"
      },
      {
        "lastProbeTime": null,
        "lastTransitionTime": "2022-08-17T14:19:16Z",
        "status": "True",
        "type": "ContainersReady"
      },
      {
        "lastProbeTime": null,
        "lastTransitionTime": "2022-08-17T14:19:12Z",
        "status": "True",
        "type": "PodScheduled"
      }
    ],
    "containerStatuses": [
      {
        "containerID": "docker://bcd1cdfbbabe194f8574af00672eac2a4e1408245506bb9aa7f4c4b52c613e7e",
        "image": "192.168.239.134:9999/myharbor/myapp:v1",
        "imageID": "docker-pullable://192.168.239.134:9999/myharbor/myapp@sha256:4e6502311b5cd07f3ed6f55d616ff5c0205e441533541bbd91464965dd725929",
        "lastState": {},
        "name": "myapp01",
        "ready": true,
        "restartCount": 0,
        "state": {
          "running": {
            "startedAt": "2022-08-17T14:19:15Z"
          }
        }
      }
    ],
    "hostIP": "192.168.239.135",
    "phase": "Running",
    "podIP": "10.244.1.19",
    "qosClass": "BestEffort",
    "startTime": "2022-08-17T14:19:12Z"
  }
}

View pod status and lifecycle events: kubectl describe pod podname

Name: poddemo1
Namespace: default
Priority: 0
Node: k8s-node1/192.168.239.135
Start Time: Wed, 17 Aug 2022 22:19:12 +0800
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.19
Containers:
myapp01:
Container ID: docker://bcd1cdfbbabe194f8574af00672eac2a4e1408245506bb9aa7f4c4b52c613e7e
Image: 192.168.239.134:9999/myharbor/myapp:v1
Image ID: docker-pullable://192.168.239.134:9999/myharbor/myapp@sha256:4e6502311b5cd07f3ed6f55d616ff5c0205e441533541bbd91464965dd725929
Port: 8080/TCP
Host Port: 0/TCP
Command:
nohup
java
-jar
/usr/local/test/dockerdemo.jar
&
State: Running
Started: Wed, 17 Aug 2022 22:19:15 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zlxv7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-zlxv7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zlxv7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned default/poddemo1 to k8s-node1
Normal Pulled 14m kubelet, k8s-node1 Container image "192.168.239.134:9999/myharbor/myapp:v1" already present on machine
Normal Created 14m kubelet, k8s-node1 Created container myapp01
Normal Started 14m kubelet, k8s-node1 Started container myapp01

Delete a pod: kubectl delete pod podname

Delete all pods: kubectl delete pod --all

Setting environment variables

After creating the pod, open a shell in its container and check that the environment variable took effect.

kubectl --namespace=default exec -it poddemo1 --container myapp01 -- sh

apiVersion: v1
kind: Pod
metadata:
  name: poddemo1
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    env:
    - name: aaaaa
      value: 'ttttttt'
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
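The variable can also be checked from outside, without an interactive shell (same pod and container names as above):

```shell
# print a single environment variable inside the running container
kubectl exec poddemo1 --container myapp01 -- sh -c 'echo $aaaaa'
```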

Environment variables can also pull in information about the pod itself:

apiVersion: v1
kind: Pod
metadata:
  name: poddemo5
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    env:
    - name: aaaaa
      value: 'ttttttt'
    - name: nodename
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: nodenamespace
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]

Data persistence and sharing: hostPath

hostPath maps a directory on the host into the container. After the container terminates, the directory, and the files the container wrote into it, remain on the host.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo6
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: writepod # container 1
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo \"aaaaaaa\" >> /data/hello.txt"] # write to /data/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /data # mount point inside the container
  - name: readpod # container 2
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","sleep 10; cat /data/hello.txt"] # read /data/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /data # mount point inside the container
  volumes: # volumes backed by the host
  - name: data1 # unique volume name
    hostPath: # volume type; hostPath is just one of several
      path: /home # host directory backing the volume

Although the containers have stopped, the written file is still visible on the node.

In the configuration below, the two containers' mountPath values are not the same directory, yet the file can still be read. What ultimately makes the containers operate on the same directory is that volumeMounts.name matches volumes.name.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo6
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: writepod # container 1
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo \"aaaaaaa\" >> /data/hello.txt"] # write to /data/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /data # mount point inside the container
  - name: readpod # container 2
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","sleep 10; cat /tmp/hello.txt"] # read /tmp/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /tmp # mount point inside the container
  volumes: # volumes backed by the host
  - name: data1 # unique volume name
    hostPath: # volume type; hostPath is just one of several
      path: /home # host directory backing the volume

Data persistence and sharing: emptyDir

Writes data to temporary storage; when the pod stops, the volume's data is not retained.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo8
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: writepod # container 1
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo \"aaaaaaa\" >> /data/hello.txt"] # write to /data/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /data # mount point inside the container
  - name: readpod # container 2
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","sleep 10; cat /data/hello.txt"] # read /data/hello.txt inside the container
    volumeMounts:
    - name: data1 # must match a volume name below
      mountPath: /data # mount point inside the container
  volumes:
  - name: data1 # unique volume name
    emptyDir: {} # volume type; emptyDir is just one of several
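emptyDir also takes optional parameters; for instance it can be backed by memory (tmpfs) instead of node disk. The sizeLimit value here is just an example:

```yaml
volumes:
- name: data1
  emptyDir:
    medium: Memory  # store the volume in RAM (tmpfs)
    sizeLimit: 64Mi # optional upper bound on the volume's size
```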

Writing a Pod file in JSON

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "poddemo7"
  },
  "spec": {
    "restartPolicy": "OnFailure",
    "containers": [
      {
        "name": "myapp01",
        "image": "ubuntu:14.04",
        "imagePullPolicy": "IfNotPresent",
        "command": [
          "/bin/echo",
          "aaaaa",
          "bbbbb"
        ]
      }
    ]
  }
}

ConfigMap

Typical ways for a container to consume a ConfigMap are: 1 generating environment variables inside the container; 2 setting startup-command arguments (which must first be exposed as environment variables); 3 mounting it as files or directories inside the container via a Volume.

Create a ConfigMap from YAML, defining two key/value pairs.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap1
data:
  key1: value1
  key2: value2
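The same ConfigMap can also be created straight from the command line instead of YAML:

```shell
kubectl create configmap configmap1 --from-literal=key1=value1 --from-literal=key2=value2
# verify the contents
kubectl get configmap configmap1 -o yaml
```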

Consume the ConfigMap through environment variables: first create a ConfigMap with two key/value pairs, then reference it in the pod via configMapKeyRef. With this style, the environment variable's name is env.name.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap1
data:
  key1: value1
  key2: value2

apiVersion: v1
kind: Pod
metadata:
  name: poddemo5
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    env:
    - name: aaaaa # plain environment variable
      value: 'ttttttt'
    - name: nodename # pod attribute as environment variable
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: ccccccc
      valueFrom:
        configMapKeyRef:
          name: configmap1
          key: key1
    - name: ddddddd
      valueFrom:
        configMapKeyRef:
          name: configmap1
          key: key2
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]

With containers.envFrom, every key defined in a ConfigMap can be mapped into environment variables at once.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap2
data:
  key1: value1
  key2: value2

apiVersion: v1
kind: Pod
metadata:
  name: poddemo10
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    env:
    - name: aaaaa # plain environment variable
      value: 'ttttttt'
    - name: nodename # pod attribute as environment variable
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    envFrom:
    - configMapRef:
        name: configmap2
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]

In the configuration below, the startup arguments use environment variables, and those environment variables are declared in the ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap2
data:
  key1: value1
  key2: value2

apiVersion: v1
kind: Pod
metadata:
  name: poddemo11
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: ubuntu:14.04
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c']
    args: ["echo ${aaa} ${key1} ${key2}"]
    env:
    - name: aaa
      value: ttttttt
    envFrom:
    - configMapRef:
        name: configmap2

Mount the ConfigMap as files inside the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap3
data:
  key1: '我是一只小青蛙'
  key2: '呱呱呱 呱呱呱'

apiVersion: v1
kind: Pod
metadata:
  name: poddemo12
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    volumeMounts:
    - name: config
      mountPath: /home
  volumes:
  - name: config
    configMap:
      name: configmap3
      items:
      - key: key1
        path: file1
      - key: key2
        path: file2

If no items are specified when mounting the ConfigMap, a file is generated for each key in the ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap4
data:
  key1: '我是一只小青蛙'
  key2: '呱呱呱 呱呱呱'

apiVersion: v1
kind: Pod
metadata:
  name: poddemo13
spec:
  restartPolicy: Never
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    volumeMounts:
    - name: config
      mountPath: /home
  volumes:
  - name: config
    configMap:
      name: configmap4

Sharing the host network with a Pod

All containers in a pod share one network: their network view is identical, and they can reach one another via localhost. By default each pod gets its own podIP; any podIP can be reached from inside the k8s cluster, but not from outside it. A pod can instead be put on the Host network, which exposes all of its containers directly to external systems. This is set via spec.hostNetwork.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo14
spec:
  restartPolicy: Never
  hostNetwork: true
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
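With hostNetwork: true the pod IP equals the node IP, so the service is reachable on the node's own address (the node IP below is taken from the earlier examples; substitute your own):

```shell
kubectl get pod poddemo14 -o wide     # the IP column now shows the node IP
curl http://192.168.239.135:8080/test # reachable even from outside the cluster
```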

The biggest drawback of mapping pods straight onto the Host network is that if several such pods use the same container port, only one can start; the rest fail with the error below.

Warning FailedScheduling 35s (x2 over 35s) default-scheduler 0/2 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taints that the pod didn't tolerate.

Pod lifecycle

A pod's lifecycle, in short: first the pod is created, then it is scheduled to a Node where it is deployed and runs. Pods are very loyal: once assigned to a Node, a pod never leaves that Node until it is deleted and its lifecycle ends.

The pod lifecycle is reflected in the pod's phase values:

Pending: the API Server has created the Pod, but one or more of its container images have not been created yet, including time spent downloading images.

Running: all containers in the pod have been created, and at least one container is running, starting, or restarting.

Succeeded: all containers in the pod exited successfully and will not be restarted.

Failed: all containers in the pod have exited, and at least one container exited with a failure.

Unknown: the pod's state cannot be determined for some reason, typically a network communication problem.

Pod lifecycle hooks

Containers in a pod have two lifecycle hooks: one runs right after the container is created, the other just before it terminates. Each hook can either execute a command or send an HTTP request. The example below copies the jar inside the container after creation, and before termination sends a GET request to a project running on my local machine.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo17
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    lifecycle: # lifecycle hooks
      postStart: # called after the container is created
        exec: # a hook can use either exec or httpGet
          command:
          - "cp"
          - "/usr/local/test/dockerdemo.jar"
          - "copyapp"
      preStop: # called before the container is terminated
        httpGet:
          host: 192.168.0.100
          path: /test
          port: 8080
          scheme: HTTP

Getting Pod and container info via environment variables

Note that pod info and container info are obtained in different ways: fieldRef for the pod, resourceFieldRef for the container.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo18
  labels:
    labels1: labls111
    labels2: labls222
  annotations:
    anno1: anno11
    anno2: anno22
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    env:
    - name: aaa
      valueFrom:
        fieldRef:
          fieldPath: metadata.name # pod name
    - name: bbb
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace # namespace the pod is in
    - name: ccc
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid # pod UID
    - name: ddd
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['labels1'] # pod label
    - name: eee
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['labels2'] # pod label
    - name: fff
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['anno1'] # pod annotation
    - name: ggg
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['anno2'] # pod annotation
    - name: hhh
      valueFrom:
        fieldRef:
          fieldPath: status.podIP # pod IP address
    - name: iii
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName # serviceAccount used by the pod
    - name: jjj
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName # name of the node the pod runs on
    - name: kkk
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP # IP of the node the pod runs on
    - name: AAA # container-level info from here on
      valueFrom:
        resourceFieldRef:
          containerName: myapp01
          resource: requests.cpu # CPU request
    - name: BBB
      valueFrom:
        resourceFieldRef:
          containerName: myapp01
          resource: limits.cpu # CPU limit
    - name: CCC
      valueFrom:
        resourceFieldRef:
          containerName: myapp01
          resource: requests.memory # memory request
    - name: DDD
      valueFrom:
        resourceFieldRef:
          containerName: myapp01
          resource: limits.memory # memory limit

Getting Pod and container info via file mounts

Again, note that pod info and container info are obtained in different ways: fieldRef for the pod, resourceFieldRef for the container.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo19
  labels:
    labels1: labls111
    labels2: labls222
  annotations:
    anno1: anno11
    anno2: anno22
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    volumeMounts:
    - name: podinfo
      mountPath: /home
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "aaa"
        fieldRef:
          fieldPath: metadata.name
      - path: "bbb"
        fieldRef:
          fieldPath: metadata.namespace
      - path: "ccc"
        fieldRef:
          fieldPath: metadata.uid
      - path: "ddd"
        fieldRef:
          fieldPath: metadata.labels['labels1']
      - path: "eee"
        fieldRef:
          fieldPath: metadata.labels['labels2']
      - path: "fff"
        fieldRef:
          fieldPath: metadata.annotations['anno1']
      - path: "ggg"
        fieldRef:
          fieldPath: metadata.annotations['anno2']
      - path: "AAA"
        resourceFieldRef:
          containerName: myapp01
          resource: requests.cpu
      - path: "BBB"
        resourceFieldRef:
          containerName: myapp01
          resource: limits.cpu
      - path: "CCC"
        resourceFieldRef:
          containerName: myapp01
          resource: requests.memory
      - path: "DDD"
        resourceFieldRef:
          containerName: myapp01
          resource: limits.memory

Health checks and service availability checks

LivenessProbe: determines whether the container is alive (Running). If the liveness probe finds the container unhealthy, the kubelet kills the container and handles it according to the restart policy. If a container has no LivenessProbe, the kubelet behaves as if the probe always returns Success.

ReadinessProbe: determines whether the container's service is available (Ready). Only pods in the Ready state may receive requests. For pods managed by a Service, the association between the Service and the Pod's Endpoint is maintained based on whether the Pod is Ready. If a running pod's Ready state turns false, it is removed from the Service's backend endpoint list; once it returns to Ready it is added back. This guarantees that client requests to the Service are never forwarded to a pod instance whose service is unavailable.

apiVersion: v1
kind: Pod
metadata:
  name: poddemo20
spec:
  restartPolicy: OnFailure # Always = restart the pod whenever it stops; OnFailure = do not restart on a normal exit (exit code 0); Never = report the exit to the master but do not restart
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: tomcatport
      containerPort: 8080
    command: ["nohup","java","-jar","/usr/local/test/dockerdemo.jar","&"]
    livenessProbe:
      httpGet:
        host: 192.168.43.4
        path: /test1
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 5 # seconds after container start before the first liveness/readiness probe runs
      periodSeconds: 3 # how often to probe, in seconds; default 10, minimum 1
      timeoutSeconds: 2 # probe timeout in seconds; default 1, minimum 1
      failureThreshold: 3 # probe failures allowed before the container is restarted (liveness) or the pod is marked unavailable (readiness)
      successThreshold: 1 # successes the probe must report after failing for the probe state to reset
    readinessProbe:
      httpGet:
        host: 192.168.43.4
        path: /test2
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 3
      timeoutSeconds: 2
      failureThreshold: 3
      successThreshold: 1

Both probes support three check mechanisms.

ExecAction: run a command inside the container; the check succeeds when the command exits with code 0.

exec:
  command:
  - cat
  - /tmp/health

TCPSocketAction: probe the given TCP port of the container; the check succeeds when a TCP connection to the port can be established.

tcpSocket:
  port: 8080

HTTPGetAction: send an HTTP request; the check succeeds when the response code is between 200 and 400.

httpGet:
  path: /healthz
  port: 8080
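Putting one of these into a full spec, a tcpSocket readiness probe for the demo app might look like this (a sketch reusing the image and port from the examples above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: poddemo21
spec:
  containers:
  - name: myapp01
    image: 192.168.180.129:9999/myharbor/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080 # ready once the app accepts TCP connections on 8080
      initialDelaySeconds: 5
      periodSeconds: 3
```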