1. Architecture Overview
➢ Prometheus Server: the most important component of the Prometheus ecosystem. It scrapes and stores time-series data, and also provides data querying plus the configuration and management of alerting rules.
➢ Alertmanager: the alerting component of the Prometheus ecosystem. Prometheus Server sends alerts to Alertmanager, which routes them to the configured recipients or groups according to its routing configuration. Alertmanager can deliver notifications via email, webhook, WeChat, DingTalk, SMS, and other channels.
➢ Grafana: visualizes the data, making it easy to query and observe.
➢ Push Gateway: Prometheus itself pulls data, but some metrics are short-lived and may be lost if no scrape happens in time. The Push Gateway addresses this: it accepts data that clients push to it, and Prometheus then pulls that data from the gateway.
➢ Exporter: collects monitoring data. Host metrics, for example, can be collected by node_exporter and MySQL metrics by mysqld_exporter. The exporter exposes an endpoint such as /metrics, from which Prometheus scrapes the data.
➢ PromQL: strictly speaking not a Prometheus component but the language for querying Prometheus data, just as SQL queries a database and LogQL queries Loki (see the example after this list).
➢ Service Discovery: automatic discovery of monitoring targets. Commonly used mechanisms include Kubernetes-, Consul-, Eureka-, and file-based discovery.
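A minimal PromQL illustration, issued through the Prometheus HTTP API (a sketch; NODE_IP:NODEPORT is a placeholder for any reachable Prometheus address, and the query assumes node_exporter metrics are being scraped):
# Average per-instance CPU usage over the last 5 minutes, derived from node_exporter metrics:
curl -s 'http://NODE_IP:NODEPORT/api/v1/query' \
  --data-urlencode 'query=100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100'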
2. Installing Prometheus
Kube-Prometheus project address: https://github.com/prometheus-operator/kube-prometheus/
First, use the project's compatibility matrix to find the Kube Prometheus Stack release that matches your Kubernetes version:
git clone -b release-0.10 --single-branch https://github.com/prometheus-operator/kube-prometheus.git
[root@k8s-master1 manifests]# vim prometheusAdapter-deployment.yaml
image: k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
# Change the image to the following mirror:
image: willdockerhub/prometheus-adapter:v0.9.1
[root@k8s-master1 manifests]# vim kubeStateMetrics-deployment.yaml
image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.3.0
# Change the image to the following mirror:
image: bitnami/kube-state-metrics:2.3.0
cd kube-prometheus/manifests/
# Install the Prometheus Operator:
kubectl create -f setup/
# Install the Prometheus stack:
kubectl create -f .
kubectl get pod -n monitoring
kubectl edit svc -n monitoring grafana
In the editor, find type: ClusterIP and change it to type: NodePort.
[root@k8s-master1 manifests]# kubectl get svc -n monitoring grafana
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.8.14.157
Note: Grafana's default login credentials are admin/admin.
kubectl edit svc -n monitoring prometheus-k8s
Note: change type: ClusterIP to type: NodePort, the same way as above.
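If you prefer a non-interactive change over kubectl edit, the same switch can be made with kubectl patch (a sketch; both commands are equivalent to the edits above):
kubectl patch svc grafana -n monitoring -p '{"spec":{"type":"NodePort"}}'
kubectl patch svc prometheus-k8s -n monitoring -p '{"spec":{"type":"NodePort"}}'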
[root@k8s-master1 manifests]# kubectl get svc -n monitoring prometheus-k8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-k8s NodePort 10.6.102.49
Note: a fresh installation fires a few alerts by default; ignore them for now.
3. Monitoring Etcd
3-1. Test access to the Etcd metrics endpoint
[root@k8s-master1 ~]# cat /etc/etcd/etcd.config.yml | grep -E "key-file|cert-file"
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
[root@k8s-master1 ~]# curl -s --cert /etc/kubernetes/pki/etcd/etcd.pem --key /etc/kubernetes/pki/etcd/etcd-key.pem \
https://YOUR_ETCD_IP:2379/metrics -k | tail -5
# Sample output:
process_virtual_memory_max_bytes 1.8446744073709552e+19
promhttp_metric_handler_requests_in_flight 1
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
3-2. Create the Etcd Service YAML file
vim etcd-service.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app: etcd-prom
  name: etcd-prom
  namespace: kube-system
subsets:
- addresses:
  - ip: YOUR_ETCD_IP   # one entry per etcd node
  ports:
  - name: https-metrics
    port: 2379
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: etcd-prom
  name: etcd-prom
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - name: https-metrics
    port: 2379
    protocol: TCP
    targetPort: 2379
Note: replace YOUR_ETCD_IP with the IP address of your own etcd host(s). Also note that the port name is https-metrics; it must match the ServiceMonitor defined later.
# Create it:
kubectl create -f etcd-service.yaml
# Check the Service's ClusterIP:
[root@k8s-master1 src]# kubectl get svc -n kube-system etcd-prom
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-prom ClusterIP 10.4.62.48
[root@k8s-master1 src]# curl -s --cert /etc/kubernetes/pki/etcd/etcd.pem --key /etc/kubernetes/pki/etcd/etcd-key.pem https://10.4.62.48:2379/metrics -k | tail -5
process_virtual_memory_max_bytes 1.8446744073709552e+19
promhttp_metric_handler_requests_in_flight 1
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
Note: querying the metrics through the Service's ClusterIP returns the same result as the direct test above, which confirms the etcd Service is usable.
3-3. Create a Secret holding the etcd certificates (adjust the certificate paths to your environment)
[root@k8s-master1 src]# kubectl create secret generic etcd-ssl \
--from-file=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--from-file=/etc/kubernetes/pki/etcd/etcd.pem \
--from-file=/etc/kubernetes/pki/etcd/etcd-key.pem -n monitoring
kubectl edit prometheus k8s -n monitoring
# Add the following under spec:
  secrets:
  - etcd-ssl
Note: after saving and exiting, the Prometheus Pods restart automatically. Once the restart finishes, verify that the certificates are mounted (any one of the Prometheus Pods will do).
# Check whether the restart has finished:
kubectl get pod -n monitoring
# Check whether the certificates are mounted:
[root@k8s-master1 ~]# kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- ls /etc/prometheus/secrets/etcd-ssl/
etcd-ca.pem
etcd-key.pem
etcd.pem
vim servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: etcd
  namespace: monitoring
  labels:
    app: etcd
spec:
  jobLabel: k8s-app
  endpoints:
  - interval: 30s
    port: https-metrics   # must match Service.spec.ports.name
    scheme: https
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-ssl/etcd-ca.pem   # certificate paths inside the Prometheus Pod
      certFile: /etc/prometheus/secrets/etcd-ssl/etcd.pem
      keyFile: /etc/prometheus/secrets/etcd-ssl/etcd-key.pem
      insecureSkipVerify: true   # disable certificate verification
  selector:
    matchLabels:
      app: etcd-prom   # must match the Service's labels; verify with the command below
  namespaceSelector:
    matchNames:
    - kube-system
kubectl get svc -n kube-system -l app=etcd-prom
Note: compared with the previous ServiceMonitors, this one adds a tlsConfig section; metrics served over plain HTTP do not need it.
[root@k8s-master1 src]# kubectl create -f servicemonitor.yaml
servicemonitor.monitoring.coreos.com/etcd created
kubectl get servicemonitor -n monitoring
Check the etcd targets on the Prometheus web UI.
Note: the number of etcd targets shown should equal the number of etcd nodes; if it does not, retrace the steps above to troubleshoot (a command-line check is sketched below).
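A command-line alternative to the web UI check, assuming busybox wget is available inside the Prometheus image (it normally is):
# A non-zero count means the etcd scrape pool is active:
kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- \
  wget -qO- 'http://127.0.0.1:9090/api/v1/targets?state=active' | grep -c 'monitoring/etcd'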
Configure Grafana to display the etcd dashboard
Open https://grafana.com/grafana/dashboards and search for etcd-related dashboards.
A suitable etcd dashboard to import: https://grafana.com/grafana/dashboards/3070-etcd/
After importing it, the dashboard shows the etcd metrics.
Note: this completes the etcd monitoring setup.
4. Monitoring Non-Cloud-Native Applications with Exporters
4-1. MySQL serves as the test case to demonstrate how an exporter monitors a non-cloud-native application. Installing MySQL itself is not covered. First create a dedicated account for the exporter (a verification sketch follows the grants):
mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporter' WITH MAX_USER_CONNECTIONS 3;
Query OK, 0 rows affected (0.01 sec)
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
Query OK, 0 rows affected (0.00 sec)
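A quick way to confirm the account works, assuming the mysql client is installed on the MySQL host (the credentials are the ones created above):
# Log in as the exporter account and list its grants:
mysql -uexporter -pexporter -h127.0.0.1 -e 'SHOW GRANTS FOR CURRENT_USER();'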
vim mysql-exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  template:
    metadata:
      labels:
        k8s-app: mysql-exporter
    spec:
      containers:
      - name: mysql-exporter
        image: registry.cn-beijing.aliyuncs.com/dotbalo/mysqld-exporter
        env:
        - name: DATA_SOURCE_NAME
          #value: "exporter:exporter@(mysql.default:3306)/"   # for MySQL inside the cluster
          value: "exporter:exporter@(192.168.3.81:3306)/"     # for MySQL outside the cluster
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9104   # matches the Service port below
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-exporter
  namespace: monitoring
  labels:
    k8s-app: mysql-exporter
spec:
  type: ClusterIP
  selector:
    k8s-app: mysql-exporter
  ports:
  - name: api
    port: 9104
    protocol: TCP
# kubectl create -f mysql-exporter.yaml
deployment.apps/mysql-exporter created
service/mysql-exporter created
[root@k8s-master1 src]# kubectl get svc -n monitoring mysql-exporter
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-exporter ClusterIP 10.1.173.190
[root@k8s-master1 src]# curl 10.1.173.190:9104/metrics | tail -1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 119k 0 119k 0 0 1161k 0 --:--:-- --:--:-- --:--:-- 1179k
promhttp_metric_handler_requests_total{code="503"} 0
vim mysql-sm.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mysql-exporter
  namespace: monitoring
  labels:
    k8s-app: mysql-exporter
    namespace: monitoring
spec:
  jobLabel: k8s-app
  endpoints:
  - port: api
    interval: 30s
    scheme: http
  selector:
    matchLabels:
      k8s-app: mysql-exporter
  namespaceSelector:
    matchNames:
    - monitoring
Note: the matchLabels and endpoints settings must be consistent with the MySQL exporter's Service.
Create the ServiceMonitor:
# kubectl create -f mysql-sm.yaml
servicemonitor.monitoring.coreos.com/mysql-exporter created
Import the Grafana dashboard
Note: import the dashboard from https://grafana.com/grafana/dashboards/6239-mysql/; the import steps are the same as before and are not repeated here. Once imported, the monitoring data is visible in Grafana.
5. Troubleshooting a ServiceMonitor That Finds No Targets
5-1. Troubleshooting steps (a command sketch follows this list)
1) Confirm the ServiceMonitor was created successfully.
2) Confirm the ServiceMonitor's labels are configured correctly.
3) Confirm Prometheus generated the corresponding scrape configuration.
4) Confirm a Service matching the ServiceMonitor exists.
5) Confirm the application's metrics endpoint is reachable through that Service.
6) Confirm the Service's port and scheme match the ServiceMonitor.
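The steps map onto a few commands; the angle-bracket names are placeholders to fill in for your own case:
# 1) + 2) The ServiceMonitor exists; inspect its selector, labels, and endpoints:
kubectl get servicemonitor <name> -n monitoring -oyaml
# 4) A Service carrying the labels from matchLabels exists in the target namespace:
kubectl get svc -n <namespace> -l <key>=<value>
# 5) + 6) The metrics endpoint answers through the Service, on the port and scheme the ServiceMonitor expects:
curl -s http://<service-cluster-ip>:<port>/metrics | head
# 3) Check the generated scrape configuration on the Prometheus web UI (Status -> Configuration / Targets).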
5-2. Fixing the post-install KubeControllerManagerDown (1 active) alert (no monitoring data for kube-controller-manager). Steps:
kubectl get servicemonitor -n monitoring
Check on the Prometheus web UI whether a kube-controller-manager scrape configuration was generated, then inspect the ServiceMonitor:
kubectl get servicemonitor -n monitoring kube-controller-manager -oyaml
Note: this ServiceMonitor matches Services in the kube-system namespace carrying the label app.kubernetes.io/name=kube-controller-manager.
Check whether a Service with that label exists:
[root@k8s-master1 ~]# kubectl get svc -l app.kubernetes.io/name=kube-controller-manager -n kube-system
No resources found in kube-system namespace.
Note: no Service carries that label, which is why no monitoring target is found. Manually create a Service and Endpoints pointing at your own kube-controller-manager:
[root@k8s-master1 src]# vim controller-manager-svc.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app.kubernetes.io/name: kube-controller-manager
  name: kube-controller-manager-prom
  namespace: kube-system
subsets:
- addresses:
  - ip: YOUR_MASTER_IP   # one entry per master node running kube-controller-manager
  ports:
  - name: https-metrics
    port: 10257          # kube-controller-manager's secure metrics port
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: kube-controller-manager   # must match the ServiceMonitor's label selector
  name: kube-controller-manager-prom
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - name: https-metrics
    port: 10257
    protocol: TCP
    targetPort: 10257
# Create them:
[root@k8s-master1 src]# kubectl create -f controller-manager-svc.yaml
endpoints/kube-controller-manager-prom created
service/kube-controller-manager-prom created
Also make sure kube-controller-manager is started with the following flags, so the metrics endpoint can authenticate and authorize requests:
--authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
Note: if you append these at the end of the configuration, remove the trailing backslash from the last line.
5-3. Fixing the post-install KubeSchedulerDown (1 active) alert (no monitoring data for kube-scheduler). Steps:
kubectl get servicemonitor -n monitoring
kubectl get servicemonitor -n monitoring kube-scheduler -oyaml
Note: this ServiceMonitor matches Services in the kube-system namespace carrying the label app.kubernetes.io/name: kube-scheduler.
Check whether a Service with that label exists:
[root@k8s-master1 ~]# kubectl get svc -l app.kubernetes.io/name=kube-scheduler -n kube-system
No resources found in kube-system namespace.
Note: no Service carries that label, which is why no monitoring target is found. Manually create a Service and Endpoints pointing at your own kube-scheduler:
[root@k8s-master1 src]# vim scheduler.yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler
  name: kube-scheduler-prom
  namespace: kube-system
subsets:
- addresses:
  - ip: YOUR_MASTER_IP   # one entry per master node running kube-scheduler
  ports:
  - name: https-metrics
    port: 10259
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: kube-scheduler   # must match the ServiceMonitor's label selector
  name: kube-scheduler-prom
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - name: https-metrics
    port: 10259
    protocol: TCP
    targetPort: 10259
[root@k8s-master1 src]# kubectl create -f scheduler.yaml
endpoints/kube-scheduler-prom created
service/kube-scheduler-prom created
Also make sure kube-scheduler is started with the following flags:
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
Note: if you append these at the end of the configuration, remove the trailing backslash from the last line.
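To confirm that both control-plane components are actually listening on their secure metrics ports, a quick check on each master node (10257 and 10259 are the default secure ports of kube-controller-manager and kube-scheduler):
ss -lntp | grep -E '10257|10259'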
Note: other similar errors can be investigated with the same steps.
6. Monitoring Remote Linux Hosts with a Prometheus Static Scrape Config
Note: the remote host to be monitored is 192.168.3.81.
6-1. Steps on the remote Linux host. Exporter downloads are available at https://prometheus.io/download/
# Download the node_exporter client:
wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-amd64.tar.gz
# Unpack it:
tar zxf node_exporter-1.4.0.linux-amd64.tar.gz
# Move the unpacked directory to /usr/local/:
mv node_exporter-1.4.0.linux-amd64 /usr/local/node_exporter-1.4.0.linux-amd64
vim /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
After=network.target

[Service]
ExecStart=/usr/local/node_exporter-1.4.0.linux-amd64/node_exporter --web.listen-address=0.0.0.0:9100
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
# Reload systemd units:
systemctl daemon-reload
# Enable at boot:
systemctl enable node_exporter.service
# Start:
systemctl start node_exporter.service
# Stop:
systemctl stop node_exporter.service
# Restart:
systemctl restart node_exporter.service
Notes on the ExecStart line (a quick check follows these notes):
1. /usr/local/node_exporter-1.4.0.linux-amd64/node_exporter is the path to the node_exporter binary.
2. --web.listen-address=0.0.0.0:9100 makes node_exporter listen on port 9100.
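A quick check that node_exporter is serving metrics, run on the monitored host:
curl -s http://127.0.0.1:9100/metrics | head -5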
6-2. The following steps run on k8s-master1 (or any machine with kubectl access).
# Create an empty prometheus-additional.yaml file:
mkdir /prometheus
cd /prometheus/
touch /prometheus/prometheus-additional.yaml
# Create the Secret:
[root@k8s-master1 prometheus]# kubectl create secret generic additional-configs \
--from-file=/prometheus/prometheus-additional.yaml -n monitoring
# On success:
secret/additional-configs created
# Check the Secret:
[root@k8s-master1 ~]# kubectl get secrets -n monitoring additional-configs
NAME TYPE DATA AGE
additional-configs Opaque 1 17m
# Inspect the Secret's details:
[root@k8s-master1 ~]# kubectl describe secrets -n monitoring additional-configs
Name:         additional-configs
Namespace:    monitoring
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
prometheus-additional.yaml:  0 bytes
[root@k8s-master1 src]# kubectl edit prometheus -n monitoring k8s
prometheus.monitoring.coreos.com/k8s edited
# Add the following under spec:
  additionalScrapeConfigs:
    key: prometheus-additional.yaml
    name: additional-configs
    optional: true
Note: after saving and exiting, the change takes effect without restarting the Prometheus Pods.
vim prometheus-additional.yaml
- job_name: 'LinuxServerMonitor'
  static_configs:
  - targets:
    - "192.168.3.81:9100"
  relabel_configs:
  - source_labels: [__address__]
    target_label: instance
Note: example static config for monitoring several Linux hosts:
- job_name: 'LinuxServerMonitor'
  static_configs:
  - targets:
    - "192.168.3.81:9100"
    - "192.168.3.82:9100"
    - "192.168.3.83:9100"
    # Add one "IP:port" entry per Linux host to monitor.
    labels:
      server_type: 'linux'
  relabel_configs:
  - source_labels: [__address__]
    target_label: instance
[root@k8s-master1 src]# kubectl create secret generic additional-configs \
--from-file=/prometheus/prometheus-additional.yaml --dry-run=client -oyaml | \
kubectl replace -f - -n monitoring
# After the update:
secret/additional-configs replaced
Note: if the new targets do not show up immediately, wait a moment and check again. If they still have not appeared after a while, retrace the steps and configuration above. A quick way to confirm that Prometheus picked up the job is sketched below.
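Assuming busybox wget is available inside the Prometheus image (it normally is), the new job can also be confirmed from the command line:
# A non-zero count means the LinuxServerMonitor job reached Prometheus:
kubectl exec -n monitoring prometheus-k8s-0 -c prometheus -- \
  wget -qO- http://127.0.0.1:9090/api/v1/targets | grep -c LinuxServerMonitor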
Import a dashboard into Grafana's General folder to display the host metrics.
Note: the dashboard import is not shown again here; follow the earlier dashboard import steps.
7. Black-Box Monitoring of Domain Reachability
Note: for creating the Secret that holds prometheus-additional.yaml, see the steps in 6-2.
7-1. Recent Prometheus Stack releases install Blackbox Exporter by default; verify with:
[root@k8s-master1 ~]# kubectl get pod -n monitoring -l app.kubernetes.io/name=blackbox-exporter
NAME READY STATUS RESTARTS AGE
blackbox-exporter-6b79c4588b-hvqgz 3/3 Running 21 (83m ago) 3d21h
7-2. A Service is created as well; Blackbox Exporter can be reached through it, with probe parameters passed in the URL:
[root@k8s-master1 ~]# kubectl get svc -n monitoring -l app.kubernetes.io/name=blackbox-exporter
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
blackbox-exporter ClusterIP ...
7-3. Edit the static config in prometheus-additional.yaml to monitor domains:
vim prometheus-additional.yaml
- job_name: 'GameServer'
  metrics_path: /probe
  params:
    module: [http_2xx]   # Look for a HTTP 200 response.
  static_configs:
  - targets:
    - https://your-domain.example   # placeholder: the domain(s) to probe
  relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    replacement: blackbox-exporter:19115
➢ targets: the probe targets; replace the placeholder with your own domains.
➢ params: which Blackbox module to probe with.
➢ replacement: the address of the Blackbox Exporter.
Note: this content matches a traditional Prometheus configuration; only the job needs to be added. The probe endpoint can also be exercised by hand, as sketched below.
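A manual probe through the same Service, from a throwaway Pod (a sketch; the target is the placeholder used above, and probe_success 1 means the probe passed):
kubectl run probe-test --rm -it --restart=Never --image=busybox -n monitoring -- \
  wget -qO- 'http://blackbox-exporter:19115/probe?module=http_2xx&target=https://your-domain.example' | grep probe_success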
7-4. Update the Secret from prometheus-additional.yaml:
[root@k8s-master1 src]# kubectl create secret generic additional-configs \
--from-file=/prometheus/prometheus-additional.yaml --dry-run=client -oyaml | \
kubectl replace -f - -n monitoring
secret/additional-configs replaced
7-5. A short while after the update, the new configuration appears on the Prometheus web UI.
Note: once the target's state is UP, import the blackbox dashboard into the General folder.
7-6. After the dashboard is imported into the General folder, the probe results are displayed.
Note: the dashboard import is not repeated here; follow the earlier import steps.
Dashboard to import: https://grafana.com/grafana/dashboards/13659
7-7. Monitoring a remote application port
- job_name: 'ServerPort'
  metrics_path: /probe
  params:
    module: [tcp_connect]
  static_configs:
  - targets:
    - 192.168.3.81:9000
    labels:
      tags: "server1"   # label name
  relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    replacement: blackbox-exporter:19115
[root@k8s-master1 src]# kubectl create secret generic additional-configs \
--from-file=/prometheus/prometheus-additional.yaml --dry-run=client -oyaml | \
kubectl replace -f - -n monitoring
Note: the same dashboard (https://grafana.com/grafana/dashboards/13659) displays these probes in the General folder.
8. Email Alert Configuration
8-1. Locate the Alertmanager configuration file in the directory where the Prometheus stack was installed.
# For example, if the package directory is /kube-prometheus-release-0.10, change into manifests/
# and edit alertmanager-secret.yaml:
[root@k8s-master1 kube-prometheus-release-0.10]# cd manifests/
[root@k8s-master1 manifests]# vim alertmanager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
stringData:
  alertmanager.yaml: |-
    "global":
      "resolve_timeout": "5m"
... (rest of the file omitted)
Under the global block, add the SMTP settings:
  alertmanager.yaml: |-
    "global":
      "resolve_timeout": "5m"
      # Add the following:
      smtp_from: "123@126.com"             # sender address
      smtp_smarthost: "smtp.126.com:465"   # the sender mailbox's SMTP server
      smtp_hello: "126.com"
      smtp_auth_username: "123@126.com"    # sender account
      smtp_auth_password: "QWEZSDREDR"     # authorization code issued after enabling POP3/SMTP
      smtp_require_tls: false
Then configure the receiver:
    "receivers":
    - "name": "Default"
      # The settings below configure the receiving mailbox:
      "email_configs":
      - to: "123@126.com"
        send_resolved: true
Notes:
➢ email_configs: use email notification;
➢ to: the recipient, here 123@126.com; multiple recipients can be listed, separated by commas;
➢ send_resolved: whether to send a notification when an alert is resolved.
"route":
"group_by":
- "namespace"
#下方添加job和alertname分组
- "job"
- "alertname"
"group\_interval": "5m"
"group\_wait": "30s"
"receiver": "Default"
"repeat\_interval": "12h"
"routes":
- "matchers":
- "alertname = Watchdog"
"receiver": "Watchdog"
- "matchers":
- "severity = critical"
"receiver": "Critical"
kubectl replace -f alertmanager-secret.yaml
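After the Secret is replaced, the operator regenerates Alertmanager's configuration. Assuming busybox wget is available inside the Alertmanager image (it normally is), the loaded configuration can be confirmed through the Alertmanager API:
# A non-zero count means the SMTP settings reached the running Alertmanager:
kubectl exec -n monitoring alertmanager-main-0 -c alertmanager -- \
  wget -qO- http://127.0.0.1:9093/api/v2/status | grep -c smtp_smarthost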
8-2. Open the Alertmanager web UI to check the job and alertname grouping added above:
kubectl get svc -n monitoring
kubectl edit svc -n monitoring alertmanager-main
Note: change type: ClusterIP to type: NodePort, the same way as before.
8-3. Write an alert rule that notifies by email when MySQL cannot be reached or the MySQL service is down.
vim mysql-rules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: mysql-ruler
    prometheus: k8s
    role: alert-rules
  name: mysql-rules
  namespace: monitoring
spec:
  groups:
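A minimal example group to complete the spec, assuming the mysqld_exporter deployed above (mysql_up is the exporter's standard liveness metric; it reads 0 when MySQL is unreachable):
  - name: mysql-exporter
    rules:
    - alert: MysqlDown
      expr: mysql_up == 0   # fires when the exporter cannot reach MySQL
      for: 1m
      labels:
        severity: critical
      annotations:
        summary: "MySQL is down"
        description: "mysqld_exporter at {{ $labels.instance }} has reported MySQL unreachable for 1 minute."
# Create the rule:
kubectl create -f mysql-rules.yaml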