Installing Helm 3.1 and deploying Harbor on Ceph RBD

[root@bs-k8s-ceph ~]# ceph -s
cluster:
id: 11880418-1a9a-4b55-a353-4b141e2199d8
health: HEALTH_WARN
Long heartbeat ping times on back interface seen, longest is 3884.944 msec
Long heartbeat ping times on front interface seen, longest is 3888.368 msec
application not enabled on 1 pool(s)
clock skew detected on mon.bs-hk-hk02, mon.bs-k8s-ceph

services:
mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
osd: 6 osds: 6 up, 6 in

data:
pools: 3 pools, 320 pgs
objects: 416 objects, 978 MiB
usage: 8.6 GiB used, 105 GiB / 114 GiB avail
pgs: 320 active+clean
[root@bs-k8s-ceph ~]# ceph osd pool application enable harbor rbd
enabled application 'rbd' on pool 'harbor'
[root@bs-k8s-ceph ~]# ceph -s
cluster:
id: 11880418-1a9a-4b55-a353-4b141e2199d8
health: HEALTH_WARN
Long heartbeat ping times on back interface seen, longest is 3870.142 msec
Long heartbeat ping times on front interface seen, longest is 3873.410 msec
clock skew detected on mon.bs-hk-hk02

services:
mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
osd: 6 osds: 6 up, 6 in

data:
pools: 3 pools, 320 pgs
objects: 416 objects, 978 MiB
usage: 8.6 GiB used, 105 GiB / 114 GiB avail
pgs: 320 active+clean

systemctl restart ceph.target    # restart the Ceph daemons and give the clocks a moment to settle
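Restarting ceph.target only makes the clock-skew warning go away until the clocks drift again; the durable fix is NTP synchronization on every mon node. A minimal chrony sketch, assuming the mon hosts can reach a public NTP pool (the server address is an assumption — point it at whatever time source your nodes can reach):

```
# /etc/chrony.conf (minimal sketch) — install with: yum install -y chrony
server ntp.aliyun.com iburst   # assumption: any reachable NTP server works
makestep 1.0 3                 # step the clock at startup if offset > 1s
rtcsync                        # keep the hardware clock in sync
```

Then `systemctl enable --now chronyd` on every mon node; the `clock skew detected` warning should clear once the mons agree on the time.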

[root@bs-k8s-ceph ~]# ceph -s
cluster:
id: 11880418-1a9a-4b55-a353-4b141e2199d8
health: HEALTH_OK

services:
mon: 3 daemons, quorum bs-hk-hk01,bs-hk-hk02,bs-k8s-ceph
mgr: bs-hk-hk01(active), standbys: bs-hk-hk02, bs-k8s-ceph
osd: 6 osds: 6 up, 6 in

data:
pools: 3 pools, 320 pgs
objects: 416 objects, 978 MiB
usage: 8.6 GiB used, 105 GiB / 114 GiB avail
pgs: 320 active+clean
[root@bs-k8s-master01 ~]# kubectl get nodes
The connection to the server 20.0.0.250:8443 was refused - did you specify the right host or port?
[root@bs-hk-hk01 ~]# systemctl start haproxy
[root@bs-k8s-master01 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
bs-k8s-master01 Ready master 7d10h v1.17.2
bs-k8s-master02 Ready master 7d10h v1.17.2
bs-k8s-master03 Ready master 7d10h v1.17.2
bs-k8s-node01 Ready <none> 7d10h v1.17.2
bs-k8s-node02 Ready <none> 7d10h v1.17.2
bs-k8s-node03 NotReady <none> 7d9h v1.17.2 # shut down to save CPU
https://github.com/helm/helm/releases
[root@bs-k8s-master01 helm3]# pwd
/data/k8s/helm3
[root@bs-k8s-master01 helm3]# ll
total 11980
-rw-r--r-- 1 root root 12267464 Feb 17 2020 helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# tar xf helm-v3.1.0-linux-amd64.tar.gz
[root@bs-k8s-master01 helm3]# cp linux-amd64/helm /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
[root@bs-k8s-master01 helm3]# helm --help
The Kubernetes package manager

Common actions for Helm:

  • helm search: search for charts
  • helm pull: download a chart to your local directory to view
  • helm install: upload the chart to Kubernetes
  • helm list: list releases of charts

Environment variables:

+------------------+-----------------------------------------------------------------------------+
| Name | Description |
+------------------+-----------------------------------------------------------------------------+
| $XDG_CACHE_HOME | set an alternative location for storing cached files. |
| $XDG_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $XDG_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
+------------------+-----------------------------------------------------------------------------+

Helm stores configuration based on the XDG base directory specification, so

  • cached files are stored in $XDG_CACHE_HOME/helm
  • configuration is stored in $XDG_CONFIG_HOME/helm
  • data is stored in $XDG_DATA_HOME/helm

By default, the default directories depend on the Operating System. The defaults are listed below:

+------------------+---------------------------+--------------------------------+-------------------------+
| Operating System | Cache Path | Configuration Path | Data Path |
+------------------+---------------------------+--------------------------------+-------------------------+
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
+------------------+---------------------------+--------------------------------+-------------------------+

Usage:
helm [command]

Available Commands:
completion Generate autocompletions script for the specified shell (bash or zsh)
create create a new chart with the given name
dependency manage a chart's dependencies
env Helm client environment information
get download extended information of a named release
help Help about any command
history fetch release history
install install a chart
lint examines a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin install, list, or uninstall Helm plugins
pull download a chart from a repository and (optionally) unpack it in local directory
repo add, list, remove, update, and index chart repositories
rollback roll back a release to a previous revision
search search for a keyword in charts
show show information of a chart
status displays the status of the named release
template locally render templates
test run tests for a release
uninstall uninstall a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version print the client version information

Flags:
--add-dir-header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--debug enable verbose output
-h, --help help for helm
--kube-context string name of the kubeconfig context to use
--kubeconfig string path to the kubeconfig file
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
-n, --namespace string namespace scope for this request
--registry-config string path to the registry config file (default "/root/.config/helm/registry.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging

Use "helm [command] --help" for more information about a command.
[root@bs-k8s-master01 helm3]# source <(helm completion bash)
[root@bs-k8s-master01 helm3]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@bs-k8s-master01 rbd]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"stable" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add google https://kubernetes-charts.storage.googleapis.com
"google" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME URL
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
google https://kubernetes-charts.storage.googleapis.com
jetstack https://charts.jetstack.io

[root@bs-k8s-master01 helm3]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6794 100 6794 0 0 434 0 0:00:15 0:00:15 --:--:-- 761

[root@bs-k8s-master01 helm3]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.1.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
[root@bs-k8s-master01 helm3]# helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}

[root@bs-k8s-master01 helm3]# helm repo update
Hang tight while we grab the latest from your chart repositories…
…Successfully got an update from the "aliyun" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@bs-k8s-master01 helm3]# helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap…
aliyun/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
google/nginx-ingress 1.30.3 0.28.0 An nginx Ingress controller that uses ConfigMap…
google/nginx-ldapauth-proxy 0.1.3 1.13.5 nginx proxy with ldapauth
google/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
stable/nginx-ingress 0.9.5 0.10.2 An nginx Ingress controller that uses ConfigMap…
stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
aliyun/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs …
google/gcloud-endpoints 0.1.2 1 DEPRECATED Develop, deploy, protect and monitor…
stable/gcloud-endpoints 0.1.0 Develop, deploy, protect and monitor your APIs …
[root@bs-k8s-master01 helm3]# helm repo remove stable
"stable" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove google
"google" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo remove jetstack
"jetstack" has been removed from your repositories
[root@bs-k8s-master01 helm3]# helm repo list
NAME URL
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@bs-k8s-master01 helm3]# helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories
[root@bs-k8s-master01 harbor]# pwd
/data/k8s/harbor
[root@bs-k8s-master01 harbor]# ll
total 48
-rw-r--r-- 1 root root 701 Feb 16 19:26 ceph-harbor-pvc.yaml
-rw-r--r-- 1 root root 863 Feb 16 19:18 ceph-harbor-secret.yaml
-rw-r--r-- 1 root root 994 Feb 16 19:21 ceph-harbor-storageclass.yaml
-rw-r--r-- 1 root root 35504 Feb 17 13:07 harbor-1.3.0.tgz
drwxr-xr-x 2 root root 134 2月 16 19:13 rbd
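The contents of ceph-harbor-storageclass.yaml are not shown in this post. For reference, a StorageClass for the external rbd-provisioner typically looks like the sketch below — only the `ceph-harbor` class name and the `harbor` pool come from this post; the monitor address, secret names, and namespace are assumptions to adjust for your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-harbor
provisioner: ceph.com/rbd                      # external rbd-provisioner
parameters:
  monitors: 20.0.0.205:6789                    # assumption: your mon address(es)
  adminId: admin
  adminSecretName: ceph-harbor-admin-secret    # assumption: secret names/namespace
  adminSecretNamespace: kube-system
  pool: harbor                                 # the pool enabled above
  userId: harbor
  userSecretName: ceph-harbor-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
reclaimPolicy: Retain
```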
[root@bs-k8s-master01 harbor]# tar xf harbor-1.3.0.tgz
[root@bs-k8s-master01 harbor]# cd harbor/
[root@bs-k8s-master01 harbor]# ls
cert Chart.yaml conf LICENSE README.md templates values.yaml
[root@bs-k8s-master01 harbor]# cp values.yaml{,.bak}
[root@bs-k8s-master01 harbor]# diff values.yaml{,.bak}
26c26
< commonName: "zisefeizhu.harbor.org"
---
> commonName: ""
29c29
< core: zisefeizhu.harbor.org
---
> core: core.harbor.domain
101c101
< externalURL: https://zisefeizhu.harbor.org
---
> externalURL: https://core.harbor.domain
123c123
< storageClass: "ceph-harbor"
---
> storageClass: ""
129c129
< storageClass: "ceph-harbor"
---
> storageClass: ""
135c135
< storageClass: "ceph-harbor"
---
> storageClass: ""
143c143
< storageClass: "ceph-harbor"
---
> storageClass: ""
151c151
< storageClass: "ceph-harbor"
---
> storageClass: ""
253c253
< harborAdminPassword: "zisefeizhu"
---
> harborAdminPassword: "Harbor12345"
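Mapped back into the chart's value hierarchy, the diff above amounts to the following overrides — hostname, external URL, a storage class for every persistent component, and the admin password. The nesting is reconstructed from the harbor-helm 1.3.0 values layout, so double-check it against your own values.yaml:

```yaml
expose:
  tls:
    commonName: "zisefeizhu.harbor.org"
  ingress:
    hosts:
      core: zisefeizhu.harbor.org
externalURL: https://zisefeizhu.harbor.org
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: "ceph-harbor"
    chartmuseum:
      storageClass: "ceph-harbor"
    jobservice:
      storageClass: "ceph-harbor"
    database:
      storageClass: "ceph-harbor"
    redis:
      storageClass: "ceph-harbor"
harborAdminPassword: "zisefeizhu"
```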
[root@bs-k8s-master01 k8s]# cd nginx-ingress/
[root@bs-k8s-master01 nginx-ingress]# pwd
/data/k8s/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# helm pull aliyun/nginx-ingress
[root@bs-k8s-master01 nginx-ingress]# tar xf nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy
nginx-ingress/templates/controller-deployment.yaml
nginx-ingress/templates/default-backend-deployment.yaml
[root@bs-k8s-master01 nginx-ingress]# grep -irl "extensions/v1beta1" nginx-ingress | grep deploy | xargs sed -i 's#extensions/v1beta1#apps/v1#g'
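The `grep -irl | xargs sed -i` pipeline above can be exercised on a throwaway manifest to confirm it rewrites only the apiVersion string, in place, across every matching file (paths here are scratch files, not the real chart):

```shell
# recreate a tiny chart-like tree with the removed API group
mkdir -p /tmp/chart-fix/templates
cat > /tmp/chart-fix/templates/controller-deployment.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
EOF

# find every file still referencing extensions/v1beta1 and patch it in place
grep -irl "extensions/v1beta1" /tmp/chart-fix | xargs sed -i 's#extensions/v1beta1#apps/v1#g'

grep apiVersion /tmp/chart-fix/templates/controller-deployment.yaml
# → apiVersion: apps/v1
```

`#` is used as the sed delimiter because the pattern itself contains `/`, which avoids escaping every slash.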
[root@bs-k8s-master01 nginx-ingress]# helm install nginx-ingress nginx-ingress
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
Kubernetes 1.16 moved these workload APIs to apps/v1, which makes `selector` a required field in Deployment.spec, so just add one.
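The shape of the fix, for each of the two deployment templates: add a `selector.matchLabels` block that matches `spec.template.metadata.labels` exactly. The label keys below are an assumption for illustration — use whatever labels the chart's templates already put on the pods:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  selector:                 # required by apps/v1
    matchLabels:
      app: nginx-ingress    # assumption: must mirror the pod template labels
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
```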

[root@bs-k8s-master01 nginx]# helm install nginx-ingress nginx-ingress
NAME: nginx-ingress
LAST DEPLOYED: Mon Feb 17 14:12:27 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The nginx-ingress controller has been installed.
Get the application URL by running these commands:
export HTTP_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-controller)
export HTTPS_NODE_PORT=$(kubectl --namespace default get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-controller)
export NODE_IP=$(kubectl --namespace default get nodes -o jsonpath="{.items[0].status.addresses[1].address}")

echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."

An example Ingress that makes use of the controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: exampleService
              servicePort: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
[root@bs-k8s-master01 nginx]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-8fbb5974-l7dsx 1/1 Running 0 115s
nginx-ingress-default-backend-744fdc79c4-xcvqp 1/1 Running 0 115s
[root@bs-k8s-master01 nginx]# pwd
/data/k8s/nginx
[root@bs-k8s-master01 nginx]# ll
总用量 12
drwxr-xr-x 3 root root 119 2月 17 13:32 nginx-ingress
-rw-r--r-- 1 root root 10830 2月 17 13:25 nginx-ingress-0.9.5.tgz
[root@bs-k8s-master01 harbor]# helm install harbor -n harbor harbor
NAME: harbor
LAST DEPLOYED: Mon Feb 17 14:16:05 2020
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://zisefeizhu.harbor.org.
For more details, please visit https://github.com/goharbor/harbor.
[root@bs-k8s-master01 harbor]# kubectl get pvc -n harbor
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-harbor-harbor-redis-0 Bound pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO ceph-harbor 66s
database-data-harbor-harbor-database-0 Bound pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO ceph-harbor 66s
harbor-harbor-chartmuseum Bound pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO ceph-harbor 6m38s
harbor-harbor-jobservice Bound pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO ceph-harbor 6m38s
harbor-harbor-registry Bound pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO ceph-harbor 6m38s
[root@bs-k8s-master01 harbor]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-03dd5393-fad1-471b-8384-b0a5f5403d90 1Gi RWO Retain Bound harbor/harbor-harbor-jobservice ceph-harbor
pvc-1ec866fa-413a-463d-bb04-a0376577ae69 5Gi RWO Retain Bound harbor/harbor-harbor-chartmuseum ceph-harbor
pvc-494a130d-018c-4be3-9b31-e951cc4367a5 20Gi RWO Retain Bound default/wp-pv-claim ceph-rbd 27h
pvc-4b2c0362-aca9-4fc2-b3e8-5fed5bf46b43 1Gi RWO Retain Bound harbor/data-harbor-harbor-redis-0 ceph-harbor
pvc-8ffa3182-a2f6-47d9-a71d-ff8e8b379a16 1Gi RWO Retain Bound default/ceph-pvc ceph-rbd 29h
pvc-ac7d3a09-123e-4614-886c-cded8822a078 20Gi RWO Retain Bound default/mysql-pv-claim ceph-rbd 27h
pvc-b7268d13-e92a-4ab3-846a-26d14672e56c 5Gi RWO Retain Bound harbor/harbor-harbor-registry ceph-harbor
pvc-ce201f8c-0909-4f69-8eb9-aeaeb542de98 1Gi RWO Retain Bound harbor/database-data-harbor-harbor-database-0 ceph-harbor
[root@bs-k8s-master01 harbor]# kubectl get pods -n harbor -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
harbor-harbor-chartmuseum-dcc6f779f-68tvn 1/1 Running 0 32m 10.209.208.21 bs-k8s-node03
harbor-harbor-clair-69789f6695-5zrf8 1/2 CrashLoopBackOff 9 32m 10.209.145.26 bs-k8s-node02
harbor-harbor-core-5675f84d5f-ddhj2 0/1 CrashLoopBackOff 8 32m 10.209.145.27 bs-k8s-node02
harbor-harbor-database-0 1/1 Running 1 32m 10.209.46.93 bs-k8s-node01
harbor-harbor-jobservice-74f469588d-m6w64 0/1 Running 3 32m 10.209.46.91 bs-k8s-node01
harbor-harbor-notary-server-fcbcfdf9c-zgjk8 0/1 CrashLoopBackOff 9 32m 10.209.208.19 bs-k8s-node03
harbor-harbor-notary-signer-9789894bd-8p67d 0/1 CrashLoopBackOff 9 32m 10.209.208.20 bs-k8s-node03
harbor-harbor-portal-56456988bb-6cb9j 1/1 Running 0 32m 10.209.208.18 bs-k8s-node03
harbor-harbor-redis-0 1/1 Running 0 32m 10.209.46.92 bs-k8s-node01
harbor-harbor-registry-6946847b6f-qdgfp 2/2 Running 0 32m 10.209.145.28 bs-k8s-node02
rbd-provisioner-75b85f85bd-d4b8d 1/1 Running 0 136m 10.209.145.25 bs-k8s-node02
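Several pods above (clair, core, notary-server, notary-signer) are in CrashLoopBackOff. A quick way to pull just the failing pod names out of `kubectl get pods` style output is an awk filter on the STATUS column; the sample lines are inlined here so the filter is reproducible without a cluster — in practice you would pipe `kubectl get pods -n harbor --no-headers` into the same awk:

```shell
# STATUS is field 3 in `kubectl get pods` output: NAME READY STATUS RESTARTS AGE
cat <<'EOF' | awk '$3 == "CrashLoopBackOff" {print $1}'
harbor-harbor-chartmuseum-dcc6f779f-68tvn 1/1 Running 0 32m
harbor-harbor-clair-69789f6695-5zrf8 1/2 CrashLoopBackOff 9 32m
harbor-harbor-core-5675f84d5f-ddhj2 0/1 CrashLoopBackOff 8 32m
EOF
# → harbor-harbor-clair-69789f6695-5zrf8
# → harbor-harbor-core-5675f84d5f-ddhj2
```

From there, `kubectl -n harbor logs <pod>` and `kubectl -n harbor describe pod <pod>` are the usual next steps; with Harbor it is common for core/clair/notary to crash-loop until the database pod is fully up.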

The steps beyond this point don't need to be recorded here.
Notes:
1. Do not run the install again once the PVCs have been created. Keep this in mind.
2. The IP in the local hosts file must be the IP of the node running the nginx-ingress controller.
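For the note about the local hosts file, that means an /etc/hosts entry on the client machine pointing the Harbor hostname at an ingress-controller node. The IP below is a placeholder, not a value from this post:

```
# /etc/hosts on the client — replace 20.0.0.x with the IP of the node
# where nginx-ingress-controller is actually scheduled
20.0.0.x  zisefeizhu.harbor.org
```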