Preface

KubeSphere® is one of the mainstream CNCF-certified Kubernetes distributions. On top of Kubernetes it provides a range of business-oriented, container-centric functional modules such as multi-tenant management, cluster operations, application management, DevOps, and microservice governance.

I recently needed to deploy microservices to Kubernetes and chose KubeSphere, an application-centric container management platform, so I set about working out the deployment. The first installation succeeded but seemed unstable, so I restored the images of all four servers and deployed again from scratch. I ran into quite a few problems and setbacks along the way, and I am recording them here for reference.

Preparing the servers

 
  1. master: 172.16.7.12, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 200 GB (data)

  2. node5: 172.16.7.15, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)

  3. node3: 172.16.7.16, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)

  4. node4: 172.16.7.17, CentOS 7.5, 8 CPUs, 16 GB RAM, 20 GB (/), 1 TB (data)

System optimization and prerequisite software

Automated system optimization

Following the earlier article "Deploying a CDH cluster with ansible vs. manually, plus CM configuration and user permission configuration", first optimize the systems:

 
sh deploy_robot.sh init_ssh
sh deploy_robot.sh init_sys

The deploy_robot.sh script is available at https://github.com/fleapx/cdh-deploy-robot.git
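If you prefer not to use the script, a minimal manual equivalent of the password-free SSH setup (my assumption of what init_ssh automates, not taken from the script itself) looks like this; the node IPs below are the four servers listed above:

# Generate a key pair on the control node (skip if one already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to every node (prompts for the root password once per node)
for host in 172.16.7.12 172.16.7.15 172.16.7.16 172.16.7.17; do
  ssh-copy-id root@$host
done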

Installing the Ansible tooling

 
# Install Ansible on the control machine
yum install -y ansible

Edit /etc/ansible/hosts to configure the hosts to be managed.

 
[all]
172.16.7.12 master
172.16.7.15 node5
172.16.7.16 node3
172.16.7.17 node4

Also change the default configuration in /etc/ansible/ansible.cfg:

 
# uncomment this to disable SSH key host checking
host_key_checking = False
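With the inventory and config in place, it is worth verifying that Ansible can reach every node before going further (a quick check added here, not part of the original steps):

# Should return "pong" from all four hosts
ansible all -m ping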

Installing other system packages

 
ansible all -m shell -a "wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible all -m shell -a "sed -i 's/^.*aliyuncs*/#&/g' /etc/yum.repos.d/CentOS-Base.repo"
ansible all -m shell -a "wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo"
ansible all -m shell -a "yum -y install ebtables socat ipset conntrack nfs-utils rpcbind"
ansible all -m shell -a "yum install -y vim wget yum-utils device-mapper-persistent-data lvm2"

Cluster time synchronization

 
ansible all -m shell -a "yum install chrony -y"
ansible all -m shell -a "systemctl start chronyd"
ansible all -m shell -a "sed -i -e '/^server/s/^/#/' -e '1a server ntp.aliyun.com iburst' /etc/chrony.conf"
ansible all -m shell -a "systemctl restart chronyd"
ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai"

Other system tuning

 
ansible all -m shell -a "echo '* soft nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nofile 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard nproc 655360' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* soft memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo '* hard memlock unlimited' >> /etc/security/limits.conf"
ansible all -m shell -a "echo 'DefaultLimitNOFILE=1024000' >> /etc/systemd/system.conf"
ansible all -m shell -a "echo 'DefaultLimitNPROC=1024000' >> /etc/systemd/system.conf"

Opening all ports

 
  1. ansible all -m shell -a "iptables -P INPUT ACCEPT"

  2. ansible all -m shell -a "iptables -P FORWARD ACCEPT"

  3. ansible all -m shell -a "iptables -P OUTPUT ACCEPT"

  4. ansible all -m shell -a "iptables -F"

Installing Docker

For installing Docker on CentOS 7, configuring the Aliyun registry mirror, and the full set of options, refer to the dedicated article.

 
  1. ansible all -m shell -a "yum remove docker docker-common docker-selinux docker-engine"

  2. ansible all -m shell -a "yum -y install docker-ce-19.03.8-3.el7"

Adding the Kubernetes yum repository

 
tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
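The tee command above only writes the repo file on the machine where it is run; to push the same file to every node (a small addition on my part), Ansible's copy module can be used:

# Distribute the Kubernetes repo file to all nodes and refresh the yum cache
ansible all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
ansible all -m shell -a "yum makecache fast"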

Installing Kubernetes v1.18.6 and KubeSphere v3.0

The official site is https://kubesphere.com.cn/; the multi-node installation guide is at https://kubesphere.com.cn/docs/quick-start/all-in-one-on-linux/

Downloading the kk installer

 
# In China, set this environment variable first
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

 

Then make kk a globally executable command:

mv ./kk /usr/local/bin 
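Before generating the config it is worth confirming that the binary is executable and on the PATH; I am assuming here that this kk build provides a version subcommand:

chmod +x /usr/local/bin/kk
# Print the KubeKey version to confirm the download is intact
kk version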

Generating the multi-node cluster configuration

 
# Create a configuration file template
kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f ./config-kubesphere.yaml

Editing the configuration file

Edit the generated config-kubesphere.yaml as follows:

 
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.16.7.12, internalAddress: 172.16.7.12, user: root, password: azdebug_it}
  - {name: node3, address: 172.16.7.16, internalAddress: 172.16.7.16, user: root, password: azdebug_it}
  - {name: node4, address: 172.16.7.17, internalAddress: 172.16.7.17, user: root, password: azdebug_it}
  - {name: node5, address: 172.16.7.15, internalAddress: 172.16.7.15, user: root, password: azdebug_it}
  roleGroups:
    etcd:
    - node3
    master:
    - master
    worker:
    - node4
    - node3
    - node5
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "172.16.7.12"
    port: "6443"
  kubernetes:
    version: v1.18.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
    #privateRegistry: dockerhub.kubekey.local
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true
    endpointIps: 172.16.7.16
    port: 2379
    tlsEnable: true
  common:
    es:
      elasticsearchDataVolumeSize: 20Gi
      elasticsearchMasterVolumeSize: 4Gi
      elkPrefix: logstash
      logMaxAge: 7
    mysqlVolumeSize: 20Gi
    minioVolumeSize: 20Gi
    etcdVolumeSize: 20Gi
    openldapVolumeSize: 2Gi
    redisVolumSize: 2Gi
  console:
    enableMultiLogin: true  # enable/disable multi login
    port: 30880
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
    jenkinsMemoryLim: 5Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1024m
    jenkinsJavaOpts_Xmx: 1024m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: true
    logsidecarReplicas: 2
  metrics_server:
    enabled: true
  monitoring:
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none  # host | member | none
  networkpolicy:
    enabled: true
  notification:
    enabled: true
  openpitrix:
    enabled: true
  servicemesh:
    enabled: true

Running the installation of Kubernetes v1.18.6 and KubeSphere v3.0.0

 
# With the node information (node names, IPs, etc.) filled in, create the cluster
kk create cluster -f ./config-kubesphere.yaml

Output of a successful run

 
  1. [root@master ~]# sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

  2. W0105 22:45:15.009277 22248 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  3. W0105 22:45:15.009521 22248 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  4. [init] Using Kubernetes version: v1.18.6

  5. [preflight] Running pre-flight checks

  6. error execution phase preflight: [preflight] Some fatal errors occurred:

  7. [ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory

  8. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist

  9. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

  10. [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

  11. To see the stack trace of this error execute with --v=5 or higher

  12. [root@master ~]# cd /home/k

  13. k8s-script/ kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz

  14. [root@master ~]# cd /home/k

  15. k8s-script/ kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz

  16. [root@master ~]# cd /home/k8s-script/

  17. [root@master k8s-script]# export KKZONE=cn

  18. [root@master k8s-script]# ./kk create cluster -f ./k8s-config.yaml

  19. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  20. | name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |

  21. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  22. | node5 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  23. | node4 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  24. | node3 | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  25. | master | y | y | y | y | y | y | y | y | y | | | CST 22:55:35 |

  26. +--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

  27.  
  28. This is a simple check of your environment.

  29. Before installation, you should ensure that your machines meet all requirements specified at

  30. https://github.com/kubesphere/kubekey#requirements-and-recommendations

  31.  
  32. Continue this installation? [yes/no]: yes

  33. INFO[22:57:10 CST] Downloading Installation Files

  34. INFO[22:57:10 CST] Downloading kubeadm ...

  35. INFO[22:57:10 CST] Downloading kubelet ...

  36. INFO[22:57:10 CST] Downloading kubectl ...

  37. INFO[22:57:11 CST] Downloading helm ...

  38. INFO[22:57:11 CST] Downloading kubecni ...

  39. INFO[22:57:11 CST] Configurating operating system ...

  40. [node5 172.16.7.15] MSG:

  41. vm.swappiness = 1

  42. net.ipv4.tcp_tw_reuse = 1

  43. net.ipv4.tcp_tw_recycle = 1

  44. net.ipv4.tcp_keepalive_time = 1200

  45. net.ipv4.ip_local_port_range = 10000 65000

  46. net.ipv4.tcp_max_syn_backlog = 8192

  47. net.ipv4.tcp_max_tw_buckets = 5000

  48. fs.file-max = 655350

  49. net.ipv4.route.gc_timeout = 100

  50. net.ipv4.tcp_syn_retries = 1

  51. net.ipv4.tcp_synack_retries = 1

  52. net.core.netdev_max_backlog = 16384

  53. net.ipv4.tcp_max_orphans = 16384

  54. net.ipv4.tcp_fin_timeout = 2

  55. net.core.somaxconn = 32768

  56. kernel.threads-max = 655360

  57. kernel.pid_max = 655360

  58. vm.max_map_count = 393210

  59. net.ipv4.ip_forward = 1

  60. net.bridge.bridge-nf-call-arptables = 1

  61. net.bridge.bridge-nf-call-ip6tables = 1

  62. net.bridge.bridge-nf-call-iptables = 1

  63. net.ipv4.ip_local_reserved_ports = 30000-32767

  64. [node4 172.16.7.17] MSG:

  65. vm.swappiness = 1

  66. net.ipv4.tcp_tw_reuse = 1

  67. net.ipv4.tcp_tw_recycle = 1

  68. net.ipv4.tcp_keepalive_time = 1200

  69. net.ipv4.ip_local_port_range = 10000 65000

  70. net.ipv4.tcp_max_syn_backlog = 8192

  71. net.ipv4.tcp_max_tw_buckets = 5000

  72. fs.file-max = 655350

  73. net.ipv4.route.gc_timeout = 100

  74. net.ipv4.tcp_syn_retries = 1

  75. net.ipv4.tcp_synack_retries = 1

  76. net.core.netdev_max_backlog = 16384

  77. net.ipv4.tcp_max_orphans = 16384

  78. net.ipv4.tcp_fin_timeout = 2

  79. net.core.somaxconn = 32768

  80. kernel.threads-max = 655360

  81. kernel.pid_max = 655360

  82. vm.max_map_count = 393210

  83. net.ipv4.ip_forward = 1

  84. net.bridge.bridge-nf-call-arptables = 1

  85. net.bridge.bridge-nf-call-ip6tables = 1

  86. net.bridge.bridge-nf-call-iptables = 1

  87. net.ipv4.ip_local_reserved_ports = 30000-32767

  88. [master 172.16.7.12] MSG:

  89. vm.swappiness = 1

  90. net.ipv4.tcp_tw_reuse = 1

  91. net.ipv4.tcp_tw_recycle = 1

  92. net.ipv4.tcp_keepalive_time = 1200

  93. net.ipv4.ip_local_port_range = 10000 65000

  94. net.ipv4.tcp_max_syn_backlog = 8192

  95. net.ipv4.tcp_max_tw_buckets = 5000

  96. fs.file-max = 655350

  97. net.ipv4.route.gc_timeout = 100

  98. net.ipv4.tcp_syn_retries = 1

  99. net.ipv4.tcp_synack_retries = 1

  100. net.core.netdev_max_backlog = 16384

  101. net.ipv4.tcp_max_orphans = 16384

  102. net.ipv4.tcp_fin_timeout = 2

  103. net.core.somaxconn = 32768

  104. kernel.threads-max = 655360

  105. kernel.pid_max = 655360

  106. vm.max_map_count = 393210

  107. net.ipv4.ip_forward = 1

  108. net.bridge.bridge-nf-call-arptables = 1

  109. net.bridge.bridge-nf-call-ip6tables = 1

  110. net.bridge.bridge-nf-call-iptables = 1

  111. net.ipv4.ip_local_reserved_ports = 30000-32767

  112. [node3 172.16.7.16] MSG:

  113. vm.swappiness = 1

  114. net.ipv4.tcp_tw_reuse = 1

  115. net.ipv4.tcp_tw_recycle = 1

  116. net.ipv4.tcp_keepalive_time = 1200

  117. net.ipv4.ip_local_port_range = 10000 65000

  118. net.ipv4.tcp_max_syn_backlog = 8192

  119. net.ipv4.tcp_max_tw_buckets = 5000

  120. fs.file-max = 655350

  121. net.ipv4.route.gc_timeout = 100

  122. net.ipv4.tcp_syn_retries = 1

  123. net.ipv4.tcp_synack_retries = 1

  124. net.core.netdev_max_backlog = 16384

  125. net.ipv4.tcp_max_orphans = 16384

  126. net.ipv4.tcp_fin_timeout = 2

  127. net.core.somaxconn = 32768

  128. kernel.threads-max = 655360

  129. kernel.pid_max = 655360

  130. vm.max_map_count = 393210

  131. net.ipv4.ip_forward = 1

  132. net.bridge.bridge-nf-call-arptables = 1

  133. net.bridge.bridge-nf-call-ip6tables = 1

  134. net.bridge.bridge-nf-call-iptables = 1

  135. net.ipv4.ip_local_reserved_ports = 30000-32767

  136. INFO[22:57:25 CST] Installing docker ...

  137. INFO[22:57:35 CST] Start to download images on all nodes

  138. [node5] Downloading image: kubesphere/pause:3.2

  139. [master] Downloading image: kubesphere/pause:3.2

  140. [node3] Downloading image: kubesphere/etcd:v3.3.12

  141. [node4] Downloading image: kubesphere/pause:3.2

  142. [node5] Downloading image: kubesphere/kube-proxy:v1.18.6

  143. [node4] Downloading image: kubesphere/kube-proxy:v1.18.6

  144. [node3] Downloading image: kubesphere/pause:3.2

  145. [master] Downloading image: kubesphere/kube-apiserver:v1.18.6

  146. [node5] Downloading image: coredns/coredns:1.6.9

  147. [node4] Downloading image: coredns/coredns:1.6.9

  148. [master] Downloading image: kubesphere/kube-controller-manager:v1.18.6

  149. [node5] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  150. [node3] Downloading image: kubesphere/kube-proxy:v1.18.6

  151. [master] Downloading image: kubesphere/kube-scheduler:v1.18.6

  152. [node5] Downloading image: calico/kube-controllers:v3.15.1

  153. [node3] Downloading image: coredns/coredns:1.6.9

  154. [node4] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  155. [node3] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  156. [master] Downloading image: kubesphere/kube-proxy:v1.18.6

  157. [node5] Downloading image: calico/cni:v3.15.1

  158. [node4] Downloading image: calico/kube-controllers:v3.15.1

  159. [master] Downloading image: coredns/coredns:1.6.9

  160. [node5] Downloading image: calico/node:v3.15.1

  161. [node4] Downloading image: calico/cni:v3.15.1

  162. [node3] Downloading image: calico/kube-controllers:v3.15.1

  163. [node3] Downloading image: calico/cni:v3.15.1

  164. [master] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12

  165. [node5] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  166. [node4] Downloading image: calico/node:v3.15.1

  167. [master] Downloading image: calico/kube-controllers:v3.15.1

  168. [node3] Downloading image: calico/node:v3.15.1

  169. [node4] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  170. [node3] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  171. [master] Downloading image: calico/cni:v3.15.1

  172. [master] Downloading image: calico/node:v3.15.1

  173. [master] Downloading image: calico/pod2daemon-flexvol:v3.15.1

  174. INFO[23:01:26 CST] Generating etcd certs

  175. INFO[23:01:32 CST] Synchronizing etcd certs

  176. INFO[23:01:36 CST] Creating etcd service

  177. INFO[23:01:49 CST] Starting etcd cluster

  178. [node3 172.16.7.16] MSG:

  179. Configuration file already exists

  180. Waiting for etcd to start

  181. INFO[23:01:58 CST] Refreshing etcd configuration

  182. INFO[23:01:59 CST] Backup etcd data regularly

  183. INFO[23:02:00 CST] Get cluster status

  184. [master 172.16.7.12] MSG:

  185. Cluster will be created.

  186. INFO[23:02:01 CST] Installing kube binaries

  187. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.16:/tmp/kubekey/kubeadm Done

  188. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.15:/tmp/kubekey/kubeadm Done

  189. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.17:/tmp/kubekey/kubeadm Done

  190. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubeadm to 172.16.7.12:/tmp/kubekey/kubeadm Done

  191. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.16:/tmp/kubekey/kubelet Done

  192. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.12:/tmp/kubekey/kubelet Done

  193. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.15:/tmp/kubekey/kubelet Done

  194. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubelet to 172.16.7.17:/tmp/kubekey/kubelet Done

  195. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.16:/tmp/kubekey/kubectl Done

  196. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.17:/tmp/kubekey/kubectl Done

  197. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.15:/tmp/kubekey/kubectl Done

  198. Push /home/k8s-script/kubekey/v1.18.6/amd64/kubectl to 172.16.7.12:/tmp/kubekey/kubectl Done

  199. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.12:/tmp/kubekey/helm Done

  200. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.16:/tmp/kubekey/helm Done

  201. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.15:/tmp/kubekey/helm Done

  202. Push /home/k8s-script/kubekey/v1.18.6/amd64/helm to 172.16.7.17:/tmp/kubekey/helm Done

  203. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  204. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.16:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  205. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.17:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  206. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.7.12:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  207. INFO[23:02:50 CST] Initializing kubernetes cluster

  208. [master 172.16.7.12] MSG:

  209. W0105 23:02:51.587457 23847 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  210. W0105 23:02:51.587685 23847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  211. [init] Using Kubernetes version: v1.18.6

  212. [preflight] Running pre-flight checks

  213. [preflight] Pulling images required for setting up a Kubernetes cluster

  214. [preflight] This might take a minute or two, depending on the speed of your internet connection

  215. [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

  216. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  217. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  218. [kubelet-start] Starting the kubelet

  219. [certs] Using certificateDir folder "/etc/kubernetes/pki"

  220. [certs] Generating "ca" certificate and key

  221. [certs] Generating "apiserver" certificate and key

  222. [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost lb.kubesphere.local master master.cluster.local node3 node3.cluster.local node4 node4.cluster.local node5 node5.cluster.local] and IPs [10.233.0.1 172.16.7.12 127.0.0.1 172.16.7.12 172.16.7.16 172.16.7.17 172.16.7.15 10.233.0.1]

  223. [certs] Generating "apiserver-kubelet-client" certificate and key

  224. [certs] Generating "front-proxy-ca" certificate and key

  225. [certs] Generating "front-proxy-client" certificate and key

  226. [certs] External etcd mode: Skipping etcd/ca certificate authority generation

  227. [certs] External etcd mode: Skipping etcd/server certificate generation

  228. [certs] External etcd mode: Skipping etcd/peer certificate generation

  229. [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation

  230. [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation

  231. [certs] Generating "sa" key and public key

  232. [kubeconfig] Using kubeconfig folder "/etc/kubernetes"

  233. [kubeconfig] Writing "admin.conf" kubeconfig file

  234. [kubeconfig] Writing "kubelet.conf" kubeconfig file

  235. [kubeconfig] Writing "controller-manager.conf" kubeconfig file

  236. [kubeconfig] Writing "scheduler.conf" kubeconfig file

  237. [control-plane] Using manifest folder "/etc/kubernetes/manifests"

  238. [control-plane] Creating static Pod manifest for "kube-apiserver"

  239. W0105 23:03:00.466175 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  240. [control-plane] Creating static Pod manifest for "kube-controller-manager"

  241. W0105 23:03:00.474746 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  242. [control-plane] Creating static Pod manifest for "kube-scheduler"

  243. W0105 23:03:00.476002 23847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

  244. [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

  245. [apiclient] All control plane components are healthy after 32.002873 seconds

  246. [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

  247. [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster

  248. [upload-certs] Skipping phase. Please see --upload-certs

  249. [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"

  250. [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

  251. [bootstrap-token] Using token: 6zsarg.gxg5eijglkupq85j

  252. [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

  253. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

  254. [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

  255. [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

  256. [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

  257. [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

  258. [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

  259. [addons] Applied essential addon: CoreDNS

  260. [addons] Applied essential addon: kube-proxy

  261.  
  262. Your Kubernetes control-plane has initialized successfully!

  263.  
  264. To start using your cluster, you need to run the following as a regular user:

  265.  
  266. mkdir -p $HOME/.kube

  267. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  268. sudo chown $(id -u):$(id -g) $HOME/.kube/config

  269.  
  270. You should now deploy a pod network to the cluster.

  271. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  272. https://kubernetes.io/docs/concepts/cluster-administration/addons/

  273.  
  274. You can now join any number of control-plane nodes by copying certificate authorities

  275. and service account keys on each node and then running the following as root:

  276.  
  277. kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \

  278. --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9 \

  279. --control-plane

  280.  
  281. Then you can join any number of worker nodes by running the following on each as root:

  282.  
  283. kubeadm join lb.kubesphere.local:6443 --token 6zsarg.gxg5eijglkupq85j \

  284. --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9

  285. [master 172.16.7.12] MSG:

  286. service "kube-dns" deleted

  287. [master 172.16.7.12] MSG:

  288. service/coredns created

  289. [master 172.16.7.12] MSG:

  290. serviceaccount/nodelocaldns created

  291. daemonset.apps/nodelocaldns created

  292. [master 172.16.7.12] MSG:

  293. configmap/nodelocaldns created

  294. [master 172.16.7.12] MSG:

  295. I0105 23:04:05.247536 26174 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.18

  296. W0105 23:04:06.468801 26174 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  297. [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

  298. [upload-certs] Using certificate key:

  299. 13a993ef56fb292d7ecb9947a3095a0eca6c419dfa148569699c474a8d6c28df

  300. [master 172.16.7.12] MSG:

  301. secret/kubeadm-certs patched

  302. [master 172.16.7.12] MSG:

  303. secret/kubeadm-certs patched

  304. [master 172.16.7.12] MSG:

  305. secret/kubeadm-certs patched

  306. [master 172.16.7.12] MSG:

  307. W0105 23:04:08.563212 26292 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  308. kubeadm join lb.kubesphere.local:6443 --token cfoibt.jdyzk3oc1aze53ri --discovery-token-ca-cert-hash sha256:8e1405a3da9e80413ab9aec1952a8259490cb174dcc74ecb96c0c5eafa429fd9

  309. [master 172.16.7.12] MSG:

  310. NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME

  311. master NotReady master 39s v1.18.6 172.16.7.12 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.9

  312. INFO[23:04:09 CST] Deploying network plugin ...

  313. [master 172.16.7.12] MSG:

  314. configmap/calico-config created

  315. customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

  316. customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

  317. customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

  318. customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

  319. customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

  320. customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

  321. customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

  322. customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

  323. customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

  324. customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

  325. customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

  326. customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

  327. customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

  328. customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

  329. customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

  330. clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

  331. clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

  332. clusterrole.rbac.authorization.k8s.io/calico-node created

  333. clusterrolebinding.rbac.authorization.k8s.io/calico-node created

  334. daemonset.apps/calico-node created

  335. serviceaccount/calico-node created

  336. deployment.apps/calico-kube-controllers created

  337. serviceaccount/calico-kube-controllers created

  338. INFO[23:04:13 CST] Joining nodes to cluster

  339. [node5 172.16.7.15] MSG:

  340. W0105 23:04:14.035833 52825 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  341. [preflight] Running pre-flight checks

  342. [preflight] Reading configuration from the cluster...

  343. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  344. W0105 23:04:20.030576 52825 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  345. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  346. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  347. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  348. [kubelet-start] Starting the kubelet

  349. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  350.  
  351. This node has joined the cluster:

  352. * Certificate signing request was sent to apiserver and a response was received.

  353. * The Kubelet was informed of the new secure connection details.

  354.  
  355. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  356. [node4 172.16.7.17] MSG:

  357. W0105 23:04:14.432448 53936 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  358. [preflight] Running pre-flight checks

  359. [preflight] Reading configuration from the cluster...

  360. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  361. W0105 23:04:19.870838 53936 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  362. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  363. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  364. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  365. [kubelet-start] Starting the kubelet

  366. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  367.  
  368. This node has joined the cluster:

  369. * Certificate signing request was sent to apiserver and a response was received.

  370. * The Kubelet was informed of the new secure connection details.

  371.  
  372. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  373. [node3 172.16.7.16] MSG:

  374. W0105 23:04:14.376894 57091 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

  375. [preflight] Running pre-flight checks

  376. [preflight] Reading configuration from the cluster...

  377. [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

  378. W0105 23:04:20.949568 57091 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  379. [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

  380. [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

  381. [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

  382. [kubelet-start] Starting the kubelet

  383. [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

  384.  
  385. This node has joined the cluster:

  386. * Certificate signing request was sent to apiserver and a response was received.

  387. * The Kubelet was informed of the new secure connection details.

  388.  
  389. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  390. [node3 172.16.7.16] MSG:

  391. node/node3 labeled

  392. [node5 172.16.7.15] MSG:

  393. node/node5 labeled

  394. [node4 172.16.7.17] MSG:

  395. node/node4 labeled

  396. [master 172.16.7.12] MSG:

  397. storageclass.storage.k8s.io/local created

  398. serviceaccount/openebs-maya-operator created

  399. clusterrole.rbac.authorization.k8s.io/openebs-maya-operator created

  400. clusterrolebinding.rbac.authorization.k8s.io/openebs-maya-operator created

  401. configmap/openebs-ndm-config created

  402. daemonset.apps/openebs-ndm created

  403. deployment.apps/openebs-ndm-operator created

  404. deployment.apps/openebs-localpv-provisioner created

  405. INFO[23:04:51 CST] Deploying KubeSphere ...

  406. v3.0.0

  407. [master 172.16.7.12] MSG:

  408. namespace/kubesphere-system created

  409. namespace/kubesphere-monitoring-system created

  410. [master 172.16.7.12] MSG:

  411. secret/kube-etcd-client-certs created

  412. [master 172.16.7.12] MSG:

  413. namespace/kubesphere-system unchanged

  414. serviceaccount/ks-installer unchanged

  415. customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged

  416. clusterrole.rbac.authorization.k8s.io/ks-installer unchanged

  417. clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged

  418. deployment.apps/ks-installer unchanged

  419. clusterconfiguration.installer.kubesphere.io/ks-installer created

  420.  
  421.  
  422. INFO[23:10:23 CST] Installation is complete.

  423.  
  424. Please check the result using the command:

  425.  
  426. kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
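Besides tailing the ks-installer log, the overall cluster health can be checked from the master node; this is a quick verification I added, and exact pod names will vary:

# All nodes should eventually report Ready
kubectl get nodes
# KubeSphere components run in the kubesphere-* namespaces; wait for everything to reach Running
kubectl get pods --all-namespaces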

Problems encountered and solutions

 
  1. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.35:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  2. Push /home/k8s-script/kubekey/v1.18.6/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 172.16.8.36:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done

  3. INFO[16:20:31 CST] Initializing kubernetes cluster

  4. [master 172.16.8.36] MSG:

  5. [preflight] Running pre-flight checks

  6. W0105 16:20:38.445541 19396 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory

  7. [reset] No etcd config found. Assuming external etcd

  8. [reset] Please, manually reset etcd to prevent further issues

  9. [reset] Stopping the kubelet service

  10. [reset] Unmounting mounted directories in "/var/lib/kubelet"

  11. W0105 16:20:38.453323 19396 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory

  12. [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

  13. [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

  14. [reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

  15.  
  16. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

  17.  
  18. The reset process does not reset or clean up iptables rules or IPVS tables.

  19. If you wish to reset iptables, you must do so manually by using the "iptables" command.

  20.  
  21. If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)

  22. to reset your system's IPVS tables.

  23.  
  24. The reset process does not clean your kubeconfig files and you must remove them manually.

  25. Please, check the contents of the $HOME/.kube/config file.

  26. [master 172.16.8.36] MSG:

  27. [preflight] Running pre-flight checks

  28. W0105 16:20:40.305612 19617 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory

  29. [reset] No etcd config found. Assuming external etcd

  30. [reset] Please, manually reset etcd to prevent further issues

  31. [reset] Stopping the kubelet service

  32. [reset] Unmounting mounted directories in "/var/lib/kubelet"

  33. W0105 16:20:40.310273 19617 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory

  34. [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

  35. [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

  36. [reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

  37.  
  38. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

  39.  
  40. The reset process does not reset or clean up iptables rules or IPVS tables.

  41. If you wish to reset iptables, you must do so manually by using the "iptables" command.

  42.  
  43. If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)

  44. to reset your system's IPVS tables.

  45.  
  46. The reset process does not clean your kubeconfig files and you must remove them manually.

  47. Please, check the contents of the $HOME/.kube/config file.

  48. ERRO[16:20:41 CST] Failed to init kubernetes cluster: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

  49. W0105 16:20:40.826437 19657 utils.go:26] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

  50. W0105 16:20:40.826682 19657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

  51. [init] Using Kubernetes version: v1.18.6

  52. [preflight] Running pre-flight checks

  53. error execution phase preflight: [preflight] Some fatal errors occurred:

  54. [ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory

  55. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist

  56. [ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

  57. [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

  58. To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1 node=172.16.8.36

  59. WARN[16:20:41 CST] Task failed ...

  60. WARN[16:20:41 CST] error: interrupted by error

  61. Error: Failed to init kubernetes cluster: interrupted by error

  62. Usage:

  63. kk create cluster [flags]

  64.  
  65. Flags:

  66. -f, --filename string Path to a configuration file

  67. -h, --help help for cluster

  68. --skip-pull-images Skip pre pull images

  69. --with-kubernetes string Specify a supported version of kubernetes

  70. --with-kubesphere Deploy a specific version of kubesphere (default v3.0.0)

  71. -y, --yes Skip pre-check of the installation

  72.  
  73. Global Flags:

  74. --debug Print detailed information (default true)

  75.  
  76. Failed to init kubernetes cluster: interrupted by error

Running on the master node:

 sudo -E /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml"

The key errors extracted:

 
[ERROR ExternalEtcdVersion]: couldn't load external etcd's certificate and key pair /etc/ssl/etcd/ssl/node-node3.pem, /etc/ssl/etcd/ssl/node-node3-key.pem: open /etc/ssl/etcd/ssl/node-node3.pem: no such file or directory
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3.pem doesn't exist
[ERROR ExternalEtcdClientCertificates]: /etc/ssl/etcd/ssl/node-node3-key.pem doesn't exist

Check again on the master node:

 
[root@master ~]# ls -lh /etc/ssl/etcd/ssl/
total 32K
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 admin-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 admin-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 ca-key.pem
-rw-r--r--. 1 root root 1.1K Jan  5 19:59 ca.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 member-node3-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 member-node3.pem
-rw-r--r--. 1 root root 1.7K Jan  5 19:59 node-master-key.pem
-rw-r--r--. 1 root root 1.4K Jan  5 19:59 node-master.pem

So node-node3.pem and node-node3-key.pem really were missing from /etc/ssl/etcd/ssl/. What now? It turned out that I had chosen member mode, so only the member-node3 certificates had been generated.

Solution

 
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3-key.pem /etc/ssl/etcd/ssl/node-node3-key.pem
[root@master ~]# cp /etc/ssl/etcd/ssl/member-node3.pem /etc/ssl/etcd/ssl/node-node3.pem
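After copying, the preflight check should find both files; a quick confirmation (my addition):

# Both node-node3 files should now be listed alongside the member-node3 ones
ls -lh /etc/ssl/etcd/ssl/node-node3*.pem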

Run the installer again:

 
export KKZONE=cn
./kk create cluster -f ./k8s-config.yaml

Check the KubeSphere installation log:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

KubeSphere 3.0 is now installed successfully.

Logging in to KubeSphere

 
# Address: http://<node-ip>:30880
# Username: admin
# Default password: P@88w0rd
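The console is exposed as a NodePort service, so any node IP works. If in doubt, the port can be read back from the cluster; this assumes the default service name ks-console:

# Shows the NodePort (30880 by default) of the KubeSphere web console
kubectl get svc -n kubesphere-system ks-console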

Further reading

I also recommend this post: https://blog.csdn.net/weixin_43141746/article/details/110261158
