Check the official website

KubeSphere: https://kubesphere.com.cn/

If you already have a Kubernetes cluster set up, choose the "Install on Kubernetes" option.

Go to the corresponding installation documentation.

Prerequisites
Install Helm
If your network is slow, this can be hard to install!
Run the download script on Linux:

curl -L https://git.io/get_helm.sh | bash
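If the pipe-to-bash form is unreliable, the same script can be downloaded first and then run separately (same effect, just split into two steps):

curl -fsSL -o get_helm.sh https://git.io/get_helm.sh
chmod +x get_helm.sh
./get_helm.sh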

An alternative installation method is provided here: the one above is the script install, and below is the manual install.
Manually install Helm
The script download may fail, and even when the script is fetched, downloading the https://get.helm.sh/helm-v2.16.10-linux-amd64.tar.gz tarball can also fail, so a manual installation is described here.

  1. After running curl -L https://git.io/get_helm.sh | bash, the script goes off to download a tarball. Copy the tarball URL and download it with a download manager (e.g. Thunder/Xunlei);
  2. Extract and install:
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64
cp helm /usr/local/bin
Running helm version will now complain that Tiller is missing:

[root@master1 helm]# helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Error: could not find a ready tiller pod
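Until Tiller is running, you can still verify just the client side, which avoids the error above:

helm version --client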
  3. Install Tiller
Create the RBAC configuration file till-rbac-config.yaml:
vi till-rbac-config.yaml, with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system


Apply the file:
kubectl apply -f till-rbac-config.yaml
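To confirm that the ServiceAccount and ClusterRoleBinding were actually created, you can check them directly:

kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller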

  4. Initialize the Tiller service
    The Tiller image version should ideally match the Helm client version:
helm init --upgrade --service-account tiller  --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
  5. Check whether the installation succeeded:
kubectl get pods -n kube-system | grep tiller
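You can also wait for the deployment to finish rolling out instead of polling the pod list (tiller-deploy is the deployment created by helm init):

kubectl -n kube-system rollout status deployment/tiller-deploy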
  6. Verify the installation:
[root@master k8s]# helm version
Client: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Installation succeeded.

Tiller initialization problem: the Tiller pod stays in the ContainerCreating state.

kubectl describe pod  tiller-xxx-xxx-xxx-zx668 -n xxx

Warning  FailedCreatePodSandBox  <invalid>                       kubelet, ttt       Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "39cd2aa4a80fefff64a30ec5c89e59041b11ef2779595df62a384bc79bc0f6b1" network for pod "tiller-deploy-648df857bb-zdn2p": networkPlugin cni failed to set up pod "tiller-deploy-648df857bb-zdn2p_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24
  ... (the same FailedCreatePodSandBox warning repeats for every retry; only the sandbox container ID differs) ...
  Normal   SandboxChanged          <invalid> (x12 over <invalid>)  kubelet, ttt       Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  <invalid> (x4 over <invalid>)   kubelet, ttt       (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ba3b447878b051d667261458bb6371988cb5dda39a71bf06494db4f0102cd016" network for pod "tiller-deploy-648df857bb-zdn2p": networkPlugin cni failed to set up pod "tiller-deploy-648df857bb-zdn2p_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24
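The mismatch can be confirmed by comparing the cni0 bridge address with the node's pod CIDR (replace <node-name> with the affected node; it is a placeholder):

ip addr show cni0
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'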

Solution
# reset the node's kubeadm state
kubeadm reset
# stop kubelet and docker before cleaning up
systemctl stop kubelet
systemctl stop docker
# remove leftover CNI and kubelet state
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
# bring the stale interfaces down and delete them
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
# restart the services
systemctl start docker
systemctl start kubelet

Then re-join the node to the master (re-run kubeadm join).
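If the original kubeadm join command is no longer at hand, a new one can be generated on the master first (the token and hash are cluster-specific):

kubeadm token create --print-join-command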

Continuing down the page, the next step is to install OpenEBS.

Install OpenEBS
Just follow the steps in the official documentation.
Official OpenEBS installation instructions
The official docs provide two installation methods!
The first is the Helm method, which tends to be problematic: the Helm chart repositories are either blocked or missing packages.
Solution

First search for the chart to see whether it is available; if it is, there is no problem:
helm search stable/openebs --version 1.5.0

If it is not found, replace the repository:
helm repo remove stable


Use an alternative chart repository
Microsoft's chart mirror, strongly recommended; it carries basically every chart the official repository has.
http://mirror.azure.cn/kubernetes/charts/

Alibaba's chart repository: https://developer.aliyun.com/hub#/?_k=bfaiyc
helm repo add apphub https://apphub.aliyuncs.com/

Kubeapps Hub
https://hub.kubeapps.com/charts/incubator
The official chart hub; it can be unreliable from inside China.

Add a repository:
helm repo add stable <any of the URLs above>
Note: if the add command hangs for a long time without responding, open the URL on its own in a browser; if it cannot be reached, it cannot be added.
helm repo add openebs https://openebs.github.io/charts
helm repo update
Check which repositories have been added:
helm repo list
Install OpenEBS:
helm install --namespace openebs --name openebs openebs/openebs --version 2.1.0

After the installation completes, you will see output like the following:
NAME:   openebs
LAST DEPLOYED: Sun Sep 20 17:03:06 2020
NAMESPACE: openebs
STATUS: DEPLOYED
...
Please note that, OpenEBS uses iSCSI for connecting applications with the
OpenEBS Volumes and your nodes should have the iSCSI initiator installed.

The second method applies the YAML manifest directly. The YAML file may fail to download, so it is recommended to open the URL in a browser first, save the content as a local YAML file, and then apply it, as sketched below.
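For example, roughly like this (the exact operator manifest URL and version should be taken from the official OpenEBS/KubeSphere docs; the one below is an assumption):

curl -L -o openebs-operator.yaml https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
kubectl apply -f openebs-operator.yaml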

Check the pods in the openebs namespace:

kubectl get pods -n openebs

Installation succeeded.
Wait here until all pods in the openebs namespace are up and running.
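KubeSphere also needs a default StorageClass for the PVCs it creates. With OpenEBS this is usually done by marking the openebs-hostpath class as default (class name assumed from a standard OpenEBS install; check kubectl get sc for the actual names):

kubectl get sc
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'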

Error cases

error: unable to recognize "demo-openebs-hostpath.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

error: unable to recognize "demo-openebs-hostpath.yaml": no matches for kind "Deployment" in version "apps/v1beta1"

error: unable to recognize "demo-openebs-hostpath.yaml": no matches for kind "Deployment" in version "apps/v1beta2"

These errors are caused by the Kubernetes version: newer releases no longer serve Deployments under these deprecated API groups.

The Kubernetes version used here is 1.17.3, so change the manifest to:
apiVersion: apps/v1
kind: Deployment
Then re-apply the file.
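Note that apps/v1 Deployments also require an explicit spec.selector, so a converted manifest looks roughly like the following sketch (name, labels, and image are placeholders, not the actual demo manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                 # placeholder name
spec:
  replicas: 1
  selector:                  # required in apps/v1
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo            # must match the selector above
    spec:
      containers:
        - name: demo
          image: nginx:1.19  # placeholder image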

Minimal installation of KubeSphere
Copy the installation file
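The minimal-install manifest comes from the ks-installer project; for instance it can be fetched locally first and then applied (the exact file name and branch depend on the KubeSphere version, so treat this URL as an assumption):

curl -L -o kubesphere-mini.yaml https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml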

Apply the minimal installation file:

kubectl apply -f kubesphere-mini.yaml
namespace/kubesphere-system created
configmap/ks-installer created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created

Check the pods:

kubectl get pods --all-namespaces
Once the installer pod is Running:
kubesphere-system   ks-installer-75b8d89dff-hchj4                  1/1     Running            0          80s
you can follow the KubeSphere startup logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Installation complete!
Access the console and log in.
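The console is normally exposed as a NodePort service (ks-console) in the kubesphere-system namespace, and the address plus the default account (typically admin / P@88w0rd in this version) are printed at the end of the installer log:

kubectl get svc ks-console -n kubesphere-system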

Uninstall Tiller (removes the Tiller ServiceAccount, ClusterRoleBinding, and secrets, plus everything in kube-system labeled app=helm):

kubectl get -n kube-system secrets,sa,clusterrolebinding -o name|grep tiller|xargs kubectl -n kube-system delete
kubectl get all -n kube-system -l app=helm -o name|xargs kubectl delete -n kube-system

Error messages and final pod status (screenshots omitted):
Here some of the pods fail to start, which leaves KubeSphere unusable. The likely causes are PVC problems and cluster DNS problems.
Similar issues found elsewhere:
cluster DNS default IP configuration issue
redis-ha fails to start
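Since the failures below are all about missing PersistentVolumeClaims, a useful first check is whether the PVCs exist and whether a default StorageClass is set (data-redis-ha-server-0 is one of the PVC names from the log below):

kubectl get pvc -n kubesphere-system
kubectl get sc
kubectl describe pvc data-redis-ha-server-0 -n kubesphere-system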

TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]

TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.262682", "end": "2020-09-02 16:11:22.699850", "msg": "non-zero return code", "rc": 1, "start": "2020-09-02 16:11:22.437168", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
changed: [localhost]

TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.265579", "end": "2020-09-02 16:11:23.483929", "msg": "non-zero return code", "rc": 1, "start": "2020-09-02 16:11:23.218350", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]

TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
changed: [localhost]

TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.264863", "end": "2020-09-02 16:11:24.268576", "msg": "non-zero return code", "rc": 1, "start": "2020-09-02 16:11:24.003713", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]

TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.270708", "end": "2020-09-02 16:11:24.682689", "msg": "non-zero return code", "rc": 1, "start": "2020-09-02 16:11:24.411981", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]

TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})

TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]

TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml)
skipping: [localhost] => (item=mysql.yaml)

TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)

TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})

TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]

TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]

TASK [common : debug] **********************************************************
skipping: [localhost]

TASK [common : fail] ***********************************************************
skipping: [localhost]

TASK [common : Kubesphere | create minio config directory] *********************
