The Painful Journey of Uninstalling and Reinstalling K8S
Background
The K8S certificates had expired, then I accidentally deleted a file, and things went downhill from there.
Error summary
root@n149-136-019:~# kubectl get node
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
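For context: kubectl looks for a kubeconfig via the `--kubeconfig` flag, then the `KUBECONFIG` environment variable, then `~/.kube/config`. The error above appears when none of these yields a usable config. A quick workaround on the control-plane node, assuming `/etc/kubernetes/admin.conf` still exists and is valid, is:

```shell
# kubectl found no kubeconfig, so it fell back to the KUBERNETES_MASTER /
# in-cluster path and failed. Point KUBECONFIG at the admin config directly
# (assumption: admin.conf is still present and its certs are valid):
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "using kubeconfig: $KUBECONFIG"
```

In this case the certs had expired and a file was gone, so the real fix was the reinstall below; the export only helps once the control plane is healthy again.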
After uninstalling, reinstall:
sudo kubeadm init --kubernetes-version v1.18.14 --pod-network-cidr=10.244.0.0/16 --v=5
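The uninstall itself is not shown above; a teardown along these lines (assuming the default kubeadm file layout) is what typically precedes a fresh `kubeadm init`. The sketch only prints the commands unless `DO_IT=1` is set, so it can be reviewed before anything is deleted:

```shell
# Sketch of a kubeadm teardown; dry-run by default (set DO_IT=1 to execute).
run() { if [ "${DO_IT:-0}" = 1 ]; then sudo "$@"; else echo "would run: sudo $*"; fi; }

run kubeadm reset -f                  # undo the previous init/join
run rm -rf /etc/kubernetes            # stale manifests, configs, certs
run rm -rf /var/lib/etcd              # old etcd data (control-plane node only)
run systemctl restart docker kubelet  # restart the runtimes with a clean slate
```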
Error:
I0713 10:14:07.956453 1200631 checks.go:844] pulling k8s.gcr.io/kube-scheduler:v1.18.14
I0713 10:15:23.549910 1200631 checks.go:844] pulling k8s.gcr.io/kube-proxy:v1.18.14
I0713 10:16:38.843371 1200631 checks.go:838] image exists: k8s.gcr.io/pause:3.2
I0713 10:16:38.884503 1200631 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I0713 10:16:38.921765 1200631 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.7
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.14: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.14: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.14: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.14: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:422
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:147
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/an
The images could not be pulled from k8s.gcr.io. Checking the machine showed the corresponding images were already present locally, just tagged with a different version.
Solution
docker tag k8s.gcr.io/kube-apiserver:v1.18.19 k8s.gcr.io/kube-apiserver:v1.18.14
docker tag k8s.gcr.io/kube-controller-manager:v1.18.19 k8s.gcr.io/kube-controller-manager:v1.18.14
docker tag k8s.gcr.io/kube-scheduler:v1.18.19 k8s.gcr.io/kube-scheduler:v1.18.14
docker tag k8s.gcr.io/kube-proxy:v1.18.19 k8s.gcr.io/kube-proxy:v1.18.14
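The four `docker tag` lines follow one pattern, so a loop is less typo-prone. This variant echoes the commands first; pipe the output to `sh` once the list looks right:

```shell
# Retag the locally present v1.18.19 images with the v1.18.14 tags kubeadm expects.
have=v1.18.19
want=v1.18.14
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  echo docker tag "k8s.gcr.io/$img:$have" "k8s.gcr.io/$img:$want"
done  # append "| sh" after done to actually run the tagging
```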
Re-run init, and this time the image checks succeed:
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0713 10:25:30.571596 1248623 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0713 10:25:30.948477 1248623 request.go:557] Throttling request took 197.998153ms, request: POST:https://10.149.136.19:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.149.136.19:6443 --token v2wnro.hwz6clv2tjos03gn \
--discovery-token-ca-cert-hash sha256:e137dbd70babcec79198767ca60b845f543dd057c3b3b225e234820273b719e0
Moving on:
root@n149-136-019:~# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
n149-136-019   Ready    master   2m46s   v1.18.0
Joining a node
kubeadm join 10.149.136.19:6443 --token v2wnro.hwz6clv2tjos03gn \
--discovery-token-ca-cert-hash sha256:e137dbd70babcec79198767ca60b845f543dd057c3b3b225e234820273b719e0
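One caveat with reusing the join command verbatim: bootstrap tokens expire (24 hours by default), so a node joining later needs a freshly generated command. It is kept behind an `echo` here since it requires a running control plane:

```shell
# Print a fresh "kubeadm join ..." line (new token plus current CA cert hash).
# Remove the leading echo and run this on the control-plane node for real.
echo kubeadm token create --print-join-command
```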
While the node was trying to join:
root@n146-074-067:~# kubeadm join 10.149.136.19:6443 --token v2wnro.hwz6clv2tjos03gn \
> --discovery-token-ca-cert-hash sha256:e137dbd70babcec79198767ca60b845f543dd057c3b3b225e234820273b719e0
W0713 10:36:54.707933 3238231 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@n146-074-067:~#
On the node, run
kubeadm reset
then join again: success.
Installing k8s-dashboard
kubectl apply -f https://kuboard.cn/install-script/k8s-dashboard/auth.yaml
openssl req -x509 -new -nodes -key ca.key -subj "/CN=149.136.019" -days 10000 -out ca.crt
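The certificate step can be exercised end to end without a cluster. The sketch below first generates the key (the command above assumes `ca.key` already exists), self-signs with the same subject, then sanity-checks the result:

```shell
# Generate a CA key, self-sign a certificate with the subject used above,
# and print the subject of the resulting cert as a sanity check.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=149.136.019" -days 10000 -out ca.crt
openssl x509 -in ca.crt -noout -subject
```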