Installing Kubernetes with kubeadm

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes builds on 15 years of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

Kubernetes deployment tools

  • kubeadm: the official tool for quickly installing a Kubernetes cluster; it is updated alongside every Kubernetes release and incorporates adjustments to cluster-configuration best practices.
  • kops: easily install a Kubernetes cluster on AWS;
  • KRIB: for bare-metal servers;
  • Kubespray: install Kubernetes clusters hosted on GCE, Azure, OpenStack, AWS, vSphere, Oracle Cloud Infrastructure (experimental), or bare metal.

Deploying a Kubernetes cluster

Lab environment:

Hostname     IP address        OS          Software versions
k8s-master   192.168.101.178   CentOS 7.7  docker-ce 19.03.4, kubeadm 1.13.3, kubelet 1.13.3, kubectl 1.13.3, kubernetes 1.13.3
k8s-node1    192.168.101.179   CentOS 7.7  docker-ce 19.03.4, kubeadm 1.13.3, kubelet 1.13.3, kubectl 1.13.3, kubernetes 1.13.3
k8s-node2    192.168.101.180   CentOS 7.7  docker-ce 19.03.4, kubeadm 1.13.3, kubelet 1.13.3, kubectl 1.13.3, kubernetes 1.13.3

1. Container runtimes

To run containers in Pods, Kubernetes needs a container runtime installed.

  • Docker
  • CRI-O
  • Containerd
  • Other CRI runtimes: frakti

1.1 Installing Docker

Run the following on all nodes.

Docker CE 19.03.4 is used in this guide (note that the latest Docker version validated for kubeadm 1.13 is 18.06, so a pre-flight warning about the Docker version will appear later).

# Install Docker CE
# Set up the repository
# Install the required packages
$ yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

# Add the Docker repository
$ yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

# Install Docker CE
$ yum makecache fast
$ yum install docker-ce -y

# Enable and start Docker CE
$ sudo systemctl enable docker
$ sudo systemctl start docker

# Configure registry mirrors: create the daemon config file

$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": [
"https://hlef81mt.mirror.aliyuncs.com",
"https://8t4b6eke.mirror.aliyuncs.com"
]
}
EOF

# Restart Docker
$ systemctl restart docker
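
A quick check that Docker is installed, running, and using the configured mirrors (a sketch; version output will differ by environment):

$ docker --version
$ systemctl is-active docker
$ docker info | grep -A 3 -i 'registry mirrors'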

2. Before you begin

2.1 Operating system and hardware requirements

  • Ubuntu 16.04+
  • Debian 9+
  • CentOS 7
  • Red Hat Enterprise Linux(RHEL)7
  • Fedora 25+
  • HypriotOS v1.0.1+
  • Container Linux(tested with 1800.6.0)
  • 2 GB or more of RAM per machine
  • 2 or more CPUs

2.2 Full network connectivity between all machines in the cluster

  • Set the hostname on each machine

    $ hostnamectl set-hostname k8s-master   # on 192.168.101.178
    $ hostnamectl set-hostname k8s-node1    # on 192.168.101.179
    $ hostnamectl set-hostname k8s-node2    # on 192.168.101.180
    
  • Configure mutual hostname resolution via the /etc/hosts file

    $ cat >> /etc/hosts << EOF
    192.168.101.178 k8s-master
    192.168.101.179 k8s-node1
    192.168.101.180 k8s-node2
    EOF
    
  • Ensure every node has a unique MAC address (see the verification sketch at the end of this list)

  • Ensure the product_uuid is unique on every node

    $ cat /sys/class/dmi/id/product_uuid
    AC484D56-8A09-0B1D-20C2-8DBB53A55F9E
    $ cat /sys/class/dmi/id/product_uuid
    EBFF4D56-A998-0373-1D67-785D7D45F432
    $ cat /sys/class/dmi/id/product_uuid
    4F414D56-37F6-8A60-8368-9BAB069432FD
    
  • Ensure the iptables tooling does not use the nftables backend (the newer firewall configuration framework)

  • Disable SELinux and the firewall

    # Permanently disable SELinux (takes effect after a reboot)
    $ sed -i 's/enforcing/disabled/' /etc/selinux/config
    # Temporarily disable SELinux (no reboot required)
    $ setenforce 0
    
    # Stop and disable the firewall
    $ systemctl stop firewalld
    $ systemctl disable firewalld
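
A quick verification pass for the items above, run on every node (a sketch; the hostnames are the ones from the lab table):

$ ip link | grep link/ether           # compare MAC addresses across nodes
$ cat /sys/class/dmi/id/product_uuid  # compare product_uuid values across nodes
$ ping -c 1 k8s-master                # hostname resolution via /etc/hosts
$ ping -c 1 k8s-node1
$ ping -c 1 k8s-node2
$ getenforce                          # should print Permissive or Disabled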
    

2.3 Required open ports

Control-plane node(s)

Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443*         Kubernetes API server     All
TCP        Inbound     2379-2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API               Self, Control plane
TCP        Inbound     10251         kube-scheduler            Self
TCP        Inbound     10252         kube-controller-manager   Self

Worker node(s)

Protocol   Direction   Port Range     Purpose              Used By
TCP        Inbound     10250          Kubelet API          Self, Control plane
TCP        Inbound     30000-32767    NodePort Services†   All
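
This guide simply disables firewalld (see section 2.2). If you prefer to keep firewalld running instead, the ports above could be opened explicitly; a sketch using firewall-cmd, with the port numbers taken from the tables above:

# On the control-plane node
$ firewall-cmd --permanent --add-port=6443/tcp
$ firewall-cmd --permanent --add-port=2379-2380/tcp
$ firewall-cmd --permanent --add-port=10250-10252/tcp
$ firewall-cmd --reload

# On worker nodes
$ firewall-cmd --permanent --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=30000-32767/tcp
$ firewall-cmd --reload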

2.4 Swap must be disabled for the kubelet to work properly

Starting with Kubernetes 1.8, system swap must be turned off; with the default configuration the kubelet will not start if swap is enabled (this can be overridden by setting the kubelet flag --fail-swap-on to false). Disable swap on every machine:

$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ swapoff -a
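
You can confirm that swap is now off before continuing (a quick check):

$ free -h           # the Swap line should show 0B total
$ cat /proc/swaps   # should list no active swap devices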

2.5 Kernel parameters

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sysctl --system
# (earlier output omitted)
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

2.6 The br_netfilter module must be loaded

$ lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
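
If lsmod prints nothing, the module can be loaded manually and made persistent across reboots; a sketch (the file name k8s.conf is an arbitrary choice):

$ modprobe br_netfilter
$ cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF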

3. Installing kubeadm, kubelet, and kubectl

The packages to install are kubelet, kubeadm, kubectl, and kubernetes-cni.

  • kubeadm: the command that bootstraps the cluster;
  • kubelet: the component that runs on every machine in the cluster and performs operations such as starting pods and containers;
  • kubectl: the command-line tool for operating the cluster.

3.1 Add the Alibaba Cloud yum repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.2 Install the packages

$ yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 kubernetes-cni-0.6.0
$ systemctl enable kubelet
$ systemctl start kubelet

The installed package versions are kubelet, kubeadm, and kubectl 1.13.3, plus kubernetes-cni 0.6.0; they can be confirmed as shown below.
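
A quick way to confirm the installed versions (a sketch; output will vary with your environment):

$ rpm -qa | grep -E 'kubelet|kubeadm|kubectl|kubernetes-cni'
$ kubeadm version -o short
$ kubectl version --client --short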

4. Bootstrapping the cluster from the master node

The kubelet and kubectl versions should match the Kubernetes version; only a small amount of version skew is allowed.

4.1 Deploy the Kubernetes master (run on the k8s-master node only)
# Initialize the control plane (flannel will be the pod network); make a note of the join command printed at the end

$ kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# Alternatively, the same command for a newer release (Kubernetes v1.18.4):
$ kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.4 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
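
The init output below notes that the required images can also be pulled ahead of time with `kubeadm config images pull`; a sketch using the same mirror and version as above:

$ kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.3
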
4.2 Output
W0609 06:58:12.295107   47471 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.11. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.101.178]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.101.178 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.101.178 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.002168 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: d8qsiv.rc9lsqnh3ptkc85o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.101.178:6443 --token d8qsiv.rc9lsqnh3ptkc85o --discovery-token-ca-cert-hash sha256:80ad0a9635b069b72730bd5e3d93eb30d89119413df0451ca988042dc2a354d2
4.3 Downloaded Docker images
$ docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.13.3             fe242e556a99        16 months ago       181MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.13.3             0482f6400933        16 months ago       146MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.13.3             98db19758ad4        16 months ago       80.3MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.13.3             3a6f709e97a0        16 months ago       79.6MB
registry.aliyuncs.com/google_containers/coredns                   1.2.6               f59dcacceff4        19 months ago       40MB
registry.aliyuncs.com/google_containers/etcd                      3.2.24              3cab8e1b9802        21 months ago       220MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
4.4 Follow the instructions from the output:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
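
If you are operating as root, an alternative (also used later on the worker nodes) is to point KUBECONFIG at the admin kubeconfig directly:

$ export KUBECONFIG=/etc/kubernetes/admin.conf
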
4.5 Check the cluster status
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}

Confirm that all components are in the Healthy state. If cluster initialization runs into problems, the following commands can be used to clean up before retrying:

$ kubeadm reset
$ ifconfig cni0 down
$ ip link delete cni0
$ ifconfig flannel.1 down
$ ip link delete flannel.1
$ rm -rf /var/lib/cni/

5. Installing the flannel network

$ kubectl apply -f http://res.chinaskinhospital.com/Upload/Academic/20191219/2019121914494283332868.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
The contents of kube-flannel.yaml (the manifest applied above) are:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

If pulling the flannel image from quay.io is slow or blocked (the upstream flannel manifest references quay.io/coreos/flannel), it can be pulled from a mirror repository and retagged:

$ docker pull vinsonwu/flannel:v0.11.0-amd64

v0.11.0-amd64: Pulling from vinsonwu/flannel
Digest: sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a
Status: Downloaded newer image for vinsonwu/flannel:v0.11.0-amd64
docker.io/vinsonwu/flannel:v0.11.0-amd64

$ docker tag vinsonwu/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

$ docker rmi vinsonwu/flannel:v0.11.0-amd64
Untagged: vinsonwu/flannel:v0.11.0-amd64
Untagged: vinsonwu/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a
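
With the image available locally, the manifest can also be applied from the saved kube-flannel.yaml file, and the flannel pods checked until they reach the Running state; a sketch:

$ kubectl apply -f kube-flannel.yaml
$ kubectl get pods -n kube-system -l app=flannel -o wide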

5.1 Common operations

# List pods in all namespaces
$ kubectl get pods --all-namespaces
# List nodes
$ kubectl get nodes
# Check cluster component status
$ kubectl get cs
# Check deployments in kube-system
$ kubectl get deploy -n kube-system
# Check services (and their mapped ports) in kube-system
$ kubectl get svc -n kube-system
# View kubelet logs
$ journalctl -u kubelet
# List bootstrap tokens
$ kubeadm token list
# Create a token and print the join command for adding nodes
$ kubeadm token create --print-join-command

The flannel DaemonSets can be viewed with the following command:

$ kubectl get ds -l app=flannel -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     1         1         1       1            1           beta.kubernetes.io/arch=amd64     81s
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       81s
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     81s
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   81s
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     81s

6. Joining nodes to the cluster

Join the two worker nodes to the cluster:
# Create a token and print the join command (run on the master)
$ kubeadm token create --print-join-command
kubeadm join 192.168.101.178:6443 --token 0as6gc.mzuekzrcwma9sgst --discovery-token-ca-cert-hash sha256:80ad0a9635b069b72730bd5e3d93eb30d89119413df0451ca988042dc2a354d2

# Run on each of the two worker nodes
$ kubeadm join 192.168.101.178:6443 --token 0as6gc.mzuekzrcwma9sgst --discovery-token-ca-cert-hash sha256:80ad0a9635b069b72730bd5e3d93eb30d89119413df0451ca988042dc2a354d2

[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.11. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.101.178:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.101.178:6443"
[discovery] Requesting info from "https://192.168.101.178:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.101.178:6443"
[discovery] Successfully established connection with API Server "192.168.101.178:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

# Running kubectl on a worker node fails with: The connection to the server localhost:8080 was refused
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Copy /etc/kubernetes/admin.conf from the master node into the same directory on each worker node
$ scp /etc/kubernetes/admin.conf root@192.168.101.179:/etc/kubernetes/
$ scp /etc/kubernetes/admin.conf root@192.168.101.180:/etc/kubernetes/

# Configure the KUBECONFIG environment variable
$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf">>/etc/profile
$ source /etc/profile
$ echo $KUBECONFIG
/etc/kubernetes/admin.conf

# Verify
$ kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-78d4cf999f-6t6fn         1/1     Running   0          21h
coredns-78d4cf999f-zrlcf         1/1     Running   0          21h
etcd-master                      1/1     Running   0          21h
kube-apiserver-master            1/1     Running   0          21h
kube-controller-manager-master   1/1     Running   2          21h
kube-flannel-ds-amd64-jcc6d      1/1     Running   0          20m
kube-flannel-ds-amd64-kgcht      1/1     Running   0          16m
kube-proxy-hbp86                 1/1     Running   0          16m
kube-proxy-wtqj5                 1/1     Running   0          21h
kube-scheduler-master            1/1     Running   3          21h
Check that the nodes have joined:
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   8m9s    v1.13.3
k8s-node1    Ready    <none>   3m39s   v1.13.3
k8s-node2    Ready    <none>   3m38s   v1.13.3
After a successful installation, the cluster looks like this:
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   20h     v1.18.3
k8s-node1    Ready    <none>   3h28m   v1.18.3
k8s-node2    Ready    <none>   66s     v1.18.3

$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-78d4cf999f-p8qgl             1/1     Running   0          8m21s
coredns-78d4cf999f-rwch4             1/1     Running   0          8m21s
etcd-k8s-master                      1/1     Running   0          7m17s
kube-apiserver-k8s-master            1/1     Running   0          7m18s
kube-controller-manager-k8s-master   1/1     Running   0          7m27s
kube-flannel-ds-amd64-gtq75          1/1     Running   0          4m9s
kube-flannel-ds-amd64-tgqz6          1/1     Running   0          6m8s
kube-flannel-ds-amd64-x78wl          1/1     Running   0          4m10s
kube-proxy-5ql4f                     1/1     Running   0          4m9s
kube-proxy-lh29l                     1/1     Running   0          8m21s
kube-proxy-mwmh6                     1/1     Running   0          4m10s
kube-scheduler-k8s-master            1/1     Running   0          7m43s
Removing a node from the cluster
  • On the master node:

    $ kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
    node/k8s-node1 cordoned
    WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-dwtcb, kube-system/kube-proxy-skxh9
    node/k8s-node1 drained
    $ kubectl delete node k8s-node1
    node "k8s-node1" deleted
    $ kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    k8s-master   Ready    master   20h   v1.18.3
    k8s-node2    Ready    <none>   11m   v1.18.3
    
  • On the node being removed:

    $ kubeadm reset
    $ ifconfig cni0 down
    $ ip link delete cni0
    $ ifconfig flannel.1 down
    $ ip link delete flannel.1
    $ rm -rf /var/lib/cni/
    

7. Testing the cluster

7.1 Deploy nginx
# Check node status on the master
$ kubectl get node
NAME       STATUS   ROLES    AGE    VERSION
k8s-219    Ready    <none>   112m   v1.13.3
k8s-220    Ready    <none>   113m   v1.13.3
k8sm-218   Ready    master   162m   v1.13.3
# Verify the cluster works by creating a deployment
$ kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

# Expose the deployment as a NodePort Service
$ kubectl expose deployment nginx --port=80 --type=NodePort

service/nginx exposed

$ kubectl get pods,svc
NAME                       READY   STATUS    RESTARTS   AGE
pod/nginx-5c7588df-97v4t   1/1     Running   0          53m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        83m
service/nginx        NodePort    10.1.122.0   <none>        80:32036/TCP   53m
# Test nginx
# The nginx welcome page can be reached through any node IP plus the NodePort (see the example below)
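
For example, using one of the node IPs from the lab table and the NodePort 32036 shown in the service listing above (adjust to your own environment):

$ curl http://192.168.101.179:32036
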
7.2 The nginx-ds service information is as follows:
  • Service Cluster IP: 10.1.55.63
  • Service port: 80
  • NodePort: 31752

Accessing the Cluster IP 10.1.122.0 from inside the cluster also returns the nginx page:

$ curl 10.1.122.0
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
7.3 Access from outside the cluster


If access via the Cluster IP does not work, the following workaround can be tried:

$ setenforce 0
$ iptables --flush
$ iptables -t nat --flush
$ systemctl restart docker
$ iptables -P FORWARD ACCEPT

8. Installing the Dashboard UI

8.1 Deploy the Dashboard
$ kubectl apply -f http://res.chinaskinhospital.com/Upload/Academic/20191219/2019121918412312049088.yaml

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
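
The Dashboard objects from this manifest live in the kube-system namespace; the pod and service can be checked with a quick query (a sketch; whether a NodePort such as the 30001 used below appears depends on how the manifest defines the service):

$ kubectl get pods,svc -n kube-system | grep dashboard
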
8.2 Obtain a login token
$ kubectl create serviceaccount dashboard-admin -n kube-system

serviceaccount/dashboard-admin created

$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Name:         dashboard-admin-token-k9fz2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 5ed43524-afe0-11ea-9b1f-000c29a55f9e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tazlmejIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWVkNDM1MjQtYWZlMC0xMWVhLTliMWYtMDAwYzI5YTU1ZjllIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.ihfzKT4tiY_6jKwJX7-bAw7z8RDmh8AYmO-C4eBjDK07mb3omJcLIVPAYHNE5lnPuDKliaHACKjj9-mwaklmZ3waUX3s0e6i3vw10s2Z2irBA2BOmjEF2U4KWQ7JSEqf5g_A3SqYTXlCuDZyS29jHjZcSUefTkgcVU9WTo17fAKIgmcObU_3FxADgZb3NPgXN7OVZy7_js0974YUQCUmTsONxpI_Y0lTKHHY_RNmmFGGCawUJsWqm2PE99WhaZOoHqZb8DenjmIWNKP2kZzIK8ncVcXvCOyNP2gQ47A1_sUiUovbvDv-8iOVkE0kHE0kEWEex3VuqUY0CvJKo6d2cQ

Use a browser (Firefox recommended) to open the Dashboard; it is reachable via any node, in the form https://<any node IP>:30001:

https://192.168.101.178:30001

https://192.168.101.179:30001

https://192.168.101.180:30001

8.3 Log in to the Dashboard UI

On the login page, choose the Token option and paste the token obtained in section 8.2.


8.4 The Dashboard now shows an overview of the cluster

