Deploy a Kubernetes cluster from downloaded binary files, with TLS authentication enabled across the cluster.

Environment

A three-node Kubernetes 1.7.16 cluster deployed on three CentOS virtual machines.

Master: 172.16.138.171 (all certificate generation and kubectl commands in this document are run on this node). Nodes: 172.16.138.171, 172.16.138.172, 172.16.138.173

172.16.138.171  master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler, flanneld
172.16.138.172  node2   etcd, kubelet, docker, kube-proxy, flanneld
172.16.138.173  node3   etcd, kubelet, docker, kube-proxy, flanneld

Pre-install preparation

1. Install Docker on the node machines

yum install -y yum-utils    # provides yum-config-manager
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

2. Disable SELinux on all nodes

Set SELINUX=disabled in /etc/selinux/config, then reboot the server.
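
A minimal sketch of the same change without editing the file by hand (run on every node; setenforce 0 takes effect immediately, while the sed edit persists across the reboot):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config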

3. Disable the firewall (test environments only)

Start:   systemctl start firewalld

Stop:    systemctl stop firewalld

Status:  systemctl status firewalld

Disable at boot: systemctl disable firewalld

Enable at boot:  systemctl enable firewalld


1. Create TLS certificates and keys

The components of a kubernetes cluster encrypt their communication with TLS certificates. This document uses cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) and all other certificates.

CA certificate and key files that need to be generated:

  1. ca-key.pem
  2. ca.pem
  3. kubernetes-key.pem
  4. kubernetes.pem
  5. kube-proxy.pem
  6. kube-proxy-key.pem
  7. admin.pem
  8. admin-key.pem

Components that use the certificates:

  1. etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  2. kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem;
  3. kubelet: uses ca.pem;
  4. kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem;
  5. kubectl: uses ca.pem, admin-key.pem, admin.pem;
  6. kube-controller-manager: uses ca-key.pem, ca.pem;

All certificates are created on the master node and can be reused: when adding a new node to the cluster later, just copy the certificates under /etc/kubernetes/ to the new node.
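
A minimal sketch of that copy step (the new node's IP 172.16.138.174 is hypothetical):

ssh 172.16.138.174 "mkdir -p /etc/kubernetes/ssl"
scp /etc/kubernetes/ssl/*.pem 172.16.138.174:/etc/kubernetes/ssl/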

Install CFSSL

Install directly from the prebuilt binary packages:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo


Create the CA (Certificate Authority)

mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
# Create ca-config.json based on the format of config.json
# Expiry is set to 87600h (10 years)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF



    ca-config.json: multiple profiles can be defined, each specifying its own expiry, usage scenarios and other parameters; a particular profile is selected later when signing certificates;
    signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE;
    server auth: a client may use this CA to verify the certificate presented by a server;
    client auth: a server may use this CA to verify the certificate presented by a client;

Create the CA certificate signing request

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
    "ca": {
       "expiry": "87600h"
    }
}
EOF



    "CN":Common Name,kube-apiserver从证书中提取该字段作为请求的用户名(User Name);浏览器使用该字段验证网站是否合法;
    "O":Organization,kube-apiserver从证书中提取该字段作为请求用户所属的组(Group);

Generate the CA certificate and private key

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Create the kubernetes certificate

cat > kubernetes-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.138.171",
      "172.16.138.172",
      "172.16.138.173",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF



    If the hosts field is non-empty, it must list the IPs or domain names authorized to use the certificate. Because this certificate is later used by both the etcd cluster and the kubernetes master, the list above includes the host IPs of the etcd cluster and of the kubernetes master, plus the kubernetes service IP (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, here 10.254.0.1).
    This is a minimal three-node kubernetes cluster without a private image registry; the IPs may also be replaced with hostnames.

Generate the kubernetes certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes 
$ ls kubernetes* 
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

Create the admin certificate

Create the admin certificate signing request file admin-csr.json

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy and Pods. kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs. Here O sets the certificate's Group to system:masters: when kubelet accesses kube-apiserver with this certificate, authentication passes because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the request is granted access to all APIs.

Note: this admin certificate is what the administrator's kubeconfig file is later generated from. RBAC is now the generally recommended way to control roles and permissions in kubernetes, and kubernetes takes the certificate's CN field as the User and the O field as the Group.

After the kubernetes cluster is built, the command kubectl get clusterrolebinding cluster-admin -o yaml shows that the subjects of the clusterrolebinding cluster-admin have kind Group and name system:masters, and that the roleRef object is the ClusterRole cluster-admin. That is, every user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why our kubectl commands carry management rights over the entire cluster.
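
Abridged, representative output of that command (the subjects and roleRef below are as predefined by kube-apiserver; the exact apiVersion depends on the release):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters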

Generate the admin certificate and private key:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

Create the kube-proxy certificate

Create the kube-proxy certificate signing request file kube-proxy-csr.json

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF



    CN sets this certificate's User to system:kube-proxy;
    the ClusterRoleBinding system:node-proxier predefined by kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;

Generate the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

At this point the ca, kubernetes, admin and kube-proxy certificate files have all been generated.

Verify the certificates

Take the kubernetes certificate as an example:

# openssl x509  -noout -text -in  kubernetes.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            65:d1:30:a3:28:1d:4a:cd:69:21:08:21:0b:f4:f8:1c:f9:a9:c2:6c
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Jan 24 09:12:00 2019 GMT
            Not After : Jan 21 09:12:00 2029 GMT
        Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:a8:be:52:8a:43:46:09:57:9e:ea:da:54:b8:c3:
                    c1:eb:ff:3f:c0:e0:f7:4c:ae:62:29:a3:ea:99:9a:
                    69:ab:a1:7d:09:3d:5c:ae:5b:85:fc:8c:a4:50:08:
                    23:0e:53:e6:e7:1f:92:2c:08:4c:5e:a4:0b:7a:6c:
                    e7:e2:14:ad:78:9a:ee:03:be:b1:50:9e:f4:47:b1:
                    2a:b9:d2:58:d6:cf:25:00:3a:6a:c2:32:36:32:01:
                    d9:d8:a7:f0:65:d4:a8:d7:d2:26:47:5e:75:34:82:
                    bf:14:a4:ef:f4:1d:e0:da:9a:b3:1f:07:4e:db:be:
                    27:00:df:92:7c:01:49:9b:f2:d0:32:30:02:eb:40:
                    b4:12:e4:69:a2:a4:14:15:78:47:0c:01:df:f1:89:
                    08:4a:5c:ba:8b:2a:6c:b6:3b:68:7a:15:8d:6d:8f:
                    6a:9a:aa:79:44:e5:2a:f0:69:e4:22:1b:2e:68:3d:
                    20:a7:6d:4c:1d:b9:f8:e2:04:a7:e9:e0:39:62:27:
                    40:f0:1c:a2:3e:b4:17:26:ad:05:ce:39:2c:79:42:
                    a4:5b:dd:40:74:9d:57:45:cb:9d:f9:2f:b4:b9:f9:
                    67:89:be:d2:a0:ac:ca:77:69:9d:a6:31:23:bd:24:
                    55:f2:f4:cd:69:2a:43:ea:45:03:48:2b:f1:bf:4b:
                    fb:83
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                42:3D:EF:7C:9D:27:E9:D4:50:E2:D3:BD:84:89:FB:93:6E:BB:1E:8C
            X509v3 Authority Key Identifier: 
                keyid:EE:AA:DB:4A:36:C8:50:AF:55:5B:32:3B:AF:9B:12:09:FA:E8:B6:39

            X509v3 Subject Alternative Name: 
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:172.16.138.100, IP Address:172.16.138.171, IP Address:172.16.138.172, IP Address:172.16.138.173, IP Address:10.254.0.1
    Signature Algorithm: sha256WithRSAEncryption
         ae:6b:2f:d7:dc:88:00:f1:69:d3:d1:13:f1:4c:73:dc:93:c6:
         5d:cd:10:90:cd:7b:6c:fb:de:2e:33:0d:94:a1:db:18:49:ae:
         3a:3a:10:d8:32:57:2f:4a:76:45:30:e1:1b:f0:83:a1:73:36:
         02:87:53:e5:66:41:27:d3:56:d3:83:51:2e:e1:35:3c:47:0c:
         a1:a2:74:bb:b2:a0:3f:ac:4b:58:a9:72:c1:0d:42:d1:36:dd:
         da:18:d9:62:d6:f6:3f:73:78:d7:00:2c:1a:8b:cf:e3:86:b0:
         28:44:28:a4:56:48:bb:23:3f:c0:41:d8:05:18:72:89:0d:8d:
         e9:04:60:a4:2a:a7:c4:45:3d:8d:a4:e6:48:a6:38:f9:76:f0:
         63:db:9c:77:3d:b9:d1:0e:aa:f2:86:45:ef:5b:81:1b:78:4e:
         d4:a0:e7:5a:71:77:77:0c:d4:ea:6c:5b:3f:df:09:64:5c:09:
         48:2c:df:df:07:88:94:85:0d:a8:d5:15:a1:b7:4d:30:b3:c2:
         5f:d5:67:94:d0:2c:bb:4b:8a:e4:ee:f1:40:85:68:a0:d8:a1:
         a6:4e:7e:ef:22:6a:22:07:63:00:d0:3c:22:2e:a2:00:af:6a:
         65:45:11:11:4b:f0:c2:df:90:18:e7:30:79:21:e0:ef:78:23:
         6c:68:d9:f8




    Confirm that the Issuer field matches ca-csr.json;
    Confirm that the Subject field matches kubernetes-csr.json;
    Confirm that the X509v3 Subject Alternative Name field matches kubernetes-csr.json;
    Confirm that the X509v3 Key Usage and Extended Key Usage fields match the kubernetes profile in ca-config.json;

Using the cfssl-certinfo command

cfssl-certinfo -cert kubernetes.pem
{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [
      "CN",
      "BeiJing",
      "BeiJing",
      "k8s",
      "System",
      "kubernetes"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "System",
    "locality": "BeiJing",
    "province": "BeiJing",
    "names": [
      "CN",
      "BeiJing",
      "BeiJing",
      "k8s",
      "System",
      "kubernetes"
    ]
  },
  "serial_number": "581273160508772438401851068790821439777756791404",
  "sans": [
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "172.16.138.171",
    "172.16.138.172",
    "172.16.138.173"
  ],
  "not_before": "2019-01-24T09:12:00Z",
  "not_after": "2029-01-21T09:12:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "EE:AA:DB:4A:36:C8:50:AF:55:5B:32:3B:AF:9B:12:9:FA:E8:B6:39",
  "subject_key_id": "42:3D:EF:7C:9D:27:E9:D4:50:E2:D3:BD:84:89:FB:93:6E:BB:1E:8C",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEizCCA3OgAwIBAgIUZdEwoygdSs1pIQghC/T4HPmpwmwwDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl\naUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTE5MDEyNDA5MTIwMFoXDTI5MDEyMTA5MTIwMFowZTELMAkG\nA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK\nBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqL5SikNGCVee6tpUuMPB\n6/8/wOD3TK5iKaPqmZppq6F9CT1crluF/IykUAgjDlPm5x+SLAhMXqQLemzn4hSt\neJruA76xUJ70R7EqudJY1s8lADpqwjI2MgHZ2KfwZdSo19ImR151NIK/FKTv9B3g\n2pqzHwdO274nAN+SfAFJm/LQMjAC60C0EuRpoqQUFXhHDAHf8YkISly6iypstjto\nehWNbY9qmqp5ROUq8GnkIhsuaD0gp21MHbn44gSn6eA5YidA8ByiPrQXJq0Fzjks\neUKkW91AdJ1XRcud+S+0uflnib7SoKzKd2mdpjEjvSRV8vTNaSpD6kUDSCvxv0v7\ngwIDAQABo4IBMTCCAS0wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUF\nBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBRCPe98nSfp1FDi\n072EifuTbrsejDAfBgNVHSMEGDAWgBTuqttKNshQr1VbMjuvmxIJ+ui2OTCBrQYD\nVR0RBIGlMIGiggprdWJlcm5ldGVzghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVy\nbmV0ZXMuZGVmYXVsdC5zdmOCHmt1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rl\ncoIka3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FshwR/AAABhwSs\nEIpkhwSsEIqrhwSsEIqshwSsEIqthwQK/gABMA0GCSqGSIb3DQEBCwUAA4IBAQCu\nay/X3IgA8WnT0RPxTHPck8ZdzRCQzXts+94uMw2UodsYSa46OhDYMlcvSnZFMOEb\n8IOhczYCh1PlZkEn01bTg1Eu4TU8RwyhonS7sqA/rEtYqXLBDULRNt3aGNli1vY/\nc3jXACwai8/jhrAoRCikVki7Iz/AQdgFGHKJDY3pBGCkKqfERT2NpOZIpjj5dvBj\n25x3PbnRDqryhkXvW4EbeE7UoOdacXd3DNTqbFs/3wlkXAlILN/fB4iUhQ2o1RWh\nt00ws8Jf1WeU0Cy7S4rk7vFAhWig2KGmTn7vImoiB2MA0DwiLqIAr2plRRERS/DC\n35AY5zB5IeDveCNsaNn4\n-----END CERTIFICATE-----\n"
}

Distribute the certificates

Copy the generated certificate and key files (the .pem files) to the /etc/kubernetes/ssl directory on all machines:

 mkdir -p /etc/kubernetes/ssl 
 cp *.pem /etc/kubernetes/ssl

2. Install the kubectl command-line tool

Download kubectl

Be sure to download the package matching your Kubernetes version. If the connection is refused, retry a few times.

wget https://dl.k8s.io/v1.7.16/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kube* /usr/bin/
chmod a+x /usr/bin/kube*

3. Create the kubeconfig files

Create the TLS Bootstrapping Token

Token auth file

The token can be any string containing 128 bits of entropy; generate it with a cryptographically secure random number generator.

# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# cp token.csv /etc/kubernetes/

Note: before continuing, check token.csv and confirm that the ${BOOTSTRAP_TOKEN} variable has been replaced with its real value.

BOOTSTRAP_TOKEN is written both into the token.csv file used by kube-apiserver and into the bootstrap.kubeconfig file used by kubelet. If BOOTSTRAP_TOKEN is regenerated later, you must (a sketch automating these steps follows the list):

    update token.csv and distribute it to /etc/kubernetes/ on all machines (master and nodes; the node copies are not strictly required);
    regenerate bootstrap.kubeconfig and distribute it to /etc/kubernetes/ on all node machines;
    restart the kube-apiserver and kubelet processes;
    approve the kubelet CSR requests again;
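
A sketch of those steps, assuming passwordless SSH to the node IPs used in this document:

systemctl restart kube-apiserver
for node in 172.16.138.172 172.16.138.173; do
  scp /etc/kubernetes/token.csv /etc/kubernetes/bootstrap.kubeconfig ${node}:/etc/kubernetes/
  ssh ${node} "systemctl restart kubelet"
done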

Create the kubelet bootstrapping kubeconfig file

 cd /etc/kubernetes
 export KUBE_APISERVER="https://172.16.138.171:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig


    --embed-certs=true embeds the certificate-authority certificate into the generated bootstrap.kubeconfig file;
    no key or certificate is given when setting the client authentication parameters; they are generated later by kube-apiserver;

Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://172.16.138.171:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig



    --embed-certs is true for both the cluster parameters and the client authentication parameters, which embeds the contents of the certificate-authority, client-certificate and client-key files into the generated kube-proxy.kubeconfig;
    the CN of kube-proxy.pem is system:kube-proxy; the ClusterRoleBinding system:node-proxier predefined by kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;

Create the kubectl kubeconfig file

export KUBE_APISERVER="https://172.16.138.171:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

# Set the default context
kubectl config use-context kubernetes



    the O field of admin.pem is system:masters; the RoleBinding cluster-admin predefined by kube-apiserver binds Group system:masters to Role cluster-admin, which grants permission to call the kube-apiserver APIs;
    the generated kubeconfig is saved to the ~/.kube/config file;

Note: the ~/.kube/config file grants the highest level of access to this cluster; guard it carefully.
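
A quick sanity check of the generated file (kubectl config view is a standard subcommand; the embedded certificate data is elided in its output):

kubectl config view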

Distribute the kubeconfig files

Copy the two kubeconfig files to the /etc/kubernetes/ directory on all Node machines:

cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/

4. Create the etcd cluster

TLS authentication files

The etcd cluster needs TLS certificates for encrypted communication; here we reuse the kubernetes certificates created earlier:

cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl

The hosts field of the kubernetes certificate must include the IPs of all three machines, otherwise certificate validation will fail later;

Download the binaries

Download the release binaries from the https://github.com/coreos/etcd/releases page (v3.1.5 is used below):

wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz 
tar -xvf etcd-v3.1.5-linux-amd64.tar.gz 
mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin

Create the etcd systemd unit file

Create the file etcd.service under /usr/lib/systemd/system/. Replace the IP addresses with those of your own etcd hosts. After the 172.16.138.171 node is configured, the other two etcd nodes only need the IP addresses changed to their own and ETCD_NAME changed to the matching infra1/2/3.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://172.16.138.171:2380,infra2=https://172.16.138.172:2380,infra3=https://172.16.138.173:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target



    etcd's working directory and data directory are both /var/lib/etcd; the directory must be created before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
    for secure communication, specify etcd's own key pair (cert-file and key-file), the key pair and CA certificate for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate for clients (trusted-ca-file);
    the hosts field of the kubernetes-csr.json used to create kubernetes.pem must include the IPs of all etcd nodes, otherwise certificate validation fails;
    when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

Environment variable file /etc/etcd/etcd.conf

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.138.171:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.138.171:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.138.171:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.138.171:2379"
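
For comparison, the same file on 172.16.138.172 (only ETCD_NAME and the IP addresses change; 172.16.138.173 follows the same pattern with infra3):

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.138.172:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.138.172:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.138.172:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.138.172:2379"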

Start the etcd service

mv etcd.service /usr/lib/systemd/system/ 
systemctl daemon-reload 
systemctl enable etcd 
systemctl start etcd 
systemctl status etcd

Verify the service

Run the following command on any kubernetes master machine:

[root@localhost ssl]# etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health

2019-01-29 15:03:33.692782 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-01-29 15:03:33.696213 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member ab044f0f6d623edf is healthy: got healthy result from https://172.16.138.173:2379
member cf3528b42907470b is healthy: got healthy result from https://172.16.138.172:2379
member eab584ea44e13ad4 is healthy: got healthy result from https://172.16.138.171:2379
cluster is healthy

5. Deploy the master node

The kubernetes master node runs the following components:

  1. kube-apiserver
  2. kube-scheduler
  3. kube-controller-manager

Download the binaries

Download the client and server tarballs from the CHANGELOG page.

The server tarball kubernetes-server-linux-amd64.tar.gz already contains the client (kubectl) binary, so the kubernetes-client-linux-amd64.tar.gz file does not need to be downloaded separately;

wget https://dl.k8s.io/v1.7.16/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz

Copy the binaries into place

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

Configure and start kube-apiserver

Create the kube-apiserver service configuration file

Contents of the service configuration file /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Contents of /etc/kubernetes/config:

# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://172.16.138.171:8080"

This configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy.

Contents of the apiserver configuration file /etc/kubernetes/apiserver:

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=172.16.138.171 --bind-address=172.16.138.171 --insecure-bind-address=172.16.138.171"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
  1. --experimental-bootstrap-token-auth: Bootstrap Token Authentication became a stable feature in 1.9 and the flag was renamed --enable-bootstrap-token-auth;
  2. if you change --service-cluster-ip-range midway, you must delete the kubernetes service in the default namespace with kubectl delete service kubernetes; the system then recreates it with an IP from the new range, otherwise the apiserver log keeps reporting "the cluster IP x.x.x.x for service kubernetes/default is not within the service CIDR x.x.x.x/16; please recreate";
  3. --authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
  4. kube-scheduler and kube-controller-manager normally run on the same machine as kube-apiserver and talk to it over the insecure port;
  5. kubelet, kube-proxy and kubectl run on the other Node machines; to reach kube-apiserver over the secure port they must first pass TLS certificate authentication and then RBAC authorization;
  6. kube-proxy and kubectl obtain RBAC authorization via the User and Group specified in the certificates they use;
  7. if the kubelet TLS bootstrap mechanism is used, do not also set the --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key options, otherwise kube-apiserver later fails to verify the kubelet certificate with "x509: certificate signed by unknown authority";
  8. the --admission-control value must include ServiceAccount;
  9. --bind-address must not be 127.0.0.1;
  10. runtime-config is set to rbac.authorization.k8s.io/v1beta1, the apiVersion enabled at runtime;
  11. --service-cluster-ip-range sets the Service Cluster IP range, which must not be routable;
  12. by default kubernetes objects are stored under the /registry path in etcd; this can be changed with the --etcd-prefix parameter;
  13. to expose an unauthenticated HTTP endpoint, add the two parameters --insecure-port=8080 --insecure-bind-address=127.0.0.1; in production, never bind it to any address other than 127.0.0.1.

Start kube-apiserver

systemctl daemon-reload 
systemctl enable kube-apiserver 
systemctl start kube-apiserver 
systemctl status kube-apiserver
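
Once the service is active, the secure port can be probed with the admin certificate generated earlier (a minimal sanity check; it should return the apiserver's version JSON):

curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/admin.pem \
  --key /etc/kubernetes/ssl/admin-key.pem \
  https://172.16.138.171:6443/version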

Configure and start kube-controller-manager

Create the kube-controller-manager service configuration file

File path: /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
  1. --service-cluster-ip-range sets the CIDR range of cluster Service IPs; it must not be routable between Nodes and must match the value passed to kube-apiserver;
  2. the certificate and key given by --cluster-signing-* are used to sign the certificates and keys created for TLS bootstrap;
  3. --root-ca-file is used to verify the kube-apiserver certificate; only when it is set is the CA certificate placed in the ServiceAccount of Pod containers;
  4. --address must be 127.0.0.1, since kube-apiserver expects scheduler and controller-manager to run on the same machine;

Start kube-controller-manager

systemctl daemon-reload 
systemctl enable kube-controller-manager 
systemctl start kube-controller-manager 
systemctl status kube-controller-manager

Configure and start kube-scheduler

Create the kube-scheduler service configuration file

File path: /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/scheduler

###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
  1. --address must be 127.0.0.1, because the current kube-apiserver expects scheduler and controller-manager to run on the same machine;

Start kube-scheduler

systemctl daemon-reload 
systemctl enable kube-scheduler 
systemctl start kube-scheduler 
systemctl status kube-scheduler

Verify the master node

[root@localhost ssl]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok  

6. Install the flannel network plugin

For consistency, install it on the master first. Installing flanneld directly with yum is recommended unless you have specific version requirements:

yum install -y flannel

Service configuration file /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

The /etc/sysconfig/flanneld configuration file:

# Flanneld configuration options  
#
# # etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379"
#
# # etcd config key.  This is the configuration key that flannel queries
# # For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
#
# # Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

Create the network configuration in etcd

Run the following commands to allocate the IP address range for docker.

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

To use host-gw mode, simply change vxlan to host-gw, as in the sketch below.
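
A sketch of the same write with host-gw (note that etcdctl mk refuses to overwrite an existing key, so use set when changing an existing config):

etcdctl --endpoints=https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  set /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'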

Start flannel

systemctl daemon-reload 
systemctl enable flanneld 
systemctl start flanneld 
systemctl status flanneld

Verify

[root@localhost ssl]# export ETCD_ENDPOINTS="https://172.16.138.171:2379,https://172.16.138.172:2379,https://172.16.138.173:2379"
[root@localhost ssl]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets

/kube-centos/network/subnets/172.30.68.0-24
/kube-centos/network/subnets/172.30.5.0-24
/kube-centos/network/subnets/172.30.26.0-24

[root@localhost ssl]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config

{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

[root@localhost ssl]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.68.0-24

{"PublicIP":"172.16.138.173","BackendType":"vxlan","BackendData":{"VtepMAC":"ce:89:18:00:81:4b"}}

[root@localhost ssl]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.5.0-24

{"PublicIP":"172.16.138.171","BackendType":"vxlan","BackendData":{"VtepMAC":"1a:33:c3:e2:32:7f"}}

[root@localhost ssl]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/subnets/172.30.26.0-24

{"PublicIP":"172.16.138.172","BackendType":"vxlan","BackendData":{"VtepMAC":"a2:43:a8:a5:01:03"}}

7. Deploy the nodes

A Kubernetes node runs the following components:

  1. flanneld: same as on the master
  2. Docker 17.03: same as on the master
  3. kubelet: installed directly from binaries
  4. kube-proxy: installed directly from binaries

Note: flannel must be installed on every node; installing it on the master is optional.

Check all three nodes: after the previous steps, the following certificates and configuration files should already exist.

$ ls /etc/kubernetes/ssl 
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem 
$ ls /etc/kubernetes/ 
apiserver bootstrap.kubeconfig config controller-manager kubelet kube-proxy.kubeconfig proxy scheduler ssl token.csv

Configure Docker

With flannel installed via yum

Edit the docker configuration file /usr/lib/systemd/system/docker.service and add one environment variable line:

EnvironmentFile=-/run/flannel/docker

The /run/flannel/docker file is generated automatically after flannel starts and contains the parameters docker needs at startup.
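
Representative contents of /run/flannel/docker as written by mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS (the subnet and MTU below are illustrative; the real values follow the lease flannel obtained):

DOCKER_OPT_BIP="--bip=172.30.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.5.1/24 --ip-masq=true --mtu=1450"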

Start docker

After restarting docker, kubelet must be restarted as well, and at this point another problem appeared: kubelet failed to start, reporting:

Mar 31 16:44:41 k8s-master kubelet[81047]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

This is caused by kubelet and docker disagreeing on the cgroup driver; kubelet's --cgroup-driver flag can be set to "cgroupfs" or "systemd".

--cgroup-driver string Driver that the kubelet uses to manipulate cgroups on the host. Possible values: 'cgroupfs', 'systemd' (default "cgroupfs")

Edit the docker service configuration file /usr/lib/systemd/system/docker.service and add the following option to ExecStart:

--exec-opt native.cgroupdriver=systemd
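
A sketch of the resulting ExecStart line, assuming the stock docker-ce unit plus the flannel environment file wired in above:

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --exec-opt native.cgroupdriver=systemd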

Install and configure kubelet

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:


--user=kubelet-bootstrap is the user name specified in the /etc/kubernetes/token.csv file, and is also written into the /etc/kubernetes/bootstrap.kubeconfig file;

cd /etc/kubernetes 
kubectl create clusterrolebinding kubelet-bootstrap \
 --clusterrole=system:node-bootstrapper \
 --user=kubelet-bootstrap


Download the kubelet and kube-proxy binaries

Be sure to download the package matching your Kubernetes version.

wget https://dl.k8s.io/v1.7.16/kubernetes-server-linux-amd64.tar.gz 
tar -xzvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes 
tar -xzvf kubernetes-src.tar.gz 
cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/


Create the kubelet service configuration file

/usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target


The kubelet configuration file is /etc/kubernetes/kubelet; change the IP addresses in it to each node's own IP.

Note: the /var/lib/kubelet directory must be created manually before starting kubelet.

The kubelet configuration file /etc/kubernetes/kubelet:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.16.138.171"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.138.171"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
KUBELET_API_SERVER="--api-servers=http://172.16.138.171:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=harbor.suixingpay.com/kube/pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"


  1. when kubelet is started via systemd, two extra flags are required: --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
  2. --experimental-bootstrap-kubeconfig was renamed --bootstrap-kubeconfig in 1.9;
  3. --address must not be 127.0.0.1, otherwise Pods calling the kubelet API later fail, because 127.0.0.1 inside a Pod refers to the Pod itself rather than to the kubelet;
  4. if --hostname-override is set, kube-proxy must set the same option, otherwise the Node cannot be found;
  5. set --cgroup-driver to systemd, not cgroupfs, otherwise kubelet fails to start on CentOS (what actually matters is that the docker and kubelet cgroup driver settings agree; it does not have to be systemd);
  6. --experimental-bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send the TLS Bootstrapping request to kube-apiserver;
  7. after an administrator approves the CSR request, kubelet automatically creates the certificate and private key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file;
  8. it is recommended to specify the kube-apiserver address in the --kubeconfig file; if --api-servers is not given, --require-kubeconfig must be set so the kube-apiserver address is read from that file, otherwise kubelet starts without finding kube-apiserver (the log reports that no API Server was found) and kubectl get nodes returns no corresponding Node;
  9. --cluster-dns specifies the kubedns Service IP (it can be allocated now and assigned when the kubedns service is created later), and --cluster-domain specifies the domain suffix; both parameters must be set together to take effect;
  10. --cluster-domain sets the search domain in a pod's /etc/resolv.conf; we initially configured it as cluster.local., which resolved service DNS names correctly but failed to resolve the FQDN pod names of headless services; changing it to cluster.local, without the trailing dot, fixed the problem.
  11. the kubelet.kubeconfig file given by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet's first start; as described below, it is generated automatically once the CSR request is approved. If a ~/.kube/config file has already been generated on the node, you can copy it to this path renamed kubelet.kubeconfig; all node machines can share the same kubelet.kubeconfig, so newly added nodes join the kubernetes cluster automatically without creating CSR requests. Likewise, on any host that can reach the kubernetes cluster, operating it with kubectl --kubeconfig and the ~/.kube/config file passes authentication, because that file already carries credentials identifying you as the admin user with full rights over the cluster.
  12. KUBELET_POD_INFRA_CONTAINER is the pod infrastructure (pause) image; a private registry address is used here, so change it to your own image when deploying.

Start kubelet

systemctl daemon-reload 
systemctl enable kubelet 
systemctl start kubelet 
systemctl status kubelet

Approve the kubelet TLS certificate requests

When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; only after the request is approved does kubernetes add the Node to the cluster.

List the unapproved CSR requests

[root@localhost ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-IJfUvyI9bSuM5pyZ7TOotXx9TNj3rySLRhZchwX_KZA   4d        kubelet-bootstrap   Pending
node-csr-WhJbdROPjBgn5yZDD8Sel3_Ln1_3FSmBQAl6Xs4MRI0   4d        kubelet-bootstrap   Pending
node-csr-d3efd2aZZquxaALaPAjUubi5GzXDN86kegptBHVvrkA   4d        kubelet-bootstrap   Pending

[root@localhost ssl]# kubectl get nodes 
No resources found.

Approve the CSR requests one by one

[root@localhost ssl]# kubectl certificate approve node-csr-IJfUvyI9bSuM5pyZ7TOotXx9TNj3rySLRhZchwX_KZA 
certificatesigningrequest "node-csr-IJfUvyI9bSuM5pyZ7TOotXx9TNj3rySLRhZchwX_KZA" approved 
[root@localhost ssl]# kubectl get nodes 
NAME             STATUS    AGE       VERSION
172.16.138.171   Ready     4d        v1.7.16
172.16.138.172   Ready     4d        v1.7.16
172.16.138.173   Ready     4d        v1.7.16

The kubelet kubeconfig file and key pair are generated automatically

[root@localhost ssl]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw-------. 1 root root 2281 1月  25 14:51 /etc/kubernetes/kubelet.kubeconfig
[root@localhost ssl]# ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r--. 1 root root 1050 1月  25 14:51 /etc/kubernetes/ssl/kubelet-client.crt
-rw-------. 1 root root  227 1月  25 14:48 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r--. 1 root root 1119 1月  25 14:51 /etc/kubernetes/ssl/kubelet.crt
-rw-------. 1 root root 1675 1月  25 14:51 /etc/kubernetes/ssl/kubelet.key

If you renew the kubernetes certificates but do not change token.csv, then after kubelet restarts the node rejoins the kubernetes cluster automatically, without sending a new certificate request and without running kubectl certificate approve on the master again. The precondition is that the node's /etc/kubernetes/ssl/kubelet* files and /etc/kubernetes/kubelet.kubeconfig are not deleted; otherwise kubelet fails to start because it cannot find its certificates.

Note: if kubelet reports certificate-related errors at startup, one trick is to copy the master's ~/.kube/config file (generated automatically in the "Install the kubectl command-line tool" step) to /etc/kubernetes/kubelet.kubeconfig on the node; the CSR flow is then unnecessary and the kubelet joins the cluster automatically once started, as in the example below.
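
A hypothetical example of that trick (the node IP is taken from this document):

scp ~/.kube/config 172.16.138.172:/etc/kubernetes/kubelet.kubeconfig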

Configure kube-proxy

Install conntrack

yum install -y conntrack-tools

Create the kube-proxy service configuration file

File path: /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

kube-proxy configuration file /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=172.16.138.171 --hostname-override=172.16.138.171 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"


  1. --hostname-override must match kubelet's value, otherwise kube-proxy cannot find the Node after starting and will not create any iptables rules;
  2. kube-proxy uses --cluster-cidr to distinguish cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
  3. the configuration file given by --kubeconfig embeds the kube-apiserver address, user name, certificate and key used for requests and authentication;
  4. the predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;

Start kube-proxy

systemctl daemon-reload 
systemctl enable kube-proxy 
systemctl start kube-proxy 
systemctl status kube-proxy

Verify

[root@localhost ssl]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
[root@localhost ssl]# kubectl get nodes
NAME             STATUS    AGE       VERSION
172.16.138.171   Ready     4d        v1.7.16
172.16.138.172   Ready     4d        v1.7.16
172.16.138.173   Ready     4d        v1.7.16
[root@localhost ssl]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-IJfUvyI9bSuM5pyZ7TOotXx9TNj3rySLRhZchwX_KZA   4d        kubelet-bootstrap   Approved,Issued
node-csr-WhJbdROPjBgn5yZDD8Sel3_Ln1_3FSmBQAl6Xs4MRI0   4d        kubelet-bootstrap   Approved,Issued
node-csr-d3efd2aZZquxaALaPAjUubi5GzXDN86kegptBHVvrkA   4d        kubelet-bootstrap   Approved,Issued
[root@localhost ssl]# etcdctl \
>   --ca-file=/etc/kubernetes/ssl/ca.pem \
>   --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
>   --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
>   cluster-health
2019-01-29 16:33:02.145981 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-01-29 16:33:02.154510 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member ab044f0f6d623edf is healthy: got healthy result from https://172.16.138.173:2379
member cf3528b42907470b is healthy: got healthy result from https://172.16.138.172:2379
member eab584ea44e13ad4 is healthy: got healthy result from https://172.16.138.171:2379


References: build process and problems encountered

https://www.cnblogs.com/xzkzzz/p/8979808.html

The build process mostly follows this article, which also covers private registry management and k8s cluster administration in great detail

https://blog.csdn.net/WaltonWang/article/details/55236300

How kube-proxy works

https://blog.csdn.net/johnhill_/article/details/82900587

Deploying a k8s cluster on Alibaba Cloud

https://blog.csdn.net/bobpen/article/details/78958675

Yum-based Kubernetes cluster deployment on CentOS 7

https://blog.csdn.net/qq_34701586/article/details/78732470

(kubernetes) k8s introduction, single-node installation, kubectl commands, k8s service examples

https://www.kubernetes.org.cn/5025.html

kubernetes 1.13.1 + etcd 3.3.10 + flanneld 0.10 cluster deployment

https://blog.csdn.net/qq_33199919/article/details/80623055

Solving problems encountered while configuring the cluster ETCD in Kubernetes

https://www.cnblogs.com/hongdada/p/9771857.html

Cgroup drivers in Docker: cgroupfs vs systemd

https://www.cnblogs.com/moxiaoan/p/5683743.html

Opening and closing the firewall and ports with firewalld on CentOS 7

http://blog.51cto.com/1666898/2156165

k8s etcd cluster setup error: request cluster ID mismatch (got
