Machine List

Role           IP                Components
k8s-master01   192.168.152.171   kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, kubelet, kube-proxy
k8s-node01     192.168.152.174   kubelet, kube-proxy, docker, etcd
k8s-node02     192.168.152.175   kubelet, kube-proxy, docker, etcd
  • OS: CentOS 7.x x86_64
  • Hardware: at least 4 GB RAM, 2 CPUs, and 100 GB of disk
  • Internet access is required to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node
  • Swap must be disabled (a quick prerequisite check follows this list)
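
A minimal sanity check of the prerequisites on each machine (mirrors.aliyun.com is just an example reachable host):

# Check CPU count, memory, root disk size, and outbound connectivity
nproc
free -h
df -h /
ping -c 1 mirrors.aliyun.com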

System Initialization

Run the following commands on every machine:

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0  # temporary
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # permanent, takes effect after reboot


# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to the plan (note: replace <hostname>)
hostnamectl set-hostname <hostname>

# Add hosts entries on the master (note: adjust the IPs to your environment)
cat >> /etc/hosts << EOF
192.168.152.171 k8s-master01
192.168.152.174 k8s-node01
192.168.152.175 k8s-node02
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
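
Note: these bridge sysctls only take effect when the br_netfilter kernel module is loaded; a quick check on a stock CentOS 7 kernel:

# Load br_netfilter if it is missing, then re-apply the sysctls
lsmod | grep br_netfilter || modprobe br_netfilter
sysctl --system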

----------------------------------- Sync machine time -------------------------------

# Install chrony
yum -y install chrony

# Change the sync server to Alibaba Cloud NTP
sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' \
/etc/chrony.conf

# Start chronyd and enable it at boot
systemctl start chronyd && systemctl enable chronyd

# Check the sync result
chronyc sources

# Create the working directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Install Docker

Run the following commands on every machine:

# Download the Docker binary tarball
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.11.tgz

# Extract the tarball and move the binaries into place
tar zxvf docker-19.03.11.tgz
mv docker/* /usr/bin


# Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF



# Create the Docker config file
mkdir /etc/docker

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "log-opts": {"max-size":"2g", "max-file":"100"}
}
EOF

# Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
  • registry-mirrors: the Alibaba Cloud registry mirror (image pull accelerator)
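
A quick check that the daemon picked up the mirror (docker info lists the configured registry mirrors):

docker info | grep -A 1 "Registry Mirrors"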

Install etcd

Prepare the cfssl certificate tooling

cfssl is an open-source certificate management tool that generates certificates from JSON configs; it is easier to use than openssl.

Run this on any one server; here the k8s-master01 node is used.

----------------------- Download the binaries --------------------------
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
------------------------- Install the binaries -----------------------------
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
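
Verify the tools are on the PATH:

cfssl version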

Generate the etcd certificates

Self-signed certificate authority (CA)

# Create working directories for the certificates
mkdir -p /opt/temp/TLS/{etcd,k8s}

# Change into the etcd directory
cd /opt/temp/TLS/etcd

# Write the CA JSON configs
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF


# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Confirm the certificates were generated
ls *pem

Issue the etcd HTTPS certificate with the self-signed CA

# Change into the etcd directory
cd /opt/temp/TLS/etcd


cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.152.171",
    "192.168.152.174",
    "192.168.152.175"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# Confirm the certificate was generated
ls server*pem

Note: the IPs in the hosts field above are the cluster-internal IPs of all etcd nodes, and none may be omitted! To ease future scale-out, you can list a few spare IPs in advance.
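
To double-check which IPs actually made it into the certificate's SANs, inspect it with the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert server.pem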

Download the etcd binaries from GitHub

Download URL:

https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

Install on one machine first

Here k8s-master01 is used for the initial install.

# Create the working directories
mkdir /opt/etcd/{bin,cfg,ssl} -p

# Extract the tarball downloaded above
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz

# Move the extracted binaries into place
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

# Create the etcd config file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.152.171:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.152.171:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.152.171:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.152.171:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.152.171:2380,etcd-2=https://192.168.152.174:2380,etcd-3=https://192.168.152.175:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF



# Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/server.pem \\
--key-file=/opt/etcd/ssl/server-key.pem \\
--peer-cert-file=/opt/etcd/ssl/server.pem \\
--peer-key-file=/opt/etcd/ssl/server-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# Copy the certificates generated earlier
cp /opt/temp/TLS/etcd/ca*pem /opt/temp/TLS/etcd/server*pem /opt/etcd/ssl/

Install on the other machines

Copy the files

Copy everything generated on node 1 above to node 2 and node 3:

# Copy files to k8s-node01
scp -r /opt/etcd/ root@192.168.152.174:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.152.174:/usr/lib/systemd/system/
# Copy files to k8s-node02
scp -r /opt/etcd/ root@192.168.152.175:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.152.175:/usr/lib/systemd/system/

Edit the etcd config file on k8s-node01 and k8s-node02

# On k8s-node01:
vi /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.152.174:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.152.174:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.152.174:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.152.174:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.152.171:2380,etcd-2=https://192.168.152.174:2380,etcd-3=https://192.168.152.175:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"




# On k8s-node02:
vi /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.152.175:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.152.175:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.152.175:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.152.175:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.152.171:2380,etcd-2=https://192.168.152.174:2380,etcd-3=https://192.168.152.175:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start etcd

# Run on every node (the first node's start will block until the other members come up)
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# Check cluster health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.152.171:2379,https://192.168.152.174:2379,https://192.168.152.175:2379" endpoint health
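
If all three endpoints report healthy, the member list should likewise show all three nodes:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.152.171:2379" member list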


Deploy the k8s Master Node

Perform the following on the k8s-master01 machine:

Generate the kube-apiserver certificates

Self-signed certificate authority (CA)

# Change into the working directory
cd /opt/temp/TLS/k8s

# Generate the JSON configs
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF


# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Confirm the certificates
ls *pem

Issue the kube-apiserver HTTPS certificate with the self-signed CA

# Change into the working directory
cd /opt/temp/TLS/k8s

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.152.171",
      "192.168.152.174",
      "192.168.152.175",
      "192.168.152.176",
      "192.168.152.177",
      "192.168.152.178",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# Confirm the certificate
ls server*pem

Download the Kubernetes binaries from GitHub

Download URL:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#v11614

Note: the page lists many packages; downloading the server package alone is enough, since it contains the binaries for both the Master and the Worker Nodes.


Extract the binary package

# Create the working directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
# Extract the downloaded archive
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

Install kube-apiserver

Create the config file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.152.171:2379,https://192.168.152.174:2379,https://192.168.152.175:2379 \\
--bind-address=192.168.152.171 \\
--secure-port=6443 \\
--advertise-address=192.168.152.171 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: of the two backslashes above, the first is the escape character and the second is the line-continuation backslash; the escape is needed so the heredoc (EOF) preserves the continuation backslash in the written file.

  • --logtostderr: logging switch (false writes logs to files under --log-dir)
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: address advertised to the cluster
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: virtual IP range for Services
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization modes; enables RBAC and Node authorization
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificates apiserver uses to reach kubelet
  • --tls-xxx-file: apiserver HTTPS certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit logging

Copy the certificates generated earlier

cp /opt/temp/TLS/k8s/ca*pem /opt/temp/TLS/k8s/server*pem /opt/kubernetes/ssl/

Create the token file referenced in the config

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
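
The CSV format is token,user,uid,"group". The token is just a random 32-character hex string; to generate a fresh one (keep token.csv and the bootstrap kubeconfig created later in sync):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '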

Manage apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start apiserver and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
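
A quick liveness check: the controller-manager and scheduler configs below talk to apiserver over the local insecure port 8080, which is still on by default in v1.16, so it can be probed here too:

curl http://127.0.0.1:8080/healthz
# expected output: ok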

Grant the kubelet-bootstrap user permission to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Deploy kube-controller-manager

Create the config file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
  • --master: connect to apiserver through the local insecure port 8080.
  • --leader-elect: automatic leader election when multiple instances run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to auto-issue kubelet certificates; must match the apiserver's

Manage controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start controller-manager and enable it at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploy kube-scheduler

Create the config file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
  • --master: connect to apiserver through the local insecure port 8080.
  • --leader-elect: automatic leader election when multiple instances run (HA)

Manage scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start scheduler and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check cluster status

# kubectl get cs has a display bug in this version; the go-template command below can be used instead
kubectl get cs

kubectl get cs -o=go-template='{{printf "|NAME|STATUS|MESSAGE|\n"}}{{range .items}}{{$name := .metadata.name}}{{range .conditions}}{{printf "|%s|%s|%s|\n" $name .status .message}}{{end}}{{end}}'


Deploy kubelet

# Copy the binaries from the unpacked server package
cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # local copy

Create the config file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master01 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: an empty path; the file is generated automatically and later used to connect to apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from apiserver
  • --config: configuration parameter file
  • --cert-dir: directory where kubelet certificates are generated
  • --pod-infra-container-image: image for the container that manages the Pod network (pause)

Configuration parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.152.171:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Copy to the config path
cp bootstrap.kubeconfig /opt/kubernetes/cfg

Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start kubelet and enable it at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the kubelet certificate request and join the cluster

# View kubelet certificate requests
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-1xJWedS36vbOS5Wt31ZNTmCwVbU2Umv4txBKU4udp_o   8s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-1xJWedS36vbOS5Wt31ZNTmCwVbU2Umv4txBKU4udp_o

# View nodes
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   7s    v1.16.14

Note: the node shows NotReady because the network plugin has not been deployed yet.


Deploy kube-proxy

Create the config file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

Configuration parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master01
clusterCIDR: 10.0.0.0/24
EOF

Generate the kube-proxy.kubeconfig file

# Change into the working directory
cd /opt/temp/TLS/k8s

# Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Generate the kubeconfig file

KUBE_APISERVER="https://192.168.152.171:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig



# Copy to the path specified in the config
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start kube-proxy and enable it at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Deploy the CNI network

# Download the CNI plugin binaries
Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

mkdir -p /opt/cni/bin

tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# If the flannel manifest cannot be fetched automatically, download it manually
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Deploy the CNI network
kubectl apply -f kube-flannel.yml

Note: when deploying the kube-flannel pods, a flaky network sometimes prevents the images from being pulled. In that case pull them manually with docker pull; the image names are listed in kube-flannel.yml, for example:

docker pull quay.io/coreos/flannel:v0.12.0-arm64

Authorize apiserver to access kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

Check the Node status

kubectl get node


Deploy the k8s Worker Nodes

Copy the Node files from the deployed node to the new nodes

On k8s-master01, copy the files a Worker Node needs to the new node k8s-node01:

# Copy to the new node k8s-node01
scp -r /opt/kubernetes root@192.168.152.174:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.152.174:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.152.174:/opt/
scp /opt/kubernetes/ssl/ca.pem root@192.168.152.174:/opt/kubernetes/ssl

Delete the kubelet certificate and kubeconfig files

On k8s-node01, delete the kubelet certificate and kubeconfig files:

rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically once the certificate request is approved, and they differ per node, so they must be deleted and regenerated.

Change the hostname overrides

# Take care whether hostname-override should be k8s-node01 or k8s-node02
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node01

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node01

Start the services and enable them at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

Approve the new node's kubelet certificate request on k8s-master01

[root@localhost cfg]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-1xJWedS36vbOS5Wt31ZNTmCwVbU2Umv4txBKU4udp_o   32m   kubelet-bootstrap   Approved,Issued
node-csr-z225xWVjZW12akG5ZILfwpvXAn6gBROin2-n4y3TrFk   18s   kubelet-bootstrap   Pending
[root@localhost cfg]# kubectl certificate approve node-csr-z225xWVjZW12akG5ZILfwpvXAn6gBROin2-n4y3TrFk
certificatesigningrequest.certificates.k8s.io/node-csr-z225xWVjZW12akG5ZILfwpvXAn6gBROin2-n4y3TrFk approved


Check the Node status

kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   65m   v1.16.14
k8s-node01     Ready    <none>   33m   v1.16.14
k8s-node02     Ready    <none>   13m   v1.16.14


Deploy the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster; change the Service to NodePort to expose it externally:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml

Check the pods

kubectl get pods,svc -n kubernetes-dashboard

Create a service account and bind it to the default cluster-admin cluster role

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Taint the master node

# View existing taints
kubectl describe node k8s-master01 | grep -i taints
# Add a taint
kubectl taint node k8s-master01 key1=value1:NoSchedule
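
To remove the taint later, append a minus sign (standard kubectl taint syntax):

kubectl taint node k8s-master01 key1=value1:NoSchedule-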

Deploy CoreDNS

# Pull the image manually
docker pull coredns/coredns:1.3.1
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

kubectl apply -f coredns.yaml

# Check the result
kubectl get pods -n kube-system

# Run a throwaway pod to test cluster DNS
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
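
Inside the busybox shell, service names should resolve through the cluster DNS (10.0.0.2, as set in kubelet-config.yml); for example:

/ # nslookup kubernetes   # should resolve to 10.0.0.1, the kubernetes Service IP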

Deploy Helm

Install Helm 2.16.3

Install the helm client

Download the helm client:

wget https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz

Extract the archive and copy the helm binary:

tar xf helm-v2.16.3-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin

Install the tiller server

Install socat on every cluster node

Without it you will hit: Error: cannot connect to Tiller

yum install -y socat

Initialize helm and deploy tiller

Tiller is deployed into the Kubernetes cluster as a Deployment; a plain helm init would be enough to install it, but by default Helm pulls its images from storage.googleapis.com, so here the installation uses the Alibaba Cloud repositories instead.

# Add the Alibaba Cloud repositories
helm init --client-only --stable-repo-url https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts/

helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/

helm repo update

# Install the server side, using -i to point at the Alibaba Cloud tiller image
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# To install the server side with TLS instead, see https://github.com/gjmzj/kubeasz/blob/master/docs/guide/helm.md
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --tiller-tls-cert /etc/kubernetes/ssl/tiller001.pem --tiller-tls-key /etc/kubernetes/ssl/tiller001-key.pem --tls-ca-cert /etc/kubernetes/ssl/ca.pem --tiller-namespace kube-system --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Grant tiller authorization

Helm's server-side component, Tiller, is a Deployment in the kube-system namespace; it connects to kube-api to create and delete applications in Kubernetes.

Since Kubernetes 1.6 the API Server has RBAC authorization enabled. The default Tiller deployment does not define an authorized ServiceAccount, so its requests to the API Server get rejected; Tiller therefore needs explicit authorization.

Create the Kubernetes service account and role binding

# Create the serviceaccount
kubectl create serviceaccount --namespace kube-system tiller

# Create the cluster role binding
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Set the service account on the tiller deployment

# Update the API object with kubectl patch
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

# Verify the authorization took effect
kubectl get deploy --namespace kube-system tiller-deploy --output yaml | grep serviceAccount

      serviceAccount: tiller
      serviceAccountName: tiller

Verify the tiller installation

kubectl -n kube-system get pods | grep tiller
tiller-deploy-6d8dfbb696-4cbcz             1/1     Running   0          88s

# Run helm version; output like the following indicates success
helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}

Uninstall the tiller server

helm reset       # or: helm reset -f to force removal

Install NFS storage

  • The officially suggested openebs storage did not work well for me; after installing it my pods stayed Pending
  • NFS storage is simple and well suited to a lab environment
  • Any other persistent storage can be used instead

See the referenced article on installing NFS.

The NFS server is installed on the master here; the referenced article claims running the NFS server on a master node causes problems, but none showed up in this setup.

Install and configure NFS

Client side (the two worker nodes here):

yum -y install nfs-utils

Server side (the master node):

1. Install the packages
yum -y install nfs-utils rpcbind

2. Edit the exports file
The * in the config allows all networks; restrict it to your actual subnet as appropriate
cat >/etc/exports <<EOF
/data *(insecure,rw,async,no_root_squash)
EOF

3. Create the directory and set permissions
For convenience in this lab the export directory is mode 777; adjust the permissions and owner to your real situation
mkdir /data && chmod 777 /data

4. Start the services
systemctl enable nfs-server rpcbind && systemctl start nfs-server rpcbind
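
From either worker node, confirm the export is visible (showmount ships with nfs-utils):

showmount -e 192.168.152.171   # should list /data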

Configure the StorageClass; be sure to change the NFS server IP and export path to yours

cat >storageclass.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
   namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.152.171 # change to your NFS server IP
            - name: NFS_PATH
              value: /data   # the NFS export directory
      volumes:
        - name: nfs-client
          nfs:
            server: 192.168.152.171 # change to your NFS server IP
            path: /data   # the NFS export directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
EOF

Create the StorageClass

kubectl apply -f storageclass.yaml

Set the default StorageClass

kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Check the nfs-client pod status

# The provisioner pod was created in the default namespace
kubectl get pods

NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7b9746695c-nrz4n   1/1     Running   0          2m38s

Check the default StorageClass

# StorageClasses are cluster-scoped
kubectl get sc

NAME                    PROVISIONER      AGE
nfs-storage (default)   fuseim.pri/ifs   7m22s
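
To confirm dynamic provisioning works end to end, create a throwaway PVC against the default class (test-claim is a name made up for this check):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

# The PVC should reach Bound within a few seconds; clean up afterwards
kubectl get pvc test-claim
kubectl delete pvc test-claim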

Deploy KubeSphere

See the official documentation.

Minimal KubeSphere installation:

kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml

Check the installation logs

# Follow the installer logs with the command below
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


Installation is complete when the log ends like the following, though you still have to wait until all pods are fully running:

Start installing monitoring
**************************************************
task monitoring status is successful
total: 1     completed:1
**************************************************
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.10:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Status". If the service is not
     ready, please wait patiently. You can start
     to use when all components are ready.
  2. Please modify the default password after login.

#####################################################

The installation log contains the following error; it is harmless:

TASK [ks-core/ks-core : KubeSphere | Delete Ingress-controller configmap] ******
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl delete cm -n kubesphere-system ks-router-config\n", "delta": "0:00:00.562513", "end": "2020-04-28 07:18:28.772284", "msg": "non-zero return code", "rc": 1, "start": "2020-04-28 07:18:28.209771", "stderr": "Error from server (NotFound): configmaps \"ks-router-config\" not found", "stderr_lines": ["Error from server (NotFound): configmaps \"ks-router-config\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

Check that every pod is Running:

kubectl get pods -A

NAMESPACE                      NAME                                        READY   STATUS    RESTARTS   AGE
default                        nfs-client-provisioner-7b9746695c-nrz4n     1/1     Running   0          18m
kube-system                    calico-kube-controllers-bc44d789c-ksgnt     1/1     Running   0          39h
kube-system                    calico-node-2t4gr                           1/1     Running   0          39h
kube-system                    calico-node-5bzjl                           1/1     Running   0          39h
kube-system                    calico-node-fjdll                           1/1     Running   0          39h
kube-system                    coredns-58cc8c89f4-8jrlt                    1/1     Running   0          39h
kube-system                    coredns-58cc8c89f4-nt5z5                    1/1     Running   0          39h
kube-system                    etcd-k8s-master1                            1/1     Running   0          39h
kube-system                    kube-apiserver-k8s-master1                  1/1     Running   0          39h
kube-system                    kube-controller-manager-k8s-master1         1/1     Running   0          39h
kube-system                    kube-proxy-b7vj4                            1/1     Running   0          39h
kube-system                    kube-proxy-bghx7                            1/1     Running   0          39h
kube-system                    kube-proxy-ntrxx                            1/1     Running   0          39h
kube-system                    kube-scheduler-k8s-master1                  1/1     Running   0          39h
kube-system                    kuboard-756d46c4d4-dwzwt                    1/1     Running   0          39h
kube-system                    metrics-server-78cff478b7-lwcfl             1/1     Running   0          39h
kube-system                    tiller-deploy-6d8dfbb696-ldpjd              1/1     Running   0          40m
kubernetes-dashboard           dashboard-metrics-scraper-b68468655-t2wgd   1/1     Running   0          39h
kubernetes-dashboard           kubernetes-dashboard-64999dbccd-zwnn5       1/1     Running   1          39h
kubesphere-controls-system     default-http-backend-5d464dd566-5hlzs       1/1     Running   0          6m9s
kubesphere-controls-system     kubectl-admin-6c664db975-kp6r5              1/1     Running   0          3m10s
kubesphere-monitoring-system   kube-state-metrics-566cdbcb48-cc4fv         4/4     Running   0          5m32s
kubesphere-monitoring-system   node-exporter-5lvpx                         2/2     Running   0          5m32s
kubesphere-monitoring-system   node-exporter-hlfbh                         2/2     Running   0          5m32s
kubesphere-monitoring-system   node-exporter-qxkm6                         2/2     Running   0          5m32s
kubesphere-monitoring-system   prometheus-k8s-0                            3/3     Running   1          4m32s
kubesphere-monitoring-system   prometheus-k8s-system-0                     3/3     Running   1          4m32s
kubesphere-monitoring-system   prometheus-operator-6b97679cfd-6dztx        1/1     Running   0          5m32s
kubesphere-system              ks-account-596657f8c6-kzx9w                 1/1     Running   0          5m56s
kubesphere-system              ks-apigateway-78bcdc8ffc-2rvbg              1/1     Running   0          5m58s
kubesphere-system              ks-apiserver-5b548d7c5c-dxqt7               1/1     Running   0          5m57s
kubesphere-system              ks-console-78bcf96dbf-kdh7q                 1/1     Running   0          5m53s
kubesphere-system              ks-controller-manager-696986f8d9-fklzv      1/1     Running   0          5m55s
kubesphere-system              ks-installer-75b8d89dff-zm6fl               1/1     Running   0          7m49s
kubesphere-system              openldap-0                                  1/1     Running   0          6m21s
kubesphere-system              redis-6fd6c6d6f9-dqh2s                      1/1     Running   0          6m25s

Access the KubeSphere console at http://<node-ip>:30880
Username: admin
Default password: P@88w0rd
