K8s: Binary Deployment, Explained in Detail
Preparation note: use three virtual machines, and give the master a larger CPU allocation.
Preface
The official docs offer three deployment methods: kubeadm, binary, and minikube. This article covers the binary method.
A K8S binary deployment breaks down into several modules:
1. The etcd cluster
2. The flannel network
3. Single-master deployment
4. Node deployment
5. Multi-master deployment
I. A Brief Introduction to CA Certificates
When Kubernetes components communicate with each other, digital-certificate verification happens at the protocol level via TLS: apart from supplying the relevant certificates and keys when the connection is established, nothing special is needed at the application level. TLS verification comes in two forms:
One-way (server) authentication: only the server presents a certificate, and the client uses it to verify the server's identity; the server does not verify the client. This suits services open to the Internet, such as a search engine: any client may connect, but clients need to verify the server's identity to avoid connecting to a forged, malicious server.
Mutual TLS authentication: in addition to the client verifying the server's certificate, the server verifies the client's identity through a client certificate. This is used when the server exposes sensitive information and only clients with a specific identity may connect. In Kubernetes, the interfaces the components expose contain internal cluster information; unauthorized access to them would compromise cluster security, so inter-component communication uses mutual TLS, meaning client and server
each verify the other's identity. Mutual authentication between two components involves the following certificate-related files (a quick openssl check showing how they relate follows this list):
①: Server certificate: the digital certificate the server uses to prove its identity; it mainly contains the server's public key and identity information.
②: Server private key: the private key matching the public key in the server certificate. Public and private keys are used as a pair; during TLS verification the server uses this private key to prove to the client that it owns the server certificate.
③: Client certificate: the digital certificate the client uses to prove its identity; it mainly contains the client's public key and identity information.
④: Client private key: the private key matching the public key in the client certificate; likewise, the client uses it to prove to the server that it owns the client certificate.
⑤: Server-side CA root certificate: the root certificate of the CA that issued the server certificate; the client uses it to verify the server certificate's validity.
⑥: Client-side CA root certificate: the root certificate of the CA that issued the client certificate; the server uses it to verify the client certificate's validity.
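To make these six files concrete, here is a small optional check with openssl (a sketch only; it assumes the ca.pem, server.pem, and server-key.pem file names that the deployment below produces):
openssl verify -CAfile ca.pem server.pem '//the CA root certificate (item ⑤) validates the server certificate (item ①)'
openssl x509 -noout -modulus -in server.pem | openssl md5 '//fingerprint of the public key inside the certificate'
openssl rsa -noout -modulus -in server-key.pem | openssl md5 '//must print the same digest, proving the key pair (items ① and ②) matches'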
- 1. Make the CA ("official authority") files:
  - Create the CA private key: ca-key.pem
  - Create the CA certificate: ca.pem
- 2. Make the master-side certificate (used for encrypted internal communication, and so that master-signed certificates can be issued to clients):
  ① The creation process takes these steps:
    - a private key, to guarantee secure encryption: *-key.pem
    - a signing request signed with that key, to guarantee an authentic identity: *.csr
    - the certificate itself, issued by the CA: *.pem
  ② Create the private key
  ③ Sign with the private key (produce the CSR)
  ④ Issue the certificate using the CA certificate and key
- 3. Make the node-side certificate:
  ① The master creates the node key
  ② The node certificate is signed
  ③ Create a config file (distinct from the server side: it is for client verification)
  ④ Generate the certificate
II. Kubernetes Architecture and Component Diagram
III. Environment Preparation
1. Host layout for the k8s binary deployment
Master:192.168.10.129/24 kube-apiserver kube-controller-manager kube-scheduler etcd
Node01:192.168.10.135/24 kubelet kube-proxy docker flannel etcd
Node02:192.168.10.131/24 kubelet kube-proxy docker flannel etcd
2. Deploy Docker on all hosts
yum install -y yum-utils device-mapper-persistent-data lvm2
cd /etc/yum.repos.d/
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
vim /etc/selinux/config
SELINUX=disabled
systemctl start docker.service
systemctl enable docker.service
systemctl daemon-reload
systemctl restart docker
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://y55tgbiz.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
sysctl -p
service network restart
sudo systemctl restart docker
3. Deploy the etcd cluster
calico: another network component, one that supports BGP
1.1: Topology and host allocation
1.1.1: Topology introduction
- Master components:
kube-apiserver: the cluster's unified entry point and the coordinator of all components; every create/delete/update/query and watch operation on object resources goes through the APIserver, which then persists it to etcd.
kube-controller-manager: handles the cluster's routine background tasks; each resource has a corresponding controller, and controller-manager is responsible for managing those controllers.
kube-scheduler: selects a node for each newly created pod according to the scheduling algorithm; it can be deployed anywhere, on the same node as other components or on a different one.
- Node components:
kubelet: the master's agent on each node; it manages the lifecycle of containers running on the local machine, such as creating containers, mounting data volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each pod into a set of containers.
kube-proxy: implements the pod network proxy on the node, maintaining network rules and layer-4 load balancing.
docker: the Docker engine.
flannel: the flannel network.
- The etcd cluster: here, etcd is deployed across all three nodes.
etcd is an open-source project started by the CoreOS team in June 2013, written in Go, whose goal is a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm.
An etcd cluster stores its data without any central node and has these characteristics (a quick key-value demo follows this list):
1. Simple: easy to install and configure, with an HTTP API that is easy to use
2. Secure: supports SSL certificate verification
3. Fast: per the official benchmarks, a single instance supports 2k+ reads per second
4. Reliable: uses the Raft algorithm to achieve availability and consistency of distributed data
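As a quick illustration of that key-value API, the sketch below writes and reads one key. It assumes the cluster, certificates, and endpoints built in section IV are already in place (etcdctl v3.3 speaks the v2 API by default):
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.129:2379" set /demo/hello world '//write the key /demo/hello'
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.129:2379" get /demo/hello '//reads back: world'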
- 部署K8S集群中会用到的自签CA证书
组件 | 使用的证书 |
---|---|
etcd | ca.pem,server.pem,server-key.pem |
flannel | ca.pem,server.pem,server-key.pem |
kube-apiserver | ca.pem,server.pem,server-key.pem |
kubelet | ca.pem,ca-key.pem |
kube-proxy | ca.pem,kube-proxy.pem,kube-proxy-key.pem |
kubectl | ca.pem,admin-pem,admin-key.pem |
① Change the hostname
[root@localhost ~]# hostnamectl set-hostname master '//change the other two hosts the same way'
[root@localhost ~]# su
[root@master ~]#
② Disable the firewall and SELinux on all three nodes; only one node's operations are shown here
[root@node02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node02 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
IV. Deploying the etcd Cluster
1. On the master, create the k8s directory, upload the etcd scripts, and download the official cfssl certificate-generation tools
[root@master ~]# mkdir -p k8s/etcd-cert
[root@master ~]# cd k8s/
[root@master k8s]# rz -E '//upload the etcd scripts'
rz waiting to receive.
[root@master k8s]# ls
etcd-cert etcd-cert.sh etcd.sh
[root@master k8s]# mv etcd-cert.sh etcd-cert '//move it into the cert directory'
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh '//run the script to download the tools'
[root@master k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson '//cfssl: generates certificates; cfssljson: generates certificates from JSON input; cfssl-certinfo: views certificate information'
2. Create the CA certificate
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# ls
etcd-cert.sh
[root@master etcd-cert]# vim etcd-cert.sh
[root@master etcd-cert]# cat > ca-config.json <<EOF '//define the CA config file'
> {
> "signing": {
> "default": {
> "expiry": "87600h" '//有效期10年'
> },
> "profiles": {
> "www": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
[root@master etcd-cert]# ls
ca-config.json etcd-cert.sh
[root@master etcd-cert]# cat > ca-csr.json <<EOF
> {
> "CN": "etcd CA",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing"
> }
> ]
> }
> EOF
[root@master etcd-cert]# ls
ca-config.json ca-csr.json etcd-cert.sh
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
bash: /usr/local/bin/cfssljson: Permission denied
bash: /usr/local/bin/cfssl: Permission denied
[root@master etcd-cert]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/09/28 16:22:04 [INFO] generating a new CA key and certificate from CSR
2021/09/28 16:22:04 [INFO] generate received request
2021/09/28 16:22:04 [INFO] received CSR
2021/09/28 16:22:04 [INFO] generating key: rsa-2048
2021/09/28 16:22:04 [INFO] encoded CSR
2021/09/28 16:22:04 [INFO] signed certificate with serial number 678815727212678730559848261748074196527965318882
[root@master etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd-cert.sh
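As an optional sanity check (not in the original steps), the cfssl-certinfo tool downloaded earlier can inspect the CA certificate that was just produced:
[root@master etcd-cert]# cfssl-certinfo -cert ca.pem '//shows the subject (CN=etcd CA), the issuer, and the not_before/not_after validity window'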
3. Specify the three etcd node addresses for inter-node communication verification
[root@master etcd-cert]# cat > server-csr.json <<EOF '//configure the server-side signing request'
> {
> "CN": "etcd",
> "hosts": [
> "192.168.10.129",
> "192.168.10.135",
> "192.168.10.131"
> ],
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "BeiJing",
> "ST": "BeiJing"
> }
> ]
> }
> EOF
[root@master etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd-cert.sh server-csr.json
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2021/09/28 16:24:59 [INFO] generate received request
2021/09/28 16:24:59 [INFO] received CSR
2021/09/28 16:24:59 [INFO] generating key: rsa-2048
2021/09/28 16:25:00 [INFO] encoded CSR
2021/09/28 16:25:00 [INFO] signed certificate with serial number 200857063490932135057169158953811586963187607667
2021/09/28 16:25:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master etcd-cert]# ls
ca-config.json ca-csr.json ca.pem server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
4. Download and unpack the etcd binary package; download from: https://github.com/etcd-io/etcd/releases
[root@master etcd-cert]# cd ..
[root@master k8s]# rz -E '//already downloaded, so upload it directly, along with the flannel and kubernetes-server packages'
rz waiting to receive.
[root@master k8s]# ls
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz '//unpack the package'
5. Create directories for binaries, config files, and certificates, and move the relevant files into them
[root@master k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}
[root@master k8s]# ls /opt/etcd/
bin cfg ssl
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd* /opt/etcd/bin '//move the binaries into the bin directory just created'
[root@master k8s]# ls /opt/etcd/bin/
etcd etcdctl
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl '//copy the certificate files into the ssl directory just created'
[root@master k8s]# ls /opt/etcd/ssl
ca-key.pem ca.pem server-key.pem server.pem
[root@master k8s]# vim etcd.sh '//inspect the script'
...content omitted
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380" '//2380 is the internal etcd peer-communication port'
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379" '//2379 is the client-facing port each etcd member exposes'
...content omitted
6. On the master, run the script, declaring the local node name and address; it then blocks, waiting about 2 minutes for the other nodes to join
[root@master k8s]# ls /opt/etcd/cfg/ '//at this point the directory contains no files'
[root@master k8s]# bash etcd.sh etcd01 192.168.10.129 etcd02=https://192.168.10.135:2380,etcd03=https://192.168.10.131:2380 '//run the command; it enters a waiting state'
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master k8s]# ls /opt/etcd/cfg/ '//reopening a terminal, the config file has now been generated'
etcd
7. Copy the certificates and the systemd unit to the two worker nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.10.135:/opt
[root@master k8s]# scp -r /opt/etcd/ root@192.168.10.131:/opt
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.10.135:/usr/lib/systemd/system
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.10.131:/usr/lib/systemd/system
- On the node01 and node02 worker nodes, edit the etcd config file, changing the member name and IP addresses accordingly
[root@node01 ~]# vim /opt/etcd/cfg/etcd '//modify both nodes the same way; only node01 is shown here'
8. Start the cluster script on the master first, then start etcd on the two nodes
[root@master k8s]# bash etcd.sh etcd01 192.168.10.129 etcd02=https://192.168.10.135:2380,etcd03=https://192.168.10.131:2380 '//start the cluster script on the master'
[root@node01 ~]# systemctl start etcd '//then start etcd on both nodes'
[root@node01 ~]# systemctl status etcd
[root@node02 ~]# systemctl start etcd
[root@node02 ~]# systemctl status etcd
Check the cluster health; note the relative paths (cd into the ssl directory first)
[root@master k8s]# cd /opt/etcd/ssl/
[root@master ssl]# ls
ca-key.pem ca.pem server-key.pem server.pem
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.129:2379,https://192.168.10.135:2379,https://192.168.10.131:2379" cluster-health
member a577d40b7d081aae is healthy: got healthy result from https://192.168.10.129:2379
member b5d01bc42d3df1bf is healthy: got healthy result from https://192.168.10.135:2379
member bd998b98e5e1b417 is healthy: got healthy result from https://192.168.10.131:2379
cluster is healthy '//the cluster is healthy; all good'
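Beyond cluster-health, it can be worth listing the members; a sketch using the same TLS flags as above:
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.129:2379" member list '//shows etcd01/etcd02/etcd03 with their 2380 peer URLs and 2379 client URLs'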
- Deploy Docker on the nodes
- Deploying Docker on the two nodes is not repeated here; if anything is unclear, see my earlier blog post:
Link: https://blog.csdn.net/weixin_56477161.
V. Deploying the flannel Container Cluster Network
1. flannel network theory
- Overlay Network: an overlay is a virtualized networking technique layered on top of the base network, in which the hosts are connected by virtual links.
- VXLAN: wraps the source packet in UDP, using the underlay network's IP/MAC as the outer header, then transmits it over Ethernet; on arrival, the tunnel endpoint decapsulates it and delivers the data to the target address.
- Flannel is one kind of overlay network: it likewise encapsulates the source packet inside another network packet for routing, forwarding, and communication; it currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends.
- Flannel is a network-planning service designed by the CoreOS team for Kubernetes. Put simply, it gives Docker containers created on different cluster nodes virtual IP addresses that are unique across the entire cluster, and it builds an overlay network between those addresses through which packets are delivered to the target container unmodified.
- etcd's role here: it provides the bookkeeping for Flannel,
- storing and managing the IP address ranges Flannel may allocate, and
- tracking the actual address of each Pod in etcd while maintaining a Pod-to-node routing table in memory (you can inspect these records directly, as shown right after this list)
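Once flannel is running (deployed below), those reservations are visible directly in etcd; a sketch, with the exact subnet key depending on the lease each node happens to get:
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.10.129:2379" ls /coreos.com/network/subnets '//one key per node lease, e.g. /coreos.com/network/subnets/172.17.7.0-24'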
2. Deployment
- 1. On the master, write the allocated subnet range into etcd for flannel to use
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.129:2379,https://192.168.10.135:2379,https://192.168.10.131:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}' '//write the allocated network range'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.129:2379,https://192.168.10.135:2379,https://192.168.10.131:2379" get /coreos.com/network/config '//check the range that was written'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
- 2. Deploy flannel on the two nodes
[root@master ssl]# cd /root/k8s/
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.10.135:/opt
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.10.131:/opt
[root@node01 ~]# cd /opt
[root@node01 opt]# ls
containerd etcd flannel-v0.10.0-linux-amd64.tar.gz
[root@node01 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz '//node02 must unpack it as well; not repeated here'
flanneld
mk-docker-opts.sh
README.md
'//whichever node needs to run pods needs the flannel network installed'
- 3. On the nodes, create the k8s working directory and move the two scripts into it
[root@node01 opt]# mkdir -p /opt/k8s/{cfg,bin,ssl} '//create the config, binary, and certificate directories'
[root@node01 opt]# mv mk-docker-opts.sh flanneld ./k8s/bin/ '//move the flannel binaries into the bin directory'
[root@node01 opt]# ls k8s/bin/
flanneld mk-docker-opts.sh
- 4. On both nodes, write the flannel.sh script, which creates the config file and the startup unit; the port specified is 2379, the client-facing port of each etcd member
[root@node01 opt]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/k8s/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem \"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/k8s/cfg/flanneld
ExecStart=/opt/k8s/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
- 5. Run the script to enable the flannel network
[root@node01 opt]# bash flannel.sh https://192.168.10.129:2379,https://192.168.10.135:2379,https://192.168.10.131:2379 '//run on both nodes'
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node02 opt]# systemctl status flanneld '//check that the flanneld service started correctly'
- 6. Configure Docker to use the flannel network
[root@node01 opt]# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_NETWORK_OPTIONS --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
- 7. Check the IP range flannel assigned to Docker
[root@node01 opt]# cat /run/flannel/subnet.env '//the subnet allocated to node01'
DOCKER_OPT_BIP="--bip=172.17.7.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.7.1/24 --ip-masq=false --mtu=1450" '//bip sets the subnet used at startup'
[root@node02 opt]#cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.76.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.76.1/24 --ip-masq=false --mtu=1450"
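Conceptually, mk-docker-opts.sh translates the FLANNEL_* variables that flanneld writes into the DOCKER_* variables shown above. A simplified sketch of that translation (not the real script, which also handles the -k/-d options used in the unit file):
source /run/flannel/subnet.env '//flanneld first writes FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ here'
echo "DOCKER_OPT_BIP=\"--bip=$FLANNEL_SUBNET\"" '//becomes the --bip flag for docker'
echo "DOCKER_OPT_MTU=\"--mtu=$FLANNEL_MTU\"" '//1450 rather than 1500, leaving room for the VXLAN header'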
- 8. Restart the Docker service, then check whether the flannel network has changed
[root@node02 opt]# systemctl daemon-reload
[root@node02 opt]# systemctl restart docker
[root@node02 opt]# ip addr '//each node should now show its own flannel network subnet'
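To tie this back to the VXLAN theory in section V, the tunnel device itself can be inspected (a sketch; flannel.1 is the default device name of the flannel VXLAN backend):
[root@node01 opt]# ip -d link show flannel.1 '//-d prints the vxlan details: VNI, local address, UDP destination port'
[root@node01 opt]# ip route | grep 172.17 '//the subnets of the other nodes are routed via flannel.1'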
- 9. Create containers to test whether the two nodes can reach each other
[root@node01 opt]# docker run -it centos:7 /bin/bash '//create and run a container on both nodes'
[root@8ffe415fb35e /]# yum -y install net-tools '//install the network tools in both containers'
[root@8ffe415fb35e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.7.2 netmask 255.255.255.0 broadcast 172.17.7.255
...content omitted '//as seen above, the node01 container's IP address is 172.17.7.2 and the node02 container's IP address is 172.17.76.2'
[root@6b941018fc14 /]# ping -c 2 172.17.76.2 '//the node01 container pings the node02 container successfully'
PING 172.17.76.2 (172.17.76.2) 56(84) bytes of data.
64 bytes from 172.17.76.2: icmp_seq=1 ttl=62 time=0.691 ms
64 bytes from 172.17.76.2: icmp_seq=2 ttl=62 time=0.332 ms
--- 172.17.76.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.332/0.511/0.691/0.180 ms
[root@ca36b4a45119 /]# ping -c 3 172.17.7.2 '//the node02 container pings the node01 container'
PING 172.17.7.2 (172.17.7.2) 56(84) bytes of data.
64 bytes from 172.17.7.2: icmp_seq=1 ttl=62 time=1.06 ms
64 bytes from 172.17.7.2: icmp_seq=2 ttl=62 time=0.354 ms
64 bytes from 172.17.7.2: icmp_seq=3 ttl=62 time=0.534 ms
--- 172.17.7.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.354/0.649/1.061/0.301 ms
'//this proves the flannel network was deployed successfully'
VI. Deploying the Master Components
- The original post includes a diagram of the kubelet startup flow on a node. Following that flow, we bind the kubelet-bootstrap user into the cluster on the master, then deploy the certificate authentication that lets nodes be detected by, and successfully connect to, the master.
- 1. On the master, generate the api-server certificates
[root@master k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl} '//create the k8s working directory'
[root@master k8s]# mkdir k8s-cert '//create the k8s certificate directory'
[root@master k8s]# unzip master.zip -d /opt/kubernetes/ '//unpack master.zip'
[root@master k8s]# ls /opt/kubernetes/
apiserver.sh bin cfg controller-manager.sh scheduler.sh ssl '//notice that controller-manager.sh has no execute permission'
[root@master k8s]# chmod +x /opt/kubernetes/controller-manager.sh '//grant execute permission'
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.233.131", '//master1,配置文件中要删除此类注释'
"192.168.233.130", '//master2'
"192.168.233.100", '//VIP'
"192.168.233.128", '//nginx代理master'
"192.168.233.129", '//nginx代理backup'
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
'//Why are no node IP addresses written in? Because if node IPs were included, adding or removing nodes later would be very troublesome'
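After running the script in the next step, you can read the SAN list back out of server.pem to confirm exactly which addresses were baked in (an optional openssl check, not part of the original steps):
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name" '//should list the master, VIP, and nginx proxy IPs plus the kubernetes.* DNS names, and no node IPs'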
- 2. Generate the certificates
[root@master k8s-cert]# bash k8s-cert.sh '//generate the certificates'
[root@master k8s-cert]# ls
admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl/ '//copy the certificates into the working directory'
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
- 3. Unpack the k8s server-side package
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
cfssl.sh etcd-v3.3.10-linux-amd64 k8s-cert
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
etcd.sh flannel-v0.10.0-linux-amd64.tar.gz master.zip
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
- 4. Copy the key server-side binaries into the k8s working directory
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# cp kube-controller-manager kube-scheduler kubectl kube-apiserver /opt/kubernetes/bin/
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
- 5. Create the token and bind the kubelet-bootstrap role
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' '//generate a random serial number'
7ea8f86b157225fd4b9273765e88a3ca
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
7ea8f86b157225fd4b9273765e88a3ca,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
'//serial number, user name, uid, group; this is the user the master uses to bootstrap node kubelets'
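For token.csv to take effect, kube-apiserver has to be pointed at it. The apiserver.sh script run in the next step is expected to pass the standard flag; a sketch of the relevant line in the generated /opt/kubernetes/cfg/kube-apiserver:
--token-auth-file=/opt/kubernetes/cfg/token.csv \ '//enables static token authentication against this file, which is what lets kubelet-bootstrap talk to the apiserver'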
- 6. Start the apiserver, storing its data in the etcd cluster, and check kube status
[root@master kubernetes]# bash apiserver.sh 192.168.233.131 https://192.168.233.131:2379,https://192.168.233.132:2379,https://192.168.233.133:2379
[root@master kubernetes]# ls /opt/kubernetes/cfg/
kube-apiserver token.csv
[root@master kubernetes]# netstat -ntap |grep kube
[root@master kubernetes]# ps aux |grep kube
[root@master kubernetes]# vim /opt/kubernetes/cfg/kube-apiserver
...content omitted
--secure-port=6443 \ '//effectively this cluster's 443: the HTTPS communication port'
...content omitted
[root@master kubernetes]# netstat -ntap |grep 6443
tcp 0 0 192.168.233.131:6443 0.0.0.0:* LISTEN 12636/kube-apiserve
tcp 0 0 192.168.233.131:40686 192.168.233.131:6443 ESTABLISHED 12636/kube-apiserve
tcp 0 0 192.168.233.131:6443 192.168.233.131:40686 ESTABLISHED 12636/kube-apiserve
- 7. Start the scheduler service
[root@master kubernetes]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master kubernetes]# systemctl status kube-scheduler
- 8. Start the controller-manager
[root@master kubernetes]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master kubernetes]# systemctl status kube-controller-manager
- 9. Check the master's component status
[root@master kubernetes]# /opt/kubernetes/bin/kubectl get cs '//everything is healthy; no problems'
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
1: Deploying node01
- 1. On the master, copy kubelet and kube-proxy to the nodes
[root@master kubernetes]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# ls
apiextensions-apiserver kube-apiserver.docker_tag kube-proxy
cloud-controller-manager kube-apiserver.tar kube-proxy.docker_tag
cloud-controller-manager.docker_tag kube-controller-manager kube-proxy.tar
cloud-controller-manager.tar kube-controller-manager.docker_tag kube-scheduler
hyperkube kube-controller-manager.tar kube-scheduler.docker_tag
kubeadm kubectl kube-scheduler.tar
kube-apiserver kubelet
[root@master bin]# scp kubelet kube-proxy root@192.168.233.132:/opt/k8s/bin
[root@master bin]# scp kubelet kube-proxy root@192.168.233.133:/opt/k8s/bin
- 2. On the node, unpack node.zip
[root@node01 ~]# rz -E
rz waiting to receive.
[root@node01 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip
[root@node01 ~]# unzip node.zip
[root@node01 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz kubelet.sh node.zip proxy.sh
- 3. On the master, create the kubeconfig directory
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=7ea8f86b157225fd4b9273765e88a3ca \ '//this token is the one written into /opt/kubernetes/cfg/token.csv earlier'
--kubeconfig=bootstrap.kubeconfig
# set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin '//set the PATH variable (can be written into /etc/profile to persist)'
- 4. Generate the config files and copy them to the nodes
[root@master kubeconfig]# bash kubeconfig 192.168.233.131 /root/k8s/k8s-cert/
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.233.132:/opt/k8s/cfg
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.233.133:/opt/k8s/cfg
- 5. Create the bootstrap role binding, granting permission to connect to the apiserver and request certificate signing
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
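An optional check that the binding landed as intended:
[root@master kubeconfig]# kubectl describe clusterrolebinding kubelet-bootstrap '//the Role should be system:node-bootstrapper and the Subject the user kubelet-bootstrap'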
- 6. On node01, generate the kubelet and kubelet.config config files
[root@node01 ~]# vim kubelet.sh
'//change every /opt/kubernetes path to /opt/k8s'
[root@node01 ~]# bash kubelet.sh 192.168.233.132
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ls /opt/k8s/cfg/
bootstrap.kubeconfig flanneld kubelet kubelet.config kube-proxy.kubeconfig
[root@node01 ~]# systemctl status kubelet
- 7. On the master, see node01's request and check the certificate status
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s 71s kubelet-bootstrap Pending
'//Pending: waiting for the cluster to issue a certificate to this node'
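Step 8 below approves this CSR by name; if many nodes join at once, a batch sketch such as this one approves every outstanding request in one go (use with care, since it approves them indiscriminately):
kubectl get csr -o name | xargs -n 1 kubectl certificate approve '//feeds each CSR name to kubectl certificate approve'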
- 8. Issue the certificate, then check the certificate status again
[root@master kubeconfig]# kubectl certificate approve node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s
certificatesigningrequest.certificates.k8s.io/node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s approved
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s 3m9s kubelet-bootstrap Approved,Issued '//now approved to join the cluster'
- 9. Check the cluster status and start the proxy service
[root@master kubeconfig]# kubectl get node '//if a single node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver, and if that is fine, check the VIP address and keepalived'
NAME STATUS ROLES AGE VERSION
192.168.233.132 Ready <none> 92s v1.12.3
[root@node01 ~]# vim proxy.sh '//edit the script, changing the /opt/kubernetes paths to /opt/k8s'
[root@node01 ~]# bash proxy.sh 192.168.233.132
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 ~]# systemctl status kube-proxy.service '//the service is in the running state'
2: Deploying node02
- 1. Copy the config files node01 generated earlier straight to node02
[root@node01 ~]# scp -r /opt/k8s/cfg/ root@192.168.233.133:/opt/k8s/cfg/
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.233.133:/usr/lib/systemd/system '//copy the unit files over'
- 2. Change the addresses in the three config files
[root@node02 ~]# cd /opt/k8s/cfg/
[root@node02 cfg]# vim kubelet
--hostname-override=192.168.233.133 \ '//change this to the IP address of node02'
[root@node02 cfg]# vim kubelet.config
address: 192.168.233.133
[root@node02 cfg]# vim kube-proxy
--hostname-override=192.168.233.133 \
- 3. Start the services and check their status
[root@node02 cfg]# systemctl start kubelet
[root@node02 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl status kubelet
[root@node02 cfg]# systemctl start kube-proxy
[root@node02 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node02 cfg]# systemctl status kube-proxy
- On the master, view the request and approve node02's certificate
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis 74s kubelet-bootstrap Pending
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s 21m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl certificate approve node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis '//approve the certificate'
certificatesigningrequest.certificates.k8s.io/node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis approved
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis 99s kubelet-bootstrap Approved,Issued
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s 21m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.233.132 Ready <none> 19m v1.12.3
192.168.233.133 Ready <none> 44s v1.12.3
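The ROLES column shows <none> because node roles are nothing more than labels. Optionally (purely cosmetic, and not part of the original steps), you can tag the workers:
[root@master kubeconfig]# kubectl label node 192.168.233.132 node-role.kubernetes.io/node= '//kubectl get node will now show node in the ROLES column'
[root@master kubeconfig]# kubectl label node 192.168.233.133 node-role.kubernetes.io/node=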