Kubernetes in Practice, Part 2: Building a Highly Available K8s Cluster
12. Building a Highly Available K8s Cluster
Environment plan:

| Role | IP | Components | Recommended spec |
| --- | --- | --- | --- |
| k8s_master01, etcd01 | 192.168.1.153 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | 2+ CPU cores, 2 GB+ RAM |
| k8s_master02 | 192.168.1.157 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | |
| k8s_node01, etcd02 | 192.168.1.154 | kubelet, kube-proxy, docker, flannel, etcd | |
| k8s_node02, etcd03 | 192.168.1.155 | kubelet, kube-proxy, docker, flannel, etcd | |
| Load Balancer (Master) | 192.168.1.158, 192.168.1.160 (VIP) | Nginx L4 | |
| Load Balancer (Backup) | 192.168.1.159 | Nginx L4 | |
Multi-master cluster architecture
12.1 Building the HA Load Balancers (keepalived + nginx)
Install nginx:
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y nginx
Install keepalived:
yum -y install keepalived
Add a stream block to the nginx configuration so it load-balances TCP traffic to the two apiservers:
vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.1.153:6443;
        server 192.168.1.157:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
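An easy mistake is to paste the `stream {}` block inside the existing `http {}` block; it must sit at the top level of nginx.conf, as a sibling of `http {}`. A sketch of the overall file layout (surrounding directives abbreviated with comments):

```
user  nginx;
worker_processes  auto;

events {
    worker_connections 1024;
}

# L4 proxy for the two apiservers -- the stream block shown above
stream {
    # ... upstream k8s-apiserver and the server { listen 6443; } block ...
}

http {
    # ... default http configuration shipped with the nginx package ...
}
```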
systemctl start nginx
Configure keepalived on the master LB (the default config file path is /etc/keepalived/keepalived.conf):
! Configuration File for keepalived
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    virtual_router_id 51    # VRRP router ID; unique per instance, must match on master and backup
    priority 100            # priority; set this to 90 on the backup server
    advert_int 1            # VRRP advertisement interval (default 1 s)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.160/24
    }
    track_script {
        check_nginx
    }
}
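The backup LB (192.168.1.159) uses an almost identical keepalived.conf; only the router ID, the VRRP state, and the priority differ. A sketch of just the lines that change:

```
global_defs {
    # ... same notification settings as the master ...
    router_id NGINX_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP            # the master LB uses MASTER
    priority 90             # lower than the master's 100
    # ... interface, virtual_router_id, authentication, VIP, and
    # track_script are identical to the master's config ...
}
```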
Health-check script that keepalived uses to watch nginx (if nginx dies, keepalived stops itself so the VIP fails over to the backup):
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this shell
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
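The counting pipeline is worth unpacking: `egrep -cv "grep|$$"` counts the lines that survived `grep nginx` while excluding both the `grep nginx` process itself and anything containing the current shell's PID (`$$`). It can be sanity-checked in isolation:

```shell
# Count nginx processes the same way check_nginx.sh does.
# On a machine where nginx is not running this typically prints 0,
# because the only matching line (the grep itself) is filtered out.
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
echo "$count"
```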
12.2 Building Master02
Copy the files over from master01:
scp -r /opt/kubernetes/ k8s_master02:/opt/
scp -r /opt/etcd/ k8s_master02:/opt/
scp /usr/bin/kubectl k8s_master02:/usr/bin/
scp /usr/lib/systemd/system/kube-* k8s_master02:/usr/lib/systemd/system
Edit the configuration on master02 so kube-apiserver binds to and advertises its own address:
vim /opt/kubernetes/cfg/kube-apiserver
--bind-address=192.168.1.157 \
--advertise-address=192.168.1.157 \
Start the master services and enable them at boot:
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.1.154 Ready <none> 6h15m v1.13.4
192.168.1.155 Ready <none> 6h4m v1.13.4
12.3 Repointing the Nodes at the Load Balancer
cd /opt/kubernetes/cfg
On each node, change the apiserver address in bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig to the VIP, 192.168.1.160.
Restart the node services:
systemctl restart kubelet
systemctl restart kube-proxy
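Each of the three kubeconfigs contains a `server:` line pointing at the old apiserver address, so the edit can be scripted with a single `sed` substitution. A sketch, demonstrated on stand-in copies in a scratch directory so it is self-contained (on a real node the files live in /opt/kubernetes/cfg, and the old address is assumed here to be master01's 192.168.1.153):

```shell
# Create stand-in copies of the three kubeconfigs for demonstration.
mkdir -p /tmp/k8s-cfg && cd /tmp/k8s-cfg
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    printf 'server: https://192.168.1.153:6443\n' > "$f"
done

# Point all three files at the VIP in one substitution.
sed -i 's#https://192.168.1.153:6443#https://192.168.1.160:6443#' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig

# Verify: every server line should now show 192.168.1.160.
grep -h 'server:' *.kubeconfig
```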
12.4 Testing the HA Deployment
Regenerate the kubeconfig file, which holds the credentials for authenticating to the apiserver (run on master01):
cd /home/k8s_install/k8s_master_componet/
./kubelet_new.sh
This generates a file named config. The script contents:
kubectl config set-cluster kubernetes --server=https://192.168.1.160:6443 --embed-certs=true --certificate-authority=ca.pem --kubeconfig=config
kubectl config set-credentials cluster-admin --certificate-authority=ca.pem --embed-certs=true --client-key=admin-key.pem --client-certificate=admin.pem --kubeconfig=config
kubectl config set-context default --cluster=kubernetes --user=cluster-admin --kubeconfig=config
kubectl config use-context default --kubeconfig=config
Run on node01 (copy config and kubectl to node01 first):
scp /usr/bin/kubectl k8s_node01:/usr/bin/
scp config k8s_node01:/root/
kubectl --kubeconfig=/root/config get node
NAME STATUS ROLES AGE VERSION
192.168.1.154 Ready <none> 7h7m v1.13.4
192.168.1.155 Ready <none> 6h56m v1.13.4