k8s + kube-vip: a Highly Available Kubernetes Cluster
Building a Highly Available Kubernetes Cluster
Server Planning
Node type | IP |
---|---|
master1 | 172.16.27.10 |
master2 | 172.16.27.11 |
master3 | 172.16.27.12 |
slave1 | 172.16.27.13 |
slave2 | 172.16.27.14 |
vip | 172.16.27.20 |
Note: when kube-vip manages the VIP, the VIP must be in the same subnet as the node IPs.
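As a quick sanity check (assuming a /24 netmask, which is what the addresses above suggest), you can compare the network prefix of the VIP against any node IP:

```shell
# Quick sanity check (assumes a /24 netmask): the VIP and a node IP
# should share the same first three octets.
vip=172.16.27.20
node_ip=172.16.27.10
if [ "${vip%.*}" = "${node_ip%.*}" ]; then
  echo "VIP and node are in the same /24"
else
  echo "different subnets - kube-vip ARP announcements will not reach the nodes"
fi
```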
Install kubeadm, kubelet and kubectl
Copy the following into a script file, use ansible to push it to every node, and then run it via ansible to install kubeadm, kubelet and kubectl everywhere:
```shell
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
```
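One way to push and run the script on all nodes is a small ad-hoc playbook; a minimal sketch (the inventory group `k8s` and the local file name `install-k8s.sh` are assumptions, adjust them to your inventory):

```yaml
# Hypothetical playbook: push the install script to every node and run it.
# Assumes an inventory group "k8s" covering all five nodes and a local
# file install-k8s.sh containing the script above.
- hosts: k8s
  become: true
  tasks:
  - name: Copy the install script
    copy:
      src: install-k8s.sh
      dest: /tmp/install-k8s.sh
      mode: "0755"
  - name: Run the install script
    command: /tmp/install-k8s.sh
```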
Prepare the kube-vip configuration and static pod manifest
Create the file /etc/kube-vip/config.yaml on each of the three master nodes.
master1:

```yaml
localPeer:
  id: master1
  address: 172.16.27.10
  port: 10000
remotePeers:
- id: master2
  address: 172.16.27.11
  port: 10000
- id: master3
  address: 172.16.27.12
  port: 10000
vip: 172.16.27.20
gratuitousARP: true
singleNode: false
startAsLeader: true
interface: ens33
loadBalancers:
- name: API Server Load Balancer
  type: tcp
  port: 8443
  bindToVip: false
  backends:
  - port: 6443
    address: 172.16.27.10
  - port: 6443
    address: 172.16.27.11
  - port: 6443
    address: 172.16.27.12
```
master2:

```yaml
localPeer:
  id: master2
  address: 172.16.27.11
  port: 10000
remotePeers:
- id: master1
  address: 172.16.27.10
  port: 10000
- id: master3
  address: 172.16.27.12
  port: 10000
vip: 172.16.27.20
gratuitousARP: true
singleNode: false
startAsLeader: false   # only master1 bootstraps as leader; this peer joins it
interface: ens33
loadBalancers:
- name: API Server Load Balancer
  type: tcp
  port: 8443
  bindToVip: false
  backends:
  - port: 6443
    address: 172.16.27.10
  - port: 6443
    address: 172.16.27.11
  - port: 6443
    address: 172.16.27.12
```
master3:

```yaml
localPeer:
  id: master3
  address: 172.16.27.12
  port: 10000
remotePeers:
- id: master1
  address: 172.16.27.10
  port: 10000
- id: master2
  address: 172.16.27.11
  port: 10000
vip: 172.16.27.20
gratuitousARP: true
singleNode: false
startAsLeader: false   # only master1 bootstraps as leader; this peer joins it
interface: ens33
loadBalancers:
- name: API Server Load Balancer
  type: tcp
  port: 8443
  bindToVip: false
  backends:
  - port: 6443
    address: 172.16.27.10
  - port: 6443
    address: 172.16.27.11
  - port: 6443
    address: 172.16.27.12
```
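The three files differ only in `localPeer`, `remotePeers`, and `startAsLeader`, so they can be rendered from a single loop rather than maintained by hand. A sketch (POSIX sh assumed; it writes into a local `kube-vip-configs/` staging directory rather than directly into /etc):

```shell
# Sketch: render the three per-node kube-vip configs from one loop, since the
# files differ only in localPeer, remotePeers and startAsLeader.
# Writes to a local kube-vip-configs/ directory (a staging dir, not /etc).
ip_of() {
  case "$1" in
    master1) echo 172.16.27.10 ;;
    master2) echo 172.16.27.11 ;;
    master3) echo 172.16.27.12 ;;
  esac
}
mkdir -p kube-vip-configs
for id in master1 master2 master3; do
  {
    printf 'localPeer:\n  id: %s\n  address: %s\n  port: 10000\n' "$id" "$(ip_of "$id")"
    echo "remotePeers:"
    for peer in master1 master2 master3; do
      if [ "$peer" != "$id" ]; then
        printf -- '- id: %s\n  address: %s\n  port: 10000\n' "$peer" "$(ip_of "$peer")"
      fi
    done
    echo "vip: 172.16.27.20"
    echo "gratuitousARP: true"
    echo "singleNode: false"
    # only the first peer bootstraps as leader; the others join it
    if [ "$id" = master1 ]; then
      echo "startAsLeader: true"
    else
      echo "startAsLeader: false"
    fi
    echo "interface: ens33"
    cat <<'EOF'
loadBalancers:
- name: API Server Load Balancer
  type: tcp
  port: 8443
  bindToVip: false
  backends:
  - port: 6443
    address: 172.16.27.10
  - port: 6443
    address: 172.16.27.11
  - port: 6443
    address: 172.16.27.12
EOF
  } > "kube-vip-configs/$id.yaml"
done
```

Each generated file can then be copied to the matching node as /etc/kube-vip/config.yaml.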
First, create /etc/kubernetes/manifests/kube-vip.yaml on master1:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - command:
    - /kube-vip
    - start
    - -c
    - /vip.yaml
    image: 'plndr/kube-vip:0.1.1'
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - SYS_TIME
    volumeMounts:
    - mountPath: /vip.yaml
      name: config
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kube-vip/config.yaml
    name: config
status: {}
```
Install the cluster
When using flannel, pass the matching --pod-network-cidr=10.244.0.0/16:

```shell
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint="172.16.27.20:8443" --upload-certs
```

When using calico, pass the matching --pod-network-cidr=192.168.0.0/16:

```shell
kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint="172.16.27.20:8443" --upload-certs
```
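kubeadm init on master1 prints the join commands for the remaining nodes; master2 and master3 join through the VIP with something like the following (the token, CA hash, and certificate key are placeholders to be substituted from your own init output):

```shell
# Template only: substitute the values printed by 'kubeadm init' on master1.
kubeadm join 172.16.27.20:8443 \
  --token "$TOKEN" \
  --discovery-token-ca-cert-hash "sha256:$CA_HASH" \
  --control-plane --certificate-key "$CERT_KEY"
```

The worker nodes (slave1, slave2) use the same command without --control-plane and --certificate-key.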
Keep kube-vip highly available
Use ansible to copy /etc/kubernetes/manifests/kube-vip.yaml from master1 to master2 and master3, so the kube-vip static pod runs on all three control-plane nodes.
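A possible ad-hoc invocation, assuming master2 and master3 resolve as inventory hosts and that the manifest is available on the ansible control host (fetch it from master1 first, or run ansible from master1 itself):

```shell
# Ad-hoc sketch: replicate the static pod manifest to the other masters.
ansible master2,master3 --become -m copy \
  -a "src=/etc/kubernetes/manifests/kube-vip.yaml dest=/etc/kubernetes/manifests/kube-vip.yaml"
```

The kubelet on each node watches /etc/kubernetes/manifests, so the kube-vip pod starts automatically once the file lands there.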