Deploying an etcd cluster for Kubernetes on CentOS 7 (updated with additional caveats)
螃蟹 | April 17, 2016
Environment:
etcd01: 192.168.12.37, CentOS 7.1
etcd02: 192.168.12.178, CentOS 7.1
etcd03: 192.168.12.179, CentOS 7.1
Software version:
etcd: 2.2.5
Steps:
Using etcd01 as the example; the steps are identical on the other two hosts.
Install etcd
[root@docker-registry ~]# yum install etcd -y
Edit the configuration file
[root@docker-registry ~]# grep -v '^#' /etc/etcd/etcd.conf
ETCD_NAME=etcd01
ETCD_DATA_DIR="/var/lib/etcd/etcd01"
ETCD_LISTEN_PEER_URLS="http://192.168.12.37:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.12.37:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.12.37:2380"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.12.37:2380,etcd02=http://192.168.12.178:2380,etcd03=http://192.168.12.179:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-00"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.12.37:2379"
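Only the host-specific fields above (ETCD_NAME, ETCD_DATA_DIR, and the IP-bearing URLs) differ between the three nodes. As a minimal sketch, a hypothetical helper (not part of the original setup) can render each host's etcd.conf from a single name=ip list, which keeps the shared ETCD_INITIAL_CLUSTER value consistent everywhere:

```shell
#!/bin/bash
# Hypothetical generator: renders an etcd.conf body per host from one
# name=ip list, so only the per-host fields vary between nodes.
HOSTS="etcd01=192.168.12.37 etcd02=192.168.12.178 etcd03=192.168.12.179"

# Build the shared ETCD_INITIAL_CLUSTER value once from the host list.
CLUSTER=""
for h in $HOSTS; do
  name=${h%%=*}; ip=${h##*=}
  CLUSTER="${CLUSTER:+$CLUSTER,}${name}=http://${ip}:2380"
done

render_conf() {  # usage: render_conf <name> <ip>  -> prints an etcd.conf body
  local name=$1 ip=$2
  cat <<EOF
ETCD_NAME=${name}
ETCD_DATA_DIR="/var/lib/etcd/${name}"
ETCD_LISTEN_PEER_URLS="http://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${ip}:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${ip}:2380"
ETCD_INITIAL_CLUSTER="${CLUSTER}"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-00"
ETCD_ADVERTISE_CLIENT_URLS="http://${ip}:2379"
EOF
}

render_conf etcd02 192.168.12.178
```

Running `render_conf etcd02 192.168.12.178` prints the config for the second host; the cluster list, token, and state lines come out identical on every node, which is exactly the invariant the warning below is about.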
On the other two hosts, change only the host-specific values above (ETCD_NAME, ETCD_DATA_DIR, and the listen/advertise URLs); everything else must stay identical. The data directory /var/lib/etcd/etcd01 is created automatically when etcd starts; do not create it in advance.
Edit the etcd systemd unit file
[root@docker-registry ~]# more /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the etcd service
[root@docker-registry etcd]# systemctl start etcd
[root@docker-registry etcd]# systemctl status etcd
Repeat the steps above to configure etcd02 and etcd03.
Once the etcd service is running on all three hosts, check the cluster status.
[root@docker-registry etcd]# etcdctl cluster-health
member xxx is healthy…
[root@docker-registry etcd]# etcdctl member list
49ce2446964e72e3: name=etcd01 peerURLs=http://192.168.12.37:2380 clientURLs=http://192.168.12.37:2379
742a07d658e2e113: name=etcd02 peerURLs=http://192.168.12.178:2380 clientURLs=http://192.168.12.178:2379
eb6e0867bfd315e5: name=etcd03 peerURLs=http://192.168.12.179:2380 clientURLs=http://192.168.12.179:2379
[root@docker-registry etcd]# etcdctl cluster-health
member 49ce2446964e72e3 is healthy: got healthy result from http://192.168.12.37:2379
member 742a07d658e2e113 is healthy: got healthy result from http://192.168.12.178:2379
member eb6e0867bfd315e5 is healthy: got healthy result from http://192.168.12.179:2379
cluster is healthy
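For scripting, the per-member lines of `etcdctl cluster-health` are easy to tally. A small sketch, using the output captured above as a fixed sample (against a live cluster you would pipe `etcdctl cluster-health` directly into the same `grep`):

```shell
# Count healthy members by parsing cluster-health output. The sample text
# is the 3-node output shown above; live usage would be:
#   healthy=$(etcdctl cluster-health | grep -c '^member .* is healthy')
SAMPLE='member 49ce2446964e72e3 is healthy: got healthy result from http://192.168.12.37:2379
member 742a07d658e2e113 is healthy: got healthy result from http://192.168.12.178:2379
member eb6e0867bfd315e5 is healthy: got healthy result from http://192.168.12.179:2379
cluster is healthy'

# Anchor on "^member" so the trailing "cluster is healthy" line is not counted.
healthy=$(printf '%s\n' "$SAMPLE" | grep -c '^member .* is healthy')
echo "healthy members: $healthy"
```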
At this point the etcd cluster setup is complete.
Next, switch the existing environment from its single etcd to the etcd cluster.
Stop etcd on the master
[root@k8s_master ~]# systemctl stop etcd
[root@k8s_master ~]# systemctl status etcd
Point the apiserver's etcd configuration on the master at the etcd cluster
[root@k8s_master kubernetes]# vi apiserver
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.12.37:2379,http://192.168.12.178:2379,http://192.168.12.179:2379"
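The `--etcd-servers` value is just the three client URLs joined with commas. A sketch (hypothetical helper, not from the original article) that derives it from the same IP list used for the etcd cluster, so the apiserver config cannot drift out of sync with etcd.conf:

```shell
#!/bin/bash
# Build the --etcd-servers value from the cluster's client endpoints.
IPS="192.168.12.37 192.168.12.178 192.168.12.179"

SERVERS=""
for ip in $IPS; do
  # Append each client URL, comma-separated after the first entry.
  SERVERS="${SERVERS:+$SERVERS,}http://${ip}:2379"
done

echo "KUBE_ETCD_SERVERS=\"--etcd-servers=${SERVERS}\""
```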
Restart the master components
[root@k8s_master kubernetes]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler
[root@k8s_master kubernetes]# systemctl status kube-apiserver kube-controller-manager kube-scheduler
Restart the node components
[root@k8s_node01 ~]# systemctl restart docker kubelet
[root@k8s_node01 ~]# systemctl status docker kubelet
[root@k8s_node02 ~]# systemctl restart docker kubelet
[root@k8s_node02 ~]# systemctl status docker kubelet
Check the node status
[root@k8s_master kubernetes]# kubectl get node
NAME
192.168.12.175
192.168.12.176
Create a new pod to test
[root@k8s_master pods]# kubectl create -f frontend-controller.yaml
replicationcontroller "frontend" created
[root@k8s_master pods]# kubectl get pods
NAME
frontend-40ec5
frontend-43khv
[root@k8s_master pods]# kubectl get rc
CONTROLLER
frontend
etcd cluster testing
1. Stop the etcd service on one host
Stop etcd on etcd01
[root@docker-registry ~]# systemctl stop etcd
[root@docker-registry ~]# systemctl status etcd -l
Check the cluster status
[root@kafka02 etcd]# etcdctl cluster-health
failed to check the health of member 49ce2446964e72e3 on http://192.168.12.37:2379: Get http://192.168.12.37:2379/health: dial tcp 192.168.12.37:2379: connection refused
member 49ce2446964e72e3 is unreachable: [http://192.168.12.37:2379] are all unreachable
member 742a07d658e2e113 is healthy: got healthy result from http://192.168.12.178:2379
member eb6e0867bfd315e5 is healthy: got healthy result from http://192.168.12.179:2379
cluster is healthy
On the master, the existing data is still there.
[root@k8s_master pods]# kubectl get rc
CONTROLLER
frontend
[root@k8s_master pods]# kubectl get pods
NAME
frontend-40ec5
frontend-43khv
Creating a new pod again also works without problems.
[root@k8s_master k8s]# kubectl create -f redis-master-controller.yaml
replicationcontroller "redis-master" created
[root@k8s_master k8s]# kubectl get rc
CONTROLLER
frontend
redis-master
[root@k8s_master k8s]# kubectl get pods
NAME
frontend-40ec5
frontend-43khv
redis-master-aj9q6
redis-master-dcrxe
2. Stop the etcd service on two hosts
Now also stop etcd on etcd02, leaving only one etcd member available.
[root@kafka02 etcd]# systemctl stop etcd
[root@kafka02 etcd]# systemctl status etcd.service -l
Check the cluster status; it now reports the cluster as unavailable.
[root@kafka03 etcd]# etcdctl cluster-health
failed to check the health of member 49ce2446964e72e3 on http://192.168.12.37:2379: Get http://192.168.12.37:2379/health: dial tcp 192.168.12.37:2379: connection refused
member 49ce2446964e72e3 is unreachable: [http://192.168.12.37:2379] are all unreachable
failed to check the health of member 742a07d658e2e113 on http://192.168.12.178:2379: Get http://192.168.12.178:2379/health: dial tcp 192.168.12.178:2379: connection refused
member 742a07d658e2e113 is unreachable: [http://192.168.12.178:2379] are all unreachable
member eb6e0867bfd315e5 is unhealthy: got unhealthy result from http://192.168.12.179:2379
cluster is unhealthy
On the master, the existing pods are unaffected.
[root@k8s_master k8s]# kubectl get pods
NAME
frontend-40ec5
frontend-43khv
redis-master-aj9q6
redis-master-dcrxe
[root@k8s_master k8s]# kubectl get rc
CONTROLLER
frontend
redis-master
However, new pods can no longer be created.
[root@k8s_master k8s]# kubectl create -f redis-master-service.yaml
Error from server: error when creating "redis-master-service.yaml": Timeout: request did not complete within allowed duration
In other words, this three-member etcd cluster needs at least two live etcd members (a majority) to keep working.
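This matches etcd's Raft-based majority rule: an n-member cluster needs floor(n/2)+1 live members to commit writes, so it tolerates floor((n-1)/2) failures. The arithmetic behind the two experiments above, as a small sketch:

```shell
#!/bin/bash
# Majority (quorum) arithmetic for an n-member etcd cluster.
quorum()    { echo $(( $1 / 2 + 1 )); }      # members needed to commit writes
tolerated() { echo $(( ($1 - 1) / 2 )); }    # failures the cluster survives

# For the 3-node cluster in this article: quorum is 2, so losing one
# member (test 1) is fine, and losing two members (test 2) stops writes.
echo "3-node cluster: quorum=$(quorum 3), tolerated failures=$(tolerated 3)"
echo "5-node cluster: quorum=$(quorum 5), tolerated failures=$(tolerated 5)"
```

This is also why etcd clusters use odd sizes: going from 3 to 4 members raises the quorum from 2 to 3 without tolerating any additional failures.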
End of article.