Deploying Kubernetes 1.24 with the containerd runtime (dockershim removed)
An introduction to deploying Kubernetes 1.24 using containerd as the container runtime.
Background:
Kubernetes 1.24 removed the dockershim, dropping built-in support for Docker Engine; containerd is now the usual container runtime.
Runtime landscape:
- OCI (Open Container Initiative): founded in 2015 by Docker, Google, Red Hat, IBM and other companies; defines the container runtime and image specifications.
- CRI (Container Runtime Interface): introduced with Kubernetes 1.5 in December 2016, letting Kubernetes plug in different runtimes such as rkt.
- CRI-O: started and open-sourced by Red Hat in 2016 as a lightweight runtime to replace Docker under Kubernetes; it entered CNCF incubation on April 8, 2019.
Why was Docker dropped in favor of containerd? Two reasons (my own view):
- containerd is a CNCF (Cloud Native Computing Foundation) project, which makes community governance and maintenance easier.
- It removes a layer of indirection and gives a modest performance improvement.
As the diagram shows, using containerd directly cuts out the intermediate call through the Docker daemon.
Cluster deployment
Cluster plan
Role | IP address | Hostname | Spec |
---|---|---|---|
master | 192.168.2.131 | ubuntu01 | 2 CPU / 4 GB |
node | 192.168.2.132 | ubuntu02 | 2 CPU / 4 GB |
node | 192.168.2.133 | ubuntu03 | 2 CPU / 4 GB |
Environment preparation
Set the hostname
hostnamectl set-hostname <newhostname>
reboot
Set up passwordless SSH from the deploy node to the other nodes
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub <hostname>
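Copying keys by hostname only works if the names in the cluster plan resolve. A minimal sketch (the `cluster_hosts` helper name is my own) that emits /etc/hosts entries for the three nodes from the plan table; append its output to /etc/hosts as root on every node:

```shell
#!/bin/sh
# Emit /etc/hosts entries for the nodes in the cluster plan so that
# ssh-copy-id, plain ssh, and kubeadm can address them by hostname.
cluster_hosts() {
    printf '%s\n' \
        '192.168.2.131 ubuntu01' \
        '192.168.2.132 ubuntu02' \
        '192.168.2.133 ubuntu03'
}
cluster_hosts
# on each node, as root: cluster_hosts >> /etc/hosts
```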
Kernel tuning
# Tune these parameters before running kubeadm init
vim /etc/modules-load.d/modules.conf
ip_vs
br_netfilter
# Load the modules so they take effect now
modprobe ip_vs
modprobe br_netfilter
# Adjust kernel parameters
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1 # filter bridged traffic through iptables
net.ipv4.ip_forward = 1 # enable kernel IP forwarding
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
vm.swappiness=0
sysctl -p # apply the settings
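It is easy to mistype one of these keys and only find out at kubeadm preflight. A small sketch (the `check_sysctl` function name is my own) that verifies a sysctl config file contains the bridge and forwarding keys kubeadm cares about:

```shell
#!/bin/sh
# Check that a sysctl config file defines every required key.
# Usage: check_sysctl /etc/sysctl.conf
check_sysctl() {
    conf=$1
    rc=0
    for key in net.bridge.bridge-nf-call-iptables \
               net.bridge.bridge-nf-call-ip6tables \
               net.ipv4.ip_forward; do
        # match "key = value" or "key=value" at the start of a line
        grep -q "^${key}[[:space:]]*=" "$conf" || { echo "missing: $key"; rc=1; }
    done
    return $rc
}
```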
Install kubeadm
# Install kubeadm, kubectl and kubelet
# Configure the Aliyun mirror
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - # if this step errors you may first need: sudo apt-get install -y gnupg
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-cache madison kubeadm # list the available versions
apt-get install -y kubeadm=1.24.3-00 kubectl=1.24.3-00 kubelet=1.24.3-00
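If you script the pinned install, you can pull the exact version string out of the `apt-cache madison` output rather than hard-coding it. A sketch (the `madison_pick` helper is my own; it assumes madison's usual `pkg | version | source` format):

```shell
#!/bin/sh
# Extract the first version string matching a pattern from
# `apt-cache madison` output read on stdin.
madison_pick() {
    awk -F'|' -v ver="$1" '$2 ~ ver { gsub(/ /, "", $2); print $2; exit }'
}
# on a real node:
#   VER=$(apt-cache madison kubeadm | madison_pick 1.24.3)
#   apt-get install -y kubeadm=$VER kubectl=$VER kubelet=$VER
#   apt-mark hold kubeadm kubectl kubelet   # keep apt upgrade from breaking the cluster
```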
Install containerd
apt-cache madison containerd # check the repo version; the packaged version lags upstream, so installing from the repo is not recommended
# The binary release is recommended; the steps below install it
cd /usr/local/src/
wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
tar xvf containerd-1.6.6-linux-amd64.tar.gz
cp bin/* /usr/local/bin/
vim /lib/systemd/system/containerd.service # create the service file with the content below; note that ExecStart points at /usr/local/bin
----------------------------------------------------
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
-----------------------------------------------------
#Generate the default config file
mkdir /etc/containerd/
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
# change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"
#Configure a registry mirror
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://lzpmltr2.mirror.aliyuncs.com"]
systemctl restart containerd && systemctl enable containerd # start containerd and enable it at boot
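The sandbox_image edit above can also be applied non-interactively, which is handy when preparing all three nodes. A sketch (the `patch_containerd_config` function name is my own; the image names match the text):

```shell
#!/bin/sh
# Rewrite the sandbox_image line in a containerd config.toml to the
# Aliyun pause image, replacing the manual vim edit.
patch_containerd_config() {
    conf=$1
    sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7"#' "$conf"
}
# usage: patch_containerd_config /etc/containerd/config.toml && systemctl restart containerd
```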
Install runc
#Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
cp runc.amd64 /usr/bin/runc
chmod a+x /usr/bin/runc
Install the nerdctl CLI
#nerdctl is the recommended CLI; it also needs the CNI plugins (installed next)
wget https://github.com/containerd/nerdctl/releases/download/v0.22.0/nerdctl-0.22.0-linux-amd64.tar.gz
tar xvf nerdctl-0.22.0-linux-amd64.tar.gz
cp nerdctl /usr/bin/
nerdctl version # verify the install
Install the CNI plugins
#Install CNI
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
mkdir -p /opt/cni/bin # directory where the CNI plugins live
tar xvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
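A quick way to confirm the extraction landed where containerd expects it is to check for a few of the core plugin binaries. A sketch (the `check_cni` function name is my own):

```shell
#!/bin/sh
# Verify that the commonly required CNI plugin binaries exist and are
# executable in the plugin directory (defaults to /opt/cni/bin).
check_cni() {
    dir=${1:-/opt/cni/bin}
    rc=0
    for plugin in bridge loopback host-local portmap; do
        [ -x "$dir/$plugin" ] || { echo "missing CNI plugin: $plugin"; rc=1; }
    done
    return $rc
}
# usage: check_cni /opt/cni/bin
```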
Verify nerdctl works
#Create a container with a published port
nerdctl pull nginx
nerdctl run -d -p 80:80 --name=nginx-web1 --restart=always nginx:latest
nerdctl ps
nerdctl exec -it nginx-web1 bash
Pre-pull the images
#Pre-pull the images kubeadm needs
kubeadm config images list --kubernetes-version v1.24.3 # lists the required images
vim down_image.sh # note: the images MUST be pulled into the k8s.io namespace
#!/bin/sh
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.3
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.3
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
nerdctl -n k8s.io pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6
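Rather than typing the pull list by hand, it can be generated from the `kubeadm config images list` output. A sketch (the `gen_pulls` helper is my own; it assumes the image names are prefixed with k8s.gcr.io as in 1.24):

```shell
#!/bin/sh
# Turn `kubeadm config images list` output (on stdin) into nerdctl pull
# commands against the Aliyun mirror, in the k8s.io namespace.
gen_pulls() {
    sed -e 's#^k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#' \
        -e 's#^#nerdctl -n k8s.io pull #'
}
# usage: kubeadm config images list --kubernetes-version v1.24.3 | gen_pulls > down_image.sh
```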
Run all of the steps above on the 192.168.2.132 and 192.168.2.133 nodes as well; once they are done, run kubeadm init on the master node.
Initialize the cluster
kubeadm init --apiserver-advertise-address=192.168.2.131 --apiserver-bind-port=6443 --kubernetes-version=v1.24.3 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
#When initialization finishes, the output looks like this:
-------------------------------------------------
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.131:6443 --token 5vaxru.b7u810mxxzhme8qh \
--discovery-token-ca-cert-hash sha256:558920a8f11253e2714e47b53f2a942bc14cee604e7f5a9c78b66aae29cc8d91
---------------------------------------------------
#Run the following on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the worker nodes
#Run the following on each worker node (192.168.2.132 and 192.168.2.133) to join the cluster
kubeadm join 192.168.2.131:6443 --token 5vaxru.b7u810mxxzhme8qh \
--discovery-token-ca-cert-hash sha256:558920a8f11253e2714e47b53f2a942bc14cee604e7f5a9c78b66aae29cc8d91
The cluster is now up.
Verify the cluster
kubectl get nodes should show every node in the Ready state.
kubectl get pods -A should show every component in the Running state.
Finally, deploy an nginx application as a smoke test.
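The smoke test can be driven from a manifest; a minimal sketch, where names like nginx-test are illustrative and not from the original:

```yaml
# test-nginx.yaml -- minimal deployment plus NodePort service for a smoke test
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
```

Apply it with kubectl apply -f test-nginx.yaml, wait for kubectl get pods to show both replicas Running, then curl any node's IP on the NodePort shown by kubectl get svc nginx-test.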