Day 3: Advanced Kubernetes Practice

This chapter covers advanced Kubernetes topics — cluster scheduling, CNI plugins, the authentication and authorization security model, integrating distributed storage, and using Helm — to give students a deeper understanding of the core of Kubernetes.

  • Accessing data in etcd

  • kube-scheduler scheduling policy in practice

    • The predicates and priorities flow
    • Common scheduling configurations used in production
  • The k8s cluster network model

    • Introduction to CNI and choosing a cluster network
    • How the Flannel network model is implemented
      • vxlan backend
      • host-gw backend
  • Cluster authentication and authorization

    • The APIServer security control model
    • kubectl authentication and authorization
    • RBAC
    • kubelet authentication and authorization
    • Service Account
  • Managing complex application deployments with Helm

    • How Helm works
    • Helm template development
    • Hands-on: deploying a Harbor registry with Helm
  • Integrating Kubernetes with distributed storage

    • Introduction to PV and PVC

    • Using CephFS as the distributed storage backend for a k8s cluster

    • Dynamic volume management with StorageClass

    • Hands-on: deploying a stateful application on distributed storage

  • Chapter review and recap

Common etcd operations

Copy the etcdctl command-line tool:

$ docker exec -ti  etcd_container which etcdctl
$ docker cp etcd_container:/usr/local/bin/etcdctl /usr/bin/etcdctl
Note:
# the k8s static pod manifest directory; the etcd defined here stores the data of the entire cluster
[root@k8s-master ~]# ll /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 2104 Jul  9 22:55 etcd.yaml
-rw------- 1 root root 3161 Jul  9 22:55 kube-apiserver.yaml
-rw------- 1 root root 2858 Jul  9 22:55 kube-controller-manager.yaml
-rw------- 1 root root 1413 Jul  9 22:55 kube-scheduler.yaml

# etcd is effectively just a container
[root@k8s-master ~]# docker ps|grep etcd
cbec05823ad2   0369cf4303ff                                                     "etcd --advertise-cl…"   26 hours ago     Up 26 hours                                                 k8s_etcd_etcd-k8s-master_kube-system_ffeb60a5fc0a9dc352dceb8c62378b9c_1
b5b1b6ec7116   registry.aliyuncs.com/google_containers/pause:3.2                "/pause"                 26 hours ago     Up 26 hours                                                 k8s_POD_etcd-k8s-master_kube-system_ffeb60a5fc0a9dc352dceb8c62378b9c_1
# one is the infrastructure (pause) container, the other is the actual workload container

# copy out the etcdctl tool
[root@k8s-master ~]# docker exec -ti cbec05823ad2 /bin/sh

# etcdctl -h
...

# which etcdctl
/usr/local/bin/etcdctl
# exit

[root@k8s-master ~]# docker cp cbec05823ad2:/usr/local/bin/etcdctl  /usr/local/bin

# check the version
[root@k8s-master week3]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4

List the members of the etcd cluster:

$ export ETCDCTL_API=3 # query with the v3 API; set 2 to use the v2 API — see etcdctl -h

# locations of the certificates
[root@k8s-master ~]# ll /etc/kubernetes/pki/etcd/
total 32
-rw-r--r-- 1 root root 1058 Jul 17 19:05 ca.crt # root certificate
-rw------- 1 root root 1679 Jul 17 19:05 ca.key # root certificate's private key
-rw-r--r-- 1 root root 1139 Jul 17 19:05 healthcheck-client.crt  # certificate issued by the root CA
-rw------- 1 root root 1679 Jul 17 19:05 healthcheck-client.key # private key for that certificate
-rw-r--r-- 1 root root 1184 Jul 17 19:05 peer.crt
-rw------- 1 root root 1675 Jul 17 19:05 peer.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 server.crt
-rw------- 1 root root 1675 Jul 17 19:05 server.key
# any of these three cert/key pairs will work

# etcd's static pod manifest
/etc/kubernetes/manifests/etcd.yaml

$ etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list -w table

# set an alias
$ alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'

# list the etcd member IDs
$ etcdctl member list -w table
# flag explanation
member list   list cluster members
-w table      print the output as a table
Note: in a k8s HA cluster the data is consistent across all etcd members

Note:
# environment variables:
/etc/profile   effective for all users (global environment variables)
~/.bashrc      executed by each user running a bash shell

Note:

# show command help
[root@k8s-master ~]# etcdctl -h

# set the environment variable
[root@k8s-master ~]# export ETCDCTL_API=3

[root@k8s-master ~]# cat /etc/kubernetes/manifests/etcd.yaml |grep command -A 20
  - command:
    - etcd
    - --advertise-client-urls=https://10.0.1.5:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.0.1.5:2380
    - --initial-cluster=k8s-master=https://10.0.1.5:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://10.0.1.5:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://10.0.1.5:2380
    - --name=k8s-master
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
    imagePullPolicy: IfNotPresent

## list the member IDs; here there is only a single master node
# Note: in a k8s HA cluster the data is consistent across all etcd members
[root@k8s-master ~]#  etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list -w table
+------------------+---------+------------+-----------------------+-----------------------+------------+
|        ID        | STATUS  |    NAME    |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
+------------------+---------+------------+-----------------------+-----------------------+------------+
| 8f4f0858fdc2d498 | started | k8s-master | https://10.0.1.5:2380 | https://10.0.1.5:2379 |      false |
+------------------+---------+------------+-----------------------+-----------------------+------------+
flag explanation
member list  list cluster members
-w table     print the output as a table

# since every command needs these same flags, set an alias to shorten them
[root@k8s-master ~]# alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'

[root@k8s-master ~]# etcdctl member list -w table
+------------------+---------+------------+-----------------------+-----------------------+------------+
|        ID        | STATUS  |    NAME    |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
+------------------+---------+------------+-----------------------+-----------------------+------------+
| 8f4f0858fdc2d498 | started | k8s-master | https://10.0.1.5:2380 | https://10.0.1.5:2379 |      false |
+------------------+---------+------------+-----------------------+-----------------------+------------+

Check the status of the etcd cluster nodes:

$ etcdctl endpoint status -w table

$ etcdctl endpoint health -w table
Note:
# also printed as a table; this command is commonly used for troubleshooting, to check whether etcd is healthy
[root@k8s-master ~]# etcdctl endpoint status -w table
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://[127.0.0.1]:2379 | 8f4f0858fdc2d498 |  3.4.13 |  4.1 MB |      true |      false |         3 |      70450 |              70450 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
# column explanation
ENDPOINT            this node's endpoint
ID                  member ID
VERSION             etcd version
DB SIZE             database size; it should be identical on every member — a mismatch indicates a problem
IS LEADER           whether this member is the raft leader; the rest are followers
RAFT TERM           the raft election term
RAFT INDEX          the raft log index

[root@k8s-master ~]# etcdctl endpoint health -w table
+--------------------------+--------+------------+-------+
|         ENDPOINT         | HEALTH |    TOOK    | ERROR |
+--------------------------+--------+------------+-------+
| https://[127.0.0.1]:2379 |   true | 7.520056ms |       |
+--------------------------+--------+------------+-------+
# column explanation:
HEALTH  health status; true means healthy
TOOK    how long the health check took

Set a key:

$ etcdctl put luffy 1
$ etcdctl get luffy

List all keys:

$  etcdctl get / --prefix --keys-only
# --prefix   return every key with the given prefix; here, everything under the root

Read the data for a specific key:

$ etcdctl get /registry/pods/jenkins/sonar-postgres-7fc5d748b6-gtmsb
# keys follow a regular pattern: /registry/<resource type>/<namespace>/<resource name>

list-watch:

$ etcdctl watch /luffy --prefix
$ etcdctl put /luffy/key1 val1

Note: the watch fires immediately
[root@k8s-master week3]# etcdctl put /luffy/key3 val3
OK

[root@k8s-master week3]# etcdctl watch /luffy --prefix
PUT
/luffy/key3
val3

Add a cron job to snapshot the data (important!)

$ etcdctl snapshot save `hostname`-etcd_`date +%Y%m%d%H%M`.db
# check that the size of the saved snapshot matches the DB SIZE value shown below
$ ll k8s-master-etcd_202106301901.db 
$ etcdctl endpoint status -w table
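The snapshot command above can be wrapped in a small script for cron. The script path, backup directory, and retention below are illustrative assumptions, not from the course environment; the etcdctl call is commented out so the sketch runs anywhere:

```shell
#!/bin/sh
# hypothetical backup script, e.g. /usr/local/bin/etcd-backup.sh (path is an assumption)
BACKUP_DIR=${BACKUP_DIR:-./etcd-backups}   # a real deployment would use an absolute path
mkdir -p "$BACKUP_DIR"
# same naming scheme as the command above: <hostname>-etcd_<YYYYmmddHHMM>.db
SNAPSHOT="$BACKUP_DIR/$(hostname)-etcd_$(date +%Y%m%d%H%M).db"
echo "saving snapshot to: $SNAPSHOT"
# etcdctl snapshot save "$SNAPSHOT"                    # uncomment on a real master node
# find "$BACKUP_DIR" -name '*.db' -mtime +7 -delete    # optional: keep one week of snapshots
```

A crontab entry such as `0 * * * * /usr/local/bin/etcd-backup.sh` (hypothetical path) would then take a snapshot at the top of every hour.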

Restore a snapshot:

  1. Stop etcd and the apiserver

  2. Move the current data directory aside

    $ mv /var/lib/etcd/ /tmp
    
  3. Restore the snapshot

    $ etcdctl snapshot restore `hostname`-etcd_`date +%Y%m%d%H%M`.db --data-dir=/var/lib/etcd/
    Note: restoring a snapshot requires all of the following:
    the cluster IPs and hostnames are unchanged
    a backup snapshot is available
    the certificates have also been preserved
    
  4. Cluster recovery

    https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md

    Note:

[root@k8s-master week3]# etcdctl snapshot save `hostname`-etcd_`date +%Y%m%d%H%M`.db
Snapshot saved at k8s-master-etcd_202107180731.db

# in production this should be a cron job, typically hourly, and the snapshots should be backed up elsewhere

Restoring a snapshot requires all of the following:
the cluster IPs and hostnames are unchanged
a backup snapshot is available
the certificates have also been preserved
the certificates and related files are located here:
[root@k8s-master week3]# ll /etc/kubernetes/pki/etcd/
total 32
-rw-r--r-- 1 root root 1058 Jul 17 19:05 ca.crt
-rw------- 1 root root 1679 Jul 17 19:05 ca.key
-rw-r--r-- 1 root root 1139 Jul 17 19:05 healthcheck-client.crt
-rw------- 1 root root 1679 Jul 17 19:05 healthcheck-client.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 peer.crt
-rw------- 1 root root 1675 Jul 17 19:05 peer.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 server.crt
-rw------- 1 root root 1675 Jul 17 19:05 server.key

# walkthrough of the restore procedure:
1. stop etcd and the apiserver

2. move the current data directory aside
[root@k8s-master week3]# mv /var/lib/etcd/ /tmp/

3. restore the snapshot
# the size of the saved snapshot matches the DB SIZE value below
[root@k8s-master week3]# ll -h
total 4.8M
-rw------- 1 root root 4.8M Jul 18 07:31 k8s-master-etcd_202107180731.db

[root@k8s-master week3]# etcdctl endpoint status -w table
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://[127.0.0.1]:2379 | 8e9e05c52164694d |  3.4.13 |  5.0 MB |      true |      false |         2 |      88030 |              88030 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

# keep a close eye on etcd's database: normally the amount of data stored is not very large. If the status shows the data has grown to the TB level, investigate whether etcd is holding unnecessary data — event data in particular tends to take up space
  • Deleting a stuck namespace

    In many cases a namespace deletion hangs; the data can then be removed by operating on etcd directly:

[root@k8s-master manifests]# etcdctl get /registry/namespace --prefix --keys-only
/registry/namespaces/default
/registry/namespaces/kube-node-lease
/registry/namespaces/kube-public
/registry/namespaces/kube-system
/registry/namespaces/kubernetes-dashboard
/registry/namespaces/luffy

[root@k8s-master manifests]# etcdctl del /registry/namespaces/luffy

Recap
Common etcd operations: setting and getting key values
Two commands to check etcd cluster node status:
    etcdctl endpoint status -w table
    etcdctl endpoint health -w table
Snapshotting, backup, and restore of etcd data
The etcdctl command
Objects that cannot be deleted through the API can be deleted inside etcd
Kubernetes Scheduling
Why control how Pods are scheduled?
  • Some machines in the cluster have better hardware (SSDs, better memory, etc.), and we want core services (such as databases) to run on them
  • Two services exchange a lot of network traffic, and we would like them on the same machine

The Kubernetes Scheduler binds each pending Pod to a suitable worker node in the cluster according to its scheduling algorithms and policies, and writes the binding into etcd. The kubelet on the target node then learns of the scheduler's Pod binding event by watching the API Server, fetches the Pod's details, pulls the image, and starts the container.

(figure: images/kube-scheduler-1.jpg)

The scheduling process

The Scheduler works in two phases: predicates (filtering) and priorities (scoring):

  • Predicates: k8s iterates over every Node in the cluster and filters out those that meet the Pod's requirements as candidates
  • Priorities: k8s scores each candidate Node

After filtering and scoring, k8s picks the Node with the highest score to run the Pod; if several Nodes tie for the highest score, the Scheduler picks one of them at random.

(figure: images/kube-scheduler-process.png)

Predicates:

(figure: images/kube-scheduler-pre.jpg)

Priorities:

(figure: images/kube-scheduler-pro.jpg)

NodeSelector

Labels are a very important concept in kubernetes: users can manage cluster resources very flexibly with labels, and Pod scheduling can target specific nodes based on node labels.

Show node labels:

$ kubectl get nodes --show-labels
Note: 
kubectl label can also label pods (kubectl label pods ...); use -h to see examples — the syntax is the same as kubectl label nodes

Label a node:

$ kubectl label node k8s-master disktype=ssd

Once a node carries labels, they can be used at scheduling time: just add a nodeSelector field under spec, listing the labels the target node must have.

...
spec:
  hostNetwork: true	# run the pod with the host network, same effect as docker run --net=host
  volumes: 
  - name: mysql-data
    hostPath: 
      path: /opt/mysql/data
  nodeSelector:   # use a node selector to place the Pod onto nodes carrying the given label
    component: mysql  # the Pod will be scheduled onto nodes labeled component=mysql
  containers:
  - name: mysql
    image: 172.21.51.143:5000/demo/mysql:5.7
...

# this has limits: with tens or hundreds of pods it becomes very hard to manage
nodeAffinity
Note: the pod selects nodes based on node labels

Node affinity is more flexible than the nodeSelector above: it supports simple logical combinations, not just exact equality matches. It comes in two kinds: hard rules and soft rules.

requiredDuringSchedulingIgnoredDuringExecution: hard rule — if no node satisfies the condition, scheduling keeps retrying until one does. In short: you must meet my requirements, or I will not schedule the Pod.

preferredDuringSchedulingIgnoredDuringExecution: soft rule — if no node satisfies the scheduling requirement, the Pod ignores the rule and scheduling proceeds. In short: satisfying the condition is preferred, but it is ignored if nothing satisfies it.

# require that the Pod not run on nodes 128 and 132; if any node satisfies disktype=ssd or sas, prefer scheduling onto it
...
spec:
      containers:
      - name: demo
        image: 172.21.51.143:5000/myblog:v1
        ports:
        - containerPort: 8002
      affinity: # affinity; same level as containers, i.e. defined per pod
          nodeAffinity: # node affinity
            requiredDuringSchedulingIgnoredDuringExecution: # hard rule
                nodeSelectorTerms: # the node selector's terms; an array, multiple entries allowed
                - matchExpressions: # match conditions; an array, multiple entries allowed
                    - key: kubernetes.io/hostname  # also an array, multiple entries allowed
                      operator: NotIn # i.e. the hostname must not be in the values list below
                      values:
                        - 192.168.136.128
                        - 192.168.136.132
# hard constraint: the pod must not run on the nodes whose hostname is 128 or 132
            preferredDuringSchedulingIgnoredDuringExecution: # soft rule
                - weight: 1
                  preference:
                    matchExpressions: # match conditions
                    - key: disktype
                      operator: In   # i.e. disktype must be in the values list below
                      values:
                        - ssd
                        - sas
 # prefer nodes labeled ssd or sas                       
...

The matching logic here is "the label's value is in some list"; Kubernetes currently provides the following operators:

  • In: the label's value is in a given list
  • NotIn: the label's value is not in a given list
  • Gt: the label's value is greater than a given value
  • Lt: the label's value is less than a given value
  • Exists: the label exists
  • DoesNotExist: the label does not exist

If nodeSelectorTerms has multiple entries, satisfying any one of them is enough; if matchExpressions has multiple entries, all of them must be satisfied for the Pod to be scheduled
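The OR/AND semantics just described can be sketched as follows; the label keys (disktype, gpu) and their values are illustrative assumptions, not from the course manifests:

```yaml
# a node is a candidate if it satisfies EITHER term below (terms are ORed);
# within one term, every matchExpression must hold (ANDed)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:        # term 1: disktype in (ssd, sas) AND the gpu label exists
        - key: disktype
          operator: In
          values: ["ssd", "sas"]
        - key: gpu
          operator: Exists
      - matchExpressions:        # term 2: any node whose disktype is not hdd
        - key: disktype
          operator: NotIn
          values: ["hdd"]
```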

Pod affinity and anti-affinity
Note: the pod selects based on other pods' labels

Scenario:

myblog runs multiple replicas, and we would like them spread across the cluster's available nodes as much as possible

Analysis: to spread myblog's pods across the cluster, we can use pod anti-affinity to tell the scheduler what to do when a node already runs a myblog pod. Depending on the requirement, this gives two policies:

  • never schedule two myblog replicas onto the same node
  • allow two myblog replicas on the same node, but spread the pods across the cluster as much as possible
...
    spec:
      affinity:
        podAntiAffinity: # anti-affinity
          requiredDuringSchedulingIgnoredDuringExecution: # hard rule: never schedule there
          - labelSelector:  # label selector
              matchExpressions: # the conditions, as follows
              - key: app # if app is in the values list, the scheduler must not place the pod there
                operator: In
                values:
                - myblog
            topologyKey: kubernetes.io/hostname
      containers:
...
# if a node already runs a pod labeled app=myblog, the scheduler must not schedule there

...
    spec:
      affinity:
        podAntiAffinity: # anti-affinity
          preferredDuringSchedulingIgnoredDuringExecution: # soft rule: try not to schedule there
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - myblog
              topologyKey: kubernetes.io/hostname
      containers:
...
# if a node already runs a pod labeled app=myblog, the scheduler will try not to schedule there
# add the following configuration
$ kubectl -n luffy  edit deployments.apps myblog 
affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myblog
            topologyKey: kubernetes.io/hostname

[root@k8s-master week3]# kubectl -n luffy  get po
NAME                      READY   STATUS    RESTARTS   AGE
default-mem-demo          1/1     Running   1          36h
myblog-65847cf6ff-8s75f   1/1     Running   14         2d1h
myblog-65847cf6ff-f5rv2   1/1     Running   10         2d9h
myblog-65847cf6ff-tz46d   1/1     Running   11         2d9h
mysql-58d95d459c-jj4sx    1/1     Running   1          2d6h

# there are two pods now; if we scale to three replicas, how will they be placed? One per node, because: if a node already runs a pod labeled app=myblog, the scheduler must not schedule there
$ kubectl -n luffy scale deployment myblog --replicas=3


# edit deployments.apps myblog and add the hard-rule configuration
[root@k8s-master myblog]# kubectl -n luffy edit deployments.apps myblog 
deployment.apps/myblog edited

[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65758f6854-knh9f   1/1     Running   0          2m20s   10.244.1.5    k8s-slave1   <none>           <none>
myblog-65758f6854-zzv4m   1/1     Running   0          118s    10.244.2.13   k8s-slave2   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>

# there are two pods now; if we scale to three replicas, how will they be placed? One per node, because: if a node already runs a pod labeled app=myblog, the scheduler must not schedule there
[root@k8s-master myblog]# kubectl -n luffy scale deployment myblog --replicas=3
deployment.apps/myblog scaled
# watch the pods: they end up evenly spread
[root@k8s-master myblog]# kd get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65758f6854-7l85h   1/1     Running   0          39s     10.244.0.7    k8s-master   <none>           <none>
myblog-65758f6854-knh9f   1/1     Running   0          5m6s    10.244.1.5    k8s-slave1   <none>           <none>
myblog-65758f6854-zzv4m   1/1     Running   0          4m44s   10.244.2.13   k8s-slave2   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>

# what if we scale to four replicas? One pod should stay Pending, because with a hard rule, no match means no scheduling at all
[root@k8s-master myblog]# kubectl -n luffy scale deployment myblog --replicas=4
deployment.apps/myblog scaled
[root@k8s-master myblog]# kd get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65758f6854-7l85h   1/1     Running   0          3m8s    10.244.0.7    k8s-master   <none>           <none>
myblog-65758f6854-knh9f   1/1     Running   0          7m35s   10.244.1.5    k8s-slave1   <none>           <none>
myblog-65758f6854-nmp4n   0/1     Pending   0          8s      <none>        <none>       <none>           <none>
myblog-65758f6854-zzv4m   1/1     Running   0          7m13s   10.244.2.13   k8s-slave2   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>
[root@k8s-master myblog]# kd describe po myblog-65758f6854-nmp4n | grep -i events: -A5
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  111s  default-scheduler  0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
  Warning  FailedScheduling  111s  default-scheduler  0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.

# now replace the hard rule (required) with the soft rule (preferred) and scale back to 2 replicas: the scheduler will merely try not to co-locate, and only schedules onto an occupied node when there is nowhere else to go
# before the soft rule is applied:
[root@k8s-master myblog]# kd get po -owide -w
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-56968c6d54-cd7xw   1/1     Running   0          111s    10.244.0.11   k8s-master   <none>           <none>
myblog-56968c6d54-vlmgh   1/1     Running   0          2m15s   10.244.0.10   k8s-master   <none>           <none>
myblog-596b7f9b8b-pfr5z   0/1     Running   0          1s      10.244.2.14   k8s-slave2   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>
# after adding the soft rule, the myblog pods on the master node were all moved onto the slave nodes
[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myblog-596b7f9b8b-pfr5z   1/1     Running   0          65s   10.244.2.14   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-zt4bx   1/1     Running   0          45s   10.244.1.6    k8s-slave1   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h   10.244.1.4    k8s-slave1   <none>           <none>

# continue the demo: scale to three replicas. The scheduler tries not to co-locate, but slave1 and slave2 each already run a pod with this label, so the new one lands on the master node
[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-596b7f9b8b-pfr5z   1/1     Running   0          4m4s    10.244.2.14   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-tsmrx   1/1     Running   0          28s     10.244.0.12   k8s-master   <none>           <none>
myblog-596b7f9b8b-zt4bx   1/1     Running   0          3m44s   10.244.1.6    k8s-slave1   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>

# what happens if we scale to four replicas? Now it co-locates: with nowhere else to go, the pod is scheduled onto slave2 anyway
# that is the difference: with the hard rule the pod stays Pending; with the soft rule the scheduler only tries to avoid co-location, and when there is no alternative it still schedules onto another node — it also seems to prefer the slave nodes over the master by default
[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-596b7f9b8b-gx4pj   1/1     Running   0          29s     10.244.2.15   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-pfr5z   1/1     Running   0          5m52s   10.244.2.14   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-tsmrx   1/1     Running   0          2m16s   10.244.0.12   k8s-master   <none>           <none>
myblog-596b7f9b8b-zt4bx   1/1     Running   0          5m32s   10.244.1.6    k8s-slave1   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>
# adding another pod behaves the same: it avoids the master and goes to slave1
[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-596b7f9b8b-8tg9s   1/1     Running   0          36s     10.244.1.7    k8s-slave1   <none>           <none>
myblog-596b7f9b8b-gx4pj   1/1     Running   0          2m26s   10.244.2.15   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-pfr5z   1/1     Running   0          7m49s   10.244.2.14   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-tsmrx   1/1     Running   0          4m13s   10.244.0.12   k8s-master   <none>           <none>
myblog-596b7f9b8b-zt4bx   1/1     Running   0          7m29s   10.244.1.6    k8s-slave1   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>

# add one more to see whether it goes to the master now that both slaves run two pods — it does
In summary: the master node is not scheduled to first by default, but the pods are still spread evenly
[root@k8s-master myblog]# kd get po -owide 
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-596b7f9b8b-8tg9s   1/1     Running   0          115s    10.244.1.7    k8s-slave1   <none>           <none>
myblog-596b7f9b8b-gx4pj   1/1     Running   0          3m45s   10.244.2.15   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-klppp   1/1     Running   0          47s     10.244.0.13   k8s-master   <none>           <none>
myblog-596b7f9b8b-pfr5z   1/1     Running   0          9m8s    10.244.2.14   k8s-slave2   <none>           <none>
myblog-596b7f9b8b-tsmrx   1/1     Running   0          5m32s   10.244.0.12   k8s-master   <none>           <none>
myblog-596b7f9b8b-zt4bx   1/1     Running   0          8m48s   10.244.1.6    k8s-slave1   <none>           <none>
mysql-58d95d459c-tkk5q    1/1     Running   0          23h     10.244.1.4    k8s-slave1   <none>           <none>
# why does edit silently drop the hard rule when both the soft and hard rules are specified? Can they not be used together?

https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/

Stateful vs stateless workloads

Stateless services: a new IP and hostname on every restart
Stateful services (StatefulSet): a stable hostname and persistent state 
Taints and tolerations

nodeAffinity — whether as a hard or soft rule — pulls Pods toward desired nodes. Taints do exactly the opposite: once a node is tainted, no Pod will be scheduled onto it unless the Pod is marked as tolerating the taint.

A taint is a property of a Node. After a Node is tainted, Kubernetes will not schedule Pods onto it. To compensate, Kubernetes gives Pods a matching property, tolerations: as long as a Pod tolerates the taints on a Node, Kubernetes ignores those taints and is able (but not required) to schedule the Pod there.

Scenario 1: in a private cloud, a workload uses GPUs for large-scale parallel computation. To guarantee performance, we want those servers dedicated to that workload, and ordinary workloads kept off the GPU servers.

Scenario 2: we want to reserve the Master nodes for Kubernetes system components, or reserve a group of nodes with special resources for certain Pods. Taints are useful here: Pods will no longer be scheduled onto tainted nodes. Examples of tainting nodes follow:

Set a taint:

$ kubectl taint node [node_name] key=value:[effect]   
      where [effect] is one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
       NoSchedule: must not be scheduled.
       PreferNoSchedule: try not to schedule.
       NoExecute: not only refuses scheduling, but also evicts the Node's existing Pods.
  Example: kubectl taint node k8s-slave1 smoke=true:NoSchedule

Remove a taint:

Remove a specific key with a given effect:
     kubectl taint nodes [node_name] key:[effect]-    # no value needs to be specified here
                
 Remove all effects for a key: 
     kubectl taint nodes node_name key-
 
 Examples:
     kubectl taint node k8s-master smoke=true:NoSchedule
     kubectl taint node k8s-master smoke:NoExecute-
     kubectl taint node k8s-master smoke-

Taint demo:

## taint the three nodes
$ kubectl taint node k8s-master gamble=true:NoSchedule
$ kubectl taint node k8s-slave1 drunk=true:NoSchedule
$ kubectl taint node k8s-slave2 smoke=true:NoSchedule



## scale up myblog's Pods and watch how the new Pods are scheduled
$ kubectl -n luffy scale deploy myblog --replicas=3
$ kubectl -n luffy get po -w    ## pending

List taints:

Note:
# set the taints
[root@k8s-master week3]# kubectl taint node k8s-master gamble=true:NoSchedule
node/k8s-master tainted
[root@k8s-master week3]# kubectl taint node k8s-slave1 drunk=true:NoSchedule
node/k8s-slave1 tainted
[root@k8s-master week3]#  kubectl taint node k8s-slave2 smoke=true:NoSchedule
node/k8s-slave2 tainted

How to list taints:
# first install the jq command; jq can slice, filter, map, and transform JSON data
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install -y jq

# list the taints present on all nodes
kubectl get nodes -o json | jq '.items[].spec'
kubectl get nodes -o json | jq '.items[].spec.taints'  # the recommended form

# list the taints set above: each of the three nodes now carries one taint
[root@k8s-master week3]# kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "gamble",
    "value": "true"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "drunk",
    "value": "true"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "smoke",
    "value": "true"
  }
]

# scale up
# (this is how the master node is kept out of pod scheduling by default)
$ kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule

[root@k8s-master week3]# kubectl  -n luffy scale deploy myblog  --replicas=3
deployment.apps/myblog scaled

# watch how the new pods are scheduled
[root@k8s-master week3]# kubectl -n luffy get po
NAME                      READY   STATUS    RESTARTS   AGE
myblog-5d9b76df88-b2c8b   0/1     Pending   0          20m
myblog-5d9b76df88-f7tp7   0/1     Pending   0          20m
myblog-6694bccb48-jsmnh   1/1     Running   0          78m
myblog-6694bccb48-tqnk8   1/1     Running   0          79m
mysql-7446f4dc7b-2wqs8    1/1     Running   1          12h
# the new pods are Pending

# inspect the events
[root@k8s-master week3]# kubectl -n luffy describe po myblog-5d9b76df88-b2c8b 
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  10m   default-scheduler  0/3 nodes are available: 1 node(s) had taint {drunk: true}, that the pod didn't tolerate, 1 node(s) had taint {gamble: true}, that the pod didn't tolerate, 1 node(s) had taint {smoke: true}, that the pod didn't tolerate.

# because the pod declares no tolerations, none of the three nodes can accept it

Example of a Pod tolerating taints: myblog/deployment/deploy-myblog-taint.yaml

...
spec:
      containers:
      - name: demo
        image: 172.21.51.143:5000/myblog:v1
      tolerations: # tolerations; same level as containers
      - key: "smoke" 
        operator: "Equal"  # operator defaults to Equal when omitted
        value: "true"
        effect: "NoSchedule"
      - key: "drunk" 
        operator: "Exists"  # with Exists the value can be omitted; without operator it defaults to Equal
	  # this Pod tolerates Nodes tainted with key smoke, Equal, value true, effect NoSchedule
      # values under tolerations must be quoted; the tolerated values are those given when tainting the Node.
$ kubectl apply -f deploy-myblog-taint.yaml
spec:
      containers:
      - name: demo
        image: 172.21.51.143:5000/myblog:v1
      tolerations:
        - operator: "Exists"

NoExecute
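The NoExecute effect named above not only blocks new Pods but also evicts running Pods that do not tolerate the taint. A toleration can additionally bound how long an already-running Pod survives after such a taint appears, via tolerationSeconds — a sketch (the smoke key follows the taint examples above; the timeout value is an assumption):

```yaml
tolerations:
- key: "smoke"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300  # evicted 300s after the taint is applied; omit to tolerate indefinitely
```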

Cordon
$ kubectl cordon k8s-slave2  # mark the node as unschedulable
$ kubectl drain k8s-slave2 # an eviction action: drives all of the node's pods off it

# cordon   rope off
# drain    empty out
# drain  排空
Note:
# mark a node unschedulable
kubectl cordon nodename

# mark a node schedulable again
 kubectl uncordon nodename

# force-clear the pods of a lost node
$ kubectl drain --ignore-daemonsets --delete-local-data  nodename

Command help:
[root@k8s-master week3]# kubectl
cordon        Mark node as unschedulable
uncordon      Mark node as schedulable

# marking a node unschedulable does not affect pods already scheduled onto it
[root@k8s-master week3]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   18h   v1.19.8
k8s-slave1   Ready    <none>   18h   v1.19.8
k8s-slave2   Ready    <none>   18h   v1.19.8

[root@k8s-master week3]# kubectl cordon k8s-slave2
node/k8s-slave2 cordoned
[root@k8s-master week3]# kubectl get nodes
NAME         STATUS                     ROLES    AGE   VERSION
k8s-master   Ready                      master   18h   v1.19.8
k8s-slave1   Ready                      <none>   18h   v1.19.8
k8s-slave2   Ready,SchedulingDisabled   <none>   18h   v1.19.8
# SchedulingDisabled: no new pods will be scheduled here

# mark the node schedulable again
[root@k8s-master week3]# kubectl uncordon k8s-slave2
node/k8s-slave2 uncordoned
[root@k8s-master week3]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   18h   v1.19.8
k8s-slave1   Ready    <none>   18h   v1.19.8
k8s-slave2   Ready    <none>   18h   v1.19.8

Check whether taints are present:
[root@k8s-master week3]# kubectl describe node |grep -i taint
Taints:             gamble=true:NoSchedule
Taints:             drunk=true:NoSchedule
Taints:             smoke=true:NoSchedule
Recap
We covered the k8s scheduling policy:
the scheduler's flow is split into two phases, predicates and priorities

Operations that influence k8s scheduling:
cordon: mark a node as unschedulable; its existing pods are unaffected
uncordon: mark a node as schedulable
drain: drain a node, evicting the pods already running on it

taint: mark a node as tainted
NoSchedule: must not be scheduled
PreferNoSchedule: try not to schedule
NoExecute: refuses scheduling and also evicts the node's existing pods

Syntax:
Add a taint:
kubectl taint node [node_name] key=value:[effect]

Remove a specific key with a given effect: append a trailing hyphen
     kubectl taint nodes [node_name] key:[effect]- 
               
Remove all effects for a key: 
     kubectl taint nodes node_name key-

List the taints on all nodes:
kubectl get nodes -o json | jq '.items[].spec.taints'

Creating, deleting, updating, and querying labels
NodeSelector: once nodes carry labels, those labels can be used at scheduling time

Node affinity
	soft rule: satisfy my condition if you can; otherwise ignore it
	hard rule: my condition must be satisfied, or the pod cannot be scheduled

Toleration settings
	Exists: tolerate any taint (for the given key, or all taints when no key is given)
	Equal: tolerate a specific taint
How Kubernetes cluster networking is implemented
Introduction to CNI and choosing a cluster network

The Container Network Interface (CNI) implements Pod network communication and management for a kubernetes cluster. It consists of:

  • the CNI plugin, responsible for configuring the container's network, with two basic interfaces:
    configure the network: AddNetwork(net NetworkConfig, rt RuntimeConf) (types.Result, error)
    tear down the network: DelNetwork(net NetworkConfig, rt RuntimeConf) error
  • the IPAM plugin, responsible for allocating IP addresses to containers; the main implementations are host-local and dhcp.

Support for these two plugin types lets k8s networking accommodate all kinds of management models, and a large number of solutions have appeared in the industry, flannel and calico being among the most popular.

Once kubernetes is configured with a CNI network plugin, its container network is created as follows:

  • kubelet first creates the pause container, which provides the pod's network namespace
  • kubelet calls the network driver; since CNI is configured, the CNI code is invoked, reading its configuration from /etc/cni/net.d
  • the CNI driver invokes the specific CNI plugin per that configuration, as a binary executable, from /opt/cni/bin
  • the CNI plugin configures the pause container's network correctly; the other containers in the pod share the pause container's network

Community CNI implementations are listed at https://github.com/containernetworking/cni

General-purpose: flannel, calico, etc. — simple to deploy and use

Others: choose based on your concrete network environment and requirements, for example

  • public cloud machines: vendors ship custom backends for the network plugins — AWS, Alibaba, and Tencent each have their own flannel plugins, and there is also the AWS ECS CNI
  • private cloud vendors, e.g. VMware NSX-T
  • network performance requirements, e.g. MacVlan
  • 网络性能等,MacVlan
A closer look at the Flannel network model

flannel has several backend implementations:

  • udp
  • vxlan
  • host-gw

Unless specified otherwise, vxlan is used as the default backend, which can be verified with:

$ kubectl -n kube-system exec  kube-flannel-ds-amd64-cb7hs cat /etc/kube-flannel/net-conf.json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

Note:
# System component pods normally live in the kube-system namespace
[root@k8s-master week3]# kubectl -n kube-system get po|grep flannel
kube-flannel-ds-amd64-4tfxs          1/1     Running   12         15d
kube-flannel-ds-amd64-58d2h          1/1     Running   6          15d
kube-flannel-ds-amd64-sfsj2          1/1     Running   10         15d

# Unless specified otherwise, the vxlan backend is used by default; verify with:
[root@k8s-master week2]# kubectl -n kube-system exec   kube-flannel-ds-amd64-58d2h   cat  /etc/kube-flannel/net-conf.json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"  # the backend type is vxlan
  }
}

# flannel's CNI configuration file
[root@k8s-master week3]# ll /etc/cni/net.d/
total 4
-rw-r--r-- 1 root root 292 Jul 18 15:11 10-flannel.conflist

[root@k8s-master week3]# cat /etc/cni/net.d/10-flannel.conflist 
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

# directory holding the CNI plugin binaries
[root@k8s-master week3]# ls /opt/cni/bin/
bandwidth  dhcp      flannel      host-local  loopback  portmap  sbr     tuning
bridge     firewall  host-device  ipvlan      macvlan   ptp      static  vlan

vxlan and point-to-point communication

VXLAN (Virtual eXtensible Local Area Network) is an overlay technology that builds a virtual layer-2 network on top of a layer-3 network.

(figure: images/vxlan.png)

It runs over the existing (layer-3) IP network: vxlan can be deployed wherever the endpoints are mutually reachable over IP. Each endpoint runs a vtep that encapsulates and decapsulates vxlan packets, i.e. it wraps the virtual frame in a header addressed between vteps. Multiple vxlan networks can exist on the same physical network; each can be viewed as a tunnel through which machines on different nodes communicate directly. Each vxlan network is identified by a unique VNI, so different vxlans do not interfere with each other.

  • VTEP (VXLAN Tunnel Endpoint): the edge device of a vxlan network, responsible for encapsulating and decapsulating vxlan packets. A vtep can be a network device (such as a switch) or a host (such as a hypervisor in a virtualization cluster).
  • VNI (VXLAN Network Identifier): the identifier of each vxlan; there are 2^24 = 16,777,216 possible values. Typically one VNI maps to one tenant, so a vxlan-based public cloud can in theory support tens of millions of tenants.
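
The numbers above can be checked with a quick calculation; the 50-byte overhead figure also explains the MTU of 1450 seen on the vxlan devices later in this section.

```python
# Back-of-the-envelope numbers from the VXLAN description above.

VNI_BITS = 24
vni_count = 2 ** VNI_BITS            # 16,777,216 distinct vxlan networks

# Encapsulation overhead the vtep adds to each frame:
outer_eth, outer_ip, outer_udp, vxlan_hdr = 14, 20, 8, 8
overhead = outer_eth + outer_ip + outer_udp + vxlan_hdr   # 50 bytes total

# Which is why vxlan interfaces default to MTU 1500 - 50 = 1450.
mtu = 1500 - overhead
print(vni_count, overhead, mtu)
```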

Demo: between the k8s-slave1 and k8s-slave2 machines, use vxlan's point-to-point capability to build a virtual layer-2 network.

(figure: images/vxlan-p2p-1.jpg)

On the k8s-slave1 node:

# Create the vtep device; the peer is the 10.0.1.8 node; specify the VNI and the underlay NIC
$ ip link add vxlan20 type vxlan id 20 remote 10.0.1.8 dstport 4789 dev ens32
# Parameters:
# vxlan20  - device name
# type     - device type (vxlan)
# id       - the VNI, here 20
# remote   - peer address, 10.0.1.8
# dstport  - vxlan UDP port; 4789 is the standard default
# dev      - underlay NIC on the local machine

# Inspect the device
$ ip -d link show vxlan20

# Bring the device up
$ ip link set vxlan20 up 

# Assign an IP address
$ ip addr add 10.0.51.55/24 dev vxlan20

Note:
# On k8s-slave1:
[root@k8s-slave1 ~]# ip link add vxlan20 type vxlan id 20 remote 10.0.1.8 dstport 4789 dev ens32
[root@k8s-slave1 ~]# ip -d link show vxlan20
13: vxlan20: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:e6:f0:85:05:73 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
# Bring the device up
[root@k8s-slave1 ~]# ip link set vxlan20 up 
# Assign an IP address
[root@k8s-slave1 ~]# ip addr add 10.0.136.12/24 dev vxlan20
[root@k8s-slave1 ~]# ip -d link show vxlan20
13: vxlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:e6:f0:85:05:73 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535  

On the k8s-slave2 node:

# On k8s-slave2:
# Create the vtep device; the peer is the k8s-slave1 node; specify the VNI and the underlay NIC
[root@k8s-slave2 ~]# ip link add vxlan20 type vxlan id 20 remote 10.0.1.6 dstport 4789 dev ens32

# Bring the device up
[root@k8s-slave2 ~]# ip link set vxlan20 up 
# Assign an IP address
[root@k8s-slave2 ~]# ip addr add 10.0.137.11/24 dev vxlan20
[root@k8s-slave2 ~]# ip -d link show vxlan20
12: vxlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether e6:5e:54:bd:8d:13 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vxlan id 20 remote 10.0.1.6 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

On the k8s-slave1 node:

$ ping 10.0.137.11

# Route the peer subnet through the vtep so packets are encapsulated/decapsulated
[root@k8s-slave1 ~]# ip route add 10.0.137.0/24 dev vxlan20

# On the k8s-slave2 machine
[root@k8s-slave2 ~]# ip route add 10.0.136.0/24 dev vxlan20

# ping 10.0.137.11 again
[root@k8s-slave1 ~]# ping 10.0.137.11 -c 2
PING 10.0.137.11 (10.0.137.11) 56(84) bytes of data.
64 bytes from 10.0.137.11: icmp_seq=1 ttl=64 time=0.777 ms
64 bytes from 10.0.137.11: icmp_seq=2 ttl=64 time=0.580 ms 

# On the k8s-slave2 machine
$ tcpdump -i vxlan20 icmp

(figure: images/vxlan-p2p-2.jpg)

A tunnel is a logical concept; in the vxlan model there is no physical entity corresponding to it. It can be viewed as a virtual channel: the two vxlan endpoints (the VMs in the figure) believe they are communicating directly and are unaware of the underlying network. Overall, each vxlan network provides the communicating machines with what looks like a dedicated channel — the tunnel.

How it works:

The VM's frame passes through the vtep, which adds the vxlan and outer headers and sends it out; the peer vtep strips the vxlan header and, based on the VNI, delivers the original frame to the destination VM.
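The encapsulation/decapsulation step can be modeled as a toy sketch. This is illustrative Python — the addresses are the lab values from this demo, and the dictionaries stand in for real packet headers.

```python
# Toy model of the vtep encap/decap described above: the sending vtep wraps the
# original frame, the receiving vtep unwraps it and uses the VNI to pick the
# right overlay network.

def vtep_encapsulate(inner_frame, vni, local_ip, remote_ip):
    return {
        "outer_src": local_ip,     # underlay (layer-3) addresses
        "outer_dst": remote_ip,
        "udp_dstport": 4789,       # the standard VXLAN UDP port
        "vni": vni,
        "payload": inner_frame,    # the untouched virtual L2 frame
    }

def vtep_decapsulate(packet, expected_vni):
    assert packet["vni"] == expected_vni, "wrong overlay network"
    return packet["payload"]

frame = {"src": "10.0.136.12", "dst": "10.0.137.11", "data": "ICMP echo"}
wire = vtep_encapsulate(frame, vni=20, local_ip="10.0.1.6", remote_ip="10.0.1.8")
print(vtep_decapsulate(wire, expected_vni=20)["dst"])
```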

# Check the routes on host 172.21.51.55
$ route -n
10.0.51.0       0.0.0.0         255.255.255.0   U     0      0        0 vxlan20
10.0.52.0       0.0.0.0         255.255.255.0   U     0      0        0 vxlan20

# After the packet reaches the vxlan device
$ ip -d link show vxlan20
    vxlan id 20 remote 172.21.51.55 dev eth0 srcport 0 0 dstport 4789 ...

# Inspect the fdb table (MAC address, VLAN, port and flags). The vtep peer is 172.21.51.55 — in other words, any frame that gets a vxlan header added will be sent to 172.21.51.55
$ bridge fdb show dev vxlan20
00:00:00:00:00:00 dst 172.21.52.84 via eth0 self permanent
a6:61:05:84:20:c6 dst 172.21.52.84 self

Note:
# Check the routes on the k8s-slave1 host
[root@k8s-slave1 ~]# route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.136.0      0.0.0.0         255.255.255.0   U     0      0        0 vxlan20
10.0.137.0      0.0.0.0         255.255.255.0   U     0      0        0 vxlan20

# After the packet reaches the vxlan device
[root@k8s-slave1 ~]# ip -d link show vxlan20 
    vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789

# Inspect the fdb table (MAC address, VLAN, port and flags). The vtep peer is 10.0.1.8 — any frame that gets a vxlan header added will be sent to 10.0.1.8
[root@k8s-slave1 ~]# bridge fdb show|grep vxlan20
00:00:00:00:00:00 dev vxlan20 dst 10.0.1.8 via ens32 self permanent
02:6e:c5:fa:d1:89 dev vxlan20 dst 10.0.1.8 self 
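
The fdb lookup shown above can be modeled as a small table. This is an illustrative sketch: the MAC/IP values come from the output above, and the all-zero entry plays the role of the default destination for unknown or broadcast frames.

```python
# Sketch of the forwarding-database lookup done by the vtep.

fdb = {
    "02:6e:c5:fa:d1:89": "10.0.1.8",
    "00:00:00:00:00:00": "10.0.1.8",   # default entry from `bridge fdb show`
}

def vtep_for(dst_mac):
    """Return the underlay IP the encapsulated frame is sent to."""
    return fdb.get(dst_mac, fdb["00:00:00:00:00:00"])

print(vtep_for("02:6e:c5:fa:d1:89"))   # known MAC
print(vtep_for("ff:ff:ff:ff:ff:ff"))   # unknown: falls back to the default entry
```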

Capture packets on k8s-slave2 to see the vxlan-encapsulated traffic:

# On the k8s-slave2 machine
$ tcpdump -i ens32 host 10.0.1.6 -w vxlan.cap

# On the k8s-slave1 machine
$ ping 10.0.137.11

Note: install tcpdump if it is missing
yum install -y tcpdump

[root@k8s-slave2 ~]# tcpdump -i ens32 host 10.0.1.6 -w vxlan.cap
tcpdump: listening on ens32, link-type EN10MB (Ethernet), capture size 262144 bytes
[root@k8s-slave2 ~]# ll
total 24
-rw-------. 1 root    root     1441 Jul  7 15:41 anaconda-ks.cfg
-rw-r--r--  1 tcpdump tcpdump 16726 Jul 18 16:23 vxlan.cap

# Analyze the capture with wireshark
[root@k8s-slave1 ~]# ping 10.0.137.11 
PING 10.0.136.12 (10.0.136.12) 56(84) bytes of data.
64 bytes from 10.0.136.12: icmp_seq=1 ttl=64 time=0.435 ms
64 bytes from 10.0.136.12: icmp_seq=2 ttl=64 time=0.512 ms
64 bytes from 10.0.136.12: icmp_seq=3 ttl=64 time=0.721 ms

Use wireshark to analyze the ICMP packets.

Cleanup:

$ ip link del vxlan20
Cross-host container network communication

(figure: images/vxlan-docker-1.jpg)

Question: in container network mode, where should the vxlan device attach?

Basic requirement: traffic destined for the remote containers must be forwarded through the vtep device!

(figure: images/vxlan-docker-mul.jpg)

Demo: use vxlan to achieve cross-host container communication.

To avoid disturbing the existing network, create a new bridge and attach demo containers to it.

On the k8s-slave1 node:

$ docker network ls

# Create a new bridge network with its own CIDR
$ docker network create --subnet 172.18.1.0/24  network-luffy
$ docker network ls

# Inspect the bridges
$ brctl show
# Create a container attached to the new bridge
$ docker run -d --name vxlan-test --net network-luffy --ip 172.18.1.2 nginx:alpine

$ docker exec vxlan-test ifconfig

Note:
# Default initial state
[root@k8s-slave1 ~]# docker network  ls
NETWORK ID     NAME      DRIVER    SCOPE
59a5d3a5fbab   bridge    bridge    local
3b24ea493741   host      host      local
05f4c1d4d620   none      null      local

# Create a new bridge network with its own CIDR
[root@k8s-slave1 ~]# docker network create --subnet 172.18.1.0/24 network-luffy
6cda332dade866f0990994a924953a5b06efd80dbf058a8b17f5bda0ad94328a

[root@k8s-slave1 ~]# docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
3017c364daf2   bridge          bridge    local
3b24ea493741   host            host      local
c58faa633165   network-luffy   bridge    local
05f4c1d4d620   none            null      local

# Inspect the bridges: creating a docker bridge network automatically creates a Linux bridge
[root@k8s-slave1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-6cda332dade8		8000.02425dbc8bf5	no		

# Create a container on the new bridge; afterwards the bridge shows an attached veth interface
[root@k8s-slave1 ~]# docker run -d --name vxlan-test --net network-luffy --ip 172.18.1.2 nginx:alpine

[root@k8s-slave1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-6cda332dade8		8000.02425dbc8bf5	no		vethabc15e1


[root@k8s-slave1 ~]# docker exec vxlan-test ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:01:02  
          inet addr:172.18.1.2  Bcast:172.18.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1086 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

On the k8s-slave2 node:

# Create a new bridge network with its own CIDR
$ docker network create --subnet 172.18.2.0/24  network-luffy

# Create a container attached to the new bridge
$ docker run -d --name vxlan-test --net network-luffy --ip 172.18.2.2 nginx:alpine

Note:
# Create a new bridge network with its own CIDR
[root@k8s-slave2 ~]# docker network create  --subnet  172.18.2.0/24 network-luffy
e954091f19d1fd1ee389a366a4e5222498475a02669888da6931ed36fac5b721

[root@k8s-slave2 ~]# docker run -d --name vxlan-test --net network-luffy --ip 172.18.2.2 nginx:alpine
3f701927057a6a4c9d20591e53299af8d9459ebacc6114b2d0620c681dcef004

# Locally on k8s-slave2, the container is reachable
[root@k8s-slave2 ~]# docker exec vxlan-test ping 172.18.2.2 -c2
PING 172.18.2.2 (172.18.2.2): 56 data bytes
64 bytes from 172.18.2.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.18.2.2: seq=1 ttl=64 time=0.168 ms

# Check k8s-slave2's routing rules
[root@k8s-slave2 ~]# route -n |grep 172.18.2.0
172.18.2.0      0.0.0.0         255.255.255.0   U     0      0        0 br-a05a9e1cbf5c

Now run a ping test:

$ docker exec vxlan-test ping 172.18.2.2
[root@k8s-slave1 ~]# docker exec vxlan-test ping 172.18.2.2
PING 172.18.2.2 (172.18.2.2): 56 data bytes

Analysis: the traffic reaches the bridge but cannot leave the host. As in the earlier demo, it needs to be forwarded by the vtep device. Recall how a bridge works: it forwards frames between its attached ports. So if the vxlan device is attached as a bridge port, every frame sent by the containers passes through the bridge to the vxlan port, vxlan forwards it to the peer vtep, and the peer's bridge delivers it into its containers.

(figure: images/vxlan-docker-mul-all.jpg)

On the k8s-slave1 node:

# Delete the old vtep
$ ip link del vxlan20

# Create a new vtep
$ ip link add vxlan_docker type vxlan id 100 remote 172.21.52.84 dstport 4789 dev eth0
$ ip link set vxlan_docker up
# No IP is needed: the goal is only to forward the containers' traffic

$ brctl show
br-0fdb78d3b486         8000.02421452871b       no              vethfffdd2f
# Attach the vtep to the bridge
$ brctl addif br-0fdb78d3b486 vxlan_docker

Note:
# Delete the old vtep
[root@k8s-slave1 ~]# ip link del vxlan20

# Create a new vtep
[root@k8s-slave1 ~]# ip link add vxlan_docker type vxlan id 100 remote 10.0.1.8 dstport 4789 dev ens32
[root@k8s-slave1 ~]# ip link set vxlan_docker up
[root@k8s-slave1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-7aa1c412ebd9		8000.02422da4cc96	no		veth30da43b

# Attach the vtep to the bridge
# Note: the bridge name br-7aa1c412ebd9 comes from the brctl show output above
[root@k8s-slave1 ~]# brctl addif br-7aa1c412ebd9 vxlan_docker

[root@k8s-slave1 ~]# brctl  show
bridge name	bridge id		STP enabled	interfaces
br-7aa1c412ebd9		8000.02422da4cc96	no		veth30da43b
							vxlan_docker
[root@k8s-slave1 ~]# route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.18.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-7aa1c412ebd9
[root@k8s-slave1 ~]# ip route add 172.18.2.0/24 dev br-7aa1c412ebd9

On the k8s-slave2 node:

# Delete the old vtep
$ ip link del vxlan20

# Create a new vtep
$ ip link add vxlan_docker type vxlan id 100 remote 172.21.51.55 dstport 4789 dev eth0
$ ip link set vxlan_docker up
# No IP is needed: the goal is only to forward the containers' traffic

# Attach the vtep to the bridge
$ brctl show
$ brctl addif br-c6660fe2dc53 vxlan_docker

Note:
# Delete the old vtep
[root@k8s-slave2 ~]# ip link del vxlan20
# Create a new vtep
[root@k8s-slave2 ~]# ip link add vxlan_docker type vxlan id 100 remote 10.0.1.6 dstport 4789 dev ens32
[root@k8s-slave2 ~]# ip link set vxlan_docker up

[root@k8s-slave2 ~]# brctl  show
bridge name	bridge id		STP enabled	interfaces
br-cbd95326f8df		8000.02421006642f	no		veth7e24c00

# Attach the vtep to the bridge
# Note: the bridge name br-cbd95326f8df comes from the brctl show output above
[root@k8s-slave2 ~]# brctl  addif br-cbd95326f8df vxlan_docker
[root@k8s-slave2 ~]# brctl  show
bridge name	bridge id		STP enabled	interfaces
br-cbd95326f8df		8000.02421006642f	no		veth7e24c00
							vxlan_docker
[root@k8s-slave2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.18.2.0      0.0.0.0         255.255.255.0   U     0      0        0 br-cbd95326f8df
[root@k8s-slave2 ~]# ip route add 172.18.1.0/24 dev br-cbd95326f8df

Run the ping test again:

$ docker exec vxlan-test ping 172.18.2.2
[root@k8s-slave1 ~]# docker exec vxlan-test ping 172.18.2.2
PING 172.18.2.2 (172.18.2.2): 56 data bytes
64 bytes from 172.18.2.2: seq=0 ttl=63 time=0.843 ms
64 bytes from 172.18.2.2: seq=1 ttl=63 time=1.293 ms

# What does this show?
# By manually wiring bridges to vtep devices, we achieved cross-host container communication.
The brctl command

Commands for managing Linux bridges.

Install the package:
yum install -y bridge-utils

Subcommand                   Description                            Example
addbr <bridge>               create a bridge                        brctl addbr br10
delbr <bridge>               delete a bridge                        brctl delbr br10
addif <bridge> <device>      attach an interface to a bridge        brctl addif br10 eth0
delif <bridge> <device>      detach an interface from a bridge      brctl delif br10 eth0
show [<bridge>]              show bridge information                brctl show br10, or simply brctl show
stp <bridge> {on|off}        enable or disable STP                  brctl stp br10 on / brctl stp br10 off
showstp <bridge>             show bridge STP information            brctl showstp br10
setfd <bridge> <time>        set the bridge forward delay           brctl setfd br10 10
showmacs <bridge>            show learned MAC addresses             brctl showmacs br10
The ip command

https://wangchujiang.com/linux-command/c/ip.html

Flannel's vxlan implementation in detail

Question: how does the k8s cluster network differ from the manual cross-host container setup above?

  1. CNI requires every Pod in the cluster to be assigned a unique Pod IP

  2. Communication inside a k8s cluster is not point-to-point vxlan, because every pair of nodes must be able to reach each other

    • a point-to-point vxlan model cannot be built for this
  3. Cluster nodes are added dynamically

(figure: images/flannel.png)

How flannel assigns each node a Pod subnet:

$ kubectl -n kube-system get po |grep flannel
$ kubectl -n kube-system exec kube-flannel-ds-amd64-cb7hs cat /etc/kube-flannel/net-conf.json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

# Check the pods' IPs
[root@k8s-master bin]# kubectl -n luffy get po -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE        
myblog-5d9ff54d4b-4rftt   1/1     Running   1          33h     10.244.2.19   k8s-slave2  
myblog-5d9ff54d4b-n447p   1/1     Running   1          33h     10.244.1.32   k8s-slave1

# Check the subnet assigned to the k8s-slave1 host
$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

# When kubelet starts containers, it assigns pod IPs from this node's subnet
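
A minimal parser for the subnet.env format above shows what a node derives from it. This is an illustrative Python sketch, assuming the file contents shown for k8s-slave1.

```python
import ipaddress

# Parse a /run/flannel/subnet.env file (contents copied from the output above).
subnet_env = """\
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
"""

conf = dict(line.split("=", 1) for line in subnet_env.splitlines())

# This node's pod range, carved out of the cluster-wide /16.
node_subnet = ipaddress.ip_network(conf["FLANNEL_SUBNET"], strict=False)
cluster_net = ipaddress.ip_network(conf["FLANNEL_NETWORK"])

print(node_subnet)                         # 10.244.1.0/24
print(node_subnet.subnet_of(cluster_net))  # True
```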

Note:
# This file exists only while flannel is running; if flannel dies, the file disappears
# Each machine is assigned one /24 out of the cluster network
# The master got the 10.244.0.x range
[root@k8s-master week3]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true

# slave2 got the 10.244.2.x range
[root@k8s-slave2 ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true

# Every machine that joins is assigned its own 10.244.x.0/24 range, which guarantees that pod IPs never collide anywhere in the cluster
Note:
# At initialization time, the kubeadm.yaml file defines the pod IP range
[root@k8s-master 2021]# grep networking -A3 /root/2021/kubeadm.yaml 
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12

# Check the pod IPs
[root@k8s-master 2021]# kubectl -n luffy  get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65847cf6ff-f5rv2   1/1     Running   1          7h31m   10.244.1.15   k8s-slave1   <none>           <none>
myblog-65847cf6ff-tz46d   1/1     Running   1          7h31m   10.244.0.11   k8s-master   <none>           <none>
mysql-58d95d459c-jj4sx    1/1     Running   0          5h18m   10.244.1.16   k8s-slave1   <none>   
# All of these pod IPs fall inside the podSubnet range defined in kubeadm.yaml

[root@k8s-master ~]# kubectl describe no k8s-slave1|grep -i cidr
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24

Where is the vtep device:

$ ip -d link show flannel.1
# no remote IP — this is not point-to-point

# Every node has a flannel.1 device, created when flannel starts
[root@k8s-master 2021]# ip -d link show flannel.1
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 0a:4c:e7:8d:15:4e brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vxlan id 1 local 10.0.1.5 dev ens32 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

How Pod traffic reaches the vtep device:

$ brctl show cni0

# Every Pod uses a veth pair to get its traffic onto the cni0 bridge

$ route -n
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
Note:
# Show the bridge
[root@k8s-master 2021]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.067713fbf9e2	no		veth34f9f400
							veth7d8843b1
							vethc42219bc
# The bridge carries pod traffic into the host network namespace; only from there can flannel forward it out of the node

# Every pod uses a veth pair to get its traffic onto the cni0 bridge
[root@k8s-slave1 ~]# route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.2        0.0.0.0         UG    100    0        0 ens32
10.0.1.0        0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      10.0.1.5        255.255.255.0   UG    0      0        0 ens32
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.0.1.8        255.255.255.0   UG    0      0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-c58faa633165
# Each vethXXX interface corresponds to one pod — one veth pair per pod

The route command: displays and manipulates the Linux static routing table.
# The Flags column marks the state of each route:
# U (up)           the route is up
# H (host)         the target is a single host
# G (gateway)      the route uses a gateway
# D (dynamically)  the route was installed dynamically
# M (modified)     the route was modified by a routing daemon or redirect
# !                the route is rejected
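
The flag letters can be decoded with a small lookup (illustrative sketch only):

```python
# Decoder for the route(8) Flags column described above.

FLAG_MEANINGS = {
    "U": "route is up",
    "H": "target is a host",
    "G": "use gateway",
    "D": "dynamically installed",
    "M": "modified by daemon/redirect",
    "!": "reject route",
}

def decode(flags):
    return [FLAG_MEANINGS[f] for f in flags]

print(decode("UG"))   # e.g. the 10.244.2.0 row: up, via gateway
```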

How does the vtep learn the peer vtep's IP and MAC when encapsulating?

# flanneld is started with --iface=ens32; through this it stores the NIC's IP and MAC information in etcd,
# so flannel knows every node's assigned subnet plus the IP and MAC of its vtep device. Every node's flanneld also detects node additions and removals and updates the local forwarding configuration dynamically.
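What flanneld programs from that data can be sketched as follows. This is illustrative: the subnets, node IPs and vtep MACs are the lab values from this section, and the rendered strings only approximate the real route and fdb entries.

```python
# Sketch of what flanneld derives from the per-node data it watches:
# one route via flannel.1 plus one fdb entry per peer node.

peers = {
    "k8s-master": {"subnet": "10.244.0.0/24", "node_ip": "10.0.1.5",
                   "vtep_mac": "0a:4c:e7:8d:15:4e"},
    "k8s-slave2": {"subnet": "10.244.2.0/24", "node_ip": "10.0.1.8",
                   "vtep_mac": "5a:b9:e4:92:6a:34"},
}

def render_rules(peers):
    routes, fdb = [], []
    for info in peers.values():
        # remote pod subnet is reachable through the local vtep device
        routes.append(f"{info['subnet']} dev flannel.1 onlink")
        # frames for the peer vtep MAC go to the peer node's underlay IP
        fdb.append(f"{info['vtep_mac']} dst {info['node_ip']} self permanent")
    return routes, fdb

routes, fdb = render_rules(peers)
print(routes[0])
print(fdb[1])
```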

Walkthrough: the detailed traffic path for cross-host Pod communication:

$ kubectl -n luffy get po -o wide
myblog-5d9ff54d4b-4rftt   1/1     Running   1          25h    10.244.2.19   k8s-slave2
myblog-5d9ff54d4b-n447p   1/1     Running   1          25h    10.244.1.32   k8s-slave1

$ kubectl -n luffy exec myblog-5d9ff54d4b-n447p -- ping 10.244.2.19 -c 2
PING 10.244.2.19 (10.244.2.19) 56(84) bytes of data.
64 bytes from 10.244.2.19: icmp_seq=1 ttl=62 time=0.480 ms
64 bytes from 10.244.2.19: icmp_seq=2 ttl=62 time=1.44 ms

--- 10.244.2.19 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.480/0.961/1.443/0.482 ms

# Check the routes inside the pod
$ kubectl -n luffy exec myblog-5d9ff54d4b-n447p -- route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.244.1.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0

# Check the veth pairs and bridge on k8s-slave1
$ brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.6a9a0b341d88       no              veth048cc253
                                                        veth76f8e4ce
                                                        vetha4c972e1
# Once traffic reaches cni0, check the slave1 node's routes
$ route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.136.2   0.0.0.0         UG    100    0        0 eth0
10.0.136.0      0.0.0.0         255.255.255.0   U     0      0        0 vxlan20
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.136.0   0.0.0.0         255.255.255.0   U     100    0        0 eth0

# Traffic is forwarded to the flannel.1 interface — inspecting it shows it is a vtep device
$ ip -d link show flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 8a:2a:89:4d:b0:31 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vxlan id 1 local 172.21.51.68 dev eth0 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

# Where to forward: the data is looked up from etcd and cached locally, so traffic does not rely on multicast
$ bridge fdb show dev flannel.1
4a:4d:9d:3a:c5:f0 dst 172.21.51.68 self permanent
76:e7:98:9f:5b:e9 dst 172.21.51.67 self permanent

# The peer vtep decapsulates the packet and extracts the original payload; check k8s-slave2's routes
$ route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.21.50.140   0.0.0.0         UG    0      0        0 eth0
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.21.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

# Forwarded to the cni0 bridge per the routing rule, and the bridge delivers it to the target Pod

Summary: flannel enables direct pod-to-pod communication across hosts.
# Check pod details
[root@k8s-master 2021]# kubectl -n luffy  get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65847cf6ff-8s75f   1/1     Running   0          39s     10.244.2.7    k8s-slave2   <none>           <none>
myblog-65847cf6ff-f5rv2   1/1     Running   1          7h59m   10.244.1.15   k8s-slave1   <none>           <none>
myblog-65847cf6ff-tz46d   1/1     Running   1          7h59m   10.244.0.11   k8s-master   <none>           <none>
mysql-58d95d459c-jj4sx    1/1     Running   0          5h45m   10.244.1.16   k8s-slave1   <none>           <none>
  

# Ping the pod on slave2 from the myblog pod on slave1
[root@k8s-master 2021]# kubectl -n luffy  exec myblog-65847cf6ff-f5rv2 -- ping 10.244.2.7 -c 2
PING 10.244.2.7 (10.244.2.7) 56(84) bytes of data.
64 bytes from 10.244.2.7: icmp_seq=1 ttl=62 time=0.514 ms
64 bytes from 10.244.2.7: icmp_seq=2 ttl=62 time=0.455 ms

# Check the IP inside the pod on slave1
[root@k8s-master 2021]# kubectl -n luffy exec myblog-65847cf6ff-f5rv2  -- ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.15  netmask 255.255.255.0  broadcast 10.244.1.255
        ether de:8b:7c:f3:44:8a  txqueuelen 0  (Ethernet)
        RX packets 73499  bytes 7631745 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 80288  bytes 9807009 (9.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 38636  bytes 6604367 (6.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38636  bytes 6604367 (6.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# Check the routes inside this pod on slave1; one end of the veth pair is inside the pod, the other on the host
[root@k8s-master 2021]# kubectl -n luffy exec myblog-65847cf6ff-f5rv2 -- route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.244.1.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0

# Now check the veth pairs and bridge on k8s-slave1
[root@k8s-master 2021]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.067713fbf9e2	no		veth34f9f400
							veth7d8843b1
							vethc42219bc

# Once traffic reaches cni0, check the slave1 node's routes
[root@k8s-slave1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.2        0.0.0.0         UG    100    0        0 ens32
10.0.1.0        0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      10.0.1.5        255.255.255.0   UG    0      0        0 ens32
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      10.0.1.8        255.255.255.0   UG    0      0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-c58faa633165

# Traffic is forwarded to the flannel.1 interface, which is the vtep device
[root@k8s-slave1 ~]# ip -d link  show flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether 06:cd:95:7e:6e:a6 brd ff:ff:ff:ff:ff:ff promiscuity 0 
    vxlan id 1 local 10.0.1.6 dev ens32 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    
# Where to forward: looked up from etcd and cached locally; no multicast needed
[root@k8s-slave1 ~]# bridge fdb show dev flannel.1
0a:4c:e7:8d:15:4e dst 10.0.1.5 self permanent
5a:b9:e4:92:6a:34 dst 10.0.1.8 self permanent
72:cd:f4:b5:34:d5 dst 10.0.1.5 self permanent

# The peer vtep decapsulates the packet and extracts the original payload; check k8s-slave2's routes
[root@k8s-slave2 ~]# route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.2        0.0.0.0         UG    100    0        0 ens32
10.0.1.0        0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      10.0.1.5        255.255.255.0   UG    0      0        0 ens32
10.244.1.0      10.0.1.6        255.255.255.0   UG    0      0        0 ens32
10.244.2.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-aaad277060d

[root@k8s-slave2 ~]# brctl show cni0
bridge name	bridge id		STP enabled	interfaces
cni0		8000.1e45de8b7ca7	no		veth20885c0b

# Forwarded to the cni0 bridge per the routing rule, and the bridge delivers it to the target pod
Summary: flannel enables direct pod-to-pod communication across hosts.

The actual request path:

(figure: images/flannel-actual.png)

  • An IP packet from pod-a (10.244.1.32) on node k8s-slave1, destined for 10.244.2.19, is routed to eth0 by pod-a's own routing table, then crosses the veth pair onto the host bridge cni0
  • On cni0, node k8s-slave1's routing table matches: packets for 10.244.2.19 must be handed to the flannel.1 interface
  • flannel.1, as a VTEP device, encapsulates the packet per its VTEP configuration. The first time, an ARP request resolves that the vtep for 10.244.2.19 is the k8s-slave2 machine with IP 172.21.51.67; with that MAC address the VXLAN encapsulation is performed
  • Over the network link between k8s-slave1 and k8s-slave2, the VXLAN packet arrives at k8s-slave2's eth0 interface
  • Via port 8472, the VXLAN packet is handed to the VTEP device flannel.1 for decapsulation
  • The decapsulated IP packet matches k8s-slave2's routing table (10.244.2.0), and the kernel forwards it to cni0
  • cni0 forwards the IP packet to pod-b, which is attached to it
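
The routing decisions in the steps above can be replayed with a longest-prefix-match sketch. This is illustrative Python; the tables are trimmed to the relevant rows of the lab nodes.

```python
import ipaddress

# Replays the routing decisions: pod -> cni0 -> flannel.1 on slave1,
# then (after decapsulation) cni0 -> pod on slave2.

def pick(routes, dst):
    """Longest-prefix match, as the kernel does; returns the output interface."""
    dst = ipaddress.ip_address(dst)
    best = max((r for r in routes if dst in ipaddress.ip_network(r[0])),
               key=lambda r: ipaddress.ip_network(r[0]).prefixlen)
    return best[1]

slave1 = [("10.244.1.0/24", "cni0"), ("10.244.2.0/24", "flannel.1"), ("0.0.0.0/0", "eth0")]
slave2 = [("10.244.2.0/24", "cni0"), ("10.244.1.0/24", "flannel.1"), ("0.0.0.0/0", "eth0")]

print(pick(slave1, "10.244.2.19"))   # slave1 hands the packet to its vtep
print(pick(slave2, "10.244.2.19"))   # after decap, slave2 delivers via cni0
```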
Using host-gw mode to improve cluster network performance

vxlan mode works in any environment where nodes are reachable at layer 3, so it places very loose requirements on the cluster network; but the extra encapsulation and decapsulation at the VTEP devices adds performance overhead.

The network plugin's job is ultimately to deliver traffic from the local cni0 bridge to the destination host's cni0 bridge. In practice many clusters run within a single layer-2 network, so the peer hosts themselves can act as forwarding gateways. In that case no encapsulation is needed: traffic is forwarded purely via the routing table.

(figure: images/flannel-host-gw.png)

Why can't any layer-3-reachable network simply forward via a gateway?

In the kernel's routing rules, a gateway must share a subnet with at least one IP on the local host.
Since all nodes in a k8s cluster need pod-to-pod connectivity, host-gw mode requires every node in the cluster to sit in the same layer-2 network.

# Note: within a single layer-2 network, host-gw can be used; it performs better than the vxlan backend
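
The difference between the two backends boils down to the route each installs for a remote pod subnet. The sketch below is illustrative, matching the route tables shown before and after the backend switch in this section; the rendered strings only approximate the real entries.

```python
# Route a node installs for a remote pod subnet, depending on the backend.

def remote_route(backend, subnet, peer_node_ip):
    if backend == "host-gw":
        # The peer node itself is the gateway: plain kernel forwarding, no encapsulation.
        return f"{subnet} via {peer_node_ip} dev ens32"
    if backend == "vxlan":
        # Traffic is handed to the local vtep (flannel.1) for encapsulation instead.
        return f"{subnet} dev flannel.1 onlink"
    raise ValueError(backend)

print(remote_route("host-gw", "10.244.2.0/24", "10.0.1.8"))
print(remote_route("vxlan", "10.244.2.0/24", "10.0.1.8"))
```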

Change flannel's network backend:

$ kubectl edit cm kube-flannel-cfg -n kube-system
...
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
kind: ConfigMap
...

Note:
# flannel's configuration is mounted in from a configmap
[root@k8s-master ~]# kubectl -n kube-system get cm
NAME                                 DATA   AGE
coredns                              1      26h
extension-apiserver-authentication   6      26h
kube-flannel-cfg                     2      6h30m
kube-proxy                           2      26h
kubeadm-config                       2      26h
kubelet-config-1.19                  1      26h
# Inspect the configmap's contents
[root@k8s-master ~]# kubectl -n kube-system get cm kube-flannel-cfg -oyaml
apiVersion: v1
data:
  cni-conf.json: | # the pipe marks a literal block: the file contents follow
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw" # changed from vxlan to host-gw
      }
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"cni-conf.json":"{\n  \"name\": \"cbr0\",\n  \"cniVersion\": \"0.3.1\",\n  \"plugins\": [\n    {\n      \"type\": \"flannel\",\n      \"delegate\": {\n        \"hairpinMode\": true,\n        \"isDefaultGateway\": true\n      }\n    },\n    {\n      \"type\": \"portmap\",\n      \"capabilities\": {\n        \"portMappings\": true\n      }\n    }\n  ]\n}\n","net-conf.json":"{\n  \"Network\": \"10.244.0.0/16\",\n  \"Backend\": {\n    \"Type\": \"vxlan\"\n  }\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"flannel","tier":"node"},"name":"kube-flannel-cfg","namespace":"kube-system"}}
  creationTimestamp: "2021-07-18T07:11:15Z"
  labels:
    app: flannel
    tier: node
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:cni-conf.json: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
          f:tier: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-18T07:11:15Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:net-conf.json: {}
    manager: kubectl-edit
    operation: Update
    time: "2021-07-18T12:14:55Z"
  name: kube-flannel-cfg
  namespace: kube-system
  resourceVersion: "218583"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-flannel-cfg
  uid: a50a1bd5-d261-468d-b498-9b8aa83da681
  
# Locate the files inside the pod
[root@k8s-master week3]# kubectl -n kube-system get po|grep flannel
kube-flannel-ds-9xr8l                1/1     Running   0          88m
kube-flannel-ds-hz2tg                1/1     Running   0          88m
kube-flannel-ds-qj4zh                1/1     Running   0          88m
[root@k8s-master week3]# kubectl -n kube-system exec kube-flannel-ds-9xr8l -- ls /etc/kube-flannel
cni-conf.json
net-conf.json
# The configmap above is mounted as these two files

# 修改configmap的内容,可以发现没法直接去修改flannel内部的文件,只需要通过修改configmap,然后去重建
[root@k8s-master week3]# kubectl -n kube-system edit cm kube-flannel-cfg 
# 将type类型原来的vxlan修改host-gw,直接保存退出不会自动重建,需要手动重建
29       "Network": "10.244.0.0/16",
     30       "Backend": {
     31         "Type": "host-gw"

# Record the routing table before the rebuild, for comparison afterwards
[root@k8s-master week3]# kubectl -n kube-system get po|grep flannel
kube-flannel-ds-9xr8l                1/1     Running   0          88m
kube-flannel-ds-hz2tg                1/1     Running   0          88m
kube-flannel-ds-qj4zh                1/1     Running   0          88m
[root@k8s-master week3]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.2        0.0.0.0         UG    100    0        0 ens32
10.0.1.0        0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.0.1.6        255.255.255.0   UG    0      0        0 ens32
10.244.2.0      10.0.1.8        255.255.255.0   UG    0      0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
# Rebuild: multiple Pods can be deleted in one command, separated by spaces
[root@k8s-master week3]# kubectl -n kube-system delete po kube-flannel-ds-9xr8l kube-flannel-ds-hz2tg kube-flannel-ds-qj4zh
pod "kube-flannel-ds-9xr8l" deleted
pod "kube-flannel-ds-hz2tg" deleted
pod "kube-flannel-ds-qj4zh" deleted

# Check the logs of one of the new Pods
[root@k8s-master week3]# kubectl -n kube-system get po|grep flannel
kube-flannel-ds-bd59v                1/1     Running   0          38s
kube-flannel-ds-p9l5c                1/1     Running   0          49s
kube-flannel-ds-s92r5                1/1     Running   0          47s
[root@k8s-master week3]# kubectl -n kube-system logs -f kube-flannel-ds-bd59v 
I0718 13:52:47.232151       1 main.go:533] Using interface with name ens32 and address 10.0.1.6
I0718 13:52:47.232259       1 main.go:550] Defaulting external address to interface address (10.0.1.6)
W0718 13:52:47.232278       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0718 13:52:47.434705       1 kube.go:116] Waiting 10m0s for node controller to sync
I0718 13:52:47.434794       1 kube.go:299] Starting kube subnet manager
I0718 13:52:48.435230       1 kube.go:123] Node controller sync successful
I0718 13:52:48.435289       1 main.go:254] Created subnet manager: Kubernetes Subnet Manager - k8s-slave1
I0718 13:52:48.435297       1 main.go:257] Installing signal handlers
I0718 13:52:48.435864       1 main.go:392] Found network config - Backend type: host-gw
# Seeing this line confirms the backend change took effect

# Compare the routing-table changes
[root@k8s-master week3]# route -n 
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.1.2        0.0.0.0         UG    100    0        0 ens32
10.0.1.0        0.0.0.0         255.255.255.0   U     100    0        0 ens32
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.0.1.6        255.255.255.0   UG    0      0        0 ens32
10.244.2.0      10.0.1.8        255.255.255.0   UG    0      0        0 ens32
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

# The gateway for each remote Pod subnet is now the peer node's host address
# This change does not disrupt the Pods above; their IPs stay the same
[root@k8s-master week3]# kubectl -n luffy  get po -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myblog-65847cf6ff-8s75f   1/1     Running   0          49m     10.244.2.7    k8s-slave2   <none>           <none>
myblog-65847cf6ff-f5rv2   1/1     Running   1          8h      10.244.1.15   k8s-slave1   <none>           <none>
myblog-65847cf6ff-tz46d   1/1     Running   1          8h      10.244.0.11   k8s-master   <none>           <none>
mysql-58d95d459c-jj4sx    1/1     Running   0          6h34m   10.244.1.16   k8s-slave1   <none>           <none>

# The Pod is still reachable
[root@k8s-master week3]# ping 10.244.1.15 -c 1
PING 10.244.1.15 (10.244.1.15) 56(84) bytes of data.
64 bytes from 10.244.1.15: icmp_seq=1 ttl=63 time=0.386 ms

# Pinging a Pod on slave2 from a Pod on slave1 also works
[root@k8s-master week3]# kubectl -n luffy  exec myblog-65847cf6ff-f5rv2 -- ping 10.244.2.7 -c 2
PING 10.244.2.7 (10.244.2.7) 56(84) bytes of data.
64 bytes from 10.244.2.7: icmp_seq=1 ttl=62 time=3.74 ms
64 bytes from 10.244.2.7: icmp_seq=2 ttl=62 time=0.389 ms

Rebuild the Flannel Pods:

$ kubectl -n kube-system get po |grep flannel
kube-flannel-ds-amd64-5dgb8          1/1     Running   0          15m
kube-flannel-ds-amd64-c2gdc          1/1     Running   0          14m
kube-flannel-ds-amd64-t2jdd          1/1     Running   0          15m

$ kubectl -n kube-system delete po kube-flannel-ds-amd64-5dgb8 kube-flannel-ds-amd64-c2gdc kube-flannel-ds-amd64-t2jdd

# After the new Pods start, check the logs for the line "Backend type: host-gw"
$  kubectl -n kube-system logs -f kube-flannel-ds-amd64-4hjdw
I0704 01:18:11.916374       1 kube.go:126] Waiting 10m0s for node controller to sync
I0704 01:18:11.916579       1 kube.go:309] Starting kube subnet manager
I0704 01:18:12.917339       1 kube.go:133] Node controller sync successful
I0704 01:18:12.917848       1 main.go:247] Installing signal handlers
I0704 01:18:12.918569       1 main.go:386] Found network config - Backend type: host-gw
I0704 01:18:13.017841       1 main.go:317] Wrote subnet file to /run/flannel/subnet.env

Check the node routing table:

$ route -n 
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.136.2   0.0.0.0         UG    100    0        0 eth0
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      172.21.51.68    255.255.255.0   UG    0      0        0 eth0
10.244.2.0      172.21.51.55    255.255.255.0   UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.136.0   0.0.0.0         255.255.255.0   U     100    0        0 eth0

  • An IP packet from pod-a on node k8s-slave1, destined for pod-b (10.244.2.19), is routed to eth0 inside pod-a and crosses the veth pair onto the host bridge cni0
  • On cni0, the packet matches node k8s-slave1's routing table, which says traffic for 10.244.2.0/24 must be forwarded via the gateway 172.21.51.55
  • The packet arrives at the eth0 NIC of node k8s-slave2 (172.21.51.55), whose routing rules forward it to the cni0 bridge
  • cni0 delivers the packet to pod-b, which is attached to cni0
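The per-hop decision in these steps is plain longest-prefix matching against the node routing table. A minimal sketch of that lookup using only Python's standard library (the table entries are copied from the `route -n` output above; this illustrates the kernel's route selection, not flannel's own implementation):

```python
import ipaddress

# Simplified routing table, mirroring the host-gw `route -n` output above
routes = [
    ("0.0.0.0/0",     "10.0.1.2", "ens32"),  # default route
    ("10.244.0.0/24", None,       "cni0"),   # local Pod subnet, delivered via the bridge
    ("10.244.1.0/24", "10.0.1.6", "ens32"),  # Pod subnet of k8s-slave1, next hop = host IP
    ("10.244.2.0/24", "10.0.1.8", "ens32"),  # Pod subnet of k8s-slave2, next hop = host IP
]

def lookup(dst):
    """Pick (gateway, interface) by longest-prefix match, as the kernel does."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), gw, dev)
               for net, gw, dev in routes
               if dst_ip in ipaddress.ip_network(net)]
    net, gw, dev = max(matches, key=lambda m: m[0].prefixlen)
    return gw, dev

print(lookup("10.244.2.19"))  # Pod on another node: forwarded via that host's IP
print(lookup("10.244.0.5"))   # local Pod: delivered directly over cni0
```

With host-gw, the "tunnel" is nothing more than these ordinary routes, which is why it only works when the next-hop host IPs are directly reachable on the same layer-2 network.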

Summary

What we did:
Set up point-to-point VXLAN communication
Built a bridge and used VXLAN tunnels to achieve cross-host container communication
Explained how the flannel plugin implements cross-host Pod-to-Pod communication
Used host-gw (host gateway) to improve cluster network performance, on the condition that all nodes sit on the same layer-2 network

What to master:
k8s:
  Understand how flannel works
  Know how cross-host container communication is implemented
  Know what flannel is: a network plugin that provides cross-host Pod-to-Pod communication, usable whenever the nodes share a layer-2 network. Be familiar with host-gw: it performs better than the vxlan backend, but it strictly requires all nodes to be on the same layer-2 network, a tougher condition
Beyond k8s:
  Be comfortable with commands such as brctl and ip link
Kubernetes Authentication and Authorization
APIServer Security Controls

(figure: k8s-apiserver-access-control-overview.svg — overview of APIServer access control)

  • Authentication: verifying identity

    1. This stage takes the whole HTTP request as input and verifies the client's identity. Supported methods include:

      • basic auth

      • client certificates (mutual TLS)

      • JWT tokens (used by service accounts)

    2. At startup, the APIServer may be configured with one or several authentication methods. With several, it tries them one by one; as soon as the request passes any one of them, authentication succeeds.

    3. In a cluster bootstrapped with kubeadm, the apiserver's initial configuration enables client certificate and service account authentication by default. Certificate authentication is enabled by setting --client-ca-file for the root CA, together with --tls-cert-file and --tls-private-key-file.

    4. In this stage the apiserver identifies the requesting user from the client certificate or from HTTP headers (e.g. a service account's JWT token), extracting attributes such as "user" and "group" that the later authorization stage will use.

  • Authorization: which resources you may access

    1. This stage takes attributes from the HTTP request context as input: user, group, request path (e.g. /api/v1, /healthz, /version) and request verb (e.g. get, list, create).

    2. The APIServer compares these attributes against preconfigured access policies. Several authorization modes are supported, including Node, RBAC and Webhook.

    3. At startup the APIServer may be configured with one or several authorization modes; with several, the request is authorized as soon as any one mode allows it. In recent kubeadm-bootstrapped clusters, the default authorization-mode is "Node,RBAC".

  • Admission Control: a chain of controllers (successive checkpoints) that can intercept requests, geared toward cluster security and policy management.

    • Why is it needed?

      Authentication and authorization only see HTTP request headers and certificates; they cannot validate the request body.

      Admission controllers run inside the API Server's create/update/delete handlers, so they can naturally operate on API resources.

    • Examples

      • NamespaceLifecycle ensures that a Namespace in the Terminating state accepts no new object-creation requests and rejects requests against nonexistent Namespaces. It also prevents deletion of the system-reserved Namespaces default, kube-system and kube-public.

      • LimitRanger: if a namespace defines a LimitRange object and a Pod spec omits resource values, the defaults from the LimitRange are applied to the Pod.

        apiVersion: v1
        kind: LimitRange # Pods created in this namespace without resource values get these defaults; it only applies within the same namespace
        metadata:
          name: mem-limit-range 
          namespace: demo
        spec:
          limits:
          - default:
              memory: 512Mi
            defaultRequest:
              memory: 256Mi
            type: Container
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: default-mem-demo
          namespace: demo # must be in the same namespace as the LimitRange for it to apply
        spec:
          containers:
          - name: default-mem-demo
            image: nginx:alpine
        
      Note:
      [root@k8s-master week3]# kubectl create ns demo
      namespace/demo created
      
      [root@k8s-master week3]# vim lm.yaml
      [root@k8s-master week3]# kubectl create -f lm.yaml 
      limitrange/mem-limit-range created
      
      [root@k8s-master week3]# vim lm-pod.yaml
      [root@k8s-master week3]# kubectl apply -f lm-pod.yaml 
      pod/default-mem-demo unchanged
      [root@k8s-master week3]# kubectl -n demo  get po
      NAME                      READY   STATUS    RESTARTS   AGE
      default-mem-demo          1/1     Running   0          7m33s
      [root@k8s-master week3]# kubectl -n demo  get po default-mem-demo -oyaml|grep limits -A 3
            limits:
              memory: 512Mi
            requests:
              memory: 256Mi
      
      # The manifest did not specify limits, yet after creation the defaults have been applied
      
    • NodeRestriction limits which Node and Pod objects a kubelet may modify: a kubelet may only modify its own Node object and Pod objects bound to its node. Future versions may add further restrictions. It is enabled by default when the Node authorization mode is on.

  • How to use it

    At startup, the APIServer's --enable-admission-plugins and --disable-admission-plugins flags specify which admission controllers to enable or disable.

  • Use cases

    • Automatically injecting sidecar or initContainer containers
    • Webhook admission, for custom business-specific controls
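In a kubeadm cluster these flags live in the apiserver's static Pod manifest. A hypothetical excerpt (the plugin names are illustrative; check your cluster's actual manifest before editing):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --enable-admission-plugins=NodeRestriction,LimitRanger
```

Because the kubelet watches the manifests directory, saving this file restarts the apiserver with the new plugin list.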
Authentication and Authorization for kubectl

kubectl log verbosity levels:

Level  Description
v=0    Always visible to an operator.
v=1    A reasonable default when you do not want very verbose output.
v=2    Useful steady-state information about services, plus important log messages that may relate to significant changes in the system. The recommended default for most systems.
v=3    Extended information about changes.
v=4    Debug-level information.
v=6    Display requested resources.
v=7    Display HTTP request headers.
v=8    Display HTTP request contents.
v=9    Display HTTP request contents without truncation.
$ kubectl get nodes -v=7
I0329 20:20:08.633065    3979 loader.go:359] Config loaded from file /root/.kube/config
I0329 20:20:08.633797    3979 round_trippers.go:416] GET https://172.21.51.143:6443/api/v1/nodes?limit=500

Note: debugging flag — the larger the number, the more detail is shown
# Walking through the verbose output
[root@k8s-master week3]# kubectl get no -v=7
I0720 11:13:44.713735  101669 loader.go:375] Config loaded from file:  /root/.kube/config # config loaded from this file
I0720 11:13:44.783558  101669 round_trippers.go:421] GET https://10.0.1.5:6443/api/v1/nodes?limit=500 # GET on the apiserver's /api/v1/nodes, port 6443, at most 500 items by default
I0720 11:13:44.783593  101669 round_trippers.go:428] Request Headers: # request headers
I0720 11:13:44.783598  101669 round_trippers.go:432]     Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0720 11:13:44.783603  101669 round_trippers.go:432]     User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415 # client user-agent
I0720 11:13:44.792951  101669 round_trippers.go:447] Response Status: 200 OK in 9 milliseconds  # response status code; the response body follows
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2d16h   v1.19.8
k8s-slave1   Ready    <none>   2d16h   v1.19.8
k8s-slave2   Ready    <none>   2d15h   v1.19.8

After kubeadm init finishes bootstrapping the master node, it prints a hint similar to the following:

... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
... ...


This output tells us how to configure the kubeconfig file. After running these commands, kubectl on the master node can access the cluster using the information in $HOME/.kube/config. Configured this way, kubectl also holds administrator (root-equivalent) privileges over the entire cluster.

Many Kubernetes beginners ask at this point:

  • When kubectl accesses the cluster with this kubeconfig, how does kube-apiserver authenticate and authorize the requests coming from kubectl?
  • Why do requests from kubectl carry the highest, administrator-level privileges?

View the /root/.kube/config file:

[root@k8s-master week3]# ll ~/.kube/
total 8
drwxr-x--- 4 root root   35 Jul 17 19:09 cache
-rw------- 1 root root 5560 Jul 17 19:07 config

As mentioned earlier, apiserver authentication supports TLS client certificates, basic auth, tokens and more. The kubeconfig contents show that kubectl uses a TLS client certificate, i.e. a client-side certificate.

Base64-decode the certificate:

$ echo xxxxxxxxxxxxxx |base64 -d > kubectl.crt

Note:
[root@k8s-master week3]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxx # the CA data — the root certificate of the whole k8s cluster
    ...
    client-certificate-data: xxxx # the client certificate, issued by the root CA as a certificate/private-key pair
    ...
    client-key-data:   # the client private key
...

[root@k8s-master week3]# echo client-certificate-data:xxx  |base64 -d > kubectl.crt
[root@k8s-master week3]# cat kubectl.crt
-----BEGIN CERTIFICATE-----  # also a certificate file

# Decoding the key shows it is a private key file
[root@k8s-master week3]# echo  client-key-data:xxx| base64 -d 
...
-----END RSA PRIVATE KEY-----

# Decoding the root certificate shows it is identical to /etc/kubernetes/pki/ca.crt
echo certificate-authority-data: xxx | base64 -d
cat /etc/kubernetes/pki/ca.crt

This shows that, during authentication, the apiserver first verifies the certificate presented by kubectl against the CA certificate configured via --client-ca-file. The basic check:

$  openssl verify -CAfile /etc/kubernetes/pki/ca.crt kubectl.crt
kubectl.crt: OK

Note:
# Proving a certificate was issued by the cluster's root CA
[root@k8s-master week3]# openssl  verify -CAfile /etc/kubernetes/pki/ca.crt  kubectl.crt
kubectl.crt: OK
 
# To verify another certificate, point at its path, e.g.
[root@k8s-master week3]# openssl verify -CAfile /etc/kubernetes/pki/ca.crt  /etc/kubernetes/pki/apiserver-kubelet-client.crt 
/etc/kubernetes/pki/apiserver-kubelet-client.crt: OK

Besides verifying identity, the apiserver extracts the information the authorization stage needs. View the certificate as text:

$ openssl x509 -in kubectl.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4736260165981664452 (0x41ba9386f52b74c4)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Feb 10 07:33:39 2020 GMT
            Not After : Feb  9 07:33:40 2021 GMT
        Subject: O=system:masters, CN=kubernetes-admin
        ...

# Simulating how the apiserver extracts user identity and certificate information
[root@k8s-master week3]#  openssl x509 -in kubectl.crt  -text

After authentication succeeds, the CN (Common Name) assigned when the certificate was issued, kubernetes-admin, is extracted as the request's user name (User), and the O (Organization) field as the group the user belongs to (Group), group = system:masters. Both are passed on to the authorization module.
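This mapping can be sketched in a few lines: CN becomes the user name and each O entry becomes a group. A toy illustration over the subject string shown above (simplified; the real apiserver works on the parsed x509 structure, not on a text subject):

```python
def identity_from_subject(subject):
    """Split an x509 subject like 'O=system:masters, CN=kubernetes-admin'
    into the (user, groups) identity handed to the authorization module."""
    user, groups = None, []
    for part in subject.split(","):
        key, _, value = part.strip().partition("=")
        if key == "CN":
            user = value
        elif key == "O":
            groups.append(value)
    return user, groups

print(identity_from_subject("O=system:masters, CN=kubernetes-admin"))
# ('kubernetes-admin', ['system:masters'])
```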

Note: at this point authentication has succeeded; next comes authorization.

During init, kubeadm creates many default RBAC rules. The official RBAC documentation lists several default ClusterRoles:

To understand:
A default ClusterRole is a cluster-wide role that defines a set of resources and the permissions over them.
cluster-admin is a default ClusterRole with administrator permissions: any user or group bound to it gains the same permissions.

A default ClusterRoleBinding binds users and groups to a ClusterRole; the O (Organization) field extracted from the certificate becomes the requesting group, `group = system:masters`.

So the table below reads as:
a default ClusterRole defines a cluster-wide role with a set of permissions over resources, and
a default ClusterRoleBinding specifies which users and groups are bound to that ClusterRole and thereby gain its permissions.

(figure: kubeadm-default-clusterrole-list.png — default ClusterRole / ClusterRoleBinding list)

The first entry, the cluster-admin ClusterRoleBinding, binds the system:masters group — exactly the identity handed over by the authentication stage. Following the cluster-admin ClusterRoleBinding for system:masters, the answer surfaces.

Note: RBAC is role-based access control — it defines which users and groups may do what.

Let's inspect this binding:

$ kubectl describe clusterrolebinding cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind   Name            Namespace
  ----   ----            ---------
  Group  system:masters



We see that a ClusterRoleBinding named cluster-admin binds the cluster-admin ClusterRole to the system:masters Group, granting every user in system:masters the permissions of the cluster-admin role.

Now look at the concrete permissions of the cluster-admin role:

$ kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]


Note:
# List the ClusterRoles — cluster-wide roles that span namespaces
[root@k8s-master ~]# kubectl  get clusterrole
NAME                                                                   CREATED AT
admin                                                                  2021-07-17T11:06:43Z
cluster-admin                                                          2021-07-17T11:06:43Z
edit                                                                   2021-07-17T11:06:43Z
flannel                                                                2021-07-18T07:11:15Z
kubeadm:get-nodes                                                      2021-07-17T11:06:45Z
kubernetes-dashboard                                                   2021-07-17T11:34:01Z
...

# cluster-admin has superuser permissions over every cluster resource, which is why kubectl has full control
[root@k8s-master week3]# kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

Non-resource URLs cover operations such as checking cluster health.

(figure: how-kubectl-be-authorized.png — how kubectl gets authorized)

RBAC

Role-Based Access Control. Adding --authorization-mode=RBAC to the apiserver startup arguments enables RBAC; kubeadm-installed clusters have it enabled by default. See the official documentation.

Verify that it is enabled:

# On the master node, inspect the apiserver process
$ ps aux |grep apiserver

RBAC introduces four resource types:

  • Role

    A Role grants access within a single namespace only.

    ## Example: a Role named pod-reader with permission to read Pods in the demo namespace
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: demo
      name: pod-reader
    rules: # rules
    - apiGroups: [""] # "" indicates the core API group
      resources: ["pods"] # restricted to the pods resource
      verbs: ["get", "watch", "list"] # permitted verbs on Pods
      
    ## apiGroups: "","apps", "autoscaling", "batch" — see kubectl api-versions
    ## resources: "services", "pods","deployments"... — see kubectl api-resources
    ## verbs: "get", "list", "watch", "create", "update", "patch", "delete", "exec"
    
    ## https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/
    
    Note:
    [root@k8s-master week3]# kubectl api-versions # lists the API groups usable in apiGroups
    

    ClusterRole

    A ClusterRole can grant the same permissions as a Role, but cluster-wide.

    ## A ClusterRole named secret-reader that can read Secrets in every namespace
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      # "namespace" omitted since ClusterRoles are not namespaced
      name: secret-reader
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]
    
    # Subjects can be User, Group, or ServiceAccount
    
    # Difference from Role: no namespace (NAMESPACED is false) — it spans namespaces; everything else is the same
    [root@k8s-master week3]# kubectl api-resources |grep role
    clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
    clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
    rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
    roles                                          rbac.authorization.k8s.io      true         Role
    
  • RoleBinding

    Grants the permissions defined in a role to users and groups. A RoleBinding contains subjects (users, groups, or service accounts) and a reference to the role being granted. Use a RoleBinding for authorization within a namespace, and a ClusterRoleBinding for cluster-wide authorization.

    ## A RoleBinding granting the pod-reader Role to user jane, so jane can read all Pods in the demo namespace
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: read-pods
      namespace: demo
    subjects:  # the subjects being bound
    - kind: User   # can be User, Group, or ServiceAccount
      name: jane 
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role # can be Role or ClusterRole; with a ClusterRole, the permissions are still limited to the RoleBinding's namespace
      name: pod-reader # must match the name of the Role or ClusterRole you wish to bind to
      apiGroup: rbac.authorization.k8s.io
    

    Note: a RoleBinding can reference either a Role or a ClusterRole. When it references a ClusterRole, the subject's permissions are still confined to the RoleBinding's namespace; to grant access across namespaces, use a ClusterRoleBinding.

    ## A RoleBinding that binds user dave to the secret-reader ClusterRole. Although secret-reader is a cluster role, a RoleBinding is used, so dave's permissions are restricted to the development namespace
    apiVersion: rbac.authorization.k8s.io/v1
    # This role binding allows "dave" to read secrets in the "development" namespace.
    # You need to already have a ClusterRole named "secret-reader".
    kind: RoleBinding
    metadata:
      name: read-secrets
      # The namespace of the RoleBinding determines where the permissions are granted.
      # This only grants permissions within the "development" namespace.
      namespace: development
    subjects:
    - kind: User
      name: dave # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    - kind: ServiceAccount
      name: dave # Name is case sensitive
      namespace: luffy
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io
    
    # As above: although secret-reader is a cluster role, the RoleBinding narrows dave's permissions to the development namespace
    

    Consider a scenario: several namespaces are assigned to different administrators, each with identical permissions. You can define a single ClusterRole and use one RoleBinding per namespace to bind it to each administrator; otherwise every namespace would need its own Role plus a RoleBinding.

  • ClusterRoleBinding

    Grants permissions across namespaces.

    apiVersion: rbac.authorization.k8s.io/v1
    # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
    kind: ClusterRoleBinding
    metadata:
      name: read-secrets-global
    subjects:
    - kind: Group
      name: manager # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io
    
    
    

(figure: rbac-2.jpg — relationships among RBAC objects)
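Conceptually, the RBAC authorizer grants a request if any rule of any role bound to the requesting subject covers its apiGroup, resource and verb. A reduced sketch of that check (rules copied from the pod-reader Role above; binding resolution and resourceNames are omitted):

```python
# Rules of the pod-reader Role shown earlier
pod_reader_rules = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]},
]

def allowed(rules, api_group, resource, verb):
    """True if any rule covers the request; '*' acts as a wildcard."""
    def covers(values, value):
        return "*" in values or value in values
    return any(
        covers(r["apiGroups"], api_group)
        and covers(r["resources"], resource)
        and covers(r["verbs"], verb)
        for r in rules
    )

print(allowed(pod_reader_rules, "", "pods", "list"))    # reading Pods is granted
print(allowed(pod_reader_rules, "", "pods", "delete"))  # delete is not granted
```

This also makes the cluster-admin PolicyRule above easy to read: `*.*` resources with verb `[*]` matches every request.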

Authentication and Authorization for the kubelet

Inspect the kubelet process:

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-07-05 19:33:36 EDT; 1 day 12h ago
     Docs: https://kubernetes.io/docs/
 Main PID: 10622 (kubelet)
    Tasks: 24
   Memory: 60.5M
   CGroup: /system.slice/kubelet.service
           └─851 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf



View /etc/kubernetes/kubelet.conf and decode its certificate:

$ echo xxxxx |base64 -d >kubelet.crt
$ openssl x509 -in kubelet.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9059794385454520113 (0x7dbadafe23185731)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Feb 10 07:33:39 2020 GMT
            Not After : Feb  9 07:33:40 2021 GMT
        Subject: O=system:nodes, CN=system:node:master-1

Note:
# Inspect the client configuration files
[root@k8s-master week3]# ll /etc/kubernetes/
total 32
-rw------- 1 root root 5560 Jul 17 19:05 admin.conf
-rw------- 1 root root 5600 Jul 17 19:05 controller-manager.conf
-rw------- 1 root root 1928 Jul 17 19:06 kubelet.conf
drwxr-xr-x 2 root root  113 Jul 17 19:05 manifests
drwxr-xr-x 3 root root 4096 Jul 17 19:05 pki
-rw------- 1 root root 5548 Jul 17 19:05 scheduler.conf

[root@k8s-master week3]# cat /etc/kubernetes/kubelet.conf
[root@k8s-master week3]# echo certificate-authority-data: xxx  |base64 -d > kubelet.crt
# Note: this decodes certificate-authority-data, i.e. the cluster CA (Subject: CN=kubernetes below); decode client-certificate-data instead to see the kubelet's own identity

# Verify that it was issued by the root certificate
[root@k8s-master week3]# openssl  verify -CAfile /etc/kubernetes/pki/ca.crt kubelet.crt
kubelet.crt: OK
[root@k8s-master week3]# openssl x509 -in kubelet.crt  -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jul 17 11:05:37 2021 GMT
            Not After : Jul 15 11:05:37 2031 GMT
        Subject: CN=kubernetes
        Subject Public Key Info:
...
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----

# openssl option reference
-in        # path of the input file
-d         # decode/decrypt
-a/-base64 # use base64 encoding
-x509      # for CA self-signed certificates; not needed for non-self-signed certs
-key       # private key used when generating a request
-out       # output path for the certificate
-days      # validity period in days (default 365)
-text      # print in text form

Decoding the client certificate yields the content we expect:

Subject: O=system:nodes, CN=system:node:k8s-master

We know Kubernetes uses O as the Group for the request, so any permissions bound to this group should show up in a ClusterRoleBinding. Let's search for ClusterRoleBindings that bind the system:nodes group:

$ kubectl get clusterrolebinding -oyaml|grep -n10 system:nodes
178-    resourceVersion: "225"
179-    selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm%3Anode-autoapprove-certificate-rotation
180-    uid: b4303542-d383-4b62-a1e9-02f2cefa2c20
181-  roleRef:
182-    apiGroup: rbac.authorization.k8s.io
183-    kind: ClusterRole
184-    name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
185-  subjects:
186-  - apiGroup: rbac.authorization.k8s.io
187-    kind: Group
188:    name: system:nodes
189-- apiVersion: rbac.authorization.k8s.io/v1
190-  kind: ClusterRoleBinding
191-  metadata:
192-    creationTimestamp: "2021-06-06T02:39:46Z"
193-    managedFields:
194-    - apiVersion: rbac.authorization.k8s.io/v1
195-      fieldsType: FieldsV1
196-      fieldsV1:
197-        f:roleRef:
198-          f:apiGroup: {}

[root@k8s-master week3]# kubectl describe clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Name:         system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                                      Non-Resource URLs  Resource Names  Verbs
  ---------                                                      -----------------  --------------  -----
  certificatesigningrequests.certificates.k8s.io/selfnodeclient  []                 []         

The result is a little surprising: apart from system:certificates.k8s.io:certificatesigningrequests:selfnodeclient, no system:nodes-related binding turns up, contrary to our expectation. Digging into the documentation reveals the following:

Default ClusterRole              Default ClusterRoleBinding            Description
system:kube-scheduler            system:kube-scheduler user            Allows access to the resources required by the scheduler component.
system:volume-scheduler          system:kube-scheduler user            Allows access to the volume resources required by the kube-scheduler component.
system:kube-controller-manager   system:kube-controller-manager user   Allows access to the resources required by the controller manager component. The permissions required by individual controllers are detailed in the controller roles.
system:node                      None                                  Allows access to resources required by the kubelet, including read access to all secrets, and write access to all pod status objects. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.
system:node-proxier              system:kube-proxy user                Allows access to the resources required by the kube-proxy component.

In short: the system:node role used to grant the kubelet access to necessary resources, including read access to all Secrets and write access to Pod status. Since v1.8, the Node authorizer and the NodeRestriction admission plugin are recommended in its place.

We are on 1.19; check the authorization configuration:

$ ps axu|grep apiserver
kube-apiserver --authorization-mode=Node,RBAC  --enable-admission-plugins=NodeRestriction


The official description of the Node authorizer:

Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.

In future releases, the node authorizer may add or remove permissions to ensure kubelets have the minimal set of permissions required to operate correctly.

In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>


Summary: kubelet and kubectl are authorized via different paths — the kubelet uses the Node authorization mode, while kubectl goes through RBAC.

Service Account and Kubernetes API Calls

Authentication:

  • certificates
  • JWT tokens

Authorization:

  • RBAC
  • Node

As discussed above, authentication can use certificates or a ServiceAccount. When building on top of Kubernetes, ServiceAccount + RBAC is the usual choice. Recall how we accessed the dashboard earlier:

## Create a ServiceAccount named admin and grant it the permissions of the
## cluster-admin ClusterRole
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin # the cluster administrator role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard


Note:
[root@k8s-master week3]# kubectl get ns
NAME                   STATUS   AGE
default                Active   2d21h
kube-node-lease        Active   2d21h
kube-public            Active   2d21h
kube-system            Active   2d21h
kubernetes-dashboard   Active   2d21h
luffy                  Active   2d21h

[root@k8s-master week3]# kubectl -n kubernetes-dashboard get serviceaccounts # can be abbreviated as sa
NAME                   SECRETS   AGE
admin                  1         2d21h
default                1         2d21h
kubernetes-dashboard   1         

[root@k8s-master week3]# kubectl -n kubernetes-dashboard get sa admin -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-07-17T11:34:39Z"
  name: admin
  namespace: kubernetes-dashboard
  resourceVersion: "4976"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/serviceaccounts/admin
  uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0
secrets:
- name: admin-token-j6gs8

# The admin-token below is the secret referenced above
[root@k8s-master week3]# kubectl -n kubernetes-dashboard get secrets
NAME                               TYPE                                  DATA   AGE
admin-token-j6gs8                  kubernetes.io/service-account-token   3      2d21h
default-token-xvd2w                kubernetes.io/service-account-token   3      2d21h
kubernetes-dashboard-certs         Opaque                                0      2d21h
kubernetes-dashboard-csrf          Opaque                                1      2d21h
kubernetes-dashboard-key-holder    Opaque                                2      2d21h
kubernetes-dashboard-token-gszzn   kubernetes.io/service-account-token   3      2d21h

Let's take a look:

$ kubectl -n kubernetes-dashboard get sa admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-04-01T11:59:21Z"
  name: admin
  namespace: kubernetes-dashboard
  resourceVersion: "1988878"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/serviceaccounts/admin
  uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f
secrets:
- name: admin-token-lfsrf


Notice that the ServiceAccount has a secret named admin-token-lfsrf bound by default; inspect the secret:

$ kubectl -n kubernetes-dashboard describe secret admin-token-lfsrf
Name:         admin-token-lfsrf
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f

Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  4 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZW1vIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFkbWluLXRva2VuLWxmc3JmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjM5ZWNjM2UtNzRkOS0xMWVhLWE1OWItMDAwYzI5ZGZkNzNmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlbW86YWRtaW4ifQ.ffGCU4L5LxTsMx3NcNixpjT6nLBi-pmstb4I-W61nLOzNaMmYSEIwAaugKMzNR-2VwM14WbuG04dOeO67niJeP6n8-ALkl-vineoYCsUjrzJ09qpM3TNUPatHFqyjcqJ87h4VKZEqk2qCCmLxB6AGbEHpVFkoge40vHs56cIymFGZLe53JZkhu3pwYuS4jpXytV30Ad-HwmQDUu_Xqcifni6tDYPCfKz2CZlcOfwqHeGIHJjDGVBKqhEeo8PhStoofBU6Y4OjObP7HGuTY-Foo4QindNnpp0QU6vSb7kiOiQ4twpayybH8PTf73dtdFt46UF6mGjskWgevgolvmO8A


注:
# 获得登录token
# 注意到serviceaccount上默认绑定了一个名为admin-token-j6gs8的secret,我们查看一下secret
[root@k8s-master week3]# kubectl -n kubernetes-dashboard describe secret admin-token-j6gs8
Name:         admin-token-j6gs8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNmR2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1qNmdzOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjhlMjRjMDQyLTVhMmMtNDliYy05ZjkyLWJmZGY3MmVhZjZjMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.WJm16yoTdmN1srZ6M4o__6mt7e_NUm5_Gl8H3oblsohA_RVr5T9ZKQEciNK63b3acZO2gxo0bjX8zbd_mnAQH4LBJ7XaiMJxbFvblbXC3DEN8aawNO_8J8twG6pN3Hanhk8gUFCHmd8Lj8k5Q59BDW1yaIv05u6LTCSQwY1zoFwup-Fk2-LEFcLgzyWTtN3SJG_OTkM1XvaSMTGR-KJi_KTg29nXkcrCPuKAEq9QQzFYeulfZt0QWknF67Bn8OyoKSY1o6m1SrsHHneSeT2Rebww-qjd-9rCwCj7apGkSoyLFByrSTKlgX0nv43yaYuHsPIBP4msBx_iZsaq1-APHw
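上面的 token 是一个 JWT,由 header.payload.signature 三段 base64url 编码拼接而成,本地用 base64 就可以解码查看其中的 ServiceAccount 声明。下面用一个仅含 iss 字段的示例 payload 演示解码过程(payload 为假设值,实际操作时替换为 token 中以点分隔的第二段):

```shell
# payload 为 base64url 编码且不带 '=' 填充,这里是一个仅含 iss 字段的示例值
payload='eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0'
# base64url -> 标准 base64(真实 token 中可能出现 '-' 和 '_')
payload=$(echo "$payload" | tr '_-' '/+')
# 补齐 '=' 填充后再解码
m=$(( ${#payload} % 4 ))
if [ "$m" -ne 0 ]; then
  payload="${payload}$(printf '=%.0s' $(seq $((4 - m))))"
fi
echo "$payload" | base64 -d
# 输出: {"iss":"kubernetes/serviceaccount"}
```

解码完整 token 的 payload 后,即可看到该 token 所属的 namespace、secret 名称与 serviceaccount 名称等声明。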

![RBAC示意图](images/rbac.jpg)

只允许访问luffy命名空间的pod资源:

$ cat luffy-admin-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: luffy-pods-admin
  namespace: luffy

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: luffy
  name: pods-reader-writer
rules: # 规则
- apiGroups: [""] # "" indicates the core API group 分组,指定一个空白,表示不限制某一个apiGroups
  resources: ["pods"] # 限制的资源类型是pods
  verbs: ["*"] # "*"表示对pod的全部操作权限(get、watch、list、create、delete等)
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pods-reader-writer
  namespace: luffy
subjects:
- kind: ServiceAccount   #这里可以是User,Group,ServiceAccount
  name: luffy-pods-admin
  namespace: luffy
roleRef:
  kind: Role #这里可以是Role或者ClusterRole;若引用的是ClusterRole,权限也仅在RoleBinding所在的命名空间内生效
  name: pods-reader-writer
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master week3]# vim luffy-admin-rbac.yaml
[root@k8s-master week3]# kubectl create -f luffy-admin-rbac.yaml
serviceaccount/luffy-pods-admin created
role.rbac.authorization.k8s.io/pods-reader-writer created
rolebinding.rbac.authorization.k8s.io/pods-reader-writer created
[root@k8s-master week3]# kubectl -n luffy  get sa
NAME               SECRETS   AGE
default            1         5d
luffy-pods-admin   1         2d2h

[root@k8s-master week3]# kubectl  -n luffy  get secrets 
NAME                           TYPE                                  DATA   AGE
default-token-pmv6k            kubernetes.io/service-account-token   3      5d
luffy-pods-admin-token-ffhfh   kubernetes.io/service-account-token   3      2d2h
myblog                         Opaque                                2      4d22h

[root@k8s-master week3]# kubectl -n luffy describe secrets luffy-pods-admin-token-ffhfh 
Name:         luffy-pods-admin-token-ffhfh
Namespace:    luffy
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: luffy-pods-admin
              kubernetes.io/service-account.uid: b830e127-ef77-40ba-a242-bf844b6aa42c

Type:  kubernetes.io/service-account-token

Data
====
namespace:  5 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNmR2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1wb2RzLWFkbWluLXRva2VuLWZmaGZoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1ZmZ5LXBvZHMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiODMwZTEyNy1lZjc3LTQwYmEtYTI0Mi1iZjg0NGI2YWE0MmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVmZnk6bHVmZnktcG9kcy1hZG1pbiJ9.Oi6phpFhwGCF5IT9ep10j0c_QUWK5vPNCEmdd36xsllm2se8IBnWKbm1R9jl8p4Se5uNnTgOAHEagm3rkAv6uUG8ERyz19M-4WcrEilL7WznqAvNvlpW2-7wSZ1_rFuGbizoXlH5Lwyvkj3odA1y1yBNAl2P2ZyfQtwOMEpPHTaF1LFjlFW478NecfkQgxmElk9FLT6wjcxbN1U85-P4RQZ6r_-PHsvMNtlFl62vvDC4ka8bVw0R2TYR3-zbZQFD4QVCU2EzoGFK6FmnJjOHwXGarNTB3aVeLiQ81NWA-8TNJQFsmZOwYuYPiJ9EjdB-yC5CBAnHZjaohc5qkjNLow
ca.crt:     1066 bytes

# 拿到这个token访问仪表盘,对比看到的内容是否与yaml文件描述的权限一致
# yaml中授予的权限仅限于luffy命名空间下的pod资源(verbs为"*",即全部操作)
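如果需要授予跨命名空间的权限,可以换用 ClusterRole/ClusterRoleBinding,结构与上面的 Role/RoleBinding 基本一致(下面的资源名称均为示意):

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pods-reader   # ClusterRole不属于任何命名空间
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pods-reader-global
subjects:
- kind: ServiceAccount
  name: luffy-pods-admin
  namespace: luffy
roleRef:
  kind: ClusterRole
  name: pods-reader
  apiGroup: rbac.authorization.k8s.io
```

绑定后,luffy-pods-admin 这个 ServiceAccount 就可以读取所有命名空间下的 pod 信息。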

演示权限:

$ kubectl -n luffy describe secrets luffy-pods-admin-token-prr25
...
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBtQUZfRl8ycC03TTBYaUUwTnJVZGpvQWU0cXZ5M2FFbjR2ZjkzZVcxOE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1hZG1pbi10b2tlbi1wcnIyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJsdWZmeS1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFhZDA0MTU3LTliNzMtNDJhZC1hMGU4LWVmOTZlZDU3Yzg1ZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsdWZmeTpsdWZmeS1hZG1pbiJ9.YWckylE5wlKITKrVltXY4VPKvZP9ar5quIT5zq9N-0_FnDkLIBX7xOyFvZA5Wef0wSFSZe3e9FwrO1UbPsmK7cZn74bhH8cNdoH_YVbIVT3-6tIOlCA_Bc8YypGE1gl-ZvLOIPV7WnRQsWpWtZtqfKBSkwLAHgWoxcx_d1bOcyTOdPmsW224xcBxjYwi6iRUtjTJST0LzOcAOCPDZq6-lqYUwnxLO_afxwg71BGX4etE48Iny8TxSEIs1VJRahoabC7hVOs17ujEm5loTDSpfuhae51qSDg8xeYwRHdM42aLUmc-wOvBWauHa5EHbH9rWPAnpaGIwF8QvnLszqp4QQ
...
$ curl  -k -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNm2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1wb2RzLWFkbWluLXRva2VuLWZmaGZoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1ZmZ5LXBvZHMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiODMwZTEyNy1lZjc3LTQwYmEtYTI0Mi1iZjg0NGI2YWE0MmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVmZnk6bHVmZnktcG9kcy1hZG1pbiJ9.Oi6phpFhwGCF5IT9ep10j0c_QUWK5vPNCEmdd36xsllm2se8IBnWKbm1R9jl8p4Se5uNnTgOAHEagm3rkAv6uUG8ERyz19M-4WcrEilL7WznqAvNvlpW2-7wSZ1_rFuGbizoXlH5Lwyvkj3odA1y1yBNAl2P2ZyfQtwOMEpPHTaF1LFjlFW478NecfkQgxmElk9FLT6wjcxbN1U85-P4RQZ6r_-PHsvMNtlFl62vvDC4ka8bVw0R2TYR3-zbZQFD4QVCU2EzoGFK6FmnJjOHwXGarNTB3aVeLiQ81NWA-8TNJQFsmZOwYuYPiJ9EjdB-yC5CBAnHZjaohc5qkjNLow" https://10.0.1.5:6443/api/v1/namespaces/luffy/pods?limit=500

# https://10.0.1.5:6443/api/v1/nodes
# 可以拿到数据,若换成别的namespace拿不到资源
# Bearer后面只有一个空格,多了少了都不行
[root@k8s-master week3]# kubectl -n luffy  get po -v=7
创建用户认证授权的kubeconfig文件

签发证书对:

# 生成私钥
$ openssl genrsa -out luffy.key 2048

# 生成证书请求文件
$ openssl req -new -key luffy.key -out luffy.csr -subj "/O=admin:luffy/CN=luffy-admin"
# /O是组织 
# /CN是名称

# 证书拓展属性
$ cat extfile.conf
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

# 生成luffy.crt证书
$ openssl x509 -req -in luffy.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -sha256 -out luffy.crt -extensions v3_ca -extfile extfile.conf -days 3650
注:
$ mkdir cret && cd cret/

# 生成私钥
[root@k8s-master cret]# openssl genrsa -out luffy.key 2048
Generating RSA private key, 2048 bit long modulus
................+++
....................................+++
e is 65537 (0x10001)

# 生成证书请求文件
[root@k8s-master cret]# openssl req -new -key luffy.key -out luffy.csr -subj "/O=admin:luffy/CN=luffy-admin"

# 证书拓展属性
[root@k8s-master cret]# vim extfile.conf

# 生成luffy.crt证书
[root@k8s-master cret]# openssl x509 -req -in luffy.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -sha256 -out luffy.crt -extensions v3_ca -extfile extfile.conf -days 3650
Signature ok
subject=/O=admin:luffy/CN=luffy-admin
Getting CA Private Key

[root@k8s-master cret]# ll
total 16
-rw-r--r-- 1 root root   95 Jul 22 20:23 extfile.conf
-rw-r--r-- 1 root root 1074 Jul 22 20:23 luffy.crt
-rw-r--r-- 1 root root  924 Jul 22 20:23 luffy.csr
-rw-r--r-- 1 root root 1679 Jul 22 20:23 luffy.key

[root@k8s-master cret]# openssl x509 -in luffy.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            c6:13:7e:53:21:80:e4:3d
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jul 22 12:23:47 2021 GMT
            Not After : Jul 20 12:23:47 2031 GMT
        Subject: O=admin:luffy, CN=luffy-admin
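如果手边没有 k8s 集群的 CA,也可以先自造一个测试 CA,把同样的签发流程完整演练一遍(仅为演示:ca.crt/ca.key 为临时自签生成,模拟 /etc/kubernetes/pki/ 下的集群 CA,不要用于生产):

```shell
workdir=$(mktemp -d) && cd "$workdir"
# 1. 生成一个测试用的自签名CA
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -subj "/CN=kubernetes" -days 365 -out ca.crt
# 2. 与正文相同的客户端证书签发流程
openssl genrsa -out luffy.key 2048 2>/dev/null
openssl req -new -key luffy.key -out luffy.csr -subj "/O=admin:luffy/CN=luffy-admin"
cat > extfile.conf <<'EOF'
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
openssl x509 -req -in luffy.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -sha256 -out luffy.crt -extensions v3_ca -extfile extfile.conf -days 3650 2>/dev/null
# 3. 验证签发结果:证书链校验通过,subject与csr一致
openssl verify -CAfile ca.crt luffy.crt
openssl x509 -in luffy.crt -noout -subject
```

在真实集群上操作时,只需把 ca.crt/ca.key 换成 /etc/kubernetes/pki/ 下的集群 CA 即可。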

配置kubeconfig文件:

# 创建kubeconfig文件,指定集群名称和地址
$ kubectl config set-cluster luffy-cluster --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://172.21.51.143:6443 --kubeconfig=luffy.kubeconfig

# 为kubeconfig文件添加认证信息
$ kubectl config set-credentials luffy-admin --client-certificate=luffy.crt --client-key=luffy.key --embed-certs=true --kubeconfig=luffy.kubeconfig

# 为kubeconfig添加上下文配置
$ kubectl config set-context luffy-context --cluster=luffy-cluster --user=luffy-admin --kubeconfig=luffy.kubeconfig

# 设置默认的上下文
$ kubectl config use-context luffy-context --kubeconfig=luffy.kubeconfig
注:
# 创建kubeconfig文件,指定集群名称和地址
[root@k8s-master cret]# kubectl config set-cluster luffy-cluster --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://10.0.1.5:6443 --kubeconfig=luffy.kubeconfig
Cluster "luffy-cluster" set.
[root@k8s-master cret]# cat luffy.kubeconfig 

# 为kubeconfig文件添加认证信息
[root@k8s-master cret]# kubectl config set-credentials luffy-admin --client-certificate=luffy.crt --client-key=luffy.key --embed-certs=true --kubeconfig=luffy.kubeconfig

# 为kubeconfig添加上下文配置
[root@k8s-master cret]# kubectl config set-context luffy-context --cluster=luffy-cluster --user=luffy-admin --kubeconfig=luffy.kubeconfig
Context "luffy-context" created.

# 设置默认的上下文
[root@k8s-master cret]# kubectl config use-context luffy-context --kubeconfig=luffy.kubeconfig
Switched to context "luffy-context".

# 最终生成这个文件
[root@k8s-master cret]# ls luffy.kubeconfig 
luffy.kubeconfig
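四条命令执行完后,luffy.kubeconfig 的内容大致如下(证书与私钥因为 --embed-certs 以 base64 内嵌在文件中,这里用占位符代替):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64编码的ca.crt>
    server: https://10.0.1.5:6443
  name: luffy-cluster
contexts:
- context:
    cluster: luffy-cluster
    user: luffy-admin
  name: luffy-context
current-context: luffy-context
users:
- name: luffy-admin
  user:
    client-certificate-data: <base64编码的luffy.crt>
    client-key-data: <base64编码的luffy.key>
```

kubectl 发起请求时会用 users 中的客户端证书做认证,证书 subject 中的 CN 即用户名,O 即用户组。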

验证:

# 设置当前kubectl使用的config文件
$ export KUBECONFIG=luffy.kubeconfig

# 当前不具有任何权限,因为没有为用户或者组设置RBAC规则
$ kubectl get po
Error from server (Forbidden): pods is forbidden: User "luffy-admin" cannot list resource "pods" in API group "" in the namespace "default"

# 设置当前kubectl使用的config文件
[root@k8s-master cret]#  export KUBECONFIG=/root/2021/week3/cret/luffy.kubeconfig

# 当前不具有任何权限,因为没有为用户或者组设置RBAC规则
[root@k8s-master cret]# kubectl get po
Error from server (Forbidden): pods is forbidden: User "luffy-admin" cannot list resource "pods" in API group "" in the namespace "default"

为luffy用户添加luffy命名空间访问权限:

# 定义role,具有luffy命名空间的所有权限
$ cat luffy-admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: luffy
  name: luffy-admin
rules:
- apiGroups: [""] # "" 指定核心 API 组
  resources: ["*"]
  verbs: ["*"]
  
#定义rolebinding,为luffy用户绑定luffy-admin这个role,这样luffy用户就有操作luffy命名空间的所有权限
$ cat luffy-admin-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: luffy-admin
  namespace: luffy
subjects:
- kind: User
  name: luffy-admin # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role #this must be Role or ClusterRole
  name: luffy-admin # 这里的名称必须与你想要绑定的 Role 或 ClusterRole 名称一致
  apiGroup: rbac.authorization.k8s.io
# 取消变量
[root@k8s-master cret]# export KUBECONFIG=

# 定义role,具有luffy命名空间的所有权限
[root@k8s-master cret]# vim luffy-admin-role.yaml
[root@k8s-master cret]# kubectl create -f luffy-admin-role.yaml 
role.rbac.authorization.k8s.io/luffy-admin created

# 定义rolebinding,为luffy用户绑定luffy-admin这个role,这样luffy用户就有操作luffy命名空间的所有权限
[root@k8s-master cret]# vim luffy-admin-rolebinding.yaml
[root@k8s-master cret]# kubectl create -f luffy-admin-rolebinding.yaml 
rolebinding.rbac.authorization.k8s.io/luffy-admin created
# 添加环境变量
[root@k8s-master cret]# export KUBECONFIG=/root/2021/week3/cret/luffy.kubeconfig
# 这时就可以查看luffy命名空间了
[root@k8s-master cret]# kubectl get po -n luffy
NAME                      READY   STATUS    RESTARTS   AGE
myblog-6759fcc46f-7jgtf   1/1     Running   23         35h
myblog-6759fcc46f-lpp9t   1/1     Running   13         35h
myblog-6759fcc46f-qckrp   1/1     Running   12         35h
mysql-58d95d459c-jj4sx    1/1     Running   2          4d5h
通过HPA实现业务应用的动态扩缩容
HPA控制器介绍

当系统资源过高的时候,我们可以使用如下命令来实现 Pod 的扩缩容功能

$ kubectl -n luffy scale deployment myblog --replicas=2

但是这个过程是手动操作的。在实际项目中,我们需要的是能够自动感知负载变化并自动扩缩容的机制。Kubernetes 也为我们提供了这样的一个资源对象:Horizontal Pod Autoscaling(Pod 水平自动伸缩),简称HPA

![HPA示意图](images/hpa.png)

基本原理:HPA 通过监控分析控制器控制的所有 Pod 的负载变化情况来确定是否需要调整 Pod 的副本数量

HPA的实现有两个版本:

  • autoscaling/v1,只包含了根据CPU指标的检测,稳定版本
  • autoscaling/v2beta1,支持根据memory或者用户自定义指标进行伸缩

如何获取Pod的监控数据?

  • k8s 1.8以下:使用heapster,1.11版本完全废弃
  • k8s 1.8以上:使用metric-server

思考:为什么之前用 heapster ,现在废弃了项目,改用 metric-server ?

heapster时代,apiserver 会直接将metric请求通过apiserver proxy 的方式转发给集群内的 heapster 服务,采用这种 proxy 方式是有问题的:

  • http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
    
  • proxy只是代理请求,一般用于问题排查,不够稳定,且版本不可控

  • heapster的接口不能像apiserver一样有完整的鉴权以及client集成

  • pod 的监控数据是核心指标(HPA调度),应该和 pod 本身拥有同等地位,即 metric应该作为一种资源存在,如metrics.k8s.io 的形式,称之为 Metric Api

于是官方从 1.8 版本开始逐步废弃 heapster,并提出了上边 Metric api 的概念,而 metrics-server 就是这种概念下官方的一种实现,用于从 kubelet获取指标,替换掉之前的 heapster。

Metrics Server 可以通过标准的 Kubernetes API 把监控数据暴露出来,比如获取某一Pod的监控数据:

https://172.21.51.143:6443/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name>

# https://172.21.51.143:6443/api/v1/namespaces/luffy/pods?limit=500
注:
[root@k8s-master week3]# kubectl  -n luffy get pods -v=7

目前的采集流程:

![metrics-server采集流程](images/k8s-hpa-ms.png)

Metric Server

官方介绍

...
Metric server collects metrics from the Summary API, exposed by Kubelet on each node.

Metrics Server registered in the main API server through Kubernetes aggregator, which was introduced in Kubernetes 1.7
...
安装

官方代码仓库地址:https://github.com/kubernetes-sigs/metrics-server

Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:

  • --kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
  • --kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
  • --requestheader-client-ca-file - Specify a root certificate bundle for verifying client certificates on incoming requests.
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml

修改args参数:

...
130       containers:
131       - args:
132         - --cert-dir=/tmp
133         - --secure-port=4443
134         - --kubelet-insecure-tls  # 添加这一行参数
135         - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
136         - --kubelet-use-node-status-port
137         image: willdockerhub/metrics-server:v0.4.4 #镜像地址换成dockerhub上的地址
138         imagePullPolicy: IfNotPresent
...

执行安装:

$ kubectl apply -f components.yaml

$ kubectl -n kube-system get pods

$ kubectl top nodes
注:
[root@k8s-master week3]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml
# 取消变量
[root@k8s-master week3]# unset KUBECONFIG
[root@k8s-master week3]# kubectl apply  -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

# 查看pod
[root@k8s-master week3]# kubectl -n kube-system get po|grep metrics
metrics-server-7dbbc69d95-j6gkd      0/1     ContainerCreating   0          105s

# 查看日志是否有报错没有即OK
[root@k8s-master week3]# kubectl -n kube-system logs -f metrics-server-7dbbc69d95-j6gkd

# 查看node和pod的监控数据
[root@k8s-master week3]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   530m         3%     1217Mi          15%       
k8s-slave1   248m         1%     770Mi           9%        
k8s-slave2   218m         1%     507Mi           6%   
[root@k8s-master week3]# kubectl top pods -n luffy 
NAME                      CPU(cores)   MEMORY(bytes)   
myblog-6759fcc46f-7jgtf   4m           72Mi            
myblog-6759fcc46f-lpp9t   3m           71Mi            
myblog-6759fcc46f-qckrp   2m           71Mi            
mysql-58d95d459c-jj4sx    6m           227Mi        
kubelet的指标采集

无论是 heapster还是 metric-server,都只是数据的中转和聚合,两者都是调用的 kubelet 的 api 接口获取的数据,而 kubelet 代码中实际采集指标的是 cadvisor 模块,你可以在 node 节点访问 10250 端口获取监控数据:

  • Kubelet Summary metrics: https://127.0.0.1:10250/metrics,暴露 node、pod 汇总数据
  • Cadvisor metrics: https://127.0.0.1:10250/metrics/cadvisor,暴露 container 维度数据
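kubelet 的这两个接口返回的都是 Prometheus 文本格式,每行形如 `指标名{标签} 值`。下面用一行假设的 cadvisor 样本演示如何提取数值(指标名是真实存在的 cadvisor 指标,标签值与数值为假设):

```shell
# 一行假设的 cadvisor 样本:容器内存使用量(字节)
line='container_memory_usage_bytes{namespace="luffy",pod="myblog-6759fcc46f-7jgtf"} 75497472'
# Prometheus 文本格式中,标签花括号后以空格分隔的最后一个字段是数值
value=$(echo "$line" | awk '{print $NF}')
echo "memory usage: $(( value / 1024 / 1024 ))Mi"
# 输出: memory usage: 72Mi
```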

调用示例:

$ curl -k  -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InhXcmtaSG5ZODF1TVJ6dUcycnRLT2c4U3ZncVdoVjlLaVRxNG1wZ0pqVmcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1xNXBueiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImViZDg2ODZjLWZkYzAtNDRlZC04NmZlLTY5ZmE0ZTE1YjBmMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.iEIVMWg2mHPD88GQ2i4uc_60K4o17e39tN0VI_Q_s3TrRS8hmpi0pkEaN88igEKZm95Qf1qcN9J5W5eqOmcK2SN83Dd9dyGAGxuNAdEwi0i73weFHHsjDqokl9_4RGbHT5lRY46BbIGADIphcTeVbCggI6T_V9zBbtl8dcmsd-lD_6c6uC2INtPyIfz1FplynkjEVLapp_45aXZ9IMy76ljNSA8Uc061Uys6PD3IXsUD5JJfdm7lAt0F7rn9SdX1q10F2lIHYCMcCcfEpLr4Vkymxb4IU4RCR8BsMOPIO_yfRVeYZkG4gU2C47KwxpLsJRrTUcUXJktSEPdeYYXf9w" https://localhost:10250/metrics
注:
# 获取token的值
[root@k8s-master week3]# kubectl -n kubernetes-dashboard describe secrets admin-token-j6gs8 
Name:         admin-token-j6gs8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      xxxxxxxxx
[root@k8s-master week3]# curl -k  -H "Authorization: Bearer (token的值)" https://localhost:10250/metrics

[root@k8s-master week3]# curl -k  -H "Authorization: Bearer (token的值)" https://localhost:10250/metrics/cadvisor

kubelet虽然提供了 metric 接口,但实际监控逻辑由内置的cAdvisor模块负责,早期的时候,cadvisor是单独的组件,从k8s 1.12开始,cadvisor 监听的端口在k8s中被删除,所有监控数据统一由Kubelet的API提供。

cadvisor获取指标时实际调用的是 runc/libcontainer库,而libcontainer是对 cgroup文件 的封装,即 cadvsior也只是个转发者,它的数据来自于cgroup文件。

cgroup文件中的值是监控数据的最终来源,如

  • mem usage的值,

    • 对于docker容器来讲,来源于/sys/fs/cgroup/memory/docker/[containerId]/memory.usage_in_bytes

    • 对于pod来讲,/sys/fs/cgroup/memory/kubepods/besteffort/pod[podId]/memory.usage_in_bytes或者

      /sys/fs/cgroup/memory/kubepods/burstable/pod[podId]/memory.usage_in_bytes

  • 如果没限制内存,Limit = machine_mem,否则来自于
    /sys/fs/cgroup/memory/docker/[id]/memory.limit_in_bytes

  • 内存使用率 = memory.usage_in_bytes/memory.limit_in_bytes
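按上面的公式,用两个假设的 cgroup 读数演示使用率的计算:

```shell
usage=157286400   # memory.usage_in_bytes,假设为150Mi
limit=536870912   # memory.limit_in_bytes,假设为512Mi
# 整数百分比:150Mi / 512Mi ≈ 29%
pct=$(( usage * 100 / limit ))
echo "memory utilization: ${pct}%"
# 输出: memory utilization: 29%
```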

Metrics数据流:

![Metrics数据流](images/hap-flow.webp)

思考:

Metrics Server是独立的一个服务,只能服务内部实现自己的api,是如何做到通过标准的kubernetes 的API格式暴露出去的?

kube-aggregator

kube-aggregator聚合器及Metric-Server的实现

kube-aggregator是对 apiserver 的api的一种拓展机制,它允许开发人员编写一个自己的服务,并把这个服务注册到k8s的api里面,即扩展 API 。

![kube-aggregation示意图](images/kube-aggregation.webp)

定义一个APIService对象:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.luffy.k8s.io
spec:
  group: luffy.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: service-A       # 必须https访问
    namespace: luffy
    port: 443   
  version: v1beta1
  versionPriority: 100

k8s会自动帮我们代理如下url的请求:

proxyPath := "/apis/" + apiService.Spec.Group + "/" + apiService.Spec.Version

即:https://172.21.51.143:6443/apis/luffy.k8s.io/v1beta1/xxxx 会被转发到我们的 service-A 服务中,service-A 只需要实现 https://service-A/apis/luffy.k8s.io/v1beta1/xxxx 即可。
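用 shell 模拟这段路径拼接逻辑(group/version 取自上面的 APIService 示例):

```shell
group="luffy.k8s.io"
version="v1beta1"
# 对应上面的 Go 代码:proxyPath := "/apis/" + group + "/" + version
proxyPath="/apis/${group}/${version}"
echo "$proxyPath"
# 输出: /apis/luffy.k8s.io/v1beta1
```

apiserver 会把该前缀下的所有请求代理到 APIService 中声明的 service。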

看下metric-server的实现:

$ kubectl get apiservice 
NAME                       SERVICE                      AVAILABLE                      
v1beta1.metrics.k8s.io   kube-system/metrics-server		True

$ kubectl get apiservice v1beta1.metrics.k8s.io -oyaml
...
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
...

$ kubectl -n kube-system get svc metrics-server
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
metrics-server   ClusterIP   10.110.111.146   <none>        443/TCP   11h

$ curl -k  -H "Authorization: Bearer xxxx" https://10.110.111.146
{
  "paths": [
    "/apis",
    "/apis/metrics.k8s.io",
    "/apis/metrics.k8s.io/v1beta1",
    "/healthz",
    "/healthz/healthz",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/metrics",
    "/openapi/v2",
    "/version"
  ]

$ kubectl -n luffy  top pods -v=6
# https://172.21.51.143:6443/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name>
# 
$ curl -k  -H "Authorization: Bearer xxxx" https://10.110.111.146/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods/myblog-5d9ff54d4b-4rftt

$ curl -k  -H "Authorization: Bearer xxxx" https://172.21.51.143:6443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods/myblog-5d9ff54d4b-4rftt
注:
[root@k8s-master week3]# kubectl get apiservices|grep kube-system
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        59m

[root@k8s-master week3]# kubectl get apiservices.apiregistration.k8s.io -oyaml
...
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
...

[root@k8s-master week3]# kubectl -n kube-system get svc metrics-server
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
metrics-server   ClusterIP   10.100.57.78   <none>        443/TCP   65m

[root@k8s-master week3]# kubectl -n luffy  top pods -v=6
I0722 22:24:40.311311  101235 loader.go:375] Config loaded from file:  /root/.kube/config
I0722 22:24:40.323724  101235 round_trippers.go:444] GET https://10.0.1.5:6443/api?timeout=32s 200 OK in 10 milliseconds
I0722 22:24:40.326095  101235 round_trippers.go:444] GET https://10.0.1.5:6443/apis?timeout=32s 200 OK in 1 milliseconds
I0722 22:24:40.330018  101235 round_trippers.go:444] GET https://10.0.1.5:6443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods 200 OK in 2 milliseconds
NAME                      CPU(cores)   MEMORY(bytes)   
myblog-6759fcc46f-7jgtf   3m           71Mi            
myblog-6759fcc46f-x9p88   2m           71Mi            
myblog-6759fcc46f-xqfh4   3m           71Mi            
mysql-58d95d459c-jj4sx    3m           227Mi           
[root@k8s-master week3]# curl -k https://10.0.1.5:6443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods

# 实际上访问上面的地址和下面这个地址效果是一样的,请求经由api-server代理,最终转发到下面这个service地址
[root@k8s-master week3]# curl -k https://10.100.57.78:443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods

[root@k8s-master week3]#kubectl -n kubernetes-dashboard describe secrets admin-token-j6gs8 
Name:         admin-token-j6gs8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:   xxxx
# 加上token再去访问,这时候就能获得mysql,myblog的cpu以及内存的值
[root@k8s-master week3]# curl -k  -H "Authorization: (Bearer token的值)" https://10.100.57.78:443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods
总结如下:
metric-server是什么?
集群核心监控数据的聚合器。通俗地说,就是存储了集群和节点的监控数据,并且提供了API以供分析使用。

# 作用
metrics-server的主要作用是为kube-scheduler、HorizontalPodAutoscaler等k8s核心组件,以及kubectl top命令和Dashboard等组件提供数据来源。
除此之外,也可以自定义metrics-server,添加一些其他的监控指标,比如比较流行的k8s-prometheus-adapter
HPA实践
基于CPU和内存的动态伸缩

![HPA示意图](images/hpa.png)

创建hpa对象:

# 方式一
$ cat hpa-myblog.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler  # pod的水平自动扩展
metadata:
  name: hpa-myblog
  namespace: luffy
spec:
  maxReplicas: 3  # 最大扩容到3个
  minReplicas: 1  # 最小一个
  scaleTargetRef: #目标规模参考
    apiVersion: apps/v1
    kind: Deployment
    name: myblog
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80 # 内存和CPU使用率超百分之八十则扩展
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 20  # 内存和CPU使用率低于百分之二十则收缩

# 方式二
$ kubectl -n luffy autoscale deployment myblog --cpu-percent=10 --min=1 --max=3 #不推荐

Deployment对象必须配置requests的参数,不然无法获取监控数据,也无法通过HPA进行动态伸缩
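HPA 计算期望副本数的公式为 desiredReplicas = ceil(currentReplicas × 当前指标值 / 目标指标值),再截断到 [minReplicas, maxReplicas] 区间。用上面配置的目标值做一次假设的演算(当前副本数和使用率均为假设值):

```shell
current=3    # 当前副本数,假设值
metric=95    # 当前CPU平均使用率(%),假设值
target=20    # averageUtilization 目标值
min=1; max=3
# ceil(a/b) 的整数写法:(a + b - 1) / b
desired=$(( (current * metric + target - 1) / target ))
# 截断到 [minReplicas, maxReplicas]
if [ "$desired" -gt "$max" ]; then desired=$max; fi
if [ "$desired" -lt "$min" ]; then desired=$min; fi
echo "desiredReplicas=${desired}"
# 输出: desiredReplicas=3
```

本例中 ceil(3×95/20)=15,但被 maxReplicas=3 截断,所以最终扩到 3 个副本。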

注:
[root@k8s-master week3]# vim hpa-myblog.yaml

[root@k8s-master week3]# kubectl apply  -f hpa-myblog.yaml 
horizontalpodautoscaler.autoscaling/hpa-myblog created

[root@k8s-master week3]# kubectl -n luffy  get hpa
NAME         REFERENCE           TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
hpa-myblog   Deployment/myblog   71%/80%, 5%/20%   1         3         3          25s

验证:

$ yum -y install httpd-tools
$ kubectl -n luffy get svc myblog
myblog   ClusterIP   10.104.245.225   <none>        80/TCP    6d18h

# 为了更快看到效果,先调整副本数为1
$ kubectl -n luffy scale deploy myblog --replicas=1

# 模拟1000个用户并发访问页面10万次
$ ab -n 100000 -c 1000 http://10.104.245.225/blog/index/

$ kubectl get hpa
$ kubectl -n luffy get pods

注:
# 安装AB命令
[root@k8s-master week3]# yum install -y httpd-tools
[root@k8s-master week3]# kubectl -n luffy get svc myblog
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
myblog   ClusterIP   10.105.189.209   <none>        80/TCP    4d6h

# 为了更快看到效果,先调整副本数为1
[root@k8s-master week3]# kubectl -n luffy  scale deployment myblog  --replicas=1
deployment.apps/myblog scaled

# 模拟1000个用户并发访问页面10万次
ab -n 100000 -c 1000 http://10.105.189.209/blog/index/

# 再查看hpa,观察是否达到自动扩缩容的效果
[root@k8s-master week3]# kubectl -n luffy  get hpa -w
NAME         REFERENCE           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
hpa-myblog   Deployment/myblog   68%/80%, 95%/20%   1         3         3          8m55s
hpa-myblog   Deployment/myblog   68%/80%, 78%/20%   1         3         3          9m8s
hpa-myblog   Deployment/myblog   4%/80%, 0%/20%     1         3         3          9m38s

# 此时luffy底下的pod已经变成3个了
[root@k8s-master week3]# kubectl -n luffy  get po
NAME                      READY   STATUS    RESTARTS   AGE
myblog-6759fcc46f-7jgtf   1/1     Running   24         36h
myblog-6759fcc46f-x9p88   1/1     Running   1          4m17s
myblog-6759fcc46f-xqfh4   1/1     Running   1          4m17s

压力降下来后,会有默认5分钟的scaledown的时间,可以通过controller-manager的如下参数设置:

--horizontal-pod-autoscaler-downscale-stabilization

The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).

是一个逐步的过程,当前的缩放完成后,下次缩放的时间间隔,比如从3个副本降低到1个副本,中间大概会等待2*5min = 10分钟

基于自定义指标的动态伸缩

除了基于 CPU 和内存来进行自动扩缩容之外,我们还可以根据自定义的监控指标来进行。这个我们就需要使用 Prometheus Adapter,Prometheus 用于监控应用的负载和集群本身的各种指标,Prometheus Adapter 可以帮我们使用 Prometheus 收集的指标并使用它们来制定扩展策略,这些指标都是通过 APIServer 暴露的,而且 HPA 资源对象也可以很轻易的直接使用。

![自定义指标HPA](images/custom-hpa.webp)

架构图:

![Prometheus自定义指标架构图](images/hpa-prometheus-custom.png)

小结

本章节:HPA全称HorizontalPodAutoscaler(也是一类资源对象),即Pod的水平自动扩展。自动扩展主要分为水平扩展和垂直扩展(即单个实例可使用资源的增减),HPA属于前者。它操作的对象是RC、RS或者Deployment对应的Pod,根据观察到的CPU等实际使用量与用户的期望值进行对比,做出是否需要增减实例数量的决策。
Metrics-Server通过kubelet获取监控数据。
   这些数据最终来自/sys/fs/cgroup/memory/
讲了HPA控制的介绍
metrics server的实现
kubelet的指标采集
基于CPU的,基于内存的动态伸缩
kubernetes对接分布式存储
PV与PVC快速入门

k8s存储的目的就是保证Pod重建后,数据不丢失。简单的数据持久化的下述方式:

  • emptyDir

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - image: k8s.gcr.io/test-webserver
        name: webserver
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      - image: k8s.gcr.io/test-redis
        name: redis
        volumeMounts:
        - mountPath: /data
          name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
    
    • Pod内的容器共享卷的数据
    • 存在于Pod的生命周期,Pod销毁,数据丢失
    • Pod内的容器自动重建后,数据不会丢失
  • hostPath

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - image: k8s.gcr.io/test-webserver
        name: test-container
        volumeMounts:
        - mountPath: /test-pod
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /data
          # this field is optional
          type: Directory
    

    通常配合nodeSelector使用

  • nfs存储

    ...
      volumes:
      - name: redisdata             #卷名称
        nfs:                        #使用NFS网络存储卷
          server: 192.168.31.241    #NFS服务器地址
          path: /data/redis         #NFS服务器共享的目录
          readOnly: false           #是否为只读
    ...
    

    注:三种方式

    • emptydir
    • hostPath
    • nfs存储

volume支持的种类众多(参考 https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes ),每种对应不同的存储后端实现,因此为了屏蔽后端存储的细节,同时使得Pod在使用存储的时候更加简洁和规范,k8s引入了两个新的资源类型,PV和PVC。

PersistentVolume(持久化卷),是对底层的存储的一种抽象,它和具体的底层的共享存储技术的实现方式有关,比如 Ceph、GlusterFS、NFS 等,都是通过插件机制完成与共享存储的对接。如使用PV对接NFS存储:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity: # 容量
    storage: 1Gi # 存储能力
  accessModes: # 访问模式
  - ReadWriteMany # 读写权限,可以被多个节点挂载
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s
    server: 172.21.51.55
  • capacity,存储能力, 目前只支持存储空间的设置, 就是我们这里的 storage=1Gi,不过未来可能会加入 IOPS、吞吐量等指标的配置。
  • accessModes,访问模式, 是用来对 PV 进行访问模式的设置,用于描述用户应用对存储资源的访问权限,访问权限包括下面几种方式:
    • ReadWriteOnce(RWO):读写权限,但是只能被单个节点挂载
    • ReadOnlyMany(ROX):只读权限,可以被多个节点挂载
    • ReadWriteMany(RWX):读写权限,可以被多个节点挂载

![PV访问模式](images/pv-access-mode.webp)

  • persistentVolumeReclaimPolicy,pv的回收策略, 目前只有 NFS 和 HostPath 两种类型支持回收策略
    • Retain(保留)- 保留数据,需要管理员手工清理数据
    • Recycle(回收)- 清除 PV 中的数据,效果相当于执行 rm -rf /thevolume/*
    • Delete(删除)- 与 PV 相连的后端存储完成 volume 的删除操作,当然这常见于云服务商的存储服务,比如 AWS EBS。

因为PV是直接对接底层存储的,就像集群中的Node可以为Pod提供计算资源(CPU和内存)一样,PV可以为Pod提供存储资源。因此PV不是namespaced的资源,属于集群层面可用的资源。Pod如果想使用该PV,需要通过创建PVC挂载到Pod中。

PVC全写是PersistentVolumeClaim(持久化卷声明),PVC 是用户存储的一种声明,创建完成后,可以和PV实现一对一绑定。对于真正使用存储的用户不需要关心底层的存储实现细节,只需要直接使用 PVC 即可。

注:
PersistentVolume(PV)是集群中由管理员配置的一段网络存储,它是集群中的资源,就像节点是集群资源一样。PV是存储卷插件(如Volumes),但其生命周期独立于使用PV的任何单个Pod。

PersistentVolumeClaim(PVC)是用户对存储的请求,它类似于Pod:Pod消耗节点资源,PVC消耗PV资源。Pod可以请求特定级别的资源(CPU和内存),PVC则可以请求特定的大小和访问模式(例如,可以一次读/写或多次只读)。

PVC消耗PV资源,PVC和PV是一一对应的。
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

然后Pod中通过如下方式去使用:

...
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:                        #挂载容器中的目录到pvc nfs中的目录
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:              #指定pvc
          claimName: pvc-nfs
...
PV与PVC管理NFS存储卷实践
环境准备

服务端:172.21.51.55

$ yum -y install nfs-utils rpcbind

# 共享目录
$ mkdir -p /data/k8s && chmod 755 /data/k8s

$ echo '/data/k8s  *(insecure,rw,sync,no_root_squash)'>>/etc/exports

$ systemctl enable rpcbind && systemctl start rpcbind
$ systemctl enable nfs && systemctl start nfs
注:
# 在node1服务器上安装nfs服务
[root@node1 ~]# yum install -y nfs-utils rpcbind

# 共享目录
[root@node1 ~]# mkdir -p /data/k8s/nginx && chmod 755 /data/k8s/nginx

# 配置相关权限
[root@node1 ~]# echo '/data/k8s  *(insecure,rw,sync,no_root_squash)'>>/etc/exports

# 设置开机自启
[root@node1 ~]# systemctl enable rpcbind && systemctl start rpcbind
[root@node1 ~]# systemctl enable nfs && systemctl start nfs

客户端:k8s集群slave节点(master和node节点都需执行)

$ yum -y install nfs-utils rpcbind
$ mkdir /nfsdata
$ mount -t nfs 172.21.51.55:/data/k8s /nfsdata
注:
# 在k8s-slave1和k8s-slave2节点安装nfs服务
$ yum install nfs-utils rpcbind -y

$ mkdir /nfsdata
# 把刚创建的目录挂载到node1节点上
$ mount -t nfs 10.0.1.3:/data/k8s /nfsdata

# 验证是否挂载成功:在k8s-slave1机器上创建一个文件,在node1节点上也能看到
[root@k8s-slave1 nfsdata]# touch 1.txt
[root@node1 k8s]# ls
1.txt
PV与PVC演示
$ cat pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s/nginx
    server: 172.21.51.55 # 这里是NFS节点的地址

$ kubectl create -f pv-nfs.yaml

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS  
nfs-pv   1Gi        RWX            Retain           Available

注:
[root@k8s-master week3]# vim nfs-pv.yaml
# 创建pv
[root@k8s-master week3]# kubectl  create  -f nfs-pv.yaml 
persistentvolume/nfs-pv created
# 查看pv
[root@k8s-master week3]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWX            Retain           Available                                   21s

一个 PV 的生命周期中,可能会处于4种不同的阶段:

  • Available(可用):表示可用状态,还未被任何 PVC 绑定
  • Bound(已绑定):表示 PV 已经被 PVC 绑定
  • Released(已释放):PVC 被删除,但是资源还未被集群重新声明
  • Failed(失败): 表示该 PV 的自动回收失败
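这 4 个阶段之间的流转,可以用一个极简的 shell 片段示意(仅为演示状态变化的顺序,真实状态由控制器维护,函数名为演示用的假设命名):

```shell
phase=Available                   # PV 创建后,尚未被任何 PVC 绑定
bind_pvc()    { phase=Bound; }    # 被 PVC 绑定
release_pvc() { phase=Released; } # PVC 被删除,资源尚未被回收

bind_pvc;    echo "$phase"        # Bound
release_pvc; echo "$phase"        # Released
```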
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    nfs-pv   1Gi        RWX                           3s
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             
nfs-pv   1Gi        RWX            Retain           Bound    default/pvc-nfs             

#访问模式、storage大小(pvc请求的大小需要小于等于pv大小),以及 PV 和 PVC 的 storageClassName 字段必须一致,这样才能够进行绑定。

#PersistentVolumeController会不断地循环去查看每一个 PVC,是不是已经处于 Bound(已绑定)状态。如果不是,那它就会遍历所有的、可用的 PV,并尝试将其与未绑定的 PVC 进行绑定,这样,Kubernetes 就可以保证用户提交的每一个 PVC,只要有合适的 PV 出现,它就能够很快进入绑定状态。而所谓将一个 PV 与 PVC 进行“绑定”,其实就是将这个 PV 对象的名字,填在了 PVC 对象的 spec.volumeName 字段上。
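上面的绑定逻辑可以用一个极简的 shell 片段模拟(仅演示"容量满足请求、访问模式一致即绑定"的匹配思路,候选 PV 列表为假设数据,并非真实控制器实现):

```shell
# PVC 请求:1Gi、ReadWriteMany
pvc_size=1; pvc_mode=RWX
# 候选 PV 列表:名字 容量(Gi) 访问模式(假设数据)
pvs="small-pv 0 RWO
nfs-pv 1 RWX"
bound=""
while read name size mode; do
  if [ -z "$bound" ] && [ "$size" -ge "$pvc_size" ] && [ "$mode" = "$pvc_mode" ]; then
    bound=$name   # “绑定”即把 PV 的名字填入 PVC 的 spec.volumeName
  fi
done <<EOF
$pvs
EOF
echo "spec.volumeName: $bound"   # spec.volumeName: nfs-pv
```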

# 查看nfs数据目录
$ ls /nfsdata
注:
[root@k8s-master week3]# vim pvc.yaml

[root@k8s-master week3]# kubectl create -f pvc.yaml 
persistentvolumeclaim/pvc-nfs created

# 查看pv和pvc的状态都是Bound,此时pv已经和pvc绑定在一起了
[root@k8s-master week3]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWX            Retain           Bound    default/pvc-nfs                           5m50s
[root@k8s-master week3]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    nfs-pv   1Gi        RWX                           50s

创建Pod挂载pvc

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 1 # 副本数
  selector:		#指定Pod的选择器
    matchLabels: # 匹配带有app=nginx标签的Pod
      app: nginx
  template: # Pod模板,下面定义的Pod带有app=nginx标签
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent # 镜像拉取策略:本地有则使用本地镜像,否则去仓库拉取
        ports:
        - containerPort: 80 # 暴露的容器端口
          name: web
        volumeMounts:                        #挂载容器中的目录到pvc nfs中的目录
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:              #指定pvc
          claimName: pvc-nfs
          
          
$ kubectl create -f deployment.yaml

# 查看容器/usr/share/nginx/html目录

# 删除pvc
注:
[root@k8s-master week3]# vi test-pvc.dpl.yaml
[root@k8s-master week3]# kubectl create -f test-pvc.dpl.yaml 
deployment.apps/nfs-pvc created
# 查看状态
[root@k8s-master week3]# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
nfs-pvc-7bf65c788-954z6   1/1     Running   0          4s

# 进入nfs容器内部,
[root@k8s-master week3]# kubectl exec -ti nfs-pvc-7bf65c788-954z6 -- sh

# 此时已经挂载过来了
/ # df -Th|grep "10.0.1.3"
10.0.1.3:/data/k8s/nginx

# 切换目录,发现没有文件,尝试创建文件是否在node1节点上存在
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls

/usr/share/nginx/html # touch index.html

# 此时node1节点文件已经同步
[root@node1 k8s]# ll nginx/index.html 
-rw-r--r-- 1 root root 0 Jul 23 11:10 nginx/index.html
[root@k8s-master week3]# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
nfs-pvc-7bf65c788-954z6   1/1     Running   0          12m   10.244.0.26   k8s-master   <none>           <none>
# 现在的pod已经使用了nfs的存储了,这样就算这个节点挂了,也能立马恢复整个数据的内容

# 删除pvc
注:之前使用hostPath时,Pod被固定在某台机器上;现在使用nfs挂载,Pod不再受限于所在节点,因此不会限制k8s横向扩展和Pod在集群内漂移的能力
storageClass实现动态挂载

创建pv及pvc过程是手动,且pv与pvc一一对应,手动创建很繁琐。因此,通过storageClass + provisioner的方式来实现通过PVC自动创建并绑定PV。

(图:StorageClass 动态供给流程,见 images/storage-class.png)

实现流程

1.集群管理员事先创建存储类StorageClass

2.用户创建使用存储类的声明pvc

3.存储持久化声明通知系统,它需要一个持久化声明PV

4.系统读取存储类的信息

5.系统基于存储类的信息,在后台自动创建PVC需要的pv

6.用户创建一个使用的pvc的pod

7.pod中的应用通过pvc进行数据的持久化

8.而PVC使用pv进行数据的最终持久化处理
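上述流程可以用一个极简 shell 片段串起来(仅演示"PVC 引用 StorageClass 后,provisioner 按 pvc-<uid> 命名自动创建 PV 并绑定"的效果,uid 取本文后面示例的值,并非真实 provisioner 实现):

```shell
sc_name=nfs                                     # 集群管理员事先创建的 StorageClass
pvc_name=test-claim                             # 用户创建的 PVC
pvc_uid=8900a743-7e0a-42cd-a1db-aa3b5065a2c0    # PVC 的 UID(实际为随机值)
provision() { echo "pvc-$1"; }                  # provisioner 按 pvc-<uid> 生成 PV 名
pv_name=$(provision "$pvc_uid")
echo "StorageClass=$sc_name: PVC $pvc_name 自动绑定 PV $pv_name"
```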

注:
StorageClass是对存储资源的一个抽象定义。与静态模式的存储配置(即集群管理员手动创建持久卷PV)不同,StorageClass是一种动态模式的存储卷配置。StorageClass资源同PV一样,不是命名空间级别的,而是集群级别的。
  StorageClass资源使得集群管理员解放双手,无需多次手动创建持久化PV,集群管理员只需要创建不同类别的存储类对应的StorageClass资源,供用户的PVC资源进行引用,k8s系统会自动创建持久化卷与持久卷声明PVC进行绑定。
  在用户创建持久卷声明PVC之前,集群管理员需要创建StorageClass资源,这样才能动态的创建新的持久化卷PV。
provisioner:配置了对接后端存储所需的secret等信息,整体链路为:pod - pvc - storageclass + ceph_provisioner - ceph,或 pod - pvc - storageclass + nfs_provisioner - nfs

部署: https://github.com/kubernetes-retired/external-storage

provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # provisioner的名称
              value: luffy.com/nfs # 需要和StorageClass中provisioner字段的值保持一致
            - name: NFS_SERVER 
              value: 172.21.51.55 # NFS节点ip
            - name: NFS_PATH  
              value: /data/k8s #NFS的挂载路径
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.21.51.55 # node1节点ip
            path: /data/k8s

rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: luffy.com/nfs

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi 
  storageClassName: nfs # 指定storageClass的名字,即可自动创建并绑定pv
注:
# 编辑上面的yaml文件
[root@k8s-master nfs]# vi provisioner.yaml
[root@k8s-master nfs]# vim rbac.yaml
[root@k8s-master nfs]# vi storage-class.yaml

# 先创建命名空间 
[root@k8s-master nfs]# kubectl create ns nfs-provisioner
namespace/nfs-provisioner created

# 创建另外三个pod
[root@k8s-master nfs]# kubectl apply -f .
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/nfs created

# 查看挂载的一些详细情况
[root@k8s-master nfs]# kubectl -n nfs-provisioner get po nfs-client-provisioner-59bb5c9fd6-kpbft -oyaml

[root@k8s-master nfs]# kubectl get storageclasses
NAME   PROVISIONER     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    luffy.com/nfs   Delete          Immediate           false                  63m

# 这是之前创建的pv和pvc已经绑定在一起了
[root@k8s-master nfs]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    nfs-pv   1Gi        RWX                           109m
[root@k8s-master nfs]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWX            Retain           Bound    default/pvc-nfs                           114m

# 当我们创建好pvc以后再次查看pv的时候,此时自动创建了一个pv并且与该pvc自动绑定
[root@k8s-master nfs]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/test-claim created

[root@k8s-master nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
nfs-pv                                     1Gi        RWX            Retain           Bound    default/pvc-nfs                              117m
pvc-8900a743-7e0a-42cd-a1db-aa3b5065a2c0   1Mi        RWX            Delete           Bound    default/test-claim   nfs                     110s

[root@k8s-master nfs]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs      Bound    nfs-pv                                     1Gi        RWX                           114m
test-claim   Bound    pvc-8900a743-7e0a-42cd-a1db-aa3b5065a2c0   1Mi        RWX            nfs            3m34s
对接Ceph存储实践

ceph的安装及使用参考 http://docs.ceph.org.cn/start/intro/

单点快速安装: https://blog.csdn.net/h106140873/article/details/90201379

(图:Ceph 架构,见 images/ceph-art.png)

# CephFS需要使用两个Pool来分别存储数据和元数据
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_meta 128
ceph osd lspools

# 创建一个CephFS
ceph fs new cephfs cephfs_meta cephfs_data

# 查看
ceph fs ls

# 获取admin账户的key
$ ceph auth get-key client.admin
AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
# 挂载
$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
注:
# CephFS需要使用两个Pool来分别存储数据和元数据
[root@ceph ~]# ceph osd pool create cephfs_data 128

[root@ceph ~]# ceph osd pool create cephfs_meta 128
pool 'cephfs_meta' created
[root@ceph ~]# ceph osd lspools

# 创建一个CephFS
[root@ceph ~]# ceph fs new cephfs cephfs_meta cephfs_data

#查看
[root@ceph ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_meta, data pools: [cephfs_data ]

# 获取ceph的key
[root@ceph ~]# ceph auth get-key client.admin
AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==

# 在master节点上创建并挂载,# 注意这个key是在ceph节点上获取的key
[root@k8s-master pki]# mkdir -p /mnt/cephfs
[root@k8s-master pki]# mount -t ceph 10.0.1.3:6789:/ /mnt/cephfs -o name=admin,secret=AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
storageClass实现动态挂载

创建pv及pvc过程是手动,且pv与pvc一一对应,手动创建很繁琐。因此,通过storageClass + provisioner的方式来实现通过PVC自动创建并绑定PV。

(图:StorageClass 动态供给流程,见 images/storage-class.png)

比如,针对cephfs,可以创建如下类型的storageclass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs # 指定供应商,对应底层存储的provisioner名称
parameters: # 参数
    monitors: 172.21.51.55:6789 # ceph monitor的监听地址
    adminId: admin  # 认证用户
    adminSecretName: ceph-admin-secret # 认证信息所在的secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes

NFS,ceph-rbd,cephfs均提供了对应的provisioner

部署cephfs-provisioner

$ cat external-storage-cephfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        imagePullPolicy: IfNotPresent
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner
注:
# 注意:pvc在最后单独创建
[root@k8s-master week3]# mkdir ceph && cd ceph/
[root@k8s-master ceph]# vim external-storage-cephfs-provisioner.yaml
# “.”表示应用当前目录下所有的yaml文件
[root@k8s-master ceph]# kubectl apply -f .
secret/ceph-admin-secret created
storageclass.storage.k8s.io/dynamic-cephfs created
serviceaccount/cephfs-provisioner created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
deployment.apps/cephfs-provisioner created

# 检查是否创建完毕
[root@k8s-master ceph]# kubectl -n kube-system get po
NAME                                 READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7858cc7b6-hgg6k   1/1     Running   0          7h15m
coredns-6d56c8448f-gjgvc             1/1     Running   7          6d11h
coredns-6d56c8448f-sgdvm             1/1     Running   7          6d11h

[root@k8s-master ceph]# kubectl apply -f cephfs-pvc-test.yaml 
persistentvolumeclaim/cephfs-claim created

# 此时已经自动绑定在一起了
[root@k8s-master ceph]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
cephfs-claim   Bound    pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3   2Gi        RWO            dynamic-cephfs   81s

# 如果一直出现Pending的状态就查看日志
[root@k8s-master ceph]# kubectl -n kube-system logs -f cephfs-provisioner-7858cc7b6-j5v4m

在ceph monitor机器中查看admin账户的key

$ ceph auth list
$ ceph auth get-key client.admin
AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
注:
# 获取ceph的client.admin的key
[root@ceph ~]# ceph auth list
[root@ceph ~]# ceph auth get-key client.admin
AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==

创建secret

$ echo -n AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==|base64
QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
$ cat ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: Opaque
注:
# 注意这个key是上面获取key
[root@ceph ~]# echo -n AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==|base64
QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==

# 创建secret
[root@k8s-master ceph]# cat  ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: Opaque

[root@k8s-master ceph]# kubectl create -f ceph-admin-secret.yaml 
secret/ceph-admin-secret created
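创建完 Secret 后,可以用 base64 解码做一次反向校验,确认写入的 key 与 ceph 返回的原始 key 一致:

```shell
# 反向校验:解码 Secret 中的 key,应得到 ceph auth get-key 返回的原始 key
printf '%s' 'QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==' | base64 -d
# 输出:AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
```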

创建storageclass

$ cat cephfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 172.21.51.55:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes
注:
# 创建storageclass
[root@k8s-master ceph]# cat  cephfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs # 这个名称要和external-storage-cephfs-provisioner.yaml的value相同
parameters:
    monitors: 10.0.1.3:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes

[root@k8s-master ceph]# kubectl create -f cephfs-storage-class.yaml 
storageclass.storage.k8s.io/dynamic-cephfs created

[root@k8s-master ceph]# kubectl get storageclasses
NAME             PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
dynamic-cephfs   ceph.com/cephfs   Delete          Immediate           false                  11m
动态pvc验证及实现分析

使用流程: 创建pvc,指定storageclass和存储大小,即可实现动态存储。

创建pvc测试自动生成pv

$ cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi

$ kubectl create -f cephfs-pvc-test.yaml

$ kubectl get pv
pvc-2abe427e-7568-442d-939f-2c273695c3db   2Gi        RWO            Delete           Bound      default/cephfs-claim   dynamic-cephfs            1s

注:
# 创建pvc测试自动生成pv
[root@k8s-master ceph]# cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
[root@k8s-master ceph]# kubectl create -f cephfs-pvc-test.yaml 
persistentvolumeclaim/cephfs-claim created

# 查看pv
[root@k8s-master ceph]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS     REASON   AGE
pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3   2Gi        RWO            Delete           Bound    default/cephfs-claim   dynamic-cephfs            8m42s

创建Pod使用pvc挂载cephfs数据盘

$ cat test-pvc-cephfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim
      
$ kubectl create -f test-pvc-cephfs.yaml

注:
# 创建使用pvc挂载cephfs的nginx-pod
[root@k8s-master ceph]# vim test-pvc-cephfs.yaml
[root@k8s-master ceph]# kubectl create -f test-pvc-cephfs.yaml
pod/nginx-pod created

[root@k8s-master ceph]# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
nfs-pvc-7bf65c788-954z6   1/1     Running   2          20h
nginx-pod                 1/1     Running   0          18s

我们所说的容器的持久化,实际上应该理解为宿主机中volume的持久化,因为Pod是支持销毁重建的,所以只能通过宿主机volume持久化,然后挂载到Pod内部来实现Pod的数据持久化。

宿主机上的volume持久化,因为要支持数据漂移,所以通常是数据存储在分布式存储中,宿主机本地挂载远程存储(NFS,Ceph,OSS),这样即使Pod漂移也不影响数据。

k8s的pod的挂载盘通常的格式为:

/var/lib/kubelet/pods/<Pod的ID>/volumes/kubernetes.io~<Volume类型>/<Volume名字>
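按这个格式,可以用 shell 拼出某个 Pod 的挂载路径(Pod ID、Volume 类型、Volume 名取下文 df 示例中的值):

```shell
POD_ID=61ba43c5-d2e9-4274-ac8c-008854e4fa8e         # Pod 的 UID
VOL_TYPE=cephfs                                     # Volume 类型
VOL_NAME=pvc-2abe427e-7568-442d-939f-2c273695c3db   # Volume 名字(即 PV 名)
echo "/var/lib/kubelet/pods/${POD_ID}/volumes/kubernetes.io~${VOL_TYPE}/${VOL_NAME}"
```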

查看nginx-pod的挂载盘,

$ df -TH
/var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/

$ findmnt /var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/

172.21.51.55:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-ffe3d84d-c433-11ea-b347-6acc3cf3c15f
注:
# 在k8s-slave1机器上进行挂载
[root@k8s-slave1 ~]# mount -t ceph 10.0.1.3:6789:/ /mnt/cephfs -o name=admin,secret=AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
# 查看nginx.pod的挂载存储的目录
[root@k8s-slave1 ~]# ll /mnt/cephfs/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-9efdd184-ec07-11eb-beb5-f217e6b83d1a/
使用Helm3管理复杂应用的部署
认识Helm
  1. 为什么有helm?

  2. Helm是什么?

    kubernetes的包管理器,“可以将Helm看作Linux系统下的apt-get/yum”。

    • 对于应用发布者而言,可以通过Helm打包应用,管理应用依赖关系,管理应用版本并发布应用到软件仓库。

    • 对于使用者而言,使用Helm后不需要了解Kubernetes的Yaml语法并编写应用部署文件,可以通过Helm下载并在kubernetes上安装需要的应用。

    除此以外,Helm还提供了在kubernetes上部署、删除、升级、回滚应用的强大功能。

  3. Helm的版本

    • helm2

      (图:Helm 2 架构,见 images/helm2.jpg)

      C/S架构,helm通过Tiller与k8s交互

    • helm3

      (图:Helm 3 架构,见 images/helm3.jpg)

      • 从安全性和易用性方面考虑,移除了Tiller服务端,helm3直接使用kubeconfig文件鉴权访问APIServer服务器

      • 由二路合并升级成为三路合并补丁策略( 旧的配置,线上状态,新的配置 )

        helm install very_important_app ./very_important_app
        

        这个应用的副本数量设置为 3 。现在,如果有人不小心执行了 kubectl edit 或:

        kubectl scale --replicas=0 deployment/very_important_app
        

        然后,团队中的某个人发现 very_important_app 莫名其妙宕机了,尝试执行命令:

        helm rollback very_important_app
        

        在 Helm 2 中,这个操作将比较旧的配置与新的配置,然后生成一个更新补丁。由于误操作的人仅修改了应用的线上状态(旧的配置并未更新),旧的配置与新的配置没有差别(都是 3 个副本),所以 Helm 2 在回滚时什么也不会做,副本数继续保持为 0

      • 移除了helm server本地repo仓库

      • 创建应用时必须指定名字(或者用 --generate-name 随机生成)

  4. Helm的重要概念

    • chart,应用的信息集合,包括各种对象的配置模板、参数定义、依赖关系、文档说明等
    • Repository,chart仓库,存储chart的地方,并且提供了一个该 Repository 的 Chart 包的清单文件以供查询。Helm 可以同时管理多个不同的 Repository。
    • release, 当 chart 被安装到 kubernetes 集群,就生成了一个 release , 是 chart 的运行实例,代表了一个正在运行的应用

helm 是包管理工具,包就是指 chart,helm 能够:

  • 从零创建chart
  • 与仓库交互,拉取、保存、更新 chart
  • 在kubernetes集群中安装、卸载 release
  • 更新、回滚、测试 release
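前面提到的 Helm 2 二路合并与 Helm 3 三路合并的差异,可以用一个极简 shell 片段模拟(仅演示思路,变量名为演示用的假设命名,并非 Helm 的真实补丁算法):

```shell
old_config=3; live_state=0; new_config=3   # 期望 3 副本,线上被误改成 0
# Helm 2:只比较旧配置与新配置,相同则不下发补丁,线上副本数仍是 0
if [ "$old_config" = "$new_config" ]; then helm2=$live_state; else helm2=$new_config; fi
# Helm 3:把线上状态也纳入比较,发现漂移则修正回配置值
if [ "$live_state" != "$new_config" ]; then helm3=$new_config; else helm3=$live_state; fi
echo "helm2回滚后副本数=$helm2 helm3回滚后副本数=$helm3"   # helm2回滚后副本数=0 helm3回滚后副本数=3
```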
安装与快速入门实践

下载最新的稳定版本:https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz

更多版本可以参考: https://github.com/helm/helm/releases

# k8s-master节点
$ wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
$ tar -zxf helm-v3.2.4-linux-amd64.tar.gz

$ cp linux-amd64/helm /usr/local/bin/

# 验证安装
$ helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
$ helm env

# 添加仓库
$ helm repo add stable http://mirror.azure.cn/kubernetes/charts/

# 同步最新charts信息到本地
$ helm repo update
注:
# 在k8s-master节点下载安装包并解压
[root@k8s-master 2021]# wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
[root@k8s-master 2021]# tar -zxf helm-v3.2.4-linux-amd64.tar.gz 
[root@k8s-master 2021]# cp linux-amd64/helm /usr/local/bin/

# 验证安装
[root@k8s-master 2021]# helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}

[root@k8s-master 2021]# helm env
HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"

# 查看绑定的仓库
[root@k8s-master 2021]# helm repo ls

# 查看helm仓库有哪些镜像
[root@k8s-master 2021]# helm search hub nginx
# 添加仓库
[root@k8s-master 2021]# helm repo add stable https://charts.bitnami.com/bitnami
"stable" has been added to your repositories

# 同步最新charts信息到本地
[root@k8s-master 2021]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!

快速入门实践:

示例一:使用helm安装mysql应用

# helm 搜索chart包
$ helm search repo mysql

# 从仓库安装
$ helm install  mysql --set mysqlRootPassword=root,mysqlUser=luffy,mysqlPassword=luffy,mysqlDatabase=my-database --set persistence.storageClass=dynamic-cephfs  stable/mysql

$ helm ls
$ kubectl get all 

# 从chart仓库中把chart包下载到本地
$ helm pull stable/mysql
$ tree mysql
注:
# helm 搜索chart包
$ helm search repo wordpress
# 注:helm search hub wordpress 是从公共 hub 搜索,固定写法;搜索本地已添加的仓库用 helm search repo

# 创建命名空间
[root@k8s-master 2021]# kubectl create namespace wordpress
namespace/wordpress created

# 从仓库安装
[root@k8s-master 2021]# helm -n wordpress install wordpress stable/wordpress --set mariadb.primary.persistence.enabled=false --set service.type=ClusterIP --set ingress.enabled=true --set persistence.enabled=false --set ingress.hostname=wordpress.luffy.com
# 参数解释
-n  指定命名空间,不指定默认default
install   安装
wordpress  名字,可以自定义
stable/wordpress 使用stable仓库中名为wordpress的chart
--set   指定参数
--set ingress.hostname=wordpress.luffy.com   指定了一个ingress域名

# 注:基于chart部署出来的实例,在helm体系中叫release,类似于docker中 image -> container 的过程
[root@k8s-master 2021]# helm  -n wordpress ls

# 查看命名空间的资源
[root@k8s-master 2021]# kubectl -n wordpress get all


# 注:想知道安装wordpress可以设置哪些参数(--set xxx),可以查看该chart的values.yaml
注:部署出来的各类资源如下
[root@k8s-master 2021]# kubectl -n wordpress get po
NAME                         READY   STATUS    RESTARTS   AGE
wordpress-565c745795-t5mpf   1/1     Running   0          97m
wordpress-mariadb-0          1/1     Running   0          97m
[root@k8s-master 2021]# kubectl -n wordpress get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
wordpress           ClusterIP   10.99.38.142    <none>        80/TCP,443/TCP   97m
wordpress-mariadb   ClusterIP   10.102.129.15   <none>        3306/TCP         97m

[root@k8s-master 2021]# kubectl -n wordpress get ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME        CLASS    HOSTS                 ADDRESS   PORTS   AGE
wordpress   <none>   wordpress.luffy.com             80      98m

# 登录的提示信息
[root@k8s-master 2021]# helm -n wordpress install wordpress stable/wordpress --set mariadb.primary.persistence.enabled=false --set service.type=ClusterIP --set ingress.enabled=true --set persistence.enabled=false --set ingress.hostname=wordpress.luffy.com
NAME: wordpress
LAST DEPLOYED: Sat Jul 24 09:32:50 2021
NAMESPACE: wordpress
STATUS: deployed
REVISION: 1
NOTES:
** Please be patient while the chart is being deployed **

Your WordPress site can be accessed through the following DNS name from within your cluster:

    wordpress.wordpress.svc.cluster.local (port 80)

To access your WordPress site from outside the cluster follow the steps below:

1. Get the WordPress URL and associate WordPress hostname to your cluster external IP:

   export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on others K8s clusters
   echo "WordPress URL: http://wordpress.luffy.com/"
   echo "$CLUSTER_IP  wordpress.luffy.com" | sudo tee -a /etc/hosts

2. Open a browser and access WordPress using the obtained URL.

3. Login with the following credentials below to see your blog:

  echo Username: user
  echo Password: $(kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

按上面输出的提示,执行这条命令拿到密码
[root@k8s-master 2021]# kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode

再去浏览器访问http://wordpress.luffy.com/
登录wordpress
账号:user
密码:上面获取的值


# 从chart仓库中把chart包下载到本地
[root@k8s-master 2021]# helm pull stable/wordpress
[root@k8s-master 2021]# tar -xzf  wordpress-11.1.5.tgz 
[root@k8s-master 2021]# ll wordpress
total 104
-rw-r--r-- 1 root root   387 Jul 16 23:51 Chart.lock
drwxr-xr-x 5 root root    52 Jul 24 08:31 charts
-rw-r--r-- 1 root root   881 Jul 16 23:51 Chart.yaml
drwxr-xr-x 2 root root   159 Jul 24 08:31 ci
-rw-r--r-- 1 root root 48803 Jul 16 23:51 README.md
drwxr-xr-x 3 root root  4096 Jul 24 08:31 templates

示例二:新建nginx的chart并安装

$ helm create nginx

# 从本地安装
$ helm install nginx ./nginx

# 安装到别的命名空间luffy
$ helm -n luffy install nginx ./nginx --set replicaCount=2 --set image.tag=alpine

# 查看
$ helm ls
$ helm -n luffy ls

#
$ kubectl -n luffy get all
注:
# 新建nginx的chart并安装
[root@k8s-master helm]# helm create nginx
[root@k8s-master helm]# ll nginx/
total 8
drwxr-xr-x 2 root root    6 Jul 24 11:16 charts
-rw-r--r-- 1 root root 1096 Jul 24 11:16 Chart.yaml
drwxr-xr-x 3 root root  162 Jul 24 11:16 templates
-rw-r--r-- 1 root root 1798 Jul 24 11:16 values.yaml

# 从本地安装
[root@k8s-master helm]# helm install nginx ./nginx
NAME: nginx
LAST DEPLOYED: Sat Jul 24 11:18:30 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginx" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

# 安装到别的命名空间luffy 
[root@k8s-master helm]# helm -n luffy install nginx ./nginx --set replicaCount=2 --set image.tag=alpine
NAME: nginx
LAST DEPLOYED: Sat Jul 24 11:34:16 2021
NAMESPACE: luffy
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace luffy -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginx" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace luffy port-forward $POD_NAME 8080:80

[root@k8s-master helm]# kubectl -n luffy get po
NAME                      READY   STATUS    RESTARTS   AGE
nginx-555d85b485-86pp7    1/1     Running   0          47s
nginx-555d85b485-g42xw    1/1     Running   0          47s

[root@k8s-master helm]# kubectl -n luffy get po  nginx-555d85b485-ldf2x -oyaml|grep image
            f:image: {}
            f:imagePullPolicy: {}
  - image: nginx:alpine
    imagePullPolicy: IfNotPresent
    image: nginx:alpine
    imageID: docker-pullable://nginx@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3
Chart的模板语法及开发
nginx的chart实现分析

格式:

$ tree nginx/
nginx/
├── charts						# 存放子chart
├── Chart.yaml					# 该chart的全局定义信息
├── templates					# chart运行所需的资源清单模板,用于和values做渲染
│   ├── deployment.yaml
│   ├── _helpers.tpl			# 定义全局的命名模板,方便在其他模板中引入使用
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt				# helm安装完成后终端的提示信息
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml					# 模板使用的默认值信息

很明显,资源清单都在templates中,数据来源于values.yaml,安装的过程就是将模板与数据融合成k8s可识别的资源清单,然后部署到k8s环境中。

$ helm install debug-nginx ./ --dry-run --set replicaCount=2 --debug

分析模板文件的实现:

  • 引用命名模板并传递作用域

    {{ include "nginx.fullname" . }}
    

    include从_helpers.tpl中引用命名模板,并传递顶级作用域“.”。

  • 内置对象

    .Values        # values.yaml及用户额外传入的值
    .Release.Name  # release的名称
    .Chart         # Chart.yaml中定义的信息
    
    • Release:该对象描述了 release 本身的相关信息,它内部有几个对象:
      • Release.Name:release 名称
      • Release.Namespace:release 安装到的命名空间
      • Release.IsUpgrade:如果当前操作是升级或回滚,则该值为 true
      • Release.IsInstall:如果当前操作是安装,则将其设置为 true
      • Release.Revision:release 的 revision 版本号,在安装的时候,值为1,每次升级或回滚都会增加
      • Release.Service:渲染当前模板的服务,在 Helm 上,实际上该值始终为 Helm
    • Values:从 values.yaml 文件和用户提供的 values 文件传递到模板的 Values 值
    • Chart:获取 Chart.yaml 文件的内容,该文件中的任何数据都可以访问,例如 {{ .Chart.Name }}-{{ .Chart.Version}} 可以渲染成 mychart-0.1.0
  • 模板定义

    {{- define "nginx.fullname" -}}  #定义模板 
    {{- if .Values.fullnameOverride }} #如果.Values.fullnameOverride值不为空
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} # nginx.fullname就等于.Values.fullnameOverride截取前63位,同时去掉最后的"-"
    {{- else }} # 否则
    {{- $name := default .Chart.Name .Values.nameOverride }} # 定义一个变量name,值为 .Values.nameOverride,默认为.Chart.Name
    {{- if contains $name .Release.Name }} # 如果变量name的值包含了.Release.Name的名称
    {{- .Release.Name | trunc 63 | trimSuffix "-" }} # 那么nginx.fullname=.Release.Name截取前63位,同时去掉最后"-"
    {{- else }}  #否则
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} # nginx.fullname=.Release.Name-$name,截取前63位,同时去掉最后的"-"
    {{- end }}
    {{- end }}
    {{- end }}
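补充:上面 `trunc 63 | trimSuffix "-"` 的效果可以用 shell 在本地模拟一下(示例字符串为假设值,63 是 k8s 资源名长度上限):

```shell
# 构造一个68字符的名字:nginx- + 56个a + -extra,第63位恰好是"-"
override="nginx-$(printf 'a%.0s' $(seq 1 56))-extra"
name=$(printf '%s' "$override" | cut -c1-63)   # 对应 trunc 63,只保留前63个字符
name=${name%-}                                 # 对应 trimSuffix "-",去掉结尾的"-"
echo "$name"
echo "长度: ${#name}"
```

截断是为了保证渲染出来的名字符合 k8s 对资源名长度的限制,去掉尾部 "-" 则是避免截断后产生非法结尾字符。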
    
    • {{- 去掉左边的空格及换行,-}} 去掉右侧的空格及换行

    • 示例

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-configmap
      data:
        myvalue: "Hello World"
        drink: {{ .Values.favorite.drink | default "tea" | quote }}
        food: {{ .Values.favorite.food | upper | quote }}
        {{ if eq .Values.favorite.drink "coffee" }}
        mug: true
        {{ end }}
      

      渲染完后是:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: mychart-1575971172-configmap
      data:
        myvalue: "Hello World"
        drink: "coffee"
        food: "PIZZA"
      
        mug: true
      
  • 管道及方法

    • trunc表示字符串截取,63作为参数传递给trunc方法,trimSuffix表示去掉-后缀

      {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
      
    • nindent表示先换行,再给每一行前面补指定数量的空格

        selector:
          matchLabels:
            {{- include "nginx.selectorLabels" . | nindent 6 }}
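补充:nindent 的缩进效果可以用 sed 在本地类比演示一下(labels 的内容为假设的演示值):

```shell
# 类比 nindent 6:给多行内容的每一行前面补6个空格,使其对齐到YAML的层级
labels='app.kubernetes.io/name: nginx
app.kubernetes.io/instance: nginx'
printf '%s\n' "$labels" | sed 's/^/      /'
```

这正是模板中需要 nindent 的原因:命名模板渲染出的是多行文本,必须整体缩进到所在 YAML 字段的层级,否则清单不合法。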
      
    • lower表示将内容小写,quote表示用双引号引起来

      value: {{ include "mytpl" . | lower | quote }}
      
  • 条件判断语句每个if对应一个end

    {{- if .Values.fullnameOverride }}
    ...
    {{- else }}
    ...
    {{- end }}
    

    通常用来根据values.yaml中定义的开关来控制模板中的显示:

    {{- if not .Values.autoscaling.enabled }}
      replicas: {{ .Values.replicaCount }}
    {{- end }}
    
  • 定义变量,模板中可以通过变量名字去引用

    {{- $name := default .Chart.Name .Values.nameOverride }}
    
  • 遍历values的数据

          {{- with .Values.nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 8 }}
          {{- end }}
    

    toYaml用于处理值中的转义及特殊字符,例如 "kubernetes.io/role"=master、name="value1,value2" 这类情况

  • default设置默认值

    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
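补充:default 的行为类似 shell 里的 `${var:-default}`,可以在本地演示(AppVersion 取假设值 1.16.0):

```shell
# 类比 helm 的 default 函数:tag 为空时回退到 Chart 的 AppVersion
tag=""                 # 相当于 .Values.image.tag 未设置
app_version="1.16.0"   # 相当于 .Chart.AppVersion(假设值)
echo "image: nginx:${tag:-$app_version}"
```

这样 values.yaml 中不写 image.tag 时,镜像标签会自动跟随 Chart.yaml 的 appVersion。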
    

Helm template

hpa.yaml

{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx.fullname" . }}
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
  {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
  {{- end }}
  {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
  {{- end }}
{{- end }}
创建Release的时候赋值
  • set的方式
# 改变副本数和resource值
$ helm install nginx-2 ./nginx --set replicaCount=2 --set resources.limits.cpu=200m --set resources.limits.memory=256Mi
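补充:--set 的点分 key 与 values.yaml 的层级一一对应,上面几个 --set 参数等价于如下 values 片段(此处仅把等价 YAML 打印出来演示):

```shell
# --set replicaCount=2 --set resources.limits.cpu=200m --set resources.limits.memory=256Mi
# 等价于下面这段 values 内容
values=$(cat <<'EOF'
replicaCount: 2
resources:
  limits:
    cpu: 200m
    memory: 256Mi
EOF
)
echo "$values"
```

参数较多时,建议改用下文的 -f values 文件方式,便于版本管理和复用。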

  • value文件的方式

    $ cat nginx-values.yaml
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 80
    ingress:
      enabled: true
      hosts:
        - host: chart-example.luffy.com
          paths:
          - /
    
    $ helm install -f nginx-values.yaml nginx-3 ./nginx
    

更多语法参考:

https://helm.sh/docs/topics/charts/

实战:使用Helm部署Harbor镜像及chart仓库
harbor部署

架构 https://github.com/goharbor/harbor/wiki/Architecture-Overview-of-Harbor

(Harbor架构图:images/harbor-architecture.png)

  • Core,核心组件
    • API Server,接收处理用户请求
    • Config Manager :所有系统的配置,比如认证、邮件、证书配置等
    • Project Manager:项目管理
    • Quota Manager :配额管理
    • Chart Controller:chart管理
    • Replication Controller :镜像副本控制器,可以与不同类型的仓库实现镜像同步
      • Distribution (docker registry)
      • Docker Hub
    • Scan Manager :扫描管理,引入第三方组件,进行镜像安全扫描
    • Registry Driver :镜像仓库驱动,目前使用docker registry
  • Job Service,执行异步任务,如同步镜像信息
  • Log Collector,统一日志收集器,收集各模块日志
  • GC Controller
  • Chart Museum,chart仓库服务,第三方
  • Docker Registry,镜像仓库服务
  • kv-storage,redis缓存服务,job service使用,存储job metadata
  • local/remote storage,存储服务,比如镜像存储
  • SQL Database,PostgreSQL,存储用户、项目等元数据

Harbor通常用作企业级镜像仓库服务,实际功能远不止镜像存储。

组件众多,因此使用helm部署

# 添加harbor chart仓库
$ helm repo add harbor https://helm.goharbor.io

# 搜索harbor的chart
$ helm search repo harbor

# 不知道如何部署,因此拉到本地
$ helm pull harbor/harbor
注:
# 添加harbor chart仓库
[root@k8s-master helm]# helm repo add harbor https://helm.goharbor.io
"harbor" has been added to your repositories

# 搜索harbor的chart
[root@k8s-master helm]# helm search repo harbor
NAME         	CHART VERSION	APP VERSION	DESCRIPTION                                       
harbor/harbor	1.7.0        	2.3.0      	An open source trusted cloud native registry th...
stable/harbor	10.2.2       	2.3.0      	Harbor is an an open source trusted cloud nativ...

# 不知道如何部署,因此拉到本地
[root@k8s-master helm]# helm pull harbor/harbor
[root@k8s-master helm]# ll
total 48
-rw-r--r-- 1 root root 48691 Jul 24 14:16 harbor-1.7.0.tgz

# 进行解压
[root@k8s-master helm]# tar -zxf harbor-1.7.0.tgz 

创建pvc

$ kubectl create namespace harbor
$ cat harbor-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-pv
  labels:
    pv: harbor-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.0.1.3:6789 # 这里是ceph的地址
    user: admin
    secretRef:
      name: ceph-admin-secret
      namespace: kube-system
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

$ cat harbor-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-data-pvc
  namespace: harbor
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
注:
# 创建命名空间
[root@k8s-master helm]# kubectl create namespace harbor
namespace/harbor created

[root@k8s-master helm]# vim harbor-pv.yaml 
[root@k8s-master helm]# kubectl apply -f harbor-pv.yaml 

[root@k8s-master helm]# kubectl apply -f harbor-pvc.yaml 
persistentvolumeclaim/harbor-data-pvc created

[root@k8s-master helm]# kubectl -n harbor  get  pvc
NAME              STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
harbor-data-pvc   Bound    harbor-pv   20Gi       RWX                           27s
[root@k8s-master helm]# kubectl -n harbor get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS     REASON   AGE
harbor-pv                                  20Gi       RWX            Retain           Bound    harbor/harbor-data-pvc                             4m31s
pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3   2Gi        RWO            Delete           Bound    default/cephfs-claim     dynamic-cephfs            7h52m


注释:
vim values.yaml
38-39
      core: harbor.luffy.com
      notary: harbor.luffy.com
120
externalURL: https://harbor.luffy.com  # 最终去访问的入口


198-242
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: "harbor-data-pvc"
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: ""
      subPath: "registry"  # 也可以写成 harbor/registry,这样会以子目录的形式分开
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:  # 一个单独的第三方组件
      existingClaim: "harbor-data-pvc"
      storageClass: ""
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "harbor-data-pvc"
      storageClass: ""
      subPath: "jobservice"
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: "harbor-data-pvc"
      storageClass: ""
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: "harbor-data-pvc"
      storageClass: ""
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: "harbor-data-pvc"
      storageClass: ""
      subPath: "trivy"
      accessMode: ReadWriteOnce
      size: 5Gi

580
trivy:
  # enabled the flag to enable Trivy scanner
  enabled: false
  
643
notary:
 enabled: false

706
    password: "Harbor12345"

修改harbor配置:

  • 开启ingress访问
  • externalURL,web访问入口,和ingress的域名相同
  • 持久化,使用PVC对接的cephfs
  • harborAdminPassword: “Harbor12345”,管理员默认账户 admin/Harbor12345
  • 开启chartmuseum
  • clair和trivy漏洞扫描组件,暂不启用

helm创建:

# 使用本地chart安装
$ helm install harbor ./harbor -n harbor
$ helm -n harbor uninstall harbor #卸载
注:
# 使用本地chart安装
[root@k8s-master harbor]# pwd 
/root/2021/week3/helm/harbor

编辑内容参考上面
[root@k8s-master harbor]# vi values.yaml

[root@k8s-master harbor]# helm install harbor ./ -n harbor
NAME: harbor
LAST DEPLOYED: Sat Jul 24 14:55:47 2021
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://harbor.luffy.com
For more details, please visit https://github.com/goharbor/harbor

[root@k8s-master harbor]# kubectl -n harbor get po
NAME                                 READY   STATUS              RESTARTS   AGE
harbor-chartmuseum-9f58c9fdb-bhhfx   0/1     Running             0          3s
harbor-core-64f9f7465-d9njv          0/1     ContainerCreating   0          3s
harbor-database-0                    0/1     Init:0/2            0          3s
harbor-jobservice-7d96c7f677-c78xz   0/1     Running             0          3s
harbor-portal-5bfdfcf9f6-n8gl9       0/1     Running             0          3s
harbor-redis-0                       0/1     ContainerCreating   0          3s
harbor-registry-7cf4c7b85b-hxmwt     0/2     ContainerCreating   0          3s

[root@k8s-master harbor]# kubectl -n harbor get po
NAME                                  READY   STATUS    RESTARTS   AGE
harbor-chartmuseum-856556776b-2jr6n   1/1     Running   0          55s
harbor-core-5b89d7f4b5-clvm5          1/1     Running   0          55s
harbor-database-0                     1/1     Running   0          55s
harbor-jobservice-5f78c76fb9-pdjbc    1/1     Running   0          55s
harbor-portal-5bfdfcf9f6-2jshm        1/1     Running   0          55s
harbor-redis-0                        1/1     Running   0          55s
harbor-registry-8449cd8f96-xztcn      2/2     Running   0          55s


# 若安装出错,可先卸载release并清理数据目录,修改配置后重新安装

数据权限问题:

  • 数据库目录初始化无权限
  • redis持久化数据目录权限导致无法登录
  • registry组件的镜像存储目录权限导致镜像推送失败
  • chartmuseum存储目录权限,导致chart推送失败

解决:

$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==

$ chown -R 999:999 database
$ chown -R 999:999 redis
$ chown -R 10000:10000 chartmuseum
$ chown -R 10000:10000 registry
注:
# 在k8s-slave1机器上进行挂载
mount -t ceph 10.0.1.3:6789:/ /mnt/cephfs -o name=admin,secret=AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==

# 为什么是999、10000?这些是各镜像中指定的运行用户的UID
[root@k8s-master harbor]# kubectl -n harbor exec -ti harbor-redis-0 -- bash
redis [ ~ ]$ echo $UID
999
其他组件的UID同样可以进入对应容器查看

# 进行授权
[root@k8s-slave1 cephfs]# pwd
/mnt/cephfs
[root@k8s-slave1 cephfs]# chown -R 999:999 database
[root@k8s-slave1 cephfs]# chown -R 999:999 redis
[root@k8s-slave1 cephfs]# chown -R 10000:10000 chartmuseum
[root@k8s-slave1 cephfs]# chown -R 10000:10000 registry


推送镜像到Harbor仓库

配置hosts及docker非安全仓库:

$ cat /etc/hosts
...
172.21.51.143 k8s-master harbor.luffy.com
...

$ cat /etc/docker/daemon.json
{                                            
  "insecure-registries": [                   
    "172.21.51.143:5000",                   
    "harbor.luffy.com"                     
  ],                                         
  "registry-mirrors" : [                     
    "https://8xpk5wnt.mirror.aliyuncs.com"   
  ]                                          
}                           

# 重启docker使配置生效
$ systemctl restart docker

# 使用账户密码登录admin/Harbor12345
$ docker login harbor.luffy.com

$ docker tag nginx:alpine harbor.luffy.com/library/nginx:alpine
$ docker push harbor.luffy.com/library/nginx:alpine
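补充:daemon.json 写错会导致 docker 起不来,重启前可以先校验一下 JSON 语法(假设本机装有 python3,此处在 /tmp 下演示):

```shell
# 先把要写入的配置放到临时文件,校验通过后再覆盖 /etc/docker/daemon.json
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["harbor.luffy.com"],
  "registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json 语法 OK"
```

校验通过后再执行 systemctl restart docker,能避免因一个逗号写错导致节点上所有容器重启失败。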
注:
# 添加hosts解析
[root@k8s-slave1 cephfs]# cat /etc/hosts
10.0.1.5 harbor.luffy.com

[root@k8s-slave1 cephfs]# cat /etc/docker/daemon.json 
{
  "insecure-registries": [    
    "10.0.1.5:5000",
    "harbor.luffy.com" 
  ],                          
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ]
}

# 重启docker
[root@k8s-slave1 ~]# systemctl restart docker
[root@k8s-slave1 ~]# docker images

# 使用账户密码登陆admin/Harbor12345
[root@k8s-slave1 ~]# docker login harbor.luffy.com
Username: admin
Password: Harbor12345 #密码
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 给镜像打标签
[root@k8s-slave1 ~]# docker tag nginx:alpine harbor.luffy.com/luffy/nginx:alpine

# 推送给harbor
[root@k8s-slave1 ~]# docker push  harbor.luffy.com/luffy/nginx:alpine
The push refers to repository [harbor.luffy.com/luffy/nginx]
7ebe47ef59e5: Pushed 
a40efec40891: Pushed 
d3a37e5dc9b6: Pushed 
2524a71e1218: Pushed 
b74fa78b1528: Pushed 
72e830a4dff5: Pushed 
alpine: digest: sha256:c35699d53f03ff9024ce2c8f6730567f183a15cc27b24453c5d90f0e7542daea size: 1568
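补充:镜像完整名称由"仓库地址/项目/镜像:标签"组成,其中项目(此处的 luffy)需要先在 Harbor 上存在,否则推送会被拒绝。可以用 shell 把名称拆开看:

```shell
# 拆解镜像名,理解 push 时各段的含义
image="harbor.luffy.com/luffy/nginx:alpine"
registry=${image%%/*}    # 仓库地址:harbor.luffy.com
rest=${image#*/}
project=${rest%%/*}      # Harbor中的项目:luffy
repo_tag=${rest#*/}      # 镜像:标签:nginx:alpine
echo "registry=$registry project=$project repo=$repo_tag"
```

docker 正是根据第一段的仓库地址决定把镜像推到哪个 registry,没有地址前缀时才默认推向 Docker Hub。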
推送chart到Harbor仓库

helm3默认没有安装helm push插件,需要手动安装。插件地址 https://github.com/chartmuseum/helm-push

安装插件:

$ helm plugin install https://github.com/chartmuseum/helm-push

离线安装:

$ mkdir helm-push
$ wget https://github.com/chartmuseum/helm-push/releases/download/v0.8.1/helm-push_0.8.1_linux_amd64.tar.gz
$ tar zxf helm-push_0.8.1_linux_amd64.tar.gz -C helm-push
$ helm plugin install ./helm-push

# 卸载命令
$ helm plugin uninstall   push
注:
# 选择离线安装
[root@k8s-master helm]# mkdir helm-push && cd helm-push
[root@k8s-master helm-push]# wget https://github.com/chartmuseum/helm-push/releases/download/v0.8.1/helm-push_0.8.1_linux_amd64.tar.gz
[root@k8s-master helm-push]# tar xf helm-push_0.8.1_linux_amd64.tar.gz 

# 此报错可以忽略,只要下面命令能正确执行就行
[root@k8s-master helm-push]# helm plugin install ./
sh: scripts/install_plugin.sh: No such file or directory
Error: plugin install hook for "push" exited with error
[root@k8s-master helm-push]# helm plugin ls
NAME	VERSION	DESCRIPTION                      
push	0.8.1  	Push chart package to ChartMuseum

添加repo

$ helm repo add myharbor https://harbor.luffy.com/chartrepo/luffy
# x509错误

# 添加证书信任,根证书为配置给ingress使用的证书
$ kubectl get secret harbor-ingress -n harbor -o jsonpath="{.data.ca\.crt}" | base64 -d >harbor.ca.crt

$ cp harbor.ca.crt /etc/pki/ca-trust/source/anchors
$ update-ca-trust enable; update-ca-trust extract

# 再次添加
$ helm repo add luffy https://harbor.luffy.com/chartrepo/luffy --ca-file=harbor.ca.crt  --username admin --password Harbor12345

$ helm repo ls
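补充:导出 harbor.ca.crt 后,可以先用 openssl 查看证书信息,确认确实是签给 ingress 的那张根证书。下面现场生成一张自签证书来演示命令用法(假设环境装有 openssl;实际操作时应把 -in 换成上面导出的 harbor.ca.crt):

```shell
# 生成一张一天有效期的自签演示证书,再查看其 subject 字段
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -subj "/CN=demo-ca" 2>/dev/null
openssl x509 -in /tmp/demo-ca.crt -noout -subject
```

x509 报错通常就是因为客户端不信任这张证书,确认证书无误后再做系统级信任(update-ca-trust)即可。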

注:
# 添加hosts解析
[root@k8s-master helm-push]# cat /etc/hosts
10.0.1.5 harbor.luffy.com

# 报错解决办法如下
[root@k8s-master helm-push]# helm repo add myharbor https://harbor.luffy.com/chartrepo/luffy
Error: looks like "https://harbor.luffy.com/chartrepo/luffy" is not a valid chart repository or cannot be reached: Get https://harbor.luffy.com/chartrepo/luffy/index.yaml: x509: certificate signed by unknown authority

# 添加证书信任,根证书为配置给ingress使用的证书
[root@k8s-master helm]# kubectl get secret harbor-ingress -n harbor -o jsonpath="{.data.ca\.crt}" | base64 -d >harbor.ca.crt

# 添加或删除证书的操作方式相同,修改后都需要执行update-ca-trust extract使其生效
[root@k8s-master helm]# cp harbor.ca.crt /etc/pki/ca-trust/source/anchors
[root@k8s-master helm]# update-ca-trust enable; update-ca-trust extract


# 再次添加
[root@k8s-master helm]# helm repo add luffy https://harbor.luffy.com/chartrepo/luffy --ca-file=harbor.ca.crt  --username admin --password Harbor12345
"luffy" has been added to your repositories

[root@k8s-master helm]# helm repo ls
NAME  	URL                                     
stable	https://charts.bitnami.com/bitnami      
harbor	https://helm.goharbor.io                
luffy 	https://harbor.luffy.com/chartrepo/luffy

推送chart到仓库:

$ helm push harbor luffy --ca-file=harbor.ca.crt -u admin -p Harbor12345
harbor:本地chart目录的名称
luffy:目标chart仓库的名字

注意:harbor下一定不能有多余的文件,否则会报错,chart推不上仓库
注:
[root@k8s-master helm]# helm push harbor luffy --ca-file=harbor.ca.crt -u admin -p Harbor12345
Pushing harbor-1.7.0.tgz to luffy...
Done.

# harbor 是本地chart目录
# luffy 是chart仓库名称
课程小结

本章学习了k8s的进阶内容:

  1. 学习k8s在etcd中数据的存储,掌握etcd的基本操作命令

  2. 理解k8s调度的过程:预选及优选,以及影响调度策略的设置

    (图:images/kube-scheduler-process.png)

  3. Flannel网络的原理学习,了解网络的流向,帮助定位问题

    (图:images/flannel-actual.png)

  4. 认证与授权,掌握kubectl、kubelet、rbac及二次开发如何调度API

    (图:images/rbac-2.jpg)

  5. 利用HPA进行业务动态扩缩容,通过metrics-server了解整个k8s的监控体系

    (图:images/hpa-prometheus-custom.png)

  6. PV + PVC

    (图:images/storage-class.png)

  7. Helm

    (图:images/helm3.jpg)
