First of all, how does an engineer explain Kubernetes in a way his wife can understand?


        The company recently had to move, and in the end picked a spot with cheap rent where a small office building had already been put up, with a sign on the roof reading "Welcome to Kubernetes". The site manager, Mr.user, surveyed the empty building and started working out how to arrange everything inside.
But every trade is a world of its own: he knows nothing about fit-out work, let alone plumbing, electrical, and low-voltage cabling. So he had to call in Mr.kubelet, the building superintendent, who knows every inch of the place, the floor area of every room, even which balcony bay window counts as free bonus space. Mr.user told him: to start with I need 500 desks, 20 meeting rooms, 4 pantries, and 3 print rooms; on the security side I need access control, cameras, fire-protection equipment, and so on. Oh, and most importantly, every restroom must have three squat toilets and one seated toilet, to keep everyone happy.
        Of course, Mr.user is a professional at stating requirements and planning. He agreed with Mr.kubelet on a set of requirement-list templates: 员工座位.yaml (seating), 会议室.yaml (meeting rooms), 安全系统.yaml (security), and 个人卫生.yaml (sanitation). The seating spec requires at least 500 seats plus 10 held in reserve; the security spec requires an access gate at the main entrance paired with a camera, fixes the location of the monitoring room, and routes cabling so the camera feed is shown there; the sanitation spec lays down the dimensions of every squat toilet and every seated toilet, and so on.
        Mr.kubelet took the stack of lists and lit a cigar. He knew perfectly well he would not have to do any of this rough work himself, because he has a contractor friend, Mr.Docker, who complains to him every day: no projects to work on, the carpenters, plasterers, electricians, and painters are all sitting on the bench, and the crew is about to fall apart. Well, here comes a project. Mr.kubelet dialed Mr.Docker's brick phone and was just about to walk him through the details when the impatient Docker cut in, annoyed: "We've been friends for years, why all the talk? Just fax the lists over and I'll take care of it!" and hung up. That is how agile he is: act first, never blabber.
        On the other end, Mr.Docker, having just hung up, cupped both hands at the fax machine's output slot like a midwife ready to receive a newborn. With a clatter and a hiss the list rolled out, a long one. Before the ink had even dried, he bellowed: "Brothers, time to work!"
Immediately the carpenters, plasterers, electricians, and painters threw down their playing cards, stubbed out their cigarettes, and gathered in front of Mr.Docker. The electrician found that the light bulbs specified on the list came through a channel he knew well and were easy to source. The painter, however, found that the paint specified was of too low a standard and would harm the employees' health, so he decided to switch to a better manufacturer. But with no supply channel of his own, what to do? No problem: go out and look around, find a suitable manufacturer, note it on the list, and source from them from now on. Everyone set up their piece of the hardware according to the list; it was routine work for them, and all the facilities were installed in a very short time. The low-voltage electrician, Mr.CoreDns, was impressive too: to keep the network flowing in every room, he planned the cabling in advance and wrote each employee's name directly on the phones, so you could call someone simply by dialing their name.
        When the carpenters, plasterers, electricians, and painters finished, they all cleared off the site, but they left the monitoring system behind: if any piece of equipment breaks, they immediately replace it with a fresh unit of a new batch. In this way the employees' office environment gets long-term, stable support.
        That is roughly the idea. You are Mr.user: you may not know how to build all these systems and applications, but you can tell Mr.kubelet what you need, and he will have Mr.Docker source the materials, install, and deploy. Mr.Docker is no fool either: on the principle of earning from the same blueprint several times over, any design that can be reused is taken off the shelf, and everything else is outsourced.
        While Kubernetes is running, every piece of software and hardware is under monitoring; wherever something goes wrong or crashes, it is simply swapped for a new one and the old one is destroyed, and Mr.user's employees never notice a thing.


    The point of spelling all this out in plain language is to lift the veil of mystery around Kubernetes and Docker, spread a bit of new knowledge, and take away the fear.

 

The goal of the following exercise: build a 2-node Kubernetes cluster, deploy the Dashboard, and enable the monitoring UI. This will serve as a platform for future application-deployment experiments and give a first taste of how powerful Kubernetes is.

Hardware: MacBook Pro 2015 13" with Touch Bar.  Software: VirtualBox, CentOS 7, with NAT Network.

Node 1: k8s01    1 CPU    2 GB RAM    Master role    IP: 10.0.2.15
Node 2: k8s02    1 CPU    1 GB RAM    Worker role    IP: 10.0.2.8

# Preface: the whole setup was built from nothing, and every detail took a fair amount of time to verify and to climb out of pitfalls. It was done against the then-latest version, v1.12.1. Corrections are welcome if anything is off.

# Note: this is best done in an environment with unrestricted internet access (able to reach Google's and Docker's registries).

## Preparing the operating system
# Make sure /etc/hosts is correct
[root@k8s01 ~]# vi /etc/hosts
10.0.2.15   k8s01
10.0.2.8    k8s02

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0

vi /etc/selinux/config
SELINUX=disabled
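
# (Not part of the original run: the same change can be made non-interactively; a sketch assuming the
#  stock /etc/selinux/config with a SELINUX=enforcing line)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config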


# Swap must be disabled
[root@k8s01 ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       1048572 1024    -1
[root@k8s01 ~]# swapoff -a
[root@k8s01 ~]# vi /etc/fstab   # comment the swap device
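
# (Not part of the original run: the swap entry can also be commented out non-interactively; a sketch
#  assuming a standard fstab where the swap line contains " swap ")
sed -i '/ swap / s/^/#/' /etc/fstab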


# Installing a Container Runtime Interface (CRI) runtime
# Since v1.6.0, Kubernetes has enabled the use of the Container Runtime Interface (CRI) by default. The container runtime used by default is Docker, which is enabled through the built-in dockershim CRI implementation inside the kubelet.
# On each of your machines, install Docker. Version 18.06 is recommended, but 1.11, 1.12, 1.13 and 17.03 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
# Official Docker installation: configure the official yum repo first, then install a specific version.
# In practice Kubernetes has compatibility requirements on the Docker version: the Docker that comes with CentOS's own yum repos is quite old and will cause kubelet to fail to start during the init phase.
# Install Docker CE 18.06 from Docker's CentOS repositories:

## Install prerequisites.
yum install yum-utils device-mapper-persistent-data lvm2

## Add docker repository.
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

## Install docker.
yum update && yum install docker-ce-18.06.1.ce

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload && systemctl restart docker

# Make the Docker daemon start automatically on boot
[root@k8s01 yum.repos.d]# systemctl enable docker && systemctl start docker
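
# (Optional sanity check, not in the original log: confirm the installed Docker version and that the
#  systemd cgroup driver from daemon.json took effect)
docker info 2>/dev/null | grep -Ei 'server version|cgroup driver'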


# Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains.
# This is a requirement for some CNI plugins to work
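
# (A sketch of applying the setting now and keeping it across reboots; the file name /etc/sysctl.d/k8s.conf
#  is an arbitrary choice, and the same should be done on every node)
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system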

 

# Install the Kubernetes components, method 1 (recommended).
# Configure the Kubernetes yum repo. If you cannot get past the firewall, use the domestic mirror below and do not enable the GPG check.
[root@k8s01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
#baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64    ## domestic mirror that currently works
baseurl=https://yum.kubernetes.io/repos/kubernetes-el7-x86_64
#baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
gpgcheck=1
enabled=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

# Install the core components with yum
[root@k8s01 yum.repos.d]# yum install kubelet kubeadm kubectl kubernetes-cni
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.tuna.tsinghua.edu.cn
 * extras: mirror.bit.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.12.1-0 will be installed
--> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.12.1-0.x86_64
---> Package kubectl.x86_64 0:1.12.1-0 will be installed
---> Package kubelet.x86_64 0:1.12.1-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.12.1-0.x86_64
---> Package kubernetes-cni.x86_64 0:0.6.0-0 will be installed
--> Running transaction check
---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================================================================
 Package                                                Arch                                           Version                                                Repository                                          Size
=======================================================================================================================================================================================================================
Installing:
 kubeadm                                                x86_64                                         1.12.1-0                                               kubernetes                                         7.2 M
 kubectl                                                x86_64                                         1.12.1-0                                               kubernetes                                         7.7 M
 kubelet                                                x86_64                                         1.12.1-0                                               kubernetes                                          19 M
 kubernetes-cni                                         x86_64                                         0.6.0-0                                                kubernetes                                         8.6 M
Installing for dependencies:
 cri-tools                                              x86_64                                         1.12.0-0                                               kubernetes                                         4.2 M
 socat                                                  x86_64                                         1.7.3.2-2.el7                                          base                                               290 k

Transaction Summary
=======================================================================================================================================================================================================================
Install  4 Packages (+2 Dependent packages)

Total download size: 47 M
Installed size: 237 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/kubernetes/packages/9c31cf74973740c100242b0cfc8d97abe2a95a3c126b1c4391c9f7915bdfd22b-kubeadm-1.12.1-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY 00:00:34 ETA
Public key for 9c31cf74973740c100242b0cfc8d97abe2a95a3c126b1c4391c9f7915bdfd22b-kubeadm-1.12.1-0.x86_64.rpm is not installed
(1/6): 9c31cf74973740c100242b0cfc8d97abe2a95a3c126b1c4391c9f7915bdfd22b-kubeadm-1.12.1-0.x86_64.rpm                                                                                             | 7.2 MB  00:00:07
(2/6): ed7d25314d0fc930c9d0bae114016bf49ee852b3c4f243184630cf2c6cd62d43-kubectl-1.12.1-0.x86_64.rpm                                                                                             | 7.7 MB  00:00:05
(3/6): 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm                                                                                           | 4.2 MB  00:00:13
(4/6): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                                                                                           | 290 kB  00:00:00
(5/6): fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0-0.x86_64.rpm                                                                                       | 8.6 MB  00:00:08
(6/6): c4ebaa2e1ce38cda719cbe51274c4871b7ccb30371870525a217f6a430e60e3a-kubelet-1.12.1-0.x86_64.rpm                                                                                             |  19 MB  00:00:09
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                  2.1 MB/s |  47 MB  00:00:22
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
 From       : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                                          1/6
  Installing : kubernetes-cni-0.6.0-0.x86_64                                                                                                                                                                       2/6
  Installing : kubelet-1.12.1-0.x86_64                                                                                                                                                                             3/6
  Installing : kubectl-1.12.1-0.x86_64                                                                                                                                                                             4/6
  Installing : cri-tools-1.12.0-0.x86_64                                                                                                                                                                           5/6
  Installing : kubeadm-1.12.1-0.x86_64                                                                                                                                                                             6/6
  Verifying  : cri-tools-1.12.0-0.x86_64                                                                                                                                                                           1/6
  Verifying  : kubectl-1.12.1-0.x86_64                                                                                                                                                                             2/6
  Verifying  : kubeadm-1.12.1-0.x86_64                                                                                                                                                                             3/6
  Verifying  : kubelet-1.12.1-0.x86_64                                                                                                                                                                             4/6
  Verifying  : kubernetes-cni-0.6.0-0.x86_64                                                                                                                                                                       5/6
  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                                          6/6

Installed:
  kubeadm.x86_64 0:1.12.1-0                           kubectl.x86_64 0:1.12.1-0                           kubelet.x86_64 0:1.12.1-0                           kubernetes-cni.x86_64 0:0.6.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0                         socat.x86_64 0:1.7.3.2-2.el7

Complete!


# Make the core kubelet service start automatically
[root@k8s01 yum.repos.d]# systemctl enable kubelet && systemctl start kubelet
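
# (Optional check, not in the original log: confirm the installed versions match v1.12.1)
kubeadm version -o short
kubectl version --client --short
kubelet --version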

 

 


# Install the Kubernetes components, method 2 (manual binary install).
# Installing kubeadm, kubelet and kubectl
# Install CNI plugins (required for most pod networks):

CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz


#Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))

CRICTL_VERSION="v1.11.1"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz

#Install kubeadm, kubelet, kubectl and add a kubelet systemd service:
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
RELEASE="v1.12.1"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

#Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet

# Note: wget supports resuming interrupted downloads!

 

 

# Init the master. If flannel is to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
# Note: the first time this step runs it needs unrestricted internet access, otherwise several checks fail and the required Docker images cannot be pulled. If the images have already been downloaded, this step will not download them again.
# We decided to use Weave rather than flannel, so --pod-network-cidr=10.244.0.0/16 is not actually needed (the command below still passes it; Weave ignores it and allocates pod IPs from its own default range, as the 10.32.x.x addresses later show).

[root@k8s01 bin]# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.525972 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s01" as an annotation
[bootstraptoken] using token: 0xtxd9.uans1fblj065xrru
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.2.15:6443 --token 0xtxd9.uans1fblj065xrru --discovery-token-ca-cert-hash sha256:99308662b2e563c210ac345333a736bdd7653a6aacb5480bda5f03239f8a0875


#Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet
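
# (Not shown in the original log, but implied by the kubeadm output above and by the later copy of
#  ~/.kube/config to k8s02: set up the admin kubeconfig before using kubectl, here done as root)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config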


# Check the downloaded images
[root@k8s01 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.1             61afff57f010        2 weeks ago         96.6MB
k8s.gcr.io/kube-apiserver            v1.12.1             dcb029b5e3ad        2 weeks ago         194MB
k8s.gcr.io/kube-controller-manager   v1.12.1             aa2dd57c7329        2 weeks ago         164MB
k8s.gcr.io/kube-scheduler            v1.12.1             d773ad20fd80        2 weeks ago         58.3MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        4 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        7 weeks ago         39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        10 months ago       742kB

 

# Configure / Verify cgroup driver used by kubelet on Master Node
When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.
[root@k8s01 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni

 

[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE     IP          NODE     NOMINATED NODE
kube-system   coredns-576cbf47c7-9v7dl        0/1     ContainerCreating   0          3m23s   <none>      k8s01    <none>
kube-system   coredns-576cbf47c7-vxtwm        0/1     Pending             0          3m23s   <none>      <none>   <none>
kube-system   etcd-k8s01                      1/1     Running             0          2m16s   10.0.2.15   k8s01    <none>
kube-system   kube-apiserver-k8s01            1/1     Running             0          2m19s   10.0.2.15   k8s01    <none>
kube-system   kube-controller-manager-k8s01   1/1     Running             0          2m18s   10.0.2.15   k8s01    <none>
kube-system   kube-proxy-hn6lj                1/1     Running             0          3m23s   10.0.2.15   k8s01    <none>
kube-system   kube-scheduler-k8s01            1/1     Running             0          2m31s   10.0.2.15   k8s01    <none>


[root@k8s01 ~]# kubectl describe pod coredns-576cbf47c7-9v7dl -n kube-system | grep Events -A 20
Events:
  Type     Reason           Age                  From               Message
  ----     ------           ----                 ----               -------
  Normal   Scheduled        3m13s                default-scheduler  Successfully assigned kube-system/coredns-576cbf47c7-9v7dl to k8s01
  Warning  NetworkNotReady  9s (x15 over 3m13s)  kubelet, k8s01     network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]

 

# Installing a pod network add-on (flannel). Shown here for reference; this walkthrough ends up installing Weave Net instead (see below).
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work.
Note that flannel works on amd64, arm, arm64 and ppc64le.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

 

# Check the status of each pod, including coredns
[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE     IP          NODE     NOMINATED NODE
kube-system   coredns-576cbf47c7-9v7dl        0/1     ContainerCreating   0          5m9s    <none>      k8s01    <none>
kube-system   coredns-576cbf47c7-vxtwm        0/1     Pending             0          5m9s    <none>      <none>   <none>
kube-system   etcd-k8s01                      1/1     Running             0          4m2s    10.0.2.15   k8s01    <none>
kube-system   kube-apiserver-k8s01            1/1     Running             0          4m5s    10.0.2.15   k8s01    <none>
kube-system   kube-controller-manager-k8s01   1/1     Running             0          4m4s    10.0.2.15   k8s01    <none>
kube-system   kube-proxy-hn6lj                1/1     Running             0          5m9s    10.0.2.15   k8s01    <none>
kube-system   kube-scheduler-k8s01            1/1     Running             0          4m17s   10.0.2.15   k8s01    <none>


# Look at the details of the ContainerCreating pod; the cause is "network is not ready"
[root@k8s01 ~]# kubectl describe pod coredns-576cbf47c7-9v7dl -n kube-system | grep Events -A 20
Events:
  Type     Reason           Age                   From               Message
  ----     ------           ----                  ----               -------
  Normal   Scheduled        5m53s                 default-scheduler  Successfully assigned kube-system/coredns-576cbf47c7-9v7dl to k8s01
  Warning  NetworkNotReady  28s (x26 over 5m53s)  kubelet, k8s01     network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]


# Look at the details of the Pending pod; the cause is taints
[root@k8s01 ~]# kubectl describe pod coredns-576cbf47c7-vxtwm -n kube-system | grep Events -A 20
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  56s (x25 over 4m39s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

 

# In a cluster initialized with kubeadm, pods are not scheduled onto the master node for safety reasons; in other words, the master does not take workloads. That is because the master node k8s01 has been tainted with node-role.kubernetes.io/master:NoSchedule:
[root@k8s01 ~]# kubectl describe node k8s01 | grep -A 10 Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule

# Since this is only a test environment, remove the taints so that k8s01 also takes workloads:
[root@k8s01 ~]# kubectl taint nodes k8s01 node-role.kubernetes.io/master-
node/k8s01 untainted

[root@k8s01 ~]# kubectl taint nodes k8s01 node.kubernetes.io/not-ready-
node/k8s01 untainted


# wait a moment
[root@k8s01 ~]# kubectl describe node k8s01 | grep Taint
Taints:             <none>


# The previously Pending pod has now moved to ContainerCreating, and its events also show "network is not ready"
[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE   IP          NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-9v7dl        0/1     ContainerCreating   0          13m   <none>      k8s01   <none>
kube-system   coredns-576cbf47c7-vxtwm        0/1     ContainerCreating   0          13m   <none>      k8s01   <none>
kube-system   etcd-k8s01                      1/1     Running             0          12m   10.0.2.15   k8s01   <none>
kube-system   kube-apiserver-k8s01            1/1     Running             0          12m   10.0.2.15   k8s01   <none>
kube-system   kube-controller-manager-k8s01   1/1     Running             0          12m   10.0.2.15   k8s01   <none>
kube-system   kube-proxy-hn6lj                1/1     Running             0          13m   10.0.2.15   k8s01   <none>
kube-system   kube-scheduler-k8s01            1/1     Running             0          12m   10.0.2.15   k8s01   <none>


[root@k8s01 ~]# kubectl describe pod coredns-576cbf47c7-vxtwm -n kube-system | grep Events -A 20
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  4m21s (x64 over 14m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  NetworkNotReady   8s (x6 over 73s)      kubelet, k8s01     network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]

 

# Note that kubeadm init above already prompted: "You should now deploy a pod network to the cluster"
# Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work
# Weave Net can be installed onto your CNI-enabled Kubernetes cluster with a single command:
$ sysctl net.bridge.bridge-nf-call-iptables=1
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.

Note: If using the Weave CNI Plugin from a prior full install of Weave Net with your cluster, you must first uninstall it before applying the Weave-kube addon. Shut down Kubernetes, and on all nodes perform the following:

    weave reset
    Remove any separate provisions you may have made to run Weave at boot-time, e.g. systemd units
    rm /opt/cni/bin/weave-*


[root@k8s01 ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

# To remove Weave Net again later, the same manifest can be deleted:
kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"


[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE   IP          NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-9v7dl        0/1     ContainerCreating   0          19m   <none>      k8s01   <none>
kube-system   coredns-576cbf47c7-vxtwm        0/1     ContainerCreating   0          19m   <none>      k8s01   <none>
kube-system   etcd-k8s01                      1/1     Running             0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-apiserver-k8s01            1/1     Running             0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-controller-manager-k8s01   1/1     Running             0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-proxy-hn6lj                1/1     Running             0          19m   10.0.2.15   k8s01   <none>
kube-system   kube-scheduler-k8s01            1/1     Running             0          18m   10.0.2.15   k8s01   <none>
kube-system   weave-net-nhwtn                 0/2     ContainerCreating   0          35s   10.0.2.15   k8s01   <none>

# After a short wait, once the weave-net pod is running successfully, the coredns pods come up as well
[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-9v7dl        1/1     Running   0          19m   10.32.0.2   k8s01   <none>
kube-system   coredns-576cbf47c7-vxtwm        1/1     Running   0          19m   10.32.0.3   k8s01   <none>
kube-system   etcd-k8s01                      1/1     Running   0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-apiserver-k8s01            1/1     Running   0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   0          18m   10.0.2.15   k8s01   <none>
kube-system   kube-proxy-hn6lj                1/1     Running   0          19m   10.0.2.15   k8s01   <none>
kube-system   kube-scheduler-k8s01            1/1     Running   0          18m   10.0.2.15   k8s01   <none>
kube-system   weave-net-nhwtn                 2/2     Running   0          68s   10.0.2.15   k8s01   <none>


# Test DNS resolution.
# busybox is a compact toolbox that bundles hundreds of common Linux commands and tools in only about 4 MB; it is handy for all kinds of quick checks and is known as the "Swiss Army knife of Linux".
[root@k8s01 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.

If you don't see a command prompt, try pressing enter.

[ root@curl-5cc7b478b6-g6jwk:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-5cc7b478b6-v2rpx:/ ]$

 

 

 

#### add node k8s02 into cluster
[root@k8s01 ~]# more /etc/resolv.conf
# Generated by NetworkManager
search cn.ibm.com
nameserver 9.0.149.140
nameserver 9.0.146.50


# By default, tokens expire after 24 hours.
[root@k8s01 ~]# kubeadm token create --print-join-command
kubeadm join 10.0.2.15:6443 --token n2dfld.cv42sn1ct6w88b0i --discovery-token-ca-cert-hash sha256:2773b5a273cd029b1817afac3d226de46c4f66a28876958817edd21c8e5415ce

 


[root@k8s02 ~]# kubeadm join 10.0.2.15:6443 --token n2dfld.cv42sn1ct6w88b0i --discovery-token-ca-cert-hash sha256:2773b5a273cd029b1817afac3d226de46c4f66a28876958817edd21c8e5415ce
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

[root@k8s02 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

[root@k8s02 ~]# kubeadm join 10.0.2.15:6443 --token n2dfld.cv42sn1ct6w88b0i --discovery-token-ca-cert-hash sha256:2773b5a273cd029b1817afac3d226de46c4f66a28876958817edd21c8e5415ce
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
[discovery] Requesting info from "https://10.0.2.15:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.2.15:6443"
[discovery] Successfully established connection with API Server "10.0.2.15:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

 

 

# Configure the new node so that it can also connect to and manage the cluster
[root@k8s02 ~]# scp -r root@k8s01:~/.kube ~/
root@k8s01's password:
config                                                                                                                                                100% 5445     3.8MB/s   00:00
............

[root@k8s02 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   3h56m   v1.12.1
k8s02   Ready    <none>   8m27s   v1.12.1


[root@k8s02 ~]# kubectl describe node k8s02
Name:               k8s02
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s02
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 24 Oct 2018 14:25:21 +0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 24 Oct 2018 14:26:18 +0800   Wed, 24 Oct 2018 14:26:18 +0800   WeaveIsUp                    Weave pod has set this
  OutOfDisk            False   Wed, 24 Oct 2018 14:28:11 +0800   Wed, 24 Oct 2018 14:25:21 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Wed, 24 Oct 2018 14:28:11 +0800   Wed, 24 Oct 2018 14:25:21 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Wed, 24 Oct 2018 14:28:11 +0800   Wed, 24 Oct 2018 14:25:21 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Wed, 24 Oct 2018 14:28:11 +0800   Wed, 24 Oct 2018 14:25:21 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Wed, 24 Oct 2018 14:28:11 +0800   Wed, 24 Oct 2018 14:26:31 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.8
  Hostname:    k8s02
Capacity:
 attachable-volumes-azure-disk:  16
 cpu:                            1
 ephemeral-storage:              39818244Ki
 hugepages-2Mi:                  0
 memory:                         1015476Ki
 pods:                           110
Allocatable:
 attachable-volumes-azure-disk:  16
 cpu:                            1
 ephemeral-storage:              36696493610
 hugepages-2Mi:                  0
 memory:                         913076Ki
 pods:                           110
System Info:
 Machine ID:                 511753adc7ca469581f2583b7fb9d202
 System UUID:                40EC2E93-1FEC-4DBF-B8AC-E98477A79189
 Boot ID:                    96b74b33-1706-4d7e-a23b-b9dcea2c1d16
 Kernel Version:             3.10.0-862.14.4.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.12.1
 Kube-Proxy Version:         v1.12.1
Non-terminated Pods:         (2 in total)
  Namespace                  Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                ------------  ----------  ---------------  -------------
  kube-system                kube-proxy-bqb9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-5nrcz     20m (2%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                       Requests  Limits
  --------                       --------  ------
  cpu                            20m (2%)  0 (0%)
  memory                         0 (0%)    0 (0%)
  attachable-volumes-azure-disk  0         0
Events:
  Type    Reason                   Age                    From               Message
  ----    ------                   ----                   ----               -------
  Normal  Starting                 2m52s                  kubelet, k8s02     Starting kubelet.
  Normal  NodeHasSufficientDisk    2m52s (x2 over 2m52s)  kubelet, k8s02     Node k8s02 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet, k8s02     Node k8s02 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet, k8s02     Node k8s02 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet, k8s02     Node k8s02 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m52s                  kubelet, k8s02     Updated Node Allocatable limit across pods
  Normal  Starting                 2m18s                  kube-proxy, k8s02  Starting kube-proxy.
  Normal  NodeReady                102s                   kubelet, k8s02     Node k8s02 status is now: NodeReady


[root@k8s01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s01 ~]# kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
KubeDNS is running at https://10.0.2.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 

[root@k8s01 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE
default       curl-5cc7b478b6-g6jwk           1/1     Running   5          3h47m   10.32.0.16   k8s01   <none>
kube-system   coredns-576cbf47c7-9v7dl        1/1     Running   4          4h9m    10.32.0.14   k8s01   <none>
kube-system   coredns-576cbf47c7-vxtwm        1/1     Running   4          4h9m    10.32.0.15   k8s01   <none>
kube-system   etcd-k8s01                      1/1     Running   4          4h7m    10.0.2.15    k8s01   <none>
kube-system   kube-apiserver-k8s01            1/1     Running   4          4h7m    10.0.2.15    k8s01   <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   4          4h7m    10.0.2.15    k8s01   <none>
kube-system   kube-proxy-bqb9f                1/1     Running   0          6m20s   10.0.2.8     k8s02   <none>
kube-system   kube-proxy-hn6lj                1/1     Running   4          4h9m    10.0.2.15    k8s01   <none>
kube-system   kube-scheduler-k8s01            1/1     Running   4          4h8m    10.0.2.15    k8s01   <none>
kube-system   weave-net-5nrcz                 2/2     Running   0          6m20s   10.0.2.8     k8s02   <none>
kube-system   weave-net-nhwtn                 2/2     Running   11         3h50m   10.0.2.15    k8s01   <none>

 

 

# To remove a node from the cluster, run on the master node:
kubectl drain k8s02 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s02

# Run on k8s02:
kubeadm reset

# (the interface cleanup below is written for flannel; on this Weave-based cluster you would instead run `weave reset` as noted earlier)
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

# Then run on the master:
kubectl delete node k8s02

 

 

 

###########################################################################################
## Dashboard deployment

[root@k8s01 ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

 

In this guide, we will find out how to create a new user using Service Account mechanism of Kubernetes, grant this user admin permissions and log in to Dashboard using bearer token tied to this user.

[root@k8s01 ~]# cat > dashboard-adminuser.yaml <<EOF
#Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
#Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

EOF
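
# (The original log does not show this step, but the manifest presumably gets applied next:)
kubectl apply -f dashboard-adminuser.yaml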

 

 


## Access the Dashboard directly through the API server; this is also the more recommended approach.
The Dashboard address is:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/, but the response may look like this:

{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
"reason": "Forbidden",
"details": {
"name": "https:kubernetes-dashboard:",
"kind": "services"
},
"code": 403
}


This is because recent Kubernetes releases enable RBAC by default and assign unauthenticated users a default identity: anonymous.

The API server authenticates clients with certificates, so we first need to create one:
a. First locate the kubectl config file

By default it is /etc/kubernetes/admin.conf; we already copied it to $HOME/.kube/config earlier.
b. Then generate a p12 file from the client-certificate-data and client-key-data fields

The following commands can be used:

    # extract and base64-decode client-certificate-data
    grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
    # extract and base64-decode client-key-data
    grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
    # generate the p12 bundle
    openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

c. Finally, import the p12 file generated above into your browser, then open the Dashboard link.

# print the "admin-user" token from the master
kubectl describe -n kube-system secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')


# This token has more limited privileges
kubectl describe -n kube-system secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk -F ' ' '{print($1)}')


# Sample output
[root@k8s01 ~]# kubectl describe -n kube-system secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-z2gx6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: f7927891-d8e1-11e8-84f3-080027bb3db5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXoyZ3g2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmNzkyNzg5MS1kOGUxLTExZTgtODRmMy0wODAwMjdiYjNkYjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Z816hLwFZlMnvYNA2jZNIoRA-mczbcdjUA_6eieCE6hy-QokZXLt-HjpX7wZ_gYRB03-YYRin2tntoO-EIWRbLdc69KMpTBKMdm7KvOYFMXdVeuu4o7q22gF0zBT1pBlrHOwPwvnUOAw-DgW9ju4hirYbkSAtSD14zQ5vNrnvJaoS2iHEt0q-6Td8D5jUy1EFMuyd1Aq5_yNPOhxct-GDJbZNzM3oR8RoHyCyC6er3RU0FfJ-UppsKj-B-zo_wnqymOIOdCCl3wCumh7ny3g0bAOBHmsKwT0wodaxd7ozbt0lGpT8SRgFzZcVyv-S4OOAJrXbxmryGplnIJEhf98ug

 


# use the above token to login
https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

#############

 

 

############  The method below exposes the Dashboard directly to the outside world via a NodePort so it can be reached; not recommended ⬇️

# Change type: ClusterIP to type: NodePort and save file.
$ kubectl -n kube-system edit service kubernetes-dashboard
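
# (Alternative to the interactive edit, not in the original: patch the service type directly)
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'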


[root@k8s01 linux-amd64]# kubectl get service -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   23h
kubernetes-dashboard   NodePort    10.101.109.220   <none>        443:30603/TCP   6m31s
tiller-deploy          ClusterIP   10.99.5.96       <none>        44134/TCP       30m

 

# print token from master
kubectl describe -n kube-system secret/$(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk -F ' ' '{print($1)}')

# https://127.0.0.1:30603/  use token to login
############### ⬆️

 


# Deploy heapster to provide performance-metric monitoring

# Clone the latest project from the official GitHub repository
[root@k8s01 ~]# git clone https://github.com/kubernetes/heapster.git --depth=1


# In principle none of the YAML files needs modification. The core one is heapster.yaml, and its most important parameter is --source; if heapster has trouble reaching the Kubernetes API server, this is usually the parameter that needs changing.
[root@k8s01 ~]# cat ~/heapster/deploy/kube-config/influxdb/heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster


# heapster reads metrics via the kubelet's read-only port 10255, which is no longer enabled by default as of v1.12, so it has to be turned on via a kubelet startup flag.
[root@k8s01 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@k8s01 ~]# vi /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --read-only-port=10255
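
# (Not shown in the original log: kubelet must be restarted for the new flag to take effect, and the
#  same change is presumably needed on every node heapster should scrape)
systemctl daemon-reload && systemctl restart kubelet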


# Manually create all the components
[root@k8s01 ~]# cd heapster/deploy/kube-config/influxdb/
[root@k8s01 influxdb]# kubectl create -f .
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created
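
# (Optional check, not in the original log: heapster.yaml above labels its pods task=monitoring, and the
#  grafana/influxdb manifests in the same directory are assumed to do the same)
kubectl get pods -n kube-system -l task=monitoring -o wide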

# Don't forget to create the RBAC role for heapster
[root@k8s01 ~]# kubectl create -f ~/heapster/deploy/kube-config/rbac/heapster-rbac.yaml


# If a pod has trouble starting, you can check its log
[root@k8s01 ~]# kubectl logs -f pods/heapster-77755c87b6-c5cnh -n kube-system
[root@k8s01 ~]# kubectl logs -f $(kubectl get pods --namespace=kube-system | grep heapster | awk -F ' ' '{print $1}') -n kube-system


# Once the heapster pod starts successfully and its log shows that database initialization is complete, performance statistics will appear in the Dashboard.

 
