Setting Up a Kubernetes Cluster with Vagrant

A realistic Kubernetes cluster is a distributed system with at least three nodes. If you want to learn Kubernetes, or just need a local test environment, Vagrant makes it easy to build one.

1. Prerequisites

  • At least 8 GB of RAM on the host
  • Vagrant and VirtualBox installed ahead of time
  • The Kubernetes release tarballs downloaded ahead of time; the following two files will later be installed into the VMs:
    • kubernetes-client-linux-amd64.tar.gz
    • kubernetes-server-linux-amd64.tar.gz

2. Installing the Cluster via a Vagrantfile

Before this step, one piece of Vagrant background helps: in a directory containing a Vagrantfile, running vagrant up parses that Vagrantfile and carries out the tasks it defines, such as creating virtual machines and installing software inside them.
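
As a minimal workflow sketch (the box name centos/7 is just an example here):

vagrant init centos/7   # generate a skeleton Vagrantfile in the current directory
vagrant up              # parse the Vagrantfile, then create and provision the VMs
vagrant status          # list the machines this Vagrantfile defines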

2.1 Installation

Write the Vagrantfile and start the VMs

Writing a Vagrantfile from scratch is tedious, and choosing the right Kubernetes cluster initialization parameters takes some accumulated experience. Here we simply use an open-source Vagrantfile cluster project from GitHub:

git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git

After cloning, copy the two previously downloaded Kubernetes tarballs into the project's root directory, for example:
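
A minimal sketch, assuming the tarballs were saved to ~/Downloads (adjust the paths to wherever you downloaded them):

cd kubernetes-vagrant-centos-cluster
cp ~/Downloads/kubernetes-client-linux-amd64.tar.gz .
cp ~/Downloads/kubernetes-server-linux-amd64.tar.gz .

Then start Vagrant: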

vagrant up

When the command finishes, check the VM status:

vagrant status

At this point, all three VMs should be in the running state.

Architecture

The architecture defined by this Vagrantfile is:

IP             Hostname   Role              Components
172.17.8.101   node1      master + worker   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, docker, flannel, dashboard
172.17.8.102   node2      worker            kubelet, docker, flannel, traefik
172.17.8.103   node3      worker            kubelet, docker, flannel

Container IP range: 172.33.0.0/16
Kubernetes service IP range: 10.254.0.0/16

After installation, the cluster contains the following components:

  • flannel (host-gw backend)
  • Kubernetes dashboard
  • etcd (single node)
  • kubectl
  • CoreDNS
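
A quick way to confirm these components came up (a sketch; run it on node1, or on the host after the kubeconfig step below):

kubectl get nodes                  # all three nodes should be Ready
kubectl get pods -n kube-system    # dashboard, CoreDNS, etc. should be Running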

2.2 Usage

Local access

You can operate the cluster directly from the host, without logging into a VM. Copy the conf/admin.kubeconfig file to ~/.kube/config, and kubectl on the host can then control the cluster.

mkdir -p ~/.kube
cp conf/admin.kubeconfig ~/.kube/config
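
Alternatively (a sketch), point the KUBECONFIG environment variable at the file instead of copying it:

export KUBECONFIG=$PWD/conf/admin.kubeconfig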

Local access requires kubectl on the host (the URLs below are for a Linux host; on macOS, replace linux with darwin in the path):

  • Download the latest version:
 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  • To download a specific version, replace the $(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt) part of the command with that version.
    For example, to download v1.10.0:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
  • Make the binary executable:
chmod +x ./kubectl
  • Move the binary into a directory on your PATH:
mv ./kubectl /usr/local/bin/kubectl
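
To verify the install (assuming the kubeconfig from the previous step is in place):

kubectl version --client   # confirm the binary runs
kubectl get nodes          # confirm it can reach the cluster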

Logging into the VMs

You can log into any available node of the cluster with vagrant ssh. Log into the master node to control the entire cluster:

vagrant ssh node1
sudo -i
kubectl get nodes

Access via the Kubernetes Dashboard

From the host, you can reach the Dashboard by opening https://172.17.8.101:6443 in a browser. Alternatively, curl the API; the port number also shows up in the response:

# curl 172.17.8.101:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "*.*.*.*:6443"
    }
  ]
}

If access fails, SSH into the Kubernetes master node as described in the previous step, then run netstat -ntpl to check which ports are actually listening.
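
For example (a sketch; the grep filter just narrows the output to the relevant daemons):

vagrant ssh node1
sudo netstat -ntpl | grep -E 'kube|etcd'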

2.3 Managing Vagrant

All Vagrant operations must be run from the project's root directory.

Stop and restart

vagrant halt
vagrant up

Destroy the VMs

vagrant destroy
rm -rf .vagrant
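
To skip the confirmation prompt, vagrant destroy accepts -f (force):

vagrant destroy -f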

2.4 The Vagrantfile in Detail

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  #config.vm.box = "centos/7"

  # Disable automatic update checks for the box
  config.vm.box_check_update = false

  # Sync the VM clock with the host
  config.vm.provider 'virtualbox' do |vb|
    vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]
  end

  # Number of VMs to create
  $num_instances = 3

  # curl https://discovery.etcd.io/new?size=3
  # Location of the etcd cluster master
  $etcd_cluster = "node1=http://172.17.8.101:2380"

  # Loop to create the 3 VMs
  (1..$num_instances).each do |i|

    config.vm.define "node#{i}" do |node|
    node.vm.box = "centos/7"
    node.vm.hostname = "node#{i}"
    ip = "172.17.8.#{i+100}"
    # Bridged network; check the interface with ifconfig and replace the name here -- it must exactly match the host's interface name
    node.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", auto_config: true
    #node.vm.synced_folder "/Users/DuffQiu/share", "/home/vagrant/share"

    node.vm.provider "virtualbox" do |vb|
      # VM memory: 3 GB
      vb.memory = "3072"
      vb.cpus = 1
      vb.name = "node#{i}"
    end

    # Provisioning script
    node.vm.provision "shell" do |s|
      s.inline = <<-SHELL
        # Set the time zone
        cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        timedatectl set-timezone Asia/Shanghai
        # Drop the stock CentOS yum repos in favor of the ones shipped with the project, preferring the 163 mirror
        rm /etc/yum.repos.d/CentOS-Base.repo
        cp /vagrant/yum/*.* /etc/yum.repos.d/
        mv /etc/yum.repos.d/CentOS7-Base-163.repo /etc/yum.repos.d/CentOS-Base.repo
        # using socat to port forward in helm tiller
        # install  kmod and ceph-common for rook
        yum install -y wget curl conntrack-tools vim net-tools socat ntp kmod ceph-common
        # Sync time via NTP
        echo 'sync time'
        systemctl start ntpd
        systemctl enable ntpd
        echo 'disable selinux'
        setenforce 0
        sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

        echo 'enable iptable kernel parameter'
        cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
        sysctl -p

        echo 'set host name resolution'
        cat >> /etc/hosts <<EOF
172.17.8.101 node1
172.17.8.102 node2
172.17.8.103 node3
EOF
        cat /etc/hosts

        echo 'set nameserver'
        echo "nameserver 8.8.8.8">/etc/resolv.conf
        cat /etc/resolv.conf

        echo 'disable swap'
        swapoff -a
        sed -i '/swap/s/^/#/' /etc/fstab

        # Create the docker group if it does not exist
        egrep "^docker" /etc/group >& /dev/null
        if [ $? -ne 0 ]
        then
          groupadd docker
        fi

        usermod -aG docker vagrant
        rm -rf ~/.docker/
        yum install -y docker.x86_64

        # Configure a docker registry mirror
        cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors" : ["http://2595fda0.m.daocloud.io"]
}
EOF

        # Node 1 serves as both master and worker; install etcd on it
        if [[ $1 -eq 1 ]];then
            yum install -y etcd
            #cp /vagrant/systemd/etcd.service /usr/lib/systemd/system/
        cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$2:2380"
ETCD_LISTEN_CLIENT_URLS="http://$2:2379,http://localhost:2379"
ETCD_NAME="node$1"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$2:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$2:2379"
ETCD_INITIAL_CLUSTER="$3"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
        cat /etc/etcd/etcd.conf

        # The etcd-init.sh script writes the network config into etcd
        echo 'create network config in etcd'
        cat > /etc/etcd/etcd-init.sh<<EOF
#!/bin/bash
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config '{"Network":"172.33.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
EOF
        chmod +x /etc/etcd/etcd-init.sh
        echo 'start etcd...'
        systemctl daemon-reload
        systemctl enable etcd
        systemctl start etcd

        # Create the IP range for flannel
        echo 'create kubernetes ip range for flannel on 172.33.0.0/16'
        /etc/etcd/etcd-init.sh
        etcdctl cluster-health
        etcdctl ls /
fi

        # Install flannel on every node
        echo 'install flannel...'
        yum install -y flannel

        # Create the flannel config file
        echo 'create flannel config file...'
        cat > /etc/sysconfig/flanneld <<EOF
# Flanneld configuration options
FLANNEL_ETCD_ENDPOINTS="http://172.17.8.101:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network"
FLANNEL_OPTIONS="-iface=eth1"
EOF

        echo 'enable flannel with host-gw backend'
        rm -rf /run/flannel/
        systemctl daemon-reload
        systemctl enable flanneld
        systemctl start flanneld

        echo 'enable docker'
        systemctl daemon-reload
        systemctl enable docker
        systemctl start docker

        echo "copy pem, token files"
        mkdir -p /etc/kubernetes/ssl
        cp /vagrant/pki/* /etc/kubernetes/ssl/
        cp /vagrant/conf/token.csv /etc/kubernetes/
        cp /vagrant/conf/bootstrap.kubeconfig /etc/kubernetes/
        cp /vagrant/conf/kube-proxy.kubeconfig /etc/kubernetes/
        cp /vagrant/conf/kubelet.kubeconfig /etc/kubernetes/

        # Prepare the Kubernetes binaries
        echo "get kubernetes files..."
        # Use the client tarball downloaded earlier
        #wget https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-client-linux-amd64.tar.gz -O /vagrant/kubernetes-client-linux-amd64.tar.gz
        tar -xzvf /vagrant/kubernetes-client-linux-amd64.tar.gz -C /vagrant
        cp /vagrant/kubernetes/client/bin/* /usr/bin

        # Use the server tarball downloaded earlier
        #wget https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-server-linux-amd64.tar.gz -O /vagrant/kubernetes-server-linux-amd64.tar.gz
        tar -xzvf /vagrant/kubernetes-server-linux-amd64.tar.gz -C /vagrant
        cp /vagrant/kubernetes/server/bin/* /usr/bin

        cp /vagrant/systemd/*.service /usr/lib/systemd/system/
        mkdir -p /var/lib/kubelet
        mkdir -p ~/.kube
        cp /vagrant/conf/admin.kubeconfig ~/.kube/config

        if [[ $1 -eq 1 ]];then
          echo "configure master and node1"

          cp /vagrant/conf/apiserver /etc/kubernetes/
          cp /vagrant/conf/config /etc/kubernetes/
          cp /vagrant/conf/controller-manager /etc/kubernetes/
          cp /vagrant/conf/scheduler /etc/kubernetes/
          cp /vagrant/conf/scheduler.conf /etc/kubernetes/
          cp /vagrant/node1/* /etc/kubernetes/

          systemctl daemon-reload
          systemctl enable kube-apiserver
          systemctl start kube-apiserver

          systemctl enable kube-controller-manager
          systemctl start kube-controller-manager

          systemctl enable kube-scheduler
          systemctl start kube-scheduler

          systemctl enable kubelet
          systemctl start kubelet

          systemctl enable kube-proxy
          systemctl start kube-proxy
        fi

        if [[ $1 -eq 2 ]];then
          echo "configure node2"
          cp /vagrant/node2/* /etc/kubernetes/

          systemctl daemon-reload

          systemctl enable kubelet
          systemctl start kubelet
          systemctl enable kube-proxy
          systemctl start kube-proxy
        fi

        if [[ $1 -eq 3 ]];then
          echo "configure node3"
          cp /vagrant/node3/* /etc/kubernetes/

          systemctl daemon-reload

          systemctl enable kubelet
          systemctl start kubelet
          systemctl enable kube-proxy
          systemctl start kube-proxy

          # Deploy CoreDNS
          echo "deploy coredns"
          cd /vagrant/addon/dns/
          ./dns-deploy.sh 10.254.0.0/16 172.33.0.0/16 10.254.0.2 | kubectl apply -f -
          cd -

          # Deploy the dashboard
          echo "deploy kubernetes dashboard"
          kubectl apply -f /vagrant/addon/dashboard/kubernetes-dashboard.yaml
          echo "create admin role token"
          kubectl apply -f /vagrant/yaml/admin-role.yaml
          echo "the admin role token is:"
          kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep admin-token|cut -d " " -f1`|grep "token:"|tr -s " "|cut -d " " -f2
          echo "login to dashboard with the above token"
          echo https://172.17.8.101:`kubectl -n kube-system get svc kubernetes-dashboard -o=jsonpath='{.spec.ports[0].port}'`
          echo "install traefik ingress controller"
          kubectl apply -f /vagrant/addon/traefik-ingress/
        fi

      SHELL
      s.args = [i, ip, $etcd_cluster]
      end
    end
  end
end

3. Installation Issues

3.1 Vagrant cannot download the box

If Vagrant cannot download the box, or you want to choose the CentOS version yourself, you can download one from the download page on the CentOS website, e.g. the March 2018 build CentOS-7-x86_64-Vagrant-1803_01.VirtualBox.box:

wget http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1803_01.VirtualBox.box

After downloading, add the box to Vagrant under the name centos/7:

vagrant box add CentOS-7-x86_64-Vagrant-1803_01.VirtualBox.box --name centos/7
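
You can then confirm the box is registered under that name:

vagrant box list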

3.2 Kubernetes Binaries

The Kubernetes release tarballs are usually hosted on https://storage.googleapis.com, for example:
https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-server-linux-amd64.tar.gz
https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-client-linux-amd64.tar.gz
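
To fetch both (matching the v1.9.1 URLs above; swap in the version you need):

wget https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-server-linux-amd64.tar.gz
wget https://storage.googleapis.com/kubernetes-release-mehdy/release/v1.9.1/kubernetes-client-linux-amd64.tar.gz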

3.3 Kubernetes images cannot be downloaded

The images are usually hosted on gcr.io.

There are two workarounds: use a proxy, or relay through another platform (for example, build the image into Docker Hub first). Here is a brief walkthrough of the second approach.

Prerequisites: accounts registered on GitHub and Docker Hub.

Step 1: create a GitHub repository for fetching the image, and add a Dockerfile with the following content (change the version to whatever you need):

FROM gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.2

Step 2: link your GitHub account in Docker Hub, point Build Settings at the directory containing the Dockerfile, set the tag and other parameters, save, and click Trigger to start the build. Once the build finishes, you can pull the Kubernetes image straight from Docker Hub. After pulling, remember to re-tag it:

docker pull kikajack/dashboard:v1.10.2
docker tag kikajack/dashboard:v1.10.2 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.2