Multi-Namespace Helm Deploy in Kubernetes

Some background

My life in Site Reliability can be described as a perfect cocktail of building automation, reactive problem solving, and… posting “let me google that for you” links on Slack channels. One of the major themes is figuring out how we scale based on customer demands. Currently, at LogicMonitor, we are running a hybrid environment where our core application as well as our time series databases run on servers in our physical data centers. These environments interact with microservices running in AWS. We called one of these hybrid production environments a “pod” (which, in hindsight, was a poor choice of name once we introduced Kubernetes). We use the Atlassian product Bamboo for CI/CD.

In the last year and a half, the team went through the grueling but satisfying task of converting all our applications that were housed on ECS and EC2 instances into pods in Kubernetes, with each production “pod” being considered a namespace.

Deploying to k8s

Our deployment process was to use Helm from within Bamboo. Each application built would produce a Helm chart artifact and a Docker image. Upon deployment, the image is pushed to a private Docker repository and a helm install command is run with the corresponding charts. However, with each production environment being considered a Kubernetes namespace, we needed to deploy to multiple namespaces per cluster, which was set up by having an individual Bamboo deploy plan per namespace, per application. As of today we have 50 different prod environments and 8 microservices (for you math whizzes out there, that is 400 individual deploy plans). Sometimes, just for one application point release, it could take a developer well over an hour or two to deploy and verify all of production.

Building a new tool

So there’s no way around this… if we want to scale our infrastructure effectively, we need to find a smarter way to deploy. Currently we use a variety of shell scripts to initiate the deployment process. Any new tool we build needs to:

  • Be able to query and list all the production namespaces (a sketch of this follows the list)
  • Integrate Helm/Kubernetes client libraries
  • Deploy to multiple namespaces at once
  • Provide centralized logs for deployment progress
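As a hedged illustration of the first requirement, listing production namespaces with client-go could look roughly like the sketch below (assuming a recent client-go where List takes a context). The "env=prod" label selector is an assumption for the example, not necessarily how our namespaces are actually tagged.

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listProductionNamespaces lists namespaces matching a label selector.
// The "env=prod" selector is hypothetical; the real tool may identify
// production namespaces differently.
func listProductionNamespaces(clientset *kubernetes.Clientset) ([]string, error) {
	nsList, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "env=prod",
	})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(nsList.Items))
	for _, ns := range nsList.Items {
		names = append(names, ns.Name)
	}
	return names, nil
}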

Introducing k8sdeploy

k8sdeploy is a Go-based tool, written with the goal of creating a CLI that utilizes the Helm and Kubernetes client libraries to deploy to multiple namespaces at once.

Initialization: This creates the Helm client and the Kubernetes client. The example below is for Helm v2. Drastic changes in Helm 3 allow Helm to communicate with the Kubernetes API server directly via kubeconfig.

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/helm/pkg/helm"
	"k8s.io/helm/pkg/helm/portforwarder"
)

// GetKubeClient generates a k8s client based on kubeconfig
func GetKubeClient(kubeconfig string) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	return kubernetes.NewForConfig(config)
}

// GetHelmClientv2 creates a helm2 client based on kubeconfig
func GetHelmClientv2(kubeconfig string) *helm.Client {
	config, _ := clientcmd.BuildConfigFromFlags("", kubeconfig)
	client, _ := kubernetes.NewForConfig(config)

	// port forward tiller (specific to helm2)
	tillerTunnel, _ := portforwarder.New("kube-system", client, config)

	// new helm client talking to the forwarded tiller port
	host := fmt.Sprintf("127.0.0.1:%d", tillerTunnel.Local)
	helmClient := helm.NewClient(helm.Host(host))
	return helmClient
}
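For comparison, here is a minimal sketch of what the Helm 3 equivalent could look like, assuming the Helm 3 Go SDK (helm.sh/helm/v3): there is no Tiller tunnel, and the action configuration talks to the API server straight from the kubeconfig. The "secret" storage driver and logging via log.Printf are illustrative defaults, not the tool's required settings.

import (
	"log"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/cli"
)

// GetHelmClientv3 is a sketch of a Helm 3 setup: build an action.Configuration
// scoped to a namespace, using the kubeconfig directly (no Tiller).
func GetHelmClientv3(kubeconfig, namespace string) (*action.Configuration, error) {
	settings := cli.New()
	settings.KubeConfig = kubeconfig

	actionConfig := new(action.Configuration)
	if err := actionConfig.Init(settings.RESTClientGetter(), namespace, "secret", log.Printf); err != nil {
		return nil, err
	}
	return actionConfig, nil
}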

Shared Informer: After the tool creates the client, it initializes a deployment watcher. This is a shared informer, which watches for changes in the current state of Kubernetes objects. In our case, upon deployment, we create a channel to start and stop a shared informer for the ReplicaSet resource. The goal here is not only to log the deployment status (“1 of 2 updated replicas are available”), but also to collate all the information into one stream, which is crucial when deploying to multiple namespaces at once.

// create a shared informer factory from the k8s clientset (no resync)
factory := informers.NewSharedInformerFactory(clientset, 0)

// set informer to listen to ReplicaSet resources
informer := factory.Apps().V1().ReplicaSets().Informer()

stopper := make(chan struct{})
defer close(stopper)

// informer catches events when ReplicaSets are added or updated
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		panic("not implemented")
	},
	UpdateFunc: func(oldObj, newObj interface{}) {
		panic("not implemented")
	},
})

go informer.Run(stopper)
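The handler bodies are where the rollout progress gets logged. A hedged sketch of what an update handler could do to produce the “X of Y updated replicas” lines shown later (the actual k8sdeploy handler may differ):

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// logReplicaSetProgress is an illustrative handler body: cast the informer
// object to a ReplicaSet and log ready vs desired replicas.
func logReplicaSetProgress(obj interface{}) {
	rs, ok := obj.(*appsv1.ReplicaSet)
	if !ok || rs.Spec.Replicas == nil {
		return
	}
	fmt.Printf("Waiting for deployment rollout to finish: %d of %d updated replicas are available...\n",
		rs.Status.ReadyReplicas, *rs.Spec.Replicas)
}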

Installing a chart using Helm: After initializing our deployWatcher, the tool uses Helm libraries to install or upgrade a deployment using a chart. Below is an example using Helm 2 to update an existing deployment; a hedged Helm 3 equivalent is sketched after it.

// check the current status of the release; if it is already deployed, upgrade it
resp, _ := helmClient.ReleaseStatus(deployName)
if resp.GetInfo().GetStatus().GetCode().String() == "DEPLOYED" {
	fmt.Printf("Found existing deployment for %s...updating\n", deployName)
	helmClient.UpdateReleaseFromChart(deployName, chart)
}
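And a rough Helm 3 counterpart, assuming the helm.sh/helm/v3 action package and an *action.Configuration like the one sketched earlier; the chart loading and status check mirror the Helm 2 flow above, but this is a sketch rather than the tool's actual code.

import (
	"fmt"

	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/release"
)

// upgradeReleaseV3 is a sketch: load the packaged chart, confirm the release
// is already deployed, then run an upgrade with the supplied values.
func upgradeReleaseV3(cfg *action.Configuration, releaseName, chartPath string, vals map[string]interface{}) error {
	chart, err := loader.Load(chartPath)
	if err != nil {
		return err
	}

	rel, err := action.NewStatus(cfg).Run(releaseName)
	if err != nil {
		return err
	}
	if rel.Info.Status == release.StatusDeployed {
		fmt.Printf("Found existing deployment for %s...updating\n", releaseName)
		_, err = action.NewUpgrade(cfg).Run(releaseName, chart, vals)
	}
	return err
}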

Defining success: There are multiple checks added to ensure a new deploy is deemed successful. We specified our Helm charts to update the deploy time during deployments in order to tell the informer to only log new events. The tool also checks that “ready replicas” and “desired replicas” match, and makes sure the informer is not picking up other events in case multiple users are deploying different apps. The tool adds all the successful deploys to a table:

build 29-Jul-2020 19:23:20 Starting deployment in namespace=name-space-1 for app=customapp at 2020-07-29 19:23:20 -0700 PDT
build 29-Jul-2020 19:23:20 Waiting for deployment rollout to finish: 0 of 2 updated replicas are available...
build 29-Jul-2020 19:23:20 Waiting for deployment rollout to finish: 0 of 2 updated replicas are available...
build 29-Jul-2020 19:23:20 Starting deployment in namespace=name-space-2 for app=customapp at 2020-07-29 19:23:20 -0700 PDT
build 29-Jul-2020 19:23:20 Waiting for deployment rollout to finish: 0 of 2 updated replicas are available...
build 29-Jul-2020 19:23:35 Waiting for deployment rollout to finish: 1 of 2 updated replicas are available...
build 29-Jul-2020 19:23:35 Waiting for deployment rollout to finish: 1 of 2 updated replicas are available...
build 29-Jul-2020 19:23:49 Waiting for deployment rollout to finish: 1 of 2 updated replicas are available...
build 29-Jul-2020 19:23:56 Waiting for deployment rollout to finish: 2 of 2 updated replicas are available...
build 29-Jul-2020 19:23:56 Successful Deployment of customapp on name-space-2
build 29-Jul-2020 19:23:58 Waiting for deployment rollout to finish: 2 of 2 updated replicas are available...
build 29-Jul-2020 19:23:58 Successful Deployment of customapp on name-space-1
build 29-Jul-2020 19:24:10 All deployments finished, shutting down watcher gracefully
build 29-Jul-2020 19:24:10 +----------------+--------------+---------+
build 29-Jul-2020 19:24:10 | APP | NAMESPACE | STATUS |
build 29-Jul-2020 19:24:10 +----------------+--------------+---------+
build 29-Jul-2020 19:24:10 | customapp | name-space-1 | Success |
build 29-Jul-2020 19:24:10 | customapp | name-space-2 | Success |
build 29-Jul-2020 19:24:10 +----------------+--------------+---------+
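A hedged sketch of the kind of success check described above; the "release" label and "deployTime" annotation names are assumptions about how the charts might be written, not necessarily what k8sdeploy uses.

import (
	"time"

	appsv1 "k8s.io/api/apps/v1"
)

// isRolloutComplete illustrates the checks: ignore ReplicaSets from other
// releases, only count events newer than this deploy's start time, and require
// ready replicas to equal desired replicas.
func isRolloutComplete(rs *appsv1.ReplicaSet, releaseName string, deployStart time.Time) bool {
	// skip events from other users' deployments (hypothetical "release" label)
	if rs.Labels["release"] != releaseName {
		return false
	}
	// hypothetical annotation the charts update at deploy time, so stale events are ignored
	ts, err := time.Parse(time.RFC3339, rs.Annotations["deployTime"])
	if err != nil || ts.Before(deployStart) {
		return false
	}
	// "ready replicas" must match "desired replicas"
	return rs.Spec.Replicas != nil && rs.Status.ReadyReplicas == *rs.Spec.Replicas
}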

Putting it all together: At this point, the tool had the pieces in place for a single namespace. Fanning out to many is handled with goroutines that parallelize the deployment calls. We used Cobra to create a CLI where users can input comma-separated namespaces:

k8sdeploy deploy kubeconfig --configpath <full-path-to-kubeconfig> --releasename <name-of-release> --namespace <namespace1,namespace2,namespace3> --chartdir <full-path-to-tgz-chart-file> --set <set-string-values>
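Under the hood, the fan-out could look roughly like the sketch below; the deployFn parameter stands in for the per-namespace Helm upgrade and informer logic shown earlier, and error handling is simplified.

import (
	"log"
	"sync"
)

// deployAll launches one goroutine per namespace and waits for all of them,
// so every namespace in the comma-separated list is deployed in parallel.
// deployFn is a hypothetical stand-in for the single-namespace deploy logic.
func deployAll(namespaces []string, deployFn func(namespace string) error) {
	var wg sync.WaitGroup
	for _, ns := range namespaces {
		wg.Add(1)
		go func(ns string) {
			defer wg.Done()
			if err := deployFn(ns); err != nil {
				log.Printf("deploy to %s failed: %v", ns, err)
			}
		}(ns)
	}
	wg.Wait()
}

The Cobra command behind the deploy subcommand would then just split the --namespace flag value on commas and hand the resulting slice to this fan-out.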

Let’s open source

Currently, the tool is compatible with Helm 3. I made it available for use and public criticism here.

Translated from: https://medium.com/analytics-vidhya/multi-namespace-helm-deploy-in-kubernetes-26d1baf1ca5c
