Background

Pod IPs and Service IPs inside a Kubernetes cluster cannot be reached by users outside the cluster, yet our internal IT cluster needs to reach application Swagger pages via their Service IPs. There are three main solutions: forwarding through an Ingress, exposing an external port with NodePort, or using the local development tool Telepresence to access in-cluster resources through the apiserver. The first two require maintaining a large number of rules, which makes the cluster harder to operate, so we investigated the local development tool Telepresence.

Introduction to Telepresence

Telepresence is an approach well suited to remotely debugging services deployed in Kubernetes: without modifying any application code, it transparently plugs a local application into the Kubernetes cluster, so you can develop and debug microservices directly on your local machine.

The diagram below illustrates how Telepresence works: it proxies the cluster's data volumes, environment variables, and network to the local machine (apart from the volumes, the other two are transparent to the application):

[Figure: Telepresence architecture diagram (../img/image-20220714154044891.png)]

With these proxies in place (a minimal sketch follows this list):

  1. The local service gets full access to the other services in the remote cluster.
  2. The local service can directly access Kubernetes resources such as environment variables, Secrets, and ConfigMaps.
  3. Services in the cluster can even reach interfaces exposed by the local service directly.
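A minimal sketch of what this looks like in practice with the Telepresence 2 client; the service name, namespace, and port below are illustrative:

telepresence connect                        # join the cluster network from the laptop
curl http://my-service.my-namespace:8080    # reach an in-cluster Service by its DNS name
telepresence intercept my-service --port 8080:8080 --env-file my-service.env
# route cluster traffic for my-service to local port 8080 and dump its env vars to a file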

Deploying the Traffic Manager with Helm

The Traffic Manager forwards in-cluster traffic to clients that have traffic interception enabled.

# Start by adding this repo to your Helm client with the following command:
helm repo add datawire https://app.getambassador.io
helm repo update
# If you are installing the Telepresence Traffic Manager for the first time on your cluster, create the namespace it will be installed into (here: ambassador)
kubectl create namespace ambassador
# Install the Telepresence Traffic Manager with the following command:
helm install traffic-manager --namespace ambassador datawire/telepresence

Check the deployment status of the Traffic Manager resources:

root@testctyunfzoa-cn-fz1b-k8s-1294192-0001:~/k8sit_kubeconfig/rbac# kubectl  -n ambassador  get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/traffic-manager-56489c7cb7-8vtt9   1/1     Running   0          3h58m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/agent-injector    ClusterIP   10.100.236.116   <none>        443/TCP    3h58m
service/traffic-manager   ClusterIP   None             <none>        8081/TCP   3h58m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traffic-manager   1/1     1            1           3h58m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/traffic-manager-56489c7cb7   1         1         1       3h58m

Creating least-privilege RBAC

To allow a user to intercept across all namespaces while keeping their kubectl permissions limited, the following ServiceAccount, ClusterRole, and ClusterRoleBinding grant full telepresence intercept functionality.

Cluster-wide Telepresence user access:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tp-user                                       # Update to an appropriate value
  namespace: ambassador                                # The Traffic Manager is deployed to the ambassador namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: telepresence-role
rules:
# For gather-logs command
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
# Needed in order to maintain a list of workloads
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["namespaces", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: telepresence-rolebinding
subjects:
- name: tp-user
  kind: ServiceAccount
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  name: telepresence-role
  kind: ClusterRole
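To apply the manifest and spot-check the granted permissions, something like the following can be used; the file name is illustrative:

kubectl apply -f telepresence-rbac.yaml
kubectl auth can-i list pods --as=system:serviceaccount:ambassador:tp-user                # expected: yes
kubectl auth can-i create pods/portforward --as=system:serviceaccount:ambassador:tp-user  # expected: yes
kubectl auth can-i delete deployments --as=system:serviceaccount:ambassador:tp-user       # expected: no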

Generating the user's kubeconfig file

Look up the token for tp-user:

root@testctyunfzoa-cn-fz1b-k8s-1294192-0001:~/k8sit_kubeconfig/rbac# kubectl  -n ambassador  describe  secrets tp-user-token-72hkg
Name:         tp-user-token-72hkg
Namespace:    ambassador
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: tp-user
              kubernetes.io/service-account.uid: 43bd38a6-c16c-4a98-af71-10c0e4623e2b

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  10 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImprcThENHhZdmZNRkdwUzZFQjMyMUNuQy1hMk5TbFJBem5wZ2JrSzh2SXMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhbWJhc3NhZG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRwLXVzZXItdG9rZW4tNzJoa2ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHAtdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQzYmQzOGE2LWMxNmMtNGE5OC1hZjcxLTEwYzBlNDYyM2UyYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphbWJhc3NhZG9yOnRwLXVzZXIifQ.JwKYU-MR-173Ks6Fypn_boLsLwvoqajN6ylkGWCG4NEZ-3uhfXIJ849RRC7FFAz442rypLWLFfymBUGPyy6smCouk4cXU471xEew8tba3NtYMAZ2XT-oOw41bUoBBZmVpz38iVDbc58y8yzTgRGXPzHIsmWF-yoXMiYGuAdjBCbtrmoQ36-Nb92Dc0YXIZpsgC1XvzdYL0jQKVvLfi821wYPVVPouDYRDCij8Y5Qcdw6cGBBkheaIuC0O_F5TRLfes_v_eRyoPm5WQAzVsNd5A0YFmqadSgt0L-xWm-f_6npRAKdOWzhM6L2YElI-2B3-6H0q2B160zaP2fAaKAqMw
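Note: on Kubernetes 1.24 and later, a token Secret is no longer created automatically for a ServiceAccount. If no tp-user-token-* Secret exists, a token can be requested directly instead (assumes kubectl 1.24+):

kubectl -n ambassador create token tp-user --duration=24h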

Write the token into a kubeconfig file as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1RENDQXFDZ0F3SUJBZ0lVSVRESUJad1VpQmNyajFyTnNHb3RCQ01ZU05Vd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SUJjTk1qSXdOVEU0TURreU1EQXdXaGdQTWpFeU1qQTBNalF3T1RJd01EQmFNR0V4Q3pBSkJnTlYKQkFZVEFrTk9NUkV3RHdZRFZRUUlFd2hJWVc1bldtaHZkVEVMTUFrR0ExVUVCeE1DV0ZNeEREQUtCZ05WQkFvVApBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdtZFB3dTljMXdjbUJLVzVpWlJUN29XLzlxRm0KL2IyTTVqYzVlOFRPb2hJN1dWWkdnVXFSQkNZcjUxWHVlbXczRDJyVmdNRHY0c2pYTXJjTGRJYmg1LzNNYWM5TQowQU0wbkhRaHg5RVROV2drajZlODMvNTN2NkNYNGNQZjNoWi9zUzgzUnpTOEFyelM0MWI1V2JHME5ybjVpTnNyCk9yOWVZTDR2eDFKbWcyZldFZWVQYTNPVXBML3BBa0JqVXpGZ05zZzZVVDJZRXMzdDgxdHk0RENxS0RWbnVWSzkKekVqUEViS2F5czNIREEvTktybVd0WjdmaVZ4bDJiWCtqYkU2aG1DS0RtQnY1bnRadmNGeFJoUkxsc2U4dE4xSQp2NXcrZnA3SjByNWdXRE5UTnlrNXIrLy91K1U5a280bnE1eHNxVWtvNHJ5b0V0eUZ1NmVJM002QVZRSURBUUFCCm8yWXdaREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdFZ1lEVlIwVEFRSC9CQWd3QmdFQi93SUJBakFkQmdOVkhRNEUKRmdRVVRWQTZqanJaS3BtUmw2ckV1WGZhaEFqQ1RDd3dId1lEVlIwakJCZ3dGb0FVVFZBNmpqclpLcG1SbDZyRQp1WGZhaEFqQ1RDd3dEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQjR1NldZNENTOFBHeWEzYmlMUkw3THE5OXpxClBXdTFqWGk1QmZLOU8yN0QwZWFFYjBqTDRyWFMxU3g3eHozT1hXZWJrWlVESWdoYXRjOXgrQittM1kyVDFhODgKcDkxalgycE04eG9EcjE1bTgxVlU1MDdYVDNUNnpwUDlJWFJibHhoaUpBeW9TTFcvWVVNc0NYS3F0Vk9iejF5OAo3a3g5T2Ztb2dmTVI3Nlg0RmVZUllRN1pkYmRkOXgvcXVaV0huVElvc0lzaUxZdWNqS1RuQVgvZ2x3Y3NCeFovCmNRajIyM0hLTlFMQnp5TjNqaTR6YkNod1ppdGdKdjFiWXlISFBBN3g2LzF1Ny9aRlJRU3NQYkhHcU93eElzWEUKZ2NSdDFJV3ppWThEaFEvYkFJQ041emdpQ0FlbmRwV2lFblFSa21WSFZSQ2VuRjd3aVBab29aYkwrZ2M9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://10.230.0.115:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: tp-user
  name: tp-user
current-context: tp-user
kind: Config
preferences: {}
users:
- name: tp-user
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImprcThENHhZdmZNRkdwUzZFQjMyMUNuQy1hMk5TbFJBem5wZ2JrSzh2SXMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhbWJhc3NhZG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InRwLXVzZXItdG9rZW4tNzJoa2ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidHAtdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQzYmQzOGE2LWMxNmMtNGE5OC1hZjcxLTEwYzBlNDYyM2UyYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphbWJhc3NhZG9yOnRwLXVzZXIifQ.JwKYU-MR-173Ks6Fypn_boLsLwvoqajN6ylkGWCG4NEZ-3uhfXIJ849RRC7FFAz442rypLWLFfymBUGPyy6smCouk4cXU471xEew8tba3NtYMAZ2XT-oOw41bUoBBZmVpz38iVDbc58y8yzTgRGXPzHIsmWF-yoXMiYGuAdjBCbtrmoQ36-Nb92Dc0YXIZpsgC1XvzdYL0jQKVvLfi821wYPVVPouDYRDCij8Y5Qcdw6cGBBkheaIuC0O_F5TRLfes_v_eRyoPm5WQAzVsNd5A0YFmqadSgt0L-xWm-f_6npRAKdOWzhM6L2YElI-2B3-6H0q2B160zaP2fAaKAqMw
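To use this file, point kubectl (and therefore the Telepresence client) at it and verify that the tp-user permissions behave as intended; the path below is illustrative:

export KUBECONFIG=$HOME/.kube/tp-user.kubeconfig
kubectl get namespaces     # allowed by the ClusterRole above
kubectl get secrets -A     # should be denied, since no rule grants access to Secrets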

Installing Telepresence on the client

# Install kubectl (macOS, arm64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | shasum -a 256 --check
# When the check passes, the output is:
#kubectl: OK
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl

# Install telepresence manually:
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/arm64/latest/telepresence -o /usr/local/bin/telepresence

sudo chmod a+x /usr/local/bin/telepresence
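Verify the client installation:

telepresence version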

Starting the Telepresence proxy

With Telepresence you can create global intercepts that capture all traffic destined for a service in the cluster and route it to your local environment. The first step is to connect the local machine to the cluster:

telepresence connect
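Once connected, the state of the local daemons and the Traffic Manager connection can be checked with:

telepresence status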

Testing access to in-cluster applications

(base) ➜  ~ curl -ik https://kubernetes.default
HTTP/2 401
cache-control: no-cache, private
content-type: application/json
content-length: 165
date: Thu, 14 Jul 2022 09:50:55 GMT

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}%

The 401 here is expected: the request reached the in-cluster kubernetes.default service over the Telepresence tunnel, it simply wasn't authenticated. Any local tool can now connect to any service in the cluster.
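For example, the Swagger page mentioned in the background can be opened against the Service's cluster DNS name; the service name, namespace, port, and path here are illustrative:

curl http://my-app.my-namespace.svc.cluster.local:8080/swagger-ui/index.html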

Use cases

Suppose we have two services, A and B, where A depends on B. The following two scenarios show how to use Telepresence to debug A and B.
[Figure: services A and B, with A depending on B (../img/image-20220715092240527.png)]

Debugging service A

Service A runs locally while service B runs in the remote cluster. With the proxy set up by Telepresence, A can reach B directly. Say service B is a program that listens on port 8000 and returns Hello, world! to every request:

$ kubectl run service-b --image=datawire/hello-world --port=8000 --expose
$ kubectl get service service-b
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service-b   10.0.0.12    <none>        8000/TCP   1m

Now start Telepresence locally with the default parameters and wait for it to connect to the cluster:

$ telepresence
T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you
T: can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see
T: https://telepresence.io/reference/methods.html
T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
T: Starting network proxy to cluster using new Deployment telepresence-1566230249-7112632-14485


T: No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to
T: forward.


T: Setup complete. Launching your command.
@test_cluster|bash-4.2#

At this point we can start debugging service A, because the interface exposed by service B is already reachable directly from the local machine:

$ curl http://service-b:8000/
Hello, world!

A few words on what happens behind the scenes:

  1. When the telepresence command runs, it creates a Deployment, which in turn creates a proxy Pod; it can be viewed with kubectl get pod -l telepresence (see the check after this list).
  2. It also sets up a global VPN locally so that every local program can reach services in the cluster. Telepresence supports other proxy methods as well (switched with --method); vpn-tcp is the default, the others are of limited use, and inject-tcp is even slated for removal in a later release.
  3. When the local curl accesses http://service-b:8000/, both the DNS query and the HTTP request are routed by the VPN to the proxy Pod just created in the cluster.
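Both effects can be observed from the local shell (the lookup assumes the default vpn-tcp method is active):

$ kubectl get pod -l telepresence
$ nslookup service-b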

In addition, Telepresence mounts the remote filesystem locally under $TELEPRESENCE_ROOT via sshfs (you can also specify the mount path with --mount <MOUNT_PATH>). This lets our application access the remote filesystem from the local machine:

$ ls $TELEPRESENCE_ROOT/var/run/secrets/kubernetes.io/serviceaccount
ca.crt  namespace  token

When we exit the shell started by Telepresence, it also cleans up after itself, for example tearing down the local VPN and deleting the Deployment it just created.

Debugging service B

Service B differs from the previous case in that it is the one being accessed by others, so to debug it we first need real traffic. How can we route other callers' requests to our local machine, so that we can capture in-cluster traffic locally?

Telepresence provides the --swap-deployment <DEPLOYMENT_NAME[:CONTAINER]> parameter for replacing a Deployment in the cluster with a local service. For the service-b above, we can swap it like this:

$ telepresence --swap-deployment service-b --expose 8000:8000

Now, when service A in the cluster accesses port 8000 of service B, Telepresence forwards the request to local port 8000. It works by replacing service-b in the cluster with a proxy created by Telepresence, and that proxy in turn forwards requests to the local client.
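While the swap is active, whatever is listening on local port 8000 receives that traffic; as a trivial stand-in for a local build of service B (assuming Python 3 is installed locally):

$ python3 -m http.server 8000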

Schematically, the original network:

[Figure: original network topology (../img/modb_img20191112_092530.png)]

is replaced with this structure:

[Figure: network topology after swap-deployment (../img/modb_img20191112_092533.png)]

This gives us the chance to inspect the actual request data locally, debug the logic, and craft new responses.

References

Official docs: https://www.telepresence.io/docs/latest/install/

kubectl installation: https://kubernetes.io/zh-cn/docs/tasks/tools/install-kubectl-macos/
