Installing prometheus-operator with Persistent Storage for Monitoring Data
After installing prometheus-operator with Helm, the monitoring data is stored inside the Pod by default, so once the Pod is restarted or rescheduled the historical data can no longer be viewed. We therefore need to persist the data.
- First, create an NFS server for the Kubernetes cluster to use.
[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# cat /etc/exports
/tmp/data/ *(rw,fsid=1,sync,no_root_squash)
# Reload the NFS export configuration
[root@localhost ~]# exportfs -rv
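Before wiring NFS into Kubernetes, it is worth confirming that the export is reachable from a cluster node. A minimal sanity check, assuming the server address 192.168.1.112 used in the manifest below and nfs-utils installed on the node:
# List the exports published by the NFS server
[root@master ~]# showmount -e 192.168.1.112
# Temporarily mount the export, write a test file, then clean up
[root@master ~]# mount -t nfs 192.168.1.112:/tmp/data /mnt
[root@master ~]# touch /mnt/nfs-test && rm -f /mnt/nfs-test
[root@master ~]# umount /mnt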
- Configure NFS as a StorageClass that Kubernetes can use.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.1.112
            - name: NFS_PATH
              value: /tmp/data/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.112
            path: /tmp/data/
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: example.com/nfs
Change the env and volumes settings in the manifest above to your actual NFS server address and export path.
kubectl apply -f storageclass.yaml
[root@master ~]# kubectl get sc
NAME PROVISIONER AGE
prometheus example.com/nfs 18d
As shown above, the StorageClass has been created successfully.
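Before touching the Helm values, it can be useful to confirm that dynamic provisioning actually works. A minimal test claim against the prometheus StorageClass (the name test-claim is only illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim    # hypothetical name, used only for this check
spec:
  storageClassName: prometheus
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Apply it with kubectl apply; if kubectl get pvc test-claim shows the claim as Bound, the NFS provisioner is working and the test claim can be deleted again.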
- Next, modify the data storage settings for Prometheus in the Helm chart.
Modify the storageSpec configuration as follows; adjust storageClassName to match your own StorageClass. A sketch of where this block sits in the values file follows the snippet.
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: prometheus
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi
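Depending on the chart version, this block typically nests under prometheus.prometheusSpec in the values file, here assumed to be the prometheus-config.yaml passed to helm install below, roughly like this:
# prometheus-config.yaml (values file for stable/prometheus-operator)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi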
- Install prometheus-operator
helm install --name my-release stable/prometheus-operator -f prometheus-config.yaml
With that, data persistence for prometheus-operator is configured.
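To confirm that a persistent volume was actually provisioned for Prometheus, check the claims after the release comes up. The exact PVC name depends on the chart version and the release name (my-release here), so simply list them and look for one bound to the prometheus StorageClass:
# List claims and volumes; the Prometheus PVC should show STATUS Bound
# with STORAGECLASS prometheus.
[root@master ~]# kubectl get pvc
[root@master ~]# kubectl get pv | grep prometheus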