Troubleshooting: kube-controller-manager cannot update node status after enabling RBAC in Kubernetes
On Kubernetes v1.8.4, after RBAC authorization was enabled on the apiserver, the controller-manager was bound to the system:kube-controller-manager ClusterRole. Even so, both the kube-apiserver and kube-controller-manager logs kept reporting errors.

The apiserver reports:
I1225 17:00:05.362655 4528 rbac.go:116] RBAC DENY: user "system:kube-controller-manager" groups ["k8s" "system:authenticated"] cannot "update" resource "nodes/status" named "minion1.ku8s.com" cluster-wide
kube-controller-manager reports:
E1225 17:00:05.361508 4553 node_controller.go:1022] Error updating node minion.ku8s.com: nodes "minion.ku8s.com" is forbidden: User "system:kube-controller-manager" cannot update nodes/status at the cluster scope
E1225 17:00:05.362265 4553 node_controller.go:616] Failed while getting a Node to retry updating NodeStatus. Probably Node minion.ku8s.com was deleted.
Analysis: the errors point to an RBAC authorization failure. Node status is maintained by the node-controller, which runs inside kube-controller-manager, but the user system:kube-controller-manager was never granted the system:controller:node-controller ClusterRole, so its nodes/status updates are denied. The fix is to bind that role:
# kubectl create clusterrolebinding controller-node-clusterrolebing --clusterrole=system:controller:node-controller --user=system:kube-controller-manager
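The same binding can also be kept as a manifest. A minimal sketch of the declarative equivalent of the command above (the binding name controller-node-clusterrolebing is carried over unchanged):

```yaml
# ClusterRoleBinding granting the node-controller role to the
# user that kube-controller-manager authenticates as.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: controller-node-clusterrolebing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:controller:node-controller
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
```

After applying it, the grant can be verified with `kubectl auth can-i update nodes/status --as=system:kube-controller-manager`, which should now answer yes, and the RBAC DENY lines should stop appearing in the apiserver log.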