The Kubernetes ReplicationController file
1. Kubernetes ReplicationController file configuration
apiVersion: v1
kind: ReplicationController   ## A Kubernetes RC resource; one of the lower-level, foundational resources
metadata:
  name: kubetest              ## The RC's name, unique within its namespace; view it with kubectl get rc
spec:
  replicas: 1                 ## Number of pod replicas
  selector:
    app: kubetest             ## The label the RC uses to select its pods
  template:
    metadata:
      labels:
        app: kubetest         ## The pod's labels; an important attribute, used by services, RCs, and other resources to select pods
    spec:
      containers:             ## Container attributes
      - name: kubetest
        image: 192.168.1.130:5000/kubetest:9   ## The Docker image
        ports:
        - containerPort: 8080
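One rule worth knowing about this manifest: the RC's spec.selector must match the labels in spec.template.metadata.labels, or the API server will reject it. A minimal sketch of that constraint, using the same manifest expressed as a Python dict (kubectl also accepts JSON manifests):

```python
import json

# The RC manifest above, expressed as a plain Python dict.
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "kubetest"},
    "spec": {
        "replicas": 1,
        "selector": {"app": "kubetest"},
        "template": {
            "metadata": {"labels": {"app": "kubetest"}},
            "spec": {
                "containers": [{
                    "name": "kubetest",
                    "image": "192.168.1.130:5000/kubetest:9",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

# Every key/value in the selector must also appear in the template's labels,
# otherwise the RC could create pods it does not select.
labels = rc["spec"]["template"]["metadata"]["labels"]
assert all(labels.get(k) == v for k, v in rc["spec"]["selector"].items())

print(json.dumps(rc["spec"]["selector"]))
```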
2. Starting it
Start the RC: kubectl create -f kubetest-rc.yaml
Check the ReplicationController: kubectl get rc, which shows:
NAME DESIRED CURRENT READY AGE
kubetest 1 1 1 33m
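The DESIRED and CURRENT columns reflect the RC's reconcile loop: the controller continuously compares the observed replica count against spec.replicas and creates or deletes pods to close the gap. A minimal sketch of that decision (hypothetical helper, not the real controller code; the real controller also prefers deleting unready or younger pods):

```python
def reconcile(desired, current_pods):
    """Decide how many pods to create or which to delete to reach the desired count."""
    diff = desired - len(current_pods)
    if diff > 0:
        # Too few replicas: create the missing ones.
        return {"create": diff, "delete": []}
    if diff < 0:
        # Too many replicas: delete the surplus.
        return {"create": 0, "delete": current_pods[desired:]}
    return {"create": 0, "delete": []}

print(reconcile(1, []))                                # one pod missing
print(reconcile(1, ["kubetest-8s1rt", "kubetest-x2"]))  # one pod surplus
```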
Check the pods: kubectl get pod
NAME READY STATUS RESTARTS AGE
kubetest-8s1rt 1/1 Running 0 35m
Check the nodes: kubectl get node
NAME STATUS AGE
127.0.0.1 Ready 24d
View a node's details: kubectl describe node 127.0.0.1
[root@localhost registry.access.redhat.com]# kubectl describe node 127.0.0.1
Name: 127.0.0.1
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=127.0.0.1
Taints: <none>
CreationTimestamp: Tue, 27 Mar 2018 05:30:36 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 20 Apr 2018 05:53:28 +0800 Fri, 20 Apr 2018 05:22:45 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 20 Apr 2018 05:53:28 +0800 Tue, 27 Mar 2018 05:30:36 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Apr 2018 05:53:28 +0800 Tue, 27 Mar 2018 05:30:36 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 20 Apr 2018 05:53:28 +0800 Fri, 20 Apr 2018 05:26:26 +0800 KubeletReady kubelet is posting ready status
Addresses: 127.0.0.1,127.0.0.1,127.0.0.1
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2862816Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2862816Ki
pods: 110
System Info:
Machine ID: 46fe0c0dfdd3404e957c22297da1d059
System UUID: F03A4D56-380A-3AA7-8FE7-0BE2EF78035A
Boot ID: 2f34ef81-e0fb-493b-b514-30f83e2b7a06
Kernel Version: 3.10.0-693.21.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.5.2
Kube-Proxy Version: v1.5.2
ExternalID: 127.0.0.1
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default kubetest-8s1rt 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
0 (0%) 0 (0%) 0 (0%) 0 (0%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
30m 30m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet.
30m 30m 1 {kubelet 127.0.0.1} Warning ImageGCFailed unable to find data for container /
30m 30m 2 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk
30m 30m 1 {kubelet 127.0.0.1} Normal NodeHasSufficientMemory Node 127.0.0.1 status is now: NodeHasSufficientMemory
30m 30m 1 {kubelet 127.0.0.1} Normal NodeHasNoDiskPressure Node 127.0.0.1 status is now: NodeHasNoDiskPressure
30m 30m 1 {kubelet 127.0.0.1} Normal NodeNotReady Node 127.0.0.1 status is now: NodeNotReady
30m 30m 1 {kubelet 127.0.0.1} Normal NodeReady Node 127.0.0.1 status is now: NodeReady
27m 27m 1 {kubelet 127.0.0.1} Warning ImageGCFailed unable to find data for container /
27m 27m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet.
27m 27m 1 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk
27m 27m 1 {kubelet 127.0.0.1} Normal NodeHasSufficientMemory Node 127.0.0.1 status is now: NodeHasSufficientMemory
27m 27m 1 {kubelet 127.0.0.1} Normal NodeHasNoDiskPressure Node 127.0.0.1 status is now: NodeHasNoDiskPressure
27m 27m 1 {kubelet 127.0.0.1} Warning Rebooted Node 127.0.0.1 has been rebooted, boot id: 2f34ef81-e0fb-493b-b514-30f83e2b7a06
27m 27m 1 {kubelet 127.0.0.1} Normal NodeNotReady Node 127.0.0.1 status is now: NodeNotReady
27m 27m 1 {kubelet 127.0.0.1} Normal NodeReady Node 127.0.0.1 status is now: NodeReady
10m 10m 2 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "kubetest-8s1rt_default(ccc51f1f-4416-11e8-a44c-000c2978035a)". Falling back to DNSDefault policy.
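The Conditions table in the output above is what the scheduler consults to decide whether the node is healthy: for a usable node the pressure conditions are False and Ready is True. A minimal sketch of that check (hypothetical helper, mirroring the table above):

```python
# The node conditions from the describe output above, as simple dicts.
conditions = [
    {"type": "OutOfDisk", "status": "False"},
    {"type": "MemoryPressure", "status": "False"},
    {"type": "DiskPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]

def node_ready(conds):
    """A node can accept pods when its Ready condition reports True."""
    return any(c["type"] == "Ready" and c["status"] == "True" for c in conds)

print(node_ready(conditions))
```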
View the pod's details, including its Events, which are very important debugging information: kubectl describe pod kubetest-8s1rt
[root@localhost registry.access.redhat.com]# kubectl describe pod kubetest-8s1rt
Name: kubetest-8s1rt
Namespace: default
Node: 127.0.0.1/127.0.0.1
Start Time: Fri, 20 Apr 2018 05:26:21 +0800
Labels: app=kubetest
Status: Running
IP: 172.17.0.2
Controllers: ReplicationController/kubetest
Containers:
kubetest:
Container ID: docker://6b1409d4214bce7d55b9d0582cfffd1f625d5690d1549a5ca20a3121a1a7d736
Image: 192.168.1.130:5000/kubetest:9
Image ID: docker-pullable://192.168.1.130:5000/kubetest@sha256:8e5773b23d775c414d25ee231e11403367b682ca713da6174cfd4e529343dda3
Port: 8080/TCP
State: Running
Started: Fri, 20 Apr 2018 05:42:42 +0800
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
41m 33m 29 {default-scheduler } Warning FailedScheduling no nodes available to schedule pods
30m 30m 1 {default-scheduler } Normal Scheduled Successfully assigned kubetest-8s1rt to 127.0.0.1
30m 19m 7 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
29m 14m 65 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
13m 13m 2 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
13m 13m 1 {kubelet 127.0.0.1} spec.containers{kubetest} Normal Pulled Container image "192.168.1.130:5000/kubetest:9" already present on machine
13m 13m 1 {kubelet 127.0.0.1} spec.containers{kubetest} Normal Created Created container with docker id 6b1409d4214b; Security:[seccomp=unconfined]
13m 13m 1 {kubelet 127.0.0.1} spec.containers{kubetest} Normal Started Started container with docker id 6b1409d4214b
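When scanning an Events table like the one above, the Warning rows are usually where the problem is (here, FailedScheduling and the image-pull failures). A small sketch of filtering describe output down to the Warning reasons (illustrative parsing only, relying on the Type column preceding the Reason column):

```python
# A few event lines in the same shape as the describe output above.
events = [
    "41m 33m 29 {default-scheduler } Warning FailedScheduling no nodes available to schedule pods",
    "30m 30m 1 {default-scheduler } Normal Scheduled Successfully assigned kubetest-8s1rt to 127.0.0.1",
    "13m 13m 1 {kubelet 127.0.0.1} spec.containers{kubetest} Normal Started Started container with docker id 6b1409d4214b",
]

def warnings(lines):
    """Keep only Warning events; the Reason is the token right after 'Warning'."""
    out = []
    for line in lines:
        tokens = line.split()
        if "Warning" in tokens:
            out.append(tokens[tokens.index("Warning") + 1])
    return out

print(warnings(events))  # ['FailedScheduling']
```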