Kubernetes replication controller

I have a simple Kubernetes cluster with a master and 3 minions. If I run a single nginx or MySQL pod it works properly, but if I change the kind in the YAML file and try to run a replicated service, the pods start and I can't access the service.
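
For example, a standalone pod manifest along these lines (a minimal sketch; the pod name nginx-single is just an illustration, not my actual file) runs and responds without problems:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-single
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80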

This is my YAML file for nginx with 3 replicas:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

This is the service YAML config file:

apiVersion: v1
kind: Service
metadata: 
  labels: 
    name: nginx
  name: nginx
spec: 
  ports:
    - port: 80
  selector: 
    name: nginx

I run them with:

# kubectl create -f nginx-rc.yaml
# kubectl create -f nginx-rc-service.yaml

If I run:

# kubectl get pod,svc,rc -o wide

I see:

NAME          READY     STATUS    RESTARTS   AGE       NODE
nginx-kgq1s   1/1       Running   0          1m        node01
nginx-pomx3   1/1       Running   0          1m        node02
nginx-xi54i   1/1       Running   0          1m        node03
NAME         LABELS                                    SELECTOR     IP(S)           PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>       10.254.0.1      443/TCP
nginx        name=nginx                                name=nginx   10.254.47.150   80/TCP
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      app=nginx   3

This is the description I see for one of the pods:

Name:                           nginx-kgq1s
Namespace:                      default
Image(s):                       nginx
Node:                           node01/node01
Labels:                         app=nginx
Status:                         Running
Reason:
Message:
IP:                             172.17.52.3
Replication Controllers:        nginx (3/3 replicas created)
Containers:
  nginx:
    Image:              nginx
    State:              Running
      Started:          Thu, 11 Feb 2016 16:28:08 +0100
    Ready:              True
    Restart Count:      0
Conditions:
  Type          Status
  Ready         True 
Events:
  FirstSeen                             LastSeen                        Count   From                            SubobjectPath                           Reason          Message
  Thu, 11 Feb 2016 16:27:47 +0100       Thu, 11 Feb 2016 16:27:47 +0100 1       {scheduler }                                                            scheduled       Successfully assigned nginx-kgq1s to node01
  Thu, 11 Feb 2016 16:27:57 +0100       Thu, 11 Feb 2016 16:27:57 +0100 1       {kubelet node01}        implicitly required container POD       pulled          Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Thu, 11 Feb 2016 16:28:02 +0100       Thu, 11 Feb 2016 16:28:02 +0100 1       {kubelet node01}        implicitly required container POD       created         Created with docker id bed30a90c6eb
  Thu, 11 Feb 2016 16:28:02 +0100       Thu, 11 Feb 2016 16:28:02 +0100 1       {kubelet node01}        implicitly required container POD       started         Started with docker id bed30a90c6eb
  Thu, 11 Feb 2016 16:28:07 +0100       Thu, 11 Feb 2016 16:28:07 +0100 1       {kubelet node01}        spec.containers{nginx}                  created         Created with docker id 0a5c69cd0481
  Thu, 11 Feb 2016 16:28:08 +0100       Thu, 11 Feb 2016 16:28:08 +0100 1       {kubelet node01}        spec.containers{nginx}                  started         Started with docker id 0a5c69cd0481

This is what I see if I get the description of the replication controller:

Name:           nginx
Namespace:      default
Image(s):       nginx
Selector:       app=nginx
Labels:         app=nginx
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen                             LastSeen                        Count   From                            SubobjectPath   Reason                  Message
  Thu, 11 Feb 2016 16:27:47 +0100       Thu, 11 Feb 2016 16:27:47 +0100 1       {replication-controller }                       successfulCreate        Created pod: nginx-kgq1s
  Thu, 11 Feb 2016 16:27:47 +0100       Thu, 11 Feb 2016 16:27:47 +0100 1       {replication-controller }                       successfulCreate        Created pod: nginx-pomx3
  Thu, 11 Feb 2016 16:27:47 +0100       Thu, 11 Feb 2016 16:27:47 +0100 1       {replication-controller }                       successfulCreate        Created pod: nginx-xi54i

And this is what I see if I get the description of the service:

Name:                   nginx
Namespace:              default
Labels:                 name=nginx
Selector:               name=nginx
Type:                   ClusterIP
IP:                     10.254.47.150
Port:                   <unnamed>       80/TCP
Endpoints:              <none>
Session Affinity:       None
No events.

As far as I can see, the problem may be that I don't have any endpoints, but I have no idea how to solve it.
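
I can also see the missing endpoints directly (this is just the standard endpoints query for the service above):

# kubectl get endpoints nginx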

Jon Skeet

It looks to me like the selector for your service is wrong. It's looking for a label of name: nginx, but your pods actually have app: nginx.
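
You can confirm the mismatch by listing pods with each label selector (assuming the default namespace): the service's selector matches nothing, while the RC's label matches your three pods.

# kubectl get pods -l name=nginx
# kubectl get pods -l app=nginx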

Try changing your service file to:

apiVersion: v1
kind: Service
metadata: 
  labels: 
    name: nginx
  name: nginx
spec: 
  ports:
    - port: 80
  selector: 
    app: nginx

... or change your replication controller template to use name: nginx instead of app: nginx as the label. Basically, the labels have to match so that the service knows how to present a unified facade over your pods.
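
Either way, after editing the file, one simple way to pick up the change is to delete and recreate the service, then check that endpoints appear (the file name below is the one from your question):

# kubectl delete -f nginx-rc-service.yaml
# kubectl create -f nginx-rc-service.yaml
# kubectl describe svc nginx

Once the selector matches, the Endpoints line should list your pod IPs instead of <none>.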

