Create pod, but can't see it?


jay vyas

Jan 6, 2015, 8:44:29 AM
Hi kubernetes!

I've attempted to start my first container in a new single-node kube cluster, but it seems that (even though create succeeds...) it isn't shown via "get pods".

1) Is there somewhere I should look to check whether the container is running?

2) How do I create another container with the same name?

[root@xyz deploy]# kubectl create -f /home/jay/Development/kubernetes/examples/guestbook-go/guestbook-controller.json
F0106 08:39:50.598913     404 create.go:61] replicationController "guestbook-controller" already exists

[root@xyz deploy]# kubectl get pods
NAME                IMAGE(S)            HOST                LABELS              STATUS

Thanks!

Jay

Brendan Burns

Jan 6, 2015, 2:08:06 PM
Hey Jay,
you have created a replication controller, not a pod. The replication controller is in turn responsible for creating the replicated pods, but for some reason it is not doing so.

As a start on debugging, try creating a single pod:

kubectl create -f .../kubernetes/examples/guestbook-go/redis-master-pod.json

And see what happens.
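You can also look at the controller itself, and delete and re-create it if you want to start fresh under the same name (which covers your second question). Roughly, treating the exact resource names and flags as a sketch since they may differ in your build:

kubectl get replicationcontrollers
kubectl delete replicationcontroller guestbook-controller
kubectl create -f .../kubernetes/examples/guestbook-go/guestbook-controller.json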

Thanks!
--brendan


jay vyas

Jan 6, 2015, 2:13:31 PM
Thanks Brendan! Yup, I can create a pod manually, at least. It seems to stay in status Pending. I'll dig in more to see what's going on.

Mark Lamourine

Jan 6, 2015, 2:17:49 PM
I'm having a similar-sounding problem, but I have no idea (yet) whether it's related. I'm creating pods from my own images. The kube JSON was causing the pods to instantiate and run prior to the winter break. Now I see the kube pause containers start, and kubectl indicates the pods exist but are not running, because the images are not (yet) available on the minions.

Yesterday I assumed that the Docker outage was the culprit, but I'm still seeing it today.

I'll run a couple of test runs and get some output to share.

- Mark


--
Mark Lamourine <mlam...@redhat.com>
Sr. Software Developer, Cloud Strategy
Red Hat, 314 Littleton Road, Westford MA 01886
Voice: +1 978 392 1093
http://people.redhat.com/~mlamouri
markllama @ irc://irc.freenod.org*lopsa

Brendan Burns

Jan 6, 2015, 2:39:13 PM
Can you do:

kubectl get minions

and validate that you still have nodes that things can run on?

You can also run

kubectl get events

and see what is happening.

--brendan

Mark Lamourine

Jan 6, 2015, 3:13:14 PM
This appears to be a docker problem:

Run on minion-1

docker pull markllama/mongodb
Pulling repository markllama/mongodb
4ae4db141b71: Error pulling image (latest) from markllama/mongodb, Driver devicemapper failed to get image parent ff75b0852d47a18f23ebf57d2ef7974f470a754c534fa44dfb94d5deec69e6c0: Unknown device ff75b0852d47a18f23ebf57d2ef7974f470a754c534fa44dfb94d5deec69e6c0 57d2ef7974f470a754c534fa44dfb94d5deec69e6c0
2015/01/06 20:10:18 Error pulling image (latest) from markllama/mongodb, Driver devicemapper failed to get image parent ff75b0852d47a18f23ebf57d2ef7974f470a754c534fa44dfb94d5deec69e6c0: Unknown device ff75b0852d47a18f23ebf57d2ef7974f470a754c534fa44dfb94d5deec69e6c0

I'm wondering if there's an issue with my image(s) either because of a problem I created or the outages yesterday or last week.

I'm going to rebuild my images and push them up again since it appears that other bases are working.
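If it turns out to be corrupted devicemapper state on the minion rather than the images themselves, the heavy-handed reset I have in mind would be roughly this (destructive: it throws away every image and container on that minion, so only as a last resort):

systemctl stop docker
rm -rf /var/lib/docker/*
systemctl start docker
docker pull markllama/mongodb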

- Mark

Mark Lamourine

Jan 6, 2015, 4:39:53 PM


----- Original Message -----
> Can you do:
>
> kubectl get minions
>
> and validate that you still have nodes that things can run on?

kubectl get minions
Running: /home/bos/mlamouri/kubernetes/cluster/../cluster/vagrant/../../_output/dockerized/bin/linux/amd64/kubectl --auth-path=/home/bos/mlamouri/.kubernetes_vagrant_auth get minions
NAME LABELS
10.245.1.3 <none>
10.245.1.4 <none>
10.245.1.5 <none>

>
> You can also run
>
> kubectl get events

kubectl get events
Running: /home/bos/mlamouri/kubernetes/cluster/../cluster/vagrant/../../_output/dockerized/bin/linux/amd64/kubectl --auth-path=/home/bos/mlamouri/.kubernetes_vagrant_auth get events
TIME NAME KIND SUBOBJECT CONDITION REASON MESSAGE
Tue, 06 Jan 2015 20:57:29 +0000 10.245.1.3 Minion starting Starting kubelet.
Tue, 06 Jan 2015 21:01:20 +0000 10.245.1.4 Minion starting Starting kubelet.
Tue, 06 Jan 2015 21:05:36 +0000 10.245.1.5 Minion starting Starting kubelet.
Tue, 06 Jan 2015 21:07:46 +0000 pulpdb Pod Pending scheduled Successfully assigned pulpdb to 10.245.1.4
Tue, 06 Jan 2015 21:08:09 +0000 pulpdb BoundPod implicitly required container net waiting pulled Successfully pulled image "kubernetes/pause:latest"
Tue, 06 Jan 2015 21:08:10 +0000 pulpdb BoundPod implicitly required container net running started Started with docker id 357a36924c4a81d26cb3c38804bd9a06152665769e0ca10fcea431a55a79393e
Tue, 06 Jan 2015 21:08:10 +0000 pulpdb BoundPod implicitly required container net waiting created Created with docker id 357a36924c4a81d26cb3c38804bd9a06152665769e0ca10fcea431a55a79393e
Tue, 06 Jan 2015 21:10:24 +0000 pulpdb BoundPod spec.containers{pulp-db} failed failed Failed to pull image "markllama/mongodb"

Looks like my image can't be pulled. It pulls OK from the vagrant host, but not on the minions.

Hrrm.

- Mark

Mark Lamourine

Jan 6, 2015, 5:02:46 PM
It's not just my images. I can't pull fedora:20 (the base image for all of mine) or fedora:21 to a minion successfully.

Brendan Burns

Jan 6, 2015, 5:04:27 PM
Is there an expired dockercfg or something? Have you tried pinging hub.docker.com?

Maybe the IP address is cached somewhere? Try bouncing the docker daemon?

Does a vanilla 'docker pull' work?
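Concretely, on one of the minions, something along these lines (the dockercfg path is from memory, so adjust for your setup):

ping -c 3 hub.docker.com      # is the registry reachable from the minion?
cat /root/.dockercfg          # any stale or expired credentials?
systemctl restart docker      # bounce the daemon
docker pull busybox           # does a vanilla pull of a small image work?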

jay vyas

Jan 6, 2015, 7:11:03 PM
1) It looks like one error I had was that I used "localhost" instead of "http://localhost" in some of my KUBE_MASTER parameters (corrected line sketched below).

2) In any case, it still seems that even though the services start just fine, the minions aren't running ("get minions" returns nothing)... So I guess there are multiple possible causes for the situation where you can't run a pod, and I think the underlying issue is that not enough information is surfaced about why.
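For item 1, the corrected line in my config looks roughly like this (the file path and port here are just how my install happens to be laid out, so treat them as examples):

# /etc/kubernetes/config (excerpt)
KUBE_MASTER="--master=http://localhost:8080"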

Brendan Burns

Jan 6, 2015, 7:14:19 PM
The minions are probably failing the health check. What are you setting for --machines on the controller-manager binary?


jay vyas

Jan 6, 2015, 10:22:14 PM
For --machines (controller-manager):

# Comma seperated list of minions
KUBELET_ADDRESSES="--machines=localhost"
# Add you own!
KUBE_CONTROLLER_MANAGER_ARGS=""
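I'll also try hitting the kubelet on that machine directly, to see whether the health check could even pass. Assuming the kubelet in my build exposes /healthz on its default port (10250, I think), something like:

curl http://localhost:10250/healthz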

Mark Lamourine

Jan 7, 2015, 11:23:06 AM


----- Original Message -----
> is there an expired dockercfg or something? Have you tried pinging
> hub.docker.com?

yes: OK

>
> Maybe the IP address is cached somewhere? Bounce the docker daemon?

Done, no effect.

> Does a vanilla 'docker pull' work?

No, that's what gave that last message:

docker pull fedora:20
....
can't mount <long hex string>

I'm rebuilding from today's source and I'll update after the first try. If that fails I'm going to revert to a commit that worked pre-holiday and try that.

- Mark

Mark Lamourine

Jan 7, 2015, 11:24:55 AM


----- Original Message -----
> minions are probably failing health check. What are you setting for
> --machines to the controller-manager binary.

It's really unfortunate that, now that "vagrant up <name>" is forbidden, I can't find a way to bring up the cluster one component at a time and then check it.

All or nothing is eating a lot of cycle time.

- Mark

Brendan Burns

Jan 7, 2015, 11:31:12 AM

I think this might be a race between docker pull and image GC; see the PR I sent last night.

Sorry!
Brendan

Mark Lamourine

Jan 7, 2015, 11:59:39 AM


----- Original Message -----
> I think this might be a race in docker pull vs image GC, see the PR I sent
> last night.

I'll take a look. I'm still seeing it with a new build from this morning with basically any large base image.

http://www.fpaste.org/166817/

I tried pulling F20 and Ubuntu Precise and both had the same problem on a brand new minion.

>
> Sorry!

If there's a PR then there's a solution in the pipeline. I'm happy with that :-)

- Mark

Brendan Burns

Jan 7, 2015, 12:12:23 PM

Yeah, the PR is still in flight, so I wouldn't expect it to work at head yet.

Mark Lamourine

Jan 7, 2015, 12:45:11 PM


----- Original Message -----
> Yeah, PR is still in flight, so wouldn't expect it to work at head yet.

I still have to learn how to pull commits from other people's repos. I added your workspace as a remote and fetched it, but I didn't see the PR branch available.
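From what I can tell, something along these lines should also grab a PR's commits directly from the main repo (untested on my end, and I don't have the PR number handy, so <PR-number> and the local branch name below are placeholders):

git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git
git fetch upstream pull/<PR-number>/head:brendans-fix
git checkout brendans-fix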

For now I just looked at the PR commit and cut-and-pasted the three changed lines, to play with it and verify that it fixes my blocker.

Back in 30ish with an answer.

- Mark