imagePullPolicy: Always is not triggering


Norman Khine

Sep 28, 2016, 7:45:06 AM
to Kubernetes user discussion and Q&A
Hello,
I have set up a Lambda function to trigger an update to my k8s cluster; the problem is that the image is not being pulled.
Here is my YAML file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-prod
  labels:
    pod: app
    track: production
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "beta.kubernetes.io/instance-type",
                    "operator": "In",
                    "values": ["c4.large"]
                  }
                ]
              }
            ]
          }
        }
      }
spec:
  replicas: 5
  template:
    metadata:
      labels:
        pod: app
        track: production
    spec:
      containers:
        - image: quay.io/user/api:develop
          name: api
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE
              value: mongodb://mongo-db:27000//api-v2?replicaSet=mongo
          imagePullPolicy: Always
        - image: quay.io/user/media:develop
          name: media
          ports:
            - containerPort: 4000
          imagePullPolicy: Always
      imagePullSecrets:
        # we download these from quay.io account
        - name: my-pull-secret

The problem is that on quay.io the `latest` tag is only applied to builds from the default GitHub branch, not to the `develop` branch, for example.

So my question is: how do I force kubectl to pull the latest image built from a specific branch?

Here is my Lambda function:


/* eslint-disable no-console */
const fs = require('fs');
const K8Api = require('kubernetes-client');

module.exports.handle = (event, context, cb) => {
  console.log('quay.io', JSON.stringify(event.body));
  const clusterUrls = {
  };
  const stage = event.stage;
  const deployment = event.query.deployment;
  const container = event.query.container;
  const k8 = new K8Api.Extensions({
    url: clusterUrls[stage],
    version: 'v1beta1',
    ca: fs.readFileSync(`${stage}-ca.pem`),
    cert: fs.readFileSync(`${stage}-k8s-admin.pem`),
    key: fs.readFileSync(`${stage}-k8s-admin-key.pem`),
  });
  // Look up the current Deployment to see which image is currently running.
  k8.ns.deployment.get(deployment, (err, result) => {
    if (err) {
      cb(err);
    } else {
      const match = result.spec.template.spec.containers
                     .filter(c => c.name === container);
      const currentImage = match && match[0].image;
      console.log('k8s.current', currentImage);
      const currentTag = currentImage.split(/:/)[1];
      // Pick the first tag in the webhook payload that differs from the tag
      // currently deployed, falling back to 'latest'.
      const tags = event.body.docker_tags;
      const tag = tags.filter(t => t !== currentTag)[0] || 'latest';
      const dockerImage = `${event.body.docker_url}:${tag}`;
      // Patch body: update the named container's image in the pod template.
      const patch = {
        name: deployment,
        body: {
          spec: {
            template: {
              spec: {
                containers: [{
                  name: container,
                  image: dockerImage,
                }],
              },
            },
          },
        },
      };
      console.log('k8s.patch', stage, JSON.stringify(patch));
      k8.ns.deployments.patch(patch, (err2, result2) => {
        console.log('k8s.result', JSON.stringify(result2 || null));
        cb(err2, result2);
      });
    }
  });
};
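
One caveat with this patch: the Deployment controller only creates a new ReplicaSet (and therefore new pods) when the patched pod template actually differs from what is already there. If dockerImage comes out identical to the image currently in the spec, the patch is a no-op, no container is restarted, and imagePullPolicy: Always never gets a chance to fire. A couple of quick ways to check what the cluster thinks is deployed, using the app-prod deployment name from the YAML above:

# show the image(s) recorded in the Deployment spec
kubectl describe deployment app-prod | grep Image
# show the rollout revisions that have actually been created
kubectl rollout history deployment app-prod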


Then on quay.io I have a notification set up as a webhook POST, which fires on a successful build and points at the AWS Lambda function's endpoint. It all works up to a point, except for this inability to always pull the image.

Any advice on how to solve this, and on whether this is the right way to do it, is much appreciated.

Rodrigo Campos

Sep 28, 2016, 9:36:12 AM
to kubernet...@googlegroups.com
What? Why does it matter, if you are using the `develop` tag?

 
So my question is: how do I force kubectl to pull the latest image built from a specific branch?

The image is pulled from a Docker registry. There are no branches there, just a repository and a tag (your image name and a tag, in this case `develop`).

So probably the image in your registry with the `develop` tag is not what you want?



Thanks,
Rodrigo

Norman Khine

Sep 28, 2016, 10:10:13 AM
to kubernet...@googlegroups.com
Well, the issue is that the newly updated version is not deployed. When I push to the `develop` branch, the image is built on quay.io and this then triggers the update, but the k8s cluster keeps using the cached image version, so the only way to force an update is to alternate between :latest and :develop.

The same happens if I do it from the command line:

kubectl replace -f templates/pods/prod/app.yaml

The only way to make a change is to alter the tag in the YAML file from :develop to :latest and back!
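
For context: imagePullPolicy: Always only takes effect when the kubelet actually starts a container. If the Deployment's pod template is unchanged, kubectl replace creates no new pods, so nothing is pulled; that is why flipping the tag back and forth appears to be the only thing that works. A minimal sketch of one workaround, assuming the app-prod deployment from the YAML above and an arbitrary, throwaway annotation key: bump an annotation on the pod template so the template differs and a rollout is forced.

# any change to the pod template (even an unused annotation) forces new pods
kubectl patch deployment app-prod -p \
  '{"spec":{"template":{"metadata":{"annotations":{"redeploy-ts":"2016-09-28T1200"}}}}}'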







Rodrigo Campos

Sep 28, 2016, 11:36:09 AM
to kubernet...@googlegroups.com
On Wed, Sep 28, 2016 at 03:10:09PM +0100, Norman Khine wrote:
> Well, the issue is that the newly updated version is not deployed. When I
> push to the `develop` branch, the image is built on quay.io and this then
> triggers the update, but the k8s cluster keeps using the cached image
> version, so the only way to force an update is to alternate between
> :latest and :develop.
>
> The same happens if I do it from the command line:
>
> kubectl replace -f templates/pods/prod/app.yaml
>
> The only way to make a change is to alter the tag in the YAML file from
> :develop to :latest and back!

Ohh, I thought the new deployment was not working. But ok, no new deployment is
made.

Yes, that is how it works. It's not easy to do a rollback, etc., if you always use the same tag, and I strongly recommend using a unique tag per build unless you have a good reason not to. For example, in our case a new tag is created automatically for each development build. But it is particularly important in production, IMHO, because you know exactly which version is running, and a rollback or an update that fails doesn't break the currently running pods, etc. For example, if you use "latest" for your production env and have 3 pods running it, then create a new image, tag it as latest again, and the readiness check fails, you can't roll back (your previous image is overwritten, since latest now points to the new one), and if a node where the remaining pods are running crashes, they will fail to start because "latest" is now the new (failing) image.
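
A minimal sketch of one way to get a unique tag per build, assuming you build and push the image yourself (the quay.io repository name is taken from the YAML earlier in the thread; the short git SHA serves as the per-build tag):

# tag each build with the commit SHA instead of reusing :develop
TAG=$(git rev-parse --short HEAD)
docker build -t quay.io/user/api:$TAG .
docker push quay.io/user/api:$TAG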

I think you might be able to force a deploy with `kubectl set image`, but I haven't used it.
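
A minimal sketch of that command, using the deployment and container names from the YAML earlier in the thread; the tag shown is a hypothetical new build tag:

kubectl set image deployment/app-prod api=quay.io/user/api:3f9c2ab

Note that this also only triggers a rollout if the image string actually differs from what is currently in the spec, which is another reason to prefer a unique tag per build over reusing :develop.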



Thanks a lot,
Rodrigo

Norman Khine

Sep 28, 2016, 11:46:35 AM
to kubernet...@googlegroups.com
Yes, that makes sense, thank you.

Rodrigo
