Best practices for Subversion updates in a production Kubernetes cluster with a DaemonSet configuration


mder...@gmail.com

Apr 23, 2018, 6:57:11 AM
to Kubernetes user discussion and Q&A
Hi all,
I have a Kubernetes cluster in my production environment that is composed of 6 pods.
At the moment, when I have to make a new deploy, I create a new Docker image on my local machine, where I execute an svn update.
Then I push the new image to GCE, and finally I can execute a rolling update.
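Concretely, the workflow today is something like this (the image name, registry path and workload name below are just placeholders for illustration):

### current manual workflow (sketch; all names are placeholders):

svn update                                        # refresh the local working copy
docker build -t gcr.io/my-project/web:v123 .      # bake the updated code into a new image
docker push gcr.io/my-project/web:v123            # push the image to the registry
kubectl set image daemonset/web \
  web-container=gcr.io/my-project/web:v123        # trigger the rolling update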

But sometimes I have to make a lot of svn updates, so I would like to know the best practices for having a persistent disk (where I could update my svn code as often as I want) that is mounted by every pod of the daemon set, without a significant decrease in performance.

Many thanks

Marco

Sunil Bhai

Apr 23, 2018, 7:24:32 AM
to kubernet...@googlegroups.com, mder...@gmail.com

Hi,

Check this one, hope it helps


 




mder...@gmail.com

Apr 23, 2018, 9:13:46 AM
to Kubernetes user discussion and Q&A
Thanks for these suggestions!
But do these solutions use a persistent disk?
In my case a persistent disk is a hard requirement, because in certain rare situations the pods restart, and the persistent disk ensures that the code does not change across a restart.

Just to clarify the scenario under analysis:
- the image that I'm using is a Debian image with svn installed and configured
- inside the image there is all my project code (as well as Apache, PHP, etc.)
- after a deploy I could execute an 'svn update' on every pod (using a multi-terminal app like Terminator), but the problem is that if a pod restarts, the code reverts to the revision at which the image was created

So I'm searching for a solution where I could use a DaemonSet configuration with a hostPath section that points, in some way, to a previously created persistent disk and the path where to mount it.

Rodrigo Campos

Apr 23, 2018, 10:52:20 AM
to kubernet...@googlegroups.com
Sorry, there are different parts that I don't follow. Why daemon set?

And fundamentally, why not rebuild on SVN changes? You can automate that. Take into account that if you don't have different images with the code, you can't use Kubernetes to roll back either. Or you would have to track in some other way which pod had which svn revision at any moment in time, and also handle the case where an svn up fails, or fails only on some pods. IOW, it can add more problems than it solves; consider it carefully.

That being said, you can use a sidecar container to update the SVN code in a shared volume. That sounds like a good approach (in most Kubernetes examples it's shown with a webserver and git, but it's the same idea). And you should be able to handle restarts and all that just fine.
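Something along these lines, just as a sketch (the images, repo URL, paths and sync interval here are made up, and in practice you'd probably pin a revision instead of blindly updating):

### sidecar sketch (images, repo URL and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-svn-sidecar
spec:
  volumes:
  - name: code                     # shared between the two containers
    emptyDir: {}
  containers:
  - name: web-container
    image: my-web-image:latest     # apache/php image serving /opt/live
    ports:
    - containerPort: 80
    volumeMounts:
    - name: code
      mountPath: /opt/live
  - name: svn-sync                 # sidecar keeping the shared volume updated
    image: my-svn-client:latest
    command: ["/bin/sh", "-c"]
    args:
    - |
      svn checkout http://svn.example.com/project /code || true
      while true; do
        svn update /code           # use -rNNN here to pin a revision
        sleep 60
      done
    volumeMounts:
    - name: code
      mountPath: /code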

mder...@gmail.com

Apr 24, 2018, 4:05:22 AM
to Kubernetes user discussion and Q&A
On Monday, April 23, 2018 at 4:52:20 PM UTC+2, Rodrigo Campos wrote:
> Sorry, there are different parts that I don't follow. Why daemon set?

No problem.
So why a daemon set? Because I have a cluster with 6 nodes (though in the future this number could grow), and to ensure that every node contains a single pod I use a DaemonSet deploy (as you advised me in this discussion: https://groups.google.com/forum/#!topic/kubernetes-users/t1cR-v6NCpM)



> And fundamentally, why not rebuild on SVN changes? You can automate that. Take into account that if you don't have different images with the code, you can't use Kubernetes to roll back either. Or you would have to track in some other way which pod had which svn revision at any moment in time, and also handle the case where an svn up fails, or fails only on some pods. IOW, it can add more problems than it solves; consider it carefully.

To be honest, I don't think automating svn updates is a reliable solution.
Let me give an example:
- I commit some files --> revision 123, and I have to deploy those changes to prod
- I create a Docker image where I update the code to revision 123
- Then I deploy the image with a rolling update to the Kubernetes cluster
- In the following days I work on the code, committing files to make them available to the team. Now the svn revision is 200, but no deploy to prod is scheduled
- Because of a memory problem on the prod env, Kubernetes kills a pod and restarts it automatically. If an automatic code update mechanism runs when the pod starts, this leads to a situation where one pod has the code at revision 200 while all the others remain at revision 123



> That being said, you can use a sidecar container to update the SVN code in a shared volume. That sounds like a good approach (in most Kubernetes examples shown with a webserver and git, but it's the same). And you should be able to handle restarts and that stuff fine.

In fact, from what I have read on the internet, this approach seems to be the most correct solution.
The question is: do I have to create a single persistent disk that will be mounted (read-only, I assume) on every pod (via daemonset.yml)?
Or is it possible to create one persistent disk for every pod, each mounted on a single pod in r/w mode?

Thanks

Marco

Rodrigo Campos

Apr 24, 2018, 11:31:46 AM
to kubernet...@googlegroups.com


On Tuesday, April 24, 2018, <mder...@gmail.com> wrote:
On Monday, April 23, 2018 at 4:52:20 PM UTC+2, Rodrigo Campos wrote:
> Sorry, there are different parts that I don't follow. Why daemon set?

No problem.
So why a daemon set? Because I have a cluster with 6 nodes (though in the future this number could grow), and to ensure that every node contains a single pod I use a DaemonSet deploy (as you advised me in this discussion: https://groups.google.com/forum/#!topic/kubernetes-users/t1cR-v6NCpM)

Okay, if you are sure about that, then it seems fine. Just checking =)
 

> And fundamentally, why not rebuild on SVN changes? You can automate that. Take into account that if you don't have different images with the code, you can't use Kubernetes to roll back either. Or you would have to track in some other way which pod had which svn revision at any moment in time, and also handle the case where an svn up fails, or fails only on some pods. IOW, it can add more problems than it solves; consider it carefully.

To be honest, I don't think automating svn updates is a reliable solution.
Let me give an example:
- I commit some files --> revision 123, and I have to deploy those changes to prod
- I create a Docker image where I update the code to revision 123
- Then I deploy the image with a rolling update to the Kubernetes cluster
- In the following days I work on the code, committing files to make them available to the team. Now the svn revision is 200, but no deploy to prod is scheduled
- Because of a memory problem on the prod env, Kubernetes kills a pod and restarts it automatically. If an automatic code update mechanism runs when the pod starts, this leads to a situation where one pod has the code at revision 200 while all the others remain at revision 123

Exactly. But that will only happen if you manage SVN files outside of the Docker build. As long as the container's files stay inside the image, this can't happen. This is exactly why I was advising not to manage file updates outside of the container.

Or am I missing something?

 
> That being said, you can use a sidecar container to update the SVN code in a shared volume. That sounds like a good approach (in most Kubernetes examples it's shown with a webserver and git, but it's the same idea). And you should be able to handle restarts and all that just fine.

In fact, from what I have read on the internet, this approach seems to be the most correct solution.
The question is: do I have to create a single persistent disk that will be mounted (read-only, I assume) on every pod (via daemonset.yml)?
Or is it possible to create one persistent disk for every pod, each mounted on a single pod in r/w mode?

Why not an emptyDir?

mder...@gmail.com

Apr 24, 2018, 12:21:13 PM
to kubernet...@googlegroups.com
> And fundamentally, why not rebuild on SVN changes? You can automate that. Take into account that if you don't have different images with the code, you can't use Kubernetes to roll back either. Or you would have to track in some other way which pod had which svn revision at any moment in time, and also handle the case where an svn up fails, or fails only on some pods. IOW, it can add more problems than it solves; consider it carefully.

To be honest, I don't think automating svn updates is a reliable solution.
Let me give an example:
- I commit some files --> revision 123, and I have to deploy those changes to prod
- I create a Docker image where I update the code to revision 123
- Then I deploy the image with a rolling update to the Kubernetes cluster
- In the following days I work on the code, committing files to make them available to the team. Now the svn revision is 200, but no deploy to prod is scheduled
- Because of a memory problem on the prod env, Kubernetes kills a pod and restarts it automatically. If an automatic code update mechanism runs when the pod starts, this leads to a situation where one pod has the code at revision 200 while all the others remain at revision 123

Exactly. But that will only happen if you manage SVN files outside of the Docker build. As long as the container's files stay inside the image, this can't happen. This is exactly why I was advising not to manage file updates outside of the container.

Or am I missing something?

In our development env we don't use the Docker image but a standard Apache/PHP/DB stack installed on a Linux machine.
Day to day we commit file changes to the svn repo. Then, when a deployment is required, we create a Docker image on our local Linux machine (without Kubernetes).
In the Docker container we update the svn code, then we push the Docker image (with a new image tag) to GCE.
Finally, we run a rolling update with the new image tag on the prod env via Kubernetes.
So after a deploy, where a new image (containing a particular svn revision) has been installed, the code version of the image (in prod) can become misaligned with respect to the version in svn.


Why not an emptyDir?
We need a persistent volume that can be mounted in our web container (or better: in every container that the DaemonSet will create). This volume would contain our svn code.
With a single pod there are no problems applying this approach:
we are able to execute an svn checkout on the persistent disk, then mount it in the directory we want.
If the pod restarts due to some problem, thanks to the persistent disk it will come back with the same svn revision.
Problems arise when we try to use the persistent volume (in ReadOnlyMany mode) inside the DaemonSet configuration.
Below is the YAML used in our tests:



### test-persistent-readonly-disk.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-persistent-disk               
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadOnlyMany                    
  gcePersistentDisk:
    pdName: "test-persistent-disk"   
    fsType: "ext4"




### test-persistent-readonly-disk-claim.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-persistent-disk-claim
  labels:
    type: gcePersistentDisk
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Gi




### test-daemonset.yaml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: test-daemonset

spec:
  updateStrategy:
    type: RollingUpdate                

  selector:
    matchLabels:
      app: webpod
     
  template:
    metadata:
      labels:
        app: webpod
       
    spec:
      nodeSelector:
        app: frontend-node
       
      terminationGracePeriodSeconds: 30
            
      volumes:        
      - name: persistent-volume
        persistentVolumeClaim:
          claimName: test-persistent-disk-claim
                      
      containers:         
      - name: web-container
        image: autoxyweb_build:v4.3.2
        ports:
          - containerPort: 80
        volumeMounts:
        - name: persistent-volume
          mountPath: /opt/live/
          subPath: svn/project



We get this error:

Back-off restarting failed container
Error syncing pod
AttachVolume.Attach failed for volume "pvc-a0683...." : googleapi: Error 400: The disk resource '.../disks/gke-test-cluster-8ef05-pvc-a0...' is already being used by '.../instances/gke-test-cluster-default-pool-32598eec-lfn7'     
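
Reading the GCE docs, my suspicion is that the disk is being attached read-write (a GCE PD can only be attached read-write to a single node), so perhaps every reference has to be marked read-only explicitly. Would something like this be the right direction (same objects as above, only the readOnly flags added)?

### read-only attach sketch (same names as above):

# in test-persistent-readonly-disk.yaml:
  gcePersistentDisk:
    pdName: "test-persistent-disk"
    fsType: "ext4"
    readOnly: true                  # attach the PD read-only on every node

# in test-daemonset.yaml, pod template:
      volumes:
      - name: persistent-volume
        persistentVolumeClaim:
          claimName: test-persistent-disk-claim
          readOnly: true            # mount the claim read-only in the pod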

    
Any suggestions?
Thanks ;)

Rodrigo Campos

Apr 26, 2018, 1:31:21 PM
to mder...@gmail.com, kubernet...@googlegroups.com
Adding Kubernetes users again :)

On Thursday, April 26, 2018, Rodrigo Campos <rodr...@gmail.com> wrote:
On Thursday, April 26, 2018, <mder...@gmail.com> wrote:

On 25/04/2018 01:27, Rodrigo Campos wrote:
I don't understand why that can happen if the code is in the container image. Unless you change it while it is running, there should be no way to misalign anything. What am I missing?

We're working with a continuous delivery approach, so we would like to be able to execute svn up on the prod env without the obligation to create a new image for every deploy.

Really different things, though.

And, as I said in previous emails, you lose several advantages of immutability of containers.

For example, we are using zendesk/samson (a project on GitHub): when a merge is done, Travis runs the tests and, if they pass, a new Docker image is created and deployed via Samson.
Automatically. And if something fails (like liveness probes or something) it rolls back to the previous image.
This is pretty much what container deploys look like in the common case: your code is in the container, and you just create new container images.

If you are managing all of this yourself, then you need to handle all the problems that these other tools solve for you.

mder...@gmail.com

May 9, 2018, 10:31:17 AM
to kubernet...@googlegroups.com
Hello,
I'm writing because I found a workaround that lets us deploy minor updates without restarting the containers. Maybe this idea could be useful for others.
The workaround is based on adding extra business logic to the start script inside the Docker image.
In my Dockerfile, the last line is:

CMD ["/tmp/final_script.sh"]

and in this script I perform some operations; above all, I start the apache service.

Now, before starting apache, the script updates a single file to the latest revision.
This file contains the svn revision deployed on the prod env (for example 21180).
After this update, the script checks whether the revision read from the file (21180) differs from the one with which the Docker image was created (for example 21165).
If so, the script updates the code to revision 21180.
Then it starts apache as usual.
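
In sketch form, the logic is roughly the following (the paths, marker file name and apache start command are illustrative, not the real ones):

### final_script.sh (sketch; paths and file names are illustrative):

#!/bin/sh
set -e

CODE_DIR=/opt/live
REV_FILE="$CODE_DIR/PROD_REVISION"        # versioned file holding the target prod revision
IMAGE_REV=$(cat /tmp/image_revision)      # revision recorded at image build time, e.g. 21165

svn update "$REV_FILE"                    # refresh only the marker file
PROD_REV=$(cat "$REV_FILE")               # e.g. 21180

if [ "$PROD_REV" != "$IMAGE_REV" ]; then
  svn update -r "$PROD_REV" "$CODE_DIR"   # align the code with the prod revision
fi

exec apachectl -D FOREGROUND              # start apache as usual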

This image (with the "extended" final_script.sh) has been deployed to prod with a rolling update.
From this moment on, we're able to deploy minor updates without the need to create a new image and, above all, without any risk of breakage during future deployments.

Our approach is as follows:
- commit the new files (say this produces revision 21200)
- update (and commit) the file containing the prod revision (it will now contain 21200)
- run a custom script to execute svn up -r21200 on every web container (sketched below)
- if a container is restarted due to some problem, final_script.sh runs at boot; the image revision will differ (21165 != 21200), so the script updates the code to 21200, and the restarted container will have the same code as all the others
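
The custom script in the third step boils down to a loop like this (the label, container name and path are illustrative):

### svn up on every web container (sketch):

#!/bin/sh
REV=21200
for pod in $(kubectl get pods -l app=webpod -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec "$pod" -c web-container -- svn update -r "$REV" /opt/live
done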