But sometimes I have to make a lot of svn updates, so I would like to know the best practices for having a persistent disk (where I could update my svn code as often as I want) that is mounted by every pod of the DaemonSet, without a significant decrease in performance.
Many thanks
Marco
Hi,
Check this one, hope it helps:
https://github.com/waterplaclid/spark-kubernetes-demo.git (path: spark-kubernetes-demo/brightics-on-kubernetes/subversion-service.yaml)
Also check this one: https://hub.docker.com/r/solsson/svnsync/
Sent from Mail for Windows 10
--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-use...@googlegroups.com.
To post to this group, send email to kubernet...@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.
Just to clarify the scenario under analysis:
- the image that I'm using is a Debian image with svn installed and configured
- so the image contains all my project code (besides Apache, PHP, etc.)
- after a deploy I could execute an 'svn update' on every pod (using a multi-terminal app like Terminator), but the problem is that if a pod restarts, the code returns to the revision it had when the image was created
So I'm searching for a solution where I could use a DaemonSet configuration with a hostPath section that indicates, in some way, a persistent disk (previously created) and the path where to mount it.
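As a starting point, here is a minimal sketch of what such a DaemonSet could look like. Note that hostPath only mounts a directory that already exists on each node's filesystem; to mount a single pre-created GCE persistent disk on pods across many nodes, a gcePersistentDisk volume with readOnly: true is the usual route (a GCE PD can be attached read-only to many nodes, but read-write to only one). All names, images, and paths below are placeholders, not taken from the thread:

```yaml
# Hypothetical sketch: DaemonSet mounting one pre-created GCE persistent disk
# read-only into every pod. Placeholder names throughout.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web-daemonset
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-registry/apache-php:latest   # placeholder image
        volumeMounts:
        - name: svn-code
          mountPath: /var/www/html             # placeholder mount path
          readOnly: true
      volumes:
      - name: svn-code
        gcePersistentDisk:
          pdName: svn-code-disk                # pre-created disk (placeholder)
          fsType: ext4
          readOnly: true                       # required to attach to many nodes
```

With this layout, running `svn update` once on whatever machine holds the disk read-write would make the new revision visible to all pods, at the cost of losing per-pod isolation.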
On Monday, 23 April 2018 at 16:52:20 UTC+2, Rodrigo Campos wrote:
> Sorry, there are different parts that I don't follow. Why daemon set?
No problem.
So why a DaemonSet? Because I have a cluster with 6 nodes (though in the future this number could grow), and to ensure that every node runs exactly one pod I use a DaemonSet deploy (as you advised me in this discussion: https://groups.google.com/forum/#!topic/kubernetes-users/t1cR-v6NCpM)
> And fundamentally, why not rebuild on SVN changes? You can automate that. Take into account that if you don't have different images with the code, you can't use Kubernetes to roll back either. Or you would have to check in some other way which pod had which svn revision at any moment in time, and also handle if an svn up fails, or fails in some pods only. IOW, it can add more problems than it solves, so consider it carefully.
To be honest, I don't think that automating svn updates is a reliable solution.
Let me give an example:
- I commit some files --> revision 123, and I have to deploy those changes to prod
- I create a Docker image in which I update the code to revision 123
- Then I deploy the image to the Kubernetes cluster with a rolling update
- In the following days I work on the code, committing files to make them available to the team. Now the svn revision is 200, but a deploy to prod is not scheduled
- Because of a memory problem in the prod env, Kubernetes kills a pod and restarts it automatically. If an automatic code-update mechanism runs when the pod starts, one pod ends up with code at revision 200 while all the others stay at revision 123
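The drift described above is avoided when the revision is baked into the image at build time, since a restarted pod just re-runs the same image. A hypothetical Dockerfile sketch (repository URL, revision, and paths are placeholders):

```dockerfile
# Sketch: pin the deployed code to one SVN revision at image build time,
# so every pod started from this image serves exactly revision 123.
FROM debian:stretch
RUN apt-get update && apt-get install -y subversion apache2 php && \
    rm -rf /var/lib/apt/lists/*
# svn export fetches a clean copy of the tree at the given revision
# (no .svn metadata); placeholder repo URL.
RUN svn export -r 123 https://svn.example.com/repo/trunk /var/www/html
```

Building a new image tag per deployed revision is also what makes `kubectl rollout undo` meaningful, as Rodrigo notes below.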
> That being said, you can use a sidecar container to update the SVN code in a shared volume. That sounds like a good approach (most Kubernetes examples show it with a webserver and git, but it's the same). And you should be able to handle restarts and that stuff fine.
In fact, from what I have read on the internet, this approach seems to be the most correct solution.
The question is: do I have to create a single persistent disk that is mounted (read-only, I assume) on every pod (via daemonset.yml)?
Or is it possible to create one persistent disk per pod, with each one mounted on a single pod in r/w mode?
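With the sidecar approach, neither a shared nor a per-pod persistent disk is strictly needed: each pod can own a plain emptyDir, checked out by an init container and kept current by the sidecar. A minimal sketch of the pod template spec (image names, repo URL, paths, and the 60-second interval are all placeholder assumptions):

```yaml
# Hypothetical pod template fragment: init container does the initial
# checkout into a shared emptyDir; a sidecar periodically runs `svn update`.
spec:
  initContainers:
  - name: svn-checkout
    image: my-registry/svn-client:latest      # placeholder image with svn
    command: ["svn", "checkout", "https://svn.example.com/repo/trunk", "/code"]
    volumeMounts:
    - name: shared-code
      mountPath: /code
  containers:
  - name: web
    image: my-registry/apache-php:latest      # placeholder web image
    volumeMounts:
    - name: shared-code
      mountPath: /var/www/html                # web server reads the checkout
  - name: svn-sync
    image: my-registry/svn-client:latest
    command: ["sh", "-c", "while true; do svn update /code; sleep 60; done"]
    volumeMounts:
    - name: shared-code
      mountPath: /code
  volumes:
  - name: shared-code
    emptyDir: {}                              # per-pod scratch volume
```

The trade-off is the one already raised in this thread: each pod updates independently, so pods can transiently (or, if an update fails, persistently) serve different revisions.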
Exactly. But that will only happen if you manage the SVN files outside of the Docker build. As long as the files stay contained in the container image, this can't happen. This is exactly why I was advising against managing file updates outside of the container.
Or am I missing something?
> Why not an emptyDir?
We need a persistent volume that can be mounted in our web container (or better: in every container that the DaemonSet will create). In this volume we would have our svn code.
On Thursday, April 26, 2018, <mder...@gmail.com> wrote:
On 25/04/2018 01:27, Rodrigo Campos wrote:
I don't understand why can that happen if the code is in the container image. Unless you change it while it is running, there should be no chance to misalign anything. What am I missing?
> We're working with a continuous delivery approach, so we would like to have the chance to execute svn up on the prod env without the obligation to create a new image for every deploy.

Really different things, though. And, as I said in previous emails, you lose several advantages of the immutability of containers.

For example, we are using zendesk/samson (a project on GitHub): when a merge is done, Travis runs the tests and, if they pass, a new Docker image is created and deployed via Samson. Automatically. And if something fails (like liveness probes), it rolls back to the previous image.

This is pretty much what container deploys look like in the common case: your code is in the container, and you just create new container images. If you manage all of this yourself, then you have to handle by yourself all the problems that these other tools solve for you.