Check out dynamic volume provisioning here.
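Roughly, that means creating a PersistentVolumeClaim that names a StorageClass and letting the provisioner create a fresh disk per claim; a minimal sketch with illustrative names (GCE PD is only an example provisioner):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast                       # illustrative name
provisioner: kubernetes.io/gce-pd  # pick the provisioner for your environment
parameters:
  type: pd-ssd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-app-data                # illustrative name
spec:
  storageClassName: fast
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # the provisioner creates a 10Gi disk for this claim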
On Thu, Jan 5, 2017 at 3:07 PM, Montassar Dridi <montass...@gmail.com> wrote:
Hello! I'm using a Kubernetes Deployment with a persistent volume to run my application, but when I try to add more replicas or autoscale, all the new pods try to connect to the same volume. How can I automatically create a new volume for each new pod, the way StatefulSets (PetSets) are able to do?
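The StatefulSet/PetSet behavior being referred to is volumeClaimTemplates, which stamp out one PVC (and one dynamically provisioned disk) per replica; a minimal sketch with illustrative names:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                        # illustrative
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx               # placeholder image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # one PVC per replica: data-web-0, data-web-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi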
On Thu, Jan 5, 2017 at 5:24 PM, Montassar Dridi
<montass...@gmail.com> wrote:
> Hi Tim,
>
> I'm trying to do something like this example
> https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd
> I have a Java web application and a MySQL database running within Kubernetes,
> connected to each other, using Kubernetes Deployments as in the example above.
"connected" via a Service or in the same Pod?
> When I try to increase the number of web replicas, they all try to
> connect to the persistent disk that was created at the beginning, and
> they get stuck, unable to create the new web pods.
PDs are only able to be mounted read-write by one pod at a time.
That's just a limitation of the block device+filesystem interface.
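In manifest terms, that limitation surfaces in the PVC accessModes field; GCE PDs only offer ReadWriteOnce (one node can mount read-write), not ReadWriteMany. A minimal sketch with an illustrative name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data               # illustrative
spec:
  accessModes:
  - ReadWriteOnce              # RWO: one node read-write; GCE PDs do not support ReadWriteMany (RWX)
  resources:
    requests:
      storage: 10Gi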
> So what I want is that when I ask for new pods, a unique new persistent
> disk/volume should be created and associated with each one of them, like
> StatefulSets/PetSets do it.
Why do you want a new PD for each replica? If it is new, then the
"persistent" nature of it is not valuable, and you can just use plain
inline volumes instead of a claim. But if it is going to be released
with the pod, why use a PD at all? Why not just use emptyDir?
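A per-pod scratch volume in a Deployment would look roughly like this; the image and paths are placeholders, and each emptyDir is created fresh for its pod and removed with it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: my-java-app:latest  # placeholder image
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        emptyDir: {}               # created per pod, deleted when the pod goes away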
I do not think you have to understand what you are asking for; I've learned a lot by asking questions I only half understood :) With that said, autoscaling StatefulSets was a question and not a feature request :)
I do not see how "data is important, and needs to be preserved" and "pods (compute) have no identity" are mutually incompatible statements, but if you say so. :)
In any case, the data is more or less important since it is fetchable; it's just that if it's already there when a new pod is spun up, or when a deployment is updated and a new pod created, it speeds up the startup time drastically (virtually immediate vs. 2-3 minutes to sync).
Shared storage would be ideal (a Deployment with HPA and a mounted NFS volume). But I get OOM errors when an RWX persistent volume is used and more than one pod is syncing the same data to it at the same time; I do not know why these OOM errors occur. Maybe it has something to do with the code that syncs the data. RWX seems to be a recurring challenge.
Thank you Jing and thank you Tim. If this feature will allow HPA-enabled, Deployment-managed pods to spawn each with a prepopulated volume, that would be nice. If not, using emptyDir with a ~2 minute startup delay while the data is synced for each new pod is what it is.
PS: would be nice if GKE had a RWX StorageClass out of the box.
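One common way to express that per-pod "sync before the app starts" step is an init container writing into an emptyDir; a rough sketch, where the images and the sync command are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                            # illustrative
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      initContainers:
      - name: warm-cache
        image: my-sync-image:latest    # placeholder: whatever fetches the data
        command: ["sh", "-c", "sync-data.sh /cache"]   # placeholder sync command
        volumeMounts:
        - name: cache
          mountPath: /cache
      containers:
      - name: web
        image: my-app:latest           # placeholder
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir: {}                   # each pod still pays the sync cost once, at startup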
FWIW, I recently ran into a similar issue, and the way I handled it was
to have each of the pods mount an NFS shared file system as a PV (AWS
EFS, in my case) and have each pod write its output into a directory on
the NFS share. The only issue then is just to make sure that each pod
writes its output to a file that has a unique name (e.g., has the pod
name or ID in the file name) so that the pods don't overwrite each
other's data.
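A sketch of that setup, with the pod's own name injected via the Downward API so each replica can write to a uniquely named file; the image, paths, and claim name are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker                     # illustrative
spec:
  replicas: 3
  selector:
    matchLabels: {app: worker}
  template:
    metadata:
      labels: {app: worker}
    spec:
      containers:
      - name: worker
        image: my-worker:latest    # placeholder
        env:
        - name: POD_NAME           # expose the pod's own name to the process
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # the app can then write to /shared/output-$(POD_NAME).dat so replicas never collide
        volumeMounts:
        - name: shared
          mountPath: /shared
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: efs-shared    # an RWX claim backed by the NFS/EFS share; illustrative name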
On Thu, Sep 6, 2018, 3:20 PM Naseem Ullah <nas...@transit.app> wrote:
> Thank you Jing and thank you Tim. If this feature will allow HPA-enabled, Deployment-managed pods to spawn each with a prepopulated volume, that would be nice.

This feature enables start-from-snapshot volumes but does not fundamentally alter the model.

> If not, using emptyDir with a ~2 minute startup delay while the data is synced for each new pod is what it is. PS: would be nice if GKE had a RWX StorageClass out of the box.

We have Cloud Filestore (https://cloud.google.com/filestore/) if that works for you, but this is not a Google product mailing list, so I won't advertise any more. :)
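For what it's worth, a Filestore (or any NFS) export can also be wired in by hand as a ReadWriteMany PersistentVolume plus a matching claim; a rough sketch, with the server address and export path as placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared                 # illustrative
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteMany                  # NFS supports RWX, unlike block-device-backed PDs
  nfs:
    server: 10.0.0.2               # placeholder: the Filestore/NFS server address
    path: /vol1                    # placeholder: the export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared                 # illustrative
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""             # bind to the pre-created PV rather than dynamically provisioning
  resources:
    requests:
      storage: 1Ti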
On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah <nas...@transit.app> wrote:
> I do not think you have to understand what you are asking for; I've learned a lot by asking questions I only half understood :) With that said, autoscaling StatefulSets was a question and not a feature request :)

LOL, fair enough.

> I do not see how "data is important, and needs to be preserved" and "pods (compute) have no identity" are mutually incompatible statements, but if you say so. :)

Think of it this way: if the Deployment scales up, clearly we should add more volumes. What do I do if it scales down? Delete the volumes? Hold on to them for some later scale-up? For how long? How many volumes?

Fundamentally, the persistent volume abstraction is wrong for what you want here. We have talked about something like volume pools, which would be the storage equivalent of Deployments, but we have found very few use cases where that seems to be the best abstraction.

E.g., in this case, the data seems to be some sort of cache of recreatable data. Maybe you really want a cache?

> In any case, the data is more or less important since it is fetchable; it's just that if it's already there when a new pod is spun up, or when a deployment is updated and a new pod created, it speeds up the startup time drastically (virtually immediate vs. 2-3 minutes to sync).

Do you need all the data right away, or can it be copied in on-demand?

> Shared storage would be ideal (a Deployment with HPA and a mounted NFS volume). But I get OOM errors when an RWX persistent volume is used and more than one pod is syncing the same data to it at the same time; I do not know why these OOM errors occur. Maybe it has something to do with the code that syncs the data. RWX seems to be a recurring challenge.

RWX is challenging because block devices generally don't support it at all, and the only mainstream FS that does is NFS, and well... NFS...