Janet Kuo has invited you to comment on the following document:
I wrote a proposal for ConfigMap/Secret garbage collection. Please take a look.
(Note: I'm going to just talk about ConfigMaps, but take that to mean ConfigMap/Secret.)
I prefer some combination of "Specify ConfigMaps/Secrets garbage collection in PodSpec" and "Automatic ConfigMap/Secret References Updates".
Having ConfigMaps be snapshotted at ReplicaSet creation time plays much better with things like Helm, I think. Config can be managed by a Helm chart. ReplicaSets snapshot the specified ConfigMap so they can manage the lifecycle of the snapshot, not the source material. I don't think users will expect their uploaded ConfigMaps to be taken over by a controller and deleted out from under them.
Having a flag on a volume reference does not seem overly verbose to me. It actually allows the use case where you may want some of the ConfigMaps to be snapshotted and some not. For example, for kolla-kubernetes, I'd want config files to be snapshotted so updates are atomic. But the fernet-token ConfigMap should never be snapshotted: it must always be the most recent version or Keystone will malfunction, since the system relies on updates making it from the CronJob to the Keystone pods. So supporting both modes in one Deployment is very important.
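To make the mixed-mode case concrete, here is a rough sketch of what the pod template's volumes could look like with such a flag. The field name, its placement inside the configMap source, and the ConfigMap name my-config are all illustrative, not a settled API:

volumes:
- {name: config, configMap: {name: my-config, snapshot: true}}   # hypothetical flag: snapshotted at RS creation
- {name: fernet, configMap: {name: fernet}}                      # no flag: pods always track the live ConfigMap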
The snapshot reference update could be done at the Deployment level, while the snapshotting is done at the ReplicaSet level. The Deployment could look for all volumes labeled snapshot and watch them. When a snapshot-labeled ConfigMap is changed, the Deployment just makes a new RS based on its template as normal. The template for the RS doesn't change; the RS just switches out the ConfigMap name for its snapshotted version when it templates out the Pod. I think this solves the concern about rewriting things?
It really feels like there could be two separate features here that, combined, cover the use case:
1. ConfigMap snapshotting by lower-level primitives like ReplicaSets.
2. ConfigMap watching, triggering a new RS rollout when one changes.
This would allow a user to do something like kubectl edit configmap myconfig and things would roll automatically. Or, if the ConfigMap was in a Helm chart, the user could run helm upgrade foo stable/foo --set foo=bar; it would update just the ConfigMap and everything would still work properly.
Thanks,
Kevin
Regarding the second part, automatic reference updates, I spent some time trying to think of a way to make that work too. Janet found a blocker for my version of that plan, though; it's summarized in the doc under Alternatives.
The crux of the issue is that we have two incompatible desires:
(1) A ConfigMap/Secret update should trigger a Deployment rollout.
(2) Each Deployment should be able to use its normal rollback procedure to go back to a previous state, including previous ConfigMap/Secret values.
Granted, (2) is not currently possible since ConfigMaps are mutable, but we would like to make it possible as part of this effort. That seems to require giving up on (1), because the two conflict when you try to roll back a Deployment without also rolling back the corresponding ConfigMap update. If anyone can think of a way out of this, I'm personally still interested in seeing whether automatic reference updates are feasible.
Regarding your particular version of auto-updates:
> When a snapshot-labeled ConfigMap is changed, the Deployment just makes a new RS based on its template as normal
Deployment uses the content of the Pod Template to know whether any existing RS already matches its desired state. If the ConfigMap name stays the same (in the Pod Template), the Deployment will see the existing RS and decide no action is needed.
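Spelling that out with the hypothetical snapshot flag and the my-config name from earlier (a sketch of today's standard Deployment matching behavior, not new mechanics):

# Deployment's Pod Template, both before and after kubectl edit configmap my-config:
template:
  spec:
    volumes:
    - {name: config, configMap: {name: my-config, snapshot: true}}
# The template content is identical in both cases, so the Deployment finds an existing
# ReplicaSet that already matches it and concludes no rollout is needed.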
I think Pod volumes should have a new flag, snapshot=true/false.
For the Deployment object: the RS template the Deployment creates, I think, it creates exactly as is, with no modifications. The snapshot: true flag on the volumes stays intact, and the volume name still refers to the primary specified ConfigMap.
When done this way, the Deployment can compare templates and still get the same result as it does today. All the Deployment does is watch for changes in the primaries that are labeled snapshot=true, and submit exactly the same RS template over again to start a new snapshot process.
The ReplicaSet behavior would be something like the following: when an RS is first created, it looks for any snapshot=true volumes. It copies each specified ConfigMap to a new ConfigMap named <rsname>-<some-random-suffix>, and places in the status section of the new RS a mapping from the primary volume name to the corresponding snapshot name.
When the ReplicaSet goes to instantiate a Pod, it copies its spec to a new Pod document, substitutes each volume's ConfigMap name with its snapshot name from status, and removes the snapshot=true flag on the volume, then passes it on to be created. The created Pod works as normal, just pointing at the snapshot ConfigMap instead of the primary.
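As a rough sketch of how that could look for a ReplicaSet named foo-67890 (the status field, the flag placement, and the primary name my-config are illustrative; foo-67890-456 follows the <rsname>-<random-suffix> convention above):

kind: ReplicaSet
metadata:
  name: foo-67890
spec:
  template:
    spec:
      volumes:
      - {name: config, configMap: {name: my-config, snapshot: true}}   # primary, flagged for snapshotting
      - {name: fernet, configMap: {name: fernet}}                      # never snapshotted
status:
  snapshots:                # hypothetical status field: primary volume name -> snapshot ConfigMap name
    config: foo-67890-456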
On Thu, Oct 5, 2017 at 6:16 PM <kfox11...@gmail.com> wrote:
<SNIP>
When the ReplicaSet goes to instantiate a Pod, it copies its spec to a new Pod document, substitutes each volume's ConfigMap name with its snapshot name from status, and removes the snapshot=true flag on the volume, then passes it on to be created. The created Pod works as normal, just pointing at the snapshot ConfigMap instead of the primary.
This forces every ConfigMap update (with snapshot=true) to make a referencing ReplicaSet update its Pods. That breaks the Deployment rolling update feature. We separate ReplicaSets and Deployments intentionally: we don't want ReplicaSets to trigger any rollouts. A ReplicaSet should only watch the current number of Pods and create/delete some to make sure the number matches its spec.replicas.
<snip>
kind: Pod
metadata:
  name: foo-67890-789
spec:
  volumes:
  - {name: config, configMap: {name: foo-67890-456}}
  - {name: fernet, configMap: {name: fernet}}
Roll forward/rollback between the ReplicaSets would work as expected, since each ReplicaSet only uses its own snapshots for its Pods, and all the Deployment does is increment/decrement the replica count of each one.
The spec of the RS always looks like what the Deployment uploaded, so it won't get confused. And when the RS is deleted (triggered by the user completing the Deployment upgrade), the old RS deletes the ConfigMap snapshots listed in its status.snapshots.
Thus garbage collecting the unused snapshots.
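To make the rollback and cleanup story concrete, a sketch with two ReplicaSets of the same Deployment (names follow the examples in this thread; foo-123456-123 is the older RS's snapshot):

# foo-123456   replicas: 0   status.snapshots: {config: foo-123456-123}   <- old RS, scaled down
# foo-67890    replicas: 3   status.snapshots: {config: foo-67890-456}    <- new RS
# Rollback = the Deployment scales foo-67890 down and foo-123456 back up; neither RS's
# snapshot changes, so the old Pods come back with the old config.
# When foo-123456 is finally deleted, it deletes foo-123456-123 along with it.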
Does that help clarify the idea?
Thanks,
Kevin
In addition, we need a way to generate unique labels/selectors so that one ReplicaSet won't match another's Pods. Probably similar to how names are generated.
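For illustration, one way that could look (the label key rs-suffix and the idea of reusing the RS name's generated suffix are just assumptions, nothing settled in this thread):

kind: ReplicaSet
metadata:
  name: foo-67890
spec:
  selector:
    matchLabels:
      app: foo
      rs-suffix: "67890"    # hypothetical generated label, unique per RS,
  template:                 # so this RS only adopts its own Pods
    metadata:
      labels:
        app: foo
        rs-suffix: "67890"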
> Roll forward/rollback between the ReplicaSets would work as expected, since each ReplicaSet only uses its own snapshots for its Pods, and all the Deployment does is increment/decrement the replica count of each one.
How is rollback done? Wouldn't it require the original ConfigMap (config) to be updated back to foo-123456-123?
> The spec of the RS always looks like what the Deployment uploaded, so it won't get confused. And when the RS is deleted (triggered by the user completing the Deployment upgrade), the old RS deletes the ConfigMap snapshots listed in its status.snapshots.
What if the ConfigMaps are still used by other resources?
The lifecycle of the ConfigMap snapshots needs to match 1:1 the lifecycle of the RS that owns them, I think. And if it's the RS doing the copy/delete, that happens rather naturally.
> What if the ConfigMaps are still used by other resources?
The primary ConfigMap is owned by the user. If they use the ConfigMap with other Pods that don't specify snapshot=true, then they are responsible for ensuring the ConfigMap exists while Pods reference it, or they suffer the malfunction if they delete it out from under the Pod. That's how it works today.
For things marked snapshot=true, the primary ConfigMap only needs to exist until the RS that references it makes a copy. After that, the RS and the Pods it creates will only ever reference the snapshot ConfigMaps. The user could delete the primary ConfigMap and the RS/Pods will only use their copy. Nothing outside of the RS or its Pods should reference its copy of the ConfigMap, since the RS is the owner. Other things should be able to reference the primary, or their own snapshots of the primary.
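As a rough illustration of that ownership (abbreviated; the ownerReference is just one way it could be expressed, and the data key is made up; the text above has the RS delete its snapshots explicitly via status.snapshots):

kind: ConfigMap
metadata:
  name: foo-67890-456        # snapshot copied from the primary at RS creation
  ownerReferences:           # assumption: ownership recorded on the snapshot, so only
  - kind: ReplicaSet         # the owning RS and its Pods are expected to use it
    name: foo-67890
data:
  keystone.conf: "...copied verbatim from the primary..."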
This may not be as space-efficient as aggressively sharing the ConfigMaps with reference counting and the like, but I think it makes a pretty good tradeoff between saving space, providing the immutability that's needed, matching users' expectation that they own the things they create themselves, and not having a complex garbage collection system that could be buggy.