Server-side Apply; API & SLO questions


Daniel Smith

Aug 20, 2019, 2:33:12 PM
to K8s API Machinery SIG, kubernetes...@googlegroups.com, kubernetes-sig-architecture
Hi folks,

We've been working hard to reduce the performance impact of server-side apply so we can take it to beta / turn it on by default:
* wrapped the fieldset (i.e. in something like a RawExtension) to make decoding it optional (sketched below)
* ~10x speedup (~90% time reduction) in writing/reading the fieldsets
* collapsed writes by the same user into the same entries (i.e., not split by time)
* ~2x speedup (~50% time reduction) in the update & apply operations
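
For that first bullet, a minimal sketch of the idea (illustrative only - json.RawMessage standing in for RawExtension, type and field names made up, not the real apimachinery code):

package main

import (
	"encoding/json"
	"fmt"
)

// managedFieldsEntry stands in for the real managed-fields type; the fieldset
// itself stays as raw JSON, so ordinary reads/writes of the object never pay
// the cost of decoding it.
type managedFieldsEntry struct {
	Manager   string          `json:"manager"`
	Operation string          `json:"operation"`
	FieldsRaw json.RawMessage `json:"fields"` // decoded lazily, only on the apply path
}

// decodeFields is only called by apply/update logic, so GET/LIST never touch
// the (potentially large) fieldset.
func (e *managedFieldsEntry) decodeFields() (map[string]interface{}, error) {
	var fields map[string]interface{}
	if err := json.Unmarshal(e.FieldsRaw, &fields); err != nil {
		return nil, err
	}
	return fields, nil
}

func main() {
	raw := []byte(`{"manager":"kubectl","operation":"Apply","fields":{"f:spec":{"f:replicas":{}}}}`)

	var entry managedFieldsEntry
	if err := json.Unmarshal(raw, &entry); err != nil { // fieldset stays opaque here
		panic(err)
	}

	fields, err := entry.decodeFields() // paid only when actually needed
	if err != nil {
		panic(err)
	}
	fmt.Println(entry.Manager, fields)
}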

We're still working to get final numbers with all these fixes in place at the same time. However, it's currently looking like > 75% of the latency increase is simply due to the increased size of the objects and not due to additional code (we controlled for size by adding big annotations). So, we need to make some plans:

1. What is our latency budget? We need to know this ASAP.

If we are not meeting the target once all our optimizations are in, then we have some options for reducing the object size, which will be required for making additional significant improvement:

2. How to balance size with API usability?
2a. gzip the existing fieldset and call it a day. 
    -> Not a great API experience for users.
    -> We can add a prefix byte to permit versioning (a rough sketch follows at the end of this message).
    -> If we go to beta, we'd have to live with this format forever.
2b. Invent a more concise format. This will be somewhat difficult while leaving it readable.
    -> Same re: living with this forever & versioning.
    -> I've got an improvement here, probably the best possible while maintaining JSON structure
2c. Hybrid approach: gzip or other binary format unless user requests it in pretty form (via header or something)
    -> No precedent in the API
    -> Possible format: omit it unless user asks for it
    -> Whatever you get from GET must go back to PUT without adverse consequences
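
For concreteness, here's roughly what 2a could look like - gzip the JSON fieldset and prepend a single version byte so we can change the encoding later. The prefix value and function names are invented for illustration, not a committed design:

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// Hypothetical version byte meaning "gzip-compressed JSON fieldset".
const fieldsV1GzipJSON byte = 0x01

// encodeFields compresses the JSON fieldset and prepends the version byte.
func encodeFields(fieldsJSON []byte) ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte(fieldsV1GzipJSON)
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(fieldsJSON); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// decodeFields checks the version byte and decompresses accordingly; the
// prefix is what leaves room to introduce a different encoding later.
func decodeFields(data []byte) ([]byte, error) {
	if len(data) == 0 || data[0] != fieldsV1GzipJSON {
		return nil, fmt.Errorf("unknown fieldset encoding")
	}
	zr, err := gzip.NewReader(bytes.NewReader(data[1:]))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}

func main() {
	fields := []byte(`{"f:metadata":{"f:labels":{"f:app":{}}},"f:spec":{"f:replicas":{}}}`)
	encoded, err := encodeFields(fields)
	if err != nil {
		panic(err)
	}
	decoded, err := decodeFields(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Printf("original %dB, encoded %dB, round-trips: %v\n",
		len(fields), len(encoded), bytes.Equal(fields, decoded))
}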

Wojciech Tyczynski

Aug 21, 2019, 6:01:12 AM
to Daniel Smith, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
On Tue, Aug 20, 2019 at 8:33 PM 'Daniel Smith' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:
Hi folks,

We've been working hard to reduce the performance impact of server-side apply so we can take it to beta / turn it on by default:
* wrapped the fieldset (i.e. in something like a RawExtension) to make decoding it optional
* ~10x speedup (~90% time reduction) in writing/reading the fieldsets
* collapsed writes by the same user into the same entries (i.e., not split by time)
* ~2x speedup (~50% time reduction) in the update & apply operations

We're still working to get final numbers with all these fixes in place at the same time. However, it's currently looking like > 75% of the latency increase is simply due to the increased size of the objects and not due to additional code (we controlled for size by adding big annotations).

That's cool! And that actually matches what we're observing (and with a couple of upcoming improvements like this it should be even better).
So I'm not too worried about the processing part - the thing that seems to be the most limiting now is etcd + what is below it (kernel, storage, ...).
We're currently (literally as we speak, though it's really tricky so it will take some time) in the process of investigating what exactly the most limiting factor is. 

Another interesting point is that we were investigating significant latency differences between kubemark and GCE tests.
Apparently the biggest difference was the size of the Node object (in kubemark it doesn't contain any images) -
adding a single annotation with 3KB of random data (to compensate for the different object size) actually
brings the results much closer together.

So, we need to make some plans: 

1. What is our latency budget? We need to know this ASAP.

It's a very tricky question - I'm not sure we can actually give a very satisfying answer to that.
We have a lot of slack at low percentiles (e.g. the 50th percentile is really low for API calls).
At high percentiles (the 99th, which is the most interesting from an SLO point of view) we're actually not that far off.
With the old implementation (99th percentile over the whole test) we have quite a significant amount of slack (~40-50% for the worst calls),
but with the new implementation we're trying to switch to (which aggregates into 5-minute windows, the way the SLO is actually phrased),
we're really on the fence (in fact we're frequently not meeting it for some resources, like PUT Lease).

The other aspect is that even if we had XX% of slack, it's not obvious how much of it we can actually offer to this feature.
We recently switched on mTLS between etcd and the apiserver, and that has also eaten a non-negligible part of the slack.
And I bet there will be more such features in the future that are pretty important and will require sacrificing some latency.

I would probably feel safe with a 50ms increase in latencies, but it's really hard to say whether that's really the max we can afford.
Also see my comments below.


If we are not meeting the target once all our optimizations are in, then we have some options for reducing the object size, which will be required for making additional significant improvement:

2. How to balance size with API usability?

What I'm most worried about is that the size of the object with this change increases dramatically -
by just a bit less than 2x - IIRC it was something like 70%, please correct me if I'm wrong. 
And what is most frightening is that once we agree on something, we will have to live with it
more-or-less forever. 
So while I would really like to see this feature promoted, I'm not 100% convinced that rushing it in
a week before freeze is the optimal decision.

I feel bad about that, but I would feel much more comfortable if:
1. we really understood the root cause of the issues I described at the beginning, to ensure
 that by significantly increasing the size of objects in etcd, we're not putting ourselves into a much harder situation
2. we were really convinced that we can't do anything to reduce the size of the representation.
 I'm sure I'm missing a lot of stuff because I wasn't following this effort very deeply, but conceptually
 for many resources I would expect a more concise representation to be possible (e.g. for a Pod,
 where one actor creates it, the scheduler updates only the nodeName field, and the kubelet writes the whole
 status). But I'm very far from saying I know how to do that.

2a. gzip existing fieldset and call it a day. 
    -> Not a great API experience for users.
    -> We can add a prefix byte to permit versioning.
    -> If we go to beta, we'd have to live with this format forever.

I'm not convinced by this option, as it really makes the fieldset unreadable to users (unless they
explicitly decode it on their side, right?)
 
2b. Invent a more concise format. This will be somewhat difficult while leaving it readable.
    -> Same re: living with this forever & versioning.
    -> I've got an improvement here, probably the best possible while maintaining JSON structure

I didn't really look into it (and I didn't fully understand the format), but 20-30% of object size
seems worth the effort.
Do we have any data on how the object sizes would change if we used it in a real cluster
(I'm asking about Pods, Nodes, Endpoints, etc.)?

 
2c. Hybrid approach: gzip or other binary format unless user requests it in pretty form (via header or something)
    -> No precedent in the API
    -> Possible format: omit it unless user asks for it
    -> Whatever you get from GET must go back to PUT without adverse consequences

This requires additional API changes (so another non-trivial amount of work), so I'm not sure
we can easily justify this one. 


Daniel Smith

Aug 21, 2019, 12:31:25 PM
to Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
On Wed, Aug 21, 2019 at 3:01 AM Wojciech Tyczynski <woj...@google.com> wrote:


On Tue, Aug 20, 2019 at 8:33 PM 'Daniel Smith' via kubernetes-sig-architecture <kubernetes-si...@googlegroups.com> wrote:
Hi folks,

We've been working hard to reduce the performance impact of server-side apply so we can take it to beta / turn it on by default:
* wrapped the fieldset (i.e. in something like a RawExtension) to make decoding it optional
* ~10x speedup (~90% time reduction) in writing/reading the fieldsets
* collapsed writes by the same user into the same entries (i.e., not split by time)
* ~2x speedup (~50% time reduction) in the update & apply operations

We're still working to get final numbers with all these fixes in place at the same time. However, it's currently looking like > 75% of the latency increase is simply due to the increased size of the objects and not due to additional code (we controlled for size by adding big annotations).

That's cool! And that actually matches what we're observing (and with a couple of upcoming improvements like this it should be even better).
So I'm not too worried about the processing part - the thing that seems to be the most limiting now is etcd + what is below it (kernel, storage, ...).
We're currently (literally as we speak, though it's really tricky so it will take some time) in the process of investigating what exactly the most limiting factor is. 

Another interesting point is that we were investigating significant latency differences between kubemark and GCE tests.
Apparently the biggest difference was the size of the Node object (in kubemark it doesn't contain any images) -
adding a single annotation with 3KB of random data (to compensate for the different object size) actually
brings the results much closer together.

So, we need to make some plans: 

1. What is our latency budget? We need to know this ASAP.

It's a very tricky question - I'm not sure we can actually give a very satisfying answer to that.

Sooooo.... Does that mean we just turn it on and see if it causes a problem? We can optimize code nearly forever; we need a target.
 
We have a lot of slack at low percentiles (e.g. the 50th percentile is really low for API calls).
At high percentiles (the 99th, which is the most interesting from an SLO point of view) we're actually not that far off.
With the old implementation (99th percentile over the whole test) we have quite a significant amount of slack (~40-50% for the worst calls),
but with the new implementation we're trying to switch to (which aggregates into 5-minute windows, the way the SLO is actually phrased),
we're really on the fence (in fact we're frequently not meeting it for some resources, like PUT Lease).

The other aspect is that even if we had XX% of slack, it's not obvious how much of it we can actually offer to this feature.
We recently switched on mTLS between etcd and the apiserver, and that has also eaten a non-negligible part of the slack.
And I bet there will be more such features in the future that are pretty important and will require sacrificing some latency.

I would probably feel safe with a 50ms increase in latencies, but it's really hard to say whether that's really the max we can afford.
Also see my comments below.


If we are not meeting the target once all our optimizations are in, then we have some options for reducing the object size, which will be required for making additional significant improvement:

2. How to balance size with API usability?

What I'm most worried about is that the size of the object with this change increases dramatically -
by just a bit less than 2x - IIRC it was something like 70%, please correct me if I'm wrong. 

It's not bad for json, but for proto the increase is around 66%, since we encode field names.

I am thinking right now that maybe we can use a compressed format for proto and keep the current format for JSON. It means more encoding/decoding, but maybe we can be clever about that.
As for data on how the object sizes would change in a real cluster: not yet, I'm having trouble running the integration tests on my machine.

David Eads

Aug 21, 2019, 2:25:42 PM
to Daniel Smith, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
I think this feature is important enough to suffer performance impacts.  If we serialize the content as a RawExtension that embeds a TypeMeta, I think we gain the flexibility to change the serialization format over time.  Based on our ObjectMeta experience and the need to change the content significantly from server-side-apply alpha1 to beta1, I see the ability to version this RawExtension as a requirement.  Direct interpretation and manipulation is an advanced feature, so I'm ok forcing such advanced clients to stay up to date with the serializations (we could add a flag to the server to decide which serialization to use).

If we have the ability to change our serialization and the increased size is responsible for most of our cost, I would elect to turn this feature on by default (with the ability to turn it back off via feature gate) and gain solid feedback about the additional cost.  Whether or not that makes 1.16, I haven't reviewed pulls in enough detail to have a strong opinion at the moment.

Daniel Smith

Aug 21, 2019, 2:31:03 PM
to David Eads, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
After some discussion here, our plan is:
* use the existing json format
* version it (we came to the same conclusion, although our mechanism is going to be slightly different)
* hard-code disabling apply for pods, nodes, endpoints.

Eric Tune

Aug 21, 2019, 2:45:11 PM
to Daniel Smith, David Eads, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture

David Eads

Aug 21, 2019, 2:47:40 PM
to Daniel Smith, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
I'm not immediately convinced that we can promote a feature to beta while at the same time deciding it isn't ready enough to be applied uniformly to all resources.  It's not clear to me why the feature would be good enough for some resources and not all of them.

With the introduction of ephemeral containers, pods are mutable objects.  The node object is modified by at least the controller-manager and the kubelet, as well as the user (during drains for instance).  Endpoints are directly user-modifiable and manipulable.  I'm sure there are many other resources that are write heavy and mostly single writer (thinking of openshift operator resources as a for instance).

Eric Tune

Aug 21, 2019, 2:54:37 PM
to David Eads, Daniel Smith, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
It can't be made to uniformly apply to all resources:  Users can have an aggregated api server that doesn't support the feature, and we can't make them upgrade it.
Therefore clients need to be aware that not every resource supports Server-side Apply. (Also the case with Dry Run btw).



Eric Tune

Aug 21, 2019, 2:59:28 PM
to David Eads, Daniel Smith, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
Users do not typically create pods, endpoints, or nodes using the "kubectl apply" flow.  Making the "kubectl apply" flow work well is the primary purpose of Server-Side Apply.

This is a reasonable trade-off between improving a user experience in the common case (e.g. applying changes to a deployment object) and avoiding taking a performance penalty on hot paths (e.g. write a pod status).

David Eads

Aug 21, 2019, 3:00:55 PM
to Eric Tune, Daniel Smith, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
But with dry-run, we explicitly made sure to enable either all the resources we could or none of them.  Not a mix and match.

I suppose if we plan in advance to guarantee a second release of beta where we apply it to all resources, I could see having a more restricted set for an initial beta.  But I see a full release of a beta with all resources enabled as a prerequisite for an attempt to take the feature to GA. 

Daniel Smith

Aug 21, 2019, 4:08:34 PM
to David Eads, Eric Tune, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
I agree we should be able to turn it on for everything before GA.

I've thought of another way to reduce the performance impact while still letting users gain experience (we've been working for too long without user reports): turn it on for an object only once APPLY has been used for the object.

Today, our format gzip'd is still a ~30% size increase on a proto object, which has an as-yet-unquantified impact on latency. We can likely do a bit better than gzip if we custom write something. As a working assumption, I think we should assume that this will be at least a 20% size increase.

Will we be able to afford that?

Wojciech Tyczynski

Aug 22, 2019, 3:27:56 AM
to Daniel Smith, David Eads, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
David - I wouldn't worry that much about dry-run, because (unless I'm completely missing something) it doesn't reach etcd, and etcd plus the things below it seem to be the most problematic part now.
Also +100 to allowing versioning - my impression from Daniel's initial email was that it wasn't really possible, but if it is, then that sounds like a "must have" to me.

The plan of a two-phase beta - first for all but a couple of resources (Pod, Node, Endpoints; I would add Lease to this list), and second for all resources - sounds great to me.
I think it's a great way to limit the performance hit while still allowing feedback from users.


On Wed, Aug 21, 2019 at 10:08 PM Daniel Smith <dbs...@google.com> wrote:
I agree we should be able to turn it on for everything before GA.

I've thought of another way to reduce the performance impact while still letting users gain experience (we've been working for too long without user reports): turn it on for an object only once APPLY has been used for the object.

Today, our format gzip'd is still a ~30% size increase on a proto object, which has an as-yet-unquantified impact on latency. We can likely do a bit better than gzip if we custom write something. As a working assumption, I think we should assume that this will be at least a 20% size increase.

Will we be able to afford that?

I think this feature is important enough that a 20% increase is justifiable. [There is a significant difference between the 20% and the 66% you mentioned above :)]

David Eads

Aug 22, 2019, 7:58:46 AM
to Daniel Smith, Jordan Liggitt, Eric Tune, Wojciech Tyczynski, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
I think that turning on only after using apply once will result in losing information about which fields have been set by which actors in the past.  I think a timeline like this could work:
  1. releaseA - beta1 on by default for most kube-apiserver resources, but skipped for resources of your choice
  2. releaseB - beta2 on by default for all kube-apiserver resources, no exceptions
  3. releaseC - GA, on for all kube-apiserver, no exceptions
In combination with the ability to independently version the serialization format, I think the beta rules will be sufficient for our likely needs.  My only concern is then a state of perma-beta if we get stuck between beta1 and beta2.  Adding Jordan specifically since he's been dealing with that this year.

Wojciech Tyczynski

Aug 22, 2019, 8:04:30 AM
to David Eads, Daniel Smith, Jordan Liggitt, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
On Thu, Aug 22, 2019 at 1:58 PM David Eads <de...@redhat.com> wrote:
I think that turning on only after using apply once will result in losing information about which fields have been set by which actors in the past.  I think a timeline like this could work:
  1. releaseA - beta1 on by default for most kube-apiserver resources, but skipped for resources of your choice
  2. releaseB - beta2 on by default for all kube-apiserver resources, no exceptions
  3. releaseC - GA, on for all kube-apiserver, no exceptions
Yeah - this is exactly what I had in mind above.

Jordan Liggitt

Aug 22, 2019, 8:52:55 AM
to Wojciech Tyczynski, David Eads, Daniel Smith, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
David Eads <de...@redhat.com> wrote:
My only concern is then a state of perma-beta if we get stuck between beta1 and beta2.  Adding Jordan specifically since he's been dealing with that this year.

Before putting ourselves in that position, it would be helpful to know:
  1. How close are we to the scale target thresholds specifically on the resources we're considering disabling the feature on (pod/node/endpoint/lease)? (this is Daniel's "What is our latency budget?" question)
  2. In our current tests, how does the feature impact those resources when enabled?
  3. Are there concrete improvements we anticipate to the performance impact of the apply feature? (it sounds like most of those improvements have been done already, and further improvements would require a more substantial serialization rework)
  4. Are there concrete mitigating plans that would give us more scale headroom on those resources? Examples:
    • Node updates becoming less frequent using Lease instead of node status
    • Node size shrinking if we stop reporting images
    • Changes to writing endpoint updates? (if I recall correctly, the endpoint slices proposal didn't lessen Endpoints writes, but gave watchers a better resource to watch)

If enabling the feature puts us significantly over our targets, and we don't have concrete and plausible plans to close that gap, then committing to beta with the current format doesn't seem great.

If enabling the feature just puts us closer to the edge of our targets than we'd like, and we're disabling those resources initially out of an abundance of caution, that seems more reasonable.

Either way, the availability of the apply feature should be discoverable at whatever granularity we intend to enable it. Accepted patch types are published in the OpenAPI schema, so if we plan to disable it per resource, that should be reflected for those resources. Apply clients could use that per-resource availability to determine what type of patch to send, or attempt apply and fall back to more widely supported patch types.

Eric Tune <et...@google.com> wrote:
Making the "kubectl apply" flow work well is the primary purpose of Server-Side Apply.

It is also to make the apply functionality available to all API clients so they can simplify the way they submit arbitrary objects to the API. The harder it is for a client to know whether server-side-apply is available, the more detection/fallback complexity has to be built into all clients. Some complexity is inevitable while the feature is being rolled out, but O(years) after the feature is released, ideally clients could assume broad availability.
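
To illustrate the kind of detection/fallback complexity clients would carry, here's a hedged sketch of "attempt apply, fall back to another patch type" with a recent client-go. The manager name is invented, and treating 415 Unsupported Media Type as "apply not supported here" is one plausible check, not a documented contract:

package applyfallback

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// applyOrFallback tries server-side apply first; if the server (e.g. an older
// aggregated apiserver) rejects the apply patch type, it falls back to a
// strategic merge patch with the same manifest.
func applyOrFallback(ctx context.Context, cs kubernetes.Interface, ns, name string, manifest []byte) error {
	_, err := cs.CoreV1().ConfigMaps(ns).Patch(ctx, name, types.ApplyPatchType, manifest,
		metav1.PatchOptions{FieldManager: "example-controller"})
	if err == nil {
		return nil
	}
	// A server that doesn't accept application/apply-patch+yaml is expected to
	// answer 415; anything else is a real error.
	if !apierrors.IsUnsupportedMediaType(err) {
		return err
	}
	_, err = cs.CoreV1().ConfigMaps(ns).Patch(ctx, name, types.StrategicMergePatchType, manifest,
		metav1.PatchOptions{})
	return err
}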

Daniel Smith

Aug 22, 2019, 1:40:39 PM
to Jordan Liggitt, Wojciech Tyczynski, David Eads, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
OK so here is the revised plan considering everyone's feedback:

beta1:
* Launch with our existing JSON format, with provision for versioning it (@apelisse is working on this).
* Only track changes once something has been applied to an object. Jenny has a PR out already. Note: changes prior to opting in will be collected at the time of first apply and clearly marked for the future; it's actually not going to be a bad experience.
* Very, very low overall performance impact, as users have to opt-in per object.
* Easy for users to turn on tracking for any object they wish (just apply anything to it, e.g. an annotation; see the client sketch at the end of this message)
* Permits us to collect user feedback widely while not endangering anything

beta2:
* We'll implement a binary format which is (at least) stored in the proto formats. I/we have a plan; I think we can beat gzip, possibly by a lot. Target is < 20% size gain over existing proto objects.
* Scan the API types and ensure we've marked all relevant fields atomic, so that we're not storing leaf fields in the set when the parent is appropriate (we need to do this anyway, and not just for the size decrease); see the marker sketch right after this list.
* In the unlikely case we can't get the size down, we'll have to make compensatory improvements in other parts of the stack.
* In the unthinkable case where this is impossible we can make feature adjustments that preserve the user-observed apply behavior but don't track system behavior in any detail beyond what's necessary for conflict generation. This would nerf the feature, so we'll work hard not to do this.
* We'll default on update tracking for non-performance critical resources (i.e., not pods, nodes, endpoints, leases)
* Such resources can still be opted-in.
* We collect real-world evidence about performance & usefulness of having it on by default.
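
As a reminder of what the "mark fields atomic" item looks like in practice, a small made-up type using the +listType markers from the API conventions (the type itself is invented; with listType=atomic the whole list becomes a single leaf in the fieldset instead of one entry per element):

package v1alpha1

// WidgetSpec is a made-up type showing the topology markers server-side apply
// consults when it builds fieldsets.
type WidgetSpec struct {
	// The whole slice is treated as one leaf: one owner, one entry in
	// managedFields, rather than an entry per element.
	// +listType=atomic
	Args []string `json:"args,omitempty"`

	// Elements are tracked individually by key, so different managers can own
	// different entries.
	// +listType=map
	// +listMapKey=name
	Ports []WidgetPort `json:"ports,omitempty"`
}

type WidgetPort struct {
	Name string `json:"name"`
	Port int32  `json:"port"`
}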

beta3 and/or GA:
* We'll default on for all resources; by then we should have evidence that the performance is acceptable. If there's any question it'll be a 3rd beta, but I suspect we'll be confident enough to go straight to GA by this point.
* Some clusters are hitting existing size limits on some endpoints objects. Even if we can get the size increase down to 10%, we will have to either increase the limits, permit the server to omit fields data if it causes the object to be too large, switch to an opt-out model, etc-- there are many ways of dealing with this. 

for everything:
* The apply verb is enabled everywhere (well, for builtins and CRs) in all betas, we don't need to adjust the discovery docs specially.
* Clients will have to tolerate the absence of content in managed fields, but this was *always* going to be true, since we don't have pre-beta history. It will just be true a little longer.
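
And to make the beta1 opt-in concrete, a hedged client-go sketch of "apply anything to switch on tracking" (recent client-go assumed; the manager name and annotation key are invented):

package applyoptin

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// optInToTracking server-side-applies a trivial annotation to a Deployment;
// under the beta1 plan, that first apply is what turns on field tracking for
// the object.
func optInToTracking(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(fmt.Sprintf(`{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": %q,
    "annotations": {"example.com/apply-opt-in": "true"}
  }
}`, name))

	_, err := cs.AppsV1().Deployments(ns).Patch(ctx, name, types.ApplyPatchType, patch,
		metav1.PatchOptions{FieldManager: "apply-opt-in-example"})
	return err
}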

David Eads

Aug 22, 2019, 2:36:42 PM
to Daniel Smith, Jordan Liggitt, Wojciech Tyczynski, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
For beta1, will all resources (including pods, nodes, and endpoints), have apply enabled?  You have " The apply verb is enabled everywhere (well, for builtins and CRs) in all betas, we don't need to adjust the discovery docs specially." under "for everything", but you also explicitly say under "beta3 and/or GA" that "We'll default on for all resources".  

Antoine Pelisse

Aug 22, 2019, 2:41:28 PM
to David Eads, Daniel Smith, Jordan Liggitt, Wojciech Tyczynski, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
On Thu, Aug 22, 2019 at 11:36 AM David Eads <de...@redhat.com> wrote:
For beta1, will all resources (including pods, nodes, and endpoints), have apply enabled?  You have " The apply verb is enabled everywhere (well, for builtins and CRs) in all betas, we don't need to adjust the discovery docs specially." under "for everything", but you also explicitly say under "beta3 and/or GA" that "We'll default on for all resources".  

My understanding is:
For beta1, all resources can be applied (including pods, nodes and endpoints), but we'll only track ownership of objects that have been applied,
For beta3 and/or GA, all resources will have ownership tracked at all times. 
 

David Eads

Aug 22, 2019, 4:24:16 PM
to Antoine Pelisse, Daniel Smith, Jordan Liggitt, Wojciech Tyczynski, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
If that's the case, then I'm ok with the phases.  I'm not close enough to the implementation to make a judgement about whether it can land in the last week of the release, but the path to GA makes sense to me.

Daniel Smith

Aug 22, 2019, 4:34:34 PM
to David Eads, Antoine Pelisse, Jordan Liggitt, Wojciech Tyczynski, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
Yeah, that should have read "tracking" on, not just "on", sorry.

"Verb enabled, tracking on once verb is used", "also turn tracking on by default for lower-volume apis", and "also turn tracking on by default everywhere else" are the different steps.

Wojciech Tyczynski

Aug 23, 2019, 2:33:00 AM
to Daniel Smith, David Eads, Antoine Pelisse, Jordan Liggitt, Eric Tune, K8s API Machinery SIG, kubernetes-sig-scale, kubernetes-sig-architecture
SGTM - thanks a lot Daniel for pushing on this.