CRD usage in kubelet


Jan Safranek

Aug 2, 2018, 12:41:35 PM
to Saad Ali, kubernetes-sig-storage-wg-csi
Today I've been playing with a CRD in kubelet that would implement https://github.com/kubernetes/kubernetes/issues/66497, storing the data in a separate "shadow node status" CRD instead of Node.Status.

We're probably the first to add any code to kubelet or controller-manager that processes CRDs. There are some missing pieces in the current Kubernetes infrastructure:

* There is no validation and no conversion between v1alpha1, v1beta1, and (in the future) v1 for CRDs. We will either need to implement them ourselves or create generators for that.

* The good thing is that we can get a generated client, informers, and deepcopy functions very easily.

* I had to pass a new clientset interface for the CRD from cmd/kubelet into the depths of pkg/kubelet. It's not hard, but I'd expect some resistance there.

* We need to grant kubelet the necessary permissions to create/update/watch CRs for the CRD (see the sketch after this list). This is quite bizarre, because the CRD does not need to exist at all.

* We need some controller that installs the CRD on cluster startup and updates it on cluster upgrades/downgrades.
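
To make the permissions point concrete, here is a minimal sketch of the kind of grant this would need, written the way bootstrap policy roles are declared in Go. The "csi.storage.k8s.io" group, the "csinodeinfos" resource, and the role name are hypothetical placeholders; the real names were still undecided at this point in the thread.

```go
package rbacsketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeCSIRole is a ClusterRole that would let kubelet manage the custom
// resources. Note that RBAC rules are just strings: the grant can be created
// before the CRD itself exists, which is the "bizarre" part above.
func nodeCSIRole() rbacv1.ClusterRole {
	return rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "system:csi-node-info"}, // hypothetical name
		Rules: []rbacv1.PolicyRule{
			{
				APIGroups: []string{"csi.storage.k8s.io"}, // hypothetical group
				Resources: []string{"csinodeinfos"},       // hypothetical resource
				Verbs:     []string{"get", "create", "update", "watch"},
			},
		},
	}
}
```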

It's quite possible that there is an easy way to hook a CRD into the existing infrastructure so that all of the above works out of the box (or with very few changes). I just don't know how.

All of this is already solved if we use an in-tree API instead of a CRD. At the same time, we need to get through the same API review with a CRD as with in-tree API objects.

To me it seems that a CRD adds a lot of new work with very little benefit. Most importantly, IMO it won't remove the need to go through API review, which is the reason we don't want an in-tree object.

So, does it make sense to rush the CRD in the following 4-5 weeks?

Cheng Xing

Aug 2, 2018, 7:46:03 PM
to Jan Safranek, Saad Ali, kubernetes-sig-storage-wg-csi
Just to be clear, are these the alternatives?

* "shadow node status" as in-tree object
* add to NodeStatus directly
* keep using Node annotation

And I'd imagine sig-architecture's concerns about in-tree objects vs. CRDs are somewhat different?

If it's possible to have a brand-new shadow object in-tree, we have a proof-of-concept that tells us everything works, it doesn't take too long to fully implement all the surrounding logic, and pushing in-tree objects and CRDs through API review is equally difficult, then I think going with an in-tree shadow object is the best approach.

If we are short on time, are there risks to continuing to add to the Node annotation for now and introducing a new API later?


Serguei Bezverkhi (sbezverk)

Aug 3, 2018, 4:14:23 PM
to Cheng Xing, Jan Safranek, Saad Ali, kubernetes-sig-storage-wg-csi

Hi Jan,


As far as I know, CRD versioning/conversion is coming in 1.12 as alpha. See this proposal: https://docs.google.com/document/d/1srADPF5gbmYyjZ1k-K-2rX2TjBNROYF7qFrgUzfKHWU/edit
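
For illustration, a rough sketch of what declaring multiple versions looks like under that proposal, using the apiextensions v1beta1 Go types; the CSI group/kind names here are the hypothetical ones from earlier in this thread, and per the alpha design the versions are only served/stored, not converted.

```go
package crdsketch

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crd declares two versions of a hypothetical CSINodeInfo resource. In the
// alpha, all versions share one schema and no conversion is performed;
// exactly one version is marked as the storage version.
var crd = &apiextensionsv1beta1.CustomResourceDefinition{
	ObjectMeta: metav1.ObjectMeta{Name: "csinodeinfos.csi.storage.k8s.io"}, // hypothetical
	Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
		Group: "csi.storage.k8s.io",
		Scope: apiextensionsv1beta1.ClusterScoped,
		Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
			Plural: "csinodeinfos",
			Kind:   "CSINodeInfo",
		},
		Versions: []apiextensionsv1beta1.CustomResourceDefinitionVersion{
			{Name: "v1alpha1", Served: true, Storage: true},
			{Name: "v1beta1", Served: true, Storage: false},
		},
	},
}
```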


Thank you

Serguei

Jan Safranek

Aug 9, 2018, 7:53:05 AM
to kubernetes-sig...@googlegroups.com
On 02/08/18 18:41, Jan Safranek wrote:
> * I had to pass a new clientset interface for the CRD from cmd/kubelet
> into the depths of pkg/kubelet. It's not hard, but I'd expect some
> resistance there.

I talked to Stefan Schimanski (sig-apimachinery) and this is not true. We can put our types.go in pkg/apis/csi-storage/v1alpha1 and hack/* will generate the client and informers for us, both part of the usual interfaces (e.g. clientset.Interface).

So adding a new types.go for a CRD is actually pretty easy.
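
A minimal sketch of what such a types.go could contain, assuming a hypothetical CSINodeInfo type (the real type names were still under review); the +genclient and deepcopy-gen markers are what the hack/* generators consume.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// CSINodeInfo holds per-node CSI driver information ("shadow node status").
type CSINodeInfo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// CSIDrivers lists the CSI drivers running on the node.
	CSIDrivers []CSIDriverInfo `json:"csiDrivers"`
}

// CSIDriverInfo contains the registration data for one driver.
type CSIDriverInfo struct {
	// Driver is the CSI driver name.
	Driver string `json:"driver"`
	// NodeID is the driver-specific ID of this node.
	NodeID string `json:"nodeID"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// CSINodeInfoList is a list of CSINodeInfo objects.
type CSINodeInfoList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CSINodeInfo `json:"items"`
}
```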

> * We need to add some controller that installs the CRD on cluster
> startup. And update it on cluster updates/downgrades.

The API server has post-start hooks that can be used to add such things. For example, the default RBAC rules for controllers are inserted in a post-start hook.

We could inject a hook e.g. here:
https://github.com/kubernetes/kubernetes/blob/3cb771a8662ae7d1f79580e0ea9861fd6ab4ecc0/pkg/master/master.go#L369
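
A rough sketch of what such a hook could look like; the post-start-hook mechanism and the apiextensions client are real, while the installCSICRD name and the CRD contents are illustrative.

```go
package hooksketch

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	genericapiserver "k8s.io/apiserver/pkg/server"
)

// installCSICRD is a hypothetical post-start hook that creates the CRD via
// the API server's loopback client, tolerating re-runs across restarts.
func installCSICRD(hookContext genericapiserver.PostStartHookContext) error {
	client, err := apiextensionsclient.NewForConfig(hookContext.LoopbackClientConfig)
	if err != nil {
		return err
	}
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "csinodeinfos.csi.storage.k8s.io"}, // hypothetical
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "csi.storage.k8s.io",
			Version: "v1alpha1",
			Scope:   apiextensionsv1beta1.ClusterScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural: "csinodeinfos",
				Kind:   "CSINodeInfo",
			},
		},
	}
	// Create the CRD if it does not exist yet; tolerate repeated startups.
	_, err = client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
	if err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
```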

> To me it seems that a CRD adds a lot of new work with very little
> benefit. Most importantly, IMO it won't remove the need to go through API
> review, which is the reason we don't want an in-tree object.
>
> So, does it make sense to rush the CRD in the following 4-5 weeks?

This is still valid. And the countdown is at 3-4 weeks now.

Saad Ali

Aug 17, 2018, 3:07:37 AM
to Jan Safranek, Tim Hockin, Daniel Smith, kubernetes-sig-storage-wg-csi
Tim or Daniel, can you confirm this:

> I talked to Stefan Schimanski (sig-apimachinery) and this is not true.
> We can put our types.go in pkg/apis/csi-storage/v1alpha1 and hack/*
> will generate the client and informers for us, both part of the usual
> interfaces (e.g. clientset.Interface).

For https://github.com/kubernetes/community/pull/2514, can I put my resource schema under "github.com/kubernetes/kubernetes/pkg/apis/{newGroupForStorageCRDs}/v1alpha1"? Or should the schema be put in an external package, e.g. "github.com/kubernetes-csi/{newRepoForStorageCRDSchemas}/...", and that path imported by k8s controllers that need to operate on this type?


Daniel Smith

Aug 17, 2018, 12:51:46 PM
to Saad Ali, Stefan Schimanski, David Eads, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
It is true that if you add the types there, regular clients will be generated. If we have done this before then I won't object now. However, it is not a good precedent IMO.

* Clients should expect to import multiple typed client packages.
* Authors should not feel like their API type needs to be in k8s.io/api to be "real".
* k8s.io/api should not function like a global lock. APIs developed out-of-tree needn't evolve at the same rate as in-tree APIs: they could release faster or slower, for example. Putting your API types in the main type registry removes some of the benefits of developing out of tree in the first place.

The design of record for client libraries for something like a year has been a Go base with the dynamic client, REST client, etc. and no generated clients, plus separate repos with generated clients, one per source where the APIs are generated.

It may be expedient to put types in k8s.io/api but it is not good for the longer term because it makes it harder to get the client libraries into the right shape.

Saad Ali

Aug 17, 2018, 12:56:16 PM
to Daniel Smith, Stefan Schimanski, David Eads, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Definitely for new components like Snapshots we are putting everything "out of tree".

In this case, we have a new object we want to introduce, "CSIDriver", that we want the core k8s binaries to use. SIG-Arch clarified that in this case it would be fine to have some core binary or mechanism install the CRD. But it's unclear where the API schema and generated client packages should go. Given that context, any changes to your recommendation?

David Eads

Aug 17, 2018, 2:18:54 PM
to Saad Ali, Daniel Smith, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
We have examples of APIs which live in other repos (CRDs come to mind). If you follow that example you can keep your API and client in the repo with the rest of your code. When vgo arrives you can have separate modules in one repo. When you wish to make use of those clients, you can import the config types and the client you want to use. The two work together well.

Saad Ali

Aug 17, 2018, 2:57:52 PM
to David Eads, Daniel Smith, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Thanks for the clarification! Will create a new repo under github.com/kubernetes-csi/... to house this and have k8s core import that.

David Eads

Aug 17, 2018, 3:10:50 PM
to Saad Ali, Daniel Smith, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
Wait, I may have spoken too soon. You need to have k8s.io/kubernetes import your repo? We don't allow the vendoring of repos that depend upon k8s.io/apimachinery (or any other staging repo), because it creates a chicken-and-egg problem when something needs refactoring/fixing. I thought this was going out of tree and could be managed on a different cadence.

David Eads

Aug 17, 2018, 3:11:46 PM
to Saad Ali, Daniel Smith, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
One possible solution to that could be to use a dynamic client for the usage inside of k8s.io/kubernetes if it was absolutely needed.
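
For illustration, a minimal sketch of the dynamic-client pattern David mentions, using the hypothetical CSIDriver group/resource names from this thread; core code would work with unstructured objects instead of importing generated types.

```go
package dynamicsketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// listCSIDrivers reads the custom resources without any generated client,
// so k8s.io/kubernetes would not need to vendor the repo that defines them.
func listCSIDrivers(config *rest.Config) error {
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{
		Group:    "csi.storage.k8s.io", // hypothetical group
		Version:  "v1alpha1",
		Resource: "csidrivers", // hypothetical resource
	}
	list, err := client.Resource(gvr).List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName()) // items are unstructured.Unstructured
	}
	return nil
}
```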

Saad Ali

Aug 17, 2018, 3:24:47 PM
to David Eads, Brian Grant, kubernetes-sig-architecture, Daniel Smith, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Ok, I'll hold off.

> You need to have k8s.io/kubernetes import your repo?

Yes, we need core controllers to be able to read these objects and act on them (and possibly create them as well).

> One possible solution to that could be to use a dynamic client for the usage inside of k8s.io/kubernetes if it was absolutely needed.

Any examples of how to do this? Any major objections to just having the schema and generated client in the core (and of course still installed via CRD)?

+Sig-arch: this is following up on a discussion we had about how sig-storage is trying to introduce a new object, "CSIDriver". Your recommendation was to make it a custom resource with some component in the core installing the CRD. The open question is where the schema and generated client for this resource would live. Any clarification would be greatly appreciated.

Daniel Smith

Aug 17, 2018, 3:33:13 PM
to Saad Ali, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
On Fri, Aug 17, 2018 at 12:24 PM Saad Ali <saa...@google.com> wrote:
> Ok, I'll hold off.
>
> > You need to have k8s.io/kubernetes import your repo?
>
> Yes, we need core controllers to be able to read these objects and act on them (and possibly create them as well).

It can live in staging.

Jordan Liggitt

Aug 17, 2018, 3:39:58 PM
to Daniel Smith, Saad Ali, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
If core components depend on this API, why wouldn't the types live in k8s.io/api and generated clients live in k8s.io/client-go?

That does mean the API is not actually developed out of tree and is coupled to kube releases (but the API lifecycle is already tied at least somewhat to kube releases if core components are using it).





Saad Ali

Aug 17, 2018, 5:21:44 PM
to Jordan Liggitt, Chao Xu, Daniel Smith, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
> That does mean the API is not actually developed out of tree and is coupled to kube releases (but the API lifecycle is already tied at least somewhat to kube releases if core components are using it).

Yep, that should be ok, the only likely consumer will be kube binaries.

> It can live in staging.

> If core components depend on this API, why wouldn't the types live in k8s.io/api and generated clients live in k8s.io/client-go?

I'm ok with this, if it works. I spoke to +Chao Xu and his concern with this approach is that all the existing tooling assumes anything in kubernetes/kubernetes/pkg/apis is an official API and will do things like generate documentation for it. But he also agrees that having it in an external repo and importing it into k8s could result in weird circular dependencies.


Chao Xu

Aug 17, 2018, 7:33:13 PM
to Saad Ali, Jordan Liggitt, Daniel Smith, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Saad and I discussed two options. In both options, "CSIDriver" will be installed as a CRD. The kube-apiserver binary does not depend on the "CSIDriver" type.

Option 1:
  • create a staging repo "csi-api" and let the publish-bot sync it to "kubernetes-csi/csi-api".
  • put the types.go in "csi-api".
  • generate CSIDriver's own clients, listers, and informers in "csi-api".
  • this is how the "apiextensions-apiserver" repo hosts the "CustomResourceDefinition" types and clients.
Option 2:
  • put the types.go in the existing staging "api" repo, under a new API group.
  • CSIDriver's clients, listers, and informers will be generated in the same package as the kubernetes ones.
At first glance, option 1 decouples "csi-api" from the existing "api". However, "CSIDriver" users will almost always use the CSIDriver clients together with the kubernetes PV/PVC clients. For that to work, with today's golang dependency management, the two sets of clients need to vendor the same version of k8s.io/apimachinery, which isn't backwards compatible. Thus separating the CSI clients from the kubernetes clients only creates barriers for users.

Option 2 is more straightforward.
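
To make the trade-off concrete, here is a small sketch of the consumer pattern described above; the csi-api import path and the CsiV1alpha1/CSIDrivers method names are hypothetical. Both clientsets must be built against the same k8s.io/apimachinery for this to compile, which is exactly the coupling at issue.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	csiclient "k8s.io/csi-api/pkg/client/clientset/versioned" // hypothetical path
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	coreClient := kubernetes.NewForConfigOrDie(config)
	csiClient := csiclient.NewForConfigOrDie(config)

	// Shared apimachinery types (e.g. metav1.ListOptions) flow through both clients.
	pvcs, err := coreClient.CoreV1().PersistentVolumeClaims("").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	drivers, err := csiClient.CsiV1alpha1().CSIDrivers().List(metav1.ListOptions{}) // hypothetical method
	if err != nil {
		panic(err)
	}
	_ = pvcs
	_ = drivers
}
```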



Saad Ali

Aug 17, 2018, 7:38:39 PM
to Chao Xu, Jordan Liggitt, Daniel Smith, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Thanks a lot for your time, and for the clarification, Chao!

Unless there are any objections, I will proceed with option 2.


Jan Safranek

Aug 20, 2018, 11:46:53 AM
to Saad Ali, Chao Xu, Jordan Liggitt, Daniel Smith, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Tim Hockin, kubernetes-sig-storage-wg-csi
On 18/08/18 01:38, 'Saad Ali' via kubernetes-sig-storage-wg-csi wrote:
> Thanks a lot for your time, and for the clarification, Chao!
>
> Unless there are any objections, I will proceed with option 2.

I tried that last week and spent almost 2 days debugging generated
informers, only to find out that there is something wrong with
json/protobuf:

https://github.com/kubernetes/kubernetes/issues/67602

There is a workaround included, so I am not blocked; still, I'm not sure mixing CRDs into the core clientset like this is the right way to go.


Daniel Smith

Aug 20, 2018, 12:45:55 PM
to Chao Xu, Saad Ali, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig...@googlegroups.com
Option one is closer to how things Ought To Work (tm).

The Go dependency problem doesn't seem to be an issue to me, since we will always publish client-go, apimachinery, and anything else in staging at the same time, so any given client ought to be able to import an api, apimachinery, and csi-api triplet that all work together.


Saad Ali

Aug 20, 2018, 2:11:35 PM
to Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, Tim Hockin, kubernetes-sig-storage-wg-csi
Ok, we're happy to pursue a separate staging directory/repo. We don't have the expertise in sig-storage. Is there anyone in API Machinery who can help us with this?

Tim Hockin

Aug 20, 2018, 2:15:12 PM
to Saad Ali, Jing Xu, Daniel Smith, Chao Xu, Liggitt, Jordan, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, kubernetes-sig...@googlegroups.com
Please sync with Jing re Snapshot stuff, too.

Saad Ali

Aug 20, 2018, 2:55:28 PM
to Tim Hockin, Jing Xu, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Jan Safranek, kubernetes-sig-storage-wg-csi
We've managed to keep the snapshot stuff completely out-of-tree so it hasn't run into any of these issues (these issues arise when we try to make core components depend on types installed as CRDs). But I'll continue to keep an eye on it.

Jan Safranek

Aug 21, 2018, 7:58:24 AM
to Saad Ali, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Tim Hockin, kubernetes-sig-storage-wg-csi
On 20/08/18 20:11, Saad Ali wrote:
> Ok, we're happy to pursue a separate staging directory/repo. We don't
> have the expertise in sig-storage. Is there anyone in API Machinery who
> can help us with this?

Stefan runs a bot that syncs staging/src/* to individual repos. Adding a new repo to sync is easy; here is an example:
https://github.com/kubernetes/kubernetes/pull/67356: creates staging/src/something
https://github.com/kubernetes/publishing-bot/pull/89: configures the bot to sync it somewhere

The destination repo should be in github.com/kubernetes; otherwise we need another bot instance + configuration (= some administrative overhead). I suggest we create github.com/kubernetes/csi-api. Saad, Tim, if we really want to have a separate API in staging, can you create such a repo or find a better name/place?

Saad Ali

Aug 21, 2018, 12:19:34 PM
to Jan Safranek, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, Tim Hockin, kubernetes-sig-storage-wg-csi
Tim/Daniel: are you ok with github.com/kubernetes/csi-api and github.com/kubernetes/csi-client-go as the names of the CSI-specific repos?
If so, I'll kick off the process of getting those repos created.

Tim Hockin

Aug 21, 2018, 12:41:15 PM
to Saad Ali, Jan Safranek, Daniel Smith, Chao Xu, Liggitt, Jordan, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Can you spend a couple sentences explaining a) why 2 repos, and b) what they are for? You'll need that to get the repos made anyway.

Eric Tune

Aug 21, 2018, 12:59:23 PM
to Tim Hockin, Saad Ali, Jan Safranek, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-si...@googlegroups.com, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Hi all, coming late to this thread.
AIUI, the decision was made to host the types.go in k/k rather than the new repos.
Is it too late to reconsider this decision?

There are a couple of SDKs for authoring CRDs (kubebuilder and operator-sdk).
Without saying which one this project should use, I do think this and all new CRD types should be strongly encouraged to use some CRD SDK, to increase consistency of generated code, clients, tests, and docs.

These SDKs are built around the presumption that a group of related CRDs is defined in a single project, including the types.go.


Saad Ali

Aug 21, 2018, 1:29:18 PM
to Eric Tune, Tim Hockin, Jan Safranek, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
I believe the argument against that was: "You need to have k8s.io/kubernetes import your repo? We don't allow the vendoring of repos that depend upon k8s.io/apimachinery (or any other staging repo), because it creates a chicken-and-egg problem when something needs refactoring/fixing. I thought this was going out of tree and could be managed on a different cadence... One possible solution to that could be to use a dynamic client for the usage inside of k8s.io/kubernetes if it was absolutely needed."

Eric Tune

Aug 21, 2018, 1:56:33 PM
to Saad Ali, Tim Hockin, Jan Safranek, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-si...@googlegroups.com, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Generated clientsets (meaning not dynamic) currently depend on these parts of apimachinery:

Of this short list, what, exactly, do we think needs to be refactored? Isn't this list short enough that we could make an exception for vendoring generated clients?

Saad Ali

Aug 21, 2018, 2:00:00 PM
to Eric Tune, Tim Hockin, Jan Safranek, Daniel Smith, Chao Xu, Jordan Liggitt, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi

Jordan Liggitt

Aug 21, 2018, 2:02:27 PM
to Eric Tune, Saad Ali, Tim Hockin, Jan Safranek, Daniel Smith, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Things in k/k (even under staging) may depend on apimachinery.

The issue is k/k vendoring an independent repo that depends on apimachinery. That cycle makes it impossible to change apimachinery in any way that the external dependency would have to react to.


Daniel Smith

Aug 21, 2018, 2:41:16 PM
to Jordan Liggitt, Eric Tune, Saad Ali, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
I can't emphasize what Jordan just said enough. Please internalize it; many people have heard this and repeated back to me a garbled version of it. (I also garble it myself occasionally.) We don't say this because we want folks to suffer, we say it because the versioning problem is currently intractable.

To put it humorously:

The issue is preventing unstable time loops in unversioned libraries. External repos ALWAYS depend on version N-1 of <things in staging>, because there is time lag between items being committed and published.

However, if we vendor something into k/k-- while there, it must compile against version N, not N-1, of <things in staging>.

Additionally, the thing in the vendor directory is always going to be version N-1 of the external repo, because there is lag between updating that and re-vendoring it. This means every combo in this table always has to work together, or someone's build is broken.
             | dep N | dep N-1
staging N    |       |
staging N-1  |       |

(This table can be bigger as some dependencies are especially bad citizens and vendor in k/k, too.)

You can't change version N of *anything* without it needing to compile against version N-1 of something else. You either need a time machine or a lot of commits that will be gated by (potentially) completely different organizations.

At first glance we could solve this by being good citizens about versioning everything, but to do that rigorously in many cases requires letting an especially old dependency drag in a 2nd copy (older version) of the thing it depends on.

I think it is past time we grew up and started versioning our stuff, with all the discipline & inconvenience implied. I hope go modules will give us a feasible route to that. This email is describing the way things are, not how they should be.

To take Eric's example packages--yes, that's a relatively short list. I think 50% of it is unlikely to change much. But we wouldn't have any way to prevent growth in that list, and there is a decent chance that we'll want to change SOMETHING in at least one of those packages.

To explain why the slowdown is *so bad*: having a vendor loop makes it the *library author's* problem whether the *dependency* stays up to date, but the library author may have very little understanding/control of the dependency (both code-wise and social-structure-wise).


Saad Ali

Aug 21, 2018, 4:18:47 PM
to Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
> Can you spend a couple sentences explaining a) why 2 repos, and b) what they are for? You'll need that to get the repos made anyway.

Tim:
a) Following the existing pattern of https://github.com/kubernetes/api and https://github.com/kubernetes/client-go
b) CSI types will be defined in "k8s.io/kubernetes/pkg/apis/storagedrivers". The generated API and client will be published to a new directory in "k8s.io/kubernetes/staging/...". Upon merge the generated API and client will be published to https://github.com/kubernetes/csi-api and https://github.com/kubernetes/csi-client-go

Tim Allclair

Aug 21, 2018, 5:57:48 PM
to Saad Ali, Kenneth Owens, Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
+Kenneth Owens - Are these the types of issues your doc was going to cover?

I don't want to derail the conversation, but if these types are so tightly coupled with core Kubernetes components, why is it so important for them to be CRDs? I'm sure I'm missing something, but right now it looks to me like we're creating unnecessary pain...

Saad Ali

Aug 21, 2018, 6:46:05 PM
to Tim Allclair, Kenneth Owens, Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
> I don't want to derail the conversation, but if these types are so tightly coupled with core Kubernetes components, why is it so important for them to be CRDs? I'm sure I'm missing something, but right now it looks to me like we're creating unnecessary pain...

The argument from SIG-Architecture that makes the most sense to me is that we want to strip the API server of Kubernetes-specific types so that it becomes a generic, reusable binary that can be used by other projects that want to expose a declarative API. At that point there would be no pre-installed types; everything would be installed as a CRD. To get to that vision we need to "draw the line somewhere" and stop adding more types to the API server.

Tim Allclair

Aug 21, 2018, 6:54:34 PM
to Saad Ali, Kenneth Owens, Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
On Tue, Aug 21, 2018 at 3:46 PM Saad Ali <saa...@google.com> wrote:
> > I don't want to derail the conversation, but if these types are so tightly coupled with core Kubernetes components, why is it so important for them to be CRDs? I'm sure I'm missing something, but right now it looks to me like we're creating unnecessary pain...
>
> The argument from SIG-Architecture that makes the most sense to me is that we want to strip the API server of Kubernetes-specific types so that it becomes a generic, reusable binary that can be used by other projects that want to expose a declarative API. At that point there would be no pre-installed types; everything would be installed as a CRD. To get to that vision we need to "draw the line somewhere" and stop adding more types to the API server.

Saad Ali

Aug 21, 2018, 7:07:57 PM
to Tim Allclair, Kenneth Owens, Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
Good point. Someone from sig-arch might have a more compelling answer.

I'd love to get clarity on this one way or the other (code freeze is coming up fast, and we have lots of CSI features pending on this).

Kenneth Owens

Aug 21, 2018, 7:31:09 PM
to Tim Allclair, Saad Ali, Daniel Smith, Jordan Liggitt, Eric Tune, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
No, I'm thinking about best practices around how we manage the installation of CRDs and controllers for both administrator- and distribution-installed extensions that may, or may not, interact with core components, and the trade-offs of the necessary RBAC permissions to make it work. This includes isolating controllers to reduce the blast radius of goroutine panics, and when to hide a controller as part of a control plane or to expose it as an administrator-owned Pod/StatefulSet/Deployment (assuming the K8s cluster makes a distinction between control plane and user nodes). Our vendoring issues and their resolution deserve a thorough and separate treatment imo.
Thanks,
     -Ken

Eric Tune

Aug 21, 2018, 8:10:20 PM
to Kenneth Owens, Tim Allclair, Saad Ali, Daniel Smith, Jordan Liggitt, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-si...@googlegroups.com, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Talking to Tim, I learned that the control loop for this new CRD (CsiNode), according to current design plans, needs to be part of the existing PV controller.
I don't think it would be practical to use Kubebuilder or Operator-SDK with a pre-existing core control loop.  So, I withdraw my suggestion to use those SDKs for this use case. 

Saad Ali

Aug 21, 2018, 8:27:28 PM
to Eric Tune, Kenneth Owens, Tim Allclair, Daniel Smith, Jordan Liggitt, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
Thanks Eric.
> Good point. Someone from sig-arch might have a more compelling answer.

Spoke to Tim about this. He pointed out that we eventually want that and the in-tree Kubernetes API server to converge so we don't have to maintain it exclusively as a fork for k8s.

> Tim:
> b) CSI types will be defined in "k8s.io/kubernetes/pkg/apis/storagedrivers". The generated API and client will be published to a new directory in "k8s.io/kubernetes/staging/...". Upon merge the generated API and client will be published to https://github.com/kubernetes/csi-api and https://github.com/kubernetes/csi-client-go

Tim verbally ok'd creating new repos to unblock creating new staging for these types. I will proceed with that unless there are major objections.

Jordan Liggitt

Aug 21, 2018, 9:29:49 PM
to Saad Ali, Eric Tune, Kenneth Owens, Tim Allclair, Daniel Smith, Tim Hockin, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
On Tue, Aug 21, 2018 at 8:27 PM, Saad Ali <saa...@google.com> wrote:
> Thanks Eric.
>
> > Good point. Someone from sig-arch might have a more compelling answer.
>
> Spoke to Tim about this. He pointed out that we eventually want that and the in-tree Kubernetes API server to converge so we don't have to maintain it exclusively as a fork for k8s.


Sounds like there is confusion around the pieces that make up kube-apiserver... the kube-apiserver is built using that library already, not maintained as a fork.

These repos are published out of staging, and are the foundation of the kube-apiserver:
  • k8s.io/apiserver is the generic apiserver library, which implements consistent routing and REST behavior, given a map of resources and REST storage implementations
  • k8s.io/apiextensions-apiserver implements native REST storage for the CRD type, and dynamic unstructured REST storage for registered custom resources
  • k8s.io/kube-aggregator implements native REST storage for the APIService type, and routes requests for arbitrary resources to the registered backends
https://github.com/kubernetes/kubernetes/blob/master/pkg/master implements native REST storage for the kube API types using the k8s.io/apiserver library, and sets up the following topology internally:

kube-aggregator (APIService types, routing)
kube-apiserver (native kube types)
apiextensions-apiserver (CustomResourceDefinition type, and custom resources)



Jan Safranek

Aug 22, 2018, 3:37:24 AM
to kubernetes-sig...@googlegroups.com
On 22/08/18 02:27, 'Saad Ali' via kubernetes-sig-storage-wg-csi wrote:
> Tim:
> a) Following the existing pattern of https://github.com/kubernetes/api and https://github.com/kubernetes/client-go
> b) CSI types will be defined in "k8s.io/kubernetes/pkg/apis/storagedrivers". The generated API and client will be published to a new directory in "k8s.io/kubernetes/staging/...". Upon merge the generated API and client will be published to https://github.com/kubernetes/csi-api and https://github.com/kubernetes/csi-client-go
>
> Tim verbally ok'd creating new repos to unblock creating new staging for these types. I will proceed with that unless there are major objections.

Do we really need two repos? k8s.io/metrics has both types.go and the generated client in one repo. Was there any problem with it? I suggest we follow the same approach.

Tim Hockin

Aug 22, 2018, 1:28:21 PM
to Eric Tune, Michelle Au, Kenneth Owens, Tim Allclair, Saad Ali, Daniel Smith, Liggitt, Jordan, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
I was wrong in the details but right overall -- the controllers are per-driver; the main PV controller does NOT need this type, but the scheduler might.

Saad Ali

Aug 22, 2018, 1:39:21 PM
to Tim Hockin, Eric Tune, Michelle Au, Kenneth Owens, Tim Allclair, Daniel Smith, Jordan Liggitt, Jan Safranek, Chao Xu, David Eads, Brian Grant, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig-storage-wg-csi
Jan, ack. I'm ok with one repo (just https://github.com/kubernetes/csi-api). Will modify my request accordingly.

Saad Ali

Aug 22, 2018, 5:39:45 PM
to Tim Hockin, kubernetes-sig-architecture, Eric Tune, Michelle Au, Kenneth Owens, Tim Allclair, Daniel Smith, Jordan Liggitt, Jan Safranek, Chao Xu, David Eads, Brian Grant, Stefan Schimanski, kubernetes-sig-storage-wg-csi
[ACTION REQUIRED] Someone from SIG-Architecture: can one of you please provide a written ok for the creation of the https://github.com/kubernetes/csi-api repo as discussed in this thread (https://github.com/kubernetes/org/issues/30).

Tim Allclair

Aug 22, 2018, 5:48:52 PM
to Saad Ali, Tim Hockin, kubernetes-sig-architecture, Eric Tune, Michelle Au, Kenneth Owens, Daniel Smith, Jordan Liggitt, Jan Safranek, Chao Xu, David Eads, Brian Grant, Stefan Schimanski, kubernetes-sig...@googlegroups.com

Aaron Crickenberger

Aug 22, 2018, 5:56:45 PM
to tall...@google.com, Saad Ali, Tim Hockin, kubernetes-si...@googlegroups.com, Eric Tune, Michelle Au, Kenneth Owens, Daniel Smith, jlig...@redhat.com, jsaf...@redhat.com, xuc...@google.com, de...@redhat.com, Brian Grant, st...@redhat.com, kubernetes-sig...@googlegroups.com
To be clear: I asked for this because I don't know what SIG Architecture's policy is for formally making a decision, and part of having repos created in kubernetes is that SIG Architecture (not just some random member of it) must sign off.

I see the /approve from Clayton for one of these things, but he's not formally listed on https://github.com/kubernetes/community/tree/master/sig-architecture.

I'm willing to take it in good faith, but I really need some clarity from this group on how to proceed in these situations.


- aaron


Brian Grant

Aug 23, 2018, 9:53:09 PM
to Liggitt, Jordan, Saad Ali, Eric Tune, Kenneth Owens, Tim Allclair, Daniel Smith, Tim Hockin, jsaf...@redhat.com, Chao Xu, David Eads, kubernetes-sig-architecture, Stefan Schimanski, kubernetes-sig...@googlegroups.com
On Tue, Aug 21, 2018 at 6:29 PM Jordan Liggitt <jlig...@redhat.com> wrote:
> These repos are published out of staging, and are the foundation of the kube-apiserver: [...]

Eventually I would like what I'm calling a Resource Management Platform with basically just extension mechanisms built in:
  • APIService
  • CustomResourceDefinition
  • authentication plugins
  • the authorization hooks and query APIs
  • the admission hooks
  • Namespace
  • discovery APIs
  • OpenAPI
Still in question, but probably convenient to include:
  • ServiceAccount
  • Event, for async error delivery
  • an API for counting resources that object quota could be built upon
  • Endpoints for peer discovery, Secret (KSM plugin mechanism), ConfigMap (dynamic config), Lease (HA controllers)
  • RBAC, as one optional authz implementation
  • /healthz, /componentstatuses, and some other cruft that should be cleaned up

Brian Grant

Aug 23, 2018, 9:54:23 PM
to Saad Ali, Tim Hockin, kubernetes-sig-architecture, Eric Tune, Michelle Au, Kenneth Owens, Tim Allclair, Daniel Smith, Liggitt, Jordan, jsaf...@redhat.com, Chao Xu, David Eads, Stefan Schimanski, kubernetes-sig...@googlegroups.com
Could you please write up the path you are following?

Brian Grant

Aug 23, 2018, 9:55:45 PM
to Tim Allclair, Saad Ali, Tim Hockin, kubernetes-sig-architecture, Eric Tune, Michelle Au, Kenneth Owens, Daniel Smith, Liggitt, Jordan, jsaf...@redhat.com, Chao Xu, David Eads, Stefan Schimanski, kubernetes-sig...@googlegroups.com
I'll dig up the other thread first, since I thought there were unresolved issues when I last saw it.

Tim Hockin

Aug 24, 2018, 1:42:42 PM
to Brian Grant, Tim Allclair, Saad Ali, kubernetes-sig-architecture, Eric Tune, Michelle Au, Kenneth Owens, Daniel Smith, Liggitt, Jordan, Jan Safranek, Chao Xu, David Eads, Stefan Schimanski, kubernetes-sig...@googlegroups.com
We also need a naming convention for repos / staging dirs that hold these.

stc...@google.com

Aug 27, 2018, 2:38:03 PM
to kubernetes-sig-storage-wg-csi
On Friday, August 24, 2018 at 10:42:42 AM UTC-7, Tim Hockin wrote:
We also need a naming convention for repos / staging dirs that hold these.

CSI is using `k8s.io/csi-api`, so I copied that with `k8s.io/node-api`. How closely should these repos map to the API group? I.e., would we expect approximately one repo per group?

Another pattern is k8s.io/metrics, which holds 3 different API groups related to metrics, and omits the `-api` suffix from the repo name.
Yet another example is k8s.io/kube-scheduler, which makes the top-level directory the apis root (i.e. omits the pkg/apis directory).
 