It's hard to talk about network plugins without also getting into how
we can better align with docker and rocket-native plugins, but given
the immaturity of the whole space, let's try to ignore them and think
about the overall behavior we really want.
Can you answer how, in CNI, something like Docker would work? They
want the "bridge" plugin but they want to add some per-container
iptables rules on top of it.
Should they fork the bridge plugin into their own and implement their
custom behavior? Should they make a 2nd plugin that follows "bridge"
and adds their iptables (not allowed in CNI)? Should they make a
wrapper plugin that calls bridge and then does their own work?
Do you really want the "base" plugins to accumulate those sorts of
features? I like the idea of wrapping other plugins - formalizing
that pattern would be interesting. Keep a handful of very stable,
reasonably configurable (but not crazy) base plugins that people can
decorate.
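To make the wrapping idea concrete, here is a rough sketch (Go, with
invented paths, chain, and comment match) of what a decorating plugin
could look like: delegate to the stock bridge plugin, then layer a
per-container iptables rule on top. It is only a sketch of the pattern,
not a real plugin - DEL handling and error reporting are elided.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
        "os/exec"
    )

    func main() {
        // The network config JSON arrives on stdin, per the CNI contract.
        conf, err := io.ReadAll(os.Stdin)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // Delegate to the stock bridge plugin. CNI passes its parameters
        // (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, ...) in the environment,
        // which the child process inherits.
        bridge := exec.Command("/opt/cni/bin/bridge")
        bridge.Stdin = bytes.NewReader(conf)
        var result bytes.Buffer
        bridge.Stdout = &result
        bridge.Stderr = os.Stderr
        if err := bridge.Run(); err != nil {
            os.Exit(1)
        }

        // Decorate only on ADD; a real plugin would undo this on DEL.
        if os.Getenv("CNI_COMMAND") == "ADD" {
            // Purely illustrative per-container rule, keyed on container ID.
            rule := exec.Command("iptables", "-A", "FORWARD",
                "-m", "comment", "--comment", os.Getenv("CNI_CONTAINERID"),
                "-j", "ACCEPT")
            if err := rule.Run(); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }

        // Hand the bridge plugin's result (the IP config JSON) back untouched.
        os.Stdout.Write(result.Bytes())
    }

The base plugin stays untouched; the wrapper only relies on the same
stdin/env contract CNI already defines.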
On Fri, Aug 21, 2015 at 2:41 PM, <eugene.y...@coreos.com> wrote:
>
> On Friday, August 21, 2015 at 2:25:03 PM UTC-7, Tim Hockin wrote:
>>
>> Can you answer how, in CNI, something like Docker would work? They
>> want the "bridge" plugin but they want to add some per-container
>> iptables rules on top of it.
>>
>> Should they fork the bridge plugin into their own and implement their
>> custom behavior? Should they make a 2nd plugin that follows "bridge"
>> and adds their iptables (not allowed in CNI)? Should they make a
>> wrapper plugin that calls bridge and then does their own work?
>
>
> They can either fork the bridge plugin or do a wrapper one. Ideally they
> would abstract out the iptables rules into something they can contribute
> upstream
> to CNI's bridge plugin.
>
The MASQUERADE stuff is needed regardless of whether you use a docker
bridge or Calico or Weave or Flannel, but it's actually pretty
site-specific. In GCE we should basically say "anything that is not
destined for 10.0.0.0/8 needs masquerade", but that's not right. It
should be "anything not destined for an RFC1918 address". But there are
probably cases where we would want the masquerade even within RFC1918
space (I'm thinking VPNs, maybe?). Outside of GCE, the rules are
obviously totally different. Should this be something that
kubernetes understands or handles at all? Or can we punt this down to
the node-management layer (in as much as we have one)?
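To make "site-specific" concrete, a minimal sketch of the kind of rules a
node agent might program (values purely illustrative - on GCE the
non-masquerade list might be just 10.0.0.0/8, elsewhere something entirely
different):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Destinations that should NOT be masqueraded; everything else
        // leaving the node gets MASQUERADE.
        nonMasqCIDRs := []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"}

        for _, cidr := range nonMasqCIDRs {
            run("iptables", "-t", "nat", "-A", "POSTROUTING", "-d", cidr, "-j", "RETURN")
        }
        run("iptables", "-t", "nat", "-A", "POSTROUTING", "-j", "MASQUERADE")
    }

    func run(args ...string) {
        if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
            log.Fatalf("%v: %s", err, out)
        }
    }

The CIDR list is the whole policy; everything else is boilerplate, which is
part of why it feels like a node-management concern rather than a
kubernetes one.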
>
> We've been talking about locking down host-pod communication in the general
> case as part of our thoughts on security. There are still cases where
> host->pod communication is needed (e.g. NodePort), but at the moment our
> thinking is to treat infrastructure as "outside the cluster". As far as
> security is concerned, we think the default behavior should be "allow from
> pods within my namespace". Anything beyond that can be configured using
> security policy.
See above - what about cases where the node needs to legitimately
access cluster services (the canonical case being a docker registry)?
> I like this model because it would allow Calico to provide a single CNI
> plugin for Kubernetes, and have it run for any containerizer (docker, rkt,
> runc, ...). As k8s support for different runtimes grows, this will become an
> increasingly significant issue. (Right now we can just target docker and be
> done with it).
Does CNI work with Docker?
Notes as I read.
The biggest problem I have with this (and it's not necessarily a
show-stopper) is that a container created with plain old 'docker run'
will not be part of the kubernetes network because we will have
orchestrated the network at a higher level. In an ideal world, we'd
teach docker itself about the plugins and then simply delegate to it
as we do today.
That said, the more I dig into Docker's networking plugins the less I
like them. Philosophically and practically a daemon-free model built
around exec is so much cleaner. It seems at least theoretically
possible to bridge libnetwork to run CNI plugins, but probably not
without mutating the CNI spec toward the more prescriptive libnetwork
model.
You say you'll push the IP to the apiserver - I guess you mean in
pod.status.podIP ?
Regarding CNI network configs, I assume that over time this might even
be something we expose through kubernetes - a la Docker networks.
The advantage here is that network management is a clean abstraction
distinct from interface management.
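For anyone who hasn't looked closely at CNI yet, here is roughly what one
of those network configs looks like, and how little glue it takes to read
one. Field names follow the CNI bridge and host-local plugins; the
concrete values are invented.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    const sampleConf = `{
        "name": "kubenet",
        "type": "bridge",
        "bridge": "cbr0",
        "isGateway": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.244.1.0/24"
        }
    }`

    // netConf models only the fields we care about here.
    type netConf struct {
        Name string `json:"name"`
        Type string `json:"type"`
        IPAM struct {
            Type   string `json:"type"`
            Subnet string `json:"subnet"`
        } `json:"ipam"`
    }

    func main() {
        var conf netConf
        if err := json.Unmarshal([]byte(sampleConf), &conf); err != nil {
            panic(err)
        }
        fmt.Printf("network %q: plugin %q, IPAM %q on %s\n",
            conf.Name, conf.Type, conf.IPAM.Type, conf.IPAM.Subnet)
    }

If kubernetes ever exposes networks as first-class objects, this config
blob is more or less the thing it would be storing.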
To your questions:
1) Can we eliminate Init?
I think yes.
2) Can we eliminate Status?
I think yes.
3) Can we cut over immediately to CNI, or do we need to keep the old
plugin interface for a time? If so, how long?
I think this becomes a community decision. There are a half-dozen to
a dozen places I know of using this feature. IFF they were OK making
a jump to something like CNI, we could do a hard cutover.
4) Can we live without the vendoring naming rules? Can we establish
a convention that plugins vendor-name the binary?
mycompany.com~myplug or something? Maybe it's not a huge deal.
I'll add #5 - does this mean we have no concept of in-process plugin?
Or do we retain the facade of an in-process API like we have now?
Overall this looks plausible, but I'd like to hear from all the folks
who have plugins implemented today, especially if you have both CNI
and libnetwork experience. The drawback I listed above (plain old
'docker run') is real, but maybe something we can live with. Maybe
it's actually a feature?
As a discussion point - how much would we have to adulterate CNI to
make a bridge? It sure would be nice to use the same plugins in both
Docker and rkt - I sure as hell don't want to tweak and debug this
stuff twice.
We could have a little wrapper binary that knew about a static network
config, and anyone who asked for a new network from our plugin would
get an error, then we just feed the static config(s) to the wrapped
CNI binary. We'd have to split Add into create/join but that doesn't
seem SO bad. What else?
I'll add #5 - does this mean we have no concept of in-process plugin?
Or do we retain the facade of an in-process API like we have now?
Added a bullet for this in the doc.
CNI doesn't currently have the concept of an in-process plugin. Looks like with the current API this is only for vendors that are extending the kubernetes codebase, or am I missing something?
I was pondering this approach -- the big stumbling block for me is that a CNM createEndpoint can occur on a different host than the joinEndpoint call, so naively we'd need a cluster-wide distributed datastore to keep track of the Create calls. Short of breaking the spec and disallowing Create and Join from being called on different hosts, I don't see a way around that issue.
Let us not make an assumption that all plugins will be Golang based.
OpenStack Neutron currently has python libraries for clients and my
plugin that integrates containers with openstack is python based.
Fwiw, Docker's libnetwork does not mandate golang plugins. It uses
REST APIs to talk to plugins.
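For reference, the remote-plugin contract is just JSON POSTs over a unix
socket, so a non-Go plugin only needs to speak HTTP. A minimal sketch of
the activation handshake (everything past activation omitted):

    package main

    import (
        "log"
        "net"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        // Docker discovers the plugin via a socket (or .spec file) under
        // /run/docker/plugins and begins with an activation handshake.
        mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
            w.Write([]byte(`{"Implements": ["NetworkDriver"]}`))
        })
        // A real driver would also answer /NetworkDriver.CreateNetwork,
        // /NetworkDriver.CreateEndpoint, /NetworkDriver.Join, and so on.

        l, err := net.Listen("unix", "/run/docker/plugins/example.sock")
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(http.Serve(l, mux))
    }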
> Wow, I was not aware of that. How does it work now? CreateEndpoint creates
> the interface (e.g. veth pair) on the host.
veth pairs are not mandated to be created on CreateEndpoint(). You
are only to return IP addresses, MAC addresses, gateway etc. In theory
this provides flexibility for container mobility
across hosts. So you can effectively create an endpoint from a central
location and ask a container to join that endpoint from any host.
>Join then specifies the
> interface names that should be moved into the sandbox. I don't really
> understand how Join can be called on a different host -- wouldn't there be
> no interface to move on that host then?
>
I feel dumb, but I don't get it. Since you seem to understand it, can
you spell it out in more detail?
On Tue, Sep 1, 2015 at 12:07 PM, <eugene.y...@coreos.com> wrote:
>
> On Tuesday, September 1, 2015 at 10:46:35 AM UTC-7, Paul Tiplady wrote:
>>>
>>>
>>> I'll add #5 - does this mean we have no concept of in-process plugin?
>>> Or do we retain the facade of an in-process API like we have now.
>>>
>> Added a bullet for this in the doc.
>>
>> CNI doesn't currently have the concept of an in-process plugin. Looks like
>> with the current API this only for vendors that are extending the kubernetes
>> codebase, or am I missing something?
>>
>
> CNI doesn't have in-process plugins because that requires shared object
> (.so) support and I believe that Go has problems with that (although it
> may be fixed in 1.5). Technically CNI is not Go specific but realistically so
> much software in this space is written in Go. Having "in-tree" plugins doesn't
> require .so support but to be honest those never pass my definition of
> "plugins". FWIW, I would have been quite happy to just have .so plugins as
> there's no fork/exec overhead.
I didn't mean to imply .so, though that's a way to do it too. I meant
to ask whether kubernetes/docker/rkt could have network plugins
defined in code, one of which was an exec-proxy, or whether exec was
it. I don't feel strongly that in-process is needed at this point.
Let me try to explain what I mean to the best of my ability with an
analogy of VMs and Network Virtualization. (But before that, let me
clarify that since k8 is a single-tenant orchestrator and has been
designed with VIPs and load balancers as a basic building block, the
feature is not really useful for k8.)
With Network Virtualization, you can have 2 VMs belonging to 2
different tenants run on the same hypervisor with the same IP address.
A packet sent by the VM of one tenant will never reach the VM of another
tenant, even though they are connected to the same vSwitch (e.g.
openvswitch). You can apply policies to these VM interfaces (e.g.
QoS, firewall) etc. And then you can move one of the VMs to a different
hypervisor (vMotion). All the policies (e.g. QoS, firewall) will now
follow the VM to the new hypervisor automatically. The IP address and
MAC address follow to the new VM too. The network controller simply
reprograms the various vswitches so that packet forwarding happens
to the new location.
Since you have already associated your policies (firewall, QoS etc)
with the endpoint, you can destroy the VM that the endpoint is
connected to and then create a new VM at a different hypervisor and
attach the old endpoint (with its old policies) to the new VM.
My reading of what libnetwork achieves with containers is the same as
above. i.e., you can create a network endpoint with policies applied
and then attach it to any container on any host.
On Tue, Sep 1, 2015 at 10:22 PM, Prashanth B <be...@google.com> wrote:
>> I hope that answers your question?
>
> Thanks for the example. So what I'm proposing is a networking model with the
> following limitations for the short term:
> 1. Only one (docker) network object, this is the kubernetes network. All
> endpoints must join it.
IMO, the "one" network object theoretically fits into the current k8
model wherein all pods can communicate with each other over L3. But
let me bring up a couple of points that provides a counter-view.
My understanding of implementation of Docker's inbuilt overlay
solution is that a "network" is a broadcast domain. So if you impose
the same meaning on k8 networking, you actually end up with a
humungous broadcast domain across multiple hosts and it won;t scale.
So one could argue that the current k8 model is that each minion is
one network and all networks are connected to each other via a router.
> 2. Containers can only join endpoints on the same host.
> 3. A join execs CNI plugin with json composed from the join and endpoint
> create, derived from storage (physical memory, sqlite, apiserver -- as long
> as it's not another database it remains an implementation detail).
>
>> First, libkv assumes an arbitrary KV store, which our APIserver is not.
>
> Doesn't look like libkv is a requirement for remote plugins.
> If we start
> docker with a plugin but without a kv store, the json will get posted to the
> localhost http server, but not propagated to the other hosts (untested, this
> from staring at code). This is ok, because there is only 1 network and no
> cross host endpoint joining. If we really need cross host consistency, we
> have an escape hatch via apiserver watch.
You have to start the Docker daemon with libkv for libnetwork to work
(at least based on my observation).
On Tue, Sep 1, 2015 at 4:30 PM, Gurucharan Shetty <she...@nicira.com> wrote:
> Since you have already associated your policies (firewall, QoS etc)
> with the endpoint, you can destroy the VM that the endpoint is
> connected to and then create a new VM at a different hypervisor and
> attach the old endpoint (with its old policies) to the new VM.
>
> My reading of what libnetwork achieves with containers is the same as
> above. i.e., you can create a network endpoint with policies applied
> and then attach it to any container on any host.
Write a "cni-exec" libnetwork driver.
You can not create new networks using it. When a CreateNetwork() call
is received we check for a static config file on disk.
E.g. CreateNetwork(name = "foobar") looks for
/etc/cni/networks/foobar.json, and if it does not exist or does not
match, fail.
PROBLEM: it looks like the CreateNetwork() call can not see the name
of the network. Let's assume that could be fixed.
CreateEndpoint() does just enough work to satisfy the API, and save
all of its state in memory.
PROBLEM: If docker goes down, how does this state get restored?
endpoint.Join() takes the saved info from CreateEndpoint(), massages
it into CNI-compatible data, and calls the CNI plugin.
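Very roughly, the skeleton could look like the sketch below. Method names
mirror the steps above, not libnetwork's actual driver interface; the HTTP
plumbing, DEL/Leave handling, and the state-persistence PROBLEM are all
left out.

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    type cniExecDriver struct {
        // endpoint ID -> network name remembered from CreateEndpoint
        // (in memory only, which is exactly the restart problem above).
        endpoints map[string]string
    }

    // CreateNetwork only accepts networks that already have a static config
    // on disk; it never creates anything itself.
    func (d *cniExecDriver) CreateNetwork(name string) error {
        if _, err := os.Stat("/etc/cni/networks/" + name + ".json"); err != nil {
            return fmt.Errorf("no static CNI config for network %q: %v", name, err)
        }
        return nil
    }

    // CreateEndpoint does the minimum needed to satisfy the API.
    func (d *cniExecDriver) CreateEndpoint(netName, epID string) error {
        d.endpoints[epID] = netName
        return nil
    }

    // Join massages the saved endpoint info into CNI's env/stdin convention
    // and execs the CNI plugin (hardcoded here; really chosen by the
    // config's "type" field).
    func (d *cniExecDriver) Join(epID, containerID, netnsPath string) error {
        conf, err := os.ReadFile("/etc/cni/networks/" + d.endpoints[epID] + ".json")
        if err != nil {
            return err
        }
        cmd := exec.Command("/opt/cni/bin/bridge")
        cmd.Stdin = bytes.NewReader(conf)
        cmd.Env = append(os.Environ(),
            "CNI_COMMAND=ADD",
            "CNI_CONTAINERID="+containerID,
            "CNI_NETNS="+netnsPath,
            "CNI_IFNAME=eth0",
            "CNI_PATH=/opt/cni/bin",
        )
        return cmd.Run()
    }

    func main() {
        _ = &cniExecDriver{endpoints: map[string]string{}}
        // Wiring this up as a docker remote plugin (an HTTP endpoint under
        // /run/docker/plugins) is omitted from the sketch.
    }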
Someone shoot this down? It's not general purpose in the sense that
docker's network CLI can't be used, but would it be good enough to
enable people to use the same CNI plugins across docker and rkt?
> I hope that answers your question?
Thanks for the example. So what I'm proposing is a networking model with the following limitations for the short term:
1. Only one (docker) network object, this is the kubernetes network. All endpoints must join it.
2. Containers can only join endpoints on the same host.
3. A join execs CNI plugin with json composed from the join and endpoint create, derived from storage (physical memory, sqlite, apiserver -- as long as it's not another database it remains an implementation detail)
> First, libkv assumes an arbitrary KV store, which our APIserver is not.
Doesn't look like libkv is a requirement for remote plugins. If we start docker with a plugin but without a kv store, the json will get posted to the localhost http server, but not propagated to the other hosts (untested, this from staring at code). This is ok, because there is only 1 network and no cross host endpoint joining. If we really need cross host consistency, we have an escape hatch via apiserver watch.
> Third, if we only allow network objects through kubernetes we can't see the name of the object Docker thinks it is creating.
We don't even have to allow this. The cluster is bootstrapped with a network object. It's read-only thereafter. CreateNetwork will no-op after that.
This would give users the ability to dump their own docker plugins into /etc/docker/plugins, start the kubelet with --manage-networking=false, and use docker's remote plugin api. At the same time CNI should work with --manage-networking=true.
We will eventually want to add something akin to multiple Networks, so
I want to be dead sure that it is viable before we choose and
implement a model.
BUT the deal-breaker is that the CNI plugin will expect to move the interface into the right NetNS itself, configure the interface's IP address itself, and more. CNM doesn't allow that. CNM also doesn't expose the NetNS FD to the plugins in any way (though in-process plugins might be able to find it), so the CNI plugin has no idea what network namespace to move the interface into. That's where I stopped with cni-docker-plugin because it just wasn't possible without some changes to CNI or CNM itself.
Someone shoot this down? It's not general purpose in the sense that
docker's network CLI can't be used, but would it be good enough to
enable people to use the same CNI plugins across docker and rkt?
Unfortunately I can't see a way to make existing CNI plugins work with libnetwork/CNM right now due to the fundamental difference in their granularity and handling of IP addressing and network namespace management...
For handling the granularity, how about splitting CNI's IPAM and ADD from the glue driver? The IPAM is in any case separately definable in CNI; it's just not called separately from ADD. We could make the IPAM plugin understand whether it is called directly by the glue code or through the ADD command, and switch behaviour accordingly.
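A sketch of what that split could look like from the glue driver's side:
exec just the CNI IPAM plugin (host-local here) at CreateEndpoint time to
learn the IP, and leave the interface-creating ADD for Join. Paths and
config are illustrative; the netns value is a dummy since host-local never
touches it.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    const netConf = `{
        "name": "kubenet",
        "type": "bridge",
        "ipam": {"type": "host-local", "subnet": "10.244.1.0/24"}
    }`

    // ipamResult captures the subset of the IPAM plugin's output we need.
    type ipamResult struct {
        IP4 struct {
            IP string `json:"ip"`
        } `json:"ip4"`
    }

    func allocateIP(containerID string) (string, error) {
        cmd := exec.Command("/opt/cni/bin/host-local")
        cmd.Stdin = bytes.NewReader([]byte(netConf))
        cmd.Env = append(os.Environ(),
            "CNI_COMMAND=ADD",
            "CNI_CONTAINERID="+containerID,
            "CNI_NETNS=/proc/self/ns/net", // dummy; IPAM doesn't use it
            "CNI_IFNAME=eth0",
            "CNI_PATH=/opt/cni/bin",
        )
        out, err := cmd.Output()
        if err != nil {
            return "", err
        }
        var res ipamResult
        if err := json.Unmarshal(out, &res); err != nil {
            return "", err
        }
        return res.IP4.IP, nil
    }

    func main() {
        ip, err := allocateIP("example-endpoint-id")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("allocated", ip)
    }

CreateEndpoint could hand that IP back to libnetwork, and Join would later
run the full ADD against the real namespace.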
The big alternative is to say "forget it", and just run all our pods
with --net=none to docker, and use CNI ourselves to set up networking.
This means (as discussed) 'docker run' can never join the kubernetes
network and that we don't take advantage of vendors who implement
docker plugins. Could we bridge it the other way? A CNI binary that
drives docker remote plugins :)
I feel like a prototype is warranted, and then maybe a get-together?
> The first fundamental mismatch between libnetwork/CNM and CNI is that CNM is
> much more granular than CNI, and it wants more information at each step that
> CNI isn't willing to give back until the end.
What about actually doing the CNI "add" operation on CNM's "create
endpoint"? Is there a guarantee that "create endpoint" runs on only
one node? If not, this seems hard to surmount.
> The second fundamental mismatch is that libnetwork/CNM does more than CNI
> does; it handles moving the interfaces into the right NetNS, setting up
> routes, and setting the IP address on the interfaces. The plugin's job is
> simply to create the interface and allocate the addresses, and pass all that
> back to libnetwork. CNI plugins currently expect to handle all this
> themselves.
Hack: move into a tmp namespace in CNI plugins, and then move it out
in the bridge.
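Something like this, if we shelled out to iproute2 - namespace and
interface names are invented, and the actual CNI invocation is elided:

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
            log.Fatalf("%v: %s", err, out)
        }
    }

    func main() {
        // 1. Scratch namespace for the CNI plugin to populate.
        run("ip", "netns", "add", "cni-tmp")

        // 2. (Here the CNI plugin would be exec'd with CNI_NETNS set to
        //    /var/run/netns/cni-tmp, creating e.g. eth0 inside it.)

        // 3. Pull the interface back into the root namespace (PID 1's netns)
        //    so the libnetwork glue can hand it over in the form CNM expects.
        run("ip", "netns", "exec", "cni-tmp", "ip", "link", "set", "eth0", "netns", "1")

        // 4. Clean up the scratch namespace.
        run("ip", "netns", "delete", "cni-tmp")
    }

Ugly, but it keeps the CNI plugin itself unmodified.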
> Third, remote plugins are called in a blocking manner so they cannot call
> back into docker's API to retrieve any extra information they might need
> (eg, network name).
Do we understand WHY we can't have the network name?
>> PROBLEM: it looks like the CreateNetwork() call can not see the name
>> of the network. Let's assume that could be fixed.
>
> In my implementation I just cached the network ID and started a network
> watch to grab the name, and all the actual CNM work was done in Join().
Network watch of the libkv backend? Or something else?
On Sep 2, 2015, at 3:44 PM, Tim Hockin <tho...@google.com> wrote:
Madhu,
Thanks for the clue on scope. It looks like all remote drivers are
assumed to be global.
https://github.com/docker/libnetwork/blob/master/drivers/remote/driver.go#L32
None of this addresses the other issues in libnetwork - not wrappable,
IPAM is too baked-in, not available today, no access to network Name
field, complex model, etc. I keep hearing from people who tried to
implement libnetwork drivers that it's sort of a bad experience, and
docker doesn't seem keen to make it better (hearsay).
On Sep 2, 2015, at 8:52 PM, Tim Hockin <tho...@google.com> wrote:
On Wed, Sep 2, 2015 at 7:24 PM, Jana Radhakrishnan
<jana.radh...@docker.com> wrote:
None of this addresses the other issues in libnetwork - not wrappable,
IPAM is too baked-in, not available today, no access to network Name
field, complex model, etc. I keep hearing from people who tried to
implement libnetwork drivers that it's sort of a bad experience, and
docker doesn't seem keen to make it better (hearsay).
I'll just provide some answers for the perceived libnetwork problems:
* It should be fairly easy to wrap a libnetwork plugin with another plugin.
How? In CNI it's a shell script. How do I wrap a daemon?
* IPAM is coming out before we release. Please feel free to comment on the
proposal: https://github.com/docker/libnetwork/issues/489
Will do.
* Going to be available in stable release in 1.9
I'm anxious to see what happens with the separation of Services and
Networks. I think that conflation is part of what makes the
libnetwork model very complicated.
* Network names should not be that relevant to drivers if their only
responsibility is to plumb low level stuff
I know you guys keep saying that, but lots of people implementing
drivers claim to need it, and now I see exactly why.
* I am not too sure about complexity of the model because the model consists
of just Networks and Endpoints :-)
And sandboxes. And KV stores, but optional. And IPAM. And global vs
local. And "creating endpoints" that get broadcasted across the
network. I'm sorry, the concept count on libnetwork is really high
and not at all obvious. Guru explained it up-thread in a way that was
pretty clear, but it was pretty clearly overkill.
* Implementing a libnetwork driver is all about just implementing 6 APIs,
some of which can be very minimal or no-ops
On top of that there is a general perception that you need libkv for
libnetwork to operate. But this is not true if the driver is available only
in local scope.
...once that bug is fixed. Do local-scope drivers have persistence?
If I create a local driver, create a Network, attach a container to
that Network, and then bounce docker daemon - do my networks come
back?
On Sep 3, 2015, at 9:01 AM, Dan Williams <dc...@redhat.com> wrote:
On Wed, 2015-09-02 at 15:28 -0700, Madhu Venugopal wrote:
Copying Jana as well.
Will read & reply to this thread later today.
Just a quick clarification on some misunderstanding on libkv usage.
libkv supports local persistence (using boltdb) and libnetwork makes use of it for local persistence (https://github.com/docker/libnetwork/pull/466).
And the questions about global vs local libnetwork events are purely a matter of scope of the driver.
If the driver scope is global, endpoint & network create calls are global. But Join is local.
But if driver is scoped local, then all the calls are local.
It wasn't clear from the code, but on libnetwork init, are global
endpoints created on every node as well? It looks like the code
explicitly gets all networks, but then simply starts watching for
endpoints in those networks. There doesn't seem to be an explicit
"ListEndpoints" call anywhere, but perhaps that's a side-effect of some
other behavior?
The reason I ask is that creating endpoints in the driver is a pretty
heavy operation, and if endpoints are global this would essentially
require the driver to create a kernel network interface for every
network, on every host, regardless of whether that host was running a
container that was joined to the endpoint.
The important thing to clarify here is that the driver is not required to create network interfaces during the CreateEndpoint call. CreateEndpoint is used to either:
* Ask the driver to allocate host-independent network resources like IP/MAC etc
On Sep 3, 2015, at 10:50 AM, eugene.y...@coreos.com wrote:
How do I handle the case when the only way I can find out the MAC (or even IP) is by the act of creating the actual interface on the host (or at least once I know what host the container will run on)?
On Sep 3, 2015, at 11:22 AM, eugene.y...@coreos.com wrote:
On Thursday, September 3, 2015 at 11:02:03 AM UTC-7, Jana Radhakrishnan wrote:
On Sep 3, 2015, at 10:50 AM, eugene.y...@coreos.com wrote:
How do I handle the case when the only way I can find out the MAC (or even IP) is by the act of creating the actual interface on the host (or at least once I know what host the container will run on)?
I am assuming you are talking about a specific case of a driver such as IPVLAN. In cases where you have to assign specific locality to the Endpoint, the network (and endpoint) needs to be backed by a local scoped driver. Once a network is created with a local scoped driver, CreateEndpoint and Join will happen on only one host and with that assumption it is trivial to solve your use case.
Yes, I was talking about IPVLAN in the case of MAC and something like flannel in the case of IP (in flannel, each host is allocated a subnet from which container IPs are drawn).
So it sounds like with GlobalScope one also gets libnetwork's control plane, while with LocalScope it's possible to opt out of it. This is like what Alex Pollitt talked about above -- splitting libnetwork into two.
May I ask about the logic behind the driver creating interfaces in the root network namespace and libnetwork moving them into the container namespace (sandbox)? Why not give the driver access to the container namespace (as well as the root ns)? Granted, sometimes an interface has to be created in the root ns and then moved, but ideally the interface should be created in its final resting place. Also, if a driver has access to the container namespace, it could manipulate it in a way that is not currently envisioned by libnetwork (sysctls, netfilter rules, etc).
On Sep 2, 2015, at 8:46 AM, Michael Bridgen <mic...@weave.works> wrote:
So in theory drivers could report as "LocalScope" to a future
libnetwork, and not drag in libkv. However, I would be worried about
additional assumptions made by libnetwork.
Ravi,
On 1), I think I need to dig more into the use-cases you are envisioning. How about we continue that discussion on the other thread you opened? I don’t think this point affects the CNI vs. CNM question, since both support multiple networks.
For 2), perhaps we’re using different definitions for annotations. In the current K8s API, the concept of Annotations is defined here as “arbitrary non-identifying metadata… structured or unstructured”. K8s Annotations are just a bag of arbitrary KV pairs, and the values can be JSON if you want. That’s the place to put a JSON blob associated with a Pod spec.
I’m also using the distinction in the k8s API between Labels (which are identifying metadata, i.e. something you select on) and Annotations (things you don’t want to select on). That distinction is not necessarily required when specifying attributes on a CNM Endpoint, depending on the allowed values for a CNM label. But it seems to me that keeping a semantically similar split would make sense.
Cheers,
Paul
Ravi,
On 1), I think I need to dig more into the use-cases you are envisioning. How about we continue that discussion on the other thread you opened? I don’t think this point affects the CNI vs. CNM question, since both support multiple networks.
For 2), perhaps we’re using different definitions for annotations. In the current K8s API, the concept of Annotations is defined here as “arbitrary non-identifying metadata… structured or unstructured”. K8s Annotations are just a bag of arbitrary KV pairs, and the values can be JSON if you want. That’s the place to put a JSON blob associated with a Pod spec.
Annotations are map[string]string
Proposed blob is map[string]interface{}
If the Value is an interface{} instead of string, it allows for more flexible composition/readability of a Pod spec.
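For comparison, this is what stuffing the same structured value into
today's map[string]string Annotations looks like - the key and the blob's
shape are hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // The structured value we'd like to attach to a Pod spec.
        networkSpec := map[string]interface{}{
            "network":  "backend",
            "policies": []string{"allow-from-frontend"},
        }
        blob, _ := json.Marshal(networkSpec)

        // With map[string]string the whole thing has to be serialized into
        // the string value under a single (hypothetical) key.
        annotations := map[string]string{
            "net.example.com/spec": string(blob),
        }
        fmt.Println(annotations["net.example.com/spec"])
    }

With map[string]interface{} the networkSpec map could be embedded directly,
which is the composition/readability point above.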
Have you implemented a network plugin using the current k8s API? Did it meet your needs?
Going forwards, would you prefer to use CNI or CNM for implementing Kubernetes plugins? (Feel free to include implementation concerns and/or higher-level architectural factors.)
Is it important to you to be able to write one plugin for all k8s container runtimes? (e.g. rkt, runC as well as docker.)
Is it important to you to be able to write a k8s plugin that's usable outside of k8s? (e.g. works natively with something like 'docker run' or 'rkt run'.)