Promoting MCS-API to beta!


Arthur Outhenin-Chalandre

Dec 23, 2025, 2:13:19 PM
to kubernetes-sig-multicluster
Hello everyone,

I am happy to announce that we are considering progressing the
Multi-Cluster Services API (MCS-API) KEP to beta and the associated CRD
to v1beta1 (with no API changes compared to v1alpha1).

MCS-API has matured significantly in recent years. We have now closed
most of the gaps that may have prevented us from reaching beta so far.
This milestone is the result of hard work by many folks in the SIG-MC
community. Also, despite its current alpha status, we have treated the
API as stable for quite some time now and have been careful to avoid
introducing breaking changes, making this graduation to beta long
overdue.

The PRs to handle this promotion have already been created:
- https://github.com/kubernetes/enhancements/pull/5746
- https://github.com/kubernetes-sigs/mcs-api/pull/133

We plan to reach a final decision during the SIG-MC call on January 6,
2026. Please join us there if you would like to express your opinion.
Alternatively, if you cannot make the call, please leave your feedback
via the usual channels (slack, mailing list, the PRs linked above).

For those interested in MCS-API but catching up on the news, here is a
short summary of "recent" improvements and news:
- Creation of a conformance suite which has seen constant improvement
  over the last two years, now testing 27 scenarios (we highly recommend
  checking this out if you are an existing implementer!)
- Closed the gap on missing Service fields
- Added the ability to export annotations and labels
- Formalized many status conditions, including adding conditions to
  ServiceImport
- Added opt-in MCS-API support for the default Kubernetes plugins in
  CoreDNS >= v1.12.2 (available by default in Kubernetes v1.35.0)

Cheers,

-- 
Arthur Outhenin-Chalandre

Tim Hockin

Jan 16, 2026, 6:18:25 PM
to Arthur Outhenin-Chalandre, kubernetes-sig-multicluster
Hi all,

This is something I have been thinking about for a while, and I apologize for having sat on it until now.  I also apologize for any details I get wrong - it's been a while since I looked at the spec for this in great detail.

Here it is - I fear we have over-specified.

I am still a believer in the idea of clusterset and "convention > configuration" across clusters (sameness).  I think ServiceExport is fine, but...do we need ServiceImport?  More precisely, does the MCS spec need to define ServiceImport?  The implication of ServiceImport is that some actor MUST aggregate all of the exported services' namespaces and ensure those namespaces exist in every cluster, whether or not the implementation needs that information.

ServiceImport feels like something that implementations should define for themselves, if and only if they need something like that.

Is there some reason I am forgetting why ServiceImport needs to be a required part of the MCS definition?  I am stretching my memory but it is failing.

Tim


Arthur Outhenin-Chalandre

Jan 17, 2026, 1:15:01 PM
to Tim Hockin, kubernetes-sig-multicluster
Hi Tim,

Here are my thoughts on the ServiceImport resource in MCS. This
doesn't necessarily encompass everything and I might have forgotten
some details too.

A ServiceImport does the following things:
- It exposes the resulting Service properties from all constituent
exported Services
- It includes the IP addresses allocated for this ServiceImport, which
could be the same for every cluster or different for each
- It includes the list of clusters exporting this Service
- It allows exposing any import problem/info within its status
conditions (one of which could be IP protocol incompatibility)
- It's auto-managed/created by the MCS implementation itself, so a
cluster that only consumes a service (without exporting it) does not
have to create (almost) any additional resources

As a consequence of some of the points above, it does not directly
modify any existing Service behaviors. This is mainly because of the
IPs allocated on the ServiceImport and the fact that we use a
different DNS domain (typically clusterset.local).
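
For a bit of concreteness, an auto-created ServiceImport roughly looks
like this (just a sketch based on the v1alpha1 schema, with made-up
names and IPs; field placement may differ slightly in newer revisions):

  apiVersion: multicluster.x-k8s.io/v1alpha1
  kind: ServiceImport
  metadata:
    name: my-svc             # same name as the exported Service
    namespace: my-ns         # same namespace as the exported Service
  spec:
    type: ClusterSetIP       # or Headless
    ips:
      - 10.96.42.42          # clusterset IP allocated by the implementation
    ports:
      - name: http
        protocol: TCP
        port: 80
  status:
    clusters:                # clusters currently exporting this Service
      - cluster: cluster-a
      - cluster: cluster-b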

Also note that, at the KEP level, we do not prescribe what happens
when a Namespace does not exist (implementations may or may not choose
to auto-create the namespace, along with the various conditions and
behaviors around this).

It's not mentioned explicitly in the KEP, but it's also probably
reasonable that an implementation could have some kind of policy (for
instance, a namespace annotation or some CRD) to have additional
control over what namespaces are allowed to export/import Services.

The most important parts of the ServiceImport are probably its IP and
type (headless/non-headless), and, at least for headless
ServiceImports, the related EndpointSlices. The rest could technically
be skipped but could still be useful when a user wants more visibility
into what is happening. It could also be required depending on how
some third-party controller integrates with MCS (for instance, if a
Gateway uses the EndpointSlices directly, it should still be able to
know the various Service properties).

Hopefully this helps clarify the purpose of ServiceImport in MCS. Feel
free to join our weekly SIG-MC calls if you want to discuss this
further with other folks too!

Cheers,

--
Arthur Outhenin-Chalandre

Tim Hockin

Jan 17, 2026, 1:47:51 PM
to Arthur Outhenin-Chalandre, kubernetes-sig-multicluster
Hi,  thanks for the response and time you spent on it.

I do understand the design (I helped frame it in the beginning) but what I see now, some time later, is that ServiceImport is needed by SOME implementations but not ALL potential implementations.  In fact, the requirement that it be present actually seems to impede some possible implementations (e.g. an off-cluster mixer which delivers endpoints information to in-cluster components through a more purpose-built mechanism).

We don't consume ServiceImport from any core components and as far as I can tell it is ONLY used by MCS implementations, right?

As such, shouldn't it be the purview of the implementation to define?

I wonder if perhaps we want to continue to define a schema for it and just make it an optional aspect of the spec for beta?  That could be a nice bridge during beta, but when it goes GA we could make it the implementations' responsibility to define a type - if and only if they need it.

As with all APIs, we want it to be as small as possible and no smaller.  I think this is a place we might have over-shot the mark.

I am not asserting too hard, not yet anyway.  I am looking for reasons why my evolved understanding of the role of this type is wrong. :).

Tim

Arthur Outhenin-Chalandre

Jan 17, 2026, 2:19:40 PM
to Tim Hockin, kubernetes-sig-multicluster
Hmm, interesting. It's true that if, for instance, an implementation
handles DNS completely outside the clusters, you may be able to skip
creating the ServiceImport. It does mean that you won't be able to
integrate with any third-party controllers (anything that reads
Services could also directly support ServiceImport), such as any
Gateway API implementations that support MCS-API.

I don't know if it makes sense to completely remove the type, but
assuming an implementation does not need any in-cluster components to
import/consume MCS and doesn't want/need to integrate with anything
else, it does indeed seem technically possible to make ServiceImport
optional.

Stephen Kitt

Jan 20, 2026, 10:05:14 AM
to kubernetes-sig-multicluster
Hi Tim,

On Sat, Jan 17, 2026 at 10:47:34AM -0800, 'Tim Hockin' via kubernetes-sig-multicluster wrote:
> I do understand the design (I helped frame it in the beginning) but what I
> see now, some time later, is that ServiceImport is needed by SOME
> implementations but not ALL potential implementations. In fact, the
> requirement that it be present actually seems to impede some possible
> implementations (e.g. an off-cluster mixer which delivers endpoints
> information to in-cluster components through a more purpose-built
> mechanism).

As far as end users are concerned, I agree, ServiceImport is an
implementation detail; in fact we’ve always said (but not in the KEP)
that it is, which in particular means that there are no
user-serviceable parts inside. The MCS API can nearly be boiled down
to

* users create a ServiceExport in one or more exporting cluster(s)
* within a reasonable time frame, in other clusters, the corresponding
DNS name resolves to something providing the service

so as you say, the KEP could be simplified.
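
To make that concrete, the whole user-facing flow is roughly the
following (a sketch with made-up names; the clusterset.local zone is
the one defined by the KEP):

  # In an exporting cluster, next to the existing Service "my-svc":
  apiVersion: multicluster.x-k8s.io/v1alpha1
  kind: ServiceExport
  metadata:
    name: my-svc        # must match the name of the Service being exported
    namespace: my-ns

  # ...and within a reasonable time frame, from any cluster in the set,
  # my-svc.my-ns.svc.clusterset.local resolves to the exported service.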

> We don't consume ServiceImport from any core components and as far as I can
> tell it is ONLY used by MCS implementations, right?

The only use I’m aware of outside of MCS implementations is in
CoreDNS, which supports resolving clusterset.local queries (or indeed
any other domain) using ServiceImports. This was initially implemented
in a specific multicluster plugin, but
https://github.com/coredns/coredns/pull/7266 added the functionality
to the kubernetes plugin itself.

Gateway API has a similar ServiceImport but it’s separately defined.

> As such, shouldn't it be the purvey of the implementation to define?
>
> I wonder if perhaps we want to continue to define a schema for it and just
> make it an optional aspect of the spec for beta? That could be a nice
> bridge during beta, but when it goes GA we could make it the
> implementations' responsibility to define a type - if and only if they need
> it.
>
> As with all APIs, we want it to be as small as possible and no smaller. I
> think this is a place we might have over-shot the mark.
>
> I am not asserting too hard, not yet anyway. I am looking for reasons why
> my evolved understanding of the role of this type is wrong. :).

I’ve started going through KEP-1645 and removing ServiceImport, and so
far I haven’t run into any blockers. The only issue I currently have
with dropping ServiceImport entirely is CoreDNS.

Regards,

--
Stephen Kitt
Senior Principal Software Engineer
Red Hat OpenShift Networking

Arthur Outhenin-Chalandre

Jan 20, 2026, 10:57:47 AM
to Stephen Kitt, kubernetes-sig-multicluster
Hi Stephen,

The following is not really true:

> Gateway API has a similar ServiceImport but it’s separately defined.

Gateway API implementations that support MCS-API directly will
actively read the ServiceImport. For instance, the envoyproxy Gateway
API implementation directly uses the ServiceImport API imported from
kubernetes-sigs/mcs-api.

Today this is mostly the case only for Gateway API, but in fact
anything (mostly controllers) that actively watches Services could
also actively watch ServiceImports. One example could be the
Prometheus operator, which has a Service selector, so it would get
actual value out of ServiceImport versus just being handed a DNS name
(some "Service Discovery" by label). I don't have a lot of examples
besides that, because I agree with you that most things will just
consume DNS, including via the normal Service API, without interacting
directly with the Kubernetes API, but I am sure we could find a few
more interesting examples by looking into it a bit more.

So making ServiceImport optional could be fine; it just means that
some integrations will not be possible with implementations that
choose not to create ServiceImports (and maybe it would fracture the
implementations too much as a result too...). But removing the API
entirely seems too much, as it would make those types of integration
impossible or force them to rely on provider-specific ServiceImport
equivalents :/.

Cheers,

--
Arthur Outhenin-Chalandre

Jeremy Olmsted-Thompson

Jan 20, 2026, 11:00:25 AM
to Stephen Kitt, kubernetes-sig-multicluster
As I see it, CoreDNS is a good example of ServiceImport serving its purpose (MC-Gateway as well).

CoreDNS is the most popular K8s DNS solution these days and it just works because it can count on ServiceImport being present.

The other main benefit of ServiceImport, not mentioned above but called out in the KEP, is user-driven service discovery. If I create a ServiceExport in one cluster, yes, there might eventually be a clusterset.local service to consume in another cluster, but how is someone with access only to that other cluster supposed to know about it?

I don't think we should remove ServiceImport or make it optional at this point. Implementations don't need to use it themselves, but for the user's sake and for useful addons to function, they need to create it.


Stephen Kitt

Jan 20, 2026, 11:35:21 AM
to kubernetes-sig-multicluster
On Tue, Jan 20, 2026 at 08:00:08AM -0800, 'Jeremy Olmsted-Thompson' via kubernetes-sig-multicluster wrote:
> As I see it, CoreDNS is a good example of ServiceImport it serving its
> purpose (MC-Gateway as well).
>
> CoreDNS is the most popular K8s DNS solution these days and it just works
> because it can count on ServiceImport being present.
>
> The other main benefit of ServiceImport not mentioned above, but called out
> in the KEP is for user driven service discovery. if I create a
> ServiceExport in one cluster, yes there might just eventually be a
> clusterset.local service to consume in another cluster, but how is someone
> with access only to that other cluster supposed to know about it?

Right, and it’s used in this fashion by the conformance tests.

Another point mentioned in the KEP is that the ServiceImport's status
conditions are used to reflect certain error conditions. This
is the main sticking point as far as the KEP itself is concerned. See
<https://github.com/kubernetes/enhancements/pull/5816> for an update
removing ServiceImport; it has a TODO where something would have to be
provided to carry that information, and it’s not clear to me that
ServiceExport is the right place for it. As we discussed when detailed
conditions were added to both objects, some error conditions are
specific to the importing cluster and wouldn’t make sense on the
exporting cluster!

> I don't think we should remove ServiceImport or make it optional at this
> point. Implementations don't need to use it themselves, but for the user's
> sake and for useful addons to function, they need to create it.

Agreed, and I suspect all we can do is treat this as yet another
example of what not to do in future KEPs ;-).

Stephen Kitt

Jan 20, 2026, 11:58:50 AM
to kubernetes-sig-multicluster
On Tue, Jan 20, 2026 at 04:57:28PM +0100, Arthur Outhenin-Chalandre wrote:
> The following thing is not really true:
>
> > Gateway API has a similar ServiceImport but it’s separately defined.
>
> Gateway API implementations that does support MCS-API directly will
> actively read the ServiceImport. For instance, envoyproxy Gateway-API
> implementation uses directly the ServiceImport API imported from
> kubernetes-sig/mcs-api.

My point was that the Gateway API spec has its own ServiceImport.
However, the fact that Gateway API *implementations* use the MCS API
ServiceImport (presumably in addition to the Gateway API
ServiceImport) supports your conclusion:

[...]
> So making ServiceImport optional could be fine, it just means that some
> integration will not be possible with some implementations that chose
> to not create ServiceImport (and maybe it would fracture too much the
> implementations as a result too...). But removing the API entirely
> seems too much as it would make those type of integration impossible
> or rely on provider specific ServiceImport equivalent :/.

While it’s not explicit in the KEP, ServiceImport is indeed useful as
an indicator that an external service is available.

As suggested in other emails in this thread, it might not be possible
or wise to remove ServiceImport entirely. That leaves the question of
how to handle conformance for MCS implementations that don’t use it...

There is precedent in the KEP for optional features, and maybe that
would be appropriate for ServiceImport; see EndpointSlice support and
even DNS-based discovery (although in practice that’s not really
optional!).

Stephen Kitt

Jan 23, 2026, 3:25:34 AM
to kubernetes-sig-multicluster
On Tue, Jan 20, 2026 at 04:04:50PM +0100, Stephen Kitt wrote:
> On Sat, Jan 17, 2026 at 10:47:34AM -0800, 'Tim Hockin' via kubernetes-sig-multicluster wrote:
[…]
> > We don't consume ServiceImport from any core components and as far as I can
> > tell it is ONLY used by MCS implementations, right?
>
[…]
>
> Gateway API has a similar ServiceImport but it’s separately defined.

That was rubbish on my part: https://gateway-api.sigs.k8s.io/geps/gep-1748/
explicitly references the MCS ServiceImport.

Tim Hockin

Jan 27, 2026, 12:30:14 PM
to Stephen Kitt, kubernetes-sig-multicluster
The thought that spawned this discussion, for me, was looking at possible implementations which don't need ServiceImport (e.g. mesh) but are obligated to create it.  This means that (assuming you want an approximately all-to-all MC setup), some actor MUST be empowered to create and label namespaces in each cluster, which is sort of a very high level of privilege.
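
(To make the privilege point concrete: in every importing cluster, the
MCS controller ends up needing a grant roughly like the following.
This is only a sketch; the role name is made up and the exact rules
vary by implementation.)

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: mcs-importer       # illustrative name only
  rules:
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["get", "list", "watch", "create"]   # cluster-wide namespace creation
    - apiGroups: ["multicluster.x-k8s.io"]
      resources: ["serviceimports"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]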

I don't buy that discovery is a real problem for almost anyone -- nobody "goes shopping" for services in their own cluster.  They either know what they need or they don't need it.

I do buy that in-cluster implementations of DNS need *something* to tell them that a name exists and what kind of name it is (service, MCService, possibly more).  Does that something have to come with a real namespace?  Or could imports be in a more centralized namespace?

I also buy that MCGateway currently depends on ServiceImport, and I have always found that to be awkward.

The meeting is in O(minutes), so I guess I will jump in there.

Tim


Laura Lorenz

Jan 28, 2026, 4:24:31 PM
to kubernetes-sig-multicluster
Hello all,

After today's discussion I have created a decision doc along with tabs for a summary of the conversation to date and the transcript from today's call.

If you have more comments for the pro/con list, or can help add more detail to the Appendices where example YAML/rbac/kubectl commands are stored, I would appreciate the help.

One thing I noticed while drafting this is that the security concern regarding namespace generation that Tim brought up (which I believe is his topmost concern?) is being conflated with the presence/absence of ServiceImport, but the real reason the namespace exists is to indicate importability into a cluster, not simply to serve as a home for the ServiceImport. Even if we move ServiceImport, unless we add something else or drop that feature, we would still need actors to create namespaces to indicate willingness to accept services from that (likely remote) workload namespace in their cluster.

Laura

Tim Hockin

Jan 28, 2026, 10:55:36 PM
to Laura Lorenz, kubernetes-sig-multicluster
Picking on one point, sorry it got long...


On Wed, Jan 28, 2026, 1:24 PM 'Laura Lorenz' via kubernetes-sig-multicluster <kubernetes-si...@googlegroups.com> wrote:

> One thing I noticed while drafting this is that the security concern issue regarding namespace generation that Tim brought up (which I believe is his topmost concern?) is being conflated with the presence/absence of ServiceImport, but the real reason that exists is to indicate importability into a cluster, not simply as a home for the ServiceImport. Even if we move ServiceImport, without something else or dropping that feature, we would still need actors to create namespaces to indicate the willingness to accept services from that (likely remote) workload namespace in their cluster.

I find this quite backwards.  To make it concrete:

Alice and Adam are apps people, using namespaces "alice-team" and "adam-team", respectively.

By the principle of sameness, a namespace name, e.g. "alice-team", means the same thing across all clusters in the set.

Alice's team only uses cluster C1, and Adam's team only ever uses cluster C2.

Platform-person Pat has configured C1 to have namespace "alice-team" and C2 to have "adam-team".  The non-existence of the reciprocals gives Pat a certain satisfaction - it is easy to verify.

Enter MCS.

Alice decides she wants to consume a service produced by adam-team.  Easy!  She can just ask Adam to drop a ServiceExport into his namespace on C2. And he does.

But that isn't sufficient!  

Without the adam-team namespace on C1, there's no place to put the ServiceImport.

So Alice needs to go bother Pat, and ask them to create namespace "adam-team" on cluster C1.  "But..." cries Pat, "but how will we KNOW that Adam's team doesn't accidentally accumulate additional permissions?" Now that the namespace exists it's a lot less obvious.

Now, what happens if Alice gets access to a second cluster -- should that ALSO have adam-team?  That's kind of tangled behavior.

And what if, at some later point, Alice doesn't need adam-team's service anymore?  Who will be responsible to clean that up?

Lastly, the non-existence of the import as a form of permission feels like security-by-obscurity.  Hiding the addresses of those backend pods doesn't actually prevent me from using them, just makes them a bit more difficult to find.

I buy the idea that there are rules about which clusters can EXPORT into a clusterset-service (there's never a reason for C1 to have a ServiceExport for adam-team, since he is not permitted to use C1), but hiding the imports feels like theatre.

Whew.  I don't know if that changes the preponderance of the decision, but I think the SIG's position should be that MCS is not a policy layer, it's a routing and discovery layer.  All exported services are exported to all member clusters, at least in concept.

Getting rid of (or changing) ServiceImport would mean that Pat doesn't need to muddy the water about which namespaces are supposed to be on which cluster.

Tom Pantelis

Jan 31, 2026, 9:03:53 AM
to Tim Hockin, Laura Lorenz, kubernetes-sig-multicluster
> "Alice decides she wants to consume a service produced by adam-team."

But doesn't this sort of contradict the earlier statement that "Alice's team only uses cluster C1"? And would Pat, the admin/platform person, be OK with Alice being able to access Adam's cluster without his approval, seeing that Pat's satisfaction with the non-existence of the reciprocals implies he wants to keep them separate? If so, Pat would need to take some action to grant Alice access.


Tim Hockin

Feb 1, 2026, 1:26:37 AM
to Tom Pantelis, Laura Lorenz, kubernetes-sig-multicluster
I can see that argument but I don't think it's realistic.  Pat, the platform person, should not need to be involved in the application layer, and that's what this is, right?

Pat put these clusters together in a clusterset, and enabled some sort of network connectivity between them AND enabled MCS.  He may choose to restrict who can create a ServiceExport, and presumably there is an authn/authz layer SOMEWHERE that governs whether Alice can actually do anything with that (and identity / policy should be a SUPER interesting topic for this SIG :)

Adam was allowed to export his service and he chose to do so.  Pat should not need to get any more involved than that.

IMO the only model that makes sense is that the adam-team NS in C1 would be auto-created. Mmmmmmaybe there's yet another layer of control somewhere that describes the allowed links between namespaces, but that feels the same as the authz layer to me?

Tom Pantelis

Feb 1, 2026, 12:38:22 PM
to Tim Hockin, Laura Lorenz, kubernetes-sig-multicluster
On Sun, Feb 1, 2026 at 1:26 AM Tim Hockin <tho...@google.com> wrote:
> I can see that argument but I don't think it's realistic.  Pat, the platform person, should not need to be involved in the application layer, and that's what this is, right?


I don't know, perhaps. It could also be that this whole example/scenario is a bit contrived. 

I do know that we've had an MCS implementation out in the field for 5+ years and haven't encountered any user with such a scenario.

Tom Pantelis

Feb 1, 2026, 1:06:50 PM
to Tim Hockin, Laura Lorenz, kubernetes-sig-multicluster
On Sun, Feb 1, 2026 at 12:38 PM Tom Pantelis <tompa...@gmail.com> wrote:
> On Sun, Feb 1, 2026 at 1:26 AM Tim Hockin <tho...@google.com> wrote:
>> I can see that argument but I don't think it's realistic.  Pat, the platform person, should not need to be involved in the application layer, and that's what this is, right?
>
> I don't know, perhaps. It could also be that this whole example/scenario is a bit contrived.
>
> I do know that we've had an MCS implementation out in the field for 5+ years and haven't encountered any user with such a scenario.

I'll just make one more point re: "Pat, the platform person, should not need to be involved in the application layer":

Wouldn't that also include the application namespaces, i.e. "alice-team" and "adam-team"? If so, then it seems this statement, "The non-existence of the reciprocals gives Pat a certain satisfaction", wouldn't apply, and neither would "So Alice needs to go bother Pat, and ask them to create namespace "adam-team" on cluster C1".

Tim Hockin

Feb 2, 2026, 2:14:17 PM
to Tom Pantelis, Laura Lorenz, kubernetes-sig-multicluster
Hi Tom,

> I'll just make one more point re: "Pat, the platform person, should not need to be involved in the application layer":

> Wouldn't that also include the application namespaces, ie "alice-team" and "adam-team"? If so then  then it seems this statement, "The non-existence of the reciprocals give Pat a certain satisfaction", wouldn't apply and neither would "So Alice needs to go bother Pat, and ask them to create namespace "adam-team" on cluster C1 ". 

I am not sure I understand the point you are making, so let me try to clarify and you can tell me how badly I missed it :)

Pat decides, based upon who-knows-what, which teams get access to which clusters.  It might be capacity based, or internal funny-money based, or security.  I don't know and it should not matter to the technical design.  Do we agree on that much?

In this example, Pat has decided that Alice's team only uses cluster C1, and Adam's team only ever uses cluster C2.  There could easily be other clusters involved (e.g. Alice is on C1 and C3, Adam is on C2 and C4) but that doesn't fundamentally change the calculus, I think.

How Pat encodes that is similarly irrelevant.  It could be some control-plane or gitops or a shell script.  Don't know, don't care.  What we care about is that some actor has reified Pat's desired state into C1 and C2, in the form which distils to:
  * kubectl --context=c1 create ns alice-team
  * kubectl --context=c2 create ns adam-team

Pat knows that Alice is not allocated to C2 and Adam is not allocated to C1 (the reciprocals of the above desired state).

The non-existence of namespace alice-team in C2 and of adam-team in C1 is a pretty strong signal that the desired policy is being enforced.  This is the platform-team's job.  Through all of this, Pat never had to consider the applications that Alice and Adam actually run.

So the rub is that when Alice wants to use a service exported by adam-team (or ANY other namespace which doesn't happen to run in C1), Pat needs to sacrifice the "no reciprocals" property.  That bothers me.

Looking at it a different way:

* Suppose alice-team runs in C1
* Suppose adam-team runs in C1, C2, and C3 
* Adam's team exposes an MCS that is used by other teams, including Alice's
* Everything works fine on Monday
* On Tuesday, Adam's team turns up a replica in C4
* On Wednesday, they turn down their C1 replica
* On Thursday, Pat wants to clean up

Should Pat delete C1's "adam-team" namespace or not?  He can't know that without knowing the application.  If he deletes it, he will break the app in alice-team.  If he doesn't delete it today, can he EVER delete it?

It seems to me that the only logical conclusion is that every namespace appears in every cluster (within the set) AND that the enforcement of team<->cluster mapping cannot use namespace existence as a data point.

Tim