Share the IPC namespace between pods


zoltan...@gmail.com

Nov 8, 2017, 12:22:10 PM
to Kubernetes user discussion and Q&A
Currently it is possible to share the IPC namespace (for shared memory) within a pod, but not between pods.
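
For example, as far as I understand, two containers in one pod already land in a single IPC namespace, so a System V segment created by one is visible to the other. A minimal sketch using the k8s.io/api Go types (the pod name, container names and images are placeholders):

```go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Two containers in one pod: by default they share the pod's IPC namespace,
    // so System V shared memory and semaphores created by one are visible to the other.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "shm-demo"}, // placeholder name
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {Name: "producer", Image: "example.com/shm-producer:latest"}, // placeholder image
                {Name: "consumer", Image: "example.com/shm-consumer:latest"}, // placeholder image
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out)) // emit the manifest; apply it with kubectl apply -f -
}
```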

Is this something that will be supported in the future, or does it go against the very design of Kubernetes?

What is the general opinion of the Community on this?

Thanks,
Z

Tim Hockin

Nov 8, 2017, 12:42:39 PM
to Kubernetes user discussion and Q&A
Pods should make very few assumptions about other pods. Sharing IPC
implies a high level of affinity, at which point I would question why
they are two different pods in the first place.

zoltan...@gmail.com

Nov 8, 2017, 1:08:05 PM
to Kubernetes user discussion and Q&A
Thanks Tim,

Do you know of any technique(s) to speed up the network between pods (probably co-located on the same machine)? Shared-memory communication seems to be a good candidate within a pod.

Matthias Rampke

Nov 8, 2017, 1:35:24 PM
to kubernet...@googlegroups.com

How big is the overhead from going through the bridge normally?
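
(If you want a crude number: a minimal round-trip timer, sketched below under the assumption that you run the "serve" side in one pod and the "ping" side in another, then repeat with both containers inside the same pod over localhost. The port, flag names and message size are made up for the sketch.)

```go
package main

// Rough round-trip timer for comparing pod-to-pod latency with same-pod
// (localhost) latency. All defaults below are placeholders for the sketch.

import (
    "flag"
    "fmt"
    "io"
    "log"
    "net"
    "time"
)

// serve echoes every byte it receives back to the sender.
func serve(addr string) {
    ln, err := net.Listen("tcp", addr)
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c net.Conn) {
            defer c.Close()
            io.Copy(c, c)
        }(conn)
    }
}

// ping sends n small messages and reports the average round-trip time.
func ping(addr string, n int) {
    conn, err := net.Dial("tcp", addr)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    buf := make([]byte, 64)
    start := time.Now()
    for i := 0; i < n; i++ {
        if _, err := conn.Write(buf); err != nil {
            log.Fatal(err)
        }
        if _, err := io.ReadFull(conn, buf); err != nil {
            log.Fatal(err)
        }
    }
    fmt.Printf("avg RTT %v over %d round trips\n", time.Since(start)/time.Duration(n), n)
}

func main() {
    mode := flag.String("mode", "serve", "serve or ping")
    addr := flag.String("addr", ":7777", "address to listen on or connect to")
    n := flag.Int("n", 10000, "round trips to measure")
    flag.Parse()
    if *mode == "ping" {
        ping(*addr, *n)
    } else {
        serve(*addr)
    }
}
```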

Tim Hockin

Nov 8, 2017, 2:27:52 PM
to Kubernetes user discussion and Q&A
Are you concerned about perf because you measured it? Or because you
suspect it might become a thing later?

Are you really sure that your pods will ALWAYS be on the same host?

Are your pods 1:1 or 1:N relationships?

Could these highly-connected pods just be one bigger pod?

To be sure, there's some overhead in networking containers today, but
you haven't really explained your problem.

zoltan...@gmail.com

Nov 9, 2017, 5:27:10 AM
to Kubernetes user discussion and Q&A
We measured that we lose at least one order of magnitude in terms of latency, which is our key KPI in this setup.

Pods are not always on the same host, but we play with co-location a lot.
1:1 mainly.
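
(Co-location here meaning scheduling the two pods onto the same node, which can be expressed with pod affinity, roughly like the fragment below, sketched with the k8s.io/api Go types; the "app: db" label is just an illustration.)

```go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Require this pod to be scheduled onto the same node as the pods
    // carrying the (placeholder) "app: db" label.
    affinity := corev1.Affinity{
        PodAffinity: &corev1.PodAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "db"}, // placeholder label
                },
                TopologyKey: "kubernetes.io/hostname", // "same node"
            }},
        },
    }
    out, _ := json.MarshalIndent(affinity, "", "  ")
    fmt.Println(string(out)) // this goes under spec.affinity of the client pod
}
```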

Matthias, Tim, I will get back to you with a decent benchmark on our bare-metal machines. Thanks so far for your kind help!

Zoltán

Rodrigo Campos

Nov 9, 2017, 9:06:00 AM
to kubernet...@googlegroups.com
On Thursday, November 9, 2017, <zoltan...@gmail.com> wrote:
> We measured that we lose at least one order of magnitude in terms of latency, which is our key KPI in this setup.

What is the comparison here? Shared memory vs. what?

If your architecture doesn't allow apps to run on different nodes and communicate over the network (**any** network you might have), then you are probably pretty limited and should rethink things, or just use a hot/standby setup. Maybe use a LOT of co-location (but that can get out of control if not done carefully), etc.

Unless there is a benchmark showing something specific about the networking that Kubernetes/containers need, something you don't hit when not using Kubernetes/containers, this seems like a more general issue to me.

Am I missing something?

Tim Hockin

Nov 9, 2017, 12:12:10 PM
to Kubernetes user discussion and Q&A
On Thu, Nov 9, 2017 at 2:27 AM, <zoltan...@gmail.com> wrote:
> We measured that we lose at least one order of magnitude in terms of latency, which is our key KPI in this setup.

If you are moving into Kubernetes from a model where your apps were
guaranteed to be colocated, you are going to have a rough trip.
Kubernetes starts by assuming that everything speaks over the network.

> Pods are not always on the same host, but we play with co-location a lot.
> 1:1 mainly.

Consider bundling them into the same Pod? Pods share IPC and are
co-scheduled. In some sense, a pod is a replacement for a VM.

Alternatively, you could consider the `hostIPC` field - it will put
your pods in the machine's IPC space, wherein they can find each
other. We don't have a general mechanism yet for sharing namespaces
across pods. We may get there some day, but the complexity has to be
justified pretty broadly.
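
Roughly like this, if it helps: a sketch with the k8s.io/api Go types, where `HostIPC` is the field I mean and everything else (names, images, and whatever well-known key your apps agree on) is a placeholder.

```go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makePod builds a single-container pod that joins the node's IPC namespace.
// Two such pods scheduled onto the same node can then find each other's
// System V segments via a key they agree on out of band.
func makePod(name, image string) corev1.Pod {
    return corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            HostIPC: true, // use the node's IPC namespace instead of a per-pod one
            Containers: []corev1.Container{
                {Name: name, Image: image},
            },
        },
    }
}

func main() {
    for _, p := range []corev1.Pod{
        makePod("writer", "example.com/writer:latest"), // placeholder image
        makePod("reader", "example.com/reader:latest"), // placeholder image
    } {
        out, _ := json.MarshalIndent(p, "", "  ")
        fmt.Println(string(out))
    }
}
```

Both pods still have to land on the same node for this to do anything (so you would pair it with node or pod affinity), and host IPC trades away isolation, so cluster security policy may restrict it.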

zoltan...@gmail.com

Nov 9, 2017, 1:41:09 PM
to Kubernetes user discussion and Q&A
Rodrigo, you are correct: that results in co-location hell. It also requires careful calculations ahead of time to ensure that you can co-locate the corresponding data partitions (in the 1:N case, where we have a high-demand database) at all times, even across node failures. Moreover, if you happen to be unable to co-locate a partition, a synchronous call might end up waiting on that straggler partition.

> We don't have a general mechanism yet for sharing namespaces
> across pods. We may get there some day, but the complexity has to be
> justified pretty broadly.

Thanks Tim. What would you suggest for a 1:N scenario under heavy demand, where we would like to get rid of that one order of magnitude in DB access latency?