
Coda git repository available


Jan Harkes

Apr 18, 2016, 3:59:59 PM
On Tue, Jan 27, 2015 at 03:57:59PM -0500, Jan Harkes wrote:
> On Tue, Jan 27, 2015 at 02:39:32PM -0500, Greg Troxel wrote:
> >
> > Greg Troxel <g...@ir.bbn.com> writes:
> >
> > On 2011-03-11, I sent a patch to adapt coda to modern NetBSD. As of
> > 6.9.5, this patch is still in pkgsrc. Is there some problem with
> > applying it? (The patch file has a new name, but it's the same bits.)
> >
> > (Also, is there anybody here?)
>
> Yes someone is here.
>
> Haven't checked yet if the patch is applied to CVS, but I have a couple
> of emails with Coda related patches that were sent to codalist and me
> privately tagged to be applied.

APPLIED! After only about 5 years, that must be a new record somewhere.
Anyway, now I have to go and look for those other Coda related patches.

And all of this is made possible because,

> I figured I'd apply them after the CVS -> Git conversion was complete
> and then release a new Coda version with the state of things. I've
> scripted the whole conversion process with reposurgeon and stopped
> trying to 'perfect' the conversion mostly because I seemed to be getting
> stuck on reposurgeon related bugs.

I finally finished the conversion. For a while I tried to redo it with a
newer version of reposurgeon and cvs-fast-export, but they introduced a
whole new slew of small issues and didn't actually fix any of the coloring
issues when trying to break apart and piece back together the fragments of
history across 4 CVS and git repositories.

The final check was to pull from CVS whatever was tagged for each Coda
release and any differences have been committed back into the git
repository and tagged with gpg-signed tags, making the tree look a bit
like the following.

o <--- regular commits from cvs->git conversion
|
| o <--- coda-x.y.z checked out from CVS and tagged as coda-x.y.z
|/
o <--- coda-x.y.z from cvs->git conversion
|


At some point I stopped trying to manually reconstruct information that
was lost because CVS doesn't track branching and merging, or proper author
attribution beyond whatever is noted in the commit message; the
original commit messages are still there.

The official Coda git repository is now at

https://github.com/cmusatyalab/coda

There is no new release yet, I just made the push to finish the conversion
last week and want to make sure there are not any more unapplied patches
floating around, and then I have to dust off/rewrite whatever I used to
use to make releases.

Jan

u-m...@aetey.se

Apr 19, 2016, 4:20:49 AM
On Mon, Apr 18, 2016 at 03:59:51PM -0400, Jan Harkes wrote:
[skipped]

Great work Jan!

> The official Coda git repository is now at
>
> https://github.com/cmusatyalab/coda
>
> There is no new release yet, I just made the push to finish the conversion
> last week and want to make sure there are not any more unapplied patches
> floating around, and then I have to dust off/rewrite whatever I used to
> use to make releases.

For the record, there exists our yet-unpublished Aetey/Chalmers branch,
differing quite a bit from 6.9.5, with

- the so-called modular clog included, with proper integration with Kerberos
  and support for multiple authentication authorities (this appeared
  at CMU long ago but never made it to upstream)
- server IP addresses can be changed without disrupting client
  operation and without client-side administration, removing the afs-derived
  "static server IP" constraint
- clients can handle servers running on non-standard ports (as a result
  e.g. more than one server can be put behind the same NAT)
- a bunch of small fixes and tweaks

It remains compatible with servers and clients built from upstream code,
modulo the new features which upstream does not yet support.

The best would be to merge the changes upstream. They are vital for
usability.

What is the expected workflow for submitting and reviewing patches?

Best regards,
Rune

Jan Harkes

Apr 20, 2016, 11:50:54 AM
On Tue, Apr 19, 2016 at 10:13:06AM +0200, u-m...@aetey.se wrote:
> > The official Coda git repository is now at
> >
> > https://github.com/cmusatyalab/coda
> >
> > There is no new release yet, I just made the push to finish the conversion
> > last week and want to make sure there are not any more unapplied patches
> > floating around, and then I have to dust off/rewrite whatever I used to
> > use to make releases.
>
> For the record, there exists our yet-unpublished Aetey/Chalmers branch,
> differing quite a bit from 6.9.5, with
>
> - the so-called modular clog included, with proper integration with Kerberos
>   and support for multiple authentication authorities (this appeared
>   at CMU long ago but never made it to upstream)
> - server IP addresses can be changed without disrupting client
>   operation and without client-side administration, removing the afs-derived
>   "static server IP" constraint
> - clients can handle servers running on non-standard ports (as a result
>   e.g. more than one server can be put behind the same NAT)
> - a bunch of small fixes and tweaks

I saw the modular clog a long time ago and I remember it was not in
a state to be easily merged; it added a bunch of complexity and additional
configuration for what I thought were not typical cases. The few
comments I remember were,

  - the clog token file format was/is not portable across platforms
    and needed to be cleaned up so that different endianness systems
    could also read the tokens that were used, not everything is x86.
  - there was some sort of configuration daemon serving up
    configuration data for the authentication from a tcp port which
    introduces a whole slew of unevaluated security concerns. Since it
    is http how is the configuration secured from MitM attacks, can
    someone replace authentication with an auth-null forcing the
    clients to suddenly send out cleartext passwords, etc. And if it
    is secured (HMAC?), how do clients know how to properly vet the
    signature, etc.

As far as the other changes, I haven't seen any of them but I can still
throw out some unsolicited comments.

Allowing server IP addresses to change without client-side intervention
has to introduce some new level of indirection above the IP layer and
the most obvious one is to refer to servers using their DNS names. This
works nicely with the 'new' RPC2 call I added almost 10 years ago.

https://github.com/cmusatyalab/coda/commit/86d97f6db3ac13d6d83f47636e23891f9380f537

The response to that RPC call even includes the port the server is
running on. The only part that has not been implemented is the client
side, and it only got stalled because the available async DNS libraries
were either not LWP compatible or had a non-GPL-compatible license.
Synchronous DNS blocks the complete client process, causing various kinds
of connectivity/reliability issues, so I am interested to see how you
solved that part of the problem.
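
To make that concern concrete, the missing client-side step boils down
to something like the following rough sketch (not actual venus code; the
function name and the way the port is passed in are placeholders):
----
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Sketch only: resolve the hostname/port returned by
 * ViceGetVolumeLocation(). getaddrinfo() is synchronous, so in an
 * LWP-threaded client every coroutine stalls until the resolver
 * returns, which is exactly the problem described above. */
static int resolve_server(const char *host, const char *port,
                          struct sockaddr_storage *out, socklen_t *outlen)
{
    struct addrinfo hints, *res;
    int err;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;   /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_DGRAM;  /* RPC2 runs over UDP */

    err = getaddrinfo(host, port, &hints, &res); /* blocks everything */
    if (err) {
        fprintf(stderr, "resolving %s: %s\n", host, gai_strerror(err));
        return -1;
    }

    memcpy(out, res->ai_addr, res->ai_addrlen);
    *outlen = res->ai_addrlen;
    freeaddrinfo(res);
    return 0;
}
----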

> It remains compatible with servers and clients built from upstream code,
> modulo the new features which upstream does not yet support.
>
> The best would be to merge the changes upstream. They are vital for
> usability.

Unless you have managed to fix some horrible reintegration/resolution
conflict-introducing bug, I respectfully disagree with the 'vital' part
of that statement. But merging upstream is, for many things, especially
bug fixes, always a good idea.

> What is the expected workflow for submitting and reviewing patches?

Well, github has the concept of pull requests. The best method seems to
be to have small, independent, single purpose changes on 'feature
branches' that can be easily read, reviewed and merged. Any single large
diff that touches many different parts would inevitably take a longer
time and may require several roundtrips as issues are found and
addressed.

Right now I'm mostly concerned with cleaning things up and addressing
some bugs and possible security issues I believe I have identified.
Anything that removes lines of code, and/or reduces overall complexity
will be much easier to merge because it reduces the amount of cleanup
necessary.

Jan

ps. About the 'as yet-unpublished' part of your comment, I do hope you
are adhering to the GPL license; as you distribute binaries of Coda with
extensive changes, you are expected to make the corresponding source
available. In fact, as a company, you would be expected to make the source
available even if there are no extensive changes.

u-m...@aetey.se

Apr 21, 2016, 8:17:20 AM
Hello Jan,

Nice to feel your engagement, thanks for the comments.

On Wed, Apr 20, 2016 at 11:50:00AM -0400, Jan Harkes wrote:
> > For the record, there exists our yet-unpublished Aetey/Chalmers branch,
> > differing quite a bit from 6.9.5, with
> >
> > - the so-called modular clog included, with proper integration with Kerberos
> >   and support for multiple authentication authorities (this appeared
> >   at CMU long ago but never made it to upstream)
> > - server IP addresses can be changed without disrupting client
> >   operation and without client-side administration, removing the afs-derived
> >   "static server IP" constraint
> > - clients can handle servers running on non-standard ports (as a result
> >   e.g. more than one server can be put behind the same NAT)
> > - a bunch of small fixes and tweaks
>
> I've seen the modular clog a long time ago and I remember it was not in
> a state to be easily merged, added a bunch of complexity and additional
> configuration for what I thought were not typical cases. The few

The additional configuration is optional if you do not need the new
features.

There remains the added complexity. That's why the code is modular: most
of it is in separate (and optional, unless you need the functionality)
executables.

Finally, I guess the notion of a typical Coda deployment has changed
since you saw the code.

> comments I remember were,
>
> - the clog token file format was/is not portable across platforms
> and needed to be cleaned up so that different endianess systems
> could also read the tokens that were used, not everything is x86.

If this was an issue it presumably has been solved long ago too.

Peeking at the current code...
This does not look broken; it must have been fixed before the code
reached Aetey:
----
...
static void export(ClearToken *cToken)
{
    cToken->AuthHandle = htonl(cToken->AuthHandle);
    cToken->ViceId = htonl(cToken->ViceId);
    cToken->BeginTimestamp = htonl(cToken->BeginTimestamp);
    cToken->EndTimestamp = htonl(cToken->EndTimestamp);
}

static void import(ClearToken *cToken)
{
    cToken->AuthHandle = ntohl(cToken->AuthHandle);
    cToken->ViceId = ntohl(cToken->ViceId);
    cToken->BeginTimestamp = ntohl(cToken->BeginTimestamp);
    cToken->EndTimestamp = ntohl(cToken->EndTimestamp);
}
...
void WriteTokenToBytestream(FILE *f, ClearToken *cToken,
                            EncryptedSecretToken sToken)
{
    char *buf;
    int len;

    len = sizeof(ClearToken) + sizeof(EncryptedSecretToken);
    buf = malloc(len);
    export(cToken);
    memcpy(buf, (char *)cToken, sizeof(ClearToken));
    memcpy(buf + sizeof(ClearToken), sToken, sizeof(EncryptedSecretToken));
    import(cToken);

    fputs("*** Coda Token ***", f);
    coda_base64_encode(f, buf, len);
    free(buf);
}
...
----

FWIW we are running the same code on ARM.

The only problem we hit there was not with clog but with memory alignment
in cfs (which we then fixed).

> - there was some sort of configuration daemon serving up
> configuration data for the authentication from a tcp port which
> introduces a whole slew of unevaluated security concerns. Since it
> is http how is the configuration secured from MitM attacks, can

There is no http involved (it would be a lot of overhead for no reason).

This data does not have to be secured against MitM; the worst that can
happen is disruption of communication / DoS, which a MitM can always
achieve anyway.

> someone replace authentication with an auth-null forcing the
> clients to suddenly send out cleartext passwords, etc. And if it
> is secured (HMAC?), how do client know how to properly vet the
> signature, etc.

There is no support for sending cleartext passwords, so there is no danger
of that sort either. It is an explicit design choice: authentication methods
that do not provide reliable server authentication to the client are
forbidden. Actually, any method which could send a password to the server
should be banned as well, and none of the built-in ones does, of course.

> As far as the other changes, I haven't seen any of them but I can still
> throw out some unsolicited comments.
>
> Allowing server IP addresses to change without client-side intervention
> has to introduce some new level of indirection above the IP layer and
> the most obvious one is to refer to servers using their DNS names. This
> works nicely with the 'new' RPC2 call I added almost 10 years ago.
>
> https://github.com/cmusatyalab/coda/commit/86d97f6db3ac13d6d83f47636e23891f9380f537

Even something which looks obvious can be deceiving.

For reference, here is an excerpt from my 2010 letter to this list,
outlining one of the problems with it:
----
...

Even today, a server's invariant "identity" in a realm is not its ipv4
address but its server id.

...

Today we translate them to ip addresses (in the wrong place, but that is
another part of the story) without any possibility of the addresses
expiring, even though their exact values are not guaranteed to last forever
nor are they relevant for realm consistency.

With ViceGetVolumeLocation() we make a similar mistake (!): we do
not allow for the returned string to be invalidated "properly". There
should have been some kind of validity promise (say a TTL) included.
The possibility to change host names in a realm setup without confusing
the clients is a Good Thing (TM), so why forbid this by design?
...
----

So this would replace the "constant ip numbers" constraint with
"constant host names"; why keep such a constraint at all? From a system
administration perspective this easily becomes a PITA in the long run,
when networking changes, dns domain names change and naming policies
change as well.

The only string which really has to remain stable is the realm name,
as the handle for reaching the service and the data.

Actually, the whole concept of a "hostname" is broken; the purpose of DNS
is to map _service_ names to endpoints (ip+port) of the corresponding
daemons.
Hardware units or OS instances (aka hosts) are irrelevant for this
(even if they looked "natural" long ago, when IP networking was in
its infancy and uucp was state-of-the-art).

That's why we resolve the services, which are the numbered server
instances in a realm, to endpoints via SRV records, the exact tool for
the purpose.

This fixed the ip-dependency issue while keeping compatibility and without
adding complexity. (The commit introducing the change makes the code grow
by 105 lines, comparable to the much more limited-in-scope commit for
ViceGetVolumeLocation(), which added 85 lines.)
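
For illustration only, a minimal sketch of such an SRV lookup with the
standard libresolv API (link with -lresolv); the "_codasrv._udp" label
and the function name are assumptions for the example, not necessarily
what our code does:
----
#include <string.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>

/* Sketch only: resolve one SRV record, e.g.
 * "_codasrv._udp.server3.example-realm.org", to a target host and port. */
static int srv_lookup(const char *name, char *host, size_t hostlen,
                      unsigned short *port)
{
    unsigned char answer[NS_PACKETSZ];
    ns_msg msg;
    ns_rr rr;
    int len, i;

    len = res_query(name, ns_c_in, ns_t_srv, answer, sizeof(answer));
    if (len < 0 || ns_initparse(answer, len, &msg) < 0)
        return -1;

    for (i = 0; i < ns_msg_count(msg, ns_s_an); i++) {
        if (ns_parserr(&msg, ns_s_an, i, &rr) < 0 ||
            ns_rr_type(rr) != ns_t_srv)
            continue;

        /* SRV rdata layout: priority(2) weight(2) port(2) target(name) */
        *port = ns_get16(ns_rr_rdata(rr) + 4);
        if (ns_name_uncompress(ns_msg_base(msg), ns_msg_end(msg),
                               ns_rr_rdata(rr) + 6, host, hostlen) < 0)
            return -1;
        return 0; /* take the first record; real code honours priority */
    }
    return -1;
}
----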

> The response to that RPC call even includes the port the server is
> running at. The only part that has not been implemented is the client
> side and it only got stalled because the available async DNS libraries
> were either not LWP compatible, or had a non-GPL compatible license.
> Synchronous DNS blocks the complete client process causing various kinds
> of connectivity/reliability issues, so I am interested to see how you
> solved that part of the problem.

We did not see any practical problems or extra stalls caused by
synchronous DNS resolution. Definitely not an issue in our workloads.
Of course nothing precludes changing to asynchronous resolution if
needed but the effort and possible dependencies are hardly justified.

If nothing else, the presence of callbacks is much more of a concern.
In a file system where clients go disconnected as a matter of normal
operation, callbacks do not give much benefit, while callback
breaking _does_ cause stalls.

> > The best would be to merge the changes upstream. They are vital for
> > usability.
>
> Unless you have managed to fix some horrible reintegration/resolution
> conflict-introducing bug, I respectfully disagree with the 'vital' part
> of that statement. But merging upstream is for many things, especially
> bug fixes, always a good idea.

It would be quite sad if the qualities which are vital for us (and
plausibly for someone else too, even if not for CMU) were to be neglected.

> Well, github has the concept of pull requests. The best method seems to
> be to have small, independent, single purpose changes on 'feature
> branches' that can be easily read, reviewed and merged.

Ok.

> Right now I'm mostly concerned with cleaning things up and addressing
> some bugs and possible security issues I believe I have identified.

Great, thanks for looking into this.

> Anything that removes lines of code, and/or reduces overall complexity
> will be much easier to merge because it reduces the amount of cleanup
> necessary.

Looking forward to it. As a low-hanging fruit, would you object to
removing the hack which supports running "numbered" server instances on
the same computer, with different assigned ips and dns names?
I guess the motivation for it disappeared over 10 years ago when you
implemented coalescing of free space in RVM and made the servers a lot
more scalable.

> ps. About the 'as yet-unpublished' part of your comment, I do hope you
> are adhering to the GPL license, as you distribute binaries of Coda with
> extensive changes you are expected to make the corresponding source
> available. In fact as a company, you would be expected to make source
> available even if there are no extensive changes.

You have no reason to worry. When we distribute the binaries, we distribute
the source side-by-side with them.

At the same time, publishing one's development history (like a git
repository) is not something which the GPL mandates :)

Regards,
Rune

Jan Harkes

Apr 21, 2016, 10:58:09 AM
On Thu, Apr 21, 2016 at 02:16:02PM +0200, u-m...@aetey.se wrote:
> We did not see any practical problems or extra stalls caused by
> synchronous DNS resolution. Definitely not an issue in our workloads.
> Of course nothing precludes changing to asynchronous resolution if
> needed but the effort and possible dependencies are hardly justified.

I am literally fuming reading this. I don't know if you remember, but
several years ago you had me chasing down a server 'deadlock' issue
related to callbacks, which I was unable to reproduce. I spent about
a week on this, going back and forth with new patches: trying to turn
readlocks into writelocks in the hope it would avoid some possible lock
ordering issue, adding global timeouts to the callback break multirpc
calls, and other workarounds....

You were running your servers with clients that were doing ******
synchronous DNS lookups? And you don't think that would be causing any
practical problems or extra stalls?

> If not otherwise, the presence of callbacks is much more of a concern.
> In a file system where clients go disconnected as a matter of normal
> operation, callbacks do not give much benefit, at the same time callback
> breaking _does_ cause stalls.

And on top of that you are blaming the callbacks for your woes.

Sorry, but I have to cool down before I can respond to any of the rest
of your email. In the meantime I'll be busy finding and reverting the
patch that introduced a global timeout for callback rpcs and any other
possible regressions that may have been introduced.

Jan

u-m...@aetey.se

Apr 21, 2016, 11:48:38 AM
Hello Jan,

On Thu, Apr 21, 2016 at 10:57:21AM -0400, Jan Harkes wrote:
> On Thu, Apr 21, 2016 at 02:16:02PM +0200, u-m...@aetey.se wrote:
> > We did not see any practical problems or extra stalls caused by
> > synchronous DNS resolution. Definitely not an issue in our workloads.
> > Of course nothing precludes changing to asynchronous resolution if
> > needed but the effort and possible dependencies are hardly justified.
>
> I am literally fuming reading this. I don't know if you remember, but
> several years ago you had me chasing down a server 'deadlock' issue
> related to callbacks, which I was unable to reproduce and I spend about
> a week on this going back and forth with new patches trying to turn
> readlocks into writelocks in the hope it would avoid some possible lock
> ordering issue, adding global timeouts to the callback break multirpc
> calls and other workarounds....
>
> You were running your servers with clients that were doing ******
> synchronous DNS lookups?

No.

(we switched to the DNS-based server lookups in 2014)

> And you don't think that would be causing any
> practical problems or extra stalls?
>
> > If not otherwise, the presence of callbacks is much more of a concern.
> > In a file system where clients go disconnected as a matter of normal
> > operation, callbacks do not give much benefit, at the same time callback
> > breaking _does_ cause stalls.
>
> And on top of that you are blaming the callbacks for your woes.

On top of what?

> Sorry, but I have to cool down before I can respond to any of the rest

This would certainly not hurt. :)

> of your email. In the mean time I'll be busy finding and reverting the
> patch that introduced a global timeout for callback rpcs and any other
> possible regressions that may have been introduced.

Oh, thanks for looking. It would be nice if getting rid of such stalls is
possible.

(When you have several hundred clients holding callbacks on a volume or
on a common directory, some of them will definitely be disconnected or dead
when you happen to update something, and then it takes time to break their
callbacks...)

> Jan

Best regards,
Rune

u-m...@aetey.se

Apr 21, 2016, 12:13:46 PM
On Thu, Apr 21, 2016 at 05:47:33PM +0200, u-m...@aetey.se wrote:
> On Thu, Apr 21, 2016 at 10:57:21AM -0400, Jan Harkes wrote:
> > several years ago you had me chasing down a server 'deadlock' issue
> > related to callbacks, which I was unable to reproduce and I spent about
> > a week on this going back and forth with new patches trying to turn
> > readlocks into writelocks in the hope it would avoid some possible lock
> > ordering issue, adding global timeouts to the callback break multirpc
> > calls and other workarounds....
> >
> > You were running your servers with clients that were doing ******
> > synchronous DNS lookups?
>
> No.
>
> (we switched to the DNS-based server lookups in 2014)

The troubleshooting of server meltdowns which you refer to happened with
Coda ~6.9.4, in the years 2008-2009.

The problem remained unsolved but it gradually ceased, I guess due to us
phasing out old installations, both the old Coda clients and OS releases;
some kernel networking misbehaviour might have been the culprit, or
possibly a trigger.

Hope this makes you feel better;
there is no need to suspect us of wrongdoing.

Regards,
Rune

Jan Harkes

Apr 21, 2016, 2:26:17 PM
On Thu, Apr 21, 2016 at 02:16:02PM +0200, u-m...@aetey.se wrote:
> The only problem we hit there was not with clog but with memory alignment
> in cfs (which we then fixed).

Looking forward to seeing patches related to that.


> > - there was some sort of configuration daemon serving up
> > configuration data for the authentication from a tcp port which
> > introduces a whole slew of unevaluated security concerns. Since it
> > is http how is the configuration secured from MitM attacks, can
>
> There is no http involved (it would be a large overhead without any reason).
>
> This data does not have to be secured against MitM, the worst which can
> happen is disruption of communication / DOS which MitM always can achieve.

That depends on what the configuration contains. If it lists anything like
a set of suggested authentication methods, then you are exposed to all the
security bugs in any of the mechanisms the client supports, even when
your configuration has disabled them.

> > Allowing server IP addresses to change without client-side intervention
> > has to introduce some new level of indirection above the IP layer and
> > the most obvious one is to refer to servers using their DNS names. This
> > works nicely with the 'new' RPC2 call I added almost 10 years ago.
> >
> > https://github.com/cmusatyalab/coda/commit/86d97f6db3ac13d6d83f47636e23891f9380f537
>
> Even something which looks obvious can be deceiving.
>
> For reference here is an excerpt from my letter to this list, 2010,
> outlining one of the problems with it:

> ----
> ...
>
> Even today, a server's invariant "identity" in a realm is not it's ipv4
> address but it's server id.

No, server ids are an implementation artifact that has crept into parts
that have become visible to clients (like in the volume ids). Ideally we
wouldn't have (or pay attention to) server ids, which would simplify things
and allow for such things as migrating volumes from one server to another
for load balancing purposes.

> With ViceGetVolumeLocation() we make a similar mistake (!), we do
> not allow for the returned string to be invalidated "properly". There
> should have been some kind of a validity promise (say a TTL) included.
> A possibility to change host names in a realm setup without confusing
> the clients is a Good Thing (TM), why forbid this by design?

GetVolumeLocation returns a DNS hostname, and DNS records have a TTL.
Nothing here is forbidden by design; we just don't duplicate something
that is already there.

> So this would replace the "constant ip numbers" constraint with
> "constant host names", why keep such a constraint at all? From system
> administration perspective this easily becomes a PITA in the long run
> when networking changes, dns domain names change and naming policies
> change as well.

I don't care if you want to name (or cname) your servers 1.mydomain.com,
2.mydomain.com, etc. There is no difference there.

> Actually, the whole concept of a "hostname" is broken, DNS purpose
> is to map _service_ names to endpoints (ip+port) of the corresponding
> daemons.
> Hardware units or OS instances (aka hosts) are irrelevant for this
> (even if they looked "natural" long ago when IP networking was in
> its infancy and uucp was state-of-the-art).
>
> That's why we resolve the services, which are the numbered server
> instances in a realm, to endpoints via SRV records, the exact tool for
> the purpose.

Actually, if you really read your argument, you should be pushing the
volume name -> endpoint mapping to the SRV records. Skip volume ids,
server ids, and everything else in between. A client wants to connect to
a volume wherever it is. There is nothing special about the server id
number.

It just seems like we're having the same discussions over and over
again, and yet it always results in us disagreeing and you just
doing your thing anyway:

http://coda.cs.cmu.edu/maillists/codalist/codalist-2014/9334.html
  - That adding an alternate 'server-id' namespace based on an
    internal implementation artifact is not a good idea aside from the
    fact that, as is, it only allows for at most 253 servers in a realm.
  - The fact that synchronous DNS calls are bad on the client.

http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2014/9338.html
  - A mention by me to put volume -> endpoint lookups in DNS.

http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2014/9373.html
  - My expression of annoyance that although you keep talking about
    patches, I have not actually seen much especially in areas that would
    help move things (like the cvs -> git conversion) forward.

http://www.coda.cs.cmu.edu/maillists/codalist/codalist-2014/9322.html
  - Nice, I just found this updateclnt fd-leak patch from you, applied!

> > Anything that removes lines of code, and/or reduces overall complexity
> > will be much easier to merge because it reduces the amount of cleanup
> > necessary.
>
> Looking forward to it. As a low-hanging fruit, would you object to
> removing the hack which supports running "numbered" server instances on
> the same computer, with different assigned ips and dns names?

Yes, that would be a candidate. We haven't used it; it relied on binding
multiple IP addresses to an interface and then explicitly binding
each server to one of those addresses. It was useful at the time.

> I guess the motivation for it disappeared over 10 years ago when you
> implemented coalescing of free space in RVM and made the servers a lot
> more scalable.

More aggressive RVM defragmentation definitely helped some, but what
helped more is that the average file size has gone up considerably.
People aren't so much storing more files as they are storing much larger
files. As an example, over the same time period digital cameras went
from low-res 640x480 jpeg compressed images to DSLR cameras with 40-50
megapixel RAW images.

Aside from that, several of the Coda limits have probably prevented the
necessary growth: directory size limitations, client cache size limits,
number of volumes per server and number of servers per realm.

Finally with virtualization and containerization it has become easier to
just deploy multiple Coda file servers in separate guest VMs on a
rackmount server instead of trying to cram multiple servers together on
the same host.

Jan
