Am I totally missing something, or hasn't binary RPC of that style
been dead ever since SUNRPC? Hasn't the eulogy been delivered by
CORBA? Haven't folks realized that S-exprs are really quite good for
data serialization in heterogeneous environments (especially when
they are called JSON), and that you really shouldn't be made to
figure out how large the integer on host foo is?
Thanks,
Roman.
Just because the world is full of XML zealots does not mean that XDR (ONC
RPC), NDR (DCE RPC) and others are dead. Neither of these makes you
figure out how large the integer on host foo is (that's half the point).
I personally like the XDR standard (I can do without NDR and BER/DER).
> Roman.
Tim Newsham
http://www.thenewsh.com/~newsham/
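For concreteness, XDR's answer to the host-word-size question is blunt: an int is always four bytes big-endian, a hyper is always eight, and strings are counted and padded to four-byte boundaries, whatever the host ABI. A rough hand-rolled sketch in Python (not the rpcgen output under discussion):

```python
import struct

def xdr_int(v):
    # XDR "int": exactly 4 bytes, big-endian two's complement,
    # regardless of the host's native int size.
    return struct.pack(">i", v)

def xdr_hyper(v):
    # XDR "hyper": the 64-bit case, exactly 8 bytes, big-endian.
    return struct.pack(">q", v)

def xdr_string(s):
    # XDR string: 4-byte length, then the bytes, padded with NULs
    # to a 4-byte boundary.
    b = s.encode("ascii")
    pad = (4 - len(b) % 4) % 4
    return struct.pack(">I", len(b)) + b + b"\x00" * pad

msg = xdr_int(-1) + xdr_hyper(2**40) + xdr_string("foo")
```

Both ends agree on the wire layout by definition, so "how large is the integer on host foo" never comes up; that is the half of the point referred to above.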
i'm sure i'll be flamed into nothingness for this, but a lot of what
the supposedly cutting edge does these days seems to me to be
exactly what we were taught only those reprobate mainframers
did in unix' heyday. the stuff i've seen recently seems fragile and
special purpose. i didn't see any ideas i recognized in the gfs
interview. i missed the organizing principles of gfs, except if they
are just to go fast. it all seems to hark back to the days
mainframers put disk addresses in their data.
if we're going back there, just take me out back and shoot me now.
i want to remember some progress in computer science.
i maintain just a flicker of hope that computer guys are bad at
history. the days of mach seemed especially dark, too.
- erik
Well, they sure smell that way. The only non-trivial application of
SUN RPC still in existence that I know of is NFS (plus its evil
twins). I would be very curious to know if you have examples of
interesting apps using both types of RPC.
> and others are dead. Neither of these makes you figure out how
> large the integer on host foo is (thats half the point).
What's your method of portably stuffing a *native* 64-bit int (the
one from the ILP64 type of ABIs)?
> I personally like the XDR standard (I can do without NDR and BER/DER).
Here's an honest question: do you really like debugging rpcgen emitted
code?
Thanks,
Roman.
> if we're going back there, just take me out back and shoot me now.
> i want to remember some progress in computer science.
The principal joy I derive from using Plan 9 (and I am quite new) is
that it is so well architected. By day I am a web developer (when I'm
employed) and I am just thoroughly sickened by the industry. It seems
to me that at some point, the cool guys that beat me up in middle
school somehow insinuated their way into technology and have hijacked
everything. Currently they seem to be proceeding to reinvent the same
things over and over again, on top of their own reinventions, for no
particular gain except to make new jargon and get their name on the
latest version. It's hard to even maintain a portfolio of work one's
done when the lifespan of a website is dwindling to one year or six
months. And that certainly reduces the incentive to give it everything
you've got and make something really good.
I was curious about ICE, because it seemed like they actually took
CORBA and said, what would this look like if it were implemented by
engineers rather than a committee? But I don't think the problem
facing the world is "how do I integrate all these languages, possibly
over the network?" but rather "how do I minimize all of this fucking
complexity and still get things done?" XML-RPC and SOAP are answers to
stupid questions, which is why we have REST, but the joke is that none
of the technologies REST relies on even implement enough of their own
specifications for it to really be usable. It strikes
me as ludicrous that you can go make a new Rails app and have to write
by hand (or find someone's plugin for) a login system, which won't
even happen at the HTTP level (which supports it), or the
RDBMS level (which also supports it), or the OS level (which again
supports it). How many times do we have to write username/password
logins before we're done and we can fucking move on? It's not like
anything is really different at any of these levels, just the way the
bytes get handed around. Then you have to be sure to use a database
abstraction layer, because everyone seems to have forgotten that the
database *is* an abstraction layer—this fact got lost in the shuffle
as it became too complex for anyone to really understand completely.
Yet nobody seems to be worried that the same thing might happen to
their little project as they pile code upon code and it slowly swells
up just like everything that came before or that it depends on. Before
long, they need an abstraction layer for their abstraction layer! Then
the schmucks come along and complain about performance and demand to
be taught every dirty trick to take their barely useful code and
remove all the clarity from it in the name of performance. Software
is cancer.
I don't know how long you've been a programmer, Erik, but I'm sure
it's far longer than I. From my perspective, no, there is no progress
in computer science, we're spending all our time trying to climb out
of the same muddy hole we've been in since Dijkstra was a newlywed and
Knuth was writing for MAD Magazine. CS has such advanced amnesia that
it can't remember what prompted the last question it was asked and so
it just repeats the question to itself over and over, never really
aware that it isn't an answer. We dig and dig but the problem only
gets worse because digging doesn't get you out of a muddy hole.
The things that keep me going are the pleasure I get from knowing a
lot of obscure stuff, talking to intelligent, knowledgeable people
such as comprise this mailing list, and (oddly) writing SQL. I
wouldn't say I have much hope for the industry in general unless
there's some sort of major restructuring. I try not to make that my
problem and instead share the things I know about with people I think
might benefit. So consider this the opposite of being flamed. I feel
exactly the same way you do. I hope that in some time I will be doing
as much for the good as you and others on this list that carry the
Plan 9 torch and endure my stupid questions (and now my rants.)
—
Daniel Lyons
Never mind disk addresses. We used to put whole channel programs into
our data. How else would you implement a fast disk search without
bothering the CPU? Just build a self-grepping file ...
i'm not familiar with Thrift, but i've done some stuff with google protobufs,
from which i think Thrift is inspired.
speaking of protobufs, i don't think they're a bad idea.
they're specifically designed to deal
with forward- and backward-compatibility, which is something
you don't automatically get with s-expressions, and if you're
dealing with many identically-typed records, the fact that each field
in each record is not string-tagged saves a lot of bandwidth
(and makes them more compressible, too).
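The saving is easy to see from the wire format: a protobuf field is a varint key (field number shifted left three bits, OR'd with a wire type) followed by the value, so a small integer field costs two bytes where the equivalent string-tagged JSON costs a dozen. A minimal sketch of the varint case (field numbers here are arbitrary examples):

```python
def varint(n):
    # Protobuf base-128 varint: 7 bits per byte, least-significant
    # group first, high bit set on every byte except the last.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def field(num, value):
    # key = (field number << 3) | wire type; wire type 0 is varint.
    return varint(num << 3) + varint(value)

# field 1 = 5: two bytes on the wire, versus twelve for '{"count": 5}'
rec = field(1, 5)
```

Because field names live in the .proto schema rather than in every record, repeating the same record type a million times repeats only the two-byte keys, which is also why the result compresses well.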
we don't use text for 9p, do we?
> and you
> really shouldn't be made to figure out how large is the integer
> on a host foo?
?
uriel
the difference being, 9p is the transport not
the representation of the data and 9p has
a fixed set of messages.
- erik
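For reference, the fixed framing being described here is tiny: every 9p message is size[4] type[1] tag[2] plus type-specific fields, all little-endian, with counted (not delimited) strings. A sketch of a Tversion frame (type 100 and NOTAG per the 9P2000 spec; the msize value is just an arbitrary choice):

```python
import struct

def s9(s):
    # 9P string: 2-byte little-endian length, then the bytes (no NUL).
    b = s.encode("utf-8")
    return struct.pack("<H", len(b)) + b

def tversion(msize=8192, version="9P2000"):
    # size[4] type[1] tag[2] msize[4] version[s]; Tversion is type 100,
    # and version negotiation uses tag NOTAG (0xffff).
    body = struct.pack("<BHI", 100, 0xFFFF, msize) + s9(version)
    return struct.pack("<I", 4 + len(body)) + body

msg = tversion()
```

The point about transport versus representation is visible here: 9p fixes the envelope and the small set of message types, and says nothing about what bytes a read or write carries.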
Also, 9p pretty obviously aims at file systems, where Thrift is a
generic RPC mechanism with stub compilers for bindings for several
languages.
I have not been able to convince coworkers that filesystem namespaces
are the way to go. I think they think it is too hard.
*shrug* you can lead a horse...
> - erik
>
>
i wasn't trying to defend the RPC mechanism, just the data format,
which i think can be fine when bandwidth is an issue.
doing everything with text in the filesystem is no magic bullet either.
many textual formats in plan 9 could do with being a little more
self-describing.
> I have not been able to convince coworkers that filesystem namespaces
> are the way to go. I think they think it is too hard.
i think it's undeniably true that writing a 9p/styx file server is
harder than writing a function to be called via some RPC mechanism.
personally, i think that the added value you get from having the filesystem
abstraction is well worth the cost, but it is an arguable point.
Funny, the problem I usually have is that people think file systems
are *too simple*: oh, no data types other than *byte stream* and
*directory*, and no type checking! We are all going to die!
People seem to have trouble believing something simple can do a job
that they have convinced themselves needs to be very complicated.
uriel
There's really no point in worrying about that.
The java 9p server was interesting in that it served functions as
files. I think we need a few more things like that, where we can begin
to stop thinking about the 9p part when designing a service.
I'd be very curious to know the details of the project where you
did find protobufs useful. Can you share, please?
> speaking of protobufs, i don't think they're a bad idea.
> they're specifically designed to deal
> with forward- and backward-compatibility, which is something
> you don't automatically get with s-expressions,
Not unless you can make the transport protocol take care of that for you.
HTTP most certainly can, and the same can be said about 9P. I truly believe
that such a division of labor is a good thing. Thrift and protobufs are
doing RPC, I think we can agree on that. So if we were to look at how
local procedure calls are dealing with the compatibility issue we get
dynamic linker symbol versioning. I don't like those, but at least they
are implemented in the right place -- the dynamic linker. What Thrift
feels like would be analogous to pushing that machinery into my
library.
And since we are on this subject of dynamic linking -- one of the
fallacies of introducing versioning was that it would somehow make
compatibility seamless. It didn't. In all practical cases it made things
much, much, much worse. Perhaps web services are different, but I can't
really pinpoint what makes them so: you are doing calls to the symbols,
the symbols are versioned and they also happen to be remote. Not much
difference from calling your trustworthy read@version1 ;-)
> and if you're
> dealing with many identically-typed records, the fact that each field
> in each record is not string-tagged counts saves a lot of bandwidth
> (and makes them more compressible, too).
That's really a YMMV type of argument. I can only speak from my personal
experience, where latency is much more of a problem than bandwidth.
> we don't use text for 9p, do we?
No, we don't. But, as erik pointed out, we use 9P as a transport
protocol. My biggest beef with how Thrift was positioned (and
that's why I'm so curious to know the details of your project)
is the fact that they seem to be pushing it as a better JSON.
At that level -- you already have a transport protocol, and
it just doesn't make a lot of sense to represent data in such
an unfriendly manner. And representation is important. After all,
REST stands for *representational* state transfer, doesn't it?
I certainly wouldn't object to using Thrift as a poor man's way
of implementing an equivalent of 9P (or any kind of protocol,
for that matter) cheaply.
Hm. Now that I've mentioned it, perhaps trying Thrift out as
an implementation mechanism for 9P and comparing the result
with the handwritten stuff would be a good way for me to
really see how useful it might be in practice.
> > and you
> > really shouldn't be made to figure out how large is the integer
> > on a host foo?
I firmly believe in self-descriptive data. If you have an integer
you have an integer. You shouldn't burden the representation layer
with encoding issues. Thus:
{ integer: 12312321...very long stream of digits...123123 }
is a perfectly good way to send the data. You might unpack it
into whatever makes sense on the receiving end, but please don't
make me suffer at the data representation layer.
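That works out of the box in any language with bignums; for example, Python's json module round-trips an integer of any size without sender and receiver ever agreeing on a word width (the value below is made up for illustration):

```python
import json

# The wire carries digits, not a fixed-width word; the receiver
# unpacks into whatever its language offers (here, a Python bignum).
wire = json.dumps({"integer": 2**200 + 1})
val = json.loads(wire)["integer"]
```

(The usual counter-argument is a receiver without bignums, e.g. JavaScript's Number, which silently loses precision on a value like this.)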
Thanks,
Roman.