
Plan9 development


Admiral Fukov

Nov 4, 2010, 5:36:11 AM
I'm looking at

http://plan9.bell-labs.com/sources/plan9/sys/src/

and I noticed that most of the distribution hasn't been updated in
years.
Is the development of plan 9 abandoned?

Lucio De Re

Nov 4, 2010, 5:50:47 AM

Why fix what's perfect? ;-)

++L

Steve Simon

Nov 4, 2010, 6:22:59 AM
> and I noticed that most of the distribution hasn't been updated in
> years.
> Is the development of plan 9 abandoned?

No, it is not; a lot depends on which file(s) you are looking at.
There has been much work recently on the ARM port (GuruPlug and BeagleBoard).
Other changes happen as required (or as patches are submitted).

What were you expecting to see?

-Steve

Brantley Coile

Nov 4, 2010, 7:32:32 AM
Unlike many open source systems, Plan 9 is stable and very reliable. It doesn't get changed just for fun.

iPhone email

dexen deVries

Nov 4, 2010, 8:18:21 AM
On Thu, Nov 04, 2010 at 09:36:11AM +0000, Admiral Fukov wrote:
> I'm looking at
>
> http://plan9.bell-labs.com/sources/plan9/sys/src/
>
> and I noticed that most of the distribution hasn't been updated in
> years.
> Is the development of plan 9 abandoned?


Please note that the timestamps in the listings describe just the directory or
file itself; they are not computed recursively. In particular, a directory has
its timestamp updated only when a new entry is added -- not when an existing
entry is merely modified -- which is a rare event for top-level directories.
If you browse a bit deeper, there are plenty of newer timestamps; mind the
files rather than the directories.
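
To see the distinction directly -- a sketch, assuming sources is mounted at
/n/sources via 9fs sources:

	; ls -dl /n/sources/plan9/sys/src/cmd		# the directory: mtime moves only when an entry comes or goes
	; ls -l /n/sources/plan9/sys/src/cmd/cat.c	# a file: mtime reflects its own last change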

Anyway -- as others wrote before -- the core of P9 is pretty stable :)

--
dexen deVries


``One can't proceed from the informal to the formal by formal means.''

Venkatesh Srinivas

Nov 4, 2010, 8:50:55 AM
In the ~8 years since the 4th edition release, there has been pretty continuous work on Plan 9, both at Bell Labs and in the 9fans community; a nightly ISO is constructed and uploaded. Changes have been incremental -- a tool appears, bugs are fixed, etc.

http://acm.jhu.edu/git/plan9 has a git tree of the sources from December 2002 through February 2009, if you'd like to view changes in that form. http://bitbucket.org/rminnich/sysfromiso has a mercurial tree of more recent changes, if that's what you're looking for.
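
Both are one command away -- a sketch, assuming those URLs clone directly and
you have git and hg installed:

	git clone http://acm.jhu.edu/git/plan9
	hg clone http://bitbucket.org/rminnich/sysfromiso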

-- vs

David Leimbach

Nov 4, 2010, 11:40:59 AM
There's a plan9changes Google group, I believe, that will let you see the commits that have been going in.

Plan 9 is satisfying all its users' needs at the moment. There have been proposals to make it more Linuxy in the past, but we know where Linux is if we want it.

That's not to say people aren't exploring new ways to develop it and make it a better Plan 9.  



Stanley Lieber

Nov 4, 2010, 11:57:55 AM
On Thu, Nov 4, 2010 at 10:39 AM, David Leimbach <lei...@gmail.com> wrote:
>
> There's a plan9changes Google group, I believe, that will let you see the
> commits that have been going in.

http://groups.google.com/group/plan9changes/topics

doesn't show any updates since July 2008.

Is there another way to track updates besides simply running pull?


> Plan 9 is satisfying all its users' needs at the moment.

!

-sl

erik quanstrom

Nov 4, 2010, 12:02:42 PM

isn't the posting name "Admiral Fukov" enough to clue us in
that this is just a troll? that, and the fact that we get this one about
once a year?

- erik

John Floren

Nov 4, 2010, 12:03:36 PM

Watch Ron's repository, which Venkatesh posted earlier.
http://bitbucket.org/rminnich/sysfromiso

ron minnich

Nov 4, 2010, 12:41:07 PM
ignoring the troll, but for the rest of you here: plan 9 is *very* active.

If you're the kind of person who understands that we don't need to
change 'cat' any further, then you understand the work that is going
on.

If you're the kind of guy who can't resist changing things that don't
need changing, then you won't get it; perhaps you'd be better off
working on libtool. But Plan 9 is far from dead.

sysfromiso makes that clear. It's a great way to watch the
improvements going in. Hats off to the folks at Bell Labs for keeping
it rolling.

ron

Stanley Lieber

Nov 4, 2010, 12:59:20 PM
On Thu, Nov 4, 2010 at 11:01 AM, John Floren <slawm...@gmail.com> wrote:
>
> Watch Ron's repository, which Venkatesh posted earlier.
> http://bitbucket.org/rminnich/sysfromiso

On Thu, Nov 4, 2010 at 11:39 AM, ron minnich <rmin...@gmail.com> wrote:
>
> sysfromiso makes that clear. It's a great way to watch the
> improvements going in. Hats off to the folks at Bell Labs for keeping
> it rolling.

Thanks, guys, I missed Venkatesh's earlier message somehow.

-sl

Don Bailey

Nov 4, 2010, 1:03:12 PM
Request to add "If you're the kind of person who understands that we

don't need to change 'cat' any further, then you understand the work
that is going on." to fortune.

Ron++

D

Admiral Fukov

Nov 4, 2010, 1:15:24 PM
Some people prefer privacy; others are just assholes who want to pick fights because they are bored.

Asking a perfectly valid question is not trolling, unlike your reply, which adds nothing to the subject at hand.


Admiral Fukov

Nov 4, 2010, 1:15:51 PM

Thank you for the links.

Jeff Sickel

Nov 4, 2010, 1:32:59 PM

On Nov 4, 2010, at 11:39 AM, ron minnich wrote:

> If you're the kind of person who understands that we don't need to
> change 'cat' any further, then you understand the work that is going
> on.

But why isn't the source for mk (3929 lines w/ headers, okay 4661 with mkfile and acid) at least as long as all that Java in the ant distribution (213151 lines)? That's a lot of catching up to do. The market has clearly spoken, and it appears that more lines dominates the soup.

> sysfromiso makes that clear. It's a great way to watch the
> improvements going in. Hats off to the folks at Bell Labs for keeping
> it rolling.

Thanks for putting sysfromiso together. It does help for when I'm looking in from a browser.

-jas


andrey mirtchovski

Nov 4, 2010, 1:33:07 PM
> Some people prefer privacy; others are just assholes who want to pick fights
> because they are bored.
>
> Asking a perfectly valid question is not trolling, unlike your reply, which
> adds nothing to the subject at hand.

so, no bait and switch then? was your question answered satisfactorily?

Admiral Fukov

Nov 4, 2010, 2:05:52 PM
I had to google "bait and switch" :)
Nice idiom.

Yes, my question was answered by dexen and Venkatesh.




David Leimbach

Nov 4, 2010, 4:28:35 PM
Maybe.  If it's a troll, it is pointing out something that might not be obvious to people on the outside.

 People are still using Plan 9
 People are still working on Plan 9


Charles Forsyth

Nov 4, 2010, 6:15:58 PM
>But why isn't the source for mk (3929 lines w/ headers, okay 4661 with mkfile and acid)
>at least as long as all that Java in the ant distribution (213151 lines)?
>That's a lot of catching up to do.
>The market has clearly spoken, and it appears that more lines dominates the soup.

one interesting thing about that example is that if it were done again
for the Plan 9 environment, mk might well be even smaller, since
some of the existing functionality isn't really used,
or might be achieved by simpler mechanisms, or with functionality
added instead by further composition with other programs;
alternatively, it might be redone in a radically different way.

either way you probably wouldn't get an entire O'Reilly book out of it though.

Venkatesh Srinivas

Nov 4, 2010, 9:56:05 PM
> one interesting thing about that example is that if it were done again
> for the Plan 9 environment, mk might well be even smaller, since
> some of the existing functionality isn't really used,
> or might be achieved by simpler mechanisms, or with functionality
> added instead by further composition with other programs;
> alternatively, it might be redone in a radically different way.


How would you rewrite mk to be simpler?

Thanks,
-- vs 

Bruce Ellis

Nov 4, 2010, 11:52:35 PM
mash has a make builtin. very brief, as all the shell type stuff in mk
goes away..

brucee

Lucio De Re

Nov 5, 2010, 3:16:58 AM
On Fri, Nov 05, 2010 at 02:50:22PM +1100, Bruce Ellis wrote:
>
> mash has a make builtin. very brief, as all the shell type stuff in mk
> goes away..
>
I seem to remember that the mash source was lost?

++L

Bruce Ellis

Nov 5, 2010, 3:58:17 AM
no. it was the last thing i wrote for the bidness unit.

brucee

Eric Van Hensbergen

Nov 5, 2010, 9:35:24 AM
Quite right:
http://code.google.com/p/inferno-os/source/browse/#hg/appl/cmd/mash

Although, no doubt brucee has a new, improved version not fit for mere
mortals to gaze upon.

-eric

C H Forsyth

Nov 5, 2010, 10:59:49 AM
> http://code.google.com/p/inferno-os/source/browse/#hg/appl/cmd/mash

that one is indeed fairly old, much as we received it, except for
changes to fit any changes in the environment, but

http://www.vitanuova.com/inferno/man/1/mash.html
and
http://www.vitanuova.com/inferno/man/1/mash-make.html

give you some idea, especially the latter, in this context.

dexen deVries

Nov 5, 2010, 1:09:22 PM
On Friday 05 of November 2010 14:31:01 Eric Van Hensbergen wrote:
> Quite right:
> http://code.google.com/p/inferno-os/source/browse/#hg/appl/cmd/mash
>
> Although, no doubt brucee has a new, improved version not fit for mere
> mortals to gaze upon.


An honest question: what is the rationale for merging the functionality of make
and shell into one? Is mash meant to be the default interactive shell?

--
dexen

Nick LaForge

Nov 5, 2010, 1:21:36 PM
> An honest question: what is the rationale for merging the functionality of make
> and shell into one?

Use your imagination....

Nick

dexen deVries

Nov 5, 2010, 1:34:43 PM

Tried, failed.
To me, make is a tool for generating an acyclic, directed graph of
dependencies between build steps from some explicit and some wildcard rules
-- and then traversing it in a sensible order. How's that for daily use shell?


Perhaps something about `doing a reasonable action for every target file named
on the command line'?

--
dx

andrey mirtchovski

Nov 5, 2010, 1:42:19 PM
> To me, make is a tool for generating an acyclic, directed graph of
> dependencies  between build steps from some explicit and some wildcard rules
> -- and then traversing it in a sensible order. How's that for daily use shell?

you're focused too narrowly on building. a sequence of commands piping
output to each other is also a directed acyclic graph.
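
For instance -- a sketch with invented file names -- the same chain can be
written as a pipeline, which the shell builds and traverses implicitly, or as
mk rules with the nodes and edges spelled out:

	; grep -i error log | sort | uniq -c > counts

	counts: sorted
		uniq -c sorted > counts
	sorted: matches
		sort matches > sorted
	matches: log
		grep -i error log > matches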

dexen deVries

Nov 5, 2010, 1:58:25 PM

A bit in the style of plumber, one would have a set of make-like rules defined
in some $home/lib/mash, and mash would automagically apply them when the
target(s) match?

Currently, shell use consists of indicating data sources and the actions to be
taken. With mash it would be more about indicating desired targets, to be
created with the mash rules in the current context, right?


On Friday 05 of November 2010 18:45:17 David Leimbach wrote:
> The possibilities are finite!

and so is the memory in a Turing machine...
*mumbles something about the Turing tar-pit*


David Leimbach

Nov 5, 2010, 2:30:05 PM
On Fri, Nov 5, 2010 at 10:32 AM, dexen deVries <dexen....@gmail.com> wrote:
> On Friday 05 of November 2010 18:18:44 Nick LaForge wrote:
> > > An honest question: what is the rationale for merging the functionality of
> > > make and shell into one?
> >
> > Use your imagination....
>
> Tried, failed.
> To me, make is a tool for generating an acyclic, directed graph of
> dependencies between build steps from some explicit and some wildcard rules
> -- and then traversing it in a sensible order. How's that for daily use shell?


Why is a shell that can generate acyclic digraphs of dependencies bad?  Someone clearly found a use for it at some point or it wouldn't have been done.

I guess one could try to use make as an init system for services in a configuration, but I don't see why not having those features in a shell is better than having them.

I do not currently use mash; however, I wonder if it's suitable as a startup mechanism for services just after booting a kernel, to get stuff started in the right order, without lavish attempts at building up those dependencies in a script language that can't express acyclic digraphs of dependencies natively.
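
A hypothetical sketch of that idea in plain mk -- the service names are
invented, V marks virtual targets, and each prerequisite is a service that
must already be up:

	boot:V: httpd
	net:V:
		ip/ipconfig
	dns:V: net
		ndb/dns -r
	httpd:V: dns
		ip/httpd/httpd

Running mk boot would then start net, dns, and httpd in dependency order.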
 

> Perhaps something about `doing a reasonable action for every target file named
> on the command line'?

The possibilities are finite!
 

--
dx


roger peppe

Nov 5, 2010, 2:38:44 PM
On 5 November 2010 18:14, erik quanstrom <quan...@labs.coraid.com> wrote:
>> > -- and then traversing it in a sensible order. How's that for daily use
>> > shell?
>> >
>> >
>> Why is a shell that can generate acyclic digraphs of dependencies bad?
>>  Someone clearly found a use for it at some point or it wouldn't have been
>> done.
>
> it is silly bloat if it's not an essential part of the shell.
> but (as andrey has noted)  if you were to replace the
> machinery behind these normal shell dag builders
> ('&', '&&', '||', if, '|', 'and '`{}') with something general
> enough to replace mk, you'd be on to something.

i did a mash-inspired version of mk as an inferno shell module once.
it required no new syntax (although it could be confused by
files named ":"...)

part of the problem was that it's not that useful to have
a "mkfile"-like syntax that's only understood on one system.

we ended up porting mk.

Bakul Shah

Nov 5, 2010, 2:45:06 PM

Some random thoughts triggered by Charles's message:

1. The idea is to map mk to a special filesystem -- "mkfs"
takes on a whole new meaning! One would overlay mkfs on
a source tree.

We are going to build foo:

	mkdir foo foo/src
	<put foo's source files in foo/src/>
	cd foo

Specify a build rule for command foo:

	echo 8c src/*.c -o .build > .rule

Specify dependencies a, b & c of foo:

	ln -s ../^(a b c) .dep/

Build foo:

	ls .build

This checks dependencies a, b, c (if they are directories,
checks a/.build etc.) and builds them. This will need to
be fleshed out the most... Default rules can be added
with something like plumb....

Install foo:

	cp .build /bin/foo

Remove temporary files:

	rm .obj/* .build

I used symlinks to point to dependencies, but maybe there
is a smarter way.

A default .rule can be derived depending on what is in src/

Initial .dep/ may be derived by mkfs running a dependency
deriving program.

Objects in .obj/ may be similarly derived.

The utility, if any, is that a small set of mechanisms is
used, coupled with a simple convention.

This was a fun little exercise (I spent more time on
writing this email), so I am sure there are lots of holes.
Probably not worth building; it'd be too slow.

2. Factor out a way to track changes in any fs and trigger
commands. This would probably obviate the need to build a
number of special purpose file systems.

erik quanstrom

Nov 5, 2010, 2:59:24 PM
> > -- and then traversing it in a sensible order. How's that for daily use
> > shell?
> >
> >
> Why is a shell that can generate acyclic digraphs of dependencies bad?
> Someone clearly found a use for it at some point or it wouldn't have been
> done.

it is silly bloat if it's not an essential part of the shell.


but (as andrey has noted) if you were to replace the
machinery behind these normal shell dag builders
('&', '&&', '||', 'if', '|', and '`{}') with something general
enough to replace mk, you'd be on to something.

personally, i think getting the syntax right would be
the hard part.

> I guess one could try to use make as an init system for services in a
> configuration, but I don't see why not having those features in a shell is
> better than having them.

that's been done with mk for linux by a Rose-Hulman
student. it was faster than some of the fancy purpose-
built tools due to better parallelism. see the list archives.

- erik

erik quanstrom

Nov 5, 2010, 3:09:44 PM
> > ('&', '&&', '||', 'if', '|', and '`{}') with something general
> > enough to replace mk, you'd be on to something.
>
> i did a mash-inspired version of mk as an inferno shell module once.
> it required no new syntax (although it could be confused by
> files named ":"...)

what you did was very cool, but iirc this was in addition
to, not replacing the standard && || ... bits. one
could build pipelines and specify command order in one
unified way, no?

- erik

Eric Van Hensbergen

Nov 5, 2010, 4:40:01 PM

Perhaps Brzr could post the paper -- perhaps that's what was lost....

-eric

Charles Forsyth

Nov 5, 2010, 8:51:33 PM
> An honest question: what is the rationale for merging the functionality of make
> and shell into one?

at the time, people were pushing more and more scripting or programming language
functionality into accretions of the original make, and someone observed that it might be better
instead to put a small amount of support for dependencies into a proper scripting language.

Bruce Ellis

Nov 5, 2010, 10:24:11 PM
i can answer that one easily. that's why it's called mash rather than
"random marketting name". the intention was to replace plan9 rc with a
shell that was maintainable and had loadable modules. i wrote it in
limbo to show it works, damned well. the first requirement was a make
loadable. it's not built into mash, it's loadable. a few pages of code
that uses the shell rather than mk's builtin shell like stuff.

brucee

dexen deVries

Nov 6, 2010, 4:27:49 PM
On Saturday 06 of November 2010 03:20:55 Bruce Ellis wrote:
> i can answer that one easily. that's why it's called mash rather than
> "random marketting name". the intention was to replace plan9 rc with a
> shell that was maintainable and had loadable modules. i wrote it in
> limbo to show it works, damned well. the first requirement was a make
> loadable. it's not built into mash, it's loadable. a few pages of code
> that uses the shell rather than mk's builtin shell like stuff.


Why loadable modules, and what kinds of modules were expected to be in
frequent use? Also, would the modules only provide new commands, or could
they add whole new semantics and/or syntax constructs?

--
dexen

roger peppe

Nov 8, 2010, 6:09:22 AM

well, all it knows about are commands, fds and environment
variables. if, && and || are all defined externally.

a "mkfile" using the sh mk module looked something like this:

#!/dis/sh
load mk
metarule %.dis : %.b {
	limbo -gw $stem.b
}
TARGETS=x.dis y.dis z.dis
rule -V all : $TARGETS {
	echo done
}
mk

it'd be trivial to add a bit of syntactic sugar
a la mash, to make the syntax more mk-like,
but it seemed better just to port mk itself.

Bruce Ellis

Nov 8, 2010, 4:26:52 PM
that doesn't describe mash at all. my talk at IWP9 hinted at the functionality.

the first advice i was given when i started on inferno was not to port
everything in sight - think forward. fixing the awkward and backward
syntax and semantics of rc+mk, and the replication, was the
intention..

brucee

Charles Forsyth

Nov 8, 2010, 5:17:06 PM
> but it seemed better just to port mk itself.

the intention was to support building both inside and outside the Inferno environment,
and neither sh nor mash was going to be as easy to reproduce
outside Inferno as simply making mk work (more or less) inside Inferno.
that action alone wasn't intended to represent any vision for the future.

Charles Forsyth

Nov 8, 2010, 5:29:19 PM
>the intention was to support building both inside and outside the Inferno environment,

oh, and just like Plan 9 mkfiles, and for the same reason, Inferno's mkfiles were
essentially concise lists of the names of inputs and the names of outputs, with few instructions,
which suited my little brain.

``My last company switched to nmake, and they're OUT OF BUISINESS :-) :-) :-)''
[fortune]

Bruce Ellis

Nov 8, 2010, 5:29:34 PM
a more than fair justification.

brucee

Jeff Sickel

Nov 8, 2010, 9:16:10 PM

On Nov 8, 2010, at 4:33 PM, Charles Forsyth wrote:

> ``My last company switched to nmake, and they're OUT OF BUISINESS :-) :-) :-)''

That line is tee-shirt worthy.


EBo

Nov 8, 2010, 10:20:19 PM

>> ``My last company switched to nmake, and they're OUT OF BUISINESS
>> :-) :-) :-)''
>
> That line is tee-shirt worthy.

There are places you can get custom t-shirts made for a reasonable fee,
so you should be able to have one made ;-)

Bruce Ellis

Nov 9, 2010, 3:12:17 AM
brucee has a t-shirt press.

fun - $5 - 5 minutes (mate's rates). cinap has a reversed glenda shirt
to remind him the cars are on the left of the road. he should be
surfing with tiger - waves were a bit scary for the boy today.

brucee

"GCC makes me wanna smoke crack"

Enrico Weigelt

unread,
Nov 13, 2010, 2:32:49 PM11/13/10
to
* Charles Forsyth <for...@terzarima.net> wrote:

> ``My last company switched to nmake, and they're OUT OF BUISINESS :-) :-) :-)''

> [fortune]

When it comes to build systems, I'm still thinking of a more
declarative approach: describing the software's structure by
certain object types and their relations (e.g. we have some
executable, made of some list of sources and importing some
libraries) and then letting the build system handle it all
(it won't be as flexible as make+friends, but it works at a much
higher abstraction level).

Some time ago I'd started a little reference implementation:

git://pubgit.metux.de/projects/treebuild.git

Maybe somebody finds it a bit interesting ;-)


cu
--
----------------------------------------------------------------------
Enrico Weigelt, metux IT service -- http://www.metux.de/

phone: +49 36207 519931 email: wei...@metux.de
mobile: +49 151 27565287 icq: 210169427 skype: nekrad666
----------------------------------------------------------------------
Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------

Enrico Weigelt

Nov 13, 2010, 2:39:42 PM
* ron minnich <rmin...@gmail.com> wrote:

> If you're the kind of guy who can't resist changing things that don't
> need changing, then you won't get it; perhaps you'd be better off
> working on libtool. But Plan 9 is far from dead.

Nobody with an even half-sane mind voluntarily works on or even
with libtool ... its basic concepts are fundamentally insane ;-o

Gary V. Vaughan

Nov 13, 2010, 9:20:43 PM
[[resent from my subscribed email address after the mailing list rejected the original]]

Hi Enrico,

On 14 Nov 2010, at 02:24, Enrico Weigelt wrote:
> * ron minnich <rmin...@gmail.com> wrote:
>
>> If you're the kind of guy who can't resist changing things that don't
>> need changing, then you won't get it; perhaps you'd be better off
>> working on libtool. But Plan 9 is far from dead.
>
> Nobody with an even half-sane mind voluntarily works on or even
> with libtool ... its basic concepts are fundamentally insane ;-o

People like to beat on GNU Libtool, and in some cases that criticism is
not undeserved... but in my experience, many critics of the tool come
from a perspective of building on a single architecture. If you have
never tried to build and link shared libraries from the same code-base
on 30 (or even 3) separate incompatible architectures, then you are
probably missing the point, and needn't read any further.

I'll be the first to admit that the complexity of the shell code inside
GNU Libtool is asymptotically approaching unmaintainable... and if I had
a lot more hacking time, I'd be spending more of it on untangling the
spaghetti. But that is an implementation issue, not a design problem.

That said, your comment strikes me as entirely unsubstantiated. Why do
you think the concepts themselves are insane? Setting aside the admitted
implementation shortcomings for the sake of argument; if you were
designing GNU Libtool from scratch, how would you do it differently?

AFAICT, without rewriting the entire GNU build system from the ground
up (and there is far too much momentum behind it to ever gain enough
traction to switch the GNU eco-system to an entirely new and different
build system anyway) the following precepts are immutable:

1. Unix variants (including POSIX layers of non-Unix architectures)
build shared libraries in vastly different ways, GNU Libtool
needs to handle all of them;
2. Similarly, there are many binary formats that handle run-time
loading of code in vastly different ways, all of which must be
handled too;
3. There's no use in fighting against GNU Autoconf and GNU Automake,
(and, believe me, I hate Perl as much as the next guy), so
integration with these is required - Libtool is not a build system;
it just wants to provide a uniform interface to building and
loading dynamic shared objects;
4. GNU Libtool is a system bootstrap tool that is used to build some
of the very lowest level components of a Unix installation, so
it needs to have the smallest number of runtime dependencies
reasonably possible (GNU M4 is a *build* time dependency, and
even that is forced on me by Autoconf).
5. When you link your project code against dependent libraries, you
should only have to specify the libraries that you use directly -
the indirect libraries should be taken care of by the operating
system, and when the OS is deficient, libtool should do the heavy
lifting.

The current implementation of GNU Libtool also has the following (IMHO
desirable) aspects, which are not immutable, but make Libtool easier to
use than it would be otherwise:

1. Once installed, it is useable outside the GNU eco-system by any
build-system that is prepared to call libtool rather than the
C-compiler for building and linking against shared compilation
units;
2. It can be used to bootstrap the C compiler (and is used by GNU
gcc in that capacity to simplify tracking all the varied options
for bootstrapping its own runtime with the vendor compiler on all
the systems it supports).

I could probably think of others, but don't want to make this post any
longer than it already is.

I'm entirely open to reasoned criticism, and especially to useable
suggestions on how to improve the design of GNU Libtool. I probably
won't pay too much attention if you tell me that I should rewrite the
entire GNU build system and expect several thousand packages to pay
any attention to me. I only maintain GNU Libtool and GNU M4, so my
scope, and hacking time, is much smaller than that.

Thanks for reading this far!

Cheers,
--
Gary V. Vaughan (ga...@gnu.org)


Anthony Sorace

Nov 14, 2010, 12:51:34 AM
On Nov 13, 2010, at 21:17, Gary V. Vaughan wrote:

> People like to beat on GNU Libtool, and in some cases that criticism is
> not undeserved... but in my experience, many critics of the tool come
> from a perspective of building on a single architecture. If you have
> never tried to build and link shared libraries from the same code-base
> on 30 (or even 3) separate incompatible architectures, then you are
> probably missing the point, and needn't read any further.

When I write C code which I intend to be portable, I write against p9p, which gives me roughly a dozen architectures. Before that, I wrote against APE, which does a good job of providing a "least common denominator" POSIX layer that works with many systems. Simple changes to the mkfile make up most of the difference between platforms. If I cared about some architecture p9p didn't support, I'd put the time into p9p; if I really wanted to write POSIX code, I'd put the time into fixing bugs in the POSIX libraries.

> AFAICT, without rewriting the entire GNU build system from the ground
> up (and there is far too much momentum behind it to ever gain enough
> traction to switch the GNU eco-system to an entirely new and different
> build system anyway) the following precepts are immutable:

This misses the point, or at least the point here. It may all be true (#5 in particular raises some interesting philosophical questions), but it's #3 that makes clear that we're operating in totally different worlds. libtool may well be the most sensible way of accommodating autoconf/automake - but the most sensible thing to do is *not* to accommodate them.

You may well be right that there's too much momentum behind autoconf/automake to change GNU. But that doesn't mean it's the right thing to do, or something sensible people ought to choose to participate in.

> I'm entirely open to reasoned criticism, and especially to useable
> suggestions on how to improve the design of GNU Libtool. I probably
> won't pay too much attention if you tell me that I should rewrite the
> entire GNU build system and expect several thousand packages to pay
> any attention to me. I only maintain GNU Libtool and GNU M4, so my
> scope, and hacking time, is much smaller than that.

Again, we're asking totally different questions. You seem to be saying "what's the best way to make the gnu build system usable?", and libtool may well be a great answer to that question. But it's not the question we ask. If I instead ask "what's the best way to write portable cross-platform code?", autoconf, automake, and libtool don't enter into the discussion.


Russ Cox

Nov 14, 2010, 1:28:51 AM
> When I write C code which I intend to be portable, I write against p9p, ...

I don't think this is fair to Gary's well-reasoned mail.
He explicitly said libtool was solving the problem of
providing a single consistent command line tool that
handled the job of building a *shared library* on a
variety of different systems.

Plan9port mostly addresses the problem of providing
a consistent C programming interface (library code)
across a variety of different systems. There are the
9c and 9l scripts, but they are hardly a paragon of virtue
and don't even bother trying to create shared libraries.

That is, libtool says "you want to make shared libraries; I can help."
Plan9port says "sorry, shared libraries are too hard; don't do that."
Either approach could be valid depending on the context.

A lot of people here on 9fans lump all GNU software
together, but the different pieces can be very different,
and there are good ones. To point some of those out:
GNU awk is a nice piece of software. The core of
GNU grep is very well written even if the surrounding
utility has been embellished a bit too much. Groff is
certainly less buggy and more capable than troff,
though Heirloom troff probably beats them both.

Russ

erik quanstrom

Nov 14, 2010, 1:31:23 AM
> You may well be right that there's too much momentum behind
> autoconf/automake to change GNU. But that doesn't mean it's the right
> thing to do, or something sensible people ought to choose to
> participate in.

to be a bit more blunt, the argument that the tyranny of the
auto* is unstoppable and will persist for all time is persuasive.

so i choose at this point to get off the gnu train and do something
that affords more time for solving problems, rather than baby
sitting tools (that baby sit tools)+. i believe "no" is a reasoned answer,
when faced with an argument that takes the form of "everybody's
doing it, and you can't stop it". i suppose everybody has had that ex-boss.

i also think it's reasonable, as anthony points out, just to avoid shared
libraries, if that's the pain point. sure, one can point out various
intricacies of bootstrapping gnu c. but i think that's missing the
point that the plan 9 community is making. many of these wounds
are self-inflicted, and if side-stepping them gets you to a solution faster,
then please side step them. there's no credit for picking a scab.

please do take a look at plan9port. it's portable across cpu type and
os without fanfare, or even much code. plan 9 is similar, but much
simpler, since it doesn't need to fend off the os.

- erik

Anthony Sorace

Nov 14, 2010, 3:06:40 AM
On Nov 14, 2010, at 1:26, Russ Cox <r...@swtch.com> wrote:

[a bunch of very reasonable stuff]

I clearly didn't write that well because Russ just disagreed with me by saying exactly what I was trying to say: the approaches ask and answer different questions. My main interest was to point out that the mail Gary was responding to comes from the perspective of the "other" question.

Gary V. Vaughan

Nov 14, 2010, 3:15:08 AM
Hi Erik et al.,

Thanks for the feedback, all.

On 14 Nov 2010, at 13:24, erik quanstrom wrote:
>> You may well be right that there's too much momentum behind
>> autoconf/automake to change GNU. But that doesn't mean it's the right
>> thing to do, or something sensible people ought to choose to
>> participate in.
>
> to be a bit more blunt, the argument that the tyranny of the
> auto* is unstoppable and will persist for all time is persuasive.

Well, I wouldn't take it quite as far as that. My point is really that
there is already a vast amount of (often good) software written by
(often skilled) programmers who have invested a huge amount of time
and energy into the existing eco-system, and (quite reasonably) want to
enjoy the advantages of installing and utilising dynamic shared objects.

I doubt that anyone would argue for a full static copy of the C runtime
in every binary, and between there and making every code library a
runtime linkable shared library is just a matter of degrees. Since you
really need to solve the shared compilation unit problem at the lowest
level anyway, you might as well expose it to at least the next few layers
in the application stack.

> so i choose at this point to get off the gnu train and do something
> that affords more time for solving problems, rather than baby
> sitting tools (that baby sit tools)+. i believe "no" is a reasoned answer,
> when faced with an argument that takes the form of "everybody's
> doing it, and you can't stop it". i suppose everybody has had that ex-boss.

I would be the last person to sing the praises of the existing GNU
build system, and I hope the fact that I lurk on this list shows that
I like to hang around smart people in the hope of picking up some good
ideas. However, I don't really have the time to write the next big
build system that solves all of the growing pains of the GNU eco-system,
and I'm almost entirely certain that even if I did... my efforts would
go almost entirely unnoticed. Similarly, I don't have the luxury of
letting the train leave the station without me, unless I first have
another way of earning a living - and neither would I want to, I
consider myself blessed that I can earn my living by being involved in,
(and to a very small extent help to steer a proportion of) the Free
Software community.

On the other hand, I think that there must be room for incremental
improvements to the incumbent GNU build system, but I doubt that I
would see them right away when I'm so close to development of what
is already in fashion. My ears pricked up when I saw someone claim
that GNU Libtool is insane, because I'm interested to hear where the
insanity lurks, and maybe even gain some insight into what the cure
is. Not only that, I have the rare opportunity of being able to push
the GNU build system forward if anyone can help me to understand where
the bad design decisions were made.

> i also think it's reasonable, as anthony points out, just to avoid shared
> libraries, if that's the pain point.

:-o For an embedded system I would agree, up to a point. But when I'm
trying to support hundreds of users each running dozens of simultaneous
binaries, then forcing each binary to keep its own copy of whatever version
of each library (and its dependent libraries) were around at link time
in memory simultaneously surely can't be the best solution? Or even a
reasonable solution. I'm not even sure that statically relinking everything
on the system (actually 30 different architectures in my own case) each
time a low-level library revs so that the OS memory management can optimise
away all those duplicate libraries is a reasonable solution.

> sure, one can point out various
> intricacies of bootstrapping gnu c. but i think that's missing the
> point that the plan 9 community is making. many of these wounds
> are self-inflicted, and if side-stepping them gets you to a solution faster,
> then please side step them. there's no credit for picking a scab.

I have no doubt that the plan 9 community is doing something good for
the future development of operating systems and software, but that doesn't
solve anything for my customers who want to run Gnome, KDE and Emacs on
their AIX, Solaris and HP-UX systems. I still have to build that software
for them to make a living... and GNU Libtool makes my life immeasurably
easier. I know this because porting an application written by a GNU
build system using developer who only ever builds and tests on Mac OS
usually takes much less than a day, and often no more than an hour to
pass its testsuite on all the platforms our customers care about. The
packages that use cmake and scons and all the other "portable" build
systems rarely take me less than a week and often somewhat longer to port
to systems the developer hadn't considered... to the point where nowadays,
it's easier to port all but the very largest software packages to the GNU
build system first.

I'm still waiting to hear someone who can make a convincing argument that
GNU Libtool is not the least bad solution to the problems it wants to
help developers solve.

> please do take a look at plan9port. it's portable across cpu type and
> os without fanfare, or even much code. plan 9 is similar, but much
> simpler, since it doesn't need to fend off the os.

I have looked at length already, although upgrading to VMWare 4 last year
killed my Plan 9 VMs, and I haven't yet had the time to try to get them
running again.


tlar...@polynum.com

Nov 14, 2010, 4:12:47 AM
Hello,

On Sun, Nov 14, 2010 at 09:17:46AM +0700, Gary V. Vaughan wrote:
> [[resent from my subscribed email address after the mailing list rejected the original]]
>

> [...]

> AFAICT, without rewriting the entire GNU build system from the ground
> up (and there is far too much momentum behind it to ever gain enough
> traction to switch the GNU eco-system to an entirely new and different
> build system anyway) the following precepts are immutable:

It took me less time to write R.I.S.K. (the building framework used
for KerGIS and KerTeX)---and R.I.S.K. does shared libraries if supported
and desired---from scratch than to try to understand auto*
and libtool. Furthermore, the auto* and libtool were typically made
for trying to do something "working" to some extent with a chaotic
source. They typically manage to compile "things" written by
programmers who have been encouraged to look at the finger ignoring
the moon: to concentrate on the "GNU" tools and "GNU" libraries
etc, and not on C89 (or C99), POSIX etc.

Furthermore, the tools were not written with cross-compilation in mind:
compiling and running a test program to obtain some information
assumes that MATRIX == TARGET. Cross-compilation allows
you to build with some assumptions and some guarantees (on the MATRIX)
for a TARGET that perhaps does not, by itself, have all the utilities you are
assuming.

The main problem is indeed "philosophy": encourage the bazaar even in
the code. I personally don't buy that. And the acronym "GNU is Not
Unix" is a sophism since GNU will be strictly nothing without Unices,
especially open source ones. But since GNU is not Unix, there are
imbeciles that insist that a real GNU system must not conform to POSIX,
because it is a Unix thing, nor to a C standard ([scornful]: if you
really want this my dear, try "-pedantic"...).

The GNU way, with the auto* and libtool, is demonstrated by GPL GRASS
and GPL TeXLive. And I have made the demonstration of the power of the
organized over the bazaar by reworking those two monstrous things
_alone_ so that they are maintainable, that is, able to be "held in
one hand".

Cheers,
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C

Gary V. Vaughan

Nov 14, 2010, 4:37:14 AM
On 14 Nov 2010, at 16:10, tlar...@polynum.com wrote:
> Hello,

Hi Thierry,

> On Sun, Nov 14, 2010 at 09:17:46AM +0700, Gary V. Vaughan wrote:
>> [[resent from my subscribed email address after the mailing list rejected the original]]
>>
>> [...]
>> AFAICT, without rewriting the entire GNU build system from the ground
>> up (and there is far too much momentum behind it to ever gain enough
>> traction to switch the GNU eco-system to an entirely new and different
>> build system anyway) the following precepts are immutable:
>
> It took me less time to write R.I.S.K. (the building framework used
> for KerGIS and KerTeX)---and R.I.S.K. does shared libraries if supported
> and desired---from scratch than to try to understand auto*
> and libtool. Furthermore, the auto* and libtool were typically made
> for trying to do something "working" to some extent with a chaotic
> source. They typically manage to compile "things" written by
> programmers who have been encouraged to look at the finger ignoring
> the moon: to concentrate on the "GNU" tools and "GNU" libraries
> etc, and not on C89 (or C99), POSIX etc.

I mostly agree with everything you say here. However, please try not
to confuse libtool (essentially a wrapper for vendor compilers that
allows developers to use the uniform ELFish conventions of the libtool
interface, rather than jump through the various peculiar vendor compiler
hoops for each new platform) with the GNU build system as a whole. Note
that Libtool actually works rather well in isolation, and doesn't rely
on the rest of the GNU build system to be useful.
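
Abbreviated, that uniform interface looks like this (standard libtool usage;
the platform-specific flags come from the generated libtool script, not from
the caller):

	libtool --mode=compile cc -c foo.c
	libtool --mode=link cc -o libfoo.la foo.lo -rpath /usr/local/lib
	libtool --mode=install cp libfoo.la /usr/local/lib/libfoo.la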

> Furthermore, the tools were not written with cross-compilation in mind:
> compiling and testing a program to run it for obtaining an information
> is making the assumption that MATRIX == TARGET. Cross-compilation allows
> you to build with some assumptions and some warranties (on the MATRIX)
> for a TARGET that has perhaps not, by itself, all the utilities you are
> assuming.

Not true. Auto* makes a valiant attempt at supporting cross compilation,
but when one's philosophy is "don't tabulate platform differences per
portability issue per vendor per release; actually check what the real
behaviour is"... and there is no runtime for the cross-environment on
the build host, what else can you do? Auto*, in that case, makes a
conservative guess and allows the user to override it in case they know
better.

I think that's starting to get off topic though.

> The main problem is indeed "philosophy": encourage the bazaar even in
> the code. I personnally don't buy that. And the acronym "GNU is Not
> Unix" is a sophism since GNU will be strictly nothing without Unices,
> especially open source ones. But since GNU is not Unix, there are
> imbeciles that insist that a real GNU system must not conform to POSIX,
> because it is a Unix thing, neither to a C standard ([scornful]: if you
> really want this my dear, try "-pedantic"...).

In light of that, maybe you can suggest a better means for GNU Libtool
to prod the build environment and figure out which characteristics
and limitations of shared libraries need to be accounted
for? Obviously, following the Auto* philosophy, we currently try out each
of the things we care about to see how they work, and then keep a record
of the results in order to build the libtool runtime script for
installation.

Does your build system work correctly with shared libraries in Mingw,
cygwin, AIX, HP-UX (to name just a few of the more awkward under-
featured shared library implementations I care about) under various
compiler and library releases? How did you do that without either
probing the environment (which is what we do already) or tabulating
known results (which breaks every time you encounter a new system, and
requires maintenance for every supported combination of compiler/libc/OS
if a new variable is added to the tabulation)?


tlar...@polynum.com

Nov 14, 2010, 5:24:24 AM
Hello Gary,

On Sun, Nov 14, 2010 at 04:32:34PM +0700, Gary V. Vaughan wrote:
> [...]


>
> Does your build system work correctly with shared libraries in Mingw,
> cygwin, AIX, HP-UX (to name just a few of the more awkward under-
> featured shared library implementations I care about) under various
> compiler and library releases? How did you do that without either
> probing the environment (which is what we do already) or tabulating
> known results (which breaks every time you encounter a new system, and
> requires maintenance for every supported combination of compiler/libc/OS
> if a new variable is added to the tabulation)?

I do it the other way around; that is, it is not every package that
tries to discover where it is compiled etc., but two parameter files
(small, basic Bourne shell variable definitions) in R.I.S.K. that
describe the MATRIX and the TARGET.

Adding a "system" is trivial, and is done only once. Not every package
has to redo exploration again and again and again.

R.I.S.K. is Reduced Instruction because the project (package) must know
what it is doing; and the host systems (MATRIX and TARGET) must be
described hence know what they are providing (not third/opt packages:
only the directories to explore to find an optional library and
its header).
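
For illustration only -- these variable names are invented, not R.I.S.K.'s
actual ones -- such a description file can be as small as:

	# one TARGET parameter file per system, written once
	TARGET_SYS=plan9
	TARGET_CC=pcc
	TARGET_SHARED=no	# this target does not do shared libraries
	TARGET_LIBDIRS=/usr/local/lib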

So far, I have had no problem adding Plan9 (APE) or Darwin (Mach-O);
Mingw should be trivial; NetBSD and FreeBSD are here, OpenBSD should be
a copy. The main "divergences" will be with the Linuces---there is a
Linux target, but there should be "flavors"---that manage to
do it differently, Linuces being no "system" since the kernel, the libc
and the basic utilities are orthogonal entities! combined by mood!

So my system is probably not able to accommodate the way the GNU
framework does, but it is not its aim: if the package doesn't know what
it is doing and needing, and the systems don't know what they are
providing, the engineering principle is: no engineering! and I simply
don't waste time with that ;)

And to the question "but what do you do if a package you need doesn't
fit your engineering principles?" the answer is: I redo it to my taste.

Done for GRASS -> KerGIS, Shapelib -> my own version published with
the open source version of KerGIS (netpbm and libtiff not published),
GPL teTeX/TeXLive -> KerTeX etc.

The only system I'm willing to contribute to but not redo since it is
far better than anything I could dream of coming up with myself is Plan9...

Carl-Daniel Hailfinger

Nov 14, 2010, 5:53:29 AM
On 14.11.2010 10:10, tlar...@polynum.com wrote:
> Furthermore, the auto* and libtool were typically made
> for trying to do something "working" to some extent with a chaotic
> source. They typically manage to compile "things" written by
> programmers who have been encouraged to look at the finger ignoring
> the moon: to concentrate on the "GNU" tools and "GNU" libraries
> etc, and not on C89 (or C99), POSIX etc.
>

Heh. Pure C99 code (with no GNU extensions or OS specific stuff
whatsoever) doesn't compile with pcc unless you avoid some of the really
useful features and some of the standard headers. I can quote the C99
standard if you doubt this.

I have successfully avoided using autoconf and similar stuff in my
projects by adhering to strict C99, but in an ironic twist of fate, Plan
9 will be the OS that forces me to use something like autoconf due to
the limited C99 support.

Regards,
Carl-Daniel

--
http://www.hailfinger.org/


tlar...@polynum.com

Nov 14, 2010, 6:49:29 AM
On Sun, Nov 14, 2010 at 11:50:53AM +0100, Carl-Daniel Hailfinger wrote:
> On 14.11.2010 10:10, tlar...@polynum.com wrote:
> > Furthermore, the auto* and libtool were typically made
> > for trying to do something "working" to some extent with a chaotic
> > source. They typically manage to compile "things" written by
> > programmers who have been encouraged to look at the finger ignoring
> > the moon: to concentrate on the "GNU" tools and "GNU" libraries
> > etc, and not on C89 (or C99), POSIX etc.
> >
>
> Heh. Pure C99 code (with no GNU extensions or OS specific stuff
> whatsoever) doesn't compile with pcc unless you avoid some of the really
> useful features and some of the standard headers. I can quote the C99
> standard if you doubt this.
>

I don't doubt this. But as long as you know what standard you use,
you know exactly the delta between C99 and C89, and you have two options:
either provide a C99 "emulation" to insert between your sources and
the C89 framework (this is the way I would do it with R.I.S.K.), or add C99
support to pcc...

As long as you know exactly what you do use, a solution is always at
hand.
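
A minimal sketch of such a C99 "emulation" header -- the file name and the
typedef widths are assumptions; a real shim would key them off the TARGET
description:

	/* c99shim.h: hypothetical shim between C99 sources and a C89 toolchain */
	#ifndef C99SHIM_H
	#define C99SHIM_H

	typedef unsigned char	uint8_t;
	typedef unsigned short	uint16_t;
	typedef unsigned int	uint32_t;	/* assumes a 32-bit int on the target */

	#define inline		/* C89 has no inline; compile such functions as plain ones */

	#endif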

When one is already beating around the bush in one's code, it is almost
hopeless...

erik quanstrom

Nov 14, 2010, 10:34:25 AM
> GNU awk is a nice piece of software. The core of
> GNU grep is very well written even if the surrounding
> utility has been embellished a bit too much. Groff is

when mike wrote it, gnu grep was the best thing one
could get if one wasn't at the labs. since brucee started
this, i was in the room when it was written. or at least,
one day in the science center mike was explaining the
algorithm and he said something to the effect of, oops.
he ran off to a h29 terminal, fixed the bug, confirmed it
with the old executable, recompiled, and confirmed the
fix.

unfortunately, the last i checked, gnu grep mallocs
for each byte of input when using a utf-8 locale. i even
submitted a patch for this, but i don't think the current
maintainers understood why that's a problem.

anyway, i think you know why we paint with a broad brush.
when you love well and simply written software, gnu's just
going to make you a bit cranky.

even if gnu * has a well-written core, the layers of crunchy
bits make them difficult to understand, difficult to read,
and wickedly difficult to fix. at least that's my experience.

- erik

Jacob Todd

Nov 14, 2010, 10:58:26 AM

The full standard C library isn't included in a statically linked executable; only what's needed is, at least on Plan 9. I have no idea what gcc does.

ron minnich

Nov 14, 2010, 11:27:12 AM
On Sun, Nov 14, 2010 at 7:56 AM, Jacob Todd <jaket...@gmail.com> wrote:
> The full standard C library isn't included in a statically linked
> executable; only what's needed is, at least on Plan 9. I have no idea what
> gcc does.

To emphasize this comment: Plan 9 has always done the equivalent of
what gcc recently got; only the code you use is bound in. The plan 9
linkers do the equivalent of the segment garbage collection that went
into gld a while back. And there's no need to tell the Plan 9 C
compiler to put everything in its own section so it can be
garbage-collected by the linker.
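
For comparison, the gcc/gld flags ron alludes to: put each function and datum
in its own section, then let the linker drop the unreferenced sections:

	cc -ffunction-sections -fdata-sections -c foo.c
	cc -Wl,--gc-sections -o foo foo.o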

ron

Russ Cox

Nov 14, 2010, 12:48:59 PM
> unfortunately, the last i checked, gnu grep mallocs
> for each byte of input when using a utf-8 locale.

that bug was fixed in gnu grep years ago,
probably before you found and reported it.
unfortunately, linux distributions were for
many years not updating their copies of
gnu grep to the latest version, so very few
'/bin/grep's had the bug fix.

russ

Charles Forsyth

Nov 14, 2010, 4:38:08 PM
>I have successfully avoided using autoconf and similar stuff in my
>projects by adhering to strict C99, but in an ironic twist of fate, Plan
>9 will be the OS that forces me to use something like autoconf due to
>the limited C99 support.

the list of unimplemented items in /sys/src/cmd/cc/c99* is:

9, 19. Hexadecimal floating point constants.
11. _Complex, _Imaginary, _Bool
14. Variable arrays in parameter lists.
33. Variable-length arrays
34. goto restrictions for variable-length arrays
18. Notation for universal characters \uXXXX
25. Division and mod truncate toward zero.
26. _Bool, float _Complex, double _Complex, long double _Complex

i can think of something else that's not been noticed,
but what other things have you found?

Ori Bernstein

Nov 14, 2010, 4:50:43 PM
Compound literal support is unimplemented for arrays. Also, most c99
headers are missing, even the simple ones like stdint.h. It seems most of
the work to fix that would be teaching OSTRUCT to work with arrays in
com.c
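
Concretely -- plain C99, with the per-case status as described above:

	struct point { int x, y; };

	int
	demo(void)
	{
		struct point p = (struct point){1, 2};	/* struct compound literal: works */
		int *v = (int[]){1, 2, 3};		/* array compound literal: the missing case */
		return p.x + v[2];
	}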

*dives back into schoolwork*

On Sun, 14 Nov 2010 21:44:07 +0000, Charles Forsyth <for...@terzarima.net> wrote:

> >I have successfully avoided using autoconf and similar stuff in my
> >projects by adhering to strict C99, but in an ironic twist of fate, Plan
> >9 will be the OS that forces me to use something like autoconf due to
> >the limited C99 support.


--
Ori Bernstein

erik quanstrom

Nov 14, 2010, 5:23:03 PM

if i recall correctly, i found that in 2004 or 2005
and fixed it directly from the gnu.org source.
perhaps you remember something i don't.

in any event, it's still not really fixed. utf-8
performance still sucks:

; grep --version >[2=1] | sed 1q
GNU grep 2.5.4
; time grep missingstring1 mail.tar
0.00u 0.00s 0.01r grep missingstring1 mail.tar # status=1
; LANG=en_US.UTF-8 time grep missingstring1 mail.tar
0.44u 0.00s 0.53r grep missingstring1 mail.tar # status=1

- erik

Enrico Weigelt

Nov 14, 2010, 8:20:05 PM
* Gary V. Vaughan <ga...@gnu.org> wrote:

> People like to beat on GNU Libtool, and in some cases that criticism is
> not undeserved... but in my experience, many critics of the tool come
> from a perspective of building on a single architecture.

Actually, I'm building for lots of different archs almost all day.
Cross-compiling w/ sysroot, of course. And that's exactly the point
where libtool and other autotools stuff was quite unusable until just
a few years ago (e.g. it passed *wrong* library paths to the toolchain).

> That said, your comment strikes me as entirely unsubstantiated. Why do
> you think the concepts themselves are insane?

The whole idea of libtool essentially being a command-line filter,
instead of defining its own coherent abstraction interface with one
implementation/configuration instance per target rather than an
autofooled instance per individual package build.

> Setting aside the admitted implementation shortcomings for the sake
> of argument; if you were
> designing GNU Libtool from scratch, how would you do it differently?

See git://pubgit.metux.de/projects/unitool.git

> 1. Unix variants (including POSIX layers of non-Unix architectures)
> build shared libraries in vastly different ways, GNU Libtool
> needs to handle all of them;

That's an issue of individual platform backends, which should be
completely transparent to the calling package.

> 3. There's no use in fighting against GNU Autoconf and GNU Automake,

Ah, resistance is futile? ;-o

> 1. Once installed, it is useable outside the GNU eco-system by any
> build-system that is prepared to call libtool rather than the
> C-compiler for building and linking against shared compilation
> units;

Is anyone seriously doing that? I only see a wide tendency to move away
from libtool in the GNU world ...

Gary V. Vaughan

Nov 14, 2010, 11:31:27 PM

On 14 Nov 2010, at 17:50, Carl-Daniel Hailfinger wrote:
> On 14.11.2010 10:10, tlar...@polynum.com wrote:
>> Furthermore, the auto* and libtool were typically made
>> for trying to do something "working" to some extent with a chaotic
>> source. They typically manage to compile "things" written by
>> programmers who have been encouraged to look at the finger ignoring
>> the moon: to concentrate on the "GNU" tools and "GNU" libraries
>> etc, and not on C89 (or C99), POSIX etc.
>>
>
> Heh. Pure C99 code (with no GNU extensions or OS specific stuff
> whatsoever) doesn't compile with pcc unless you avoid some of the really
> useful features and some of the standard headers. I can quote the C99
> standard if you doubt this.

Even then, many vendor compilers and linkers have many non-conformances,
and outright bugs. Take a look at the number of workarounds that make
their way into gnulib to cover breakage in the POSIX APIs alone.

You can either try to remember what all of those are, or use something
like autoconf to probe for known bugs, and gnulib to plug them, or
you can link against a shim library like GNU libposix which will
do all of that automatically when it is built and installed, allowing
you to write to the POSIX APIs with impunity.

> I have successfully avoided using autoconf and similar stuff in my
> projects by adhering to strict C99, but in an ironic twist of fate, Plan
> 9 will be the OS that forces me to use something like autoconf due to
> the limited C99 support.

And sadly, there is a good chance that your blind faith in having fully
conformant APIs would come unstuck quite quickly if your code needed to
work on a large selection of commercial UNIX releases (assuming you
didn't code around all of those shortcomings in each of your projects
that is).

Plan 9 is far from alone in having limited C99 and POSIX API support.


Carl-Daniel Hailfinger

unread,
Nov 15, 2010, 12:08:40 AM11/15/10
to
On 15.11.2010 05:29, Gary V. Vaughan wrote:
> On 14 Nov 2010, at 17:50, Carl-Daniel Hailfinger wrote:
>
>> On 14.11.2010 10:10, tlar...@polynum.com wrote:
>>
>>> Furthermore, the auto* and libtool were typically made
>>> for trying to do something "working" to some extent with a chaotic
>>> source. They typically manage to compile "things" written by
>>> programmers who have been encouraged to look at the finger ignoring
>>> the moon: to concentrate on the "GNU" tools and "GNU" libraries
>>> etc, and not on C89 (or C99), POSIX etc.
>>>
>>>
>> Heh. Pure C99 code (with no GNU extensions or OS specific stuff
>> whatsoever) doesn't compile with pcc unless you avoid some of the really
>> useful features and some of the standard headers. I can quote the C99
>> standard if you doubt this.
>>
>
> Even then, many vendor compilers and linkers have many non-conformances,
> and outright bugs. Take a look at the number of workarounds that make
> their way into gnulib to cover breakage in the POSIX APIs alone.
>
> You can either try to remember what all of those are, or use something
> like autoconf to probe for known bugs, and gnulib to plug them, or
> you can link against a shim library like GNU libposix which will
> do all of that automatically when it is built and installed, allowing
> you to write to the POSIX APIs with impunity.
>

Oh, I don't doubt that there is lots of API breakage on various unixes.
However, I hope that printf, scanf, fopen and similar basic functions
are working well in all those environments. That said, non-portable
constructs like mmap have to be avoided (or at least wrapped in some
cross-platform interface) once you want to run software on Windows. And
AFAIK hardware accesses are totally non-portable anyway. libpci/pciutils
and X.org have their own abstraction layer for this, and flashrom uses
its own abstraction layer as well. I have yet to find a general purpose
library which handles hardware accesses and works on DOS, Windows, *BSD,
Linux, Solaris, MacOSX, ... X.org might be close, though.

>> I have successfully avoided using autoconf and similar stuff in my
>> projects by adhering to strict C99, but in an ironic twist of fate, Plan
>> 9 will be the OS that forces me to use something like autoconf due to
>> the limited C99 support.
>>
>
> And sadly, there is a good chance that your blind faith in having fully
> conformant APIs would come unstuck quite quickly if your code needed to
> work on a large selection of commercial UNIX releases (assuming you
> didn't code around all of those shortcomings in each of your projects
> that is).
>
> Plan 9 is far from alone in having limited C99 and POSIX API support.
>

So far this has not been a problem for flashrom, but that may also be
due to the really small number of commercial unixes being supported by
flashrom (no user interest, and thus pointless to port).
That said, having a full-featured compiler like clang or gcc available
allows coding for the compiler, and only to a lesser degree for the OS.

The good thing about flashrom is that it uses only very few interfaces,
and most of those need platform specific handling anyway.

Writing userspace software with a nice GUI and all sorts of bells and
whistles is probably a lot more prone to exercise broken code paths in
libraries than an app which has no interactive behaviour and avoids
pretty much every convenience feature. For such GUI apps it makes a lot
of sense to use an abstraction layer which hides/replaces broken
functionality in the environment.

Dan Cross

unread,
Nov 15, 2010, 10:49:52 AM11/15/10
to
On Sun, Nov 14, 2010 at 11:29 PM, Gary V. Vaughan <ga...@vaughan.pe> wrote:
> You can either try to remember what all of those are, or use something
> like autoconf to probe for known bugs, and gnulib to plug them, or
> you can link against a shim library like GNU libposix which will
> do all of that automatically when it is built and installed, allowing
> you to write to the POSIX APIs with impunity.

I've read this discussion with interest. Something that strikes me is
that there are certain underlying beliefs and assumptions in the Plan
9 community that are taken for granted and rarely articulated, but
that frame many of the comments from 9fans. Further, those are, in
many ways, contrary to the assumptions and requirements Gary is
constrained by when working on libtool.

I believe that one of the most powerful decisions that the original
plan 9 developers made was consciously resisting backwards
compatibility with Unix. That's not to say that they completely
ignored it, or that it was actively worked against, but that it was
not a primary consideration and did not (overly) constrain either
design or implementation. This freed them to revisit assumptions,
rethink many of the decisions in the base system, and clean up a lot
of rough edges.

For instance, and relevant to this discussion, take a look at how
cross-compilation and platform independence on Plan 9 "just works" in
a simple, consistent way across multiple architectures. I was
surprised by an earlier message in this discussion, when Gary said,

> If you have never tried to build and link shared libraries from the same
> code-base on 30 (or even 3) separate incompatible architectures, then
> you are probably missing the point, and needn't read any further.

Granted, I think the key thing here is that Gary's talking about
shared libraries (which, as Russ said, the Plan 9 developers simply
punted on), instead of just building, but I can't help but feel that
this overlooks part of the plan 9 way of doing things.

The plan 9 developers made a decision to break with the convention of
naming object files with a ".o" extension, assigned a letter to each
architecture, established the convention that object files and libraries
would use that letter in their filenames, and renamed the compiler,
assembler and linker accordingly. Then they modified the filesystem
hierarchy to have architecture-specific directory trees for
architecture-specific things (which is easy to do when you've got
mutable namespaces). Mk was smart enough that these conventions could
be used in the build system pretty easily. None of these name
changes are particularly deep; in many ways, they are simply cosmetic.
However, they led to a simplification that makes building for
different architectures out of the same tree nearly trivial. Just by
setting an environment variable, I can build the entire system for a
different architecture, in the same tree, with a single command, with
no conflicts. Since the compiler for each architecture is just
another program, cross-compilation isn't special or complicated.
Compare this to setting up gcc for cross compilation.
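
To make that concrete (a sketch from memory, with a hypothetical
hello.c; this is the usual shape of a one-program mkfile):

	; cat mkfile
	</$objtype/mkfile	# defines $O, $CC, $LD for the current objtype
	hello: hello.$O
		$LD -o hello hello.$O
	hello.$O: hello.c
		$CC hello.c
	; mk			# on a 386 terminal: runs 8c then 8l; objects are hello.8
	; objtype=arm mk	# same tree, same command: runs 5c then 5l instead

The per-architecture compilers (8c for 386, 5c for arm, vc for mips,
and so on) are just ordinary programs, so there is nothing special to
"set up".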

And that's sort of the point. 9fans tend not to ask, "how can I make
this work across multiple systems that are immutable as far as I'm
concerned as a developer" but rather they ask, "how can the system
support me doing this in a simpler, orthogonal, more consistent way?"
If that means shedding some convention and replacing it with something
else that makes life easier, there's less hesitation to do that.

To that end, libtool, autoconf and automake, etc, all seem to answer
the wrong question. From the 9fans perspective (and take this with a
grain of salt; I can't claim to speak for all of us), libtool seems
"crazy" because it puts a bandaid on a wart instead of trying to solve
the underlying problem of complex, inconsistent interfaces across
systems. In this way, it is reactionary. Autoconf et al are
analogous to a bunch of nested #ifdef's, and most 9fans would choose to
program to some sort of shim library that provided a consistent
interface as a matter of course. Better yet, they'd go fix the
underlying systems so that they correctly implemented the relevant
standard or interface. I'm not sure that's possible with Unix, as
Gary rightly points out, because of the weight of the installed base,
fragmentation between variants and the requirements of backwards
compatibility. Though unrealistic, it's certainly idealistic.

One of the enduring benefits of Plan 9 is that it is (still) a good
example of how well-reasoned engineering tradeoffs and a modicum of
good taste can produce a powerful and elegant system with a simple
implementation. Rob Pike is (in?)famously quoted as saying, "not only
is Unix dead, it's starting to smell really bad" (note to Rob: is this
apocryphal? I've never found an original source). I think that's
often taken out of context; Unix may be dead as an exciting venue for
the exploration of fundamentally new ways of doing things, for all the
reasons that have been mentioned. That doesn't mean it's not useful
for getting real work done. In this sense it's more like a large
wooden support beam; it's dead in the sense that the tree it came from
(presumably) isn't growing anymore, even though the beam serves some
useful purpose. Libtool is more in line with this view of the world.
It facilitates divergent systems doing useful things (in the sense of
end-users finding those systems useful; my mom couldn't care less what
hardware and software platform gmail runs on, as long as she can
communicate with family and friends). It fits the world view of the
massive number of developers who are already familiar with the Unix
and Linux model and who aren't particularly interested in other
models. But it's not exciting as a tool for figuring out how to make
those systems better in and of themselves. The assumption that 9fans
make is that the latter is more important than the former.

Sorry, this was long-winded.

- Dan C.

Gary V. Vaughan

unread,
Nov 15, 2010, 11:08:21 AM11/15/10
to
On 15 Nov 2010, at 08:02, Enrico Weigelt <wei...@metux.de> wrote:
> * Gary V. Vaughan <ga...@gnu.org> wrote:
>> People like to beat on GNU Libtool, and in some cases that criticism is
>> not undeserved... but in my experience, many critics of the tool come
>> from a perspective of building on a single architecture.
>
> Actually, I'm building for lots of different archs almost all day.
> Cross-compiling w/ sysroot, of course. And that's exactly the point
> where libtool and other autotools stuff was quite unusable until just
> a few years ago (e.g. it passed *wrong* library paths to the toolchain).

I have been compiling cross-platform Free Software for a living for about 8
years now, and I have been maintaining GNU Libtool for close to twice that
long. I have never used sysroot in all that time, and no one else offered patches
until quite recently. Libtool has been immeasurably useful to me entirely
without that particular feature. At the risk of getting off topic, that's kind of the
point of free software - if it doesn't work quite the way you would like, fix it!
If your fixes make any kind of sense, they'll likely be adopted upstream for
everyone else to enjoy too.

>> That said, your comment strikes me as entirely unsubstantiated. Why do
>> you think the concepts themselves are insane?
>
> The whole idea of libtool essentially being a command line filter
> instead of defining its own coherent abstraction interface

What is incoherent or unabstract about offering a static-library-like interface
to building shared libraries, or an ELF-like library versioning scheme?
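
To make that concrete, the interface amounts to this (a typical pair of
invocations from memory; the names and paths are illustrative):

  $ libtool --mode=compile cc -c foo.c
  (produces foo.lo, a portable handle for the PIC and non-PIC objects)
  $ libtool --mode=link cc -o libfoo.la foo.lo -rpath /usr/local/lib -version-info 1:0:0
  (emits a static and/or shared libfoo with whatever flags this host needs)

The caller issues the same two commands on every platform, and libtool
maps them onto the local compiler and linker incantations.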

> and one
> implementation/configuration instance per target instead of an
> autofooled instance per individual package build.

How does that scale in the face of a dozen or more OSes, each with at least
a handful of supported library releases and compiler revisions, each with a
handful of vendor maintenance patches, each with several hundred API
entry-points, several dozen of which have non-conformances or
outright bugs? Worse, many of my clients mix and match gcc with vendor
ld and runtime in various combinations, or install and support 3 or more
compilers per architecture. Libtool figures out what to do in all of those
thousands of combinations by probing the environment at build time... I'd
*much* rather wait 90 seconds for each build than try to maintain a giant
tabulation with thousands of entries, which go out of date every time a new
patch or revision of libc or the compiler or the os or the linker comes along.

>> Setting aside the admitted implementation shortcomings for the sake
>> of argument; if you were
>> designing GNU Libtool from scratch, how would you do it differently?
>
> See git://pubgit.metux.de/projects/unitool.git

Java? For a bootstrapping tool? Does Java even get worthwhile support
outside of Windows and Solaris these days? If it works for you, that's
great, but I would have an extremely hard sell on my hands if I told my
clients they would need to have a working Java runtime on Tru64 Unix
before I could provide a zlib build recipe for them :-o

>
>> 1. Unix variants (including POSIX layers of non-Unix architectures)
>> build shared libraries in vastly different ways, GNU Libtool
>> needs to handle all of them;
>
> That's an issue of individual platform backends, which should be
> completely transparent to the calling package.

Agreed, that's what libtool provides, but to do that it needs to be intimately
familiar with how each combination works. It certainly shouldn't be trying
to do that without calling the vendor compiler and linker.

>> 3. There's no use in fighting against GNU Autoconf and GNU Automake,
>
> Ah, resistance is futile ? ;-o

Without user acceptance, the 2 man-years of effort I could sink into a new
all-singing, all-dancing build system would be a waste of my time. I'd much
rather spend that time on tools people will get mileage from.

>> 1. Once installed, it is useable outside the GNU eco-system by any
>> build-system that is prepared to call libtool rather than the
>> C-compiler for building and linking against shared compilation
>> units;
>
> Anyone seriously doing that ? I only see a wide tendency to move away
> from libtool in GNU world ...

Yep. If I'm porting a cmake package (for example) to the 30 architectures
we support, and shared libraries are required - calling the installed libtool
from the cmake rules is an order of magnitude less work than trying to
encode all the different link flags, install and post install rules or other
system specific peculiarities into the compile and link rules in every build
file... And also a lot easier than trying to shoehorn a libtool instance into
the porting source tree. There's a perfectly good working /usr/local/bin/libtool,
so why not use that?

Lucio De Re

unread,
Nov 15, 2010, 12:00:24 PM11/15/10
to
Dan makes a good point and I agree entirely with his sentiments. But I do
have a qualm: the Plan 9 designers managed to simplify cross-compilation
to a single underlying (OS) platform, but failed (in a suprisingly ugly
way) to cater for different target object formats, even though there were
efforts to do so. In my opinion - and this is all I hold against Plan
9 - by shoehorning various target object formats into the linker/loader
as options, they spoiled the consistency of the system.

I have no doubt at all that this was an afterthought or at any rate an
attempt to make the most of a situation they could not have control over,
but I think that the problem ought to have been given more attention
and a better solution sought. Of course, I can plead ignorance and
stupidity and admit that I have no idea how I would address the same
problem, but I'd like to raise it, because I think in a forum like this
it may well stimulate the type of productive discussion that leads to
a better mouse trap.

To put the problem into perspective, think of Go: the developers have
added more shoehorning to target ELF and possibly other object models;
I'm sure that, had they had space to do it, they would have found it
more fruitful to distil that portion of the development system into a
separate or at least better structure.

Having investigated this and painted myself into a corner, I'm curious
to hear what others think of the issue. Specially those, like Russ,
who were involved in the initial decisions regarding Go. Looking at the
outcome, I can't help but think that the Plan 9 toolchain is infinitely
superior to its current competitors. And I'd also like to point out
that any shortcomings it may have regarding implementation of C99 can
almost certainly be addressed within the ability of a single, no doubt
gifted, but not infinitely so, individual.

++L

erik quanstrom

unread,
Nov 15, 2010, 12:01:24 PM11/15/10
to
> *much* rather wait 90 seconds for each build than try to maintain a giant
> tabulation with thousands of entries, which go out of date every time a new
> patch or revision of libc or the compiler or the os or the linker comes along.

there seems to be an affliction in the unix world that started out as
(a) you are required to use the esoteric features of a
library within milliseconds of the release of a new version, and
(b) libraries must have the maximum amount of api churn possible.
recently this has morphed into libraries removing "old" working
functions on dot releases because of the first condition.

"modern" languages like perl and python even subscribe to this
model. it's okay to break the language between releases.

- erik

Brian L. Stuart

unread,
Nov 15, 2010, 12:28:35 PM11/15/10
to
> to a single underlying (OS) platform, but failed (in a
> suprisingly ugly
> way) to cater for different target object formats, even
> though there were
> efforts to do so.  In my opinion - and this is all I
> hold against Plan
> 9 - by shoehorning various target object formats in the
> linker/loader
> as options, they spoiled the consistency of the system.

I always had the impression that the object formats
used by the various ?l are more for kernels and the
various formats expected by loaders than for userland
apps. For userland, I would think the intent is for
there to be a single consistent object format (at least
for a given architecture).

BLS


Dave Eckhardt

unread,
Nov 15, 2010, 1:13:47 PM11/15/10
to
> Even then, many vendor compilers and linkers have many
> non-conformances, and outright bugs. Take a look at the
> number of workarounds that make their way into gnulib to
> cover breakage in the POSIX APIs alone.
>
> You can either try to remember what all of those are, or
> use something like autoconf to probe for known bugs, and
> gnulib to plug them, or you can link against a shim library
> like GNU libposix which will do all of that automatically
> when it is built and installed, allowing you to write to the
> POSIX APIs with impunity.

The autoconf ecosystem represents a hypothesis. I think we
have gathered enough data to seriously evaluate the truth of
the hypothesis, and I don't think it's worked out very well.

Before auto*, the "old way" was for each package to separate out
platform-specific code into a module per platform, e.g., sunos.c.
That meant that each package had to have an expert for each
platform, somebody who was familiar with that package and that
platform and who knew C. Each time a platform revved there
would be a delay while the platform expert for each package
figured out what to do (generally throw in an ifdef and write
a couple lines of code).
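
Concretely, the old way looked something like this (schematic, with
made-up names, not from any particular package):

	/* sunos.c -- the SunOS expert's whole module */
	#include <unistd.h>

	/* portable callers see only this declaration, via a shared os.h */
	int os_hostname(char *buf, int len);

	int
	os_hostname(char *buf, int len)
	{
		/* SunOS-specific quirks get patched right here */
		return gethostname(buf, len);
	}

When SunOS revved, the fix was usually an ifdef and a couple of lines
in this one file, written by somebody who only had to know C, SunOS,
and the package.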

In the Brave New World of GNU auto*, in theory all packages can
share all of the platform-specific tweaks, and in theory the tweaks
aren't specific to platforms anyway, but to features. In practice,
however, when a platform revs, all of the tweak-detection code
breaks, which means that a 5,000-line shell script goes "configure.sh:
4232: syntax error", and the situation can be fixed only by somebody
who is an expert on:
* that platform
* the package
* C
* M4
* bash
* autoconf
* autoheader
* libtool
* gawk (the gawk scripts say #!/usr/bin/awk at the top, but woe
betide anybody who attempts to run them without "awk" being
gawk)

So there is a delay until one of the very few people on the planet
conversant with all of those things figures out what to do. The
feature tests are brittle (actual example: we decide whether we have
MIT Kerberos or Heimdal Kerberos by seeing whether libkrb5 contains
some oddly-named extension function; a year later, the other group
implements that function and kablooie, no package knows whether to
-lkrbsupplemental or -lkrbadditional).

In both the "old way" or the "new way", every time a platform revs
most complex packages fail to build. In the first scheme, it is
frequently the case that anybody competent to build a package from
source can lash together a fix which works for their situation until
an official fix comes out--and there's a good chance that the simple
fix is actually the right fix for all users of that package on that
platform. The second scheme is based on the hypothesis that one
many-skilled person on one platform can tweak an immensely complex
ecosystem so that it will run on many platforms that person has no
access to. I think that hypothesis has turned out to be false.
Packages are still buildable on exactly those platforms where an
expert has done work specific to that package and that platform,
only now it is much harder to diagnose and fix build problems.

Essentially, the underlying assumption was that an N*M problem
could be collapsed down to an N+M problem; sadly, the complexity
of the result is more like 2**(N+M).

Dave Eckhardt

P.S. I also think we have enough data to reject the hypothesis
that 5,000-line shell scripts are a good idea. Both hypotheses
had their attractiveness at inception, but the point of running
experiments is (hopefully) to learn from them.

P.P.S. I am leaving out conundrums like "the feature tests that
auto* version x.y uses are not compatible with the feature
tests used by auto* version y.x, so you can't switch a package
from auto* x.y to y.x, but auto* x.y predates the existence of
the platform I'm trying to build on, so it does *everything*
wrong on that platform".

Steve Simon

unread,
Nov 15, 2010, 3:09:06 PM11/15/10
to
My personal disappointment with autoconf is that there was no simple
file which the package author writes (or even autogenerates)
describing what features their package depends on.

There is a file, but it's anything but simple, as it (ab)uses m4
and shell script macros that it "knows" exist. It is not reasonable to
analyse this file on a foreign OS (e.g. plan9) and work out what might
be required to build the package.
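
Something on the order of (purely hypothetical -- no such file exists):

  standard: c99 posix-2001
  functions: mmap snprintf
  libraries: z

would be easy to read on plan9 and act upon by hand.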

-Steve

lu...@proxima.alt.za

unread,
Nov 15, 2010, 10:35:08 PM11/15/10
to
> I always had the impression that the object formats
> used by the various ?l are more for kernels and the
> various formats expected by loaders than for userland
> apps. For userland, I would think the intent is for
> there to be a single consistent object format (at least
> for a given architecture).

Well, we had alef for Irix and other similar user level/application
level tricks that no longer seem important today, but without the
option trickery Go would have had to wait for Ian Lance Taylor to
produce a GCC version :-(

Myself, I'm still trying to combine the Go toolchain with the Plan 9
toolchain so that we can have a consistent framework for real
cross-platform development, but the task doesn't quite fit within my
resources and skills. I don't have a problem with the trickery, it's
just a shame (IMO) that it wasn't designed the same way as the target
architecture stuff. I understand the complexity involved and I'm still
looking for ideas on reducing that complexity.

Typically, the Go toolchain still has (had?) code in it to produce
Plan 9 object code, but one could easily imagine that stuff
bit-rotting. If it hasn't been removed yet, it sure runs the risk of
being removed before long.

Of course, the ideal situation would be for Go and p9p to converge and
the whole lot to be back ported to Plan 9. I think it's possible, but
somebody of my skill level trying to do this will often need to be
rescued after painting himself in a corner. But I have made a start
and, again, I must rescue and document my efforts, lest somebody have
to go through those pains again unnecessarily.

++L

PS: I think 9vx has brought me closer to this objective, but it's
still orders of magnitude bigger than I am competent to handle. But less so
than the auto* stuff.


erik quanstrom

unread,
Nov 16, 2010, 12:01:28 AM11/16/10
to
> Of course, the ideal situation would be for Go and p9p to converge and
> the whole lot to be back ported to Plan 9.

if you just want go on plan 9, i think object formats
are a non sequitur.

calling out the guys who wrote plan 9 for not supporting
object formats that plan 9 never used seems a bit rude
to me.

- erik

lu...@proxima.alt.za

unread,
Nov 16, 2010, 12:10:48 AM11/16/10
to
> if you just want go on plan 9, i think object formats
> are a non sequitur.
>
But that's not it, really, I want both Go and the ELF capabilities :-)

> calling out the guys who wrote plan 9 for not supporting
> object formats that plan 9 never used seems a bit rude
> to me.

I am willing to apologise if that is how it's perceived, but the
intent is not to insult anyone, but rather to extend the Plan 9
toolchain beyond the Plan 9 scope, something the Go developers did to
a great extent and something I would dearly like to retrofit to Plan
9. Getting Go in the bargain is an exciting side effect.

++L


Christopher Nielsen

unread,
Nov 16, 2010, 5:21:53 PM11/16/10
to
On Mon, Nov 15, 2010 at 19:32, <lu...@proxima.alt.za> wrote:
>> I always had the impression that the object formats
>> used by the various ?l are more for kernels and the
>> various formats expected by loaders than for userland
>> apps.  For userland, I would think the intent is for
>> there to be a single consistent object format (at least
>> for a given architecture).
>
> Well, we had alef for Irix and other similar user level/application
> level tricks that no longer seem important today, but without the
> option trickery Go would have had to wait for Ian Lance Taylor to
> produce a GCC version :-(
>
> Myself, I'm still trying to combine the Go toolchain with the Plan 9
> toolchain so that we can have a consistent framework for real
> cross-platform development, but the task doesn't quite fit within my
> resources and skills.  I don't have a problem with the trickery, it's
> just a shame (IMO) that it wasn't designed the same way as the target
> architecture stuff.  I understand the complexity involved and I'm still
> looking for ideas on reducing that complexity.
>
> Typically, the Go toolchain still has (had?) code in it to produce
> Plan 9 object code, but one could easily imagine that stuff
> bit-rotting.  If it hasn't been removed yet, it sure runs the risk of
> being removed before long.

FWIW, someone is working on a Plan 9 port of Go.

--
Christopher Nielsen
"They who can give up essential liberty for temporary safety, deserve
neither liberty nor safety." --Benjamin Franklin
"The tree of liberty must be refreshed from time to time with the
blood of patriots & tyrants." --Thomas Jefferson

Pavel Zholkover

unread,
Nov 17, 2010, 2:41:46 AM11/17/10
to
Hi,
I did a Go runtime port for x86; it is already in the main hg repository.
Right now it is cross-compiled from Linux, for example (GOOS=plan9 8l -s
when linking; note the -s, it is required).
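
In other words, something like this (the usual 8g/8l workflow; the
file names are just examples):

  $ 8g hello.go
  $ GOOS=plan9 8l -s -o hello hello.8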

There were a few changes made to the upstream so the following patch
is needed until the fix is committed:
http://codereview.appspot.com/2674041/

Right now I'm working on syscall package.

Pavel

Lucio De Re

unread,
Nov 17, 2010, 2:47:35 AM11/17/10
to
On Wed, Nov 17, 2010 at 09:38:46AM +0200, Pavel Zholkover wrote:
> I did a Go runtime port for x86; it is already in the main hg repository.
> Right now it is cross-compiled from Linux, for example (GOOS=plan9 8l -s
> when linking; note the -s, it is required).
>
I have Plan 9 versions of the toolchain that ought to make it possible
to do the same under Plan 9. I'll have a look around the repository,
see if I can add any value.

> There were a few changes made to the upstream so the following patch
> is needed until the fix is committed:
> http://codereview.appspot.com/2674041/
>
> Right now I'm working on syscall package.
>

Thanks for letting us know.

++L

Joel C. Salomon

unread,
Nov 18, 2010, 12:33:51 AM11/18/10
to
On 11/14/2010 04:44 PM, Charles Forsyth wrote:
> the list of unimplemented items in /sys/src/cmd/cc/c99* is:
<snip>

> i can think of something else that's not been noticed, but what other things have you found?

Why is __func__ listed as “unwanted”? I’ve found it useful for some
logging functions.

--Joel

erik quanstrom

unread,
Nov 18, 2010, 1:31:29 AM11/18/10
to
> Why is __func__ listed as “unwanted”? I’ve found it useful for some
> logging functions.

i think the correct interpretation of unwanted in this
context is either don't want or don't want to implement.

one former member of the don't-want list was variadic macros,
which are now supported by both the compiler and cpp.
i used to think that __func__ would be useful, but i've never
actually found a use for it. there almost always seems to be a
better option.

- erik

Federico G. Benavento

unread,
Nov 18, 2010, 5:54:41 PM11/18/10
to
isn't this redundant with cpp(1)'s __FUNCTION__?

if __FUNCTION__ isn't standard, then we should change
it to __func__ in cpp and that's it

--
Federico G. Benavento

Joel C. Salomon

unread,
Nov 18, 2010, 9:10:40 PM11/18/10
to
On 11/18/2010 05:50 PM, Federico G. Benavento wrote:
> On Thu, Nov 18, 2010 at 2:30 AM, Joel C. Salomon <joelcs...@gmail.com> wrote:
>> Why is __func__ listed as “unwanted”? I’ve found it useful for some
>> logging functions.
>>
> isn't this redundant with cpp(1)'s __FUNCTION__?
>
> if __FUNCTION__ isn't standard, then we should change
> it to __func__ in cpp and that's it

Um, how can the preprocessor know what function it's in the middle of?

(That’s why, unlike the preprocessor symbols __FILE__ & __LINE__, C99’s
__func__ is an identifier.)
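
A minimal C99 sketch of the difference:

	#include <stdio.h>

	void
	whereami(void)
	{
		/* __FILE__ and __LINE__ are substituted by the preprocessor;
		 * __func__ behaves as if every function body began with
		 * static const char __func__[] = "whereami";  (C99 6.4.2.2) */
		printf("%s:%d in %s\n", __FILE__, __LINE__, __func__);
	}

	int
	main(void)
	{
		whereami();	/* prints something like: t.c:11 in whereami */
		return 0;
	}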

--Joel

Federico G. Benavento

unread,
Nov 18, 2010, 10:15:40 PM11/18/10
to
my bad, I thought cpp(1) implemented __FUNCTION__...

On Thu, Nov 18, 2010 at 11:06 PM, Joel C. Salomon


<joelcs...@gmail.com> wrote:
> On 11/18/2010 05:50 PM, Federico G. Benavento wrote:
>> On Thu, Nov 18, 2010 at 2:30 AM, Joel C. Salomon <joelcs...@gmail.com> wrote:
>>> Why is __func__ listed as “unwanted”?  I’ve found it useful for some
>>> logging functions.
>>>
>> isn't this redundant with cpp(1)'s __FUNCTION__?
>>
>> if __FUNCTION__ isn't standard, then we should change
>> it to __func__ in cpp and that's it
>
> Um, how can the preprocessor know what function it's in the middle of?
>
> (That’s why, unlike the preprocessor symbols __FILE__ & __LINE__, C99’s
> __func__ is an identifier.)
>
> --Joel
>
>

--
Federico G. Benavento

Enrico Weigelt

unread,
Nov 23, 2010, 3:54:47 AM11/23/10
to
* Gary V. Vaughan <ga...@vaughan.pe> wrote:

> I have never used sysroot in all that time,

Maybe that's the problem. ;-p

I'm using sysroot all the time, since I don't want to tweak
every single package so that the right paths are found, at
build time as well as runtime.

W/ sysroot you build and install everything as it would be on
the target system (except that certain unneeded files will be
removed from the production target image). Most packages won't
need any special handling for that, as long as there's nobody
in the middle who messes up the paths.

> and no one else offered patches until quite recently.

Actually, I was trying to fix it several years ago, but the whole
codebase was so utterly complex that I decided to write a new
universal toolchain wrapper with a consistent and platform-agnostic
interface, and additionally a drop-in replacement for libtool
(ltmain.sh).

> Libtool has been immeasurably useful to me entirely without
> that particular feature.

For my projects it had been an absolute catastrophe, since
I simply *need* sysroot. And in the end it was much easier
replacing it completely than trying to fix it.

> > The whole idea of libtool essentially being a command line filter
> > instead of defining its own coherent abstraction interface
>
> What is incoherent or unabstract about offering a static-library
> like interface to building shared libraries, or an ELF like
> library versioning scheme?

I'm talking about libtool, the big script, not the ltdl library.

The main design problem here is that it's called instead of the
direct toolchain commands, but with a derivative of their
command line interface, changing commands in unobvious ways.
Its interface varies between platforms/toolchains.

With "coherent abstraction interface" I mean some completely
differnet interface that wraps behind all the platform/toolchain
specific things, which stays the same everywhere.

> > and one
> > implementation/configuration instance per target instead of an
> > autofooled instance per individual package build.
>
> How does that scale in the face of a dozen or more OSes,

For each platform you'll need a proper target configuration.
But only once per platform. And you can easily tweak it to your
specific needs, w/o touching all the individual packages.

> each with at least a handful of supported library releases

Libraries simply have to provide a coherent interface across
releases. Otherwise they're simply broken. Either fix them or
don't use them. Everything else leads to maintenance overhead
of exponential complexity.

> and compiler revisions each with a

Same as w/ OSes. Configure the target config once and for all.
(most likely the job of the distro maintainers).

> handful of vendor maintenance patches, each with several hundred API
> entry-points of which several dozen of each have non-conformances or
> outright bugs.

Fix the bugs instead of "supporting" broken stuff.

If you can't fix the system itself, the fixes can be put into the
toolchain (e.g. use fixed versions of broken/missing libc functions).

> Worse, many of my clients mix and match gcc with vendor ldd and
> runtime in various combinations or install and support 3 or more
> compilers per architecture.

They simply shouldn't do this. If the vendor's toolchain/libc is
broken, use a fixed one.

> Libtool figures out what to do in all of those thousands of combinations,
> by probing the environment at build time...

There are sometimes cases where these things cannot be guessed at
build time, or where the guess is simply wrong. I don't see that libtool
offers any clean interface for _specifying_ those things once
per target type.

> I'd *much* rather wait 90 seconds for each build than try to maintain a giant
> tabulation with thousands of entries, which go out of date every time a new
> patch or revision of libc or the compiler or the os or the linker comes along.

You don't need either. Just have one exact configuration per target
(which itself *might* be created with the assistance of some separate
detection tool). Normally that's the job of the distro maintainers.
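
Purely illustratively, one such per-target file might read:

  # arm-linux target configuration, written once by the distro maintainer
  CC=arm-linux-gcc
  SYSROOT=/opt/targets/arm-linux
  SHARED_LDFLAGS=-shared

(the names are made up; the point is that every package build consumes
the same file instead of re-probing.)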

> > See git://pubgit.metux.de/projects/unitool.git
>
> Java? For a bootstrapping tool?

This is just a reference implementation, a proof-of-concept.

> Does Java even get worthwhile support outside of Windows and Solaris these days?

Yes, it works fine, for example on GNU/Linux - gcc builds ELF executables
out of it easily.

> If it works for you, that's great, but I would have an extremely
> hard sell on my hands if I told my clients they would need to have
> a working Java runtime on Tru64 Unix before I could provide a zlib
> build recipe for them :-o

In my model, unitool is part of the toolchain. In a way, it *is* the
toolchain (or at least the front end).

BTW: zlib uses neither libtool nor autoconf.

> > That's an issue of individual platform backends, which should be
> > completely transparent to the calling package.
>
> Agreed, that's what libtool provides, but to do that it needs to be intimately
> familiar with how each combination works. It certainly shouldn't be trying
> to do that without calling the vendor compiler and linker.

Still it's not completely transparent. It rewrites parts of the
command line and passes the rest through. Similar to m4, it's a
filter, not a real abstraction.



> >> 3. There's no use in fighting against GNU Autoconf and GNU Automake,
> >
> > Ah, resistance is futile ? ;-o
>
> Without user acceptance, the 2 man-years of effort I could sink into a new
> all-singing, all-dancing build system would be a waste of my time.

That's not necessary. Just clean up the code of the packages you need.
It doesn't take any new super-duper build system - in most cases,
clean Makefiles and a _properly_ written shell script (where most of
the functions can come from a separate library package, thus collecting
the knowledge in one place) will suffice. All that's now done via complex and
error-prone m4 macros could easily be done using shell functions.

> Yep. If I'm porting a cmake package (for example) to the 30 architectures
> we support, and shared libraries are required - calling the installed libtool
> from the cmake rules is an order of magnitude less work than trying to
> encode all the different link flags, install and post install rules or other
> system specific peculiarities into the compile and link rules in every build
> file...

hmm, doesn't cmake support including a common file which contains
several variables and is generated by some well-written
shell script, or could be tweaked manually when required?

Greg Comeau

unread,
Nov 25, 2010, 4:39:44 AM11/25/10
to

In article <AANLkTikPMGQL8MkTb9TrJ...@mail.gmail.com>,

Federico G. Benavento <bena...@gmail.com> wrote:
>isn't this redundant with cpp(1)'s __FUNCTION__?
>
>if __FUNCTION__ isn't standard, then we should change
>it to __func__ in cpp and that's it

I'm not sure what cpp's __FUNCTION__ is, but be careful, as it's
easy to lose some semantics in subtle ways and/or corner cases
when making such "easy" changes. That said, isn't cpp independent
of the current compilers? And, as well, even if cpp were to support
some analogous capability, cpp does not normally have the capability
to obtain function names, as those are only known in a later phase of
translation.
--
Greg Comeau / 4.3.10.1 with C++0xisms now in beta!
Comeau C/C++ ONLINE ==> http://www.comeaucomputing.com/tryitout
World Class Compilers: Breathtaking C++, Amazing C99, Fabulous C90.
Comeau C/C++ with Dinkumware's Libraries... Have you tried it?
