Acme is now included in the ports tree, and has been
for a few weeks. Rob tracked down a bad stack overflow
in libregexp, and since then it's been quite stable.
I changed some constants in 9term to make it look more
like a normal rio window and less like an acme window.
There is a new command "rio" which is David Hogan's 9wm
cleaned up to give most of the modern rio interface rather
than 8.5. See src/cmd/rio/README for more. It's missing
a few minor details (e.g., I would like to be able to drag window
borders to resize and move windows), but for the most part
it's quite convincing. Rio and 9term are not well tested and
are only known to work on Linux, so they're not built by
default. If you want them, cd into their directories and mk install.
Bug reports appreciated. As usual see the notes linked
from the main web page for more.
Russ
Noah
9c fortune.c
9l -o o.fortune fortune.o ../../lib/libsec.a ../../lib/libfs.a ../../lib/libmux.a ../../lib/libregexp9.a ../../lib/libthread.a ../../lib/libbio.a ../../lib/lib9.a
install -c o.fortune ../../bin/fortune
9c freq.c
9l -o o.freq freq.o ../../lib/libsec.a ../../lib/libfs.a ../../lib/libmux.a ../../lib/libregexp9.a ../../lib/libthread.a ../../lib/libbio.a ../../lib/lib9.a
install -c o.freq ../../bin/freq
9c fsize.c
9l -o o.fsize fsize.o ../../lib/libsec.a ../../lib/libfs.a ../../lib/libmux.a ../../lib/libregexp9.a ../../lib/libthread.a ../../lib/libbio.a ../../lib/lib9.a
install -c o.fsize ../../bin/fsize
9c idiff.c
idiff.c:15: error: conflicting types for `opentemp'
../../include/lib9.h:459: error: previous declaration of `opentemp'
mk: 9c idiff.c : exit status=exit(1)
mk: for i in ... : exit status=exit(1)
mk: for i in ... : exit status=exit(1)
>idiff didn't compile :(
>
>
should work now.
On linux/x86, gcc wants LL for the uvlong constants in nan64.c.
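For example (illustrative values only, not the actual nan64.c constants):

typedef unsigned long long uvlong;

/* on linux/x86, gcc wants an explicit LL suffix on 64-bit constants */
static uvlong uvinf = 0x7FF0000000000000LL;

int
main(void)
{
	return uvinf == 0;	/* just reference the constant */
}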
Acme fails to build (but mk doesn't stop):
9l -o o.acme acme.o addr.o buff.o cols.o disk.o ecmd.o edit.o elog.o exec.o file.o fsys.o look.o regx.o rows.o scrl.o text.o time.o util.o wind.o xfid.o ../../../lib/libcomplete.a ../../../lib/libplumb.a ../../../lib/libfs.a ../../../lib/libmux.a ../../../lib/libthread.a ../../../lib/libframe.a ../../../lib/libdraw.a ../../../lib/libbio.a ../../../lib/lib9.a -L/usr/X11R6/lib -lX11
look.o(.text+0x29): In function `plumbproc':
/home/schwartz/plan9/src/cmd/acme/look.c:33: undefined reference to `plumbrecvfid'
look.o(.text+0x77): In function `startplumbing':
/home/schwartz/plan9/src/cmd/acme/look.c:43: undefined reference to `plumbopenfid'
look.o(.text+0xaa):/home/schwartz/plan9/src/cmd/acme/look.c:50: undefined reference to `plumbopenfid'
look.o(.text+0x72e): In function `look3':
/home/schwartz/plan9/src/cmd/acme/look.c:159: undefined reference to `plumbsendtofid'
collect2: ld returned 1 exit status
mk: 9l -o o.acme ... : exit status=exit(1)
In 9term (and maybe other places) it would be nicer if it wouldn't abort
if the plumber isn't running.
P.S. Minimalism is no longer confined to Plan 9 but is
pervasive throughout Unix systems. Great job!
kyxap
If you downloaded the big tarball rather than use CVS,
you didn't get rio, for instance. I've fixed that now.
Russ
Noah
The rio wm has kept the original 8.5 feel though, so I backported my changes
to 9wm into it to make it feel like rio does on window creation and
movement. Here's what the changes do:
- the user is prompted to swipe a window whenever a new one is created
(via the menu's New option or any other way). clicking a mouse button
will cancel the swipe and open the window with whatever default size
it has.
- moving a window (with the menu's Move option) does not reposition the
mouse at the top right corner of a window
- the menu font is changed to lucidasanstypewriter-12 -- it looks just
like the default installation font and is good for laptops and larger
monitors
- ignore GraphicsExpose events -- makes rio much quieter...
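the GraphicsExpose change amounts to something like this in a plain Xlib
event loop (a sketch of the idea only, not the actual diff):

#include <X11/Xlib.h>
#include <stdio.h>

int
main(void)
{
	Display *dpy;
	XEvent e;

	dpy = XOpenDisplay(NULL);
	if(dpy == NULL){
		fprintf(stderr, "cannot open display\n");
		return 1;
	}
	for(;;){
		XNextEvent(dpy, &e);
		switch(e.type){
		case GraphicsExpose:
		case NoExpose:
			break;	/* XCopyArea generates these; just drop them quietly */
		default:
			break;	/* a real wm dispatches everything else here */
		}
	}
}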
at some point i should think about adding support for resizing windows by
dragging the borders..
tell me if you want me to send them, they're about 30 lines altogether, 4
files...
andrey
But IMO the window placement is better as it is now until there's something that works like the window command.
Personally I really don't open that many windows after I start initially, but I have a setup with a couple of windows that I like to position exactly right using geometry commands in X. When 9wm-1.2 opens I have to left click to make all of the programs in my .xinitrc position themselves correctly.
In Russ's version they pop up automatically right where I want them.
Noah
----- Original Message -----
From: andrey mirtchovski <mirt...@cpsc.ucalgary.ca>
Date: Sunday, March 21, 2004 4:03 pm
Subject: Re: [9fans] acme, rio workalike available in plan 9 ports
you're right. Russ also wants the current behaviour with regards to
opening windows to be kept untouched (i.e. the create-and-sweep patch
won't be applied). i've sent a one-line diff to swipe on create only
for xterms and 9terms -- if you open a terminal it prompts you to
sweep, all other programs just open windows without user interaction.
i'm convinced that's the right way to go now too :)
andrey
Tried building all (esp. acme) on SunOS 5.8.
Build works best with gcc.
Anybody else trying the stuff on SunOS 5.8?
Got a number of warnings, and some 'fatal' problems:
no futimesat for lib9/dirfwstat.c (could not find futimes at all)
no TIOCGWINSZ for cmd/mc.c (needs <sys/termios.h>)
Unfortunately I'm still running 8-bit depth.
Running acme on 8-bit depth only works after commenting out
in libdraw/x11-init.c the XMatchVisualInfo tests for 16, 15, 24 bits.
After that, acme runs on 8-bit.
However, resizing causes segfault.
I see what happens but don't see how to fix it.
in getwindow (or before?) the (Image) row.tag.fr.b gets freed,
I guess, because after the mallocz in getimage0
(called from getwindow from acme.c) it is zeroed (id field incremented),
with a resulting segfault when row.tag.fr.b->display is accessed in frinit.
Commenting out the free(i) in freeimage in libdraw/alloc.c
means it no longer segfaults, but then when I resize
to make the window larger, the additional space is not used by
the text already in the window, which redraws at the old size.
When I Delcol and then Newcol the rightmost column, the stuff
in the new column does use the whole available size.
If more info is needed to track this down, I'm happy to help.
Axel.
i am running SunOS 5.8 as my only system (at work).
i found that sun cc (Forte/6u2/SUNWspro/bin/cc) does not accept some
plan9 cc constructs. ramfs.c has
char *(*fcalls[])(Fid*) = {
[Tversion] rversion,
where [Tversion] is not understood.
moreover, there are some files where plan9 cc allows a cast to a struct,
which sun cc does not allow. one example, acme.c, has
rs = cleanrname((Runestr){rb, nr});
> Got a number of warnings, and some 'fatal' problems:
> no futimesat for lib9/dirfwstat.c (could not find futimes at all)
> no TIOCGWINSZ for cmd/mc.c (needs <sys/termios.h>)
using gcc it is possible to get this far. note that mc.c is also missing
struct winsize; it too is found in <sys/termios.h>.
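a minimal sketch of the kind of check i mean (not the real mc.c code; the
include is the only point):

#include <sys/termios.h>	/* on SunOS 5.8 this provides TIOCGWINSZ and struct winsize */
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdio.h>

int
main(void)
{
	struct winsize ws;

	if(ioctl(1, TIOCGWINSZ, &ws) < 0){
		perror("TIOCGWINSZ");
		return 1;
	}
	printf("%d rows, %d cols\n", ws.ws_row, ws.ws_col);
	return 0;
}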
...deleted
> If more info is needed to track this down, I'm happy to help.
add me to the list of happy to help'ers.
bengt
this is Ken C. [Tversion] is used as the value for the index in
the array to reference rversion.
for lunix systems, just hack/comment them out and make sure
all the tables are in sync (by hand or |sort).
> moreover, there are some files where plan9 cc allows a cast to a struct,
> which sun cc does not allow. one example, acme.c, has
>
> rs = cleanrname((Runestr){rb, nr});
more Ken C: use a temporary variable.
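in plain C the two rewrites look roughly like this (a sketch: Fid, Runestr,
Tversion, rversion and cleanrname below are dummy stand-ins, not the real
ramfs.c/acme.c declarations):

#include <stdio.h>

typedef struct Fid Fid;
struct Fid { int fid; };

enum { Tversion = 100 };

char *rversion(Fid *f) { (void)f; return "9P2000"; }

/* ken c:    [Tversion] rversion,
 * gcc/c99:  [Tversion] = rversion,   (designated initializer)
 * sun cc:   write the table out in plain index order by hand */
char *(*fcalls[Tversion+1])(Fid*) = {
	[Tversion] = rversion,
};

typedef struct Runestr Runestr;
struct Runestr { char *r; int nr; };

Runestr cleanrname(Runestr rs) { return rs; }

int
main(void)
{
	char *rb = "example";
	int nr = 7;
	Runestr rs, tmp;

	/* ken c:   rs = cleanrname((Runestr){rb, nr});
	 * sun cc:  build the struct in a named temporary first */
	tmp.r = rb;
	tmp.nr = nr;
	rs = cleanrname(tmp);

	printf("%s %.*s\n", fcalls[Tversion](NULL), rs.nr, rs.r);
	return 0;
}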
gosh. I thought you were boyd.
nah, i'm nemo.
> i found that sun cc (Forte/6u2/SUNWspro/bin/cc) does not accept some
> plan9 cc constructs. ramfs.c has
>
> char *(*fcalls[])(Fid*) = {
> [Tversion] rversion,
>
> where [Tversion] is not understood.
>
> moreover, there are some files where plan9 cc allows a cast to a
> struct, which sun cc does not allow. one example, acme.c, has
>
> rs = cleanrname((Runestr){rb, nr});
thanks. i tried to remove most of these,
but i missed a few. gcc gets more and more
like ken's c every day. i haven't yet worked
very hard on sunos. i'm glad to hear it works
though.
russ
Then I must be boyd.
except for size, speed, and readability.
>except for size, speed, and readability.
and most important, being predictable.
i was referring to the language, not the compiler.
though your points apply equally well.
no, he's susan, remember?
Great job!!! Thanks!!!
FYI:
Another thing that needs some work on sunos 5.8
is bin/ps (almost there, I'll supply a patch)
Please don't take this as criticism, but I agree with
what is suggested in the NOTES, if I read it correctly,
that it would be nice to have multiple architecture-dependent
bin/lib directories as we have in plan 9
(I now have separate trees for linux and sunos, and thought
of using (ahem... cough...) symlink tricks to share sources)
wrt (acme) color depth: I found why I could not run 24bit colors;
I'll try to clean up and (off list?) supply a patch.
Have a similar problem with drawterm, would be cool
if the same fix works there!
The acme resizing problem still beats me -- today I
built for linux (Red Hat Linux 7.3) and have the same
problem there (only tried running remotely, display on sun).
Axel.
see also /usr/inferno (or the like).
ア
the problem isn't that i don't know what the structure should look like.
the problem is that i don't want to do it. i find it annoying to have
to type things like /usr/local/plan9/FreeBSD/386/bin/acme, and further
it makes writing shell scripts impossible:
#!/usr/local/plan9/FreeBSD/386/bin/rc
isn't very portable! you'd have to have a shell script "rc" that ran
the binary rc and then use
#!/usr/local/plan9/rc/bin/rc
or something like that. it's just disgusting. i don't have many
environments
where different architectures share a tree like /usr/local, so i don't worry
much about it. if we had union directories (or even athena's environment
variable symlinks) to hide the ugliness, i might feel differently.
it's easy enough to change if you want to change it -- just edit
/usr/local/plan9/src/mkhdr and change BIN and LIB.
russ
I see your point.
and, for the record, i wasn't suggesting you do so. given the very
different needs (and tastes) at different sites, i generally think
the best route for a package maintainer is to aim for the general
target (/usr/local/plan9 or whatever) and provide a sane way to
build in other places and to change the default location the tools
look in. i think rsc's dist does that nicely, and (were i still
in a heterogeneous environment) allows me to do either the symlink
or environment variable tricks just fine.
ア
plumber works after fixing a name clash in fsys.c (s/clock/myclock/)
In acme, for some commands, button-2 execute consistently fails
(command args matter: 'echo date' works, 'echo $PLAN9' doesn't)
with an error like
echo : sys: segmentation violation
in the main +Errors window, not in the one for the directory
associated with the tag in which I have the command.
I'll start digging - but maybe somebody has an idea?
In the same way, the plan9/bin/ps (patched for sunos)
occasionally gives a similar error about sort
(fail to reproduce right now). Any ideas, anyone?
Axel.
surely you would have set your $path to include
/usr/local/plan9/FreeBSD/386/bin ?
> and further
> it makes writing shell scripts impossible:
>
> #!/usr/local/plan9/FreeBSD/386/bin/rc
>
> isn't very portable! you'd have to have a shell script "rc" that ran
> the binary rc and then use
>
> #!/usr/local/plan9/rc/bin/rc
>
> or something like that. it's just disgusting.
is it correct to assume that this unportableness of rc scripts is the
reason for using /bin/sh all over the place? including in mk, which is
not entirely to my liking.
> i don't have many
> environments
> where different architectures share a tree like /usr/local, so i don't
> worry
> much about it. if we had union directories (or even athena's environment
> variable symlinks) to hide the ugliness, i might feel differently.
how about ignoring those that have the same home directory on several
architectures? this leaves ''only'' the simpler problem of installing
different files into $PLAN9/bin depending upon architecture.
the reason that i am willing to ignore these people is that i no longer
am one of them. at the time i did inded use the "disgusting" method of
having $PLAN9/bin/rc calling different binaries.
> it's easy enough to change if you want to change it -- just edit
> /usr/local/plan9/src/mkhdr and change BIN and LIB.
since i am not entirely sure about what "it's" refers to, i will ask:
do you mean that changing BIN and LIB will make it possible to install
into /usr/local/plan9/FreeBSD/386
bengt
> Russ Cox wrote:
> ...deleted
>
>>
>> the problem isn't that i don't know what the structure should look like.
>> the problem is that i don't want to do it. i find it annoying to have
>> to type things like /usr/local/plan9/FreeBSD/386/bin/acme,
>
>
> surely you would have set your $path to include
> /usr/local/plan9/FreeBSD/386/bin ?
sure but i still type the paths a lot.
> > and further
>
>> it makes writing shell scripts impossible:
>>
>> #!/usr/local/plan9/FreeBSD/386/bin/rc
>>
>> isn't very portable! you'd have to have a shell script "rc" that ran
>> the binary rc and then use
>>
>> #!/usr/local/plan9/rc/bin/rc
>>
>> or something like that. it's just disgusting.
>
>
> is it correct to assume that this unportableness of rc scripts is the
> reason for using /bin/sh all over the place? including in mk, which is
> not entirely to my liking.
the reason mk uses sh is that it came before rc.
i'm not sure what to do here. i'm not convinced
rc is quite stable enough to be used in mk.
> > i don't have many
>
>> environments
>> where different architectures share a tree like /usr/local, so i
>> don't worry
>> much about it. if we had union directories (or even athena's
>> environment
>> variable symlinks) to hide the ugliness, i might feel differently.
>
>
> how about ignoring those that have the same home directory on several
> architectures? this leaves ''only'' the simpler problem of installing
> different files into $PLAN9/bin depending upon architecture.
how does this differ from what i am currently doing?
>> it's easy enough to change if you want to change it -- just edit
>> /usr/local/plan9/src/mkhdr and change BIN and LIB.
>
>
> since i am not entirely sure about what "it's" refers to, i will ask:
> do you mean that changing BIN and LIB will make it possible to install
> into /usr/local/plan9/FreeBSD/386
yes.
solid as a rock, since '91 or so. ran it for 2.5 years at PRL as my shell,
on multiple architectures and OSes.
>| rc is quite stable enough to be used in mk.
>
>Byron's rc is!
>
>
if not!
curmudgeon ;)
i would conjecture that we have not one, but two ''mk''.
one from mk-20040301.tgz that is of interest to a unix user that wants a
new ''make'', but is not interested in plan9. this mk does not need to
use ''rc''.
the other ''mk'' is the one in plan9-20040321.tar.gz. this mk will be
used by someone interested in plan9. i believe that this mk should use
rc. if rc is not stable enough it should be made stable enough.
a _starting point_ for testing plan9 rc could be the ''trip'' test that
comes with byrons rc.
since it is never a good idea to have different commands with the same
name (provided we have the same context. in this case unix) i suggest
that one of the two mks is renamed. the one to rename, imho, is the one
that uses sh. possible names would be: smk, umk (or mks, mku).
...deleted
>>
>> how about ignoring those that have the same home directory on several
>> architectures? this leaves ''only'' the simpler problem of installing
>> different files into $PLAN9/bin depending upon architecture.
>
>
>
> how does this differ from what i am currently doing?
in plan9-20040321.tar.gz the ''ps'' command installed in $PLAN9/bin did
not work on SunOS. i think it was working on another architecture. my
solution was to choose/build the correct ''ps'' during install.
i see that ''ps'' in plan9-20040330.tar.gz has solved the problem by
detecting SunOS at runtime instead.
about the new ''ps''.
(a warning here. i am using openbsd man pages to interpret ''-axww''.
this could very well be wrong since you presumably did not create ''ps''
on openbsd. however, openbsd seems to have the best man pages available
online. please correct me if i am mistaken.)
the ''x'' flag on openbsd means "Display information about processes
without controlling terminals."
the ''a'' flag on SunOS means "Lists information about all processes
most frequently requested: all those except process group leaders and
processes not associated with a terminal."
i am not a native english speaker but my interpretation is that ''a''
does not include processes without controlling terminals. i therefore
suggest that it is removed. ''A'' on its own should work.
bengt
the starting point in question proved to be a bit of a problem. the very
first thing (not even a test i presume) of trip.rc is:
# trip.rc -- take a tour of rc
# Invoke as "path-to-new-rc < trip.rc"
rc=$0
echo tripping $rc
byrons rc (brc) thinks that $0 is:
/home/eleberg/plan9/bin/rc
but plan9 rc (9rc) thinks $0 is:
/dev/stdin
i am sure there is a perfectly good reason for this. does anybody know
the reason and have the time to explain?
bengt
I'm afraid Byron made some changes from (the one true?) rc;
some of these changes were useful, some just taste, but the bottom line
is that trip.rc will not run under plan9 rc without a lot of work.
-Steve
it's the name of the file (script) that rc is running, which isn't usually
where-my-bin-is-today/rc itself
on Plan 9, it's #d/0
these are the same mk, built from the same sources.
> the other ''mk'' is the one in plan9-20040321.tar.gz. this mk will be
> used by someone interested in plan9. i believe that this mk should use
> rc. if rc is not stable enough it should be made stable enough.
in an ideal world, where there is more time for such things.
it is slowly becoming stable enough. there is some funniness
with process groups and backgrounded processes and interrupt
notes that i have yet to work out, but otherwise it is now fine.
> since it is never a good idea to have different commands with the same
> name (provided we have the same context. in this case unix) i suggest
> that one of the two mks is renamed. the one to rename, imho, is the
> one that uses sh. possible names would be: smk, umk (or mks, mku).
it's the same tool, just using a different shell. renaming it seems
quite weird.
should it read umkfile too? the last thing we need is for mk to bifurcate
into odd variants just like make has.
there is historical precedent for mk on unix using sh. i'm not claiming it
should, just that it's not obviously wrong.
there are non-plan 9 users who use mk on unix and expect it to use sh.
telling them that all of a sudden they have to rename their mkfiles and
start typing umk is odd.
i have some ideas about how to solve the problem without splitting mk,
but at the moment it's not very high on my list. the number of recipes
i write that aren't simultaneously valid rc and valid sh is very small.
russ
i maintain several packages across Plan 9 (rc), Unix (sh) and Windows (rcsh.exe) that way.
it seems easier than having to have several files with similar contents but different names.
yes, in the 'general case' a plan 9 rc script will break on byron's rc.
as russ said: 'if not' being replaced by 'else' was a bad idea.
you are of course correct. i was being very fuzzy in my writing here. i
was not thinking of ''mk, the implementation'', but more of ''mk, the
program the user interfaces to''.
>> the other ''mk'' is the one in plan9-20040321.tar.gz. this mk will be
>> used by someone interested in plan9. i believe that this mk should use
>> rc. if rc is not stable enough it should be made stable enough.
>
>
>
> in an ideal world, where there is more time for such things.
> it is slowly becoming stable enough. there is some funniness
> with process groups and backgrounded processes and interrupt
> notes that i have yet to work out, but otherwise it is now fine.
is it ok to interpret this to mean that rc is good enough to be used as
the shell in mk?
>> since it is never a good idea to have different commands with the same
>> name (provided we have the same context. in this case unix) i suggest
>> that one of the two mks is renamed. the one to rename, imho, is the
>> one that uses sh. possible names would be: smk, umk (or mks, mku).
>
>
>
> it's the same tool, just using a different shell. renaming it seems
> quite weird.
> should it read umkfile too? the last thing we need is for mk to bifurcate
> into odd variants just like make has.
i fully agree. it was this kind of problem that made me raise the
question in the first place. i think mk using sh is an odd variant. i
think of mk as using rc. this is also why i decided it would be best
(for me) to penalise old mk-on-unix users (the ones already using sh).
> there is historical precedent for mk on unix using sh. i'm not claiming it
> should, just that it's not obviously wrong.
>
> there are non-plan 9 users who use mk on unix and expect it to use sh.
> telling them that all of a sudden they have to rename their mkfiles and
> start typing umk is odd.
this is true. however, how large a user base is necessary to stop
progress in the name of backwards compatibility?
> i have some ideas about how to solve the problem without splitting mk,
> but at the moment it's not very high on my list. the number of recipes
> i write that aren't simultaneously valid rc and valid sh is very small.
a kitchen sink approach would be for mk to recognise sh or rc recipes,
and handle them accordingly.
another solution would be for mk to check for $PLAN9 and then use rc,
but mostly this would mean mkfiles that sometimes work and sometimes
do not.
we could call mk-with-rc for rmk, mk9, ...
or use ''mk -r[c]''.
i will use mk instead of make, even if you decide to keep sh.
bengt
if the unix port of plan9 rc had been very unstable (something i
prematurely thought was the case when mr cox did not want to use rc as
the shell for mk) trip.rc might have been a _starting point_ for testing
plan9 rc.
since then mr cox has explained that there are only a few problems (with
process groups and backgrounded processes and interrupt
notes) left with the unix port of plan9 rc, this is not so interesting
any more. although i would suggest that a test suite is always a good thing.
bengt
Perhaps I am missing the point entirely but do unix mk-with-sh users
generally (always) use sh as their login shell?
One could just code mk to fork the shell specified in the SHELL=
environment variable if it exists (bash/ash/sh etc.) and rc if not
(I'm pretty sure rc doesn't set shell=).
one could also override this using:
SHELL=/bin/sh mk
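Roughly this, as a sketch of the idea (not mk's actual code):

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	const char *shell = getenv("SHELL");

	/* honour $SHELL when the user's environment sets it,
	 * fall back to rc otherwise (per the note above, rc leaves it unset) */
	if(shell == NULL || shell[0] == '\0')
		shell = "rc";
	printf("recipes would run under %s\n", shell);
	return 0;
}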
It does feel a bit like more(1) changing mode when stdout
is a tty, but mk appears to have fallen the way of all software
(backward compatibility) so something has to give...
just my 2¢
-Steve
not sure. ask Linus.
russ
compatibility? ohh, you can get into a real mess with that on linux when
your shared libs don't match your binaries.
imho the idea with mk is that it should be possible to send somebody
mkfiles (and source etc) that will work on their system. generally
speaking it is unlikely that ''they'' are all running the same shell, or
even one that is compatible with the SHELL in your environment.
therefore it is probably a good idea that mk uses one and the same shell
everywhere. in my opinion that shell should be rc.
bengt
the problem isn't with mk, it's the recipes; the chunks of code
that get run could (should?) be written in a 7th Ed shell or rc
independent 'style' if possible.
debian managed to shoot themselves in the foot with some libc,
some time back. you couldn't go forward 'cos other stuff would
break and you couldn't go back 'cos more stuff would break.
i smirked a lot.
A "SHELL=" assignment in the mkfile could go a long way to guarantee
recipe compatibility, surely? Or should it be a command line argument
to mk?
++L
PS: I use mk very little, so I may be off the mark above.
no, consider SHELL=/bin/cat
Sure, but the assumption is that the user distributing a mkfile
actually would want it to produce the desired result? If
SHELL=/bin/cat appears in such a mkfile, then it is a joke.
I'm assuming that the various /bin/^(sh csh ksh bash tcsh) are
reasonably portable across platforms that the publisher's intent can
mostly be met. Not unlike having #!/bin/rc at the beginning of a
script.
The interesting question is whether mk, unlike make, can be shell
agnostic. If I specify SHELL=/usr/bin/perl and supply perl recipes,
will mk cope more or less invisibly?
++L
I meant, in this theoretical situation where SHELL= determines which
shell to use, of course.
++L
> debian managed to shoot themselves in the foot with some libc,
> some time back. you couldn't go forward 'cos other stuff would
> break and you couldn't go back 'cos more stuff would break.
gets better. Symbols are now versioned (well, this really happened a few
years back). So you are very tightly screwed (good word) to the library
you use, and it covers a definite range forward/backward. I assume but am
not sure that glibc nowadays encompasses lots of versions of lots of
functions going back for years. Which also means the version naming of the
file (libc-2.3.2.so) has a lot less meaning than it used to. At some
point, given the tight wiring of an executable to the particular library
version, one starts to lose track of just why .so's are still thought to
be a good idea (I mean, on a 1960s-era Burroughs machine with not much
memory, I get it, but ... /bin/cat on my Redhat box at 20K, is not much
smaller than /bin/cat on Plan 9 (22K stripped), and the Plan 9 one doesn't
do symbol fixup every time it runs ...).
And, as Linus mentioned, TLBs matter. Hmm. Judging by 'ps', cat on linux
needs 256 of them, and cat on Plan 9 needs 6. xclock has got 900 or so,
and Plan 9 clock appears to have 3*30 or so (3 clock procs when you run
clock).
So you do pay a bit for .so's. You don't gain an
implementation-independent interface for your programs, since the .so is
versioned and the symbols in it are versioned; I wonder what you DO gain?
The theory always was you could swap out a shared library and swap in a
bug-fixed version, which sounds nice until you try it and it fails
miserably (there was a time when this worked ...)
ron
Why does this no longer work? I've upgraded glibc on a gentoo linux
system without much difficulty.
I like the simplicity of static linkage but it seems like a nightmare if
Oracle had to build/test/ship a new version of their code every time
there was a bug in printf or some xml parsing library.
Regarding the TLB and SOs, I can't figure out all the fuss either since
the TLB is flushed on context switch (on x86 at least).
eli
> Regarding the TLB and SOs, I can't figure out all the fuss either since
> the TLB is flushed on context switch (on x86 at least).
and that's the problem.
ron
i think they still have to test their software with the changed bit, regardless of whether the library
is statically or dynamically linked! indeed, i'd have thought one
advantage of static linking in that case is that the thing being run
is more likely to be the thing that was tested, without (say)
a new malloc being substituted by dynamic linking.
> >>Oracle had to build/test/ship a new version of their code every time
>
> i think they still have to test their software with the changed bit, regardless of whether the library
> is statically or dynamically linked!
Look at it this way: see how many vendors ship a shared library with their
code. That should answer that question :-)
ron
For security bugs this is a major disadvantage, because you never want to
keep an old copy of the library around. A buffer overflow in strcmp is now
present in practically every binary on the system. Security bugs need to
be fixed asap -- updating copies of all your applications (which you might
not have sources for) just to get rid of a single buffer overflow seems
unacceptable. This of course assumes you can easily track different
versions of strcmp in all your applications with static linking, which is
time intensive at best.
Plan 9 on the x86 does not do a very good job of managing the TLB; when
the port was done there was only the option of flushing it entirely. Better
code for the modern variants may happen.
Where could it be improved? My understanding is that on x86 the TLB needs
to be flushed entirely on context switch regardless of the OS (unless you
want to carve up the 4GB linear address space among all processes and use
segmentation registers).
> Plan 9 on the x86 does not do a very good job of managing the TLB; when
> the port was done there was only the option of flushing it entirely. Better
> code for the modern variants may happen.
I'm not being real specific I think. I'm just saying that if you have to
flush/reload the tlb, it's nicer if your use of the tlb is rather small, as it
is for Plan 9 programs.
ron
in this case, the problem lies with oracle.
WTF? strcmp() is not exactly _hard_ ...
if the system requires shared libs to be maintainable it is, by
definition, unmaintainable.
on 1 MIP VAXes we could re-compile the whole kernel in less than 20 minutes.
now, on plan 9 you can do it in 20 seconds (or less).
iirc the VAX would let you flush one entry or the whole thing; something
not to mess up should you implement copy-on-write fork.
for how many architectures?
i'll bet you it's a lot less woe and time than any oracle/lunix related
screwup.
someone has the numbers. anyone?
took me about 45 mins on an athlon 800
45 minutes seems a little long, though there is a lot of code to
compile.
nothing, compared to the time needed to read oracle doc.
it was from kfs running on my pentium 150 with 64mb ram
my ether is 100mbit switched
I was impressed enough with 45mins after previously waiting all night for XFree to compile
or the aforementioned 3 hour install of the MS Core SDK files
m
i'm sure that when i installed visual whatsit .net from CD it deployed a new `autoboot' feature
that allowed it to do the necessary 9 or was it 15 reboots without my having to be there.
shame about the one modal dialogue that did appear that suspended it early on!
ghostscript takes most of the time, i bet.
the nightly builds used to take almost exactly
one hour on a fairly fast p4 machine. 386 only.
> > how long does it take to recompile the entire system if you have a bug in
> printf?
wrong question. On many of these later systems, as I said, app .o is tied
to lib .so. So it doesn't matter if you just rebuild the library -- the
app may not be fixed because it is tied to (e.g.) GLIBC 2.0 versioned
symbols, and the fixed symbols are later versions. So you replace a broken
libc.so with a new one, and guess what? not all your apps are fixed. You
have to replace the apps anyway, so they'll use the right symbols.
Maybe I'm painting a grimmer picture than reality, but that's the way it
looks to me.
The old goal of shared libs was to provide an implementation-independent
interface to a set of functions in a possibly changing library. It's
pretty hard to square this goal with the idea of versioned symbols. I've
seen cases on some OSes where the vendor told you (to fix a bug) to update
the shared library and all the apps -- although the type signature of the
fixed function in question had not changed. That's weird.
A rebuild of all of gnubin is a rather long process. Best way to find out
how long is to build a gentoo linux system. It's interesting.
ron
no, i didn't!
Pentium 4, 2.66Ghz, local fossil, only 386 compiled:
450 seconds with the default fossil
320 seconds with a fossil compiled with brucee's
non-vlong-challenged compiler
that from two weeks ago, could be faster today ;)
i get
Wed Apr 21 01:17:54 BST 2004
217.64u 158.54s 891.96r mk all
h% cat '#P/cputype' # cpu server
AMD-Athlon 750
athlon 800 /sys/src/fs file server, scsi drives, 128mb ram
i didn't do a mk install just a mk all >/dev/null, after a
ramfs and mk clean
> no, i didn't!
It's some dude's mailer -- it is misquoting things.
Sorry.
ron
undoubtedly, but if i need to be shot in the foot i'd rather do it myself ;)
if you want a job done properly ...
brucee
> > It's some dude's mailer -- it is misquoting things.
>
> undoubtedly, but if i need to be shot in the foot i'd rather do it myself ;)
actually the dude in question was me, but let's not get into it.
ron
me too
I did wonder why MS decided to change the appearance of the dialog controls
seems each developer will now be free to choose his/her own inconsistent button colours
m
since it is the mkfiles that are the ''problem'' it would be best to put
the information into them.
note that it would not be possible to use the full path to the shell,
since at least rc could be found in different locations, which
means that the current use of execl() in plan9/src/cmd/mk/unix.c would
have to be replaced with execlp().
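a minimal sketch of the difference (not the actual mk source, just the exec
call in isolation):

#include <unistd.h>
#include <stdio.h>

int
main(void)
{
	/* execl("/bin/sh", "sh", "-c", "echo hi", (char*)0); needs the full path;
	 * execlp() searches $PATH, so rc can be installed anywhere */
	execlp("rc", "rc", "-c", "echo hi", (char*)0);
	perror("execlp rc");	/* only reached if rc was not found */
	return 1;
}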
bengt
imho, you are mostly correct. it is mainly the recipes in the mkfiles
that are affected by the sh/rc differences.
however, i sometimes find myself doing this, or similar, in a mkfile:
DIR = `{pwd}
this is not what i call a recipe, and i do not know how to write it in
an independent style.
bengt
could you please expand upon your question. i am currently understanding
it as:
if we have a (future, theoretical) mk that can handle SHELL=xyz instead
of the current hard coded /bin/sh, would that work?
to me the only possible answer would be yes. perhaps your question
really is:
is it worth it to expand(/improve?) mk to be able to handle SHELL=xyz
instead of the current hard coded /bin/sh?
then i would guess that the answer is:
please, go ahead and try to do it. then we will know :-)
bengt
> imho, you are mostly correct. it is mainly the recipes in the mkfiles
> that are affected by the sh/rc differences.
> however, i sometimes find myself doing this, or similar, in a mkfile:
>
> DIR = `{pwd}
>
> this is not what i call a recipe, and i do not know how to write it in
> an independent style.
>
I can't see how it could be done portably. Teaching mk all sorts of
tricks would be counterproductive; may as well use GNU Make then.
++L
no, or rather perhaps you are! but it's just as likely to have been some other microsoft
stuff i installed two years ago, although since it was my machine
and that's about all i've got on it, i'd have thought it was VC++6.
i certainly don't propose to retry it to find out!
i'd assumed that was the main reason for microsoft making
their kit freely available.
i'd say so. i plugged my logitech webcam into a different
USB port and it auto-re-installed the s/w and THEN
wanted a reboot, even though said camera had been
working fine on the other USB port. i was just waiting
for a crash, but it was not to be. things did get _very_
slow for a _long_ time while it was doing this [700Mhz
P3]. this combined with some idiot asking me to turn
off my firewall turned my thoughts to applying
different 'tools'.
fancy stuff in a recipe has always been a bad idea, although mk
does fix this problem which then allows you to write fancy stuff.
goto fonfon;
Your statement doesn't make a lot of sense. The symbol version is
incremented when the interface changes, e.g. FILE grows or off_t or what
the heck. It's not incremented when the implementation changes. And
when you look over glibc you'll see that the symbols with older versions are
usually wrappers around the current version. And even if not you can
be sure the maintainers will fix all versions if it's possible without
changing the interface.
That being said, I'm the last one to defend glibc's bloat, but in a
system where you can't easily rebuild all binaries for whatever
reason shared libraries and symbol versioning makes a lot of sense.
and define a broken [unmaintainable] system.
> That being said, I'm the last one to defend glibc's bloat, but in a
> system where you can't easily rebuild all binaries for whatever
> reason shared libraries and symbol versioning makes a lot of sense.
No, it doesn't help much at all.
Let's take program 'a', which depends on stat. In the new order of gcc,
when built, 'a' will depend on stat from glibc 2.0. A new stat comes along
with fixes. It gets built into glibc 2.1. You install glibc 2.1. Program
'a', unless I rebuild or replace it, will be using the old stat. Of
course, I might think that the shared library has fixed all binaries using
stat, and I'm wrong -- or am I right? Is the V1 stat just a wrapper? who
knows? And do you cover all the cases? And maybe it isn't calling stat and
I don't know it. Maybe it's calling one of these:
000c888c t __GI___fxstat
000c90cc t __GI___fxstat64
000c90cc t ___fxstat64
000c888c T __fxstat
000c90cc T __fxstat64@@GLIBC_2.2
000c90cc T __fxstat64@GLIBC_2.1
000c90cc t __old__fxstat64
000c888c t _fxstat
I've found programs that call all these variants, because the functions
they call call different library functions. It's quite interesting to see.
Which one is 'a' calling? Oh yeah, you can max out the ld.so debug
options, because of course weak symbols come into this game too, and
you're not really sure unless you watch this:
19595: binding file /lib/libpthread.so.0 to /lib/libc.so.6:
normal symbol `getrlimit' [GLIBC_2.2]
19595: symbol=__getpagesize; lookup in file=date
19595: symbol=__getpagesize; lookup in file=/lib/libpthread.so.0
19595: symbol=__getpagesize; lookup in file=/lib/librt.so.1
19595: symbol=__getpagesize; lookup in file=/lib/libc.so.6
yup, several hundred lines of this stuff, for 'date'. Of course it's kind
of interesting: Posix threads are used by 'date'. I had no idea that
printing a date could be so complex. Maybe that's why it's 40k -- bigger
than some OSes.
The symbol versioning breaks assumptions users have about how shared
libraries work -- that they provide a link to one version of a function and
if you replace the library all the programs get fixed. I've seen this
problem in practice, for both naive users and very non-naive sysadmins.
The symbol versioning wires programs to something beyond a library
version, in a way that is not obvious to most people. To fix a binary that
uses a library, you have to replace the binary, not just the library, or
you can not be sure anything gets fixed.
That said, if you can't rebuild all the binaries, well then you're stuck,
and have no idea if your new shared library is going to fix anything at
all for some of those binaries. Some will stay broken, since replacing the
library did not necessarily replace broken functions -- the new library
has them too, for backwards compatibility. So the upgrade is not an
upgrade. This is a feature?
ron
Stop here. You don't get a new symbol version just because your new
version ends up in a new glibc. So unless your fix changes the
interface to stat it will retain its old symbol version. Bumping the
symbol version is an explicit action of the program author. And because
of the maintenance issue they rewrite the old version as a wrapper around
the new one in every case I've seen so far. If this wasn't the case
_and_ the author wouldn't update the version that would be considered a
huge bug indeed. But that's not what happens in real life.
I've looked for an example that shows this without too much code and
while we're at it shows some glibc braindamage (messing up kernel
syscalls when translating them to library calls), and that would be the
sched_getaffinity call, implemented in
sysdeps/unix/sysv/linux/sched_getaffinity.c.
We have a routine called __sched_getaffinity_new implementing a small
wrapper for the sched_getaffinity syscalls. With
versioned_symbol (libc, __sched_getaffinity_new, sched_getaffinity,
GLIBC_2_3_4);
it's exported as sched_getaffinity for the symbol version GLIBC_2_3_4.
Then
int
attribute_compat_text_section
__sched_getaffinity_old (pid_t pid, cpu_set_t *cpuset)
{
/* The old interface by default assumed a 1024 processor bitmap. */
return __sched_getaffinity_new (pid, 128, cpuset);
}
compat_symbol (libc, __sched_getaffinity_old, sched_getaffinity, GLIBC_2_3_3);
implements the older version ontop of the new one.
Well, here are details I was reluctant to put into email due to my
sensitive nature.
But take this one comment:
>And because of the maintenance issue they rewrite the old version as a
>wrapper around the new one in every case I've seen so far.
I think you're assuming a lot now and into the future. But I'll take your
word for it.
I think unless I'm wrong that you've actually demonstrated the problem I'm
trying to show -- an old binary, using a broken interface, doesn't get
fixed just because you update the library. What happens is that the old
binary retains the broken interface. This is not what people think of with
shared libraries -- they naively think you upgrade the library and good
things happen, bugs are fixed, angels sing. glibc retains old broken
interfaces as time moves forward -- is this really a good idea?
In other words, shared libraries certainly solve a problem, but I'm not
sure any more which one it is.
ron
E.g.
void print_msg(void) {printf("hello wrold\n");}
can be replaced in a newer version of the library by:
void print_msg(void) {printf("hello world\n");}
without changing the interface, but not with:
void print_msg(char *msg) {printf(msg);}
If it's an interface that's being changed, your program needs a source
code update to use the new function, and versioning can keep the old
interface available. (And fix the spelling, too...)
Have I completely misunderstood the problem?
--Joel
I think this depends on your definition of broken. E.g. if FILE grows
to support $RANDOM new feature is the old interface broken because of
that? If I had a third party program on my system that I don't have
sources for I'd be more than happy to keep that supported, yes.
I think I understand your point, but it looks more like a theoretical
issue than something that comes up in practice. Yes, symbol
versioning does allow you to keep the old version around, as do weak
symbols if you overrode it in your program, etc. The thing is that
the way it's used in practice works differently.
> In other words, shared libraries certainly solve a problem, but I'm not
> sure any more which one it is.
They still allow you to fix problems in a central place. Just because
you might have two versions of the same symbol that doesn't mean you're
magically unable to fix them.
> I think I understand your point, but it looks more like a theoretical
> issue than something that comes up in practice. Yes, symbol versioning
> does allow you to keep the old version around, as do weak symbols if you
> overrode it in your program, etc. The thing is that the way it's used
> in practice works differently.
not according to the poster from SGI, unless I missed his point, and who
knows? Maybe I did.
ron