
Which distribution?


Espen Kristensen

Jan 1, 2000
I got a Dual P2 300MHz, I got Redhat-6.1 and ppl say it's full of bugs.
Anyone got a suggestion on which distribution I should choose? I'm gonna
use the box as webserver, mailserver, nameserver and telnet.

Best Regards
Espen.

Martin Maney

Jan 1, 2000

I'm partial to Debian, myself. The one fly in that ointment is that the
current stable/released version is 2.1 (slink), and it came out just a
little too early to be based on kernel 2.2. OTOH, the unstable branch,
called "potato", is just before going into freeze for pre-release testing
and the "fix it or drop it" phase. FWIW, I've been running potato on a
non-critical machine for... how long has it been, anyway? Over a month,
certainly, probably almost two months. There have been a few interesting
moments, as when an update to the disk tools package left the machine
without an fdisk for a day or two, but on the whole it's unstable chiefly in
that they keep fixing broken things - I'd guess apt has been fetching and
installing an average of a MB or two per day.

If 2.2.x kernel features aren't at the top of your list, the slink-based
machines and even a couple of boxes that are still running hamm (the 2.0
release) all survived Y2K just fine (found one that had a display glitch -
turned out that box hadn't had the last round of hamm updates installed, so
it got fixed at about 11:50 local time last night <grin>).

Happy Y2K!

--
NT scales to quad Xeons? Linux scales to S/390. Benchmark that, soft ones.

Jeff Fisher

Jan 1, 2000
In article <hNqb4.7151$sH.3...@news1.online.no>, Espen Kristensen wrote:
>I got an Dual P2 300Mhz, I got Redhat-6.1 and ppl says it's full of bug.
>Anyone who got a suggestion on which distribution I should choose? I'm gonna
>use the box as webserver, mailserver, Nameserver and telnet.

The guys over at www.freebsd.org have a really good linux distribution. ;)

-----
Jeff Fisher
je...@jeffenstein.org http://www.jeffenstein.org
"Bother," said Pooh as he struggled with /etc/sendmail.cf, "It never
does quite what I want. I wish Christopher Robin was here."
-- Peter da Silva in a.s.r

Jim Kingdon

Jan 1, 2000
> I got an Dual P2 300Mhz, I got Redhat-6.1 and ppl says it's full of bug.

Well, I'm not sure I'd form too many conclusions based solely on what
people are saying. The most serious bugs (and there were more than
I'd like, especially in the installer which is new with 6.1) are fixed
at http://www.redhat.com/errata/

Having said that, the suggestion of Debian is good - the main reason
I'm running Red Hat is that that's what the people at my Linux User's
Group run, so that makes it easier to get help (well, I do work for
Red Hat, but I was using the distribution long before then).

Martin Maney

Jan 1, 2000
Jeff Fisher <je...@tinc-org.com> wrote:
> In article <hNqb4.7151$sH.3...@news1.online.no>, Espen Kristensen wrote:
>>I got an Dual P2 300Mhz, I got Redhat-6.1 and ppl says it's full of bug.
>>Anyone who got a suggestion on which distribution I should choose? I'm gonna
>>use the box as webserver, mailserver, Nameserver and telnet.

> The guys over at www.freebsd.org have a really good linux distribution. ;)

Yeah, but what does it do with the second CPU? :-(

Peter da Silva

Jan 1, 2000
In article <84m0lv$2io$1...@wheel.two14.lan>,
Martin Maney <ma...@pobox.com> wrote:
>Jeff Fisher <je...@tinc-org.com> wrote:
>> In article <hNqb4.7151$sH.3...@news1.online.no>, Espen Kristensen wrote:
>>>I got an Dual P2 300Mhz, I got Redhat-6.1 and ppl says it's full of bug.
>>>Anyone who got a suggestion on which distribution I should choose? I'm gonna
>>>use the box as webserver, mailserver, Nameserver and telnet.

>> The guys over at www.freebsd.org have a really good linux distribution. ;)

I can see a comment like that on IRC, but in Usenet II? We're supposed to be
the good guys.

>Yeah, but what does it do with the second CPU? :-(

/usr/src/sys/i386/conf/LINT:
#####################################################################
# SMP OPTIONS:
#
# SMP enables building of a Symmetric MultiProcessor Kernel.
# APIC_IO enables the use of the IO APIC for Symmetric I/O.
# NCPU sets the number of CPUs, defaults to 2.
# NBUS sets the number of busses, defaults to 4.
# NAPIC sets the number of IO APICs on the motherboard, defaults to 1.
# NINTR sets the total number of INTs provided by the motherboard.
#
# Notes:
#
# An SMP kernel will ONLY run on an Intel MP spec. qualified motherboard.
#
# Be sure to disable 'cpu "I386_CPU"' && 'cpu "I486_CPU"' for SMP kernels.
#
# Check the 'Rogue SMP hardware' section to see if additional options
# are required by your hardware.
#

# Mandatory:
options SMP # Symmetric MultiProcessor Kernel
options APIC_IO # Symmetric (APIC) I/O

# Optional, these are the defaults plus 1:
options NCPU=5 # number of CPUs
options NBUS=5 # number of busses
options NAPIC=2 # number of IO APICs
options NINTR=25 # number of INTs

#
# Rogue SMP hardware:
#

# Bridged PCI cards:
#
# The MP tables of most of the current generation MP motherboards
# do NOT properly support bridged PCI cards. To use one of these
# cards you should refer to ???

--
This is The Reverend Peter da Silva's Boring Sig File - there are no references
to Wolves, Kibo, Discordianism, or The Church of the Subgenius in this document

Executive Vice President, Corporate Communications, Entropy Gradient Reversals.

Jim Kingdon

Jan 1, 2000
> Thanks for answering, but is the security in Redhat as good as Slackware, or
> the other distributions? I like Redhat alot, but ppl says that Slackware is
> better and blah blah blah.

Oh, I says they are all good and blah blah blah blah. And I probably
should just leave it at that, but at the risk of getting into some
kind of flame war or something:

My big problem with Slackware is that having used a package manager, I
can't imagine going back to not having one. Pretty much every Linux
distro except Slackware uses either RPM or dpkg. I don't know - if you
wouldn't miss one, this might not apply.

I would guess most of them are roughly as good as the others regarding
security (e.g. put out security bugfixes promptly). With Red Hat it
isn't quite as easy as it should be to set up which network daemons
you want running, but it is decent (the ntsysv command ran as part of
the 6.0 install, I don't remember what the 6.1 install does frankly
but you can always run ntsysv afterwards). Dunno how Slackware is
with that kind of stuff.
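
(The mechanism ntsysv and friends are poking at is just S??/K?? symlinks
under the rc directories. A minimal sketch of that scheme, built in a
scratch directory rather than a real /etc/rc.d - the layout and the
service name are illustrative, not copied from any particular release:)

```shell
# Sketch of the SysV init symlink scheme that ntsysv-style tools manage.
# Everything happens under a throwaway directory; paths are illustrative.
set -e
root=$(mktemp -d)
mkdir -p "$root/init.d" "$root/rc3.d"

# A "service" is just a script in init.d...
printf '#!/bin/sh\necho sendmail "$1"\n' > "$root/init.d/sendmail"
chmod +x "$root/init.d/sendmail"

# ...and it is enabled for a runlevel by an S?? symlink (K?? disables it).
ln -s ../init.d/sendmail "$root/rc3.d/S80sendmail"

# "What starts at runlevel 3?" is answered by listing the S links:
ls "$root/rc3.d" | grep '^S'     # -> S80sendmail

# Disabling the daemon = swapping the S link for a K link, which is
# essentially all an interactive tool like ntsysv does for you.
mv "$root/rc3.d/S80sendmail" "$root/rc3.d/K30sendmail"
ls "$root/rc3.d"                 # -> K30sendmail
```

Once you see it as symlink shuffling, doing it by hand on a distribution
without the tool isn't much harder.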

Jim Kingdon

Jan 1, 2000
> But then, perhaps I've just become an embittered OS bigot.

Haven't we all? ;-)

(not sure whether I mean embittered, or OS bigot, or both, or even
neither, but somehow it seems like the only response. I don't think I
should try to explain what I'm getting at here, it probably only makes
sense to me :-)).

Espen Kristensen

Jan 2, 2000
Thanks for answering, but is the security in Redhat as good as Slackware, or
the other distributions? I like Redhat a lot, but ppl say that Slackware is
better and blah blah blah.

Best Regards.
Espen

"Jim Kingdon" <kin...@panix.com> wrote in message
news:p4waemp...@panix6.panix.com...



Martin Maney

Jan 2, 2000
Peter da Silva <pe...@taronga.com> wrote:
> Martin Maney <ma...@pobox.com> wrote:
>>Jeff Fisher <je...@tinc-org.com> wrote:
>>> The guys over at www.freebsd.org have a really good linux distribution. ;)

> I can see a comment like that on IRC, but in Usenet II? We're supposed to be
> the good guys.

And good guys aren't allowed to have a little fun, Peter? I thought it was
a perfectly reasonable issue to raise.

>>Yeah, but what does it do with the second CPU? :-(

> /usr/src/sys/i386/conf/LINT:

Sorry, I don't grovel in sources, at least not ones I don't have handy,
looking for a mention - any mention less than a couple years old, as far as
I could find on freebsd.org - of a fairly substantial feature like SMP. Is
this actually in the released versions now? I swear, from the stuff on the
web site it looks like the project died on the vine.

Matt McLeod

Jan 2, 2000
Yea, it is written in the Book of Cyril
that Martin Maney did write:
>Sorry, I don't grovel in sources, at least not ones I don't have handy,
>looking for a mention - any mention less than a couple years old, as far as
>I could find on freebsd.org - of a fairly substantial feature like SMP. Is
>this actually in the released versions now? I swear, from the stuff on the
>web site it looks like the project died on the vine.

It's been in release versions since 3.0-RELEASE. Which was,
oh, maybe a year or so ago.

Matt
(who's going to have to get used to Linux again, as his
new toy is about as not-supported-by-FreeBSD as you can get,
and doesn't have the time or skill to port to PPC750).

--
"I'm not sure if this is a good or a bad thing.
Probably a bad thing; most things are bad things."
-- Nile Evil Bastard

Matt McLeod

Jan 2, 2000
Yea, it is written in the Book of Cyril
that Espen Kristensen did write:
>Thanks for answering, but is the security in Redhat as good as Slackware, or
>the other distributions? I like Redhat alot, but ppl says that Slackware is
>better and blah blah blah.

Just about any Unix-like OS can be secured to a reasonable
degree if you know what you're doing. At least if we're talking
"don't let anyone in unless they're x, y, or z". Of course
there are enough exploits being found every so often that keeping
up can be something of an issue if you've got lots of widely-varying
systems, but...

If a secure default configuration is an issue for you, take
a look at OpenBSD.

Matt

--
Errors have occurred.
We won't tell you where or why.
Lazy programmers.

Jeff Fisher

Jan 2, 2000
In article <84m3bh$13at$1...@citadel.in.taronga.com>, Peter da Silva wrote:
>In article <84m0lv$2io$1...@wheel.two14.lan>,
>Martin Maney <ma...@pobox.com> wrote:
>>Jeff Fisher <je...@tinc-org.com> wrote:
>>>
>>> The guys over at www.freebsd.org have a really good linux distribution. ;)
>
>I can see a comment like that on IRC, but in Usenet II? We're supposed to be
>the good guys.

Well, I've been asked too many times about "I want to install Linux...." to
take it very seriously anymore. Really, if he's installed any UNIX a
couple of times, FreeBSD isn't that much of a problem to figure out. It
also doesn't suffer from the 'kernel a week' problem, unless you really
like cvsup. But then, perhaps I've just become an embittered OS bigot.

"We're from the government"
"No thanks. We've got all the government we need." -- The Tick

Bill Cole

Jan 2, 2000
In article <slrn86tcvq.ls...@enzo.netizen.com.au>,
matt+...@netizen.com.au (Matt McLeod) wrote:

> Matt
> (who's going to have to get used to Linux again, as his
> new toy is about as not-supported-by-FreeBSD as you can get,
> and doesn't have the time or skill to port to PPC750).

Have you looked at Darwin?

Russ Allbery

Jan 2, 2000
Tim Skirvin <tski...@killfile.org> writes:

> Just for fun, I think I'll plug epkg, a system-independent package
> manager that just came out of beta today...

> <URL:http://encap.cso.uiuc.edu>

Yeah, looks like an automated way of doing what a lot of us do. I have a
fairly complete set of tools for doing what it sounds like Encap does,
although they're still a mess and mostly undocumented. They do handle
things like multiple architectures, things shared between architectures
and things that aren't, two separate link trees (one for testing and one
for production), a full database of what link goes to what package and
what versions of what packages are installed, and a tool to automate the
software build and installation process somewhat. Unfortunately, they're
pretty heavily dependent on AFS and our particular directory layout.

At last check, we had 776 software packages (counting different versions
as different packages) installed in our site-wide software tree. You kind
of have to have something like this to manage that, and I bet most large
AFS sites have already rolled something similar of their own.

--
Russ Allbery (r...@stanford.edu) <URL:http://www.eyrie.org/~eagle/>

Russ Allbery

Jan 2, 2000
Jim Kingdon <kin...@panix.com> writes:

> * RPM, at least, has been ported to a variety of systems. Of course,
> the thing which really intrigues me is the doohickey (pkgmaker? I
> could probably dig up a URL if people care) which will take a source
> RPM and produce a binary Solaris package. But I'm a sucker for
> Fancy Compatibility Tricks(TM).

> * The key benefit is having the OS itself built up of packages. For
> example, trying to find the documentation for /etc/X11/XF86Config?
> Or /usr/lib/libmumble.so? "rpm -qlf /usr/lib/libmumble.so" shows
> you all the files which are part of the same package. Grep out (or
> scan visually) /usr/doc, /usr/man, &c, and you see the files.

RPM is well-suited to the problem of producing a large set of prebuilt
packages that can be distributed over the network and used to put together
a system composed of a bunch of software on local disk. Which is good,
since that's what it was designed for and is mostly used for.

RPM is extremely bad at managing the building and installation of a large
number of packages for a large number of different operating systems
simultaneously on a file server machine that will be remotely mounted by
individual systems as a small set of pre-determined paths. Which is
unsurprising, given that that's not what it's for.

For the latter task, you want compilation tools that will help you with
the next version of the package and that let you automate builds as far as
possible so that you can run them in parallel and automatically apply
local patches. You *don't* want binary RPMs, as the only purpose served
by binary RPMs is to allow compilation once and installation multiple
times and with a file server you're installing the same number of times
you're compiling. And while you don't absolutely have to have a link
forest rather than installing directly into bin, lib, etc., you waste tons
of space with the latter since you can't make architecture-specific only
those things that are architecture-specific and share the rest across all
the platforms you're supporting.

(Plus you pretty much have to use a link forest if you're using AFS, or
you end up with unmanageably large volumes.)

I'm also not a big fan of the central database model of tracking; I much
prefer the link forest. I actually maintain both with my tools, but my
tools are designed to rebuild the database on demand directly from the
links, so that I can when necessary go tweak the links by hand and then
just regenerate the database. The database is there for reporting and
overview purposes and to drive a user-visible query tool, not as the
canonical source of information about what's where.

A package manager like RPM really isn't what you want for a central
software distribution, which is why tools like depot still flourish. In
fact, they're almost (but not quite) disjoint.
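
(A link forest of the kind described above - per-package install trees,
a shared bin of symlinks, and a "database" that can be regenerated from
the links on demand - boils down to something like this. Sketch only;
the directory names and the one-package "site" are made up for the
example, not taken from any of the tools mentioned:)

```shell
# Minimal link-forest sketch: each package installs into its own tree,
# a shared bin/ holds symlinks into those trees, and the report of
# what-points-where is rebuilt from the links themselves rather than
# kept as a canonical database. All names here are illustrative.
set -e
site=$(mktemp -d)
mkdir -p "$site/pkgs/hello-1.0/bin" "$site/bin"
printf '#!/bin/sh\necho hello\n' > "$site/pkgs/hello-1.0/bin/hello"
chmod +x "$site/pkgs/hello-1.0/bin/hello"

# Link the package's binaries into the shared bin tree.
ln -s ../pkgs/hello-1.0/bin/hello "$site/bin/hello"
"$site/bin/hello"                # -> hello

# Regenerate a "which link belongs to which package" report on demand:
for link in "$site"/bin/*; do
    printf '%s -> %s\n' "${link##*/}" "$(readlink "$link")"
done                             # -> hello -> ../pkgs/hello-1.0/bin/hello
```

Removing a package version then means deleting its tree and whichever
links point into it; the links, not a separate database, are the record.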

Peter da Silva

Jan 2, 2000
In article <84m90l$2o3$1...@wheel.two14.lan>,
Martin Maney <ma...@pobox.com> wrote:
>Sorry, I don't grovel in sources, at least not ones I don't have handy,
>looking for a mention - any mention less than a couple years old, as far as
>I could find on freebsd.org - of a fairly substantial feature like SMP. Is
>this actually in the released versions now? I swear, from the stuff on the
>web site it looks like the project died on the vine.

It was in 3.0-current, and release is now 3.4 and current is 4.0.

The FreeBSD website probably needs some updating.

Tim Skirvin

Jan 3, 2000
Jim Kingdon <kin...@panix.com> writes:

>My big problem with Slackware is that having used a package manager, I
>can't imagine going back to not having one.

Just for fun, I think I'll plug epkg, a system-independent package
manager that just came out of beta today...

<URL:http://encap.cso.uiuc.edu>

- Tim Skirvin (tski...@killfile.org)
--
<URL:http://www.killfile.org/~tskirvin/> Skirv's Homepage <FISH><
<URL:http://www.killfile.org/dungeon/> The Killfile Dungeon <*>

Matt McLeod

Jan 3, 2000
Yea, it is written in the Book of Cyril
that Bill Cole did write:
>Have you looked at Darwin?

Not as yet, although my sister lives there...

*goes digging*

Hm. Looks *very* interesting. OS X Server isn't supported on
the new toy (an iMac DV SE), but presumably OS X Consumer will
be, and my understanding is that they share the same base, which
should mean that Darwin will also run on it in a few months.

Although the requirement for a separate disk drive seems a bit
odd. Not to mention difficult, as it doesn't support USB or
Firewire drives.

--
"From empirical experience, your Exchange admin needs to put down the crack
pipe and open a window to disperse the fumes." -- Joe Thompson, ASR

Matt McLeod

Jan 3, 2000
Yea, it is written in the Book of Cyril
that Matt McLeod did write:
>Yea, it is written in the Book of Cyril
>that Bill Cole did write:
>>In article <slrn86tcvq.ls...@enzo.netizen.com.au>,
>>matt+...@netizen.com.au (Matt McLeod) wrote:
>>> (who's going to have to get used to Linux again, as his
>>> new toy is about as not-supported-by-FreeBSD as you can get,
>>> and doesn't have the time or skill to port to PPC750).
>>
>>Have you looked at Darwin?
>
>*goes digging*

After more digging and general browsing, looks like I
may not have to even touch Linux any more. I'll be
interested to see how NetBSD copes on an iMac (particularly
whether or not X is running -- I know the LinuxPPC guys
have X up and running on those boxes).

Mmmm. BSD userland on a nice fast PPC...

--
Error reduces
Your expensive computer
To a simple stone

Jim Kingdon

Jan 3, 2000
> <URL:http://encap.cso.uiuc.edu>

Hmm, looks useful. And the idea of using symlinks to serve many
(all?) of the functions of a database like /var/lib/rpm/<mumble> does
in RPM is kind of intriguing.

Of course, since no borderline-flame-potential thread should be left to
die (:-)), I feel the obligation to prolong it:

Tim Skirvin

Jan 3, 2000
Jim Kingdon <kin...@panix.com> writes:

>> <URL:http://encap.cso.uiuc.edu>
>Hmm, looks useful.

Every now and then I forget that nobody here at UIUC has bothered
to mention encap to the rest of the world; it's probably the single most
handy admin tool we've got around here, and a lot of admin-time is spent
trying to figure out how to make various old packages compatible with it.
Just about every major system on campus uses it, or a variant of it; holy
wars over which version to use are as bloody as editor wars. Me, I still
use encapper.c. I like being a minimalist. *grin*

Somebody sent something about an encap-like system to freshmeat a
few months ago; one of my co-workers was angry that someone had "stolen"
the idea. I pointed out to him that he could submit a mostly-complete
version himself and change the 'net, but he was being a bit lazy that
day...

>* The key benefit is having the OS itself built up of packages. For
> example, trying to find the documentation for /etc/X11/XF86Config?

My boss spent about 12 hours tinkering with configure and imake to
finally figure out how to make X encap properly. I still haven't sat down
and translated what he wrote...

Anyway, when it comes down to it, the way that we get by on
non-PC Unices around here is to build everything from scratch in the
first place, so in essence we *do* have an OS built up out of packages.
And if our local Linux User Group would get off their butts and finish the
installation scripts for the UIUC Linux Distrib, we'd have that too. Aah,
well...

Mark Brown

Jan 3, 2000
tski...@killfile.org (Tim Skirvin) writes:

> Every now and then I forget that nobody here at UIUC has bothered
> to mention encap to the rest of the world; it's probably the single most
> handy admin tool we've got around here, and a lot of admin-time is spent

There seem to be a few systems with this sort of idea around, with
varying degrees of complexity and portability - it's just that none of
them ever seem to have got much mindshare (I'd guess that most of them
never make it off the system they were developed on). Hopefully one
of these days one of them is going to become commonplace.

--
Mark Brown mailto:bro...@tardis.ed.ac.uk (Trying to avoid grumpiness)
http://www.tardis.ed.ac.uk/~broonie/
EUFS http://www.eusa.ed.ac.uk/societies/filmsoc/

Matt McLeod

Jan 3, 2000
Yea, it is written in the Book of Cyril
that Mark Brown did write:
>tski...@killfile.org (Tim Skirvin) writes:
>
>> Every now and then I forget that nobody here at UIUC has bothered
>> to mention encap to the rest of the world; it's probably the single most
>> handy admin tool we've got around here, and a lot of admin-time is spent
>
>There seem to be a few systems with this sort of idea around, with
>varying degrees of complexity and portability - it's just that none of
>them ever seem to have got much mindshare (I'd guess that most of them
>never make it off the system they were developed on). Hopefully one
>of these days one of them is going to become commonplace.

That'd be nice. I'll have to take a look at encap now that things
will hopefully settle down a little. We're currently using a
system hacked up at Melbourne Uni, which doesn't do multiple
architectures or anything nifty like that. That was OK until
someone forced Linux boxes on us...

(Aargh! The horror! The horror! Our nice Solaris environment
being invaded by bloody Linux boxes...)

--
"I offer you a new vision of Hell: Watching an entire ISO
committee trying to agree on what wine to have with their meal."
-- Tanuki the Raccoon-dog, ASR

Jim Kingdon

Jan 3, 2000
> And if our local Linux User Group would get off their butts and finish the
> installation scripts for the UIUC Linux Distrib, we'd have that too. Aah,
> well...

Yes, the idea of having a distribution which runs on top of a variety
of kernels does intrigue me (I guess just because I find that kind of
thing cool for no obvious reason - I was/am, after all, involved in
the FreeVMS project which is about as far as one can go in terms of
compatibility hacks and lost causes :-)).

The other reason I think that such things are Cool(TM) is that the
more they (or other ways of solving the problem) leak out into the
greater Linux/*BSD community, the more likely that we'll have
solutions like this out of the box. I mean, that is what has made
free Unix what it is.

Jim Kingdon

Jan 3, 2000
> For the latter task, you want compilation tools that will help you with
> the next version of the package and that let you automate builds as far as
> possible so that you can run them in parallel and automatically apply
> local patches.

Ahh, but you very much want this for RPM too - that is why RPM has
source RPMs and specfiles (which can contain local patches) and such.
For example, in Red Hat's internal build system I can just say "build
dist-6.1 foo.src.rpm" and the build system then automatically builds
binary RPMs for x86, alpha and sparc.

As for the bit about having the shareable parts shared via
NFS/Coda/&c, seems like you could do that with RPM (just mount
/usr/share from somewhere, and /usr/local from somewhere else - I'm
assuming you are building your own packages which are relocatable or
hardcoded for /usr/local or /usr/myuniversity or whatever your local
convention is). Possible I've forgotten a gotcha or two - I haven't
done this myself.

You could even use "rpm -bi" (instead of rpm -bb) and skip the binary
RPMs completely.

> A package manager like RPM really isn't what you want for a central
> software distribution, which is why tools like depot still flourish. In
> fact, they're almost (but not quite) disjoint.

I could believe it if depot can do some things which source RPMs
cannot. But the general idea seems similar to me. My impression is
that people do use RPM in heterogeneous environments
(http://www.rpm.org/platforms.html), although I'm not really up on the
details.

Mark Brown

Jan 3, 2000
Jim Kingdon <kin...@panix.com> writes:

> > A package manager like RPM really isn't what you want for a central
> > software distribution, which is why tools like depot still flourish. In
> > fact, they're almost (but not quite) disjoint.

> I could believe it if depot can do some things which source RPMs
> cannot. But the general idea seems similar to me. My impression is
> that people do use RPM in heterogeneous environments
> (http://www.rpm.org/platforms.html), although I'm not really up on the
> details.

Producing source RPMs is sometimes much more difficult than producing
a binary install tree with manual intervention.

Russ Allbery

Jan 3, 2000
Jim Kingdon <kin...@panix.com> writes:

> Ahh, but you very much want this for RPM too - that is why RPM has
> source RPMs and specfiles (which can contain local patches) and such.
> For example, in Red Hat's internal build system I can just say "build
> dist-6.1 foo.src.rpm" and the build system then automatically builds
> binary RPMs for x86, alpha and sparc.

spec files seem rather too complex to me. I looked at trying to just use
spec files when I started writing wrap, and then decided they were trying
to do too much and trying too hard to be a full-blown package system and
went with something a lot simpler and easier to use.

> As for the bit about having the shareable parts shared via NFS/Coda/&c,
> seems like you could do that with RPM (just mount /usr/share from
> somewhere, and /usr/local from somewhere else - I'm assuming you are
> building your own packages which are relocatable or hardcoded for
> /usr/local or /usr/myuniversity or whatever your local convention is).
> Possible I've forgotten a gotcha or two - I haven't done this myself.

Yeah, that just doesn't work. Too many packages don't separate share from
lib, have platform-specific include files along with platform-independent
ones, and have scripts that you want to share across platforms. Plus you
want to easily set up the link trees so that some software packages aren't
even installed on some platforms.

If you put the effort into writing RPMs that fix the location of things in
the package, maybe, but that's more work than I want to put into a lot of
software packages. We don't have a whole company of people to do software
packaging; we have maybe one full-time person (split between three
people's time) to do this. :)

And you also want to have support for doing interesting things like
extracting all of the binaries and support files needed for a full tree on
a particular platform out of the existing broken-out package trees so that
you can freeze support for that platform at the current level and have it
be unaffected by later upgrades of packages on all the other platforms.

> I could believe it if depot can do some things which source RPMs cannot.
> But the general idea seems similar to me. My impression is that people
> do use RPM in heterogeneous environments
> (http://www.rpm.org/platforms.html), although I'm not really up on the
> details.

It would be doable, but it would be a lot of work, and it would require
adding a good bit of functionality to RPM. I seriously considered trying
to go the RPM route, and read up on what it could do (both for this and
for doing Jumpstart installations), and decided that it was just the wrong
tool. We use a locally maintained packaging system called bundle for
Jumpstart installations and use wrap and the pubswlink/pubswtrawl stuff
for site-wide software installations.

At some point, I'll bundle that stuff together into a real release so that
people can poke at it. Chip convinced me that bundle should really be a
module instead of a script first, though, and I have to find time to do
that work (which should also make it a lot faster since then I can use
AutoLoader; right now, it's a 1,912 line Perl script including the fairly
extensive documentation).

Bryan C. Andregg

Jan 4, 2000
On Sat, 1 Jan 2000 18:26:23 -0000, Espen Kristensen <esp-...@online.no> wrote:
>I got an Dual P2 300Mhz, I got Redhat-6.1 and ppl says it's full of bug.
>Anyone who got a suggestion on which distribution I should choose? I'm gonna
>use the box as webserver, mailserver, Nameserver and telnet.
>
>Best Regards
>Espen.

I personally like Red Hat, but then again I haven't used anything else in
about 4 years and I have a vested interest in the matter. That said, I am
dismayed by the comment "ppl says it's full of bug." I use it on a daily basis
and we use it for all of the things you want to use it for, except telnet, we
don't do that. More to the point, instead of randomly customizing our
environment we insist on running packages from the distribution. Guess what,
we don't have any problems.

Now, as a more personal, less Red Hat, side note:

Everyone has a different favorite distribution for a different favorite
reason. All of them have at least one bug and most of them have a few more.
Almost all of the Linux distributions that you have heard of will work for you
in a bug free, secure manner ... PROVIDED that you keep up with your vendor's
updates.


--
Bryan C. Andregg * <band...@redhat.com> * Red Hat, Inc.

1024/625FA2C5 F5 F3 DC 2E 8E AF 26 B0 2C 31 78 C2 6C FB 02 77
1024/0x46E7A8A2 46EB 61B1 71BD 2960 723C 38B6 21E4 23CC 46E7 A8A2

Kai Henningsen

Jan 11, 2000
r...@stanford.edu (Russ Allbery) wrote on 03.01.00 in <ylwvpqn...@windlord.stanford.edu>:

> Jim Kingdon <kin...@panix.com> writes:
>
> > Ahh, but you very much want this for RPM too - that is why RPM has
> > source RPMs and specfiles (which can contain local patches) and such.
> > For example, in Red Hat's internal build system I can just say "build
> > dist-6.1 foo.src.rpm" and the build system then automatically builds
> > binary RPMs for x86, alpha and sparc.
>
> spec files seem rather too complex to me. I looked at trying to just use
> spec files when I started writing wrap, and then decided they were trying
> to do too much and trying to hard to be a full-blown package system and
> went with something a lot simpler and easier to use.

dpkg is pretty traditional. You have a something.tar.gz and a
something.diff.gz which together are the source; building is driven by
debian/rules which is just an executable makefile (#!/usr/bin/make -f),
and all control files are in the same, RFC 822-like format. Plus there are
several tools to create skeleton control files given a package; if it's a
traditional GNU autoconf-using package with typical makefiles, all you
need for an "official" package after this is write some text for
descriptions and readmes - local packages don't need that, of course.

I think the oldest was debmake, then came the debhelper package, and I
haven't looked at yada yet. (I should. The idea is to make this all
simpler yet by being (nearly) purely declarative. [Some time later] Nice,
just not quite finished yet.)

Kai
--
http://www.westfalen.de/private/khms/
"... by God I *KNOW* what this network is for, and you can't have it."
- Russ Allbery (r...@stanford.edu)

Kai Henningsen

Jan 11, 2000
r...@stanford.edu (Russ Allbery) wrote on 02.01.00 in <ylvh5bz...@windlord.stanford.edu>:

> For the latter task, you want compilation tools that will help you with
> the next version of the package and that let you automate builds as far as
> possible so that you can run them in parallel and automatically apply

> local patches. You *don't* want binary RPMs, as the only purpose served

Debian has something in that direction (not exactly this) for internal use
- that is, mostly-automatic compilation of a new package for all the
architectures the maintainer doesn't provide herself. Most non-Intel
packages come out of an autobuilder. (And of course, the Debian source
format does contain a "local" (to Debian, that is) diff.) I'm not sure how
much of this has been packaged yet.

> I'm also not a big fan of the central database model of tracking; I much
> prefer the link forest. I actually maintain both with my tools, but my

The one thing I don't get in this thread is, what does the link forest
*do* that you'd otherwise handle with the database? I know dpkg far
better than rpm, so maybe rpm is different, but for dpkg I don't see what
_could_ be done that way. (Except for what dpkg already does handle with
symlink forests, primarily the "alternates" mechanism, stuff like make
"vi" point to one of various installed vi binaries, /usr/bin/vi -> /etc/
alternates/vi -> /usr/bin/visomething.)

Of course, the basic dpkg database does consist of plain text files.

Kai Henningsen

Jan 11, 2000

matt+...@netizen.com.au (Matt McLeod) wrote on 03.01.00 in <slrn8705ee.nj...@enzo.netizen.com.au>:

> Hm. Looks *very* interesting. OS X Server isn't supported on
> the new toy (an iMac DV SE), but presumably OS X Consumer will
> be, and my understanding is that they share the same base, which
> should mean that Darwin will also run on it in a few months.

Btw, Darwin uses the Debian dpkg package manager. Just as an aside. Klee
Dienes(sp?) did that.

Kai Henningsen

Jan 11, 2000

je...@tinc-org.com (Jeff Fisher) wrote on 02.01.00 in <slrn86tkp...@tinc-org.com>:

> couple of times, FreeBSD isn't that much of a problem to figure out. It
> also doesn't suffer from the 'kernel a week' problem, unless you really

What *is* this mythical problem? I keep hearing about it, but I don't see
it happen.

Russ Allbery

Jan 11, 2000

Kai Henningsen <kaih=7WfI8...@khms.westfalen.de> writes:
> r...@stanford.edu (Russ Allbery) wrote:

>> I'm also not a big fan of the central database model of tracking; I
>> much prefer the link forest.

> The one thing I don't get in this thread is, what does the link forest

> *do* that you'd otherwise handle with the database?

Lets you put things on different disk partitions. (Or different AFS
volumes, which is even more important.) Or were you asking what the
database could do that the link forest also does? Encode what package a
given file is part of.

David Damerell

Jan 12, 2000

Kai Henningsen <kaih=7WfI8...@khms.westfalen.de> wrote:
>_could_ be done that way. (Except for what dpkg already does handle with
>symlink forests, primarily the "alternates" mechanism, stuff like make
>"vi" point to one of various installed vi binaries, /usr/bin/vi -> /etc/
>alternates/vi -> /usr/bin/visomething.)

Before someone reads this and looks for it and gets utterly confused: IWJ
is British. /etc/alternatives, and so forth.
--
David/Kirsty Damerell. dame...@chiark.greenend.org.uk
http://www.chiark.greenend.org.uk/~damerell/ w.sp.lic.#pi<largestprime>.2106
|___| Consenting Mercrediphile. Bev White's answer to |___|
| | | Next attempt to break the world in progress Andrew S. Damick | | |

Kai Henningsen

Jan 12, 2000

r...@stanford.edu (Russ Allbery) wrote on 11.01.00 in <yld7r8y...@windlord.stanford.edu>:

> Kai Henningsen <kaih=7WfI8...@khms.westfalen.de> writes:
> > r...@stanford.edu (Russ Allbery) wrote:
>
> >> I'm also not a big fan of the central database model of tracking; I
> >> much prefer the link forest.
>
> > The one thing I don't get in this thread is, what does the link forest
> > *do* that you'd otherwise handle with the database?
>
> Lets you put things on different disk partitions. (Or different AFS

Isn't that completely orthogonal to the database?

> volumes, which is even more important.) Or were you asking what the
> database could do that the link forest also does? Encode what package a
> given file is part of.

How? Dpkg currently does that by just having a per-package text file that
lists those files, /var/lib/dpkg/info/<package>.list.

Kai Henningsen

Jan 12, 2000

dame...@chiark.greenend.org.uk (David Damerell) wrote on 12.01.00 in <ygp*au...@news.chiark.greenend.org.uk>:

> Kai Henningsen <kaih=7WfI8...@khms.westfalen.de> wrote:
> >_could_ be done that way. (Except for what dpkg already does handle with
> >symlink forests, primarily the "alternates" mechanism, stuff like make
> >"vi" point to one of various installed vi binaries, /usr/bin/vi -> /etc/
> >alternates/vi -> /usr/bin/visomething.)
>
> Before someone reads this and looks for it and gets utterly confused; IWJ
> is British. /etc/alternatives, and so forth.

Bah, who types those in full anyway? /etc/alter<tab> will get it right.

Russ Allbery

Jan 12, 2000

Kai Henningsen <kaih=7Wk8r...@khms.westfalen.de> writes:
> r...@stanford.edu (Russ Allbery) wrote:
>> Kai Henningsen <kaih=7WfI8...@khms.westfalen.de> writes:

>>> The one thing I don't get in this thread is, what does the link forest
>>> *do* that you'd otherwise handle with the database?

>> Lets you put things on different disk partitions. (Or different AFS

> Isn't that completely orthogonal to the database?

Right, that's a reason why you'd want the link forest even if you had the
database.

>> Encode what package a given file is part of.

> How? Dpkg currently does that by just having a per-package text file
> that lists those files, /var/lib/dpkg/info/<package>.list.

windlord:~> dir /usr/pubsw/bin/ls
lrwxr-xr-x 1 rra root 45 Sep 21 04:38 /usr/pubsw/bin/ls -> ../package/File/fileutils-4.0/sun4x_56/bin/ls*

Keeping the data in any external file risks skew (and it's very easy to
have happen; I maintain an external database that I can regenerate from
the links and I have to regenerate it with some frequency as things get
moved around without getting recorded). When it's encoded in the link,
you are always guaranteed to have it or the software doesn't work.

Rachael Munns

Jan 13, 2000

In net.computers.os.unix.linux, Alan Bellingham wrote:

> You can be British without quite being human? What a wonderfully
> tolerant country we live in.

Hey, my cat is British.

--
Rachael

Darrell Fuhriman

Jan 13, 2000

al...@lspace.org (Alan Bellingham) writes:

> You can be British without quite being human? What a wonderfully
> tolerant country we live in.

Oh absolutely, but I'm not so sure about the converse. ;)

Darrell... ducking

Kai Henningsen

Jan 13, 2000

al...@lspace.org (Alan Bellingham) wrote on 13.01.00 in <38b7ab1f....@news.lspace.org>:

> David Damerell <dame...@chiark.greenend.org.uk> wrote:
>
> >Before someone reads this and looks for it and gets utterly confused; IWJ
> >is British.
>

> You can be British without quite being human? What a wonderfully
> tolerant country we live in.

Is there any other way you can be British?

Kai Henningsen

Jan 13, 2000

r...@stanford.edu (Russ Allbery) wrote on 12.01.00 in <yl4scje...@windlord.stanford.edu>:

> Kai Henningsen <kaih=7Wk8r...@khms.westfalen.de> writes:
> > r...@stanford.edu (Russ Allbery) wrote:

> >> Encode what package a given file is part of.
>
> > How? Dpkg currently does that by just having a per-package text file
> > that lists those files, /var/lib/dpkg/info/<package>.list.
>
> windlord:~> dir /usr/pubsw/bin/ls
> lrwxr-xr-x 1 rra root 45 Sep 21 04:38 /usr/pubsw/bin/ls ->
> ../package/File/fileutils-4.0/sun4x_56/bin/ls*

Ugh. Please, no.

As Linus would probably say, don't put a slow operation in the typical
path to speed up an infrequent operation.

> Keeping the data in any external file risks skew (and it's very easy to
> have happen; I maintain an external database that I can regenerate from
> the links and I have to regenerate it with some frequency as things get
> moved around without getting recorded). When it's encoded in the link,
> you always are guaranteed to have it or the software doesn't work.

That assumes you change stuff by hand. Don't. Especially with large
installations, it's just not worth the hassle.

It's not just losing track of stuff in the database, it's creating
unexpected conflicts between packages, it's stuff not working because
other stuff has moved out from under it, and so on.

IMO, a single system is already too large for this.

Russ Allbery

Jan 13, 2000

Kai Henningsen <kaih=7WlXC...@khms.westfalen.de> writes:
> r...@stanford.edu (Russ Allbery) wrote:

>> windlord:~> dir /usr/pubsw/bin/ls
>> lrwxr-xr-x 1 rra root 45 Sep 21 04:38 /usr/pubsw/bin/ls ->
>> ../package/File/fileutils-4.0/sun4x_56/bin/ls*

> Ugh. Please, no.

> As Linus would probably say, don't put a slow operation in the typical
> path to speed up an infrequent operation.

If you have some better way of putting all the binaries in a single path
entry while having them all live on separate partitions, I'd love to hear
it. (Wouldn't implement it, since this works just fine, but I'd love to
hear it.) Putting them all in the same volume isn't an option.

Remote file access is slow enough that you won't notice the symlink
traversal, honest.

> That assumes you change stuff by hand. Don't. Especially with large
> installations, it's just not worth the hassle.

Easy to say. If I had oodles of excess time to polish all the tools so
that I'd never have to touch anything by hand, I'd be all in favor of that
approach.

We have one and a half people handling as many packages as an average
Linux distribution on eight different platforms. This stuff works and is
fast and easy to maintain. Databases are neither, without way more
development effort than anyone here has time for. (And RPM is simply the
wrong tool.)

Kai Henningsen

Jan 27, 2000

r...@stanford.edu (Russ Allbery) wrote on 13.01.00 in <ylvh4xh...@windlord.stanford.edu>:

> Kai Henningsen <kaih=7WlXC...@khms.westfalen.de> writes:
> > r...@stanford.edu (Russ Allbery) wrote:
>
> >> windlord:~> dir /usr/pubsw/bin/ls
> >> lrwxr-xr-x 1 rra root 45 Sep 21 04:38 /usr/pubsw/bin/ls
> >> -> ../package/File/fileutils-4.0/sun4x_56/bin/ls*
>
> > Ugh. Please, no.
>
> > As Linus would probably say, don't put a slow operation in the typical
> > path to speed up an infrequent operation.
>
> If you have some better way of putting all the binaries in a single path
> entry while having them all live on separate partitions, I'd love to hear
> it. (Wouldn't implement it, since this works just fine, but I'd love to
> hear it.) Putting them all in the same volume isn't an option.

First, I'm somewhat suspicious of that assertion. However, it typically
ought to be possible to use just a few directories - probably divided by
application area (just like bin vs. sbin vs. games), especially as that
gives the option of choosing which of them to put into the path.

> > That assumes you change stuff by hand. Don't. Especially with large
> > installations, it's just not worth the hassle.
>
> Easy to say. If I had oodles of excess time to polish all the tools so
> that I'd never have to touch anything by hand, I'd be all in favor of that
> approach.

I'd think the amount of work involved in changing at the source,
rebuilding and reinstalling a package would be not all that different from
changing at the destination - especially when it's more than one
destination.

> We have one and a half people handling as many packages as an average
> Linux distribution on eight different platforms. This stuff works and is
> fast and easy to maintain. Databases are neither, without way more
> development effort than anyone here has time for. (And RPM is simply the
> wrong tool.)

Well, I've next to zero experience with RPM, so I can't comment on how it
works.

However, I suspect that you still think of something like mysql when you
say "database", whereas I think of a directory full of simple textfiles.
And of code that started out, IIRC, as a bunch of Perl, some parts of
which were then rewritten in C to improve performance.

I can still find stuff just using grep.

Russ Allbery

Jan 28, 2000

Kai Henningsen <kaih=7XeCo...@khms.westfalen.de> writes:
> r...@stanford.edu (Russ Allbery) wrote:

>> If you have some better way of putting all the binaries in a single
>> path entry while having them all live on separate partitions, I'd love
>> to hear it. (Wouldn't implement it, since this works just fine, but
>> I'd love to hear it.) Putting them all in the same volume isn't an
>> option.

> First, I'm somewhat suspicious of that assertion.

Putting all the binaries on the same volume? The volume would be over 1GB
and would have to be frequently released. Very bad. AFS no like. AFS
would cope, but releasing new software would be painful. (Volume
replication isn't fast. It doesn't have to be; it's infrequent.)

> However, it typically ought to be possible to use just a few directories
> - probably divided by application area (just like bin vs. sbin
> vs. games), especially as that gives the option of chosing which of them
> to put into the path.

We used to do it this way. It had tons of problems, which is why we
stopped. The large volume problem is a big one, but also on the list are
problems like being unable to change one software package without
releasing pending changes in a separate one (something that we do all the
time). But you'd have to grok AFS, read/write vs. read-only paths, and
the gritty details of volume replication to understand a lot of the
reasoning that went into what we're doing now. And it probably isn't that
generally applicable to non-AFS situations.

> I'd think the amount of work involved in changing at the source,
> rebuilding and reinstalling a package would be not all that different
> from changing at the destination - especially when it's more than one
> destination.

Changing a few symlinks is *way* faster than rebuilding and reinstalling a
package. Particularly for those packages we don't have fully automated.

> However, I suspect that you still think of something like mysql when you
> say "database", whereas I think of a directory full of simple textfiles.

No, I'm thinking of a directory full of text files. I maintain that sort
of database right now, which is why I know it gets out of date and why I
know I want to be able to easily rebuild it from the installed software
tree.

If we had more time to write tools, the situation would be different,
agreed.
