Now that the immediate rush to get boxes on desks is over,
I'm actually allowed to spend some time thinking about longer
term issues, like "how the hell am I going to keep 50+ of these
things in reasonable condition, update-wise?".
We solve this problem with Solaris workstations by keeping
most of the applications in a central location which is shared
using NFS. OS updates are done either by re-jumpstarting the
box, or, if it's a fix for a "special" problem, by installing the
relevant patch.
The central area is organized to make it reasonably easy to
switch between versions of apps.
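(To give the flavour, with made-up paths; the idea is one directory
per version plus a "default" symlink you can flip:

    /apps/gcc/2.8.1/{bin,lib,man}
    /apps/gcc/2.95.1/{bin,lib,man}
    /apps/gcc/default -> 2.95.1
    /apps/emacs/20.3/...
    /apps/emacs/default -> 20.3

Users get /apps/<package>/default/bin on their PATH, so changing the
version everybody sees is just a matter of repointing a symlink.)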
This doesn't exactly seem in keeping with "the Red Hat way",
as it'd make the rpm database next to useless, but I'm currently
leaning towards something like:
* fairly basic OS installs, done by kickstart (enough to boot,
do maintenance, with a similar level of functionality to a stock
Solaris or FreeBSD install, but probably without the pretty
desktop; there's a rough sketch just after this list)
* everything else compiled from source, and rdist'd out to the
Linux boxes.
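To give a rough idea, the sort of kickstart I have in mind looks
something like this (directive names from memory, so treat it as a
sketch rather than a working file; "installhost" and the sizes are
placeholders):

    install
    nfs --server installhost --dir /export/redhat
    lang en_US
    keyboard us
    network --bootproto dhcp
    rootpw --iscrypted ...
    zerombr yes
    clearpart --all
    part / --size 900
    part swap --size 128
    lilo --location mbr
    %packages
    @ Base
    %post
    # everything else gets compiled up and rdist'd out afterwards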
(Can you tell I'm a compile-it-from-source-nazi?)
I know there are at least a couple of folks here who maintain
a whole heap of Linux boxes, and I'm *definitely* interested
in hearing how you handle these sorts of issues.
Matt
--
Subtlety: the art of saying what you want to say and
getting out of range before it is understood.
And now another one.
I've built a kickstart file specifying what I think is a
reasonable base install (no X, in this case). However,
a bunch of packages which I explicitly did *not* tell it
to install got installed anyway - e.g., sendmail, gmp,
apmd, libstdc++.
When going through and deciding not to install these, I
made sure that no package claimed to depend on them.
(Yes, I really *do* want to install without sendmail. I'd
rather use exim. And I really do want to install without
libstdc++, as I'll be adding gcc-2.95 later).
Any ideas?
Matt
--
And your diminutive canine also!
And as someone kindly mentioned off-group, I forgot that
the installer automatically installs all of the Base
group.
I have to admit that I'm not entirely convinced that logos,
an arbitrary-precision math library, and a GIF drawing library really
constitute the *base* of the OS, but who am I to argue?
Matt
(Is a one-person thread just a wee bit naff?)
--
+++ Out of Cheese Error +++ MELON MELON MELON +++
As a side issue, if a Debian package won't compile from source, you should
report a bug...
--
David/Kirsty Damerell, dame...@chiark.greenend.org.uk. All Hail Discordia!
| | And then they came and took me out, The men of doom and malice: | |
|---|Destroyed my life, removed my sense, Gave me the poisoned chalice.|---|
| | | My betrayal's life to me... Elder Sign: Treachery | | |
Well, you can build your own RPMs and then install them on the boxes,
and that would get you the rpm database. However, it has its
weaknesses too - most notably you'd need to play with relocatable RPM
packages and such if you want to handle multiple versions.
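The relocatable part mostly comes down to giving the package a
Prefix: in the spec file and then choosing the real location at
install time, roughly like this (names, versions and paths are just
for illustration):

    # in the spec file preamble
    Name: foo
    Version: 1.2
    Prefix: /usr/local

    # at install time, put it wherever you like
    rpm -i --prefix /opt/apps/foo-1.2 foo-1.2-1.i386.rpm

That only works if everything the package installs lives under the
prefix, which is why I say "play with": some packages need real
surgery before they relocate cleanly.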
Shrug, if you are happy with what you have been doing, you don't need
to abandon it just because someone invented this thing called RPM (or
dpkg or <insert favorite package manager>)....
Red Hat packages should build too (from the source RPM).
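(For anyone who hasn't tried it, something along the lines of

    rpm --rebuild foo-1.2-3.src.rpm

unpacks the source RPM, applies its patches, and leaves the rebuilt
binary packages under /usr/src/redhat; the package name there is made
up, of course.)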
The process of going from the source RPM to what we ship is basically
automated, so I'd have a bit of trouble uploading a package which
doesn't build from source even if I tried. Changing compiler versions
is the main reason I can think of that could lead to that.
Jim Kingdon
Red Hat
On Mon, 23 Aug 1999 08:33:02 GMT, Matt McLeod <m...@asac.ericsson.se> wrote:
>
>And now another one.
>
>I've built a kickstart file specifying what I think is a
>reasonable base install (no X, in this case). However,
>a bunch of packages which I explicitly did *not* tell it
>to install got installed anyway - e.g., sendmail, gmp,
>apmd, libstdc++.
>
>[...]
>
>Any ideas?
--
Bryan C. Andregg * <band...@redhat.com> * Red Hat, Inc.
1024/625FA2C5 F5 F3 DC 2E 8E AF 26 B0 2C 31 78 C2 6C FB 02 77
1024/0x46E7A8A2 46EB 61B1 71BD 2960 723C 38B6 21E4 23CC 46E7 A8A2
That's what I've ended up doing, although it's
not really aesthetically pleasing.
Matt
(who has spent the last six months supporting a hack-upon-hack-upon-hack
system, and would like to avoid introducing such things to this one)
Jim Kingdon <kin...@panix.com> wrote:
>David Damerell:
>>As a side issue, if a Debian package won't compile from source, you should
>>report a bug...
>Red Hat packages should build too (from the source RPM).
Mmmm. Red Hat have the advantage that they can be sure that the source
supposedly for version X actually compiles on version X and not just on
some random version Y that some developer uses, which does tend to produce
a slightly more reliable result; Debian developers, being individuals,
only have (typically) one machine apiece. The only good thing about that
is that it produced the nifty libc5 development environment for libc6
machines...
I suspect RH have the advantage here, to be frank.
--
David/Kirsty Damerell. dame...@chiark.greenend.org.uk
CUWoCS President. http://www.chiark.greenend.org.uk/~damerell/ Hail Eris!
|___| fak...@fowler-schocken.culture.dotat.at is not my email address,|___|
| | | and email sent to it will be assumed to be spam and blocked. | | |
> Please do not delete attribution lines.
I don't usually use them because I am trying to respond to ideas more
than to people (or something like that), and in my many years of
experience on Usenet I find that, most of the time, attributions
create more confusion than they clear up (my own analogue to
Godwin's law is "by the time that the followups are nested so deeply
that people are arguing over attributions, the thread is over").
Of course, if this makes no sense to you, you are like most people,
but so far on net.* people have seemed willing to treat this as an
eccentricity of mine.
> Red Hat have the advantage that they can be sure that the source
> supposedly for version X actually compiles on version X and not just
> on some random version Y that some developer uses
Basically true. Although developers at Red Hat may have random
versions on our workstations, the packages we ship are built not on
our workstations but on a build machine which is running something
reasonably up-to-date. However, there is a caveat - we don't rebuild
everything when we upload a new compiler. So if Red Hat Linux 5.2
ships with gcc-2.7-13 (or whatever it was), some of the packages might
have been built with gcc-2.7-11 (a development version) or even
carried over from the packages shipped with Red Hat Linux 5.1. In some
cases, like libc5->glibc, we carried over nothing, but at other times
we do carry packages forward. I think that's how it works (the
disclaimer being that I'm new to Red Hat and not sure I understand
everything yet).
Ideally Debian developers would upload source packages which would then be
magically built; they'd then have a look at the resulting binaries and see
if they actually worked...
>>>>> On 24 Aug 1999 16:32:22 +0100 (BST)
>>>>> David Damerell <dame...@chiark.greenend.org.uk> said:
David> Ideally Debian developers would upload source packages which
David> would then be magically built; they'd then have a look at the
David> resulting binaries and see if they actually worked...
work on *which* system? The developer's?
Providing a good "check" target where possible so that tests can be
run on a dedicated machine is probably a much better solution. And
less work in the long run ...
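Something as small as this is all the build machine needs to hook
into (the script name is only a placeholder):

    # GNU-style convention: "make check" builds everything, runs the
    # testsuite, and exits non-zero if anything failed
    check: all
    	./run-tests.sh

    .PHONY: check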
Robbe
--
Robert Bihlmeyer reads: Deutsch, English, MIME, Latin-1, NO SPAM!
<ro...@orcus.priv.at> <http://stud2.tuwien.ac.at/~e9426626/sig.html>
Amen, brother! I've always been a big fan of automated testing (both
as part of the packaging/release process, as described here, and also
nightly testing during development itself).
Some people here at Red Hat are starting to get religion on automated
testing but it might be a while before it really gets going in a big
way (my experience with past projects is that it can be difficult
and/or lengthy to really get something like that running smoothly).
But I agree that automated testing is less work when/if you get it
working well. For example, when someone reported a bug in CVS, rather
than try to reproduce it manually I would usually try to reproduce it
in the testsuite. My testsuite work wasn't a separate step; it was an
integral part of the development I was doing. I think the new
maintainers of CVS may be retreating from the testsuite-centric view
at least slightly (probably makes sense, in that the rest of the CVS
development community didn't really have the testsuite religion to
quite the extent that I did).
Bringing it back to Linux, one project I want to get back to is
getting the testsuites for my packages (gcc, gdb, &c) running cleanly
on Linux. However, this is on the back burner for the moment (sigh).
>>>>> On 25 Aug 1999 11:59:01 -0400
>>>>> Jim Kingdon <kin...@panix.com> said:
Jim> (my experience with past projects is that it can be difficult
Jim> and/or lengthy to really get something like that running
Jim> smoothly).
Indeed. This prevents a number of projects from including proper
testsuites. I tried getting the hang of DejaGnu until I got disgusted
(it needs Tcl! it needs an old version of Tcl!! blech!!!) and wrote
a small C program that does the tests - certainly not a very good
solution.
Can anybody tell me whether there is a nice test environment around?
Jim> But I agree that automated testing is less work when/if you get
Jim> it working well.
Yeah, adding /another/ test case is easy. Seeing that your new version
passed all regression tests gives a very good feeling.
Oh, dejagnu is gross, no argument there. The only reason to consider
it is if you need the large library of "connect to odd embedded target
board X" functions.
TET is kind of overcomplicated for what it gets you.
> wrote a small C program that does the tests - certainly not a very
> good solution.
Why not a good solution? The bits about "wrote" and "small" make
perfect sense to me. I would lean more to "perl" or "python" rather
than "C" but I haven't seen any particularly good reason to download
some elaborate framework if a page or a few of code at the start of
your testsuite can do the same thing.
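For concreteness, in C (since that is what Robert used) the whole
"framework" can be about this much; everything below is made up for
illustration:

    #include <stdio.h>
    #include <string.h>

    static int failures = 0;

    /* report one PASS/FAIL line per test and keep count */
    static void check(const char *name, int ok)
    {
        printf("%s: %s\n", ok ? "PASS" : "FAIL", name);
        if (!ok)
            failures++;
    }

    int main(void)
    {
        check("empty string has length 0", strlen("") == 0);
        check("identical strings compare equal", strcmp("foo", "foo") == 0);

        printf("%d failure(s)\n", failures);
        return failures != 0;
    }

The CVS testsuite does the same sort of thing in sh; the principle (a
line of output per test, non-zero exit if anything fails) is the same.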
See http://www.cyclic.com/cvs/dev-tests.html (especially "Testsuite
development") and
http://www.cyclic.com/cgi-bin/cvsweb/ccvs/TESTS?rev=1.16 (especially
"ABOUT TEST FRAMEWORKS") for more on what we did for CVS and
alternatives we at least thought about.
> Can anybody tell me whether there is a nice test environment around?
Mmm, I should probably someday see if Red Hat is going to release our
internal environment. It is this web-based thingie which includes
stuff for creating a test from keystrokes and being able to test
curses/X11 programs, and that kind of thing. I haven't used it much
but it seems like it could be useful. Of course at the moment they've
firewalled it even from parts of our internal network (accidentally, I
assume), so they are moving in the wrong direction with respect to
releasing it :-).
> Seeing that your new version passed all regression tests gives a very
> good feeling.
Well, yeah, but part of the struggle is making it so that the "passed
all tests" status continues to be true of the "working" versions (the
gcc or gdb projects aren't there, for example, although they are
close).
>>>>> On 26 Aug 1999 08:58:13 -0400
>>>>> Jim Kingdon <kin...@panix.com> said:
Jim> Why not a good solution? The bits about "wrote" and "small" make
Jim> perfect sense to me.
I'm under the strong impression that I'm reimplementing something
here, that somebody, somewhere has written before, perhaps in a better
way. That irritated me, although writing the thing was fun, as usual.
Jim> I would lean more to "perl" or "python" rather than "C"
My first version was in sh, but I saw no clean way to kill off the
daemon that the script started, so I switched to C. I would have
chosen Perl, but adding another dependency sounded icky.
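(The C version boils down to something like this; "testd" and
"run_tests" are stand-ins for the real names, and it assumes the
daemon stays in the foreground when started this way:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t daemon_pid;
        int status;

        daemon_pid = fork();
        if (daemon_pid < 0) {
            perror("fork");
            return 1;
        }
        if (daemon_pid == 0) {
            /* child: start the daemon under test */
            execl("./testd", "testd", (char *) 0);
            _exit(127);
        }

        sleep(1);                        /* crude: let the daemon come up */
        status = system("./run_tests");  /* run the actual test cases */

        kill(daemon_pid, SIGTERM);       /* the part sh made awkward */
        waitpid(daemon_pid, NULL, 0);

        return status != 0;
    }

Knowing the daemon's pid and being sure it gets killed on the way out
is trivial here; in sh it never felt clean.)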
Jim> but I haven't seen any particularly good reason to download some
Jim> elaborate framework if a page or a few of code at the start of
Jim> your testsuite can do the same thing.
True. But if fledgling programmers could whip up a test suite with a
few lines rather than a few pages, more projects would have regression
testing. I'm imagining something as simple to use as automake/autoconf.
Jim> Mmm, I should probably someday see if Red Hat is going to
Jim> release our internal environment. It is this web-based thingie
... web-based ... hmmm.
Jim> which includes stuff for creating a test from keystrokes and
Jim> being able to test curses/X11 programs, and that kind of thing.
That sounds good. We'll see when/if it is released.
Ahh, I just found out that Greg is a GNU project now:
<URL:http://www.gnu.org/software/greg/greg.html>. That could mean that
it is actually kind of usable (older versions were non-trivial to
install when one did not have GNUstep).
Greg is a testing framework based on Guile.