Hey, what do you know! I've got all that already. Too bad there's
no operating system written in my favorite language...
'james
That's not why Elk needs the SIGBUS handler stuff. Elk's generational
garbage collector uses the VM system's write-protection as a "write
barrier", and thus needs to be able to recover from a write attempt
to a read-only page, flag the page as needing scanning for pointers
to younger generations during the next GC, change the page to "writable",
then restart the failing write so that it can complete successfully.
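In outline, the handler's job is just this (a pure-Scheme model of the
bookkeeping, for illustration only -- the real code is C wrapped around
mprotect() and the signal handler, every name below is made up, and
"filter" is SRFI-1's):

(define protected-pages '(p1 p2)) ; pages the GC write-protected
(define dirty-pages '())          ; pages to rescan at the next GC

(define (barrier-write! page do-write!)
  (if (memq page protected-pages)                  ; the "fault"
      (begin
        (set! dirty-pages (cons page dirty-pages)) ; flag for scanning
        (set! protected-pages                      ; make it writable
          (filter (lambda (p) (not (eq? p page))) protected-pages))
        (do-write!))                               ; restart the write
      (do-write!)))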
If you just want to get Elk running without bothering to figure out
the SIGBUS stuff, you can put the line "generational_gc=no" in the
"site" file, and rebuild. That will use the stop© collector...
-Rob
-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043
> Hey, what do you know! I've got all that already. Too bad there's
> no operating system written in my favorite language...
Someone is investigating the possibility of doing something similar:
Schema
http://mailhost.integritysi.com/mailman/listinfo/schema
Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
There is a Debian GNU/Linux package for Elk that works on my glibc 2.1
Debian (potato) system. You might want to check the diffs for that:
<URL:ftp://ftp.debian.org/pub/debian/dists/potato/main/source/devel/>
or thereabouts. elk-something.diff.gz is a diff to be applied to the
original source (elk-something.tar.gz).
(If you actually have a Debian system, just say "apt-get install elk"...)
--
-=- Rjs -=- r...@lloke.dna.fi
Well, I should let the perpetrators speak for themselves, but the list
has been very quiet lately (like several months).
It's not *that* close to an OS, at least IMNSHO. They're more trying
to build the Lisp-Machine user environment to run on top of a
GNU/Linux core. The last thing I remember from the list traffic was
discussion on what sort of Scheme platform to use. There seemed to be
a fair amount of favor for PreScheme...
david rush
--
This space intentionally left blank
> On 24 Apr 2000 19:53:22 -0800, The Almighty Root <ja...@fredbox.com> wrote:
>
> > Hey, what do you know! I've got all that already. Too bad there's
> > no operating system written in my favorite language...
>
> Someone is investigating the possibility of doing something similar:
>
> Schema
> http://mailhost.integritysi.com/mailman/listinfo/schema
This is a small world... I created that particular list and Schema is
my project. I have remade that list in a new location (I no longer work
for that company and their mailserver has problems now that I'm not
maintaining it) but I never got around to updating anything or moving
the archives (or subscriptions for that matter) because I haven't had
anything to report on the progress.
Actually I'm reconsidering the entire design once again, this time for
pragmatic reasons. To tell the truth, someone (Shriram Krishnamurthi?
I've forgotten who...) made the wise comment that we don't really need
YA Scheme implementation, and this woke me up so I'm shopping for ideas
right now.
Basically what I'm thinking of doing, at least to get some quick
headway, is to utilize the hardware support that both X and the Linux
kernel give. Both of these software systems provide a vast amount of
support for different hardware devices and platforms and I don't want to
remake the wheel when it comes to implementing the low-level device and
architecture support for an operating system. I'm sure that at some
point a successful Scheme-based operating system has to consider this,
but I'm not going to start with bare metal because it takes so long to
obtain usable results. I've realized that the Unix design, although a
sad excuse for a real operating system, does happen to provide an
excellent skeleton infrastructure for higher-level operating system (or
better, `operating environment') implementation.
The most irritating part of Unix IMHO is not the design of the kernel
(yeah, yeah it's a monolithic spaghetti ball) or the functionality of
system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
panics, or whatever else is associated with the kernel and driver
implementations. What's *really* irritating about the Unix design is
all the institutionalized crufty software still floating around after
thirty years of development, redesign, and redevelopment. Unix hackers
have long spent time hacking on the hardware support, improving process
scheduling, memory management, and the like, but they still live with an
interface that feels just like 2.9BSD on a PDP-11/40, with some frills.
It's disgusting. Everything from the init process on upwards is
institutionalized, designed just like it was on the good old
minicomputers.
(I'm not degrading the Unix (or UN*X as it were) of that era, nor the
machines it ran on, many of which I'm enamored of and wish I could own.
I'm criticizing the stubbornness of an operating system that dates from
that era and appears to be little changed from it.)
Runlevels are a supreme example of what I'm talking about. What the
hell is a runlevel, really? If I check the manual page for init(8)
("man init" -- how obvious is that?) I read:
"A _runlevel_ is a software configuration of the system which allows
only a selected group of processes to exist. The processes spawned by
init for each of these runlevels are defined in the #/etc/inittab# file.
Init can be in one of eight runlevels: 0-6 and S or s. The runlevel is
changed by having a privileged user run #telinit#, which sends
appropriate signals to #init#, telling it which runlevel to change to."
[Linux System Administrator's Manual, init(8)]
Now all that init really does, it seems, (at least sysvinit, the init
package used on my Linux box, but most other init packages for Unices
are similar) is spawn some gettys and a few programs to handle signals
(like shutting down on C-M-Delete), and run a big nasty wad of shell
scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).
You have the choice to boot the machine into a certain runlevel that
will necessitate the running of a horde of incomprehensible scripts that
either spew senseless `informational' error messages on the screen at a
rate so fast on newer machines that you can't read them, or that spew
green ASCII art on the screen through the use of yet more arcane and
incomprehensible scripts that aren't even documented. Sometimes the
latter scripts even spew little red ASCII art to tell you that something
went wrong, but the little red ASCII art scrolls off the screen so fast
that you can't tell what happened by the time a getty takes over the
console and obliterates what information might be left or still saved in
the console's scrollback buffer. And if you want to fix such a problem
you have to wade through logs to find what happened, interpret the
arcane error message ("modprobe: modprobe: Can't locate module
sound-service-0-0\naumix-minimal: aumix: error opening mixer"), guess
which of the N scripts generated this message, find the offending line
in that script, hunt for the variables that the script inherited from
some other script that ran it which inherited those variables from the
previous script that found them in some script consisting entirely of
variables which are passed to the offending program giving it the
completely wrong idea about reality that it is suffering from, wade
through five manual pages with no index other than a raw string search,
discover the appropriate argument to some command line option by sheer
luck, change the original file that the variables came from to
correct the mistaken assumption that the script writer had about your
machine, simulate actually running the script with the correct arguments
by running the program in question with a hand written series of options
gleaned from the scripts being modified, and hope like hell the whole
mess works the next time the machine is rebooted because some stupid web
browser wanked out and wedged all the fscking input devices.
It's rather patently obvious that a vastly better design is possible
than the current init cruft. Most any design for init would be better
than such a mess. I'm not suggesting something even more inane, like
the garbage that Windows NT has foisted off on the public, with its
little window in an itty-bitty font that contains supposed `services'
(Unix daemons) that you click on and press buttons that frequently don't
do the same thing they advertise. That's just as bad if not worse than
Unix init.
I know that it's very impolite to criticize a design without offering
some sort of alternative. So how about this. Init, when started,
spawns off a handful of processes and manages them (keeping them alive,
killing them, restarting them, etc.). These processes are daemons
dedicated to some major subsystem providing necessary functionality for
the system. Such a daemon itself manages other daemons that it spawns
off, each of which implements the actual functionality of a service. So
we have an init which starts the supervisory daemon (superdaemon) for
networking, which subsequently spawns the TCP/IP daemon (aka inetd)
which then handles all subsequent TCP/IP services. If we change the
configuration file for the TCP/IP daemon and wish it to reload its
configuration we inform the networking superdaemon which performs the
necessary magic on the TCP/IP daemon to achieve that end. If for some
reason the TCP/IP daemon gets killed then the networking superdaemon
will check to see how it died and will send the appropriate log message
(which is written out to a log by the logging daemon which also informs
the administrator through the appropriate channels) as well as
restarting it. Init itself has a configuration file determining which
services should be started and how they should be managed, and can take
a command line argument which disables all of them and drops into the
equivalent of single user mode. Each superdaemon has an appropriate
configuration file which instructs it on what daemons to start with what
arguments and how to manage each of them individually. Every daemon has
six things that can be done to it by its superdaemon -- stop it, start
it, cycle it (stop and restart), pause it, reconfigure it (tell it to
reload its configuration files), and obtain its current running status.
Of course, all of this structure should be built in Scheme. It should
be integrated into coherent parts, all of which access each other
through well-defined interfaces and operate in a simple, predictable
manner. Nothing like the ad hoc mudball of the current init.
That's my current project. If I can replace the init structure that
exists with something more sensible and easy to configure and maintain
then I'll have made a big step forward towards a complete Scheme-based
system running on the Linux kernel. From there I suppose a Scheme-based
command interpreter will be necessary, some sort of Scheme interpreter
with a special mode for a terse, low-parenthesization syntax. From
there perhaps a rewrite and redesign of the usual Unix utilities,
replacing those which are a pure sop for shell programming with loadable
Scheme libraries to do various utilitarian tasks. And then perhaps an
Emacs-like editor in Scheme (which can probably be ripped off). From
there the GUI programs need to be written, such as an integrated
Scheme-based window manager, a web browser, and other interesting parts.
Then perhaps the slow, steady replacement of the existing libraries and
development tools with Scheme-based tools. Then a portability library
and tools to integrate the other Unix utilities and programs which are
more difficult to replace, such as TeX, GCC, media applications (eg Xmms
and RealPlayer(tm)), and so forth into the blue, blue sky.
This plan is as ambitious as a Scheme-based operating system implemented
atop bare metal, but results of this plan will become usable much sooner
than a bare metal implementation.
The first problem I'm currently facing is whether to design and
implement a new Scheme that will provide both a good compiler (we are
after all making system programs here) and the requisite FFI and support
for Linux system calls, or whether I can find one that with some hacking
will fit my needs. If indeed I do find the right Scheme implementation
I'll need to learn how its guts work so I can hack on it to add the
features I'll need. I'll also need to find a compiler if the right
implementation is lacking one and adapt it to that implementation. Then
I'll have to start testing the syscalls and make Scheme stubs for FFI
calls to the various libraries I'll be using. Then things will begin to
happen.
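By way of illustration, this is the sort of reach I mean, shown with
what I believe are real scsh calls (stat(2) and fork/exec straight from
Scheme, no shell anywhere):

(let ((info (file-info "/etc/inittab")))
  (format #t "inittab: ~a bytes, mode ~a\n"
          (file-info:size info)
          (file-info:mode info)))
(run (uname -a))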
This is why I was curious about getting Elk running. It seemed like it
might have enough support to get started with. Scsh looks suitable as
well, but I've never heard of a native compiler for S48 and I'm somewhat
wary of implementing one for it. Guile looked like it might have promise at
first, but I've been getting bad vibes from the documentation. MIT
Scheme might have the necessary functionality as well, but I haven't
looked too closely at it. I also haven't gotten around to examining
MzScheme, which might show promise as well.
So can anyone offer suggestions as to Scheme choices? Experience with
the various FFIs and compilers? Ideas and concerns about implementing
native compilers for the various Schemes? I won't kid myself about the
largeness of this undertaking and how much work it will take to
implement...
If you think I'm crazy then go ahead and say it, but that doesn't mean
I'm going to listen very closely.
'james
> I know that it's very impolite to criticize a design without offering
> some sort of alternative. So how about this. [snip]
the design you propose sounds remarkably similar to the design used for
many unix systems already. perhaps it is not redesign that you desire?
> This is why I was curious about getting Elk running. It seemed like it
> might have enough support to get started with. Scsh looks suitable as
> well, but I've never heard of a native compiler for S48 and I'm somewhat
> wary of implementing one for it. Guile looked might it have promise at
> first, but I've been getting bad vibes from the documentation. MIT
> Scheme might have the necessary functionality as well, but I haven't
> looked too closely at it. I also haven't gotten around to examining
> MzScheme, which might show promise as well.
guile docs give me strange vibes, too. perhaps this tutorial can help:
http://freespace.virgin.net/david.drysdale/guile/tutorial.html
> If you think I'm crazy then go ahead and say it, but that doesn't mean
> I'm going to listen very closely.
lucky for you there are enough lunatics out there already doing pieces
of what you want to do. if you can munge the nature of your craziness
from implementation to integration, you might get results faster.
thi
> The Almighty Root <ja...@fredbox.com> writes, among other things:
>
> > I know that it's very impolite to criticize a design without offering
> > some sort of alternative. So how about this. [snip]
>
> the design you propose sounds remarkably similar to the design used for
> many unix systems already. perhaps it is not redesign that you desire?
It is to a certain extent, but I'm trying very hard to regularize it
as much as possible, perhaps to the extent of making wrappers for
daemons, or even replacing them. Though one daemon may want a SIGHUP
to tell it to reload its configuration and another wants a SIGUSR1 for
the same purpose, the interface to both will be completely the same.
And the administrator/user won't ever send signals to daemons by hand,
but will use the provided channels. The only time an administrator
should need to send signals with kill (other than for broken user
processes) would be if a daemon has wedged itself horribly. In that
case the superdaemon would respawn the appropriate daemon without even
thinking.
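Concretely, the idiosyncrasies can be buried in one table, something
like this sketch (scsh signal names again; the table entries are
invented):

(define reload-signal
  (let ((table (list (cons "inetd" signal/hup)
                     (cons "foo-d" signal/usr1))))
    (lambda (name)
      (cond ((assoc name table) => cdr)
            (else signal/hup)))))        ; the common default

(define (reconfigure-daemon name pid)
  (signal-process pid (reload-signal name)))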
That entire spiel was, I should note, a one-off. I just thought it
out while composing that message, although the idea had been bouncing
around in my head for the last week. Suggestions are certainly
welcome, especially before I start writing anything, while my opinions
are still malleable. Any other people have init-replacement ideas?
> guile docs give me strange vibes, too. perhaps this tutorial can help:
>
> http://freespace.virgin.net/david.drysdale/guile/tutorial.html
That's an interesting tidbit. It however reinforces my belief that
guile isn't meant to be used except as an extension to a C program. I
don't want any C floating around except perhaps within the Scheme
implementation itself. That I'd have to call the interpreter and feed
it functions all in C leaves a bad taste in my mouth.
This is the same reason why I don't hack on SCWM, although I use it
incessantly. Because to hack on it requires extensive knowledge of
the C implementation and the Xlib cruft that lurks inside it. I
wouldn't mind Xlib programming if I didn't have to use C. Using
Scheme as an extension language for an already written program is a
Guile mindset that I don't share. And don't like.
Oh, for a free Scheme compiler that produces independently linkable
and executable native object code... (And implements all of R5RS!)
> > If you think I'm crazy then go ahead and say it, but that doesn't mean
> > I'm going to listen very closely.
>
> lucky for you there are enough lunatics out there already doing pieces
> of what you want to do. if you can munge the nature of your craziness
> from implementation to integration, you might get results faster.
I had this idea. I figure that by showing some initiative on a hard
part of this project and by manufacturing tools and an implementation
base I'd get enough people interested to start writing new software
and integrating existing software.
But first I've got to get those tools made and the base started...
'james
> The Almighty Root <ja...@fredbox.com> writes:
> >Has anyone had success at getting Elk 3.0 to run on Linux 2.2.x kernels
> >with glibc 2.1 (libc 6)? The i486-linux-gcc configuration file is
> >woefully out of date wrt libc 6 systems, and doesn't understand Linux's
> >ELF (it supports a.out only and no dynamic linking). I'm quite sure
>
> There is a Debian GNU/Linux package for Elk that works on my glibc 2.1
> Debian (potato) system. You might want to check the diffs for that:
> <URL:ftp://ftp.debian.org/pub/debian/dists/potato/main/source/devel/>
> or thereabouts. elk-something.diff.gz is a diff to be applied to the
> original source (elk-something.tar.gz).
Thank you very much. It's actually compiling now as I write this,
which is much better than a bunch of make errors.
'james
It's called Bigloo. I mentioned this to you back in the days of the
Schema mailing list, but nobody seemed interested. Bigloo is a
Scheme->C with a very C-friendly FFI (as in you don't need to wrap
existing C libraries with still more C-code). The only problem I've
ever had with it is in a call/cc-heavy program (which was slow, and
has GC problems on certain platforms), but that's a result of
its decision to be C-friendly (and the Boehm collector).
The beauty was that I was able to do a nearly mindless port to other
Schemes from Bigloo. It is *very* standards compliant, including
SRFI-0 et al.
david rush
--
A Bigloo fan since the last century...
> I've realized that the Unix design, although a
> sad excuse for a real operating system, does happen to provide an
> excellent skeleton infrastructure for higher-level operating system (or
> better, `operating environment') implementation.
>
> [rant elided]
>
> I know that it's very impolite to criticize a design without offering
> some sort of alternative.
Traditional Unix-haters didn't feel a need to offer an alternative for
a number of reasons:
1. It's not a `design' when it is patently obvious that no
forethought went into it.
2. Any reasonable person with a 6th grade education could do better,
so several obvious alternatives have probably already been dreamed
up.
3. Implying that improvements would be considered or adopted offends
the jaded demeanor of other Unix haters.
4. The catharsis comes from the vitriol.
That being said....
> So can anyone offer suggestions as to Scheme choices? Experience with
> the various FFIs and compilers? Ideas and concerns about implementing
> native compilers for the various Schemes? I won't kid myself about the
> largeness of this undertaking and how much work it will take to
> implement...
MIT Scheme is a hairball, but it has a *great* compiler. Very few
people (1?) are currently working on developing it.
MzScheme is under active development, and it has been compiled to a
standalone kernel using the Flux OS Toolkit.
--
~jrm
> The most irritating part of Unix IMHO is not the design of the kernel
> (yeah, yeah it's a monolithic spaghetti ball) or the functionality of
> system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
> panics, or whatever else is associated with the kernel and driver
> implementations. What's *really* irritating about the Unix design is
Ya know, I've seen other references to "PCLusering" in Lisp groups
recently, I think I know what it means, and I've read Gabriel's parable.
Given that BSD has had restartable system calls for the last 17
years, could someone explain to me what "PCluser" problems still exist in
modern Unix?
Tim
Basically what I'm thinking of doing, at least to get some quick
headway, is to utilize the hardware support that both X and the Linux
kernel give. Both of these software systems provide a vast amount of
support for different hardware devices and platforms and I don't want to
remake the wheel when it comes to implementing the low-level device and
architecture support for an operating system.
OSKit gets you up and running on the bare metal pretty painlessly. It's
been used to get Scheme, Java & ML systems up on raw machines.
Scsh looks suitable as well, but I've never heard of a native compiler for
S48 and I'm somewhat wary of implementing one for it.
Scsh is not primarily an implementation. It is a design and a huge pile of
source, both of which are available, for free, on the Net. You can take
it all and repurpose it to any end you like. Just the design work it
represents is not insignificant.
A full compiler for S48 would be some work, but it would be quite easy to do a
byte-code->x86 translator. That would get you a huge performance improvement.
Performance is just not the issue, though. If you build something -- anything
-- real and it's useful, people will figure out ways to make it faster.
The big outstanding issues with scsh (Scheme48) are how to get dynamic
module loading/linking, and separate byte compilation of source modules.
I've been waiting 9 years for these things. Adding them to S48 would have
a big impact on the usability of the system, in terms of startup time
and memory footprint.
Runlevels are a supreme example of what I'm talking about. What the
hell is a runlevel, really? If I check the manual page for init(8)
("man init" -- how obvious is that?) I read:
Now all that init really does, it seems, (at least sysvinit, the init
package used on my Linux box, but most other init packages for Unices
are similar) is spawn some gettys and a few programs to handle signals
(like shutting down on C-M-Delete), and run a big nasty wad of shell
scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).
If you don't like that stuff, you can replace it with something written in any
good Unix-based Scheme *without* getting into the mess of doing your own
OS. The init process can be anything you want it to be; its architecture is
not baked into the Unix kernel design. Its job is one very well suited to
Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
would be a fine thing.
I have, over time, moved a lot of the /etc scripts on my notebook over to scsh
-- ppp dialup, pcmcia, config bits, backup dumps, etc. It's *very* pleasant to
do this kind of stuff in scsh.
If you think I'm crazy then go ahead and say it,
You are crazy, but that's not important. The only thing that matters is
whether or not you do anything. Do *anything*, and you matter.
Just for fun, I append a typical system script I use that's written in Scheme.
It does backups over the net; I use it almost every day.
-Olin
-------------------------------------------------------------------------------
#!/usr/local/bin/scsh \
-o let-opt -e main -s
!#
;;; Dump a file system on my notebook computer out to a backed-up
;;; disk on a sessile system. The bits are compressed, encrypted,
;;; and copied over the net using ssh to a file named
;;; $name$level.tgz.2f.
;;; If you say
;;; netdump 0 / root
;;; then you do a level 0 dump of the / file system to a file named root0.tgz.2f
;;; in the fixed directory /opt/backups/spool/shivers/stable/.
;;;
;;; We play some games with ssh and su, because this script has to be run
;;; by root in order to have total access to the file system being dumped,
;;; but you must be someone less threatening (me) so that the remote machine
;;; will allow you to ssh over and write the bits.
;;; -Olin
(define tdir "/opt/backups/spool/shivers") ; The target directory
(define me "shivers")
(define rhost "tin-hau.ai.mit.edu")
(define (useage)
(format (error-output-port)
"Usage: netdump level dir name\nFiles backed up to ~a on ~a.\n"
tdir rhost)
(exit -1))
;;; These guys are useful for root scripts.
(define-syntax exec/su ; (exec/su uname . epf)
(syntax-rules () ; Su to UNAME, then exec EPF.
((exec/su user . epf)
(begin (set-uid (->uid user))
(exec-epf . epf)))))
(define-syntax run/su ; (run/su uname . epf)
(syntax-rules () ; Run command EPF as user UNAME
((run/su user . epf)
(wait (fork (lambda () (exec/su user . epf)))))))
(define (main args)
(if (= 4 (length args))
(let* ((level (cadr args))
(dir (caddr args))
(name (cadddr args))
(fmt (lambda args (apply format #f args))) ; abbreviation
(newfile (fmt "~a/new/~a~a.tgz.2f" tdir name level))
(stablefile (fmt "~a/stable/~a~a.tgz.2f" tdir name level)))
(format (error-output-port) "Starting level ~a dump of ~a to ~a.\n"
level dir newfile)
;; The exit status of a pipeline is the exit status of the last element
;; in the pipeline -- so (| (dump) (copy-to-remote-machine)) won't
;; tell us if the dump succeeded. So we do it the hard way -- we
;; explicitly fork off the dump and copy procs, pipe them together
;; by hand, and check them both.
;; Fork off the dump process: dump uf<level> - <dir>
(receive (from-dump dump-proc)
(run/port+proc (dump ,(fmt "uf~a" level) - ,dir))
;; Fork off the compress/encrypt/remote-copy process,
;; sucking bits from dump's stdout.
(let ((copy-proc (fork (lambda ()
(exec/su me
(| (gzip)
(mcrypt)
(ssh ,rhost
; "dd of=/dev/tape"
,(fmt "cat > ~a" newfile)))
(= 0 ,from-dump))))))
(close from-dump)
(cond ((and (zero? (wait dump-proc)) ; Wait for them both
(zero? (wait copy-proc))) ; to finish.
;; The dump&net-copy won; move the file to the stable dir.
(run/su me (ssh ,rhost mv ,newfile ,stablefile))
(format (error-output-port) "Done.\n"))
(else (format (error-output-port) ; Oops.
"Had a problem dumping ~a = ~a!\n" dir name)))))
(exit))
(useage)))
You won't find any consensus on a Scheme implementation. This is a
religious question. It's like asking for the preferred Editor, or Language,
or Native-/C-backend, or CPS-/Direct-style, or, or, or... :-)
How about Bigloo or Gambit? Since they generate C it should be possible
to start smoothly with replacing OS functionality. Not to speak of the
portability gains.
BTW, how many have had this idea (LISP/Scheme OS/OE) before? And failed? :-)
felix
(who probably doesn't know what you're talking about)
The research material that has come out of that group has been rather
neat; the substrate that pulls in Linux and FreeBSD drivers via
something resembling COM is the _slickest_ idea of modern days for
gaining some advantage from the development of device drivers for
Linux and FreeBSD.
I am, however, a bit skeptical that this approach is of _massive_
benefit.
The problem that virtually all attempts at "LispOS" implementations
have fallen prey to is that of getting caught up in having to support
all sorts of bizarre sorts of hardware.
A whole bunch <http://www.hex.net/~cbbrowne/lisposes.html> have come
and gone.
The OSKit approach looks like the one that most plausibly offers a
route to get _some_ benefit from the _massive_ efforts going into
hardware support on Linux and *BSD; it is, nonetheless, only providing
the hardware support that was available in early 1997. (Linux 2.0.29)
Further, OSKit is not portable to more than IA-32 systems. More is
predicted, but I rather think that it has been predicted for several
years now, to no avail.
The approach that seems _rather_ more "production-worthy," at this
point, is that of building a "Lisp System" by layering a Lisp-based
set of user space tools atop a kernel coming from Linux or one of the
BSDs.
>>Runlevels are a supreme example of what I'm talking about. What the
>>hell is a runlevel, really? If I check the manual page for init(8)
>>("man init" -- how obvious is that?) I read:
>
>>Now all that init really does, it seems, (at least sysvinit, the init
>>package used on my Linux box, but most other init packages for Unices
>>are similar) is spawn some gettys and a few programs to handle signals
>>(like shutting down on C-M-Delete), and run a big nasty wad of shell
>>scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).
>
>If you don't like that stuff, you can replace it with something written in any
>good Unix-based Scheme *without* getting into the mess of doing your own
>OS. The init process can be anything you want it to be; its architecture is
>not baked into the Unix kernel design. Its job is one very well suited to
>Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
>would be a fine thing.
Yes, indeed.
<ftp://linux01.gwdg.de/pub/cLIeNUX/interim/> is the home of cLIeNUX, a
Linux that is essentially "Forth-based."
Notable properties:
- It uses a very different filename hierarchy that is very
non-UNIX-like:
<ftp://linux01.gwdg.de/pub/cLIeNUX/descriptive/DSFH.html>
"cLIeNUX now implements what I call the DSFH, the Dotted Standard
Filename Hierarchy. I had some nice docs on this that got vaporized
in a reboot accident. This happens when doing a distro. For now,
look at the DSFHed script, and the symlinks in / . DSFH makes the
standard unix filenames invisible, and modifies them. They are
still there though, in modified form. Stuff that looks for
e.g. /bin automatically can be converted to look for /.bi
automatically, automatically. And the user gets sensible names to
look at in her native language. Sorry if that sounds crazy."
- It is based on LIBC5, and uses C and FORTH as the base programming
languages.
It _appears_ that it uses a customized init; that is the _really
crucial_ thing that would change in creating a "LispOS" atop the Linux
kernel. I'm not sure if cLIeNUX init is written in FORTH; that would
be a pretty appropriate thing, although I somehow suspect that it is
not.
Everything else, on Linux, is invoked via init, whether directly or
indirectly, so that if you change init, that provides a substantially
different character to the system.
The other notable Linux that has a Rather Unique Init is
<http://www.pell.chi.il.us/~orc/Mastodon/> David Parsons' "Mastodon."
The point, if it's not clear, is that there is fairly ample opportunity
to customize a system _based on Linux_ into whatever form you like.
cLIeNUX is an example of how a "Forth person" built something that
runs a C-and-Forth "userspace."
Creating a "userspace" in your favorite image may be adequate to
provide the environment desired, and if that be so, that is likely to
be a stabler choice than most of the alternatives, as the ongoing
development of Linux-as-kernel provides a platform that can improve
without necessarily forcing you to rewrite great gobs of it each time
Intel comes out with a new CPU.
>I have, over time, moved a lot of the /etc scripts on my notebook
>over to scsh -- ppp dialup, pcmcia, config bits, backup dumps,
>etc. It's *very* pleasant to do this kind of stuff in scsh.
This is an area in which it is quite unfortunate that there _hasn't_
been any improvement on UNIX; there have traditionally been two
approaches:
a) The BSD way, where you have a script that starts up desired
services, and
b) The SysV way, where there is a boatload of scripts that start up
individual services, and then symbolic links to turn this into a
list that can be executed.
People of both 'religion' take potshots at the other, so that the only
choice is between the Right Way, which is the init that _I_ use, and
the Wrong Way, which is the init that _you_ use.
Virtually no examination of the question, "Is there perhaps a better
way?"
David Parsons' approach seems more like using a UNIX "Makefile" to
establish a set of service dependencies that need to be satisfied.
There are some deadlock conditions to worry about, but it would be a
Truly Good Thing to try to come up with a better way of managing this
stuff.
Note that the Software Carpentry
<http://software-carpentry.codesourcery.com/> project is seeking to
build tools to supersede autoconf, make, Expect, and Bugzilla.
They've got some funding, and a goodly hundred or so candidate
utilities proposed in the various tool classifications.
Something good may come out of _that_.
>>If you think I'm crazy then go ahead and say it,
>
>You are crazy, but that's not important. The only thing that matters
>is whether or not you do anything. Do *anything*, and you matter.
Indeed.
Take a Linux kernel, write an "init" that uses Scheme-based startup
scripts, and build up a "takes-a-few-floppies" distribution that
parallels cLIeNUX in having a user-space that is largely coded in
Scheme, and this can become an interesting project.
Unfortunately, there is a dearth of Schemes that compile directly to
machine code; that is rather more common with Forth. It might be more
"natural" to implement this using CMU Common Lisp instead. But the
exploration of how to implement this would doubtless provide
interesting insights and learning...
--
A student, in hopes of understanding the Lambda-nature, came to
Greenblatt. As they spoke a Multics system hacker walked by. "Is it
true", asked the student, "that PL-1 has many of the same data types
as Lisp?" Almost before the student had finished his question,
Greenblatt shouted, "FOO!", and hit the student with a stick.
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>
> > Oh, for a free Scheme compiler that produces independently linkable
> > and executable native object code... (And implements all of R5RS!)
>
> It's called Bigloo. I mentioned this to you back in the days of the
> Schema mailing list, but nobody seemed interested.
That's because I was still trying to implement my own VM for a Scheme
system. Which was really just spending time in the wrong place, since
I'd never catch up with what's been done already.
> Bigloo is a
> Scheme->C with a very C-friendly FFI (as in you don't need to wrap
> existing C libraries with still more C-code). The only problem I've
> ever had with it is in a call/cc-heavy program (which was slow, and
> has GC problems on certain platforms), but that's a result of
> its decision to be C-friendly (and the Boehm collector).
I've never understood why compiling call/cc should be so much problem.
I'm under the impression (from not too thoroughly reading various PhD
theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
making use of call/cc can be CPS transformed into a program with only
explicit continuations. From there some partial evaluation and
optimization can be done, if wished, and then the program can be
transformed into machine code (or C, which is close enough). In that
case, since call/cc is transformed into various shapes of explicit
continuations, it shouldn't serve to be a problem. Am I confused
about this? Or is theory not directly applicable to practice in this
case? Or is it just that many implementations have been made by
people unfamiliar with the CPS transformation and competitors (I don't
buy that *at all*)?
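(For the record, the transformation I mean is the obvious one. A tiny
runnable example -- the names are mine, not from any particular
compiler:

(define (call/cc-cps f k)          ; F is the CPS'd receiver
  (f (lambda (v ignored-k) (k v))  ; the reified continuation
     k))

(define (add1-cps x k) (k (+ 1 x)))

;; (+ 1 (call/cc (lambda (c) (c 41)))) becomes:
(call/cc-cps (lambda (c k) (c 41 k))
             (lambda (v) (add1-cps v display)))   ; displays 42

Once everything is in this form, a captured continuation is just
another closure.)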
> The beauty was that I was able to do a nearly mindless port to other
> Schemes from Bigloo. It is *very* standards compliant, including
> SRFI-0 et al.
Bigloo sounds very promising. I'll dl a copy and take a good look,
and compare with Scsh (which is the top of my list right now).
> A Bigloo fan since the last century...
Since 1900 or earlier?? Sorry... But I had to dig at that since I've
been kidding everyone else about this Brand New Century stuff. There
was no year 0, so you start from one. Thus the year 10 is still in
the first decade, and the year 2000 is the last year of the 20th
century. The easiest way to remember all of this is that the century
you're in is named after the last year in it -- 19th century was 1801
to 1900, and 20th century is from 1901 to 2000. It's an off-by-one
error, though not very obvious to people used to counting from zero.
'james
> Traditional Unix-haters didn't feel a need to offer an alternative for
> a number of reasons:
>
> 1. It's not a `design' when it is patently obvious that no
> forethought went into it.
>
> 2. Any reasonable person with a 6th grade education could do better,
> so several obvious alternatives have probably already been dreamed
> up.
>
> 3. Implying that improvements would be considered or adopted offends
> the jaded demeanor of other Unix haters.
>
> 4. The catharsis comes from the vitriol.
Hahaha! It's nice to see that someone can reason about Unix-haters...
So many of them are secret Unix bigots in the first place. Although
admittedly some of them come from more illustrious backgrounds, like
the Lisp Machines, or TENEX and TWENEX, or the like.
> That being said....
>
> > So can anyone offer suggestions as to Scheme choices? Experience with
> > the various FFIs and compilers? Ideas and concerns about implementing
> > native compilers for the various Schemes? I won't kid myself about the
> > largeness of this undertaking and how much work it will take to
> > implement...
>
> MIT Scheme is a hairball, but it has a *great* compiler. Very few
> people (1?) are currently working on developing it.
I had thought that the compiler didn't actually generate independently
executable code, but code only loadable into the interpreter. In that
case the interpreter would have to be loaded and running for anything
else to happen, which would slow the boot process down quite a bit on
slower machines (like mine).
And the fact that only Chris Hanson (sp?) is apparently maintaining
it, and that no new releases have come out for a *long* time, and that
it isn't R5RS compliant (with the requisite implementation of the
macro system), I feel there are too many weights against it.
> MzScheme is under active development, and it has been compiled to a
> standalone kernel using the Flux OS Toolkit.
I had thought about this before, but it does away with the hardware
support that I can get from the Linux/X combination. The Flux toolkit
would allow me to make modules out of all the code that I might use
for hardware support, process scheduling, etc, but then I'd have to
keep watch on gritty parts of the Linux kernel and the X system for
what I'd need to update. That would defeat much of the purpose of
this in the first place.
In the end I'd really like to have something akin to a Linux
distribution, but with Scheme-based programs replacing much of the OS.
Given that many tools for developing such distributions are already
available, I feel that this is a goal with some near-future promise.
'james
> On 25 Apr 2000, The Almighty Root wrote:
>
> > The most irritating part of Unix IMHO is not the design of the kernel
> > (yeah, yeah it's a monolithic spaghetti ball) or the functionality of
> > system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
> > panics, or whatever else is associated with the kernel and driver
> > implementations. What's *really* irritating about the Unix design is
>
> Ya know, I've seen other references to "PCLusering" in Lisp groups
> recently, I think I know what it means, and I've read Gabriel's parable.
> Given that BSD has had restartable system calls for the last 17
> years, could someone explain to me what "PCluser" problems still exist in
> modern Unix?
I'm not sure about the BSD implementation, but in ITS ISTR any system
call could not only be restarted, but totally backed out of such that
the system call seemed to never actually have happened. The feeling
of the ITS hackers is that if this was already done once there's no
reason for anyone not to implement it again, since the brain work of
inventing it has already been done. Never mind the fact that all the
ITS source was written in an incompatible (sorta) version of the
PDP-10 assembly language (which had many features of the higher level
languages of the time, in fact), and that the PDP-10 instruction set
had certain aspects that were hard to duplicate on other platforms.
And that much of the code to ITS is impossible to read without
commentary from the original authors.
There's a paper about PCLSRing written by Alan Bawden whose location I
can't seem to recall. But if you search for his name and the string "PCLSR"
you'll probably hit paydirt. Or you could stop by alt.sys.pdp10,
which has been very active recently, and is filled with crufty hackers
discussing various crufty aspects of the -10 series computers.
If I'm wrong about what I said I apologize in advance. It's been well
over a year and a half since I read that paper, and I've never worked
on an ITS system. Just read about them and appreciated their
grandeur. And browsed some source and docs.
'james
This must be the one you're thinking of:
http://www.inwap.com/pdp10/pclsr.txt
> Centuries ago, Nostradamus foresaw a time when Olin Shivers would say:
> >OSKit gets you up and running on the bare metal pretty painlessly. It's
> >been used to get Scheme, Java & ML systems up on raw machines.
>
> The research material that has come out of that group has been rather
> neat; the substrate that pulls in Linux and FreeBSD drivers via
> something resembling COM is the _slickest_ idea of modern days for
> gaining some advantage from the development of device drivers for
> Linux and FreeBSD.
>
> I am, however, a bit skeptical that this approach is of _massive_
> benefit.
>
> The problem that virtually all attempts at "LispOS" implementations
> have fallen prey to is that of getting caught up in having to support
> all sorts of bizarre sorts of hardware.
This is exactly what I already came to terms with. I've never liked
writing drivers, or any other code that operates at a similar level.
Even serial communications programs bug me. I don't like to think of
bit-shifting and masking unless I have to. It takes too many cycles
best spent on other things. Writing something which is essentially
*only* that is right out, in my opinion.
> The OSKit approach looks like the one that most plausibly offers a
> route to get _some_ benefit from the _massive_ efforts going into
> hardware support on Linux and *BSD; it is, nonetheless, only providing
> the hardware support that was available in early 1997. (Linux 2.0.29)
> Further, OSKit is not portable to more than IA-32 systems. More is
> predicted, but I rather think that it has been predicted for several
> years now, to no avail.
Also note that keeping the OSKit up to date requires extensive
knowledge of both *BSD and Linux and following their respective
development processes intently. Understanding the changes being made
to the entire kernel structure of both systems, in parallel, is a very
difficult undertaking. Managing to unglue these parts and meld them
into the OSKit is equally nontrivial. Doing this alone, even just
once to bring things up to date before you develop your OS, is
unreasonable at best.
> The approach that seems _rather_ more "production-worthy," at this
> point, is that of building a "Lisp System" by layering a Lisp-based
> set of user space tools atop a kernel coming from Linux or one of the
> BSDs.
Which is what I mentioned earlier. I figure that replacing the user
space of a Linux system in an incremental fashion will succeed where
all other attempts have failed. Linux is already the most-ported OS
in history. Unreasonable amounts of support for all sorts of generic,
crufty, and crappy hardware is already available, and the list is
growing longer as I type. If a Scheme-based user space was
implemented atop this then the only thing that would need porting
would be the compiler and interpreter, and total portability would be
achieved, modulo programs depending on hardware support, which I think
would be few.
> >If you don't like that stuff, you can replace it with something written in any
> >good Unix-based Scheme *without* getting into the mess of doing your own
> >OS. The init process can be anything you want it to be; its architecture is
> >not baked into the Unix kernel design. Its job is one very well suited to
> >Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
> >would be a fine thing.
Agree. An MTA is one of the things I'd like to tackle after replacing
init and friends.
> Yes, indeed.
>
> <ftp://linux01.gwdg.de/pub/cLIeNUX/interim/> is the home of cLIeNUX, a
> Linux that is essentially "Forth-based."
>
> Notable properties:
>
> - It uses a very different filename hierarchy that is very
> non-UNIX-like:
> <ftp://linux01.gwdg.de/pub/cLIeNUX/descriptive/DSFH.html>
> "cLIeNUX now implements what I call the DSFH, the Dotted Standard
> Filename Hierarchy. I had some nice docs on this that got vaporized
> in a reboot accident. This happens when doing a distro. For now,
> look at the DSFHed script, and the symlinks in / . DSFH makes the
> standard unix filenames invisible, and modifies them. They are
> still there though, in modified form. Stuff that looks for
> e.g. /bin automatically can be converted to look for /.bi
> automatically, automatically. And the user gets sensible names to
> look at in her native language. Sorry if that sounds crazy."
Why is it that when anyone proposes a replacement or redesign for Unix
Brain Damage they always feel as though they have to apologize for
their craziness? I already apologized myself. It's almost
instinctive that hordes of Unix weenies are going to pour out of the
hills with their little furry hats on waving curved swords and silk
banners, screaming in arcane Mongolian tongues like Tcsh, Awk, and
Perl, lusting after the thoughtful person's blood and his female
family members.
I personally don't like changing the structure of the root directory
too much, but I would like to see the /usr and /usr/local hierarchies
folded in with root. There are reasons for the separation, but as
machines become more and more single-user oriented these distinctions
become lost. And the introduction of /../sbin directories has
provided a much more effective separation of binaries than /bin and
/usr/bin ever did.
I really differ with native language directories. There's no reason
to change them since they really aren't in anyone's human language.
Until I used Unix I'd never have been able to identify what /sbin meant. Nor
would /var/adm mean anything to me. While these threeletterisms are
supposedly mnemonic there really isn't any solid meaning attached to
them. I just think of /var as where the logs and assorted program
state files go. /etc is where the config files go, except for some
which want to be in /var somewhere. /usr is the big filesystem with
most of everything on it. To me it has nothing to do with a user, it
just happens to be pronounced that way. /gbr would have as much
meaning. (Dutch `gebruiker'.)
> - It is based on LIBC5, and uses C and FORTH as the base programming
^^^^^ not so good -- this means binary
incompatibility with the newer Linux systems who
use glibc-2 aka libc-6.
> languages.
(It's nice to see FORTH now and then, though. It's been undercover
with minimal press since the days of the 8 bit micros...)
> It _appears_ that it uses a customized init; that is the _really
> crucial_ thing that would change in creating a "LispOS" atop the Linux
> kernel.
I agree, which is why I decided that it should be the first thing to
be redesigned. The fact that a Unix kernel still lurks underneath
still shapes the design to some extent (like the extensive use of
cheap process spawning with fork/exec), but the end result should be
plenty foreign to the Unix state of mind. The design of the new init
will set the style for the rest of the system. If it's too klugy then
the rest of the system will seem vaguely patched together as well. If
it's rock solid, indestructible, and has the elegance and flair of a
Japanese castle with the same toughness, then the system will be a big
win.
> I'm not sure if cLIeNUX init is written in FORTH; that would
> be a pretty appropriate thing, although I somehow suspect that it is
> not.
>
> Everything else, on Linux, is invoked via init, whether directly or
> indirectly, so that if you change init, that provides a substantially
> different character to the system.
Yes, what I said. I should read ahead more instead of shooting from
the hip.
> The other notable Linux that has a Rather Unique Init is
> <http://www.pell.chi.il.us/~orc/Mastodon/> David Parsons' "Mastodon."
>
> The point, if it's not clear, is that there is fairly ample opportunity
> to customize a system _based on Linux_ into whatever form you like.
> cLIeNUX is an example of how a "Forth person" built something that
> runs a C-and-Forth "userspace."
Hear, hear.
> Creating a "userspace" in your favorite image may be adequate to
> provide the environment desired, and if that be so, that is likely to
> be a stabler choice than most of the alternatives, as the ongoing
> development of Linux-as-kernel provides a platform that can improve
> without necessarily forcing you to rewrite great gobs of it each time
> Intel comes out with a new CPU.
The primary bitch that I heard from various Lispophiles about not
using a Lisp-based kernel was that it would be inconvenient to patch
the running kernel with new code. And it wouldn't be as easy to
modify the existing code for the kernel. In short, it wouldn't be
like the Lisp Machines.
I have to say that both arguments are bogus. There's no reason for
anyone to even *care* what's going on in the kernel anymore. There
are people who have specialized their entire career to nothing but
kernel hacking. Even the competent Unix user doesn't even understand
what's actually happening in a kernel aside from generalities, and a
Scheme-based kernel would be just as difficult to understand (modulo
readability ;), and would be less efficient than the C-based
monsters that are out there, simply because Scheme wouldn't provide
the near 1-1 assembly to language mapping that C does, which seems
essential to hardware control. (The Lisp Machines got off easy since
they only had one small set of hardware to work with, and the drivers
for said hardware could be nearly perfected. The hardware was also
totally known, something that doesn't always exist on modern
machines.)
If you care about the kernel and its design so much, then write one.
I'd be happy to use it, if it supported my hardware. If it doesn't,
then keep writing. Otherwise, I'll be happy with my Scheme
environment, thank you.
> >I have, over time, moved a lot of the /etc scripts on my notebook
> >over to scsh -- ppp dialup, pcmcia, config bits, backup dumps,
> >etc. It's *very* pleasant to do this kind of stuff in scsh.
>
> This is an area in which it is quite unfortunate that there _hasn't_
> been any improvement on UNIX; there have traditionally been two
> approachs:
> a) The BSD way, where you have a script that starts up desired
> services, and
> b) The SysV way, where there is a boatload of scripts that start up
> individual services, and then symbolic links to turn this into a
> list that can be executed.
>
> People of both 'religion' take potshots at the other, so that the only
> choice is between the Right Way, which is the init that _I_ use, and
> the Wrong Way, which is the init that _you_ use.
>
> Virtually no examination of the question, "Is there perhaps a better
> way?"
Most people have been either too busy to care (until they have to hack
init scripts) or too afraid of retaliation from *both* camps united
against them to broach the subject, IMO.
> David Parsons' approach seems more like using a UNIX "Makefile" to
> establish a set of service dependancies that need to be satisfied.
>
> There are some deadlock conditions to worry about, but it would be a
> Truly Good Thing to try to come up with a better way of managing this
> stuff.
I don't suggest trying to automatically establish dependencies between
services. It just seems smarter to have the administrator write such
things themselves, since they can comprehend manual pages better than
the computer can. No tackling AI-complete problems for me...
Providing a simple mechanism for implementing the dependency plan
seems much easier, and there's less probability of klugification of
the design.
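Something as dumb as this would do for the mechanism (service names
invented; assumes SRFI-1's filter, every, and remove):

(define dependencies                 ; written by the administrator
  '((syslogd)
    (network)
    (inetd    network syslogd)
    (sendmail network syslogd)))

(define (start-order deps)
  (let loop ((pending deps) (done '()))
    (if (null? pending)
        (reverse done)
        (let ((ready (filter (lambda (entry)
                               (every (lambda (d) (memq d done))
                                      (cdr entry)))
                             pending)))
          (if (null? ready)
              (error "dependency cycle among:" (map car pending))
              (loop (remove (lambda (e) (memq e ready)) pending)
                    (append (map car ready) done)))))))

;;; (start-order dependencies) => (network syslogd sendmail inetd)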
> Note that the Software Carpentry
> <http://software-carpentry.codesourcery.com/> project is seeking to
> build tools to supercede autoconf, make, Expect, and Bugzilla.
> They've got some funding, and a goodly hundred or so candidate
> utilities proposed in the various tool classifications.
>
> Something good may come out of _that_.
>
> >>If you think I'm crazy then go ahead and say it,
> >
> >You are crazy, but that's not important. The only thing that matters
> >is whether or not you do anything. Do *anything*, and you matter.
>
> Indeed.
Thanks for the votes of confidence. :)
> Take a Linux kernel, write an "init" that uses Scheme-based startup
> scripts, and build up a "takes-a-few-floppies" distribution that
> parallels cLIeNUX in having a user-space that is largely coded in
> Scheme, and this can become an interesting project.
>
> Unfortunately, there is a dearth of Schemes that compile directly to
> machine code; that is rather more common with Forth.
My current problem. I'm afraid of having to write a compiler that
generates *independently* linkable and loadable objects before I have
anything to work with. I've not written something of this complexity
before, and I'm worried that it will never get to a point of
usability. I'll get mired in the Turing Tarpit as it were, and not
able to move on to the real goal.
By `independently' I mean a native binary object that can be used just
like the typical .o file generated by a C compiler. A raw binary
object that can be linked to libraries and executed independent of any
existing Scheme implementation. This way Scheme doesn't have to be
running before anything happens, and we escape the situation that the
Lisp Machine OS (and its descendants) was in, that Lisp had to be
started first before anything else could happen, and that namespace
pollution was almost inevitable, even with a powerful module system.
> It might be more
> "natural" to implement this using CMU Common Lisp instead. But the
I really don't want to have to do this since I'd be writing a
Lisp-based OS and not a Scheme-based OS. I'll use MIT Scheme before I
start using Lisp. I'd even have to change the name then, and I *like*
`Schema' -- it even has a neat plural form! :)
> exploration of how to implement this would doubtless provide
> interesting insights and learning...
Oh yeah. Learning. That's all I'm doing right now. And all I'll
ever be doing...
I think the real goal I have is that nobody will have to do this
again. Linux looks like it's going to persist, so this Scheme-based
replacement will likely hang on too, if it gets anywhere. But nobody
can predict the future, not even Nicholas Negroponte.
> --
> A student, in hopes of understanding the Lambda-nature, came to
> Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> true", asked the student, "that PL-1 has many of the same data types
> as Lisp?" Almost before the student had finished his question,
> Greenblatt shouted, "FOO!", and hit the student with a stick.
Replace PL/1 with C. Much more current, that. Same damned problem.
"FOO!" *smack*
Does anyone know the actual event behind this koan?
'james
That's it precisely. I'll grab it right now for my files.
'james
> I had thought that the compiler didn't actually generate independently
> executable code, but code only loadable into the interpreter. In that
> case the interpreter would have to be loaded and running for anything
> else to happen, which would slow the boot process down quite a bit on
> slower machines (like mine).
A minor correction. The interpreter is not needed (although it is
always there). It wouldn't be hard to splice it out.
The MIT Scheme runtime system is needed. This is composed of both a
library written in C (pretty minimal but includes GC and the guts of
call-with-current-continuation) and a large library written in Scheme
and compiled.
You don't need any interpreted code or interpreter -- in fact, when
you start MIT Scheme, there isn't any interpreted code.
The compiler doesn't produce independently-executable code, but at a
similar level neither does your C compiler -- you need anything from
crt0.o to the C library (including stdio, stdlib, etc.) in Unix, and
similarly in Windows (that's what most DLLs are about).
What MIT Scheme doesn't have is a linking loader separate from the
interactive one -- again, totally orthogonal from interpretation.
> I've never understood why compiling call/cc should be so much problem.
> I'm under the impression (from not too thoroughly reading various PhD
> theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
> making use of call/cc can be CPS transformed into a program with only
> explicit continuations. From there some partial evaluation and
> optimization can be done, if wished, and then the program can be
> transformed into machine code (or C, which is close enough). In that
> case, since call/cc is transformed into various shapes of explicit
> continuations, it shouldn't serve to be a problem. Am I confused
> about this? Or is theory not directly applicable to practice in this
> case? Or is it just that many implementations have been made by
> people unfamiliar with the CPS transformation and competitors (I don't
> buy that *at all*)?
There is no _conceptual_ problem with call-with-current-continuation
(ignoring dynamic-wind, which adds some quirks).
There are plenty of pragmatic issues, however.
To a coarse approximation, there are two major ways to implement
Scheme (and ML, which at this level is indistinguishable):
1. CPS-based. This converts all programs to explicit continuation
passing style. Continuations then become simple closures that can
be handled identically to all others -- in particular, they can
easily be heap allocated. call-with-current-continuation is
conceptually trivial.
However, just because this is simple does not mean it is
desirable.
In particular, there are plenty of reasons why stack allocation is
preferable to heap allocation. If you go the full blown (true) CPS
way, then it becomes difficult to do stack allocation, and this can
cause performance problems for code that doesn't use
call-with-current-continuation (although it may make programs that
do use it relatively faster).
In theory, a full-blown extent and escape analysis on the resulting
CPS program should allow you to stack-allocate some of the
closures, perhaps even some that were not originally continuations.
In practice, I don't know of anyone who's done this (but I'm
somewhat out of touch), especially since extent and escape analysis
are almost always inconclusive in the presence of separate
compilation (different modules compiled in isolation).
Thus the only CPS-based systems that retain stack allocation (to my
knowledge) are those that use "pseudo-CPS". They use syntactic
CPS, but retain the distinction between those closures that arise
from CPS (and hence "stack-allocatable"), and those that arise in
other ways and will not be allocated on a stack (heap or none at
all).
In these "pseudo-CPS" systems, since continuations are
stack-allocated, call-with-current-continuation must be implemented
using one of the many techniques used in the direct systems:
2. Direct systems. These don't do CPS. They consider
call-with-current-continuation a library function to be implemented
in the runtime system, and otherwise compile Scheme in a way
similar to how most other languages are compiled -- except for true
tail recursion, which adds its own warts, requiring either lambda
lifting or some real cleverness.
These systems typically use a "call stack" (in Scheme it is not a
call stack, but a continuation stack, since tail calls don't grow
it).
call-with-current-continuation must then manage the stack by
copying as necessary. The details differ depending on the
technique used. There are many possibilities (and I'm sure I'm
missing some):
- stop and copy in/out from a global area
- incremental copy in/out
- using stack-lets and copying stack-lets only when explicit
continuations are invoked.
Note that in either case call-with-current-continuation is not a
problem.
In the true CPS systems, it becomes just closure creation, something
which the compiler must know how to do.
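To make that concrete, here is a rough C rendering of a CPS-converted
factorial; every name here is invented for illustration. Each
continuation has been closure-converted into a heap-allocated record,
so capturing one is just keeping a pointer -- which is why
call-with-current-continuation is trivial under this scheme:

#include <stdio.h>
#include <stdlib.h>

/* A continuation is a closure: a code pointer plus whatever free
   variables it captured. */
struct cont;
typedef void (*cont_code) (struct cont *self, long value);
struct cont
{
  cont_code code;
  struct cont *next;   /* the continuation we were given */
  long n;              /* the pending multiplier */
};

static void fact_cps (long n, struct cont *k);

/* "Multiply the result by n, then hand the product to the saved
   continuation." */
static void
mult_cont (struct cont *self, long value)
{
  self->next->code (self->next, self->n * value);
}

static void
fact_cps (long n, struct cont *k)
{
  if (n == 0)
    k->code (k, 1);                       /* "return" = call k */
  else
    {
      struct cont *c = malloc (sizeof *c);  /* heap-allocated frame */
      c->code = mult_cont;
      c->next = k;
      c->n = n;
      fact_cps (n - 1, c);
    }
}

static void
print_cont (struct cont *self, long value)
{
  printf ("%ld\n", value);
}

int
main (void)
{
  struct cont final = { print_cont, NULL, 0 };
  fact_cps (10, &final);   /* prints 3628800 */
  return 0;
}

Note that every call here is conceptually a tail call -- nothing ever
returns -- which is also why true CPS leans so hard on proper tail
calls, as discussed below; in real C these calls still push stack
frames.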
In the pseudo-CPS or direct systems it becomes a problem for the
runtime system, and not for the compiler -- to the compiler it is just
like any other external call (e.g. length, apply, or
open-window-with-scrollbars-and-fuzzy-corners).
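For flavor, here is a minimal sketch of the first of those techniques
(stop and copy in/out from a global area). All names are invented; it
assumes a downward-growing stack and sweeps the genuinely hard parts
-- callee-saved registers, frames that outlive the copy, dynamic-wind
-- under the rug:

#include <setjmp.h>
#include <stdlib.h>
#include <string.h>

struct continuation
{
  jmp_buf registers;   /* saved machine state */
  char *frames;        /* saved copy of the control stack */
  size_t size;
};

static char *stack_base;   /* recorded at startup, before Scheme runs */

struct continuation *
capture (void)
{
  char here;
  struct continuation *k = malloc (sizeof *k);

  k->size = stack_base - &here;
  k->frames = malloc (k->size);
  memcpy (k->frames, &here, k->size);   /* copy out to the global area */
  if (setjmp (k->registers))
    return NULL;        /* second arrival: k has been invoked */
  return k;             /* first arrival: k was just captured */
}

void
invoke (struct continuation *k)
{
  /* Copy the saved frames back over the live stack, then jump.  A
     real runtime must first make sure the live stack is deeper than
     the copy, so that the memcpy cannot clobber this very frame. */
  memcpy (stack_base - k->size, k->frames, k->size);
  longjmp (k->registers, 1);
}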
There are additional complications if your target is not machine code
(or assembly language -- same difference) and is a high-level language
that does not provide true and complete tail recursion (e.g. C,
although it is hard to call it a high-level language).
- True CPS systems rely extremely heavily on proper tail calls, since
even "returning" involves a tail call. Thus some of the
approximations to true tail recursion that some implementations have
done (e.g. Scheme->C) are not feasible in a true CPS system.
Getting true tail calls out of C (or Pascal for that matter) is
painful. There are several long-standing tricks such as driver
loops [*] and some new ones to provide true tail recursion.
- For direct systems, you have to be able to reify the control stack
-- something that most other languages don't let you do. You again
end up with a painful task. This can be done by a handful of other
techniques (e.g. keeping a dual "data" stack, or resorting to
assembly/machine language for the core of
call-with-current-continuation).
[*] By driver loop I mean that the implementation never allows the
host call stack to get very deep. Every so often (the details of
when and how vary according to the implementation), instead of doing
a native call, the current procedure returns to an outer driver loop
with some state and arguments that cause it to call the next
procedure:
extern SCHEME_OBJECT proc, args[N];

void
driver_loop (void)
{
  while (1)
    {
      scheme_funcall (proc, args[0], ... args[N - 1]);
      /* And invoking proc eventually overwrites proc and args for
         the next iteration. */
    }
}
This is the same technique that the original Scheme implementation
used to get true tail recursion out of MacLisp (which doesn't
guarantee it).
> > A student, in hopes of understanding the Lambda-nature, came to
> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > true", asked the student, "that PL-1 has many of the same data types
> > as Lisp?" Almost before the student had finished his question,
> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>
> Replace PL/1 with C. Much more current, that. Same damned problem.
> "FOO!" *smack*
>
> Does anyone know the actual event behind this koan?
I believe that Danny Hillis wrote it. I wouldn't know if there was an
actual event upon which this is based, but with Greenblatt involved, I
wouldn't rule it out.
> I've never understood why compiling call/cc should be so much problem.
> I'm under the impression (from not too thoroughly reading various PhD
> theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
> making use of call/cc can be CPS transformed into a program with only
> explicit continuations. From there some partial evaluation and
> optimization can be done, if wished, and then the program can be
> transformed into machine code (or C, which is close enough). In that
> case, since call/cc is transformed into various shapes of explicit
> continuations, it shouldn't serve to be a problem. Am I confused
> about this? Or is theory not directly applicable to practice in this
> case? Or is it just that many implementations have been made by
> people unfamiliar with the CPS transformation and competitors (I don't
> buy that *at all*)?
You can do this, but it comes with a price: the CPS code might not
use continuations in a stack-like manner. You have two alternatives:
1) punt on using the stack and just heap allocate all your
continuation frames,
2) make your compiler `smart enough' to figure out when it can use the
stack.
Since cwcc is so rarely used in production code, it seems
reasonable to put the entire burden of using cwcc on the primitive
itself rather than in the compiler.
> Since 1900 or earlier?? Sorry... But I had to dig at that since I've
> been kidding everyone else about this Brand New Century stuff. There
> was no year 0, so you start from one. Thus the year 10 is still in
> the first decade, and the year 2000 is the last year of the 20th
> century. The easiest way to remember all of this is that the century
> you're in is named after the last year in it -- 19th century was 1801
> to 1900, and 20th century is from 1901 to 2000. It's an off-by-one
> error, though not very obvious to people used to counting from zero.
Yes, but if you were planning a big party for the end of the
millennium, you might find people less enthusiastic because they were
partying about 4 months ago.
Hi, Bill.
> - True CPS systems rely extremely heavily on proper tail calls, since
> even "returning" involves a tail call. Thus some of the
> approximations to true tail recursion that some implementations have
> done (e.g. Scheme->C) are not feasible in a true CPS system.
> Getting true tail calls out of C (or Pascal for that matter) is
> painful. There are several long-standing tricks such as driver
> loops [*] and some new ones to provide true tail recursion.
>
> [*] By driver loop I mean that the implementation never allows the
> host call stack to get very deep. Every so often (the details of
> when and how vary according to the implementation), instead of doing
> a native call, the current procedure returns to an outer driver loop
> with some state and arguments that cause it to call the next
> procedure:
>
> extern SCHEME_OBJECT proc, args[N];
>
> void
> driver_loop (void)
> {
>   while (1)
>     {
>       scheme_funcall (proc, args[0], ... args[N - 1]);
>       /* And invoking proc eventually overwrites proc and args for
>          the next iteration. */
>     }
> }
>
> This is the same technique that the original Scheme implementation
> used to get true tail recursion out of MacLisp (which doesn't
> guarantee it).
Baker suggested a trick where you never pop the C stack but just let
it grow in one direction. When you fall off the end, you run the
garbage collector to evacuate the continuations off the stack and then
use LONGJMP to clear the stack. This gives you proper tail recursion
*and* first-class continuations in one whack, bypassing at least some
of the problems with using C.
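(This is the trick from Baker's paper "CONS Should Not CONS Its
Arguments, Part II: Cheney on the M.T.A.")  A minimal sketch of the
mechanism; gc_flip and scheme_driver are invented stand-ins for the
real runtime entry points:

#include <setjmp.h>

extern void gc_flip (void);        /* evacuate live frames to the heap */
extern void scheme_driver (void);  /* runs compiled code, continuing
                                      from whatever state gc_flip saved */

static jmp_buf toplevel;
static char *stack_base;

#define STACK_BUDGET (64 * 1024)   /* how deep we let the C stack get */

/* Compiled code calls this on every function entry. */
void
check_stack (void)
{
  char here;
  /* Assuming a downward-growing stack: once the budget is consumed,
     move everything live off the stack and unwind it in one shot. */
  if (stack_base - &here > STACK_BUDGET)
    {
      gc_flip ();
      longjmp (toplevel, 1);   /* discard the entire C stack */
    }
}

int
main (void)
{
  char base;
  stack_base = &base;
  setjmp (toplevel);    /* we land here again after every reset */
  scheme_driver ();     /* never returns normally */
  return 0;
}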
--
~jrm
/gbr in English might conceivably be a contraction of /goober, which
is a *wonderful* place to put the user-space code ;)
> Scheme-based kernel would be just as difficult to understand (modulo
> readability ;)
A big issue actually...
> and would be more inefficient than the C-based
> monsters that are out there, simply because Scheme wouldn't provide
> the near 1-1 assembly to language mapping that C does, which seems
> essential to hardware control.
I've just *got* to disagree with this. I haven't felt as close to the
silicon as I do in Scheme for *years*. Once you get it into your
head that function names are (equivalent to) labels and parameters are
(equivalent to) registers it gets *really* cool.
Now, I'm not saying that R5RS Scheme is a systems programming
language, but it's not very far removed from being one. The changes
I'd make would be:
1) replace the numeric tower with machine-integer types and
*nothing* else
2) add a way to directly access non-GC memory
3) global interrupt/exception handlers - handler taking the
current continuation as one parameter, other params need
more thought
more radical (or expensive) ideas include:
4) pitch ports as a standard datatype
5) replacing symbols w/Scheme48's enumerated types
6) a (ML-ish) module system that admits categorical
composition of functionality
I *think* that such a system (changes 1-3) would be sufficient for a
pretty groovy and potentially fast systems programming language. And
C-compatibility? Don't need it. Let the C compiler eat cake...
<re: compilers for systems programming in Scheme>
> By `independently' I mean a native binary object that can be used just
> like the typical .o file generated by a C compiler.
If you're happy linking w/the Scheme RTS you'll be OK. If not,
you're going to need a compiler that does hefty region analysis. Those
beasties aren't common anywhere yet, although I have the impression
that Jeff Siskind is trying to incorporate that into Stalin.
david rush
--
Thinking dangerous thoughts...
The Almighty Root wrote:
>
> It's rather patently obvious that a vastly better design is possible
> than the current init cruft. Most any design for init would be better
> than such a mess. I'm not suggesting something even more inane, like
the garbage that Windows NT has foisted off on the public, with its
> little window in an itty-bitty font that contains supposed `services'
> (Unix daemons) that you click on and press buttons that frequently don't
> do the same thing they advertise. That's just as bad if not worse than
> Unix init.
Well, Apple recently showed a glimpse of what they're working on for Mac
OS X:
- daemons/scripts/services are rewritten to take their init info from
files in the format of XML property lists (sketched after this list),
eliminating some of the mess of the zillion different config file formats
- there is a simple graphical editor built in that presents the property
lists as outlines
- the services all have 2 new properties added to whatever else is in
their config files: DEPENDS and PROVIDES
- the system tracks the DEPENDS and PROVIDES properties and
automatically determines load order, eliminating the messy crap of
directories full of scripts with names numbered to force load order
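A hypothetical sketch of what one of those property lists might look
like (the DEPENDS and PROVIDES keys are from the description above;
every other detail is invented for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <!-- What this service offers to others -->
    <key>PROVIDES</key>
    <array>
        <string>MailServer</string>
    </array>
    <!-- What must already be running before this service starts -->
    <key>DEPENDS</key>
    <array>
        <string>Network</string>
        <string>Resolver</string>
    </array>
</dict>
</plist>

The init replacement can then topologically sort services by matching
DEPENDS against PROVIDES, which is exactly what eliminates the
numbered-script hack.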
Food For Thought?
In terms of number of platforms supported, NetBSD runs on more
platforms than Linux--about the only place Linux has an edge is
support for different i386 configurations. The NetBSD release cycle
also tends to be more stable than Linux's, which might make it a
better target for replacing userland.
--
Dan Riley d...@mail.lns.cornell.edu
Wilson Lab, Cornell University <URL:http://www.lns.cornell.edu/~dsr/>
"History teaches us that days like this are best spent in bed"
It's not quite proper: the C stack still grows, so you keep allocating
memory (for the C stack-frame, which is built anyway) even if you
are in a tight loop that does not cons as such.
But the approach really is elegant. I'm writing a compiler that uses
this strategy and it works fine. The compiler itself is only about 2600
lines of Scheme code and the performance of the generated executables
is reasonable (And call/cc intensive benchmarks really burn!).
Watch this space for further news.
felix
If the goal is to build a Lisp _environment_, then the creation of
device drivers is largely a distraction, as, while device drivers may
be _necessary_ to have a functioning system, their development biases
towards the "environment" side, and away from the "Lisp" side.
>> The OSKit approach looks like the one that most plausibly offers a
>> route to get _some_ benefit from the _massive_ efforts going into
>> hardware support on Linux and *BSD; it is, nonetheless, only providing
>> the hardware support that was available in early 1997. (Linux 2.0.29)
>> Further, OSKit is not portable to more than IA-32 systems. More is
>> predicted, but I rather think that it has been predicted for several
>> years now, to no avail.
>
>Also note that keeping the OSKit up to date requires extensive
>knowledge of both *BSD and Linux and following their respective
>development processes intently. Understanding the changes being made
>to the entire kernel structure of both systems, in parallel, is a very
>difficult undertaking. Managing to unglue these parts and meld them
>into the OSKit is equally nontrivial. Doing this alone, even just
>once to bring things up to date before you develop your OS, is
>unreasonable at best.
Ah, yes, right you are.
There is the theory that UDI (Uniform Driver Interface)
<http://www.projectudi.org/> could provide a more "universal" way of
coping with this; that seems more to be an attempt by the commercial
UNIX folk to try to lure the Linux developers to create device drivers
that "True UNIXes" can use as well. Which is, in a sense, the same
goal that the OSKit has at heart...
>> The approach that seems _rather_ more "production-worthy," at this
>> point, is that of building a "Lisp System" by layering a Lisp-based
>> set of user space tools atop a kernel coming from Linux or one of the
>> BSDs.
>
>Which is what I mentioned earlier. I figure that replacing the user
>space of a Linux system in an incremental fashion will succeed where
>all other attempts have failed. Linux is already the most-ported OS
>in history. Unreasonable amounts of support for all sorts of generic,
>crufty, and crappy hardware is already available, and the list is
>growing longer as I type. If a Scheme-based user space was
>implemented atop this then the only thing that would need porting
>would be the compiler and interpreter, and total portability would be
>achieved, modulo programs depending on hardware support, which I think
>would be few.
One thing I'd see as plausible is that there _could_ be some value in
having some "hacks" added to the kernel that would be supportive of
the "Lisp Environment" needs.
1. It might be a neat idea to have a Lisp-based equivalent to the
/proc virtual filesystem.
On Linux (and Solaris, and possibly others...), you can head to
the /proc directory and see a directory hierarchy that can be
queried to get kernel-level information about system
configuration. In some cases, you can drop data into the files
and change kernel settings.
Wouldn't It Be Neat to have a Lisp-oriented interface where these
would be mapped onto a tree of "association lists" that one could
explore from within the Lisp environment? (A rough sketch of the
flavor follows below.)
A patch to allow this to be dealt with on "Lisp terms" could even
be fed back to the Official Kernel Tree.
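As a rough user-space approximation of the flavor (the real thing
would be a kernel-side virtual filesystem, and everything here is
invented for illustration), the following walks one level of
/proc/sys/kernel and prints it as an association list:

#include <stdio.h>
#include <string.h>
#include <dirent.h>

int
main (void)
{
  DIR *d = opendir ("/proc/sys/kernel");
  struct dirent *e;

  printf ("(");
  while (d && (e = readdir (d)) != NULL)
    {
      char path[512], value[256];
      FILE *f;

      if (e->d_name[0] == '.')
        continue;                   /* skip "." and ".." */
      snprintf (path, sizeof path, "/proc/sys/kernel/%s", e->d_name);
      if ((f = fopen (path, "r")) == NULL)
        continue;                   /* unreadable entry */
      if (fgets (value, sizeof value, f))   /* fails on subdirectories */
        {
          value[strcspn (value, "\n")] = '\0';
          printf ("(%s . \"%s\") ", e->d_name, value);
        }
      fclose (f);
    }
  printf (")\n");
  if (d)
    closedir (d);
  return 0;
}

The output is something like ((ostype . "Linux") (osrelease . ...)),
ready to be read straight into a Scheme system.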
>> >If you don't like that stuff, you can replace it with something written in any
>> >good Unix-based Scheme *without* getting into the mess of doing your own
>> >OS. The init process can be anything you want it to be; its architecture is
>> >not baked into the Unix kernel design. Its job is one very well suited to
>> >Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
>> >would be a fine thing.
>
>Agree. An MTA is one of the things I'd like to tackle after replacing
>init and friends.
Inetd would be interesting to "redo;" I am rather _less_ excited about
replacements for Sendmail when there are already so many of them.
>> - It is based on LIBC5, and uses C and FORTH as the base programming
> ^^^^^ not so good -- this means binary
> incompatibility with the newer Linux systems who
> use glibc-2 aka libc-6.
>> languages.
There are not _massive_ merits to being LIBC5-based; I'd argue in
favor of GLIBC 2.1, as it is _far_ more portable.
>(It's nice to see FORTH now and then, though. It's been undercover
>with minimal press since the days of the 8 bit micros...)
Forth is well-suited to the purpose at hand:
-> It provides a model that makes it natural to have both
"interpreted" and "compiled" forms.
>> Unfortunately, there is a dearth of Schemes that compile directly to
>> machine code; that is rather more common with Forth.
>
>My current problem. I'm afraid of having to write a compiler that
>generates *independently* linkable and loadable objects before I have
>anything to work with. I've not written something of this complexity
>before, and I'm worried that it will never get to a point of
>usability. I'll get mired in the Turing Tarpit as it were, and not
>able to move on to the real goal.
>
>By `independently' I mean a native binary object that can be used just
>like the typical .o file generated by a C compiler: a raw binary
>object that can be linked to libraries and executed independently of
>any existing Scheme implementation. This way Scheme doesn't have to be
>running before anything happens, and we escape the situation the
>Lisp Machine OS (and its descendants) was in, where Lisp had to be
>started before anything else could happen and namespace pollution was
>almost inevitable, even with a powerful module system.
What The World Probably Needs is a Scheme parser for GCC, so that
you'd do:
% gcc -c some_schemefile.scm -O3
some_schemefile.o
%
Several of the Scheme systems provide Scheme-to-C translations, which
might do the trick, albeit with the blemish that you have to be aware
of doing C #include configuration along with any Scheme configuration.
Stalin gets cited a lot, but it seems to be _incredibly_ consumptive
of memory, so I am skeptical that it will ever be of general interest.
--
Rules of the Evil Overlord #13. "I will be secure in my
superiority. Therefore, I will feel no need to prove it by leaving
clues in the form of riddles or leaving my weaker enemies alive to
show they pose no threat."
<http://www.eviloverlord.com/lists/overlord.html>
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>
> Joe Marshall wrote in message ...
> >
> >Baker suggested a trick where you never pop the C stack but just let
> >it grow in one direction. When you fall off the end, you run the
> >garbage collector to evacuate the continuations off the stack and then
> >use LONGJMP to clear the stack. This gives you proper tail recursion
> >*and* first-class continuations in one whack, bypassing at least some
> >of the problems with using C.
> >
>
>
> It's not quite proper: the C stack still grows, so you keep allocating
> memory (for the C stack-frame, which is built anyway) even if you
> are in a tight loop that does not cons as such.
Since you are discarding it at the rate you are allocating it, it is
properly tail recursive at the Scheme level. What it is at the C
level is another thing.
> ja...@fredbox.com (James A. Crippen) writes:
>
> > I had thought that the compiler didn't actually generate independently
> > executable code, but code only loadable into the interpreter. In that
> > case the interpreter would have to be loaded and running for anything
> > else to happen, which would slow the boot process down quite a bit on
> > slower machines (like mine).
>
> A minor correction. The interpreter is not needed (although it is
> always there). It wouldn't be hard to splice it out.
>
> The MIT Scheme runtime system is needed. This is composed of both a
> library written in C (pretty minimal but includes GC and the guts of
> call-with-current-continuation) and a large library written in Scheme
> and compiled.
So what you're saying, and correct me if I'm wrong, is that the guts
of the Scheme system can be linked with, much as a typical shared
object library? Or treated as such, in any case?
> You don't need any interpreted code or interpreter -- in fact, when
> you start MIT Scheme, there isn't any interpreted code.
Yes, I had gathered that after tinkering with it some time ago.
> The compiler doesn't produce independently-executable code, but at a
> similar level neither does your C compiler -- you need anything from
> crt0.o to the C library (including stdio, stdlib, etc.) in Unix, and
> similarly in Windows (that's what most DLLs are about).
Indeed. What I was sort of hoping for was compiled machine code
objects that could be converted to ELF binaries for linking and
executing. So that the usual collection of binary manipulation tools
could be used on them, and that they would be similar to the output of
Unix compilers everywhere. I get the same from my f77 compiler (I
compiled ADVENT not too long ago, and it worked perfectly), and I figure
that if a compiler generates a code object and the guts of the Scheme
system are available as a shared object library, then I could work with Scheme
binaries in the same manner as all other binaries on the system.
> What MIT Scheme doesn't have is a linking loader separate from the
> interactive one -- again, totally orthogonal from interpretation.
Orthogonal from interpretation because interpretation wouldn't require
any other sort of linking loader?
What I'm really looking for, and I'm not sure if I said this already,
is a Scheme system that doesn't have to be *running* to execute Scheme
programs. C doesn't have to be running for me to execute a C program.
I want something which behaves similarly. A compiler which produces
objects suitable for a linker which can produce libraries and
executables for use by ld.so. Something which merges seamlessly with
the existing Unix structure.
'james
> ja...@fredbox.com (James A. Crippen) writes:
> > Linux is already the most-ported OS in history.
>
> In terms of number of platforms supported, NetBSD runs on more
> platforms than Linux--about the only place Linux has an edge is
> support for different i386 configurations. The NetBSD release cycle
> also tends to be more stable than Linux's, which might make it a
> better target for replacing userland.
Someone else mentioned this to me in an email and I told him that I
would take NetBSD under serious consideration. As I said to him, my
main issue with using it will likely be personal, that I'll be using a
Linux system for development and a separate drive for abuse. If
NetBSD can be booted via Lilo then I will have no qualms about using
it at all, other than it will take me longer to develop anything since
I'm not familiar with using NetBSD aside from the occasional login so
I don't know anything about the boot process or other intricacies.
I also considered trying to maintain a certain level of platform
independence, supporting more than one platform. This may or may not
be feasible, depending entirely on what sort of back-breaking
gymnastics the different platforms will push me into. If I could get
this running on different kernels then we'd have a big win. But I
have suspicions that this is more difficult than it may appear at
first.
'james
> > --
> > A student, in hopes of understanding the Lambda-nature, came to
> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > true", asked the student, "that PL-1 has many of the same data types
> > as Lisp?" Almost before the student had finished his question,
> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>
> Replace PL/1 with C. Much more current, that. Same damned problem.
> "FOO!" *smack*
>
> Does anyone know the actual event behind this koan?
>
> 'james
My question is even easier: "Can someone explain what this koan
*means*?"
Thanks,
John
Which is exactly why I asked.
If you have to ask, you obviously don't understand the Lambda-nature.
:-)
--
"There is no reason anyone would want a computer in their home". --
Ken Olson, Pres. and founder of Digital Equipment Corp. 1977
cbbr...@hex.net - - <http://www.ntlug.org/~cbbrowne/lsf.html>
Hmm... Perhaps you should contemplate this koan:
"A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
not?" Joshu answered: "Mu."
> What I'm really looking for, and I'm not sure if I said this already,
> is a Scheme system that doesn't have to be *running* to execute Scheme
> programs. C doesn't have to be running for me to execute a C program.
this is not entirely correct. grep your system for crt0, etc.
> I want something which behaves similarly. A compiler which produces
> objects suitable for a linker which can produce libraries and
> executables for use by ld.so. Something which merges seamlessly with
> the existing Unix structure.
you could implement an analogous srt0...
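A minimal sketch of what such an srt0 might amount to, with
scheme_rt_init and scheme_main as invented names for the runtime
bring-up and the compiled top-level program:

/* srt0.c -- hypothetical Scheme runtime startup, by analogy with
   crt0's job for C programs. */
extern void scheme_rt_init (int argc, char **argv);  /* heap, GC, stack */
extern void scheme_main (void);   /* compiled top-level Scheme code */

int
main (int argc, char **argv)
{
  scheme_rt_init (argc, argv);
  scheme_main ();
  return 0;
}

Link that, the runtime library, and the compiler's .o files with the
ordinary ld, and the result is a normal executable that starts Scheme
the same way crt0 starts C.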
thi
You *can't* explain a koan. If you could, it wouldn't be a koan. The path to
true enlightenment is long and hard. Sit in front of your computer. Program
in Scheme. Read the koan. Meditate. You will then achieve enlightenment.
-- But how do I know I achieved enlightenment?
-- You will understand the koan.