[dev] oasis: small linux system inspired by stali

Michael Forney

Feb 12, 2017, 4:56:30 PM
to dev mail list
For a while now, I've been working on putting together a linux system
based on suckless core tools, as well as various other projects. There
are still a number of things left to do, but I'm now at a point where
it is quite usable for me, and maybe for others as well.

https://github.com/michaelforney/oasis/wiki
https://github.com/michaelforney/oasis/releases

# stali similarities
- Uses musl libc and suckless core tools.
- Everything is statically linked.
- Everything is built from a single repository.
- Root filesystem is managed with git.
- Package build systems are ignored in favor of uniform custom build scripts.

# stali differences
- Package sources are not checked into the source repository, but
fetched on demand depending on which packages you have selected in
config.rc[0].
- Though stali has given up on being a general purpose desktop system,
I have not, so oasis includes my wayland display server (velox),
wayland ports of dmenu and st, and a wayland implementation of the
framebuffer backend for the netsurf browser.
- Only x86_64 is supported at the moment.
- Self-bootstrappable. Nearly all packages are buildable with only the
core packages[1] and a toolchain targeting musl (some need special
handling to first build some tool and add it to PATH). A new toolchain
can be built with the above and GNU make (using musl-cross-make). A
new kernel can be built with the above and python (after applying a
couple of patches[2]).
- Instead of makefiles, oasis uses rc scripts to generate ninja build
files (maybe in the future I will work on a ninja-compatible build
tool in C to avoid the C++ dependency).
- Binaries are not distributed; users are expected to build their own
(though you can do so on a separate system and pull from it).
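To make the generated-build-file point concrete, here is a toy sketch of the kind of ninja manifest such scripts might emit; the rule and file names are invented and do not match oasis's actual output:

```shell
# Invented example: write a minimal ninja manifest of the sort a build
# script could generate (not oasis's real rules).
cd "$(mktemp -d)"
cat > build.ninja <<'EOF'
rule cc
  command = cc -static -O2 -c $in -o $out
rule link
  command = cc -static $in -o $out
build hello.o: cc hello.c
build hello: link hello.o
EOF
# Running `ninja hello` against this manifest would compile and
# statically link the program.
```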

# Trying it out
## Real hardware
I wrote up some installation instructions[3] on the github wiki. I've
tested the build and install with the xubuntu 16.10 live
environment[4]. If you try this and run into any problems, please let
me know so that I can fix/clarify things.

## qemu
I built a qemu image[5] containing all the core, extra, and desktop
packages as well as a toolchain. It is probably the best way to play
around with oasis. The installed kernel is pretty minimal, and only
has support for virtio drivers. The root password is `oasis`.

qemu-system-x86_64 \
-enable-kvm -cpu host -m 2048 \
-device virtio-balloon \
-device virtio-keyboard \
-device virtio-tablet \
-device virtio-net,netdev=net \
-device virtio-scsi \
-device virtio-serial \
-device virtio-vga \
-chardev stdio,id=cons \
-mon chardev=cons \
-drive if=none,id=hd,file=oasis-20170211.qcow2 \
-device scsi-hd,drive=hd \
-netdev user,id=net

To enable ssh login, `ssh-keygen -A && perpctl A sshd` (you will
probably want to set up a bridge and change `user` to `bridge` in the
netdev option to access from the host system).

To start velox, `swc-launch velox`.

To run without graphics, add `mux=on,signal=off` to the chardev
option, and add `-device virtconsole,chardev=cons -display none`.

To reboot/shutdown cleanly, `kill -s INT 1`/`kill -s USR1 1` respectively.

## chroot
I also uploaded a tar archive[6] created with `git archive` on the
root repository. It might be useful for poking around with chroot.
Note that the tar does not contain any empty directories and some
files are missing special permissions. After extracting, run
`./libexec/oasis/applyperms -d . && ./libexec/oasis/applyperms -d etc`
to fix this. Also, be aware that there is no common parent directory,
so make sure to extract somewhere you can easily clean up (such as a
new directory).
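The missing-parent-directory pitfall can be demonstrated with a toy archive laid out the same way (the file names here are stand-ins for the real release tar):

```shell
# Build a toy tar whose members sit at the archive root (./etc, ./bin),
# like the release archive produced by `git archive`.
cd "$(mktemp -d)"
mkdir -p demo/etc demo/bin
echo hello > demo/etc/motd
tar -C demo -cf oasis-demo.tar .
# Extracting into the current directory would scatter etc/, bin/, ...
# right here, so extract into a fresh directory instead:
mkdir oasis-root
tar -C oasis-root -xf oasis-demo.tar
ls oasis-root    # the tree lands inside oasis-root, easy to clean up
```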

I've added various bits of documentation to the github wiki[7] if you
want to read more (or clone
https://github.com/michaelforney/oasis.wiki).

[0] https://github.com/michaelforney/oasis/blob/master/config.def.rc
[1] https://github.com/michaelforney/oasis/blob/master/sets.rc#L1
[2] https://github.com/torvalds/linux/compare/master...michaelforney:oasis
[3] https://github.com/michaelforney/oasis/wiki/Install
[4] http://cdimages.ubuntu.com/xubuntu/releases/16.10/release/xubuntu-16.10-desktop-amd64.iso
[5] https://github.com/michaelforney/oasis/releases/download/20170211/oasis-20170211.qcow2
[6] https://github.com/michaelforney/oasis/releases/download/20170211/oasis-20170211.tar.xz
[7] https://github.com/michaelforney/oasis/wiki

Carlos Torres

Feb 12, 2017, 5:40:59 PM
to dev mail list
Hi Michael,

On Sun, Feb 12, 2017 at 4:55 PM, Michael Forney <mfo...@mforney.org> wrote:
> For a while now, I've been working on putting together a linux system
> based on suckless core tools, as well as various other projects. There
> are still a number of things left to do, but I'm now at a point where
> it is quite usable for me, and maybe for others as well.
>
> https://github.com/michaelforney/oasis/wiki
> https://github.com/michaelforney/oasis/releases
>

This looks very interesting

Thanks

Silvan Jegen

Feb 14, 2017, 2:20:30 AM
to dev mail list
Heyho

On Sun, Feb 12, 2017 at 10:55 PM, Michael Forney <mfo...@mforney.org> wrote:
> For a while now, I've been working on putting together a linux system
> based on suckless core tools, as well as various other projects. There
> are still a number of things left to do, but I'm now at a point where
> it is quite usable for me, and maybe for others as well.
>
> https://github.com/michaelforney/oasis/wiki
> https://github.com/michaelforney/oasis/releases
>
> # stali similarities
> - Uses musl libc and suckless core tools.
> - Everything is statically linked.
> - Everything is built from a single repository.
> - Root filesystem is managed with git.
> - Package build systems are ignored in favor of uniform custom build scripts.
>
> # stali differences
> - Package sources are not checked into the source repository, but
> fetched on demand depending on which packages you have selected in
> config.rc[0].
> - Though stali has given up on being a general purpose desktop system,
> I have not, so oasis includes my wayland display server (velox),
> wayland ports of dmenu and st, and a wayland implementation of the
> framebuffer backend for the netsurf browser.

I have played with velox before and was hoping to use it together with
musl at some point.

I definitely want to give it a spin. Thanks for all the work!


Cheers,

Silvan

Marc André Tanner

Mar 27, 2017, 5:45:05 PM
to dev mail list
Hi Michael,

Overall I like your package selection, but I also have a few questions:

* Did you consider using netbsd-curses[1] instead of ncurses?

This probably won't work as is, because libtermkey as required by
vis depends on the ncurses terminfo library. It might be worth
investigating how much work it would be to either port libtermkey
or provide the necessary terminfo symbols in netbsd-curses.

* What is the justification for using ninja (I'm not really familiar
with it). It seems like the only (non-toolchain) package requiring
C++. Furthermore, you mentioned that you might write a compatible
C replacement at some point, hence it must obviously be better than
other existing alternatives.

* Did you try how much of the system can be built using a C only
toolchain like cparser+libfirm?

* Why did you settle for perp instead of s6[2]? Again I'm not really
familiar with either of them (besides reading their respective
websites etc.). Just curious, it would be interesting to get your
reasoning behind the package selection process.

* Will you consider using BearSSL in the future, once it gets a bit
more mature?

Thanks for your work!

Marc

[1] https://github.com/sabotage-linux/netbsd-curses
[2] http://skarnet.org/software/s6/


Michael Forney

Mar 28, 2017, 3:00:41 AM
to dev mail list
On Mon, Mar 27, 2017 at 2:44 PM, Marc André Tanner <m...@brain-dump.org> wrote:
> Hi Michael,
>
> Overall I like your package selection, but I also have a few questions:
>
> * Did you consider using netbsd-curses[1] instead of ncurses?
>
> This probably won't work as is, because libtermkey as required by
> vis depends on the ncurses terminfo library. It might be worth
> investigating how much work it would be to either port libtermkey
> or provide the necessary terminfo symbols in netbsd-curses.

I have not. I actually was originally very hesitant to include any
curses library at all, since it seems to me like the wrong layer to
implement a graphical UI (instead going the route of acme + 9term).

If you find a way to use netbsd-curses (or maybe no curses at all; I
saw you added a vt100 UI to vis), I'd be interested.

> * What is the justification for using ninja (I'm not really familiar
> with it). It seems like the only (non-toolchain) package requiring
> C++. Furthermore, you mentioned that you might write a compatible
> C replacement at some point, hence it must obviously be better than
> other existing alternatives.

I am actually quite happy with how the rc/ninja based build system
worked out. In my experience, ninja is quite good as a low level build
tool. The language itself is extremely simple, and the tool handles
projects with several thousand build actions with ease (since it was
designed for chromium). This makes it really easy to hack on some
package a bit, and build incrementally. Since dependencies (including
C header dependencies) are tracked across the entire system, relinking
due to a static library change is easy and automatic, and everything
is always up to date.
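To illustrate the relinking point with an invented fragment (not oasis's actual rules): a static library is listed as an explicit input of every binary linked against it, so touching one source file propagates through the object, the archive, and all dependent binaries in a single run:

```ninja
# Hypothetical manifest fragment; `ar` and `link` are assumed rules.
build libutil.a: ar util.o
build sed: link sed.o libutil.a
build grep: link grep.o libutil.a
# After editing util.c, ninja rebuilds util.o, libutil.a, sed, and grep.
```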

I think it might have been possible to use some other build tool to
achieve something similar, but I don't think it would have worked out
as well.

> * Did you try how much of the system can be built using a C only
> toolchain like cparser+libfirm?

I did try cparser/libfirm at one point in the project's infancy and I
was surprised at the amount of stuff that built no problem. I have
since added more packages though, so can't give you a recent answer. I
also spent a week or so a while ago fixing some bugs in scc to try to
get it to build sbase. I think it would be really cool to be able to
build the core system (or more!) with a lighter toolchain.

> * Why did you settle for perp instead of s6[2]? Again I'm not really
> familiar with either of them (besides reading their respective
> websites etc.). Just curious, it would be interesting to get your
> reasoning behind the package selection process.

I tried both a couple of years ago, and liked perp more. It seemed
simpler and smaller in scope than s6. s6 has its own scripting
language and quite a lot of extra tools. Although, I have to admit,
one contributing factor is that the s6 tool names are just a pain to
type.

Though s6 is definitely more actively maintained and more popular, so
I may take another look at some point. I would not object if someone
wanted to package it as an alternative.

> * Will you consider using BearSSL in the future, once it gets a bit
> more mature?

Of course, although there are a number of packages that depend on the
OpenSSL/LibreSSL API (curl, wpa_supplicant, msmtp, netsurf, openssh,
python, transmission), so it might involve a lot of porting effort.

> Thanks for your work!

Thanks for taking a look!

Kamil Cholewiński

Mar 28, 2017, 6:35:04 AM
to dev mail list
> I think it might have been possible to use some other build tool to
> achieve something similar, but I don't think it would have worked out
> as well.

http://gittup.org/tup/ ?

Marc André Tanner

Mar 28, 2017, 9:34:02 AM
to dev mail list
On Tue, Mar 28, 2017 at 12:00:22AM -0700, Michael Forney wrote:
> On Mon, Mar 27, 2017 at 2:44 PM, Marc André Tanner <m...@brain-dump.org> wrote:
> > Hi Michael,
> >
> > Overall I like your package selection, but I also have a few questions:
> >
> > * Did you consider using netbsd-curses[1] instead of ncurses?
> >
> > This probably won't work as is, because libtermkey as required by
> > vis depends on the ncurses terminfo library. It might be worth
> > investigating how much work it would be to either port libtermkey
> > or provide the necessary terminfo symbols in netbsd-curses.
>
> I have not. I actually was originally very hesitant to include any
> curses library at all, since it seems to me like the wrong layer to
> implement a graphical UI (instead going the route of acme + 9term).

In principle I agree. Unfortunately I often have to deal with systems
where I'm stuck with a tty.

> If you find a way to use netbsd-curses

That might be more easily possible.

> (or maybe no curses at all; I
> saw you added a vt100 UI to vis), I'd be interested.

The vt100 UI stuff cuts out the middle man (trades it for a fixed
set of escape sequences and probably a lot of display flickering),
but is ultimately still based on the same flawed terminal abstraction.

Speaking of curses alternatives there is also

http://www.leonerd.org.uk/code/libtickit/

which is advertised as the "successor" of libtermkey (currently
it still depends on it though). I haven't actually looked at it in
detail, partly because it uses bzr as SCM and launchpad as issue
tracking system. Apparently a first version was released recently.
Not sure whether it would be an improvement.

> > * Did you try how much of the system can be built using a C only
> > toolchain like cparser+libfirm?
>
> I did try cparser/libfirm at one point in the project's infancy and I
> was surprised at the amount of stuff that built no problem. I have
> since added more packages though, so can't give you a recent answer.

It was a long time ago when I last played around with it, but it is
supposedly able to build musl. Hence I thought the rest of the core
system should also be doable. But maybe I'm a bit too optimistic. Also,
just being able to compile something isn't that interesting in itself;
the result should also work as expected.

I just tried to build vis, but cparser doesn't currently support multiple
source files on the same command line. This of course doesn't concern
oasis where you completely replaced the build system.

> I also spent a week or so a while ago fixing some bugs in scc to try to
> get it to build sbase. I think it would be really cool to be able to
> build the core system (or more!) with a lighter toolchain.

I strongly agree. I wonder whether an organization participating in GSoC
would be willing to accept/mentor such a project (working on cparser+firm).
Maybe OpenWRT/LEDE? Seeing that they are using musl as their libc.
I'm probably a bit late to the party though ...

> > * Why did you settle for perp instead of s6[2]? Again I'm not really
> > familiar with either of them (besides reading their respective
> > websites etc.). Just curious, it would be interesting to get your
> > reasoning behind the package selection process.
>
> I tried both a couple of years ago, and liked perp more. It seemed
> simpler and smaller in scope than s6. s6 has its own scripting
> language and quite a lot of extra tools. Although, I have to admit,
> one contributing factor is that the s6 tool names are just a pain to
> type.

Yeah, I'm not yet sure what I should think of stuff like execline.

I haven't actually used either system for a sufficient amount of time
to draw any meaningful conclusions.


Haines Brown

Mar 28, 2017, 12:19:31 PM
to dev mail list
I don't want to derail the thread, but speaking of small linux systems I
wonder if anyone has tried Lemote's notebook or desktop which come with
Linux installed. It uses Loongson CPUs, which are MIPS-based RISC
processors, but they should be adequate for most purposes.

What drew me to it is that it offers a way to escape the CPU backdoors
that I assume are present in Intel and AMD processors.

Haines Brown

Pickfire

Mar 28, 2017, 1:48:58 PM
to d...@suckless.org
I did a benchmark of tup, make, mk, and ninja back then.
What I learned:

- make is the fastest
- ninja needs to be run twice
- tup is slowest (probably didn't use monitor) but easy to write
- mk is slightly slower than make

GNU make is still the most used, and it is fast as well.
I see tup as a good build system, but it is not used by many.

Kamil Cholewiński

Mar 28, 2017, 2:06:06 PM
to d...@suckless.org
On Tue, 28 Mar 2017, Pickfire <pick...@riseup.net> wrote:
> I did a benchmark of tup, make, mk, and ninja back then.
> What I learned:
>
> - make is the fastest
> - ninja needs to be run twice
> - tup is slowest (probably didn't use monitor) but easy to write
> - mk is slightly slower than make
>
> GNU make is still the most used, and it is fast as well.
> I see tup as a good build system, but it is not used by many.

When disclosing benchmark results, it is always good manners to share:

- The exact method
- The dataset
- The raw numbers

I'm not arguing for or against any tool (I'm yet to try tup or ninja),
but tup's author actually does a "tup vs mordor" benchmark, where he
shows the tool is slower by a constant vs an ideal, hypothetical,
all-knowing, ungodly-optimised build tool.

http://gittup.org/tup/tup_vs_mordor.html

<3,K.

Alexander Krotov

Mar 28, 2017, 5:27:45 PM
to d...@suckless.org
On Wed, Mar 29, 2017 at 01:48:35AM +0800, Pickfire wrote:
> Kamil Cholewiński <harr...@gmail.com> wrote:
>
> > > I think it might have been possible to use some other build tool to
> > > achieve something similar, but I don't think it would have worked out
> > > as well.
> >
> > http://gittup.org/tup/ ?
>
> I did a benchmark of tup, make, mk, and ninja back then.
> What I learned:
>
> - make is the fastest
> - ninja needs to be run twice
> - tup is slowest (probably didn't use monitor) but easy to write
> - mk is slightly slower than make

ninja claims to be "as fast as possible" but starts a shell instead of
executing programs directly like make does.

Ori Bernstein

Mar 29, 2017, 1:35:50 AM
to dev mail list
On Wed, Mar 29, 2017 at 12:27:27AM +0300, Alexander Krotov wrote:

> ninja claims to be "as fast as possible" but starts a shell instead of
> executing programs directly like make does.

Since when did make execute the programs directly? It runs each line
of the recipe in its own shell.


Ralph Eastwood

Mar 29, 2017, 4:53:43 AM
to dev mail list
On 28 March 2017 at 18:48, Pickfire <pick...@riseup.net> wrote:
> I see tup as a good build system but not used by many.

An interesting feature I noticed was that it automatically detects dependencies.

"The trick is that tup instruments all commands that it executes in
order to determine what files were actually read from (the inputs) and
written to (the outputs). When the C preprocessor opens the header
file, tup will notice that and automatically add the dependency. In
fact, we don't have to specify the C input file either, but you can
leave that in there for now since we'll use it in the next section."
[0]

That seems like an interesting feature!

Ninja handles this differently: you, as the build-file writer, have to
write a rule that explicitly asks the compiler to generate the header
dependencies, e.g. gcc -MMD -MF out.d [1]

Before getting carried away: it seems this is achievable with (GNU?)
make, but with significantly more headache [2].

[0] http://gittup.org/tup/ex_a_first_tupfile.html
[1] https://ninja-build.org/manual.html#ref_headers
[2] http://make.mad-scientist.net/papers/advanced-auto-dependency-generation/#basic
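Concretely, the ninja side of [1] is a `depfile` (and `deps`) attribute on the compile rule; a minimal sketch of such a rule:

```ninja
rule cc
  # Ask the compiler to emit header dependencies, then let ninja
  # load them and store them in its deps log.
  command = cc -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  deps = gcc
```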

--
Tai Chi Minh Ralph Eastwood
tcmrea...@gmail.com

Michael Forney

Mar 29, 2017, 2:27:45 PM
to dev mail list
I think tup could have worked too, but I still prefer ninja for a few reasons:

- tup uses fancy trickery to calculate dependencies of commands using
a fuse filesystem. This requires setuid or user namespaces to work
correctly, or else it runs in a "degraded" mode (though I'm not quite
sure how this works).
- tup has more dependencies (sqlite, libfuse, lua), and as a result, a
larger binary size (1.4M vs 908K for a static stripped binary built with
-O2).
- tup has a more complicated language because it is intended to be
written directly instead of generated by something else. I'd have to
do some experimentation to see if it could replace the rc scripts, or
if I'd have to use the lua API.
