[dev] Suckless operating system


Martin Kopta

unread,
Jun 13, 2010, 6:09:26 PM6/13/10
to d...@suckless.org
Some philosophical questions..

What does it mean for an operating system to be suckless?
What features should (or should not) an OS have in order to be suckless?
Are there suckless or close-to-be-suckless operating systems out there?
What does suckless think about Plan 9, *BSD, GNU/Linux, MS Windows, ..?
Is it possible to have an OS for desktop/laptop everyday use (multimedia, web,
programming, research, ..) which is actually usable, not rotten inside, and alive?

Samuel Baldwin

unread,
Jun 13, 2010, 6:22:09 PM6/13/10
to dev mail list
I think the general opinion of Plan 9 in suckless is positive, but
most people don't find it practical (probably because it hasn't been
widely adopted), and I think most people opt for linux distributions
like debian and arch. I don't know many with a high opinion of MS
Windows.

There's work going on now to create a statically linked suckless linux
distribution: stali: http://sta.li/

--
Samuel Baldwin - logik.li

Anders Andersson

unread,
Jun 13, 2010, 6:23:13 PM6/13/10
to dev mail list
> Is it possible to have an OS for desktop/laptop everyday use (multimedia, web,
> programming, research, ..) which is actually usable, not rotten inside, and alive?

Hm, I think we already concluded, more or less, that a research application
is unlikely to be suckless. I'm not really sure what you mean by
"multimedia" and "programming". Music and movie players can probably
not be suckless if that means they should be patent-free while still
being able to play back the common formats. A web browser might be able
to suck less in the future, when cars fly.

Matthew Bauer

unread,
Jun 13, 2010, 6:28:40 PM6/13/10
to dev mail list
I think surf and uzbl are good steps forward in making a KISS web browser.

David Tweed

unread,
Jun 13, 2010, 7:16:32 PM6/13/10
to dev mail list

One of the issues to consider is that what computers are used for
changes with time, and decisions that one may classify as "the
suckless way of doing things" at one point in time may mean that it's
not effectively usable in some future situations. For instance,
around 20 years ago you wouldn't have considered multimedia as
something you need on a computer, so the complexity required for
reliable low-latency scheduling might have been regarded as needless
back then; by now it's pretty essential for audio processing. The fact
that Linux is being used in smartphones and smartbooks is suddenly
pushing kernel developers who've only worked on PCs face-to-face with
hyper-aggressive suspending ideas for power-saving. If cloud computing
(where you want to keep the decrypted data you have on any individual
remote computer to the minimum required for the given task) takes off
(and given that Plan 9 was based on running intensive tasks on a
server, I hope I'm safe from a Uriel rant), functionality that seems
pointlessly complicated and "non-suckless" today may become
appropriate. (For instance, I'd imagine that cryptographic key
management will probably become more integrated into the kernel,
simply because you'll want such fine-grained permissions and
decryption of entities that anything in userspace will probably be too
slow and easy to attack.) If implanted-in-the-body devices get complex
enough, they may warrant a general-purpose OS...

Of course, part of this comes from the tendency to try to use some
configuration of the same base OS (Linux, Mach, etc) for a wide range
of uses. Time will tell if this will continue to be a reasonable
development strategy. But if it is, a given design may be "suckless"
only for a period of time.

--
cheers, dave tweed__________________________
computer vision researcher: david...@gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot

Connor Lane Smith

unread,
Jun 13, 2010, 7:36:54 PM6/13/10
to dev mail list
On 13 June 2010 23:28, Matthew Bauer <mjba...@gmail.com> wrote:
> I think surf and uzbl are good steps forward in making a KISS web browser.

Problem is the vast complexity they both contain is hidden inside
libwebkit. That thing is huge. I get the feeling surf and uzbl only
make the tip of the iceberg suck less.

cls

Connor Lane Smith

unread,
Jun 13, 2010, 7:38:40 PM6/13/10
to dev mail list
On 14 June 2010 00:16, David Tweed <david...@gmail.com> wrote:
> One of the issues to consider is that what computers are used for
> changes with time, and decisions that one may classify as "the
> suckless way of doing things" at one point in time may mean that it's
> not effectively usable in some future situations.

If the system is sufficiently modular it should be relatively future-proof.

cls

David Tweed

unread,
Jun 13, 2010, 8:59:15 PM6/13/10
to dev mail list

I meant to suggest that design decisions and architectures might need
changing as new use cases come to light rather than that a single
design should be future proof-ish, and that this is in fact desirable.
However that means that saying something is "suckless" has to be
implicitly qualified with "for current needs". To pick a really simple
example, consider the changes to booting that happened since the
arrival of netbooks. What was once a relatively rare process, with the
corresponding "suckless" design being to keep things simple, has
become something where sub 5s booting is wanted, which requires more
complicated techniques. That's not to say that old-style booting was
wrong for the time it was designed, but the criteria now are different
and consequently the most elegant solution is now different.

pmarin

unread,
Jun 14, 2010, 2:31:01 AM6/14/10
to dev mail list
> Problem is the vast complexity they both contain is hidden inside
> libwebkit. That thing is huge. I get the feeling surf and uzbl only
> make the tip of the iceberg suck less.

We could say the same about dwm, X11 and Xinerama.

pmarin.

Anselm R Garbe

unread,
Jun 14, 2010, 3:17:50 AM6/14/10
to dev mail list
On 13 June 2010 23:09, Martin Kopta <mar...@kopta.eu> wrote:
> Some philosophical questions..
>
> What does it mean for an operating system to be suckless?

I think the Unix philosophy makes an OS "suckless": each tool does
just one task and solves that task in the best way, and a universal
interface between the tools allows combining them to solve bigger
tasks.

This approach is modular and, as the past has shown, quite future proof.
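To make the "universal interface" concrete: because every tool reads
and writes plain text, tools that were never designed together still
compose. A throwaway example using only standard POSIX tools:

# print the ten most frequent words in a document: split into one word
# per line, lowercase, sort, count duplicates, sort by count
$ tr -cs '[:alpha:]' '\n' < report.txt | tr 'A-Z' 'a-z' |
  sort | uniq -c | sort -rn | head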

> What features should (or should not) an OS have in order to be suckless?

The point is not about the features; it's more about the structural organisation.

> Are there suckless or close-to-be-suckless operating systems out there?

Sure, original Unix and Plan 9 are quite suckless. I think one can
achieve a suckless Linux system as well -- I know that the Linux
kernel is more complex than it needs to be, but if one sees the kernel
as a single entity, the rest of the system can be quite suckless.

Cheers,
Anselm

Anselm R Garbe

unread,
Jun 14, 2010, 3:20:45 AM6/14/10
to dev mail list
On 14 June 2010 01:59, David Tweed <david...@gmail.com> wrote:
> On Mon, Jun 14, 2010 at 12:38 AM, Connor Lane Smith <c...@lubutu.com> wrote:
>> On 14 June 2010 00:16, David Tweed <david...@gmail.com> wrote:
>>> One of the issues to consider is that what computers are used for
>>> changes with time, and decisions that one may classify as "the
>>> suckless way of doing things" at one point in time may mean that it's
>>> not effectively usable in some future situations.
>>
>> If the system is sufficiently modular it should be relatively future-proof.
>
> I meant to suggest that design decisions and architectures might need
> changing as new use cases come to light rather than that a single
> design should be future proof-ish, and that this is in fact desirable.
> However that means that saying something is "suckless" has to be
> implicitly qualified with "for current needs". To pick a really simple
> example, consider the changes to booting that happened since the
> arrival of netbooks. What was once a relatively rare process, with the
> corresponding "suckless" design being to keep things simple, has
> become something where sub 5s booting is wanted, which requires more
> complicated techniques. That's not to say that old-style booting was

I think the Unix philosophy is quite future proof, also with
parallelization in mind. So if new requirements arise, it's rather a
question of whether a new tool, or a new way of combining the existing
ones, is needed.

Regarding the boot speed I disagree. I think short boot cycles can be
achieved with rather simpler init systems than the SysV-style Debian
insanity people got used to. A simple BSD-style init, or an even
simpler system, always outperforms any "smart" technique in my
observation.

Cheers,
Anselm

Connor Lane Smith

unread,
Jun 14, 2010, 3:30:49 AM6/14/10
to dev mail list

Touché.
Being pragmatic is depressing.

cls

Troels Henriksen

unread,
Jun 14, 2010, 3:29:58 AM6/14/10
to dev mail list
Anselm R Garbe <gar...@gmail.com> writes:

> Regarding the boot speed I disagree. I think short boot cycles can be
> achieved with rather simpler init systems than the SysV-style Debian
> insanity people got used to. A simple BSD-style init, or an even
> simpler system, always outperforms any "smart" technique in my
> observation.

Well, for really excellent performance, you do need the ability to
parallelise the init operations, so that's a bit of complexity that has
actual performance benefits.

I agree there is little value in the general runlevel mess.
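For illustration, a minimal sketch of that bit of complexity, assuming
each service has an independent start script (the service names are
placeholders):

# start mutually independent services concurrently
for svc in syslogd sshd crond; do
    /etc/rc.d/$svc start &
done
wait # continue booting only once all of them have started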

--
\ Troels
/\ Henriksen

Kurt Van Dijck

unread,
Jun 14, 2010, 7:51:21 AM6/14/10
to dev mail list

I fully agree. After looking at minit & stuff, I decided to write our own
init daemon to incorporate some safety stuff.
* booting is done in parallel.
* udev (+/- 5 sec) was replaced by our (small) fdev (now takes some 0.1 sec).

some examples:
dell laptop: booting was over 45 seconds (from the kernel starting timers), now 15.
via epia board: was 25, now 4.3 seconds.
embedded ARM cpu (never used debian there, but busybox): no final measurements,
but a boot time of 18 seconds got reduced to 6.
OpenMoko: boot time was originally (very long) 2m40s, reduced to 35s.

I admit our init is quite a bit more complex than strictly necessary (we try to guarantee
that a watched process is not dead-locked, and therefore have a hardware watchdog
in the init process, and ...).

I'm not familiar with BSD inits.

Kurt



Marc Weber

unread,
Jun 14, 2010, 8:02:45 AM6/14/10
to dev
May I just draw your attention to www.nixos.org?

I don't want to say it sucks less. But it definitely does for developers,
because you can install multiple versions of a package at the same time.
You can always roll back.

It doesn't fit all needs at the moment, because it's hard to separate
headers from binaries. I think it can be fixed -- but the project doesn't
have enough manpower to start such an effort yet.

One of its key features is that you can easily add quality testing to
your distribution workflow. And systems which suck less just work.

It may be worth having a look at the project even if it's not a perfect
match.

Marc Weber

Moritz Wilhelmy

unread,
Jun 14, 2010, 8:22:33 AM6/14/10
to dev mail list
> * udev (+/- 5 sec) was replaced by our (small) fdev (now takes some 0.1 sec).

there is also mdev in busybox, in case you are interested. I like busybox very
much, but I think it lacks documentation.

Kurt Van Dijck

unread,
Jun 14, 2010, 8:33:58 AM6/14/10
to dev mail list
Indeed, it's similar.
I forgot why (I must look back), but mdev is even more basic and wasn't sufficient for me.


Ethan Grammatikidis

unread,
Jun 14, 2010, 9:23:46 AM6/14/10
to dev mail list

On 14 Jun 2010, at 00:16, David Tweed wrote:

> On Sun, Jun 13, 2010 at 11:09 PM, Martin Kopta <mar...@kopta.eu>
> wrote:
>> Some philosophical questions..
>>
>> What does it mean for an operating system to be suckless?
>> What features should (or should not) an OS have in order to be
>> suckless?
>> Are there suckless or close-to-be-suckless operating systems out
>> there?
>> What does suckless think about Plan 9, *BSD, GNU/Linux, MS
>> Windows, ..?
>> Is it possible to have an OS for desktop/laptop everyday use
>> (multimedia, web,
>> programming, research, ..) which is actually usable, not rotten
>> inside, and alive?
>
> One of the issues to consider is that what computers are used for
> changes with time, and decisions that one may classify as "the
> suckless way of doing things" at one point in time may mean that it's
> not effectively usable in some future situations. For instance,
> around 20 years ago you wouldn't have considered multimedia as
> something you need on a computer, so the complexity required for
> reliable low-latency scheduling might have been regarded as needless
> back then; by now it's pretty essential for audio processing.

A curious example, in the sense that there was a market for multimedia
on PCs 20 years ago, and there was suitable technology, but the two
never came together.

Multimedia on PCs was the upcoming thing 20 years ago. It wasn't just
expected to happen, it was starting to happen. In about '88 I was wowed
by video on a PC screen, but several years later (maybe '94 or '95) I
gave an Atari ST to a musician because "Pentiums", as she called them,
couldn't really produce accurate enough timing. The big surprise here
is that the timing required was for MIDI; 1/64-beat resolution at a
rarely-used maximum of 250 beats per minute comes to less than 270 Hz.
The 90MHz+ Pentiums of the time couldn't handle that, where the 8MHz
ST could.

Oh, and I almost forgot: the ST had shared video memory. In the high
resolution display mode used by all the top-notch MIDI sequencer
software, the ST's CPU spent more than 50% of its time halted while
the display circuitry read from the RAM. To rephrase my statement:
the 90MHz+ Pentiums of the time couldn't handle accurately producing a
270 Hz signal, where the 8MHz ST not only could, but did it with one
arm metaphorically tied behind its back by its own display system.
Something sucked all right.

It would be easy to say the ST didn't suck because it didn't
multitask, but at that point OS-9 must have been around for about two
decades with real-time multi-tasking and multi-user capabilities. OS-9
started life on the 6809, so it couldn't have been complex.

There's more. There's fucking more, but thinking about what computers
have lost in the last 20 years upsets me too much to write about. Home
computers have LOST a lot in the past 20 years while hardware power
has grown by orders of magnitude. It's phenomenal, it's staggering.

--
Do not specify what the computer should do for you, ask what the
computer can do for you.

Ethan Grammatikidis

unread,
Jun 14, 2010, 10:37:58 AM6/14/10
to dev mail list

busybox is a bit incomplete in places too. ed is missing E and Q,
which are the sort of do-without-confirmation commands useful in
scripts. ed also lacks n, leaving (I think) no way to get line numbers
from a busybox utility. I don't know how vi is coming along, but I
don't have good memories of it from 2 or 3 years ago; it was really
rotten. dc has badly broken parsing and appears to be missing _most_
commands. I'm not impressed yet. :)

--
Complexity is not a function of the number of features. Some features
exist only because complexity was _removed_ from the underlying system.


Moritz Wilhelmy

unread,
Jun 14, 2010, 11:26:59 AM6/14/10
to dev mail list

would you mind sharing the source code? we are working on another "suckless"
distro, and we don't want dbus, hal, gconf, fdi, xml, policykit and ponies in
there, so we're always looking for unixy software to extend it.

Jakub Lach

unread,
Jun 14, 2010, 4:07:35 PM6/14/10
to dev mail list
2010 17:26 Moritz Wilhelmy <cr...@wzff.de> wrote:

> would you mind sharing the source code? we are working on another "suckless"
> distro, and we don't want dbus, hal, gconf, fdi, xml, policykit and ponies in
> there, so we're always looking for unixy software to extend it.

Maybe this shows how Linux is different in this case, but here
on FreeBSD you still don't have to have things like that, if you
don't want them.

(I don't have hal, policykit, dbus, gconf etc.)

best regards,
- Jakub Lach

Stephane Sezer

unread,
Jun 14, 2010, 5:25:44 PM6/14/10
to d...@suckless.org

I use linux and I don't have them either. The fact is that most linux
distros are binary-based, so developers have to make some decisions, and
they often make the bad ones :D

--
Stephane Sezer

Ilya Ilembitov

unread,
Jun 14, 2010, 5:35:22 PM6/14/10
to dev mail list
Developing a suckless web browser engine is impossible, because one would have to implement all the non-standard things in the current Web, right? OK, a theoretical question then. In 2010 we live in times when even Microsoft tries hard to dump IE6, so only IE7 may still force webmasters to write some non-standard code. However, IE7 is only bundled with Vista, and Vista (if I am not mistaken) is not as popular as Windows 7 already. The latter ships with IE8, which is reportedly more standards-compliant. So as soon as WinXP finally dies, it will be IE8 (IE9 by that time, maybe). Correct me if I am wrong.

Second, more and more major web portals and services are multi-browser. Even MS's Office Web Apps (which were released a week ago) support all the major browsers, and the same goes for the most popular sites, like Google's services, Twitter, Facebook, most of Yahoo, etc. The most popular CMSes are mostly standards-compliant, too (like WordPress, Drupal, etc), and they run the majority of small projects these days. Finally, a lot of services want to have a mobile version, too, and IE doesn't have any decisive part there; it's webkit territory. So, even if my point about IE is wrong, most sites are multi-browser these days. Does that mean they are mostly standards-compliant? Or does each browser require its own tweaks, so that the "firefox" (or webkit, etc) version of any site is not a standards-compliant site, but rather some set of tweaks for that browser?

So, here is my question. If we take only modern and active projects, how standard are they? Suppose we have a browser engine that implements only the current standards (OK, maybe some legacy standards, but no IE or other tweaks): will we still be able to use 95% of the web?

> On 13 June 2010 23:28, Matthew Bauer <mjba...@gmail.com> wrote:

> > I think surf and uzbl are good steps forward in making a kiss web browser.
> Problem is the vast complexity they both contain is hidden inside
> libwebkit. That thing is huge. I get the feeling surf and uzbl only
> make the tip of the iceberg suck less.
> cls
>

--
wbr, Ilembitov

Ethan Grammatikidis

unread,
Jun 14, 2010, 6:36:26 PM6/14/10
to dev mail list

On 14 Jun 2010, at 22:35, Ilya Ilembitov wrote:
>
> So, here is my question. If we take only modern and active projects,
> how standard are they? Suppose we have a browser engine that
> implements only the current standards (OK, maybe some legacy
> standards, but no IE or other tweaks): will we still be able to use
> 95% of the web?

Probably, but why? There's nothing suckless at all about the standards
coming out of the w3c. I don't know much about rendering html but I
recently made a web server, and while I started out with the noble
intent of supporting standards, before I was done I just had to
declare http 1.1 schizophrenic and delusional!

Consider this: out of the web browser and the web server, which one has
to examine the data in order to render it, and which one is just reading
it from the disk and dumping it down a pipe? Which one's resources are
at a premium, and which is mostly idling between fetching web pages?
With those two questions in mind, can someone please tell me what the
w3c were collectively smoking when they made content-type mandatory in
http 1.1? If that isn't argument enough, it's actually impossible to
set content-type correctly from the file extension. No-one really tries,
and I very much doubt they ever did, but that didn't stop the w3c from
making it mandatory. Idiots.

"Schizophrenic" actually refers to a less serious problem, but still a
bizarre one. Dates are provided in headers to guide caching, very
useful in itself but the date format is about as long-winded as it can
get and it's US-localised too. With that in mind, why are chunk length
values for chunked encoding given in hex? That's not even consistent
with the length value of content-length, which is decimal. And what
titan amongst geniuses decided it was appropriate to apply chunked
encoding to the http headers?
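For anyone who hasn't stared at the wire format, here is a hand-written
example of the mix (illustrative, not captured traffic). Note the
verbose US-style date and the hex chunk size, while content-length
elsewhere is decimal:

HTTP/1.1 200 OK
Date: Tue, 15 Jun 2010 12:00:00 GMT
Transfer-Encoding: chunked

1a
abcdefghijklmnopqrstuvwxyz
0

The "1a" line says the next chunk is 26 bytes long; a final chunk of
size 0 ends the body.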

Bjartur Thorlacius

unread,
Jun 14, 2010, 7:09:49 PM6/14/10
to dev mail list
On 6/14/10, Ethan Grammatikidis <eek...@fastmail.fm> wrote:
>
> On 14 Jun 2010, at 22:35, Ilya Ilembitov wrote:
>>
>> So, here is my question. If we take only modern and active projects,
>> how standard are they? Suppose we have a browser engine that
>> implements only the current standards (OK, maybe some legacy
>> standards, but no IE or other tweaks): will we still be able to use
>> 95% of the web?
>
> Probably, but why? There's nothing suckless at all about the standards
> coming out of the w3c. I don't know much about rendering html but I
> recently made a web server, and while I started out with the noble
> intent of supporting standards, before I was done I just had to
> declare http 1.1 schizophrenic and delusional!
>
> Consider this: out of the web browser and the web server, which one has
> to examine the data in order to render it, and which one is just reading
> it from the disk and dumping it down a pipe? Which one's resources are
> at a premium, and which is mostly idling between fetching web pages?
> With those two questions in mind, can someone please tell me what the
> w3c were collectively smoking when they made content-type mandatory in
> http 1.1? If that isn't argument enough, it's actually impossible to
> set content-type correctly from the file extension. No-one really tries,
> and I very much doubt they ever did, but that didn't stop the w3c from
> making it mandatory. Idiots.
setfattr(1). File extensions are just a historical misunderstanding (where
people confused presentation with semantics). IMO they should mostly
be used to give unique names to binaries, sources and configuration
files with a similar base name.
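For example (assuming the setfattr/getfattr tools from the attr package
and a filesystem mounted with the user_xattr option; the attribute name
user.mime_type is just a convention made up here):

$ setfattr -n user.mime_type -v text/html index.html
$ getfattr --only-values -n user.mime_type index.html
text/html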

> "Schizophrenic" actually refers to a less serious problem, but still a
> bizarre one. Dates are provided in headers to guide caching, very
> useful in itself but the date format is about as long-winded as it can
> get and it's US-localised too. With that in mind, why are chunk length
> values for chunked encoding given in hex? That's not even consistent
> with the length value of content-length, which is decimal. And what
> titan amongst geniuses decided it was appropriate to apply chunked
> encoding to the http headers?
Granted, the decimal vs hex inconsistency is plain weird. But nobody is
forcing you (as an httpd implementor) to actually use (chunked) trailing
headers, though it's a different story for clients.

--
kv,
- Bjartur

Matthew Bauer

unread,
Jun 14, 2010, 7:19:33 PM6/14/10
to dev mail list
I wish modern filesystems would allow some way of identifying a file type other than by its filename. It seems like that would make things more straightforward.
--
Matthew Bauer

Bjartur Thorlacius

unread,
Jun 14, 2010, 7:24:26 PM6/14/10
to dev mail list
On 6/14/10, Matthew Bauer <mjba...@gmail.com> wrote:
> I wish modern filesystems would allow some way of identifying a file type
> other than by its filename. It seems like that would make things more
> straightforward.
Surely many modern filesystems support xattrs (extended file attributes)?
One should be able to use them to store media types.

Antoni Grzymala

unread,
Jun 14, 2010, 7:28:01 PM6/14/10
to dev mail list
Bjartur Thorlacius dixit (2010-06-14, 23:24):

Besides, HFS has had this feature (along with the whole data/resource
fork schizophrenia) for the last fifteen or twenty years.

--
[a]

David Tweed

unread,
Jun 14, 2010, 7:30:05 PM6/14/10
to dev mail list
On Tue, Jun 15, 2010 at 12:19 AM, Matthew Bauer <mjba...@gmail.com> wrote:
> I wish modern filesystems would allow some way of identifying a file type
> other than by its filename. It seems like that would make things more
> straightforward.

The other issue is providing a very-easy-to-type equivalent of
globbing on filenames in shell/script expressions for whatever
mechanism is used (i.e., for things like 'find . -name "*.(h|cpp|tcc)" |
xargs ......"
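Today the nearest equivalent is painfully verbose. A sketch, assuming
the made-up user.mime_type xattr convention from earlier in the thread:

# print files whose stored media type is C or C++ source, then grep
# them; breaks on filenames with spaces, it's only a sketch
find . -type f | while read -r f; do
    t=$(getfattr --only-values -n user.mime_type "$f" 2>/dev/null)
    case "$t" in text/x-c*) printf '%s\n' "$f" ;; esac
done | xargs grep -l TODO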

Ethan Grammatikidis

unread,
Jun 14, 2010, 7:54:29 PM6/14/10
to dev mail list

On 15 Jun 2010, at 00:28, Antoni Grzymala wrote:

> Bjartur Thorlacius dixit (2010-06-14, 23:24):
>
>> On 6/14/10, Matthew Bauer <mjba...@gmail.com> wrote:
>>> I wish modern filesystems would allow some way of identifying a
>>> file type
>>> other than by its filename. It seems like that would make things
>>> more straightforward.
>
>> Surely many modern filesystem support xattrs (extended file
>> attributes)?
>> One should be able to use them to store media types.

Should, or will?

> Besides, hfs has had this feature (along with the whole data/resource
> fork schizophreny) for the last 15 or twenty years.

I think HFS only has that feature for backwards compatibility; I
haven't seen any sign of its use in Mac OS X.

I get the impression storing file type information was much more
common in the past, which raises the question of why it is not now. I
think it's pointless because most file types can be identified from
their first few bytes. This loops back around to my content-type
argument: why should the server go looking for the file type when the
client gets it handed to it anyway?
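That "first few bytes" check is exactly what file(1) does. For example
(output abridged; the exact type strings vary by file version):

$ file --brief --mime-type /bin/ls index.html
application/x-executable
text/html

A client that already has the body in hand can classify it locally,
with no cooperation from the server.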

Noah Birnel

unread,
Jun 14, 2010, 9:13:32 PM6/14/10
to dev mail list
On Tue, Jun 15, 2010 at 01:35:22AM +0400, Ilya Ilembitov wrote:
>...Facebook...

You are using an incompatible web browser.

Sorry, we're not cool enough to support your browser. Please keep it real
with one of the following browsers:

* Mozilla Firefox
* Safari
* Microsoft Internet Explorer

Facebook © 2010

Just sayin'.
--Noah


Stanley Lieber

unread,
Jun 14, 2010, 9:18:36 PM6/14/10
to dev mail list

I've had to stop using surf to monitor a page at my job because they
now insist upon a Netscape or IE user agent string.

-sl

Kurt Van Dijck

unread,
Jun 15, 2010, 4:03:42 AM6/15/10
to dev mail list

The thing is that this is part of a product for the company I work for.
I don't think my boss wants _all_ code open-sourced. I hate to say it,
but the answer is no for the moment.

I only brought up the init because it shows that it is not necessarily
complexity that slows down booting; it's the lack of parallelism.

But I did seriously evaluate minit & ninit (somewhere on the internet).
For a regular desktop system, they would work as well, & are better
documented.

our 'fdev' just dropped 5 seconds, but mdev is capable too.

Kurt

ilf

unread,
Jun 15, 2010, 5:08:43 AM6/15/10
to d...@suckless.org
On 06-14 20:18, Stanley Lieber wrote:
> I've had to stop using surf to monitor a page at my job because they
> now insist upon a Netscape or IE user agent string.

config.h: static char *useragent
or http://surf.suckless.org/patches/useragent

'Monitoring' a page sounds like I'd script it though.
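Something like this would do. A sketch; the URL, user-agent string and
poll interval are placeholders:

# poll a page with a mainstream UA string and report when it changes
# (the first run reports a change because old.html doesn't exist yet)
while :; do
    curl -s -A 'Mozilla/5.0 (Windows NT 6.1)' "$URL" > new.html
    cmp -s new.html old.html || echo "page changed at $(date)"
    mv -f new.html old.html
    sleep 300
done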

--
ilf @jabber.berlin.ccc.de

Over 80 million Germans don't use a console. Don't click away!
-- An initiative of the Federal Office for Keyboard Usage


Nick

unread,
Jun 15, 2010, 6:24:37 AM6/15/10
to dev mail list
Quoth Ethan Grammatikidis:

> I think it's pointless because most file types can be identified
> from their first few bytes. This loops back around to my
> content-type argument, why should the server go looking for file
> type when the client gets it handed to it anyway?

Because that way you can do content negotiation. Granted, that isn't
much used today, and it would make sense to make content-type
optional, but I like the idea of content negotiation. Being able to
e.g. get the original markdown for the content of a page, without
the HTML crap, navigation etc, would be really nice in a lot of
cases. I get the impression the W3C expected content negotiation to
be used a lot more when they wrote the HTTP 1.1 spec.
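Client-side, the mechanism is just a request header; it's the server
end that almost nobody implements. Hypothetical examples against a
server that did (the markdown media type and URL are made up):

$ curl -H 'Accept: text/x-markdown' http://example.com/article
$ curl -H 'Accept-Language: is' http://example.com/article

The same URL then yields different representations of the same resource.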

Ethan Grammatikidis

unread,
Jun 15, 2010, 7:19:56 AM6/15/10
to dev mail list

On 15 Jun 2010, at 11:24, Nick wrote:

> Quoth Ethan Grammatikidis:
>> I think it's pointless because most file types can be identified
>> from their first few bytes. This loops back around to my
>> content-type argument, why should the server go looking for file
>> type when the client gets it handed to it anyway?
>
> Because that way you can do content negotiation. Granted, that isn't
> much used today,

Why not? With more international businesses than ever on the web and
the internet spread further over the globe than ever before, and with
content negotiation having been around for such a long time, why is it
hardly used? Perhaps because it sucks?

> and it would make sense to make content-type
> optional, but I like the idea of content negotiation. Being able to
> e.g. get the original markdown for the content of a page, without
> the HTML crap, navigation etc, would be really nice in a lot of
> cases.

Maybe, but I doubt the majority of web designers would like you
looking at their source, as simple as it might be, and the likelihood
of big businesses letting you get at their web page sources seems very
low. Maybe I'm just terminally cynical.

> I get the impression the W3C expected content negotiation to
> be used a lot more when they wrote the HTTP 1.1 spec.

Erm, yeah. The W3C seems to have expected a lot of things would be
practical and useful.

Nick

unread,
Jun 15, 2010, 7:48:34 AM6/15/10
to dev mail list
Quoth Ethan Grammatikidis:

> On 15 Jun 2010, at 11:24, Nick wrote:
> > Because that way you can do content negotiation. Granted, that isn't
> > much used today,
>
> Why not? With more international businesses than ever on the web and
> the internet spread further over the globe than ever before, and with
> content negotiation having been around for such a long time, why is it
> hardly used? Perhaps because it sucks?

I always presumed it was because web browsers never really gave it a
meaningful interface. Same, for that matter, with HTTP basic
authentication.



> > and it would make sense to make content-type
> > optional, but I like the idea of content negotiation. Being able to
> > e.g. get the original markdown for the content of a page, without
> > the HTML crap, navigation etc, would be really nice in a lot of
> > cases.
>
> Maybe, but I doubt the majority of web designers would like you
> looking at their source, as simple as it might be, and the likelihood
> of big businesses letting you get at their web page sources seems very
> low. Maybe I'm just terminally cynical.

Sigh, no, you're largely right. Though wikipedia or some of the more
open blog engines are examples where this is less likely to be true.

> > I get the impression the W3C expected content negotiation to
> > be used a lot more when they wrote the HTTP 1.1 spec.
>
> Erm, yeah. The W3C seems to have expected a lot of things would be
> practical and useful.

Well, I prefer the W3C's vision of the web to the one designers and
marketers have created.

Incidentally, can anyone recommend a good gopher client? I missed it
the first time 'round, and I'd be curious to see a different
paradigm of web type thing.

Kris Maglione

unread,
Jun 15, 2010, 7:45:54 AM6/15/10
to d...@suckless.org
Does anyone ever notice that every time we have this thread, it
grows without bound, and yet never manages to get anywhere?

--
Kris Maglione

You're bound to be unhappy if you optimize everything.
--Donald Knuth


Kurt H Maier

unread,
Jun 15, 2010, 8:43:31 AM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 7:45 AM, Kris Maglione <magli...@gmail.com> wrote:
> Does anyone ever notice that every time we have this thread, it grows
> without bound,

This happens with this topic on all general-dev mailing lists.

>and yet never manages to get anywhere?

This is what makes the suckless list better. Otherwise you wind up
with shit like http://www.archhurd.org/

--
# Kurt H Maier

Ethan Grammatikidis

unread,
Jun 15, 2010, 9:21:12 AM6/15/10
to dev mail list

On 15 Jun 2010, at 12:48, Nick wrote:

> Quoth Ethan Grammatikidis:
>> On 15 Jun 2010, at 11:24, Nick wrote:
>>> Because that way you can do content negotiation. Granted, that isn't
>>> much used today,
>>
>> Why not? With more international businesses than ever on the web and
>> the internet spread further over the globe than ever before, and with
>> content negotiation having been around for such a long time, why is
>> it
>> hardly used? Perhaps because it sucks?
>
> I always presumed it was because web browsers never really gave it a
> meaningful interface. Same, for that matter, with HTTP basic
> authentication.

The interface for language content negotiation is straightforward and
meaningful, but nobody uses even that.

>
>>> and it would make sense to make content-type
>>> optional, but I like the idea of content negotiation. Being able to
>>> e.g. get the original markdown for the content of a page, without
>>> the HTML crap, navigation etc, would be really nice in a lot of
>>> cases.
>>
>> Maybe, but I doubt the majority of web designers would like you
>> looking at their source, as simple as it might be, and the likelihood
>> of big businesses letting you get at their web page sources seems
>> very
>> low. Maybe I'm just terminally cynical.
>
> Sigh, no, you're largely right. Though wikipedia or some of the more
> open blog engines are examples where this is less likely to be true.
>
>>> I get the impression the W3C expected content negotiation to
>>> be used a lot more when they wrote the HTTP 1.1 spec.
>>
>> Erm, yeah. The W3C seems to have expected a lot of things would be
>> practical and useful.
>
> Well, I prefer the W3C's vision of the web to the one designers and
> marketers have created.

I don't. :) There are plenty of worthless shinyshit marketing sites,
of course, but sites which actually sell you a wide range of products
make sure you can find the products you want AND specifications for
them.

On w3.org, by contrast, the page on the CGI standard has nothing but
dead links and references to an obsolete web server. I was searching
for the CGI standard the other day and couldn't find it _anywhere_.
I've not generally found navigating w3.org too easy; it's only all
right when you already know where stuff is.

>
> Incidentally, can anyone recommend a good gopher client? I missed it
> the first time 'round, and I'd be curious to see a different
> paradigm of web type thing.

I'm curious too. I've only ever used a somewhat sucky web gateway to
access gopher, and that only once.

Dmitry Maluka

unread,
Jun 15, 2010, 9:45:30 AM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 02:21:12PM +0100, Ethan Grammatikidis wrote:
> On w3.org, by contrast, the page on the CGI standard has nothing but
> dead links and references to an obsolete web server. I was searching
> for the CGI standard the other day and couldn't find it _anywhere_.

It's here, btw: http://tools.ietf.org/html/rfc3875

Dieter Plaetinck

unread,
Jun 15, 2010, 10:05:24 AM6/15/10
to dev mail list
On Tue, 15 Jun 2010 08:43:31 -0400
Kurt H Maier <karm...@gmail.com> wrote:


> This is what makes the suckless list better. Otherwise you wind up
> with shit like http://www.archhurd.org/
>

What's wrong with arch hurd?

Dieter

Kris Maglione

unread,
Jun 15, 2010, 10:18:20 AM6/15/10
to d...@suckless.org

The HURD part, obviously.

--
Kris Maglione

Haskell is faster than C++, more concise than Perl, more regular than
Python, more flexible than Ruby, more typeful than C#, more robust
than Java, and has absolutely nothing in common with PHP.
--Autrijus Tang


anonymous

unread,
Jun 15, 2010, 11:12:54 AM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 12:48:34PM +0100, Nick wrote:
> Incidentally, can anyone recommend a good gopher client? I missed it
> the first time 'round, and I'd be curious to see a different
> paradigm of web type thing.

Lynx and Mozilla Firefox support Gopher.


Mate Nagy

unread,
Jun 15, 2010, 11:39:40 AM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 07:12:54PM +0400, anonymous wrote:
> Lynx and Mozilla Firefox support Gopher.
firefox's gopher support has some catches (e.g. only port 70 is
supported; a port given after ':' is ignored).

There is an extension for firefox called overbite:
http://gopher.floodgap.com/overbite/

this adds decent gopher support.

lynx used to be a terribly buggy gopher client, but in recent versions
the major problems seem to be fixed. I remember it had an issue with
somewhat overzealous caching, so watch out.

There's also the "gopher" package in Debian, which is supposedly "a
text-based (ncurses) client from the University of Minnesota."
This is an abomination that tries to connect with (the nonstandard)
gopher+ by default, and if the gopher server doesn't handle this, it
fails utterly. Gopher servers must contain gopher+ trampolines to work
around this problem. It also has problems handling menus with more
consecutive info lines than the screen height (an unusual but not
unknown situation).

My vote: if you're running firefox anyway, use overbite; otherwise try
lynx.

Mate

Uriel

unread,
Jun 15, 2010, 3:37:55 PM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 4:18 PM, Kris Maglione <magli...@gmail.com> wrote:
> On Tue, Jun 15, 2010 at 04:05:24PM +0200, Dieter Plaetinck wrote:
>>
>> What's wrong with arch hurd?
>
> The HURD part, obviously.

s/H/T/

uriel

Bjartur Thorlacius

unread,
Jun 15, 2010, 4:45:15 PM6/15/10
to dev mail list
On 6/14/10, Ethan Grammatikidis <eek...@fastmail.fm> wrote:
>
> On 15 Jun 2010, at 00:28, Antoni Grzymala wrote:
>
>> Bjartur Thorlacius dixit (2010-06-14, 23:24):
>>
>>> On 6/14/10, Matthew Bauer <mjba...@gmail.com> wrote:
>>>> I wish modern filesystems would allow some way of identifying a
>>>> file type
>>>> other than by its filename. It seems like that would make things
>>>> more straightforward.
>>
>>> Surely many modern filesystem support xattrs (extended file
>>> attributes)?
>>> One should be able to use them to store media types.
>
> Should, or will?
WDYM? AFAIK ext4, Reiserfs, ZFS (if that's categorized as a FS), btrfs
and others support xattr /if/ properly configured. OTOH I think they're
disabled by default on many distros. Any examples of filesystems that
don't support them besides NFS?

> I get the impression storing file type information was much more
> common in the past, which raises the question why is it not now? I
> think it's pointless because most file types can be identified from
> their first few bytes. This loops back around to my content-type
> argument, why should the server go looking for file type when the
> client gets it handed to it anyway?
Not all media types contain magic numbers. In theory one could just
wrap all files in a metadata container that would allow for separating
"static" metadata about files from transfer info (such as
Date and Transfer-*), but that would require a long transition period
and standardization on a new Content-Encoding that may become the
default in, say, HTTP/2.0 and get some basic support in MS IE 11 or 12.

P.S. When I say "wrapper" I mean something like a shebang/PS-style
header like #=text/html.
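A sketch of what serving such a wrapper could look like, assuming the
hypothetical #=type/subtype first-line convention (both the convention
and the script are made up for illustration):

#!/bin/sh
# emit a Content-Type header taken from the file's own first line,
# then the body; fall back to octet-stream for unwrapped files
read -r first < "$1"
case "$first" in
'#='*) printf 'Content-Type: %s\r\n\r\n' "${first#??}"; tail -n +2 "$1" ;;
*)     printf 'Content-Type: application/octet-stream\r\n\r\n'; cat "$1" ;;
esac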
--
kv,
- Bjartur

Dieter Plaetinck

unread,
Jun 15, 2010, 4:49:04 PM6/15/10
to d...@suckless.org
On Tue, 15 Jun 2010 10:18:20 -0400
Kris Maglione <magli...@gmail.com> wrote:

> On Tue, Jun 15, 2010 at 04:05:24PM +0200, Dieter Plaetinck wrote:
> >On Tue, 15 Jun 2010 08:43:31 -0400
> >Kurt H Maier <karm...@gmail.com> wrote:
> >> This is what makes the suckless list better. Otherwise you wind up
> >> with shit like http://www.archhurd.org/
> >>
> >
> >What's wrong with arch hurd?
>
> The HURD part, obviously.
>

hmm. i'm not too familiar with hurd, but afaik it's supposed to be
simpler and more elegant than Linux
Dieter

Kris Maglione

unread,
Jun 15, 2010, 4:51:25 PM6/15/10
to d...@suckless.org

It's neither.

--
Kris Maglione

Advertising may be described as the science of arresting human
intelligence long enough to get money from it.


Kurt H Maier

unread,
Jun 15, 2010, 5:02:40 PM6/15/10
to dev mail list
On Tue, Jun 15, 2010 at 4:51 PM, Kris Maglione <magli...@gmail.com> wrote:
>> hmm. i'm not too familiar with hurd, but afaik it's supposed to be
>> simpler and more elegant than Linux
>
> It's neither.

And it won't be, even if by some miracle someone gets it working one day.

pancake

unread,
Jun 15, 2010, 4:58:47 PM6/15/10
to d...@suckless.org
I was the author of Bee GNU/Hurd. A few years ago I did my own GNU/Hurd
distro based on the pkgsrc package system and with my own build system,
because the Debian and GNU ones were completely unusable and impracticable.

The situation hasn't changed too much. Debian maintains many patches that
fix things, and GNU will never accept them; the system is unusable and
unstable.

The main reason why GNU/Hurd is in this situation is that Mach is a
bloated microkernel. The L4 port has never been adopted by the whole
community, and OSKit is an 800MB userland monster for handling drivers.

The system is not usable for editing/compiling code, because the
console is broken, X is quite slow, and the kernel sometimes crashes.
That makes the system very weird for development, to say nothing of users.

Nothing has changed in Hurd in the past 20 years. In fact.. I don't
think this will ever change :)

--pancake

Amit Uttamchandani

unread,
May 14, 2012, 6:27:38 PM5/14/12
to dev mail list
On Mon, Jun 14, 2010 at 01:51:21PM +0200, Kurt Van Dijck wrote:

[snip]

>
> I fully agree. After looking at minit & stuff, I decided to write our own
> init daemon to incorporate some safety stuff.
> * booting is done in parallel.
> * udev (+/- 5 sec) was replaced by our (small) fdev (now takes some 0.1 sec).
>
> some examples:
> dell laptop: booting was over 45 seconds (from the kernel starting timers), now 15.
> via epia board: was 25, now 4.3 seconds.
> embedded ARM cpu (never used debian there, but busybox): no final measurements,
> but a boot time of 18 seconds got reduced to 6.
> OpenMoko: boot time was originally (very long) 2m40s, reduced to 35s.
>
> I admit our init is quite a bit more complex than strictly necessary (we try to guarantee
> that a watched process is not dead-locked, and therefore have a hardware watchdog
> in the init process, and ...).
>
> I'm not familiar with BSD inits.
>
> Kurt

Hello,

Just came across your message while going through the suckless archives.
You mentioned later on in the thread that you have not opensourced the
init daemon yet. Has this happened? Or is it possible now? I would like
to take a look at some of the optimizations you have done.

Thanks,
Amit

Kurt Van Dijck

unread,
Jun 18, 2012, 6:08:35 AM6/18/12
to dev mail list
Sorry for the delay.

On Mon, May 14, 2012 at 03:27:38PM -0700, Amit Uttamchandani wrote:
> On Mon, Jun 14, 2010 at 01:51:21PM +0200, Kurt Van Dijck wrote:
>
> [snip]
>
> >
> > I fully agree. After looking at minit & stuff, I decided to write our own
> > init daemon to incorporate some safety stuff.
> > * booting is done in parallel.
> > * udev (+/- 5 sec) was replaced by our (small) fdev (now takes some 0.1 sec).
> >
>
[...]
> Hello,
>
> Just came across your message while going through the suckless archives.
> You mentioned later on in the thread that you have not opensourced the
> init daemon yet. Has this happened?
Nope, not yet.
> Or is it possible now?
There's no real intention to do so, and without a strong commitment, I
chose not to start open-sourcing it.

The 'fdev' daemon may be considered again, especially since linux
invented a 'devtmpfs' to solve the udev bloat.
> I would like
> to take a look at some of the optimizations you have done.
I do understand that.

Let me pave your way a bit.
I think for the examples I mentioned, 'minit' & equivalents perform very well, maybe
even better. I started over to accomplish:
* per-service software watchdog.
* dependency-based shutdown.
* probably a few others I forgot.

The optimisations to reach short startup times are not implemented in the init daemon,
but rather in the dependency configurations of different services. Which init daemon
actually gets used is of less importance (for this matter).

The optimizations I made:
* early boot:
WAKEUP:
$ dmesg -n5
$ mount nodev /sys -tsysfs
$ mount nodev /proc -tproc
$ mount nodev /dev -ttmpfs
# make nodes /dev/{console,null,zero}
then split up into:
SETUP:
$ hostname -F /etc/hostname
$ ip link set lo up
$ mkdir /dev/cgroup
$ mount nodev /dev/cgroup -tcgroup
FDEV:
# clear hotplug callback from kernel
# should already be cleared in kernel config & recompile
$ echo "" > /proc/sys/kernel/hotplug
# start device daemon (fdev, mdev, ...)
$ fdev ...
# wait for SETUP & FDEV
BOOT:
$ mkdir /dev/pts
$ mount nodev /dev/pts -tdevpts
$ mount / -o remount,rw
$ rm -rf /tmp
$ cp /proc/mounts /etc/mtab
# very debian specific
$ /etc/init.d/keymap.sh start
# DEBIAN ifupdown state
$ cp /dev/null /etc/network/run/ifstat
# now, you're ready to run anything.

minit for example allows for easily starting stuff
in parallel.
Also, by putting network 'lo' up already,
you can start most networking daemons, without waiting
for DHCP to complete ...

Basically, I let no service depend on others, unless some service
would fail to initialize properly otherwise.

Kurt

>
> Thanks,
> Amit
>

--
Kurt Van Dijck
GRAMMER EiA ELECTRONICS
http://www.eia.be
kurt.va...@eia.be
+32-38708534
