What does it mean for an operating system to be suckless?
What features should (or should not) an OS have in order to be suckless?
Are there suckless or close-to-be-suckless operating systems out there?
What does suckless think about Plan 9, *BSD, GNU/Linux, MS Windows, ..?
Is it possible to have an OS for desktop/laptop everyday use (multimedia, web,
programming, research, ..) which is actually usable, not rotten inside, and alive?
There's work going on now to create a statically linked suckless Linux
distribution: stali: http://sta.li/
--
Samuel Baldwin - logik.li
Hm, I think we already concluded somewhat that a research application
is unlikely to be suckless. I'm not really sure what you mean by
"multimedia" and "programming". Music and movie players can probably
not be suckless if that means they should be patent free while still
be able to play back the common formats. A web browser might be able
to suck less in the future, when cars fly.
One of the issues to consider is that what computers are used for
changes with time, and decisions that one may classify as "the
suckless way of doing things" at one point in time may mean that it's
not effectively usable in some future situations. For instance,
about 20 years ago you wouldn't have considered multimedia as
something you need on a computer, so the complexity required for
reliable low-latency scheduling might have been regarded as needlessly
complex 20 years ago; by now it's pretty essential for audio
processing. The fact that Linux is being used in smartphones and smartbooks
is suddenly pushing kernel developers who've only worked on PCs
face-to-face with hyper-aggressive suspending ideas for power-saving.
If cloud computing (where you want to keep the decrypted data you have
on any individual remote computer to the minimum required for the
given task) takes off (and given that Plan 9 was based on running
intensive tasks on a server, I hope I'm safe from a Uriel rant)
functionality that seems pointlessly complicated and "non-suckless"
today may become appropriate. (For instance, I'd imagine that
cryptographic key management will probably become more integrated into
the kernel simply because you'll want to have such fine-grained
permissions and decrypting of entities that anything in userspace will
probably be too slow and too easy to attack.) If implanted-in-the-body
devices get complex enough they may warrant a general purpose OS...
Of course, part of this comes from the tendency to try to use some
configuration of the same base OS (Linux, Mach, etc) for a wide range
of uses. Time will tell if this will continue to be a reasonable
development strategy. But if it is, a given design may be "suckless"
only for a period of time.
--
cheers, dave tweed__________________________
computer vision researcher: david...@gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot
Problem is the vast complexity they both contain is hidden inside
libwebkit. That thing is huge. I get the feeling surf and uzbl only
make the tip of the iceberg suck less.
cls
If the system is sufficiently modular it should be relatively future-proof.
cls
I meant to suggest that design decisions and architectures might need
changing as new use cases come to light, rather than that a single
design should be future-proof-ish, and that this is in fact desirable.
However that means that saying something is "suckless" has to be
implicitly qualified with "for current needs". To pick a really simple
example, consider the changes to booting that happened since the
arrival of netbooks. What was once a relatively rare process, with the
corresponding "suckless" design being to keep things simple, has
become something where sub 5s booting is wanted, which requires more
complicated techniques. That's not to say that old-style booting was
wrong for the time it was designed, but the criteria now are different
and consequently the most elegant solution is now different.
We could say the same about dwm, X11 and Xinerama.
pmarin.
I think the Unix philosophy makes an OS "suckless". Each tool does
just one task and solves that task in the best way, and a universal
interface between the tools allows combining them to solve bigger
tasks.
This approach is modular and quite future-proof, as the past has shown.
> What features should (or should not) an OS have in order to be suckless?
The point is not about the features; it's more about the structural organisation.
> Are there suckless or close-to-be-suckless operating systems out there?
Sure, original Unix and Plan 9 are quite suckless. I think one can
achieve a suckless Linux system as well -- I know that the Linux
kernel is more complex than it needs to be, but if one sees the kernel
as a single entity, the rest of a system can be quite suckless.
Cheers,
Anselm
I think the Unix philosophy is quite future proof, also with
parallelization in mind. So if new requirements arise, it's rather a
question of whether a new tool or a new way of combining them is needed.
Regarding boot speed, I disagree. I think short boot cycles can be
achieved with much simpler init systems than the insanity people got
used to, like the SysV-style Debian init. A simple BSD-style init or an
even simpler system always outperforms any "smart" technique in my
observation.
Cheers,
Anselm
Touché.
Being pragmatic is depressing.
cls
> Regarding boot speed, I disagree. I think short boot cycles can be
> achieved with much simpler init systems than the insanity people got
> used to, like the SysV-style Debian init. A simple BSD-style init or an
> even simpler system always outperforms any "smart" technique in my
> observation.
Well, for really excellent performance, you do need the ability to
parallelise the init operations, so that's a bit of complexity that has
actual performance benefits.
I agree there is little value in the general runlevel mess.
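To make the claim concrete, here is a minimal sketch of parallel service
startup in C, assuming a flat set of independent rc scripts (the paths
are made up; real inits also need dependency ordering between some of
those scripts, which is where the extra complexity creeps in):

/* start every service at once, then wait for all of them */
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
	const char *svc[] = { "/etc/rc.d/net", "/etc/rc.d/sshd", "/etc/rc.d/ntpd" };
	int i, n = sizeof svc / sizeof *svc;

	for (i = 0; i < n; i++)
		if (fork() == 0) {              /* child: run one service */
			execl(svc[i], svc[i], (char *)0);
			_exit(1);               /* exec failed */
		}
	while (wait(0) > 0)                     /* reap until none are left */
		;
	return 0;
}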
--
\ Troels
/\ Henriksen
I fully agree. After looking at minit and the like, I decided to write our own
init daemon to incorporate some safety features.
* booting is done in parallel.
* udev (+/- 5sec) was replaced by our (small) fdev (now takes some 0.1 sec).
Some examples:
Dell laptop: booting was over 45 seconds (from kernel starting timers), now 15.
VIA EPIA board: was 25, now 4.3 seconds.
embedded ARM CPU (never used Debian there, but busybox): no final measurements,
but a boot time of 18 seconds got reduced to 6.
OpenMoko: boot time was originally (very long) 2m40s, reduced to 35s.
I admit our init is quite a bit more complex than strictly necessary (we try to
guarantee that a watched process is not deadlocked, and therefore have a hardware
watchdog in the init process, and ...).
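(For reference, the Linux hardware watchdog interface alluded to above
boils down to something like the sketch below; the device path and
timeout are assumptions that differ per board, and in Kurt's design the
petting presumably only happens while the watched processes still
respond.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

int main(void) {
	int fd = open("/dev/watchdog", O_WRONLY);
	int timeout = 10;                       /* seconds until the board resets */

	if (fd < 0)
		return 1;
	ioctl(fd, WDIOC_SETTIMEOUT, &timeout);
	for (;;) {
		write(fd, "\0", 1);             /* "pet" the watchdog */
		sleep(timeout / 2);             /* if init stalls, the hardware reboots */
	}
}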
I'm not familiar with BSD inits.
Kurt
I don't want to say it sucks less. But it definitely does for developers
because you can install multiple versions of a package at the same time.
You can always roll back.
It doesn't fit all needs at the moment because it's hard to separate
headers from binaries. I think it can be fixed, but the project doesn't
have enough manpower to start such an effort yet.
One of its key features is that you can easily add quality testing to
your distribution workflow. And systems which suck less just work.
It may be worth having a look at the project even if it's not a perfect
match.
Marc Weber
There is also mdev in busybox, in case you are interested. I like busybox very
much, but I think it lacks documentation.
>
> On Sun, Jun 13, 2010 at 11:09 PM, Martin Kopta <mar...@kopta.eu>
> wrote:
>> Some philosophical questions..
>>
>> What does it mean for an operating system to be suckless?
>> What features should (or should not) an OS have in order to be
>> suckless?
>> Are there suckless or close-to-be-suckless operating systems out
>> there?
>> What does suckless think about Plan 9, *BSD, GNU/Linux, MS
>> Windows, ..?
>> Is it possible to have an OS for desktop/laptop everyday use
>> (multimedia, web,
>> programming, research, ..) which is actually usable, not rotten
>> inside, and alive?
>
> One of the issues to consider is that what computers are used for
> changes with time, and decisions that one may classify as "the
> suckless way of doing things" at one point in time may mean that it's
> not effectively usable in some future situations. For instance,
> about 20 years ago you wouldn't have considered multimedia as
> something you need on a computer, so the complexity required for
> reliable low-latency scheduling might have been regarded as needlessly
> complex 20 years ago; by now it's pretty essential for audio
> processing.
A curious example, in the sense that there was a market for multimedia
on PCs 20 years ago and there was suitable technology, but the two
never came together.
Multimedia on PCs was the upcoming thing 20 years ago. It wasn't just
expected to happen, it was starting to happen. In about 88 I was wowed
by video on a PC screen, but several years later (maybe '94 or '95) I
gave an Atari ST to a musician because "Pentiums" as she called them
couldn't really produce accurate enough timing. The big surprise here
is the timing required was for MIDI; 1/64-beat resolution at a rarely-
used maximum of 250 beats per minute comes to less than 270Hz. The
90MHz+ Pentiums of the time couldn't handle that, where the 8MHz ST
could.
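(For the arithmetic: 64 ticks per beat at 250 beats per minute is
64 * 250 / 60, about 267 ticks per second, i.e. one tick roughly every
3.75 ms.)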
Oh and I almost forgot, the ST had shared video memory. In the high
resolution display mode used by all the top-notch MIDI sequencer
software, the ST's CPU spent more than 50% of its time halted while
the display circuitry read from the RAM. To re-phrase my statement,
the 90MHz+ Pentiums of the time couldn't handle accurately producing a
270Hz signal, where the 8MHz ST not only could, but did it with one
arm metaphorically tied behind its back by its own display system.
Something sucked all right.
It would be easy to say the ST didn't suck because it didn't
multitask, but at that point OS-9 must have been around for about two
decades with real-time multi-tasking and multi-user capabilities. OS-9
started life on the 6809, so it couldn't have been complex.
There's more. There's fucking more, but thinking about what computers
have lost in the last 20 years upsets me too much to write about. Home
computers have LOST a lot in the past 20 years while hardware power
has grown by orders of magnitude. It's phenomenal, it's staggering.
--
Do not specify what the computer should do for you, ask what the
computer can do for you.
Busybox is a bit incomplete in places too. ed is missing E and Q,
which are the sort of do-without-confirmation commands useful in
scripts. ed also lacks n, leaving (I think) no way to get line numbers
from a busybox utility. I don't know how vi is coming along, but I
don't have good memories of it from 2 or 3 years ago; it was really
rotten.
dc has badly broken parsing and appears to be missing _most_ commands.
I'm not impressed yet. :)
--
Complexity is not a function of the number of features. Some features
exist only because complexity was _removed_ from the underlying system.
Would you mind sharing the source code? We are working on another
"suckless" distro, and we don't want dbus, hal, gconf, fdi, xml,
policykit and ponies in there, so we're always looking for Unixy
software to extend it.
> Would you mind sharing the source code? We are working on another
> "suckless" distro, and we don't want dbus, hal, gconf, fdi, xml,
> policykit and ponies in there, so we're always looking for Unixy
> software to extend it.
Maybe this shows how Linux is different in this case, but here
on FreeBSD you still don't have to have things like that, if you
don't want them.
(I don't have hal, policykit, dbus, gconf etc.)
best regards,
- Jakub Lach
I use Linux and I don't have them either. The fact is that most Linux
distros are binary-based, so developers have to make some decisions, and
they often make the bad ones :D
--
Stephane Sezer
Second, more and more major web portals and services are multi-browser.
Even MS's Office Web Apps (released a week ago) support all the major
browsers, and the same goes for the most popular sites, like Google's
services, Twitter, Facebook, most of Yahoo, etc. The most popular CMSs
are largely standards-compliant too (WordPress, Drupal, etc.), and they
run a large share of small projects these days. Finally, a lot of
services want to have a mobile version too, and IE has no decisive part
there at all; it's WebKit territory. So, even if my point about IE is
wrong, most sites are multi-browser these days. Does that mean they are
mostly standards-compliant? Or does each browser require its own tweaks,
so the "firefox" (or WebKit, etc.) version of any site is not a
standards-compliant site, but rather a set of tweaks for that browser?
So, here is my question: if we take only modern and active projects, how
standards-compliant are they? Suppose we have a browser engine that
implements only the current standards (OK, maybe some legacy standards,
but no IE or other tweaks); will we still be able to use 95% of the web?
> On 13 June 2010 23:28, Matthew Bauer <mjba...@gmail.com> wrote:
> > I think surf and uzbl are good steps forward in making a KISS web browser.
> Problem is the vast complexity they both contain is hidden inside
> libwebkit. That thing is huge. I get the feeling surf and uzbl only
> make the tip of the iceberg suck less.
> cls
>
--
wbr, Ilembitov
Probably, but why? There's nothing suckless at all about the standards
coming out of the W3C. I don't know much about rendering HTML, but I
recently made a web server, and while I started out with the noble
intent of supporting standards, before I was done I just had to
declare HTTP 1.1 schizophrenic and delusional!
Consider this: Out of web browser and web server, which one has to
examine the data in order to render it, and which one is just reading
it from the disk and dumping it down a pipe? Which one's resources are
at a premium, and which is mostly idling between fetching web pages?
With those two questions in mind, can someone please tell me what the
W3C were collectively smoking when they made Content-Type mandatory in
HTTP 1.1? If that isn't enough of an argument, it's actually impossible
to set Content-Type correctly from the file extension. No one really
tries, and I very much doubt they ever did, but that didn't stop the
W3C from making it mandatory. Idiots.
"Schizophrenic" actually refers to a less serious problem, but still a
bizarre one. Dates are provided in headers to guide caching, which is
very useful in itself, but the date format is about as long-winded as
it can get, and it's US-localised too. With that in mind, why are chunk
length values for chunked encoding given in hex? That's not even
consistent with the value of Content-Length, which is decimal. And what
titan amongst geniuses decided it was appropriate to apply chunked
encoding to the HTTP headers?
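(For readers who haven't met it, a chunked reply looks roughly like
this on the wire, with chunk sizes in hex and a zero-length chunk to
finish, while Content-Length elsewhere in the protocol is plain
decimal; the payload here is just an example.)

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked

  1a
  abcdefghijklmnopqrstuvwxyz
  10
  0123456789abcdef
  0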
--
kv,
- Bjartur
Besides, HFS has had this feature (along with the whole data/resource
fork schizophrenia) for the last 15 or 20 years.
--
[a]
The other issue is providing a very easy-to-type equivalent of
globbing on filenames in shell/script expressions for whatever
mechanism is used (i.e., for things like 'find . -name "*.(h|cpp|tcc)" |
xargs ...').
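(With stock find that currently takes something like
find . \( -name '*.h' -o -name '*.cpp' -o -name '*.tcc' \) | xargs ...
which is exactly the sort of verbosity being complained about.)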
> Bjartur Thorlacius dixit (2010-06-14, 23:24):
>
>> On 6/14/10, Matthew Bauer <mjba...@gmail.com> wrote:
>>> I wish modern filesystems would allow some way of identifying a
>>> file type
>>> besides in the filename. It seems like that would make things more
>>> straight
>>> forward.
>
>> Surely many modern filesystem support xattrs (extended file
>> attributes)?
>> One should be able to use them to store media types.
Should, or will?
> Besides, HFS has had this feature (along with the whole data/resource
> fork schizophrenia) for the last 15 or 20 years.
I think HFS only has that feature for backwards compatibility; I
haven't seen any sign of its use in Mac OS X.
I get the impression storing file type information was much more
common in the past, which raises the question of why it is not now. I
think it's pointless, because most file types can be identified from
their first few bytes. This loops back around to my Content-Type
argument: why should the server go looking for the file type when the
client gets the data handed to it anyway?
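To illustrate the point, here is a sketch of sniffing the type from the
first few bytes, the way file(1) does; only a couple of well-known
signatures are shown, and the media type names are just the obvious
choices:

#include <stdio.h>
#include <string.h>

static const char *sniff(const unsigned char *b, size_t n) {
	if (n >= 8 && !memcmp(b, "\x89PNG\r\n\x1a\n", 8)) return "image/png";
	if (n >= 2 && !memcmp(b, "\x1f\x8b", 2))          return "application/gzip";
	if (n >= 5 && !memcmp(b, "%PDF-", 5))             return "application/pdf";
	return "application/octet-stream";
}

int main(int argc, char *argv[]) {
	unsigned char buf[8];
	FILE *f;
	size_t n;

	if (argc < 2 || !(f = fopen(argv[1], "rb")))
		return 1;
	n = fread(buf, 1, sizeof buf, f);
	puts(sniff(buf, n));                    /* print the guessed media type */
	fclose(f);
	return 0;
}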
You are using an incompatible web browser.
Sorry, we're not cool enough to support your browser. Please keep it real
with one of the following browsers:
* Mozilla Firefox
* Safari
* Microsoft Internet Explorer
Facebook © 2010
Just sayin'.
--Noah
I've had to stop using surf to monitor a page at my job because they
now insist upon a Netscape or IE user agent string.
-sl
The thing is that this is part of a product for the company I work for.
I don't think my boss wants _all_ code open-sourced. I hate to say it, but the
answer is no for the moment.
I just talked about the init because it showed that it is not necessarily
the complexity that determines boot speed; it's the parallelism.
But I did seriously evaluate minit & ninit (somewhere on the internet). For a regular
desktop system, they would work as well, and they are better documented.
Our 'fdev' just dropped 5 seconds, but mdev is capable too.
Kurt
>
config.h: static char *useragent
or http://surf.suckless.org/patches/useragent
'Monitoring' a page sounds like something I'd script, though.
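For reference, that override is just a string assignment in config.h
(the value below is only an example UA string; rebuild surf after
changing it):

static char *useragent = "Mozilla/5.0 (X11; U; Linux) AppleWebKit (KHTML, like Gecko) Surf";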
--
ilf @jabber.berlin.ccc.de
Over 80 million Germans don't use a console. Don't click away!
-- An initiative of the Federal Office for Keyboard Use
Because that way you can do content negotiation. Granted, that isn't
much used today, and it would make sense to make content-type
optional, but I like the idea of content negotiation. Being able to
e.g. get the original markdown for the content of a page, without
the HTML crap, navigation etc, would be really nice in a lot of
cases. I get the impression the W3C expected content negotiation to
be used a lot more when they wrote the HTTP 1.1 spec.
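Roughly what that negotiation looks like on the wire (the markdown
media type here is illustrative rather than something you can count on
being registered):

  GET /article HTTP/1.1
  Host: example.org
  Accept: text/markdown, text/html;q=0.5

  HTTP/1.1 200 OK
  Content-Type: text/markdown
  Vary: Accept

The q-values let the client rank formats, and Vary tells caches that
the answer depends on the Accept header.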
> Quoth Ethan Grammatikidis:
>> I think it's pointless because most file types can be identified
>> from their first few bytes. This loops back around to my
>> content-type argument, why should the server go looking for file
>> type when the client gets it handed to it anyway?
>
> Because that way you can do content negotiation. Granted, that isn't
> much used today,
Why not? With more international businesses than ever on the web and
the internet spread further over the globe than ever before, and with
content negotiation having been around for such a long time, why is it
hardly used? Perhaps because it sucks?
> and it would make sense to make content-type
> optional, but I like the idea of content negotiation. Being able to
> e.g. get the original markdown for the content of a page, without
> the HTML crap, navigation etc, would be really nice in a lot of
> cases.
Maybe, but I doubt the majority of web designers would like you
looking at their source, as simple as it might be, and the likelihood
of big businesses letting you get at their web page sources seems very
low. Maybe I'm just terminally cynical.
> I get the impression the W3C expected content negotiation to
> be used a lot more when they wrote the HTTP 1.1 spec.
Erm, yeah. The W3C seems to have expected a lot of things would be
practical and useful.
I always presumed it was because web browsers never really gave it a
meaningful interface. Same, for that matter, with HTTP basic
authentication.
> > and it would make sense to make content-type
> > optional, but I like the idea of content negotiation. Being able to
> > e.g. get the original markdown for the content of a page, without
> > the HTML crap, navigation etc, would be really nice in a lot of
> > cases.
>
> Maybe, but I doubt the majority of web designers would like you
> looking at their source, as simple as it might be, and the likelihood
> of big businesses letting you get at their web page sources seems very
> low. Maybe I'm just terminally cynical.
Sigh, no, you're largely right. Though wikipedia or some of the more
open blog engines are examples where this is less likely to be true.
> > I get the impression the W3C expected content negotiation to
> > be used a lot more when they wrote the HTTP 1.1 spec.
>
> Erm, yeah. The W3C seems to have expected a lot of things would be
> practical and useful.
Well, I prefer the W3C's vision of the web to the one designers and
marketers have created.
Incidentally, can anyone recommend a good gopher client? I missed it
the first time 'round, and I'd be curious to see a different
paradigm of web type thing.
--
Kris Maglione
You're bound to be unhappy if you optimize everything.
--Donald Knuth
This happens with this topic on all general-dev mailing lists.
> and yet never manages to get anywhere?
This is what makes the suckless list better. Otherwise you wind up
with shit like http://www.archhurd.org/
--
# Kurt H Maier
> Quoth Ethan Grammatikidis:
>> On 15 Jun 2010, at 11:24, Nick wrote:
>>> Because that way you can do content negotiation. Granted, that isn't
>>> much used today,
>>
>> Why not? With more international businesses than ever on the web and
>> the internet spread further over the globe than ever before, and with
>> content negotiation having been around for such a long time, why is
>> it
>> hardly used? Perhaps because it sucks?
>
> I always presumed it was because web browsers never really gave it a
> meaningful interface. Same, for that matter, with HTTP basic
> authentication.
The interface for language content negotiation is straightforward and
meaningful, but nobody uses even that.
>
>>> and it would make sense to make content-type
>>> optional, but I like the idea of content negotiation. Being able to
>>> e.g. get the original markdown for the content of a page, without
>>> the HTML crap, navigation etc, would be really nice in a lot of
>>> cases.
>>
>> Maybe, but I doubt the majority of web designers would like you
>> looking at their source, as simple as it might be, and the likelihood
>> of big businesses letting you get at their web page sources seems
>> very
>> low. Maybe I'm just terminally cynical.
>
> Sigh, no, you're largely right. Though wikipedia or some of the more
> open blog engines are examples where this is less likely to be true.
>
>>> I get the impression the W3C expected content negotiation to
>>> be used a lot more when they wrote the HTTP 1.1 spec.
>>
>> Erm, yeah. The W3C seems to have expected a lot of things would be
>> practical and useful.
>
> Well, I prefer the W3C's vision of the web to the one designers and
> marketers have created.
I don't. :) There are plenty of worthless shinyshit marketing sites,
of course, but sites which actually sell you a wide range of products
make sure you can find the products you want AND specifications on them.
On w3.org, by contrast, the page on the CGI standard has nothing but
dead links and references to an obsolete web server. I was searching
for the CGI standard the other day and couldn't find it _anywhere_.
I've not generally found navigating w3.org too easy; it's only all
right when you already know where stuff is.
>
> Incidentally, can anyone recommend a good gopher client? I missed it
> the first time 'round, and I'd be curious to see a different
> paradigm of web type thing.
I'm curious too. I've only ever used a somewhat sucky web gateway to
access gopher, and that only once.
It's here, btw: http://tools.ietf.org/html/rfc3875
> This is what makes the suckless list better. Otherwise you wind up
> with shit like http://www.archhurd.org/
>
What's wrong with arch hurd?
Dieter
The HURD part, obviously.
--
Kris Maglione
Haskell is faster than C++, more concise than Perl, more regular than
Python, more flexible than Ruby, more typeful than C#, more robust
than Java, and has absolutely nothing in common with PHP.
--Autrijus Tang
Lynx and Mozilla Firefox support Gopher.
There is an extension for firefox called overbite:
http://gopher.floodgap.com/overbite/
this adds decent gopher support.
Lynx used to be a terribly buggy gopher client, but in recent versions
the major problems seem to be fixed. I remember it had an issue with
somewhat overzealous caching, so watch out.
There's also the "gopher" package in Debian, which is supposedly "a
text-based (ncurses) client from the University of Minnesota."
This is an abomination that tries to connect with (the nonstandard)
Gopher+ by default and, if the gopher server doesn't handle this, fails
utterly. Gopher servers must contain Gopher+ trampolines to work
around this problem. It also has problems handling menus with more
consecutive info lines than the screen height (a bit unusual, but not
an unknown situation).
My vote: if you're running Firefox anyway, use Overbite; otherwise try
Lynx.
Mate
s/H/T/
uriel
P.S. When I say "wrapper" I mean something like an shebang/PS style
header like #=text/html.
--
kv,
- Bjartur
> On Tue, Jun 15, 2010 at 04:05:24PM +0200, Dieter Plaetinck wrote:
> >On Tue, 15 Jun 2010 08:43:31 -0400
> >Kurt H Maier <karm...@gmail.com> wrote:
> >> This is what makes the suckless list better. Otherwise you wind up
> >> with shit like http://www.archhurd.org/
> >>
> >
> >What's wrong with arch hurd?
>
> The HURD part, obviously.
>
Hmm, I'm not too familiar with Hurd, but AFAIK it's supposed to be
simpler and more elegant than Linux.
Dieter
It's neither.
--
Kris Maglione
Advertising may be described as the science of arresting human
intelligence long enough to get money from it.
And it won't be, even if by some miracle someone gets it working one day.
The situation hasn't changed much. Debian maintains many patches that
fix things, and GNU will never accept them; the system is unusable and
unstable.
The main reason why GNU/Hurd is in this situation is that Mach is a
bloated microkernel. The L4 port has never been adopted by the whole
community, and OSKit is an 800MB userland monster for handling drivers.
The system is not usable for editing or compiling code, because the
console is broken, X is quite slow and the kernel sometimes crashes;
that makes it very awkward for development, to say nothing of users.
Nothing has changed in Hurd in the past 20 years. In fact, I don't
think this will ever change :)
--pancake