
Beyond the hype - how good was BeOS?


hakr...@gmail.com

May 15, 2006, 7:35:42 AM

I have been told BeOS was redesigned from the ground up, skipping all
the legacy parts. Now, looking back, how revolutionary is its
architecture compared to what we have today? Linux is a unix clone, a
system designed 30 years ago. OS X is built on top of BSD, a system as
old as unix. Windows NT (NT4, 2000, XP) was designed in the early
nineties which I assume makes it quite modern.

So, looking at BeOS today, is its architecture still efficient and fast?

greg.j...@gmail.com

May 15, 2006, 8:04:16 AM

Old doesn't necessarily mean slow or bad. The wheel's been around for
thousands of years, with constant improvement to the tools using the
wheel. UNIX has been around for years, with constant improvements by
various vendors.

Similarly, new is not necessarily good. Windows NT, although it was
designed in the nineties, uses ridiculously archaic ideas. Drive
letters, for example.

BeOS _was_ revolutionary in 1991. It hasn't changed since 2001. Five
years' worth of new technology...even if BeOS's architecture is
fundamentally faster, five years of optimization by the other operating
systems have probably caught up.

Michael B. Trausch

May 15, 2006, 8:32:41 AM

hakr...@gmail.com wrote in
<1147692942.4...@i40g2000cwc.googlegroups.com> on Mon May 15 2006
07:35:

Well, Windows NT was apparently built both on new code and with code, ideas,
and concepts from the VMS operating system. In fact, the lead
developer/project manager who worked on the VMS system, Dave Cutler, went
from Digital to Microsoft in 1988 to lead the development of
Windows NT. Apparently, he wanted to know how long it would take others to
figure out that WNT is VMS incremented by one letter in each position.

It would appear that he is still there, though that isn't certain (at least
to me). He was still working for Microsoft as of 2005, however.

Point being, that it has concepts from a legacy system anyway -- VMS
development started in the 70s for the VAX under a different name, and
that's some old stuff. BeOS is still great compared to the NT line, IMHO,
since not only is it a graphical system natively, but it has a CLI that has
the power and ease of the UNIX command line. The UNIX system has pretty
much had it right, I think, from inception: Quick 'n dirty where it should
be, and yet reliable. The fact that the basic commands (e.g., mv, cp, ls,
cat, and so forth) aren't shell built-ins is good, too, since shells would
grow quite large if those commands were builtins. /bin/bash on my system
is 664,084 bytes of ELF binary, dynamically linked to three libraries
totalling 1,503,848 bytes of ELF binary. Even so, it is the most used
program on at least those of my systems that don't tend to run X11.

The only thing that could probably be improved upon in today's world is that
it might be a good thing to have the graphics primitives in the operating
system kernel, and to have the graphics drivers expose an API that works
from kernel mode. Of course, my system running Linux doesn't have
that, and it works quite a bit more efficiently than the Windows NT family,
which, IIRC, does have those things. And if the graphics driver wants to
crash, it can do so on this system without taking the entire computer with
it -- unlike Windows NT family systems, where it will take the entire
computer with it, because you can't just kill the windowing system and
restart it. Whoops.

Anyway, NT isn't a terribly efficient system -- from a user standpoint. I
don't know how well it would fare compared to other systems from a
programming standpoint, but I suspect not well. It's much slower for
things like moving files around and the like -- though that could very well
be that "New Technology" filesystem that they have.

Anyway, I'd rather run Be if it were still developed and maintained. It may
be possible to switch to Haiku (the project that used to be OpenBeOS) when
they have a good release out, assuming that the users and developers will
stay focused on it after release. I remember that the system was something
of a winner in my eyes, but it's been a long time since I've actually used
it.

You can find the Haiku project online:

http://haiku-os.org/learn.php

- Mike

Maxim S. Shatskih

May 15, 2006, 9:54:25 AM

> system designed 30 years ago. OS X is built on top of BSD, a system as
> old as unix.

BSD is a kind of UNIX.

OS X is re-branded NextStep. Its inner core is the Mach microkernel. Then - a BSD-
compatible kernel around it. More-or-less BSD-compatible command line, directory
layout and config files.

Then the 2D graphics engine based on Display PostScript, and the UI library
based on the Objective C language. Also, the Delphi-style (but predating Delphi
by a lot) development tool.

>Windows NT (NT4, 2000, XP) was designed in the early
> nineties

Designed late 80ies with strong VMS influence. Developed early nineties.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
ma...@storagecraft.com
http://www.storagecraft.com

Simon Felix

May 15, 2006, 7:37:22 PM

greg.j...@gmail.com wrote:
> Old doesn't necessarily mean slow or bad. The wheel's been around for
> thousands of years, with constant improvement to the tools using the
> wheel. UNIX has been around for years, with constant improvements by
> various vendors.
>
> Similarly, new is not necessarily good. Windows NT, although it was
> designed in the nineties, uses ridiculously archaic ideas. Drive
> letters, for example.

we have a german saying, coined by dieter nuhr:

"wenn man keine ahnung hat, einfach mal fresse halten"
(roughly: "if you have no clue, just keep your mouth shut")

which, politely translated, means something like "don't talk about stuff
you obviously have no idea about"


nt doesn't use drive letters. there's just a mapping in the win32 layer
so that win32 applications can run on the windows nt kernel. win32 sucks in
some ways - windows nt itself (kernel, os design, ...) does NOT

> BeOS _was_ revolutionary in 1991. It hasn't changed since 2001. Five
> years' worth of new technology...even if BeOS's architecture is
> fundamentally faster, five years of optimization by the other operating
> systems have probably caught up.

beos was never faster. it just felt faster because of its responsive gui.


let's see what haiku achieves...

simon


Maxim S. Shatskih

May 15, 2006, 11:55:54 PM

Windows architecture is good, but overdesigned a lot.

Some things - like interprocess communication - sometimes have 2 or 3
implementations, from the kernel's LPC ports (used by some main services like
LSA and as the local transport for RPC), to the windowing subsystem's WM_COPYDATA
messages, and even up to specialized drivers, like the modern http.sys which is
IIRC a transport between inetinfo.exe and the worker processes.
Some things are done in unnecessarily complex ways.
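
To illustrate the windowing-subsystem path, something roughly like this
(untested, from memory; the helper name and window title are just made up)
sends a buffer to another process via WM_COPYDATA:

  #include <windows.h>
  #include <cstring>

  // Send a text buffer to another process through the window manager.
  bool send_text_to(const wchar_t *window_title, const char *text)
  {
      HWND target = FindWindowW(nullptr, window_title);
      if (!target)
          return false;

      COPYDATASTRUCT cds = {};
      cds.dwData = 1;                                  // app-defined tag
      cds.cbData = static_cast<DWORD>(strlen(text) + 1);
      cds.lpData = const_cast<char *>(text);

      // SendMessage blocks until the receiver's WM_COPYDATA handler returns;
      // the system copies the buffer into the receiving process for us.
      return SendMessageW(target, WM_COPYDATA, 0,
                          reinterpret_cast<LPARAM>(&cds)) != 0;
  }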

> nt doesn't use drive letters.

Starting with w2k, you can open files from any app (from fopen() and such)
using the \\?\Volume{guid}\path\file.ext syntax. Reparses on NTFS reparse points
(similar to UNIX symlinks) work this way.

The drive letters are not mandatory, just an additional old-style synonym for
the volume GUIDs. The MountMgr.sys driver in the kernel manages this; the
table is stored in HKLM\SYSTEM\MountedDevices.
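
Off the top of my head (untested, and the file path below is only for
illustration), resolving a drive letter to its volume GUID name and opening a
file through it looks roughly like this:

  #include <windows.h>
  #include <cstdio>

  int main()
  {
      // Ask the mount manager for the volume GUID name behind C:
      wchar_t volume[MAX_PATH];   // receives a \\?\Volume{...} path with a trailing slash
      if (!GetVolumeNameForVolumeMountPointW(L"C:\\", volume, MAX_PATH)) {
          fwprintf(stderr, L"lookup failed: %lu\n", GetLastError());
          return 1;
      }
      wprintf(L"C: is %ls\n", volume);

      // Open a file by volume GUID path -- no drive letter involved.
      wchar_t path[MAX_PATH * 2];
      swprintf(path, MAX_PATH * 2, L"%lsWindows\\win.ini", volume);
      HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                             OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
      if (h != INVALID_HANDLE_VALUE)
          CloseHandle(h);
      return 0;
  }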

> beos was never faster. it just felt faster because of its responsive gui.

Windows is fast. In the Pentium-166 era, Windows (NT4) was faster than Linux in all
GUI tasks.

The only bad things in Windows performance are: a) NTFS is slow, especially when
creating/copying/moving lots of tiny files; b) modern Windows starts with a load
of all kinds of services running, while the usual UNIX system has a much shorter
list of installed stuff. NT4 was not this bad.

On the other hand - X11 is slow, and is maybe the worst (architecturally) thing
in the UNIX world. Its bad sides contributed a lot to a lack of wide adoption
of UNIX on desktops.

Microsoft's remote desktop (which is architecturally different - no notion of
"window" on the display side, so, most windowing events except the user input
itself are executed within the server box) seems to be much faster.

Arto Bendiken

May 16, 2006, 4:52:19 AM

I'm surprised BeFS hasn't been mentioned, as that was one of the most
important advances in BeOS compared to other contemporary desktop
systems, and proved to be just a decade ahead of its time:

http://en.wikipedia.org/wiki/Be_File_System
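
For anyone who never used it, the attribute API looked roughly like this
(written from memory, so treat the header names, the mail file path and the
attribute name as illustrative only):

  #include <fs_attr.h>          // BeOS attribute calls
  #include <TypeConstants.h>    // B_STRING_TYPE and friends
  #include <fcntl.h>
  #include <unistd.h>
  #include <string.h>

  int main()
  {
      int fd = open("/boot/home/mail/0001", O_RDWR);   // hypothetical mail file
      if (fd < 0)
          return 1;

      // Attach a typed, named attribute to the file. With an index on
      // MAIL:subject, the file instantly shows up in live queries --
      // no separate database needed.
      const char *subject = "Beyond the hype - how good was BeOS?";
      fs_write_attr(fd, "MAIL:subject", B_STRING_TYPE, 0,
                    subject, strlen(subject) + 1);

      close(fd);
      return 0;
  }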

Simon Felix wrote:
> > BeOS _was_ revolutionary in 1991. It hasn't changed since 2001. Five
> > years' worth of new technology...even if BeOS's architecture is
> > fundamentally faster, five years of optimization by the other operating
> > systems have probably caught up.
>
> beos was never faster. it just felt faster because of its responsive gui.

GUI responsiveness in BeOS was brilliant, and achieved by good design
decisions such as each application having at least two threads - one to
drive the interface, and the other to handle the actual processing
(main(), if you will).

This meant that even when an application was 100% busy, it still
wouldn't be "stuck" or "frozen": you could still move and resize the
application window and the contents would redraw instantly. This made a
huge difference in perceived responsiveness and usability.
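
From memory, even a minimal Be application shows the pattern -- BApplication
runs the main message loop in one thread, and every BWindow you Show() gets
its own thread for moving, resizing and redrawing. (Untested sketch; the
class names and the signature string are just examples.)

  #include <Application.h>
  #include <Window.h>
  #include <Rect.h>

  class BusyWindow : public BWindow {
  public:
      BusyWindow()
          : BWindow(BRect(100, 100, 400, 300), "Busy demo",
                    B_TITLED_WINDOW, 0) {}
      virtual bool QuitRequested()
      {
          be_app->PostMessage(B_QUIT_REQUESTED);
          return true;
      }
  };

  class DemoApp : public BApplication {
  public:
      DemoApp() : BApplication("application/x-vnd.example-busy") {}
      virtual void ReadyToRun()
      {
          (new BusyWindow())->Show();   // Show() spawns the window's own thread
          // Any heavy work done here ties up only the app thread; the window
          // thread keeps servicing move/resize/redraw messages on its own.
      }
  };

  int main()
  {
      DemoApp app;
      app.Run();
      return 0;
  }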

--
Arto Bendiken | arto.b...@gmail.com | http://bendiken.net/

roger

May 16, 2006, 11:41:09 AM

>The UNIX system has pretty much had it right, I think, from inception: Quick 'n dirty where it should be, and yet reliable.

Interesting... I program on Unix and OpenVMS and I can name at least
10 things (maybe more) that VMS does much better than Unix. VMS is
still being developed (we're using V8.2 now) and is still one of the
best operating systems ever built. Besides, there isn't really one
version of Unix, so something might work on Linux but fail on Solaris,
etc. Here is a simple list off the top of my head:

1. Signals. Signals are done correctly under VMS, and are idiotic
under Unix. Long ago, signals weren't even queued if you were in a
signal handler, and even today signals don't specify a specific
unique event. For instance when SIGALRM goes off, it doesn't tell
you specifically which timer went off, so you can't write a library
function that might trap sigalrm because it would catch everyone else's
alarms that might be using the library. All events in VMS (timers,
process deaths, IO, locking, etc.) not only identify a unique
specific event, but all waited actions in VMS allow you to do this
asynchronously. Difficult to explain unless you have programmed at
this level, but VMS did it much/much better than Unix.
2. Error Handling. VMS has a message facility that uniquely
identifies each and every error, warning, etc. that could occur
anywhere in the operating system. So instead of returning -1, or
zero, or some other value from a routine and then having to check
errno (which isn't unique enough), you just check the status value.
This status value can then be used to look up the error message and
print it out. All status values in the operating system are unique
even for each facility and you can create your own status values and
use the message facility for your own routines. Given any status
value, you can often find and fix the problem very quickly. Just
opening a file for example, you might get a "protection violation",
"file not found", "volume not found", "not enough quota",
"insufficient virtual memory", etc.. All error messages are stored
in message files for internationalization. Unix error handling
doesn't come close to this.
3. Exception/Condition Handling. Exception handling is built into the
VMS calling standard. This exception handling is used by C++
throw/catch, but in VMS it's much more flexible and used extensively
throughout the operating system. For example, in VMS condition handling, you
can trap a condition, fix-up the problem, and even re-try the
instruction or continue with the next instruction. An example of this
might be to fault the page into memory, fix-up the virtual address
translation buffer, and re-try the instruction. Or trap an alignment
fault, perform the operation, and then continue with the next
instruction in the stream. Condition Handling can be used for much
more than this but this will give you some ideas.
4. RMS - VMS contains a record management system that includes Unix
stream LF files, Unix type binary files, but also many other types of
files. For example, relative files, and indexed files with
multiple-keys and key types. Yes, you can do this in Unix with
packages like C-tree but having it built into the operating system
allows record level locking to work across the cluster. Try to do that
in Unix.
5. All facilities are callable and there is a calling standard for all
languages. For example, the sort utility is callable and has a record
interface, the message facility is callable, command line parsing is
callable, multiple languages are all callable to each other, so a cobol
routine can call a C routine, even in a sharable image (DLL) without a
problem, even if the compilers came from different companies, because
everyone uses the exact same VMS calling standard and it spells out
exceptions, parameter passing, stack dumps, etc. in great detail.
6. Command line parsing. Each program in Unix does its own command
line parsing, so you have parameters like "-r" in one application
and "-R" in another and there doesn't seem to be much of a
standard. Applications in VMS call a collection of routines (cli$) for
command language interpreter that tells your application what
parameters, qualifiers, etc. were passed on the command line. You can
even change the parameters for internationalization without changing
the application at all. I've written a ton of Unix applications and
each time I wish there were a better way of command line parsing on
unix. And yes, I know about getopt() and popt routines but they
don't even compare.
7. Networking. Networking is built into the file-name standard, so
there is no need to fire up FTP or NFS just to copy a file. In fact,
programs can open files across the network by just specifying the node
name and access string.
8. ASTs. No need to do threading when trying to do asynchronous
things. All waited actions on VMS allow an AST routine. Makes
programming multi-user networking applications much easier than using
threads. I know, I write Unix threaded programs all the time and run
them on VMS, but programs that use ASTs are much simpler to debug and
work with. Unix doesn't have ASTs; NT does.
9. Clustering done right. Cluster wide file systems, cluster wide
shared files, cluster wide logicals, cluster wide devices, etc... Unix
can't get there easily because of its brain-dead file systems and
stream files where you don't know what a record is to lock...
10. Modes. Not only do you have Kernel mode and User mode, but you
also have Supervisor and Exec mode. Have you ever wanted to do
something in Unix and not be in user-mode, but also not necessarily be
a part of the kernel in kernel mode... The shell normally runs in
supervisor and RMS normally runs in exec, so a program can't crash
through the environment table or open channel tables like it can in
unix.
11. Security. There is extensive security on VMS, right down to simple
things like all IO having a MAX length argument that prevents buffer
overwriting. gets() was a security hole just waiting to happen, and
unix was littered with such bugs because they didn't think about it
originally.

I'm sure I could come up with a few more examples off the top of my
head, and that's enough to make some unix people roll their eyes in
the back of their heads, because they think Unix is god because
that's all they really know well. I know both very well and VMS is
still one of the best operating systems ever made and was well ahead of
its time in many areas. Don't get me wrong, Unix is good for what
it does, and I applaud Ken Thompson and Dennis Ritchie and Doug
McIlroy, and then Stallman and Linus for hacking together something
that eventually became free and set a whole new industry in motion
because I program Unix/Linux just about every day. But VMS had its
roots in many other operating systems where experience was learned
before its development, including RT-11, RSTS, and TOPS; that gave VMS
a very good background in good operating system design from the very
start. VMS is still being used today and still being developed. In
fact the later versions run on Itaniums and support ELF images and
other industry standards; my hope is that it will get ported to
standard PC hardware someday. I still often do my unix development on
VMS, test it and debug it, and then host it on Linux or Solaris, but we
also run soft real-time very large and complex systems on VMS. I also
hope that FreeVMS on IA32 will be successful and eventually preserve a
free open source version of VMS for PC hardware for decades to come:
see http://www.systella.fr/~bertrand/FreeVMS/indexGB.html if you want
to get involved in an attempt to rewrite and preserve a great operating
system.

Maxim S. Shatskih

May 16, 2006, 3:49:41 PM

> 2. Error Handling. VMS has a message facility that uniquely
> identifies each and every error, warning, etc. that could occur
> anywhere in the operating system.

So are Windows' FormatMessage and UNIX's perror().

> 3. Exception/Condition Handling. Exception handling is built into the
> VMS calling standard.

So it is in Windows.

> 6. Command line parsing. Each program in Unix does its own command
> line parsing, so you have parameters like "-r" in one application
> and "-R" in another and there doesn't seem to be much of a
> standard. Applications in VMS call a collection of routines (cli$) for
> command language interpreter that tells your application what
> parameters, qualifiers, etc. were passed on the command line.

UNIX's "getopt" library?

> 7. Networking. Networking is built-in to the file-name standard

So it is in Windows.

> 8. ASTs. No need to do threading when trying to do asynchronous
> things.

No, both things are good :-)

> overwriting. gets() was a security hole just waiting to happen, and

MS prohibited calls like strcpy() in their code and wrote a safe string
library, where the length must be specified everywhere.
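
Roughly like this, if memory serves (the header is strsafe.h in the Platform
SDK): the safe variants force you to pass the destination size and report
truncation instead of overrunning the buffer.

  #include <windows.h>
  #include <strsafe.h>

  int main()
  {
      wchar_t dest[16];
      // Unlike wcscpy(), the destination capacity is explicit, and the call
      // reports truncation instead of silently writing past the buffer.
      HRESULT hr = StringCchCopyW(dest, sizeof(dest) / sizeof(dest[0]),
                                  L"this source string is far too long to fit");
      if (hr == STRSAFE_E_INSUFFICIENT_BUFFER) {
          // dest now holds a truncated but still null-terminated copy
      }
      return 0;
  }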

hjwron...@yahoo.co.uk

May 16, 2006, 8:57:46 PM

> I have been told BeOS was redesigned from the ground up, skipping all
> the legacy parts.

All of them? Really?

Suppose you wanted to read an email on your BeOS machine.
How would it be displayed to you?

> Now, looking back, how revolutionary is its
> architecture compared to what we have today? Linux is a unix clone, a
> system designed 30 years ago. OS X is built on top of BSD, a system as
> old as unix. Windows NT (NT4, 2000, XP) was designed in the early
> nineties which I assume makes it quite modern.

You seem to be trying to say something, but you haven't quite
made it clear. Are you saying that one type of system is better
than another for some reason?

> So, looking at BeOS today, is its architecture still efficient and fast?

Has its architecture changed at all since it was designed? Or is
it still just as efficient and fast as it was years ago?

hjw

hakr...@gmail.com

May 17, 2006, 12:13:03 PM

>> I have been told BeOS was redesigned from the ground up, skipping all
>> the legacy parts.

>All of them? Really?

>Suppose you wanted to read an email on your BeOS machine.
>How would it be displayed to you?

Reading mail is the job of a simple application. It can easily be
added on top. I guess they were talking about such things as the X Window
System, the filesystem, etc. But I know very little about OS design.

>You seem to be trying to say something, but you haven't quite
>made it clear. Are you saying that one type of system is better
>than another for some reason?

How well was the BeOS architecture adapted to deal with a single-user
system focused on media content creation, compared to more generic
systems? Did it have some major flaws which have become apparent as time
has passed and more advanced hardware and software has shown up?

Scott Wood

May 17, 2006, 11:47:07 PM

On 16 May 2006 08:41:09 -0700, roger <rogern...@msn.com> wrote:
> 1. Signals. Signals are done correctly under VMS, and are idiotic
> under Unix. Long ago, signals weren't even queued if you were in a
> signal handler, and even today signals don't specify a specific
> unique event. For instance when SIGALRM goes off, it doesn't tell
> you specifically which timer went off, so you can't write a library
> function that might trap sigalrm because it would catch everyone else's
> alarms that might be using the library.

The POSIX timer interface (timer_create, timer_settime, etc.) lets you
attach a sigevent value to each timer; that value is delivered to the
signal handler and can be used to identify the particular event (an
arbitrary integer or pointer can be associated with the timer). Still,
it's ridiculous that it took this long for such a thing to be supported.
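
Roughly like this, for the curious (an untested sketch; the label string and
handler name are mine, and older glibc needs -lrt for the timer calls):

  #include <csignal>
  #include <ctime>
  #include <cstring>
  #include <unistd.h>

  static void on_timer(int, siginfo_t *info, void *)
  {
      // si_value carries whatever we stored in sigev_value at timer_create()
      // time, so each expiration identifies exactly which timer fired.
      const char *label = static_cast<const char *>(info->si_value.sival_ptr);
      write(STDOUT_FILENO, label, strlen(label));   // async-signal-safe
  }

  int main()
  {
      struct sigaction sa = {};
      sa.sa_flags = SA_SIGINFO;             // deliver a siginfo_t to the handler
      sa.sa_sigaction = on_timer;
      sigaction(SIGRTMIN, &sa, nullptr);    // realtime signals are queued per event

      static const char label[] = "library timer fired\n";
      struct sigevent sev = {};
      sev.sigev_notify = SIGEV_SIGNAL;
      sev.sigev_signo = SIGRTMIN;
      sev.sigev_value.sival_ptr = const_cast<char *>(label);

      timer_t t;
      timer_create(CLOCK_REALTIME, &sev, &t);

      struct itimerspec its = {};
      its.it_value.tv_sec = 1;              // fire once, one second from now
      timer_settime(t, 0, &its, nullptr);

      pause();                              // wait for the signal to arrive
      return 0;
  }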

-Scott

KVP

May 18, 2006, 5:43:54 AM

[Maxim S. Shatskih wrote:]

> > system designed 30 years ago. OS X is built on top of BSD, a system as
> > old as unix.
> BSD is a kind of UNIX.
>
> OS X is re-branded NextStep. Its inner core is the Mach microkernel. Then - a BSD-
> compatible kernel around it. More-or-less BSD-compatible command line, directory
> layout and config files.
>
> Then the 2D graphics engine based on Display PostScript, and the UI library
> based on the Objective C language. Also, the Delphi-style (but predating Delphi
> by a lot) development tool.

I would add that its new filesystem features come from beos. Apple apparently
rehired its previously laid-off staff who had left to create beos. It
contains everything that the new winfs still promises to deliver
(a simple tag-based db integrated into the filesystem).

> >Windows NT (NT4, 2000, XP) was designed in the early
> > nineties
>
> Designed late 80ies with strong VMS influence. Developed early nineties.

It got redesigned several times: first the gui was separate, then it got
moved into the kernel for performance reasons. Now it's being moved
out again. It would be much better if the kernel team could move
all non-hardware-attached components to user mode. I would also
add that ms plans to build into the next release of winnt some
technologies originally developed by sgi for irix.

Viktor

Maxim S. Shatskih

May 18, 2006, 9:05:13 AM

> > Designed late 80ies with strong VMS influence. Developed early nineties.
>
> It got redesigned several times: first the gui was separate, then it got
> moved into the kernel for performance reasons.

No really major redesigns.

>Now it's being moved out again.

Now they are dropping the 2D hardware support, so, each window of each app will
really have the bitmap underlying it. Then these bitmaps are blitted to the
hardware using the 3D texture engine. So, 3D stuff is still in the kernel.

>It would be much better if the kernel team could move
> all non hardware attached components to user mode.

That is ongoing. For instance, the sound mixer and software MIDI are being
moved out to a user-mode service.

Josh Rambo

May 18, 2006, 10:17:25 PM

Maxim S. Shatskih wrote:
>> 2. Error Handling. VMS has a message facility that uniquely
>> identifies each and every error, warning, etc. that could occur
>> anywhere in the operating system.
>
> So is Windows FormatMessage and UNIX perror().

FormatMessage is a huge fucking mess. perror() doesn't work the way he
is talking about either. It just references errno, which does not have a
unique code for everything that can go wrong.
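
For comparison, this is roughly what it takes (from memory, and the helper
function name is mine) to get the text for GetLastError() on Windows, versus
a one-line perror("open") on UNIX:

  #include <windows.h>

  void print_last_error()
  {
      wchar_t *msg = nullptr;
      FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER |
                     FORMAT_MESSAGE_FROM_SYSTEM |
                     FORMAT_MESSAGE_IGNORE_INSERTS,
                     nullptr,                           // no message module
                     GetLastError(),                    // error code to look up
                     0,                                 // language id (0 = default search)
                     reinterpret_cast<wchar_t *>(&msg), // buffer allocated for us
                     0, nullptr);
      if (msg) {
          OutputDebugStringW(msg);
          LocalFree(msg);
      }
  }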

>> 6. Command line parsing. Each program in Unix does its own command
>> line parsing, so you have parameters like "-r" in one application
>> and "-R" in another and there doesn't seem to be much of a
>> standard. Applications in VMS call a collection of routines (cli$) for
>> command language interpreter that tells your application what
>> parameters, qualifiers, etc. were passed on the command line.
>
> UNIX's "getopt" library?
>

Do you have some disability that prevents you from reading things all
the way through? Allow me to quote his original message:

> 6. Command line parsing. Each program in Unix does its own command
> line parsing, so you have parameters like "-r" in one application
> and "-R" in another and there doesn't seem to be much of a
> standard. Applications in VMS call a collection of routines (cli$) for
> command language interpreter that tells your application what

Jonathan de Boyne Pollard

Jun 16, 2006, 11:59:03 PM

MBT> BeOS is still great compared to the NT line, IMHO, since
MBT> not only is it a graphical system natively, but it has a CLI
MBT> that has the power and ease of the UNIX command line.

This is going to be interesting.

Please explain how you think that Windows NT is not "a graphical system
natively", especially in light of what you later say in the very same
message about Windows NT having "graphics primitives in the operating
system kernel".

Michael B. Trausch

Jun 20, 2006, 11:18:26 AM

Jonathan de Boyne Pollard wrote in
<c1.01.31LBMD$5...@J.de.Boyne.Pollard.localhost> on Fri, June 16 2006 23:59:

I do believe that I was thinking something other than what I was typing at
the time, lol.

Windows NT was not natively designed to handle graphics-intensive tasks, at
least at first, and Be was quite a bit more efficient in
handling/addressing graphics, at least in my short experience with it.

Nowadays, Windows NT seems to handle stuff alright, so long as you aren't
running with the VESA driver (as is the default for unsupported cards with
XP, as well as the currently available beta of Vista).

In any case, I believe that I was thinking about something else at the time,
because I went back to read the post over again and it would seem that I
did contradict myself in that post...

- Mike

--
Registered Linux User #417338, machine #325045.

The three Rs of Microsoft support: Retry, Reboot, Reinstall.
