GGI: yes, yes and yes

Martynas Kunigelis

Jan 26, 1997

I don't quite understand all the objections. Most of them are in the form:
why use GGI, if my super-duper-mega-$$$ X server can do it? Well, yeah..
Why go from Kaunas to New York for free on a jet-plane if I can take a
slow ferry for a thousand bucks or so. Let me tell you my simple
motivations for being POSITIVE about GGI:

[1] Gee, I want to do some simple graphics.. Why should I use complex X
for a simple graphics demo? (And there are NO books on X where I live)
SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
Linux, but it leaves the console and the keyboard in a 'dead' state,
so I can not even reboot gracefully.

[2] What about coherence between some programs? E.g. I heard that some
commercial X servers can only return to 80x25 mode. Hey, what if I
use SVGATextMode (which I _do_ use and like a lot)?? What about
dosemu, on the other hand? It assumes 80x25 and f**ks up programs
on other virtual consoles. Don't you think we need a civilized
(i.e. kernel-level) way to sort this out?

Please, if you don't like the concept of GGI, just DON'T USE IT and DON'T
BOTHER READING ABOUT IT. Why discourage brave and good people from doing a
good (in their opinion and mine) job? You won't be hurt if they succeed,
believe me. And if GGI gets into the kernel, I believe you'll get your
satisfaction pressing 'n' to that funky prompt:
Include GGI support? [y/n/M]:

Regards,
Martynas

P.S.: Keep up the good work, GGI guys!

Dale Pontius

Jan 26, 1997

In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> writes:
>
> Please, if you don't like the concept of GGI, just DON'T USE IT and DON'T
> BOTHER READING ABOUT IT. Why discourage brave and good people from doing a
> good (in their opinion and mine) job? You won't be hurt if they succeed,
> believe me. And if GGI gets into the kernel, I believe you'll get your
> satisfaction pressing 'n' to that funky prompt:
> Include GGI support? [y/n/M]:
>
I, for one, press Y!! (Though [y/n/M] is a good default)

Dale Pontius
(NOT speaking for IBM)

Joseph Foley

Jan 26, 1997


Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote in article
<Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>...


>
> I don't quite understand all the objections. Most of them are in the form:
> why use GGI, if my super-duper-mega-$$$ X server can do it? Well, yeah..
> Why go from Kaunas to New York for free on a jet-plane if I can take a
> slow ferry for a thousand bucks or so. Let me tell you my simple
> motivations for being POSITIVE about GGI:
>
> [1] Gee, I want to do some simple graphics.. Why should I use complex X
> for a simple graphics demo? (And there are NO books on X where I live)
> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
> Linux, but it leaves the console and the keyboard in a 'dead' state,
> so I can not even reboot gracefully.

This is one of my major gripes. Most people using Linux don't have a net
connection or a serial console to reboot from.



> [2] What about coherence between some programs? E.g. I heard that some
> commercial X servers can only return to 80x25 mode. Hey, what if I
> use SVGATextMode (which I _do_ use and like a lot)?? What about
> dosemu, on the other hand? It assumes 80x25 and f**ks up programs
> on other virtual consoles. Don't you think we need a civilized
> (i.e. kernel-level) way to sort this out?

This is the best reason of all - coherence. There are simply too many
different programs doing their own thing with the graphics hardware.

> Please, if you don't like the concept of GGI, just DON'T USE IT and DON'T
> BOTHER READING ABOUT IT. Why discourage brave and good people from doing a
> good (in their opinion and mine) job? You won't be hurt if they succeed,
> believe me. And if GGI gets into the kernel, I believe you'll get your
> satisfaction pressing 'n' to that funky prompt:
> Include GGI support? [y/n/M]:

From my perspective, this could just be phrased: "Increase system
stability? [y/n/M]"
The ONLY time Linux becomes inaccessible to me is when the video subsystem
goes haywire.

> P.S.: Keep up the good work, GGI guys!

Yes, please do keep up the good work, I wish you guys luck.

Joseph

Jean-Baptiste Nivoit

Jan 26, 1997

GGI is definitely the right thing to do.
I don't understand why Linus does not like it.

--

jb.
-------------------------------------------
"Emacs is my OS, Linux is my device driver"

Victor Yodaiken

Jan 27, 1997

In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
>
>I don't quite understand all the objections. Most of them are in the form:

The objections are simple: nothing should be added to the kernel
if it can be done as well in user programs. What's the problem
with developing GGI using a server? When enough programs use it
to make GGI a standard, then revisit the argument.

>[1] Gee, I want to do some simple graphics.. Why should I use complex X
> for a simple graphics demo? (And there are NO books on X where I live)
> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
> Linux, but it leaves the console and the keyboard in a 'dead' state,
> so I can not even reboot gracefully.

So you need a better SVGAlib. The kernel is not magical. Weird hardware
that hangs SVGAlib will hang the same code in kernel.

From what I see, some people seem to believe that a standard can
be imposed on graphics programmers and manufacturers by putting
the standard into the Linux kernel. There is much reason to
doubt this theory.


Roger Espel Llima

Jan 27, 1997

In article <01bc0bc7$527db9e0$391d...@h57.albany.edu>,
Joseph Foley <jf8...@csc.albany.edu> wrote:
| Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote in article
| <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>...

| > Please, if you don't like the concept of GGI, just DON'T USE IT and DON'T
| > BOTHER READING ABOUT IT. Why discourage brave and good people from doing a
| > good (in their opinion and mine) job? You won't be hurt if they succeed,
| > believe me. And if GGI gets into the kernel, I believe you'll get your
| > satisfaction pressing 'n' to that funky prompt:
| > Include GGI support? [y/n/M]:
|
| From my perspective, this could just be phrased: "Increase system
| stability? [y/n/M]"
| The ONLY time Linux becomes inaccessible to me is when the video subsystem
| goes haywire.
|
| > P.S.: Keep up the good work, GGI guys!
|
| Yes, please do keep up the good work, I wish you guys luck.

Right, I can't agree more. GGI is a good thing, and it will always be
an option anyway.

Keep up the good work, GGI guys ... and don't be silly with the
licensing, *that* won't help any.

Roger
--
e-mail: roger.es...@ens.fr
WWW page & PGP key: http://www.eleves.ens.fr:8080/home/espel/index.html

Peter W Boettcher

Jan 27, 1997

Excerpts from netnews.comp.os.linux.development.system: 27-Jan-97 Re:
GGI: yes, yes and yes by Victor Yoda...@chelm.cs
> In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
> Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
> >
> >I don't quite understand all the objections. Most of them are in the form:
>
> The objections are simple: nothing should be added to the kernel
> if it can be done as well in user programs. What's the problem
> with developing GGI using a server? When enough programs use it
> to make GGI a standard, then revisit the argument.

Almost everything could be done from user space. Have a tcpserver
(a different one for each network card), and make it suid root,
and everything that wanted network access could open a socket
to the tcpserver, which would write raw bytes to your network card...

I've always thought that the kernel should handle all (or as much as
possible) hardware access. After all, the sound driver is in the
kernel, and I don't hear anyone complaining about that. Anyone
with a different philosophy could answer N to the GGI kernel
question.

Pete Boettcher


Hartmut Niemann

Jan 27, 1997

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

>In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
>Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
>>
>>I don't quite understand all the objections. Most of them are in the form:

>The objections are simple: nothing should be added to the kernel
>if it can be done as well in user programs. What's the problem
>with developing GGI using a server? When enough programs use it
>to make GGI a standard, then revisit the argument.

One problem is that you need a server for each and every graphics board you
want to support, and if you want to write architecture-independent code,
you had better buy a PC with a couple of popular boards, an Amiga, an Atari,
an Alpha-based computer and maybe a Mac.
An abstraction layer in between would be nice.

A second problem is that there is currently no good solution for
multiple console input devices. This is *missing*.
(And AFAIK GGI offers a solution for this - correct me if I am wrong).
Linux is a multi user system, it supports multiple virtual consoles, but
can't handle two keyboards. With USB coming, two keyboards, two mice
and two monitors on one Linux box are not too far away, and I would use it
if I could. This *is* true multi user support.

If an X server for two users needed to include its own drivers for
everything, we would be **** near to the broken DOS/WIN3.11 design, where
a weak (but not too buggy) DOS makes room for a complex and buggy GUI to
do with the HW what it wants.

When I write a graphics application, I would prefer seeing it fault on
a bad pointer instead of crashing everything, because it is root-suid.

>>[1] Gee, I want to do some simple graphics.. Why should I use complex X
>> for a simple graphics demo? (And there are NO books on X where I live)
>> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
>> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
>> Linux, but it leaves the console and the keyboard in a 'dead' state,
>> so I can not even reboot gracefully.

>So you need a better SVGAlib. The kernel is not magical. Weird hardware
>that hangs SVGAlib will hang the same code in kernel.

What about putting the network code into a userspace module?
Ridiculous, isn't it?
(If X didn't need it, I would have no network support in the kernel...)
What about putting the SVGA stuff into a userspace module?
That is what X is doing...

>From what I see, some people seem to believe that a standard can
>be imposed on graphics programmers and manufacturers by putting
>the standard into the Linux kernel. There is much reason to
>doubt this theory.

It works for M$;-)

To me it seems that Linux is an OS designed by system operators for
server and router applications. It supports (probably) more network
adapters and SCSI controllers 'out of the box', without a driver disk,
than other major OSs. But it supports basic VGA, and that's it.
And only one keyboard, one mouse, one screen.
Let's change that!

All flames welcome.

Hartmut.

nie...@cip.e-technik.uni-erlangen.de


Jost Boekemeier

Jan 27, 1997

Hartmut Niemann (nie...@cip.e-technik.uni-erlangen.de) wrote:
: >if it can be done as well in user programs. What's the problem
: >with developing GGI using a server? When enough programs use it
: One problem is that you need a server for each and every graphics board you

THIS IS NOT TRUE.

: If an X server for two users needed to include its own drivers for
: everything, we would be **** near to the broken DOS/WIN3.11 design, where
: a weak (but not too buggy) DOS makes room for a complex and buggy GUI to
: do with the HW what it wants.

Hartmut, a kernel driver that needs a daemon(!) in user space is not only
broken but brain damaged. GGI can and has to be implemented in user
space IMHO.


Jost

Hartmut Niemann

Jan 27, 1997

jost...@pfirsich.zrz.TU-Berlin.DE (Jost Boekemeier) writes:

>Hartmut Niemann (nie...@cip.e-technik.uni-erlangen.de) wrote:
>: >if it can be done as well in user programs. What's the problem
>: >with developing GGI using a server? When enough programs use it
>: One problem is that you need a server for each and every graphics board you

>THIS IS NOT TRUE.
WHY NOT? (I can use uppercase too, but it doesn't make the argument
more convincing :-)
If I want a program (think of GEM, GeoWorks or another SimCity clone) to run
on S3's ViRGE *and* on a Matrox Millennium with a resolution higher than
640x480x256 (i.e. standard VGA), what can I do, other than provide two
different versions of some (standard or custom-made) library (or server)?

>: If an X server for two users needed to include its own drivers for
>: everything, we would be **** near to the broken DOS/WIN3.11 design, where
>: a weak (but not too buggy) DOS makes room for a complex and buggy GUI to
>: do with the HW what it wants.

>Hartmut, a kernel driver that needs a daemon(!) in user space is not only
>broken but brain damaged. GGI can and has to be implemented in user
>space IMHO.

Maybe. Don't argue with *me* about implementation details. There are
some daemons working closely with the kernel, if memory serves me, and I can't
see what is wrong with that right away, but maybe you are right and the
GGI design should be changed here.
But it's worse to have every game access my hardware (and *all* my hardware
registers) directly, and not to be able to use two mice for two
monitors.
Think about porting GEM to Linux, directly, without X, because that would
save lots of system overhead and memory.
Would you restrict it to VGA, would you restrict it
to just the graphics board you happen to have, or wouldn't it be cool
to have any pointing device and any graphics board supported because
there's a good clean interface to the hardware?
What about IP bridging being available only for the 3Com 509 because the guy
who wrote that part happened to have two of these?

I don't particularly mind whether GGI is kernel mode or user mode,
but some way to ensure that Alt-F2 switches to console 2 in text mode
*no matter what happened on the current console* would be *really* nice.
Wouldn't it?


Hartmut.


Victor Yodaiken

Jan 28, 1997

In article <Ymv4OrO00...@andrew.cmu.edu>,
Peter W Boettcher <pw...@andrew.cmu.edu> wrote:
>Almost everything could be done from user space. Have a tcpserver
>(a different one for each network card), and make it suid root,
>and everything that wanted network access could open a socket
>to the tcpserver, which would write raw bytes to your network card...

Sure. Perhaps even a good idea. But tcp is in kernel by default and
you need to demonstrate an actual advantage in removing it. Since
we have a working and quite usable windowing system depending on
a server, you need to demonstrate something better. And
"demonstrate" is not the same as "it's obvious".

>I've always thought that the kernel should handle all (or as much as
>possible) hardware access. After all, the sound driver is in the

Why? I think the kernel should provide only those services that it
needs to provide. There is a difference between simplicity in
engineering and imposing a simplistic scheme. Inconsistency
is no sin in a working kernel.

>kernel, and I don't hear anyone complaining about that. Anyone
>with a different philosophy could answer N to the GGI kernel
>question.

And anyone with your philosophy is welcome to build and distribute
a kernel that has graphics built in and that exhibits wonderful behavior.
So far, GGI is a nice sounding idea with no backup. Make it work
and then you will be able to watch as other people hurry to
incorporate it. In the meantime, since your work can progress without
any official stamp of approval, I can't see any grounds for your discontent.

Victor Yodaiken

Jan 28, 1997

In article <5chqdv$6...@rznews.rrze.uni-erlangen.de>,
Hartmut Niemann <nie...@cip.e-technik.uni-erlangen.de> wrote:
>One problem is that you need a server for each and every graphics board you
>want to support, and if you want to write architecture-independent code,

So how does XSVGA work?

>An abstraction layer in between would be nice.

Why can't the server be that layer? And what advantage is there in
putting the layer in the kernel?

>A second problem is that there is currently no good solution for
>multiple console input devices. This is *missing*.
>(And AFAIK GGI offers a solution for this - correct me if I am wrong).
>Linux is a multi user system, it supports multiple virtual consoles, but
>can't handle two keyboards.

Forgive me, but this is not the most compelling argument I've heard
all week.

>If an X server for two users needed to include its own drivers for
>everything, we would be **** near to the broken DOS/WIN3.11 design, where
>a weak (but not too buggy) DOS makes room for a complex and buggy GUI to
>do with the HW what it wants.

>When I write a graphics application, I would prefer seeing it fault on
>a bad pointer instead of crashing everything, because it is root-suid.

So, the application should not be root suid, it should simply have
access to some video buffers --- perhaps not even the hardware ones.
You could even set things up so that an RT-Linux (I knew there
would be an opportunity for this somewhere in this discussion)
task could update hard buffers from virtual buffers every
couple of hundred microseconds.

>>So you need a better SVGAlib. The kernel is not magical. Weird hardware
>>that hangs SVGAlib will hang the same code in kernel.
>What about putting the network code into a userspace module?

Good idea. I think it could be done. There is too much network
crap in the kernel as it is.

>>From what I see, some people seem to believe that a standard can
>>be imposed on graphics programmers and manufacturers by putting
>>the standard into the Linux kernel. There is much reason to
>>doubt this theory.
>It works for M$;-)

Yes. But there are some subtle differences between Linux
and Microsoft.


Steffen Seeger

Jan 28, 1997

jost...@pfirsich.zrz.TU-Berlin.DE (Jost Boekemeier) writes:

>Hartmut Niemann (nie...@cip.e-technik.uni-erlangen.de) wrote:
>: >if it can be done as well in user programs. What's the problem
>: >with developing GGI using a server? When enough programs use it

>: One problem is that you need a server for each and every graphics board you

>THIS IS NOT TRUE.

You have a better solution? Please, let me know.

>: If an X server for two users needed to include its own drivers for
>: everything, we would be **** near to the broken DOS/WIN3.11 design, where
>: a weak (but not too buggy) DOS makes room for a complex and buggy GUI to
>: do with the HW what it wants.

>Hartmut, a kernel driver that needs a daemon(!) in user space is not only
>broken but brain damaged. GGI can and has to be implemented in user
>space IMHO.

Jost, calm down if the userspace daemon is the only thing that worries
you. We are willing to accept any better solution for having swappable
memory that can be allocated and accessed in the kernel. And this is the
only SUID root program, helping all the others to be non-SUID root. (In the
current design, I have to admit.)



Steffen Seeger

Jan 28, 1997

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

>In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
>Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
>>
>>I don't quite understand all the objections. Most of them are in the form:

>The objections are simple: nothing should be added to the kernel
>if it can be done as well in user programs. What's the problem
>with developing GGI using a server? When enough programs use it
>to make GGI a standard, then revisit the argument.

To give you only one reason for being in the kernel: you can use
interrupts there, and modern graphics cards need these to reach good
performance. Otherwise you will end up polling the hardware in a
busy loop (not a good idea in a multitasking environment).
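
As a rough sketch of what that buys (using today's request_irq() kernel
API rather than the 2.x-era one; the IRQ number and all names here are
made up for illustration), a driver can sleep until the card raises an
interrupt instead of burning CPU in a poll loop:

    /* Hypothetical sketch only: the handler runs on the card's interrupt,
     * so clients can sleep instead of polling a status register. */
    #include <linux/init.h>
    #include <linux/interrupt.h>
    #include <linux/module.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(engine_wait);
    static int engine_idle;

    /* Called in interrupt context when the accelerator signals "done". */
    static irqreturn_t gfx_irq(int irq, void *dev_id)
    {
            engine_idle = 1;
            wake_up_interruptible(&engine_wait); /* unblock sleeping clients */
            return IRQ_HANDLED;
    }

    static int __init gfx_init(void)
    {
            /* 11 is a made-up IRQ line for the graphics card. */
            return request_irq(11, gfx_irq, IRQF_SHARED, "gfx-sketch",
                               &engine_idle);
    }

    static void __exit gfx_exit(void)
    {
            free_irq(11, &engine_idle);
    }

    module_init(gfx_init);
    module_exit(gfx_exit);
    MODULE_LICENSE("GPL");

The driver's read() or ioctl() path would then block with
wait_event_interruptible(engine_wait, engine_idle), so a waiting client
consumes no CPU at all.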

>>[1] Gee, I want to do some simple graphics.. Why should I use complex X
>> for a simple graphics demo? (And there are NO books on X where I live)
>> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
>> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
>> Linux, but it leaves the console and the keyboard in a 'dead' state,
>> so I can not even reboot gracefully.

>So you need a better SVGAlib. The kernel is not magical. Weird hardware
>that hangs SVGAlib will hang the same code in kernel.

Right, but the GGI concept is what you call 'magical': it solves this
problem in the kernel, where the problem originates. And if you have such
'weird' (I would rather say unsupported) hardware, GGI would not allow you
to screw it up. It solves the problem for us once and for all, and for the
others if they like it.

>From what I see, some people seem to believe that a standard can
>be imposed on graphics programmers and manufacturers by putting
>the standard into the Linux kernel. There is much reason to
>doubt this theory.

So do I. De-facto standards are imposed by (a) considering them an
opportunity, (b) helping to enhance them, and (c) widespread acceptance.

I believe that GGI has the power to meet all three points.


Steffen Seeger

Albert D. Cahalan

Jan 28, 1997

jost...@pfirsich.zrz.TU-Berlin.DE (Jost Boekemeier) writes:
> Hartmut Niemann (nie...@cip.e-technik.uni-erlangen.de) wrote:

>> If an X server for two users needed to include its own drivers
>> for everything, we would be **** near to the broken DOS/WIN3.11
>> design, where a weak (but not too buggy) DOS makes room for a
>> complex and buggy GUI to do with the HW what it wants.
>
> Hartmut, a kernel driver that needs a daemon(!) in user space is
> not only broken but brain damaged.

What about arpd and kerneld? What about all the kernel daemons,
such as kflushd, kswapd, /sbin/update, and /usr/sbin/klogd?
We need the hack because the kernel only supports virtual memory
for applications. If the kernel could use virtual memory, then
at least arpd and the GGI daemon could disappear.

> GGI can and has to be implemented in user space IMHO.

No way. The overhead would be too much, and that would make GGI
too slow to be useful. GGI aims to cover everything from plain
old PC video cards (VGA, and even mono text) to cards with DMA
transfers and IRQ operation on non-i386 hardware. Explain how one
can perform a DMA transfer from user space, even running as root.
Explain how one can service an IRQ from user space, even running
as root. Now explain how it can be done quickly (few context
switches and no IO permission bitmap) without root privilege!

XFree86 cannot take full advantage of cards that offer DMA bitmap
transfers. XFree86 has to waste CPU time polling because it cannot
service a video accelerator IRQ. SVGAlib and any user-space GGI
would suffer from the same problems.

--
Albert Cahalan
acahalan at cs.uml.edu (no junk mail please - I will hunt you down)

Albert D. Cahalan

Jan 28, 1997

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:
> Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
>>
>> I don't quite understand all the objections. Most of them are in the form:

> The objections are simple: nothing should be added to the kernel
> if it can be done as well in user programs. What's the problem
> with developing GGI using a server? When enough programs use it
> to make GGI a standard, then revisit the argument.

In user space, it is not possible to service an IRQ or move a bitmap
via DMA transfer. It also requires a worse context switch. (The X server
would have to go through the kernel to reach the graphics server!)
The user space server idea severely reduces the benefits of a unified
graphics system and is bad for performance.

>> [1] Gee, I want to do some simple graphics.. Why should I use complex X
>> for a simple graphics demo? (And there are NO books on X where I live)
>> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
>> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
>> Linux, but it leaves the console and the keyboard in a 'dead' state,
>> so I can not even reboot gracefully.
>
> So you need a better SVGAlib. The kernel is not magical. Weird
> hardware that hangs SVGAlib will hang the same code in kernel.

SVGAlib is very bad. When you get an SVGAlib game without source code,
you just have to trust that it can safely run as root. Users may
demand SVGAlib tools suid root, but they cannot be checked.

SVGAlib can disagree with SVGATextMode, XFree86, and DOSEMU about
the video card state. Common misconception: "you can reset a video
card to a known state". WRONG! Video cards are truly awful beasts
with hidden write-only registers.

> From what I see, some people seem to believe that a standard can
> be imposed on graphics programmers and manufacturers by putting
> the standard into the Linux kernel. There is much reason to
> doubt this theory.

GGI supports both X and SVGAlib. Other APIs can be supported.

Byron A Jeff

Jan 28, 1997

In article <5ck05r$d...@newshost.nmt.edu>,
Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
-In article <Ymv4OrO00...@andrew.cmu.edu>,
-Peter W Boettcher <pw...@andrew.cmu.edu> wrote:
->Almost everything could be done from user space. Have a tcpserver
->(a different one for each network card), and make it suid root,
->and everything that wanted network access could open a socket
->to the tcpserver, which would write raw bytes to your network card...
-
-Sure. Perhaps even a good idea. But tcp is in kernel by default and
-you need to demonstrate an actual advantage in removing it. Since
-we have a working and quite usable windowing system depending on
-a server, you need to demonstrate something better. And
-"demonstrate" is not the same as "it's obvious".

And the windowing system won't change, from what I understand. The real
problem is that since the kernel has no idea about what's going on in
the video hardware, it's difficult to reset it properly. In addition
it requires SUID access for otherwise ordinary programs (due to I/O
port access). Do you realize that an ordinary user cannot write an SVGAlib
program? That's a clear sign that some part of video access should be
in the kernel.

-
->I've always thought that the kernel should handle all (or as much as
->possible) hardware access. After all, the sound driver is in the
-
-Why? I think the kernel should provide only those services that it
-needs to provide. There is a difference between simplicity in
-engineering and imposing a simplistic scheme. Inconsistency
-is no sin in a working kernel.

I've just been following GGI from what I've been reading. The kernel
part of GGI only provides essential video services; all the rest is
still done at the user level via servers and libraries....

-
->kernel, and I don't hear anyone complaining about that. Anyone
->with a different philosophy could answer N to the GGI kernel
->question.
-
-And anyone with your philosophy is welcome to build and distribute
-a kernel that has graphics built in and that exhibits wonderful behavior.
-So far, GGI is a nice sounding idea with no backup. Make it work
-and then you will be able to watch as other people hurry to
-incorporate it. In the meantime, since your work can progress without
-any official stamp of approval, I can't see any grounds for your discontent.

GGI can surely proceed. Just release a module or kernel patch and let
the results prove themselves.

BAJ
--
Another random extraction from the mental bit stream of...
Byron A. Jeff - PhD student operating in parallel - And Using Linux!
Georgia Tech, Atlanta GA 30332 Internet: by...@cc.gatech.edu

Erik Troan

Jan 28, 1997

On 27 Jan 1997 08:53:19 GMT, Hartmut Niemann <nie...@cip.e-technik.uni-erlangen.de> wrote:
>One problem is that you need a server for each and every graphics board you
>want to support, and if you want to write architecture-independent code,

Boy, someone should tell both X Inside and Metrolink that their current
servers don't work. They dynamically load the drivers they need.

>A second problem is that there is currently no good solution for
>multiple console input devices. This is *missing*.

This is true, but adding multiple console support is not the same as putting
a generic bitblt device into the kernel.

None of this means GGI is a bad idea, but your arguments are pretty poor.

>What about putting the network code into a userspace module?

The only thing that makes this questionable is the additional context
switches that are required to send a packet (at least one,
client -> kernel -> net stack) and the poor latency you get from placing
the driver interrupt handlers in user space. Operating systems have been
designed with user space TCP/IP stacks and pay a heavy performance
penalty as a result.

Graphics applications *save* context switches by leaving the rendering engine
in user space (client -> kernel -> X server instead of client -> kernel ->
X server -> kernel).

Erik

-------------------------------------------------------------------------------
| I told you I'm not very bright -- Sugar in "Some Like It Hot" |
| "RPM is the greatest thing since swap-space" -- Bryan C. Andregg |
| Erik Troan = e...@redhat.com = e...@sunsite.unc.edu |

Jon M. Taylor

Jan 28, 1997

In article <5chcah$n...@newshost.nmt.edu>,
Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
>In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
>Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:
>>
>>I don't quite understand all the objections. Most of them are in the form:
>
>The objections are simple: nothing should be added to the kernel
>if it can be done as well in user programs.

Since this is far from true....

>What's the problem
>with developing GGI using a server? When enough programs use it
>to make GGI a standard, then revisit the argument.

This is precisely what we are trying to get away from! No one
will use the GGI if it comes with the same suid-root-related hassles that
SVGAlib/XFree86/SVGATextMode have now.

>>[1] Gee, I want to do some simple graphics.. Why should I use complex X
>> for a simple graphics demo? (And there are NO books on X where I live)
>> SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my system as it
>> doesn't seem to like my S3-Trio64V+. Well, OK, maybe it doesn't hang
>> Linux, but it leaves the console and the keyboard in a 'dead' state,
>> so I can not even reboot gracefully.
>
>So you need a better SVGAlib. The kernel is not magical. Weird hardware
>that hangs SVGAlib will hang the same code in kernel.

It isn't hanging per se, but locking up the console and keyboard
and mouse. It probably isn't the video driver code, because when that
stuff locks up it usually hard-locks the machine. Most SVGAlib/X crashes
are console/input lockups, which might as well be crashes. THAT is what
will not happen with GGI.

>From what I see, some people seem to believe that a standard can
>be imposed on graphics programmers and manufacturers by putting
>the standard into the Linux kernel. There is much reason to
>doubt this theory.

The only "standard" the GGI will hold people to is what the
ioctls are called and what the /dev file is called. The GGI gives
userspace code a raw mmap()ed framebuffer and the ioctls.
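
To make that concrete, a client could look roughly like this (the device
name, resolution and depth are invented for illustration, not the actual
GGI interface):

    /* Hypothetical userspace client: open a kernel-managed graphics
     * device and plot one pixel. /dev/graphics and the 640x480, 8-bit
     * mode are made up. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/graphics", O_RDWR); /* plain device node, no suid */
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 640 * 480;                 /* assume 8 bits per pixel */
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        fb[240 * 640 + 320] = 15;               /* pixel at (320,240) */

        munmap(fb, len);
        close(fd);
        return 0;
    }

A stray pointer in such a program faults the program, not the machine; the
mode-setting ioctls stay in the kernel.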

Jon

Dimitri Maziuk

Jan 29, 1997

Victor Yodaiken wrote:
>
> In article <5chqdv$6...@rznews.rrze.uni-erlangen.de>,
> Hartmut Niemann <nie...@cip.e-technik.uni-erlangen.de> wrote:
> >One problem is that you need a server for each and every graphics board you
> >want to support, and if you want to write architecture-independent code,
>
> So how does XSVGA work?

I suspect it has code for each supported chipset in there. Otherwise
you wouldn't need to tell it "Chipset clgd5424" in XF86Config.

> >An abstraction layer in between would be nice.
>
> Why can't the server be that layer? And what advantage is there in
> putting the layer in the kernel?

Providing that layer is exactly what an OS is there for. Anyway, the
real argument is that you can't get away from privileged instructions
if you want to access hardware. We've been running stuff suid root
to get there; that doesn't mean it's The Right Thing To Do(tm) -- name
one problem with suid, quick. ;-)

...

> So, the application should not be root suid, it should simply have
> access to some video buffers --- perhaps not even the hardware ones.
> You could even set things up so that an RT-Linux (I knew there
> would be an opportunity for this somewhere in this discussion)
> task could update hard buffers from virtual buffers every
> couple of hundred microseconds.

Tsk, tsk. Polling is not TRTTD(tm) in a multitasking system. Besides,
you _are_ talking about a GGI-ish driver here -- you need a driver to
update h/w buffers. And didn't you write elsewhere about simpler
engineering solutions?

...



> Good idea. I think it could be done. There is too much network
> crap in the kernel as it is.

In the kernel sources, you mean?

Cheers
Dimitri
--
Spam 'bots: mailto emaziuk at curtin.edu.au (change " at " to "@" first)
-------------------------------------------------------------------
The views expressed above (hereafter, views) are mine and ownership
remains with me. They are provided "as is" without expressed or
implied warranty of any kind, including, but not limited to, the
implied warranties of the suitability of the views for any purpose.

Victor Yodaiken

Jan 29, 1997

In article <5ckkei$k...@otto.mb3.tu-chemnitz.de>,
Steffen Seeger <see...@physik.tu-chemnitz.de> wrote:
>To give you only one reason for being in the kernel: you can use
>interrupts there, and modern graphics cards need these to reach good
>performance. Otherwise you will end up polling the hardware in a
>busy loop (not a good idea in a multitasking environment).

Is this really true? Some details?

>>So you need a better SVGAlib. The kernel is not magical. Weird hardware
>>that hangs SVGAlib will hang the same code in kernel.
>

>Right, but the GGI concept is what you call 'magical': it solves this
>problem in the kernel, where the problem originates. And if you have such

Can you explain how this works?

>So do I. De-facto standards are imposed by (a) considering them an
>opportunity, (b) helping to enhance them, and (c) widespread acceptance.
>
>I believe that GGI has the power to meet all three points.

I don't know what this means.



Victor Yodaiken

Jan 29, 1997

In article <32EE34B8...@eris.dev.null>,
Dimitri Maziuk <di...@eris.dev.null> wrote:
>Victor Yodaiken wrote:
>>
>> In article <5chqdv$6...@rznews.rrze.uni-erlangen.de>,
>> Hartmut Niemann <nie...@cip.e-technik.uni-erlangen.de> wrote:
>> >One problem is that you need a server for each and every graphics board you
>> >want to support, and if you want to write architecture-independent code,
>>
>> So how does XSVGA work?
>
>I suspect it has code for each supported chipset in there. Otherwise
>you wouldn't need to tell it "Chipset clgd5424" in XF86Config.

What will they think of next? And without an official place in the
kernel too!

>> >An abstraction layer in between would be nice.
>>
>> Why can't the server be that layer? And what advantage is there in
>> putting the layer in the kernel?
>
>Providing that layer is exactly what an OS is there for. Anyway, the

This is a circular argument: The abstraction layer must go in the
kernel because the abstraction layer must go in the kernel.

>real argument is that you can't get away from privileged instructions
>if you want to access hardware. We've been running stuff suid root
>to get there; that doesn't mean it's The Right Thing To Do(tm) -- name
>one problem with suid, quick. ;-)

Suid is a wonderful idea. It has some security problems, but they
can be avoided with careful design.

>Tsk, tsk. Polling is not TRTTD(tm) in a multitasking system. Besides,

This "right thing to do" stuff gets quite tiresome. Is there a
big book of "right things to do" somewhere?

>you _are_ talking about a GGI-ish driver here -- you need a driver to
>update h/w buffers. And didn't you write elsewhere about simpler
>engineering solutions?

Still haven't seen any technical reasons why graphics needs to be
in the kernel -- except via loadable modules.

>> Good idea. I think it could be done. There is too much network
>> crap in the kernel as it is.
>
>In the kernel sources, you mean?

Yes. I'd like to see a clever method of doing efficient network
processing mostly in user space and in loadable modules or some other
way that keeps the kernel small.

Victor Yodaiken

Jan 29, 1997

In article <5clv2d$o...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:
>And the windowing system won't change, from what I understand. The real
>problem is that since the kernel has no idea about what's going on in
>the video hardware, it's difficult to reset it properly. In addition
>it requires SUID access for otherwise ordinary programs (due to I/O
>port access). Do you realize that an ordinary user cannot write an SVGAlib
>program? That's a clear sign that some part of video access should be
>in the kernel.

Come on. It's exceptionally easy to write a server that will
fork off children to run ordinary user programs with open files
and ioperms and memory windows in the right place.

>->I've always thought that the kernel should handle all (or as much as
>->possible) hardware access. After all, the sound driver is in the
>-
>-Why? I think the kernel should provide only those services that it
>-needs to provide. There is a difference between simplicity in
>-engineering and imposing a simplistic scheme. Inconsistency
>-is no sin in a working kernel.
>
>I've just been following GGI from what I've been reading. The kernel
>part of GGI only provides essential video services; all the rest is
>still done at the user level via servers and libraries....

So some user programs will still touch hardware. Then what's the
big deal?

>->kernel, and I don't hear anyone complaining about that. Anyone
>->with a different philosophy could answer N to the GGI kernel
>->question.
>-
>-And anyone with your philosophy is welcome to build and distribute
>-a kernel that has graphics built in and that exhibits wonderful behavior.
>-So far, GGI is a nice sounding idea with no backup. Make it work
>-and then you will be able to watch as other people hurry to
>-incorporate it. In the meantime, since your work can progress without
>-any official stamp of approval, I can't see any grounds for your discontent.
>
>GGI can surely proceed. Just release a module or kernel patch and let
>the results prove themselves.

Exactly.

Jari Soderholm

Jan 29, 1997

> > good (in their opinion and mine) job? You won't be hurt if they succeed,
> > believe me. And if GGI gets into the kernel, I believe you'll get your
> > satisfaction pressing 'n' to that funky prompt:
> > Include GGI support? [y/n/M]:
> >
> I, for one, press Y!! (Though [y/n/M] is a good default)

And I will be the second to press Y.

GGI is the only way to get some decent graphics for Linux; X is a slow
memory hog, and it is no fun for home users.

Jari

Ketil Z Malde

Jan 29, 1997

Jari Soderholm <jaso...@cdlinux01.ntc.nokia.com> writes:

> GGI is only way to get some decent graphics for Linux, X is slow
> memory hog, it is no fun for home users.

Perhaps you make the right decision, but for the wrong reason. X isn't
all that slow compared to other window system alternatives, though it
could arguably be quite a bit faster.

But it does provide one very important feature: network transparency.
Think about it: perhaps today many home users aren't connected in any
way, but in a short while everybody will be. And with nice, high-bandwidth
connections too.

If the installed base of applications is designed to run on local
displays only, why, one might as well use NT.

As for memory, I agree that effort should be undertaken to minimize the
working-set memory footprint, if for no other reason than that it makes sense
to do so in general. X isn't really heavy on a decent PC -- and while
backwards compatibility is important to Linux users, the line must be
drawn somewhere. And we can't be so heavily tied to the past that it
limits our future.

Already, applications like The GIMP, Netscape and Emacs tax the system
more than X does; I don't think that a more lightweight windowing system
would make all that much difference.

Sure, some things really really need all the speed possible, most
notably games, and I think a back door to raw hardware could be provided
for those. But in the general case, I side with Linus: there is a need
for a unified graphics system, and X is it.

X is not going away. GGI will earn its place if it can provide a
better X, for instance by letting me run DOS apps (games) without #$%^&ing
up my graphics card, by providing simpler, tighter X servers, or by
providing wider support for devices and peripherals (multi-heading,
etc.).

~kzm
--
If I haven't seen further, it is by standing in the footprints of giants

Mats Andtbacka

Jan 29, 1997

Hartmut Niemann, in <5chqdv$6...@rznews.rrze.uni-erlangen.de>:

>What about putting the network code into a userspace module?
>Ridiculous, isn't it?

no. some operating systems do that, in effect; it's not commonly done,
because of the performance penalty imposed by most non-realtime OSs,
but it's certainly possible.

the X server isn't in the kernel, and it's doing just fine talking to
the video hardware regardless. i'm unconvinced that we need much extra
performance from the video subsystem, considering just how much work a
modern CPU can do in the timespan of one vertical refresh. if you
think you can convince me i'm wrong, by all means go ahead.

[...]


>To me it seems that Linux is an OS designed by system operators for
>server and router applications.

this may well be quite true. is there something wrong with that?
--
"...it's all wrong
but it's alright..." -- Clapton

Larry Doolittle

Jan 29, 1997

Victor Yodaiken (yoda...@chelm.cs.nmt.edu) wrote:

: >real argument is that you can't get away from privileged instructions
: >if you want to access hardware. We've been running stuff suid root
: >to get there; that doesn't mean it's The Right Thing To Do(tm) -- name
: >one problem with suid, quick. ;-)

: Suid is a wonderful idea. It has some security problems, but they
: can be avoided with careful design.

Careful design of suid programs does not result in
-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
   ^                   ^^^^^^^

- Larry Doolittle ldoo...@jlab.org

Justin Hahn

Jan 29, 1997

: >An abstraction layer in between would be nice.

: Why can't the server be that layer? And what advantage is there in
: putting the layer in the kernel?

An abstraction layer, from what I understand of this project, would have to
be SUID root, and would not have as fast access as kernel code. Plus it would
bring all the other problems that SUID brings.

: >A second problem is that there is currently no good solution for
: >multiple console input devices. This is *missing*.
: >(And AFAIK GGI offers a solution for this - correct me if I am wrong).
: >Linux is a multi user system, it supports multiple virtual consoles, but
: >can't handle two keyboards.

: Forgive me, but this is not the most compelling argument I've heard
: all week.

It doesn't have to compel you. If you don't EVER want to use GGI, you could
always choose "n" on the config screen. I mean, I never hear people whining
about "well, there are 30 or 40 SCSI drivers in the kernel, I don't want
support for the other 29 or 39" or "the kernel supports 10 odd sound cards,
the other 9 are just bloat". You don't because they work, they work fast, and
they are key components. Except that while you can run a system without
audio, without networking, and in many cases without SCSI, you can't without
some form of video. So while the most intrinsic piece of hardware is denied
kernel access, you're allowing less important hardware direct, fast,
kernel-space access. This is not good design philosophy.

: >When I write a graphics application, I would prefer seeing it fault on
: >a bad pointer instead of crashing everything, because it is root-suid.

: So, the application should not be root suid, it should simply have
: access to some video buffers --- perhaps not even the hardware ones.
: You could even set things up so that an RT-Linux (I knew there
: would be an opportunity for this somewhere in this discussion)
: task could update hard buffers from virtual buffers every
: couple of hundred microseconds.

1) Most people do not currently use RT-Linux. Come back when they do. 2) You'd
still need something to be SUID root. Otherwise you CAN'T access the memory
addresses and IRQ(s) involved. Period. That is the way it works. That means a
bad pointer could (depending on how clean the API is, and if it's like SVGAlib
it's not very) scribble all over system memory. The user-space server is not
effective anyway; see my later argument...

: >What about putting the network code into a userspace module?

: Good idea. I think it could be done. There is too much network
: crap in the kernel as it is.

Okay, go for it. Come back when it's stable and when your performance matches
that of the kernel code. You are spouting off reasons why GGI doesn't work,
and why it's a bad idea, and giving all these other alternatives, when you
(from the sound of it) have not spent a whole lot of time looking at it and
haven't cut any driver code for GGI.

: >>From what I see, some people seem to believe that a standard can
: >>be imposed on graphics programmers and manufacturers by putting
: >>the standard into the Linux kernel. There is much reason to
: >>doubt this theory.

: >It works for M$;-)

: Yes. But there are some subtle differences between Linux
: and Microsoft.

Yes... But let me use an MS example. Win NT is "supposedly" a microkernel. As
such, in v3.51 its graphics subsystem was a non-kernel-space daemon. It was
slow as mud. Slower, even. So in v4.0, what does MS do? They put graphics in
the kernel. It flies. Being true to MS style, their graphics subsystem isn't
stable, and it causes crashes. But it has always done that. There is NO more
danger from putting GGI in the kernel; doing so in other OSes in the real
world has shown that it increases performance, and it will provide a
uniform API and driver format for Linux. If you think the current XF86
situation is convenient, then you are a sick puppy.

But then again, that's just my $0.02


--
-justin

Geert Uytterhoeven

Jan 29, 1997

In article <E4rpr...@greenie.muc.de>, ge...@greenie.muc.de (Gert Doering) writes:

|> nie...@cip.e-technik.uni-erlangen.de (Hartmut Niemann) writes:
|>
|> >When I write a graphics application, I would prefer seeing it fault on
|> >a bad pointer instead of crashing everything, because it is root-suid.
|>
|> Get your facts straight. Even a uid-root program is protected by the usual
|> memory protection. The only exception is that it *may* ask the kernel to
|> permit it access to some I/O ports and memory locations - access to other
|> locations is still not allowed.

Wrong. A setuid root program can access all memory locations in your machine.

|> To the contrary, a GGI module living *in kernel* has *NO* protection and
|> can very easily crash the whole machine when accessing a bad pointer. So?

How many bugs are there in e.g. the network drivers? Those can crash the whole
machine too.

Greetings,

Geert

--
Geert Uytterhoeven Geert.Uyt...@cs.kuleuven.ac.be
Wavelets, Linux/m68k on Amiga http://www.cs.kuleuven.ac.be/~geert/
Department of Computer Science -- Katholieke Universiteit Leuven -- Belgium

Gert Doering

Jan 29, 1997

nie...@cip.e-technik.uni-erlangen.de (Hartmut Niemann) writes:

>When I write a graphics application, I would prefer seeing it fault on
>a bad pointer instead of crashing everything, because it is root-suid.

Get your facts straight. Even a uid-root program is protected by the usual
memory protection. The only exception is that it *may* ask the kernel to
permit it access to some I/O ports and memory locations - access to other
locations is still not allowed.

To the contrary, a GGI module living *in kernel* has *NO* protection and
can very easily crash the whole machine when accessing a bad pointer. So?

gert
--
Yield to temptation ... it may not pass your way again! -- Lazarus Long
//www.muc.de/~gert
Gert Doering - Munich, Germany ge...@greenie.muc.de
fax: +49-89-3243328 gert.d...@physik.tu-muenchen.de

bill davidsen

Jan 29, 1997

| [1] Gee, I want to do some simple graphics.. Why should I use complex X
| for a simple graphics demo?

Portability?

| [2] What about coherence between some programs? E.g. I heard that some
| commercial X servers can only return to 80x25 mode.

I heard that some CPUs don't do FDIV right, and some cars have bad
brakes. Should I do arithmetic in software and avoid any driving
which requires stopping without hitting something?

This is silly; if something is broken, don't use it.

| Please, if you don't like the concept of GGI, just DON'T USE IT and DON'T
| BOTHER READING ABOUT IT. Why discourage brave and good people from doing a
| good (in their opinion and mine) job? You won't be hurt if they succeed,
| believe me.

Wrong, everyone is hurt every time a new "standard" comes out which
only works in a subset of the systems in use. X isn't on every
system, but it's available for *almost* every system.

When people start writing to a new standard they lock out everyone
who doesn't conform. Microsoft does that, but it doesn't seem like a
logical thing in Linux. We already have SVGAlib, and that causes
people to write some apps which don't run portably, even on some
non-Intel Linux platforms.

Linux is not large enough to drive the market, so all we do is
fragment it. Invest an hour in learning something like XForms if you
want simple development. Software written in some interface for
Linux only is about as portable as a game cartridge. The argument
that video is too slow and memory too expensive was a good one once,
but it doesn't match reality any more.
--
bill davidsen (davi...@tmr.com)
Windows NT is like a doctoral thesis; it contains a wealth of
interesting features and ideas, some of which could be extracted
from the proof of concept and used in a real operating system.

bill davidsen

Jan 29, 1997

In article <5clv2d$o...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:

| And the windowing system won't change from what I understand. The real
| problem is that since the kernel has no idea about what's going on in
| the video hardware, it's difficult to reset it properly. In addition
| it requires SUID access for otherwise ordinary programs (due to I/O
| port access). Do you realize that an ordinary user cannot write a SVGALIB
| program? That's a clear sign that some part of video access should be
| in the kernel.

Clear sign to me that direct access to hardware should be limited to
keep the "ordinary user" from messing it up. For a single-user
system, that user should be able to add permissions in a moment. For
a multiuser system, do you really want people logging in to be able
to play with the console?

bill davidsen

Jan 29, 1997

In article <01bc0bc7$527db9e0$391d...@h57.albany.edu>,
Joseph Foley <jf8...@csc.albany.edu> wrote:

| > [1] Gee, I want to do some simple graphics.. Why should I use
| > complex X for a simple graphics demo? (And there are NO books on X
| > where I live) SVGAlib? Ha! guess what: SVGAlib _hangs_ Linux on my
| > system as it doesn't seem to like my S3-Trio64V+. Well, OK, maybe it
| > doesn't hang Linux, but it leaves the console and the keyboard in a
| > 'dead' state, so I can not even reboot gracefully.
|

| This is one of my major gripes. Most people using Linux don't have a net
| connection or a serial console to reboot from.

What I miss is the jump from "SVGAlib doesn't work well" to "let's
write another totally new graphics thing to maintain instead of
fixing SVGAlib or using X."

James Youngman

Jan 29, 1997

In article <E4rpr...@greenie.muc.de>, ge...@greenie.muc.de says...

>
>nie...@cip.e-technik.uni-erlangen.de (Hartmut Niemann) writes:
>
>>When I write a graphics application, I would prefer seeing it fault on
>>a bad pointer instead of crashing everything, because it is root-suid.
>
>Get your facts straight. Even a uid-root program is protected by the usual
>memory protection. The only exception is that it *may* ask the kernel to
>permit it access to some I/O ports and memory locations - access to other
>locations is still not allowed.

Surely it is possible for a traditional SVGAlib program to crash without
restoring the console state?

--
James Youngman VG Gas Analysis Systems |The trouble with the rat-race
Before sending advertising material, read |is, even if you win, you're
http://www.law.cornell.edu/uscode/47/227.html|still a rat.


Joe Buck

Jan 29, 1997

doo...@recycle.cebaf.gov (Larry Doolittle) writes:
>Careful design of suid programs does not result in
>-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
>   ^                   ^^^^^^^

Large suid programs can still be safe if they perform operations requiring
privilege in a small initialization portion of the program, then revoke
their privilege. The amount of code to be verified is then quite small.
The X server is written in that way.
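
The pattern is easy to sketch (the VGA register range is shown just for
illustration; a real server does far more checking). The key point is
that the I/O permissions acquired while privileged survive the uid change:

    /* Sketch of "acquire privilege early, drop it for the long run".
     * Runs suid root; the port range is an illustrative example. */
    #include <stdio.h>
    #include <sys/io.h>     /* ioperm() -- Linux/x86 */
    #include <unistd.h>

    int main(void)
    {
        /* Privileged part: map in the VGA register range (0x3c0-0x3df). */
        if (ioperm(0x3c0, 0x20, 1) < 0) {
            perror("ioperm");
            return 1;
        }

        /* Revoke root before touching any untrusted input. */
        if (setuid(getuid()) < 0) {
            perror("setuid");
            return 1;
        }

        /* ...the bulk of the program runs unprivileged from here on... */
        printf("uid %d still has VGA port access\n", (int)getuid());
        return 0;
    }

The window in which a bug is exploitable shrinks to those first few lines.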

--
-- Joe Buck http://www.synopsys.com/pubs/research/people/jbuck.html

Help stamp out Internet spam: see http://www.vix.com/spam/

Albert D. Cahalan

Jan 29, 1997

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

> In article <5clv2d$o...@solaria.cc.gatech.edu>,
> Byron A Jeff <by...@cc.gatech.edu> wrote:
>> And the windowing system won't change, from what I understand.
>> The real problem is that since the kernel has no idea about
>> what's going on in the video hardware, it's difficult to reset
>> it properly. In addition it requires SUID access for otherwise
>> ordinary programs (due to I/O port access). Do you realize that
>> an ordinary user cannot write an SVGAlib program? That's a clear
>> sign that some part of video access should be in the kernel.
>

> come on. It's exceptionally easy to write a server that will
> fork off children to run ordinary user programs with open files
> and ioperms and memory windows in the right place.

It is exceptionally easy, and exceptionally insecure too. When you
do that, the user gains control of video registers that could be
used to mess with the system in severely bad ways. Video hardware
is not nice. It is designed by the truly insane, with switched banks
of write-only registers. A user might:

* Fry the monitor. I have seen a monitor destroyed by a bad
interaction between XFree86 and SVGAlib. By giving the user
access to video registers, it becomes easy to fry a monitor.

* Crash the video card or put it in a state from which there is no
way to be sure how to reset the card.

* Use an advanced card with DMA to modify kernel memory. Now the
user can put a virus or packet sniffer in the kernel.

>>>> I've always thought that the kernel should handle all (or as much as
>>>> possible) hardware access. After all, the sound driver is in the
>>>
>>> Why? I think the kernel should provide only those services that it
>>> needs to provide. There is a difference between simplicity in
>>> engineering and imposing a simplistic scheme. Inconsistency
>>> is no sin in a working kernel.
>>
>> I've just been following GGI from that I've been reading. The kernel
>> part of GGI only provides essential video services, all the rest is
>> still done at the user level via servers and libraries....
>
> So some user programs will still touch hardware.
> Then what's the big deal?

No, they do not touch hardware. The kernel module does that.
There is a library in user space that calls the kernel for
functions supported in hardware and implements other stuff itself.

>> GGI can surely proceed. Just release a module or kernel patch
>> and let the results prove themselves.

Both: There is a small kernel patch to remove most of the current
console and add hooks for a video module to take over control.
Most of the video code goes in a module.


H. Peter Anvin

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

Followup to: <32EFA87F...@eris.dev.null>
By author: Dimitri Maziuk <di...@eris.dev.null>
In newsgroup: comp.os.linux.development.system
>
> I'm saying that graphics hardware should be accessed like any other
> hardware -- via system calls. GGI just happen to be the guys trying
> to implement something like that. And given a choice between a suid
> X server with DGA and a GGI kernel module, I'll take the module.
>

Why?

-hpa
--
This space intentionally has nothing but text explaining why this
space has nothing but text explaining that this space would otherwise
have been left blank, and would otherwise have been left blank.


Markus Gutschke

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

ge...@greenie.muc.de (Gert Doering) writes:
> Get your facts straight. Even a uid-root program is protected by the usual
> memory protection. The only exception is that it *may* ask the kernel to
> permit it access to some I/O ports and memory locations - access to other
> locations is still not allowed.

Graphics cards often have I/O port addresses that are way above the
conventional ISA I/O port space. This is especially true for all
modern PCI based graphics cards. As "ioperm(2)" does not allow for
managing more than the first 0x400 I/O ports, every graphics
application will therefore have to use "iopl(2)" and enable access to
these ports. This will leave the machine wide open for any process
going berserk. Not only can the program run over _all_ I/O ports, but
it can also disable interrupts (c.f. the man page for "iopl(2)").
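
To make this concrete, here is a minimal sketch (untested, and assuming
Linux/i386 with glibc's <sys/io.h> header; the VGA port range is only an
example) of what a root-owned graphics program has to do today:

    /* The two ways a root process gets at I/O ports. */
    #include <stdio.h>
    #include <sys/io.h>        /* ioperm(), iopl() */

    int main(void)
    {
        /* Fine-grained: enable only the VGA ports 0x3c0..0x3df.
         * ioperm() can only grant ports below 0x400. */
        if (ioperm(0x3c0, 0x20, 1) < 0)
            perror("ioperm");

        /* A PCI card with registers above 0x400 forces this instead:
         * iopl(3) opens *all* ports and even permits cli/sti, so a
         * stray write can hit any controller in the machine. */
        if (iopl(3) < 0)
            perror("iopl");

        return 0;
    }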

I would definitely prefer better protection and abstraction from the
hardware than is provided by the current scheme. It is arguable that a
"ggi server process" could be written, which runs entirely in user
space, offers call-backs to the kernel (for doing text output), and
provides interfaces for programs that want to do graphics. This
approach is likely to suffer from serious performance problems,
though.

That is why the GGI people decided to move the main arbiter for
accessing graphics hardware into kernel space. If I understand the
current concept properly, there will be a "/dev/graphics" device (or
more likely, several of them for all of the virtual consoles). This
device can be accessed by "read(2)"/"write(2)", "mmap(2)", or
"ioctl(2)" operations. The arbiter ensures that accessing the graphics
hardware does not leave the machine in an unstable state and
virtualizes the hardware, so that all of the virtual consoles are
independent of each other. Access control to the graphics hardware is
the same as with all other devices under Unix. The device has
permission bits and it is also possible to request an opened file
descriptor from an authentication server.
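
As a purely hypothetical sketch of what talking to such a device might
look like -- the device name and the mmap(2)ed frame buffer come from
the description above, while the geometry and the rest are invented for
illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 640 * 480;     /* 640x480 at 8bpp, assumed */
        unsigned char *fb;
        int fd;

        fd = open("/dev/graphics", O_RDWR);     /* per-VC device */
        if (fd < 0) { perror("open"); return 1; }

        /* Map the virtualized frame buffer into our address space. */
        fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        memset(fb, 15, len);  /* ordinary memory writes, no root needed */
        munmap(fb, len);
        close(fd);
        return 0;
    }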

A typical application will not directly access the device. Rather it
will use a library that interfaces to the GGI kernel space
driver. Currently there already is a library that emulates the libSVGA
API, so that old programs can benefit from the advantages of the
hardware abstraction provided by GGI. I have not tested this library,
yet, so I cannot comment on how well it works, though.

It is intended that there will be a special library that allows for
taking advantage of special chip set features (if available); this
should appeal to authors of video games who want to make use of
hardware acceleration.

There will also be (or maybe there already is) an X server which
interfaces with the GGI graphics device. This is necessary, because
under GGI no other process should directly access the graphics
hardware. If this goal can be achieved, the risk of having to restart
the machine because the video card was left in an undefined state,
should cease to exist.

The long-term goal is also for the libraries (both libSVGA and
libGGI) to have a different back-end that interfaces to an X server
rather than to the GGI interface. This would allow for running the
same programs both locally (at maximum performance) and over a network
connection.

The intended goal of the GGI project does not seem to be producing
yet another incompatible API (which some people accuse it of doing),
but rather making existing APIs more reliable by providing a
decent abstraction layer.

Markus

--
Markus Gutschke Internet: gut...@math.uni-muenster.de
Schlage 5a PGP public key: finger -l gut...@math.uni-muenster.de
D-48268 Greven-Gimbte
Germany >>> I prefer encrypted e-mail <<<

jim fetters

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

In article <5co1bb$1p...@usenet1y.prodigy.net>,
bill davidsen <davi...@tmr.com> wrote:

>Wrong, everyone is hurt every time a new "standard" comes out which
>only works in a subset of the systems in use. X isn't on every
>system, but it's available for *almost* every system.

Yeah. I completely agree. X11R4, X11R5, X11R6, and now Broadway.
Let's not forget PHIGS, PEX, Display PostScript (under AIX, SGI, etc.), and
other X extensions. And there's Motif, Athena, and OPENLOOK.

So many standards to choose from!
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here's one for starters. Grab your SGI, load up Alias Wavefront, and try
to export the display to a Linux box, or a Sun workstation.

Now X, that's a real standard!

>Linux is not large enough to drive the market, so all we do is
>fragment it. Invest an hour in learning something like Xforms if you
>want simple development.

Xforms: now that's what I call SOFTWARE ENGINEERING AT ITS FINEST.

>Software written in some interface for
>Linux only is about as portable as a game cartridge.

Right, you're so right. So, in honor of your discovery, we're taking
away your Linux box and re-writing the entire video subsystem in Java.
Hey, IT'S PORTABLE!!!!!! You won't mind the horribly slow graphics,
AFTER ALL --- IT'S PORTABLE: IT'S JAVA! And since there is Java support
in the Linux kernel, your new video subsystem will now run about
0.000338% faster, give or take 1 percent.

>The argument
>that video is too slow and memory too expensive was a good one once,
>but it doesn't match reality any more.

Now memory is cheap, and software is slow and bloated. All things
being equal, the advances in technology have effectively been canceled
out (due to slow and inefficient programming), producing a NIL increase
in throughput.
-Jim
======================================================================
"Yeah. Imagine building a computer, where it required a system call
to execute an instruction (e.g. add, shift, load, store, compare,
branch ...). That would suck. Such a machine was built in the 50's,
before they figured out the *right* way to do multi-tasking. Today,
with multi-tasking, user-level programs touch the CPU chip directly.
In fact, this is *so obvious* that it is hard to imagine anything
else. Unfortunately, these principles are still confusing and
unknown to the graphics community.

Yes, the fastest way to do graphics is to allow *direct access* to
the hardware. Yes, *this is dangerous*. But there are design rules,
which are not well known outside of SGI, IBM, etc., that allow fast
high-performance graphics to be built and operated safely."
=======================================================================
[ quote taken from Linas Vepstas ]
=======================================================================


NeXTMail *NOT* Accepted!

Mats Liljegren

unread,
Jan 29, 1997, 3:00:00 AM1/29/97
to

On 28 Jan 97 10:27:02 GMT, see...@physik.tu-chemnitz.de (Steffen
Seeger) wrote:

>To give you only one reason for being in the kernel: you can use
>interrupts there. And modern graphic cards need these to reach good
>performance. Because otherwise you will end up polling hardware in a
>busy loop (not a good idea in a multitasking environment).

If you have video drivers, one for each video card (you could of
course load several of them at once, if you have several types of
cards installed), in the kernel space, this would suffice. Wouldn't
it?

On top of that, you could have a server running in user space. This
server would emulate what the video card can't do in hardware. In this
way, the interface to the server would be the same, no matter what the
video card can do.

Would this be a good solution?

/Mats


Dimitri Maziuk

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Victor Yodaiken wrote:
>
> In article <32EE34B8...@eris.dev.null>,
> Dimitri Maziuk <di...@eris.dev.null> wrote:
> >Victor Yodaiken wrote:
> >>
...


> This is a circular argument: The abstraction layer must go in the
> kernel because the abstraction layer must go in the kernel.

:-0 I didn't say that. I couldn't have! Not even at 01.17 am!

Seriously, I'm not arguing that GGI should be in the actual kernel
-- it's big enough as it is. Or that it should be GGI.

I'm saying that graphics hardware should be accessed like any other
hardware -- via system calls. GGI just happen to be the guys trying
to implement something like that. And given a choice between a suid
X server with DGA and a GGI kernel module, I'll take the module.

Dimitri
--
emaziuk @ curtin.edu.au

Preston F. Crow

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

I thought the virtual console thing was supposed to do much of what
GGI is talking about:

Switching consoles, which is handled by the kernel, saves the state of
the video card, which is restored when you switch back to that
console. It also provides an abstract interface (/dev/tty) through
which you can control the screen.

So is GGI just talking about fixing virtual consoles? Granted, they
obviously don't save enough video state for proper restoration (as
evidenced by SVGAlib problems). Granted, the abstract interface is
rather limited.

--PC

--
"And he [Christopher Robin] respects Owl, because you can't help respecting
anybody who can spell TUESDAY, even if he doesn't spell it right; but spelling
isn't everything. There are days when spelling Tuesday just doesn't matter."
-- _The House at Pooh Corner_ by A.A. Milne

Larry Doolittle

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Joe Buck (jb...@synopsys.com) wrote:

: doo...@recycle.cebaf.gov (Larry Doolittle) writes:
: >Careful design of suid programs does not result in
: >-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
: > ^ ^^^^^^^

: Large suid programs can still be safe if they perform operations requiring
: privilege in a small initialization portion of the program, then revoke
: their privilege. The amount of code to be verified is then quite small.
: The X server is written in that way.

Umm. It would be nice if this were the whole story. The Perl saved
uid bug is an example of subtle things that can go wrong. I agree
a _fully_ revoked root privilege eliminates simple stack-overrun
attacks that aim for a root shell. In the case of X, for example, it
would have to be much more subtle. After overrunning the stack, you
would have to diddle bits on the disk controller in some way that did
not cause an instant crash :-). Since such attacks are possible, however
difficult, I mistrust large programs running as root. This argument
holds even _if_ the source is public. Proprietary programs running as
root are even worse, IMHO.

- Larry Doolittle ldoo...@jlab.org

Timothy Watson

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5cnuru$i...@news.bu.edu> jeh...@bu.edu (Justin Hahn) writes:
> : You could even set things up so that an RT-Linux (I knew there
> : would be an opportunity for this somewhere in this discussion)
> : task could update hard buffers from virtual buffers every
> : couple of hundred microseconds.
>
> 1) Most people do not currently use RT-Linux. Come back when they do. 2) You'd

Hmm, Mr. Yodaiken works in this area, and I believe Alan Cox mentioned
plans to eventually make this part of the 2.1.x kernel series - which
puts the network stuff in a different light also.

> : Good idea. I think it could be done. There is too much network
> : crap in the kernel as it is.
>
> Okay go for it. Come back when it's stable, and when your performance matches
> that of the kernel code. You are spouting off reasons why GGI doesn't work,

This cuts both ways. I think it will be interesting following the GGI
project, but I, for one, will probably not bother patching my kernel
until there is an X server for GGI. Maybe it is a chicken/egg problem, but
unless the XFree folks happen to support something similar on another
system, I don't see them jumping to adapt one right away. Maybe when the
interface is stable, someone can be convinced to do this...


David Whysong

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Preston F. Crow (cr...@coos.dartmouth.edu) wrote:
: I thought the virtual console thing was supposed to do much of what
: GGI is talking about:

: Switching consoles, which is handled by the kernel, saves the state of
: the video card, which is restored when you switch back to that
: console. It also provides an abstract interface (/dev/tty) through
: which you can control the screen.

Actually, switching consoles does not save the state of the video card. The
kernel knows very little about graphics mode. The kernel video driver, as
far as I can tell (and I'm certainly no kernel hacker) simply provides a very
few functions for standard vga (text-mode only; font controls, palette and
color maps). (c.f. /usr/src/linux/drivers/char/vga.c)

The kernel has no "knowledge" of the video chipsets, so it can't keep video
sane if some suid-root program messes up the video card state (XFree and
SVGAlib do this very often to me).

: So is GGI just talking about fixing virtual consoles? Granted, they
: obviously don't save enough video state for proper restoration (as
: evidenced by SVGAlib problems). Granted, the abstract interface is
: rather limited.

GGI, as I understand it, would simply add a kernel layer and library. The
kernel layer (consisting of a kernel patch to remove old console code and
a module for each video card) includes a frame buffer and function calls
to do IO. Most user programs would implement graphics through library
calls. The library would take advantage of hardware acceleration features
of each card, or provide emulation for features which the hardware does
not implement.
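
A rough sketch of that accelerate-or-emulate split -- every name here
(the capability bit, the ioctl number, the globals) is invented for
illustration, and the real GGI interface may look quite different:

    #include <sys/ioctl.h>

    struct rect { int x, y, w, h; unsigned char color; };

    #define CAP_FILLRECT 0x0001        /* assumed capability bit */
    #define GGI_FILLRECT 0x4701        /* assumed ioctl number   */

    extern int ggi_fd;                 /* the opened kernel device   */
    extern unsigned ggi_caps;          /* capabilities, queried once */
    extern unsigned char *ggi_fb;      /* mmap()ed frame buffer      */
    extern int ggi_pitch;              /* bytes per scanline         */

    void fill_rect(struct rect *r)
    {
        int x, y;

        if (ggi_caps & CAP_FILLRECT) {
            /* The card can do it: one ioctl, and the kernel driver
             * programs the accelerator. */
            ioctl(ggi_fd, GGI_FILLRECT, r);
        } else {
            /* No hardware help: the library loops over the mapped
             * frame buffer itself. */
            for (y = r->y; y < r->y + r->h; y++)
                for (x = r->x; x < r->x + r->w; x++)
                    ggi_fb[y * ggi_pitch + x] = r->color;
        }
    }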

This approach would "fix" virtual consoles and enhance Linux video capabilities.
The new console code is supposed to allow for multiple input devices and
multiple displays as well.

There is already an X server which has been modified to run with GGI. It
was available from a link at the GGI home page at

http://synergy.caltech.edu

(which I can't seem to access today). I'm not sure if it is complete yet,
though. Does anyone have performance comparisons between it and the
comparable XFree server?

Dave Whysong
dwhysong @ physics . ucsb . edu (remove spaces to email me)
finger for pgp key


Gianni Mariani

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

jim fetters wrote:
...

> Here's for starters. Grab your SGI, load up Alias Wavefront, and try
> to export the display to a Linux box, or a Sun workstation.

Yeah, but it would be more fun watching the lawn grow than running
Alias from an SGI box pointed at a Sun or PC!

It's also not impossible to do this; if I recall, you can play
tricks with DSOs or load special servers and make it work. Sorry,
no details, I don't pretend to be correct.

--

_ ` _ ` Globalization R&D
/ \ / / \ /-- /-- /
/ // / / / / / / / Graphics is cool
\_/ \ \_ \/ /_/ /_/ o Internationalization c'est magnifique
/ /
\_/ (415) 933 4387 Opinions mine etc ...

Ketil Z Malde

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

jeh...@bu.edu (Justin Hahn) writes:

> Yes... But let me use an MS example. Win NT is "supposedly" a
> microkernel. As such in v3.51 its graphics subsystem was
> non-kernel-space daemon. It was slow as mud. Slower even. So in v4.0,
> what does MS do? They put graphics in the kernel. It flies. Being true
> to MS style, their graphics subsystem isn't stable, and it causes
> crashes.

And MS did the equivalent of putting X in the kernel, and that's not
quite GGI, is it? The point is to use GGI to multiplex the graphics
hardware in a safe manner to various graphics clients, including X
servers, SVGAlib applications, DOSemu etc etc -- stuff you can *not* do
safely today.

Come on people, Linux is a modular, monolithic kernel, and a minimal
graphics driver belongs there. If you want a microkernel architecture,
they are available, but Linux is not it.

Victor Yodaiken

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <32EFA87F...@eris.dev.null>,

>I'm saying that graphics hardware should be accessed like any other
>hardware -- via system calls. GGI just happen to be the guys trying
>to implement something like that. And given a choice between a suid
>X server with DGA and a GGI kernel module, I'll take the module.
>
>Dimitri

I'm suspicious of design from misty principles. Graphics hardware is
_different_ from other hardware: it's less standard and involves
big chunks of frame buffer. X is large, slow, and annoying. But
it works. It runs quite well and does so in user space.
If someone can do much better, I'd be thrilled to see it. But
all this "It's the right thing to do(tm)" nonsense sounds like
something Dilbert's manager should be saying. The code is freely
available, there is a great deal of interest, so it's up to the
GGI folks to make their case with something more convincing than
platitudes.


Victor Yodaiken

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5cnuru$i...@news.bu.edu>, Justin Hahn <jeh...@bu.edu> wrote:
>: >An abstraction layer in between would be nice.
>
>: Why can't the server be that layer? And what advantage is there in
>: putting the layer in the kernel?
>
>An abstraction layer, from what I understand of this project, would have to
>SUID root, and would not have as fast access as kernel code. Plus it would
>bring all the other problems SUID problems bring.

I'd like to see a more technical justification. Many people are
under the mistaken impression that kernel code automatically
runs faster.
kernel be as unobtrusive as possible so that user programs can
do actual work.

>It doesn't have to compel you. If you don't EVER want to use GGI, you could
>always choose "n" on the config screen. I mean I never hear people whining
>about "well there are 30 or 40 SCSI drivers in the kernel, I don't want
>support for the other 29 or 39" or "the kernel support 10 odd sound cards, the
>other 9 are just bloat". You don't because they work, they work fast, and they
>key components. Except that while you can run a system without audio, without
>networking, and in many cases without SCSI, you can't without some for of
>video. So while the most intrinsic piece of hardware is denied kernel access,
>you're allowing less important hardware direct, fast, kernel-space
>access. This is not good design philosophy.

There is excellent evidence that one can run a sophisticated powerful
graphics system that has a huge base of programs from user mode.
Look, this is a completely nonsensical debate and I can't believe
that I'm wasting time with it. But there is no reason why GGI folks
can't supply a driver and a simple patch. All this screaming about
needing recognition for code that doesn't actually work is very
offputting.

>1) Most people do not currently use RT-Linux.

The poor lambs.

>2) You'd
>still need something to be SUID root. Otherwise you CAN'T access the memory
>addresses and IRQ(s) invovled. Period. That is the way it works. That means a
>bad pointer could (depending on how clean the API is, and if it's like SVGAlib
>it's not very) scribble all over system memory. The user-space server is not
>effective anyways see my later argument...

From day one, UNIX has relied on trusted daemon programs to offer
services that can be offered from user mode. If you don't like this
theory, OS360 may still be available.

>: >What about putting the network code into a userspace module?
>
>: Good idea. I think it could be done. There is too much network
>: crap in the kernel as it is.
>
>Okay go for it. Come back when it's stable, and when your performance matches
>that of the kernel code. You are spouting off reasons why GGI doesn't work,
>and why it's a bad idea, and giving all these other alternatives, when you
>(from the sound of it) have not spent a whole lot of time looking at it, and
>haven't cut any driver code for GGI.

I'm reacting to the exceptionally weak technical arguments I find here
for GGI.

>Yes... But let me use an MS example. Win NT is "supposedly" a microkernel. As
>such in v3.51 its graphics subsystem was non-kernel-space daemon. It was slow
>as mud. Slower even. So in v4.0, what does MS do? They put graphics in the
>kernel. It flies. Being true to MS style, their graphics subsystem isn't

Nu? This proves nothing other than that microkernels are hard to get right:
something that is well known.

>stable, and it causes crashes. But it's always done that. There is NO more
>danger from putting GGI in the kernel, doing so in other OSes in the real
>world has shown that doing so increases performance, and it will provide a
>uniform API, and driver format for Linux.

I'm afraid that arguing that "it worked for Microsoft" is no more
persuasive than any of your other arguments.

>If you think the current XF86
>situation is convenient, then you are a sick puppy.

It works. People use it.

Ketil Z Malde

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

> I'm suspicious of design from misty principles. Graphics hardware is
> _different_ from other hardware: it's less standard and involves big
> chunks of frame buffer.

Counting the support for other hardware, namely net and scsi, I find
that both entail more than 100K lines of C code. Are graphics really
that much less standard than network or scsi hardware?

(I'm no kernel hacker, so flame/correct if I misunderstood something
fundamental)

Steffen Seeger

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Hello,

Andreas Beck and I (Steffen Seeger) would like to announce the release of
the ggi-0.0.9 developers version. This release is intended to allow porting
of existing display drivers from version 0.0.8, to bring other architectures
to approximately the same level as i386, and to serve as a basis for a
consistent implementation of secure access to graphics services.

Copyright:
----------
Because of the overwhelming number of people who asked us to go on with
GGI, we decided to release ggi-0.0.9 developers version and *any* further
version of GGI under the GNU-GPL copyright license version 2 or any later
version. We would like to thank all of you for the continuing support.

Where to obtain it:
-------------------
You can obtain the sources and more detailed information about GGI from

http://www.tu-chemnitz.de/~sse

However, the documentation is under construction and will be updated during the
next few days to serve as a reference for the current implementation.

Summary:
--------
In short, GGI-0.0.9 implements the concepts proposed in an article published
in the November 1996 issue of the Linux Journal. To sum up, it
implements the kernel services, needed for future releases, that allow for
proper handling of multiple displays, keyboards and input hardware on one
machine. What is missing, however, is the implementation of
the graphics devices that allow low-level access to the graphics hardware.
A previous release (GGI-0.0.8 or Public Preview GGI) shows how we will try
to implement this, and we are now going to merge these two branches again.
This, along with writing drivers for as much hardware as possible, is one
of the main goals for this development cycle.
We tried to keep most things platform independent, so porting it to
other Linux platforms should be possible with minimal effort, and we
appreciate any attempts to do this.
Currently, only the i386 platform is supported; ports for the
Alpha AXP are being tackled. If you think you can help port it to m68k or
SPARC, please get in contact with us.

Status:
-------
When installed properly, this version should allow booting an i386-type PC
with a VGA, CGA, MGA or HERCULES compatible card and any working mixture
of these. After boot, there should be some (90%) xterm-compliant virtual
text consoles and some with a very dumb console parser. The consoles have
scrollback support on any supported hardware.

Features:
---------
* structured layering of the console and display access code.
* scrollback on any display hardware, not only VGA/EGA
Especially braille readers and other non-CRT based hardware should
work well.
* graphical text consoles (like on the 68k platforms) are possible
* different text modes on each console
* different terminal parsers on each console
* different fonts, colors, etc. (not fully implemented yet).
* any kind of input device you can imagine as a pointer/keyboard
* support for up to 32 physical keyboards
* support for up to 32 physical displays
* dynamic registration of displays, keyboards and input devices
* display and input device drivers can be loaded as a module. The
drivers included support ps2aux mice and S3 Vision96x, ATT20c505/Bt485
based cards with icd2061a compatible clock chips.
* clear isolation of hardware dependent parts of the code to allow for easy
porting to other hardware platforms.
* hopefully well documented code. If there is something you miss, please
help us to improve this.

Performance:
------------
Most of the console code was rewritten from scratch, resulting in better
overall performance compared to the Linux-2.0 code. For example, ASCII text
output to the console is about 30%-50% faster with the proper display driver
loaded. This is obtained without changing the interfacing to the TTY layer;
a much greater speedup can be obtained by optimizing here.

Known bugs/limitations:
-----------------------
* only American (qwerty) and German (qwertz) keyboard support yet.
Anyhow, there is a patch to the kbd-0.91 utilities included, so you can
create your own keymap.
* keyboard ioctl()s do not work (no call to the ioctl() in the drivers yet).
* most Linux-specific console features are gone. This is because most requests
are to be handled by GGI; these features would interfere with GGI, make
GGI unstable, or are risky in some ways. Because of this, SVGAlib-based
programs and XFree will currently not work with this patch applied. Writing
a console parser that emulates the old behavior is possible.
* loading a display driver does work with the S3 Vision96x, icd2061a and
ATT/20c505 drivers. Unloading does not work yet, because the resource
may be busy and we need a special 'prepare-unload-ioctl' (which isn't
implemented yet).
* mouse works, but only with a mouse driver (currently only the ps2aux driver).
Take this one as an example of how easy input drivers will be to write.

Things you can help with:
-------------------------
* the kernel boot code needs some work to report the types and modes of *both*
displays if there are a color and a monochrome card installed. Otherwise we
could restrict to 80x25 too, but then we should set this explicitly (so that
kernels loaded with loadlin run well too).
* verify proper installation and boot with the 2.0.x kernels and i386 hardware.
* help porting it to other architectures
* write (boot) display drivers for other architectures (m68k, Alpha, SUN,...)
* write drivers for other keyboards, pointing devices, etc.

Anyhow, this is meant to be neither a final nor a perfectly working package, but rather
a snapshot of the current development. Any help or comments are very welcome.

Have a look at it and enjoy,

Steffen Seeger
Andreas Beck

Mikko Rauhala

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

On 29 Jan 1997 02:18:20 GMT, Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
>come on. It's exceptionally easy to write a server that will
>fork off children to run ordinary user programs with open files
>and ioperms and memory windows in the right place.

And still the process can probably screw up the card or perhaps the whole
OS by using those IO ports "inappropriately". PC graphics cards are not
meant for this kind of thing. And still the program could not access
IRQ's. And still the kernel wouldn't have any way to recover should the
program crash.

Hellooo?

--
Mikko Rauhala, sivari - m...@iki.fi - http://www.iki.fi/mjr/


James Youngman

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5cpu3f$q...@newshost.nmt.edu>, yoda...@chelm.cs.nmt.edu says...

>I'm reacting to the exceptionally weak technical arguments I find here
>for GGI.

The XFree86 server is -- in my experience -- stable (except the beta stuff).

The same is not true of SVGAlib programs. It's very easy to lose
keyboard/video when an SVGAlib program does something nasty. At the cost of
(say) 800 bytes of [non-module] kernel code we could fix this completely.

Further, these same 800 bytes would provide hooks that allow safe use of
accelerated features of the graphics hardware without applications needing to
know about it (i.e. we have a library designed to interact with it).

My point of view is that this will enhance the stability of the system and
since the stability of the system in this area is often poor (no, I don't like
that fact either), stability improvements are good.

There seems to me to be a tinge of hysteria; we're not talking here about
embedding the DDX part of an X server in the kernel or anything, as far as I
understand the issue.

Summary: A few hundred bytes of code in the kernel to enhance the stability of
the system would be a good idea. A few hundred thousand bytes of code in the
kernel may not be a good idea, but that is not the motion that is on the table.

[just MHO]

James Youngman

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5co1ns$1b...@usenet1y.prodigy.net>, davi...@tmr.com says...


>What I miss is the jump from "SVGlib doesn't work well" to "let's
>write another totally new graphics thing to maintain instead of
>fixing SVGAlib or using X."

Try thinking of it more as kernel support for the multiplexing of video hardware
between SVGAlib programs and X servers in order to enhance the stability of the
system. We're not talking about a new API really; there will (for example) be
a new version of SVGAlib that uses these hooks, doesn't need the program to be
setuid-root (I think) and makes the system less vulnerable to video/keyboard
hangups.

Albert D. Cahalan

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

davi...@tmr.com (bill davidsen) writes:
> Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> wrote:

>> [1] Gee, I want to do some simple graphics.. Why should I use
>> complex X for a simple graphics demo?
>
> Portability?

Note that there is an X server for GGI. GGI also makes SVGAlib programs
portable, even to non-x86 hardware.

>> Please, if you don't like the concept of GGI, just DON'T USE IT
>> and DON'T BOTHER READING ABOUT IT. Why discourage brave and good
>> people from doing a good (in theirs and mine opinion) job?
>> You won't hurt if they succeed, believe me.
>
> Wrong, everyone is hurt every time a new "standard" comes out which
> only works in a subset of the systems in use. X isn't on every
> system, but it's available for *almost* every system.

GGI supports an X server. GGI supports the SVGAlib API, and even
supports existing binaries with a drop-in replacement library.

> When people start writing to a new standard they lock out everyone
> who doesn't conform. Microsoft does that, but it doesn't seem like a
> logical thing in Linux. We already have SVGAlib, and that causes
> people to write some apps which don't run portably, even on some
> non-Intel Linux platforms.

People are encouraged to use libraries to access GGI instead of
direct kernel system calls. Only a few things will directly use
the kernel system calls: the X server, DOSEMU, the new SVGAlib,
and any other libraries such as OpenGL or DirectX.

People are not supposed to write code to a new standard unless
it is _really_ needed, such as when writing the libraries.

> Linux is not large enough to drive the market, so all we do is
> fragment it. Invest an hour in learning something like Xforms if you
> want simple development. Software written in some interface for
> Linux only is about as portable as a game cartridge. The argument
> that video is too slow and memory too expensive was a good one once,
> but it doesn't match reality any more.

Video is always too slow and memory is always too expensive.
I can't get a 120 frames/second OpenGL rendering at 32000x12000
of a forest with detail down to a single grass blade. Oh, I want
that at 48 bits/pixel with less than 5% CPU load.

--
Albert Cahalan
acahalan at cs.uml.edu (no junk mail please - I will hunt you down)

Bill Eldridge

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

> There is excellent evidence that one can run a sophisticated powerful
> graphics system that has a huge base of programs from user mode.
> Look, this is a completely nonsensical debate and I can't believe
> that I'm wasting time with it. But there is no reason why GGI folks
> can't supply a driver and a simple patch. All this screaming about
> needing recognition for code that doesn't actually work is very
> offputting.

Calling people's work of the last year "nonsensical" is also offputting.
They have 4 Web sites, they have sample code, they've written arguments
to support their case.

You can disagree with them, fine, but your attitude that even talking
about it is wasting your precious time is also offputting - other people
want this code, and so they're pushing to have it, in a convenient form,
in the way that the GGI designers feel would be its most useful implementation.
I don't mind people disagreeing with whether that sentiment is valid or
not. I do object to arbitrarily saying, "We have X, X is all we need,
the way X is designed is fine, don't even question it, X is all you need,
we love you yeah yeah yeah...".

Besides the performance/functionality issues they're referring to, there's
also an issue that this code will get used and tested much better if it comes as
a kernel config option than as a separate add-on. I used to compile all
my GNU utilities and install various kernel patches for netatalk et al., but
the quality & support got much better when netatalk support became a standard
kernel option, much like ipfwadm. I know there are other nice packages out
there, but these days I find I only have time to maintain those that come
in the standard kernel tree or with pre-made RPMs. Call it selfish, lazy or whatever.



> >If you think the current XF86
> >situation is convenient, then you are a sick puppy.
>
> It works. People use it.

Some people don't use it. I don't use it. A few times I've
gone through the work of configuring X for different cards and
monitors, with tools that come and go and work with very random
results, and half the time I just gave up (I don't think I'm
exceptionally stupid, and have no problem getting a lot of other packages
working even with quite a bit of hacking on unfinished code).
Then with the machines I do have it configured on, I just
don't find many compelling apps to run on it, so I end up
saying, "Why bother starting X at all?"

If someone who's been using Linux for 3 years avoids X because
it's a pain, how many newbies will avoid it? Why port more
apps to X when many people won't run X? It's the 90's, GUI's
are important. (Yep, I work on Windows & Mac machines most
of the time - though the RedHat installations have greatly
improved, & if the graphics & apps improve over the next
year, I may be able to use Linux as a general purpose machine
in the future, just like I used to work on a NeXT).

--
Bill Eldridge
Radio Free Asia
bi...@rfa.org

Chris Underhill

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

-----BEGIN PGP SIGNED MESSAGE-----

Dale Pontius <pon...@btv.ibm.com> wrote:
: In article <Pine.HPP.3.95.970126...@santaka.sc-uni.ktu.lt>,
: Martynas Kunigelis <alg...@santaka.sc-uni.ktu.lt> writes:
: >
: > Include GGI support? [y/n/M]:
: >
: I for one, press Y!! (Though [y/n/M] is a good default)

Ditto - the only times I've had to reboot Linux in recent history are
because the video hardware locks up, rendering the machine unusable to
a local user. For instance, if you're not running xdm, try issuing the
command

killall -9 X

from an xterm window. On both the machines I've got access to, this
freezes the display and knocks the keyboard out cold. The only
solution is to log in remotely and reboot. Sure it maybe possible to
unlock it using kbd_mode/restoretextmode etc., but entering these
commands from a remote machine (as root) invariably fails, and other
"tricks" such as changing text mode with SVGATextMode or running
startx from the remote system lock the machine up totally causing me
to power cycle.

This is a real blight on Linux IMHO, and apart from poor NFS
performance, is the only complaint I have about it. Anything that
minimises the risk of these problems MUST be TRTTD.

Sure, if GGI wasn't done properly, there's the risk of kernel bloat,
but from my reading of the docs, the developers have got the right
idea. I for certain wouldn't object to losing 10-20 pages of
unswappable memory if it stopped X/dosemu/whatever trashing my
h/w. With the bonus of getting rid of suid binary-only programs and
allowing much faster graphics access through operations that *have* to
be done in kernel space, such as DMA, I can't see what Linus is
objecting to.

Oh well, if GGI can be made to work, is done properly, and if popular
applications arise that make use of it, then maybe Linus can be
persuaded. Heck, the vm86plus stuff from dosemu is finally in the
kernel proper, despite several years of Linus' objections to it :-)


-----BEGIN PGP SIGNATURE-----
Version: 2.6.2i

iQCVAwUBMvES62ZVEN0KDxVBAQGTngQAlDsawYb6gjDbWX4sGtPg1jfurp9zigFS
Hx8c9p1DXq0GySRMCX5gin1SvkhOWiMyZVa8WAJmwU7HREumpQ6P5JtWQoq4EJah
47sTGxKoCuQmK3aQhgrVc2Zu6qCQFnzuFXxTBBs191nquYHYKHFfzzS8pOFTlWe2
Tt79gKwDPOM=
=FP3Y
-----END PGP SIGNATURE-----

Jason Mcmullan

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

David Whysong (dwhy...@twiki.physics.ucsb.edu) wrote:

: There is already an X server which has been modified to run with GGI. It
: was available from a link at the GGI home page at

: http://synergy.caltech.edu

: (which I can't seem to access today). I'm not sure if it is complete yet,
: though. Does anyone have performance comparisons between it and the
: comparable XFree server?


It's 'mostly working, but not well tested' - until I get my
ATI Mach64 driver working, the only tested mode is 320x200x8bit

http://www.ul.cs.cmu.edu/~jmcc will provide you w/the X server
(and the _tiny_ patches that are need to X11R6 - no XFree86
sources needed!)

--
Jason McMullan - Research Programmer, Robotics Institute, CMU

Me: http://www.ul.cs.cmu.edu/~jmcc
Linux GGI: http://synergy.caltech.edu/~ggi

Matthew Crosby

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5cqcau$d...@halon.vggas.com>,

James Youngman <JYou...@vggas.com> wrote:
>In article <5cpu3f$q...@newshost.nmt.edu>, yoda...@chelm.cs.nmt.edu says...
>
>>I'm reacting to the exceptionally weak technical arguments I find here
>>for GGI.
>
>The XFree86 server is -- in my experience -- stable (except the beta stuff).
>
>The same is not true of SVGAlib programs. It's very easy to lose
>keyboard/video when an SVGAlib program does something nasty. At the cost of
>(say) 800 bytes of [non-module] kernel code we could fix this completely.


Speaking as someone who is developing his own experimental windowing system,
I for one would love some sort of video hardware abstraction. Whether
it is GGI or SVGAlib is another matter, but I would also love not having to
make it suid.

I'm sure the gnustep people, who are aiming for a Display PostScript system,
and the Mgr people, and all the other X haters would agree with me.

--
Matthew Crosby cro...@cs.colorado.edu
Disclaimer: It was in another country, and besides, the wench is dead.

Jason Mcmullan

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Markus Gutschke (gut...@uni-muenster.de) wrote:
: A typical application will not directly access the device. Rather it
: will use a library that interfaces to the GGI kernel space
: driver. Currently there already is a library that emulates the libSVGA
: API, so that old programs can benefit from the advantages of the
: hardware abstraction provided by GGI. I have not tested this library,
: yet, so I cannot comment on how well it works, though.


As the author of the SVGAlib emulation, it appears to work for
everything I've tried it on, with the exception of zgv (does
some _really_ weird stuff) and doom (I don't have a JumpLIB
development system - if anyone wants to make an a.out version,
_please_ do! Then we can have some _real_ performance indicators
w/Doom!)

: It is intended that there will be a special library that allows for
: taking advantage of special chip set features (if available); this
: should appeal to authors of video games who want to make use of
: hardware acceleration.

Yep. The plan so far is to have:

  Kernel
  ------
  GGI Lib  <-  libati.so, libtseng.so, libmatrox.so, etc.
  ------        ^
   App          |_ user- or commercially-contributed
                   'acceleration' drivers that use
                   card-specific acceleration ioctls.
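
As a guess at how the libGGI side of that picture could pick up one of
those card libraries at run time (the library names come from the
diagram; the entry point and the rest are invented, and you'd link
with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*accel_init_fn)(int fd);

    /* Try to load e.g. libati.so for an "ati" card; fall back to
     * the generic software paths if no acceleration driver fits. */
    int load_accel(const char *card, int fd)
    {
        char path[64];
        void *handle;
        accel_init_fn init;

        sprintf(path, "lib%s.so", card);   /* card name assumed short */

        handle = dlopen(path, RTLD_NOW);
        if (handle == NULL)
            return -1;                     /* software rendering only */

        init = (accel_init_fn) dlsym(handle, "ggi_accel_init");
        if (init == NULL || (*init)(fd) < 0) {
            dlclose(handle);
            return -1;
        }
        return 0;          /* card-specific ioctl fast paths enabled */
    }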

: There will also be (or maybe there already is) an X server which
: interfaces with the GGI graphics device. This is necessary, because
: under GGI no other process should directly access the graphics
: hardware. If this goal can be achieved, the risk of having to restart
: the machine because the video card was left in an undefined state,
: should cease to exist.

I wrote the X server (see http://www.ul.cs.cmu.edu/~jmcc), but
until I get the ATI Mach64 driver working, 320x200x8 is the only
tested mode. ;^)

: The long-term goal is also for the libraries (both libSVGA and
: libGGI) to have a different back-end that interfaces to an X server
: rather than to the GGI interface. This would allow for running the
: same programs both locally (at maximum performance) and over a network
: connection.


Exactly. Simply putting a 'LD_LIBRARY_PATH=/usr/lib/ggi-x'
in your environment will load the GGI X emulation libs (once
they're written ;^) for all console programs.

: The intended goal of the GGI project does not seem to be producing
: yet another incompatible API (which some people accuse it of doing),
: but rather of making existing API's more reliable by providing a
: decent abstraction layer.


You've hit the nail on the head with a sledgehammer. The reason
we're making a 'libGGI' is less to make yet another API (that's
the Berlin Group's job ;^) than to provide a working example
of fast, reliable kernel-driven graphics operations.

Victor Yodaiken

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <vc7k9ox...@jupiter.cs.uml.edu>,
Albert D. Cahalan <no_junk_m...@jupiter.cs.uml.edu> wrote:
>In user space, it is not possible to service an IRQ or move a bitmap
>via DMA transfer. It also requires a worse context switch. (The X server
>would have to go through the kernel to reach the graphics server!)

So you need a device driver.

>The user space server idea severely reduces the benefits of a unified
>graphics system and is bad for performance.

The question here is: do the GGI folks know how to make a unified
graphics system that is significantly better than the X graphics
interface?

>> From what I see, some people seem to believe that a standard can
>> be imposed on graphics programmers and manufacturers by putting
>> the standard into the Linux kernel. There is much reason to
>> doubt this theory.
>
>GGI supports both X and SVGAlib. Other APIs can be supported.

From what I see on the web site, this really should be
"GGI intends to support both ... "
Let's see a working system.

Albert D. Cahalan

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Ketil Z Malde <ke...@imr.no> writes:
> Jari Soderholm <jaso...@cdlinux01.ntc.nokia.com> writes:

>> GGI is the only way to get some decent graphics for Linux,
>> X is slow memory hog, it is no fun for home users.
>
> Perhaps you make the right decision, but for the wrong reason.
> X isn't all that slow compared to other window systems
> alternatives, though it could arguably be quite a bit faster.
>
> But it does provide one very important feature: network
> transparency. Think about it, perhaps today many home users
> aren't connected in any way, in a short while everybody will be.
> And with nice, high bandwidth connections too.

People will still run most apps local because it is faster.
Why pay for bandwidth and remote CPU time while suffering
from latency?

X was not designed for network transparency. It can display things
across the network, but I can tell that the network is there!
If X was designed for network transparency, the widgets would all
be in the server. Right now, "scroll bar" means that the app must
send many commands to the X server. The "file open" dialog box is
even worse.

X has some good points, but it is also one of the largest problems
that Unix systems have. The networking is not so special either,
since I can buy software to do that in Windows.

Paul JY Lahaie

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <32f025e4...@news.ebc.ericsson.se>,
Mats Liljegren <mats.li...@ebc.ericsson.se> wrote:

>On top of that, you could have a server running in user space. This
>server would emulate what the video card can't do in hardware. In this

...


>Would this be a good solution?

No. You want those routines to be part of a library so they can be
called directly from the "C" program; otherwise your program does a
context switch on all non-accelerated functions. The best way to handle
the situation is a kernel level module which can make sure the video card
can be put in a sane state and leave all the rest to a user level program.
Unfortunately, with most PC hardware, this isn't possible (the cards
aren't very co-operative about that). For example, a module which sets up
the video mode and allows the user program to do the drawing (using
acceleration if supported..). You would rather not have syscall
overhead on drawing primitives.
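
To illustrate with invented names -- the same putpixel done through a
per-call kernel entry, versus through a frame buffer the kernel module
mapped into the process once at mode-set time:

    #include <unistd.h>

    extern int gfx_fd;           /* device set up by the kernel module */
    extern unsigned char *fb;    /* mmap()ed once by the library       */
    extern int pitch;            /* bytes per scanline                 */

    struct pix { int x, y; unsigned char c; };

    /* Slow: every single pixel pays for a kernel entry. */
    void putpixel_syscall(int x, int y, unsigned char c)
    {
        struct pix p;
        p.x = x; p.y = y; p.c = c;
        write(gfx_fd, &p, sizeof p);
    }

    /* Fast: after the one mode-setting ioctl at startup, drawing
     * is plain memory access at full speed. */
    void putpixel_direct(int x, int y, unsigned char c)
    {
        fb[y * pitch + x] = c;
    }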

- Paul

Victor Yodaiken

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <5cnrrh$d...@cebaf4.cebaf.gov>,
Larry Doolittle <doo...@recycle.cebaf.gov> wrote:
>: Suid is a wonderful idea. It has some security problems, but they
>: can be avoided with careful design.

>
>Careful design of suid programs does not result in
>-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
> ^ ^^^^^^^

Perhaps we have different ideas of what programs are supposed to do.
I use X windows every day. It does the job. It supports all sorts
of _useful_ programs that help me in my work. I'd prefer a small
fast version of X that didn't have so much configuration mess, but
I certainly prefer X to a small, fast program that doesn't work.


Justin Hahn

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

Joe Buck (jb...@synopsys.com) wrote:
: doo...@recycle.cebaf.gov (Larry Doolittle) writes:
: >Careful design of suid programs does not result in
: >-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
: > ^ ^^^^^^^

: Large suid programs can still be safe if they perform operations requiring


: privilege in a small initialization portion of the program, then revoke
: their privilege. The amount of code to be verified is then quite small.
: The X server is written in that way.

I for one don't exactly like the idea of installing a suid-root binary off of
the net, where the source code is not necessarily trustworthy. Imagine if a
hacker ever got it in his head to write a Linux virus (or trojan horse). SUID
binaries are the IDEAL vector. An SUID-root binary gets total access to the
system; it's not FORCED to revoke those permissions. Security breaches, system
integrity holes, all sorts of lovely problems open up. Plus, not everyone who
writes suid-root programs does it carefully. Why have to worry about it?

If you actually understood GGI you would know the amount of code they actually
want to add to the kernel isn't THAT large, and a lot of the other stuff (last
time I checked) would be a library or a card-driver, and that could/would be
external to the kernel.
--
-justin

o r c e l l . c h i . i l . u s

unread,
Jan 30, 1997, 3:00:00 AM1/30/97
to

In article <32F10B...@rfa.org>, Bill Eldridge <bi...@rfa.org> wrote:
>> There is excellent evidence that one can run a sophisticated powerful
>> graphics system that has a huge base of programs from user mode.
>> Look, this is a completely nonsensical debate and I can't believe
>> that I'm wasting time with it. But there is no reason why GGI folks
>> can't supply a driver and a simple patch. All this screaming about
>> needing recognition for code that doesn't actually work is very
>> offputting.
>
>Calling people's work of the last year "nonsensical" is also offputting.

He didn't. Calling this argument nonsensical is not the same as calling
the development nonsensical.

____
david parsons \bi/ o...@pell.chi.il.us
\/

Dimitri Maziuk

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

Ketil Z Malde wrote:
>
> Counting the support for other hardware, namely net and scsi, I find
> that both entail more than 100K lines of C code. Are graphics really
> that much less standard than network or scsi hardware?
>

Yes. Sound cards are about the only comparable h/w in this respect.
On a PC the last standard for graphics was VGA.

Dimitri
--
emaziuk at curtin.edu.au
-----------------------------------------------
If Jesus was Jewish, why did he have a Puerto-Rican name?
( Zen koan )

Dimitri Maziuk

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

H. Peter Anvin wrote:
>
> Followup to: <32EFA87F...@eris.dev.null>
> By author: Dimitri Maziuk <di...@eris.dev.null>
> In newsgroup: comp.os.linux.development.system

> >
> > I'm saying that graphics hardware should be accessed like any other
> > hardware -- via system calls. GGI just happen to be the guys trying
> > to implement something like that. And given a choice between a suid
> > X server with DGA and a GGI kernel module, I'll take the module.
> >
>
> Why?
>

Why, games, of course! :)
(Given the speed of Linux's development, it's entirely possible that
I'll see [lots of] games using GGI before I'm too old to play them,
he-he-he.)

I don't really care about some KBs of disk space I'll save on the
code for all those chipsets in X server, not with today's hd prices.
I don't mind suid binaries either -- I've essentially a single-user
box and I log in as root half the time anyway (I know, I know.)

Cheers

Panu Matilainen

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

On 29 Jan 1997 17:12:24 GMT, Geert Uytterhoeven <ge...@cs.kuleuven.ac.be> wrote:
>In article <E4rpr...@greenie.muc.de>, ge...@greenie.muc.de (Gert Doering) writes:
>|> nie...@cip.e-technik.uni-erlangen.de (Hartmut Niemann) writes:
>|>
>|> >When I write a graphics application, i would prefer seing it fault on
>|> >a bad pointer instead of crashing everything, because it is root-suid.
>|>
>|> Get your facts straight. Even a uid-root program is protected by the usual
>|> memory protection. The only exception is that it *may* ask the kernel to
>|> permit it access to some I/O ports and memory locations - access to other
>|> locations is still not allowed.
>
>Wrong. A setuid root program can access all memory locations in your machine.

The only way a root program can access all memory is to open /dev/mem (which
is possible, being root), but other than that, an occasional wild pointer in a
root program is not going to crash Linux (or any Unix). The normal memory
protection _is_ there; the point is that you can get at everything through the
filesystem, since you're allowed to read/write anything, and that's
basically the only way to damage a Unix system.
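
For example (a sketch -- the page size and offset are arbitrary), root
gets at physical memory through the filesystem like this, no wild
pointers required:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char *p;
        int fd;

        fd = open("/dev/mem", O_RDWR);          /* root only */
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Map the first page of physical memory read/write. */
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("first byte of physical RAM: 0x%02x\n", p[0]);
        munmap(p, 4096);
        close(fd);
        return 0;
    }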

Panu

>
>|> To the contrary, a GGI module living *in kernel* has *NO* protection and
>|> can very easily crash the whole machine when accessing a bad pointer. So?
>
>How many bugs are there in e.g. the network drivers? Those can crash the whole
>machine too.
>
>Greetings,
>
> Geert
>
>--
>Geert Uytterhoeven Geert.Uyt...@cs.kuleuven.ac.be
>Wavelets, Linux/m68k on Amiga http://www.cs.kuleuven.ac.be/~geert/
>Department of Computer Science -- Katholieke Universiteit Leuven -- Belgium


Andre Fachat

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

I would really love to have something like GGI that prevents
dosemu from f*cking up my console! (Luckily console switching and X were
still running, so I could reboot gracefully. But it's nice to see all
that weird colorful graphics mess, monitoring some system RAM on the
console.... :-\ )

André

--
André Fachat |"I do not feel obliged to believe that the
Institute of physics, | same God who has endowed us with sense,
Technische Universität Chemnitz | reason, and intellect has intended us to
http://www.tu-chemnitz.de/~fachat | forego their use" -- Galileo Galilei

Arlet Ottens

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

Justin Hahn wrote:

> I for one don't exactly like the idea of installing a suid-root binary off of
> the net, where the source code is not necessarily trustworthy. Imagine if a
> hacker ever got it in his head to write a Linux virus (or trojan horse). SUID
> binaries are the IDEAL way. An SUID ROOT gets total access to the system, it's
> not FORCED to revoke those permissions. Security breach, system integrity
> holes, all sorts of lovely problems open up in it. Plus not everyone who
> writes suid-root programs do it carefully? Why have to worry about it?

It is true that a suid program is an ideal way for a hacker to wreak havoc on
your system. However, any competent hacker could probably achieve (almost) the
same results with *any* binary that you blindly get off the net and execute.
It would for instance be quite trivial for a normal binary to destroy all your
personal files.

BTW: in case of the X-server you were referring to: you can get the sources
instead of the binary, and compile it yourself. This will not give you any
added security however, unless you spend a few days carefully going through
every line of code. This is of course not X-server specific, it applies to any
source code you get off the net.



> If you actually understood GGI you would know the amount of code they actually
> want to add to the kernel isn't THAT large, and a lot of the other stuff (last
> time I checked) would be a library or a card-driver, and that could/would be
> external to the kernel.

I don't understand why letting anybody add code to a kernel is any safer than
running a suid binary. It is not. Imagine how a hacker would feel if he could slip
a trojan into the kernel distribution. Even if I can trust people not to be
malicious and put trojans into the kernel, it is still quite possible that a
device driver contains a bug that can be exploited to create a security hole.
(For instance, I might be able to send some commands to the video card so that
it will DMA part of its frame buffer into some other user's process space.)

--
Arlet.

Ketil Z Malde

unread,
Jan 31, 1997, 3:00:00 AM1/31/97
to

no_junk_m...@jupiter.cs.uml.edu (Albert D. Cahalan) writes:

> People will still run most apps local because it is faster. Why pay
> for bandwidth and remote CPU time while suffering from latency?

Because bandwidth isn't the only thing that has a price. I have, on my
desk, an old HP715 -- it used to be a top-notch workstation, but is now
outdated. The display still is really nice, and I run all my apps
(well, most) on a Sun Ultra somewhere. That's a lot faster than running
them locally on my now underpowered CPU, and it's a lot cheaper than
putting an Ultra on my desk (and everybody else's). Sure, a local Ultra
might be a bit faster...

What we do here is to centralize CPU resources, and replace Unix
workstations with X-terminals. This achieves:

* simplified maintenance (quick, upgrade all versions of <app> on all
machines that have them installed)
* more reliable service to users (I can run Glance or similar on a
couple of servers, but never on 70 workstations)
* simplified user operation (no more looking for files on local disks,
because a user doesn't remember where he put them)

This requires a good network design, but we've got that.

> X was not designed for network transparency. It can display things
> across the network, but I can tell that the network is there! If X
> was designed for network transparency, the widgets would all be in the
> server. Right now, "scroll bar" means that the app must send many
> commands to the X server. The "file open" dialog box is even worse.

I agree with this. X is designed to provide network transparent
raster displays, not a network transparent GUI. Didn't I just post some
thoughts about that?

I agree, we need a separation of interface structure from interface
design. I'll say that again, because I think it's that important: We
need a separation of interface structure from interface design.

Then an app provides the interface structure, and the actual interface
is rendered graphically by the server. If a graphically rendered
interface is what you wanted, of course. It could equally well be ANSI
terminal rendered, text based, voice based, braille... or
communication with a script for automated application use.

And of course, users could configure their server in any way, providing
consistent look and feel to applications, without tying users to a look
and feel determined by someone else.
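
To make that concrete, here is a toy sketch (the types and names are
invented for illustration -- no such API exists today) of an app
declaring interface *structure* and leaving all rendering policy to the
server:

/* Toy sketch only: invented types.  The app says *what* the
 * interface is; the server decides *how* to present it. */
#include <stdio.h>

enum ui_kind { UI_MENU, UI_ACTION };

struct ui_node {
    enum ui_kind kind;
    const char *label;          /* semantic label, not pixels */
    void (*activate)(void);     /* fired however the user triggers it */
    struct ui_node *children;
    int nchildren;
};

static void do_open(void) { printf("open chosen\n"); }
static void do_quit(void) { printf("quit chosen\n"); }

static struct ui_node file_items[] = {
    { UI_ACTION, "Open", do_open, NULL, 0 },
    { UI_ACTION, "Quit", do_quit, NULL, 0 },
};
static struct ui_node file_menu = { UI_MENU, "File", NULL, file_items, 2 };

int main(void)
{
    int i;

    /* a real server would render this tree as a pull-down menu, a
     * voice prompt, or a braille line; here we just dump it as text */
    printf("menu: %s\n", file_menu.label);
    for (i = 0; i < file_menu.nchildren; i++)
        printf("  item: %s\n", file_menu.children[i].label);
    return 0;
}

The same tree could drive a Motif-style renderer, a text menu, or a
script -- the application never needs to know which.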

I feel this is the way to go, unfortunately, there's a lot of legacy
code around using (that is, strongly bound, knit, and chained to)
various graphics libraries. Applications want to be free!

Ketil Z Malde

Jan 31, 1997, 3:00:00 AM

yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

>>In user space, it is not possible to service an IRQ or move a bitmap
>>via DMA transfer. It also requires a worse context switch. (The X
>>server would have to go through the kernel to reach the graphics
>>server!)

> So you need a device driver.

Yes?

> The question here is: do the GGI folks know how to make a unified
> graphics system that is significantly better than the X graphics
> interface?

No, but luckily, they're not doing that.

Steffen Seeger

Jan 31, 1997, 3:00:00 AM

mats.li...@ebc.ericsson.se (Mats Liljegren) writes:

>On 28 Jan 97 10:27:02 GMT, see...@physik.tu-chemnitz.de (Steffen
>Seeger) wrote:

>>To give you only one reason for being in the kernel: you can use
>>interrupts there. And modern graphic cards need these to reach good
>>performance. Because otherwise you will end up polling hardware in a
>>busy loop (not a good idea in a multitasking environment).

>If you have video drivers, one for each video card (you could of
>course load several of them at once, if you have several types of
>cards installed), in the kernel space, this would suffice. Wouldn't
>it?

>On top of that, you could have a server running in user space. This
>server would emulate what the video card can't do in hardware. In this
>way, the interface to the server would be the same, no matter what the
>video card can do.

>Would this be a good solution?

This is exactly how GGI is planned to work. However, to manage the
several display cards you need to have some administrative code in the
kernel and an underlying concept that is able to handle them properly.
This is what GGI-0.0.9 implements and what we proposed to have in the kernel.
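
To sketch that split (the ioctl numbers and struct below are invented
for illustration -- they are not the real GGI interface): the user-space
library asks the kernel driver whether the card accelerates an
operation, and emulates it in user space if not.

/* Illustrative sketch only: GGI_HAS_BLIT, GGI_DO_BLIT and struct blit
 * are invented names, not the actual GGI interface. */
#include <sys/ioctl.h>

#define GGI_HAS_BLIT 0x4701     /* invented: "does the card blit?" */
#define GGI_DO_BLIT  0x4702     /* invented: "blit this" */

struct blit { int sx, sy, dx, dy, w, h; };

/* user-space fallback for cards that can't do it in hardware */
static int software_blit(struct blit *b) { (void)b; return 0; }

int do_blit(int fd, struct blit *b)
{
    int accel = 0;

    /* ask the kernel driver (which can use IRQs and DMA) first */
    if (ioctl(fd, GGI_HAS_BLIT, &accel) == 0 && accel)
        return ioctl(fd, GGI_DO_BLIT, b);

    return software_blit(b);
}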

The full GGI is still some coding away from being a usable substitute for
normal users, but when it is ready and has shown its usefulness, we would like
to have *a chance* to get it included. Except for the USENIX statement,
Linus had a pretty different opinion about that in the past. As well,
if Linus 'calls the shots' he would also make the long-term decisions,
and we wanted to find an arrangement to avoid unnecessary recoding of things
that are known not to work. Instead, weird VGA support is included, and the
fixes to have e.g. the same codebase on Alpha and i386 are far from clean.

Steffen

Panu Matilainen

Jan 31, 1997, 3:00:00 AM

On 30 Jan 1997 14:19:33 GMT, Mikko Rauhala <m...@laulujoutsen.pc.helsinki.fi> wrote:
>On 29 Jan 1997 02:18:20 GMT, Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
>>come on. It's exceptionally easy to write a server that will
>>fork off children to run ordinary user programs with open files
>>and ioperms and memory windows in the right place.
>
>And still the process can probably screw up the card or perhaps the whole
>OS by using those IO ports "inappropriately". PC graphics cards are not
>meant for this kind of thing. And still the program could not access
>IRQs. And still the kernel wouldn't have any way to recover should the
>program crash.

A buggy video driver isn't any better in kernel space. Don't get me wrong,
I'm in favor of GGI, but the problem is really quite complicated. Maybe it's
the PC graphics cards that should be fixed before anything else...

Panu

Mikko Rauhala

Jan 31, 1997, 3:00:00 AM

On 31 Jan 1997 08:38:36 GMT, Panu Matilainen <p...@popcorn.fi> wrote:
>A buggy video driver isn't any better in kernel space. Don't get me wrong,

I meant the other parts of the program. If they crash, the console is left
in an undefined state.

>I'm in favor of GGI but the problem is really quite complicated. Maybe it's
>the PC graphics cards that should be fixed before anything else...

Yes, but that's not a realistic option for now. Besides, a more efficient
interface using GGI can be created for better cards after they arrive.

Justin Hahn

Jan 31, 1997, 3:00:00 AM

Arlet Ottens (ar...@spase.nl) wrote:

: It is true that a suid program is an ideal way for a hacker to wreak havoc on
: your system. However, any competent hacker could probably achieve (almost)
: the same results with *any* binary that you blindly get off the net and
: execute. It would for instance be quite trivial for a normal binary to
: destroy all your personal files.

Yes, but an suid-root program could destroy ALL your files: /usr, /bin, /lib,
everything. In a hacker's eyes that's a more desirable goal than just
someone's personal files. The more damage the better is the way they think. In
the case of an ISP, or the like, that's a lot of files. (And there are ISPs
running Linux.)

: BTW: in case of the X-server you were referring to: you can get the sources
: instead of the binary, and compile it yourself. This will not give you any
: added security however, unless you spend a few days carefully going through
: every line of code. This is of course not X-server specific, it applies to
: any source code you get off the net.

How many people ACTUALLY do this, though? I've done it once, and I'm usually
good about it. I've upgraded XFree 3 times. A lot of people don't know how to
compile and don't know what to do when the compile fails (as they
occasionally do).


: I don't understand why letting anybody add code to a kernel is any safer
: than running a suid binary. It is not. Imagine how a hacker would feel if he
: could slip a trojan into the kernel distribution. Even if I can trust people
: not to be malicious, and put trojans into the kernel, it is still quite
: possible that a device driver contains a bug that can be exploited to create
: a security hole. (for instance, I might be able to send some commands to the
: video card, so it will DMA part of its frame buffer into some other user's
: process space)

But a hacker can't just ADD code to the kernel (unless he corrupts the
sources). GGI is not a bunch of hackers. Plus, we have peer review of the
kernel; kernel code is pretty carefully combed (usually). Also, GGI will only
provide an interface, akin to the graphics libraries in DOS (anyone remember
BGI?), where the hardware is removed from the programmer, and you don't have
that kind of ability. Your objections are irrational. I can see valid
objections to GGI, but 1) you're not making them, and 2) they are far
outweighed by the advantages. Please stop arguing against something you don't
understand.


--
-justin

Evan Leibovitch

Jan 31, 1997, 3:00:00 AM

In article <32F0EA32...@eris.dev.null>,
Dimitri Maziuk <di...@eris.dev.null> wrote:

>> > I'm saying that graphics hardware should be accessed like any other
>> > hardware -- via system calls. GGI just happens to be the guys trying
>> > to implement something like that. And given a choice between a suid
>> > X server with DGA and a GGI kernel module, I'll take the module.

>> Why?

>Why, games, of course! :)

I can see another possible application.

For Linux to work well in a Network Computer (NC) environment,
browsing and Java performance will be critical. I can see circumstances
under which a browser/JVM writer would prefer to bypass X and go directly
to the hardware (or, rather, the abstraction layer that GGI promises),
reducing bloat and increasing performance.

While its long-term success is far from assured, there is enough
potential for computers in which the browser/JVM is *the* user
interface that having a Java-capable Linux browser which bypasses
X might be a good idea at some point.

Certainly, such an approach is not for everyone, and I'm certainly not
advocating the death of X. But there are times when X is overkill, when
the user doesn't need multiple windows and Java programs can do the tasks
of conventional X clients. A high-performance browser might be just the
thing that makes Linux in demand as a premier NC platform, and GGI seems
to be a hardware-independent way of taking the bloat of X out of the
picture (pun intended).

--
Evan Leibovitch, Sound Software Ltd, located in beautiful Brampton, Ontario
Supporting PC-based Unix since 1985 / Caldera & SCO authorized / www.telly.org
Trains stop at train stations. Buses stop at bus stations. I use a workstation.

Byron A Jeff

Jan 31, 1997, 3:00:00 AM

In article <5cmc1c$1...@newshost.nmt.edu>,
Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
-In article <5clv2d$o...@solaria.cc.gatech.edu>,
-Byron A Jeff <by...@cc.gatech.edu> wrote:
->And the windowing system won't change from what I understand. The real
->problem is that since the kernel has no idea about what's going on in
->the video hardware, it's difficult to reset it properly. In addition
->it requires SUID access for otherwise ordinary programs (due to I/O
->port access). Do you realize that an ordinary user cannot write a SVGALIB
->program? That's a clear sign that some part of video access should be
->in the kernel.
-
-come on. It's exceptionally easy to write a server that will
-fork off children to run ordinary user programs with open files
-and ioperms and memory windows in the right place.

Of course it is. But why is it necessary? I don't need a server to access
my mouse or my printer. Why should I need one to access my screen?

One of the major jobs of a kernel is to provide abstractions for hardware
devices. GGI provides an abstraction for my screen. All of the port/mem
and different types of video cards will be abstracted. I like that.

-
->->I've always thought that the kernel should handle all (or as much as
->->possible) hardware access. After all, the sound driver is in the
->-
->-Why? I think the kernel should provide only those services that it
->-needs to provide. There is a difference between simplicity in
->-engineering and imposing a simplistic scheme. Inconsistency
->-is no sin in a working kernel.
->
->I've just been following GGI from that I've been reading. The kernel
->part of GGI only provides essential video services, all the rest is
->still done at the user level via servers and libraries....
-
-So some user programs will still touch hardware. Then what's the
-big deal?

My big deal is that if I want to develop a graphics application for the
console, I shouldn't have to become root to do it.

-
->->kernel, and I don't hear anyone complaining about that. Anyone
->->with a different philosophy could answer N to the GGI kernel
->->question.
->-
->-And anyone with your philosophy is welcome to build and distribute
->-a kernel that has graphics built in and that exhibits wonderful behavior.
->-So far, GGI is a nice sounding idea with no backup. Make it work
->-and then you will be able to watch as other people hurry to
->-incorporate it. In the meantime, since your work can progress without
->-any official stamp of approval, I can't see any grounds for your discontent.
->
>GGI can surely proceed. Just release a module or kernel patch and let
>the results prove themselves.
-
-Exactly.

Well at least we agree here...

BAJ
--
Another random extraction from the mental bit stream of...
Byron A. Jeff - PhD student operating in parallel - And Using Linux!
Georgia Tech, Atlanta GA 30332 Internet: by...@cc.gatech.edu

Ray Auchterlounie

Jan 31, 1997, 3:00:00 AM

Panu Matilainen <p...@popcorn.fi> wrote:
[...]

>A buggy videodriver isn't any better in kernel space. Don't get me wrong,

The problem is that at the moment you get several different, possibly
buggy, video drivers using the same card from user space.

Even if the video code isn't buggy, a userspace program can be killed
halfway through a set of register changes...

With a single kernel-space driver, at least if it does have bugs you
know where to look - if the console gets messed up by switching
between X and an SVGAlib program, how do you know which video driver
code to even start looking in?

ray

--
Ray Auchterlounie <r...@kythera.demon.co.uk>
"Forty Two! Is that all you've got to show for
seven and a half million years' work?"

Justin Hahn

Feb 1, 1997, 3:00:00 AM

: >An abstraction layer, from what I understand of this project, would have to
: >be SUID root, and would not have as fast access as kernel code. Plus it
: >would bring all the other problems SUID brings.

: I'd like to see a more technical justification. Many people are
: under the mistaken impression that kernel code automatically
: runs faster. The correct principle of OS design is to make the
: kernel as unobtrusive as possible so that user programs can
: do actual work.

You're right and you're wrong. From what I am given to understand of GGI, many
of the operations they need to do, in order to run properly, are kernel-space
only. These operations make things faster. Not EVERYTHING is sped up by kernel
code, but some things are.

: There is excellent evidence that one can run a sophisticated powerful
: graphics system that has a huge base of programs from user mode.
: Look, this is a completely nonsensical debate and I can't believe
: that I'm wasting time with it. But there is no reason why GGI folks
: can't supply a driver and a simple patch. All this screaming about
: needing recognition for code that doesn't actually work is very
: offputting.

GGI has been available for quite a while now (6 months that I know of, and I'm
given to understand a year plus). The code works, albeit only for a few
drivers. If you want a logical argument, then here:

1) Graphics is presently a kludge

2) GGI is 1 solution

3) GGI is efficient

4) Kernel space code gets more development than patched kernel code

5) Kernel code, in some cases, is faster than user-space code

In GGI's case, putting code into the kernel (and they don't want a whole lot of
code, if you look at what they want) will provide faster access. And simpler
access. There's a lot to be said in favor of simplicity. Plus, putting code in
the kernel will have two added benefits: 1) it will get more development and
thus become more stable and support more hardware faster; 2) more people will
test it out and see if it fits their needs.

: >1) Most people do not currently use RT-Linux.

: The poor lambs.

The same can be said of those who don't use GGI. Too bad... (I wish I could
run it, but presently my card is unsupported....)


: From day one, UNIX has relied on trusted daemon programs to offer
: services that can be offered from user mode. If you don't like this
: theory, OS360 may still be available.

All well and true. But there are times for user space and times for kernel
space. The GGI developers are saying the following: we want to put some basic
hooks and function calls in the kernel to allow faster, more direct access to
the hardware; we want to put most of the rest of the stuff in a user-space
library and possibly a module or user-space daemon of some sort. They are NOT
saying: let's shove this whole f'ing thing into the kernel, double its size,
and force everyone out there to use it.

: I'm reacting to the exceptionally weak technical arguments I find here
: for GGI.

: >Yes... But let me use an MS example. Win NT is "supposedly" a microkernel.
: >As such, in v3.51 its graphics subsystem was a non-kernel-space daemon. It
: >was slow as mud. Slower, even. So in v4.0, what does MS do? They put
: >graphics in the kernel. It flies. Being true to MS style, their graphics
: >subsystem isn't

: Nu? This proves nothing other than microkernels are hard to get right:
: something that is well known.

1) M$'s Win NoT is hardly a microkernel these days. I've read up on it, and
there's actually been a quote or two from the NT development team to this
effect (I forget the journal; I can find it if you really want me to back that
up). What it shows is that video in user space is slower than in kernel space.

: I'm afraid that arguing that "it worked for Microsoft" is no more
: persuasive than any of your other arguments.

I was afraid you'd say this, and I agree it is the weak point in my
argument. But this is a real-world case. Can you find a case where putting
code in kernel space made things worse? Do you have any support for it? What
I'm saying is, I can support my position with real evidence. You have shown
none. In the same vein, you haven't exactly given any real strong evidence for
your position.

: It works. People use it.

The same justification is given to me when I tell people that Windoze sucks and
I don't want to use it. If you are satisfied to use something that works and
that people use, Win '95 is available at your local software store. (And while
you may debate whether it "works" or not, many businesses use it primarily or
exclusively, so that's a strawman argument.)

--
-justin

Victor Yodaiken

Feb 1, 1997, 3:00:00 AM

In article <5ctf9g$o...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:
>Of course it is. By why is it necessary? I don't need a server to access
>my mouse or my printer. Why should I need one to access my screen.

lpd

>One of the major jobs of a kernel is to provide abstractions for hardware
>devices. GGI provides an abstraction for my screen. All of the port/mem

Can I use this form of argument too? How about: there must
be a relational data base in the kernel because one of the major jobs
of the kernel is to provide abstractions for hardware and certainly
a relational data base is an abstract view of secondary store?
Convinced? There must be an infinite precision arithmetic
implementation in the kernel because the kernel's job is to
provide an abstraction for that lame ALU with all those condition
bits and I don't want to trust a user program like a compiler
to provide the abstraction?

Ray Auchterlounie

Feb 1, 1997, 3:00:00 AM

Joe Buck <jb...@synopsys.com> wrote:
>doo...@recycle.cebaf.gov (Larry Doolittle) writes:
>>Careful design of suid programs does not result in
>>-rwsr-xr-x 1 root root 1500694 Jul 23 1995 /usr/bin/X11/XF86_S3
>> ^ ^^^^^^^

>Large suid programs can still be safe if they perform operations requiring
>privilege in a small initialization portion of the program, then revoke

Provided they revoke it properly (perl...) and provided there aren't
other ways to get back to privileged access (/proc had holes once).

>their privilege. The amount of code to be verified is then quite small.
>The X server is written in that way.

Designed in that way, maybe. SVGAlib is too - the first call should be to
set up the video, which drops privileges.

Designed != written - if it were, you wouldn't find a whole range of
SVGAlib programs, and X, in past Linux security alerts.

Are we confident we've found all the holes (and won't make new ones),
or should we use GGI to remove the potential?
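
For reference, the init-then-revoke pattern under discussion is tiny.
A minimal sketch, assuming Linux/x86 (the VGA port range is just an
example):

#include <stdlib.h>
#include <unistd.h>
#include <sys/io.h>     /* ioperm() -- Linux/x86 */

int main(void)
{
    /* privileged part: grab the VGA I/O port range */
    if (ioperm(0x3c0, 0x20, 1) < 0)
        exit(1);

    /* revoke root, permanently */
    if (setuid(getuid()) < 0)
        exit(1);

    /* ...the other million-odd bytes of server code run here,
     * unprivileged (though still holding the I/O port access).
     * Any hole hit *before* this point still hands out root. */
    return 0;
}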

Gert Doering

Feb 1, 1997, 3:00:00 AM

ge...@cs.kuleuven.ac.be (Geert Uytterhoeven) writes:

>In article <E4rpr...@greenie.muc.de>, ge...@greenie.muc.de (Gert Doering) writes:
>|> nie...@cip.e-technik.uni-erlangen.de (Hartmut Niemann) writes:
>|>
>|> >When I write a graphics application, I would prefer seeing it fault on
>|> >a bad pointer instead of crashing everything, because it is root-suid.
>|>
>|> Get your facts straight. Even a uid-root program is protected by the usual
>|> memory protection. The only exception is that it *may* ask the kernel to
>|> permit it access to some I/O ports and memory locations - access to other
>|> locations is still not allowed.

>Wrong. A setuid root program can access all memory locations in your machine.

A malevolent program can, by undergoing great efforts (mmap()ing in all
the memory, etc.) -- but we're talking about "bad pointer accesses" here,
and those *are* trapped in a suid-root program just fine.

>|> To the contrary, a GGI module living *in kernel* has *NO* protection and
>|> can very easily crash the whole machine when accessing a bad pointer. So?

>How many bugs are there in e.g. the network drivers? Those can crash the whole
>machine too.

And they do, occasionally. What does that prove?

gert
--
Yield to temptation ... it may not pass your way again! -- Lazarus Long
//www.muc.de/~gert
Gert Doering - Munich, Germany ge...@greenie.muc.de
fax: +49-89-3243328 gert.d...@physik.tu-muenchen.de

Victor Yodaiken

Feb 1, 1997, 3:00:00 AM

In article <5cu0e5$a...@kythera.demon.co.uk>,
Ray Auchterlounie <r...@kythera.demon.co.uk> wrote:
>Panu Matilainen <p...@popcorn.fi> wrote:
>[...]
>>A buggy video driver isn't any better in kernel space. Don't get me wrong,
>
>The problem is that at the moment you get several different, possibly
>buggy, video drivers using the same card from user space.
>
>Even if the video code isn't buggy, a userspace program can be killed
>half way through a set of register changes...
>
>With a single kernel space driver, at least if it does have bugs you
>know where to look - if the console gets messed up by switching

It's easier to debug kernel code?


Byron A Jeff

Feb 2, 1997, 3:00:00 AM

In article <5cu7ro$3...@newshost.nmt.edu>,
Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
>In article <5ctf9g$o...@solaria.cc.gatech.edu>,
>Byron A Jeff <by...@cc.gatech.edu> wrote:
>>Of course it is. But why is it necessary? I don't need a server to access
>>my mouse or my printer. Why should I need one to access my screen?
>
>lpd

Slight difference. I can talk to my printer via /dev/lp. lpd and friends
make up a print spooler, which is a service distinct from accessing the
actual printer. Whenever I hook up a printer I first do a:

cat > /dev/lp

to make sure it works. lpd is neither required nor involved in that process.

>
>>One of the major jobs of a kernel is to provide abstractions for hardware
>>devices. GGI provides an abstraction for my screen. All of the port/mem
>
>Can I use this form of argument too? How about: there must
>be a relational data base in the kernel because one of the major jobs
>of the kernel is to provide abstractions for hardware and certainly
>a relational data base is an abstract view of secondary store?

No. Not a good argument. A relational database is an abstraction implemented
on top of the secondary store. The kernel should (and does) provide an
abstraction for the secondary store (i.e. /dev/hda, /dev/sda, etc).

A video card is hardware. The kernel does not provide any access to it at
all. Therefore each graphics system (X, SVGA, etc.) ends up providing
its own interface to each video card.

A difference to say the least.

Let me ask you this: should each database manufacturer have to provide code
to directly access the IDE/SCSI disks it uses for its database?

>Convinced?

Not in the least.

> There must be an infinite precision arithmetic
>implementation in the kernel because the kernel's job is to
>provide an abstraction for that lame ALU with all those condition
>bits and I don't want to trust a user program like a compiler
>to provide the abstraction?

Actually you lose here. The Linux kernel does in fact have an FPU emulator
for exactly that reason, so that a 386 CPU without floating point can still
run floating-point programs. The kernel's job is to provide an
abstraction for floating point, be it an actual floating-point
unit or an emulation of one.

Next?

Open up your box. For each and every device in there (CPU, disk, parallel,
serial, CDROM, ethernet card, sound card, etc.) there is a kernel-level
device driver (video too, but only the basic text console). But there is no
kernel-level driver for graphics. Why? Because there are too many different
configurations of graphics cards to have a driver that addresses them all.
GGI takes a stab at defining that driver.

Why is the graphics device so different from any other piece of hardware in
the system?

Uwe Bonnes

Feb 2, 1997, 3:00:00 AM

Ketil Z Malde <ke...@imr.no> wrote:
: yoda...@chelm.cs.nmt.edu (Victor Yodaiken) writes:

: > I'm suspicious of design from misty principles. Graphics hardware is
: > _different_ from other hardware: it's less standard and involves big
: > chunks of frame buffer.

: Counting the support for other hardware, namely net and scsi, I find
: that both entail more than 100K lines of C code. Are graphics really
: that much less standard than network or scsi hardware?

: (I'm no kernel hacker, so flame/correct if I misunderstood something
: fundamental)
: ~kzm

I fully agree! There is sound support in the kernel, so graphics should
be included too.

--
Uwe Bonnes b...@elektron.ikp.physik.th-darmstadt.de

Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Victor Yodaiken

Feb 3, 1997, 3:00:00 AM

In article <5d30hf$k...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:
>Slight difference. I can talk to my printer via /dev/lp. lpd and friends
>make up a print spooler, which is a service different from accessing the
>actual printer. Whenever I hook up a printer I first do a:

So to get multiplexing, you need the suid program.

>>Can I use this form of argument too? How about: there must
>>be a relational data base in the kernel because one of the major jobs
>>of the kernel is to provide abstractions for hardware and certainly
>>a relational data base is an abstract view of secondary store?
>
>No. Not a good argument. A relational database is an abstraction implemented
>on top of the secondary store. The kernel should (and does) provide an

I see. So your original argument should have been: GGI must be
in the kernel because the kernel is supposed to provide an abstraction
that you find convenient? The disk as an array of bytes is an abstraction
that many database designers find annoying. If you want to use the
accelerated features of SCSI II or want to take advantage of the full
power of the device, you have to write a driver, and then making it
safely share state with other disk clients will be complex.

>Let me ask you this: should each database manufacturer have to provide code
>to access the IDE/SCSI disks it uses for their database directly?

I'm arguing the status quo. Your argument seems to be that
what is and is not in the kernel doesn't follow a very simple scheme.
But that's hardly a good reason for a change.

>> There must be an infinite precision arithmetic
>>implementation in the kernel because the kernel's job is to
>>provide an abstraction for that lame ALU with all those condition
>>bits and I don't want to trust a user program like a compiler
>>to provide the abstraction?
>
>Actually you lose here. The Linux kernel does in fact have FPU emulator

An FPU emulator is not an infinite precision arithmetic emulation.
Note that x*y in C is completely different from (times x y) in Scheme,
and mult x y on a 64-bit P7 will not necessarily compute the same
thing as mult x y on a 386 --- and things get even worse if there
are threads in a process and we aren't guaranteed
that (pushl $2; pushl $2; popl %eax; popl %ebx; addl %eax, %ebx)
will get 4. In fact, this system exposes us to all sorts of
unmediated hardware ugliness.

>Why is the graphics device so different from any other piece of hardware in
>the system?

The important question is: what advantage do we get by treating it the
same?


Open source software allows us to settle these questions on technical
issues and user preference. I think that the most unfortunate
aspect of this debate is that some GGI proponents want to settle
the issue on political grounds and on some silly preference
for uniformity among things that are not uniform. Perhaps part of the
reason that some of us are so irritated by this is that we remember
working in corporate environments where goofy decisions were imposed
by managerial fiat. Make it work _first_.

And with this, I take my exit from the GGI debate for as long as
my good sense lasts.


Theodore Y. Ts'o

Feb 3, 1997, 3:00:00 AM

From: Bill Eldridge <bi...@rfa.org>
Date: Thu, 30 Jan 1997 15:59:43 -0500

Some people don't use it. I don't use it. A few times I've
gone through the work of configuring X for different cards and
monitors, with tools that come and go and work with very random
results, and half the time I just gave up (I don't think I'm
exceptionally stupid, and have no problem getting a lot of other
packages working, even with quite a bit of hacking on unfinished code).
Then with the machines I do have it configured on, I just
don't find many compelling apps to run on it, so I end up
saying, "Why bother starting X at all?"

If you think that GGI is going to make the video configuration problem
go away, you're being very, very, very naive. Those are not problems
which are solved by moving graphics code into the kernel.

- Ted

Ketil Z Malde

Feb 3, 1997, 3:00:00 AM

ev...@bigbird.telly.org (Evan Leibovitch) writes:

> For Linux to work well as in a Network Computer (NC) environment,
> browsing and Java performance will be critical. I can see
> circumstances under which a browser/JVM writer would prefer to bypass
> X and go directly to the hardware (or, rather, the abstraction layer
> that GGI promises), reducing bloat and increasing performance.

Interesting thought, since everybody else seems to think the NC should
be a beefed-up X terminal... :-)

Byron A Jeff

Feb 3, 1997, 3:00:00 AM

In article <5d3rit$c...@senator-bedfellow.MIT.EDU>,
Theodore Y. Ts'o <ty...@MIT.EDU> wrote:
- From: Bill Eldridge <bi...@rfa.org>
- Date: Thu, 30 Jan 1997 15:59:43 -0500
-
- Some people don't use it. I don't use it. A few times I've
- gone through the work of configuring X for different cards and
- monitors, with tools that come and go and work with very random
- results, and half the time I just gave up (I don't think I'm
- exceptionally stupid, and have no problem getting a lot of other packages
- working even with quite a bit of hacking on unfinished code).
- Then with the machines I do have it configured on, I just
- don't find many compelling apps to run on it, so I end up
- saying, "Why bother starting X at all?"
-
-If you think that GGI is going to make the video configuration problem
-go away, you're being very, very, very naive. Those are not problems
-which are solved by moving graphics code into the kernel.

Excellent point, Ted. Much of the problem of configuration is that the
parameters are less dependent on the video card (which can generate
any scan rate in its range) and more dependent on the monitor (which
can't necessarily handle all of the scan rates that the card can generate).

What GGI seems to have the potential to do is to have varying graphics
packages (X, SVGALIB, DOSEMU, etc.) all operate properly once the configuration
is complete. So instead of duplicating multiple drivers in each package,
each with its own configuration, there can be a single configuration for
all of them. Also, the risk of having a faulty driver in a single package
is reduced. Once the GGI driver is debugged, all the client packages
that use it will work properly. Note, Ted, that both X and SVGALIB utilize
your most excellent serial drivers for serial mice instead of implementing
their own.

This thread raises two new points:

1) A configuration tool for GGI drivers should be quite high on the
priority list. Maybe it can even be the demonstration tool for GGI.
2) Where should configuration information like this be stored? Should it
be written into the driver? Set via ioctl or some other device interface?
An ifconfig-type tool? My gut says that reading it from a file in the
filesystem probably isn't the best idea. But I could be wrong.

GGI team: how are you addressing these issues?
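
To make (2) concrete, an ifconfig-style tool might look roughly like
this (the device node, struct and ioctl number are invented for
illustration -- this is not the actual GGI interface):

/* Hypothetical "vidconfig": push the monitor's limits into the kernel
 * driver once; every client (X, SVGAlib, dosemu) then inherits them. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define GGI_SET_MONITOR 0x4710      /* invented ioctl number */

struct monitor_limits {
    int hfreq_min, hfreq_max;       /* horizontal sync, kHz */
    int vfreq_min, vfreq_max;       /* vertical refresh, Hz */
};

int main(void)
{
    struct monitor_limits lim = { 30, 64, 50, 120 };
    int fd = open("/dev/ggi0", O_RDWR);     /* invented device node */

    if (fd < 0 || ioctl(fd, GGI_SET_MONITOR, &lim) < 0) {
        perror("vidconfig");
        return 1;
    }
    close(fd);
    return 0;
}

The driver could then refuse any mode request outside those limits, so
no client could fry the monitor no matter how badly it is misconfigured.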

Byron A Jeff

Feb 3, 1997, 3:00:00 AM

In article <5d3oqu$f...@newshost.nmt.edu>,
Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
>In article <5d30hf$k...@solaria.cc.gatech.edu>,
>Byron A Jeff <by...@cc.gatech.edu> wrote:
>>Slight difference. I can talk to my printer via /dev/lp. lpd and friends
>>make up a print spooler, which is a service different from accessing the
>>actual printer. Whenever I hook up a printer I first to a:
>
>So to get multiplexing, you need the suid program.

Nope. lpd can be run as a regular user because it uses /dev/lp to access
the printer. Exactly the point. You only need SUID when you want to access
I/O ports, memory, or interrupts directly. I believe that's the kernel's
job. The device driver handles all that stuff so that regular users can
access the hardware.

>
>>>Can I use this form of argument too? How about: there must
>>>be a relational data base in the kernel because one of the major jobs
>>>of the kernel is to provide abstractions for hardware and certainly
>>>a relational data base is an abstract view of secondary store?
>>
>>No. Not a good argument. A relational database is an abstraction implemented
>>on top of the seconday store. The kernel should (and does) provide an
>
>I see. So your orginal argument should have been: GGI must be
>in the kernel because the kernel is supposed to provide an abstraction
>that you find convenient?

No. Like I said before, the kernel should provide an abstraction for hardware.

>
>>Let me ask you this: should each database manufacturer have to provide code
>>to access the IDE/SCSI disks it uses for their database directly?
>
>I'm arguing the status quo. Your argument seems to be that
>what is and is not in the kernel doesn't follow a very simple scheme.
>But that's hardly a good reason for a change.

This is interesting. I really have nothing to do with GGI. I'm just an
observer. But as an observer it's clear to me that minimal graphics
support in the kernel (to support graphics hardware) is a good thing. There
needs to be a change. How many times has X or an SVGA program crashed, leaving
your console totally unusable? Have you tried to write a program using SVGALIB
and come to the realization that you must be root just to develop that
application?

There are good reasons for a change here. It's neither arbitrary nor
whimsical...

>>Why is the graphics device so different from any other piece of hardware in
>>the system?
>
>The important question is: what advantage do we get by treating it the
>same.

Hmmm:

1) Safety. Multiple graphics systems (X, SVGALIB, DOSEMU) can be switched
and swapped without worry of the graphics hardware being left in an
unusable state.

2) Uniformity. The current SVGA X server has over 30 different
drivers in it. By separating the hardware-specific driver from the higher
level components, there only needs to be one and only one X server, SVGALIB,
DOSEMU, whatever....

3) Consistency. Every other hardware device has a device driver.

>
>
>Open source software allows us to settle these questions on technical
>issues and user preference. I think that the most unfortunate
>aspect of this debate is that some GGI proponents want to settle
>the issue on political grounds and on some silly preferences
>for uniformity among things that are not uniform. Perhaps part of the
>reason that some of us are so irritated by this, is that we remember
>working in corporate environments where goofy decisions were imposed
>by managerial fiat. Make it work _first_.

Oh I agree with you there. That's the first thing I said when I joined the
debate: Don't worry about getting GGI in the kernel, just get it out and
let folks start using it.

And all of my arguments are strictly on technical merit. By extracting the
graphics driver into the kernel, any program that needs graphics can
use the driver without worry about the underlying hardware mechanisms or
fear of destabilizing the system. Plus, my application can be developed and
executed as a regular user, limiting security holes. And the gravy is that
as new graphics hardware comes out, my application runs on it unchanged as
soon as the new driver specific to that card is developed.

>
>And with this, I take my exit from the GGI debate for as long as
>my good sense lasts.

I'm really still wondering what the debate is about...

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <5cr3rj$4...@usenet.bham.ac.uk>,
Chris Underhill <c...@solar27.ph.bham.ac.uk> wrote:

| Ditto - the only times I've had to reboot Linux in recent history are
| because the video hardware locks up, rendering the machine unusable to
| a local user. For instance, if you're not running xdm, try issuing the
| command
|
| killall -9 X
|
| This is a real blight on Linux IMHO, and apart from poor NFS
| performance, is the only complaint I have about it. Anything that
| minimises the risk of these problems MUST be TRTTD.

Do you do that often? I confess I've never felt the urge to do
this, or to deliberately power down, remove the ZIP drive while
mounted, or indulge in any of the other forms of self-abuse I've seen
touted in various Linux groups as problems.

If you find frequent need to do stuff like this you have a bad
install or sick hardware. Software is not the answer.
--
-bill davidsen (davi...@tmr.com)
"As a software development model, Anarchy does not scale well."
-Dave Welch

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <5cti13$k...@csusac.ecs.csus.edu>,
Jon M. Taylor <tay...@ecs.ecs.csus.edu> wrote:
| In article <5co1ns$1b...@usenet1y.prodigy.net>,
| bill davidsen <davi...@tmr.com> wrote:
| >In article <01bc0bc7$527db9e0$391d...@h57.albany.edu>,
| >Joseph Foley <jf8...@csc.albany.edu> wrote:
| >
| >What I miss is the jump from "SVGAlib doesn't work well" to "let's
| >write another totally new graphics thing to maintain instead of
| >fixing SVGAlib or using X."
|
| SVGAlib is broken as designed, not "doesn't work well". Even if
| we added drivers for every chipset under the sun and fixed all the bugs in
| it, SVGAlib programs still have to be suid root. We do not want to have
| to use suid-root programs to "do graphics" under Linux.

I take it you feel that moving all the device dependent stuff from
SVGAlib (used by a few programs) to the kernel (used by everything)
will improve stability. Excuse me if I feel that's a giant step in
the wrong direction.

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <5d30hf$k...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:
| In article <5cu7ro$3...@newshost.nmt.edu>,
| Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:
| >In article <5ctf9g$o...@solaria.cc.gatech.edu>,

| >Byron A Jeff <by...@cc.gatech.edu> wrote:

| >Can I use this form of argument too? How about: there must
| >be a relational data base in the kernel because one of the major jobs
| >of the kernel is to provide abstractions for hardware and certainly
| >a relational data base is an abstract view of secondary store?
|
| No. Not a good argument. A relational database is an abstraction implemented
| on top of the secondary store. The kernel should (and does) provide an
| abstraction for the secondary store (i.e. /dev/hda, /dev/sda, etc).

And all drives look the same, a sequential stream of bytes with
random access.

| A video card is hardware. The kernel does not provide any access to it at
| all. Therefore each graphics system (X, SVGA, etc.) ends up providing
| its own interface to each video card.
|
| A difference to say the least.

But all video cards don't look the same at the hardware level; how
can you have an abstraction of things which are incompatible? Do you
limit the set of common features to those available on all cards,
like 320x200 mono mode? Or do you intend to put area fill, line
drawing and 3D rotation in the kernel, to be done in software if the
hardware can't?

The choices are to (a) provide a limited set of capabilities and not
take advantage of the card, or (b) require that the kernel support
every possible high-level construct -- and if the card doesn't support
it, will the kernel be as efficient as a user-level program designed
just for the card?

There is no right answer: (a) will be slow and no one will want to
use it, and (b) will result in a kernel the size of NT, and few
drivers because they're so complex. Oh, and as soon as cards come
up with a feature not in (b), people will want to write to the card
directly or will want to change the "standard" to take advantage.

| Open up your box. For each and every device in there (CPU, disk, parallel,
| serial, CDROM, ethernet card, sound card etc.) there is a kernel level
| device driver (video too, but only the basic text console). But there is no
| kernel level driver for graphics. Why? because there are too many different
| configurations of graphics cards to have a driver that address them all.

You got that right... but you didn't stop when you reached the
correct conclusion.

| GGI takes a stab at defining that driver.
|

| Why is the graphics device so different from any other piece of hardware in
| the system?

Because it's a human interface? Because I don't need to deal with
color planes, resolution and vector drawing on my disk drives or
serial ports? How about because it's more complex?

I don't buy into the GGI simple concept because I don't believe it's
simple. Unless you believe that there is a solution which is both
affordable (in this case the coin is unpaid coding hours) and useful
(able to use all the gee-whizzes in the hardware), there's no way to
think this will be a positive thing.

What it will do is fragment the graphics development picture: some
will write to X, some to GGI, some to SVGAlib. It will speed
development of new applications in some cases, and slow it for
people who will feel the need to make an app run on all three
graphics systems.

I'm sorry to say that a single acceptable standard is probably
better for the user than a pool of standards of any quality.

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <5d532p$e...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:
| In article <5d3oqu$f...@newshost.nmt.edu>,
| Victor Yodaiken <yoda...@chelm.cs.nmt.edu> wrote:

| >The important question is: what advantage do we get by treating it the
| >same.
|
| Hmmm:
|
| 1) safety. Multiple graphic systems (X, SVGALIB, DOSEMU) can be switched
| and swapped without worry of the graphics hardware being left in an
| unusable state.

I have two questions here. First, how many people ever do this? I
honestly can't say I ever needed or even wanted to do this. And the
other issue is that the last time I wrote a video BIOS, there were
states reachable by software which required a power cycle to reset.
Are the new, more complex cards really immune to this type of
problem? Or will things be "better" but still not "without worry"?

| 2) Uniformity. The current SVGA X server currently has over 30 different
| drivers in it. By separating the hardware specific driver from the higher
| level components, there only needs to be one and only one X server, SVGALIB,
| DOSEMU, whatever....

But we have the 30 drivers, and lots of *portable* programs written
to X. Do we really need another Linux-only graphics thing?

| 3) Consistency. Every other hardware device has a device driver.

I can't think of any other device which has so much variation
in capabilities between models and vendors. Even sound isn't as
varied, and I suspect that at least 60% of all sound cards being
sold today are either unsupported or supported in compatibility mode
rather than with full capabilities. Sounds like supporting most
video cards as VGA, now doesn't it?

| And all of my arguments are strictly on technical merit. By extracting the
| graphics driver into the kernel, then any program that needs graphics can
| use the driver without worry of the underlying hardware mechanisms or
| fear of destabilizing the system. Plus my application can be developed and
| executed as a regular user, limiting security holes. And the gravy is that
| as new graphics hardware comes out, my application runs on it unchanged as
| soon as the new driver specific to that card is developed.

Of course no one with more than one user would set it up that way...
for the same reasons that every user is not given access to the raw
disk. Anyone with access to the raw graphics device would be able to
fake a login screen, etc., which leads to some degree of exposure.
That will lead to the same setgid and access groups we now use for
floppy, kmem, hard drive, etc. I don't see every user having access
to the graphics any more than to other raw devices.

You really don't want people who are not at the console to have
access to the graphics device (at least in most cases). Some form
of putting the process group on an access control list for the
duration of the login, and having /dev/graphics be a virtual device
like /dev/tty, would do it... Linux doesn't do much with process
groups; I think I'll look that up if I ever get the time.

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <KETIL-ytq4...@imr.no>, Ketil Z Malde <ke...@imr.no> wrote:

| Counting the support for other hardware, namely net and scsi, I find
| that both entail more than 100K lines of C code. Are graphics really
| that much less standard than network or scsi hardware?
|
| (I'm no kernel hacker, so flame/correct if I misunderstood something
| fundamental)

I posted this earlier, so just a recap: video cards can have more
capabilities than any other device -- 3D, vector draw, sizes, area
fill, etc. If you make a simple user interface you don't give
access to all this and the video is slow. If you have a complex
interface to offer all possible functionality, then you emulate a
lot for less capable cards.

And cards from various vendors do similar things in vastly different
ways.

bill davidsen

Feb 3, 1997, 3:00:00 AM

In article <5ct7sc$j...@news.bu.edu>, Justin Hahn <jeh...@bu.edu> wrote:

| But a hacker can't just ADD code to the kernel (unless he corrupts the
| sources). GGI is not a bunch of hackers.

That's the problem. Most good software is written by people who
write it for themselves, or for fun. People who post as if they were
on a crusade scare me.

Has Linus agreed to put this... functionality in the kernel, or is
this all an exercise in futility?

Paul JY Lahaie

Feb 3, 1997, 3:00:00 AM

In article <5d532p$e...@solaria.cc.gatech.edu>,
Byron A Jeff <by...@cc.gatech.edu> wrote:

>Nope. lpd can be run as a regular user because it uses /dev/lp to access
>the printer. Exactly the point. You only need SUID when you want to access
>I/O ports, memory, or interrupts directly. I believe that's the kernel's
>job. The device driver handles all that stuff so that regular users can
>access the hardware.

Have you tried to do this? You won't get very far. lpd uses port 515,
which means it needs to be root to open it.

>needs to be a change. How many times has X or a SVGA program crashed leaving
>your console totally unusable? Have you tried to write a program using SVGALIB

Never. I run Accelerated/X and I've yet to crash my console.

>2) Uniformity. The current SVGA X server currently has over 30 different
>drivers in it. By separating the hardware specific driver from the higher
>level components, there only needs to be one and only one X server, SVGALIB,
>DOSEMU, whatever....

What will the performance be like? Someone mentioned that hardware
acceleration will be implemented using ioctls. What's the performance
penalty from system call overhead? Perhaps one syscall isn't much, but
when doing heavy drawing, issuing hundreds of system calls will hurt
performance.
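
(One plausible mitigation, sketched with invented names -- this is not
a real interface -- is to batch primitives the way the X protocol
batches requests, so one syscall covers hundreds of operations:

/* Amortizing syscall overhead: queue primitives in user space and
 * hand the whole array to the driver in a single ioctl.  The struct
 * and ioctl number are invented for illustration. */
#include <sys/ioctl.h>

#define GGI_DRAW_BATCH 0x4720       /* invented ioctl number */

struct drawop { int op, x1, y1, x2, y2, color; };

struct drawbatch {
    int count;
    struct drawop *ops;
};

/* one system call, many primitives */
int flush_batch(int fd, struct drawop *ops, int n)
{
    struct drawbatch batch;

    batch.count = n;
    batch.ops = ops;
    return ioctl(fd, GGI_DRAW_BATCH, &batch);
}

Whether that is fast enough in practice is exactly the open question.)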

>3) Consistency. Every other hardware device has a device driver.

This is false. Other devices are treated like gfx cards: scanners
and CD-ROM writers come to mind. The kernel provides enough for a
user-level program to talk to the device.

- Paul


Jon M. Taylor

Feb 3, 1997, 3:00:00 AM

In article <5d5msg$1f...@usenet1y.prodigy.net>,
bill davidsen <davi...@tmr.com> wrote:
>
>I take it you feel that moving all the device dependent stuff from
>SVGAlib (used by a few programs) to the kernel (used by everything)
>will improve stability. Excuse me if I feel that's a giant step in
>the wrong direction.

Do you worry about buggy SCSI code crashing your system? Only if
you select that code to be compiled in. There is always the potential
for buggy device drivers to crash the system - I see reports of this
happening all the time on linux-kernel. The solution is to fix the bugs,
not toss out the whole idea of device drivers in the kernel!

Jon
