
Sensing vertical blank...


Dave Haynie

Nov 15, 1996

In-reply-to: Tinic Urou's message of Thu, 10 Oct 1996 14:41:13 -0400
References: <325C13...@ix.netcom.com>
<53gueh$f...@dfw-ixnews6.ix.netcom.com>
<325D51...@informatik.uni-hamburg.de>

> William Adams wrote:

> > >Can the Be support vertical or horizontal blank interrupts?

> > This would be video card dependent. Depending on your card you
> > can turn on or off the vertical retrace interrupt. If it's off, or
> > not supported, you can't get an interrupt. If it's on, then you could
> > write a device driver to detect it and do something.

> Does the GameKit supply support for this? I think it's really important
> for video applications.

Yup.

> Things like SCALA would not be possible without this... Since those
> screens are interlaced you must synchronise to the half-pictures to
> get smooth animations and those nice wipes. A simple
> wait_for_v_blank() and wait_for_h_blank() would be really nice... If
> the GFX card does not support this, simply return a B_ERROR...

Or try to do your best anyway.

Scala on the PC currently provides all its own graphics drivers,
simply because current (well, as of a few years ago) Windows APIs
didn't support real multimedia features such as this. Beam sync is
actually handled as a combination of driver and user preferences.
You can use the VBI if the card supports it. But this doesn't guarantee
beam sync -- some SVGA card implementations give you a real VBI, some
give you a steady 60Hz no matter what the real refresh rate, etc. So
you can opt to use a timer for synchronization instead. Or just bag it
and run unsynchronized.
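
In rough C, the kind of fallback chain I'm describing would look
something like this (all names made up here, not any real Scala or
Be interface):

    typedef enum { SYNC_VBI, SYNC_TIMER, SYNC_NONE } sync_mode;

    /* The driver reports whether the card really has a usable VBI;
       the user preference says how hard to try. */
    sync_mode pick_sync(int card_has_vbi, sync_mode user_pref)
    {
        if (user_pref == SYNC_VBI && card_has_vbi)
            return SYNC_VBI;    /* real beam sync */
        if (user_pref != SYNC_NONE)
            return SYNC_TIMER;  /* guess from the refresh rate */
        return SYNC_NONE;       /* just bag it */
    }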

There's nothing you, me, Be, or Scala can do about all the SVGA cards
out there. But the API can be written to properly support cards that
do the right thing, without becoming locked into any hardware
particulars. Maybe the Apple experience has shown the folks at Be the
problems with being so isolated from the hardware. But the Amiga
experience is a clear indication that you're doomed if you start to
let application code depend on bit-level hardware details.

--
Dave Haynie | ex-Commodore Engineering | for DiskSalv 3 &
Sr. Systems Engineer | Hardwired Media Company | "The Deathbed Vigil"
Scala Inc., US R&D | PIOS USA, Inc. | in...@iam.com

"Who'd-a thought tommorrow... would be so strange" -R.E.M.

Dave Haynie

Nov 15, 1996

In-reply-to: ad...@bespecific.com's message of Fri, 11 Oct 1996 14:54:39
GMT
References: <325C13...@ix.netcom.com>
<53gueh$f...@dfw-ixnews6.ix.netcom.com>
<325D51...@informatik.uni-hamburg.de>
<53ktli$l...@sjx-ixn5.ix.netcom.com>
<325E06...@poly.polytechnique.fr>
<53ln0t$m...@dfw-ixnews7.ix.netcom.com>

In article <53ln0t$m...@dfw-ixnews7.ix.netcom.com> ad...@bespecific.com
(William Adams) writes:

> Jean-Baptiste Queru <qu...@poly.polytechnique.fr> wrote:

> >Secondly because changing the palette in the middle of a frame is ugly:
> >you get a frame with a different palette on the upper part and on the
> >lower part of the screen... Try with black and white to have an idea...
> >(ever seen a demo on PC, Amiga or Atari?)

> This is highly dependent on your graphics card. Some RAMDACs 'do the
> right thing' when it comes time to change the CLUT. They will wait
> for the vsync. But you're right, in general you can't rely on this.

Which is exactly why this should be handled in the graphics driver,
not anywhere else. If your particular card can do things right, it
should be allowed to, as efficiently as possible, via a real VBI or a
beam-synched LUT. If it can't do it
well, it still may be able to do it right, if perhaps
inefficiently. Finally, if the card isn't capable of it, no solution's
going to work well, so you'll get an ugly display, just like you do on
the PC. The only way to address this right is in the graphics device
driver.
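
Inside the driver, that might look something like this (hypothetical
names throughout; load_ramdac() stands in for whatever register pokes
your particular chip needs):

    #include <string.h>

    extern void load_ramdac(const unsigned char rgb[256][3]); /* chip-specific */

    static unsigned char pending_clut[256][3];
    static volatile int clut_dirty = 0;

    /* What the app calls: just queue the new palette. */
    void driver_set_clut(const unsigned char rgb[256][3])
    {
        memcpy(pending_clut, rgb, sizeof(pending_clut));
        clut_dirty = 1;    /* picked up at the next vertical blank */
    }

    /* Called from the driver's VBI (or timer) handler. */
    void driver_on_vblank(void)
    {
        if (clut_dirty) {
            load_ramdac(pending_clut);
            clut_dirty = 0;
        }
    }

Whether the "vblank" is a real interrupt or a timer guess, the
application's palette still changes in one piece.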

> >Surely you're joking, aren't you? You're suggesting to do exactly what
> >the gamekit should prevent you from doing, i.e. writing
> >hardware-dependent code... This is hard to do and ugly.

> No, I'm not joking. I'm assuming I'm talking to developers.

We assume that too. But we're hoping to be Be developers. What you
suggest below went out with the Commodore 64. It's absolutely the
wrong way to do it on the BeBox, I claim.

> But you're right, not everyone is going to want to do:
>
> wait_for_v_sync()
> {
>     while (inp(0x3DA) & 8)    // if in vsync, wait till out
>         ;
>     while (!(inp(0x3DA) & 8)) // wait while not in vsync
>         ;
> }

No one should ever do this! You can't actually be recommending a busy
wait? If an Amiga support person had recommended something like this,
they'd be drawn and quartered. "50 lashes with the cat-o'-nine-tails,
Mr. Christian...".

Sure, it may be the case that the hardware offers no other
solution. In that case, this might be what's actually happening. But
it shouldn't be visible to the applications programmer!
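
Give the application one blocking call and hide the mess behind it
(wait_for_retrace() is a name I just made up, not a real kit call):

    /* Exported by the graphics driver. Inside, it may block on a
       semaphore released by a real VBI -- or, on hopeless hardware,
       it may be exactly the busy wait above. The app never knows. */
    status_t wait_for_retrace(void);

    /* Application side: */
    while (running) {
        render_frame(back_buffer);
        wait_for_retrace();    /* blocks; doesn't spin, we hope */
        swap_buffers();
    }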

> The gamekit allows you to get the address of the registers. So you
> can stick this code in your app and get vsync.

What if the card I'm using isn't a standard SVGA card? Non-SVGA cards
do exist on the PCI bus, and they run just dandy under Windows, OS/2,
and MacOS, for example. I have just such a bad boy in my PC at home,
the Imagine 128 from Number Nine. This has a cheap-ass Cirrus Logic
SVGA chip on it, for compatibility, and a nice 128-bit graphics engine
with apparently no relation to the world of SVGA, for when you have a
modern OS (and if Windows and MacOS fall into that definition, BeOS
had better as well).

Dave Haynie

Nov 15, 1996

References: <325E85...@poly.polytechnique.fr>
<53n863$i...@dfw-ixnews8.ix.netcom.com>
<325F75...@poly.polytechnique.fr>
<y8a4tjy...@hertie.artcom.de>
<326151...@poly.polytechnique.fr> <3262C3...@eccosys.com>
<misc1739-211...@132.181.31.109>
<y8ad8yc...@hertie.artcom.de>
<misc1739-231...@132.181.31.102>

misc...@cantva.canterbury.ac.nz (Jon Hart) writes:

> >Jon> then set up an interrupt that will kick you around the time
> >Jon> the retrace starts, at which point you poll it to get the
> >Jon> state of the retrace. At least one problem with the vbi is

> > I call this a kludge.

The timer trick is a kludge, but it works when you need it. My
experience at Scala has been really valuable on this. Here we were,
all coming from the Amiga, the most video savvy personal computer
ever, and we had this onslaught of PC things to support. The bottom
line is that some work right, some work wrong, and most can be kludged
into working, for a price.

> >Jon> that a lot of PC video cards don't have their interrupt line
> >Jon> connected, for no apparent reason. I think that Trident is
> >Jon> mostly to blame for this problem.
> >
> > Ok, so some cards won't work in a BeBox. I've got no problem with
> > that.

> I'm not talking some, it's ALL PC video cards that have this
> problem.

No they don't. The original VGA specification supported a VBI, and so
did most clones. However, since the Microsoft OSs didn't do anything
with it, and interrupts were in short supply on the ISA bus, some
cards made it optional, via a jumper. Some dispensed with the VBI
entirely, although the chipsets still support it. And some just got
plain weird. See, back in VGA days, the VBI looked like a jiffy timer
-- one interrupt every 1/59.abc of a second. Some cards just gave out
this roughly 60Hz interrupt, regardless of the actual vertical blank
rate. This is the kind of problem you invariably run across when a
"standard" is an ad hoc thing based on reverse engineering, rather
than some specification in print.

But with all of that, some cards you can actually buy today do VBI
right, no problems. Scala's video preferences let you choose among
real VBI, timer, or no synchronization. The selection was made a user
option since Scala's drivers line up with chips (which all support
VBI) rather than implementations (e.g., what any given card with that
chip on it actually does). The nice thing about synchronization is
that it really is a user interface thing -- if the video isn't
synched, only you are going to complain; the software still works just
dandy.

I think the BeOS needs to put VBI into the OS as a standard feature,
supported at the lowest level in the device driver. A particular
driver will supply the hardware VBI, if it can be done, and the "I'm
in vertical blanking" function, so the OS (or whoever) can calculate
the timer workaround without being married to this week's SVGA
implementation details (there are already some very nice graphics
cards, fully supported under Windows, that bear no resemblance
whatsoever to VGA/SVGA -- the BeOS must not put anything register
dependent beyond the driver level, or it'll be "worse than Windows").
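
Concretely, that driver contract might be as small as this (every
name here is invented, not Be's actual API):

    typedef struct {
        sem_id vbi_sem;           /* released once per vertical blank,
                                     or -1 if there's no real VBI */
        int   (*in_vblank)(void); /* poll: nonzero while blanking */
        float   refresh_hz;       /* lets the OS fake it with a timer
                                     when vbi_sem isn't available */
    } graphics_sync_info;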

> The two things my PC friends moan for are hardware VBlank support and more
> timers, oh, and fast joystick ports.

I think the support of the old slow joystick is important, simply
because there are a ton of different slow joysticks out there in the
world. On the other hand, it is fairly stupid to have to deal with
that kind of thing, running a software-assisted A/D converter on a
modern CPU, when a $0.50 chip would give you back $20 worth of CPU
power. I'm planning to offer both slow and fast game controller ports
on the PIOS ONE.

The real problem, far as I can tell, is that Be's handling user input
wrong right now. There should be another level of abstraction,
something like the Amiga's input.device with plugins. Each plugin,
strangely enough, is a controller driver -- it eats hardware events
and turns them into standard controller messages, which then meander
their way down the input food chain, eventually being eaten by your
game. This goes way beyond simply dealing with "Joystick", "mouse",
"keyboard", etc. Take a look at any PClone game. You'll find 5-20
different things might be hanging on the game port. Doesn't it make
sense to dial up "Bloodmaster SuperKill 3" just once, in a gameport
preferences editor somewhere, rather than have programs need to
consider the differences between that and a Microsoft or Gravis stick
for the same port? Or whether you'd like to drive by keyboard, mouse,
joystick, lightpen, USB port, TrimBus port, N64-controller-on-
serial-3, etc.?
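
In outline, each plugin would boil down to something like this
(invented names again):

    typedef struct {
        int      device;     /* which port/stick this came from */
        int      axis_x, axis_y;
        unsigned buttons;    /* bitmask, one bit per button */
    } controller_msg;

    typedef struct {
        const char *name;                 /* "Bloodmaster SuperKill 3" */
        int (*poll)(controller_msg *out); /* raw hardware -> standard */
    } input_plugin;

The game only ever sees controller_msg; the preferences editor picks
which plugin feeds it.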

> One way around the lack of VBlank interupts on PC cards may be to use
> Mac video cards, but they cost a lot more, and there are no drivers for
> them.

A good portion of today's PCI-based Mac video cards are nothing more
than the exact same video card used on the PC, plus a Mac driver. Even
Apple's been using "SVGA" chips these days (SVGA is fairly
misleading -- all such chips have a mode, or a fallback, for
supporting the conventional VGA stuff and usually the SVGA BIOS, but
the individual details of the "real" chip architecture vary greatly).

> NOTE TO BE: I think that VBlank support in the OS is vital, for a lot of
> applications.

I agree -- anything involving video demands glitch-free screen
updates. Even if we're not talking about something that's going to
videotape for broadcast, this remains true. Scala's gig is doing
computer graphics that look like television. You don't see TVs
glitching on display update.
applications has to behave similarly. There exist graphics boards that
do this just dandy. And those that make the C= 64 look like a clean
video machine...

--
Dave Haynie | Scala, Inc. | PIOS Computer A.G.
"But I've been though all this shit before"
-Counting Crows

William Adams

Nov 16, 1996

Dave Haynie <Dave....@scala.com> wrote:

>> But you're right, not everyone is going to want to do:
>>
>> wait_for_v_sync()
>> {
>>     while (inp(0x3DA) & 8)    // if in vsync, wait till out
>>         ;
>>     while (!(inp(0x3DA) & 8)) // wait while not in vsync
>>         ;
>> }

>No one should ever do this! You can't actually be recommending a busy
>wait? If an Amiga support person had recommended something like this,
>they'd be drawn and quartered. "50 lashes with the cat-o'-nine-tails,
>Mr. Christian...".

I may be drawn and quartered for other reasons. But someone has to be
bold enough to pose the possibilities and have them shot down. At
least everyone can learn why this isn't a good idea. I don't mind
playing the fool every once in a while in the service of better understanding. Not
everyone understands why this technique is a bad idea, so I think the
discussion helps.

Well, the wonderful thing about a newsgroup is that you can discuss
all possibilities, including the ones that won't work very well. I
don't disagree with any of your assessments, and agree most
wholeheartedly that this should be done by the graphics driver. The
solutions that have been proposed in this thread are relevant when
such support does not exist in the driver. If you have better
non-driver supported solutions, I'm sure everyone would love to hear
them.

>Sure, it may be the case that the hardware offers no other
>solution. In that case, this might be what's actually happening. But
>it shouldn't be visible to the applications programmer!

>> The gamekit allows you to get the address of the registers. So you
>> can stick this code in your app and get vsync.

>What if the card I'm using isn't a standard SVGA card? Non-SVGA cards
>do exist on the PCI bus, and they run just dandy under Windows, OS/2,
>and MacOS, for example. I have just such a bad boy in my PC at home,
>the Imagine 128 from Number Nine. This has a cheap-ass Cirrus Logic
>SVGA chip on it, for compatibility, and a nice 128-bit graphics engine
>with apparently no relation to the world of SVGA, for when you have a
>modern OS (and if Windows and MacOS fall into that definition, BeOS
>had better as well).

Again, I agree most wholeheartedly. And again I say, the solutions
put forth here are offered in the face of the fact that the current
driver architecture does not do anything about vblank synchronization. The
graphics driver developers at Be know the problem well, and have a
good solution for the future, but for now... What can we do?

-- William

blakatz

Nov 16, 1996

>> > >Can the Be support vertical or horizontal blank interrupts?

>> Does the GameKit supply support for this? I think it's really important
>> for video applications.

>Yup.

I agree - I think it's very important to provide VBI and {float
VBTimeLeft();} at the driver level.

However on the Amiga I repeatedly had the problem where my VBI
function took longer than a frame to run - hence the interrupt was
called again, and eventually the interrupt stack gets flooded and you
take a trip to Guru land..

The details aren't quite so disastrous on a dual-processor machine,
but not all BeOS machines are dual-processor.. (but it's still pretty
disastrous.)

In my (limited) experience with BAudioSubscribers, it appears that with
anything more than 50% total processor usage, small 'gaps' appear
between the buffers where the scheduler didn't quite make it in time..
- equally disastrous for a broadcast-quality render..

I guess my question is - if we provide a VBI at the driver level
("if"?!?! *WHEN* we provi....) , how do we prevent :
a) the VBI being called while a previous VBI is being serviced.
b) other B_REAL_TIME threads causing a delay between when the VBI
occurs, and when it is serviced, potentially pushing the callee
outside the actual Vertical Blank.
c) three routines, all wanting to hang off the VBI - who goes first?


Chris
bla...@ihug.co.nz
http://www.geocities.com/TimesSquare/Arcade/1783
p.s. it appears to take about 4-5 frames to fill a
BWindowScreen(B_32_BIT_640x480) - too slow to do in a VBI, but you
just *KNOW* someone is gonna try it..

Osma Ahvenlampi

Nov 17, 1996

ad...@bespecific.com (William Adams) writes:
> Again, I agree most wholeheartedly. And again I say, the solutions
> put forth here are offered in the face of the fact that the current
> driver architecture does not do anything about vblank synchronization. The
> graphics driver developers at Be know the problem well, and have a
> good solution for the future, but for now... What can we do?

For now, from the point of view of a responsible developer, it is
better to do nothing and suffer the flicker than to start doing
incompatible hacks and kludges past the OS. Our responsibility is to
write code that will let the OS evolve cleanly, without the
requirement for millions of compatibility kludges to make sure old
applications don't break. Your (Be's, and your own) responsibility is
to ensure that the OS _does_ evolve, and we won't have to suffer
easy-to-correct misfeatures in it for longer than a maximum of one
release cycle, and to not suggest "solutions" that some poor,
unsuspecting less experienced developer will take as something that
will really work.

It worries me that with the public release mere months away, the idea
of source compatibility, let alone binary compatibility, is not taken
more seriously.

--
Do unto others before they do unto you.
| "Osma Ahvenlampi" <mailto:o...@iki.fi> <http://www.iki.fi/oa/> |
| Posting unsolicited E-Mail to this address is forbidden. |
--

Brian Young

Nov 17, 1996

In article <56kcn3$m...@newsource.ihug.co.nz>,

blakatz <bla...@ihug.co.nz> wrote:
>I guess my question is - if we provide a VBI at the driver level
>("if"?!?! *WHEN* we provi....) , how do we prevent :
>a) the VBI being called while a previous VBI is being serviced.

It would probably be better for the API to provide a semaphore
that a B_REAL_TIME thread waits on. If the thread isn't waiting on
it, then there's no problem.

>b) other B_REAL_TIME threads causing a delay between when the VBI
>occurs, and when it is serviced, potentially pushing the callee
>outside the actual Vertical Blank.

Oh well. There probably shouldn't be a whole lot of B_REAL_TIME threads
running at the same time. Application designers should ensure that
the worst that can happen is flicker (which is probably what'll happen
whether or not it was designed that way).

>c) three routines, all wanting to hang off the VBI - who goes first?

Well, if it were a semaphore then the normal scheduling would happen.
Alternatively, disallow more than one thread to wait for vertical blank.
Treat it as a scarce system wide shared resource.
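
A sketch of the client side, using the kernel calls as I understand
them (get_vbi_semaphore() is invented -- it stands for however the
driver would hand out its semaphore):

    static int32 frame_loop(void *data)
    {
        sem_id vbi = get_vbi_semaphore();
        while (acquire_sem(vbi) == B_NO_ERROR) {
            /* woken at (or near) vertical blank */
            do_screen_update();
        }
        return 0;
    }

    /* somewhere in setup: */
    thread_id tid = spawn_thread(frame_loop, "vbi waiter",
                                 B_REAL_TIME_PRIORITY, NULL);
    resume_thread(tid);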

Brian.

--
bay...@undergrad.math.uwaterloo.ca

Jon Hart

Nov 18, 1996

> I agree - I think it's very important to provide VBI and {float
> VBTimeLeft();} at the driver level.

I would prefer a counter that reported either the line or the pixel the
retrace was up to.



> However on the Amiga I repeatedly had the problem where my VBI
> function took longer than a frame to run - hence the interrupt was

Similar things happened on the old Macs. The answer back then was to
use the VBI to set a flag and then exit the interrupt -- keep it short
and sweet. Even with quite a few things setting flags, they will all
get to execute in the VBI.
Then in your main loop you ran the code only if the flag was set.
It worked...
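
i.e., from memory (names made up):

    static volatile int frame_pending = 0;

    void vbl_task(void)          /* installed in the VBL queue */
    {
        frame_pending = 1;       /* set the flag and get out */
    }

    /* main loop */
    for (;;) {
        if (frame_pending) {
            frame_pending = 0;
            redraw();            /* the long work happens here,
                                    safely outside the interrupt */
        }
        do_other_work();
    }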

> I guess my question is - if we provide a VBI at the driver level
> ("if"?!?! *WHEN* we provi....) , how do we prevent :
> a) the VBI being called while a previous VBI is being serviced.

I think that this is an issue for developers: keep your interrupt
code as simple as possible, so that it is guaranteed to finish before
the next one.



> b) other B_REAL_TIME threads causing a delay between when the VBI
> occurs, and when it is serviced, potentially pushing the callee
> outside the actual Vertical Blank.

If it tests and finds it's not in the VBI, don't do anything.

> c) three routines, all wanting to hang off the VBI - who goes first?

On the Mac VBI tasks were entered into a queue for execution.
You could not assume your task was first, but you could assume that the order
you entered multiple tasks onto the queue was the order that your tasks
would execute in.

Jon.

Michael S Lee

Nov 18, 1996

bay...@undergrad.math.uwaterloo.ca (Brian Young) writes:
>>b) other B_REAL_TIME threads causing a delay between when the VBI
>>occurs, and when it is serviced, potentially pushing the callee
>>outside the actual Vertical Blank.

>Oh well. There probably shouldn't be a whole lot of B_REAL_TIME threads
>running at the same time. Application designers should ensure that
>the worst that can happen is flicker (which is probably what'll happen
>whether or not it was designed that way).

Actually, this is where the discussion of "real-time" OSes is pertinent.
What you may want to provide to developers (whether Be does or not is a
separate issue) is a rate-control (QoS) based interface for real-time
threads. Such threads should generally know (and therefore be able to
request) how much minimum processing time they require during a given
interval. In this case, at every VBI (~1/60s?) you need X amount of
cycles within a certain window. The OS can then accept your request
(it now _guarantees_ this level of service) or deny it, upon which
whoever is spawning the thread can decide whether they will try to run
anyway without the guarantee (as a normal thread) or whether to not spawn
the thread at all.

Thus programs can have some real-time guarantees from the OS. A program could
also specify to the OS whether it should terminate the thread or somehow
notify it if it exceeds its allotted time (doesn't finish its work).
That way hard real-time applications can know when they have failed
(because of the program, not the OS) and react gracefully, and soft
real-time apps can recover (e.g. they skip a processing cycle to
"catch up").

In fact, if done right, threads could negotiate Quality of Service at
any time with the OS. That way a movie player could ask for more
processing time per some period, say, when you resize the window, and
even let the user know that frames will not be dropped due to lack of
computation (you still have problems with guaranteeing rate of service
from I/O devices like disks) or that some may be dropped (because it
couldn't get a QoS guarantee from the OS at the time). The user might
then quit a few other apps running in the background and ask the movie
player to renegotiate. Or the movie player might not let the window
resize affect its image size (because it had a preference which asked
for smooth playback). Etc, etc...
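
The interface could be as simple as this (entirely invented, of
course -- nothing like it is in the Be headers today):

    typedef struct {
        double period_s;    /* e.g. 1.0/60 -- once per VBI */
        double work_s;      /* worst-case CPU time needed per period */
        double deadline_s;  /* must finish this far into the period */
    } qos_request;

    /* Returns 1 and guarantees the reservation, or 0 and the caller
       decides: run unreserved, or don't spawn at all. */
    int qos_reserve(thread_id tid, const qos_request *req);
    int qos_renegotiate(thread_id tid, const qos_request *req);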

>>c) three routines, all wanting to hang off the VBI - who goes first?

>Well, if it were a semaphore then the normal scheduling would happen.
>Alternatively, disallow more than one thread to wait for vertical blank.
>Treat it as a scarce system wide shared resource.

With the type of system described above, it doesn't matter who goes
first (if they all got QoS guarantees) because they'd all get to go in
the period of time they have compute resources allocated for. If one
of the threads was not a realtime thread, it would go after the realtime
threads, or it might go first, but the OS would pre-empt it in time to
allow the realtime threads to get their allocated time.

I don't qualify this system as the definition of "real-time." However,
this is a type of real-time system that I think is useful even for desktop
applications. How hard is it? Depends on how Be designed their scheduler
and the framework the scheduler fits into (as well as the thread invocation
sections). Do we want Be to add this? I don't know, someone want to
answer that?

Michael.

Osma Ahvenlampi

Nov 18, 1996

bla...@ihug.co.nz (blakatz) writes:
> However on the Amiga I repeatedly had the problem where my VBI
> function took longer than a frame to run - hence the interrupt was
> called again, and eventually the interrupt stack gets flooded and you
> take a trip to Guru land..

You don't get to run user code from interrupts on the BeOS. Only
drivers can install interrupt hooks, and they run in kernel space. So,
this problem does not exist. The graphics driver should export a
function that your application can use to find out when a vertical
blank happens, but since your app never uses the interrupt directly,
no such stacking will happen.

--
If they give you ruled paper, write the other way.

blakatz

Nov 18, 1996

misc...@cantva.canterbury.ac.nz (Jon Hart) wrote:

> I would prefer a counter that reported either the line or the pixel the
>retrace was up to.

I considered this - but then you also need to know when the first line
of the display kicks in - which is different for different video
cards/modes/scan rates. The video card might not even count
scan-lines outside of the display area. By providing a function such as
{int ScanLine();} you're inviting horrid display hacks that only work
on some cards..

Whatever the method - I propose it be a
<insert_time_unit_here>UntilVB(), rather than a
<insert_time_unit_here>CurrentValue(), where the time unit is
milliseconds, or scanlines, or colour clocks, or PCI bus accesses or
whatever.. (actually bus accesses is a good one..)
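
e.g. (names illustrative only):

    float MillisecondsUntilVB(void);    /* time until blanking starts */
    float VBDurationMilliseconds(void); /* how long blanking lasts */

    /* the caller decides whether there's room for its update: */
    if (MillisecondsUntilVB() < 0.5f)
        ;   /* too close -- catch the one after */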

> Similar things happened on the old Macs. The answer back then was to
>use the VBI to set a flag and then exit the interrupt -- keep it short
>and sweet. Even with quite a few things setting flags, they will all
>get to execute in the VBI.
> Then in your main loop you ran the code only if the flag was set.
>It worked...

<sarcasm>
hmm - so we get

    static volatile int VBIOccurred;

    void VBICallBack(void *dummy)
    {
        VBIOccurred++;
    }

and our main loop sits there and polls?
</sarcasm>
the problem is that some lazy programmer on an 4x604x533 box is gonna
release software that cascades VBI's on a 2x603x66.

the 'answer' of course is to drop VBICallBack's if a VBI is currently
being processed - I'm asking if there is a better solution..
(e.g. optionally making VBI's re-entrant so they can run in two
threads at the same time perhaps?)


but consider the pathological case:

    void VBICallBack(void *dummy)
    {
        for (;;) {
            /* scribble a random value at a random address */
            *(long *)(rand() % MEM_SIZE) = rand();
        }
    }

suppose memory protection is cool (to the point it doesn't care about
unaligned accesses) - it's gonna loop forever, right?
so we have to have some way our VBI handler is gonna time-out
eventually. (Some user-initiated method to De-Queue VBI's)


> I think that this is an issue for developers: keep your interrupt
>code as simple as possible, so that it is guaranteed to finish before
>the next one.

but it *CAN'T* be guaranteed - if that were the case, all you'd have to
do is run the same program 6 times to crash your machine - Virtual
Memory would do the rest.. (if nothing else, overrun of the VBI
stack)

>> b) other B_REAL_TIME threads causing a delay between when the VBI
>> occurs, and when it is serviced, potentially pushing the callee
>> outside the actual Vertical Blank.

> If it tests and finds it's not in the VBI, don't do anything.

so if your program is the last in the chain, your program never does a
screen update?

>> c) three routines, all wanting to hang off the VBI - who goes first?

> On the Mac VBI tasks were entered into a queue for execution.
>You could not assume your task was first, but you could assume that the order
>you entered multiple tasks onto the queue was the order that your tasks
>would execute in.

which is wasteful on a multi-processor machine..

I propose a circular buffer of threads that get called while the VBI
is active.. suppose there were 7 threads belonging to 5 programs:
A1,A2,B1,B2,C1,D1,E1

(assume a dual processor machine)
A1 and A2 would be launched together. When A1 returns, there is still
VB time left, so B1 would be launched, then B2, then C1. VB ends

the next VBI occurs and A1 is still running, so we halt A1 (semaphore
it - it'll stop within 3ms) and launch D1 and E1. When D1 quits we
launch A2, B1, B2, and finally C1. They all return, and we've still got
some VB time left over, so we let A1 run some more..

hopefully by this time A1 is in the debugger - but that's okay, it's
not slowing down our other programs that need VB time..


>Jon.
just my $0.02
Chris
bla...@ihug.co.nz
http://www.geocities.com/TimesSquare/Arcade/1783

William Adams

Nov 20, 1996

Osma Ahvenlampi <oahv...@hyppynaru.cs.hut.fi> wrote:

>ad...@bespecific.com (William Adams) writes:
>> Again, I agree most wholeheartedly. And again I say, the solutions
>> put forth here are offered in the face of the fact that the current
>> driver architecture does not do anything about vblank synchronization. The
>> graphics driver developers at Be know the problem well, and have a
>> good solution for the future, but for now... What can we do?

>For now, from the point of view of a responsible developer, it is
>better to do nothing and suffer the flicker than to start doing
>incompatible hacks and kludges past the OS. Our responsibility is to
>write code that will let the OS evolve cleanly, without the
>requirement for millions of compatibility kludges to make sure old
>applications don't break. Your (Be's, and your own) responsibility is
>to ensure that the OS _does_ evolve, and we won't have to suffer
>easy-to-correct misfeatures in it for longer than a maximum of one
>release cycle, and to not suggest "solutions" that some poor,
>unsuspecting less experienced developer will take as something that
>will really work.

Agreed, but I suspect that if the discussion does not occur, then I
would soon see posts to the effect of "Why doesn't Be provide a solution?"

So again, here we are in an open public forum. Batting ideas around,
trying to work out solutions. Be does have a solution for the future,
and has publicly stated the Graphics driver architecture will go
through some major changes soon. I don't believe we have to sit in
silence until that occurs. I do try to keep hacky solutions to a
minimum so that we don't mislead. I have also tried to state
many times that these are just hacks, and most people appreciate that
there is at least discussion of the pros and cons.

I hope that most people like the open nature that this very long
discussion has had thus far. Although developers want Be to act
responsibly (and we will), I think one thing that attracts them to
this platform is that we are willing to discuss issues openly as
we work toward real solutions.


>It worries me that with the public release mere months away, the idea
>of source compatibility, let alone binary compatibility, is not taken
>more seriously.

Well, it is taken seriously, and we are a couple of releases away from
1.0. The file system changes are pretty major as well, requiring
source changes, but most people seem ready to accept this for the
greater good. We will continue to listen to developers and do the
right thing.

-- William Adams
(wad...@be.com)

Marc Warden

Nov 24, 1996

> I think that this is an issue for developers: keep your interrupt
>code as simple as possible, so that it is guaranteed to finish before
>the next one.
>

Hi Jon.

What if the graphics engine (hardware engine) is busy? Some graphics
processors don't like direct access to the frame buffer when the graphics
engine is busy.

To support vertical blank sensing, will the Be developers also need
an API added that gives them this status? Or shall they then poll the
graphics accelerator's graphics engine status register to know when it's OK to
access the frame buffer?

Oh, I almost forgot. Some chip sets don't even support direct access to the
frame buffer unless certain registers are re-programmed to enable this. So
even if the graphics engine isn't busy, before access to the frame buffer is
allowed, these registers must be reprogrammed, and their initial state saved
for restoration after the process is completed. Now the API needs to be
extended to provide this control.

Sincerely,

Marc Warden (ma...@ibm.net or 10540...@compuserve.com)


Jay Riley

Dec 15, 1996

Osma Ahvenlampi <oahv...@hyppynaru.cs.hut.fi> writes:

>For now, from the point of view of a responsible developer, it is
>better to do nothing and suffer the flicker than to start doing
>incompatible hacks and kludges past the OS. Our responsibility is to
>write code that will let the OS evolve cleanly, without the
>requirement for millions of compatibility kludges to make sure old
>applications don't break. Your (Be's, and your own) responsibility is
>to ensure that the OS _does_ evolve, and we won't have to suffer
>easy-to-correct misfeatures in it for longer than a maximum of one
>release cycle, and to not suggest "solutions" that some poor,
>unsuspecting less experienced developer will take as something that
>will really work.
>

>It worries me that with the public release mere months away, the idea
>of source compatibility, let alone binary compatibility, is not taken
>more seriously.
>

Hmm...if they're working on it, perhaps they could put a dummy routine to
synch to vertical retrace into the shipping libraries (a virtual function,
if you will) and then make it "live" when they have the bugs worked out.
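
In C terms, something like this (names invented):

    static status_t no_sync(void) { return B_ERROR; }  /* dummy for now */

    /* Shipped in the library as a pointer; repointed at the real
       routine once the driver work is debugged, without breaking
       anyone who linked against it. */
    status_t (*wait_for_vertical_retrace)(void) = no_sync;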

Regards,

Jay Riley/Owner, DATAMAGIK

*****************************************************************
DATAMAGIK Systems, Software & Design Engineering Since 1985
107 Ranch Road 620 South #10F Austin, Texas USA 78734-3999
*****************************************************************
+ This message was created using a PowerPC laptop (100MHz 603e) +
