Every PM thread in OS/2 has an application message queue all of its own. The
number of queues per thread or per process is not relevant to the problem,
so what you are asking for is based on an incorrect understanding of PM.
The problem is that the system input queue passes keyboard and mouse
messages to all of those application message queues *synchronously*. The
desynchronised input queue that is in FixPack 17 and the asynchronous
input queue that is reportedly in OS/2 Merlin will not.
>Does fix pack 17 really give Warp an asynchronous input queue? If so, where do
>I go to download this pack?
No, FP17 does not give Warp an asynchronous input queue. The SIQ "fix"
in FP17 works by trying to monitor programs which block the queue. If
it determines a process is blocking the queue, it attempts to rip
focus away from the offending process.
As is the case with all fixes, mileage varies greatly. But it does not
alter the Intel version of OS/2's SIQ.
Timothy Weaver
twe...@npt-tech.com
NTS, Inc.
http://www.npt-tech.com/newport
:>The problem is that the system input queue passes keyboard and mouse
:>messages to all of those application message queues *synchronously*. The
:>desynchronised input queue that is in FixPack 17 and the asynchronous
:>input queue that is reportedly in OS/2 Merlin will not.
Exactly what do you mean by "desynchronised"? It's either sync or async.
-- -----------------------------------------------------------------
Floyd Drennon <fdre...@pobox.com>
OS/2 & Lan Server Certified Instructor
Comp-U-Comm - Computer & Communication Consultants
-------------------------------------------------------------------
Does fix pack 17 really give Warp an asynchronous input queue? If so, where do
I go to download this pack?
mark
: Does fix pack 17 really give Warp an asynchronous input queue? If so, where
: do I go to download this pack?
Hah! Many people would wish that there was an asynchronous input queue
fix... :-) but there isn't. Note that the previous poster said
"desynchronised". This refers to a hack IBM put in that is supposed
to detect when the input queue is frozen... in my experience it sometimes
helps, but mostly it doesn't catch problems that the old Ctrl-Esc couldn't
already handle. But YMMV.
--
+-----------------------------------------------+----------------------------+
| Name: Malcolm Chan (English) | |
| Zeng Qiangyong (Chinese) | _/ _/ _/_/_/ |
| | _/_/ _/_/ _/ _/ |
| Student | _/ _/ _/ _/ |
| Computer Science Department | _/ _/ _/ |
| University of Auckland | _/ _/ _/ _/ |
| | _/ _/ _/_/_/ |
| EMail Address: mal...@kcbbs.gen.nz | |
+-----------------------------------------------+----------------------------+
I have always wanted to understand the SIQ problem and your post has brought
me a little nearer.
I wonder if you could clarify one or two points that those of us without an
in depth knowledge of operating systems find difficult.
Why does the fact that keyboard and mouse messages are passed to all
application message queues *synchronously* cause the system to lock up if one
application misbehaves? If I understood this I imagine I would understand why
an asynchronous input queue would solve the problem.
What is the difference between desynchronous and asynchronous, and how does
it affect the problem?
It may of course be that it is not possible to understand the answers to
these questions without a degree in computing science but I thought there was
no harm in asking.
David
Actually with FP17 it's still synchronous. "Desynchronised" means,
effectively, that a kludge has been put in to keep the synchronous queue
from being locked by a single app for more than a predetermined time limit.
_VTL_______________________________________________________________________
Viet-Tam Luu Team OS/2 Corel Engineering 1996 NWC: "Never forget."
2B Math/CS, University of Waterloo http://www.csclub.uwaterloo.ca/~vtluu/
"You go there you're gone forever / I go there I'll lose my way / if we
stay here we're not together / Anywhere is" - Enya & R. Ryan, "Anywhere Is"
>What is the difference between desynchronous and asynchronous and how does it
>affect the problem?
>
The "desynchronous" fix in FP17 that you're referring to is just a kind of
watchdog. It watches all the apps and if any fail to respond to a message
within a certain period of time (by default 2 seconds) it pushes them to
the background, and stops waiting for the reply to the message, so the
next one in the queue can be processed. It helps, sometimes, but is
not a real great solution since your computer is still locked for a couple of
seconds.
With an asynchronous queue, the system doesn't wait for replies to any of the
messages, but just fires them off. The problem with this is, with a
synchronous queue you _know_ that the messages will always be handled
in the order they came in. With an asynchronous queue, you have no idea
what order they will be handled in; it'll depend on what each app is doing,
its priority, all kinds of things. Frankly, except for this one problem, the
synchronous queue is a much better way of doing things. Of course,
this one problem is a bear.
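The ordering trade-off Rob describes can be seen in a toy Python model (the
app names, messages, and schedule here are invented for illustration, not
anything from PM itself): synchronous delivery yields exactly the posting
order, while asynchronous per-app queues make the global handling order
depend on when each app happens to run.

```python
from collections import deque

posted = [("A", 1), ("B", 2), ("A", 3), ("B", 4)]

# Synchronous: one queue, one global order -- exactly the posting order.
sync_order = [msg for _, msg in posted]

# Asynchronous: each app drains its own queue whenever it is scheduled.
queues = {"A": deque(), "B": deque()}
for app, msg in posted:
    queues[app].append(msg)

async_order = []
# Invented schedule: B gets CPU time before A does.
for app in ("B", "B", "A", "A"):
    async_order.append(queues[app].popleft())

print(sync_order)   # [1, 2, 3, 4]
print(async_order)  # [2, 4, 1, 3]
```

Per-app order is still preserved in the asynchronous case (1 before 3, 2
before 4); only the interleaving between apps becomes unpredictable.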
-- Rob
===============================================================================
Robert McDermid (Hummingbird Communications Ltd.)
mcde...@hcl.com Unless otherwise stated above, all opinions are my own
and do not reflect the views of Hummingbird Communications
===============================================================================
It may not catch problems that CTRL-ESC can't, but it has one big
advantage. You don't have to terminate the offending process. When
the SIQ fix/hack takes focus from the bad app, that app continues
running. If/when it starts handling messages again, OS/2 will include
it in the system message queue again.
Without the SIQ fix, your only choice is to either wait or press
CTRL-ESC and kill the app.
---------------------+--------------------------------------------------------+
David Charlap | The contents of this message are not the opinions of |
da...@visix.com | Visix Software, nor of anyone besides myself. |
Visix Software, Inc. +---------------------------+----------------------------+
Member of Team-OS/2 | What does this button do? |
---------------------+---------------------------+
Here's the problem in a nutshell, and there is no good solution.
Every possible solution creates a different problem.
With a windowing system, events can go to many different windows.
Most are sent by applications or by the OS when things relating to
that window happen (like repainting, timers, etc.)
Mouse input events go to the window you click on (unless some window
captures the mouse).
So far, no problem. Whenever an event happens, you put a message on
the target window's message queue. Every process has a message
queue. If the process queue fills up, the messages back up onto the
system queue.
This is the first cause of apps hanging the GUI. If an app doesn't
handle messages and they back up into the system queue, other apps
can't get any more messages. The reason is that the next message in
line can't go anywhere, and the system won't skip over it.
This can be fixed by making apps have bigger private message queues.
The SIQ fix does this. PMQSIZE does this for systems without the SIQ
fix. Applications can also request large queues on their own.
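The backup mechanism described above can be sketched as a toy Python model
(the queue size, app names, and messages are invented for illustration): a
FIFO system queue feeds fixed-size per-app queues, and because the system
won't skip the head message, one full queue strands everything behind it.

```python
from collections import deque

APP_QUEUE_SIZE = 2  # invented size; PMQSIZE raises the real equivalent

def deliver(system_queue, app_queues):
    """Synchronous delivery: stop at the first undeliverable message."""
    while system_queue:
        target, msg = system_queue[0]
        if len(app_queues[target]) >= APP_QUEUE_SIZE:
            break  # head message is stuck; nothing behind it moves
        app_queues[target].append(msg)
        system_queue.popleft()

system_queue = deque([("A", "key1"), ("A", "key2"), ("A", "key3"),
                      ("B", "click")])
app_queues = {"A": deque(), "B": deque()}
deliver(system_queue, app_queues)
# App A never drains its queue, so it fills up; app B's click is
# stranded in the system queue behind key3:
print(list(app_queues["B"]))  # []
```

Raising APP_QUEUE_SIZE delays the stall but cannot eliminate it, which is
why the bigger-queue fixes only help rather than solve the problem.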
Another source of the problem, however, happens when you include
keyboard events. When you press a key, there's no easy way to know
what window the keystroke message should be delivered to.
Most windowing systems use a concept known as "focus". The window
with focus gets all incoming keyboard messages. Focus can be changed
from window to window by apps or by users clicking on windows.
This is the second source of the problem. Suppose window A has focus.
You click on window B and start typing before the window gets focus.
Where should the keystrokes go? On the one hand, they should go to A
until the focus actually changes to B. On the other hand, you
probably want the keystrokes to go to B, since you clicked there
first.
OS/2's solution is that when a focus-changing event happens (like
clicking on a window), OS/2 holds all messages in the system queue
until the focus change actually happens. This way, subsequent
keystrokes go to the window you clicked on, even if it takes a while
for that window to get focus.
The downside is that if the window takes a real long time to get focus
(maybe it's not handling events, or maybe the window losing focus
isn't handling events), everything backs up in the system queue and
the system appears hung.
There are a few solutions to this problem.
One is to make focus policy asynchronous. That is, focus changing has
absolutely nothing to do with the keyboard. If you click on a window
and start typing before the focus actually changes, the keystrokes go
to the first window until focus changes, then they go to the second.
This is what X-windows does.
Another is what NT does. When focus changes, keyboard events are held
in the system message queue, but other events are allowed through.
This is "asynchronous" because the messages in the system queue are
delivered to the application queues in a different order from that
with which they were posted. If a bad app won't handle the "lose
focus" message, it's of no consequence - the app receiving focus will
get its "gain focus" message, and the keystrokes will go to it.
The NT solution also takes care of the application queue filling up
problem. Since the system delivers messages asynchronously, messages
waiting in the system queue will just sit there and the rest of the
messages will be delivered to their apps.
The OS/2 SIQ solution is this: When a focus-changing event happens,
in addition to blocking further messages from the application queues,
a timer is started. When the timer goes off, if the focus change has
not yet happened, the bad app has its focus taken away and all
messages targeted at that window are skipped. When the bad app
finally handles the focus change message, OS/2 will detect this and
stop skipping its messages.
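As a rough sketch of that timer-based fix (a toy Python model; the class
and method names are invented and the 2-second default is only what earlier
posters reported, not IBM's actual code):

```python
TIMEOUT = 2.0  # FP17's default is reportedly about 2 seconds

class FocusWatchdog:
    """Toy model of the FP17 'desynchronised' queue: once a pending
    focus change exceeds the timeout, the offender's messages are
    skipped until it handles the focus change."""
    def __init__(self, timeout=TIMEOUT):
        self.timeout = timeout
        self.pending_since = None   # when the focus change was requested
        self.skipped = set()        # apps whose messages are being skipped

    def focus_change_requested(self, now):
        self.pending_since = now

    def focus_change_completed(self, app):
        self.pending_since = None
        self.skipped.discard(app)   # app handles messages again

    def should_skip(self, app, now):
        if (self.pending_since is not None
                and now - self.pending_since > self.timeout):
            self.skipped.add(app)   # rip focus away from the offender
        return app in self.skipped

wd = FocusWatchdog()
wd.focus_change_requested(now=0.0)
print(wd.should_skip("badapp", now=1.0))  # False: within the timeout
print(wd.should_skip("badapp", now=3.0))  # True: focus gets ripped away
wd.focus_change_completed("badapp")
print(wd.should_skip("badapp", now=4.0))  # False: messages flow again
```

Note the downside described above is visible in the model: nothing at all
happens until a focus change is requested and the timeout expires.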
As for the pros and cons:
The X-windows solution is probably the easiest. The problem is that
users generally don't like having to wait for the focus to change
before they start typing. On many occasions, you can type and the
characters end up in the wrong window because something (usually heavy
system load) is preventing the focus change from happening in a timely
manner.
The NT solution seems pretty nice, but making the system message queue
asynchronous can cause similar problems to the X-windows problem.
Since messages can be delivered out of order, programs must not assume
that two messages posted in a particular order will be delivered in
that same order. This can break legacy apps, but since Win32 always
had an asynchronous queue, it is fair to simply tell app designers
"don't do that". It's harder to tell app designers something like
that on OS/2 - they'll complain "you changed the rules and our apps
are breaking."
The OS/2 solution's problem is that nothing happens until you try to
change window focus, and then wait for the timeout. Until then, the
bad app is not detected and nothing is done.
I would just throw in that, yes, it's probably a kludge deluxe and no, it
does not work all the time as you would like, but as a developer it's now
saving me much time and effort. I used to be able to lock things up quite
soundly and regularly when working on certain types of code that are
susceptible to circular freak-outs. I can almost always recover without a
reboot now, which helps tremendously.
On the other hand, I now find that the focus will occasionally fail to
'make it' to an app that I click on, depending on how busy the current
focus window is. It will sometimes give up, but the second click always
works. Mostly this happens when I kick off a compile in a command line and
move the cursor over to the editor. Only certain apps have this problem,
and I assume it's related to some sequence of events in their handling of
focus change messages that is changed slightly. It's not that the timeout
is kicking in, because it's never that long before it gives up; there's
just a pause and a flash as though it got the focus, then it stays on the
command line. Occasionally the focus cursor creation/destruction scenario
gets munged in the losing app, and I have to push it back and pull it
forward again to get the cursor to show up.
Dean Roddey
CIDCorp, The CIDLib Class Libraries
dro...@jagunet.com
http://www.jagunet.com/~droddey/
No, it doesn't - but Merlin will. (Just came from a Merlin demo at my
local IBM building)
-----------------------------------
James R. Himmelman
jhi...@i-2000.com
-----------------------------------
Great explanation, David. I always meant to look up exactly what the problem
was, but now that can wait a little longer. Someone mentioned earlier in this
thread that IBM was changing Merlin to an asynchronous queue. I had previously
heard that this was not going to be the case. Can anyone verify either
rumour?
--
Carsten Whimster -- carsten_...@iqpac.com
EDM Associate Editor, Book Reviewer
EDM http://www.iqpac.com/edm2/index.shtml
Book Reviews http://www.iqpac.com/edm2/columns/books.shtml
My Webpage http://www.undergrad.math.uwaterloo.ca/~bcrwhims/index.html
Mark
It seems another problem with focus on the WPS is having to wait for
swap file activity to subside; bad app or no bad app, the OS/2 GUI seems
to give priority to the swap file over the GUI screen queues. Perhaps
this is in cooperation with the whole system and its resources, but it can
create many delays in focusing, especially on systems with fewer resources
(sometimes waiting minutes with only 1 to 6 applications open). When
developing software on higher-end computers it may be less noticeable to
the developer if they are using fast drives, more memory, and upgraded
CPUs. It seems that the ceilings on computing power, and how long a given
standard stays current, are not as negotiable as they once were.
> I find with either with or without the SIQ fix that Warp will not respond
> to keyboard input if it is working on updating the swap file. It does not
> have to be a bad app just an app that either uses the swap file alot or o
You really need SCSI busmaster, o' serf of ATA. ATA disk I/O easily
takes all CPU power (or nearly all, say, 95%) while SCSI uses next
to none (about 10%). You can do a lot in 90% left versus 5% left of
CPU time.
--
Toolkits [database] [sound] [utils] |cor...@crl.com
DOS Bullet 1.27 DOS Ruckus 1.21 | ftp://ftp.crl.com /users/co/cornel
Win3x Bullet 1.27 W95 Bullet 2.05 |http://www.crl.com/~cornel/
DOSX32 Bullet 2.05 OS/2 Bullet 2.05 | BBS:(210)684-8065
I find, either with or without the SIQ fix, that Warp will not respond
to keyboard input if it is working on updating the swap file. It does not
have to be a bad app, just an app that either uses the swap file a lot, or
one that you have been using when you start another application and it
starts swapping memory to disk. I suppose with larger-memory systems it
is not as noticeable, but on 12 - 20 meg systems it is sometimes
noticeable when using only 2 to 6 apps, and sometimes more noticeable with
internet applications.
Sometimes, in order for already running (good) apps to take focus, it can
take up to several minutes for the focus to change, that is, if OS/2 is
trying to update the swap file. Also, when closing apps the swap file
will often take over the system until it is done. I have noticed, during
this same period of activity, applications failing when put into and out
of the system queue while lots of disk swapping is going on.
From my general understanding, when starting an application, depending on
system resources, a percentage of the code is placed in several areas, one
being virtual memory. When closing the app, all or part of the code stays
in the swap file for later use. With some applications it seems that more
code is being written and cleaned up when closing the application than when
starting it. I find that these kinds of functions slow down the WPS
considerably and affect the user when trying to change focus, often
hampering the user's work rather than just making one application act
badly. More careful planning by the user can sometimes reduce these kinds
of occurrences, but they will often happen anyway for some users. I find
users who come from Windows environments don't always readily understand
this, because Windows relies more on fast memory and screen updates than on
multitasking, and when something does go wrong they will often see a
General Protection Fault and reboot, never having to deal with problems in
focusing.
One point of having a GUI is to be able to change focus rapidly, but often
it seems the GUI doesn't always want to cooperate with the multitasking
OS, and vice versa. I think OS/2 comes a lot closer to having a complete
multitasking environment, but is always faced with the (perhaps less
prioritized) element of having a fast GUI that responds to the user's
input. Having to continually wait and refocus, both the GUI and the user's
eyes and related brain activity, places quite a strain not only on a
single user but on large populations of users, institutions, industries,
governments, etc.
>I find with either with or without the SIQ fix that Warp will not respond
>to keyboard input if it is working on updating the swap file.
And I still don't see why IBM hasn't fixed this problem. I've already
figured out how to solve it without all these fancy workarounds:
1) Make the keyboard driver a low-level system driver
(command-line level) so that the WPS cannot lock it
2) Give the keyboard event system-level priority
3) Make every application "borrow" time from the keyboard driver.
(This won't slow down a user-input device like a keyboard)
4) If a lock-up occurs, the offending application could easily be
terminated by a low-level kill utility.
___ _________________________________________________________________
| \ Brandon P. Fesler, a proud member of TEAM OS/2 USA
| P | World Wide Web: http://www.oklahoma.net/~nethead
| / Mail/Finger: net...@okc.oklahoma.net
| \
| F | I support the Constitution, even though our government doesn't.
|___/ _________________________________________________________________
I have MOVED! Be sure to get the new address.
As for 1) and 2), the keyboard driver in OS/2 is already a low level system
driver running in kernel mode with system-level priority.
As for 3) isn't that what Fixpack 17 is doing?
As for 4), isn't that what Watchcat is doing?
Rgds,
Chris
Famous People on Operating Systems (Please feel free to contribute)
René Descartes---"I think, therefore I don't use Windows."
Clint Eastwood---"A man's got to know his operating system's limitations."
Albert Einstein---"E=OS/2"
Hamlet---"To Warp or not to Warp, that is the question."
Steve McGarrett, Hawaii Five-O---"Boot'em, Dano."
President Roosevelt---"This day shall live in infamy."
(On the day Windows 95 is launched.) ***cro...@kuentos.guam.net***
This is generally not due to priorities, but to hardware limitations.
Although the keyboard interrupt usually has higher priority than the hard
disc, if your hard disc adapter does not support DMA (and many ATA ("IDE")
hard disc controllers do not) then during periods of heavy disc access the
hard disc device driver is busy sitting in a loop reading data from the
hard disc controller, and multitasking is disabled. So although keyboard
interrupts are being serviced, the keystrokes are being queued in the
keyboard device driver, and not passed to applications.
There's nothing that you can do about this in software. PIO has the same
performance degradation on just about all operating systems (although on
OSes like DOS or DOS+Windows, you don't notice because they don't attempt
to multitask hard disc access alongside other threads). The solution is to
get yourself an ATA controller that uses DMA instead of PIO, or to get
yourself a SCSI controller (which has further hardware features to aid in
multitasking, such as the ability to overlap multiple I/O requests to
different devices on the one SCSI bus).
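The PIO-versus-DMA difference can be caricatured in a few lines of Python
(the word count and cost accounting are invented; real drivers are far more
involved): with PIO the CPU itself polls for and moves every word, while
with DMA the controller moves the data and the CPU pays only a roughly
constant setup-plus-interrupt cost.

```python
SECTOR_WORDS = 256  # one 512-byte sector as 16-bit words

def pio_read_sector(port_ready, port_data, buffer):
    """PIO: the CPU busy-waits on the controller and copies every word
    itself; during the tight loop the driver typically disables task
    switching."""
    cpu_work = 0
    for _ in range(SECTOR_WORDS):
        while not port_ready():   # poll the status port
            cpu_work += 1
        buffer.append(port_data())
        cpu_work += 1             # every word costs CPU time
    return cpu_work

def dma_read_sector(buffer, sector):
    """DMA: the controller moves the data itself; the CPU only sets up
    the transfer and takes one completion interrupt."""
    buffer.extend(sector)         # stands in for the bus-master transfer
    return 2                      # setup + interrupt, roughly constant

data = list(range(SECTOR_WORDS))
words = iter(data)
pio_buf, dma_buf = [], []
pio_cost = pio_read_sector(lambda: True, lambda: next(words), pio_buf)
dma_cost = dma_read_sector(dma_buf, data)
print(pio_cost, dma_cost)  # 256 2
```

Both reads land the same data; only the CPU cost differs, which matches the
95%-versus-10% overhead figures quoted elsewhere in this thread.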
Firstly, the command line is not low-level. This isn't DOS+Windows you
know, where Windows runs on top of DOS. In OS/2, "low-level" means "running
at Ring 0, alongside the OS/2 kernel". All applications, including text-mode
(i.e. "command line") ones, run on top of the OS/2 kernel, in Ring 3.
Secondly, the keyboard device driver is already "low-level". Look in your
CONFIG.SYS and you will find a line that says
BASEDEV=IBMKBD.SYS
That's the keyboard device driver. You can look up more information on the
keyboard device driver (and many other device drivers) in the OS/2 Warp
Command Reference in your Information folder (under "BASEDEV", obviously
enough).
The keyboard device driver runs in protected mode, at ring 0. It services
keyboard interrupts directly, and queues up the keystrokes so that they can
be read by applications.
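A toy Python sketch of that queueing behaviour (names and buffer size are
invented; the real driver is ring 0 code, not Python): the interrupt
handler appends scancodes even while no application is reading, which is
why keystrokes pile up rather than vanish when the GUI is stalled.

```python
class ScancodeBuffer:
    """Toy model of the keyboard driver's queue: the interrupt handler
    appends scancodes; applications drain them later, in order."""
    def __init__(self, size=16):
        self.buf = [0] * size
        self.head = self.tail = self.count = 0
        self.size = size

    def isr(self, scancode):      # called on each keyboard interrupt
        if self.count == self.size:
            return False          # buffer full: the keystroke is lost
        self.buf[self.tail] = scancode
        self.tail = (self.tail + 1) % self.size
        self.count += 1
        return True

    def read(self):               # called when an application finally asks
        if self.count == 0:
            return None
        sc = self.buf[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return sc

kb = ScancodeBuffer(size=4)
for sc in (0x1E, 0x30, 0x2E):     # arbitrary make-codes for illustration
    kb.isr(sc)
# The application reads them later, in FIFO order:
print([hex(kb.read()) for _ in range(3)])  # ['0x1e', '0x30', '0x2e']
```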
This is not a problem with the virtual memory manager per se. The problem is
that PM blocks on a focus change until the relevant
parts of the target application are paged in to process synchronous messages.
If PM had asynchronous input queues swap file I/O wouldn't block anything
but the application whose pages are being swapped.
- Mark Butler
--
Mark David Butler ( butlerm @ xmission.com )
>disc, if your hard disc adapter does not support DMA (and many ATA ("IDE")
>hard disc controllers do not) then during periods of heavy disc access the
>hard disc device driver is busy sitting in a loop reading data from the
>hard disc controller, and multitasking is disabled.
First of all, disabling multitasking during hard disk I/O sounds like
a software problem, not a hardware problem to me.
Second of all, surely you jest. There is absolutely no reason why any
system with preemptive multitasking should block tasks due to CPU driven
hard disk I/O. I have never seen OS/2, Windows NT, or the Amiga block
all tasks during CPU driven IDE or SCSI hard disk I/O.
OS/2 and Windows NT simply give the appearance of blocking
the user interface during paging due to Microsoft's synchronous
user interface design. But all actual processes run full speed
during disk I/O unless they are waiting on another task to respond
to the operating system.
The only exception to this I have ever seen is that there was an original
SCSI adapter manufacturer for the Amiga nine years ago who seemed to
think that calling Forbid() during disk I/O would make it run faster (it
doesn't -- they fixed it in the next release).
The first problem is trivially easy to fix. Each application needs
a dynamically sized input queue that can grow to any length required.
The Amiga OS has this.
1. Every application can have one or more message ports.
2. Every window is assigned to a message port
3. Windows can share the same message port
4. You can have an arbitrary thread / message port mapping.
5. Busy applications simply accumulate messages on the message
port until they are ready to process them.
6. All user interface messages are fully asynchronous, however they
are definitely delivered in the same order they are sent.
7. User interface elements like menus and gadgets are fully asynchronous
with the owning application. You can select menu items, edit text
fields, etc even when the owning application is busy calculating.
8. All of this works without requiring the creation of multiple threads
on the part of the application programmer. The user interface elements
are run by the operating system.
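Points 1 through 6 can be sketched as a toy Python model (class and
variable names are invented; this is not the actual Amiga Exec API):
ports grow without bound, senders never block, and each port is strictly
FIFO.

```python
from collections import deque

class MessagePort:
    """Toy Amiga-style message port: unbounded and asynchronous, but
    strictly FIFO per port (points 5 and 6 above)."""
    def __init__(self):
        self.messages = deque()
    def post(self, msg):          # the sender never blocks (point 5)
        self.messages.append(msg)
    def get(self):                # drained in posting order (point 6)
        return self.messages.popleft() if self.messages else None

class Window:
    def __init__(self, port):     # every window has a port (point 2)
        self.port = port

shared = MessagePort()            # two windows sharing one port (point 3)
w1, w2 = Window(shared), Window(shared)
w1.port.post("click w1")
w2.port.post("key w2")
# Even if the owning thread is busy, nothing upstream blocks; when it
# finally reads, order is preserved:
print(shared.get(), shared.get())  # click w1 key w2
```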
>This is the second source of the problem. Suppose window A has focus.
>You click on window B and start typing before the window gets focus.
>Where should the keystrokes go? On the one hand, they should go to A
>until the focus actually changes to B. On the other hand, you
>probably want the keystrokes to go to B, since you clicked there
>first.
There is no problem, the operating system needs to unilaterally change the
focus from A to B as soon as B is selected regardless of whether A or B
are hopelessly busy. This implies that the operating system needs to be
able to redraw large parts of the window completely asynchronously from
the owning application. This actually happens on the Amiga. In fact
if you use SMART_REFRESH windows your program need not ever redraw its
own windows -- they simply act like virtual screen buffers. In the more
common SIMPLE_REFRESH case, the OS redraws the window frame, menus, and OS
managed user interface gadgets and then posts a refresh message to the
application to refresh its own window as convenience allows.
In this way completely dead applications (even ones where the task is
stopped in a debugger) have fully live user interfaces. You can pick them
up, move them around, look at the menus, type on the screen, etc even
when the application is comatose. The Amiga does not have good resource
recovery, so if a GUI task crashes you often get a fully live user interface
with no application task behind it at all.
My suggestion for the way the OS/2 GUI programming model should be improved
is to gradually eliminate at least the inter-application coupling by
having each application appear to run stand alone in its own virtual
GUI machine. All input events should be sent asynchronously to each
virtual machine. None of the communications from the main OS to each
virtual machine should ever block for any reason.
Then each virtual machine can have one or more threads that process one
or more virtual input queues in the hopelessly synchronous and blocking
manner that is Microsoft's legacy to the user interface world.
Each virtual machine should have a backup window manager, Xwindows / Motif
style, that at least provides a working title bar, depth selection gadgets, etc.
temporarily when an application is too busy to provide its own (if it must
at all).
As a temporary alternative, perhaps someone should write a user interface
class framework that has common code that provides all the user interface
behavior that should be performed asynchronously by the operating system,
and then creates a secondary thread that runs the application without the
programmer ever having to be unduly concerned about all the complex issues
that arise in multithreaded programs (as if the user interface were managed
by the operating system and there was actually only one thread in his
program).
Does anyone have any comments / suggestions?
I have a PCI-bus SCSI adaptor, and still observe the same problem that
the original poster does. Either there's some settings for the driver that
aren't set correctly, or there's some other factor at work here.
It all depends upon the device driver. Some PIO device drivers simply sit
in an infinite loop *polling* the I/O port for the next byte read from the
device.
This polling is generally time-critical, and hence the reason why the device
driver generally doesn't allow context switching whilst it is polling.
One of the worst examples of this, by the way, is the SCSI HBA driver for the
PAS 16. It uses PIO data transfer.
I've had very bad experiences with PAS16 SCSI and its damned stupid polled
I/O device driver in the past. If anyone reading this is thinking of using
PAS16 SCSI, *don't*! Avoid it like the plague. Buy a real SCSI Host Bus
Adapter, such as an Adaptec.
I have an Adaptec 1542B ... I think the drives are limited to a 5 MB/sec
transfer rate, though they seem to work OK with the controller set to
5.7 MB/s, which is the default. With some systems you have to set this
back to 5.0 MB/s. Question is, would it be worthwhile to upgrade to
10 MB/s drives and controller without upgrading the CPU, memory, video,
etc.? I see the newest controllers and hard drives are capable of
20 to 40 MB/sec transfer rates, but their cost seems to be about 4 times
that of the norm.
However, I assume having PROTECTONLY=YES (+ my suggested
DOSEMULATION=ON mode) will fix many of the aforementioned
problems.
I read a post about a Merlin presentation by an IBMer, where the
queue changes in Merlin were described as a multiple queue and
not asynchronous. I have no clue what this means or if the IBMer
really knew what he was talking about. Does any one know what
the features of a multiple queue might be?
Chris Robinson - chr...@ibm.net
False. As a matter of fact, there was a bug in the early x86 code
where, if a prefix byte were used in the instruction, upon return
from an interrupt, the instruction would fail. See any comprehensive
x86 CPU reference for details (this was back in the 808x line). Anyway, yes,
rep movs is interruptable, as are all instructions except some
stack pointer loading (once again, early x86es didn't get that right).
>Assuming the REP MOV instruction is still in use, I would presume
>that the movement is done in chunks of some convenient size. During
>a chunk, no interrupts are possible. Between chunks, all pending
The likely problem ATA types see is that a disk I/O operation has
the highest priority in the system, so if it never comes back out
from its kernel call, nothing else is going to happen except other
interrupts (hopefully -- don't assume this when using Win95).
If you get diskio19.zip from hobbes, check to see what it says your
overhead is. My ATA drives run 95% and 65%. My SCSI drives run 10%.
That's 90% CPU left available during the period monitored, versus
say 5%. You can fill in the rest.
As far as I know, the REP MOV instruction is uninterruptible. I'm
not sure if that's how 'modern' drivers do it, but let's face it,
your data is in some RAM on a controller and you want to get it to
your own RAM. There are only a certain number of ways to do that.
Assuming the REP MOV instruction is still in use, I would presume
that the movement is done in chunks of some convenient size. During
a chunk, no interrupts are possible. Between chunks, all pending
multitasking must be done, and there are probably only a handful
of timeslices between chunks, so only a handful of threads will
get serviced.
It just plain slows down.
Busmaster.
Dale Pontius
(NOT speaking for IBM)
Another con to the NT solution is that Win32, like OS/2 Presentation
Manager, uses focus change notifications to change the appearance of
windows. A button, for example, will display a dotted rectangle when it
has the focus. Under certain conditions, when focus processing isn't done
synchronously, you can end up with two buttons on the screen, both of which
appear to have the focus, which can be confusing to the user.
Actually, the problem is NOT trivial.
Assume that you click a button in a window 'A'. The action taken is
to create another window 'B' that contains a text entry field. Keystrokes
are issued IMMEDIATELY following the button press (say, via a macro
program). The keystrokes are available BEFORE window 'B' has been
created (possibly in its own process). Where do those keystrokes go?
Depending on external factors (activity of other processes, scheduling,
speed of CPU), the keystrokes could be sent to window 'A' *or* window 'B'.
Not very consistent. If you state that input events must be processed
synchronously (the current OS/2 model), then things work out.
On NT, sometimes windows get created in front, sometimes behind... a
bit unpredictable.
If you disallow type-ahead and mouse-ahead during process creation,
things get a bit better (and Warp has this option -- I think IBM
wanted to "check out the water"). Also, it is difficult to make a
macro recorder type of application with asynchronous events. How do you
know which window will have the focus in a predictable way? Of course,
you could simply disallow this kind of application (unless it is within
your own application, because you know when window creation is finished).
In your final solution, you remove this capability as well. Not so good.
You could also send explicit focus commands or query existence of controls.
By the way, each application has an input queue that it can set to any
length desired under OS/2. Most of your points are covered already.
Message queues are created by threads IF they want one. Messages do
accumulate until the thread reads the queue.
Point 6 is a bit confusing. How can the messages be asynchronous,
and yet delivered in the same order sent?
Points 7 and 8 are pushes. Either provide threads and open the architecture,
*or* provide "under the cover" system services. Threads are provided,
so I don't see the need for the extra machinery.
To summarize:
You said:
"There is no problem, the operating system needs to unilaterally change the
focus from A to B as soon as B is selected regardless of whether A or B
are hopelessly busy. This implies that the operating system needs to be..."
and my counter:
How do you change focus to B if it doesn't exist yet? Do you try to
solve the NP-Complete execution problem? Or just put up with mis-focus?
Fred Weigel.
:In <4pp9cn$k...@xmission.xmission.com>, but...@xmission.xmission.com (Mark David Butler) writes:
:>In article <4pke7g$o...@shelby.visix.com>,
:>
Simple. PM is a program, like all others. If something is running at
higher priority (like the kernel's swapper thread), it's going to
block PM. That's the way OS/2 schedules CPU time. Higher priority
threads always take time from lower priority threads.
And before you suggest that PM (and other apps, I guess) be given a
priority higher than the swapper, think about it. If code and data
pages are missing, what's PM going to do? Execute whatever it finds
in memory?
>1) Make the keyboard driver a low-level system driver
> (command-line level) so that the WPS cannot lock it
It already is. It's a BASEDEV - loads before the non-BASEDEV device
drivers load.
>2) Give the keyboard event system-level priority
Event???? Events are created by PM. There's no concept of events
below PM. There are just devices and I/O requests (along with
semaphores, threads, etc.)
>3) Make every application "borrow" time from the keyboard driver.
> (This won't slow down a user-input device like a keyboard)
How's it going to do this? Do you want OS/2 to asynchronously call a
function in an app? This is something like the UNIX signal
mechanism. Writing an application to handle its keyboard input this
way is a real pain - it's not very different from hooking interrupts
in DOS programs - a royal pain. No thanks.
>4) If a lock-up occurs, the offending application could easily be
> terminated by a low-level kill utility.
Such utilities can be written without changing OS/2. But you can't
preempt the swapper, no matter what!
Not all messages. Only focus-changing messages. Things like clicking
on the desktop or on a frame window, CTRL-ESC, ALT-TAB, etc. The
timer doesn't get started for other messages.
>within a certain period of time (by default 2 seconds) it pushes them
>to the background, and stops waiting for the reply to the message, so
>the next one in the queue can be processed. It helps, sometimes, but
>is not a real great solution since your computer is still locked for
>a couple of seconds.
>
>In the asynchronous queue, the system doesn't wait for replies to any
>of the messages, but just fires them off. The problem with this is,
>with a synchronous queue you _know_ that the messages will always be
>handled in the order they came in. With an asynchronous queue, you
>have no idea what order they will be handled in, it'll depend on what
>each app is doing, its priority, all kinds of things. Frankly,
>except for this one problem, the synchronous queue is a much better
>way of doing things. Of course, this one problem is a bear.
The problem is a bear only because there are tons of OS/2 apps that
were built using a synchronous queue. If any rely on the synchronous
behavior, they'll have to be rewritten.
The problem wouldn't be so terrible if the system had always had an
asynchronous queue (like NT) or multiple queues (like X), since the apps
would all know that this behavior exists and wouldn't assume otherwise.
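The ordering trade-off described in the quoted text can be shown with a toy model (Python, illustrative only; the app and event names are made up):

```python
from collections import deque

# With one synchronous queue, global arrival order is preserved.
# With per-app queues (the X / NT style), only per-app order is
# guaranteed; cross-app interleaving depends on scheduling.
events = [("app1", "k1"), ("app2", "m1"), ("app1", "k2")]

# synchronous single queue: handled strictly in arrival order
sync_order = [ev for _, ev in events]

# per-app queues: each app drains its own queue at its own pace
queues = {}
for app, ev in events:
    queues.setdefault(app, deque()).append(ev)

# suppose app2 happens to run first; the interleaving across apps is
# arbitrary, but each app still sees its own events in order
async_order = list(queues["app2"]) + list(queues["app1"])

assert sync_order == ["k1", "m1", "k2"]
assert async_order == ["m1", "k1", "k2"]   # k1 still precedes k2
```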
How are you going to know in advance that the user event will change
window focus? If I write a program, I can subclass the frame window
and have it not take focus when clicked on. I can also make it take
focus on some other user event.
Until the window receives the user event and programmatically changes
its focus, the OS can't really know whether it's right to just rip
focus away.
This is why the FP17 SIQ fix only starts the timer if you click on a
frame window. Clicks on client windows won't do it, because the app
may have created its own focus policy criteria.
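A toy model of that focus-policy point (Python; the window records and handler names are invented): only the window's own procedure can say whether a click takes focus, so the system cannot decide up front.

```python
# Each window carries its own (possibly subclassed) window procedure.
# The system must deliver the click and let that procedure run before
# it knows whether focus actually changed.
def click(window, focus):
    # only the window's own procedure decides whether to take focus
    taken = window["wndproc"](window)
    return taken if taken is not None else focus

normal   = {"name": "normal",   "wndproc": lambda w: w["name"]}
declines = {"name": "declines", "wndproc": lambda w: None}  # subclassed: refuses focus

focus = "desktop"
focus = click(declines, focus)
assert focus == "desktop"      # the click did not move focus
focus = click(normal, focus)
assert focus == "normal"
```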
>This implies that the operating system needs to be
>able to redraw large parts of the window completely asynchronously from
>the owning application. This actually happens on the Amiga.
Very good for the Amiga. But you can't retrofit that onto an existing
OS without breaking lots of apps. Apps assume that each thread is
only going to handle one message at a time. If you asynchronously
send a PAINT message while some other message is being handled, you
stand a good chance of crashing the app.
>My suggestion for the way the OS/2 GUI programming model should be
>improved is to gradually eliminate at least the inter-application
>coupling by having each application appear to run stand alone in its
>own virtual GUI machine. All input events should be sent
>asynchronously to each virtual machine. None of the communications
>from the main OS to each virtual machine should ever block for any
>reason.
>
>Then each virtual machine can have one or more threads that process
>one or more virtual input queues in the hopelessly synchronous and
>blocking manner that is Microsoft's legacy to the user interface
>world.
>
>Each virtual machine should have a backup window manager, Xwindows /
>Motif style, that at least provides a working title bar, depth
>selection gadgets, etc. temporarily when an application is too busy
>to provide its own (if it must at all).
What about things like the WPS, where one process owns lots and lots
of windows - windows that appear to the user as different programs?
How is this scheme going to make them play nice?
And you want this when?
You make a lot of great suggestions, and if I was designing a new OS,
I'd probably use some of them. But designing something new is a far
cry from trying to redesign an OS without breaking existing apps.
And how would you rather do it? If you try to access a piece of
memory that is not resident, what can you do other than wait for the
swapper to load it?
>Perhaps this is in cooperation with the whole system and resources
>but can create many delays in focusing, especially on systems with
>fewer resources.
All systems get annoyingly slow when you bog them down. There is no
way to keep the UI responsive under all possible system loads. Sooner
or later, things are going to slow down.
On the other hand, I've done very well with the Creative Labs
SoundBlaster-16 SCSI-2 card. It has an Adaptec chipset equivalent to
their 1510 card. Works great for me. (Of course, I only use it with
a CD-ROM and a tape drive, both of which are relatively slow devices.)
Focus changing isn't a simple process. Get a spy program and examine
the system message queue while changing focus among windows. Close to
100 messages of various kinds get sent to different windows during the
focus changing process.
>I notice that my CPU meter keeps right on running without any
>jerkiness,
The CPU meter probably responds to only two messages - WM_PAINT and
WM_TIMER. Since it doesn't do much, it can give the appearance of
normal operation under a heavier system load.
>yet even typing becomes very slow and jerky when this is taking
>place.
Depends on what you're typing into and what kind of processing is going
on.
You're talking about the bus speed jumpers. That's a function of your
motherboard. On my Dell system at work, Dell tech support said I
could jumper it to the full 8M/s bus speed. Your mileage may vary.
As for SCSI speed, SCSI-1 and SCSI-2 are 5M/s buses. Fast SCSI is
10M/s. Fast-Wide SCSI is 20M/s. The 1542B card is a Fast SCSI card.
It will use the 10M/s devices if you have them. (I think your entire
SCSI chain must be fast or they all fall back to the slow 5M/s speed.)
The only thing to keep in mind is that no matter how fast the drives,
you can't transfer data between the controller and the computer faster
than your bus speed. This is 5M/s for ISA (up to 8M/s if you jumper
the card that way.) Still, 8 is better than 5.
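A quick sanity check of that arithmetic (Python sketch; sizes in megabytes, rates in M/s as in the text):

```python
# The bus-speed point above: even with a 10 M/s Fast SCSI drive,
# an ISA bus caps the usable transfer rate.
def transfer_seconds(megabytes, drive_rate, bus_rate):
    # the slowest link in the chain caps the effective rate
    effective = min(drive_rate, bus_rate)
    return megabytes / effective

# pushing 100 MB through a 10 M/s Fast SCSI drive:
assert transfer_seconds(100, 10, 5) == 20.0   # stock 5 M/s ISA bus
assert transfer_seconds(100, 10, 8) == 12.5   # card jumpered to 8 M/s
```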
>Question is would it be effective to upgrade to 10 mb/s drives and
>controller without upgrading the cpu, memory, video etc.
Depends on what you have now and what you do. Disk-intensive things
like file servers, compiling, databases, and other similar things will
definitely benefit. Your 1542 card can handle the Fast SCSI drives,
but you'll have to replace it with a non-ISA card (VLB, PCI, MCA or
EISA) if you want to be able to use the entire 10M/s bandwidth.
>I see the newest controllers and hard drives are capable of 20 to 40
>mb/sec transfer rates but their cost seems to be about 4 times that
>of the norm.
You're talking about Fast-Wide SCSI and SCSI-3.
Wide SCSI increases the data channel from 8 bits to 16 bits -
doubling the transfer rate from 10M/s to 20M/s at the same clock speed.
You should note, however, that all the drives on a Wide SCSI bus must
be Wide devices. One "narrow" drive will force the entire bus to run
as "narrow" - ruining your performance. (If you have a mix of drives
and don't want to dump the narrow ones, you'll have to get a second
SCSI card or get one with two ports.)
SCSI-3 is a new standard that hasn't quite solidified yet. It's even
faster, but there are three different incompatible cabling schemes
that vendors are using. I'd wait until that settles down before
buying into it.
But you should keep in mind that these speeds are for the SCSI bus.
The drives themselves can't (yet) do 20M/s. The fastest hard drive
I've seen is a 9G AV-type drive. With Fast-Wide SCSI, it can do
sustained writes at 12M/s and reads at 6M/s.
What you are talking about is prioritisation of disk access; I don't
believe it happens for anything, including swap (the GUI is just a process).
I think that all disk requests are just processed in sequence, with the
result being that if an idle priority process begins disk access,
followed by a high priority process doing access to the same disk, then
initially the high priority process will have to wait behind the idle
priority process. Then as the disk request for the high priority process
is being carried out, it is blocked so the idle priority process will run
and can (will usually) request another read/write. The high priority
process will then have to wait for this if it issues another read/write.
(actually I s'pose the disk accesses wouldn't be writes because these
will be grouped/delayed by the lazy write cache - unless they are swapper
writes)
So effectively the disk accesses will get "time sliced" one by one
regardless of the priority of the processes involved. It normally
slows the system down to a crawl - because the reads can involve seeking
to/from either end of the disk.
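That seek-thrashing effect is easy to model (Python sketch; the track numbers and per-track seek cost are invented for illustration):

```python
# Two processes reading from opposite ends of the disk force a long
# seek on nearly every request when their accesses are interleaved.
def total_seek(requests, seek_per_track=0.01):
    pos, total = 0, 0.0
    for track in requests:
        total += abs(track - pos) * seek_per_track
        pos = track
    return total

low  = [0, 1, 2, 3]                  # idle-priority process, inner tracks
high = [1000, 1001, 1002, 1003]      # high-priority process, outer tracks

batched = total_seek(low + high)     # one process finishes before the other
interleaved = total_seek([t for pair in zip(low, high) for t in pair])

assert interleaved > 5 * batched     # time-slicing multiplies the seek cost
```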
Mind you, if you think that is bad - try this.
Copy a large (ie like 300M - 500M) file to a HPFS partition. The system
will spend about 3-5 minutes just playing with the disk before it actually
begins copying the file. I presume that it is trying to preallocate
contiguous sectors.
If you try to access the disk (swapper access or just reading a file) during
this time - that process will hang until the pre-copy sector allocation is
complete. Obviously if it was swapper access, then your whole system is
hung for a couple of minutes.
Bye - Mark
--
| // Mark Stead - Internet: Work ma...@unico.com.au |
| // "Save Ferris!" Home ma...@daemon.apana.org.au |
| // There is more to life than money - like sex, computers |
| // Windows 95 : Busy or Unstable? Don't ask me, I run OS/2 |
;>I've noticed this too. It only happens when the swap file is being shrunk,
;>not when it is growing. I know that shrinking the swap file is fairly
;>CPU-intensive, but I can't figure out why it affects focus change on apps
;>so much. I notice that my CPU meter keeps right on running without any
;>jerkiness, yet even typing becomes very slow and jerky when this is
;>taking place. It's seriously strange.
I start my swapper at 30MB and will likely soon change that to 40MB. The
added 16MB (32 total) helps significantly, but so did increasing the default
swapper to 30MB.
At work I have the swapper on its own SCSI drive (270MB). The drive was
on its way to scrap, and what the hey. This alone greatly improved the
responsiveness of the system. The default swapper is still 30MB on this though.
The performance comes from SCSI and the fact that the heads are always
near where they're needed.
Try hitting ctrl-alt-numlock twice in a row. This will invoke the
system dumper. ONCE YOU HAVE DONE THIS, THERE IS NO WAY BACK! YOU
MUST REBOOT AFTERWARDS.
Then, you can either feed it dump diskettes, *or* just do ctrl-alt-del.
NOTE: This will not properly shut down the disk-cache! Not recommended
as an everyday thing!
Fred Weigel.
:In <4pomfs$l...@silver.jba.co.uk>, Jd...@jba.co.uk (Jonathan de Boyne Pollard) writes:
:>net...@okc.oklahoma.net wrote:
:>| In <1S0vx02B...@ix.netcom.com>, w...@ix.netcom.com (wmcs) writes:
:>|
:>| >I find with either with or without the SIQ fix that Warp will not respond
:>| >to keyboard input if it is working on updating the swap file.
:>|
:>| And I still don't see why IBM hasn't fixed this problem. I've already
:>| figured out how to solve it without all these fancy workarounds:
:>|
:>| 1) Make the keyboard driver a low-level system driver
:>| (command-line level) so that the WPS cannot lock it
:>
:>Firstly, the command line is not low-level. This isn't DOS+Windows you
:>know, where Windows runs on top of DOS. In OS/2, "low-level" means "running
:>at Ring 0, alongside the OS/2 kernel". All applications, including text-mode
:>(i.e. "command line") ones, run on top of the OS/2 kernel, in Ring 3.
:>
:>Secondly, the keyboard device driver is already "low-level". Look in your
:>CONFIG.SYS and you will find a line that says
:>
:> BASEDEV=IBMKBD.SYS
:>
:>That's the keyboard device driver. You can look up more information on the
:>keyboard device driver (and many other device drivers) in the OS/2 Warp
:>Command Reference in your Information folder (under "BASEDEV", obviously
:>enough).
:>
:>The keyboard device driver runs in protected mode, at ring 0. It services
:>keyboard interrupts directly, and queues up the keystrokes so that they can
:>be read by applications.
To be fair, it's not just an OS/2 thang. Under NT, when you shut down an app that's
gotten into a non-trivial swapping situation it becomes very spongy for about 5
seconds as it cleans this up.
Though I guess it's kinda spongy all the time <ha, ha>
Dean Roddey
CIDCorp, The CIDLib Class Libraries
dro...@jagunet.com
http://www.jagunet.com/~droddey/
Actually, an easier & safer way is to press CTRL-ALT-DEL instead. The
OS will pop a little window up that says "Rebooting the system...." while it
flushes the system caches. This IS a proper shut down, only it doesn't notify
each open app that the system is closing, so you will lose your work.
The weird thing is, before I do this, I press Ctrl-ESC a few times to try to
break the loop the system is always getting stuck in. After waiting over
a minute, I press Ctrl-Alt-Del, and the system responds to that instantly and
displays that "Rebooting...." message. Now why can't it respond to
Ctrl-Esc that quick?
Mark
I guess I should have said DMA transfer speed or transfer speed
instead of transfer rate.
The original question was how does this affect the user's ability to
focus applications on the GUI .. I think some of the people here posted
answers that pointed out some of the problems.
I thought that higher priority threads only take time from lower priority
threads when the higher priority ones are not blocked. When the
swapper attempts to read the disk, it should be blocked pending
the reception of the data (although, this may not work too well with
most IDE drives - it should work fine with SCSI). When the swapper
is blocked, the lower priority threads should run. [I think this is the
case for OS/2 - if I am wrong about this, then I would say that
OS/2's scheduling has a *major* design flaw.]
>And before you suggest that PM (and other apps, I guess) be given a
>priority higher than the swapper, think about it. If code and data
>pages are missing, what's PM going to do? Execute whatever it finds
>in memory?
Well, the lower priority tasks that are waiting on memory from the
swapper would not run since one of the things that happens when
a page fault occurs is that the task that caused the page fault is
blocked by the OS until the swapper retrieves the necessary
page. But any other programs should be able to continue. This
is the 2nd most important feature of virtual memory - enhancing
performance of multiple applications by changing to a different
application when one gets blocked by a page fault (or any other
disk activity, or any other IO activity, or for any other reason).
Granted, if the necessary parts of the PM are swapped out or
the application thread that needs to accept the message is
swapped out, there will be a PM-wide pause while the swapper
does its stuff. But for other swapper activity (including size
changes) this need not be the case - IBM just needs to revamp
their code a little. For example, it seems like the swapper is
written to complete a size change before handling additional
page faults. This makes some sense from the OS programmer's
point of view since it is the simplest way to handle it. However,
it would be much nicer for the user if page faults could be
handled at any time - and IMHO would be worth the extra effort
by IBM.
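The scheduling behavior argued for above - a high-priority thread blocked on I/O yields the CPU to runnable lower-priority threads - can be sketched like this (Python; the thread names and priority values are invented):

```python
# Minimal priority scheduler model: the highest-priority *runnable*
# thread gets the CPU; a blocked thread, however important, does not.
def pick_runnable(threads):
    runnable = [t for t in threads if not t["blocked"]]
    return max(runnable, key=lambda t: t["priority"])["name"] if runnable else None

threads = [
    {"name": "swapper", "priority": 31, "blocked": True},   # waiting on the disk
    {"name": "pm",      "priority": 12, "blocked": False},
    {"name": "app",     "priority": 8,  "blocked": False},
]
assert pick_runnable(threads) == "pm"        # swapper blocked: PM keeps running
threads[0]["blocked"] = False
assert pick_runnable(threads) == "swapper"   # I/O done: swapper preempts
```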
>>4) If a lock-up occurs, the offending application could easily be
>> terminated by a low-level kill utility.
>
>Such utilities can be written without changing OS/2. But you can't
>preempt the swapper, no matter what!
Wrong.
- Jeff
Anyway, I like your idea (I think it was yours, anyway) about making the
swap file really big. It's already at 20 Mb, but it rarely goes above 30, and
I have room to size it up to 40 with some space to spare. On my home machine,
with only IDE drives, I have a 49 Mb swap file and never have this problem,
so it seems like a pretty easy solution.