This isn't really true; it just seems like it is.
Delays are typically created with "Delay forMilliseconds: 50", but they can also be created with "Delay untilMilliseconds: aMillisecondClockValue". For "Delay forMilliseconds: 50", a Delay is instantiated and #delayTime: is set to the millisecond value. For "Delay untilMilliseconds: aMillisecondClockValue", a Delay is instantiated and #resumptionTime: is set to aMillisecondClockValue.
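For example (these are just the two forms above; the comments are my reading of what gets set):

| delay1 delay2 |
delay1 := Delay forMilliseconds: 50. "delayTime is set to 50; resumptionTime is computed later, at #wait"
delay2 := Delay untilMilliseconds: Time millisecondClockValue + 50. "resumptionTime is fixed right now"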
When #wait is sent, if delayTime is set, resumptionTime is set to the current millisecond clock value (Time millisecondClockValue) plus delayTime. The delay is added to a list of delayed tasks (sorted low to high by resumptionTime). The OS (via the VM) is asked to interrupt us in 100 milliseconds (I'm not 100% sure about this, as I can't see what the VM does and different OSs may deal with interrupt requests in different ways). The current process (the one associated with this delay) is suspended. The Smalltalk dispatcher will then dispatch the next ready-to-run process based upon its priority. If no process is ready to run, then we go to an idle process that relinquishes the CPU to the OS (goes to sleep).
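In sketch form, #wait does something like this (my reading of the mechanism, not the actual source; DelayedTasks is a made-up name for the sorted list):

wait
	"Sketch only: compute resumptionTime if we were created with forMilliseconds:,
	queue this delay, ask the VM for a timer interrupt, and suspend the current process."
	delayTime notNil
		ifTrue: [resumptionTime := Time millisecondClockValue + delayTime].
	DelayedTasks add: self. "kept sorted low to high by resumptionTime"
	"here the VM is asked to interrupt us in interruptPeriod (default 100) milliseconds"
	Processor activeProcess suspend.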
When the interrupt triggers, the list of delayed tasks is checked to see whether any of their resumptionTimes have passed, and those that have are made ready to run.
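Roughly (again a sketch, with the same made-up DelayedTasks name; #makeReadyToRun is a stand-in selector):

handleTimerInterrupt
	"Sketch only: wake every delay whose resumptionTime has passed."
	[DelayedTasks notEmpty
		and: [DelayedTasks first resumptionTime <= Time millisecondClockValue]]
			whileTrue: [DelayedTasks removeFirst makeReadyToRun].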
The problem is as I have described it. If one does a delay for 20 milliseconds, its resumptionTime will be set to 20 milliseconds in the future, but the OS won't trigger the interrupt until 100 milliseconds in the future; by then the delay is 80 milliseconds past due.
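You can see the overshoot directly (this assumes the default 100 millisecond interrupt period):

| start |
start := Time millisecondClockValue.
(Delay forMilliseconds: 20) wait.
Time millisecondClockValue - start. "displays roughly 100, not 20"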
As I also said, for reasons I don't understand, the VM can't seem to get the OS to trigger interrupts more often than every 14-15 milliseconds. So, even if the above situation is fixed (and I think that is easy to do), we still can't get better resolution than about 15 milliseconds.
Try this:
| delay times |
delay := Delay forMilliseconds: 1.
times := OrderedCollection new.
times add: Time millisecondClockValue.
100 timesRepeat: [
	delay wait.
	times add: Time millisecondClockValue].
times inspect.
times last - times first. "display this last line to see the total elapsed milliseconds"
Then do this:
Delay interruptPeriod: 10.
and repeat the above code. I get 10000 and 1560 respectively (1560 over 100 waits is about 15.6 milliseconds per wait). And when:
Delay interruptPeriod: 1.
I still get 1560.
So we can't delay for short amounts of time (less than 100 milliseconds) without changing the delay interrupt period, and even then we can't get below about 15 milliseconds.
For low-volume, high-CPU tasks this isn't too bad, but for high-volume, low-CPU tasks it can be a killer.
In one case (I'm going to simplify it here), I had two servers (NT services) where server A sent server B work via TCP/IP sockets. Server A would wait until server B sent back the result before sending server B more work. This was running on a Windows server in a single-CPU VMware VM (it is often recommended that only one CPU be defined for a VM, because a two-CPU VM isn't dispatched until both CPUs are available). This configuration forced server B to wait on a read on the socket. Server A might send more work in, say, 5 milliseconds, but server B wouldn't know about the data for another 95 milliseconds (assuming I hadn't changed the interrupt period). When I changed the interrupt period things got better, but we still couldn't do better than the 15 millisecond rate.
Lou