
DOM timers and the system clock.


Ry Nohryb

Jun 24, 2010, 1:33:26 PM
Hi,

Do you think that adjusting the operating system's date/time ought to
affect a setTimeout(f, ms) or a setInterval(f, ms) ?

I don't.

I mean, when I code a setTimeout I'm saying "do this as soon as x ms
have elapsed", not do this at the time +new Date()+ x milliseconds,
right ?

But in every browser I've tested this in (NS Navigators, iCabs, the
latest Chromes, Safaris and Firefoxes), in all of them except Opera
(kudos to Opera !), any pending setTimeouts and setIntervals go nuts
just by adjusting the system's clock to some time in the past.

Try it yourself: open http://jorgechamorro.com/cljs/100/ and see what
happens to the timers as soon as you set the system's clock back by,
for example, an hour or a day (to yesterday).

Everywhere but in Opera. That's a bug, right ? Or not ? What do you
think ? Are there any (valid) excuses for that ? Or should we open a
bunch of tickets in their respective bugzillas ?

TIA,
--
Jorge.

Jeremy J Starcher

Jun 24, 2010, 2:02:37 PM
On Thu, 24 Jun 2010 10:33:26 -0700, Ry Nohryb wrote:

> Hi,
>
> Do you think that adjusting the operating system's date/time ought to
> affect a setTimeout(f, ms) or a setInterval(f, ms) ?

In many other situations, adjusting the system clock leads to
unpredictable events, including possible refiring or skipping of cron
jobs and the like.

It is perfectly reasonable for software to do something unpredictable
when something totally unreasonable happens.

In other words: DON'T DO THAT.

If keeping systems in sync is important, there are ways to keep clocks
in sync without ever setting a system clock backwards -- you just "slow
down" the clock until it finally catches up. While it has side effects,
they are a lot more gentle than most other approaches, but this gets off
topic.

By the same token, setTimeout and setInterval fail to fire when the
computer is in suspend or hibernate, UAs act differently when woken
from sleep, and all UAs I've tested fail when I remove the onboard RAM.

> I don't.

I'm sorry to hear that.

> I mean, when I code a setTimeout I'm saying "do this as soon as x ms
> have elapsed", not do this at the time +new Date()+ x milliseconds,
> right ?

But what you say and what the computer understands are not the same
thing. If the OS only has one timer, how do you suggest it keeps track
of time passage besides deciding to start at:
+new Date()+ x milliseconds?


<snip>

> Everywhere but in Opera.

It would be mildly interesting to see their implementation of that.

> That's a bug, right ? Or not ?

No.

> What do you think ?

I think if I can't say anything nice ...

> Are there any (valid) excuses for that ? Or should we open a
> bunch of tickets in their respective bugzillas ?

If I were part of the Mozilla team, I think I'd enjoy getting a bug
report on that one. It's the sort of thing I'd email around the office
for a good laugh of the day and a bit of stress relief.

Ry Nohryb

Jun 24, 2010, 2:25:00 PM
On Jun 24, 8:02 pm, Jeremy J Starcher <r3...@yahoo.com> wrote:
> (...)

> If I were part of the Mozilla team, I think I'd enjoy getting a bug
> report on that one.  Its the sort of thing I'd email around the office
> for a good laugh of the day and a bit of stress relief.

Sure you'd do that, until you discover that the system resets the time
every now and then and suddenly your brain turns on and your idiotic
smile disappears completely:

system.log :

23/06/10 00:32:49 ntpd[13] time reset -23.5561 s
--
Jorge.

Ry Nohryb

Jun 24, 2010, 2:30:02 PM
On Jun 24, 8:02 pm, Jeremy J Starcher <r3...@yahoo.com> wrote:
> (...) setTimeout and setInterval fail to fire when the

> computer is in suspend or hibernate and UA's act differently when woken
> from sleep and all UA's I've tested fail when I remove the onboard RAM.
> (...)

Wow! How come ?
--
Jorge.

Jeremy J Starcher

Jun 24, 2010, 2:40:13 PM

I'm not an expert on ntpd, but it should only do a 'hard' set the first
time it adjusts the clock; after that it should skew the system time to
sync.

Ry Nohryb

Jun 24, 2010, 2:45:51 PM

And what about daylight saving time ? That's +/- 1 hour at once ...
--
Jorge.

Jeremy J Starcher

Jun 24, 2010, 3:15:25 PM

Depends upon the OS. My Linux box stores the time internally as GMT and
applies rules to translate to the local time zone. There is no
adjustment for daylight saving time.

You might have a point about Windows based machines -- that sounds like a
crappy enough approach that they'd use it.

John G Harris

Jun 24, 2010, 3:34:38 PM
On Thu, 24 Jun 2010 at 10:33:26, in comp.lang.javascript, Ry Nohryb
wrote:

>Do you think that adjusting the operating system's date/time ought to
>affect a setTimeout(f, ms) or a setInterval(f, ms) ?

It doesn't matter what we think. What does the specification say ?


>I don't.
<snip>

Ditto.

John
--
John Harris

Thomas 'PointedEars' Lahn

Jun 24, 2010, 4:01:50 PM
John G Harris wrote:

> Ry Nohryb wrote:
>> Do you think that adjusting the operating system's date/time ought to
>> affect a setTimeout(f, ms) or a setInterval(f, ms) ?
>
> It doesn't matter what we think. What does the specification say ?

It's "DOM Level 0". There is no specification in the sense of a Web
standard (yet).


PointedEars
--
Use any version of Microsoft Frontpage to create your site.
(This won't prevent people from viewing your source, but no one
will want to steal it.)
-- from <http://www.vortex-webdesign.com/help/hidesource.htm> (404-comp.)

Ry Nohryb

Jun 24, 2010, 4:19:02 PM

Again:
Jun 24 21:19:12 MacBookUniBody ntpd[7707]: time reset +61.663190 s
--
Jorge.

Dr J R Stockton

Jun 25, 2010, 4:23:05 PM
In comp.lang.javascript message <15NUn.24$KT3...@newsfe13.iad>, Thu, 24
Jun 2010 18:02:37, Jeremy J Starcher <r3...@yahoo.com> posted:

>On Thu, 24 Jun 2010 10:33:26 -0700, Ry Nohryb wrote:

>> Do you think that adjusting the operating system's date/time ought to
>> affect a setTimeout(f, ms) or a setInterval(f, ms) ?

It, certainly, ought not to do so. But it might do so.

>In many other situations, adjusting the system clock leads to
>unpredictable events, including possible refiring or skipping of cron
>jobs and the like.

AIUI, CRON jobs are set to fire at specific times. A CRON job set to
fire at 01:30 local should fire whenever 01:30 local occurs. A wise
user does not mindlessly set an event to occur during the missing Spring
hour or the doubled Autumn hour, though in most places avoiding Sundays
will prevent a problem.

>It is perfectly reasonable for software to do something unpredictable
>when something totally unreasonable happens.

But changing the displayed time should NOT affect an interval specified
as a duration.

>But what you say and what the computer understands are not the same
>thing. If the OS only has one timer, how do you suggest it keeps track
>of time passage besides deciding to start at:
> +new Date()+ x milliseconds?

By continuing to count its GMT millisecond timer in the normal way and
using it for durations. The displayed time is obtained from a value
offset from that by a time-zone-dependent amount and by a further 18e5
or 36e5 ms in Summer.


A PC has at least two independent clocks, one in the RTC and one using
different hardware (read PCTIM003.TXT, which Google seems to find). The
same seems likely to be true for any computer designed to be turned on
and off.

--
(c) John Stockton, nr London, UK. ?@merlyn.demon.co.uk Turnpike v6.05.
Web <URL:http://www.merlyn.demon.co.uk/> - w. FAQish topics, links, acronyms
PAS EXE etc : <URL:http://www.merlyn.demon.co.uk/programs/> - see 00index.htm
Dates - miscdate.htm estrdate.htm js-dates.htm pas-time.htm critdate.htm etc.

VK

Jun 26, 2010, 7:44:44 AM
On Jun 25, 12:01 am, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:

> It's "DOM Level 0".  There is no specification in the sense of a Web
> standard (yet).

There is a working draft from 2006, left in limbo ever since; it
contains nothing of value but a lot of question marks:
http://www.w3.org/TR/Window/#window-timers

At least on Windows/IE, JavaScript is heavily based on the C++
runtimes. In particular, %System%\System32\jscript.dll imports
Msvcrt.dll and from there gets all its floating-point math and Date
manipulation. So it would be interesting to know the implementation of
the C++ runtime's own timers and their reaction to an OS time change.
If the OP's observations are correct, then the "canonical" setTimeout
explanation that goes back to the Netscape docs is probably incomplete
to the point of being misleading; the proper explanation would be (the
added part between asterisks): "The setTimeout method evaluates an
expression or calls a function after a specified amount of time * since
the timer was set, based on the current system time *"

VK

Jun 26, 2010, 7:56:17 AM
On Jun 26, 3:44 pm, VK <schools_r...@yahoo.com> wrote:
> "The setTimeout method evaluates an expression or calls a function after a
> specified amount of time * since the timer has been set based on the
> current system time *"

In other words,
window.setTimeout("window.alert(1)", 10000);
executed at, say, 2010-06-26 00:01:00 LST (Local System Time)
literally means:

1. Get LST / 2010-06-26 00:01:00

2. Get the delay (10000 ms = 10 sec)

3. Set a C++ runtime timer to fire at 2010-06-26 00:01:10 and notify
the JavaScript engine * whenever that moment of LST occurs *.

VK

Jun 26, 2010, 8:44:19 AM
On Jun 26, 3:56 pm, VK <schools_r...@yahoo.com> wrote:
> Other words
>  window.setTimeout("window.alert(1)", 10000);
> executed at say 2010-06-26 00:01:0000 LST (Local System Time)
> literally means:
>
> 1. Get LST / 2010-06-26 00:01:0000
>
> 2. Get delay (10000ms = 10 sec)
>
> 3. Set C++ runtime IRQ to 2010-06-26 00:11:0000 to notify the
> Javascript engine * whenever this moment of LST will happen *.

As a Google search shows, I am right. C/C++ do not have built-in timer
functionality, and the add-on implementations in OSs are based on
timePlaced/timeCalled time stamps, not on some absolute coordinate. If
so, then it is a global laziness oversight of non-real-time OSs.

It may also be interesting that for Windows environments the minimum
delay is 10 ms; any smaller value will be automatically set to 10 ms, so
window.setTimeout("foo()", 0) is perfectly valid but equal to
window.setTimeout("foo()", 10)

Also the maximum delay for Windows environments is 2147483647ms =~ 596
hours =~ 24.8 days, any bigger value will be set to 2147483647ms. See
http://msdn.microsoft.com/en-us/library/ms644906%28v=VS.85%29.aspx
USER_TIMER_MINIMUM and USER_TIMER_MAXIMUM

Thomas 'PointedEars' Lahn

Jun 26, 2010, 2:29:51 PM
Dr J R Stockton wrote:

> Jeremy J Starcher <r3...@yahoo.com> posted:

>> In many other situations, adjusting the system clock leads to
>> unpredictable events, including possible refiring or skipping of cron
>> jobs and the like.
>
> AIUI, CRON jobs are set to fire at specific times. A CRON job set to
> fire at 01:30 local should fire whenever 01:30 local occurs. A wise
> used does not mindlessly set an event to occur during the missing Spring
> hour or the doubled Autumn hour, though in most places avoiding Sundays
> will prevent a problem.

An even wiser person lets their system, and their cron jobs, run on UTC,
which avoids the DST issue, and leaves the textual representation of dates
to the locale.



>> It is perfectly reasonable for software to do something unpredictable
>> when something totally unreasonable happens.
>
> But changing the displayed time should NOT affect an interval specified
> as a duration.

Duration is defined as the interval between two points in time. The only
way to keep the counter up-to-date is to check against the system clock. If
the end point of the interval changes as the system clock is modified, the
verdict as to whether and when the duration is over must become wrong.



>>But what you say and what the computer understands are not the same
>>thing. If the OS only has one timer, how do you suggest it keeps track
>>of time passage besides deciding to start at:
>> +new Date()+ x milliseconds?
>
> Bu continuing to count its GMT millisecond timer in the normal way and
> using it for durations.

Since usually a process is not being granted CPU time every millisecond,
this is not going to work. I find it surprising to read this from you as
you appeared to be well-aware of timer tick intervals at around 50 ms,
depending on the system.


PointedEars
--
Prototype.js was written by people who don't know javascript for people
who don't know javascript. People who don't know javascript are not
the best source of advice on designing systems that use javascript.
-- Richard Cornford, cljs, <f806at$ail$1$8300...@news.demon.co.uk>

VK

Jun 27, 2010, 7:19:31 AM
I received an answer from Boris Zbarsky (one of the Mozilla project
leads) at mozilla.dev.tech.js-engine

http://groups.google.com/group/mozilla.dev.tech.js-engine/msg/4e6df47759cc7018

Copy:

> Assuming

> var timerID = window.setTimeout(doIt, 20000);
> executed at the moment of time 2010-XX-XX 23:50:0000

> and within the next 20 secs OS time was changed by DST request or
> manually. Will it be executed somewhere in 20000ms since timerID set
> irrespectively to the OS time, somewhere at 2010-XX-XY 00:10:0000 of
> the old system time, somewhere at 2010-XX-XY 00:10:0000 of the new
> system time? Other words is the queue based on an absolute scale,
> immutable time stamps, mutable time stamps?

1) This is a DOM issue, not a JSEng one.
2) Right now, the new system time would determine firing time (though
note that "time" means "time since epoch", so is unaffected by
DST changes, changes of OS timezone, or the like; only actual
changes to the actual clock matter, not to the user-visible
display).
3) The information in item 2 is subject to change. See
https://bugzilla.mozilla.org/show_bug.cgi?id=558306

VK

Jun 27, 2010, 7:39:50 AM
So, to summarize the actual setTimeout/setInterval behavior in response
to the OP's question:

setTimeout / setInterval are based on time stamps using the current
epoch time since 1970-01-01T00:00:00Z (ISO 8601). This way a system
time zone change or DST change does not affect timers; this way a
system clock change breaks the timer functionality.

Timers were not, are not and will not be based on relative scales, as in:
window.setTimeout("foo()", 10000);
// WRONG ASSUMPTION:
// foo() will be executed 10 sec
// after the window.setTimeout("foo()", 10000);
// statement was executed

Dr J R Stockton

Jun 27, 2010, 4:39:16 PM
In comp.lang.javascript message <7229694.e...@PointedEars.de>,
Sat, 26 Jun 2010 20:29:51, Thomas 'PointedEars' Lahn
<Point...@web.de> posted:

>Dr J R Stockton wrote:
>
>> Jeremy J Starcher <r3...@yahoo.com> posted:
>>> In many other situations, adjusting the system clock leads to
>>> unpredictable events, including possible refiring or skipping of cron
>>> jobs and the like.
>>
>> AIUI, CRON jobs are set to fire at specific times. A CRON job set to
>> fire at 01:30 local should fire whenever 01:30 local occurs. A wise
>> used does not mindlessly set an event to occur during the missing Spring
>> hour or the doubled Autumn hour, though in most places avoiding Sundays
>> will prevent a problem.
>
>An even wiser person lets their system, and their cron jobs, run on UTC,
>which avoids the DST issue, and leaves the textual representation of dates
>to the locale.

A peculiar attitude (as is customary).

The Germans, by EU law, adjust their official time in Spring and Autumn.
No doubt the vast majority of the population will shift their daily
lives accordingly. But perhaps you do not. A computer should be set to
use whichever sort of time is most appropriate to its usage.


>>> It is perfectly reasonable for software to do something unpredictable
>>> when something totally unreasonable happens.
>>
>> But changing the displayed time should NOT affect an interval specified
>> as a duration.
>
>Duration is defined as the interval between two points in time. The only
>way to keep the counter up-to-date is to check against the system clock. If
>the end point of the interval changes as the system clock is modified, the
>result as to whether and when the duration is over must become false.

You are displaying a lack of understanding of computers in general and
also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
Resolution 1.

Duration is measured in SI seconds, or multiples/submultiples thereof.
If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
be no surprise).


>>>But what you say and what the computer understands are not the same
>>>thing. If the OS only has one timer, how do you suggest it keeps track
>>>of time passage besides deciding to start at:
>>> +new Date()+ x milliseconds?
>>
>> Bu continuing to count its GMT millisecond timer in the normal way and
>> using it for durations.
>
>Since usually a process is not being granted CPU time every millisecond,
>this is not going to work. I find it surprising to read this from you as
>you appeared to be well-aware of timer tick intervals at around 50 ms,
>depending on the system.

You appear to be still running DOS or Win98, in which there are indeed
0x1800B0 ticks per 24 hours. In more recent systems, the default
granularity is finer, and the fineness can be adjusted by program
demand. Indeed, a program relying on the fineness that it finds may be
affected when another process changes the corresponding timer, AIUI.

Next time that you read PCTIM003, read also its date.

Perhaps you have heard of interrupts? In a bog-standard PC, from the
earliest days, it has been possible to get interrupts at up to 32 kHz
from the RTC - consult the MC146818 data sheet or equivalent. CRON
ought not to rely on being awoken at frequent intervals so that it may
look at the clock; it should be awoken from passivity by the timer event
queue (or whatever it may be called) of the system, and should pre-empt
whatever else may currently have an active time slice.

A sensibly-written CRON would enable events to be scheduled by UTC and
by local time and by duration (SI time) from request.

--
(c) John Stockton, nr London UK. ?@merlyn.demon.co.uk Turnpike v6.05 MIME.
Web <URL:http://www.merlyn.demon.co.uk/> - FAQish topics, acronyms, & links.
Proper <= 4-line sig. separator as above, a line exactly "-- " (RFCs 5536/7)
Do not Mail News to me. Before a reply, quote with ">" or "> " (RFCs 5536/7)

Ry Nohryb

Jun 27, 2010, 7:13:32 PM
On Jun 27, 1:39 pm, VK <schools_r...@yahoo.com> wrote:
> So to summarize the actual setTimeout/setInterval behavior in response
> to the OP question:
>
> setTimeout / setInterval are based on time stamps using the current
> epoch time since 1970-01-01T00:00:00Z ISO 8601. This way system time
> zone change or DST change do not affect timers. This way system clock
> change breaks the timer functionality.

Not in Operas. Kudos to them. A setTimeout(f, 100) means call f in
100ms. If not, I'd rather write setTimeout(f, +new Date+ 100).
--
Jorge.

Thomas 'PointedEars' Lahn

Jun 28, 2010, 9:54:09 AM
Dr J R Stockton wrote:

> Thomas 'PointedEars' Lahn posted:


>> Dr J R Stockton wrote:
>>> Jeremy J Starcher <r3...@yahoo.com> posted:
>>>> In many other situations, adjusting the system clock leads to
>>>> unpredictable events, including possible refiring or skipping of cron
>>>> jobs and the like.
>>> AIUI, CRON jobs are set to fire at specific times. A CRON job set to
>>> fire at 01:30 local should fire whenever 01:30 local occurs. A wise
>>> used does not mindlessly set an event to occur during the missing Spring
>>> hour or the doubled Autumn hour, though in most places avoiding Sundays
>>> will prevent a problem.
>> An even wiser person lets their system, and their cron jobs, run on UTC,
>> which avoids the DST issue, and leaves the textual representation of
>> dates to the locale.
>
> A peculiar attitude (as is customary).
>
> The Germans, by EU law, adjust their official time in Spring and Autumn.
> No doubt The vast majority of the population will shift their daily
> lives accordingly. But perhaps you do not. A computer should be set to
> use whichever sort of time is most appropriate to its usage.

You miss the point. It is not necessary for the system clock of a computer
to use local time in order for the operating system to display local time.
Not even in Germany, which you claim to know so well (but in fact haven't
got the slightest clue about).

>>>> It is perfectly reasonable for software to do something unpredictable
>>>> when something totally unreasonable happens.
>>> But changing the displayed time should NOT affect an interval specified
>>> as a duration.
>> Duration is defined as the interval between two points in time. The only
>> way to keep the counter up-to-date is to check against the system clock.
>> If the end point of the interval changes as the system clock is modified,
>> the result as to whether and when the duration is over must become false.
>
> You are displaying a lack of understanding of computers in general

Is that so? A usual PC will not grant CPU time to a process every
millisecond (so that this process could count down reliably per your
suggestion), so other means are necessary to determine which amount of time
has passed.

> and also of the real world outside - and of ISO 8601 and of CGPM 13, 1967,
> Resolution 1.
>
> Duration is measured in SI seconds, or multiples/submultiples thereof.
> If UNIX, CRON, etc., do otherwise they are just plain wrong (which would
> be no surprise).

You are missing the point completely.

>>>> But what you say and what the computer understands are not the same
>>>> thing. If the OS only has one timer, how do you suggest it keeps track
>>>> of time passage besides deciding to start at:
>>>> +new Date()+ x milliseconds?
>>>
>>> Bu continuing to count its GMT millisecond timer in the normal way and
>>> using it for durations.
>>
>> Since usually a process is not being granted CPU time every millisecond,
>> this is not going to work. I find it surprising to read this from you as
>> you appeared to be well-aware of timer tick intervals at around 50 ms,
>> depending on the system.
>
> You appear to be still running DOS or Win98, in which there are indeed
> 0x1800B0 ticks per 24 hours. In more recent systems, the default
> granularity is finer; and the fineness can be adjusted can be adjusted
> by program demand. Indeed, a program relying on the fineness that it
> finds may be affected when another process changes the corresponding
> timer, AIUI.

You should get yourself informed beyond technical standards, and avoid
making hasty generalizations if you want to be taken seriously. I happen to
be running a PC laptop with a Linux kernel I have configured and compiled
myself which has a finer granularity, a timer frequency of 1000 Hz to be
precise (which is recommended for desktop systems). That does not have
anything to do with the CPU time granted to a process by the operating
system (which is certainly not every millisecond, since other processes
running on that machine want that CPU time, too), especially not with the
resolution of setTimeout()/setInterval() which is determined by the
implementation (and Mozilla-based ones will not go below 10 milliseconds
AISB).

> [snip irrelevance]

Ry Nohryb

Jun 28, 2010, 9:59:39 AM
On Jun 28, 3:54 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
> (...)

> You miss the point.  It is not necessary for the system clock of a computer
> to use local time in order for the operating system to display local time.  
> Not even in Germany, which you claim to know so well (but in fact haven't
> got the slightest clue about). (...)

JFTR: Pointy is Austrian, not German. Isn't it, Pointy ?
--
Jorge.

Ry Nohryb

Jun 28, 2010, 10:12:44 AM
On Jun 28, 3:54 pm, Thomas 'PointedEars' Lahn <PointedE...@web.de>
wrote:
>
> (...) Mozilla-based ones will not go below 10 milliseconds
> AISB (...)

Yes. I wrote this to test that: http://jorgechamorro.com/cljs/097/

My findings (on an Intel Mac):

- Opera triggers a setInterval(f,1) faster than a setInterval(f,0).
- Mozillas and Safari max out @ ~100Hz no matter what (0 ... 10) ms.
- Chrome at 0..5ms gives 230Hz.
- iCab comparatively flies: a setInterval(f,0) runs at more than 2KHz.
--
Jorge.
