
Plea to Microsoft to allow RTC = UTC


Markus Kuhn

Jul 2, 2001, 12:14:28 PM
I have managed to get someone inside Microsoft interested in the
problems of the old MS-DOS convention of keeping the IBM PC RTC in some
local time that is not further specified in the CMOS RAM data. I was asked
to write up a case to convince Microsoft management to dedicate resources
for fixing this and enabling the Windows RTC driver to maintain the
battery clock in Universal Time instead of local time.

I just did so on

http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html

and I would like to invite all computer time gurus out there to review
this brief essay before I send it off to Redmond.

Please let me know by email if you have any further

- arguments
- technical suggestions/proposals
- related references
- URLs of well-documented RTC DST problem stories

that you think whoever at Microsoft touches the Windows RTC code
next should be aware of.

Cheers,

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: <http://www.cl.cam.ac.uk/~mgk25/>

Thomas A. Horsley

Jul 2, 2001, 5:00:45 PM
>I have managed to get someone inside Microsoft interested in the
>problems of the old MS-DOS convention of keeping the IBM PC RTC in some
>local time that is not further specified in the CMOS RAM data.

Oh Please, let it come true! If you need any signatures on petitions,
count me in! The RTC keeping local time instead of UTC is far more
annoying even than \ instead of / :-).
--
>>==>> The *Best* political site <URL:http://www.vote-smart.org/> >>==+
email: Tom.H...@worldnet.att.net icbm: Delray Beach, FL |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+

Maarten Wiltink

Jul 3, 2001, 3:54:24 AM
Markus Kuhn wrote in message <9hq6l4$qh4$1...@pegasus.csx.cam.ac.uk>...

>I have managed to get someone inside Microsoft interested in the
>problems of the old MS-DOS convention of keeping the IBM PC RTC in some
>local time that is not further specified in the CMOS RAM data. I was asked
>to write up a case to convince Microsoft management to dedicate resources
>for fixing this and enabling the Windows RTC driver to maintain the
>battery clock in Universal Time instead of local time.
>
>I just did so on
>
> http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html
>
>and I would like to invite all computer time gurus out there to review
>this brief essay before I send it off to Redmond.
>
>Please let me know by email if you have any further
>
> - arguments
> - technical suggestions/proposals
> - related references
> - URLs of well-documented RTC DST problem stories
>
>that you think whoever at Microsoft touches the Windows RTC code
>next should be aware of.

I'm not a time guru. Can I say a few things? Thank you.

In my humble opinion, you might emphasise that the hardware
clock is a hardware affair, and that only in the software
is there a notion of _which_ time is kept. Forgetting all
about global networking issues, the simple fact that the
hardware clock itself doesn't know what time it is keeping
should mandate that _a_ universal time is kept in it. Then
when the case for having all clocks running the same time
is made, the choice for a specific universal time is easy.

The rest of my comments are about PR. You want Microsoft
to do you [us all] a favour; suck up! Don't say "better
RTC drivers", say "third-party" or "external". (That
driver does not work under NT, I gather? Put that in
there or they'll tell you to use that and go away.)

Similarly, rephrase "public ridicule", at least to the
extent that the people at Microsoft could feel it as an
opportunity to make themselves look good, not as a chance
to get away from the mud-slinging and the rotten tomatoes.

"... The time-proven and robust Unix tradition" sounds
like another bad idea. You're effectively telling them
that Windows should become Unix. Microsoft doesn't want
that and I even think they're right. Unix is historically
a server OS; Windows grew up to be a desktop OS. NOBODY
installs an X server on a headless machine... Windows
doesn't even offer the option to leave it out. Compare it
to other, non-Unix systems that keep time in UTC. Call
it POSIX. Don't just say "Unix, because Unix is better
than Windows", because their answer will be predictable.
Live with it, or switch. (BTW, I _like_ the embedded
argument. Microsoft might, too.)

All this said, I like the idea. If Microsoft doesn't
take to it, or even if they do, would you consider
writing another such letter? I think POSIX could do
with a requirement that the clock be kept in UTC.
And I remember something about the DoD and POSIX and
NT that would make a nice back door if this first
letter falls on deaf ears...

Groetjes,
Maarten Wiltink


Peter Bunclark

Jul 3, 2001, 3:55:09 AM
Markus Kuhn wrote:

Perhaps Microsoft could lead the world and implement a bios clock
that keeps TAI; that way, bios firmware doesn't have to have the
complexity of leap seconds (or any future move to a `different UTC'),
leaving that layer to the OS which has vastly more resources available
to it.

(What happens to a bios clock after a leap second? Is it just wrong
until such times as an NTP-driven Linux system shuts down and
resets it?)

Pete.

John Sager

Jul 3, 2001, 7:10:50 AM
In article <9hq6l4$qh4$1...@pegasus.csx.cam.ac.uk>, mg...@cl.cam.ac.uk (Markus Kuhn) writes:
: I have managed to get someone inside Microsoft interested in the

: problems of the old MS-DOS convention of keeping the IBM PC RTC in some
: local time that is not further specified in the CMOS RAM data. I was asked
: to write up a case to convince Microsoft management to dedicate resources
: for fixing this and enabling the Windows RTC driver to maintain the
: battery clock in Universal Time instead of local time.
:
: I just did so on
:
: http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html
:
: and I would like to invite all computer time gurus out there to review
: this brief essay before I send it off to Redmond.
:

The tone is too critical IMHO. Big corporations don't like having their
noses rubbed in it, and M$ is definitely no exception. Perhaps
the early choice with MS-DOS was reasonable; after all, even DEC made
the same choice with VMS. However, it has outlived its usefulness,
better solutions are available, and they are becoming essential in a
networked world. Let them judge the complexity or otherwise of fixing it;
just point out the advantages if they do, and the probable consequences
if they don't.

Don't forget that you are trying to get the substance of it across
to senior execs at M$. If your contact judges that it will wind them
up, then he'll just bin it. You might even consider generating some
Powerpoint slideware, since that's almost certainly how this will
be presented to management anyway:)

--
John

--
Sorry about the address.
This is me, not BT.

Philip Homburg

Jul 3, 2001, 7:11:18 AM
In article <3B417A5D...@ast.cam.ac.uk>,

Peter Bunclark <p...@ast.cam.ac.uk> wrote:
>Perhaps Microsoft could lead the world and implement a bios clock
>that keeps TAI; that way, bios firmware doesn't have to have the
>complexity of leap seconds (or any future move to a `different UTC'),
>leaving that layer to the OS which has vastly more resources available
>to it.
>
>(What happens to a bios clock after a leap second? Is it just wrong
>until such times as an NTP-driven Linux system shuts down and
>resets it?)

During the period between two leap seconds, the CMOS clock will gain
or lose many seconds. These clocks are not very stable.


Philip Homburg

joseph c lang

Jul 3, 2001, 8:14:16 AM

IMHO leave out (or rewrite) the references to supporting other operating
systems.
The very last thing Microsoft would do is make it easier for the other
guy!
Don't say UNIX, say POSIX.

joe lang

Clifton T. Sharp Jr.

Jul 3, 2001, 6:12:11 PM
Markus Kuhn wrote:
> I have managed to get someone inside Microsoft interested in the
> problems of the old MS-DOS convention of keeping the IBM PC RTC in some
> local time that is not further specified in the CMOS RAM data. I was asked
> to write up a case to convince Microsoft management to dedicate resources
> for fixing this and enabling the Windows RTC driver to maintain the
> battery clock in Universal Time instead of local time.

Wrong approach. Here's what you need to do.

1. Find some geek Ph.D. employed at Microsoft.
2. Show him a real live woman.
3. Guarantee that the woman will spend one full hour talking to him if he
promises to go tell his bosses how much it would inconvenience the
known world if they ran RTCs on UTC, and how it would be so different
from what everyone else in the world is doing.
4. Pay the poor woman really, really well.

--
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Cliff Sharp | "Speech isn't free when it comes postage-due." |
| WA9PDM | -- Jim Nitchals, founder, FREE |
+-+-+-+-+-+-+-+-+-+- http://www.spamfree.org/ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Mike Stump

Jul 4, 2001, 3:30:30 AM
In article <9hq6l4$qh4$1...@pegasus.csx.cam.ac.uk>, Markus Kuhn <sender> wrote:
>I was asked to write up a case to convince Microsoft management to
>dedicate resources for fixing this and enabling the Windows RTC
>driver to maintain the battery clock in Universal Time instead of
>local time.

Ah, I wonder if we could write an RFC for it, `time and date shall be
kept in UTC format inside real time clock chips inside computers', and
then let it wind its way through the normal process, and after an
appropriate number of years, it could then be a standard, then we can
ask all vendors if they are rfc xxxx compliant. The backing to
management would be, to conform to this RFC.

Personally, I can't help but think they gave you busywork. Oh, did I
mention I am a cynic.

Let us know how it goes.

Markus Kuhn

Jul 4, 2001, 1:19:12 PM
Tom.H...@worldnet.att.net (Thomas A. Horsley) writes:

>Markus Kuhn wrote:
>>I have managed to get someone inside Microsoft interested in the
>>problems of the old MS-DOS convention of keeping the IBM PC RTC in some
>>local time that is not further specified in the CMOS RAM data.
>
>Oh Please, let it come true! If you need any signatures on petitions,
>count me in! The RTC keeping local time instead of UTC is far more
>annoying even than \ instead of / :-).

I spent the afternoon looking for interesting strings in the Windows 2000
kernel. I noted a UTF-16 encoded string "RealTimeIsUniversal"
(NTOSKRNL.EXE:bbd4, NTKRNLPA.EXE:9304). This makes me wonder whether
Windows 2000 doesn't already have an undocumented registry entry

HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal

or so that allows keeping the RTC in UTC (probably REG_DWORD with 0 or 1).
None of this is documented or supported by Microsoft, so try at your
own risk ...
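
If the key really exists, enabling it would presumably come down to a
registry file along these lines (hypothetical and untested; the value name
comes from the kernel strings above, the REG_DWORD type is a guess, and as
said, none of this is documented or supported):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```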

http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html

Wolfgang Rupprecht

Jul 4, 2001, 2:07:15 PM

m...@kithrup.com (Mike Stump) writes:
> Ah, I wonder if we could write an RFC for it, `time and date shall be
> kept in UTC format inside real time clock chips inside computers', and
> then let it wind its way through the normal process, and after an
> appropriate number of years, it could then be a standard, then we can
> ask all vendors if they are rfc xxxx compliant. The backing to
> management would be, to conform to this RFC.

If anyone bothers to write an RFC, please consider specifying the time
as being in UTC without the leap seconds (called TAI). The fact that
leap seconds *step* the clock is just disgusting. (It ruins any time
measurement that straddles the step.)

-wolfgang
--
Wolfgang Rupprecht <wolfga...@dailyplanet.wsrcc.com>
http://www.wsrcc.com/wolfgang/
Coming soon: GPS mapping tools for Open Systems. http://www.gnomad-mapping.com/

Joe Gwinn

Jul 4, 2001, 4:04:09 PM
Comments at bottom.

In article <9hq6l4$qh4$1...@pegasus.csx.cam.ac.uk>, Markus Kuhn wrote:

> I have managed to get someone inside Microsoft interested in the
> problems of the old MS-DOS convention of keeping the IBM PC RTC in some
> local time that is not further specified in the CMOS RAM data. I was asked
> to write up a case to convince Microsoft management to dedicate resources
> for fixing this and enabling the Windows RTC driver to maintain the
> battery clock in Universal Time instead of local time.
>
> I just did so on
>
> http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html
>
> and I would like to invite all computer time gurus out there to review
> this brief essay before I send it off to Redmond.
>
> Please let me know by email if you have any further
>
> - arguments
> - technical suggestions/proposals
> - related references
> - URLs of well-documented RTC DST problem stories
>
> that you think whoever at Microsoft touches the Windows RTC code
> next should be aware of.

I just read the proposal. In general, it seems well-proven that keeping
local time in the hardware clocks is today a bad idea, although it was once
OK.

However, it's not a good idea to have the hardware try to do UTC directly,
because leap seconds cannot be predicted by any algorithm, instead
requiring astronomical observation. Nor is it true that hardware clocks
in POSIX are UTC. In fact, the underlying POSIX clock, called "Seconds
Since the Epoch", counts seconds (and fractions of seconds) since the
POSIX Epoch, 00:00:00 UTC 1 January 1970 AD. The fact that the Epoch is
defined in terms of UTC neither requires nor implies that Seconds Since
the Epoch is UTC. In fact, Seconds Since the Epoch is in effect TAI plus
a constant albeit often unknown offset, and the seconds being counted are
nominally equal to the SI Second in duration.

As you know, the true nature of Seconds Since the Epoch was extensively
and sometimes violently debated several months ago during development of
the POSIX 1003.1 Revision. This revision, now in its seventh and final or
next to final draft, weighing in at 3,600 pages, is the unification and
codification of all POSIX and UNIX, replacing 20 or 30 individual
sometimes interlocking sometimes conflicting UNIX and POSIX standards, and
is the ultimate authority. In that document, it is made explicit that
leap seconds are not applied to Seconds Since the Epoch, that every day
has exactly 86,400 seconds, and it states that while "broken-down time" in
POSIX resembles UTC in format, it is not necessarily UTC.

This approach was chosen because: The primary use of time in POSIX is for
message and file modification timestamps, which require only a
monotonically non-decreasing timescale of reasonable resolution. Isolated
machines or networks of machines still need to keep time, but cannot do
UTC because being isolated they cannot know when the leap seconds will
come. Ordinary computer clock hardware and software naturally generates a
timescale resembling TAI, and trying to make the hardware do UTC is far
more complex.

So, the advice to Microsoft should be to follow P1003.1-200x (the draft's
name) in making the hardware clocks parallel TAI, with UTC generated from
TAI as needed. This will both solve their problems with local clocks,
allow better support for their built-in POSIX subsystem, and allow better
integration with the many non-MS platforms that .NET will need to work
with.
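
Generating UTC from a TAI-parallel clock, as suggested above, amounts to a
table lookup, and the table of published TAI-UTC offsets is exactly the
piece an isolated machine cannot keep current. A sketch (Python for
illustration; epochs are the UTC-encoded seconds of the offset changes
through 1999, abridged):

```python
# (TAI - UTC) offset in seconds, valid from the given UTC instant onward
# (abridged; the full table is published by the IERS in Bulletin C)
LEAP_TABLE = [
    (709948800, 27),   # 1992-07-01
    (741484800, 28),   # 1993-07-01
    (773020800, 29),   # 1994-07-01
    (820454400, 30),   # 1996-01-01
    (867715200, 31),   # 1997-07-01
    (915148800, 32),   # 1999-01-01 (current as of mid-2001)
]

def tai_to_utc(tai_seconds):
    """Convert a TAI-parallel seconds count to a UTC-encoded one by
    subtracting the applicable TAI-UTC offset."""
    offset = 26                      # value in force before the first entry
    for start, off in LEAP_TABLE:
        if tai_seconds - off >= start:
            offset = off
    return tai_seconds - offset
```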

Joe Gwinn

Markus Kuhn

Jul 4, 2001, 5:21:07 PM
joeg...@mediaone.net (Joe Gwinn) writes:
>However, it's not a good idea to have the hardware try to do UTC directly,
>because leap seconds cannot be predicted by any algorithm, instead
>requiring astronomical observation.

True, and in practice completely irrelevant. The drift of the BIOS clock is
two orders of magnitude larger than the drift of UT1 or UTC relative to TAI. In
addition, neither Win32 nor POSIX has at this time any notion of TAI
whatsoever (time_t is a UTC encoding under POSIX and will have to remain
so for backwards compatibility), therefore running the RTC in TAI is
really not practical and bad advice.

But then, we had that flame war on the PASC list already a few months ago.

>In fact, the underlying POSIX clock, called "Seconds
>Since the Epoch", counts seconds (and fractions of seconds) since the
>POSIX Epoch, 00:00:00 UTC 1 January 1970 AD.

The POSIX clock, called "Seconds Since the Epoch" encodes the
yyyy-mm-dd hh:mm:ss display of a UTC clock into an integer without
providing a distinction between 23:59:60 and 00:00:00 according to the
formula given in

http://www.cl.cam.ac.uk/~mgk25/volatile/posix-2-2-2-113.pdf

which is implemented in millions of systems to which backwards
compatibility must be maintained.
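
The formula is short enough to sketch here (the POSIX.1-1990 section
2.2.2.113 form, without the later century corrections; tm_year counts years
since 1900, tm_yday is zero-based):

```python
import calendar

def posix_seconds_since_epoch(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    """POSIX.1-1990 section 2.2.2.113 encoding of broken-down UTC into
    'Seconds Since the Epoch'.  A leap second is not representable:
    23:59:60 collapses onto the following 00:00:00."""
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400)

# 2001-07-04 00:00:00 UTC: tm_year=101 (years since 1900), tm_yday=184
t = posix_seconds_since_epoch(0, 0, 0, 184, 101)
print(t)                                            # 994204800
print(t == calendar.timegm((2001, 7, 4, 0, 0, 0)))  # True
```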

The 1990 POSIX standard had very clear language here. If the 200x
POSIX draft waters down that definition, that won't influence prudent
existing practice and backwards compatibility requirements. Unix
does not run on TAI. The world is quite resilient against amok
standards editors.

>In that document, it is made explicit that
>leap seconds are not applied to Seconds Since the Epoch, that every day
>has exactly 86,400 seconds, and it states that while "broken-down time" in
>POSIX resembles UTC in format, it is not necessarily UTC.

So what exactly is a "day" in the exact sense of the new POSIX document?
All that still sounds extremely dubious to me. Can you please post
the full text of the new draft specification here?

>This approach was chosen because: The primary use of time in POSIX is for
>message and file modification timestamps, which require only a

>monotonically non-decreasing timescale of reasonable resolution ...

... and reliable convertibility to local civil time. Only UTC
provides that without access to leap second tables.

>Isolated
>machines or networks of machines still need to keep time, but cannot do
>UTC because being isolated they cannot know when the leap seconds will
>come. Ordinary computer clock hardware and software naturally generates a
>timescale resembling TAI, and trying to make the hardware do UTC is far
>more complex.

Ordinary isolated computers have a time scale that differs significantly
from both TAI and UTC, because it is dependent on the local temperature
history of the 32.768 kHz crystal, an effect two orders of magnitude stronger
than leap seconds. There is absolutely nothing more natural about TAI than
UTC from the point of view of an undisciplined battery-backed crystal
oscillator.

TAI is a rather unfamiliar concept for most non-time-geek mere
mortal users who do not have an HP caesium clock in the basement,
and TAI is neither widely known nor widely disseminated in
time broadcast services. "Ordinary computer clock hardware" (not network
synchronized) is typically manually synchronized every couple of weeks or
months by a user to the famous beeps on the full hour on BBC radio (UTC),
or something like that. Your wrist watch approximates UTC plus an integral
hour offset, not TAI, because you sync it occasionally to a UTC clock,
not to a TAI clock. Just like any "ordinary computer clock hardware".

There is no technology on the horizon that would make clocks with
a drift better than the unpredictability of leap seconds cheap and
battery efficient (!) enough to justify routine use in normal office
computers. The best available battery-backed low-cost clock is AFAIK
the Maxim DS32KHZ <http://pdfserv.maxim-ic.com/arpdf/DS32kHz.pdf>, and
its temperature-compensated crystal drifts around a minute per
year (2 ppm), whereas TAI-UTC drifts less than a second per year.
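
The comparison is easy to put in numbers (a back-of-the-envelope sketch;
the 20 ppm figure for an uncompensated watch crystal is a typical spec,
not from the datasheet above):

```python
SECONDS_PER_YEAR = 365 * 86400            # 31,536,000 s

# assumed drift rates: 2 ppm for a DS32KHZ-class compensated clock,
# ~20 ppm for a typical uncompensated 32.768 kHz crystal
drift_tcxo  = 2e-6  * SECONDS_PER_YEAR    # ~63 s/year, about a minute
drift_plain = 20e-6 * SECONDS_PER_YEAR    # ~630 s/year, over ten minutes

print(round(drift_tcxo), round(drift_plain))
```

Either way, the crystal's drift dwarfs the sub-second annual drift of
TAI-UTC.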

So again, why exactly do you claim that an unsynchronized local
computer clock follows TAI closer than UTC? All I see are clocks
that drift quickly far away from both TAI and UTC and that are
occasionally manually set back to UTC. That's the real world.

>So, the advice to Microsoft should be to follow P1003.1-200x (the draft's
>name) in making the hardware clocks parallel TAI, with UTC generated from
>TAI as needed.

If POSIX-200x really mandates TAI hardware clocks, then I'll reserve
a place in the recycling bin for it right away. First of all, it is not the
business of an API standard to tell me how I implement and configure my
underlying hardware. Secondly, I think it is really bad engineering to base
an API on TAI in a world in which only UTC is widely available. The NTP
community has understood and followed that for a long time, and dubious
new text in some POSIX-200x cannot change that, no matter what authority
its authors claim.

I am perfectly aware that UTC leap seconds can be disruptive, which is
why I propose to standardise a smoothed version of UTC, called UTS, for
the purpose of computer application timestamping and OS APIs, which is
just a formalized and exactly standardized form of what has been
common good practice since BSD's adjtime() anyway:

http://www.cl.cam.ac.uk/~mgk25/uts.txt

When the ITU gets around to giving UTS some formal blessing, all that
needs to be fixed in POSIX.1-1990 with regard to leap seconds is to
replace the word UTC in section 2.2.2.113 with UTS, and we know exactly
how to navigate POSIX systems tightly synchronized all over the world
across a UTC leap second without any non-monotonicities or long-term
deviations from UTC. Sounds perfect to me.

I'd also be happy to see POSIX standardize a proper sophisticated time API
that gives applications not only access to UTC, but also to TAI on those
few machines where it is reliably available for special applications
(mostly navigation and astronomic instrument control). My proposal for
such an API that supports both explicit coding of UTC leap seconds
and access to TAI (and thread-safe time zones and much more) is
available on

http://www.cl.cam.ac.uk/~mgk25/c-time/

though I think today that it is more an academic exercise to see
what a perfectionist time API could look like, while a UTS-based
seconds-since-the-epoch is what makes most sense in the real
world.

Cheers,

Markus

P.S.: Can you please post the relevant text from POSIX.1-200x?
When will IEEE drafts finally be freely available to the interested
public? Are the authority of a document and its price linearly or
inversely correlated? Will POSIX.1-200x define a time_t replacement
with better resolution and if yes, what are epoch, resolution and
leap-second behavior?

Philip Homburg

Jul 5, 2001, 3:40:40 AM
In article <joegwinn-040...@192.168.1.100>,

Joe Gwinn <joeg...@mediaone.net> wrote:
>So, the advice to Microsoft should be to follow P1003.1-200x (the draft's
>name) in making the hardware clocks parallel TAI, with UTC generated from
>TAI as needed. This will both solve their problems with local clocks,
>allow better support for their built-in POSIX subsystem, and allow better
>integration with the many non-MS platforms that .NET will need to work
>with.

Even in the best case, a CMOS clock will gain or lose one second every two
weeks. So what is the point in dealing with leap seconds?

Philip Homburg

Peter Bunclark

Jul 5, 2001, 4:10:36 AM
Philip Homburg wrote:

On my home pc, I lock the system time with NTP; on shutdown, it
sets the CMOS clock to accurate (ish) local civil time. Then when
I boot Windows the time is nearly right and I can set my Timex
DataLink watch by it.
My point is, we all know the CMOS clock can't keep time by itself, but a
decent OS can keep putting it back on track. Pity BIOSes don't
have a programmable drift factor...
As with a lot of these time issues, they don't matter to many folk
because a few seconds here or there don't matter really; but since
we have the technology for high accuracy, at least the standards should
set targets. Remember there was `no point' using 4-digit years in '80s
software.
Another example might be nanosleep() - looked ridiculous when defined,
but more bits get filled in as systems get faster, thank goodness the
world didn't go for microsleep() (or even millisleep()!).
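
A programmable drift factor does not have to live in the BIOS; Linux's
hwclock(8) keeps one in software in /etc/adjtime. The mechanism, in a
simplified hypothetical form:

```python
def corrected_rtc_reading(rtc_seconds, last_set_seconds, drift_sec_per_day):
    """Correct a raw RTC reading using a measured drift rate, the way
    hwclock(8) does with the factor stored in /etc/adjtime: given when
    the RTC was last set accurately and how many seconds per day it
    gains (+) or loses (-), back the accumulated error out of the
    reading before handing it to the OS."""
    days_elapsed = (rtc_seconds - last_set_seconds) / 86400.0
    return rtc_seconds - drift_sec_per_day * days_elapsed

# RTC set accurately 30 days ago, known to gain 4 s/day: the raw
# reading is now ~120 s fast, and the correction removes nearly all of it
last_set = 994204800                      # some reference instant
raw = last_set + 30 * 86400 + 120
corrected = corrected_rtc_reading(raw, last_set, 4.0)
```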

Pete.

Philip Homburg

Jul 5, 2001, 7:59:38 AM
In article <3B4420FC...@ast.cam.ac.uk>,

Peter Bunclark <p...@ast.cam.ac.uk> wrote:
>Philip Homburg wrote:
>> In article <joegwinn-040...@192.168.1.100>,
>> Joe Gwinn <joeg...@mediaone.net> wrote:
>> >So, the advice to Microsoft should be to follow P1003.1-200x (the draft's
>> >name) in making the hardware clocks parallel TAI, with UTC generated from
>> >TAI as needed. This will both solve their problems with local clocks,
>> >allow better support for their built-in POSIX subsystem, and allow better
>> >integration with the many non-MS platforms that .NET will need to work
>> >with.
>>
>> Even in the best case, a CMOS clock will gain or lose one second every two
>> weeks. So what is the point in dealing with leap seconds?
>>
>> Philip Homburg
>
>On my home pc, I lock the system time with NTP; on shutdown, it
>sets the CMOS clock to accurate (ish) local civil time.

And how does it write the current time into the CMOS? A naive approach
may result in an offset of about half a second.

> Then when
>I boot Windows the time is nearly right and I can set my Timex
>DataLink watch by it.

Again, how carefully does Windows read the CMOS? An offset of up to one
second is likely.

> My point is, we all know the CMOS clock can't keep time by itself, but a
>decent OS can keep putting it back on track. Pity BIOS's don't
>have a programable drift factor...

Which means that the CMOS is even less reliable.

> As with a lot of these time issues, they don't matter to many folk
>because a few seconds here or there don't matter really; but since
>we have the technology for high accuracy, at least the standards should
>set targets. Remember there was `no point' using 4-digit years in '80s
>software.

The CMOS clock will never provide high accuracy time. The interface is
wrong. The end result is that you can stop worrying about how the CMOS
clock handles leap seconds, because in general it is far too inaccurate.

Let's try to get decent support for TAI in NTP first.


Philip Homburg

Mike

Jul 5, 2001, 8:15:45 AM
On 4 Jul 2001 17:19:12 GMT, mg...@cl.cam.ac.uk (Markus Kuhn) wrote:

>Tom.H...@worldnet.att.net (Thomas A. Horsley) writes:
>>Markus Kuhn wrote:
>>>I have managed to get someone inside Microsoft interested in the
>>>problems of the old MS-DOS convention of keeping the IBM PC RTC in some
>>>local time that is not further specified in the CMOS RAM data.
>>
>>Oh Please, let it come true! If you need any signatures on petitions,
>>count me in! The RTC keeping local time instead of UTC is far more
>>annoying even than \ instead of / :-).
>
>I spent the afternoon looking for interesting strings in the Windows 2000
>kernel. I noted a UTF-16 encoded string "RealTimeIsUniversal"
>(NTOSKRNL.EXE:bbd4, NTKRNLPA.EXE:9304). This makes me wonder whether
>Windows 2000 doesn't already have an undocumented registry entry
>
> HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal
>
>or so that allows keeping the RTC in UTC (probably REG_DWORD with 0 or 1).
>None of this is documented or supported by Microsoft, so try at your
>own risk ...
>
>http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html
>
>

It looks like Windows may already use UTC internally...
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sysinfo/hh/sysmgmt/time_5gh1.asp

It is just the hardware clock that is not set to UTC.

Peter Bunclark

Jul 5, 2001, 9:27:35 AM
Philip Homburg wrote:

>
> The CMOS clock will never provide high accuracy time. The interface is
> wrong. The end result is that you can stop worrying about how the CMOS
> clock handles leap seconds, because in general it is far too inaccurate.
>

Never? If there was a strong recommendation, along the lines
of an rfc, ``the CMOS clock SHOULD maintain {TAI | UTC|whatever}'',
my wildly optimistic hope would be that future motherboard/bios/clock
manufacturers would get it right.

>
> Let's try to get decent support for TAI in NTP first.
>
>

Fully agree with that; and so much more likely than the Microsoft thing.

Pete.


> Philip Homburg

Philip Homburg

Jul 5, 2001, 10:06:34 AM
In article <3B446B47...@ast.cam.ac.uk>,

Peter Bunclark <p...@ast.cam.ac.uk> wrote:
>Philip Homburg wrote:
>
>>
>> The CMOS clock will never provide high accuracy time. The interface is
>> wrong. The end result is that you can stop worrying about how the CMOS
>> clock handles leap seconds, because in general it is far too inaccurate.
>>
>
>Never? If there was a strong recommendation, along the lines
>of an rfc, ``the CMOS clock SHOULD maintain {TAI | UTC|whatever}'',
>my wildly optimistic hope would be that future motherboard/bios/clock
>manufacturers would get it right.

What is the point of making a highly accurate CMOS clock? You need a high
quality external time source to set the clock in the first place. Why not
use that source instead of your CMOS clock? You can use the CMOS to
maintain some idea of time across a reboot (in case your external time source
is down when you reboot), but you need to design a completely new hardware
interface for the CMOS clock to get something like microsecond precision
(currently millisecond precision is doable with some effort).
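
The "some effort" is edge synchronisation: the seconds register only
resolves whole seconds, but immediately after it ticks over you know the
fractional part is near zero. A sketch of the technique (simulated here
with a coarsened software clock, since real CMOS access needs port I/O):

```python
import time

def wait_for_tick(read_seconds):
    """Busy-wait until a 1 s resolution clock ticks over; immediately
    after the tick, the true time is known to within the polling
    latency, far better than the register's whole-second resolution."""
    start = read_seconds()
    while True:
        now = read_seconds()
        if now != start:
            return now

# Simulate a whole-seconds-only clock (a real RTC read needs port I/O)
coarse = lambda: int(time.time())
edge = wait_for_tick(coarse)
frac = time.time() - edge   # fraction of a second elapsed since the tick
```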

The chance that somebody who cares about subsecond accuracy and doesn't
have decent external time sources reboots around the time that a leap
second occurs is close to zero.

An alternative use for the CMOS clock is to maintain time while a PC is
switched off for a longer period. The problem is that cheap crystals
are not very accurate (or stable). It is probably more economical to buy
receivers for radio stations that broadcast time than to buy high-quality
crystals for your CMOS clock.

Philip Homburg

Tim Roberts

Jul 5, 2001, 11:34:10 PM
phi...@pch.home.cs.vu.nl (Philip Homburg) wrote:
>
>During the period of time between two leap seconds, the cmos clock will gain
>or lose many seconds. They are not very stable.

Indeed, during the period of time between two consecutive midnights, the
CMOS clock will gain or lose many seconds. They are MUCH less accurate
than many people believe.
--
- Tim Roberts, ti...@probo.com
Providenza & Boekelheide, Inc.

Peter Bunclark

Jul 6, 2001, 7:50:10 AM
Philip Homburg wrote:

> In article <3B446B47...@ast.cam.ac.uk>,
> Peter Bunclark <p...@ast.cam.ac.uk> wrote:
> >Philip Homburg wrote:
> >
> >>
> >> The CMOS clock will never provide high accuracy time. The interface is
> >> wrong. The end result is that you can stop worrying about how the CMOS
> >> clock handles leap seconds, because in general it is far too inaccurate.
> >>
> >
> >Never? If there was a strong recommendation, along the lines
> >of an rfc, ``the CMOS clock SHOULD maintain {TAI | UTC|whatever}'',
> >my wildly optimistic hope would be that future motherboard/bios/clock
> >manufacturers would get it right.
>
> What is the point of making a highly accurate CMOS clock? You need a high

One example would be to sync your laptop while it's on the LAN, and have
it show the right time when you fire it up on the plane.
You can get very cheap, very accurate wristwatches, so why not ask
for your CMOS clock to actually keep time? As you say, millisecond
is doable - that's hugely better than what is generally achieved.

Pete.

Colin Dancer

Jul 6, 2001, 8:59:41 AM
I think one of the big issues is temperature control.

Your wrist watch normally stays at a pretty constant temperature given its
contact with your body. Most watches are therefore calibrated/specced at
about this temperature and keep fairly good time as a result.

The inside of your average computer varies much more in temperature than the
range your watch experiences, and this can have a large effect on the
frequency of both the main and CMOS clock. The CMOS clock problem is the
worst, because once your machine shuts down the temperature will drop
significantly, so some form of dynamic rather than static frequency
adjustment would be required. The technology exists, but I guess there just
isn't the demand to drive deployment in off-the-shelf PCs.

For an experiment, try putting your watch in the fridge for a day, and
you'll find all but the most expensive will lose a good few seconds.

"Peter Bunclark" <p...@ast.cam.ac.uk> wrote in message
news:3B45A5F2...@ast.cam.ac.uk...

Aleksandar Milivojevic

Jul 6, 2001, 9:12:52 AM7/6/01
to
Peter Bunclark (p...@ast.cam.ac.uk) wrote:
> You can get very cheap, very accurate, wristwatches, so why not ask
> for your CMOS clock to actually keep time? As you say, millisecond
> is doable - that's hugely better than what is generally achieved.

Very cheap wristwatches are simply very cheap wristwatches. They
sometimes drift as far as +/- 15 seconds per day. If you want
anything more accurate, you'll have to pay 50-100 USD for it. Or at least
that was the case when I was looking at wristwatches in shops the last time
(some years ago).

--
Aleksandar Milivojević <al...@fly.srk.fer.hr>
Opinions expressed herein are my own.
Statements included here may be fiction rather than truth.

p8r

Jul 6, 2001, 11:00:23 AM7/6/01
to

I think it might be important not to confuse CMOS/RTC *accuracy* with
*data representation*. If we can imagine a world in which all wIntel
boxes *try* to represent time the same way as everyone else, then that
would be a much better world than one in which we have an entire class
of computers we have to hack in one of 37(?) different ways just to get
reliable approximate data.

As far as TAI/UTC is concerned, an RTC isn't an exceptionally intelligent
piece of circuitry, since it doesn't have to be. Just let it count its
cycles, and have those cycles represent International Atomic Time, and
let the software/OS that concerns itself with timezones also concern
itself with leap seconds.
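As a sketch of the division of labour described above (the leap-second table here is deliberately tiny and hypothetical; a real system would load the published IERS/NIST table), the RTC just counts, and software applies the table:

```python
# Sketch: deriving UTC from a TAI-style cycle counter.
# The leap-second table below is abbreviated and purely illustrative;
# a real system would load the current IERS/NIST leap-second table.

# (TAI count at which TAI-UTC changed, new TAI-UTC offset in seconds)
LEAP_TABLE = [
    (0, 31),            # hypothetical: offset 31 s at our epoch
    (100_000_000, 32),  # hypothetical: one leap second inserted later
]

def tai_to_utc(tai_seconds):
    """Return UTC seconds for a given TAI count, using the table."""
    offset = LEAP_TABLE[0][1]
    for boundary, new_offset in LEAP_TABLE:
        if tai_seconds >= boundary:
            offset = new_offset
    return tai_seconds - offset

print(tai_to_utc(50_000_000))   # 49999969  (before the inserted leap second)
print(tai_to_utc(200_000_000))  # 199999968 (after it)
```

The dumb hardware counter never needs updating; only the OS-side table does.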

Finally, I must second Maarten Wiltink's criticism of the text itself;
preaching to the choir is one thing, dealing with the Devil is another.

Cheers,
Payter Versteegen

Jonathan Buzzard

Jul 6, 2001, 12:38:28 PM7/6/01
to
In article <yw66d65...@as101.tel.hr>,

Aleksandar Milivojevic <al...@fly.srk.fer.hr> writes:
> Peter Bunclark (p...@ast.cam.ac.uk) wrote:
>> You can get very cheap, very accurate, wristwatches, so why not ask
>> for your CMOS clock to actually keep time? As you say, millisecond
>> is doable - that's hugely better than what is generally achieved.
>
> Very cheap wristwatches are simply very cheap wristwatches. They
> sometimes drift as far as +/- 15 seconds per day. If you want
> anything more accurate, you'll have to pay 50-100 USD for it. Or at least
> that was the case when I was looking at wristwatches in shops the last time
> (some years ago).
>

True, but a $10 watch will keep much better time than the CMOS clock on
a motherboard. Some of this is down to the wild temperature changes that
occur inside a computer case. Most of it could be improved by running
a clock with a faster crystal. Instead of using a 32768Hz crystal, why
not use an 8MHz one?

JAB.

--
Jonathan A. Buzzard Email: jona...@buzzard.org.uk
Northumberland, United Kingdom. Tel: +44(0)1661-832195

Markus Kuhn

Jul 6, 2001, 3:03:38 PM7/6/01
to
>Philip Homburg wrote:
> You can get very cheap, very accurate, wristwatches, so why not ask
>for your CMOS clock to actually keep time? As you say, millisecond
>is doable - that's hugely better than what is generally achieved.

You can't get a wrist watch or any other device running on a tiny
long-lasting battery that drifts less than around half a minute
per year. High-precision crystal clocks keep the crystal heated
constantly to one of the local extrema of the crystal's
temperature characteristics, and that costs power.
Clocks that lose less than a few seconds per year cost a couple
of thousand euros. You don't have to go right for a caesium maser
to notice the difference between UT1 and TAI; there are very
good crystal clocks available with 10^-10 stability, but for
far more than the price of an entire good PC and with a need
for a continuous power supply.

David L. Mills

Jul 6, 2001, 2:58:51 PM7/6/01
to p8r
Guys,

TAI support is in current NTPv4. It requires the nanokernel support in
order to reveal the TAI offset to user programs. You will either need to
install the NIST leapsecond file or use NTP public key cryptography to
snatch it directly from a NIST server.

Dave

Markus Kuhn

Jul 6, 2001, 3:07:13 PM7/6/01
to
jona...@happy.buzzard.org.uk (Jonathan Buzzard) writes:
>True, but a $10 watch will keep much better time than the CMOS clock on
>a motherboard. Some of this is down to the wild temperature changes that
>occur inside a computer case. Most of it could be improved by running
>a clock with a faster crystal. Instead of using a 32768Hz crystal, why
>not use an 8MHz one?

The $10 wrist watch also uses a 32.768 kHz crystal. Power consumption
of the clock chip rises roughly linearly with the counting frequency,
so with 8 MHz your batteries last about 244 times less long, which is a
significant customer inconvenience.
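A quick arithmetic check of that 244x figure:

```python
# Power draw of a CMOS counter scales roughly linearly with clock
# frequency, so battery life scales roughly inversely with it.
watch_crystal_hz = 32_768     # 2**15, the standard watch crystal
fast_crystal_hz = 8_000_000   # the proposed 8 MHz crystal

battery_life_ratio = fast_crystal_hz / watch_crystal_hz
print(round(battery_life_ratio))  # 244, matching the figure above
```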

Markus Kuhn

Jul 6, 2001, 3:14:34 PM7/6/01
to
mg...@cl.cam.ac.uk (Markus Kuhn) writes:
>I spent the afternoon looking for interesting strings in the Windows 2000
>kernel. I noted a UTF-16 encoded string "RealTimeIsUniversal"
>(NTOSKRNL.EXE:bbd4, NTKRNLPA.EXE:9304). This makes me wonder whether
>Windows 2000 doesn't already have an undocumented registry entry
>
> HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal
>
>or so that allows keeping the RTC in UTC (probably REG_DWORD with 0 or 1).
>
>http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html

Go into regedit and set the entry as described.

Setting this registry entry does indeed keep the RTC in UTC and
seems to work very nicely on our Win2000 machines. I guess it might
also work on older incarnations of Windows NT, but I can't test this
locally here.

Has anyone else given it a try?

And for Windows 98, you can still use the old MS-DOS solution:

ftp://ftp.simtel.net/pub/simtelnet/msdos/clocks/clk360rs.zip

Maybe this should be added to all sorts of Linux installation
FAQs ...

Try it!
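For reference, the setting described above can be applied with a registry file along these lines (the value is a REG_DWORD, as guessed above; since the entry is undocumented, treat this as experimental):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```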

Philip Homburg

Jul 6, 2001, 5:35:49 PM7/6/01
to
In article <9i522a$7sl$1...@pegasus.csx.cam.ac.uk>,

Markus Kuhn <mg...@cl.cam.ac.uk> wrote:
>>Philip Homburg wrote:
>> You can get very cheap, very accurate, wristwatches, so why not ask
>>for your CMOS clock to actually keep time? As you say, millisecond
>>is doable - that's hugely better than what is generally achieved.

Please be careful with your attributions. I didn't say this. You didn't
say I did, but it looks that way.

Philip Homburg

Philip Homburg

Jul 6, 2001, 5:44:45 PM7/6/01
to
In article <3B460A6B...@udel.edu>, David L. Mills <mi...@udel.edu> wrote:
>TAI support is in current NTPv4. It requires the nanokernel support in
>order to reveal the TAI offset to user programs. You will either need to
>install the NIST leapsecond file or use NTP public key cryptography to
>snatch it directly from a NIST server.

Is there a way (hopefully described in a small, readable document) to get the
leapsecond information from NTP without going through the whole public key
business? IMHO, public key crypto and leapseconds are separate issues. The
public key crypto stuff looked quite complicated, whereas the distribution of
leapsecond information is (at least in theory) quite simple. I would like
to add leapsecond support to my NTP-compatible time-synchronization program,
but with the current approach it looks like a lot of work.

Philip Homburg

Lost Seconds

Jul 6, 2001, 5:11:42 PM7/6/01
to
On Fri, 6 Jul 2001 17:38:28 +0100, Jonathan A. Buzzard wrote:

>In article <yw66d65...@as101.tel.hr>,
> Aleksandar Milivojevic <al...@fly.srk.fer.hr> writes:
>> Peter Bunclark (p...@ast.cam.ac.uk) wrote:
>>> You can get very cheap, very accurate, wristwatches, so why not ask
>>> for your CMOS clock to actually keep time? As you say, millisecond
>>> is doable - that's hugely better than what is generally achieved.
>>
>> Very cheap wristwatches are simply very cheap wristwatches. They
>> sometimes drift as far as +/- 15 seconds per day. If you want
>> anything more accurate, you'll have to pay 50-100 USD for it. Or at least
>> that was the case when I was looking at wristwatches in shops the last time
>> (some years ago).
>>
>
>True, but a $10 watch will keep much better time than the CMOS clock on
>a motherboard. Some of this is down to the wild temperature changes that
>occur inside a computer case. Most of it could be improved by running
>a clock with a faster crystal. Instead of using a 32768Hz crystal, why
>not use an 8MHz one?

Because you can use a binary divider on 32768Hz to get a 1Hz output.
To do this on 8MHz would mean you might have to redefine the standard
second.

If you changed to say 8.192MHz it might be a little easier.

;-)
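The arithmetic behind that smiley, as a quick sketch: a watch crystal divides down to 1 Hz with plain flip-flops because its frequency is a power of two, which 8 MHz (and even 8.192 MHz) is not.

```python
# 32768 Hz = 2**15, so a chain of 15 divide-by-two stages gives exactly 1 Hz.
assert 32_768 == 2 ** 15

def binary_stages_to_1hz(freq_hz):
    """Number of divide-by-2 stages to reach 1 Hz, or None if impossible."""
    stages = 0
    while freq_hz > 1:
        if freq_hz % 2:
            return None  # not a power of two: pure binary division fails
        freq_hz //= 2
        stages += 1
    return stages

print(binary_stages_to_1hz(32_768))     # 15
print(binary_stages_to_1hz(8_000_000))  # None: 8 MHz is not a power of two
# Even 8.192 MHz = 2**16 * 125 only divides down so far in binary;
# a decade counter would have to handle the remaining decimal factor.
```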

Joe Gwinn

Jul 6, 2001, 8:32:17 PM7/6/01
to
Comment below.

In article <4ip4i9...@192.168.42.254>, jona...@happy.buzzard.org.uk
(Jonathan Buzzard) wrote:

> In article <yw66d65...@as101.tel.hr>,
> Aleksandar Milivojevic <al...@fly.srk.fer.hr> writes:
> > Peter Bunclark (p...@ast.cam.ac.uk) wrote:
> >> You can get very cheap, very accurate, wristwatches, so why not ask
> >> for your CMOS clock to actually keep time? As you say, millisecond
> >> is doable - that's hugely better than what is generally achieved.
> >
> > Very cheap wristwatches are simply very cheap wristwatches. They
> > sometimes drift as far as +/- 15 seconds per day. If you want
> > anything more accurate, you'll have to pay 50-100 USD for it. Or at least
> > that was the case when I was looking at wristwatches in shops the last time
> > (some years ago).
> >
>
> True, but a $10 watch will keep much better time than the CMOS clock on
> a motherboard. Some of this is down to the wild temperature changes that
> occur inside a computer case. Most of it could be improved by running
> a clock with a faster crystal. Instead of using a 32768Hz crystal, why
> not use an 8MHz one?

Yes. The issue is crystal cost. There are various crystal cuts, some
with better temperature compensation than others. With IRIG-B receiver
cards, one can purchase a temperature-compensated voltage-controlled
crystal oscillator (TCVCXO) option, and it makes a great deal of
difference. The frequency is 10 MHz.

The wristwatch oscillators are little tuning forks or bars made of quartz,
as I understand it. I suspect the temperature compensation is by use of a
temperature-sensitive ceramic capacitor, but I don't really know the
details.

Dallas Semiconductor (Owned by Maxim?) makes a better grade of CMOS
oscillator IC (with crystal and lithium battery built in). If memory
serves, one thing they boasted about was better temperature compensation.
This from an advertising insert in EDN Magazine, an electronics trade
magazine.

Joe Gwinn

Joe Gwinn

Jul 6, 2001, 9:43:49 PM7/6/01
to
In article <9i01c3$njf$1...@pegasus.csx.cam.ac.uk>, mg...@cl.cam.ac.uk
(Markus Kuhn) wrote:

> joeg...@mediaone.net (Joe Gwinn) writes:
> >However, it's not a good idea to have the hardware try to do UTC directly,
> >because leap seconds cannot be predicted by any algorithm, instead
> >requiring astronomical observation.
>
> True, and in practice completely irrelevant. The drift of the BIOS is
> two orders of magnitude larger than the drift of UT1 or UTC to TAI. In
> addition, neither Win32 nor POSIX has at this time any notion of TAI
> whatsoever (time_t is a UTC encoding under POSIX and will have to remain
> so for backwards compatibility), therefore running the RTC in TAI is
> really not practical and bad advice.

Not quite. time_t is not quite a UTC encoding, and hardware clocks
naturally parallel TAI (follow it at a fixed offset) to the degree their
accuracy allows.


> But then, we had that flame war on the PASC list already a few months ago.

Right. We don't need to relive it.


> >In fact, the underlying POSIX clock, called "Seconds
> >Since the Epoch", counts seconds (and fractions of seconds) since the
> >POSIX Epoch, 00:00:00 UTC 1 January 1970 AD.
>
> The POSIX clock, called "Seconds Since the Epoch" encodes the
> yyyy-mm-dd hh:mm:ss display of a UTC clock into an integer without
> providing a distinction between 23:59:60 and 00:00:00 according to the
> formula given in
>
> http://www.cl.cam.ac.uk/~mgk25/volatile/posix-2-2-2-113.pdf
>
> which is implemented in millions of systems to which backwards
> compatibility must be maintained.
>
> The 1990 POSIX standard had very clear language here.

Actually, no. That was the problem. If the language had been clear,
there would not have been such a debate about what the language really
meant.


> ... If the 200x
> POSIX draft waters up that definition, that won't influence prudent


> existing practice and backwards compatibility requirements. Unix

> does not run on TAI. The world is quite resilient against amok
> standards editors.

Suffice it to say that your views on how POSIX time does work and should
work ultimately did not prevail.

The 1990 wording was tightened up and clarified, but did not change how
time is intended to work in POSIX. Nor could it, given the existing
base.

It might have been possible to fix the leap-second-related issues in POSIX
had the debate not degenerated into a flamewar, causing the
non-time-obsessed to run for cover screaming "Enough!"


> >In that document, it is made explicit that
> >leap seconds are not applied to Seconds Since the Epoch, that every day
> >has exactly 86,400 seconds, and it states that while "broken-down time" in
> >POSIX resembles UTC in format, it is not necessarily UTC.
>
> So what exactly is a "day" in the exact sense of the new POSIX document?
> All that still sounds extremely dubious to me. Can you please post
> the full text of the new draft specification here?

I don't know that system managers or readers would appreciate my posting a
3,600 page document on a newsgroup; the system would likely choke. Nor do
I own the copyright.


> >This approach was chosen because: The primary use of time in POSIX is for
> >message and file modification timestamps, which require only a
> >monotonically non-decreasing timescale of reasonable resolution ...
>
> ... and reliable convertibility to civilian local time zones. Only UTC
> provides that without access to leap second tables.

Huh? You cannot generate UTC without access to leap second data.


> >Isolated
> >machines or networks of machines still need to keep time, but cannot do
> >UTC because being isolated they cannot know when the leap seconds will
> >come. Ordinary computer clock hardware and software naturally generates a
> >timescale resembling TAI, and trying to make the hardware do UTC is far
> >more complex.
>
> Ordinary isolated computers have a time scale that differs significantly
> from both TAI and UTC, because it is dependent on the local temperature
> history of the 32.768 kHz crystal, which is two orders of magnitude stronger
> than leap seconds. There is absolutely nothing more natural about TAI than
> UTC from the point of view of an undisciplined battery-backed crystal
> oscillator.
>
> TAI is a rather unfamiliar concept for most non-time-geek mere
> mortal users who do not have an HP caesium maser in the basement,
> and TAI is neither widely known nor widely disseminated in
> time broadcast services. "Ordinary computer clock hardware" (not network
> synchronized) is typically manually synchronized every couple of weeks/months
> by a user to the famous beeps on the full hour on BBC radio (UTC), or
> something like that. Your wrist watch approximates UTC plus an integral
> hour offset, not TAI, because you sync it occasionally to a UTC clock,
> not to a TAI clock. Just like any "ordinary computer clock hardware".

To this level of detail, time itself is unfamiliar to all but a benighted
few. So, what's the point?


> There is no technology on the horizon that would make clocks with
> a drift better than the unpredictability of leap seconds cheap and
> battery-efficient (!) enough to justify routine use in normal office
> computers. The best available battery-backed low-cost clock is AFAIK
> the Maxim DS32KHZ <http://pdfserv.maxim-ic.com/arpdf/DS32kHz.pdf> and
> its temperature-compensated crystal drifts around a minute per
> year (2 ppm), whereas TAI-UTC drifts less than a second per year.
>
> So again, why exactly do you claim that an unsynchronized local
> computer clock follows TAI closer than UTC? All I see are clocks
> that drift quickly far away from both TAI and UTC and that are
> occasionally manually set back to UTC. That's the real world.

This confuses the implementation with the model, and also assumes that no
Windows machine will ever have good clocks.

Actually, the Maxim part you mention above can be retrofitted to many
motherboards by anybody adept with a soldering iron. If the machine is
under warranty, this will void the warranty, so I would wait before doing
this, but it's not hard or risky because the Maxim part is made to be a
drop-in replacement for many CMOS clock chips. Details matter, though.


> >So, the advice to Microsoft should be to follow P1003.1-200x (the draft's
> >name) in making the hardware clocks parallel TAI, with UTC generated from
> >TAI as needed.
>
> If POSIX-200x really mandates TAI hardware clocks, then I'll reserve
> already a place in the recycling bin for it. First of all, it is not the
> business of an API standard to tell me how I implement and configure my
> underlying hardware. Secondly, I think it is really bad engineering to base
> an API on TAI in a world in which only UTC is widely available. The NTP
> community has understood and followed that for a long time and dubious
> new text in some POSIX-200x cannot change that, no matter what authority
> its authors claim.

Huh? You know perfectly well what POSIX-200x says. In particular, POSIX
does not mandate TAI or any other such clock. It instead defines its own
kind of clock.


> I am perfectly aware that UTC leap seconds can be disruptive, which is
> why I propose to standardise a smoothed version of UTC called UTS for
> the purpose of computer application timestamping and OS APIs, which is
> just a formalized and exactly standardized form of what has already been
> common good practice since BSD's adjtime() anyway:
>
> http://www.cl.cam.ac.uk/~mgk25/uts.txt
>

> When ITU gets around to give UTS some formal blessing, then all that


> needs to be fixed in POSIX.1-1990 with regard to leap seconds is to
> replace the word UTC in section 2.2.2.113 with UTS and we know exactly
> how to navigate POSIX systems tightly synchronized all over the world
> across a UTC leap second without any non-monotonicities or long-term
> deviations from UTC. Sounds perfect to me.

Somehow, I don't think that the ITU will be blessing UTS anytime soon.
Nor do I think that UTS is any easier to implement than UTC, as one still
needs access to leapsecond information.


> I'd also be happy to see POSIX standardize a proper sophisticated time API
> that gives applications not only access to UTC, but also to TAI on those
> few machines where it is reliably available for special applications
> (mostly navigation and astronomic instrument control). My proposal for
> such an API that supports both explicit coding of UTC leap seconds
> and access to TAI (and thread-safe time zones and much more) is
> available on
>
> http://www.cl.cam.ac.uk/~mgk25/c-time/
>
> though I think today that it is more an academic exercise to see
> what a perfectionist time API could look like, while a UTS-based
> seconds-since-the-epoch is what makes most sense in the real
> world.

The POSIX standards community standardizes existing practice. So, the way
to proceed is to implement your ideas, get a reasonable number of people
to use and test the code, and then make a proposal. Building these new
APIs into NTP would make lots of sense, and can be done without the help
or even permission of POSIX, for that matter.


> Cheers,
>
> Markus
>
> P.S.: Can you please post the relevant text from POSIX.1-200x?

Some of it, anyway. In a separate posting.


> When will IEEE drafts finally be freely available to the interested
> public?

Good question. People have been hounding IEEE for years, but to no avail;
they make too much money from selling standards. So do ISO and ITU, for
that matter.


> Are the authority of a document and its price linearly or
> inversely correlated?

Hmm. If a standard isn't important, it will prove impossible to charge a
high price. However, Internet standards are free yet quite important,
while ISO-OSI standards are quite expensive but now eclipsed by the
Internet. I'm not sure that the price of ISO standards was the main
issue, though it was a problem. More to the point, ISO-OSI never really
worked all that well, while Internet stuff did work, and the price of
implementations was right. Anyway, there doesn't seem to be much
correlation between price and importance.


> Will POSIX.1-200x define a time_t replacement
> with better resolution and if yes, what are epoch, resolution and
> leap-second behavior?

POSIX.1-200x will not, because it's basically done, with 96% approval in
the last round of balloting. There is talk of eventually redoing time_t,
though. This would be done as an amendment. No idea what the technical
details will be; the issue is going to be quite contentious, which is why
it wasn't done for POSIX.1-200x. I personally favored David Korn's
proposal, to define time_t as a 64-bit unsigned integer composed of two
subfields with a binary point between, being seconds and fractions of a
second. I forget the sizes of the fields. One big problem was that there
is no way to make this backward compatible with the legacy time_t format.
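As a sketch of the Korn-style fixed-point value described above (the 32/32 field split is an assumption for illustration; the post explicitly says the exact field sizes were not remembered):

```python
# Sketch of a Korn-style time value: a 64-bit unsigned integer with
# whole seconds in the high half and binary fractions of a second in
# the low half. The 32/32 split is an illustrative assumption.

FRAC_BITS = 32
MASK_64 = 0xFFFF_FFFF_FFFF_FFFF

def encode(seconds):
    """Pack a seconds value into the 64-bit fixed-point format."""
    return int(seconds * (1 << FRAC_BITS)) & MASK_64

def decode(value):
    """Unpack the fixed-point value back into seconds."""
    return value / (1 << FRAC_BITS)

t = encode(1234.5)
print(hex(t))     # 0x4d280000000 (0x4d2 = 1234 seconds, 0x80000000 = 1/2)
print(decode(t))  # 1234.5
```

Sub-second resolution comes for free (2^-32 s, about 0.23 ns), but, as noted above, nothing about this layout is compatible with a plain integer-seconds time_t.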

Joe Gwinn

David L. Mills

Jul 7, 2001, 2:17:53 AM7/7/01
to Philip Homburg
Philip,

In short, the answer to your conjecture is no. They are not separate
issues. You can always snatch the NIST file directly and completely
ignore NTP. Happy chime.

Dave

Jonathan Buzzard

Jul 6, 2001, 6:22:14 PM7/6/01
to
In article <9i522a$7sl$1...@pegasus.csx.cam.ac.uk>,

mg...@cl.cam.ac.uk (Markus Kuhn) writes:
>>Philip Homburg wrote:
>> You can get very cheap, very accurate, wristwatches, so why not ask
>>for your CMOS clock to actually keep time? As you say, millisecond
>>is doable - that's hugely better than what is generally achieved.
>
> You can't get a wrist watch or any other device running on a tiny
> long-lasting battery that drifts less than around half a minute
> per year. High-precision crystal clocks keep the crystal heated
> constantly to one of the local extremals of the crystal's
> temperature characteristics, and that costs power.
> Clocks that loose less than a few seconds per year cost a couple
> of thousand euros. You don't have to go right for a caesium maser
> to notice the difference between UT1 and TAI, there are very
> good crystal clocks available with 10^-10 stability, but for
> far more than the price of an entire good PC and with a need
> for continuous power supply.
>

Why on earth do they cost that much? All you need is a high-speed
crystal in an oven (cook it to, say, 50 Celsius) which has been
calibrated, and battery backup in case the power goes. Some hardware
dividing, and pick up the much slower frequency with a PIC, which could
even do all the LED control for the display.

Being really picky, I have one on my wall that only cost 20 UKP and keeps
perfect time. It picks up the Rugby MSF signal to keep itself right.

Markus Kuhn

Jul 7, 2001, 4:57:03 AM7/7/01
to

USENET Forensics 101

Neither of us said that, but then this is also perfectly obvious
to the experienced USENET reader: the name of the author
is always one ">" indentation level higher than the text provided
by the author. Above, the claim that there exist accurate cheap wrist
watches is at indentation level 3, so the name of its author would
have to be indented at level 2, at which level there is no author
name given. So it is obvious that neither I nor you said that,
but someone who posted between us.

There was a time when this was well known to most USENET participants,
but with exponential growth of the Internet, I guess we can't expect any
more that any significant fraction of the users are experienced.

Markus Kuhn

Jul 7, 2001, 5:45:53 AM7/7/01
to
joeg...@mediaone.net (Joe Gwinn) writes:
>> ... and reliable convertability to civilian local time zones. Only UTC
>> provides that without access to leap second tables.
>
>Huh? You cannot generate UTC without access to leap second data.
>
>> http://www.cl.cam.ac.uk/~mgk25/uts.txt

>
>Somehow, I don't think that the ITU will be blessing UTS anytime soon.
>Nor do I think that UTS is any easier to implement than UTC, as one still
>needs access to leapsecond information.

If you have accurate UTC, then usually you also have reliable short-term
leap second information coming with it from the same source!
Practically all sources of UTC also offer a warning about the
next coming leap second. GPS and NTP provide the warning a long
time in advance; DCF77 and (I think) WWV(B) provide it one hour in advance.
The only UTC transmitter that I am aware of that lacks any leap second warning is
MSF in Rugby, UK [but in most of its coverage area, DCF77 can be received
reliably as well, so MSF is not that relevant in practice except for
patriotic reasons; UK folks don't like depending on the Continent for
anything].

UTS was specifically designed to be easy to implement if all the leap
second information that you have is a half-hour advance notice that
a leap second is coming up, which is what all the services except MSF
provide you. The Linux/BSD kernel has had status bits to forward that
information to applications for more than half a decade; short-term
leap-second warnings are certainly well-established NTP practice on POSIX
platforms.

You are right in that UTS is not easier to implement than UTC (or TAI)
in the infrastructure (kernel, time servers, etc.). But UTS is far easier
to handle than both UTC and TAI by *applications*, and *that* is the crucial
point to appreciate:

a) UTS is identical to UTC except for 20 minutes every 1.5 years, so
it fulfils the user requirement of having a very close tie to the
official local time (the hourly BBC radio beeps, etc.), which
TAI doesn't.

b) UTS can be mapped bijectively onto the POSIX time_t scale (without
needing extra magic to represent 23:59:60 as extremely rare and difficult
to test events), which UTC can't.

c) The upper error for time interval measurements in UTS is 0.1% of the interval
length or 1 second (whichever is smaller) on rare occasions (20 minutes
every 1.5 years), while UTC only has an upper error limit of 1 second and
an *unbounded* relative error for interval length measurements.

d) UTS derived time_t is monotonic, whereas UTC derived time_t is not
necessarily.

All that makes UTS a very pleasant-to-use compromise that is more suitable
than either UTC or TAI as the base of a time scale provided by an operating
system to its applications.
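A toy model of the smoothing idea (the linear 1000-second slew before the leap is an illustrative assumption made here for brevity; the uts.txt proposal defines the actual smoothing function):

```python
# Toy model of a UTS-style smoothed positive leap second: instead of
# repeating a second, the clock is slewed gradually before the leap.
# Assumption (illustration only): a linear slew of 1 ms per second
# over the 1000 seconds leading up to the leap.

LEAP_AT = 10_000     # second count at which the leap occurs
SLEW_WINDOW = 1_000  # seconds over which the extra second is smeared

def uts_offset(t):
    """How far UTS lags the unsmoothed scale at time t, in seconds."""
    if t <= LEAP_AT - SLEW_WINDOW:
        return 0.0
    if t >= LEAP_AT:
        return 1.0
    return (t - (LEAP_AT - SLEW_WINDOW)) / SLEW_WINDOW

def uts(t):
    return t - uts_offset(t)

# Unlike a raw UTC-encoded time_t, UTS stays monotonic through the leap:
samples = [uts(t) for t in range(LEAP_AT - 1_100, LEAP_AT + 100)]
assert all(a <= b for a, b in zip(samples, samples[1:]))
```

The maximum rate error during the slew is 0.1%, and outside the window UTS coincides exactly with the unsmoothed scale, matching properties (a)-(d) above.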

If enough experts appreciate that, then it should be no problem to sprinkle
the necessary blessings of any standards body over it and turn a simple URL
into sufficiently expensive magic authoritative paper, if that really is what
the world needs. (If you heard here some sarcasm about the entire religious
ceremony of formal ISO/ITU/IEEE/etc. standardization, please ignore ;-)

The UTS proposal has been up for half a year now, and I have never received
any suggestions for technical improvements. The only negative comments
that I received were about the fact that it is not (yet) a formally
recognised standard of the kind authors of API specifications prefer to reference.
Well, that can be fixed as soon as the technical details are
agreed upon. I think the current proposal is not particularly
controversial.

Have a look at it again and let me know what you think, before I push
this further through the formal instances.

http://www.cl.cam.ac.uk/~mgk25/uts.txt

Markus

David Woolley

Jul 7, 2001, 6:44:20 AM7/7/01
to
In article <joegwinn-060...@192.168.1.100>,
joeg...@mediaone.net (Joe Gwinn) wrote:

> Yes. The issue is crystal cost. There are various crystal cuts, some
> with better temperature compensation than others. With IRIG-B receiver

It's not so much that there are good and bad cuts as that different
cuts suit different conditions. The temperature dependence is cubic.
You can have a cut which has a very flat characteristic over a narrow
temperature range, where the positive and negative peaks of the cubic
coalesce. These are probably good for ovened crystals. You can also
have ones which keep reasonably accurate over a wide range but are
only flat in a small region. I suspect that watch crystals are cut for
something in between, with an ideal temperature near body temperature.

> The wristwatch oscillators are little tuning forks or bars made of quartz,
> as I understand it. I suspect the temperature compensation is by use of a

Mostly crystals. These are no less crystals than the ones you described
above. (It is possible that PC CMOS clocks use crystals intended for
watches and are therefore not at the ideal temperature.)

> temperature-sensitive ceramic capacitor, but I don't really know the
> details.

I doubt that there is any compensation other than the choice of crystal
cut.

David Woolley

Jul 7, 2001, 7:02:05 AM7/7/01
to
Message-ID: <mmd5i9...@192.168.42.254>
From: jona...@happy.buzzard.org.uk (Jonathan Buzzard)

> Why on earth do they cost that much? All you need is a high speed

Because the market is small (and will bear the price). Commercial
products with a small market often have similar tooling, sales, and
support costs to higher volume products. Also, probably because
they are not as simple as you assume.

> crystal in an oven (cook it to, say, 50 Celsius) which has been

Normal ovens use bi-metallic strips as on-off controllers. They are
probably not that much better than operation in an air-conditioned room,
except when the equipment is switched on and off.

For 1:1E10 stability, I'd assume that they don't just control the
temperature, but also measure the residual temperature error, look it up
in an individual calibration curve, and tweak the electrical operation
or frequency counting, to compensate. I'd expect the temperature control
to be fully proportional.

> Being really picky, I have one on my wall that only cost 20 UKP and keeps
> perfect time. It picks up the Rugby MSF signal to keep itself right.

These radio clocks can actually have very poor crystals, with poor
frequency calibration, as they probably rely on frequent corrections.
I'd be surprised if they actually bothered to do the frequency correction
that NTP daemons do; more likely they simply correct the phase error.
Mine updates every 2 hours, which would mean it could get away with a
100ppm error without many people noticing.

However, such receivers should not work within a PC, if it complies with
EMC standards. They would need an external antenna.

PS If you really do know a way of getting 1:1E10 accuracy cheaply, let me
know, as there are quite a few amateur SETI searchers who might like to
get 0.15 Hz accuracy on the hydrogen line; the professionals, with
access to the hydrogen maser standards at Arecibo are only managing half
this as their minimum bandwidth. The typical amateur system works to
about 10Hz (over a minute or two).

Joe Gwinn

Jul 7, 2001, 9:46:21 AM7/7/01
to
In article <9i6loh$cep$2...@pegasus.csx.cam.ac.uk>, mg...@cl.cam.ac.uk
(Markus Kuhn) wrote:

> joeg...@mediaone.net (Joe Gwinn) writes:
> >> ... and reliable convertability to civilian local time zones. Only UTC
> >> provides that without access to leap second tables.
> >
> >Huh? You cannot generate UTC without access to leap second data.
> >
> >> http://www.cl.cam.ac.uk/~mgk25/uts.txt
> >
> >Somehow, I don't think that the ITU will be blessing UTS anytime soon.
> >Nor do I think that UTS is any easier to implement than UTC, as one still
> >needs access to leapsecond information.
>
> If you have accurate UTC, then ususally you also have reliable short-term
> leap second information coming with it from the same source!
> Practically all sources of UTC also offer a warning about the
> next coming leap second. GPS and NTP provide the warning a long
> time in advance, DCF77 and (I think) WWV(B) provides it one hour in advance.
> The only UTC transmitter that I am aware of that lacks any leap second
> warning is
> MSF in Rugby, UK [but in most of its coverage area, DCF77 can be received
> reliably as well, so MSF is not that relevant in practice except for
> patriotic reasons; UK folks don't like depending on the Continent for
> anything].

The point is that isolated systems are just that -- isolated. They do
*not* have access to any leap second information whatsoever. Not even by
radio.


[snip]


> All that makes UTS a very pleasant-to-use compromise that is more suitable
> than either UTC or TAI as the base of a time scale provided by an operating
> system to its applications.
>
> If enough experts appreciate that, then it should be no problem to sprinkle
> the necessary blessings of any standards body over it and turn a simple URL
> into sufficiently expensive magic authoritative paper, if that really is what
> the world needs. (If you heard here some sarcasm about the entire religious
> ceremony of formal ISO/ITU/IEEE/etc. standardization, please ignore ;-)

It is a glacial process to be sure. That's why I wouldn't worry about the
ITU standardizing UTS in our lifetimes. Adding UTS as an NTP option is
far more likely to succeed soon enough to matter.


> The UTS proposal had been up for half a year now, and I never received
> any suggestions for technical improvements. The only negative comments
> that I received were about the fact that it is not (yet) a formally
> recognised standard as authors of API specifications prefer to reference.
> Well, that can be fixed, as soon as the technical details are
> agreed upon. I think the current proposal is not particularly
> controversial.
>
> Have a look at it again and let me know what you think, before I push
> this further through the formal instances.
>
> http://www.cl.cam.ac.uk/~mgk25/uts.txt

I recall providing some comments a year ago or so, when you first proposed
UTS, but I no longer recall the details.

Joe Gwinn

Mike Stump

unread,
Jul 7, 2001, 1:20:02 PM7/7/01
to
In article <mmd5i9...@192.168.42.254>,

Jonathan Buzzard <jona...@happy.buzzard.org.uk> wrote:
>Why on earth do they cost that much? All you need is a high speed
>crystal in an oven

I've always wondered, why can't we have instead an accurate
temperature sensor, say 16-24 bits worth of a slow A/D converter that
just measures the crystal temp, and then in the algorithm we
compensate for the actual temperature of the crystal. Lower power budget, as
we don't have to heat anything, CPUs are reasonably cheap ($1), the
temperature sensor seems like it should be cheap, and the A/D converters
get cheap ($9 for a random 18-bit Maxim part) if you can run them
extremely slowly (well under 20 Msps)... You can fire up the converter
frequently if the temperature wanders, or less frequently if it stays
the same, to conserve power.
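The compensation step described here could look roughly like this (a sketch only; the cubic calibration curve and its coefficient are hypothetical illustrations of a per-unit factory calibration, not from any datasheet):

```python
# Software temperature compensation: read the crystal temperature from a
# slow high-resolution ADC, look up the expected fractional frequency
# error on a per-unit calibration curve, and correct the tick count.

def freq_error_ppm(temp_c, turnover_c=25.0, cubic_coeff=-0.035):
    """Hypothetical cubic fit for an AT-cut crystal: fractional frequency
    error in ppm as a function of temperature, zero at the turnover point."""
    dt = temp_c - turnover_c
    return cubic_coeff * dt ** 3 * 1e-3

def corrected_seconds(raw_ticks, nominal_hz, temp_c):
    """Convert a raw tick count into seconds, compensating for the
    temperature-dependent frequency error of the crystal."""
    actual_hz = nominal_hz * (1.0 + freq_error_ppm(temp_c) * 1e-6)
    return raw_ticks / actual_hz

# At the turnover temperature the correction is zero:
print(corrected_seconds(8_000_000, 8_000_000, 25.0))  # 1.0
```

The same idea is what a TCXO does in hardware, as a later post in the thread points out.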

Jonathan Buzzard

unread,
Jul 7, 2001, 4:42:31 AM7/7/01
to
In article <fbackt8m14rj3isu1...@4ax.com>,

Eh, divide by 80000 in hardware gives a tick every 1/100th of a second.
A little more difficult than lots of simple divide-by-2s, but still
perfectly possible, and hardly any more gates by today's standards.
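The arithmetic checks out (a sketch; the 8 MHz crystal is the one suggested earlier in the thread, and the prescaler split is just one illustrative factoring of 80000):

```python
crystal_hz = 8_000_000  # the 8 MHz crystal proposed earlier in the thread
divisor = 80_000        # e.g. a divide-by-16 prescaler feeding a divide-by-5000 counter

tick_hz = crystal_hz // divisor
print(tick_hz)  # 100 -> one tick every 1/100th of a second
```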

Joe Doupnik

unread,
Jul 7, 2001, 3:28:45 PM7/7/01
to
--------
Temperature isn't the only important factor. Crystal aging is
another, and that is usually the outgassing of the crystal and especially
that of its case components (solder...). Those are temperature dependent
for obvious reasons, and vary a lot from unit to unit.
When one is discussing mass-production motherboards, a dollar's worth
of components (you are discussing tens of times more) is many dollars for the
end user, and that end user couldn't care less about clocking finesse. Average
users just want the box to work and not require human intervention. Technical
users place no trust in mass-production MB clocks. So I think your suggestion
is one looking for an acceptable business plan.
Joe D.

Jonathan Buzzard

unread,
Jul 7, 2001, 3:40:35 PM7/7/01
to
In article <GG45H...@kithrup.com>,
m...@kithrup.com (Mike Stump) writes:

Interesting idea; the problem as I see it is the difficulty of getting an
A/D converter that is not temperature sensitive in itself.

Jonathan Buzzard

unread,
Jul 7, 2001, 3:50:58 PM7/7/01
to
In article <T9945...@djwhome.demon.co.uk>,

da...@djwhome.demon.co.uk (David Woolley) writes:
> Message-ID: <mmd5i9...@192.168.42.254>
> From: jona...@happy.buzzard.org.uk (Jonathan Buzzard)
>
>> Why on earth do they cost that much? All you need is a high speed
>
> Because the market is small (and will bear the price). Commercial
> products with a small market often have similar tooling, sales, and
> support costs to higher volume products. Also, probably because
> they are not as simple as you assume.
>
>> crystal in an oven (cook it to say 50 Celcius) which has been
>
> Normal ovens use bi-metallic strips as on off controllers. They are
> probably not that much better than operation in an air conditioned room,
> except when the equipment is switched on and off.

They are however cheaper than air conditioned rooms.

> For 1:1E10 stability, I'd assume that they don't just control the
> temperature, but also measure the residual temperature error, look it up
> in an individual calibration curve, and tweak the electrical operation
> or frequency counting, to compensate. I'd expect the temperature control
> to be fully proportional.

I was only really thinking of something that would be good for a second
a year. However, the idea of an oven is to stop the temperature drifting
in the first place.

>> Being really picky I have one on my wall that only cost 20UKP and keeps
>> perfect time. Picks up the Rugby MSF signal it keep it right.
>
> These radio clocks can actually have very poor crystals, with poor
> frequency calibration, as they probably rely on frequent corrections.
> I'd be surprised if they actually bothered to do the frequency correction
> that NTP daemons do, but rather simply corrected the phase error.
> Mine updates every 2 hours, which would mean it could get away with a
> 100ppm error without many people noticing.

Which is more than adequate for keeping a PC clock from drifting more than
a fraction of a second.

> However, such receivers should not work within a PC, if it complies with
> EMC standards. They would need an external antenna.
>
> PS If you really do know a way of getting 1:1E10 accuracy cheaply, let me
> know, as there are quite a few amateur SETI searchers who might like to
> get 0.15 Hz accuracy on the hydrogen line; the professionals, with
> access to the hydrogen maser standards at Arecibo are only managing half
> this as their minimum bandwidth. The typical amateur system works to
> about 10Hz (over a minute or two).

I am sure a high speed crystal in an oven would get you close for not
much money. You would need access to an accurate frequency counter
to calibrate it though.

David Woolley

unread,
Jul 7, 2001, 6:18:49 PM7/7/01
to
In article <9i6loh$cep$2...@pegasus.csx.cam.ac.uk>,
mg...@cl.cam.ac.uk (Markus Kuhn) wrote:

> The only UTC transmitter that I am aware of that lacks any leap second warning is
> MSF in Rugby, UK [but in most of its coverage area, DCF77 can be received

MSF transmits DUT1, the difference between UTC and UT1 to an accuracy
of a tenth of a second. That can be used to detect a leap second after
the event, and it is probably fairly easy to guess which, if any, of the
two dates a year allowed leap seconds will have them. I thought it also
did code a leap second warning.

> any suggestions for technical improvements. The only negative comments
> that I received were about the fact that it is not (yet) a formally

I seem to remember a negative comment to the effect that it would be
confusing for systems that did need accurate time differences over short
periods.

Dave Tweed

unread,
Jul 7, 2001, 7:40:11 PM7/7/01
to
Mike Stump wrote:
> I've always wondered, why can't we have instead, a accurate
> temperature sensor, say 16-24 bits worth of a slow A/D converter that
> just measures the crystal temp, and then in the algorithm, we
> compensate for the actual temp of the crystal.

You can. It's called a Temperature-Compensated Crystal Oscillator (TCXO).
The Oven-Controlled Crystal Oscillators (OCXOs) perform somewhat better
because holding the crystal at a constant temperature means you don't have
to worry about the exact nonlinearities of the temperature-frequency curve.

As others have pointed out, though, there are effects other than temperature,
mostly related to aging, that affect the frequency of the crystal oscillator.

I'm working with a class of products called "GPS-disciplined master
oscillators" for the broadcast industry. These generally consist of a
single- or double-oven OCXO that is voltage-adjustable over a small
range. A controller compares the output of the oscillator to the 1 pps
output of a GPS receiver module and adjusts the oscillator to keep it
locked. The better ones also model the aging characteristics of the
oscillator and extrapolate forward in time during the intervals when
GPS is unavailable.

-- Dave Tweed

Dave Tweed

unread,
Jul 7, 2001, 7:41:38 PM7/7/01
to
Jonathan Buzzard wrote:
> Interesting idea, the problem as I see it is the difficulty of getting a
> A/D converter that is not temperature sensitive in itself.

It doesn't matter; any temperature dependencies of the ADC get modeled and
compensated for along with those of the crystal itself.

-- Dave Tweed

Johan Kullstam

unread,
Jul 7, 2001, 10:31:41 PM7/7/01
to
jona...@happy.buzzard.org.uk (Jonathan Buzzard) writes:

> In article <yw66d65...@as101.tel.hr>,
> Aleksandar Milivojevic <al...@fly.srk.fer.hr> writes:
> > Peter Bunclark (p...@ast.cam.ac.uk) wrote:
> >> You can get very cheap, very accurate, wristwatches, so why not ask
> >> for your CMOS clock to actually keep time? As you say, millisecond
> >> is doable - that's hugely better than what is generally achieved.
> >
> > Very cheap wristwathces are simply very cheap wristwathces. They
> > sometimes drift as far as +/- 15 seconds per day. If you want
> > anything more accurate, you'll have to pay it 50-100 USD. Or at least
> > that was the fact when I was looking wristwatches in shops last time
> > (some years ago).
> >
>
> True but a $10 watch will keep much better time than the CMOS clock on
> a motherboard. Some of this is down to the wild temperature changes that
> occur inside a computer case. Most of it could be improved by running
> a clock with a faster crystal. Instead of using a 32768Hz crystal why
> not use an 8MHz one?

This might not help. The high-frequency crystals are often based on a
slower crystal and then frequency-multiplied up to your frequency.

--
J o h a n K u l l s t a m
[kull...@ne.mediaone.net]
Don't Fear the Penguin!

Mike Stump

unread,
Jul 8, 2001, 12:54:00 AM7/8/01
to
In article <27p7i9...@192.168.42.254>,

Jonathan Buzzard <jona...@happy.buzzard.org.uk> wrote:
>I was only really thinking of somethat that would be good for a
>second a year.

Just for grins...

http://www.corningfrequency.com/catalog/datasheets/oco700sc.pdf

will do the trick. I didn't see the buy me now button... :-(

Though, if you have the money, why not splurge and get:

http://www.corningfrequency.com/catalog/datasheets/oco700sc.pdf

and be good for 79 years, or 13 ms a year.

Markus Kuhn

unread,
Jul 8, 2001, 3:45:53 AM7/8/01
to
da...@djwhome.demon.co.uk (David Woolley) writes:
>PS If you really do know a way of getting 1:1E10 accuracy cheaply, let me
>know, as there are quite a few amateur SETI searchers who might like to
>get 0.15 Hz accuracy on the hydrogen line; the professionals, with
>access to the hydrogen maser standards at Arecibo are only managing half
>this as their minimum bandwidth. The typical amateur system works to
>about 10Hz (over a minute or two).

You can get off-the-shelf GPS receivers with a 10 MHz output signal (Trimble)
and frequency counters or PLL generators with a 10 MHz clock reference input
signal (Agilent, etc.). Would that combination get you close enough?

I don't understand though, what the SETI folks need that frequency accuracy
for. If LGM really would bother to direct a narrowband high power hydrogen
line message to us, surely the Doppler shift would be significantly larger
than a few Hz anyway?

Markus Kuhn

unread,
Jul 8, 2001, 4:01:29 AM7/8/01
to
da...@djwhome.demon.co.uk (David Woolley) writes:
>> The only UTC transmitter that I am aware of that lacks any leap second warning is
>> MSF in Rugby, UK [but in most of its coverage area, DCF77 can be received
>
>MSF transmits DUT1, the difference between UTC and UT1 to an accuracy
>of a tenth of a second. That can be used to detect a leap second after
>the event, and it is probably fairly easy to guess which, if any, of the
>two dates a year allowed leap seconds will have them. I thought it also
>did code a leap second warning.

No, it doesn't, I even asked the operators. Check yourself in the spec.
URLs to the data format descriptions of all LF radio clocks that I know
are on

http://www.cl.cam.ac.uk/~mgk25/lf-clocks.html

David Woolley

unread,
Jul 8, 2001, 6:29:27 AM7/8/01
to
In article <27p7i9...@192.168.42.254>,
jona...@happy.buzzard.org.uk (Jonathan Buzzard) wrote:

> I am sure a high speed crystal in an oven would get you close for not
> much money. You would need access to an accurate frequency counter
> to calibrate it though.

Some people already use the ovened crystal option for their receivers.
The frequency of the crystal tends to be constrained by the design of the
receiver; however, radio designers would never think of using a 32kHz
crystal as a standard. They'll typically use one at around 5 to 10MHz
fundamental frequency, but possibly operating on a higher overtone.

(Incidentally, although 32kHz may be used for power consumption reasons,
high frequency crystals are going to be more fragile and therefore less
stable, I would have thought. There is probably an optimum frequency,
but it is not going to be very high by modern standards.)

The frequency counter would need even better stability. Most people
do not have access to such devices and even rental costs would be
prohibitive. (On the other hand, many of those constructing at a
level that would allow such an oscillator to be introduced do work in
the industry.)

David Woolley

unread,
Jul 8, 2001, 7:41:44 AM7/8/01
to
In article <9i940p$60n$2...@pegasus.csx.cam.ac.uk>,
mg...@cl.cam.ac.uk (Markus Kuhn) wrote:

> da...@djwhome.demon.co.uk (David Woolley) writes:
> >
> >MSF transmits DUT1, the difference between UTC and UT1 to an accuracy

> >two dates a year allowed leap seconds will have them. I thought it also
> >did code a leap second warning.
> No, it doesn't, I even asked the operators. Check yourself in the spec.

It does transmit DUT1; it doesn't transmit leap warnings, except to the
very limited extent that the leap information is available a few seconds
before the actual insertion/deletion, as the frame is transmitted
in the minute preceding the actual time. I was probably thinking of the
daylight saving time warning bit.

http://www.cl.cam.ac.uk/~mgk25/lf-clocks.html

The NPL URL to which this points is off the air this morning, so I had
to read the Google HTML extract of their PDF document.

Jonathan Buzzard

unread,
Jul 8, 2001, 3:12:19 PM7/8/01
to
In article <T9945...@djwhome.demon.co.uk>,
da...@djwhome.demon.co.uk (David Woolley) writes:

[SNIP]


>
> (Incidentally, although 32kHz may be used for power consumption reasons,
> high frequency crystals are going to be more fragile and therefore less
> stable, I would have thought. There is probably an optimum frequency,
> but it is not going to be very high by modern standards.)

It does occur to me that while a 32kHz crystal made sense 20 years
ago on power consumption, and could run for 10 years on a lithium
battery, you can do much better these days, so there is nothing to
stop you using a higher frequency crystal.

Don Payette

unread,
Jul 9, 2001, 1:19:25 PM7/9/01
to
Your first bullet in The Problem is a red herring. That is,
the issue of converting back and forth between local time
and UTC. To convert in EITHER direction you need to know the
offset (or zone name, which gives offset). Local time has no
ambiguity in the one hour overlap when the zone is known.
1:30 AM PST is not the same as 1:30 AM PDT and both can be
unambiguously converted to UTC.
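The point can be illustrated directly (a sketch using Python's fixed-offset timezones; PST = UTC-8 and PDT = UTC-7, with the 2001 US autumn changeover date used as the example):

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8), "PST")
PDT = timezone(timedelta(hours=-7), "PDT")

# The same wall-clock reading, 1:30 AM, during the autumn overlap hour:
t_pst = datetime(2001, 10, 28, 1, 30, tzinfo=PST)
t_pdt = datetime(2001, 10, 28, 1, 30, tzinfo=PDT)

# Once the zone (i.e. the offset) is known, both convert unambiguously:
print(t_pst.astimezone(timezone.utc))  # 2001-10-28 09:30:00+00:00
print(t_pdt.astimezone(timezone.utc))  # 2001-10-28 08:30:00+00:00
```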

Your subsequent points are valid, however, in that the CMOS
itself doesn't contain the zone information.

In your Solution you talk about databases and such getting
non-decreasing timestamps. Time-critical apps don't use
local time and Windows already has GetSystemTime
that does this nicely. While it is possible that a re-boot
to a new OS will cause an incorrect time temporarily, it's unlikely
an admin would start up a database app, or anything else, prior
to correcting the situation. My opinion, of course, but I believe
it unlikely that a production server would be allowed to have
a dual boot in the first place, at least in my shop it wouldn't. ;-)


mg...@cl.cam.ac.uk (Markus Kuhn) wrote:

>I have managed to get someone inside Microsoft interested in the
>problems of the old MS-DOS convention of keeping the IBM PC RTC in some
>local time that is not further specified in the CMOS RAM data. I was asked
>to write up a case to convince Microsoft management to dedicate resources
>for fixing this and enabling the Windows RTC driver to maintain the
>battery clock in Universal Time instead of local time.
>
>I just did so on
>
> http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html
>
>and I would like to invite all computer time gurus out there to review
>this brief essay before I send it off to Redmond.
>
>Please let me know by email if you have any further
>
> - arguments
> - technical suggestions/proposals
> - related references
> - URLs of well-documented RTC DST problem stories
>
>that you think whoever at Microsoft touches the Windows RTC code
>next should be aware of.
>
>Cheers,
>
>Markus

-----------
Don Payette
Unisys Corporation
I speak only for myself; not my employer
Please reply in the newsgroup. Don't try
sending e-mail.

Don Payette

unread,
Jul 9, 2001, 7:56:14 PM7/9/01
to
Another thought. If you're making a plea to Microsoft,
I would suggest stating it in terms of things Microsoft
would be interested in. To that end I would get rid
of all references to anything Unix related, including
all of the Posix stuff.

IMHO, Microsoft views their customers as using only Windows
(why would they need anything else?). Problems related
to dual-booting to any OS other than a Microsoft OS will
fall on deaf ears. If you want to convince them, you'll need
to make the argument Windows centric. The only argument I
see there is the dual-boot between different flavors of
Windows.

Good luck.

Marc Brett

unread,
Jul 10, 2001, 4:01:45 AM7/10/01
to
Don Payette <Nob...@nowhere.com> wrote:
> IMHO, Microsoft views their customers as using only Windows
> (why would they need anything else?). Problems related
> to dual-booting to any OS other than a Microsoft OS will
> fall on deaf ears. If you want to convince them, you'll need
> to make the argument Windows centric. The only argument I
> see there is the dual-boot between different flavors of
> Windows.

...which would mandate keeping the RTC in local time...

--
Marc Brett +44 20 8560 3160 WesternGeco
Marc....@westerngeco.com 455 London Road, Isleworth
FAX: +44 20 8847 5711 Middlesex TW7 5AA UK

Markus Kuhn

unread,
Jul 10, 2001, 1:08:51 PM7/10/01
to
Quick Update: Current support status in Windows

While browsing through the Windows 2000 SP2 kernel binary with the
"strings" tool, I noted a UTF-16 encoded string "RealTimeIsUniversal"
(NTOSKRNL.EXE:bbd4, NTKRNLPA.EXE:9304). It turned out that Windows
NT tests a long forgotten undocumented registry entry

HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal

for instructing the kernel that the RTC runs in UT (type REG_DWORD,
values 0 for local time or 1 for Universal Time). When you set
RealTimeIsUniversal=1, the kernel initialization code will jump after
reading the CMOS clock over the section that translates local time
into CMOS time and will take the CMOS time directly as the Universal
Time kept by the NT kernel clock.

I got a reply from someone in Microsoft's Base Kernel Team
who got interested in RealTimeIsUniversal and they had a look at
the relevant parts of the NT kernel source code. The
RealTimeIsUniversal flag is there (a leftover from the days when NT
still ran on RISC machines with UTC RTCs), but its implementation
seems now incomplete and it is currently not covered by Microsoft's
documentation and regression test suite, therefore using it is not
recommended at this time. A couple of potential RealTimeIsUniversal
bugs have been identified over the past few days, there might be more.
For instance, the kernel debugger assumes that the CMOS time is local
time and will get the time wrong when RealTimeIsUniversal=1. There
might be a similar problem in the code that resumes processing after
the CPU was suspended or in the code that calculates DST change
times. They will look into fixing these problems, but they can't promise
yet that RealTimeIsUniversal=1 will be officially supported soon. In any
case, it is unfortunately too late at this stage for a fix to get into the
forthcoming Windows XP release. Perhaps RealTimeIsUniversal=1 can
be established as the default for new platforms such as IA64 where
there is no DOS-compatibility requirement, and then it would be fully
supported again.

I have not yet heard anything from the developers of Windows ME (the
latest release of MS-DOS). It would be nice if they too could add
support for RealTimeIsUniversal or some equivalent mechanism in one
of their next service packs, or at least could check and confirm
whether the concept of a CLOCK.SYS driver like clk360rs.zip is still
properly supported.

http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html

Vernon Schryver

unread,
Jul 10, 2001, 2:34:08 PM7/10/01
to
In article <9ifcr3$g38$1...@pegasus.csx.cam.ac.uk>,
Markus Kuhn <mg...@cl.cam.ac.uk> wrote:

> ...
>RealTimeIsUniversal flag is there (a leftover from the days when NT
>still ran on RISC machines with UTC RTCs)...

What is a "UTC RTC"? I know a little about some of the "RISC" hardware
on which NT ran, but I can't think of anything that made any of those
clock chips "UTC." The UNIX systems that ran on the hardware I know about
certainly did not know anything about any UTC features in the clock chips.
You might make a case that the chips were "PST/PDT RTCs" because that was
the default time zone, but only for those few who don't understand the
difference between a file in /etc and stuff in a clock chip.

For many years, since well before Microsoft discovered RISC CPUs, I've
been running various brands of UNIX on PC's. They have all used UTC
in the clock chips (except sometimes when Microsoft systems were booted
on them). Should I have replaced all of their non-UTC RTCs with UTC RTCs?

....

In other words--Sheesh!--what do you expect to be told when you ask
about an undocumented knob? Any competent programmer or support person
will try to discourage users from relying on or even using undocumented
features. They'll reach for implausible worries such as whether a
kernel debugger has the smarts to convert from UTC to local time.
Tacitly encouraging users to use undocumented features only brings
you grief.

(Such problems can be expected in UNIX kernel debuggers outside previous
BSD systems that stashed information about the local timezone in the
kernel. Kernel debuggers need to not rely too much on the kernel,
and than means poking around in /etc/timezone or equivalent just to
decode timestamps is not something many experienced kernel hacks would
expect or want. Yes, master-slave kernel debuggers can use everything on
the master.)

As for their "looking into fixing those problems" response--Double Sheesh!
That's exactly what they should have said, even in the extremely likely
case that no one will ever seriously consider fixing them. As a professional,
you file such bug reports partly to document problems that won't be
fixed, and partly to save the next person from having to find something
to tell users who choose not to think about the meanings of "unsupported"
and "undocumented."


Vernon Schryver v...@rhyolite.com

Thomas A. Horsley

unread,
Jul 10, 2001, 8:06:18 PM7/10/01
to
>> If you want to convince them, you'll need
>> to make the argument Windows centric. The only argument I
>> see there is the dual-boot between different flavors of
>> Windows.
>
>...which would mandate keeping the RTC in local time...

Actually not. If you do this (which I do), you have to go to all sorts of
trouble to make sure that only one of the copies of Windows thinks it should
be adjusting the RTC for daylight time, otherwise you keep getting one hour
off every time you boot a different OS near the DST change time. If the RTC
was UTC, none of the copies of Windows would need to adjust anything, so
they'd all have the correct time (assuming, of course, that you had patches
for all those versions of Windows to make them all believe in UTC :-).
--
>>==>> The *Best* political site <URL:http://www.vote-smart.org/> >>==+
email: Tom.H...@worldnet.att.net icbm: Delray Beach, FL |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+

Andrew Hood

unread,
Jul 11, 2001, 6:53:18 AM7/11/01
to
I tried this the other day on Win2K SP1. Various parts work, other parts
don't. For instance, the event log gets it right. The clock in the taskbar
(and all other invocations of the clock) get it wrong.

Oh well, delete the registry entry and reboot.


In article <9ifcr3$g38$1...@pegasus.csx.cam.ac.uk>, "Markus Kuhn"
<mg...@cl.cam.ac.uk> wrote:

> Quick Update: Current support status in Windows
>
> While browsing through the Windows 2000 SP2 kernel binary with the
> "strings" tool, I noted a UTF-16 encoded string "RealTimeIsUniversal"
> (NTOSKRNL.EXE:bbd4, NTKRNLPA.EXE:9304). It turned out that Windows NT
> tests a long forgotten undocumented registry entry
>
> HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal

--
33. If there is disturbance in the camp, the general's authority is
weak. If the banners and flags are shifted about, sedition is afoot.
If the officers are angry, it means that the men are weary.
Sun Tzu Wu: The Art of War: IX. THE ARMY ON THE MARCH

Andrew Hood

unread,
Jul 11, 2001, 6:58:53 AM7/11/01
to
In article <dhnjktkvj89h9vpqs...@4ax.com>, "Don Payette"
<Nob...@nowhere.com> wrote:

> Your first bullet in The Problem is a red herring. That is, the issue
> of converting back and forth between local time and UTC. To convert in
> EITHER direction you need to know the offset (or zone name, which gives
> offset). Local time has no ambiguity in the one hour overlap when the
> zone is known. 1:30 AM PST is not the same as 1:30 AM PDT and both can
> be unambiguously converted to UTC.


Which would be fine if you could distinguish. The "official" definitions
of the summer and winter timezones in those parts of Eastern Australia
which observe daylight saving are "Eastern Standard Time" and "Eastern
Summer Time", and both are spelt EST.
Personally, I use EDT for summer time on systems where I control the
timezone definitions.

--
3. The art of war, then, is governed by five constant factors, to be
taken into account in one's deliberations, when seeking to determine
the conditions obtaining in the field.
Sun Tzu Wu: The Art of War: I. LAYING PLANS

Don Payette

unread,
Jul 11, 2001, 4:52:12 PM7/11/01
to
Whew, good thing I said zone name instead of zone abbreviation. ;-)
Ultimately, you gotta know the offset to go in either direction.

One might ask about the intelligence of those that chose the same
abbreviation for the two seasons, but then we're talking about
Aussies, here, aren't we. :-)

"Andrew Hood" <ajh...@fl.net.au> wrote:

>In article <dhnjktkvj89h9vpqs...@4ax.com>, "Don Payette"
><Nob...@nowhere.com> wrote:
>
>> Your first bullet in The Problem is a red herring. That is, the issue
>> of converting back and forth between local time and UTC. To convert in
>> EITHER direction you need to know the offset (or zone name, which gives
>> offset). Local time has no ambiguity in the one hour overlap when the
>> zone is known. 1:30 AM PST is not the same as 1:30 AM PDT and both can
>> be unambiguously converted to UTC.
>
>
>Which would be fine if you could distinguish. The "official" definition
>of the summer and winter timezones in those parts of Eastern Australia
>which observe daylight saving are "Eastern Standard Time" and Eastern
>Summer Time" and are both spelt EST.
>Personally, I use EDT for summer time on systems where I control the
>timezone definitions.

-----------

Aleksandar Milivojevic

unread,
Jul 12, 2001, 3:17:25 AM7/12/01
to
Don Payette (Nob...@nowhere.com) wrote:
> Your first bullet in The Problem is a red herring. That is,
> the issue of converting back and forth between local time
> and UTC. To convert in EITHER direction you need to know the
> offset (or zone name, which gives offset). Local time has no
> ambiguity in the one hour overlap when the zone is known.
> 1:30 AM PST is not the same as 1:30 AM PDT and both can be
> unambiguously converted to UTC.

My time zone is MET, with or without summer time. Based on this
information, a DST flag, and a configuration file that gives the rules for when
DST starts and ends, the system is supposed to convert universal time to
my local time. If it were the other way around, the OS would need to keep
time zone and DST information everywhere it stores time
information (including file time stamps). Isn't it much simpler and
safer to have a kernel that knows only about universal time, and leave
it to user-level API calls (like the POSIX localtime() function) to
convert universal time to local time based on user preferences?
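That division of labour is easy to demonstrate (a rough sketch; time.tzset() is POSIX-only, the timestamp is an arbitrary instant picked for illustration, and the MET rule string is illustrative):

```python
import os
import time

# The kernel-side value: one count of seconds since the epoch, in UTC.
stamp = 994248000  # an arbitrary instant in July 2001 (12:00:00 UTC)

# User-level conversion to local time, driven entirely by the TZ setting;
# the stored timestamp itself never changes.
os.environ["TZ"] = "UTC0"
time.tzset()  # POSIX-only: re-read TZ
utc_view = time.strftime("%H:%M", time.localtime(stamp))

os.environ["TZ"] = "MET-1METDST"  # Middle European Time with summer time
time.tzset()
met_view = time.strftime("%H:%M", time.localtime(stamp))

print(utc_view, met_view)  # the same instant rendered in two zones, e.g. 12:00 14:00
```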


--
Aleksandar Milivojević <al...@fly.srk.fer.hr>
Opinions expressed herein are my own.
Statements included here may be fiction rather than truth.

Todd Knarr

unread,
Jul 12, 2001, 10:50:33 AM7/12/01
to
In comp.protocols.time.ntp <ywu20i7...@as101.tel.hr> Aleksandar Milivojevic <al...@fly.srk.fer.hr> wrote:
> information (including file time stamps). Isn't it much simplier and
> safer to have kernel that knows only about universal time, and leave
> it to user-level API calls (like POSIX localtime() function) to
> convert universal time to local time based on user preferences?

Practical consideration too: laptops. They can easily move from one
time zone to another. It'd be nice to simply be able to reset the
time zone and have all timestamps and other time displays show up
in the new zone. If the system clock and things like filesystem
timestamps and such run in UTC, this is trivial. If they're all in
local time, then they _all_ need to be altered across the board
to make that work ( or you need to carry around with each and every
time a notation about the timezone it was created in ). This also
applies to servers that're accessed remotely from a timezone other
than the one they're physically in.

This actually derives from a similar situation on larger computers,
where one person may log in from different timezones at different times
and in fact several people may be logged in from completely different
timezones at the same time and the very idea of defining a single
local timezone for the whole system is ridiculous.

--
Collin was right. Never give a virus a missile launcher.
-- Erk, Reality Check #8

Aleksandar Milivojevic

Jul 12, 2001, 11:58:39 AM
to
Todd Knarr (tkn...@silverglass.org) wrote:
> This actually derives from a similar situation on larger computers,
> where one person may log in from different timezones at different times
> and in fact several people may be logged in from completely different
> timezones at the same time and the very idea of defining a single
> local timezone for the whole system is ridiculous.

Historically, this is exactly the reason why the UNIX kernel works with
universal time (nowadays UTC, in the past GMT) and why MS-DOS worked
with local time.

Don Payette

Jul 13, 2001, 1:01:30 PM
to
You make a good point and I agree that timestamps
should be kept in UTC. I also agree it would be great
if the RTC was in UTC, it gets rid of many problems.
The issue is one of migration, which can often be a
thorny problem. You also talk of file timestamps. That's
related but is actually off-topic. UTC time is
available in Windows and can be used for timestamps
regardless of the state of the RTC.

However the point I was making is that his first bullet
states that going from local to UTC is somehow more
problematic than the reverse. They aren't different. To
do either you need to know the offset, which is typically
obtainable from the zone name, if not the abbreviation,
but the offset is the important number to know.

Aleksandar Milivojevic <al...@fly.srk.fer.hr> wrote:

>Don Payette (Nob...@nowhere.com) wrote:
>> Your first bullet in The Problem is a red herring. That is,
>> the issue of converting back and forth between local time
>> and UTC. To convert in EITHER direction you need to know the
>> offset (or zone name, which gives offset). Local time has no
>> ambiguity in the one hour overlap when the zone is known.
>> 1:30 AM PST is not the same as 1:30 AM PDT and both can be
>> unambiguously converted to UTC.
>
>My time zone is MET. With or without summer time. Based on this
>information and DST flag and configuration file that gives rules when
>DST starts and ends, system is supposed to convert universal time to
>my local time. If it was the other way around, OS would need to keep
>time zone combined with DST information everywhere it stores time
>information (including file time stamps). Isn't it much simpler and
>safer to have kernel that knows only about universal time, and leave
>it to user-level API calls (like POSIX localtime() function) to
>convert universal time to local time based on user preferences?

-----------

Jesper Dybdal

Jul 13, 2001, 3:09:43 PM
to
Don Payette <Nob...@nowhere.com> wrote:

>However the point I was making is that his first bullet
>states that going from local to UTC is somehow more
>problematic than the reverse. They aren't different. To
>do either you need to know the offset, which is typically
>obtainable from the zone name, if not the abbreviation,
>but the offset is the important number to know.

To go from UTC to local, you need to know the offset and when DST begins and
ends.

But to go from local to UTC, you also need to know whether DST is already in
effect, since you cannot always determine that from the local time.

During the night between 27 and 28 October 2001, the local time here will be
02:30 twice (at 00:30 UTC and again at 01:30 UTC), so in order to convert
01:30 local to UTC, you would need to know whether 01:30 was DST or not. The
RTC does not store that information, it only stores the local time (01:30).

Or to put it another way, the offset depends unambiguously on the UTC time, but
ambiguously on the local time.
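
Modern Python makes the missing bit explicit. In this sketch (assuming Copenhagen rules for that night), the `fold` attribute from PEP 495 is precisely the disambiguating information the RTC cannot store:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

cph = ZoneInfo("Europe/Copenhagen")

# The wall clock reads 02:30 twice on 28 Oct 2001; `fold` selects
# which occurrence is meant, exactly the bit an RTC lacks.
first = datetime(2001, 10, 28, 2, 30, fold=0, tzinfo=cph)   # still CEST (UTC+2)
second = datetime(2001, 10, 28, 2, 30, fold=1, tzinfo=cph)  # after fall-back, CET (UTC+1)

print(first.astimezone(timezone.utc))   # 2001-10-28 00:30:00+00:00
print(second.astimezone(timezone.utc))  # 2001-10-28 01:30:00+00:00
```

The same local reading maps to two different UTC instants; without the extra flag, a local-time RTC simply cannot say which one it means.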

--
Jesper Dybdal, Denmark.
http://www.dybdal.dk (in Danish).

Todd Knarr

Jul 13, 2001, 3:22:51 PM
to
In comp.protocols.time.ntp <2j9uktsv76phkr707...@4ax.com> Don Payette <Nob...@nowhere.com> wrote:
> However the point I was making is that his first bullet
> states that going from local to UTC is somehow more
> problematic than the reverse. They aren't different. To

Actually they are. To go from UTC to local you only need to
know the UTC time and the current timezone. To go the other
way, though, you need not only the local time but the timezone
it was in _when it was recorded_.

Think about a file's modification time when I take a laptop on
a trip from Los Angeles to New York. Just before I leave, at
7:00AM local time, I modify a file. 2 hours later I arrive in
New York at 1:00PM local time, and reset the computer to the
local timezone and correct the clock. I then ask the computer
for the modification time of the file in UTC. If the computer
keeps times in local time, what will it say the modification
time was? Is this correct?
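
The answer can be worked through with illustrative zone data (nothing below comes from a real filesystem). If only the naive wall-clock value survives, the reconstructed UTC time depends on whichever zone happens to be configured when you ask:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Naive wall-clock value a local-time filesystem would have recorded:
wall = datetime(2001, 7, 13, 7, 0)

# Correct answer, possible only if the zone of record was kept:
true_utc = wall.replace(tzinfo=ZoneInfo("America/Los_Angeles")).astimezone(timezone.utc)

# What a naive conversion yields once the machine is set to Eastern:
wrong_utc = wall.replace(tzinfo=ZoneInfo("America/New_York")).astimezone(timezone.utc)

print(true_utc.strftime("%H:%M UTC"))   # 14:00 UTC
print(wrong_utc.strftime("%H:%M UTC"))  # 11:00 UTC, three hours off
```

With UTC storage the question never arises: the timestamp is the same instant no matter where the laptop lands.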

D. J. Bernstein

Jul 13, 2001, 4:33:53 PM
to
Jesper Dybdal <jdu...@u5.dybdal.dk> wrote:
> The RTC does not store that information, it only stores the local time
> (01:30).

It also doesn't store leap seconds: the second counter is limited to 59.

The local-time-versus-universal-time arguments are just like the
UTC-versus-TAI arguments. Novice programmers think that UTC is easier
than TAI, and that local time is easier than UTC; they don't understand
the engineering benefits of modularizing the user interface.

---Dan

Aleksandar Milivojevic

Jul 16, 2001, 2:28:30 AM
to
Don Payette (Nob...@nowhere.com) wrote:
> However the point I was making is that his first bullet
> states that going from local to UTC is somehow more
> problematic than the reverse. They aren't different. To
> do either you need to know the offset, which is typically
> obtainable from the zone name, if not the abbreviation,
> but the offset is the important number to know.

But, as I said, often DST information is not available. Current time
is 8:27:16 Middle European Time. And I leave it to you to guess whether
it is with or without DST.

H. Peter Anvin

Jul 19, 2001, 2:39:47 PM
to
Followup to: <3B479DDA...@acm.org>
By author: Dave Tweed <dtw...@acm.org>
In newsgroup: comp.protocols.time.ntp
>
> I'm working with a class of products called "GPS-disciplined master
> oscillators" for the broadcast industry. These generally consist of a
> single- or double-oven OCXO that is voltage-adjustable over a small
> range. A controller compares the output of the oscillator to the 1 pps
> output of a GPS receiver module and adjusts the oscillator to keep it
> locked. The better ones also model the aging characteristics of the
> oscillator and extrapolate forward in time during the intervals when
> GPS is unavailable.
>

Dumb question: what is the price delta between such a contraption and
a rubidium or cesium oscillator?

-hpa
--
<h...@transmeta.com> at work, <h...@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

David L. Mills

Jul 19, 2001, 9:47:32 PM
to
hpa,

You mention a class of GPS timing receivers made by Austron and others
for telecommunications synchronization. I have two Austron receivers,
each with OCXO and one with a LORAN-C auxiliary discipline. Each cost
$16K new, but that was ten years ago. The current generation of GPS
receivers made by TruTime, Spectracom and others doesn't need the OCXO
or LORAN-C, since there are more satellites and the Selective
Availability wiggle has been turned off. New cesium oscillators are in
the $40K-$70K range, but used ones are floating around the surplus
market. My three were donated, but the beam tubes don't last forever and
replacement (used) ones have set me back over $2K.

Dave

Dave Tweed

Jul 20, 2001, 8:16:52 AM
to
"H. Peter Anvin" wrote:
> By author: Dave Tweed <dtw...@acm.org>
> > I'm working with a class of products called "GPS-disciplined master
> > oscillators" for the broadcast industry. These generally consist of a
> > single- or double-oven OCXO that is voltage-adjustable over a small
> > range. A controller compares the output of the oscillator to the 1 pps
> > output of a GPS receiver module and adjusts the oscillator to keep it
> > locked. The better ones also model the aging characteristics of the
> > oscillator and extrapolate forward in time during the intervals when
> > GPS is unavailable.
>
> Dumb question: what is the price delta between such a contraption and
> a rubidium or cesium oscillator?

New cesium oscillators are in the $50K-$70K range, while GPS-disciplined
oscillators fall into the $2K-$6K range, depending on the quality of
the OCXO and the amount of engineering that was put into the control
algorithm.

Contrary to what Dave Mills says, high-quality OCXOs are still needed in
some applications. GPS can become unavailable for up to a few hours for
reasons other than theoretical satellite visibility, and holdover accuracy
to better than 1 us per hour is required.
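
As a back-of-envelope check on that holdover figure, assuming drift is dominated by a constant fractional frequency offset (real OCXO models also include aging and temperature terms), the required stability works out to a few parts in 1e10:

```python
# Reading the "1 us per hour" holdover spec as a stability requirement,
# under the simplifying assumption of a constant fractional frequency
# offset and no aging or temperature effects.
budget_s = 1e-6       # allowed accumulated error: 1 microsecond...
window_s = 3600.0     # ...over one hour without GPS
max_fractional_offset = budget_s / window_s
print(f"required stability: {max_fractional_offset:.1e}")  # ~2.8e-10
```

That is well beyond an ordinary crystal but within reach of a good double-oven OCXO, which is why the OCXO quality matters in these products.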

BTW, most of these receivers have a "position locked" mode, so Selective
Availability doesn't affect them even when it is turned on. It just takes
them longer to achieve their final specs from a cold start (usually 24
hours instead of a few minutes).

-- Dave Tweed

David L. Mills

Jul 21, 2001, 12:56:24 AM
to Dave Tweed
Dave,

My six GPS receivers have never seen fewer than four satellites in well
over six years. Yours might not be so lucky. Never in ten years has any
of my receivers lost all satellites, although the LORAN-C discipline
kicked in now and then during the period after the Gulf War. A "position
locked" feature is common in all my GPS receivers, but this is no
guarantee the PDOP will not be exceeded even after the coordinates are
determined. This is not to say that a precision OCXO is not a good
investment, just that it is not justified solely on the suspicion GPS
satellites will come dark.

Dave

Dave Tweed

Jul 22, 2001, 7:42:49 PM
to
Dave --

> My six GPS receivers have never seen fewer than four satellites in well
> over six years. Yours might not be so lucky. Never in ten years has any
> of my receivers lost all satellites, although the LORAN-C discipline
> kicked in now and then during the period after the Gulf War. A "position
> locked" feature is common in all my GPS receivers, but this is no
> guarantee the PDOP will not be exceeded even after the coordinates are
> determined. This is not to say that a precision OCXO is not a good
> investment, just that it is not justified solely on the suspicion GPS
> satellites will come dark.

Right, I'm not denying that. But I'm putting them in facilities where
they share tower space and other resources with commercial broadcasters
and other radio services, and the staff there won't hesitate to disconnect
an antenna lead for a few hours in order to (re)locate some equipment or
do other work, and of course, lightning strikes can also take out an
antenna with no one able to get to the site for a few hours. That's what
I was referring to by GPS becoming "unavailable" to the disciplined
oscillator.

-- Dave Tweed
