
Epoch today


Paul Eggert

Feb 29, 2000

mg...@cl.cam.ac.uk (Markus Kuhn) writes:

>I just noted that ~100 minutes ago was a pretty good potential epoch
>for Gregorian calendar based date/time handling systems, in case we are
>looking for a new one:

> 2000-03-01 00:00Z

Yes. That epoch is also proposed in the following draft API for C0x:

http://www.twinsun.com/tz/timeapi.html

(based on earlier work by you. :-)

Markus Kuhn

Mar 1, 2000

I just noted that ~100 minutes ago was a pretty good potential epoch
for Gregorian calendar based date/time handling systems, in case we are
looking for a new one:

2000-03-01 00:00Z

Cheers,

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: <http://www.cl.cam.ac.uk/~mgk25/>

Erkki Ruohtula

Mar 1, 2000

In article <89i0od$2rt$1...@light.twinsun.com>,
egg...@twinsun.com (Paul Eggert) wrote:

> mg...@cl.cam.ac.uk (Markus Kuhn) writes:
> >I just noted that ~100 minutes ago was a pretty good potential epoch
> >for Gregorian calendar based date/time handling systems, in case we are
> >looking for a new one:
>
> > 2000-03-01 00:00Z
>
> Yes. That epoch is also proposed in the following draft API for C0x:
>
> http://www.twinsun.com/tz/timeapi.html

Could either of you explain why this would be a good epoch,
and why some standard might want to select a new one, instead of
using one of the choices already in use in existing standards
or APIs? I looked at the quoted web page, but found no explanations.

Intuitively, I would assume an epoch date that yields positive
values for most dates of the "computer age" to be superior.

Erkki



amos-...@nsof.co.il

Mar 1, 2000

Look up the CALNDR-L mailing list at LIST...@ECUMAIL7.ECU.EDU;
they are making a lot of fuss about this epoch (and others) there.

--
Amos Shapir
Paper: nSOF Parallel Software, Ltd.
Givat-Hashlosha 48800, Israel
Tel: +972 3 9388551 Fax: +972 3 9388552 GEO: 34 55 15 E / 32 05 52 N

Valeriy E. Ushakov

Mar 1, 2000

In comp.std.internat Erkki Ruohtula <ruoh...@my-deja.com> wrote:

> > > 2000-03-01 00:00Z
>
> Could either of you explain why this would be a good epoch,

See <http://www.naggum.no/lugm-time.html>

To quote from the above:

| a 400-year cycle not only starts 2000-03-01 (as it did 1600-03-01),
| it contains an even number of weeks: 20,871. This means that we can
| make do with a single 400-year calculation for all time within the
| Gregorian calendar with respect to days of week, leap days, etc.

SY, Uwe
--
u...@ptc.spbu.ru | Zu Grunde kommen
http://www.ptc.spbu.ru/~uwe/ | Ist zu Grunde gehen

Andreas Prilop

Mar 1, 2000

In article <89jjrc$r3u$1...@news.ptc.spbu.ru>,
"Valeriy E. Ushakov" <u...@ptc.spbu.ru> wrote:

> | it contains an even number of weeks: 20,871.

I think it's rather odd.

--
Change "invalid" to "de" in e-mail address.

D. J. Bernstein

Mar 1, 2000

Erkki Ruohtula <ruoh...@my-deja.com> wrote:
> Could either of you explain why this would be a good epoch,

It wouldn't be. There's already a standard MJD epoch, and this ain't it.
Breaking compatibility to save an occasional subtraction is foolish.

What Kuhn and Eggert are alluding to is the fact that the Gregorian
calendar consists of

matching 400-year cycles, each 400-year cycle being
4 matching 100-year cycles plus one day, each 100-year cycle being
25 matching 4-year cycles minus one day, each 4-year cycle being
4 matching 1-year cycles plus one day

as long as you start the 400-year cycles on 1 March of years divisible
by 400. Fast date-conversion code follows this structure.
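
To make that concrete, here is a minimal C sketch of such code (an
illustration, not code from the draft API; the function name and the
730485-day offset from 0000-03-01 to the proposed 2000-03-01 epoch are
this example's own choices):

    #include <stdio.h>

    /* Days since 2000-03-01 for a Gregorian date. Years are shifted
     * to start on 1 March, so the leap day falls at the very end of
     * each cycle, exactly as described above. */
    long days_from_civil(long y, int m, int d)
    {
        long era, doe;
        unsigned yoe, doy;
        y -= m <= 2;                 /* Jan/Feb belong to the previous March-based year */
        era = (y >= 0 ? y : y - 399) / 400;
        yoe = (unsigned)(y - era * 400);                       /* year of era: [0, 399] */
        doy = (153u * (unsigned)(m + (m > 2 ? -3 : 9)) + 2) / 5
              + (unsigned)d - 1;                               /* day of year: [0, 365] */
        doe = yoe * 365L + yoe / 4 - yoe / 100 + doy;          /* day of era: [0, 146096] */
        return era * 146097L + doe - 730485L;                  /* 730485 days: 0000-03-01 to 2000-03-01 */
    }

    int main(void)
    {
        printf("%ld\n", days_from_civil(2000, 3, 1));  /* prints 0 */
        return 0;
    }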

---Dan

Terje Mathisen

Mar 1, 2000

Valeriy E. Ushakov wrote:
>
> In comp.std.internat Erkki Ruohtula <ruoh...@my-deja.com> wrote:
>
> > > > 2000-03-01 00:00Z
> >
> > Could either of you explain why this would be a good epoch,
>
> See <http://www.naggum.no/lugm-time.html>
>
> To quote from the above:
>
> | a 400-year cycle not only starts 2000-03-01 (as it did 1600-03-01),
> | it contains an even number of weeks: 20,871. This means that we can
> | make do with a single 400-year calculation for all time within the
> | Gregorian calendar with respect to days of week, leap days, etc.

This follows directly from the number of days in 400 years:
146097, which is evenly divisible by 7 (146097 = 7 * 20871).
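
Concretely (an editorial sketch, not Terje's code), with day numbers
counted from 2000-03-01, which was a Wednesday:

    /* Because 146097 % 7 == 0, the weekday pattern repeats exactly
     * every 400 years, so one mod-7 covers the whole Gregorian
     * calendar. Returns 0 = Sunday ... 6 = Saturday; day 0
     * (2000-03-01) was a Wednesday, hence the +3. */
    int weekday(long days_since_2000_03_01)
    {
        return (int)((days_since_2000_03_01 % 7 + 7 + 3) % 7);
    }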

However, any other date which is equal modulo 400 years is just as good;
in the code I wrote to test compiler optimization I used 1600-03-01 as
my internal "day zero".

The "best" choice is probably to work with a date like this which is
before the first representable date in your calendar system, making all
day numbers positive.

This allows the code to be noticeably faster.

Terje

PS. Probably the best way to implement the conversion between day number
and (year, month, day) is to use a good approximation for the year number,
using either fixed-point or floating-point multiplication.

By selecting the proper scale factor, this guess will normally be
correct and, if not, always one too high.

Since the reverse operation is much simpler (when starting from a day
like 1600-03-01), the first guess can then be checked and, if needed,
decremented.
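
A sketch of the reverse conversion in that spirit (an illustration, not
Terje's code; it uses the well-known exact divisions rather than a
multiplicative guess, with 2000-03-01 as day zero):

    /* Day count relative to 2000-03-01 back to year/month/day. */
    void civil_from_days(long z, long *y, int *m, int *d)
    {
        long era;
        unsigned doe, yoe, doy, mp;
        z += 730485;                        /* days since 0000-03-01 */
        era = (z >= 0 ? z : z - 146096) / 146097;
        doe = (unsigned)(z - era * 146097);                      /* [0, 146096] */
        yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365;   /* [0, 399] */
        doy = doe - (365*yoe + yoe/4 - yoe/100);                 /* [0, 365] */
        mp  = (5*doy + 2) / 153;                                 /* March-based month: [0, 11] */
        *d = (int)(doy - (153*mp + 2)/5 + 1);
        *m = (int)(mp < 10 ? mp + 3 : mp - 9);
        *y = (long)yoe + era * 400 + (*m <= 2);
    }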

--
- <Terje.M...@hda.hydro.com>
Using self-discipline, see http://www.eiffel.com/discipline
"almost all programming can be viewed as an exercise in caching"

Vernon Schryver

Mar 1, 2000

In article <38BD6C...@hda.hydro.com>,
Terje Mathisen <Terje.M...@hda.hydro.com> wrote:

> ...
>This allows the code to be noticeably faster.

Please define "noticeably," with due consideration of the frequency
with which the code is executed.


Vernon Schryver v...@rhyolite.com

Terje Mathisen

Mar 2, 2000

Vernon Schryver wrote:
>
> In article <38BD6C...@hda.hydro.com>,
> Terje Mathisen <Terje.M...@hda.hydro.com> wrote:
>
> > ...
> >This allows the code to be noticeably faster.
>
> Please define "noticeably," with due consideration of the frequency
> with which the code is executed.

OK: 25% to 100% faster on each time_t to struct tm conversion, as needed
by anything like 'dir' or 'ls -l'.

I.e. not at all critical, just nice to have: Shorter, faster, and
simpler code.

Terje

Vernon Schryver

Mar 2, 2000

In article <38BE421D...@hda.hydro.com>,
Terje Mathisen <Terje.M...@hda.hydro.com> wrote:

>> >This allows the code to be noticeably faster.
>>
>> Please define "noticeably," with due consideration of the frequency
>> with which the code is executed.
>
>OK: 25% to 100% faster on each time_t to struct tm conversion, as needed
>by anything like 'dir' or 'ls -l'.
>
>I.e. not at all critical, just nice to have: Shorter, faster, and
>simpler code.

Does that make `dir` or `ls -l` run 25-100% noticeably or even measurably
faster? Of course not! Unless you can measure the speed-up in `dir`
or `ls`, how can you call any improvement "noticeably faster"?

We've had this discussion before, and I thought you agreed that changing
things merely so that you can claim you've optimized them is silly,
childish, and foolish (although we didn't use those words). Changing
the epoch for an existing protocol or operating system would inevitably
introduce bugs and errors that would have to be found and fixed for years
to come. Picking a novel epoch for a new protocol or a new operating
system would inevitably involve additional bugs and errors in conversion
code for talking to other systems that would not otherwise need to be
found and fixed if you instead used an existing, popular epoch.

Yes, I realize that Microsoft picked a new epoch for WIN32 instead of
using any of the existing epochs. I rest my case.


Vernon Schryver v...@rhyolite.com

Terje Mathisen

Mar 3, 2000

Vernon Schryver wrote:
>
> In article <38BE421D...@hda.hydro.com>,
> Terje Mathisen <Terje.M...@hda.hydro.com> wrote:
>
> >> >This allows the code to be noticeably faster.
> >>
> >> Please define "noticeably," with due consideration of the frequency
> >> with which the code is executed.
> >
> >OK: 25% to 100% faster on each time_t to struct tm conversion, as needed
> >by anything like 'dir' or 'ls -l'.
> >
> >I.e. not at all critical, just nice to have: Shorter, faster, and
> >simpler code.
>
> Does that make `dir` or `ls -l` run 25-100% noticeably or even measurably
> faster? Of course not! Unless you can measure the speed-up in `dir`
> or `ls`, how can you call any improvement "noticeably faster"?

Please!

I have not argued that we should change anything like this just because
I personally believe it would be marginally faster.

What I'm suggesting is that if you or someone else ever has the need to
write code like this from scratch, then please consider using those
ideas.

>
> We've had this discussion before, and I thought you agreed that changing
> things merely so that you can claim you've optimized them is silly,
> childish, and foolish (although we didn't use those words). Changing
> the epoch for an existing protocol or operating system would inevitably
> introduce bugs and errors that would have to be found and fixed for years
> to come. Picking a novel epoch for a new protocol or a new operating
> system would inevitably involve additional bugs and errors in conversion
> code for talking to other systems that would not otherwise need to be
> found and fixed if you instead used an existing, popular epoch.

Again, if what I've written has given you the idea that I would like to
change the epoch (i.e. as used in Unix and NTP APIs), just because it
would get rid of a single constant offset calculation inside my code, I
apologize for not being sufficiently fluent in a foreign language.

> Yes, I realize that Microsoft picked a new epoch for WIN32 instead of
> using any of the existing epochs. I rest my case.

<bg>

MS actually did get some things right with WIN32 and time:

By defining a single global timebase with both a very useful resolution
(100 ns) and a long range (63 bits), the number of additional
application-specific time formats is reduced.

Of course, they also messed up really badly by making it near-impossible
to actually query that system clock with better than timer tick (10 ms)
resolution.

The current hack around this misfeature requires a real-time thread just
to compare the cpu clock counter with the system time. :-(

So, Vernon, are you happy now, or should I grovel some more? :-)

Terje

PS. The normal Unix convention of using UTC seconds for all dates,
disregarding all leap seconds, is also broken, imho.

For dates, it is better to use day numbers instead of (day_number *
86400).

Vernon Schryver

Mar 3, 2000

In article <38C011A0...@hda.hydro.com>,
Terje Mathisen <terje.m...@hda.hydro.com> wrote:

> ...
>I have not argued that we should change anything like this just because
>I personally believe it would be marginally faster.
>
>What I'm suggesting is that if you or someone else ever has the need to
>write code like this from scratch, then please consider using those
>ideas.

Ok, I misunderstood.

However, I still think that if I were writing a brand new operating system
or network time protocol, I'd be a naive fool to try to save a cycle or
two per conversion by picking a new epoch instead of using whichever
existing epoch is most common in the market closest to my target.

If I'm inventing UNIX for the first time in a parallel universe,
your observations about good choices for the epoch would seem useful.


> ...


>> Yes, I realize that Microsoft picked a new epoch for WIN32 instead of
>> using any of the existing epochs. I rest my case.
>
><bg>
>
>MS actually did get some things right with WIN32 and time:
>
>By defining a single global timebase with both a very useful resolution
>(100 ns) and a long range (63 bits), the number of additional
>application-specific time formats is reduced.

Yes, since 64 bit arithmetic in the 1980's was about as hard as 32 bit
arithmetic on minicomputers in the late 1960's or early 1970's, and if
you are determined to prove your ability to write formal functional specs,
then Microsoft did the right thing in going to 64 bits. You might also
rationally justify going to 64 bits and paying the costs of converting to
and from all other time formats by worrying about the end of the UNIX
epoch in 2038. However, Microsoft merely proved its usual talent for
being Microstupid in picking a new epoch. Or maybe Microsoft was being
embrace-and-extend MicroSmart by maximizing the pain of talking to others.

>Of course, they also messed up really badly by making it near-impossible
>to actually query that system clock with better than timer tick (10 ms)
>resolution.

I don't think that's a big deal; it's more of an implementation
bug than a design error. I see no reason why GetSystemTime (or whatever
it's called) can't be changed the way BSD UNIX on 80*86's and SV on other
hardware were changed to have gettimeofday() return more resolution than
the 60-100 Hz clock.

I do think Microsoft messed up in the gyrations they recommend for
dealing with 64-bit FILETIME's.


>PS. The normal Unix convention of using UTC seconds for all dates,
>disregarding all leap seconds, is also broken, imho.

>For dates, it is better to use day numbers instead of (day_number *
>86400).

In that area, all choices are equally broken. Your preference of what
to break and what not to break with leap seconds is reasonable for some
purposes, but bad for others. Isn't WIN32 as broken/not broken with
respect to leap seconds as UNIX?


Vernon Schryver v...@rhyolite.com

Paul Keinanen

Mar 4, 2000

On Fri, 03 Mar 2000 20:25:20 +0100, Terje Mathisen
<terje.m...@hda.hydro.com> wrote:


>MS actually did get some things right with WIN32 and time:
>
>By defining a single global timebase with both a very useful resolution
>(100 ns) and a long range (63 bits), the number of additional
>application-specific time formats is reduced.

This sounds quite familiar: VAX/VMS also used a 64-bit absolute/delta
time format with 100 ns resolution. The only difference seems to be
the epoch, since VAX/VMS time starts at JD 2400000.5 (1858-11-17, the
MJD epoch), or nearly 150 years ago.


>Of course, they also messed up really badly by making it near-impossible
>to actually query that system clock with better than timer tick (10 ms)
>resolution.

If you need submillisecond _resolution_, why bother with the
date/time functions at all? Use the performance counter services
(QueryPerformanceFrequency and QueryPerformanceCounter) to get about
800 ns resolution on a standard PC and a few nanoseconds' resolution on a
multiprocessor PC. If these events are also to be relayed for human
consumption, get the system time and the performance counter reading once,
calculate the difference, and use this difference when converting other
event times to hh:mm:ss.ttt or whatever human-readable format is needed.
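
A minimal Win32 sketch of that calibrate-once approach (an editorial
illustration; error handling and counter wraparound are ignored):

    #include <windows.h>

    static LARGE_INTEGER qpc_freq, qpc_base;
    static ULONGLONG ft_base;   /* 100 ns units since 1601-01-01 */

    /* Pair the system clock with the performance counter, once. */
    void calibrate(void)
    {
        FILETIME ft;
        QueryPerformanceFrequency(&qpc_freq);
        GetSystemTimeAsFileTime(&ft);        /* coarse (~10 ms) system time */
        QueryPerformanceCounter(&qpc_base);  /* read back-to-back with the above */
        ft_base = ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    }

    /* Event time in FILETIME units, at performance-counter resolution.
     * The multiply can overflow for very long intervals; fine for a sketch. */
    ULONGLONG event_time(LARGE_INTEGER event_qpc)
    {
        LONGLONG ticks = event_qpc.QuadPart - qpc_base.QuadPart;
        return ft_base + (ULONGLONG)(ticks * 10000000 / qpc_freq.QuadPart);
    }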

>The current hack around this misfeature requires a real-time thread just
>to compare the cpu clock counter with the system time. :-(

The only time I can think of when you would need a high priority
thread is when you are measuring events external to the computer and
you have a millisecond _accuracy_ of the system clock (e.g. through
net time or GPS time), in which case you would like to read the system
time and performance counter in a more or less atomic operation to
compensate for the drift of various oscillators.

However, the proper way of timing external events accurately is to use
an external time standard, such as the 1 pulse/s output of a GPS
receiver and feed it to one input of a parallel input card while the
other events are connected to other input bits on the _same_ input
card. In this way most systematic errors can be eliminated (e.g.
interrupt latency).

>PS. The normal Unix convention of using UTC seconds for all dates,
>disregarding all leap seconds, is also broken, imho.

Keeping the internal system clock in a linear time (such as TAI/IAT or
GPS time) is a good thing, since you should not mess with the time
controlling internal timing no matter what bureaucratic conventions
(such as standard/daylight or leap seconds) are used in the external
world. The thing that may be broken is the conversion to local time,
in which the time zone information should also include the difference
between IAT and UTC. However, I do not know if all flavours of Unix
support time zone specification to 1 s resolution.


>For dates, it is better to use day numbers instead of (day_number *
>86400).

This nicely avoids the leap second problem. The Julian Day Number
system is a good old method of expressing days from different
calendars. It nicely covers all times from which we have written
history and, even when stored as a 24-bit integer, will last for a very
long time. Storing it as a 64-bit double precision floating point
value will give submillisecond resolution, but here again the leap
second issue must be checked to see if it is relevant for the problem
to be solved.
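
For reference, the classic Fliegel & Van Flandern integer formula
computes the Julian Day Number directly from a Gregorian date (a sketch;
the function name is this example's own):

    /* Gregorian date to Julian Day Number; jdn(2000, 3, 1) == 2451605. */
    long jdn(long y, int m, int d)
    {
        long a = (14 - m) / 12;     /* 1 for Jan/Feb, 0 otherwise */
        long yy = y + 4800 - a;
        long mm = m + 12 * a - 3;   /* March-based month, again */
        return d + (153 * mm + 2) / 5 + 365 * yy
                 + yy / 4 - yy / 100 + yy / 400 - 32045;
    }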

Paul

Terje Mathisen

Mar 4, 2000

Vernon Schryver wrote:
> However, I still think that if I were writing a brand new operating system
> or network time protocol, I'd be a naive fool to try to save a cycle or
> two per conversion by picking a new epoch instead of using whichever
> existing epoch is most common in the market closest to my target.

Assuming you are staying with the second as your unit of measurement,
then by all means keep 1970-01-01 as the epoch.

If you want to use some other time unit, then modifying the epoch as
well really doesn't matter, but re. Microsoft, see below!

> >Of course, they also messed up really badly by making it near-impossible
> >to actually query that system clock with better than timer tick (10 ms)
> >resolution.
>

> I don't think that's a big deal, and that it's more of an implementation
> bug than a design error. I see no reason why GetSystemTime (or whatever
> it's called) can't be changed as BSD UNIX on 80*86's and SV on other
> hardware were changed to have gettimeofday() return more resolution than
> the 60-100 HZ clock.

Microsoft could in principle fix that any time they feel like it;
however, by splitting out anything related to the system clock interrupt
down in the HAL layer, they (afaik) made it impossible for even a kernel
driver to provide a proper interface.

> I do think Microsoft messed up in the gyrations they recommend for
> dealing with 64-bit FILETIME's.

Yes, I simply disregard most of the text in their API docs, and copy
those FILETIME's into regular 64-bit ints as soon as possible.

> >PS. The normal Unix convention of using UTC seconds for all dates,
> >disregarding all leap seconds, is also broken, imho.
>

> >For dates, it is better to use day numbers instead of (day_number *
> >86400).
>

> In that area, all choices are equally broken. Your preference of what
> to break and what not to break with leap seconds is reasonable for some
> purposes, but bad for others. Isn't WIN32 as broken/not broken with
> respect to leap seconds as UNIX?

Yes, WIN32 FILETIME is identical to time_t in that regard, modulo a
division by 1e7 and shifting the epoch by 369 years.
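
In code, that correspondence is just two constants (my own sketch;
11644473600 is the number of seconds in those 369 years, from
1601-01-01 to 1970-01-01):

    #include <stdint.h>

    #define EPOCH_DIFF_SECS 11644473600LL   /* 1601-01-01 to 1970-01-01 */

    int64_t filetime_to_time_t(uint64_t ft)    /* ft: 100 ns since 1601 */
    {
        return (int64_t)(ft / 10000000) - EPOCH_DIFF_SECS;
    }

    uint64_t time_t_to_filetime(int64_t t)     /* t: seconds since 1970 */
    {
        return (uint64_t)(t + EPOCH_DIFF_SECS) * 10000000u;
    }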

I would love to have a time/calendar api which did "The Right Thing" for
both short and long time intervals, but since the actual length of a
year is unknown until about 6 months before it ends, this is probably
impossible.

However, by defining future dates as day numbers, it becomes easier to
know exactly when that event is supposed to take place, even if it is
impossible to calculate exactly how many seconds the intervening time
interval will last. :-(

Terje

Terje Mathisen

Mar 4, 2000

Paul Keinanen wrote:
>
> On Fri, 03 Mar 2000 20:25:20 +0100, Terje Mathisen
> <terje.m...@hda.hydro.com> wrote:
>
> >MS actually did get some things right with WIN32 and time:
> >
> >By defining a single global timebase with both a very useful resolution
> >(100 ns) and a long range (63 bits), the number of additional
> >application-specific time formats is reduced.
>
> This sounds quite familiar: VAX/VMS also used a 64-bit absolute/delta
> time format with 100 ns resolution. The only difference seems to be
> the epoch, since VAX/VMS time starts at JD 2400000.5 (1858-11-17, the
> MJD epoch), or nearly 150 years ago.

Since Dave Cutler was a big force behind both VMS and NT, this isn't too
surprising. :-(

I believe one possible reason Microsoft changed the epoch is that they
managed to break their implementation in such a way that only
positive (i.e. 63-bit) values can work.

Their docs actually warn you that using a value with the top bit set
(negative or just a very large unsigned value) will cause a failure.

I'm guessing they said something like: "1601 is definitely before any
time that people might want to express on a WIN32 system, while the
normal MJD epoch wasn't 'invented here', so we can't use that"

Anyway, yet another case of "broken by design"? :-(

> >Of course, they also messed up really badly by making it near-impossible
> >to actually query that system clock with better than timer tick (10 ms)
> >resolution.
>

> If you need submillisecond _resolution_, why bother with the
> date/time functions at all? Use the performance counter services
> (QueryPerformanceFrequency and QueryPerformanceCounter) to get about
> 800 ns resolution on a standard PC and a few nanoseconds' resolution on a
> multiprocessor PC. If these events are also to be relayed for human

I've used these timers since about 1983 to measure the speed of code I
write. I assume you are reading this thread on comp.std.internat, and
not comp.protocols.time.ntp?

> However, the proper way of timing external events accurately is to use
> an external time standard, such as the 1 pulse/s output of a GPS
> receiver and feed it to one input of a parallel input card while the
> other events are connected to other input bits on the _same_ input
> card. In this way most systematic errors can be eliminated (e.g.
> interrupt latency).

Yes, indeed. I have 3 (getting the fourth soon) independent GPS-based
NTP Stratum 1 sources (different vendors/GPS boards). These are placed
in different regions of our high-speed corporate WAN, and supplemented
with external stratum 1 and 2 peers, and a DCF-77 radio clock.

Two of the GPS/NTP servers are of the "black box" type, while the third
is a Motorola Oncore connected to FreeBSD 3.4, with all the NTP kernel
hacks implemented by Poul-Henning Kamp.

My beef with NT is that it is basically impossible to get close to the
same absolute precision out of a brand-new, unloaded, PIII box as what
FreeBSD delivers on a 4-5 year old Pentium box.

D. J. Bernstein

Mar 4, 2000

Terje Mathisen <terje.m...@hda.hydro.com> wrote:
> PS. The normal Unix convention of using UTC seconds for all dates,
> disregarding all leap seconds, is also broken, imho.

But that's not the normal UNIX convention. Almost all time-handling code
treats time_t as a real-time counter: TAI seconds since an epoch.

The kernel, for example, increases time_t monotonically, trying to match
real time. There's a huge amount of code that subtracts time_t values to
determine real-time intervals. There's a huge amount of code that adds
time intervals to time_t to schedule future events.

Yes, there are some ancient time-conversion libraries that can't handle
leap seconds, but there are also replacements for those libraries.

I'm running my UNIX systems on TAI, synchronized by NTP. My CST/CDT
local-time displays are accurate. My measurements of real-time
differences are accurate. Everything works. There's no screwy wobbling
near leap seconds. The only code I had to replace was in xntpd.

> For dates, it is better to use day numbers instead of (day_number * 86400).

Yup. I wrote a caldate library, http://cr.yp.to/libtai/caldate.html, for
exactly this reason.

---Dan

Vernon Schryver

Mar 4, 2000

In article <38C117CA...@hda.hydro.com>,
Terje Mathisen <terje.m...@hda.hydro.com> wrote:

> ...
>> I don't think that's a big deal, and that it's more of an implementation
>> bug than a design error. I see no reason why GetSystemTime (or whatever
>> it's called) can't be changed as BSD UNIX on 80*86's and SV on other
>> hardware were changed to have gettimeofday() return more resolution than
>> the 60-100 HZ clock.
>
>Microsoft could in principle fix that any time they feel like it,
>however by splitting out anything related to the system clock interrupt
>down in the HAL layer, (afaik) they made it impossible for even a kernel
>driver to provide a proper interface.

You say that as if the HAL layer were as fixed by good documentation
and code from zillions of organizations as the WIN32 API, and so Microsoft
can't change it at the drop of a hat or a threat to 1% of their market share.
I also don't imagine anyone in the Windows world would ever consider
ignoring the HAL and writing machine-dependent code in a driver, interrupt
handler, or system service, not to mention an application or DLL.

You also seem to be saying that the HAL is a proper interface instead
of an over-specified, under-thought exercise in design by committee.


> ...


>I would love to have a time/calendar api which did "The Right Thing" for
>both short and long time intervals, but since the actual length of a
>year is unknown until about 6 months before it ends, this is probably
>impossible.


>However, by defining future dates as day numbers, it becomes easier to
>know exactly when that event is supposed to take place, even if it is
>impossible to calculate exactly how many seconds the intervening time
>interval will last. :-(

I don't see that. If you don't know how many seconds there will
be until a desired date, how do you know when it has arrived?
And aren't you forgetting minor details like future improvements by
the civil authorities (e.g. daylight savings time)?

A count of days since an instant in the past is worse than a count
of seconds since the same date by a factor of 24*3600=8640.
A day-count does not have fewer or even different hard problems
than a second-count for the task that started this thread, which
is communicating with people about local, civil dates.


Vernon Schryver v...@rhyolite.com

Terje Mathisen

Mar 6, 2000

Vernon Schryver wrote:
>
> In article <38C117CA...@hda.hydro.com>,
> Terje Mathisen <terje.m...@hda.hydro.com> wrote:
>
> > ...
> >> I don't think that's a big deal, and that it's more of an implementation
> >> bug than a design error. I see no reason why GetSystemTime (or whatever
> >> it's called) can't be changed as BSD UNIX on 80*86's and SV on other
> >> hardware were changed to have gettimeofday() return more resolution than
> >> the 60-100 HZ clock.
> >
> >Microsoft could in principle fix that any time they feel like it,
> >however by splitting out anything related to the system clock interrupt
> >down in the HAL layer, (afaik) they made it impossible for even a kernel
> >driver to provide a proper interface.
>
> You say that as if the HAL layer were as fixed by good documentation
> and code from zillions of organizations as the WIN32 API, and so Microsoft
> can't change it at the drop of a hat or a threat to 1% of their market share.
> I also don't imagine anyone in the Windows world would ever consider
> ignoring the HAL and writing machine-dependent code in a driver, interrupt
> handler, or system service, not to mention an application or DLL.
>
> You also seem to be saying that the HAL is a proper interface instead
> of an over-specified, under-thought exercise in design by committee.

No, rather the opposite: Exactly because the HAL is such a "deep secret"
within Microsoft, they have made it that much harder for anyone outside
the company to fix the problem.

For MS to give us full RDTSC-type precision time stamps would be
trivial; for anyone else it is unnecessarily hard.

> >However, by defining future dates as day numbers, it becomes easier to
> >know exactly when that event is supposed to take place, even if it is
> >impossible to calculate exactly how many seconds the intervening time
> >interval will last. :-(
>
> I don't see that. If you don't know how many seconds there will
> be until a desired date, how do you know when it has arrived?

A future event which must happen N seconds from now, needs to work with
TAI seconds. If this is sufficiently far into the future, it is
impossible to know either the UTC or local time this will correspond to.

A future event which must happen at either local or UTC time HH:MM:SS on
some specific date, needs to work with a combination of day number and
offset into that day. In this case it is impossible to know exactly how
many TAI seconds will pass before that event.

Does that make sense?

Or should we just agree to disagree?

Vernon Schryver

Mar 6, 2000

In article <38C38A54...@hda.hydro.com>,
Terje Mathisen <Terje.M...@hda.hydro.com> wrote:

> ...
>No, rather the opposite: Exactly because the HAL is such a "deep secret"
>within Microsoft, they have made it that much harder for anyone outside
>the company to fix the problem.
>
>For MS to give us full RDTSC-type precision time stamps would be
>trivial; for anyone else it is unnecessarily hard.

The phrase "unnecessarily hard in WIN32" is redundant.
Do the P-III performance counters require privilege to read?


>> >However, by defining future dates as day numbers, it becomes easier to
>> >know exactly when that event is supposed to take place, even if it is
>> >impossible to calculate exactly how many seconds the intervening time
>> >interval will last. :-(
>>
>> I don't see that. If you don't know how many seconds there will
>> be until a desired date, how do you know when it has arrived?

> ...
>A future event which must happen at either local or UTC time HH:MM:SS on
>some specific date, needs to work with a combination of day number and
>offset into that day. In this case it is impossible to know exactly how
>many TAI seconds will pass before that event.
>
>Does that make sense?

Yes, your description of the problem is almost the point I was making.
You skipped the major problem of knowing when the day has arrived, which
is not helped by having the system count days instead of fortnights,
seconds, milliseconds, or anything else. In a system with only seconds,
an application that wants to do something on March 6, 2010 would not simply
(and stupidly) delay about 315,360,000 seconds, but instead would
repeatedly wait 40,000 to 1,000,000 seconds and check the current date
until close to the target. In a system with day numbers, the application
would not simply (and stupidly) delay about 3650 days, but instead would
repeatedly wait 0.5 to 10 days and check the date until close to the
target. In either case, once close to the target, the application would
delay smaller durations.

This successive approximation tactic is required
in any real application for many reasons, including clocks that are
changed, leap seconds, and daylight savings time. Because of this
absolutely required successive approximation, day counting does not help.
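
The tactic described here looks roughly like this (a hypothetical sketch
in POSIX C, not code from any of the programs mentioned):

    #include <time.h>
    #include <unistd.h>

    /* Sleep toward a target time by successive approximation: never
     * sleep more than half the remaining interval, then re-check the
     * clock, so clock steps, DST changes, and leap seconds are absorbed. */
    void wait_until(time_t target)
    {
        for (;;) {
            time_t now = time(NULL);
            if (now >= target)
                return;
            double remaining = difftime(target, now);
            sleep(remaining > 2.0 ? (unsigned)(remaining / 2) : 1u);
        }
    }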

Yes, such successive approximating is isomorphic to the 30+ year old
idea of how UNIX kernel "callouts" work. No, putting a delay-until-day-#
function in the operating system API would be a bad idea (albeit typical
of the Microsoft school of piling it higher and deeper), because of the
costs of teaching the operating system about leap seconds, leap days,
and daylight savings time. Yes, in the unusual case that a system has
more than one application other than the `cron` program that delays until
a particular date (consider how `at` and `calendar` have worked in UNIX
for the last 10+ years), it would be handy to put the common part of
such successive approximation code in a common place, such as a library.
Any real life application that is going to delay more than a day is
better served by `cron` or needs to do other tasks before the target
date. The most common code you could find would be something to compare
(struct tm)'s so that an application that has just awakened could more
easily compare the current date with a target date.

The notion of making the system count days is overall more of a
complication than an at-best minor local optimization. It is at least as
simple to delay a multiple of 86400 seconds when you want to delay more
than a day. Like picking a new epoch simply to save subtracting a
constant, day numbers are a tiny local optimization that makes the whole
system worse.

As I keep saying, the trick is not finding a change (including
a local optimization), but in making the whole system better.


Vernon Schryver v...@rhyolite.com

Colin Andrew Percival

Mar 6, 2000

In comp.protocols.time.ntp Terje Mathisen <Terje.M...@hda.hydro.com> wrote:
> For MS to give us full RDTSC-type precision time stamps would be
> trivial, for anyone else unneccesarily hard.

On my machine, QueryPerformanceCounter returns the TSC value (and
QueryPerformanceFrequency returns the frequency of the processor). The
function might be implemented differently on other versions of Windows,
though; the API documentation doesn't specify *which* high-performance
counter will be used.

Colin Percival

Terje Mathisen

Mar 6, 2000

By default, NT SMP machines do return the TSC counter when you use the
QueryPerformanceCounter API, while single-CPU machines default to the
original 1.19 MHz timer base frequency, which is used (after another
counter stage, 64K in the BIOS) to generate the regular clock interrupts.

Using RDTSC directly is even easier.

My problem isn't that it is hard to measure short time periods
accurately, but that NT makes it needlessly hard to get absolute time
stamps with a similar resolution, something which is needed for a good
(Stratum-1) NTP server.

D. J. Bernstein

Mar 6, 2000

Vernon Schryver <v...@calcite.rhyolite.com> wrote:
> The notion of making the system count days

Most time-handling code, measured by volume or by execution frequency,
deals with real-time seconds, so that should be the basic format. But
there's also quite a bit of code that implicitly or explicitly counts
days: local-time conversion code, for example, and calendar-display
code. So it's useful to have MJD<->calendar date conversion routines.
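
One direction of such a routine, for illustration (my own sketch, not
libtai's API; the Fliegel & Van Flandern formula again, shifted to the
MJD epoch):

    /* Gregorian date to Modified Julian Day; mjd(2000, 3, 1) == 51604.
     * MJD 0 is 1858-11-17, whose Julian Day Number is 2400001. */
    long mjd(long y, int m, int d)
    {
        long a = (14 - m) / 12, yy = y + 4800 - a, mm = m + 12 * a - 3;
        return d + (153 * mm + 2) / 5 + 365 * yy + yy / 4 - yy / 100
                 + yy / 400 - 32045 - 2400001;
    }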

---Dan
