Why does Standard C provide for two leap seconds?


Paul Eggert

May 29, 1992, 5:25:42 PM
The ANSI C Standard allows functions like localtime() and gmtime() to yield
times like 23:59:60 (for one leap second) and 23:59:61 (for a second leap second).
Why is there a provision for two leap seconds? My understanding is as follows:

TAI = International Atomic Time (our best guess at the ``exact'' time)
UT1 = Universal Time (derived from astronomical observations;
this is less regular, due to glitches in Earth's rotation)
UTC = Coordinated Universal Time (often mistakenly called ``GMT'';
the basis for public time after taking time zones into account)

By definition, TAI-UTC is an integer, and UTC is adjusted by the International
Earth Rotation Service using leap seconds so that it is always the case that
|UT1-UTC| < 0.9 s. [Reference: Terry J Quinn, The BIPM and the accurate
measure of time, Proc IEEE 79, 7 (July 1991), 894-905.]

Given these rules, two leap seconds can never be inserted next to each other,
since this would violate the rule |UT1-UTC| < 0.9 s either just before or
just after such an insertion.

So why does Standard C provide for two leap seconds?

Norman Diamond

May 30, 1992, 7:52:21 AM
In article <#-2ic+&{'@twinsun.com> egg...@twinsun.com (Paul Eggert) writes:
>So why does Standard C provide for two leap seconds?

Because there really were two leap seconds in one year (on separate days),
and someone on the ANSI C committee thought that they came on the same day,
and there's just too much politics to go through to try to correct errors.
--
Norman Diamond dia...@jit081.enet.dec.com
If this were the company's opinion, I wouldn't be allowed to post it.
"Yeah -- bad wiring. That was probably it. Very bad."

Andrew Hume

May 30, 1992, 2:24:30 PM

actually, the real question is why are there any leap seconds (in C)?
I edit a format standard (for disk file systems) and we have wavered on the
issue of what are the valid values for the seconds field of a timestamp.
my first attempt was to copy C and allow 2 leap seconds. but unlike C,
we have to specify the semantics of the fields we record. and here is where
we got lost; damned if i know what the semantics are. how exactly is
24:60 different from 25:00? there seem to be only two plausible models
for understanding this:
1) minutes are of variable length.
2) time is discontinuous. (two adjacent seconds may be more than 1 sec apart.)

both of these are unacceptable to some degree; but 2) is most acceptable
because C implicitly wants you to use difftime to measure time differences
and it, in principle, can use the leap second stuff. so we adopted the strategy
that minutes are always 60 seconds and too bad if you want to record a timestamp
that exists in one of these discontinuities.

andrew hume
and...@research.att.com

Colin Plumb

Jun 1, 1992, 6:15:45 AM
In article <1992May31.2...@cs.cmu.edu> bw...@cs.cmu.edu (Bradley White) writes:
>% cat b.c
>#include <sys/types.h>
>#include <stdlib.h>
>#include <time.h>
>#include <stdio.h>
>
>int
>main()
>{
>    struct tm t;
>    time_t t1, t2;
>
>    setenv("TZ", "GMT", 1);
>    t.tm_year = 92; t.tm_mon = 5; t.tm_mday = 30;
>    t.tm_hour = 23; t.tm_min = 59; t.tm_sec = 0; t.tm_isdst = -1;
>    if ((t1 = mktime(&t)) == -1)
>        return 1;
>    fprintf(stdout, "t1 == %s", ctime(&t1));
>    t.tm_min += 1; t.tm_isdst = -1;
>    if ((t2 = mktime(&t)) == -1)
>        return 1;
>    fprintf(stdout, "t2 == %s", ctime(&t2));
>    fprintf(stdout, "difftime(t2, t1) == %g seconds\n", difftime(t2, t1));
>    return 0;
>}
>% make b
>cc b.c -o b
>% ./b
>t1 == Tue Jun 30 23:59:00 1992
>t2 == Wed Jul  1 00:00:00 1992
>difftime(t2, t1) == 61 seconds

This sort of thing always annoyed me about NTP... why doesn't it just
track atomic time and leave corrections to UTC, UT1, or whatever, to
other levels? Well, okay, it often *gets* time in UTC, so it needs to
be able to map that to TAI, but this business of having to append the
leap bits to the timestamp to get unique timestamps is really annoying,
and if you don't care about seconds, then you don't care and it's not
going to kill you either way. But if you do, then the subtraction is
such a pain.

Is this just a historical artifact, or is there a purportedly good reason
for doing things that way?

It would seem that a list of 00:00:00 seconds that were moved due to
leap adjustments would uniquely describe the history of leap seconds,
and you'd only need the most recent one to get the current UTC<->TAI
offset. Not a lot of fuss. Because leap seconds are a cultural decision,
the rules for distributing such info securely, if you want that, are
rather trickier than for the time itself. The basic NTP protocol isn't
quite suitable for it.
--
-Colin

Paul Eggert

Jun 1, 1992, 2:18:16 PM
and...@alice.att.com (Andrew Hume) writes:

> actually, the real question is why are there any leap seconds (in C)?...


>there seem to be only two plausible models for understanding this:
> 1) minutes are of variable length.
> 2) time is discontinuous. (two adjacent seconds may be more than 1 sec apart.)

(1) is the international ``metric'' standard used by every major
industrialized country. Posix requires (2). A better way of saying (2) is:

2') Seconds are of variable length.

For example, Posix requires that the difference between 1992-06-30
23:59:59 and 1992-07-01 00:00:00 (gmtime) be 1 second, even though 2
metric seconds fall between those two times in Coordinated Universal
Time (UTC). Thus a Posix second is either 0, 1 or 2 metric seconds,
depending on whether leap seconds have been subtracted or added;
and there are two ways to measure seconds:

1) the metric way, and
2) the Posix way.

You don't have to worry about leap seconds under (2);
but, unfortunately for Posix applications, most of the world uses (1).

Paul Eggert

Jun 1, 1992, 4:32:17 PM
dia...@jit345.bad.jit.dec.com (Norman Diamond) writes:

>Because there really were two leap seconds in one year (on separate days),
>and someone on the ANSI C committee thought that they came on the same day,...

I've received some correspondence on the subject, and I think Diamond is right.
My theory is that the ANSI C committee consulted the people who maintain
the official time for the U.S. Navy, and that in reply the Navy timekeepers
meant to say
``There have been two leap seconds during the same year'',
but said
``There have been two leap seconds at once'',
and were then misinterpreted as saying
``There have been two leap seconds during the same minute''.
So the ANSI C committee provided for two leap seconds within the same minute,
even though the second second is unnecessary. Only once, in June and December
1972, have there been two leap seconds in the same year; the world would have
to change drastically for us to need two leap seconds in the same minute!

Bradley White

Jun 1, 1992, 8:09:29 PM
In article <#-3or...@twinsun.com> egg...@twinsun.com (Paul Eggert) writes:
> Posix requires that the difference between 1992-06-30
>23:59:59 and 1992-07-01 00:00:00 (gmtime) be 1 second, even though 2
>metric seconds fall between those two times

Indeed.

% cat c.c
#include <sys/types.h>
#include <stdlib.h>
#include <time.h>
#include <tzfile.h>
#include <stdio.h>

int
main()
{
    struct tm t;
    time_t t1, t2;
    char buf[sizeof "Tue, 30 Jun 92 23:59:59 GMT"];
    static char rfc822fmt[] = "%a, %e %b %y %T %Z";

    setenv("TZ", "GMT", 1);
    t.tm_year = 92; t.tm_mon = 5; t.tm_mday = 30;
    t.tm_hour = 23; t.tm_min = 59; t.tm_sec = 59; t.tm_isdst = -1;
    if ((t1 = mktime(&t)) == -1)
        return 1;
    strftime(buf, sizeof buf, rfc822fmt, &t);
    fprintf(stdout, "t1 == %s\n", buf);
    t.tm_year = 92; t.tm_mon = 6; t.tm_mday = 1;
    t.tm_hour = 0; t.tm_min = 0; t.tm_sec = 0; t.tm_isdst = -1;
    if ((t2 = mktime(&t)) == -1)
        return 1;
    strftime(buf, sizeof buf, rfc822fmt, &t);
    fprintf(stdout, "t2 == %s\n", buf);
    fprintf(stdout, "METRIC: difftime(t2, t1) == %g sec\n",
        difftime(t2, t1));
    fprintf(stdout, "POSIX: difftime(t2, t1) == %g sec\n",
        difftime(time2posix(t2), time2posix(t1)));
    return 0;
}
% make c
cc c.c -o c
% ./c
t1 == Tue, 30 Jun 92 23:59:59 GMT
t2 == Wed,  1 Jul 92 00:00:00 GMT
METRIC: difftime(t2, t1) == 2 sec
POSIX: difftime(t2, t1) == 1 sec

Brad

Steve Suttles

Jun 1, 1992, 6:15:20 PM
in article <1992May31....@jrd.dec.com>, dia...@jit345.bad.jit.dec.com (Norman Diamond) says:
> [ ... ]
>
>>how exactly is 24:60 different from 25:00?
>
> Well, when leap seconds occur, one day's 23:59:60 is different from the next
> day's 0:00:00. They are consecutive but non-overlapping seconds, just like
> any other two consecutive seconds.

>
>>there seem to be only two plausible models for understanding this:
>> 1) minutes are of variable length.
>> 2) time is discontinuous.
>
> Huh? There's only one plausible model, which models reality exactly, and
> that is your "1)".
>
Sorry; the real answer is that minutes are of constant length and time
is continuous and contiguous. It is nonintegral. Just as there are not
_precisely_ 365 or 366 days in a year, there are not _precisely_ a
particular number of days in a month (name whatever number you want it
not to be), nor a precise number of these non-months in a year. The reason
there are not a precise number of seconds in a minute is because of error.
This non-integral "day" was divided into 24 hours. Inaccurately, and
necessarily so, since a day takes a varying amount of time (reflected
in varying times for sunrise and sunset). These hours were divided into
60 minutes, and those into 60 seconds, and the second was fixed at some
standard duration...which is almost right. The difference can't be detected
unless you accumulate a whole bunch of them and see if it is the same time
of day. In order to keep the error from accumulating, we've adopted the
same bandaid we did for years--"leap" time. This has also been used to
correct months, and weeks, and days, and hours, and minutes. The real
root of the problem is the Greek perception of beauty (elegance). It was
presumed for some time that years would be an integral multiple of days,
and should be. We've finally gotten past the notion of an integral multiple
of months (that's why they are different length). The real reason is that
all these natural time references have nothing whatsoever to do with each
other, and that offends the order we want to impose on an uncaring universe.

sas
--
Steve Suttles Internet: st...@dbaccess.com Dr. DCL is IN!
CROSS ACCESS Corporation UUCP: {uunet,mips}!troi!steve Yo speako TECO!
2900 Gordon Ave, Suite 100 fax: (408) 735-0328 Talk data to me!
Santa Clara, CA 95051-0718 vox: (408) 735-7545 HA! It's under 4 lines NOW!

Andrew Hume

Jun 2, 1992, 1:05:48 AM

i am now fairly confused about the whole time thing. (thanks to paul
eggert!) is there a paper or standard that describes the relationship between
UTC and atomic clock time (TAI?)? and exactly what is the nature of
the adjustment called leap seconds?

my understanding is that UTC is a sequence of seconds of equal length.
the mapping of any particular second into yr/mo/dy/hr/mi/se is adjusted
periodically and ought to take into account leap seconds. is this right?

(lets ignore posix for just now; although i would be quite surprised
if posix were to contradict some other ISO standard.)

andrew hume

John C Sager

Jun 2, 1992, 8:22:11 AM
In <7...@wookie.dbaccess.com>, st...@dbaccess.com (Steve Suttles) writes:

> Sorry; the real answer is that minutes are of constant length and time
> is continuous and contiguous. It is nonintegral. Just as there are not
> _precisely_ 365 or 366 days in a year, there are not _precisely_ a
> particular number of days in a month (name whatever number you want it
> not to be), nor a precise number of these non-months in a year. The reason
> there are not a precise number of seconds in a minute is because of error.
> This non-integral "day" was divided into 24 hours. Inaccurately, and
> necessarily so, since a day takes a varying amount of time (reflected
> in varying times for sunrise and sunset). These hours were divided into
> 60 minutes, and those into 60 seconds, and the second was fixed at some
> standard duration...which is almost right. The difference can't be detected
> unless you accumulate a whole bunch of them and see if it is the same time
> of day. In order to keep the error from accumulating, we've adopted the
> same bandaid we did for years--"leap" time. This has also been used to
> correct months, and weeks, and days, and hours, and minutes. The real
> root of the problem is the Greek perception of beauty (elegance). It was
> presumed for some time that years would be an integral multiple of days,
> and should be. We've finally gotten past the notion of an integral multiple
> of months (that's why they are different length). The real reason is that
> all these natural time references have nothing whatsoever to do with each
> other, and that offends the order we want to impose on an uncaring universe.

It's not true that minutes are of constant length since we adopted leap seconds
internationally, back in the early 1970s. Every so often we get a 61 second
minute - hence 23:59:60. If the earth's rotation speeds up sufficiently,
we will start to get 59-second minutes when leap seconds are subtracted.
Time is a continuous process, but we choose to mark it by means of
constant-length intervals (seconds). This solves a whole lot of measurement
problems in all sorts of areas. It is true that the periods that we formerly
used to measure time (earth orbital period (year), moon orbital period
(~month), earth rotation period (day)) are non-integral multiples of our
basic unit, the second, non-integral multiples of each other and, worse still,
*are not constant*. The ancients soon understood this, except the last point,
hence an increasingly complex set of relationships between them has developed
down the centuries.
The term 'error' is misleading. It implies that there is a 'correct' way
of coping with the non-integral relationships of multiple periods. We use one.
There are others, such as fractional days at the end of each year instead of
a whole day every four years. Which would you prefer? If the earth's
rotation period *were* constant, then we could define the second such that
there are 86400 of them in a day. But it's not, hence leap-seconds.

In <22...@alice.att.com>, and...@alice.att.com (Andrew Hume) writes:

> (lets ignore posix for just now; although i would be quite surprised
> if posix were to contradict some other ISO standard.)

You show a remarkable faith in the standards-making process. Standards-makers
are as fallible as the rest of us, especially when no-one on the committee
fully understands the small, but important, point under consideration. Hence
this thread :)

John C Sager Mail: B67 G18, BT Labs
Email: j...@zoo.bt.co.uk Martlesham Heath
Tel: +44 473 642623 IPSWICH IP5 7RE
Fax: +44 473 637614 England

John Gruber

Jun 2, 1992, 4:31:41 PM
and...@alice.att.com (Andrew Hume) writes:
>
>
> i am now fairly confused about the whole time thing. (thanks to paul
> eggert!) is there a paper or standard that describes the relationship between
> UTC and atomic clock time (TA?)?? and exactly what is the nature of
> the adjustment called leap seconds?

Well, I can't find it here at the office right now, but I think you will find
some details in the following paper:

"On the Chronometry and Metrology of Computer Timescales and their Application
to the Network Time Protocol"

The author is David Mills, and it appeared in Computer Communication Review
volume 21, number 9 last October. We carry the CCR in our library, I'm sure
many others do too.

David Mills, huh... sounds like a familiar name, but I can't seem to place
it. Now where have I heard it before? :-)

And why couldn't he find a title which would fit on one 80 column line,
whoever he is? :-)
--
John Gruber
University Computer Services UUCP:..!osu-cis!bgsuvax!gruber
Bowling Green State University gru...@andy.bgsu.edu
Bowling Green, OH 43403-0125 gruber at andy

Paul Eggert

Jun 2, 1992, 7:07:40 PM
and...@alice.att.com (Andrew Hume) writes:

>is there a paper or standard that describes the relationship between
>UTC and atomic clock time (TA?)?? and exactly what is the nature of

>the adjustment called leap seconds?

The international standard is CCIR Recommendation 460-4 (1986).
(CCIR = International Radio Consultative Committee;
the group that actually decides on leap seconds is the wonderfully named
International Earth Rotation Service.)
See also the _Proceedings of the IEEE_'s July 1991 special issue
on time and frequency, particularly the lead paper by Quinn.

> my understanding is that UTC is a sequence of seconds of equal length.

No, TAI (International Atomic Time) is a sequence of seconds of equal length.
UTC is TAI adjusted by leap seconds; a UTC second can be 0, 1, or 2 TAI seconds.
See Fig. 2 (p. 899) of Quinn's paper.

The C Standard doesn't say whether timing functions obey UTC or TAI;
this is an issue only with difftime().

However, Posix 1003.1-1990 requires that timing functions obey UTC,
even though computer clocks behave more like TAI than like UTC.
This problem is typically handled in an ad hoc way:
a (human) system administrator or (automated) NTP software
changes the system clock after traversing a leap second boundary.
Since functions like alarm() and (Posix) difftime() use UTC,
this slightly messes up real-time calculations around leap second boundaries.

A more principled (albeit non-Posix) way to attack this problem is
to keep the internal clock on TAI, and to map this to UTC as needed;
see Bradley White's postings in this forum.
However, since leap seconds aren't predictable in advance,
this requires maintaining a leap second configuration table by hand
(particularly since NTP tracks UTC, not TAI as it probably should),
and it also means one can't pre-date timestamps precisely.
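As a sketch (not the actual tz-package code), the hand-maintained table might look like this. The two entries shown are the POSIX encodings of the first two leap seconds (June and December 1972); a real table would list all announced leap seconds, and the helper names here are hypothetical:

```c
#include <stddef.h>

/* POSIX encodings of the last second before each positive leap second
 * was inserted: 1972-06-30 23:59:59 and 1972-12-31 23:59:59.
 * A real, hand-maintained table would carry every announced leap. */
static const long leap_after[] = { 78796799L, 94694399L };
#define NLEAPS (sizeof leap_after / sizeof leap_after[0])

/* How many leap seconds were inserted at or before the given POSIX
 * time; adding this to a POSIX count yields a TAI-like count. */
long leaps_before(long posix_time)
{
    long n = 0;
    size_t i;
    for (i = 0; i < NLEAPS; i++)
        if (posix_time > leap_after[i])
            n++;
    return n;
}

/* A leap-cognizant difference: the POSIX difference plus the leap
 * seconds that fell between the two times. */
long leap_difftime(long t2, long t1)
{
    return (t2 - t1) + (leaps_before(t2) - leaps_before(t1));
}
```

With this table, a difference taken across the June 1972 boundary comes out as 2 seconds rather than 1, matching the METRIC result in Bradley White's example.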

> (lets ignore posix for just now; although i would be quite surprised
>if posix were to contradict some other ISO standard.)

I don't think Posix contradicts the time standards;
it just exposes problems inherent in the fact that
the Earth is slowing down but our clocks aren't.

Clive Feather

Jun 2, 1992, 3:53:25 AM
In article <#-3qn...@twinsun.com> egg...@twinsun.com (Paul Eggert) writes:
>dia...@jit345.bad.jit.dec.com (Norman Diamond) writes:
>> Because there really were two leap seconds in one year (on separate days),
>> and someone on the ANSI C committee thought that they came on the same day,...
>
> So the ANSI C committee provided for two leap seconds within the same minute,
> even though the second second is unnecessary. Only once, in June and December
> 1972, have there been two leap seconds in the same year; the world would have
> to change drastically for us to need two leap seconds in the same minute!

According to Whitaker's Almanack 1987, leap seconds are introduced on
the last second of a month, so as to keep UT and UTC within 0.9 seconds
of each other - a positive leap second (23:59:60) will therefore be used
when the difference grows to 0.5 or 0.6 in one direction, and a negative
leap second when it grows that far in the other direction.

December and June are preferred months for leap seconds, and all
previous leap seconds have been positive (at a negative leap second,
23:59:58 is followed by 00:00:00).

--
Clive D.W. Feather | IXI Limited | If you lie to the compiler,
cl...@x.co.uk | 62-74 Burleigh St. | it will get its revenge.
Phone: +44 223 462 131 | Cambridge CB1 1OJ | - Henry Spencer
Fax: +44 223 462 132 | United Kingdom |

Bradley White

Jun 2, 1992, 11:35:02 PM
In article <#-41h...@twinsun.com> egg...@twinsun.com (Paul Eggert) gives
an excellent summary of the stationary-epoch -v- moving-epoch question.

> TAI (International Atomic Time) is a sequence of seconds of equal length.
>UTC is TAI adjusted by leap seconds; a UTC second can be 0, 1, or 2 TAI seconds.

>The C Standard doesn't say whether timing functions obey UTC or TAI;
>this is an issue only with difftime().

And it is exactly in functions like difftime() where I believe you really
want TAI (as Paul mentions), with typical examples being benchmarking and
real-time systems. This leads Paul to conclude ...

>A .. principled (albeit non-Posix) way to attack this problem is
>to keep the internal clock on TAI, and to map this to UTC as needed;

My previous examples of this utilize Arthur David Olson's "tz" package,
which, it is hoped, will help establish "prior-art" for the TAI approach.

However, Paul points out a couple of problems:

> since leap seconds aren't predictable in advance,
>this requires maintaining a leap second configuration table by hand

"Maintaining a leap second table" -- yes. "By hand" -- no. In much the
same way that NTP currently distributes notification of upcoming leaps, I
envisage a simple system that will distribute such a table automatically.
The same system would also be used to distribute the information found in
Olson's time-zone description files.

>and it also means one can't pre-date timestamps precisely.

You can if you use a sufficiently abstract time representation. In the
original example, Andrew Hume alludes to storing timestamps in a textual
form like ``YYYY/MM/DD HH:MM:SS.XXXX.... UTC'' rather than as some kind
of ``time_t'' encoding. Then, the timestamps remain constant over leap
second announcements, even though their (ephemeral) ``time_t'' encodings
may change. And they are portable to boot.
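As a sketch (the field layout here is illustrative, not any standard, and the helper names are hypothetical), such textual stamps might be produced and consumed like so:

```c
/* Produce and consume a textual UTC stamp of the form
 * "YYYY/MM/DD HH:MM:SS UTC".  Fixed-width fields make the strings
 * sort lexicographically in time order. */
#include <stdio.h>
#include <string.h>
#include <time.h>

void format_stamp(const struct tm *t, char *buf, size_t n)
{
    snprintf(buf, n, "%04d/%02d/%02d %02d:%02d:%02d UTC",
             t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
             t->tm_hour, t->tm_min, t->tm_sec);
}

int parse_stamp(const char *buf, struct tm *t)
{
    int y, mo, d, h, mi, s;
    if (sscanf(buf, "%d/%d/%d %d:%d:%d UTC", &y, &mo, &d, &h, &mi, &s) != 6)
        return -1;
    memset(t, 0, sizeof *t);
    t->tm_year = y - 1900; t->tm_mon = mo - 1; t->tm_mday = d;
    t->tm_hour = h; t->tm_min = mi; t->tm_sec = s;
    return 0;
}
```

A leap second stamp such as ``1992/06/30 23:59:60 UTC'' round-trips intact, even though no unique ``time_t'' encodes it.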

Use TAI for system clocks --- it takes a leapin' and keeps on tickin'!

Brad

Norman Diamond

Jun 3, 1992, 1:20:37 AM
In article <1992Jun03.0...@cs.cmu.edu> bw...@cs.cmu.edu (Bradley White) writes:
>In article <#-41h...@twinsun.com> egg...@twinsun.com (Paul Eggert) gives
>>The C Standard doesn't say whether timing functions obey UTC or TAI;
>>this is an issue only with difftime().

This has already been discussed, so I ignored the error before.
But now that it's getting propagated...

>And it is exactly in functions like difftime() where I believe you really
>want TAI (as Paul mentions), with typical examples being benchmarking and
>real-time systems.

Indeed, UTC is not the best choice for benchmarks. But the C standard DOES
mandate UTC. ANSI 4.2.3.3 mandates UTC for gmtime(). Now for local time
operations, Newfoundlanders and Indians (and sufficiently old Malaysians and
maybe some others) know that their local time zone might differ from UTC by
a non-integral number of hours. However, local time zones still generally
differ from UTC by an integral number of minutes.

Bradley White

Jun 3, 1992, 6:13:20 PM
In article <1992Jun3.1...@lucid.com> j...@lucid.com (Jerry Schwarz) writes:
>In article <1992Jun03.1...@cs.cmu.edu>, bw...@cs.cmu.edu (Bradley White) writes:
>|> There is an easy and compatible method of computing time differences,
>|> difftime(), and as I have shown it can easily cope with leap seconds.
>|> And, being leap-second cognizant, it can even produce the right answer!
>
>difftime has to work on times in the future as well as the past. Since
>leap seconds are not added according to any fixed algorithmn it isn't
>possible to write code that is "leap-second cognizant" of
>the future.

Correct -- until the relevant leap second is announced, the leap-cognizant
version of difftime() will return the same (possibly incorrect) answer as
the leap-ignorant. However, after the announcement the former version will
be correct (as it always is for the past) while the latter will continue to
be incorrect (as it always is for the past).

It seems to me that accepting a degree of uncertainty about our all too
uncertain future is a fair price for representing the past accurately.

The major point of my posts is that handling leap seconds is both natural
and easy. If we are concerned with them at all, we should do it "right".

Brad

Jerry Schwarz

Jun 3, 1992, 3:47:32 PM
In article <1992Jun03.1...@cs.cmu.edu>, bw...@cs.cmu.edu (Bradley White) writes:
|>
|> There is an easy and compatible method of computing time differences,
|> difftime(), and as I have shown it can easily cope with leap seconds.
|> And, being leap-second cognizant, it can even produce the right answer!
|>

difftime has to work on times in the future as well as the past. Since
leap seconds are not added according to any fixed algorithm it isn't
possible to write code that is "leap-second cognizant" of the future.
(Direct objections to rec.psi :-)


-- Jerry Schwarz

Chuck Bacon

Jun 3, 1992, 10:16:42 AM
Please someone, answer THE burning question for us Unix lovers.
I am familiar with ctime(3) on several Unix systems, and the following
statement appears to apply to all of them:

In Unix, the standard int representation of a date/time is not based
on UTC or any similar standard, but rather on the subdivision of
the day into 86400 seconds. Every integral multiple of 86400,
expressed through a call to ctime, results in a string with a time
portion ending in :00:00.

Now, as I understand it, Unix began at the stroke of midnight in London,
beginning Jan. 1, 1970. From that beginning, I would expect the Unix
time representation of the stroke of midnight on any night (in the same
London time zone) would be a multiple of 86400 plus some number of
seconds, namely those leap seconds which had occurred since the
beginning of 1970. But the Unix time conversion routines say different.

What say you, protocols.time.ntp?
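The behaviour in question is easy to demonstrate: under the POSIX rules, which deliberately leave leap seconds out of the count, every multiple of 86400 converts back to 00:00:00 (is_midnight() below is just a hypothetical helper):

```c
/* Check that any whole number of days since the Epoch converts to a
 * 00:00:00 wall-clock time under gmtime()'s leap-ignorant arithmetic. */
#include <time.h>

int is_midnight(long days_since_epoch)
{
    time_t t = (time_t)days_since_epoch * 86400;
    struct tm *g = gmtime(&t);
    return g->tm_hour == 0 && g->tm_min == 0 && g->tm_sec == 0;
}
```

Day 912, for instance, is 1972-07-01, the first midnight after a leap second, yet its POSIX encoding is still exactly 912 * 86400.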

--
Chuck Bacon - cr...@helix.nih.gov ( alas, not my 3b1 )-:
ABHOR SECRECY - PROTECT PRIVACY

Bradley White

Jun 3, 1992, 2:58:14 PM
In article <1992Jun3.0...@jrd.dec.com> dia...@jit.dec.com (Norman Diamond) writes:
>This has already been discussed, so I ignored the error before.
>But now that it's getting propagated...

Might I suggest that the reason this issue resurfaces from time to time
is not that people continue to propagate erroneous interpretations of the
standards, but that the standards continue to erroneously define reality.

I don't have the ANSI standard, so let me comment on IEEE Std 1003.1-1988
(POSIX). The quotes come from "Appendix B: Rationale and Notes", pp. 194
and 195.

"The concept of leap seconds is added for precision; at the
time this standard was published, 14 leap seconds had been
added since January 1, 1970. These 14 seconds are ignored to
provide an easy and compatible method of computing time
differences."

There is an easy and compatible method of computing time differences,
difftime(), and as I have shown it can easily cope with leap seconds.
And, being leap-second cognizant, it can even produce the right answer!

"... not only do most systems not keep track of leap seconds,
but most systems are probably not synchronized to any standard
time reference. Therefore, it is inappropriate to require that
a time represented as seconds since the Epoch precisely represent
the number of seconds between the referenced time and the Epoch."

The NTP people will undoubtedly contest that "most systems are ... not
synchronized," and a growing number do keep track of leap seconds. But,
regardless of how (in)accurate your clock may be, surely it's appropriate
that a time represented as seconds since the Epoch precisely represent
the number of seconds between that time and the Epoch!

"... it is important that all conforming systems interpret
``536457599 seconds since the Epoch'' as 59 seconds, 59 minutes,
23 hours 31 December 1986, regardless of the accuracy of the
system's idea of the current time."

As other example code I gave has shown, 536457599 can easily be generated
for "1986/12/31 23:59:59 +0000" even though the system ticks TAI
because the conversion routines know about leap seconds. However, ...

"Consistent interpretation of ``seconds since the Epoch'' can
be critical to certain types of distributed applications that
rely on such timestamps to synchronize events."

... I continue to maintain that most applications want clocks that do
not leap, and that a more abstract representation of the time than
``seconds since the Epoch'' is appropriate for distributed applications
(particularly if some systems wish to move to 64-bit time_t's with a
different Epoch).
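Incidentally, the rationale's 536457599 example can be checked directly with gmtime(), which performs exactly this leap-ignorant conversion (check_epoch_example() is just a hypothetical helper):

```c
/* Verify that 536457599 seconds since the Epoch, converted by the
 * POSIX (leap-ignorant) rules, is 1986-12-31 23:59:59. */
#include <time.h>

int check_epoch_example(void)
{
    time_t t = 536457599;
    struct tm *g = gmtime(&t);
    return g->tm_year == 86 && g->tm_mon == 11 && g->tm_mday == 31
        && g->tm_hour == 23 && g->tm_min == 59 && g->tm_sec == 59;
}
```

The arithmetic works out because 1970 through 1986 span 6209 days (17 years plus four leap days), and 6209 * 86400 - 1 == 536457599.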

All my own opinion, of course.

Brad

Jukka Korpela

Jun 4, 1992, 4:58:10 AM
In article <1992Jun4....@nsisrv.gsfc.nasa.gov> c...@gryphon.gsfc.nasa.gov (Charles E. Campbell) writes:

> Every compiler seems to have its own idiosyncratic set of pre-defined
> constants, such as "unix" or "vms" or "MCH_AMIGA" or ...

In fact, the predefined constant (macro) names should begin with underscore,
and they usually do. The standard clearly states (in 3.8.8): "All predefined
macro names shall begin with a leading underscore followed by an uppercase
letter or a second underscore."

> Now, a semi-standard seems to be that "-D" -> "#define DEBUG" and
> "-Dname[=value]" -> "#define name [value]". How about a semi-standard "-P"
> that will cause the preprocessor to dump its pre-defined symbol table
> to stdout?

It is common practice on UNIX systems, hardly more. Compiler options
are normally not standardized in language standards. Even their syntax
depends on the operating system. We could, of course, define a more
or less formal standard for specifying some compile-time options,
but it would have to be rather abstract.

I think the essential point in your article is the need for detailed
documentation about implementation-specific features. Please notice that
the standard already requires that (in 1.7):

"An implementation shall be accompanied by a document that defines all
implementation-defined characteristics and all extensions."

So if a vendor claims ANSI conformance but does not provide documentation
about e.g. nonstandard predefined macros, you have a case against them.

As a quality issue, I think we should require that implementations be
accompanied, among other things, with short documents that *only*
define the implementation-defined characteristics and extensions.
(I hate to scan thru a thick manual which has such documentation
interspersed with the description of standard C and its usage,
often even so that the implementation-specific features are not
flagged as such.)

* Jukka "Yucca" Korpela, Manager of User Support
* Helsinki University of Technology (HUT) Computing Centre, Finland
* Internet mail address: Jukka....@hut.fi

Charles E. Campbell

Jun 3, 1992, 9:07:57 PM
Every compiler seems to have its own idiosyncratic set of pre-defined
constants, such as "unix" or "vms" or "MCH_AMIGA" or ... . In fact, they often
have quite a few of them. However, finding out what they are usually requires
considerable spelunking in the manuals, as vendors seldom list all of the
pre-defined constants anywhere and they don't appear to believe in mentioning
these little beasties in their indexes, either.

Now, a semi-standard seems to be that "-D" -> "#define DEBUG" and
"-Dname[=value]" -> "#define name [value]". How about a semi-standard "-P"
that will cause the preprocessor to dump its pre-defined symbol table

to stdout? Or, at least, *some* option for that purpose!

I say "semi-standard" as it is way too late to make an ANSI-C mod. How
do you netters feel about this?
--
O
Dr. Chip Campbell -[o]-
Intelligent Robotics Laboratory / [:] \ The Next Generation
Goddard Space Flight Center || || In Robotics

Alan Bowler

Jun 4, 1992, 12:59:35 PM
In article <1992Jun3.0...@jrd.dec.com> dia...@jit.dec.com (Norman Diamond) writes:
>
>Indeed, UTC is not the best choice for benchmarks. But the C standard DOES
>mandate UTC. ANSI 4.2.3.3 mandates UTC for gmtime(). Now for local time
>operations, Newfoundlanders and Indians (and sufficiently old Malaysians and
>maybe some others) know that their local time zone might differ from UTC by
>a non-integral number of hours.

What is also not so well recognized is that the offset from UTC does not
follow any fixed algorithm. The rule for when the switch between daylight
and standard time happens is set by local governments each year, and, as
Newfoundlanders are aware, the change is not always 1 hour. This makes it
very hard to build such things into programs.

Bradley White

unread,
Jun 4, 1992, 2:12:13 PM6/4/92
to
In article <1992Jun4.1...@thinkage.on.ca> atbo...@thinkage.on.ca (Alan Bowler) writes:
> The rule about when switches between daylight and
>standard time is set by local governments each year, and as Newfoundlanders
>are aware, that change is not always 1 hour. This makes it very hard to
>build things into programs.

Which is exactly the reason why the Olson implementation reads a timezone
description file at run-time. This also allows you to dynamically select
the local time zone.

Stefan Esser

unread,
Jun 4, 1992, 11:46:48 AM6/4/92
to
In article <1992Jun4....@nsisrv.gsfc.nasa.gov>, c...@gryphon.gsfc.nasa.gov (Charles E. Campbell) writes:
|> Every compiler seems to have its own idiosyncratic set of pre-defined
|> constants, such as "unix" or "vms" or "MCH_AMIGA" or ... . In fact, they often
|> have quite a few of them. However, finding out what they are usually requires
|> considerable spelunking in the manuals, as vendors seldom list all of the
|> pre-defined constants anywhere and they don't appear to believe in mentioning
|> these little beasties in their indexes, either.

There was an interesting shell script posted some time ago which, at
least on Unix systems, finds the predefined macros.

It does this by extracting all strings from the preprocessor binary,
feeding the preprocessor a source file that contains #ifdef [String] for
each of them, and thus finding all that are predefined by this particular
preprocessor.

Although it is a shell script (relying on standard Unix commands), it
shouldn't be too hard to port to other OSs, since all that's really needed
is a command to extract sequences of characters from the preprocessor
binary as candidate predefined symbols.

Since I don't know who wrote and posted it, I can't give credit to the author.
It's short enough IMHO to be posted to a discussion group and I use it to
find out what _really_ is defined on each new system or release of a compiler.

*** I didn't write it, I'm just a happy user ***

# This is a shell archive (shar 3.32)
#
# existing files WILL be overwritten
#
# This shar contains:
# length mode name
# ------ ---------- ------------------------------------------
# 674 -rwxr-xr-x cpp-defs
#
if touch 2>&1 | fgrep 'amc' > /dev/null
then TOUCH=touch
else TOUCH=true
fi
echo "x - extracting cpp-defs (Text)"
sed 's/^X//' << 'SHAR_EOF' > cpp-defs &&
X#!/bin/sh
X
X# cpp-defs: prints names of predefined cpp macros
X# tested on Sun3, Sun4, SGI, HP, Stellar, Convex, DECstation, Moto 1147
X
X# stupid, stupid Ultrix! I have a "which" function that I cannot use, because
X# the Ultrix /bin/sh does not understand shell functions! (try /bin/sh5!)
X# also, the Ultrix "test" command does not understand the -x flag!
X
Xpath="`echo $PATH | sed -e 's/:/ /g'`"
Xcc=`for dir in $path ; do
X if [ -r $dir/cc ] ; then
X echo $dir/cc
X exit
X fi
Xdone`
X
Xstrings -a -2 /lib/cpp $cc |
Xsed -e 's/^-D//' |
Xsort -u |
Xsed -n '/^[a-zA-Z_][a-zA-Z0-9_]*$/s//#ifdef &\
X"%&"\
X#endif/p' >/tmp/$$.c
X
Xcc -E /tmp/$$.c |
Xsed -n '/%/s/[%"]//gp'
X
Xrm -f /tmp/$$.c
SHAR_EOF
$TOUCH -am 0723160291 cpp-defs &&
chmod 0755 cpp-defs ||
echo "restore of cpp-defs failed"
set `wc -c cpp-defs`;Wc_c=$1
if test "$Wc_c" != "674"; then
echo original size 674, current size $Wc_c
fi
exit 0

--
Stefan Esser, Institute of Nuclear Physics, University of Cologne, Germany
s...@IKP.Uni-Koeln.DE [134.95.192.50]

Tim Shepard

unread,
Jun 4, 1992, 4:26:55 PM6/4/92
to

I too have been frustrated by the fact that the current popular
technology for synchronizing the time on networked computers (NTP)
does not provide a straightforward way of keeping TAI on a system
instead of UTC. To me, TAI seems to be the more natural time scale
to use inside a system for timestamps and scheduling. Conversion to
UTC could be handled much like time-zone and DST conversion is handled
now. If necessary, an NTP-like system could distribute the current
UTC-TAI offset with the time.

The current NTP system seems to be rooted in the standard broadcasts
provided by the NBS (or now the NIST) on radio stations WWV, WWVH, and
WWVB which provide UTC encoded in the pulses, but not an easy way of
getting at TAI (without some out of band information and/or some
memory). The broadcasts do carry something like the leap warning bit
in NTP.

The system time of the newer GPS system is a fixed offset from TAI and
thus does provide a method of obtaining TAI. So in principle, an
NTP-like system could be built which distributes TAI (probably with
the current UTC offset and a leap warning) from stratum-1 servers
synced to GPS receivers. But, such a system would not be able to
easily use time broadcasts that only encode UTC.

The NTP standard seems to be too far along in the process to get a
fundamental change like this into it; changing the 65-bit timestamp
format (including leap-warning) to a 64-bit TAI-based timestamp is
probably out of the question.

I wonder if it might be possible to add the UTC-TAI offset as a
variable to the NTP mode 6 messages so that a client of an NTP server
could query the server for this offset. This might allow an NTP
implementation to maintain a TAI local clock. The details of when,
and how often a client should poll the server for this variable would
need to be thought out carefully, but it might be an easy way of
providing for TAI-based local clocks.

-Tim Shepard
sh...@lcs.mit.edu

Bradley White

unread,
Jun 4, 1992, 8:00:47 PM6/4/92
to
Personal mail I have received shows that there is some confusion about
what I am trying to say. I will attempt to state my point more clearly
during this response to Tim Shepard's recent post.

> I too have been frustrated by the fact that the current popular
> technology for synchronizing the time on networked computers (NTP)
> does not provide a straightforward way of keeping TAI on a system
> instead of UTC.

I am not suggesting that NTP, nor its timestamp encoding, be changed
in any way. NTP is sufficient to synchronize clocks no matter whether
the local clock ticks TAI or UTC.

> I wonder if it might be possible to add the UTC-TAI offset as a
> variable to the NTP mode 6 messages so that a client of an NTP server
> could query the server for this offset. This might allow an NTP
> implementation to maintain a TAI local clock.

This isn't necessary. I have run a TAI local clock, and successfully
synchronized via NTP -- being leap-second cognizant allows for easy
creation of POSIX and NTP timestamps from a TAI source. Indeed, the
TAI system sails smoothly through leaps and does not need any of that
hairy defibrillation logic.

What I am saying is:

* difftime() should be allowed, if not required, to return a
time difference in TAI seconds.

* To that end, difftime() should not take time_t's as arguments
(but rather, say, struct tm's), OR the time_t encoding should
be left unspecified, OR both.

My secondary point was that to make this feasible we would probably
need a system for distributing leap second announcements, and that the
same system could be used to distribute timezone descriptions and help
support some form of automatic NTP configuration.

Finally, regarding timestamp formats... The POSIX and NTP formats are
mostly fine (the important thing being that the recipient agree with
the sender on the encoding). However, a simpler format such as

YYYY/MM/DD HH:MM:SS.XXXXXXXXXXXX +0000

does not require a confusing definition, does not cease to work in 2099,
and can be constructed via standard routines without knowing the encoding
of time_t's (and is therefore independent of whether the local clock ticks
TAI or UTC). Sure, it is a few bytes longer, but some of the redundancy
allows for distinguishing between 23:59:60 and 00:00:00 over leaps.

Brad

Norman Diamond

unread,
Jun 5, 1992, 1:10:07 AM6/5/92
to
In article <1992Jun05.0...@cs.cmu.edu> bw...@cs.cmu.edu (Bradley White) writes:
>This isn't necessary. I have run a TAI local clock, and successfully
>synchronized via NTP -- being leap-second cognizant allows for easy
>creation of POSIX and NTP timestamps from a TAI source.
>TAI system sails smoothly through leaps and does not need any of that
>hairy defibrillation logic.

I think I'm beginning to understand that TAI has seconds of varying duration.
I read the definitions in a posting just a few days ago, but somehow didn't
get this impression from that posting. Have I gotten it correctly now?
But I'm still confused. If TAI ignores leaps because it remains fibrillated,
but a CPU timer oscillates so many times per UTC second (not so many times
per fibrillating second), then how do you compute TAI time? Or after 43,200
leap seconds, will TAI say midnight when the sun is high in the sky?

> * difftime() should be allowed, if not required, to return a
> time difference in TAI seconds.
> * To that end, difftime() should not take time_t's as arguments
> (but rather, say, struct tm's), OR the time_t encoding should
> be left unspecified, OR both.

And now I'm more confused. If difftime() returns a difference in fibrillating
seconds, then how is that good for benchmarks?
And can't time_t still be specified as the number of seconds since some epoch,
and have difftime() just do a subtraction?
Or is the intention for time_t to be the number of UTC seconds since some
epoch and then have difftime() adjust the interval to count fibrillating
seconds? This would require a database for every fibrillating second that
has elapsed, and the fractional count of a fibrillating second would be
ambiguous (varying according to whether you count a fraction that occurred
before some integral quantity, or afterwards, or fractions on both sides).

Previously I posted an opinion that UTC is not the best choice for benchmarks.
That opinion was a bit jumbled too. If a benchmark records the beginning time
and the ending time, and if a human subtracts the times and forgets to adjust
for leap seconds, then the computation will be inaccurate. But if difftime()
is used and correctly counts all seconds, then it will yield an accurate
benchmark.

Christian Huitema

unread,
Jun 5, 1992, 4:00:19 AM6/5/92
to
In article <JKORPELA.9...@vipunen.hut.fi>, jkor...@vipunen.hut.fi (Jukka Korpela) writes:
> In article <1992Jun4....@nsisrv.gsfc.nasa.gov> c...@gryphon.gsfc.nasa.gov (Charles E. Campbell) writes:
>
> Every compiler seems to have its own idiosyncratic set of pre-defined
> constants, such as "unix" or "vms" or "MCH_AMIGA" or ...
>
> In fact, the predefined constant (macro) names should begin with underscore,
> and they usually do. The standard clearly states (in 3.8.8): "All predefined
> macro names shall begin with a leading underscore followed by an uppercase
> letter or a second underscore."

Are you really sure they usually do? This is a set of constants found in
various compilers:

mc68030, mc68020, mc68010, sparc, gould, _IBMR2, vax, ns16000,
ns32000, nsc32000, i386, mips, MIPSEL, MIPSEB, sun, _AIX, ultrix,
bsd4_2, __STDC__

And that is just to quote a few... Usually, these constants are defined in an
obscure part of the C manual. So, yes, it should be very interesting to have a
"listing" option.

Christian Huitema

Mark Brader

unread,
Jun 5, 1992, 11:49:55 AM6/5/92
to
> It's not true that minutes are of constant length since we adopted leap
> seconds internationally, back in the early 1970s. Every so often we get
> a 61 second minute - hence 23:59:60.

Whether this implies that minutes (or seconds, as someone else said) are
not "of constant length" is really a matter of English usage. Given the
various ways that time-terms are used in English, we could say with a straight
face that, when a leap second occurs, there is a minute that is more than
a minute long. Of course, the two uses of "minute" here are in different
senses; one implies constant length and one doesn't.

Followups are directed to comp.std.english. :-) [Actually, to email.]
--
Mark Brader "'He added a 3-point lead' is pronounced
SoftQuad Inc., Toronto differently in Snooker than in Typography..."
utzoo!sq!msb, m...@sq.com -- Lee R. Quin

This article is in the public domain.

Tim Shepard

unread,
Jun 5, 1992, 2:41:00 PM6/5/92
to

In article <1992Jun5.0...@jrd.dec.com> dia...@jit345.bad.jit.dec.com (Norman Diamond) writes:

>I think I'm beginning to understand that TAI has seconds of varying duration.
>I read the definitions in a posting just a few days ago, but somehow didn't
>get this impression from that posting. Have I gotten it correctly now?
>But I'm still confused. If TAI ignores leaps because it remains fibrillated,
>but a CPU timer oscillates so many times per UTC second (not so many times
>per fibrillating second), then how do you compute TAI time? Or after 43,200
>leap seconds, will TAI say midnight when the sun is high in the sky?

TAI seconds are always the same length. I recall that this length of
time is called the SI second and is defined as a fixed number of
cycles of an atomic oscillator.

After 43,200 leap seconds in the same direction, TAI will indeed say
midnight when the sun is high in the sky over England.

-Tim Shepard
sh...@lcs.mit.edu

Bradley White

unread,
Jun 5, 1992, 4:25:07 PM6/5/92
to
In article <SHEP.92J...@ginger.lcs.mit.edu> sh...@ginger.lcs.mit.edu (Tim Shepard) writes:
> After 43,200 leap seconds in the same direction, TAI will indeed say
> midnight when the sun is high in the sky over England.

Can you explain what leads you to that conclusion?

What we are calling "TAI" is a differential ... the number of SI seconds
between event X and event Y. As such, it has nothing to do with the time
of day.

However, if you know the time-of-day of event X, the differential between
X and Y, and how many leap seconds occurred between X and Y, then you can
deduce the time-of-day of event Y.

You seem to be assuming a particular encoding for time_t's ... the very
thing I am trying to debunk. If I gave you a list of the only things you
could do with time_t's, namely ...

time_t time(time_t *);
struct tm *localtime(time_t *);
struct tm *gmtime(time_t *);
time_t mktime(struct tm *);
double difftime(time_t, time_t);

..., could you write a program to demonstrate your statement?

Let me summarize again ... The expression given for ``seconds since the
Epoch'' on page 34 of IEEE Std 1003.1-1988 is a reasonable way to encode
times and to share them with your friends (particularly if you don't care
about leap seconds), but please don't force me to use that encoding for
time_t's.

If you don't assume a time_t encoding, and only use the above routines,
I promise that your code will work on my system. Furthermore, I promise
that difftime() will give accurate results, and that there won't be any
weird discontinuities over leap seconds. And, I'll even throw in a
routine for generating a POSIX-style timestamp, ...

time_t time2posix(time_t);

Is it a deal?

Tim Shepard

unread,
Jun 6, 1992, 4:49:36 PM6/6/92
to

In article <1992Jun05.2...@cs.cmu.edu> bw...@cs.cmu.edu (Bradley White) writes:

>In article <SHEP.92J...@ginger.lcs.mit.edu> sh...@ginger.lcs.mit.edu (Tim Shepard) writes:
>> After 43,200 leap seconds in the same direction, TAI will indeed say
>> midnight when the sun is high in the sky over England.

>Can you explain what leads you to that conclusion?

>What we are calling "TAI" is a differential ... the number of SI seconds
>between event X and event Y. As such, it has nothing to do with the time
>of day.

In my experience, the conventional notation for each of the timescales
(TAI, UTC, UT0, UT1, UT2, etc.) is a date, hour, minute, and second
plus any fractional second. TAI is a full blown timescale just like
all the others (much more than just a differential).

Times expressed in TAI and UTC differ by a few seconds today, but many
many years from now, times expressed in TAI and UTC may differ by as
much as 12 hours; hence "midnight" (zero hours) TAI would correspond
to "noon" (twelve hours) UTC.

I am not referring to any C or Unix standards when I say this. I am
recalling my experiences with astronomers about time keeping. Maybe
our jargon is somewhat incompatible. Maybe a term is needed for "an
arbitrary timescale where the time difference between two timestamps
is obtained by straightforward integer subtraction."

In any case, I like your idea of making time_t an opaque type in the
standards.

I would hope that I could type these commands (at a /bin/sh type shell):

$ export TZ
$ TZ=TAI date
$ TZ=UTC date

and observe the current TAI-UTC difference (if I type fast enough).

-Tim Shepard
sh...@lcs.mit.edu

Bradley White

unread,
Jun 6, 1992, 5:00:45 PM6/6/92
to
In article <SHEP.92J...@ginger.lcs.mit.edu> sh...@ginger.lcs.mit.edu (Tim Shepard) writes:
>In article <1992Jun05.2...@cs.cmu.edu> bw...@cs.cmu.edu (Bradley White) writes:
>>What we are calling "TAI" is a differential ... the number of SI seconds
>>between event X and event Y. As such, it has nothing to do with the time
>>of day.
>
>In my experience, the conventional notation for each of the timescales
>(TAI, UTC, UT0, UT1, UT2, etc.) is a date, hour, minute, and second
>plus any fractional second. TAI is a full blown timescale just like
>all the others (much more than just a differential).
>
>Times expressed in TAI and UTC differ by a few seconds today, but many
>many years from now, times expressed in TAI and UTC may differ by as
>much as 12 hours; hence "midnight" (zero hours) TAI would correspond
>to "noon" (twelve hours) UTC.

OK, thanks Tim ... I'll refrain from using terms like TAI (although, from
the above, it seems that "TAI time-of-day" is a relatively useless concept).

>In any case, I like your idea of making time_t's opaque type in the
>standards.

Yes, that is the major point of my posts ... removing the POSIX definition
of time_t allows those who care to correctly handle leap seconds, which
includes giving correct results for difftime() over leaps. For those who
don't care, it shouldn't make any difference at all.

It is my (naive) understanding that the ANSI standard currently leaves
time_t as an opaque type. (However, IEEE Std 1003.1-1988 also says that it
is ANSI-approved. What's the deal there? I guess it could just be that
ANSI ==> POSIX, even though POSIX =/=> ANSI.)

Brad

Larry Jones

unread,
Jun 7, 1992, 4:15:27 PM6/7/92
to
In article <1992Jun06.2...@cs.cmu.edu>, bw...@cs.cmu.edu (Bradley White) writes:
> It is my (naive) understanding that the ANSI standard currently leaves
> time_t as an opaque type. (However, IEEE Std 1003.1-1988 also says that it
> is ANSI-approved. What's the deal there? I guess it could just be that
> ANSI ==> POSIX, even though POSIX =/=> ANSI.)

time_t is indeed an opaque type in the ANSI/ISO C Standard[s]. Standard
C is completely independent of POSIX or any other operating system
standard. POSIX, as I understand it, requires a Standard C compiler,
but then goes on to impose additional requirements over and above the
C Standard, some of which, interestingly enough, render the C compiler
non-conforming!
----
Larry Jones, SDRC, 2000 Eastman Dr., Milford, OH 45150-2789 513-576-2070
Domain: scj...@sdrc.com Path: uunet!sdrc!scjones IBM: USSDR7DR at IBMMAIL
I keep forgetting that rules are only for little nice people. -- Calvin
