
Time-related extract of the POSIX standard to come


Joe Gwinn

Jul 6, 2001, 10:34:03 PM
The following is a small extract from POSIX P1003.1-200x draft 7, which
currently has 96% approval in the ballot group. This extract contains the
basic definitions of time in POSIX.

Lines 1825-1828 Section 3.149 Epoch -- The time zero hours, zero minutes,
zero seconds, on January 1, 1970 Coordinated Universal Time (UTC). Note:
See also Seconds Since the Epoch defined in Section 4.14 (on page 102).

Rationale: Lines 575-579 Epoch -- Historically, the origin of UNIX system
time was referred to as "00:00:00 GMT, January 1, 1970". Greenwich Mean
Time is actually not a term acknowledged by the international standards
community; therefore, this term, Epoch, is used to abbreviate the
reference to the actual standard, Coordinated Universal Time.


Lines 3189-3211 Section 4.14 Seconds Since the Epoch -- A value that
approximates the number of seconds that have elapsed since the Epoch. A
Coordinated Universal Time name (specified in terms of seconds (tm_sec),
minutes (tm_min), hours (tm_hour), days since January 1 of the year
(tm_yday), and calendar year minus 1900 (tm_year)) is related to a time
represented as seconds since the Epoch, according to the expression
below. If the year is <1970 or the value is negative, the relationship is
undefined. If the year is >=1970 and the value is non-negative, the value is
related to a Coordinated Universal Time name according to the C-language
expression, where tm_sec, tm_min, tm_hour, tm_yday, and tm_year are all
integer types:

tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
(tm_year-70)*31536000 + ((tm_year-69)/4)*86400 -
((tm_year-1)/100)*86400 + ((tm_year+299)/400)*86400

The relationship between the actual time of day and the current value for
seconds since the Epoch is unspecified. How any changes to the value of
seconds since the Epoch are made to align to a desired relationship with
the current actual time is implementation-defined. As represented
in seconds since the Epoch, each and every day shall be accounted for by
exactly 86 400 seconds.

Note: The last three terms of the expression add in a day for each
year that follows a leap year starting with the first leap year since the
Epoch. The first term adds a day every 4 years starting in 1973, the
second subtracts a day back out every 100 years starting in 2001, and the
third adds a day back in every 400 years starting in 2001. The divisions
in the formula are integer divisions; that is, the remainder is discarded
leaving only the integer quotient.


Rationale: Lines 1520-1574 A.4.14 Seconds Since the Epoch -- Coordinated
Universal Time (UTC) includes leap seconds. However, in POSIX time
(seconds since the Epoch), leap seconds are ignored (not applied) to
provide an easy and compatible method of computing time differences.
Broken-down POSIX time is therefore not necessarily UTC, despite its
appearance. As of September 2000, 24 leap seconds had been added to UTC
since the Epoch, 1 January, 1970. Historically, one leap second is added
every 15 months on average, so this offset can be expected to grow
steadily with time. Most systems' notion of ``time'' is that of a
continuously increasing value, so this value should increase even during
leap seconds. However, not only do most systems not keep track of leap
seconds, but most systems are probably not synchronized to any standard
time reference. Therefore, it is inappropriate to require that a time
represented as seconds since the Epoch precisely represent the number of
seconds between the referenced time and the Epoch. It is sufficient to
require that applications be allowed to treat this time as if it
represented the number of seconds between the referenced time and the
Epoch. It is the responsibility of the vendor of the system, and the
administrator of the system, to ensure that this value represents the
number of seconds between the referenced time and the Epoch as closely as
necessary for the application being run on that system. It is important
that the interpretation of time names and seconds since the Epoch values
be consistent across conforming systems; that is, it is important that all
conforming systems interpret "536 457 599 seconds since the Epoch" as 59
seconds, 59 minutes, 23 hours 31 December 1986, regardless of the accuracy
of the system's idea of the current time. The expression is given to
ensure a consistent interpretation, not to attempt to specify the
calendar. The relationship between tm_yday and the day of week, day of
month, and month is in accordance with the Gregorian calendar, and so is
not specified in POSIX.1. Consistent interpretation of seconds since the
Epoch can be critical to certain types of distributed applications that
rely on such timestamps to synchronize events. The accrual of leap seconds
in a time standard is not predictable. The number of leap seconds since
the Epoch will likely increase. POSIX.1 is more concerned about the
synchronization of time between applications of astronomically short
duration.

Note that tm_yday is zero-based, not one-based, so the day number in the
example above is 364. Note also that the division is an integer division
(discarding remainder) as in the C language. Note also that the meaning of
gmtime( ), localtime( ), and mktime( ) is specified in terms of this
expression. However, the ISO C standard computes tm_yday from tm_mday,
tm_mon, and tm_year in mktime( ). Because it is stated as a
(bidirectional) relationship, not a function, and because the conversion
between month-day-year and day-of-year dates is presumed well known and is
also a relationship, this is not a problem. Implementations that implement
time_t as a signed 32-bit integer will overflow in 2038. The data size for
time_t is as per the ISO C standard definition, which is
implementation-defined.

See also Epoch (on page 3308).

The topic of whether seconds since the Epoch should account for leap
seconds has been debated on a number of occasions, and each time consensus
was reached (with acknowledged dissent each time) that the majority of
users are best served by treating all days identically. (That is, the
majority of applications were judged to assume a single length - as
measured in seconds since the Epoch - for all days. Thus, leap seconds are
not applied to seconds since the Epoch.) Those applications which do care
about leap seconds can determine how to handle them in whatever way those
applications feel is best. This was particularly emphasized because there
was disagreement about what the best way of handling leap seconds might
be. It is a practical impossibility to mandate that a conforming
implementation must have a fixed relationship to any particular official
clock (consider isolated systems, or systems performing "reruns" by
setting the clock to some arbitrary time). Note that as a practical
consequence of this, the length of a second as measured by some external
standard is not specified. This unspecified second is nominally equal to
an International System (SI) second in duration. Applications must be
matched to a system that provides the particular handling of external time
in the way required by the application.


Above are the actual words of the standard, complete and unedited. The
"Rationale" is a formal part of the standard, and is not my commentary,
which instead follows.

First, note the careful wording. The POSIX timescale origin is defined to
coincide with a particular instant, 00:00:00 UTC 1 January 1970 AD, but
POSIX thereafter goes its own way.

Second, note that leap seconds are explicitly not applied to Seconds Since
the Epoch, being neither inserted nor removed.

Third, note that the relationship between broken-down time and Seconds
Since the Epoch has no provision for leap seconds whatsoever.

Fourth, note that the second is nominally equal to the SI Second. The
"nominally" is because we cannot mandate that computers have atomic
clocks.

Fifth, note that "Broken-down POSIX time is therefore not necessarily UTC,
despite its appearance." The difference is that UTC by definition
requires leap seconds, which are not provided for. Leap seconds were in
fact invented and defined in the ITU standard defining UTC.


The first four points add up to the definition of a clock that has the
general semantics of TAI, but differs from TAI by an unspecified constant,
to the accuracy of the clock hardware in the computer. I usually
summarize this by saying that the POSIX clock Seconds Since the Epoch
"parallels" TAI.


Joe Gwinn

Philip Homburg

Jul 7, 2001, 5:21:37 AM
In article <joegwinn-060...@192.168.1.100>,

Joe Gwinn <joeg...@mediaone.net> wrote:
>First, note the careful wording. The POSIX timescale origin is defined to
>coincide with a particular instant, 00:00:00 UTC 1 January 1970 AD, but
>POSIX thereafter goes its own way.
>
>Second, note that leap seconds are explicitly not applied to Seconds Since
>the Epoch, being neither inserted nor removed.
>
>Third, note that the relationship between broken-down time and Seconds
>Since the Epoch has no provision for leap seconds whatsoever.
>
>Fourth, note that the second is nominally equal to the SI Second. The
>"nominally" is because we cannot mandate that computers have atomic
>clocks.
>
>Fifth, note that "Broken-down POSIX time is therefore not necessarily UTC,
>despite its appearance." The difference is that UTC by definition
>requires leap seconds, which are not provided for. Leap seconds were in
>fact invented and defined in the ITU standard defining UTC.
>
>The first four points add up to the definition of a clock that has the
>general semantics of TAI, but differs from TAI by an unspecified constant,
>to the accuracy of the clock hardware in the computer. I usually
>summarize this by saying that the POSIX clock Seconds Since the Epoch
>"parallels" TAI.

I don't see how that follows. UTS seems like a valid implementation of POSIX
time, but is quite unlike TAI. UT1 is another possibility.

New interfaces that mandate UTC or TAI are the way to go (not vague
specifications about the length of a second, or interfaces that leave the
application with no way to determine the accuracy of the clock).


Philip Homburg

Markus Kuhn

Jul 7, 2001, 6:08:01 AM

OK, *that* actually sounds (in contrast to your own words earlier) very
sensible and has eliminated the worst fears that your earlier waffle about
time_t being TAI had generated in me. POSIX.1:200x still forbids, and does
not require, that time_t be an encoding of TAI, which is a good thing!

POSIX.1:200x defines time_t loosely enough to make a synchronized system
that makes time_t an encoding of UTS strictly conforming to the
standards. That's also good. In essence, nothing of relevance has changed
in the standard here.

>The first four points add up to the definition of a clock that has the
>general semantics of TAI, but differs from TAI by an unspecified constant,
>to the accuracy of the clock hardware in the computer. I usually
>summarize this by saying that the POSIX clock Seconds Since the Epoch
>"parallels" TAI.

Trust me, it would be a good idea if you simply avoided using the term
TAI. It has nothing to do with what the old or new POSIX specifies, and you
just confuse readers with your private, very special and unusual terminology
for a time that "parallels TAI by an unspecified constant". It's mostly your
terminology that creates the flame wars that we had on PASC. TAI is
semantically a time that does not relate to UTC or civilian time unless you
have a full leap-second history.

That semantic is *not* what POSIX wants time_t to be. So *please* don't
mention TAI any more. Most time-aware POSIX users do not care what the
display of a TAI clock in the basement of a lab in Paris not related
to their local civil time says. For them, TAI seems to drift away from
their local time and UTC, and with their cheap clock they can't
distinguish which of TAI or UTC is more stable. This is not just
an implementation detail, this is essential for the practical relevance of
this abstract model of yours.

Markus

--
Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
Email: mkuhn at acm.org, WWW: <http://www.cl.cam.ac.uk/~mgk25/>

Joe Gwinn

Jul 7, 2001, 10:09:16 AM
In article <93hvp3sgd0hqg...@stereo.pch.home.cs.vu.nl>,
phi...@pch.home.cs.vu.nl (Philip Homburg) wrote:

> In article <joegwinn-060...@192.168.1.100>,
> Joe Gwinn <joeg...@mediaone.net> wrote:
> >First, note the careful wording. The POSIX timescale origin is defined to
> >coincide with a particular instant, 00:00:00 UTC 1 January 1970 AD, but
> >POSIX thereafter goes its own way.
> >
> >Second, note that leap seconds are explicitly not applied to Seconds Since
> >the Epoch, being neither inserted nor removed.
> >
> >Third, note that the relationship between broken-down time and Seconds
> >Since the Epoch has no provision for leap seconds whatsoever.
> >
> >Fourth, note that the second is nominally equal to the SI Second. The
> >"nominally" is because we cannot mandate that computers have atomic
> >clocks.
> >
> >Fifth, note that "Broken-down POSIX time is therefore not necessarily UTC,
> >despite its appearance." The difference is that UTC by definition
> >requires leap seconds, which are not provided for. Leap seconds were in
> >fact invented and defined in the ITU standard defining UTC.
> >
> >The first four points add up to the definition of a clock that has the
> >general semantics of TAI, but differs from TAI by an unspecified constant,
> >to the accuracy of the clock hardware in the computer. I usually
> >summarize this by saying that the POSIX clock Seconds Since the Epoch
> >"parallels" TAI.
>
> I don't see how that follows. UTS seems like a valid implementation of POSIX
> time, but is quite unlike TAI. UT1 is another possibiliy.

The offset between TAI and Seconds Since the Epoch is left unspecified in
POSIX simply because POSIX cannot mandate that people set their clocks
correctly.

My point is that Seconds Since the Epoch stays at a more or less fixed (if
often unknown) offset from TAI, while UTS and UTC will drift farther and
farther from TAI as leap seconds accumulate. "As of September 2000, 24
leap seconds had been added to UTC since the Epoch, 1 January, 1970.
Historically, one leap second is added every 15 months on average, so this
offset [between POSIX and UTC] can be expected to grow steadily with
time."


> New interfaces that mandate UTC or TAI are the way to go (not vague
> specifications about the length of a second, or interfaces that leave the
> application with no way to determine the accuracy of the clock).

Yes, of course. But agreement on anything more definite was not
achieved. At this point, some actual field implementations of proposed
APIs are needed, to allow consensus to be arrived at, and to have the
actual-use experience that POSIX requires before standardizing something.

Joe Gwinn

Joe Gwinn

Jul 7, 2001, 10:35:56 AM
In article <9i6n21$cep$3...@pegasus.csx.cam.ac.uk>, mg...@cl.cam.ac.uk
(Markus Kuhn) wrote:

As I have said repeatedly, Seconds Since the Epoch (which is kept in
variables of type time_t) is not TAI, but it parallels TAI in that the
offset is more or less constant.


> POSIX.1:200x defines time_t loosely enough to make a synchronized system
> that makes time_t an encoding of UTS strickly conforming to the
> standards. That's also good. In essence, nothing of relevance has changed
> in the standard here.

While many people will do such things, technically it is not conforming.
Strictly speaking, there is a 24-second and growing difference:

"Broken-down POSIX time is therefore not necessarily UTC, despite its
appearance. As of September 2000, 24 leap seconds had been added to UTC
since the Epoch, 1 January, 1970. Historically, one leap second is added
every 15 months on average, so this offset can be expected to grow
steadily with time."

But the POSIX Police will not be knocking on your door in the night.


> >The first four points add up to the definition of a clock that has the
> >general semantics of TAI, but differs from TAI by an unspecified constant,
> >to the accuracy of the clock hardware in the computer. I usually
> >summarize this by saying that the POSIX clock Seconds Since the Epoch
> >"parallels" TAI.
>
> Trust me, it would be a good idea if you simply avoided to use the term
> TAI. It has nothing to do with what the old or new POSIX specifies and you
> just confuse readers with your private very special and unusual trminology
> or a time that "parallels TAI by an unspecified constant". It's mostly your
> terminology that creates the flame wars that we had on PASC. TAI is
> semantically a time that does not relate to UTC or civilian time unless you
> have a full leap second history.
>
> That semantic is *not* what POSIX wants time_t to be. So *please* don't
> mention TAI any more. Most time-aware POSIX users do not care what the
> display of a TAI clock in the basement of a lab in Paris not related
> to their local civil time says. For them, TAI seems to drift away from
> their local time and UTC and with their cheap clock, they can't
> distinguish, which of TAI or UTC is more stable. This is not just
> an implementation detail, this is essential for the practical relevance of
> this abstract model of yours.

If it looks like a duck, quacks like a duck, ....

I think my definition of what I mean by "parallels TAI" is quite clear,
and I doubt that many people are confused by it. Nor was it the root
cause of the violent debate in POSIX either. The basic debate was on the
true nature of Seconds Since the Epoch -- is it most like TAI, UT1 (GMT),
or UTC?

UT1 was never a serious contender, although we did flirt with it. The
problem with UT1 is that it requires astronomical observation data not
available to an isolated system, the same problem as with UTC, although
UT1 has no leap seconds.

In the votes, the POSIX balloters did not accept the contention that
Seconds Since the Epoch (~time_t) was a form of UTC, leading to the
standard text that I posted on 6 July 2001. I submit that that
P1003.1-200x/d7 text is quite clear on the point, and also that Seconds
Since the Epoch behaves like TAI, and not like UTC or UT1.

Said another way, the offset between Seconds Since the Epoch and TAI is
constant, while the offset between Seconds Since the Epoch and UTC (and
UT1) grows steadily as leap seconds accumulate.

But you don't need to agree or to use my terminology if it offends you.
Nor do I wish to further debate the issue, as the standard says what it
says; we can agree to disagree on the matter.

Joe Gwinn

Markus Kuhn

Jul 7, 2001, 11:17:43 AM
joeg...@mediaone.net (Joe Gwinn) writes:
>The offset between TAI and Seconds Since the Epoch is left unspecified in
>POSIX simply because POSIX cannot mandate that people set their clocks
>correctly.

It could mandate a best-effort goal for what the clock is supposed to show
if optimal reference time is available. It is obvious that real
implementations will always only be able to provide a best-effort
approximation, with NTP within 10 ms, with sloppy manual clock
administration within 5-10 minutes or worse.

>My point is that Seconds Since the Epoch stays at a more or less fixed (if
>often unknown) offset from TAI, while UTS and UTC will drift farther and
>farther from TAI as leap seconds accumulate. "As of September 2000, 24
>leap seconds had been added to UTC since the Epoch, 1 January, 1970.
>Historically, one leap second is added every 15 months on average, so this
>offset [between POSIX and UTC] can be expected to grow steadily with
>time."

Just to clarify this: What you are saying is that in order to comply
with your reading of POSIX.1:200x, I'd have to modify the kernel clock
synchronization on my PC here such that the (currently ~10 ms precise)
unmodified xclock/date/etc. programs here would actually have to
display 12:00:00 twenty-four seconds *before* I hear the noon beep on
BBC news? I can't believe that you can be serious about such a radical,
incompatible, and highly experimental change of the POSIX API. This
is currently not used anywhere (though there is an experimental
compile-time option in the Olson library to implement this, which
is generally advised not to be used).

There are zillions of existing binaries out there that decode time_t
and interpret it as it is as UTC or a derived local time. You can't
just simply mandate that we subtract 24 seconds from that, just because
that fits into your view of the world.

Joe Gwinn

Jul 7, 2001, 11:56:22 AM
In article <9i796n$pae$1...@pegasus.csx.cam.ac.uk>, mg...@cl.cam.ac.uk
(Markus Kuhn) wrote:

> joeg...@mediaone.net (Joe Gwinn) writes:
> >The offset between TAI and Seconds Since the Epoch is left unspecified in
> >POSIX simply because POSIX cannot mandate that people set their clocks
> >correctly.
>
> It could mandate a best-effort goal for what the clock is supposed to show
> if optimal reference time is available. It is obvious that real
> implementations will always only be able to provide a best-effort
> approximation, with NTP within 10 ms, with sloppy manual clock
> administration within 5-10 minutes or worse.

POSIX is a source-code API standard. Telling the operator what to do is
out of scope. And, impossible to enforce.


> >My point is that Seconds Since the Epoch stays at a more or less fixed (if
> >often unknown) offset from TAI, while UTS and UTC will drift farther and
> >farther from TAI as leap seconds accumulate. "As of September 2000, 24
> >leap seconds had been added to UTC since the Epoch, 1 January, 1970.
> >Historically, one leap second is added every 15 months on average, so this
> >offset [between POSIX and UTC] can be expected to grow steadily with
> >time."
>
> Just to clarify this: What you are saying is that in order to comply
> with your reading of POSIX.1:200x, I'd have to modify the kernel clock
> synchronization on my PC here such that the (currently ~10 ms precise)
> unmodified xclock/date/etc. programs here would actually have to
> display 12:00:00 twenty-four seconds *before* I hear the noon beep on
> BBC news? I can't believe that you can be serious about such a radical,
> incompatible, and highly experimental change of the POSIX API. This
> is currently not used anywhere (though there is an experimental
> compile-time option in the Olson library to implement this, which
> is generally advised not to be used).
>
> There are zillions of existing binaries out there that decode time_t
> and interpret it as it is as UTC or a derived local time. You can't
> just simply mandate that we subtract 24 seconds from that, just because
> that fits into your view of the world.

The standard says what it says.

Joe Gwinn

Aleksandar Milivojevic

Jul 8, 2001, 9:54:46 AM
Hmmmm,

I don't see why it is that important whether POSIX says that time_t and
the broken-down representation are TAI or UTC (plus some constant offset
in both cases).

There are applications that would benefit from TAI (simple examples
are the UNIX sleep command or anything that measures time intervals or
calculates time intervals from two given time stamps), and there are
applications that would benefit from UTC (simple examples are the UNIX
date command or any application that displays a clock). We use both
types of applications on 99.99% of computers every day. The bottom line
is that if we choose that time_t is TAI, we will need to code
workarounds in "date". If we choose that time_t is UTC, we will need to
code workarounds in "sleep". Either way we will need workarounds.

A possible solution, IMHO, could be to have time_t and the broken-down
representation returned by the gmtime() function be TAI plus some constant
offset. In the real world it could be expected that if the system has a
source of exact time, time_t will be TAI. As for applications that
need civilian time, we use the localtime() function anyway. Localtime()
makes adjustments for time zones and DST, so it could also make
adjustments for leap seconds (as accurately as the known history on a
given system allows). Struct tm already has the tm_isdst flag and
tm_gmtoff, and I don't see any reason why struct tm couldn't be expanded
to include an additional member so that (new) applications will know how
many leap seconds are accounted for in the result of localtime().

On installations where exact time is a must, installation can always
be implemented to have access to complete leap seconds history and
short-term future (because leap seconds are always announced some time
before they take effect).

On installations where exact time is not important, no harm is done
even if there is no leap seconds history at all because clock will
always have some random offset from both UTC and TAI (random in a
sense that application running on such a system will never be given
correct time from time_t, gmtime() and localtime()).

Now back to the not-so-original topic: should time stored in CMOS
clock be TAI or UTC? It is of no importance, because we need leap
seconds history either way. And if we don't have it or if it is
outdated, it can be assumed that difference between UTC and TAI on
such a system is of no importance to the user of such a system. If it
was important to the user, user of such a system would have obtained
it and he would make sure that it is up to date (either automatically or
manually).

--
Aleksandar Milivojević <al...@fly.srk.fer.hr>
Opinions expressed herein are my own.
Statements included here may be fiction rather than truth.

D. J. Bernstein

Jul 8, 2001, 2:12:53 PM
Aleksandar Milivojevic <al...@fly.srk.fer.hr> wrote:
> Localtime() makes adjustments for time zones and DST, so it could also
> make adjustments for leap seconds

Already done! The Olson tz library has supported leap seconds for many
years. It's the most common version of localtime() for UNIX.

I've found it rather easy to set up UNIX machines so that time_t counts
SI seconds since the epoch, local times are displayed accurately, and
time intervals are computed accurately.

> Now back to the not-so-original topic: should time stored in CMOS
> clock be TAI or UTC? It is of no importance, because we need leap
> seconds history either way.

``Should time stored in CMOS clock be universal time, or local time? It
is of no importance, because we need time zone information either way.''

---Dan

Markus Kuhn

Jul 8, 2001, 3:19:08 PM
d...@cr.yp.to (D. J. Bernstein) writes:
>> Now back to the not-so-original topic: should time stored in CMOS
>> clock be TAI or UTC? It is of no importance, because we need leap
>> seconds history either way.
>
>``Should time stored in CMOS clock be universal time, or local time? It
>is of no importance, because we need time zone information either way.''

If the CMOS clock is in UTC and not local time, you won't need
the time zone information in the kernel (as WinNT does at the moment!).
Note that in order to access user-space information such as
time zones, you need to know the time first, because of file system
access timestamps. So you do solve a chicken-and-egg problem by keeping the
CMOS clock in UTC.

Aleksandar Milivojevic

Jul 9, 2001, 2:20:14 AM
D. J. Bernstein (d...@cr.yp.to) wrote:
> > Now back to the not-so-original topic: should time stored in CMOS
> > clock be TAI or UTC? It is of no importance, because we need leap
> > seconds history either way.
>
> ``Should time stored in CMOS clock be universal time, or local time? It
> is of no importance, because we need time zone information either way.''

But then you need time zone information written somewhere in the CMOS,
so that you know which time zone was in use when the CMOS clock was
last adjusted.

Users can traverse time zones by simply boarding a plane with
their laptops and going to the other side of the Atlantic. They can also
have a VLAN with servers and clients located in many time zones. They
need an OS that counts and stores time in the CMOS clock and in file time
stamps using something like UTC or TAI, and user-level programs that
can display this information using local time (with entire "time
zones, DST, leap seconds and whatever else we can think of" thing).
