
_Imaginary numbers


Mike
Jan 7, 2001, 1:41:47 PM
I've been reading the C99 spec (the final version, not the draft), and
reading it carefully raised a question. Given the following code:

_Imaginary double func(_Imaginary double x, _Imaginary double y)
{
x *= y;
return x;
}

What value is returned? The best I can figure is that *= should produce a
syntax error if both its operands are _Imaginary. Any thoughts?


James Kuyper
Jan 7, 2001, 2:23:59 PM

It's not a syntax error; there's no syntax that distinguishes between
imaginary and complex values. It's also not a constraint violation. In
general, I would expect it to be covered by 6.5p5: "... if the result is
not ... in the range of representable values for its type ... the
behavior is undefined".

However, there is appendix G. Appendix G is not normative. Even
considered solely as a recommendation, it applies only to
implementations that choose to define __STDC_IEC_559_COMPLEX__. However,
appendix G at least says something more specific about this issue. The
code you've written is equivalent to "x = x*y;" - that re-write makes it
slightly easier to explain.

According to G.5.1p1, "... If both operands have imaginary type, then
the result has real type." That covers x*y. Section G.4.2p2 specifies
that "When a value of real type is converted to imaginary type, the
result is a positive imaginary zero." That covers the "x =" part of the
expression. Therefore, func(x,y) always returns positive imaginary zero,
no matter what it's called with.
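
For illustration, here is the same function annotated with that reading
(a sketch; it assumes an implementation that defines
__STDC_IEC_559_COMPLEX__ and actually supports imaginary types, which
few do):

_Imaginary double func(_Imaginary double x, _Imaginary double y)
{
    x *= y;     /* x*y: imaginary * imaginary yields real type (G.5.1p1);
                   assigning that real result back to the imaginary x
                   converts it, giving positive imaginary zero (G.4.2p2) */
    return x;   /* always +0.0i, whatever the arguments */
}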

Mike
Jan 7, 2001, 3:38:13 PM
That would apply if the conversion is implicit. But I don't see that
conversions from imaginary to real are implicit - they must be cast.

James Kuyper wrote in message <3A58C24F...@wizard.net>...

Christian Bau
Jan 7, 2001, 5:36:56 PM

I think it is worse: it is completely legal.

First, x *= y is exactly the same as x = x * y; that is the general rule
for all the +=, -=, *=, etc. operators (in the general case the lvalue on
the left side is evaluated only once, but here that makes no difference).

Now _Imaginary * _Imaginary gives a result of type double, so you have an
assignment x = <double>. (G.5.1)

This assignment involves converting the double to _Imaginary, and that
yields an imaginary positive 0. (G.4.2)

So the result returned will be +0.0 * I (except for overflow and NaNs),
which is quite useless and might come as a surprise to the programmer. If
you suggest that implicit conversions between real and imaginary numbers
should produce a compiler warning, then I will agree.

Clive D.W. Feather
Jan 15, 2001, 10:22:00 AM
In article <Vs466.240817$U46.7...@news1.sttls1.wa.home.com>, Mike
<geo...@myob.com> writes

>That would apply if the conversion is implicit. But I don't see that
>conversions from imaginary to real are implicit - they must be cast.

No: imaginary types are arithmetic types, and such conversions are
allowed without a cast.
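
A minimal sketch of the point, again assuming an implementation that
supports imaginary types:

double d = 2.0;
_Imaginary double iy = d;   /* no cast needed: the real value converts
                               implicitly, yielding +0.0i (G.4.2p2) */
double e = iy;              /* and the other way round: +0.0 (G.4.2p1) */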

--
Clive D.W. Feather, writing for himself | Home: <cl...@davros.org>
Tel: +44 20 8371 1138 (work) | Web: <http://www.davros.org>
Fax: +44 20 8371 4037 (D-fax) | Work: <cl...@demon.net>
Written on my laptop; please observe the Reply-To address

Clive D.W. Feather
Jan 15, 2001, 10:29:52 AM
In article <3A58EF88...@cbau.freeserve.co.uk>, Christian Bau
<christ...@cbau.freeserve.co.uk> writes

>So the result returned will be +0.0 * I (except for overflow and NaNs),
>which is quite useless and might come as a surprise to the programmer.

The feeling within WG14 was that anyone using imaginary types should
know what they're doing. Given that they are only useful in rather
specialised situations, it was more in the spirit of C to provide the
rope than to prohibit imaginary<->real conversions.

>If you suggest that implicit conversions between real and imaginary
>numbers should produce a compiler warning, then I will agree.

The Standard doesn't have a concept of compiler warnings.

Nick Maclaren
Jan 15, 2001, 1:58:32 PM
In article <L5$EW7FYW...@on-the-train.demon.co.uk>,

Clive D.W. Feather <cl...@davros.org> wrote:
>In article <Vs466.240817$U46.7...@news1.sttls1.wa.home.com>, Mike
><geo...@myob.com> writes
>>That would apply if the conversion is implicit. But I don't see that
>>conversions from imaginary to real are implicit - they must be cast.
>
>No: imaginary types are arithmetic types, and such conversions are
>allowed without a cast.

In <time.h>:

typedef _Imaginary double time_t;

Your sanity may suffer :-)


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

James Kuyper
Jan 15, 2001, 3:25:27 PM
Nick Maclaren wrote:
>
> In article <L5$EW7FYW...@on-the-train.demon.co.uk>,
> Clive D.W. Feather <cl...@davros.org> wrote:
...

> >No: imaginary types are arithmetic types, and such conversions are
> >allowed without a cast.
>
> In <time.h>:
>
> typedef _Imaginary double time_t;
>
> Your sanity may suffer :-)

How about:

typedef _Imaginary double clock_t;

#define CLOCKS_PER_SEC (clock_t)(1000.0*I)
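
Amusingly, the arithmetic the standard mandates still works out under
Annex G; a sketch:

#include <time.h>
/* imaginary / imaginary has real type (G.5.1p1), so the required
   "divide by CLOCKS_PER_SEC to get seconds" yields a plain real: */
double seconds = clock() / CLOCKS_PER_SEC;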

Mike
Jan 16, 2001, 4:30:30 AM
I wonder also what happens if a complex is compared with 0. Suppose the real
part is a NaN and the imaginary part is not?


Clive D.W. Feather wrote in message
<$51J2FGw...@on-the-train.demon.co.uk>...

Clive D.W. Feather
Jan 16, 2001, 4:09:16 AM
In article <3A635CB7...@wizard.net>, James Kuyper
<kuy...@wizard.net> writes

>typedef _Imaginary double clock_t;
>
>#define CLOCKS_PER_SEC (clock_t)(1000.0*I)

Not only is that evil, but I'm not convinced it is illegal.

Clive D.W. Feather
Jan 16, 2001, 4:07:38 AM
In article <93vh8o$gd9$1...@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm...@cus.cam.ac.uk> writes

>>No: imaginary types are arithmetic types, and such conversions are
>>allowed without a cast.
>
>In <time.h>:
>
>typedef _Imaginary double time_t;
>
>Your sanity may suffer :-)

Only in POSIX, which *appears* to assume that time_t holds a count of
seconds. My code doesn't make that assumption; my sanity is fine.

Nick Maclaren
Jan 16, 2001, 2:08:14 PM
In article <CLnma9L8...@on-the-train.demon.co.uk>,

Clive D.W. Feather <cl...@davros.org> wrote:
>In article <3A635CB7...@wizard.net>, James Kuyper
><kuy...@wizard.net> writes
>>typedef _Imaginary double clock_t;
>>
>>#define CLOCKS_PER_SEC (clock_t)(1000.0*I)
>
>Not only is that evil, but I'm not convinced it is illegal.

I think that we agreed that it was legal? And evil, of course.

Nick Maclaren
Jan 16, 2001, 2:12:58 PM
In article <CrumCzLa...@on-the-train.demon.co.uk>,

Clive D.W. Feather <cl...@davros.org> wrote:
>In article <93vh8o$gd9$1...@pegasus.csx.cam.ac.uk>, Nick Maclaren
><nm...@cus.cam.ac.uk> writes
>>>No: imaginary types are arithmetic types, and such conversions are
>>>allowed without a cast.
>>
>>In <time.h>:
>>
>>typedef _Imaginary double time_t;
>>
>>Your sanity may suffer :-)
>
>Only in POSIX, which *appears* to assume that time_t holds a count of
>seconds. My code doesn't make that assumption; my sanity is fine.

Er, try 7.23.1 paragraphs 3 and 4:

[#3] The types declared are size_t (described in 7.17);

     clock_t and time_t

which are arithmetic types capable of representing times;

[#4] The range and precision of times representable in
clock_t and time_t are implementation-defined.

Sorry, but this clearly implies that time_t contains an arithmetic
encoding of times and is not just an opaque type. Unless the C
standard is using non-standard English again :-(

Incidentally - POSIX doesn't appear to assume that - it DOES assume
it. Inconsistently, but that is another matter ....

Keith Thompson
Jan 16, 2001, 3:11:18 PM
nm...@cus.cam.ac.uk (Nick Maclaren) writes:
[...]
> Er, try 7.23.1 paragraphs 3 and 4:
>
> [#3] The types declared are size_t (described in 7.17);
>
>      clock_t and time_t
>
> which are arithmetic types capable of representing times;
>
> [#4] The range and precision of times representable in
> clock_t and time_t are implementation-defined.
>
> Sorry, but this clearly implies that time_t contains an arithmetic
> encoding of times and is not just an opaque type. Unless the C
> standard is using non-standard English again :-(

It doesn't say *how* it represents times. For example, as I write
this, the time (in UTC) is
2001-01-16 20:04:35
An implementation could represent this as the integer value
20010116200435

This representation is relatively well-behaved (it's monotonic), but
there's no reason to assume it couldn't be opaque. As long as the
language-defined functions work correctly, an implementation can
choose any representation it likes. For example, a system might use
the representation of a hardware timer register.

Personally, I'd like to see this tightened up a bit. I've never heard
of an implementation in which time_t isn't monotonic and linear;
requiring these characteristics wouldn't be a heavy burden, and could
make difftime() unnecessary.
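
Whatever the encoding, a strictly portable program can already measure
intervals through difftime(); a sketch:

#include <time.h>

time_t before = time(NULL);
/* ... do some work ... */
time_t after = time(NULL);
double elapsed = difftime(after, before);  /* elapsed seconds, as a double */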

--
Keith Thompson (The_Other_Keith) k...@cts.com <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
MAKE MONEY FAST!! DON'T FEED IT!!

Nick Maclaren
Jan 16, 2001, 4:14:31 PM
In article <yecely3...@king.cts.com>, Keith Thompson <k...@cts.com> wrote:
>nm...@cus.cam.ac.uk (Nick Maclaren) writes:
>[...]
>> Er, try 7.23.1 paragraphs 3 and 4:
>>
>> [#3] The types declared are size_t (described in 7.17);
>>
>>      clock_t and time_t
>>
>> which are arithmetic types capable of representing times;
>>
>> [#4] The range and precision of times representable in
>> clock_t and time_t are implementation-defined.
>>
>> Sorry, but this clearly implies that time_t contains an arithmetic
>> encoding of times and is not just an opaque type. Unless the C
>> standard is using non-standard English again :-(
>
>It doesn't say *how* it represents times. For example, as I write
>this, the time (in UTC) is
> 2001-01-16 20:04:35
>An implementation could represent this as the integer value
> 20010116200435
>
>This representation is relatively well-behaved (it's monotonic), but
>there's no reason to assume it couldn't be opaque. As long as the
>language-defined functions work correctly, an implementation can
>choose any representation it likes. For example, a system might use
>the representation of a hardware timer register.

Oh, yes, no argument there! Both of the representations you describe
are perfectly reasonable. My point was that the conditions of
arithmetic representation with a defined range and precision make
the use of _Imaginary a little perverse, at the very least! Not
necessarily impossible, but definitely perverse ....

Keith Thompson
Jan 16, 2001, 4:29:20 PM
nm...@cus.cam.ac.uk (Nick Maclaren) writes:
[...]
> Oh, yes, no argument there! Both of the representations you describe
> are perfectly reasonable. My point was that the conditions of
> arithmetic representation with a defined range and precision make
> the use of _Imaginary a little perverse, at the very least! Not
> necessarily impossible, but definitely perverse ....

Yes, unfortunately, the standard does not require implementers to be
sane. "There ain't no sanity clause."

James Kuyper
Jan 16, 2001, 7:42:03 PM
"Clive D.W. Feather" wrote:
>
> In article <3A635CB7...@wizard.net>, James Kuyper
> <kuy...@wizard.net> writes
> >typedef _Imaginary double clock_t;
> >
> >#define CLOCKS_PER_SEC (clock_t)(1000.0*I)
>
> Not only is that evil, but I'm not convinced it is illegal.

Any particular reason?

clock_t is required to
1. Be an arithmetic type.
2. Be able to represent times.

CLOCKS_PER_SEC is required to
3. expand to a constant expression
4. with type clock_t
5. that is the number per second of the value returned by the clock()
function.

The value returned by clock() is required to be
6. "the implementation’s best approximation to the processor time used
by the program since the beginning of an implementation-defined era
related only to the program invocation."
7. in a form such that the corresponding number of seconds can be
obtained by dividing by CLOCKS_PER_SEC.

With an obvious definition of clock(), these definitions can easily meet
all those requirements. I don't see any other relevant requirements.
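
For concreteness, a sketch of one such "obvious" definition (the timer
hook is hypothetical; <complex.h> supplies I):

#include <time.h>
#include <complex.h>

clock_t clock(void)
{
    extern double os_milliseconds(void);      /* hypothetical OS timer hook */
    return (clock_t)(os_milliseconds() * I);  /* the tick count, made imaginary */
}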

I see a potential problem with this implementation, but it doesn't
affect the conformance of such an implementation. There's no type that
is guaranteed to be able to store the number of seconds returned by
clock() with full range and full precision. 'long double' isn't
guaranteed to be able to store with full precision, if clock_t happens
to be, for example, uintmax_t. Of course, since CLOCKS_PER_SEC couldn't
be less than 1 for such an implementation, a program would have to run a
VERY long time before that mattered.

On the other hand, uintmax_t isn't guaranteed to be able to store the
maximum range, should clock_t be 'long double'. In this case it's
trivial to make even a very short time overflow uintmax_t, by simply
choosing CLOCKS_PER_SEC to be very small (LDBL_MIN, for instance).

And now the final indignity: we see that clock()/CLOCKS_PER_SEC isn't
even guaranteed to be of type clock_t, and isn't even guaranteed to be
convertible to clock_t without complete loss of information.

Perhaps the wording needs a little tightening? :-)

This is precisely the type of situation that calls for _Typeof():

_Typeof(clock()/CLOCKS_PER_SEC) seconds = clock()/CLOCKS_PER_SEC;

John Hauser
Jan 16, 2001, 11:08:53 PM

James Kuyper:

> typedef _Imaginary double clock_t;
> #define CLOCKS_PER_SEC (clock_t)(1000.0*I)

Clive D.W. Feather:


> Not only is that evil, but I'm not convinced it is illegal.

James Kuyper:
> Any particular reason? [...]


> With an obvious definition of clock(), these definitions can
> easily meet all those requirements. I don't see any other relevant
> requirements.

Clive said he thinks it's legit. (``I'm not convinced it is illegal.'')

> There's no type that
> is guaranteed to be able to store the number of seconds returned by
> clock() with full range and full precision. 'long double' isn't
> guaranteed to be able to store with full precision, if clock_t happens
> to be, for example, uintmax_t. Of course, since CLOCKS_PER_SEC
> couldn't be less than 1 for such an implementation, a program would
> have to run a VERY long time before that mattered.

How about if `CLOCKS_PER_SEC' is greater than 1? If `uintmax_t' is
64 bits, `CLOCKS_PER_SEC' is 2^32, and `long double' is IEEE 32-bit
single-precision, a program could notice the inability of `long double'
to represent the full precision of `clock_t' in a fraction of a second.

> [...]


> This is precisely the type of situation that calls for _Typeof():
> _Typeof(clock()/CLOCKS_PER_SEC) seconds = clock()/CLOCKS_PER_SEC;

As long as you don't mind throwing away the fractional part.

What sort of application are you thinking of that can't survive with
`double'?:

double seconds = (double) clock() / CLOCKS_PER_SEC;

Are you worried that `double' won't have enough precision? Is that
any more likely than that the result from `clock()' isn't sufficiently
accurate for your purpose?

- John Hauser

James Kuyper
Jan 17, 2001, 4:50:32 AM
John Hauser wrote:
>
> James Kuyper:
...

> What sort of application are you thinking of that can't survive with
> `double'?:
>
> double seconds = (double) clock() / CLOCKS_PER_SEC;
>
> Are you worried that `double' won't have enough precision? Is that
> any more likely than that the result from `clock()' isn't sufficiently
> accurate for your purpose?

I'm not worrying about what's likely; I'm pointing out what's permitted.
Yes, if clock_t is a large integer type, and double is a 32-bit type,
and CLOCKS_PER_SEC is very large, correctly reflecting a very
high-precision timer, then you could lose that precision when you do
that explicit cast. However, what's even worse, if clock_t is
_Imaginary, the cast will lose all of the time information.

Nick Maclaren
Jan 17, 2001, 4:48:58 AM
In article <3A651AD5...@cs.berkeley.edu>,
John Hauser <jha...@cs.berkeley.edu> wrote:
>James Kuyper:

>
>> There's no type that
>> is guaranteed to be able to store the number of seconds returned by
>> clock() with full range and full precision. 'long double' isn't
>> guaranteed to be able to store with full precision, if clock_t happens
>> to be, for example, uintmax_t. Of course, since CLOCKS_PER_SEC
>> couldn't be less than 1 for such an implementation, a program would
>> have to run a VERY long time before that mattered.
>
>How about if `CLOCKS_PER_SEC' is greater than 1? If `uintmax_t' is
>64 bits, `CLOCKS_PER_SEC' is 2^32, and `long double' is IEEE 32-bit
>single-precision, a program could notice the inability of `long double'
>to represent the full precision of `clock_t' in a fraction of a second.

As a nitpick, neither double nor long double are allowed to be IEEE
single precision in standard C, as they have a mandated minimum
precision of 9 decimal digits.

This doesn't affect your argument, of course.

Clive D.W. Feather
Jan 17, 2001, 3:52:03 AM
In article <9426fq$3sn$1...@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm...@cus.cam.ac.uk> writes

>Er, try 7.23.1 paragraphs 3 and 4:
>
> [#3] The types declared are size_t (described in 7.17);
>
>      clock_t and time_t
>
> which are arithmetic types capable of representing times;
>
> [#4] The range and precision of times representable in
> clock_t and time_t are implementation-defined.
>
>Sorry, but this clearly implies that time_t contains an arithmetic
>encoding of times and is not just an opaque type.

No, it implies that it is an opaque type that contains an encoding of
times. That is, there is a set of times each of which has a unique
encoding in time_t. The range and precision of that set is
implementation-defined.

>Unless the C
>standard is using non-standard English again :-(

No, just that you're reading into it more than is said.

>Incidentally - POSIX doesn't appear to assume that - it DOES assume
>it.

I didn't want to make the assertion without re-reading the text.

Clive D.W. Feather
Jan 17, 2001, 3:48:54 AM
In article <3A64EA5B...@wizard.net>, James Kuyper
<kuy...@wizard.net> writes

>> Not only is that evil, but I'm not convinced it is illegal.
[...]

>With an obvious definition of clock(), these definitions can easily meet
>all those requirements. I don't see any other relevant requirements.

Exactly. Which is why I'm not convinced that it's illegal.



>I see a potential problem with this implementation,

[...]

That problem applies to other, slightly saner, implementations as well.

>And now the final indignity, we see that clock()/CLOCKS_PER_SEC isn't
>even guaranteed to be of type clock_t,

This would also be the case if clock_t was short.

>and isn't even guaranteed to be
>convertible to clock_t without complete loss of information.

That's less usual!

>Perhaps the wording needs a little tightening? :-)

Feel free to write a DR.

Clive D.W. Feather
Jan 17, 2001, 3:58:32 AM
In article <WwU86.271529$U46.8...@news1.sttls1.wa.home.com>, Mike
<geo...@myob.com> writes

>I wonder also what happens if a complex is compared with 0. Suppose the real
>part is a NaN and the imaginary part is not?

The two parts are compared separately, and both must be equal for it to
be equal. If the real part is a NaN, that is not equal to 0.
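
A sketch of that case:

#include <complex.h>
#include <math.h>

double complex z = nan("") + 0.0 * I;  /* real part NaN, imaginary part 0 */
int eq = (z == 0);  /* 0: the parts are compared separately, and the
                       NaN real part compares unequal to 0.0 */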

Clive D.W. Feather
Jan 17, 2001, 3:54:33 AM
In article <yecely3...@king.cts.com>, Keith Thompson <k...@cts.com>
writes

>Personally, I'd like to see this tightened up a bit. I've never heard
>of an implementation in which time_t isn't monotonic and linear;
>requiring these characteristics wouldn't be a heavy burden,

You do *not* want to (re-)start the leap seconds discussion on this
group. Trust me.

"linear" is a problem when the gap between 23:59:58 and 00:00:00 may be
1, 2, or 3 seconds long. Or not, since there are at least two kinds of
seconds.

John Hauser
Jan 17, 2001, 3:30:06 PM

Nick Maclaren:

> As a nitpick, neither double nor long double are allowed to be IEEE
> single precision in standard C, as they have a mandated minimum
> precision of 9 decimal digits.

Sorry, thanks for the correction. (Actually, given the weak accuracy
requirements of floating-point in the C Standard, it's not entirely
clear to me that the 9 digits is enforceable. But I'm happy to concede
it for now.)

- John Hauser

Mike
Jan 19, 2001, 2:07:17 AM

Clive D.W. Feather wrote in message ...

>In article <WwU86.271529$U46.8...@news1.sttls1.wa.home.com>, Mike
><geo...@myob.com> writes
>>I wonder also what happens if a complex is compared with 0. Suppose the
>>real part is a NaN and the imaginary part is not?
>
>The two parts are compared separately, and both must be equal for it to
>be equal. If the real part is a NaN, that is not equal to 0.


That makes sense.


David R Tribble
Jan 22, 2001, 2:37:59 PM
[was: Re: _Imaginary numbers]

Keith Thompson <k...@cts.com> writes
>> Personally, I'd like to see this tightened up a bit. I've never
>> heard of an implementation in which time_t isn't monotonic and
>> linear; requiring these characteristics wouldn't be a heavy burden,

Clive D.W. Feather wrote:
> You do *not* want to (re-)start the leap seconds discussion on this
> group. Trust me.

You are correct in that all discussions about time_t semantics have
been cluttered by talk of leap seconds, choice of calendars, and
other minutiae, instead of focusing on a very small set of changes
(or additional "tightening up" of constraints) to the existing
time_t.

Being one of the proponents for a better and more useful time_t, I
have practically abandoned all hope of ever seeing ISO C making any
meaningful changes to it in my lifetime. (POSIX, on the other hand,
provides a better avenue for attempted improvement.)

> "linear" is a problem when the gap between 23:59:58 and 00:00:00 may
> be 1, 2, or 3 seconds long. Or not, since there are at least two kinds
> of seconds.

On the other hand, setting aside all arguments of leap seconds,
calendars, encodings, etc., it is perfectly reasonable to pursue an
ISO C requirement that time_t represent a monotonically increasing
(arithmetic) value. All this does is say that time_t values for later
dates are arithmetically greater than values for earlier dates.
This allows a program to use the '<' operator to compare time_t
values with some expected measure of meaning. This requirement should
not invalidate any existing implementation, and should qualify as
"codifying existing practice".

It is also reasonable to pursue a requirement that time_t represent
either
a) a count of "events" (of implementation-defined duration) since a
particular implementation-defined "epoch" [thus allowing POSIX-like
encodings], or
b) an encoding of an implementation-defined date/time [thus allowing
DOS-like 'yyyy:mm:dd:hh:mm:ss' encodings].
and nothing else. This requirement also should not break any existing
implementations, but it would limit the possibilities for future
implementations.

These requirements still make no stipulations about the data type
of time_t other than that it must be "arithmetic".

--
David R. Tribble, mailto:da...@tribble.com, http://david.tribble.com

Douglas A. Gwyn
Jan 23, 2001, 3:29:53 PM
David R Tribble wrote:
> Being one of the proponents for a better and more useful time_t, I
> have practically abandoned all hope of ever seeing ISO C making any
> meaningful changes to it in my lifetime. (POSIX, on the other hand,
> provides a better avenue for attempted improvement.)

We backed out an attempt at improvements in these facilities
for C99, in the light of errors that were found in the new
specifications and having been advised that an ad hoc working
group (outside WG14) was trying to arrive at a consensus on
an improved interface, which might be advisable for a future
C standard to adopt once it has been shaken out. A serious
problem is the disconnect between the physicist's notion of
time as simply counts of a specified periodic phenomenon and
the hodge-podge of cultural notions of time, including some
accidents of astronomy.

I don't know how much progress (if any) that working group
made.

David R Tribble
Jan 24, 2001, 2:12:17 PM
David R Tribble wrote:
>> Being one of the proponents for a better and more useful time_t, I
>> have practically abandoned all hope of ever seeing ISO C making any
>> meaningful changes to it in my lifetime. (POSIX, on the other hand,
>> provides a better avenue for attempted improvement.)

Douglas A. Gwyn wrote:
> We backed out an attempt at improvements in these facilities
> for C99, in the light of errors that were found in the new
> specifications and having been advised that an ad hoc working
> group (outside WG14) was trying to arrive at a consensus on
> an improved interface, which might be advisable for a future
> C standard to adopt once it has been shaken out. A serious
> problem is the disconnect between the physicist's notion of
> time as simply counts of a specified periodic phenomenon and
> the hodge-podge of cultural notions of time, including some
> accidents of astronomy.

I think the approach taken by the Java library has merits. It
defines a 'Date' type as containing a simple integer count of the
number of subsecond ticks elapsed since a given epoch date.
(The ticks are milliseconds, and the epoch starts at 1970-01-01
00:00:00.000 GMT, but these are arbitrary details.) Such a 'Date'
object can be converted into the appropriate calendric form by
applying to it a 'Calendar' object, which determines the year, month,
day of the month, weekday, etc. for that particular value.

Currently the only supported Calendar is the standard Gregorian
calendar sans leap seconds. Presumably, future implementations can
provide alternate calendars for various worldwide regional needs, as
well as representations needing leap seconds.

I like this approach because it separates the concepts of 'date
counter' and 'calendric representation'. The former is simply a
count of elapsed events from some arbitrary date; the latter deals
with all the difficult cultural aspects. Providing different
implementations of 'Calendar' does not affect the implementation
or semantics of 'Date'.

This approach also has the benefit of defining specific properties
of 'Date' values, such as the precision and range of dates. It
also makes it easy to do comparisons (e.g., using the '<' and '='
operators) and computations (e.g., adding 40 seconds to the current
time) on date values, and be guaranteed that the results are
predictable and the operations are portable. ISO C 'time_t' does
not have any of these properties at present.
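
A hypothetical C rendering of that separation (every name here is
invented for illustration):

typedef long long tick_date_t;          /* the 'Date': ticks since an epoch */

struct tm to_gregorian(tick_date_t d);  /* one 'Calendar' */
struct tm to_islamic(tick_date_t d);    /* another */

/* comparison and arithmetic act on tick_date_t alone;
   a calendar matters only for display */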

> I don't know how much progress (if any) that working group made.

I thought I was on its mailing list, but I haven't received
anything in months.

Pete Forman
Jan 25, 2001, 7:16:17 AM
David R Tribble <da...@tribble.com> writes:

> I think the approach taken by the Java library has merits. It
> defines a 'Date' type as containing a simple integer count of the
> number of subsecond ticks elapsed since a given epoch date. (The
> ticks are milliseconds, and the epoch starts at 1970-01-01
> 00:00:00.000 GMT, but these are arbitrary details.) Such a 'Date'
> object can be converted into the appropriate calendric form by
> applying to it a 'Calendar' object, which determines the year,
> month, day of the month, weekday, etc. for that particular value.

Java's Date class is rather vague on what is being represented.

Although the Date class is intended to reflect coordinated
universal time (UTC), it may not do so exactly, depending on the
host environment of the Java Virtual Machine.

Assuming that it does indeed contain the correct value for UTC, it
turns out to be slightly worse than time_t (as refined by POSIX). In
POSIX time_t repeats from 23:59:60 to 00:00:00 at a leap second but it
does not go backwards as it has a precision of one second. Java's
Date will indicate that 23:59:60.500 is later than 00:00:00.000 of the
next day.

Apologies for citing POSIX in this group but ISO C does not define the
epoch or timescale that time_t represents.
--
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco -./\.- by myself and does not represent
pete....@westerngeco.com -./\.- opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef -./\.- Hughes or their divisions.

Douglas A. Gwyn
Jan 25, 2001, 12:20:28 PM
Pete Forman wrote:
> Java's Date will indicate that 23:59:60.500 is later than
> 00:00:00.000 of the next day.

I think the point is that for comparison purposes etc.
one ought to be using the time (tick counter), not the
calendar representation of the time.

Larry Jones
Jan 25, 2001, 2:50:54 PM
David R Tribble (da...@tribble.com) wrote:
>
> I think the approach taken by the Java library has merits. It
> defines a 'Date' type as containing a simple integer count of the
> number of subsecond ticks elapsed since a given epoch date.

That sounds like an intractable specification for most systems. I would
venture to guess that every existing implementation actually counts
ticks *that were not part of a leap second* instead.

-Larry Jones

You just can't ever be too careful. -- Calvin

Douglas A. Gwyn
Jan 25, 2001, 4:31:35 PM
Larry Jones wrote:
> I would venture to guess that every existing implementation
> actually counts ticks *that were not part of a leap second*

I don't see why. Once the system clock has been "set", it
is just an oscillator at a frequency fixed by physics, like the
properties of a piezoelectric crystal, independent of the
conventions imposed on society by astronomers and politicians.
That's certainly the way UNIX's system clock is maintained.
To properly *set* the clock, the system *might* have to
temporarily deal with calendric issues if that is the form
in which available "current time" is expressed. However,
if for example there is a network time server that provides
current time in terms of ticks since an epoch, no calendric
issues would be involved in setting the system clock.

Pete Forman
Jan 26, 2001, 10:52:08 AM

I *was* referring to the Date objects which encapsulate the
millisecond count since epoch. The time strings in my posting
referred to the real UTC time rather than what the Date or Calendar
classes might contain.

A Date object constructed with no arguments during a leap second,
e.g. on 1998-12-31T23:59:60.500Z, would come after one constructed on
1999-01-01T00:00:00.000Z. It would compare equal to one constructed
on 1999-01-01T00:00:00.500Z. Another way of looking at it is that
there is no unambiguous way to construct the first Date object after
the event with the single long argument constructor (milliseconds
since epoch).

This problem is inevitable in Java and C/POSIX unless you move from
UTC as your timescale to something monotonic like TAI or UTS.

Larry Jones
Jan 26, 2001, 5:00:47 PM
Douglas A. Gwyn (gw...@arl.army.mil) wrote:
> Larry Jones wrote:
> > I would venture to guess that every existing implementation
> > actually counts ticks *that were not part of a leap second*
>
> I don't see why.

Because most *systems* only count ticks that aren't part of leap
seconds; POSIX even requires it. In order to compensate, a Java
implementation would have to have a leap second table, which would
require periodic updating, and I sincerely doubt that anyone has gone
to that trouble. While the following is true:

> Once the system clock has been "set", it
> is just an oscillator at a frequency fixed by physics, like the
> properties of a piezoelectric crystal, independent of the
> conventions imposed on society by astronomers and politicians.
> That's certainly the way UNIX's system clock is maintained.

The reality is that UNIX system clocks are always being reset (or
adjusted) using a simple formula based on calendric time that does not
take leap seconds into consideration (as I said above, POSIX requires
this behavior).

-Larry Jones

It COULD'VE happened by accident! -- Calvin

Douglas A. Gwyn
Jan 26, 2001, 8:35:19 PM
Larry Jones wrote:
> ... POSIX even requires it.

Clearly if POSIX requires that the system clock be forced
to stutter near a "leap second", then there is an error in
that spec that ought to be fixed, just as we occasionally
fix errors in the C spec.

David R Tribble
Jan 29, 2001, 7:45:19 PM
Pete Forman wrote:
> This problem is inevitable in Java and C/POSIX unless you move from
> UTC as your timescale to something monotonic like TAI or UTS.

I'm all for mandating a time system based on a simple linear count
of ticks from a specific epoch that entirely disregards leap seconds.
The epoch start (or "zero") date could even be left up to the
implementation, provided that there was an easy way to determine
what it is (in Gregorian calendar terms), such as through a standard
macro in <time.h>.

I think it's reasonable to say that such a system would meet 90% or
more of everyone's needs, and that those who need to deal with leap
seconds are probably going to use specialty date libraries anyway
(and possibly specialty date hardware as well), and will not rely on
the standard library routines alone.

We could even make adherence to such a linear implementation optional,
but require implementations to provide a macro (e.g.,
_STDC_TIME_SUPPORTED) indicating conformance to the standard model.

Such a model would allow us to compare time_t values using nothing
more sophisticated than the '<' and '==' operators. It would also
allow us to do fairly simple operations like "add 40 seconds to
the current date" with some assurance of predictable semantics and
portability.

ISO C time_t is not capable of even these simple operations in its
current form.

Douglas A. Gwyn
Jan 31, 2001, 10:53:47 AM
David R Tribble wrote:
> I'm all for mandating a time system based on a simple linear count
> of ticks from a specific epoch that entirely disregards leap seconds.

Indeed, that's the only sensible approach, because:

> Such a model would allow us to compare time_t values using nothing
> more sophisticated than the '<' and '==' operators. It would also
> allow us to do fairly simple operations like "add 40 seconds to
> the current date" with some assurance of predictable semantics and
> portability.

With suitable macros for TICKS_PER_SECOND etc.

> ISO C time_t is not capable of even these simple operations in its
> current form.

Well, it can be, but it is not required to be.

David R Tribble
Feb 2, 2001, 2:51:02 PM
Douglas A. Gwyn wrote:
>> Clearly if POSIX requires that the system clock be forced
>> to stutter near a "leap second", then there is an error in
>> that spec that ought to be fixed, just as we occasionally
>> fix errors in the C spec.

David Tribble wrote:
>> I'm all for mandating a time system based on a simple linear count
>> of ticks from a specific epoch that entirely disregards leap seconds.

Bradley White <b...@acm.org> wrote:
> And I agree. However, when translating such a tick count to or
> from the time-of-day, you must take leap seconds into account!

No, you don't.

If your time system is based solely on a representation that ignores
leap seconds, then all conversions to and from calendar formats would
also completely disregard leap seconds. Leap seconds simply don't
exist in such a representational universe.

It is true, of course, that such a system will drift from true real
world clock time, assuming it never stops operating. However, such a
system is likely to be rebooted on occasion, as well as having its
internal clock updated every so often to keep it in sync with the real
world.

This creates the interesting situation of examining a time value from
a past event (such as a file modification timestamp), and determining
that it happened N seconds ago, whereas it really happened N+E seconds
ago, where E is the accumulated error between the system's clock and
real world time. But E can be safely ignored in most civil
applications, since few applications really care how accurately the
dates of past events align with real time. The system cares only that
N represents a given interval of time within its representational
universe.

Any system that does care about E, and about the alignment of system
time and real world clock time, and thus about leap seconds, is bound
to rely on specialized software and perhaps specialized hardware
instead of the standard ISO C library alone. Therefore the ISO C
library does not have to exactly correspond to notions of time
reckoning in the real world. It should provide time measuring
features suitable for most civil applications, but no more.

My point of view is that leap seconds are not necessary for most civil
applications, and thus can be simply disregarded when dealing with
system time values.

-- David R. Tribble, <da...@tribble.com> --

Neil Booth
Feb 2, 2001, 5:30:31 PM
scj...@thor.sdrc.com (Larry Jones) writes:

>
> Because most *systems* only count ticks that aren't part of leap
> seconds; POSIX even requires it. In order to compensate, a Java
> implementation would have to have a leap second table, which would
> require periodic updating, and I sincerely doubt that anyone has gone
> to that trouble.

Oh, I don't know. Have you looked at the GNU C Library? I'm not
sure, but I think they count leap seconds. There's an incredible
amount of information in there about calendars and times in different
countries, and it's constantly being updated with verbose comments and
references to documents elsewhere as justification.

Neil.

Douglas A. Gwyn
Feb 2, 2001, 5:47:39 PM
David R Tribble wrote:
> My point of view is that leap seconds are not necessary for most civil
> applications, and thus can be simply disregarded when dealing with
> system time values.

I think it muddies the water to insist on that point.
Surely, the conversion from a nice uniform internal time counter
to an external calendric representation ought to be as good as
the implementer can make it, which means that in principle it
ought to compensate for leap seconds just as it presumably
compensates for leap days almost every four years.
A sloppy implementation could ignore the leap seconds,
but if some application contains its own time formatting
function that does the job right, it could cause confusion.
Any API standard that specifies such a conversion function
ought to say that the formatted version is a culturally
appropriate representation of the wall-clock time (or similar).
Then whether leap seconds are properly accounted for becomes a
matter of Quality of Implementation.

Brian Inglis
Feb 4, 2001, 3:55:13 PM

I think that it would probably be more useful if ISO C at least
handled current dates in calendars other than the Gregorian: a
billion or so people use the Han calendar, a few hundred million
use the Islamic calendar, and many millions use the Hebrew
calendar for civil purposes. There have got to be a number of
programmers out there struggling with RYO calendars.

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
--
Brian_...@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
use address above to reply

David R Tribble
Feb 5, 2001, 11:46:40 AM
David R Tribble wrote:
>> My point of view is that leap seconds are not necessary for most
>> civil applications, and thus can be simply disregarded when dealing
>> with system time values.

Douglas A. Gwyn wrote:
> I think it muddies the water to insist on that point.
> Surely, the conversion from a nice uniform internal time counter
> to an external calendric representation ought to be as good as
> the implementer can make it, which means that in principle it
> ought to compensate for leap seconds just as it presumably
> compensates for leap days almost every four years.

There is a big difference between compensating for leap days, which
are entirely predictable and thus computable from a handful of
simple equations, and leap seconds, which do not have any kind of
predictability property.

You can add N days worth of seconds to today's date and determine
exactly how many leap days will occur between the two dates (and thus
you can determine the exact day of the week the future date falls on),
but you can't determine how many leap seconds will occur between them
(and thus you can't determine the exact time of day the future date
represents).
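
The leap-day rule really is just a handful of equations; a sketch:

/* Gregorian leap-year rule: every 4th year, except centuries,
   except every 400th year */
int is_leap(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

No analogous function can exist for leap seconds: they are announced by
the IERS only a few months in advance, based on observed Earth rotation.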

I'm arguing that a completely deterministic time representation which
disregards non-deterministic elements (i.e., leap seconds) is an
acceptable minimum implementation requirement.

(My complete suggestion was (is) that a conforming time_t
representation should be a monotonically increasing value that has
some computable relation to a simple count of "ticks" since some
arbitrary "epoch" start date.)


> A sloppy implementation could ignore the leap seconds,
> but if some application contains its own time formatting
> function that does the job right, it could cause confusion.
> Any API standard that specifies such a conversion function
> ought to say that the formatted version is a culturally
> appropriate representation of the wall-clock time (or similar).
> Then whether leap seconds are properly accounted for becomes a
> matter of Quality of Implementation.

I take the point of view that, at a minimum, a reasonable time
representation does not need leap seconds. I also take the point of
view that we should not prevent implementations from properly handling
leap seconds if they so choose.

Then there is the problem of transferring dates between different
systems. If one system implements leap seconds correctly, to the
point of encoding leap seconds in its calendric representations of
time_t values (e.g., "2000-12-31T23:59:60.500Z"), then it will be
difficult to transfer this value to a system that doesn't do leap
seconds and have it properly interpret the date value.

By insisting on a uniform (portable) standard representation, at least
for dates that are transferred between systems, the date interchange
problem is avoided.

David R Tribble
Feb 5, 2001, 12:06:34 PM
Brian Inglis wrote:
> I think that it would probably be more useful if ISO C at least
> handled current dates in calendars other than the Gregorian: a
> billion or so people use the Han calendar, a few hundred million
> use the Islamic calendar, and many millions use the Hebrew
> calendar for civil purposes. There have got to be a number of
> programmers out there struggling with RYO calendars.

My point of view is that the internal time_t representation of dates
should be completely independent of any calendar. I.e., it should be
conceptually as simple as a count of "ticks" since some "epoch" start
date (or it could be some simple bitwise function thereof, which
allows for nonlinear bitfield encodings).

Converting a time_t value into a human-readable equivalent then
involves applying a calendric transformation to it. I think it's
reasonable to require conforming implementations to provide at least
a Gregorian calendar transform (since it's accepted world-wide for
trade, even in regions that use other cultural calendars). Other
calendars can be provided as library extensions.

(I also think a better naming scheme should be adopted to support
multiple calendars. The ctime() function, for example, could be
called gregorian_ctime(). Or perhaps a new function to replace
strftime() could take an additional argument that specifies the
calendar to apply, with the default being Gregorian. We could also
provide get_local_calendar() which returns the default regional
calendar, similar to retrieving the local timezone setting.)

Separating the two concepts (internal time and calendric
transformations) allows you, in the most general case, to pick and
choose which calendar you want to display dates in. It also allows
for relatively simple conversions between dates in different
calendars, since the underlying time_t representation is the same.
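
In code, using the hypothetical names suggested above, that might look
like this (calendar_t and calendar_ctime() are further inventions in
the same spirit):

time_t now = time(NULL);
char *g = gregorian_ctime(&now);        /* Gregorian rendering */
calendar_t cal = get_local_calendar();  /* the regional default */
char *r = calendar_ctime(&now, cal);    /* same instant, other calendar */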

Douglas A. Gwyn
Feb 7, 2001, 2:19:22 PM
Max TenEyck Woodbury wrote:
> Rather than requiring a specific implementation of time_t, it would
> be better if specific properties of time_t were defined.

Right, but from a programming perspective the desired
properties pretty much force it to be a simple integer "tick"
counter from some epoch that the calendric functions have
access to. Of course seconds is too coarse a unit for ticks
these days, so the scaling factor needs to be available, e.g.
TICKS_PER_SECOND. Given that information, programs can easily
perform time-based operations such as setting timeout delays.
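
For instance (TICKS_PER_SECOND being the proposed macro, not standard C,
and time_t here assumed to be the proposed tick counter):

time_t deadline = time(NULL) + 30 * TICKS_PER_SECOND;  /* 30-second timeout */
while (time(NULL) < deadline) {
    /* wait or poll ... */
}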

> Adding additional library functions is probably NOT the best way
> to do this, considering that there is already a mechanism in place
> for this. Instead, the alternate calendars should be controlled by
> the time-locale in effect. The 'C' locale would be a gregorian
> calendar with 'english' month and day names. Changing the
> time-locale could change the constraints on struct tm.

I agree that given the existence of locale mechanisms, it makes
sense to exploit them for calendric formatting as well, with as
you say the "C" locale having the traditional property. It still
might be desirable to redesign the library interface, since for
example phase of the moon is not a standard component of struct
tm, but is necessary for calculation of some holidays.

> Do other calendars require any other day level or larger time
> components that could NOT be mapped into tm_year, tm_mon, tm_mday,
> tm_yday and tm_wday consistently provided the constraints are
> changed appropriately?

I just mentioned one, and I suspect there are others.

> ... how is the new 'internet
> time' to be handled? (I'd really like to ignore it, but that
> may not be possible.)

What in the world would "internet time" be? Time is not Newtonian,
and many years ago Lamport already published a paper dealing with
the very real issue of relativistic time in the context of networks.
Local time is well defined, but global time cannot be; at best, one
can single out a particular local time as a reference, but there is
no simple way to convert from that local time to a distant local
time. It's even dependent on the state of motion; for example,
satellites have to take that into account.

> it should be possible to take a struct tm from one calendar, use
> mktime to convert it to a time_t, change the time-locale to another
> calendar and call gmtime to convert the time_t to the new calendar.

Absolutely. This works because the intermediate form has clean,
physically more fundamental properties than the "outside" forms.

> a == b If not true, the times represented differ. If true, the
> times represented do not differ by more than some
> implementation defined amount. (Typically 1 sec., but
> might be 2 or 3 sec if leap seconds stutter.)

No, there is *no need* to worry about leap seconds when using
time_t if it is specified as a uniform tick counter. There is
of course the *cultural* ambiguity caused by converting
calendric times to ticks, such as during the transition from
DST to standard time, where for example 02:30 occurs twice an
hour apart. There is nothing that can be done to disambiguate
an incomplete specification when converting to the uniform tick
count, so probably that case should return an error from the
conversion function. There ought to be a representation
distinction like 02:30D vs. 02:30S so calendric notation could
be used by people to unambiguously denote local time. When the
US Congress first mandated DST, NBS took a different approach:
they quit using regional time in their WWV time broadcasts and
started using GMT, now UTC or whatever, which did not jump
around at the whim of politicians. Unfortunately, they still
have leap seconds to contend with.
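
Standard C's struct tm does carry exactly that D-vs-S distinction, via
tm_isdst; a sketch (the particular DST-end date is assumed for
illustration):

#include <time.h>

struct tm when = {0};
when.tm_year = 101;  when.tm_mon = 9;  when.tm_mday = 28;  /* a DST-end day */
when.tm_hour = 2;    when.tm_min = 30;
when.tm_isdst = 1;           /* "02:30D"; 0 would mean "02:30S" */
time_t t = mktime(&when);    /* tm_isdst = -1 would leave 02:30 ambiguous */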

Douglas A. Gwyn
Feb 7, 2001, 2:37:41 PM
David R Tribble wrote:
> There is a big difference between compensating for leap days, which
> are entirely predictable and thus computable from a handful of
> simple equations, and leap seconds, which do not have any kind of
> predictability property.

While the rules for leap days have not changed in recent (Western)
history, the rules for leap hours (aka Daylight Saving Time) have
changed several times in past decades and also vary with locality.
Tables are used to deal with DST, and tables can be used to deal
with leap seconds. Just as a time-conversion library might not
have accurate DST information, it might not have accurate leap
second information; that doesn't excuse it from even making the
effort. What I urge is that any standard specification along
these lines state clearly that the (locale-specific) calendric
time representation and the internal uniform tick counter are
the inputs/outputs of the conversion functions, and that the
tick counting be physically regular, but *not* that the calendric
time representation be perfectly in agreement with national and
international standards. That allows "wiggle room" for low-
quality implementations while nonetheless encouraging perfect
implementations.

> I take the point of view that, at a minimum, a reasonable time
> representation does not need leap seconds. I also take the point of
> view that we should not prevent implementations from properly handling
> leap seconds if they so choose.

The second is much more vital. Leap seconds make a huge difference
when the time happens to be *near* the occurrence of a leap second.
The entire world is expected to take leap seconds into account when
setting their everyday clocks; this matters for many applications,
e.g. radio/TV broadcasting, where to-the-second synchronization is
important.

> Then there is the problem of transferring dates between different
> systems. If one system implements leap seconds correctly, to the
> point of encoding leap seconds in its calendric representations of
> time_t values (e.g., "2000-12-31T23:59:60.500Z"), then it will be
> difficult to transfer this value to a system that doesn't do leap
> seconds and have it properly interpret the date value.

A system that does not understand leap seconds simply cannot
correctly interpret any human-oriented time representation;
unless one were to make the big human-engineering mistake of
requiring such external textual representations to differ from
everyday clocks by an amount equal to the number of leap seconds
since some standard epoch. (That would require two extra known
constants, epoch and leap-second table, beyond what people
normally have to deal with.)

In many instances the conversion inaccuracy might not matter,
so long as the system is self-consistent, which is the main
reason I would not insist on specifying that the implementation
has to get it perfectly right.

Max TenEyck Woodbury
Feb 7, 2001, 6:05:41 PM
"Douglas A. Gwyn" wrote:
>
> Max TenEyck Woodbury wrote:
>> Rather than requiring a specific implementation of time_t, it would
>> be better if specific properties of time_t were defined.
>
> Right, but from a programming perspective the desired
> properties pretty much force it to be a simple integer "tick"
> counter from some epoch that the calendric functions have
> access to. Of course seconds is too coarse a unit for ticks
> these days, so the scaling factor needs to be available, e.g.
> TICKS_PER_SECOND. Given that information, programs can easily
> perform time-based operations such as setting timeout delays.

First, calendars are primarily for human use, not machine use.
I have yet to see anybody who could directly use a time base
more accurate than a few milliseconds, and most human activity
that I have dealt with is no more accurate than plus or minus
a minute or so. A reasonable conclusion from that is that
time_t, as a human-relevant type, does not NEED more resolution
than a second or two.

Second, there is already a separate time type for machine use.
The type clock_t has the properties required for interval timing
and has the constant CLOCKS_PER_SEC defined for it.

Conclusion: You are mixing the requirements for time_t with
the requirements for clock_t. As a result, your conclusion is
incorrect.

>> Adding additional library functions is probably NOT the best way
>> to do this, considering that there is already a mechanism in place
>> for this. Instead, the alternate calendars should be controlled by
>> the time-locale in effect. The 'C' locale would be a gregorian
>> calendar with 'english' month and day names. Changing the
>> time-locale could change the constraints on struct tm.
>
> I agree that given the existence of locale mechanisms, it makes
> sense to exploit them for calendric formatting as well, with as
> you say the "C" locale having the traditional property. It still
> might be desirable to redesign the library interface, since for
> example phase of the moon is not a standard component of struct
> tm, but is necessary for calculation of some holidays.

As is the solstice and equinox dates. That information is needed
for specifying the occurrence of specific events, but is not needed
for the enumeration of the time sequence. (...)

>> Do other calendars require any other day level or larger time
>> components that could NOT be mapped into tm_year, tm_mon, tm_mday,
>> tm_yday and tm_wday consistently provided the constraints are
>> changed appropriately?
>
> I just mentioned one, and I suspect there are others.

Calculating the date of Easter is a different problem from
enumerating dates. Are there calendric enumerations that require
components that cannot be mapped into the currently mandated
members of struct tm?

As for suspicions, I have a few of my own. Older calendars were
based on the year since the start of an epoch like the beginning
of the reign of a king. In such a calendar system, an epoch index
would be helpful. But I'm not about to require the inclusion of
a tm_epoch member without some indication that it would be useful.
Does anybody have anything definitive?

>> ... how is the new 'internet
>> time' to be handled? (I'd really like to ignore it, but that
>> may not be possible.)
>
> What in the world would "internet time" be? Time is not Newtonian,
> and many years ago Lamport already published a paper dealing with
> the very real issue of relativistic time in the context of networks.
> Local time is well defined, but global time cannot be; at best, one
> can single out a particular local time as a reference, but there is
> no simple way to convert from that local time to a distant local
> time. It's even dependent on the state of motion; for example,
> satellites have to take that into account.

Hey. I just read about it and I think it's just an advertising gimmick.
BUT it deals with people and is relevant to the time_t and struct tm
time types.

The relativistic time corrections are not noticeable on the human scale
and can be ignored for time_t and struct tm. They can be significant
to clock_t, but that's another can of worms entirely.

>> it should be possible to take a struct tm from one calendar, use
>> mktime to convert it to a time_t, change the time-locale to another
>> calendar and call gmtime to convert the time_t to the new calendar.
>
> Absolutely. This works because the intermediate form has clean,
> physically more fundamental properties than the "outside" forms.

Physical properties are relevant to clock_t. It does NOT follow that
time_t has to have the same properties.

>> a == b If not true, the times represented differ. If true, the
>> times represented do not differ by more than some
>> implementation defined amount. (Typically 1 sec., but
>> might be 2 or 3 sec if leap seconds stutter.)
>
> No, there is *no need* to worry about leap seconds when using
> time_t if it is specified as a uniform tick counter. There is
> of course the *cultural* ambiguity caused by converting
> calendric times to ticks, such as during the transition from
> DST to standard time, where for example 02:30 occurs twice an
> hour apart. There is nothing that can be done to disambiguate
> an incomplete specification when converting to the uniform tick
> count, so probably that case should return an error from the
> conversion function. There ought to be a representation
> distinction like 02:30D vs. 02:30S so calendric notation could
> be used by people to unambiguously denote local time. When the
> US Congress first mandated DST, NBS took a different approach:
> they quit using regional time in their WWV time broadcasts and
> started using GMT, now UTC or whatever, which did not jump
> around at the whim of politicians. Unfortunately, they still
> have leap seconds to contend with.

First, you're arguing for a particular implementation which is NOT
what the standard should be doing.

Second, you've failed to take into account the existence of the
clock_t type which has most of the properties you require.

Third, you'd gratuitously label a lot of useful implementations as
non conforming without gaining anything by it. Bluntly, it would
cost quite a bit of effort to make the changes you require. What
would the C community gain by requiring those changes?

I can see a gain for a coarse identity comparison and ordering
on the time_t type. I can see another set of gains for linking
the constraints on struct tm to the time-locale. But I don't see
a whole lot of additional gain to forcing time_t to have the
same properties as clock_t.

mt...@cds.duke.edu

Paul Jarc

unread,
Feb 7, 2001, 6:54:44 PM2/7/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> First, calendars are primarily for human use, not machine use.
> I have yet to see anybody who could directly use a time base
> more accurate than a few milliseconds and most human activity
> that I have dealt with is no more accurate than plus or minus
> a minute or so. A reasonable conclusion from that is that time_t,
> as a human-relevant type, does not NEED more resolution than a
> second or two.

Your conclusion involves time_t, but your preceding remarks have
nothing to do with time_t. Why isn't your conclusion about struct tm
instead?

> Second, there is already a separate time type for machine use.
> The time clock_t has the properties required for interval timing
> and has the constant CLOCKS_PER_SEC defined for it.

Yes, but a clock_t value returned from clock() isn't meaningful except
relative to an unspecified epoch, so apart from measuring intervals
during a *single* program run, it's useless. It's much easier to fix
time_t and time() than it would be to fix clock_t and clock().

> Conclusion: You are mixing the requirements for time_t with
> the requirements for clock_t. As a result, your conclusion is
> incorrect.

Our requirements a priori are unrelated to particular types. We want
a type that behaves like clock_t except with a fixed epoch. time_t is
*already* this way on many implementations - that's why it's a good
choice.

What properties does time_t have that make it suitable for
human-oriented use instead of machine-oriented use? Does it do this
job better than struct tm? Why do you lump time_t in with struct tm
instead of with clock_t?

> Physical properties are relevant to clock_t. It does NOT follow that
> time_t has to have the same properties.

It also does not follow that time_t is a bad choice of type to assign
the properties we want.

> > There is nothing that can be done to disambiguate an incomplete
> > specification when converting to the uniform tick count, so
> > probably that case should return an error from the conversion
> > function. There ought to be a representation distinction like
> > 02:30D vs. 02:30S so calendric notation could be used by people to
> > unambiguously denote local time.
>
> First, you're arguing for a particular implementation which is NOT
> what the standard should be doing.

No, he's arguing for giving implementations room to get things right.
The current interface is broken.

> Third, you'd gratuitously label a lot of useful implementations as
> non conforming without gaining anything by it.

How so?

> But I don't see a whole lot of additional gain to forcing time_t to
> have the same properties as clock_t.

The alternative would be to have a new clocktime() function that
returns a clock_t value with a fixed-across-program-runs epoch. This
would not be a bad idea, but if we require this, it wouldn't be any
harder for implementations to do the same work for time() that they
would have to do for clocktime().

Probably the best choice is to create a new function and new type.
That way, guarantees provided by existing implementations (e.g.,
time_t is a count of seconds) won't be affected by new properties we
assign (e.g., newtime_t has (or may have) subsecond precision).
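
To make the shape of that concrete, a sketch -- every name in it
(newtime_t, NEWTICKS_PER_SEC, newtime) is invented for illustration,
and the toy body assumes, as the standard does not guarantee, that
time_t already counts seconds from a fixed epoch:

#include <time.h>

typedef long long newtime_t;              /* uniform tick counter */
#define NEWTICKS_PER_SEC ((newtime_t)1)   /* this toy version ticks once per second */

/* Toy fallback: derive the new counter from time(), assuming time_t
   on this implementation already counts seconds from a fixed epoch. */
newtime_t newtime(void)
{
    return (newtime_t)time(NULL) * NEWTICKS_PER_SEC;
}

A real implementation would of course read a finer-grained system
clock and advertise a larger NEWTICKS_PER_SEC.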


paul

Niklas Matthies

unread,
Feb 7, 2001, 7:13:37 PM2/7/01
to
On Wed, 07 Feb 2001 18:05:41 -0500, Max TenEyck Woodbury <mt...@cds.duke.edu> wrote:
> "Douglas A. Gwyn" wrote:
[···]

> > What in the world would "internet time" be? Time is not Newtonian,
> > and many years ago Lamport already published a paper dealing with
> > the very real issue of relativistic time in the context of networks.
> > Local time is well defined, but global time cannot be; at best, one
> > can single out a particular local time as a reference, but there is
> > no simple way to convert from that local time to a distant local
> > time. It's even dependent on the state of motion; for example,
> > satellites have to take that into account.
>
> Hey. I just read about it and I think it's just an advertising gimmick.
> BUT it deals with people and is relevant to the time_t and struct tm
> time types.
>
> The relativistic time corrections are not noticeable on the human scale
> and can be ignored for time_t and struct tm. They can be significant
> to clock_t, but that's another can of worms entirely.

I agree with Douglas Gwyn that a time_t value should mean one and only
one time (relative to a given locale), namely the number of ticks (which
all have the same length) since some fixed start-of-epoch. The length of
a tick and the start of the epoch should be implementation-defined, but
determinable by the program. In particular, which point in time a time_t
value represents should not depend on whether the implementation keeps
track of leap hours and leap seconds, and for example switching from or
to DST should not break the continuity of time_t. Of course, the
accuracy of the value returned by time() _does_ depend on whether the
implementation keeps track of leap hours and leap seconds, and the
accuracy of the conversion from and to struct tm depends on whether the
implementation takes these things into account, but that's independent
of what the time_t value is meant to represent; it's a mere QoI issue.

> >> it should be possible to take a struct tm from one calendar, use
> >> mktime to convert it to a time_t, change the time-locale to another
> >> calendar and call gmtime to convert the time_t to the new calendar.
> >
> > Absolutely. This works because the intermediate form has clean,
> > physically more fundamental properties than the "outside" forms.
>
> Physical properties are relevant to clock_t. It does NOT follow that
> time_t has to have the same properties.

But clock() does not (generally) measure total elapsed time, but only
the time the processor spends within the program (e.g. in user space of
the program). This makes it useless for measuring actual time.

To make clock_t applicable to stopwatch-like applications, a new library
function would be needed that returns the actual time elapsed since some
defined point in time. I agree that, given such a function, clock_t
would be more suitable for subsecond time measurement than time_t. Maybe
some more elaborate interface like

struct timer;
struct timer *create_timer(void); /* may return NULL */
void start_timer(struct timer *);
void stop_timer(struct timer *);
clock_t read_timer(struct timer *);

would be preferable, even.
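
Usage would presumably look like this (all four functions are the
hypothetical ones declared above, not standard functions):

struct timer *t = create_timer();
clock_t elapsed;

if (t != NULL) {
    start_timer(t);
    /* ... the activity being timed ... */
    stop_timer(t);
    elapsed = read_timer(t);   /* real time elapsed while the timer ran */
}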

-- Niklas Matthies

Niklas Matthies

unread,
Feb 7, 2001, 7:29:21 PM2/7/01
to
On 07 Feb 2001 18:54:44 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
[...]

> Yes, but a clock_t value returned from clock() isn't meaningful except
> relative to an unspecified epoch, so apart from measuring intervals
> during a *single* program run, it's useless.

It's much worse than the epoch being unspecified. For example, when
running this program

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    while (clock() < start + CLOCKS_PER_SEC * 5)
        /* nop */;
    printf("%lu\n", (unsigned long) ((clock() - start) / CLOCKS_PER_SEC));
    return 0;
}

on a common implementation, and suspending the program for a couple of
minutes in between (e.g. Ctrl-Z under Unix) or running some other
CPU-intensive program in parallel, the program will still output `5',
even though it ran significantly longer than 5 seconds.

Therefore, about the only useful application of clock() is to measure
runtimes of algorithms.

-- Niklas Matthies

Douglas A. Gwyn

unread,
Feb 7, 2001, 7:39:10 PM2/7/01
to
Max TenEyck Woodbury wrote:
> time_t, as a human-relevant type, does not NEED more resolution
> than a second or two.

You got it backwards -- time_t is used for an "actual" time,
not the human end of the interface (that would be essentially
the struct tm equivalent). The internal uniform time needs to
be sufficiently precise for any application use on that platform,
which suggests that the resolution ought to be left up to the
implementation (but also accessible by the application).

> Conclusion: You are mixing the requirements for time_t with
> the requirements for clock_t. As a result, your conclusion is
> incorrect.

One way of putting it is that the only reason for maintaining
separate types clock_t and time_t was that the former usually
doesn't have the range needed by the latter. But now that we
have a mandatory 64-bit integer type, this distinction might be
moot. Since there are currently no provisions for meaningful
arithmetic on time_t (apart from difftime, which has very
limited use), other than via the problematic conversion to
struct tm and back, there is a clear deficiency. We can on
the other hand perform meaningful arithmetic on clock_t, but
that doesn't do us much good, because clock_t has no required
relationship to time_t. The argument is that time_t ought to
have clock_t-like properties so that we can *use* it for time-
oriented computations, or else a new type ought to have both
sets of properties. (An alternative would be to add a bunch
of library functions to perform elementary arithmetic operations
on time_t; that is not very appealing.)

> Calculating the date of Easter is a different problem from
> enumerating dates. Are there calendaric enumerations that require
> components that can not be mapped into the currently mandated
> members of struct tm?

The ones we have include physically-derived ones (day and year),
accidental ones (week, hour, minute, second), and physically-
inspired but woefully inaccurate ones (month). The Chinese
month and year do not coincide with the Western ones. The
Middle-Eastern day does not coincide with the Western one. It
is clear how culture has influenced the evolution of most of
these. As I understand it, the Babylonians also had larger-than-
year cycles. It is quite possible that some obscure tribe (that
might become important due to sitting on important natural
resources) has its own unique units for time. Actually in the US
and probably elsewhere, we have a 12-hour unit whose values are
AM and PM.

> ... But I'm not about to require the inclusion of a tm_epoch
> member without some indication that it would be useful.

Until now, nobody suggested such a thing (that I recall).
The epoch would be inherent in the calendar part of the locale.

> The relativistic time corrections are not noticeable on the human
> scale ...

Not true! Internet gamers are well aware of real problems
caused by latency. And what is "the human scale"? Any effect
can easily be magnified by the system surrounding it; for example
if satellites do not synchronize properly with the ground stations,
entire communication links could be severed.

> First, you're arguing for a particular implementation which is NOT
> what the standard should be doing.

No, I'm arguing for specification of a certain useful set of
properties. That certainly is something a standard can do,
and whether it should do it needs to be argued on its merits.

> Third, you'd gratuitously label a lot of useful implementations as
> non conforming without gaining anything by it. Bluntly, it would
> cost quite a bit of effort to make the changes you require. What
> would the C community gain by requiring those changes?

No, in fact I would not suggest that the new interface
specification radically change the characteristics of the
types previously specified (time_t, struct tm, clock_t).
The connection between time_t and struct tm serves as
a well-known model for the ways that the proposed uniform
internal time and external calendric representation could
be related, and clock_t is a model for a tick counter, but
as I see it the purpose of this thread is not to identify a
minimal set of tweaks to the existing API but rather to
figure out what a better API would look like. *Both*
could be specified in a future standard, or at least the
new one needn't preclude simultaneous support for the old
one in essentially the way it used to be implemented.

Douglas A. Gwyn

unread,
Feb 7, 2001, 8:53:56 PM2/7/01
to
Paul Jarc wrote:
> No, he's arguing for giving implementations room to get things right.
> The current interface is broken.

Well, that was true at an earlier point in the thread,
where it was stated that POSIX requires time_t to stutter.

Douglas A. Gwyn

unread,
Feb 7, 2001, 8:59:45 PM2/7/01
to
Niklas Matthies wrote:
> I agree with Douglas Gwyn that a time_t value should mean one and only
> one time (relative to a given locale), namely the number of ticks (which
> all have the same length) since some fixed start-of-epoch. The length of
> a tick and the start of the epoch should be implementation-defined, but
> determinable by the program.

I wouldn't have time_t be locale-dependent (although the
conversion to and from calendric representation would be).
The epoch needs to be some fixed point in physical time
(at the executing location). Then it would be easy to
determine the epoch should the need arise: pick some
locale in which to represent it, e.g. "C" plus choice of
time zone, and in that locale simply convert (time_t)0
to the representation, producing e.g. "1970/01/01 00:00Z"
for UNIX.
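
In code, that determination might look like this -- a minimal sketch
using only standard <time.h> facilities; the output shown is what a
UNIX-style implementation would produce:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t epoch = (time_t)0;
    struct tm *tm = gmtime(&epoch);   /* (time_t)0 rendered as UTC */
    char buf[32];

    if (tm != NULL && strftime(buf, sizeof buf, "%Y/%m/%d %H:%MZ", tm) > 0)
        printf("%s\n", buf);          /* e.g. "1970/01/01 00:00Z" */
    return 0;
}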

Max TenEyck Woodbury

unread,
Feb 7, 2001, 10:01:06 PM2/7/01
to
Paul Jarc wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> First, calendars are primarily for human use, not machine use.
>> I have yet to see anybody who could directly use a time base
>> more accurate than a few milliseconds and most human activity
>> that I have dealt with is no more accurate than plus or minus
>> a minute or so. A reasonable conclusion from that is that time_t,
>> as a human-relevant type, does not NEED more resolution than a
>> second or two.
>
> Your conclusion involves time_t, but your preceding remarks have
> nothing to do with time_t. Why isn't your conclusion about struct tm
> instead?

Because there are specific routines that tie struct tm and time_t
to each other.

>> Second, there is already a separate time type for machine use.
>> The time clock_t has the properties required for interval timing
>> and has the constant CLOCKS_PER_SEC defined for it.
>
> Yes, but a clock_t value returned from clock() isn't meaningful except
> relative to an unspecified epoch, so apart from measuring intervals
> during a *single* program run, it's useless. It's much easier to fix
> time_t and time() than it would be to fix clock_t and clock().

So provide a method for calibrating the epoch. The thing that clock_t
lacks is a fiducial point. If you need such a thing, by all means
implement a function to make it available, but do not change the
characteristics of time_t unnecessarily.

>> Conclusion: You are mixing the requirements for time_t with
>> the requirements for clock_t. As a result, your conclusion is
>> incorrect.
>
> Our requirements a priori are unrelated to particular types. We want
> a type that behaves like clock_t except with a fixed epoch. time_t is
> *already* this way on many implementations - that's why it's a good
> choice.

But on a fundamental level it is NOT. You require a linear type.
clock_t is already required to be such a type. time_t does not now
have that requirement. It should not be too difficult to design and
implement a routine that returns a clock_t with a specific epoch.
You don't even have to include it in the C standard. In fact, such
a routine would probably fit nicely into POSIX.

> What properties does time_t have that make it suitable for
> human-oriented use instead of machine-oriented use? Does it do this
> job better than struct tm? Why do you lump time_t in with struct tm
> instead of with clock_t?

See above. There are specific conversion routines between the two.
There is even a reasonable non-linear implementation: normalize
the struct tm and stuff the value of each member into an appropriate
bit field within the time_t. (By skimping on the size of the seconds
field, you can squeeze a useful range into a 32-bit integer. That's
simple enough that even Microsoft can do it! [And with minor variants,
they have.])
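
A sketch of such a non-linear encoding -- field widths are
illustrative, loosely modeled on FAT timestamps, and nothing here is
required by any standard:

#include <time.h>

/* Pack a normalized struct tm into 32 bits: 7 bits of year (since
   1980), 4 of month, 5 of day, 5 of hour, 6 of minute, and 5 of
   seconds stored in 2-second units -- the "skimping" mentioned
   above. Assumes years 1980..2107 and normalized fields. */
unsigned long pack_tm(const struct tm *t)
{
    return ((unsigned long)(t->tm_year - 80) << 25)
         | ((unsigned long)(t->tm_mon + 1)   << 21)
         | ((unsigned long)t->tm_mday        << 16)
         | ((unsigned long)t->tm_hour        << 11)
         | ((unsigned long)t->tm_min         <<  5)
         |  (unsigned long)(t->tm_sec / 2);
}

Values packed this way compare correctly for ordering within their
range, but their differences are meaningless as durations -- which is
exactly the situation difftime exists to handle.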

>> Physical properties are relevant to clock_t. It does NOT follow that
>> time_t has to have the same properties.
>
> It also does not follow that time_t is a bad choice of type to assign
> the properties we want.

Except that it breaks extant implementations. Why bust something when
there is a way to do it without breaking things?

>>> There is nothing that can be done to disambiguate an incomplete
>>> specification when converting to the uniform tick count, so
>>> probably that case should return an error from the conversion
>>> function. There ought to be a representation distinction like
>>> 02:30D vs. 02:30S so calendric notation could be used by people to
>>> unambiguously denote local time.
>>
>> First, you're arguing for a particular implementation which is NOT
>> what the standard should be doing.
>
> No, he's arguing for giving implementations room to get things right.
> The current interface is broken.

Oh, please. There is room now to do it your way if you insist. The
problem is that putting it in the standard would make everyone do it
your way, even if they didn't want to.

>> Third, you'd gratuitously label a lot of useful implementations as
>> non conforming without gaining anything by it.
>
> How so?

The standard is compulsory. If you insist that time_t's properties
be changed, the implementations that don't do it your way would no
longer be conforming. It's gratuitous because you could implement
routines to manipulate clock_t instances without changing the C
standard.

>> But I don't see a whole lot of additional gain to forcing time_t to
>> have the same properties as clock_t.
>
> The alternative would be to have a new clocktime() function that
> returns a clock_t value with a fixed-across-program-runs epoch. This
> would not be a bad idea, but if we require this, it wouldn't be any
> harder for implementations to do the same work for time() that they
> would have to do for clocktime().

Or something like it. However, any such routine would HAVE to have
operating system support. So, put it in your operating system standard
rather than our language standard. Let's be honest about this. Don't
try to smuggle any more operating system requirements into the language
than is absolutely necessary.

> Probably the best choice is to create a new function and new type.
> That way, guarantees provided by existing implementations (e.g.,
> time_t is a count of seconds) won't be affected by new properties we
> assign (e.g., newtime_t has (or may have) subsecond precision).

I think you have that right, except that those specifications belong
in some other standard, not in the C language standard.

mt...@cds.duke.edu

Max TenEyck Woodbury

unread,
Feb 8, 2001, 12:06:40 AM2/8/01
to
"Douglas A. Gwyn" wrote:
>
> Max TenEyck Woodbury wrote:
>> time_t, as a human-relevant type, does not NEED more resolution
>> than a second or two.
>
> You got it backwards -- time_t is used for an "actual" time,
> not the human end of the interface (that would be essentially
> the struct tm equivalent). The internal uniform time needs to
> be sufficiently precise for any application use on that platform,
> which suggests that the resolution ought to be left up to the
> implementation (but also accessible by the application).

Sorry. Read the standard. time_t is not required to be more than
a shorthand representation of the information in struct tm, and
it doesn't even have to be a very good one at that. Just because
some implementations do more than is required doesn't mean that
everybody should have to.

> One way of putting it is that the only reason for maintaining
> separate types clock_t and time_t was that the former usually
> doesn't have the range needed by the latter. But now that we
> have a mandatory 64-bit integer type, this distinction might be
> moot. Since there are currently no provisions for meaningful
> arithmetic on time_t (apart from difftime, which has very
> limited use), other than via the problematic conversion to
> struct tm and back, there is a clear deficiency. We can on
> the other hand perform meaningful arithmetic on clock_t, but
> that doesn't do us much good, because clock_t has no required
> relationship to time_t. The argument is that time_t ought to
> have clock_t-like properties so that we can *use* it for time-
> oriented computations, or else a new type ought to have both
> sets of properties. (An alternative would be to add a bunch
> of library functions to perform elementary arithmetic operations
> on time_t; that is not very appealing.)

There is a certain amount of sense in your argument about the
respective ranges of clock_t and time_t. It is in fact likely
that some implementations will go to a 64-bit time_t, just like
some implementations have gone to a 64-bit fpos_t. However, that
does not mean that the change should be REQUIRED, and the changes
you propose would have that effect.

In fact, if you use a little insight when reading the standard,
you might see that a number of things are deliberately NOT said
about the time types just so implementations with hardware limits
could be squeezed into conformance with the standard. I can
understand (and to some extent share) your frustration with such
limits, but ignoring them is not going to win you the argument.

The absence of arithmetic operations on time_t and the existence
of difftime are compelling arguments that time_t does not HAVE
to be a linear type. The existence of CLOCKS_PER_SEC makes it
clear that clock_t is a linear time type. The lack of a standard
conversion between the two is almost certainly deliberate. That does
not mean you could not write such a routine and build the necessary
support into the operating system. However, to get that routine
included in the C language standard, you would have to convince
people that A) it would benefit a significant number of people,
B) hurt almost no one, and C) did not fit better under some other
standard's purview. In my opinion, you have only done a modestly
good job on A, failed completely on B and ignored C.

>> Calculating the date of Easter is a different problem from
>> enumerating dates. Are there calendaric enumerations that require
>> components that can not be mapped into the currently mandated
>> members of struct tm?
>
> The ones we have include physically-derived ones (day and year)
> accidental ones (week, hour, minute, second), and physically-
> inspired but woefully inaccurate ones (month). The Chinese
> month and year do not coincide with the Western ones. The
> Middle-Eastern day does not coincide with the Western one. It
> is clear how culture has influenced the evolution of most of
> these. As I understand it, the Babylonians also had larger-than-
> year cycles. It is quite possible that some obscure tribe (that
> might become important due to sitting on important natural
> resources) has its own unique units for time. Actually in the US
> and probably elsewhere, we have a 12-hour unit whose values are
> AM and PM.

Just because they do not coincide does not mean that a mapping
does not exist. If I remember correctly, the Arabic calendar
might be considered to have 13 months. That does not mean that
their month would require a new member to struct tm. It would
require that the constraints that normalized the values of
tm_mon lie in 0..11 would change.

If a Mayan calendar were implemented, I believe additional
year-level fields would be required, but I don't see much
use for a Mayan calendar. For similar reasons, I suspect that
obscure tribes can also be ignored. However, I have a nagging
suspicion that some moderately influential group (maybe the
Baha'i?) might have a requirement for a cycle that does not
map into the existing set easily.

As for the 12 hour clock, there are extant mappings into
tm_hour, so that is not a problem. I was aware that many
cultures use solar time and designate the day transition
as either sun-up or sun-set, but that could still be mapped
into the existing struct tm.

>> ... But I'm not about to require the inclusion of a tm_epoch
>> member without some indication that it would be useful.
>
> Until now, nobody suggested such a thing (that I recall).
> The epoch would be inherent in the calendar part of the locale.

That was what I assumed also, but I mentioned it just to
make sure that everyone else agreed with that assumption.

>> The relativistic time corrections are not noticeable on the human
>> scale ...
>
> Not true! Internet gamers are well aware of real problems
> caused by latency. And what is "the human scale"? Any effect
> can easily be magnified by the system surrounding it; for example
> if satellites do not synchronize properly with the ground stations
> entire communication links could be severed.

Hmm. I suppose you could argue that all delays are due to
relativistic effects, but the corrections for differences in
clock rate due to differences in altitude or velocity are
more specifically relativistic and are considerably smaller than
human reaction times. Further, game event sequences should NOT
be timed using a calendar. With very rare exceptions, you do not
want the changes to and from daylight savings time or the
occurrence of a leap second to impact the flow of a game.

The satellite systems you mentioned should
not be using time_t and probably should not be using clock_t for
synchronization. Since time keeping is such a critical part of
their application, a specialized time type with much more
stringent requirements would normally be used. I could see the
application of time_t and struct tm in a debugging routine, a
GPS receiver or other human interface application.

The point is that time_t and struct tm are specifically specified
to minimally deal with calendar problems. Trying to impose other
requirements on time_t unnecessarily conflicts with their being
minimal.

>> First, you're arguing for a particular implementation which is NOT
>> what the standard should be doing.
>
> No, I'm arguing for specification of a certain useful set of
> properties. That certainly is something a standard can do,
> and whether it should do it needs to be argued on its merits.

I'll grant that you are not requiring a specific resolution, but
you are asking for changes that were fairly obviously considered
before and rejected.

>> Third, you'd gratuitously label a lot of useful implementations as
>> non conforming without gaining anything by it. Bluntly, it would
>> cost quite a bit of effort to make the changes you require. What
>> would the C community gain by requiring those changes?
>
> No, in fact I would not suggest that the new interface
> specification radically change the characteristics of the
> types previously specified (time_t, struct tm, clock_t).
> The connection between time_t and struct tm serves as
> a well-known model for the ways that the proposed uniform
> internal time and external calendric representation could
> be related, and clock_t is a model for a tick counter, but
> as I see it the purpose of this thread is not to identify a
> minimal set of tweaks to the existing API but rather to
> figure out what a better API would look like. *Both*
> could be specified in a future standard, or at least the
> new one needn't preclude simultaneous support for the old
> one in essentially the way it used to be implemented.

Now that makes sense, but is not quite what was being said.

mt...@cds.duke.edu

Keith Thompson

unread,
Feb 8, 2001, 2:37:39 AM2/8/01
to
Team-...@gmx.net (Niklas Matthies) writes:
> On 07 Feb 2001 18:54:44 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
> [...]
> > Yes, but a clock_t value returned from clock() isn't meaningful except
> > relative to an unspecified epoch, so apart from measuring intervals
> > during a *single* program run, it's useless.
>
> It's much worse than the epoch being unspecified. For example, when
> running this program
[...]

> on a common implementation, and suspending the program for a couple of
> minutes in between (e.g. Ctrl-Z under Unix) or running some other
> CPU-intensive program in parallel, the program will still output `5',
> even though it ran significantly longer than 5 seconds.

Which is exactly what it's supposed to do; the clock() function is
defined in terms of processor time, not real time.

--
Keith Thompson (The_Other_Keith) k...@cts.com <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
MAKE MONEY FAST!! DON'T FEED IT!!

Keith Thompson

unread,
Feb 8, 2001, 3:07:10 AM2/8/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Paul Jarc wrote:
[...]

> > Yes, but a clock_t value returned from clock() isn't meaningful except
> > relative to an unspecified epoch, so apart from measuring intervals
> > during a *single* program run, it's useless. It's much easier to fix
> > time_t and time() than it would be to fix clock_t and clock().
>
> So provide a method for calibrating the epoch. The thing that clock_t
> lacks is a fiducial point. If you need such a thing, by all means
> implement a function to make it available, but do not change the
> characteristics of time_t unnecessarily.

As I mentioned elsewhere in this thread, clock_t measures CPU time,
not elapsed time, so it's not useful for the same purposes.

With the guarantee (new in C99) of at least 64-bit integer types,
there's no real reason not to have a time type that covers both fine
precision and long intervals. You could measure the age of the Earth
in seconds in about 57 bits, leaving one bit for a sign and 6 for 1/64
second resolution. Or, more realistically, you could provide
nanosecond resolution over +/- 292 years, or microsecond resolution
over +/- 292,000 years.
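
The arithmetic is easy to check; a throwaway computation assuming a
signed 64-bit counter, i.e. 2^63 ticks on either side of the epoch:

#include <stdio.h>

int main(void)
{
    const double range = 9223372036854775808.0;      /* 2^63 */
    const double year  = 60.0 * 60 * 24 * 365.2422;  /* seconds per year */

    printf("1 ns ticks: +/- %.0f years\n", range / 1e9 / year); /* ~292 */
    printf("1 us ticks: +/- %.0f years\n", range / 1e6 / year); /* ~292,277 */
    return 0;
}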

Any existing code that makes any assumptions about how time_t maps to
real time is non-portable, since the standard makes no such
guarantees. Additional constraints in a future standard would
therefore be acceptable from a strictly standards-compliance point of
view. However, I agree that there's a lot of code out there that
makes such assumptions, and it shouldn't be broken arbitrarily.

My suggestion for the next C standard (C2010?): keep the existing time
types and functions as they are, but deprecate them. Add new types
and functions that fix the problems with the old interface while being
flexible enough to allow for further compatible enhancements. For
example, mktime() is the inverse of localtime(); where is the
corresponding inverse of gmtime()? (One is sketched below.) It's
possible to use gmtime() and
localtime() to figure out the system's idea of the current time zone,
but there's no portable direct way to get at this piece of information
that the OS already knows about. The asctime() implementation in the
standard breaks for years after 9999; this is a problem now for
systems that are capable of representing such times.
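
Such an inverse exists on many systems as the nonstandard timegm().
A portable sketch is only possible by assuming an encoding the
standard does not promise -- here, that time_t counts seconds since
1970-01-01 00:00 UTC with no leap seconds (my_timegm is a made-up
name, valid for normalized broken-down dates from 1970 on):

#include <time.h>

time_t my_timegm(const struct tm *tm)
{
    static const int before_month[12] =
        { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };
    long year = tm->tm_year + 1900L;
    long days = (year - 1970) * 365L
              + (year - 1969) / 4        /* leap days since the epoch */
              - (year - 1901) / 100
              + (year - 1601) / 400
              + before_month[tm->tm_mon]
              + tm->tm_mday - 1;

    if (tm->tm_mon >= 2 && year % 4 == 0
        && (year % 100 != 0 || year % 400 == 0))
        days++;                          /* leap day in the current year */
    return ((time_t)days * 24 + tm->tm_hour) * 3600
         + tm->tm_min * 60 + tm->tm_sec;
}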

Yes, this is easier said than done, and there are a lot of issues that
have to be worked out. Leap seconds are particularly nasty. I think
there was an effort to add something like this in C99, but it wasn't
completed. That's fine; it was better to leave it out than to
standardize something that wasn't sufficiently stable. But I think we
can do better next time. (Where the word "we" refers to the
underappreciated volunteers who do the hard work while I sit back and
complain 8-)}.)

(There's a book called "Standard C Date/Time Library", but I think the
implementation it describes is proprietary; I suspect the word
Standard modifies "C", not "Library".)

Niklas Matthies

unread,
Feb 8, 2001, 7:06:00 AM2/8/01
to
On Thu, 8 Feb 2001 01:59:45 GMT, Douglas A. Gwyn <gw...@arl.army.mil> wrote:
> Niklas Matthies wrote:
> > I agree with Douglas Gwyn that a time_t value should mean one and only
> > one time (relative to a given locale), namely the number of ticks (which
> > all have the same length) since some fixed start-of-epoch. The length of
> > a tick and the start of the epoch should be implementation-defined, but
> > determinable by the program.
>
> I wouldn't have time_t be locale-dependent (although the
> conversion to and from calenderic representation would be).

I agree, I didn't actually mean what I wrote (duh). I meant that the
time represented by a time_t value depends on the epoch and tick length,
which should be implementation-defined, but not locale-dependent.

-- Niklas Matthies

Niklas Matthies

unread,
Feb 8, 2001, 7:15:04 AM2/8/01
to
On 07 Feb 2001 23:37:39 -0800, Keith Thompson <k...@cts.com> wrote:
> Team-...@gmx.net (Niklas Matthies) writes:
> > On 07 Feb 2001 18:54:44 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
> > [...]
> > > Yes, but a clock_t value returned from clock() isn't meaningful except
> > > relative to an unspecified epoch, so apart from measuring intervals
> > > during a *single* program run, it's useless.
> >
> > It's much worse than the epoch being unspecified. For example, when
> > running this program
> [...]
> > on a common implementation, and suspending the program for a couple of
> > minutes in between (e.g. Ctrl-Z under Unix) or running some other
> > CPU-intensive program in parallel, the program will still output `5',
> > even though it ran significantly longer than 5 seconds.
>
> Which is exactly what it's supposed to do; the clock() function is
> defined in terms of processor time, not real time.

Yes, of course that's what it's supposed to do. The aim was to point out
that currently there are no (guaranteed) means of obtaining a clock_t
that measures elapsed real-time. And adding such a mean might be
dangerous because people will get confused about whether a clock_t value
contains a CPU-time interval or a real-time interval.

-- Niklas Matthies

Paul Jarc

unread,
Feb 8, 2001, 12:12:14 PM2/8/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Paul Jarc wrote:
> > Your conclusion involves time_t, but your preceding remarks have
> > nothing to do with time_t. Why isn't your conclusion about struct tm
> > instead?
>
> Because there are specific routines that tie struct tm and time_t
> to each other.

So? We want a tick-counter that can be converted to and from a struct
tm-like calendric representation. That doesn't limit the tick-counter
to calendric uses. Likewise, the fact that time_t can be converted to
and from struct tm doesn't mean that it ought to be useless for
anything else.

> > Yes, but a clock_t value returned from clock() isn't meaningful except
> > relative to an unspecified epoch, so apart from measuring intervals
> > during a *single* program run, it's useless. It's much easier to fix
> > time_t and time() than it would be to fix clock_t and clock().
>
> So provide a method for calibrating the epoch.

As has already been explained, that would not be sufficient. clock_t
currently is never used for measuring actual time, only processor
time. Using any existing type for a high-precision, long-range tick
counter is going to cause problems, but using clock_t would also cause
confusion about what sort of time is being measured.

> >> Conclusion: You are mixing the requirements for time_t with
> >> the requirements for clock_t. As a result, your conclusion is
> >> incorrect.
> >
> > Our requirements a priori are unrelated to particular types. We want
> > a type that behaves like clock_t except with a fixed epoch. time_t is
> > *already* this way on many implementations - that's why it's a good
> > choice.
>
> But on a fundamental level it is NOT. You require a linear type.
> clock_t is already required to be such a type. time_t does not now
> have that requirement.

It's not a property of the types per se. Both clock_t and time_t can
store sequential values. Both clock() and time() (as specified by the
standard) are useless for getting an externally meaningful tick count.
The difference is that on some systems, time() is useful for that
purpose. On no systems (AFAIK) is clock() useful in that way.

> It should not be too difficult to design and implement a routine
> that returns a clock_t with a specific epoch.

It would be no more difficult to do so with time_t. The encoding of
those values might be different from the values returned by time(),
but the same issue would be present with clock_t and clock().

> Oh, please. There is room now to do it your way if you insist. The
> problem is that putting it in the standard would make everyone do it
> your way, even if they didn't want to.

No. No one is forced to conform to the standard. If there's demand
for almost-conforming implementations, then there's an opportunity for
implementors to create them.

> >> Third, you'd gratuitously label a lot of useful implementations as
> >> non conforming without gaining anything by it.
> >
> > How so?
>
> The standard is compulsory.

I meant, what are some examples of platforms where it would be
unreasonably difficult to make time_t be a tick-counter? What are
some examples of existing code that relies on the present encoding of
time_t on such implementations?

> > The alternative would be to have a new clocktime() function that
> > returns a clock_t value with a fixed-across-program-runs epoch.
>
> Or something like it. However, any such routine would HAVE to have
> operating system support.

The same could be said of I/O, signal handling, and other features of
the standard library. If we remove them all, then there's no longer
such a thing as a portable, useful C program.


paul

Max TenEyck Woodbury

unread,
Feb 8, 2001, 5:49:32 PM2/8/01
to
Keith Thompson wrote:

> As I mentioned elsewhere in this thread, clock_t measures CPU time,
> not elapsed time, so it's not useful for the same purposes.

Not QUITE correct. clock() measures CPU time. Some other function
might very well measure other kinds of time using a clock_t type.
The point was that clock_t has the characteristics needed for
measurement; time_t does not.

> With the guarantee (new in C99) of at least 64-bit integer types,
> there's no real reason not to have a time type that covers both fine
> precision and long intervals. You could measure the age of the Earth
> in seconds in about 57 bits, leaving one bit for a sign and 6 for 1/64
> second resolution. Or, more realistically, you could provide
> nanosecond resolution over +/- 292 years, or microsecond resolution
> over +/- 292,000 years.

BUT the standard has no business REQUIRING that a 64-bit value be
used. That should be an implementation decision.

> Any existing code that makes any assumptions about how time_t maps to
> real time is non-portable, since the standard makes no such
> guarantees. Additional constraints in a future standard would
> therefore be acceptable from a strictly standards-compliance point of
> view. However, I agree that there's a lot of code out there that
> makes such assumptions, and it shouldn't be broken arbitrarily.

Again, not quite. The mapping to real-time has to be done with
the difftime function. On the other hand, the mapping from clock_t
to real time is specified completely by CLOCKS_PER_SEC.
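
That mapping is the one piece of time arithmetic the standard does
guarantee; the familiar pattern:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    /* ... the computation being measured ... */
    clock_t end = clock();

    /* CLOCKS_PER_SEC converts clock_t ticks to seconds of CPU time. */
    printf("CPU time used: %f seconds\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}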

One important point about the time section of the standard is that
it is intentionally minimal. It is the simplest specification that
gets the job done. A lot of work went into making it minimal. Unless
there is a compelling reason, extant code that depends on that
minimality should not be broken.

> My suggestion for the next C standard (C2010?): keep the existing time
> types and functions as they are, but deprecate them. Add new types
> and functions that fix the problems with the old interface while being
> flexible enough to allow for further compatible enhancements. For
> example, mktime() is the inverse of localtime(); where is the
> corresponding inverse of gmtime(). It's possible to use gmtime() and
> localtime() to figure out the system's idea of the current time zone,
> but there's no portable direct way to get at this piece of information
> that the OS already knows about. The asctime() implementation in the
> standard breaks for years after 9999; this is a problem now for
> systems that are capable of representing such times.

I disagree. A component of the standard should only be deprecated
in the presence of an extant superior alternative. Since the
alternative has not yet been presented completely, it is premature
to even consider deprecating the current requirements. Further, the
parts of the proposal presented so far fail to be minimal. That
makes them an inferior alternative, not a superior one.

As for time zones and gmtime, time zones are not a universal concept.
Also, the C language standard is not the only place where standardized
methods of accessing information are specified. Since other standards
define methods for getting the time zone information, please don't go
stepping on their toes by trying to do their job for them.

As for the asctime limit, two questions need to be answered:
A) is 4 digits an exact specification or a minimal specification?
B) is the question urgent enough that it has to be answered now or
can the decision be postponed to some later standard cycle (like
C9950 or so)?

> Yes, this is easier said than done, and there are a lot of issues that
> have to be worked out. Leap seconds are particularly nasty. I think
> there was an effort to add something like this in C99, but it wasn't
> completed. That's fine; it was better to leave it out than to
> standardize something that wasn't sufficiently stable. But I think we
> can do better next time. (Where the word "we" refers to the
> underappreciated volunteers who do the hard work while I sit back and
> complain 8-)}.)

Hmm. Leap seconds typically account for a shift of one second per year.
That's 1 part in 60x60x24x365.2422 (= 31556926.08), i.e. roughly one
part in 3x10^7. From that I suspect that the
problem was how to write the specification so that those implementations
that wanted to implement them could do so without breaking those
implementations that wanted to ignore them. That is indeed a tough
problem.



> (There's a book called "Standard C Date/Time Library", but I think the
> implementation it describes is proprietary; I suspect the word
> Standard modifies "C", not "Library".)

Who published it and who is the author(s)? I've got copies of Plauger's
"Standard C Library" (both editions) and it has a section on <time.h>,
but this seems to be even more specialized.

mt...@cds.duke.edu

Keith Thompson

unread,
Feb 8, 2001, 7:25:23 PM2/8/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Keith Thompson wrote:
> > As I mentioned elsewhere in this thread, clock_t measures CPU time,
> > not elapsed time, so it's not useful for the same purposes.
>
> Not QUITE correct. clock() measures CPU time. Some other function
> might very well measure other kinds of time using a clock_t type.
> The point was that clock_t has the characteristics needed for
> measurement; time_t does not.

The standard says that clock_t and time_t are "arithmetic types
capable of representing times". The difference is that clock_t is
required to map linearly to an actual number of seconds, while time_t
is not.

Note that on Solaris, clock_t is typedef'ed as long, and
CLOCKS_PER_SEC is 1000000; this means the result returned by clock()
wraps around after about 36 minutes of CPU time. As far as I know,
that's a legal implementation. It's not adequate for most time
measurement purposes.

> > With the guarantee (new in C99) of at least 64-bit integer types,
> > there's no real reason not to have a time type that covers both fine
> > precision and long intervals. You could measure the age of the Earth
> > in seconds in about 57 bits, leaving one bit for a sign and 6 for 1/64
> > second resolution. Or, more realistically, you could provide
> > nanosecond resolution over +/- 292 years, or microsecond resolution
> > over +/- 292,000 years.
>
> > BUT the standard has no business REQUIRING that a 64-bit value be
> used. That should be an implementation decision.

What I'm suggesting is that, as long as we have guaranteed 64-bit
integers, we might as well mandate a time type that takes advantage of
them.

[...]


> One important point about the time section of the standard is that
> it is intentionally minimal. It is the simplest specification that
> gets the job done. A lot of work went into making it minimal. Unless
> there is a compelling reason, extant code that depends on that
> minimality should not be broken.

Which is why I'm not suggesting doing away with time_t. A new
interface doesn't have to be as minimal as the current one; it can at
least provide the functionality that can be implemented easily. The
guarantee of 64-bit integers makes it easy to provide a single type
that provides both a large range and fine resolution. By supporting
this in (a future revision of) the standard, we can save a lot of
reinventing of the wheel.

> > My suggestion for the next C standard (C2010?): keep the existing time
> > types and functions as they are, but deprecate them. Add new types
> > and functions that fix the problems with the old interface while being
> > flexible enough to allow for further compatible enhancements. For
> > example, mktime() is the inverse of localtime(); where is the
> > corresponding inverse of gmtime()? It's possible to use gmtime() and
> > localtime() to figure out the system's idea of the current time zone,
> > but there's no portable direct way to get at this piece of information
> > that the OS already knows about. The asctime() implementation in the
> > standard breaks for years after 9999; this is a problem now for
> > systems that are capable of representing such times.
>
> I disagree. A component of the standard should only be deprecated
> in the presence of an extant superior alternative. Since the
> alternative has not yet been presented completely, it is premature
> to even consider deprecating the current requirements. Further, the
> parts of the proposal presented so far fail to be minimal. That
> makes them an inferior alternative, not a superior one.

I don't agree that minimality is such an important criterion.

*If* we can provide a new interface that's sufficiently superior, I
think deprecating time_t would be appropriate -- but I wouldn't mind
terribly if it weren't deprecated. In any case, deprecated features
take a *long* time to be removed.

> As for time zones and gmtime, time zones are not a universal concept.
> Also, the C language standard is not the only place where standardize
> methods of accessing information are specified. Since other standards
> define methods for getting the time zone information, please don't go
> stepping on their toes by trying to do their job for them.

The current C standard includes some support for time zones (the
distinction between gmtime() and localtime(), and the tm_isdst member
of struct tm); it just doesn't provide very good support. I should be
able to write portable C code that either handles time zones properly
or fails cleanly when the underlying system doesn't support them,
without relying on other standards. I can't do that now.

If other standards do a good job of supporting time zones, some of
that work can be incorporated into C. There's no good reason to have
multiple incompatible standards in this area.

> As for the asctime limit, two questions need to be answered:
> A) is 4 digits an exact specification or a minimal specification?

The standard requires the asctime function to use "the equivalent of
the following algorithm", followed by a C implementation of the
function. The given function will write past the bounds of an array
if (1900 + timeptr->tm_year) is outside the range -999 .. +9999.

> B) is the question urgent enough that it has to be answered now or
> can the decision be postponed to some later standard cycle (like
> C9950 or so)?

The problem exists now, though most software isn't likely to run into
it. On a Cray T3E, if I pass ctime() a pointer to a time_t value of
1000000000000 (1e12), it returns "Thu Sep 26 18:46:40 ****\n".
Allowing it to use a 5-digit year isn't a good solution, because the
caller is likely to assume that the result will fit in a char[26].

Consider a program that gets time_t values from an external source and
formats them for display using ctime() or asctime(). Such a program
could easily get a garbage value and invoke undefined behavior.
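
One stopgap available to such a program is to range-check the
broken-down time before calling asctime(); a sketch (checked_asctime
is an invented name, and the bounds are those of the standard's own
algorithm):

#include <time.h>

char *checked_asctime(const struct tm *timeptr)
{
    int year = 1900 + timeptr->tm_year;   /* as in the standard's algorithm */

    if (year < -999 || year > 9999)
        return NULL;                      /* would overflow asctime's buffer */
    return asctime(timeptr);
}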

[...]

> > (There's a book called "Standard C Date/Time Library", but I think the
> > implementation it describes is proprietary; I suspect the word
> > Standard modifies "C", not "Library".)
>
> Who published it and who is the author(s)? I've got copies of Plauger's
> "Standard C Library" (both editions) and it has a section on <time.h>,
> but this seems to be even more specialized.

Title: "Standard C Date/Time Library : Programming the World's
Calendars and Clocks"
Author: Lance Latham
Publisher: Miller Freeman
Publication date: December 1997
ISBN: 0879304960
Paperback, 560 pages.
$49.95 from barnesandnoble.com

The publisher says:

Lance Latham provides a library of C programming functions that
constitute a complete date and time toolkit. He details the Julian
calendar and the calendars of most major cultures of the world,
and he supplies the historical knowledge necessary to determine
the rules of use and the range of problems that programming a
solution must address.

Max TenEyck Woodbury

unread,
Feb 8, 2001, 7:45:22 PM2/8/01
to
Paul Jarc wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> Paul Jarc wrote:
>>> Your conclusion involves time_t, but your preceding remarks have
>>> nothing to do with time_t. Why isn't your conclusion about struct tm
>>> instead?
>>
>> Because there are specific routines that tie struct tm and time_t
>> to each other.
>
> So? We want a tick-counter that can be converted to and from a struct
> tm-like calendric representation. That doesn't limit the tick-counter
> to calendric uses. Likewise, the fact that time_t can be converted to
> and from struct tm doesn't mean that it ought to be useless for
> anything else.

So you want to break a bunch of implementations because you can't
be bothered to make up a new type with the properties you want and
want to replace a very carefully crafted specification with one of
your own devising...

>>> Yes, but a clock_t value returned from clock() isn't meaningful except
>>> relative to an unspecified epoch, so apart from measuring intervals
>>> during a *single* program run, it's useless. It's much easier to fix
>>> time_t and time() than it would be to fix clock_t and clock().
>>
>> So provide a method for calibrating the epoch.
>
> As has already been explained, that would not be sufficient. clock_t
> currently is never used for measuring actual time, only processor
> time. Using any existing type for a high-precision, long-range tick
> counter is going to cause problems, but using clock_t would also cause
> confusion about what sort of time is being measured.

Oh, come off it. clock_t is a type tied to a (particular) kind of real
time. It could just as easily be tied to other kinds of real time without
fundamental changes. As for its range, that's an implementation decision.
As mentioned elsewhere, the measurement of CPU time and its (lack of an)
epoch are embedded in the clock() function specification. There is no
reason that other functions that return clock_t could not have other
properties and still maintain the type-specific properties of clock_t.

>> But on a fundamental level it is NOT. You require a linear type.
>> clock_t is already required to be such a type. time_t does not now
>> have that requirement.
>
> It's not a property of the types per se. Both clock_t and time_t can
> store sequential values. Both clock() and time() (as specified by the
> standard) are useless for getting an externally meaningful tick count.
> The difference is that on some systems, time() is useful for that
> purpose. On no systems (AFAIK) is clock() useful in that way.

time_t is NOT required to be sequential. clock_t is. clock() has a
very specific function in that it accumulates a particular KIND of
tick. It is useful for the analysis of programs (and for charging for
computer time) and not much else. Other functions might accumulate
other kinds of ticks and have other uses.

You keep insisting that time_t be a tick count. While that might be
a useful implementation, it is not NECESSARY to the functions that
use it. I conclude you don't understand the reasons the standard
has not defined additional properties for time_t, and that further
discussion will be less than useful until you do.

I believe I understand the limits on time_t and I think a couple
additional constraints about comparisons might be added without
breaking even the cruftiest implementations, but I could be wrong,
and if I am, I am willing to drop those requirements. I don't
believe arithmetic operations can be added consistently without
major changes to at least some implementations.

>> It should not be too difficult to design and implement a routine
>> that returns a clock_t with a specific epoch.
>
> It would be no more difficult to do so with time_t. The encoding of
> those values might be different from the values returned by time(),
> but the same issue would be present with clock_t and clock().

It depends on the implementation. Because clock_t is a complete
sequential type, implementing the additional functions using it
must be possible, except for range limitations, without changing the
other function implementations. Because time_t is NOT necessarily
a complete sequential type, changing it to have the properties
necessary to implement these functions could well require changes
to other function implementations. That would make implementations
based on time_t more difficult than implementations based on clock_t.

>> Oh, please. There is room now to do it your way if you insist. The
>> problem is that putting it in the standard would make everyone do it
>> your way, even if they didn't want to.
>
> No. No one is forced to conform to the standard. If there's demand
> for almost-conforming implementations, then there's an opportunity for
> implementors to create them.

This has been covered MANY times before. (At one point I have even
said something like it.) Historically, when that is said, the
author is about to discover one of his pedal extremities inserted in
his oral cavity. (I'm trying to express something considerably
less polite here.) Standards by their nature are compulsory.

>>>> Third, you'd gratuitously label a lot of useful implementations as
>>>> non conforming without gaining anything by it.
>>>
>>> How so?
>>
>> The standard is compulsory.
>
> I meant, what are some examples of platforms where it would be
> unreasonably difficult to make time_t be a tick-counter? What are
> some examples of existing code that relies on the present encoding of
> time_t on such implementations?

It isn't a matter of platform or even of unreasonable difficulty.
It's a question of existing and potential implementations. It's
a question of gratuitous changes. Microsoft encodes its file dates
in something that was probably a time_t type at one point and it
is NOT a completely sequential type. I could be wrong, but I do not
believe the RSX or RSTS time_t types were completely sequential.
This may be old, crufty stuff, but it still has to be dealt with.

>>> The alternative would be to have a new clocktime() function that
>>> returns a clock_t value with a fixed-across-program-runs epoch.
>>
>> Or something like it. However, any such routine would HAVE to have
>> operating system support.
>
> The same could be said of I/O, signal handling, and other features of
> the standard library. If we remove them all, then there's no longer
> such a thing as a portable, useful C program.

Those ARE controversial and a lot of effort went into showing that
each was absolutely necessary (although there is some pressure to
throw signal handling out) and that a wide variety of different
implementations could be covered by the standard. I do not suggest
that the problem be ignored. I suggest that the proposed changes have
not been adequately wrung out by people who understand the limits
of standards and the wide variety of implementations they have to
cover.

I brought the question up because there are systems that do NOT
provide fine grained tick counts but do provide something that
could be turned into a time_t or a struct tm (e.g. MSDOS). It is
ridiculous to require something like a palm-pilot to have a TAI
quality clock built into it, but that is almost what is required
for the leap second issue to become a real problem. (OK. I'm
exaggerating for effect. Good quartz clocks lack the necessary
accuracy by only a small amount but still come up short.)

The point was that the C language standard may not be the place
for this specification. One of the operating system standards
(like POSIX) may be a more appropriate repository for this. It
might also be added as a codicil to the time measurement standards
but that might be a new precedent.

mt...@cds.duke.edu

Niklas Matthies

unread,
Feb 8, 2001, 7:52:38 PM2/8/01
to
On Thu, 08 Feb 2001 19:45:22 -0500, Max TenEyck Woodbury <mt...@cds.duke.edu> wrote:
[···]

> I brought the question up because there are systems that do NOT
> provide fine grained tick counts but do provide something that
> could be turned into a time_t or a struct tm (e.g. MSDOS). It is
> ridiculous to require something like a palm-pilot to have a TAI
> quality clock built into it, but that is almost what is required
> for the leap second issue to become a real problem. (OK. I'm
> exaggerating for effect. Good quartz clocks lack the necessary
> accuracy by only a small amount but still come up short.)

Requirements on time_t do not imply requirements on the hardware or on
the accuracy of the values returned by time().

-- Niklas Matthies

Max TenEyck Woodbury

unread,
Feb 8, 2001, 9:44:36 PM2/8/01
to
Keith Thompson wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> Keith Thompson wrote:
>
> Note that on Solaris, clock_t is typedef'ed as long, and
> CLOCKS_PER_SEC is 1000000; this means the result returned by clock()
> wraps around after about 36 minutes of CPU time. As far as I know,
> that's a legal implementation. It's not adequate for most time
> measurement purposes.

OK, so the implementation is a poor one and you need something
better than clock_t in terms of precision. That doesn't change
the conclusion that time_t is not the type to use either...
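
For what it's worth, the wrap interval follows directly from the
width of the type and CLOCKS_PER_SEC. A throwaway sketch, assuming
the Solaris case above (clock_t is a 32-bit signed long):

#include <stdio.h>
#include <limits.h>
#include <time.h>

int main(void)
{
    /* Assumes clock_t is long, as on the Solaris example above. */
    double wrap = (double)LONG_MAX / CLOCKS_PER_SEC;
    printf("clock() wraps after about %.1f minutes of CPU time\n",
           wrap / 60.0);
    return 0;
}

With CLOCKS_PER_SEC at 1000000 that prints roughly 35.8 minutes.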

> What I'm suggesting is that, as long as we have guaranteed 64-bit
> integers, we might as well mandate a time type that takes advantage of
> them.

A) If so, it should be a new type and it should definitely NOT
replace time_t and should probably not replace clock_t.

B) Consider stuffing it in the POSIX standard rather than in the
C language standard. If you insist on merging it with clock_t,
it should probably be left as a Quality of Implementation (QoI)
issue.

>> One important point about the time section of the standard is that
>> it is intentionally minimal. It is the simplest specification that
>> gets the job done. A lot of work went into making it minimal. Unless
>> there is a compelling reason, extant code that depends on that
>> minimality should not be broken.
>
> Which is why I'm not suggesting doing away with time_t. A new
> interface doesn't have to be as minimal as the current one; it can at
> least provide the functionality that can be implemented easily. The
> guarantee of 64-bit integers makes it easy to provide a single type
> that provides both a large range and fine resolution. By supporting
> this in (a future revision of) the standard, we can save a lot of
> reinventing of the wheel.

Hmm. The question that comes to my mind is whether this belongs in
the C language standard or in an operating system standard, or
should be left as a QoI issue.

> I don't agree that minimality is such an important criterion.

That's your prerogative, but I've seen it kill quite a few proposals.
If you can show that the impact of the proposed change is small, you
have a much better chance of getting it accepted than if you do not
do this.

> *If* we can provide a new interface that's sufficiently superior, I
> think deprecating time_t would be appropriate -- but I wouldn't mind
> terribly if it weren't deprecated. In any case, deprecated features
> take a *long* time to be removed.

And you need a lot of ammunition to do it. Further, you set yourself
up as a target when you propose it prematurely. The only practical
way to get something deprecated is to produce something of such
obviously superior quality that the C community as a whole embraces
it and ignores the old stuff. The less your solution generates this
kind of response, the longer it takes to clear the old stuff out.
Frankly, time issues are not going to generate the intensity of
interest to get this done quickly.

> The current C standard includes some support for time zones (the
> distinction between gmtime() and localtime(), and the tm_isdst member
> of struct tm); it just doesn't provide very good support. I should be
> able to write portable C code that either handles time zones properly
> or fails cleanly when the underlying system doesn't support them,
> without relying on other standards. I can't do that now.

Yes, there is only minimal time zone support in C because there is
considerable variation in time zone support by the underlying host
systems. Further, the specifications for time zones is already the
subject of other standards and are the subject of quite a bit of
political wrangling. Until time zone specifications stabilize, an
attempt to smuggle a particular requirement into the domain of the
standards community is going to generate a lot of ill will. You
can try to do it, but that could damage your efforts to get other,
more useful changes made.

(O.K. I'm a chicken, but I don't have the resources to fight this
fight and there are already quite a few big guns aimed in what is
probably the right direction. In other words, I think the
implementers are already working on this and they are more likely
to come up with workable solutions while down in the trenches than
anybody like me who only has a broad view of what is going on.)

> If other standards do a good job of supporting time zones, some of
> that work can be incorporated into C. There's no good reason to have
> multiple incompatible standards in this area.

Or just have C ignore the problem and let the other standards
use C as part of their base. While I don't think all standards are
equal, I'm not about to get into a discussion of their relative
merits or priorities.

>> As for the asctime limit, two questions need to be answered:
>> A) is 4 digits an exact specification or a minimal specification?
>
> The standard requires the asctime function to use "the equivalent of
> the following algorithm", followed by a C implementation of the
> function. The given function will write past the bounds of an array
> if (1900 + timeptr->tm_year) is outside the range -999 .. +9999.

Ah, so the result depends on the details of the application and some
of them would break if the constraint was changed. Would you agree
that a note on how to avoid the resulting undefined behavior would
be helpful?

>> B) is the question urgent enough that it has to be answered now or
>> can the decision be postponed to some later standard cycle (like
>> C9950 or so)?
>
> The problem exists now, though most software isn't likely to run into
> it. On a Cray T3E, if I pass ctime() a pointer to a time_t value of
> 1000000000000 (1e12), it returns "Thu Sep 26 18:46:40 ****\n".
> Allowing it to use a 5-digit year isn't a good solution, because the
> caller is likely to assume that the result will fit in a char[26].
>
> Consider a program that gets time_t values from an external source and
> formats them for display using ctime() or asctime(). Such a program
> could easily get a garbage value and invoke undefined behavior.

Yep. That makes it an application QoI problem. It's always a good idea
to validate your input, but the C standard has no business requiring
it, only bouncing you on your ear (or whatever else you decide to call
undefined behavior) when you don't.
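
If an application wants to protect itself, a trivial wrapper will
do. A sketch (the name is made up; the -999 .. +9999 range comes
from the algorithm quoted above):

#include <time.h>

/* Refuse years that would overflow asctime()'s char[26] result. */
char *safe_asctime(const struct tm *t)
{
    if (t == NULL
        || 1900 + t->tm_year < -999
        || 1900 + t->tm_year > 9999)
        return NULL;
    return asctime(t);
}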

> Title: "Standard C Date/Time Library : Programming the World's
> Calendars and Clocks"
> Author: Lance Latham
> Publisher: Miller Freeman
> Publication date: December 1997
> ISBN: 0879304960
> Paperback, 560 pages.
> $49.95 from barnesandnoble.com
>
> The publisher says:
>
> Lance Latham provides a library of C programming functions that
> constitute a complete date and time toolkit. He details the Julian
> calendar and the calendars of most major cultures of the world,
> and he supplies the historical knowledge necessary to determine
> the rules of use and the range of problems that programming a
> solution must address.

Thank you for the reference. Could you check and see if the book
mentions ACM Collected Algorithm 199? I've relied on that algorithm
in the past to good ends. CACM 199 implements the Gregorian calendar
with a year length of 365.2425 days, only a little off from the
physical year of 365.2422 days. If the book references it, I'd
bet it had a good bibliography and was well written. If not, I'll
need to examine a copy before I buy it...
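
For the curious, the widely quoted Fliegel and Van Flandern
Gregorian-to-Julian-day-number conversion gives the flavor; whether
this is character-for-character what CACM 199 published is an
assumption on my part. Note the 1461: the number of days in a
four-year Julian cycle.

/* Gregorian y/m/d to Julian day number. Integer division must
   truncate toward zero, as C99 requires. */
long gregorian_to_jdn(int y, int m, int d)
{
    return (1461L * (y + 4800L + (m - 14) / 12)) / 4
         + (367L * (m - 2 - 12 * ((m - 14) / 12))) / 12
         - (3L * ((y + 4900L + (m - 14) / 12) / 100)) / 4
         + d - 32075L;
}

The difference of two such day numbers is an exact day count, with
no struct tm in sight.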

mt...@cds.duke.edu

Max TenEyck Woodbury

unread,
Feb 8, 2001, 9:55:54 PM2/8/01
to
Niklas Matthies wrote:
>
> Requirements on time_t do not imply requirements on the hardware or on
> the accuracy of the values returned by time().

OK, but the standard's REQUIREMENTS on time_t should be no more than
those needed to meet the functional requirements or pass on the
information provided by the host system (if that information is
available). There is no question that an implementation can exceed
the requirements. The problem is requiring more than is needed.

mt...@cds.duke.edu

Keith Thompson

unread,
Feb 9, 2001, 2:38:19 AM2/9/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Keith Thompson wrote:
> > Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> >> Keith Thompson wrote:
> > Note that on Solaris, clock_t is typedef'ed as long, and
> > CLOCKS_PER_SEC is 1000000; this means the result returned by clock()
> > wraps around after about 36 minutes of CPU time. As far as I know,
> > that's a legal implementation. It's not adequate for most time
> > measurement purposes.
>
> OK, so the implementation is a poor one and you need something
> better than clock_t in terms of precision. That doesn't change
> the conclusion that time_t is not the type to use either...

I agree that time_t is not the type to use, and I think that we're
probably stuck with the current design of time_t and struct tm for the
foreseeable future.

I would tentatively suggest some minor tweaks to the current
interface. For one thing, I would advocate mandating that arithmetic
comparisons on time_t values should be meaningful, that the mapping to
real time should be linear, and probably that it should be an integer
type -- *if and only if* all existing implementations already satisfy
these constraints. I don't know of any implementations that would be
broken by these changes. If there are any such implementations, I'll
withdraw the suggestion.

Another thing I would suggest is a few additions to fill in the
missing parts of the abstraction. The current standard formalizes the
previously existing practice, which was a semi-random collection of
types and functions that were implemented because they were
individually useful at the time. As I mentioned before, mktime() is
the inverse of localtime(); there should be a corresponding inverse of
gmtime(). There should be a way of getting the local time zone's
offset from UTC; the system has to know this anyway to implement both
gmtime() and localtime(), so I should be able to ask for it.
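
Today the best one can do portably is approximate it by
round-tripping through gmtime() and mktime(). A sketch (the name is
made up; DST edge cases and error checking are ignored):

#include <time.h>

double utc_offset_seconds(time_t t)
{
    struct tm utc = *gmtime(&t);   /* UTC fields for t */
    utc.tm_isdst = 0;              /* don't let mktime apply DST */
    /* mktime reinterprets those fields as local time; the gap
       between the two time_t values is the zone offset. */
    return difftime(t, mktime(&utc));
}

A positive result means east of UTC. The library already knows this
number; it shouldn't make us reconstruct it this way.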

(The problem with this is getting enough people to agree on what
should be added. Quite possibly the only way to reach consensus would
be to agree to add nothing, which I think is what happened for C99.)

Finally, there should be a well-designed, orthogonal time library,
capable of handling both long ranges and fine precisions. Perhaps
POSIX is the place to do this, but I can't think of a good reason to
deny this functionality to non-POSIX systems. Time handling has to be
implemented on top of facilities provided by the operating system, but
it's really more of an interface to the real world.

> > I don't agree that minimality is such an important criterion.
>
> That's your prerogative, but I've seen it kill quite a few proposals.
> If you can show that the impact of the proposed change is small, you
> have a much better chance of getting it accepted than if you do not
> do this.

I'd also have a much better chance of getting it accepted if I had the
time and energy to put together a concrete proposal. Unfortunately, I
don't.

[...]

> >> As for the asctime limit, two questions need to be answered:
> >> A) is 4 digits an exact specification or a minimal specification?
> >
> > The standard requires the asctime function to use "the equivalent of
> > the following algorithm", followed by a C implementation of the
> > function. The given function will write past the bounds of an array
> > if (1900 + timeptr->tm_year) is outside the range -999 .. +9999.
>
> Ah, so the result depends on the details of the application and some
> of them would break if the constraint was changed. Would you agree
> that a note on how to avoid the resulting undefined behavior would
> be helpful?

I think ctime() and asctime() are a unique problem. The standard
mandates that the implementation shall be equivalent to a specific
code sample; I don't think any other functions are defined that way.
(The rand() implementation is an example, not a mandate.)
Unfortunately, the provided code has a bug in it. The bug could be
fixed by requiring the year field of the result string to be set to
"****" for years after 9999.

> > Title: "Standard C Date/Time Library : Programming the World's
> > Calendars and Clocks"
> > Author: Lance Latham
> > Publisher: Miller Freeman
> > Publication date: December 1997
> > ISBN: 0879304960
> > Paperback, 560 pages.
> > $49.95 from barnesandnoble.com

[...]


> Thank you for the reference. Could you check and see if the book
> mentions ACM Collected Algorithm 199.

Sorry, I've never read the book and I don't have access to a copy of
it.

Max TenEyck Woodbury

unread,
Feb 9, 2001, 11:31:44 AM2/9/01
to
Keith Thompson wrote:
>
> I would tentatively suggest some minor tweaks to the current
> interface. ...

...Provided they really are minor...

> .......... For one thing, I would advocate mandating that arithmetic
> comparisons on time_t values should be meaningful, ...

That probably is minor. At worst it might require changing the order
of the bit fields inside time_t, if that is in fact how an implementation
does time_t. Another possibility is that the type might have to be
changed from signed to unsigned or unsigned to signed. But, if some
implementor raises a stink, expect this to die...

> .................................................. that the mapping to
> real time should be linear, ...

You almost certainly will NOT get this one. It makes a lot of sense for
an implementation to break time_t into a number of independent bit fields
packed into an integer type. If an implementation uses bit fields inside
time_t, it would take a complete redesign of several routines to change
this behavior.

> ........................... and probably that it should be an integer
> type -- ...

Since time_t is required to be size_t, you've already got that. In
fact, I suspect that this requirement is ignored on small machines
since you can't even fit the number of seconds in a day into a 16
bit unsigned short and that is an allowed size_t. (24*60*60=86400).
[Cuss Schildt's book. His implementation limits section is truncated
and I suspect there is something in there that could preclude such
small size_t. My loose-leaf copy of the standard is at home...]

> ....... *if and only if* all existing implementations already satisfy
> these constraints. I don't know of any implementations that would be
> broken by these changes. If there are any such implementations, I'll
> withdraw the suggestion.

I don't have an in to any library implementations anymore, and it has
been almost a decade since I did have that kind of access, but a couple
implementations from that era separated time_t values into day counts
and tick-within-day counts. Microsoft encodes some of its time stamps
as double-seconds, day of the month, month of the year, and year for
a total of 32 bits. I don't know whether those implementations call
the type time_t, but they might...
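
For concreteness, a sketch of that sort of packed encoding (the
field layout follows my recollection of the MS-DOS file timestamp;
treat the details as approximate):

/* Two-second units, day, month and year packed into 32 bits.
   Ordering the most slowly varying fields highest makes
   comparisons come out right, but raw arithmetic on the value
   is meaningless. */
unsigned long pack_dos_stamp(int year, int mon, int day,
                             int hour, int min, int sec)
{
    return ((unsigned long)(year - 1980) << 25)
         | ((unsigned long)mon  << 21)
         | ((unsigned long)day  << 16)
         | ((unsigned long)hour << 11)
         | ((unsigned long)min  <<  5)
         | (unsigned long)(sec / 2);
}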

> Another thing I would suggest is a few additions to fill in the
> missing parts of the abstraction. The current standard formalizes the
> previously existing practice, which was a semi-random collection of
> types and functions that were implemented because they were
> individually useful at the time. As I mentioned before, mktime() is
> the inverse of localtime(); there should be a corresponding inverse of
> gmtime(). There should be a way of getting the local time zone's
> offset from UTC; the system has to know this anyway to implement both
> gmtime() and localtime(), so I should be able to ask for it.

Hmm. Given the standard C functions, how would you go about
getting the local time offset from UTC?

1) Get the current time using time(...) [or use some other time_t
value of interest].

2) convert and copy it to a pair of struct tm's using gmtime(...)
and localtime(...).

3) Check to make sure both struct tm's refer to the same day.

4) If they don't, modify the localtime-generated struct tm
by some number of hours [just make sure you don't cross any
daylight-savings transitions when you make the change], then use
mktime(...) to get a new time_t to work from and return to 2).

5) With the dates the same, a little arithmetic on tm_hour, tm_min
and tm_sec should get you the offset. You might want to make
sure the tm_min and tm_sec differences were positive using
something similar to the procedure used in 4). [Or should they
be the same sign as the tm_hour difference? That depends on
exactly what you want to do with the result...]

That's almost complicated enough to deserve a library function.
[And it's much simpler than some of the hoops the library code
might have to jump through to get the time offset from the
underlying host system.] Given that, it is not difficult to build
an inverse of gmtime based on mktime; see the sketch below. That
means your only difficulty would be selling the standard committee
on the need for it...
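
One possible sketch (my_timegm is an invented name; like everything
in this area it quietly assumes that time_t arithmetic works in
signed seconds, which the standard does not promise):

#include <time.h>

time_t my_timegm(const struct tm *utc_fields)
{
    struct tm tmp = *utc_fields;
    time_t guess, roundtrip;

    tmp.tm_isdst = 0;
    guess = mktime(&tmp);          /* UTC fields read as local */
    tmp = *gmtime(&guess);         /* how far off were we? */
    tmp.tm_isdst = 0;
    roundtrip = mktime(&tmp);
    return guess - (time_t)difftime(roundtrip, guess);
}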

> (The problem with this is getting enough people to agree on what
> should be added. Quite possibly the only way to reach consensus would
> be to agree to add nothing, which I think is what happened for C99.)

I wasn't part of C99, but I suspect that they bit off a little more than
they could chew. Adding an inverse to gmtime probably got lumped with a
bunch of other changes. If it had been a separate item, it might have
been added with little more debate than how useful it was.

> Finally, there should be a well-designed, orthogonal time library,
> capable of handling both long ranges and fine precisions. Perhaps
> POSIX is the place to do this, but I can't think of a good reason to
> deny this functionality to non-POSIX systems. Time handling has to be
> implemented on top of facilities provided by the operating system, but
> it's really more of an interface to the real world.

You might be surprised how much good an addition to POSIX would do.
With that change in place, you'd catch all the Unix variants and
the upper tier Microsoft systems. I don't know enough about the
MacOS to know how strongly they try to be POSIX compliant, but
I suspect they'd put an effort into a compliant implementation.
What you'd miss is the low end stuff like the old DOS systems.

> I think ctime() and asctime() are a unique problem. The standard
> mandates that the implementation shall be equivalent to a specific
> code sample; I don't think any other functions are defined that way.
> (The rand() implementation is an example, not a mandate.)
> Unfortunately, the provided code has a bug in it. The bug could be
> fixed by requiring the year field of the result string to be set to
> "****" for years after 9999.

Argh. Don't do that! Make them change the array size to 28 and add a
comment or footnote that it might need to be bigger if int's are
bigger. That '****' mess is one of the more questionable decisions
in FORTRAN. It was deliberately not imported into the standard
formatting routines. Let's not change THAT decision now. (Of course,
when I'm writing FORTRAN code, I'm just as nasty about field overflows
in C.) <Hmm, is there a DR on this and, if so, how was it resolved?>

mt...@cds.duke.edu

Paul Jarc

unread,
Feb 9, 2001, 11:40:10 AM2/9/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> So you want to break a bunch of implementations because you can't
> be bothered to make up a new type with the properties you want and
> want to replace a very carefully crafted specification with one of
> your utterance...

Not at all. I already said I'm in favor of creating a new type. But you
(seem to) say that if an old type is to be used, clock_t is better
than time_t; I disagree with that. CLOCKS_PER_SEC is *not* a property
of clock_t; it's a property of the *values* returned by clock().
(n869 7.23.2.1p3)

> Oh, come off it. clock_t is a type tied to a (particular) kind of real
> time.

No, clock() is tied. clock_t is specified *exactly* the same way
time_t is: they "are arithmetic types capable of representing times"
(n869 7.23.1p3). Also note that an implementation is even allowed to
have clock() always return (clock_t)-1 and let clock_t be char, or
float, or something else equally unsuitable for use as a tick counter
with a useful range. Even ignoring such implementations that don't
even try to give useful information, clock() is specified to return
only an approximation - in particular, it need not be a count of any
particular kind of tick; CLOCKS_PER_SEC could even be less than 1.

> As mentioned elsewhere, the measurement of CPU time and it's (lack of
> epoch) is embedded in the clock() function specification.

As is the relationship to real time.

> There is no reason that other functions that return clock_t could
> not have other properties and still maintain the type specific
> properties of clock_t.

Replace clock_t with time_t here, and the standard would support the
statement equally.

> You keep insisting that time_t be a tick count.

No, I'm insisting that using time_t is no worse than using clock_t.

> I conclude you don't understand the reasons the standard has not
> defined additional properties for time_t, and that further
> discussion will be less than useful until you do.

I conclude that you haven't read the standard regarding clock_t, and
that further discussion will be less than useful until you do.

> I believe I understand the limits on time_t

I believe you do as well. But I believe you do not understand the
limits on clock_t.

> I brought the question up because there are systems that do NOT
> provide fine grained tick counts but do provide something that
> could be turned into a time_t or a struct tm (e.g. MSDOS).

How do they implement clock_t and clock()?

> The point was that the C language standard may not be the place
> for this specification. One of the operating system standards
> (like POSIX) may be a more appropriate repository for this.

Possibly.


paul

Paul Jarc

unread,
Feb 9, 2001, 11:52:20 AM2/9/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Since time_t is required to be size_t,

Where did you get this from? I don't see it in n869.

> [Cuss Schildt's book. His implementation limits section is truncated
> and I suspect there is something in there that could preclude such
> small size_t. My loose-leaf copy of the standard is at home...]

If you're working from Schildt, you need to drop out of the discussion
until you can find a better source.

> modify the localtime generated struct tm by some number of hours,
> [just make sure you don't cross any daylight savings transitions
> when you make the change]

How would you go about ensuring that?


paul

Douglas A. Gwyn

unread,
Feb 9, 2001, 10:51:46 AM2/9/01
to
Max TenEyck Woodbury wrote:
> BUT the standard has no business REQUIRING that a 64-bit value be
> used. That should be an implementation decision.

It wouldn't be a requirement for 64 bits, but rather specification
of some integer type suitable for use in the specified ways. One
implication of the specified usage might be that 32 bits would be
infeasible. It would be best, as Keith and I have suggested, to
not impose new requirements on an existing type name that might
affect the feasibility of previous implementation choices, but
rather to introduce a new type name for the new specification.

> Again, not quite. The mapping to real-time has to be done with
> the difftime function.

Which does not support many commonly needed operations, such as
computing a time 5 seconds from the present.

> On the other hand, the mapping from clock_t
> to real time is specified completely by CLOCKS_PER_SEC.

No, clock_t returned by clock() measures an attribute of the
program execution, and in addition to a scaling factor one also
needs an epoch. clock_t as currently implemented might not
have enough bits to serve as a general representation for times.

> One important point about the time section of the standard is that
> it is intentionally minimal. It is the simplest specification that
> gets the job done. A lot of work went into making it minimal. Unless
> there is a compelling reason, extant code that depends on that
> minimality should not be broken.

Nobody has suggested that strictly conforming code should be
broken by the new specifications.

> I disagree. A component of the standard should only be deprecated
> in the presence of an extant superior alternative. Since the
> alternative has not yet been presented completely, it is premature
> to even consider deprecating the current requirements. Further, the
> parts of the proposal presented so far fail to be minimal. That
> makes them an inferior alternative, not a superior one.

Get real -- any deprecation would occur along with adoption of
a new interface, and the decision would be made *at that time*,
when the details *had* been worked out and the superiority had
become evident. Even so, a deprecated interface would still be
a mandatory requirement for conforming implementations for at
least one more 10-year cycle of the standard, i.e. until 2019
at the earliest. Implementations could continue to support it
as long as customers want it.

> As for time zones and gmtime, time zones are not a universal
> concept.

Insofar as the planet Earth is concerned, every location has one.

> Also, the C language standard is not the only place where standardize
> methods of accessing information are specified. Since other standards
> define methods for getting the time zone information, please don't go
> stepping on their toes by trying to do their job for them.

The C standards committee has already deferred to a working
group on time API, backing out its earlier attempt at
improving the C99 time API. So long as time/calendar functions
are part of the (hosted-environment) C standard at all, it
makes sense for us to be concerned about the suitability of
the one we inherited and possible improvements. Time is of
universal interest, just as is text, so useful basic operations
*should* be standardized at as low a level as feasible. The
embedded systems programmer, for example, can't expect that
POSIX time functions will be nearly as likely to be available
as the Standard C time functions.

> Hmm. Leap seconds typically account for a shift of one second per year.
> That's 1 part in 60x60x24x365.2422 (=31556926.08), which is more than 7
> digits to the left of the decimal point. From that I suspect that the
> problem was how to write the specification so that those implementations
> that wanted to implement them could do so without breaking those
> implementations that wanted to ignore them. That is indeed a tough
> problem.

Actually it is quite easy, and I already suggested how to do it
earlier in this thread.

Douglas A. Gwyn

unread,
Feb 9, 2001, 11:02:59 AM2/9/01
to
Max TenEyck Woodbury wrote:
> In fact, if you use a little insight when reading the standard,

Who do you think helped write the very specs we're discussing?
The time API portion of the C Standard is something I have taken
an especial interest in, *because* it is inadequate for many
commonly needed time-related operations. I was also involved in
discussion that led to ADO's time-zone library, for example.

Niklas Matthies

unread,
Feb 9, 2001, 1:39:04 PM2/9/01
to
On Fri, 9 Feb 2001 15:51:46 GMT, Douglas A. Gwyn <gw...@arl.army.mil> wrote:
> Max TenEyck Woodbury wrote:
[···]

> > Again, not quite. The mapping to real-time has to be done with
> > the difftime function.
>
> Which does not support many commonly needed operations, such as
> computing a time 5 seconds from the present.

Maybe a compromise would be to add a function addtime(), and possibly
integer counterparts to that function and difftime(), say:

intmax_t idifftime(time_t, time_t);
/* returns INTMAX_MIN for overflow */
time_t iaddtime(time_t, intmax_t /* SECONDS */);
/* returns (time_t) -1 for overflow */
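
One conceivable implementation sketch, assuming a POSIX-like time_t
that counts whole seconds (other encodings would have to go through
struct tm instead):

#include <stdint.h>
#include <time.h>

intmax_t idifftime(time_t t1, time_t t0)
{
    double d = difftime(t1, t0);
    if (d >= (double)INTMAX_MAX || d <= (double)INTMAX_MIN)
        return INTMAX_MIN;               /* overflow indicator */
    return (intmax_t)d;
}

time_t iaddtime(time_t t, intmax_t seconds)
{
    /* A real version would range-check and return (time_t)-1
       on overflow, per the specification above. */
    return t + (time_t)seconds;
}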

> Get real -- any deprecation would occur along with adoption of
> a new interface, and the decision would be made *at that time*,
> when the details *had* been worked out and the superiority had
> become evident. Even so, a deprecated interface would still be
> a mandatory requirement for conforming implementations for at
> least one more 10-year cycle of the standard, i.e. until 2019
> at the earliest. Implementations could continue to support it
> as long as customers want it.

Well, it might be a good idea to establish a new interface before 2038
comes too close. :)

-- Niklas Matthies

Douglas A. Gwyn

unread,
Feb 9, 2001, 2:48:03 PM2/9/01
to
Niklas Matthies wrote:
> Maybe a compromise would be to add a function addtime(), ...

However, apart from the clumsiness of using functions for
arithmetic operations, which is perhaps tolerable, that
still doesn't address the problem of the struct-tm-related
time value having hiccups near leap seconds (according to
what we were told earlier in this thread).

> Well, it might be a good idea to establish a new interface
> before 2038 comes too close. :)

Yes, that's why it ought to be addressed now.

Keith Thompson

unread,
Feb 9, 2001, 3:56:48 PM2/9/01
to
"Douglas A. Gwyn" <gw...@arl.army.mil> writes:
[...]

> The C standards committee has already deferred to a working
> group on time API, backing out its earlier attempt at
> improving the C99 time API.

That's good news. Does the working group have a web site?

Markus Kuhn has a proposal at
<http://www.cl.cam.ac.uk/~mgk25/c-time/>
but I don't know whether it's related to the working group you
mentioned.

Douglas A. Gwyn

unread,
Feb 9, 2001, 5:14:10 PM2/9/01
to
Keith Thompson wrote:
> That's good news. Does the working group have a web site?

Not that I know of. It has a mailing list, but there seems
to be no activity on it now. Therefore it might be useful
if somebody (you, perhaps?) were to spearhead a fresh effort
at an improved standard time API.

> Markus Kuhn has a proposal at
> <http://www.cl.cam.ac.uk/~mgk25/c-time/>
> but I don't know whether it's related to the working group
> you mentioned.

I don't know, either. I see his page says that Eggert also
has been working on this.

Niklas Matthies

unread,
Feb 9, 2001, 6:26:56 PM2/9/01
to
On Fri, 9 Feb 2001 19:48:03 GMT, Douglas A. Gwyn <gw...@arl.army.mil> wrote:
> Niklas Matthies wrote:
> > Maybe a compromise would be to add a function addtime(), ...
>
> However, apart from the clumsiness of using functions for
> arithmetic operations, which is perhaps tolerable, that
> still doesn't address the problem of the struct-tm-related
> time value having hiccups near leap seconds (according to
> what we were told earlier in this thread).

This is not necessarily a problem. As I understand things (for lack of a
copy of POSIX, so my understanding might not be completely correct),
POSIX assumes a universe without leap seconds, for the purpose of being
able to define a 1-to-1 correspondence of time_t values with struct tm
values, in particular for future values, where leap second information
is not yet known. (Another part of the rationale seems to be that POSIX
doesn't want POSIX-compliant systems to be required to be able to keep
track of leap seconds.) In addition, time_t is defined as the count of
seconds since 1/1/1970 00:00:00 in that universe without leap seconds.
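
Concretely, the POSIX "seconds since the Epoch" value is (quoting
from memory, with the same caveat as above; later drafts add
Gregorian century-rule terms) a pure function of the struct tm
fields, with no leap-second term anywhere:

tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
    (tm_year - 70)*31536000 + ((tm_year - 69)/4)*86400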

Since the real world has a leap second every 500 days or so, the POSIX-
universe time drifts slowly away from the real-world time. This means
that a POSIX system which wants to generate struct tm values that
closely correspond to actual calendar time down to the second has to
adjust the time_t values accordingly, because of the strict mapping
between struct tm and time_t that POSIX imposes. So, in this sense,
time_t stutters under POSIX.

But in practice, it is almost equivalent to say that time_t is linear
with time, but that POSIX struct tm values differ from actual calendar
time by the number of leap seconds since 1970. On most actual systems,
the internal clock will not run accurately to the second in sync with
actual calendar time anyway (leap seconds or not), and will be adjusted
forth or back by the user/administrator from time to time, so a certain
"stuttering" of time_t will usually happen regardless of whether the
time_t<->struct tm mapping takes leap seconds into account or not.

So, the problem with POSIX is not that time_t stutters (I don't think
that a system becomes non-POSIX-compliant because its clock is off by a
few seconds), but that it imposes a fixed time_t<->struct tm mapping
that is known for all future time values and that disregards leap
seconds even for those leap seconds already known. Actually, this is
regarded as a feature, because all POSIX implementations will output the
same textual time for the same time_t values (in particular for those
arising from doing date and time arithmetic; hence there is no
"degradation" in accuracy of time_t values by continually switching
between different POSIX implementations that are doing arithmetic on
them).

Now, if C0X were to require time_t to be some linear tick count since some
epoch, this wouldn't necessarily contradict POSIX. Only implementations
which include leap seconds into their time_t<->struct tm mapping (and/or
choose another tick unit than seconds, and/or another epoch than POSIX)
will become non-POSIX-compliant. One could imagine that leap second
support for the time_t<->struct tm mapping is switchable on the
implementation, so that one can decide whether to be POSIX-compliant,
or whether to not have time_t hiccup on leap seconds, but still keep
struct tm time in sync with actual calendar time. Both variants could be
C0X-conforming if C0X neither enforces leap seconds to be accounted for
in the time_t <-> struct tm mapping nor requires time() to be completely
in sync with real time (the latter of which wouldn't make much sense to
require anyway).

-- Niklas Matthies

James Kuyper Jr.

unread,
Feb 9, 2001, 7:28:15 PM2/9/01
to
Max TenEyck Woodbury wrote:
...
> Since time_t is required to be size_t,

Citation please? I know of no such requirement. It must be an arithmetic
type. As far as I can see, time_t can be signed char, or long double
_Imaginary, and just about anything in between.

David R Tribble

unread,
Feb 13, 2001, 8:07:00 PM2/13/01
to
I suggest the following list of new constraints for the 'time_t' type
(or possibly some new "extended time" type):

1. Arithmetic type.
The 'time_t' type has arithmetic type.

(This is unchanged from the current requirements. It is broad
enough to allow time_t to be an integer or floating-point type,
but also too broad, allowing weird types such as 'complex long
double' and 'imaginary float'.)

(I think that mandating that time_t be a integer type might cause
some existing conforming implementations to become nonconforming,
but I'm not sure. Does anyone know of an example? IIRC, the Apple
Rhapsody OS had an 'NSDate' class type that used a 'double' as
a seconds counter; I don't know what type its 'time_t' was.)

2. Monotonically increasing.
'time_t' values increase throughout time, i.e., the time_t value
for one event is arithmetically less than the time_t value for
an event occurring later in time (within the implementation's
"epoch").

3. Comparisons.
Two 'time_t' values may be compared; if they compare equal, they
represent identical moments in time; if one compares less than the
other, it represents a moment in time that occurs before the moment
represented by the greater value (within the implementation's
"epoch").


In addition, the following new macros should be added to the <time.h>
header:

_TIME_SUPPORTED

Has integer type, and evaluates to a nonzero (true) value if the
'time_t' type of the implementation conforms to the constraints
above; otherwise it evaluates to zero (false).

_TICKS_PER_SEC

Specifies the minimum number of distinct "ticks" per second that
the 'time_t' type is capable of representing uniquely within its
"epoch". This must be at least 1.

(If 'time_t' is an integer type, this is the exact number of ticks
per second that it can represent uniquely. If 'time_t' is a
floating-point type, this is the largest resolution representable
within its epoch; time values outside this epoch may have different
resolutions.)

(The term "epoch" is a little vague, and should probably have a
formal definition.)

_TIME_MIN

Has type 'time_t', and specifies the minimum 'time_t' value that
represents a meaningful time within the "epoch". All 'time_t'
values representing meaningful times must not compare less than
this value.

(This therefore represents the first "valid" date in the epoch.)

_TIME_MAX

Has type 'time_t', and specifies the maximum 'time_t' value that
represents a meaningful time within the "epoch". All 'time_t'
values representing meaningful times must not compare greater than
this value.

(This therefore represents the last "valid" date in the epoch.)

_TIME_UNKNOWN

Has type 'time_t', and specifies an arbitrary date outside the
"epoch", which can be used to indicate an unknown or indeterminate
date. This value must not compare equal to any 'time_t' value
that represents a meaningful time within the "epoch".

(This is intended to be used for initializing time_t variables,
and as a possible function return value indicating that an error
or inconsistency has occurred, such as during a calendar date
conversion routine. This is meant to replace the current use of
'(time_t)(-1)' as an error indicator, since -1 might represent a
valid time within the epoch.)

_TIME_IS_INTEGER

Has integer type, and evaluates to a nonzero (true) value if the
'time_t' is an integer type; it evaluates to zero (false) for all
other types.

(This allows for detecting implementations where 'time_t' is a
floating-point type.)

_TIME_IS_LINEAR

Has integer type, and evaluates to a nonzero (true) value if the
'time_t' is implemented in such a way that adding 1 (i.e., one
tick) to a given time_t value results in a new time_t value that
represents a moment occurring exactly one tick after the moment
represented by the old value.

(This allows for implementations where time_t is a simple integer
tick count, as well as those that map clock ticks into alternate
time_t forms, e.g., concatenated bitfields, such as
<2001:05:08:02:30:00:500>.)

_TIME_HAS_LEAP_SECS

Has integer type, and evaluates to a nonzero (true) value if the
'time_t' representation properly represents leap seconds;
otherwise, it is zero (false).


I also considered adding these macro constants, but decided that they
would be tied to a particular calendar, and thus are unsuitable for
use with a calendar-independent time_t type:

_TIME_YEAR_MIN (not needed)

Specifies the minimum year number that can be entirely represented
by a 'time_t' value (i.e., this is the first whole year of the
"epoch" represented by the 'time_t' type.)

_TIME_YEAR_MAX (not needed)

Specifies the maximum year number that can be entirely represented
by a 'time_t' value (i.e., this is the last whole year of the
"epoch" represented by the 'time_t' type.)


Comments?

--
David R. Tribble, mailto:da...@tribble.com, http://david.tribble.com

Brian Inglis

unread,
Feb 14, 2001, 12:51:51 AM2/14/01
to
On Tue, 13 Feb 2001 19:07:00 -0600, David R Tribble
<da...@tribble.com> wrote:

>I suggest the following list of new constraints for the 'time_t' type
>(or possibly some new "extended time" type):
>
>1. Arithmetic type.
> The 'time_t' type has arithmetic type.
>
> (This is unchanged from the current requirements. It is broad
> enough to allow time_t to be an integer or floating-point type,
> but also too broad, allowing weird types such as 'complex long
> double' and 'imaginary float'.)
>
> (I think that mandating that time_t be a integer type might cause
> some existing conforming implementations to become nonconforming,
> but I'm not sure. Does anyone know of an example? IIRC, the Apple
> Rhapsody OS had an 'NSDate' class type that used a 'double' as
> a seconds counter; I don't know what type its 'time_t' was.)

SAS C on IBM S/370 used (uses? are they still around) a double for
the IBM TOD clock time_t. Anyone know what IBM C on S/390 uses?

[snip rest]

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
--
Brian_...@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
use address above to reply

Douglas A. Gwyn

unread,
Feb 14, 2001, 1:28:36 AM2/14/01
to
David R Tribble wrote:
> Comments?

Not a bad starting place for discussion.

I think there could be some implementation currently using a
double for time_t. However, if we adopt the idea of ticks
then the type *has* to be specified as an integer type
(preferably unsigned). According to earlier discussion, it
might be "politically" wise to leave C89/C99/POSIX time_t
alone and invent a new type name to go with the stricter specs.

Niklas Matthies

unread,
Feb 14, 2001, 5:34:18 AM2/14/01
to
On Wed, 14 Feb 2001 06:28:36 GMT, Douglas A. Gwyn <DAG...@null.net> wrote:
[···]

> I think there could be some implementation currently using a
> double for time_t. However, if we adopt the idea of ticks
> then the type *has* to be specified as an integer type
> (preferably unsigned).

Why? Linearity is a requirement only on the mapping of times to
numerical values of the type, not on the precision of the type (i.e. the
set of numerical values representable by the type). It makes very much
sense to have a time_t that is double, which encodes times in a linear
way as double values, with the ticks per second known.

-- Niklas Matthies

Konrad Schwarz

unread,
Feb 14, 2001, 9:40:21 AM2/14/01
to
Please include a macro for printf()/scanf() formatting.

Max TenEyck Woodbury

unread,
Feb 14, 2001, 2:00:22 PM2/14/01
to
Paul Jarc wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> Since time_t is required to be size_t,
>
> Where did you get this from? I don't see it in n869.

OK. I misread 'are' as 'as' in the section on types. I did
not remember that size_t is used with strftime, so I did not
think it needed to be specified as a type.

>> [Cuss Schildt's book. His implementation limits section is truncated
>> and I suspect there is something in there that could preclude such
>> small size_t. My loose-leaf copy of the standard is at home...]
>
> If you're working from Schildt, you need to drop out of the discussion
> until you can find a better source.

Schildt's book is limited, but (with one known exception) accurate.
As I said, the problem is with what he leaves out. (I do NOT use
his commentary at all.) Further, I have a more complete copy of the
standard at home.

>> modify the localtime generated struct tm by some number of hours,
>> [just make sure you don't cross any daylight savings transitions
>> when you make the change]
>
> How would you go about ensuring that?

Use the tm_isdst flag. If it changes after you make the hour adjustment,
you need to move forward or backward a day depending on the sign of
the hour adjustment.

mt...@cds.duke.edu

Max TenEyck Woodbury

unread,
Feb 14, 2001, 2:01:48 PM2/14/01
to

Sorry, I misread 'are' as 'as'...

mt...@cds.duke.edu

Max TenEyck Woodbury

unread,
Feb 14, 2001, 2:20:58 PM2/14/01
to
Paul Jarc wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> So you want to break a bunch of implementations because you can't
>> be bothered to make up a new type with the properties you want and
>> want to replace a very carefully crafted specification with one of
>> your utterance...
>
> Not at all. I already said I'm favor of creating a new type. But you
> (seem to) say that if an old type is to be used, clock_t is better
> than time_t; I disagree with that. CLOCKS_PER_SEC is *not* a property
> of clock_t; it's a property of the *values* returned by clock().
> (n869 7.23.2.1p3)

But that means that division is defined meaningfully on clock_t, which
is not the case with time_t.

>> Oh, come off it. clock_t is a type tied to a (particular) kind of real
>> time.
>
> No, clock() is tied. clock_t is specified *exactly* the same way
> time_t is: they "are arithmetic types capable of representing times"
> (n869 7.23.1p3). Also note that an implementation is even allowed to
> have clock() always return (clock_t)-1 and let clock_t be char, or
> float, or something else equally unsuitable for use as a tick counter
> with a useful range. Even ignoring such implementations that don't
> even try to give useful information, clock() is specified to return
> only an approximation - in particular, it need not be a count of any
> particular kind of tick; CLOCKS_PER_SEC could even be less than 1.

The point is that you can do meaningful scaling on clock_t. There is
no such provision for time_t.

>> There is no reason that other functions that return clock_t could
>> not have other properties and still maintain the type specific
>> properties ot clock_t.
>
> Replace clock_t with time_t here, and the standard would support the
> statement equally.

No. clock_t is scalable. You can perform arithmetic on it meaningfully.
time_t is not required to be scalable. All you can do is store it and
pass it around (much like fpos_t). If you want to change it, you have
to convert it to a struct tm, make your adjustments and convert it back.

>> I brought the question up because there are systems that do NOT
>> provide fine grained tick counts but do provide something that
>> could be turned into a time_t or a struct tm (e.g. MSDOS).
>
> How do they implement clock_t and clock()?

A) Crudely, or
B) By returning (clock_t)-1.

Max TenEyck Woodbury

unread,
Feb 14, 2001, 2:30:02 PM2/14/01
to
"Douglas A. Gwyn" wrote:
>
>> As for time zones and gmtime, time zones are not a universal
>> concept.
>
> Insofar as the planet Earth is concerned, every location has one.

Every location has a time-offset from GMT, but locations where the
custom is to use solar time do not have a 'time zone' as the term
is commonly used.

Max TenEyck Woodbury

unread,
Feb 14, 2001, 3:06:42 PM2/14/01
to
David R Tribble wrote:
>
> I suggest the following list of new constraints for the 'time_t' type
> (or possibly some new "extended time" type):
>
> ...
> 3. Comparisons.
> Two 'time_t' values may be compared; if they compare equal, they
> represent identical moments in time; if one compares less than the
> other, it represents a moment in time that occurs before the moment
> represented by the greater value (within the implementation's
> "epoch").

'Identical' is not the right way to phrase this. Something about being
within the same implementation-defined interval is needed.

> ...


> _TICKS_PER_SEC
>
> Specifies the minimum number of distinct "ticks" per second that
> the 'time_t' type is capable of representing uniquely within its
> "epoch". This must be at least 1.
>
> (If 'time_t' is an integer type, this is the exact number of ticks
> per second that it can represent uniquely. If 'time_t' is a
> floating-point type, this is the largest resolution representable
> within its epoch; time values outside this epoch may have different
> resolutions.)
>
> (The term "epoch" is a little vague, and should probably have a
> formal definition.)

This macro should only be defined if time_t is a scalable type. If
time_t is a non-scalable construct, then this value would be meaningless.

> ...


> _TIME_IS_LINEAR
>
> Has integer type, and evaluates to a nonzero (true) value if the
> 'time_t' is implemented in such a way that adding 1 (i.e., one
> tick) to a given time_t value results in a new time_t value that
> represents a moment occurring exactly one tick after the moment
> represented by the old value.
>
> (This allows for implementations where time_t is a simple integer
> tick count, as well as though that map clock ticks into alternate
> time_t forms, e.g., concatenated bitfields, such as
> <2001:05:08:02:30:00:500>.)

This would be equivalent to defined(_TICKS_PER_SEC), wouldn't it?

mt...@cds.duke.edu

Paul Jarc

unread,
Feb 14, 2001, 6:09:16 PM2/14/01
to
Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
> Paul Jarc wrote:
> > CLOCKS_PER_SEC is *not* a property of clock_t; it's a property of
> > the *values* returned by clock(). (n869 7.23.2.1p3)
>
> But that means that division is defined meaningfully on clock_t, which
> is not the case with time_t.

Whether division is *possible* is a property of the type, and division
is possible for both time_t and clock_t (since they are arithmetic
types). Whether it is *meaningful* is a property of the *values*
(returned by time() or clock()), not the type. It's not meaningful to
say that division of time_t is not meaningful.

> The point is that you can do meaningful scaling on clock_t. There is
> no such provision for time_t.

You can do meaningful scaling on values returned by clock(). There is
no such provision for clock_t in general (nor could there be).

> time_t is not required to be scalable. All you can do is store it and
> pass it around (much like pos_t). If you want to change it, you have
> to convert it to a struct tm, make your adjustments and convert it back.

That's not a property of the type; it's a property of the encoding of
the values returned by time().


paul

David R Tribble

unread,
Feb 16, 2001, 12:37:29 PM2/16/01
to
Douglas A. Gwyn wrote:
> I think there could be some implementation currently using a
> double for time_t. However, if we adopt the idea of ticks
> then the type *has* to be specified as an integer type

I personally prefer that the time type be an integer type. However,
we might still be able to come up with a consistent definition of
"tick" for floating-point time types; i.e., allowing such
implementations to represent, say, seconds (and fractions thereof).
This is problematic, though, since such types don't really have a
distinct "tick" value (other than "one second").

In my _TICKS_PER_SEC suggestion, I tried to cover this possibility
by defining it to mean the largest-granularity increment. For
a time_t of type double, this could be something like 1.0e15
(an increment of 1.0e-15 seconds).
Or it could simply be the system's smallest idea of a time increment
(e.g., 100).


> (preferably unsigned).

I prefer the time type to be signed, so that arithmetic on time
values is a true group operation, requiring no more bits than what
time_t provides. I.e., time differences, a.k.a. "intervals", should
be allowed to be signed quantities.

But I could get used to the idea of an unsigned time_t type.


> According to earlier discussion, it
> might be "politically" wise to leave C89/C99/POSIX time_t
> alone and invent a new type name to go with the stricter specs.

We may have to do this. For one thing, time_t and struct tm are
fairly closely intertwined (which might not be as big a problem as
I think it is). For another, some existing implementations of
time_t might become nonconforming with the new proposed constraints,
which would probably spell the death of the proposal.

David R Tribble

unread,
Feb 16, 2001, 12:40:46 PM2/16/01
to
Konrad Schwarz wrote:
> Please include a macro for printf()/scanf() formatting.

This is not necessary, since we already have strftime().

Unless, of course, you mean that we need a printf() format specifier
meaning "the type of time_t". I would instead prefer to force
programmers to cast time_t values into one of the primitive types,
such as 'double' or 'long long int'.

However, we do need to include a new strftime() format specifier
to access fractional seconds.

Keith Thompson

unread,
Feb 16, 2001, 3:04:20 PM2/16/01
to
David R Tribble <da...@tribble.com> writes:
> Douglas A. Gwyn wrote:
[...]

> > (preferably unsigned).
>
> I prefer the time type to be signed, so that arithmetic on time
> values is a true group operation, requiring no more bits than what
> time_t provides. I.e., time differences, a.k.a. "intervals", should
> be allowed to be signed quantities.
>
> But I could get used to the idea of an unsigned time_t type.

I strongly prefer time_t (or its successor) to be signed. With a
signed representation, you can easily represent times before and after
the epoch. With an unsigned representation, times before the epoch
(including the first (*mumble*) years of my life, assuming a 1970
epoch) cannot be represented. Of course, this means that using
(time_t)-1 as an error indicator is not a good idea -- yet another
argument in favor of defining a new type rather than enhancing the
existing one.

Doug (if I've gotten the attribution right), why do you prefer
unsigned?

Douglas A. Gwyn

unread,
Feb 16, 2001, 6:01:39 PM2/16/01
to
Keith Thompson wrote:
> Doug (if I've gotten the attribution right), why do you prefer
> unsigned?

Only because that is the natural type for a counter.
What we could use is a standard method of obtaining the
corresponding signed type for a given unsigned type,
so we wouldn't need ptrdiff_t etc. Lacking that, I
see now that a signed type for time representation
would certainly be easier to use in computation.

James Kuyper Jr.

unread,
Feb 17, 2001, 1:28:01 AM2/17/01
to
David R Tribble wrote:
>
> Konrad Schwarz wrote:
> > Please include a macro for printf()/scanf() formatting.
>
> This is not necessary, since we already have strftime().

That response covers printf(), but what about a scanf() macro?
Unlike strftime(), strptime() is not part of the standard. I was once
given a reason why strptime() isn't in the standard. I don't remember
what the reason was, but it didn't strike me as a particularly strong
reason - something like "no one suggested it".

Max TenEyck Woodbury

unread,
Feb 17, 2001, 1:14:40 PM2/17/01
to
Keith Thompson wrote:
>
> Max TenEyck Woodbury <mt...@cds.duke.edu> writes:
>> Keith Thompson wrote:
>>> Title: "Standard C Date/Time Library : Programming the World's
>>> Calendars and Clocks"
>>> Author: Lance Latham
>>> Publisher: Miller Freeman
>>> Publication date: December 1997
>>> ISBN: 0879304960
>>> Paperback, 560 pages.
>>> $49.95 from barnesandnoble.com
> [...]
>> Thank you for the reference. Could you check and see if the book
>> mentions ACM Collected Algorithm 199.
>
> Sorry, I've never read the book and I don't have access to a copy of
> it.

Got the book yesterday.

Its structure is somewhat similar to Plauger's standard library
books, complete with a similarly restricted license. It does
include a CD with additional material and the source code.

As a book, it's less than great. It covers the historical and
cultural aspects of the calendars well. Its historical and
cultural coverage of timekeeping is much sparser.

As expected, it is full of usable code and the code has extensive
block headers explaining usage and interface, but the code also
contains lots of magic numbers like 1461 that really should be
explained.

Further, all the implementations are special purpose functions
with little, if any, attempt to provide a standard interface
where the different calendar forms could be specified as a
parameter. A LOT of work would be needed to simplify the
interface to the point where it could be added easily to
<time.h> and/or <locale.h>.

The implementations are specifically for x86 machines in 16-bit
mode; however, there are test programs that can be used to verify
the results on other platforms.

The Y2K harangue in chapter 1 seems a bit overblown in
retrospect and fails to credit those people who did the right
thing ahead of time. The rest of the book has only a few
references to numerology and other fantastic ideas, and those
mostly exist because the literary sources contain that material.

The (classic?) Fliegel and Van Flandern algorithms (CACM 199) are
referenced very briefly, and are not analyzed, either
functionally or qualitatively.
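For reference, the date-to-Julian-day half of the algorithm is short
enough to quote; this is the usual rendering, and it is also where
the 1461 mentioned above comes from (4*365 + 1, the number of days
in a four-year Julian cycle):

/* Gregorian date to Julian day number, after Fliegel and
   Van Flandern. C99 guarantees the truncating division this
   relies on; under C89 the rounding of negative integer division
   was implementation-defined. */
long jdn(long y, long m, long d)
{
    return d - 32075
         + 1461L * (y + 4800 + (m - 14) / 12) / 4
         + 367L * (m - 2 - (m - 14) / 12 * 12) / 12
         - 3L * ((y + 4900 + (m - 14) / 12) / 100) / 4;
}

/* e.g. jdn(2000, 1, 1) == 2451545 */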

-----------

I did look at all the calendars quickly to see if they could
be mapped into struct tm, and all except the Mayan looked like
they would fit within struct tm. However, the number of
different calendars possible, and the fact that some of the
calendars had many different eras, suggests strongly that
a calendar type/era identifier might be needed in struct tm
if a calendar locale were added to the C standard.
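Such an identifier might look something like this; purely a sketch,
with invented names, not proposed wording:

/* A hypothetical extension of the broken-down time. */
struct tmx {
    struct tm base;     /* the existing fields */
    int tm_calendar;    /* e.g. TM_CAL_GREGORIAN, TM_CAL_HEBREW, ... */
    int tm_era;         /* era index within that calendar, if any */
};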

mt...@cds.duke.edu

Francis Glassborow

unread,
Feb 17, 2001, 8:25:30 PM2/17/01
to
In article <3A8EBF90...@cds.duke.edu>, Max TenEyck Woodbury
<mt...@cds.duke.edu> writes

>As expected, it is full of usable code and the code has extensive
>block headers explaining usage and interface, but the code also
>contains lots of magic numbers like 1461 that really should be
>explained.
Personally, I found the licence restrictions so draconian that I would
advise people to read them very carefully before they read any other
part of the book, and certainly before they buy it. If you ever intend
to do commercial work, reading this book may make it hard for you to
develop your own functions without infringing the copyright licence.

--
Francis Glassborow
See http://www.accu.org for details of The ACCU Spring Conference, 2001
(includes many regular participants to C & C++ newsgroups)

Trevor L. Jackson, III

unread,
Feb 18, 2001, 11:09:04 PM2/18/01
to
David R Tribble wrote:

> I suggest the following list of new constraints for the 'time_t' type
> (or possibly some new "extended time" type):

[snip constraints suggestions]

These suggestions motivated me to consider the most useful resolution for
the lsb of an extended time type (etime_t?). Assuming that etime_t is an
integral type, and that the lsb is some fraction of a second, there
appear to be two aspects of the world that are worth considering.

The first aspect is the limitations of the human senses. Sound being the
sense with the finest temporal resolution, one might consider a
resolution above 44 kHz (CD quality) as a sensible lower limit on useful
resolutions. Thus 16 bits of fractional seconds might be a useful
standard. One might extend this to 17 bits (to get past 100 kHz) or 20
bits (for 1 MHz), but anything finer is hard to justify for human
interactions. Needless to say, 16 bits in the fractional field will be an
architecturally convenient dividing point on most machines.

Clearly there are situations that demand finer resolution, but those
demands come with additional, specialized demands that do not belong in a
language standard. Thus I suggest it is safe to ignore requests for
nanosecond resolution and the like. If etime_t has 16 bits of resolution
then it will capture almost all of the utility of clock_t.
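In concrete terms; a sketch, with etime_t a made-up 64-bit type
counting ticks of 1/65536 second:

#include <stdint.h>

typedef int64_t etime_t;    /* hypothetical: low 16 bits of fraction */

/* One tick is about 15.3 us, comfortably finer than the 22.7 us
   period of 44.1 kHz audio. Right-shifting a negative value is
   implementation-defined, so a real design would pin that down. */
int64_t  etime_seconds(etime_t t) { return t >> 16; }
unsigned etime_frac(etime_t t)    { return (unsigned)(t & 0xFFFF); }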

The second aspect is the date of birth issue. It is extremely convenient
to be able to express both birth dates and now in the same epoch.
Applying this to things other than human birthdays leads to the
conclusion that the most useful epoch begins when time begins. It would
be quite proper to position the origin of etime_t at the origin of real
time. But this requires 60 bits of seconds (some 20 billion years is
roughly 6e17 seconds, just under 2^60), which leaves only 3 bits of
int64_t for sub-second resolution. Such a coarse resolution would only
capture a portion of the utility of clock_t.

Thus these criteria are in conflict given "merely" 64 bits of
resolution. Of the two I prefer the latter because 1) 8 Hz is better
than what we have now and 2) epochs suck.


Douglas A. Gwyn

unread,
Feb 18, 2001, 11:37:27 PM2/18/01
to
"Trevor L. Jackson, III" wrote:
> The first aspect is the limitations of the human senses.

The appeal to human hearing characteristics is arbitrary.
Many of the most important *computational* uses of time
apply to much faster events, sometimes below the nanosecond
level (although many current systems don't provide access
to clocks having that precision).

> Applying this to things other than human birthdays leads to the

> conclusion that the most useful epoch begins when time begins. ...


> Thus these criteria are in conflict given "merely" 64 bits of
> resolution. Of the two I prefer the latter because 1) 8 Hz is
> better than what we have now and 2) epochs suck.

The "origin of time", even if it were a meaningful concept,
cannot be known with sufficient accuracy to use it as a
reference point. Therefore the "epoch" (0 on the linear
time scale) would have to be within a reasonable stretch
from the current time, another strong argument for allowing
negative values.

I suggest a nice dividing point would be at the second:
64 bits for integral number of seconds and 64 bits for
number of 2^-64 seconds. I.e. doubleword 128-bit fixed
point with the radix point in the middle. This would be
a natural for processors supporting 128-bit integer ops,
and readily kludged on others. Programs wanting to work
no more precisely than units of one second could use
just the top half.
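On machines without 128-bit integers the kludge is straightforward;
a sketch, with invented names:

#include <stdint.h>

struct tfix {
    int64_t  sec;    /* signed whole seconds from the epoch */
    uint64_t frac;   /* units of 2^-64 second */
};

/* 128-bit fixed-point addition: the fraction wraps on unsigned
   overflow, and the wrap is detected as a carry into the seconds. */
struct tfix tfix_add(struct tfix a, struct tfix b)
{
    struct tfix r;
    r.frac = a.frac + b.frac;
    r.sec  = a.sec + b.sec + (r.frac < a.frac);
    return r;
}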

Stephen Baynes

unread,
Feb 19, 2001, 8:13:09 AM2/19/01
to
Keith Thompson wrote:
>
> David R Tribble <da...@tribble.com> writes:
> > Douglas A. Gwyn wrote:
> [...]
> > > (preferably unsigned).
> >
> > I prefer the time type to be signed, so that arithmetic on time
> > values is a true group operation, requiring no more bits than what
> > time_t provides. I.e., time differences, a.k.a. "intervals", should
> > be allowed to be signed quantities.
> >
> > But I could get used to the idea of an unsigned time_t type.
>
> I strongly prefer time_t (or its successor) to be signed. With a
> signed representation, you can easily represent times before and after
> the epoch. With an unsigned representation, times before the epoch
> (including the first (*mumble*) years of my life, assuming a 1970
> epoch) cannot be represented. Of course, this means that using
> (time_t)-1 as an error indicator is not a good idea -- yet another
> argument in favor of defining a new type rather than enhancing the
> existing one.

But an unsigned type, with a suitable choice of epoch, will give
the same range of dates. A signed type only seems to offer an
advantage if you want to align (time_t)0 with some significant,
reasonably contemporary date. A signed type would also be useful if
one wanted to standardize the epoch start date and cope with (and
exploit) different-sized time_ts (e.g. moving from a 32-bit to a
64-bit time_t without the values changing).
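The widening point is worth spelling out; a small example using the
<stdint.h> exact-width types:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  s32 = -86400;            /* one day before the epoch */
    int64_t  s64 = s32;               /* sign-extends: still -86400 */
    uint32_t u32 = (uint32_t)-86400;  /* wraps to 4294880896 */
    uint64_t u64 = u32;               /* zero-extends: same raw count,
                                         now a very different date */
    printf("%lld %llu\n", (long long)s64, (unsigned long long)u64);
    return 0;
}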

--
Stephen Baynes CEng MBCS Stephen...@soton.sc.philips.com
Philips Semiconductors Ltd
Southampton SO15 0DJ +44 (0)23 80316431 *** NEW ***
United Kingdom My views are my own.

Trevor L. Jackson, III

unread,
Feb 19, 2001, 9:05:05 AM2/19/01
to
"Douglas A. Gwyn" wrote:

> "Trevor L. Jackson, III" wrote:
> > The first aspect is the limitations of the human senses.
>
> The appeal to human hearing characteristics is arbitrary.
> Many of the most important *computational* uses of time
> apply to much faster events, sometimes below the nanosecond
> level (although many current systems don't provide access
> to clocks having that precision).

Which is why I suggested that they can be safely ignored. High
resolution timing often requires a program to react to an event within a
window. Such requirements involve latency considerations and
architectural features (e.g., interrupts) that are difficult to describe
in a general (standard) way.

>
>
> > Applying this to things other than human birthdays leads to the
> > conclusion that the most useful epoch begins when time begins. ...
> > Thus these criteria are in conflict given "merely" 64 bits of
> > resolution. Of the two I prefer the latter because 1) 8 Hz is
> > better than what we have now and 2) epochs suck.
>
> The "origin of time", even if it were a meaningful concept,
> cannot be known with sufficient accuracy to use it as a
> reference point. Therefore the "epoch" (0 on the linear
> time scale) would have to be within a reasonable stretch
> from the current time, another strong argument for allowing
> negative values.

As with the date of the birth of Christ, accuracy is irrelevant. The
western world happily uses years without caring that year one marks
nothing. The only relevant aspect of the "origin" of time is that it be
included within the addressable range of the extended time type. Since
that number is 12-20 billion years an epoch that starts at -30 billion
years should assure that we can address any interesting point in time
such as the formation of the earth, sun, milky way, etc. E.g., meteor
dating could be standard.
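The bit budget for such an epoch is easy to estimate; a
back-of-the-envelope check (C99 math, link with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 30 billion years of seconds: about 9.5e17, just under 2^60. */
    double secs = 30e9 * 31556952.0;   /* avg Gregorian year in s */
    printf("%.1f bits\n", log2(secs)); /* prints 59.7 */
    return 0;
}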

As for negative values of time, these are necessary for the expression
of differences between points of time, but less than useful as values
representing points in time. If one wants the maximum capability one
should propose a complex time type, as has been suggested by Hawking and
others.
<bold><italic><underline><red><blink><beep>NOT!</beep></blink></red></underline></italic></bold>

> I suggest a nice dividing point would be at the second:
> 64 bits for integral number of seconds and 64 bits for
> number of 2^-64 seconds. I.e. doubleword 128-bit fixed
> point with the radix point in the middle. This would be

> a natural for processors supporting 128-bit integer ops,
> and readily kludged on others. Programs wanting to work
> no more precisely than units of one second could use
> just the top half.

While this is an apparently natural place for the radix point, I believe
its use would be a mistake because:
1) It violates the elegance of C's parsimony -- you don't pay for
what you don't use.
2) Non-scientific use is restricted to historical years at the high
end and human senses at the low end.
3) It violates the convenience of an integral type until int128_t is
required.

On many systems it will be expensive to produce because it will require
the combination of an extended time of day counter with the high
resolution timer. If the only application requirement is to record
human birthdays back to 1900, the extra overhead of the high resolution
timer will be wasted. The only excuse for this inefficiency is the
programmer convenience of having an atomic type to handle times. This
would excuse the existing practice of using a double as a time_t on
machines with slow floating point -- the convenience of the wide atomic
type is worth the inefficiency of the floating point representation of
an integer (e.g., microseconds since 1900-01-01 00:00:00.0).

With a composite type the convenience is lost as is the production
efficiency of the division into time_t and clock_t. To preserve the
efficiency one would simply extend time_t to be int64_t and cover all of
time. But this is not a standards issue. Implementations can simply do
it.

To capture the convenience of an atomic type with the current basic
types one needs to span the interesting uses of time by aligning the
atomic boundaries to the region of interest. One can limit the
non-scientific dates of interest to times within recorded history (10K
years) or the span of our species (35K years) in the past and a like
span in the future (after which the extended time type will be moot
because we'll have a fermi-time counter in every ALU for measuring the
occasional Planck event ;-). Similarly, one can limit the frequencies of
interest to the span of human senses. Thus an atomic extended time type
covering both the historical dates and human frequencies will be vastly
more convenient than one that splits the range into arithmetically
incompatible sub-ranges. (In C++ one would just define arithmetic
operators over the composite type.)

An int64_t covering both ranges in a 32.32 fixed point arrangement
pinches exactly as the current time_t does -- too few years. A 48.16
fixed point arrangement would cover both ranges with plenty of room on
both sides.
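The room is easy to check; a rough calculation:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 48.16 layout: 1 sign bit, 47 bits of whole seconds, 16 of
       fraction. How many years does 2^47 seconds cover? */
    double span = (double)((int64_t)1 << 47) / 31556952.0;
    printf("+/- %.2f million years\n", span / 1e6);   /* about 4.46 */
    return 0;
}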

Aligning the architectural divisions (atomic types) with integral
seconds can be achieved more easily and far more efficiently by
extending time_t and clock_t independently, which can already be done
within the current standard. Only a unification of time_t and clock_t
mandates standards action, and only the convenience of arithmetic over
an atomic type justifies the inefficiency required of such a
unification.

