
time64_t


jacob navia

May 28, 2004, 2:49:36 AM
I have added these functions to the lcc-win32 standard library:
New header file:

time64.h

It defines time64_t as long long (64 bit).

-----------------------------------------------------------------
#include <time.h>

typedef long long time64_t;

time64_t time64(time64_t *pt);

struct tm *localtime64(time64_t *pt);

long double difftime64(time64_t time1, time64_t time0);

time64_t mktime64( struct tm *today );

struct tm *gmtime64(time64_t t);

char *ctime64(time64_t *tp);
-----------------------------------------------------------
Do you see any obvious wrongdoings here? Feedback appreciated.

jacob

Richard Bos

May 28, 2004, 4:23:38 AM
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

> I have added this functions to the lcc-win32 standard library:
> New header file:
>
> time64.h

I don't know where you get the impression that this header can be part
of a Standard library.

Richard

jacob navia

May 28, 2004, 4:38:06 AM
Yes, the guards!

I added

#ifndef __time_h__
#define __time_h__
...
#endif

Thanks Richard

Richard Bos

May 28, 2004, 4:51:06 AM
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

[ I know snipping is a good thing, but could you please not remove _all_
context from a reply? ]

> Yes, the guards!
>
> I added
>
> #ifndef __time_h__
> #define __time_h__
> ...
> #endif

And what, if anything, do you think this accomplishes? AFAICT all it
does is prevent you from using some implementations' _valid_ Standard
<time.h> headers together with your, non-Standard, "time64.h" - for
those implementations that use the same kind of guard in their Standard
headers. It has no impact on implementations that do _not_ use that kind
of guard.
It certainly does not magically make _your_ new header a conforming part
of a Standard implementation, since it does not stop it from declaring
several identifiers which are in the user's namespace. Any program using
it is obviously not strictly conforming; any implementation preventing a
user from declaring his _own_ time64.h, or his own time64_t, is also not
a Standard implementation.

Richard

Dan Pop

May 28, 2004, 5:29:45 AM

I can't see any obvious connection between the C standard and this stuff.
So what *exactly* was your C standard related question?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Dan...@ifh.de

jacob navia

May 28, 2004, 6:16:34 AM
There are several implementations of these functions in several
operating systems.

I thought that the standards committee had (maybe) reflected on this
and that a proposal is "in the works" or recommended.

A similar implementation exists under:
Hewlett Packard
http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/MAN/MAN3/1996____.HTM

I suppose the Compaq implementation is similar
http://btrcx1.cip.uni-bayreuth.de/cgi-bin/manpages/time/3

Sun Microsystems has one too:
http://docs.sun.com/source/817-5065/1_library.html

jacob


Harti Brandt

May 28, 2004, 6:41:21 AM
On Fri, 28 May 2004, jacob navia wrote:

jn>There are several implementations of this functions in several
jn>operating systems.
jn>
jn>I thought that the standards comitee has (maybe) reflected about this
jn>and a proposition for this is "in the works" or recommended.
jn>
jn>A similar implementation exists under:
jn>Hewlett Packard
jn>http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/MAN/MAN3/1996____.HTM
jn>
jn>I suppose the Compaq implementation is similar
jn>http://btrcx1.cip.uni-bayreuth.de/cgi-bin/manpages/time/3
jn>
jn>Sun Microsystems has one too:
jn>http://docs.sun.com/source/817-5065/1_library.html

Standardizing time64_t would be the same type of standardisation bug
as the 64-bit file stuff. They should instead just use a 64-bit time_t
and that's it.

harti

jacob navia

May 28, 2004, 6:56:45 AM

"Harti Brandt" <bra...@dlr.de> wrote in message
news:Pine.GSO.4.60.0405281239370.18426@zeus...

Well, but that silent change would make many programs fail
that assume a 32 bit time_t...

All those assignments:

unsigned t = time(NULL);

would have to be found and rewritten. That was my rationale
behind this (and I suppose HP's and Sun's too).

Brian Inglis

May 28, 2004, 9:46:00 AM
On Fri, 28 May 2004 12:56:45 +0200 in comp.std.c, "jacob navia"
<ja...@jacob.remcomp.fr> wrote:

>
>"Harti Brandt" <bra...@dlr.de> wrote in message
>news:Pine.GSO.4.60.0405281239370.18426@zeus...
>>

>> Standardizing time64_t would be the same type of standardisation bug
>> like the 64-bit file stuff. They should instead just use a 64-bit time_t
>> and that's it.

Great advice.

>Well, but that silent change would make many programs fail
>that assume a 32 bit time_t...

Those incorrect programs would also silently fail on systems where
time_t is not an integer type. The Standard does not guarantee time_t
is POSIX compatible.

>All those assignments:
>
>unsigned t = time(NULL);

Platform-specific code that makes assumptions about the underlying
type and contents of time_t deserves what it gets.
I've never seen time() assigned to unsigned int, only signed int,
although I've seen srand((unsigned)time(NULL)) as often as
srand(time(NULL)).

>would have to be found and rewritten. That was my rationale
>behind this, (and I suppose HP/SUN too)

Accommodating non-conforming code written in violation of the Standard
is a long and slippery slope.

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Brian....@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply

jacob navia

May 28, 2004, 11:45:52 AM

"Harti Brandt" <bra...@dlr.de> wrote in message
news:Pine.GSO.4.60.0405281239370.18426@zeus...

Ahh ok.

The same decision was made regarding fseek64?

Thanks for your time


jacob navia

May 28, 2004, 11:46:57 AM
The standards committee has decided the same for fseek64 then?

Thanks for your time


Thad Smith

May 28, 2004, 12:18:10 PM
Brian Inglis wrote:
>
> On Fri, 28 May 2004 12:56:45 +0200 in comp.std.c, "jacob navia"
> <ja...@jacob.remcomp.fr> wrote:
>
> >"Harti Brandt" <bra...@dlr.de> wrote in message
> >news:Pine.GSO.4.60.0405281239370.18426@zeus...
> >>
> >> Standardizing time64_t would be the same type of standardisation bug
> >> like the 64-bit file stuff. They should instead just use a 64-bit time_t
> >> and that's it.
>
> Great advice.
>
> >Well, but that silent change would make many programs fail
> >that assume a 32 bit time_t...
>
> Those incorrect programs would also silently fail on systems where
> time_t is not an integer type. The Standard does not guarantee time_t
> is POSIX compatible.

What date/time formats are portable? The Posix version has a limited
lifetime. A formatted character string, such as asctime produces,
avoids the short lifetime problem, but is more expensive and less
efficient. For large databases or logging on small-memory systems, the
difference can be significant. In my opinion, a longer binary date/time
version makes sense, but only if the encoding is standardized, not made
opaque, as time_t is and the suggested time64_t might be.

Extending time_t to more bits certainly makes sense and should be done,
but there should be another compressed time format that is explicitly
defined for portability. time_t can continue to be used with the
current standard functions, but a new explicit format would be much
better for inclusion in data files and other cross-implementation,
cross-platform, language-independent usage.

There probably are several such date/time codes in use now. I even
proposed one yesterday for a limited application! Without some de facto
standardization, though, they will continue to be a jumble of formats,
such as floating point formats were before IEEE-754 was adopted by
industry.

Does what I am talking about already exist and only need a slight boost
to be adopted as a de facto efficient, portable, long-lived date/time
format?

Thad

Alan Balmer

May 28, 2004, 1:38:12 PM
On Fri, 28 May 2004 12:56:45 +0200, "jacob navia"
<ja...@jacob.remcomp.fr> wrote:

>
>"Harti Brandt" <bra...@dlr.de> wrote in message
>news:Pine.GSO.4.60.0405281239370.18426@zeus...
>> On Fri, 28 May 2004, jacob navia wrote:
>>
>> Standardizing time64_t would be the same type of standardisation bug
>> like the 64-bit file stuff. They should instead just use a 64-bit time_t
>> and that's it.
>>
>
>Well, but that silent change would make many programs fail
>that assume a 32 bit time_t...
>
>All those assignments:
>
>unsigned t = time(NULL);
>
>would have to be found and rewritten.

As they should be. For all my code, it would require zero time and
testing.

> That was my rationale
>behind this, (and I suppose HP/SUN too)
>
>

--
Al Balmer
Balmer Consulting
removebalmerc...@att.net

Christian Bau

May 28, 2004, 3:07:24 PM
In article <c975tf$cgo$1...@news-reader2.wanadoo.fr>,
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

If code assumes that time_t is 32 bit, then it is already broken.

As a programmer, you don't know the size of time_t, and you don't know
how to interpret its values except by calling other functions of the
same implementation. You can't write a time_t to a file and hope to be
able to read it on a different implementation (for example a new version
of the same compiler), so doing that is very shortsighted.

What you can do portably is to convert time_t into something with a
definite meaning (for example unsigned short year, unsigned char month,
day, unsigned long seconds or 64 bit microseconds since Jan. 1st 1800 if
that is what you want), write it to a file using portable code, and read
it back using the same interpretation of the data.

So if you think that 32 bit time_t cannot work for the next 40 years
then just change it to 64 bit, and all properly written code will
continue working.

James Kuyper

May 28, 2004, 3:39:25 PM
Christian Bau wrote:
...

> If code assumes that time_t is 32 bit, then it is already broken.

Not broken, just unportable.

Paul Eggert

May 28, 2004, 7:12:27 PM
At Fri, 28 May 2004 12:16:34 +0200, "jacob navia" <ja...@jacob.remcomp.fr> writes:

> There are several implementations of this functions in several
> operating systems.
>

> I suppose the Compaq implementation is similar
> http://btrcx1.cip.uni-bayreuth.de/cgi-bin/manpages/time/3

DEC (later Compaq; now HP) messed up when they ported Unix to the
64-bit Alpha chip long ago. They decided to make time_t a 32-bit
quantity, since that was the size it was on the VAX. This was a bad
decision for a couple of reasons. At some point DEC figured this out,
and added a 64-bit "time64_t" but by then they had
backward-compatibility issues so they couldn't futz with the original
time_t.

That's a Fortran library. It has nothing to do with C.

Sun, like everybody I know of (except DEC), decided that their 64-bit
hosts should have 64-bit time_t in C.

It might make sense for Sun etc. to have a "large-time mode" for
32-bit applications, so that they can use 64-bit time_t as well. But
Sun etc. hasn't done this; I guess they figure that by the time 2038
rolls around we'll all be using 64-bit processors.

File-offset system calls like lseek64 are purely a transition strategy
for users who are compiling applications in mixed environments, where
some 32-bit code is aware of large files, other (legacy) 32-bit code
is not, and it's some poor schmuck's job to link all that code
together into a single application and make it work. Other than that
particular problem, there's no excuse for this complexity and it
certainly doesn't belong in the C standard. (It doesn't belong in
POSIX either, and the POSIX folks are smart enough to realize this.)

Paul Eggert

May 28, 2004, 7:17:22 PM
At Fri, 28 May 2004 17:46:57 +0200, "jacob navia" <ja...@jacob.remcomp.fr> writes:

> The standards comitee has decided the same for fseek64 then?

That's correct: fseek64 has not been standardized as far as I know.

POSIX does specify fseeko, which acts like fseek but deals with an
off_t value rather than a long value, but that issue is conceptually
independent of the 32/64 issue.

On many 32-bit platforms nowadays off_t is 64 bits (or can be made to
be 64 bits if you define the _FILE_OFFSET_BITS macro before including
stdio.h etc.), but this detail is not part of any standard that I know of.

Paul Eggert

May 28, 2004, 7:26:31 PM
At Fri, 28 May 2004 10:18:10 -0600, Thad Smith <thad...@acm.org> writes:

> What date/time formats are portable? The Posix version has a limited
> lifetime.

Not really, since POSIX doesn't require that time_t must be limited to
32 bits, and many POSIX implementations use signed 64-bit time_t which
for all practical purposes is unlimited.

> A formatted character string, such as asctime produces, avoids the
> short lifetime problem,

That's the right idea in general, but asctime has its own lifetime
issues since it has undefined behavior before the year -999 and after
the year 9999.

> Does what I am talking about already exist and only need a slight boost
> to be adopted as a defacto efficient, portable, long-lived date/time
> format?

D. J. Bernstein has proposed and is using such a format;
see <http://cr.yp.to/libtai/tai64.html>.

Jack Klein

May 29, 2004, 12:04:45 AM
On Fri, 28 May 2004 13:46:00 GMT, Brian Inglis
<Brian....@SystematicSw.Invalid> wrote in comp.std.c:

> On Fri, 28 May 2004 12:56:45 +0200 in comp.std.c, "jacob navia"
> <ja...@jacob.remcomp.fr> wrote:
>
> >
> >"Harti Brandt" <bra...@dlr.de> wrote in message
> >news:Pine.GSO.4.60.0405281239370.18426@zeus...
> >>
> >> Standardizing time64_t would be the same type of standardisation bug
> >> like the 64-bit file stuff. They should instead just use a 64-bit time_t
> >> and that's it.
>
> Great advice.
>
> >Well, but that silent change would make many programs fail
> >that assume a 32 bit time_t...
>
> Those incorrect programs would also silently fail on systems where
> time_t is not an integer type. The Standard does not guarantee time_t
> is POSIX compatible.

[snip]

Unless they've upgraded it lately, the POSIX standard does not
guarantee that time_t is an integer type, any more than the C standard
does.

POSIX, like C, guarantees that time_t is an arithmetic type, and
therefore could be a float, long, or double.

The one guarantee that POSIX makes and C doesn't is that the time_t
value returned by time() contains the number of seconds since a base
time. I can't remember for sure if the base time is specified by the
POSIX standard.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html

Douglas A. Gwyn

May 29, 2004, 12:57:10 AM
jacob navia wrote:
> It defines time64_t as long long (64 bit).
> Do you see any obvious wrongdoings here? Feedback appreciated.

I think modeling *newly invented* time conversion
functions after the antique legacy functions in the
C standard is a mistake; there have been several
people working on a suite of replacement functions
with better designs and that would be the thing to
emulate (or contribute to). WG14 had considered
some extensions to the standard functions for C99,
but in response to feedback decided to withdraw
them in order to allow better solutions to be
developed for consideration for future standards.

It is also a mistake to tie the data type to 64
bits. What happens on a machine where 128-bit
words are a better choice?

Finally, note that time_t doesn't have to be
limited to 32 bits. In fact on Unix systems it
needs to be long int, which is sometimes already
64 bits.

Douglas A. Gwyn

May 29, 2004, 1:06:23 AM
jacob navia wrote:
> Well, but that silent change would make many programs fail
> that assume a 32 bit time_t...
> All those assignments:
> unsigned t = time(NULL);
> would have to be found and rewritten. That was my rationale
> behind this, (and I suppose HP/SUN too)

Sheesh. That kind of code was a mistake even back when
these time functions were first invented (predating 7th
Edition Unix of 1978). The type returned by time is
time_t, not unsigned int. On Unix systems time_t needs
to be type long int for compatibility with legacy code;
on systems where long int has only 32 bits there is thus
a problem after year 2038 or so (on Unix, time_t also
has to be measured in seconds past the epoch).

If one is going to "fix" such code by editing it, simply
using time_t would take care of the problem, insofar as
correction via recompiling goes. (There is also a
binary object/library compatibility issue when a typedef
such as time_t is changed on some platform, which is well
understood by major vendors and for which solutions have
been devised that do not require editing source code.)

jacob navia

May 28, 2004, 7:01:27 PM

"Harti Brandt" <bra...@dlr.de> wrote in message
news:Pine.GSO.4.60.0405281239370.18426@zeus...

This would break quite a lot of code. I am convinced that
people have a practical view of code: changing and mending
code is not pleasant, and in this case it can be avoided.

We are in 2004, and the next overflow of that counter
is scheduled for 2038, i.e. several years in the future.

Only certain calculations (40-year loans, for instance)
need dates beyond that today. If people start using 64-bit
time_t now, it will be easier to make the transition
later, say in the 2029 edition of the C standard.

Most programs will be phased out in 34 years. May
I remind you that 34 years ago was precisely 1970...
the starting event of that counter.

Still, the fact that that counter has survived 34 years
means that it will survive another 34.
Programs are shorter-lived, and not one of the
programs written in 1970 is still running today unmodified.

The "time64" feature leaves users the choice of
the point of transition, since they can at any time
redefine the standard types as 64 bits. This is a decision
that should not be made by the compiler.

They have 34 years to move smoothly to 64 bits when they
want to.


Keith Thompson

May 29, 2004, 5:15:10 AM
"jacob navia" <ja...@jacob.remcomp.fr> writes:
> "Harti Brandt" <bra...@dlr.de> wrote in message
> news:Pine.GSO.4.60.0405281239370.18426@zeus...
> > On Fri, 28 May 2004, jacob navia wrote:
> >
> > Standardizing time64_t would be the same type of standardisation bug
> > like the 64-bit file stuff. They should instead just use a 64-bit time_t
> > and that's it.
>
> This would break quite a lot of code. I am convinced that
> people have a practical view of code, changing and mending
> code is not pleasant, and in this case can be avoided.

Such code is already broken. Many systems already use a 64-bit
time_t, and more will do so in the next few years. Porting to such
systems will weed out any code that assumes time_t is 32 bits.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

jacob navia

May 29, 2004, 6:11:36 AM

"Keith Thompson" <ks...@mib.org> wrote in message
news:ln4qpzo...@nuthaus.mib.org...

> "jacob navia" <ja...@jacob.remcomp.fr> writes:
>
> Such code is already broken. Many systems already use a 64-bit
> time_t, and more will do so in the next few years. Porting to such
> systems will weed out any code that assumes time_t is 32 bits.

But you HAVE to assume some size in MANY applications:

1) You are putting the time_t in a database and you have to choose which
size.

2) You are writing a Windows compiler and all executables have a 4-byte
field that is a time_t

3) You are sending this over the network and the other side expects
4 byte time_t

With my proposal, it is the USER that decides when he/she will change that
code, not the compiler!

Wojtek Lerch

May 29, 2004, 11:22:49 AM
jacob navia wrote:
> The "time64" feature leaves the users the choice of
> the point of transition, since they can at any time
> redefine the standard types in 64 bits. This is a decision
> that should not be done by the compiler.

It's nice to allow the user to decide, but wouldn't it be even nicer if
instead of having to run a global search and replace on all his source
files, he could do it with a simple command-line switch:


typedef long _Time32_t;
_Time32_t _Time32(_Time32_t *pt);
struct tm *_Localtime32(_Time32_t *pt);
// ...

typedef long long _Time64_t;
_Time64_t _Time64(_Time64_t *pt);
struct tm *_Localtime64(_Time64_t *pt);
// ...

#if _BIGTIME
typedef _Time64_t time_t;
#define time _Time64
#define localtime _Localtime64
// ...
#else
typedef _Time32_t time_t;
#define time _Time32
#define localtime _Localtime32
// ...
#endif

Christian Bau

May 29, 2004, 11:49:14 AM
In article <c99nkp$aa0$1...@news-reader4.wanadoo.fr>,
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

> "Keith Thompson" <ks...@mib.org> wrote in message
> news:ln4qpzo...@nuthaus.mib.org...
> > "jacob navia" <ja...@jacob.remcomp.fr> writes:
> >
> > Such code is already broken. Many systems already use a 64-bit
> > time_t, and more will do so in the next few years. Porting to such
> > systems will weed out any code that assumes time_t is 32 bits.
>
> But you HAVE to assume some size in MANY applications:
>
> 1) You are putting the time_t in a database and you have to choose which
> size.

If you put a time_t into a database and expect to be able to read it on
a different implementation or a future implementation of the same
compiler, then you get what you deserved.

> 2) You are writing a windows compiler and all executables have a 4 byte
> field that is a time_t

If time_t is not four bytes, then the executables may very well have a
four byte field, but it is not time_t. It may be "time_t as defined by
implementation X" which is whatever implementation X defined it to be,
and you probably can read it and interpret it correctly in some portable
way, but _not_ by using type time_t.


> 3) You are sending this over the network and the other side expects
> 4 byte time_t

The other side should be beaten with a clue stick.

Wojtek Lerch

May 29, 2004, 2:21:56 PM
"Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
news:40B8AB1...@yahoo.ca...

> #if _BIGTIME
> typedef _Time64_t time_t;
> #define time _Time64
> #define localtime _Localtime64
> // ...
> #else
> typedef _Time32_t time_t;
> #define time _Time32
> #define localtime _Localtime32
> // ...
> #endif

Trouble is, this is not completely conforming: people are supposed to be
able to #undef the name of a library function and still be able to take its
address. Can this be done without having to provide two sets of libraries?
(And without compiler magic that renames a function using a pragma or
something, instead of a macro?)


Keith Thompson

May 29, 2004, 5:31:07 PM

It's already up to the user. There's nothing stopping any user from
declaring a time64_t type in his own program.

Your proposal lets the user decide whether to use 64 or 32 bits -- or
rather, whether to use 64 bits or the implementation-defined time_t
type, which could be 64 bits itself. (Or were you suggesting a
time32_t type as well -- and would such a type be required on 64-bit
systems that really don't need it?) You're still imposing assumptions
about the representation (resolution, epoch, integer vs. floating-point,
signed vs. unsigned).

The standard deliberately leaves these things implementation-defined.
Adding time64_t to the standard without nailing down the other
attributes of the type would be less than useful; there would still be
no guarantee that the time64_t you send over the network would be a
valid time64_t as seen on the other side.

It's possible, I suppose, that a time64_t type might be useful in some
contexts, but the standard isn't the place for it, which means this
isn't the newsgroup for it.

It's barely possible that a future version of the C standard might
specify more details about the representation of time_t -- for
example, that it's a signed integer type of at least 32 bits
representing seconds since 1970-01-01 -- but it seems unlikely. Many
(most?) current implementations use such a representation, but even if
they all did, it would preclude future implementations from using a
resolution finer than one second.

The solution to the Y2038 problem (which is a problem of C
implementations, not of the C standard) is the migration to 64-bit
time_t. This is already happening. The solution to the admitted
weaknesses of the current <time.h> interface, I think, will be a new
interface, not an improved version of the current one. Such an
interface is being worked on, but it wasn't ready in time to be
included in C99.

Paul Eggert

May 30, 2004, 12:07:14 AM
At Sat, 29 May 2004 18:21:56 GMT, "Wojtek Lerch" <Wojt...@yahoo.ca> writes:

> people are supposed to be able to #undef the name of a library
> function and still be able to take its address. Can this be done
> without having to provide two sets of libraries? (And without
> compiler magic that renames a function using a pragma or something,
> instead of a macro?)

No, in practice it requires magic.

Antoine Leca

May 31, 2004, 8:44:43 AM
In c975tf$cgo$1...@news-reader2.wanadoo.fr, jacob navia wrote:

> All those assignments:
>
> unsigned t = time(NULL);
>
> would have to be found and rewritten.

I doubt you'll find much code of this style that deserves any dedication.
This one, for instance, is broken not only wrt the standard, but it also
assumes 32-bit int as well, and trying to port a program like this to
DOS/Win16 would certainly have been a nightmare (the issue with time_t
being a minor one by comparison.) Thus, it is better NOT to try to port
this kind of program to anything, and to let it fall out of use.


Antoine

Antoine Leca

May 31, 2004, 9:15:02 AM
In c99nkp$aa0$1...@news-reader4.wanadoo.fr, jacob navia wrote:

> But you HAVE to assume some size in MANY applications:
>
> 1) You are putting the time_t in a database and you have to choose
> which size.

What is the problem here? Either the database engine supports some kind(s)
of timestamp data format, and then surely time_t is a BAD choice; or it
does not (which excludes it from the first class these days); or else there
is no engine at all, and then, depending on your application:
- either this is a database for your own use, and storing it in raw format
is probably the best solution, less work in general and particularly right
now; the day time_t changes, you just write the ad-hoc conversion
routine; that is the general way of handling this, by the way;
- or you want to share the data, and the best solution is very
probably to have some kind of readable format, and then ctime() comes to
mind.


> 2) You are writing a windows compiler and all executables have a 4
> byte field that is a time_t

You mean, you are writing some software tool that mandates the traditional
32-bit Unix-derived format (which endianness?): well, then, just do what you
are already doing in many places, and isolate in an adequate layer the
conversion routines between your application and the outer world. In this
same place you will deal with endianness and similar things. The size of the
timestamps is of a similar nature.

BTW: because 1-second accuracy is now too coarse for a number of tools,
particularly make and its derivatives, this particular use of 32-bit
timestamps is the one that most loudly asks for an evolution; and this
evolution is NOT toward a 64-bit count of seconds (which addresses a problem
that is 34 years away from us, that is, more time than this scheme has
existed), but rather toward increased precision; thus having to deal with
the question of UTC vs. TAI, leap seconds or not; which is definitely a
problem that asks for a complete revamp of <time.h>.
So bottom line: do not worry here, you will certainly have to rewrite this
code one day or another!


> 3) You are sending this over the network and the other side expects
> 4 byte time_t

Same as above, with the added points that most network protocols do NOT use
a 4-byte time_t, and furthermore that the additional layer is already present
(except in those badly written programs that do not deserve consideration),
because the network protocol DOES mandate some endianness (normally big),
and in the normal case you cannot assume you have it (normally, you should
be portable to Intel, so it should work with little-endian as well ;-).)


Antoine

Antoine Leca

May 31, 2004, 9:47:38 AM
In 40B76642...@acm.org, Thad Smith wrote:

> What date/time formats are portable? The Posix version has a limited
> lifetime.

Besides, it also has limited accuracy, which proves to be much more of a
problem right now (you are not using time_t to drive an application that
computes jubilee dates, are you?)


> Does what I am talking about already exist and only need a slight
> boost to be adopted as a defacto efficient, portable, long-lived
> date/time format?

There was a long thread about this in 2000, during the preparation of the
new version of POSIX. Various points were brought up, but no consensus.
Among the various problems mentioned were:
- TAI vs. UTC (i.e. leap seconds)
- if we use fixed-point (also nicknamed the Korn proposal, IIRC), then there
is a problem because we have no C support for it
- which accuracy (µs seems much too coarse, as or fs seems overkill and
"eats" all the bits in 64; so either ns or ps; but now we also have Win32
and its dµs, decimicroseconds = 100 ns units...)
- J2000.0 was then seen as a nice Epoch (which even allowed floating point
to become "usable"...) but this would obviously be a somewhat transitory
solution.

As you see (and as you already knew), too much choice, so no election.

OTOH, everybody seemed to agree that proper representation of times (as
opposed to timestamps) in a not-too-large format should go to ISO 8601,
which allows for the correct grade of accuracy depending on your application
(birthdays do not ask for the same thing as laboratory recordings, but
they can still use compatible formats). This agreement showed (more or less)
that the problem with time_t is only relevant to its use as a compact
timestamp, which is a domain much less large than the present use of
time_t. And that makes the problem much less important. As a result, the
wise decision was (seen to be) to postpone.

Antoine

Keith Thompson

May 31, 2004, 2:11:30 PM

Quibble: the programmer probably assumed 32-bit int, but the code only
assumes that unsigned int can hold the time_t representation of the
current time. The code should work (again by accident) on a system
with 64-bit int and time_t -- and will work until 2106 (assuming the
most common representation) on a system with 32-bit int and 64-bit
time_t.

David Hopwood

May 31, 2004, 2:27:45 PM
Antoine Leca wrote:

> In 40B76642...@acm.org, Thad Smith wrote:
>
>>What date/time formats are portable? The Posix version has a limited
>>lifetime.
>
> Besides, it also has limited accuracy, which proves to be much more of a
> problem right now (you are not using time_t to drive an application that
> compute jubilation dates, are you?)
>
>>Does what I am talking about already exist and only need a slight
>>boost to be adopted as a defacto efficient, portable, long-lived
>>date/time format?
>
> There was a long thread about this in 2000, during the preparation of the
> new version of Posix. Various points were brought up, but no consensus.
> Among the various problems mentioned were:
> - TAI vs. UTC (i.e. leap seconds)

Anything that assumes a day has 86400 seconds is simply broken. Use TAI.

> - if we use fixed-point (also nicknamed the Korn proposal, IIRC), then there
> is a problem because we have no C support

So don't do that. "typedef long long time64_t;" or "typedef int64_t time64_t;"
are fine.

> - which accuracy (µs seems much too coarse, as or fs seems overkill and


> "eats" all the bits in 64; so either ns or ps; but now we also have Win32

> and its dµs, decimicroseconds = 100 ns unit...)

There is no difficulty in multiplying or dividing by 100; different operating
systems use different resolutions anyway. Use ns.

> - J2000.0 was then seen as a nice Epoch (which even allowed floating point
> to become "usable"...) but this would be obviously a somewhat transitory
> solution.

Why? J2000.0 is still a nice epoch. It has the advantage of being after 1972,
so UTC conversion is well-defined after the epoch; it is also in existing use
in astronomy. I don't see why it has become any worse an epoch simply because
4 years have passed.

Personally I would use TAI64N, though, just to avoid inventing something new:
<http://cr.yp.to/libtai/tai64.html>.

> As you see (and as you already knew), too much choice, so no election.

But much of this choice is arbitrary, and any vaguely reasonable set of
choices would have been better than doing nothing.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

jacob navia

May 31, 2004, 3:21:14 PM

"David Hopwood" <david.nosp...@blueyonder.co.uk> wrote in message
news:BOKuc.14156$Dm2....@front-1.news.blueyonder.co.uk...
Antoine Leca wrote:

>> As you see (and as you already knew), too much choice, so no election.

> But much of this choice is arbitrary, and any vaguely reasonable set of
> choices would have been better than doing nothing.

!!!!!

EXACTLY !

I agree with that 100%. !

Doing nothing just ignores the problems.

Paul Eggert

May 31, 2004, 9:13:20 PM
At Mon, 31 May 2004 21:21:14 +0200, "jacob navia" <ja...@jacob.remcomp.fr> writes:

> Doing nothing just ignores the problems.

So do something! You can adopt David Hopwood's suggestion (i.e., use
TAI64N for time_t) without changing the C Standard one iota. If
that's what you want to do, go for it! Nobody's stopping you, and
your implementation will conform to the standard.

Richard Bos

Jun 1, 2004, 4:36:26 AM
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

> "Keith Thompson" <ks...@mib.org> wrote in message
> news:ln4qpzo...@nuthaus.mib.org...
> > "jacob navia" <ja...@jacob.remcomp.fr> writes:
> >
> > Such code is already broken. Many systems already use a 64-bit
> > time_t, and more will do so in the next few years. Porting to such
> > systems will weed out any code that assumes time_t is 32 bits.
>
> But you HAVE to assume some size in MANY applications:
>
> 1) You are putting the time_t in a database and you have to choose which
> size.

Any database programmer worth the name knows to use YYYYMMDDHHMMSS, or a
suitable part or extension of that format - _never_ a raw time_t.

> 2) You are writing a windows compiler and all executables have a 4 byte
> field that is a time_t
>
> 3) You are sending this over the network and the other side expects
> 4 byte time_t

Gak! Get your standards corrected...

> With my proposal, it is the USER that decides when he/she will change that
> code, not the compiler!

Yup. And then the user will have to change again. And again. And again.
Sorry, but the proper solution is to write portable code, not to use
kludgy patches.

Richard

Christian Bau

Jun 1, 2004, 4:37:06 AM
In article <40bc3f80....@news.individual.net>,
r...@hoekstra-uitgeverij.nl (Richard Bos) wrote:

> Yup. And then the user will have to change again. And again. And again.
> Sorry, but the proper solution is to write portable code, not to use
> kludgy patches.

As an example, it would be incredibly stupid to store dates in a
spreadsheet on platform X as number of seconds since Jan. 1st, 1904 and
on platform Y as number of seconds since Jan. 1st, 1900, just because
platform X and platform Y interpret time_t values in different ways. It
would be so stupid, I couldn't imagine anyone doing that.

Harti Brandt

Jun 1, 2004, 4:44:26 AM
On Fri, 28 May 2004, jacob navia wrote:

jn>
jn>"Harti Brandt" <bra...@dlr.de> a écrit dans le message de
jn>news:Pine.GSO.4.60.0405281239370.18426@zeus...
jn>> On Fri, 28 May 2004, jacob navia wrote:
jn>>
jn>> Standardizing time64_t would be the same type of standardisation bug
jn>> like the 64-bit file stuff. They should instead just use a 64-bit time_t
jn>> and that's it.
jn>>
jn>
jn>Well, but that silent change would make many programs fail
jn>that assume a 32 bit time_t...
jn>
jn>All those assignments:
jn>
jn>unsigned t = time(NULL);
jn>
jn>would have to be found and rewritten. That was my rationale
jn>behind this, (and I suppose HP/SUN too)

Others have answered already, just let me add that the move to 64bit types
for file offsets and sizes in BSD was rather smooth as far as I remember.
Also we moved the sparc64 implementation of FreeBSD to 64-bit time_t a
couple of months ago - to my surprise that was rather easy (the most tricky
part was building and installing a 64-bit time_t kernel/world on a 32-bit
time_t kernel/world).

Counting on the fact that 2038 is 34 years in the future and that people
will convert their programs until then is rather naive. I know that people
were writing non-year-2k compatible programs even in 1999. Others will
just wait until 2037, because the issue might just go away :-)

harti

Richard Bos

Jun 1, 2004, 5:00:22 AM
Christian Bau <christ...@cbau.freeserve.co.uk> wrote:

You're right. No programmer or software house worth the name would ever
do that. Only complete amateurs or downright swindle firms would try
that rip-off on their customers.

Richard

jacob navia

Jun 1, 2004, 5:16:39 AM

"Richard Bos" <r...@hoekstra-uitgeverij.nl> a écrit dans le message de
news:40bc4511....@news.individual.net...

> You're right. No programmer or software house worth the name would ever
> do that. Only complete amateurs or downright swindle firms would try
> that rip-off on their customers.

Sorry but I sense some irony in all this...

Can you specify what is going on?

(Without naming the big software company of course :-)


Niklas Matthies

Jun 1, 2004, 5:27:20 AM
On 2004-06-01 09:16, jacob navia wrote:
> "Richard Bos" <r...@hoekstra-uitgeverij.nl> wrote in message
> news:40bc4511....@news.individual.net...
>> You're right. No programmer or software house worth the name would ever
>> do that. Only complete amateurs or downright swindle firms would try
>> that rip-off on their customers.
>
> Sorry but I sense some irony in all this...
>
> Can you specify what is going on?

Try this: http://www.google.de/search?q=KB180162

-- Niklas Matthies


Casper H.S. Dik

Jun 1, 2004, 5:36:08 AM
Brian Inglis <Brian....@SystematicSw.Invalid> writes:

>>Well, but that silent change would make many programs fail

>>that assume a 32 bit time_t...

>Those incorrect programs would also silently fail on systems where


>time_t is not an integer type. The Standard does not guarantee time_t
>is POSIX compatible.

The problem you ignore is one of binary compatibility; you cannot
change a type and expect binaries to continue to work. (And this includes
situations where products are shipped with libraries or cases where
partial recompiles are done after types are changed).

Changing time_t would also affect the size of other structures used
throughout the operating system.

This leaves you with two choices:

bump all types and change the object file format
(this is essentially what, e.g., 64-bit Solaris does;
it requires you to ship two copies of each library: one for
32 bit binaries/one for 64 bit binaries)

introduce a type64_t and related interfaces so object
files and binaries can still be mixed and matched and
you only need to ship and maintain one set of libraries.
(this is the solution chosen for 64 bit file pointers)

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

Casper H.S. Dik

Jun 1, 2004, 5:39:26 AM
"jacob navia" <ja...@jacob.remcomp.fr> writes:

>The "time64" feature leaves the users the choice of
>the point of transition, since they can at any time
>redefine the standard types in 64 bits. This is a decision
>that should not be done by the compiler.


The "large file" interfaces are generally hidden from the
application; you pass a command line option to the compiler to specify
that you want a "64-bit off_t", and the compiler/environment
change off_t to 64 bits and then map all the functions to their
64 bit counterparts. It's also about being able to maintain
a single library which supports both types of applications; that
is important to allow for the continuous use of old libraries and
old binaries with the new run-time system.

Casper H.S. Dik

Jun 1, 2004, 5:42:44 AM
Paul Eggert <egg...@twinsun.com> writes:

>It might make sense for Sun etc. to have a "large-time mode" for
>32-bit applications, so that they can use 64-bit time_t as well. But
>Sun etc. hasn't done this; I guess they figure that by the time 2038
>rolls around we'll all be using 64-bit processors.

That, I think, will certainly be true for the processors, but not
necessarily for the applications :-(

Binary compatibility allows people to stay in business, but it
complicates certain things greatly. (And OpenSource does not magically
fix this, as some seem to think)

>File-offset system calls like lseek64 are purely a transition strategy
>for users who are compiling applications in mixed environments, where
>some 32-bit code is aware of large files, other (legacy) 32-bit code
>is not, and it's some poor schmuck's job to link all that code
>together into a single application and make it work. Other than that
>particular problem, there's no excuse for this complexity and it
>certainly doesn't belong in the C standard. (It doesn't belong in
>POSIX either, and the POSIX folks are smart enough to realize this.)

Indeed; and time64_t could be part of a "long time compilation environment"
but all the functions would be automagically mapped.

time64_t would be exclusively used by the implementors of that environment,
just as off64_t is.

Dan Pop

Jun 1, 2004, 10:07:47 AM
In <c973i4$d1p$1...@news-reader3.wanadoo.fr> "jacob navia" <ja...@jacob.remcomp.fr> writes:

>There are several implementations of this functions in several
>operating systems.
>
>I thought that the standards comitee has (maybe) reflected about this
>and a proposition for this is "in the works" or recommended.
>
>A similar implementation exists under:
>Hewlett Packard
>http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/MAN/MAN3/1996____.HTM
>
>I suppose the Compaq implementation is similar
>http://btrcx1.cip.uni-bayreuth.de/cgi-bin/manpages/time/3
>
>Sun Microsystems has one too:
>http://docs.sun.com/source/817-5065/1_library.html

A lot more implementations support <unistd.h>. So, why aren't you
advocating <unistd.h> support in the C standard?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Dan...@ifh.de

Jonathan Leffler

Jun 1, 2004, 10:50:14 AM

And especially stupid to have platform Y think that 1900-02-28 was
followed by 1900-02-29, not 1900-03-01?

--
Jonathan Leffler #include <disclaimer.h>
Email: jlef...@earthlink.net, jlef...@us.ibm.com
Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/

David R Tribble

Jun 1, 2004, 4:06:57 PM
Thad Smith writes:
>> What date/time formats are portable? The Posix version has a limited
>> lifetime.

Paul Eggert wrote:
> Not really, since POSIX doesn't require that time_t must be limited to
> 32 bits, and many POSIX implementations use signed 64-bit time_t which
> for all practical purposes is unlimited.

>> Does what I am talking about already exist and only need a slight boost
>> to be adopted as a defacto efficient, portable, long-lived date/time
>> format?

> D. J. Bernstein has proposed and is using such a format;
> see <http://cr.yp.to/libtai/tai64.html>.


1. I have a question that I didn't see an answer for in this thread:
Does the time64_t type have exactly the same value as the existing time_t
type (except for being a wider type)? I.e., on POSIX systems, does a
time64_t value represent the number of seconds since 1970-01-01 00:00:00?
Or, to put it another way, can a time_t value be converted into a time64_t
value directly with nothing more than a simple cast?


--
Just my two cents worth, but to point out that we've been down this road
before, and no change to ISO C was ever made; perhaps now is a good time
to open those old wounds again?

2. Here's my proposal for improving the definition of the existing time_t type:
http://david.tribble.com/text/c0xtimet.htm

My proposal does not advocate making any changes to time_t itself, but simply
to add standard macros that describe the behavior of it, so that the
programmer can write more effective and portable code. If this kind of thing
can't be done for time_t, could it be done for time64_t?


3. If we're going to adopt a much wider time type (64 bits), why not improve
it by increasing its precision to handle subseconds? Jumping from 32 bits
(the most common size of time_t) to 64 bits gives us more than enough room
to increase the precision to microseconds and still cover a range of thousands
of years. Here's a proposal I made along those lines, but which I eventually
abandoned due to lack of interest:
http://david.tribble.com/text/c0xtime.htm

But I suspect that it's already too late for such an improvement to be made
to time64_t, even though it's not standardized, right?


4. As I pointed out several years ago, moving to a wider time type is
problematic because of existing data files containing time_t values, e.g.,
PGP encrypted files, Win32 object files, etc.
http://groups.google.com/groups?selm=3964F7D4.A5944034%40tribble.com

So, yes, changing to a wider (or just simply a different) time type will take
time (apologies for the pun). But we've got (only) 34 years until the first
flag day.

-drt

jacob navia

Jun 1, 2004, 4:49:52 PM

"David R Tribble" <da...@tribble.com> wrote in message
news:f4002eab.04060...@posting.google.com...

> 1. I have a question that I didn't see an answer for in this thread:
> Does the time64_t type have exactly the same value as the existing time_t
> type (except for being a wider type)? I.e., on POSIX systems, does a
> time64_t value represent the number of seconds since 1970-01-01 00:00:00?
> Or, to put it another way, can a time_t value be converted into a time64_t
> value directly with nothing more than a simple cast?
>

In lcc-win32 yes.

I was unsure about it, then I asked here, and now I see that
people here are just as unsure as I was and that this
whole topic looks like a can of worms. :-)


jacob navia

Jun 1, 2004, 5:01:39 PM

"Dan Pop" <Dan...@cern.ch> wrote in message
news:c9i2jj$liv$3...@sunnews.cern.ch...

>
> A lot more implementations support <unistd.h>. So, why aren't you
> advocating <unistd.h> support in the C standard?

Look Dan, why don't you try to keep on-topic?

Polemic, polemic. It is boring at the end.

What is your opinion about the issues at hand?

o Overflow of the 32 bit counter and the transition to a wider
time_t.
o Time keeping in general, precision, epochs,
leap seconds yes or no, etc. Those are the issues.

Polemic doesn't lead us any further.

jacob


Richard Bos

Jun 2, 2004, 3:30:42 AM
"jacob navia" <ja...@jacob.remcomp.fr> wrote:

> "Dan Pop" <Dan...@cern.ch> wrote in message
> news:c9i2jj$liv$3...@sunnews.cern.ch...
> >
> > A lot more implementations support <unistd.h>. So, why aren't you
> > advocating <unistd.h> support in the C standard?
>
> Look Dan, why don't you try to keep on-topic?

You started it.

> o Overflow of the 32 bit counter and the transition to a wider
> time_t.

There is no "the 32 bit counter" in the Standard discussion of time_t.

> o Time keeping in general, precision, epochs,
> leap seconds yes or no, etc. Those are the issues.

And they're quite a bit more complicated than can be solved by merely
adding another type to the Standard, a fixed-size one to make matters
worse.

Richard

Harti Brandt

Jun 2, 2004, 4:56:16 AM
On Tue, 1 Jun 2004, jacob navia wrote:

jn>o Overflow of the 32 bit counter and the transition to a wider
jn> time_t.

There are several systems with 64-bit time_t already, so what's the
problem? Just use one of these systems. I really see no reason for
discussion here. If someone has data files that contain time_t's and he is
moving to another OS or machine he'll have to convert his data files
anyway. There is no guarantee about the underlying arithmetic type.
Writing time_t on Sparc Solaris and reading them on Solaris x86 will fail,
even when both have a 32-bit time_t. I fail to see how inventing time64_t
would improve this situation (if it needs improvement at all except for
fixing these broken programs, 34 years should be enough to fix every
single program anyway).

jn>o Time keeping in general, precision, epochs,
jn> leap seconds yes or no, etc. Those are the issues.

That's an entirely different topic. That's like asking for SQL support in
fopen(). time_t has its application domain. If you need anything else,
you'll need another type but that should certainly not be time[0-9]*_t.

harti

Dan Pop

Jun 2, 2004, 8:45:16 AM
In <c9iqrl$l7n$1...@news-reader2.wanadoo.fr> "jacob navia" <ja...@jacob.remcomp.fr> writes:


>"Dan Pop" <Dan...@cern.ch> wrote in message
>news:c9i2jj$liv$3...@sunnews.cern.ch...
>>
>> A lot more implementations support <unistd.h>. So, why aren't you
>> advocating <unistd.h> support in the C standard?
>
>Look Dan, why don't you try to keep on-topic?

When I pointed out that your original post was off topic, you replied
by mentioning a few implementations providing a non-standard feature.

Hence my answer, which is perfectly adequate to your reply. Whether you
like it or not.

>Polemic, polemic. It is boring at the end.

If you don't like it, Usenet is not for you.

>What is your opinion about the issues at hand?
>
>o Overflow of the 32 bit counter and the transition to a wider
> time_t.

Show me the chapter and verse making time_t a 32-bit type. I am currently
using systems with a 64-bit time_t, so your time64_t looks laughable to
me. It might make sense in the context of *certain* implementations,
but it makes absolutely no sense in the context of the C standard.

>o Time keeping in general, precision, epochs,
> leap seconds yes or no, etc. Those are the issues.

Are they? All I see in the subject line is time64_t. And I do know,
from past c.s.c discussions, that enhancing <time.h> proved to be a can
of worms that, after mature consideration, the committee decided to leave
closed. Check the newsgroup's archives.

>Polemic doesn't lead us any further.

The day we all agree on everything related to the C standard, we
could just as well close this newsgroup...

Douglas A. Gwyn

Jun 2, 2004, 10:49:37 AM
David R Tribble wrote:
> 3. If we're going to adopt a much wider time type (64 bits), why not improve
> it by increasing its precision to handle subseconds? Jumping from 32 bits
> (the most common size of time_t) to 64 bits gives us more than enough room
> to increase the precision to microseconds and still cover a range of thousands
> of years.

Why not generalize even more and have a resolution in femtoseconds
with a maximum value of billions of years? I'm not being entirely
facetious; if there is going to be a useful general-purpose time
handling library it ought to support the operations done with time
values in general scientific calculation. Even from the more
narrow perspective of computer systems, nanosecond resolution will
be important within the foreseeable future, and the original dates
for some human documents worth transcribing into on-line versions
go back thousands of years (and presumably we don't want to have
to change time format ever again for the remainder of civilization).

The general rule for allocating bits is to estimate the widest
range that seems at all reasonable, then double it.

Antoine Leca

Jun 2, 2004, 11:23:22 AM
In BOKuc.14156$Dm2....@front-1.news.blueyonder.co.uk, David Hopwood
wrote:


I appreciate your concern. However, I was not the one who chose; I am
just trying to help people here understand what happened then. And my
understanding is that the committee did not see any possible consensus, so
they "chose" to do nothing; this is what I am trying to explain. My personal
solution has no place here.

> Antoine Leca wrote:
>
>> Among the various problems mentioned were:
>> - TAI vs. UTC (i.e. leap seconds)
>
> Anything that assumes a day has 86400 seconds is simply broken.

The only point was that there are various kinds of seconds, and the very one
somebody may want varies with the application. And BTW, a number of
applications do not require such accuracy, so they can very quietly assume
that a day is 86400 s.

> Use TAI.

Well, with TAI, it is my impression that "days" are always 86400 s.
OK, they are just "SI days", that have nothing to do with Earth rotation
altogether; "UT* days" would be an entirely different thing. But at least
TAI is here just to avoid having variable-length "days". ;-)

Anyway, depending on your application, TAI may or may not be a good
choice. For example, bankers sometimes do subtractions of days to compute
interest; it is my understanding that they would widely prefer NOT to have
to adjust for leap seconds (even if it comes down to a simple 64-bit binary
lookup in a ~30-entry table, it is still some cycles "wasted".)


>> - if we use fixed-point (also nicknamed the Korn proposal, IIRC),
>> then there is a problem because we have no C support
>
> So don't do that. "typedef long long time64_t;" or "typedef int64_t
> time64_t;" are fine.

Hmmm; five lines below you seem to advocate for J2000.0...
Seems contradictory.


>> - which accuracy (µs seems much too coarse, as or fs seems overkill
>> and "eats" all the bits in 64; so either ns or ps; but now we also
>> have Win32 and its dµs, decimicroseconds = 100 ns unit...)


>
> There is no difficulty in multiplying or dividing by 100; different
> operating systems use different resolutions anyway. Use ns.

You do not seem to get the point. There already _is_ something around there
that uses some defined unit here. It was not easy to dispel it, at least for
the committee.


>> - J2000.0 was then seen as a nice Epoch (which even allowed floating
>> point to become "usable"...) but this would be obviously a somewhat
>> transitory solution.
>
> Why?

Because, in A.D. 2000, there was reminiscence of something named Y2K that
was seen by a number of people as some kind of "false syndrome created by
computerists to draw money"; at the same time there was still reminiscence
of the "don't make the same mistake again, ever" syndrome (this one I
usually name the Y10K problem.) So on closer inspection J2000.0 did not look
so nice after all (this is just about nice looks; it has nothing to do with
technical merits.)


>> As you see (and as you already knew), too much choice, so no
>> election.
>
> But much of this choice is arbitrary,

So is your election.

> and any vaguely reasonable set
> of choices would have been better than doing nothing.

Why? The question was, "can we right now define a timestamp more
precisely/tightly?" The answer was a qualified no. This is not the same as
"doing nothing"; time_t does continue to exist. And by 2038 we certainly will
have standards that avoid the pitfall (I do not expect 32 bits, nor Posix/C,
to be the standard for computers then.) At the same time, we ALSO will have
problems then with "inexplicable" software failures because of this, however
many standards are issued before.

The only real point is that by 2001, there was a failure to establish a
prominent "definitive solution" for this perceived problem. So much for the
history.


Antoine

Antoine Leca

Jun 2, 2004, 12:01:01 PM
In c96ne1$8v0$1...@news-reader5.wanadoo.fr, jacob navia wrote:
> I have added this functions to the lcc-win32 standard library:

I believe this is off topic to c.s.c., but as you asked and there have been
some debate that gives hints, perhaps I can try to move forward (or did you
drop this altogether in the meanwhile?)


> New header file:
>
> time64.h
>
> It defines time64_t as long long (64 bit).
> typedef long long time64_t;

long long is NOT guaranteed to be 64 bits, so the name is misleading. Why
not use time_time_t instead? ;-)

As you are using Win32, certainly this is the same as FILETIME (100 ns since
Jan 1st, 1601 Gregorian, Greenwich meridian). At least this would make much
sense to me. Also take a look at
<http://msdn.microsoft.com/library/en-us/sysinfo/base/time_functions.asp>


> time64_t time64(time64_t *pt);

What I see missing are functions to convert from/to time_t.

Since you are based on Win32, perhaps you can also have a function to
convert from time64_t to UUID.

> char *ctime64(time64_t *tp);

ctime (and asctime) are considered broken in a number of places, because they
return a statically allocated buffer, which is multithread-unfriendly.
Please take the opportunity to correct this.

> struct tm *localtime64(time64_t *pt);
> struct tm *gmtime64(time64_t t);

Don't repeat the same mistake again and again: please create a single
function that is passed some opaque type that represents the timezone
information; then define two constants, LOCAL_TIMEZONE and ZULU_TIMEZONE.
TIME_ZONE_INFORMATION is perhaps a good choice for the definition of the
opaque type on Win32, BTW.

Also, same point as ctime here: do not return a statically allocated buffer.

> long double difftime64(time64_t time1, time64_t time0);

Please make clear whether the result is in UTC or TAI seconds. Or, more to
the point, offer both.


> time64_t mktime64( struct tm *today );

I am uncomfortable with mktime, particularly when the result falls in the
Fall-back or in the Spring sprung. It would be worthwhile to avoid this.
However, I have no easy-to-use fix for this (it comes as no surprise that
mkxtime was the most polemical back in the days of C9X CD2).

Antoine

jacob navia

Jun 2, 2004, 12:19:41 PM

"Antoine Leca" <ro...@localhost.gov> wrote in message
news:40bdfa17$0$12754$636a...@news.free.fr...

> In c96ne1$8v0$1...@news-reader5.wanadoo.fr, jacob navia wrote:
> > I have added this functions to the lcc-win32 standard library:
>
> I believe this is off topic to c.s.c., but as you asked and there have been
> some debate that gives hints, perhaps I can try to move forward (or did you
> drop this altogether in the meanwhile?)
>

Surely not

>
> > New header file:
> >
> > time64.h
> >
> > It defines time64_t as long long (64 bit).
> > typedef long long time64_t;
>
> long long is NOT guaranteed to be 64 bits, so the name is misleading. Why
> not use time_time_t instead? ;-)
>

In lcc-win32 it is 64 bits.

> As you are using Win32, certainly this is the same as FILETIME (100 ns since
> Jan 1st, 1601 Gregorian, Greenwich meridian). At least this would make much
> sense to me.

Yes, it makes sense and I did use the windows SYSTEMTIME/FILETIME feature.

Thanks. There are some useful functions there, but (for some reason) they
seem to be scheduled for obsolescence. Specifically the documentation says
(in red letters!)
>>
[RtlTimeToSecondsSince1970 is available for use in Windows 2000 and Windows
XP. It may be altered or unavailable in subsequent versions.]
>>
So I added an equivalent function.

>
> > time64_t time64(time64_t *pt);
>
> What I see missing are functions to convert from/to time_t.
>

Right. Thanks. Will add that.

> Since you are based on Win32, perhaps you can also have a function to
> convert from time64_t to UUID.

> > char *ctime64(time64_t *tp);
>
> ctime (and asctime) are considered broken in a number of places, because they
> return a statically allocated buffer, which is multithread-unfriendly.
> Please take the opportunity to correct this.
>

I thought about that, but in my small implementation I can't add a new
(incompatible) behavior. That is why I turned to this group in the
hope that I would find a clear directive as to what to do.

So far there hasn't been any concrete answer. The most I got was "it is in
the works"...

> > struct tm *localtime64(time64_t *pt);
> > struct tm *gmtime64(time64_t t);
>
> > Don't repeat the same mistake again and again: please create a single
> > function that is passed some opaque type that represents the timezone
> > information; then define two constants, LOCAL_TIMEZONE and ZULU_TIMEZONE.
> TIME_ZONE_INFORMATION is perhaps a good choice for the definition of the
> opaque type on Win32, BTW.
>

I understand that this would be nice, but since the standard doesn't say
anything...
I will point the users to the Microsoft API, which anyway seems much better
than what C99 provides, and there are no obvious buffer overflows in those
specs :-(

> Also, same point as ctime here: do not return a statically allocated buffer.
>

I followed the C99 specification. I would gladly do something else if
the committee decides so. Until then, I will point out the dangers of this
in the docs and point to the Microsoft functions.

> > long double difftime64(time64_t time1, time64_t time0);
>
> Please make clear whether the result is in UTC or TAI seconds. Or, more to
> the point, offer both.
>

OK. Thanks

>
> > time64_t mktime64( struct tm *today );
>
> I am uncomfortable with mktime, particularly when the result falls in the
> Fall-back or in the Spring sprung.

You mean when daylight saving time changes? Please specify...
"Spring sprung" doesn't ring any bells inside my little brain.

> It would be worthwhile to avoid this.


Thanks for your input Antoine.

Thad Smith

Jun 2, 2004, 5:19:59 PM
Antoine Leca wrote:
>
> In c96ne1$8v0$1...@news-reader5.wanadoo.fr, jacob navia wrote:
> > I have added this functions to the lcc-win32 standard library:
>
> I believe this is off topic to c.s.c., but as you asked and there have been
> some debate that gives hints, perhaps I can try to move forward (or did you
> drop this altogether in the meanwhile?)
>
> > New header file:
> >
> > time64.h
> >
> > It defines time64_t as long long (64 bit).
> > typedef long long time64_t;
>
> long long is NOT guaranteed to be 64 bits, so the name is misleading. Why
> not use time_time_t instead? ;-)

I sense a confusion here between two issues that I think should be
separated:
1. Providing an internal format with increased range and resolution.
2. Providing functions to convert to/from a well-documented
representation of date/time, suitable for external and portable usage,
such as databases, and data interchange formats. I think it is
reasonable to use a monotonic format other than 1970-based-32-bit int.

Perhaps the best way to move forward is to implement conversion routines
to/from the internal time format for the more popular of the extended
formats and see which ones win out in terms of usage. We will probably
maintain different formats (and functions) in use: probably the bankers,
astronomers, and experimental particle physicists would not be happy
with the same set of time-related formats and functions.

Thad

Antoine Leca

unread,
Jun 3, 2004, 8:30:14 AM6/3/04
to
In 40BE447F...@acm.org, Thad Smith wrote:

> I sense a confusion here between two issues that I think should be
> separated:
> 1. Providing an internal format with increased range and resolution.
> 2. Providing functions to convert to/from a well-documented
> representation of date/time, suitable for external and portable usage,
> such as databases, and data interchange formats.

Agreed so far. As I wrote elsewhere, the correct form for the external
representation is named ISO 8601. It handles gracefully leap seconds,
timezones, and all kind of avanies you may even not think about.

But there is also a need for a set of functions to convert from/to a number
of well-documented awkward representations of dates or times, among them the
1970-based 32-bit count of seconds, the ctime(3) output, and the rfc822
format.


Antoine

Dan Pop

unread,
Jun 3, 2004, 11:43:26 AM6/3/04
to
In <40bf1a2f$0$12747$636a...@news.free.fr> "Antoine Leca" <ro...@localhost.gov> writes:

>Agreed so far. As I wrote elsewhere, the correct form for the external
>representation is named ISO 8601. It handles gracefully leap seconds,
>timezones, and all kind of avanies you may even not think about.

^^^^^^^
What are these "avanies" thingies?

David R Tribble

unread,
Jun 3, 2004, 3:00:17 PM6/3/04
to
David R Tribble wrote:
>> 3. If we're going to adopt a much wider time type (64 bits), why not improve
>> it by increasing its precision to handle subseconds? Jumping from 32 bits
>> (the most common size of time_t) to 64 bits gives us more than enough room
>> to increase the precision to microseconds and still cover a range of
>> thousands of years.

Douglas A. Gwyn wrote:
> Why not generalize even more and have a resolution in femtoseconds
> with a maximum value of billions of years? I'm not being entirely
> facetious; if there is going to be a useful general-purpose time
> handling library it ought to support the operations done with time
> values in general scientific calculation. Even from the more
> narrow perspective of computer systems, nanosecond resolution will
> be important within the foreseeable future, and the original dates
> for some human documents worth transcribing into on-line versions
> go back thousands of years (and presumably we don't want to have
> to change time format ever again for the remainder of civilization).

Yes, but you won't find any applications that need both a very fine subsecond
resolution and a wide range of thousands of years. Any given application
will use a given time/date range and precision that will, at best, fit
into 40 bits or less (scaled appropriately), give or take a few bits.

We already know we can't invent a compact representation that will suit
everybody's needs. I think using a 64-bit signed monotonically increasing
integer value to encode microsecond ticks within a standard epoch (e.g.,
2001-01-01), which covers a range of 292,277 years, is a reasonable
compromise, *especially* for use in a standard library. Applications that
need a wider range of dates (e.g., astronomical calculations), as well as
those that need higher precision (e.g., subatomic particle simulations)
will need to use their own special-purpose libraries, just as they
presumably are doing so today.


> The general rule for allocating bits is to estimate the widest
> range that seems at all reasonable, then double it.

I would prefer a tick size of somewhere between 1 microsecond and
10 nanoseconds. When encoded in a 63-bit value, this corresponds to a range
of between 292,000 and 2,922 years. This is, IMHO, a completely reasonable
compromise.

-drt

Brian Inglis

unread,
Jun 3, 2004, 4:06:13 PM6/3/04
to
On Wed, 2 Jun 2004 18:19:41 +0200 in comp.std.c, "jacob navia"
<ja...@jacob.remcomp.fr> wrote:

>
>"Antoine Leca" <ro...@localhost.gov> wrote in message
>news:40bdfa17$0$12754$636a...@news.free.fr...
>> In c96ne1$8v0$1...@news-reader5.wanadoo.fr, jacob navia wrote:
>> > I have added this functions to the lcc-win32 standard library:
>>
>> I believe this is off topic to c.s.c., but as you asked and there have been
>> some debate that gives hints, perhaps I can try to move forward (or did you
>> drop this altogether in the meanwhile?)
>>
>
>Surely not

>> > char *ctime64(time64_t *tp);


>>
>> ctime (and asctime) are considered broken in a number of places, because
>> they return a statically allocated buffer, which is multithread-unfriendly.
>> Please take the opportunity to correct this.
>>
>
>I thought about that, but in my small implementation I can't add a new
>(incompatible) behavior. That is why I turned to this group with the
>hope that I would find a clear directive as what to do.
>
>So far there hasn't been any concrete answer. Most I got was "it is in
>the works"...

As ctime and asctime are also locale-unfriendly, maybe they should
just be dropped, and programmers recommended in the documentation
(forced in the implementation) to use strftime, with examples given to
achieve the same results as ctime and asctime, and explanations of the
locale-friendly and thread-friendly benefits.

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Brian....@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply

Dave Hansen

unread,
Jun 3, 2004, 4:26:37 PM6/3/04
to
On Thu, 03 Jun 2004 20:06:13 GMT, Brian Inglis
<Brian....@SystematicSw.Invalid> wrote:

[...]


>As ctime and asctime are also locale-unfriendly, maybe they should
>just be dropped, and programmers recommended in the documentation

Deprecated, please.

>(forced in the implementation) to use strftime, with examples given to
>achieve the same results as ctime and asctime, and explanations of the
>locale-friendly and thread-friendly benefits.

That's fine for new code, but there is an existing base, after all.

Regards,

-=Dave
--
Change is inevitable, progress is not.

Richard Bos

unread,
Jun 4, 2004, 3:05:41 AM6/4/04
to
id...@hotmail.com (Dave Hansen) wrote:

> On Thu, 03 Jun 2004 20:06:13 GMT, Brian Inglis
> <Brian....@SystematicSw.Invalid> wrote:
>
> >As ctime and asctime are also locale-unfriendly, maybe they should
> >just be dropped, and programmers recommended in the documentation
>
> Deprecated, please.

OTOH, too much is deprecated and not enough is dropped. If C is ever to
_really_ go forward, the next Standard should drop, _not_ deprecate,
both gets() and old-style declarations. Especially gets() is nothing but
a Hallucigenia, well overdue to be buried inside a virtual Burgess
shale.

Richard

Antoine Leca

unread,
Jun 4, 2004, 10:17:40 AM6/4/04
to
In f4002eab.0406...@posting.google.com, David R Tribble wrote:

> Yes, but you won't find any applications that need both a very fine
> subsecond resolution and a wide range of thousands of years.

A generic time-handling library? (seen from the C standard, it is still an
application.)

> standard epoch (e.g., 2001-01-01), which covers a range of 292,277
> years, is a reasonable compromise,

This specifically I do not know.
If you go beyond human life limits (particularly in the future), the only
remaining "users" are astronomers and like people (here I group SF writers,
cosmic and Indian gurus with them.) And they will not see 300,000 as being
any more (or less) sensible than any other integer values such as 4000.
After all, Andromeda is 8 times farther than that!

So you could very easily trade one or two decimal digits here and "spend"
them in precision (at proofreading time, I notice you did write the same.)
And with a trade of only one decimal digit, we encounter the already widely
available Win32 timestamps (I ignore here the TAI vs. UTC debate.) Since a
standard is supposed to codify existing practice if there are no compelling
reasons not to, you should now construct an argument for why a revision
of the C standard should invent yet another incompatible representation of
time instead of codifying existing practice... (and yes, I know of libtai64;
it is not anywhere near as used as FILETIME, though.)


Antoine

Antoine Leca

unread,
Jun 4, 2004, 10:20:29 AM6/4/04
to
In 40bf88ca....@News.individual.net, Dave Hansen wrote:

> That's fine for new code, but there is an existing base, after all.

Which new code? We are speaking about the introduction of

char *ctime64(time64_t);


Antoine

Antoine Leca

unread,
Jun 4, 2004, 10:46:38 AM6/4/04
to
In c9nguu$2ae$6...@sunnews.cern.ch, Dan Pop wrote:

> In <40bf1a2f$0$12747$636a...@news.free.fr> "Antoine Leca"
> <ro...@localhost.gov> writes:
>
>> Agreed so far. As I wrote elsewhere, the correct form for the
>> external representation is named ISO 8601. It handles gracefully
>> leap seconds, timezones, and all kind of avanies you may even not
> ^^^^^^^
>> think about.

>
> What are these "avanies" thingies?

I meant "Moderately evil thingies."

This is an (old-fashioned) French word I hoped to translate directly into
English. Wrong move, obviously. The official word seems to be "affront", but
I do not know the (English) sense for sure, so I'd prefer referring you to
the meaning I intended.


Antoine

Dave Hansen

unread,
Jun 4, 2004, 3:12:10 PM6/4/04
to
On Fri, 04 Jun 2004 07:05:41 GMT, r...@hoekstra-uitgeverij.nl (Richard
Bos) wrote:

>id...@hotmail.com (Dave Hansen) wrote:
>
>> On Thu, 03 Jun 2004 20:06:13 GMT, Brian Inglis
>> <Brian....@SystematicSw.Invalid> wrote:
>>
>> >As ctime and asctime are also locale-unfriendly, maybe they should
>> >just be dropped, and programmers recommended in the documentation
>>
>> Deprecated, please.
>
>OTOH, too much is deprecated and not enough is dropped. If C is ever to

Like what? I'm not aware of anything that was deprecated in C99.

>_really_ go forward, the next Standard should drop, _not_ deprecate,
>both gets() and old-style declarations. Especially gets() is nothing but
>a Hallucigenia, well overdue to be buried inside a virtual Burgess
>shale.

I've posted before that I would have loved to have seen both implicit
int and gets deprecated. Unfortunately, neither was.

Another candidate for deprecation in the next standard might be
trigraphs.

Dave Hansen

unread,
Jun 4, 2004, 5:40:21 PM6/4/04
to
On Fri, 4 Jun 2004 16:20:29 +0200, "Antoine Leca" <ro...@localhost.gov>
wrote:

Maybe you were, but Brian Inglis was talking about dropping ctime and
asctime from the standard. It was to that suggestion I was
responding. Read my post again.

lawrenc...@ugsplm.com

unread,
Jun 5, 2004, 12:12:38 AM6/5/04
to
Dave Hansen <id...@hotmail.com> wrote:
>
> I've posted before that I would have loved to have seen both implicit
> int and gets deprecated. Unfortunately, neither was.

But implicit int was completely removed -- isn't that even better?

-Larry Jones

Summer vacation started! I can't be sick! -- Calvin

Douglas A. Gwyn

unread,
Jun 5, 2004, 1:44:26 AM6/5/04
to
David R Tribble wrote:
> I would prefer a tick size of somewhere between 1 microsecond and
> 10 nanoseconds. When encoded in a 63-bit value, this corresponds to a range
> of between 292,000 and 2,922 years. This is, IMHO, a completely reasonable
> compromise.

Thus you should double that and allocate 128 bits.

I certainly could not support a new standard for
recording time on data processing systems that has
a resolution coarser than one nanosecond. Some of
the hardware I already use has a system clock with
resolution much faster than one microsecond. And
a range of only a couple of hundred years would be
cutting it way too close. So use 128 bits.

Douglas A. Gwyn

unread,
Jun 5, 2004, 5:18:41 AM6/5/04
to
Douglas A. Gwyn wrote:
So use 128 bits.

And though it should be obvious, leave room on both ends.
I.e. define the basic unit as femtosecond, but the timer
doesn't have to have ticks ("granularity") that small.

Keith Thompson

unread,
Jun 5, 2004, 5:23:56 PM6/5/04
to
Many systems have a type "struct timeval", containing two members: a
time_t and a long int denoting microseconds. (There's also a "struct
timezone", containing members "tz_minuteswest" and "tz_dsttime".)

Might something like this, perhaps with a resolution finer than
microseconds, be a candidate for future standardization? time_t could
continue to have a resolution of 1 second (*), but there would be a
fairly clean way to get finer resolution.

(*) No, the standard doesn't specify a 1-second resolution, but it's
the most common implementation.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Paul Eggert

unread,
Jun 6, 2004, 4:06:08 AM6/6/04
to
At Sat, 05 Jun 2004 21:23:56 GMT, Keith Thompson <ks...@mib.org> writes:

> Many systems have a type "struct timeval",....


>
> Might something like this, perhaps with a resolution finer than
> microseconds, be a candidate for future standardization?

It's already been done. POSIX standardizes both struct timeval and
struct timespec (the latter has 1-ns resolution). POSIX is an ISO
standard (ISO/IEC 9945), just as C99 is.

Douglas A. Gwyn

unread,
Jun 6, 2004, 7:38:37 AM6/6/04
to
Paul Eggert wrote:
> It's already been done. POSIX standardizes both struct timeval and
> struct timespec (the latter has 1-ns resolution). POSIX is an ISO
> standard (ISO/IEC 9945), just as C99 is.

One problem with POSIX in recent times is that
essentially every interface gets tossed into it.
Presumably in the not too distant future they
will find it necessary to add a struct newtimespec
with resolution better than a nanosecond. Too bad
the opportunity was missed (twice now) to get ahead
of the state of the art.

Paul Eggert

unread,
Jun 6, 2004, 3:49:06 PM6/6/04
to
At Sun, 06 Jun 2004 07:38:37 -0400, "Douglas A. Gwyn" <DAG...@null.net> writes:

> One problem with POSIX in recent times is that
> essentially every interface gets tossed into it.

Not really. For example, POSIX has not standardized off64_t, which is
quite relevant to this thread. I agree, though, that POSIX is a more
ambitious effort than the C standard is: it's a larger spec, has more
people involved, and is a more-active project overall.

> Too bad the opportunity was missed (twice now) to get ahead of the
> state of the art.

The POSIX process operates by consensus, and attempts to standardize
best existing practice rather than to get ahead of the state of the art.

Antoine Leca

unread,
Jun 7, 2004, 7:25:41 AM6/7/04
to
In 40c0eb8e....@News.individual.net, Dave Hansen wrote:

> On Fri, 4 Jun 2004 16:20:29 +0200, "Antoine Leca" <ro...@localhost.gov>
> wrote:
>> Which new code? We are speaking about the introduction of
>>
>> char *ctime64(time64_t);
>
> Maybe you were, but Brian Inglis was talking about dropping ctime and
> asctime from the standard. It was to that suggestion I was
> responding. Read my post again.

I am very sorry. I missed a part of Brian's post. Please ignore my
intrusion. And of course you are right: it appears impossible to remove
{as}ctime() outright (if for no other reason, at least because strftime()
does not seem to have superseded their use.)


Antoine

Richard Bos

unread,
Jun 7, 2004, 10:09:26 AM6/7/04
to
id...@hotmail.com (Dave Hansen) wrote:

> On Fri, 04 Jun 2004 07:05:41 GMT, r...@hoekstra-uitgeverij.nl (Richard
> Bos) wrote:
>
> >id...@hotmail.com (Dave Hansen) wrote:
> >
> >> On Thu, 03 Jun 2004 20:06:13 GMT, Brian Inglis
> >> <Brian....@SystematicSw.Invalid> wrote:
> >>
> >> >As ctime and asctime are also locale-unfriendly, maybe they should
> >> >just be dropped, and programmers recommended in the documentation
> >>
> >> Deprecated, please.
> >
> >OTOH, too much is deprecated and not enough is dropped. If C is ever to
>
> Like what? I'm not aware of anything that was deprecated in C99.

In general, not in C99 specifically.

> >_really_ go forward, the next Standard should drop, _not_ deprecate,
> >both gets() and old-style declarations. Especially gets() is nothing but
> >a Hallucigenia, well overdue to be buried inside a virtual Burgess
> >shale.
>
> I've posted before that I would have loved to have seen both implicit
> int and gets deprecated. Unfortunately, neither was.

Implicit int _was_ dropped, but it was something I didn't have much
problems with. It's old-style declarations and default argument
promotions that cause grief.

> Another candidate for deprecation in the next standard might be
> trigraphs.

Oh, yes. We've got digraphs now, and they're much more useful. Let's get
rid of the old eyesores.

Richard

Dan Pop

unread,
Jun 7, 2004, 10:03:03 AM6/7/04
to

A completely irrelevant standard to the C99 standard. The POSIX
specification is of absolutely no help to the people who need the feature
in *portable* C99 code. There are plenty of ISO standards irrelevant to
C99 programming, so the fact that POSIX is an ISO standard these days
means exactly zilch.

Dan Pop

unread,
Jun 7, 2004, 9:53:43 AM6/7/04
to
In <40q7p1-...@jones.homeip.net> lawrenc...@ugsplm.com writes:

>Dave Hansen <id...@hotmail.com> wrote:
>>
>> I've posted before that I would have loved to have seen both implicit
>> int and gets deprecated. Unfortunately, neither was.
>
>But implicit int was completely removed -- isn't that even better?

Certain instances of implicit int have been completely removed, indeed.

But it's still OK to remove "int" from the spelling of the following
types: short int, long int, long long int, unsigned int etc. So, there
are still instances of implicit int in the standard, but, IMHO, harmless
(only beginners seem to have some problems with that). And, given their
ubiquitous usage, there is no way to even deprecate them.

David R Tribble

unread,
Jun 7, 2004, 12:34:54 PM6/7/04
to
David R Tribble wrote:
>> I would prefer a tick size of somewhere between 1 microsecond and
>> 10 nanoseconds. When encoded in a 63-bit value, this corresponds to a range
>> of between 292,000 and 2,922 years. This is, IMHO, a completely reasonable
>> compromise.

Douglas A. Gwyn wrote:
> Thus you should double that and allocate 128 bits.
>
> I certainly could not support a new standard for
> recording time on data processing systems that has
> a resolution greater than one nanosecond. Some of
> the hardware I already use has a system clock with
> resolution much faster than one microsecond. And
> a range of only a couple of hundred years would be
> cutting it way too close. So use 128 bits.

Or why not 96 bits?


I prefer 64 bits for these reasons:
a) a standard 64-bit integer datatype exists (long long int is guaranteed
to be at least 64 bits wide). No need to require any primitive that's
not already in ISO C.
b) a 64 bit signed integer (63 bits unsigned) provides enough bits for a
wide date range (at least 2,900 years) as well as a fairly small precision
(no bigger than microseconds). Where you place the decimal point is
a minor issue.

For *most* applications, the civil date/time library functions that would
be provided by ISO C would be sufficient. I've already conceded that we
can't provide a library that is all things to all applications. You seem
to be trying to do just that, but it's a bad idea. Better to provide an
improved library that fixes most of the shortcomings of the existing
time_t / struct tm architecture. Several people, including myself, have
made such proposals. Besides, 128 bits really is overkill.

-drt

Keith Thompson

unread,
Jun 7, 2004, 3:43:12 PM6/7/04
to
r...@hoekstra-uitgeverij.nl (Richard Bos) writes:
> id...@hotmail.com (Dave Hansen) wrote:
[...]

> > Another candidate for deprecation in the next standard might be
> > trigraphs.
>
> Oh, yes. We've got digraphs now, and they're much more useful. Let's get
> rid of the old eyesores.

Digraphs don't entirely replace trigraphs; in particular, they don't
help in character or string literals. Nevertheless, I'd be delighted
to see trigraphs consigned to the dustbin of history.

Douglas A. Gwyn

unread,
Jun 7, 2004, 6:33:39 PM6/7/04
to
David R Tribble wrote:
> ... Besides, 128 bits really is overkill.

No, it isn't. I already mentioned that microsecond
resolution is not nearly good enough already. And to
accommodate a reasonable extrapolation into the not
too distant future we need to do more than fit the
most popular machines of the present. Otherwise
there will have to be a proliferation of additional
data types for the same purpose as time goes on.
If we're going to invent a new interface, it should
be forward-looking.

Brian Inglis

unread,
Jun 8, 2004, 4:45:52 AM6/8/04
to
On Thu, 03 Jun 2004 20:26:37 GMT in comp.std.c, id...@hotmail.com (Dave
Hansen) wrote:

>On Thu, 03 Jun 2004 20:06:13 GMT, Brian Inglis
><Brian....@SystematicSw.Invalid> wrote:
>
>[...]
>>As ctime and asctime are also locale-unfriendly, maybe they should
>>just be dropped, and programmers recommended in the documentation
>
>Deprecated, please.
>
>>(forced in the implementation) to use strftime, with examples given to
>>achieve the same results as ctime and asctime, and explanations of the
>>locale-friendly and thread-friendly benefits.
>
>That's fine for new code, but there is an existing base, after all.

I was referring *only* to Jacob Navia's time64_t versions of ctime64
and asctime64.
Get with the thread! ;^>
(I agree, and raise you a couple of dozen more functions.)

Brian Inglis

unread,
Jun 8, 2004, 4:49:19 AM6/8/04
to

You were correct, and I was unclear as I omitted the 64 suffix from
the function names, as I thought the time64_t context was sufficient.

Brian Inglis

unread,
Jun 8, 2004, 4:55:16 AM6/8/04
to

The English word "nasties" would probably be a better idiomatic
translation for "avanies" in this and similar contexts, rather than
affront, damage, or injustice.

Paul Eggert

unread,
Jun 8, 2004, 10:37:57 AM6/8/04
to
At 7 Jun 2004 14:03:03 GMT, Dan...@cern.ch (Dan Pop) writes:

> The POSIX specification is of absolutely no help to the people who
> need the feature in *portable* C99 code.

Sure it is. POSIX is quite portable in practice and it standardizes a
portable API for C99. (The "P" in POSIX stands for "portable", after
all....) I've written a lot of POSIX code, that runs on quite a wide
variety of machines.

Often, hosts that do not conform entirely to POSIX provide some POSIX
interfaces. So it's quite reasonable to suggest the POSIX standard
API to people who need a particular feature that C itself doesn't
standardize.

James Kuyper

unread,
Jun 8, 2004, 10:49:15 AM6/8/04
to
Paul Eggert wrote:
> At 7 Jun 2004 14:03:03 GMT, Dan...@cern.ch (Dan Pop) writes:
>
>
>>The POSIX specification is of absolutely no help to the people who
>>need the feature in *portable* C99 code.
>
>
> Sure it is. POSIX is quite portable in practice and it standardizes a
> portable API for C99. (The "P" in POSIX stands for "portable", after
> all....) I've written a lot of POSIX code, that runs on quite a wide
> variety of machines.

However, the POSIX specification is indeed of absolutely no help to
people who need the feature in code which must be portable to systems
with no POSIX support.

Antoine Leca

unread,
Jun 8, 2004, 10:56:01 AM6/8/04
to
In qdvac09qi3s26l4r9...@4ax.com, Brian Inglis wrote:

> The English word "nasties" would probably be a better idiomatic
> translation for "avanies" in this and similar contexts, rather than
> affront, damage, or injustice.

Thanks, I shall try to keep this in a corner of my memory for the next time
I find a use for it.


Antoine

Dan Pop

unread,
Jun 8, 2004, 11:44:07 AM6/8/04
to

Nope, it ain't, unless by "portable C99 code" you mean code that works
on platforms providing enough POSIX support. This is, however, not the
definition commonly used in this particular newsgroup and it doesn't
work particularly well in the real world, either. When porting some
Unix code to the Win32 API, gettimeofday was one of the few things I
had to implement myself (most of the Unixisms I needed were provided
by the Win32 socket library).

David R Tribble

unread,
Jun 8, 2004, 12:16:41 PM6/8/04
to
David R Tribble wrote:
>> ... Besides, 128 bits really is overkill.

Douglas A. Gwyn wrote:
> No, it isn't. I already mentioned that microsecond
> resolution is not nearly good enough already. And to
> accommodate a reasonable extrapolation into the not
> too distant future we need to do more than fit the
> most popular machines of the present. Otherwise
> there will have to be a proliferation of additional
> data types for the same purpose as time goes on.
> If we're going to invent a new interface, it should
> be forward-looking.

Then that new interface better have methods for storing/retrieving date
values that are shorter than 128 bits (with reduced precision or range,
obviously). I'd like to be able to store date values in something less
than 16 bytes, especially since I can store a stringized date
"yyyyjjjsssss" in only 12 bytes. That's one of the nice things about
time_t - it's an arithmetic type, and on most implementations it's fairly
small (typically 4 bytes). Going from a typical 4 bytes (or even 8 bytes
where it's an IEEE double) to 16 is quite a jump in storage requirements.


But your answer is probably going to be "just use time_t for those
applications". So the new interface will have to convert to and from
time_t. And this means that I either get time_t resolution or the
full-blown 128-bit resolution, with no binary representation with a scale
in between.


There's also the problem that I'd like to do arithmetic with the new
date time (e.g., subtract 100 seconds, determine the number of
milliseconds difference of two dates, add 30 years to a mortgage date,
etc.). The new inteface would have to provide these operations as
functions if the new type is not defined to be an existing primitive
type such as 'long long int'. But if that's the case, why not just
extend the existing 'struct tm' interface and forget about a pure
arithmetic representation altogether?

-drt

Perhaps we should propose calling it 'longtime_t' or 'bigtime_t'?
<http://www.amazon.com/exec/obidos/ASIN/0465007805>

David R Tribble

unread,
Jun 8, 2004, 12:31:15 PM6/8/04
to
Dave Hansen wrote:
>> I've posted before that I would have loved to have seen both implicit
>> int and gets deprecated. Unfortunately, neither was.

Lawrence Jones writes:
>> But implicit int was completely removed -- isn't that even better?

Dan Pop wrote:
> Certain instances of implicit int have been completely removed, indeed.
>
> But it's still OK to remove "int" from the spelling of the following
> types: short int, long int, long long int, usigned int etc. So, there
> are still instances of implicit int into the standard, but, IMHO, harmless
> (only beginners seem to have some problems with that). And, given their
> ubiquitous usage, there is no way to even deprecate them.

Technically speaking, the 'int' in all of those type names is, and always
was, redundant. In fact, the earliest versions of C had no 'short int',
'unsigned int', or 'long int' types but simply 'short', 'unsigned', and
'long'. There was no "implicit int" in any of these types.

It was simply syntactic sugar (added by Ritchie, IIRC) that an extra 'int'
was allowed to be specified.

That's not how ISO defines the integer types nowadays, of course.

-drt

Keith Thompson

unread,
Jun 8, 2004, 2:46:49 PM6/8/04
to

What I meant was, might something like "struct timeval" be a candidate
for inclusion in a future version of the C standard?

If a future C standard could mandate an integer representation for
time_t, with a 1-second resolution, something like "struct timeval"
and the gettimeofday() function might be a good way to get finer
resolution when needed.

That's a lot more specific than the current standard. Would any
current real-world implementations be harmed by a mandate for a
1-second resolution for time_t?

Douglas A. Gwyn

unread,
Jun 9, 2004, 1:51:36 AM6/9/04
to
David R Tribble wrote:
> But your answer is probably going to be "just use time_t for those
> applications". So the new interface will have to convert to and from
> time_t. And this means that I either get time_t resolution or the
> full-blown 128-bit resolution, with no binary representation with a scale
> in between.

Actually I am hoping for a carefully engineered replacement
for the whole suite of standard (C) time functions, and if
it existed I'd say use it for all applications. The
existing model is rather badly broken in many ways.

> There's also the problem that I'd like to do arithmetic with the new
> date time (e.g., subtract 100 seconds, determine the number of
> milliseconds difference of two dates, add 30 years to a mortgage date,
> etc.). The new inteface would have to provide these operations as
> functions if the new type is not defined to be an existing primitive
> type such as 'long long int'. But if that's the case, why not just
> extend the existing 'struct tm' interface and forget about a pure
> arithmetic representation altogether?

They serve different purposes. struct tm (or its improved
replacement) is concerned with human representations for
times, while time_t (or its improved replacement) is a
simple linear measure with clean arithmetic properties.

Look, we already need good support for arbitrary-precision
integer arithmetic. (Encryption is one of the main
applications.) Improved time functions can make use of
whatever is developed for that, or native 128-bit hardware
functions, or specialized 2-, 4-, or 8-word arithmetic in
software. Indeed, we had a similar situation on PDP-11
UNIX, and it is instructive to study the pros and cons of
the various approaches used there.

Douglas A. Gwyn

unread,
Jun 9, 2004, 1:54:22 AM6/9/04
to
Keith Thompson wrote:
> That's a lot more specific than the current standard. Would any
> current real-world implementations be harmed by a mandate for a
> 1-second resolution for time_t?

Obviously you won't find them among existing POSIX apps,
since they already have that spec.

However, it would be a significant problem for apps on
platforms where time_t has some known resolution such as
1 microsecond that had been taken advantage of in the
apps. In fact I know of apps that do that.

Dennis Ritchie

unread,
Jun 9, 2004, 2:17:01 AM6/9/04
to

"David R Tribble" <da...@tribble.com> wrote in message news:f4002eab.0406...@posting.google.com...
....

> Technically speaking, the 'int' in all of those type names is, and always
> was, redundant. In fact, the earliest versions of C had no 'short int',
> 'unsigned int', or 'long int' types but simply 'short', 'unsigned', and
> 'long'. There was no "implicit int" in any of these types.
>
> It was simply syntactic sugar (added by Ritchie, IIRC) that an extra 'int'
> was allowed to be specified.

The earliest versions (pre 7th ed) had no short, unsigned, or long
at all. When they went in, from the beginning the 'adjectival' versions
(combined with int) were there. Amusingly, it appears that
in 7th edition proper, 'short' was there as a total synonym for 'int',
and thus it looks like you could say 'long int' but not 'short int',
although this was added very soon after.

But it was just sugar.

Dennis


Dan Pop

unread,
Jun 9, 2004, 5:57:24 AM6/9/04
to
In <ln1xkpg...@nuthaus.mib.org> Keith Thompson <ks...@mib.org> writes:

>That's a lot more specific than the current standard. Would any
>current real-world implementations be harmed by a mandate for a
>1-second resolution for time_t?

Why mandate 1 second resolution and not 1 second or better resolution?

By making difftime return double, the committee clearly expressed the
intent to bless implementations providing sub-second resolution for
time_t.

Keith Thompson

unread,
Jun 9, 2004, 7:27:09 AM6/9/04
to
Dan...@cern.ch (Dan Pop) writes:
> In <ln1xkpg...@nuthaus.mib.org> Keith Thompson <ks...@mib.org> writes:
>
> >That's a lot more specific than the current standard. Would any
> >current real-world implementations be harmed by a mandate for a
> >1-second resolution for time_t?
>
> Why mandate 1 second resolution and not 1 second or better resolution?

I explained that in the portion that you snipped. What I had in mind
was to do something POSIX-like in a future C standard: mandate a
1-second resolution for time_t, and provide something like
gettimeofday() and struct timeval for finer resolution.
(struct timeval contains two members, a time_t and an integer
representing microseconds.)

> By making difftime return double, the committee clearly expressed the
> intent to bless implementations providing sub-second resolution for
> time_t.

Agreed, but I didn't know whether any actual implementation takes
advantage of that. If none did so, I thought it might present an
opportunity to nail things down a bit. However, Doug Gwyn pointed out
that some implementations use a time_t with a resolution of 1
microsecond, and some applications depend on that, so applying such a
restriction would break existing code. (It could be argued that such
code is already non-portable, I suppose.)

So gettimeofday() and struct timeval can't be cleanly grafted into the
existing <time.h>. Something like them might make their way into a
new time interface, but if we're starting from scratch a struct
probably isn't the best way to implement high-resolution timers
(especially since we're guaranteed at least 64-bit integers).

Richard Bos

unread,
Jun 9, 2004, 9:13:13 AM6/9/04
to
Keith Thompson <ks...@mib.org> wrote:

> r...@hoekstra-uitgeverij.nl (Richard Bos) writes:
> > id...@hotmail.com (Dave Hansen) wrote:
> [...]
> > > Another candidate for deprecation in the next standard might be
> > > trigraphs.
> >
> > Oh, yes. We've got digraphs now, and they're much more useful. Let's get
> > rid of the old eyesores.
>
> Digraphs don't entirely replace trigraphs; in particular, they don't
> help in character or string literals. Nevertheless, I'd be delighted
> to see trigraphs consigned to the dustbin of history.

s/Nevertheless,/All the more reason why/ IYAM...

Richard
