Is QNX Y2038 complient?


Richard M. Smith

Apr 25, 1998

Hi Jeff,

The year 2038 problem is not just a Unix problem, but a bad design flaw in
the time.h functions in the standard ANSI C runtime library. Pretty much
any C code that uses time.h functions is going to break in year 2038 regardless
of the operating system. Year 2038 is when the time_t data type
goes negative and the world runs out of seconds.

The real killer is that the "localtime" function starts returning a NULL pointer
when it is given a negative time value. 90% of the C code in the
world never bothers to check for a NULL pointer and therefore will
start crashing in year 2038.
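
For instance, a minimal sketch of the defensive check (a hypothetical example; whether localtime() returns NULL for out-of-range values is implementation-specific):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = (time_t)-1;         /* a "negative" time, as seen after 2038 */
    struct tm *tm = localtime(&t);

    if (tm == NULL) {              /* the check most code never makes */
        puts("localtime() could not convert this value");
        return 1;
    }
    printf("%s", asctime(tm));
    return 0;
}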

The work-around to this problem is not to use any functions from time.h
and instead use OS system calls for time-related things. I know that
the Win32 system calls that deal with time don't have any year 2038
issues. Not sure about Unix.
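
For instance, a minimal Win32 sketch (GetSystemTime() reports the year as a full 16-bit value, so there is no 2038 rollover to worry about):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEMTIME st;
    GetSystemTime(&st);            /* UTC; wYear holds the full year */
    printf("%04u-%02u-%02u %02u:%02u:%02u UTC\n",
           st.wYear, st.wMonth, st.wDay,
           st.wHour, st.wMinute, st.wSecond);
    return 0;
}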

The year 2038 bug can even break code today. For example, if
the Email reader in Netscape 3.0 receives a message dated after
the year 2038, it will crash. The email reader can't read Email again
until the message with the bad date is manually deleted from the POP3
server. Luckily this bug was fixed in version 4.0.

Richard

Jeff Adler wrote:

> I hate to bring this up NOW, but most of our beloved QNX and UNIX systems
> will have date problems in the year 2038. It might be overly optimistic to
> think that our current generation of code will still be around in 40 years,
> but for those of us that remember back then, we said that about our code
> back in the 70's and early 80's.
>
> With all of the "hubub" going around now about year 2000 compilance issues
> because "... those near-sited OS designers only allocated 2 digits for the
> year field", I'm putting into my time capsule thoughts about "those
> near-sited OS designers that only allocated 4 bytes for the time_t field".
>
> As a software designer, I'm sure not looking forward to hearing "didn't you
> learn your lesson the first time back in '99?"
>
> Jeff Adler
> Automation Services


Richard M. Smith

Apr 26, 1998

Goran,

All of the C runtime libraries that I've seen make the time_t type
a signed long. Too bad the ANSI spec. didn't make this an illegal
choice because of the year 2038 problem. As a minimum the spec.
needs to recommend making time_t an unsigned long. This simple
change gives us another 68 years or so. Using 64-bit integers
is the best solution. Only one problem: I don't believe that 64-bit
ints are in standard C.

The year 2038 bug is very common. To see it, just fgrep
a bunch of C source code for calls to localtime and then see how
often a check is made for a NULL pointer being returned
by localtime. I've even seen the bug in code samples in magazine articles.

Richard

Goran Larsson wrote:

> In article <354239DC...@pharlap.com>,
> Richard M. Smith <r...@pharlap.com> wrote:
>
> > The year 2038 problem is not just a Unix problem, but a bad design flaw in
> > the time.h functions in the standard ANSI C runtime library. Pretty much
>

> There is nothing in the ISO (ANSI) C standard that mandates that the
> time related functions break in 2038. The standard uses the type time_t
> to hold the number of seconds since 1970-01-01 and if an implementation
> uses a 32 bit type for time_t then time will run out in 2038. There is
> nothing in the ISO (ANSI) standard that requires time_t to be a 32 bit
> integer, it could be a 64 bit integer or even a 64 bit floating point
> value. You can't look at the contents of a time_t if you want to be
> portable. To take the difference between two time_t values you have to use
> the function difftime(3) that is documented to return a double. Some
> manpages for difftime(3) have this text:
>
> This function is provided because there are no general
> arithmetic properties defined for type time_t.


>
> > The year 2038 bug can even break code today. For example, if
> > the Email reader in Netscape 3.0 recevies a message dated after
> > the year 2038, it will crash. The email reader can't read Email again
>

> As email doesn't have the date coded as a time_t but as a string, this
> bug must be firmly put in the lap of Netscape...


>
> > until the message with the bad date is manually deleted from the POP3
> > server. Luckily this bug was fixed in version 4.0.
>

> ..and therefore they fixed it. Nice of them to fix this bug if some
> timewarp should happen to the Internet :-)
>
> --
> Goran Larsson hoh AT approve DOT se
> I was an atheist, http://home1 DOT swipnet DOT se/%7Ew-12153/
> until I found out I was God.


Don Yuniskis

Apr 26, 1998

In article <354357D7...@pharlap.com>,

Richard M. Smith <r...@pharlap.com> wrote:
>Goran,
>
>All of the C runtime libraries that I've seen make the time_t structure
>a signed long. Too bad the ANSI spec. didn't make this an illegal
>choice because of the year 2038 problem. As a minimum the spec.

Why *should* ANSI have made this "illegal"? Are you saying all
C programs need to know about dates? :-/ The point of the Standard
was to provide a reasonable core technology that could accommodate
most of the uses to which the language would be applied, etc. I
think folks writing code for 8051's might be annoyed if, for
example, they had to include support in the toolchain to handle
long long's, long doubles, etc.

>need to recommend making time_t an unsigned long. This simple
>change gives us another 68 years or so. Using 64-bit integers
>is the best solution. Only one problem, I don't believe that 64-bit
>int's are in standard C.

There is nothing to prevent you from defining a time_t to be
a 5 byte array of char or a 6 byte struct, etc. Note that
there are no operators that allow direct arithmetic on time_t's
(e.g., time_t + time_t) so there is no need to force the
representation of a time_t to fit within a simple type.
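
For instance, a minimal sketch that treats time_t as opaque and sticks to what the standard guarantees:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);     /* opaque; representation unknown */
    time_t later = time(NULL);

    /* difftime() is the one portable way to subtract two time_t
       values: it returns the elapsed seconds as a double, whatever
       the underlying representation happens to be. */
    printf("%f seconds apart\n", difftime(later, start));
    return 0;
}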

>The year 2038 bug is very common. To see it, just fgrep
>a bunch of C source code for calls to localtime and then see how
>often a check is made for a NULL pointer being returned
>by localtime. I've even seen the bug in code samples in magazine articles.

Usually magazine examples are trying to illustrate some other
concept. I think, for example, you'll find most of these examples
aren't "100% portable", etc. They aren't intended (usually) as
monuments to coding perfection but, rather, "here's how to
implement XYZ..." (and assume the reader has sufficient gray
matter between his ears to sort out the details pertinent to
his/her application domain)

--don

Richard M. Smith

Apr 26, 1998

Don,

The year 2038 problem pretty much goes away if time_t is
defined as the unsigned long type instead of signed long. An
unsigned long means that the world doesn't run out of seconds
until the year 2106.
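
A minimal sketch of where the 32-bit representations run out (assuming the implementation's gmtime() accepts the value):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t last = 2147483647;      /* 2^31 - 1 seconds past 1970-01-01 */
    printf("signed 32-bit time_t ends: %s", asctime(gmtime(&last)));
    /* prints: Tue Jan 19 03:14:07 2038                                */
    /* an unsigned 32-bit counter runs to 2^32 - 1: early Feb 2106     */
    return 0;
}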

The year 2038 bug is a pretty interesting problem in embedded
systems. For example, can the typical embedded system
stay up and running if the date in the RTC is accidentally set to a year
after 2038? I've done some testing and I know that some embedded systems
start crashing if this rather simple error is made.

Maybe some other folks can start this same test and report
back the results.

Richard

PS. Any program built with the 32-bit Borland C++ compiler is
particularly vulnerable to the year 2038 bug. The ctime()
function of the Borland C runtime library crashes after year 2038.

Mike Davies

Apr 26, 1998

In article <3543ABEF...@pharlap.com>, "Richard M. Smith"
<r...@pharlap.com> writes

>The year 2038 problem pretty much goes away if time_t is
>defined as the unsigned long type instead of signed long. An
>unsigned long means that the world doesn't run out of seconds
>until the year 2106.

This is *exactly* the short-term thinking that gave us the Y2K problem
in the first place.
Use strings for time and be done with it, for Christ's sake.

...snip...

>Richard

--
Mike Davies

Richard M. Smith

Apr 26, 1998

Mike,

Just out of curiosity, in your part of the world, what date is the string "04/05/1998"?

Richard

Jeff Adler

Apr 26, 1998

Mike Davies <mike_...@noco.demon.co.uk> wrote in article
<RJvlGDAC...@noco.demon.co.uk>...


> In article <3543ABEF...@pharlap.com>, "Richard M. Smith"
> <r...@pharlap.com> writes
> >The year 2038 problem pretty much goes away if time_t is
> >defined as the unsigned long type instead of signed long. An
> >unsigned long means that the world doesn't run out of seconds
> >until the year 2106.
>

> this is *exactly* the short term thinking that gave us the Y2K problem
> in the first place.
> Use strings for time and be done with it for Christs sake.
>

I can only guess that you don't deal with TIME very much. The use of a
(any size) number representing seconds since a base date is really the way
to go. For those of us who use it, we understand why.

For those who don't, just try to add 100 days to 02/14/1998 and the grief
begins.
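
A minimal sketch of why: mktime() is defined to normalize out-of-range fields, so the calendar grief stays inside the library:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm d = {0};
    d.tm_year = 98;          /* years since 1900 -> 1998        */
    d.tm_mon  = 1;           /* months are 0-based -> February  */
    d.tm_mday = 14 + 100;    /* mktime() folds the overflow     */
    d.tm_hour = 12;          /* midday sidesteps DST edge cases */

    if (mktime(&d) == (time_t)-1)
        return 1;
    printf("%02d/%02d/%04d\n", d.tm_mon + 1, d.tm_mday, d.tm_year + 1900);
    return 0;                /* prints 05/25/1998 */
}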

I'm the world's leading fan of x-byte time fields, but as we wrestle with
Y2K problems, we should look a little beyond. Note that QNX uses 32-bit
SIGNED time at present, but like the 1980-1995 fix, the Y2038 problem won't
be too hard to solve. Just DON'T use strings.......

Jeff Adler
Automation Services


Scott Gilbert

Apr 26, 1998

Or, in the not-too-distant future, what date is the string "01/02/03"?


Richard M. Smith wrote:
> Just out of curiosity, in your part of the world, what date is the
> string "04/05/1998"?

Jim Lambert

Apr 26, 1998

Richard M. Smith wrote in message <354239DC...@pharlap.com>...
>Hi Jeff,

>
>The year 2038 problem is not just a Unix problem, but a bad design flaw in
>the time.h functions in the standard ANSI C runtime library. Pretty much
>any C code that uses time.h functions is going to break in year 2038 regardless
>of the operating system. Year 2038 is when the time_t data type
>goes negative and the world runs out of seconds.

This is the way I look at the problem. In 2030 I will be 65 years old.
Just as the company I am working for is trying to push me out with mandatory
retirement, I mention the ol' 2038 problem and also mention that I am the
only one who can fix their problems. Voila! Eight more years of work and I
plan on charging a LOT!

Jim


Don Yuniskis

Apr 27, 1998

In article <3543ABEF...@pharlap.com>,

Richard M. Smith <r...@pharlap.com> wrote:
>
>The year 2038 problem pretty much goes away if time_t is
>defined as the unsigned long type instead of signed long. An
>unsigned long means that the world doesn't run out of seconds
>until the year 2106.

That just postpones the problem another 70 years... :>
I mean, folks who were writing code in the late 50's
probably figured they had *plenty* of time to deal
with that... :>

>The year 2038 bug is a pretty interesting problem in embedded
>systems. For example, can the typical embedded system
>stay up and running if the date in the RTC is accidentally set to a year

Mine can. :> Actually allows you to set it *backwards*, too
(there is some interesting history in the calendar that makes
for fun reading if you get the chance...)

>after 2038? I've done some testing and I know that some embedded systems
>start crashing if this rather simple error is made.

There are lots of other errors that will cripple many systems
so this is just one to add to the pile. When the choices in RTC's
were sorely limited (15 - 20 years ago), you, by necessity, came up
with more robust timekeeping solutions. Older RTC's had 2 bit
counters for the year, etc.

>Maybe some other folks can start this same test and report
>back the results.
>
>Richard
>
>PS. Any program built with the 32-bit Borland C++ compiler is
>particularily vunerable to the year 2038 bug. The ctime()
>function of the Borland C runtime library crashes after year 2038.

Rewrite it to typedef time_t as a long long (for example).
Hardly any *real* consequences to that since most programs
do very little date arithmetic, etc.
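
A sketch of the idea (assuming a compiler with the non-standard long long extension):

typedef long long time64_t;    /* hypothetical wide replacement for time_t */

/* 2^63 seconds is on the order of 10^11 years, so no practical limit. */
time64_t add_days(time64_t t, long days)
{
    return t + (time64_t)days * 24 * 60 * 60;
}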

--don

Graham Murray

Apr 27, 1998

In article <01bd716d$e21bf960$cd689cce@home-office>, "Jeff Adler"
<jad...@mho.net> writes:

> I can only guess that you don't deal with TIME very much. The use of a
> (any size) number representing seconds since a base date is really the way
> to go. For those of us who use it, we understand why.
>
> For those who don't, just try to add 100 days to 02/14/1998 and the grief
> begins.

What grief?

02/14/1998 + 100 days = 02/114/1998
(-28 days in Feb 1998)= 03/86/1998
(-31 days in Mar 1998)= 04/55/1998
(-30 days in Apr 1998)= 05/25/1998


Simply perform the arithmetic on the field(s) of the date, then
incrementally convert the resultant value to a valid date, firstly
generating a valid month and then a valid day within the month.

The problems occur with such things as 01/30/1998 + 1 month, but I am
sure this would be a problem (which the system specification would
have to clarify) for any system using date arithmetic. Actually, for
adding/subtracting any unit of time greater than 1 week, I think that
this mechanism is going to be easier than storing dates as the number
of seconds.
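
A minimal sketch of that incremental normalization (Gregorian rules assumed; months passed as 1..12):

static const int base_len[12] =
    { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

static int month_len(int month, int year)        /* month: 1..12 */
{
    int leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    return base_len[month - 1] + (leap && month == 2);
}

void add_days_to_date(int *month, int *day, int *year, int days)
{
    *day += days;                                /* e.g. 02/114/1998 */
    while (*day > month_len(*month, *year)) {    /* fold into months */
        *day -= month_len(*month, *year);
        if (++*month > 12) { *month = 1; ++*year; }
    }
}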

David L. Hawley

Apr 27, 1998

In comp.os.qnx Goran Larsson <h...@approve.se.NO_JUNK_EMAIL> wrote:
[snip]

: The C standard doesn't say anything about prohibiting a 64 bit int
: or long, it only specifies minimum ranges for the types. There is
: no problem, except portability problems with non-portable programs,
: with having a 64 bit int/long.

: > The year 2038 bug is very common. To see it, just fgrep


: > a bunch of C source code for calls to localtime and then see how
: > often a check is made for a NULL pointer being returned
: > by localtime. I've even seen the bug in code samples in magazine articles.

: If the program uses time_t and the program is compiled by a compiler
: with a time_t with more than 31 bits then there won't be a 2038
: problem.

Except those foolish enough to want to read from historic files using the
4 byte time_t...

--
David L. Hawley D.L. Hawley and Associates 1(503)274-2242
Software Engineer
dlha...@teleport.com dlha...@qnx.com

Evandro Menezes

Apr 27, 1998

In <ulnsr1...@cware.co.uk>, Graham Murray <gmu...@cware.co.uk>
wrote:

>In article <01bd716d$e21bf960$cd689cce@home-office>, "Jeff Adler"
><jad...@mho.net> writes:
>
>> For those who don't, just try to add 100 days to 02/14/1998 and the grief
>> begins.
>
>What grief?
>
>02/14/1998 + 100 days = 02/114/1998
>(-28 days in Feb 1998)= 03/86/1998
>(-31 days in Mar 1998)= 04/55/1998
>(-30 days in Apr 1998)= 05/25/1998

SQD! This way, one must use month-table look-ups, leap-year
calculations, etc. Unarguably a whole lot more complex calculation than
just doing a multiplication and a sum.

____________________________________________________________
Evandro Menezes Austin, TX USA
Tel:+1-512-502-9199 ICQ:7957253
mailto:eva...@geocities.com http://over.to/evandro

Evandro Menezes

Apr 27, 1998

In <6hust3$o4o$1...@supernews.com>, "Scott Gilbert"
<xsc...@nospam.theriver.com> wrote:

>Or not too far in the distant future, what date is the string "01/02/03"

In the US, January 2nd, 2003; in Japan and I guess parts of Asia,
February 3rd, 2001; in the rest of the world, February 1st, 2003!

What an amazing exercise!

Mike Davies

Apr 27, 1998

In article <01bd716d$e21bf960$cd689cce@home-office>, Jeff Adler
<jad...@mho.net> writes
>
>Mike Davies <mike_...@noco.demon.co.uk> wrote in article
><RJvlGDAC...@noco.demon.co.uk>...
>> In article <3543ABEF...@pharlap.com>, "Richard M. Smith"
>> <r...@pharlap.com> writes

>> >The year 2038 problem pretty much goes away if time_t is
>> >defined as the unsigned long type instead signed long. An
>> >unsigned long means that the world doesn't run out seconds
>> >until the year 2106.
>>
>> this is *exactly* the short term thinking that gave us the Y2K problem
>> in the first place.
>> Use strings for time and be done with it for Christs sake.
>>
>
>I can only guess that you don't deal with TIME very much. The use of a
>(any size) number representing seconds since a base date is really the way
>to go. For those of us who use it, we understand why.

No, you're right, I don't.

>
>For those who don't, just try to add 100 days to 02/14/1998 and the grief
>begins.

Is there really no library that will do this for you ?

>
>I'm the worlds leading fan of x-byte time fields, but as we wrestle with
>Y2K problems, we should look a little beyond. Note that QNX uses 32-bit
>SIGNED time at present, but like the 1980-1995 fix, the Y2038 problem won't
>be too hard to solve. Just DON'T use strings.......

But I thought the whole point about the Y2K problem is not that the
individual bugs are hard to fix, but that they are so pervasive. Now we
face the same prospect a mere 38 years later ("Of course, no systems
will still be based upon 32 bit integers in 40 years time").

It's happening all over again before the first clean up is even
finished! This is plain silly !

Just remember which year you heard this first (OK I'm sure this wasn't
really the first place you heard it) :

Use strings for time for Christ's sake and be done with it !

>
>Jeff Adler
>Automation Services
>

--
Mike Davies

Mike Davies

Apr 27, 1998

In article <3543C8FD...@pharlap.com>, "Richard M. Smith"
<r...@pharlap.com> writes
>Mike,
>
>Just out of curiosity, in your part of the world, what date is the string
>"04/05/1998"?

Use April/Avril/whatever and have a library function convert it to a
number if that's what you need.

Time/Dates as strings really *cannot* be beyond the ability of computer
science's finest library writers in 1998 surely ? If all else fails try
doing it in C++ (M.A. are you listening? :-)
>
>Richard


>
>Mike Davies wrote:
>
>> this is *exactly* the short term thinking that gave us the Y2K problem
>> in the first place.
>> Use strings for time and be done with it for Christs sake.
>
>
>

--
Mike Davies

Dennis J. Linse

Apr 27, 1998

In article <3545c83f...@news.nabi.net>, eva...@geocities.com
(Evandro Menezes) wrote:

> In <6hust3$o4o$1...@supernews.com>, "Scott Gilbert"
> <xsc...@nospam.theriver.com> wrote:
>
> >Or not too far in the distant future, what date is the string "01/02/03"
>
> In the US, January 2nd, 2003; in Japan and I guess parts of Asia,
> February 3rd, 2001; in the rest of the world, February 1st, 2003!

Take a look at descriptions of ISO 8601:1988, the international standard
on numeric representations of date and time:

<URL:http://www.ft.uni-erlangen.de/~mskuhn/iso-time.html>
<URL:http://www.mcs.vuw.ac.nz/technical/SGML/doc/iso8601/ISO8601.html>

By my reading of the descriptions of this standard, 01-02-03 is February
3, 1901 (or is it 2000? :-) (Note, dashes are the defined separators.)

Dennis

David L. Hawley

Apr 27, 1998


In comp.os.qnx Mike Davies <mike_...@noco.demon.co.uk> wrote:
: In article <01bd716d$e21bf960$cd689cce@home-office>, Jeff Adler


: <jad...@mho.net> writes
: >
: >Mike Davies <mike_...@noco.demon.co.uk> wrote in article
: ><RJvlGDAC...@noco.demon.co.uk>...

: >> In article <3543ABEF...@pharlap.com>, "Richard M. Smith"
: >> <r...@pharlap.com> writes
: >> >The year 2038 problem pretty much goes away if time_t is


: >> >defined as the unsigned long type instead signed long. An
: >> >unsigned long means that the world doesn't run out seconds
: >> >until the year 2106.

: >>
: >> this is *exactly* the short term thinking that gave us the Y2K problem


: >> in the first place.
: >> Use strings for time and be done with it for Christs sake.

: >>
: >
: >I can only guess that you don't deal with TIME very much. The use of a


: >(any size) number representing seconds since a base date is really the way
: >to go. For those of us who use it, we understand why.

: No, you're right, I don't.

: >
: >For those who don't, just try to add 100 days to 02/14/1998 and the grief
: >begins.

: Is there really no library that will do this for you ?

: >
: >I'm the worlds leading fan of x-byte time fields, but as we wrestle with
: >Y2K problems, we should look a little beyond. Note that QNX uses 32-bit
: >SIGNED time at present, but like the 1980-1995 fix, the Y2038 problem won't
: >be too hard to solve. Just DON'T use strings.......

: But I thought the whole point about the Y2K problem is not that the
: individual bugs are hard to fix, but that they are so pervasive. Now we
: face the same prospect a mere 38 years later ("Of course, no systems
: will still be based upon 32 bit integers in 40 years time").

: It's happening all over again before the first clean up is even
: finished! This is plain silly !

: Just remember which year you heard this first (OK I'm sure this wasn't
: really the first place you heard it) :

: Use strings for time for Christ's sake and be done with it !

Strings are a fine way to represent a date, but sometimes one needs to do
date calculations too. That means converting to a number, or iterating one
day at a time. And did you remember to store your date in GMT or are
you using local time? String representation is a problem in that people
need to be able to look at a date and know what it means. We normally
think that when we see a string we know what it means, but with
locale problems, timezones and daylight saving time it's not that simple.
Also, does your network cross a time zone? Do your time-stamped files get
updated from around the web?

Using numbers has its own troubles. Using an unsigned long for time_t
solves a few problems, but it seems reasonable to expect that time_t
numbers should be valid before 0. I was born in 1952 and I would expect
that some agency somewhere may want to figure out how old I am.

The question boils down to what kind of object we want to use to
represent time. The representation does not really matter if we
consistently use one lib to do "all manipulations" on that object. The
problem with time_t is that, since it's a number, we tend to do things
like:

tomorrow = now + 24*60*60;
seconds_delta = then - now;
min_delta = seconds_delta / 60;
fwrite( &now, sizeof( now ), 1, fp_to_file_for_all_time );

quick now - was that write local time or GMT?

and this code is scattered all through our programs. Some of us use
struct tm to break up time_t's into components - it makes short work of
figuring out how many seconds till midnight, but the library routines
are pretty low level and we tend to write our own cover functions to
handle the details. Many probably make the mistake:

printf( "19%02d %2d %2d\n", t.tm_year, t.tm_mon + 1, t.tm_mday );

This will most likely print 191XX ... after the end of 1999. tm_year is
years after 1900, not years % 100. This simple fact allows the libs to
"transparently" hide the size of time_t, but it's really not enough.

We know that we should really use strftime() for converting to ASCII,
but we don't. Then there's the problem of reading user date input.
Modern GUIs help a lot with this problem, but then there are those pesky
files with ASCII date strings in them.
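
For instance, a minimal strftime() sketch that avoids the "19%02d" trap:

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[32];
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    if (t != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", t))
        puts(buf);               /* full four-digit year */
    return 0;
}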

I guess that I am of the opinion that time is pretty important and that
practices should be standardized into a nice set of high level, but
efficient routines. There should probably be layers of support so that
small embedded systems don't have to carry too much overhead. They might
be able to use a 4 byte time_t with a sliding window bit(s); larger
systems probably want to use 8 bytes. I don't buy the argument that we'll
all have 64 bit machines by 2038, but on the other hand the new 400MHz PII
will be worth zip by then. The latest Watcom compiler supports 64 bit
longs now.

We are spending a lot of money to fix the Y2K problem. A small fraction
of that money could be used on standards to solve the problem for a real
long time.

Don Yuniskis

Apr 28, 1998

In article <5PsP3DAJ...@noco.demon.co.uk>,
Mike Davies <mike_...@noco.demon.co.uk> wrote:

[snip]

>Use strings for time for Christ's sake and be done with it !

Why is this an "optimal" solution? You can handle time_t's
quite nicely using 40 bit or 64 bit types. And, your timehandling
routines can probably do time arithmetic, storage, etc. a helluvalot
easier and faster with binary fields (instead of 20+ character
strings).

Let's see... 32 bit signed int gave us ~70 years so a 64 bit
value would give us 300 *million* (apologies if I'm off by
an order of magnitude since I'm just fudging this in my head...).
I think you can see how even a 40 bit time_t is a very "safe"
assumption (15,000 years??).

Strings are nice for things that *humans* will read. Would you
advocate all *arithmetic* operations in a machine be done in
packed BCD? Or, perhaps store all arguments and results
as strings, too! That would sure solve problems like "overflow"
since you could just say:
X * Y = "Overflow";
without having to worry about LONG_MAX, etc.

--don

Mitchell Schoenbrun

Apr 28, 1998

> Use strings for time for Christ's sake and be done with it !

It's late, so forgive me, but does anyone else detect the subtle
irony in this statement?

BTW: there is an algorithm around that converts back and forth
in seconds from, well you know, Christ's birthday. It's rather
knotty as it has to deal with changes that were made in the
calendar since then.

B..BTW: I've seen at least one suggestion, not in jest, that
we start reckoning time since the big bang. Not that there
isn't some purity in this idea, but I think we'd be making
adjustments weekly.

Chip Brown

Apr 28, 1998

In article <6i3od4$ool$1...@baygull.rtd.com>, d...@rtd.com (Don Yuniskis) wrote:
[]

>Let's see... 32 bit signed int gave us ~70 years so a 64 bit
>value would give us 300 *million* (apologies if I'm off by
>an order of magnitude since I'm just fudging this in my head...).
>I think you can see how even a 40 bit time_t is a very "safe"
>assumption (15,000 years??).
>

By my calcs, a year is 365.25 days = ~31.56 million seconds.

A 64 bit signed number supports 2^63 / (31.56 * 10^6) = ~292 billion (10^9) years.

More than enough to cover the (estimated) time back to the Big Bang and then
some. More than enough to cover the time until the expected ballooning of our
sun into a red giant that will engulf the Earth. Do you reckon UNIX time or
even GMT will be the standard at that point?

I agree that a standard would be nice, but this discussion has gotten pretty
silly.

Keith Dysart

Apr 28, 1998

In article <6i4h3h$t4u$2...@camel20.mindspring.com>,

Chip Brown <REMOVE.A...@mobsec.CAPLETTERS.com> wrote:
>
>I agree that a standard would be nice, but this discussion has gotten pretty
>silly.

There is a standard: ISO 8601.

By the way, if you ask most programmers what the base time for UNIX time_t
is, they get it wrong. The online documentation on my Sparc is also
incorrect. The base time is actually 1970-01-01 00:00:21 (give or take)
and changes every time a leap second is added. So much for time_t making
it easy to determine the number of seconds between events. Not to
mention that it has no way to record the time 1997-06-30 23:59:60 (the
leap second added last year).

There are serious complexities with time arithmetic, and just using a
different representation in no way addresses them. What is:

- the same day next year when today is Feb 29?
  - Can't just add 365.
  - Can't just increment the year.
- the same day next year when it is Feb 28 of the year preceding a leap year?
  - Was it supposed to be the same day, or the end of the month?
- the same day next month when today is Mar 31?

...Keith
-------------------------------------------------------------
Keith Dysart Opinions Nortel Technology
dys...@nortel.ca are 3500 Carling Ave
Tel:+1-613-763-2255 mine Nepean, Ontario, Canada K1Y 4H7

Mike Davies

Apr 28, 1998

In article <6i3od4$ool$1...@baygull.rtd.com>, Don Yuniskis <d...@rtd.com>
writes

>In article <5PsP3DAJ...@noco.demon.co.uk>,
>Mike Davies <mike_...@noco.demon.co.uk> wrote:
>
>[snip]
>
>>Use strings for time for Christ's sake and be done with it !
>
>Why is this an "optimal" solution? You can handle time_t's
>quite nicely using 40 bit or 64 bit types. And, your timehandling

"which in our case we have not got"

>--don

--
Mike Davies

s...@sadr.com

Apr 29, 1998

Mike Davies <mike_...@noco.demon.co.uk> writes:

>Use April/Avril/whatever and have a library function convert it to a
>number if that's what you need.

>Time/Dates as strings really *cannot* be beyond the ability of computer
>science's finest library writers in 1998 surely ? If all else fails try
>doing it in C++ (M.A. are you listening? :-)

And you intend to implement a clock in hardware using strings, including
language specific month names, how?

And you intend to support Unicode, etc. in hardware? And have it figure
out which language is appropriate how?

Keith Graham
s...@sadr.com

Mike Davies

Apr 30, 1998

In article <6i8k2r$q...@sadr.sadr.com>, s...@sadr.com writes

>Mike Davies <mike_...@noco.demon.co.uk> writes:
>
>>Use April/Avril/whatever and have a library function convert it to a
>>number if that's what you need.
>
>>Time/Dates as strings really *cannot* be beyond the ability of computer
>>science's finest library writers in 1998 surely ? If all else fails try
>>doing it in C++ (M.A. are you listening? :-)
>
>And you intend to implement a clock in hardware using strings, including
>language specific month names, how?

Pick a standard language (say English) for your hardware and translate
using your SW library functions; use a comma-delimited output string
with a specified convention for the order and translate according to
your locale.
Whatever you do after this, DON'T MAKE A SINGLE 32 BIT WORD REPRESENTING
THE TIME !

For the slow on the uptake :- !!!!! IT WON'T FIT !!!!!

>
>And you intend to support Unicode, etc. in hardware? And have it figure
>out which language is appropriate how?

No

>
>Keith Graham
>s...@sadr.com


Mike Davies

JTK

Apr 30, 1998

... so you use two unsigned longs. The arithmetic is still a hell of a
lot easier than screwing around with ASCII strings.

> >--don
>
> --
> Mike Davies

JTK

Apr 30, 1998

Mike, you're missing a critical point here. The only time measure that
is reasonably constant is the second. Years have leap-days and
leap-seconds added to them periodically. Days aren't exactly 24 hrs.
etc. etc.

You are also missing another critical point. Using ASCII strings to
represent inherently numerical data (i.e. number of seconds elapsed) is
unabashed craziness, for the simple fact that we have better ways to do
it (eg 64-bit integers or two 32-bit unsigned ints). And ASCII strings
in hardware? Come on, are you playing with us here or what?

JTK

Apr 30, 1998

NEEEE HAAAA!!! Stick it to the man, Jim!! :-)

Jeff Adler

Apr 30, 1998


Mike Davies <mike_...@noco.demon.co.uk> wrote in article
>

> this is *exactly* the short term thinking that gave us the Y2K problem
> in the first place.

> Use strings for time and be done with it for Christs sake.
>

Mike:

Judging from the response to your approach to using strings for time, there
are several others (myself included) who disagree with using strings for
time, mostly because we've looked at the different approaches and their
merits and decided that "seconds since (some date)" (ie. time_t) is the
better solution.

However, it should be noted that the different approaches each have their
own merits, depending on the application. Some of the approaches that can
be used are:

1) Seconds since a base date (ie. time_t).
2) Separate fields for each time component (ie. struct tm).
3) Time represented as a string (ie. char*).

For real-time type applications that are not super high-speed, approach #1
works real well. We can get the time as an integer, get it again later,
subtract the two to easily find out the duration between the two
irrespective of calendar date, add a constant to the current date/time to
get a future time, and when necessary, convert the result to a string for
showing the general public.

Using this approach, most of the internal work on "time entities" is done
with quick math instructions and the much slower process of converting it
for display (this IS a slow process) only has to be done when needed.
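
A minimal sketch of that pattern (the 90-second deadline is just an arbitrary example):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);       /* quick integer fetch       */
    time_t deadline = start + 90;    /* a future time, 90 s ahead */

    printf("%.0f seconds until deadline\n", difftime(deadline, start));
    printf("which falls at %s", ctime(&deadline));  /* slow conversion,
                                                       for display only */
    return 0;
}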

The second and third approaches are better suited for accounting and other
types of applications where the values are relatively stagnant and don't need
to be quickly manipulated. Comparing times, adding full years for
comparison, and other less-frequent manipulation functions are slow, but
the full flexibility is there for multiple formats. If I were writing an
accounting program, I'd be much more apt to use method #2 or a hybrid
thereof, since the range of dates doesn't have to be after a certain date
(ie. 1-jan-1970).

The strings approach only works if you can force the data entry format into
a specific template and be sure that upper/lower case considerations can be
maintained for data integrity. Even then, the second method may be
preferable due to size and conformity considerations.

Of course, if your ultimate storage format is an ASCII text file, strings
are the "only" answer; however, you also end up storing integers and
floating point numbers as ASCII strings which, of course, need to be
converted back into a machine "native" format before any real work can be
done with them.

Jeff Adler
Automation Services


Everett M. Greene

May 1, 1998

BTW: The spelling is "compliant".

In article <3544c785...@news.nabi.net> rin...@trbpvgvrf.pbz (Evandro Menezes) writes:
> In <ulnsr1...@cware.co.uk>, Graham Murray <gmu...@cware.co.uk>

> wrote:
>
> >In article <01bd716d$e21bf960$cd689cce@home-office>, "Jeff Adler"
> ><jad...@mho.net> writes:
> >
> >> For those who don't, just try to add 100 days to 02/14/1998 and the grief
> >> begins.
> >

> >What grief?
> >
> >02/14/1998 + 100 days = 02/114/1998
> >(-28 days in Feb 1998)= 03/86/1998
> >(-31 days in Mar 1998)= 04/55/1998
> >(-30 days in Apr 1998)= 05/25/1998
>
> SQD! This way, one must use month-table look-ups, leap-year
> calculations, etc. Unarguably a whole lot more complex calculation than
> just doing a multiplication and a sum.

Whoever designed the Earth's orbit and rotation didn't properly plan for
the existence of computers. I believe a class-action suit for lack of
suitability for intended purpose (UCC) is in order.

For those who want to contemplate time and date problems, think
of the situation faced by those who deal with computerized genealogy.
Time is usually not a factor in genealogy, but dates certainly are.
And the dates span several centuries, different calendars, locations
switching calendars at different times in the past, etc.

Don Yuniskis

May 1, 1998

In article <6i4kcs$f...@bcarh8ab.bnr.ca>, Keith Dysart <dys...@nortel.ca> wrote:
>In article <6i4h3h$t4u$2...@camel20.mindspring.com>,
>Chip Brown <REMOVE.A...@mobsec.CAPLETTERS.com> wrote:
>>
>>I agree that a standard would be nice, but this discussion has gotten pretty
>>silly.
>
>There is a standard: ISO 8601.
>
>By the way, if you ask most programmers what the base time for UNIX time_t
>is, they get it wrong. The online documentation on my Sparc is also
>incorrect. The base time is actually 1970-01-01 00:00:21 (give or take)
>and changes every time a leap second is added. So much for time_t making

The C Standard makes no claims as to the actual time of the "epoch".
Also, it only requires implementations to provide a "best approximation"
to the current time, etc.

>it easy to determine the number of seconds between events. Not to
>mention that it has no way to record the time 1997-06-30 23:59:60 (the
>leap second added last year).

Sorry, but the ``tm_sec'' member of struct tm accommodates a range of
0 .. 61 for just this reason. Leap seconds are added at midnight on
30 June and 31 December, when "required". This allows as many as two leap
seconds to be reflected in the time (to my knowledge, this hasn't
happened yet).

>There are serious complexities with time arithmetic and just using a
>different representation in no way addresses them. What is:
>
>- the same day next year when today is Feb 29?
> - Can't just add 365.
> - Can't just increment the year.

Sure you can. Add 365*24*60*60 to the time_t and then pass it through
gmtime().
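
A minimal sketch (the 365L keeps the multiply out of 16-bit int trouble):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    time_t next_year = now + 365L * 24 * 60 * 60;  /* "add 365 days" */
    struct tm *t = gmtime(&next_year);

    if (t != NULL)
        printf("%04d-%02d-%02d\n",
               t->tm_year + 1900, t->tm_mon + 1, t->tm_mday);
    return 0;
}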

>- the same day next year when it is Feb 28 of the year preceeding a leap
> year?

Same as above.

> - Was it supposed to be the same day, or the end of the month?

If you *wanted* the end of the month, then you would have stated the
problem as "the last day of this month a year from now".

>- the same day next month when today is Mar 31?

As stated, there is no solution to this. It is an incompletely
specified problem since it doesn't carry any other criteria to
explain how it should be accommodated. Like adding $1 to the amount
of a check when the current amount is $999.65 and the check only
accommodates 5 digit (3.2) values.

Don't blame the library/standard/programmer/etc. for poorly framed
criteria.

--don

Jim Lambert

May 1, 1998

Don Yuniskis wrote in message <6iba79$btm$1...@baygull.rtd.com>...
>In article <3548D807...@nowhere.com>, JTK <unli...@nowhere.com>
wrote:
>I dunno if *I'd* want to be turning 65 around that time with this
>sort of attitude... You can *bet* there'll be another "Y2K" style
>crisis for government related things... like Social Security checks,
>etc. And, withdrawing your 401K's might prove to be a bit tedious,
>too! :>
>

The one thing that I have not heard mentioned is that no matter what
technical problems arise in 2038, they will not, I repeat, WILL NOT, get the
exposure and hysterical attention that the year 2000 problem is getting. A
lot of the year 2000 problem can be attributed to human behaviour whenever a
new century or millennium dawns. People all over are sure the world is going
to end in the year 2000, and since most humans don't believe in magic, their
magic becomes computers and computers will bring about the end of the world.

I am not an expert on human actions but I do think that the year 2000 thing
is being blown way out of proportion. Are there going to be problems?
Definitely! Are they going to be life threatening or threats to our way of
life? Probably not. They will just be problems that get solved.

Just my $0.02 worth.

Jim

Patiently awaiting my 60's when I will become very, VERY rich!

Duncan O'Neal

May 1, 1998

On 29 Apr 1998 21:27:55 -0400, s...@sadr.com wrote:

>Mike Davies <mike_...@noco.demon.co.uk> writes:
>
>>Use April/Avril/whatever and have a library function convert it to a
>>number if that's what you need.
>
>>Time/Dates as strings really *cannot* be beyond the ability of computer
>>science's finest library writers in 1998 surely ? If all else fails try
>>doing it in C++ (M.A. are you listening? :-)
>
>And you intend to implement a clock in hardware using strings, including
>language specific month names, how?

Gee, this makes for a pretty sophisticated time/date chip - don't it!
It would need event tables built in and non-volatile trap bits that
indicate if certain adjustments were done deals. It should also spit
out which time zone was being accessed and whether it is daylight
savings or not. It should also have good enough basic accuracy so
adjustment can be limited to seconds per day. Oops.. it would also need
a built-in back-up battery. * But it does seem like a simple month
number would have to suffice -- there would be too many languages!

Would it suit applications like cash registers and postage machines?
Could it prevent spoofing the date? What good is a date if it is
not certifiable?


>
>And you intend to support Unicode, etc. in hardware? And have it figure
>out which language is appropriate how?
>

>Keith Graham
>s...@sadr.com



+ Submitted by: Duncan O'Neal
+-------------------------------------------------------------+
| To send me a wire: BUT NOT TO ADVERTISE! And Not-to-use-
| -on-a-mailing-list: I scribe -- donealsh...@gov.jp
| Drop the m...@gov.jp...bit And insert @ before S.
| D0T between W&W and another before ca.
+-------------------------------------------------------------+


Mike Davies

May 2, 1998

In article <3548D6A3...@nowhere.com>, JTK <unli...@nowhere.com>
writes

>Mike, you're missing a critical point here. The only time measure that
>is reasonably constant is the second. Years have leap-days and
>leap-seconds added to them periodically. Days aren't exactly 24 hrs.
>etc. etc.

No, I'm not missing these points any more than you are, I'm choosing not
to address them. And so are you.

>
>You are also missing another critical point. Using ASCII strings to
>represent inherently numerical data (i.e. number of seconds elapsed) is
>unabashed craziness, for the simple fact that we have better ways to do
>it (eg 64-bit integers

"Which in our case we have not got"

> or two 32-bit unsigned ints).

Which in our case we are choosing not to use.

Because multi-precision arithmetic is a pain in a high level language
(and assembler too, if it comes to that)

So : Use strings for time

> And ASCII strings
>in hardware? Come on, are you playing with us here or what?

Why not ? If you think about it it's not so very different from what RTC
chips do at the moment (from the little that I've seen of them anyway).


Mike Davies

Mike Davies

May 2, 1998

In article <01bd7487$52b2d2e0$73689cce@home-office>, Jeff Adler
<jad...@mho.net> writes
>
>

>Mike Davies <mike_...@noco.demon.co.uk> wrote in article
>>
>> this is *exactly* the short term thinking that gave us the Y2K problem
>> in the first place.
>> Use strings for time and be done with it for Christs sake.
>>
>
>Mike:
>
>Judging from the response to your approach to using strings for time, there
>are several others (myself included) who disagree with using strings for
>time, mostly because we've looked at the different approaches and their
>merits and decided that "seconds since (some date)" (ie. time_t) is the
>better solution.

If you do this, don't try to fit it into 32 bits unless you are happy
with 68 years (int) or 136 years (unsigned int) as your maximum time
difference. Of course this *may* be OK for your system, or (more likely
IMO) it may *seem* to be OK at the moment.
People dealing with historical data (ie almost any database now) will
probably not fall into this class though.

...snip..

>Of course, if your ultimate storage format

or *display* format, of course

> is an ASCII text file, strings
>is the "only" answer, however, you also end up storing integers and
>floating point numbers as ASCII strings which, of course, need to be
>converted back into a machine "native" format before any real work can be
>done with them.

Which sees you skipping quickly past my main point : If time won't fit
into universally available (and I grant efficient) integers (which are
*not* 40/64 bits long alas!) then why have the calculation format dealt
with in your code at all ? Put the function :

difference_in_seconds_between(time_as_string T1, time_as_string T2)

in a library and forget about it. (if it is defined to return an int
you'd better have C++ style exception handling switched on :-)

>
>Jeff Adler
>Automation Services
>

ciao

Mike Davies

Don Yuniskis

May 4, 1998

In article <GvXAPRAp...@noco.demon.co.uk>,

Mike Davies <mike_...@noco.demon.co.uk> wrote:
>In article <3548D6A3...@nowhere.com>, JTK <unli...@nowhere.com>
>writes

>>You are also missing another critical point. Using ASCII strings to


>>represent inherently numerical data (i.e. number of seconds elapsed) is
>>unabashed craziness, for the simple fact that we have better ways to do
>>it (eg 64-bit integers
>
>"Which in our case we have not got"
>
>> or two 32-bit unsigned ints).
>
>Which in our case we are choosing not to use.
>
>Because multi-precision arithmetic is a pain in a high level language
>(and assembler too, if it comes to that)
>
>So : Use strings for time

Ah, yes... strings are *so* much easier to use than 2 32bit ints or
a single "long long". Yes, I'm sure I can write a piece of code
to manipulate 28 byte strings:
"Mon May 4 10:20:37 MST 1998"
- "Mon May 4 10:20:35 MST 1998"
_________________________________
= "--- --- -- 00:00:02 --- ----"
*much* simpler/faster/reliably than I could *ever* deal with manipulating
two ints!

NOT!!

>> And ASCII strings
>>in hardware? Come on, are you playing with us here or what?
>
>Why not ? If you think about it it's not so very different from what RTC
>chips do at the moment (from the little that I've seen of them anyway).

Then perhaps you should go look at the types of RTC chips out there
before you get your foot caught any further down your throat!

--don

Scott Gilbert

May 4, 1998

Many people have commented against using strings for (at least internal)
manipulation of time, and I certainly agree with them for all the
reasons mentioned. However the response has typically been to change to
a 64 bit integer (or two 32 bit integers).

What would be the repercussions if, instead of a 64 bit int, a double were
used? A double storing the number of seconds would probably be
more valuable and would work on any hardware that has floating point
support (most), whereas 64 bit ints are kind of rare and require
software support by the compiler on any common hardware besides the
Alpha.

That gives 52 bits of mantissa, which is certainly enough seconds to
cover the 2038 problem! And it handles much greater periods of time
with less resolution while handling very small intervals with incredible
resolution. This solution would be as, if not more, accurate than a 64
bit int until the year 142,710,460, assuming we moved away from using
1970 as the start of time and used the year 0 instead.

Also, one probably cares less about seconds when working with larger
dates. Who cares about the exact picosecond that the universe ends?

I'm always annoyed when APIs use integer milliseconds (or microseconds)
instead of floating point. Were they really so short sighted as to
think that hardware couldn't eventually provide more performance than
that?

Just a thought....

Don Yuniskis

May 4, 1998

In article <6il5ai$rfo$1...@supernews.com>,

Scott Gilbert <xsc...@nospam.theriver.com> wrote:
>Many people have commented against using strings for (at least internal)
>manipulation of time, and I certainly agree with them for all the
>reasons mentioned. However the response has typically been to change to
>a 64 bit integer (or two 32 bit integers).
>
>What would be repercussions if instead of a 64 bit int, a double was
>used in stead? A double storing the number of seconds would probably be
>more valuable and would work on any hardware that has floating point
>support (most) where as 64 bit ints are kind of rare and require
>software support by the compiler on any common hardware besides the
>Alpha.

The problem with using any floating point is that it burdens
architectures that don't have hardware assisted floating point.
A floating point "add" or "subtract" (the only operations that
really make sense with times) is considerably more expensive
than a "long long" add/subtract where the floating point
operation is implemented in a software library, etc. The differences
in terms of time are probably two orders of magnitude (!). And,
the difference in terms of *space* is probably similar (I can check the
performance characteristics of my IEEE 754 implementation to get
a more realistic number here...)

[note by "space" I don't mean the cost of storing a "double" but,
rather, the cost of the floating point library operators themselves
since most embedded/realtime apps tend to avoid *any* unnecessary
dependence on them]

>That gives 52 bits of mantissa which is certainly enough seconds to
>cover the 2038 problem! And it handles much greater periods of time

Actually, 53 bits assuming 754-1985 format. Or, a maximum contiguous
range of integer "seconds" of 9,007,199,254,740,992 (note that this
is exclusive of the sign bit!). Or, roughly, 300 *million* years
(let's hope I didn't mistype *that*, Dave! :>)

>with less resolution while handling very small intervals with incredible

Yes, though then you aren't dealing with "time_t"'s any more...

>resolution. This solution would be as, if not more, accurate than a 64
>bit int until the year 142,710,460 assuming we moved away from using
>1970 as the start of time and used the year 0 instead.
>
>Also, one probably cares less about seconds when working with larger
>dates. Who care's about the exact picosecond that the universe ends.

Though then you are forced to deal with the typical issues concerning
floating point calculations -- adding 500 million seconds to "a big
number" one at a time leaves you with that same "big number", etc.

Likewise, comparisons have to be fuzzified, etc. This isn't necessary
with integer time_t's...

>I'm always annoyed when APIs use integer milliseconds (or microseconds)
>instead of floating point. Were they really so short sighted as to
>think that hardware couldn't eventually provide more performance than
>that?

I use a format that specifies "microtime" to nanoseconds. I have
decided that I am not interested in events finer than that! :>

--don

Gary Maier

May 4, 1998

I think when computers were first available generally, floating point
was a drag. Most compilers had software-emulation of the FPUs (even
bigger drag). And in the days of DOS, math libs could cause a program to
grow fast, exceeding 64 K boundaries and requiring larger-code models.
Also, it's a stack nuisance in small .COM and .EXE programs (DOS did
rule the roost despite what you might feel about it). Pushing 4 16-bit
words to move a double around (without pointers, like to printf) was
costly.

Personally, I'd rather have the two 32 bits, instead of the 64-bit
double, because I don't want my compiler generating extra code if I only
want to know if a second has changed, or If I want to compare only the
lower 32 bits to something else which was created this decade.

Also, it would be a pain for those who use standard compilers for
generating embedded code, as it might be the only thing which requires a
floating point, requiring work-arounds, or extra linked stuff - just to
deal with the time.

BTW how does making time a float help? You say APIs irritate you with
integer representations of time?

My $0.02

Scott Gilbert

May 4, 1998

Gary Maier wrote:

>BTW how does making time a float help?

Well, the major advantages would only be apparent in an environment that
has 64 bit floating point support but does not have 64 bit integers.
Outside of embedded systems, the "double" type is pretty common. So
changing time_t from an int to a double would save a great deal of
hassle on a lot of platforms. (There are problems with floating point
arithmetic, though, as mentioned by another poster.)


>You say APIs irritate you with integer representations of time?

I've seen APIs (most notably Win32) that have you specify the number of
milliseconds as an integer. It seems to me at that point that you might
just as well use floating point for the day when the OS and hardware
support higher resolutions. The underlying kernel is prepared to do
some stuff in 100 nanosecond intervals, but since the API specifies
integer milliseconds for a lot of system calls, you're stuck. It's one
thing to say that working with seconds is good enough (probably true for
many things). If you want really fine grain timing there are plenty of
cases where a millisecond is too coarse, and stopping there seems like a
short sighted decision. Since they knew they'd have floating point
support, they should have just used a double for the time parameter.


JTK

May 5, 1998

But floating point doesn't buy you any simplification. You can't
add 1 to a floating point value indefinitely and see the value increase each
time you add. Try it yourself with a loop that just adds 1 to a double
- after a while, you'll see that the value stops changing (time stops).
Adding and/or subtracting two unsigned longs is really not a hassle, and
you don't get floating point madness involved at all.
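
A minimal sketch of that effect (2^53 is where an IEEE 754 double's exact integer steps run out):

#include <stdio.h>

int main(void)
{
    double t = 9007199254740992.0;   /* 2^53 */
    printf("%.1f\n", t);
    printf("%.1f\n", t + 1.0);       /* prints the same value: the +1
                                        rounds away and time stands still */
    return 0;
}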

Scott Gilbert

May 5, 1998

I agree with what you have to say, and I mentioned the problems with
floating point arithmetic as pointed out in another article. However,
while you can't add indefinitely, 140 million years for a 52 bit
mantissa is an awful lot of seconds. After that time span, you just
have to start adding 2 seconds at a time for the next 142710460 years.
Time only stops, as you say, long after we've been replaced by whatever
cockroaches evolve into, and as far as I'm concerned they can inherit
and maintain the legacy code at that point.

It's no hassle to work with unsigned longs, but that only gets you a
hundred or so years. It also doesn't handle sub-second resolution at
all.


JTK wrote in message <354EA07F...@nowhere.com>...

Tom Sheppard

May 5, 1998

In article <6ila13$eet$1...@baygull.rtd.com>, d...@rtd.com (Don Yuniskis) wrote:

...


>The problem with using any floating point is that it burdens
>architectures that don't have hardware assisted floating point.

...

I'm coming into this discussion late, so if this has been stated already,
I apologize.

One other problem with using floating point for such a basic unit as time
is that the OS must now save the FP registers on every context switch. It
doesn't know which of the (often many) FP registers a process happens to
be using to do its time calculations.

Also, once a process has told the OS it is using FP registers, the OS
doesn't know that the process is not using them for calculations other
than time. So the OS must save _all_ the registers.

That context switching penalty is too severe for some applications.

...Tom

Don Yuniskis

May 5, 1998

In article <6imfrb$2vk$1...@supernews.com>,

Scott Gilbert <xsc...@nospam.theriver.com> wrote:
>I agree with what you have to say, and I mentioned the problems with
>floating point arithmetic as pointed out in another article. However,
>while you can't add indefinitely, 140 million years for a 52 bit
>mantissa is an awful lot of seconds. After that time span, you just

As I pointed out in a previous post, an IEEE 754 double actually gives
you 300 million years before and after the "epoch". I.e. if you
redefine the epoch to be "year 0" (there really is no such thing as
a year 0!), then you could represent times to one second precision
in the range 300,000,000 BC to 300,000,000 AD. Or, you'd be able to
compute to the nearest *second* how long ago the dinosaurs roamed
the Earth, etc.

>have to start adding 2 seconds at a time for the next 142710460 years.

The problem is inherent in any floating point computation. Do
you litter your code with:
if (value < 300 million)
    value += 1.;
else if (value < 600 million)
    value += 2.;     /* sleep an extra second    */
else if (value < 1.2 billion)
    value += 4.;     /* sleep an extra 3 seconds */
else if ...
Most folks don't even check if malloc() returns NULL and you're expecting
them to examine the magnitude of all time_t's and adjust their
computations accordingly? :-/

>Time only stops, as you say, long after we've been replaced by whatever
>cockroaches evolve into, and as far as I'm concerned they can inherit
>and maintain the legacy code at that point.
>
>It's no hassle to work with unsigned longs, but that only gets you a
>hundred or so years. It also doesn't handle sub-second resolution at
>all.

--don

Peter

May 5, 1998

In comp.arch.embedded JTK <unli...@nowhere.com> wrote:

: Jim Lambert wrote:
:>
:> Richard M. Smith wrote in message <354239DC...@pharlap.com>...
:> >Hi Jeff,
:> >
:> >The year 2038 problem is not just a Unix problem, but a bad design flaw in
:> >the time.h functions in the standard ANSI C runtime library. Pretty much
:> >any C code that uses time.h functions is going to break in year 2038
:> regardless
:> >of the operating system . Year 2038 is when the time_t data type

Correct, unless some wise person comes up with a patch that teaches the
libraries to understand legacy (present) UNIX time. The people who
invent this will have 40 years to cook it up, although writing with
pencils by candlelight will be rather hard on them ;)
(I assume here that the Y2K effects will last for some time...)

Anyway, I keep running into a question: there is a lot of documentation
that is WAY older than Christ, and astronomers talk about 6,000,000,000
years and the like. They share the same academic premises as the CS
guys, but the CS guys never seem to notice, and the other guys never ask
them.

I admit that it would be hard on us to keep all time in a 128-bit format
(nanoseconds and such), but it's about time someone merged this into
some form of standard.
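For concreteness, a minimal sketch of what such a 128-bit format might
look like in C; the struct name and field layout are purely illustrative,
not any existing standard:

    #include <stdint.h>  /* assumes C9X-style fixed-width integer types */

    /* Hypothetical 128-bit timestamp: signed 64-bit seconds from an
       epoch, plus a 64-bit binary fraction of a second. */
    struct wide_time {
        int64_t  sec;    /* signed, so dates before the epoch work    */
        uint64_t frac;   /* fraction of a second, in units of 2^-64 s */
    };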

my 2 bits.

Peter

Evandro Menezes

unread,
May 5, 1998, 3:00:00 AM5/5/98
to

In <354E2C...@sprintmail.com>, Gary Maier
<gmai...@sprintmail.com> wrote:

>... or If I want to compare only the


>lower 32 bits to something else which was created this decade.

Now this is asking for trouble! It seems that the Y2K burden hasn't
taught us anything... :-)

____________________________________________________________
Evandro Menezes Austin, TX USA
Tel:+1-512-502-9199 ICQ:7957253
mailto:eva...@geocities.com http://over.to/evandro

Mike Davies

unread,
May 5, 1998, 3:00:00 AM5/5/98
to

In article <6il0b1$6rj$1...@baygull.rtd.com>, Don Yuniskis <d...@rtd.com>
writes

>>Because multi-precision arithmetic is a pain in a high level language
>>(and assembler too, if it comes to that)
>>
>>So : Use strings for time
>
>Ah, yes... strings are *so* much easier to use than 2 32bit ints or
>a single "long long". Yes, I'm sure I can write a piece of code

You have long longs ? You lucky, lucky boy, you !

>to manipulate 28 byte strings:
> "Mon May 4 10:20:37 MST 1998"
> - "Mon May 4 10:20:35 MST 1998"
> _________________________________
> = "--- --- -- 00:00:02 --- ----"

Try:

1998,5,4,10,20,37

(implicit gmt)

This is as easily manipulable in hardware as the current set-up. Proof :
it *is* the current storage format in many RTCs ! (With the proviso that
the commas are not really there, and that the individual items are
selected by reading individual registers)
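For reference, the register format on real parts is typically packed BCD
rather than ASCII; a minimal C sketch of decoding one such register,
where read_rtc_reg() and RTC_YEAR are hypothetical names:

    /* Decode a packed-BCD RTC register, e.g. from an MC146818-style
       CMOS clock running in BCD mode. */
    unsigned bcd_to_bin(unsigned char bcd)
    {
        return (bcd >> 4) * 10u + (bcd & 0x0Fu);
    }

    /* unsigned char raw  = read_rtc_reg(RTC_YEAR);                  */
    /* unsigned      year = 1900u + bcd_to_bin(raw);  -- two digits! */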

Use library functions to translate the time into your current time zone
and (if you really must, though it seems perverse to me) to take the
difference between two times and get the result in seconds. It's *your*
responsibility to see that it fits though !


>*much* simpler/faster/reliably than I could *ever* deal with manipulating
>two ints!
>
>NOT!!

You have just said that you don't think you can master in SW a time/date
format that is handled routinely in HW !
OK, I believe you, *but* You don't have to !! Let somebody else do it !
And then use their library (I know, I know, this seems like cheating,
but hey, so what ! :-)

>
>>> And ASCII strings
>>>in hardware? Come on, are you playing with us here or what?
>>
>>Why not ? If you think about it it's not so very different from what RTC
>>chips do at the moment (from the little that I've seen of them anyway).
>
>Then perhaps you should go look at the types of RTC chips out there
>before you get your foot caught any further down your throat!

Yes, do that, and then you'll be able to tell me which (very common) rtc
uses a format like the one I've given above.

>
>--don

Regards,

--
Mike Davies

JTK

unread,
May 5, 1998, 3:00:00 AM5/5/98
to

> You have just said that you don't think you can master in SW a time/date
> format that is handled routinely in HW !
> OK, I believe you, *but* You don't have to !! Let somebody else do it !
> And then use their library (I know, I know, this seems like cheating,
> but hey, so what ! :-)
>

Tell you what, Mike. Instead of making a further fool of yourself here
with your fever-induced ramblings, make us eat our words and write this
library of which you speak yourself (I hear it's real easy to do),
release it under the GNU GPL, and sit back and see how many people use
it. Let me save you the trouble: Nobody will.

Please answer one question for us before you do that, though: In what
possible way do you figure that adding and subtracting two sets of two
32-bit unsigned longs (WHICH EVERY EVERY EVERY C AND C++ COMPILER
HAS!!!!!!#$%$&@#*) is more difficult and makes less sense than adding
and subtracting two ASCII strings of the form 1998,5,4,10,20,37? Do you
read your own posts? Snap out of it! :-)
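For the record, the pair-of-longs arithmetic JTK describes needs exactly
one carry, which unsigned wraparound detects for free -- a minimal
sketch, assuming 32-bit unsigned longs:

    /* 64-bit value built from two 32-bit unsigned longs. */
    typedef struct { unsigned long hi, lo; } u64;

    u64 u64_add(u64 a, u64 b)
    {
        u64 r;
        r.lo = a.lo + b.lo;                  /* wraps modulo 2^32        */
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* wraparound implies carry */
        return r;
    }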

Don Yuniskis

unread,
May 5, 1998, 3:00:00 AM5/5/98
to

In article <OHSCVAA6...@noco.demon.co.uk>,

Mike Davies <mike_...@noco.demon.co.uk> wrote:
>In article <6il0b1$6rj$1...@baygull.rtd.com>, Don Yuniskis <d...@rtd.com>
>writes
>>>Because multi-precision arithmetic is a pain in a high level language
>>>(and assembler too, if it comes to that)
>>>
>>>So : Use strings for time
>>
>>Ah, yes... strings are *so* much easier to use than 2 32bit ints or
>>a single "long long". Yes, I'm sure I can write a piece of code
>
>You have long longs ? You lucky, lucky boy, you !
>
>>to manipulate 28 byte strings:
>> "Mon May 4 10:20:37 MST 1998"
>> - "Mon May 4 10:20:35 MST 1998"
>> _________________________________
>> = "--- --- -- 00:00:02 --- ----"
>
>Try:
>
> 1998,5,4,10,20,37

Let's see... assuming an ASCII string (not Unicode, etc.) I count
18 characters. Of course, the 5 and 4 could just as easily be 12
and 11 respectively so that's 20 characters. Discard all the
delimiters and you still have 15 characters -- 120 bits. Pack
it as BCD instead of "strings" and you still have 8 bytes. Gee,
last time I checked, that was 64 bits! And, you'll note that *this*
64 bit representation has far less dynamic range than a "long long"
would have! It has the dreaded Y10K problem as well as being unable
to handle dates B.C.

>(implicit gmt)
>
>This is as easily manipulable in hardware as the current set-up. Proof :
>it *is* the current storage format in many RTCs ! (With the proviso that
>the commas are not really there, and that the individual items are
>selected by reading individual registers)

And they aren't stored in ASCII and they don't handle leap seconds and
they don't handle leap years in 2400 correctly and if you set the date
back to 1500 it will screw up the DoW "calculation" and...

>Use library functions to translate the time into your current time zone
>and (if you really must, though it seems perverse to me) to take
>difference between two times and get the result in seconds. It's *your*
>responsibility to see that it fits though !

No, the library already defines a set of routines that deal with
this quite nicely -- *if* you have an appropriately sized time_t.

>>*much* simpler/faster/reliably than I could *ever* deal with manipulating
>>two ints!
>>
>>NOT!!
>

>You have just said that you don't think you can master in SW a time/date
>format that is handled routinely in HW !

Show me a hardware RTC that implements localtime(), difftime(), etc.
It's quite simple to design a binary decade counter and cascade them
to form "N digits". But, another thing entirely to design a bit
of hardware to allow the difference between two of those time_t's
to be computed. Or a sum, etc.

>OK, I believe you, *but* You don't have to !! Let somebody else do it !
>And then use their library (I know, I know, this seems like cheating,
>but hey, so what ! :-)

Great! Now I have realtime events coming along and I want to timestamp
them. The "nearest second" is good enough. But, they are coming
at an average rate of a few hundred Hz. Perhaps I'm watching
network packets coming in on an ethernet and I want to note how long
they take to be serviced -- note the difference between "time in"
and "time out". How much CPU am I wasting doing what *should*
be a simple arithmetic operation?

>>>> And ASCII strings
>>>>in hardware? Come on, are you playing with us here or what?
>>>
>>>Why not ? If you think about it it's not so very different from what RTC
>>>chips do at the moment (from the little that I've seen of them anyway).
>>
>>Then perhaps you should go look at the types of RTC chips out there
>>before you get your foot caught any further down your throat!
>
>Yes, do that, and then you'll be able to tell me which (very common) rtc
>uses a format like the one I've given above.

Sorry, *none*! You'll find packed BCD and binary representations.
I haven't seen an RTC that uses ASCII yet! Let alone a "very common"
one... Make sure you understand what a "string" is vs. packed
BCD, etc.

--don

Jeff Adler

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

Since the discussion goes on and on about which format for time is better
than others, might I suggest that the programmer who must use time for
his/her application pick the method that is best suited to the application.

The original point that I made is that the "current" Posix-C implementation
has some issues to be reconciled before that method runs out of gas.

If you like using floating point for time, go ahead. If you are a fan of
strings, may the force be with you. Some of us are quite content with
"seconds since a base date" for most of the applications where the method
fits.

Having grown up with slower machines and "expensive" (speed and $) FPU
hardware / emulation, it hurts me to use a FP number where an integer would
do, especially for time-critical real-time programs (which is why some of us
are using QNX anyway).

If I have to store someone's birthdate, the time_t format won't work very
well, so I won't use it. If, on the other hand, I need to "time stamp" an
event with resolution only down to the second, the time_t method is a good
fit. I have been using a "seconds since ..." long before I started using
QNX and Posix because it offers enough advantages of speed vs.
representation that it has been (for me, anyway) the best fit.

But for the rest of you out there, pick the method that works best for you.
I hope that the short-term fix of using unsigned int32's to gain another
68 years can be easily implemented in "legacy" code long before the problem
becomes another "computer millennium" problem.
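A minimal sketch of that unsigned fix, assuming 32-bit longs (the
function name is illustrative only):

    /* Compare two 32-bit timestamps as unsigned quantities; this
       pushes the rollover from 2038 out to 2106. */
    int is_earlier(long t1, long t2)
    {
        return (unsigned long)t1 < (unsigned long)t2;
    }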

Once again, if you really want to use strings to represent time, PLEASE, DO
SO!

Jeff Adler
Automation Services


David L. Hawley

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In comp.os.qnx Jeff Adler <jad...@mho.net> wrote:
: Since the discussion goes on and on about which format for time is better

: than others, might I suggest that the programmer who must use time for
: his/her application pick the method that is best suited to the application.

: The original point that I made is that the "current" Posix-C implementation
: has some issues to be reconciled before that method runs out of gas.

: If you like using floating point for time, go ahead. If you are a fan of
: strings, may the force be with you. Some of us are quite content with
: "seconds since a base date" for most of the applications where the method
: fits.

: Having grown up with slower machines and "expensive" (speed and $) FPU
: hardware / emulation, it hurts me to use a FP number where an integer would
: do, especially for time-critical real-time programs (which is why some of us
: are using QNX anyway).

Getting back to reality - in QNX at least it's pretty difficult to do any
floating point stuff in an interrupt handler.

: If I have to store someone's birthdate, the time_t format won't work very


: well, so I won't use it. If, on the other hand, I need to "time stamp" an
: event with resolution only down to the second, the time_t method is a good
: fit. I have been using a "seconds since ..." long before I started using
: QNX and Posix because it offers enough advantages of speed vs.
: representation that it has been (for me, anyway) the best fit.

: But for the rest of you out there, pick the method that works best for you.
: I hope that the short-term fix of using unsigned int32's to gain another
: 68 years can be easily implemented in "legacy" code long before the problem
: becomes another "computer millennium" problem.

: Once again, if you really want to use strings to represent time, PLEASE, DO
: SO!


Strings are a fine way to represent a date, but sometimes one needs to do
date calculations too. That means converting to a number, or iterating one
day at a time. And did you remember to store your date in GMT, or are
you using local time? String representation is a problem in that people
need to be able to look at a date and know what it means. We normally
think that when we see a string we know what it means, but with
locale problems, time zones and daylight saving time it's not that simple.
Also, does your network cross a time zone? Do your time-stamped files get
updated from around the web?

Using numbers has its own troubles. Using an unsigned long for time_t
solves a few problems, but it seems reasonable to expect that time_t
numbers should be valid before 0. I was born in 1952 and I would expect
that some agency somewhere may want to figure out how old I am.

The question boils down to what kind of object we want to use to
represent time. The representation does not really matter if we
consistently use one lib to do "all manipulations" on that object. The
problem with time_t is that since it's a number we tend to do things
like:

    tomorrow = now + 24*60*60;
    seconds_delta = then - now;
    min_delta = seconds_delta / 60;
    fwrite( &now, sizeof( now ), 1, fp_to_file_for_all_time );

quick now - was that write local time or GMT?

and this code is scattered all through our programs. Some of us use
struct tm to break up time_t's into components - it makes short work of
figuring out how many seconds till midnight, but the library routines
are pretty low level and we tend to write our own cover functions to
handle the details. Many probably make the mistake:

printf( "19%02d %2d %2d\n", t.tm_year, t.tm_month + 1, t.tm_day);

This will most likely print 191XX ... after the end of 1999. tm_year is
years since 1900, not years % 100. This simple fact allows the libs to
"transparently" hide the size of time_t, but it's really not enough.

We know that we should really use strftime() for converting to ASCII,
but we don't. Then there's the problem of reading user date input.
Modern GUIs help a lot with this problem, but then there are those pesky
files with ASCII date strings in them.
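For the record, a minimal sketch of the strftime() route -- it prints a
four-digit year no matter how wide time_t happens to be:

    #include <stdio.h>
    #include <time.h>

    void print_date(time_t t)
    {
        char buf[32];
        struct tm *tmp = localtime(&t);
        /* localtime() may return NULL for unrepresentable times */
        if (tmp != NULL && strftime(buf, sizeof buf, "%Y-%m-%d", tmp) > 0)
            printf("%s\n", buf);
    }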

Then there are the funny old calendars. It would be nice if we had
something that could easily represent and convert dates between our
calendar and others.

I guess that I am of the opinion that time is pretty important and that
practices should be standardized into a nice set of high-level but
efficient routines. There should probably be layers of support so that
small embedded systems don't have to carry too much overhead. They might
be able to use a 4-byte time_t with a sliding-window bit(s); larger
systems probably want to use 8 bytes. I don't buy the argument that we'll
all have 64-bit machines by 2038, but on the other hand the new 400MHz PII
will be worth zip by then. The latest Watcom compiler supports 64-bit
longs now.

We are spending a lot of money to fix the Y2K problem. A small fraction
of that money could be used on standards to solve the problem for a real
long time.

If I've done the math correctly 64 bit numbers give us +/-25 million years
at nanosecond resolution. Not long enough to track t-rex wandering
through the forest, or to model deformations in an atomic blast, but
pretty good for a lot of things.

--
David L. Hawley D.L. Hawley and Associates 1(503)274-2242
Software Engineer
dlha...@teleport.com dlha...@qnx.com

Don Yuniskis

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <WXR31.90$326....@news.teleport.com>,

David L. Hawley <dlha...@user2.teleport.com> wrote:

>Using numbers has its own troubles. Using an unsigned long for time_t
>solves a few problems, but it seems reasonable to expect that time_t
>numbers should be valid before 0. I was born in 1952 and I would expect
>that some agency somewhere may want to figure out how old I am.

time_t *must* be a signed quantity. See mktime(3C).
Of course, under C++ you could "fix" the (time_t) cast a bit...

>If I've done the math correctly 64 bit numbers give us +/-25 million years
>at nanosecond resolution. Not long enough to track t-rex wandering

Um, I don't think so. :> 64 bit *signed* long long is ~300 billion
years. Since nano = 1/billion, I'd guess 64 bits to give you +- 300
years. Unfortunately, I expect there will *still* be a sh*tload of
COBOL code *still* running 300 years hence... :-(
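The arithmetic checks out -- a back-of-envelope sketch in C, with
deliberately approximate constants:

    #include <stdio.h>

    int main(void)
    {
        double ns    = 9223372036854775807.0;        /* LLONG_MAX, in ns    */
        double secs  = ns / 1e9;                     /* ~9.2e9 seconds      */
        double years = secs / (365.25 * 24 * 3600);  /* ~292 years each way */
        printf("%.0f years\n", years);
        return 0;
    }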

>through the forest, or to model deformations in an atomic blast, but
>pretty good for a lot of things.

--don

Mike Davies

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <6io34t$5lo$1...@baygull.rtd.com>, Don Yuniskis <d...@rtd.com>
writes

>>>to manipulate 28 byte strings:
>>> "Mon May 4 10:20:37 MST 1998"
>>> - "Mon May 4 10:20:35 MST 1998"
>>> _________________________________
>>> = "--- --- -- 00:00:02 --- ----"
>>
>>Try:
>>
>> 1998,5,4,10,20,37
>
>Let's see... assuming an ASCII string (not Unicode, etc.) I count
>18 characters. Of course, the 5 and 4 could just as easily be 12

Don't forget that it was counting the bytes that led us to the Y2K
problem in the first place ! It hasn't even happened the first time and
you're already planning the next !

>and 11 respectively so that's 20 characters. Discard all the
>delimiters and you still have 15 characters -- 120 bits. Pack
>it as BCD instead of "strings" and you still have 8 bytes. Gee,
>last time I checked, that was 64 bits! And, you'll note that *this*
>64 bit representation has far less dynamic range than a "long long"

again (again (again (...))) we don't have long longs !
Also my (off-the-cuff) representation has *infinite* range (add an
A.D./B.C. selector and a decimal point).

>would have! It has the dreaded Y10K problem as well as being unable
>to handle dates B.C.

No, implicit A.D. shown above.

>
>>(implicit gmt)
>>
>>This is as easily manipulable in hardware as the current set-up. Proof :
>>it *is* the current storage format in many RTCs ! (With the proviso that
>>the commas are not really there, and that the individual items are
>>selected by reading individual registers)
>
>And they aren't stored in ASCII and they don't handle leap seconds and

This ASCII bit is a *complete* canard; how difficult is it to "or" in a
0x30 in hardware ?
If the HW RTC doesn't correctly handle leap seconds (and there is no
good reason for that either) how does using long longs help ? You're
still going to need library code to sort that out, and the RTCs aren't
guaranteed to have the same bugs, are they? :-)

>they don't handle leap years in 2400 correctly and if you set the date
>back to 1500 it will screw up the DoW "calculation" and...

So the current HW RTCs aren't good enough. This isn't a reason to
cripple their replacements with a data format that rolls over every 136
years. My point was that RTCs give you the time in hours, minutes,
seconds etc. Their more modern replacements give 4 digits for the year;
if a FIFO buffer were used, then the extensions to give greater precision
in terms of timing resolution (use a decimal point) and extra flags
for A.D./B.C. would be trivial.

>
>>Use library functions to translate the time into your current time zone
>>and (if you really must, though it seems perverse to me) to take
>>difference between two times and get the result in seconds. It's *your*
>>responsibility to see that it fits though !
>
>No, the library already defines a set of routines that deal with
>this quite nicely -- *if* you have an appropriately sized time_t.

We don't, though, do we? And though this may not be true for QNX, which I
expect has the Watcom team already working on it, the general run of
embedded systems won't see long longs for years, I bet.

>
>>>*much* simpler/faster/reliably than I could *ever* deal with manipulating
>>>two ints!
>>>
>>>NOT!!
>>
>>You have just said that you don't think you can master in SW a time/date
>>format that is handled routinely in HW !
>
>Show me a hardware RTC that implements localtime(), difftime(), etc.

No, let the rendered data handle the strain, not the RTC.

>It's quite simple to design a binary decade counter and cascade them
>to form "N digits". But, another thing entirely to design a bit
>of hardware to allow the difference between two of those time_t's
>to be computed. Or a sum, etc.

No, actually arbitrary (within reason) hardware addition/subtraction is
not *that* hard. (And you could always put a low end micro in the RTC
;-)

>
>>OK, I believe you, *but* You don't have to !! Let somebody else do it !
>>And then use their library (I know, I know, this seems like cheating,
>>but hey, so what ! :-)
>
>Great! Now I have realtime events coming along and I want to timestamp
>them. The "nearest second" is good enough. But, they are coming
>at an average rate of a few hundred Hz. Perhaps I'm watching
>network packets coming in on an ethernet and I want to note how long
>they take to be serviced -- note the difference between "time in"
>and "time out". How much CPU am I wasting doing what *should*
>be a simple arithmetic operation?

Yes, and if you try to implement a more efficient method using two
halves made from ints, then the chances of putting an unintentional
YwhateverK bug of your own in there are quite high too. Especially if you
try to optimise it ! Especially too if it doesn't make it into your
company's code libraries and everybody and their dog writes their own
incompatible version ! (I know your own would work, of course -
actually I was going to ask for a quick look at the code ;-)

>
>>>>> And ASCII strings
>>>>>in hardware? Come on, are you playing with us here or what?
>>>>
>>>>Why not ? If you think about it it's not so very different from what RTC
>>>>chips do at the moment (from the little that I've seen of them anyway).
>>>
>>>Then perhaps you should go look at the types of RTC chips out there
>>>before you get your foot caught any further down your throat!
>>
>>Yes, do that, and then you'll be able to tell me which (very common) rtc
>>uses a format like the one I've given above.
>
>Sorry, *none*! You'll find packed BCD and binary representations.
>I haven't seen an RTC that uses ASCII yet! Let alone a "very common"

As I say above, even my hardware skills are up to ORing in a 0x30 to the
output!

>one... Make sure you understand what a "string" is vs. packed
>BCD, etc.
>
>--don

Regards,

--
Mike Davies

Mike Davies

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <01bd789a$ebc69a40$92689cce@home-office>, Jeff Adler
<jad...@mho.net> writes

>Since the discussion goes on and on about which format for time is better
>than others, might I suggest that the programmer who must use time for
>his/her application pick the method that is best suited to the application.

Nope, that's how we got a Y2K problem in the first place isn't it ?
If every programmer hadn't done his/her own thing then surely all we'd
have to do would be to re-link with the up-to-date library, no ?

So somebody has to write a library that doesn't have any YwhateverK bugs
in it. And then we have to use it.

...big snip of a whole pile of thoughtful stuff...

>Jeff Adler
>Automation Services
>

--
Mike Davies

Mike Davies

unread,
May 6, 1998, 3:00:00 AM5/6/98
to

In article <354F6426...@nowhere.com>, JTK <unli...@nowhere.com>
writes

>> You have just said that you don't think you can master in SW a time/date
>> format that is handled routinely in HW !
>> OK, I believe you, *but* You don't have to !! Let somebody else do it !
>> And then use their library (I know, I know, this seems like cheating,
>> but hey, so what ! :-)
>>
>
>Tell you what, Mike. Instead of making a further fool of yourself here
>with your fever-induced ramblings, make us eat our words and write this
>library of which you speak yourself (I hear it's real easy to do),
>release it under the GNU GPL, and sit back and see how many people use
>it. Let me save you the trouble: Nobody will.

Whether doubles, long longs, pairs of ints or (and this is the one I
like ;-) strings are used, a library will have to be written by
somebody. I hope whoever does it doesn't put a YwhateverK bug in it,
though (IMO easiest with strings, as I keep on saying).

>
>Please answer one question for us before you do that, though: In what
>possible way do you figure that adding and subtracting two sets of two
>32-bit unsigned longs (WHICH EVERY EVERY EVERY C AND C++ COMPILER
>HAS!!!!!!#$%$&@#*) is more difficult and makes less sense than adding

How do you do the carries ? I'd like to see your implementation of that !
NOT!!

And it *still* doesn't work for *every* time/date !!!!

(And no other person in the world will do it the same way, nor use a
pair of unsigned longs instead of signed ones, nor even use a *pair* of
anything, I bet :-)

>and subtracting two ASCII strings of the form 1998,5,4,10,20,37? Do you
>read your own posts? Snap out of it! :-)

It has the virtue of infinite resolution (with the addition of a decimal
point, as I've said elsewhere). It is also easier to render, IMO.
Addition and subtraction should be done in a library, as should the
transformation into local time from GMT.

Regards,

--
Mike Davies

Don Yuniskis

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

In article <dpumeTAX...@noco.demon.co.uk>,

Mike Davies <mike_...@noco.demon.co.uk> wrote:
>In article <354F6426...@nowhere.com>, JTK <unli...@nowhere.com>
>writes

[snip]

>>Please answer one question for us before you do that, though: In what
>>possible way do you figure that adding and subtracting two sets of two
>>32-bit unsigned longs (WHICH EVERY EVERY EVERY C AND C++ COMPILER
>>HAS!!!!!!#$%$&@#*) is more difficult and makes less sense than adding
>
>How do you do the carries ? I'd like to see your implementation of that !
>NOT!!

How do you do the "carries" between characters in the *strings*?? :>
You should, perhaps, look at some of the arbitrary precision math
packages that are out there and work quite well. Say, adding 200 byte
integers??
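For the curious, a digit-at-a-time carry on an ASCII string looks like
the sketch below (fixed width, increment only -- real date arithmetic
would still need the variable month/day moduli on top of this):

    /* Increment a fixed-width decimal string in place: "0123" -> "0124". */
    void str_inc(char *s, int len)
    {
        int i;
        for (i = len - 1; i >= 0; i--) {
            if (s[i] < '9') { s[i]++; return; }
            s[i] = '0';         /* '9' rolls over; carry one place left */
        }
        /* every digit was '9': overflow -- the Y10K analogue */
    }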

>And it *still* doesn't work for *every* time/date !!!!
>
>(And no other person in the world will do it the same way, nor use a
>pair of unsigned longs instead of signed ones, nor even use a *pair* of
>anything, I bet :-)

And we can be *sure* no one else will be fiddling with 20 character
*strings*!

>>and subtracting two ASCII strings of the form 1998,5,4,10,20,37? Do you
>>read your own posts? Snap out of it! :-)
>
>It has the virtue of infinite resolution (with the addition of a decimal
>point, as I've said elsewhere).

... and an infinite number of *digits*, right? Since we've already decided
that it suffers from Y10K whereas a long long implementation would postpone
this until Y10^9K (roughly).

>It is also easier to render, IMO.

"Render"? You mean convert to a printable form?? By that I guess
you would advocate all floating point numbers *also* be stored as
"strings"? (some floating point emulation libraries take this
approach). Likewise, *all* integers...

>Addition and subtractiom should be done in a library, as should the
>transformation into local time from GMT.

They are. See mktime(), localtime(), difftime(), etc. So,
why does "easier to render" factor into this??

--don

Don Yuniskis

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

In article <moX4OJAz...@noco.demon.co.uk>,

Mike Davies <mike_...@noco.demon.co.uk> wrote:
>In article <6io34t$5lo$1...@baygull.rtd.com>, Don Yuniskis <d...@rtd.com>
>writes
>>
>>Let's see... assuming an ASCII string (not Unicode, etc.) I count
>>18 characters. Of course, the 5 and 4 could just as easily be 12
>
>Don't forget that it was counting the bytes that led us to the Y2K
>problem in the first place ! It hasn't even happened the first time and
>you're already planning the next !

Gee, why don't we come up with a *real* future safe representation
for time: we'll just add a bit to the length of the value for
every second that elapses. Then, all you have to do is count the
number of bits in a (variable length) time object to determine what
the current time is!

You are also "planning the next" Y2K problem. Or, are you saying the
time should be of indefinite length? With a 64 bit int, times
from 300 billion BC through 300 billion AD are representable to 1
second resolution. I guess you *could* do the same thing
with strings:
275000000000,12,25,10:14:23
Yup. That does the trick all right! And only takes twice the memory!

Of course, it would be interesting to see how many thousands of CPU
cycles it would take to determine what day of the week this is.
Or, how long it has been (to the nearest second) since the timestamp
on this newsgroup post...

>>and 11 respectively so that's 20 characters. Discard all the
>>delimiters and you still have 15 characters -- 120 bits. Pack
>>it as BCD instead of "strings" and you still have 8 bytes. Gee,
>>last time I checked, that was 64 bits! And, you'll note that *this*
>>64 bit representation has far less dynamic range than a "long long"
>
>again (again (again (...))) we don't have long longs !

So, you're saying "I don't have something and have to create
something to solve this need. But, I can't solve the problem
with two 32 bit longs! Instead, I'll use 10, 20, etc. chars!!!"

Yeah. Right.

>Also my (off-the-cuff) representation has *infinite* range (add an
>A.D./B.C. selector and a decimal point).
>
>>would have! It has the dreaded Y10K problem as well as being unable
>>to handle dates B.C.
>
>No, implicit A.D. shown above.
>
>>>(implicit gmt)
>>>
>>>This is as easily manipulable in hardware as the current set-up. Proof :
>>>it *is* the current storage format in many RTCs ! (With the proviso that
>>>the commas are not really there, and that the individual items are
>>>selected by reading individual registers)
>>
>>And they aren't stored in ASCII and they don't handle leap seconds and
>
>This ascii bit is a *complete* canard, how difficult is it to "or" in a
>0x30 in hardware ?

*You* said there was a common RTC that did this! *I* didn't!
You'll also note that you first have to break each packed BCD
counter into two separate counters so you can "or in a 0x30".

I'm still waiting to see how you do the date arithmetic...

>If the HW RTC doesn't correctly handle leap seconds (and there is no
>good reason for that either) how does using long longs help ? You're

Because you can numerically manipulate a *numeric* time_t
a helluvalot easier than you can do this with an ASCII string.

LISP is wonderfully versatile for manipulating lists. But,
if your problem doesn't fit the list model, it stinks!
Most conventional CPU's want to crunch *numbers* and are
optimized to do that well -- not interpret ASCII codes
as if they *were* numbers, etc.

>still going to need library code to sort that out, and the RTCs aren't
>guaranteed to have the same bugs, are they? :-)
>
>>they don't handle leap years in 2400 correctly and if you set the date
>>back to 1500 it will screw up the DoW "calculation" and...
>
>So the current HW RTCs aren't good enough. This isn't a reason to
>cripple their replacements with a data format that rolls over every 136
>years. My point was that RTCs give you the time in hours, minutes,
>seconds etc. Their more modern replacements give 4 digits for the year,
>if a fifo buffer was used then the extensions to give greater precision
>in terms of timing resolution (use a decimal point) and have extra flags
>for A.D./B.C. are trivial.
>
>>
>>>Use library functions to translate the time into your current time zone
>>>and (if you really must, though it seems perverse to me) to take
>>>difference between two times and get the result in seconds. It's *your*
>>>responsibility to see that it fits though !
>>
>>No, the library already defines a set of routines that deal with
>>this quite nicely -- *if* you have an appropriately sized time_t.
>
>We don't, though, do we? And though this may not be true for QNX, which I

*You* don't. That doesn't mean the rest of the world doesn't!

Hey, let's go back in time and fix the problem: time_t is a
300 bit integer type. Every compiler *must* support it.
Every processor must support it. Problem solved, eh?

If you think about the flavor of the C Standard and the original
library, you would realize that it tried to be as unimposing as
possible. It's real easy to design something that covers all
the SNAFU's -- as long as you don't care about how widely it
*isn't* accepted!

The standard libraries give a nice, flexible model for time_t
functions. If your implementation doesn't support a wide enough
time_t type, the *standard* isn't broken. Rather, complain to your
compiler vendor because *they* don't support wider types! Likewise,
you can complain that they don't support long doubles.

Yes, that's what we need... a heavy handed set of standards
*imposed* on everyone. "One size fits all".

NOT!

>expect has the Watcom team already working on it, the general run of
>embedded systems won't see long longs for years, I bet.

I would rather QNX made their libraries reentrant than waste time
on a time_t problem that can be easily worked around.

>>>>*much* simpler/faster/reliably than I could *ever* deal with manipulating
>>>>two ints!
>>>>
>>>>NOT!!
>>>
>>>You have just said that you don't think you can master in SW a time/date
>>>format that is handled routinely in HW !
>>
>>Show me a hardware RTC that implements localtime(), difftime(), etc.
>
>No, let the rendered data handle the strain, not the RTC.
>
>>It's quite simple to design a binary decade counter and cascade them
>>to form "N digits". But, another thing entirely to design a bit
>>of hardware to allow the difference between two of those time_t's
>>to be computed. Or a sum, etc.
>
>No, actually arbitrary (within reason) hardware addition/subtraction is
>not *that* hard. (And you could always put a low end micro in the RTC
>;-)

Have you ever designed an adder? Remember, we're talking about something
that is 88 bits wide (for your BCD implementation) and uses modular
arithmetic. Not only is it modular arithmetic, but it is also
*variable* modulus -- since the current date influences the
number of days in February of that year, etc.

And, this has to be as fast as the resolution of your RTC...
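By contrast, the numeric time_t route makes the variable-modulus work a
library problem -- a minimal sketch using nothing beyond time.h:

    #include <time.h>

    /* Add 90 seconds to a calendar time; mktime() renormalizes the
       out-of-range tm_sec, handling month lengths and leap years. */
    time_t add_90_seconds(struct tm t)
    {
        t.tm_sec += 90;
        return mktime(&t);
    }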

>>>OK, I believe you, *but* You don't have to !! Let somebody else do it !
>>>And then use their library (I know, I know, this seems like cheating,
>>>but hey, so what ! :-)
>>
>>Great! Now I have realtime events coming along and I want to timestamp
>>them. The "nearest second" is good enough. But, they are coming
>>at an average rate of a few hundred Hz. Perhaps I'm watching
>>network packets coming in on an ethernet and I want to note how long
>>they take to be serviced -- note the difference between "time in"
>>and "time out". How much CPU am I wasting doing what *should*
>>be a simple arithmetic operation?
>
>Yes, and if you try to implement a more efficient method using two
>halves made from ints, then the chances of putting an unintentional
>YwhateverK bug of your own in there are quite high too. Especially if you

This conversation is becoming absolutely ludicrous. You are claiming
that you can correctly perform variable modulus arithmetic on 80
bit BCD strings with *less* chance of introducing a bug than *I*
have implementing a 64 bit int from a pair of 32 bit ints?
Wow, I bet you must have hundreds of folks giving you job offers
with that sort of skill set!

>try to optimise it ! Especially too if it doesn't make it into your
>company's code libraries and everybody and their dog writes their own
>incompatible version ! (I know your own would work, of course -
>actually I was going to ask for a quick look at the code ;-)
>
>>
>>>>>> And ASCII strings
>>>>>>in hardware? Come on, are you playing with us here or what?
>>>>>
>>>>>Why not ? If you think about it it's not so very different from what RTC
>>>>>chips do at the moment (from the little that I've seen of them anyway).
>>>>
>>>>Then perhaps you should go look at the types of RTC chips out there
>>>>before you get your foot caught any further down your throat!
>>>
>>>Yes, do that, and then you'll be able to tell me which (very common) rtc
>>>uses a format like the one I've given above.
>>
>>Sorry, *none*! You'll find packed BCD and binary representations.
>>I haven't seen an RTC that uses ASCII yet! Let alone a "very common"
>
>As I say above even my hardware skills are up to oring in a 0x30 to the
>output!
>
>>one... Make sure you understand what a "string" is vs. packed
>>BCD, etc.

Do yourself a favor. *Write* a few of the routines in time.h
before you ramble on any further. It will give you some good
basic experience in how to handle multiple precision arithmetic.
When you're done, you can quickly test your code to see how
many places it screws up. After you get that fixed, you can
do some benchmarks on it compared to whatever library your
favorite compiler vendor supplies. See how big your TEXT
is. Run some timing tests, etc.

If you are within *2* orders of magnitude on a wide range of
sample data, I'm sure there are folks here who would *jump*
at the chance to benefit from your cleverness.

Meanwhile, I'll stick to integer data types.

And that's it for *this* thread...

*PLONK*
--don

Everett M. Greene

unread,
May 7, 1998, 3:00:00 AM5/7/98
to

In article <01bd789a$ebc69a40$92689cce@home-office> "Jeff Adler" <jad...@mho.net> writes:
> Since the discussion goes on and on about which format for time is better
> than others, might I suggest that the programmer who must use time for
> his/her application pick the method that is best suited to the application.
>
> The original point that I made is that the "current" Posix-C implementation
> has some issues to be reconciled before that method runs out of gas.
>
> If you like using floating point for time, go ahead. If you are a fan of
> strings, may the force be with you. Some of us are quite content with
> "seconds since a base date" for most of the applications where the method
> fits.
[snip]

There seem to be two aspects to the problem: (1) the determination
of elapsed time and (2) the determination of time and/or date. Much of
the argument has been complicated by mixing the two aspects and
trying to find a common solution. I am reaching the conclusion that
there is no way to (at least easily) derive a common time notation
which works well for both aspects. If one uses a count of (fractions
of) seconds since a base date, there is the problem of leap seconds,
time zones, etc. which makes converting the elapsed-time count to an
accurate date and time all but impossible. If one uses a calendar type
of notation (year, month, date, time of day), then elapsed-time
calculations become very involved.
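The split shows up directly in the standard library -- a minimal sketch:

    #include <stdio.h>
    #include <time.h>

    /* Aspect 1: elapsed time -- difftime() needs no calendar at all.  */
    /* Aspect 2: date and time -- localtime() drags in zones and DST.  */
    void demo(time_t start, time_t stop)
    {
        double     elapsed = difftime(stop, start);
        struct tm *cal     = localtime(&stop);
        if (cal != NULL)
            printf("%.0f s elapsed; year %d\n", elapsed, 1900 + cal->tm_year);
    }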
