Working on issue #139, I came across some weird behavior:
In fact, I want to increase the precision of the timestamps returned and
include fractional seconds, rather than rounding down to full seconds as
the code does now. (For that purpose, I've modified the code to ignore
timestamps from NMEA sentences which do not specify fractional seconds.)
The code is in hardware/hw/gps/gps_freerunner.c, in the last lines of
nmea_reader_update_time() (should be somewhere around the line 300-400
range). A tm structure is filled in with the date/time members, then:
fix_time = mkgmtime( &tm );
r->fix.timestamp = (long long)fix_time * 1000;
return 0;
So far, so good; GPS Status (from the Applications section of the wiki)
reports a timestamp that accurately reflects local time. (Unfortunately,
the app does not display the date.)
But... when I change the second line of the code snippet to:
r->fix.timestamp = (long long)((fix_time * 1000) + (int)(fmod(seconds, 1) * 1000.0));
all of a sudden GPS Status reports a weird timestamp; at 21:00 I get
something shortly after 11:00. The date is probably even more off,
though I can't check that with this app.
The same happens with
r->fix.timestamp = (long long)((fix_time * 1000) + ((int)(seconds * 1000) % 1000));
However, if I fake a fractional part of 500 ms with
r->fix.timestamp = (long long)fix_time * 1000 + 500;
the timestamp looks OK. So the basic idea of adding milliseconds can't be
that wrong; the error must be somewhere in the arithmetic. Except I can't
for the life of me figure out where... anyone have a clue?
Michael
Sounds like a possible overflow to me. Since you use (long long), I presume
these are Unix seconds since 1970. So doing an (int) of (seconds * 1000)
must overflow pretty badly, no?
Anyway, your other suggestion looks much nicer!
Linus
Attention: C-course ahead!
Yes, that's correct. But int has 32 bits anyway! So the problem is with
the (fix_time * 1000). I did some tests (my C is really rusty!), and in
fact, (long long)(fix_time * 1000) doesn't do what you think it does!
The (fix_time * 1000) is still evaluated as int, which is 32 bits, and
only then is the result converted to long long (64 bits)! Stupid, but
true. See the variable "c" in the example at the end of this mail.
Brushing up my stdlib.h and math.h knowledge (I'm in Ruby now ;):
r->fix.timestamp = (long long)fix_time * 1000 + fmod( seconds, 1 ) * 1000.0;
or, with the %:
r->fix.timestamp = (long long)fix_time * 1000 + (long long)(seconds * 1000) % 1000;
fmod returns the floating-point remainder of "seconds / 1.0", which is the
part after the decimal point (there is also modf, which returns the
fractional part directly and stores the integral part through a pointer).
Anyway, I still prefer your other solution ;)
Here is the program I used to see how gcc thinks:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
int main(){
    int a = 65536;
    long long b = a * a;                  // returns 0
    long long c = (long long)(a * a);     // returns 0
    long long d = (long long)(a) * a;     // returns 2**32
    long long e = (long long)(a) * (a*a); // returns 0
    printf( "b is %lli\n", b );
    printf( "c is %lli\n", c );
    printf( "d is %lli\n", d );
    printf( "e is %lli\n", e );

    int fix_time = 1000000000; // 10**9, so one can see overflows easily
    // Corresponds to September 9, 2001, at exactly 01:46:40 (UTC)
    // according to Wikipedia
    float seconds = 50.203;

    long long first  = (long long)((fix_time * 1000) +
                       ((int)(seconds * 1000) % 1000));    // returns -727379766
    long long second = (long long)fix_time * 1000 +
                       (long long)(seconds * 1000) % 1000; // returns 1000000000202
    long long third  = (long long)fix_time * 1000 +
                       fmod( seconds, 1 ) * 1000.;         // returns 1000000000202
    printf( "first is %lli\n", first );
    printf( "second is %lli\n", second );
    printf( "third is %lli\n", third );
    return 0;
}
and the result (on a 32-bit Linux):
b is 0
c is 0
d is 4294967296
e is 0
first is -727379766
second is 1000000000202
third is 1000000000202
PS: Exercise for the reader: why are there only 202 ms instead of 203?