What is the rationale behind storing GPS time as a floating-point number?


Martin Isenburg

Dec 13, 2011, 8:29:20 PM
to The LAS room - a friendly place to discuss specifications of the LAS format
Hello,

Does anyone know the rationale behind storing the GPS time as a 64-bit
double-precision floating-point number in LAS?

It seems the necessity to require an "adjusted GPS Time" comes solely
from the fact that a 64-bit double-precision floating-point number was
chosen to store the GPS time instead of a properly scaled 64-bit
integer, since a float "runs out of precision". Why? Because each time
a monotonically increasing positive value crosses another "power of
two barrier" the spacing between the smallest possible increments
doubles. For more on the topic of floating-point versus integer
storage for uniformly sampled domains, see section II of this paper
[1] or the explanation in this video [2] from minute 1:00 to 5:00.
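
To illustrate the "power of two barrier" effect, here is a minimal C
sketch (illustration only, not tied to any LAS implementation) that
uses the standard nextafter() function to print the spacing between
adjacent 64-bit doubles just below and just above a few powers of two:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* spacing between adjacent doubles just below and just above 2^e */
    for (int e = 22; e <= 30; e++)
    {
        double barrier = ldexp(1.0, e);                       /* 2^e */
        double below   = barrier - nextafter(barrier, 0.0);
        double above   = nextafter(barrier, 2.0 * barrier) - barrier;
        printf("2^%2d: spacing below = %g, spacing above = %g\n",
               e, below, above);
    }
    return 0;
}

The spacing just above each barrier is exactly twice the spacing just
below it, which is the doubling of the smallest possible increment
described above.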

Maybe there was a specific reason to use 64-bit double-precision
floats that I am unaware of?

Cheers,

Martin @lastools

[1] http://www.cs.unc.edu/~isenburg/lastools/download/laszip.pdf
[2] http://www.youtube.com/watch?v=A0s0fVktj6U

Michael Unverferth

Dec 14, 2011, 9:17:29 AM
to las...@googlegroups.com
I have often wondered: what is the rationale for storing the GPS time at all?

As I understand it, the GPS Time, along with the Source ID, is primarily used as a way to uniquely identify each point, presumably to associate some information outside the LAS file. Using 10 bytes for this purpose seems terribly inefficient. It is not even clear (to me anyway) whether the GPS Time is the time of the Pulse or the time of the Return. Not that there could be much difference, and perhaps none at all if precision loss becomes a factor.

Konstantin Lisitsyn

Dec 14, 2011, 7:43:02 PM
to The LAS room - a friendly place to discuss specifications of the LAS format
Hello Michael,

There are a couple of uses:

In transmission-line LiDAR surveys, the laser point timestamps are
helpful to determine the survey times of individual transmission line
spans. Span survey times are necessary to look up meteorological data
and electrical line load for the purpose of conductor temperature
calculation. Of course milli- or nanosecond precision isn't
necessary; timestamp accuracy of 5, 10, or 60 seconds would be
sufficient for this application.

Timestamps are also useful for identifying the direction of each
flightline, knowledge that is necessary for manual calibration of the
laser boresight angles.

Another application of the timestamps is the ability to sort points
back into acquisition order after the records have somehow been re-
arranged. Also, knowledge of the precise timestamp can be used to look
up the position of the scanner at the time of survey, and that info
may be helpful in data classification, intensity normalization and 3D
model reconstruction.

And of course, for users who don't need timestamps for their
applications, there are record types 0 and 2 in the LAS format
standard.

--
Konstantin

Martin Isenburg

Dec 15, 2011, 9:13:01 AM
to The LAS room - a friendly place to discuss specifications of the LAS format
Hello,

Not sure if anyone was following this thread in the LAStools user
group. It seems that using 64-bit floats and the "adjusted" bit to
overcome precision limitations in the GPS time field is a work-around
that will cause plenty of headaches down the road.

http://groups.google.com/group/lastools/browse_thread/thread/a9b0c6eb4baf9f54

The issue is: if you use the "adjusted" bit, how do you subsequently
use the GPS times? Do you leave them in the "adjusted" mode, meaning
they are 1,000,000,000 seconds smaller than the actual value? Or do
you add the offset of 1 billion back in? If you do add the offset, how
do you prevent precision loss? By using 80-bit or 128-bit floats? You
cannot simply add 1,000,000,000 to a GPS time stored as a 64-bit
float because then you lose precision. And even if you correctly
convert it to ASCII with all the precision there is ... how would you
read it back in? If you were to output the time 918016980.024009 into
a string, how would you parse it back into a variable? You could not
use atof() or scanf("%lf") because if you scan the number into a
double you lose precision.
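
Purely as an illustration (a minimal sketch, not anything prescribed
by the LAS specification), the following C program uses nextafter()
to compare the spacing of representable doubles at the magnitude of
the "adjusted" value with the spacing after the 1,000,000,000 second
offset has been added back in:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* the absolute GPS time from the example above and its "adjusted"
       counterpart, which is 1,000,000,000 seconds smaller (negative
       here because this time stamp is less than 1,000,000,000 s)      */
    double absolute = 918016980.024009;
    double adjusted = absolute - 1000000000.0;

    /* spacing between representable doubles at each magnitude */
    double spacing_adjusted = nextafter(fabs(adjusted), INFINITY) - fabs(adjusted);
    double spacing_absolute = nextafter(absolute, INFINITY) - absolute;

    printf("spacing at adjusted magnitude : %g seconds\n", spacing_adjusted);
    printf("spacing at absolute magnitude : %g seconds\n", spacing_absolute);
    return 0;
}

At the absolute magnitude the spacing is eight times coarser than at
the adjusted magnitude, so sub-microsecond detail that survives in the
adjusted representation is rounded away when the offset is added back
in double precision.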

However, if we were to use a 64-bit integer that represents the
absolute GPS time since 1980 as picoseconds, we could simply store
918,016,980,024,009. That is very robust and easily parsed with
scanf("%I64d") without any precision loss. A 64-bit integer allows
us to store numbers up to 2^64 = 18,446,744,073,709,551,616, which
means - even at a resolution of picoseconds - the counter would last
for 18,446,744,073,709 seconds. Hence, we can then go a total of
584,542 years before we would need an "adjustment" bit.
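
For comparison, here is a minimal C sketch (using the portable
PRId64 / SCNd64 macros from <inttypes.h> instead of the MSVC-specific
"%I64d") showing that such a 64-bit integer count survives a round
trip through ASCII without any loss:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* the integer count from the example above, printed to ASCII
       and parsed back                                              */
    int64_t count = 918016980024009LL;

    char buffer[32];
    snprintf(buffer, sizeof(buffer), "%" PRId64, count);

    int64_t parsed = 0;
    sscanf(buffer, "%" SCNd64, &parsed);

    printf("original : %" PRId64 "\n", count);
    printf("parsed   : %" PRId64 "\n", parsed);
    printf("lossless : %s\n", (parsed == count) ? "yes" : "no");
    return 0;
}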

I think we should amend LAS 1.4 to have an "absolute GPS time in
picoseconds" bit in the global encoding field that changes the meaning
of the 64-bit GPS time field to be an unsigned 64-bit integer, before
people start producing content with "adjusted" GPS time that will
cause plenty of headaches downstream.
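
To make the proposal concrete, here is a purely hypothetical sketch
(such an "absolute GPS time in picoseconds" bit does not exist in any
released LAS specification) of how a reader might interpret the same
8 bytes of the GPS time field depending on such a flag:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* hypothetical: interpret the 8-byte GPS time field either as the current
   64-bit double or, if a proposed global encoding bit were set, as an
   unsigned 64-bit count of picoseconds since the GPS epoch; this assumes
   a little-endian host, matching the little-endian layout of LAS files    */
static void print_gps_time(const unsigned char field[8], int absolute_picoseconds)
{
    if (absolute_picoseconds)
    {
        uint64_t count;
        memcpy(&count, field, 8);
        printf("GPS time: %" PRIu64 " picoseconds since 1980\n", count);
    }
    else
    {
        double seconds;
        memcpy(&seconds, field, 8);
        printf("GPS time: %.6f seconds (adjusted or GPS week time)\n", seconds);
    }
}

int main(void)
{
    unsigned char field[8];
    double t = 8096387.123456;      /* example adjusted GPS time */
    memcpy(field, &t, 8);
    print_gps_time(field, 0);
    return 0;
}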

Cheers,

Martin

Mike Grant

Dec 16, 2011, 1:21:17 PM
to las...@googlegroups.com
On 14/12/11 14:17, Michael Unverferth wrote:
> I have often wondered what the rationale is of storing the GPS time at all?
>
> As I understand it, the GPS Time, along with the Source ID are primarily
> used as a way to uniquely identify each point, presumably to associate
> some information outside the LAS file. Using 10 bytes for this purpose

Yes, we (and I believe many other operators) certainly use GPS time as
the indexing key between different instruments and datasets.

Martin's already helpfully pointed out that a scaled int at picosecond
precision will last for a long time. On the flipside, it'd be
interesting to see if there's ever a situation where the float
representation actually causes genuine number representation problems.
For example, whether there's a GPS time value that means the precision
available is too low to accurately time, say, a full waveform sample @
500 picosecond resolution (our Leica system can theoretically produce
data at this accuracy). It's not too much of a stretch to imagine
wanting 100x greater accuracy (~5ps resolution = ~2mm) in the medium
term future too.
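
One quick way to check this numerically (a small sketch, using only
the standard C math library): find the first power-of-two second count
at which the spacing of a 64-bit double exceeds 500 picoseconds.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double target = 500e-12;   /* 500 picoseconds */

    /* walk up the power-of-two barriers until the spacing between
       adjacent doubles exceeds the target resolution               */
    for (int e = 0; e <= 40; e++)
    {
        double t = ldexp(1.0, e);                    /* 2^e seconds */
        double spacing = nextafter(t, 2.0 * t) - t;
        if (spacing > target)
        {
            printf("above %.0f seconds the spacing is %g seconds (> 500 ps)\n",
                   t, spacing);
            return 0;
        }
    }
    return 0;
}

The answer comes out at 2^22 = 4,194,304 seconds, i.e. only about 48
days into the "adjusted" range, so timing individual waveform samples
at 500 ps resolution is already beyond what the double representation
of the time stamp itself can resolve.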

Otherwise I suppose it's just a slight waste of space in the data
structure, but convenient vs. having to descale ints (which is probably
the real reason it's there).

Cheers,

Mike.

Martin Isenburg

Dec 16, 2011, 3:15:10 PM
to The LAS room - a friendly place to discuss specifications of the LAS format
Hi Mike,

> Yes, we (and I believe many other operators) certainly use GPS time as
> the indexing key between different instruments and datasets.

If this is the case, then how does the "adjusted GPS time" stored as
a double with an implicit offset of 1 billion work together with other
instruments? What is the common representation in which the GPS times
are compared? It cannot be doubles unless the other instruments use
the same "adjusted GPS time" concept.

> Martin's already helpfully pointed out that a scaled int at picosecond
> precision will last for a long time.

You guys are terrible. Nobody checked my math. We only get 584 years,
and that is with nanosecond accuracy. (2^64 picoseconds is only about
213 days, whereas 2^64 nanoseconds is about 18.4 billion seconds,
which is roughly 584 years.)

> For example, whether there's a GPS time value that means the precision
> available is too low to accurately time, say, a full waveform sample @
> 500 picosecond resolution (our Leica system can theoretically produce
> data at this accuracy).

The standard GPS time we use is the number of seconds since 00:00,
January 6, 1980. Usually this is not stored as one number but rather
as two numbers: a count of weeks since that instant and the seconds
of the current week. The clock is currently at 1666 weeks and 499,587
seconds. Here you can find a GPS clock:

http://leapsecond.com/java/gpsclock.htm

Since each week has 7*24*60*60 = 604,800 seconds, we are at second
1666 * 604,800 + 499,587 = 1,008,096,387 and counting. If we were to
store this number of seconds as a 64-bit double, how much precision
would we currently (!) get? Easy.

The number falls between 2^29 (536,870,912) and 2^30 (1,073,741,824)
and there are 52 mantissa bits to cover the range of 536,870,912
between 2^29 and 2^30. 52 mantissa bits can represent 2^52 =
4,503,599,627,370,496 different numbers, so the spacing is
536,870,912 / 4,503,599,627,370,496 = 0.000000119209290 seconds or
0.000119209290 milliseconds or 0.119209290 microseconds or 119.209290
nanoseconds or 119209.290 picoseconds.

The adjusted GPS time therefore subtracts 1,000,000,000 from the
current number of seconds. That puts us at 1,008,096,387 -
1,000,000,000 = 8,096,387. How much precision is there? Easy once
more.

The number falls between 2^22 (4,194,304) and 2^23 (8,388,608) and
there are again 52 mantissa bits but now the range is only 4,194,304.
So we get 4,194,304 / 4,503,599,627,370,496 = 0.000000000931322575
seconds or 0.000000931322575 milliseconds or 0.000931322575
microseconds or 0.931322575 nanoseconds or 931.322575 picoseconds.
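
As a cross-check, the same two spacings can be computed with a few
lines of C (a minimal sketch using frexp() and ldexp() from the
standard math library, mirroring the hand calculation):

#include <math.h>
#include <stdio.h>

/* spacing between adjacent doubles at the magnitude of x */
static double spacing(double x)
{
    int e;
    frexp(x, &e);              /* x = m * 2^e with 0.5 <= m < 1     */
    return ldexp(1.0, e - 53); /* one unit in the last place of the
                                  53-bit significand                 */
}

int main(void)
{
    printf("spacing at 1,008,096,387 seconds : %g seconds\n", spacing(1008096387.0));
    printf("spacing at     8,096,387 seconds : %g seconds\n", spacing(8096387.0));
    return 0;
}

This prints approximately 1.19209e-07 and 9.31323e-10 seconds,
matching the two numbers derived above.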

However, we will soon - in about 290,000 seconds or 81 hours - lose
another bit of precision, namely when the adjusted second count passes
the number 8,388,608 and moves into the range between 2^23 (8,388,608)
and 2^24 (16,777,216).

And we will soon after - in about 14 weeks - lose yet another bit of
precision, namely when the adjusted second count passes the number
16,777,216.

And just 28 weeks after that we will lose yet another bit of
precision, namely when the adjusted second count passes the number
33,554,432 ...

And about a year later we will lose yet another bit of precision ...

And two years later again ...

And four years later once more.

I think you now get the drift why the floating-point format is a
*TERRIBLE* representation for storing a time counter. As time passes,
the counter slowly loses precision ...

Martin

Martin Isenburg

Jan 29, 2017, 10:48:37 PM
to The LAS room - a friendly place to discuss the LAS and LAZ formats, LAStools - efficient command line tools for LIDAR processing
Hello.

Does anyone remember this post of mine from the 16th of December 2011? At that moment the GPS clock [1] was at 1666 weeks and 499,587 seconds. Our Adjusted GPS Time Stamps were then 1666 * 604,800 + 499,587 - 1,000,000,000 = 1,008,096,387 - 1,000,000,000 = 8,096,387, and as that number falls between 2^22 (4,194,304) and 2^23 (8,388,608), storing the Adjusted GPS Time Stamp as a 64-bit floating-point number was possible with a minimal spacing of 4,194,304 / 2^52 = 4,194,304 / 4,503,599,627,370,496 = 0.00000000093 seconds between subsequent time stamps.

Since then we have lost 5 bits of resolution.

The exponent part of the 64-bit floating-point representation has increased by 5, hence the intervals that the 52-bit mantissa covers have become 2^5 = 32 times wider.

Now the GPS clock [1] is at 1934 weeks and 99,287 seconds. So our Adjusted GPS Time Stamps are 1934 * 604,800 + 99,287 - 1,000,000,000 = 1,169,782,487 - 1,000,000,000 = 169,782,487. That number falls between 2^27 (134,217,728) and 2^28 (268,435,456). So storing today's Adjusted GPS Time Stamps as 64-bit floating-point numbers only gives us a minimal spacing of 134,217,728 / 2^52 = 134,217,728 / 4,503,599,627,370,496 = 0.0000000298 seconds between subsequent time stamps. This equals 0.0298 microseconds.

In about 3 years we will lose another bit and the smallest possible spacing between Adjusted GPS Time Stamps will increase to 0.0596 microseconds.
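
A few lines of C confirm the "about 3 years" estimate (the numbers are taken straight from the paragraph above):

#include <stdio.h>

int main(void)
{
    /* adjusted GPS seconds at the time of writing and the next power-of-two
       barrier at which another mantissa bit is effectively lost             */
    long long adjusted_now = 169782487LL;   /* 1934 * 604,800 + 99,287 - 1e9 */
    long long next_barrier = 268435456LL;   /* 2^28 */

    double seconds_left = (double)(next_barrier - adjusted_now);
    double years_left   = seconds_left / (365.25 * 24 * 60 * 60);

    printf("seconds until the next precision loss : %.0f\n", seconds_left);
    printf("years   until the next precision loss : %.2f\n", years_left);
    return 0;
}

This prints roughly 98.7 million seconds, or about 3.1 years.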


Regards from Singapore,


Martin @rapidlasso


Terje Mathisen

Jan 30, 2017, 4:05:31 AM
to las...@googlegroups.com
As you conclude in the end: FP is incredibly unsuitable for time
stamps. :-(

I have been a member of the NTP Hackers team for 20+ years; we
maintain the Network Time Protocol, which uses fixed-point 32:32 time
stamps, i.e. 32-bit whole seconds plus a 32-bit binary fraction,
giving roughly 1/4 ns resolution.

This format does wrap around, but only after 130+ years, so a sliding
window of 60+ years is sufficient to figure out the absolute year.
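
For readers unfamiliar with the format, here is a minimal sketch of
such a 32:32 fixed-point stamp (the helper names are invented for
illustration, and the detour through a double is for illustration
only; it reintroduces exactly the precision limit under discussion):

#include <inttypes.h>
#include <stdio.h>

/* 32:32 fixed point: upper 32 bits are whole seconds, lower 32 bits are a
   binary fraction, so one least significant bit is 2^-32 s (~233 ps)      */
static uint64_t seconds_to_fixed(double seconds)
{
    return (uint64_t)(seconds * 4294967296.0);   /* seconds * 2^32 */
}

static double fixed_to_seconds(uint64_t stamp)
{
    return (double)stamp / 4294967296.0;
}

int main(void)
{
    uint64_t stamp = seconds_to_fixed(1008096387.25);
    printf("stamp = 0x%016" PRIx64 " -> %.9f seconds\n",
           stamp, fixed_to_seconds(stamp));
    return 0;
}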

Terje


--
- <Terje.M...@tmsw.no>
"almost all programming can be viewed as an exercise in caching"
