In [8]: 1e6 - np.nextafter(1e6, 0)
Out[8]: 1.1641532182693481e-10
So that could represent values to a precision of ~0.116 femtoseconds, or
116 attoseconds. Not bad. Femtosecond precision would cover a lot of
cases; if you really need attoseconds, then it won't work.
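For anyone who wants to check the arithmetic, here's a minimal sketch
(assuming the proposal means storing microseconds as a float, and that
NumPy is available) of where those numbers come from -- the precision
near a float value is the gap to its nearest representable neighbour:

import numpy as np

usec = 1e6  # one second, expressed as float microseconds
gap = usec - np.nextafter(usec, 0)  # distance to the next float below

print(gap)         # ~1.1641532182693481e-10 microseconds
print(gap * 1e-6)  # ~1.16e-16 seconds, i.e. ~0.116 femtoseconds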
-n
--
Nathaniel J. Smith -- https://vorpus.org
> Is there interest in a PEP for extending time, datetime / timedelta for
> arbitrary or extended precision fractional seconds?
Having seen the utter disaster that similar ideas brought to numpy, I would
say: no.
On the other hand, nanoseconds are slowly making their way to the stdlib
and to add nanoseconds to datetime we only need a fully backward compatible
implementation, not even a PEP.
See <https://bugs.python.org/issue15443>.
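For what it's worth, part of that already landed: PEP 564 added
nanosecond variants of the clock functions in Python 3.7, e.g.:

import time

print(time.time_ns())       # integer nanoseconds since the epoch
print(time.monotonic_ns())  # integer nanoseconds, monotonic clock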
Of course they are a problem -- why else would they not be implemented
in datetime?
But my point is that a given datetime stamp or calculation could be off
by a second or so depending on whether and how leap seconds are
implemented.
It just doesn't seem like a good idea to be handling months and
femtoseconds with the same “encoding”.
-CHB
On Tue, May 15, 2018 at 2:05 AM, Chris Barker via Python-ideas
<python...@python.org> wrote:
> But my question is whether high precision timedeltas belongs with "calendar
> time" at all.
>
> What with UTC and leap seconds, and all that, it gets pretty ugly, when down
> to the second or sub-second, what a given datetime really means.
UTC and leap seconds aren't a problem. When there's a leap second, you
have 23:59:60 (or you repeat 23:59:59, if you can't handle second
#60). That's pretty straight-forward, perfectly well-defined.
I'm sure that the issue of "what do you call the leap second itself"
is not the problem that Chris Barker is referring to. The problem with
leap seconds is that they create unpredictable differences between UTC
and real elapsed time. You can represent a timedelta of exactly 10^8
seconds, but if you add it to the current time, what should you get?
What UTC time will it be in 10^8 real-time seconds? You don't know,
and neither does anybody else, because you don't know how many leap
seconds will occur in that time.
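To make that concrete, a small illustration (the dates are arbitrary):
datetime arithmetic just assumes zero intervening leap seconds, because
it can't do anything else.

from datetime import datetime, timedelta

now = datetime(2018, 5, 15)
future = now + timedelta(seconds=10**8)
# This silently assumes no leap seconds occur in the interval -- an
# assumption nobody can verify ~3 years ahead of time.
print(future)  # 2021-07-15 09:46:40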
> The other issue with leap-seconds is that python's datetime doesn't
> support them :-)
That's not entirely true. Since the implementation of PEP 495, it is
possible to represent the 23:59:60 as 23:59:59 with the "fold" bit set. Of
course, the repeated 23:59:59 will be displayed and behave exactly the same
as the first 23:59:59, but a 3rd party library can be written to take the
"fold" bit into account in temporal operations.
TAI                  | UTC
---------------------+---------------------
2016-12-31T23:59:35  | 2016-12-31T23:59:59
2016-12-31T23:59:36  | 2016-12-31T23:59:60
2016-12-31T23:59:37  | 2017-01-01T00:00:00
This correspondence can be implemented in Python using the following datetime objects:
TAI                             | UTC
--------------------------------+--------------------------------------
datetime(2016,12,31,23,59,35)   | datetime(2016,12,31,23,59,59)
datetime(2016,12,31,23,59,36)   | datetime(2016,12,31,23,59,59,fold=1)
datetime(2016,12,31,23,59,37)   | datetime(2017,1,1,0,0,0)
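A quick sketch of the stdlib part of that (the fold-aware arithmetic
would live in the hypothetical 3rd-party library):

from datetime import datetime

first = datetime(2016, 12, 31, 23, 59, 59)         # 23:59:59
leap = datetime(2016, 12, 31, 23, 59, 59, fold=1)  # stands in for 23:59:60

print(first == leap)          # True: comparisons ignore fold
print(first.fold, leap.fold)  # 0 1 -- but the bit is there to inspect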
Alexander covered the Python part of this, so I'll answer the possible
higher-level question: we haven't yet needed a "negative" leap
second, and it's considered unlikely (but not impossible) that we ever
will. That's because the Earth's rotation is inexorably slowing, so
the mean solar day inexorably lengthens when measured by SI seconds.
Other things can cause the Earth's rotation to speed up temporarily
(like some major geological events), but they've only been able to
overcome factors acting to slow rotation for brief periods, and never
yet got near to overcoming them by a full second.
[Chris Barker]
> Does that support the other way -- or do we never lose a leap second anyway?
> (showing ignorance here)
> Alexander covered the Python part of this, ...
AstroPy solves for leap seconds [1][2] according to the IAU ERFA (SOFA)
library [3] and the IERS-B and IERS-A tables [4]. IERS-B tables ship
with AstroPy. The latest IERS-A tables ("from 1973 through one year into
the future") auto-download on first use [5].

[3] "Leap second day utc2tai interpolation"
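A short sketch of that in practice (assuming a recent AstroPy install;
note that it parses the 23:59:60 leap second directly):

from astropy.time import Time

t = Time("2016-12-31 23:59:60", scale="utc")  # the 2016 leap second
print(t.tai.iso)  # the same instant expressed on the TAI scale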
Not really. There are multiple time standards in use. Atomic clocks
count the duration of time – from their point of view, every second is
the same (modulo relativistic effects). TAI is the international
standard based on using atomic clocks to count seconds since a fixed
starting point, at mean sea level on Earth.
Another approach is to declare that each day (defined as "the time
between the sun passing directly overhead the Greenwich Observatory
twice") is 24 * 60 * 60 seconds long. This is what UT1 does. The
downside is that since the earth's rotation varies over time, this
means that the duration of a UT1 second varies from day to day in ways
that are hard to estimate precisely.
UTC is defined as a hybrid of these two approaches: it uses the same
seconds as TAI, but every once in a while we add or remove a leap
second to keep it roughly aligned with UT1. This is the time standard
that computers use the vast majority of the time. Importantly, since
we only ever add or remove an integer number of seconds, and only at
the boundary in between seconds, UTC is defined just as precisely as
TAI.
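In other words, converting between UTC and TAI is a table lookup. A toy
sketch (the offsets around the 2016 leap second are the published ones;
a real implementation would use the full IERS table):

from datetime import datetime, timedelta

# (UTC instant the offset takes effect, TAI - UTC in integer seconds)
LEAP_TABLE = [
    (datetime(2015, 7, 1), 36),
    (datetime(2017, 1, 1), 37),
]

def tai_from_utc(utc):
    # Toy lookup: only valid for instants covered by LEAP_TABLE.
    offset = max(off for eff, off in LEAP_TABLE if utc >= eff)
    return utc + timedelta(seconds=offset)

print(tai_from_utc(datetime(2017, 6, 1)))  # 2017-06-01 00:00:37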
So if you're trying to measure time using UT1 then yeah, your computer
clock is wrong all the time by up to 0.9 seconds, and we don't even
know what UT1 is more precisely than ~milliseconds. Generally it gets
slightly more accurate just after a leap second, but it's not very
precise either before or after. Which is why no-one does this.
But if you're trying to measure time using UTC, then computers with
the appropriate setup (e.g. at CERN, or in HFT data centers) routinely
have clocks accurate to <1 microsecond, and leap seconds don't affect
that at all.
The datetime module still isn't appropriate for doing precise
calculations over periods long enough to include a leap second though,
e.g. Python simply doesn't know how many seconds passed between two
arbitrary UTC timestamps, even if they were in the past.
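Concretely:

from datetime import datetime

t1 = datetime(2016, 12, 31, 23, 59, 0)
t2 = datetime(2017, 1, 1, 0, 0, 0)
# A leap second was inserted at the end of 2016, so 61 SI seconds
# actually elapsed here -- but datetime has no way to know that.
print((t2 - t1).total_seconds())  # 60.0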
-n
--
Nathaniel J. Smith -- https://vorpus.org
So, does that mean we now need to hold our breath for 1.9 British
trillion years or 1.9 American trillion years?
Assuming you were referring to the French-Latin-Arabic based numbers
and naming systems at all, that is... And anyway, what's that point
doing there, right between the "1" and the "9"?
Stefan