I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but I would want to see real "experience reports" of when this is needed.
I understand that one can just multiply/divide the nanoseconds returned,
(or it could be a factory) but wouldn't it help for future enhancements
to reduce the number of functions (the 'pico' question)?
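For reference, the "just multiply/divide" approach is a one-liner with an integer nanosecond API. A minimal sketch, assuming the proposed `time.time_ns()` from the PEP:

```python
import time

# Minimal sketch of the "just multiply/divide" approach, assuming the
# proposed integer API time.time_ns() (PEP 564).
t_ns = time.time_ns()

# Coarser units are a single integer floor division:
t_us = t_ns // 1_000              # microseconds
t_ms = t_ns // 1_000_000          # milliseconds
t_s = t_ns // 1_000_000_000       # whole seconds

# A hypothetical finer unit (the 'pico' question) would just be a
# multiplication -- Python ints are arbitrary precision, so nothing
# overflows:
t_ps = t_ns * 1_000
```

Since the value is an int rather than a float, none of these conversions lose precision; the open question is only whether one function per unit is worth the API surface.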
On 21 Oct 2017 at 20:31, "francismb" <fran...@email.de> wrote:
> I understand that one can just multiply/divide the nanoseconds returned,
> (or it could be a factory) but wouldn't it help for future enhancements
> to reduce the number of functions (the 'pico' question)?

If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-)
On 22 October 2017 at 09:32, Victor Stinner <victor....@gmail.com> wrote:
> If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-)

There are actually solid physical reasons for that prediction likely being true. Aside from the power consumption, heat dissipation, and EM radiation issues that arise with higher switching frequencies, you also start running into more problems with digital circuit metastability ([1], [2]): the more clock edges you have per second, the higher the chance of an asynchronous input changing state at a bad time.

So yeah, for nanosecond resolution not to be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :)
There are current applications that need finer-than-nanosecond precision:
- relativity experiments
- particle experiments
Must they always use their own implementations of time.*, datetime.__init__, fromordinal, fromtimestamp?!
- https://scholar.google.com/scholar?q=femtosecond
- https://scholar.google.com/scholar?q=attosecond
- GPS now supports nanosecond resolution
- https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clocks
> In 2015 JILA evaluated the absolute frequency uncertainty of their latest strontium-87 optical lattice clock at 2.1 × 10⁻¹⁸, which corresponds to a measurable gravitational time dilation for an elevation change of 2 cm (0.79 in).
What about bus latency (and variance)?
From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfer-over-free-space :
> Optical two-way time and frequency transfer over free space
> Abstract
> The transfer of high-quality time-frequency signals between remote locations underpins many applications, including precision navigation and timing, clock-based geodesy, long-baseline interferometry, coherent radar arrays, tests of general relativity and fundamental constants, and future redefinition of the second. However, present microwave-based time-frequency transfer is inadequate for state-of-the-art optical clocks and oscillators that have femtosecond-level timing jitter and accuracies below 1 × 10⁻¹⁷. Commensurate optically based transfer methods are therefore needed. Here we demonstrate optical time-frequency transfer over free space via two-way exchange between coherent frequency combs, each phase-locked to the local optical oscillator. We achieve 1 fs timing deviation, residual instability below 1 × 10⁻¹⁸ at 1,000 s and systematic offsets below 4 × 10⁻¹⁹, despite frequent signal fading due to atmospheric turbulence or obstructions across the 2 km link. This free-space transfer can enable terrestrial links to support clock-based geodesy. Combined with satellite-based optical communications, it provides a path towards global-scale geodesy, high-accuracy time-frequency distribution and satellite-based relativity experiments.
How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies?
10⁻⁶   micro  µ
10⁻⁹   nano   n  -- int64
10⁻¹²  pico   p
10⁻¹⁵  femto  f
10⁻¹⁸  atto   a
10⁻²¹  zepto  z
10⁻²⁴  yocto  y
I'm at a loss to recommend a library for combining these prefixes with an epoch; but future compatibility may be a helpful, realistic objective.
Natural keys with such time resolution are still unfortunately likely to collide.
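To put rough numbers on the struct-width question, here is a sketch that computes the bits a signed epoch-relative tick counter needs at each resolution, assuming a ±292-year horizon (roughly what int64 nanoseconds covers; the horizon is my assumption, not anything from the thread):

```python
# Bits needed for a signed epoch-relative tick counter at each SI
# resolution, assuming a +/-292-year horizon (roughly what int64
# nanoseconds covers).
HORIZON_S = 292 * 31_557_600  # 292 Julian years, in seconds

for exp, prefix in [(6, "micro"), (9, "nano"), (12, "pico"), (15, "femto"),
                    (18, "atto"), (21, "zepto"), (24, "yocto")]:
    ticks = HORIZON_S * 10 ** exp
    bits = ticks.bit_length() + 1  # +1 for the sign bit
    print(f"10^-{exp:<2} ({prefix:>5}): {bits:3d} bits")
```

Nanoseconds land exactly on 64 bits for that span, and each factor-of-1000 step in resolution costs about 10 more bits, so attoseconds over the same span need 94 bits (a 128-bit field in practice).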
_______________________________________________
Python-Dev mailing list
Pytho...@python.org
https://mail.python.org/mailman/listinfo/python-dev
I worked at a molecular dynamics lab for a number of years. I advocated switching all our code to using attosecond units (rather than fractional picoseconds).

However, this had nothing whatsoever to do with the machine clock speeds, but only with the physical quantities represented and the scaling/rounding math.

It didn't happen, for various reasons. But if it had, I certainly wouldn't have expected standard library support for this. The 'time' module is about wall clock or calendar time, not about *simulation time*.

FWIW, a very long simulation might cover a millisecond of simulated time... we're a very long way from looking at molecular behavior over 104 days.
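The scaling/rounding point can be illustrated with a small sketch (purely hypothetical numbers, not the lab's actual code): a 1 fs time step stored as a fractional picosecond float accumulates rounding error, while the same step as integer attoseconds stays exact.

```python
# Hypothetical illustration of why integer attoseconds beat fractional
# picoseconds: 0.001 ps (1 fs) has no exact binary representation.
step_ps = 0.001   # 1 fs as a float number of picoseconds
step_as = 1_000   # the same step as an integer number of attoseconds

t_float, t_int = 0.0, 0
for _ in range(1_000_000):
    t_float += step_ps
    t_int += step_as

print(t_int)    # exactly 1_000_000_000 as (= 1000 ps)
print(t_float)  # typically a hair off 1000.0 from accumulated rounding
```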
> Must they always use their own implementations of time.*, datetime.__init__, fromordinal, fromtimestamp?!
Aligning simulation data in context to other events may be enlightening: is there a good library for handing high precision time units in Python (and/or CFFI)?
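I'm not aware of a dedicated stdlib type for this, but as a hedged sketch, `fractions.Fraction` can carry a wall-clock macro-time plus a sub-nanosecond micro-time offset exactly (all names and values below are invented for illustration):

```python
from fractions import Fraction

# Sketch: exact alignment of a wall-clock timestamp (macro-time) with a
# sub-nanosecond experimental offset (micro-time). All values invented.
PICO = Fraction(1, 10**12)

macro = Fraction(1_508_619_060)     # seconds since the epoch
micro = Fraction(5436, 10) * PICO   # delta-t = 543.6 ps, held exactly

t = macro + micro                   # an exact rational timestamp
print(t)  # no rounding at any resolution
```

The cost is that arithmetic on rationals is much slower than on ints or floats, which is why a fixed integer tick (as with nanoseconds) is the usual choice when one resolution suffices.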
Sure, but in these kinds of experiments you don't have a "timestamp" in
the usual sense.
You'll have some kind of high-precision "clock", but in most cases
there's no way and no reason to synchronise this to wall time. You end
up distinguishing between "macro-time" (wall time) and "micro-time"
(time in the experiment relative to something).
In a particle accelerator, you care about measuring relative times of
almost-simultaneous detection events with extremely high precision.
You'll also presumably have a timestamp for the event, but you won't be
able or willing to measure that with anything like the same accuracy.
While you might be able to say that you detected, say, a muon at
01:23:45.6789 with Δt = 543.6 ps*: you have femtosecond-scale resolution
on Δt, and you have a timestamp, but you don't have a femtosecond timestamp.
In ultrafast spectroscopy, we get a time resolution equal to the
duration of our laser pulses (fs-ps), but all the micro-times measured
will be relative to some reference laser pulse, which repeats at >MHz
frequencies. We also integrate over millions of events - wall-time
timestamps don't enter into it.
In summary, yes, when writing software for experiments working with high
time resolution you have to write your own implementations of whatever
data formats best describe time as you're measuring it, which generally
won't line up with time as a PC (or a railway company) looks at it.
Cheers
Thomas
* The example is implausible not least because I understand muon
chambers tend to be a fair bit bigger than 15cm, but you get my point.
On Tue, 24 Oct 2017 09:00:45 +0200
Victor Stinner <victor....@gmail.com> wrote:
> By the way, you mentioned that clocks are not synchronized. That's another
> relevant point. Even if system clocks are synchronized on a single
> computer, I read that you cannot reach nanosecond resolution for NTP
> synchronization even in a small LAN.
>
> For large systems or distributed systems, a "global (synchronized) clock"
> is not an option. You cannot synchronize clocks correctly, so your
> algorithms must not rely on time, or at least not on too precise a
> resolution.
>
> I am saying that to again repeat that we are far from nanosecond
> resolution for synchronized system clocks.
What does synchronization have to do with it? If synchronization
matters, then your PEP should be rejected, because current computers
using NTP can't synchronize with a better precision than 230 ns.
From https://en.wikipedia.org/wiki/Virtual_black_hole :
> In the derivation of his equations, Einstein suggested that physical space-time is Riemannian, i.e. curved. A small domain of it is approximately flat space-time.
From https://en.wikipedia.org/wiki/Quantum_foam :
> Based on the uncertainty principles of quantum mechanics and the general theory of relativity, there is no reason that spacetime needs to be fundamentally smooth. Instead, in a quantum theory of gravity, spacetime would consist of many small, ever-changing regions in which space and time are not definite, but fluctuate in a foam-like manner.
So, in regards to time synchronization, FWIU:
- WWVB "can provide time with an accuracy of about 100 microseconds"
- GPS time can synchronize down to "tens of nanoseconds"
- Blockchains work around local timestamp issues by "enforcing" linearity
See https://blog.cloudflare.com/how-to-achieve-low-latency/
Regards
Antoine.
NTP is layered over UDP. The article shows base case UDP latencies of
around 15µs over 10Gbps Ethernet.
Regards
Antoine.
2017-10-24 13:25 GMT+02:00 Antoine Pitrou <ant...@python.org>:
> NTP is layered over UDP. The article shows base case UDP latencies of
> around 15µs over 10Gbps Ethernet.
Ah ok.
IMHO the discussion became off-topic somewhere, but I'm curious, so I
searched about the best NTP accuracy and found:
https://blog.meinbergglobal.com/2013/11/22/ntp-vs-ptp-network-timing-smackdown/
"Is the accuracy you need measured in microseconds or nanoseconds? If
the answer is yes, you want PTP (IEEE 1588). If the answer is in
milliseconds or seconds, then you want NTP."
"There is even ongoing standards work to use technology developed at
CERN (...) to extend PTP to picoseconds."
It seems like PTP is more accurate than NTP.
Victor