[Python-Dev] PEP 564: Add new time functions with nanosecond resolution


Victor Stinner

Oct 16, 2017, 6:44:43 AM
to Python Dev
Hi,

While discussion of this PEP is not over on python-ideas, I am proposing
it directly on python-dev, since the PEP already summarizes the current
and past proposed alternatives.

python-ideas threads:

* Add time.time_ns(): system clock with nanosecond resolution
* Why not picoseconds?

PEP 564 will be online shortly at:
https://www.python.org/dev/peps/pep-0564/

Victor


PEP: 564
Title: Add new time functions with nanosecond resolution
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner <victor....@gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 16-October-2017
Python-Version: 3.7


Abstract
========

Add five new functions to the ``time`` module: ``time_ns()``,
``perf_counter_ns()``, ``monotonic_ns()``, ``clock_gettime_ns()`` and
``clock_settime_ns()``. They are similar to the functions without the
``_ns`` suffix, but have nanosecond resolution: they return a number of
nanoseconds as a Python ``int``.

The best ``time.time_ns()`` resolution measured in Python is 3 times
better than the ``time.time()`` resolution, on both Linux and Windows.


Rationale
=========

Float type limited to 104 days
------------------------------

The clock resolution of desktop and laptop computers is getting closer
to one nanosecond. More and more clocks have a frequency in MHz, up to
GHz for the CPU TSC clock.

The Python ``time.time()`` function returns the current time as a
floating point number, which is usually a 64-bit binary floating point
number (in the IEEE 754 format).

The problem is that the float type starts to lose nanoseconds after 104
days. Converting from nanoseconds (``int``) to seconds (``float``) and
then back to nanoseconds (``int``) shows whether the conversions lose
precision::

    # no precision loss
    >>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
    0
    # precision loss! (1 nanosecond)
    >>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
    -1
    >>> import datetime
    >>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
    104 days, 5:59:59.254741

``time.time()`` returns seconds elapsed since the UNIX epoch: January
1st, 1970. This function loses precision since May 1970 (47 years ago)::

    >>> import datetime
    >>> unix_epoch = datetime.datetime(1970, 1, 1)
    >>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
    1970-04-15 05:59:59.254741


Previous rejected PEP
---------------------

Five years ago, PEP 410 proposed a large and complex change to all
Python functions returning time, to support nanosecond resolution using
the ``decimal.Decimal`` type.

The PEP was rejected for several reasons:

* The idea of adding a new optional parameter to change the result type
  was rejected. It's an uncommon (and bad?) programming practice in
  Python.

* It was not clear if hardware clocks really had a resolution of 1
  nanosecond, especially at the Python level.

* The ``decimal.Decimal`` type is uncommon in Python and so requires
  adapting code to handle it.


CPython enhancements of the last 5 years
----------------------------------------

Since the PEP 410 was rejected:

* The ``os.stat_result`` structure got 3 new fields for timestamps as
  nanoseconds (Python ``int``): ``st_atime_ns``, ``st_ctime_ns``
  and ``st_mtime_ns``.

* PEP 418 was accepted and Python 3.3 got 3 new clocks:
  ``time.monotonic()``, ``time.perf_counter()`` and
  ``time.process_time()``.

* The CPython private "pytime" C API handling time now uses a new
  ``_PyTime_t`` type: a simple 64-bit signed integer (C ``int64_t``).
  The ``_PyTime_t`` unit is an implementation detail and not part of the
  API. The unit is currently ``1 nanosecond``.

Existing Python APIs using nanoseconds as int
---------------------------------------------

The ``os.stat_result`` structure has 3 fields for timestamps as
nanoseconds (``int``): ``st_atime_ns``, ``st_ctime_ns`` and
``st_mtime_ns``.

The ``ns`` parameter of the ``os.utime()`` function accepts a
``(atime_ns: int, mtime_ns: int)`` tuple: nanoseconds.
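
For example, a file timestamp can be read and restored without losing
sub-second precision (a sketch; assumes a filesystem storing nanosecond
timestamps)::

    import os

    path = "example.txt"   # any existing file
    st = os.stat(path)

    # Read the timestamps as nanoseconds (int): no precision is lost.
    atime_ns, mtime_ns = st.st_atime_ns, st.st_mtime_ns

    # Restore them exactly, again as nanoseconds.
    os.utime(path, ns=(atime_ns, mtime_ns))
    assert os.stat(path).st_mtime_ns == mtime_ns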


Changes
=======

New functions
-------------

This PEP adds five new functions to the ``time`` module:

* ``time.clock_gettime_ns(clock_id)``
* ``time.clock_settime_ns(clock_id, time: int)``
* ``time.perf_counter_ns()``
* ``time.monotonic_ns()``
* ``time.time_ns()``

These functions are similar to the versions without the ``_ns`` suffix,
but use nanoseconds as Python ``int``.

For example, ``time.monotonic_ns() == int(time.monotonic() * 1e9)`` if
the ``monotonic()`` value is small enough to not lose precision.
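
For example, the precision loss of the float flavor can be observed
directly (a sketch; the exact difference varies)::

    import time

    t_ns = time.time_ns()   # nanoseconds as a Python int
    t = t_ns / 10**9        # the same instant as a float, like time.time()

    # Near 1.5e9 seconds, the float grid is ~238 ns wide, so converting
    # the float back to nanoseconds does not restore the integer value:
    print(t_ns - int(t * 10**9))   # usually non-zero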

Unchanged functions
-------------------

This PEP only proposes to add new functions for getting or setting clocks
with nanosecond resolution. Clocks are likely to lose precision,
especially when their reference is the UNIX epoch.

Python has other functions handling time (get time, timeout, etc.), but
no nanosecond variant is proposed for them since they are less likely to
lose precision.

Examples of unchanged functions:

* ``os`` module: ``sched_rr_get_interval()``, ``times()``, ``wait3()``
  and ``wait4()``

* ``resource`` module: ``ru_utime`` and ``ru_stime`` fields of
  ``getrusage()``

* ``signal`` module: ``getitimer()``, ``setitimer()``

* ``time`` module: ``clock_getres()``

Since the ``time.clock()`` function was deprecated in Python 3.3, no
``time.clock_ns()`` is added.


Alternatives and discussion
===========================

Sub-nanosecond resolution
-------------------------

The ``time.time_ns()`` API is not "future-proof": if clock resolution
increases, new Python functions may be needed.

In practice, a resolution of 1 nanosecond is currently enough for all
structures used by operating system functions.

Hardware clocks with a resolution better than 1 nanosecond already
exist. For example, the frequency of a CPU TSC clock is the CPU base
frequency: the resolution is around 0.3 ns for a CPU running at 3
GHz. Users who have access to such hardware and really need
sub-nanosecond resolution can easily extend Python for their needs.
Such rare use cases don't justify designing the Python standard library
to support sub-nanosecond resolution.

For the CPython implementation, nanosecond resolution is convenient: the
standard and well supported ``int64_t`` type can be used to store time.
It supports a time delta between -292 years and +292 years. Using the
UNIX epoch as reference, this type supports time from year 1677 to year
2262::

    >>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    1677.728976954687
    >>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    2262.271023045313

Different types
---------------

It was proposed to modify ``time.time()`` to use a float type with better
precision. PEP 410 proposed to use ``decimal.Decimal``, but it was
rejected. Apart from ``decimal.Decimal``, no portable float type with
better precision is currently available in Python. Changing the builtin
Python ``float`` type is out of the scope of this PEP.

Other ideas of new types were proposed to support larger or arbitrary
precision: fractions, structures or 2-tuple using integers,
fixed-precision floating point number, etc.

See also the PEP 410 for a previous long discussion on other types.

Adding a new type requires more effort to support than reusing ``int``:
the standard library, third party code and applications would have to be
modified to support it.

The Python ``int`` type is well known, well supported, easy to
manipulate, and supports all arithmetic operations such as
``dt = t2 - t1``.
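
For example, measuring a duration needs nothing but integer subtraction
(a sketch)::

    import time

    t1 = time.perf_counter_ns()
    result = sum(range(10**6))   # any workload being measured
    t2 = time.perf_counter_ns()

    dt = t2 - t1                 # exact duration in nanoseconds
    print("took %d ns (%.3f ms)" % (dt, dt / 10**6))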

Moreover, using nanoseconds as integers is not new in Python: they are
already used by ``os.stat_result`` and
``os.utime(ns=(atime_ns, mtime_ns))``.

.. note::
   If the Python ``float`` type becomes larger (ex: decimal128 or
   float128), the ``time.time()`` precision will increase as well.

Different API
-------------

The ``time.time(ns=False)`` API was proposed to avoid adding new
functions. It's an uncommon (and bad?) programming practice in Python to
change the result type depending on a parameter.

Different options were proposed to allow the user to choose the time
resolution. If each Python module uses a different resolution, it can
become difficult to handle different resolutions, instead of just
seconds (``time.time()`` returning ``float``) and nanoseconds
(``time.time_ns()`` returning ``int``). Moreover, as written above,
there is no need for a resolution better than 1 nanosecond in practice in
the Python standard library.


Annex: Clocks Resolution in Python
==================================

Script to measure the smallest difference between two ``time.time()`` and
``time.time_ns()`` reads, ignoring differences of zero::

    import math
    import time

    LOOPS = 10 ** 6

    print("time.time_ns(): %s" % time.time_ns())
    print("time.time(): %s" % time.time())

    min_dt = [abs(time.time_ns() - time.time_ns())
              for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time_ns() delta: %s ns" % min_dt)

    min_dt = [abs(time.time() - time.time())
              for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))

Results for time(), perf_counter() and monotonic():

Linux (kernel 4.12 on Fedora 26):

* time_ns(): **84 ns**
* time(): **239 ns**
* perf_counter_ns(): 84 ns
* perf_counter(): 82 ns
* monotonic_ns(): 84 ns
* monotonic(): 81 ns

Windows 8.1:

* time_ns(): **318000 ns**
* time(): **894070 ns**
* perf_counter_ns(): 100 ns
* perf_counter(): 100 ns
* monotonic_ns(): 15000000 ns
* monotonic(): 15000000 ns

The difference on ``time.time()`` is significant: **84 ns (2.8x better)
vs 239 ns on Linux and 318 us (2.8x better) vs 894 us on Windows**. The
difference (precision loss) will grow in the coming years, since every
day adds 86,400,000,000,000 nanoseconds to the system clock.

The difference on ``time.perf_counter()`` and ``time.monotonic()``
is not visible in this quick script since the script runs for less than 1
minute, and the uptime of the computer used to run the script was
less than 1 week. A significant difference should be seen with an
uptime of 104 days or greater.

.. note::
   Internally, Python starts the ``monotonic()`` and ``perf_counter()``
   clocks at zero on some platforms, which indirectly reduces the
   precision loss.



Copyright
=========

This document has been placed in the public domain.
_______________________________________________
Python-Dev mailing list
Pytho...@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/dev-python%2Bgarchive-30976%40googlegroups.com

Victor Stinner

Oct 16, 2017, 9:52:23 AM
to Python Dev
I re-read the discussions on python-ideas and noticed that I forgot
to mention the "time_ns module" idea. I also added a section giving
concrete examples of the precision loss.

https://github.com/python/peps/commit/a4828def403913dbae7452b4f9b9d62a0c83a278

Issues caused by precision loss
-------------------------------

Example 1: measure time delta
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A server has been running for longer than 104 days. A clock is read
before and after running a function to measure its performance. This
benchmark loses precision only because of the float type used by clocks,
not because of the clock resolution.

On Python microbenchmarks, it is common to see function calls taking
less than 100 ns. A difference of a single nanosecond becomes
significant.
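
The loss can be reproduced without waiting 104 days, by working directly
on floats (a sketch)::

    uptime = 104 * 24 * 3600.0   # ~104 days of clock, as a float of seconds

    t2 = uptime + 1e-9           # an event 1 nanosecond later
    print(t2 - uptime)           # 1.862645149230957e-09: the float grid
                                 # here is ~1.86 ns wide, a ~86% error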

Example 2: compare time with different resolution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two programs "A" and "B" are runing on the same system, so use the system
block. The program A reads the system clock with nanosecond resolution
and writes the timestamp with nanosecond resolution. The program B reads
the timestamp with nanosecond resolution, but compares it to the system
clock read with a worse resolution. To simplify the example, let's say
that it reads the clock with second resolution. If that case, there is a
window of 1 second while the program B can see the timestamp written by A
as "in the future".

Nowadays, more and more databases and filesystems support storing time
with nanosecond resolution.
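
A sketch of the scenario above, with both "programs" in a single process
for simplicity::

    import time

    timestamp_ns = time.time_ns()   # program A: write with ns resolution

    now = int(time.time())          # program B: read with 1 s resolution
    if timestamp_ns > now * 10**9:
        # True for the rest of the current second: B sees a timestamp
        # "in the future" although A wrote it in the past.
        print("timestamp is in the future!")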

.. note::
   This issue was already fixed for file modification time by adding the
   ``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting
   nanoseconds in ``os.utime()``. This PEP proposes to generalize the
   fix.

(...)

Modify time.time() result type
------------------------------

It was proposed to modify ``time.time()`` to return a different float
type with better precision.

The PEP 410 proposed to use ``decimal.Decimal``, which already exists and
supports arbitrary precision, but it was rejected. Apart from
``decimal.Decimal``, no portable float type with better precision is
currently available in Python.

Changing the builtin Python ``float`` type is out of the scope of this
PEP.

Moreover, changing existing functions to return a new type introduces a
risk of breaking backward compatibility, even if the new type is
designed carefully.

(...)

New time_ns module
------------------

Add a new ``time_ns`` module which contains the five new functions:

* ``time_ns.clock_gettime(clock_id)``
* ``time_ns.clock_settime(clock_id, time: int)``
* ``time_ns.perf_counter()``
* ``time_ns.monotonic()``
* ``time_ns.time()``

The first question is whether the ``time_ns`` module should expose
exactly the same API (constants, functions, etc.) as the ``time``
module. It can be painful to maintain two flavors of the ``time``
module. How are users supposed to choose between these two modules?

If tomorrow other nanosecond variants are needed in the ``os`` module,
will we have to add a new ``os_ns`` module as well? There are functions
related to time in many modules: ``time``, ``os``, ``signal``,
``resource``, ``select``, etc.

Another idea is to add a ``time.ns`` submodule or a nested namespace to
get the ``time.ns.time()`` syntax.
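
A rough sketch of what the nested namespace could look like
(hypothetical, not proposed by this PEP)::

    import time
    import types

    # Hypothetical "time.ns" namespace mapping the regular names to the
    # _ns flavors, so that ns.time() means time.time_ns(), etc.
    ns = types.SimpleNamespace(
        time=time.time_ns,
        monotonic=time.monotonic_ns,
        perf_counter=time.perf_counter_ns,
    )

    print(ns.time())   # same result as time.time_ns()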

Victor

Antoine Pitrou

Oct 16, 2017, 11:08:53 AM
to pytho...@python.org

Hi,

On Mon, 16 Oct 2017 12:42:30 +0200
Victor Stinner <victor....@gmail.com> wrote:
>
> ``time.time()`` returns seconds elapsed since the UNIX epoch: January
> 1st, 1970. This function loses precision since May 1970 (47 years ago)::

This is a funny sentence. I doubt computers (Unix or not) had
nanosecond clocks in May 1970.

> This PEP adds five new functions to the ``time`` module:
>
> * ``time.clock_gettime_ns(clock_id)``
> * ``time.clock_settime_ns(clock_id, time: int)``
> * ``time.perf_counter_ns()``
> * ``time.monotonic_ns()``
> * ``time.time_ns()``

Why not ``time.process_time_ns()``?

> Hardware clock with a resolution better than 1 nanosecond already
> exists. For example, the frequency of a CPU TSC clock is the CPU base
> frequency: the resolution is around 0.3 ns for a CPU running at 3
> GHz. Users who have access to such hardware and really need
> sub-nanosecond resolution can easyly extend Python for their needs.

Typo: easily. But how easy is it?

> Such rare use case don't justify to design the Python standard library
> to support sub-nanosecond resolution.

I suspect that assertion will be challenged at some point :-)
Though I agree with the ease of implementation argument (about int64_t
being wide enough for nanoseconds but not picoseconds).

Regards

Antoine.

Victor Stinner

Oct 16, 2017, 11:25:25 AM
to Antoine Pitrou, Python Dev
2017-10-16 17:06 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
>> This PEP adds five new functions to the ``time`` module:
>>
>> * ``time.clock_gettime_ns(clock_id)``
>> * ``time.clock_settime_ns(clock_id, time: int)``
>> * ``time.perf_counter_ns()``
>> * ``time.monotonic_ns()``
>> * ``time.time_ns()``
>
> Why not ``time.process_time_ns()``?

I only wrote my first email on python-ideas to ask this question, but
I got no answer to it, only proposals of other solutions to get time
with nanosecond resolution. So I picked the simplest option: start
simple, only add new clocks, and maybe add more "_ns" functions later.

If we add process_time_ns(), should we also add nanosecond resolution
to other functions related to process or CPU time?

* Add "ru_utime_ns" and "ru_stime_ns" to the resource.struct_rusage
used by os.wait3(), os.wait4() and resource.getrusage()

* For os.times(): add os.times_ns()? For this one, I prefer to add a
new function rather than duplicating *all* fields of os.times_result,
since all fields store durations

Victor

Ben Hoyt

Oct 16, 2017, 11:39:29 AM
to Victor Stinner, Python Dev
I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but would want to see real "experience reports" of when this is needed.

-Ben

Antoine Pitrou

Oct 16, 2017, 11:59:46 AM
to pytho...@python.org
On Mon, 16 Oct 2017 17:23:15 +0200
Victor Stinner <victor....@gmail.com> wrote:
> 2017-10-16 17:06 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
> >> This PEP adds five new functions to the ``time`` module:
> >>
> >> * ``time.clock_gettime_ns(clock_id)``
> >> * ``time.clock_settime_ns(clock_id, time: int)``
> >> * ``time.perf_counter_ns()``
> >> * ``time.monotonic_ns()``
> >> * ``time.time_ns()``
> >
> > Why not ``time.process_time_ns()``?
>
> I only wrote my first email on python-ideas to ask this question, but
> I got no answer on this question, only proposal of other solutions to
> get time with nanosecond resolution. So I picked the simplest option:
> start simple, only add new clocks, and maybe add more "_ns" functions
> later.
>
> If we add process_time_ns(), should we also add nanosecond resolution
> to other functions related to process or CPU time?

Restricting this PEP to the time module would be fine with me.

Regards

Antoine.

Guido van Rossum

Oct 16, 2017, 12:02:53 PM
to Ben Hoyt, Python Dev
On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt <ben...@gmail.com> wrote:
I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but would want to see real "experience reports" of when this is needed.

A long-running server might still want to log precise *durations* of various events. (Durations of events are the bread and butter of server performance tuning.) And for this it might want to use the most precise clock available, which is perf_counter(). But if perf_counter()'s epoch is the start of the process, after 104 days it can no longer report ns precision due to float rounding (even though the internal counter does not lose ns).
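
This is easy to demonstrate with the internal integer counter (a sketch):

    # ~104 days after the process started, expressed in nanoseconds:
    ticks_ns = 104 * 24 * 3600 * 10**9

    # Convert two adjacent nanosecond ticks to float seconds, as a
    # float-returning perf_counter() would:
    t1 = (ticks_ns + 1) / 10**9
    t2 = (ticks_ns + 2) / 10**9
    print(t1 == t2)  # True: distinct nanosecond ticks collapse to the
                     # same float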
 
--
--Guido van Rossum (python.org/~guido)

Victor Stinner

Oct 16, 2017, 12:13:36 PM
to Ben Hoyt, Python Dev
2017-10-16 17:37 GMT+02:00 Ben Hoyt <ben...@gmail.com>:
> I've read the examples you wrote here, but I'm struggling to see what the
> real-life use cases are for this. When would you care about *both* very
> long-running servers (104 days+) and nanosecond precision? I'm not saying it
> could never happen, but would want to see real "experience reports" of when
> this is needed.

The second example doesn't depend on the system uptime nor on how long
the program has been running. You can hit the issue just after the
system finishes booting:

"Example 2: compare time with different resolution"
https://www.python.org/dev/peps/pep-0564/#example-2-compare-time-with-different-resolution

Victor

Victor Stinner

Oct 16, 2017, 12:16:36 PM
to Antoine Pitrou, Python Dev
2017-10-16 17:42 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
> Restricting this PEP to the time module would be fine with me.

Maybe I should add a short sentence to keep the question open, but
exclude it from the direct scope of the PEP? For example:

"New nanosecond flavor of these functions may be added later, if a
concrete use case comes in."

What do you think?

Victor

Ben Hoyt

Oct 16, 2017, 12:16:39 PM
to Guido van Rossum, Python Dev
Got it -- fair enough.

We deploy so often where I work (a couple of times a week at least) that 104 days seems like an eternity. But I can see where for a very stable file server or something you might well run it that long without deploying. Then again, why are you doing performance tuning on a "very stable server"?

-Ben

Antoine Pitrou

Oct 16, 2017, 12:34:46 PM
to pytho...@python.org
On Mon, 16 Oct 2017 18:06:06 +0200
Victor Stinner <victor....@gmail.com> wrote:
> 2017-10-16 17:42 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
> > Restricting this PEP to the time module would be fine with me.
>
> Maybe I should add a short sentence to keep the question open, but
> exclude it from the direct scope of the PEP? For example:
>
> "New nanosecond flavor of these functions may be added later, if a
> concrete use case comes in."
>
> What do you think?

It sounds fine to me!

Regards

Antoine.

Victor Stinner

Oct 16, 2017, 12:37:45 PM
to Ben Hoyt, Python Dev
2017-10-16 18:14 GMT+02:00 Ben Hoyt <ben...@gmail.com>:
> Got it -- fair enough.
>
> We deploy so often where I work (a couple of times a week at least) that 104
> days seems like an eternity. But I can see where for a very stable file
> server or something you might well run it that long without deploying. Then
> again, why are you doing performance tuning on a "very stable server"?

I'm not sure what you mean by "performance *tuning*". My idea in
the example is more to collect live performance metrics to make sure
that everything is fine on your "very stable server". Send these
metrics to your favorite time series database like Gnocchi, Graphite,
Grafana or whatever.

Victor

Victor Stinner

Oct 16, 2017, 12:55:29 PM
to Antoine Pitrou, Python Dev
2017-10-16 18:28 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
>> What do you think?
>
> It sounds fine to me!

Ok fine, I updated the PEP. Let's start simple with the few functions
(5 "clock" functions) which are "obviously" impacted by the precision
loss.

Victor

Ben Hoyt

Oct 16, 2017, 12:57:55 PM
to Victor Stinner, Python Dev
Makes sense, thanks. -Ben

Antoine Pitrou

Oct 16, 2017, 1:02:32 PM
to pytho...@python.org
On Mon, 16 Oct 2017 18:53:18 +0200
Victor Stinner <victor....@gmail.com> wrote:

> 2017-10-16 18:28 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
> >> What do you think?
> >
> > It sounds fine to me!
>
> Ok fine, I updated the PEP. Let's start simple with the few functions
> (5 "clock" functions) which are "obviously" impacted by the precission
> loss.

It should be 6 functions, right?

Victor Stinner

Oct 16, 2017, 1:22:37 PM
to Antoine Pitrou, Python Dev
Oh, now I'm confused. I misunderstood your previous message. I understood that you changed your mind and didn't want to add process_time_ns().

Can you elaborate why you consider that time.process_time_ns() is needed, but not the nanosecond flavor of os.times() nor resource.getrusage()? These functions use the same or similar clock, no?

Depending on platform, time.process_time() may be implemented with resource.getrusage(), os.times() or something else.

Victor

Antoine Pitrou

Oct 16, 2017, 1:26:18 PM
to pytho...@python.org
On Mon, 16 Oct 2017 19:20:44 +0200
Victor Stinner <victor....@gmail.com> wrote:
> Oh, now I'm confused. I misunderstood your previous message. I understood
> that you changed your mind and didn't want to add process_time_ns().
>
> Can you elaborate why you consider that time.process_time_ns() is needed,
> but not the nanosecond flavor of os.times() nor resource.getrusage()? These
> functions use the same or similar clock, no?

I didn't say they weren't needed, I said that we could restrict
ourselves to the time module for the time being if it makes things
easier.

But if you want to tackle all of them at once, go for it! :-)

Regards

Antoine.

Victor Stinner

Oct 17, 2017, 9:13:25 AM
to Python Dev
> Since the ``time.clock()`` function was deprecated in Python 3.3, no
> ``time.clock_ns()`` is added.

FYI I just proposed a change to *remove* time.clock() from Python 3.7:
https://bugs.python.org/issue31803

This change is not required by, nor directly related to, the PEP 564.

Victor

Victor Stinner

Oct 17, 2017, 6:07:32 PM
to Antoine Pitrou, Python Dev
Antoine Pitrou:
> Why not ``time.process_time_ns()``?

I measured the minimum delta between two clock reads, ignoring zeros.
I tested time.process_time(), os.times(), resource.getrusage(), and
their nanosecond variants (with my WIP implementation of the PEP 564).

Linux:

* process_time_ns(): 1 ns
* process_time(): 2 ns
* resource.getrusage(): 1 us
  (the rusage structure uses timeval, so it makes sense)
* clock(): 1 us
  (CLOCKS_PER_SEC == 1,000,000 => res = 1 us)
* times_ns().elapsed, times().elapsed: 10 ms
  (os.sysconf("SC_CLK_TCK") == HZ == 100 => res = 10 ms)
* times_ns().user, times().user: 10 ms
  (os.sysconf("SC_CLK_TCK") == HZ == 100 => res = 10 ms)

Windows:

* process_time(), process_time_ns(): 15.6 ms
* os.times().user, os.times_ns().user: 15.6 ms

Note: I didn't test os.wait3() and os.wait4(), but they also use the
rusage structure and so probably also have a resolution of 1 us.
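
The measurement loop is the same as in the PEP's annex; e.g. for
process_time() (a sketch):

    import time

    LOOPS = 10 ** 6

    # Smallest non-zero difference between two consecutive reads:
    deltas = [abs(time.process_time() - time.process_time())
              for _ in range(LOOPS)]
    min_dt = min(filter(bool, deltas))
    print("min process_time() delta: %.9f s" % min_dt)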

It looks like *currently*, only time.process_time() has a resolution
in nanoseconds (smaller than 1 us). I propose to only add
time.process_time_ns(), as you proposed.

We might add nanosecond variants for the other functions once operating
systems add new functions with better resolution.

Victor

Victor Stinner

Oct 17, 2017, 7:16:49 PM
to Antoine Pitrou, Python Dev
I updated my PEP 564 to add time.process_time_ns():
https://github.com/python/peps/blob/master/pep-0564.rst

The HTML version should be updated shortly:
https://www.python.org/dev/peps/pep-0564/

I explained better why some functions get a new nanosecond variant
whereas others don't: the rationale is that the precision loss affects
only a few functions in practice.

I completed the "Annex: Clocks Resolution in Python" with more
numbers, again, to explain why some functions don't need a nanosecond
variant.

Thanks Antoine, the PEP now looks better to me :-)

Victor

francismb

Oct 21, 2017, 7:41:43 AM
to pytho...@python.org
Hi Victor,

On 10/18/2017 01:14 AM, Victor Stinner wrote:
> I updated my PEP 564 to add time.process_time_ns():
> https://github.com/python/peps/blob/master/pep-0564.rst
>
> The HTML version should be updated shortly:
> https://www.python.org/dev/peps/pep-0564/

** In practive, the resolution of 1 nanosecond **

** no need for resolution better than 1 nanosecond in practive in the
Python standard library.**

practive vs. practice



If I understood you correctly on python-ideas (here just for the
record, otherwise please ignore it):

why not something like (please change '_in' to whatever you like):

time.time_in(precision)
time.monotonic_in(precision)


where precision is an enumeration for: 'seconds', 'milliseconds',
'microseconds'... (or 's', 'ms', 'us', 'ns', ...)
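
A hypothetical sketch of such a helper, built on top of time_ns():

    import time

    _DIVISOR = {'s': 10**9, 'ms': 10**6, 'us': 10**3, 'ns': 1}

    def time_in(unit='s'):
        """Current time as an int in the requested unit (hypothetical)."""
        return time.time_ns() // _DIVISOR[unit]

    print(time_in('ms'))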


Thanks,
--francis

Guido van Rossum

Oct 21, 2017, 11:48:01 AM
to fran...@email.de, Python-Dev
That sounds like unnecessary generality, and also suggests that the API might support precisions way beyond what is realistic.

francismb

Oct 21, 2017, 2:31:05 PM
to gu...@python.org, Python-Dev
If it sounds like there is no need or it is unnecessary to you, then
it's ok :-), thank you for the feedback! I'm just curious about:

On 10/21/2017 05:45 PM, Guido van Rossum wrote:
> That sounds like unnecessary generality,
Meaning that selecting the precision at run time 'costs'?

I understand that one can just multiply/divide the nanoseconds returned,
(or it could be a factory) but wouldn't it help for future enhancements
to reduce the number of functions (the 'pico' question)?

> and also suggests that the API
> might support precisions way beyond what is realistic.
Doesn't that depend on the offered/supported enums (in that case down
to 'ns', as Victor proposed)?

Thanks,
--francis

Victor Stinner

Oct 21, 2017, 7:34:29 PM
to fran...@email.de, Python Dev
On 21 Oct 2017 at 20:31, "francismb" <fran...@email.de> wrote:
I understand that one can just multiply/divide the nanoseconds returned,
(or it could be a factory) but wouldn't it help for future enhancements
to reduce the number of functions (the 'pico' question)?

If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-)

Did you hear that Moore's law has not held since 2012 (Intel says since 2015)? Since 2002, CPU frequencies have been stuck around 3 GHz. Overclocking records are around 8 GHz, with very specialized hardware not usable for a regular PC.

I don't want to overengineer an API "just in case". Let's provide nanoseconds. We can discuss picoseconds later, maybe in 10 years?

You can now start to bet whether decimal128 will come before or after picoseconds in mainstream CPUs :-)

By the way, we are talking about a resolution of 1 ns, but remember that a Python function call is closer to 50 ns. I am not sure that picoseconds make sense if CPUs don't become much faster.

I am too shy to put such predictions in a very official PEP ;-)

Victor

Nick Coghlan

Oct 21, 2017, 11:03:39 PM
to Victor Stinner, Python Dev
On 22 October 2017 at 09:32, Victor Stinner <victor....@gmail.com> wrote:
On 21 Oct 2017 at 20:31, "francismb" <fran...@email.de> wrote:
I understand that one can just multiply/divide the nanoseconds returned,
(or it could be a factory) but wouldn't it help for future enhancements
to reduce the number of functions (the 'pico' question)?

If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-)

There are actually solid physical reasons for that prediction likely being true. Aside from the power consumption, heat dissipation, and EM radiation issues that arise with higher switching frequencies, you also start running into more problems with digital circuit metastability ([1], [2]): the more clock edges you have per second, the higher the chances of an asynchronous input changing state at a bad time.

So yeah, for nanosecond resolution to not be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :)

Cheers,
Nick.

[3] https://medium.com/@decodoku/how-to-program-a-quantum-computer-982a9329ed02


--
Nick Coghlan   |   ncog...@gmail.com   |   Brisbane, Australia

Antoine Pitrou

Oct 22, 2017, 5:42:30 AM
to pytho...@python.org

Hi Victor,

I made some small fixes to the PEP.

As far as I'm concerned, the PEP is ok and should be approved :-)

Regards

Antoine.

Wes Turner

Oct 22, 2017, 11:08:55 AM
to Nick Coghlan, Python Dev


On Saturday, October 21, 2017, Nick Coghlan <ncog...@gmail.com> wrote:
On 22 October 2017 at 09:32, Victor Stinner <victor....@gmail.com> wrote:
On 21 Oct 2017 at 20:31, "francismb" <fran...@email.de> wrote:
I understand that one can just multiply/divide the nanoseconds returned,
(or it could be a factory) but wouldn't it help for future enhancements
to reduce the number of functions (the 'pico' question)?

If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-)

There are actually solid physical reasons for that prediction likely being true. Aside from the power consumption, heat dissipation, and EM radiation issues that arise with higher switching frequencies, you also start running into more problems with digital circuit metastability ([1], [2]): the more clock edges you have per second, the higher the chances of an asynchronous input changing state at a bad time.

So yeah, for nanosecond resolution to not be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :)

There are current applications with finer-than-nanosecond precision:

- relativity experiments
- particle experiments

Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?!

- https://scholar.google.com/scholar?q=femtosecond
- https://scholar.google.com/scholar?q=attosecond
- GPS now supports nanosecond resolution
- https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clocks

> In 2015 JILA evaluated the absolute frequency uncertainty of their latest strontium-87 optical lattice clock at 2.1 × 10⁻¹⁸, which corresponds to a measurable gravitational time dilation for an elevation change of 2 cm (0.79 in)

What about bus latency (and variance)?

From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfer-over-free-space :

> Optical two-way time and frequency transfer over free space
> Abstract
> The transfer of high-quality time-frequency signals between remote locations underpins many applications, including precision navigation and timing, clock-based geodesy, long-baseline interferometry, coherent radar arrays, tests of general relativity and fundamental constants, and future redefinition of the second. However, present microwave-based time-frequency transfer is inadequate for state-of-the-art optical clocks and oscillators that have femtosecond-level timing jitter and accuracies below 1 × 10⁻¹⁷. Commensurate optically based transfer methods are therefore needed. Here we demonstrate optical time-frequency transfer over free space via two-way exchange between coherent frequency combs, each phase-locked to the local optical oscillator. We achieve 1 fs timing deviation, residual instability below 1 × 10⁻¹⁸ at 1,000 s and systematic offsets below 4 × 10⁻¹⁹, despite frequent signal fading due to atmospheric turbulence or obstructions across the 2 km link. This free-space transfer can enable terrestrial links to support clock-based geodesy. Combined with satellite-based optical communications, it provides a path towards global-scale geodesy, high-accuracy time-frequency distribution and satellite-based relativity experiments.

How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies?

- 10⁻⁶ micro (µ)
- 10⁻⁹ nano (n) -- int64
- 10⁻¹² pico (p)
- 10⁻¹⁵ femto (f)
- 10⁻¹⁸ atto (a)
- 10⁻²¹ zepto (z)
- 10⁻²⁴ yocto (y)

I'm at a loss to recommend a library to prefix these with the epoch; but future compatibility may be a helpful, realistic objective.

Natural keys with such time resolution are still unfortunately likely to collide.

Chris Angelico

Oct 22, 2017, 11:27:34 AM
to Python Dev
On Mon, Oct 23, 2017 at 2:06 AM, Wes Turner <wes.t...@gmail.com> wrote:
> What about bus latency (and variance)?

I'm currently in Los Angeles. Bus latency is measured in minutes, and
may easily exceed sixty of them. :|

Seriously though: For applications requiring accurate representation
of relativistic effects, the stdlib datetime module has a good few
problems besides lacking sub-nanosecond precision. I'd be inclined to
YAGNI this away unless/until some third-party module demonstrates that
there's actually a use for a datetime module that can handle all that.

ChrisA

Nick Coghlan

Oct 22, 2017, 11:38:41 AM
to Wes Turner, Python Dev
On 23 October 2017 at 01:06, Wes Turner <wes.t...@gmail.com> wrote:
On Saturday, October 21, 2017, Nick Coghlan <ncog...@gmail.com> wrote:
So yeah, for nanosecond resolution to not be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :)

There are current applications with finer-than-nanosecond precision:

- relativity experiments
- particle experiments

Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?!

Yes, as time is a critical part of their experimental setup - when you're operating at relativistic speeds and the kinds of energy levels that particle accelerators hit, it's a bad idea to assume that regular time libraries that assume Newtonian physics applies are going to be up to the task.

Normal software assumes a nanosecond is almost no time at all - in high energy particle physics, a nanosecond is enough time for light to travel 30 centimetres, and a high energy particle that stuck around that long before decaying into a lower energy state would be classified as "long lived".

Cheers.
Nick.

P.S. "Don't take code out of the environment it was designed for and assume it will just keep working normally" is one of the main lessons folks learned from the destruction of the first Ariane 5 launch rocket in 1996 (see the first paragraph in https://en.wikipedia.org/wiki/Ariane_5#Notable_launches )

David Mertz

Oct 22, 2017, 1:32:58 PM
to Wes Turner, Nick Coghlan, Python-Dev
I worked at a molecular dynamics lab for a number of years. I advocated switching all our code to using attosecond units (rather than fractional picoseconds). 

However, this had nothing whatsoever to do with the machine clock speeds, but only with the physical quantities represented and the scaling/rounding math.

It didn't happen, for various reasons. But if it had, I certainly wouldn't have expected standard library support for this. The 'time' module is about wall clock or calendar time, not about *simulation time*.

FWIW, a very long simulation might cover a millisecond of simulated time.... we're a very long way from looking at molecular behavior over 104 days.


Wes Turner

Oct 22, 2017, 4:44:49 PM
to David Mertz, Nick Coghlan, Python-Dev


On Sunday, October 22, 2017, David Mertz <me...@gnosis.cx> wrote:
I worked at a molecular dynamics lab for a number of years. I advocated switching all our code to using attosecond units (rather than fractional picoseconds). 

However, this had nothing whatsoever to do with the machine clock speeds, but only with the physical quantities represented and the scaling/rounding math.

It didn't happen, for various reasons. But if it had, I certainly wouldn't have expected standard library support for this. The 'time' module is about wall clock or calendar time, not about *simulation time*.

FWIW, a very long simulation might cover a millisecond of simulated time.... we're a very long way from looking at molecular behavior over 104 days.

Maybe that's why we haven't found any CTCs (closed timelike curves) yet.

Aligning simulation data in context to other events may be enlightening: is there a good library for handling high precision time units in Python (and/or CFFI)?

... 


Victor Stinner

Oct 22, 2017, 7:56:34 PM
to Wes Turner, Nick Coghlan, Python Dev
On 22 Oct 2017 at 17:06, "Wes Turner" <wes.t...@gmail.com> wrote:
Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?!

Yes, exactly.

Note: Adding resolution better than 1 us to datetime is not in the scope of the PEP, but there is an issue that has been open for a long time.

I don't think that time.time_ns() is usable for such experiments. Again, calling a function in Python takes around 50 ns.

Victor

Chris Barker

Oct 23, 2017, 7:02:02 PM
to Wes Turner, Nick Coghlan, Python-Dev
On Sun, Oct 22, 2017 at 1:42 PM, Wes Turner <wes.t...@gmail.com> wrote:
Aligning simulation data in context to other events may be enlightening: is there a good library for handing high precision time units in Python (and/or CFFI)?

Well, numpy's datetime64 can be set to use (almost) whatever unit you want:

Though it uses a single epoch, which I don't think ever made sense with femtoseconds....
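
For example (a sketch; numpy supports units down to attoseconds, but
always relative to the 1970 epoch):

    import numpy as np

    # At femtosecond resolution, the int64 behind datetime64 only spans
    # about +/- 2.6 hours around the epoch (2**63 fs ~ 9223 s).
    t0 = np.datetime64('1970-01-01T00:00:00', 'fs')
    t1 = t0 + np.timedelta64(543600, 'fs')   # 543.6 ps later
    print(t1 - t0)                           # 543600 femtoseconds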

And it has other problems, but it was designed that way, just for that reason.

However, while there has been discussion of improvements, like making the epoch settable, none of them have happened, which makes me think that no one is using it for physics experiments, but rather plain old human calendar time...

-CHB

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris....@noaa.gov

Thomas Jollans

Oct 23, 2017, 7:39:21 PM
to pytho...@python.org
On 22/10/17 17:06, Wes Turner wrote:
> There are current applications with finer-than-nanosecond precision:
>
> - relativity experiments
> - particle experiments
>
> Must they always use their own implementations of time., datetime.
> __init__, fromordinal, fromtimestamp ?!
>
> - https://scholar.google.com/scholar?q=femtosecond
> - https://scholar.google.com/scholar?q=attosecond
> - GPS now supports nanosecond resolution
> -

Sure, but in these kinds of experiments you don't have a "timestamp" in
the usual sense.

You'll have some kind of high-precision "clock", but in most cases
there's no way and no reason to synchronise this to wall time. You end
up distinguishing between "macro-time" (wall time) and "micro-time"
(time in the experiment relative to something).

In a particle accelerator, you care about measuring relative times of
almost-simultaneous detection events with extremely high precision.
You'll also presumably have a timestamp for the event, but you won't be
able or willing to measure that with anything like the same accuracy.

While you might be able to say that you detected, say, a muon at
01:23:45.6789 at Δt=543.6ps*: you have femtosecond resolution and you
have a timestamp, but you don't have a femtosecond timestamp.

In ultrafast spectroscopy, we get a time resolution equal to the
duration of the laser pulses (fs-ps), but all the micro-times measured
will be relative to some reference laser pulse, which repeats at >MHz
frequencies. We also integrate over millions of events - wall-time
timestamps don't enter into it.

In summary, yes, when writing software for experiments working with high
time resolution you have to write your own implementations of whatever
data formats best describe time as you're measuring it, which generally
won't line up with time as a PC (or a railway company) looks at it.

Cheers
Thomas


* The example is implausible not least because I understand muon
chambers tend to be a fair bit bigger than 15cm, but you get my point.

Wes Turner

Oct 23, 2017, 10:20:40 PM
to Thomas Jollans, pytho...@python.org
(Sorry, maybe too OT)

So these experiments are all done in isolation, referenced to t=0.

Aligning simulation data in context to other events may be enlightening:


IIUC,
https://en.wikipedia.org/wiki/Quantum_mechanics_of_time_travel implies that there are (or may be) connections between events over greater periods of time.

It's unfortunate that aligning this data requires adding offsets and working with nonstandard, ad hoc time structs.
 
A problem for another day, I suppose.

Thanks for adding time_ns().



Victor Stinner

Oct 24, 2017, 3:02:44 AM
to Thomas Jollans, Python Dev
Thanks Thomas, it was interesting! You confirmed that time.time_ns() and other system clocks exposed by Python are inappropriate for sub-nanosecond physical experiments.

By the way, you mentioned that clocks are not synchronized. That's another relevant point. Even if system clocks are synchronized on a single computer, I read that you cannot reach nanosecond resolution for NTP synchronization even in a small LAN.

For large systems or distributed systems, a "global (synchronized) clock" is not an option. You cannot synchronize clocks correctly, so your algorithms must not rely on time, or at least not too precise resolution.

I am saying that to repeat, again, that we are far from nanosecond resolution for synchronized system clocks.

Victor

Antoine Pitrou

Oct 24, 2017, 5:24:48 AM
to pytho...@python.org
On Tue, 24 Oct 2017 09:00:45 +0200
Victor Stinner <victor....@gmail.com> wrote:
> By the way, you mentioned that clocks are not synchronized. That's another
> relevant point. Even if system clocks are synchronized on a single
> computer, I read that you cannot reach nanosecond resolution for NTP
> synchronization even in a small LAN.
>
> For large systems or distributed systems, a "global (synchronized) clock"
> is not an option. You cannot synchronize clocks correctly, so your
> algorithms must not rely on time, or at least not too precise resolution.
>
> I am saying that to repeat, again, that we are far from nanosecond
> resolution for synchronized system clocks.

What does synchronization have to do with it? If synchronization
matters, then your PEP should be rejected, because current computers
using NTP can't synchronize with a better precision than 230 ns.

See https://blog.cloudflare.com/how-to-achieve-low-latency/

Regards

Antoine.



Wes Turner

Oct 24, 2017, 6:38:16 AM
to Antoine Pitrou, pytho...@python.org


On Tuesday, October 24, 2017, Antoine Pitrou <soli...@pitrou.net> wrote:
On Tue, 24 Oct 2017 09:00:45 +0200
Victor Stinner <victor....@gmail.com> wrote:
> By the way, you mentioned that clocks are not synchronized. That's another
> relevant point. Even if system clocks are synchronized on a single
> computer, I read that you cannot reach nanosecond resolution for NTP
> synchronization even in a small LAN.
>
> For large systems or distributed systems, a "global (synchronized) clock"
> is not an option. You cannot synchronize clocks correctly, so your
> algorithms must not rely on time, or at least not too precise resolution.
>
> I am saying that to repeat, again, that we are far from nanosecond
> resolution for synchronized system clocks.

What does synchronization have to do with it?  If synchronization
matters, then your PEP should be rejected, because current computers
using NTP can't synchronize with a better precision than 230 ns.

From https://en.wikipedia.org/wiki/Virtual_black_hole :

> In the derivation of his equations, Einstein suggested that physical space-time is Riemannian, ie curved. A small domain of it is approximately flat space-time.


From https://en.wikipedia.org/wiki/Quantum_foam :

> Based on the uncertainty principles of quantum mechanics and the general theory of relativity, there is no reason that spacetime needs to be fundamentally smooth. Instead, in a quantum theory of gravity, spacetime would consist of many small, ever-changing regions in which space and time are not definite, but fluctuate in a foam-like manner.

So, in regards to time synchronization, FWIU:

- WWVB "can provide time with an accuracy of about 100 microseconds"

- GPS time can synchronize down to "tens of nanoseconds"

- Blockchains work around local timestamp issues by "enforcing" linearity

 

See https://blog.cloudflare.com/how-to-achieve-low-latency/ 

Regards

Antoine.



Victor Stinner

Oct 24, 2017, 7:23:17 AM
to Antoine Pitrou, Python Dev
2017-10-24 11:22 GMT+02:00 Antoine Pitrou <soli...@pitrou.net>:
> What does synchronization have to do with it? If synchronization
> matters, then your PEP should be rejected, because current computers
> using NTP can't synchronize with a better precision than 230 ns.

Currently, the PEP 564 is mostly designed for handling time on the
same computer. Better resolution inside the same process, and
"synchronization" between two processes running on the same host:
https://www.python.org/dev/peps/pep-0564/#issues-caused-by-precision-loss

Maybe tomorrow, time.time_ns() will help for use cases with more computers :-)


> See https://blog.cloudflare.com/how-to-achieve-low-latency/

This article doesn't mention NTP, synchronization or nanoseconds.
Where did you see "230 ns" for NTP?

Victor

Antoine Pitrou

Oct 24, 2017, 7:27:22 AM
to Python Dev

On 24/10/2017 at 13:20, Victor Stinner wrote:
>> See https://blog.cloudflare.com/how-to-achieve-low-latency/
>
> This article doesn't mention NTP, synchronization or nanoseconds.

NTP is layered over UDP. The article shows base case UDP latencies of
around 15µs over 10Gbps Ethernet.

Regards

Antoine.

Victor Stinner

Oct 24, 2017, 11:37:17 AM
to Antoine Pitrou, Python Dev
Warning: the PEP 564 doesn't make any assumptions about clock
synchronization. My intent is only to expose what the operating
system provides without losing precision. That's all :-)

2017-10-24 13:25 GMT+02:00 Antoine Pitrou <ant...@python.org>:
> NTP is layered over UDP. The article shows base case UDP latencies of
> around 15µs over 10Gbps Ethernet.

Ah ok.

IMHO the discussion became off-topic somewhere, but I'm curious, so I
searched for the best NTP accuracy and found:

https://blog.meinbergglobal.com/2013/11/22/ntp-vs-ptp-network-timing-smackdown/

"Is the accuracy you need measured in microseconds or nanoseconds? If
the answer is yes, you want PTP (IEEE 1588). If the answer is in
milliseconds or seconds, then you want NTP."

"There is even ongoing standards work to use technology developed at
CERN (...) to extend PTP to picoseconds."

It seems like PTP is more accurate than NTP.

Victor

francismb

Oct 28, 2017, 5:00:00 AM
to pytho...@python.org
Hi David,

On 10/22/2017 07:30 PM, David Mertz wrote:
> The 'time' module is about
> wall clock or calendar time, not about *simulation time*.

Does that mean the other scale direction makes more sense for the
module? aka get_time('us'), get_time('ms'), get_time('s')

Thanks,
--francis

Guido van Rossum

Oct 30, 2017, 1:20:52 PM
to Victor Stinner, Python-Dev
I have read PEP 564 and (mostly) followed the discussion in this thread, and I am happy with the PEP. I am hereby approving PEP 564. Congratulations Victor!

Ethan Furman

Oct 30, 2017, 6:53:02 PM
to pytho...@python.org
On 10/30/2017 10:18 AM, Guido van Rossum wrote:

> I have read PEP 564 and (mostly) followed the discussion in this thread, and I am happy with the PEP. I am hereby
> approving PEP 564. Congratulations Victor!

Congrats, Victor!

Victor Stinner

Nov 2, 2017, 11:18:44 AM
to Guido van Rossum, Python-Dev
Thank you Guido for your review and approval.

I just implemented the PEP 564 and so changed the PEP status to Final.

FYI I also added 3 new clock identifiers to the time module in Python
3.7: CLOCK_BOOTTIME, CLOCK_PROF and CLOCK_UPTIME.

So you can now get your Linux uptime with a resolution of 1 nanosecond :-D

haypo@selma$ ./python -c 'import time;
print(time.clock_gettime_ns(time.CLOCK_BOOTTIME))'
232172588663888

Don't do that at home, it's for educational purposes only! ;-)

Victor

Guido van Rossum

Nov 2, 2017, 10:22:07 PM
to Victor Stinner, Python-Dev
Yay! Record time from acceptance to implementation. :-)