
newlib and time()


pozz

Sep 30, 2022, 2:29:31 AM
I often use the newlib standard C library with the gcc toolchain for Cortex-M
platforms. It sometimes happens that I need to manage calendar time: seconds
since 1970 or broken-down time. And sometimes I need to manage the
timezone too, because the time reference comes from NTP (which is UTC).

newlib as expected defines a time() function that calls a syscall
function _gettimeofday(). It should be defined as in [1].

What is it? There's an assembler instruction that I don't understand:

asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");

What is the cleanest way to override this behaviour and let newlib's
time() return a custom calendar time, maybe counted by a local RTC
synchronized via NTP?

The solution that comes to my mind is to override _gettimeofday() by
defining a custom function.



[1] https://github.com/eblot/newlib/blob/master/libgloss/arm/syscalls.c

David Brown

Sep 30, 2022, 3:04:55 AM
On 30/09/2022 08:29, pozz wrote:
> I often use newlib standard C libraries with gcc toolchain for Cortex-M
> platforms. It sometimes happens I need to manage calendar time: seconds
> from 1970 or broken down time. And it sometimes happens I need to manage
> timezone too, because the time reference comes from NTP (that is UTC).
>
> newlib as expected defines a time() function that calls a syscall
> function _gettimeofday(). It should be defined as in [1].
>
> What is it? There's an assembler instruction that I don't understand:
>
>   asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
>

It is a "software interrupt" instruction. If you have a separation of
user-space and supervisor-space code in your system, this is the way you
make a call to supervisor mode.

> What is the cleanest way to override this behaviour and let newlib
> time() to return a custom calendar time, maybe counted by a local RTC,
> synchronized with a NTP?
>
> The solution that comes to my mind is to override _gettimeofday() by
> defining a custom function.
>

Yes, that's the way to do it.

Or define your own time functions that are appropriate to the task. I
have almost never had a use for the standard library time functions -
they are too much for most embedded systems which rarely need all the
locale stuff, time zones, and tracking leap seconds, while lacking the
stuff you /do/ need like high precision time counts.

Use a single 64-bit monotonic timebase running at high speed (if your
microcontroller doesn't support that directly, use a timer with an
interrupt for tracking the higher part). That's enough for nanosecond
precision for about 600 years.
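A sketch of that idea, with the hardware timer modelled as a plain variable (the register and ISR hookup are assumptions, not any particular vendor's API): the low 32 bits come from a free-running timer, the high 32 bits from an overflow interrupt, and the read retries if an overflow lands between the two halves.

```c
#include <stdint.h>

static volatile uint32_t timer_reg;     /* stands in for the timer register */
static volatile uint32_t timebase_hi;   /* incremented by the overflow ISR */

void timer_overflow_isr(void)
{
    timebase_hi++;
}

uint64_t timebase_now(void)
{
    uint32_t hi, lo;
    do {                    /* retry if an overflow slipped between reads */
        hi = timebase_hi;
        lo = timer_reg;
    } while (hi != timebase_hi);
    return ((uint64_t)hi << 32) | lo;
}
```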

For human-friendly time and dates, either update every second or write
your own simple second-to-human converter. It's easier if you have
your base point relatively recently (there's no need to calculate back
to 01.01.1970).
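A sketch of such a converter, assuming the count is seconds since 2020-01-01 00:00 (an arbitrary recent epoch; 2020 being a leap year keeps the loop simple; time zones and leap seconds are ignored):

```c
#include <stdint.h>

static int is_leap(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

/* Convert seconds since 2020-01-01 00:00 to broken-down date/time. */
void sec2020_to_date(uint64_t s, int *y, int *mo, int *d,
                     int *h, int *mi, int *se)
{
    static const uint8_t mdays[12] =
        { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    uint32_t days = (uint32_t)(s / 86400);
    uint32_t rem  = (uint32_t)(s % 86400);

    *h  = (int)(rem / 3600);
    *mi = (int)(rem / 60 % 60);
    *se = (int)(rem % 60);

    *y = 2020;
    while (days >= (uint32_t)(is_leap(*y) ? 366 : 365)) {
        days -= is_leap(*y) ? 366 : 365;
        (*y)++;
    }

    *mo = 1;
    for (int m = 0; m < 12; m++) {
        uint32_t len = mdays[m] + (m == 1 && is_leap(*y));
        if (days < len) { *mo = m + 1; break; }
        days -= len;
    }
    *d = (int)days + 1;
}
```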

If you have an internet connection, NTP is pretty simple if you are
happy to use the NTP pools as a rough reference without trying to do
millisecond synchronisation.
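On the receive side, the essential conversion an SNTP client needs is epoch translation; a sketch (timestamp position per the NTP on-wire format; network I/O, validation and the fractional part are omitted):

```c
#include <stdint.h>

/* The server's transmit timestamp sits at byte offset 40 of the
 * 48-byte packet as a 32.32 fixed-point count of seconds since
 * 1900-01-01; subtracting the 1900-to-1970 offset (2208988800 s)
 * gives Unix time. */
uint32_t sntp_transmit_to_unix(const uint8_t pkt[48])
{
    uint32_t secs_1900 = ((uint32_t)pkt[40] << 24)
                       | ((uint32_t)pkt[41] << 16)
                       | ((uint32_t)pkt[42] << 8)
                       |  (uint32_t)pkt[43];
    return secs_1900 - 2208988800u;   /* 1900 epoch -> 1970 epoch */
}
```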


>
>
> [1] https://github.com/eblot/newlib/blob/master/libgloss/arm/syscalls.c
>

Clifford Heath

Sep 30, 2022, 3:12:44 AM
On 30/9/22 16:29, pozz wrote:
> I often use newlib standard C libraries with gcc toolchain for Cortex-M
> platforms. It sometimes happens I need to manage calendar time: seconds
> from 1970 or broken down time. And it sometimes happens I need to manage
> timezone too, because the time reference comes from NTP (that is UTC).
>
> newlib as expected defines a time() function that calls a syscall
> function _gettimeofday(). It should be defined as in [1].
>
> What is it? There's an assembler instruction that I don't understand:

It's the internal implementation of a bog-standard BSD Unix system call.
Have you tried `man 2 gettimeofday`?

The first parameter points to a struct timeval with time_t tv_sec and
suseconds_t tv_usec, and the second one (both optional) to a struct
timezone with two ints called tz_minuteswest and tz_dsttime.


>   asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
> What is the cleanest way to override this behaviour and let newlib
> time() to return a custom calendar time, maybe counted by a local RTC,
> synchronized with a NTP?

That depends on details of newlib and your tool chain.

Clifford Heath

pozz

Sep 30, 2022, 5:29:55 AM
On 30/09/2022 09:04, David Brown wrote:
> On 30/09/2022 08:29, pozz wrote:
>> I often use newlib standard C libraries with gcc toolchain for
>> Cortex-M platforms. It sometimes happens I need to manage calendar
>> time: seconds from 1970 or broken down time. And it sometimes happens
>> I need to manage timezone too, because the time reference comes from
>> NTP (that is UTC).
>>
>> newlib as expected defines a time() function that calls a syscall
>> function _gettimeofday(). It should be defined as in [1].
>>
>> What is it? There's an assembler instruction that I don't understand:
>>
>>    asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
>>
>
> It is a "software interrupt" instruction.  If you have a separation of
> user-space and supervisor-space code in your system, this is the way you
> make a call to supervisor mode.

Ok, but how does that instruction help in returning a value from
_gettimeofday()?


>> What is the cleanest way to override this behaviour and let newlib
>> time() to return a custom calendar time, maybe counted by a local RTC,
>> synchronized with a NTP?
>>
>> The solution that comes to my mind is to override _gettimeofday() by
>> defining a custom function.
>>
>
> Yes, that's the way to do it.
>
> Or define your own time functions that are appropriate to the task.  I
> have almost never had a use for the standard library time functions -
> they are too much for most embedded systems which rarely need all the
> locale stuff, time zones, and tracking leap seconds, while lacking the
> stuff you /do/ need like high precision time counts.
>
> Use a single 64-bit monotonic timebase running at high speed (if your
> microcontroller doesn't support that directly, use a timer with an
> interrupt for tracking the higher part).  That's enough for nanosecond
> precision for about 600 years.
>
> For human-friendly time and dates, either update every second or write
> your own simple second-to-human converter.   It's easier if you have
> your base point relatively recently (there's no need to calculate back
> to 01.01.1970).
>
> If you have an internet connection, NTP is pretty simple if you are
> happy to use the NTP pools as a rough reference without trying to do
> millisecond synchronisation.

I agree with you, and I used to implement my own functions to manage
calendar time. Sometimes I used an internal or external RTC that gives
date and time in broken-down fields (seconds, minutes, ...).

However, most RTCs don't handle DST (daylight saving time)
automatically, so I started using a different approach: a simple
32-bit timer that increments every second, maybe clocked from an
accurate 32.768 kHz quartz through a 32768 prescaler (many RTCs can be
configured as a simple 32-bit counter). I rarely need calendar time
with a resolution better than 1 second.

Now the big question: what exactly does the counter represent? Of
course, seconds elapsed since an epoch (which could be Unix 1970, or
2000, or 2020, or whatever you choose). But the real question is: UTC
or local time?

I started using local time, for example a timer counting seconds since
year 2020 (thus avoiding the wrap-around at year 2038) in the Rome
timezone. However this approach brings other issues.

How do you convert this number (seconds since 2020 in Rome) to
broken-down time (day, month, hours...)? It's quite complex, because
you must account for leap years, but mostly for the DST rules.
In Rome there are wall-clock times that occur twice, when the clock is
moved backward by one hour at the end of DST. Which counter value, as
seconds since the epoch in Rome time, corresponds to such a time?

It's much simpler to start from seconds in UTC, as Linux (and maybe
Windows) does. This way you can use standard functions to convert
seconds in UTC to local time, for example localtime() (or
localtime_r(), which is better).

Another bonus is when you have NTP, which returns seconds in UTC, so
you can set your counter to the exact number retrieved from NTP.
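A sketch of that flow, assuming a toolchain whose localtime_r() honours a POSIX TZ string (newlib does, like glibc); the TZ value below is the standard Rome/EU rule (CET, UTC+1, DST between the last Sundays of March and October):

```c
#include <stdlib.h>
#include <time.h>

/* Keep the counter in UTC seconds; derive local time on demand. */
void utc_to_local(time_t utc, struct tm *out)
{
    setenv("TZ", "CET-1CEST,M3.5.0,M10.5.0/3", 1);  /* Rome/EU rule */
    tzset();
    localtime_r(&utc, out);     /* reentrant, unlike localtime() */
}
```

For example, 1656676800 (2022-07-01 12:00:00 UTC) comes out as 14:00 on 1 July, since Rome is on CEST (UTC+2) in summer.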



Clifford Heath

Sep 30, 2022, 7:52:05 AM
On 30/9/22 19:29, pozz wrote:
> On 30/09/2022 09:04, David Brown wrote:
>> On 30/09/2022 08:29, pozz wrote:
>>> I often use newlib standard C libraries with gcc toolchain for
>>> Cortex-M platforms. It sometimes happens I need to manage calendar
>>> time: seconds from 1970 or broken down time. And it sometimes happens
>>> I need to manage timezone too, because the time reference comes from
>>> NTP (that is UTC).
>>>
>>> newlib as expected defines a time() function that calls a syscall
>>> function _gettimeofday(). It should be defined as in [1].
>>>
>>> What is it? There's an assembler instruction that I don't understand:
>>>
>>>    asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
>>
>> It is a "software interrupt" instruction.  If you have a separation of
>> user-space and supervisor-space code in your system, this is the way
>> you make a call to supervisor mode.
>
> Ok, but how that instruction helps in returning a value from
> _gettimeofday()?


It traps into Kernel mode, with a different stack. The kernel uses
memory manipulation to push return values into user-mode registers or
the user stack as needed to simulate a procedure return.

> It's much more simple to start from seconds in UTC, as Linux (and maybe
> Windows) does. In this way you can use standard functions to convert
> seconds in UTC to localtime

That's a good way to always get the wrong result. You are ignoring the
need for leap seconds. If you want a monotonic counter of seconds since
some epoch, you must not use UTC, but TAI:

<https://en.wikipedia.org/wiki/International_Atomic_Time>

When I implemented this, I used a 64-bit counter in hundreds of
nanoseconds since a date about 6000 BC, measuring in TAI. You can
convert to UTC easily enough, and then use the timezone tables to get
local times.
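That UTC conversion needs the leap-second table; a sketch with only the most recent entries (earlier history would extend the table backwards; the offset has been 37 s since 2017-01-01):

```c
#include <time.h>

/* Each entry gives the UTC instant from which TAI-UTC took a new value. */
static const struct {
    time_t utc_since;     /* Unix time when the offset took effect */
    int    tai_minus_utc; /* TAI - UTC, in seconds */
} leap_table[] = {
    { 1341100800, 35 },   /* 2012-07-01 */
    { 1435708800, 36 },   /* 2015-07-01 */
    { 1483228800, 37 },   /* 2017-01-01 */
};

int tai_utc_offset(time_t utc)
{
    int off = 34;         /* value in force before the first entry above */
    for (unsigned i = 0; i < sizeof leap_table / sizeof leap_table[0]; i++)
        if (utc >= leap_table[i].utc_since)
            off = leap_table[i].tai_minus_utc;
    return off;
}
```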

Clifford Heath.

David Brown

Sep 30, 2022, 8:38:51 AM
On 30/09/2022 11:29, pozz wrote:
> On 30/09/2022 09:04, David Brown wrote:
>> On 30/09/2022 08:29, pozz wrote:
>>> I often use newlib standard C libraries with gcc toolchain for
>>> Cortex-M platforms. It sometimes happens I need to manage calendar
>>> time: seconds from 1970 or broken down time. And it sometimes happens
>>> I need to manage timezone too, because the time reference comes from
>>> NTP (that is UTC).
>>>
>>> newlib as expected defines a time() function that calls a syscall
>>> function _gettimeofday(). It should be defined as in [1].
>>>
>>> What is it? There's an assembler instruction that I don't understand:
>>>
>>>    asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
>>>
>>
>> It is a "software interrupt" instruction.  If you have a separation of
>> user-space and supervisor-space code in your system, this is the way
>> you make a call to supervisor mode.
>
> Ok, but how that instruction helps in returning a value from
> _gettimeofday()?

It will work if you have an OS that provides services to user-level
code. The service type is passed in the SWI instruction (in this case,
"SWI_Time"), and the service should return a value in r0.

Calls like this are part of the "hosted" C library functions - they rely
on a host OS to do the actual work.
That should all be fine.

Theo

Sep 30, 2022, 9:06:20 AM
pozz <pozz...@gmail.com> wrote:
> I often use newlib standard C libraries with gcc toolchain for Cortex-M
> platforms. It sometimes happens I need to manage calendar time: seconds
> from 1970 or broken down time. And it sometimes happens I need to manage
> timezone too, because the time reference comes from NTP (that is UTC).
>
> newlib as expected defines a time() function that calls a syscall
> function _gettimeofday(). It should be defined as in [1].
>
> What is it? There's an assembler instruction that I don't understand:
>
> asm ("swi %a1; mov %0, r0" : "=r" (value): "i" (SWI_Time) : "r0");
>
> What is the cleanest way to override this behaviour and let newlib
> time() to return a custom calendar time, maybe counted by a local RTC,
> synchronized with a NTP?

It appears 'libgloss' is the system-dependent part of newlib. It has
various ideas of what those low level system functions should do, in
particular linux-syscalls0.S, redboot-syscalls.c and syscalls.c (which
appears to be calling Arm's Angel monitor).

There's a guide for porting newlib to a new platform that describes what
libgloss is and how to port it:
https://sourceware.org/newlib/libgloss.html
as well as:
https://www.embecosm.com/appnotes/ean9/ean9-howto-newlib-1.0.html
https://wiki.osdev.org/Porting_Newlib

So the cleanest way wouldn't be to override _gettimeofday() as such;
you'd make your own libgloss library that implements the backend
functions you want.

Theo

pozz

Sep 30, 2022, 9:32:22 AM
On 30/09/2022 13:51, Clifford Heath wrote:
> On 30/9/22 19:29, pozz wrote:
>> On 30/09/2022 09:04, David Brown wrote:
>>> On 30/09/2022 08:29, pozz wrote:

[...]

>> It's much more simple to start from seconds in UTC, as Linux (and
>> maybe Windows) does. In this way you can use standard functions to
>> convert seconds in UTC to localtime
>
> That's a good way to always get the wrong result. You are ignoring the
> need for leap seconds. If you want a monotonic counter of seconds since
> some epoch, you must not use UTC, but TAI:
>
> <https://en.wikipedia.org/wiki/International_Atomic_Time>
>
> When I implmented this, I used a 64-bit counter in 100's of nanoseconds
> since a date about 6000BC, measuring in TAI. You can convert to UTC
> easily enough, and then use the timezone tables to get local times.

What happens if the counter is UTC instead of TAI in a typical embedded
application? There's a time when the counter is synchronized (by a
manual operation from the user, by NTP or other means). At that instant
the broken-down time shown on the display is precise.

You would have to wait for the next leap second to accumulate an error
of... 1 second.


Richard Damon

Sep 30, 2022, 9:48:44 AM
The key thing to remember is there is more than one system of time.

As I remember, time() returns UTC seconds since the epoch, ignoring
leap seconds. This means the time() function will either pause for 1
second during a leap second, or, on some systems, smear the 1-second
anomaly over a period of time. (This smeared UTC is monotonic, if not
exactly accurate during the smear period.)
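A sketch of such a linear smear (window length and placement are illustrative; real deployments choose their own):

```c
/* Fraction of the leap second applied at time t, when the extra second
 * is spread evenly over a window centred on the leap instant. Adding
 * this offset to raw UTC keeps the smeared clock monotonic. */
double smear_offset(double t, double leap, double window)
{
    double start = leap - window / 2;
    if (t <= start)          return 0.0;
    if (t >= start + window) return 1.0;
    return (t - start) / window;     /* ramps linearly from 0 to 1 */
}
```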

For time() ALL days are 24*60*60 seconds long.

There are other time systems (like TAI) that keep track of leap
seconds, but to use those to convert to wall-clock time you need a
historical table of when leap seconds occurred, and you must either
refuse to handle the farther future or accept that you have to guess
when future leap seconds will be applied.

Most uses of TAI are for short intervals that never need conversion to
wall-clock time.

Don Y

Sep 30, 2022, 2:42:21 PM
On 9/30/2022 2:29 AM, pozz wrote:
> Another bonus is when you have NTP, that returns seconds in UTC, so you can set
> your counter with the exact number retrived by NTP.

No. You always have to ensure that time keeps flowing in one direction.

So, time either "doesn't exist" before your initial sync with the
time server (what if the server isn't available when you want to
do that?) *or* you have to look at your current notion of "now"
and ensure that the "real" value of now, when obtained from the
time server, is always in the future relative to your notion.

[Note that NTP slaves don't blindly assume the current time is
as reported but *slew* to the new value, over some interval.]

This also ignores the possibility of computations with relative
*intervals* being inconsistent with these spontaneous "resets".

Don Y

Sep 30, 2022, 2:55:16 PM
On 9/30/2022 4:51 AM, Clifford Heath wrote:
> When I implmented this, I used a 64-bit counter in 100's of nanoseconds since a
> date about 6000BC, measuring in TAI. You can convert to UTC easily enough, and
> then use the timezone tables to get local times.

How did you address calls for times during the Gregorian changeover?
Or, times before leap seconds were "created" (misnomer)?

Going back "too far" opens the door for folks to think values
before "recent times" are valid.

I find it easier to treat "system time" as an arbitrary metric
that ticks at a nominal 1 Hz and is never "reset"
(an external timebase allows you to keep adjusting your
notion of a "second"). This ensures that there are always
N seconds between any two *system* times, X and X+N (for all X).

Then, "wall time" is a bogus concept introduced just for human
convenience. Do you prevent a user (or an external reference)
from ever setting the wall time backwards? What if he *wants*
to? Then, anything you've done relying on that is suspect.

You don't FORCE system time to remain in sync with wall time
(even at a specific relative offset) but, rather, treat them
as separate things.

So, if I (the user) want to "schedule an appointment" at 9:00AM,
the code uses the *current* notion of the wall time -- which might
change hundreds of times between now and then, at the whim of the
user. If the wall time suddenly changes, then the time to the
appointment will also change -- including being overshot.

Damn near everything else wants to rely on relative times
which track the system time.

If a user wants to do something "in 5 minutes", you don't
convert that to "current wall time + 5 minutes" but, rather,
schedule it at "current SYSTEM time + 300 seconds".

OTOH, if it is now 11:50 and he wants something to happen at
11:55 (now+5 minutes) then he must *say* "11:55".

This allows a user to know what to expect in light of the
fact that he can change one notion of time but not the other.

Clifford Heath

Sep 30, 2022, 6:43:21 PM
On 30/9/22 23:48, Richard Damon wrote:
> There are other time systems (Like the TAI) that keep track of leap
> seconds, but then to use those to convert to wall-clock time, you need a
> historical table of when leap seconds occured,

There haven't been that many of them, so it's not a very big table.

> and you need to either
> refuse to handle the farther future or admit you are needing to "Guess"
> when those leap seconds will need to be applied.

There is a proposal to never add more leap seconds anyhow. They never
did anyone any good. Astronomers don't use UTC anyway.

> Most uses of TAI time are for just short intervals without a need to
> convert to wall clock.

Yes. But if you're going to implement a monotonic system, you may as
well do it properly.

Clifford Heath

Clifford Heath

Sep 30, 2022, 6:48:50 PM
On 1/10/22 04:55, Don Y wrote:
> On 9/30/2022 4:51 AM, Clifford Heath wrote:
>> When I implmented this, I used a 64-bit counter in 100's of
>> nanoseconds since a date about 6000BC, measuring in TAI. You can
>> convert to UTC easily enough, and then use the timezone tables to get
>> local times.
>
> How did you address calls for times during the Gregorian changeover?

You're asking a question about calendars, not time. Different problem.

> I find it easier to treat "system time" as an arbitrary metric
> that runs at a nominal 1Hz per second and is never "reset".
...
> Then, "wall time" is a bogus concept introduced just for human
> convenience.  Do you prevent a user (or an external reference)
> from ever setting the wall time backwards?

That doesn't work for someone who's travelling between timezones.
Time keeps advancing regardless, but wall clock time jumps about.
Same problem for DST. Quite a lot of enterprise (financial) systems are
barred from running any transaction processing for an hour during DST
switch-over, because of software that might malfunction.

Correctness is difficult, especially when you build systems on shifting
sands.

Clifford Heath.

Don Y

Sep 30, 2022, 7:11:54 PM
On 9/30/2022 3:48 PM, Clifford Heath wrote:
> On 1/10/22 04:55, Don Y wrote:
>> On 9/30/2022 4:51 AM, Clifford Heath wrote:
>>> When I implmented this, I used a 64-bit counter in 100's of nanoseconds
>>> since a date about 6000BC, measuring in TAI. You can convert to UTC easily
>>> enough, and then use the timezone tables to get local times.
>>
>> How did you address calls for times during the Gregorian changeover?
>
> You're asking a question about calendars, not time. Different problem.

They are related as time is often interpreted relative to some
*other* "bogus concept" (e.g., calendar) related to how humans want
to frame time references.

>> I find it easier to treat "system time" as an arbitrary metric
>> that runs at a nominal 1Hz per second and is never "reset".
>> Then, "wall time" is a bogus concept introduced just for human
>> convenience.  Do you prevent a user (or an external reference)
>> from ever setting the wall time backwards?
>
> That doesn't work for someone who's travelling between timezones.

Or for someone who wants to change the current wall time.
Note that these library functions were created when "only god"
(sysadm) could change the current notion of time -- and didn't
do so casually.

Now, damn near every device (esp. EMBEDDED) allows the user to
dick with the wall clock with impunity, including intentionally
setting the time incorrectly (e.g., folks who set their alarm clocks
"5 minutes fast", thinking it will somehow trick them into getting
out of bed promptly, whereas the CORRECT time might not).

And, one can have a suite of devices in a single environment
each with their own notion of "now".

> Time keeps advancing regardless, but wall clock time jumps about.

Because wall time has an ill-defined reference point -- that can often
be changed, at will!

E.g., we don't observe DST here, so the broadcast TV schedules
are "off" by an hour. When something is advertised as airing at
X mountain time (or pacific time), what does that really mean for us?

> Same problem for DST. Quite a lot of enterprise (financial) systems are barred
> from running any transaction processing for an hour during DST switch-over,
> because of software that might malfunction.
>
> Correctness is difficult, especially when you build systems on shifting sands.

The issue is considerably larger than many folks would think. Because
there are a multitude of time references in most environments; what
your phone claims, what your TV thinks, what your PC/time server thinks,
how you've set the clock on your microwave, bedside alarm, etc.

If you have two "systems" (appliances) interacting, which one's
notion of time should you abide?

How do you *report* a timestamp on an event that happened 5 minutes
ago -- if the wall clock was set BACKWARDS by an hour in the intervening
interval? Should the timestamp reflect a *future* time ("The event
happen-ED 55 minutes *from* now")? Should it be adjusted to reflect
the time at which it occurred relative to the current notion of
wall time?

How do you *order* events /ex post factum/ in the presence of such
ambiguity?

(i.e., I map everything, internally, to system time as that lets
me KNOW their relative orders, regardless of what the "wall clock"
said at the time.)

If you've scheduled "something" to happen at 11:55 and the user
sets the wall clock forward, an hour (perhaps accidentally), do you
trigger that event (assuming the new "now" > 11:55) instantly? If
you automatically clear "completed events", then setting the wall
clock back to the "correct" time won't resurrect the 11:55 event
at the originally intended "absolute time".

pozz

Oct 2, 2022, 6:09:20 PM
On 30/09/2022 20:42, Don Y wrote:
> On 9/30/2022 2:29 AM, pozz wrote:
>> Another bonus is when you have NTP, that returns seconds in UTC, so
>> you can set your counter with the exact number retrived by NTP.
>
> No.  You always have to ensure that time keeps flowing in one direction.
>
> So, time either "doesn't exist" before your initial sync with the
> time server (what if the server isn't available when you want to
> do that?)

At startup, if the NTP server is not available and I don't have any
notion of "now", I start from a date in the past, e.g. 01/01/2020.


> *or* you have to look at your current notion of "now"
> and ensure that the "real" value of now, when obtained from the
> time server, is always in the future relative to your notion.

Actually I don't do that: I replace the timer counter with the value
retrieved from NTP.
What happens if the local timer is clocked by a faster clock than
nominal? For example, 16.001 MHz with a 16M prescaler.
If I re-sync with NTP every hour, the local counter is probably
greater than the value retrieved from NTP, and I'm forced to decrease
the local counter, my notion of "now".

What happens if time doesn't flow in one direction only?

Don Y

Oct 2, 2022, 10:03:00 PM
On 10/2/2022 3:09 PM, pozz wrote:
> On 30/09/2022 20:42, Don Y wrote:
>> On 9/30/2022 2:29 AM, pozz wrote:
>>> Another bonus is when you have NTP, that returns seconds in UTC, so you can
>>> set your counter with the exact number retrived by NTP.
>>
>> No.  You always have to ensure that time keeps flowing in one direction.
>>
>> So, time either "doesn't exist" before your initial sync with the
>> time server (what if the server isn't available when you want to
>> do that?)
>
> At startup, if NTP server is not available and I don't have any notion of
> "now", I start from a date in the past, i.e. 01/01/2020.

Then you have to be able to accept a BIG skew in the time when the first
update arrives. What if that takes an hour, a day or more (because the
server is down, badly configured or incorrect routing)? What if it
*never* arrives?

If you apply the new time in a step function, then all of the potential
time related events between ~1/1/2020 and "now" will appear to occur
at the same instant -- *now* -- or, not at all. And, any time-related
calculations will be grossly incorrect.

start_time := now()
dispenser(on)
wait_until(start_time + interval)

Imagine what will happen if the time is changed during this fragment.
If the change adds >= interval to the local notion of now, then the
dispenser will be "on" only momentarily. If it adds (0,interval),
then it will be on for some period LESS than the "interval" intended.

[I'm ignoring the possibility of it going BACKWARDS, for now]

Note that wait_until() could have been expressed as delay(interval)
and, depending on how this is internally implemented, it might be
silently translated to a wait_until() and thus dependant on the
actual value of now().

Likewise, imagine trying to measure the duration of an event:

wait_until(event)
start_time := now()
wait_until(!event)
duration = now() - start_time

Similarly, any implied ordering of actions is vulnerable:

do(THIS, time1)
do(THAT, time2)

What if the value of now() makes a jump from some time prior to
time1 to some time after time1, but before time2. Will THIS happen?
(i.e., will it be scheduled to happen?) How much ACTUAL (execution)
time will there be between THIS being started and THAT?

What if the value of now() makes a jump from some time prior to
time1 to some time after time2. Will THIS happen before THAT?
Will both start (be made ready) concurrently? Who will win the
unintended race?

[Note that many NTP clients won't "accept" a time declaration that is
"too far" from the local notion of now. If you want to *set* the
current time, you use something like ntpdate to impose a specific time
regardless of how far that deviates from your own notion.]

>> *or* you have to look at your current notion of "now"
>> and ensure that the "real" value of now, when obtained from the
>> time server, is always in the future relative to your notion.
>
> Actually I don't do that and I replace the timer counter with the value
> retrieved from NTP.

Then you run the risk that the local counter may have already surpassed
the NTP "count" by, for example, N seconds. And, time now jerks backwards
as the previous N seconds appear to be relived.

Will you AGAIN do the task that was scheduled for "a few seconds ago"?
(even though it has already been completed) Will you remember to ALSO
do the task that expected to be done an hour before that -- if the "jerk
back" wasn't a full hour?

You likely wrote your code (or, your user scheduled events) on the assumption
that there are roughly 60 seconds between any two "minutes", etc. And, that
time1 precedes time2 by (time2 - time1) actual seconds.

> What happens if the local timer is clocked by a faster clock then nominal? For
> example, 16.001MHz with 16M prescaler.
> If I try to NTP re-sync every 1-hour, it's probably the local counter is
> greater than the value retrieved from NTP. I'm forced to decrease the local
> counter, my notion of "now".

No. You change the rate at which you run the local "clock" -- whatever
timebase you are counting. So, if your jiffy was designed to happen at
100ms intervals (counted down from some XTAL reference by a divisor of
MxN) and you now discover that your notion of 100 was actually 98.7 REAL ms
(because your time has been noted as moving faster than the NTP reference),
then you change the divisor used to generate the jiffy to something slightly
larger to effectively slow the jiffy down to 100+ ms (the "+" being present
to ensure the local time eventually slows enough so that "real" time
falls into sync).

This is a continuous process. (Read the NTP sources and how the kernel
implements "adjtime()".)
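A sketch of that rate correction, reduced to the arithmetic (how the elapsed times are measured and where the divisor lands in hardware are left as assumptions):

```c
#include <stdint.h>

/* Compare how much local time elapsed against how much NTP time
 * elapsed over the same interval, and scale the timer divisor so the
 * local tick lengthens (or shortens) until the two rates agree. This
 * is a crude, one-shot version of what adjtime()/ntpd do continuously. */
uint32_t adjust_divisor(uint32_t divisor,
                        uint64_t local_elapsed, uint64_t ntp_elapsed)
{
    /* If local time ran fast (local_elapsed > ntp_elapsed), the new
     * divisor comes out larger, slowing the jiffy down. */
    return (uint32_t)((uint64_t)divisor * local_elapsed / ntp_elapsed);
}
```

For the 16.001 MHz example above, one hour of local time would measure about 3601 s against 3600 s of NTP time, so a 16,000,000 divisor grows by roughly 4444 counts.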

> What happens if the time doesn't flow in one direction only?

Then everything that (implicitly) relies on time to be monotonic is
hosed.

Repeat the examples at the start of my post with the case of time
jumping backwards and see what happens.

What if time goes backwards enough to muck with some calculation
or event sequence -- but, not far enough to cause the code that
*schedules* those events to reflect the difference.

What would you do if you saw entries in a log file:

12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:04 finished up

>> [Note that NTP slaves don't blindly assume the current time is
>> as reported but *slew* to the new value, over some interval.]
>>
>> This also ignores the possibility of computations with relative
>> *intervals* being inconsistent with these spontaneous "resets".

It's important that the RATE of time passage is reasonably accurate
and consistent (and monotonically increasing). But, the notion of
the "time of day" is dubious and exists just as a convenience for
humans to order events relative to the outside world (which uses
wall clocks). How accurate is YOUR wall clock? Does it agree
with your cell phone's notion of now? The alarm clock in your
bedroom? Your neighbor's timepiece when he comes to visit? etc.

Tauno Voipio

Oct 3, 2022, 3:21:05 AM
NTP has solved this question; just get the publications of Prof. Mills.

There is a short description in the Wikipedia article on NTP.

--

-TV

pozz

Dec 30, 2022, 11:08:19 AM
Eventually I had some free time to read this interesting thread and reply.

On 03/10/2022 04:02, Don Y wrote:
> On 10/2/2022 3:09 PM, pozz wrote:
>> On 30/09/2022 20:42, Don Y wrote:
>>> On 9/30/2022 2:29 AM, pozz wrote:
>>>> Another bonus is when you have NTP, that returns seconds in UTC, so
>>>> you can set your counter with the exact number retrived by NTP.
>>>
>>> No.  You always have to ensure that time keeps flowing in one direction.
>>>
>>> So, time either "doesn't exist" before your initial sync with the
>>> time server (what if the server isn't available when you want to
>>> do that?)
>>
>> At startup, if NTP server is not available and I don't have any notion
>> of "now", I start from a date in the past, i.e. 01/01/2020.
>
> Then you have to be able to accept a BIG skew in the time when the first
> update arrives.  What if that takes an hour, a day or more (because the
> server is down, badly configured or incorrect routing)?   What if it
> *never* arrives?

Certainly there's an exception at startup. When the *first* NTP response
is received, the code should accept a BIG jump in the current notion of
now (which could be undefined, or 2020, or some other epoch, until then).
I read that ntpd accepts a -g command-line option that enables one (and
only one) big difference between the system's current notion of now and
"NTP now".

I admit that this could lead to odd behaviours, as you explained. IMHO,
however, there aren't many solutions at startup, especially if the
embedded device must be autonomous and can't accept hints from the user.

One is to suspend, at startup, all the device activities until a "fresh
now" is received from the NTP server. After that, the normal tasks are
started. As you noted, this could introduce a delay (even a BIG delay,
depending on the Internet connection and NTP servers) between power-on
and the start of tasks. I think this isn't compatible with many
applications.

Another solution is to fix the code in such a way that it correctly
handles a big forward or backward step in the "now" counter.
The code I'm thinking of is not the code that manages normal timers,
which can depend on a local reference (XTAL, ceramic resonator, ...)
completely independent of the calendar counter. Most of the time, the
precision of timers isn't strict and intervals are short: we need to
activate a relay for 3 seconds (but nothing happens if it is activated
for 3.01 seconds) or we need to generate a 100ms pulse on an output
(but no problem if it is 98ms).
This means having a main counter clocked every 10ms (or whatever) from a
local clock of 100Hz (or whatever). This counter isn't corrected by NTP.
The only code that must be fixed is the code that manages events that
must occur at specific calendar times (at 12 o'clock on 1st January, at
8:30 every day, and so on). So you should have *another* counter,
clocked at 1Hz (or 10Hz or 100Hz), that is adjusted by NTP. And abrupt
changes should be taken into account (even if I don't know how).
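A minimal sketch of this two-counter scheme (all names are mine, and the
100Hz tick rate is just an example, not anything from a real HAL):

```c
#include <stdint.h>
#include <time.h>

/* Monotonic tick counter: incremented by the local 100 Hz timer ISR,
 * never touched by NTP.  Used for relative delays (relays, pulses). */
static volatile uint32_t jiffies;

/* Calendar counter: seconds since the epoch, stepped by NTP.
 * Used only for wall-clock events (alarms at 8:30, etc.). */
static volatile time_t calendar_now;

/* Called from the 100 Hz timer interrupt. */
void timer_isr(void)
{
    jiffies++;
    if (jiffies % 100 == 0)     /* one wall-clock second elapsed */
        calendar_now++;
}

/* Called when an NTP response arrives: only the calendar counter moves. */
void ntp_update(time_t utc_seconds)
{
    calendar_now = utc_seconds;
}

uint32_t ticks(void)      { return jiffies; }
time_t   wall_clock(void) { return calendar_now; }
```

The relay/pulse code only ever reads ticks(); only the calendar-event
code looks at wall_clock(), so an NTP step never disturbs relative delays.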


> If you apply the new time in a step function, then all of the potential
> time related events between ~1/1/2020 and "now" will appear to occur
> at the same instant -- *now* -- or, not at all.  And, any time-related
> calculations will be grossly incorrect.
>
>     start_time := now()
>     dispenser(on)
>     wait_until(start_time + interval)
>
> Imagine what will happen if the time is changed during this fragment.
> If the change adds >= interval to the local notion of now, then the
> dispenser will be "on" only momentarily.  If it adds (0,interval),
> then it will be on for some period LESS than the "interval" intended.
>
> [I'm ignoring the possibility of it going BACKWARDS, for now]
>
> Note that wait_until() could have been expressed as delay(interval)
> and, depending on how this is internally implemented, it might be
> silently translated to a wait_until() and thus dependant on the
> actual value of now().

Good point. As I wrote before, events that aren't strictly related to
the wall clock shouldn't be coded with functions that use now(). If the
code that generates a 100ms pulse on an output uses now(), it is wrong
and must be corrected.


> Likewise, imagine trying to measure the duration of an event:
>
>     wait_until(event)
>     start_time := now()
>     wait_until(!event)
>     duration = now() - start_time

Same thing. Instead of using now(), which returns "calendar seconds"
related to NTP, this code should use ticks or jiffies that are related
only to the local reference.
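For example (tick_ms and its implied 1kHz rate are invented names, not
newlib API): measure durations against a monotonic tick counter, using
unsigned arithmetic so that counter wraparound doesn't break the result:

```c
#include <stdint.h>

/* Hypothetical monotonic millisecond counter, e.g. bumped by a 1 kHz
 * timer ISR.  It is never adjusted by NTP. */
static volatile uint32_t tick_ms;

/* Unsigned subtraction is wraparound-safe: even if tick_ms overflowed
 * between the two samples, the difference is still correct mod 2^32. */
uint32_t elapsed_ms(uint32_t start)
{
    return tick_ms - start;
}
```

The duration fragment then becomes start = tick_ms; ...;
duration = elapsed_ms(start); and an NTP step can't corrupt it.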


> Similarly, any implied ordering of actions is vulnerable:
>
>     do(THIS, time1)
>     do(THAT, time2)
>
> What if the value of now() makes a jump from some time prior to
> time1 to some time after time1, but before time2.  Will THIS happen?
> (i.e., will it be scheduled to happen?)  How much ACTUAL (execution)
> time will there be between THIS being started and THAT?
>
> What if the value of now() makes a jump from some time prior to
> time1 to some time after time2.  Will THIS happen before THAT?
> Will both start (be made ready) concurrently?  Who will win the
> unintended race?
>
> [Note that many NTP clients won't "accept" a time declaration that is
> "too far" from the local notion of now.  If you want to *set* the
> current time, you use something like ntpdate to impose a specific time
> regardless of how far that deviates from your own notion.
If you implement it in this way ("do" is a reserved word in C, so call
it do_later):

    void do_later(action_fn fn, uint32_t delay_ms) {
        timer_add(delay_ms, fn);
    }

and timer_add() uses the counter that is clocked *only* from the local
reference, no problem occurs.
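A minimal sketch of what such a timer_add() might look like, assuming a
fixed-size table and a polling function fed with the local tick count
(the arg parameter and the wrap-safe comparison are my own additions,
not part of the interface quoted above):

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*action_fn)(void *arg);

struct timer {
    uint32_t  expiry;           /* local tick count at which to fire */
    action_fn fn;
    void     *arg;
    int       active;
};

#define MAX_TIMERS 8
static struct timer timers[MAX_TIMERS];
static uint32_t cur_ticks;      /* last tick count seen by timers_poll() */

/* Schedule fn(arg) after delay_ticks ticks of the LOCAL clock;
 * NTP adjustments never touch these.  Returns 0 on success. */
int timer_add(uint32_t delay_ticks, action_fn fn, void *arg)
{
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        if (!timers[i].active) {
            timers[i].expiry = cur_ticks + delay_ticks;
            timers[i].fn     = fn;
            timers[i].arg    = arg;
            timers[i].active = 1;
            return 0;
        }
    }
    return -1;                  /* no free slot */
}

/* Call from the main loop with the current local tick count. */
void timers_poll(uint32_t now_ticks)
{
    cur_ticks = now_ticks;
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        /* (int32_t)(a - b) >= 0 is a wraparound-safe "a reached b" test */
        if (timers[i].active &&
            (int32_t)(now_ticks - timers[i].expiry) >= 0) {
            timers[i].active = 0;   /* one-shot: free the slot first */
            timers[i].fn(timers[i].arg);
        }
    }
}

/* Example callback used in the usage below. */
static int demo_fired;
static void demo_cb(void *arg) { (void)arg; demo_fired++; }
```

Note the signed-difference test: the timer still fires correctly even if
a tick (or several) is skipped and the poll first runs past the expiry.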

Some problems could occur when time1 and time2 are calendar times. One
solution could be to have one module that manages calendar events with
the following interface:

cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
void cevents_do(time_t now);

Every second cevents_do() is called with the new calendar time (seconds
from an epoch).

    void cevents_do(time_t now) {
        static time_t old_now;
        if (now != old_now + 1) {
            /* There's a discontinuity in now. What can we do?
             * - Remove expired events without calling the callback
             * - Remove expired events and call the callback for each
             * I think the choice is application dependent */
        }
        /* Process the first elements of the FIFO queue (which is sorted) */
        cevent_s *ev;
        while ((ev = cevents_queue_peek()) != NULL && ev->time == now) {
            ev->fn(ev->arg);
            cevents_queue_pop();
        }
        old_now = now;
    }
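The queue primitives are left undefined above; here is a minimal sketch
using a sorted, singly linked list. Note that, unlike the cevent_add()
prototype above (which returns a handle and presumably allocates), this
simplified version takes caller-allocated storage:

```c
#include <stddef.h>
#include <time.h>

typedef void (*cevent_fn)(void *arg);

typedef struct cevent {
    time_t         time;
    cevent_fn      fn;
    void          *arg;
    struct cevent *next;
} cevent_s;

static cevent_s *queue_head;    /* kept sorted by ascending time */

/* Insert keeping the list sorted, so cevents_do() only looks at the head. */
void cevent_add(cevent_s *ev, time_t when, cevent_fn fn, void *arg)
{
    ev->time = when;
    ev->fn   = fn;
    ev->arg  = arg;

    cevent_s **pp = &queue_head;
    while (*pp != NULL && (*pp)->time <= when)
        pp = &(*pp)->next;
    ev->next = *pp;
    *pp = ev;
}

cevent_s *cevents_queue_peek(void) { return queue_head; }

void cevents_queue_pop(void)
{
    if (queue_head != NULL)
        queue_head = queue_head->next;
}
```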



>>> *or* you have to look at your current notion of "now"
>>> and ensure that the "real" value of now, when obtained from the
>>> time server, is always in the future relative to your notion.
>>
>> Actually I don't do that and I replace the timer counter with the
>> value retrieved from NTP.
>
> Then you run the risk that the local counter may have already surpassed
> the NTP "count" by, for example, N seconds.  And, time now jerks backwards
> as the previous N seconds appear to be relived.
>
> Will you AGAIN do the task that was scheduled for "a few seconds ago"?
> (even though it has already been completed)  Will you remember to ALSO
> do the task that expected to be done an hour before that -- if the "jerk
> back" wasn't a full hour?

Good questions. You could try to implement a complex calendar time
system in your device, one that mimics a full-featured OS: the counter
that tracks "now" (seconds or milliseconds from an epoch) isn't changed
abruptly; instead, its reference is slowed down or accelerated.
You need hardware that supports this. Many processors have timers that
can be used as counters, but their clock reference is limited to a
prescaled main clock, and the prescaler value is usually an integer,
maybe only one from a limited set of values (1, 2, 4, 8, 32, 64, 256).

Anyway, even if you manage to implement this correctly, you still have
to solve the "startup issue". What happens if the first NTP response
arrives 5 minutes after startup and your notion of now at startup is
completely useless (i.e., no battery is present)?
Maybe during initialization you already scheduled some calendar events.


> You likely wrote your code (or, your user scheduled events) on the
> assumption
> that there are roughly 60 seconds between any two "minutes", etc.  And,
> that
> time1 precedes time2 by (time2 - time1) actual seconds.
>
>> What happens if the local timer is clocked by a faster clock then
>> nominal? For example, 16.001MHz with 16M prescaler.
>> If I try to NTP re-sync every 1-hour, it's probably the local counter
>> is greater than the value retrieved from NTP. I'm forced to decrease
>> the local counter, my notion of "now".
>
> No.  You change the rate at which you run the local "clock" -- whatever
> timebase you are counting.  So, if your jiffy was designed to happen at
> 100ms intervals (counted down from some XTAL reference by a divisor of
> MxN) and you now discover that your notion of 100 was actually 98.7 REAL ms
> (because your time has been noted as moving faster than the NTP reference),
> then you change the divisor used to generate the jiffy to something
> slightly larger to effectively slow the jiffy down to 100+ ms (the "+"
> being present
> to ensure the local time eventually slows enough so that "real" time
> falls into sync).
>
> This is an continuous process.  (read the NTP sources and how the kernel
> implements "adjtime()")

I got the point, but IMHO it's not so simple to implement this
correctly, and anyway you still have the "startup issue".
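As a toy illustration of the divisor adjustment Don describes (the
16MHz reference and the 100Hz jiffy are invented numbers): given the
offset measured against NTP, pick a new timer reload value so the error
is absorbed gradually instead of stepping the clock:

```c
#include <stdint.h>

/* Nominal timebase: a 16 MHz reference divided down to a 100 Hz jiffy. */
#define REF_HZ    16000000u
#define JIFFY_HZ  100u

/* Pick a timer reload value that slews the local clock toward NTP time.
 * offset_us: how far local time is AHEAD of NTP (negative = behind).
 * slew_s:    over how many seconds to absorb the offset.
 * A positive offset means the jiffy must run slightly SLOW, i.e. a
 * larger divisor. */
uint32_t slewed_reload(int32_t offset_us, uint32_t slew_s)
{
    uint32_t nominal = REF_HZ / JIFFY_HZ;           /* 160000 counts */
    int64_t  correction = (int64_t)nominal * offset_us
                        / ((int64_t)slew_s * 1000000);
    return (uint32_t)((int64_t)nominal + correction);
}
```

For example, being 1 s ahead and slewing over 100 s stretches each jiffy
by 1%, so local time stays monotonic while it converges on NTP time.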


>> What happens if the time doesn't flow in one direction only?
>
> Then everything that (implicitly) relies on time to be monotonic is
> hosed.
>
> Repeat the examples at the start of my post with the case of time
> jumping backwards and see what happens.
>
> What if time goes backwards enough to muck with some calculation
> or event sequence -- but, not far enough to cause the code that
> *schedules* those events to reflect the difference.
>
> What would you do if you saw entries in a log file:
>
> 12:01:07  start something
> 12:01:08  did whatever
> 12:01:15  did something else
> 12:01:04  finished up

In the real world, could this happen? Except at startup, the seconds
reported by NTP should be very similar to the "local seconds" clocked
from the local reference. I haven't run any tests, but I expect the
offsets measured by NTP to be well below 1s in normal situations. The
worst case should be:

12:01:07 start something
12:01:08 did whatever
12:01:15 did something else
12:01:14 finished up

I admit it's not very good.

Don Y

Dec 30, 2022, 7:35:46 PM
On 12/30/2022 9:08 AM, pozz wrote:
>>> At startup, if NTP server is not available and I don't have any notion of
>>> "now", I start from a date in the past, i.e. 01/01/2020.
>>
>> Then you have to be able to accept a BIG skew in the time when the first
>> update arrives.  What if that takes an hour, a day or more (because the
>> server is down, badly configured or incorrect routing)?   What if it
>> *never* arrives?
>
> Certainly there's an exception at startup. When the *first* NTP response
> received, the code should accept a BIG shock of the current notion of now (that
> could be undefined or 2020 or another epoch until now).
> I read that ntpd accepts -g command line option that enable one (and only one)
> big difference between current system notion of now and "NTP now".

Yes. But, your system design still has to "make sense" if it NEVER gets
told the current time.

> I admit that this could lead to odd behaviours as you explained. IMHO however
> there aren't many solutions at startup, mainly if the embedded device should be
> autonomous and can't accept suggestions from the user.

Note that "wall/calendar time" is strictly a user convenience. A device
need only deal with it if it has to interact with a world in which the
user relates to temporal events using some external timepiece -- which
may actually be inaccurate!

But, your device can always have its own notion of time that monotonically
increases -- even if the rate of time that it increases isn't entirely
accurate wrt "real units" (i.e., if YOUR second is 1.001 REAL seconds,
that's likely not too important)

So, if you can postpone binding YOUR "system time" (counting jiffies)
to "wall time", then the problem is postponed.

E.g., I deal with events as referenced to *my* timebase (bogounits):
00000006 system initialized
00001003 network on-line
00001100 accepting requests
00020348 request from 10.0.1.88
00020499 reply to 10.0.1.88 issued
...
Eventually, there will be an entry:
XXXXXXXX NTPclient receives update (12:42:00.444)

At this point, you can retroactively update the times in the "log" with
"real" times, relative to the time delivered to the NTP client.
(or, leave the log in bogounits and not worry about it)
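The retroactive conversion is simple once a single sync point is known;
a sketch, assuming an invented rate of 1000 bogounits per second:

```c
#include <stdint.h>
#include <time.h>

/* Invented rate: 1000 bogounits per second. */
#define TICKS_PER_SEC 1000

/* Once NTP tells us that bogounit count sync_ticks corresponds to wall
 * time sync_wall, any earlier (or later) log timestamp can be converted.
 * Resolution is one second; unsigned subtraction handles counter wrap. */
time_t ticks_to_wall(uint32_t log_ticks, uint32_t sync_ticks, time_t sync_wall)
{
    int32_t delta_ticks = (int32_t)(log_ticks - sync_ticks);
    return sync_wall + delta_ticks / TICKS_PER_SEC;
}
```

With a log like the one above, once the NTP update pins one bogounit
count to 12:42:00.444, every earlier entry can be relabeled in wall time.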

The real problem comes when <someone> wants <something> to happen at
some *specific* wall time -- and, you don't yet know what the current
wall time happens to be!

If that will be guaranteed to be sufficiently far in the future that
you (think!) the actual wall time will be known to you, then you
can just cross your fingers and wait to sort out "when" that will be.

> One is to suspend, at startup, all the device activites until a "fresh now" is
> received from NTP server. After that, the normal tasks are started. As you
> noted, this could introduce a delay (even a BIG delay, depending on Internet
> connection and NTP servers) between the power on and the start of tasks. I
> think this isn't compatible with many applications.
>
> Another solution is to fix the code in such a way it correctly faces the
> situation of a big afterward or backward step in the "now" counter.

You're better off picking a time you KNOW to be in the past so that
any adjustments continue to run time *forwards*. We inherently think
of A happening after B implying that time(A) > time(B). It's so
fundamental that you likely don't even notice these dependencies in
your code/algorithms.

> The code I'm thinking of is not the one that manages normal timers that can
> depend on a local reference (XTAL, ceramic resonator, ...) completely
> independent from calendar counter. Most of the time, the precision of timers
> isn't strict and intervals are short: we need to activate a relay for 3 seconds
> (but nothing happens if it is activated for 3.01 seconds) or we need to
> generate a pulse on an output of 100ms (but no problem if it is 98ms).
> This means having a main counter clocked at 10ms (or whatever) from a local
> clock of 100Hz (or whatever). This counter isn't corrected with NTP.
> The only code that must be fixed is the one that manages events that must
> occurs at specific calendar times (at 12 o'clock of 1st January, at 8:30 of
> everyday, and so on). So you should have *another* counter clocked at 1Hz (or
> 10Hz or 100Hz) that is adjusted by NTP. And abrupt changes should be taken into
> account (event if I don't know how).

You can use NTP to discipline the local oscillator so that times
measured from it are "more accurate". This, regardless of whether
or not the local time tracks the wall time.

>> If you apply the new time in a step function, then all of the potential
>> time related events between ~1/1/2020 and "now" will appear to occur
>> at the same instant -- *now* -- or, not at all.  And, any time-related
>> calculations will be grossly incorrect.
>>
>>      start_time := now()
>>      dispenser(on)
>>      wait_until(start_time + interval)
>>
>> Imagine what will happen if the time is changed during this fragment.
>> If the change adds >= interval to the local notion of now, then the
>> dispenser will be "on" only momentarily.  If it adds (0,interval),
>> then it will be on for some period LESS than the "interval" intended.
>>
>> [I'm ignoring the possibility of it going BACKWARDS, for now]
>>
>> Note that wait_until() could have been expressed as delay(interval)
>> and, depending on how this is internally implemented, it might be
>> silently translated to a wait_until() and thus dependant on the
>> actual value of now().
>
> Good point. As I wrote before, events that aren't strictly related to wall
> clock shouldn't be coded with functions() that use now(). If the code that
> makes a 100ms pulse at an output uses now(), it is wrong and must be corrected.

Time should always be treated "fuzzily".

So, if (now() == CONSTANT) may NEVER be satisfied! E.g., if the code
runs at time CONSTANT+1, then you can know that it's never going to
meet that condition (imagine it in a wait_till loop)

If, instead, you assume that something may delay that statement from
being executed *prior* to CONSTANT, you may, instead, want to
code it as "if (now() >= CONSTANT)" to ensure it gets executed.
(and, if you only want it to be executed ONCE, then take steps to
note when you *have* executed it so you don't execute it again)

For example, my system is real-time so every action has an
associated deadline. But, it is entirely possible that some
actions will be blocked until long after their deadlines
have expired. Checking for "now() == deadline" would lead
to erroneous behavior; the time between deadline and now()
effectively doesn't exist, from the perspective of the
action in question. So, the deadline handler should be
invoked for ANY now() >= deadline.
What if a "second" is skipped (because some higher priority activity
was using the processor)?

> void cevents_do(time_t now) {
>   static time_t old_now;
>   if (now != old_now + 1) {
>     /* There's a discontinuity in now. What can we do?
>      * - Remove expired events without calling callback
>      * - Remove expired events and call callback for each of them
>      * I think the choice is application dependent */
>   }
>   /* Process the first elements of FIFO queue (that is sorted) */
>   cevent_s *ev;
>   while((ev = cevents_queue_peek())->time == now) {
>     ev->fn(ev->arg);
>     cevents_queue_pop();
>   }
>   old_now = now;
> }

You have to figure out how to generalize this FOR YOUR APPLICATION.

Some things may not be important enough to waste effort on;
others may be considerably more sensitive.

>>>> *or* you have to look at your current notion of "now"
>>>> and ensure that the "real" value of now, when obtained from the
>>>> time server, is always in the future relative to your notion.
>>>
>>> Actually I don't do that and I replace the timer counter with the value
>>> retrieved from NTP.
>>
>> Then you run the risk that the local counter may have already surpassed
>> the NTP "count" by, for example, N seconds.  And, time now jerks backwards
>> as the previous N seconds appear to be relived.
>>
>> Will you AGAIN do the task that was scheduled for "a few seconds ago"?
>> (even though it has already been completed)  Will you remember to ALSO
>> do the task that expected to be done an hour before that -- if the "jerk
>> back" wasn't a full hour?
>
> Good questions. You could try to implement a complex calendar time system in
> your device, one that mimics full featured OS. I mean the counter that tracks
> "now" (seconds or milliseconds from an epoch) isn't changed abruptly, but its
> reference is slowed down or accelerated.

If you ensure time always moves forward, most of these issues are
easy to resolve. You *know* you haven't done things scheduled for
t > now().

> You should have an hw that supports this. Many processors have timers that can
> be used as counters, but their clock reference is limited to a prescaled main
> clock and the prescaler value is usually an integer, maybe only one from a
> limited set of values (1, 2, 4, 8, 32, 64, 256).

You can dither the timebase so the average rate tracks your intent.
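Dithering here means alternating between two adjacent integer divisors so
that the long-run average matches a fractional target; a Bresenham-style
sketch (all the numbers are invented):

```c
#include <stdint.h>

/* The hardware prescaler only accepts integers, but alternating between
 * base and base+1 (Bresenham-style) makes the AVERAGE divisor equal to
 * base + num/den. */
typedef struct {
    uint32_t base;      /* integer part of the desired divisor */
    uint32_t num, den;  /* fractional part: num/den, with num < den */
    uint32_t acc;       /* error accumulator */
} dither_t;

/* Returns the reload value to program for the next timer period. */
uint32_t next_reload(dither_t *d)
{
    d->acc += d->num;
    if (d->acc >= d->den) {
        d->acc -= d->den;
        return d->base + 1;
    }
    return d->base;
}
```

With base = 160000 and a fraction of 13/1000, thirteen of every thousand
reloads are 160001, giving an average divisor of 160000.013.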

> Anyway, even if you are so smart to implement this in a correct way, you have
> to solve the "startup issue". What happens if the first NTP response arrived
> after 5 minutes from startup and your notion of now at startup is completely
> useless (i.e., no battery is present)?
> Maybe during initialization code you already added some calendar events.

So, those may *never* get executed -- if the NTP server never replies OR
if you've coded for "time == now()". Or, may get executed (much) later
than intended (e.g., if the NTP server tells you it is 6:00, now, and
you had something scheduled for 5:00...)

>>> What happens if the time doesn't flow in one direction only?
>>
>> Then everything that (implicitly) relies on time to be monotonic is
>> hosed.
>>
>> Repeat the examples at the start of my post with the case of time
>> jumping backwards and see what happens.
>>
>> What if time goes backwards enough to muck with some calculation
>> or event sequence -- but, not far enough to cause the code that
>> *schedules* those events to reflect the difference.
>>
>> What would you do if you saw entries in a log file:
>>
>> 12:01:07  start something
>> 12:01:08  did whatever
>> 12:01:15  did something else
>> 12:01:04  finished up
>
> In a real world, could this happen?

In a multithreaded application, of course it can!

task0() {
spawn(task1);
log("finished up");
}

task1() {
log("start something");
...
log("did whatever");
log("did something else")
}

Assume log() prepends a timestamp to the message emitted.
Assume task1 is lower priority than task0. It is spawned by
task0 but doesn't get a chance to execute until after
task0 has already printed its final message and quit.

If multiple processors/nodes are involved, then the uncertainty
between their individual clocks further complicates this.

And, of course, what do you do if <something> deliberately
introduces a delta to the current time?

Imagine Bob wants to set an alarm for a meeting at 5:00PM.
He then changes the current time to one hour later -- presumably
because he noticed that the clock was incorrect. Does that
mean the meeting will be one hour *sooner* than it would
appear to have been, previously?

What if he notices the date is off and it's really "tomorrow"
and advances the date by one day. Should the alarm be
canceled as the appointed time has already passed? Or,
should the date component of the alarm time be similarly
advanced?

And, what will *Bob* think the correct answers to these
questions should be? Will he be pissed because the alarm
didn't go off when he *expected* it? Or, pissed because the
alarm went off even though the meeting was YESTERDAY?

pozz

Jan 4, 2023, 11:09:59 AM
Il 31/12/2022 01:35, Don Y ha scritto:
> On 12/30/2022 9:08 AM, pozz wrote:
>>>> At startup, if NTP server is not available and I don't have any
>>>> notion of "now", I start from a date in the past, i.e. 01/01/2020.
>>>
>>> Then you have to be able to accept a BIG skew in the time when the first
>>> update arrives.  What if that takes an hour, a day or more (because the
>>> server is down, badly configured or incorrect routing)?   What if it
>>> *never* arrives?
>>
>> Certainly there's an exception at startup. When the *first* NTP
>> response received, the code should accept a BIG shock of the current
>> notion of now (that could be undefined or 2020 or another epoch until
>> now).
>> I read that ntpd accepts -g command line option that enable one (and
>> only one) big difference between current system notion of now and "NTP
>> now".
>
> Yes.  But, your system design still has to "make sense" if it NEVER gets
> told the current time.

Yes, the only solution that comes to my mind is to have a startup
calendar time, such as 01/01/2023 00:00:00. Until a new time is received
from NTP, that is the calendar time that the system will use.

Of course, with this wrong "now", any event that is related to a
calendar time would fail.


>> I admit that this could lead to odd behaviours as you explained. IMHO
>> however there aren't many solutions at startup, mainly if the embedded
>> device should be autonomous and can't accept suggestions from the user.
>
> Note that "wall/calendar time" is strictly a user convenience.  A device
> need only deal with it if it has to interact with a world in which the
> user relates to temporal events using some external timepiece -- which
> may actually be inaccurate!

Yes, of course. Otherwise, NTP would be of no use at all.


> But, your device can always have its own notion of time that monotonically
> increases -- even if the rate of time that it increases isn't entirely
> accurate wrt "real units" (i.e., if YOUR second is 1.001 REAL seconds,
> that's likely not too important)
>
> So, if you can postpone binding YOUR "system time" (counting jiffies)
> to "wall time", then the problem is postponed.
>
> E.g., I deal with events as referenced to *my* timebase (bogounits):
>   00000006 system initialized
>   00001003 network on-line
>   00001100 accepting requests
>   00020348 request from 10.0.1.88
>   00020499 reply to 10.0.1.88 issued
>   ...
> Eventually, there will be an entry:
>   XXXXXXXX NTPclient receives update (12:42:00.444)
>
> At this point, you can retroactively update the times in the "log" with
> "real" times, relative to the time delivered to the NTP client.
> (or, leave the log in bogounits and not worry about it)

Yes, a log with timestamps can be managed in these ways.


> The real problem comes when <someone> wants <something> to happen at
> some *specific* wall time -- and, you don't yet know what the current
> wall time happens to be!
>
> If that will be guaranteed to be sufficiently far in the future that
> you (think!) the actual wall time will be known to you, then you
> can just cross your fingers and wait to sort out "when" that will be.

Yes.
Yes, but I can't recall an application I worked on that didn't track
the wall time and, at the same time, needed greater precision than the
local oscillator.
Suppose you have some alarms scheduled weekly, for example at 8:00:00
every Monday and at 9:00:00 every Saturday.
In the week you have 604'800 seconds.
8:00 on Monday is at 28'800 seconds from the beginning of the week (I'm
considering Monday as the first day of the week).
9:00 on Saturday is at 194'400 secs.

If the alarms manager is called exactly once each second, it is very
simple to check whether an alarm is due:

if (now_weekly_secs == 28800) fire_alarm(ALARM1);
if (now_weekly_secs == 194400) fire_alarm(ALARM2);

Note the equality test. With an inequality test you can't simply write this:

if (now_weekly_secs > 28800) fire_alarm(ALARM1);
if (now_weekly_secs > 194400) fire_alarm(ALARM2);

otherwise alarms will fire continuously after the deadline. You should
tag the alarm as having occurred for the current week to avoid firing it
again at the next call.

Is it so difficult to *guarantee* calling alarms_manager(weekly_secs)
every second?
A second is a very long interval. It's difficult to think of a system
that isn't able to meet a one-second deadline in software.
Simple to write.
No. The meeting is always at 5:00PM.


> What if he notices the date is off and it's really "tomorrow"
> and advances the date by one day. Should the alarm be
> canceled as the appointed time has already passed? Or,
> should the date component of the alarm time be similarly
> advanced?

IMHO, if the user set a time using the wall-clock convention (shut the
door at 8:00PM every evening), it shouldn't be changed when the
calendar time used by the system is adjusted. Anyway, this should be
application-dependent.

Don Y

Jan 4, 2023, 2:20:06 PM
On 1/4/2023 9:09 AM, pozz wrote:
> Il 31/12/2022 01:35, Don Y ha scritto:
>> On 12/30/2022 9:08 AM, pozz wrote:
>>>>> At startup, if NTP server is not available and I don't have any notion of
>>>>> "now", I start from a date in the past, i.e. 01/01/2020.
>>>>
>>>> Then you have to be able to accept a BIG skew in the time when the first
>>>> update arrives.  What if that takes an hour, a day or more (because the
>>>> server is down, badly configured or incorrect routing)?   What if it
>>>> *never* arrives?
>>>
>>> Certainly there's an exception at startup. When the *first* NTP response
>>> received, the code should accept a BIG shock of the current notion of now
>>> (that could be undefined or 2020 or another epoch until now).
>>> I read that ntpd accepts -g command line option that enable one (and only
>>> one) big difference between current system notion of now and "NTP now".
>>
>> Yes.  But, your system design still has to "make sense" if it NEVER gets
>> told the current time.
>
> Yes, the only solution that comes to my mind is to have a startup calendar
> time, such as 01/01/2023 00:00:00. Until a new time is received from NTP, that
> is the calendar time that the system will use.

To be clear, that is the *initial* time that it will use; you still want
time to move forward and not "stall" there, indefinitely.

> Of course, with this wrong "now", any event that is related to a calendar time
> would fail.

And how will the user react to this? Will he even consider it
to be a possibility? Will a big red light come on to remind him
that the product doesn't know what time/day it is?

The technological issues are easy to address (e.g., if being "close to
correct" in your initial assessment of time is crucial, then you include
provisions to know that "on board"). The tougher parts are trying to
match your implementation to the users' expectations. Principle
of least surprise, etc.

>>> The code I'm thinking of is not the one that manages normal timers that can
>>> depend on a local reference (XTAL, ceramic resonator, ...) completely
>>> independent from calendar counter. Most of the time, the precision of timers
>>> isn't strict and intervals are short: we need to activate a relay for 3
>>> seconds (but nothing happens if it is activated for 3.01 seconds) or we need
>>> to generate a pulse on an output of 100ms (but no problem if it is 98ms).
>>> This means having a main counter clocked at 10ms (or whatever) from a local
>>> clock of 100Hz (or whatever). This counter isn't corrected with NTP.
>>> The only code that must be fixed is the one that manages events that must
>>> occurs at specific calendar times (at 12 o'clock of 1st January, at 8:30 of
>>> everyday, and so on). So you should have *another* counter clocked at 1Hz
>>> (or 10Hz or 100Hz) that is adjusted by NTP. And abrupt changes should be
>>> taken into account (event if I don't know how).
>>
>> You can use NTP to discipline the local oscillator so that times
>> measured from it are "more accurate".  This, regardless of whether
>> or not the local time tracks the wall time.
>
> Yes, but I don't remember an application I worked on that didn't track the wall
> time and, at the same time, needed a greater precision than the local oscillator.

I use time in many of my data acquisition techniques. I try to
design so that I don't care how *accurate* the timebase is but
I want it to be stable/repeatable.

A "crystal" often drifts (temperature). I don't want measurements
made at one time of day (temperature) to differ from those made
at another time of day. (or, any other attribute that can affect
my notion of time)

I designed a product that had a really sensitive "front end".
To reduce the impact of the ACmains on our signal, I would
run the acquisition system at the mains frequency -- and,
included a setting for 50 vs. 60Hz selection (domestic/foreign
markets).

We found that blindly assuming the ACmains frequency was
as stated wasn't enough. Errors in the local oscillator
and variations in the ACmains introduced differences
so we had to frequency lock to the actual mains. Then,
use that to derive all related timing based on the
observed frequency as expressed by the local oscillator.

[I've been using a similar technique since the 70's to
build timepieces that exhibit excellent long term
accuracy with crappy crystals]
Can you *guarantee* that it will always be called once and
exactly once per second? Regardless of other activities that
may be going on in your design? Including those that you
haven't yet imagined?

It is unlikely that this "needs" to be a high priority job.
If you have to MAKE it such, then you're bastardizing the
design needlessly.

> Note the equality test. With disequality you can't use this:
>
>    if (now_weekly_secs > 28800) fire_alarm(ALARM1);
>    if (now_weekly_secs > 194400) fire_alarm(ALARM2);
>
> otherwise alarms will occur countinuously after the deadline. You should tag
> the alarm as occured for the current week to avoid firing it again at the next
> call.

    if (!alarm1_done && now_weekly_seconds > 28800) {
        fire_alarm(ALARM1);
        alarm1_done = TRUE;
    }

assuming that your code doesn't implicitly do this (e.g., one typically
designs a timer service that lets you schedule alarms and then
*clears* them (removes them) once they have expired).
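A minimal sketch of such a timer service (all names invented for illustration): alarms carry an absolute due time and a callback, the comparison is >= so a late poll delays an alarm rather than dropping it, and an entry is cleared as it fires, so the "done" bookkeeping is built into the service itself.

```c
#include <stddef.h>
#include <time.h>

typedef void (*alarm_fn)(void *arg);

/* One-shot alarm table.  An entry is removed as it fires. */
struct alarm {
    time_t   due;      /* absolute time at which to fire */
    alarm_fn fn;
    void    *arg;
    int      active;
};

#define MAX_ALARMS 8
static struct alarm alarms[MAX_ALARMS];

/* Register a one-shot alarm; returns nonzero on success, 0 if full. */
int alarm_add(time_t due, alarm_fn fn, void *arg)
{
    for (int i = 0; i < MAX_ALARMS; i++) {
        if (!alarms[i].active) {
            alarms[i].due = due;
            alarms[i].fn = fn;
            alarms[i].arg = arg;
            alarms[i].active = 1;
            return 1;
        }
    }
    return 0;
}

/* Call with the current time as often as convenient.  The >= test
 * tolerates skipped ticks; clearing the slot before the callback
 * gives at-most-once delivery even if the callback re-enters. */
void alarms_do(time_t now)
{
    for (int i = 0; i < MAX_ALARMS; i++) {
        if (alarms[i].active && now >= alarms[i].due) {
            alarms[i].active = 0;
            alarms[i].fn(alarms[i].arg);
        }
    }
}

/* Demo callback (illustration only): counts invocations via arg. */
static void count_fire(void *arg) { ++*(int *)arg; }
```

Note that a poll arriving a few seconds late fires the alarm late instead of never, which is usually the behavior you want from a low-priority housekeeping job.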

> Is it so difficult to *guarantee* calling alarms_manager(weekly_secs) every
> second?

You tell me. There are times when I see "not responding" in the window
frame of apps on my PC. Why? Do their designers KNOW that this will
happen? Are *they* busy -- or, is the *system* busy (disk thrashing, etc.)?

Will you (and anyone else who comes after you) remember that this
MUST happen? If an alarm fails to fire, will you know? Will you
know that it is because it took 1.001 seconds for that iteration
of the loop to be serviced?

My approach has always been to make each job as independent of the rest of
the system as possible. I don't want to have to revisit some code because
the system load has changed or some other job was considerably more
important AND costly than I had originally imagined would be the case
(when I designed some OTHER job)

>>> Some problems could occur when time1 and time2 are calendar times. One
>>> solution could be to have one module that manages calendar events with the
>>> following interface:
>>>
>>>    cevent_hdl_t cevent_add(time_t time, cevent_fn fn, void *arg);
>>>    void cevents_do(time_t now);
>>>
>>> Every second cevents_do() is called with the new calendar time (seconds from
>>> an epoch).
>>
>> What if a "second" is skipped (because some higher priority activity
>> was using the processor)?
>
> A second is a very long interval. It's difficult to think of a system that
> isn't able to meet a one-second deadline in software.

Think harder. :>

If you don't operate in a preemptive environment, then any "task"
that hogs the processor can screw you over. Or, any *series*
of task invocations can screw you without individually misbehaving.

In a preemptive environment, any *effective* change in priorities
can screw you over.

In each of these cases, the problem may or may not be repeatable.
Fear the case where it screws up occasionally -- enough to piss
off a user but not enough to be traceable (by you, after you've
forgotten this dependency, or your successor -- who may be unaware
of it!)

E.g., I *need* my deadline handlers to be invoked when a task
misses its deadline. *They* ensure the system remains in a
consistent state, even if the tasks can't meet their intended
goals. I can't afford to rely on them being invoked at a
specific time which might not be noticed because "things were
busy".
But *which* 5:00PM? There's the 5:00PM on his wristwatch,
the 5:00PM in your device, the 5:00PM on his *boss's* wristwatch
(which trumps his), etc.

And, was it *today* at 5:00PM? Which "today"? (given that all
of these are arbitrary time references)

>> What if he notices the date is off and it's really "tomorrow"
>> and advances the date by one day.  Should the alarm be
>> canceled as the appointed time has already passed?  Or,
>> should the date component of the alarm time be similarly
>> advanced?
>
> IMHO if the user set a time using the wall clock convention (shut the door at
> 8:00PM every afternoon), it shouldn't be changed when the calendar time used by
> the system is adjusted. Anyway this should be application dependent.

It's 7:00. I set the device's time to 8:09. Do you close the door,
"9 minutes late"?

I realize my mistake and set the time back to *7*:09. Now what?
It's not yet 8:00 -- why is the door closed?

George Neuner

Jan 4, 2023, 8:12:22 PM
On Wed, 4 Jan 2023 12:19:55 -0700, Don Y <blocked...@foo.invalid>
wrote:

>On 1/4/2023 9:09 AM, pozz wrote:


>I designed a product that had a really sensitive "front end".
>To reduce the impact of the ACmains on our signal, I would
>run the acquisition system at the mains frequency -- and,
>included a setting for 50 vs. 60Hz selection (domestic/foreign
>markets).
>
>We found that blindly assuming the ACmains frequency was
>as stated wasn't enough. Errors in the local oscillator
>and variations in the ACmains introduced differences
>so we had to frequency lock to the actual mains. Then,
>use that to derive all related timing based on the
>observed frequency as expressed by the local oscillator.

You can't assume AC frequency will be held ... generator spin rates
are not constant under changing loads, and rectification is more
difficult because of the high voltages and the fact that generators
typically are 4..12 multiphase (for efficiency) being squashed into
(some resemblance of) a sine wave.

But the utilities are required to provide the expected number of
cycles in a given period. In the US, that period is contractual (not
law) and typically is from 6..24 hours.

If you've ever seen cycle-driven wall clocks run slow all morning or
afternoon and then suddenly run fast for several minutes just before
noon, or 6pm, or midnight (or all three) ... that's the electric
utility catching up on a low cycle count.


>> Suppose you have some alarms scheduled weekly, for example at 8:00:00 every
>> Monday and at 9:00:00 every Saturday.
>> In the week you have 604'800 seconds.
>> 8:00 on Monday is at 28'800 seconds from the beginning of the week (I'm
>> considering Monday as the first day of the week).
>> 9:00 on Saturday is at 194'400 secs.
>>
>> If the alarms manager is called exactly one time each second, it should be very
>> simple to understand if we are on time for an alarm:
>>
>>    if (now_weekly_secs == 28800) fire_alarm(ALARM1);
>>    if (now_weekly_secs == 194400) fire_alarm(ALARM2);
>
>Can you *guarantee* that it will always be called once and
>exactly once per second? Regardless of other activities that
>may be going on in your design? Including those that you
>haven't yet imagined?

'exactly once' semantics are impossible to guarantee open loop.
Lacking positive feedback, the best you can achieve is 'at most once'
or 'at least once'.


>It is unlikely that this "needs" to be a high priority job.
>If you have to MAKE it such, then you're bastardizing the
>design needlessly.
>
>> Note the equality test. With disequality you can't use this:
>>
>>    if (now_weekly_secs > 28800) fire_alarm(ALARM1);
>>    if (now_weekly_secs > 194400) fire_alarm(ALARM2);
>>
>> otherwise alarms will occur continuously after the deadline. You should tag
>> the alarm as occurred for the current week to avoid firing it again at the next
>> call.
>
>     if (!alarm1_done && now_weekly_seconds > 28800) {
>         fire_alarm(ALARM1);
>         alarm1_done = TRUE;
>     }
>
>assuming that your code doesn't implicitly do this (e.g., one typically
>designs a timer service that lets you schedule alarms and then
>*clears* them (removes them) once they have expired).

Exactly. The above is an example of 'at most once'.
But incomplete: it fails to reset for next week. ;-)
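The missing weekly reset can be sketched like this. Names and the fire counter are illustrative (the counter stands in for the original fire_alarm(ALARM1)), and the wrap test assumes the caller passes a seconds-into-the-week value that restarts near zero at each new week:

```c
#include <stdint.h>

#define ALARM1_SECS 28800u           /* Monday 08:00 into the week */

static uint32_t prev_weekly_secs;
static int alarm1_done;
static int alarm1_fires;             /* proxy for fire_alarm(ALARM1) */

/* 'At most once' per week: the done-flag is re-armed when the weekly
 * second count wraps back toward zero, i.e. a new week has started.
 * The >= test means a skipped tick fires the alarm late, not never. */
void weekly_alarm_poll(uint32_t now_weekly_secs)
{
    if (now_weekly_secs < prev_weekly_secs)  /* wrapped: new week */
        alarm1_done = 0;
    prev_weekly_secs = now_weekly_secs;

    if (!alarm1_done && now_weekly_secs >= ALARM1_SECS) {
        alarm1_fires++;
        alarm1_done = 1;
    }
}
```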


>> ...
>> A second is a very long interval. It's difficult to think of a system that
>> isn't able to meet a one-second deadline in software.
>
>Think harder. :>
>
>If you don't operate in a preemptive environment, then any "task"
>that hogs the processor can screw you over. Or, any *series*
>of task invocations can screw you without individually misbehaving.
>
>In a preemptive environment, any *effective* change in priorities
>can screw you over.

Yup!



>>>>>> What happens if the time doesn't flow in one direction only?
>>>>>
>>>>> Then everything that (implicitly) relies on time to be monotonic is
>>>>> hosed.

Simple enough to maintain monotonically increasing system time (at
least until the counter rolls over, but that's easily handled).
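One common way that rollover is "easily handled" (this is the idiom behind the Linux kernel's time_after macros) is to compare free-running tick counts through a wrapped subtraction:

```c
#include <stdint.h>

/* Rollover-safe "a is at or after b" for a free-running 32-bit tick
 * counter: unsigned subtraction wraps modulo 2^32, and reinterpreting
 * the difference as signed gives the right answer whenever the two
 * instants are within half the counter range (~2^31 ticks) of each
 * other.  (Strictly, the uint-to-int conversion is implementation-
 * defined pre-C23, but two's-complement wrap is universal practice.) */
static int time_after_eq(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) >= 0;
}
```

With deadlines checked this way, timers keep working straight through the counter wrap without any special-casing.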

>>> If multiple processors/nodes are involved, then the uncertainty
>>> between their individual clocks further complicates this.
>>>
>>> And, of course, what do you do if <something> deliberately
>>> introduces a delta to the current time?
>>>
>>> Imagine Bob wants to set an alarm for a meeting at 5:00PM.
>>> He then changes the current time to one hour later -- presumably
>>> because he noticed that the clock was incorrect.  Does that
>>> mean the meeting will be one hour *sooner* than it would
>>> appear to have been, previously?
>>
>> No. The meeting is always at 5:00PM.
>
>But *which* 5:00PM? There's the 5:00PM on his wristwatch,
>the 5:00PM in your device, the 5:00PM on his *boss's* wristwatch
>(which trumps his), etc.
>
>And, was it *today* at 5:00PM? Which "today"? (given that all
>of these are arbitrary time references)

The washer repairman says "I have you down for Tuesday". But where is
down? And which Tuesday?
-- Erma Bombeck



George

Don Y

Jan 4, 2023, 9:41:06 PM
On 1/4/2023 6:12 PM, George Neuner wrote:
> On Wed, 4 Jan 2023 12:19:55 -0700, Don Y <blocked...@foo.invalid>
> wrote:
>> I designed a product that had a really sensitive "front end".
>> To reduce the impact of the ACmains on our signal, I would
>> run the acquisition system at the mains frequency -- and,
>> included a setting for 50 vs. 60Hz selection (domestic/foreign
>> markets).
>>
>> We found that blindly assuming the ACmains frequency was
>> as stated wasn't enough. Errors in the local oscillator
>> and variations in the ACmains introduced differences
>> so we had to frequency lock to the actual mains. Then,
>> use that to derive all related timing based on the
>> observed frequency as expressed by the local oscillator.
>
> You can't assume AC frequency will be held ... generator spin rates
> are not constant under changing loads, and rectification is more
> difficult because of the high voltages and the fact that generators
> typically are 4..12 multiphase (for efficiency) being squashed into
> (some resemblance of) a sine wave.

Yes. Which is why you need a Line Frequency Clock (LFC) in
the design. Anything related to that frequency must be derived
from frequency locking your local oscillator to the "period"
sensed on the LFC.

> But the utilities are required to provide the expected number of
> cycles in a given period. In the US, that period is contractual (not
> law) and typically is from 6..24 hours.

I'm not sure of that, anymore. There was a move to break this
arrangement. That wasn't approved. Then it was (?).

But, my LFC controlled timepieces still keep good time so
I wonder if the utilities just kept it up even if not
obligated to do so (?)

>> It is unlikely that this "needs" to be a high priority job.
>> If you have to MAKE it such, then you're bastardizing the
>> design needlessly.
>>
>>> Note the equality test. With disequality you can't use this:
>>>
>>>    if (now_weekly_secs > 28800) fire_alarm(ALARM1);
>>>    if (now_weekly_secs > 194400) fire_alarm(ALARM2);
>>>
>>> otherwise alarms will occur continuously after the deadline. You should tag
>>> the alarm as occurred for the current week to avoid firing it again at the next
>>> call.
>>
>>     if (!alarm1_done && now_weekly_seconds > 28800) {
>>         fire_alarm(ALARM1);
>>         alarm1_done = TRUE;
>>     }
>>
>> assuming that your code doesn't implicitly do this (e.g., one typically
>> designs a timer service that lets you schedule alarms and then
>> *clears* them (removes them) once they have expired).
>
> Exactly. The above is an example of 'at most once'.
> But incomplete: it fails to reset for next week. ;-)

Details included on sheet #2...

>>>>>>> What happens if the time doesn't flow in one direction only?
>>>>>>
>>>>>> Then everything that (implicitly) relies on time to be monotonic is
>>>>>> hosed.
>
> Simple enough to maintain monotonically increasing system time (at
> least until the counter rolls over, but that's easily handled).

For many devices, there's nothing wrong with starting the counter from 0
at each reset. As most don't run for years on end, the possibility of
rollover is greatly reduced.

The important thing is not to tie your internal representation
of "time" to the wall clock. Handle that separately so you
can let the latter shift back and forth, relative to the former,
without mucking up the timing of algorithms.

Too often, I've seen folks think they are clever by using ONE
notion of time throughout their design. This forces them to
deal with the issue of allowing one *representation* to be
changed without impacting the other concurrent uses.
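A minimal sketch of that separation, with invented names and the monotonic counter reduced to a plain variable so the idea can be exercised (in a real system it would be advanced by a timer interrupt and never written otherwise):

```c
#include <stdint.h>
#include <time.h>

static uint32_t mono_secs;     /* free-running, never stepped back */
static time_t   wall_offset;   /* wall clock = mono + offset */

/* Algorithms and deadlines consume mono_secs directly; only the
 * human-facing display goes through this conversion. */
time_t wall_time(void)
{
    return (time_t)mono_secs + wall_offset;
}

/* User "sets the clock": only the offset moves.  Anything scheduled
 * in monotonic time is unaffected by the adjustment. */
void wall_time_set(time_t now)
{
    wall_offset = now - (time_t)mono_secs;
}
```

Letting the wall clock shift back and forth relative to the monotonic counter is exactly what keeps NTP steps, DST changes, and user fiddling from perturbing interval timing.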

[What do you do if someone wants to set your clock/calendar
to unusual times -- e.g., during the Gregorian switchover.
Or, to "prehistory"? Or, to a specific leap-second??]

Human-Time is just a royal PITA.
