I'm asking because I'm exploring the use of Posix timers to implement Fl::add_timeout()
for the Wayland platform (see the timer_create() and timer_settime() functions). These functions allow the timer to trigger either by delivering a signal or by starting a thread.
I've chosen the thread option, and have the thread call Fl::awake(cb, data) so that the timer's callback then gets called by the main thread in its event loop.
@Manolo: Before you dive too deep into a specific implementation for Wayland I'd like to share some thoughts I've been having for some time now about unifying the timer handling on all platforms. I believe that the Linux timer implementation is superior to the Windows and maybe also the macOS implementation. The Linux timer implementation works like this (maybe over-simplified):
(1) Every call to Fl::add_timeout() or Fl::repeat_timeout() adds a timer entry to the internal timer queue. This queue is sorted by the timer's due time.
(2) There's only one system timer, using the smallest delta value, i.e. the time of the first timer in the queue.
(3) Whenever the timer triggers (or maybe more often) the event handling decrements the delta time of all timers.
(4) The callbacks of all expired timers are called.
(5) A new timer with the shortest delay (which is always the first timer in the queue) is scheduled.
(6) Wait for timer events...
This is AFAICT done because the standard Unix timers can be interrupted and need to be re-scheduled whenever such interrupts occur.
The benefit of this approach is described in the Fl::repeat_timeout() docs: if the call to Fl::repeat_timeout() happens "late", the delay can be corrected, and the overall timer sequence of repeated timers is more accurate than on other platforms.
On the Windows platform we're (AFAICT) using one system timer per Fl::add/repeat_timeout() call. The Windows timer events are less accurate anyway, but a change as designed for Unix/Linux could probably contribute to more accuracy of repeated timer events because the correction of the timer delay as on Unix/Linux could work better (it does not currently on Windows).
I know less about the macOS platform, but I know for sure that the timer handling is different. There are inconsistencies WRT Unix/Linux/Windows at the user-visible level (which I intend to demonstrate with a test program anyway), but these are too difficult to cover here (and off-topic now).
That all said: I hope that the Wayland implementation would be basically like the Unix/Linux timer queue handling so we can easily unify all platforms.
More about the unification: I'm thinking of a platform-independent timer queue where Fl::add_timeout() and friends would be platform-independent. They would add an Fl_Timeout_XX object to the timer queue, which may contain platform-specific timer data (or not?). Triggering the timeout would then, as always, be done by the system; the timer queue handling would still be platform-independent, as well as calling the callbacks etc. The more I think about it, the more I believe that only the scheduling of this single timer event would be a platform-dependent (i.e. system driver) function.
On Monday, 5 July 2021 at 22:02:45 UTC+2, Albrecht Schlosser wrote:
@Manolo: Before you dive too deep into a specific implementation for Wayland I'd like to share some thoughts I've been having for some time now about unifying the timer handling on all platforms. I believe that the Linux timer implementation is superior to the Windows and maybe also the macOS implementation. The Linux timer implementation works like this (maybe over-simplified):
I believe this means the timer implementation for the X11 FLTK platform (which covers Linux but also Unix and Darwin).
(1) Every call to Fl::add_timeout() or Fl::repeat_timeout() adds a timer entry to the internal timer queue. This queue is sorted by the timer's due time.
(2) There's only one system timer, using the smallest delta value, i.e. the time of the first timer in the queue.
In my view, there's no system timer at all. FLTK sets the max length of the next select/poll call to the smallest delta value, which has the effect of breaking the event loop at the desired time. This setup is possible because with X11 (and with Wayland too) the event loop is built using a select/poll call that returns when data arrive on a fd or when the waiting delay expires.
(3) Whenever the timer triggers (or maybe more often) the event handling decrements the delta time of all timers.
I find this procedure awkward, even though it's correct.
(4) All timers callbacks of expired timers are called.
(5) A new timer with the shortest delay (which is always the first timer in the queue) is scheduled.
(6) Wait for timer events...
In my view, there are no real timer events: the poll/select call expires.
This is AFAICT done because the standard Unix timers can be interrupted and need to [be] re-scheduled whenever such interrupts occur.
The benefit of this approach is described in the Fl::repeat_timeout() docs: if the call to Fl::repeat_timeout() happens "late", the delay can be corrected, and the overall timer sequence of repeated timers is more accurate than on other platforms.
On the Windows platform we're (AFAICT) using one system timer per Fl::add/repeat_timeout() call. The Windows timer events are less accurate anyway, but a change as designed for Unix/Linux could probably contribute to more accuracy of repeated timer events because the correction of the timer delay as on Unix/Linux could work better (it does not currently on Windows).
I know less about the macOS platform, but I know for sure that the timer handling is different. There are inconsistencies WRT Unix/Linux/Windows at the user-visible level (which I intend to demonstrate with a test program anyway), but these are too difficult to cover here (and off-topic now).
The macOS FLTK platform uses a system timer: the event loop is built around a function that "waits until an event arrives", and Fl::add_timeout creates a system object that makes the waiting function run the timer cb when the delay has expired.
My idea was to also use a true system timer for the Wayland platform (but that could be for all Linux). Posix timers do that.
They trigger either a signal or a thread after a specified delay. With the thread approach, having the child thread call Fl::awake(cb, data) allows the main thread to stop waiting and process the timeout cb.
That all said: I hope that the Wayland implementation would be basically like the Unix/Linux timer queue handling so we can easily unify all platforms.
More about the unification: I'm thinking of a platform-independent timer queue where Fl::add_timeout() and friends would be platform-independent. They would add an Fl_Timeout_XX object to the timer queue, which may contain platform-specific timer data (or not?). Triggering the timeout would then, as always, be done by the system; the timer queue handling would still be platform-independent, as well as calling the callbacks etc. The more I think about it, the more I believe that only the scheduling of this single timer event would be a platform-dependent (i.e. system driver) function.
As written above, the X11 approach uses the fd through which all X11 data arrives and the poll/select call on this fd to simulate timeout events: it reduces the max waiting time of the poll/select call. Is your idea to change the organization of the event loop of other platforms (namely macOS) and have it wait for GUI events for a time determined by the next scheduled timeout?
I know less about the macOS platform, but I know for sure that the timer handling is different. There are inconsistencies WRT Unix/Linux/Windows at the user-visible level (which I intend to demonstrate with a test program anyway), but these are too difficult to cover here (and off-topic now).
The macOS FLTK platform uses a system timer: the event loop is built around a function that "waits until an event arrives", and Fl::add_timeout creates a system object that makes the waiting function run the timer cb when the delay has expired. Is it correct that this is one distinct system timer per timer queue entry?
My idea was to also use a true system timer for the Wayland platform (but that could be for all Linux). Posix timers do that.
They trigger either a signal or a thread after a specified delay. With the thread approach, having the child thread call Fl::awake(cb, data) allows the main thread to stop waiting and process the timeout cb.
Hmm, is this a necessary change for the Wayland platform, or do you want to do it because you find the current implementation "awkward"?
More about the unification: I'm thinking of a platform-independent timer queue where Fl::add_timeout() and friends would be platform-independent. They would add an Fl_Timeout_XX object to the timer queue, which may contain platform-specific timer data (or not?). Triggering the timeout would then, as always, be done by the system; the timer queue handling would still be platform-independent, as well as calling the callbacks etc. The more I think about it, the more I believe that only the scheduling of this single timer event would be a platform-dependent (i.e. system driver) function.
As written above, the X11 approach uses the fd through which all X11 data arrives and the poll/select call on this fd to simulate timeout events: it reduces the max waiting time of the poll/select call. Is your idea to change the organization of the event loop of other platforms (namely macOS) and have it wait for GUI events for a time determined by the next scheduled timeout?
I can't answer this question with yes or no.
My basic idea is to unify (and therefore simplify) the timer event handling on all platforms. I've seen IMHO too much platform specific code to handle timer events. The more platforms we add, the more platform specific code we need to maintain.
My goal is to do as much as possible in platform independent code. This platform independent code would schedule timer events by adding them to the timer event queue - in my model the Unix/Linux code would be a valid implementation. This could be done on all current and future platforms in a platform independent way. There would also be only one platform independent timer event processing function. This could be (like) the current Unix/Linux implementation which decrements the timer delay of all timer events in the queue.
The only platform dependent code should be the scheduling of the timer event. This could be as it is now on Unix/Linux to reduce the select/poll timer to the next timer delay or to schedule exactly one system timer on each platform which would be the delta time to the next (first) timer event. The only requirement is that the resulting timer event would call the platform independent process_timer_event() function.
For further clarification: my model would allow us to use the fd approach we're using now as well as POSIX timers and Windows or macOS timers, as long as we're only scheduling one timer on all systems and we're doing the timer event processing in only one system-independent function. This is basically what I want to achieve.
This would allow us to get the same behavior on all current and future platforms, including the optimal "repeated timer delay correction", with a minimum of platform-specific code. A nice side effect would be that porting to another platform would be simplified.
--
You received this message because you are subscribed to the Google Groups "fltk.coredev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fltkcoredev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/fltkcoredev/42064247-9E8D-48B7-846F-E9BEDD261F81%40gmail.com.
Before actually working on the new proposal I tried to fix two
inconsistencies of the macOS platform. I committed one fix
(87475c20d6cc81912e) and created PR #248. Manolo, could you please
review the PR?
On Wednesday, 7 July 2021 at 16:45:59 UTC+2, Albrecht Schlosser wrote:
Before actually working on the new proposal I tried to fix two
inconsistencies of the macOS platform. I committed one fix
(87475c20d6cc81912e) and created PR #248. Manolo, could you please
review the PR?
OK with this change.
I'm uncertain about what is expected by Fl::repeat_timeout().
Its general goal is to schedule a new timeout at a given delay (δ) after the previous timeout (last_t) was scheduled. My question is: "what should Fl::repeat_timeout() do when it runs after this delay expired (after last_t + δ)?". The current implementation for the macOS platform prioritizes the regularity of timeouts, and schedules a new timeout for last_t + n * δ where n is the smallest integer value that puts this date in the future.
I believe the Unix platform implementation prioritizes the running of timeout callbacks and has the callback run several times without delay.
What do FLTK developers believe should be the priority of Fl::repeat_timeout()?
I think in this code the Unix/Linux implementation would schedule the next timeout immediately so we don't miss a timeout.
Do you say that the macOS code would schedule the next timeout at 10.40? Is this the difference you're talking about?
If this was the case, then the average of the timer delays would suffer significantly (by 0.1/n), whereas the Unix implementation would only "drift" once by 0.01 seconds (and the average would increase by 0.01/n).
What do FLTK developers believe should be the priority of Fl::repeat_timeout()?
My personal opinion is that the next timeout should be scheduled as soon as possible if the calculated "next" timeout has already passed (if I understood your question).
Fl::repeat_timeout() should be triggered as exactly as possible after the point in time where the last (current) timeout should have been triggered plus the delay given as the argument, to "allow for more accurate timing", as the docs express it.
In other words: the above described sequence of n timeouts should not "drift away" as it probably does on Windows in our current implementation because there's no correction applied. I had planned to write such a demo program anyway. I'll do this shortly and post it here.
On Friday, 9 July 2021 at 14:31:35 UTC+2, Albrecht Schlosser wrote:
What do FLTK developers believe should be the priority of Fl::repeat_timeout()?
My personal opinion is that the next timeout should be scheduled as soon as possible if the calculated "next" timeout has already passed (if I understood your question).
Fl::repeat_timeout() should be triggered as exactly as possible after the point in time where the last (current) timeout should have been triggered plus the delay given as the argument, to "allow for more accurate timing", as the docs express it.
In other words: the above described sequence of n timeouts should not "drift away" as it probably does on Windows in our current implementation because there's no correction applied. I had planned to write such a demo program anyway. I'll do this shortly and post it here.
When the timeout is just a little bit late, say by delta, the correct solution is clear: schedule the next timeout to now + delay - delta.
My question arises when the timeout is very late, more than the delay between successive timeouts. What to do in that situation?
Either:
- skip one iteration, because its time is over, and schedule for the next iteration;
- play two iterations without delay in between.
On 9 Jul 2021, at 17:54, Bill Spitzak wrote:
I vaguely remember that repeat_timeout, if the calculated remaining time was zero or negative, would punt and instead act like add_timeout. My feeling was that if a program was too slow, it would end up running the timeouts continuously if the alternative of just calling them immediately were chosen. There was certainly no testing as to whether this was the correct solution or not.
But, to me at least, it sounds like it probably is.
The crux, like Bill said, is that if you are running so slowly that you miss the timeout, then trying to "fill in" all the missing timeouts is only going to make matters worse, I imagine...