[Some aggressive eliding as we're getting pretty far afield of
"exception vs. error code"]
> Your particular example isn't possible, but other things are -
> including having values seem to appear or disappear when they are
> examined at different points within your transaction.
But the point of the transaction is to lock these changes
(or recognize their occurrence) so this "ambiguity" can't
manifest. (?)
The "client" either sees the result of entire transaction or none of it.
> There absolutely IS the notion of partial completion when you use
> inner (ie. sub-) transactions, which can succeed and fail
> independently of each other and of the outer transaction(s) in which
> they are nested. Differences in isolation can permit side effects from
> other ongoing transactions to be visible.
But you don't expose those partial results (?) How would the client
know that he's seeing partial results?
>>>>> So-called 'edge' computing largely is based on distributed tuple-space
>>>>> models specifically /because/ they are (or can be) self-organizing and
>>>>> are temporally decoupled: individual devices can come and go at will,
>>>>> but the state of ongoing computations is maintained in the fabric.
>>>>
>>>> But (in the embedded/RT system world) they are still "devices" with
>>>> specific functionalities. We're not (yet) accustomed to treating
>>>> "processing" as a resource that can be dispatched as needed. There
>>>> are no mechanisms where you can *request* more processing (beyond
>>>> creating another *process* and hoping <something> recognizes that
>>>> it can co-execute elsewhere)
>>>
>>> The idea doesn't preclude having specialized nodes ... the idea is
>>
>> I'm arguing for the case of treating each node as "specialized + generic"
>> and making the generic portion available for other uses that aren't
>> applicable to the "specialized" nature of the node (hardware).
>>
>> Your doorbell sits in an idiot loop waiting to "do something" -- instead
>> of spending that "idle time" working on something *else* so the "device"
>> that would traditionally be charged with doing that something else
>> can get by with less resources on-board.
>>
>> [I use cameras galore. Feeding all that video to a single
>> "PC" would require me to keep looking for bigger and faster PCs!]
>
> And if frames from the camera are uploaded into a cloud queue, any
> device able to process them could look there for new work. And store
> its results into a different cloud queue for the next step(s). Faster
> and/or more often 'idle' CPUs will do more work.
That means every CPU must know how to recognize that sort of "work"
and be able to handle it. Each of those nodes then bears a cost
even if it doesn't actually end up contributing to the result.
It also makes the "cloud" a shared resource akin to the "main computer".
What do you do when it isn't available?
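To make that cost concrete: the pull model amounts to a loop like the
following running on *every* candidate CPU (the queue API and names here
are invented for illustration, not anyone's actual interface):

    /* Hypothetical worker loop. Every node that might help must carry
       this code AND the frame-processing logic, whether or not it
       ever contributes a result. */
    typedef struct frame  frame_t;    /* opaque types, for the sketch */
    typedef struct result result_t;
    typedef struct queue  queue_t;

    void worker(queue_t *frames_in, queue_t *results_out) {
        for (;;) {
            frame_t  *f = queue_take(frames_in);  /* blocks on the cloud queue */
            result_t *r = process_frame(f);       /* the actual work */
            queue_put(results_out, r);            /* feed the next stage */
        }
    }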
If the current resource set is insufficient for the current workload,
then (by definition) something has to be shed. My "workload manager"
handles that -- deciding that there *is* a resource shortage (by looking
at how many deadlines are being missed/aborted) as well as sorting out
what the likeliest candidates to "off-migrate" would be.
Similarly, deciding when there is an abundance of resources that
could be offered to other nodes.
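The core of that decision, as a minimal sketch (the counters, thresholds
and helpers are hypothetical stand-ins, not my actual API):

    void balance(void) {
        if (deadline_misses() > MISS_THRESHOLD) {
            /* shortage: shed the most portable, least node-specific task */
            task_t *victim = pick_offmigration_candidate();
            node_t *host   = find_node_with_headroom();
            if (host != NULL)
                migrate(victim, host);  /* else keep it until a home appears */
        } else if (idle_capacity() > OFFER_THRESHOLD) {
            advertise_spare_capacity(); /* let other nodes push work here */
        }
    }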
So, if a node is powered up *solely* for its compute resources
(or its unique hardware-related tasks have been satisfied) AND
it discovers another node(s) with enough resources to address
its needs, it can push its workload off to that/those node(s) and
then power itself down.
Each node effectively implements part of a *distributed* cloud
"service" by holding onto resources as they are being used and
facilitating their distribution when there are "greener pastures"
available.
But, unlike a "physical" cloud service, they accommodate the
possibility of "no better space" by keeping the resources
(and loads) that already reside on themselves until such a place
can be found -- or created (i.e., bring more compute resources
on-line, on-demand). They don't have the option of "parking"
resources elsewhere, even as a transient measure.
When a "cloud service" is unavailable, you have to have a backup
policy in place as to how you'll deal with these overloads.
> Pipelines can be 'logical' as well as 'physical': opportunistically
> processed data queues qualify as 'pipeline' stages.
>
> You often seem to get hung up on specific examples and fail to see how
> the idea(s) can be applied more generally.
I don't have a "general" system. :> And, suspect future (distributed)
embedded systems will shy away from the notion of any centralized "controller
node" for the obvious dependencies that that imposes on the solution.
Sooner or later, that node will suffer from scale. Or, reliability.
(one of the initial constraints I put on the system was NOT to rely on
any "outside" service; why not use a DBMS "in the cloud"? :> )
It's only a matter of time before we discover some egregious data
breach or system unavailability related to cloud services. You're
reliant on that service keeping itself available for YOUR operation
AND the fabric to access it being operational. Two big dependencies
that you have no control over (beyond paying your usage fees).
>>> simply that if a node crashes, the task state [for some approximation]
>>> is preserved "in the cloud" and so can be restored if the same node
>>> returns, or the task can be assumed by another node (if possible).
>>>
>>> It often requires moving code as well as data, and programs need to be
>>> written specifically to regularly checkpoint / save state to the
>>> cloud, and to be able to resume from a given checkpoint.
>
> TS models produce an implicit 'sequence' checkpoint with every datum
> tuple uploaded into the cloud. In many cases that sequencing is all
> that's needed by external processes to accomplish the goal.
I can get that result just by letting a process migrate itself to its
current node -- *if* it wants to "remember" that it can resume cleanly
from that point (but not any point beyond that unless side-effects
are eliminated). The first step in the migration effectively creates
the process snapshot.
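In effect (a sketch, assuming migration already pauses a task and
captures its memory objects; all names here are invented):

    checkpoint_t *checkpoint(task_t *t) {
        pause(t);
        checkpoint_t *snap = capture_memory_objects(t); /* migration's 1st step */
        resume(t);    /* resume in place instead of shipping the task off */
        return snap;  /* usable only if no external side-effects follow */
    }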
There is overhead to taking that snapshot -- or pushing those
"intermediate results" to the cloud. You have to have designed your
*application* with that in mind.
Just like an application can choose to push "temporary data" into
the DBMS, in my world. And, incur those costs at run-time.
The more interesting problem is seeing what you can do "from the
outside" without the involvement of the application.
E.g., if an application had to take special measures in order to
be migrate-able, then I suspect most applications wouldn't be!
And, as a result, the system wouldn't have that flexibility.
OTOH, if the rules laid out for the environment allow me to wedge
that type of service *under* the applications, then there's no
cost-adder for the developers.
> Explicit checkpoint is required to resume only when processing is so
> time consuming that you /expect/ the node may fail (or be reassigned)
> before completing work on its current 'morsel' of input. To avoid
> REDOing lots of work - e.g., by starting over - it makes more sense to
> periodically checkpoint your progress.
>
> Different meta levels.
>
> Processing of your camera video above is implicitly checkpointed with
> every frame that's completed (at whatever stage). It's a perfect
> situation for distributed TS.
But that means the post processing has to happen WHILE the video is
being captured. I.e., you need "record" and "record-and-commercial-detect"
primitives. Or, to expose the internals of the "record" operation.
Similarly, you could retrain the speech models WHILE you are listening
to a phone call. But, that means you need the horsepower to do so
AT THAT TIME, instead of just capturing the audio ("record") and
doing the retraining "when convenient" ("retrain").
I've settled on simpler primitives that can be applied in more varied
situations. E.g., you will want to "record" the video when someone
wanders onto your property. But, there won't be any "commercials"
to detect in that stream.
Trying to make "primitives" that handle each possible combination of
actions seems like a recipe for disaster; you discover some "issue"
and handle it in one implementation and imperfectly (or not at all)
handle it in the other. "Why does it work if I do 'A then B' but
'B while A' chokes?"
>> Yes. For me, all memory is wrapped in "memory objects". Each has particular
>> attributes (and policies/behaviors), depending on its intended use.
>>
>> E.g., the TEXT resides in an R/O object ...
>> The DATA resides in an R/W object ...
>>
>> I leverage my ability to "migrate" a task (task is resource
>> container) to *pause* the task and capture a snapshot of each
>> memory object (some may not need to be captured if they are
>> copies of identical objects elsewhere in the system) AS IF it
>> was going to be migrated.
>>
>> But, instead of migrating the task, I simply let it resume, in place.
>>
>> The problem with this is watching for side-effects that happen
>> between snapshots. I can hook all of the "handles" out of the
>> task -- but, no way I can know what each of those "external
>> objects" might be doing.
>
> Or necessarily be able to reconnect the plumbing.
If the endpoint objects still exist, the plumbing remains intact
(even if the endpoints have moved "physically").
If an endpoint is gone, then every reference (all of them!) is notified
and has to do its own cleanup. But, that's the case even in "normal
operation" -- the "exceptions" we've been talking about.
>> OTOH, if I know that no external references have taken place since the
>> last "snapshot", then I can safely restart the task from the last
>> snapshot.
>>
>> It is great for applications that are well suited to checkpointing,
>> WITHOUT requiring the application to explicitly checkpoint itself.
>
> The point I was making above is that TS models implicitly checkpoint
> when they upload data into the cloud. If that data contains explicit
> sequencing, then it can be an explicit checkpoint as well.
>
> Obviously this depends on how you write the program and the
> granularity of the data. A program like your AI that needs to
> save/restore a whole web of inferences is very different from one that
> when idle grabs a few frames of video and transcodes them to MPG.
Remember, we're (I'm) trying to address something as "simple" as
"exceptions vs error codes", here. Expecting a developer to write
code with the notion of partial recovery in mind goes far beyond
that!
He can *choose* to structure his application/object/service in such
a way that makes that happen. Or not.
E.g., the archive DB treats each "file processed/examined" as a
single event. Kill off the process before a file is completely
processed and it will look like NO work was done for that file.
Kill off the DB before it can be updated to REMEMBER the work
that was done and the same is true.
So, I can SIGKILL the process and restart it at any time, knowing
that it will eventually sort out where it was when it died
(it may have a different workload to process *now* but that's
just a consequence of calendar time elapsing (e.g., "list files
that haven't been verified in the past N hours")).
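The shape of that loop, as a sketch (the db_* helpers and names are
stand-ins, not the actual DBMS interface):

    void verify_pending(db_t *db) {
        file_id_t f;
        while (next_unverified(db, &f)) { /* "not verified in past N hours" */
            db_begin(db);
            verify_file(f);               /* the actual work */
            mark_verified(db, f);         /* record commits atomically, so a */
            db_commit(db);                /* SIGKILL leaves all-or-nothing   */
        }
    }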
I think it's hard to *generally* design solutions that can be
interrupted and partially restored. You have to make a deliberate
effort to remember what you've done and what you were doing.
We seem to have developed the habit/practice of not "formalizing"
intermediate results as we expect them to be transitory.
[E.g., if I do an RMI to a node that uses a different endianness,
the application doesn't address that issue by representing the
data in some endian-neutral manner. Instead, the client-/server-side
stubs handle that without the caller knowing it is happening.]
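For a 32-bit value, the stub's contribution amounts to no more than
something like this (htonl()/ntohl() are standard; the framing is
illustrative):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    /* done inside the stubs, invisible to the caller */
    void marshal_u32(uint8_t *buf, uint32_t v)
        { uint32_t w = htonl(v); memcpy(buf, &w, 4); }
    uint32_t unmarshal_u32(const uint8_t *buf)
        { uint32_t w; memcpy(&w, buf, 4); return ntohl(w); }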
*Then*, you need some assurance that you *will* be restarted; otherwise,
the progress that you've already made may no longer be useful.
I don't, for example, universally use my checkpoint-via-OS hack
because it will cause more grief than it will save. *But*, if
a developer knows it is available (as a service) and the constraints
of how it works, he can offer a hint (at install time) to suggest
his app/service be installed with that feature enabled *instead* of
having to explicitly code for resumption.
Again, my goal is always to make it more enticing for you to do things
"my way" than to try to invent your own mechanism -- yet not
forcing you to comply!
>>> The "tuple-space" aspect specifically is to coordinate efforts by
>>> multiple nodes without imposing any particular structure or
>>> communication pattern on participating nodes ... with appropriate TS
>>> support many different communication patterns can be accommodated
>>> simultaneously.
>
> :
>
>>> For many programs, checkpoint data will be much more compact than a
>>> snapshot of the running process, so it makes more sense to design
>>> programs to be resumed - particularly if you can arrange that reset of
>>> a faulting node doesn't eliminate the program, so code doesn't have to
>>> be downloaded as often (or at all).
>>
>> Yes, but that requires more skill on the part of the developer.
>> And, makes it more challenging for him to test ("What if your
>> app dies *here*? Have you checkpointed the RIGHT things to
>> be able to recover? And, what about *here*??")
>
> Only for resuming non-sequenced work internal to the node. Whether
> you need to do this depends on the complexity of the program.
Of course. But, you have to be aware of "what won't have been done"
when you are restored from a checkpoint and ensure that you haven't
done something that is difficult to undo (or unsafe to redo).
> Like I said, if the work is simple enough to just do over, the
> implicit checkpoint of a datum being in the 'input' queue may be
> sufficient.
>
> There are TS models that actively support the notion of 'checking out'
> work, tracking who is doing what, timing-out unfinished work,
> restoring 'checked-out' (removed) data, ignoring results from
> timed-out workers (should the result show up eventually), etc.
I limit the complexity to just tracking local load and local resources.
If I push a job off to another node, I have no further knowledge of
it; it may have subsequently been killed off, etc. (e.g., maybe it failed
to meet its deadline and was aborted)
No "one" is watching the system to coordinate actions. I'm not worried
about "OPTIMAL load distribution" as the load is dynamic and, by the time
the various agents recognize that there may be a better "reshuffling"
of tasks, the task set will likely have changed.
What I want to design against is the need to over-specify resources
*just* for some "job" that may be infrequent or transitory. That
leads to nodes costing more than they have to. Or, doing less than
they *could*!
If someTHING can notice imbalances in resources/demands and dynamically
adjust them, then one node can act as if it has more "capability" than
its own hardware would suggest.
[I'm transcoding some videos for SWMBO in the other room. And, having to
wait for that *one* workstation to finish the job. Why can't *it* ask
for help from any of the 4 other machines presently running in the house?
Why do *I* have to distribute the workload if I want to finish sooner?]
> The TS server is more complicated, but the clients don't have to be.
>
>> I'm particularly focused on user-level apps (scripts) where I can
>> build hooks into the primitives that the user employs to effectively
>> keep track of what they've previously been asked to do -- keeping in
>> mind that these will tend to be very high-levels of abstraction
>> (from the user's perspective).
>>
>> E.g.,
>> At 5:30PM record localnews
>> At 6:00PM record nationalnews
>> remove_commercials(localnews)
>> remove_commercials(nationalnews)
>> when restarted, each primitive can look at the current time -- and state
>> of the "record" processes -- to sort out where they are in the sequence.
>> And, the presence/absence of the "commercial-removed" results. (obviously
>> you can't record a broadcast that has already ended so why even try!)
>>
>> Note that the above can be a KB of "code" + "state" -- because
>> all of the heavy lifting is (was?) done in other processes.
>>
>>> Even if the checkpoint data set is enormous, it often can be saved
>>> incrementally. You then have to weigh the cost of resuming, which
>>> requires the whole data set be downloaded.
>
> Well, recording is a sequential, single node process. Obviously
> different nodes can record different things simultaneously.
I push frames into an object ("recorder"). It's possible that a new recorder
could distribute those frames to a set of cooperating nodes. But, the intent
is for the "recorder" to act as an elastic store (the "real" store may not
have sufficient bandwidth to handle all of the instantaneous demands placed
on it so let the recorder buffer things locally) as it moves frames onto
the "storage medium" (another object).
I can, conceivably, arrange for the "store object" to be a "commercial
detector" but that requires the thing that interprets the script to recognize
this possibility for parallelism instead of just processing the script
as a "sequencer".
But, I want to ensure the policy decisions aren't embedded in the
implementation. E.g., if I want to preserve a "raw" version of the video
(to guard against the case where something may have been elided that
was NOT a commercial), then I should be able to do so.
Or, if I want to represent the "commercial detected" version as a *script*
that can be fed to the video player ("When you get to timestamp X, skip
forward to timestamp Y").
> But - depending on how you identify content vs junk - removing the
> commercials could be done in parallel by a gang, each of which needs
> only to look at a few video frames at a time.
>
>> OTOH, if you want to do something at 9:05 -- assuming it is 9:00 now -- you
>> set THAT timer based on the wall time. The guarantee it gives is that
>> it will trigger at or after "9:05"... regardless of how many seconds elapse
>> between now and then!
>>
>> So, if something changes the current wall time, the "in 5 minutes" timer
>> will not be affected by that change; it will still wait the full 300 seconds.
>> OTOH, the timer set for 9:05 will expire *at* 9:05. If the *current* notion
>> of wall time claims that it is now 7:15, then you've got a long wait ahead!
>>
>> On smaller systems, the two ideas of time are often closely intertwined;
>
> In large systems too!
>
> If time goes backwards, all bets are off. Most systems are designed
> so that can't happen unless a privileged user intervenes. System time
> generally is kept in UTC and 'display' time is computed wrt system
> time when necessary.
>
> But your notion of 'wall time' seems unusual: typically it refers to
> a notion of time INDEPENDENT of the computer - ie. the clock on the
> wall, the watch on my wrist, etc. - not to whatever the computer may
> /display/ as the time.
The "computer" (system) has no need for the wall time, except as a
convenient reference frame for activities that interact with the user(s).
OTOH, it *does* need some notion of "time" ("system time") in order to
make scheduling and resource decisions.
E.g., I can get a performance metric from the video transcoder and
use that to predict *when* the transcoding task will be complete.
With this, I can decide whether or not some other task(s) will be
able to meet their deadline(s) IF THE TRANSCODER IS CONSUMING RESOURCES
for that interval. And, decide whether I should kill off the transcoder
to facilitate those other tasks meeting their deadlines *or* kill off
(not admit) the other task(s) as I know they won't meet *their*
deadlines while the transcoder is running.
None of those decisions require knowledge of the angular position of
the earth on its axis.
> Ie. if you turn back the wall clock, the computer doesn't notice. If
That's not true in all (embedded) systems. Often, the system time
and wall time are linear functions of each other. In effect, when
you say "do something at 9:05", the current wall time is used to
determine how far in the future (past?) that will be. And, from this,
compute the associated *system* time -- which is then used as the "alarm
time". You've implicitly converted an absolute time into a relative
time offset -- by assuming the current wall time is immutable.
"Wall time" is an ephemeral concept as far as the system is concerned.
So, change the wall time to 9:04 and be surprised when the event DOESN'T
happen in 60 seconds!
This was my point wrt having two notions of time in a system and
two ways of referencing "it" (them?)
In my system, if you schedule an event for "9:05", then the current
notion of wall time is used to determine if that event should
activate. If you perpetually kept resetting the wall clock to
7:00, then the event would NEVER occur.
By contrast, an event scheduled for "300 seconds from now" WILL
happen in 300 seconds.
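For comparison, POSIX draws roughly the same distinction:
CLOCK_MONOTONIC behaves like my "system time" (it never steps),
CLOCK_REALTIME like the wall clock. E.g.:

    #include <time.h>

    /* "300 seconds from now", armed against the monotonic clock:
       sleeps the full interval no matter what happens to wall time. */
    void wait_300s(void) {
        struct timespec t;
        clock_gettime(CLOCK_MONOTONIC, &t);
        t.tv_sec += 300;
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
    }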
> you turn back the computer's system clock, then you are an
> administrator and you get what you deserve.
But you can't alter the *system* time in my system. It marches steadily
forward. The "wall time", OTOH, is a convenience that can be redefined
at will. Anything *tied* to those references would be at the mercy
of such a redefinition.
So, using system time, I can tell you what the average transfer rate for
an FTP transfer was. If I had examined the *wall* time at the start
and end of the transfer, there's no guarantee that the resulting
computation would be correct (cuz the wall time might have been changed
in that period).
> There are a number of monotonic time conventions, but mostly you just
> work in UTC if you want to ignore local time conventions like
> 'daylight saving' that might result in time moving backwards. Network
> time-set protocols never move the local clock backwards: they adjust
> the length of the local clock tick such that going forward the local
> time converges with the external time at some (hopefully near) point
> in the future.
Even guaranteeing that time never goes backwards doesn't leave time
as a useful metric. If you slow my "wall clock" by 10% to allow
"real" time to catch up to it, then any measurements made with that
"timebase" are off by 10%.
I keep system time pretty tightly synchronized between nodes.
So, time moves at a consistent rate across the system.
*Wall* time, OTOH, is subject to the quality of the references
that I have available. If I am reliant on the user to manually tell
me the current time, then the possibility of large discontinuities
is a real issue. If the user sets the time to 10:00 and you presently
think it to be *12:00*, you can't slowly absorb the difference!
The user would wonder why you were still "indicating" 12:00 despite
his recent (re)setting of the time. The idea that you don't want to
"move backwards" is anathema to him; of COURSE he wants you to move
backwards because IT IS 10:00, NOT 12:00 (in his mind).
[Time is a *huge* project because of all the related issues. You
still need some "reference" for the timebases -- wall and system.
And, a way to ensure they track in some reasonably consistent
manner: 11:00 + 60*wait(60 sec) should bring you to 12:00
even though you're using times from two different domains!]
> You still might encounter leap-seconds every so often, but [so far]
> they have only gone forward so as yet they haven't caused problems
> with computed delays. Not guaranteed though.
Keeping all of that separate from "system time" makes system time
a much more useful facility. A leap second doesn't turn a 5 second
delay into a *6* second delay (if the leap second manifested within
that window).
And, if the wall time was ignorant of the leap second's existence,
the only consequence would be that the "external" notion of
"current time of day" would be off by a second. If you've set
a task to record a broadcast at 9:00, it will actually be recorded
at 8:59:59 (presumably, the broadcaster has accounted for the
leap second in his notion of "now", even if you haven't). The *user*
might complain but then the user could do something about it
(including filing a bug report).
> My AT parser produced results that depended on current system time to
> calculate, but the results were fixed points in UTC time wrt the 1970
> Unix epoch. The computed point might be 300 seconds or might be
> 3,000,000 seconds from time of the parse - but it didn't matter so
> long as nobody F_d with the system clock.
I have no "epoch". Timestamps reflect the system time at which the
event occurred. If the events had some relation to "wall time"
("human time"), then any discontinuities in that time frame are
the problem of the human. Time "starts" at the factory. Your
system's "system time" need bear no relationship to mine.
E.g., it's 9:00. Someone comes to the door and drops off a package.
Some time later, you (or some agent) change the wall time to
reflect an earlier time. Someone is seen picking it up at 8:52 (!).
How do you present these events to the user? He sees his package
"stolen" before it was delivered! (if you treat wall time as
significant).
But, the "code" knows that the dropoff occurred at system time X
and the theft at X+n so it knows the proper ordering of the events,
even if the (wall) time being displayed on the imagery is "confused".
>> the system tick (jiffy) effectively drives the time-of-day clock. And,
>> "event times" might be bound at time of syscall *or* resolved late.
>> So, if the system time changes (can actually go backwards in some poorly
>> designed systems!), your notion of "the present time" -- and, with it,
>> your expectations of FUTURE times -- changes.
>>
>> Again, it should be a simple distinction to get straight in your
>> head. When you're dealing with times that the rest of the world
>> uses, use the wall time. When you're dealing with relative times,
>> use the system time. And, be prepared for there to be discontinuities
>> between the two!
>
> Yes, but if you have to guard against (for lack of a better term)
> 'timebase' changes, then your only recourse is to use absolute
> countdown.
>
> The problem is, the current state of a countdown has to be maintained
> continuously and it can't easily be used in a 'now() >= epoch' polling
> software timer. That makes it very inconvenient for some uses.
If you want to deal with a relative time -- either SPECIFYING one or
measuring one -- you use the system time. Wanna know how much time
has elapsed since a point in time?
    reference := get_system_time(...)
    ...
    elapsed_time := get_system_time(...) - reference
Want to know how long until some "future" (human time) event?
    wait := scheduled_time - get_wall_time(...)
Note that "wait" can be negative, even if you had THOUGHT it was a
"future" event! And, if something has dicked with the "wall time",
the magnitude is essentially boundless.
OTOH, "elapsed_time" is *always* non-negative. Regardless of what
"time" the clock on the wall claims it to be! And, always reflects
the actual rotation of the Earth on its axis.
> And there are times when you really do want the delay to reflect the
> new clock setting: ie. the evening news comes on at 6pm regardless of
> daylight saving, so the showtime moves (in opposition) with the change
> in the clock.
Then you say "record at 6:00PM" -- using the wall clock time.
If "something" ensures that the wall time is *accurate* AND
reflects savings/standard time changes, the recording will take
place exactly as intended.
Because the time in question is an EXTERNALLY IMPOSED notion of time,
not one inherent to the system.
[When the system boots, it has no idea "what time it is" until it can
get a time fix from some external agency. That *can* be an RTC -- but,
RTC batteries can die, etc. It can be manually specified -- but, that
can be in error. <shrug> The system doesn't care. The *user* might
care (if his shows didn't get recorded at the right times)...]
> Either countdown or fixed epoch can handle this if computed
> appropriately (i.e. daily with reference to calendar) AND the computer
> remains online to maintain the countdown for the duration. If the
> computer may be offline during the delay period, then only fixed epoch
> will work.
You still require some sort of "current time" indicator/reference
(in either timing system).
For me, time doesn't exist when EVERYTHING is off. Anything that
was supposed to happen during that interval obviously can't happen.
And, nothing that has happened (in the environment) can be "noticed"
so there's no way of ordering those observations!
If I want some "event" to be remembered beyond an outage, then
the time of the event has to be intentionally stored in persistent
storage (i.e., the DBMS) and retrieved from it (and rescheduled)
once the system restarts.
These tend to be "human time" events (broadcast schedules, HVAC
events, etc.). Most "system time" events are short-term and don't
make sense spanning an outage *or* aren't particularly concerned
with accuracy (e.g., vacuum the DB every 4 hours).
I have a freerunning timer that is intended to track the passage of
time during an outage (like an RTC would). It can never be set
(reset) so, in theory, tells me what the system timer WOULD have
been, had the system not suffered an outage.
[The passage of time via this mechanism isn't guaranteed to be
identical to the rate time passes on the "real" system timer.
But, observations of it while the system is running give
me an idea as to how fast/slow it may be so I can compute a
one-time offset between its reported value and the deduced
system time and use that to initialize the system time
(knowing that it will always be greater than the system time
at which the outage occurred)]
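The restart arithmetic, roughly (names are hypothetical; 'rate' is the
drift ratio observed while the system was running):

    #include <stdint.h>

    /* persisted periodically while running: */
    extern uint64_t freerun_then;  /* freerun counter at last update */
    extern uint64_t system_then;   /* system time at last update     */
    extern double   rate;          /* system ticks per freerun tick  */

    void restore_system_time(void) {
        uint64_t freerun_now = read_freerun_counter();
        uint64_t system_now  = system_then
            + (uint64_t)((freerun_now - freerun_then) * rate);
        set_system_time(system_now); /* always >= system_then, by construction */
    }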
[[I use a similar scheme in my digital clocks, using the AC mains
frequency to "discipline" the XTAL oscillator so it tracks,
long term]]
I have several potential sources for "wall time":
- the user (boo hiss!)
- WWVB
- OTA DTV
- GPS
each of which may/mayn't be available and has characteristics
that I've previously observed (e.g., DTV time is sometimes
off by an hour here, as we don't observe DST).
I pick the best of these, as available, and use that to
initialize the wall time. Depending on my confidence in
the source, I may revise the *system* time FORWARD by some
amount (but never backwards). Because the system time
has to be well-behaved.
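In outline (the ranking and all names here are assumptions, for
illustration only):

    typedef enum { SRC_GPS, SRC_WWVB, SRC_DTV, SRC_USER, SRC_NONE } timesrc_t;

    void init_wall_time(void) {
        timesrc_t s = best_available();   /* ranked by observed quality */
        if (s == SRC_NONE)
            return;                       /* run without wall time, for now */
        set_wall_time(read_source(s));
        if (trusted(s) && implied_system_time(s) > get_system_time())
            advance_system_time(implied_system_time(s)); /* forward only */
    }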
>> It may seem trivial but if you are allowing something to interfere with
>> your notion of "now", then you have to be prepared when that changes
>> outside of your control.
>>
>> [I have an "atomic" clock that was off by 14 hours. WTF??? When
>> your day-night schedule is as freewheeling as mine, it makes a
>> difference if the clock tells you a time that suggests the sun is
>> *rising* when, in fact, it is SETTING! <frown>]
>
> WTF indeed. The broadcast is in UTC or GMT (depending), so if your
> clock was off it had to be because its offset was wrong.
There are only 4 offsets: Pacific, Mountain, Central and Eastern
timezones. It's wrong half of the year due to our lack of DST
(so, I tell it we're in California for that half year!)
Something internally must have gotten wedged and, because it is battery
backed, never got unwedged (I pulled the batteries out when I noticed
this problem and let it "reset" itself).
> I say 'offset' rather than 'timezone' because some "atomic" clocks
> have no setup other than what is the local time. Internally, the
> mechanism just notes the difference between local and broadcast time
> during setup, and if the differential becomes wrong it adjusts the
> local display to fix it.
This adjusts the broadcast time based on the TZ you have selected.
I.e., it wouldn't work in Europe. Or AK/HI! etc.
> I have an analog electro-mechanical (hands on dial) "atomic" clock
> that does this. Position the hands so they reflect the current local
> time and push a button on the mechanism. From that point the time
> broadcast keeps it correct [at least until the batteries die].
This just waits until it acquires signal. Then, sets itself to
the broadcast time, offset by the specified TZ.
I would never have noticed it as its only role is as a bedside clock.
So, only consulted when I'm thinking about getting up.
If it's "light" -- or "dark"! -- then getting up is just a decision
made based on my own personal preference.
OTOH, if it is dusk/dawn, then I have to think about whether or not
there are any commitments that I have to meet. Evening walk. Morning
shopping trip. Doctor appointment later that day. etc.
In this case, I'd gone to bed in the late afternoon. When I awoke,
the clock suggested it was early morning -- just after sunup.
(Ahhh... a nice long sleep!)
But, C observed: "Well, that wasn't a very long nap..." I.e., it
was *still* early evening, not dawn! :< The "low light" condition
I'd observed was the sun waning, not waxing. (I can't see where the
sun is located in the sky from my bedroom)
----
Now, back to my error/exception problem. I have to see if there are
any downsides to offering a dual API to address each developer's
"style"...