I've been thinking about cooperative multithreading,
basically cooperative threading with timeouts,
since about 2016, when I wrote up this idea of the "re-routine",
a sort of outline for cooperative multithreading,
where the idea is that all the calls are idempotent and
memoized, none returns null on success, and their
exceptions are modeled as usual error modes, or for flow
of control with exceptions if that's the thing. Then the
idea is that the executor sort of runs the re-routine,
and figures that whenever it gets a null, it's pending,
so it throws itself out; then when the callback arrives,
when there's a response, it just runs again right through the
re-routine, where of course it's all conditioned on the
idempotent memoized intermediate results, until it completes
or errors. This way what's on the heap is about the same,
the intermediate memoized results, while the routine is
running, but there's no stack at all, so it's not
stack-bound, at all.
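A minimal sketch of that executor loop in Python (all the
names here are hypothetical, just to illustrate the idea):
the calls memoize their results, a missing result means
pending, and the callback simply re-runs the whole routine
from the top. Here a Pending exception stands in for the
null check, as one way the routine "throws itself out"
without retaining any stack.

```python
class Pending(Exception):
    """Raised to throw the routine out; it re-runs on the callback."""

class ReRoutine:
    def __init__(self, body):
        self.body = body    # the routine, written as if blocking
        self.memo = {}      # idempotent, memoized intermediate results
        self.result = None

    def call(self, key, start_async):
        # On re-runs, the memoized result is returned as if the
        # call had been synchronous all along.
        if key in self.memo:
            return self.memo[key]
        start_async(lambda value: self._complete(key, value))
        raise Pending()

    def _complete(self, key, value):
        self.memo[key] = value   # memoize, then re-run from the top
        self.run()

    def run(self):
        try:
            self.result = self.body(self)
        except Pending:
            pass                 # pending: no stack is retained

# Demo with a toy event queue standing in for real async I/O.
events = []

def fetch_a(cb): events.append(lambda: cb(1))
def fetch_b(cb): events.append(lambda: cb(2))

def routine(r):
    a = r.call("a", fetch_a)     # reads as blocking, is re-entrant
    b = r.call("b", fetch_b)
    return a + b

r = ReRoutine(routine)
r.run()                          # throws out at "a", pending
while events:
    events.pop(0)()              # callbacks arrive; routine re-runs
```

Note the routine body runs three times in this demo, but only
the two memoized intermediate results live on the heap between
runs.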
Of course preemptive multithreading, with the thread stack
and context switching, is about the greatest sort of thing
when some code isn't well-behaved, or won't yield; here
though the figuring is about how basically to implement
cooperative multithreading, including timeouts and priority.
Thusly, in the same sort of context as the co-routine is
the re-routine, this being a model of cooperative
multithreading, at least in a module, where it can be
either entirely callback-driven or include timeouts,
and besides, the executor can run its own threads,
stack-bound, for ensuring usual throughput in
case some re-routine is implemented in blocking
fashion.
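For that blocking fallback, a sketch assuming nothing beyond
the standard library: the executor hands a blockingly
implemented step to its own thread pool and bounds it with a
timeout, so the cooperative side keeps its usual throughput.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

pool = ThreadPoolExecutor(max_workers=2)

def blocking_step():
    time.sleep(0.01)    # stands in for a re-routine written blockingly
    return 42

future = pool.submit(blocking_step)
try:
    # The timeout keeps the cooperative executor honest:
    # a misbehaving step can't stall everything else.
    value = future.result(timeout=1.0)
except TimeoutError:
    value = None        # treated as pending or errored
pool.shutdown()
```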
This is a sort of idea where mostly what I
want is that the routine is written as if it were
synchronous and serial and blocking, so it's simple,
and then just the semantics of the adapters, or
the extra work in the glue logic, makes it so that
the serial re-routine is written naturally "in the language",
making it all readable and directly up-front, and
getting the async semantics out of the way.
That's not really relevant here in this context
about "the mathematical infinite and unbounded",
but it results in a sort of "factory industry pattern"
of re-routines, where the implementation of a
routine ends up being as simple as possible and
as close as possible to the language of its predication,
while it's all well-formed and its behavior guaranteed,
so that it can then be implemented variously, local or remote.
I suppose "cooperative multithreading" isn't for everybody,
but re-routines are a great idea.
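As a sketch of that separation (all names hypothetical): the
routine reads as plain serial code, and only the adapter
handed to it differs between a local and a "remote"
implementation, the glue logic carrying the semantics.

```python
# The routine is written once, "in the language",
# as if synchronous and blocking.
def routine(fetch):
    a = fetch("a")
    b = fetch("b")
    return a + b

table = {"a": 1, "b": 2}

# Local adapter: answers directly, in-process.
def local_fetch(key):
    return table[key]

# "Remote" adapter: same contract, but memoizing, as a
# re-routine adapter would; the transport is simulated here.
memo = {}
def remote_fetch(key):
    if key not in memo:
        memo[key] = table[key]   # imagine a network round-trip here
    return memo[key]
```

Either adapter can be swapped in without touching the routine
itself: `routine(local_fetch)` and `routine(remote_fetch)`
give the same result.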
Then the idea of the sliques or monohydra is basically
exactly as for buffering the serving of the I/O's.
I.e., the idea here is that usually request/response
type things for transits can sort of be related to
what goes through DMA and nonblocking or async I/O,
and scatter/gather and vector I/O, getting things flowing,
right by the little ring through their nose.
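For the scatter/gather part, a small POSIX-only sketch with
the standard library's vectored read: one call fills several
preallocated buffers, which is the shape of keeping the I/O's
flowing without per-chunk copies.

```python
import os

# Scatter/gather I/O: one vectored read fills several
# preallocated buffers in a single call.
r, w = os.pipe()
os.write(w, b"hello world!")

bufs = [bytearray(5), bytearray(1), bytearray(6)]  # scatter targets
n = os.readv(r, bufs)   # one call, all three buffers

os.close(r)
os.close(w)
```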
Then, about the infinite and the infinite limit:
it's called "the infinite limit".
Consider a snake that travels from 0 to 1, then 1 to 2.
Not only did it get "close enough" to 1,
it got "far enough" away, to get to 2.
I.e., deductively it crossed that bridge.
In continuous time, ....
It's called the "infinite limit", with the idea being also
that when it results continuous, it's called the
"continuum limit".