
Interrupts on the Concertina II


Quadibloc

Jan 17, 2024, 12:33:23 PM
When a computer receives an interrupt signal, it needs to save
the complete machine state, so that upon return from the
interrupt, the program thus interrupted is in no way affected.

This is because interrupts can happen at any time, and thus
programs don't prepare for them or expect them. Any disturbance
to the contents of any register would risk causing programs to
crash.

The Concertina II has a potentially large machine state which most
programs do not use. There are vector registers, of the huge
kind found in the Cray I. There are banks of 128 registers to
supplement the banks of 32 registers.

One obvious step in addressing this is for programs that don't
use these registers to run without access to those registers.
If this is indicated in the PSW, then the interrupt routine will
know what it needs to save and restore.

A more elaborate and more automated method is also possible.

Let us imagine the computer speeds up interrupts by having a
second bank of registers that interrupt routines use. But two
register banks aren't enough, as many user programs are running
concurrently.

Here is how I envisage the sequence of events in response to
an interrupt could work:

1) The computer, at the beginning of an area of memory
sufficient to hold all the contents of the computer's
registers, including the PSW and program counter, places
a _restore status_ value.

2) The computer switches to the interrupt register bank,
and places a pointer to the restore status in one of the
registers, according to a known convention.

3) As the interrupt service routine runs, the computer,
separately and in the background, saves the registers of the
interrupted program into memory. Once this is complete, the
_restore status_ value in memory is changed to reflect this.

4) The restore status value has _two_ uses.

One is, obviously, that when returning from an interrupt,
there will be a 'return from interrupt' routine that will
either just switch register banks, if the registers aren't
saved yet, or re-fill the registers that are actually in
use (the restore status also indicating what the complement
of registers available to the interrupted program was) from
memory.

The other is that the restore status can be tested. If the
main register set isn't saved yet, then it's too soon after
the interrupt to *switch to another user program* which also
would use the main register set, but with a different set
of saved values. (A sketch of both uses appears at the end
of this post.)

5) Some other factors complicate this.

There may be multiple sets of user program registers to
facilitate SMT.

The standard practice in an operating system is to leave
the privileged interrupt service routine as quickly as
possible, and continue handling the interrupt in an
unprivileged portion of the operating system. However, the
return from interrupt instruction is obviously privileged,
as it allows one to put an arbitrary value from memory into
the Program Status Word, including one that would place
the computer into a privileged state after the return.

That last is not unique to the Concertina II, however. So
the obvious solution, of allowing the kernel to call
unprivileged subroutines - which terminate in a supervisor
call rather than a normal return - has long since been found.
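
To make the protocol above concrete, here is a minimal C sketch of
how the restore status and its two uses might be encoded. Every name
in it is hypothetical, since no encoding has been pinned down:

  #include <stdint.h>

  /* Hypothetical encoding of the _restore status_ value placed at
     the head of the register save area (step 1). */
  enum restore_status {
      RS_SAVE_PENDING  = 0,  /* background save (step 3) not finished */
      RS_SAVE_COMPLETE = 1   /* registers are safely in memory */
  };

  struct save_area {
      volatile uint64_t restore_status;
      uint64_t psw;          /* includes the register-complement bits */
      uint64_t pc;
      uint64_t regs[];       /* only the complement actually in use */
  };

  /* Hypothetical privileged primitives. */
  extern void switch_register_banks(void);
  extern void refill_registers_from(struct save_area *sa);

  /* First use (step 4): return from interrupt. */
  void return_from_interrupt(struct save_area *sa)
  {
      if (sa->restore_status == RS_SAVE_PENDING)
          switch_register_banks();    /* registers still in the bank */
      else
          refill_registers_from(sa);  /* re-fill only registers in use */
  }

  /* Second use: don't dispatch another user program onto the main
     register set until the background save completes. */
  int safe_to_switch_user_program(const struct save_area *sa)
  {
      return sa->restore_status == RS_SAVE_COMPLETE;
  }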

John Savard

Scott Lurndal

Jan 17, 2024, 1:02:28 PM
Quadibloc <quad...@servername.invalid> writes:
>When a computer receives an interrupt signal, it needs to save
>the complete machine state, so that upon return from the
>interrupt, the program thus interrupted is in no way affected.
>
>This is because interrupts can happen at any time, and thus
>programs don't prepare for them or expect them. Any disturbance
>to the contents of any register would risk causing programs to
>crash.

Something needs to preserve state, either the hardware or
the software. Most RISC processors lean towards the latter,
generally for good reason - one may not need to save
all the state if the interrupt handler only touches part of it.

>
>The Concertina II has a potentially large machine state which most
>programs do not use. There are vector registers, of the huge
>kind found in the Cray I. There are banks of 128 registers to
>supplement the banks of 32 registers.
>
>One obvious step in addressing this is for programs that don't
>use these registers to run without access to those registers.
>If this is indicated in the PSW, then the interrupt routine will
>know what it needs to save and restore.

Just like x86 floating point.

>
>A more elaborate and more automated method is also possible.
>
>Let us imagine the computer speeds up interrupts by having a
>second bank of registers that interrupt routines use. But two
>register banks aren't enough, as many user programs are running
>concurrently.
>
>Here is how I envisage the sequence of events in response to
>an interrupt could work:
>
>1) The computer, at the beginning of an area of memory
>sufficient to hold all the contents of the computer's
>registers, including the PSW and program counter, places
>a _restore status_.

Slow DRAM or special SRAMs? The former will add
considerable latency to an interrupt; the latter costs
area (on a per-hardware-thread basis) and raises
floorplanning issues.

Best is to save the minimal amount of state in hardware
and let software deal with the rest, perhaps with
hints from the hardware (e.g. a bit that indicates
whether the FPRs were modified since the last context
switch, etc).
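
A minimal C sketch of that hint-driven approach, assuming a per-thread
dirty bit in the style of lazy FPU switching (the trap re-arming and
helper names here are invented, not any particular ISA's):

  #include <stdbool.h>

  struct fp_context { double fpr[32]; };

  struct thread {
      bool fpu_dirty;            /* set when the thread first writes an FPR */
      struct fp_context fp_save;
      /* ... integer state elided ... */
  };

  /* Hypothetical privileged helpers. */
  extern void store_fprs(struct fp_context *c);
  extern void disable_fpu_access(void);  /* e.g. the CR0.TS trap on x86 */

  /* On a context switch, save FP state only if this thread touched it. */
  void save_fp_state(struct thread *t)
  {
      if (t->fpu_dirty) {
          store_fprs(&t->fp_save);
          t->fpu_dirty = false;
      }
      /* Re-arm the "FPU disabled" trap so the incoming thread's first
         FP instruction faults and sets its own dirty bit. */
      disable_fpu_access();
  }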

EricP

Jan 17, 2024, 3:36:49 PM
Quadibloc wrote:
> When a computer receives an interrupt signal, it needs to save
> the complete machine state, so that upon return from the
> interrupt, the program thus interrupted is in no way affected.

It needs to save the portion of the machine state overwritten by that
interrupt. Often this is a small subset of the whole state because many
interrupts only need a few integer registers, maybe 6 or 7.
Additional integer state will be saved and restored as needed by the call ABI,
so it does not need to be done for every interrupt by the handler prologue.

This allows the OS to save and restore the full state only on a thread
switch, which only happens after something significant occurs that
changes the highest-priority thread on a particular core.
This occurs much less often than interrupts do.

> This is because interrupts can happen at any time, and thus
> programs don't prepare for them or expect them. Any disturbance
> to the contents of any register would risk causing programs to
> crash.
>
> The Concertina II has a potentially large machine state which most
> programs do not use. There are vector registers, of the huge
> kind found in the Cray I. There are banks of 128 registers to
> supplement the banks of 32 registers.

I have trouble imagining what an interrupt handler might use vectors for.

Some OSes deal with this by specifying that drivers can only use integers.
(Graphics drivers get special dispensation but they run in special context.)

> One obvious step in addressing this is for programs that don't
> use these registers to run without access to those registers.
> If this is indicated in the PSW, then the interrupt routine will
> know what it needs to save and restore.
>
> A more elaborate and more automated method is also possible.
>
> Let us imagine the computer speeds up interrupts by having a
> second bank of registers that interrupt routines use. But two
> register banks aren't enough, as many user programs are running
> concurrently.

A consequence of this approach is that the second bank of interrupt
registers is architecturally visible, which means it ties down
resources like physical registers for all those interrupt registers
even when not in use.

And since there are multiple priority interrupt levels each with a bank,
you quickly wind up with hundreds of physical registers mostly sitting
around doing nothing but tied to other contexts.

MitchAlsup1

Jan 17, 2024, 5:15:51 PM
Quadibloc wrote:

> When a computer receives an interrupt signal, it needs to save
> the complete machine state, so that upon return from the
> interrupt, the program thus interrupted is in no way affected.

State needs to be saved, whether SW or HW does the save is a free
variable.

> This is because interrupts can happen at any time, and thus
> programs don't prepare for them or expect them. Any disturbance
> to the contents of any register would risk causing programs to
> crash.

Also note:: the ABA problem can happen when an interrupt transpires
in the middle of an ATOMIC sequence. Thus, My 66000 fails the event
before transferring control to the interrupt handler.

> The Concertina II has a potentially large machine state which most
> programs do not use. There are vector registers, of the huge
> kind found in the Cray I. There are banks of 128 registers to
> supplement the banks of 32 registers.

In the past, each register set could be guarded by an in-use bit,
and save/restore avoided when the set was not in use.

Then again, My 66000 only has 64 registers, and one reason for this
is to keep context switch times minimal, and the SW path from one
context to the next minimal.

> One obvious step in addressing this is for programs that don't
> use these registers to run without access to those registers.

Or not have them to begin with:: Take Vector Registers:: My 66000's
Virtual Vector Method (VVM) enables HW to vectorize a loop with
the property that if an interrupt or exception transpires, the
loop collapses into Scalar mode and the interrupt handler remains
blissfully unaware that the instruction raising the exception is at
a precise point. When control returns, the rest of that loop runs in
Scalar mode, and when control transfers back to the top of the loop,
the loop is re-vectorized. This costs 2 instructions (VEC and LOOP)
and 6 bits of state in the PSL.

> If this is indicated in the PSW, then the interrupt routine will
> know what it needs to save and restore.

I use PSL instead of PSW because the amount of state is a cache line,
not a word, doubleword, or quadword. But space here is at a premium.

> A more elaborate and more automated method is also possible.

> Let us imagine the computer speeds up interrupts by having a
> second bank of registers that interrupt routines use. But two
> register banks aren't enough, as many user programs are running
> concurrently.

You are ignoring several interesting facts about modern interrupt
systems.

a) each GuestOS has its own interrupt table(s) and the Hypervisor
has its own interrupt tables.

b) multiprocessing is a given. There are situations where an interrupt
is sent from a device and a number of cores can respond. The proper
core is the one running at the lowest priority level, and the core
that gets there first.

In My 66000's case, the interrupt is recognized by the core detecting a
write to the raised-interrupt register, then going out and fetching the
interrupt #. This bus transaction can return "here is the interrupt
#" or it can respond with "someone else already got it". In the latter
case, the core goes on doing its current thing. In the former case, the
core responds with "I am going to handle this one", and the interrupt
controller acknowledges the interrupt.

Until the interrupt # returns, the core continues processing whatever
it was doing; then the core goes out and fetches the 5 cache lines
of the interrupt dispatcher thread, and as these lines arrive, they
displace the current thread's lines. So the restore happens before
the save, the save after the reload, and arriving data pushes out
current data.

When control arrives at the interrupt dispatcher, it has a complete set
of registers (minus 2 used to tell the dispatcher which interrupt it is
to handle). So the receiving thread has its SP, its FP, its Root
pointer, and its ASID, and for all practical purposes it begins running
as if the first instruction at the dispatcher saw the 30 registers
exactly as the last instruction left them the last time this thread
ran. Thus, it can hold any variety of pointers to various data
structures the OS deems relevant.

The interrupt dispatcher is 4 instructions long: the first checks
that the interrupt is in bounds, the second directs control elsewhere
if the interrupt is not within bounds, the third transfers control
to the handler (CALL), and when the handler returns, control attempts
to return to the interrupted thread.

Many times the handler schedules SoftIRQs (or DPCs) as cleanup
handlers. The SuperVisor Return instruction (SVR) checks whether there
are scheduled threads above the thread we are attempting to return
control to, and transfers control to them before transferring control
to the interrupted thread. There is no need for SW to sort this out.

> Here is how I envisage the sequence of events in response to
> an interrupt could work:

> 1) The computer, at the beginning of an area of memory
> sufficient to hold all the contents of the computer's
> registers, including the PSW and program counter, places
> a _restore status_.

I found no reason and no rationale to assign any of this to
any specific place in memory.

Instead, the entire Supervisor "Stack" is in a set of control
registers, and from those control registers we can find any
thread and all state associated with them.

The Supervisor Stack contains 4 CRs that point at {HV, GuestHV,
GuestOS, Application}, a 2-bit "who is currently in charge" field, a
6-bit priority, an interrupt table pointer, and a dispatcher.
It is (also) 1 cache line long.
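
Reading that description literally, one could picture the Supervisor
"Stack" as a C struct packed into a single 64-byte line; the field
widths and ordering below are guesses for illustration, not the
actual layout:

  #include <stdint.h>

  struct supervisor_stack {
      uint64_t thread_state[4];  /* -> {HV, GuestHV, GuestOS, Application} */
      uint64_t in_charge : 2;    /* which of the four is currently running */
      uint64_t priority  : 6;    /* current priority level */
      uint64_t           : 56;   /* reserved */
      uint64_t interrupt_table;  /* this context's interrupt table */
      uint64_t dispatcher;       /* the interrupt dispatcher entry point */
      uint64_t reserved;         /* pad out to a full cache line */
  };

  _Static_assert(sizeof(struct supervisor_stack) == 64,
                 "fits in one cache line");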

> 2) The computer switches to the interrupt register bank,
> and places a pointer to the restore status in one of the
> registers, according to a known convention.

> 3) As the interrupt service routine runs, the computer,
> separately and in the background, saves the registers of the
> interrupted program into memory. Once this is complete, the
> _restore status_ value in memory is changed to reflect this.

> 4) The restore status value has _two_ uses.

> One is, obviously, that when returning from an interrupt,
> there will be a 'return from interrupt' routine that will
> either just switch register banks, if the registers aren't
> saved yet, or re-fill the registers that are actually in
> use (the register status also indicating what the complement
> of registers available to the interrupted program was) from
> memory.

You are going to have a lot of complications with SoftIRQ/DPCs
doing it this way.

> The other is that the restore status can be tested. If the
> main register set isn't saved yet, then it's too soon after
> the interrupt to *switch to another user program* which also
> would use the main register set, but with a different set
> of saved values.

> 5) Some other factors complicate this.

> There may be multiple sets of user program registers to
> facilitate SMT.

> The standard practice in an operating system is to leave
> the privileged interrupt service routine as quickly as
> possible, and continue handling the interrupt in an
> unprivileged portion of the operating system.

This is the SoftIRQs and DPCs.

> However, the
> return from interrupt instruction is obviously privileged,

Why ?? If you are not "in" an interrupt it can raise OPERATION
exception WITHOUT BEING PRIVILEGED !! {{Hint: you don't want
a privileged thread to perform a return from interrupt unless
it is IN an interrupt, EITHER.}}

> as it allows one to put an arbitrary value from memory into
> the Program Status Word, including one that would place
> the computer into a privileged state after the return.

> That last is not unique to the Concertina II, however. So
> the obvious solution of allowing the kernel to call
> unprivileged subroutines - which terminate in a supervisor
> call rather than a normal return - has been found.

How does privilege get restored on return ??

> John Savard

MitchAlsup1

Jan 17, 2024, 5:20:43 PM
EricP wrote:

> Quadibloc wrote:
>> When a computer receives an interrupt signal, it needs to save
>> the complete machine state, so that upon return from the
>> interrupt, the program thus interrupted is in no way affected.

> It needs to save the portion of the machine state overwritten by that
> interrupt. Often this is a small subset of the whole state because many
> interrupts only need a few integer registers, maybe 6 or 7.

Would an interrupt handler not run fewer instructions if its register
state were seeded with pointers of use to the device(s) being serviced ??

> Additional integer state will be saved and restored as needed by the call ABI
> so does not need to be done for every interrupt by the handler prologue.

> This allows the OS to only save and restore the full state on thread
> switch which only happens after something significant occurs that
> changes the highest priority thread on a particular core.

Doesn't running a SoftIRQ (or DPC) require a full register state ??
And don't most device interrupts need to SoftIRQ ??
{{Yes, I see timer interrupts not needing so much of that}}

> This occurs much less than the frequency of interrupts.

>> This is because interrupts can happen at any time, and thus
>> programs don't prepare for them or expect them. Any disturbance
>> to the contents of any register would risk causing programs to
>> crash.
>>
>> The Concertina II has a potentially large machine state which most
>> programs do not use. There are vector registers, of the huge
>> kind found in the Cray I. There are banks of 128 registers to
>> supplement the banks of 32 registers.

> I have trouble imagining what an interrupt handler might use vectors for.

Memory to memory move from Disk Cache to User Buffer.
SoftIRQ might use Vector arithmetic to verify CRC, Encryption, ...

Chris M. Thomasson

Jan 17, 2024, 6:53:42 PM
On 1/17/2024 2:11 PM, MitchAlsup1 wrote:
> Quadibloc wrote:
>
>> When a computer receives an interrupt signal, it needs to save
>> the complete machine state, so that upon return from the
>> interrupt, the program thus interrupted is in no way affected.
>
> State needs to be saved, whether SW or HW does the save is a free variable.
>
>> This is because interrupts can happen at any time, and thus
>> programs don't prepare for them or expect them. Any disturbance
>> to the contents of any register would risk causing programs to
>> crash.
>
> Also note:: the ABA problem can happen when an interrupt transpires
> in the middle of an ATOMIC sequence. Thus, My 66000 fails the event
> before transferring control to the interrupt handler.
[...]

Just to be clear, an interrupt occurring within the hardware
implementation of a CAS operation (e.g., lock cmpxchg over on Intel)
should not affect the outcome of the CAS. Actually, it should not
happen at all, right? CAS does not have any spurious failures.
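
In C11 terms that is the difference between the strong and weak forms
of compare-exchange: the weak form is permitted to fail spuriously
(so it maps cleanly onto LL/SC hardware), the strong form is not.
A small illustration, with a file-scope atomic as a stand-in target:

  #include <stdatomic.h>

  _Atomic long target;

  /* Strong CAS: fails only if the value really differs; an interrupt
     inside the underlying lock cmpxchg cannot make it fail. */
  _Bool cas_strong(long expected, long desired)
  {
      return atomic_compare_exchange_strong(&target, &expected, desired);
  }

  /* Weak CAS: may fail even when the values match (e.g. an LL/SC
     reservation lost to an interrupt), so it lives in a retry loop. */
  void increment_weak(void)
  {
      long old = atomic_load(&target);
      while (!atomic_compare_exchange_weak(&target, &old, old + 1))
          ;  /* on failure, 'old' is reloaded with the current value */
  }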

Scott Lurndal

Jan 17, 2024, 6:55:49 PM
mitch...@aol.com (MitchAlsup1) writes:
>EricP wrote:
>
>> Quadibloc wrote:
>>> When a computer receives an interrupt signal, it needs to save
>>> the complete machine state, so that upon return from the
>>> interrupt, the program thus interrupted is in no way affected.
>
>> It needs to save the portion of the machine state overwritten by that
>> interrupt. Often this is a small subset of the whole state because many
>> interrupts only need a few integer registers, maybe 6 or 7.
>
>Would an interrupt handler not run fewer instructions if its register
>state was seeded with pointers of use to the device(s) being serviced ??

I doubt that would have any effect one way or the other on the
number of instructions executed by the handler (a difference of one
instruction isn't significant).

>
>> Additional integer state will be saved and restored as needed by the call ABI
>> so does not need to be done for every interrupt by the handler prologue.
>
>> This allows the OS to only save and restore the full state on thread
>> switch which only happens after something significant occurs that
>> changes the highest priority thread on a particular core.
>
>Doesn't running a SoftIRQ (or DPC) require a full register state ??

Those are run in a separate kernel thread, not the interrupt
handler. They have full context, provided by the thread.

>And don't most device interrupts need to SoftIRQ ??

Some do, some don't. It really depends on the interrupt
and the operating software (i.e. the Linux kernel stack,
which is structured around PCIe semantics).


>> I have trouble imagining what an interrupt handler might use vectors for.
>
>Memory to memory move from Disk Cache to User Buffer.

That's DMA - no CPU intervention required. If you're referring
to a "Soft" disk cache maintained by the kernel (in unix, the
buffer cache or file cache), a dedicated DMA engine that can offload
such transfers would be a better solution than using
vector registers in an interrupt handler which has no business
transferring bulk data.

>SoftIRQ might use Vector arithmetic to verify CRC, Encryption, ...

Both of those are offloaded to on-chip accelerators. Much better
use of area.

MitchAlsup1

Jan 17, 2024, 7:05:44 PM
ABA failure happens BECAUSE one uses the value of data to decide if
something appeared ATOMIC. The CAS instruction (itself and all variants)
is ATOMIC, but the setup to CAS is non-ATOMIC, because the original
value to be compared was fetched without any ATOMIC indicator, and
someone else can alter it before CAS. If more than 1 thread alters the
location, it can (seldom) end up with the same data value as the
suspended thread thought it should be.

CAS is ATOMIC; the code leading to CAS is not, and this opens up the
hole.

Note:: CAS functionality implemented with LL/SC does not suffer ABA
because the core monitors the LL address until the SC is performed.
It is an address-based comparison, not a data-value-based one.
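
The classic concrete case is a lock-free stack pop; a C11 sketch
(illustrative, not any particular OS's code) showing exactly where the
hole opens:

  #include <stdatomic.h>
  #include <stddef.h>

  struct node { struct node *next; };

  _Atomic(struct node *) head;

  /* ABA-prone: the final CAS compares only the data value of 'head'. */
  struct node *pop(void)
  {
      struct node *old, *next;
      do {
          old = atomic_load(&head);
          if (old == NULL)
              return NULL;
          next = old->next;  /* <-- window: another thread can pop A,
                                    pop B, then push A back here */
      } while (!atomic_compare_exchange_weak(&head, &old, next));
      return old;            /* the CAS may have installed a stale 'next' */
  }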

EricP

Jan 17, 2024, 9:00:22 PM
MitchAlsup1 wrote:
> EricP wrote:
>
>> Quadibloc wrote:
>>> When a computer receives an interrupt signal, it needs to save
>>> the complete machine state, so that upon return from the
>>> interrupt, the program thus interrupted is in no way affected.
>
>> It needs to save the portion of the machine state overwritten by that
>> interrupt. Often this is a small subset of the whole state because many
>> interrupts only need a few integer registers, maybe 6 or 7.
>
> Would an interrupt handler not run fewer instructions if its register
> state was seeded with pointers of use to the device(s) being serviced ??

It would save a couple of load-immediate instructions and
cost a set of preserved registers for each interrupt level.
Not worth the trouble.

>> Additional integer state will be saved and restored as needed by the
>> call ABI
>> so does not need to be done for every interrupt by the handler prologue.
>
>> This allows the OS to only save and restore the full state on thread
>> switch which only happens after something significant occurs that
>> changes the highest priority thread on a particular core.
>
> Doesn't running a SoftIRQ (or DPC) require a full register state ??
> And don't most device initerrupts need to SoftIRQ ??
> {{Yes, I see timer interrupts not needing so much of that}}

No, not if you layer the software appropriately.

The ISR prologue saves the non-preserved register subset (say R0 to R7).
The First Level Interrupt Handler (FLIH) determines whether to
restore the saved register subset or jump into the OS.
If it decides to jump into the OS then R0:R7 are already saved on the stack,
and since this was a FLIH, that stack must be the prior thread's kernel
stack. So that leaves R8:R31 still containing the prior thread's values.

You call whatever routines you like; when they return to this routine,
R8:R31 will still contain the prior thread's data. Only when you decide
to switch threads do you need to spill R8:R31 into the thread header
context save area, plus any float, SIMD, or vector registers,
and then save the kernel stack pointer there so you can find the values
you pushed in the prologue (if you need to edit the thread context).

You then switch thread header pointers to the new thread's.

To load the next thread, you pick up from the new thread's context
R8:R31, the float, SIMD, and vector registers, and the kernel stack
pointer. Its R0:R7 remain on the kernel stack where they were left when
saved.

You can now return to the ISR epilogue, which pops R0:R7 for this new
thread, and REI (Return from Exception or Interrupt) to run the new
thread.
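
In rough C, with every register-spill primitive standing in for a few
lines of assembly (all names invented; declarations included only so
the sketch is self-contained), that layering looks like:

  struct context { unsigned long ksp; /* + saved R8:R31, FP/SIMD/vector */ };
  struct thread  { struct context ctx; };
  enum flih_result { RETURN_TO_SAME_THREAD, ENTER_OS };

  extern void push_r0_r7(void), pop_r0_r7(void), rei(void);
  extern enum flih_result first_level_handler(int vector);
  extern void save_r8_r31(struct context *), load_r8_r31(struct context *);
  extern void save_fp_simd_vec(struct context *),
              load_fp_simd_vec(struct context *);
  extern unsigned long get_ksp(void);
  extern void set_ksp(unsigned long);
  extern struct thread *current_thread(void);
  extern void pick_next_thread_and_switch(void);

  void isr_entry(int vector)
  {
      push_r0_r7();                  /* prologue: volatile subset only */
      if (first_level_handler(vector) == RETURN_TO_SAME_THREAD) {
          pop_r0_r7();               /* epilogue */
          rei();                     /* Return from Exception/Interrupt */
          return;
      }
      /* Jump into the OS: R0:R7 already sit on the prior thread's
         kernel stack, R8:R31 still hold the prior thread's values. */
      pick_next_thread_and_switch();
  }

  void switch_to(struct thread *next)
  {
      struct thread *prev = current_thread();
      save_r8_r31(&prev->ctx);       /* spill only on a real thread switch */
      save_fp_simd_vec(&prev->ctx);
      prev->ctx.ksp = get_ksp();

      load_fp_simd_vec(&next->ctx);
      load_r8_r31(&next->ctx);
      set_ksp(next->ctx.ksp);        /* next's R0:R7 wait on its stack */
      /* The ISR epilogue now pops R0:R7 and REIs into the new thread. */
  }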

>> This occurs much less than the frequency of interrupts.
>
>>> This is because interrupts can happen at any time, and thus
>>> programs don't prepare for them or expect them. Any disturbance
>>> to the contents of any register would risk causing programs to
>>> crash.
>>>
>>> The Concertina II has a potentially large machine state which most
>>> programs do not use. There are vector registers, of the huge
>>> kind found in the Cray I. There are banks of 128 registers to
>>> supplement the banks of 32 registers.
>
>> I have trouble imagining what an interrupt handler might use vectors for.
>
> Memory to memory move from Disk Cache to User Buffer.
> SoftIRQ might use Vector arithmetic to verify CRC, Encryption, ...

I was thinking of device interrupt service routines but yeah
the DPC/SoftIRQ routines might do this.

I would have those routines that do want this manually save and restore
any non-integer registers. There may be other sync issues to deal with
(pending exceptions intended for the prior thread context).

A set of utility Hardware Abstraction Layer (HAL) subroutines could
handle this for each platform.

Quadibloc

Jan 18, 2024, 1:22:34 AM
On Wed, 17 Jan 2024 15:35:56 -0500, EricP wrote:
> Quadibloc wrote:

>> When a computer receives an interrupt signal, it needs to save
>> the complete machine state, so that upon return from the
>> interrupt, the program thus interrupted is in no way affected.
>
> It needs to save the portion of the machine state overwritten by that
> interrupt. Often this is a small subset of the whole state because many
> interrupts only need a few integer registers, maybe 6 or 7.
> Additional integer state will be saved and restored as needed by the call ABI
> so does not need to be done for every interrupt by the handler prologue.

Yes, that is true in many cases.

> I have trouble imagining what an interrupt handler might use vectors for.

However, you're apparently forgetting one very important case.

What if the interrupt is a *real-time clock* interrupt, and what is
going to happen is that the computer will _not_ return immediately from
that interrupt to the interrupted program, but will instead, regarding
it as compute-bound, proceed to a different program, possibly even
belonging to another user?

So you're quite correct that the problem does not _always_ arise. But it
does arise on occasion.

John Savard

EricP

Jan 18, 2024, 10:33:07 AM
No, I'm not forgetting that. Return from Exception or Interrupt (REI)
has two possible paths: return to what it was doing before, or jump into
the OS and do more processing. On many platforms this particular piece
of code is long, complicated, and riddled with race conditions.

But designing an REI mechanism that makes this sequence
simple, efficient, and fast is a separate issue.

For interrupts there are two main pieces of code, the Interrupt Service
Routine (ISR), and post processing routine DPC/SoftIrq.

The ISR for a particular device is called by the OS in response
to a hardware priority interrupt. The ISR may decide it needs further
processing but does not want to block other interrupts while doing it
so ISR posts a request for deferred post processing.

There also can be many restrictions on what an ISR is allowed to do
because the OS designers did not want to, say, force every ISR to
sync with the slow x87 FPU just in case someone wanted to use it.

I would not assume that anything other than integer registers would be
available in an ISR.

In a post-processing DPC/SoftIrq routine it might be possible, but again
there can be limitations. What you don't ever want to happen is to hang
the CPU waiting to sync with a piece of hardware so you can save its
state, as might happen if it was a co-processor. You also don't want to
have to save any state just in case a post routine might want to do
something, but rather save/restore the state on demand and just what is
needed. So it really depends on the device and the platform.


Chris M. Thomasson

Jan 18, 2024, 4:05:34 PM
Yup. Fwiw, some years ago I actually tried to BURN a CAS by creating
several rogue threads that would alter the CAS target using random
numbers at full speed ahead. The interesting part is that forward
progress was damaged for sure, but still occurred. It did not livelock
on me. Interesting.


> CAS is ATOMIC, the code leading to CAS was not and this opens up the hole.

Indeed.


> Note:: CAS functionality implemented with LL/SC does not suffer ABA
> because the core monitors the LL address until the SC is performed.
> It is an addressed based comparison not a data value based one.

Exactly. Actually, I asked myself if I just wrote a stupid question to
you. Sorry Mitch... ;^)

Chris M. Thomasson

Jan 18, 2024, 4:07:52 PM
I compared CAS successes vs failures. When the rogue threads were turned
on, the failure rate went way up, but not to a point where an actual
sustained livelock occurred.

EricP

Jan 19, 2024, 12:11:16 PM
Yes, but an equally valid point of view is that LL/SC only emulates
atomic and uses the cache-line ownership grab while "locked" to detect
possible interference and infer potential change.

Note that if LL/SC is implemented with temporary line pinning
(as might be done to guarantee forward progress and prevent ping-pong)
then it cannot be interfered with, and CAS and atomic-fetch-op sequences
are semantically identical to the equivalent single instructions
(which may also be implemented with temporary line pinning if their
data must move from cache through the core and back).

Also, LL/SC as implemented on Alpha, MIPS, Power, ARM, and RISC-V
doesn't allow loads or stores to any other location between the pair,
so it really isn't useful for detecting ABA, because detecting it
requires monitoring two memory locations for change.

The classic example is the singly linked list with items head->A->B->C.
Detecting ABA requires monitoring whether either head or head->Next
changes, which LL/SC cannot do, as reading head->Next cancels the lock
on head.

x86 has cmpxchg8b and ARM has double-wide LL/SC, which can be used to
implement CASD, an atomic double-wide compare-and-swap. The first word
holds the head pointer and the second word holds a generation counter
whose change is used to infer that head->Next might have changed.
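
A C11 sketch of that pointer-plus-counter anchor (illustrative names;
the _Atomic struct relies on the compiler mapping 16-byte atomics onto
cmpxchg16b or paired LL/SC, which may require libatomic):

  #include <stdatomic.h>
  #include <stddef.h>
  #include <stdint.h>

  struct node { struct node *next; };

  struct anchor {              /* pointer + generation counter */
      struct node *head;
      uintptr_t    ver;
  };

  _Atomic struct anchor top;

  struct node *pop(void)
  {
      struct anchor old = atomic_load(&top), upd;
      do {
          if (old.head == NULL)
              return NULL;
          upd.head = old.head->next;
          upd.ver  = old.ver + 1;  /* counter change defeats A-B-A reuse */
      } while (!atomic_compare_exchange_weak(&top, &old, upd));
      return old.head;
  }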


Chris M. Thomasson

Jan 19, 2024, 5:41:25 PM
Pretty much it. ABA is explicitly mentioned in the IBM Principles of
Operation wrt Free-Pool Manipulation in its appendix, page A-48:

https://www.ibm.com/support/pages/sites/default/files/inline-files/SA22-7832-00.pdf

The problem will destroy the integrity of the list. Perform Locked
Operation (PLO) in the same appendix is pretty interesting as well... ;^)

Quadibloc

Jan 20, 2024, 1:54:37 PM
On Wed, 17 Jan 2024 15:35:56 -0500, EricP wrote:
> Quadibloc wrote:

>> When a computer receives an interrupt signal, it needs to save
>> the complete machine state, so that upon return from the
>> interrupt, the program thus interrupted is in no way affected.

> It needs to save the portion of the machine state overwritten by that
> interrupt. Often this is a small subset of the whole state because many
> interrupts only need a few integer registers, maybe 6 or 7.
> Additional integer state will be saved and restored as needed by the call ABI
> so does not need to be done for every interrupt by the handler prologue.

Having been so concerned by the large machine state of the Concertina
II, parts of which were rarely used, and not realizing the conventional
approach was entirely adequate... I missed the biggest flaw in
interrupts on the Concertina II.

Because in some important ways it is patterned after the IBM System/360,
it shares its biggest problem with interrupts.

On the System/360, it is a *convention* that the last few general registers,
registers 11, 12, 13, 14, and 15 or so, are used as base registers. A
base register *must* be properly set up before a program can write any data
to memory.

So one can't just have an interrupt behave like on an 8-bit microprocessor,
saving only the program counter and the status bits, and leaving any
registers to be saved by software. At least some of the general registers
have to be saved, and set up with new starting values, for the interrupt
routine to be able to save anything else, if need be.

Of course, the System/360 was able to solve this problem, so it's not
intractable. But the fact that the System/360 solved it by saving all
sixteen general registers, and then loading them from an area in memory
allocated to that interrupt type, is what fooled me into thinking I
would need to automatically save _everything_. It didn't save the
floating-point registers - software did, if the need was to move to
a different user's program, and saving the state in two separate pieces
by two different parts of the OS did not cause hopeless confusion.

John Savard

BGB

Jan 21, 2024, 3:24:01 PM
IIRC: save off the PC and some flag bits, swap the stack registers, and
do a computed branch relative to a control register (via bit-slicing,
*). This is effectively the interrupt mechanism I am using on a 64-bit ISA.

*: For a table that is generally one-off in the kernel or similar, it
doesn't ask much to mandate that it has a certain alignment. And if the
required alignment is larger than the size of the table, you have just
saved yourself needing an adder...
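
In C terms the trick is just that OR replaces ADD when the operands
occupy disjoint bit ranges; a short illustration with made-up table
dimensions (64 vectors x 8-byte entries, 512-byte alignment):

  #include <stdint.h>

  uint64_t vector_entry(uint64_t table_base_512_aligned, unsigned vec)
  {
      /* No carry can cross the boundary, so OR replaces the adder. */
      return table_base_512_aligned | ((uint64_t)(vec & 63) << 3);
  }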


If anything, it is a little simpler than the mechanism used on some
8-bit systems, which would have needed a mechanism to push these values
to the stack, and restore them from the stack.

Having side-channels that allow these values to "magically appear" in
certain SPRs is at least simpler, though the cost of the logic itself
could be more up for debate.



> Of course, the System/360 was able to solve this problem, so it's not
> intractable. But the fact that the System/360 solved it by saving all
> sixteen general registers, and then loading them from an area in memory
> allocated to that interrupt type, is what fooled me into thinking I
> would need to automatically save _everything_. It didn't save the
> floating-point registers - software did, if the need was to move to
> a different user's program, and saving the state in two separate pieces
> by two different parts of the OS did not cause hopeless confusion.
>

For ISR convenience, it would make sense to have, say, two SPRs or CRs
designated for ISR use to squirrel off values from GPRs to make the
prolog/epilog easier. Had considered this, but not done so (in this
case, the first thing the ISR does is save a few registers to the ISR
stack to free them up to get the rest of the registers saved "more
properly", then it has to do a secondary reload to get these registers
back into the correct place).


Assuming interrupts aren't too common, it isn't a huge issue, and
seemingly a majority of the clock cycles spent on interrupt entry have
to do with L1 misses (since typically pretty much nothing the ISR
touches is in the L1 cache; and in my implementation, ISRs may not
share L1 cache lines with non-ISR code; so basically it is an
architecturally required "L1 miss" storm).


The only real way to avoid a lot of the L1 misses would be to have
multiple sets of banked registers or similar, but this is not cheap for
the hardware... Similarly, the ISR would need to do as little, and touch
as little memory, as possible to perform its task.


> John Savard

MitchAlsup1

Jan 21, 2024, 3:55:47 PM
EricP wrote:

> There also can be many restrictions on what an ISR is allowed to do
> because the OS designers did not want to, say, force every ISR to
> sync with the slow x87 FPU just in case someone wanted to use it.

What about all the architectures that are not x86 and do not need to synch
to FP, Vectors, SIMD, ..... ?? Why are they constrained by the one badly
designed, long-lived architecture ??

> I would not assume that anything other than integer registers would be
> available in an ISR.

This is quite reasonable: as long as you have a sufficient number that
the ISR can be written in some HLL without a bunch of flags to the compiler.

> In a post processing DPC/SoftIrq routine it might be possible but again
> there can be limitations. What you don't ever want to happen is to hang
> the cpu waiting to sync with a piece of hardware so you can save its state,
> as might happen if it was a co-processor. You also don't want to have to
> save any state just in case a post routine might want to do something,
> but rather save/restore the state on demand and just what is needed.
> So it really depends on the device and the platform.

As long as there is not more than one flag to clue the compiler in,
I am on board.

MitchAlsup1

Jan 21, 2024, 3:55:48 PM
No need to be sorry; this is a NG dedicated to making people think, and
then, after they have expressed what they thought, to correcting and
refining what they think and how.

MitchAlsup1

Jan 21, 2024, 4:00:44 PM
Which, BTW, opens up a different side channel ...

> Note that if LL/SC is implemented with temporary line pinning
> (as might be done to guarantee forward progress and prevent ping-pong)
> then it cannot be interfered with, and CAS and atomic-fetch-op sequences
> are semantically identical to the equivalent single instructions
> (which may also be implemented with temporary line pinning if their
> data must move from cache through the core and back).

Line pinning requires a NAK in the coherence protocol. As far as I know,
only the My 66000 interconnect protocol has such a NAK.

> Also LL/SC as implemented on Alpha, MIPS, Power, ARM, RISC-V don't allow
> any other location loads or stores between them so really aren't useful
> for detecting ABA because detecting it requires monitoring two memory
> locations for change.

> The classic example is the single linked list with items head->A->B->C
> Detecting ABA requires monitoring if either head or head->Next change
> which LL/SC cannot do as reading head->Next cancels the lock on head.

Detecting ABA requires one to monitor addresses not data values.

MitchAlsup1

Jan 21, 2024, 4:05:43 PM
Quadibloc wrote:

> On Wed, 17 Jan 2024 15:35:56 -0500, EricP wrote:
>> Quadibloc wrote:

>>> When a computer receives an interrupt signal, it needs to save
>>> the complete machine state, so that upon return from the
>>> interrupt, the program thus interrupted is in no way affected.

>> It needs to save the portion of the machine state overwritten by that
>> interrupt. Often this is a small subset of the whole state because many
>> interrupts only need a few integer registers, maybe 6 or 7.
>> Additional integer state will be saved and restored as needed by the call ABI
>> so does not need to be done for every interrupt by the handler prologue.

> Having been so concerned by the large machine state of the Concertina
> II, parts of which were rarely used, and not realizing the conventional
> approach was entirely adequate... I missed what was the biggest flaw in
> interrupts on the Concertina II.

The second-to-last word is misleading and unnecessary::

> interrupts on Concertina II.

"The" implies there will be only one and that it already exists.

> Because in some important ways it is patterned after the IBM System/360,
> it shares its biggest problem with interrupts.

> On the System/360, it is a *convention* that the last few general registers,
> registers 11, 12, 13, 14, and 15 or so, are used as base registers. A
> base register *must* be properly set up before a program can write any data
> to memory.

Captain Obvious strikes again...

> So one can't just have an interrupt behave like on an 8-bit microprocessor,
> saving only the program counter and the status bits, and leaving any
> registers to be saved by software. At least some of the general registers
> have to be saved, and set up with new starting values, for the interrupt
> routine to be able to save anything else, if need be.

Once HW starts saving "a few" it might as well save "enough" of them to matter.

MitchAlsup1

Jan 21, 2024, 4:20:45 PM
BGB wrote:

> On 1/20/2024 12:54 PM, Quadibloc wrote:
>> On Wed, 17 Jan 2024 15:35:56 -0500, EricP wrote:
>>
>> So one can't just have an interrupt behave like on an 8-bit microprocessor,
>> saving only the program counter and the status bits, and leaving any
>> registers to be saved by software. At least some of the general registers
>> have to be saved, and set up with new starting values, for the interrupt
>> routine to be able to save anything else, if need be.
>>

> IIRC, saving off PC, some flags bits, swapping the stack registers, and
> doing a computed branch relative to a control register (via bit-slicing,
> *). This is effectively the interrupt mechanism I am using on a 64-bit ISA.

And sounds like the interrupt mechanism for an 8-bit µprocessor...

> *: For a table that is generally one-off in the kernel or similar, it
> doesn't ask much to mandate that it has a certain alignment. And if the
> required alignment is larger than the size of the table, you have just
> saved yourself needing an adder...

In a modern system where you have several HyperVisors and a multiplicity
of GuestOSs, a single interrupt table is unworkable looking forward.
What you want and need is for every GuestOS to have its own table, and
every HyperVisor to have its own table, some kind of routing mechanism to
route device interrupts to the correct table, and a way to inform the
appropriate cores of raised and enabled interrupts. All these tables have
to be concurrently available, continuously and simultaneously. The old
fixed mapping will no longer work efficiently--you can make it work with
a near-Herculean amount of careful programming.

Or you can look at the problem from a modern viewpoint and fix the
model so the above is manifest.

> If anything, it is a little simpler than the mechanism used on some
> 8-bit systems, which would have needed a mechanism to push these values
> to the stack, and restore them from the stack.

Do you think your mechanism would work "well" with 1024 cores in your
system ??

> Having side-channels that allow these values to "magically appear" in
> certain SPRs is at least simpler, though the cost of the logic itself
> could be more up for debate.

Once you have an FMAC FPU none of the interrupt logic adds any area
to any core.

>> Of course, the System/360 was able to solve this problem, so it's not
>> intractable. But the fact that the System/360 solved it by saving all
>> sixteen general registers, and then loading them from an area in memory
>> allocated to that interrupt type, is what fooled me into thinking I
>> would need to automatically save _everything_. It didn't save the
>> floating-point registers - software did, if the need was to move to
>> a different user's program, and saving the state in two separate pieces
>> by two different parts of the OS did not cause hopeless confusion.
>>

> For ISR convenience, it would make sense to have, say, two SPR's or CR's
> designated for ISR use to squirrel off values from GPRs to make the
> prolog/epilog easier. Had considered this, but not done so (in this
> case, first thing the ISR does is save a few registers to the ISR stack
> to free them up to get the rest of the registers saved "more properly",
> then has to do a secondary reload to get these registers back into the
> correct place).

Or, you could take the point of view that your architecture makes context
switching easy (like 1 instruction from application 1 to application 2)
and when you do this the rest of the model pretty much drops in for free.

> Assuming interrupts aren't too common, then it isn't a huge issue, and
> seemingly a majority of the clock-cycles spent on interrupt entry mostly
> have to do with L1 misses (since typically pretty much nothing the ISR
> touches is in the L1 cache; and in my implementation, ISR's may not
> share L1 cache lines with non-ISR code; so basically it is an
> architecturally required "L1 miss" storm).

The number of cycles between the device raising the interrupt and the
ISR servicing it (prior to scheduling a SoftIRQ/DPC) is the key number.
All these little pieces of state that are obtained 1 at a time by
running code not in the ICache, along with the manipulations on the
TLBs, ... get in the way of your goal.

> Only real way to avoid a lot of the L1 misses would be to have multiple
> sets of banked registers or similar, but, this is not cheap for the
> hardware... Similarly, the ISR would need to do as little, and touch as
> little memory, as is possible to perform its task.

You can read thread-state up from DRAM and continue what you are working
on until it arrives, and when it arrives, you ship out the old values
as if the core state were a cache in its own right. Thus, you continue
to make progress on the current thread until you have the state needed
to run the ISR thread, and when it arrives you have everything you need
to proceed. ...

>> John Savard

Scott Lurndal

Jan 21, 2024, 4:58:58 PM
mitch...@aol.com (MitchAlsup1) writes:

>
>In a modern system where you have several HyperVisors and a multiplicity
>of GuestOSs, a single interrupt table is unworkable looking forward.
>What you want and need is every GuestOS to have its own table, and
>every HyperVisor have its own table, some kind of routing mechanism to
>route device interrupts to the correct table, and inform appropriate
>cores of raised and enabled interrupts. All these tables have to be
>concurrently available continuously and simultaneously. The old fixed
>mapping will no longer work efficiently--you can make it work with
>a near-Herculean amount of careful programming.

For an extant implementation thereof, see

GICv3 Architecture Specification.

https://documentation-service.arm.com/static/6012f2e54ccc190e5e681256

BGB

Jan 21, 2024, 6:43:18 PM
On 1/21/2024 3:18 PM, MitchAlsup1 wrote:
> BGB wrote:
>
>> On 1/20/2024 12:54 PM, Quadibloc wrote:
>>> On Wed, 17 Jan 2024 15:35:56 -0500, EricP wrote:
>>>
>>> So one can't just have an interrupt behave like on an 8-bit
>>> microprocessor,
>>> saving only the program counter and the status bits, and leaving any
>>> registers to be saved by software. At least some of the general
>>> registers
>>> have to be saved, and set up with new starting values, for the interrupt
>>> routine to be able to save anything else, if need be.
>>>
>
>> IIRC, saving off PC, some flags bits, swapping the stack registers,
>> and doing a computed branch relative to a control register (via
>> bit-slicing, *). This is effectively the interrupt mechanism I am
>> using on a 64-bit ISA.
>
> And sounds like the interrupt mechanism for an 8-bit µprocessor...
>

It was partly a simplification of the design from the SH-4, which was a
32-bit CPU mostly used in embedded systems (and in the Sega Dreamcast...).

Though, the SH-4 did bank out half the registers, which was a feature
that ended up being dropped for cost-saving reasons.


>> *: For a table that is generally one-off in the kernel or similar, it
>> doesn't ask much to mandate that it has a certain alignment. And if
>> the required alignment is larger than the size of the table, you have
>> just saved yourself needing an adder...
>
> In a modern system where you have several HyperVisors and a multiplicity
> of GuestOSs, a single interrupt table is unworkable looking forward.
> What you want and need is every GuestOS to have its own table, and
> every HyperVisor have its own table, some kind of routing mechanism to
> route device interrupts to the correct table, and inform appropriate
> cores of raised and enabled interrupts. All these tables have to be
> concurrently available continuously and simultaneously. The old fixed
> mapping will no longer work efficiently--you can make it work with
> a near-Herculean amount of careful programming.
>
> Or you can look at the problem from a modern viewpoint and fix the
> model so the above is manifest.
>

Presumably, only the "bare metal" layer has an actual
hardware-level interrupt table, and all of the "guest" tables are faked
in software?...

Much like with MMU:
Only the base level needs to actually handle TLB miss events, and
everything else (nested translation, etc), can be left to software
emulation.


>> If anything, it is a little simpler than the mechanism used on some
>> 8-bit systems, which would have needed a mechanism to push these
>> values to the stack, and restore them from the stack.
>
>> Do you think your mechanism would work "well" with 1024 cores in your
> system ??
>

Number of cores should not matter that much.

Presumably, each core gets its own ISR stack, and these should have no
reason to interact with each other.

For extra speed, maybe the ISR stacks could be mapped to some sort of
core-local SRAM. This hasn't been done yet though.

The idea here being that the SRAM region could have a special address
range, and any access to this region would be invisible to any other
cores (and it need not have backing in external RAM).

One could maybe debate, though, the cost of giving each core 4K or 8K of
dedicated local SRAM merely for "slightly faster interrupt handling".


>> Having side-channels that allow these values to "magically appear" in
>> certain SPRs is at least simpler, though the cost of the logic itself
>> could be more up for debate.
>
> Once you have an FMAC FPU none of the interrupt logic adds any area
> to any core.
>

I don't have conventional FMA because it would have had too much cost
and latency.

Did experimentally add a "double rounded" FMAC that basically allows
gluing the FMUL and FADD units together, with a timing of roughly 12
clock cycles (non-pipelined).

No SIMD MAC operations mostly because this would also need too much
latency (can't really shoe-horn this into 3 cycles). Similar reasoning
to why the SIMD ops are hard-wired for truncate rounding.


>>> Of course, the System/360 was able to solve this problem, so it's not
>>> intractable. But the fact that the System/360 solved it by saving all
>>> sixteen general registers, and then loading them from an area in memory
>>> allocated to that interrupt type, is what fooled me into thinking I
>>> would need to automatically save _everything_. It didn't save the
>>> floating-point registers - software did, if the need was to move to
>>> a different user's program, and saving the state in two separate pieces
>>> by two different parts of the OS did not cause hopeless confusion.
>>>
>
>> For ISR convenience, it would make sense to have, say, two SPR's or
>> CR's designated for ISR use to squirrel off values from GPRs to make
>> the prolog/epilog easier. Had considered this, but not done so (in
>> this case, first thing the ISR does is save a few registers to the ISR
>> stack to free them up to get the rest of the registers saved "more
>> properly", then has to do a secondary reload to get these registers
>> back into the correct place).
>
> Or, you could take the point of view that your architecture makes context
> switching easy (like 1 instruction from application 1 to application 2)
> and when you do this the rest of the model pretty much drops in for free.
>

This would cost well more, on the hardware side, than having two
non-specialized CRs and being like "ISR's are allowed to stomp these at
will, nothing else may use them".


>> Assuming interrupts aren't too common, then it isn't a huge issue, and
>> seemingly a majority of the clock-cycles spent on interrupt entry
>> mostly have to do with L1 misses (since typically pretty much nothing
>> the ISR touches is in the L1 cache; and in my implementation, ISR's
>> may not share L1 cache lines with non-ISR code; so basically it is an
>> architecturally required "L1 miss" storm).
>
> The number of cycles between the device raising the interrupt and the
> ISR servicing it (prior to scheduling a SoftIRQ/DPC) is the key number.
> All these little pieces of state that are obtained 1 at a time by
> running code not in the ICache, along with the manipulations on the
> TLBs, ... get in the way of your goal.
>

Possibly...

Though, as of yet, I don't have any hardware interrupts either (apart
from the timer IRQ). Pretty much all of the IO at present is polling IO.

Granted, it is possible that it would not scale well to do a full
featured system with exclusively polling IO.

Some other tasks can be handled with the microsecond timer and a loop.

Say:

  //void DelayUsec(int usec);
  DelayUsec:
      CPUID  30          // read the microsecond counter into R0
      ADD    R4, R0, R6  // R6 = counter + requested delay (R4 = usec)
  .L0:
      CPUID  30          // re-read the counter
      CMPQGT R0, R6      // compare the counter against the deadline
      BT     .L0         // loop until the delay has elapsed
      RTS

Which would create a certain amount of delay (in microseconds) relative
to when the function is called.


>> Only real way to avoid a lot of the L1 misses would be to have
>> multiple sets of banked registers or similar, but, this is not cheap
>> for the hardware... Similarly, the ISR would need to do as little, and
>> touch as little memory, as is possible to perform its task.
>
> You can read up thread-state from DRAM and continue what you are working
> on until they arrive, and when they arrive, you ship out the old values
> as if the core state was a cache in its own right. Thus, you continue to
> make progress on the current thread until you have the state needed to
> run the ISR thread and when it arrives you have everything you need to
> proceed. ...
>

Possibly, if the core were more advanced than a naive strictly-in-order
design...

Or, some way of handling memory other than stalling the pipeline
whenever the L1 cache misses.


But, yeah, for a more advanced system, could maybe justify using a
different interrupt-handler mechanism (and accept that possibly kernel
level code may not be entirely binary compatible between the ISA variants).


>>> John Savard

MitchAlsup1

Jan 21, 2024, 8:25:44 PM
Name a single ISA that fakes the TLB ?? (and has an MMU)

>>> If anything, it is a little simpler than the mechanism used on some
>>> 8-bit systems, which would have needed a mechanism to push these
>>> values to the stack, and restore them from the stack.
>>
>> Do you think your mechanism would work "well" with 1024 cores in your
>> system ??
>>

> Number of cores should not matter that much.

Exactly !! But then try running 1024 cores under differing GuestOSs and
HyperVisors under one set of system-wide tables !!

> Presumably, each core gets its own ISR stack, which should not have any
> reason to need to interact with each other.

I presume an interrupt can be serviced by any number of cores.
I presume that there are a vast number of devices. Each device assigned
to a few GuestOSs.
I presume the core that services the interrupt (ISR) is running the same
GuestOS under the same HyperVisor that initiated the device.
I presume the core that services the interrupt was of the lowest priority
of all the cores then running that GuestOS.
I presume the core that services the interrupt wasted no time in doing so.

And the GuestOS decides on how its ISR stack is {formatted, allocated, used,
serviced, ...} which can be different for each GuestOS.

> For extra speed, maybe the ISR stacks could be mapped to some sort of
> core-local SRAM. This hasn't been done yet though.

Caches either work or they don't.

Wasting cycles fetching instructions, translations, and data is genuine
overhead that can be avoided if one treats thread-state as a cache.

If the interrupt occurs often enough to matter, its instructions, data,
and translations will be in the cache hierarchy.

HW that knows what it is doing can start fetching these things even
BEFORE it can execute the first instruction on behalf of the interrupt
dispatcher. SW can NEVER do any of this prior to starting to run instr.

> The idea here being that the SRAM region could have a special address
> range, and any access to this region would be invisible to any other
> cores (and it need not have backing in external RAM)

> One could maybe debate the cost of giving each core 4K or 8K of
> dedicated local SRAM though merely for "slightly faster interrupt handling".

Yech: end of debate.....

>>> Having side-channels that allow these values to "magically appear" in
>>> certain SPRs is at least simpler, though the cost of the logic itself
>>> could be more up for debate.
>>
>> Once you have an FMAC FPU none of the interrupt logic adds any area
>> to any core.
>>

> I don't have conventional FMA because it would have had too much cost
> and latency.

Because you are measuring from the wrong implementation technology,
using macros ill suited to the problem at hand.


>>> For ISR convenience, it would make sense to have, say, two SPR's or
>>> CR's designated for ISR use to squirrel off values from GPRs to make
>>> the prolog/epilog easier. Had considered this, but not done so (in
>>> this case, first thing the ISR does is save a few registers to the ISR
>>> stack to free them up to get the rest of the registers saved "more
>>> properly", then has to do a secondary reload to get these registers
>>> back into the correct place).
>>
>> Or, you could take the point of view that your architecture makes context
>> switching easy (like 1 instruction from application 1 to application 2)
>> and when you do this the rest of the model pretty much drops in for free.
>>

> This would cost well more, on the hardware side, than having two
> non-specialized CRs and being like "ISR's are allowed to stomp these at
> will, nothing else may use them".

The sequencer is surprisingly small. Everything else already exists and
is just waiting around for a signal to capture this or emit that.


> Some other tasks can be handled with the microsecond timer and a loop.

> Say:
> //void DelayUsec(int usec);
> DelayUsec:
> CPUID 30
> ADD R4, R0, R6
> .L0:
> CPUID 30
> CMPQGT R0, R6
> BT .L0
> RTS
> Which would create a certain amount of delay (in microseconds) relative
> to when the function is called.

CPUID in Opteron's time frame took 200-600 cycles. Do you really want to
talk to your timer with those kinds of delays ??

Chris M. Thomasson

unread,
Jan 21, 2024, 8:55:25 PMJan 21
to
Well, the version counter tries to negate this wrt double-wide
compare-and-swap? ;^)

Chris M. Thomasson

unread,
Jan 21, 2024, 8:59:25 PMJan 21
to
On 1/21/2024 12:58 PM, MitchAlsup1 wrote:
Not 100% true.

MitchAlsup1

unread,
Jan 21, 2024, 9:10:43 PMJan 21
to
IBM's original ABA problem was encountered when a background task
(once a week or once a month) was swapped out to disk at the instruction
prior to CAS, and when it came back the data comparison register
matched the memory data, but the value to be swapped in had no
relationship with the current linked list structure. Machine crashed.

Without knowing the address, how can this particular problem be
rectified ??

Chris M. Thomasson

unread,
Jan 21, 2024, 9:31:52 PMJan 21
to
The version counter wrt a double wide compare and swap where:

struct dwcas_anchor
{
word* next;
word version;
};


comes into play.

Basically, from IBM:
__________________________________
Consider a chained list of the type used in the LIFO lock/unlock
example. Assume that the first two elements are at locations A and B,
respectively. If one program attempted to remove the first element and
was interrupted between the fourth and fifth instructions of the LUNLK
routine, the list could be changed so that elements A and C are the
first two elements when the interrupted program resumes execution. The
COMPARE AND SWAP instruction would then succeed in storing the value B
into the header, thereby destroying the list.

The probability of the occurrence of such list destruction can be
reduced to near zero by appending to the header a counter that
indicates the number of times elements have been added to the list.
The use of a 32-bit counter guarantees that the list will not be
destroyed unless the following events occur, in the exact sequence:
1. An unlock routine is interrupted between the fetch of the pointer
   from the first element and the update of the header.
2. The list is manipulated, including the deletion of the element
   referenced in 1, and exactly 2^32 (or an integer multiple of 2^32)
   additions to the list are performed. Note that this takes on the
   order of days to perform in any practical situation.
3. The element referenced in 1 is added to the list.
4. The unlock routine interrupted in 1 resumes execution.
__________________________________


Chris M. Thomasson

unread,
Jan 21, 2024, 9:33:08 PMJan 21
to
On 1/21/2024 6:31 PM, Chris M. Thomasson wrote:
> On 1/21/2024 6:07 PM, MitchAlsup1 wrote:
>> Chris M. Thomasson wrote:
>>
>>> On 1/21/2024 12:58 PM, MitchAlsup1 wrote:
>>
>>>> Detecting ABA requires one to monitor addresses not data values.
>>
>>> Not 100% true.
>>
>> IBM's original ABA problem was encountered when a background task
>> (once a week or once a month) was swapped out to disk the instruction
>> prior to CAS, and when it came back the data comparison register
>> matched the memory data, but the value to be swapped in had no
>> relationship with the current linked list structure. Machine crashed.
>>
>> Without knowing the address, how can this particular problem be
>> rectified ??
>
> The version counter wrt a double wide compare and swap where:
>
> struct dwcas_anchor
> {
>     word* next;
>     word version;
> };

sizeof(word*) == sizeof(word), sizeof(struct dwcas_anchor) ==
sizeof(word) * 2, in this setup, and they must be contiguous.
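
As a concrete illustration, here is a minimal C11 sketch of the
version-counted pop (names are my own; it assumes the platform gives a
lock-free double-wide CAS for a two-word struct, e.g. cmpxchg16b on
x86-64 with -mcx16, and that nodes are type-stable, i.e. never handed
back to the OS while the list is live):

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

struct node { struct node *next; };

/* Pointer plus version counter; contiguous and double-word sized so
   one wide CAS covers both fields at once. */
struct anchor {
    struct node *head;
    uintptr_t version;
};

struct node *pop(_Atomic struct anchor *a)
{
    struct anchor oldv = atomic_load(a), newv;
    do {
        if (oldv.head == NULL)
            return NULL;
        /* Safe only with type-stable nodes: head may already have
           been popped and reused by another thread at this point. */
        newv.head = oldv.head->next;
        newv.version = oldv.version + 1;
    } while (!atomic_compare_exchange_weak(a, &oldv, newv));
    return oldv.head;
}

Even if head is popped and the very same node pushed back between the
load and the CAS (the ABA case from the IBM quote), the bumped version
makes the CAS fail and the loop retries with fresh values.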

BGB

unread,
Jan 21, 2024, 10:34:38 PMJan 21
to
Not sure.
At least in software, the BJX2 emulator fakes the TLB.

Nothing says it can't be "turtles all the way down", though care would
be needed in the emulator design to limit how much performance overhead
is added with each HV level.

Assuming that a mechanism is in place, say, to trap LDTLB events, then
potentially each (virtual) LDTLB can hook into the next as a sort of
cascade, until it reaches a top-level virtual TLB (which could then be
treated similar to a software-managed inverted page-table).

Likely, would need some way to signal to the top-level TLB-Miss ISR that
it is running a VM, and that it should access this IPT for the TLBE's,
else somehow forward the TLB miss back into the VM (would need to come
up with an API for this, and or route it in via the "signal()" mechanism
or similar).
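
Roughly the shape I have in mind, as a hedged sketch (the names and
the walk_table() helper are invented for illustration, not an actual
BJX2 API):

#include <stdint.h>

/* Hypothetical per-level VM state; 'parent' is the next level up,
   NULL at the outermost (bare metal) level. */
struct vm_level {
    uint64_t page_table_root;
    struct vm_level *parent;
};

/* Assumed helper: walk one level's page table, or raise the miss
   back into that level's handler. */
extern uint64_t walk_table(uint64_t root, uint64_t addr);

/* Translate through each enclosing level in turn, innermost first;
   the result is what gets loaded into the real TLB. */
uint64_t nested_translate(uint64_t va, struct vm_level *vm)
{
    uint64_t addr = va;
    for (; vm != NULL; vm = vm->parent)
        addr = walk_table(vm->page_table_root, addr);
    return addr;
}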


>>>> If anything, it is a little simpler than the mechanism used on some
>>>> 8-bit systems, which would have needed a mechanism to push these
>>>> values to the stack, and restore them from the stack.
>>>
>>> Do you think you mechanism would work "well" with 1024 cores in your
>>> system ??
>>>
>
>> Number of cores should not matter that much.
>
> Exactly !! but then try running 1024 cores under differing GuestOSs, and
> HyperVisors under one set of system-wide Tables !!
>

Nothing requires that all cores use the same tables.


>> Presumably, each core gets its own ISR stack, which should not have
>> any reason to need to interact with each other.
>
> I presume an interrupt can be serviced by any number of cores.
> I presume that there are a vast number of devices. Each device assigned
> to a few GuestOSs.
> I presume the core that services the interrupt (ISR) is running the same
> GuestOS under the same HyperVisor that initiated the device.
> I presume the core that services the interrupt was of the lowest priority
> of all the cores then running that GuestOS.
> I presume the core that services the interrupt wasted no time in doing so.
>
> And the GuestOS decides on how its ISR stack is {formatted, allocated,
> used,
> serviced, ...} which can be different for each GuestOS.
>

I would have assumed that the cores are organized in a hierarchy, say:
Core 0:
Starts the Boot process;
Natural target for hardware interrupts.
Cores 1-11:
Go to sleep at power-on, waked by main core;
Do not receive hardware interrupts;
Cores 12-15:
Reserved for nested cores;
These operate on a sub-ring, repeating the same pattern;
Sub-rings could potentially add another level of cache.

Communication between cores is via inter-processor interrupts and memory.

Or, possibly, for a 136-core target:
Core 0:
Main Core;
Cores 1-7:
Secondary top-level cores;
8-15:
Sub-Rings, each holding 16 more cores.

Though, to go to a larger number of cores, may need to expand the
ringbus routing scheme (and/or apply something similar to NAT regarding
request/response routing).


Though, it is likely if another level of cache were added (say, an
"L2-Sub" cache), it may need to use Write-Through semantics with a
mechanism to selectively ignore cache-lines (for "No-Cache" or possible
"knock detection"), say, so that the weak coherence model still works
with the multi-level cache (well, either this, or it is the same size as
the L1 caches, so that any L1 knock events also knock the L2S cache).

Well, or maybe rename the top-level L2 cache to L3, and call the L2S
caches "L2 Cache"?...


Though, by this point, may need a good way to explicitly signal no-cache
memory accesses, as "cache knocking" isn't a particularly scalable strategy.



Then again, not like any viable (cost-effective) FPGA's have the
resources for this many cores.

Well, at least excluding the option of going over to 16-bit CPU cores or
similar (possibly each with its own local non-shared RAM spaces, and a
programming model similar to Occam or Erlang?...).


Assuming fairly basic RISC style CPU cores, could potentially fit around
16x 32-bit CPU cores onto an XC7A200T.

Going too much more than this isn't going to happen short of going over
to a 16-bit design.


Though, this is unlikely to be particularly useful...


And, say, if I wanted to use them for Binary16 FP-SIMD, the cost of the
SIMD units would require a smaller number of cores (would be
hard-pressed to fit more than around 4 cores, if one wants FP-SIMD).

And, for an "FP-SIMD beast", might be cheaper just to make bigger cores
that can do 8x Binary16 vector ops.

Well, and/or come up with a cheaper alternative to Binary16.



>> For extra speed, maybe the ISR stacks could be mapped to some sort of
>> core-local SRAM. This hasn't been done yet though.
>
> Caches either work or they don't.
>

The SRAM, as a cache, would be "technically not working" as a cache...
It is faster by virtue of being invisible to the outside world.

Thus, not needing to contribute any activity to the bus outside the area
for which it applies.


> Wasting cycles fetching instructions, translations, and data are genuine
> overhead that can be avoided if one treats thread-state as a cache.
>
> If the interrupt occurs often enough to matter, its instructions, data,
> and translations will be in the cache hierarchy.
>
> HW that knows what it is doing can start fetching these things even
> BEFORE it can execute the first instruction on behalf of the interrupt
> dispatcher. SW can NEVER do any of this prior to starting to run instr.
>
>> Idea here being probably the SRAM region could have a special address
>> range, and any access to this region would be invisible to any other
>> cores (and it need not have backing in external RAM).
>
>> One could maybe debate the cost of giving each core 4K or 8K of
>> dedicated local SRAM though merely for "slightly faster interrupt
>> handling".
>
> Yech: end of debate.....
>

There are reasons I have not done this.

AFAIK, use of core-local SRAM areas isn't entirely unheard of though?...


I guess, under the previous scheme, it could be consolidated into bigger
chunks shared between adjacent cores (but without any requirement that
cores be able to see what the other cores had written into this area).


>>>> Having side-channels that allow these values to "magically appear"
>>>> in certain SPRs is at least simpler, though the cost of the logic
>>>> itself could be more up for debate.
>>>
>>> Once you have an FMAC FPU none of the interrupt logic adds any area
>>> to any core.
>>>
>
>> I don't have conventional FMA because it would have had too much cost
>> and latency.
>
> Because you are measuring from the wrong implementation technology,
> using macros ill suited to the problem at hand.
>

Possibly, either because generalized logic is expensive, or because the FPU
cheats by being able to leverage DSP48E blocks.


It is unclear if the FPU would still be viable without the DSP48E blocks or
some other similar hard-logic.


>
>>>> For ISR convenience, it would make sense to have, say, two SPR's or
>>>> CR's designated for ISR use to squirrel off values from GPRs to make
>>>> the prolog/epilog easier. Had considered this, but not done so (in
>>>> this case, first thing the ISR does is save a few registers to the
>>>> ISR stack to free them up to get the rest of the registers saved
>>>> "more properly", then has to do a secondary reload to get these
>>>> registers back into the correct place).
>>>
>>> Or, you could take the point of view that your architecture makes
>>> context
>>> switching easy (like 1 instruction from application 1 to application 2)
>>> and when you do this the rest of the model pretty much drops in for
>>> free.
>>>
>
>> This would cost well more, on the hardware side, than having two
>> non-specialized CRs and being like "ISR's are allowed to stomp these
>> at will, nothing else may use them".
>
> The sequencer is surprisingly small. Everything else already exists and
> is just waiting around for a signal to capture this or emit that.
>

OK.


>
>> Some other tasks can be handled with the microsecond timer and a loop.
>
>> Say:
>>    //void DelayUsec(int usec);
>>    DelayUsec:
>>      CPUID  30
>>      ADD    R4, R0, R6
>>      .L0:
>>      CPUID  30
>>      CMPQGT  R0, R6
>>      BT     .L0
>>      RTS
>> Which would create a certain amount of delay (in microseconds)
>> relative to when the function is called.
>
> CPUID in Opteron's time frame took 200-600 cycles. Do you really want to
> talk to your timer with those kinds of delays ??


CPUID runs directly in the core in question, has a roughly 2-cycle
latency. Within each core, there is basically a running microsecond
counter, and also an RNG.

It basically fetches a value from a sort of internal read-only hardware
register. Most of the registers hold constant values or zeroes, except
for a few that were special. In this case, the output register was
hard-wired as R0.


Using CPUID to fetch the timer value was preferable because:
It is faster than using MMIO;
Doesn't require a syscall;
Fetching the microsecond counter was a big source of latency with the
original MMIO based timer (and some programs need to access the timer in
rapid succession);
...
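
For reference, the same delay loop rendered as C, with read_usec() as
a made-up stand-in for the CPUID-30 counter fetch; the signed
difference keeps the comparison correct across counter wraparound:

#include <stdint.h>

extern uint64_t read_usec(void);  /* hypothetical CPUID-30 wrapper */

void delay_usec(uint64_t usec)
{
    uint64_t target = read_usec() + usec;
    while ((int64_t)(target - read_usec()) > 0)
        ;  /* spin until the counter passes the target */
}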


David Brown

unread,
Jan 22, 2024, 2:16:43 AMJan 22
to
On 17/01/2024 19:02, Scott Lurndal wrote:
> Quadibloc <quad...@servername.invalid> writes:
>> When a computer recieves an interrupt signal, it needs to save
>> the complete machine state, so that upon return from the
>> interrupt, the program thus interrupted is in no way affected.
>>
>> This is because interrupts can happen at any time, and thus
>> programs don't prepare for them or expect them. Any disturbance
>> to the contents of any register would risk causing programs to
>> crash.
>
> Something needs to preserve state, either the hardware or
> the software. Most risc processors lean towards the latter,
> generally for good reason - one may not need to save
> all the state if the interrupt handler only touchs part of it.
>
>>
>> The Concertina II has a potentially large machine state which most
>> programs do not use. There are vector registers, of the huge
>> kind found in the Cray I. There are banks of 128 registers to
>> supplement the banks of 32 registers.
>>
>> One obvious step in addressing this is for programs that don't
>> use these registers to run without access to those registers.
>> If this is indicated in the PSW, then the interrupt routine will
>> know what it needs to save and restore.
>
> Just like x86 floating point.

Also ARM floating point (at least, on the 32-bit Cortex-M ARM families).

>
>>
>> A more elaborate and more automated method is also possible.
>>
>> Let us imagine the computer speeds up interrupts by having a
>> second bank of registers that interrupt routines use. But two
>> register banks aren't enough, as many user programs are running
>> concurrently.
>>
>> Here is how I envisage the sequence of events in response to
>> an interrupt could work:
>>
>> 1) The computer, at the beginning of an area of memory
>> sufficient to hold all the contents of the computer's
>> registers, including the PSW and program counter, places
>> a _restore status_.
>
> Slow DRAM or special SRAMs? The former will add
> considerable latency to an interrupt, the latter costs
> area (on a per-hardware-thread basis) and floorplanning
> issues.

An SRAM block sufficient to hold a small number of copies of registers,
even for ISA's with lots of registers, would be small compared to
typical cache blocks. Indeed, it could be considered as a kind of
dedicated cache.

>
> Best is to save the minimal amount of state in hardware
> and let software deal with the rest, perhaps with
> hints from the hardware (e.g. a bit that indicates
> whether the FPRs were modified since the last context
> switch, etc).

A combined effort sounds good to me.


EricP

unread,
Jan 22, 2024, 10:01:44 AMJan 22
to
MitchAlsup1 wrote:
> EricP wrote:
>
>> There also can be many restrictions on what an ISR is allowed to do
>> because the OS designers did not want to, say, force every ISR to
>> sync with the slow x87 FPU just in case someone wanted to use it.
>
> What about all the architectures that are not x86 and do not need to synch
> to FP, Vectors, SIMD, ..... ?? Why are they constrained by the one badly
> designed long life architecture ??

This is about minimizing what it saves *by default*.

I am not assuming a vector coprocessor would be halted by interrupts.
Not automatically halting for interrupts is one reason to have a coprocessor.

Also I am not assuming that halting these devices is free.

It is also about an OS *by design* discouraging people from putting code
which requires a large state save and restore into latency sensitive
device drivers that can affect the whole system performance.

If you really insist on using SIMD in a driver then
(a) don't put it in an ISR, put it in a post routine,
(b) use utility routines to manually save and restore that state.

>> I would not assume that anything other than integer registers would be
>> available in an ISR.
>
> This is quite reasonable: as long as you have a sufficient number that
> the ISR can be written in some HLL without a bunch of flags to the
> compiler.

Yes, an OS can have a different ABI for ISR routines and everything else.
ISR level routines would have a different declaration as 99% of the time
a small save/restore set is sufficient.

>> In a post processing DPC/SoftIrq routine it might be possible but again
>> there can be limitations. What you don't ever want to happen is to hang
>> the cpu waiting to sync with a piece of hardware so you can save its
>> state,
>> as might happen if it was a co-processor. You also don't want to have to
>> save any state just in case a post routine might want to do something,
>> but rather save/restore the state on demand and just what is needed.
>> So it really depends on the device and the platform.
>
> As long as there are not more than one flag to clue the compiler in,
> I am on board.

I would do it with declarations (routine attributes) as it is less
error prone, just like MS C has stdcall, cdecl calling conventions.

void __isrcall MyDeviceIsr (IoDevice_t *dev, etc);

This would just be for an ISR routine and the few routines it calls.
The driver post routines could use a standard ABI.

The isrcall attribute changes the ABI to be R0:R7 are not preserved,
R8:R31 are preserved. Also there is no need for a frame pointer
as variable allocations are not allowed, neither are exceptions.
It could also do things like change the stack pointer to be in
R7 instead of R31 (just pointing out the possibilities).

The interrupt prologue saves R0:R7 and loops calling the ISR for
each device receiving an interrupt at that interrupt priority level.
After all are serviced it checks if it needs post processing.
If not then it executes the epilogue to restore R0:R7 and REI's.
If it does then it saves R8:R15 to comply with the standard ABI
and jumps into the OS Dispatcher which flushes the post routines.
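
A loose C-shaped sketch of that flow, with every helper name invented
here (the register save/restore parts would really be asm stubs):

typedef struct IoDevice IoDevice_t;

extern void save_scratch_r0_r7(void);       /* asm stubs in reality */
extern void restore_scratch_r0_r7(void);
extern void save_preserved_r8_r15(void);
extern IoDevice_t *next_pending_device(int ipl);
extern void call_isr(IoDevice_t *dev);      /* an __isrcall routine */
extern int post_work_pending(void);
extern void dispatcher_flush_post_routines(void);
extern void rei(void);                      /* return from interrupt */

void interrupt_dispatch(int ipl)
{
    IoDevice_t *dev;
    save_scratch_r0_r7();                    /* prologue */
    while ((dev = next_pending_device(ipl)) != NULL)
        call_isr(dev);                       /* small save set only */
    if (!post_work_pending()) {
        restore_scratch_r0_r7();             /* epilogue */
        rei();
    } else {
        save_preserved_r8_r15();             /* full standard ABI */
        dispatcher_flush_post_routines();    /* DPC/SoftIrq work */
    }
}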



Scott Lurndal

unread,
Jan 22, 2024, 11:09:59 AMJan 22
to
mitch...@aol.com (MitchAlsup1) writes:
>BGB wrote:
>

>> Much like with MMU:
>> Only the base level needs to actually handle TLB miss events, and
>> everything else (nested translation, etc), can be left to software
>> emulation.
>
>Name a single ISA that fakes the TLB ?? (and has an MMU)

MIPS?


>> Presumably, each core gets its own ISR stack, which should not have any
>> reason to need to interact with each other.
>
>I presume an interrupt can be serviced by any number of cores.

Or restricted to a specific set of cores (i.e. those currently
owned by the target guest).

The guest OS will generally specify the target virtual core (or set of cores)
for a specific interrupt. The Hypervisor and/or hardware needs
to deal with the case where the interrupt arrives while the target
guest core isn't currently scheduled on a physical core (and poke
the kernel to schedule the guest optionally). Such as recording
the pending interrupt and optionally notifying the hypervisor that
there is a pending guest interrupt so it can schedule the guest
core(s) on physical cores to handle the interrupt.

>I presume that there are a vast number of devices. Each device assigned
>to a few GuestOSs.

Or, with SR-IOV, virtual functions are assigned to specific guests
and all interrupts are MSI-X messages from the device to the
interrupt controller (LAPIC, GIC, etc).

Dealing with inter-processor interrupts in a multicore guest can also
be tricky; either trapped by the hypervisor or there must be hardware
support in the interrupt controller to notify the hypervisor that a pending
guest IPI interrupt has arrived. ARM started with the former behavior, but
added a mechanism to handle direct injection of interprocessor interrupts
by the guest, without hypervisor intervention (assuming the guest core
is currently scheduled on a physical core, otherwise the hypervisor gets
notified that there is a pending interrupt for a non-scheduled guest
core).

>I presume the core that services the interrupt (ISR) is running the same
>GuestOS under the same HyperVisor that initiated the device.

Generally a safe assumption. Note that the guest core may not be
resident on any physical core when the guest interrupt arrives.

>I presume the core that services the interrupt was of the lowest priority
>of all the cores then running that GuestOS.
>I presume the core that services the interrupt wasted no time in doing so.
>
>And the GuestOS decides on how its ISR stack is {formatted, allocated, used,
>serviced, ...} which can be different for each GuestOS.

To a certain extent, the format of the ISR stack is hardware defined,
and there rest is completely up to the guest. ARM for example,
saves the current PC into a system register (ELR_ELx) and switches
the stack pointer. Everything else is up to the software interrupt
handler to save/restore. I see little benefit in hardware doing
any state saving other than that.


>
>If the interrupt occurs often enough to matter, its instructions, data,
>and translations will be in the cache hierarchy.

Although there has been a great deal of work mitigating the
number of interrupts (setting interrupt thresholds, RSS,
polling (DPDK, ODP), etc)

I don't see any advantages to all the fancy hardware interrupt
proposals from either of you.

EricP

unread,
Jan 22, 2024, 11:25:11 AMJan 22
to
How so? The location has to be inside the same virtual space.

>> Note that if LL/SC is implemented with temporary line pinning
>> (as might be done to guarantee forward progress and prevent ping-pong)
>> then it cannot be interfered with, and CAS and atomic-fetch-op sequences
>> are semantically identical to the equivalent single instructions
>> (which may also be implemented with temporary line pinning if their
>> data must move from cache through the core and back).
>
> Line pinning requires a NAK in the coherence protocol. As far as I know,
only My 66000 interconnect protocol has such a NAK.

Not necessarily, provided it is time limited (few tens of clocks).

Also I suspect the worst case latency for moving a line ownership
could be quite large (a lots of queues and cache levels to traverse),
and main memory can be many hundreds of clocks away.

So the cache protocol should already be long latency tolerant
and adding some 10's of clocks shouldn't really matter.

>> Also LL/SC as implemented on Alpha, MIPS, Power, ARM, RISC-V don't allow
>> any other location loads or stores between them so really aren't useful
>> for detecting ABA because detecting it requires monitoring two memory
>> locations for change.
>
>> The classic example is the single linked list with items head->A->B->C
>> Detecting ABA requires monitoring if either head or head->Next change
>> which LL/SC cannot do as reading head->Next cancels the lock on head.
>
> Detecting ABA requires one to monitor addresses not data values.

It is a method for reading a pair of addresses, and knowing that
neither of them has changed between those two steps,
proceeding to update the first address.

It requires monitoring a first address while reading a second address,
and then updating the first address (releasing the monitor),
and using any update to the first address between those three steps to
infer there might have been a change to the second and blocking the update.

Which none of the LL/SC guarantee you can do.
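
To make the failure concrete, a sketch of the pop in C-flavored
pseudocode (load_linked/store_conditional are notional here; C has no
portable LL/SC API, and on Alpha/MIPS/ARM/RISC-V these are
instructions with per-ISA restrictions):

struct node { struct node *next; };

extern struct node *load_linked(struct node **addr);
extern int store_conditional(struct node **addr, struct node *val);

struct node *pop(struct node **head)
{
    struct node *h, *n;
    do {
        h = load_linked(head);   /* reservation placed on head */
        if (h == NULL)
            return NULL;
        n = h->next;             /* this second load is what kills
                                    the reservation on most LL/SC
                                    implementations */
    } while (!store_conditional(head, n));
    return h;
}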

MitchAlsup1

unread,
Jan 22, 2024, 2:20:45 PMJan 22
to
Anything that changes the amount of time something takes opens up a
side channel. Whether data can flow through the channel is a different
story. Holding a line changes the bounds on the time taken.

>>> Note that if LL/SC is implemented with temporary line pinning
>>> (as might be done to guarantee forward progress and prevent ping-pong)
>>> then it cannot be interfered with, and CAS and atomic-fetch-op sequences
>>> are semantically identical to the equivalent single instructions
>>> (which may also be implemented with temporary line pinning if their
>>> data must move from cache through the core and back).
>>
>> Line pinning requires a NAK in the coherence protocol. As far as I know,
>> only My 66000 interconnect protocol has such a NaK.

> Not necessarily, provided it is time limited (few tens of clocks).

That I will grant.

> Also I suspect the worst case latency for moving a line ownership
> could be quite large (a lots of queues and cache levels to traverse),
> and main memory can be many hundreds of clocks away.

Figure 100-cycles as a loaded system average.

> So the cache protocol should already be long latency tolerant
> and adding some 10's of clocks shouldn't really matter.

But does 100+ cycles ??

>>> Also LL/SC as implemented on Alpha, MIPS, Power, ARM, RISC-V don't allow
>>> any other location loads or stores between them so really aren't useful
>>> for detecting ABA because detecting it requires monitoring two memory
>>> locations for change.
>>
>>> The classic example is the single linked list with items head->A->B->C
>>> Detecting ABA requires monitoring if either head or head->Next change
>>> which LL/SC cannot do as reading head->Next cancels the lock on head.
>>
>> Detecting ABA requires one to monitor addresses not data values.

> It is a method for reading a pair of addresses, and knowing that
> neither of them has changed between those two steps,
> proceeding to update the first address.

> It requires monitoring a first address while reading a second address,
> and then updating the first address (releasing the monitor),
> and using any update to the first address between those three steps to
> infer there might have been a change to the second and blocking the update.

> Which none of the LL/SC guarantee you can do.

Right, LL/SC is a single-container synchronization model.

MitchAlsup1

unread,
Jan 22, 2024, 2:20:47 PMJan 22
to
Scott Lurndal wrote:

> mitch...@aol.com (MitchAlsup1) writes:
>>BGB wrote:
>>

>>> Much like with MMU:
>>> Only the base level needs to actually handle TLB miss events, and
>>> everything else (nested translation, etc), can be left to software
>>> emulation.
>>
>>Name a single ISA that fakes the TLB ?? (and has an MMU)

> MIPS?

Even the R2000 has a TLB; it is a SW-serviced TLB, but the "zero overhead
on hit" part is present.

>>> Presumably, each core gets its own ISR stack, which should not have any
>>> reason to need to interact with each other.
>>
>>I presume an interrupt can be serviced by any number of cores.

> Or restricted to a specific set of cores (i.e. those currently
> owned by the target guest).

Even that gets tricky when you (or the OS) virtualizes cores.

> The guest OS will generally specify the target virtual core (or set of cores)

Yes, set of cores.

> for a specific interrupt. The Hypervisor and/or hardware needs
> to deal with the case where the interrupt arrives while the target
> guest core isn't currently scheduled on a physical core (and poke
> the kernel to schedule the guest optionally). Such as recording
> the pending interrupt and optionally notifying the hypervisor that
> there is a pending guest interrupt so it can schedule the guest
> core(s) on physical cores to handle the interrupt.

That is the routing I was talking about.

>>I presume that there are a vast number of devices. Each device assigned
>>to a few GuestOSs.

> Or, with SR-IOV, virtual functions are assigned to specific guests
> and all interrupts are MSI-X messages from the device to the
> interrupt controller (LAPIC, GIC, etc).

In my case, the interrupt controller merely sets bits in the interrupt
table; the watching cores watch for changes to its pending interrupt
register (64-bits). Said messages come up from PCIe as MSI-X messages,
and are directed to the interrupt controller over in the Memory Controller
(L3).

> Dealing with inter-processor interrupts in a multicore guest can also
> be tricky;

Core sends an MSI-X message to the interrupt controller and the rest happens
no differently than a device interrupt.

> either trapped by the hypervisor or there must be hardware
> support in the interrupt controller to notify the hypervisor that a pending
> guest IPI interrupt has arrived. ARM started with the former behavior, but
> added a mechanism to handle direct injection of interprocessor interrupts
> by the guest, without hypervisor intervention (assuming the guest core
> is currently scheduled on a physical core, otherwise the hypervisor gets
> notified that there is a pending interrupt for a non-scheduled guest
> core).

>>I presume the core that services the interrupt (ISR) is running the same
>>GuestOS under the same HyperVisor that initiated the device.

> Generally a safe assumption. Note that the guest core may not be
> resident on any physical core when the guest interrupt arrives.

Which is why its table has to be present at all times--even if the threads
are not. When one or more threads from that GuestOS are activated, the
pending interrupt will be serviced.

>>I presume the core that services the interrupt was of the lowest priority
>>of all the cores then running that GuestOS.
>>I presume the core that services the interrupt wasted no time in doing so.
>>
>>And the GuestOS decides on how its ISR stack is {formatted, allocated, used,
>>serviced, ...} which can be different for each GuestOS.

> To a certain extent, the format of the ISR stack is hardware defined,
> and there rest is completely up to the guest. ARM for example,
> saves the current PC into a system register (ELR_ELx) and switches
> the stack pointer. Everything else is up to the software interrupt
> handler to save/restore. I see little benefit in hardware doing
> any state saving other than that.


>>
>>If the interrupt occurs often enough to matter, its instructions, data,
>>and translations will be in the cache hierarchy.

> Although there has been a great deal of work mitigating the
> number of interrupts (setting interrupt thresholds, RSS,
> polling (DPDK, ODP), etc)

> I don't see any advantages to all the fancy hardware interrupt
> proposals from either of you.

I understand.

BGB

unread,
Jan 22, 2024, 2:25:32 PMJan 22
to
On 1/22/2024 10:09 AM, Scott Lurndal wrote:
> mitch...@aol.com (MitchAlsup1) writes:
>> BGB wrote:
>>
>
>>> Much like with MMU:
>>> Only the base level needs to actually handle TLB miss events, and
>>> everything else (nested translation, etc), can be left to software
>>> emulation.
>>
>> Name a single ISA that fakes the TLB ?? (and has an MMU)
>
> MIPS?
>

Hmm...

In my case, the use of Soft TLB is not strictly required, as the OS may
opt-in to use a hardware page-walker "if it exists", with TLB Miss
interrupts mostly happening if no hardware page walker exists (or if
there is not a valid page in the page table).

This allows the option of implementing a nested page-translation
mechanism in the top-level TLB Miss handler (with a guest able to opt
out of the hardware page walking if it wants to run its own VM, in which
case it will need to recursively emulate the TLB Miss ISRs and LDTLB
handling).

Well, or come up with a convention where the top level can see the VM
state of each guest recursively, so that the top-level ISR can
(directly) handle N levels of nested page-tables (rather than needing to
nest the TLB Miss ISR).


Though, the most likely option for this would be to make the nested VM's
express their VM state using the same context structure as normal
threads/processes, and effectively canonizing these parts of the
structure as part of the ISA/ABI spec (and a guest deviating from this
structure would come at potentially significant performance cost).


May also make sense to add specific interrupts for specific privileged
instructions, such that common cases like accessing a CR or using an
LDTLB instruction can be trapped more efficiently (IOW: not needing to
disassemble the offending instruction to figure out what to do).
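
Say, something with this dispatch shape (the trap codes and emulate_*
helpers are invented for illustration):

#include <stdint.h>

struct vm_state;
enum trap_code { TRAP_LDTLB, TRAP_CR_READ, TRAP_CR_WRITE };

extern void emulate_ldtlb(struct vm_state *vm, uint64_t operand);
extern void emulate_cr_read(struct vm_state *vm, uint64_t operand);
extern void emulate_cr_write(struct vm_state *vm, uint64_t operand);

/* Hardware hands over a specific code, so no disassembly of the
   faulting instruction is needed. */
void privileged_trap(enum trap_code code, uint64_t operand,
                     struct vm_state *vm)
{
    switch (code) {
    case TRAP_LDTLB:    emulate_ldtlb(vm, operand);    break;
    case TRAP_CR_READ:  emulate_cr_read(vm, operand);  break;
    case TRAP_CR_WRITE: emulate_cr_write(vm, operand); break;
    }
}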

>
>>> Presumably, each core gets its own ISR stack, which should not have any
>>> reason to need to interact with each other.
>>
>> I presume an interrupt can be serviced by any number of cores.
>
> Or restricted to a specific set of cores (i.e. those currently
> owned by the target guest).
>
> The guest OS will generally specify the target virtual core (or set of cores)
> for a specific interrupt. The Hypervisor and/or hardware needs
> to deal with the case where the interrupt arrives while the target
> guest core isn't currently scheduled on a physical core (and poke
> the kernel to schedule the guest optionally). Such as recording
> the pending interrupt and optionally notifying the hypervisor that
> there is a pending guest interrupt so it can schedule the guest
> core(s) on physical cores to handle the interrupt.
>

I am guessing maybe my assumed approach of always routing all of the
external hardware interrupts to a specific core, is not typical then?...

Say, only Core=0 or Core=1, will get the interrupts.


*: Here, 0 vs 1 is ambiguous partly as '0' was left as a "This core",
with other cores numbered 1-15.

This scheme does work directly with < 15 cores, with trickery for 16
cores, but would require nesting trickery for more cores.


>> I presume that there are a vast number of devices. Each device assigned
>> to a few GuestOSs.
>
> Or, with SR-IOV, virtual functions are assigned to specific guests
> and all interrupts are MSI-X messages from the device to the
> interrupt controller (LAPIC, GIC, etc).
>
> Dealing with inter-processor interrupts in a multicore guest can also
> be tricky; either trapped by the hypervisor or there must be hardware
> support in the interrupt controller to notify the hypervisor that a pending
> guest IPI interrupt has arrived. ARM started with the former behavior, but
> added a mechanism to handle direct injection of interprocessor interrupts
> by the guest, without hypervisor intervention (assuming the guest core
> is currently scheduled on a physical core, otherwise the hypervisor gets
> notified that there is a pending interrupt for a non-scheduled guest
> core).
>

Yeah.

Admittedly, I hadn't really thought about or looked into these parts...


>> I presume the core that services the interrupt (ISR) is running the same
>> GuestOS under the same HyperVisor that initiated the device.
>
> Generally a safe assumption. Note that the guest core may not be
> resident on any physical core when the guest interrupt arrives.
>

Trying to route actual HW interrupts into virtual guest OS's seems like
a pain.

In any case, it needs to be routed to where it needs to go.


>> I presume the core that services the interrupt was of the lowest priority
>> of all the cores then running that GuestOS.
>> I presume the core that services the interrupt wasted no time in doing so.
>>
>> And the GuestOS decides on how its ISR stack is {formatted, allocated, used,
>> serviced, ...} which can be different for each GuestOS.
>
> To a certain extent, the format of the ISR stack is hardware defined,
> and there rest is completely up to the guest. ARM for example,
> saves the current PC into a system register (ELR_ELx) and switches
> the stack pointer. Everything else is up to the software interrupt
> handler to save/restore. I see little benefit in hardware doing
> any state saving other than that.
>

Mostly agreed.

If ARM goes minimal here, and pretty much nowhere else, this seems
telling...


As I see it, the main limiting factor for interrupt performance is not
the instructions to save and restore the registers, but rather the L1
misses that result from doing so.

Short of having special core-local SRAM or similar, this cost is
unavoidable.


Currently there is an SRAM region, but it is shared and in the L2 Ring,
so it will not have L2 misses, but has higher access latency than if it
were in the L1 ring.

But, it is debatable if it really actually matters, and there are
probably reasons not to have core-local memory regions.

But, compared with the RISC-V solution of doing N copies of the register
file, a core-local SRAM for the ISR stack would be cheap.


But, yeah:
Save PC;
Save any CPU flags/state;
Swap the stacks;
Set CPU state to a supervisor+ISR mode;
Branch to ISR entry point (to an offset in a vector table).

Does work, and seems pretty close to the minimum requirement.
Couldn't really think up a good way to trim it down much smaller.
At least without adding a bunch of extra wonk.
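
The whole entry sequence, written out as runnable C pseudocode over
global stand-ins for the machine registers (names loosely follow the
SuperH convention of SPC/SSR; the bit positions are assumptions):

#include <stdint.h>

static uint64_t PC, SPC, SR, SSR, SP, SSP, VBR;
#define SR_MD (1u << 30)   /* supervisor mode (assumed bit)   */
#define SR_BL (1u << 28)   /* ISR/interrupt-block (assumed)   */

static void swap_regs(uint64_t *a, uint64_t *b)
{ uint64_t t = *a; *a = *b; *b = t; }

void hw_interrupt_entry(unsigned vector)
{
    SPC = PC;                 /* 1: save PC                 */
    SSR = SR;                 /* 2: save CPU flags/state    */
    swap_regs(&SP, &SSP);     /* 3: swap user/ISR stacks    */
    SR |= SR_MD | SR_BL;      /* 4: supervisor + ISR mode   */
    PC = VBR + vector * 8u;   /* 5: branch via vector table */
}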

In HW, there are effectively two stack-pointer registers, which
swap places on ISR entry/exit (currently by renumbering the registers in
the decoder).


Can't really get rid of the stack-swap without adding considerably more
wonk to the ISR handling mechanism (if the ISR entry point has 0 free
registers, and no usable stack pointer, well then, we have a bit more of
a puzzle...).

So, a mechanism to swap a pair of stack-pointer registers seemed like a
necessary evil.


With a Soft-TLB, it is also basically required to fall back to physical
addressing for ISR's (and with HW page-walking, if virtual-memory could
exist in ISRs, it would likely be necessary to jump over to a different
set of page-tables from the usermode program).


>
>>
>> If the interrupt occurs often enough to matter, its instructions, data,
>> and translations will be in the cache hierarchy.
>
> Although there has been a great deal of work mitigating the
> number of interrupts (setting interrupt thresholds, RSS,
> polling (DPDK, ODP), etc)
>
> I don't see any advantages to all the fancy hardware interrupt
> proposals from either of you.

?...

In my case, I had not been arguing for any fancy interrupt handling in
hardware...

The most fancy part of my interrupt mechanism, is that one can encode
the ID of a core into the value passed to a "TRAPA", and it will
redirect the interrupt to that specific core.


But, this mechanism currently has the limitations of a 4-bit field, so
going beyond ~ 15 cores is going to require a nesting scheme and
bouncing IPI's across multiple cores.

Though, if needed, I could tweak the format slightly in this case, and
maybe expand the Core-ID for IPI's to 8-bits, albeit limiting it to 16
unique IPI interrupt types.


Or, an intermediate would be 6-bit, and then require nesting for more
than 63 cores.

Doesn't matter for an FPGA, as with the BJX2 Core, I am mostly limited
to 1 or 2 cores on "consumer grade" FPGAs (for all of the FPGAs that could
fit more than two cores, I can no longer use the free version of Vivado).


In theory, could fit a quad-core on a Kintex-325T that I got off
AliExpress (and probably run at a higher clock-speed as well), but,
can't exactly use this FPGA in the free version of Vivado (and the
open-source tools both didn't work for me, and put up some "major red
flags" regarding their reverse engineering strategies; so even if the
tools did work, using them to generate bitstreams for a Kintex-325T or
similar would be legally suspect).

...


Though, apparently, some people are getting higher clock-speeds by just
letting the design fail timing and running it anyway (say, if the design
passes timing at 50MHz, one can in theory push it up to around
75-100 MHz before it starts glitching out).

I was playing it safe here though.

...


Chris M. Thomasson

unread,
Jan 22, 2024, 3:28:22 PMJan 22
to
I feel the need to clarify that these words need to be adjacent within
the _same_ cache line.

MitchAlsup1

unread,
Jan 22, 2024, 3:35:44 PMJan 22
to
BGB wrote:

> On 1/22/2024 10:09 AM, Scott Lurndal wrote:
>> mitch...@aol.com (MitchAlsup1) writes:


> In my case, the use of Soft TLB is not strictly required, as the OS may
> opt-in to use a hardware page-walker "if it exists", with TLB Miss
> interrupts mostly happening if no hardware page walker exists (or if
> there is not a valid page in the page table).

Has anyone done a SW refill TLB implementation that has both Hypervisor
and Supervisor page <nested> translations ??

This seems to me a bad idea as HV would end up having to manipulate
GuestOS mappings {Because you cannot allow GuestOS to see HV mappings}.

{{Aside:: At one time I was enamored with SW TLB refill and one could
reduce TLB refill penalty by allocating a "big enough" secondary hashed
TLB (1MB+). When HV + GuesOS came about, I saw the futility of it all}}

>>
>> The guest OS will generally specify the target virtual core (or set of cores)
>> for a specific interrupt. The Hypervisor and/or hardware needs
>> to deal with the case where the interrupt arrives while the target
>> guest core isn't currently scheduled on a physical core (and poke
>> the kernel to schedule the guest optionally). Such as recording
>> the pending interrupt and optionally notifying the hypervisor that
>> there is a pending guest interrupt so it can schedule the guest
>> core(s) on physical cores to handle the interrupt.
>>

> I am guessing maybe my assumed approach of always routing all of the
> external hardware interrupts to a specific core, is not typical then?...

> Say, only Core=0 or Core=1, will get the interrupts.

What do you think happens when there are thousands of cores and thousands
of disks, hundreds of Gigabit Ethernet controllers, where the number of
interrupts per second is larger than 1 or 2 cores can manage ??

<snip>

> So, a mechanism to swap a pair of stack-pointer registers seemed like a
> necessary evil.


> With a Soft-TLB, it is also basically required to fall back to physical
> addressing for ISR's (and with HW page-walking, if virtual-memory could
> exist in ISRs, it would likely be necessary to jump over to a different
> set of page-tables from the usermode program).

Danger Will Robinson, Danger


> In my case, I had not been arguing for any fancy interrupt handling in
> hardware...

In my case, MSI-X interrupts are routed to MC(L3) where each message sets
up to 2 bits, one demarcating the unique interrupt, and the other merging
interrupts of a priority level into a second single bit. The setting of
this second bit is SNOOPed by cores to decide if they should attempt to
recognize an interrupt. Cores not associated with that interrupt table
do not see that interrupt; but those that are do. Thus, there is no pre-
assigned cores to service interrupts.
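
For flavor, a guessy core-side sketch; the one-bit-per-priority-level
layout of the snooped 64-bit pending register is my assumption, not a
statement of the actual My 66000 encoding:

#include <stdint.h>

/* Return the highest pending priority level above the core's current
   level, or -1 if there is nothing for this core to claim. */
int claimable_level(uint64_t pending, int current_level)
{
    for (int lvl = 63; lvl > current_level; lvl--)
        if (pending & (1ull << lvl))
            return lvl;
    return -1;
}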

> The most fancy part of my interrupt mechanism, is that one can encode
> the ID of a core into the value passed to a "TRAPA", and it will
> redirect the interrupt to that specific core.


> But, this mechanism currently has the limitations of a 4-bit field, so
> going beyond ~ 15 cores is going to require a nesting scheme and
> bouncing IPI's across multiple cores.

Danger Will Robinson, Danger !!

> Though, if needed, I could tweak the format slightly in this case, and
> maybe expand the Core-ID for IPI's to 8-bits, albeit limiting it to 16
> unique IPI interrupt types.

I have 512 unique interrupts per priority level. There are 64 priority
levels.

Scott Lurndal

unread,
Jan 22, 2024, 3:40:43 PMJan 22
to
mitch...@aol.com (MitchAlsup1) writes:
>Scott Lurndal wrote:
>

>> Or restricted to a specific set of cores (i.e. those currently
>> owned by the target guest).
>
>Even that gets tricky when you (or the OS) virtualizes cores.

Oh, indeed. It's helpful to have good hardware support. The
ARM GIC, for example, helps eliminate hypervisor interaction
during normal guest interrupt handling (aside from scheduling the
guest on a host core).


>
>In my case, the interrupt controller merely sets bits in the interrupt
>table; the watching cores watch for changes to its pending interrupt
>register (64-bits). Said messages come up from PCIe as MSI-X messages,

The interrupt space for MSI-X messages is 32-bits. Implementations
may support fewer than 2**32 interrupts - ours support 2**24 distinct
interrupt vectors.

>and are directed to the interrupt controller over in the Memory Controller
>(L3).
>
>> Dealing with inter-processor interrupts in a multicore guest can also
>> be tricky;
>
>Core sends an MSI-X message to the interrupt controller and the rest happens
>no differently than a device interrupt.

Not necessarily, particularly if the guest isn't resident on any
core at the time the interrupt is received.


>
>>>I presume the core that services the interrupt (ISR) is running the same
>>>GuestOS under the same HyperVisor that initiated the device.
>
>> Generally a safe assumption. Note that the guest core may not be
> resident on any physical core when the guest interrupt arrives.
>
>Which is why its table has to be present at all times--even if the threads
>are not. When one or more threads from that GuestOS are activated, the
>pending interrupt will be serviced.

Yes, but the hypervisor needs to be notified by the hardware when the table
is updated and the target guest VCPU isn't currently scheduled
on any core so that it can decide to schedule the guest (which may,
for instance, have been parked because it executed a WFI, PAUSE
or MWAIT instruction).

Scott Lurndal

unread,
Jan 22, 2024, 5:15:34 PMJan 22
to
BGB <cr8...@gmail.com> writes:
>On 1/22/2024 10:09 AM, Scott Lurndal wrote:

>I am guessing maybe my assumed approach of always routing all of the
>external hardware interrupts to a specific core, is not typical then?...
>
>Say, only Core=0 or Core=1, will get the interrupts.

Maybe on a microcontroller :-).

On a desktop or server system (particularly the latter), the kernel
may distribute interrupts however it likes. Network card RSS
(Receive Side Scaling) requires being able to distribute interrupts
over a set of (or all) cores. Any time you make a core "special"
all kinds of new usage constraints arise (not to mention reduced
fault tolerance).



>
>Trying to route actual HW interrupts into virtual guest OS's seems like
>a pain.

Check out the ARM GICv3/v4 implementation to see how it does
this. It has evolved over time to where you see it now. Originally,
they only provided a set of CPU system registers to the hypervisor
that allowed the hypervisor to inject interrupts into the guest. The
hypervisor handled all interrupts itself, then queued them
(in a set of one or more List Registers) to the guest. When the
hypervisor dispatched the guest on the core, it would get
an interrupt and read the same interrupt ack register that
the hypervisor uses but the hardware would, for the guest
access, access one of the list registers and announce that
interrupt to the guest. The guest would end the interrupt
just like a bare-metal os by writing the interrupt number
to an interrupt END system register, which would drop
the running interrupt priority (for nested interrupts).
If the interrupt was level sensitive, unmasked and the
highest priority pending interrupt, the guest would
get another interrupt (wash, rinse, repeat).

Lots of trips (even low cost on AArch64) between exception
levels.

So, they've added a capability (only for message signaled
interrupts) to deliver the MSI interrupt directly to the
guest - if the target guest core isn't resident, the hardware
will ring a doorbell for the hypervisor. Once the HV makes
the guest resident on the CPU, it will take any pending
interrupts recorded for that virtual CPU, in order of
interrupt priority.

The final enhancement (GICV4.1) adds the ability to issue
virtual inter-processor interrupts between virtual CPU's
without hypervisor intervention (other than making the
guest vcpus resident on real cores).


>As I see it, the main limiting factor for interrupt performance is not
>the instructions to save and restore the registers, but rather the L1
>misses that result from doing so.

If the interrupt is happening at a rate where the L1
cache miss is significant, then the device probably needs to be
redesigned to reduce the number of interrupts (e.g.
interrupt coalescing), use DMA, or do more work per interrupt,
or poll the completion status from the driver rather
than waiting for the interrupt.

BGB-Alt

unread,
Jan 22, 2024, 5:17:56 PMJan 22
to
On 1/22/2024 2:31 PM, MitchAlsup1 wrote:
> BGB wrote:
>
>> On 1/22/2024 10:09 AM, Scott Lurndal wrote:
>>> mitch...@aol.com (MitchAlsup1) writes:
>
>
>> In my case, the use of Soft TLB is not strictly required, as the OS
>> may opt-in to use a hardware page-walker "if it exists", with TLB Miss
>> interrupts mostly happening if no hardware page walker exists (or if
>> there is not a valid page in the page table).
>
> Has anyone done a SW refill TLB implementation that has both Hypervisor
> and Supervisor page <nested> translations ??
>
> This seems to me a bad idea as HV would end up having to manipulate
> GuestOS mappings {Because you cannot allow GuestOS to see HV mappings}.
>
> {{Aside:: At one time I was enamored with SW TLB refill and one could
> reduce TLB refill penalty by allocating a "big enough" secondary hashed
> TLB (1MB+). When HV + GuesOS came about, I saw the futility of it all}}
>

One would need to standardize on parts of the ABI, and treat them like
one would hardware-level constraints, to allow the top-level HV to cross
multiple levels of mapping.

Or, suffer the performance overhead of using multiple levels of emulation.
One of the two...


Granted, in simple cases, things like DOSBox and QEMU do surprisingly
well on Windows, despite the overheads of these using software emulation
rather than fancy hardware virtualization (so, the HW native stuff may
be overrated).

But, granted, being on a machine where the actual hardware
virtualization apparently doesn't work for some unknown reason, these
are basically the only real option (as most of the other VMs have,
annoyingly, gone over to requiring that the hardware virtualization
"actually work"...).


>>>
>>> The guest OS will generally specify the target virtual core (or set
>>> of cores)
>>> for a specific interrupt.   The Hypervisor and/or hardware needs
>>> to deal with the case where the interrupt arrives while the target
>>> guest core isn't currently scheduled on a physical core (and poke
>>> the kernel to schedule the guest optionally).   Such as recording
>>> the pending interrupt and optionally notifying the hypervisor that
>>> there is a pending guest interrupt so it can schedule the guest
>>> core(s) on physical cores to handle the interrupt.
>>>
>
>> I am guessing maybe my assumed approach of always routing all of the
>> external hardware interrupts to a specific core, is not typical then?...
>
>> Say, only Core=0 or Core=1, will get the interrupts.
>
> What do you think happens when there are thousands of cores and thousands
> of disks, hundreds of Gigabit Ethernet controllers, where the number of
> interrupts per second is larger than 1 or 2 cores can manage ??
>

Dunno.


Most of the systems I was familiar with were focused on using the cores
for large amounts of floating-point or integer math, rather than huge
amounts of IO. Had not considered IO intensive cases.


But, in this case, it makes more sense to have 1 core run the OS and
similar, and most of the other cores are left to grind away at doing
math or similar.

Possibly with the dataset carved up along a grid and each thread
primarily working on its own local section of the grid, ...


> <snip>
>
>> So, a mechanism to swap a pair of stack-pointer registers seemed like
>> a necessary evil.
>
>
>> With a Soft-TLB, it is also basically required to fall back to
>> physical addressing for ISR's (and with HW page-walking, if
>> virtual-memory could exist in ISRs, it would likely be necessary to
>> jump over to a different set of page-tables from the usermode program).
>
> Danger Will Robinson, Danger
>
>
>> In my case, I had not been arguing for any fancy interrupt handling in
>> hardware...
>
> In my case, MSI-X interrupts are routed to MC(L3) where each message sets
> up to 2 bits, one demarking the unique interrupt, and the other merging
> interrupts of a priority level into a second single bit. The setting of
> this second bit is SNOOPed by cores to decide if they should attempt to
> recognize an interrupt. Cores not associated with that interrupt table
> do not see that interrupt; but those that are do. Thus, there is no pre-
> assigned cores to service interrupts.
>

Hmm...

I guess there could be a hardware "interrupt router", which tries to
figure out which device sent the interrupt and where it needs to go,
but dunno...


AFAIK, traditional was more like:
Hardware sends an interrupt, a chip records this in an internal flag
register;
This asserts an IRQ pin or similar on the CPU, somehow this signaled to
one of several interrupt handlers, which would then access the chip to
figure out which actual IRQ had generated the interrupt.


Well, or my scheme, where a 16-bit ID is used, 8 bits of which were left
for the device to signal which device it was, say:
Cnxx:
n: Core that interrupt is directed to
Typically 0 for HW, maps to Core 1.
xx: Magic number to categorize IRQ or self-identify, 00..FF.
With Dnxx having been left for IPI.


Say:
0xxx-7xxx: Non-interrupt signals.
8nxx: Fault
9nxx: Reserved
Anxx: TLB
Bnxx: Reserved
Cnxx: Interrupt (Hardware)
Dnxx: Interrupt (Inter-Processor)
Enxx: Syscall
Fnxx: CPU Internal Interrupt (RTE and similar)

There are another 48 bits combined with this, generally an address
associated with the fault in question (such as the faulted memory
address, or where the miss occurred for a TLB Miss).
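
A small helper pair for that layout, as I read the fields above
(hypothetical, just to pin down the bit positions):

#include <stdint.h>

/* Category nibble, core nibble, 8-bit ID byte. E.g.
   make_code(0xC, 0, 0x23) == 0xC023: a HW interrupt, ID 0x23,
   directed at the default core. */
static inline uint16_t make_code(unsigned cat, unsigned core, unsigned id)
{
    return (uint16_t)(((cat & 0xFu) << 12) |
                      ((core & 0xFu) << 8) | (id & 0xFFu));
}

static inline unsigned code_category(uint16_t c) { return (c >> 12) & 0xFu; }
static inline unsigned code_core(uint16_t c)     { return (c >> 8) & 0xFu; }
static inline unsigned code_id(uint16_t c)       { return c & 0xFFu; }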


>> The most fancy part of my interrupt mechanism, is that one can encode
>> the ID of a core into the value passed to a "TRAPA", and it will
>> redirect the interrupt to that specific core.
>
>
>> But, this mechanism currently has the limitations of a 4-bit field, so
>> going beyond ~ 15 cores is going to require a nesting scheme and
>> bouncing IPI's across multiple cores.
>
> Danger Will Robinson, Danger !!
>

The status code was 16 bits (the 64-bit value being split between a
48-bit address and a 16-bit status code). Can't really make the field
bigger short of redesigning some things.


>> Though, if needed, I could tweak the format slightly in this case, and
>> maybe expand the Core-ID for IPI's to 8-bits, albeit limiting it to 16
>> unique IPI interrupt types.
>
> I have 512 unique interrupts per priority level. There are 64 priority
> levels.

As part of the design, there were 4 priority levels.
At present, this is more interpreted as two levels though, functioning
more like CLI/STI on x86.

Though, this part of the design was also carried over from SuperH...
The general layout of the SR register, as can be noted, is fairly
similar to the layout used in SuperH (the bits which indicate the
location of User/Supervisor, whether it is in an ISR, and the SR.S /
SR.T and interrupt bits, etc, were mostly all carried over from SuperH).


The design of the interrupt mechanism is similar as well, just I moved
some stuff from MMIO to CRs, and made some design simplifications (vs SH-4).

Say, collapsing down the VBR-relative entry points to be 8 bytes apart,
rather than some more ad-hoc offsets. I am guessing they had originally
hard-coded the VBR offsets to match the layout of their Boot ROM or
something. In this case, 8 bytes is enough to encode a branch to
wherever the entry point is.
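
As a trivial sketch of what the fixed stride buys (names mine):

#include <stdint.h>

/* With 8-byte spacing the dispatch address is a shift and an add;
   each slot then just holds a branch to the real handler. */
static inline uint64_t vbr_entry(uint64_t vbr, unsigned vector)
{
    return vbr + ((uint64_t)vector << 3);
}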

...

Chris M. Thomasson

unread,
Jan 22, 2024, 5:21:55 PM
to
On 1/21/2024 6:33 PM, Chris M. Thomasson wrote:
> On 1/21/2024 6:31 PM, Chris M. Thomasson wrote:
>> On 1/21/2024 6:07 PM, MitchAlsup1 wrote:
>>> Chris M. Thomasson wrote:
>>>
>>>> On 1/21/2024 12:58 PM, MitchAlsup1 wrote:
>>>
>>>>> Detecting ABA requires one to monitor addresses not data values.
>>>
>>>> Not 100% true.
>>>
>>> IBM's original ABA problem was encountered when a background task
>>> (once a week or once a month) was swapped out to disk the instruction
>>> prior to CAS, and when it came back the data comparison register
>>> matched the memory data, but the value to be swapped in had no
>>> relationship with the current linked list structure. Machine crashed.
>>>
>>> Without knowing the address, how can this particular problem be
>>> rectified ??
>>
>> The version counter wrt a double wide compare and swap where:
>>
>> struct dwcas_anchor
>> {
>>      word* next;
^^^^^^^^^^^

Actually, I should say head here wrt the name of dwcas_anchor::next.
Sorry for any confusion. The head of the list would be in the main anchor.

Chris M. Thomasson

unread,
Jan 22, 2024, 5:28:48 PM
to
> [...]

Fwiw, have you ever taken a look at Kegs32?

https://www.emaculation.com/kegs32.htm

Pretty nice! :^)

EricP

unread,
Jan 22, 2024, 6:02:23 PM
to
MitchAlsup1 wrote:
> BGB wrote:
>
>> On 1/22/2024 10:09 AM, Scott Lurndal wrote:
>>> mitch...@aol.com (MitchAlsup1) writes:
>
>
>> In my case, the use of Soft TLB is not strictly required, as the OS
>> may opt-in to use a hardware page-walker "if it exists", with TLB Miss
>> interrupts mostly happening if no hardware page walker exists (or if
>> there is not a valid page in the page table).
>
> Has anyone done a SW refill TLB implementation that has both Hypervisor
> and Supervisor page <nested> translations ??
>
> This seems to me a bad idea as HV would end up having to manipulate
> GuestOS mappings {Because you cannot allow GuestOS to see HV mappings}.

I actually pondered something like this to eliminate the two-level table
walk in virtual machines. I was thinking that the HV might propagate its
PTE entries into the GuestOS PTE entries, then mark them (somehow)
so they trap to the HV if GuestOS tries to look at them.
But it got complicated and never really went anywhere.

One accomplishes the same effect by caching the interior PTE nodes
for each of the HV and GuestOS tables separately on the downward walk,
and holding the combined nested table mapping in the TLB.
The bottom-up table walkers on each interior PTE cache should
eliminate 98% of the PTE reads with none of the headaches.
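
A rough software model of that short-circuiting for a single table, all
names assumed (the nested HV/GuestOS case would keep one such interior
cache per table):

#include <stdint.h>
#include <stdbool.h>

#define LEVELS 5  /* assumed 5-level table; level 1 is the leaf level */

/* hypothetical per-level cache of interior page-table-page addresses */
extern bool     ptc_lookup(int level, uint64_t va, uint64_t *pt_pa);
extern void     ptc_insert(int level, uint64_t va, uint64_t pt_pa);
extern uint64_t read_pte(uint64_t pt_pa, uint64_t va, int level);
extern uint64_t pte_to_pa(uint64_t pte);

uint64_t walk(uint64_t root_pa, uint64_t va)
{
    uint64_t pt_pa = root_pa;
    int level = LEVELS;

    /* short-circuit: resume at the deepest cached page-table page */
    for (int l = 1; l <= LEVELS; l++) {
        uint64_t pa;
        if (ptc_lookup(l, va, &pa)) { level = l; pt_pa = pa; break; }
    }

    for (;;) {
        uint64_t pte = read_pte(pt_pa, va, level);  /* one memory read */
        if (level == 1)
            return pte;                 /* leaf PTE: install in the TLB */
        pt_pa = pte_to_pa(pte);
        ptc_insert(level - 1, va, pt_pa);  /* cache the interior node */
        level--;
    }
}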

> {{Aside:: At one time I was enamored with SW TLB refill and one could
> reduce TLB refill penalty by allocating a "big enough" secondary hashed
> TLB (1MB+). When HV + GuestOS came about, I saw the futility of it all}}

I also wondered if a hashed/inverted page table could help here.
But that also went nowhere. The separate bottom-up walkers looked best.



MitchAlsup1

unread,
Jan 22, 2024, 7:55:43 PM
to
EricP wrote:

> MitchAlsup1 wrote:
>> BGB wrote:
>>
>>> On 1/22/2024 10:09 AM, Scott Lurndal wrote:
>>>> mitch...@aol.com (MitchAlsup1) writes:
>>
>>
>>> In my case, the use of Soft TLB is not strictly required, as the OS
>>> may opt-in to use a hardware page-walker "if it exists", with TLB Miss
>>> interrupts mostly happening if no hardware page walker exists (or if
>>> there is not a valid page in the page table).
>>
>> Has anyone done a SW refill TLB implementation that has both Hypervisor
>> and Supervisor page <nested> translations ??
>>
>> This seems to me a bad idea as HV would end up having to manipulate
>> GuestOS mappings {Because you cannot allow GuestOS to see HV mappings}.

> I actually pondered something like this to eliminate the two-level table
> walk in virtual machines. I was thinking that the HV might propagate its
> PTE entries into the GuestOS PTE entries, then mark them (somehow)
> so they trap to the HV if GuestOS tries to look at them.
> But it got complicated and never really went anywhere.

> One accomplishes the same effect by caching the interior PTE nodes
> for each of the HV and GuestOS tables separately on the downward walk,
> and holding the combined nested table mapping in the TLB.
> The bottom-up table walkers on each interior PTE cache should
> eliminate 98% of the PTE reads with none of the headaches.

I call these things:: TableWalk Accelerators.

Given CAMs at your access, one can cache the outer layers and short
circuit most of the MMU accesses--such that you don't simply read the
Accelerator RAM 25 times (two 5-level tables), you CAM down both
GuestOS and HV tables so you only walk the parts not in your CAM. {And
then put them in your CAM.} A density trick is for each CAM to have
access to a whole cache line of PTEs (8 in my case).

>> {{Aside:: At one time I was enamored with SW TLB refill and one could
>> reduce TLB refill penalty by allocating a "big enough" secondary hashed
>> TLB (1MB+). When HV + GuestOS came about, I saw the futility of it all}}

> I also wondered if a hashed/inverted page table could help here.
> But that also went nowhere. The separate bottom-up walkers looked best.

Best I could do was two tables, one mapping application to GuestPA, the
other mapping GuestPA to RealPA. If the former missed, GuestOS fixed its
own table; if the latter, HV fixed its own table.
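
As a sketch of that two-table arrangement (function names are assumed,
not from any actual My 66000 documentation):

#include <stdint.h>
#include <stdbool.h>

extern bool guest_translate(uint64_t va, uint64_t *gpa);  /* GuestOS table */
extern bool host_translate(uint64_t gpa, uint64_t *rpa);  /* HV table */
extern void fault_to_guest(uint64_t va);
extern void fault_to_hv(uint64_t gpa);

bool nested_translate(uint64_t va, uint64_t *rpa)
{
    uint64_t gpa;
    if (!guest_translate(va, &gpa)) { fault_to_guest(va); return false; }
    if (!host_translate(gpa, rpa))  { fault_to_hv(gpa);   return false; }
    return true;   /* combined VA->RealPA mapping can now go in the TLB */
}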

EricP

unread,
Jan 23, 2024, 2:34:17 PM
to
MitchAlsup1 wrote:
> EricP wrote:
>
>> One accomplishes the same effect by caching the interior PTE nodes
>> for each of the HV and GuestOS tables separately on the downward walk,
>> and holding the combined nested table mapping in the TLB.
>> The bottom-up table walkers on each interior PTE cache should
>> eliminate 98% of the PTE reads with none of the headaches.
>
> I call these things:: TableWalk Accelerators.
>
> Given CAMs at your access, one can cache the outer layers and short
> circuit most of the MMU accesses--such that you don't simply read the
> Accelerator RAM 25 times (two 5-level tables), you CAM down both
> GuestOS and HV tables so you only walk the parts not in your CAM. {And
> then put them in your CAM.} A density trick is for each CAM to have
> access to a whole cache line of PTEs (8 in my case).

An idea I had here was to allow the OS more explicit control
over invalidation of the interior-node caches.

On x86/x64 the interior cache invalidation had to be backwards compatible,
so the INVLPG instruction has to guess what besides the main TLB needs to be
invalidated, and it has to do so in a conservative (i.e. paranoid) manner.
So it tosses these interior PTEs just in case, which means they
have to be reloaded on the next TLB miss.

The OS knows which paging levels it is recycling memory for and
can provide finer-grained control for these TLB invalidates.
The INVLPG and INVPCID instructions need a control bit mask allowing the OS
to invalidate just the TLB levels it is changing for a virtual address.
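
A hypothetical flavor of what the software side of that could look like
(invlpg_masked() and the level masks are invented for illustration; no
shipping x86 instruction takes such a mask today):

#include <stdint.h>

#define LVL_PTE  (1u << 0)   /* 4K leaf level */
#define LVL_PDE  (1u << 1)   /* 2M level */
#define LVL_PDPE (1u << 2)   /* 1G level */
/* ... up through the root level */

extern void invlpg_masked(uint64_t va, uint32_t level_mask);

/* e.g. after recycling a page-table page at the PDE level: */
static void example(uint64_t va)
{
    invlpg_masked(va, LVL_PDE);   /* leave the other levels' entries alone */
}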

And for OS debugging purposes, all these HW TLB tables need to be readable
and writable by some means (as control registers or whatever).
Because when something craps out, what's in memory may not be the same
as what was loaded into HW some time ago. A debugger should be able to
look into and manipulate these HW structures.


MitchAlsup1

unread,
Jan 23, 2024, 4:10:51 PM
to
EricP wrote:

> MitchAlsup1 wrote:
>> EricP wrote:
>>
>>> One accomplishes the same effect by caching the interior PTE nodes
>>> for each of the HV and GuestOS tables separately on the downward walk,
>>> and holding the combined nested table mapping in the TLB.
>>> The bottom-up table walkers on each interior PTE cache should
>>> eliminate 98% of the PTE reads with none of the headaches.
>>
>> I call these things:: TableWalk Accelerators.
>>
>> Given CAMs at your access, one can cache the outer layers and short
>> circuit most of the MMU accesses--such that you don't simply read the
>> Accelerator RAM 25 times (two 5-level tables), you CAM down both
>> GuestOS and HV tables so you only walk the parts not in your CAM. {And
>> then put them in your CAM.} A density trick is for each CAM to have
>> access to a whole cache line of PTEs (8 in my case).

> An idea I had here was to allow the OS more explicit control
> over invalidation of the interior-node caches.

The interior nodes, stored in the CAM, retain their physical address, and
are snooped, so no invalidation is required. ANY write to them is seen and
the entry invalidates itself.

> On x86/x64 the interior cache invalidation had to be backwards compatible,
> so the INVLPG instruction has to guess what besides the main TLB needs to be
> invalidated, and it has to do so in a conservative (i.e. paranoid) manner.
> So it tosses these interior PTEs just in case, which means they
> have to be reloaded on the next TLB miss.

> The OS knows which paging levels it is recycling memory for and
> can provide finer-grained control for these TLB invalidates.
> The INVLPG and INVPCID instructions need a control bit mask allowing the OS
> to invalidate just the TLB levels it is changing for a virtual address.

OS or HV does not need to bother in My 66000.

> And for OS debugging purposes, all these HW TLB tables need to be readable
> and writable by some means (as control registers or whatever).
> Because when something craps out, what's in memory may not be the same
> as what was loaded into HW some time ago. A debugger should be able to
> look into and manipulate these HW structures.

All control registers, including the TLB CAMs, are accessible via MMI/O
accesses. So a remote core can decide what a crashed core was doing at
the instant of the crash.

EricP

unread,
Jan 25, 2024, 9:47:41 AM
to
MitchAlsup1 wrote:
> EricP wrote:
>
>> MitchAlsup1 wrote:
>>> EricP wrote:
>>>
>>>> One accomplishes the same effect by caching the interior PTE nodes
>>>> for each of the HV and GuestOS tables separately on the downward walk,
>>>> and holding the combined nested table mapping in the TLB.
>>>> The bottom-up table walkers on each interior PTE cache should
>>>> eliminate 98% of the PTE reads with none of the headaches.
>>>
>>> I call these things:: TableWalk Accelerators.
>>>
>>> Given CAMs at your access, one can cache the outer layers and short
>>> circuit most of the MMU accesses--such that you don't simply read the
>>> Accelerator RAM 25 times (two 5-level tables), you CAM down both
>>> GuestOS and HV tables so you only walk the parts not in your CAM. {And
>>> then put them in your CAM.} A density trick is for each CAM to have
>>> access to a whole cache line of PTEs (8 in my case).
>
>> An idea I had here was to allow the OS more explicit control
>> over invalidation of the interior-node caches.
>
> The interior nodes, stored in the CAM, retain their physical address, and
> are snooped, so no invalidation is required. ANY write to them is seen and
> the entry invalidates itself.

On My66000, yes; but other cores don't have automatically coherent TLBs.
This feature is intended for that general rabble.

Just to play devil's advocate...

To snoop page table updates, the My66000 TLB would need a large CAM with
all the physical addresses of the PTEs' source cache lines parallel to the
virtual and ASID CAMs, and route the cache line invalidates through it.

While the virtual index CAM's are separated in different banks,
one for each page table level, the P.A. CAM is for all entries in all banks.
This extra P.A. CAM will have a lot of entries and therefore be slow.

Also routing the Invalidate messages through the TLB could slow down all
their ACK messages even though there is a very low probability of a hit
because page tables update relatively infrequently.

Also the L2-TLB's, called the STLB for Second-level TLB by Intel,
are set assoc., and would have to be virtually indexed and virtually
tagged with both VA and ASID plus table level to select address mask.
On Skylake the STLB for 4k/2M pages is 128-rows*12-way, 1G is 4-rows*4-way.

How can My66000 look up STLB entries by invalidate physical line address?
It would have to scan all 128 rows for each message.

MitchAlsup1

unread,
Jan 25, 2024, 12:07:32 PM
to
Scott Lurndal wrote:

> mitch...@aol.com (MitchAlsup1) writes:
>>Scott Lurndal wrote:
>>

>>> Or restricted to a specific set of cores (i.e. those currently
>>> owned by the target guest).
>>
>>Even that gets tricky when you (or the OS) virtualizes cores.

> Oh, indeed. It's helpful to have good hardware support. The
> ARM GIC, for example, helps eliminate hypervisor interaction
> during normal guest interrupt handling (aside from scheduling the
> guest on a host core).


>>
>>In my case, the interrupt controller merely sets bits in the interrupt
>>table, the watching cores watch for changes to its pending interrupt
>>register (64-bits). Said messages come up from PCIe as MSI-X messages,

> The interrupt space for MSI-X messages is 32-bits. Implementations
> may support fewer than 2**32 interrupts - ours support 2**24 distinct
> interrupt vectors.

My 66000 supports 2^16 tables of 2^15 distinct interrupts (non vectored)
per table.

>>and are directed to the interrupt controller over in the Memory Controller
>>(L3).
>>
>>> Dealing with inter-processor interrupts in a multicore guest can also
>>> be tricky;
>>
Core sends an MSI-X message to the interrupt controller and the rest
happens no differently than a device interrupt.

> Not necessarily, particularly if the guest isn't resident on any
> core at the time the interrupt is received.

When an interrupt is registered (recognized as raised and enabled)
and the receiving GuestOS is not running on any core, the interrupt
remains pending until some core context switches to that GuestOS.

MitchAlsup1

unread,
Jan 25, 2024, 12:18:52 PM
to
Yes, .....

> While the virtual index CAM's are separated in different banks,
> one for each page table level, the P.A. CAM is for all entries in all banks.
> This extra P.A. CAM will have a lot of entries and therefore be slow.

That is the TWA.

> Also routing the Invalidate messages through the TLB could slow down all
> their ACK messages even though there is a very low probability of a hit
> because page tables update relatively infrequently.

TLBs don't ACK; they self-invalidate. And they can be performing a translation
while self-invalidating.

> Also the L2-TLB's, called the STLB for Second-level TLB by Intel,
> are set assoc., and would have to be virtually indexed and virtually
> tagged with both VA and ASID plus table level to select address mask.
> On Skylake the STLB for 4k/2M pages is 128-rows*12-way, 1G is 4-rows*4-way.

All TLB walks are performed with RealPA.
All Snoops are performed with RealPA

> How can My66000 look up STLB entries by invalidate physical line address?
> It would have to scan all 128 rows for each message.

It is not structured like Intel L2 TLB.

Scott Lurndal

unread,
Jan 25, 2024, 12:48:05 PM
to
mitch...@aol.com (MitchAlsup1) writes:

>> Not necessarily, particularly if the guest isn't resident on any
>> core at the time the interrupt is received.
>
>When an interrupt is registered (recognized as raised and enabled)
>and the receiving GuestOS is not running on any core, the interrupt
>remains pending until some core context switches to that GuestOS.

It is useful to notify the hypervisor of that, so that it can
schedule the guest.

EricP

unread,
Jan 26, 2024, 10:48:54 AM
to
No, not the Table Walk Accelerator. I'm thinking the PA CAM would
only need to be accessed for cache line invalidates. However it would be
very inconvenient if the TLB CAMs had faster access time for virtual
address lookups than for physical address lookups, so the access time
would be the longer of the two, that being PA.

Basically I'm suggesting the big PA CAM slows down VA translates
and therefore possibly all memory accesses.

>> Also routing the Invalidate messages through the TLB could slow down all
>> their ACK messages even though there is a very low probability of a hit
>> because page tables update relatively infrequently.
>
> TLBs don't ACK; they self-invalidate. And they can be performing a
> translation
> while self-invalidating.

Hmmm... Danger Will Robinson. Most OS page table management depends on
synchronizing after the shootdowns complete on all affected cores.

The basic safe sequence is:
- acquire page table mutex
- modify PTE in memory for a VA
- issue IPI's with VA to all cores that might have a copy in TLB
- invalidate local TLB for VA
- wait for IPI ACK's from remote cores
- release mutex

If it doesn't wait for shootdown ACKs then it might be possible for a
stale PTE copy to exist and be used after the mutex is released.
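
A minimal C11 sketch of that sequence (lock(), ipi_send_shootdown(), and
tlb_invalidate_local() are assumed primitives, not from any particular OS;
real kernels would pass the VA through a per-CPU mailbox):

#include <stdatomic.h>
#include <stdint.h>

extern void lock(void *m);
extern void unlock(void *m);
extern void *pt_mutex;
extern void write_pte(uint64_t *pte, uint64_t val);
extern void tlb_invalidate_local(uint64_t va);
extern int  ipi_send_shootdown(uint64_t va);  /* returns # cores signaled */

static atomic_int acks;

void update_mapping(uint64_t *pte, uint64_t newval, uint64_t va)
{
    lock(pt_mutex);                        /* acquire page table mutex */
    write_pte(pte, newval);                /* modify PTE in memory */
    atomic_store(&acks, 0);
    int waiting = ipi_send_shootdown(va);  /* IPI cores that may hold it */
    tlb_invalidate_local(va);              /* invalidate local TLB */
    while (atomic_load(&acks) < waiting)   /* wait for remote ACKs */
        ;
    unlock(pt_mutex);                      /* release mutex */
}

void shootdown_ipi_handler(uint64_t va)    /* runs on each remote core */
{
    tlb_invalidate_local(va);
    atomic_fetch_add(&acks, 1);            /* the ACK */
}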

>> Also the L2-TLB's, called the STLB for Second-level TLB by Intel,
>> are set assoc., and would have to be virtually indexed and virtually
>> tagged with both VA and ASID plus table level to select address mask.
>> On Skylake the STLB for 4k/2M pages is 128-rows*12-way, 1G is
>> 4-rows*4-way.
>
> All TLB walks are performed with RealPA.
> All Snoops are performed with RealPA
>
>> How can My66000 look up STLB entries by invalidate physical line address?
>> It would have to scan all 128 rows for each message.
>
> It is not structured like Intel L2 TLB.

Are you saying the My66000 STLB is physically indexed, physically tagged?
How's this work for a bottom-up table walk (aka your TWA)?

The only way I know to do a bottom-up walk is to use the portion of the
VA for the higher index level to get the PA of the page table page.
That requires lookup by a masked portion of the VA with the ASID.
The bottom-up walk is done by making the VA mask shorter for each level.
This implies a Virtually Indexed Virtually Tagged PTE cache.

The VIVT PTE cache implies that certain TLB VA or ASID invalidates require
a full STLB table scan which could be up to 128 clocks for that instruction.

MitchAlsup1

unread,
Jan 26, 2024, 4:45:46 PM
to
      VA                              PA
      |                               |
      V                               V
+-----------+  +-----------+-+  +-----------+
|  VA CAM   |->|   PTEs    |v|<-|  PA CAM   |
+-----------+  +-----------+-+  +-----------+

> Basically I'm suggesting the big PA CAM slows down VA translates
> and therefore possibly all memory accesses.

It is a completely independent and concurrent hunk of logic that only has
access to the valid bit and can only clear the valid bit.

>>> Also routing the Invalidate messages through the TLB could slow down all
>>> their ACK messages even though there is a very low probability of a hit
>>> because page tables update relatively infrequently.
>>
>> TLBs don't ACK; they self-invalidate. And they can be performing a
>> translation
>> while self-invalidating.

> Hmmm... Danger Will Robinson. Most OS page table management depends on
> synchronizing after the shootdowns complete on all affected cores.

> The basic safe sequence is:
> - acquire page table mutex
> - modify PTE in memory for a VA

Here you have obtained write permission on the line containing the PTE
being modified. By the time you have obtained write permission, all
other TLBs will have been invalidated.

> - issue IPI's with VA to all cores that might have a copy in TLB
> - invalidate local TLB for VA
> - wait for IPI ACK's from remote cores
> - release mutex

> If it doesn't wait for shootdown ACKs then it might be possible for a
> stale PTE copy to exist and be used after the mutex is released.

Race condition does not exist. By the time the core modifying the PTE
obtains write permission, all the TLBs have been cleared of that entry.

>>> Also the L2-TLB's, called the STLB for Second-level TLB by Intel,
>>> are set assoc., and would have to be virtually indexed and virtually
>>> tagged with both VA and ASID plus table level to select address mask.
>>> On Skylake the STLB for 4k/2M pages is 128-rows*12-way, 1G is
>>> 4-rows*4-way.
>>
>> All TLB walks are performed with RealPA.
>> All Snoops are performed with RealPA
>>
>>> How can My66000 look up STLB entries by invalidate physical line address?
>>> It would have to scan all 128 rows for each message.
>>
>> It is not structured like Intel L2 TLB.

> Are you saying the My66000 STLB is physically indexed, physically tagged?
> How's this work for a bottom-up table walk (aka your TWA)?

L2 TLB is a different structure (SRAM) than TWAs (CAM).
I can't talk about it:: as Ivan used to say:: NYF.

> The only way I know to do a bottom-up walk is to use the portion of the
> VA for the higher index level to get the PA of the page table page.

I <actually> did not say I did a bottom-up walk. I said I short-circuited
the table walks for those layers where I have recent translation PTPs. It's
more like CAM the Root to the last PTP, and every CAM that hits is one layer
you don't have to access.

EricP

unread,
Jan 28, 2024, 1:35:25 PM
to
Of course, but for a 5- or 6-level page table you'd have a CAM bank
for each level to search in parallel. The loading on the PA path
would be the same as if you had a CAM as large as the sum of all entries.

But as you point out below, this shouldn't be an issue because
it has little to do after the lookup.

>> Basically I'm suggesting the big PA CAM slows down VA translates
>> and therefore possibly all memory accesses.
>
> It is a completely independent and concurrent hunk of logic that only has
> access to the valid bit and can only clear the valid bit.

Yes, ok. The lookup on the PA path may take longer, but there is
little to do on a hit so the total path length is shorter;
PA invalidate won't be on the critical path for the MMU.

>>>> Also routing the Invalidate messages through the TLB could slow down
>>>> all
>>>> their ACK messages even though there is a very low probability of a hit
>>>> because page tables update relatively infrequently.
>>>
>>> TLBs don't ACK; they self-invalidate. And they can be performing a
>>> translation
>>> while self-invalidating.
>
>> Hmmm... Danger Will Robinson. Most OS page table management depends on
>> synchronizing after the shootdowns complete on all affected cores.
>
>> The basic safe sequence is:
>> - acquire page table mutex
>> - modify PTE in memory for a VA
>
> Here you have obtained write permission on the line containing the PTE
> being modified. By the time you have obtained write permission, all
> other TLBs will have been invalidated.

It means you can't use the outer cache levels to filter invalidates.
You'd have to pass all invalidate messages from the coherence network
directly to the TLB PA, bypassing the cache hierarchy, to ensure the
TLB entry is removed before the cache ACKs the invalidate message.

>> - issue IPI's with VA to all cores that might have a copy in TLB
>> - invalidate local TLB for VA
>> - wait for IPI ACK's from remote cores
>> - release mutex
>
>> If it doesn't wait for shootdown ACKs then it might be possible for a
>> stale PTE copy to exist and be used after the mutex is released.
>
> Race condition does not exist. By the time the core modifying the PTE
> obtains write permission, all the TLBs have been cleared of that entry.

Ok

>>>> Also the L2-TLB's, called the STLB for Second-level TLB by Intel,
>>>> are set assoc., and would have to be virtually indexed and virtually
>>>> tagged with both VA and ASID plus table level to select address mask.
>>>> On Skylake the STLB for 4k/2M pages is 128-rows*12-way, 1G is
>>>> 4-rows*4-way.
>>>
>>> All TLB walks are performed with RealPA.
>>> All Snoops are performed with RealPA
>>>
>>>> How can My66000 look up STLB entries by invalidate physical line
>>>> address?
>>>> It would have to scan all 128 rows for each message.
>>>
>>> It is not structured like Intel L2 TLB.
>
>> Are you saying the My66000 STLB is physically indexed, physically tagged?
>> How's this work for a bottom-up table walk (aka your TWA)?
>
> L2 TLB is a different structure (SRAM) than TWAs (CAM).
> I can't talk about it:: as Ivan used to say:: NYF.

Rats.

>> The only way I know to do a bottom-up walk is to use the portion of the
>> VA for the higher index level to get the PA of the page table page.
>
> I <actually> did not say I did a bottom-up walk. I said I short-circuited
> the table walks for those layers where I have recent translation PTPs. It's
> more like CAM the Root to the last PTP, and every CAM that hits is one layer
> you don't have to access.

What I call a bottom-up walk can be performed in parallel, serially,
or a bit of both, across the banks for each page table level.

I'd have a VA TLB lookup in parallel for page levels 1, 2 and 3 (4K, 2M, 1G),
and if all three miss then do sequential lookups for levels 4, 5, 6.
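
In rough C terms (tlb_probe() is assumed; the first loop would be
concurrent CAM banks in hardware, not a serial loop):

#include <stdint.h>
#include <stdbool.h>

extern bool tlb_probe(int level, uint64_t va, uint64_t *pa);

bool lookup(uint64_t va, uint64_t *pa)
{
    for (int l = 1; l <= 3; l++)   /* 4K, 2M, 1G banks: parallel in HW */
        if (tlb_probe(l, va, pa)) return true;
    for (int l = 4; l <= 6; l++)   /* sequential fallback for levels 4..6 */
        if (tlb_probe(l, va, pa)) return true;
    return false;                  /* full walk / interior caches needed */
}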

MitchAlsup1

unread,
Jan 28, 2024, 4:15:50 PM
to
What you see above is the TLB.
What your paragraph above talks about is what I call the TWA.

> But as you point out below, this shouldn't be an issue because
> it has little to do after the lookup.

Basically only wait for SNOOPs and for TLB misses.
With my exclusive cache, I have to do that anyway. With wider-than-
register accesses I am already in a position where I have BW for these
SNOOPs with little overhead on either channel.