
Memory dependency microbenchmark


Anton Ertl

Nov 3, 2023, 5:35:41 AM
I have written a microbenchmark for measuring how memory dependencies
affect the performance of various microarchitectures. You can find it
along with a description and results on
<http://www.complang.tuwien.ac.at/anton/memdep/>.

You can find the text portion of this page below; there are a few
links on the page that are not produced below.

If you have information on any of the things I don't know (or don't
remember), please let me know about it.

- anton

Memory Dependency Microbenchmark

This microbenchmark tests some aspects of how hardware deals with
dependences through memory. It performs the following loop:

for (i=0; i<250000000; i++) {
  *b = *a+1;
  *d = *c+1;
  *f = *e+1;
  *h = *g+1;
}

The eight parameters of the binary allow you to determine to which
memory location the pointers a b c d e f g h refer, in particular, if
some of them refer to the same memory location or not. Each parameter
corresponds to one pointer, and two pointers refer to the same memory
location iff the corresponding parameters are the same.
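
To illustrate how the parameters might map to pointers, here is a
minimal sketch (an assumption about the setup, not the actual
benchmark source; the array size matches the 8000-byte array
mentioned later in this thread):

long buf[1000];            /* 8000 bytes of backing storage          */
long *a = buf + param_a;   /* each parameter selects an 8-byte slot  */
long *b = buf + param_b;   /* equal parameters => aliasing pointers  */
/* ... and likewise for c, d, e, f, g, h ... */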

This microbenchmark uses pointers that have been in their registers
for a long time, so it does not test speculative alias detection by
the hardware.

I have measured using the following parameter combinations (in the
order a b c d e f g h):

* A 0 1 2 3 4 5 6 7: All computations (statements in the loop body)
are completely independent. Ideally they can be performed as fast
as the resources allow.
* B 0 1 1 2 2 3 3 4: A sequence of 4 dependent computations, but
the next iteration does not depend on the results of the previous
one, so again, the work can be performed as fast as the resources
allow. However, in order to achieve this performance, the loads
of the next iteration have to be started while several
architecturally earlier stores still have to be executed, and,
comparing the results of B to those of A, all measured cores have
more difficulty with that, resulting in slower performance.
* C 0 0 2 2 4 4 6 6: 1-computation recurrences (i.e., the
computation in the next iteration depends on a computation in the
current iteration), four of those. So at most 4 of these
computations (plus the loop overhead) can be performed in
parallel.
* D 0 1 1 0 2 3 3 2: 2-computation recurrences (i.e., two dependent
computations in an iteration, and the first of those in the
current iteration depends on the second one in the previous
iteration), 2 of those. So at most two of these computations
(plus the loop overhead) can be performed in parallel.
* E 0 1 2 3 1 0 3 2: The same data flow as D, but the computations
are arranged differently: Here we first have two independent
computations, and then two computations that depend on the
earlier computations.
* F 0 1 1 2 2 0 3 3: A 3-computation recurrence and a 1-computation
recurrence. In the best case you see the latency of three
computations per iteration.
* G 0 1 1 2 3 3 2 0: The same data flow as F, but the computations
are arranged differently.
* H 0 0 0 0 0 0 0 0: One 4-computation recurrence. These
computations can only be performed sequentially; only the loop
overhead can be performed in parallel with them.

The results for different CPU cores are shown in the following. The
numbers are cycles per computation (statement in the loop body). To
get the cycles per iteration, multiply by 4.

A B C D E F G H microarchitecture CPU
1.17 2.11 1.46 3.28 3.67 4.50 4.50 6.93 K8 Athlon 64 X2 4600+
1.00 1.26 2.00 4.00 4.00 6.01 6.01 8.00 Zen Ryzen 5 1600X
1.00 1.19 2.00 4.00 4.00 6.00 6.01 8.00 Zen2 Ryzen 9 3900X
0.75 1.00 1.00 1.00 1.00 1.00 1.00 1.00 Zen3 Ryzen 7 5800X
1.00 1.52 1.99 3.70 3.66 4.65 4.62 7.42 Sandy Bridge Xeon E3-1220
1.00 1.26 1.61 3.06 3.00 4.50 4.50 6.85 Haswell Core i7-4790K
1.00 1.12 1.43 2.90 2.84 4.03 4.05 5.53 Skylake Core i5-6600K
0.78 0.92 1.01 1.81 1.13 2.00 2.00 2.25 Rocket Lake Xeon W-1370P
0.81 0.92 0.83 0.86 0.81 0.82 0.81 1.01 Tiger Lake Core i5-1135G7
3.25 4.00 3.25 3.75 3.25 3.75 3.50 4.00 Cortex-A53 Amlogic S905
3.25 4.00 3.25 3.75 3.25 3.75 3.50 4.00 Cortex-A55 RK3588
1.25 2.26 2.06 3.95 4.00 6.18 6.18 7.67 Cortex-A72 RK3399
1.50 1.77 1.63 3.00 3.03 4.10 4.07 5.51 Cortex-A76 RK3588
1.03 1.66 1.50 3.00 3.00 4.50 4.50 7.84 IceStorm Apple M1 (variations from 6.0-8.75 for H)
3.01 4.52 3.01 4.01 3.01 4.01 4.01 5.02 U74 JH7100 (RISC-V)

There can be different sophistication levels of CPU cores, both in
terms of dealing with aliases, and in terms of forwarding from stores
to loads. And of course there is the difference between in-order
execution and out-of-order execution.

Aliases

The simplest approach would be to assume that all memory accesses may
refer to the same memory location. In that case we would expect that
the core performs all parameter combinations as badly as H. None of
the measured cores exhibits this behaviour, so obviously they are
more sophisticated than that.

The next approach is to allow architecturally later loads to be
performed at the same time as, or, with OoO, earlier than, stores to
a different address. All tested cores seem to do this.

Finally, there is the issue of what to do when the load is to the
same address as an architecturally preceding store. This brings us to
the next section:

Store to load forwarding

The simplest approach is to wait until the data arrives in the cache
and then load it from there. I dimly remember seeing 15 cycles per
iteration for a loop that incremented a memory location in every
iteration, but none of the cores measured above take that long
(although, for Zen and Zen2 one might wonder).

The next approach is to let the load read from the store buffer if
the data is there (and first wait until it is there). In this case
the whole sequence of store-load has a total latency that is not much
higher than the load latency. It seems that most cores measured here
do that. We see this best in the H results; e.g., a Skylake has 4
cycles of load latency and 1 cycle of add latency, and we see 5.53
cycles of latency for store-load-add, meaning that the store
contributes an average latency of 0.53 cycles. There are some
complications in case the load only partially overlaps the store (I
should put an appropriate chipsncheese link here).

Finally, the core might detect that the data that is loaded is coming
from a store that has not been retired yet, so the physical register
used by the store can be directly renamed into the register of the
load result, and the load does not need to access the store buffer or
cache at all (what is the name of this feature?). As a result, in the
ideal case we see only the 1-cycle latency of the add in case H. In
the measured cores, Zen3 and Tiger Lake exhibit this behaviour fully;
Rocket Lake probably also does that, but either does not succeed in
all cases (the different results between D and E speak for this
theory), or there is an additional source of latency.

I expect that Firestorm (the performance core of the Apple M1) also
has this feature, but unfortunately the cycles performance counter
does not work for Firestorm on Linux 6.4.0-asahi-00571-gda70cd78bc50.

In-order vs. out-of-order execution

With in-order execution (on Cortex A53 and A55, and on the U74), the
loads cannot be executed before architecturally earlier stores, even
if both refer to different memory locations. So even A is relatively
slow on in-order cores. In-order execution also means that B is
almost as slow as H, while with out-of-order execution it can
theoretically be executed as fast as A (in practice it is slightly
slower, but certainly much faster than H).

With OoO, we see much better performance in cases where there are
independent computation chains. Given the size of the buffers in the
various OoO microarchitectures (hundreds of instructions in the
reorder buffer, dozens in schedulers), it is surprising that B is
slower than A given the small size of each iteration (~15
instructions); and even D is slower than E on some
microarchitectures, most notably Rocket Lake.

Measuring your own hardware

You can download the contents of this directory and run the benchmark
with

wget http://www.complang.tuwien.ac.at/anton/memdep/memdep.zip
unzip memdep.zip
cd memdep
make

If you want to do your own parameter combinations, you can run the
binary with

./memdep-`uname -m` a b c d e f g h

where a b c d e f g h are integers and correspond to the pointers in
the loop. If you want to get results like in the table above, run it
like this:

perf stat --log-fd 3 -x, -e $(CYCLES) ./memdep-$(ARCH) a b c d e f g h 3>&1 | awk -F, '{printf("%5.2f\n",$$1/1000000000)}'
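
The line above is taken from the Makefile, so $(CYCLES), $(ARCH), and
$$1 are Makefile variables and escapes. Expanded by hand for an
x86-64 machine on which the cycles event is simply called "cycles"
(both assumptions about your system), the command for combination A
would look roughly like this:

perf stat --log-fd 3 -x, -e cycles ./memdep-x86_64 0 1 2 3 4 5 6 7 3>&1 |
  awk -F, '{printf("%5.2f\n",$1/1000000000)}'

The division by 1000000000 converts the cycle count into cycles per
computation, since the loop performs 250000000 iterations of 4
computations each.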

Future work

Make a good microbenchmark that produces the addresses so late that
the core either has to wait for the addresses or has to speculate on
whether the load accesses the same address as the store or not. Do it
for predictable aliasing and for unpredictable aliasing. I remember a
good article about this predictor (one that actually looked at
Intel's patent), but don't remember the URL; Intel calls this
technique, as implemented in Ice Lake and later P-cores, Fast Store
Forwarding Predictor (FSFP) (but my memory says that the article I
read looked at older microarchitectures that have a similar feature).
AMD describes such a hardware feature under the name predictive store
forwarding (PSF), which they added in Zen3.
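
A hypothetical sketch of what such a kernel could look like (the
names and structure are made up for illustration): the store address
depends on a freshly loaded index, so the core cannot know whether
the following load aliases the store until the index arrives, and
must either wait or speculate.

/* idx[] decides at run time whether the store hits buf[0], which the
   next load reads; aliasing is predictable if idx[] is constant and
   unpredictable if it is (pseudo-)random */
for (i = 0; i < N; i++) {
  long j = idx[i];      /* store address becomes known late */
  buf[j] = x;           /* store to a late-computed address */
  x = buf[0] + 1;       /* may or may not alias the store   */
}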

Related

One thing I remember is that I have done a microbenchmark that was
intended to measure predictive store forwarding, but it (also?)
measured the forward-at-register-level technique described above.

--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7...@googlegroups.com>

EricP

Nov 3, 2023, 1:14:11 PM
I notice that all your 8 references are within the same cache line.
Store to load forwarding might behave differently across lines
and 4kB page boundaries.
Instead of 0..7 you might try 0*64..7*64 and 0*4096..7*4096

extern void memdep(long *a, long *b, long *c, long *d,
                   long *e, long *f, long *g, long *h);

for (i=0; i<1000; i++)
    a[i] = 0;
memdep(a+offsets[0], a+offsets[1], a+offsets[2], a+offsets[3],
       a+offsets[4], a+offsets[5], a+offsets[6], a+offsets[7]);

Chris M. Thomasson

Nov 3, 2023, 6:29:37 PM
On 11/3/2023 2:15 AM, Anton Ertl wrote:
> I have written a microbenchmark for measuring how memory dependencies
> affect the performance of various microarchitectures. You can find it
> along with a description and results on
> <http://www.complang.tuwien.ac.at/anton/memdep/>.
[...]

Is the only arch out there that does not require an explicit memory
barrier for data-dependent loads a DEC Alpha? I think so.

Anton Ertl

Nov 4, 2023, 1:40:39 PM
EricP <ThatWould...@thevillage.com> writes:
>I notice that all your 8 references are within the same cache line.
>Store to load forwarding might behave differently across lines
>and 4kB page boundaries.
>Instead of 0..7 you might try 0*64..7*64 and 0*4096..7*4096

The numbers are in 8-byte granularity, so to get different cache
lines n*8 is good enough (a 64-byte line holds 8 elements), and for
different pages n*512 (4096/8). However, the array is only 8000 bytes
long, so you can use only numbers in the range 0...999. Anyway, I
made another make target "ericp", so when you "make ericp", you get
the following parameter combinations:

X 0 8 16 24 32 40 48 56

Different cache lines, but always the same "bank" in a cache line;
this should hurt K8, maybe also others.

Y 0 9 18 27 36 45 54 63

Different cache lines, and different banks for different accesses.

Z 0 513 994 11 524 989 22 535

All different cache lines and different banks; the second access is on
a different page than the first, and the third is likely on a
different page. Then start with the first page again. These are all
independent accesses.

Results:

X Y Z A B C D E F G H
1.00 1.00 1.00 0.75 1.00 1.00 1.00 1.00 1.00 1.00 1.00 Zen3
1.00 1.00 1.00 0.78 0.91 1.02 1.81 1.13 2.00 2.00 2.25 Rocketlake
2.00 1.17 1.17 1.17 2.03 1.25 3.30 3.69 4.50 4.50 6.91 K8

We indeed see a slowdown on Zen3 and Rocketlake on X Y Z compared to
the dependence-wise equivalent A. My guess is that these CPUs can
only store to one cache line per cycle, and can optimize the stores in
some cases for A. If that is the case, that is a resource thing, not
a failure to recognize independence.

We see a slowdown for X on K8, which is somewhat expected; however,
thinking about it, I wonder: It seems as if the K8 can do only one
load or store per cycle if they are to the same bank, but several to
different banks; it has been too long since I read how it worked. Y
and Z are the same speed as A, showing that the distance between
addresses does not influence the no-alias detection (at least for
these distances).

Is this what you had in mind?

- anton

Anton Ertl

Nov 4, 2023, 1:53:18 PM
I don't know any architecture that requires memory barriers for
single-threaded programs that access just memory, not even Alpha.

You may be thinking of the memory consistency model of Alpha, which is
even weaker than everything else I know of. This is not surprising,
given that a prominent advocacy paper for weak consistency
[adve&gharachorloo95] came out of DEC.

@TechReport{adve&gharachorloo95,
author = {Sarita V. Adve and Kourosh Gharachorloo},
title = {Shared Memory Consistency Models: A Tutorial},
institution = {Digital Western Research Lab},
year = {1995},
type = {WRL Research Report},
number = {95/7},
annote = {Gives an overview of architectural features of
shared-memory computers such as independent memory
banks and per-CPU caches, and how they make the (for
programmers) most natural consistency model hard to
implement, giving examples of programs that can fail
with weaker consistency models. It then discusses
several categories of weaker consistency models and
actual consistency models in these categories, and
which ``safety net'' (e.g., memory barrier
instructions) programmers need to use to work around
the deficiencies of these models. While the authors
recognize that programmers find it difficult to use
these safety nets correctly and efficiently, it
still advocates weaker consistency models, claiming
that sequential consistency is too inefficient, by
outlining an inefficient implementation (which is of
course no proof that no efficient implementation
exists). Still the paper is a good introduction to
the issues involved.}
}

- anton

BGB

Nov 4, 2023, 3:50:28 PM
On 11/4/2023 12:40 PM, Anton Ertl wrote:
> "Chris M. Thomasson" <chris.m.t...@gmail.com> writes:
>> On 11/3/2023 2:15 AM, Anton Ertl wrote:
>>> I have written a microbenchmark for measuring how memory dependencies
>>> affect the performance of various microarchitectures. You can find it
>>> along with a description and results on
>>> <http://www.complang.tuwien.ac.at/anton/memdep/>.
>> [...]
>>
>> Is the only arch out there that does not require an explicit memory
>> barrier for data-dependent loads a DEC Alpha? I think so.
>
> I don't know any architecture that requires memory barriers for
> single-threaded programs that access just memory, not even Alpha.
>
> You may be thinking of the memory consistency model of Alpha, which is
> even weaker than everything else I know of. This is not surprising,
> given that a prominent advocacy paper for weak consistency
> [adve&gharachorloo95] came out of DEC.
>

Hmm, one could maybe save some logic and also make timing a little
easier if they disallowed RAW and WAW (in the same cache line), and made
them undefined behavior...

However, this would suck for software (and would effectively either
mandate strict stack alignment to keep prologs/epilogs working, or
require saving registers in a convoluted order). And, in my case, I can
tell that code steps on these scenarios very often (particularly in
prolog sequences).


This is why I bother with a "somewhat expensive" feature of forwarding
stored cache lines back to loads in cases where one is accessing a
"just accessed" cache line.

Disabling this forwarding (and forcing a stall instead), tends to have a
more noticeable adverse effect on performance (but is often needed to
try to pass timing at higher clock speeds).

Granted, I guess it is possible I could fiddle with the compiler to try
to improve this situation, say:
Only use MOV.X with a 16B alignment;
Stagger even/odd stores in pairs when possible.

Say, as opposed to:
MOV.X R12, (SP, 160)
MOV.X R10, (SP, 144)
MOV.X R8, (SP, 128)
MOV.X R30, (SP, 112)
MOV.X R28, (SP, 96)
MOV.X R26, (SP, 80)
MOV.X R24, (SP, 64)
Say:
MOV.X R10, (SP, 144)
MOV.X R12, (SP, 160)
MOV.X R30, (SP, 112)
MOV.X R8, (SP, 128)
MOV.X R26, (SP, 80)
MOV.X R28, (SP, 96)
MOV.X R24, (SP, 64)

Say, to try to avoid two adjacent stores within the same 32-byte
paired-line (for 64-bit load, one would try to avoid two adjacent stores
within the same 32 bytes).


But, I sort of ended my 75MHz experiment for now, and fell back to
50MHz, where I can more easily afford to have this forwarding (in which
case the above becomes mostly N/A).


As for cache-coherence between cores, this basically still does not
exist yet.

It also seems like it would require a bit more ringbus traffic to pull
off, say:
If a core wants to write to a line, it needs to flag its intention to do so;
If a line was previously fetched for read, but is now being written, the
line would need to be flushed and re-fetched with the new intention;
If a line is being fetched for write, we somehow need to signal it to be
flushed from whatever L1 cache was last holding it;
...



But, the alternative (what I currently have) effectively disallows
traditional forms of (writable) memory sharing between threads (and
"volatile" memory access seems to be pretty slow at present). So, for
multithreaded code, one would need to have the threads work mostly
independently (without any shared mutable structures), and then trigger
an L1 flush when it is done with its work (and the receiving thread also
needs to trigger an L1 flush).

Granted, similar ended up happening with the rasterizer module with
TKRA-GL, where cache-flushing is needed whenever handing off the
framebuffer between the main CPU and the rasterizer module (otherwise,
graphical garbage and incompletely drawn geometry may result); and one
also needs to flush the L1 cache and trigger the rasterizer module to
flush its texture cache whenever uploading a new texture.

And, as-is, texture uploading has ended up being horridly slow. So, even
if I can (sort of) afford to run GLQuake with lightmaps now, dynamic
lightmaps need to be disabled as it is too slow.

Though, an intermediate possibility could be to store lightmaps with
dynamic lights in (only) one of 2 states, with two different textures
(dynamic light ON/Max, dynamic light OFF/Min). However, this would
effectively disallow effects like "slow pulsate" (which could only
alternate between fully-on and fully-off), as well as updates from
dynamic light sources like rockets (where, as-is, firing a rocket with
dynamic lightmaps eats the CPU something hard).

Not entirely sure how something like a Pentium-1 managed all this either.

Granted, faster in this case would still be to do everything with vertex
lighting (as is currently still the default in my GLQuake port), albeit
not as good, and a bit of a hack, as Quake wasn't really designed for
vertex lighting.

EricP

Nov 5, 2023, 10:10:47 AM
It might be that A is occasionally combining the multiple stores
to the same line in the store buffer whereas X Y Z do not.
So maybe A needs 25% fewer cache accesses.

> We see a slowdown for X on K8, which is somewhat expected; however,
> thinking about t, I wonder: It seems as if the K8 can do only one load
> or store per cycle, if they are to the same bank, but several to
> different banks; it has been too longs since a read how it worked. Y
> and Z are the same speed as A, showing that the distance between
> addresses does not influence the no-alias detection (at least for
> these distances).
>
> Is this what you had in mind?
>
> - anton

Yes. My thought was that by targeting the same cache line you
might be triggering alternate mechanisms that cause serialization.

First was that x86-TSO coherence allows a younger load to bypass (execute
before) an older store to a non-overlapping address, otherwise it is serial.
The detection of "same address" could be as high resolution as 8-byte
operand or as low as a cache line. So by targeting separate cache lines
it could allow more load-store bypassing and concurrency.

Also, as you noted, by targeting the same cache line it would serialize
on the same cache bank port, if it has multiple banks.

And I just suggested different pages to disable any "same page"
virtual address translation optimizations (if there are any).



Chris M. Thomasson

Nov 5, 2023, 3:46:43 PM
Iirc, one can release a spinlock using an atomic store on x86, no
LOCK'ED RMW. Btw, have you ever tried to implement hazard pointers on an
x86? It requires an explicit memory barrier.

EricP

Nov 8, 2023, 9:26:49 AM
Sure, because its not a RMW. Its just a store which under x86-TSO
becomes visible after prior stores to the protected section.

This rule seems to impose some particular design complexities on the
order in which a Load-Store Queue and cache can perform store operations.

Say we have two cache lines A and B.

If there is a store to cache line ST [A+0],
then to a different line ST [B+0], then another ST [A+1],
and if the first ST [A+0] hits cache but the second ST [B+0] misses,
then under TSO the third ST [A+1] must appear to stall so that it
does not become visible until after the ST [B+0] has been performed,
even though line A is in the cache.

ST [A+0],r1 <- this hits cache
ST [B+0],r2 <- this misses cache
ST [A+1],r3 <- this waits for B to arrive and store to [B+0] to finish

On core C1, if ST [A+1] was allowed to be performed before [B+0] then an
invalidate msg might get in and transfer ownership of line A to a different
core C2, allowing the new value of [A+1] to be visible at C2 before [B+0].

In order to prevent this under TSO, either LSQ actually stalls ST [A+1],
or it allows it to proceed to the cache but pins line A until the
update to B is done. If it uses the second pinning approach then it
must also deal with all the potential deadlock/livelock possibilities.

And the cache access is pipelined so all of this is asynchronous.
When ST [B+0] misses cache, ST [A+1] might already be in the pipeline.
So even in the simple "stall until older stores done" approach it needs
even more logic to detect this and NAK the following stores back to LSQ,
and later wake them up and replay them when the ST [B+0] is done.

> Btw, have you ever tried to implement hazard pointers on an
> x86? It requires an explicit memory barrier.

That lock-free stuff makes my brain hurt.





MitchAlsup

Nov 9, 2023, 8:42:41 PM
BGB wrote:

> On 11/4/2023 12:40 PM, Anton Ertl wrote:
>> "Chris M. Thomasson" <chris.m.t...@gmail.com> writes:
>>> On 11/3/2023 2:15 AM, Anton Ertl wrote:
>>>> I have written a microbenchmark for measuring how memory dependencies
>>>> affect the performance of various microarchitectures. You can find it
>>>> along with a description and results on
>>>> <http://www.complang.tuwien.ac.at/anton/memdep/>.
>>> [...]
>>>
>>> Is the only arch out there that does not require an explicit memory
>>> barrier for data-dependent loads a DEC Alpha? I think so.
>>
>> I don't know any architecture that requires memory barriers for
>> single-threaded programs that access just memory, not even Alpha.
>>
>> You may be thinking of the memory consistency model of Alpha, which is
>> even weaker than everything else I know of. This is not surprising,
>> given that a prominent advocacy paper for weak consistency
>> [adve&gharachorloo95] came out of DEC.
>>

> Hmm, one could maybe save some logic and also make timing a little
> easier if they disallowed RAW and WAW (in the same cache line), and made
> them undefined behavior...
<
Intractable in practice.
<
What you CAN do is to dynamically solve this problem in HW. By constructing
a memory dependency matrix and not allowing a memory reference to complete
unless it is safe to do so.
<
> However, this would suck for software (and would effectively either
> mandate strict stack alignment to keep prologs/epilogs working, or
> require saving registers in a convoluted order). And, in my case, I can
> tell that code steps on these scenarios very often (particularly in
> prolog sequences).
<
It only sucks when there is a REAL dependency and saves your neck every
time a dependency is not found (mid-to-high 90%-ile).
<
> This is why I bother with a "somewhat expensive" feature of forwarding
> stored cache lines back to load in cases where one is accessing a "just
> accessed" cache line.
<
I use a MDM (above) attached to a temporal cache.
<
When an instruction is inserted in the MDM/TC==CC, it is made dependent on all
instructions that are already in the CC that have not completed, and is
installed in the Unknown Address state.
<
When an address is AGENed, the address is sent to the data cache and a portion
is sent to the CC and that portion CAMs against all the other addresses. The
result of the comparison is either "Is the same line" or "Can't be the same line"
and the state changes to Known Address.
<
While the compares are taking place, the MDM is relaxed. Any entry that "can't
be the same as" is removed as a dependency. When all dependencies have been
removed, the instruction can complete.
<
If either CC or DC returns with data, the state advances to Known Data. Stores
with Known data are allowed to complete into CC. Modified CC data migrates back
to DC when the ST instruction becomes retireable.
<
At this point the ST has completed as seen by the CPU, but has not completed
as seen by "interested Observers", and is subject to prediction repair, AGEN
replay, and those rare situations where one changes the memory ordering model
{ATOMICs, MMI/O, Config space}. This is the key point--the CPU can think "it's
done" while the external world can think "it has not happened yet".
<
Thus, memory references that alias on a cache line basis are performed in order,
while those that are not run independently.
<
The conditional (temporal) cache holds the address and <at least a portion> of
the associated cache line. Any reference can hit on that data in CC even if it
is port or bank blocked in the DC. CC hits a bit over 50% of the time,
effectively reducing cache port/bank conflicts for lines that have been touched
within the lifetime of the temporal cache.
<
> Disabling this forwarding (and forcing a stall instead), tends to have a
> more noticeable adverse effect on performance (but is often needed to
> try to pass timing at higher clock speeds).

> Granted, I guess it is possible I could fiddle with the compiler to try
> to improve this situation, say:
> Only use MOV.X with a 16B alignment;
> Stagger even/odd stores in pairs when possible.

> Say, as opposed to:
> MOV.X R12, (SP, 160)
> MOV.X R10, (SP, 144)
> MOV.X R8, (SP, 128)
> MOV.X R30, (SP, 112)
> MOV.X R28, (SP, 96)
> MOV.X R26, (SP, 80)
> MOV.X R24, (SP, 64)
> Say:
> MOV.X R10, (SP, 144)
> MOV.X R12, (SP, 160)
> MOV.X R30, (SP, 112)
> MOV.X R8, (SP, 128)
> MOV.X R26, (SP, 80)
> MOV.X R28, (SP, 96)
> MOV.X R24, (SP, 64)
<
I will use this as an example as to Why you want save/restore instructions
in ISA::
a) so the compiler does not need to deal with ordering problems
b) so fewer instructions are produced.

> Say, to try to avoid two adjacent stores within the same 32-byte
> paired-line (for 64-bit load, one would try to avoid two adjacent stores
> within the same 32 bytes).
<
Solved by CC in my case.
<
>

MitchAlsup

Nov 9, 2023, 8:42:41 PM
If you have an MDM+TC == CC, the CPU can perform the ST into CC where it
awaits "ordering" while the external world is left believing it has not
started yet {This is 1991 technology}. CC can effectively eliminate ST
ordering stalls seen from the CPU while preserving all of the TSO-ness
the external observers need.
<
> or it allows it to proceed to the cache but pins line A until the
> update to B is done. If it uses the second pinning approach then it
> must also deal with all the potential deadlock/livelock possibilities.
<
In a conditional Cache, every instruction has (at least a portion) of
its Data Cache associated line. So every ST has a place to deposit its
data; and that place can be subject to backup and cancellation (based on
external stuff happening).
<
After the ST reaches the complete state (ready to retire), CC data is
migrated to DC data as porting and banking permit.

MitchAlsup

Nov 9, 2023, 8:46:37 PM
Anton Ertl wrote:

> I have written a microbenchmark for measuring how memory dependencies
> affect the performance of various microarchitectures.
<
Absolutely Brilliant.
<
Well Done, and Thanks.

BGB

Nov 9, 2023, 11:13:21 PM
Probably true.

My options were one of:
Forward results from an in-progress Store back to a following Load/Store
(more expensive, but faster);
Stall the pipeline until the prior Store can complete (cheaper but slower).


Granted, the "not bother" option could be cheaper still, but carries an
unreasonable level of risk (and in practice would likely mean that any
store through a free pointer followed by another memory access would end
up needing to use NOP padding or similar).

Or, handling it in the CPU, by generating an interlock stall whenever a
Store is followed by another memory operation. But, this would probably
be the worst possible option in terms of performance...


>> However, this would suck for software (and would effectively either
>> mandate strict stack alignment to keep prologs/epilogs working, or
>> require saving registers in a convoluted order). And, in my case, I
>> can tell that code steps on these scenarios very often (particularly
>> in prolog sequences).
> <
> It only suck when there is a REAL dependency and saves you neck every time
> a dependency is not found (mid-to-high 90%-ile).
> <

OK.
In my pipeline, it is simpler...

Just, the L1 cache sees that the next stage holds the results of a store
to the same cache line, and either forwards the result or generates a
stall until the store completes.

Where, forwarding is faster, but stalling is cheaper.
Still haven't gotten around to this, but it could potentially help
performance with the cheaper cache options (namely, configurations
without the internal forwarding).

But, yeah, the compiler does still need to deal with this sometimes.


>> Say, to try to avoid two adjacent stores within the same 32-byte
>> paired-line (for 64-bit load, one would try to avoid two adjacent
>> stores within the same 32 bytes).
> <
> Solved by CC in my case.

OK.

> <
>>

Chris M. Thomasson

Nov 10, 2023, 3:57:42 PM
Iirc, hazard pointers require a store followed by a load to another
location to be honored. This requires a membar on x86.
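
A minimal C11 sketch of that store-then-load pattern (the names and
structure are illustrative assumptions, not any particular library's
API):

#include <stdatomic.h>

extern _Atomic(void *) shared;   /* pointer being protected           */
extern _Atomic(void *) hazard;   /* this thread's hazard-pointer slot */

void *hp_protect(void)
{
    void *p;
    do {
        p = atomic_load_explicit(&shared, memory_order_relaxed);
        atomic_store_explicit(&hazard, p, memory_order_relaxed);
        /* the store to hazard must be ordered before the re-load of
           shared; on x86 this store->load ordering needs a full
           barrier (MFENCE or a LOCKed RMW) */
        atomic_thread_fence(memory_order_seq_cst);
    } while (p != atomic_load_explicit(&shared, memory_order_relaxed));
    return p;
}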

MitchAlsup

Nov 10, 2023, 6:36:28 PM
It is stuff like this that made My 66000 architecture define changes
to memory order based on several pieces of state (thus no membars)
<
Accesses to ROM are unordered
Accesses to config space are strongly ordered
Accesses to MMI/O space are sequentially consistent
Participating accesses (ATOMIC) are sequentially consistent
everything else is causal.
And the HW tracks this on a per memory reference basis--in effect
all orderings are in effect all the time.
<
Performance guys get what they want,
Lamport guys (atomic) get what they want
Device drivers get what they want on the accesses that need it
..

MitchAlsup

Nov 10, 2023, 6:36:29 PM
Imagine a scenario where you are returning from one function call only
to call another function*. Since EXIT performs all the state reload
(and the RET), and the return IP is read first so the front end can
fetch-decode to feed the machine..... (*) possibly moving registers
around to form the argument list.
<
NOW imagine that the front end encounters a CALL and the instruction
at the target of the CALL is ENTER.
a) the restore process can stop-or-complete
b) the save process can mostly be skipped since memory already contains
the correct bit patterns (My 66000 ABI)
<
Saving cycles a compiler cannot.....

Chris M. Thomasson

Nov 10, 2023, 11:07:26 PM
Nice! Well, there is a way to avoid the explicit membar on x86. It
involves a marriage of RCU and Hazard Pointers.

EricP

Nov 11, 2023, 4:44:21 PM
MDM = Memory Dependency Matrix
TC = Temporal Cache
CC = Conditional Cache

The Conditional Cache sounds similar to the Store Buffers that other
designs refer to but with multiple versions, as you outline below.
This seems to duplicate many existing functions of the Load Store Queue.

I'm thinking it would be simpler to keep everything in one circular
load/store queue and hold store data there after retire until it
can be handed off to the cache. See below.

> <
>> or it allows it to proceed to the cache but pins line A until the
>> update to B is done. If it uses the second pinning approach then it
>> must also deal with all the potential deadlock/livelock possibilities.
> <
> In a conditional Cache, every instructions has (at least a portion) of
> its Data Cache associated line. So every ST has a place to deposit its
> data; and that place can be subject to backup and cancellation (based on
> external stuff happening}.
> <
> After the ST reached the complete state (ready to retire) CC data is
> migrated to DC data as porting and banking permiti.
> <

Yes, but that seems a rather expensive approach.
The problem I have with the CC is that it requires some pretty complex
logic to track multiple versions of cache lines, journal before-image copies,
track their inter-dependencies, and decide when to commit updates.
And much of this duplicates functionality that the LSQ already has to
support store-to-load forwarding and load-store bypass address matching.
All those buffers need free lists and allocators, another dependency matrix,
and CAMs to match addresses to the assigned buffers.

Its not clear to me that all this is significantly better than
a simpler approach.

Instead I was thinking of having a unified LSQ as a single circular buffer
with all the load and store entries in (circular) program order.
Store data is held in the LSQ after the store instruction retires
until it is accepted by the cache. While it is in the LSQ it can still be
forwarded to younger loads to the same address.

LSQ stores are sent to the pipelined cache which indicates back to LSQ
a hit/miss after the pipeline latency.
If it hits then the entry is removed from LSQ.
If it misses then the store data is held in LSQ and the cache blocks further
stores until the miss resolves. However LSQ continues to send future
store addresses to trigger line prefetches until all miss buffers are busy.
If a load misses then all further loads must stop because
load-load bypassing is not allowed under TSO.

When the missed line arrives, cache sends a wakeup signal to LSQ
which restarts sending entries from where it left off.



MitchAlsup

Nov 11, 2023, 6:33:08 PM
a) it is circular and tied intimately to the execution window.
b) it integrates the LDs into the queue {just in case a store has
to be replayed all dependent LDs get replayed too and for solving
memory order problems dynamically.}

>> <
>>> or it allows it to proceed to the cache but pins line A until the
>>> update to B is done. If it uses the second pinning approach then it
>>> must also deal with all the potential deadlock/livelock possibilities.
>> <
>> In a conditional Cache, every instructions has (at least a portion) of
>> its Data Cache associated line. So every ST has a place to deposit its
>> data; and that place can be subject to backup and cancellation (based on
>> external stuff happening}.
>> <
>> After the ST reached the complete state (ready to retire) CC data is
>> migrated to DC data as porting and banking permiti.
>> <

> Yes, but that seems a rather expensive approach.
> The problem I have with the CC is that it requires some pretty complex
> logic to track multiple versions of cache lines, journal before-image copies,
> track their inter-dependencies, and decide when to commit updates.
<
We did not track all that stuff (Mc 88120 circa 1991) what we did was to copy
data from the cache/memory into everybody needing that data, then stores could
overwrite data into the cache into younger accesses; so most of the STs did
not actually write into DC because we knew a younger store would do that for us.
<
> And much of this duplicates functionality that the LSQ already has to
> support store-to-load forwarding and load-store bypass address matching.
<
The CC is the LD/ST queue, but by integrating certain features, most of the
control logic vanishes, and the component "pretty much" manages itself. It
does speak back to the Execution window logic to control the advancement of
the consistent point and the retire point of the window.
<
> All those buffer need free lists and allocators, another dependency matrix,
> CAM's to match addresses to the assigned buffers.
<
Nope:: dedicated storage. If the storage is not available, insert stalls.
<
> Its not clear to me that all this is significantly better than
> a simpler approach.
<
Think of it like a reservation station for a horizon of memory reference
instructions. Instead of CAMing on renamed registers, you CAM on portion
of VA from the CPU side, and on PA for the snooping side. The MDM handles
the memory renaming in a way that can be undone {Branch prediction repair
and Snoop discovering memory order has been violated.}

> Instead I was thinking of having a unified LSQ as a single circular buffer
> with all the load and store entries in (circular) program order.
<
Yep, that is what a CC does. However, once 2 addresses discover "we cannot
refer to the same cache line container" they become independent.
<
> Store data is held in the LSQ after the store instruction retires
<
The ST must deliver its data to DC before the ST can retire. Until the ST
delivers its data to DC, the ST is subject to being replayed upon detection
of memory order violations.
<
> until it is accepted by the cache. While it is in the LSQ it can still be
> forwarded to younger loads to the same address.

> LSQ stores are sent to the pipelined cache which indicates back to LSQ
> a hit/miss after the pipeline latency.
<
And hit data is written into All CC entires waiting for it.
<
> If it hits then the entry is removed from LSQ.
<
Entries are inserted into CC at issue and removed from CC at retire. They
remain replayable until consistent.
<
> If it misses then the store data is held in LSQ and cache blocks further
> stores until the miss resolves. However LSQ continues to send future
<
Mc 88120 kept byte writes in the CC.data so STs could write data into CC
even before data arrives from the memory hierarchy.
<
> store address to trigger line prefetches until all miss buffers are busy.
<
The CC is the Miss Buffer !! and entries that missed but got translated
are selected for external access in much the same way that instructions
are selected from reservation stations.
<
> If a load misses then all further loads must stop because
> load-load bypassing is not allowed until TSO.
<
But the MDM effectively allows multiple LDs to miss and be serviced
independently {except config, MMI/O, ATOMIC) and repair back to TSO
only if an external even requires actual TSO behavior.
<
> When the missed line arrives, cache sends a wakeup signal to LSQ
> which restarts sending entries from where it left off.
<
When miss data returns with the PA, a portion of the PA is used and everybody
who is waiting on the returning data snarfs it up in the same write (into CC)
cycle.

MitchAlsup

Nov 12, 2023, 2:16:33 PM
I watched membar evolve during my time at AMD and decided to make a
machine that never needs to use them--but still gets the right thing
done.
<

Chris M. Thomasson

Nov 12, 2023, 5:01:50 PM
TSO? Oh, wait. I need to go think back about your system. It has been
some years since we conversed about it. Btw, what happened to the Mill?
Anyway, I digress.

Fwiw, imho, it helps to think about proper alignment and boundaries, try
to work with a given architecture, not against it.

A highly relaxed memory model can be beneficial for certain workloads.

Kent Dickey

Nov 12, 2023, 5:33:15 PM
In article <uiri0a$85mp$2...@dont-email.me>,
Chris M. Thomasson <chris.m.t...@gmail.com> wrote:

>A highly relaxed memory model can be beneficial for certain workloads.

I know a lot of people believe that statement to be true. In general, it
is assumed to be true without proof.

I believe that statement to be false. Can you describe some of these
workloads?

Kent

Chris M. Thomasson

Nov 12, 2023, 5:39:16 PM
One example... Basically, think along the lines of RCU. Usually,
read-mostly, write-rather-rarely workloads can _greatly_ benefit from
not using any memory barriers. And this is on rather relaxed systems.
Think SPARC in RMO mode. DEC Alpha is a "special case" wrt RCU.

Chris M. Thomasson

Nov 12, 2023, 5:43:23 PM
On 11/12/2023 2:33 PM, Kent Dickey wrote:
Also, think about converting any sound lock-free algorithm's finely
tuned memory barriers to _all_ sequential consistency... That would ruin
performance right off the bat... Think about it.

MitchAlsup

Nov 12, 2023, 7:21:43 PM
Assuming you are willing to accept the wrong answer fast, rather than the
right answer later. There are very few algorithms with this property.

MitchAlsup

Nov 12, 2023, 7:21:44 PM
Kent Dickey wrote:

> In article <uiri0a$85mp$2...@dont-email.me>,
> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:

>>A highly relaxed memory model can be beneficial for certain workloads.

> I know a lot of people believe that statement to be true. In general, it
> is assumed to be true without proof.
<
In its most general case, relaxed order only provides a performance advantage
when the code is single threaded.
<
> I believe that statement to be false. Can you describe some of these
> workloads?
<
Relaxed memory order fails spectacularly when multiple threads are accessing
data.
<
> Kent

Chris M. Thomasson

Nov 12, 2023, 7:28:24 PM
That does not make any sense to me. Think of a basic mutex. It basically
requires an acquire membar for the lock and a release membar for the
unlock. On SPARC that would be:

acquire = MEMBAR #LoadStore | #LoadLoad

release = MEMBAR #LoadStore | #StoreStore

Okay, fine. However, if I made them sequentially consistent, it would
require a damn #StoreLoad barrier for both acquire and release. This is
not good and should be avoided when possible.

Also, RCU prides itself with not having to use any memory barriers for
its read side. If RCU was forced to use a seq cst, basically LOCKED RMW
or MFENCE on Intel, it would completely ruin its performance.
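
As a concrete illustration of the acquire/release pairing, a minimal
C11 sketch (not tied to any particular mutex implementation):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void lock_acquire(void) {
    /* acquire ordering: MEMBAR #LoadStore | #LoadLoad on SPARC RMO */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                    /* spin */
}

void lock_release(void) {
    /* release ordering: MEMBAR #LoadStore | #StoreStore on SPARC RMO;
       a plain release store suffices, no #StoreLoad / full fence */
    atomic_flag_clear_explicit(&lock, memory_order_release);
}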

Kent Dickey

Nov 13, 2023, 12:44:49 AM
In article <uirqj3$9q9q$1...@dont-email.me>,
You are using the terms with the exact opposite meaning from how I would
understand them in computer architecture.

We'll just assume 3 choices for CPU ordering:

- Sequential consistency (SC). Hardware does everything, there are no
barriers needed, ld/st instructions appear to happen in some
global order.
- Total Store Ordering (TSO) (x86). Stores appear to be done in program
order, but a CPU can peek and see its own local store results
before other CPUs can. Loads appear to be done in some total
program order (not counting hitting its own local stores).
TSO is like SC except there's effectively a store queue, and
stores can finish when placed in the queue, and the queue drains
in FIFO order. Needs no barriers, except for special cases like
Lamport's algorithm (it's easy to avoid barriers).
- Relaxed. Loads and stores are not ordered, users have to put in memory
barriers and hope they did it right.

So, a highly relaxed memory model is the ONLY model which needs barriers.
If you want to get rid of barriers, use a better memory model.
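
For reference, a minimal C11 sketch of the special case mentioned
above (the store-buffering pattern at the heart of Lamport's/Dekker's
algorithm); the names are illustrative only:

#include <stdatomic.h>

atomic_int x, y;   /* both initially 0 */
int r1, r2;

void thread1(void) {
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
}

void thread2(void) {
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
}

/* Under SC the outcome r1 == 0 && r2 == 0 is impossible; under TSO
   (and anything weaker) it is allowed, because each store can sit in
   the store queue while the following load executes. Forbidding it
   requires a store-load barrier after each store -- the one place
   where TSO code needs an explicit barrier. */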

A relaxed ordering SYSTEM says that rather than spending a few thousand
gates getting ordering right in hardware in the CPU, we're going to
require software to put in some very difficult to understand barriers.
And we're going to have a 45 page academic paper using all the greek
alphabet to describe when you need to put in barriers. Literally no one
understands all the rules, so the best bet is put in too many barriers
and wait for someone to nitpick your code and fix it for you.

[I have a theorem: there is no correct non-trivial multithreaded program
on an architecture which requires barriers for correctness.].

A very simple thought exercise shows that even if Sequential Consistency
and/or TSO were slower (and I maintain they are not), a Relaxed Ordering
system WILL be slower than TSO or Sequential for workloads which often
use barriers (instructions tagged with acquire/release are barriers).
In a Relaxed ordering system, the barriers will not be as efficient as
the automatic barriers of TSO/SC (otherwise, why not just do that?), so
if barriers are executed often, performance will be lower than hardware
TSO/SC, even if there is no contention or any work for the barriers to
do. In fact, performance could be significantly lower.

People know this, it's why they keep trying to get rid of barriers in
their code. So get rid of all of them and demand TSO ordering.

Thus, the people trapped in Relaxed Ordering Hell then push weird schemes
on everyone else to try to come up with algorithms which need fewer
barriers. It's crazy.

Relaxed Ordering is a mistake.

Kent

Kent Dickey

Nov 13, 2023, 12:54:35 AM
In article <82b3b3b710652e60...@news.novabbs.com>,
MitchAlsup <mitch...@aol.com> wrote:
>Kent Dickey wrote:
>
>> In article <uiri0a$85mp$2...@dont-email.me>,
>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>
>>>A highly relaxed memory model can be beneficial for certain workloads.
>
>> I know a lot of people believe that statement to be true. In general, it
>> is assumed to be true without proof.
><
>In its most general case, relaxed order only provides a performance advantage
>when the code is single threaded.

I believe a Relaxed Memory model provides a small performance improvement
ONLY to simple in-order CPUs in an MP system (if you're a single CPU,
there's nothing to order).

Relaxed memory ordering provides approximately zero performance improvement
to an OoO CPU, and in fact might actually lower performance (depends on
how barriers are done--if done poorly, it could be a big negative).

Yes, the system designers of the world have said: let's slow down our
fastest most expensive most profitable CPUs, so we can speed up our cheapest
lowest profit CPUs a few percent, and push a ton of work onto software
developers.

It's crazy.

>> I believe that statement to be false. Can you describe some of these
>> workloads?
><
>Relaxed memory order fails spectacularly when multiple threads are accessing
>data.

Probably need to clarify with "accessing modified data".

Kent

Anton Ertl

Nov 13, 2023, 3:50:37 AM
"Chris M. Thomasson" <chris.m.t...@gmail.com> writes:
>Also, think about converting any sound lock-free algorithm's finely
>tuned memory barriers to _all_ sequential consistency... That would ruin
>performance right off the bat... Think about it.

Proof by claim?

I think about several similar instances, where people went for
simple-minded hardware designs and threw the complexity over the wall
to the software people, and claimed that it was for performance; I
call that the "supercomputing attitude", and it may work in areas
where the software crisis has not yet struck[1], but is a bad attitude
in areas like general-purpose computing where it has struck.

1) People thought that they could achieve faster hardware by throwing
the task of scheduling instructions for maximum instruction-level
parallelism over to the compiler people. Several companies (in
particular, Intel, HP, and Transmeta) invested a lot of money into
this dream (and the Mill project relives this dream), but it turned
out that doing the scheduling in hardware is faster.

2) A little earlier, the Alpha designers thought that they could gain
speed by not implementing denormal numbers and by implementing
imprecise exceptions for FP operations, so that it is not possible to
implement denormal numbers through a software fixup, either. For
dealing properly with denormal numbers, you had to insert a trap
barrier right after each FP instruction, and presumably this cost a
lot of performance on early Alpha implementations. However, when I
measured it on the 21264 (released six years after the first Alpha),
the cost was like that of a nop; I guess that the trap barrier was
actually a nop on the 21264, because, as an OoO processor, the 21264
performs precise exceptions without breaking a sweat. And the 21264
is faster than the models where the trap barrier actually does
something. Meanwhile, Mitch Alsup also has posted that he can
implement fast denormal numbers with IIRC 30 extra gates (which is
probably less than what is needed for implementing the trap barrier).

3) The Alpha is a rich source of examples of the supercomputer
attitude: It started out without instructions for accessing 8-bit and
16-bit data in memory. Instead, the idea was that for accessing
memory, you would use instruction sequences, and for accessing I/O
devices, the device was mapped three times or so: In one address range
you performed bytewise access, in another address range 16-bit
accesses, and in the third address range 32-bit and 64-bit accesses;
I/O driver writers had to write or modify their drivers for this
model. The rationale for that was that they required ECC for
permanent storage and that would supposedly require slow RMW accesses
for writing bytes to write-back caches. Now the 21064 and 21164 had a
write-through D-cache. That made it easy to add byte and word
accesses (BWX) in the 21164A (released 1996), but they could have done
it from the start. The 21164A is in no way slower than the 21164; it
has the same IPC and a higher clock rate.

Some people welcome and celebrate the challenges that the
supercomputer attitude poses for software, and justify it with
"performance", but as the examples above show, such claims often turn
out to be false when you actually invest effort into more capable
hardware.

Given that multi-processors come out of supercomputing, it's no
surprise that the supercomputing attitude is particularly strong
there.

But if you look at it from an architecture (i.e., hardware/software
interface) perspective, weak consistency is just bad architecture:
good architecture says what happens to the architectural state when
software performs some instruction. From that perspective sequential
consistency is architecturally best. Weaker consistency models
describe how the architecture does not provide the sequential
consistency guarantees that are so easy to describe; the weaker the
model, the more deviations it has to describe.

[1] The software crisis is that software costs are higher than
hardware costs, and supercomputing with its gigantic hardware costs
notices the software crisis much later than general-purpose computing.

a...@littlepinkcloud.invalid

Nov 13, 2023, 6:49:45 AM
Kent Dickey <ke...@provalid.com> wrote:
> We'll just assume 3 choices for CPU ordering:
>
> - Sequential consistency (SC). Hardware does everything, there are no
> barriers needed, ld/st instructions appear to happen in some
> global order.
> - Total Store Ordering (TSO) (x86). Stores appear to be done in program
> order, but a CPU can peek and see its own local store results
> before other CPUs can. Loads appear to be done in some total
> program order (not counting hitting its own local stores).
> TSO is like SC except there's effectively a store queue, and
> stores can finish when placed in the queue, and the queue drains
> in FIFO order. Needs no barriers, except for special cases like
> Lamport's algorithm (it's easy to avoid barriers).
> - Relaxed. Loads and stores are not ordered, users have to put in memory
> barriers and hope they did it right.
>
> So, a highly relaxed memory model is the ONLY model which needs barriers.

But this isn't true. Real processors aren't anywhere near as wildly
chaotic as this.

The common form of relaxed memory we see today is causally consistent
and multi-copy atomic (and cache coherent). So, all other threads see
stores to a single location in the same order, and you don't get the
extraordinary going-backwards-in-time behaviour of DEC Alpha.

> A relaxed ordering SYSTEM says rather than spending a few thousand
> gates getting ordering right by hardware in the CPU, instead we're
> going to require software to put in some very difficult to
> understand barriers.

I don't think that's really true. The reorderings we see in currently-
produced hardware are, more or less, a subset of the same reorderings
that C compilers perform. Therefore, if you see a confusing hardware
reordering in a multi-threaded C program it may well be (probably is!)
a bug (according to the C standard) *even on a TSO machine*. The only
common counter-example to this is for volatile accesses.

> A very simple thought exercise shows even if Sequential Consistency
> and/or TSO were slower (and I maintain they are not), but even if
> you believe that, a Relaxed Ordering system WILL be slower than TSO
> or Sequential for workloads which often use barriers (instructions
> tagged with acquire/release are barriers). In a Relaxed ordering
> system, the barriers will not be as efficient as the automatic
> barriers of TSO/SC (otherwise, why not just do that?),

Whyever not? They do the same thing.

Andrew.

Stefan Monnier

Nov 13, 2023, 10:39:27 AM
> 1) People thought that they could achieve faster hardware by throwing
> the task of scheduling instructions for maximum instruction-level
> parallelism over to the compiler people. Several companies (in
> particular, Intel, HP, and Transmeta) invested a lot of money into
> this dream (and the Mill project relives this dream), but it turned
> out that doing the scheduling in hardware is faster.

IIRC the main argument for the Mill wasn't that it was going to be
faster but that it would give a better performance per watt by avoiding
the administrative cost of managing those hundreds of reordered
in-flight instructions, without losing too much peak performance.

The fact that performance per watt of in-order ARM cores is not really
lower than that of OOO cores suggests that the Mill wouldn't deliver on
this "promise" either.
Still, I really would like to see how it plays out in practice, instead
of having to guess.


Stefan

EricP

Nov 13, 2023, 1:23:25 PM
Kent Dickey wrote:
> In article <uirqj3$9q9q$1...@dont-email.me>,
> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>> On 11/12/2023 4:20 PM, MitchAlsup wrote:
>>> Chris M. Thomasson wrote:
>>>
>>>> On 11/12/2023 2:33 PM, Kent Dickey wrote:
>>>>> In article <uiri0a$85mp$2...@dont-email.me>,
>>>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>>>
>>>>>> A highly relaxed memory model can be beneficial for certain workloads.
>>>>> I know a lot of people believe that statement to be true. In
>>>>> general, it
>>>>> is assumed to be true without proof.
>>>>>
>>>>> I believe that statement to be false. Can you describe some of these
I suggest something different: the ability to switch between TSO and
relaxed with non-privileged user mode instructions.

Non-concurrent code does not see the relaxed ordering, and should benefit
from extra concurrency in the Load Store Queue and cache that relaxed rules
allow, because the local core always sees its own memory as consistent.
For example, relaxed ordering allows multiple LD and ST to be in
multiple pipelines to multiple cache banks at once without regard
as to the exact order the operations are applied.

This is fine for non concurrently accessed data structures,
either non-shared data areas or shared but guarded by mutexes.

But relaxed is hard for people to reason about for concurrently accessed
lock free data structures. Now these don't just appear out of thin air so
it is reasonable for a program to emit TSO_START and TSO_END instructions.

On the other hand, almost no code is lock-free or ever will be.
So why have all the extra HW logic to support TSO if it's only really
needed for this rare kind of programming?

But there is also a category of memory area that is not covered by the
above rules, one where one core thinks its memory is local and not shared
but in fact it is being accessed concurrently.

If thread T1 (say an app) on core C1 says its memory is relaxed, and calls
a subroutine passing a pointer to a value on T1's stack, and that pointer
is passed to thread T2 (a driver) on core C2 which accesses that memory,
then even if T2 declared itself to be using TSO rules, it would not force
T1 on C1 to obey them.

Where this approach could fail is the kind of laissez-faire sharing done
by many apps, libraries, and OS's behind the scenes in the real world.


MitchAlsup

unread,
Nov 13, 2023, 2:15:13 PM11/13/23
to
Anton Ertl wrote:

> "Chris M. Thomasson" <chris.m.t...@gmail.com> writes:
>>Also, think about converting any sound lock-free algorithm's finely
>>tuned memory barriers to _all_ sequential consistency... That would ruin
>>performance right off the bat... Think about it.

> Proof by claim?

> I think about several similar instances, where people went for
> simple-minded hardware designs and threw the complexity over the wall
> to the software people, and claimed that it was for performance; I
> call that the "supercomputing attitude", and it may work in areas
> where the software crisis has not yet struck[1], but is a bad attitude
> in areas like general-purpose computing where it has struck.

> 1) People thought that they could achieve faster hardware by throwing
> the task of scheduling instructions for maximum instruction-level
> parallelism over to the compiler people. Several companies (in
> particular, Intel, HP, and Transmeta) invested a lot of money into
> this dream (and the Mill project relives this dream), but it turned
> out that doing the scheduling in hardware is faster.
<
Not faster, but easier to do with acceptable HW costs. The pipeline
is 1-3 stages longer, but HW has dynamic information that SW cannot have.
<
> 2) A little earlier, the Alpha designers thought that they could gain
> speed by not implementing denormal numbers and by implementing
> imprecise exceptions for FP operations, so that it is not possible to
> implement denormal numbers through a software fixup, either. For
<
So did I in Mc 88100--just as wrong then as it is now.
<
> dealing properly with denormal numbers, you had to insert a trap
> barrier right after each FP instruction, and presumably this cost a
> lot of performance on early Alpha implementations. However, when I
> measured it on the 21264 (released six years after the first Alpha),
> the cost was like that of a nop; I guess that the trap barrier was
> actually a nop on the 21264, because, as an OoO processor, the 21264
> performs precise exceptions without breaking a sweat. And the 21264
> is faster than the models where the trap barrier actually does
> something. Meanwhile, Mitch Alsup also has posted that he can
> implement fast denormal numbers with IIRC 30 extra gates (which is
> probably less than what is needed for implementing the trap barrier).
<
I recall saying it is about 2% of the gate count of an FMAC unit.

The problem the weak consistency models created comes from the fact that
the weak ordering was universal over all accesses. However, the TLB can
be used to solve that problem, so that each access has its own model and
the HW has to perform with that model, often across a multiplicity of
memory references. For my part, I have 4 memory models, and the CPUs
switch to the appropriate model upon detection, without needing
instructions. So when the first instruction of an ATOMIC event is
detected (at decode), all weaker outstanding requests are allowed to
complete, all of the ATOMIC requests are performed in a sequentially
consistent manner, and afterwards the memory model is weakened again.

MitchAlsup

unread,
Nov 13, 2023, 2:17:03 PM11/13/23
to
The control logic to perform it may be that small, but the state to
maintain it is at least (5×48^2)×1.2 gates.

> I don't think that's really true. The reorderings we see in currently-
> produced hardware are, more or less, a subset of the same reorderings
> that C compilers perform. Therefore, if you see a confusing hardware
> reordering in a multi-threaded C program it may well be (probably is!)
> a bug (according to the C standard) *even on a TSO machine*. The only
> common counter-example to this is for volatile accesses.
<
Andy Glew used to report that surprisingly few instructions were
performed out of order on a GBOoO machine.

MitchAlsup

unread,
Nov 13, 2023, 2:31:05 PM11/13/23
to
I suggest this switch between modes be done without executing any extra
instructions. I do the switches based on the address space of the access
{DRAM, MMI/O, config, ROM} and I also switch to SC when an ATOMIC event
begins.
<
> Non-concurrent code does not see the relaxed ordering, and should benefit
> from extra concurrency in the Load Store Queue and cache that relaxed rules
> allow, because the local core always sees its own memory as consistent.
> For example, relaxed ordering allows multiple LD and ST to be in
> multiple pipelines to multiple cache banks at once without regard
> as to the exact order the operations are applied.
<
I suspect SUN lost significant performance by always running TSO and
it still required barrier instructions.
<
> This is fine for non concurrently accessed data structures,
> either non-shared data areas or shared but guarded by mutexes.

> But relaxed is hard for people to reason about for concurrently accessed
> lock free data structures.
<
It is hard for people who have a completely von Neumann thinking pattern
to reason about these things; it is not hard for someone whose entire
career was spent doing a multiplicity of things concurrently.
<
SW languages (and debuggers, ...) teach people to think:: this happens,
then that happens, then something else happens. HW languages teach people
to think:: "crap, all of this is happening at once, how do I make sense
out of it?"
<
In CPU design (or chip design in general) one is NEVER given the illusion
that a single <vast> state describes the moment. It is a shame that SW
did not follow a similar route.
<
> Now these don't just appear out of thin air so
> it is reasonable for a program to emit TSO_START and TSO_END instructions.

> On the other hand, almost no code is lock-free or ever will be.
> So why have all the extra HW logic to support TSO if its only really
> needed for this rare kind of programming.
<
I made this exact argument to SUN circa 1993.....
<
> But there is also a category of memory area that is not covered by the
> above rules, one where one core thinks its memory is local and not shared
> but in fact it is being accessed concurrently.

> If thread T1 (say an app) on core C1 says its memory is relaxed, and calls
> a subroutine passing a pointer to a value on T1's stack, and that pointer
> is passed to thread T2 (a driver) on core C2 which accesses that memory,
> then even if T2 declared itself to be using TSO rules it would not force
> T1 on C1 obey them.

> Where this approach could fail is the kind of laissez-faire sharing done
> by many apps, libraries, and OS's behind the scenes in the real world.

So, anything written in JavaScript........

Chris M. Thomasson

unread,
Nov 13, 2023, 4:11:26 PM11/13/23
to
Be sure to keep C++ std::atomic in mind... Also, std::memory_order_*

Chris M. Thomasson

unread,
Nov 13, 2023, 4:31:35 PM11/13/23
to
[...]
> I suspect SUN lost significant performance by always running TSO and
> it still required barrier instructions.
[...]

Intel still requires an explicit membar for hazard pointers as-is, and
SPARC in TSO mode still requires a membar for this: SPARC needs a
#StoreLoad barrier for the store-followed-by-a-load-to-another-location
ordering to hold, and Intel needs a LOCK'ed atomic or an MFENCE to
handle it.
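
For reference, a minimal sketch of the store-then-load pattern in
question (a single hazard-pointer slot; the names are illustrative, not
from any particular library):

  #include <atomic>

  struct Node;
  std::atomic<Node*> shared{nullptr};   // pointer being protected
  std::atomic<Node*> hazard{nullptr};   // this thread's hazard-pointer slot

  Node* protect(void) {
      Node* p = shared.load(std::memory_order_relaxed);
      for (;;) {
          hazard.store(p, std::memory_order_relaxed);      // store ...
          // The store above must be globally visible before the re-load
          // below.  TSO does not order store->load, so a full barrier is
          // needed: #StoreLoad on SPARC, MFENCE or a LOCK'ed RMW on x86.
          std::atomic_thread_fence(std::memory_order_seq_cst);
          Node* q = shared.load(std::memory_order_acquire);  // ... then load
          if (q == p)
              return p;        // hazard was published before the re-check
          p = q;
      }
  }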

Chris M. Thomasson

unread,
Nov 13, 2023, 4:36:42 PM11/13/23
to
Huh? So, C++ is crazy for allowing std::memory_order_relaxed to even
exist? I must be misunderstanding your point here. Sorry if I am. ;^o

a...@littlepinkcloud.invalid

unread,
Nov 14, 2023, 5:25:36 AM11/14/23
to
Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> On 11/13/2023 3:49 AM, a...@littlepinkcloud.invalid wrote:
>>
>> I don't think that's really true. The reorderings we see in currently-
>> produced hardware are, more or less, a subset of the same reorderings
>> that C compilers perform. Therefore, if you see a confusing hardware
>> reordering in a multi-threaded C program it may well be (probably is!)
>> a bug (according to the C standard) *even on a TSO machine*. The only
>> common counter-example to this is for volatile accesses.
>
> Be sure to keep C++ std::atomic in mind... Also, std::memory_order_*

Maybe I wasn't clear enough. If you use std::atomic and
std::memory_order_* in such a way that there are no data races, your
concurrent program will be fine on both TSO and relaxed memory
ordering. If you try to fix data races with volatile instead of
std::atomic and std::memory_order_*, that'll mostly fix things on a
TSO machine, but not on a machine with relaxed memory ordering.

(For pedants: Mostly, but not completely, even on TSO, e.g. Dekker's
Algorithm, which needs something stronger.)
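
A minimal sketch of the contrast (the names are illustrative):

  #include <atomic>

  int payload;                       // ordinary data
  std::atomic<bool> ready{false};    // properly synchronized flag

  // Race-free: works on both TSO and relaxed hardware.
  void publish(void) {
      payload = 42;
      ready.store(true, std::memory_order_release);
  }

  bool consume(int* out) {
      if (!ready.load(std::memory_order_acquire))
          return false;
      *out = payload;                // guaranteed to see 42
      return true;
  }

  // Replacing the atomic with 'volatile bool ready' only constrains the
  // compiler's handling of the volatile access itself; that often appears
  // to work on a TSO machine, but gives no hardware ordering guarantee on
  // a machine with relaxed memory ordering.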

Because of this, the assertion that programming a non-TSO machine is
"harder" doesn't IMO stand up, at least in C programs, because the
same data-race bugs can manifest themselves as either compiler
optimizations or hardware reorderings. And a compiler can, at least in
theory, do things that are far weirder than any memory system does.

Andrew.

Chris M. Thomasson

unread,
Nov 14, 2023, 3:44:49 PM11/14/23
to
On 11/14/2023 2:25 AM, a...@littlepinkcloud.invalid wrote:
> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>> On 11/13/2023 3:49 AM, a...@littlepinkcloud.invalid wrote:
>>>
>>> I don't think that's really true. The reorderings we see in currently-
>>> produced hardware are, more or less, a subset of the same reorderings
>>> that C compilers perform. Therefore, if you see a confusing hardware
>>> reordering in a multi-threaded C program it may well be (probably is!)
>>> a bug (according to the C standard) *even on a TSO machine*. The only
>>> common counter-example to this is for volatile accesses.
>>
>> Be sure to keep C++ std::atomic in mind... Also, std::memory_order_*
>
> Maybe I wasn't clear enough.

Well, I think I was being a bit too dense here and missed your main
point. Sorry.


> If you use std::atomic and
> std::memory_order_* in such a way that there are no data races, your
> concurrent program will be fine on both TSO and relaxed memory
> ordering.

Agreed.


> If you try to fix data races with volatile instead of
> std::atomic and std::memory_order_*, that'll mostly fix things on a
> TSO machine, but not on a machine with relaxed memory ordering.
>
> (For pedants: Mostly, but not completely, even on TSO, e.g. Dekker's
> Algorithm, which needs something stronger.)

Indeed, it does.

Branimir Maksimovic

unread,
Nov 14, 2023, 11:10:13 PM11/14/23
to
I think that the Apple M1 requires it, too. I had problems without a membar.


--

7-77-777, Evil Sinner!
https://www.linkedin.com/in/branimir-maksimovic-6762bbaa/

Anton Ertl

unread,
Nov 15, 2023, 1:49:54 PM11/15/23
to
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>I have written a microbenchmark for measuring how memory dependencies
>affect the performance of various microarchitectures. You can find it
>along with a description and results on
><http://www.complang.tuwien.ac.at/anton/memdep/>.

I have now added parameter combinations for detecting something that I
currently call memory renaming: If the microarchitecture can overlap
independent accesses to the same memory location (analogous to
register renaming).
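
To make that concrete, here is a minimal sketch (illustrative only, not
the actual parameter combination used in the microbenchmark): two
logically independent chains that happen to reuse the same memory word.

  /* *a, *b, *c, *d are distinct locations; *t is reused by both chains. */
  void chains(int* a, int* b, int* c, int* d, int* t) {
      *t = *a + 1;     /* chain 1 writes *t ... */
      *b = *t;         /* ... and reads it back */
      *t = *c + 1;     /* chain 2 only reuses the location: WAW/WAR on *t */
      *d = *t;
  }

The idea, roughly, is that a core with memory renaming can overlap the
two chains despite the reuse of *t, much as register renaming lets
independent computations reuse an architectural register without
serializing on it.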

Does anybody know if there is a commonly-used term for this feature?
I would also be interested in a commonly-used term for bypassing the
store buffer in store-to-load forwarding (as we see in Zen3 and Tiger
Lake).

Concerning memory renaming, it turns out that Intel added it in
Nehalem, and AMD added it between K10 and Excavator (probably in the
Bulldozer). ARM has not implemented this in any microarchitecture I
have measured (up to Cortex-A76), and Apple has it in Firestorm (M1
P-core), but not in Icestorm