
"random" number experiment...


Chris M. Thomasson
Sep 28, 2022, 4:49:55 PM

Experimentally creating some "random" numbers from a multi-threaded race
condition. Is this post on topic?

https://groups.google.com/g/comp.lang.c++/c/7u_rLgQe86k/m/fYU9SnuAFQAJ

Here is my C++ code:
_______________________________________________
#include <thread>
#include <atomic>
#include <cstdio>

#define THREADS 7

static std::atomic<unsigned int> g_racer(0);


void racer(unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i)
    {
        // Race infested fetch-and-add op
        unsigned int r = g_racer.load(std::memory_order_relaxed);
        r = r + 1;
        g_racer.store(r, std::memory_order_relaxed);
    }
}


unsigned int get_rand_bit(unsigned int n)
{
    std::thread t[THREADS];

    for (unsigned int i = 0; i < THREADS; ++i)
    {
        t[i] = std::thread(racer, n);
    }

    for (unsigned int i = 0; i < THREADS; ++i)
    {
        t[i].join();
    }

    unsigned int r = g_racer.load(std::memory_order_relaxed);

    return r & 1;
}


void display_rand_bits(unsigned int n)
{
    std::printf("%u Race-Condition Bits\n"
                "____________________________________________\n",
                n);

    for (unsigned int i = 0; i < n; ++i)
    {
        unsigned int r = get_rand_bit(1000000);
        std::printf("%u", r);
        if (!((i + 1) % 64)) std::printf("\n");
        std::this_thread::yield();
    }

    std::printf("\n____________________________________________\n");
}


int main()
{
    display_rand_bits(1024);

    return 0;
}
_______________________________________________


Can you compile and run it? If so, post your output.

Max
Sep 29, 2022, 6:50:34 AM

If it runs on your system, why wouldn't it run on mine? Haven't tried it
yet. A short explanation of what exactly you're measuring would be nice
(i.e. how many threads, what task). Since the race condition may come out
biased, keep in mind to use the von Neumann method to convert the output
to unbiased randomness,
https://mcnp.lanl.gov/pdf_files/nbs_vonneumann.pdf (tl;dr: read the bits
in pairs, discard 00 and 11, and map 01 to 0 and 10 to 1).
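
A minimal sketch of that whitening step, assuming it is pasted into your
program somewhere after get_rand_bit() (the name get_unbiased_bit is
mine, not from your code):
_______________________________________________
// Von Neumann whitening over the raw bit source: read bits in pairs,
// throw away 00 and 11, map 01 -> 0 and 10 -> 1. This assumes
// consecutive raw bits are independent, which a race-based source may
// not actually satisfy.
unsigned int get_unbiased_bit()
{
    for (;;)
    {
        unsigned int a = get_rand_bit(1000000);
        unsigned int b = get_rand_bit(1000000);
        if (a != b) return a; // 01 yields 0, 10 yields 1
        // a == b: discard the pair and try again
    }
}
_______________________________________________

Note it throws away roughly half the raw bits (more if the source is
heavily biased), so it is slow on top of an already slow source.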

Cheers,

Max

Rich
Sep 29, 2022, 8:53:45 AM

Max <maxt...@gmx.net> wrote:
> On 28.09.22 22:49, Chris M. Thomasson wrote:
>> Experimentally creating some "random" numbers from a multi-threaded race
>> condition. Is this post on topic?

It is closer to on topic than scam posts about blank cards for get rich
quick schemes. :)

> If it runs on your system why wouldn't it run on mine?

Indeed, it should, provided the same C++ compiler (and a compatible
version) is used. With a different compiler (or a different version)
from the one Chris used, one's mileage may vary.

> Haven't tried it, yet. A short explanation what exactly you're
> measuring would be nice (i.e. how many threads, what task).

Yes. (This next sentence is for Chris): Instead of a blob of C++, with
only one comment, how about an explanation of the algorithm you are
implementing, how you think it works, and why you think it works?

> As the race conditions may come out biased, keep in mind to use the
> von Neumann method to convert the output to unbiased randomness
> https://mcnp.lanl.gov/pdf_files/nbs_vonneumann.pdf (tl;dr check
> double-bits, discard 00 and 11 and use 01 for 0 and 10 for 1).

Also, given that it is 'race condition' based and 'threads' based
(which is about all the detail Chris has provided), it also probably has
a very big failure mode: single-CPU systems.

Now, granted, in 2022 single-CPU systems are way less common than they
used to be, but on a single-CPU system the 'race' will be synchronized
to the task-switch frequency used by the host OS, which will likely
introduce either non-randomness or quite significant bias.

And, on a multi-CPU system, if the host OS happens, for some reason, to
schedule the threads of this task all on the same CPU, the end result
would be identical to running it on a single-CPU system.

Chris M. Thomasson
Oct 15, 2022, 4:16:40 PM

On 9/29/2022 3:50 AM, Max wrote:
> On 28.09.22 22:49, Chris M. Thomasson wrote:
>> Experimentally creating some "random" numbers from a multi-threaded
>> race condition. Is this post on topic?
>>
>> https://groups.google.com/g/comp.lang.c++/c/7u_rLgQe86k/m/fYU9SnuAFQAJ
>>
>> Here is my C++ code:
>> _______________________________________________
[...]

> If it runs on your system why wouldn't it run on mine? Haven't tried it,
> yet.

It should compile and run on your end. I failed to ask if you have
multiple processors, multiple cores, hyper-threads, etc... This could,
in effect, run on a single hardware thread, in which case there would be
no "randomness" introduced by the race condition. I don't know what type
of system(s) you are going to run it on. I would consider a sequential,
monotonic result to be a failure condition. It outputs binary symbols,
0 and 1.



> A short explanation what exactly you're measuring would be nice
> (i.e. how many threads, what task). As the race conditions may come out
> biased, keep in mind to use the von Neumann method to convert the output
> to unbiased randomness
> https://mcnp.lanl.gov/pdf_files/nbs_vonneumann.pdf (tl;dr check
> double-bits, discard 00 and 11 and use 01 for 0 and 10 for 1).

The task is to create a race-infested "atomic" RMW fetch-and-add
operation. If the RMW operation is not actually atomic, like in my
experimental code where it is split into a separate load, increment, and
store, well, it is broken. However, can we still make use of its broken
nature for other things? How about generating a "random" sequence, just
for fun? Use as many threads as you like:

#define THREADS 7
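
For contrast, a quick sketch (not part of the original program) of what
the racer loop would look like with a genuinely atomic RMW; with this
version no increments are lost, the final count is fully deterministic,
and the low bit carries no "randomness" at all:
_______________________________________________
// Contrast sketch: the same loop using a real atomic fetch-and-add on
// the existing g_racer counter. Every increment is kept, so the result
// is exactly THREADS * n per call and perfectly predictable.
void racer_atomic(unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i)
    {
        g_racer.fetch_add(1, std::memory_order_relaxed);
    }
}
_______________________________________________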

I noticed that the results are "better" when the system is under load.
Run it on a system that is idle. Then run it on a system that is under
load. Even moving the mouse around and running random programs can
influence the generated sequence.

Give it a go and post your output here.

Chris M. Thomasson
Oct 15, 2022, 4:19:10 PM

On 9/29/2022 5:53 AM, Rich wrote:
> Max <maxt...@gmx.net> wrote:
>> On 28.09.22 22:49, Chris M. Thomasson wrote:
>>> Experimentally creating some "random" numbers from a multi-threaded race
>>> condition. Is this post on topic?
>
> It is closer to on topic than scam posts about blank cards for get rich
> quick schemes. :)
>
>> If it runs on your system why wouldn't it run on mine?
>
> Indeed, it should, provided the same C++ compiler (and a compatible
> version) was used. Using a different one from the one Chris used (or a
> different version) may or may not make one's mileage vary.
>
>> Haven't tried it, yet. A short explanation what exactly you're
>> measuring would be nice (i.e. how many threads, what task).

Is my response to Max good enough at all, Rich? For some damn reason this
thread got deleted from my newsreader. I only found this response by
chance, by looking at the group through the Google Groups interface. I
must have Mr. Fumble-fingered the thread and accidentally deleted it.

Damn!

Max
Oct 17, 2022, 7:54:40 PM

Afaik, there is no guarantee that on a multi-core system multiple
threads will be run on different cores. That might explain why the
results on a largely idle machine are unsatisfying. As you pointed out,
a single-core system will completely break your code. How about setting
this up to run on two completely different systems and letting them
communicate via an interface? E.g., check whether, after a given number
of operations on system 1, an even or odd number of operations has been
performed on system 2.

Chris M. Thomasson
Oct 17, 2022, 8:08:16 PM

On 10/17/2022 4:54 PM, Max wrote:
> On 15.10.22 22:16, Chris M. Thomasson wrote:
>> On 9/29/2022 3:50 AM, Max wrote:
>>> On 28.09.22 22:49, Chris M. Thomasson wrote:
>>>> Experimentally creating some "random" numbers from a multi-threaded
>>>> race condition. Is this post on topic?
>>>>
>>>> https://groups.google.com/g/comp.lang.c++/c/7u_rLgQe86k/m/fYU9SnuAFQAJ
[...]
>>> A short explanation what exactly you're measuring would be nice (i.e.
>>> how many threads, what task). As the race conditions may come out
>>> biased, keep in mind to use the von Neumann method to convert the
>>> output to unbiased randomness
>>> https://mcnp.lanl.gov/pdf_files/nbs_vonneumann.pdf (tl;dr check
>>> double-bits, discard 00 and 11 and use 01 for 0 and 10 for 1).
>>
>> The task is to create a race-infested atomic RMW fetch-add operation.
>> If the RMW operation is not atomic, like in my experimental code,
>> well, it is broken. However, can we still make use of its broken
>> nature for other things? How about generating a "random" sequence for
>> fun. Use as many threads as you like:
>>
>> #define THREADS 7
>>
>> I noticed that the results are "better" when the system is under load.
>> Run it on a system that is idle. Then run it on a system that is under
>> load. Even moving the mouse around and running random programs can
>> influence the generated sequence.
>
>
> Afaik, there is no guarantee that on a multi-core system multiple
> threads will be run on different cores.

[...]

I can use affinity masks to pin threads to specific cores...

Chris M. Thomasson
Oct 17, 2022, 8:11:52 PM

However, this is no longer portable C++. I will be using the affinity
API. Iirc, POSIX:

https://www.man7.org/linux/man-pages/man3/pthread_setaffinity_np.3.html

Damn, the *_np suffix means non-portable! Yikes! No longer pure C++ once
I go down this road...

Windows has one too. I forgot the damn API offhand. Looking it up...
SetThreadAffinityMask, ahh, found it.
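
A rough sketch of the Linux/pthread side, assuming glibc where
std::thread::native_handle() yields a pthread_t (the wrapper name
pin_to_cpu is just mine, for illustration):
_______________________________________________
// Linux-only sketch, not portable C++: pin a std::thread to one core
// via its native pthread handle. g++ defines _GNU_SOURCE by default,
// which the cpu_set_t macros require. Error handling is minimal.
#include <thread>
#include <pthread.h>
#include <sched.h>

void pin_to_cpu(std::thread& t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);

    // Returns 0 on success; nonzero means the request was rejected.
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}
_______________________________________________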

Rich
Oct 17, 2022, 8:38:46 PM

Max <maxt...@gmx.net> wrote:
> On 15.10.22 22:16, Chris M. Thomasson wrote:
>> On 9/29/2022 3:50 AM, Max wrote:
>>> A short explanation what exactly you're measuring would be nice
>>> (i.e. how many threads, what task). As the race conditions may
>>> come out biased, keep in mind to use the von Neumann method to
>>> convert the output to unbiased randomness
>>> https://mcnp.lanl.gov/pdf_files/nbs_vonneumann.pdf (tl;dr check
>>> double-bits, discard 00 and 11 and use 01 for 0 and 10 for 1).
>>
>> The task is to create a race-infested atomic RMW fetch-add
>> operation. If the RMW operation is not atomic, like in my
>> experimental code, well, it is broken. However, can we still make
>> use of its broken nature for other things? How about generating a
>> "random" sequence for fun. Use as many threads as you like:
>>
>> #define THREADS 7
>>
>> I noticed that the results are "better" when the system is under
>> load. Run it on a system that is idle. Then run it on a system
>> that is under load. Even moving the mouse around and running random
>> programs can influence the generated sequence.
>
>
> Afaik, there is no guarantee that on a multi-core system multiple
> threads will be run on different cores.

In fact, some schedulers just might schedule threads on the same CPU to
take advantage of warm caches on that CPU (under the assumption that
threads are used where fast access to shared data structures is
desired). But yes, in any case there's no guarantee of any particular
scheduling pattern/method, unless the user has the ability (and
knowledge) to specifically configure such a setup.

General users would have no guarantees.

> That might explain why the results on a largely idle machine are
> unsatisfying.

That very well might be. Plus, in a power-constrained environment
(e.g., a laptop on battery), scheduling on the same CPU might produce
power savings over having to power other cores up and down.

> As you pointed out, a single-core system will completely break your
> code. How about setting this up to run on two completely different
> systems and letting them communicate via an interface? E.g., if
> after a given number of operations on system 1 an even or odd number
> of operations was performed on system 2.

Two separate systems would provide some amount of unpredictability, but
needing two independent systems to run this randomness generator would
make it a bit less useful to the general person.

Rich
Oct 17, 2022, 8:40:11 PM

If you go that route, yes, you'll need different code for each
different major OS you want to support.

And, is 'pinning' a guarantee, or a "try our best" from the OS
scheduler?

Chris M. Thomasson
Oct 17, 2022, 8:49:31 PM

Difficult to answer. Humm... I remember being able to pin a thread so
that it would burn a specific CPU on a SunFire T2000, running Solaris,
that I won from the CoolThreads contest held by Sun. My project name was
vZoom. Only one processor got hot during the pinning experiment. I had
the server out with the top off; it sounded like a damn vacuum cleaner.
The hot processor was always the pinned one. I might be able to show you
the contest HTML via the Wayback Machine. Humm, I wonder if I can find
it. Iirc, Windows tends to honor thread pinning via the affinity mask.

Rich
Oct 17, 2022, 10:51:43 PM

None of that hardware, nor that software, is likely to be in general use
today.

What "guarantee" do you get, today, with today's OSes
(win/mac/linux/android/ios)?

Chris M. Thomasson
Oct 17, 2022, 11:05:37 PM

Not much. Iirc, the ALOM in the SunFire would report a CPU's temperature.
Then there is this quote from Microsoft:
__________________
A thread affinity mask is a bit vector in which each bit represents a
logical processor that a thread is allowed to run on. A thread affinity
mask must be a subset of the process affinity mask for the containing
process of a thread. A thread can only run on the processors its process
can run on. Therefore, the thread affinity mask cannot specify a 1 bit
for a processor when the process affinity mask specifies a 0 bit for
that processor.

Setting an affinity mask for a process or thread can result in threads
receiving less processor time, as the system is restricted from running
the threads on certain processors. In most cases, it is better to let
the system select an available processor.

If the new thread affinity mask does not specify the processor that is
currently running the thread, the thread is rescheduled on one of the
allowable processors.

Starting with Windows 11 and Windows Server 2022, on a system with more
than 64 processors, process and thread affinities span all processors in
the system, across all processor groups, by default. The
dwThreadAffinityMask must specify processors in the thread's current
primary group.
__________________


The warning:
__________________
Setting an affinity mask for a process or thread can result in threads
receiving less processor time, as the system is restricted from running
the threads on certain processors. In most cases, it is better to let
the system select an available processor.
__________________


Is interesting. Not a guarantee, but seems like it tries to pin the thread.
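
A tiny sketch of what that call might look like on the Windows side
(the wrapper name pin_current_thread is just mine); per the quoted docs,
the mask is one bit per logical processor in the thread's current
primary group:
__________________
// Windows-only sketch: ask the scheduler to keep the calling thread on
// logical processor `cpu`.
#include <windows.h>

bool pin_current_thread(unsigned cpu)
{
    DWORD_PTR mask = DWORD_PTR(1) << cpu;

    // Returns the previous affinity mask on success, 0 on failure
    // (e.g. the bit is outside the process affinity mask).
    return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
}
__________________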

Rich
Oct 17, 2022, 11:29:27 PM

Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> On 10/17/2022 7:51 PM, Rich wrote:
>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>> On 10/17/2022 5:40 PM, Rich wrote:
>>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>>> On 10/17/2022 5:08 PM, Chris M. Thomasson wrote:
>>>>>> I can use affinity masks to pin threads to specific cores...
>>>>>
>>
>> What "guarantee" to you get, today, with today's OS'es
>> (win/mac/linux/anroid/ios)?
>
> The warning:
> __________________
> Setting an affinity mask for a process or thread can result in threads
> receiving less processor time, as the system is restricted from running
> the threads on certain processors. In most cases, it is better to let
> the system select an available processor.
> __________________
>
>
> Is interesting. Not a guarantee, but seems like it tries to pin the
> thread.

Nope, not, per se, a 'guarantee'.

Plus, for your generator, you'd need to allow the 'process' to run on
at least two CPUs to even be able to have the two threads running on
two different CPUs (which your system depends upon in order to provide
any semblance of randomness).

What guarantee do you get that both threads actually get scheduled
concurrently on both CPUs? I.e., what happens if a higher-priority
process comes through that the OS schedules on one of your two chosen
CPUs? Does one thread halt? In which case you gain no randomness.
Does the thread that was preempted by the higher-priority process get
scheduled on the other CPU with its companion? In which case you gain
no randomness.

From a 'security' standpoint, there are enough holes here to say: don't
use this for creating cryptographic-quality randomness. For playing a
game of Monopoly, it might be generally random enough. But not for
cryptography.

Chris M. Thomasson
Oct 18, 2022, 7:56:53 PM

I get no guarantee. From experience, all processors sure seem to run hot
at 100% load. However, regardless, that is no guarantee, indeed.
Therefore, this is just for fun, to try to show how a race condition can
possibly create some "random" numbers. That was the whole point. I was
just wondering if perhaps the randomness, or lack thereof, can give us
insight into the current state of the system. Perhaps we can tell the
difference between "randomness" generated from an RMW fetch-and-add
vs., say, a CAS.
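
A rough sketch of what I mean by the CAS flavor, assuming the same
<atomic> include as the original program (racer_cas, g_cas_counter, and
the failure count are just illustrative names, not part of that code):
a correct compare-exchange loop never loses increments, so the counter
itself stays deterministic, but the number of failed CAS attempts
depends on contention and might act as an alternative noise source.
__________________
// Sketch only: a correct CAS increment loop. If each of THREADS threads
// runs this, the counter ends at exactly THREADS * n; the retry count,
// however, varies with contention.
static std::atomic<unsigned int> g_cas_counter(0);

unsigned long racer_cas(unsigned int n)
{
    unsigned long failures = 0;

    for (unsigned int i = 0; i < n; ++i)
    {
        unsigned int expected = g_cas_counter.load(std::memory_order_relaxed);

        while (!g_cas_counter.compare_exchange_weak(
                   expected, expected + 1, std::memory_order_relaxed))
        {
            ++failures; // another thread won the race (or a spurious failure); retry
        }
    }

    return failures;
}
__________________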

Jakob Bohm
Oct 28, 2022, 11:10:21 AM

On 2022-10-18 05:29, Rich wrote:
> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>> On 10/17/2022 7:51 PM, Rich wrote:
>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>> On 10/17/2022 5:40 PM, Rich wrote:
>>>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>>>> On 10/17/2022 5:08 PM, Chris M. Thomasson wrote:
>>>>>>> I can use affinity masks to pin threads to specific cores...
>>>>>>
>>>
>>> What "guarantee" to you get, today, with today's OS'es
>>> (win/mac/linux/anroid/ios)?
>>
>> The warning:
>> __________________
>> Setting an affinity mask for a process or thread can result in threads
>> receiving less processor time, as the system is restricted from running
>> the threads on certain processors. In most cases, it is better to let
>> the system select an available processor.
>> __________________
>>
>>
>> Is interesting. Not a guarantee, but seems like it tries to pin the
>> thread.
>

That message says nothing either way; it is a warning to idiots that
pinning your thread to specific CPUs/cores obviously gives your thread
fewer available CPU/core resources to choose from, thus potentially
running slower in terms of the wall clock.

Any messages Microsoft may have about the intended and promised effect
of pinning must be elsewhere in the documentation of the pinning
functions.

However, in older Microsoft systems, pinning restrictions were absolute,
but logically invalid pinning requests (such as pinning to a physically
absent CPU, or a CPU not enabled for the containing process or the
containing multi-process job) would be ignored, with or without an error.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Chris M. Thomasson
Oct 28, 2022, 5:21:16 PM

On 10/28/2022 8:10 AM, Jakob Bohm wrote:
> On 2022-10-18 05:29, Rich wrote:
>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>> On 10/17/2022 7:51 PM, Rich wrote:
>>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>>> On 10/17/2022 5:40 PM, Rich wrote:
>>>>>> Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>>>>>>> On 10/17/2022 5:08 PM, Chris M. Thomasson wrote:
>>>>>>>> I can use affinity masks to pin threads to specific cores...
>>>>>>>
>>>>
>>>> What "guarantee" to you get, today, with today's OS'es
>>>> (win/mac/linux/anroid/ios)?
>>>
>>> The warning:
>>> __________________
>>> Setting an affinity mask for a process or thread can result in threads
>>> receiving less processor time, as the system is restricted from running
>>> the threads on certain processors. In most cases, it is better to let
>>> the system select an available processor.
>>> __________________
>>>
>>>
>>> Is interesting.  Not a guarantee, but seems like it tries to pin the
>>> thread.
>>
>
> That message says nothing either way, it is a warning to idiots that
> pinning your thread to specific CPUs/cores obviously gives your thread
> fewer available CPU/core resources to choose from, thus potentially
> running slower in terms of the wall clock.
[...]

Actually, it can give us a way to emulate per-CPU data in user space.
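
A bare-bones sketch of that idea, assuming each worker thread has
already been pinned to a distinct CPU (MAX_CPUS, per_cpu_slot, and
per_cpu_increment are my names, purely for illustration): with pinning
in place, slot [cpu] is only ever touched by the thread on that CPU, so
no atomics are needed, and the alignment keeps slots on separate cache
lines.
_______________________________________________
// Sketch: emulating per-CPU data in user space via pinned threads.
#include <cstdint>

constexpr unsigned MAX_CPUS = 64;

struct alignas(64) per_cpu_slot
{
    std::uint64_t counter = 0; // private to the thread pinned on this CPU
};

static per_cpu_slot g_per_cpu[MAX_CPUS];

// Must only be called from the thread pinned to `cpu`.
void per_cpu_increment(unsigned cpu)
{
    ++g_per_cpu[cpu].counter;
}
_______________________________________________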