
boolean and thread safety


rap...@gmail.com

Feb 11, 2011, 1:09:04 PM
Are booleans inherently thread safe?

I am thinking of something like

public class AsyncBoolean {

    private boolean value = false;

    public AsyncBoolean(boolean initValue) {
        value = initValue;
    }

    public boolean getValue() {
        return value;
    }

    public void setValue(boolean newValue) {
        value = newValue;
    }
}

Assuming I define the variable using

final AsyncBoolean varName = new AsyncBoolean(true);

does this in effect auto-sync, since you can't partially read a byte
and even if you could it is a boolean anyway?

Since it is declared as final, the reference can't change, so I could
then call varName.setValue(someValue) and varName.getValue() from
threads without the need for syncing.

markspace

Feb 11, 2011, 2:00:07 PM
On 2/11/2011 10:09 AM, rap...@gmail.com wrote:
> Are booleans inherently thread safe?


No.

> final AsyncBoolean varName = new AsyncBoolean(true);

> does this in effect auto-sync, since you can't partially read a byte
> and even if you could it is a boolean anyway?


No. There is more than one type of memory in a modern system. Caches
and registers come into play. So you can't just "read a byte", because
you don't know where you are reading from. One copy of a byte could be
in main memory while a different thread has that same byte in a cache
or in a register.


> Since it is declared as final, the reference can't change, so I could
> then call varName.setValue(someValue) and varName.getValue() from
> threads without the need for syncing.


The final declaration is only thread safe for instance variables, and
only after the object's constructor has finished. Anything else requires
more explicit synchronization.

For this specific example you probably want to declare your boolean as
volatile. That removes the need for final (in this case) and also
removes the need for any other synchronization (in this case). Also, on
x86 architecture, volatile is very lightweight and essentially "free"
for reads.
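For instance, a volatile-based version of the OP's class might look like the following sketch:

```java
// Sketch: the OP's AsyncBoolean with a volatile field. The volatile
// keyword guarantees that a write by one thread is visible to all
// subsequent reads by other threads, which is all this class needs.
public class AsyncBoolean {

    private volatile boolean value;

    public AsyncBoolean(boolean initValue) {
        value = initValue;
    }

    public boolean getValue() {
        return value;
    }

    public void setValue(boolean newValue) {
        value = newValue;
    }
}
```

Note that volatile only suffices here because each accessor is a single read or write; a check-then-act sequence would still need locking or an atomic compare-and-set.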

Daniele Futtorovic

Feb 11, 2011, 2:24:36 PM

You're mixing different things here. *Assigning* a boolean value, as in
value = true
is atomic, and hence thread-safe.

But what thread safety is about is only marginally atomicity (because
all assignments save those of double and long variables are atomic). The
main point is what a thread "sees". That is, whether it "sees" the
changes operated by another thread.

With respect to that, no primitive type, no matter how small, is
thread-safe. You'll have to ensure synchronisation of information across
threads. The easiest way to do that, assuming the propagation of the
reference to the "AsyncBoolean" instance is ensured, would be to declare
the "value" field /volatile/. Another way would be to synchronise in the
getter and setter.

Have a look at java.util.concurrent.atomic.AtomicBoolean.
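For example (the class name Flag below is made up for illustration), AtomicBoolean gives you volatile-style visibility plus atomic read-modify-write operations:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative wrapper around AtomicBoolean. get() and set() behave
// like volatile reads/writes; compareAndSet additionally makes a
// check-then-act sequence atomic, which plain volatile cannot do.
public class Flag {

    private final AtomicBoolean value = new AtomicBoolean(false);

    public boolean getValue() {
        return value.get();
    }

    public void setValue(boolean v) {
        value.set(v);
    }

    // Returns true only for the single thread that flips false -> true.
    public boolean trySet() {
        return value.compareAndSet(false, true);
    }
}
```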

--
DF.

Lew

Feb 11, 2011, 5:46:24 PM
rap...@gmail.com wrote:
>> Are booleans inherently thread safe?

Have you read the tutorials on concurrent programming in Java? Have you read
/Java Concurrency in Practice/ by Brian Goetz, et al.? Any of the concurrency
articles in IBM's Developerworks/Java site?

You should.

markspace wrote:
> No.

rap...@gmail.com wrote:
>> final AsyncBoolean varName = new AsyncBoolean(true);
>>
>> does this in effect auto-sync, since you can't partially read a byte
>> and even if you could it is a boolean anyway?

markspace wrote:
> No. There is more than one type of memory in a modern system. Caches and
> registers come into play. So you can't just "read a byte" because you don't
> know where you are reading from. One byte could be in main memory while a
> different thread could have that same byte in a cache or in a register.

rap...@gmail.com wrote:
>> Since it is declared as final, the reference can't change, so I could
>> then call varName.setValue(someValue) and varName.getValue() from
>> threads without the need for syncing.

That only protects against changing the reference, not the contents of the
item pointed to. So you still need to synch. Note that 'varName.setValue()'
doesn't change the reference, but does change the object. It is changes to
the *object* that you must guard here.
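One conventional way to guard the object's contents, sketched here as a hypothetical variant of the OP's class, is to synchronize both accessors on the same lock:

```java
// Hypothetical synchronized variant of the OP's class. The synchronized
// methods lock on 'this', giving both mutual exclusion and, just as
// importantly, visibility of each write to subsequent readers.
public class SyncBoolean {

    private boolean value;

    public SyncBoolean(boolean initValue) {
        value = initValue;
    }

    public synchronized boolean getValue() {
        return value;
    }

    public synchronized void setValue(boolean newValue) {
        value = newValue;
    }
}
```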

markspace wrote:
> The final declaration is only thread safe for instance variables, after its

and only for changing the *variable* contents. For reference variables, that
doesn't help against changes to the *object* contents.

Reference variables are pointers.

> constructor has finished. Anything else requires more explicit synchronization.
>
> For this specific example you probably want to declare your boolean as
> volatile. That removes the need for final (in this case) and also removes the
> need for any other synchronization (in this case). Also, on x86 architecture,
> volatile is very lightweight and essentially "free" for reads.

As the study material on Java concurrency explains.

--
Lew
Ceci n'est pas une fenĂȘtre.
.___________.
|###] | [###|
|##/ | *\##|
|#/ * | \#|
|#----|----#|
|| | * ||
|o * | o|
|_____|_____|
|===========|

Lew

Feb 11, 2011, 5:49:10 PM
rap...@gmail.com wrote:
> does this in effect auto-sync, since you can't partially read a byte
> and even if you could it is a boolean anyway?

Why do you think anything about the 'byte' type is relevant here? You're
discussing 'boolean', a very different type.

Daniele Futtorovic

Feb 11, 2011, 6:04:54 PM
On 11/02/2011 23:49, Lew allegedly wrote:
> rap...@gmail.com wrote:
>> does this in effect auto-sync, since you can't partially read a byte
>> and even if you could it is a boolean anyway?
>
> Why do you think anything about the 'byte' type is relevant here? You're
> discussing 'boolean', a very different type.

I assume what he was thinking about was that thread A might be reading
the value while thread B was writing it. In that sense, if the smallest
thing that could be read/addressed were a byte, no interruption could occur.
Hence my talking about assignments, and my point that the matter of
synchronisation isn't primarily about interleaved reads and writes, but
rather about where a write goes to and where a read comes from.

--
DF.

Owen Jacobson

Feb 11, 2011, 10:25:54 PM

Not only that, but also in *what order* other threads see changes.
Given shared non-volatile boolean fields 'startedWork' and
'finishedWork' both initially false and no memory barriers (volatile,
synchronization, or otherwise), if Thread #1 executes

    foo.startedWork = true;
    /* ... some work ... */
    foo.finishedWork = true;

then Thread #2 may legitimately print "Can't happen!" from

    if (foo.finishedWork && !foo.startedWork) {
        System.out.println("Can't happen!");
    }

even though Thread #1 ensures that foo.finishedWork only becomes true
after foo.startedWork is true. While this is relatively easy to
diagnose when both writes (and reads) affect closely-related shared
fields, the same surprising behaviour can arise from unrelated objects
and writes (reads) separated by large stretches of code.

Adding memory barriers or using AtomicFoo types not only ensures that
changes will be seen but also ensures that the changes will be seen in
a predictable order.

The Java memory model (defined at the end of the JLS) lays out the
specific guarantees for various scenarios.
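To sketch the fix for the example above (WorkStatus is a made-up holder class for the two flags): making finishedWork volatile is enough, because everything written before a volatile write is visible to any thread that subsequently reads that volatile and sees the new value.

```java
// Hypothetical holder for the two flags in the example above. Under
// the JSR-133 memory model, the volatile write to finishedWork
// happens-before any read that observes it as true, so such a reader
// is also guaranteed to see startedWork == true.
class WorkStatus {
    boolean startedWork;             // plain field; ordered by the volatile below
    volatile boolean finishedWork;   // the volatile write/read creates the ordering
}
```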

> With respects to that, no primitive type, no matter how small, is
> thread-safe. You'll have to ensure synchronisation of information across
> threads. The easiest way to do that, assuming the propagation of the
> reference to the "AsyncBoolean" instance is ensured, would be to declare
> the "value" field /volatile/. Another way would be to synchronise in the
> getter and setter.
>
> Have a look at java.util.concurrent.atomic.AtomicBoolean.

All good advice. I'll throw another vote on "read Java Concurrency in
Practice", too.

-o

Volker Borchert

Feb 12, 2011, 1:21:56 AM

Others have pointed out ad nauseam that while setValue() is indeed
atomic, the final only guarantees visibility of the varName assignment,
not of any changes to the object pointed to thereby.

Why don't you simply use AtomicBoolean?

--

"I'm a doctor, not a mechanic." Dr Leonard McCoy <mc...@ncc1701.starfleet.fed>
"I'm a mechanic, not a doctor." Volker Borchert <v_bor...@despammed.com>

Wanja Gayk

Feb 12, 2011, 6:19:41 AM
In article
<e9ded49c-dfed-4642-83cb-f4b183...@i40g2000yqh.googlegroups.com>,
rap...@gmail.com says...

> Are booleans inherently thread safe?

References and variables are not inherently thread safe.
Final fields are, but that does not make the objects they are
referencing thread safe.

> I am thinking of something like

[...]

Re-inventing the class "AtomicBoolean" in a non-thread-safe way?

To clarify a bit:

There is something like non-uniform memory access, which means that one
thread may see a different (cached) version of a variable at any time.
To synchronize caches you need to declare a variable "volatile" or use
it in a synchronized context.
Declaring a variable "volatile" does not make read-modify-write
operations (like "++x") thread safe, because these are three
operations, which need to be enclosed in a synchronized block (or
implemented using a lock-free algorithm, like AtomicInteger does).
It does however guarantee that it's the very same variable everywhere
(from the program's perspective).
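For example, this is roughly what AtomicInteger buys you over a volatile int (a sketch; the Counter class is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// With a volatile int, "++count" would still be three steps (read,
// add, write) and two threads could lose updates. AtomicInteger's
// incrementAndGet performs all three as one atomic, lock-free operation.
public class Counter {

    private final AtomicInteger count = new AtomicInteger();

    public int increment() {
        return count.incrementAndGet();
    }

    public int get() {
        return count.get();
    }
}
```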

You might like to read this:
http://www.ibm.com/developerworks/java/library/j-jtp06197.html
and this:
http://www.ibm.com/developerworks/java/library/j-threads1.html

The book "Java Concurrency in Practice" by Brian Goetz is your very best
friend for avoiding the pitfalls. One of the best Java books I've ever read.

Kind regards,


--
..Alesi's problem was that the back of the car was jumping up and down
dangerously - and I can assure you from having been teammate to
Jean Alesi and knowing what kind of cars that he can pull up with,
when Jean Alesi says that a car is dangerous - it is. [Jonathan Palmer]

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Lew

Feb 12, 2011, 9:56:39 AM

You assume that. I'm still waiting for the OP to weigh in. It remains that
his assertion that a 'byte' is a 'boolean' is wrong. They are distinct and
incommensurate types in Java. We don't even know for sure that a 'boolean'
occupies eight bits of memory.

Phrasing the point in terms of atomic reads and writes makes sense, and it's
easy enough to make that adjustment, but the OP clearly needs education that
'byte' and 'boolean' aren't the same.

Daniele Futtorovic

Feb 12, 2011, 11:11:33 AM
On 12/02/2011 04:25, Owen Jacobson allegedly wrote:
> On 2011-02-11 14:24:36 -0500, Daniele Futtorovic said:
>> But what thread safety is about is only marginally atomicity (because
>> all assignments save those of double and long variables are atomic). The
>> main point is what a thread "sees". That is, whether it "sees" the
>> changes operated by another thread.
>
> Not only that, but also in *what order* other threads see changes. Given
> shared non-volatile boolean fields 'startedWork' and 'finishedWork' both
> initially false and no memory barriers (volatile, synchronization, or
> otherwise), if Thread #1 executes
>
> foo.startedWork = true;
> /* ... some work ... */
> foo.finishedWork = true;
>
> then Thread #2 may legitimately print "Can't happen!" from
>
> if (foo.finishedWork && !foo.startedWork) {
> System.out.println("Can't happen!");
> }
>
> even though Thread #1 ensures that foo.finishedWork only becomes true
> after foo.startedWork is true. While this is relatively easy to diagnose
> when both writes (and reads) affect closely-related shared fields, the
> same surprising behaviour can arise from unrelated objects and writes
> (reads) separated by large stretches of code.

Thanks for the elaboration, Owen. One thing I'm wondering about, though:
is what you describe due to the possible reordering of statements, or
can the memory actually be "partially flushed", to put it simply?

--
DF.

markspace

Feb 12, 2011, 11:28:04 AM
On 2/12/2011 8:11 AM, Daniele Futtorovic wrote:

> Thanks for the elaboration, Owen. One thing I'm wondering about, though:
> is what you describe due to the possible reordering of statements, or
> can the memory actually be "partially flushed", to put it simply?


I think it's both. The compiler is certainly allowed to reorder reads
and writes, as long as each thread's own results stay consistent with
the "program order" of the source code. In addition, the hardware may
do strange things on its own.

It doesn't even need to "flush" memory. CPUs on the same chip typically
share a memory controller, and that controller can sniff bytes out of a
neighboring cache if it deems it opportune to do so.

There are some good videos of talks on the Java Memory Model. I'll see
if I can find the one I'm thinking of.


markspace

Feb 12, 2011, 11:43:57 AM
On 2/12/2011 8:28 AM, markspace wrote:
>
> I think it's both. The compiler is certainly allowed to reorder reads
> and writes, as long as each thread's own results stay consistent with
> the "program order" of the source code. In addition, the hardware may
> do strange things on its own.


Here's a decent video with further explanation:

<http://vimeo.com/3757991>

Daniele Futtorovic

Feb 12, 2011, 11:49:48 AM
On 12/02/2011 17:28, markspace allegedly wrote:
> On 2/12/2011 8:11 AM, Daniele Futtorovic wrote:
>
>> Thanks for the elaboration, Owen. One thing I'm wondering about, though:
>> is what you describe due to the possible reordering of statements, or
>> can the memory actually be "partially flushed", to put it simply?
>
>
> I think it's both. The compiler is certainly allowed to reorder reads
> and writes, as long as each thread's own results stay consistent with
> the "program order" of the source code. In addition, the hardware may
> do strange things on its own.
>
> It doesn't even need to "flush" memory. CPUs on the same chip typically
> share a memory controller, and that controller can sniff bytes out of a
> neighboring cache if it deems it opportune to do so.

In light of things like these, it never ceases to amaze me that our
programs actually work. :)

Although, I would assume that the purpose of many of these provisions
is actually to give latitude to compiler/VM writers and hardware
manufacturers, and that what they describe occurs only exceptionally in
practice. If that assumption is correct, how likely are things to go
bad if what was once exceptional becomes, for whatever reason, the
regular case?

--
DF.

Arved Sandstrom

Feb 12, 2011, 12:09:09 PM
On 11-02-12 12:49 PM, Daniele Futtorovic wrote:
> On 12/02/2011 17:28, markspace allegedly wrote:
>> On 2/12/2011 8:11 AM, Daniele Futtorovic wrote:
>>
>>> Thanks for the elaboration, Owen. One thing I'm wondering about, though:
>>> is what you describe due to the possible reordering of statements, or
>>> can the memory actually be "partially flushed", to put it simply?
>>
>>
>> I think it's both. The compiler is certainly allowed to reorder reads
>> and writes if it follows the same "program order" as the source code. In
>> addition, the hardware may do strange things on its own.
>>
>> It doesn't even need to "flush" memory. CPUs on the same chip typically
>> share a memory controller, and that controller can sniff bytes out of a
>> neighboring cache if it deems it opportune to do so.
>
> In light of things like these, it never ceases to amaze me that our
> programs actually work. :)
[ SNIP ]

Me too...although I am thinking more about general code quality.

That concurrency doesn't make things worse more often than it does is,
I believe, because the large majority of coders are blissfully
insulated from the effects of their ignorance. As an example, consider
someone who is programming JSF-based web apps on J2EE. The default
patterns that even a novice would use are tilted towards not sharing
state, so they simply don't run into concurrency problems.

I don't think it's common to use ApplicationScoped managed beans, so
that's one source of worry gone...unknowingly. With RequestScoped
managed beans you're in your own little silo anyway. Using session
scoping (either SessionScoped beans or accessing the HttpSession
directly), you would have to have objects that are not thread-safe,
*and* some rather unfortunate coding...because after all only the one
session user is involved, and you'd need 2 or more requests from the
same user at the same time banging on the same un-thread-safe state to
cause potential problems.

So basically well-thought-out and well-implemented frameworks and
libraries, each of which only needs half a dozen people who know what
they're about, save the bacon of tens of thousands of programmers who
don't know what they're about.

AHS
--
We must recognize the chief characteristic of the modern era - a
permanent state of what I call violent peace.
-- James D. Watkins

Lew

Feb 12, 2011, 1:24:11 PM
Daniele Futtorovic wrote:
> In light of things like these, it never ceases to amaze me that our
> programs actually work. :)
>
> Although, I would assume that the purpose of many of these provisions
> is actually to give latitude to compiler/VM writers and hardware
> manufacturers, and that what they describe occurs only exceptionally in
> practice. If that assumption is correct, how likely are things to go
> bad if what was once exceptional becomes, for whatever reason, the
> regular case?

They already have. The changes to the Java memory model with version 3 of the
JLS were a direct response to the increased visibility of concurrency bugs,
most of which had been in production for a while before anyone noticed.

Around 2003 - 2005 (it's fuzzy but I peg it around there) the computer
industry pulled back from the 3 GHz clock-speed limit and started stampeding
into multi-core. With actually separate CPUs handling separate threads,
memory visibility problems became, er, visible.

That said, irrespective of platform and throughout the history of concurrent
computing, concurrency is tricky, on occasion fatally so. Bugs there are by
nature elusive and probabilistic (or, to coin a portmanteau,
"probaballistic"). Conditions extrinsic to the application affect the behavior.

The answer to, "How likely?" is, "It depends."

--
Lew
Honi soit qui mal y pense.

Roedy Green

Feb 12, 2011, 1:49:01 PM
On Fri, 11 Feb 2011 10:09:04 -0800 (PST), "rap...@gmail.com"
<rap...@gmail.com> wrote, quoted or indirectly quoted someone who
said :

>Are booleans inherently thread safe?

No more so than ints or shorts.

Writing thread-safe code is really tricky. Your code can work "most"
of the time, but that is not good enough. Your best bet is to fob all
the co-ordination stuff off onto the concurrency libraries and buy the
book Java Concurrency in Practice.
see http://mindprod.com/jgloss/thread.html
--
Roedy Green Canadian Mind Products
http://mindprod.com
Refactor early. If you procrastinate, you will have
even more code to adjust based on the faulty design.
.

Volker Borchert

Feb 13, 2011, 6:17:42 PM
Lew wrote:
> They already have. The changes to the Java memory model with version 3 of the
> JLS were a direct response to the increased visibility of concurrency bugs,
> most of which had been in production for a while before anyone noticed.

Is there a concise description of _what_ was considered "broken" in the
older memory model, and why, and how it was fixed?

Mike Schilling

Feb 13, 2011, 7:35:29 PM

"Volker Borchert" <v_bor...@despammed.com> wrote in message
news:ij9oqm$p54$1...@Gaia.teknon.de...


> Lew wrote:
>> They already have. The changes to the Java memory model with version 3
>> of the
>> JLS were a direct response to the increased visibility of concurrency
>> bugs,
>> most of which had been in production for a while before anyone noticed.
>
> Is there a concise description of _what_ was considered "broken" in the
> older memory model, and why, and how it was fixed?

Lots of good stuff linked here
http://www.cs.umd.edu/~pugh/java/memoryModel/.

The basic complaints I recall were:

* access to volatiles didn't interact with caching of other variables, which
is why double-checked locking didn't work
* finals weren't necessarily visible to all threads, so that even immutable
objects couldn't be shared among threads safely.

Patricia Shanahan

Feb 13, 2011, 7:35:20 PM
On 2/13/2011 3:17 PM, Volker Borchert wrote:
> Lew wrote:
>> They already have. The changes to the Java memory model with version 3 of the
>> JLS were a direct response to the increased visibility of concurrency bugs,
>> most of which had been in production for a while before anyone noticed.
>
> Is there a concise description of _what_ was considered "broken" in the
> older memory model, and why, and how it was fixed?
>

The best resource I know of for this is
http://www.cs.umd.edu/~pugh/java/memoryModel/. In particular, for a
reasonably concise discussion see the JSR 133 FAQ,
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html

One of the main problems, in my opinion, was the dependence on the idea
of a "main memory" that sees all reads and writes in a definite order. A
reasonably large server almost certainly does not have a single main
memory, and it may be impossible for it to even act as though it had
one. A single point that all non-cache memory operations go through
would be a disastrous system bottleneck.

Patricia

markspace

Feb 13, 2011, 7:42:45 PM
On 2/13/2011 3:17 PM, Volker Borchert wrote:
> Lew wrote:
>> They already have. The changes to the Java memory model with version 3 of the
>> JLS were a direct response to the increased visibility of concurrency bugs,
>> most of which had been in production for a while before anyone noticed.
>
> Is there a concise description of _what_ was considered "broken" in the
> older memory model, and why, and how it was fixed?


I may be wrong here, but I believe what was broken was that there wasn't
a formal memory model. The informal memory model of "just try to compile
this" was so loose it had basically no useful semantics.

Consider our old pal the infamous double checked locking.

class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null) {
            synchronized (this) {
                if (helper == null) {
                    helper = new Helper();
                }
            }
        }
        return helper;
    }
}

This does in fact work on x86 about 99% of the time, even though it's
broken, because based on random events it won't work some of the time.
However, on other CPU architectures (like Alpha) it doesn't work at all.
Like not even once.

So Java had to implement a memory model so everyone was on the same page
and knew how to implement basic things like how two threads execute
together.
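Under the JSR-133 model, the usual repair is to declare the field volatile. This is a sketch of that fixed version (the Helper stub is added just so the snippet stands on its own):

```java
// Double-checked locking fixed by making 'helper' volatile: the
// volatile write happens-before any read that sees the non-null
// reference, so the Helper's state is fully visible to other threads.
class Foo {

    private volatile Helper helper;

    public Helper getHelper() {
        Helper h = helper;              // one volatile read on the fast path
        if (h == null) {
            synchronized (this) {
                h = helper;
                if (h == null) {
                    helper = h = new Helper();
                }
            }
        }
        return h;
    }
}

class Helper { }  // stub so the sketch compiles on its own
```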

Patricia Shanahan

Feb 13, 2011, 7:52:02 PM
On 2/13/2011 4:42 PM, markspace wrote:
> On 2/13/2011 3:17 PM, Volker Borchert wrote:
>> Lew wrote:
>>> They already have. The changes to the Java memory model with version
>>> 3 of the
>>> JLS were a direct response to the increased visibility of concurrency
>>> bugs,
>>> most of which had been in production for a while before anyone noticed.
>>
>> Is there a concise description of _what_ was considered "broken" in the
>> older memory model, and why, and how it was fixed?
>
>
> I may be wrong here, but I believe what was broken was there wasn't a
> formal memory model. The informal memory model of "just try to compile
> this" was so loose it had basically no useful semantics.

There was an attempt at a memory model, but it was neither implementable
nor sufficient. See Section 17 of the 2nd edition of the JLS.

Patricia

markspace

Feb 13, 2011, 8:03:53 PM
On 2/13/2011 4:52 PM, Patricia Shanahan wrote:

> There was an attempt at a memory model, but it was neither implementable
> nor sufficient. See Section 17 of the 2nd edition of the JLS.


If you're correct in your assertion elsewhere that the old memory model
required all threads to always see the reads and writes of all other
threads in a sequentially consistent fashion, then it would have slowed
CPU execution down by a factor of about 100.

And no JVMs implemented their code that way, so as you say it probably
wasn't truly implementable.

See that video link I posted up thread, it's really interesting.


Patricia Shanahan

Feb 13, 2011, 8:08:32 PM
On 2/13/2011 5:03 PM, markspace wrote:
> On 2/13/2011 4:52 PM, Patricia Shanahan wrote:
>
>> There was an attempt at a memory model, but it was neither implementable
>> nor sufficient. See Section 17 of the 2nd edition of the JLS.
>
>
> If you're correct in your assertion elsewhere that the old memory model
> required all threads to always see the reads and writes of all other
> threads in a sequentially consistent fashion, then it would have slowed
> CPU execution down by a factor of about 100.

That is not what I intended to assert. The old model certainly permitted
caching, with some restrictions. When actions were required to
be visible between threads, there was an assumption of ordering through
"main memory".

Patricia

Mike Schilling

Feb 13, 2011, 8:55:12 PM

"Patricia Shanahan" <pa...@acm.org> wrote in message
news:1v2dneSkOLsIHcXQ...@earthlink.com...

I'm perhaps oversimplifying, but the old model was, more or less:

* There is a main memory to which all writes are eventually made.
* Each thread is allowed to cache both reads and writes.
* When a thread locks a monitor, its read-ahead cache is flushed (so that
the next read to any location is made from main memory)
* When a thread releases a monitor, its write-behind cache is flushed to
main memory.
* A write to a volatile always goes directly to main memory. It has no
effect on the thread's caches.
* A read from a volatile always comes from main memory. It has no effect on
the thread's caches.

Note that there's no mention here of either finals or immutability. Thus,
there's no guarantee that a String created in thread 1 has the correct value
in thread 2.
