
Quick simple question:


Doug Mika

Jun 11, 2015, 12:33:23 PM
Could someone quickly confirm, in the program below, we need to join() the threads because, amongst other things, the functions running inside the threads store data in a variable that is itself inside the main() thread, correct?

void f(const vector<double>& v, double* res); // take input from v; place result in *res
class F {
public:
    F(const vector<double>& vv, double* p) : v{vv}, res{p} { }
    void operator()();       // place result in *res
private:
    const vector<double>& v; // source of input
    double* res;             // target for output
};

int main() {
    vector<double> some_vec;
    vector<double> vec2;
    // ...
    double res1;
    double res2;
    thread t1 {f, some_vec, &res1}; // f(some_vec, &res1) executes in a separate thread
    thread t2 {F{vec2, &res2}};     // F{vec2, &res2}() executes in a separate thread
    t1.join();
    t2.join();
    cout << res1 << ' ' << res2 << '\n';
}

Victor Bazarov

Jun 11, 2015, 12:59:16 PM
The 'join' makes the calling thread block until the thread finishes,
no? If you don't use both 'join' calls, then the 'cout' statement may
execute before the threads have had a chance to complete, I believe.
If you don't care about that, you don't need to 'join'. If you care
to wait for the results to be calculated, *then* output them, then
you need to 'join'.

V
--
I do not respond to top-posted replies, please don't ask

K. Frank

Jun 11, 2015, 1:18:32 PM
Hello Doug!

On Thursday, June 11, 2015 at 12:33:23 PM UTC-4, Doug Mika wrote:
> Could someone quickly confirm, in the program below, we need to join() the threads because, amongst other things, the functions running inside the threads store data in a variable that is itself inside the main() thread, correct?

I wouldn't phrase it quite the way you do.

As you probably know, thread::join() causes the
calling thread (in your case the "main" thread)
to pause its execution until the thread you're
joining has completed -- that is, until the thread
function of the thread with which you're joining has
exited.

In your specific example there are three reasons
you want thread::join (or some other synchronization
mechanism):

1) You don't want to access res1 and res2 until t1
and t2 are done doing their stuff. So you wait for
t1 and t2 to finish (i.e., join them).

2) res1 and res2 are variables local to the function
"main" (on the stack). You don't want them to go out
of scope -- i.e., you don't want main to exit -- until
t1 and t2 are done. Otherwise t1 and t2 would be
accessing and modifying stack space that might be being
used for some other purpose.

3) t1 and t2 are *also* stack variables local to main.
You *really* don't want them to go out of scope until
t1 and t2 (more precisely, the execution of t1's and
t2's thread functions) are done.

Now reasons 2 and 3 are kind of hidden by the fact that
when main completes (and its local variables go out of
scope) your program exits, so you might not see any
corruption that has occurred (but you might).

Also, I wouldn't say that res1, for example, is "a
variable that is itself inside the main() thread."
res1 and res2 are variables that are in the address
space of your program (your multi-threaded process).
(They happen to be on the stack, but that is part
of your process's address space.) As such they are
available to all threads in your process. (There is
such a thing as "thread-local storage," but you're
not using it.)

It is true that res1 and res2 are variables local to
the function "main," but (in c++) there is no notion
(not counting thread-local storage) of specific variables
belonging to specific threads. "main" is a special
function, but, in general, any given function can be
called by many different threads.

It's not exactly wrong to say that the variable res1
is "inside the main() thread," but it's an odd way of
putting it, and is more likely to be confusing than
helpful.
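
(For illustration, a minimal, untested sketch -- not part of Doug's
program, names made up -- contrasting an ordinary variable, which every
thread in the process can see, with a thread_local one, where each
thread gets its own copy:)

    #include <iostream>
    #include <thread>

    int shared_value = 0;                   // one object, visible to every thread
    thread_local int per_thread_value = 0;  // each thread has its own copy

    void worker() {
        per_thread_value = 42;  // modifies only the worker thread's copy
        std::cout << "worker sees shared_value = " << shared_value << '\n';
    }

    int main() {
        shared_value = 7;       // written before the thread starts, so no race
        std::thread t{worker};
        t.join();
        // main's copy of per_thread_value is untouched by the worker:
        std::cout << "main's per_thread_value = " << per_thread_value << '\n'; // prints 0
    }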


By the way (following up on an earlier post of yours)
what compiler / version and command line did you end
up using for exploring std::thread?


Happy Multi-Threaded Hacking!


K. Frank


> void f(const vector<double>& v, double* res); // take input from v; place result in *res
> class F {
> public:
>     F(const vector<double>& vv, double* p) : v{vv}, res{p} { }
>     void operator()();       // place result in *res
> private:
>     const vector<double>& v; // source of input
>     double* res;             // target for output
> };
>
> int main() {
>     vector<double> some_vec;
>     vector<double> vec2;
>     // ...
>     double res1;
>     double res2;
>     thread t1 {f, some_vec, &res1}; // f(some_vec, &res1) executes in a separate thread
>     thread t2 {F{vec2, &res2}};     // F{vec2, &res2}() executes in a separate thread
>     t1.join();
>     t2.join();
>     cout << res1 << ' ' << res2 << '\n';
> }


P.S. I haven't tested it or looked at it that closely, but
this code of yours looks correct to me.

Doug Mika

Jun 11, 2015, 2:36:51 PM
Well, I haven't gotten my MinGW to compile any std::thread program, but I found a group on sourceforge dedicated to MinGW, so I hope I'll find out what the problem is soon enough. For now, I'm using the tutorialspoint.com online C++ shell for smaller activities. I was thinking of getting Microsoft's VC++ lite, but I heard they don't follow the C++11 standard on everything...I only heard that. Ideally, I'm looking for something that will let me read my Stroustrup book and do the work with a compiler that complements the theory.

Doug

Victor Bazarov

Jun 11, 2015, 2:48:22 PM
On 6/11/2015 2:36 PM, Doug Mika wrote:
> [..]
> Well, I haven't gotten my MinGW to compile any std::thread program,
> but I found a group on sourceforge dedicated to MinGW, so I hope I'll
> find out what the problem is soon enough. For now, I'm using the
> tutorialspoint.com online C++ shell for smaller activities. I was
> thinking of getting Microsoft's VC++ lite, but I heard they don't
> follow the C++11 standard on everything...I only heard that.

Right, so you formed a prejudice against them before even trying their
product. It's not the "VC++ lite", it's called "Express Edition". You
can get their 2013 Express Edition for Windows Desktop, it's pretty
close to C++11. You can also find what exactly is and what isn't
implemented from C++11 in it; they actually don't mind telling you, if
you care to see.

> Ideally, I'm looking for something that will let me read my
> Stroustrup book and do the work with a compiler that complements the
> theory

Presumably you have some result in mind; you're probably working
towards actually using your skills for something. Learning in a
vacuum is not useful. Do you know what you're going to be doing in
C++ once you have learned it all? Then start doing that something
today! And for that you should get yourself a compiler and stick
with it. Be it G++ or MSVC, or any other, whether paid for or free,
they all differ slightly, but don't let that stop you from getting
at least something. Also consider a set of tools along with it,
consisting of a good editor, a debugger, and a profiler (nice if it
can handle threads). And plug away!

Ian Collins

Jun 11, 2015, 3:11:27 PM
Doug Mika wrote:
>
> Well, I haven't gotten my MinGW to compile any std::thread program,
> but I found a group on sourceforge dedicated to MinGW, so I hope I'll
> find out what the problem is soon enough. For now, I'm using the
> tutorialspoint.com online C++ shell for smaller activities. I was
> thinking of getting Microsoft's VC++ lite, but I heard they don't
> follow the C++11 standard on everything...I only heard that.
> Ideally, I'm looking for something that will let me read my
> Stroustrup book and do the work with a compiler that complements the
> theory

Install Cygwin and g++.

--
Ian Collins

K. Frank

Jun 12, 2015, 1:01:37 AM
Hi Doug (and Ian)!

There is also mingw-w64. It is a fork of, and a separate project
from mingw (and, just to be clear, is a windows port of gcc). This
allows you to run g++ on windows without the potentially undesirable
overhead of cygwin. (But cygwin offers its own benefits, as well.)

Mingw-w64 offers g++ builds that support std::thread out of the
box. I have used mingw-w64 for almost all basic std::thread
features, and I haven't come across any problems.

(The most recent versions of g++ might enable c++11 / c++14 features
automatically, but until recently you had to include "-std=c++11"
(or "-std=gnu++11" or "-std=c++14", etc.) on the command line for
std::thread (and other c++11 features) to work.)
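
For example, with a posix-threads build of mingw-w64's g++, something
along these lines should work (the file name is made up, and on some
builds the -pthread flag is redundant):

    g++ -std=c++11 -pthread -o threads_demo threads_demo.cpp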

>
> Ian Collins


Good luck!


K. Frank

Öö Tiib

Jun 12, 2015, 4:54:54 AM
On Thursday, 11 June 2015 22:11:27 UTC+3, Ian Collins wrote:
> Doug Mika wrote:
> >
> > Well, I haven't gotten my MinGW to compile any std::thread program,
> > but I found a group on sourceforge dedicated to MinGW, so I hope I'll
> > find out what the problem is soon enough. For now, I'm using the
> > tutorialspoint.com online C++ shell for smaller activities. I was
> > thinking of getting Microsoft's VC++ lite, but I heard they don't
> > follow the C++11 standard on everything...I only heard that.

MSVC is fine in practice. MinGW also usually just works fine.
Likely it is some silly configuration problem.

> > Ideally, I'm looking for something that will let me read my
> > Stroustrup book and do the work with a compiler that complements the
> > theory
>
> Install Cygwin and g++.

Cygwin is not simpler to configure than MinGW.

Cygwin emulates UNIX/POSIX more fully than MinGW, but when we compile
under it we get a Cygwin app. That can be good enough for our own
purposes, but more often we want a native Windows app to give to
someone else. A native Windows app we can get with MinGW or with
MSVC, but not with Cygwin.

Luca Risolia

Jun 12, 2015, 10:37:47 AM
On 11/06/2015 19:18, K. Frank wrote:
> 1) You don't want to access res1 and res2 until t1
> and t2 are done doing there stuff. So you wait for
> t1 and t2 to finish (i.e., join them).

> 2) res1 and res2 are variables local to the function
> "main" (on the stack). You don't want them to go out
> of scope -- i.e., you don't want main to exit -- until
> t1 and t2 are done. Otherwise t1 and t2 would be
> accessing and modifying stack space that might be being
> used for some other purpose.

> Also, I wouldn't say that res1, for example, is "a
> variable that is itself inside the main() thread."
> res1 and res2 are variables that are in the address
> space of your program (your multi-threaded process).
> (They happen to be on the stack, but that is part
> of your process's address space.) As such they are
> available to all threads in your process. (There is
> such a thing as "thread-local storage," but you're
> not using it.)

1) and 2) are essentially wrong.

The thread constructor copies or moves all the arguments to
*thread-accessible storage*.

In this case, when f is invoked in the context of the new thread, "res"
is a *copy* of "res1" (they are distinct pointers pointing to the same
object).

Doug Mika

Jun 12, 2015, 11:34:23 AM
How do we choose between the thread constructor "copying" or "moving" the arguments to "thread-accessible storage"? (I had a look at the thread constructors at www.cplusplus.com but I didn't manage to figure it out from that - actually, given

template <class Fn, class... Args>
explicit thread (Fn&& fn, Args&&... args);

don't the && imply "moving" of the arguments, as they are/expect rvalues?)

K.Frank

Jun 12, 2015, 12:27:59 PM
Hello Luca!

On Friday, June 12, 2015 at 10:37:47 AM UTC-4, Luca Risolia wrote:
> On 11/06/2015 19:18, K. Frank wrote:
> > 1) You don't want to access res1 and res2 until t1
> > and t2 are done doing there stuff. So you wait for
> > t1 and t2 to finish (i.e., join them).
>
> > 2) res1 and res2 are variables local to the function
> > "main" (on the stack). You don't want them to go out
> > of scope -- i.e., you don't want main to exit -- until
> > t1 and t2 are done. Otherwise t1 and t2 would be
> > accessing and modifying stack space that might be being
> > used for some other purpose.
> ...
>
> 1) and 2) are essentially wrong.
>
> The thread constructor copies or moves all the arguments to
> *thread-accessible storage*.
>
> In this case, when f is invoked in the context of the new thread, "res"
> is a *copy* of "res1" (they are distinct pointers pointing to the same
> object).

No, I don't think your description of what is happening is
correct.

In short, res1 is a double, while res is a double*. A copy
of res1 is never made. res points to res1, in place on the
stack as a variable local to main.

In more detail, here's Doug's code from "main":

// ...
double res1;
double res2;
thread t1 {f,some_vec,&res1}; //f(some_vec,&res1) executes in a separate
//thread
thread t2 {F{vec2,&res2}}; // F{vec2,&res2}() executes in a separate
//thread

Please note that res1 and res2 are of type double. They
are not pointers to doubles. t1's thread constructor
gets called with &res1, which is indeed a pointer-to-double,
and points to res1, a variable local to main, and on the stack.
(Basically the same thing happens with res2.)

The third argument of Doug's thread function f (res) is
a pointer-to-double, and when it gets called it does
indeed get called with a copy of the value of &res1,
but this value points to res1, the variable local to
main. A copy of res1 is never made. If res1 is accessed
by main or goes out of scope before the thread t1 modifies
it (through the pointer-to-double res that has the value
&res1), trouble arises. That is why some sort of
synchronization is needed (in Doug's example, t1.join()).
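
(A tiny, untested sketch of that point, with made-up names -- the
thread gets its own copy of the pointer value, but that copy still
points at the double living in main's stack frame:)

    #include <iostream>
    #include <thread>

    void write_result(double* res) {
        // 'res' is this thread's own copy of the pointer value &res1,
        // but it still points at the double local to main.
        *res = 3.14;
    }

    int main() {
        double res1 = 0;
        std::thread t{write_result, &res1};
        t.join();                   // wait before reading res1
        std::cout << res1 << '\n';  // prints 3.14
    }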


Good luck.


K. Frank

red floyd

Jun 12, 2015, 12:47:29 PM
You can install MinGW compilers under Cygwin as well.

Luca Risolia

Jun 12, 2015, 1:34:45 PM
On 12/06/2015 18:27, K.Frank wrote:
>
> No, I don't think your description of what is happening is
> correct.
>
> In short, res1 is a double, while res is a double*. A copy
> of res1 is never made. res points to res1, in place on the
> stack as a variable local to main.
>
> In more detail, here's Doug's code from "main":
>
> // ...
> double res1;
> double res2;
> thread t1 {f,some_vec,&res1}; //f(some_vec,&res1) executes in a separate

Sorry, typo: it's clear what I meant to say: "res" is *a copy* of the
passed pointer, which is "&res1" - not "res1", of course.

I read your "you don't want to access res1" as if one could not write
res1 (which is a pointer itself), which is clearly perfectly safe. You
probably meant to say "dereference" instead of "access".

However, I think what is important for the OP to notice in his example
is that what you get in f(), when it is invoked in the context of a new
thread of execution, is a const reference to **A COPY** of the original
vector passed as argument to the thread constructor. ** A COPY **
allocated in a thread-specific storage, as if by the function:

template <class T>
typename std::decay<T>::type decay_copy(T&& v) {
    return std::forward<T>(v);
}

Many people actually fail to understand that.

For this reason, most of the time what you really need is to wrap the
arguments with std::cref() or std::ref() to share the same data between
threads.
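
A minimal, untested sketch of the difference (the names sum, data, r1
and r2 are made up for illustration):

    #include <functional>
    #include <iostream>
    #include <thread>
    #include <vector>

    void sum(const std::vector<double>& v, double* res) {
        double s = 0;
        for (double x : v) s += x;
        *res = s;
    }

    int main() {
        std::vector<double> data{1, 2, 3};
        double r1 = 0, r2 = 0;

        // 'data' is decay-copied into thread-accessible storage;
        // sum() gets a reference to that copy.
        std::thread t1{sum, data, &r1};

        // std::cref passes a reference_wrapper, so sum() reads the
        // original 'data'. Safe here only because main does not modify
        // 'data' while t2 is running.
        std::thread t2{sum, std::cref(data), &r2};

        t1.join();
        t2.join();
        std::cout << r1 << ' ' << r2 << '\n'; // 6 6
    }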

Luca Risolia

Jun 12, 2015, 2:04:16 PM
On 12/06/2015 17:34, Doug Mika wrote:
> How do we choose between the thread constructor "copying" or "moving"
> the arguments to "thread-accessible storage"? (I had a look at the
> thread constructors at www.cplusplus.com but I didn't manage to
> figure it out from that - actually, template <class Fn, class...
> Args> explicit thread (Fn&& fn, Args&&... args);,


> don't the && imply
> "moving" of the arguments as they are/expect rvalues?)

No. Due to reference collapsing rules, an Arg&& becomes Arg& if you pass
an lvalue ref as argument (Arg&& & -> Arg&).

To answer your question then, you can "choose" by explicitly casting your
arguments to lvalue-references or rvalue-references respectively.
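
A small, untested sketch of that choice (consume and big are made-up
names):

    #include <string>
    #include <thread>
    #include <utility>

    void consume(std::string s) { /* use s */ }

    int main() {
        std::string big(1000000, 'x');

        std::thread t1{consume, big};             // lvalue: 'big' is copied into
                                                  // the thread's storage
        std::thread t2{consume, std::move(big)};  // rvalue: 'big' is moved into
                                                  // the thread's storage, leaving
                                                  // 'big' valid but unspecified
        t1.join();
        t2.join();
    }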

K. Frank

Jun 12, 2015, 8:43:42 PM
Hi Luca!

On Friday, June 12, 2015 at 1:34:45 PM UTC-4, Luca Risolia wrote:
> On 12/06/2015 18:27, K.Frank wrote:
> >
> > No, I don't think your description of what is happening is
> > correct.
> >
> > In short, res1 is a double, while res is a double*. A copy
> > of res1 is never made. res points to res1, in place on the
> > stack as a variable local to main.
> >
> > In more detail, here's Doug's code from "main":
> >
> > // ...
> > double res1;
> > double res2;
> > thread t1 {f,some_vec,&res1}; //f(some_vec,&res1) executes in a separate
>

It seems that you are saying two inconsistent things
here -- you first recognize that &res1 is a pointer,
but then repeat the mistaken claim that res1 is itself
a pointer.

> Sorry, typo: it's clear what I meant to say "res" is *a copy* of the
> passed pointer which is "&res1" - not "res1", of course.

Yes, "the passed pointer" is "&res1". res1 is a double
(not a pointer) and &res1 is a pointer (pointer-to-double).

> I read your "you don't want to access res1" as if one could not write
> res1 (which is a pointer itself),

res1 is *not* a pointer. It is a plain double.

> which is clearly perfectly safe.

Absent synchronization, it is *not* safe to write to res1
(or read from it) in main because thread t1 also has access
to res1 through the pointer variable res in t1's thread
function, f. When the thread function f runs, res has
the value &res1, so f could be writing to res1 (through
the pointer), while main is reading from res1. This would
be undefined behavior.

> You
> probably meant to say "dereference" instead of "access".

No, I meant to say "access" (as in "read from" or "write to").
You cannot dereference res1 because it is not a pointer.

To emphasize this point, here is the declaration of res1
from Doug's example code:

double res1;

res1 is declared to be a double, not a pointer-to-double
(nor a pointer to any other type).

My original points 1 and 2 stand. Doug's example code is
correct because his use of t1.join() provides the necessary
thread synchronization, but if the synchronization had not
been provided, the code would be incorrect and lead to
undefined behavior.

t1 has been "granted" access to res1 because it has been
passed the value &res1 -- a pointer to res1.

1) If t1 writes to res1 (through the pointer) while main
is reading from it, you have an error and undefined behavior.

2) If t1 writes to res1 (through the pointer) after res1
goes out of scope (because main has exited), you have an
error and undefined behavior.
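
A deliberately broken, untested sketch of case 2 (detach() stands in
for "no synchronization at all", and the helper names are made up):

    #include <chrono>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Same shape as Doug's f(): sum v into *res.
    void f(const std::vector<double>& v, double* res) {
        *res = std::accumulate(v.begin(), v.end(), 0.0);
    }

    void launch() {
        std::vector<double> v{1, 2, 3};
        double res = 0;
        std::thread t{f, v, &res};
        t.detach();  // no join: launch() can return while f is still running
    }                // 'res' is destroyed here; if f has not finished, it
                     // writes through a dangling pointer -- undefined behavior

    int main() {
        launch();
        // Give the detached thread a moment to run; the bug above remains
        // either way. The fix is to t.join() before 'res' goes out of scope.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }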

> However, I think what is important for the OP to notice in his example
> is that what you get in f(), when it is invoked in the context of a new
> thread of execution, is a const reference to **A COPY** of the original
> vector passed as argument to the thread constructor.

(I do agree that the thread function gets a copy of the
vector some_vec, and that main and t1 can safely access
their separate copies of some_vec at the same time without
synchronization. In particular, if t1 were to modify its
copy of some_vec, those modifications would not show up in
the original some_vec in main.)


Best.


K. Frank

Luca Risolia

Jun 13, 2015, 7:01:54 AM
On 13/06/2015 02:43, K. Frank wrote:
> res1 is *not* a pointer. It is a plain double.

Yes, I kept mixing the types in my mind. I was too tired :)
