efficiency of ParallelTable


Yossi Biton

Mar 26, 2015, 11:36:26 AM
to tor...@googlegroups.com
Hello,

I'm using a set of neural networks, each applied to a different region of the image. Currently I train each network in a separate process, but I'm looking for another approach that is less I/O expensive.
ParallelTable seems relevant (I can feed it a table with the relevant image regions), but it appears to apply each branch sequentially rather than in parallel. Am I right?
Do you know a good solution for my case?
I'd prefer to train all the networks in one process, in the fastest way available.
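For reference, here is a rough sketch of the kind of setup I mean (the layer sizes are just illustrative):

-- one branch per image region; ParallelTable applies branch i to input[i],
-- one after the other
require 'nn'

local net = nn.ParallelTable()
net:add(nn.SpatialConvolution(3, 16, 5, 5))  -- branch for region 1
net:add(nn.SpatialConvolution(3, 16, 5, 5))  -- branch for region 2

local regions = {torch.rand(3, 32, 32), torch.rand(3, 32, 32)}
local outputs = net:forward(regions)  -- a table of two outputs, computed sequentially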

soumith

Mar 26, 2015, 11:41:08 AM
to torch7 on behalf of Yossi Biton
ParallelTable is indeed not multi-threaded (if that is what you are asking about).
For multithreaded training you can use the threads package: https://github.com/torch/threads-ffi/tree/master/benchmark
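Roughly, a sketch of what that looks like (the model, sizes and data below are just placeholders; see the benchmark for a real example):

local threads = require 'threads'

local nThreads = 4
local pool = threads.Threads(
   nThreads,
   function()
      require 'nn'   -- each worker thread loads its own copy of the packages
   end
)

for i = 1, nThreads do
   pool:addjob(
      function()
         -- runs in a worker thread: train the i-th network on its region
         local net = nn.Sequential():add(nn.Linear(100, 10))
         local criterion = nn.MSECriterion()
         local input, target = torch.rand(100), torch.rand(10)
         local loss = criterion:forward(net:forward(input), target)
         return loss
      end,
      function(loss)
         -- runs back in the main thread with the job's return value
         print(string.format('job %d done, loss = %f', i, loss))
      end
   )
end

pool:synchronize()   -- wait for all jobs to finish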


Yossi Biton

Mar 27, 2015, 4:07:50 AM
to tor...@googlegroups.com
Thanks.
Assuming I'd prefer not to deal with threads, is there a way to use shared memory (shared between different processes) in Lua?
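Something along the lines of the sketch below is what I have in mind. I'm assuming here that torch storages can be constructed on top of a memory-mapped file, so the exact constructor arguments may be off:

require 'torch'

-- assumed API: a storage memory-mapped onto a file, with shared = true so
-- that writes are visible to every process mapping the same file
local nElements = 1000
local storage = torch.FloatStorage('/tmp/shared_params.bin', true, nElements)
local params  = torch.FloatTensor(storage)

params:fill(0)   -- another process opening the same file would see this write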

