to Caffe Users
Hi,
We are able to run one caffe model on multiple images on one GPU.
We have not been able to run multiple caffe models on multiple images on one GPU to make predictions in parallel, and wanted to check whether that is possible.
The models are pretty light and we are using an NVIDIA 980.
Thanks,
Abhinav
Jan C Peters
Oct 26, 2015, 6:26:06 AM
to Caffe Users
As long as you are running caffe (or pycaffe) in different OS processes, that shouldn't be a problem (I am running multiple caffe instances for training on a single GPU right now). The only points where you may get into trouble are:
- running short of available memory on the GPU if you have too many processes, or too large batch sizes or sample dimensions
- if you are using the same sample db as input for multiple caffe instances, you probably can't use LevelDB (afaik it has no read-only mode and only allows exclusive access); LMDB and HDF5 shouldn't have that problem.
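A minimal sketch of the one-process-per-model pattern described above, using only Python's standard multiprocessing module. The `predict()` function here is a hypothetical stand-in; in real use each worker would load its own net inside the worker function (e.g. `import caffe; caffe.set_mode_gpu(); net = caffe.Net(prototxt, weights, caffe.TEST)`), never in the parent, so that each OS process gets its own CUDA context:

```python
import multiprocessing as mp

def predict(model_name, images):
    # Hypothetical stand-in for loading a Caffe net and calling
    # net.forward() on each image; returns one result per image.
    return [(model_name, img) for img in images]

def worker(model_name, images, queue):
    # In real use, load the net HERE (inside the child process).
    # All processes share one GPU, so batch sizes must be small
    # enough that every net's memory fits on the card at once.
    queue.put(predict(model_name, images))

if __name__ == "__main__":
    images = ["img0.jpg", "img1.jpg"]
    queue = mp.Queue()
    # One independent process per model -> predictions run in parallel
    # on the same GPU, each with its own CUDA context.
    procs = [mp.Process(target=worker, args=(name, images, queue))
             for name in ("model_a", "model_b")]
    for p in procs:
        p.start()
    results = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    print(len(results))  # one result list per model process
```

Note the input-sharing caveat from above still applies: if both workers read the same database, use LMDB or HDF5 rather than LevelDB.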