to Caffe Users
I built a ResNet-34 clone and did some manual timing tests, with both CAFFE and CUDNN as the engine for the convolution layers. The timing results I obtained with CAFFE are consistent with measurements I made some weeks ago. With CUDNN as the engine, it now takes nearly double the time in comparison.
Does cuDNN performance depend on specific kernel and library versions? Or are there some "hidden" configuration files that I forgot about?
Thanks for any ideas.
My system: driver 375.39, cuDNN 5.1, CUDA 8.0, Ubuntu 16.04.2 LTS, kernel 4.4.0-62-generic
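For reference, the engine is selected per convolution layer in the prototxt; a minimal sketch (layer name and dimensions are illustrative, not from the actual net):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 7
    stride: 2
    engine: CUDNN   # or CAFFE for Caffe's own im2col+GEMM path; DEFAULT picks CUDNN when built with it
  }
}
```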
Detlef Schmicker
Feb 20, 2017, 3:18:55 AM
to Caffe Users
If you have a GTX 970, it has 3.5 GB of fast memory and 0.5 GB of slow memory, which makes a big difference if part of the net lands in the slow segment because you need more than 3.5 GB.
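One way to check whether the net crosses that boundary is to watch GPU memory usage while training (a sketch; requires the NVIDIA driver tools, so it only runs on a machine with a GPU):

```shell
# Poll used vs. total GPU memory once per second during a training run.
# On a GTX 970, sustained usage above ~3.5 GB means allocations spill
# into the slower 0.5 GB segment.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
```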
Bart Vosteen
Feb 20, 2017, 6:54:29 AM
to Caffe Users
I use a GTX 1060 6 GB. To my knowledge, all of its memory is attached with the same bandwidth.
Just to be clear: the only thing that changed was the point in time at which I took the measurements.
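If only time passed between measurements, a driver or library update is a plausible suspect. One way to confirm which cuDNN version Caffe actually loads at runtime (a sketch; the header path is the typical Ubuntu install location and may differ on other setups):

```shell
# cuDNN version the headers declare (typical Ubuntu path)
grep -A2 'define CUDNN_MAJOR' /usr/include/cudnn.h
# Shared library the caffe binary resolves at runtime
ldd "$(which caffe)" | grep -i cudnn
```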