Torch caffe binding


srikanth sridhar

Jan 5, 2016, 1:58:06 PM
to torch7
Hello all,


Recently, to try the "Fully Convolutional Networks for Semantic Segmentation" work, I installed torch-caffe-binding from here: https://github.com/szagoruyko/torch-caffe-binding


Since this segmentation work requires Caffe's "future release" version, I built that future version, and the same build is used for torch-caffe-binding.



Now, to check torch-caffe-binding, I am trying bvlc_alexnet.caffemodel.


While building Caffe's future release, I had some problems with 'make test'; everything else passed.


I continued with the compilation, and when I tried require "caffe", th returned true.


Then I tried bvlc_alexnet.caffemodel as described at the torch-caffe-binding link. When net:forward(input) is called, the iTorch notebook keeps running (the kernel stays busy) and the terminal displays an error like:


th> output = net:forward(input)
F0106 00:23:46.470368 23604 caffe.cpp:43] Check failed: bottom->size[0]*bottom->size[1]*bottom->size[2]*bottom->size[3] == input_blobs[i]->count() (309174 vs. 1545870) MatCaffe input size does not match the input size of the network
*** Check failure stack trace: ***



I am not using MatCaffe, and I cannot understand what is happening behind the scenes.
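
For reference, here is roughly what I ran, following the torch-caffe-binding README (the file paths are mine, and the batch size of 2 was my own choice):

require 'caffe'

-- load the deploy definition and the pretrained weights in test mode
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', 'test')

-- dummy input batch: 2 images, 3 channels, 227x227
input = torch.FloatTensor(2,3,227,227)
output = net:forward(input)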

Sorry for the long question. Any suggestions would be of great help.



regards
srikanth



Sergey Zagoruyko

Jan 5, 2016, 2:57:16 PM
to torch7 on behalf of srikanth sridhar
Looks like your input is not the shape Caffe expects; try input = torch.FloatTensor(10,3,227,227) instead of torch.FloatTensor(2,3,227,227).
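
The AlexNet deploy prototxt declares a fixed input shape with batch dimension 10, so (unless you reshape the network) the tensor you forward has to match it. Roughly:

-- deploy.prototxt declares input_shape { dim: 10 dim: 3 dim: 227 dim: 227 }
input = torch.FloatTensor(10,3,227,227)
output = net:forward(input)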

Sergey.


srikanth sridhar

Jan 6, 2016, 12:03:06 AM
to torch7
Hello Sergey,


When I tried input = torch.FloatTensor(10,3,227,227), it shows the following output in the terminal, and the iTorch notebook is still running:
   
[I 10:26:06.317 NotebookApp] Adapting to protocol v4.0 for kernel c5bd5dd7-28ec-4824-adf3-be0e7f4cfec9
WARNING: comm_open not handled yet   
   
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0106 10:26:16.692391  3275 net.cpp:50] Initializing net from parameters:
name: "AlexNet"
input: "data"
state {
  phase: TEST
}
input_shape {
  dim: 10
  dim: 3
  dim: 227
  dim: 227
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}
I0106 10:26:16.693889  3275 net.cpp:436] Input 0 -> data
I0106 10:26:16.714329  3275 layer_factory.hpp:76] Creating layer conv1
I0106 10:26:16.714395  3275 net.cpp:111] Creating Layer conv1
I0106 10:26:16.714411  3275 net.cpp:478] conv1 <- data
I0106 10:26:16.714431  3275 net.cpp:434] conv1 -> conv1
I0106 10:26:16.720396  3275 net.cpp:156] Setting up conv1
I0106 10:26:16.720448  3275 net.cpp:164] Top shape: 10 96 55 55 (2904000)
I0106 10:26:16.720484  3275 layer_factory.hpp:76] Creating layer relu1
I0106 10:26:16.720499  3275 net.cpp:111] Creating Layer relu1
I0106 10:26:16.720507  3275 net.cpp:478] relu1 <- conv1
I0106 10:26:16.720517  3275 net.cpp:420] relu1 -> conv1 (in-place)
I0106 10:26:16.723407  3275 net.cpp:156] Setting up relu1
I0106 10:26:16.723448  3275 net.cpp:164] Top shape: 10 96 55 55 (2904000)
I0106 10:26:16.723462  3275 layer_factory.hpp:76] Creating layer norm1
I0106 10:26:16.723484  3275 net.cpp:111] Creating Layer norm1
I0106 10:26:16.723495  3275 net.cpp:478] norm1 <- conv1
I0106 10:26:16.723508  3275 net.cpp:434] norm1 -> norm1
I0106 10:26:16.723534  3275 net.cpp:156] Setting up norm1
I0106 10:26:16.723551  3275 net.cpp:164] Top shape: 10 96 55 55 (2904000)
I0106 10:26:16.723561  3275 layer_factory.hpp:76] Creating layer pool1
I0106 10:26:16.723572  3275 net.cpp:111] Creating Layer pool1
I0106 10:26:16.723587  3275 net.cpp:478] pool1 <- norm1
I0106 10:26:16.723597  3275 net.cpp:434] pool1 -> pool1
I0106 10:26:16.723628  3275 net.cpp:156] Setting up pool1
I0106 10:26:16.723639  3275 net.cpp:164] Top shape: 10 96 27 27 (699840)
I0106 10:26:16.723649  3275 layer_factory.hpp:76] Creating layer conv2
I0106 10:26:16.723662  3275 net.cpp:111] Creating Layer conv2
I0106 10:26:16.723671  3275 net.cpp:478] conv2 <- pool1
I0106 10:26:16.723681  3275 net.cpp:434] conv2 -> conv2
I0106 10:26:16.724416  3275 net.cpp:156] Setting up conv2
I0106 10:26:16.724467  3275 net.cpp:164] Top shape: 10 256 27 27 (1866240)
I0106 10:26:16.724491  3275 layer_factory.hpp:76] Creating layer relu2
I0106 10:26:16.724509  3275 net.cpp:111] Creating Layer relu2
I0106 10:26:16.724529  3275 net.cpp:478] relu2 <- conv2
I0106 10:26:16.724545  3275 net.cpp:420] relu2 -> conv2 (in-place)
I0106 10:26:16.724565  3275 net.cpp:156] Setting up relu2
I0106 10:26:16.724581  3275 net.cpp:164] Top shape: 10 256 27 27 (1866240)
I0106 10:26:16.724596  3275 layer_factory.hpp:76] Creating layer norm2
I0106 10:26:16.724609  3275 net.cpp:111] Creating Layer norm2
I0106 10:26:16.724617  3275 net.cpp:478] norm2 <- conv2
I0106 10:26:16.724627  3275 net.cpp:434] norm2 -> norm2
I0106 10:26:16.724642  3275 net.cpp:156] Setting up norm2
I0106 10:26:16.724652  3275 net.cpp:164] Top shape: 10 256 27 27 (1866240)
I0106 10:26:16.724659  3275 layer_factory.hpp:76] Creating layer pool2
I0106 10:26:16.724669  3275 net.cpp:111] Creating Layer pool2
I0106 10:26:16.724678  3275 net.cpp:478] pool2 <- norm2
I0106 10:26:16.724686  3275 net.cpp:434] pool2 -> pool2
I0106 10:26:16.724697  3275 net.cpp:156] Setting up pool2
I0106 10:26:16.724706  3275 net.cpp:164] Top shape: 10 256 13 13 (432640)
I0106 10:26:16.724714  3275 layer_factory.hpp:76] Creating layer conv3
I0106 10:26:16.724725  3275 net.cpp:111] Creating Layer conv3
I0106 10:26:16.724732  3275 net.cpp:478] conv3 <- pool2
I0106 10:26:16.724741  3275 net.cpp:434] conv3 -> conv3
I0106 10:26:16.726446  3275 net.cpp:156] Setting up conv3
I0106 10:26:16.726500  3275 net.cpp:164] Top shape: 10 384 13 13 (648960)
I0106 10:26:16.726517  3275 layer_factory.hpp:76] Creating layer relu3
I0106 10:26:16.726533  3275 net.cpp:111] Creating Layer relu3
I0106 10:26:16.726543  3275 net.cpp:478] relu3 <- conv3
I0106 10:26:16.726554  3275 net.cpp:420] relu3 -> conv3 (in-place)
I0106 10:26:16.726567  3275 net.cpp:156] Setting up relu3
I0106 10:26:16.726575  3275 net.cpp:164] Top shape: 10 384 13 13 (648960)
I0106 10:26:16.726584  3275 layer_factory.hpp:76] Creating layer conv4
I0106 10:26:16.726595  3275 net.cpp:111] Creating Layer conv4
I0106 10:26:16.726603  3275 net.cpp:478] conv4 <- conv3
I0106 10:26:16.726611  3275 net.cpp:434] conv4 -> conv4
I0106 10:26:16.727447  3275 net.cpp:156] Setting up conv4
I0106 10:26:16.727493  3275 net.cpp:164] Top shape: 10 384 13 13 (648960)
I0106 10:26:16.727506  3275 layer_factory.hpp:76] Creating layer relu4
I0106 10:26:16.727520  3275 net.cpp:111] Creating Layer relu4
I0106 10:26:16.727531  3275 net.cpp:478] relu4 <- conv4
I0106 10:26:16.727540  3275 net.cpp:420] relu4 -> conv4 (in-place)
I0106 10:26:16.727552  3275 net.cpp:156] Setting up relu4
I0106 10:26:16.727560  3275 net.cpp:164] Top shape: 10 384 13 13 (648960)
I0106 10:26:16.727568  3275 layer_factory.hpp:76] Creating layer conv5
I0106 10:26:16.727579  3275 net.cpp:111] Creating Layer conv5
I0106 10:26:16.727586  3275 net.cpp:478] conv5 <- conv4
I0106 10:26:16.727604  3275 net.cpp:434] conv5 -> conv5
I0106 10:26:16.728345  3275 net.cpp:156] Setting up conv5
I0106 10:26:16.728363  3275 net.cpp:164] Top shape: 10 256 13 13 (432640)
I0106 10:26:16.728376  3275 layer_factory.hpp:76] Creating layer relu5
I0106 10:26:16.728387  3275 net.cpp:111] Creating Layer relu5
I0106 10:26:16.728394  3275 net.cpp:478] relu5 <- conv5
I0106 10:26:16.728409  3275 net.cpp:420] relu5 -> conv5 (in-place)
I0106 10:26:16.728420  3275 net.cpp:156] Setting up relu5
I0106 10:26:16.728427  3275 net.cpp:164] Top shape: 10 256 13 13 (432640)
I0106 10:26:16.728435  3275 layer_factory.hpp:76] Creating layer pool5
I0106 10:26:16.728443  3275 net.cpp:111] Creating Layer pool5
I0106 10:26:16.728451  3275 net.cpp:478] pool5 <- conv5
I0106 10:26:16.728458  3275 net.cpp:434] pool5 -> pool5
I0106 10:26:16.728469  3275 net.cpp:156] Setting up pool5
I0106 10:26:16.728477  3275 net.cpp:164] Top shape: 10 256 6 6 (92160)
I0106 10:26:16.728484  3275 layer_factory.hpp:76] Creating layer fc6
I0106 10:26:16.728497  3275 net.cpp:111] Creating Layer fc6
I0106 10:26:16.728503  3275 net.cpp:478] fc6 <- pool5
I0106 10:26:16.728512  3275 net.cpp:434] fc6 -> fc6
I0106 10:26:16.775821  3275 net.cpp:156] Setting up fc6
I0106 10:26:16.775907  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.775940  3275 layer_factory.hpp:76] Creating layer relu6
I0106 10:26:16.776005  3275 net.cpp:111] Creating Layer relu6
I0106 10:26:16.776067  3275 net.cpp:478] relu6 <- fc6
I0106 10:26:16.776123  3275 net.cpp:420] relu6 -> fc6 (in-place)
I0106 10:26:16.776195  3275 net.cpp:156] Setting up relu6
I0106 10:26:16.776240  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.776301  3275 layer_factory.hpp:76] Creating layer drop6
I0106 10:26:16.776422  3275 net.cpp:111] Creating Layer drop6
I0106 10:26:16.776455  3275 net.cpp:478] drop6 <- fc6
I0106 10:26:16.776480  3275 net.cpp:420] drop6 -> fc6 (in-place)
I0106 10:26:16.776542  3275 net.cpp:156] Setting up drop6
I0106 10:26:16.776573  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.776593  3275 layer_factory.hpp:76] Creating layer fc7
I0106 10:26:16.776619  3275 net.cpp:111] Creating Layer fc7
I0106 10:26:16.776638  3275 net.cpp:478] fc7 <- fc6
I0106 10:26:16.776662  3275 net.cpp:434] fc7 -> fc7
I0106 10:26:16.798665  3275 net.cpp:156] Setting up fc7
I0106 10:26:16.798724  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.798741  3275 layer_factory.hpp:76] Creating layer relu7
I0106 10:26:16.798758  3275 net.cpp:111] Creating Layer relu7
I0106 10:26:16.798765  3275 net.cpp:478] relu7 <- fc7
I0106 10:26:16.798774  3275 net.cpp:420] relu7 -> fc7 (in-place)
I0106 10:26:16.798789  3275 net.cpp:156] Setting up relu7
I0106 10:26:16.798796  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.798804  3275 layer_factory.hpp:76] Creating layer drop7
I0106 10:26:16.798817  3275 net.cpp:111] Creating Layer drop7
I0106 10:26:16.798825  3275 net.cpp:478] drop7 <- fc7
I0106 10:26:16.798833  3275 net.cpp:420] drop7 -> fc7 (in-place)
I0106 10:26:16.798845  3275 net.cpp:156] Setting up drop7
I0106 10:26:16.798857  3275 net.cpp:164] Top shape: 10 4096 (40960)
I0106 10:26:16.798879  3275 layer_factory.hpp:76] Creating layer fc8
I0106 10:26:16.798894  3275 net.cpp:111] Creating Layer fc8
I0106 10:26:16.798907  3275 net.cpp:478] fc8 <- fc7
I0106 10:26:16.798920  3275 net.cpp:434] fc8 -> fc8
I0106 10:26:16.804666  3275 net.cpp:156] Setting up fc8
I0106 10:26:16.804715  3275 net.cpp:164] Top shape: 10 1000 (10000)
I0106 10:26:16.804729  3275 layer_factory.hpp:76] Creating layer prob
I0106 10:26:16.804746  3275 net.cpp:111] Creating Layer prob
I0106 10:26:16.804757  3275 net.cpp:478] prob <- fc8
I0106 10:26:16.804769  3275 net.cpp:434] prob -> prob
I0106 10:26:16.804788  3275 net.cpp:156] Setting up prob
I0106 10:26:16.804801  3275 net.cpp:164] Top shape: 10 1000 (10000)
I0106 10:26:16.804810  3275 net.cpp:241] prob does not need backward computation.
I0106 10:26:16.804817  3275 net.cpp:241] fc8 does not need backward computation.
I0106 10:26:16.804824  3275 net.cpp:241] drop7 does not need backward computation.
I0106 10:26:16.804831  3275 net.cpp:241] relu7 does not need backward computation.
I0106 10:26:16.804838  3275 net.cpp:241] fc7 does not need backward computation.
I0106 10:26:16.804846  3275 net.cpp:241] drop6 does not need backward computation.
I0106 10:26:16.804853  3275 net.cpp:241] relu6 does not need backward computation.
I0106 10:26:16.804860  3275 net.cpp:241] fc6 does not need backward computation.
I0106 10:26:16.804867  3275 net.cpp:241] pool5 does not need backward computation.
I0106 10:26:16.804874  3275 net.cpp:241] relu5 does not need backward computation.
I0106 10:26:16.804882  3275 net.cpp:241] conv5 does not need backward computation.
I0106 10:26:16.804889  3275 net.cpp:241] relu4 does not need backward computation.
I0106 10:26:16.804896  3275 net.cpp:241] conv4 does not need backward computation.
I0106 10:26:16.804903  3275 net.cpp:241] relu3 does not need backward computation.
I0106 10:26:16.804910  3275 net.cpp:241] conv3 does not need backward computation.
I0106 10:26:16.804919  3275 net.cpp:241] pool2 does not need backward computation.
I0106 10:26:16.804927  3275 net.cpp:241] norm2 does not need backward computation.
I0106 10:26:16.804936  3275 net.cpp:241] relu2 does not need backward computation.
I0106 10:26:16.804944  3275 net.cpp:241] conv2 does not need backward computation.
I0106 10:26:16.804952  3275 net.cpp:241] pool1 does not need backward computation.
I0106 10:26:16.804960  3275 net.cpp:241] norm1 does not need backward computation.
I0106 10:26:16.804968  3275 net.cpp:241] relu1 does not need backward computation.
I0106 10:26:16.804976  3275 net.cpp:241] conv1 does not need backward computation.
I0106 10:26:16.804985  3275 net.cpp:284] This network produces output prob
I0106 10:26:16.805032  3275 net.cpp:298] Network initialization done.
I0106 10:26:16.805055  3275 net.cpp:299] Memory required for data: 77048960
I0106 10:26:19.392630  3275 upgrade_proto.cpp:611] Attempting to upgrade input file specified using deprecated transformation parameters: bvlc_alexnet.caffemodel
I0106 10:26:19.392684  3275 upgrade_proto.cpp:614] Successfully upgraded file specified using deprecated data transformation parameters.
W0106 10:26:19.392699  3275 upgrade_proto.cpp:616] Note that future Caffe releases will only support transform_param messages for transformation fields.
I0106 10:26:19.392709  3275 upgrade_proto.cpp:620] Attempting to upgrade input file specified using deprecated V1LayerParameter: bvlc_alexnet.caffemodel
I0106 10:26:19.571703  3275 upgrade_proto.cpp:628] Successfully upgraded file specified using deprecated V1LayerParameter
F0106 10:27:12.355908  3275 caffe.cpp:56] Unknown Caffe mode.

*** Check failure stack trace: ***
[I 10:28:05.956 NotebookApp] Saving file at /Untitled1.ipynb


Is it something to do with the way I built the Caffe future version?


regards
srikanth










srikanth sridhar

Jan 6, 2016, 12:20:57 AM
to torch7
Hello Sergey,


After looking at the caffe.cpp file, I did net:setModeCPU().


Now it gives a 10x1000x1x1 output of probabilities.
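
In case it helps anyone, this is the small change that fixed it for me:

-- the forward pass was failing with "Unknown Caffe mode" until the mode was set explicitly
net:setModeCPU()
output = net:forward(input)   -- 10x1000x1x1 tensor of class probabilities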


Is it possible to use the GPU? I can use the GPU in Torch7, but I didn't build Caffe with GPU support. I use the GPU in Torch7 via cutorch and cunn, not cuDNN; I have some issues installing cuDNN.

So, is there a way to run this on the GPU without loadcaffe? I am having issues using loadcaffe for the fully convolutional semantic segmentation work, which is what I have to experiment with.



Regards

Francisco Vitor Suzano Massa

Jan 6, 2016, 1:41:05 AM
to torch7
You need to compile Caffe with GPU support (not necessarily with cuDNN support, though) if you want to run it on the GPU inside Torch. When that's done, simply do:
net:setModeGPU()

In any case, with this caffe binding you always pass a FloatTensor as input to the network, even when it runs on the GPU (the conversion is done inside the binding).
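
Putting that together, a rough sketch (assuming Caffe has been rebuilt without CPU_ONLY):

require 'caffe'
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', 'test')
net:setModeGPU()

-- the input stays a CPU FloatTensor; the binding handles the copy to the GPU
input = torch.FloatTensor(10,3,227,227)
output = net:forward(input)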

srikanth sridhar

Jan 6, 2016, 2:49:32 AM
to torch7
Hello Francisco,

Thank you for replying

I built Caffe by uncommenting the CPU_ONLY option in Makefile.config.

In that file, the second-to-last line, TEST_GPUID := 0, is also uncommented. Does this mean I have actually built Caffe with GPU support as well, or should I comment out CPU_ONLY and uncomment USE_CUDNN?


regards
srikanth

Francisco Vitor Suzano Massa

Jan 6, 2016, 5:55:22 AM
to torch7
You need to comment out the line containing CPU_ONLY:
https://github.com/BVLC/caffe/blob/master/Makefile.config.example#L7-L8

If you want to use cuDNN, uncomment the line containing USE_CUDNN:
https://github.com/BVLC/caffe/blob/master/Makefile.config.example#L4-L5
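
For reference, the relevant switches in Makefile.config look roughly like this (the exact comments may differ between Caffe versions):

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

For a GPU build, leave CPU_ONLY commented out; uncomment USE_CUDNN only if cuDNN is installed.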

srikanth sridhar

Jan 6, 2016, 11:17:50 AM
to torch7
Thank you, Francisco. I built Caffe with GPU support by commenting out CPU_ONLY. It works fine when I tried AlexNet in GPU mode. When running 'make runtest', it passed and printed that two tests are disabled.

Great, and thank you very much.