/nn/SpatialConvolution.lua:88: bad argument #2 to 'SpatialConvolutionMM_updateOutput' (3D or 4D(batch mode) tensor expected)
stack traceback:
[C]: at 0x7f30eee04930
[C]: in function 'SpatialConvolutionMM_updateOutput'
/lua/5.1/nn/SpatialConvolution.lua:88: in function 'updateOutput'
/lua/5.1/nn/Sequential.lua:25: in function 'forward'
MyCNN.lua:41: in main chunk
require 'nn'
require 'image'
require 'lfs'
-- ACTIVATION FUNCTION
ReLU = nn.ReLU
-- NETWORK TOPOLOGY
-- SpatialConvolution(nInputPlane, nOutputPlane, kW, kH, [dW], [dH], [padding])
-- SpatialMaxPooling(kW, kH [, dW, dH])
local model = nn.Sequential()
model:add(nn.SpatialConvolution(3, 3, 240, 240))
model:add(nn.SpatialConvolution(3, 3, 11, 11, 4, 4)):add(ReLU(true))
model:add(nn.SpatialConvolution(3, 48, 5, 5)):add(ReLU(true))
model:add(nn.SpatialMaxPooling(5, 5, 3, 3))
model:add(nn.SpatialConvolution(48, 256, 3, 3)):add(ReLU(true))
model:add(nn.SpatialConvolution(256, 192, 3, 3)):add(ReLU(true))
model:add(nn.SpatialConvolution(192, 192, 3, 3)):add(ReLU(true))
model:add(nn.Linear(4096, 4096)):add(nn.ReLU(true))
model:add(nn.Linear(4096, 2)):add(nn.ReLU(true))
-- TRAINING THE NETWORK --
criterion = nn.MSECriterion()
for file in lfs.dir(lfs.currentdir().."/FinalData") do
    if (file ~= ".") and (file ~= "..") then
        local input = image.load("FinalData/"..file, 3)  -- torch.DoubleTensor, 3 x H x W
        local output = torch.Tensor(2)
        local c = file:sub(1, 1)
        local outputStorage = output:storage()
        -- encode the class label from the first character of the filename
        if c == "A" then
            outputStorage[1] = 0
            outputStorage[2] = 1
        elseif c == "G" then
            outputStorage[1] = 1
            outputStorage[2] = 0
        else
            error("Invalid Input Image Filename")
        end
        criterion:forward(model:forward(input), output)
        model:zeroGradParameters()
        model:backward(input, criterion:backward(model.output, output))
        model:updateParameters(0.01)
    end
end
What are the dimensions of your input? Your model seems to require a very large image as input. It would be good to check the dimensions of the output of each convolution layer:
conv_nodes = model:findModules('nn.SpatialConvolution')
for i = 1, #conv_nodes do
    print(conv_nodes[i].output:size())
end
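Working the expected sizes out by hand shows the same thing. A quick sketch of the arithmetic (in Python here just for illustration; it assumes the standard unpadded formula out = floor((in - kernel) / stride) + 1, which is what SpatialConvolution uses with no padding):

```python
# Expected spatial output size of an unpadded convolution:
# out = (in - kernel) // stride + 1  (assumed standard formula)
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

size = 240                      # 240x240 input image
size = conv_out(size, 240)      # SpatialConvolution(3, 3, 240, 240)
# The first layer collapses the whole image to a 1x1 map, so the next
# layer, SpatialConvolution(3, 3, 11, 11, 4, 4), has nothing left to
# convolve over -- its 11x11 kernel cannot fit in a 1x1 input.
assert size == 1 and size < 11
```

So a 240x240 kernel on a 240x240 image leaves a single pixel per plane, which is consistent with the forward pass failing early in the stack.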
When I run that snippet it prints "nn.SpatialConvolution" and then fails with:
Tensor.lua:23: calling 'min' on bad self (tensor must have one dimension)
The input to the network is a color image, 240px by 240px.
My hope was to achieve this by having the first layer of the network take 3 input planes (one each for the red, green, and blue values), with one kernel of size 240x240.
I am starting to think I am not understanding Torch's convolution layers correctly. Am I too far off from the right idea?
Thank you.
On Saturday, April 25, 2015 at 11:43:49 AM UTC-4, Jonghoon Jin wrote:
> What is the dimensions of input? Your model seems to require a large image as input.