Receiving error: Assertion `cur_target >= 0 && cur_target < n_classes' failed.

1,220 views

jjinho

2016/03/21 19:10:28
To: torch7
To teach myself how to use Torch7 and Lua, I am trying to apply Torch7 to the "Digit Recognizer" problem from Kaggle. For background, this uses the MNIST data: the training data is given as a 42000 x 785 CSV file with 42000 examples, where the first column of each row is the label and the remaining 784 values are the 28 x 28 image.

I imported the data into a 42000 x 1 x 28 x 28 tensor and split this into a training (40000 x 1 x 28 x 28) tensor and a validation (2000 x 1 x 28 x 28) tensor, and have attempted to modify the code from the Deep Learning with Torch: the 60-minute blitz tutorial to use on this data. 

I have mainly modified the convolutional neural network that was given in the example program to reflect the fact that I am using a 28 x 28 input image rather than a 32 x 32 input image.

After running the file (which is shown below), I get the following error:

/home/jjinho/torch/install/bin/luajit: /home/jjinho/torch/install/share/lua/5.1/nn/THNN.lua:109: Assertion `cur_target >= 0 && cur_target < n_classes' failed.  at /tmp/luarocks_nn-scm-1-3214/nn/lib/THNN/generic/ClassNLLCriterion.c:38
stack traceback:
[C]: in function 'v'
/home/jjinho/torch/install/share/lua/5.1/nn/THNN.lua:109: in function 'ClassNLLCriterion_updateOutput'
...aul/torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:41: in function 'forward'
...ul/torch/install/share/lua/5.1/nn/StochasticGradient.lua:35: in function 'train'
pcnet.lua:106: in main chunk
[C]: in function 'dofile'
...jjinho/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

I am stumped and not sure what to make of this. I would really appreciate help in figuring out what I am doing wrong. Thank you all very much.

The code is:

require 'nn'

-- Data
trainset = torch.load('train_mnist.th7')
trainsetLabels = torch.load('train_label_mnist.th7')

trainData = trainset[{ {1,40000}, {}, {}, {} }]
trainLabel = trainsetLabels[{ {1,40000} }]

validData = trainset[{ {40001, 42000}, {}, {}, {} }]
validLabel = trainsetLabels[{ {40001, 42000} }]

-- Preparing training data for use with nn.StochasticGradient
train = {
  data = trainData,
  label = trainLabel
}

-- nn.StochasticGradient requires that the training set have an index
setmetatable(train, {__index = function(t, i)  return { t.data[i], t.label[i] } end} );

-- nn.StochasticGradient requires that the training set return size
function train:size() return self.data:size(1) end

-- Prepare validation data for use with nn.StochasticGradient
validate = {
  data = validData,
  label = validLabel
}
 
setmetatable(validate, {__index = function(t, i)  return { t.data[i], t.label[i] } end} );
function validate:size() return self.data:size(1) end

-- Classes
classes = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}

-- Preprocessing training data
mean = {}
stdv = {}

for i=1,40000 do
  mean[i] = train.data[{ {i}, {}, {}, {} }]:mean()
  train.data[{ {i}, {}, {}, {} }]:add(-mean[i])
  stdv[i] = train.data[{ {i}, {}, {}, {} }]:std()
  train.data[{ {i}, {}, {}, {} }]:div(stdv[i])
end

-- Preprocessing validation data
for i=1,2000 do
  mean[i] = validate.data[{ {i}, {}, {}, {} }]:mean()
  validate.data[{ {i}, {}, {}, {} }]:add(-mean[i])
  stdv[i] = validate.data[{ {i}, {}, {}, {} }]:std()
  validate.data[{ {i}, {}, {}, {} }]:div(stdv[i])
end

-- Model
net = nn.Sequential()
-- 1 x 28 x 28 -> 6 x 24 x 24
net:add(nn.SpatialConvolution(1, 6, 5, 5))
net:add(nn.ReLU())
-- 6 x 24 x 24 -> 6 x 12 x 12
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.ReLU())
-- 6 x 12 x 12 -> 16 x 8 x 8
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.ReLU())
-- 16 x 8 x 8 -> 16 x 4 x 4
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.View(16 * 4 * 4))
net:add(nn.Linear(16 * 4 * 4, 120))
net:add(nn.ReLU())
net:add(nn.Linear(120, 84))
net:add(nn.ReLU())
net:add(nn.Linear(84, #classes))
net:add(nn.LogSoftMax())

print(net)

-- Criterion
criterion = nn.ClassNLLCriterion()

-- Training
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.001
trainer.maxIteration = 5


trainer:train(train)

correct = 0
for i=1,2000 do
  local groundtruth = validate.label[i]
  local prediction = net:forward(validate.data[i])
  -- true here means sorting in descending order
  local confidences, indices = torch.sort(prediction, true)
  if groundtruth == indices[1] then
    correct = correct + 1
  end
end

print(correct)

Anuj Godase

2016/10/09 7:23:46
To: torch7
Did you find out what the issue is? I am having a similar problem.

Anuj Godase

2016/10/09 7:44:10
To: torch7
The issue is that Torch expects the classes to be indexed from 1 to n, so for the MNIST dataset they have to be 1 to 10. The error is caused by the label '0'.

I worked around the issue by replacing all the '0' labels with '10'.
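To illustrate the workaround described above (a sketch, not the poster's exact code): the 0-based MNIST labels need to be remapped into the 1..10 range before training, since ClassNLLCriterion asserts `cur_target >= 0 && cur_target < n_classes` on the 0-based internal index. With a torch tensor, the masked assignment `trainLabel[trainLabel:eq(0)] = 10` does this in place; the same idea in plain Lua over a hypothetical label table:

```lua
-- Sketch of the workaround: ClassNLLCriterion expects targets in 1..n_classes,
-- so remap the raw MNIST label 0 to class index 10 before training.
-- With a torch tensor the equivalent one-liner is:
--   trainLabel[trainLabel:eq(0)] = 10
local labels = {5, 0, 4, 1, 9, 0}     -- hypothetical raw labels (0-9)
for i = 1, #labels do
  if labels[i] == 0 then
    labels[i] = 10                    -- class '0' becomes index 10
  end
end
-- every label is now in the valid range 1..10
for i = 1, #labels do
  assert(labels[i] >= 1 and labels[i] <= 10)
end
print(table.concat(labels, " "))      -- 5 10 4 1 9 10
```

Note that the `classes` table in the original code should then list '0' last (i.e. `{'1', ..., '9', '0'}`) so that class index 10 still prints as the digit 0, and the validation check `groundtruth == indices[1]` must compare against the remapped labels as well.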