self.loss = lasagne.objectives.categorical_crossentropy(self.prediction, self.target_var)
self.loss = self.loss.mean()
self.params = lasagne.layers.get_all_params(self.network, trainable=True)
self.updates = lasagne.updates.nesterov_momentum(self.loss, self.params, learning_rate=learning_rate, momentum=momentum)
self.test_prediction = lasagne.layers.get_output(self.network, deterministic=True)
self.test_loss = lasagne.objectives.categorical_crossentropy(self.test_prediction, self.target_var)
self.test_loss = self.test_loss.mean()
# As a bonus, also create an expression for the classification accuracy:
self.test_acc = T.mean(T.eq(T.argmax(self.test_prediction, axis=1), self.target_var), dtype=theano.config.floatX)
# Compile a function performing a training step on a mini-batch (by giving
# the updates dictionary) and returning the corresponding training loss:
self.train_fn = theano.function([self.input_var, self.target_var], self.loss, updates=self.updates)
# Compile a second function computing the validation loss and accuracy:
self.val_fn = theano.function([self.input_var, self.target_var], [self.test_loss, self.test_acc])
# Lasagne Function
self.prediction = lasagne.layers.get_output(self.network)
# create loss function
# Create a loss expression for training, i.e., a scalar objective we want
# to minimize (for our multi-class problem, it is the cross-entropy loss):
#self.loss = lasagne.objectives.categorical_crossentropy(self.prediction, self.target_var)
self.loss = lasagne.objectives.squared_error(self.prediction, self.target_var)
self.loss = self.loss.mean()
self.params = lasagne.layers.get_all_params(self.network, trainable=True)
self.updates = lasagne.updates.nesterov_momentum(self.loss, self.params, learning_rate=learning_rate, momentum=momentum)
self.test_prediction = lasagne.layers.get_output(self.network, deterministic=True)
# Use the deterministic test prediction here (the original passed self.prediction by mistake):
self.test_loss = lasagne.objectives.squared_error(self.test_prediction, self.target_var)
self.test_loss = self.test_loss.mean()
# As a bonus, also create an expression for the classification accuracy:
self.test_acc = T.mean(T.eq(T.argmax(self.test_prediction, axis=1), self.target_var), dtype=theano.config.floatX)
# Compile a function performing a training step on a mini-batch (by giving
# the updates dictionary) and returning the corresponding training loss:
self.train_fn = theano.function([self.input_var, self.target_var], self.loss, updates=self.updates)
# Compile a second function computing the validation loss and accuracy:
self.val_fn = theano.function([self.input_var, self.target_var], [self.test_loss, self.test_acc])
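Here squared_error(...).mean() is simply the mean squared error over the batch. A minimal NumPy sketch with hypothetical values, just to make the objective concrete:

```python
import numpy as np

# Hypothetical predictions and targets, both shaped (batch, 1)
prediction = np.array([[0.5], [1.5], [2.0]])
target = np.array([[1.0], [1.0], [2.0]])

# Element-wise squared error, then the mean -> one scalar training loss
loss = np.mean((prediction - target) ** 2)
print(loss)  # (0.25 + 0.25 + 0.0) / 3
```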
I don't know what your question is, but maybe lasagne.objectives.squared_error() is the answer.
--
You received this message because you are subscribed to a topic in the Google Groups "lasagne-users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/lasagne-users/fBtOHc33svM/unsubscribe.
To unsubscribe from this group and all its topics, send an email to lasagne-user...@googlegroups.com.
To post to this group, send email to lasagn...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/lasagne-users/650ea84b-9fbf-4a73-b0a7-edf17b0f9dd1%40googlegroups.com.
Inputs shapes: [(1000, 1), (1, 1000)]
input_var = T.tensor4('inputs')
target_var = T.ivector('targets')
With batch_size = 1 it works.
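The "Inputs shapes: [(1000, 1), (1, 1000)]" error above is a shape mismatch between a column-vector prediction and a row-vector (or flat) target. Theano refuses the elementwise op; NumPy would instead silently broadcast it to a (1000, 1000) matrix, which is why the reshape to (-1, 1) matters. A sketch:

```python
import numpy as np

pred = np.zeros((1000, 1))    # network output: one value per sample
targets = np.arange(1000.0)   # flat target vector, shape (1000,)

# Mismatched shapes broadcast to an all-pairs (1000, 1000) matrix
bad = (pred - targets.reshape(1, 1000)) ** 2
print(bad.shape)   # (1000, 1000)

# A column-vector target gives the intended per-sample error
good = (pred - targets.reshape(-1, 1)) ** 2
print(good.shape)  # (1000, 1)
```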
Constructor | dtype | ndim | shape | broadcastable
ivector     | int32 | 1    | (?,)  | (False,)
imatrix     | int32 | 2    | (?,?) | (False, False)
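So a T.ivector('targets') expects a 1-D int32 array at call time. A NumPy sketch of converting a hypothetical float column vector of labels into that form:

```python
import numpy as np

# Hypothetical targets loaded as a float column vector, shape (3, 1)
y = np.array([[0.0], [2.0], [1.0]])

# An ivector needs 1-D int32: ravel() drops the extra axis, astype fixes the dtype
y_ivec = y.ravel().astype(np.int32)
print(y_ivec.dtype, y_ivec.shape)  # int32 (3,)
```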
Carefully read the error message and understand what each part means. Don't just guess at a fix because you got an error; once you learn how to read it, the message tells you everything you need to know.
# We iterate over epochs:
for epoch in range(0, max_epochs):
    # In each epoch, we do a full pass over the training data:
    train_err = 0
    train_batches = 0
    epoch_start_time = time.time()
    for batch in self.iterate_minibatches(self.x_train, self.y_train, self.batch_size, shuffle=False):
        inputs, targets = batch
        targets = targets.reshape(-1, 1)
        train_err += self.train_fn(inputs, targets)
        train_batches += 1
    # And a full pass over the validation data:
    val_err = 0
    val_acc = 0
    val_batches = 0
    for batch in self.iterate_minibatches(self.x_validation, self.y_validation, self.batch_size, shuffle=False):
        inputs, targets = batch
        targets = targets.reshape(-1, 1)
        err, acc = self.val_fn(inputs, targets)
        val_err += err
        val_acc += acc
        val_batches += 1
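The loop relies on an iterate_minibatches helper that isn't shown. A minimal sketch in plain NumPy; the name and signature follow the standard Lasagne tutorial, but this body is an assumption, not the poster's code:

```python
import numpy as np

def iterate_minibatches(inputs, targets, batch_size, shuffle=False):
    """Yield (inputs, targets) slices of size batch_size; trailing
    samples that don't fill a whole batch are dropped."""
    assert len(inputs) == len(targets)
    indices = np.arange(len(inputs))
    if shuffle:
        np.random.shuffle(indices)
    for start in range(0, len(inputs) - batch_size + 1, batch_size):
        excerpt = indices[start:start + batch_size]
        yield inputs[excerpt], targets[excerpt]

# Usage: 10 samples, batch size 4 -> two full batches, last 2 samples dropped
x = np.arange(10).reshape(10, 1)
y = np.arange(10)
batches = list(iterate_minibatches(x, y, 4))
print(len(batches))  # 2
```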
Seems it worked.
# As a bonus, also create an expression for the classification accuracy:
self.test_acc = T.mean(T.eq(T.argmax(self.test_prediction, axis=1), self.target_var), dtype=theano.config.floatX)
Is there an accuracy-like metric that fits this task, i.e. for an output trained with MSE?
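The argmax-based accuracy above only makes sense for multi-class outputs; with a single regression output, argmax over axis 1 is always 0. One common stand-in, not part of Lasagne itself and with an arbitrary tolerance you'd have to pick, is the fraction of predictions falling within some tolerance of the target. A hedged NumPy sketch:

```python
import numpy as np

def tolerance_accuracy(pred, target, tol=0.5):
    """Fraction of predictions within +/- tol of the target.
    A rough 'accuracy' proxy for MSE-trained outputs; tol is arbitrary."""
    return np.mean(np.abs(pred - target) <= tol)

pred = np.array([1.1, 2.9, 5.0])
target = np.array([1.0, 3.0, 4.0])
print(tolerance_accuracy(pred, target))  # 2 of 3 within 0.5 -> 0.666...
```

The same expression translates directly to Theano (T.mean(abs(pred - target) <= tol)) if you want it compiled into val_fn.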