net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('dropout1', layers.DropoutLayer),
        ('hidden', layers.DenseLayer),
        ('dropout2', layers.DropoutLayer),
        ('output', layers.DenseLayer),
    ],

    # layer parameters:
    input_shape=(None, X.shape[1]),
    dropout1_p=0.85,
    dropout2_p=0.5,
    hidden_num_units=2500,
    hidden_nonlinearity=very_leaky_rectify,
    output_nonlinearity=None,
    output_num_units=y.shape[1],

    # optimization method:
    train_split=TrainSplit(eval_size=0.0),  # !
    update=nesterov_momentum,
    update_learning_rate=theano.shared(float32(0.01)),
    update_momentum=theano.shared(float32(0.9)),

    regression=True,
    max_epochs=5000,
    verbose=1,
    on_epoch_finished=[
        AdjustVariable('update_learning_rate', start=0.01, stop=0.00001),
        AdjustVariable('update_momentum', start=0.9, stop=0.999),
    ],
    custom_scores=[("acc", lambda y, yhat: accuracy(y, yhat))],
)

np.random.seed(0)
net.fit(X, y)
But I get the following error message:
Traceback (most recent call last):
  np.random.seed(0)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nolearn/lasagne/base.py", line 544, in fit
    self.train_loop(X, y, epochs=epochs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nolearn/lasagne/base.py", line 641, in train_loop
    custom_scores, weights=batch_valid_sizes, axis=1)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/function_base.py", line 1140, in average
    "Weights sum to zero, can't be normalized")
ZeroDivisionError: Weights sum to zero, can't be normalized
Does anyone know how to fix this? I'd really appreciate any help, as I'm new to this field. I really hope someone can help me here.
Thanks in advance :)
If I want to use all the data, I have to use TrainSplit(eval_size=0.0).
Hi,
so far I'm still stuck and have found no solution to the problem. I tried it on Ubuntu 16.04.1 LTS as well, but the error message is the same.
But thanks for your help,
Yvonne
> If I want to use all the data, I have to use TrainSplit(eval_size=0.0).
I don't know nolearn, but there must be another way to provide the splits, a way to not have a validation set at all, or a way to use the training set for validation as well. Check the train_loop or TrainSplit implementations in nolearn to see what you could use instead of TrainSplit(eval_size=0.0). Let us know if you find a way (in case anybody else searches for this in the future), or let us know if you get stuck.
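One way to sketch the "use the training set for validation as well" idea: nolearn accepts any callable as train_split, so a custom split can hand back the training data as the validation data too, and the empty-validation branch is never reached. This is an untested sketch; the class name is made up, and you should check nolearn's base.py for the exact return order of the split (here I assume (X_train, X_valid, y_train, y_valid)).

```python
# Hypothetical sketch -- not nolearn's own code. Reuses the training data
# as the validation set so the validation arrays are never empty.
class TrainAsValidSplit:
    def __call__(self, X, y, net=None):
        # Assumed return order (X_train, X_valid, y_train, y_valid);
        # verify against the train_split call site in nolearn's base.py.
        return X, X, y, y
```

It would then be passed as `train_split=TrainAsValidSplit()` in the NeuralNet constructor instead of `TrainSplit(eval_size=0.0)`. Note that the reported validation loss then just mirrors the training loss.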
Best, Jan
> so far I'm still stuck and have found no solution to the problem. I tried it on Ubuntu 16.04.1 LTS as well, but no difference in the error message.
The Ubuntu version will not change anything. The problem is that nolearn will still try to split the data into two parts, one with 100% and one with 0% of the elements, and then it does something with the 0% part that involves dividing by its length. You will need to go into the train_loop code in nolearn mentioned in the error traceback and figure out where the problem appears.
You'll need to add a number of `if` guards to skip the corresponding code when len(X_valid) is zero.
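For illustration, the failing step boils down to a weighted average of the per-batch validation scores, weighted by the batch sizes; with no validation batches the weights sum to zero and the normalization divides by zero. A guard of the kind suggested above could look like this (a standalone sketch with a made-up helper name, not nolearn's actual code):

```python
# Sketch of the guard: average per-batch scores weighted by batch size,
# but skip the normalization entirely when there are no batches
# (weights sum to zero), instead of letting it divide by zero.
def guarded_weighted_average(scores, weights):
    total = sum(weights)
    if total == 0:  # no validation data: nothing to average
        return None
    return sum(s * w for s, w in zip(scores, weights)) / total
```

The same check, applied before the np.average call in train_loop (and before any other use of the empty validation arrays), is what avoids the ZeroDivisionError.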