About the task: I have class distances as input and want to get class confidences (numbers between 0.0 and 1.0). So I have something like this:
[
  [0.0, 0.0, 0.0, 6.371921190238224, 0.0, 3.3287083713830516, 7.085957828217146,
   7.747408965761948, 5.498717498872398, 5.498717498872398, 5.498717498872398,
   5.498717498872398, 8.529725281060978],
  [6.396501448825533, 0.0, 0.0, 5.217483270813266, 0.0, 5.319046151560534,
   5.823161030197735, 3.8991256371824976, 6.269856323952211, 5.517874167220461,
   6.396501448825533, 5.328678274963717, 3.8991256371824976],
]
And as a result I want:
[
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
...
]
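(For reference, a minimal sketch of how nested lists like these can be turned into the NumPy arrays Keras expects; the data is just the first row from the examples above, and the variable names are illustrative:)

import numpy as np

# First row of distances and its one-hot target, taken from the examples above.
distances = [[0.0, 0.0, 0.0, 6.371921190238224, 0.0, 3.3287083713830516, 7.085957828217146,
              7.747408965761948, 5.498717498872398, 5.498717498872398, 5.498717498872398,
              5.498717498872398, 8.529725281060978]]
classes = [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]

X = np.asarray(distances, dtype=np.float32)  # shape (n_examples, n_classes)
y = np.asarray(classes, dtype=np.float32)    # same shape, one-hot rows
print(X.shape, y.shape)                      # (1, 13) (1, 13)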
I have about 200 examples. My network-building code is as follows:
# Imports used by this snippet (defined at module level in the full class)
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

def train(self, distances, classes):
    """
    Train network
    :param distances: array of distances to classes
    :type distances: list[list[float]]
    :param classes: array of class indicators
    :type classes: list[list[float]]
    """
    example_count, class_count = self._dimensions(distances, classes)
    self.model = Sequential()
    self.model.add(Dense(128, input_dim=class_count))  # hidden layer, linear activation by default
    self.model.add(Dense(class_count))                 # output layer, also linear
    self.model.compile(optimizer=SGD(), loss='mse')
    self.model.fit(array(distances), array(classes))
But during training I get the following output:
Epoch 1/10
425/425 [==============================] - 0s - loss: nan
Epoch 2/10
425/425 [==============================] - 0s - loss: nan
Epoch 3/10
425/425 [==============================] - 0s - loss: nan
Epoch 4/10
425/425 [==============================] - 0s - loss: nan
Epoch 5/10
425/425 [==============================] - 0s - loss: nan
Epoch 6/10
425/425 [==============================] - 0s - loss: nan
Epoch 7/10
425/425 [==============================] - 0s - loss: nan
Epoch 8/10
425/425 [==============================] - 0s - loss: nan
Epoch 9/10
425/425 [==============================] - 0s - loss: nan
Epoch 10/10
425/425 [==============================] - 0s - loss: nan
And when I try model.predict(numpy.array([[ 0.0, 0.0, 0.0, 6.371921190238224, 0.0, 3.3287083713830516, 7.085957828217146, 7.747408965761948, 5.498717498872398, 5.498717498872398, 5.498717498872398, 5.498717498872398, 8.529725281060978]])) (an example from the training set), I get [[ nan nan nan nan nan nan nan nan nan nan nan nan nan]].
What could be wrong with the data or the model-building code?
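(Not part of the original post, but a quick sanity check that can rule out bad input values before blaming the model; the helper name below is illustrative:)

import numpy as np

def check_finite(name, rows):
    # Convert the nested list to an array and report NaN/inf presence and value range.
    arr = np.asarray(rows, dtype=np.float64)
    print(name, "finite:", bool(np.isfinite(arr).all()),
          "min:", arr.min(), "max:", arr.max())

# 'distances' and 'classes' are the nested lists shown in the question above.
check_finite("distances", distances)
check_finite("classes", classes)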
It seems I had the wrong fit parameters (learning rate and others). Now I have the following code (yes, I added neurons to the hidden layer and increased the number of training epochs during testing):
example_count, class_count = self._dimensions(distances, classes)
self.model = Sequential()
self.model.add(Dense(1024, input_dim=class_count))  # wider hidden layer
self.model.add(Dense(class_count))
self.model.compile(optimizer=SGD(lr=0.002, momentum=0.0, decay=0.0, nesterov=True),  # much lower learning rate
                   loss='mse', metrics=['accuracy'])
self.model.fit(array(distances), array(classes), nb_epoch=80)  # more training epochs
And it gives:
...
Epoch 79/80
425/425 [==============================] - 0s - loss: 0.0381 - acc: 0.6729
Epoch 80/80
425/425 [==============================] - 0s - loss: 0.0382 - acc: 0.6871
[[ 0.19048974 0.1585739 0.28798762 -0.23555818 0.4293299 0.10981751
-0.08614585 -0.06363138 0.05927059 0.07283521 -0.07852616 -0.02396417
-0.28515971]]
Not great accuracy, but the original problem is solved. Also, is there any way to "normalize" the output? (As I wrote above, I need numbers in [0.0, 1.0].)
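(On the normalization question, a hedged sketch of one standard Keras approach, not taken from the code above: put a softmax activation on the output layer so every output lands in [0.0, 1.0] and each row sums to 1.0, typically paired with categorical cross-entropy instead of MSE:)

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

class_count = 13  # number of classes, as in the examples above

model = Sequential()
model.add(Dense(1024, input_dim=class_count, activation='relu'))  # hidden layer
model.add(Dense(class_count))
model.add(Activation('softmax'))  # each output in [0.0, 1.0], row sums to 1.0
model.compile(optimizer=SGD(lr=0.002, nesterov=True),
              loss='categorical_crossentropy', metrics=['accuracy'])

(If the classes are not mutually exclusive, a 'sigmoid' output activation with 'binary_crossentropy' loss would be the usual alternative.)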