from keras import backend as K
from keras.layers import Lambda, Dense
from keras.models import Model
from keras.optimizers import RMSprop

def flattened_l1_distance(vects):
    # sigmoid of the summed L1 distance between the two embeddings
    x, y = vects
    return K.sigmoid(K.sum(K.abs(x - y), axis=1, keepdims=True))

# eucl_dist_output_shape is defined in the second snippet below
distance = Lambda(flattened_l1_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])
model = Model(input=[input_a, input_b], output=distance)

# train
rms = RMSprop()
model.compile(loss='binary_crossentropy', optimizer=rms)
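Both snippets assume the shared two-branch setup from Keras' mnist_siamese_graph.py example, where the same base network embeds both inputs. A minimal sketch of that context (the 128-unit layer sizes follow the example and match the shapes in the trace below, but treat the details as assumptions):

from keras.layers import Input, Dense, Dropout
from keras.models import Sequential

def create_base_network(input_dim):
    # shared embedding network, applied to both branches
    seq = Sequential()
    seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    return seq

input_dim = 784  # flattened 28x28 MNIST digits
base_network = create_base_network(input_dim)

input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))
# the same network instance processes both inputs, so weights are shared
processed_a = base_network(input_a)
processed_b = base_network(input_b)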
The second variant replaces the fixed sum with a learned weighting: take the element-wise absolute difference of the two embeddings and feed it to a single sigmoid unit, so the network learns its own weighted L1 distance.

def get_abs_diff(vects):
    # element-wise |x - y|, shape (batch, n_features)
    x, y = vects
    return K.abs(x - y)

def eucl_dist_output_shape(shapes):
    # NOTE: declares (batch, 1), although get_abs_diff
    # actually returns the full (batch, n_features) difference
    shape1, shape2 = shapes
    return (shape1[0], 1)

abs_diff = Lambda(get_abs_diff, output_shape=eucl_dist_output_shape)([processed_a, processed_b])
flattened_weighted_distance = Dense(1, activation='sigmoid')(abs_diff)
model = Model(input=[input_a, input_b], output=flattened_weighted_distance)
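This variant fails because the declared output_shape and the real tensor shape disagree: with the Theano backend, Keras builds the Dense layer from the declared (batch, 1) Lambda output, giving it a (1, 1) weight matrix, while the tensor that actually arrives is the (batch, 128) absolute difference, hence the "(128,128) and (1,1) not aligned" error in the trace below. A minimal sketch of the fix, assuming the intent is a learned weighted L1 distance over all 128 features (abs_diff_output_shape is a name introduced here for illustration):

def abs_diff_output_shape(shapes):
    # keep the full feature dimension of the element-wise difference
    shape1, shape2 = shapes
    return shape1

abs_diff = Lambda(get_abs_diff, output_shape=abs_diff_output_shape)([processed_a, processed_b])
flattened_weighted_distance = Dense(1, activation='sigmoid')(abs_diff)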
ValueError: ('shapes (128,128) and (1,1) not aligned: 128 (dim 1) != 1 (dim 0)', (128L, 128L), (1L, 1L))
Apply node that caused the error: Dot22(Elemwise{Abs}[(0, 0)].0, dense_29_W)
Toposort index: 79
Inputs types: [TensorType(float32, matrix), TensorType(float32, matrix)]
Inputs shapes: [(128L, 128L), (1L, 1L)]
Inputs strides: [(512L, 4L), (4L, 4L)]
Inputs values: ['not shown', array([[ 1.61876023]], dtype=float32)]
Outputs clients: [[Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)](Dot22.0, InplaceDimShuffle{x,0}.0)]]

The sigmoid-of-summed-L1 variant, by contrast, builds and trains to completion:

In [1]: run mnist_siamese_graph.py
Using Theano backend.
Train on 108400 samples, validate on 17820 samples
Epoch 1/20
108400/108400 [==============================] - 34s - loss: 0.3370 - val_loss: 0.2466
Epoch 2/20
108400/108400 [==============================] - 39s - loss: 0.1671 - val_loss: 0.1641
Epoch 3/20
108400/108400 [==============================] - 41s - loss: 0.1125 - val_loss: 0.1328
Epoch 4/20
108400/108400 [==============================] - 41s - loss: 0.0858 - val_loss: 0.0947
Epoch 5/20
108400/108400 [==============================] - 46s - loss: 0.0695 - val_loss: 0.0982
Epoch 6/20
108400/108400 [==============================] - 38s - loss: 0.0589 - val_loss: 0.0911
Epoch 7/20
108400/108400 [==============================] - 35s - loss: 0.0508 - val_loss: 0.0811
Epoch 8/20
108400/108400 [==============================] - 39s - loss: 0.0450 - val_loss: 0.0828
Epoch 9/20
108400/108400 [==============================] - 35s - loss: 0.0390 - val_loss: 0.0873
Epoch 10/20
108400/108400 [==============================] - 43s - loss: 0.0345 - val_loss: 0.0778
Epoch 11/20
108400/108400 [==============================] - 46s - loss: 0.0305 - val_loss: 0.0796
Epoch 12/20
108400/108400 [==============================] - 45s - loss: 0.0283 - val_loss: 0.0835
Epoch 13/20
108400/108400 [==============================] - 42s - loss: 0.0264 - val_loss: 0.0783
Epoch 14/20
108400/108400 [==============================] - 46s - loss: 0.0243 - val_loss: 0.0820
Epoch 15/20
108400/108400 [==============================] - 33s - loss: 0.0228 - val_loss: 0.0821
Epoch 16/20
108400/108400 [==============================] - 33s - loss: 0.0212 - val_loss: 0.0879
Epoch 17/20
108400/108400 [==============================] - 34s - loss: 0.0204 - val_loss: 0.0790
Epoch 18/20
108400/108400 [==============================] - 40s - loss: 0.0201 - val_loss: 0.0819
Epoch 19/20
108400/108400 [==============================] - 36s - loss: 0.0186 - val_loss: 0.0811
Epoch 20/20
108400/108400 [==============================] - 34s - loss: 0.0164 - val_loss: 0.0792
* Accuracy on training set: 0.32%
* Accuracy on test set: 3.02%
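Those accuracy numbers look like an artifact of the evaluation rather than the training: the loss is near zero, and the example's compute_accuracy treats the output as a distance (small means "same pair"), while a sigmoid unit outputs a similarity (large means "same pair"), so the threshold test is inverted and ~0.3% plausibly corresponds to ~99.7%. A sketch of an accuracy function matched to a similarity output, assuming labels are 1 for genuine pairs as in the Keras example (compute_similarity_accuracy is a name introduced here):

import numpy as np

def compute_similarity_accuracy(predictions, labels):
    # with a sigmoid similarity output, > 0.5 means "same pair"
    return np.mean((predictions.ravel() > 0.5) == labels)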