Depo Depo
Feb 17, 2021, 9:24:31 AM
to Keras-users
I am doing multi-class segmentation using a UNet. The output layer of my model is:
outputs = layers.Conv3D(n_classes, (1, 1, 1), padding="same", activation='softmax')(d4)
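As a sanity check, the output layer can be exercised on a dummy decoder feature map (the value of `n_classes` and the shape of `d4` below are assumptions for illustration only):

```python
import tensorflow as tf
from tensorflow.keras import layers

n_classes = 3                            # assumed number of classes
d4 = tf.random.normal((1, 4, 4, 4, 8))   # hypothetical final decoder feature map
outputs = layers.Conv3D(n_classes, (1, 1, 1), padding="same",
                        activation="softmax")(d4)
print(outputs.shape)  # (1, 4, 4, 4, 3): per-voxel class probabilities
```

The softmax over the channel axis means each voxel's class scores sum to 1.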
Using SparseCategoricalCrossentropy I can train the network fine. Now I would also like to try the Dice coefficient as the loss function. My `y_true` and `y_pred` look as follows (toy example):
y_true = tf.constant([0.0, 1.0, 2.0])
y_pred = tf.constant([[0.9, 0.95, 0.90], [0.1, 0.8, 0.5], [0.1, 0.8, 0.9]])
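For reference, SparseCategoricalCrossentropy accepts exactly these shapes — integer-like class labels in `y_true`, per-class scores in `y_pred` — which is why training works with it. A minimal check:

```python
import tensorflow as tf

y_true = tf.constant([0.0, 1.0, 2.0])
y_pred = tf.constant([[0.9, 0.95, 0.90], [0.1, 0.8, 0.5], [0.1, 0.8, 0.9]])

scce = tf.keras.losses.SparseCategoricalCrossentropy()
loss = scce(y_true, y_pred)  # scalar loss, no shape errors
```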
I've implemented the Dice coefficient as follows:
import tensorflow as tf
from tensorflow.keras import backend as K

def softargmax(x, beta=1e10):
    # differentiable approximation of argmax: softmax with a large beta
    x = tf.convert_to_tensor(x)
    x_range = tf.range(x.shape.as_list()[-1], dtype=x.dtype)
    return tf.reduce_sum(tf.nn.softmax(x * beta) * x_range, axis=-1)

def dice_coef(y_true, y_pred, smooth=1e-7):
    y_true = K.flatten(K.one_hot(K.cast(y_true, 'int32'), num_classes=n_classes))
    y_pred = softargmax(y_pred)
    y_pred = K.flatten(K.one_hot(K.cast(y_pred, 'int32'), num_classes=n_classes))
    intersect = K.sum(y_true * y_pred, axis=-1)
    denom = K.sum(y_true + y_pred, axis=-1)
    return K.mean(2. * intersect / (denom + smooth))

def dice_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
I can call this in isolation without errors:
dice_coef(y_true, y_pred)
However, when I try to use this as the loss function I get an error that says:
ValueError: No gradients provided for any variable:
Googling the error, I found that it happens when a function is not differentiable. While debugging, the problem seems to arise from the `y_pred = K.flatten...` line, but the same pattern works for `y_true`. Why does it fail for `y_pred`?
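A minimal repro of what I suspect is happening (assuming the integer cast is the culprit): casting a tensor to an integer dtype has no registered gradient, so anything downstream of `K.cast(y_pred, 'int32')` is disconnected from the trainable variables, whereas `y_true` never needs a gradient in the first place.

```python
import tensorflow as tf

x = tf.Variable([0.2, 0.8, 0.1])
with tf.GradientTape() as tape:
    # same pattern as in dice_coef: round-trip through an integer cast
    y = tf.cast(tf.cast(x, 'int32'), 'float32')
    loss = tf.reduce_sum(y)

g = tape.gradient(loss, x)
print(g)  # None: the int cast is non-differentiable
```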