Sample-wise 2D weights in the loss function


Joe

Feb 8, 2017, 1:59:37 PM
to Keras-users
I would like to implement pixelwise segmentation as in https://arxiv.org/abs/1505.04597. To force the network to learn the thin separation borders between touching objects, the authors of the paper use a pixelwise 2D weight map that is multiplied with the per-pixel error in the loss function. So instead of X, y (input, label), each training sample consists of X, y, w (input, label, weight).
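Concretely, the weighted loss in the paper is a per-pixel weighted cross-entropy of the form

    E = -\sum_{x \in \Omega} w(x) \log p_{\ell(x)}(x)

where \Omega is the set of pixels, \ell(x) is the true class of pixel x, p_{\ell(x)}(x) is the softmax probability the network assigns to that class, and w(x) is the precomputed weight map.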

I have designed a custom cross-entropy function that can handle the weights, but unfortunately I have no idea how to get the weights into the loss function. Static weight maps are covered in https://github.com/fchollet/keras/issues/2115, but that approach does not apply to sample-wise weights.

johannes...@gmail.com

Feb 10, 2017, 5:48:00 AM
to Keras-users
I finally managed it by stacking the labeled segmentation and the weight map along the channel axis. So my X is the original image and my y is a tensor of shape (batch_size, row, col, 2).

In my custom loss function I unstack and evaluate them:

def weighted_cross_entropy(y_true, y_pred):
    try:
        [seg, weight] = tf.unstack(y_true, 2, axis=3)
        ...
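For context, a minimal sketch of how the stacked target can be built on the NumPy side (seg_masks and weight_maps are placeholder names, not from my actual code):

import numpy as np

# seg_masks:   (batch, row, col) segmentation masks
# weight_maps: (batch, row, col) per-pixel weight maps
# Stacking along a new last axis gives (batch, row, col, 2),
# which the loss function takes apart again with tf.unstack.
y_stacked = np.stack([seg_masks, weight_maps], axis=-1)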

edgarme...@gmail.com

Mar 31, 2017, 11:12:23 AM
to Keras-users, johannes...@gmail.com
Hi Johannes,

what would the return value of the loss function be? A matrix or an array?

vardenpav...@gmail.com

Apr 12, 2018, 5:38:20 AM
to Keras-users
Hey Johannes,
could you please post the code for your custom loss function?

johannes...@gmail.com

Apr 12, 2018, 2:12:38 PM
to Keras-users
import tensorflow as tf
from tensorflow.python.ops import array_ops, math_ops
from keras import backend as K


def weighted_cross_entropy(y_true, y_pred):
    # y_true stacks the segmentation (channel 0) and the weight map
    # (channel 1) along the last axis.
    try:
        [seg, weight] = tf.unstack(y_true, 2, axis=3)
        seg = tf.expand_dims(seg, -1)
        weight = tf.expand_dims(weight, -1)
    except:
        # fall through if y_true cannot be unstacked into two channels
        pass

    # Convert the network's sigmoid output back to logits
    # (tf.log was renamed tf.math.log in TF 2.x).
    epsilon = tf.convert_to_tensor(10e-8, y_pred.dtype.base_dtype)
    y_pred = tf.clip_by_value(y_pred, epsilon, 1 - epsilon)
    y_pred = tf.log(y_pred / (1 - y_pred))

    # Numerically stable sigmoid cross-entropy, same form as
    # tf.nn.sigmoid_cross_entropy_with_logits:
    #   max(x, 0) - x * z + log(1 + exp(-|x|))
    # (math_ops.select from older TF versions is array_ops.where now).
    zeros = array_ops.zeros_like(y_pred, dtype=y_pred.dtype)
    cond = (y_pred >= zeros)
    relu_logits = array_ops.where(cond, y_pred, zeros)
    neg_abs_logits = array_ops.where(cond, -y_pred, y_pred)
    entropy = math_ops.add(relu_logits - y_pred * seg,
                           math_ops.log1p(math_ops.exp(neg_abs_logits)))

    # Apply the per-pixel weights before averaging.
    return K.mean(math_ops.multiply(weight, entropy), axis=-1)
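It is used like any other Keras loss; a minimal usage sketch (model and array names are placeholders), where the stacked y is passed to fit like a normal label tensor:

model.compile(optimizer='adam', loss=weighted_cross_entropy)
# y_stacked: (batch, row, col, 2) with the mask in channel 0
# and the weight map in channel 1
model.fit(X_train, y_stacked, batch_size=8, epochs=10)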

shin...@gmail.com

Mar 30, 2020, 5:50:50 PM
to Keras-users
This is a beautiful solution and works to some degree, I think. When I use it, I get these warnings in the console running the Jupyter notebook (the window with black background and white text):

E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:697] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.

The program doesn't crash, but I'm wondering whether this workaround leads to a suboptimal result.