“LookupError: gradient registry has no entry for: LogUniformCandidateSampler”

lhlmgr

Jul 18, 2016, 10:03:47 AM
to Discuss
Hi guys,

With the new `raw_rnn`, I was able to implement a dynamic RNN decoder (thanks to ebrevdo).

For the evaluation I implemented a loss function [2] that iterates over each timestep of the TensorArray and calls either a custom loss function or `nn_ops.sparse_softmax_cross_entropy_with_logits()` (the same as in the seq2seq model [1]).
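
Roughly, the loss looks like this (a simplified sketch, not the exact code from [2]; the names are illustrative, and I assume the targets sit in a second TensorArray):

    import tensorflow as tf

    def sequence_loss_from_ta(logits_ta, targets_ta, num_steps,
                              softmax_loss_function=None):
      # Sum the cross-entropy over every timestep stored in the TensorArray.
      def body(t, loss_acc):
        logits_t = logits_ta.read(t)   # [batch_size, vocab_size]
        labels_t = targets_ta.read(t)  # [batch_size]
        if softmax_loss_function is None:
          step_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
              logits_t, labels_t)
        else:
          step_loss = softmax_loss_function(logits_t, labels_t)
        return t + 1, loss_acc + tf.reduce_sum(step_loss)

      _, total_loss = tf.while_loop(
          lambda t, _: t < num_steps, body,
          [tf.constant(0), tf.constant(0.0)])
      return total_loss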

However, when I use the custom sampled loss function from the translate example [3]:

    if num_samples > 0 and num_samples < self.target_vocab_size:
      w = tf.get_variable("proj_w", [size, self.target_vocab_size])
      w_t = tf.transpose(w)
      b = tf.get_variable("proj_b", [self.target_vocab_size])
      output_projection = (w, b)

      def sampled_loss(inputs, labels):
        labels = tf.reshape(labels, [-1, 1])
        return tf.nn.sampled_softmax_loss(w_t, b, inputs, labels, num_samples,
                                          self.target_vocab_size)

      softmax_loss_function = sampled_loss
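
(For context: `tf.nn.sampled_softmax_loss` draws its negative classes internally with a log-uniform candidate sampler, which is where the LogUniformCandidateSampler op in the traceback below comes from. Assuming I read the implementation right, it runs something equivalent to this:)

      # Inside sampled_softmax_loss, negatives are drawn roughly like so:
      sampled_values = tf.nn.log_uniform_candidate_sampler(
          true_classes=labels, num_true=1, num_sampled=num_samples,
          unique=True, range_max=self.target_vocab_size)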

I get the following error:

> WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x7f50696455f8>: Using a concatenated state is slower and will soon be deprecated.  Use state_is_tuple=True.
> Traceback (most recent call last):
>   File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py", line 448, in gradients
>     grad_fn = ops.get_gradient_function(op)
>   File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1634, in get_gradient_function
>     return _gradient_registry.lookup(op_type)
>   File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/registry.py", line 85, in lookup
>     "%s registry has no entry for: %s" % (self._name, name))
> LookupError: gradient registry has no entry for: LogUniformCandidateSampler
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/home/aa/code/python/bb/dyn_main.py", line 176, in <module>
>     tf.app.run()
>   File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 30, in run
>     sys.exit(main(sys.argv))
>   File "/home/aa/code/python/bb/dyn_main.py", line 83, in main
>     gradients = tf.gradients(loss2, params)
>   File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py", line 452, in gradients
>     (op.name, op.type))
> LookupError: No gradient defined for operation 'dynamic_rnn_seq2seq/sequence_loss_by_example_dyn/while/cond/sampled_softmax_loss/LogUniformCandidateSampler' (op type: LogUniformCandidateSampler)

I use a fairly up-to-date TensorFlow build, compiled from source last Saturday, with GPU enabled.
Did I get something wrong, or is this a bug?

Thanks in advance!

 - [1]

Martin Wicke

Jul 18, 2016, 11:29:16 AM
to lhlmgr, Discuss
This is a bug, in a way: TensorFlow thinks it needs a gradient for this op although it doesn't. You can work around the issue by registering a gradient (which can be all zeros) for LogUniformCandidateSampler. Or you can file an issue (or a PR). If you go that route, it would look something like this:

@tf.RegisterGradient("LogUniformCandidateSampler")
def _LogUniformCandidateSamplerGrad(op, grad_sampled, grad_true_expected,
                                    grad_sampled_expected):
  # The op has one input (the int64 true_classes tensor) and three outputs;
  # it isn't differentiable, so return a single all-zero gradient.
  return [tf.zeros_like(op.inputs[0])]
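Make sure the registration runs (once) before you call tf.gradients. Alternatively, you could mark the op as non-differentiable instead (a one-liner, assuming tf.NoGradient is exported in your build):

tf.NoGradient("LogUniformCandidateSampler")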

-- 
Martin


lhlmgr

Jul 18, 2016, 11:56:03 AM
to Discuss
Hi Martin,

Thanks a lot for your fast and helpful reply.
I'll file an issue on GitHub.

Cheers, Leo