Hi guys,
with the new `raw_rnn`, I was able to implement a dynamic RNN decoder (thanks to ebrevdo).
For evaluation, I implemented a loss function [2] that iterates over each timestep of the TensorArray and calls either a custom loss function or `nn_ops.sparse_softmax_cross_entropy_with_logits()` (the same as in the seq2seq model [1]).
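For context, the loss in [2] is structured roughly like the sketch below (hypothetical names, assuming `[time, batch]`-shaped targets and weights; not my exact code):

```python
import tensorflow as tf

def sequence_loss_from_tensorarray(logits_ta, targets, weights,
                                   softmax_loss_function=None):
    # logits_ta: TensorArray with one [batch, vocab] logits tensor per step.
    # targets, weights: [time, batch] tensors (assumed shapes).
    time_steps = logits_ta.size()

    def body(t, total):
        logits = logits_ta.read(t)
        if softmax_loss_function is None:
            crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=targets[t], logits=logits)
        else:
            crossent = softmax_loss_function(logits, targets[t])
        return t + 1, total + tf.reduce_sum(crossent * weights[t])

    _, total = tf.while_loop(lambda t, _: t < time_steps, body,
                             [tf.constant(0), tf.constant(0.0)])
    return total / tf.reduce_sum(weights)
```

With the default cross entropy this works fine; the problem only shows up once `softmax_loss_function` is the sampled loss below.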
However, when I use the custom `sampled_loss` function from the translate example [3]:
```python
# Use a sampled softmax when the target vocabulary is large.
if num_samples > 0 and num_samples < self.target_vocab_size:
    w = tf.get_variable("proj_w", [size, self.target_vocab_size])
    w_t = tf.transpose(w)
    b = tf.get_variable("proj_b", [self.target_vocab_size])
    output_projection = (w, b)

    def sampled_loss(inputs, labels):
        # sampled_softmax_loss expects labels of shape [batch_size, 1].
        labels = tf.reshape(labels, [-1, 1])
        return tf.nn.sampled_softmax_loss(w_t, b, inputs, labels,
                                          num_samples, self.target_vocab_size)

    softmax_loss_function = sampled_loss
```
I get the following error:
```
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x7f50696455f8>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
Traceback (most recent call last):
  File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py", line 448, in gradients
    grad_fn = ops.get_gradient_function(op)
  File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1634, in get_gradient_function
    return _gradient_registry.lookup(op_type)
  File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/registry.py", line 85, in lookup
    "%s registry has no entry for: %s" % (self._name, name))
LookupError: gradient registry has no entry for: LogUniformCandidateSampler

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aa/code/python/bb/dyn_main.py", line 176, in <module>
    tf.app.run()
  File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/home/aa/code/python/bb/dyn_main.py", line 83, in main
    gradients = tf.gradients(loss2, params)
  File "/home/aa/anaconda3/envs/master_tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients.py", line 452, in gradients
    (op.name, op.type))
LookupError: No gradient defined for operation 'dynamic_rnn_seq2seq/sequence_loss_by_example_dyn/while/cond/sampled_softmax_loss/LogUniformCandidateSampler' (op type: LogUniformCandidateSampler)
```
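If I read the traceback right, `tf.gradients` fails in the registry lookup because the `LogUniformCandidateSampler` op created by `tf.nn.sampled_softmax_loss` has no gradient registered, and wrapping the loss in `while`/`cond` apparently makes the gradient code visit it. One workaround I could imagine is registering the op as non-differentiable (just a sketch, I haven't verified this is the intended fix):

```python
# Sketch of a possible workaround (unverified): tell TensorFlow the sampler
# op has no gradient, so tf.gradients skips it instead of raising LookupError.
# (ops.NoGradient was later renamed ops.NotDifferentiable.)
from tensorflow.python.framework import ops

ops.NoGradient("LogUniformCandidateSampler")
```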
I use a fairly recent TensorFlow version, compiled from source (last Saturday) with GPU support enabled.
Did I get something wrong, or is this a bug?
Thanks in advance!
- [1]