Error when using tf.saved_model.save()


kimjisoo

Jun 24, 2020, 8:35:16 PM6/24/20
to TensorFlow Developers
Hello,

I have a question about an error I get when using tf.saved_model.save().

I ran the code from the site above and tried to save the trained model. To use tf.saved_model.save(), I decorated call() with tf.function and specified an input_signature.

Then I ran:

tf.saved_model.save(transformer, "some path")

However, I got the following error:

ValueError: Structure of Python function inputs does not match input_signature: inputs: ( [<tf.Tensor 'args_0:0' shape=(None, None) dtype=int64>, <tf.Tensor 'args_1:0' shape=(None, None) dtype=int64>, <tf.Tensor 'args_2:0' shape=() dtype=bool>, <tf.Tensor 'args_3:0' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'args_4:0' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'args_5:0' shape=(None, None, None, None) dtype=float32>], False) input_signature: ( TensorSpec(shape=(None, None), dtype=tf.int64, name=None), TensorSpec(shape=(None, None), dtype=tf.int64, name=None), TensorSpec(shape=(), dtype=tf.bool, name=None), TensorSpec(shape=(None, None, None, None), dtype=tf.float32, name=None), TensorSpec(shape=(None, None, None, None), dtype=tf.float32, name=None), TensorSpec(shape=(None, None, None, None), dtype=tf.float32, name=None)) 

I don't know why the inputs include a bare False at the end (the last element of the inputs tuple, after the list of six tensors).
Does anyone have any ideas about this?

Here is the Transformer class (the trained model):

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)

  @tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.int64),
                                tf.TensorSpec(shape=[None, None], dtype=tf.int64),
                                tf.TensorSpec(shape=[], dtype=tf.bool),
                                tf.TensorSpec(shape=[None, None, None, None], dtype=tf.float32),
                                tf.TensorSpec(shape=[None, None, None, None], dtype=tf.float32),
                                tf.TensorSpec(shape=[None, None, None, None], dtype=tf.float32)])
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
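For context, here is a minimal, self-contained sketch of the pattern involved (using a hypothetical SmallModel and serve method, not the original Transformer). My understanding is that Keras's Model.__call__ intercepts the training argument and forwards it separately from the tensor inputs, which would explain the bare False appearing at the end of the traced inputs. One common workaround is to keep input_signature off call() itself and instead export a dedicated serving tf.function via the signatures argument of tf.saved_model.save:

```python
import tempfile
import tensorflow as tf

class SmallModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inp, training=False):
        # Keras manages `training` itself, so it is left out of any signature.
        return self.dense(inp)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 3], dtype=tf.float32)])
    def serve(self, inp):
        # Fix `training` to a Python constant inside the exported function
        # instead of making it part of the input_signature.
        return self.call(inp, training=False)

model = SmallModel()
model(tf.zeros([1, 3]))  # build the variables before saving

path = tempfile.mkdtemp()
tf.saved_model.save(model, path, signatures={"serving_default": model.serve})

restored = tf.saved_model.load(path)
out = restored.signatures["serving_default"](tf.zeros([2, 3]))
```

This is only a sketch under the assumption above; the same idea applied to the Transformer would mean moving the input_signature from call() to a separate serving method that fixes training internally.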

Thanks.


 