I got this ValueError: not enough values to unpack (expected 5, got 3)


Abdikader Mohamed

May 23, 2023, 10:56:39 PM5/23/23
to keras...@googlegroups.com
Previously this code used an LSTM and worked, but I need to use a GRU in place of the LSTM, and now I get the error above. Can anyone help me with this?
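For reference, the mismatch can be reproduced in isolation: with return_state=True, a Bidirectional LSTM yields 5 tensors (output, forward h, forward c, backward h, backward c), while a Bidirectional GRU yields only 3, and a plain GRU only 2, because a GRU has no cell state. A minimal sketch, assuming TensorFlow 2.x / tf.keras:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, GRU, Bidirectional

x = Input(shape=(10, 8))  # dummy (timesteps, features) input

# Bidirectional LSTM: output + 2 states per direction = 5 tensors
lstm_outs = Bidirectional(LSTM(4, return_sequences=True, return_state=True))(x)
# Bidirectional GRU: output + 1 state per direction = 3 tensors
bigru_outs = Bidirectional(GRU(4, return_sequences=True, return_state=True))(x)
# Plain GRU: output + 1 state = 2 tensors
gru_outs = GRU(4, return_sequences=True, return_state=True)(x)

print(len(lstm_outs), len(bigru_outs), len(gru_outs))  # 5 3 2
```

So the 5-name unpacking written for the LSTM encoder fails on the GRU; the GRU encoder needs 3 names, and the decoder GRU needs 2, with initial_state=[state_h] only.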




from tensorflow.keras import backend as k
from tensorflow.keras.layers import (Input, Embedding, GRU, Bidirectional,
                                     Concatenate, Dense, TimeDistributed)
from tensorflow.keras.models import Model
# AttentionLayer, wv_layer, max_text_len and y_vocab are defined earlier
# in the script (custom attention layer, pretrained embedding layer and
# data-dependent constants).

k.clear_session()

latent_dim = 200
embedding_dim = 300

# Encoder
encoder_inputs = Input(shape=(max_text_len, ),name='Encoder_Inputs')

# Embedding layer
enc_emb = wv_layer(encoder_inputs)

# Encoder GRU 1 (unlike an LSTM, a GRU has no cell state, so a
# Bidirectional GRU with return_state=True returns 3 tensors, not 5)
encoder_gru1 = Bidirectional(GRU(latent_dim, return_sequences=True,
                     return_state=True, dropout=0.2,
                     recurrent_dropout=0.2), name='Encoder_BiGRU_Layer1')
(encoder_output1, forward_state_h1, backward_state_h1) = encoder_gru1(enc_emb)
state_h1 = Concatenate()([forward_state_h1, backward_state_h1])

# Encoder GRU 2
encoder_gru2 = Bidirectional(GRU(latent_dim, return_sequences=True,
                     return_state=True, dropout=0.2,
                     recurrent_dropout=0.2), name='Encoder_BiGRU_Layer2')
(encoder_outputs, forward_state_h2, backward_state_h2) = encoder_gru2(encoder_output1)
state_h = Concatenate()([forward_state_h2, backward_state_h2])

# Set up the decoder, using the encoder's final hidden state as the initial state
decoder_inputs = Input(shape=(None, ), name='Decoder_Inputs')

# Embedding layer
dec_emb_layer = Embedding(y_vocab, embedding_dim, trainable=True, name='Decoder_Embedding_Inputs')
dec_emb = dec_emb_layer(decoder_inputs)

# Decoder GRU (a plain GRU with return_state=True returns 2 tensors,
# the output sequence and the single hidden state, so the initial
# state is just [state_h]; there is no cell state)
decoder_gru = GRU(latent_dim*2, return_sequences=True,
                    return_state=True, dropout=0.2,
                    recurrent_dropout=0.2, name='Decoder_GRU_Layer')
(decoder_outputs, decoder_state) = decoder_gru(dec_emb, initial_state=[state_h])

# Attention Layer
attn_layer = AttentionLayer(name='Attention_Layer')
attn_out,attn_states=attn_layer([encoder_outputs,decoder_outputs])

decoder_concat_input=Concatenate(axis=-1,name='Concat_layer')([decoder_outputs,attn_out])
# Dense layer (softmax gives a probability distribution over the
# target vocabulary; 'tanh' would not)
decoder_dense = TimeDistributed(Dense(y_vocab, activation='softmax'),
                                name='TimeDistribution_Layer')
decoder_outputs = decoder_dense(decoder_concat_input)

# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs,name='Attn_Seq2Seq')

model.summary()