The weights should be identical because they are shared across the two models.
You should get the same output from both models, before and after training, regardless of whether you fit "model1" or "model2".
But I see what you are saying: for some reason get_weights() returns the weights of the two "models" in a different order, even though the weights themselves are identical.
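For reference, here is a minimal sketch of the kind of setup being discussed (the layer names and sizes are hypothetical; the point is just that model2 contains model1 as a layer, so every weight is shared):

from keras.layers import Input, Dense, BatchNormalization
from keras.models import Model

# Hypothetical shared-weights setup: model2 wraps model1 as a single layer.
inp = Input(shape=(8,))
x = Dense(4, name='dense1')(inp)
x = BatchNormalization(name='bn')(x)   # has non-trainable moving statistics
out = Dense(2, name='dense2')(x)
model1 = Model(inp, out)

inp2 = Input(shape=(8,))
model2 = Model(inp2, model1(inp2))     # model1 is now a layer of model2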
I tried to debug this!
for i in range(len(model1.trainable_weights)):
    print(i, model1.trainable_weights[i], model2.trainable_weights[i])
# output: identical
for i in range(len(model1.non_trainable_weights)):
    print(i, model1.non_trainable_weights[i], model2.non_trainable_weights[i])
# output: identical
for i in range(len(model1.get_weights())):
    print(i, model1.get_weights()[i].shape, model2.get_weights()[i].shape)
# output: different
So get_weights() is returning the trainable weights and the non-trainable weights (the moving statistics from the batchnorm layer) in a different order for the two "models"...
Here is what the get_weights() function does:
def get_weights(self):
    weights = []
    for layer in self.layers:
        weights += layer.weights
    return K.batch_get_value(weights)
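The key detail is that weights is a property, roughly trainable_weights + non_trainable_weights (paraphrasing from memory of the Keras source, so take the exact code with a grain of salt):

@property
def weights(self):
    # per-layer: this layer's trainable weights, then its non-trainable ones
    return self.trainable_weights + self.non_trainable_weights

For a plain layer that gives a per-layer ordering, but for a nested Model/Container the trainable_weights and non_trainable_weights properties each aggregate across all sublayers, so a nested model's weights come back as all trainable weights followed by all non-trainable weights.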
Let's reproduce to see what happens:
for i in range(len(model1.layers)):
    for j in range(len(model1.layers[i].weights)):
        print("model1", i, j, model1.layers[i].weights[j])

for i in range(len(model2.layers)):
    for j in range(len(model2.layers[i].weights)):
        print("model2", i, j, model2.layers[i].weights[j])
It appears that, within each layer, the trainable weights are listed first and the non-trainable weights last!
Since the first layer of model2 is model1 itself (see print(model2.layers) or print(model2.summary(120))), it returns all the weights from model1 (regardless of which layer they belong to inside model1) as trainable weights first, followed by non-trainable weights.
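Using the hypothetical layer names from the sketch above, the two orderings look like this:

def weight_names(model):
    # mirrors what get_weights() does, but returns names instead of values
    return [w.name for layer in model.layers for w in layer.weights]

print(weight_names(model1))
# per-layer order:
#   dense1/kernel, dense1/bias,
#   bn/gamma, bn/beta, bn/moving_mean, bn/moving_variance,
#   dense2/kernel, dense2/bias

print(weight_names(model2))
# model1 counts as one layer, so all of its trainable weights come first:
#   dense1/kernel, dense1/bias, bn/gamma, bn/beta, dense2/kernel, dense2/bias,
#   bn/moving_mean, bn/moving_variance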
This feels like a bug...
Not sure if we should try to fix the get_weights() function itself or hack together an alternative way to get the weights in a consistent order.
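If we go the workaround route, one option (an untested sketch, assuming nested models can be detected via their layers attribute) is to recurse into nested models so that weights are always collected leaf layer by leaf layer:

from keras import backend as K

def get_weights_by_layer(model):
    # Collect weights leaf layer by leaf layer, recursing into nested
    # models, so the order is the same whether a layer sits in model1
    # directly or is reached through model1 nested inside model2.
    weights = []
    for layer in model.layers:
        if hasattr(layer, 'layers'):   # nested Model/Container: recurse
            weights += get_weights_by_layer(layer)
        else:
            weights += layer.weights   # trainable first, then non-trainable
    return K.batch_get_value(weights)

With this, get_weights_by_layer(model1) and get_weights_by_layer(model2) should return the same arrays in the same order.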