I'm fine-tuning VGG-19 and only changing the fully connected classifier for now.
I created the base model without the top layer:
base_model = keras.applications.VGG19(include_top=False, weights='imagenet')
I am trying to recreate the following, but with a very simple binary classifier for now.
    # Classification block
    x = Flatten(name='flatten')(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dense(1000, activation='softmax', name='predictions')(x)
Here's where the issue comes into play:
x = base_model.output
x = Flatten()(x)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dropout(0.5)(x)
predictions = Dense(1, activation='sigmoid', name='predicts')(x)
model = Model(inputs=base_model.input, outputs=predictions)
When 'include_top=False', the output_shape[1:] of the base model is (512, None, None) instead of a (512, 7, 7) tensor that can be flattened. The only layers I can find that convert a 4D tensor containing 'None' dimensions into a 2D tensor are the global pooling layers, but that doesn't seem like the right step to take, considering a MaxPooling layer already precedes the fully connected section. Does anyone know the correct approach to customizing the fully connected classifier with the Functional API when the top of the VGG19 model is not included?
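For reference, here is a minimal sketch of the shape behavior I'm describing (I'm using weights=None here only to avoid the ImageNet download; with the channels_last default, the None dimensions land in the spatial positions rather than leading as in my setup). Passing an explicit input_shape pins the spatial dimensions down:

```python
from tensorflow import keras

# Without an input_shape, the spatial dims of the conv base are left as None:
base = keras.applications.VGG19(include_top=False, weights=None)
print(base.output_shape)  # (None, None, None, 512) under channels_last

# With an explicit input_shape, the output becomes a concrete tensor
# that Flatten can handle:
base_fixed = keras.applications.VGG19(
    include_top=False, weights=None, input_shape=(224, 224, 3))
print(base_fixed.output_shape)  # (None, 7, 7, 512)
```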
Thanks,
Brian