The code looks mostly fine (although I don't really know Keras very well).
There are cleaner ways to handle the variable reading and renaming:
- Have an explicit map from Slim layer name to Keras layer name (e.g., { "vggish/conv1": "vggish_conv1", ..., "vggish/fc1/fc1_2": "vggish_fc1_2", ... }).
- Instead of two loops through the list of operations, you could do one loop through the list of variables to find the weights and biases of each layer.
- There are a couple of ways to get the list of variables.
* You could load the checkpoint in a TF Session (as you're doing now) and iterate over tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES), which gives you Variable objects; pass var.name to session.run() to get each value, and parse the name to recover the layer name (names will be of the form "vggish/conv1/weights:0").
* You can read the checkpoint directly using NewCheckpointReader, without having to build the model graph or restore it in a Session. The code will look something like this:
      with tf.Graph().as_default():
        reader = tf.train.NewCheckpointReader(path_to_slim_checkpoint)
        var_names = reader.get_variable_to_shape_map().keys()
        for var_name in var_names:
          # var_name will be of the form "vggish/conv1/weights"
          var_value = reader.get_tensor(var_name)
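For the renaming itself, the explicit name map plus the single loop over variables could be sketched in plain Python like this (the map entries and helper below are illustrative, not the actual converter code; fill in the full VGGish mapping):

```python
# Illustrative sketch: group checkpoint variables by layer and rename
# Slim-style names to Keras-style names in one pass.
# These two map entries are examples only; the real converter would list
# every VGGish layer here.
SLIM_TO_KERAS = {
    "vggish/conv1": "vggish_conv1",
    "vggish/fc1/fc1_2": "vggish_fc1_2",
}

def group_by_layer(var_names):
    """Map each Keras layer name to its {"weights", "biases"} variable names.

    Accepts Slim-style names like "vggish/conv1/weights"; a trailing ":0"
    (as produced by Variable.name) is stripped if present.
    """
    layers = {}
    for name in var_names:
        name = name.split(":")[0]               # "vggish/conv1/weights:0" -> "vggish/conv1/weights"
        layer, _, param = name.rpartition("/")  # -> ("vggish/conv1", "weights")
        if layer in SLIM_TO_KERAS and param in ("weights", "biases"):
            layers.setdefault(SLIM_TO_KERAS[layer], {})[param] = name
    return layers

grouped = group_by_layer([
    "vggish/conv1/weights:0",
    "vggish/conv1/biases:0",
    "vggish/fc1/fc1_2/weights:0",
])
# grouped["vggish_conv1"] == {"weights": "vggish/conv1/weights",
#                             "biases": "vggish/conv1/biases"}
```

In the real converter, the values stored in each layer's dict would be the arrays from reader.get_tensor() (or session.run()), ready to hand to the matching Keras layer.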