What kind of information you can get out of it may depend on your model. You should be able to "chop" your trained network off at the encoded layer, run all of your inputs through it, and collect their encodings. Run those encodings through t-SNE to reduce the dimensions down to 2D or 3D, then use matplotlib to plot the results.
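A minimal sketch of the t-SNE and matplotlib step, assuming you already have an `encodings` array of shape `(n_samples, encoding_dim)` collected from the truncated network (random data stands in for it here):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the plot saves to a file
import matplotlib.pyplot as plt

# Stand-in for the encodings you collected from the truncated network.
encodings = np.random.rand(200, 32)

# Reduce the encodings down to 2D for plotting.
tsne = TSNE(n_components=2, random_state=0)
points = tsne.fit_transform(encodings)  # shape (200, 2)

plt.scatter(points[:, 0], points[:, 1], s=5)
plt.savefig('embeddings_tsne.png')
```

Swap `n_components=2` for `3` (and a 3D axes) if you want a 3D view instead.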
Try making a copy of your original network and removing every layer past the encoded layer.
Use a modified version of the model.load_weights function to only load the weights up to the encoded layer:
def load_weights(self, filepath):
    # Loads weights from an HDF5 file, but only for the first
    # NUM_LAYERS_TO_LOAD layers (everything up to the encoded layer)
    import h5py
    f = h5py.File(filepath, 'r')
    for k in range(NUM_LAYERS_TO_LOAD):
        g = f['layer_{}'.format(k)]
        weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
        self.layers[k].set_weights(weights)
    f.close()
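Here is a self-contained sketch of the HDF5 layout that loop assumes (the old `layer_{k}/param_{p}` format), with a hypothetical `FakeLayer` class standing in for real Keras layers so you can see the partial load in action:

```python
import numpy as np
import h5py

class FakeLayer:
    # Stand-in for a Keras layer: just records whatever weights it's given.
    def __init__(self):
        self.weights = None
    def set_weights(self, weights):
        self.weights = [np.array(w) for w in weights]

# Write a toy weight file in the layer_{k}/param_{p} layout,
# pretending the full network has 4 layers of one parameter each.
with h5py.File('toy_weights.h5', 'w') as f:
    for k in range(4):
        g = f.create_group('layer_{}'.format(k))
        g.attrs['nb_params'] = 1
        g.create_dataset('param_0', data=np.full((2, 2), float(k)))

NUM_LAYERS_TO_LOAD = 2  # only load up to the encoded layer
layers = [FakeLayer() for _ in range(4)]

with h5py.File('toy_weights.h5', 'r') as f:
    for k in range(NUM_LAYERS_TO_LOAD):
        g = f['layer_{}'.format(k)]
        weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
        layers[k].set_weights(weights)

# layers[0] and layers[1] now hold weights; layers[2] and layers[3] stay empty.
```

The point is just that the file stores one group per layer, so stopping the loop early leaves the decoder layers untouched.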
I'm pretty new to Keras, but I know this method works in the Neon framework, so I hope I'm not steering you wrong.
What's the topology of the network you're using to generate your embeddings? Maybe we can whip up some code if we know what you're dealing with.