Hello all,
I have a question regarding the interpretation of a neural network constructed using tfp layers and standard Keras layers.
This is quite a common case, e.g. when doing transfer learning from a "frequentist" neural network, or when constructing a model using layers not available in tfp (e.g. RNNs).
When we train the whole network using the ELBO as the loss, what do we get?
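For concreteness, here is a minimal sketch of the kind of mixed model I mean (the layer sizes, input shape, dataset size, and the choice of DenseFlipout as the tfp part are just placeholder assumptions):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Hypothetical training-set size, used to scale the KL term so the
# per-batch objective approximates the (negative) ELBO.
NUM_TRAIN_EXAMPLES = 1000

model = tf.keras.Sequential([
    # Plain Keras layer with L2 weight decay -- the "frequentist" part.
    tf.keras.layers.Dense(
        32, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),
        input_shape=(8,)),
    # Variational layer: the kernel has a learned surrogate posterior,
    # and the layer adds KL(q || p) to model.losses.
    tfp.layers.DenseFlipout(
        1,
        kernel_divergence_fn=lambda q, p, _:
            tfd.kl_divergence(q, p) / NUM_TRAIN_EXAMPLES),
])

# Negative log-likelihood of a unit-variance Gaussian (up to a constant).
# Keras adds model.losses (the KL term and the L2 penalty) on top, so
# the total objective being minimized is a negative ELBO plus the L2 term.
def nll(y_true, y_pred):
    return 0.5 * tf.reduce_sum(tf.square(y_true - y_pred), axis=-1)

model.compile(optimizer="adam", loss=nll)
# model.fit(x_train, y_train, epochs=10)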
My interpretation is that the Keras layers act as Bayesian layers with a deterministic (point-mass) surrogate posterior and a normal or Laplace prior, depending on the regularization (L2 or L1, respectively).
Is this correct?
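If it helps, the prior/penalty correspondence I am assuming is the usual one: for a Gaussian prior $p(w) = \mathcal{N}(w \mid 0, \sigma^2 I)$,

$-\log p(w) = \frac{1}{2\sigma^2}\lVert w \rVert_2^2 + \text{const}$,

i.e. an L2 penalty with weight $\lambda = 1/(2\sigma^2)$, and for an i.i.d. Laplace prior with scale $b$,

$-\log p(w) = \frac{1}{b}\lVert w \rVert_1 + \text{const}$,

i.e. an L1 penalty with weight $\lambda = 1/b$.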
Regards
--
Krzysztof Rusek