Understanding Google's tutorial on BNNs: why use VariableLayers?


Man Old

Dec 21, 2021, 7:45:57 AM
to tfprob...@tensorflow.org
In this Google tutorial, when they define the prior and the posterior, they use a VariableLayer. From what I understood, this layer simply returns a trainable constant (the bias terms?) as output...

I think part of the objective of the VariableLayer is to make the prior and posterior trainable.

 1. How exactly does the VariableLayer help us make them trainable? From the sequential way of building, it seems we're just passing some constants to the LambdaLayers...
 2. Couldn't we have used an ordinary Dense layer (with the kernel forced to zero) instead of the VariableLayer?
 3. What's the point in making the prior trainable?

Kind regards.

Pavel Sountsov

Dec 21, 2021, 3:16:17 PM
to Man Old, TensorFlow Probability
On Tue, Dec 21, 2021 at 4:45 AM Man Old <anoldman...@gmail.com> wrote:
In this Google tutorial, when they define the prior and the posterior, they use a VariableLayer. From what I understood, this layer simply returns a trainable constant (the bias terms?) as output...

I think part of the objective of the VariableLayer is to make the prior and posterior trainable.

 1. How exactly does the VariableLayer help us make them trainable? From the sequential way of building, it seems we're just passing some constants to the LambdaLayers...

The values of these constants are trainable because they are backed by a tf.Variable.
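For concreteness, here is a minimal sketch of that pattern (assuming the tfp.layers API as used in the tutorial; the shape n and the standard-normal scale are illustrative):

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    n = 3
    # VariableLayer ignores its input and just returns an internal
    # tf.Variable of shape [n]; DistributionLambda turns that tensor
    # into a distribution object.
    trainable_normal = tf.keras.Sequential([
        tfp.layers.VariableLayer(n, dtype=tf.float32),
        tfp.layers.DistributionLambda(
            lambda t: tfd.Independent(tfd.Normal(loc=t, scale=1.0),
                                      reinterpreted_batch_ndims=1)),
    ])

    dist = trainable_normal(tf.zeros([1]))      # the input is ignored
    print(trainable_normal.trainable_weights)   # [<tf.Variable shape=(3,) ...>]

Because the variable shows up in trainable_weights, gradient descent on the model's loss can move the distribution's location, which is what "trainable" means here.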
 
 2. Couldn't we have used an ordinary Dense layer (with the kernel forced to zero) instead of the VariableLayer?

I don't think the Dense layer has that feature. If it did, yes, that'd be the same thing.
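For what it's worth, one could emulate it by hand. This is a hypothetical workaround, not a built-in feature, and it wastes a matmul compared to VariableLayer:

    import tensorflow as tf

    # Hypothetical "null kernel" Dense: the constraint clamps the kernel
    # back to zero after every optimizer step, so the output is just the
    # bias, i.e. a trainable constant.
    null_kernel_dense = tf.keras.layers.Dense(
        units=3,
        kernel_initializer="zeros",
        kernel_constraint=lambda w: tf.zeros_like(w),
    )
    out = null_kernel_dense(tf.ones([1, 5]))  # out == bias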
 

 3. What's the point in making the prior trainable?

The tutorial is doing type-II maximum likelihood / empirical Bayes. This approach makes fewer assumptions about the data than full Bayes would (where you'd keep the prior fixed). Ultimately, it's a modeling choice, and the prior could just as well have been kept constant, depending on what you wanted to do.
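To make the contrast concrete, here is a sketch of both options in the shape expected by tfp.layers.DenseVariational's make_prior_fn (the trainable version follows the tutorial's pattern; the fixed one is an illustrative alternative):

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # Trainable prior (empirical Bayes): the prior mean is a tf.Variable
    # updated together with the rest of the model.
    def prior_trainable(kernel_size, bias_size=0, dtype=None):
        n = kernel_size + bias_size
        return tf.keras.Sequential([
            tfp.layers.VariableLayer(n, dtype=dtype),
            tfp.layers.DistributionLambda(
                lambda t: tfd.Independent(tfd.Normal(loc=t, scale=1.0),
                                          reinterpreted_batch_ndims=1)),
        ])

    # Fixed prior (full Bayes): same signature, nothing trainable inside.
    def prior_fixed(kernel_size, bias_size=0, dtype=None):
        n = kernel_size + bias_size
        return lambda _: tfd.Independent(
            tfd.Normal(loc=tf.zeros(n, dtype=dtype), scale=1.0),
            reinterpreted_batch_ndims=1)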
 



Man Old

Dec 22, 2021, 4:40:17 AM
to Pavel Sountsov, TensorFlow Probability
Thanks for the answer, Pavel. ;)