How to include known error as input for Regression with Probabilistic Layers


Julian Moore

Oct 31, 2019, 3:02:57 AM
to TensorFlow Probability
I have been following TF development for some time but am only now starting to work with it; consequently, whilst any input will be welcome, pedagogical answers will be greatly appreciated...

In the TFP article and examples on Regression with Probabilistic Layers at Medium.com (which I have successfully run), the input data is treated as having no error bars.

I have a dataset in which the samples are time-averaged and supplied with standard-deviation information based on the raw data.

Assuming that the errors are Gaussian (although I expect other common distributions would be easily accommodated), how can the error bars be input and exploited in the TFP regression examples (especially the tabula rasa example)?

Note: this question was also asked on Stack Overflow but has so far received no replies, so I'm hoping that the stronger focus on the subject here will prove fruitful.

(And, assuming the TFP team also follow this group, allow me to express my enormous admiration for your work!)

Thanks in advance

Brian Patton 🚀

Oct 31, 2019, 10:56:58 AM
to Julian Moore, TensorFlow Probability
Seems like you could perhaps optimize the KL divergence between your oracle values and the distribution inferred by regression. Better yet, of course, to get the raw data. :-)
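As a rough illustration of that suggestion (a sketch of my own, not code from this thread): if both the measured value with its error bar and the model's prediction are represented as Normal distributions, TFP gives the KL divergence between them in closed form.

import tensorflow_probability as tfp
tfd = tfp.distributions

# Measured value and its standard deviation, represented as a Normal.
measured = tfd.Normal(loc=2.0, scale=0.3)
# Distribution the regression model predicts for the same point.
inferred = tfd.Normal(loc=1.8, scale=0.5)

# Closed-form KL divergence between the two Normals; summed over the
# data points, this is the quantity one would minimize during training.
kl = measured.kl_divergence(inferred)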


Julian Moore

Nov 8, 2019, 12:40:32 PM
to TensorFlow Probability
Thanks; you're the only one to offer any input. Unfortunately I don't understand KL divergence, and whilst I do have the raw data (I created the mean and SD!), I want to know how to do this as if I didn't, because surely it has to be more efficient to input this information than to let the NN work it out.

Pavel Sountsov

Nov 8, 2019, 1:32:35 PM
to Julian Moore, TensorFlow Probability
The current loss looks like this:

negloglik = lambda y, p_y: -p_y.log_prob(y)

If you can convince Keras to pass in the y_err to the loss function, then you can use KL divergence like so:

def kl_loss(y, y_err, p_y):
  return tfd.Normal(y, y_err).kl_divergence(p_y)

You might need to concatenate y and y_err in your data and then split them in the loss, or something like that; my Keras is very rusty.
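For example, one way the concatenate-and-split trick might be wired up (a sketch under my own assumptions, not tested code from the thread; x, y, y_err and model are assumed to be as in the Medium example, with y_err holding the per-point standard deviations):

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Pack the targets and their standard deviations side by side: shape (N, 2).
y_packed = np.stack([y, y_err], axis=-1).astype(np.float32)

def kl_loss(y_true, p_y):
  # y_true arrives as the packed (batch, 2) tensor; split it back out
  # into two (batch, 1) tensors matching the model's output batch shape.
  y_mean, y_std = tf.split(y_true, num_or_size_splits=2, axis=-1)
  return tfd.Normal(loc=y_mean, scale=y_std).kl_divergence(p_y)

model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=kl_loss)
model.fit(x, y_packed, epochs=1000, verbose=False)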

The original loss is actually a special case of that:

def kl_loss(y, p_y):
  return tfd.Deterministic(y).kl_divergence(p_y)
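As implemented in TFP, the KL from a Deterministic point mass at y to p_y is just -log p_y(y), so this really does recover the negloglik loss above. A quick sanity check (my addition, assuming the registered Deterministic KL behaves as described):

import tensorflow_probability as tfp
tfd = tfp.distributions

p_y = tfd.Normal(loc=0.5, scale=2.0)
y = 1.3

# The two printed values should match: KL(Deterministic(y) || p_y) == -log p_y(y).
print(tfd.Deterministic(y).kl_divergence(p_y).numpy())
print(-p_y.log_prob(y).numpy())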



P.

Jun 21, 2022, 9:26:24 AM
to TensorFlow Probability, Pavel Sountsov, TensorFlow Probability, Julian Moore

I'm in the same situation. Has there been any update in the intervening years that makes it easy to include the error bars?

Julian Moore

Jun 21, 2022, 12:35:25 PM
to TensorFlow Probability, p.cate...@gmail.com

Nothing to my knowledge :(

P.

Jun 22, 2022, 12:19:23 AM
to TensorFlow Probability, Julian Moore, P.
It seems odd that such an important feature is missing. I am using BNNs for a physics problem, and it would be really useful to be able to use uncertainties on measurements.

David Harris

Jul 1, 2022, 5:19:12 PM
to TensorFlow Probability, p.cate...@gmail.com, Julian Moore
The example in the link below shows (I think) an approach to regression with known errors in the input, using the JointDistributionSequential distribution.
Is this the kind of thing you are looking for?
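Since the link isn't quoted above, here is a rough sketch of what regression with known per-point errors can look like with tfd.JointDistributionSequential (an illustrative toy example of my own, not the linked notebook; x, y_obs and y_err are made-up data):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Toy data: observed targets y_obs with known per-point standard errors y_err.
x = tf.constant([0., 1., 2., 3.])
y_obs = tf.constant([0.1, 1.2, 1.9, 3.1])
y_err = tf.constant([0.1, 0.2, 0.1, 0.3])

model = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=10.),      # prior on the slope m
    tfd.Normal(loc=0., scale=10.),      # prior on the intercept b
    # Likelihood: the known error bars enter as the observation scale.
    # Lambda arguments arrive most-recently-defined first, hence (b, m).
    lambda b, m: tfd.Independent(
        tfd.Normal(loc=m * x + b, scale=y_err),
        reinterpreted_batch_ndims=1),
])

# Unnormalized posterior log-density of (m, b) given the data, usable for
# MAP optimization or an MCMC sampler such as tfp.mcmc.HamiltonianMonteCarlo.
def target_log_prob(m, b):
  return model.log_prob([m, b, y_obs])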
