That sounds like a neat problem! The first thing I would try is splitting the computation inside your target_log_prob function across GPUs using MirroredStrategy. We don't have any examples of doing this, but I would love for us to have one. I will talk tomorrow with Colin (on cc) to gauge his interest in coming up with a basic demo.
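A rough, untested sketch of that idea — where likelihood_log_prob, prior_log_prob, and sharded_data are placeholders for your own per-datum log-likelihood, log-prior, and per-replica data (e.g. from strategy.experimental_distribute_dataset) — might look like:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def make_target_log_prob_fn(sharded_data):
    def target_log_prob_fn(params):
        def per_replica_log_lik(data_shard):
            # placeholder: sum of the model's per-datum log-likelihood on this shard
            return tf.reduce_sum(likelihood_log_prob(params, data_shard))
        per_replica = strategy.run(per_replica_log_lik, args=(sharded_data,))
        log_lik = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)
        return prior_log_prob(params) + log_lik  # placeholder log-prior on the parameters
    return target_log_prob_fn

In principle the resulting function could be passed as target_log_prob_fn to the NUTS kernel, since the per-shard log-likelihood terms just add up; how cleanly this composes with the tf.function tracing inside sample_chain is exactly what a demo would need to check.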
On Mon, Mar 2, 2020, 12:07 PM Angel Berihuete <angel.b...@gm.uca.es> wrote:
Dear TensorFlow Probability community,

Recently our research group finished the code for a hierarchical model using TensorFlow Probability with MCMC (NUTS) for the inference. The inference works pretty well on one machine with 8 CPUs and a small sample size (1e4). Our intention is to increase the sample size to 1e8 and distribute batches of data across 8 GPUs in one machine.

We are wondering what the best type of TensorFlow strategy is for distributing our data (model?) across 8 GPUs in one machine while using NUTS. Is it possible to do the inference with several chains while distributing the data across the 8 GPUs? Could you please share some links containing examples of how to do this?

Best regards,
Ángel
Hi Rif, Brian,
Many thanks for your response. I am so sorry for this huge delay in my reply, but here in Spain the situation is complicated due to COVID-19.
The latest tests of my code do not show problems with the Gamma distribution, but I am still wondering how to run the NUTS calculations on multiple GPUs.
I have a hierarchical model and I need to include the uncertainties of the observations. My first attempt was to use tfd.JointDistributionCoroutine, following the advice in the TFP notebook Modeling_with_JointDistribution.ipynb. The problem with coding it this way was how to include the uncertainties and observations in batches so they can be replicated across GPUs. I tried something like:
def probabilistic_model(uncertainties, hyperparameters):
    ...  # yield the prior and likelihood distributions here

concrete_model = functools.partial(probabilistic_model, batch_uncertainties, hyperparameters)
model = tfd.JointDistributionCoroutine(concrete_model)
but this seemed pointless to me, because I would need to build a model for each batch of uncertainties. My first question: do you have examples of dealing with uncertainties using tfd.JointDistributionCoroutine?
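For concreteness, a toy measurement-error sketch of the pattern I mean — made-up priors, with a known per-observation scale sigma_obs passed in as data rather than sampled — could look like:

import functools
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
Root = tfd.JointDistributionCoroutine.Root

def probabilistic_model(sigma_obs):
    # sigma_obs: known per-observation uncertainties, treated as data, not sampled
    mu = yield Root(tfd.Normal(loc=0., scale=10., name='mu'))
    sigma = yield Root(tfd.HalfNormal(scale=1., name='sigma'))
    latent = yield tfd.Sample(tfd.Normal(loc=mu, scale=sigma),
                              sample_shape=sigma_obs.shape[0], name='latent')
    # measurement-error likelihood: each observation gets its own known scale
    yield tfd.Independent(tfd.Normal(loc=latent, scale=sigma_obs),
                          reinterpreted_batch_ndims=1, name='obs')

batch_sigma_obs = tf.constant([0.1, 0.2, 0.15])  # toy batch of uncertainties
model = tfd.JointDistributionCoroutine(
    functools.partial(probabilistic_model, batch_sigma_obs))

The observed values then enter only when evaluating model.log_prob on the full (mu, sigma, latent, obs) structure, so a new batch of uncertainties really does mean building a new model object, which is exactly the part that feels awkward to replicate across GPUs.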
Then, my second attempt (which I am currently working on) was to follow Brendan Hasz’s notebook (https://github.com/brendanhasz/svi-gaussian-mixture-model/blob/master/BayesianGaussianMixtureModel.ipynb) in order to wrap my model in a tf.Module and feed batches of observations and uncertainties to the model across multiple GPUs properly. I wrote:
class Mymodel(tf.Module):
    def __init__(self, hyperparameters):
        self.hyperparameters = hyperparameters
        self.priors = ...  # build the prior distributions from the hyperparameters

    def __call__(self, x):
        # x: a batch of observations and their uncertainties
        ...  # evaluate the priors and the likelihood on the batch
        return log_joint_prob
and then I followed the recommendations to write a [custom training loop](https://www.tensorflow.org/tutorials/distribute/custom_training#training_loop) within a mirrored strategy:
strategy = tf.distribute.MirroredStrategy()
model = Mymodel(hyperparameters)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
def train_step(inputs):
    ...  # per-replica forward pass, loss, and gradient update
@tf.function
def distributed_train_step(dataset_inputs):
    return strategy.run(train_step, args=(dataset_inputs,))
for epoch in range(EPOCHS):
    ...  # iterate over the distributed dataset, calling distributed_train_step
and it works properly on multiple GPUs! … but now I do not know how to translate this in terms of NUTS. Any clue/help would be welcome.
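For reference, a minimal single-chain sketch of how such a log joint might be handed to NUTS — make_log_joint_fn, initial_params, and the step size are placeholders, and where the mirrored strategy should enter is exactly the open question:

import tensorflow_probability as tfp

def make_log_joint_fn(batch):
    def target_log_prob_fn(*params):
        ...  # placeholder: log prior of params plus log-likelihood of batch given params
        return log_joint_prob
    return target_log_prob_fn

kernel = tfp.mcmc.NoUTurnSampler(
    target_log_prob_fn=make_log_joint_fn(batch), step_size=0.1)
samples = tfp.mcmc.sample_chain(
    num_results=1000, current_state=initial_params, kernel=kernel, trace_fn=None)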
My best wishes to you.
In my benchmarks, the new gamma sampler gives a very large speedup (~10x) for large numbers of samples (10000). Is gamma sampling a bottleneck for you? Note that if this were really a bottleneck, we could experiment with unrolling the loop some.