Speed up TFP using hardware (only)


Michael West

Nov 24, 2021, 7:37:28 AM
to TensorFlow Probability
Hi all,

I'm estimating a TFP model through HMC sampling and am looking to speed up the process. There is probably a way to do this by changing the model code, but for now I'm looking for a way to do it with minimal code changes, by setting up an AWS/Azure/GCP compute instance with a lot of computing power.

I already set up an instance with 48 cores and 96 GB of RAM, but the increase in computation speed was not as large as I had hoped.

Does anyone have experience with the best way to do this? What kind of compute instance should I be looking for: is it a GPU, or should I be using even more CPU cores, etc.?

Many thanks in advance,

Brian Patton 🚀

Dec 2, 2021, 10:43:07 AM
to Michael West, TensorFlow Probability
The best speedups would probably come from optimizing the log_prob and gradient calculations and distributing those across the hardware. HMC is inherently a sequential algorithm, so the knobs for parallelism are basically large event dimensions (tfp.experimental.distribute), large likelihood computations (many data examples), and very many HMC chains, in which case the main expense becomes burning in the chains sequentially.

Brian Patton | Software Engineer | b...@google.com
