Thanks for your quick reply.
I'm trying to get from some mathematical model descriptions to a joint probability distribution, i.e. to define a collection of priors and a likelihood for conditioning a posterior. I'm really new to TFP, and the first model I tried to translate resulted in the error described above, so I was trying to figure out what was happening with a simplified model. Most of what I know about this sort of thing comes from Richard McElreath's Statistical Rethinking, which is written in R but is a terrific book. In that book, he provides model definitions like this (p. 97, if you happen to have it):
h_i ~ Normal(mu_i, sigma)
mu_i = a + b(x_i - x-mean)
a ~ Normal(178, 20)
b ~ LogNormal(0, 1)
sigma ~ Uniform(0, 50)
In his book, this sort of thing translates directly into R code, much like the code in some of the TFP joint-distribution examples. But none of those examples use bijectors, which I had thought were the appropriate way to pass distributions through a joint distribution, along the lines of the second line of the model above. The model I'm actually working on is an attempt to model a probability that should be a monotonically increasing function of a variable, so I'm trying to devise a model that learns the alpha and beta of a Beta distribution from categorical and continuous input features.
Is there a way to handle components like the second line of the model above (a deterministic transformation of other parameters) within a TFP joint distribution?
Thanks in advance for any advice or instruction you have time for.