I'm not sure what you mean by reweighting the prior within unnormalized_log_prob. I think I'm not understanding what you want to do (for example, I'm not sure why you linked to _make_importance_weighted_divergence_fn, given the wording of the question you asked -- I'm probably missing something).
For full-batch VI, you want the expected log likelihood of all the data and a single KL penalty. For a minibatch of B out of N data points, you need to downweight the KL term by B / N, so that after N / B minibatches you end up with the equivalent of one full KL penalty. In the case you wrote down, with a single data point, B = 1 and you just need to reweight the KL by 1 / N. My original suggestion would accomplish this, but I suspect you're actually trying to do something else. If you can clarify, maybe I can offer more help.
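To make the scaling concrete, here's a minimal numeric sketch (the values N, B, and kl_full are made-up placeholders, not from your code) showing that a per-minibatch KL weight of B / N sums back to one full KL penalty over an epoch:

```python
# Hypothetical setup: N data points, minibatches of size B.
N, B = 1000, 50
kl_full = 3.7  # stand-in scalar for KL(q || p) of the full variational posterior

# Per-minibatch loss uses the expected log likelihood of the B points
# plus the KL term downweighted by B / N:
kl_per_batch = (B / N) * kl_full

# Over one epoch (N / B minibatches) the KL contributions sum to one full KL.
total_kl = (N // B) * kl_per_batch
assert abs(total_kl - kl_full) < 1e-9
```

With B = 1 this reduces to the 1 / N reweighting mentioned above.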