I'm working to better understand TensorFlow Rust, and specifically I'm trying to understand how to correctly set up sources of random numbers.
Conceptually I follow the process: build a new random op (of scalar shape for now), scale it by mean/stdev, clip it to min/max bounds (if requested), and finally build the graph to produce a reusable generator for that specific configuration. What I don't understand is how best to handle seeding and consistency.
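For context, the per-sample math I have in mind for the scale/clip step is just an affine transform followed by an optional clamp. A minimal sketch in plain Rust (parameter and function names are hypothetical, independent of the TensorFlow graph API):

```rust
// Sketch of the scale/clip transform applied to a raw unit sample `z`
// (e.g. standard normal). `mean`, `stdev`, and the optional `bounds`
// are hypothetical configuration values, not any TensorFlow Rust API.
fn scale_and_clip(z: f64, mean: f64, stdev: f64, bounds: Option<(f64, f64)>) -> f64 {
    // Scale the unit sample into the requested distribution.
    let v = z * stdev + mean;
    // Clip to min/max bounds only if they were requested.
    match bounds {
        Some((min, max)) => v.clamp(min, max),
        None => v,
    }
}

fn main() {
    // z = 2.0 standard deviations above the mean, scaled to mean 10, stdev 3.
    let unclipped = scale_and_clip(2.0, 10.0, 3.0, None);
    let clipped = scale_and_clip(2.0, 10.0, 3.0, Some((0.0, 12.0)));
    println!("{unclipped} {clipped}"); // 16 clipped down to 12
}
```

In the graph, each of these steps would be its own op wired onto the base random op, but the arithmetic is the same.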
As I understand it, Python's tf.random.Generator keeps a single seeded state and can generate any shape of random values from any distribution on request, whereas any new random op I create could theoretically be given new seed/seed2 values. Specifically:

- Does creating a new random op seed just the op itself, or does it set some internal state?
- If each random op must be uniquely seeded, is it as simple as incrementing my crate-level SEED/SEED2 static AtomicI64s by one after every new random distribution generator gets added, to avoid overlap between two generators of the same base distribution type?
- Should I instead first set up a single random scope with consistently seeded, unique base distributions (normal, uniform, etc.) and use those as independent/consistent control inputs for every scaled random generator I'd like to build on top?
- How does this change if/when I want to build generators beyond just scalar values?
- Would it be easier to have Rust hold a single Python tf.random.Generator instance and take advantage of its internal seeding/consistency setup?
- Or is there a better way to do this that I'm still missing?
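To make the counter idea concrete, this is the kind of per-op seed allocation I mean. A minimal pure-std Rust sketch; the statics and helper name are hypothetical and not part of the tensorflow crate:

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Crate-level seed counters; each new random op takes the next pair so
// that no two ops of the same base distribution share (seed, seed2).
// These names are hypothetical, not from the tensorflow crate.
static SEED: AtomicI64 = AtomicI64::new(42);
static SEED2: AtomicI64 = AtomicI64::new(0);

// Hand out a unique (seed, seed2) pair for the next random op.
fn next_seed_pair() -> (i64, i64) {
    // fetch_add returns the previous value, so pairs stay unique even
    // when generators are built from multiple threads.
    let seed = SEED.load(Ordering::Relaxed); // base seed stays fixed
    let seed2 = SEED2.fetch_add(1, Ordering::Relaxed);
    (seed, seed2)
}

fn main() {
    let a = next_seed_pair();
    let b = next_seed_pair();
    assert_ne!(a, b); // every generator gets a distinct pair
    println!("{a:?} {b:?}");
}
```

Whether this is sufficient is exactly my question: it guarantees distinct op-level seed attributes, but I don't know if that is what TensorFlow expects or whether some shared internal state is involved.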
Any help would be greatly appreciated. Thanks!