Understanding random ops

Aug 17, 2021, 7:20:23 AM
to Rust for TensorFlow
Hello All,

I'm working to better understand TensorFlow Rust, and specifically I'm trying to understand how to correctly set up sources of random numbers.

Conceptually I follow the process of building a new random op (of scalar shape for now), scaling it by mean/stdev, clipping it to min/max bounds (if requested), and finally building to produce the reusable generator for that specific configuration. What I'm not understanding is how to best handle seeding and consistency. 

The tf.random.Generator keeps a single seeded state and can generate any shape of random values from any distribution on request, whereas any new random op I create could theoretically be given new seed/seed2 values. Does creating a new random op seed just the op itself, or does it set some internal state? If each random op must be uniquely seeded, is it as simple as incrementing my crate-level SEED/SEED2 static AtomicI64s by one after every new random distribution generator is added, to avoid overlap between two generators of the same base distribution type? Should I first set up a single random scope with uniformly seeded, unique base distributions (normal, uniform, etc.) and use those as independent, consistent control inputs for every scaled random generator I'd like to build on top? How does that change if/when I want to build generators beyond just scalar values? Would it be easier to have Rust hold a single Python tf.random.Generator instance and take advantage of its internal seeding/consistency setup? Or is there a better way to do this that I'm still missing?
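For concreteness, the crate-level atomic-seed idea from the question can be sketched in plain Rust (SEED/SEED2 are the statics named in the question; this is an illustrative sketch of the bookkeeping only, not part of the TensorFlow Rust API):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Crate-level seed state, as described in the question (illustrative only).
static SEED: AtomicI64 = AtomicI64::new(42);
static SEED2: AtomicI64 = AtomicI64::new(0);

/// Hand out a unique (seed, seed2) pair for each new random op by
/// incrementing SEED2, so two ops of the same base distribution type
/// never receive identical seeding.
fn next_seed_pair() -> (i64, i64) {
    let seed = SEED.load(Ordering::Relaxed);
    let seed2 = SEED2.fetch_add(1, Ordering::Relaxed);
    (seed, seed2)
}

fn main() {
    let a = next_seed_pair();
    let b = next_seed_pair();
    assert_eq!(a.0, b.0); // shared base seed
    assert_ne!(a.1, b.1); // distinct per-op seed2
    println!("{a:?} {b:?}");
}
```

Whether uniqueness of the pair is actually sufficient for TensorFlow's stateless ops is exactly the open question above.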

Any help would be greatly appreciated. Thanks!

Adam Crume

Aug 17, 2021, 12:49:01 PM
to PFaas, Rust for TensorFlow
One important piece of background information is that the Rust bindings never run any Python code.  The Rust bindings use the C API underneath, so at runtime everything is either TensorFlow internals (C++) or Rust.  The Rust code can still load and run models created in Python, but it won't actually run Python code.  This means that Rust does not hold e.g. a tf.random.Generator instance.  Instead, it manages a graph which contains ops like "StatelessRandomNormalV2" or "StatelessRandomUniformV2".

If you're creating the model in Python, saving it, and then loading it in Rust, you probably don't need to do anything special other than setting the seed if you want to do that.  If you're creating the model in Rust (which sounds like the case), you'll need to create those ops directly (with e.g. the tensorflow::ops::StatelessRandomNormalV2 builder or the tensorflow::ops::stateless_random_normal_v2 helper).  The Rust bindings don't currently have wrappers to help manage the seeds.  Creating the ops has no side effect other than adding the op to the graph; that is, the act of creating the ops doesn't modify any random generator state.  My reading of the TensorFlow docs suggests that the ops don't automatically adjust state shared by other ops, although if they share a counter created with tensorflow::ops::stateless_random_get_key_counter, then presumably they would avoid generating correlated numbers even with a shared seed.



Aug 18, 2021, 4:44:20 AM
to Rust for TensorFlow, acr...@google.com, PFaas
Thanks for the quick response! That matches my understanding and adds some more info.

It sounds like the smart way to do it will be to build a separate sub-scope with pure, unmodified random ops as a 'non-colliding' set of random distribution sources. I think that will require one per distribution/dtype/shape[/thread/device]. Any scaled random generators built on top of those should then draw from non-colliding, deterministic distribution sources. I also think that means I will need to increment the seeds for each new distribution, just to be sure.
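The "one base source per distribution/dtype/shape" bookkeeping described here can be sketched in plain Rust, with no TensorFlow calls (the SourceKey fields and the simple incrementing-seed scheme are assumptions for illustration):

```rust
use std::collections::HashMap;

/// Key identifying one base random source, per the plan above.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct SourceKey {
    distribution: String, // e.g. "normal", "uniform"
    dtype: String,        // e.g. "f32"
    shape: Vec<i64>,      // scalar = empty shape
}

/// Registry that assigns each distinct (distribution, dtype, shape)
/// its own seed, incrementing so no two base sources collide.
struct SeedRegistry {
    next_seed: i64,
    seeds: HashMap<SourceKey, i64>,
}

impl SeedRegistry {
    fn new(base_seed: i64) -> Self {
        Self { next_seed: base_seed, seeds: HashMap::new() }
    }

    /// Return the seed for this source, allocating a fresh one on first use.
    fn seed_for(&mut self, key: SourceKey) -> i64 {
        if let Some(&s) = self.seeds.get(&key) {
            return s;
        }
        let s = self.next_seed;
        self.next_seed += 1;
        self.seeds.insert(key, s);
        s
    }
}

fn main() {
    let mut reg = SeedRegistry::new(1000);
    let normal = SourceKey {
        distribution: "normal".to_string(),
        dtype: "f32".to_string(),
        shape: vec![],
    };
    let uniform = SourceKey {
        distribution: "uniform".to_string(),
        dtype: "f32".to_string(),
        shape: vec![],
    };
    let a = reg.seed_for(normal.clone());
    let b = reg.seed_for(uniform);
    assert_eq!(a, reg.seed_for(normal)); // stable per key
    assert_ne!(a, b);                    // no collision across keys
    println!("normal: {a}, uniform: {b}");
}
```

Each base op in the sub-scope would then be built with the seed looked up from a registry like this, keeping the allocation deterministic across runs.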

Doing some further research, I see NumPy uses only a single bitstream generator for its random module, which ends up making it inconsistent across multiple threads. That helped me understand what I'm aiming to get from TensorFlow's capabilities. But that brings me to another question: is there a reason to choose the stateless random distributions? I figured I'd want the ones with state so they can be checkpointed and restored. Also, how does the seed2 parameter work to avoid collisions? Would the backend need to store the hashes of all its used seeds to know when to use seed2 instead? But I suppose this is moving well beyond questions specific to the TensorFlow Rust bindings.

Thanks again for the help! And thanks for TensorFlow Rust!!!

Adam Crume

Aug 20, 2021, 12:40:34 AM
to PFaas, Rust for TensorFlow
As far as I understand, seed and seed2 are essentially concatenated to make a single 128-bit seed.  I'm not sure exactly how the random ops work, but tf.random.Generator.uniform() (for example) creates a StatelessRandomUniformV2 op which indirectly takes input from an RngReadAndSkip op that updates the state (through a resource handle?).  I'd suggest saving a very simple graph in Python and reproducing the same structure in Rust.  For example, you could copy and modify the addition.py example to use tf.random.Generator, and change the write_graph call to use as_text=True so the generated graph is easy to read.
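The "concatenated into a single 128-bit seed" reading can be sketched at the bit level in plain Rust (the exact mixing TensorFlow applies internally may differ; this only illustrates the concatenation):

```rust
/// Concatenate two i64 seeds into one 128-bit value:
/// seed in the high 64 bits, seed2 in the low 64 bits.
fn combine_seeds(seed: i64, seed2: i64) -> u128 {
    ((seed as u64 as u128) << 64) | (seed2 as u64 as u128)
}

fn main() {
    // Changing either half yields a different 128-bit seed,
    // so (1, 2) and (2, 1) do not collide.
    assert_ne!(combine_seeds(1, 2), combine_seeds(2, 1));
    println!("{:#034x}", combine_seeds(1, 2));
}
```

Under this reading there is no collision bookkeeping needed: seed2 simply widens the seed space rather than serving as a fallback seed.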

It's possible that the stateful ops may be easier to manage.  I'm not sure exactly what the difference is, since PRNGs are stateful by nature; I assume the stateless/stateful naming indicates where the state is stored.

I'm just glad someone finds TensorFlow Rust useful!