Bear with me here; I'm learning about the different ways in which NumPy and TensorFlow handle optimization of tensor operations.
The tf.broadcast_to documentation claims that the memory copy implied by broadcasting can be optimized out of the graph. However, I'm having trouble figuring out which part of Grappler (which I assume is what the docs are referring to) would perform this kind of optimization; my best guess is the "Layout Optimizer".
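For comparison, here's what I understand NumPy does in the same situation: np.broadcast_to avoids the copy entirely by returning a read-only view whose broadcast axis has stride 0, so no new buffer is ever allocated. (This is just a NumPy illustration of the copy-free broadcasting I'm asking about on the TF side, not a claim about how Grappler implements it.)

```python
import numpy as np

a = np.arange(3)                 # shape (3,), one small buffer
b = np.broadcast_to(a, (4, 3))   # shape (4, 3), but no data is copied

# The broadcast result is a read-only view: the new leading axis has
# stride 0, so every "row" of b points at the same 3-element buffer.
print(b.strides)                 # stride 0 along the broadcast axis
print(np.shares_memory(a, b))    # b reuses a's memory
```

I'd like to understand whether/where TF's graph optimizer achieves something analogous for tf.broadcast_to.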
I'd appreciate any material people can point me to for learning about this in more detail.