I have a `tf.summary.scalar` that the loss from my model goes into. What is the most idiomatic way to read the summary stats from my Rust code on each iteration of my training loop? I was considering attaching an output node to the loss as well, so it doesn't just go into the summary writer and I could read it directly, but that feels like a bit of a hack. It would also be good to have an example of this (whatever the best approach is) in the examples folder.
--
You received this message because you are subscribed to the Google Groups "Rust for TensorFlow" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rust+uns...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/rust/69aa2deb-17aa-4989-8d26-2b71af9c73d0o%40tensorflow.org.
    if let Some(data) = owned_data.as_slice() {
        let mut lr = Tensor::<f32>::new(&[]).with_values(&[p.learning_rate])?;
        let input_tensor = Tensor::<f32>::new(owned_data.shape()).with_values(data)?;

        let mut args = SessionRunArgs::new();
        args.add_feed(&self.context.inputs[0], 0, &input_tensor);
        args.add_feed(&self.context.inputs[1], 0, &lr);
        let loss_token = args.request_fetch(&self.context.outputs[0], 0);
        args.add_target(&self.context.targets[0]);

        let mut loss_acc = 0.0;
        for i in 1..=p.epochs {
            self.context.update(&mut args)?;
            let loss: Tensor<f32> = args.fetch(loss_token)?;
            loss_acc += loss[0];
            if i % 100 == 0 {
                info!("Epoch {}: loss {}", i - 1, loss_acc / 100.0);
                loss_acc = 0.0;
                if i % 1000 == 0 {
                    lr[0] = lr[0] * 0.8;
                }
            }
        }
    }
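As an aside, the decay schedule in that snippet (scale the learning rate by 0.8 every 1000 epochs) can be checked on its own, separately from the TensorFlow session plumbing. Here is a minimal std-only sketch; the initial rate of 0.1 and the epoch counts are illustrative values, not from the thread:

```rust
// Closed form of the schedule above: lr is multiplied by 0.8
// once per completed block of 1000 epochs.
fn decayed_lr(initial_lr: f32, epoch: u32) -> f32 {
    let steps = epoch / 1000;
    initial_lr * 0.8f32.powi(steps as i32)
}

fn main() {
    let lr0 = 0.1f32;
    // After 0 epochs: unchanged. After 1000: one decay. After 2500: two decays.
    assert!((decayed_lr(lr0, 0) - 0.1).abs() < 1e-6);
    assert!((decayed_lr(lr0, 1000) - 0.08).abs() < 1e-6);
    assert!((decayed_lr(lr0, 2500) - 0.064).abs() < 1e-6);
    println!("lr after 2500 epochs: {}", decayed_lr(lr0, 2500));
}
```

Writing the schedule as a pure function of the epoch also sidesteps mutating the fed tensor in place: you can build a fresh learning-rate tensor each time you feed it.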
Could you provide a code snippet?
On Thu, Jun 18, 2020 at 5:04 AM Daniel McKenna <danielm...@gmail.com> wrote:
Ah yeah, that worked fine. I thought it had issues because I was getting printouts about being unable to sort the graph ops, but the protobuf was created with TensorFlow 1.15, so once I started running against that it worked!

Another (hopefully quick) training question: I'm trying to decay my learning rate. I input it as a tensor and now want to mutate that tensor between Session::update calls at a regular interval, but I'm having an issue with the borrow checker, because SessionRunArgs takes a non-mutable reference, so the borrow checker (rightfully) complains.
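One common way out of this is to scope the args object to the loop body, so its shared borrow of the tensor ends before the mutation. This std-only sketch uses a made-up `Args` struct standing in for SessionRunArgs, and a plain `f32` standing in for the learning-rate tensor, just to show the borrow pattern:

```rust
// `Args` stands in for SessionRunArgs: it holds a shared
// reference to the fed value, like add_feed does.
struct Args<'a> {
    lr: &'a f32,
}

impl<'a> Args<'a> {
    // Stand-in for Session::run; just echoes the fed value.
    fn run(&self) -> f32 {
        *self.lr
    }
}

fn main() {
    let mut lr = 0.1f32;
    let mut last = 0.0;
    for i in 1..=3000u32 {
        let args = Args { lr: &lr }; // borrow of `lr` starts here...
        last = args.run();
        // ...and ends here when `args` is dropped, so mutating
        // `lr` between iterations compiles fine.
        if i % 1000 == 0 {
            lr *= 0.8;
        }
    }
    println!("final lr fed: {}", last);
}
```

Rebuilding the args each iteration costs a little (re-adding feeds and fetch requests every step), but it keeps the borrows non-overlapping; the alternative is feeding a freshly built tensor each step rather than mutating one in place.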
On Wednesday, June 17, 2020 at 3:13:37 PM UTC+1, Daniel McKenna wrote:
I have a `tf.summary.scalar` which the loss from my model goes into. What is the most idiomatic way to read the summary stats from my rust code each iteration of my training code? I was considering attaching an output node to the loss as well so it doesn't just go into the summary writer and I could read that, but it feels like a bit of a hack. It would also be good if there was an example of this in the examples folder (whatever the best approach is).