Hi all,
I'm attempting to observe how the training and test loss vary as the mnist example trains.
Specifically, I'm trying to print the training loss (in addition to the test loss) in the
test-fn in classification.clj.
I've observed that these lines in train.clj (line 36 onwards) can be used to print the loss
on the test dataset:
(let [labels (execute/run new-network test-ds
                          :batch-size batch-size
                          :loss-outputs? true)
      loss-fn (execute/execute-loss-fn new-network labels test-ds)]
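For reference, what I tried for the training loss is essentially the same snippet with the
training dataset swapped in (a minimal sketch; train-ds is my name for the training dataset
bound earlier in the same function):

(let [train-labels (execute/run new-network train-ds
                                :batch-size batch-size
                                :loss-outputs? true)
      ;; same loss computation as for the test set, but against train-ds
      train-loss-fn (execute/execute-loss-fn new-network train-labels train-ds)]
  ...)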
However, I get errors when I try to compute the train loss this way:
Caused by: clojure.lang.ExceptionInfo: Failed to resolve argument {:argument {:gradients? true, :key :output, :type :node-output, :node-id :relu-2}, :node-outputs (:labels)}
at clojure.core$ex_info.invokeStatic(core.clj:4725)
at clojure.core$ex_info.invoke(core.clj:4725)
at cortex.graph$eval30219$fn__30220.invoke(graph.clj:765)
at clojure.lang.MultiFn.invoke(MultiFn.java:251)
at cortex.graph$resolve_arguments$fn__30236.invoke(graph.clj:795)
at clojure.core$mapv$fn__7890.invoke(core.clj:6788)
at clojure.core.protocols$fn__7665.invokeStatic(protocols.clj:167)
at clojure.core.protocols$fn__7665.invoke(protocols.clj:124)
(lines elided).
What is the right value for :loss-outputs? when the training set is the argument to
execute/run? An explanation of what :loss-outputs? actually controls would be very
helpful too.
Another question I have: is it possible to change/set the learning rate and the loss
function through the experiment/* interface?
Thanks!
Kiran