how to print training loss


kiran karkera

Nov 23, 2017, 5:01:01 AM11/23/17
to clojure-cortex
Hi all,

I'm trying to observe how the train and test loss vary as the mnist example trains.

Specifically, I'd like to print the training loss (in addition to the test loss) in the classification.clj test-fn.

I've observed that these lines in train.clj (line 36 onwards) can be used to print the loss on the test dataset. 

(let [labels  (execute/run new-network test-ds
                           :batch-size batch-size
                           :loss-outputs? true)
      loss-fn (execute/execute-loss-fn new-network labels test-ds)]
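For context, here is a sketch of how that loss-fn result might be reduced to a single printable number. This assumes (as in the cortex example code) that execute-loss-fn returns a sequence of maps each carrying a :value entry; it is not a verified API, so check it against the cortex source.

```clojure
;; Sketch only: assumes execute/execute-loss-fn returns a seq of maps,
;; each with a :value entry, as the cortex example code suggests.
(let [labels  (execute/run new-network test-ds
                           :batch-size batch-size
                           :loss-outputs? true)
      loss-fn (execute/execute-loss-fn new-network labels test-ds)
      ;; Sum the per-output loss values into one scalar.
      loss    (apply + (map :value loss-fn))]
  (println "Test loss:" loss))
```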

However, I get an error when I try to compute the loss on the training set the same way.

Caused by: clojure.lang.ExceptionInfo: Failed to resolve argument {:argument {:gradients? true, :key :output, :type :node-output, :node-id :relu-2}, :node-outputs (:labels)}
at clojure.core$ex_info.invokeStatic(core.clj:4725)
at clojure.core$ex_info.invoke(core.clj:4725)
at cortex.graph$eval30219$fn__30220.invoke(graph.clj:765)
at clojure.lang.MultiFn.invoke(MultiFn.java:251)
at cortex.graph$resolve_arguments$fn__30236.invoke(graph.clj:795)
at clojure.core$mapv$fn__7890.invoke(core.clj:6788)
at clojure.core.protocols$fn__7665.invokeStatic(protocols.clj:167)
at clojure.core.protocols$fn__7665.invoke(protocols.clj:124)
(lines elided).

What is the right argument to :loss-outputs? when the training set is the argument to execute/run?
Any explanation of what :loss-outputs? stands for would be very helpful too.



Another question I have is:

Is it possible to change/set the learning rate and the loss function in the experiment/* interface?  

Thanks!
Kiran

Harold

Nov 24, 2017, 12:17:32 PM11/24/17
to clojure-cortex
Hi Kiran,

Great questions.

I agree that the current setup doesn't make it obvious enough how to compare train/test results during training. It's actually a bit of work to get it done, and the places you've pointed to are related, but not the whole story.

I've made an issue to track the required work:

Perhaps someone will just pick that up; otherwise, let's move the discussion there so we can get this feature into the experiment framework and examples.

Re: your other question:
> Is it possible to change/set the learning rate and the loss function in the experiment/* interface? 

By default, the experiment framework uses the "adam" optimizer, as seen here:

The adam optimizer adapts the learning rate automatically based on statistical properties of training.

There's also an "sgd" optimizer:

It allows you to specify the learning rate directly.
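To make that concrete, here is a hedged sketch of handing an optimizer to the experiment framework's training function. The namespace alias, the :optimizer option, and the descriptor key names (:learning-rate, :momentum) are assumptions based on reading the cortex source, not a verified API; check them before relying on this.

```clojure
;; Sketch only: option and key names below are assumptions --
;; verify against the cortex source before use.
(require '[cortex.experiment.train :as train])

;; Hypothetical sgd descriptor with an explicit learning rate.
(def sgd-optimizer
  {:type          :sgd
   :learning-rate 0.01
   :momentum      0.9})

;; Pass it through the assumed :optimizer option instead of the
;; default adam optimizer.
(train/train-n network train-ds test-ds
               :batch-size 128
               :optimizer sgd-optimizer)
```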

Hope that helps,
-Harold 

kiran karkera

Nov 30, 2017, 12:50:03 AM11/30/17
to clojure-cortex
Hi Harold,

Thanks for pointing out where the learning rate and optimiser can be set.
I've also submitted a PR that adds support for computing the train and test loss metrics.

regards
Kiran