Training loss does not decrease when training from scratch, but it does when fine-tuning. What's the problem?


Hao Wang

Jun 11, 2016, 5:43:55 AM
to Caffe Users
When I train from scratch, the network is similar to AlexNet, but the data is optical flow, which may be quite different from natural image data. The loss stays around the initial value of 4.63 and has not decreased after 600+ iterations. But when I fine-tune the network, the loss decreases clearly.

So is it a weight initialization problem? I copied the weight initialization parameters from AlexNet.
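For reference, a minimal sketch of the kind of conv layer I mean (layer name, sizes, and values are placeholders, not my actual prototxt). AlexNet initializes conv weights with a gaussian filler of std 0.01; with input statistics as different as optical flow, that filler may leave the activations poorly scaled, and "xavier" or "msra" fillers are common alternatives to try:

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler { type: "gaussian" std: 0.01 }   # copied from AlexNet
    # weight_filler { type: "xavier" }             # alternative worth trying
    bias_filler { type: "constant" value: 0 }
  }
}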

I0611 16:38:43.634160 13733 solver.cpp:228] Iteration 0, loss = 4.63397
I0611 16:38:43.634223 13733 solver.cpp:244]     Train net output #1: loss = 4.63397 (* 1 = 4.63397 loss)
I0611 16:41:21.797159 13733 solver.cpp:228] Iteration 20, loss = 4.63357
I0611 16:41:21.797503 13733 solver.cpp:244]     Train net output #1: loss = 4.63357 (* 1 = 4.63357 loss)
I0611 16:43:49.936383 13733 solver.cpp:228] Iteration 40, loss = 4.62364
I0611 16:43:49.936622 13733 solver.cpp:244]     Train net output #1: loss = 4.62364 (* 1 = 4.62364 loss)
I0611 16:46:36.435134 13733 solver.cpp:228] Iteration 60, loss = 4.64674
I0611 16:46:36.435389 13733 solver.cpp:244]     Train net output #1: loss = 4.64674 (* 1 = 4.64674 loss)
I0611 16:49:32.840558 13733 solver.cpp:228] Iteration 80, loss = 4.61586
I0611 16:49:32.840869 13733 solver.cpp:244]     Train net output #1: loss = 4.61586 (* 1 = 4.61586 loss)
I0611 16:52:21.351488 13733 solver.cpp:228] Iteration 100, loss = 4.6407
I0611 16:52:21.351804 13733 solver.cpp:244]     Train net output #1: loss = 4.6407 (* 1 = 4.6407 loss)
I0611 16:54:36.660387 13733 solver.cpp:228] Iteration 120, loss = 4.67369
I0611 16:54:36.660588 13733 solver.cpp:244]     Train net output #1: loss = 4.67369 (* 1 = 4.67369 loss)
I0611 16:56:59.279364 13733 solver.cpp:228] Iteration 140, loss = 4.64865
I0611 16:56:59.279688 13733 solver.cpp:244]     Train net output #1: loss = 4.64865 (* 1 = 4.64865 loss)
I0611 16:59:24.811175 13733 solver.cpp:228] Iteration 160, loss = 4.6102
I0611 16:59:24.811558 13733 solver.cpp:244]     Train net output #1: loss = 4.6102 (* 1 = 4.6102 loss)
I0611 17:01:47.994952 13733 solver.cpp:228] Iteration 180, loss = 4.64995
I0611 17:01:47.995254 13733 solver.cpp:244]     Train net output #1: loss = 4.64995 (* 1 = 4.64995 loss)
I0611 17:03:59.409205 13733 solver.cpp:228] Iteration 200, loss = 4.63385
I0611 17:03:59.409472 13733 solver.cpp:244]     Train net output #1: loss = 4.63385 (* 1 = 4.63385 loss)
I0611 17:06:10.470062 13733 solver.cpp:228] Iteration 220, loss = 4.62748
I0611 17:06:10.470222 13733 solver.cpp:244]     Train net output #1: loss = 4.62748 (* 1 = 4.62748 loss)
I0611 17:08:18.853078 13733 solver.cpp:228] Iteration 240, loss = 4.62072
I0611 17:08:18.853291 13733 solver.cpp:244]     Train net output #1: loss = 4.62072 (* 1 = 4.62072 loss)
I0611 17:10:19.710860 13733 solver.cpp:228] Iteration 260, loss = 4.63698
I0611 17:10:19.711091 13733 solver.cpp:244]     Train net output #1: loss = 4.63698 (* 1 = 4.63698 loss)
I0611 17:12:27.735249 13733 solver.cpp:228] Iteration 280, loss = 4.63701
I0611 17:12:27.735605 13733 solver.cpp:244]     Train net output #1: loss = 4.63701 (* 1 = 4.63701 loss)
I0611 17:14:27.265866 13733 solver.cpp:228] Iteration 300, loss = 4.60108
I0611 17:14:27.266371 13733 solver.cpp:244]     Train net output #1: loss = 4.60108 (* 1 = 4.60108 loss)
I0611 17:16:22.921154 13733 solver.cpp:228] Iteration 320, loss = 4.62625
I0611 17:16:22.921778 13733 solver.cpp:244]     Train net output #1: loss = 4.62625 (* 1 = 4.62625 loss)
I0611 17:18:20.411530 13733 solver.cpp:228] Iteration 340, loss = 4.63296
I0611 17:18:20.412036 13733 solver.cpp:244]     Train net output #1: loss = 4.63296 (* 1 = 4.63296 loss)
I0611 17:20:20.515871 13733 solver.cpp:228] Iteration 360, loss = 4.61399
I0611 17:20:20.516480 13733 solver.cpp:244]     Train net output #1: loss = 4.61399 (* 1 = 4.61399 loss)
I0611 17:22:18.264253 13733 solver.cpp:228] Iteration 380, loss = 4.64041
I0611 17:22:18.264829 13733 solver.cpp:244]     Train net output #1: loss = 4.64041 (* 1 = 4.64041 loss)
I0611 17:24:09.782224 13733 solver.cpp:228] Iteration 400, loss = 4.61226
I0611 17:24:09.782783 13733 solver.cpp:244]     Train net output #1: loss = 4.61226 (* 1 = 4.61226 loss)
I0611 17:26:00.819542 13733 solver.cpp:228] Iteration 420, loss = 4.62276
I0611 17:26:01.101272 13733 solver.cpp:244]     Train net output #1: loss = 4.62276 (* 1 = 4.62276 loss)
I0611 17:27:51.837548 13733 solver.cpp:228] Iteration 440, loss = 4.63533
I0611 17:27:51.838073 13733 solver.cpp:244]     Train net output #1: loss = 4.63533 (* 1 = 4.63533 loss)
I0611 17:29:53.786026 13733 solver.cpp:228] Iteration 460, loss = 4.64923
I0611 17:29:53.786598 13733 solver.cpp:244]     Train net output #1: loss = 4.64923 (* 1 = 4.64923 loss)
I0611 17:31:49.045650 13733 solver.cpp:228] Iteration 480, loss = 4.62817
I0611 17:31:49.046030 13733 solver.cpp:244]     Train net output #1: loss = 4.62817 (* 1 = 4.62817 loss)
I0611 17:33:41.281044 13733 solver.cpp:228] Iteration 500, loss = 4.61978
I0611 17:33:41.281350 13733 solver.cpp:244]     Train net output #1: loss = 4.61978 (* 1 = 4.61978 loss)
I0611 17:35:32.345113 13733 solver.cpp:228] Iteration 520, loss = 4.63579
I0611 17:35:32.345434 13733 solver.cpp:244]     Train net output #1: loss = 4.63579 (* 1 = 4.63579 loss)
I0611 17:37:31.285205 13733 solver.cpp:228] Iteration 540, loss = 4.63167
I0611 17:37:31.285521 13733 solver.cpp:244]     Train net output #1: loss = 4.63167 (* 1 = 4.63167 loss)
I0611 17:39:27.009618 13733 solver.cpp:228] Iteration 560, loss = 4.61786
I0611 17:39:27.009892 13733 solver.cpp:244]     Train net output #1: loss = 4.61786 (* 1 = 4.61786 loss)
I0611 17:41:17.862682 13733 solver.cpp:228] Iteration 580, loss = 4.63481
I0611 17:41:17.862931 13733 solver.cpp:244]     Train net output #1: loss = 4.63481 (* 1 = 4.63481 loss)
I0611 17:43:04.419114 13733 solver.cpp:228] Iteration 600, loss = 4.63331
I0611 17:43:04.419450 13733 solver.cpp:244]     Train net output #1: loss = 4.63331 (* 1 = 4.63331 loss)

sam moes

Jun 11, 2016, 1:08:25 PM
to Caffe Users

You should share more of your data preparation code and network prototxt files.
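In particular, the data layer and its transform_param usually matter here, since optical flow is not distributed like natural-image pixels. A minimal sketch of the kind of definition worth posting (every path and value below is a placeholder, not your actual setup):

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    scale: 0.00390625        # placeholder: map an 8-bit flow encoding to [0, 1]
    mean_value: 128          # placeholder: center the flow encoding
  }
  data_param {
    source: "train_lmdb"     # placeholder path
    batch_size: 64
    backend: LMDB
  }
}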