I would like to train my network manually in Python because I want to impose a very specific template on the kernel weights of the very first convolutional layer.
More specifically, I want the (i,j)-th entry of the convolutional kernel in layer 1 to always equal a scalar called beta.
So I am wondering whether I can do this manually: call net.forward(), then net.backward() from the loss layer down to layer 1, reset the (i,j)-th entry to beta, call net.forward() again, and so on until convergence.
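To make the clamp-after-update idea concrete, here is a framework-agnostic numpy sketch (no Caffe calls, since a real net and solver are needed for those); the toy quadratic loss, the kernel shape, and the position (i, j) are all hypothetical stand-ins. In Caffe the weight array would live in net.params of the first convolutional layer rather than the plain array W below.

```python
import numpy as np

beta = 0.5          # the fixed scalar to impose
i, j = 1, 2         # kernel position to pin (hypothetical choice)
rng = np.random.default_rng(0)

W = rng.normal(size=(3, 3))   # stand-in for the layer-1 kernel
W[i, j] = beta                # impose the template before training
lr = 0.1                      # stand-in learning rate

for step in range(100):
    # Stand-in for forward + backward: gradient of a toy quadratic
    # loss L = sum((W - 1)^2), whose minimum is W = 1 everywhere.
    grad = 2.0 * (W - 1.0)
    W -= lr * grad            # manual SGD update
    W[i, j] = beta            # re-impose the constraint each iteration

# All entries converge toward 1 except the pinned (i, j) entry,
# which stays at beta throughout training.
```

An equivalent (and often cleaner) alternative is to zero out the gradient at (i, j) before the update instead of overwriting the weight afterwards; both keep the entry fixed at beta.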
If so, how can I do that, and will the learning rate still be updated as specified in the solver.prototxt file?
I am also wondering whether I can call net.backward(layer_n, layer_m) so that it backpropagates only from layer_n down to layer_m.
Many thanks in advance!