Hi qq406, updating the weights is a task shared by networks, layers, and solvers. Layers implement forward and backward methods: forward computes the output given the input, and backward computes the derivatives that are later used to update the weights. The network basically orchestrates these method calls across all the layers. The solver incorporates the learning rate (and other terms) into the derivative and performs the weight updates themselves. An example is the method ApplyUpdate in sgd_solver.cpp (line 109). It is called right after a forward and a backward pass through the network, and it is quite simple to follow.
I believe that to implement your new learning method, you should manually update the data_ attribute of the blobs that store the layers' params after a forward and a backward pass. For reference, check the method Update() in caffe/src/caffe/blob.cpp, line 156. The important part of that method is the call to caffe_axpy (or caffe_gpu_axpy, when running on the GPU). caffe_axpy has standard BLAS axpy semantics, y = alpha * x + y; Update() calls it with alpha = -1, x = diff_->cpu_data() and y = data_->mutable_cpu_data(), so it subtracts the accumulated diff from the data and stores the result in data_. Given that, I believe your method should write its updates into data_->mutable_cpu_data().