I was previously using the new operator to allocate memory for intermediate results in the custom loss layer I am building.
The layer was working and I could run my network.
However, that is not the correct way of doing it: Caffe uses Blobs for memory management.
From one of the sample layers I could gather how they are used, so I did the following.
In loss_layers.hpp, where my new layer is declared:
Blob<Dtype> intermediate_sum_;
Blob<Dtype> log_label_;
etc...
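For context, these members sit inside the layer's class declaration. A minimal sketch of it follows (the class name DepthLossLayer is taken from the stack trace below; the other methods and members shown are simplified and not my exact code):

template <typename Dtype>
class DepthLossLayer : public LossLayer<Dtype> {
 public:
  explicit DepthLossLayer(const LayerParameter& param)
      : LossLayer<Dtype>(param) {}

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom);

  // Buffers added for intermediate results of the loss:
  Blob<Dtype> intermediate_sum_;
  Blob<Dtype> log_label_;
  int outer_num_, inner_num_;
};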
In custom_loss.cpp, the file where I implement the initialisation, forward and backward passes, the problem arises as soon as I use
intermediate_sum_.mutable_cpu_data()
or
intermediate_sum_.cpu_data()
for instance:
caffe_copy(outer_num_*inner_num_, label, intermediate_sum_.mutable_cpu_data());
All the code before this line works, and I am quite sure the error originates at this line.
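Roughly, the failing call sits near the top of Forward_cpu. A simplified sketch (how label is read, here from bottom[1], is only illustrative and not my exact code):

template <typename Dtype>
void DepthLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  // Assumption: label is the second bottom blob, as in other loss layers.
  const Dtype* label = bottom[1]->cpu_data();
  // This is the line where "Check failed: data_" is raised:
  caffe_copy(outer_num_ * inner_num_, label,
             intermediate_sum_.mutable_cpu_data());
  // ... rest of the loss computation ...
}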
The stack trace:
F0624 08:38:50.733919 92222 blob.cpp:102] Check failed: data_
*** Check failure stack trace: ***
@ 0x7fce0a16c5cd google::LogMessage::Fail()
@ 0x7fce0a16e433 google::LogMessage::SendToLog()
@ 0x7fce0a16c15b google::LogMessage::Flush()
@ 0x7fce0a16ee1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fce0a6003ab caffe::Blob<>::mutable_cpu_data()
@ 0x7fce0a593c59 caffe::DepthLossLayer<>::Forward_cpu()
@ 0x7fce0a4d07ea caffe::Net<>::ForwardFromTo()
@ 0x7fce0a4d0b27 caffe::Net<>::ForwardPrefilled()
@ 0x7fce0a5fa8f5 caffe::Solver<>::Step()
@ 0x7fce0a5fb3d4 caffe::Solver<>::Solve()
@ 0x407f09 train()
@ 0x405c58 main
@ 0x7fce095fe830 __libc_start_main
@ 0x406179 _start
@ (nil) (unknown)
I think I am using Blob in the wrong way. Can someone give an example of how to use Blob, or point out where I am going wrong?