De-Normalization Layer


Haider Khan Lodhi

May 12, 2018, 12:48:12 AM
to Caffe Users
Hi, 
I have a convolutional autoencoder with 2 modalities (images and NIR data). I use BN and Scale layers to normalize the data, since it is good practice to bring data from different input streams onto a common scale. That would be enough for a classification problem, but I have an autoencoder: I want the L2 error between the original input and the output reconstructed by the decoder, and it makes no sense to compare the normalized input with the normalized output...

As far as I am aware, the BN and Scale layers together do the following:
y = gamma * ( (x - mean) / std) + beta.

So I need to do the opposite to get back the real input:

x = std * ((y - beta)/gamma) + mean
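
As a quick sanity check, this is how I understand the forward and inverse transforms (just numpy with made-up statistics, nothing Caffe-specific):

import numpy as np

# Made-up statistics and BN/Scale parameters, only to check the inverse formula.
mean, std = 0.45, 0.22
gamma, beta = 1.7, -0.3

x = np.random.rand(4, 3, 8, 8).astype(np.float32)   # fake input batch

# Forward: what BN + Scale do
y = gamma * ((x - mean) / std) + beta

# Inverse: what the de-normalization layer should do
x_rec = std * ((y - beta) / gamma) + mean

print(np.allclose(x, x_rec, atol=1e-5))  # True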
 
Can I create a new layer for this? I am confused and running out of ideas. Do I need to take gamma and beta into account, or is it enough to just do:

x = (y * std) + mean ?
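
This is roughly the kind of Python layer I have in mind for the full inverse (the names and the idea of passing the statistics via param_str are only placeholders; in reality the mean/variance sit in the BatchNorm blobs and gamma/beta in the Scale blobs, so they would have to be read from there):

import json
import caffe

class DeNormLayer(caffe.Layer):
    """Undoes y = gamma * ((x - mean) / std) + beta on the bottom blob."""

    def setup(self, bottom, top):
        # e.g. python_param { param_str: '{"mean": 0.45, "std": 0.22, "gamma": 1.7, "beta": -0.3}' }
        p = json.loads(self.param_str)
        self.mean, self.std = p["mean"], p["std"]
        self.gamma, self.beta = p["gamma"], p["beta"]

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        y = bottom[0].data
        top[0].data[...] = self.std * ((y - self.beta) / self.gamma) + self.mean

    def backward(self, top, propagate_down, bottom):
        if propagate_down[0]:
            # d(output)/d(input) = std / gamma
            bottom[0].diff[...] = top[0].diff * self.std / self.gamma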

Any hints, tips, or leads to a possible solution would be very helpful.

Thanks