
Prabhu

Feb 14, 2015
to caffe...@googlegroups.com
I never properly understood why Local Response Normalization (LRN) is used between layers.
In the ImageNet example I see two LRN layers used between the pooling and convolution layers:

layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm2"
  type: LRN
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
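To make the lrn_param values above concrete, here is a rough NumPy sketch of what an across-channels LRN layer computes: each activation is divided by a power of the summed squares of its neighboring channels, which suppresses responses that are large relative to their neighbors. The function name is mine, and the alpha/local_size scaling is my understanding of Caffe's ACROSS_CHANNELS variant, so treat this as an illustration rather than the exact Caffe implementation.

```python
import numpy as np

def lrn_across_channels(x, local_size=5, alpha=1e-4, beta=0.75, k=1.0):
    """Illustrative across-channels LRN for x of shape (channels, H, W).

    Each value x[i] is divided by
        (k + alpha / local_size * sum of squares over the
         local_size channels centered on i) ** beta
    matching the norm2 layer's local_size=5, alpha=0.0001, beta=0.75.
    """
    c = x.shape[0]
    half = local_size // 2
    sq = x ** 2
    out = np.empty_like(x)
    for i in range(c):
        # window of neighboring channels, clipped at the edges
        lo, hi = max(0, i - half), min(c, i + half + 1)
        scale = k + (alpha / local_size) * sq[lo:hi].sum(axis=0)
        out[i] = x[i] / scale ** beta
    return out
```

With alpha this small, the normalization only noticeably damps activations whose squared neighborhood sum is large, which is the "lateral inhibition" effect the AlexNet paper describes.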


sharath s

Aug 10, 2015
to Caffe Users
Hello Prabhu,

I am also in the process of understanding LRN. Do you have a good understanding of it by now?

If yes, it would be great if you could share it.

Thanks in advance.

BR,
sharath