As in classical neural networks, units in layers up to F6 compute a dot product between their input vector and their weight vector, to which a bias is added. This weighted sum is then passed through a sigmoid squashing function to produce the state of unit i.
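As a sketch, the per-unit computation can be written out directly. The scaled hyperbolic tangent below is the squashing function used in the original LeNet-5 paper, f(a) = A tanh(S a) with A = 1.7159 and S = 2/3; the function name `unit_state` is just an illustrative choice.

```python
import numpy as np

def unit_state(x, w, b, A=1.7159, S=2.0 / 3.0):
    # Weighted sum: dot product of the input vector and the weight
    # vector, plus the bias for this unit.
    a = np.dot(w, x) + b
    # Sigmoid squashing function f(a) = A * tanh(S * a), producing
    # the state of the unit.
    return A * np.tanh(S * a)
```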
By contrast, the convolution layers in the LeNet model shipped with Caffe do not apply any activation function; in other words, they use the identity activation.
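This can be seen in a fragment along the lines of Caffe's MNIST LeNet prototxt, where the pooling layer consumes the convolution output directly, with no ReLU or sigmoid layer in between (the exact parameter values here are illustrative):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"   # reads conv1 directly: identity activation
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
```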