Hi Dong Ki,
I don't believe any deconv functionality is currently implemented in Caffe. However, if you just want to visualize features as in [1] (that is, using the forward filters in reverse rather than learning separate backward filters), you basically just want to do backprop, except that at the ReLU layers you apply the ReLU again to the backward signal instead of multiplying by the ReLU derivative (0 or 1). For more info, see Section 4 of [2].
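To make the ReLU difference concrete, here's a minimal NumPy sketch (function names are mine, just for illustration): standard backprop gates the signal by where the *forward input* was positive, while the deconvnet rule of [1] thresholds the *backward signal* itself.

```python
import numpy as np

def relu_backprop(grad_top, bottom):
    # Standard backprop: multiply by the ReLU derivative, i.e. pass the
    # gradient only where the forward input (bottom) was positive.
    return grad_top * (bottom > 0)

def relu_deconv(grad_top, bottom):
    # Deconvnet rule: apply the ReLU to the backward signal itself,
    # ignoring the forward activation entirely.
    return np.maximum(grad_top, 0)
```

For example, with bottom = [-1, 2] and an incoming backward signal of [3, -4], backprop gives [0, -4] while the deconvnet rule gives [3, 0].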
I think the easiest way to implement this would be to add a Layer::Deconv_cpu method that just calls Layer::Backward_cpu for the conv and pooling layers. For the ReLU layers it should apply the ReLU to the incoming signal instead of calling Layer::Backward_cpu.
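In pseudocode, the dispatch would look something like the sketch below (hypothetical names; Caffe's real Layer interface works on top/bottom blob vectors, so this only shows the shape of the idea):

```python
import numpy as np

def deconv_pass(layers, top_signal):
    # Walk the net from top to bottom, propagating the visualization signal.
    signal = top_signal
    for layer in reversed(layers):
        if layer.type == 'relu':
            # Deconvnet ReLU rule: threshold the backward signal directly.
            signal = np.maximum(signal, 0)
        else:
            # Conv/pool layers: reuse the existing Backward_cpu logic
            # (here stubbed as a per-layer backward() callable).
            signal = layer.backward(signal)
    return signal
```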
jason
[1] Zeiler and Fergus, 2013, "Visualizing and Understanding Convolutional Networks"
[2] Simonyan et al., 2014, "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"