Unsupervised learning is a whole conversation in itself, but for now I'll just point out that Caffe models need not be supervised convolutional neural nets, although that is sometimes the impression; you can have an unsupervised good time right now. See, for instance, the MNIST autoencoder example. Supervision is just a matter of the loss, and one can make reconstruction models of various kinds in Caffe.
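To make the "supervision is just a matter of the loss" point concrete, here is a minimal prototxt sketch in the spirit of the MNIST autoencoder example: the loss compares the reconstruction to the input data itself, so no labels are involved. Layer names, sizes, and the LMDB source are illustrative, and details differ from the shipped example.

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  data_param { source: "mnist_train_lmdb" backend: LMDB batch_size: 100 }
  transform_param { scale: 0.0039215684 }
}
# Flatten the 1x28x28 input so it matches the decoder output shape.
layer { name: "flatdata" type: "Flatten" bottom: "data" top: "flatdata" }
layer {
  name: "encode"
  type: "InnerProduct"
  bottom: "data"
  top: "encode"
  inner_product_param { num_output: 30 }
}
layer {
  name: "decode"
  type: "InnerProduct"
  bottom: "encode"
  top: "decode"
  inner_product_param { num_output: 784 }
}
# Reconstruction loss against the input, not a label: this is what
# makes the model unsupervised.
layer {
  name: "loss"
  type: "SigmoidCrossEntropyLoss"
  bottom: "decode"
  bottom: "flatdata"
  top: "loss"
}
```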
Granted, common unsupervised models like RBMs are not presently implemented in Caffe, but there is no framework-level obstacle to them. A contrastive divergence Solver, plus an RBM layer as shorthand for the visible-hidden architecture, would do nicely.
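For reference, the update such a contrastive divergence Solver would compute can be sketched standalone in NumPy. This is a generic CD-1 step for a Bernoulli-Bernoulli RBM, not Caffe code; all names and sizes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 6 visible units, 4 hidden units, batch of 8.
n_vis, n_hid, batch = 6, 4, 8
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 parameter update for a Bernoulli-Bernoulli RBM."""
    # Positive phase: hidden probabilities given the data, then a sample.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one step of Gibbs sampling back to the visibles.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Gradient estimate: data statistics minus model statistics.
    dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    W = W + lr * dW
    b_vis = b_vis + lr * (v0 - v1_prob).mean(axis=0)
    b_hid = b_hid + lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# One update on a random binary batch.
v0 = (rng.random((batch, n_vis)) < 0.5).astype(float)
W, b_vis, b_hid = cd1_step(v0, W, b_vis, b_hid)
```

In Caffe terms, the layer would own the weights and compute the positive and negative phase statistics, while the Solver would apply the resulting update with the learning rate and momentum machinery it already has.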
As ever, PRs are welcome!
(I will be excited for more attention to return to the unsupervised side of life, and for the space of model types to keep growing, both in general and in Caffe.)