Actually, I had to figure that out the hard way too, and didn't find much information about it either. This is what I have inferred from using Caffe for quite some time now (the terminology I use is mostly not canon):
In principle there are two different kinds of network configs, the "train" (or "normal") config and the "deploy" config. The train config is used together with the solver to train the network, while the deploy config is used to "manually" feed data through the already trained network, i.e. data that is not stored in an appropriate DB / not specified in a data layer. The two configs are mostly identical, except for the data layers: the train config contains data layers that fill the blobs (usually called "data" and "label") fed into the rest of the network, while the deploy config contains no data layers at all. Caffe still needs to know what the input blobs are and how they are shaped, so instead of data layers the deploy prototxt has to include the "input" and "input_shape" specifiers, which tell Caffe where to expect the inputs to the network and what shape they have, e.g. as shown in
https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt. However, the shapes can still be changed after loading the network in Python.
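For reference, the input declaration at the top of such a deploy prototxt looks roughly like this (the concrete dims here are just an example for MNIST-sized data; the order is batch size, channels, height, width):

```
name: "LeNet"
input: "data"
input_shape {
  dim: 64   # batch size
  dim: 1    # channels
  dim: 28   # height
  dim: 28   # width
}
```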
So you can create a deploy version of a net config automatically, provided you know the dimensions of your data (which you should). I don't know whether there are any official scripts for that; I wrote my own.
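A minimal sketch of such a conversion (this is a hypothetical illustration, not an official Caffe tool and not my actual script): it drops the data layers and the loss layer from a train prototxt and prepends the "input"/"input_shape" declaration instead. It treats the prototxt as plain text with balanced braces; a robust script should parse it with the caffe.proto protobuf classes instead.

```python
def train_to_deploy(train_txt, input_name="data", shape=(64, 1, 28, 28)):
    """Derive a deploy prototxt from a train prototxt (naive text version)."""
    # Split the file into top-level blocks by tracking brace depth.
    blocks, cur, depth = [], [], 0
    for line in train_txt.splitlines():
        cur.append(line)
        depth += line.count("{") - line.count("}")
        if depth == 0 and any(l.strip() for l in cur):
            blocks.append("\n".join(cur))
            cur = []
    # Drop data layers and the loss layer that consumes the "label" blob.
    dropped = ('type: "Data"', 'type: "ImageData"', 'type: "SoftmaxWithLoss"')
    kept = [b for b in blocks if not any(d in b for d in dropped)]
    # Prepend the input declaration the deploy config needs instead.
    dims = "\n".join("  dim: %d" % d for d in shape)
    header = 'input: "%s"\ninput_shape {\n%s\n}' % (input_name, dims)
    return "\n".join([header] + kept)
```

A real train config of course contains more layer types that a deploy version has to handle (e.g. Accuracy layers or layers restricted to a phase), so take this only as the general idea.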
And no, you can't use deploy configs with a solver.
Jan