1. Because
net._layer_names is a vector of layer names, while
net.blobs.keys() is a list of blob names. Some layers don't produce new blobs (often dropout and ReLU, which operate in-place), so some entries in
_layer_names have no counterpart in
blobs. Also, you should not assume that a layer produces a blob with exactly the same name as the layer; instead, look at
net.top_names and
net.bottom_names, which are OrderedDicts mapping each layer name to the names of its top/bottom blobs.
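To make the in-place case concrete, here is a toy mock of what those mappings can look like — this is hand-built illustrative data, not actual caffe output, and the layer names (conv1, relu1, fc1) are hypothetical:

```python
from collections import OrderedDict

# Mock of net.top_names for a net where relu1 operates in-place on
# conv1's output (all names hypothetical).
top_names = OrderedDict([
    ('conv1', ['conv1']),   # conv1 produces a blob named after itself
    ('relu1', ['conv1']),   # in-place ReLU: its top is conv1's blob
    ('fc1',   ['fc1']),
])
blob_names = ['data', 'conv1', 'fc1']  # note: no 'relu1' blob exists

def output_blob(layer_name):
    # Look up a layer's output blob via top_names, never by layer name
    return top_names[layer_name][0]

print(output_blob('relu1'))  # 'conv1' -- not 'relu1', which is no blob at all
```

The point is simply that indexing net.blobs with a layer name like 'relu1' would raise a KeyError, while top_names always resolves to a real blob.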
2. I'm not a C++ magician, but looking at the ReLU code, deriving your custom layer from ReLU doesn't look too difficult. I see no reason why you would need to implement setup(), since ReLU itself does not (it inherits it from the abstract Layer; see layer.hpp). You might indeed have to implement forward(), but again, my understanding of caffe's C++ code is limited, so take my words on that point with even a few grains of salt.
3. Once your model is loaded, there's nothing that can be done (to my knowledge), so you need to modify your network at the prototxt level. You could probably get away with simple text substitution there, but a more elegant (and less error-prone) solution is to parse the prototxt into a NetParameter, kind of like so:
import caffe
from google.protobuf import text_format

# Parse the prototxt into a NetParameter message
model = caffe.io.caffe_pb2.NetParameter()
with open('path_to_my_model.prototxt') as f:
    text_format.Merge(f.read(), model)

# Rewrite the type of every ReLU layer
for layer in model.layer:
    if layer.type == u'ReLU':
        layer.type = u'myReLU'

# Serialize the modified definition back to prototxt
with open('output.prototxt', 'w') as f:
    f.write(str(model))