Hi,
how would I go about sharing weights inside a single layer? I know there are predefined layers, such as the convolutional layers, that share weights internally. My envisioned net, however, does not use any of these predefined layers as far as I can tell.
Consider this example: I have one 1-dimensional bottom blob of size n and a top blob of the same shape and size. I'd like to share weights in such a way that all edges between bottom[i] and top[i] use the same weight for all i, all edges between bottom[i] and top[(i + 1) % n] share another weight, and so on. Is this possible with the predefined layers, or would I need to implement my own layer? In the latter case, how would I ideally go about it?
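To make the tying scheme precise: if w[j] denotes the weight shared by every edge from bottom[i] to top[(i + j) % n], then top[m] = sum over j of w[j] * bottom[(m - j) % n], which is a circular 1-D convolution. A minimal NumPy sketch of what I mean (all names here are just illustrative, not from any Caffe API):

```python
import numpy as np

n, k = 8, 3                   # blob length and number of distinct shared weights
rng = np.random.default_rng(0)
w = rng.standard_normal(k)    # w[j] is shared by every edge bottom[i] -> top[(i + j) % n]
x = rng.standard_normal(n)    # the bottom blob

# Direct statement of the tying scheme:
# top[m] collects w[j] * bottom[(m - j) % n] over all offsets j.
top = np.array([sum(w[j] * x[(m - j) % n] for j in range(k))
                for m in range(n)])

# Equivalent vectorized form: a weighted sum of cyclically shifted copies
# of the input, i.e. a circular 1-D convolution with kernel w.
top_conv = sum(w[j] * np.roll(x, j) for j in range(k))
```

If that reading is correct, it might be expressible with an ordinary Convolution layer after manually wrapping the input around (e.g. via Slice and Concat), though I'm not sure that's the cleanest route.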
Best regards,
Tim