I am looking through the code of cortex.nn.layers and cortex.graph, but it is not clear to me how I would go about concatenating the input layer back into the network at some deeper layer. It looks possible, though.
I was thinking of something like this:
(network/linear-network
  [(layers/input 8 1 1 :id :data)
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/concat [(layers/linear 20 :l2-regularization 0.05) (layers/prelu)]
                  [(layers/input 8 1 1 :id :data)])
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/linear 2)
   (layers/softmax :id :labels)])
where, somewhere halfway through the network, I feed it the input layer again.
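To make the data flow concrete, here is a minimal, library-free sketch of what that concat would compute in the forward pass. This is plain Clojure, not the cortex API; linear, prelu, and forward are toy stand-ins using vectors of doubles as activations:

(defn linear
  "Toy dense layer: weights is a vector of row vectors, bias a vector."
  [weights bias x]
  (mapv (fn [row b] (+ b (reduce + (map * row x)))) weights bias))

(defn prelu
  "Leaky activation with a fixed slope alpha on the negative side."
  [alpha xs]
  (mapv #(if (pos? %) % (* alpha %)) xs))

(defn forward
  [{:keys [w1 b1 w2 b2 w3 b3]} x]
  (let [h1     (prelu 0.1 (linear w1 b1 x))
        h2     (prelu 0.1 (linear w2 b2 h1))
        ;; the concat step: splice the original input back onto
        ;; the deeper activation before the next linear layer
        joined (into h2 x)]
    (linear w3 b3 joined)))

(forward {:w1 [[1.0 0.0] [0.0 1.0]] :b1 [0.0 0.0]
          :w2 [[1.0 1.0]]           :b2 [0.0]
          :w3 [[1.0 1.0 1.0]]       :b3 [0.0]}
         [2.0 -3.0])
;;=> [0.7]

Note that the third weight matrix has to be sized for the concatenated width (hidden size plus input size), which is the main bookkeeping the library would need to do.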
I can imagine you are thinking about, or already working on, a syntax that is more universal than just adding the input layer again; maybe a layers/store and a layers/retrieve command, together with a layers/concat command, along these lines:
(network/linear-network
  [(layers/input 8 1 1 :id :data)
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/store :output-of-above-layer)
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/concat [(layers/linear 20 :l2-regularization 0.05) (layers/prelu)]
                  [(layers/retrieve :output-of-above-layer)])
   (layers/linear 40 :l2-regularization 0.05) (layers/prelu)
   (layers/linear 2)
   (layers/softmax :id :labels)])
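In case it helps, here is one hypothetical way such markers could desugar into explicit back-references before graph construction. None of these names exist in cortex; it is only a sketch of the bookkeeping, with layer descriptions as plain maps and the markers as tagged vectors:

(defn desugar-store-retrieve
  "Walks a flat layer description. A [:store k] marker remembers the id
   of the layer that precedes it; a [:retrieve k] marker is rewritten
   into a {:type :ref :target id} node pointing back at that layer."
  [descs]
  (loop [[d & more :as ds] descs
         prev-id nil
         stores  {}
         out     []]
    (cond
      (empty? ds)
      out

      (and (vector? d) (= :store (first d)))
      (recur more prev-id (assoc stores (second d) prev-id) out)

      (and (vector? d) (= :retrieve (first d)))
      (recur more prev-id stores
             (conj out {:type :ref :target (get stores (second d))}))

      :else
      (let [id (or (:id d) (keyword (gensym "layer")))]
        (recur more id stores (conj out (assoc d :id id)))))))

(desugar-store-retrieve
  [{:type :input :size 8 :id :data}
   {:type :linear :size 40}
   [:store :skip]
   {:type :linear :size 40}
   [:retrieve :skip]])
;;=> the second linear gets a generated :id, and the retrieve becomes
;;   {:type :ref :target <that id>}, ready for a graph builder to wire up

With something like this in place, concat would only ever need to accept ordinary nodes as arguments, since a retrieved layer is just another node in the graph.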