I have run into issues (rather nonsensical ones at first, and then illuminating ones) when trying to supply a custom activation function.
A search through the source code leads me to actf.jl, and the related documentation page hints that gradients need to be provided. I see that in actf.jl, reluforw is essentially the relu function itself, while reluback is its documented gradient. How do these definitions get linked to the `relu` symbol? And how would I imitate this to create a 'smooth' activation that correctly uses my own smoothforw/smoothback definitions?
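To make the question concrete, here is the kind of forward/backward pairing I have in mind, written as plain Julia with no dependence on actf.jl internals. The names `smoothforw!`/`smoothback!` and the softplus formula are just my own illustration of what I would like to plug in, not anything taken from the library:

```julia
# Illustrative CPU-only sketch of the forward/backward pairing I am asking about.
# Here "smooth" means softplus: f(x) = log(1 + exp(x)), with f'(x) = 1/(1 + exp(-x)).
# smoothforw!/smoothback! are my own names, mirroring reluforw/reluback.

# Forward pass: overwrite y with f(x) elementwise.
function smoothforw!(y::AbstractArray, x::AbstractArray)
    @. y = log1p(exp(x))
    return y
end

# Backward pass: given the forward output y and the incoming gradient dy,
# overwrite dx with dy .* f'(x).  For softplus f'(x) can be recovered from
# y alone, since f'(x) = 1 - exp(-y).
function smoothback!(dx::AbstractArray, y::AbstractArray, dy::AbstractArray)
    @. dx = dy * (1 - exp(-y))
    return dx
end

# Quick sanity check against a central finite difference.
x  = randn(5)
y  = similar(x); smoothforw!(y, x)
dy = ones(length(x))
dx = similar(x); smoothback!(dx, y, dy)
h  = 1e-6
fd = (log1p.(exp.(x .+ h)) .- log1p.(exp.(x .- h))) ./ (2h)
@assert maximum(abs.(dx .- fd)) < 1e-4
```

Presumably the missing piece is whatever registers such a pair under a single symbol, the way reluforw/reluback end up attached to `relu`; that registration step is what I cannot find.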
I am hoping to get a head start on figuring out how to provide a custom activation function. If I have to forgo @gpu (because I see that there are specific mode flags for each activation function), that is fine. Are the GPU activation functions actually defined in the GPU/CUDNN Julia code?