It can, but NNs optimize a loss function via stochastic gradient descent, which can get stuck in a local minimum. If you have reason to believe that the features you project onto need to be the same as the kernels you use for reconstruction, then you should impose that by tying the weights.
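As a minimal sketch of what tying the weights looks like in practice (the class name, layer sizes, and activation are just illustrative), the decoder here reuses the transpose of the encoder's weight matrix instead of learning a separate one:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    """Autoencoder whose decoder reuses the transpose of the encoder weights."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        # One shared weight matrix; only the biases are separate.
        self.W = nn.Parameter(torch.randn(n_hidden, n_in) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_hidden))
        self.b_dec = nn.Parameter(torch.zeros(n_in))

    def forward(self, x):
        h = torch.relu(F.linear(x, self.W, self.b_enc))  # encode: relu(W x + b_enc)
        x_hat = F.linear(h, self.W.t(), self.b_dec)      # decode with W^T (tied)
        return x_hat

model = TiedAutoencoder(n_in=784, n_hidden=64)
x = torch.randn(32, 784)
loss = F.mse_loss(model(x), x)  # reconstruction loss minimized by SGD
```

Because the same `W` appears in both directions, gradients from the encoding and decoding paths both update it, which is exactly the constraint you are imposing.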
After all, CNNs apply the same approach: they tie the weights of all units in a channel. Even if we didn't have any computational constraints, convolutional layers would still be very useful for learning from images, for many reasons...
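To see how much this weight sharing buys you, compare the parameter count of a convolutional layer with a fully connected layer producing an output of the same size (the sizes below are just an illustrative example):

```python
import torch.nn as nn

# A 3x3 conv on a 3-channel 32x32 image: the same 16 kernels are
# reused at every spatial position.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

# A dense layer mapping the same input to the same 16x30x30 output,
# with no weight sharing.
fc = nn.Linear(3 * 32 * 32, 16 * 30 * 30)

n_conv = sum(p.numel() for p in conv.parameters())  # 16*3*3*3 + 16 = 448
n_fc = sum(p.numel() for p in fc.parameters())      # ~44 million
print(n_conv, n_fc)
```

The tied version has a few hundred parameters instead of tens of millions, and on top of that it bakes in translation equivariance, which is a sensible prior for images.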