is important to you will depend on your use case. How much of a problem would it be if a random bit in GPU memory flipped its state (either from 0 to 1 or 1 to 0)?
This is a general technology that has no specific relevance to machine learning with Lasagne or neural networks.
A bit flip might occur in memory used by the GPU driver or the Theano framework, which could corrupt the logic of the computation as a whole and perhaps prevent it from continuing. In that case, as long as you are saving the model state to persistent storage at suitable points throughout the computation (e.g. once per epoch), you can simply restart from the last saved state.
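By way of illustration, here is a minimal per-epoch checkpointing sketch in plain numpy (the function names `save_checkpoint`/`load_checkpoint` are my own, not a Lasagne API; with Lasagne you would typically get the parameter arrays from `lasagne.layers.get_all_param_values` and restore them with `set_all_param_values`):

```python
import os
import numpy as np

def save_checkpoint(path, params, epoch):
    """Save a list of parameter arrays plus the epoch counter.

    Writes to a temporary file first and then renames it, so a crash
    mid-write never destroys the previous good checkpoint.
    """
    tmp = path + '.tmp'
    np.savez(tmp, *params, epoch=epoch)  # np.savez appends '.npz'
    os.replace(tmp + '.npz', path)

def load_checkpoint(path):
    """Return (params, epoch) saved by save_checkpoint."""
    with np.load(path) as f:
        epoch = int(f['epoch'])
        params = [f['arr_%d' % i] for i in range(len(f.files) - 1)]
    return params, epoch
```

The atomic rename matters here: if the process dies while writing, you restart from the checkpoint before it rather than from a half-written file.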
If the corruption occurs in your data or model parameters (the most likely case, since these typically occupy far more memory than the system overheads), the effect may or may not matter to you. A random flip of one bit in a floating point number can have very little effect or a great deal (e.g. flipping the sign or an exponent bit). Your computation may or may not be robust to such changes.
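To make that concrete, here is a small sketch (my own illustration, nothing Lasagne-specific) that flips individual bits of an IEEE 754 float32 and shows how wildly the damage varies with which bit is hit:

```python
import numpy as np

def flip_bit(value, bit):
    """Return float32 `value` with bit `bit` (0 = LSB) flipped."""
    as_int = np.array(value, dtype=np.float32).view(np.uint32)
    as_int = as_int ^ np.uint32(1 << bit)
    return float(as_int.view(np.float32))

x = 0.5
print(flip_bit(x, 0))   # lowest mantissa bit: ~0.50000006, negligible
print(flip_bit(x, 23))  # lowest exponent bit: 1.0, value doubles
print(flip_bit(x, 31))  # sign bit: -0.5
```

So a flip in the low mantissa bits of a weight is lost in the noise of training, while a flip high in the exponent can change a parameter by orders of magnitude.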
I don't know how much ECC affects memory speed (I suspect very little, since it is a hardware mechanism and the memory clock speed is presumably set to a level that permits ECC operation), but GPU memory size is affected: the PDF you linked to suggests the available memory will be about 10% smaller with ECC enabled.
The probability of suffering a random flip is very small and, I suspect, the impact of a flip would usually be negligible. So ECC is unlikely to be useful for most people; that 10% of memory is probably worth more to most.
Daniel