We have applied two exclusion criteria to our MST data (a short sketch of both checks follows the list):
1) If there are 5 or fewer "similar" responses. We cannot estimate an accurate LDI if participants are not using the critical "similar" response key, and we consider this a failure to comply with task instructions.
2) If the recognition rate for repeated items is below chance. This usually occurs because the response keys have been mixed up or because participants are responding randomly rather than complying with task instructions.
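Here is a minimal sketch of both checks in Python, assuming you have per-participant counts; the function and variable names, and the 1/3 chance level (for the three-alternative old/similar/new response set), are my assumptions:

def should_exclude(n_similar, repeat_hits, n_repeats, chance=1/3):
    """Return True if a participant meets either exclusion criterion."""
    # Criterion 1: 5 or fewer "similar" responses overall signals
    # that the critical "similar" response key was not being used.
    if n_similar <= 5:
        return True
    # Criterion 2: recognition of repeated items below chance
    # (1/3 assumes three response options: old / similar / new).
    if repeat_hits / n_repeats < chance:
        return True
    return False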
I'm not certain what you mean by not having 80% of usable trials, or whether that is across all conditions. If you mean "no response" trials, then I encourage you to use the response-terminated feature in the future, where the images are presented for a fixed period of time (say, 2 seconds) but the response can still be collected during a blank screen afterwards. This approach guarantees a response for each image. Generally speaking, for any task, if I did not have 80% of the data, I would likely not have enough trials to estimate a reliable response for each condition. We have shown that an LDI computed from as few as 16 lures is consistent with the LDI computed from 64 trials (Stark et al., 2015, Behavioral Neuroscience), but I would not recommend using fewer than that.
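For concreteness, the LDI is the rate of "similar" responses to lure items minus the rate of "similar" responses to novel foils, which corrects for any overall bias toward pressing "similar". A minimal sketch, with illustrative names:

def compute_ldi(similar_to_lures, n_lures, similar_to_foils, n_foils):
    """LDI = p("similar" | lure) - p("similar" | foil)."""
    return similar_to_lures / n_lures - similar_to_foils / n_foils

For example, compute_ldi(40, 64, 8, 64) gives an LDI of 0.625 - 0.125 = 0.50.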
I hope this helps!
Shauna