Thanks for clarifying, Chris,
After applying an initial guess at a good error model, did you visually pick out the bad outliers, or did you use some cutoff based on metrics like speed and core deviation?
Right now I'm having a hard time deciding whether to proceed with some outlier cutoff, because I have more than just one or two bad points. I think a heavy-tailed distribution would be most appropriate, but the majority of the data do have reasonable residuals after fitting what seems to be the most appropriate error model. There are some cases where outliers make up a larger portion of the data because of the habitat in which the animal died (the collar with the most outliers was under a 15 ft rocky overhang).
Without removing outliers, the best model only gets me down to a Z^2 of 13.4, and I've tried about every combination of available error metrics I can think of. But if I fit the best error model and then remove calibration data greater than the third quartile + 1.5*IQR of speed or core deviation for each fix class within each test collar, I can get down to a Z^2 of 2.9.
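In case it helps to see exactly what I mean, here's a rough sketch of that filter in Python/pandas (just a sketch; the column names collar_id, fix_class, speed, and core_deviation are placeholders for however the calibration data happen to be organized):

    import pandas as pd

    def tukey_filter(df, metrics=("speed", "core_deviation")):
        # Drop rows above the upper Tukey fence (Q3 + 1.5*IQR) on any
        # metric, computed separately per collar and per fix class.
        # Column names are hypothetical placeholders.
        def keep(group):
            mask = pd.Series(True, index=group.index)
            for m in metrics:
                q1, q3 = group[m].quantile([0.25, 0.75])
                mask &= group[m] <= q3 + 1.5 * (q3 - q1)
            return group[mask]
        return (df.groupby(["collar_id", "fix_class"], group_keys=False)
                  .apply(keep))

    # e.g. filtered = tukey_filter(pd.read_csv("calibration.csv"))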
Do you think this would be a good approach, even if, for some collars, this outlier filter removed 6 points, or 12% of the data? These were the worst cases, where the habitat was unusual and likely caused substantial bounce. And would you recommend a second round of checking for outliers after this first filter? I notice that even with the low Z^2 there are still some outliers in the calibration data, though far fewer.
If you're in need of some calibration data that might be useful toward developing a heavy-tailed model, I'd be happy to share them with you.
Many thanks,
Christina