Apologies for yet another message! I am trying to work out how tau[velocity] and speed/distance change over time, across a stationary period of ranging behaviour. I have an example of a lion here, tracked for approximately 700 days. I have fitted a ctmm to calibrated data for the full dataset, which seems to work well, with OUF selected: tau[position] in the region of 7 days and tau[velocity] of just over an hour.
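
For reference, the fitting workflow was roughly along these lines (a sketch only; 'lion' is a placeholder name for my calibrated telemetry object):

    library(ctmm)

    # 'lion' is the calibrated telemetry object (UERE already assigned via uere() <-)
    GUESS <- ctmm.guess(lion, CTMM = ctmm(error = TRUE), interactive = FALSE)
    FITS  <- ctmm.select(lion, GUESS, verbose = TRUE)  # candidate models, best first
    summary(FITS)       # OUF comes out on top
    summary(FITS[[1]])  # tau[position] ~ 7 days, tau[velocity] ~ 1 hour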

With these kinds of estimates, it seems like I would need a minimum of something like 150 days of data for home range estimates, and 2+ days for speed/distance estimates to reach DOF ~20.
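
For what it's worth, I have been reading the effective sample sizes off the model summary, e.g. (the exact names of the DOF entries may differ between ctmm versions):

    # effective sample sizes for the full fit
    summary(FITS[[1]])$DOF   # includes DOF for area and for speed
    # mean speed estimate for the full dataset (simulation-based, so can be slow)
    speed(lion, FITS[[1]])
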
My first question is about non-stationarity. It seems likely that lions exhibit non-stationarity in their velocity autocorrelation, as they are largely inactive for most of the day and active/foraging between about 7 pm and 6 am (which I can see from plotting the instantaneous speeds).
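
The diel pattern I'm describing came from something roughly like this (a sketch; the column names returned by speeds() may differ slightly between versions, and the hour of day here is UTC rather than local time):

    INST <- speeds(lion, FITS[[1]])      # instantaneous speed estimates
    hr   <- (lion$t %% 86400) / 3600     # rough hour of day from the time index (seconds)
    plot(hr, INST$est, xlab = "hour of day", ylab = "speed (m/s)")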

This is pretty similar in some ways to the simulation from Mike's paper of a CPF, which showed biased estimates of e.g. daily distances. The paper suggests subsetting into stationary periods and estimating these separately; however, this would give me very few DOF for speed. I have used calibrated data to try to mitigate error bias during the day, when they are moving little, but I am just wondering how much of a concern this should be?
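
If I did go down the subsetting route, I assume it would look something like the below (a sketch only; the 7 pm - 6 am window here is in UTC and would need shifting to local time):

    # restrict to the nocturnal active window, ~7 pm to 6 am
    hr    <- (lion$t %% 86400) / 3600
    night <- lion[hr >= 19 | hr < 6, ]
    # then fit / estimate speed on 'night' separately
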
My second question is about estimating speed/distance values for subsets of the dataset, and the relative merits of estimating these based on guesstimates/fits from the full dataset vs the subset.
I first tried using the subsets of data to fit ctmms with ctmm.select(), then using these to generate speed estimates. I found that once I clipped the data down to 48 hours, the models had trouble resolving tau[velocity], but by about 7 days they were producing similar tau[velocity] estimates to the full dataset (albeit with wider CIs), so I also tried generating speed estimates using these subset fits. Here's what the two options looked like compared to the variogram of the first 7 days of data (red is the fit from the full dataset, purple is the fit from the subset):
[attached: variogram of the first 7 days, with the full-dataset fit (red) and the 7-day-subset fit (purple) overlaid]
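
The comparison itself was produced roughly like this (a sketch; SUB7 and FIT7 are placeholder names for the 7-day subset and its fit):

    # first 7 days of data
    SUB7   <- lion[lion$t <= lion$t[1] + 7*86400, ]
    GUESS7 <- ctmm.guess(SUB7, CTMM = ctmm(error = TRUE), interactive = FALSE)
    FIT7   <- ctmm.select(SUB7, GUESS7)
    # empirical variogram of the subset with both fitted models overlaid
    SVF7 <- variogram(SUB7)
    plot(SVF7, CTMM = list(FITS[[1]], FIT7), col.CTMM = c("red", "purple"))
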
I guess I have two main questions here -- first, if tau[velocity] is just over an hour and the sampling interval is an hour, is the reason that 48-hour subsets can't resolve the velocity parameter essentially a sample-size issue? Secondly, in this case, is there much benefit to fitting a separate ctmm per subset, rather than just using the fit from the full dataset? The latter seems like it would be computationally more efficient and would mean I only have to check once per trajectory whether a model with velocity autocorrelation is selected, rather than for each data subset.
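
In code terms, the two options I'm weighing look like this (using the placeholder objects from above):

    # Option A: speed/distance for the subset, conditioned on the full-dataset fit
    speed(SUB7, FITS[[1]])
    # Option B: speed/distance for the subset, using that subset's own fit
    speed(SUB7, FIT7)
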
Thank you!
Gen