Hi Jillian,
There are efficient algorithms in the package for almost every calculation, though there are options and complications that can cause trouble. I try to document these as much as possible in the individual help files.
variogram(...,fast=FALSE) is an inherently O(n²) calculation, and while it is vectorized for speed, it's never going to work well on larger datasets. O(n²) means that the computational cost (and, in this case, memory) scales with the square of the amount of data. However, the default variogram(...,fast=TRUE) gives exactly the same result if there is no irregularity in the sampling schedule beyond missingness, and even when there is irregularity in the data, the differences are often quite small in larger datasets. variogram(...,fast=FALSE) is more for squeezing blood out of smaller, lower-quality datasets.
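For example, something like this, where DATA stands in for your telemetry object:

    SVF <- variogram(DATA)               # fast=TRUE is the default; O(n log n)
    SVF <- variogram(DATA, fast=FALSE)   # exact O(n²) estimator; small datasets only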
An opposite problem that people sometimes have with variogram(...,fast=TRUE) arises when their data have something like 1 Hz bursts separated by huge gaps. variogram(...,fast=TRUE) is O(n log n), but it involves constructing a discrete time grid over the entire sampling period to perform an FFT on. In that case, you can also run out of memory, and you either need to invoke the dt option to choose a coarser sampling interval on which to grid the dataset, or break the dataset up and average the individual variograms together.
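Both workarounds look something like the following sketch, with dt in seconds and SEGMENTS standing in for a list of telemetry objects that you've split the data into:

    SVF <- variogram(DATA, dt = 3600)    # bin lags on a 1-hour grid instead of 1 s

    SVFS <- lapply(SEGMENTS, variogram)  # variogram for each chunk of data
    SVF  <- mean(SVFS)                   # average the individual variograms together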
With akde(), the default res option is larger than necessary, for aesthetic reasons. I'm kind of surprised that you ran out of memory with a single individual, but you should be able to safely go from the default res=10 down to res=1 without issue, and that would decrease the memory cost by a factor of 100. Let me know if that works here, and I can look at tweaking the default options.
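E.g., with FIT standing in for your fitted ctmm model:

    UD <- akde(DATA, FIT, res = 1)   # ~100× less grid memory than the default res=10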
For akde() with multiple individuals calculated simultaneously on the same grid, it's easier to run out of memory, because that code needs to be rewritten to be more efficient. But nobody has raised a real issue there yet, so I haven't prioritized it.
Best,
Chris