From reading all that, an old saying comes to mind:
"Never let the perfect be the enemy of the good."
Yes, you can do this in a very complicated fashion. But the fact that this has apparently been an open issue for three years suggests that the perfect is becoming the enemy of the good.
Yes, individually averaging the latitude, the longitude, and the altitude would be an excellent start. But you could also plot these individual running averages over time; that way you could see how long it takes the average to stabilize.
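As a rough sketch of that first step, here is how the per-component running averages might be computed. The fixes in `readings` are made-up example values, not real data; a real logger would append one `(lat, lon, alt)` tuple per GPS fix:

```python
# Hypothetical GPS fixes: (latitude_deg, longitude_deg, altitude_m).
readings = [
    (40.68921, -74.04450, 10.2),
    (40.68925, -74.04447, 11.0),
    (40.68919, -74.04452, 9.8),
    (40.68923, -74.04449, 10.5),
]

# Running average of each component after n fixes.
running_avgs = []
lat_sum = lon_sum = alt_sum = 0.0
for n, (lat, lon, alt) in enumerate(readings, start=1):
    lat_sum += lat
    lon_sum += lon
    alt_sum += alt
    running_avgs.append((lat_sum / n, lon_sum / n, alt_sum / n))

# Plotting running_avgs against the sample index (with matplotlib,
# a spreadsheet, etc.) shows how long the average takes to settle.
for n, avg in enumerate(running_avgs, start=1):
    print(n, avg)
```

The first running average is just the first fix, and each later point folds in one more reading, so the curve flattens as the average converges.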
A somewhat more complicated, but I think still quite doable, method would be to plot each reading's distance from the average as it stands at the end of the data. I realize that sounds complicated, so let me clarify.
As this average is built up, it should ultimately approach an ideal value. The hardware and software cannot know in advance what that eventual value is, but they do have the running average. So the system could maintain a plot that it goes back and updates: how far, in three-dimensional space, was each individual reading from the eventual average?
I hope I can make this clear: in most plots, you determine what the value was at some specific time and draw it on the graph once. In the system I am thinking about, the early points are not static: as the average location changes, the previously plotted values are recomputed against the then-current final value.
So what's the purpose of this? I think that you want to ask the following question:
"How far is this current reading from the eventual average that this averaging process determines?"
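That question can be sketched in code. Everything here is my own assumption, not from the original discussion: the sample fixes are invented, and `local_distance_m` uses a crude flat-earth conversion (valid only for points close together) rather than a proper geodetic one:

```python
import math

def local_distance_m(p, q):
    """Approximate 3-D distance in metres between two nearby
    (lat_deg, lon_deg, alt_m) points, using a flat-earth approximation."""
    lat1, lon1, alt1 = p
    lat2, lon2, alt2 = q
    m_per_deg_lat = 111_320.0  # rough metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians((lat1 + lat2) / 2))
    dn = (lat2 - lat1) * m_per_deg_lat   # north-south offset, metres
    de = (lon2 - lon1) * m_per_deg_lon   # east-west offset, metres
    du = alt2 - alt1                     # vertical offset, metres
    return math.sqrt(dn * dn + de * de + du * du)

# Hypothetical fixes, as before.
readings = [
    (40.68921, -74.04450, 10.2),
    (40.68925, -74.04447, 11.0),
    (40.68919, -74.04452, 9.8),
    (40.68923, -74.04449, 10.5),
]

# The "eventual average" is just the mean of each component over all fixes.
final_avg = tuple(sum(c) / len(readings) for c in zip(*readings))

# One distance per reading: how far each fix was from the eventual average.
errors = [local_distance_m(r, final_avg) for r in readings]
print(errors)
```

Plotting `errors` against the sample index gives exactly the picture described above, with every point measured against the final value.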
Another question might be: if you were to stop the process at this point, how far would the current running average be from the eventual averaged value?
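This second question can be sketched the same way, again under my own assumptions (invented sample fixes and a flat-earth distance approximation): track the running average after each fix and measure its distance from the final average.

```python
import math

def flat_dist_m(p, q):
    """Flat-earth 3-D distance in metres between two nearby
    (lat_deg, lon_deg, alt_m) points."""
    lat1, lon1, alt1 = p
    lat2, lon2, alt2 = q
    m_lat = 111_320.0
    m_lon = m_lat * math.cos(math.radians((lat1 + lat2) / 2))
    return math.sqrt(((lat2 - lat1) * m_lat) ** 2
                     + ((lon2 - lon1) * m_lon) ** 2
                     + (alt2 - alt1) ** 2)

# Hypothetical fixes.
readings = [
    (40.68921, -74.04450, 10.2),
    (40.68925, -74.04447, 11.0),
    (40.68919, -74.04452, 9.8),
    (40.68923, -74.04449, 10.5),
]

final_avg = tuple(sum(c) / len(readings) for c in zip(*readings))

# After each fix, how far is the running average from the final average?
convergence = []
sums = [0.0, 0.0, 0.0]
for n, r in enumerate(readings, start=1):
    sums = [s + v for s, v in zip(sums, r)]
    running = tuple(s / n for s in sums)
    convergence.append(flat_dist_m(running, final_avg))
print(convergence)
```

By construction the last point is zero, since the running average over all fixes is the final average; the interesting part is how quickly the earlier points shrink toward it.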
So I would say this: choose a process, implement it, and put it to use. It looks like nothing has happened for nearly three years.