Handling Different Temporal Resolutions in a Time Series Analysis

Orul Orul Oyun

Aug 1, 2025, 5:02:38 AM
to MintPy

First of all, hello. In my latest study, I conducted a 10-year time series analysis. As you may know, with the launch of Sentinel-1B the temporal resolution improved to approximately 6 days, allowing for more frequent acquisitions. However, following a technical malfunction in 2022, data acquisition reverted to a 12-day interval. Prior to Sentinel-1B becoming operational in 2016, data were likewise collected at 12-day intervals.

Furthermore, due to occasional technical issues with the satellites, parts of the analysis had to be performed with datasets of varying temporal resolution.

My question is as follows:
When conducting an analysis spanning the years 2015 to 2025, how scientifically accurate is it to present a single-point time series graph that includes periods with both 6-day and 12-day acquisition intervals? For example, while the data in 2021 is collected every 6 days, the data in 2022 is collected every 12 days.

Is it appropriate to present these periods together in the same time series graph? How should such data be evaluated? Is it scientifically valid to display the graph as it is, or would it be better to use monthly averages to account for the temporal inconsistencies—even though this approach may require more effort over a 10-year period?

I look forward to your insights. Wishing everyone a great day.

Yuan-Kai Liu

Aug 20, 2025, 5:48:06 PM
to MintPy
I don't see why this would be a problem. In every science field there are data gaps here and there; we can just present the data honestly, as they are. We can also apply whatever averaging in time we like, but the result is then a representation tied to that specific averaging window. Although averaging may make the series look evenly sampled in time, it still is not: some bins have higher uncertainty than others, and if we don't present that binned uncertainty properly, the plot is still not honest.
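For instance, a minimal pandas sketch of that kind of binning, carrying the per-bin standard error along so the uneven sampling stays visible (the arrays here are made-up single-pixel values for illustration, not MintPy outputs):

import numpy as np
import pandas as pd

# hypothetical acquisition dates and LOS displacements for one pixel
dates = pd.to_datetime(["2021-01-03", "2021-01-09", "2021-01-15",
                        "2021-02-02", "2021-02-14"])
disp = np.array([0.10, 0.28, 0.21, 0.47, 0.55])  # [cm]

ts = pd.Series(disp, index=dates)
monthly = ts.resample("MS").agg(["mean", "std", "count"])
# standard error of each monthly mean; months with fewer epochs get a larger error
monthly["sem"] = monthly["std"] / np.sqrt(monthly["count"])
print(monthly)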

If you want to infer some model fit to the time series (linear trend, sinusoids, etc.) and estimate its uncertainty, sure, MintPy has functions to do that, either using the residuals and assuming all epochs are independent, or using the full covariance matrix of the time series. But I think this least-squares problem cannot quantify the extra uncertainty from the data gaps; it still assumes the forward design matrix G is perfect, so more gaps do not give you a higher error... Or correct me if I am wrong, I haven't thought about it more carefully.
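Roughly, that least-squares route looks like the sketch below (a generic numpy illustration with synthetic data, not the actual MintPy code path). The parameter covariance comes from the residual scatter and the sampling in G; it adds no extra term for the gap structure itself.

import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 150))                  # decimal years, uneven sampling
y = 0.5 * t + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)

# design matrix G: offset, linear trend, annual sinusoid
G = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
m = np.linalg.lstsq(G, y, rcond=None)[0]

# residual-based covariance, assuming independent, equal-variance epochs
dof = t.size - G.shape[1]
sigma2 = np.sum((y - G @ m) ** 2) / dof
Cm = sigma2 * np.linalg.inv(G.T @ G)
print("velocity = %.3f +/- %.3f" % (m[1], np.sqrt(Cm[1, 1])))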

If you want to incorporate the density of the data into the error of the inferred model, maybe you want to look into Gaussian-process kinds of methods.
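A small scikit-learn sketch of that idea, with kernel choices that are purely illustrative: the GP posterior standard deviation widens inside the data gap, so the sampling density feeds directly into the quoted uncertainty.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.concatenate([np.arange(0, 3, 6 / 365.25),      # 6-day sampling
                    np.arange(4, 7, 12 / 365.25)])    # 12-day sampling after a 1-yr gap
y = 0.5 * t + rng.normal(0, 0.2, t.size)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0)
                                     + WhiteKernel(noise_level=0.04))
gp.fit(t[:, None], y)
tq = np.linspace(0, 7, 200)
mean, std = gp.predict(tq[:, None], return_std=True)
# std peaks inside the 3-4 yr gap: denser sampling -> tighter posterior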

But anyway, if the purpose is to present the data, I think showing them as they are is fine (maybe with a window average on top, to guide our eyes?).
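Something like this, for the eye-guide overlay (toy data; a time-based rolling window tolerates the uneven 6/12-day spacing):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# toy series: replace with the real single-pixel displacement record
dates = pd.date_range("2021-01-01", "2022-12-31", freq="6D")
ts = pd.Series(np.cumsum(np.random.default_rng(2).normal(0, 0.1, dates.size)),
               index=dates)

smooth = ts.rolling("90D", center=True).mean()   # 90-day centered running mean
fig, ax = plt.subplots()
ax.plot(ts.index, ts.values, ".", label="acquisitions")
ax.plot(smooth.index, smooth.values, "-", label="90-day rolling mean")
ax.set_xlabel("date")
ax.set_ylabel("LOS displacement [cm]")
ax.legend()
plt.show()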