It's worth reading the entire paper —
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
— it's a pretty clear description of the anomaly computation (as it
stood in 1987) and doesn't demand much (if any) knowledge of climate
science.
Richard Hendricks wrote:
>> “For the results we present, we used only station records which had
>> an overlap of 20 years or more with the combination of other stations
>> within 1200 km. We tested other choices for this overlap period and
>> found little effect on the global and zonal results. Some effect
>> could be seen on global maps of derived temperature change; a limit
>> of 5 years or less caused several unrealistic local hot spots or
>> cold spots to appear, while a limit greater than 20 years caused a
>> significant reduction in the global area with station coverage.”
>
> I don't understand why shorter overlaps cause temperature spots.
> Did they explain this further?
There's no explanation beyond what I quoted above (see page 13,350,
right column). However, it seems clear how it could happen: when two
records are combined, one is shifted by the mean difference between
the stations over their overlap period, so a short overlap means that
offset is occasionally estimated from unrepresentative (outlier)
years. Imagine the extreme case in which we permit records to be
combined on the basis of a single year of overlap, and imagine two
stations whose only common year was one in which station A was
unusually hot and station B was unusually cold. The offset computed
from that one year would be wrong, so the whole combined record would
be misleadingly warm (if A was combined into B) or cold (if B was
combined into A). This will happen only rarely (when stations overlap
with their neighbours only in outlier years), leading to local hot-
and cold-spots.
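Here's a toy sketch of the mechanism (my own illustration with
made-up numbers, not their code; their scheme also weights stations
by distance from the grid point, which I've ignored):

# Toy illustration only: combine station B into station A by shifting B
# so that the two agree on average over their overlap, then average.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 1980)
climate = 0.01 * (years - 1900)            # slow common warming signal
station_a = climate + rng.normal(0, 0.5, years.size)        # A's anomalies
station_b = climate + 3.0 + rng.normal(0, 0.5, years.size)  # B runs 3 C warmer

def combine(a, b, overlap):
    # Estimate B's offset from the first `overlap` common years, then
    # shift B and average the two records.
    bias = np.mean(a[:overlap] - b[:overlap])
    return 0.5 * (a + b + bias)

for overlap in (1, 5, 20):
    combined = combine(station_a, station_b, overlap)
    # How far the combined record's mean is from the true climate signal:
    print(overlap, round(float(np.mean(combined - climate)), 2))

With a 1-year overlap the bias estimate inherits whatever weather
noise happened that year, and that error contaminates every year of
the combined record; with 20 years the noise mostly averages out.
That's my reading of why short overlap limits produced spurious hot
and cold spots on their maps.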
>> If I understand rightly, they used a single run from the model to do
>> their error analysis. I guess morally speaking they ought to run the
>> model many times with different parameters, but in 1987 I doubt they
>> had enough computer time to do that. (They’ve undoubtedly done many
>> more runs since 1987, with updated and improved GCMs.)
>
> I didn't get that from what you quoted. Was there additional text
> that implied they only did one run?
It was a general impression I got from section 5. Here's part of the
introduction to the section (pages 13,360–13,362):
“We obtain a quantitative estimate of the error due to imperfect
spatial and temporal coverage with the help of a 100-year run of a
general circulation model (GCM). The GCM is model II, described by
Hansen et al. [1983]. In the 100-year run the ocean temperature was
computed, but horizontal ocean heat transports were fixed (varying
geographically and seasonally, but identical from year to year) as
described by Hansen et al. [1984]. The ocean mixed layer depth also
varied geographically and seasonally, and no heat exchange occurred
between the mixed layer and the deeper ocean. This 100-year run will
be described in more detail elsewhere, since it serves as the control
run for several transient CO2/trace gas climate experiments.”
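As I read section 5, the procedure is roughly: sample the model's
surface temperature only in the grid boxes that contain stations,
compute a "global mean" from that subsample the same way as for the
real data, and compare it with the model's true, full-coverage global
mean. Schematically, something like this (hypothetical arrays, my own
sketch, not their code):

# Schematic of the coverage-error idea as I understand it (made-up data).
import numpy as np

rng = np.random.default_rng(1)
n_years, n_boxes = 100, 80                    # 100-year run, 80 grid boxes
# Model surface air temperature anomaly per box per year (a random walk
# here, standing in for real GCM output).
model_temps = 0.1 * rng.normal(0, 1, (n_years, n_boxes)).cumsum(axis=0)
box_area = rng.uniform(0.5, 1.5, n_boxes)     # stand-in for box areas
has_station = rng.random(n_boxes) < 0.4       # boxes with station coverage

true_mean = np.average(model_temps, axis=1, weights=box_area)
sampled_mean = np.average(model_temps[:, has_station], axis=1,
                          weights=box_area[has_station])

# RMS difference between the subsampled and true global means is an
# estimate of the error due to incomplete spatial coverage.
print(round(float(np.sqrt(np.mean((sampled_mean - true_mean) ** 2))), 3))

The point is that a single 100-year control run gives one realization
of the variability to subsample in this way, which is what makes me
think their error estimate came from that one run.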
> I would be very surprised if they did just one run of a GCM; even
> then they knew that minor input effects could change the results of
> a single run (hence all GCM results are "ensemble" type outputs, not
> just single runs, unless specifically discussing the effects in a
> single run).
I don't doubt that they have done multiple runs of their GCMs over the
years, but section 5 of the paper implies to me that they used a
single run of the model to do their error estimation for their 1987
results. Do read the paper and see if you agree with me.
--
Gareth Rees