
Using Global Climate Models


David Ball

Nov 19, 2000, 3:00:00 AM
I decided to start a new thread here largely because the two
threads Mr. Drake has initiated in recent days on climate models have
been getting a little congested, with several different conversations
going on under the same banner.
Mr. Drake (I refuse to apply the "Dr." as he has refused all
requests to tell us exactly what his doctorate is in, and it really
isn't germane to the discussion) seems to be having some difficulty
understanding how climate models in particular and numerical models in
general operate. The summary of his posts boils down to one
unassailable fact, at least from his point of view: numerical models
don't work.
Yet we know that they do. From high-resolution meso-scale
models showing the complex wind patterns inside a supercell
thunderstorm, to somewhat lower-resolution operational weather models,
to even lower-resolution climate models, we have been able to
understand many aspects of how the atmosphere works. In most
instances, the model output has been corroborated by independent
evidence. The current 5-day weather forecasts - soon to be 7-day in
many parts of the world - are available solely because of the ability
of numerical models to accurately portray the future state of the
atmosphere.
Are numerical models perfect? God no! There is much we don't
understand about the underlying processes involved in shaping the
atmosphere. New ways of initializing numerical models, new
parameterization schemes, new data assimilation techniques and new
explicit modeling schemes are being created all the time and each new
improvement leads to improved output.
Mr. Drake's problem is that he misunderstands what goes on
inside a numerical model on a fundamental level. He assumes that all
there is is a functional representation of what the real atmosphere
looks like, which one merrily integrates forward in time until
reaching some pre-determined point in the future, at which point you
stop and look at the results.
No model is run that way. Climate models can be run on any
historical data. You could start your model with data from 50 years
ago and watch how it handles the global climate over that 50-year
period, tuning the model algorithms until you are able to create an
acceptable match with the climates known to have occurred over the globe.
That is the control experiment. Once you start to integrate the model
forward in time, there is no dispersion. The model will not blow up or
diverge or anything else, because you are modeling a real entity: the
atmosphere. If nothing changes in the way the model atmosphere is
constructed the output should remain realistic.
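
To make that concrete, here is a minimal sketch of the kind of hindcast
tuning loop I mean - a toy one-parameter model, not any real GCM's
calibration procedure; the CO2 path and the "observed" record below are
synthetic stand-ins:

    # Toy hindcast tuning: adjust the model's one free parameter until
    # its replay of a known period matches the "observed" record.
    import numpy as np
    from scipy.optimize import minimize_scalar

    years = np.arange(1950, 2000)
    co2 = np.linspace(310.0, 370.0, years.size)      # assumed ppm path

    def hindcast(sensitivity):
        return sensitivity * np.log(co2 / co2[0])    # modelled anomaly, deg C

    rng = np.random.default_rng(0)
    observed = hindcast(2.0) + rng.normal(0.0, 0.05, years.size)

    best = minimize_scalar(lambda s: np.mean((hindcast(s) - observed) ** 2),
                           bounds=(0.1, 10.0), method="bounded")
    print("tuned sensitivity:", best.x)              # recovers ~2.0
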
The real problem is that the myriad processes being modeled
are not static. They change over time. Changes in solar output.
Changes in greenhouse gas emissions. Changes in land use practices.
The list goes on and on. The departures that one sees between model
output and reality do not come about, as Mr. Drake asserts, because the
model is wrong, but because the atmosphere changes. The model has to
change with each of these processes. Figure out how to do that!!
That's why modeling future climate is really a game of what
if? We don't know that CO2 values will be double their historic values
by the turn of the next century. We don't know what land use changes
are going to happen. We don't know if the Rockies are going to
suddenly slide into the Pacific. So we make some educated guesses and
run the model. Then we change the inputs and run the model again. And
again. And again, until we begin to get a picture of how sensitive the
atmosphere/ocean system is to changes large and small. If we get an updated
projection on something like CO2 emissions, we run the models again.
Each and every time we do it, we get a slightly different result.
There are dozens of GCMs being run around the world at different
scales with different model physics, etc., and the output from each will
be slightly different with each run. Is this bad? Hell no. It's
valuable information.
The problem is that Mr. Drake is looking for a definitive answer.
He knows that that isn't possible and uses it as "proof" that modeling
doesn't work. Well, we know that it does work, but like any tool it has
to be used appropriately. Mr. Drake seeks to use it inappropriately
and, voila! he is surprised when the tool doesn't do what he wants it
to do. Model output is guidance. It should be used as that. That is
what the majority of policy makers are attempting to use it for. Mr.
Drake isn't happy about that, but then how much can you really trust
someone who is attempting to hammer a nail with a crescent wrench?

--
Dave.

Leonard Evens

Nov 19, 2000, 3:00:00 AM
David Ball wrote:
>
> I decided to start a new thread here largely because the two
> threads Mr. Drake has initiated in recent days on climate models have
> been getting a little congested, with several different conversations
> going on under the same banner.
> Mr. Drake (I refuse to apply the "Dr." as he has refused all
> requests to tell us exactly what his doctorate is in, and it really
> isn't germane to the discussion)

I've noticed that he often signs his messages as DDD. Is it possible
that "Dr." is his first name? I stand ready to be corrected about this
entirely unsupported conjecture. :-)

--

Leonard Evens l...@math.nwu.edu 847-491-5537
Dept. of Mathematics, Northwestern Univ., Evanston, IL 60208

Don Libby

Nov 19, 2000, 3:00:00 AM
Leonard Evens wrote:
>
> David Ball wrote:
> >
> > I decided to start a new thread here largely because the two
> > threads Mr. Drake has initiated in recent days on climate models
> > have been getting a little congested, with several different
> > conversations going on under the same banner.
> > Mr. Drake (I refuse to apply the "Dr." as he has refused all
> > requests to tell us exactly what his doctorate is in, and it really
> > isn't germane to the discussion)
>
> I've noticed that he often signs his messages as DDD. Is it possible
> that "Dr." is his first name? I stand ready to be corrected about this
> entirely unsupported conjecture. :-)
>

I suspect "Donald Drake" is a play on "Donald Duck". And I second Len's
opinion "I stand ready to be corrected about this entirely unsupported
conjecture. :-)"

Real doctoral degrees are a matter of public record, although it may
take some work to find the records. A search of Dissertation Abstracts
may turn up a hit or two for Drakes with doctorates, but not all
dissertators submit their abstracts. Much more reliable is a search of
the library catalog at whatever university Drake claims to have
graduated from. I haven't been paying attention to Drake's posts to know
what's been said of his credentials.

-dl

--

*********************************************************
* Replace "never.spam" with "dlibby" to reply by e-mail *
*********************************************************

j...@watson.ibm.com

Nov 19, 2000, 9:31:11 PM
In article <3a17f86f...@news.escape.ca>,
on Sun, 19 Nov 2000 15:58:17 GMT,
wra...@mb.sympatico.ca (David Ball) writes:

<snip>

> No model is run that way. Climate models can be run on any
>historical data. You could start your model with data from 50 years
>ago and watch how it handles the global climate over that 50 year
>period, tuning the model algorithms until you are able to create an
>acceptable match with the climates known to have occurred over the globe.
>That is the control experiment. Once you start to integrate the model
>forward in time, there is no dispersion. The model will not blow up or
>diverge or anything else, because you are modeling a real entity: the
>atmosphere. If nothing changes in the way the model atmosphere is
>constructed the output should remain realistic.

I don't believe this can properly be called a "control
experiment". By the way, what constitutes an "acceptable match"?
If we could rewind time and redo the last 50 years over and over
again with slightly perturbed initial conditions, how much would
the climate appear to vary? How do we know?
I don't follow your argument about modeling a real entity.
The output of a weather model may stay "realistic" but it will
certainly diverge from reality and this could be true of a climate
model as well.

> The real problem is that the myriad processes being modeled
>are not static. They change over time. Changes in solar output.
>Changes in greenhouse gas emissions. Changes in land use practices.
>The list goes on and on. The departures that one sees between model
>output and reality do not come about, as Mr. Drake asserts because the
>model is wrong, but because the atmosphere changes. The model has to
>change with each of these processes. Figure out how to do that!!

I don't believe this is the real problem.

> That's why modeling future climate is really a game of what
>if? We don't know that CO2 values will be double their historic values
>by the turn of the next century. We don't know what land use changes
>are going to happen. We don't know if the Rockies are going to
>suddenly slide into the Pacific. So we make some educated guesses and
>run the model. Then we change the inputs and run the model again. And
>again. And again, until we begin to get a picture of how sensitive the
>atmosphere/ocean system is to changes large and small. If we get an updated
>projection on something like CO2 emissions, we run the models again.
>Each and every time we do it, we get a slightly different result.
>There are dozens of GCMs being run around the world at different
>scales with different model physics, etc., and the output from each will
>be slightly different with each run. Is this bad? Hell no. It's
>valuable information.

What do you mean by "slightly different"?

> The problem is Mr. Drake is looking for a definitive answer.
>He knows that that isn't possible and uses it as "proof" that modeling
>doesn't work. Well we know that it does work, but like any tool it has
>to be used appropriately. Mr. Drake seeks to use it inappropriately
>and voila! he is surprised when the tool doesn't do what he wants it
>to do. Model output is guidance. It should be used as that. That is
>what the majority of policy makers are attempting to use it for. Mr.
>Drake isn't happy about that, but then how much can you really trust
>someone who is attempting to hammer a nail with a crescent wrench?

It is inappropriate to expect complicated models of complex,
poorly understood processes like climate to be more predictive than
simple models. They just give a false sense of security. Policy
makers should use simple models for guidance and accept that there is
a lot of uncertainty.
James B. Shearer

David Ball

Nov 19, 2000, 10:47:22 PM
On Mon, 20 Nov 2000 02:31:11 GMT, j...@watson.ibm.com wrote:

[..]


>
> I don't believe this can properly be called a "control
>experiment". By the way what constitutes an "acceptable match".
>If we could rewind time and redo the last 50 years over and over
>again with slightly perturbed initial conditions, how much would
>the climate appear to vary? How do we know?

You are modeling a known quantity. As such it is a control. If
your model cannot adequately capture climate that we know has
occurred, then it cannot be expected to adequately capture an unknown
future climate.
Running models with slightly perturbed initial conditions is a
hallmark of ensemble forecasting. One of the characteristics you want
in your model is that if you run the model with input dataset A you
get result a. If you run the model again with input dataset B -
slightly different from A - you will get result b. You want a and b
to be close together. If you have a and b very different you have a
serious problem with your model.
A good primer on the basics of ensemble forecasting can be
found at:

http://www.cmc.ec.gc.ca/~cmsw/ensemble/course/ABC.html

Now, this applies to operational weather models, but the principles
involved are similar.
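
The mechanics are easy enough to sketch in a few lines of Python - here
using the Lorenz-63 equations purely as a stand-in for a forecast model
(my illustrative choice, not what any operational centre runs):

    # Toy ensemble: run the same model from slightly perturbed initial
    # conditions and look at the spread of the resulting forecasts.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    rng = np.random.default_rng(0)
    base = np.array([1.0, 1.0, 1.0])
    forecasts = []
    for _ in range(20):                                # 20 ensemble members
        start = base + rng.normal(scale=1e-3, size=3)  # perturbed analysis
        sol = solve_ivp(lorenz, (0.0, 5.0), start, rtol=1e-8)
        forecasts.append(sol.y[0, -1])                 # x at forecast time

    print("ensemble mean  :", np.mean(forecasts))
    print("ensemble spread:", np.std(forecasts))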

> I don't follow your argument about modeling a real entity.
>The output of a weather model may stay "realistic" but it will
>certainly diverge from reality and this could be true of a climate
>model as well.

Is the atmosphere not real? The oceans? You misunderstand. It
is not the model that diverges from the atmosphere but the atmosphere
that diverges from the model. A numerical model is nothing more than
the physical equations that describe on-going processes in the
atmosphere. Nothing more. Equations of motion. Radiative transfer.
Continuity of mass.
Once you have a model that accurately shows how the global
climate has worked for the past, say, 50 years, integrating the model
forward in time is a no-brainer...provided changes in the atmosphere
are adequately anticipated or if your atmosphere/ocean system didn't
change. But it does change, and that is the problem. It isn't the
model solution that diverges from the real atmosphere but the real
atmosphere that diverges from the model.

>> The real problem is that the myriad processes being modeled
>>are not static. They change over time. Changes in solar output.
>>Changes in greenhouse gas emissions. Changes in land use practices.
>>The list goes on and on. The departures that one sees between model
>>output and reality do not come about, as Mr. Drake asserts because the
>>model is wrong, but because the atmosphere changes. The model has to
>>change with each of these processes. Figure out how to do that!!
>
> I don't believe this is the real problem.

And you "belief" is based on what?

>
>> That's why modeling future climate is really a game of what
>>if? We don't know that CO2 values will be double their historic values
>>by the turn of the next century. We don't know what land use changes
>>are going to happen. We don't know if the Rockies are going to
>>suddenly slide into the Pacific. So we make some educated guesses and
>>run the model. Then we change the inputs and run the model again. And
>>again. And again, until we begin to get a picture of how sensitive the
>>atmosphere/ocean system is to changes large and small. If we get an updated
>>projection on something like CO2 emissions, we run the models again.
>>Each and every time we do it, we get a slightly different result.
>>There are dozens of GCMs being run around the world at different
>>scales with different model physics, etc., and the output from each will
>>be slightly different with each run. Is this bad? Hell no. It's
>>valuable information.
>
> What do you mean by "slightly different"?

Exactly what it says. You won't get the same result when you
run a numerical model twice using slightly different input data. You
should expect the results to be similar, or you have a problem with
your model.

[..]

>
> It is inappropriate to expect complicated models of complex,
>poorly understood processes like climate to be more predictive than
>simple models. They just give a false sense of security. Policy
>makers should use simple models for guidance and accept that there is
>a lot of uncertainty.

It is inappropriate to "expect" sophisticated numerical models
to be better than simple ones? How do you figure? Are you suggesting
that a simple baroclinic model with 7 vertical layers on a 400 km grid,
as they ran 25 years ago, will produce accurate weather forecasts of
the kind one gets today with a 16 km model with 35 levels in the
vertical run on a global domain? Hardly the case.
If you read the IPCC reports you will see the uncertainties
clearly spelled out. They are using the model output as guidance,
exactly the way they should do it.

--
Dave.

Leonard Evens

Nov 20, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:
>
> It is inappropriate to expect complicated models of complex,
> poorly understood processes like climate to be more predictive than
> simple models. They just give a false sense of security. Policy
> makers should use simple models for guidance and accept that there is
> a lot of uncertainty.
> James B. Shearer

Why?

What is your definition of "simple model" as opposed to a "complicated
model"? What is your definition of "poorly understood"?

Should a paper and pencil one dimensional model be as useful as
a three dimensional model which incorporates a realistic picture
of oceans and the atmosphere?

What is your definition of "is"? Is it equivalent to "may be"? :-)

Miguel Aguirre

Nov 20, 2000, 3:00:00 AM

Don Libby wrote:

>
> Real doctoral degrees are a matter of public record, although it may
> take some work to find the records. A search of Dissertation Abstracts
> may turn up a hit or two for Drakes with doctorates, but not all
> dissertators submit their abstracts. Much more reliable is a search of
> the library catalog at whatever university Drake claims to have
> graduated. I haven't been paying attention to Drake's posts to know
> what's been said of his credentials.

Remember that the Germany that was democratic was called just Germany but the
Germany that was not called itself the Democratic Republic of Germany. People
call themselves things they are not.
--
Aguirre was considered to be a thoroughly disreputable character, and his
name practically became synonymous with cruelty and treachery
Encyclopaedia Britannica.

Miguel Aguirre

Nov 20, 2000, 3:00:00 AM

David Ball wrote:

> >
> > It is inappropriate to expect complicated models of complex,
> >poorly understood processes like climate to be more predictive than
> >simple models. They just give a false sense of security. Policy
> >makers should use simple models for guidance and accept that there is
> >a lot of uncertainty.
>
> It is inappropriate to "expect" sophisticated numerical models
> to be better than simple ones? How do you figure? Are you suggesting
> that a simple baroclinic model with 7 vertical layers on a 400 km grid
> as they ran 25 years ago will produce accurate weather forecasts of
> the kind one gets today with 16 km model with 35 levels in the
> vertical run on a global domain? Hardly the case.
> If you read the IPCC reports you will see the uncertainties
> clearly spelled out. They are using the model output as guidance,
> exactly the way they should do it.
>
> --
> Dave.

My understanding is that a 'good' climatic model needs a much smaller number
of degrees of freedom than a 'good' weather prediction one.

On the other hand, a 'good' climatic model will need the proper modeling
of many feedback loops that are difficult to implement (sea-atmosphere, sea
ice-atmosphere) or that we do not have a clue how to implement (biosphere to
atmosphere and sea).

Phil Hays

Nov 20, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:

> Consider the following simple model. Suppose the average
> temperature of the earth is determined by CO2 forcing (assumed to
> be proportional to the log of concentration) plus random noise.
> Throw in a heat capacity term to give a lagged response. Find the
> best fit for the last 150 years.

The problem with simple models is that it is fairly easy to generate them, and
fairly hard to determine which simple model gives better predictions. A simple
model I might suggest would be exactly the same as yours, but using the last
20,000 years to set the CO2-to-temperature relationship, and the last 1000
years to set the climate noise parameter. I'm not sure how you determine the "heat
capacity term".

This simple model will give rather different predictions for double CO2 in 2100
than your simple model. Which simple model is better? The reasons I might give
for my simple model are that the past 20,000 years has the best recorded
relationship between CO2 levels and temperature; and the past 1000 years has the
best record of year to year climate "noise", in my opinion, of course. I'm sure
that you can come up with similar opinions as to why your model might be better,
yet how, other than opinion, could we choose between simple models?


--
Phil Hays

Josh Halpern

Nov 20, 2000, 8:53:02 PM
Miguel Aguirre wrote:

> Don Libby wrote:
> > Real doctoral degrees are a matter of public record, although it may
> > take some work to find the records. A search of Dissertation Abstracts
> > may turn up a hit or two for Drakes with doctorates, but not all
> > dissertators submit their abstracts. Much more reliable is a search of
> > the library catalog at whatever university Drake claims to have
> > graduated. I haven't been paying attention to Drake's posts to know
> > what's been said of his credentials.
>
> Remember that the Germany that was democratic was called just Germany but the
> Germany that was not called itself the Democratic Republic of Germany. People
> call themselves things they are not.

To pick a nit, it was called the Bundesrepublik Deutschland and it still is.
But only among friends :)

(Federal Republic of Germany). Nits are even more disreputable than Aguirre.

josh halpern

j...@watson.ibm.com

Nov 20, 2000, 10:20:42 PM
In article <3a189769...@news.escape.ca>,
on Mon, 20 Nov 2000 03:47:22 GMT,
wra...@mb.sympatico.ca (David Ball) writes:
>On Mon, 20 Nov 2000 02:31:11 GMT, j...@watson.ibm.com wrote:
>
>[..]
>>
>> I don't believe this can properly be called a "control
>>experiment". By the way what constitutes an "acceptable match".
>>If we could rewind time and redo the last 50 years over and over
>>again with slightly perturbed initial conditions, how much would
>>the climate appear to vary? How do we know?
>
> You are modeling a known quantity. As such it is a control. If
>your model cannot adequately capture climate that we know has
>occurred, then it cannot be expected to adequately capture an unknown
>future climate.

The climate for the last 50 years is not a known quantity.
We know what happened, but what happened may have been unlikely.
If you fit what happened too closely you are just fitting to noise.
For example, we can flip a fair coin 50 times and get 30 heads. The
model that matches this most closely says the probability of heads
is .6; however, the model that predicts the future best says the
probability of heads is .5. So it would be a mistake to reject the
.5 model even though it does not match the past as well as the .6
model.
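
The coin version of this is small enough to check directly; a throwaway
sketch using only the numbers in the example above:

    # Fit p = 30/50 = 0.6 to the observed sample, then score both that
    # fit and the true p = 0.5 on fresh flips of the fair coin.  The
    # overfit model scores worse on the future, as claimed.
    import numpy as np

    rng = np.random.default_rng(42)
    future = rng.random(100_000) < 0.5          # future flips, fair coin

    def mean_log_likelihood(p, flips):
        return np.mean(np.where(flips, np.log(p), np.log(1.0 - p)))

    print("fitted p=0.6:", mean_log_likelihood(0.6, future))
    print("true   p=0.5:", mean_log_likelihood(0.5, future))  # less negative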

> Running models with slightly perturbed initial conditions is a
>hallmark of ensemble forecasting. One of the characteristics you want
>in your model is that if you run the model with input dataset A you
>get result a. If you run the model again with input dataset B -
>slightly different from A - you will get result b. You want a and b
>to be close together. If you have a and b very different you have a
>serious problem with your model.

This is wrong. If reality has a sensitive dependence on
initial conditions so should your model. For example if you are
modeling a coin flip the fact that a small change in the input can
change the output from heads to tails does not indicate a serious
problem with the model.

>> I don't follow your argument about modeling a real entity.
>>The output of a weather model may stay "realistic" but it will
>>certainly diverge from reality and this could be true of a climate
>>model as well.
>
> Is the atmosphere not real? The oceans? You misunderstand. It
>is not the model that diverges from the atmosphere but the atmosphere
>that diverges from the model. A numerical model is nothing more than
>the physical equations that describe on-going processes in the
>atmosphere. Nothing more. Equations of motion. Radiative transfer.
>Continuity of mass.
> Once you have a model that accurately shows how the global
>climate has worked for the past, say, 50 years, integrating the model
>forward in time is a no-brainer...provided changes in the atmosphere
>are adequately anticipated or if your atmosphere/ocean system didn't
>change. But it does change, and that is the problem. It isn't the
>model solution that diverges from the real atmosphere but the real
>atmosphere that diverges from the model.

Once again the climate for the last 50 years is not
known. Furthermore integrating the model forward once is of little
value if there is a sensitivity to initial conditions.

>>> The real problem is that the myriad processes being modeled
>>>are not static. They change over time. Changes in solar output.
>>>Changes in greenhouse gas emissions. Changes in land use practices.
>>>The list goes on and on. The departures that one sees between model
>>>output and reality do not come about, as Mr. Drake asserts because the
>>>model is wrong, but because the atmosphere changes. The model has to
>>>change with each of these processes. Figure out how to do that!!
>>
>> I don't believe this is the real problem.
>
> And you "belief" is based on what?

My reading of the literature.

>> What do you mean by "slightly different"?
>
> Exactly what it says. You won't get the same result when you
>run a numerical model twice using slightly different input data. You
>should expect the results to be similar, or you have a problem with
>your model.

Some physical systems have a sensitive dependence on
initial conditions. In which case dissimilar results reflect
reality and are not a problem with the model.

>>
>> It is inappropriate to expect complicated models of complex,
>>poorly understood processes like climate to be more predictive than
>>simple models. They just give a false sense of security. Policy
>>makers should use simple models for guidance and accept that there is
>>a lot of uncertainty.
>
> It is inappropriate to "expect" sophisticated numerical models
>to be better than simple ones? How do you figure? Are you suggesting
>that a simple baroclinic model with 7 vertical layers on a 400 km grid
>as they ran 25 years ago will produce accurate weather forecasts of
>the kind one gets today with 16 km model with 35 levels in the
>vertical run on a global domain? Hardly the case.
> If you read the IPCC reports you will see the uncertainties
>clearly spelled out. They are using the model output as guidance,
>exactly the way they should do it.

I am suggesting that if you have no good way to validate a
model you should keep it simple.
James B. Shearer

j...@watson.ibm.com

Nov 20, 2000, 10:47:04 PM
In article <3A192D03...@math.nwu.edu>,
on Mon, 20 Nov 2000 07:54:11 -0600,
Leonard Evens <l...@math.nwu.edu> writes:

>j...@watson.ibm.com wrote:
>>
>> It is inappropriate to expect complicated models of complex,
>> poorly understood processes like climate to be more predictive than
>> simple models. They just give a false sense of security. Policy
>> makers should use simple models for guidance and accept that there is
>> a lot of uncertainty.
>> James B. Shearer
>
>Why?

Because when complicated models are tested against simple
models, simple models usually win.

>What is your definition of "simple model" as opposed to a "complicated
>model"? What is your definition of "poorly understood"?

Well, of course, there is really a range from very simple
models to very complicated models. Suppose the model is implemented
as a computer program. There are many ways to measure the complexity
of a computer program. Since I like simple models I will go with
lines of code.
So say a model is simple if it can be implemented in less than
100 lines of code, fairly simple for 100-10000 lines of code,
and complicated for more than 10000 lines of code.
Note software generally has on the order of 1 bug per
100 lines of code.

>Should a paper and pencil one dimensional model be as useful as
>a three dimensional model which incorporates a realistic picture
>of oceans and the atmosphere?

Consider the following simple model. Suppose the average
temperature of the earth is determined by CO2 forcing (assumed to
be proportional to the log of concentration) plus random noise.
Throw in a heat capacity term to give a lagged response. Find the
best fit for the last 150 years. Now we have a model that will
predict the earth's temperature in 2100 given any CO2 levels you
care to assume. I see no particular reason to expect some
complicated 3d model to give better predictions.
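
A sketch of what such a fit might look like, with an assumed CO2 path
and a synthetic "observed" record standing in for the real 150-year
data:

    # Lagged-response model: temperature relaxes toward
    # lam * log(CO2/CO2_0) with time constant tau; fit lam and tau by
    # least squares against a (here synthetic) 150-year record.
    import numpy as np
    from scipy.optimize import minimize

    years = np.arange(1850, 2000)
    co2 = np.linspace(285.0, 368.0, years.size)      # assumed ppm path
    forcing = np.log(co2 / co2[0])

    def simulate(lam, tau):
        temp = np.zeros(years.size)
        for i in range(1, years.size):               # lagged relaxation
            temp[i] = temp[i - 1] + (lam * forcing[i] - temp[i - 1]) / max(tau, 1.0)
        return temp

    rng = np.random.default_rng(1)
    observed = simulate(2.2, 8.0) + rng.normal(0.0, 0.05, years.size)

    fit = minimize(lambda p: np.sum((simulate(*p) - observed) ** 2),
                   x0=[1.0, 5.0], method="Nelder-Mead")
    lam, tau = fit.x
    print("fitted sensitivity and lag:", lam, tau)
    print("implied warming per CO2 doubling:", lam * np.log(2.0))
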
James B. Shearer

wmconnolley

Nov 21, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:
> If you fit what happened too closely you are just fitting to noise.
> For example we can flip a fair coin 50 times and get 30 heads. The
> model that matches this most closely says the probability of heads
> is .6, however the model that predicts the future best says the
> probability of heads is .5

Yes. This is why people worry a lot about getting the model's variability
right, and why verifying against the current climate is difficult.

> This is wrong. If reality has a sensitive dependence on
> initial conditions so should your model

It's a basic assumption that the climate - unlike the weather - is not
sensitive to the initial conditions. Whether or not this is true might
be an interesting topic for discussion.

> I am suggesting that if you have no good way to validate a
> model you should keep it simple.

The argument is valid but the premise (no good way to validate) is
probably false. The only way to generate "natural" variability in
a physically plausible way is probably through a GCM. Aspects of this
(MSLP, say) can be tested. Others (deep ocean circulation, say) can't.

-W.

--
W. M. Connolley | http://www.wmc.care4free.net
No, I haven't lost my job: NERC's newserver has become intolerable....
Posting, as ever, in a personal capacity.



David Ball

Nov 21, 2000, 3:00:00 AM
On Tue, 21 Nov 2000 03:20:42 GMT, j...@watson.ibm.com wrote:

>In article <3a189769...@news.escape.ca>,
> on Mon, 20 Nov 2000 03:47:22 GMT,
> wra...@mb.sympatico.ca (David Ball) writes:
>>On Mon, 20 Nov 2000 02:31:11 GMT, j...@watson.ibm.com wrote:
>>
>>[..]
>>>
>>> I don't believe this can properly be called a "control
>>>experiment". By the way what constitutes an "acceptable match".
>>>If we could rewind time and redo the last 50 years over and over
>>>again with slightly perturbed initial conditions, how much would
>>>the climate appear to vary? How do we know?
>>
>> You are modeling a known quantity. As such it is a control. If
>>your model cannot adequately capture climate that we know has
>>occurred, then it cannot be expected to adequately capture an unknown
>>future climate.
>
> The climate for the last 50 years is not a known quantity.
>We know what happened, but what happened may have been unlikely.
>If you fit what happened too closely you are just fitting to noise.
>For example we can flip a fair coin 50 times and get 30 heads. The
>model that matches this most closely says the probability of heads
>is .6, however the model that predicts the future best says the
>probability is of heads .5. So it would be a mistake to reject the
>.5 model even though it does not match the past as well as the .6
>model.

The climate for the past 50 years is not a known quantity?
You're going to have to help me out on this. I can go back and show
you what the weather was doing at a particular date and time at most
places on the planet 40 years ago. Exactly how is the climate not
known?
Climate is not even remotely like flipping a coin. You're
looking at the problem as if it were related to curve-fitting or the
lack of generalization one sometimes gets with a neural network. The
whole process of data assimilation is designed to deal with the issue
of getting very different data types into the model without having to
contend with noise, or as little of it as possible.


>
>> Running models with slightly perturbed initial conditions is a
>>hallmark of ensemble forecasting. One of the characteristics you want
>>in your model is that if you run the model with input dataset A you
>>get result a. If you run the model again with input dataset B -
>>slightly different from A - you will get result b. You want a and b
>>to be close together. If you have a and b very different you have a
>>serious problem with your model.
>
> This is wrong. If reality has a sensitive dependence on
>initial conditions so should your model. For example if you are
>modeling a coin flip the fact that a small change in the input can
>change the output from head to tails does not indicate a serious
>problem with the model.

It is not wrong and again, climate is not a coin toss. If the
results coming from a model differ by 3 orders of magnitude, I have a
problem with my model. If I know what happened, run my model and
everything works correctly except that things happen faster or slower
than expected, I have a problem with my model. Of course, you should
not expect exact agreement, but the numbers have to be realistic both
in time and space. In a climate context, if the global pattern of
cooling that took place from the 1950's to the early 1970's is not
captured or appears in the 1980's you have a problem with your model.

>
>>> I don't follow your argument about modeling a real entity.
>>>The output of a weather model may stay "realistic" but it will
>>>certainly diverge from reality and this could be true of a climate
>>>model as well.
>>
>> Is the atmosphere not real? The oceans? You misunderstand. It
>>is not the model that diverges from the atmosphere but the atmosphere
>>that diverges from the model. A numerical model is nothing more than
>>the physical equations that describe on-going processes in the
>>atmosphere. Nothing more. Equations of motion. Radiative transfer.
>>Continuity of mass.
>> Once you have a model that accurately shows how the global
>>climate has worked for the past, say, 50 years, integrating the model
>>forward in time is a no-brainer...provided changes in the atmosphere
>>are adequately anticipated or if your atmosphere/ocean system didn't
>>change. But it does change, and that is the problem. It isn't the
>>model solution that diverges from the real atmosphere but the real
>>atmosphere that diverges from the model.
>
> Once again the climate for the last 50 years is not
>known. Furthermore integrating the model forward once is of little
>value if there is a sensitivity to initial conditions.

Once again, you're going to have to help me here. Did the
tri-state tornado of 1925 not occur? The Galveston hurricane? Are you
suggesting we don't know what the global temperature pattern from 1960
to 1990 looked like? What the global precipitation pattern was in
1994? Climate is nothing more than past weather. We know what has
happened.
Who said anything about running a model once, BTW? See the
passage about ensemble modeling.

>>> What do you mean by "slightly different"?
>>
>> Exactly what it says. You won't get the same result when you
>>run a numerical model twice using slightly different input data. You
>>should expect the results to be similar, or you have a problem with
>>your model.
>
> Some physical systems have a sensitive dependence on
>initial conditions. In which case dissimilar results reflect
>reality and are not a problem with the model.

The problem may lie with my use of the word "similar." In this
context it does not mean exact, but by the same token it does not mean
3 orders of magnitude different. You are modeling reality here. Model
output has to look like reality or you have a problem. Dissimilar
results, as you term them, are indeed expected, but if a model, for
example, shows a uniform increase in temperature world-wide, the
results are questionable since we *know* that global temperatures do
not behave this way. If the model violates the hydrostatic
approximation or conservation of mass or radiative transfer equations
you have a problem with your model.

>>>
>>> It is inappropriate to expect complicated models of complex,
>>>poorly understood processes like climate to be more predictive than
>>>simple models. They just give a false sense of security. Policy
>>>makers should use simple models for guidance and accept that there is
>>>a lot of uncertainty.
>>
>> It is inappropriate to "expect" sophisticated numerical models
>>to be better than simple ones? How do you figure? Are you suggesting
>>that a simple baroclinic model with 7 vertical layers on a 400 km grid
>>as they ran 25 years ago will produce accurate weather forecasts of
>>the kind one gets today with 16 km model with 35 levels in the
>>vertical run on a global domain? Hardly the case.
>> If you read the IPCC reports you will see the uncertainties
>>clearly spelled out. They are using the model output as guidance,
>>exactly the way they should do it.
>
> I am suggesting that if you have no good way to validate a
>model you should keep it simple.

Validating model output is problematic, I'll grant you,
especially since you really won't know how good your model is for
decades. That neither mandates nor necessitates looking at the
atmosphere/ocean system in an overly simplistic fashion. Model output
is guidance. Nothing more. It should be used for planning purposes
or for identifying problems. It should not be used as prophecy because
it is not.

--
Dave.

Robert Grumbine

Nov 21, 2000, 3:00:00 AM
In article <20001120....@yktvmv.WATSON.IBM.COM>,
<j...@watson.ibm.com> wrote:

> Because when complicated models are tested against simple
>models, simple models usually win.

On the other hand, one of the cases simple models do _not_ win is
in weather prediction. That being the reason that numerical weather
prediction modellers keep whining for more computer power, and keep
producing better forecast guidance when they get it.

Weather and climate are somewhat different problems, but it should
give one pause that the 'more complex = better' equation holds so
well for the related (to climate modelling) problem.

> Consider the following simple model. Suppose the average
>temperature of the earth is determined by CO2 forcing (assumed to
>be proportional to the log of concentration) plus random noise.
>Throw in a heat capacity term to give a lagged response. Find the
>best fit for the last 150 years. Now we have a model that will
>predict the earth's temperature in 2100 given any CO2 levels you
>care to assume. I see no particular reason to expect some
>complicated 3d model to give better predictions.

How about:

'Irreversible' feedbacks which alter 'boundary' conditions, as in --

changes to the thermohaline circulation
changes to the sea ice pack (as in, the seasonal disappearance of
the arctic ice pack)
changes to large scale albedo (desertification, deforestation,
urban sprawl, cloud albedo modification by aerosols)
changes to aerosol loadings in the atmosphere
changes in the solar 'constant'

Interest in other parameters than global average temperature, such as:

Precipitation
Evaporation
Growing seasons
Ocean wave climate (for shipping)
Storm severity
Heat wave severity
Cold wave severity
Sea level
...

Above I labelled some elements as 'irreversible'. Not that they
cannot be changed back, but that they are variables which either
aren't fed back on by global average temperature (such as solar constant)
or for which the behavior is not 'increase/decrease a little if
something else increases a little, then do the opposite if the
something else decreases a little'. Shutting down the thermohaline
circulation can be quite rapid, but restarting it is not equally
rapid.

Side note: Part of my sentiment regarding simple models is that I've
played around with some for global average temperature. Nothing published,
and even though simple, more involved than the above damped force-restore
system. Still nothing satisfactory enough to publish.

--
Robert Grumbine http://www.radix.net/~bobg/ Science faqs and amateur activities notes and links.
Sagredo (Galileo Galilei) "You present these recondite matters with too much
evidence and ease; this great facility makes them less appreciated than they
would be had they been presented in a more abstruse manner." Two New Sciences

Robert Grumbine

Nov 21, 2000, 3:00:00 AM
In article <8vdp3g$af4$1...@nnrp1.deja.com>, wmconnolley <w...@bas.ac.uk> wrote:

>j...@watson.ibm.com wrote:
>
>> This is wrong. If reality has a sensitive dependence on
>> initial conditions so should your model
>
>It's a basic assumption that the climate - unlike the weather - is not
>sensitive to the initial conditions. Whether or not this is true might
>be an interesting topic for discussion.

Although it is time to revisit the question, it has been considered.

In the 1960's and 1970's, with atmosphere-only GCM's, the question of
how long to let the model 'spin up' was a serious point. The problem
being that folks didn't want their conclusions to depend on how they
had initialized the model. This was undesirable, in part, because
the initial conditions used weren't particularly accurate. It was
considered desirable to be able to spin up from rough conditions
on the grounds that if the model were good, it was supposed to _generate_
a correct climate. Outcome was that after a few months (90 days comes
to mind) it seemed that the statistics became independent of initial
conditions.

That was for atmosphere-only models. I can't, offhand, think of
examples of more modern editions of climate models having run such
tests. At least not specifically to address this question; some of
the answer is implicit in some of the runs I have seen.

wmconnolley

Nov 21, 2000, 3:00:00 AM
bo...@Radix.Net (Robert Grumbine) wrote:

> <w...@bas.ac.uk> wrote:
> >It's a basic assumption that the climate - unlike the weather - is not
> >sensitive to the initial conditions. Whether or not this is true
> might be an interesting topic for discussion.

> In the 1960's and 1970's, with atmosphere-only GCM's, the question of
> how long to let the model 'spin up' was a serious point...
> ... Outcome was that after a few months (90 days comes
> to mind

Yes (though to put a fly in the ointment, there are studies (eg
James_IN sometime) showing that even atmos-only models have a red
power spectrum. Perhaps that's better ignored...).

But indeed I meant the real climate, ie as modelled by coupled models.

It used to be true that coupled models were "spun up" to some
nearly-balanced state between ocean and atmos, often I think for
something totalling hundreds of virtual years, and sometimes using
pseudo-timestepping in the ocean to speed things up. But by some
mysterious alchemy that I don't understand, this is apparently not
necessary for non-flux-corrected coupled models.

But anyway: all that tells you is what you already know: a well-
balanced coupled model will run for a thousand years generating
small "natural" variations around its base state. One could perhaps
argue that the stability of the global climate over the last few
thousand years is evidence that the real world does the same.

And that still leaves the response to anthropogenic forcing.

Leonard Evens

Nov 21, 2000, 3:00:00 AM

Okay. Why don't you do exactly that and submit the results for
publication. Then we can rely on the normal processes of science
to determine if what you say has any validity.

Most of your arguments come down in the end to arguing by
personal incredulity.

> James B. Shearer

Phil Hays

Nov 21, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:

> How would the two models differ?

The CO2 to temperature parameters would be different, for one. The change in
temperature over the past 150 years has been very roughly 0.6C or so, with a CO2
change of about 280ppm to 360ppm. The change in temperature over the past
20,000 years has been very roughly 6C, with a CO2 change of about 190ppm to
360ppm.


> Obviously there is some uncertainty so there is no way to
> say some model is best. However if two plausible simple models give
> significantly different results then it is probably possible to
> explain why the difference arises and perhaps what you should look
> at to try to reduce the amount of uncertainty.

This leads us to more complex models.


> How is the situation better with complicated models? You
> will still have significantly different predictions and now it will
> be much harder to understand the source of the differences. How do
> you choose between complicated models?

More complex models make multiple predictions that can be tested against distant
past, current and near future climate. More work, but a much more valuable
result.


--
Phil Hays

j...@watson.ibm.com

Nov 21, 2000, 11:25:44 PM
In article <3A1A0F33...@sprynet.com>,
on Mon, 20 Nov 2000 21:59:15 -0800,
Phil Hays <spampos...@sprynet.com> writes:

>j...@watson.ibm.com wrote:
>
>> Consider the following simple model. Suppose the average
>> temperature of the earth is determined by CO2 forcing (assumed to
>> be proportional to the log of concentration) plus random noise.
>> Throw in a heat capacity term to give a lagged response. Find the
>> best fit for the last 150 years.
>
>The problem with simple models is that it is fairly easy to generate them, and
>fairly hard to determine which simple model gives better predictions. A simple
>model I might suggest would be exactly the same as yours, but using the last
>20,000 years to set the CO2-to-temperature relationship, and the last 1000
>years to set the climate noise parameter. I'm not sure how you determine the "heat
>capacity term".
>
>This simple model will give rather different predictions for double CO2 in 2100
>than your simple model. Which simple model is better? The reasons I might give
>for my simple model are that the past 20,000 years has the best recorded
>relationship between CO2 levels and temperature; and the past 1000 years has the
>best record of year to year climate "noise", in my opinion, of course. I'm sure
>that you can come up with similar opinions as to why your model might be better,
>yet how, other than opinion, could we choose between simple models?

How would the two models differ?

Obviously there is some uncertainty so there is no way to
say some model is best. However if two plausible simple models give
significantly different results then it is probably possible to
explain why the difference arises and perhaps what you should look
at to try to reduce the amount of uncertainty.

How is the situation better with complicated models? You
will still have significantly different predictions and now it will
be much harder to understand the source of the differences. How do
you choose between complicated models?

James B. Shearer

j...@watson.ibm.com

Nov 21, 2000, 11:37:58 PM
In article <8vdp3g$af4$1...@nnrp1.deja.com>,
on Tue, 21 Nov 2000 12:16:53 GMT,
wmconnolley <w...@bas.ac.uk> writes:

<snip>

>> This is wrong. If reality has a sensitive dependence on
>> initial conditions so should your model
>

>It's a basic assumption that the climate - unlike the weather - is not
>sensitive to the initial conditions. Whether or not this is true might
>be an interesting topic for discussion.

I don't see any particular reason to believe this is true.
Over the last million years the earth has been flipping in and out
of ice ages. It seems plausible that there are intermediate states
that can go either way with small changes in the initial conditions.
Also if weather is chaotic, then weather averaged over a
finite time period will also be chaotic although with reduced
variance.
Finally even if the long term averages are independent of
the initial conditions the averages over say the first 100 years
could still be sensitive to the initial conditions.
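
The middle claim is easy to illustrate numerically - here with the
Lorenz-63 x-variable standing in for "weather", an illustrative choice
only:

    # Variance of block means of a chaotic series shrinks as the
    # averaging window grows, but does not vanish.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s):
        x, y, z = s
        return [10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z]

    t = np.linspace(0.0, 500.0, 50_001)
    x = solve_ivp(lorenz, (0.0, 500.0), [1.0, 1.0, 1.0], t_eval=t).y[0][5_000:]

    for window in (1, 10, 100, 1000):
        n = x.size // window * window
        block_means = x[:n].reshape(-1, window).mean(axis=1)
        print(f"window {window:5d}: variance of block means = {block_means.var():.3f}")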

>> I am suggesting that if you have no good way to validate a
>> model you should keep it simple.
>

>The argument is valid but the premise (no good way to validate)
>probably false. The only way to generate "natural" variability in
>a physically plausible way is probably through a GCM. Aspects of this
>(MSLP, say) can be tested. Others (deep ocean circulation, say) can't.

If you can't test the whole model you can't trust it.
James B. Shearer

j...@watson.ibm.com

Nov 21, 2000, 11:54:33 PM
In article <3a1a625...@news.escape.ca>,
on Tue, 21 Nov 2000 12:44:44 GMT,
wra...@mb.sympatico.ca (David Ball) writes:

<snip>

> The climate for the past 50 years is not a known quantity?
>You're going to have to help me out on this. I can go back and show
>you what the weather was doing at a particular date and time at most
>places on the planet 40 years ago. Exactly how is the climate not
>known?

Climate is a distribution. Taking a sample from a
distribution gives you an estimate of the distribution but not the
exact distribution. The problem is worse if the distribution is
changing.
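
A toy illustration of the point, with invented rainfall numbers: five
different 50-year samples drawn from the same underlying climate give
five different "observed" climates:

    # Sampling variability: 50-year sample means scatter around the
    # true mean of the (assumed) rainfall distribution.
    import numpy as np

    rng = np.random.default_rng(7)
    true_mean, true_sd = 1000.0, 150.0    # assumed mm/year climate
    samples = [rng.normal(true_mean, true_sd, 50).mean() for _ in range(5)]
    print("five 'observed 50-year climates':", [round(m) for m in samples])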

> Climate is not even remotely like flipping a coin. You're
>looking at the problem as if it was related to curve-fitting or the
>lack of generalization one gets sometimes with a neural network. The
>whole process of data assimilation is designed to deal with the issue
>of getting very different types into the model without having to
>contend with noise, or as little of it as possible.

Climate in some ways is a lot like flipping a coin. For
example consider yearly rainfall amounts at some location. You
may have wet years and dry years more or less at random.

<snip>

> It is not wrong and again, climate is not a coin toss. If the
>results coming from a model differ by 3 orders of magnitude, I have a
>problem with my model. If I know what happened, run my model and
>everything works correctly except that things happen faster or slower
>than expected, I have a problem with my model. Of course, you should
>not expect exact agreement, but the numbers have to be realistic both
>in time and space. In a climate context, if the global pattern of
>cooling that took place from the 1950's to the early 1970's is not
>captured or appears in the 1980's you have a problem with your model.

You are asserting this pattern would be preserved with
small changes in the initial conditions and is not random noise.
How do you know that? At what level of detail do the patterns
in what actually happened become dominated by random noise? How
do you know?

>> Once again the climate for the last 50 years is not
>>known. Furthermore integrating the model forward once is of little
>>value if there is a sensitivity to initial conditions.
>
> Once again, you're going to have to help me here. Did the
>tri-state tornado of 1925 not occur? The Galveston hurricane? Are you
>suggesting we don't know what the global temperature pattern from 1960
>to 1990 looked like? What the global precipitation pattern was in
>1994? Climate is nothing more than past weather. We know what has
>happened.

If we went back in time and made a slight perturbation in
1850 the tri-state tornado and Galveston hurricane would not have
occurred. These events are random noise. Fitting a model to
random noise is a very bad idea.

<snip>

>> Some physical systems have a sensitive dependence on
>>initial conditions. In which case dissimilar results reflect
>>reality and are not a problem with the model.
>
> The problem may lie with my use of the word "similar." In this
>context it does not mean exact, but by the same token it does not mean
>3 orders of magnitude different. You are modeling reality here. Model
>output has to look like reality or you have a problem. Dis-similar
>results, as you term them, are indeed expected, but if a model, for
>example, shows a uniform increase in temperature world-wide, the
>results are questionable since we *know* that global temperatures do
>not behave this way. If the model violates the hydrostatic
>approximation or conservation of mass or radiative transfer equations
>you have a problem with your model.

We aren't talking about 3 orders of magnitude difference,
we are talking about say two models which have a 1 degree difference
in global temperature in 2100.
James B. Shearer

Robert Grumbine

Nov 22, 2000, 3:00:00 AM
In article <8venk8$630$1...@nnrp1.deja.com>, wmconnolley <w...@bas.ac.uk> wrote:

>Yes (though to put a fly in the ointment, there are studies (eg
>James_IN sometime) showing that even atmos-only models have a red
>power spectrum. Perhaps thats better ignored...).

Pay no attention to the dust under the rug ...

>But anyway: all that tells you is what you already know: a well-
>balanced coupled model will run for a thousand years generating
>small "natural" variations around its base state. One could perhaps
>argue that the stability of the global climate over the last few
>thousand years is evidence that the real world does the same.

Unfortunately, I think it is more that we modellers have a great
aversion to models that behave chaotically. As you know from
the ice core records, the last few thousand years seem anomalously
stable. The models are doing ok at 'capturing' that. They don't,
however, do nearly as well at representing the rapid climate changes
of the 100+ ky before this 'stable' period.

How's that for a scary thought: The models are _too_ smooth and
stable. The climate changes as modelled are almost certainly too
optimistically stable. This was, I believe, one of Michael Tobis'
points.

>And that still leaves the response to anthropogenic forcing.

A forcing which is multivariate (greenhouse gases aren't the
only variety), time dependant, not well-constrained by the climate
itself, ...

wmconnolley

Nov 22, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:
> wmconnolley <w...@bas.ac.uk> writes:

> >It's a basic assumption that the climate - unlike the weather - is not
> >sensitive to the initial conditions. Whether or not this is true
> >might be an interesting topic for discussion.
>
> I don't see any particular reason to believe this is true.
> Over the last million years the earth has been flipping in and out
> of ice ages. It seems plausible that there are intermediate states
> that can go either way with small changes in the initial conditions.

This isn't a good example. If current theories are correct, ice ages
are strongly linked to orbital forcing, and thus the ice-age cycle is
determined by external forcing and is *not* dependent on initial
conditions.

If you were thinking of rapid climate changes (RMG mentioned them
elsewhere) then I think that's not a good example either: they are
(as I understand it) only applicable in states that start with a
large icesheet over N America. So, they don't apply to the sensitivity
of the *current* climate. There is also a suggestion that they show
at least quasi-periodicity.

> Also if weather is chaotic, then weather averaged over a
> finite time period will also be chaotic although with reduced
> variance.

This might be true (I'm not sure) in some formal way.
However, in practice the reduced-by-averaging
variance may become so small that it stops being of interest.

> Finally even if the long term averages are independent of
> the initial conditions the averages over say the first 100 years
> could still be sensitive to the initial conditions.

Yes, but you've plucked "100 years" out of the air. Perhaps it
settles down in 10 (or 20 (or...)) years.

In GCMs, climate change is mostly predictable (in the sense that
ensemble runs tend to show much the same results). This results
quite naturally from the sort of processes included. About the only
"drastic" changes you can get are shutting off thermohaline circulation,
and even that tends to be gradual, I think, rather than the
"catastrophe" type shut-downs that ?occur? in simpler models.

Weather is chaotic - but within limited bounds. There are energy
constraints. The recurrence times of depressions, cyclones, etc are
chaotic. But the distributions of temperatures and rainfalls brought
by this weather aren't.

> If you can't test the whole model you can't trust it.

If you believe that, you can certainly reject AOGCMs. We will never
know the exact details of, say, the abyssal circulation to the level
of detail provided by GCMs. I think your test is too stringent.

Phil Hays

Nov 22, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:

> Ok, that's different (although the .6 is understated because
> of the lag term). I would take my model because I think ice age
> transitions are not driven by CO2.

What is the lag term? How much is the 0.6C understated because of the "lag
term"? What about the "noise term"? How much might the 0.6C be under- or
overestimated because of the noise term? What about measurement error? How
much is the 0.6C under- or overestimated because of measurement error? Perhaps a
longer term and a larger CO2 level change might help.

What needs to be added to your model to handle ice age transitions? Even if ice
age transitions are not "driven" by CO2, the near doubling of CO2 is surely a
significant part of the process, yes?


> Yes as you gain in understanding you can intelligently add
> complexity to a model. However there is a long distance between
> the kind of simple model I gave and a very complicated coupled
> ocean atmosphere model.

The complexity of model that proves useful is probably a function of how well
the problem is understood.


> One problem is you may be adding degrees of freedom faster
> than testable predictions.

And maybe not.


--
Phil Hays

j...@watson.ibm.com

Nov 22, 2000, 9:20:23 PM
In article <8ve0lj$euq$1...@saltmine.radix.net>,
on 21 Nov 2000 09:25:55 -0500,

bo...@Radix.Net (Robert Grumbine) writes:
>In article <20001120....@yktvmv.WATSON.IBM.COM>,
> <j...@watson.ibm.com> wrote:
>
>> Because when complicated models are tested against simple
>>models, simple models usually win.
>
> On the other hand, one of the cases simple models do _not_ win is
>in weather prediction. That being the reason that numerical weather
>prediction modellers keep whining for more computer power, and keep
>producing better forecast guidance when they get it.
>
> Weather and climate are somewhat different problems, but it should
>give one pause that the 'more complex = better' equation holds so
>well for the related (to climate modelling) problem.

When I refer to more complex models I am not referring
to solving the same model with a finer mesh or smaller timestep.
I imagine when meteorologists add interactions to their
weather models it often takes them several interactions to get it
right (ie to improve predictions). Climate modelers have to get
it right the first time which is a lot harder.

>> Consider the following simple model. Suppose the average
>>temperature of the earth is determined by CO2 forcing (assumed to
>>be proportional to the log of concentration) plus random noise.
>>Throw in a heat capacity term to give a lagged response. Find the
>>best fit for the last 150 years. Now we have a model that will
>>predict the earth's temperature in 2100 given any CO2 levels you
>>care to assume. I see no particular reason to expect some
>>complicated 3d model to give better predictions.
>

> How about:
>
> 'Irreversible' feedbacks which alter 'boundary' conditions, as in --
>
>changes to the thermohaline circulation
>changes to the sea ice pack (as in, the seasonal disappearance of
> the arctic ice pack)
>changes to large scale albedo (desertification, deforestation,
> urban sprawl, cloud albedo modification by aerosols)
>changes to aerosol loadings in the atmosphere
>changes in the solar 'constant'

Obviously simple models leave out a lot of stuff. However
throwing in every interaction you can think of is not necessarily an
improvement. It can and often does reduce the predictive ability
of the model.

> Interest in other parameters than global average temperature, as:
>
>Precipitation
>Evaporation
>Growing seasons
>Ocean wave climate (for shipping)
>Storm severity
>Heat wave severity
>Cold wave severity
>Sea level

The model was for temperature only. However it would be
easy enough to devise simple models for most of the above. For
instance the model had a heat capacity term which you could take
to be from heating the top of the ocean and use that to figure
the thermal expansion.
James B. Shearer
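
The simple model quoted above is concrete enough to sketch in full. A minimal,
illustrative implementation - equilibrium warming proportional to the log of
CO2, approached with a first-order lag, plus noise - in which the CO2 curve,
the "observed" temperatures, and every parameter value are invented
placeholders rather than the real record:

```python
import numpy as np

years = np.arange(1850, 2001)
co2 = 285.0 * np.exp(0.0003 * (years - 1850) ** 1.35)   # toy CO2 curve (ppm)

def run_model(sensitivity, tau, co2, t0=0.0):
    """Integrate dT/dt = (S*log2(C/C0) - T)/tau with yearly steps."""
    temps = np.empty_like(co2)
    temp = t0
    for i, c in enumerate(co2):
        t_eq = sensitivity * np.log2(c / co2[0])   # equilibrium response
        temp += (t_eq - temp) / tau                # lagged approach to it
        temps[i] = temp
    return temps

# Synthetic "observations": the model's own output plus noise.
rng = np.random.default_rng(0)
observed = run_model(2.5, 30.0, co2) + rng.normal(0.0, 0.1, years.size)

# Crude grid-search fit of the two free parameters.
best = min(((s, tau) for s in np.linspace(0.5, 5.0, 46)
            for tau in np.linspace(5.0, 60.0, 56)),
           key=lambda p: np.sum((run_model(p[0], p[1], co2) - observed) ** 2))
print("fitted sensitivity (C per doubling) and lag (years):", best)
```

Fitting the same two parameters to the real 150-year record, then extrapolating
under an assumed CO2 path, is the whole procedure the post describes; the grid
search here merely recovers the values the synthetic data was built with.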

j...@watson.ibm.com

Nov 22, 2000, 9:39:40 PM
In article <3A1B0B96...@math.nwu.edu>,
on Tue, 21 Nov 2000 17:56:06 -0600,

Leonard Evens <l...@math.nwu.edu> writes:
>j...@watson.ibm.com wrote:

<snip>

>> Consider the following simple model. Suppose the average
>> temperature of the earth is determined by CO2 forcing (assumed to
>> be proportional to the log of concentration) plus random noise.
>> Throw in a heat capacity term to give a lagged response. Find the
>> best fit for the last 150 years. Now we have a model that will
>> predict the earth's temperature in 2100 given any CO2 levels you
>> care to assume. I see no particular reason to expect some
>> complicated 3d model to give better predictions.
>

>Okay. Why don't you do exactly that and submit the results for
>publication. Then we can rely on the normal processes of science
>to determine if what you say has any validity.

That was intended more as an example of a simple model. I
imagine serious climatologists have already come up with and
published similar simple models. So there is no reason for me to
reinvent the wheel.

>Most of your arguments come down in the end to arguing by
>personal incredulity.

I think my arguments come down to requiring evidence rather
than blind faith.
James B. Shearer

j...@watson.ibm.com

Nov 22, 2000, 9:51:05 PM
In article <3A1B6D55...@sprynet.com>,
on Tue, 21 Nov 2000 22:53:09 -0800,

Phil Hays <spampos...@sprynet.com> writes:
>j...@watson.ibm.com wrote:
>
>> How would the two models differ?
>
>The CO2 to temperature parameters would be different, for one. The change in
>temperature over the past 150 years has been very roughly 0.6C or so, with a CO2
>change of about 280ppm to 360ppm. The change in temperature over the past
>20,000 years has been very roughly 6C, with a CO2 change of about 190ppm to
>360ppm.

Ok, that's different (although the .6 is understated because
of the lag term). I would take my model because I think ice age
transitions are not driven by CO2.

>> Obviously there is some uncertainty so there is no way to
>> say some model is best. However if two plausible simple models give
>> significantly different results then it is probably possible to
>> explain why the difference arises and perhaps what you should look
>> at to try to reduce the amount of uncertainty.
>

>This leads us to more complex models.

Yes, as you gain in understanding you can intelligently add
complexity to a model. However there is a long distance between
the kind of simple model I gave and a very complicated coupled
ocean atmosphere model.

>> How is the situation better with complicated models? You
>> will still have significantly different predictions and now it will
>> be much harder to understand the source of the differences. How do
>> you choose between complicated models?
>

>More complex models make multiple predictions that can be tested against distant
>past, current and near future climate. More work, but a much more valuable
>result.

One problem is you may be adding degrees of freedom faster
than testable predictions.
James B. Shearer

Josh Halpern

Nov 22, 2000, 11:33:09 PM

j...@watson.ibm.com wrote:

> In article <8ve0lj$euq$1...@saltmine.radix.net>,
> on 21 Nov 2000 09:25:55 -0500,
> bo...@Radix.Net (Robert Grumbine) writes:

SNIP....

> Obviously simple models leave out a lot of stuff. However
> throwing in every interaction you can think of is not necessarily an
> improvement. It can and often does reduce the predictive ability
> of the model.

This appears about equivalent to saying that wrong physics gives right
answers; the problem is which physics do you omit, and how
do you justify it? Clearly three-dimensional models have given
a much better representation of the actual climate than the simpler
one-dimensional ones.

> > Interest in other parameters than global average temperature, as:
> >
> >Precipitation
> >Evaporation
> >Growing seasons
> >Ocean wave climate (for shipping)
> >Storm severity
> >Heat wave severity
> >Cold wave severity
> >Sea level
>
> The model was for temperature only.

And clearly wrong, since all the above have nasty habits of affecting
each other. Your approach is only justifiable if you can show that the
various variables are not coupled or only weakly coupled or vary on
very different time scales. Right answers for wrong physics in models,
simple or complex, are the worst possible case, because then you think
you know what is happening. Much worse than decent answers with
all known physics included. Models are validated against observation,
but are useful for teaching us mechanisms as well as for prediction.

> However it would be
> easy enough to devise simple models for most of the above. For
> instance the model had a heat capacity term which you could take
> to be from heating the top of the ocean and use that to figure
> the thermal expansion.

They might not be very useful though. It's pretty easy to find a
function that matches a single time series. It's a lot harder to have
confidence in that function having any predictive power if it leaves
a lot of things out that you know are basic. It is a criticism of any
argument which derives causality from correlation. It is also,
at root, the problem that Friis-Christensen et al ran into. Their proposal
matched one series against another, and based on the correlation they
proposed a simple model to explain variation in global cloud cover,
i.e. the kind of simple 1-1 model you are championing. In that case,
it turned out that not only was the model too simple, but the data
was also too simple, and that if you looked at the three-dimensional
distribution of clouds the argument (simple model) fell apart.

An interesting question, to me at least, starts with the observation that
most physicists prefer the simplest possible model. This approach
has worked well over the years, and I think is the basis of your
prejudice. The question I would put to Bob Grumbine and W.M.
Connolley is whether climate is a case where this approach does
not work.

josh halpern

David Ball

Nov 23, 2000, 3:00:00 AM
On Wed, 22 Nov 2000 04:54:33 GMT, j...@watson.ibm.com wrote:

[..]


>
>> Climate is not even remotely like flipping a coin. You're
>>looking at the problem as if it was related to curve-fitting or the
>>lack of generalization one gets sometimes with a neural network. The
>>whole process of data assimilation is designed to deal with the issue
>>of getting very different types into the model without having to
>>contend with noise, or as little of it as possible.
>
> Climate in some ways is a lot like flipping a coin. For
>example consider yearly rainfall amounts at some location. You
>may have wet years and dry years more or less at random.
>

Weather and climate are based on physical processes. Things
never happen randomly. They happen for reasons. They may appear to be
random because we don't understand the processes involved, but that is
appearance only.

>
>> It is not wrong and again, climate is not a coin toss. If the
>>results coming from a model differ by 3 orders of magnitude, I have a
>>problem with my model. If I know what happened, run my model and
>>everything works correctly except that things happen faster or slower
>>than expected, I have a problem with my model. Of course, you should
>>not expect exact agreement, but the numbers have to be realistic both
>>in time and space. In a climate context, if the global pattern of
>>cooling that took place from the 1950's to the early 1970's is not
>>captured or appears in the 1980's you have a problem with your model.
>
> You are asserting this pattern would be preserved with
>small changes in the initial conditions and is not random noise.
>How do you know that? At what level of detail do the patterns
>in what actually happened become dominated by random noise? How
>do you know?

Again, the physical processes involved in weather and climate
are never random.

>
>>> Once again the climate for the last 50 years is not
>>>known. Furthermore integrating the model forward once is of little
>>>value if there is a sensitivity to initial conditions.
>>

[..]


>>
>> The problem may lie with my use of the word "similar." In this
>>context it does not mean exact, but by the same token it does not mean
>>3 orders of magnitude different. You are modeling reality here. Model
>>output has to look like reality or you have a problem. Dis-similar
>>results, as you term them, are indeed expected, but if a model, for
>>example, shows a uniform increase in temperature world-wide, the
>>results are questionable since we *know* that global temperatures do
>>not behave this way. If the model violates the hydrostatic
>>approximation or conservation of mass or radiative transfer equations
>>you have a problem with your model.
>
> We aren't talking about 3 orders of magnitude difference,
>we are talking about say two models which have a 1 degree difference
>in global temperature in 2100.

And if the various and sundry numerical models out there, all
with different model physics, data assimilations, resolutions,
parameterizations, ... come up with answers that are within 1 degree
of each other, your problem is what?

--
Dave.

Robert Flory

Nov 23, 2000, 3:00:00 AM
Could be I once knew Dr. Doctor. ;-)

"Leonard Evens" <l...@math.nwu.edu> wrote in message
news:3A1803CB...@math.nwu.edu...
> David Ball wrote:
> >
> > I decided to start a new thread here largely because the two
> > Mr. Drake has initiated in recent days on climate models have been
> > getting a little congested with several different conversations going
> > on under the same banner.
> > Mr. Drake (I refuse to apply the Dr. as he has refused all
> > requests to tell us exactly what his doctorate is in and it really
> > isn't germaine to the discussion)
>
> I've noticed that he often signs his messages as DDD. Is it possible
> that "Dr." is his first name? I stand ready to be corrected about this
> entirely unsupported conjecture. :-)

j...@watson.ibm.com

Nov 24, 2000, 12:13:45 AM
In article <8vhkm3$dtk$1...@nnrp1.deja.com>,
on Wed, 22 Nov 2000 23:25:57 GMT,

wmconnolley <w...@bas.ac.uk> writes:
>j...@watson.ibm.com wrote:
>> wmconnolley <w...@bas.ac.uk> writes:
>
>> >It's a basic assumption that the climate - unlike the weather - is not
>> >sensitive to the initial conditions. Whether or not this is true
>> >might be an interesting topic for discussion.
>>
>> I don't see any particular reason to believe this is true.
>> Over the last million years the earth has been flipping in and out
>> of ice ages. It seems plausible that there are intermediate states
>> that can go either way with small changes in the initial conditions.
>
>This isn't a good example. If current theories are correct, ice ages
>are strongly linked to orbital forcing, and thus the ice age cycle is
>determined by external forcing and is *not* dependent on initial
>conditions.

Is it really the case that ice age transitions are believed
to be completely determined by forcing? I thought the relation was
more probabilistic.
Anyway, suppose they are completely determined by forcing.
Because of the strong positive ice albedo feedback there will be
orbital parameters for which both ice age conditions and non ice
conditions are stable (or at least quasi stable). Suppose for
example orbital forcing varies between 0 and 1 where 0 is most
favorable to ice ages and 1 is least favorable. Then suppose if
you are in an ice age you will stay in an ice age unless the forcing
rises above .75, whereas if you start not in an ice age you will
stay out of an ice age unless the forcing falls below .25. So if
the forcing is between .25 and .75 you have two stable climates.
So here you have a dependence on initial conditions. How "bad" this
dependence is, is determined by how complex the boundary is between
initial conditions that converge to an ice age situation and initial
conditions which converge to a non ice age situation. Now suppose
the forcing is .5. Then there should be some climate between ice age
and non ice age, which is in a sort of unstable equilibrium, and
which could tip either way. I expect for this climate the chaotic
nature of weather will mean the actual boundary between initial
conditions which tip to ice age and initial conditions which tip to
non ice age would be very complex which is characteristic of chaotic
systems (strange attractors). So for this part of the phase space
you would have a chaotic aspect of climate.
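
The thought experiment above is simple enough to put directly in code. A toy
sketch, with the two thresholds taken from the post and everything else
invented:

```python
def step(state, forcing):
    """0 = ice age, 1 = no ice age; thresholds 0.75/0.25 as in the post."""
    if state == 0 and forcing > 0.75:   # warm enough to leave an ice age
        return 1
    if state == 1 and forcing < 0.25:   # cold enough to enter one
        return 0
    return state                        # in between, both states persist

for initial in (0, 1):
    state = initial
    for _ in range(100):
        state = step(state, 0.5)        # forcing parked between thresholds
    print(f"forcing 0.5, started at {initial} -> ended at {state}")
```

With the forcing held at 0.5, the final state is whatever the initial state
was, which is the dependence on initial conditions the post describes; the
unstable boundary case between the two basins is where the chaotic behavior
would live.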

<snip>

>Weather is chaotic - but within limited bounds. There are energy
>constraints. The recurrence times of depressions, cyclones, etc. are
>chaotic. But the distributions of temperature and rainfall brought
>by this weather aren't.

Well, there is the question of how long a period of time you
have to average over to obtain these distributions (or to put it
another way, how long you have to wait for the system to "forget" its
initial condition). For example if ice ages were not forced but
internally generated then your distributions of temperature etc. would
be some average of ice age and non ice age conditions. But you could also
view this as the climate flipping between two distributions (ice age
and non ice age) and this flipping could be chaotic.
James B. Shearer

Robert Grumbine

Nov 24, 2000, 3:00:00 AM
In article <20001122....@yktvmv.WATSON.IBM.COM>,

<j...@watson.ibm.com> wrote:
>In article <8ve0lj$euq$1...@saltmine.radix.net>,
> on 21 Nov 2000 09:25:55 -0500,
> bo...@Radix.Net (Robert Grumbine) writes:
>>In article <20001120....@yktvmv.WATSON.IBM.COM>,
>> <j...@watson.ibm.com> wrote:
>>
>> Weather and climate are somewhat different problems, but it should
>>give one pause that the 'more complex = better' equation holds so
>>well for the related (to climate modelling) problem.
>
> When I refer to more complex models I am not referring
>to solving the same model with a finer mesh or smaller timestep.

You should. There's a lot more to changing resolution than
simply recompiling with different deltas.

> I imagine when meteorologists add interactions to their
>weather models it often takes them several interactions to get it

^^^^ you mean iterations
here, I'm assuming.

>right (ie to improve predictions).

Sometimes yes, sometimes no. I did one that was an immediate
improvement.

>Climate modelers have to get
>it right the first time which is a lot harder.

'first time'?

It seems you've got a misdirected notion about how weather and
climate modellers make improvements to the models.

We work on physics, not curve fitting. If we were doing curve
fitting, tossing a new feedback into the soup and then tweaking
each of the N parameters we have in that feedback and the entire
rest of the model, then we'd encounter a lot of the problem you're
talking about regarding needing several iterations.

As we're working on physics, we have the vastly simpler question
of putting in a feedback element (let's say ground hydrology) only
_after_ that element has been studied as an independent model of
ground hydrology. The parameters are tested for modelling ground
hydrology, and then model and parameters (along with their reasonable
uncertainties) are passed over to the full model. The full model
doesn't get to make unbounded tweaks to the rest of the suite of
parameters. Rather, only those few that were used to represent
ground hydrology can be/are retuned.

Again, as we're working on physics, we have a number of years
worth of weather/climate observations against which to test already
in hand. It isn't necessary to wait 30 years to decide that a
change you made in your climate model has improved its physics.
We've already got 30 years of observation.

Before that tired old line of 'but you haven't tested the model
under the new climate it's trying to predict' comes out, yet again,
I'll repeat: We're working on physics. Unless the parameters of
the changed climate start falling outside the range we can test,
the fact that climate has changed matters nothing to the model.
That is, the sea ice cover might decrease under a changed climate.
This would mean that instead of having some area undergo air-ice
interactions, it'll be doing air-sea. This is not a problem to
the model. It already includes air-sea physics. All it does
is start doing them in a new area. The albedo of ice gets replaced
by the albedo of ocean. But we know the albedo of ocean, and already
have that in the model. The collapse of the ice pack, for one
example, would be climate change, but it doesn't take the model
physics anywhere unknown.

> The model was for temperature only. However it would be
>easy enough to devise simple models for most of the above. For
>instance the model had a heat capacity term which you could take
>to be from heating the top of the ocean and use that to figure
>the thermal expansion.

Which would have your model missing half or more of the observed
sea level change of the last century (even if the temperature prediction
and your magically tuned slab thickness for the ocean were correct),
and still totally incapable of dealing with important changes like
a collapse (or extension) of the West Antarctic Ice Sheet.
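
The workflow described above can be caricatured in a few lines: a new
component arrives with parameter ranges established by its own offline
validation, and the coupled model may retune it only within those ranges. All
names and numbers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Component:
    params: dict            # current parameter values
    bounds: dict            # allowed ranges, fixed by offline validation

    def retune(self, name, value):
        lo, hi = self.bounds[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside validated [{lo}, {hi}]")
        self.params[name] = value

# A fake "ground hydrology" scheme joining a larger model.
hydrology = Component(params={"runoff_coeff": 0.35},
                      bounds={"runoff_coeff": (0.30, 0.40)})
hydrology.retune("runoff_coeff", 0.38)    # allowed: inside validated range
# hydrology.retune("runoff_coeff", 0.60)  # would raise: unbounded tweaks barred
```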

Robert Grumbine

Nov 24, 2000, 3:00:00 AM
In article <3A1C9EE7...@mail.verizon.net>,

Josh Halpern <vze2...@verizon.net> wrote:
>
>An interesting question, to me at least, starts with the observation that
>most physicists prefer the simplest possible model. This approach
>has worked well over the years, and I think is the basis of your
>prejudice. The question I would put to Bob Grumbine and W.M.
>Connolley, is whether climate is a case where this approach does
>not work,

It's an issue I've been thinking about. To some extent, the
simple models that folks can think of readily have already been
done. Arrhenius already did the sheet of paper climate change for
2xCO2, in the 1890's. But there have been a number of other simple
or simplified climate models over the years, including some
relatively simple ones within the last decade regarding, for
example, the thermohaline catastrophe. In the 80's there were
several simplified ice age models out.

So, to a degree, we do build simple models and use them.
You don't hear much about them in the media because simplicity
is not sexy, I guess. They're also not major industries within
the field because, relatively speaking, it is fairly easy for
one person or group to reasonably explore the simple model.
imho, and at a guess.

Part of the issue driving complexity in climate models is
that the people writing grants (ultimately, the public at
large) live inside the thing you're trying to model. They
don't seem to like models that fail to get detailed about
the area in which they live, or fail to predict variables
that they are personally interested in.

But, back to the science side, there's the fact that we
can't directly observe the thing that we care about. We can
observe weather, and not too badly as a rule. But we don't
have a thermometer that will report the current _climate's_
temperature. Just the weather's. Therein lies the problem.
What _exactly_ is climate? Even if we limit consideration
to 2 m temperature*, this is not a trivial question. Obviously
we've discussed it and there are operationally acceptable
definitions.

One of the most fundamental facts, imho, is that although
we want/need thousands (hundreds of thousands, by preference) of
observations to determine the 2m _weather_ temperatures, it
seems that only a dozen or so are needed to represent the
_climate's_ 2m temperature. This fundamental fact is only
an observation of the last decade or two. It takes some moderately
long observational records, at a lot of points, and some
significant processing power (a level passed in the 70's)
to be able to make the case in a convincing manner that
so few values were sufficient.

Any time, though, you've got a system whose instantaneous
configuration reasonably has thousands of degrees of
freedom but whose climate has only a dozen or so, you've
got something fundamental and important to understand.
So far (at least as far as I know, and I don't have the
best position for this) nobody has described good reasons
for a) why the system collapses to only 12 (as opposed to
3, or 300, at least) degrees of freedom b) why the 12 fields
have the structure they do c) whether the fields will continue
to have this structure in a changed climate (climate change
then merely being a re-weighting of the fields) or that new
structures will appear (then requiring a means of predicting
the new structures and their weights).
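
One standard way to make "degrees of freedom of the climate" concrete is an
EOF (empirical orthogonal function, i.e. principal component) decomposition of
a temperature field. A minimal sketch on synthetic data - a few hidden spatial
patterns plus noise, so the known answer is recoverable - rather than any real
record:

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_grid, n_modes = 600, 2000, 12

# Synthetic field: 12 fixed spatial patterns with time-varying amplitudes,
# buried in gridpoint noise.
patterns = rng.normal(size=(n_modes, n_grid))
amplitudes = rng.normal(size=(n_time, n_modes)) * np.linspace(3.0, 1.0, n_modes)
field = amplitudes @ patterns + 0.5 * rng.normal(size=(n_time, n_grid))

# EOFs are the singular vectors of the anomaly matrix.
anomalies = field - field.mean(axis=0)
_, s, _ = np.linalg.svd(anomalies, full_matrices=False)
var_frac = np.cumsum(s**2) / np.sum(s**2)
print(f"variance captured by the 12 leading EOFs: {100 * var_frac[11]:.1f}%")
```

Whether the real 2 m temperature field collapses to a dozen such modes, and
why, is exactly the open question posed above.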

So part of the answer is, we're just now getting to the
point of having enough observations to be able to think about
getting simple. As I mentioned to Shearer, I'm working on
some notions of simplicity myself. I think that quite a lot
of folks in the field of climate modelling _like_ the idea
of simplicity. (Oh for those simple days of point masses,
rigid bodies, and countable numbers of particles.) But we
still need to discover what the variables that _matter_ really
are. Kind of like, say, pre-Galilean mechanics. Ugly mess
of particulars (size, composition, 'affinity to the earth',
balance of the 4 elements (e.g., 'earthy' things falling faster
than others), speed, position) that finally got trimmed to
the ones that really matter -- mass, position, velocity (rate
of change of position).

* Note: When meteorologists talk of the 'surface' air temperature,
it is typically the temperature measured at 2 meters (6.6 feet)
above the ground that is meant.

wmconnolley

Nov 24, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:
> wmconnolley <w...@bas.ac.uk> writes:
> Is it really the case that ice age transitions are believed
> to be completely determined by forcing? I thought the relation was
> more probabilistic.

No, not completely. In any case the physics is not fully worked out.
But the broad structure is believed to be determined (the evidence
for this being the similarity of orbital periods and the periods
found in cores. Hays et al, Science v194, #4270, p1121,
10/Dec/1976 is good, if you can find it).

> Anyway, suppose they are completely determined by forcing.
> Because of the strong positive ice albedo feedback there will be
> orbital parameters for which both ice age conditions and non ice
> conditions are stable (or at least quasi stable...
> So here you have a dependence on initial conditions.

Sure. That's fine. But as I said before, the question is whether the climate
*now* is stable to initial conditions, and the lack of big NH ice
sheets renders your above example irrelevant (to the current climate).

You could, probably, have orbital parameters such that little ice,
and a full ice age, were both stable. But that would be sensitivity
to initial conditions of a very different sort to sensitivity to
initial weather state.

In other words: weather, we know, is such that even the tiniest
perturbation changes the whole structure after, say, one month. But
the "perturbation" you are talking about here - imposition/removal
of vast ice sheets - is of a totally different order.

> Now suppose
> the forcing is .5. Then there should be some climate between ice age
> and non ice age, which is in a sort of unstable equilibrium, and
> which could tip either way.

This would be true in your thought experiment, but it's not at all clear
that reality works this way. When the world has orbital forcing half
way between ice age and not (assuming that makes any sense; remember
all this is moving) the system has not just a position, but also
(in some sense) a velocity.

And, of course, once again: the current forcing is not of this nature
(I think that's accepted).

> >Weather is chaotic - but within limited bounds. There are energy
> >constraints. The recurrence times of depressions, cyclones, etc. are
> >chaotic. But the distributions of temperature and rainfall brought
> >by this weather aren't.
>
> Well, there is the question of how long a period of time you
> have to average over to obtain these...

Yup. Well, that's related to how often they occur, and how wide their
distribution is. 30 years is probably good enough, as a balance between
a long enough climate average and a shifting base state.

> But you could also
> view this as the climate flipping between two distributions

This is what I'm saying doesn't happen: the climate doesn't seem to do
that, at least with current orbital forcing.

And just to repeat what I said at the start: the forcing of ice ages
is still not fully understood (at least by me...).

charliew

Nov 24, 2000, 3:00:00 AM

Leonard Evens <l...@math.nwu.edu> wrote in message
news:3A1B0B96...@math.nwu.edu...
> j...@watson.ibm.com wrote:

James B. Shearer wrote:

> > So say a model is simple if it can be implemented in less than
> > 100 lines of code, fairly simple for 100-10000 lines of code,
> > and complicated for more than 10000 lines of code.
> > Note software generally has on the order of 1 bug per
> > 100 lines of code.

This is "stretching it". The number of lines of code may or may not
determine the complexity of the model. It is easy to show that one can
write a program which accepts systems of simultaneous non-linear equations
as its input, and works on these equations to arrive at its answer. The
program size remains constant in this situation, but the complexity of the
problem is directly related to the number of equations, their non-linearity,
the interactions between the equations, the constraints that are
encountered, the boundary conditions that must be met, etc.
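
That point can be shown in miniature: a generic Newton solver whose code size
is fixed, while all of the difficulty lives in the system of equations handed
to it. A small illustrative sketch:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0; f and jac carry the entire problem definition."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))
        x -= step
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError("no convergence")

# Two nonlinear equations in two unknowns: x^2 + y^2 = 4 and x*y = 1.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton(f, jac, [2.0, 0.3]))   # the solver itself never grew
```

Feeding it a thousand stiffer, more tangled equations changes the problem's
complexity enormously while the line count of the solver stays put.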

(cut)

> > Consider the following simple model. Suppose the average
> > temperature of the earth is determined by CO2 forcing (assumed to
> > be proportional to the log of concentration) plus random noise.
> > Throw in a heat capacity term to give a lagged response. Find the
> > best fit for the last 150 years. Now we have a model that will
> > predict the earth's temperature in 2100 given any CO2 levels you
> > care to assume. I see no particular reason to expect some
> > complicated 3d model to give better predictions.

You have clearly demonstrated that you don't appreciate the intricacies of
large and complex mathematical models. I suggest you re-read the reply from
Leonard Evens and think about the reasons that he wrote that reply.

>
> Okay. Why don't you do exactly that and submit the results for
> publication. Then we can rely on the normal processes of science
> to determine if what you say has any validity.
>

> Most of your arguments come down in the end to arguing by
> personal incredulity.
>

> > James B. Shearer

Phil Hays

Nov 24, 2000, 3:00:00 AM
j...@watson.ibm.com wrote:

PH> What is the lag term?

> The lag term represents the fact that the temperature does not
> respond immediately to the change in CO2 because of, for example, the
> heat capacity of the oceans. So if the CO2 level stopped rising at
> the current level we could expect the temperature to continue rising
> for a while.

I agree that there are "lag factors". I'm just rather unsure how you are
proposing to handle them as a single "lag term".


> My model is not intended to handle ice age transitions,
> just the next 100 years or so. Suppose ice ages are driven by
> orbital forcing with primarily ice albedo feedback and secondarily
> CO2 feedback. Then for example the 6 degrees might partition as 1
> degree from the forcing, 3 degrees from ice albedo feedback and 2
> degrees from CO2 feedback. Of course you can invent your own
> numbers but it doesn't make sense to assign the entire 6 degrees to
> CO2.

How much of an ice albedo feedback term applies to the next hundred years or
so? Probably less than during an ice age transition, but how do we find a
realistic number? Invented numbers I can do without.

Models, simple or complex, are a tool for understanding how things work. The
problems with simple models are that they depend on constants that are not known
accurately and are not constant, and that a simple model may hide things we are
very interested in.

If the constants for your simple model were known AND reasonably constant, then
a model such as you propose would be very useful. As the constants are both not
known to better than a factor of two, and may not be constant, this sort of
model is mostly useful for informal discussions, with care being taken to point
out the limits of the model.

If the only change expected over the next 100 years was simply an increasing
temperature, distributed in a known pattern, then your simple model might give all
of the answers we wanted. But we care about many other factors, and the
distribution of temperature change is not likely to be a simple pattern. For
example, what is going to happen to the distribution of rainfall? What is the
fate of the Arctic sea ice? How warm will the permafrost regions get? Your
simple model can not provide more than the most general of insights into these
important questions.


--
Phil Hays

Miguel Aguirre

Nov 24, 2000, 5:15:40 PM

Robert Grumbine wrote:

>
>
> One of the most fundamental facts, imho, is that although
> we want/need thousands (hundreds of thousands, by preference) of
> observations to determine the 2m _weather_ temperatures, it
> seems that only a dozen or so are needed to represent the
> _climate's_ 2m temperature. This fundamental fact is only
> an observation of the last decade or two. Takes some moderately
> long observational records, at a lot of points, and some
> significant processing power (a level passed in the 70's)
> to be able to produce the case in a convincing manner that
> this few values were sufficient.
>
> Any time, though, you've got a system whose instantaneous
> configuration reasonably has thousands of degrees of
> freedom but whose climate has only a dozen or so, you've
> got something fundamental and important to understand.
> So far (at least as far as I know, and I don't have the
> best position for this) nobody had described good reasons
> for a) why the system collapses to only 12 (as opposed to
> 3, or 300, at least) degrees of freedom b) why the 12 fields
> have the structure they do c) whether the fields will continue
> to have this structure in a changed climate (climate change
> then merely being a re-weighting of the fields) or that new
> structures will appear (then requiring a means of predicting
> the new structures and their weights).
>
>

Thank you for the pleasure of reading your postings. They really add value to
sci.environment.

The point that you have made is very interesting. Indeed, climate is not a lot
of weather information put together. The error of considering climate equal to
a lot of weather information integrated along time has appeared in this
discussion several times. This is at the root of several postings stating that
you cannot do climatic forecasting because weather (due to its chaotic nature)
cannot be predicted more than one or two weeks ahead. The simplicity of the
climate mechanisms that we can deduce from the small number of parameters that
control it makes clear that climate prediction is possible (once the really
important parameters are identified).

Something that has been forgotten is that there is an excellent example of
climate prediction. During the last years el Niño and la Niña have been
predicted with very high accuracy a long time ahead (half a year or so). This
is a very clear demonstration that climate prediction (as opposed to weather
prediction) is possible and it works. This prediction has been of extreme
importance for the lives of millions of persons living in the area of
influence of this climatic event. It has saved lives, it has saved money. It
is science at its best for the benefit of mankind.


--
Aguirre was considered to be a thoroughly disreputable character, and his name practically became
synonymous with cruelty and treachery
Encyclopaedia Britannica.


j...@watson.ibm.com

Nov 24, 2000, 6:36:34 PM
In article <3A1C9B66...@sprynet.com>,
on Wed, 22 Nov 2000 20:21:58 -0800,

Phil Hays <spampos...@sprynet.com> writes:
>j...@watson.ibm.com wrote:
>
>> Ok, that's different (although the .6 is understated because
>> of the lag term). I would take my model because I think ice age
>> transitions are not driven by CO2.
>
>What is the lag term? How much is the 0.6C understated because of the "lag
>term"? What about the "noise term"? How much might the 0.6C be under- or
>overestimated because of the noise term? What about measurement error? How
>much is the 0.6C under- or overestimated because of measurement error? Perhaps a
>longer term and a larger CO2 level change might help.

The lag term represents the fact that the temperature does not
respond immediately to the change in CO2 because of, for example, the
heat capacity of the oceans. So if the CO2 level stopped rising at
the current level we could expect the temperature to continue rising
for a while. On the other hand the rise in temperature from the last
ice age was presumably finished.

>What needs to be added to your model to handle ice age transitions? Even if ice
>age transitions are not "driven" by CO2, the near doubling of CO2 is surely a
>significant part of the process, yes?

My model is not intended to handle ice age transitions,
just the next 100 years or so. Suppose ice ages are driven by
orbital forcing with primarily ice albedo feedback and secondarily
CO2 feedback. Then for example the 6 degrees might partition as 1
degree from the forcing, 3 degrees from ice albedo feedback and 2
degrees from CO2 feedback. Of course you can invent your own
numbers but it doesn't make sense to assign the entire 6 degrees to
CO2.

James B. Shearer
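
Taking the partition above at face value, a few lines of arithmetic show what
each attribution implies for the sensitivity per CO2 doubling (the degree
figures are the post's own invented example):

```python
from math import log2

doublings = log2(360 / 190)       # about 0.92 doublings over the deglaciation
for label, degrees in [("2 of 6 degrees from CO2", 2.0),
                       ("all 6 degrees from CO2", 6.0)]:
    print(f"{label}: {degrees / doublings:.1f} C per doubling")
```

Assigning all 6 degrees to CO2 implies roughly 6.5 C per doubling versus
roughly 2.2 C for the partitioned version, which is the quantitative content
of the objection.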

j...@watson.ibm.com

Nov 24, 2000, 6:47:10 PM
In article <3A1C9EE7...@mail.verizon.net>,
on Thu, 23 Nov 2000 04:33:09 GMT,

Josh Halpern <vze2...@mail.verizon.net> writes:
>
>
>j...@watson.ibm.com wrote:
>
>> In article <8ve0lj$euq$1...@saltmine.radix.net>,
>> on 21 Nov 2000 09:25:55 -0500,
>> bo...@Radix.Net (Robert Grumbine) writes:
>
>SNIP....
>
>> Obviously simple models leave out a lot of stuff. However
>> throwing in every interaction you can think of is not necessarily an
>> improvement. It can and often does reduce the predictive ability
>> of the model.
>
>This appears about equivalent to saying that wrong physics gives right
>answers; the problem is which physics do you omit, and how
>do you justify it? Clearly three-dimensional models have given
>a much better representation of the actual climate than the simpler
>one-dimensional ones.

All models are simplifications of reality and therefore
"wrong" in some sense. A good model captures the essential features
of some phenomenon as simply as possible. It will give a qualitative
picture of what's going on. Adding contributing factors that you
initially omitted in the order of their importance will then give
progressively more accurate quantitative models. However to add more
features successfully it is important to understand the dominant
sources of error in the simple model. If you add features that are
not the dominant sources of error you will likely make the model less
accurate.

>> > Interest in other parameters than global average temperature, as:
>> >
>> >Precipitation
>> >Evaporation
>> >Growing seasons
>> >Ocean wave climate (for shipping)
>> >Storm severity
>> >Heat wave severity
>> >Cold wave severity
>> >Sea level
>>
>> The model was for temperature only.
>
>And clearly wrong, since all the above have nasty habits of affecting
>each other. Your approach is only justifiable if you can show that the
>various variables are not coupled or only weakly coupled or vary on
>very different time scales. Right answers for wrong physics in models,
>simple or complex, are the worst possible case, because then you think
>you know what is happening. Much worse than decent answers with
>all known physics included. Models are validated against observation,
>but are useful for teaching us mechanisms as well as for prediction.

This is nonsense. You can't possibly include all known
physics so by your criteria no model is justifiable.
Also I am talking about models to be used to set policy
in which case what matters is their predictive ability. It may be
true that more complicated models will help us understand climate
mechanisms. This does not mean they will be more predictive.

>> However it would be
>> easy enough to devise simple models for most of the above. For
>> instance the model had a heat capacity term which you could take
>> to be from heating the top of the ocean and use that to figure
>> the thermal expansion.
>
>They might not be very useful though. It's pretty easy to find a
>function that matches a single time series. It's a lot harder to have
>confidence in that function having any predictive power if it leaves
>a lot of things out that you know are basic. It is a criticism of any
>argument which derives causality from correlation. It is also,
>at root, the problem that Friis-Christensen et al ran into. Their proposal
>matched one series against another, and based on the correlation they
>proposed a simple model to explain variation in global cloud cover,
>i.e. the kind of simple 1-1 model you are championing. In that case,
>it turned out that not only was the model too simple, but the data
>was also too simple, and that if you looked at the three-dimensional
>distribution of clouds the argument (simple model) fell apart.

Well of course there are bad simple models. This doesn't
mean all simple models are bad. However you must not expect too
much from a simple model.

>An interesting question, to me at least, starts with the observation that
>most physicists prefer the simplest possible model. This approach
>has worked well over the years, and I think is the basis of your
>prejudice. The question I would put to Bob Grumbine and W.M.
>Connolley, is whether climate is a case where this approach does
>not work.

Simplest possible model is a bit of a tautology, is it not?
Why would you want a model to be more complicated than necessary?
As for physicists, I think they definitely prefer simple and
elegant theories (planets move in ellipses) to complicated theories
(planets move in epicycles). I think this is because theories are
most impressive when they successfully predict a lot from a little.
A theory with lots of free parameters can be fitted to almost
anything. So it is not so significant that it matches observations.
James B. Shearer

j...@watson.ibm.com

Nov 24, 2000, 7:29:04 PM
In article <3a1d097...@news.escape.ca>,
on Thu, 23 Nov 2000 12:29:03 GMT,

wra...@mb.sympatico.ca (David Ball) writes:
>On Wed, 22 Nov 2000 04:54:33 GMT, j...@watson.ibm.com wrote:
>
>[..]
>>
>>> Climate is not even remotely like flipping a coin. You're
>>>looking at the problem as if it was related to curve-fitting or the
>>>lack of generalization one gets sometimes with a neural network. The
>>>whole process of data assimilation is designed to deal with the issue
>>>of getting very different types into the model without having to
>>>contend with noise, or as little of it as possible.
>>
>> Climate in some ways is a lot like flipping a coin. For
>>example consider yearly rainfall amounts at some location. You
>>may have wet years and dry years more or less at random.
>>
>
> Weather and climate are based on physical processes. Things
>never happen randomly. They happen for reasons. They may appear to be
>random because we don't understand the processes involved, but that is
>appearance only.

First, according to our current understanding, some quantum
mechanical processes such as (I believe) radioactive decay are
truly random.
Second, even in classical mechanics some things such as
flipping a coin are effectively random because it is not feasible
to observe the initial conditions with sufficient accuracy to
predict the outcome. This is why long range weather prediction
is impossible.

>>> It is not wrong and again, climate is not a coin toss. If the
>>>results coming from a model differ by 3 orders of magnitude, I have a
>>>problem with my model. If I know what happened, run my model and
>>>everything works correctly except that things happen faster or slower
>>>than expected, I have a problem with my model. Of course, you should
>>>not expect exact agreement, but the numbers have to be realistic both
>>>in time and space. In a climate context, if the global pattern of
>>>cooling that took place from the 1950's to the early 1970's is not
>>>captured or appears in the 1980's you have a problem with your model.
>>
>> You are asserting this pattern would be preserved with
>>small changes in the initial conditions and is not random noise.
>>How do you know that? At what level of detail do the patterns
>>in what actually happened become dominated by random noise? How
>>do you know?
>

> Again, the physical processes involved in weather and climate
>are never random.

A model should not be expected to capture features in
the weather of the last 50 years that would go away with changes
in the initial conditions that are smaller than the uncertainty
in our knowledge of the initial conditions.

>> We aren't talking about 3 orders of magnitude difference,
>>we are talking about say two models which have a 1 degree difference
>>in global temperature in 2100.
>

> And if the various and sundry numerical models out there, all
>with different model physics, data assimilations, resolutions,
>parameterizations, ... come up with answers that are within 1 degree
>of each other, your problem is what?

I believe the full range is more than 1 degree; however, it
is not 3 orders of magnitude. My assertion is that the range of
predictions from complicated models is as large as from simple
models and therefore complicated models are not adding value.
James B. Shearer

David Ball

Nov 25, 2000, 3:00:00 AM
On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:

[..]

>> Weather and climate are based on physical processes. Things
>>never happen randomly. They happen for reasons. They may appear to be
>>random because we don't understand the processes involved, but that is
>>appearance only.
>
> First according to our current understanding some quantum
>mechanical processes such as (I believe) radioactive decay are
>truly random.

Which has absolutely nothing to do with climate.

> Second even in classical mechanics some things such as
>flipping a coin are effectively random because it is not feasible
>to observe the initial conditions with sufficient accuracy to
>predict the outcome. This is why long range weather prediction
>is impossible.

Interesting philosophy. You believe that you can't model
climate and weather and discount the obvious fact that we can and do
model climate and weather. There is a huge difference between doing
long range weather prediction and climate prediction.

[..]

>>
>> Again, the physical processes involved in weather and climate
>>are never random.
>
> A model should not be expected to capture features in
>the weather of the last 50 years that would go away with changes
>in the initial conditions that are smaller than the uncertainty
>in our knowledge of the initial conditions.

You're right. If I try to forecast thunderstorm development a
hundred years from now, I'd be on a fool's errand. Forecasting
temperature changes on a global scale is an entirely different matter.
There are scales involved here. What you seem to be saying is that you
cannot effectively model fine-scale features well into the future. The
problem is we are not talking in any way shape or form about
fine-scale features.

> I believe the full range is more than 1 degree, however it
>is not 3 orders of magnitude. My assertion is the range of
>predictions from complicated models is as large as from simple
>models and therefore complicated models are not adding value.

Models, as someone else has pointed out, have many purposes
one of which is to provide understanding of the processes involved.
Simple models are just that: simple. To produce realistic results,
short-cuts have to be made, usually in the form of physical
parameterizations. We use such parameterizations out of ignorance.
Better models require a better, more complete understanding of what is
going on in the atmosphere/ocean system. Claiming that simple models
are "as good" as more sophisticated models neglects fundamental
aspects of modeling.

--
Dave.

Daniel H. Gottlieb

Nov 25, 2000, 3:00:00 AM
"Sancho, what is that music?"
"La Marseillaise. So welcome back from Florida, Quixote. How come no sun tan?"
"Ah I was sidetracked away from Florida at the last minute, and sent over to the
Hague. We almost lost it, Sancho. Thank goodness we tied the place up in knots."

"Florida?"
"The Hague. It's been a tiring but productive two weeks for me and my teams."
"So then you did get to work on the election, Quixote?"
"I left a lieutenant in charge--but I sent out a few ballots in my spare time.
Though I lost track of the dates and sent some too late. I'll have to answer for
that at some point, I tell you. But, what the heck--I'll just tell them I was
too busy throwing monkey wrenches into the gears at the Hague."
"How so?"
"Too much science--not enough rhetoric. I hate that. Most days I was busy
keeping the troops in line. Sancho, some of my people even had the audacity to
suggest that we might be wrong putting our economic well-being ahead of the
environment!"
"Did you have them shot, Quixote?"
"Worse. I had to show them who is boss so I sent them off to Dade County to
scream at the canvassing board."
"Talk about rotten duty."
"Oh, the worst. Look do me a favor will you? I need you to feed my pirhanas
while I'm gone."
"Which friends in specific?"
"No, my fish."
"Oh, sorry I get confused some times. Still feeding the little devils mutated
frog?"
"Got to get rid of them somehow, Sancho. You know I get a shipment every week
now."
"It's amazing how well you can cover your tracks--regardless of the number of
feet--with a few carnivorous creatures. So where are you off to, Quixote?"
"Sancho, I can't say--but I'm trying to corner the market on beads. Did you know
they float and can get caught in the propellers of oil tankers?"
"What are you talking about?"
"Forget I said that. Actually, we're closing down our offices in the great state
of Louisiana. I'm selling some real estate and liquidating some assets--that
kind of thing"
"Why are your friends selling their real estate holdings in New Orleans?"
"I didn't say that."
"What happened, your address get given out to Greenpeace?"
"Nope. All I can say, Sancho, is one of the boys from Bermuda Biologicals got a
little too loud in a Karaoke bar the other night and so I'm off to liquidate
assets."
"Something going to happen there I should know about, Quixote?"
"Where? What do you mean?"
"What's going to happen in New Orleans, Quixote?"
"I know no-thing, Sancho. Because, as you know, there is no way to predict the
impact of a chaotic system, like weather, on an area as small as a city more
than a few days hence..."
"So who said climate models are the only way to see?"
"Sancho, if anything was going to happen it would be years out. So of course I
wouldn't know anything is going to happen because computer models can't resolve
a small area and the effects of time..."
"I notice you frame everything in climate modeling terms."
"Those are the rules, Sancho. Get it?"
"How do you sleep at night, Quixote?"
"Look--just feed the fish will you. And keep the doors closed or the pirhanas
might catch a cold..."
"We wouldn't want that would we? Quixote, I have relatives in New Orleans."
"Fish food...."
"What?"
"...In the greenhouse."
"What are you saying?"
"The greenhouse, Sancho. Put the frogs in the greenhouse a few days ahead of
feedings to rot a bit. The piranhas will leave some and you can use the wet rot
for fertilizer for the lilies. I want the lilies strong. I'll need lots of them
by the time it's all over."
"You've started growing lilies, Quixote?"
"I'm thinking about starting a new business. So I am learning how to cultivate
the lilly."
"You are growing flowers? Why does that strike me as odd? What kind of business
are you starting?"
"Among other things, I'll be selling lilies to graveyards, Sancho."
"I should have known. You exude charm, Quixote."
"Thank you. Do you know if any lilies float?"
"I think they're called water lilies."
"Water lilies would be perfect. I'll make a note of that. I'll sell them as
floating remembrances. I could hire a boat and throw them over the side. They'll
grow well in muddy water, I bet. I could get some videos of the service, maybe
some deeply religious jazz riffs. I could put it on DVD with some news pictures.
Man, there's money to be made in that! Hell you know... I could just show the
same damn lily being tossed in the water and sell it over and over. The profits
will sink right to my bottom line."
"Quixote, walk with me to the fish tank."
"Sure. You seem a bit vexed, Sancho. Something wrong?"
"Nah. By the way, do pirhanas eat human flesh?"
"All the time, Sancho. All the time."

Daniel H. Gottlieb
http://www.rockisland.com/~genian/bannebooks.html


j...@watson.ibm.com

Nov 25, 2000, 7:18:00 PM
In article <8vltqi$lqt$1...@saltmine.radix.net>,
on 24 Nov 2000 09:26:26 -0500,
bo...@Radix.Net (Robert Grumbine) writes:
>In article <20001122....@yktvmv.WATSON.IBM.COM>,

> <j...@watson.ibm.com> wrote:
>>In article <8ve0lj$euq$1...@saltmine.radix.net>,
>> on 21 Nov 2000 09:25:55 -0500,
>> bo...@Radix.Net (Robert Grumbine) writes:
>>>In article <20001120....@yktvmv.WATSON.IBM.COM>,
>>> <j...@watson.ibm.com> wrote:
>>>
>>> Weather and climate are somewhat different problems, but it should
>>>give one pause that the 'more complex = better' equation holds so
>>>well for the related (to climate modelling) problem.
>>
>> When I refer to more complex models I am not referring
>>to solving the same model with a finer mesh or smaller timestep.
>
> You should. There's a lot more to changing resolution than
>simply recompiling with different deltas.

Sometimes yes. However I don't see why the finer model
should be more complex (rather than just different) than the
cruder model. In fact in some cases a finer model might be simpler
if the greater resolution allowed you to remove kludges intended to
compensate for an inability to resolve certain important features.
Any error in the initial state of a weather model will
grow as the model is run. So if the accuracy of the initial state
of a weather forecasting code is determined by computational
limitations rather than observational limitations then it is not
surprising that more computational resource allows a smaller
initial error and thus generally more accurate forecasts. One
would expect such gains to diminish if observational limits become
the bottleneck. What is the current relation between these two
contributors to the initial error?
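
A toy demonstration of that first point - initial-state error growing as the
model runs - uses twin integrations of the Lorenz (1963) system separated by a
tiny "observational" error. Crude Euler stepping, for illustration only:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the classic Lorenz (1963) equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])      # tiny initial-condition error
for step in range(2001):
    if step % 500 == 0:
        print(f"t={step * 0.01:5.1f}  separation={np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows by many orders of magnitude over a few tens of model time
units, regardless of how small the initial error was.
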
It would also appear that it might be a good idea to
decrease the model resolution as a long range forecast is run as
you would gain speed without losing any important information.
Is this currently done? If not, why not?
As to the benefits of greater resolution for climate
models, the following quote appears in the paper "Global Climate
Models: What and How" (by David Randall, appearing in the book
"Global Warming: Physics and Facts", AIP Conference Proceedings
247, 1992, p. 24-44):
"The wide range of model sensitivities cannot be
narrowed simply by increasing model resolution. If at some future
time all of the existing models could be run with drastically
increased resolution, the differences in their climates would be
quite comparable to those obtained today. Evidence for this was
recently obtained by Tibaldi et al. [41], who found that although
the forecast skill of the advanced GCM used at the European Centre
for Medium Range Weather Forecasts progressively improves as the
resolution is increased, the systematic error of the model, which
represents the deficiencies of the simulated climate, does not
improve much as the resolution increases beyond the moderate
range."

>> I imagine when meteorologists add interactions to their
>>weather models it often takes them several interactions to get it
>

> ^^^^ you mean iterations
>here, I'm assuming.

Yes, thanks.

>>right (ie to improve predictions).
>

> Sometimes yes, sometimes no. I did one that was an immediate
>improvement.

Well sometimes my programs work the first time but I
don't count on it.

>>Climate modelers have to get
>>it right the first time which is a lot harder.
>

> 'first time'?

If you can't test whether your change is an improvement
or not you only get one shot. You can't twiddle things until they
work.
James B. Shearer

David Ball

Nov 25, 2000, 11:36:38 PM
On Sun, 26 Nov 2000 00:18:00 GMT, j...@watson.ibm.com wrote:

[..]

>
> Sometimes yes. However I don't see why the finer model
>should be more complex (rather than just different) than the
>cruder model. In fact in some cases a finer model might be simpler
>if the greater resolution allowed you to remove kludges intended to
>compensate for an inability to resolve certain important features.

You're confusing model complexity with resolution. Reducing
the grid size of a model is completely different than taking an
atmospheric model and coupling it with the oceans. Increasing the
resolution may or may not lead to improved output, especially in a
global model where you are considering broad-scale features. When you
are modeling a complex system that includes both the atmosphere and
the oceans, having a model that only considers the atmosphere is
unlikely to give you meaningful results.

> Any error in the initial state of a weather model will
>grow as the model is run. So if the accuracy of the initial state
>of a weather forecasting code is determined by computational
>limitations rather than observational limitations then it is not
>surprising that more computational resource allows a smaller
>initial error and thus generally more accurate forecasts. One
>would expect such gains to diminish if observational limits become
>the bottleneck. What is the current relation between these two
>contributors to the initial error?

Nonsense! Weather prediction models quite often show their
greatest errors in the first few hours of integration as the model
spins up. You can't simply start a model from a dead stop and merrily
start integrating it. The accuracy of a weather prediction model is
based on how well the various data are assimilated into the model, not
on the accuracy of the original data. In fact, improvements in data
assimilation techniques offer far greater rewards than increases in
model resolution do.
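A minimal sketch of the statistical idea behind such an analysis
step, shrunk to one scalar variable (the numbers are invented for
illustration; real schemes work on full three-dimensional fields):

    def analysis(background, obs, var_b, var_o):
        # Blend a model background value with an observation,
        # weighting each by its error variance.
        k = var_b / (var_b + var_o)   # gain: how much to trust the obs
        x_a = background + k * (obs - background)
        var_a = (1.0 - k) * var_b     # analysis beats either input
        return x_a, var_a

    # A background of 5.0 C (variance 1.0) and an observation of
    # 6.0 C (variance 0.25) blend to 5.8 C with variance 0.2.
    print(analysis(5.0, 6.0, 1.0, 0.25))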
You're operating from a false paradigm here. Errors in the
measuring of the initial state of the atmosphere are not the primary
source of errors.

> It would also appear that it might be a good idea to
>decrease the model resolution as a long range forecast is run as
>you would gain speed without losing any important information.
>Is this currently done? If not, why not?

Do you believe that GCM models operate at the same resolution
as weather prediction models? They don't.

>
>>>Climate modelers have to get
>>>it right the first time which is a lot harder.
>>
>> 'first time'?
>
> If you can't test whether your change is an improvement
>or not you only get one shot. You can't twiddle things until they
>work.

Current CMC operational model resolution is being reduced to
16 km from 24. You know what they're doing? Running past cases through
the new model to see how the output changes. You do not run a model
one time. You do not run a model just on current data. You can run
models using past data to infer their performance.

--
Dave.

Leonard Evens

unread,
Nov 26, 2000, 3:00:00 AM11/26/00
to
j...@watson.ibm.com wrote:
>

> As to the benefits of greater resolution for climate
> models, the following quote appears in the paper "Global Climate
> Models: What and How" (by David Randall, appearing in the book
> "Global Warming: Physics and Facts", AIP Conference Proceedings
> 247, 1992, p. 24-44):
> "The wide range of model sensitivities cannot be
> narrowed simply by increasing model resolution. If at some future
> time all of the existing models could be run with drastically
> increased resolution, the differences in their climates would be
> quite comparable to those obtained today. Evidence for this was
> recently obtained by Tibaldi et al. [41], who found that although
> the forecast skill of the advanced GCM used at the European Centre
> for Medium Range Weather Forecasts progressively improves as the
> resolution is increased, the systematic error of the model, which
> represents the deficiencies of the simulated climate, does not
> improve much as the resolution increases beyond the moderate
> range."

This quote is from a paper written in 1992. Clearly, it is only
one opinion, and it seems to base its conclusion on a model
for Medium Range Weather forecasting, which may be very different
from a climate model. But we presumably now have
evidence about what has happened with climate models with finer
resolution since then.
Does anyone have any information about the results?

wmconnolley

unread,
Nov 26, 2000, 3:00:00 AM11/26/00
to
j...@watson.ibm.com wrote:
> bo...@Radix.Net (Robert Grumbine) writes:

> > You should. There's a lot more to changing resolution than
> >simply recompiling with different deltas.

> In fact in some cases a finer model might be simpler


> if the greater resolution allowed you to remove kludges intended to
> compensate for an inability to resolve certain important features.

Well, RMG did say there was more to changing res than just changing the
res: adding/altering/removing parametrisations is part of it.

> As to the benefits of greater resolution for climate

> models... "the differences in their climates would be


> quite comparable to those obtained today."

There is some truth to this: models do tend to have characteristic
biases that are not removed in the way one might hope as the res goes
up. But note I say "tends": it's not a hard and fast rule, and
(surprise!) often increasing res does make the simulation better.

> recently obtained by Tibaldi et al. [41], who found that although
> the forecast skill of the advanced GCM used at the European Centre
> for Medium Range Weather Forecasts progressively improves as the
> resolution is increased, the systematic error of the model, which
> represents the deficiencies of the simulated climate, does not
> improve much as the resolution increases beyond the moderate
> range."

Again, probably fair enough (though note that your quote appears to
have extrapolated from one version of one model to them all): but what
is "moderate range"?

There is a certain feeling that T42 (2.5x2.5 degrees, ish)
is about the level where increased res in the *atmosphere* stops being
worth the extra time, in a climate simulation, given the current (or
rather, the several-years-ago) state of computers.

In fact, this all supports what RMG said: you have to do more than just
bump up the res to get a much better simulation.

-W

Josh Halpern

unread,
Nov 26, 2000, 3:00:00 AM11/26/00
to
David Ball wrote:

> On Sun, 26 Nov 2000 00:18:00 GMT, j...@watson.ibm.com wrote:
> [..]

> > It would also appear that it might be a good idea to
> >decrease the model resolution as a long range forecast is run as
> >you would gain speed without losing any important information.
> >Is this currently done? If not, why not?
> Do you believe that GCM models operate at the same resolution
> as weather prediction models? They don't.

I like the approach where different size grids are used for the oceans
and land. Gives you a headache in joining the cells but that's why
modelers get big bucks. BTW, what is state of the art in GCMs
for introducing local geography like mountains and large lakes?

josh halpern

j...@watson.ibm.com

unread,
Nov 26, 2000, 7:21:47 PM11/26/00
to
In article <8vm07q$on2$1...@saltmine.radix.net>,
on 24 Nov 2000 10:07:38 -0500,
bo...@Radix.Net (Robert Grumbine) writes:

<snip>

> So, to a degree, we do build simple models and use them.
>You don't hear much about them in the media because simplicity
>is not sexy, I guess. They're also not major industries within
>the field because, relatively speaking, it is fairly easy for
>one person or group to reasonably explore the simple model.
>imho, and at a guess.

This is an important advantage to simple models. It also
is easier to check a simple model.
Note it is well known that computer programs become
significantly harder to write and debug as soon as they become too
complex for one person to handle alone.

> Part of the issue driving complexity in climate models is
>that the people writing grants (ultimately, the public at
>large) live inside the thing you're trying to model. They
>don't seem to like models that fail to get detailed about
>the area in which they live, or fail to predict variables
>that they are personally interested in.

So there is public pressure to go beyond the state of
the art. This is also what keeps astrologers in business.

This sounds interesting but I'm unclear what you mean
exactly. The weather is constantly changing but climate is
supposed to be stable so I don't see how it has any degrees of
freedom. Can you explain further? Thanks.
James B. Shearer

j...@watson.ibm.com

unread,
Nov 26, 2000, 7:38:18 PM11/26/00
to
In article <3A1F638B...@sprynet.com>,
on Fri, 24 Nov 2000 23:00:27 -0800,

Phil Hays <spampos...@sprynet.com> writes:
>j...@watson.ibm.com wrote:
>
>PH> What is the lag term?

>
>> The lag term represents the fact that the temperature does
>> not respond immediately to the change in CO2 because of, for example, the
>> heat capacity of the oceans. So if the CO2 level stopped rising at
>> the current level we could expect the temperature to continue rising
>> for a while.
>
>I agree that there are "lag factors". I'm just rather unsure how you are
>proposing to handle them as a single "lag term".

As a heat capacity term. More CO2 in the atmosphere absorbs
more of the outgoing heat radiation from the earth's surface and
radiates it back to the surface. This causes an energy imbalance
at the earth's surface which heats up until the increased heat
radiation restores a balance. The heat capacity term determines
how fast the surface temperature responds.
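A sketch of the kind of lagged response being described, as a
zero-dimensional energy balance; the constants are illustrative
placeholders, not values proposed in this thread:

    # C * dT/dt = F - lambda_ * T
    F = 3.7          # W/m^2, forcing for doubled CO2 (commonly cited)
    lambda_ = 1.2    # W/m^2/K, feedback parameter (assumed)
    C = 1.0e9        # J/m^2/K, effective ocean heat capacity (assumed)

    year = 3.156e7   # seconds
    dt = 0.1 * year
    T = 0.0          # warming relative to the starting equilibrium
    for step in range(2000):            # integrate 200 years
        T += dt * (F - lambda_ * T) / C
        if (step + 1) % 500 == 0:       # print every 50 years
            print((step + 1) * 0.1, round(T, 2))
    # T relaxes toward F / lambda_ (about 3.1 K here) on a timescale
    # of C / lambda_ (about 26 years) -- the "lag".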

>> My model is not intended to handle ice age transitions,
>> just the next 100 years or so. Suppose ice ages are driven by
>> orbital forcing with primarily ice albedo feedback and secondarily
>> CO2 feedback. Then for example the 6 degrees might partition as 1
>> degree from the forcing, 3 degrees from ice albedo feedback and 2
>> degrees from CO2 feedback. Of course you can invent your own
>> numbers but it doesn't make sense to assign the entire 6 degrees to
>> CO2.
>

>How much of an ice albedo feedback term applies to the next hundred years or
>so? Probably less than during an ice age transition, but how do we find a
>realistic number? Invented numbers I can do without.

Ice albedo is not directly included in this model.

>Models, simple or complex, are a tool for understanding how things work. The
>problems with simple models are that they depend on constants that are not known
>accurately, are not constant, and that a simple model may hide things we are
>very interested in.

Understanding how things work is not the only use of models.
Another use of models is predicting future events. It has been
repeatedly found that out-predicting simple trend-following models
is not easy when you don't understand the system very well.

>If the constants for your simple model were known AND reasonably constant, then
>a model such as you propose would be very useful. As the constants are both not
>known to better than a factor of two, and may not be constant, this sort of
>model is mostly useful for informal discussions, with care being taken to point
>out the limits of the model.

The range of error in complex models is as large. It is just
buried deeper.

>If the only change expected over the next 100 years was simply an increasing
>temperature, distributed in a known pattern, then your simple model might have all
>of the answers we wanted. But we care about many other factors, and the
>distribution of temperature change is not likely to be a simple pattern. For
>example, what is going to happen to the distribution of rainfall? What is the
>fate of the Arctic sea ice? How warm will the permafrost regions get? Your
>simple model can not provide more than the most general of insights into these
>important questions.

These may be important questions. That doesn't mean we have
any reliable way of answering them.
James B. Shearer

Robert Grumbine

unread,
Nov 27, 2000, 3:00:00 AM11/27/00
to
In article <3A21845D...@mail.verizon.net>,

Josh Halpern <vze2...@verizon.net> wrote:
>
>I like the approach where different size grids are used for the oceans
>and land. Gives you a headache in joining the cells but that's why
>modelers get big bucks.

Dang. I only get normal sized ones. Where can I get the big ones?

>BTW, what is state of the art in GCMs
>for introducing local geography like mountains and large lakes?

Still on the 'subgrid parameterization' level, I believe.

Phil Hays

unread,
Nov 27, 2000, 3:00:00 AM11/27/00
to
j...@watson.ibm.com wrote:

> >I agree that there are "lag factors". I'm just rather unsure how you are
> >proposing to handle them as a single "lag term".
>
> As a heat capacity term. More CO2 in the atmosphere absorbs
> more of the outgoing heat radiation from the earth's surface and
> radiates it back to the surface. This causes an energy imbalance
> at the earth's surface which heats up until the increased heat
> radiation restores a balance. The heat capacity term determines
> how fast the surface temperature responds.

The surface is more than just a simple heat capacity. Land surface biology will
respond to changes in climate on the rough order of decades. Changing
biology will modify properties of the surface such as albedo and evaporative
rate, which will modify the climate again. The oceans are also not simple.


> >How much of an ice albedo feedback term applies to the next hundred years or
> >so? Probably less than during an ice age transition, but how do we find a
> >realistic number? Invented numbers I can do without.
>
> Ice albedo is not directly included in this model.

Why not? Do you really expect that snow coverage will stay the same if the
climate warms by a few degrees? Do you really expect that polar sea ice
coverage will stay the same?


> Understanding how things work is not the only use of models.
> Another use of models is predicting future events. It has been
> repeatedly found that out-predicting simple trend-following models
> is not easy when you don't understand the system very well.

Complex climate models are part of our attempt to understand the system as fully
as possible. I don't think that this effort is doomed.


> >If the constants for your simple model were known AND reasonably constant, then
> >a model such as you propose would be very useful. As the constants are both not
> >known to better than a factor of two, and may not be constant, this sort of
> >model is mostly useful for informal discussions, with care being taken to point
> >out the limits of the model.

> The range of error in complex models is as large.

You state your conclusion again. You know this because of ________________?


--
Phil Hays

j...@watson.ibm.com

unread,
Nov 27, 2000, 10:22:23 PM11/27/00
to
In article <3a1fe2fc...@news.escape.ca>,
on Sat, 25 Nov 2000 16:24:51 GMT,

wra...@mb.sympatico.ca (David Ball) writes:
>On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:
>
>[..]
>
>>> Weather and climate are based on physical processes. Things
>>>never happen randomly. They happen for reasons. They may appear to be
>>>random because we don't understand the processes involved, but it is
>>>appearance only.
>>
>> First according to our current understanding some quantum
>>mechanical processes such as (I believe) radioactive decay are
>>truly random.
>
> Which has absolutely nothing to do with climate.

It is a counterexample to your claim that "Things never
happen randomly." Also it relates to climate in that it is a
source of random low level noise in the climate system.

>> Second even in classical mechanics some things such as
>>flipping a coin are effectively random because it is not feasible
>>to observe the initial conditions with sufficient accuracy to
>>predict the outcome. This is why long range weather prediction
>>is impossible.
>

> Interesting philosophy. You believe that you can't model
>climate and weather and discount the obvious fact that we can and do
>model climate and weather. There is a huge difference between doing
>long range weather prediction and climate prediction.

I have never said we can't model climate or weather.
Obviously we can. However there are limits as to how well.

>>> Again, the physical processes involved in weather and climate
>>>are never random.
>>
>> A model should not be expected to capture features in
>>the weather of the last 50 years that would go away with changes
>>in the initial conditions that are smaller than the uncertainty
>>in our knowledge of the initial conditions.
>

> You're right. If I try to forecast thunderstorm development a
>hundred years from now, I'd be on a fool's errand. Forecasting
>temperature changes on a global scale is an entirely different matter.
>There are scales involved here. What you seem to be saying is that you
>cannot effectively model fine-scale features well into the future. The
>problem is we are not talking in any way shape or form about
>fine-scale features.

The original question I asked, which has not been answered,
is how large a climate feature has to be before we can be confident
it is not just random noise.
James B. Shearer

David Ball

unread,
Nov 27, 2000, 11:15:49 PM11/27/00
to
On Tue, 28 Nov 2000 03:22:23 GMT, j...@watson.ibm.com wrote:

>In article <3a1fe2fc...@news.escape.ca>,
> on Sat, 25 Nov 2000 16:24:51 GMT,
> wra...@mb.sympatico.ca (David Ball) writes:
>>On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:
>>
>>[..]
>>
>>>> Weather and climate are based on physical processes. Things
>>>>never happen randomly. They happen for reasons. They may appear to be
>>>>random because we don't understand the processes involved, but it is
>>>>appearance only.
>>>
>>> First according to our current understanding some quantum
>>>mechanical processes such as (I believe) radioactive decay are
>>>truly random.
>>
>> Which has absolutely nothing to do with climate.
>
> It is a counterexample to your claim that "Things never
>happen randomly." Also it relates to climate in that it is a
>source of random low level noise in the climate system.

Where weather is concerned, and climate is nothing more than
weather taken over a long period of time, nothing happens randomly.
Thunderstorms don't develop in arbitrary locations. Snow doesn't
develop at random points. Temperatures reach the levels that they do
because of processes at work in the atmosphere. These processes are
not random. They don't happen willy-nilly.

It depends on what is being modeled, on how the model works,
how the data is assimilated, what the physical parameterizations are,
what the model time step is, how feedbacks operate, ... There is no
one answer that is going to satisfy a question like that. One obvious
answer might be the resolution of the model. Weather models don't
capture thunderstorms terribly well because they can't "see" them. The
same holds true for climate models. You can't model what your model
can't "see". It is restricted, therefore, to depicting coarser fields:
mean temperatures, precipitation patterns, etc. as opposed to fine-scale
features like absolute temperatures at a point in space and time.
Even then, models are able to "hint" at sub-grid processes. As
an example, a weather model might not "see" a thunderstorm complex
develop, but it can certainly see a swath of precipitation associated
with that complex moving across a large area. What this means is that
sub-grid processes also have to be accounted for in some measure
inside the model.

--
Dave.

goldfish

unread,
Nov 28, 2000, 3:00:00 AM11/28/00
to

David Ball wrote:

> On Tue, 28 Nov 2000 03:22:23 GMT, j...@watson.ibm.com wrote:
> > The original question I asked, which has not been answered,
> >is how large a climate feature has to be before we can be confident
> >it is not just random noise.

The normal way to answer this question is to determine
either the confidence bands or the uncertainty of the
model, and if a climate feature is larger than this range of
uncertainty, then the model should account for it. But
unfortunately, this gets back to the question I was asking,
"what is the uncertainty of the climate models?" -- which,
likewise to Dr Shearer's question, was never answered.
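
A toy version of that rule (all numbers invented for illustration):

    feature = 0.6   # K, size of some simulated climate feature
    wR = 0.25       # K, model uncertainty at the chosen odds (assumed)
    k = 2.0         # coverage factor for those odds (assumed)
    print("significant" if abs(feature) > k * wR else "within the noise")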

Dr Grumbine criticised my question as "too vague."
It is true that the way I spelled it out left it open to a number
of different interpretations. Therefore, I offer the following
to specify it more concretely. (The following is taken from
S J Kline and F A McClintock, Mech. Eng., p3, Jan 1953;
please bear with the ascii notation...)

Suppose a set of measurements is made and the uncertainty
in each measurement may be expressed with the same odds.
These measurements are then used to calculate some desired
result of the experiments. This result, R, is a given function of
the set of independent variables x1, x2, x3, ..., xn, as

R = R(x1, x2, x3, ..., xn)

Let wR be the uncertainty of the result and w1, w2, w3, ..., wn
be the uncertainties in the independent variables, all given with
the same odds. The uncertainty of the calculated results is given
by

wR = [ (dR/dx1 * w1)^2 + (dR/dx2 * w2)^2 + ... + (dR/dxn * wn)^2 ]^(1/2)

where dR/dx1 is the _partial_ derivative of R with respect to x1.

Thus, the uncertainty of the temperature projected over the next
50 years is defined quantitatively. This can also be used to determine
the uncertainty of averaging different models together, by using
R = sum(models)/(number of models), each model having an
uncertainty. Arriving at the uncertainty of each model is not
necessarily easy with numerical models, but nevertheless, it can
be calculated.
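
A numerical sketch of the formula above, with the partial
derivatives estimated by finite differences (R here is a stand-in
function, not a climate model):

    import math

    def propagate(R, x, w, eps=1e-6):
        # Kline-McClintock: wR = sqrt(sum_i (dR/dx_i * w_i)^2), with
        # the partials of R estimated by one-sided finite differences.
        total = 0.0
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            dRdxi = (R(xp) - R(x)) / eps
            total += (dRdxi * w[i]) ** 2
        return math.sqrt(total)

    # Averaging four "model results", each with uncertainty 0.5,
    # gives wR = 0.5 / sqrt(4) = 0.25, the familiar rule for a mean.
    mean = lambda xs: sum(xs) / len(xs)
    print(propagate(mean, [1.0, 2.0, 3.0, 4.0], [0.5] * 4))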

> It depends on what is being modeled, on how the model works,
> how the data is assimilated, what the physical parameterizations are,
> what the model time step is, how feedbacks operate, ... There is no
> one answer that is going to satisfy a question like that.

Indeed there is one answer, as I have shown above.


> One obvious
> answer might be the resolution of the model. Weather models don't
> capture thunderstorms terribly well because they can't "see" them. The
> same holds true for climate models. You can't model what your model
> can't "see". It is restricted, therefore, to depicting coarser fields:
>> mean temperatures, precipitation patterns, etc. as opposed to fine-scale
> features like absolute temperatures at a point in space and time.

I suspect that the time step defines the uncertainty in time used in the
models. Any feature of weather that is shorter than the time step
is noise, and hence is not relevant to what the model predicts.

> Even then, models are able to "hint" at sub-grid processes. As
> an example, a weather model might not "see" a thunderstorm complex
> develop, but it can certainly see a swath of precipitation associated
> with that complex moving across a large area. What this means is that
> sub-grid processes also have to be accounted for in some measure
> inside the model.
>

Likewise with time, the grid size defines the uncertainty in distance...

Regards,
Peter Mott

David Ball

unread,
Nov 28, 2000, 3:00:00 AM11/28/00
to
On Tue, 28 Nov 2000 13:16:41 -0500, goldfish <p...@xbt.nrl.navy.mil>
wrote:

[..]

You continue to look at this as some type of function that you
plug numbers into and out pops an answer. It doesn't work that way.
To begin with, the various elements that are fed to the model
are not independent of each other. They are coupled in myriad
different ways: insolation, surface albedo, land use, vegetative
cover, winds, precipitation, cloud cover, ... all have an impact on
the surface temperature. That temperature, in turn, affects
evaporation, convection, sensible heat fluxes, latent heat fluxes,
radiative processes. These in turn affect .... And those affect ...
And those affect ...
The data that are measured are not simply plugged into some
equation. The data are carefully quality controlled. They go through a
sophisticated data assimilation process where the data are fitted to
the model grid both spatially and temporally.
I wish it was as easy as you say it is to evaluate model
output. It would make my job a lot easier. Trouble is, it's a hell of
a lot more involved than the simplistic way you are looking at it.
So the short of it is that it cannot be calculated.


>
>> It depends on what is being modeled, on how the model works,
>> how the data is assimilated, what the physical parameterizations are,
>> what the model time step is, how feedbacks operate, ... There is no
>> one answer that is going to satisfy a question like that.
>

>Indeed there is one answer, as I have shown above.

No there is not, because you are looking at the problem in too
simplistic a way. The errors that are produced are dependent on the
factors that I list above. You simply can't ignore them.

>
>
>> One obvious
>> answer might be the resolution of the model. Weather models don't
>> capture thunderstorms terribly well because they can't "see" them. The
>> same holds true for climate models. You can't model what your model
>> can't "see". It is restricted, therefore, to depicting coarser fields:
>> mean temperatures, precipitation patterns, etc. as opposed to fine-scale
>> features like absolute temperatures at a point in space and time.
>

>I suspect that the time step defines the uncertainty in time used in the
>models. Any feature of weather that is shorter than the time step
>is noise, and hence is not relevant to what the model predicts.
>
>

>> Even then, models are able to "hint" at sub-grid processes. As
>> an example, a weather model might not "see" a thunderstorm complex
>> develop, but it can certainly see a swath of precipitation associated
>> with that complex moving across a large area. What this means is that
>> sub-grid processes also have to be accounted for in some measure
>> inside the model.
>>
>

>Likewise with time, the grid size defines the uncertainty in distance...
>

Again, sub-grid processes also have an effect on the output.
Using satellite data as an example, imagine an infra-red sensor that
measures surface temperatures at a resolution of 4 km. Now imagine a
forest fire in the middle of that 4 km area covering half the area.
The heat from the fire will have an impact on the temperature that the
sensor measures even though the sensor can't "see" it explicitly.
Models work the same way.

--
Dave.

Phil Hays

unread,
Nov 28, 2000, 3:00:00 AM11/28/00
to
j...@watson.ibm.com wrote:

> Because the predictions of complex models of climate
> sensitivity to CO2 forcing do not appear to be clustered any
> tighter than the predictions of simple models.

Range of predictions doesn't equal accuracy of predictions.

Also, complex models allow for learning more about how the climate engine works,
and simple models do not.


--
Phil Hays

j...@watson.ibm.com

unread,
Nov 28, 2000, 8:37:36 PM11/28/00
to
In article <3a232f74...@news.escape.ca>,
on Tue, 28 Nov 2000 04:15:49 GMT,

wra...@mb.sympatico.ca (David Ball) writes:
>On Tue, 28 Nov 2000 03:22:23 GMT, j...@watson.ibm.com wrote:
>
>>In article <3a1fe2fc...@news.escape.ca>,
>> on Sat, 25 Nov 2000 16:24:51 GMT,
>> wra...@mb.sympatico.ca (David Ball) writes:
>>>On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:
>>>
>>>[..]
>>>
>>>>> Weather and climate are based on physical processes. Things
>>>>>never happen randomly. They happen for reasons. They may appear to be
>>>>>random because we don't understand the processes involved, but it is
>>>>>appearance only.
>>>>
>>>> First according to our current understanding some quantum
>>>>mechanical processes such as (I believe) radioactive decay are
>>>>truly random.
>>>
>>> Which has absolutely nothing to do with climate.
>>
>> It is a counterexample to your claim that "Things never
>>happen randomly." Also it relates to climate in that it is a
>>source of random low level noise in the climate system.
>
> Where weather is concerned, and climate is nothing more than
>weather taken over a long period of time, nothing happens randomly.
>Thunderstorms don't develop in arbitrary locations. Snow doesn't
>develop at random points. Temperatures reach the levels that they do
>because of processes at work in the atmosphere. These processes are
>not random. They don't happen willy-nilly.

The weather a year from now is effectively random. If I go
outside and wave my arms this will completely change the weather in a
year in an unpredictable way. This does not mean that anything can
happen. If I flip a coin it may come up heads or it may come up tails
but it is unlikely to do anything else.
James B. Shearer

j...@watson.ibm.com

unread,
Nov 28, 2000, 8:53:48 PM11/28/00
to
In article <3A2347A8...@sprynet.com>,
on Mon, 27 Nov 2000 21:50:32 -0800,

Phil Hays <spampos...@sprynet.com> writes:
>j...@watson.ibm.com wrote:
>
>> >I agree that there are "lag factors". I'm just rather unsure how you are
>> >proposing to handle them as a single "lag term".
>>
>> As a heat capacity term. More CO2 in the atmosphere absorbs
>> more of the outgoing heat radiation from the earth's surface and
>> radiates it back to the surface. This causes an energy imbalance
>> at the earth's surface which heats up until the increased heat
>> radiation restores a balance. The heat capacity term determines
>> how fast the surface temperature responds.
>
>The surface is more than just a simple heat capacity. Land surface biology will
>respond to changes in climate on the rough order of decades. Changing
>biology will modify properties of the surface such as albedo and evaporative
>rate, which will modify the climate again. The oceans are also not simple.

So what. Models are simplifications of reality.

>> >How much of an ice albedo feedback term applies to the next hundred years or
>> >so? Probably less than during an ice age transition, but how do we find a
>> >realistic number? Invented numbers I can do without.
>>
>> Ice albedo is not directly included in this model.
>

>Why not? Do you really expect that snow coverage will stay the same if the
>climate warms by a few degrees? Do you really expect that polar sea ice
>coverage will stay the same?

It is not included because it is a simple model.

<snip>

>> The range of error in complex models is as large.
>

>You state your conclusion again. You know this because of ________________?

Because the predictions of complex models of climate
sensitivity to CO2 forcing do not appear to be clustered any
tighter than the predictions of simple models.

James B. Shearer

Josh Halpern

unread,
Nov 28, 2000, 10:43:13 PM11/28/00
to
j...@watson.ibm.com wrote:

> In article <3a232f74...@news.escape.ca>,


> wra...@mb.sympatico.ca (David Ball) writes:
> >On Tue, 28 Nov 2000 03:22:23 GMT, j...@watson.ibm.com wrote:
> >>In article <3a1fe2fc...@news.escape.ca>,

> >> wra...@mb.sympatico.ca (David Ball) writes:
> >>>On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:
> >>>[..]

SNIp....

> The weather a year from now is effectively random.

Nonsense. There will be a random variation within some small
range, but this is far different from it being truly random. We
can calculate what that range will be.

> If I go outside and wave my arms this will completely change the weather in
> a
> year in an unpredictable way.

At best you can say it might, but even there, over long periods of time
the effect of waving your arms will be washed out completely. Note
that in your argument you are really adopting Ball's argument that
nothing about the weather is random, but that our ability to describe
the initial conditions is limited. If one accepts your contention that
the weather at future dates is random, then the system memory of
your actions will be washed out.

> This does not mean that anything can
> happen. If I flip a coin it may come up heads or it may come up tails
> but it is unlikely to do anything else.

I've had a few roll into a sewer.

josh halpern


goldfish

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to

David Ball wrote:

I take issue with that. To dismiss my question as "you think
this is simple curve fitting" does not mean that what I am asking
is inappropriate or invalid. However, it does get you out of providing
real answers. You could say that "let us create a computer model of
the climate to provide a prediction as to what this unprecedented
increase in atmospheric CO2 will do" is simple-minded, especially
when compared to the effort necessary to carry that out. Yes, it is
simple, but that does not make it invalid, and climate modellers such
as yourself are in the business to convince the rest of us that what you
do is indeed valid. My question about the uncertainty is straightforward
and simple, and is most typically and easily answered in a curve fitting
situation. However, it is nonetheless a question at the heart of every
scientific endeavor: how do we know that what you have done
is valid?

Nothing you have written disputes the basic mechanism to
distinguish the significant from the noise, as I have outlined:
you determine the uncertainty, and if a feature lies outside
of that range, then the model should account for it.

The mathematical tools are available to answer this question.
If you believe in the models, then you should believe in the
derivation of the uncertainty of the models, which can be defined
in a number of mathematically precise ways. (One reason I
deliberately left my question vague was to let you define the
uncertainty in any way that was felt appropriate.)

By leaving the uncertainty undefined and unquantified,
any answer to a question regarding the validity of the model
remains ambiguous. If you cannot verify that the model(s) are
valid, the work will always have questionable merit.


> To begin with, the various elements that are fed to the model
> are not independent of each other. They are coupled in myriad
> different ways: insolation, surface albedo, land use, vegetative
> cover, winds, precipitation, cloud cover, ... all have an impact on
> the surface temperature. That temperature, in turn, affects
> evaporation, convection, sensible heat fluxes, latent heat fluxes,
> radiative processes. These in turn affect .... And those affect ...
> And those affect ...

I do not want to get into the nuts and bolts here, as that
is obviously your forte. Nevertheless, most factors you
list are not the true governing parameters, which are
independent of the model. Examples: surface reflectance
per square km of ice is something that cannot change, and
is therefore a governing parameter. OTOH, the total ice cover
is something that a model should determine.

> The data that are measured are not simply plugged into some
> equation. The data are carefully quality controlled. They go through a
> sophisticated data assimilation process where the data are fitted to
> the model grid both spatially and temporally.
> I wish it was as easy as you say it is to evaluate model
> output. It would make my job a lot easier. Trouble is, it's a hell of
> a lot more involved than the simplistic way you are looking at it.
> So the short of it is that it cannot be calculated.

I get the feeling that you do not _want_ to calculate it,
because you know it would raise more questions than
you can answer concerning climate features, such as ENSO,
that should be accounted for, but are not. This puts many
unanswered, fundamental questions about climate in sharp relief.


> >> It depends on what is being modeled, on how the model works,
> >> how the data is assimilated, what the physical parameterizations are,
> >> what the model time step is, how feedbacks operate, ... There is no
> >> one answer that is going to satisfy a question like that.
> >
> >Indeed there is one answer, as I have shown above.
>
> No there is not, because you are looking at the problem in too
> simplistic a way. The errors that are produced are dependent on the
> factors that I list above. You simply can't ignore them.

If one cannot ignore them, it is clear that you need to figure
out a way to systematically, quantitatively account for them.

Regards,
Peter Mott

wmconnolley

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to
goldfish <p...@xbt.nrl.navy.mil> wrote:
> David Ball wrote:
> > gf wrote:

> > >Suppose a set of measurements is made and the uncertainty

...


> > >This result, R, is a given function of

^^^^^

Ah, but we don't know the "given" function exactly. We know it roughly,
but there are no precise theorems to translate this into error bounds.

> > >wR = [ (dR/dx1 * w1)^2 + (dR/dx2 * w2)^2 + ... + (dR/dxn * wn)^2 ]

You could play with this perhaps: do dR/dx1 where x1 is some random
model parametrisation - but I doubt anyone would bother because:

- doing 240 year runs takes some time, even now, and doing this for
a significant number of variables would take forever
- it wouldn't really capture the uncertainty, because of the "given"
function problem

Do you have a nice version of your formalism where "R" is uncertain?

> > You continue to look at this as some type of function that you
> > plug numbers into and out pops an answer. It doesn't work that way.
>
> I take issue with that. To dismiss my question as "you think
> this is simple curve fitting" does not mean that what I am asking
> is inappropriate or invalid.

I both agree and disagree, for reasons that should be clear from the
above. DB *isn't* accusing you of curve fitting here (at least
directly): if you read what he says, he's accusing you of trying to
stuff numbers into a function, which you are, the point being that
the function is not *precisely* known, as your formalism requires.

> My question about the uncertainty is straightforward
> and simple, and is most typically and easily answered in a curve
> fitting situation.

But we are not in curve fitting. It is not clear that your question can
be answered.

-W.

Leonard Evens

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to
goldfish wrote:
>

> > You continue to look at this as some type of function that you
> > plug numbers into and out pops an answer. It doesn't work that way.
>
> I take issue with that. To dismiss my question as "you think
> this is simple curve fitting" does not mean that what I am asking
> is inappropriate or invalid. However, it does gets you out of providing
> real answers.

[and Etc.]

I've been trying to follow these discussions but largely keeping out.

Let me add a couple of remarks.

First, the 1995 IPCC Scientific Assessment is full of error estimates
for various different kinds of models. I don't see any point in
describing them here since any interested person can read the
Report (and should). In addition, the most recent Report is supposed
to come out some time early next year, and presumably much of the
1995 Report is now out of date. But there is certainly lots of
material available on model validation, including detailed estimates
of probable error.

Second, one has to distinguish between "predictions" of what has
already been observed and what has not been observed. In the former
case, we can see how well models are doing. They do some things
quite well and others not so well. In the latter case, we don't
know how well they are going to do. The models can give us some
indication, but there certainly can be surprises.

Steinn Sigurdsson

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to
wmconnolley <w...@bas.ac.uk> writes:

> goldfish <p...@xbt.nrl.navy.mil> wrote:
> > David Ball wrote:
> > > gf wrote:
>

> > > >Suppose a set of measurements is made and the uncertainty

> ...


> > > >This result, R, is a given function of

> ^^^^^
>
> Ah, but we don't know the "given" function exactly. We know it roughly,
> but there are no precise theorems to translate this into error bounds.

Well yes you do.
The function is _what you calculate_ from the measurements,
so you have the explicit function from the input parameters!
It is intrinsic to the _model_ that you know the functional
form used at whatever level of approximation you are doing the modeling.

> > > >wR = [ (dR/dx1 * w1)^2 + (dR/dx2 * w2)^2 + ... + (dR/dxn * wn)^2 ]

> You could play with this perhaps: do dR/dx1 where x1 is some random


> model parametrisation - but I doubt anyone would bother because:

> - doing 240 year runs takes some time, even now, and doing this for
> a significant number of variables would take forever
> - it wouldn't really capture the uncertainty, because of the "given"
> function problem

> Do you have a nice version of your formalism where "R" is uncertain?

Errors in the input parameters translate into errors in "R" directly
from the explicit functional dependence by construction.
So Taylor expand your _model_ function, 'ware of singularities.

If you are worried that your model is wrong - ie that your
choice of constructing "R" is wrong, then look at the functional
variation of wR as a function of R.

This gives an estimate of the model sensitivity,
which gives you some information on how well you can
hope the model represents real uncertainties - the model
can not do better than this, it may be much worse.

...


> stuff numbers into a function, which you are, the point being that
> the function is not *precisely* known, as your formalism requires.

...

Well, the modelers do stuff numbers into a function,
by construction. Else there is no quantitative model.
And for a given model the function is precisely known,
by construction.
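
A sketch of the one-at-a-time estimate described above: run an
iterated toy "model" twice with one parameter nudged, and read off
the partial derivative by finite difference. Each partial costs an
extra full run, which is part of the cost objection raised earlier.
The model function is a stand-in, not a GCM:

    def model(gain, steps=100):
        # Toy iterated "model": a state relaxing toward the
        # parameter `gain`; a stand-in for a long model run.
        state = 1.0
        for _ in range(steps):
            state += 0.1 * (gain - state)
        return state

    p, dp = 2.0, 1e-4
    dRdp = (model(p + dp) - model(p)) / dp  # one extra run per parameter
    w_p = 0.3                               # assumed parameter uncertainty
    print(dRdp, abs(dRdp) * w_p)            # p's contribution to wR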


goldfish

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to

wmconnolley wrote:

> goldfish <p...@xbt.nrl.navy.mil> wrote:
> > David Ball wrote:
> > > gf wrote:
>

> > > >Suppose a set of measurements is made and the uncertainty

> ...


> > > >This result, R, is a given function of

> ^^^^^
>
> Ah, but we don't know the "given" function exactly. We know it roughly,
> but there are no precise theorems to translate this into error bounds.

W H Press, S A Teukolsky, W T Vetterling and B P
Flannery, "Numerical Recipes in Fortran" 2nd Ed., Cambridge
Univ. Press (Cambridge, UK), 1992. See chapter 10, section 6:
"Conjugate Gradient Methods in Multidimensions".

Regards,
Peter Mott

Phil Hays

unread,
Nov 29, 2000, 3:00:00 AM11/29/00
to
j...@watson.ibm.com wrote:

> If complex models of climate sensitivity to CO2 forcing are
> more accurate than simple models I would expect them to be clustered
> more tightly.

What is the source of the constants in simple models? Studies of past
measurements and/or results from complex models, correct?


--
Phil Hays

David Ball

unread,
Nov 29, 2000, 9:03:46 PM11/29/00
to
On Wed, 29 Nov 2000 12:17:49 -0500, goldfish <p...@xbt.nrl.navy.mil>
wrote:

[..]

>
>

Take issue with it then. It really doesn't matter to me. When
you state, "This result, R, is a given function of the set of
independent variables x1, x2, x3, ..., xn, as R = R(x1, x2, x3, ...,
xn)" you are not looking at the problem properly because: a. the
variables involved are not independent and b. because modeling doesn't
work the way you think it does. You might not like the facts, but they
are facts nonetheless.
The fact is that the only way to evaluate model output is by
seeing how well the model captures the essential features of what is
being modeled. You have to compare the model output with reality. That
is why so much effort goes into looking at past situations. You have a
known reality to compare the model output to.

>Nothing you have written disputes the basic mechanism to
>distinguish the significant from the noise, as I have outlined:
>you determine the uncertainty, and if a feature lies outside
>of that range, then the model should account for it.
>
>The mathematical tools are available to answer this question.
>If you believe in the models, then you should believe in the
>derivation of the uncertainty of the models, which can be defined
>in a number of mathematically precise ways. (One reason I
>deliberately left my question vague was to let you define the
>uncertainty in any way that was felt appropriate.)
>
>By leaving the uncertainty undefined and unquantified,
>any answer to a question regarding the validity of the model
>remains ambiguous. If you cannot verify that the model(s) are
>valid, the work will always have questionable merit.

Does a weather prediction model accurately capture the
features in the weather? Does a meso-scale model of a supercell
thunderstorm develop that storm at the correct point in time and
space? Does a climate model run using data from the 1950s and
integrated forward for 50 years accurately capture the essential
features of the climate during that period? That's how you tell
whether your model is working. This isn't a problem in extracting
spectral lines from background noise. Or curve fitting. Or anything
else for that matter.

>
>
>> To begin with, the various elements that are fed to the model
>> are not independent of each other. They are coupled in myriad
>> different ways: insolation, surface albedo, land use, vegetative
>> cover, winds, precipitation, cloud cover, ... all have an impact on
>> the surface temperature. That temperature, in turn, affects
>> evaporation, convection, sensible heat fluxes, latent heat fluxes,
>> radiative processes. These in turn affect .... And those affect ...
>> And those affect ...
>
>I do not want to get into the nuts and bolts here, as that
>is obviously your forte. Nevertheless, most factors you
>list are not the true governing parameters, which are
>independent of the model. Examples: surface reflectance
>per square km of ice is something that cannot change, and
>is therefore a governing parameter. OTOH, the total ice cover
>is something that a model should determine.

Yes, there are physical laws at work. Laws of atmospheric
motion, the laws of radiative transfer, ... They help define the
processes that work in the atmosphere. The values input to the model
however are things like temperature, insolation, cloud cover, and
those inputs are not independent of each other. They are very much
linked. The physics of the model are not independent of the model.
They are the model. If the physics being used is inappropriate,
ill-considered or absent your model has a serious problem.
BTW, surface albedo is not a constant, but depends on sun
angle, the type and age of the ice, cloud cover, precipitation, etc,
etc, etc.

>
>
>> The data that are measured are not simply plugged into some
>> equation. The data are carefully quality controlled. They go through a
>> sophisticated data assimilation process where the data are fitted to
>> the model grid both spatially and temporally.
>> I wish it was as easy as you say it is to evaluate model
>> output. It would make my job a lot easier. Trouble is, it's a hell of
>> a lot more involved than the simplistic way you are looking at it.
>> So the short of it is that it cannot be calculated.
>
>I get the feeling that you do not _want_ to calculate it,
>because you know it would raise more questions than
>you can answer concerning climate features, such as ENSO,
>that should be accounted for, but are not. This puts many
>unanswered, fundamental questions about climate in sharp relief.

So there's a grand conspiracy between modelers and those of us
who use models and all because you are sure that assessing model
performance is so simple. I can't argue with logic like that.

>
>
>> >> It depends on what is being modeled, on how the model works,
>> >> how the data is assimilated, what the physical parameterizations are,
>> >> what the model time step is, how feedbacks operate, ... There is no
>> >> one answer that is going to satisfy a question like that.
>> >
>> >Indeed there is one answer, as I have shown above.
>>
>> No there is not, because you are looking at the problem in too
>> simplistic a way. The errors that are produced are dependent on the
>> factors that I list above. You simply can't ignore them.
>
>If one cannot ignore them, it is clear that you need to figure
>out a way to systematically, quantitatively account for them.
>

Boy, I wish it was as easy as you make it out to be. I
wouldn't waste my time issuing those bust forecasts when the numerical
models lead me down the garden path.

--
Dave.

j...@watson.ibm.com

unread,
Nov 29, 2000, 10:53:08 PM11/29/00
to
In article <3A247C42...@mail.verizon.net>,
on Wed, 29 Nov 2000 03:43:13 GMT,

Josh Halpern <vze2...@mail.verizon.net> writes:
>j...@watson.ibm.com wrote:
>
>> In article <3a232f74...@news.escape.ca>,
>> wra...@mb.sympatico.ca (David Ball) writes:
>> >On Tue, 28 Nov 2000 03:22:23 GMT, j...@watson.ibm.com wrote:
>> >>In article <3a1fe2fc...@news.escape.ca>,
>> >> wra...@mb.sympatico.ca (David Ball) writes:
>> >>>On Sat, 25 Nov 2000 00:29:04 GMT, j...@watson.ibm.com wrote:
>> >>>[..]
>
>SNIp....
>
>> The weather a year from now is effectively random.
>
>Nonsense. There will be a random variation within some small
>range, but this is far different from it being truly random. We
>can calculate what that range will be.

You are quibbling. The weather a year from now is
effectively a random sample from some distribution. And from my
anthropocentric viewpoint the range is not all that "small".

>> If I go outside and wave my arms this will completely change the weather in
>> a
>> year in an unpredictable way.
>
>At best you can say it might, but even there, over long periods of time
>the effect of waving your arms will be washed out completely. Note
>that in your argument you are really adopting Ball's argument that
>nothing about the weather is random, but that our ability to describe
>the initial conditions are limited. If one accepts your contention that
>the weather at future dates is random, then the system memory of
>your actions will be washed out.

The effect does not wash out. The trajectories diverge
exponentially. So the weather in a year will be completely
different. However the climate will (probably) be the same. So by
waving my arms I am causing a completely different random sample to
be taken from the same random distribution.
And I am not adopting Ball's argument. As I noted before
quantum effects are continuously introducing random noise into the
system. Because of the chaotic nature of weather this noise is
amplified until it wipes out any information about the initial
conditions.
James B. Shearer

j...@watson.ibm.com

unread,
Nov 29, 2000, 11:20:33 PM11/29/00
to
In article <3A248DA0...@sprynet.com>,
on Tue, 28 Nov 2000 21:01:20 -0800,

Phil Hays <spampos...@sprynet.com> writes:
>Range of predictions doesn't equal accuracy of predictions.

If complex models of climate sensitivity to CO2 forcing are
more accurate than simple models I would expect them to be clustered
more tightly.

James B. Shearer

goldfish

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to

David Ball wrote:

I recall seeing somewhere else that climate, in all of its
complexity, can be modelled with around 20 parameters. True?
If not, how many?


> >Nothing you have written disputes the basic mechanism to
> >distinguish the significant from the noise, as I have outlined:
> >you determine the uncertainty, and if a feature lies outside
> >of that range, then the model should account for it.
> >
> >The mathematical tools are available to answer this question.
> >If you believe in the models, then you should believe in the
> >derivation of the uncertainty of the models, which can be defined
> >in a number of mathematically precise ways. (One reason I
> >deliberately left my question vague was to let you define the
> >uncertainty in any way that was felt appropriate.)
> >
> >By leaving the uncertainty undefined and unquantified,
> >any answer to a question regarding the validity of the model
> >remains ambiguous. If you cannot verify that the model(s) are
> >valid, the work will always have questionable merit.
>
> Does a weather prediction model accurately capture the
> features in the weather? Does a meso-scale model of a supercell
> thunderstorm develop that storm at the correct point in time and
> space? Does a climate model run using data from the 1950s and
> integrated forward for 50 years accurately capture the essential
> features of the climate during that period? That's how you tell
> whether your model is working. This isn't a problem in extracting
> spectral lines from background noise. Or curve fitting. Or anything
> else for that matter.

First: climate, as has been stated many times, is not
weather, so any comparisons between the two must be made
carefully. In particular, I do not think your comparison is fruitful.
Second, a climate model run from 1950 does not capture
all of the essential features, as it does not capture unforced
variability. So, if I were to take a strict, uncharitable view,
I would say that the model is not working. (Normally, I would
say that there are differences that indicate the model is "incomplete.")
Third, your defensive tone is not productive.

[...]

> >> The data that are measured are not simply plugged into some
> >> equation. The data are carefully quality controlled. They go through a
> >> sophisticated data assimilation process where the data are fitted to
> >> the model grid both spatially and temporally.
> >> I wish it was as easy as you say it is to evaluate model
> >> output. It would make my job a lot easier. Trouble is, it's a hell of
> >> a lot more involved than the simplistic way you are looking at it.
> >> So the short of it is that it cannot be calculated.
> >
> >I get the feeling that you do not _want_ to calculate it,
> >because you know it would raise more questions than
> >you can answer concerning climate features, such as ENSO,
> >that should be accounted for, but are not. This puts many
> >unanswered, fundamental questions about climate in sharp relief.
>
> So there's a grand conspiracy between modelers and those of us
> who use models and all because you are sure that assessing model
> performance is so simple. I can't argue with logic like that.

No grand conspiracy, just a recognition of self-interest.
I have been in the same place.


> >> >> It depends on what is being modeled, on how the model works,
> >> >> how the data is assimilated, what the physical parameterizations are,
> >> >> what the model time step is, how feedbacks operate, ... There is no
> >> >> one answer that is going to satisfy a question like that.
> >> >
> >> >Indeed there is one answer, as I have shown above.
> >>
> >> No there is not, because you are looking at the problem in too
> >> simplistic a way. The errors that are produced are dependent on the
> >> factors that I list above. You simply can't ignore them.
> >
> >If one cannot ignore them, it is clear that you need to figure
> >out a way to systematically, quantitatively account for them.
> >
>
> Boy, I wish it was as easy as you make it out to be. I
> wouldn't waste my time issuing those bust forecasts when the numerical
> models lead me down the garden path.

Please excuse me, my vanity about writing has apparently
gotten the better of me. I like putting complex questions in
simple terms, and I do not like detail that does not clarify.
So just because I describe things in schoolboy language does
not mean that I do not appreciate how complex things can be.
Putting something in simple terms does not mean it is easy!

Regards,
Peter Mott

goldfish

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to

goldfish wrote:

> David Ball wrote:
> > >I get the feeling that you do not _want_ to calculate it,
> > >because you know it would raise more questions than
> > >you can answer concerning climate features, such as ENSO,
> > >that should be accounted for, but are not. This puts many
> > >unanswered, fundamental questions about climate in sharp relief.
> >
> > So there's a grand conspiracy between modelers and those of us
> > who use models and all because you are sure that assessing model
> > performance is so simple. I can't argue with logic like that.
>
> No grand conspiracy, just a recognition of self-interest.
> I have been in the same place.

I do not want you to get the wrong impression
here; I need to clarify that point a bit.

My initial goal was to find out what the uncertainty
in the climate models is. This is a useful thing to know
in any attempt at modelling, as it provides a quantitative
distinction between noise and significance. Knowing this
value, one can objectively, quantitatively answer many
questions concerning validity: for example, it addresses
Dr Shearer's question about the smallest feature that
a model should include; it also applies my question about the
predicted temperature increase based on the increase in
greenhouse gases.

As nobody seems to know the answer to this question,
and further, as some posters have dismissed my question as
'too simple because modelling the climate is complex,'
I must admit that my goal has shifted a bit. (I am being vain,
and in any case I shall soon give up.)

In the spirit of collegiality, my goal is to convince modellers
that it is in your best interests to answer this question, because it
makes your work more convincing and valid.

Regards,
Peter Mott

wmconnolley

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to
Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> wmconnolley <w...@bas.ac.uk> writes:

>> Ah, but we don't know the "given" function exactly.

> It is intrinsic to the _model_ that you know the functional
> form used at whatever level of approximation you are doing ...

In this case (the question being the accuracy of the models, compared,
one must assume, to reality), "R" is not the function of the model, but
the true one, of reality, which is not known.

wmconnolley

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to
goldfish <p...@xbt.nrl.navy.mil> wrote:
> wmconnolley wrote:

> > Ah, but we don't know the "given" function exactly. We know it
> > roughly, but there are no precise theorems to translate this
> > into error bounds.
>
W H Press, S A Teukolsky, W T Vetterling and B P
Flannery, "Numerical Recipes in Fortran", 2nd Ed., Cambridge
Univ. Press (Cambridge, UK), 1992. See chapter 10, section 6:
"Conjugate Gradient Methods in Multidimensions".

I doubt this reference does what is needed. I'm not talking about the
problems of computing things (which is what CG is, no?) but the
problem

given a problem Y=F(X,t), and another one Y'=f(X',t)

where F is reality, f the model, X the real state and X' the model
state, then I don't think there are any *theorems* which say,

given that F ~ f (in some sense) and t not too big,

then Y ~ Y' (in some sense)

Of course, there is practical experience (in particular, in weather
forecasting we know that if t is up to, say, 10 days, then Y ~ Y', but
in this case there are no theorems to back this up).

Which (to come back to where this began) makes your formalism
inapplicable to this problem, as far as I can see.
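
A minimal sketch of the point, taking the Lorenz system as a
hypothetical stand-in for F and f (nothing climate-specific is
implied; it only shows that "how long do Y and Y' agree" is
measured, not proved):

    # "Reality" F and "model" f are the same Lorenz system with one
    # parameter 0.1% apart.  Their trajectories Y and Y' agree for
    # small t, then part company; no theorem fixes how small is small.
    import numpy as np

    def lorenz(s, sigma, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def run(sigma, s0=(1.0, 1.0, 1.0), dt=0.001, steps=40000):
        s, out = np.array(s0), np.empty((steps, 3))
        for i in range(steps):                    # fixed-step RK4
            k1 = lorenz(s, sigma)
            k2 = lorenz(s + 0.5 * dt * k1, sigma)
            k3 = lorenz(s + 0.5 * dt * k2, sigma)
            k4 = lorenz(s + dt * k3, sigma)
            s = s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
            out[i] = s
        return out

    Y, Yp = run(10.0), run(10.01)                 # F vs f
    gap = np.linalg.norm(Y - Yp, axis=1)
    print("Y ~ Y' (gap < 1) holds up to t ~", 0.001 * np.argmax(gap > 1.0))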

Steinn Sigurdsson

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to
wmconnolley <w...@bas.ac.uk> writes:

> Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> > wmconnolley <w...@bas.ac.uk> writes:

> >> Ah, but we don't know the "given" function exactly.

> > It is intrinsic to the _model_ that you know the functional


> > form used at whatever level of approximation you are doing ...

> In this case (the question being the accuracy of the models, compared
> one must assume to reality), "R" is not the function of the model, but
> the true one, of reality, which is not known.

Not as the question was posed by "goldfish":
he referred to "R" as the calculated model output,
which is an explicit (albeit usually an iterated series)
function of some set of input parameters.

And the functional accuracy of the models, their internal
self-consistency, is a lower bound on their true accuracy,
how well they represent the real climate.

Anyway, that was my understanding of what he was getting to.

Interesting thread.


goldfish

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to

wmconnolley wrote:

> goldfish <p...@xbt.nrl.navy.mil> wrote:
> > wmconnolley wrote:
>
> > > Ah, but we don't know the "given" function exactly. We know it
> > > roughly, but there are no precise theorems to translate this
> > > into error bounds.
> >
> > W H Press, S A Teukolsky, W T Vetterling and B P
> > Flannery, "Numerical Recipes in Fortran", 2nd Ed., Cambridge
> > Univ. Press (Cambridge, UK), 1992. See chapter 10, section 6:
> > "Conjugate Gradient Methods in Multidimensions".
>
> I doubt this reference does what is needed. I'm not talking about the
> problems of computing things (which is what CG is, no?) but the
> problem
>
> given a problem Y=F(X,t), and another one Y'=f(X',t)
>
> where F is reality, f the model, X the real state and X' the model
> state, then I don't think there are any *theorems* which say,
>
> given that F ~ f (in some sense) and t not too big,
>
> then Y ~ Y' (in some sense)
>
> Of course, there is practical experience (in particular, in weather
> forecasting we know that if t is up to, say, 10 days, then Y ~ Y', but
> in this case there are no theorems to back this up).
>
> Which (to come back to where this began) makes your formalism
> inapplicable to this problem, as far as I can see.

So far as I understand you, your criterion is much more
demanding than mine.

The uncertainty is a matter of an estimate of the variability
of a model prediction. Let's say that a model is shown to nicely
fit data over some limited range, and based on the fit, you
project results to a range outside of that fit. Because the data
has scatter, the governing parameters of the model have some
uncertainty, which therefore produces some uncertainty in the
projection. If the model is entirely wrong, you are likely to find
that the reality is outside of the uncertainty range of the projection.
Uncertainty defines the best you can do based on how well you
measure the data, but you obviously can do worse. If you are
indeed doing worse, it is useful because it points out that the model is
wrong.

OTOH, if the reality falls within the range of uncertainty,
your model is shown to be _useful_, but not necessarily a genuine
representation of the true physics.

So the uncertainty formalism says nothing about the relationship
between reality and the model, only about what a model ought to,
or can, do.

Regards,
Peter Mott

Don Libby

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to
goldfish wrote:

A bone-headed blind empiricist might approach the problem thusly.

First, gather data from CC95 WGI Ch. 5 "Climate Models - Evaluation".


Table 5.2  Coupled model simulations of global average temperature
and precipitation (1975)

             Surface air temp    Precip
             (deg C)             (mm/day)
             DJF     JJA         DJF     JJA
BMRC         12.7    16.7        2.79    2.92
CCC          12.0    15.7        2.72    2.86
COLA         12.6    15.5        2.64    2.67
CSIRO        12.1    15.3        2.73    2.82
GFDL          9.6    14.0        2.39    2.50
GISS         13.0    15.6        3.14    3.13
MRI          13.4    17.4        2.89    3.03
NCAR         15.5    19.6        3.78    3.74
UKMO         12.0    15.0        3.02    3.09
MPI(OPYC)    11.2    14.8        2.64    2.73
MPI(LSG)     11.0    15.2

Observed     12.4    15.9        2.74    2.90

Next, compute summary statistics:

Mean         12.28   15.89       2.87    2.95
Std Dev       1.43    1.46       0.36    0.32

Next, compare expected value to observed value.

Finally, if willing to stick out one's neck and suggest that we have
here a random sample of independent and identically distributed
observations (n=11 for temp, 10 for precip) from an infinite population
of atmosphere-ocean coupled GCMs with unknown variance, look up
Student's-t to find 95% CI for the mean.
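
In stdlib Python the whole recipe, applied to the DJF temperatures,
fits in a dozen lines (assuming, as the t-interval does, an i.i.d.
sample; note the 1.43 above looks like the n-divisor standard
deviation, while the t-interval wants the n-1 version, about 1.50):

    import math, statistics

    djf = [12.7, 12.0, 12.6, 12.1, 9.6, 13.0, 13.4, 15.5, 12.0, 11.2, 11.0]
    n = len(djf)
    mean = statistics.mean(djf)      # ~12.28, as above
    sd = statistics.stdev(djf)       # sample (n-1) standard deviation
    t95 = 2.228                      # Student's t, two-sided 95%, df = 10
    half = t95 * sd / math.sqrt(n)
    print(f"ensemble mean {mean:.2f} +/- {half:.2f} deg C (95% CI)")
    print("observed 12.4 inside the CI:", abs(12.4 - mean) <= half)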

eh?

-dl

--

*********************************************************
* Replace "never.spam" with "dlibby" to reply by e-mail *
*********************************************************

Leonard Evens

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to
goldfish wrote:
>
> goldfish wrote:
>
> > David Ball wrote:
> > > >I get the feeling that you do not _want_ to calculate it,
> > > >because you know it would raise more questions than
> > > >you can answer concerning climate features, such as ENSO,
> > > >that should be accounted for, but are not. This puts many
> > > >unanswered, fundamental questions about climate in sharp relief.
> > >
> > > So there's a grand conspiracy between modelers and those of us
> > > who use models and all because you are sure that assessing model
> > > performance is so simple. I can't argue with logic like that.
> >
> > No grand conspiracy, just a recognition of self-interest.
> > I have been in the same place.
>
> I do not want you to get the wrong impression
> here; I need to clarify that point a bit.
>
> > My initial goal was to find out what the uncertainty
> > in the climate models is. This is a useful thing to know
> > in any attempt at modelling, as it provides a quantitative
> > distinction between noise and significance. Knowing this
> > value, one can objectively, quantitatively answer many
> > questions concerning validity: for example, it addresses
> > Dr Shearer's question about the smallest feature that
> > a model should include; it also applies to my question about the
> > predicted temperature increase based on the increase in
> > greenhouse gases.
>
> As nobody seems to know the answer to this question,
> and further, as some posters have dismissed my question as
> 'too simple because modelling the climate is complex,'
> I must admit that my goal has shifted a bit. (I am being vain,
> and in any case I shall soon give up.)
>
> In the spirit of collegiality, my goal is to convince modellers
> that it is in your best interests to answer this question, because it
> makes your work more convincing and valid.
>
> Regards,
> Peter Mott

In case you hadn't noticed, with the possible exception of Bob
Grumbine, you are not addressing your concerns to "climate modelers",
but rather to educated observers. I think that if you study CC95,
as Don Libby suggested, you will find at least some of the answers
to your questions. If that doesn't suffice, you can always
follow up by looking up the references.

But doing that may be something of a waste of time since that
report is 5 years old and a new report is coming out in the near
future. For example, it has been widely reported that the latest
assessment suggests a higher value for top of the range of projections
for average global temperature by the year 2100. I for one would
like to know exactly what that means.

goldfish

unread,
Nov 30, 2000, 3:00:00 AM11/30/00
to

Leonard Evens wrote:

> goldfish wrote:
> >
> > goldfish wrote:
> >
> > > David Ball wrote:

> > > > >I get the feeling that you do not _want_ to calculate it,
> > > > >because you know it would raise more questions than
> > > > >you can answer concerning climate features, such as ENSO,
> > > > >that should be accounted for, but are not. This puts many
> > > > >unanswered, fundamental questions about climate in sharp relief.
> > > >
> > > > So there's a grand conspiracy between modelers and those of us
> > > > who use models and all because you are sure that assessing model
> > > > performance is so simple. I can't argue with logic like that.
> > >
> > > No grand conspiracy, just a recognition of self-interest.
> > > I have been in the same place.
> >

Well no, actually, but I do appreciate all contributions.


> I think that if you study CC95,
> as Don Libby suggested, you will find at least some of the answers
> to your questions. If that doesn't suffice, you can always
> follow up by looking up the references.

The CC95 report is a luxury that I do not have.
My local technical library does not have access to it, and
I cannot afford to buy it. So I would appreciate a journal
article reference, or a web site, etc.


> But doing that may be something of a waste of time since that
> report is 5 years old and a new report is coming out in the near
> future. For example, it has been widely reported that the latest
> assessment suggests a higher value for top of the range of projections
> for average global temperature by the year 2100. I for one would
> like to know exactly what that means.

As Dr Libby pointed out, perhaps more has been produced
than I realize. Duh.

Regards,
Peter Mott

Don Libby

unread,
Nov 30, 2000, 8:38:25 PM11/30/00
to
goldfish wrote:
>
>
> The CC95 report is a luxury that I do not have.
> My local technical library does not have access to it, and
> I cannot afford to buy it. So I would appreciate a journal
> article reference, or a web site, etc.
>

Dr. Mott, the IPCC WGI website is at:

http://www.meto.gov.uk/sec5/CR_div/ipcc/wg1/

If the inter-library loan idea doesn't pan out, you can order this
(probably cheap + shipping and handling) report directly from IPCC WGI
(see the publication section on their website):

"An Introduction to Simple Climate Models used in the IPCC Second
Assessment Report". (1997) J T Houghton, L G Meira Filho, D J Griggs and
K Maskell (Eds.). IPCC Technical Paper 2, IPCC, Geneva, Switzerland, 51
pp. Available from: IPCC WGI Technical Support Unit, Hadley Centre,
Meteorological Office, London Road, Bracknell, Berkshire, RG12 2SY.

Leonard Evens

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to
goldfish wrote:
>
> Leonard Evens wrote:
>
> > goldfish wrote:
> > >
> > > goldfish wrote:
> > >
> > > > David Ball wrote:
> > > > > >I get the feeling that you do not _want_ to calculate it,
> > > > > >because you know it would raise more questions than
> > > > > >you can answer concerning climate features, such as ENSO,
> > > > > >that should be accounted for, but are not. This puts many
> > > > > >unanswered, fundamental questions about climate in sharp relief.
> > > > >
> > > > > So there's a grand conspiracy between modelers and those of us
> > > > > who use models and all because you are sure that assessing model
> > > > > performance is so simple. I can't argue with logic like that.
> > > >
> > > > No grand conspiracy, just a recognition of self-interest.
> > > > I have been in the same place.
> > >
> The CC95 report is a luxury that I do not have.
> My local technical library does not have access to it, and
> I cannot afford to buy it. So I would appreciate a journal
> article reference, or a web site, etc.
>
> > But doing that may be something of a waste of time since that
> > report is 5 years old and a new report is coming out in the near
> > future. For example, it has been widely reported that the latest
> > assessment suggests a higher value for top of the range of projections
> > for average global temperature by the year 2100. I for one would
> > like to know exactly what that means.
>
> As Dr Libby pointed out, perhaps more has been produced
> than I realize. Duh.
>
> Regards,
> Peter Mott

You must be poor indeed. I own all the IPCC scientific assessments,
and I am hardly rich. The 1995 assessment seems to have gone up in
price, but it currently costs $41.95.

Don Libby has already suggested some less expensive alternatives,
and you should be able to find these reports at any major university
library. There are several such universities in the Washington area,
which should not be too difficult for you to get to.

wmconnolley

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to
goldfish <p...@xbt.nrl.navy.mil> wrote:
> The uncertainty is a matter of an estimate of the variability
> of a model prediction.

Well, first off, this is not the really interesting question. The
interesting question is, how close are model predictions to what
will become reality. But, we can consider this other question.

> Lets say that a model is shown to nicely
> fit data over some limited range, and based on the fit, you

Well, in this case, this means 2 things:

- running baseline control runs for a "long time" - say 1000 years -
and checking that the model climate remains within bounds for
summary variables (such as global mean T, or total seaice area), and
that individual variables (temperature at level X at latitude Y and
longitude Z at time of year T) aren't anywhere too badly wrong,

- running simulations of the last 120-odd years (people tend to
start in 1860-ish) and checking that these fit roughly the path
that reality took

> project results to a range outside of that fit. Because the data
> has scatter,

the data has 2 sorts of scatter: measurement error (or missing data) and
"scatter" due to natural variability

> the governing parameters of the model have some
> uncertainty,

uhu, you've also missed uncertainty due to the total lack of
measurements of some things: stratospheric temperatures in the
last century, for example

> which therefore produces some uncertainty of the
> projection. If the model is entirely wrong, you are likely to find
> that the reality is outside of the uncertainty range of the
> projection.

well, this isn't the case (depending on how tightly you draw your
bounds)

> Uncertainty defines the best you can do based on how well you
> measure the data, but you obviously can do worse. If you are
> indeed doing worse, it is useful because it points out that the model
> is wrong.

You seem to think of wrong as all-or-nothing. Perhaps because you're
thinking (as people so often seem to) of global variables like
avg temperature. But the model doesn't generate this: it's not a model
variable: it's made afterwards by averaging all the points together.
It can be right in summer and wrong in winter. It can be good on a
hemispheric average, and wrong at a given location.

> OTOH, if the reality falls within the range of uncertainty,
> your model is shown to be _useful_, but not necesarily a genuine
> representation of the true physics.

Well, we *know* the model is a rep of the true physics, at least on
the large scale. The question, though, is whether this means reliability
on smaller scales.

wmconnolley

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to
Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> Not as the question was posed by "goldfish",
> he referred to "R" as the calculated model output,
> which is an explicit (albeit usually an iterated series)
> function of some set of input parameters.

I may well have misunderstood. As I said in reply to him, this is
(to me) a less interesting question.

> And the functional accuracy of the models, their internal
> self-consistency, is a lower bound on their true accuracy,
> how well they represent the real climate.

Um... not quite sure what you mean here. The spread between different
models is no bound on the accuracy of any given model. If you mean
the spread between, say, several ensemble members of the same AOGCM
simulating the next 100-odd years, then this is a weak bound, because
they tend to be very similar. What did you mean by "internal
self-consistency"? They usually conserve, say, heat or moisture.

Steinn Sigurdsson

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to
wmconnolley <w...@bas.ac.uk> writes:

> Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> > Not as the question was posed by "goldfish",
> > he referred to "R" as the calculated model output,
> > which is an explicit (albeit usually an iterated series)
> > function of some set of input parameters.

> I may well have misunderstood. As I said in reply to him, this is
> (to me) a less interesting question.

Well, only Peter knows what he really meant, but that
is how I read it.

> > And the functional accuracy of the models, their internal
> > self-consistency, is a lower bound on their true accuracy,
> > how well they represent the real climate.

> Um... not quite sure what you mean here. The spread between different
> models is no bound on the accuracy of any given model. If you mean
> the spread between, say, several ensemble members of the same AOGCM
> simulating the next 100-odd years, then this is a weak bound, because
> they tend to be very similar. What did you mean by "internal
> self-consistency"? They usually conserve, say, heat or moisture.

OK, so the (narrow) issue at hand, as I understand it, is:
how well can current models represent real natural climate trends?
E.g., if the power spectrum is red rather than flat, is it possible
for currently implemented models to capture something like the
real behaviour (or, conversely, do they generate model variances
that are not features of real climate)?
And, of course, hence, how accurately can the models capture
future natural or forced climate trends.

This is a question that is relevant both to any given numerical
model and to the whole ensemble of models with different physics
or levels of approximation included, and to any one model and the
ensemble of models run with different time or spatial resolutions.

Then, the functional variance of the model - its sensitivity
of output to input parameters only known to a finite precision -
is a _lower bound_ on any claimed accuracy the model (or ensemble
of models) can make about future climate.

This is a relatively simple statement, and one which most numerical
modellers avoid considering in depth because the answer is all
too often unsatisfactory.
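
One concrete reading of that statement, as a sketch (the
two-parameter "model" below is invented, not any real GCM; only the
Monte Carlo procedure is the point):

    import math, random, statistics

    def toy_model(sens, forcing, t=100.0, tau=30.0):
        # Equilibrium response scaled by sens, approached with lag tau.
        return sens * forcing * (1.0 - math.exp(-t / tau))

    random.seed(0)
    runs = []
    for _ in range(10000):
        s = random.gauss(0.8, 0.2)   # hypothetical parameter: 0.8 +/- 0.2
        f = random.gauss(4.0, 0.5)   # hypothetical forcing: 4.0 +/- 0.5
        runs.append(toy_model(s, f))

    mu, sd = statistics.mean(runs), statistics.stdev(runs)
    print(f"projection {mu:.2f} +/- {sd:.2f}")
    # Whatever the physics, no accuracy claim tighter than ~sd is
    # defensible: that spread is the lower bound described above.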

goldfish

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to

Leonard Evens wrote:

> goldfish wrote:
> You must be poor indeed. I own all the IPCC scientific assessments,
> and I am hardly rich. The 1995 assessment seems to have gone up in
> price, but it currently costs $41.95.

As $42 is a trifling amount for you, I would be
glad to accept a donation.

Regards,
Peter Mott

goldfish

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to

Don Libby wrote:

> goldfish wrote:
> >
> >
> > The CC95 report is a luxury that I do not have.
> > My local technical library does not have access to it, and
> > I cannot afford to buy it. So I would appreciate a journal
> > article reference, or a web site, etc.
> >
>
> Dr. Mott,

I suppose I deserved that.


> the IPCC WGI website is at:
>
> http://www.meto.gov.uk/sec5/CR_div/ipcc/wg1/
>
> If the inter-library loan idea doesn't pan out, you can order this
> (probably cheap + shipping and handling) report directly from IPCC WGI
> (see the publication section on their website):
>
> "An Introduction to Simple Climate Models used in the IPCC Second
> Assessment Report". (1997) J T Houghton, L G Meira Filho, D J Griggs and
> K Maskell (Eds.). IPCC Technical Paper 2, IPCC, Geneva, Switzerland, 51
> pp. Available from: IPCC WGI Technical Support Unit, Hadley Centre,
> Meteorological Office, London Road, Bracknell, Berkshire, RG12 2SY.

Already got that one, but thanks anyway.

It does cover what I have been asking, but not in a comprehensive
way. Look at figure 8, the forcings and their uncertainties for
the different greenhouse gases: when you add up the forcings to get
the total, the accumulated uncertainty is nearly as large as the
total forcing. As this is one of the most important parameters,
it thus seems that the model predictions are more doubtful than ever.
OTOH, there is no discussion of the uncertainty of this total --
perhaps the uncertainties of the different gases are not independent,
giving a lower uncertainty than the accumulated total, so I am left
wondering.
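
The two limiting cases are easy to sketch. The numbers below are
placeholders loosely patterned on per-gas forcing values, not the
actual figure-8 entries:

    import math

    # (forcing, uncertainty) in W/m^2 for each gas -- invented values
    forcings = [(1.56, 0.2), (0.47, 0.1), (0.14, 0.1), (0.30, 0.15)]

    total = sum(v for v, _ in forcings)
    indep = math.sqrt(sum(u * u for _, u in forcings))  # independent errors
    corr = sum(u for _, u in forcings)                  # fully correlated

    print(f"total forcing {total:.2f} W/m^2")
    print(f"uncertainty {indep:.2f} (independent) to {corr:.2f} (correlated)")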

Regards,
Peter Mott

goldfish

unread,
Dec 1, 2000, 3:00:00 AM12/1/00
to

Steinn Sigurdsson wrote:

> wmconnolley <w...@bas.ac.uk> writes:
>
> > Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> > > Not as the question was posed by "goldfish",
> > > he referred to "R" as the calculated model output,
> > > which is an explicit (albeit usually an iterated series)
> > > function of some set of input parameters.
>
> > I may well have misunderstood. As I said in reply to him, this is
> > (to me) a less interesting question.
>
> Well, only Peter knows what he really meant, but that
> is how I read it.

You got it dead-on.

Regards,
Peter Mott

Leonard Evens

unread,
Dec 2, 2000, 3:00:00 AM12/2/00
to

It is not a trifling amount. But I am willing to spend it in order
to learn something about an important topic, particularly one on
which I want to express opinions.

>
> Regards,
> Peter Mott

wmconnolley

unread,
Dec 3, 2000, 3:00:00 AM12/3/00
to
Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:
> eg. if the power spectrum is red rather than flat, is it possible
> for currently implemented models to capture something like the
> real behaviour (or conversely, do they generate model variances
> that are not features of real climate).

It's certainly possible for models to capture something like the
real behaviour (possible, and they do).

Remember, though, that once you start looking at the tail of the
spectrum you rapidly run out of directly observed data and have to
start relying on proxies.

> And, of course, hence, how accurately can the models capture
> future natural or forced climate trends.

The trouble is, your "hence" doesn't follow, or at least is not known
to follow (though I agree, people building models consider it a
desirable property). It is perfectly possible to imagine 2 models,
one with lovely natural-looking variability, and another that is
dead-flat with respect to internal variability, but the first does
poorly on forced change, whereas the latter could do an excellent job,
in the sense of accurately predicting the future climate running-mean
state.

> Then, the functional variance of the model - its sensitivity
> of output to input parameters only known to a finite precision -
> is a _lower bound_ on any claimed accuracy the model (or ensemble
> of models) can make about future climate.

I'm not sure this is true. It's a mistake to say, for example, that
the model control climate sensitivity to, say, the unknown value
of parametrisation X is a lower bound on the accuracy of future
simulations. Because the information available is not just the value of
X, but how having a value of X affects the model simulation in
concert with a whole lot of other parametrisations, scalings, etc.
In other words, even though direct physical measurement can't tell us
the appropriate value for X, there is indirect evidence for the
correct value for X in a given model. Then you have to worry about
whether that value is still appropriate in a changed climate: true.

Halpern

unread,
Dec 3, 2000, 3:00:00 AM12/3/00
to
Steinn Sigurdsson wrote:

> wmconnolley <w...@bas.ac.uk> writes:
> > Steinn Sigurdsson <ste...@najma.astro.psu.edu> wrote:

SNIP...

> Then, the functional variance of the model - its sensitivity
> of output to input parameters only know to a finite precision,
> is a _lower bound_ on any claimed accuracy the model (or ensemble
> of models) can make about future climate.
>

> This is a relatively simple statement, and one which most numerical
> modellers avoid considering in depth because the answer is all
> too often unsatisfactory.

Curious. Most of the modellers that I know (combustion systems
mostly) are obsessed with sensitivity analysis (a toy sketch of the
exercise follows the list), because

1. It tells them where they have to work hard.
2. It tells the data providers where they have to be careful and precise.
3. It tells the modellers where one can be a bit less rigorous.
4. It tells the data providers where they don't have to work very hard.
5. It tells the observers where they should look for the first effects.
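
A toy version of the exercise (model and numbers invented): perturb
each parameter by its uncertainty, one at a time, and rank the
output changes.

    def model(p):
        # Hypothetical response: sensitivity * forcing - albedo feedback.
        return p["sens"] * p["forcing"] - p["albedo"] * 10.0

    base = {"sens": 0.8, "forcing": 4.0, "albedo": 0.05}
    uncert = {"sens": 0.2, "forcing": 0.5, "albedo": 0.02}

    y0 = model(base)
    ranking = []
    for name in base:
        bumped = dict(base, **{name: base[name] + uncert[name]})
        ranking.append((abs(model(bumped) - y0), name))

    for dy, name in sorted(ranking, reverse=True):
        print(f"{name:8s} moves the output by {dy:.2f}")  # work on the top one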

AFAIK, the answer to 5 for temperatures given by GCMs is high
latitudes. That also appears to be the place where the strongest
effects have been observed to date. Is there a "simple" model that
gets this result?

josh halpern


Halpern

unread,
Dec 3, 2000, 3:00:00 AM12/3/00
to
I'm going to push hard on a concept that this discussion has
made clear to me.

Climate is the distribution from which weather is sampled.

GCMs calculate the climate (distribution).

Weather programs calculate temporal and spatial selections
from the climate.

There are statistics, such as global temperature, which are measures
of individual degrees of freedom of the climate (distribution).

As has been shown, the GCMs do a pretty good job of calculating
these average values.
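
A throwaway illustration of that framing, with an arbitrary gamma
distribution standing in for the climate of one spot:

    import random, statistics

    random.seed(1)
    true_mean = 2.0 * 7.5   # gamma(k=2, theta=7.5) has mean k*theta

    # "Weather": 30 years of daily draws from the climate distribution.
    weather = [random.gammavariate(2.0, 7.5) for _ in range(365 * 30)]
    print(f"one day's weather:  {weather[0]:.1f}")
    print(f"30-year mean:       {statistics.mean(weather):.2f}"
          f"  (climate says {true_mean:.1f})")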

josh halpern



goldfish

unread,
Dec 4, 2000, 3:00:00 AM12/4/00
to

Don Libby wrote:

> goldfish wrote:
> >
> > wmconnolley wrote:
> >
> > >
> > > given a problem Y=F(X,t), and another one Y'=f(X',t)
> > >
> > > where F is reality, f the model, X the real state and X' the model
> > > state, then I don't think there are any *theorems* which say,
> > >
> > > given that F ~ f (in some sense) and t not too big,
> > >
> > > then Y ~ Y' (in some sense)
> > >
> > > Of course, there is practical experience (in particular, in weather
> > > forecasting we know that if t is up to, say, 10 days, then Y ~ Y', but
> > > in this case there are no theorems to back this up).
> > >
> > > Which (to come back to where this began) makes your formalism
> > > inapplicable to this problem, as far as I can see.
> >
> > So far as I understand you, your criterion is much more
> > demanding than mine.
> >
> > The uncertainty is a matter of an estimate of the variability
> > of a model prediction. Let's say that a model is shown to nicely
> > fit data over some limited range, and based on the fit, you
> > project results to a range outside of that fit. Because the data
> > has scatter, the governing parameters of the model have some
> > uncertainty, which therefore produces some uncertainty in the
> > projection. If the model is entirely wrong, you are likely to find
> > that the reality is outside of the uncertainty range of the projection.
> > Uncertainty defines the best you can do based on how well you
> > measure the data, but you obviously can do worse. If you are
> > indeed doing worse, it is useful because it points out that the model is
> > wrong.
> >
> > OTOH, if the reality falls within the range of uncertainty,
> > your model is shown to be _useful_, but not necessarily a genuine
> > representation of the true physics.
> >
> > So the uncertainty formalism says nothing about the relationship
> > between reality and the model, only about what a model ought to,
> > or can, do.
> >
> > Regards,
> > Peter Mott
>
> A bone-headed blind empiricist might approach the problem thusly.

Here!

The problem I have with this is that results from the models
are products of human imagination, and do not deserve the same
weight as genuine data. If one particular model, e.g., NCAR,
does not fit the observation very well, then there is no reason to
include it in the average. But then, if NCAR is excluded, the model
ensemble average decreases, and it does not match the observation
as well. So you have a cherry-picking situation, where you can
include whatever model you like so that the average matches the
data.

Regards,
Peter Mott

goldfish

unread,
Dec 4, 2000, 3:00:00 AM12/4/00
to

wmconnolley wrote:

> goldfish <p...@xbt.nrl.navy.mil> wrote:
> > The uncertainty is a matter of an estimate of the variability
> > of a model prediction.
>

> Well, first off, this is not the really interesting question. The
> interesting question is, how close are model predictions to what
> will become reality. But, we can consider this other question.

How close the model is to reality is, of course, the
_only_ question. But when projecting 50 years into the
future, there is no way to know other than waiting for
the results. So the impatient among us will look to
other factors in the model(s) to provide some measure of
how much confidence we may have in their predictions.


> > Lets say that a model is shown to nicely
> > fit data over some limited range, and based on the fit, you
>

> Well, in this case, this means 2 things:
>
> - running baseline control runs for a "long time" - say 1000 years -
> and checking that the model climate remains within bounds for
> summary variables (such as global mean T, or total seaice area), and
> that individual variables (temperature at level X at latitude Y and
> longitude Z at time of year T) aren't anywhere too badly wrong,
>
> - running simulations of the last 120-odd years (people tend to
> start in 1860-ish) and checking that these fit roughly the path
> that reality took

Yep.

> > project results to a range outside of that fit. Because the data
> > has scatter,
>

> the data has 2 sorts of scatter: measurements error (or missing) and
> "scatter" due to natural variability

This is where we disagree. "Scatter" due to natural
variability isn't really scatter. But I will grant that
it is pretty hard to model, and if it were understood and
captured, probably little would be learned regarding
long-term trends. OTOH, there is a possibility that
understanding variability is the key to understanding
long-term trends in the climate. So I will never be entirely
convinced until somebody has a handle on this.

> > the governing parameters of the model have some
> > uncertainty,
>

> uhu, you've also missed uncertainty due to the total lack of
> measurements of some things: stratospheric temperatures in the
> last century, for example

Noted.


> > which therefore produces some uncertainty of the
> > projection. If the model is entirely wrong, you are likely to find
> > that the reality is outside of the uncertainty range of the
> > projection.
>

> well, this isn't the case (depending on how tightly you draw your
> bounds)

Uh, I think you misunderstood.
If I fit the climate to a model of baking bread, then over the
short term I will probably get a good fit, if my model
has enough parameters. (There is an old Chem E saying: you
can model an elephant if you have enough parameters.) When
projected over longer periods, however, it is _likely_ that, since
baking bread has nothing in common with climate, the reality will be
outside the uncertainty range of the projection. But not necessarily.
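
The elephant is easy to exhibit: made-up noisy linear data, one
sober fit and one many-parameter fit, both projected beyond the
fitted range.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 20)
    y = 0.5 * x + rng.normal(0.0, 0.1, x.size)  # "reality": line + noise

    bread = np.polyfit(x, y, deg=9)   # enough parameters for an elephant
    sober = np.polyfit(x, y, deg=1)

    x_out = 1.5                       # outside the fitted range
    print(f"truth      {0.5 * x_out: .2f}")
    print(f"deg-1 fit  {np.polyval(sober, x_out): .2f}")
    print(f"deg-9 fit  {np.polyval(bread, x_out): .2f}")  # typically wild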

But on the third hand, as you say, if the uncertainty of the model is
large, the bounds on the model projection will always contain
the reality. In that case, however, not much is learned, and the
model has no value.


> > Uncertainty defines the best you can do based on how well you
> > measure the data, but you obviously can do worse. If you are
> > indeed doing worse, it is useful because it points out that the model
> > is wrong.
>

> You seem to think of wrong as all-or-nothing. Perhaps because you're
> thinking (as people so often seem to) of global variables like
> avg temperature. But the model doesn't generate this: its not a model
> variable: its made afterwards by averaging all the points together.
> It can be right in summer and wrong in winter. It can be good on a
> hemispheric average, and wrong at a given location.

Again, I think your criterion is much more demanding than
mine. The global average temperature is a good one-dimensional
indicator, kinda like the Dow Jones Industrial average.
It is not everything, obviously, but it ain't too bad.
And if a model cannot capture it in some sense, then you
know that its predictions are, well, uncertain.

> > OTOH, if the reality falls within the range of uncertainty,
> > your model is shown to be _useful_, but not necesarily a genuine
> > representation of the true physics.
>

> Well, we *know* the model is a rep of the true physics, at least on
> the large scale. The question, though, is whether this means reliability
> on smaller scales.

Regards,
Peter Mott

Don Libby

unread,
Dec 4, 2000, 3:00:00 AM12/4/00
to
goldfish wrote:

>
> Don Libby wrote:
>
> > Finally, if willing to stick out one's neck and suggest that we have
> > here a random sample of independent and identically distributed
> > observations (n=11 for temp, 10 for precip) from an infinite population
> > of atmosphere-ocean coupled GCMs with unknown variance, look up
> > Student's-t to find 95% CI for the mean.
<...>
> So you have a cherry-picking situation, where you can
> include whatever model you like so that the average matches the
> data.

Isn't the idea of validation studies to eliminate the models that don't
match the data and keep the ones that do? It's a start.

What I suggested would help answer your question about the range of
uncertain variability in climate model projections, but it tests only
one aspect of that variability: inter-model variability in 1975. Other
tests could be arranged for intra-model variability too, and have been.
My bone-head approach is admittedly (emphatically) crude and
unsophisticated compared to the evaluation studies described in the SAR,
but at least it gets you on the path toward forming an opinion about the
reliability of GCMs for climate prognostication.

wmconnolley

unread,
Dec 4, 2000, 3:00:00 AM12/4/00
to
goldfish <p...@xbt.nrl.navy.mil> wrote:
> wmconnolley wrote:

> > the data has 2 sorts of scatter: measurements error (or missing) and
> > "scatter" due to natural variability
>
> This is where we disagree. "Scatter" due to natural
> variability isn't really scatter.

You misunderstand. When I say "scatter", I mean that it is a source
of difference between models and observations. A perfect climate model
would not be expected to reproduce the exact path of internal
variability, just the statistics of it.
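
A toy stand-in makes the distinction plain. Take an AR(1) process
(my choice, purely illustrative) as "internal variability": two runs
with identical parameters share statistics but not a path.

    import random, statistics

    def ar1(phi=0.9, n=20000, seed=0):
        rng = random.Random(seed)
        x, out = 0.0, []
        for _ in range(n):
            x = phi * x + rng.gauss(0.0, 1.0)  # same dynamics every run
            out.append(x)
        return out

    a, b = ar1(seed=1), ar1(seed=2)            # different realizations
    gap = statistics.mean(abs(u - v) for u, v in zip(a, b))
    print(f"mean |path difference|: {gap:.2f}")           # paths disagree
    print(f"variances: {statistics.variance(a):.2f} vs "
          f"{statistics.variance(b):.2f}")    # both near 1/(1-phi^2) ~ 5.3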

> But I will grant that
> it is pretty hard to model, and if it were understood and
> captured, probably little would be learned regarding
> long term trends.

No, this is wrong. It is useful, it is modelled.

> OTOH, there is a possibility that
> understanding variability that is the key to understanding
> long term trends in the climate. So I will never be entirely
> convinced until somebody has a handle on this.

But we do have a handle on it.

> > > which therefore produces some uncertainty of the
> > > projection. If the model is entirely wrong, you are likely to find
> > > that the reality is outside of the uncertainty range of the
> > > projection.

> > well, this isn't the case (depending on how tightly you draw your
> > bounds)
>
> Uh, I think you misunderstood.

I think you did. I meant "this isn't the case" in answer to your
"if the model is entirely wrong". This "if" is false, hence everything
dependent on it is null. In the sense that "if the moon is made of
cheese, then I'm a Dutchman" is true.

David Ball

unread,
Dec 4, 2000, 9:35:40 PM12/4/00
to
On Mon, 04 Dec 2000 09:56:21 -0500, goldfish <p...@xbt.nrl.navy.mil>
wrote:
[..]

>
>The problem I have with this is that results from the models
>are products of human imagination, and do not deserve the same
>weight as genuine data. If one particular model, e.g., NCAR,
>does not fit the observation very well, then there is no reason to
>include it in the average. But then, if NCAR is excluded, the model
>ensemble average decreases, and it does not match the observation
>as well. So you have a cherry-picking situation, where you can
>include whatever model you like so that the average matches the
>data.
>

Products of the human imagination? Perhaps you could explain
how the laws of physics, radiative transfer and thermodynamics are
products of the human imagination? We're not modeling Dorothy's
movement through the land of OZ here. We're modeling a physical system
that is governed by physical laws. There is no cherry picking.

--
Dave.

Scott Nudds

unread,
Dec 5, 2000, 1:13:03 AM12/5/00
to
: On Mon, 04 Dec 2000 09:56:21 -0500, goldfish <p...@xbt.nrl.navy.mil>
: wrote:
: >The problem I have with this is that results from the models

: >are products of human imagination, and do not deserve the same
: >weight as genuine data. If one particular model, e.g., NCAR,
: >does not fit the observation very well, then there is no reason to
: >include it in the average. But then, if NCAR is excluded, the model
: >ensemble average decreases, and it does not match the observation
: >as well. So you have a cherry-picking situation, where you can
: >include whatever model you like so that the average matches the
: >data.

David Ball (wra...@mb.sympatico.ca) wrote:
: Products of the human imagination? Perhaps you could explain


: how the laws of physics, radiative transfer and thermodynamics are
: products of the human imagination? We're not modeling Dorothy's
: movement through the land of OZ here. We're modeling a physical system
: that is governed by physical laws. There is no cherry picking.

Isn't it clear, Mr. Ball? It's imagination in the same way that
the statistical analysis of the results of the Florida election which shows
that Gore won the popular vote is nothing but "VuDu", and evolution is
nothing but an "unprovable fantasy constructed to destroy the concept of
God."

"The only good indian is a dead indian. We new that back in the
1600s." - Uncle Al. Nov 6, 2000 - Sci.Environment


goldfish

unread,
Dec 5, 2000, 3:00:00 AM12/5/00
to

David Ball wrote:

These are not measurements of a physical process.
The models are idealizations that, out of necessary computational
expedience, exclude some of the physics.

The list that Dr Libby provided showed only one model
that did not match the measurement, the NCAR model.
I am sure there are more. What models were not listed
that also did not match? That is cherry picking.

Regards,
Peter Mott


Don Libby

unread,
Dec 5, 2000, 3:00:00 AM12/5/00
to
goldfish wrote:
>
> David Ball wrote:
>
> > On Mon, 04 Dec 2000 09:56:21 -0500, goldfish <p...@xbt.nrl.navy.mil>
> > wrote:
> > [..]
> > >as well. So you have a cherry-picking situation, where you can
> > >include whatever model you like so that the average matches the
> > >data.
> > >
> > Products of the human imagination? Perhaps you could explain
> > how the laws of physics, radiative transfer and thermodynamics are
> > products of the human imagination? We're not modeling Dorothy's
> > movement through the land of OZ here. We're modeling a physical system
> > that is governed by physical laws. There is no cherry picking.
>
> These are not measurements of a physical process.
> The models are idealizations that out of necessary computational
> expedience, excludes some of the physics.
>
> The list that Dr Libby provided only showed one model
> that did not match the measurement, the NCAR model.
> I am sure there are more. What models were not listed
> that also did match? That is cherry picking.

You're right that if the sample is not randomly selected from the
population, or it is systematically selected according to the value of
the parameter in question, then sample selection bias becomes a serious
threat to the validity of the parameter estimate.

Recall that the population in question is the infinite variety of GCMs,
and that the population parameter in question is global average surface
temp during DJF in 1975. There are well-established theorems to
determine an estimate of the population parameter from a simple random
sample, provided all conditions are met. Further theorems can determine
the confidence interval surrounding the estimate, which is what you were
asking in the first place. Your critique boils down to "are all of the
conditions met"? In this case, I do not know if the sample of 11 models
has been randomly selected, or if it is a list of just the 11
best-fitting models.

If we threw out the NCAR model, we'd have one less degree of freedom,
which would automatically inflate the CI -- so the width of the CI would
be relatively insensitive. If the assumptions for statistical inference
based on the classic Student's t distribution are so badly violated that
we can't make a valid estimate of the CI, we might turn to so-called
"robust estimators" that generate a sampling distribution from the
observed data (e.g. compute the parameter estimate for all samples of size 1
up to 11, then use something akin to Fisher's exact method to determine
probability estimates for the CI).
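
One such resampling scheme in that spirit -- a bootstrap on the
Table 5.2 DJF temperatures; whether this is the exact machinery
meant above is my guess:

    import random, statistics

    djf = [12.7, 12.0, 12.6, 12.1, 9.6, 13.0, 13.4, 15.5, 12.0, 11.2, 11.0]
    random.seed(0)

    # Resample the 11 models with replacement, many times, and take
    # the central 95% of the resampled ensemble means as the CI.
    boots = sorted(statistics.mean(random.choices(djf, k=len(djf)))
                   for _ in range(10000))
    lo, hi = boots[249], boots[9749]
    print(f"bootstrap 95% CI for the ensemble mean: ({lo:.2f}, {hi:.2f})")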

Robust estimators may get us past the problem of whether or not
classical assumptions for the t-distribution are met, but it will not
resolve the issue of sample selection bias. The 11 models for which
data were given in Table 5.2 are selected from a larger list of 16 models
in Table 5.1, which are "all atmosphere-ocean coupled models completed
or in process as of 1995", and the short list in Table 5.2 is "all
completed AOCGCM as of 1995". I suppose the order in which AOCGCM have
been completed is relatively random and therefore the short list is a
random selection from the long list, and the long list is a random
selection from the infinite population of AOCGCM, which are observed
with the special scientific apparatus called "climate modelling".

The observations probably are not independent, however, since we can
expect teams building later models to draw on the work of pioneering
teams who built the earlier models. This would probably make the
observed range of variation in models narrower than would be expected
under the classical assumption that all observations are independent,
thus the CI drawn from the t-distribution is probably too wide, raising
the risk of failing to reject a false null. The error would seem to be
on the conservative side.

-dl

PS Goldfish, one of the teams doing model intercomparisons is the Naval
Research Laboratory in Monterey -- maybe you could give them a buzz to
see how they are answering your questions.

Miguel Aguirre

unread,
Dec 5, 2000, 3:00:00 AM12/5/00
to

Don Libby wrote:

>
>
> PS Goldfish, one of the teams doing model intercomparisons is the Naval
> Research Laboratory in Monterrey -- maybe you could give them a buzz to
> see how they are answering your questions.
>

All your discussion revolves around handling the output of the models as
random variables; nevertheless, it could be the case that ALL models have some
common bias error, in which case the analysis of results will not add any
supplementary light. This bias could be there even if the models are 'good'
numerical weather predictors and have the physics in general right. The
coupling of the ocean and the atmosphere has shed new light on how
climatic change could come about (e.g. thermohaline circulation collapse). This
could not have been deduced from any model without an ocean-atmosphere
coupling. It is perfectly logical to assume that there are loops, very
important for predicting what 'real' climatic change could be, that we do not
yet know, e.g. couplings between biosphere and atmosphere.

--
Aguirre was considered to be a thoroughly disreputable character, and his name
practically became synonymous with cruelty and treachery
Encyclopaedia Britannica.
