
computer modeling (was scientific culture)


j...@watson.ibm.com

Jan 31, 1995, 10:01:34 PM

Michael Tobis posted:
: >I think you don't understand the legitimate purposes of computer
: >modelling, and how they fit in with the public policy questions.
I responded:
: It is true I am extremely skeptical of large complicated
: computer models of poorly understood systems. ...
CDChase then posted:
>++ Has anyone else noticed that "large, complicated, computer models" are
>used everywhere including for environmental work? Why is this line of
>argument not used to debunk forecasts in all fields?

It is; however, this is sci.environment.
CDChase continued:
>"large, complicated, computer models"are used to predict most aspects of
>the future for planning purposes. I'm sure the federal budget forecasts
>and all the financial models people are using around Wall Street and to
>try and manipulate financial markets are all based on "large, complicated
>computer models."

Using large complicated computer models to predict the future
is in many cases the modern equivalent of consulting an oracle, an
irrational but psychologically understandable response when people are
required to make important decisions in the presence of uncertainty.
Federal budget forecasts are notoriously inaccurate (and often
politically biased as well). I am sure you can find models that will
happily calculate the effect of changing the capital gains tax or the
minimum wage on the US economy in five years. You would be unwise to
give great weight to the answer.
Wall Street models may be better because of the continuous
feedback and the painful consequences of errors. Nevertheless Wall
Streeters are continually coming up with models that work well most
of the time but then fail catastrophically when something unexpected
happens. Remember portfolio insurance or the Granite funds.
When confronted with the output of a large computer model one
should ask how the model was validated. If there is no convincing way
to validate the model the results are probably garbage (by which I
mean are no more reliable than the results of simple models).
CDChase added:
>=) better to use small, simple computer models which have no relationship
>at all to the real world=)

Simple models have the virtue that their flaws tend to be more
obvious which discourages placing excessive weight on their results.
For some reason people will often give greater weight to models that
they do not understand.
James B. Shearer

Len Evens

Feb 1, 1995, 11:26:03 AM


I think for once we have a solid core of agreement. I am also glad
to see that James Shearer is just as suspicious of economics models
as he is of climate models.

However, I hope we can also agree that there do exist large complicated
models which do give accurate predictions. I am working from memory,
so I will probably get some details wrong, but I believe the
physicist Kenneth Wilson (?) at Cornell won a Nobel prize for the
use of such models. There are circumstances in which a simple model
is just not accurate enough to tell us what is going to happen, so
we have no choice but to use more elaborate models. Another example
is the models used to predict weather and more recently seasonal
weather patterns. There are many other such examples.

In this respect, climate models are somewhere `in between'. The
underlying physical and mathematical principles are pretty well
understood. In a simple model, you take a grossly inaccurate
picture of the Earth (perhaps even ignoring the fact that it is
roughly a sphere) and apply physical principles to get equations
you can solve fairly easily. In computer models, you use a more
accurate picture of the Earth, but it becomes hopeless to solve
the equations by simple methods, so you use numerical analysis
techniques instead. There are several types of uncertainties. One
is that you probably can't prove rigorously that the numerical
techniques will produce a correct answer or that they converge.
(This is very common in applied numerical analysis which relies
often on extra mathematical considerations to justify belief in
the results.) Indeed, if I understand correctly, such
numerical problems may be a factor leading some coupled models to
drift, as discussed in the much quoted Science article.
A more important issue is that many important phenomena occur
on scales smaller than the cell size in the grid used to approximate
the fluids under consideration. Hence, these important phenomena
must be introduced into the model in some other way. This
process is called parameterization. Clearly, this is a crucial
matter and subject to improvement. (Included in this are many important
factors like the effects of clouds.) In addition, there are other
processes, for example those involving the biosphere, which might
produce unexpected phenomena. Finally,
climate change over decades or centuries could result in unexpected
shifts for purely dynamic reasons, i.e., there might be alternate
stable states which the system might be drawn to because of
relatively small perturbations. Current models which are still
fairly close to being equilibrium models might not be able to
accurately predict such phenomena.
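
To make the contrast concrete, here is a minimal sketch of a "simple
model" in the sense above: a zero-dimensional energy balance, written in
Python with illustrative constants (not taken from any particular
published model):

    # Zero-dimensional energy balance: find the surface temperature at
    # which absorbed solar radiation balances emitted longwave radiation.
    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0        # solar constant, W m^-2
    ALBEDO = 0.30      # planetary albedo
    EMISSIVITY = 0.61  # effective emissivity, a crude greenhouse stand-in

    def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
        """Temperature balancing absorbed solar and emitted longwave."""
        absorbed = S0 * (1.0 - albedo) / 4.0   # average over the sphere
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    print(equilibrium_temperature())  # about 288 K with these values

Everything interesting (clouds, circulation, geography) is buried in the
two tuned numbers; that is both the appeal and the limitation described
above.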


The way climate models (and also weather models) differ from
models in economics and models in the social sciences is
that the underlying theory is on much firmer ground. Those who
take the conclusions of these models seriously think that it is
likely that what they tell us is at least as accurate as what
simple models tell us, particularly when many different models
using somewhat different approaches to the areas of uncertainty
result in similar predictions. At the most primitive level, this
is certainly true. The elaborate models currently in use produce
seasons, global distribution of climate, and other phenomena that it
would be hopeless to derive from simple models. People like James
Shearer emphasize the uncertainties and so conclude that the models
can't be relied on any more than simple models, at least with respect
to issues like greenhouse warming. In a sense this is a matter of
different predictions about the future. I feel that with further
work, the uncertainties are likely to be reduced and the reliability
of the models will increase. James Shearer appears to think that
these models will always be highly uncertain so it is probably not
worth putting a lot of effort into them. Of course neither side
in this debate can prove that it is correct. The only way to
find out is to continue with the research and to see what happens.

Leonard Evens l...@math.nwu.edu 708-491-5537
Dept. of Mathematics, Northwestern Univ., Evanston, IL 60208

Michael Tobis

Feb 5, 1995, 7:51:40 PM
j...@watson.ibm.com wrote:


: Using large complicated computer models to predict the future
: is in many cases the modern equivalent of consulting an oracle, an
: irrational but psychologically understandable response when people are
: required to make important decisions in the presence of uncertainty.
: Federal budget forecasts are notoriously inaccurate (and often
: politically biased as well). I am sure you can find models that will
: happily calculate the effect of changing the capital gains tax or the
: minimum wage on the US economy in five years. You would be unwise to
: give great weight to the answer.

What comparative weight would you give to such models compared to climate
models?

: Wall Street models may be better because of the continuous
: feedback and the painful consequences of errors. Nevertheless Wall
: Streeters are continually coming up with models that work well most
: of the time but then fail catastrophically when something unexpected
: happens. Remember portfolio insurance or the Granite funds.

In the case of climate models, this clearly indicates that the models are
intrinsically optimistic, rather than alarmist as is often alleged.

: When confronted with the output of a large computer model one
: should ask how the model was validated. If there is no convincing way
: to validate the model the results are probably garbage (by which I
: mean are no more reliable than the results of simple models).

Agreed. Things like the Club of Rome models have value only for didactic
purposes, and have essentially no predictive value.

That you believe the same holds for climate models shows yet again that you
do not know what a climate model is.

: CDChase added:
: >=) better to use small, simple computer models which have no relationship
: >at all to the real world=)

: Simple models have the virtue that their flaws tend to be more
: obvious which discourages placing excessive weight on their results.
: For some reason people will often give greater weight to models that
: they do not understand.

In your own case, this is obviously untrue. However, it is a bit irritating
that you consistently fail to make an effort to improve your understanding
of the models, while continuing to criticize them.

Things like the Club of Rome models have many degrees of freedom and very few
logical constraints and negligible validation.

Climate models have fewer degrees of freedom than the system they simulate,
and they can be validated in several ways:

1) reproducing the spatial structure of contemporary climate

2) reproducing the seasonal cycle of contemporary climate

3) reproducing paleoclimate conditions

4) predicting the effects of perturbations like Mt Pinatubo

5) reproducing physical conservation principles like mass conservation, energy
conservation and conservation of angular momentum

and believe it or not

6) reproducing the course of greenhouse gas response to date

In all of these matters, any sensible measurement shows skill above zero,
though below perfection.
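
A note on "skill": one common way to quantify it is mean-squared-error
skill relative to a trivial reference forecast such as climatology. A
minimal sketch, with made-up numbers standing in for real model output:

    # Skill relative to a reference forecast: 1 is perfect, 0 means no
    # better than the reference, negative means worse.
    def mse(pred, obs):
        return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

    def skill_score(model, reference, obs):
        """1 - MSE(model)/MSE(reference)."""
        return 1.0 - mse(model, obs) / mse(reference, obs)

    obs   = [14.1, 14.3, 13.9, 14.6]  # observations (invented)
    climo = [14.2, 14.2, 14.2, 14.2]  # reference: climatological mean
    gcm   = [14.0, 14.4, 14.0, 14.5]  # model output (invented)
    print(skill_score(gcm, climo, obs))  # positive: "skill above zero"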

If you have some specific criticisms, please present them.

At present, your criticisms show decreasing resemblance to the reality that you
claim to criticize.

Also, I remain curious how you would design a greenhouse gas policy in the
absence of any information at all from the models you like to criticize.

Finally, I would like to know where you are getting your information. It does
not appear to arise from the scientific press, much less refereed journals.
I would think that you should at least look at the IPCC reports if you are
going to continue to go on at this length.

mt

Dave Halliwell

Feb 6, 1995, 12:29:58 PM
There is a rather long section of quoted text before I start discussing
things.

l...@schur.math.nwu.edu (Len Evens) writes:

>In article <19950131....@almaden.ibm.com>, <j...@watson.ibm.com> wrote:
>>
>> Using large complicated computer models to predict the future
>>is in many cases the modern equivalent of consulting an oracle, an
>>irrational but psychologically understandable response when people are
>>required to make important decisions in the presence of uncertainty.
>> Federal budget forecasts are notoriously inaccurate (and often
>>politically biased as well). I am sure you can find models that will

[....]


>> Wall Street models may be better because of the continuous
>>feedback and the painful consequences of errors. Nevertheless Wall

[....]


>> When confronted with the output of a large computer model one
>>should ask how the model was validated. If there is no convincing way

[....]


>> Simple models have the virtue that their flaws tend to be more
>>obvious which discourages placing excessive weight on their results.
>>For some reason people will often give greater weight to models that
>>they do not understand.

>I think for once we have a solid core of agreement. I am also glad
>to see that James Shearer is just as suspicious of economics models
>as he is of climate models.

>However, I hope we can also agree that there do exist large complicated
>models which do give accurate predictions. I am working from memory,

[....]


>we have no choice but to use more elaborate models. Another example
>is the models used to predict weather and more recently seasonal
>weather patterns. There are many other such examples.

>In this respect, climate models are somewhere `in between'. The
>underlying physical and mathematical principles are pretty well
>understood. In a simple model, you take a grossly inaccurate
>picture of the Earth (perhaps even ignoring the fact that it is
>roughly a sphere) and apply physical principles to get equations
>you can solve fairly easily. In computer models, you use a more
>accurate picture of the Earth, but it becomes hopeless to solve
>the equations by simple methods, so you use numerical analysis
>techniques instead. There are several types of uncertainties. One

[....]

>The way climate models (and also weather models) differ from
>models in economics and models in the social sciences is
>that the underlying theory is on much firmer ground. Those who

[....]

I hope that nobody will feel that my editing of their text will mean
that their statements are taken out of context. I basically just left in
a few of the more specific statements.

The issue of model validation is an important one. However, the
discussion above has seemed to imply that each model is a thing unto
itself, and can only be "validated" in a complete form, by a single
comparison with the system being modeled. This is particularly evident in
Shearer's past comments on climate models, where the only models he seems
to accept as having any basis in reality are the most "complex" ones, and
he rejects these out-of-hand.

In the "real modelling world", models evolve over time. As knowledge
about processes improve, parts of models change, and sub-parts are added.
At each stage, verification of the new parts will be expected. More
importantly, at some stages a _simpler_, _less_extensive_ part may
replace a more physically-based part, because of needs of computing time
or storage, or data availability.

An atmospheric GCM for climate is _not_ something that was hacked
together on a case of Jolt, and only tested once it was all running. Many
portions of those models are the result of very detailed models of one
aspect of the system. Let us take radiation as an example. One can
develop an extremely detailed radiative transfer code, that integrates
atmospheric emission, absorption, and scattering over a large number of
wavelengths encompassing the terrestrial and solar spectra. However, it
is unlikely that one would want to put this into a GCM, because it takes
too much time to do the calculations at all the gridpoints. Instead, a
simpler model is used - one that includes all the _needed_
characteristics for radiation modelling in a GCM. The radiative transfer
code can be verified *before* it goes into the GCM, by comparison with
measured radiation fluxes (in the case of the detailed model), or by
comparison with a more complex model (in the case of the simplified
version in the GCM).
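
A toy version of that verification step (with invented band absorption
coefficients, not real spectroscopy) shows the workflow: check the cheap
scheme against the detailed one over the range of conditions the GCM
will actually see.

    import math

    # "Detailed" scheme: Beer-Lambert attenuation, band by band.
    band_k = [0.1, 0.5, 2.0, 8.0]   # absorption per unit path (invented)
    band_w = [0.4, 0.3, 0.2, 0.1]   # fraction of flux in each band

    def transmission_detailed(path):
        return sum(w * math.exp(-k * path) for k, w in zip(band_k, band_w))

    # Cheap scheme for the GCM: a single effective grey absorber.
    def transmission_grey(path, k_eff=0.9):
        return math.exp(-k_eff * path)

    for path in (0.1, 0.5, 1.0, 2.0):
        print(path, transmission_detailed(path), transmission_grey(path))
    # If the grey version tracks the detailed one closely enough over
    # the paths the GCM encounters, it can stand in for the slow code.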

The division of climate models into "simple" and "complex" ones, which
Shearer has promoted, is a red herring. A one-dimensional radiative-
convective model often has a much more complicated radiative transfer
procedure than an atmospheric GCM. As a result, the characteristic
behaviour of the RCM is an important tool in understanding the
atmosphere. Sure, there are things that are ignored in a RCM (like
geography), but the principles included in a RCM _do_ enable us to
simulate (quite closely) the temperature profiles of a _variety_ of
planetary atmospheres. It gives us confidence in our ability to model
radiation transfer, and that knowledge is then passed on into the GCM.
One would be on thin ice if one used a RCM to imply geographical results,
because it just isn't there. However, one would also be on thin ice to
imply that the GCM is the "best" model of radiation transfer. The
_result_ from RCMs regarding the radiative forcing from 2xCO2 can be used
in both more advanced and simpler models.

Another example is dynamics. Climate GCMs need dynamics. How do we
develop the knowledge needed? We borrow it from the weather forecasters,
who have worried about it since the 1950s. After all, they need the
dynamics at least as much as the climatologists do.

Yet another example is surface energy balance aspects. (Those of you
that know my background know that this is an area of interest to me.) The
"best" models of evaporation processes and such are not the ones in GCMs,
but ones developed specifically for the purpose. GCMs can then borrow
_what_they_need_, to get that aspect that GCMs must concern themselves
with. The other details that are _unnecessary_ for the GCM's purpose can
be ignored.

Every model must be assessed according to its goals. Models _will_
leave out things that are not relevant to the problem at hand. You will
never get a "model of everything", which seems to be what Shearer wants
to see. My guess is that by the time the newest coupled atmosphere-ocean
GCMs sort out their drift tendencies and give "accurate" climates, the
modelling community will have moved on to add yet more factors to the
climate models, and these current "newest" models will be shifted into the
"simpler" class. At that stage, Shearer will _still_ be saying "I accept
that simpler models show warming, but...", and he'll still be arguing
that something has been left out that should be in there (and that this
is why the models still show warming when he is sure that they
shouldn't). [Note: we of course must wait to see whether more detailed
models will or will not show the same warming that current models do.]

I'll leave the reader with a quote that I _think_ originated with
Ansel Adams, and thus had to do with photography rather than modelling,
but I think it applies here:

"Perfection is attained, not when there is nothing left to add, but
rather when there is nothing left to take away."

--

Dave Halliwell I don't speak for my employers, and you
Edmonton, Alberta shouldn't expect them to speak for me.

j...@watson.ibm.com

Feb 7, 1995, 9:33:47 PM
Michael Tobis asked:

>Finally, I would like to know where you are getting your information. It does
>not appear to arise from the scientific press, much less referred journals.

You don't consider the front pages of Science the "scientific
press"? Let me point out that the current (1/27/95, p454) Science
contains an article "Darker Clouds Promise Brighter Future for
Climate Models" suggesting current climate models substantially err in
the way they handle clouds (specifically that they underestimate the
amount of incoming solar radiation absorbed by clouds). Some quotes:
"As mirrors of the real world, climate models are far from
perfect. These computer simulations of how solar energy and Earth's
ocean and atmosphere interact can't even get today's climate entirely right.
And when they're asked to prognosticate the results are even worse:
... These shortcomings are no great surprise, given the number of
climate processes that are poorly understood or totally unknown.
... "This is such a basic thing; it throws a big monkey wrench into
the modeling works". But unlike most monkey wrenches thrown into
machinery, this one may effect some much-needed repairs. ... "It's
Mother Nature doing something, something we don't understand." ...
"fairly dramatic" effects on climate models ... Modelers are eager to
see what surprises come out of greenhouse simulations with more
realistically absorbent clouds. "There's no way to guess what this
would do ...""
James B. Shearer
Btw: I have not been challenging your claim that given atmospheric
composition, solar radiation and albedo the surface temperature can
be calculated from first principles. (Is this in fact your claim?)
I now suspect you have been blowing smoke. How do you calculate the
effects of clouds from first principles?

Len Evens

Feb 8, 1995, 11:02:01 AM


I think it is a good idea to bring people's attention to articles
in periodicals like Science and Nature. Our library seems to have
a problem keeping these periodicals on display, so I often miss
articles, and I appreciate it when they are brought to
our attention. Scientific American is also sometimes useful as is the
science news in the N. Y. Times which is usually pretty accurate.
It would be helpful if people would periodically post summaries of
such things. In so doing, it is helpful to make clear what the
source is. Science and Nature publish peer reviewed articles,
commentaries summarizing recent research and also articles written
by science reporters. The last are generally pretty good, but
they can be misleading since the reporters sometimes don't put in
all the qualifications you find in the basic literature. Since
these are basically secondary sources, one should exercise some
care in quoting them.

Also,
while these journals from time to time publish articles or reports
which can be construed as questioning certain aspects of climate
change research, they also publish many articles which could be
construed as supporting the belief that global warming is imminent.
One example illustrating this is a pair of articles and an accompanying
summary/commentary which appeared in Science
in the past three months. Both articles employed modelling
to study the effect of oceanic behavior on climate. One author
claimed to be able to simulate recent weather patterns which he
attributed to El Nino,
and the commentary discussed the possibility that recent changes in
El Nino were induced by enhanced greenhouse warming. The other author
tried to simulate observed patterns on the basis of a cycle in the
north Pacific which the model yielded. (I may not have that all
exactly right.) One could easily quote just the first article
as proving something about global warming, but that would hardly
be fair.

sn...@swcp.com

Feb 8, 1995, 2:08:05 PM

> You don't consider the front pages of Science the "scientific
>press"? Let me point out that the current (1/27/95, p454) Science
>contains an article "Darker Clouds Promise Brighter Future for
>Climate Models" suggesting current climate models substantially err in
>the way they handle clouds (specifically that they underestimate the
>amount of incoming solar radiation absorbed by clouds). Some quotes:

[snip]

There is an interesting letter (with reply) in this month's "Physics
Today," questioning some details of some global warming models. I
don't understand the arguments well enough to comment knowledgeably.

Snark

j...@watson.ibm.com

Feb 8, 1995, 10:48:00 PM
More on computer modeling, particularly climate models (replies
to Evens, Tobis and Halliwell):
Leonard Evens posted:
> ... There are circumstances in which a simple model
>is just not accurate enough to tell us what is going to happen, so
>we have no choice but to use more elaborate models. ...

This is the thinking that leads to trouble. The alternative
is to accept that we can not tell with certainty what is going to
happen. An elaborate model should be preferred to a simple model only
when it can be convincingly demonstrated that the elaborate model can
be expected to give better predictions in practice.
Leonard Evens posted:
>In this respect, climate models are somewhere `in between' . The
>underlying physical and mathematical principles are pretty well
>understood. ...

I disagree; the underlying physical principles are not well
understood in any effective sense (saying everything follows from
Schrodinger's equation is not helpful).
Leonard Evens also posted:
> ... Those who
>take the conclusions of these models seriously think that it is
>likely that what they tell us is at least as accurate as what
>simple models tell us, particularly when many different models
>using somewhat different approaches to the areas of uncertainty
>result in similar predictions.

The predictions are not similar. For CO2 doubling, complicated
models predict 1.5-4.5 C, a simple model 1 C. If the complicated model
predictions are "similar", then they are all similar to the simple
model, so where is the value added?
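
For reference, the roughly 1 C figure from a simple model can be
reproduced as the no-feedback Stefan-Boltzmann response, assuming about
4 W/m^2 of forcing for doubled CO2 and a 255 K effective emission
temperature:

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFF = 255.0     # effective emission temperature, K
    DELTA_F = 4.0     # approximate radiative forcing for 2xCO2, W m^-2

    # dT = dF / (4*sigma*T^3), linearizing F = sigma*T^4 about T_EFF
    print(DELTA_F / (4.0 * SIGMA * T_EFF ** 3))  # about 1 C
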
Michael Tobis asked (concerning econometric models):
>What comparative weight would you give to such models compared to climate
>models?

In general I give low weight to both. A more specific answer
would depend on exactly what you are trying to predict with the models.
I posted:
> Wall Street models may be better because of the continuous
> feedback and the painful consequences of errors. Nevertheless Wall
> Streeters are continually coming up with models that work well most
> of the time but then fail catastrophically when something unexpected
> happens. Remember portfolio insurance or the Granite funds.

Michael Tobis responded:
>In the case of climate models, this clearly indicates that the models are
>intrinsically optimistic, rather than alarmist as is often alleged.

An absurd conclusion. Wall Street models attempt to identify
profitable trades. A pessimistic model misses some profitable trades.
This is unlikely to cause serious problems. An optimistic model
identifies some unprofitable trades as profitable, a recipe for disaster.
Michael Tobis posted:
>Agreed. Things like the Club of Rome models have value only for didactic
>purposes, and have essentially no predictive value.
>
>That you believe the same holds for climate models shows yet again that you
>do not know what a climate model is.

What is being shown here is the truth of the proverb about the
mote in your neighbor's eye.
I posted:
> Simple models have the virtue that their flaws tend to be more
> obvious which discourages placing excessive weight on their results.
> For some reason people will often give greater weight to models that
> they do not understand.

Michael Tobis responded:
>In your own case, this is obviously untrue. However, it is a bit irritating
>that you consistently fail to make an effort to improve your understanding
>of the models, while continuing to criticize them.

A belief that one is obligated to understand a model before
criticizing it is dangerously wrong, making incomprehensible models
above reproach. Btw how much do you actually know about the
macroeconomics models you denigrate? You certainly haven't shown much
understanding of economics (along with practically everyone else in
this group).
Michael Tobis posted:
>Climate models have fewer degrees of freedom than the system they simulate,
>and they can be validated in several ways:
>
>1) reproducing the spatial structure of contemporary climate
>
>2) reproducing the seasonal cycle of contemporary climate
>
>3) reproducing paleoclimate conditions
>
>4) predicting the effects of perturbations like Mt Pinatubo
>
>5) reproducing physical conservation principles like mass conservation, energy
>conservation and conservation of angular momentum
>
>and believe it or not
>
>6) reproducing the course of greenhouse gas response to date
>
>In all of these matters, any sensible measurement shows skill above zero,
>though below perfection.

These are all very weak forms of validation. Strong
validation would be comparing the model answers to the correct answers over
the entire range of conditions the model is supposed to handle.
I will also mention that it is very dangerous to use any data
used to develop the model to validate it. This is shown over and over
by the failure of stock trading (or sports betting) schemes which work
great retrospectively when applied to future events.
Do you doubt that it is possible to come up with obviously
wacko models which pass all or most of the above tests? Consider for
example the following model for predicting climate. Find the year x
in the last 100000 with climate most similar to this year's climate.
Then predict next year's climate will be that of year x+1. This model
passes the above tests with flying colors. Are you impressed? (One
might quibble that the climate records for the last 100000 years are not
good enough to actually implement this model, however since you have
been making claims about how well paleoclimatic conditions are known
this shouldn't bother you.)
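
The analog-year scheme is easy to state in code. A sketch, with an
invented year-to-climate record standing in for the (unavailable) real
one:

    def predict_next_year(current, record):
        """Find the past year whose climate best matches the current
        state, and forecast the state of the year that followed it."""
        analogs = [y for y in record if y + 1 in record]
        x = min(analogs, key=lambda y: abs(record[y] - current))
        return record[x + 1]

    # record maps year -> a scalar climate state (numbers invented)
    record = {1990: 14.1, 1991: 14.0, 1992: 13.8, 1993: 14.0, 1994: 14.2}
    print(predict_next_year(14.1, record))

By construction it scores well on any test that amounts to reproducing
the historical record, which is the point of the example.
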
Michael Tobis asked:
>Also, I remain curious how you would design a greenhouse gas policy in the
>absence of any information at all from the models you like to criticize.

I would use simple models.
Dave Halliwell:
> Every model must be assessed according to its goals. Models _will_
>leave out things that are not relevant to the problem at hand. You will
>never get a "model of everything", which seems to be what Shearer wants
>to see. My guess is that by the time the newest coupled atmosphere-ocean
>GCMs sort out their drift tendencies and give "accurate" climates, the
>modelling community will have moved on to add yet more factors to the
>climate models, and these current "newest" models will be shifted into the
>"simpler" class. At that stage, Shearer will _still_ be saying "I accept
>that simpler models show warming, but...", and he'll still be arguing
>that something has been left out that should be in there (and that this
>is why the models still show warming when he is sure that they
>shouldn't). [Note: we of course must wait to see whether more detailed
>models will or will not show the same warming that current models do.]

What I want is for climate modelers and others to quit
overselling the reliability of model predictions.
When the most realistic models give the same results (and when
making them more realistic still does not significantly change the
results) I will give greater weight to their results. I don't expect
this to happen anytime soon. (Btw the results could still be wrong
because of some factor no one considered. There is no getting around
the fact that climatology suffers from a distinct lack of real data.)
The statement that I am sure the models shouldn't show
warming is the usual Halliwell misrepresentation. Unlike some
people I am not sure of what the models should show.
James B. Shearer

Michael Tobis

Feb 9, 1995, 11:01:18 AM
I hope to find time to address Shearer's last effort, which exposes many
misunderstandings.

I can't resist pointing this one out immediately, though.

j...@watson.ibm.com wrote:

: What I want is for climate modelers and others to quit
: overselling the reliability of model predictions.

I would like Mr Shearer to provide a *single* verifiable in-context quotation
of a climate modeller overselling the reliability of GCMs, which seems to be
the class of model Mr Shearer is criticizing, or of any other approach.

I exclude the statements of professional "environmentalists", who, of course,
are not especially talented at scientific nuance any more than professional
"market conservatives" are. In fact, I think these professional opinion-mongers
are responsible for Shearer's misunderstandings as much as Greenpeace et al
are responsible for others'.

In fact, both "sides" of the so called "debate" habitually misunderstand the
purposes of climate models. The part of the blame that attaches to the
climatological community is that they have done a lousy job of communicating
what they do.

mt

Robert Parson

Feb 9, 1995, 7:41:17 PM

>
>There is an interesting letter (with reply) in this month's "Physics
>Today," questioning some details of some global warming models. I
>don't understand the arguments well enough to comment knowledgeably.

The letter is by Richard Lindzen, the most eminent global warming
critic, and the replies are by Henry Charnock, Keith Shine, and Robert Kandel.
It's actually the latest iteration in an exchange that has been going on
for a couple of year (_Physics Today_ seems to have an appallingly long
publication lead time - Lindzen is responding to letters published in
December 1993.) Lindzen argues, on the basis of to-be-published
calculations, that the equilibrium response to doubling CO2 is 0.5
degrees C for clear-sky conditions and only 0.22 C for 40% cloud cover.
This is much smaller than the IPCC estimate of 3 degrees C. The
difference arises from the treatment of water-vapor feedback, but I
should let someone closer to the field summarize it.


Michael Tobis

Feb 14, 1995, 7:32:49 PM
Time constraints are piling up and I can't do these misunderstandings
justice in the next few weeks.

I will outline my responses. I've saved the article and will respond to
it eventually, probably next month.

j...@watson.ibm.com wrote:

: I disagree; the underlying physical principles are not well
: understood in any effective sense (saying everything follows from
: Schrodinger's equation is not helpful).

It would if the models were an implementation of Schrodinger's equation.
In fact, they implement the Navier-Stokes equations, and the result is
pretty good.

: The predictions are not similar. For CO2 doubling, complicated
: models predict 1.5-4.5 C, a simple model 1 C. If the complicated model
: predictions are "similar", then they are all similar to the simple
: model, so where is the value added?

The purpose of the models is not to yield a sensitivity, though even IPCC
is guilty of allowing this impression. The purpose of the models is
to give some insight into the spatial structure and the transient response,
which the simpler models can't. For policy purposes, they present plausible
answers to the question "1.5-4.5 C, so what?"

: Michael Tobis asked (concerning econometric models):
: >What comparative weight would you give to such models compared to climate
: >models?

: In general I give low weight to both. A more specific answer
: would depend on exactly what you are trying to predict with the models.

Indeed, that is a point I am trying to make. However, I also claim that
models based on highly constrained physical systems are more reliable in
principle.

: I posted:
: > Wall Street models may be better because of the continuous
: > feedback and the painful consequences of errors. Nevertheless Wall
: > Streeters are continually coming up with models that work well most
: > of the time but then fail catastrophically when something unexpected
: > happens. Remember portfolio insurance or the Granite funds.
: Michael Tobis responded:
: >In the case of climate models, this clearly indicates that the models are
: >intrinsically optimistic, rather than alarmist as is often alleged.

: An absurd conclusion. Wall Street models attempt to identify
: profitable trades. A pessimistic model misses some profitable trades.
: This is unlikely to cause serious problems. An optimistic model
: identifies some unprofitable trades as profitable, a recipe for disaster.

Nothing absurd about it. The models tell us what will happen in the absence
of unknown phenomena. If some unknown phenomenon kicks in, it will break
the model, but unfortunately, it may also break the real climate system.

The concession that some phenomenon is equally likely to arise that will make
the adjustment to a new radiative equilibrium essentially painless as one
that will make the situation worse than we understand seems to me yielding
far too much. The claim that we should base our behavior on the expectation of
such a deus ex machina is hard for me to characterize without being rude.

: I posted:
: > Simple models have the virtue that their flaws tend to be more
: > obvious which discourages placing excessive weight on their results.
: > For some reason people will often give greater weight to models that
: > they do not understand.
: Michael Tobis responded:
: >In your own case, this is obviously untrue. However, it is a bit irritating
: >that you consistently fail to make an effort to improve your understanding
: >of the models, while continuing to criticize them.

: A belief that one is obligated to understand a model before
: criticizing it is dangerously wrong, making incomprehensible models
: above reproach.

No, I express a belief that one should TRY to understand a model before
criticizing it. You haven't made the effort.

: Btw how much do you actually know about the
: macroeconomics models you denigrate? You certainly haven't shown much
: understanding of economics (along with practically everyone else in
: this group).

I took an undergrad course in macroeconomics, but I admit I wasn't impressed
with the subject, and find it self-contradictory. It seems to me that doing
something one doesn't like always has "costs" and "impacts" while doing
something one does like has "stimuli" and "promotes employment". I have never
been able to distinguish between these phenomena, and await a sensible
explanation. When I propose cost/benefit analysis, I am proposing a much
more heuristic approach than most economists would, precisely because
I have no idea what "costing the economy X billions of dollars" means.
Where is this "cost" going, Mars?

: Michael Tobis posted:
: >Climate models have fewer degrees of freedom than the system they simulate,
: >and they can be validated in several ways:
: >
: >1) reproducing the spatial structure of contemporary climate
: >
: >2) reproducing the seasonal cycle of contemporary climate
: >
: >3) reproducing paleoclimate conditions
: >
: >4) predicting the effects of perturbations like Mt Pinatubo
: >
: >5) reproducing physical conservation principles like mass conservation, energy
: >conservation and conservation of angular momentum
: >
: >and believe it or not
: >
: >6) reproducing the course of greenhouse gas response to date
: >
: >In all of these matters, any sensible measurement shows skill above zero,
: >though below perfection.

: These are all very weak forms of validation. Strong
: validation would be comparing the model answers to the correct answers over
: the entire range of conditions the model is supposed to handle.

Yes, this is a common problem in software engineering, not just in
climate modelling. It would take an essentially infinite amount of
time to verify a model in this way. It would also take a few billion years
to verify almost any significant software product. We have to apply
intelligence rather than brute force testing.

: I will also mention that it is very dangerous to use any data
: used to develop the model to validate it. This is shown over and over
: by the failure of stock trading (or sports betting) schemes which work
: great retrospectively when applied to future events.

: Do you doubt that it is possible to come up with obviously
: wacko models which pass all or most of the above tests? Consider for
: example the following model for predicting climate. Find the year x
: in the last 100000 with climate most similar to this year's climate.
: Then predict next year's climate will be that of year x+1. This model
: passes the above tests with flying colors. Are you impressed? (One
: might quibble that the climate records for the last 100000 years are not
: good enough to actually implement this model, however since you have
: been making claims about how well paleoclimatic conditions are known
: this shouldn't bother you.)

This is the part I'd like to respond to at some length. It shows a complete
misunderstanding of what a climate model is. I wonder if anyone else reading
is familiar with the type of model Mr Shearer is confusing with GCMs, as
well as with GCMs themselves, and can elucidate the difference. If not, I'll
try in a few weeks when I find the time.

: Michael Tobis asked:
: >Also, I remain curious how you would design a greenhouse gas policy in the
: >absence of any information at all from the models you like to criticize.

: I would use simple models.

But the simple models just yield a global average temperature change, and
do not yield a transient response or a spatial response. Thus, they yield
much less information for sensible weighing of costs and benefits.

(I would add that the models have other uses besides direct input to
the policy process. Noting the discussion of cloud opacity in the recent
_Science_ should provide a nice example of how in the earth and space sciences,
a triad of theory, observation, and simulation replaces the traditional
duality of theory and experiment.)

mt

Friesel

Feb 15, 1995, 11:00:49 AM
In article <3gocmr$f...@news.acns.nwu.edu>, l...@schur.math.nwu.edu (Len
Evens) wrote:
>

.....

> I think for once we have a solid core of agreement. I am also glad
> to see that James Shearer is just as suspicious of economics models
> as he is of climate models.
>
> However, I hope we can also agree that there do exist large complicated
> models which do give accurate predictions. I am working from memory,
> so I will probably get some details wrong, but I believe the
> physicist Kenneth Wilson (?) at Cornell won a Nobel prize for the
> use of such models.

...


I think that the term 'use' above is key. The model must fit the system,
and this can be done by accommodating the model to the system, or by finding
a system that suits the model. Trying to apply a model without a thorough
familiarity with both the model and the system is guaranteed to fail, and
probably sooner than later. Interestingly enough, this implies that
regardless of the quality and completeness of the model, the knowledge and
ability of the user is perhaps the critical factor. You can expect success
according to the level of ability and knowledge you have available.


Mark Friesel
(509) 375-2235
e-mail: ma_fr...@pnl.gov

Dave Halliwell

Feb 15, 1995, 5:09:43 PM
On Tue, 7 Feb 95, j...@watson.ibm.com wrote:

> Michael Tobis asked:
>>Finally, I would like to know where you are getting your information. It does
>>not appear to arise from the scientific press, much less refereed journals.
>
> You don't consider the front pages of Science the "scientific
>press"? Let me point out that the current (1/27/95, p454) Science
>contains an article "Darker Clouds Promise Brighter Future for
>Climate Models" suggesting current climate models substantially err in
>the way they handle clouds (specifically that they underestimate the
>amount of incoming solar radiation absorbed by clouds). Some quotes:

You have now shown that you have read at least two articles from
Science. Have you read anything else about climatology that would
lead us to believe that you _understand_ anything that you read?

In particular, have you read the IPCC report?

[quote deleted]

>Btw: I have not been challenging your claim that given atmospheric
>composition, solar radiation and albedo the surface temperature can
>be calculated from first principles. (Is this in fact your claim?)
>I now suspect you have been blowing smoke. How do you calculate the
>effects of clouds from first principles?

What do you mean by "first principles"? Would you consider Beer's
Law to be "first principles"? Are you familiar with such things as "the
two-stream approximation" for radiative transfer?

Come on: tell us something that shows more than a passing
acquaintance with the principles of atmospheric science.
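
For readers following along, Beer's Law itself is a short calculation.
A sketch of direct-beam attenuation through a layered column (the layer
optical depths are invented; a two-stream scheme would add upward and
downward diffuse fluxes on top of this):

    import math

    layer_tau = [0.02, 0.05, 0.10, 0.08]  # per-layer optical depths (invented)

    def direct_beam(flux_top, mu=1.0):
        """Attenuate a downward beam layer by layer; mu is the cosine
        of the solar zenith angle."""
        flux, profile = flux_top, [flux_top]
        for tau in layer_tau:
            flux *= math.exp(-tau / mu)
            profile.append(flux)
        return profile

    print(direct_beam(1361.0))  # flux remaining below each layer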

Barry M. Schlesinger

Feb 16, 1995, 7:48:00 AM
In article <3hlsve$b...@selway.umt.edu>, es...@selway.umt.edu (Anthony C Tweedale) writes...
>half off-topic, but i see there's a new, lower, estimate of the sun's
>activity since the 1600's, and gwms are going to have to be recalculated. sorry
>if this is old news.
>
>tony tweedale

Reference?

j...@watson.ibm.com

Feb 17, 1995, 12:40:16 AM
Michael Tobis asked:

>I would like Mr Shearer to provide a *single* verifiable in-context quotation
>of a climate modeller overselling the reliability of GCMs, which seems to be
>the class of model Mr Shearer is criticizing, or of any other approach.
I replied in part:
> For another example consider the testimony of James E. Hansen
> before the US Senate Subcommittee on Energy and Natural Resources on
> June 23, 1988 as quoted in "The Challenge of Global Warming" edited by
> D.E. Abrahamson, Island Press, 1989 (p 36-38).
> "The present observed global warming is close to 0.4 <degree>
> C relative to "climatology," which is defined as the 30-year (1951-
> 1980) mean. A warming of .4 <degree> C is three times larger than the
> standard deviation of annual mean temperatures in the 30-year
> climatology. The standard deviation of .13 <degree> C is a typical
> amount by which the global temperature fluctuates annually about its
> 30-year mean; the probability of a chance warming of three standard
> deviations is about 1%. Thus we can state with about 99% confidence
> that current temperatures represent a real warming trend rather than
> a chance fluctuation over the 30-year period."
Michael Tobis responded:
>I don't like the presentation, since there seems to be an unstated and
>false assumption that interannual temperature variation is uncorrelated.
>However, I did specify *in-context* and a lot of context is missing from the
>above, which may mitigate the statement as made by Hansen, though not as
>quoted by Abrahamson. (fwiw, I don't like Abrahamson's book very much.)
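
Tobis's correlation point can be made quantitative: if annual anomalies
follow an AR(1) process with lag-one autocorrelation r, a 30-year record
contains fewer than 30 effectively independent values, so a three-sigma
excursion is less improbable than the independent case suggests. A
sketch using the standard large-sample approximation:

    def effective_sample_size(n, r):
        """Effective number of independent samples in an AR(1) series."""
        return n * (1.0 - r) / (1.0 + r)

    for r in (0.0, 0.3, 0.6):
        print(r, round(effective_sample_size(30, r), 1))
    # At r = 0.6 a 30-year record behaves like roughly 7 independent
    # years, widening the uncertainty on the 30-year mean accordingly.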

To clarify, the book contains what appears to be Hansen's
entire statement. I extracted the above paragraph.
Michael Tobis continued:
>I suspect all Hansen was trying to say was that there is a verifiable
>warming, not that it is verifiably outside the range of natural variability.
>If so, I would support the statement in such a context. If the statement
>was meant to imply that there is a real warming *outside of natural
>variability*, the reasoning is inadequate and the presentation misleading.

The second bullet at the start of Hansen's statement (p. 35 in
the book) is "The global warming is now sufficiently large that we can
ascribe with a high degree of confidence a cause and effect relationship
to the greenhouse effect." The quoted paragraph came from a section
titled "Relationship of global warming and greenhouse effect". The
conclusion of this section (p. 40 in the book) reads:
"Global warming has reached a level such that we can state
with a high degree of confidence a cause and effect relationship between
the greenhouse effect and the observed warming. Certainly further study
of this issue must be made. The detection of a global greenhouse signal
represents only a first step in analysis of the phenomenon."
If this doesn't convince you of the context, I suggest you
look up the statement yourself.
Michael Tobis continued:
>However, it has nothing to do with dynamic climate models, so it doesn't
>provide you with any sort of example whatsoever.

You said "or of any other approach". Here we have a prominent
modeler making a "misleading" presentation to Congress based on
"inadequate" reasoning. This indicates to me that at least some
modelers can not be trusted to give an objective account of the
reliability of their work.
I stated (in reply to Evens):
> I disagree; the underlying physical principles are not well
> understood in any effective sense (saying everything follows from
> Schrodinger's equation is not helpful).

Tobis responded:
>It would if the models were an implementation of Schrodinger's equation.
>In fact, they implement the Navier-Stokes equations, and the result is
>pretty good.

The models contain much more than the Navier-Stokes equations.
In any case it is my understanding that it is computationally infeasible
to solve the Navier-Stokes equations directly because of turbulence.


I said:
> The predictions are not similar. For CO2 doubling, complicated
> models predict 1.5-4.5 C, a simple model 1 C. If the complicated model
> predictions are "similar", then they are all similar to the simple
> model, so where is the value added?

Michael Tobis responded:
>The purpose of the models is not to yield a sensitivity, though even IPCC
>is guilty of allowing this impression. The purpose of the models is
>to give some insight into the spatial structure and the transient response,
>which the simpler models can't. For policy purposes, they present plausible
>answers to the question "1.5-4.5 C, so what?"

If the purpose is insight into the transient response, why are
results usually given for the equilibrium response? If the models
don't agree on the sensitivity, why should I expect them to be reliable
for the transient response and the spatial structure which are
generally considered to be harder to predict?


I posted:
> Wall Street models may be better because of the continuous
> feedback and the painful consequences of errors. Nevertheless Wall
> Streeters are continually coming up with models that work well most
> of the time but then fail catastrophically when something unexpected
> happens. Remember portfolio insurance or the Granite funds.
Michael Tobis responded:
>In the case of climate models, this clearly indicates that the models are
>intrinsically optimistic, rather than alarmist as is often alleged.

I countered:
> An absurd conclusion. Wall Street models attempt to identify
> profitable trades. A pessimistic model misses some profitable trades.
> This is unlikely to cause serious problems. An optimistic model
> identifies some unprofitable trades as profitable, a recipe for disaster.

Michael Tobis replied:
>Nothing absurd about it. The models tell us what will happen in the absence
>of unknown phenomena. If some unknown phenomenon kicks in, it will break
>the model, but unfortunately, it may also break the real climate system.
>
>The concession that some phenomenon is equally likely to arise that will make
>the adjustment to a new radiative equilibrium essentially painless as one
>that will make the situation worse than we understand seems to me yielding
>far too much. The claim that we should base our behavior on the expectation of
>such a deus ex machina is hard for me to characterize without being rude.

Your original argument was that the catastrophic failure of
some Wall Street models indicates climate models are intrinsically
optimistic. This argument is obviously absurd. You may have some other
support for your belief that climate models are intrinsically
optimistic, but you have not presented it. I will note that most
predictions of disaster prove overly pessimistic.
I posted:
> A belief that one is obligated to understand a model before
> criticizing it is dangerously wrong, making incomprehensible models
> above reproach.

Michael Tobis replied:
>No, I express a belief that one should TRY to understand a model before
>criticizing it. You haven't made the effort.

This is unrealistic. People must decide how much weight to
give computer models without examining each one in detail. As I said
before I discount the results of any model which cannot be convincingly
validated. The climate models have not been convincingly validated.
Your arguments to the contrary are basically wishful thinking. I also
find it amusing that you agree complicated computer models in other
fields are of doubtful validity but you expect me to believe
climatology is different.
I am willing to expend some effort to learn more about the
models. Do you have some accessible references? For that matter are
there any climate models, which you are willing to defend, available
on the net?


I said:
> These are all very weak forms of validation. Strong
> validation would be comparing the model answers to the correct answers over
> the entire range of conditions the model is supposed to handle.

Michael Tobis replied:
>Yes, this is a common problem in software engineering, not just in
>climate modelling. It would take an essentially infinite amount of
>time to verify a model in this way. It would also take a few billion years
>to verify almost any significant software product. We have to apply
>intelligence rather than brute force testing.

This is not the problem. If the model computed the correct
answer for 100 random inputs chosen from the entire input space this
would be strong evidence that the model computes the correct answer at
least 90% (for example) of the time. This would not follow if the 100
inputs were chosen from a small subset of the input space. This is
why I said the "entire range".
The real problem is that it is impossible to test any computer
program if you don't know what the right answer is.
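
The statistics behind the 100-random-inputs claim are straightforward,
assuming independent draws from the input space:

    # If the model were correct on only 90% of the input space, the
    # chance of passing 100 independent random tests would be 0.9**100.
    print(0.9 ** 100)  # about 2.7e-5

    # In general, passing n random tests rules out a failure rate f
    # with confidence 1 - (1 - f)**n; n=100, f=0.10 gives ~99.997%.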


I said:
> I will also mention that it is very dangerous to use any data
> used to develop the model to validate it. This is shown over and over
> by the failure of stock trading (or sports betting) schemes which work
> great retrospectively when applied to future events.
>
> Do you doubt that it is possible to come up with obviously
: > wacko models which pass all or most of the above tests? Consider for
> example the following model for predicting climate. Find the year x
> in the last 100000 with climate most similar to this year's climate.
> Then predict next year's climate will be that of year x+1. This model
> passes the above tests with flying colors. Are you impressed? (One
> might quibble that the climate records for the last 100000 years are not
> good enough to actually implement this model, however since you have
: > been making claims about how well paleoclimatic conditions are known
> this shouldn't bother you.)

Michael Tobis responded:
>This is the part I'd like to respond to at some length. It shows a complete
>misunderstanding of what a climate model is. I wonder if anyone else reading
>is familiar with the type of model Mr Shearer is confusing with GCMs, as
>well as with GCMs themselves, and can elucidate the difference. If not, I'll
>try in a few weeks when I find the time.

I was not claiming the purely empirical model presented above
resembles the GCM computer models. I was giving it as an example to
show your validation tests are not very strong.
In general, models may be empirical, looking for patterns in
historical data and predicting the future based on these patterns, or
they may be intelligent, attempting to explain the past data as the
consequence of some general laws and then using these laws to predict
the future. Hybrids are also possible. The reliability of empirical
models cannot be easily estimated from the same data used to derive
them. It is too easy to find patterns in random variations. Empirical
models also cannot be expected to do well if applied to conditions
outside the range of the historical data used to derive them.
Any proposed model should be tested against a few simple
purely empirical models. Early attempts at numerical weather
prediction did not do as well as the empirical methods then in use.
The current general climate models are hybrids containing
large empirical components. The empirical portions decrease one's
confidence in their ability to predict climate conditions outside
the range of historical data.
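
"Patterns in random variations" are easy to manufacture: select the best
of many random rules on one sample of noise, then test it on fresh
noise. A sketch (all the data here is synthetic noise by construction):

    import random
    random.seed(1)

    def score(rule, data):
        """Fraction of steps where the rule 'predicts' the next move."""
        hits = sum(1 for i in range(len(data) - 1)
                   if rule[i % len(rule)] == (data[i + 1] > data[i]))
        return hits / (len(data) - 1)

    train = [random.gauss(0, 1) for _ in range(50)]
    test = [random.gauss(0, 1) for _ in range(50)]

    rules = [[random.random() < 0.5 for _ in range(5)] for _ in range(1000)]
    best = max(rules, key=lambda r: score(r, train))
    print(score(best, train), score(best, test))
    # The best of 1000 random rules looks skillful on the data that
    # selected it and typically falls back to about 50% on unseen data.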


Michael Tobis asked:
>Also, I remain curious how you would design a greenhouse gas policy in the
>absence of any information at all from the models you like to criticize.

I replied:
> I would use simple models.

Michael Tobis responded:
>But the simple models just yield a global average temperature change, and
>do not yield a transient response or a spatial response. Thus, they yield
>much less information for sensible weighing of costs and benefits.

What you believe to be additional information, I believe to be
noise. Any cost/benefit analysis of global warming (and mitigation
measures) is subject to substantial uncertainty and it is unreasonable
to pretend otherwise.
James B. Shearer

j...@watson.ibm.com

Feb 17, 1995, 8:57:22 PM
I posted (replying to Tobis):

>Btw: I have not been challenging your claim that given atmospheric
>composition, solar radiation and albedo the surface temperture can
>be calculated from first principles. (Is this in fact your claim?)
>I now suspect you have been blowing smoke. How do you calculate the
>effects of clouds from first principles?
Dave Halliwell responded:

> What do you mean by "first principles"? Would you consider Beer's
>Law to be "first principles"? Are you familiar with such things as "the
>two-stream approximation" for radiative transfer?

From the Science article I mentioned earlier (1/27/95, p454)
"they found that, on a global average, clouds absorb more than 25 watts
of solar radiation per square meter ... rather than the 6 ... predicted
by theory". So if modelers are calculating the effects of clouds from
first principles they aren't doing it correctly.


Leonard Evens posted:
> ... There are circumstances in which a simple model
>is just not accurate enough to tell us what is going to happen, so
>we have no choice but to use more elaborate models. ...

I replied:
> This is the thinking that leads to trouble. The alternative
>is to accept that we can not tell with certainty what is going to
>happen.

Dave Halliwell asks:
> Define "with certainty", and give _one_ example where science can
>make a prediction "with certainty".

Ok, I should have said "with the desired degree of certainty"
instead of "with certainty". This sort of quibble does not address my
point, which is that elaborate models are not necessarily better than simple
models.
I posted:
> An elaborate model should be preferred to a simple model only
>when it can be convincingly demonstrated that the elaborate model can
>be expected to give better predictions in practice.

Dave Halliwell responded:
> In Len's post, he has said that he is considering a situation
>where simple models are not "accurate enough". An example would be
>where the simple models leave out something that we know can affect
>the result. You have made the argument for _ignoring_ simple models
>because they do not include things like ocean circulation. Thus YOU
>favour more complicated models, using as your sole justification the
>fact that simpler models don't include "everything".
>
> Now, in response to your claim above, why is it that _you_ prefer
>more elaborate models, since you also claim that you are not convinced
>that they are demonstrably better than the simpler ones?

I have never argued for ignoring simple models. I have argued
that their estimate of the effects of CO2 forcing is imprecise because
they ignore important aspects of the climate system. More complicated
models could in principle give better estimates. However I am not
convinced with the current state of the art that they actually do, and
for this reason I prefer simpler models.


I said:
> A belief that one is obligated to understand a model before
>criticizing it is dangerously wrong, making incomprehensible models
>above reproach.

Dave Halliwell commented:
> A belief that one's criticisms, made from a position of little or
>no understanding, can't possibly be wrong is even more dangerous.
>
> A model incomprehensible by _anyone_ would be suspect. Models that
>_are_ understood by a large number of people are not incomprehensible.
>Thus climate models are not in that class, and you are constructing a
>strawman.

I have no such belief that I "can't possibly be wrong".
Anybody should be open to the possibility that they are mistaken, even
"experts".
As for climate models, how many lines of computer code are in
the big models? How many people have read and understood every line?
I doubt very much it is a "large number".


I said:
> I will also mention that it is very dangerous to use any data
>used to develop the model to validate it. This is shown over and over
>by stock trading (or sports betting) schemes which work great
>retrospectively but fail when applied to future events.

Dave Halliwell commented:
> Now, if you could actually demonstrate that this is what has been
>done with climate models, then you'd have a point. For starters, why
>don't you take Michael's list and tell us which ones suffer from this
>fault? Describe, in detail, just what data is used to develop the model,
>and what data is used to validate it.

The article "Climate Modeling's Fudge Factor Comes Under Fire"
(Science 9/9/94, p. 1528) points out that many models have been forced
into agreement with today's climate. Hence they cannot be validated by
their ability to reproduce contemporary climate.
More generally I consider large computer models "guilty until
proven innocent". Hence I believe the burden of proof is on you to
show this has not occurred, not on me to show that it has occurred.
As I said there are numerous examples from other fields where people
have placed excessive faith in models for this reason. I will also
mention that there have been studies of successive determinations of
physical constants which show that improvements in precision often
cause the accepted values to jump outside the claimed error bars of
the previous determinations. It has been suggested that this occurs
because experimenters stop debugging their experiments as soon as the
results appear reasonable. Is it possible that climate modelers stop
debugging their codes as soon as the results seem reasonable? I find it
hard to believe climatology is immune to pitfalls which have repeatedly
tripped up researchers in other fields.
I said (in reply to Tobis):
> Do you doubt that it is possible to come up with obviously
>wacko models which pass all or most of the above tests. Consider for
>example the following model for predicting climate. Find the year x
>in the last 100000 with climate most similar to this year's climate.
>Then predict next year's climate will be that of year x+1. This model
>passes the above tests with flying colors. Are you impressed? (One
>might quibble that the climate records for the last 100000 years are not
>good enough to actually implement this model, however since you have
>been making claims about how well paleoclimatic conditions are known
>this shouldn't bother you.)

Dave Halliwell commented:
> To suggest that your "climate model" bears any resemblance at all
>to the types of climate models that are actually used only demonstrates
>your ignorance again. To begin with, your model can't even begin to
>*try* to model three of the six items on Michael's list, so the chances
>of it "passing with flying colors" are rather remote.
>
> Perhaps you would like to explain to us how _your_ "climate model"
>accepts input that tells it to model the changes in Michael's list?

I made no suggestion that this ("obviously wacko") model
resembles those in actual use. I was just pointing out it would pass
Tobis's tests. This model predicts the past perfectly and for that
reason would pass every one of Tobis's tests.
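
A toy sketch in Python (mine, with a made-up record standing in for
climate data) shows why the hindcast is trivially perfect: when year x
is itself in the record, the year most similar to x is x, so the
"prediction" for x+1 is just the observed x+1:

    import numpy as np

    rng = np.random.default_rng(1)
    record = rng.normal(size=1000)  # stand-in for a 1000-year climate record

    def analogue_predict(record, x):
        # predict year x+1 as the successor of the year most like year x
        distances = np.abs(record[:-1] - record[x])  # compare all years to x
        y = int(np.argmin(distances))                # in-sample, y == x
        return record[y + 1]

    errors = [abs(analogue_predict(record, x) - record[x + 1])
              for x in range(len(record) - 1)]
    print(max(errors))  # 0.0 -- a "perfect" hindcast, with no skill at all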


Michael Tobis asked:
>Also, I remain curious how you would design a greenhouse gas policy in the
>absence of any information at all from the models you like to criticize.
I replied:
> I would use simple models.

Dave Halliwell responded:
> Yet you have clearly argued for rejection of the results of the
>models you now claim to want to use. You haven't even shown any clear
>understanding of just what a "simple" climate model _is_, or what it
>can do.

I have argued that simple models cannot be expected to give
a precise estimate of the effects of CO2 forcing. I have never argued
that they are completely worthless. I posted the following to this
group over a year ago (on 12/7/93, in reply to Tobis):
> This is the sort of reasoning used in arriving at figures for
>what regulation costs the economy. Of course regulations may have
>benefits in which case they should be estimated as well. You may
>object that such reasoning ignores many things (which I have dismissed
>as quibbles) and hence is imprecise. However such objections apply
>also to the climate models which you have been defending. In both
>cases one needs to be aware that there is some uncertainty in the
>results obtained; however, I do not believe this justifies totally
>ignoring them (as has been advocated in this group).
This remains my position.
Dave Halliwell posted:
> There have already been *several* levels of "making them more realistic
>still does not significantly change the results", which is the criterion
>you stated for you to "give greater weight to their results".

If the models all give the same results where did the 1.5-4.5
C range come from? Or are you contending 1.5 C and 4.5 C are not
significantly different?


Michael Tobis asked:
>I would like Mr Shearer to provide a *single* verifiable in-context quotation
>of a climate modeller overselling the reliability of GCMs, which seems to be
>the class of model Mr Shearer is criticizing, or of any other approach.

I replied (in part):
> I was thinking of you and Halliwell. Halliwell for example has
>contended that climate models are as reliable as weather prediction
>models (or at least he jumped all over me when I stated they aren't).
Dave Halliwell comments:
> Glad to see that Shearer has selected something that fulfills Michael's
>request for a *verifiable*, in-context quotation. His response to Michael
>also demonstrates why he seems to hold the position that he does: either
>he can't read, or he doesn't understand english.
>
> For the record, here are portions of four posts that I have made
>in the last month or so regarding climate models and weather models.
>NOTHING that I have said can be reasonably interpreted as a claim "that
>climate models are as reliable as weather prediction". In fact, I have
>acknowledged that there are differences between the two. The position of
>Shearer's that I have challenged was his claim that weather models
>contributed _nothing_ to climate models. I challenge him to provide a
>_single_ quote from my posts that makes _any_ direct comparison of the
>overall reliability of climate and weather models.

I have made no claim that weather models contributed _nothing_
to climate models. The start of our exchange on this topic was as
follows.
I stated:
> Science progresses by testing theories against experiments and
>observations of the natural world. Climatologists have essentially no
>ability to experiment and a very limited set of observations.
You asked:
> Beginner-type question: how much is there in common between climate
>models and numerical weather prediction (NWP) models? (There is an
>intentionally-misleading aspect to this question.) Would you put NWP
>models in the same category regarding lack of suitable testing?
I replied:
> No, I would not put NWP models in the same category because,
>since they predict on a much shorter timescale (and in some cases for
>smaller areas), the observational record is in effect much larger.

You then proceeded to flame me for the above remark. If you
in fact agree that weather prediction models are more reliable than
climate models I don't see the point of criticizing me for saying it.
Similarly I don't see the point of asking whether I can describe the
differences between climate models and weather models if you agree
that they are different.
James B. Shearer
PS: My access to sci.environment seems to have gone away so this may
be my last post for a while.

Anthony C Tweedale

Feb 19, 1995, 7:50:03 PM
Barry M. Schlesinger (bschle...@nssdca.gsfc.nasa.gov) wrote:
: In article <3hlsve$b...@selway.umt.edu>, es...@selway.umt.edu (Anthony C Tweedale) writes...

: >half off-topic, but i see there's a new, lower, estimate of the sun's
: >activity since the 1600's, and gcms are going to have to be recalculated.
: >sorry if this is old news.

: Reference?

_geophysical research letters_ 21.2067. i read about it in _new scientist_
144.1949.21 (29 oct 94 (i was just reading it now, so maybe i misled you
by saying 'new')):

"the sun was much less active between 1700 & 1850 than astronomers have
thought." "instead [sunspot activity] built up much more gradually [since
1700]." "the snag is that wolf's interpretation of the old records may not be
reliable. diff. observers may count sunspots differently, esp. when deciding
if they form a group. [hoyt (data systems corp), schatten (nasa goddard) and
nesme-ribes (paris observ., meudon) .. [used new records, also] .. their
technique counts only groups. the index derived this way looks set to become
the new standard." "the maunder & dalton minima still show up clearly .. the
most impt. diff. .. is a steady overall increase in solar activity, starting
from the end of the maunder minimum and lasting 'till today." "one of the 1st
things climatologists will do w/ the new index is see if it provides a better
match w/ solar & climate cycles from 1700-1850."

do any climatologists/modelers here know if that's been done yet? or care
to speculate what the effect on gcm results might be?

t2

Dave Halliwell

Feb 20, 1995, 7:37:17 PM

James Shearer has said that he likely won't be posting to this
thread again, but his last post has so many errors that it demands
a response.

j...@watson.ibm.com wrote:
> I posted (replying to Tobis):
>>Btw: I have not been challenging your claim that given atmospheric
>>composition, solar radiation and albedo the surface temperature can
>>be calculated from first principles. (Is this in fact your claim?)
>>I now suspect you have been blowing smoke. How do you calculate the
>>effects of clouds from first principles?
> Dave Halliwell responded:
>> What do you mean by "first principles"? Would you consider Beer's
>>Law to be "first principles"? Are you familiar with such things as "the
>>two-stream approximation" for radiative transfer?
>
> From the Science article I mentioned earlier (1/27/95, p454)
>"they found that, on a global average, clouds absorb more than 25 watts
>of solar radiation per square meter ... rather than the 6 ... predicted
>by theory". So if modelers are calculating the effects of clouds from
>first principles they aren't doing it correctly.

Pretty much the answer I expected: another glorious case of question
avoidance. No attempt to explain what he means by "first principles",
and nothing to suggest that he has the vaguest notion of what Beer's
Law is, or what the two-stream approximation is. In other words, Shearer
has _no_ understanding of radiative transfer, and all he can do is take
a quote out of an article and hope that it provides a counterargument.
It doesn't.

For the record, Beer's Law relates transmission of radiation through
a medium to the concentration of attenuators in that medium. Anyone that
has taken first year chemistry has probably used Beer's Law for calculating
concentrations of solutions using light transmission. The two-stream
approximation basically takes all the transmission, absorption, and
scattering that occurs in three dimensions in the atmosphere (through a
given volume), and treats them as if they can be divided into upward and
downward streams. You'd need to take a radiation transfer course to get
exposed to that one.
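
To give a flavor of it, a minimal Python sketch of Beer's Law as just
described (illustrative numbers only; the two-stream approximation
builds this sort of attenuation into separate upward and downward flux
calculations):

    import math

    def transmitted(i0, k, c, path_length):
        # Beer's Law: I = I0 * exp(-k * c * L), where k is the absorption
        # coefficient, c the attenuator concentration, L the path length.
        return i0 * math.exp(-k * c * path_length)

    # Doubling the concentration squares the transmitted fraction;
    # attenuation is exponential, not linear.
    print(transmitted(1.0, k=0.5, c=1.0, path_length=2.0))  # ~0.368
    print(transmitted(1.0, k=0.5, c=2.0, path_length=2.0))  # ~0.135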


> I replied [in response to Len Evens]:
>> This is the thinking that leads to trouble. The alternative
>>is to accept that we can not tell with certainty what is going to
>>happen.
> Dave Halliwell asks:
>> Define "with certainty", and give _one_ example where science can
>>make a prediction "with certainty".
>
> Ok, I should have said "with the desired degree of certainty"
>instead of "with certainty". This sort of quibble does not address my
>point, which is that elaborate models are not necessarily better than simple
>models.

...and you just turned your _specific_ claim (of "certainty") into a
vague, weaselly one that doesn't mean anything at all. The "desired degree
of certainty" could be anywhere from zero to 100%. Shearer once again
wants to take something that is less than 100% and make it look as if it
is close to zero %. He completely avoids any attempt to define where in
the range 0-100% things lie, and uses any argument that it must be less
than 100% as if it were proof that the value is far lower than it actually is.

Your claim that "The alternative is to accept that we can not tell with
certainty what is going to happen." implies that Len Evens' statement was
one that required "certainty". It was another strawman, and did nothing
to support an argument about complex and simple models.

> I posted:
>> An elaborate model should be preferred to a simple model only
>>when it can be convincingly demonstrated that the elaborate model can
>>be expected to give better predictions in practice.
> Dave Halliwell responded:

>> Now, in response to your claim above, why is it that _you_ prefer
>>more elaborate models, since you also claim that you are not convinced
>>that they are demonstrably better than the simpler ones?
>
> I have never argued for ignoring simple models. I have argued
>that their estimate of the effects of CO2 forcing is imprecise because
>they ignore important aspects of the climate system. More complicated
>models could in principle give better estimates. However I am not
>convinced with the current state of the art that they actually do, and
>for this reason I prefer simpler models.

Yet you have shown almost no understanding of the models. You can't
give a description of any model in climatology, you don't recognize the
names of any class of model except GCMs, and you treat the wide variety
of climate models as if they fall on a linear scale from "simple" to
"complex". More important, you ignore any attempts to provide you with
additional insight, because you think your ignorance is irrelevant.

You have also claimed that the "simple" models do not justify any
attempt to curtail greenhouse gas emissions, and that there is little
or no justification in further spending on climatology (or, more
specifically, climate models).

> I said:
>> A belief that one is obligated to understand a model before
>>criticizing it is dangerously wrong, making incomprehensible models
>>above reproach.
> Dave Halliwell commented:
>> A belief that one's criticisms, made from a position of little or
>>no understanding, can't possibly be wrong is even more dangerous.
>>
>> A model incomprehensible by _anyone_ would be suspect. Models that
>>_are_ understood by a large number of people are not incomprehensible.
>>Thus climate models are not in that class, and you are constructing a
>>strawman.
>
> I have no such belief that I "can't possibly be wrong".

It sure fits the way you have been behaving here. If it looks like
a duck and sounds like a duck...

>Anybody should be open to the possibility that they are mistaken, even
>"experts".
> As for climate models, how many lines of computer code are in
>the big models? How many people have read and understood every line?
>I doubt very much it is a "large number".

What a wonderful strawman you are constructing! I guess that means that
your word processor or language compiler or spreadsheet program are also
incomprehensible, since I'm sure that nobody has read and understood every
single line of code. Yet thousands, if not millions, of people have an
extremely good understanding of these programs, and know their behaviour
extremely well. One would hardly call all of these programs
"incomprehensible".

For your information, a person need not even know what _language_
a climate model is coded in. If they understand the principles upon which
the model is based, then they can understand the model. I don't need to
know which algorithm my calculator uses to calculate sines in order to
understand sines. All I need to do is make sure it calculates sines
correctly.
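
The same point as a Python sketch (the black box here is a stand-in,
purely for illustration): the routine is validated entirely from the
outside, against known values and identities that any correct sine
must satisfy:

    import math, random

    def blackbox_sin(x):
        return math.sin(x)  # stand-in for an opaque implementation

    assert abs(blackbox_sin(0.0)) < 1e-12
    assert abs(blackbox_sin(math.pi / 2) - 1.0) < 1e-12
    for _ in range(1000):
        x = random.uniform(-10.0, 10.0)
        assert abs(blackbox_sin(x) ** 2 + math.cos(x) ** 2 - 1.0) < 1e-9
        assert abs(blackbox_sin(-x) + blackbox_sin(x)) < 1e-12
    print("the black box behaves like a sine on every test")

Not one line of the box's code was read, yet the validation is real.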

Unfortunately, you don't seem to understand any of the basic principles
of climatology, either. It is for _this_ reason that *you* are incapable
of comprehending the models or their use. It's like someone that doesn't
know how to read or write rejecting the argument that a word processor can
be a useful program.

This is yet another example of you comparing two things, and deciding
that the one that is <100% is in the same class as the one that is 0%.

> I said:
>> I will also mention that it is very dangerous to use any data
>>used to develop the model to validate it. This is shown over and over
>>by stock trading (or sports betting) schemes which work great
>>retrospectively but fail when applied to future events.
> Dave Halliwell commented:
>> Now, if you could actually demonstrate that this is what has been
>>done with climate models, then you'd have a point. For starters, why
>>don't you take Michael's list and tell us which ones suffer from this
>>fault? Describe, in detail, just what data is used to develop the model,
>>and what data is used to validate it.
>
> The article "Climate Modeling's Fudge Factor Comes Under Fire"
>(Science 9/9/94, p. 1528) points out that many models have been forced
>into agreement with today's climate. Hence they cannot be validated by
>their ability to reproduce contemporary climate.

Here is a perfect example of your refusal to learn. The article in
question applies to one specific class of models, and does not apply to
a half-dozen other classes of models. I can conclude one of two things:
either you _still_ don't understand the differences between the various
classes of models that have been discussed, or you are intentionally
ignoring the fact in the hope that your ignorance will not be noted.

I suppose there is one further possibility: you think that your
paragraph above either provides a DETAILED discussion of what is used
to develop a model, and what data is used to validate it, or you think
that a reference to an article supplants a discussion in your own words.
Did you used to copy out of the textbook onto exams when you were a
student?

> More generally I consider large computer models "guilty until
>proven innocent". Hence I believe the burden of proof is on you to
>show this has not occurred, not on me to show that it has occurred.

A great many lines of evidence for a variety of models have been
presented, but your only response has been to restate the opinion that
you do not consider any of those items to be valid evidence. Even
evidence that _fits_ what you have said you would consider as valid
evidence appears to be rejected. You continually stand with your
fingers in your ears, shouting "I'm not listening, I'm not listening!"
You repeatedly show your ignorance regarding models, climatology in
general, and scientific validation in general, yet you still think that
you know enough to decide what is a valid comparison between models and
observations.

[deletia]

> Is it possible that climate modelers stop
>debugging their codes as soon as the results seem reasonable? I find it
>hard to believe climatology is immune to pitfalls which have repeatedly
>tripped up researchers in other fields.

This is nothing more than an argument from personal incredulity. It
is an extremely weak argument. It fits in with your reaction to various
forms of evidence: you just simply don't accept them.

> I said (in reply to Tobis):
>> Do you doubt that it is possible to come up with obviously
>>wacko models which pass all or most of the above tests. Consider for
>>example the following model for predicting climate. Find the year x
>>in the last 100000 with climate most similar to this year's climate.
>>Then predict next year's climate will be that of year x+1. This model
>>passes the above tests with flying colors. Are you impressed?

> Dave Halliwell commented:
>> To suggest that your "climate model" bears any resemblance at all
>>to the types of climate models that are actually used only demonstrates
>>your ignorance again. To begin with, your model can't even begin to
>>*try* to model three of the six items on Michael's list, so the chances
>>of it "passing with flying colors" are rather remote.
>>
>> Perhaps you would like to explain to us how _your_ "climate model"
>>accepts input that tells it to model the changes in Michael's list?
>
> I made no suggestion that this ("obviously wacko") model
>resembles those in actual use. I was just pointing out it would pass
>Tobis's tests. This model predicts the past perfectly and for that
>reason would pass every one of Tobis's tests.

Pure bullshit. I note that you have not made any attempt to show
how your "model" would actually *try* to perform the tests in Michael's
list. Probably because you have no idea how to go about _examining_ the
items in Michael's list.

So, I'll do it. Let us look at your model in more detail. It simply
says that year x+1 equals year x. Let's start it in the year 100,000 BP.
Let us presume that the temperature we start the model at is T1. What
does the model say for the next year? T1. And the following year? T1.

We end up with a string of 100,000 years of T1. No variation, no
change. Even if we give it the correct value for T1 in the first year,
it will only have one correct value for T, and then gets 99,999 years
wrong (except if, by chance, it finds another year with climate just
like the first year).

Now, we _could_ revise the model so that instead of using the
_model_ temperature for the previous year, we use the _observed_
temperature for the previous year. However, it _still_ gets the
next 99,999 years *wrong* (again, except by chance that two consecutive
years are identical). At least now it has the correct variation and
such, but it really has just been a regurgitation of the input data.
The model hasn't actually done _anything_ to change the input. It's
a case of a model saying "well, we got _last_ year wrong, so let's ignore
our model value and start over again with the correct observed value."
If this is the way that you intended to run the model (using observed
instead of modelled climate from the previous year) then you have
tacitly admitted that your model is wrong, by your refusal to use
its own estimate of climate for the succeeding year. To call this
"success" is really stretching the imagination.

Now, let's look at Michael's list again. The one that suggests tests
that Shearer claims are "passed with flying colors" by his "model":

|1) reproducing the spatial structure of contemporary climate

Shearer's model can only pass this if you feed it all the data
beforehand. Hardly worth noting. If you want 1000 locations, you
have to have 1000 sets of correcting observations, and it is still off by
one year.


|2) reproducing the seasonal cycle of contemporary climate

Shearer's model only has yearly data in it. Fail. If we revise
the model to do monthly steps, it only "passes" if you constantly
correct it with observed data. Even then, it is off by one month.
Another failure. If we tell Shearer's model that the difference from
one month to the next is due to changing solar forcing, it fails
because it says "makes no difference: still the same as last month".

|3) reproducing paleoclimate conditions

Same results as the seasonal cycle. Another failure.

|4) predicting the effects of perturbations like Mt Pinatubo

Let's see. We have climate Tx for year x. Without volcanoes, model says
"year x+1 will be like x". If a volcano erupts, the model says "year x+1
will still be exactly like x". So, Shearer's model says that Pinatubo
couldn't have any effect on climate. Another failure.

|5) reproducing physical conservation principles like mass conservation, energy
|conservation and conservation of angular momentum

Not even in the model, it seems. I guess Shearer is arguing that because
the model doesn't get it _wrong_, then it must be right. However, he said
above that 'More generally I consider large computer models "guilty until
proven innocent".' Obviously he has a different standard for his own
work. For himself, 0% becomes 100%. For others, anything less than 100%
becomes 0%.

|6) reproducing the course of greenhouse gas response to date

Again, Shearer's model can only do this if we constantly revise the
input data, and then it is out by a year. Another failure. If we treat
it like the volcano test, increasing CO2 still leaves the model saying
"same as last year: no difference if we change CO2".


As far as modelling is concerned, Shearer can't even tell the
difference between input and output. His "model" doesn't even qualify
as an input filter, because it does _nothing_ with the data it reads in
other than delaying the output. The only thing it does is consume time.
In the absence of observations to constantly correct the model, the
simulation of the past 100,000 years shows _none_ of the existing
record.

Yet Shearer argues that "This model passes the above tests with
flying colors." What a low standard he is applying! He goes on to ask
"Are you impressed?", to which the answer is clearly "No". In this case,
the question applies to Shearer's understanding of climatology, rather
than his model.

[deletia]

> Dave Halliwell posted:
>> There have already been *several* levels of "making them more realistic
>>still does not significantly change the results", which is the criterion
>>you stated for you to "give greater weight to their results".
>
> If the models all give the same results where did the 1.5-4.5
>C range come from? Or are you contending 1.5 C and 4.5 C are not
>significantly different?

You seem to be living under the illusion that "same" and "similar"
mean the same thing. You have also once again failed to recognize the
significance of the different classes of models, and obviously don't
have any clue as to _why_ different GCMs give different results. The
comparison being made was the results of different _classes_ of models,
which is the basis for judging them to be "realistic". With each class,
there can be a range. It is not necessary that the range be zero before
we conclude that differences between classes are not significant.
Actually, it is _because_ the ranges within a class are large that small
differences from class-to-class are unlikely to be statistically different.

The range for GCMs is 1.5-4.5C. The range for other classes of
models is similar. The GCM results _confirm_ that the simplicity
of the other models is reasonably realistic (for the types of things that
these simple models are designed to do).

It is obvious that your understanding of statistics is as miserable
as your understanding of climatology. Perhaps you might wish to read up
on t-tests some day.
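
For the record, a sketch of the statistical point in Python (the
sensitivity numbers are invented for illustration, not actual model
results): when the spread within each class is wide and the classes
overlap, no significant difference can be claimed:

    from scipy import stats

    gcm_sens = [1.9, 2.8, 3.5, 4.4, 2.3, 4.1]  # hypothetical, deg C
    simple_sens = [2.1, 3.0, 3.9, 2.6, 4.3]    # hypothetical, deg C

    t_stat, p_value = stats.ttest_ind(gcm_sens, simple_sens)
    print("t =", round(float(t_stat), 2), " p =", round(float(p_value), 2))
    # A large p-value: the within-class spread swamps the between-class
    # difference, so the two classes agree as well as can be measured.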

> Michael Tobis asked:
>>I would like Mr Shearer to provide a *single* verifiable in-context quotation
>>of a climate modeller overselling the reliability of GCMs, which seems to be
>>the class of model Mr Shearer is criticizing, or of any other approach.
> I replied (in part):
>> I was thinking of you and Halliwell. Halliwell for example has
>>contended that climate models are as reliable as weather prediction
>>models (or at least he jumped all over me when I stated they aren't).
> Dave Halliwell comments:
>> Glad to see that Shearer has selected something that fulfills Michael's
>>request for a *verifiable*, in-context quotation. His response to Michael
>>also demonstrates why he seems to hold the position that he does: either
>>he can't read, or he doesn't understand english.
>>
>> For the record, here are portions of four posts that I have made
>>in the last month or so regarding climate models and weather models.
>>NOTHING that I have said can be reasonably interpreted as a claim "that
>>climate models are as reliable as weather prediction". In fact, I have
>>acknowledged that there are differences between the two. The position of
>>Shearer's that I have challenged was his claim that weather models
>>contributed _nothing_ to climate models. I challenge him to provide a
>>_single_ quote from my posts that makes _any_ direct comparison of the
>>overall reliability of climate and weather models.

Note that none of the following will provide _any_ response to the
challenge I make.

>
> I have made no claim that weather models contributed _nothing_
>to climate models. The start of our exchange on this topic was as
>follows.
> I stated:
>> Science progresses by testing theories against experiments and
>>observations of the natural world. Climatologists have essentially no
>>ability to experiment and a very limited set of observations.

Yet weather modellers _do_ have a large ability to experiment.
Generally they can't do much with weather itself, but _much_ of the
physics involved *can* be brought into a lab. Your claim was that
climatologists have _no_ ability to experiment and this is patently
false. It is as if climatologists ignore physics in their models, and
it belies the fact that _portions_ of models can be tested prior to
their inclusion in the overall climate model.

> You asked:
>> Beginner-type question: how much is there in common between climate
>>models and numerical weather prediction (NWP) models? (There is an
>>intentionally-misleading aspect to this question.) Would you put NWP
>>models in the same category regarding lack of suitable testing?
> I replied:
>> No, I would not put NWP models in the same category because,
>>since they predict on a much shorter timescale (and in some cases for
>>smaller areas), the observational record is in effect much larger.

...which showed your complete ignorance of the existence of other
classes of climate models besides GCMs, and the large extent to which
the physics of GCMs and NWP models overlap.

>
> You then proceeded to flame me for the above remark. If you
>in fact agree that weather prediction models are more reliable than
>climate models I don't see the point of criticizing me for saying it.

You were "flamed" for your ignorance about the other classes of
models, and the overlap I refer to above. To begin with, your idea
of a simple comparison on "reliability" is so incredibly naive that
it isn't worth commenting on. It's like saying "is Excel more reliable
than Word?" Secondly, regardless of what the result of such a vague
comparison might be, your reasoning is so limited (and largely based
in ignorance) that an acceptance of the claim does *not* give credence
to your argument. Even if there is _some_ truth to the results of the
comparison, it does not mean that the argument you use to arrive at
that conclusion is a correct one.

>Similarly I don't see the point of asking whether I can describe the
>differences between climate models and weather models if you agree
>that they are different.

Ahh, but I know *how* they are different, and what that means in
terms of their output. I understand something about the weaknesses of
the models, and what kinds of things can be trusted (or not trusted)
in the results. The point of asking *you* to describe the difference
is to see if your _opinions_ have any basis in fact. Actually, that's
not quite true: the point of asking you to describe the difference is
to DEMONSTRATE that your _opinions_ DO NOT have much basis in fact.

Cats and dogs are different. That doesn't mean that I am justified
in saying that dogs have more legs than cats.

Michael Tobis

Feb 21, 1995, 12:32:05 PM
David has misunderstood James' model: it's even worse than David suspected.
The model isn't just a "persistence" model: that would show too much structure.
(I thought the same at first reading.)

The model which claims to pass all tests "with flying colors" predicts
the climate of year x+1 by looking for year y in the record most similar
to x, and predicting that year x+1 will resemble year y+1. This isn't
really even "climate", to be sure, but sort of a coarse weather prediction.

It operates on the assumption that averaged weather of year x is a function
of average weather of year x-1. If this were true, it would pass "with flying
colors", but of course, it still wouldn't be a model of climate. Alas, it
isn't particularly true.

In practice, the proposed model is actually a very noisy model of El Nino
cycles! Where the "flying colors" conclusion comes from escapes me entirely.

Here's my response to Shearer, which I wrote a couple of days ago:


Subject: Re: computer modeling (was scientific culture)
Newsgroups: sci.environment
References: <19950216....@almaden.ibm.com>
Distribution:


Sigh. More debating club tactics.

j...@watson.ibm.com wrote:
: Michael Tobis asked:
: >I would like Mr Shearer to provide a *single* verifiable in-context quotation
: >of a climate modeller overselling the reliability of GCMs, which seems to be
: >the class of model Mr Shearer is criticizing, or of any other approach.

Of course, I meant any other modelling approach. I didn't think observations
were at issue.

: To clarify, the book contains what appears to be Hansen's
: entire statement. I extracted the above paragraph.
: Michael Tobis continued:
: >I suspect all Hansen was trying to say was that there is a verifiable
: >warming, not that it is verifiably outside the range of natural variability.
: >If so, I would support the statement in such a context. If the statement
: >was meant to imply that there is a real warming *outside of natural
: >variability*, the reasoning is inadequate and the presentation misleading.

: The second bullet at the start of Hansen's statement (p. 35 in
: the book) is "The global warming is now sufficiently large that we can
: ascribe with a high degree of confidence a cause and effect relationship
: to the greenhouse effect." The quoted paragraph came from a section
: titled "Relationship of global warming and greenhouse effect". The
: conclusion of this section (p. 40 in the book) reads:
: "Global warming has reached a level such that we can state
: with a high degree of confidence a cause and effect relationship between
: the greenhouse effect and the observed warming. Certainly further study
: of this issue must be made. The detection of a global greenhouse signal
: signal represents only a first step in analysis of the phenomenon."

: You said "or of any other approach". Here we have a prominent
: modeler making a "misleading" presentation to Congress based on
: "inadequate" reasoning.

Showing warming falls short of showing greenhouse warming, but it isn't
off topic. I apologize to Abrahamson. It is Shearer who dropped the context.

: This indicates to me that at least some
: modelers can not be trusted to give an objective account of the
: reliability of their work.

It isn't remotely relevant. Hansen's statement was thought by many to
be a bit premature, but it isn't grotesquely out of line with the evidence:

- There is warming

- It is at the edge of natural variability, and probably exceeds *unforced*
natural variability

- It is of a magnitude that matches the expected response to anthropogenic
forcing

- There is no evidence of large changes in natural forcing.

- The usual standards of statistical verification had not yet been reached
at the time Hansen made his statement. (I think they will be very shortly
if one accepts that there has been some suppression of North Atlantic
Deep Water formation.) (By the way, Montana *is* a good place to look
for a global warming signal...) Hansen argues that 99 % certainty
of hypothesis verification is not a good target in an experiment on this scale.

: I stated (in reply to Evens):
: > I disagree, the underlying physical principles are not well
: > understood in any effective sense (saying everything follows from
: > Schrodinger's equation is not helpful).
: Tobis responded:
: >It would if the models were an implementation of Schrodinger's equation.
: >In fact, they implement the Navier-Stokes equations, and the result is
: >pretty good.

: The models contain much more than the Navier-Stokes equations.
: In any case it is my understanding that it is computationally infeasible
: to solve the Navier-Stokes equations directly because of turbulence.

Of course, there is some parameterization of subgrid scale processes, as
there is in any fluid modelling (aerodynamics engineering, for instance).
However, the models do have demonstrable and demonstrably improving skill
in medium range (a week or so) weather prediction. This is not a demonstration
of their applicability to climate studies, granted.

(It is always dangerous to grant something to Shearer. Will he now say "Tobis
admits that weather models are not applicable to climate studies"? I admit
no such thing. I admit that the previous statement doesn't prove the contrary.
Sorry to go on like this, but the debating club tactics around here force
me to do it.)

The effectiveness of the models in predicting medium term weather demonstrates
that the phenomena of weather (whose statistics we call climate) are
increasingly well understood.

: If the purpose is insight into the transient response, why are
: results usually given for the equilibrium response?

I wish they weren't.

: If the models
: don't agree on the sensitivity, why should I expect them to be reliable
: for the transient response and the spatial structure which are
: generally considered to be harder to predict?

You shouldn't. These are realizations of the most similar structures to
the climate system that human ingenuity can construct. The only aspects
that should be relied on are those that the models generally agree on,
and even those shouldn't be accorded certainty.


: Your original argument was that the catastrophic failure of
: some Wall Street models indicates climate models are intrinsically
: optimistic. This argument is obviously absurd.

OK, I left out a couple of steps. First, remove "Wall Street" from the above
characterization of my position. Then note that a catastrophic failure of
the models represents a regime shift from contemporary climate, which
is successfully modelled. It is hardly likely that the effects of such a
regime change will be less disruptive than the extrapolations the models
perform, which assume no sudden regime change that would invalidate the
parameterizations and/or boundary conditions.

: I posted:
: > A belief that one is obligated to understand a model before
: > criticizing it is dangerously wrong, making incomprehensible models
: > above reproach.

: Michael Tobis replied:
: >No, I express a belief that one should TRY to understand a model before
: >criticizing it. You haven't made the effort.

: This is unrealistic. People must decide how much weight to
: give computer models without examining each one in detail. As I said
: before I discount the results of any model which cannot be convincingly
: validated.

So far I agree.

: The climate models have not been convincingly validated.

You mean YOU aren't convinced. But you have to evaluate the evidence
before you can be convinced.

: Your arguments to the contrary are basically wishful thinking. I also
: find it amusing that you agree complicated computer models in other
: fields are of doubtful validity but you expect me to believe
: climatology is different.

*some* other fields. Models based in physics are strongly constrained.
Seismologists and astrophysicists, for example, have the same logical
constraints as climatologists. They have the good fortune of not having
to deliver bad news, however. Their models are more reliable than
economists' because they are testable by other means besides validation
over the entire range.

: I am willing to expend some effort to learn more about the
: models. Do you have some accessible references? For that matter are
: there any climate models, which you are willing to defend, available
: on the net?

You can't run a GCM. Alas, the user interface isn't the strong point of
these models, but more to the point they take enormous computational
resources. However, you can learn about them. The best introductions are:

AUTHOR Henderson-Sellers, A.
TITLE A climate modelling primer / A. Henderson-Sellers and K. McGuffie.
-- Chichester ; New York : Wiley, c1987.

and

AUTHOR Washington, Warren M.
TITLE An introduction to three-dimensional climate modeling / Warren M.
Washington, Claire L. Parkinson. -- Mill Valley, CA : University
Science Books ; Oxford, New York : Oxford University Press, 1986.

The former covers the whole spectrum of models, and the latter just GCMs.

And please read the IPCC reports. Please?

: This is not the problem. If the model computed the correct
: answer for 100 random inputs chosen from the entire input space this
: would be strong evidence that the model computes the correct answer at
: least 90% (for example) of the time. This would not follow if the 100
: inputs were chosen from a small subset of the input space. This is
: why I said the "entire range".


: The real problem is that it is impossible to test any computer
: program if you don't know what the right answer is.

Well, since high CO2 and ice caps have never coexisted, you are saying that
no skill whatsoever in predicting the climate of that situation is possible,
since no prior validation is possible. What happened to physics?

: I said:
: > I will also mention that it is very dangerous to use any data
: > used to develop the model to validate it. This is shown over and over
: > by stock trading (or sports betting) schemes which work great
: > retrospectively but fail when applied to future events.

Yes, but that is the misunderstanding I wanted to address. The models
are much more physical than you seem to want to believe. The parameterizations
of unresolved processes are based on standard physical arguments, including
experiment. There may be a couple of "knobs", values which we have
only an approximate idea of, which are "tweaked" to "tune in" the best
picture of the observed climate. But these choices have far fewer degrees
of freedom than the modelled system. We use the observed system to help
us determine the values for these parameters, but the parameters are
at least approximately independent of the state of the system. (For instance,
the parameters are not changed for the seasons, which after all do represent
a significant climate shift, and which are well represented.)

If a system with a few dozen parameters matches a record of a few
dozen points, it has no predictive value. If a model with a few dozen
parameters matches a record with thousands of degrees of freedom, we
can conclude that there is some predictive value in the model.
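
A quick Python sketch of that argument (toy data, nothing to do with
actual climate output): a 3-parameter model matching 1000 points is
telling you something; a 1000-parameter "model" matching 1000 points
is telling you nothing:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 1000)
    data = 2.0 + 0.3 * t + np.sin(t) + rng.normal(0.0, 0.1, size=t.size)

    # Model A: 3 free parameters (a + b*t + c*sin(t)), fit by least squares.
    design = np.column_stack([np.ones_like(t), t, np.sin(t)])
    params, *_ = np.linalg.lstsq(design, data, rcond=None)
    misfit = (data - design @ params).std()
    print("3 params, 1000 points, rms misfit:", round(float(misfit), 3))
    # Model B, with as many parameters as points, would have zero misfit
    # by construction -- and would therefore demonstrate nothing.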

: > Do you doubt that it is possible to come up with obviously
: > wacko models which pass all or most of the above tests. Consider for
: > example the following model for predicting climate. Find the year x
: > in the last 100000 with climate most similar to this year's climate.
: > Then predict next year's climate will be that of year x+1. This model
: > passes the above tests with flying colors. Are you impressed? (One
: > might quibble that the climate records for the last 100000 years are not
: > good enough to actually implement this model, however since you have
: > been making claims about how well paleoclimatic conditions are known
: > this shouldn't bother you.)

The more I look at this, the less sense it makes. What is your metric of
"similar"? In practice, this method is a clumsy way of identifying the
El Nino phase, and I think its skill would be quite small on the tests I
mentioned for a 1-year prediction, and negligible for a 5-year prediction.

What are you trying to prove here? That you think climate is a scalar
variable?

: The current general climate models are hybrids containing
: large empirical components.

OK, there's a testable claim. Please justify it.

: What you believe to be additional information, I believe to be
: noise.

For policy purposes, the information is modest, but claiming it to be
zero is insupportable. Also, there is continual and rather impressive
improvement. Finally, the primary purpose of the models is to contribute
to the improvement of knowledge, not to make policy. Observation, theory,
and calculation are the three requirements for progress in this field,
and it is the resultant knowledge, as strong or weak as it may happen to
be, that is the necessary input to rational policy.

: Any cost/benefit analysis of global warming (and mitigation
: measures) is subject to substantial uncertainty and it is unreasonable
: to pretend otherwise.

I agree completely, and I know of no scientist who "pretends" otherwise.

However, there is little alternative, besides becoming politically
correct of one stripe or another, and choosing either to do nothing
differently (until it is too late), or to do nothing whatsoever (and
start starving early), which are the alternatives generally presented by
those motivated by politics rather than pragmatism.

mt

Jan Schloerer

Feb 22, 1995, 9:49:24 AM
In article <3i8ovr$g...@umt.umt.edu>
Anthony C Tweedale (es...@selway.umt.edu) included :

> _geophysical research letters_ 21.2067. i read about it in
> _new scientist_ 144.1949.21 (29 oct 94 (i was just reading it now,
> so maybe i mislead you by saying 'new')):
>
> "the sun was much less active between 1700 & 1850 than astronomers
> have thought." "instead [sunspot activity] built up much more
> gradually [since 1700]." "the snag is that wolf's interpretation

> of the old records may not be reliable. [ ... ... ]


> "the maunder & dalton minima still show up clearly .. the most impt.
> diff. .. is a steady overall increase in solar activity, starting
> from the end of the maunder minimum and lasting 'till today."


Just stumbled over the New Scientist article in a dusty pile.
Tony's report is correct, just one point:

The term "much" employed somewhat muchly ;-) by the New Scientist
is relative. There is a graph of the new sunspot record in the New
Scientist article [1]. An old version of the sunspot record appears
on p 28 of Foukal's 1990 overview [2]. Assuming the new record is
more precise, to my amateurish eye the improvement looks indeed
relevant, though not earth-shaking.

To put things into perspective: conventional wisdom assumes that
during the Maunder Minimum, between about 1645 and 1715, solar
irradiance was about 0.3 % lower than today. To be fair, a larger
difference cannot yet be ruled out, but it doesn't seem too likely
from what is known so far [3]. If the 0.3 % figure is about right,
then the discrepancies between the old and new sunspot record are
of the order of 0.1 % of total solar irradiance.
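
As rough Python arithmetic (my own back-of-envelope: a solar constant
near 1366 W/m^2, a planetary albedo near 0.3, and the usual factor of
4 for sphere/disc geometry are assumed, none of it from the papers
cited below):

    S0, ALBEDO = 1366.0, 0.3

    def solar_forcing(fractional_change):
        # global-mean forcing: scale by the absorbed fraction, divide by
        # 4 for the ratio of the earth's surface to its cross-section
        return fractional_change * S0 * (1.0 - ALBEDO) / 4.0

    print(solar_forcing(0.001))  # ~0.24 W/m^2, the 0.1 % discrepancy
    print(solar_forcing(0.003))  # ~0.72 W/m^2, the Maunder Minimum figure

Both are small beside the roughly 4 W/m^2 usually quoted for a
doubling of CO2, which is one way to see "relevant, though not
earth-shaking".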


> do any climetologists/modelers here know if that's been done yet ?
> or care to speculate what the efect on gcm results might be ?

You might ask Karin Labitzke or David Rind ;-) From what they told
Scientific American [4], my impression was there are some bits to be
learned before your question can be reliably answered.


[1] John Gribbin, Surer sunspots should improve warming forecasts.
New Scientist 144, #1949 (29 Oct 1994), 21
[2] Peter V. Foukal, The variable sun.
Scientific American 262, 2 (Feb 1990), 26-33
[3] Peter Foukal, Stellar luminosity variations and global warming.
Science 264 (8 April 1994), 238-239
Richard R. Radick, Peter Foukal, Stellar variability
and global warming. Science 266 (11 Nov 1994), 1072-1073
[4] Corey S. Powell, Talk about the weather: Insights help to
explain solar effects on climate. Scientific American 271, 5
(Nov 1994), 10-14


Jan Schloerer schl...@rzmain.rz.uni-ulm.de
Uni Ulm Klinische Dokumentation D-89070 Ulm Germany

Michael Tobis

Mar 1, 1995, 8:41:48 PM
j...@watson.ibm.com wrote:
: I have found an alternative access to sci.environment so (for
: better or worse) I will continue to post as I have time.

In fact, I am pleased. It's useful to have your interesting and intelligent
questions, and I acknowledge that I haven't seen adequate answers to all
of them (in particular, questions about extrapolating the behavior of
the carbon cycle itself, which I don't have much background in, but should
as an ocean modeller.)

However, I would hope you would pay more attention to the responses, some
of which seem to have escaped you, much as:

: Tobis and Halliwell seem to have completely missed the point
: I was trying to make with the model I posted.

I had thought so.

: Actually the closest year to 1863 in the record is obviously
: 1863. This will be true for any reasonable metric for "similar" which
: is why I did not bother to specify one. Thus the model predicts the
: climate of 1864 will be that of 1864 with an error of 0 and in general
: predicts the past perfectly. Of course you may object that this happy
: state will be unlikely to continue if we attempt to use the model to
: predict the future. However that is exactly the point I was trying to
: make.

I think you need to be a little more specific as to how this constitutes
a model at all. It's clear to me, with some DSP background, that there
is a whole family of implicit models there, with precisely no predictive
value: say, an N-dimensional Mth order vector polynomial, where N is the
number of measurements in each sample of the record and M is the number
of samples. This will make perfectly useless, hopelessly nonphysical
predictions, even for interpolations, and more so for extrapolations
(temperatures frequently below absolute zero, etc.), but will recapitulate
the specified record perfectly. Granted without reservation. What of it?
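
To spell one member of that family out in Python (a toy instance, with
an invented record): give a polynomial as many coefficients as the
record has points and it replays the record exactly while
extrapolating to junk:

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.linspace(0.0, 1.0, 10)              # rescaled for conditioning
    temps = 288.0 + rng.normal(0.0, 0.5, size=10)  # an invented record, in K

    coeffs = np.polyfit(years, temps, deg=9)       # 10 coefficients, 10 points
    print(np.abs(np.polyval(coeffs, years) - temps).max())  # ~0: exact replay
    print(np.polyval(coeffs, 1.2))  # just past the record: typically wildly
                                    # nonphysical, hundreds of kelvin off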

Do you think climatologists are so 1) stupid or 2) dishonest that they are
peddling such a "model"?

Please be realistic. The models operate in a space of 10 million or so
dimensions (about a dozen parameters at about a million points). That they
replicate the appropriate physical scales at all is an accomplishment in
itself. In fact, this represents some of the strongest evidence that the
broad outlines of climate physics are understood. This is because, unlike
your "model" which has the same number of degrees of freedom as it has
data points, there are only a few dozen "free" parameters in the model,
which are tuned, as in the recent cloud transparency discoveries, by
physical verification.

There is rather a subtle point here - the dimension of the model space,
which is much larger than the dimension of the physical system (at least
as best represented on a large (100 km grid) scale), which is in turn much
larger than the set of tunable parameters.

That these models represents the physical system with enough success to do
increasingly accurate long range weather predictions is a substantial
success.

(Historical note, possibly of interest: the first reasonably successful
attempt to do a computer model of weather was Charney, Fjortoft, &
von Neumann, around 1950. I can look it up if anyone cares to
have a citation. Yes, THAT von Neumann.)

: The past record is embedded in the above model in an obvious
: way, however there are much more subtle ways of doing the same thing.
: The effect will still be a model that does not predict the future as
: well as it predicts the past.

This confuses weather models with climate models. (Unfortunately, "transient"
runs make this point of mine somewhat weaker and more subtle. For simplicity,
take the following to refer to quasi-equilibrium forcing studies only.) Climate
models in that sense are NOT dynamic, so their objective is not to TRACK
climate across changing forcing and boundary conditions, but to REPRESENT
climate, i.e., the statistics of weather. So the models are not trained on
a sequence, as an adaptive delay-line filter might be.
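
The distinction shows up in any toy chaotic system; here is a Python
sketch using the logistic map (my example, nothing to do with GCM
internals): trajectories from nearby starts diverge quickly (the
"weather"), while their long-run statistics barely move (the
"climate"):

    import numpy as np

    def trajectory(x0, n=100000, r=3.9):
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)  # one logistic-map step
            xs[i] = x
        return xs

    a = trajectory(0.200000)
    b = trajectory(0.200001)  # nearly identical start

    print(abs(a[50] - b[50]))        # typically O(1): the "weather" is lost
    print(abs(a.mean() - b.mean()))  # small: the "climate" is robust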

This is perhaps the main point at which the misunderstanding arises.
People without an EE (or perhaps economics) background will completely
fail to see your point, because the models are far less general and
far less tunable than the general time series estimation stuff I (and
apparently you as well) was raised on.

The models are tuned to represent unresolved
physics, and improvements in the tuning are accomplished not only by matching
the models with observations but also by painstaking study of the physical
principles. Also, the models have FAR FEWER tunable parameters than the
system they represent. Also, at least for equilibrium studies, the
model output is not a function of time, so the objection about merely
replaying a training sequence does not apply.

Finally, it is amazing to me that climatologists, who have managed this
rather impressive (though not unprecedented - no one questions that
aerodynamic engineers and racing yacht designers can do similar things)
feat, can be accused of being so stupid as to rely on a model untested
except on its training sequence. Of course, this fails to represent how
GCMs are used at all, as I hope I have succeeded in explaining to some degree.

: Tobis and Halliwell may deny that this
: could be happening with climate models, however I think it is almost
: certainly happening to some extent.

I hope I have explained why the extent to which this could be
happening is relatively small. The complaint that the models are of
relatively smaller use the further one gets from established climate
conditions is a valid one, to be sure. Drs. Kutzbach and Foley at our own
institution, and many others, have gone to considerable lengths to test
the models against paleoclimatic evidence, which to be sure is not
as strong as direct measurement, but again has significant non-zero value
in addressing this question.

: On another point I posted:
: > I am willing to expend some effort to learn more about the
: > models. Do you have some accessible references? For that matter are
: > there any climate models, which you are willing to defend, available
: > on the net?

: Michael Tobis replied:
: >You can't run a GCM. Alas, the user interface isn't the strong point of
: >these models, but more to the point they take enormous computational
: >resources. However, you can learn about them. The best introductions are:

: You might be surprised at what computational resources I have
: access to. In any case I didn't plan to attempt to do more than few
: timesteps. Just reading the source might be illuminating. Is source
: available? Is it a requirement for publication of model results that
: the source be freely available? (I certainly think it should be a
: requirement.)

In fact, the main modelling centers are a bit reluctant to share their code.
I would tend to agree with your position. I would defend them in part by
saying 1) the codes are relatively freely available within the community
of geophysical fluid dynamicists and 2) grant money is tight and they
seem to expect some sort of collaboration or contribution from people
they share their code with. These are my observations as a somewhat peripheral
player, and may be wrong. To get an idea of what you are up against, I
can tell you that the GFDL ocean model IS available by ftp. If you're
serious, email me and I'll send you details.

As for resources, this fascinates me. I wonder if those who feel so
strongly about the private vs public sector could find private sector resources
to tackle this problem. Let me know - I'm looking for a post-doctoral
position. I'll happily send along a resume.

: Also as Dave Halliwell keeps repeating there are smaller
: models. Is the source to any of them available?

Yes, the Henderson-Sellers book contains a number of them, as does the
Washington book. Shall I post the references again???

: Halliwell has contended that source is not needed to understand
: a model. I may dispute this at length if I find time; for now
: I will just note that the idea that a computer program is necessarily a
: faithful implementation of a short description of what the model is
: supposed to be is extremely naive.

I'm not one to defend every software engineering practice used in the
main models, but the proof really is in the pudding. They work rather
better than one a priori might expect. That they implement *exactly* what
their documentation claims they implement is not likely. That they
implement dynamical systems which share many properties with the climate
system is not thereby disproven. They are the best tools we have for many
tasks, and urging us to drop them seems senseless. (Urging us to use
better design and documentation techniques is another point entirely, one
with which I'd heartily concur.)

All of the above refers to climate models which take carbon forcing as
given. I do not know as much as I'd like to know about the carbon cycle
models. I hope to be able to give a better picture of those at some time.

These two efforts are reasonably decoupled at present. There may be a
biotic feedback if the changes become severe - from dying boreal forests
and concomitant soil damage. (Woodwell discusses this, for example.)

Also, the methane forcing, which appears to be settling down, may come back
into the picture if the tundra starts melting. Such feedbacks are
presently left out of the picture, and the atmospheric composition issues
and the climate dynamics questions are studied separately. Here again, it
seems likely that coupling phenomena left out of the separate models may
exacerbate the predicted response. (I would add that those claiming some
sort of comfort from the fact that CO2 concentration seems to have lagged
global temperature in the ice age cycle should consider that this may
in fact be evidence for an exacerbating biotic feedback.)

mt

Michael Tobis

Mar 1, 1995, 11:57:02 PM
sigh. so much for careful editing. anyway, the points still seem sound to
me. sorry about the typos, sentence fragments, etc.

mt
