
Sep 10, 2007, 8:06:05 AM

to globalchange

Not that I'm suggesting for a moment that it isn't, you understand.

This is more of a technical question.

Given all of the uncertainties relating to natural variability,

decadal cycles and the like, the ACIA in 2004 went no further than

saying (to paraphrase) 'This looks like an anthropogenically-induced

decline from GW, but we can't say so for certain'.

Things have moved fast since then, in terms of the rate of summer

decline. Has the process gone far enough yet for us to say,

definitively, that this must be an effect of AGW? Is there a numerical/

statistical analysis which places recent losses beyond the possible

bounds of natural variability + error?

Finally, is there any known reasonable alternative hypothesis for the

rate of change in the summer sea ice extent/area?

If it is clear that the uncertainty about the causes of Arctic sea ice

decline has diminished to the point of near certainty, we would then

be strongly placed to shout loudly on all blogs, ours and those of

skeptics: 'Where's the ice?'

Sep 10, 2007, 8:32:41 AM

to globalchange

On Mon, 10 Sep 2007, Fergus wrote:

> Things have moved fast since then, in terms of the rate of summer

> decline. Has the process gone far enough yet for us to say,

> definitively, that this must be an effect of AGW? Is there a numerical/

> statistical analysis which places recent losses beyond the possible

> bounds of natural variability + error?

Quite a few people seem to be pushing "the ice has declined faster than the

models predict" line. So that would appear to rule out anthropogenic factors as

the cause :-)

More seriously, I don't think "attribution" of ice decline is done in the way

that T changes are done. It seems to be more of the "look at this and look at

what we predicted" kind of thing.

-W.

William M Connolley | w...@bas.ac.uk | http://www.antarctica.ac.uk/met/wmc/

Climate Modeller, British Antarctic Survey | 07985 935400

--

This message (and any attachments) is for the recipient only. NERC is subject
to the Freedom of Information Act 2000 and the contents of this email and any
reply you make may be disclosed by NERC unless it is exempt from release under
the Act. Any material supplied to NERC may be stored in an electronic
records management system.

Sep 10, 2007, 8:53:20 AM

to globalchange

I get the impression that the line tends to be: 'look, we told you

this would happen if AGW kicks in...', which sets up the sea ice

decline as an indicator of the credibility of the hypothesis and a

verification of model projections (but as you note...)


The problem I have is working out how we get from the 'what' to the

'why'. In one sense, it really is self-evident; warmer water, less

ice. It seems to make the Polar amplification idea look fairly solid,

too. But there must be a way of applying some kind of statistical tool

to the numbers to work out the significance of the change. There

should also be a way of calculating the ratio of feedback to forcing,

with the information that is now available.

I know what you mean, though.

Sep 10, 2007, 9:20:00 AM

to globalchange

On Mon, 10 Sep 2007, Fergus wrote:

> I get the impression that the line tends to be: 'look, we told you

> this would happen if AGW kicks in...' , which sets up the sea ice

> decline as an indicator of the credibility of the hypothesis and a

> verification of model projections (but as you note...)

sure. in which case the models have failed, as the obs are outside their range

:-) or does only predicting too little count?

> The problem I have is working out how we get from the 'what' to the

> 'why'. In one sense, it really is self-evident; warmer water, less

> ice. It seems to make the Polar amplification idea look fairly solid,

> too. But there must be a way of applying some kind of statistical tool

> to the numbers to work out the significance of the change. There

> should also be a way of calculating the ration of feedback to forcing,

> with the information that is now available.

you can do sig tests to show that the change is not likely due to chance (cue

JA...). at which point you know immediately it's due to... solar variation!

-w.
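The significance test W mentions can be sketched with synthetic data. This is a toy illustration only, not any analysis from the thread: the series length, the AR(1) "natural variability" parameters, and the imposed decline are all made-up assumptions.

```python
# Toy Monte Carlo significance test: could a decline this steep arise
# from red-noise natural variability alone? (All numbers assumed.)
import numpy as np

rng = np.random.default_rng(0)

def linear_trend(y):
    """Least-squares slope of y against time index 0..n-1."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def ar1(n, phi, sigma, rng):
    """AR(1) red-noise series standing in for 'natural variability'."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

# Hypothetical 28-year "observed" series: red noise plus an imposed decline.
n, phi, sigma = 28, 0.6, 0.5
obs = ar1(n, phi, sigma, rng) - 0.05 * np.arange(n)
obs_slope = linear_trend(obs)

# Null distribution: slopes of pure-noise series with no forcing at all.
null_slopes = np.array([linear_trend(ar1(n, phi, sigma, rng))
                        for _ in range(5000)])
# One-sided p-value: fraction of no-forcing worlds with a decline this steep.
p = np.mean(null_slopes <= obs_slope)
print(f"observed slope {obs_slope:.3f} per year, Monte Carlo p = {p:.3f}")
```

The red-noise null matters: testing against white noise would overstate significance, since autocorrelated variability produces spurious trends far more readily.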

Sep 10, 2007, 9:45:20 AM

to globalchange

On Sep 10, 8:32 am, William M Connolley <w...@bas.ac.uk> wrote:

> On Mon, 10 Sep 2007, Fergus wrote:

> > Things have moved fast since then, in terms of the rate of summer

> > decline. Has the process gone far enough yet for us to say,

> > definitively, that this must be an effect of AGW? Is there a numerical/

> > statistical analysis which places recent losses beyond the possible

> > bounds of natural variability + error?

>

> Quite a few people seem to be pushing "the ice has declined faster than the

> models predict" line. So that would appear to rule out anthropogenic factors as

> the cause :-)

>

> More seriously, I don't think "attribution" of ice decline is done in the way

> that T changes are done. It seems to be more of the "look at this and look at

> what we predicted" kind of thing.

I suppose you are referring to the paper which was published in the GRL just last Saturday and was given some recent press coverage. Here's the citation:

Overland, J. E., and M. Wang (2007), Future regional Arctic sea ice declines, Geophys. Res. Lett., 34, L17705, doi:10.1029/2007GL030808.

Given that this year's sea-ice extent (and also area) is exhibiting a very strong decline, it would be easy to conclude that the models appear to be understating the problem. Perhaps there is some other mechanism involved with this year's decline which may not be included in the various models. If this year's minimum is just due to natural variability, we would see a return of more sea-ice next year, right? No worries, everything's OK, just keep moving!!

E. S.

Sep 10, 2007, 9:56:47 AM

to globalchange

On 10 Sep, 14:20, William M Connolley <w...@bas.ac.uk> wrote:

>

> you can do sig tests to show that the change is not likely due to chance (cue

> JA...). at which point you know immediately it's due to... solar variation!

>

Can you also do sig tests to eliminate the solar component? IOW: do we

have enough data yet to match solar variability to sea ice

variability? (I am sure this actually involves far too many other

contingent variables, btw).

Eric: The Overland paper (abstract) reads a bit oddly in the light of

this year's decline (in that the estimates look too conservative).

Then you notice it was submitted back in May, so written before that.

It looks to me like a generally 'low end' estimate is still the

models' best guess, compared to observations, and the ensemble run has

'upped' the estimates of decline somewhat, but the models still appear to suffer

from some deficiency in their process; perhaps an underestimate of

feedbacks?

Sep 10, 2007, 10:28:57 PM

to global...@googlegroups.com

William M Connolley wrote:

> you can do sig tests to show that the change is not likely due to chance (cue

> JA...

...who will observe that significance tests can never answer questions

like "was it due to chance" (or not), but only questions like "how

likely is it that observations as extreme as these would arise in a

hypothetical world with no forcing"?

Which is not the same thing at all.

Here closeth the parenthesis.

:-)

James
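James's distinction can be made concrete with a two-line Bayes calculation. Every number below is invented purely for illustration; the only point is that P(observations | no forcing) and P(no forcing | observations) are different quantities.

```python
# Toy illustration (invented numbers): a small p-value is not a small
# posterior probability of the null hypothesis.
p_data_given_null = 0.03   # chance of a decline this extreme with no forcing
p_data_given_alt = 0.30    # likelihood under a forced-decline hypothesis
prior_null = 0.5           # prior belief in "no forcing"

# Bayes' rule: P(null | data).
posterior_null = (p_data_given_null * prior_null) / (
    p_data_given_null * prior_null + p_data_given_alt * (1 - prior_null))
print(f"{posterior_null:.3f}")  # prints 0.091 -- not the p-value, 0.030
```

Changing the prior or the alternative's likelihood moves the posterior freely while the p-value stays fixed, which is exactly why the two cannot be conflated.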

Sep 10, 2007, 10:39:01 PM

to global...@googlegroups.com

I'm at a conference with some statisticians at this moment. One talk

today discussed a recent paper in Nature which alleged that the

Chicxulub meteor had a certain origin with probability 90%. The gist of

the talk was that the professional statistician was unable to ascribe

enough meaning to the 90% claim to refute it.


Admittedly he was a "frequentist"; I had no idea how seriously people

take this division, but as a marginally statistically educated person with Bayesian sympathies, I was left unable to ascribe much meaning to it myself.

Statistical attribution was always a red herring.

Global change isn't a drug trial and we can't round up 500 planets to

give half of them CO2 and half a placebo to get a 99% refutation of

the null hypothesis. We actually have to think, not just apply

formulas.

mt

Sep 10, 2007, 11:56:02 PM

to global...@googlegroups.com

Michael Tobis wrote:

> I'm at a conference with some statisticians at this moment. One talk

> today discussed a recent paper in Nature which alleged that the

> Chicxulub meteor had a certain origin with probability 90%. The gist of

> the talk was that the professional statistician was unable to ascribe

> enough meaning to the 90% claim to refute it.


http://www.nature.com/nature/journal/v449/n7158/full/nature06070.html

The paper itself is actually clear enough. If a particular event took

place as described, then it would have produced impacts at a much

greater rate than the background, such that any impact would with 90%

probability come from this event. I don't think it is unreasonable to

think about a single impact as a random sample from an "urn" of rocks

floating around in space. But on the face of it the research does not

justify the claims made in the press (or your phrasing above).

> Global change isn't a drug trial and we can't round up 500 planets to

> give half of them CO2 and half a placebo to get a 99% refutation of

> the null hypothesis. We actually have to think, not just apply

> formulas.

What I find interesting about it all is how little people care. It's not

as if I am the first person to think about it, indeed I am doing nothing

more than following a well-worn path (there are rants aplenty on this

general topic on the web). And yet...as Nature put it, "the concerns you

have raised apply more generally to a widespread methodological

approach" and therefore can safely be ignored.

James

Sep 11, 2007, 12:46:25 AM

to global...@googlegroups.com

Yes, that's the one, thanks.

Since this isn't a public talk I won't identify the frequentist in

question, but he was uncomfortable with the very idea of assigning a

probability to an event that "either happened or didn't". Something

about babies and bathwater comes to mind.

That said, he also described a very long and involved set of

calculations that went into the figure, and pointed out that no effort

was made to assign confidence bounds to any of it.

I don't know of any claims about this paper in the press.

I have two very trusted independent sources who haven't the slightest

doubt that the Chicxulub impact was the dinosaur killer; I am not sure

they care very much which deep space event sent that gift our way. I

certainly can't get all that worked up about it. Although I'm a

worrier by nature I have a hard time getting too alarmed by potential

harm from Chicxulub Jr.

While I am mentioning Chicxulub, it is amazing that life survived the

event at all. That's based on the description I heard from

geophysicist Sean Gulick of Texas recently, who is working this up as

an outreach talk. Likely all remaining life descends from a few

survivors deep within caves which were immune to the huge temperature

swings.

mt


Sep 11, 2007, 1:23:32 AM

to global...@googlegroups.com

Michael Tobis wrote:

> Yes, that's the one, thanks.

>

> Since this isn't a public talk I won't identify the frequentist in

> question, but he was uncomfortable with the very idea of assigning a

> probability to an event that "either happened or didn't". Something

> about babies and bathwater comes to mind.


I would be interested to know if he listens to (and acts upon) the

weather forecast :-) Tomorrow's weather is not a random repeatable

sample, merely an unknown deterministic event. Of course people

(including me) do talk about frequentist notions such as reliable

probabilities ("reliable" meaning that eg an event has historically

happened on p% of the occasions that it was forecast to happen with p%

probability), but I would hope that most if not all researchers would

agree if they thought about it carefully that in fact the probabilities

can only be Bayesian in nature.
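The "reliable" property defined in that sentence can be checked mechanically. A minimal sketch with synthetic forecasts that are perfectly calibrated by construction (the sample size and binning are arbitrary assumptions):

```python
# Reliability check: the event should occur on about p% of the occasions
# it was forecast with probability p. Synthetic, calibrated-by-design data.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
probs = rng.uniform(0, 1, n)             # issued forecast probabilities
occurred = rng.uniform(0, 1, n) < probs  # event occurs with forecast prob

# Bin forecasts and compare forecast probability with observed frequency.
bins = np.linspace(0, 1, 11)
rows = []
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (probs >= lo) & (probs < hi)
    rows.append(((lo + hi) / 2, occurred[m].mean()))
    print(f"forecast {lo:.1f}-{hi:.1f}: observed frequency {occurred[m].mean():.2f}")
```

Each bin's observed frequency should sit near its centre; a real forecasting system is scored against history the same way, which is exactly the frequentist-flavoured diagnostic applied to probabilities that are Bayesian in origin.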

>

> That said, he also described a very long and involved set of

> calculations that went into the figure, and pointed out that no effort

> was made to assign confidence bounds to any of it.

>

> I don't know of any claims about this paper in the press.

http://www.sciencedaily.com/releases/2007/09/070906135629.htm

"the team found a 90 percent probability that the object that formed the

Chicxulub crater was a refugee from the Baptistina family" is a rather

typical example. But it was unfair of me to criticise the press as the

claim appears in the paper itself. Of course my comments don't mean that

the 90% figure is unreasonable, only that it is not directly supported

by the research.

James

Sep 11, 2007, 8:22:31 AM

to globalchange

On Sep 11, 12:46 am, "Michael Tobis" <mto...@gmail.com> wrote:

> Yes, that's the one, thanks.

>

> Since this isn't a public talk I won't identify the frequentist in

> question, but he was uncomfortable with the very idea of assigning a

> probability to an event that "either happened or didn't".


(Oops!) Uncomfortable with the very idea of assigning a 5/6

probability to your survival while playing Russian Roulette once with

a six-shooter, are you?

(I'll let you know when I feel another counter-example coming on.)

Probably, the discomfort you are feeling has to do with accepting the guy's model, not with assigning probability to a one-off when the model is correct.

> Something

> about babies and bathwater comes to mind.

>

> That said, he also described a very long and involved set of

> calculations that went into the figure, and pointed out that no effort

> was made to assign confidence bounds to any of it.

>

> I don't know of any claims about this paper in the press.

>

> I have two very trusted independent sources who haven't the slightest

> doubt that the Chicxulub impact was the dinosaur killer; I am not sure

> they care very much which deep space event sent that gift our way. I

> certainly can't get all that worked up about it. Although I'm a

> worrier by nature I have a hard time getting too alarmed by potential

> harm from Chicxulub Jr.

>

> While I am mentioning Chicxulub, it is amazing that life survived the

> event at all. That's based on the description I heard from

> geophysicist Sean Gulick of Texas recently, who is working this up as

> an outreach talk. Likely all remaining life descends from a few

> survivors deep within caves which were immune to the huge temperature

> swings.

The End Permian Extinction was much bigger by all accounts. Only a

handful of proto-mammalian species survived. One of the proto-mammals

had such success afterward that it covered the continent (there was

only one back then) in big herds, a greater mammalian monoculture than our modern-day herds of farm animals.

But I was not there 250 million years ago, I was just a gleam in the

eye of a Lystrosaurus.

>

> mt

> > James

Sep 11, 2007, 8:51:38 AM

to globalchange

On Sep 11, 1:23 am, James Annan <james.an...@gmail.com> wrote:

> Michael Tobis wrote:

> > Yes, that's the one, thanks.

>

> > Since this isn't a public talk I won't identify the frequentist in

> > question, but he was uncomfortable with the very idea of assigning a

> > probability to an event that "either happened or didn't". Something

> > about babies and bathwater comes to mind.

>

> I would be interested to know if he listens to (and acts upon) the

> weather forecast :-) Tomorrow's weather is not a random repeatable

> sample, merely an unknown deterministic event. Of course people

> (including me) do talk about frequentist notions such as reliable

> probabilities ("reliable" meaning that eg an event has historically

> happened on p% of the occasions that it was forecast to happen with p%

> probability), but I would hope that most if not all researchers would

> agree if they thought about it carefully that in fact the probabilities

> can only be Bayesian in nature.

Sometimes these probabilities come from equating the situation to

a model that can be understood via probability theory. No one

actually takes real samples for a real representation of the model,

but one already knows what the frequencies would be if one did.

But maybe there are some examples that can't be interpreted in this

manner?

Sep 11, 2007, 10:18:36 AM

to globalchange

On Sep 11, 1:23 am, James Annan <james.an...@gmail.com> wrote:

> Michael Tobis wrote:

> > Yes, that's the one, thanks.

>

> > Since this isn't a public talk I won't identify the frequentist in

> > question, but he was uncomfortable with the very idea of assigning a

> > probability to an event that "either happened or didn't". Something

> > about babies and bathwater comes to mind.

>

> I would be interested to know if he listens to (and acts upon) the

> weather forecast :-) Tomorrow's weather is not a random repeatable

> sample, merely an unknown deterministic event. Of course people

> (including me) do talk about frequentist notions such as reliable

> probabilities ("reliable" meaning that eg an event has historically

> happened on p% of the occasions that it was forecast to happen with p%

> probability), but I would hope that most if not all researchers would

> agree if they thought about it carefully that in fact the probabilities

> can only be Bayesian in nature.

I disagree to some degree. The weather prediction probabilities can

be (and are) model-based frequentist probabilities.

Now there is a hidden assumption: "The model fits reality". The

weatherman is basically acting as if he has a 100% degree of belief in

the model. The degree of belief in the model is perhaps Bayesian in

nature.

Sometimes I have heard local weathermen make a prediction different

from the national prediction. They have some understanding that

makes them doubt the local applicability of the general prediction.

Perhaps that's a lower degree of belief in the model.

I think this is the way it often works. The weather prediction is

obviously not purely Bayesian. It's not like the weatherman (or some

committee) measures their psyche to determine a degree of belief.

They just commit to a model.

If this is not the way it's done, then it should be done this way. Use

a mixed Bayesian/frequentist method. One thing that you should demand

is that the link between the model and probability be cut and dried:

nothing but pure math and (if real or simulated sampling is needed)

sound sampling procedures. All the fuzzy "degree of belief" stuff

should be confined to the "Does the model fit reality?" issue.

If the weatherman is allowing fuzziness to infect the model-probability connection part, then he is not being a Bayesian, he is just making a blunder. I have no doubt that this happens in various applications, but it's just a mistake, not a valid use of Bayesian probability.

Sep 11, 2007, 8:44:35 PM

to global...@googlegroups.com

Tom Adams wrote:

>

>

> On Sep 11, 1:23 am, James Annan <james.an...@gmail.com> wrote:

>> Michael Tobis wrote:

>>> Yes, that's the one, thanks.

>>> Since this isn't a public talk I won't identify the frequentist in

>>> question, but he was uncomfortable with the very idea of assigning a

>>> probability to an event that "either happened or didn't". Something

>>> about babies and bathwater comes to mind.

>> I would be interested to know if he listens to (and acts upon) the

>> weather forecast :-) Tomorrow's weather is not a random repeatable

>> sample, merely an unknown deterministic event. Of course people

>> (including me) do talk about frequentist notions such as reliable

>> probabilities ("reliable" meaning that eg an event has historically

>> happened on p% of the occasions that it was forecast to happen with p%

>> probability), but I would hope that most if not all researchers would

>> agree if they thought about it carefully that in fact the probabilities

>> can only be Bayesian in nature.

>

> I disagree to some degree. The weather prediction probabilities can

> be (and are) model-based frequentist probabilities.


No.

Anyone can (and indeed frequently does) dress up a Bayesian probability

by generating an ensemble of outcomes to describe their posterior pdf.

But that doesn't make the underlying problem frequentist, it is just a

computationally and intuitively convenient method.

The standard paradigm of numerical weather prediction is that the

atmosphere is a deterministic system, which is imperfectly observed. Even

in the case of a perfect model, there is no such thing as the correct

probabilistic forecast (except perhaps pedants may point out the

degenerate case: the correct forecast is the [deterministic] output from

the perfect model run with perfect initial conditions, but we can never

hope to achieve this in reality).

The very best we could ever hope for, if there was a widely available

and agreed set of observations, is that all forecasters would generate

the same ("intersubjective") probabilities. Even this requires not only

a perfect model but also a universally agreed interpretation of all

observations, which is rather unlikely. It also requires a perfect, or

at least universally agreed, method for calculating probabilities, which

is a whole other can of worms in itself.

The probability cannot be a function of the atmospheric state itself,

since the forecast will change if different observations are made. In

practice it is quite reasonable for different forecasters to give

different forecasts on any given day - and both can be "right" in the

sense of giving reliable forecasts in the long run.

James
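The paradigm James describes (a deterministic system, imperfectly observed, with probabilities arising as ensemble fractions) can be caricatured with a chaotic toy map. The map, noise levels, and event threshold below are illustrative assumptions, not anything from actual numerical weather prediction:

```python
# Ensemble "forecast" for a deterministic chaotic system: all probability
# comes from observational uncertainty, none from the system itself.
import numpy as np

def iterate(x, n=10):
    """Deterministic, chaotic logistic map; no randomness in the 'atmosphere'."""
    for _ in range(n):
        x = 3.9 * x * (1 - x)
    return x

rng = np.random.default_rng(2)
truth = 0.512                            # the actual (unknowable) state
obs = truth + rng.normal(0, 0.01)        # one imperfect observation
# Ensemble of initial conditions consistent with the observation error.
ensemble = np.clip(obs + rng.normal(0, 0.01, 1000), 1e-3, 1 - 1e-3)
forecast = iterate(ensemble)
# The "probability" of the event x > 0.5 is just the ensemble fraction.
p_event = float(np.mean(forecast > 0.5))
print(p_event)
```

A forecaster with different observations (a different `obs` draw, or a different assumed error) would issue a different probability for the same true state, which is James's point that the probability cannot be a function of the atmospheric state itself.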

Sep 11, 2007, 8:52:40 PM

to global...@googlegroups.com

Tom Adams wrote:

> On Sep 11, 12:46 am, "Michael Tobis" <mto...@gmail.com> wrote:

>> Yes, that's the one, thanks.

>>

>> Since this isn't a public talk I won't identify the frequentist in

>> question, but he was uncomfortable with the very idea of assigning a

>> probability to an event that "either happened or didn't".

>

> (Oops!) Uncomfortable with the very idea of assigning a 5/6

> probability to your survival while playing Russian Roulette once with

> a six-shooter, are you?


Of course a frequentist would be uncomfortable with that idea: their

interpretation of probability does not apply to single events, only as

the limiting frequency of an infinite number of "identical" experiments.

Even this concept is rather hard to define, since in a deterministic

world identical experiments should give identical results.

(note that Michael was reporting someone else's views, not his own).

James

Sep 12, 2007, 8:17:30 AM

to globalchange

Seems pragmatic to interpret the Russian Roulette case as 5/6 based on

a frequentist thought experiment. No?
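That thought experiment is easy to make literal. A minimal Monte Carlo sketch, assuming nothing beyond the 5/6 claim in the message:

```python
# Simulate many one-off spins of a six-chamber cylinder and check that
# the survival frequency approaches the single-event probability 5/6.
import random

random.seed(42)
trials = 100_000
# Chamber 0 holds the round; survival means it doesn't come up.
survived = sum(random.randrange(6) != 0 for _ in range(trials))
rate = survived / trials
print(rate)  # close to 5/6, about 0.833
```

The frequentist and single-event readings coincide here precisely because the physical model (one round, six equally likely chambers) is accepted by both sides; the dispute is about what the number means for one spin.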

Sep 12, 2007, 8:35:11 AM

to global...@googlegroups.com

I certainly think so, but I'm not sure what your point is.

James

Sep 12, 2007, 8:39:09 AM

to globalchange

On Sep 11, 8:52 pm, James Annan <james.an...@gmail.com> wrote:

> Tom Adams wrote:

> > On Sep 11, 12:46 am, "Michael Tobis" <mto...@gmail.com> wrote:

> >> Yes, that's the one, thanks.

>

> >> Since this isn't a public talk I won't identify the frequentist in

> >> question, but he was uncomfortable with the very idea of assigning a

> >> probability to an event that "either happened or didn't".

>

> > (Oops!) Uncomfortable with the very idea of assigning a 5/6

> > probability to your survival while playing Russian Roulette once with

> > a six-shooter, are you?

>

> Of course a frequentist would be uncomfortable with that idea: their

> interpretation of probability does not apply to single events, only as

> the limiting frequency of an infinite number of "identical" experiments.


So, a frequentist would be just as bothered by a billion events as

by a single event. An infinite number of identical experiments is

always impossible.

Seems to me that it is a conceptual blunder to get hung up on this.

*All* applications of the frequentist interpretation involve counterfactual conditions, as does Newton's first law of motion and many other useful concepts.

Sep 12, 2007, 8:58:26 AM

to global...@googlegroups.com

Hey, I'm just reporting.

The guy was obviously mathematically sophisticated and a professional

statistician, but I couldn't really see how he could get any results

that don't violate his philosophy.

I was amazed to see that this is a real controversy in some circles.

What probability actually means (outside the purely mathematical

measure theory ideas without any connection to reality) may be a bit

hard to pin down but it's obviously useful in cases other than picking

marbles out of an urn, if that's even permissible.

mt

Sep 12, 2007, 9:44:15 AM

to globalchange

On Sep 12, 8:58 am, "Michael Tobis" <mto...@gmail.com> wrote:

> Hey, I'm just reporting.

>

> The guy was obviously mathematically sophisticated and a professional

> statistician, but I couldn't really see how he could get any results

> that don't violate his philosophy.

>

> I was amazed to see that this is a real controversy in some circles.

>

> What probability actually means (outside the purely mathematical

> measure theory ideas without any connection to reality) may be a bit

> hard to pin down but it's obviously useful in cases other than picking

> marbles out of an urn, if that's even permissible.

We build abstract models and we can prove everything about the models

using formal logic and mathematics. But do the models correspond to

reality? Answering this question is harder; it can't be solved with just logic and math.

This is a general problem; I am not sure why probability gets singled

out so often for special mention.

>

> mt

>

> On 9/12/07, Tom Adams <tadams...@yahoo.com> wrote:

>

> > On Sep 11, 8:52 pm, James Annan <james.an...@gmail.com> wrote:

> > > Tom Adams wrote:

> > > > On Sep 11, 12:46 am, "Michael Tobis" <mto...@gmail.com> wrote:

> > > >> Yes, that's the one, thanks.

>

> > > >> Since this isn't a public talk I won't identify the frequentist in

> > > >> question, but he was uncomfortable with the very idea of assigning a

> > > >> probability to an event that "either happened or didn't".

>

> > > > (Opps!) Unconfortable with the very idea of assigning a 5/6

> > > > probability to your survival while playing Russian Roulette once with

> > > > a six-shooter, are you?

>

> > > Of course a frequentist would be uncomfortable with that idea: their

> > > interpretation of probability does not apply to single events, only as

> > > the limiting frequency of an infinite number of "identical" experiments.

>

> > So, a frequentist would be be just as bothered by a billion events as

> > by a single event. An infinite number of identical experiments is

> > always impossible.

>

> > Seems to me that it is a conceptual blunder to get hung up on this.

>

> > *All* applications of the frequentist interpretation involves

> > conterfactual conditions as does Newton's first law of motion and many

> > other useful concepts.

>

> > > Even this concept is rather hard to define, since in a deterministic

> > > world identical experiments should give identical results.

>

> > > (note that Michael was reporting someone else's views, not his own).

>

Sep 12, 2007, 10:56:03 PM

to global...@googlegroups.com

Tom Adams wrote:

>

> We build abstract models and we can prove everything about the models

> using formal logic and mathematics. But do the models correspond to

> reality? Answering this question is harder, can't be solved with just

> logic and math.

>

> This is a general problem, I am not sure why probability gets singled

> out so often for special mention.


It's because, in a nutshell, it is specifically through the mechanisms

of probability that we connect these abstract models to practical

decision making (at least in the standard utility-maximising "rational"

paradigm).

James

Sep 13, 2007, 3:19:27 AM

to global...@googlegroups.com

Michael Tobis wrote:

> Hey, I'm just reporting.

>

> The guy was obviously mathematically sophisticated and a professional

> statistician, but I couldn't really see how he could get any results

> that don't violate his philosophy.


Well getting back to the origins of this sub-thread there would be no

problem in saying that for their particular experiment, 90% of the large

impacts originated from the event they simulated (with the rest coming

from the natural background). That's a perfectly ordinary frequentist

approach, and in fact this is precisely what they calculated.

> I was amazed to see that this is a real controversy in some circles.

Well, a lot of scientists (myself included) were brought up on a diet of

purely frequentist probability, and many of them also seem to be

uncomfortable with the idea of subjectivity in scientific judgements.

Hence uniform priors being defined as "ignorance" (where "ignorance" is

circularly defined as the state of mind described by a uniform

distribution...) and other such drivel.

But I hope this particular horse can be considered well thrashed now.

I'm going to a workshop on probability in climate science in a couple of

weeks, which several other climate scientists are slated to attend, so

it will be interesting to see if and how their thinking has progressed

in the 2 years(!) since I started talking about this sort of thing.

James

Sep 13, 2007, 7:31:44 AM

to globalchange

On Sep 12, 3:58 pm, "Michael Tobis" <mto...@gmail.com> wrote:

> I was amazed to see that this is a real controversy in some circles.


The bayesian vs. frequentist division is alive and well, and will be until

the last frequentist is dead. :)

There are practicing frequentists who say that they are in principle

bayesian, but that because formulating one's prior in any more than

2-3 dimensions is impossible, bayesian methods are in practice

unusable. Pointing out the implicit priors behind frequentist methods

(confidence intervals, ridge regression), or suggesting that model

families are actually discrete priors does not influence their

opinion.
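Janne's aside about implicit priors can be made concrete. For a one-parameter linear model, the ridge estimate with penalty lambda is algebraically identical to the posterior mean under a zero-mean Gaussian prior with variance sigma^2/lambda; the sketch below (all numbers illustrative, not from the thread) computes both from their separate formulas:

```python
# Scalar linear model y = b*x + noise, with known noise variance sigma2.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [1.1, 2.1, 2.9, 4.2, 4.8]

sigma2 = 0.25        # assumed noise variance (illustrative)
tau2 = 2.0           # assumed prior variance on b (illustrative)
lam = sigma2 / tau2  # the ridge penalty this prior implies

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Frequentist ridge estimate: argmin_b  sum (y - b*x)^2 + lam * b^2
b_ridge = sxy / (sxx + lam)

# Bayesian posterior mean: y ~ N(b*x, sigma2), prior b ~ N(0, tau2)
b_post = (sxy / sigma2) / (sxx / sigma2 + 1.0 / tau2)

print(b_ridge, b_post)  # the two numbers coincide
```

Buying the penalty lambda is buying the prior variance sigma^2/lambda, which is the sense in which the frequentist procedure carries an implicit prior.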

Meanwhile, bayesian methods are successfully used in machine learning

and in computational statistics with (kind of uninformative) priors

that are in practice confirmed empirically. And increasingly

hierarchical data forces bayesian thinking even on conservative fields

such as hypotheses testing in clinical research.

But even subjective probabilities are conditioned on some model. That

model needs to make sense and be explicit enough, if one wants the

probabilities to be taken seriously.

In the context of weather prediction etc., it is important to

understand that (1) bayesian techniques can be applied even when the

priors are not anyone's subjective beliefs; (2) bayesian models can be

tested and empirically validated just as any other models.
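Janne's point (2) can be illustrated with a toy reliability check: gather the cases where a model states probability p and compare p with the observed frequency. In the sketch below the "world" is simulated, so the calibrated case is guaranteed to pass; with real data the same comparison is a genuine test (all numbers illustrative):

```python
import random

def hit_rate(p_true, n, seed=1):
    # Observed frequency of the event in a toy world where its true
    # per-case probability is p_true.
    rng = random.Random(seed)
    return sum(rng.random() < p_true for _ in range(n)) / n

stated = 0.7  # the model's stated probability (illustrative)

well_calibrated = hit_rate(0.7, 50_000)   # world agrees with the model
miscalibrated = hit_rate(0.55, 50_000)    # world does not

print(abs(well_calibrated - stated) < 0.02)  # True: forecast survives the check
print(abs(miscalibrated - stated) < 0.02)    # False: forecast is rejected
```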

--

Janne

Sep 13, 2007, 11:53:04 AM

to global...@googlegroups.com

The (in my opinion) valid criticism of the 90% number was that there

was a great deal of uncertainty in defining the urn. There was no

suggestion that the draw from the urn was illegitimate, only that the

urn was not a good model of reality.


The speaker went on to suggest that Bayesian methods overstate

confidence. I am not sure on what grounds he thought the asteroid

study was Bayesian, or whether he was simply pointing to a common

problem with statistical inference.

I challenged him afterward about decision-making under uncertainty.

It's my opinion that the hypothesis-testing view of statistics,

designed for clinical experiments, is terribly misplaced in discussing

unavoidable decisions where the default is non-obvious.

If we can express only, say, 3% confidence in X and, say, 0.08%

confidence in not-X, we haven't used all the information available to

us. This hardly matters if the question X is purely theoretical. If we

are trying to decide whether to expend real resources on an asteroid

defense system, the 96.92 % probability, on this view, that

"statistics has nothing to say" is both ridiculous and unhelpful.

Surely we can conclude that X is more likely than not-X.
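For what it's worth, the arithmetic behind "X is more likely than not-X" can be made explicit: renormalising the two stated confidences against each other (purely as an illustration of relative support, not as a defensible posterior) gives roughly 97% for X:

```python
conf_x = 0.03        # stated confidence in X
conf_not_x = 0.0008  # stated confidence in not-X

# Treat the two figures as relative weights and renormalise
# (illustration only, not a defensible posterior).
p_x = conf_x / (conf_x + conf_not_x)
print(round(p_x, 3))  # 0.974
```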

I understand that there is some philosophical problem with what

"likely" means, but it's hard for me to understand the idea that this

leads to a universal shrug and professed ignorance. At least you can

say these guys aren't cynically motivated in their attachment to the

idea that their work is very nearly useless.

mt

Sep 13, 2007, 4:55:23 PM

to globalchange

Philip Stark of UC Berkeley is the frequentist in question. I pointed

him to this discussion and he replied as follows (forwarded with his

permission).


mt

---------- Forwarded message ----------

From: Philip B. Stark <...>

Date: Sep 13, 2007 1:24 PM

Subject: Re: frequentism discussion

Hi Michael--

Thanks for pointing me to this and for your interest.

My argument--applied to earthquake forecasts by the USGS--is in

this preprint:

http://statistics.berkeley.edu/~stark/Preprints/611.pdf

Slides from Monday's talk are here (in OpenOffice format):

http://statistics.berkeley.edu/~stark/Seminars/sandia07.odp

My points about the frequentist coverage probability of Bayesian

credible regions were not connected to the Chicxulub story...

Luis and I are planning to write something up for the conference

volume on that topic.

The gist of my argument about whether the chance the KT impactor

came from Baptistina Asteroid Family is that the probability

ultimately comes from stochastic assumptions about the collision

that formed the BAF (Gaussians for some things, truncated Gaussians

for some things, uniform for others, etc.), simplified simulations

about how orbits evolve to move objects into the resonances with

Jupiter and Mars, how objects are ejected from resonance, and on and

on, plus ad hoc corrections to astronomical catalogs, extrapolation

of empirical scaling laws well beyond the data, assumptions that

things like albedo, density and specific heat capacity are constant,

etc. There's a huge chain of physical and probabilistic assumptions

comprising an extremely complex--and tenuous--model. Buy the model,

buy the 90% figure. But why buy the model?

This is not like an urn model, and my skepticism is not because this

is a one-off event, or even that it either happened or didn't. (We

could rephrase it to be prospective, rather than retrospective,

and I'd have much the same problem with it.) My main complaint

is that the model is far fetched and only weakly tied to observation.

Best wishes,

Philip

On Thu, 2007-09-13 at 12:47 -0500, Michael Tobis wrote:

> There's an interesting discussion of reasoning under uncertainty on a

> public discussion list which I manage. I mentioned the Chixculub

> argument you presented at the Santa Fe workshop and that has proved an

> anchor of the discussion.

>

> However, I am doing a lousy job of defending your position because of

> some combination of not understanding it and/or not agreeing with it.

> So far nobody else has showed up to advocate for a frequentist

> perspective.

>

> You may at least be interested in reading the discussion so far. Your

> input would be most welcome.

>

> http://groups.google.com/group/globalchange/browse_thread/thread/cd703ecce94de4d4

>

> regards

> Michael Tobis

> http://www.ig.utexas.edu/people/staff/tobis/

--

Philip B. Stark | Professor of Statistics | University of California

Berkeley, CA 94720-3860 | 510-642-1430 | www.stat.berkeley.edu/~stark
