The "limited predictions" problem.

Stephen James

Mar 6, 2015, 9:37:14 AM
to beginning-...@googlegroups.com
Hi all

David may have covered this in his books, or at least touched on it, but it still intrigues/troubles me. I have posed the question on a few philosophy groups; haven't had a clear consensus answer. I'll put my "full" post below, but the nutshell version is: should we trust a hypothesis H more if it explains an initial set of data d1, and predicts a subsequently verified set of data d2, than we would if we simply had d1 and d2 already when H was proposed? In both cases the data (d1+d2) and the hypothesis H are exactly the same, but in the first case H survives an ordeal; is H an equally good explanation in both cases?

The version of the question I have posted elsewhere is given below.  Very interested in any feedback.

-------------


"
Here is a philosophy of science problem that has been bothering me for quite some time:

Imagine we have a scientific hypothesis. For the hypothesis to be taken seriously, it usually has to agree with most, if not all of the relevant data on the subject that we have to hand. But no matter how well it fits the data, we tend to be a bit sniffy about it and regard it as "just" a hypothesis until it has been tested a few times. So what we really want is for our hypothesis to make some testable predictions and subject itself to experimental falsifiability. Once it has passed some of these tests we become a lot more comfortable about tentatively accepting it as true. (Or at least, a good approximation to the truth in the domain we have been examining.)

But now imagine a hypothesis that can only make a limited number of predictions - say five. It might have been plausible to have proposed the hypothesis if one had just had the first two experimental results. The hypothesis would then have predicted the other three, and in due course it could be tested and promoted from "untested" to "tested" hypothesis. But now imagine that the experimentalists have gotten ahead of the theorists and the thing isn't proposed until after we already have all five relevant experimental results. It fits the data perfectly but it can't make any more predictions. So it is doomed to be "untested" for the rest of its life. My question is: Do we have better reason to trust the hypothesis if it had been proposed after 2 results and then tested, than if it were simply proposed after all the results were in?

I vacillate on this. On one hand, in both cases you end up with exactly the same hypothesis and exactly the same experimental evidence. On the other hand, the hypothesis which was proposed early was taking a "risk", as it were; I wonder if there are good Bayesian reasons to take it more seriously than in the second instance (see the toy numerical sketch below). But if so, consider the case of a researcher who simply hasn't got access to all the relevant professional journals. (A common problem with today's paywalls?) Imagine she only knows the first two results, proposes the hypothesis and the tests - and later finds out that the relevant tests were done two years ago. Should we then regard the hypothesis as tested, or mere untested speculation?

It seems to me that this problem may be of more than academic interest. Some ideas in cosmology, for instance, only make a limited number of predictions. I believe cosmic inflation predicts gravitational waves; should we have any more or less faith in the idea if we had already detected the waves before inflation was proposed?

"


Elliot Temple

Mar 6, 2015, 10:05:18 AM
to BoI, Elliot Temple curi@curi.us [fallible-ideas]

On Mar 5, 2015, at 11:35 PM, Stephen James <sbj...@gmail.com> wrote:

> Hi all
>
> David may have covered this in his books, or at least touched on it, but it still intrigues/troubles me. I have posed the question on a few philosophy groups; haven't had a clear consensus answer. I'll put my "full" post below, but the nutshell version is: should we trust a hypothesis H more if it explains an initial set of data d1, and predicts a subsequently verified set of data d2, than we would if we simply had d1 and d2 already when H was proposed? In both cases the data (d1+d2) and the hypothesis H are exactly the same, but in the first case H survives an ordeal; is H an equally good explanation in both cases?
>
> The version of the question I have posted elsewhere is given below. Very interested in any feedback.

You’re looking for amounts of trust, weights of credence. This is incompatible with BoI and Popper. It's what we call justificationism.

This is discussed some in BoI chapter 10, which covers weighting ideas. For more information, see:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas



> -------------
>
>
> "
> Here is a philosophy of science problem that has been bothering me for quite some time:
>
> Imagine we have a scientific hypothesis. For the hypothesis to be taken seriously, it usually has to agree with most, if not all of the relevant data on the subject that we have to hand.

A claim needs to be compatible with ALL data; it can't contradict any.

(Saying that some guy measured something incorrectly does not contradict the data; it explains how that data could have come about while the hypothesis is also true.)

The issue is not whether the hypothesis should be taken seriously by people, it’s whether it is or can be TRUE.

> But no matter how well it fits the data, we tend to be a bit sniffy about it and regard it as "just" a hypothesis until it has been tested a few times. So what we really want is for our hypothesis to make some testable predictions and subject itself to experimental falsifiability. Once it has passed some of these tests we become a lot more comfortable about tentatively accepting it as true. (Or at least, a good approximation to the truth in the domain we have been examining.)

This is justificationism with corroboration treated as the thing that increases justification (rather than direct positive evidential support). It doesn’t change much.

> But now imagine a hypothesis that can only make a limited number of predictions - say five. It might have been plausible to have proposed the hypothesis if one had just had the first two experimental results. The hypothesis would then have predicted the other three, and in due course it could be tested and promoted from "untested" to "tested" hypothesis. But now imagine that the experimentalists have gotten ahead of the theorists and the thing isn't proposed until after we already have all five relevant experimental results. It fits the data perfectly but it can't make any more predictions. So it is doomed to be "untested" for the rest of its life. My question is: Do we have better reason to trust the hypothesis if it had been proposed after 2 results and then tested, than if it were simply proposed after all the results were in?

What sort of scientific hypothesis makes only 5 predictions, period? Can you give a realistic example?

> I vacillate on this. On one hand, in both cases you end up with exactly the same hypothesis and exactly the same experimental evidence. On the other hand, the hypothesis which was proposed early was taking a "risk", as it were; I wonder if there are good Bayesian reasons to take it more seriously than in the second instance (see the toy numerical sketch below). But if so, consider the case of a researcher who simply hasn't got access to all the relevant professional journals. (A common problem with today's paywalls?) Imagine she only knows the first two results, proposes the hypothesis and the tests - and later finds out that the relevant tests were done two years ago. Should we then regard the hypothesis as tested, or mere untested speculation?

The right approach is to think critically.

Is there a criticism of this hypothesis? Are there any rivals? Are those rivals criticized?

Tentatively accept ideas when you have exactly one non-refuted candidate. Stop trying to assign ideas scores/weights of any kind. Then all the problems go away.

Whatever factor was inspiring an increase or decrease in weighting either can or can’t inspire a criticism. If it can, make the criticism and go from there. If it can’t, it was worthless. And if something is refuted by a criticism, saying it still has some weight/status/justification instead of none is just a way to irrationally try to ignore criticism.

Also stop trying to assign different statuses to ideas. They are either refuted or not. Calling an idea “speculation” vs other terms is another way of getting back to the scores/weights approach, of trying to decide how much authority/justification they have.

Ideas are ideas.
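
As a toy sketch of that rule (just an illustration; the code and names are mine, not an algorithm from BoI or Popper):

# Toy sketch of the "exactly one non-refuted candidate" rule.
# No scores or weights: an idea is either refuted or it isn't.

def tentatively_accept(candidates, outstanding_criticisms):
    # candidates: list of idea names.
    # outstanding_criticisms: dict mapping an idea name to the
    # list of unanswered criticisms of that idea.
    non_refuted = [c for c in candidates
                   if not outstanding_criticisms.get(c)]
    if len(non_refuted) == 1:
        return non_refuted[0]  # exactly one survivor: act on it
    return None                # zero or several: keep criticizing

ideas = ["H", "rival1", "rival2"]
crits = {"rival1": ["contradicts the third result"],
         "rival2": ["ad hoc; explains nothing new"]}
print(tentatively_accept(ideas, crits))  # -> H

Note there's no tie-breaking by weight: if two candidates survive criticism, the answer is "keep criticizing", not a ranking.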

Also Bayesian epistemology is false.

> It seems to me that this problem may be of more than academic interest. Some ideas in cosmology, for instance, only make a limited number of predictions. I believe cosmic inflation predicts gravitational waves; should we have any more or less faith in the idea if we had already detected the waves before inflation was proposed?"

No faith at all. And definitely not *amounts* of faith – which is justificationism again.


Yes this problem is very important. It’s one of the biggest problems in epistemology. And while I believe it's solved, only a handful of people seem to have noticed.

You should join:

http://fallibleideas.com/email-discussion

The BoI group is not active, and the FI group is active and has all the best people who know Popper and DD stuff. (The FI group was formed by merging the BoI group and some others. By which I mean, I simply asked all regular posters to switch to FI, and they all did.)

Elliot Temple
www.fallibleideas.com
www.curi.us
