
Q statistically IN-significant


Cosine

Apr 21, 2021, 8:52:27 AM
Hi:

When the result of an experiment is p-value <= alpha, we can simply claim that the null hypothesis is rejected. But what if the result is p-value > alpha? What constructive discussion could we have, more than simply saying that we could not reject the null hypothesis? For example, could we find something useful for designing a new experiment?

Say we are testing a new drug or a new method and we get a result of p-value > alpha. What useful information could we deduce, beyond the fact that H0 could not be rejected?

Thank you,

David Jones

Apr 21, 2021, 9:52:34 AM
In general, the result of a significance test on its own is not enough.
You should almost always go on to derive a confidence interval for an
"effect size". You then need to think, in the particular context,
about whether that range of effect-size is important. If you were
planning to go on to a further experiment, you could pick a value for
the effect-size (possibly from within the confidence interval, but some
important-to-detect value) and set up the experiment so as to be able
to detect an effect of that size. This might just mean choosing a
sample-size, either via a test-power-type analysis or by an argument
based on a formula for the standard error of the estimated
effect-size, using results from the first experiment.
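
A minimal sketch of that workflow in Python (my illustration, not from this thread; the data, the 0.4 target effect size, and the 80% power figure are all assumptions):

import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
# Stand-in data playing the role of the first experiment.
treatment = rng.normal(0.3, 1.0, 40)
control = rng.normal(0.0, 1.0, 40)

# Estimated effect size (difference in means) with an approximate 95% CI.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / 40 + control.var(ddof=1) / 40)
print("effect %.2f, 95%% CI (%.2f, %.2f)" % (diff, diff - 1.96 * se, diff + 1.96 * se))

# Pick an important-to-detect standardized effect size (0.4 is an assumption)
# and solve for the per-group n giving 80% power at alpha = 0.05.
n = TTestIndPower().solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print("per-group n for the follow-up experiment: %d" % np.ceil(n))

The confidence interval says which effect sizes remain plausible after the first experiment; the power calculation then sizes the follow-up for whatever effect one decides is worth detecting.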

But, haven't you asked this same question here previously?

Rich Ulrich

Apr 21, 2021, 7:33:47 PM
Good comments.

I have some wandering thoughts, inspired by, "more than simply
saying that we could not reject the null hypothesis".

Let's assume that some experiment was thought, a priori, to have
enough power to produce an interesting result. But it failed to do so.

Was the experiment carried out without a hitch? Was the protocol
followed? Without modification? On the sample that was expected?
(I wonder how many clinical treatment trials have had their results
confounded by the COVID epidemic.)

I remember reading of one clinical study which "failed to replicate"
the treatment results of an earlier study; the original authors
complained that the experiment, as PERFORMED, did NOT use the
most important aspects of methods they had recommended. I
concluded that since I was not a clinician, I could not judge whether
the differences should be important.

Were the background conditions as expected? If the
epidemic disappears, the expected number of events may never
appear.

If the experiment failed to find its interesting result, even though it
was carried out without obvious problems emerging, then it must be
time to revise the hypothesis (if you are unwilling to abandon it).
Does it need a specific sub-sample or circumstances or some
variation in how something is done ("double the dose")?

--
Rich Ulrich

duncan smith

Apr 22, 2021, 11:26:06 AM
And after the various hypothesis revisions and subgroup analyses, how
would you justify any claims arising from the exercise?

Duncan

David Jones

Apr 22, 2021, 1:36:12 PM
The context here is that one would protect oneself from the dangers of
data-dredging or over-analysing a single set of data by going on to do
an independent experiment and an independent analysis of data from that
experiment.

duncan smith

Apr 22, 2021, 8:50:45 PM
There still needs to be some justifiable, pre-specified criterion for
claiming a positive result (or stopping and generating no such claim).

Duncan

David Jones

Apr 23, 2021, 5:14:22 AM
Well yes, one can't just go on doing new experiments until the result
one wants is found. You can't just ignore all the individual steps
taken in an endeavour. One approach would be to set up a simulation
experiment that replicates all those individual steps. Alternatively,
there would be approaches via the sequential-testing standards used in
quality-control. However, a lot is left to the experimenter's integrity.
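
A rough sketch of the simulation idea (my illustration; it assumes the whole "endeavour" is two independent two-sample t-tests with a positive claim only if both are significant, and the sample sizes and thresholds are made up):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def two_stage_claim_under_null(n=30, alpha=0.05):
    # Stage 1: first experiment simulated under H0 (no real effect).
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue > alpha:
        return False  # stop; no claim is generated
    # Stage 2: independent replication, also under H0.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    return stats.ttest_ind(a, b).pvalue <= alpha

trials = 20000
rate = sum(two_stage_claim_under_null() for _ in range(trials)) / trials
print("overall false-positive rate ~ %.4f" % rate)  # near alpha**2 = 0.0025

Replicating every step of the procedure under the null in this way gives the operating characteristics of the procedure as a whole, rather than of any single test within it.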

Rich Ulrich

Apr 23, 2021, 2:04:25 PM
On Fri, 23 Apr 2021 09:14:18 +0000 (UTC), "David Jones" wrote:

>taken in an endeavour. One approach would be to set up a simulation
>experiment that replicates all those individual steps. Alternatively,
>there would be approaches via the sequential-testing standards used in
>quality-control. However, a lot is left to the experimenter's integrity.

We have been open-ended about whose science we could be speaking
of, and whose conventions apply. In clinical research (psychiatry), I
divided our hypotheses into the ones that we were confirming from
the start, justifying the study; and all others. Something found by
data-dredging would be a "speculative" result, to be considered in the
future.

The report of results will face critiques, before and after
publication. The investigator has standards to meet.

--
Rich Ulrich

David Duffy

Apr 30, 2021, 1:06:53 AM
Have a look at R.A. Fisher's analysis of Mendel's experimental data -
spoiler: he thought they fitted the hypothesis much better than he would
expect by chance.
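
A small illustration of the direction of Fisher's argument, using Mendel's reported round/wrinkled seed counts against the 3:1 expectation (Fisher's case rested on aggregating many such experiments, so a single test like this only hints at it):

from scipy import stats

observed = [5474, 1850]  # Mendel's round vs. wrinkled seeds
total = sum(observed)
expected = [total * 3 / 4, total / 4]

chi2, p_upper = stats.chisquare(observed, f_exp=expected)
# The usual p-value asks "is the fit too bad?"; Fisher's question,
# "is the fit too good?", looks at the LOWER tail of the chi-square distribution.
p_lower = stats.chi2.cdf(chi2, df=1)
print("chi2 = %.3f, upper-tail p = %.3f, lower-tail p = %.3f" % (chi2, p_upper, p_lower))

A suspiciously good fit shows up as a tiny lower-tail probability; aggregated over all of Mendel's experiments, Fisher famously put the corresponding chance at only a few in a hundred thousand.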