On Sat, 9 Oct 2021 17:08:53 -0700 (PDT), Cosine <ase...@gmail.com> wrote:
>Hi:
>
> Suppose we did a study in which we tested the effects of drugs A and B, and of a placebo, on disease Z. We could use a t-test on the outcomes for A and B to see whether the difference between the two drugs is statistically significant. The formula requires only the sample means, standard errors, and sample sizes of the two drug groups.
>
> Now, suppose we found another study that tested the effects of drugs C and D, and of a placebo, on the same disease Z. Could we determine whether there are differences between drug A and C, and between drug A and D, in treating disease Z, again using the t-test statistic, given only that summary information and not the raw data?
>
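As pure arithmetic, yes: a Welch-style t-test needs nothing beyond
the published summaries. A minimal sketch in Python (scipy assumed;
every number below is invented for illustration):

    from scipy import stats

    # Hypothetical published summaries: mean, standard error of the
    # mean, and sample size, for drug A (study 1) and drug C (study 2).
    mean_a, se_a, n_a = 12.3, 1.1, 40
    mean_c, se_c, n_c = 9.8, 1.4, 35

    # t statistic: difference of means over the SE of that difference.
    t = (mean_a - mean_c) / (se_a**2 + se_c**2) ** 0.5

    # Welch-Satterthwaite degrees of freedom, written in terms of
    # standard errors (se**2 plays the role of s**2/n).
    df = (se_a**2 + se_c**2) ** 2 / (
        se_a**4 / (n_a - 1) + se_c**4 / (n_c - 1))

    p = 2 * stats.t.sf(abs(t), df)  # two-sided p-value
    print(f"t = {t:.2f}, df = {df:.1f}, p = {p:.4f}")

(scipy.stats.ttest_ind_from_stats does the same computation, but note
that it wants standard deviations, not standard errors: sd = se *
sqrt(n).) Whether that arithmetic MEANS anything across two separate
studies is the real issue.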
Most studies only test ONE drug against placebo. They
care about one drug, and they want all their "power" to
go to that comparison.
For the purpose of your question, comparing A to C
(or to D), you would be looking at the performance of
each drug in comparison to placebo. Describing the
studies as having "two drugs" is a red herring, or at
best a non-informative complication.
Here is a modern form of your question, of current interest --
If one Covid vaccine shows 95% protection in its main study
and another vaccine shows 90% protection in its study, can
we conclude that the first is better than the second? What
about, compared to 80%?
Well, as a mechanical proposition, we certainly can take the
estimates and their SEs and generate a test. But we KNOW
that the samples differed (location; age/sex/ethnicity?). If they
were in a different time frame (or, even if not), maybe they
were tested against a different dominant mutation of the virus.
The instructions for case-ascertainment may have differed.
And so on.
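To make "mechanical" concrete: with only the two point estimates and
their SEs, the test is one line of arithmetic. A sketch (numbers
invented; it ignores every between-study difference just listed):

    from scipy import stats

    ve_1, se_1 = 0.95, 0.015  # hypothetical efficacy estimate and SE
    ve_2, se_2 = 0.90, 0.020  # hypothetical efficacy estimate and SE

    # Treat the estimates as independent and approximately normal.
    z = (ve_1 - ve_2) / (se_1**2 + se_2**2) ** 0.5
    p = 2 * stats.norm.sf(abs(z))
    print(f"z = {z:.2f}, p = {p:.4f}")  # z = 2.00, p = 0.0455

With SEs of that made-up size, the result lands just under p = 0.05
and nowhere near 0.001 -- which is exactly the situation below.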
95% vs 90% is based on small enough numbers that, if p < 0.05,
it probably is not p < 0.001 (or better). So that "tested" difference
is unpersuasive. We /know/ that uncontrolled factors /exist/
and thus could be responsible. For establishing one is better,
a test is necessary but not sufficient. We would have heard more
if one of the vaccines had come in at only (say) 75%, which
a priori, before the studies, based on flu vaccines, did not seem
like a terrible efficacy.
We want to see an "effect size" large enough that it is unlikely
to have happened by chance. If those "confounding factors"
seem small, or if they exist such that they would bias /against/
the better-performing drug, then a test showing a big enough
difference can be somewhat persuasive. There are all those
(educated) readers whom you have to convince.
For Covid, they seem to use all three obvious criteria --
getting symptoms, getting hospitalized, dying. A vaccine
does look better if it comes out ahead on all three criteria.
Performance in whole populations (states, countries) also
washes out the idiosyncrasies of the original studies.
--
Rich Ulrich