power_proportions_2indep: Can someone provide reference explaining how normal_power_het works?


Charles Ayotte-Trépanier

Dec 1, 2022, 4:07:59 PM
to pystatsmodels
Hi,

I realized that power_proportions_2indep was returning lower power than other libraries (e.g., R's power.prop.test()). Looking into how it calculates power, I noticed the variance adjustment in normal_power_het, but I do not understand what's going on.

Could someone please provide references explaining the stats behind what normal_power_het does?

Thanks!

josef...@gmail.com

Dec 1, 2022, 4:14:16 PM
to pystat...@googlegroups.com
Can you provide an example with R and statsmodels results?

The unit tests have a comment that the comparison is with R and Stata,
but I don't remember any details right now.

There was one bugfix.

My reference for power is often the NCSS/PASS documentation.

Josef




Charles Ayotte-Trépanier

Dec 1, 2022, 4:25:29 PM
to pystatsmodels
Here's an example - power_proportions_2indep consistently returns lower power:
[screenshots: statsmodels power_proportions_2indep output and R power.prop.test output]

josef...@gmail.com

Dec 1, 2022, 4:55:05 PM
to pystat...@googlegroups.com
On Thu, Dec 1, 2022 at 4:25 PM 'Charles Ayotte-Trépanier' via pystatsmodels <pystat...@googlegroups.com> wrote:
Here's an example - power_proportions_2indep consistently returns lower power:

It's difficult to copy-paste from pictures.

From the docstring: proportions for the first case will be computed using p2 and diff, p1 = p2 + diff.

So you need a negative diff in your example:

power_proportions_2indep(-0.1, 0.2, nobs1=300, ratio=1, alpha=0.05)

<class 'statsmodels.tools.testing.Holder'>
power = 0.9311797595418433
p_pooled = 0.15000000000000002
std_null = 0.5049752469181039
std_alt = 0.5
nobs1 = 300
nobs2 = 300
nobs_ratio = 1
alpha = 0.05

The basic idea behind normal_power_het is that the variance differs between the null and the alternative hypothesis.
In the standard t-test, we assume that the variance is the same for all mean parameters.
Proportions and Poisson rates have inherent heteroscedasticity: the variance depends on the mean or expected value.
Because the latter differs between null and alternative, the standard deviation of the effect will also differ.

In this case, the (absolute) diff is the same in your example and in the R call, but evaluated at different proportions the standard deviations can differ considerably.
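The idea can be sketched by hand with the normal approximation (a minimal sketch using scipy; the variable names are mine, not statsmodels internals, and the two-sided formula here drops the negligible second tail):

```python
from math import sqrt
from scipy.stats import norm

# Two-sample proportions: p1 = p2 + diff (note the sign convention)
p2, diff = 0.2, -0.1
p1 = p2 + diff
nobs1 = 300  # equal group sizes assumed here
alpha = 0.05

# The standard deviation of the difference differs under H0 and H1
p_pooled = (p1 + p2) / 2                        # pooled proportion (equal nobs)
std_null = sqrt(2 * p_pooled * (1 - p_pooled))  # H0: one common proportion
std_alt = sqrt(p1 * (1 - p1) + p2 * (1 - p2))   # H1: two distinct proportions

# Normal-approximation power with heteroscedastic std (two-sided test)
crit = norm.ppf(1 - alpha / 2)
power = norm.cdf((abs(diff) * sqrt(nobs1) - crit * std_null) / std_alt)
print(p_pooled, std_null, std_alt, power)
```

This reproduces the numbers in the Holder result above: p_pooled = 0.15, std_null ≈ 0.50498, std_alt = 0.5, power ≈ 0.93118.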

Josef

Charles Ayotte-Trépanier

Dec 1, 2022, 10:17:22 PM
to pystatsmodels
Oh, I'm so sorry, it was the diff sign all along. I went straight to blaming normal_power_het (which I still need to understand better!).

Thanks!