It's still possible that the logit transformation introduces NaNs during the optimization.
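As a rough illustration (not your data, just the boundary behavior of the logit transform): a mu that hits exactly 0 or 1 becomes -inf/inf, and any further arithmetic on that yields NaN.

import numpy as np
from scipy.special import logit

mu = np.array([0.0, 1e-6, 0.5, 1.0])
eta = logit(mu)     # -inf and inf at the boundary values
print(eta)
print(eta - eta)    # inf - inf is nan, so NaNs can show up downstream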
Which statsmodels version are you using? IIRC we had a change for this corner case recently.
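You can check the installed version with

import statsmodels
print(statsmodels.__version__)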
If I read your data correctly into pandas, I don't get any SVD failure; the fit finishes, but the results look a bit strange.
It looks like a perfect prediction case. We warn or raise in discrete Logit, but I guess we don't have a check for it in GLM. However, I don't know whether Binomial with counts has a perfect prediction problem; I've never heard of it.
>>> res.fittedvalues.values
array([ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
>>> res.model.endog
array([ 0. , 0. , 0.05645161, 0. , 0.0546875 ,
0. , 0.00234742, 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0.00409836,
0. , 0. , 0.01744186, 0.04268293, 0. ,
0.03846154, 0.5 , 0. , 0.04545455, 0.02325581,
0. , 0. , 0. , 0.01639344, 0. ,
0. , 0. , 0. , 0. , 0. , 0. ])
Or something else is strange in this case.
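For reference, a minimal sketch of the kind of fit above; the data and column names are made up, only the pattern (a proportion endog with GLM Binomial) matches your case.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# stand-in for the posted data: mostly-zero proportions in [0, 1]
rng = np.random.RandomState(0)
df = pd.DataFrame({"x": rng.randn(36)})
df["prop"] = rng.binomial(50, 0.03, size=36) / 50.0

exog = sm.add_constant(df[["x"]])
res = sm.GLM(df["prop"], exog, family=sm.families.Binomial()).fit()

print(res.fittedvalues.values)   # compare with the fitted values shown above
print(res.model.endog)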
Josef