Errors


oic...@gmail.com

Dec 4, 2014, 1:35:16 PM
to comparegr...@googlegroups.com
What is the meaning of these errors?

glm.fit: fitted probabilities numerically 0 or 1 occurred
glm.fit: algorithm did not converge

Thanks

Isaac Subirana

Dec 5, 2014, 4:21:59 AM
to comparegr...@googlegroups.com
If I'm not mistaken, these messages come from estimating the Odds Ratio (OR) of a continuous variable. They are probably warnings rather than errors, and they may indicate that this continuous variable completely separates the two groups (cases/controls), yielding non-finite OR estimates...
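As a small sketch of what I mean, here is a toy example with made-up data in which the continuous variable perfectly separates the two groups; fitting the logistic model then produces exactly these warnings and an essentially infinite OR:

```r
# Made-up data: x completely separates the two groups (no overlap).
y <- c(0, 0, 0, 0, 1, 1, 1, 1)
x <- c(1, 2, 3, 4, 10, 11, 12, 13)

fit <- glm(y ~ x, family = binomial)
# Warning messages:
#   glm.fit: algorithm did not converge
#   glm.fit: fitted probabilities numerically 0 or 1 occurred

exp(coef(fit)["x"])  # enormous value: the OR estimate diverges
```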

Regards,

Isaac.

oic...@gmail.com

Dec 9, 2014, 2:04:09 PM
to comparegr...@googlegroups.com
I am not estimating the Odds Ratio.
I used the following commands:
dmtab<-compareGroups(dmtab1s, y = dmtab1$Treatment)
and I obtained 8 error messages
Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred
3: glm.fit: fitted probabilities numerically 0 or 1 occurred
4: glm.fit: fitted probabilities numerically 0 or 1 occurred
5: glm.fit: fitted probabilities numerically 0 or 1 occurred
6: glm.fit: fitted probabilities numerically 0 or 1 occurred
7: glm.fit: algorithm did not converge
8: glm.fit: fitted probabilities numerically 0 or 1 occurred

Also, the p.overall results were different from the ones obtained with the t.test function.
Can you help me please?
Thank you very much in advance,
Rocío Zamanillo

Isaac Subirana

Dec 29, 2014, 11:56:59 AM
to comparegr...@googlegroups.com
I suspect that these warnings arise when fitting a logistic model, whether the odds ratios are displayed or not.
In general, the p-values obtained from a logistic model (with treatment as the response) differ from the ones obtained by a t-test, but the two should not differ very much.
The p.overall column displayed in the bivariate table computed by the compareGroups package corresponds to a t-test (the t.test function in R) if the variable is continuous and considered normally distributed. So I do not understand why your p.overall values differ from the ones obtained with a t-test. Could you please provide the data to the compareGroups package maintainer, so we can better answer your question? Thanks.
Isaac. 

Isaac Subirana

Dec 31, 2014, 7:20:55 AM
to comparegr...@googlegroups.com
The warning messages appeared because there are extreme (but not outlying) values in the markers (variables) that make the fitted probabilities in the logistic model equal to 0 or 1. In your case, however, this does not affect your results, since you report means and SDs per treatment group, and p.overall is computed by a t-test, not a logistic model. So you can safely ignore these warnings.

Regarding the p.overall values, compareGroups performs a t-test without assuming equal variances (the default). In fact, from a statistical point of view, it is recommended to compute the Welch t-test directly (the default option of the t.test function) rather than first testing whether the variances are equal with var.test. So you can check that calling t.test with var.equal = FALSE gives exactly the same p-value as the p.overall column obtained with compareGroups.
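To illustrate with simulated data (the numbers here are made up, not from your dataset), the Welch and pooled-variance p-values can differ noticeably when the group variances differ:

```r
# Simulated groups with unequal variances.
set.seed(1)
a <- rnorm(30, mean = 0,   sd = 1)
b <- rnorm(30, mean = 0.5, sd = 2)

t.test(a, b)$p.value                    # Welch (default, var.equal = FALSE):
                                        # this is what p.overall reports
t.test(a, b, var.equal = TRUE)$p.value  # Student's t with pooled variance:
                                        # can differ from p.overall
```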

Isaac.  