# mixture model
mix_rloss <- mixture(gaussian, gaussian)
prior_rloss <- c(
  prior(normal(-14, 10), Intercept, dpar = mu1),
  prior(normal(-5, 10), Intercept, dpar = mu2))
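As a quick check before setting priors, get_prior() lists every parameter class and dpar the mixture exposes (mu1, mu2, theta2, the sigmas); a minimal sketch using the same formula, data and family as in the fit below:
# optional check: list all parameters of the mixture model that can take a prior
get_prior(bf(log_rloss ~ wst + con, theta2 ~ wst),
          data = data, family = mix_rloss)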
fit_rloss <- brm(bf(log_rloss ~ wst + con, theta2 ~ wst),
                 data = data, family = mix_rloss, prior = prior_rloss,
                 inits = 0, chains = 2, iter = 3000)
# does not converge:
fit_rloss1 <- brm(bf(log_rloss ~ wst + con + log_d, theta2 ~ wst + log_d),
                  data = data, family = mix_rloss, prior = prior_rloss,
                  inits = 0, chains = 2, iter = 3000)
pp_check(fit_rloss)  # see Figure_4 in the attachment
> summary(fit_rloss)
Family: mixture(gaussian, gaussian)
Formula: log_rloss ~ wst + con
theta2 ~ wst
Data: raw (Number of observations: 431)
Samples: 2 chains, each with iter = 3000; warmup = 1500; thin = 1;
total post-warmup samples = 3000
ICs: LOO = NA; WAIC = NA; R2 = NA
Population-Level Effects:
                 Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
mu1_Intercept      -12.94      0.08   -13.10   -12.77       2675 1.00
mu2_Intercept       -5.24      0.10    -5.43    -5.03       3000 1.00
theta2_Intercept     2.33      0.17     2.01     2.69       2659 1.00
mu1_wst              0.00      0.00    -0.00     0.00       3000 1.00
mu1_con1            -0.18      0.22    -0.62     0.27       2172 1.00
mu2_wst              0.00      0.00     0.00     0.00       3000 1.00
mu2_con1             0.59      0.19     0.20     0.96       2260 1.00
theta2_wst           0.00      0.00     0.00     0.01       3000 1.00
Family Specific Parameters:
       Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma1     0.47      0.06     0.37     0.60       2470 1.00
sigma2     1.63      0.06     1.52     1.74       2703 1.00
Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
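Note that theta2_Intercept is on the logit scale (theta1 serves as the fixed reference component), so, if I read the brms mixture parameterization correctly, an intercept of 2.33 corresponds to roughly plogis(2.33) ≈ 0.91 of the weight on the second component. Per-observation membership probabilities can also be inspected with pp_mixture(); a minimal sketch:
# sketch: posterior probabilities of component membership for each observation
post_probs <- pp_mixture(fit_rloss)
str(post_probs)  # observations x summary statistics x mixture components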
> summary(fit_rloss1)
Family: mixture(gaussian, gaussian)
Formula: log_rloss ~ wst + con + log_d
theta2 ~ wst + log_d
Data: raw (Number of observations: 431)
Samples: 2 chains, each with iter = 3000; warmup = 1500; thin = 1;
total post-warmup samples = 3000
ICs: LOO = NA; WAIC = NA; R2 = NA
Population-Level Effects:
                   Estimate  Est.Error     l-95% CI    u-95% CI Eff.Sample Rhat
mu1_Intercept    2691973.17 4472441.77   -897613.26 13400546.72          3 2.56
mu2_Intercept         -6.48       0.53        -7.45       -5.69          1 2.41
theta2_Intercept     175.37     204.51         1.37      598.19          1 2.10
mu1_wst          -131184.73  163061.46   -476645.10        0.00          1 2.63
mu1_con1         4091472.60 8892720.32 -14036559.30 22885133.19          5 1.38
mu1_log_d         109411.50  287674.89   -365882.46   853932.49         11 1.11
mu2_wst                0.00       0.00         0.00        0.01          1 1.89
mu2_con1               0.68       0.32         0.15        1.36          2 1.42
mu2_log_d              0.34       0.09         0.17        0.55         90 1.03
theta2_wst            -0.02       0.30        -0.77        0.63        277 1.00
theta2_log_d           2.82      30.40       -64.02       74.25        209 1.02
Family Specific Parameters:
       Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma1     5.89     10.40     0.38    32.91          2 1.22
sigma2     2.13      0.54     1.50     2.82          1 8.63
Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Warning messages:
1: The model has not converged (some Rhats are > 1.1). Do not analyse the results!
We recommend running more iterations and/or setting stronger priors.
2: There were 1336 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help.
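Following these warnings, a possible next step (a sketch, untested on these data) is to standardize log_d so the mu1 coefficients are not pushed into the millions, raise adapt_delta as suggested, and run more and longer chains:
# sketch: re-fit the second model with a standardized predictor and stricter sampler settings
data$log_d_z <- as.numeric(scale(data$log_d))  # hypothetical scaled copy of log_d
fit_rloss2 <- brm(bf(log_rloss ~ wst + con + log_d_z, theta2 ~ wst + log_d_z),
                  data = data, family = mix_rloss, prior = prior_rloss,
                  inits = 0, chains = 4, iter = 4000,
                  control = list(adapt_delta = 0.99))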