Running Pkg.test("Stan") using CmdStan 2.7.0 works fine with the current version of Stan.jl (v"0.2.1") on Julia 0.3.10.
In early August I will release an update to Stan.jl that works on Julia 0.4-dev. Julia 0.4 requires a couple of changes to how multiple chains are scheduled in parallel.
FYI, I compiled CmdStan on OS X 10.11 Beta with Xcode-7.0-beta, using clang++ and O/O_STANC=3. I typically disable the template-depth section for clang in the Mac-specific makefile. Attached is the detailed output (just for the binormal example; all other tests ran fine as well).
I haven't had time to look at the new (experimental?) features (ADVI) yet; I will take that along in August as well, unless someone pokes me earlier. If ADVI creates a .csv file, support should be pretty simple to add.
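For reference, CmdStan's sampler CSV output is plain CSV with '#' comment lines around a single header row, so if ADVI follows the same layout a reader stays small. A minimal sketch in Python (the helper name and in-memory example are mine, not Stan.jl's actual reader):

```python
import csv
import io

def read_cmdstan_csv(stream):
    """Parse CmdStan-style CSV output: drop '#' comment lines,
    treat the first remaining line as the header, the rest as draws."""
    rows = [line for line in stream if not line.startswith("#")]
    reader = csv.reader(rows)
    header = next(reader)
    draws = [[float(v) for v in row] for row in reader if row]
    return header, draws

# In-memory example mimicking the comment/header/draws layout.
sample = io.StringIO(
    "# stan_version_major = 2\n"
    "lp__,y.1,y.2\n"
    "-1.0,0.1,-0.2\n"
    "# adaptation terminated\n"
    "-0.5,0.3,0.4\n"
)
header, draws = read_cmdstan_csv(sample)
print(header)  # ['lp__', 'y.1', 'y.2']
print(draws)   # [[-1.0, 0.1, -0.2], [-0.5, 0.3, 0.4]]
```

Filtering all '#' lines (not just the leading block) also handles the adaptation and timing comments CmdStan writes mid-file.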
--- Translating Stan model to C++ code ---
bin/stanc /Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal.stan --o=/Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal.hpp
Model name=binormal_model
Input file=/Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal.stan
Output file=/Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal.hpp
--- Linking C++ model ---
clang++ -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -I src -I stan/src -isystem stan/lib/stan_math_2.7.0 -isystem stan/lib/eigen_3.2.4 -isystem stan/lib/boost_1.58.0 -Wall -pipe -DEIGEN_NO_DEBUG -Wno-unused-function -Wno-tautological-compare -Wno-c++11-long-long -O3 -o /Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal src/cmdstan/main.cpp -include /Users/rob/.julia/v0.3/Stan/Examples/Binormal/tmp/binormal.hpp
Inference for Stan model: binormal_model
4 chains: each with iter=(1000,1000,1000,1000); warmup=(0,0,0,0); thin=(1,1,1,1); 4000 iterations saved.
Warmup took (0.016, 0.016, 0.016, 0.017) seconds, 0.066 seconds total
Sampling took (0.023, 0.026, 0.026, 0.027) seconds, 0.10 seconds total
                    Mean     MCSE  StdDev      5%       50%       95%   N_Eff  N_Eff/s    R_hat
lp__            -1.0e+00  3.1e-02    1.0     -3.1  -6.7e-01  -5.2e-02   1054    10393   1.0e+00
accept_stat__    9.1e-01  1.9e-03    0.12    0.66   9.5e-01   1.0e+00   4000    39443   1.0e+00
stepsize__       9.4e-01  9.9e-02    0.14    0.82   8.8e-01   1.2e+00    2.0       20   9.2e+13
treedepth__      1.9e+00  6.6e-02    0.54    1.0    2.0e+00   3.0e+00     67      663   1.0e+00
n_leapfrog__     3.1e+00  2.6e-01    1.6     1.0    3.0e+00   7.0e+00     39      382   1.0e+00
n_divergent__    0.0e+00  0.0e+00    0.00    0.00   0.0e+00   0.0e+00   4000    39443       nan
y[1]            -1.5e-02  2.1e-02    1.0    -1.7    9.4e-03   1.7e+00   2411    23772   1.0e+00
y[2]             1.9e-02  1.9e-02    0.98   -1.6    3.7e-03   1.6e+00   2585    25486   1.0e+00
Samples were drawn using hmc with nuts.
For each parameter, N_Eff is a crude measure of effective sample size,
and R_hat is the potential scale reduction factor on split chains (at
convergence, R_hat=1).
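For reference, the split-chain R_hat reported above can be computed by halving each chain and comparing between-half and within-half variances, the classic Gelman-Rubin construction. A minimal sketch in Python (pure illustration, not CmdStan's implementation):

```python
import statistics

def split_rhat(chains):
    """Split-chain potential scale reduction factor:
    halve each chain, then compare between-half and
    within-half variances (classic Gelman-Rubin)."""
    halves = []
    for chain in chains:
        m = len(chain) // 2
        halves.append(chain[:m])
        halves.append(chain[m:2 * m])
    n = len(halves[0])
    means = [statistics.fmean(h) for h in halves]
    within = statistics.fmean(statistics.variance(h) for h in halves)
    between = n * statistics.variance(means)
    var_plus = (n - 1) / n * within + between / n
    return (var_plus / within) ** 0.5

# Well-mixed chains give values near 1; disjoint chains inflate it.
print(split_rhat([[0.0, 1.0, 2.0, 3.0], [1.0, 0.0, 3.0, 2.0]]))
```

Splitting the chains first is what lets R_hat catch a chain whose first and second halves disagree, which a whole-chain comparison would miss.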
Iterations = 1:1000
Thinning interval = 1
Chains = 1,2,3,4
Samples per chain = 1000
Empirical Posterior Estimates:
           Mean          SD        Naive SE      MCSE        ESS
lp__   -1.008360828  1.0150792  0.016049811  0.030513647  1106.6533
y.1    -0.014683177  1.0253441  0.016212114  0.022366559  2101.5559
y.2     0.019096266  0.9790049  0.015479427  0.016814434  3390.0431
Quantiles:
          2.5%        25.0%        50.0%         75.0%        97.5%
lp__  -3.6776187  -1.4042550  -0.670830500  -0.29177000  -0.024094995
y.1   -2.0134163  -0.7226885   0.008652330   0.65454450   2.071252500
y.2   -1.8838548  -0.6644820   0.003470805   0.67720425   1.967234250
                       PSRF             97.5%
lp__            1.004000×10⁰     1.0090000×10⁰
accept_stat__   1.140000×10⁰     1.3140000×10⁰
stepsize__      3.0214979×10¹⁴   5.89831932×10¹⁴
treedepth__     1.029000×10⁰     1.0870000×10⁰
n_leapfrog__    1.036000×10⁰     1.1040000×10⁰
n_divergent__   NaN              NaN
y.1             1.000000×10⁰     1.0010000×10⁰
y.2             1.001000×10⁰     1.0020000×10⁰
Multivariate    NaN              NaN
        Z-score  p-value
lp__      0.158   0.8746
y.1      -0.321   0.7486
y.2       0.610   0.5416

        Z-score  p-value
lp__     -0.777   0.4370
y.1      -0.404   0.6865
y.2      -1.166   0.2438

        Z-score  p-value
lp__      0.339   0.7350
y.1       0.492   0.6226
y.2       0.482   0.6298

        Z-score  p-value
lp__      1.424   0.1545
y.1       1.014   0.3104
y.2       0.354   0.7232
       95% Lower    95% Upper
lp__   -3.09777    -0.00034912
y.1    -2.01522     2.07113000
y.2    -1.78774     2.04616000
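The interval table above reports 95% bounds per parameter. From raw draws, the simplest variant to compute is the equal-tailed interval; a hedged sketch in Python (the function name is mine, and this is the order-statistic version, not necessarily the algorithm that produced the table above):

```python
def central_interval(draws, prob=0.95):
    """Equal-tailed credible interval from posterior draws
    (simple order-statistic version, not an HPD interval)."""
    ordered = sorted(draws)
    n = len(ordered)
    tail = (1.0 - prob) / 2.0
    lo = ordered[round(tail * (n - 1))]
    hi = ordered[round((1.0 - tail) * (n - 1))]
    return lo, hi

# With 101 evenly spaced draws, the 90% interval cuts 5% off each tail.
print(central_interval(list(range(101)), prob=0.90))  # (5, 95)
```

An HPD interval would instead search for the narrowest window containing the requested mass; for symmetric posteriors like this binormal example the two largely agree.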
           lp__          y.1           y.2
lp__   1.000000000  -0.020537582  -0.040924916
y.1   -0.020537582   1.000000000   0.066567001
y.2   -0.040924916   0.066567001   1.000000000
          Lag 1        Lag 5          Lag 10         Lag 50
lp__   0.44395293   0.028651176  -0.0053760634  -0.006793620
y.1    0.30353776   0.018117719   0.0043199533  -0.067077180
y.2    0.25876139   0.027311501  -0.0089279184   0.024112327

          Lag 1        Lag 5           Lag 10          Lag 50
lp__   0.540196036  0.085138378   0.04371068130   0.032327092
y.1    0.241527920  0.009184393  -0.00038747966  -0.041938622
y.2    0.071547072 -0.027107125   0.03388795739  -0.012981663

          Lag 1        Lag 5         Lag 10        Lag 50
lp__   0.46308768  -0.010879561   0.094918605  -0.031567440
y.1    0.28269055  -0.009826277   0.012703330  -0.002452647
y.2    0.24493764  -0.019832863  -0.034029837  -0.042601331

         Lag 1         Lag 5          Lag 10        Lag 50
lp__   0.6145697    0.13338730371   0.101844429  -0.026271114
y.1    0.2426259   -0.00988255455  -0.014203974   0.036094890
y.2    0.2848844    0.00097344396  -0.014377092  -0.015794737
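The lag tables above are per-chain sample autocorrelations; the standard estimate divides the lag-k autocovariance by the lag-0 variance. A minimal sketch in Python (illustrative only):

```python
import statistics

def autocor(series, lag):
    """Sample lag-k autocorrelation: lag-k autocovariance
    divided by the lag-0 variance."""
    n = len(series)
    mu = statistics.fmean(series)
    c0 = sum((v - mu) ** 2 for v in series) / n
    ck = sum((series[i] - mu) * (series[i + lag] - mu)
             for i in range(n - lag)) / n
    return ck / c0

# A monotone trend keeps strong lag-1 correlation; well-mixed
# draws decay toward 0 by lag 5-10, as in the tables above.
print(autocor([1, 2, 3, 4, 5, 6, 7, 8], 1))  # 0.625
```

The quick decay from roughly 0.25-0.6 at lag 1 to near zero by lag 5 in the tables is what supports the large effective sample sizes reported earlier.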