Dear Yuxuan,
Thank you for including VCell in your performance evaluation.
Traditionally, we have focused on server-side scalability of multiple simulations across all of our solvers (ODE, PDE, spatial stochastic, non-spatial stochastic, etc.). In this way, parameter scans can be sent to our servers and run as independent jobs on our batch scheduler, yielding some level of parallelism. We also provide native executables for all of our solvers, but they require special input files and are not designed to be scripted outside of VCell.
The new way of scripting VCell solvers is through the pyvcell Python package (see https://github.com/virtualcell/pyvcell and https://pypi.org/project/pyvcell/). Our approach is to create lower-level Python packages (e.g. pyvcell-fvsolver for our PDE solver) that are driven by pyvcell. We do not yet have a pyvcell-ode package, but we are working toward one, possibly by the end of this summer.
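If you would like to experiment with the Python route in the meantime, installation is a single pip command (the package name comes from the PyPI link above); the scripting API itself is still evolving, so please check the GitHub README for current usage examples rather than anything I might sketch here.

    # create an isolated environment (optional) and install pyvcell from PyPI
    python3 -m venv vcell-env
    source vcell-env/bin/activate
    pip install pyvcell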
This Docker CLI is not designed for performance; rather, it is intended for compatibility and reproducibility of simulations described in SED-ML across the community.
If you really want to evaluate our native ODE solver executable today, you may install VCell Desktop (see instructions on vcell.org) and find the SundialsSolverStandalone_x64 executable within the installation directory (e.g. /Applications/VCell_Rel.app/Contents/Resources/app/localsolvers/mac64/SundialsSolverStandalone_x64 on my MacBook). To generate the solver input file, first run the simulation in VCell using the 'local run' option (the blue button), then look in the $HOME/.vcell/simdata/temp/ directory. Then run the solver with --help to determine how to proceed; the results are stored in a tab-delimited file.
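For example, on macOS the sequence looks roughly like this (the path is from my own installation above; adjust it for your platform and VCell version):

    # path to the standalone ODE solver inside the VCell Desktop installation (macOS example)
    SOLVER="/Applications/VCell_Rel.app/Contents/Resources/app/localsolvers/mac64/SundialsSolverStandalone_x64"

    # after a 'local run' in VCell Desktop, the generated solver input files appear here
    ls "$HOME/.vcell/simdata/temp/"

    # the solver's --help output explains how to pass the input file and where results are written
    "$SOLVER" --help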
Hope this helps - please respond in vcell-discuss with any more questions.