This is a very important paper and therefore merits publication, so the overall decision is Accept.
The importance lies in the fact that the authors took the effort to use multiple modeling frameworks and evaluate them. I treat this as a review paper rather than as the development paper of a new framework. This evaluation is priceless since it essentially provides a one-stop shop for new modelers. I know first hand that this work was done faithfully, since one of the authors contacted me in the last year and asked questions about MIST, which I maintain. I can see how this effort becomes quite extensive when going over several systems – this amount of work is a great contribution that should be acknowledged and reported for the greater benefit.
The paper can be published as is, since the advantages of publishing it outweigh any consideration against publication. Although I am fine with the paper being published as is, I suggest that the authors take the time to improve it. The other two reviewers suggested some corrections, and I think it is prudent that the authors make an effort to address those suggestions. It would be nice if the new version acknowledged the reviewers' efforts with links to the public reviews.
I also have a few comments.
If possible, the authors may wish to reach out to the developers of the software they evaluated and verify the correctness of the facts reported. One reviewer already pointed out some issues specific to TreeAge, and I can point out some details about MIST that the authors may want to double check.
MIST is capable of running on a Linux cluster; see the installation instructions at https://github.com/Jacob-Barhak/MIST
Please also see the instructions for running it over the cloud at https://htmlpreview.github.io/?https://github.com/Jacob-Barhak/MIST/blob/master/Documentation/MIST-over-the-Cloud.html
If the requirements include a specific Unix/Linux distribution, please state those requirements explicitly.
Considering the large variety of options available, it is not a straightforward deduction why the authors decided to create their own framework. The justifications provided in the discussion need to be expanded. Speed is a matter of trade-offs that deserves a longer discussion: a dedicated system will almost certainly beat any general framework, at the price of losing some general functionality. I am curious what balance point was selected for the design of the new system.
Yet more importantly, what level of correlation between parameters do the authors require? Can the authors give an example of a requirement that cannot be implemented by most systems? Unless I missed something, the code example given seems fairly standard, and I would like to see what the authors are trying to do that breaks other systems.
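For concreteness, the kind of correlated-parameter sampling I would consider standard, and well supported by most general frameworks, looks like the following minimal Python/NumPy sketch; the variable names and the transformations are hypothetical and only illustrate the pattern:

import numpy as np

# Hypothetical sketch: draw two correlated parameters per simulated
# individual from a multivariate normal, then transform them into
# model quantities (names and transformations are illustrative only).
rng = np.random.default_rng(12345)
mean = [0.0, 0.0]
cov = [[1.0, 0.6],
       [0.6, 1.0]]  # correlation of 0.6 between the two latent parameters

z = rng.multivariate_normal(mean, cov, size=1000)

# Map the correlated draws onto, e.g., a log-normal hazard multiplier
# and an event probability via a logistic transform.
hazard_multiplier = np.exp(0.2 * z[:, 0])
event_probability = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * z[:, 1])))

If the authors' requirements go beyond something of this shape, an explicit example in the paper would make the case for a new framework much stronger.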
The authors are correct to discuss the limitations of mapping all the modeling frameworks available out there – there are countless systems, and the team has done very much to bring them together, which is great. I will point towards other options that the authors can choose to look at in the future. I suggest that the authors have a look at the Python library PyMC – it has MCMC code that may be reused (a minimal sketch of this idea follows below). It would also be nice to look at discrete event simulation systems and agent based simulation systems – there are countless of those, and they can be adjusted to perform microsimulation. Finally, looking at the event driven design in the code, I suggest the development team check SBML – its new specifications include capabilities that may help with microsimulation. The last few recommendations may be beyond the scope of this paper, yet they are important in the larger context of modeling tools available.
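To illustrate the PyMC suggestion: below is a minimal sketch, assuming the PyMC3-style interface, of how its built-in MCMC machinery could be reused to calibrate a single transition probability against observed event counts. The model structure and the numbers are hypothetical and serve only to show how little code such a calibration step requires.

import pymc3 as pm

# Hypothetical calibration sketch: infer an annual transition probability
# from observed event counts using PyMC3's built-in MCMC sampler.
with pm.Model():
    p = pm.Beta("transition_probability", alpha=2, beta=2)   # prior
    pm.Binomial("observed_events", n=500, p=p, observed=42)  # likelihood
    trace = pm.sample(2000, tune=1000)                       # NUTS sampling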
Hopefully the authors will choose to revise the paper to address these issues, although this is left at the level of a suggestion.