SummerSim 2017: Microsimulation Models for Cost-Effectiveness Analysis: a Review and Introduction to CEAM - Reviewer Chris Kypridemos


Jacob Barhak

Apr 30, 2017, 12:28:11 PM
to public-scien...@googlegroups.com

Thank you for giving me the opportunity to review this interesting paper, which presents a new microsimulation modelling framework for CEA. I have some concerns regarding the search strategy and the assessment of the modelling frameworks, so I classify the paper as ‘Borderline’. I would, however, recommend it for acceptance provided that the authors address my concerns.

I have no comments on the title, abstract, or introduction.

Methods:
1. Please provide more clarity about your search strategy and inclusion criteria. Some criteria are heavily subjective (e.g. “active user community”, “quality of documentation”); it would be helpful if you described how you made decisions based on them. Furthermore, I do not understand why some criteria were merely “preferred”.
2. I agree that computation speed is an important aspect; however, development speed is also important. It would be helpful if you also provided some information on how quickly the ‘hello world’ model could be developed with each approach.

Results:
1. Why were eight frameworks excluded from further testing? Please provide the reason(s) for each one; Table 1 does not state them clearly. First, some cells contain an ‘x’ and some do not, and for the coloured cells it is unclear how those marked with an ‘x’ differ from the unmarked ones. Second, it is not clear why some packages qualified for further testing: for example, AnyLogic and JAMSIM both have two yellow and two green cells, yet one qualified and the other did not.
2. In the Methods, you describe two assessment frameworks with specific markers (e.g. computation times, lines of code, debugging time). I would expect to see how each approach scored against these markers. While I understand the difficulties of a systematic search for this topic, a systematic approach to your assessment of the modelling frameworks is doable.
3. Please consider providing the ‘hello world’ code you used for your assessment (I sketch below, after this list, the kind of minimal model I have in mind).
4. The “… lack of individual-level coupling between the natural history and intervention simulations” is not necessarily bad; it is just a different approach, which some may argue captures real-life uncertainty better. Depending on your philosophical views, an action in real life can have unintended and unpredictable consequences that may alter the life course of individuals. Your approach may therefore give artificially narrow uncertainty intervals, because it keeps the life course of simulants fixed and considers only the intended effects of the intervention (the second sketch after this list illustrates the narrowing). You may consider expanding this part of the paper to discuss both approaches.
5. I understand that the CEAM example is a ‘showcase’ for demonstration purposes, so I will not comment on it, as the detailed structure of the microsimulation framework is not presented here. I look forward to the technical specification of your model in a separate paper, and I welcome your choice to make this project OSS.
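
To clarify point 3: this is the kind of minimal ‘hello world’ microsimulation I have in mind. It is entirely my own hypothetical Python sketch, not the authors’ code; the names, rates, and population size are illustrative assumptions. Simulants age in annual steps and face a constant mortality risk.

import numpy as np

# Hypothetical 'hello world' microsimulation. All names and parameters
# below are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(seed=1)

N_SIMULANTS = 100_000       # illustrative population size
N_YEARS = 10                # simulated horizon in annual steps
ANNUAL_MORTALITY = 0.01     # illustrative constant probability of death per year

age = rng.integers(30, 70, size=N_SIMULANTS).astype(float)
alive = np.ones(N_SIMULANTS, dtype=bool)

for _ in range(N_YEARS):
    # Each simulant draws against the mortality risk; the dead stay dead.
    dies = rng.random(N_SIMULANTS) < ANNUAL_MORTALITY
    alive &= ~dies
    age[alive] += 1.0       # survivors age by one year

print(f"survivors: {alive.sum()} of {N_SIMULANTS}")
print(f"mean age of survivors: {age[alive].mean():.1f}")

Timing this loop in each candidate framework, alongside the lines of code and the development time it required, would give exactly the computation-speed and development-speed comparison I ask for in Methods point 2.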
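
And here is a small, self-contained illustration of the uncertainty argument in point 4 (again my own sketch, not the paper’s method; all figures are assumed for illustration). Both designs estimate the deaths averted by an intervention that halves a baseline mortality risk. The coupled version reuses one uniform draw per simulant in both arms, i.e. the life course is fixed; the uncoupled version draws independently for each arm.

import numpy as np

N = 10_000                  # simulants per replicate (illustrative)
P_BASE, P_INT = 0.02, 0.01  # baseline and intervention mortality risks (illustrative)
REPLICATES = 500
rng = np.random.default_rng(seed=2)

coupled, uncoupled = [], []
for _ in range(REPLICATES):
    u = rng.random(N)       # one life-course draw per simulant, shared by both arms
    coupled.append(int((u < P_BASE).sum() - (u < P_INT).sum()))
    v = rng.random(N)       # fresh, independent draws for the intervention arm
    uncoupled.append(int((u < P_BASE).sum() - (v < P_INT).sum()))

# Both designs are unbiased for the ~100 deaths averted; only the spread differs.
print(f"coupled:   mean {np.mean(coupled):.1f}, sd {np.std(coupled):.1f}")
print(f"uncoupled: mean {np.mean(uncoupled):.1f}, sd {np.std(uncoupled):.1f}")

Under coupling, only simulants whose draw falls between the two risks can differ between arms, so the sampling noise of the two arms cancels and the interval narrows (here roughly sd 10 versus sd 17). Whether that narrowing is a virtue (it isolates the pure intervention effect) or an artefact (it ignores unintended consequences that perturb life courses) is exactly the philosophical question I raise in point 4.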

Discussion:
1. The second paragraph, which discusses uncertainty, is quite vague. I cannot offer a suggestion for improvement because it does not link to the preceding results, and I do not understand its purpose and function.
2. Consider summarising here the benefits of your approach and their practical implications. What does your approach bring to the end-user that was not available before?

I have no comments regarding references.

I have no conflict of interest.


Chris Kypridemos, MD, MPH, PhD
Research Associate in Public Health Modelling
University of Liverpool.
Department of Public Health & Policy,
Institute of Psychology, Health & Society.
Whelan Building, Quadrangle,
Office 220
LIVERPOOL, L69 3GB
United Kingdom

Email: cky...@liverpool.ac.uk

