SummerSim 2016: Investigating the Fidelity of an Improvement-Assessment Tool After One Vacuum Bell Treatment Session - Reviewer Jacob Barhak


Jacob Barhak

Jun 8, 2016, 4:37:19 AM
to public-scien...@googlegroups.com
Submission #99
Investigating the Fidelity of an Improvement-Assessment Tool After One Vacuum Bell Treatment Session
Mohammad F. Obeid, Robert Obermeyer, Nahom Kidane, Robert Kelly and Frederic McKenzie
 

Review 1 - Jacob Barhak
Submission: 99
Secondary Reviewer: None
Originality of the Work (1-5): 4
Presentation Quality (1-5): 4
Overall Recommendation: Accept
Nominate for Best Paper Award: No
Reviewer's Confidence (1-5): 4

Comments

The paper deals with measuring the results of a medical procedure using a novel 3D measurement technique intended to replace human measurement.

This paper was submitted late to the BMPM track; after organizer approval, it went through an expedited review process.

The authors have disclosed that the paper was rejected from WinterSim on the grounds that it was not of interest to the modeling community, and they submitted the reviews they received.

I disagree with the blind WinterSim reviewers. There are plenty of modeling issues in this paper, and it is not an entirely medical paper. Since I am not a medical practitioner, my review is limited to the modeling issues discussed below. The paper is short and to the point, well written, and in my opinion suitable for publication at a simulation conference even without change. However, I would like to see some additional information to make it more informative.

The authors should add a few words on pre- and post-processing. How do you handle outliers, which are quite common in 3D scanning? How do you crop the scans? How do you select the initial positioning before registration? Is the process fully automated, or is there much human intervention? Also, did you try performing multiple separate scans and comparing the results to establish repeatability?
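
To make these questions concrete, here is a minimal sketch of the kind of pre-processing and registration pipeline they concern, using the open-source Open3D library. This is not the authors' pipeline; the file names, bounding-box limits, and ICP parameters are illustrative assumptions.

import numpy as np
import open3d as o3d

def preprocess(path, min_bound, max_bound):
    pcd = o3d.io.read_point_cloud(path)  # hypothetical scan file
    # Statistical outlier removal: drop points whose mean distance to
    # their neighbors is far from the cloud-wide average.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Crop to the chest region of interest (assumed bounds, in scan units).
    box = o3d.geometry.AxisAlignedBoundingBox(np.array(min_bound), np.array(max_bound))
    return pcd.crop(box)

pre = preprocess("pre_treatment.ply", [-0.3, -0.3, 0.0], [0.3, 0.3, 0.5])
post = preprocess("post_treatment.ply", [-0.3, -0.3, 0.0], [0.3, 0.3, 0.5])

# Point-to-point ICP; the identity initializer stands in for whatever
# initial positioning (manual or automated) the authors actually use.
result = o3d.pipelines.registration.registration_icp(
    post, pre, max_correspondence_distance=0.02,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
post.transform(result.transformation)
print("fitness = %.3f, inlier RMSE = %.4f" % (result.fitness, result.inlier_rmse))

Each of those steps (outlier threshold, crop bounds, ICP initialization) is a choice that affects the reported distances, which is why stating them in the paper matters.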

Looking at Figure 2, it seems that the differences become larger on the left and smaller on the right, as if the scans are slightly tilted. Is it possible that registration failed in this case? Or are you showing only part of the distance color map, while registration took into account many more points not shown? Your conclusion that there is not much difference between the new method and the human measurement is interesting. However, you show a difference in one patient that seems significant. Does this imply that the human ground-truth measurement is incorrect? Is it reasonable to deduce that there was human error in the ground-truth measurement? Or is this a machine algorithm error in this case?
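
One hypothetical way to test the tilt suspicion, assuming access to the registered point clouds: regress the per-point residual distances against the lateral coordinate. A strong linear trend would point to residual rotation (a registration problem) rather than a true shape change. The file names and the choice of axis below are assumptions.

import numpy as np
import open3d as o3d

post = o3d.io.read_point_cloud("post_treatment_aligned.ply")  # hypothetical registered scan
pre = o3d.io.read_point_cloud("pre_treatment.ply")            # hypothetical reference scan

# Unsigned distance from each post-treatment point to the pre-treatment
# cloud (the quantity a distance color map such as Figure 2 visualizes).
dists = np.asarray(post.compute_point_cloud_distance(pre))
xs = np.asarray(post.points)[:, 0]  # lateral coordinate; axis choice is an assumption

# Least-squares line: distance ~ slope * x + intercept. A trend across
# the scan width comparable to the reported treatment effect would
# suggest the left/right gradient is misalignment, not improvement.
slope, intercept = np.polyfit(xs, dists, deg=1)
print("trend across scan width = %.4f scan units" % (slope * (xs.max() - xs.min())))
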

If you eventually decide to use the new method in practice, will it complicate the process for the human operating it and require special setup and training? And if the process is fully automated, what will be the benefit of the extra effort?

These are some modeling and process questions that arose in my mind, and I would ask for answers to them, either in a reply-to-reviewers file or by adding information to the paper. Either way, the answers will be made public.

Note that my acceptance is conditional: it requires the authors to satisfy all other reviewers within a few days and to complete all other clerical requirements. Hopefully this will be possible.
