MSOM Conference
May 8, 2026, 5:09:54 PM
to msom-confe...@googlegroups.com
08-May-2026
Re: SIG-2026-0225, "To Abandon or Not? Understanding the Impact of Participant Retention on Clinical Trial Termination"
SIG Day Decision: Reject
Dear Author (this is to ensure anonymity):
We received many excellent submissions for the Healthcare Operations Management SIG-Day Conference. Unfortunately, we could not accept all of them to be included in the program, and we are sorry to say that your paper was not accepted to the SIG-Day conference.
If you also submitted an extended abstract of your paper to the main MSOM Conference, a decision on that submission will be made separately.
Sincerely,
Healthcare Operations SIG Co-Chairs
MSOM Healthcare Operations Management SIG-Day Co-Chair
---------------------
Referee: 1
Strengths SIG Only: This paper takes on an important area -- how to optimally invest to retain participants in a clinical trial. Clearly the topic is of interest to the MSOM Healthcare SIG community. The analytical modeling combined with empirical data analysis is a nice combination that aims to provide generalizable insights while simultaneously grounding the findings in the reality of what is seen in actual clinical trials.
Referee: 2
Strengths SIG Only: Below is my understanding of their main contributions.
One strength of the manuscript is that it studies an important and relevant problem. The paper focuses on participant retention in clinical trials and connects it to the sponsor’s termination decision, rather than treating retention only as a recruitment or compliance issue. That framing motivates the paper and makes the research question meaningful for both the clinical trial literature and the operations management audience.
Another positive aspect is the paper’s overall structure. The authors combine an analytical model with an empirical analysis and then add a counterfactual exercise. The model is used to develop intuition, the empirical section provides supporting evidence from real clinical trial data, and the counterfactual analysis is intended to translate the results into more practical insights.
They also offer some theoretical results. In particular, the paper suggests that the effect of retention-related decisions on termination is not entirely straightforward, and that monetary payment may have diminishing or non-monotone effects depending on the setting. However, some of their claims may need further scrutiny and clarification as they rely on limiting assumptions.
A further strength is the effort the authors put into the data work. The empirical analysis uses a fairly large sample of trials and constructs payment information from informed consent documents, which is not an easy variable to obtain. The paper also tries to go beyond simple correlation by using matching and additional robustness analyses.
Finally, the paper has some practical orientation, which is valuable. The counterfactual analysis and discussion aim to show how sponsors might think about payment and effort choices when designing retention strategies. This gives the manuscript some managerial relevance and helps explain why the problem matters beyond the specific dataset.
Referee: 3
Strengths SIG Only: Overall I found this paper well-motivated and well-written. The proposed analytical model is easy to understand, and the findings (theoretical and empirical) are intuitive given the model proposed. The application itself (understanding the impact of participant retention in multi-phase clinical trials on whether the trial is terminated) is important and very relevant for the conference.
Referee: 1
Limitations: One concern I have is about how much this paper adds over the rather large literature on experimentation intensity within the clinical trials space. Multiple papers (e.g., the cited Kouvelis et al. (2017), Tian et al. (2021), and Tian et al. (2023) papers) optimize recruitment rate based on what we have learned so far in the trial. I understand that the authors differentiate by optimizing retention effort instead of recruitment rate, but the effect seems more or less the same -- you have a convex increasing cost function that controls how much patient information you get over a fixed period of time. This literature has already produced many insights about when you should spend at a higher vs. lower rate. This paper did not clarify what the new findings are compared to this already well-established literature.
I had a concern about the model itself. The authors model profit as a linear combination of posterior mean and posterior variance in (1), but in reality there's a huge boost (highly non-linear) in getting a drug over the threshold where a regulator will approve it. This drives much of the insights in the recruitment rate literature, yet is absent here.
Lastly, I found the numerics hard to interpret. There are fundamental differences between a trial that pays $100 and $1000 as an incentive. Probably the $1000 trial has many more trial visits and a longer duration, or perhaps it is asking much more of patients. These are probably hard to compare in a meaningful way; there are big differences between these drugs that probably lead to big differences in other things like termination decisions. The numerical analysis would be improved with more focus, e.g. limiting to trials for one disease group with similar duration and "ask" of the patients.
Referee: 2
Limitations: One limitation is the gap between the analytical model and the real complexity of clinical trials. The model uses a very stylized two-period setting with a single interim analysis, linear retention, and a fairly simple profit structure. That helps tractability, but it also means the framework may miss important features of real trials, such as multiple interim reviews, changing enrollment patterns, protocol amendments, or richer forms of patient behavior. Furthermore, in my understanding, their model treats the number of retained participants in a deterministic way once payment and effort are chosen. In other words, retention enters through expected rates rather than through a richer stochastic dropout process at the patient level. That may understate the uncertainty that sponsors actually face when making interim termination decisions, especially in smaller or riskier trials.
Another concern that I have is their strong distributional structure. The model relies heavily on normal priors, normal outcome distributions, Bayesian updating, and closed-form expressions that come from these assumptions. While this is standard and analytically helpful, it narrows the scope of the theory. The main results may depend in part on this convenient parametric structure rather than reflecting a broader, more robust phenomenon.
A further limitation is the identification challenge in the empirical analysis. The paper tries to address endogeneity through matching and robustness checks, but it also admits that payment decisions are not random and that important unobserved factors may still affect both payment and termination. As a result, the empirical findings are helpful and suggestive, but they are still not fully convincing as strong causal evidence.
The next issue in my view is the construction of the payment variable. The authors extract payment information from informed consent forms using text extraction, an LLM-based step, and manual review. This is a creative effort, but it also raises concerns about measurement error, especially when trials involve multiple payments, complicated payment schedules, or language that is not fully standardized across consent forms.

Another limitation is the sample selection in the empirical study. The analysis only uses trials for which consent forms and payment information are available, and the paper notes that such documents are only available for a subset of trials. After additional restrictions, the usable sample becomes much smaller than the overall population of trials. This creates a concern that the estimation sample may not be representative of clinical trials more broadly.

Moreover, a key limitation is that the paper’s practical conclusions may be stronger than what the evidence can fully support. The counterfactual section gives concrete recommendations on payment and effort choices, but those recommendations come after several layers of assumptions: the stylized dynamic model, the structural calibration, the imputed effort, and approximated continuation probabilities. So, their managerial insights are interesting, but they should probably be presented more cautiously.
My final limitation is that some of the paper’s most interesting mechanisms are more stated than fully demonstrated in the data. For example, the manuscript emphasizes the role of retention as a pathway through which payment affects termination, but its own mediation results suggest that this indirect channel is relatively small and only marginally significant. That creates some tension between the conceptual story and the empirical evidence.
Referee: 3
Limitations: A number of assumptions are made in Section 2 to obtain the rather simple Gaussian setup that's being analyzed, and while many of these assumptions appear standard relative to those made by existing researchers, it would be helpful to give a sense of how sensitive the findings are to the different assumptions made. In addition to this sort of sensitivity analysis, I would have liked to see a little more discussion on how documentation on study protocols/informed consent forms/retention strategies could possibly change the findings (or enable findings that are more granular, conditional on specific retention strategy characteristics perhaps). Overall, the discussion of limitations in Section 5 (the last paragraph) is just very, very terse.
Referee: 1
Comments to the Author
(There are no comments.)
Referee: 2
Comments to the Author
(There are no comments.)
Referee: 3
Comments to the Author
(There are no comments.)