Decision on SIG-2026-0052

MSOM Conference

May 8, 2026, 5:13:54 PM
to msom-confe...@googlegroups.com
08-May-2026

Re: SIG-2026-0052, "Underrepresentation Bias in Clinical Trials: The Ripple Effect on Medical Decision-Making and Patient Outcomes"

SIG Day Decision: Reject

Dear Author (this is to ensure anonymity):

We received many excellent submissions for the Healthcare Operations Management SIG-Day Conference. Unfortunately, we could not include all of them in the program, and we regret to inform you that your paper was not accepted to the SIG-Day conference.

If you also submitted an extended abstract of your paper to the main MSOM Conference, a decision on that submission will be made separately.


Sincerely,

Healthcare Operations SIG Co-Chairs

MSOM Healthcare Operations Management SIG-Day Co-Chair

---------------------
Referee: 1
Strengths SIG Only: please see attached

Referee: 2
Strengths SIG Only: 1. The research question is timely and relevant. Underrepresentation is widely recognized in medicine, but the paper pushes on an operational question: once biased evidence exists, how does it propagate through sequential decision-making and learning to affect outcomes?
2. The modeling is simple and easy to interpret. The separation between biased clinical-trial evidence and experiential learning (the physician's own patient outcomes), plus a single parameter w that scales the effective weight of trial evidence, gives a transparent way to talk about trusting trials versus trusting experience. The dual-process model is also straightforward to explain and simulate.
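The dual evidence sources the referee describes can indeed be simulated in a few lines. The sketch below is a toy illustration, not the paper's actual formulation: every name and number in it (w, p_true, p_trial, the nominal trial size) is an assumption made here for exposition.

```python
import random

def simulate(p_true, p_trial, w, n_patients, seed=0):
    """One physician treating n_patients with a single treatment.

    The trial prior is a Beta distribution centred on the (biased)
    trial success rate p_trial, with an effective sample size scaled
    by w; experiential learning then adds the physician's own
    observed outcomes. Returns the posterior mean belief.
    """
    rng = random.Random(seed)
    trial_n = 100                      # nominal trial size (assumed)
    a = w * trial_n * p_trial          # prior successes from the trial
    b = w * trial_n * (1 - p_trial)    # prior failures from the trial
    for _ in range(n_patients):
        success = rng.random() < p_true
        a, b = a + success, b + (1 - success)
    return a / (a + b)

# A heavier trial weight w keeps beliefs anchored near the biased
# trial mean (0.7) even though the subgroup truth is 0.3.
anchored = simulate(p_true=0.3, p_trial=0.7, w=1.0, n_patients=200)
adaptive = simulate(p_true=0.3, p_trial=0.7, w=0.1, n_patients=200)
```

Under these assumed numbers, the low-w physician's belief drifts toward the subgroup truth while the high-w physician's belief stays pulled toward the biased trial mean, which is the transparency the referee is pointing to.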

Referee: 3
Strengths SIG Only: The manuscript tackles an important problem at the intersection of healthcare operations and decision science: the downstream impact of underrepresentation bias in clinical trials on physician behavior and patient outcomes. While prior literature has largely documented the existence of such bias, this paper advances the field by explicitly modeling how biased evidence propagates through decision-making and learning dynamics, thereby offering the potential for a novel contribution. The focus on operational consequences is both timely and relevant to the MSOM audience.

A second key strength is the modeling framework, which integrates a Bayesian multi-armed bandit formulation with a behavioral dual-process perspective. This combination is insightful: it allows the authors to connect classical results on incomplete learning with psychologically grounded decision rules (e.g., “win-stay, lose-shift”), generating nuanced predictions about when bias is amplified versus mitigated. The analytical results (especially the asymmetry between less-effective and more-effective treatments and the corrective role of exploration) are non-trivial.

The paper also provides managerial relevance through its simulation analysis and case study on aspirin use. The empirical illustration demonstrates the “ripple effects” of biased trials on real patient populations, linking modeling results to tangible health outcomes (e.g., unnecessary treatments and missed prevention opportunities).

Referee: 1
Limitations: please see attached

Referee: 2
Limitations: 1. The rational-system benchmark is not the right benchmark. The paper models the rational system as a discounted infinite-horizon Bayesian bandit solved via the Gittins index, and the main negative result is then interpreted as "incomplete learning" that harms patients. The concern is that a Gittins-index policy is optimal for a particular objective (discounted expected reward), not for "eventual convergence to the best arm," so it is not clear whether the result is a behavioral prediction, a normative warning, or an artifact of the benchmark definition. The paper uses "rational" to mean optimal for the discounted dynamic program but evaluates the policy by an asymptotic convergence criterion.
2. Underrepresentation is modeled as a biased mean, but this mechanism is not fully aligned with how underrepresentation works in practice. In the model, trial outcomes for treatment i are generated from a success probability \tilde{p}_i (which is dominated by other subgroups), while the true subgroup probability is p_i; physicians are assumed to be unaware of this discrepancy. This is a model of extrapolation error, whereas underrepresentation in trials often creates imprecision and low power for subgroup effects, not necessarily a biased mean that is known to be wrong.
3. The case study is interesting but currently feels too assumption-driven for the strength of its quantitative claims.
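The referee's first point, that a policy optimal for a discounted objective need not converge to the best arm, can be illustrated with a small sketch. Exact Gittins indices are laborious to compute, so this toy uses a greedy posterior-mean rule as a stand-in that exhibits the same incomplete-learning phenomenon; all priors and probabilities are invented here for illustration, not taken from the paper.

```python
import random

def greedy_bandit(p, priors, horizon, seed=0):
    """Two-arm Bernoulli bandit: always pull the arm with the higher
    Beta posterior mean. Returns the number of pulls per arm."""
    rng = random.Random(seed)
    ab = [list(prior) for prior in priors]   # Beta(a, b) per arm
    pulls = [0, 0]
    for _ in range(horizon):
        means = [a / (a + b) for a, b in ab]
        arm = max(range(2), key=lambda i: means[i])
        pulls[arm] += 1
        reward = rng.random() < p[arm]
        ab[arm][0] += reward
        ab[arm][1] += 1 - reward
    return pulls

# Arm 1 is truly better (0.8 vs 0.5), but a pessimistic biased-trial
# prior (Beta(1, 9), mean 0.1) makes it look worse, and the greedy
# rule never samples it: the better arm is never discovered.
pulls = greedy_bandit(p=[0.5, 0.8], priors=[(5, 5), (1, 9)], horizon=500)
```

The stuck-on-the-worse-arm outcome is exactly a failure of the asymptotic convergence criterion, even though a myopic (and, for suitable discounting, an optimal discounted) policy can behave this way; this separation between the two notions of optimality is the referee's concern.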

Referee: 3
Limitations: A first limitation concerns the abstraction of physician decision-making within the proposed framework. While the dual-process structure is grounded in prior literature, both components rely on stylized representations that may only partially capture real-world clinical behavior. In particular, modeling the rational system as a Bayesian multi-armed bandit abstracts away from important contextual factors, such as patient heterogeneity, comorbidities, guideline adherence, and institutional constraints, that often shape treatment choices. Similarly, representing intuition through a “win-stay, lose-shift” heuristic provides analytical tractability but may not reflect the diversity and variability of heuristic-driven decisions across physicians and clinical settings. As such, my impression is that the framework is best interpreted as a parsimonious representation of learning and decision dynamics, whose applicability depends on how closely a given context aligns with these assumptions. It would be helpful if the paper expanded on which settings best match its more restrictive assumptions.

A second limitation relates to the learning dynamics embedded in the model and their implications for the simulation analysis. The framework assumes that physicians update beliefs sequentially after each individual patient outcome, which facilitates the analytical results and numerical experiments. However, in practice, learning is often more episodic or “batched” as physicians incorporate information from multiple patients, updated guidelines, or external evidence sources over time. This difference raises a question about how well the simulated dynamics map to real-world learning processes, and whether the speed and direction of convergence observed in the simulations are sensitive to this assumption.

Referee: 1

Comments to the Author
(There are no comments.)

Referee: 2

Comments to the Author
(There are no comments.)

Referee: 3

Comments to the Author
(There are no comments.)
SIG-2026-0052_RefereeReport.pdf