Decision on SIG-2026-0126


MSOM Conference

May 8, 2026, 5:12:22 PM
to msom-confe...@googlegroups.com
08-May-2026

Re: SIG-2026-0126, "Do Inspection Delays Lead to Quality Decline? Evidence from U.S. Nursing Homes"

SIG Day Decision: Reject

Dear Author (this is to ensure anonymity):

We received many excellent submissions for the Healthcare Operations Management SIG-Day Conference. Unfortunately, we could not accept all of them for inclusion in the program, and we regret to inform you that your paper was not accepted to the SIG-Day conference.

If you also submitted an extended abstract of your paper to the main MSOM Conference, a decision on that submission will be made separately.


Sincerely,

MSOM Healthcare Operations Management SIG-Day Co-Chairs

---------------------
Referee: 1
Strengths SIG Only: The paper provides clear, empirically grounded evidence that prolonged inspection delays in U.S. nursing homes lead to a decline in health deficiency citations, but without corresponding changes in continuously monitored staffing levels. This contrast supports the idea that nursing homes may focus on metrics under constant oversight while neglecting care areas that are visible only during inspections. These findings offer practical implications for how regulators could better design monitoring mechanisms.

Referee: 2
Strengths SIG Only: Examining the impact of inspection delays, that is, the time between inspections, on outcomes is a valuable question.

Referee: 3
Strengths SIG Only: *The paper studies an important and timely policy question by examining how inspection delays affect nursing home quality, particularly in the context of COVID-induced inspector shortages, which enhances its practical relevance.

*The analysis leverages a large and rich panel dataset covering 8 years and a wide set of facilities, allowing the authors to examine multiple dimensions of quality, including both inspection-based deficiencies and continuously monitored staffing measures.

*The paper highlights a clear divergence between continuously monitored and inspection-based quality measures, which supports an intuitive "teaching-to-the-test" interpretation and offers useful insights into how monitoring intensity shapes provider behavior.

Referee: 1
Limitations: The contribution could be made clearer relative to existing literature on inspection effects in other industries (e.g., restaurants, drug manufacturing) by better explaining how nursing homes differ as a context. Also, the instrumental variable approach using peer inspection delays needs stronger justification, since regional factors (shared workforce or economic conditions) could influence both inspection timing and quality outcomes, potentially violating the exclusion assumption.

Referee: 2
Limitations: This question has been examined in other studies, such as Anand et al. 2012, which the authors cite. I'm not sure there is sufficient contribution above that study.

In addition, while deficiencies are a clear outcome of inspections and a clear quality indicator, staffing levels are not comparable. Staffing levels are not a clear quality measure; moreover, the organization is inherently motivated to keep staffing high enough to retain current residents and attract new ones. Quality deficiencies and staffing levels are not really two dimensions of quality, as the authors propose. I think inspection delays are likely to have little impact on staffing levels but a strong impact on quality problems, which is what Anand et al. 2012 found.

Because of this concern, I think the "teaching-to-the-test" perspective is not aligned with the outcome measures.

Overall, these concerns create some fundamental problems with the study.

Referee: 3
Limitations: *Since deficiency citations are only recorded during inspections, it is not entirely clear to me whether increases reflect true declines in quality or simply the fact that more issues are uncovered after longer periods without inspection.

*While the paper argues that inspection timing is random and unannounced, it would be helpful to more clearly discuss how much discretion remains at the regulator level, as prioritization of certain facilities could still introduce endogeneity.

*The mechanism interpretation the authors provide is suggestive rather than fully causal, as the evidence for "teaching-to-the-test" behavior relies on differences across monitored versus non-monitored outcomes rather than on direct identification of strategic responses by facilities.

*One potential issue is that the instrument, based on peer inspection delays in the same county, may be picking up broader local factors such as staffing shortages or regional disruptions that influence both inspection timing and quality outcomes. The paper would benefit from a more detailed discussion of the identifying assumptions behind the instrument, particularly why peer inspection delays are unlikely to reflect shared local conditions that directly affect quality.

*Finally, it would be very helpful to discuss how these findings might extend to other regulated settings where monitoring frequency varies across dimensions.

Referee: 1

Comments to the Author
(There are no comments.)

Referee: 2

Comments to the Author
(There are no comments.)

Referee: 3

Comments to the Author
(There are no comments.)