Unconventional anchor items in bifactor (longitudinal) model


Patrick Edwards

Sep 8, 2023, 12:31:22 AM
to mirt-package
Hi Phil!

I really appreciate your tutorials and package. The latter is much easier to work with than Stan. I just have a couple of questions about using unconventional anchor items with a bifactor (longitudinal) model.

I'm a political science PhD student using the mirt package to estimate candidates' ideological positions. My data consists of parliamentary candidate survey responses on a variety of issue positions. I plan to use these survey responses---which were collected for all recent parliamentary elections---to study how candidates' ideological positions shift over time. I'm using a two-tier/bifactor model based on your tutorial for longitudinal data. The survey items vary considerably between election years, since inter-year rescaling was not the intended goal of these candidate surveys. Thus, anchor items often have slight differences in response categories across election years.

I have three broad questions:
  • Is it appropriate to constrain some but not all anchor item parameters to be equal between time periods, including the item-time slopes? In particular, is it okay to leave some or all intercept parameters unconstrained?
  • Is this item linking procedure appropriate when each anchor item is estimated using different IRT models? 
  • Would either case bias the latent factor estimates?
My fear is that leaving some anchor item parameters unconstrained will bias the time-slope or item-time slope parameter estimates and fail to place both time periods on the same scale. I haven't found answers to these questions in this Google group or on the internet more broadly.



Here are two illustrative examples of anchor items with differences between their response categories. Please disregard these if you already understand the context behind my questions.
  • Anchor item example 1:  two otherwise identical Likert-type anchor items have a neutral option in one election year but no neutral option in the next election year. 
    • Q1.1: Church and state must be separated (Likert-type + no neutral option).
      • b0: Strongly agree
      • b1: Somewhat agree
      • b2: Somewhat disagree
      • b3: Strongly disagree
    • Q1.2: Church and state must be separated (Likert-type + neutral option).
      • b0: Strongly agree
      • b1: Somewhat agree
      • b2: Neither agree nor disagree
      • b3: Somewhat disagree
      • b4: Strongly disagree
    • Constraint Procedure: In anchor item example 1, this means constraining the time slopes, the item-time slopes, and the intercept pairs b0-b0, b1-b1, b2-b3, and b3-b4 to be equal across both items. However, this leaves the neutral "neither agree nor disagree" intercept from Q1.2 unconstrained (freely estimated).
  • Anchor item example 2:  two anchor items are identically worded, but one item has dichotomous (2PL) response categories and the other has Likert-type (graded) response categories.
    • Q2.1: Finland should join NATO (Dichotomous)
      • Yes
      • No
    • Q2.2: Finland should join NATO (Likert-type)
      • Strongly agree
      • Somewhat agree
      • Somewhat disagree
      • Strongly disagree
    • Constraint Procedure: In anchor item example 2, the dichotomous item Q2.1 is estimated with the 2PL model while the Likert-type item Q2.2 is estimated with the graded response model. This means constraining the time slopes and item-time slopes of each item to be equal while leaving the intercept parameters of each item unconstrained (freely estimated).
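For concreteness, partial equality constraints like these can be specified in mirt by parameter number. Below is a minimal sketch; the data layout, item names, and all parameter numbers are hypothetical placeholders you would look up in your own parameter table:

```r
library(mirt)

# resp: response matrix with Time-1 items in cols 1-10 and Time-2 items in
# cols 11-20 (hypothetical layout); 'model' is the two-tier/bifactor spec.
# First pull the starting-values table to look up parameter numbers (parnum):
sv <- mirt(resp, model, itemtype = 'graded', pars = 'values')
subset(sv, item %in% c('Q1.1', 'Q1.2'))  # inspect parnum column

# Suppose (hypothetically) a1 of Q1.1 is parnum 1 and a1 of Q1.2 is parnum 45,
# while Q1.1's d2 is parnum 4 and the matching d3 of Q1.2 is parnum 49.
# Each vector of parnums passed to 'constrain' is estimated as equal, which
# permits cross-named constraints such as d2 == d3:
mod <- mirt(resp, model, itemtype = 'graded',
            constrain = list(c(1, 45),   # slopes equal across time
                             c(4, 49)))  # Q1.1 d2 == Q1.2 d3
```

Any parameter not listed in `constrain` (e.g., Q1.2's neutral threshold) simply remains freely estimated.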
I've been struggling with this for a couple of weeks. Any help or references would be immensely appreciated!

Best regards,
Patrick Edwards

Phil Chalmers

Jul 11, 2024, 11:37:16 AM
to Patrick Edwards, mirt-package
On Fri, Sep 8, 2023 at 12:31 AM Patrick Edwards <edwardsp...@gmail.com> wrote:
Hi Phil!

I really appreciate your tutorials and package. The latter is much easier to work with than Stan. I just have a couple of questions about using unconventional anchor items with a bifactor (longitudinal) model.

I'm a political science PhD student using the mirt package to estimate candidates' ideological positions. My data consists of parliamentary candidate survey responses on a variety of issue positions. I plan to use these survey responses---which were collected for all recent parliamentary elections---to study how candidates' ideological positions shift over time. I'm using a two-tier/bifactor model based on your tutorial for longitudinal data. The survey items vary considerably between election years, since inter-year rescaling was not the intended goal of these candidate surveys. Thus, anchor items often have slight differences in response categories across election years.

I have three broad questions:
  • Is it appropriate to constrain some but not all anchor item parameters to be equal between time periods, including the item-time slopes? In particular, is it okay to leave some or all intercept parameters unconstrained?

Sure, if you believe that the nature of the item changes over time. In the measurement literature this is often called "parameter drift", a kind of unwanted differential item functioning due to changes in the causal mechanisms as a function of time.
 
  • Is this item linking procedure appropriate when each anchor item is estimated using different IRT models? 
That's tricky. Part of me wants to say no, but theoretically, if the response curves are stochastically invariant then this would work nearly as well (the issue is how to guarantee that). Keeping the functional forms of the items constant over time forces the model into an exact invariance position, which is easier to track, but yes, in principle you could work around this if you're willing to make some strong assumptions.
 
  • Would either case bias the latent factor estimates?

Complicated question. The side-stepping answer is "it depends".

 
My fear is that leaving some anchor item parameters unconstrained will bias the time-slope or item-time slope parameter estimates and fail to place both time periods on the same scale. I haven't found answers to these questions in this Google group or on the internet more broadly.

I see what you are saying, but as long as there is enough anchoring I wouldn't worry too much. There will be issues related to bias-efficiency trade-offs, but largely the goal of anchoring is to force the model through a point that allows other aspects of the response data/model interaction to manifest.
 



Here are two illustrative examples of anchor items with differences between their response categories. Please disregard these if you already understand the context behind my questions.
  • Anchor item example 1:  two otherwise identical Likert-type anchor items have a neutral option in one election year but no neutral option in the next election year. 
    • Q1.1: Church and state must be separated (Likert-type + no neutral option).
      • b0: Strongly agree
      • b1: Somewhat agree
      • b2: Somewhat disagree
      • b3: Strongly disagree
    • Q1.2: Church and state must be separated (Likert-type + neutral option).
      • b0: Strongly agree
      • b1: Somewhat agree
      • b2: Neither agree nor disagree
      • b3: Somewhat disagree
      • b4: Strongly disagree
    • Constraint Procedure: In anchor item example 1, this means constraining the time slopes, the item-time slopes, and the intercept pairs b0-b0, b1-b1, b2-b3, and b3-b4 to be equal across both items. However, this leaves the neutral "neither agree nor disagree" intercept from Q1.2 unconstrained (freely estimated).

That's somewhat dangerous, as with ordered-threshold models like the GRM you'll encounter probability-function crossing issues. The better conceptualization would be to imagine there was a "Neither agree nor disagree" option in the first survey, score the b2 and b3 responses as b3 and b4, and then constrain all paired parameters to be equal over time.
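The rescoring step above amounts to shifting Q1.1's two disagree categories up by one so its category numbering lines up with Q1.2. A minimal base-R sketch, with hypothetical column names:

```r
# Recode Q1.1 (0-3 scale, no neutral) onto Q1.2's 0-4 scale, leaving
# category 2 (the neutral slot) unobserved at Time 1:
recode_map <- c(`0` = 0L, `1` = 1L, `2` = 3L, `3` = 4L)
resp$Q1.1 <- recode_map[as.character(resp$Q1.1)]
```

One caveat worth checking: mirt collapses categories that are never observed, so the never-seen neutral slot in Q1.1 may be renumbered internally. In that case the paired thresholds would still need to be equated explicitly by parameter number (via the `constrain` argument) rather than relying on matching category labels alone.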
 
  • Anchor item example 2:  two anchor items are identically worded, but one item has dichotomous (2PL) response categories and the other has Likert-type (graded) response categories.
    • Q2.1: Finland should join NATO (Dichotomous)
      • Yes
      • No
    • Q2.2: Finland should join NATO (Likert-type)
      • Strongly agree
      • Somewhat agree
      • Somewhat disagree
      • Strongly disagree
    • Constraint Procedure: In anchor item example 2, the dichotomous item Q2.1 is estimated with the 2PL model while the Likert-type item Q2.2 is estimated with the graded response model. This means constraining the time slopes and item-time slopes of each item to be equal while leaving the intercept parameters of each item unconstrained (freely estimated).

This is notably more tricky, as strongly/somewhat agree closely corresponds to Yes, and strongly/somewhat disagree corresponds to No. Your best bet in this case, though I don't love it, is probably to create a modified version of Q2.2 that dichotomizes the responses for the purpose of invariance testing, since you don't have the more nuanced agreement information in Q2.1. This is similar to the forced-choice format, though I'd recommend including this information in a write-up as it's really a "post-hoc forced choice" setup, and your readers should be aware of that. Otherwise, this may simply be an item not worth treating as equivalent across time, as the response stimuli differ and there may be other cognitive processes disrupting the equivalence of the question-response pairing anyway (more options = higher cognitive demand, while forced choice invites more random responses around the threshold).
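The "post-hoc forced choice" dichotomization can be sketched in a few lines of base R; the column names here are hypothetical:

```r
# Collapse Q2.2's four ordered categories to match Q2.1's Yes/No coding;
# missing responses stay missing:
agree <- c('Strongly agree', 'Somewhat agree')
resp$Q2.2_dich <- ifelse(is.na(resp$Q2.2), NA_integer_,
                         as.integer(resp$Q2.2 %in% agree))
# Both Q2.1 and the dichotomized Q2.2 can then be fit with itemtype = '2PL'
# and anchored with the usual slope/intercept equality constraints.
```

This discards the agree/disagree intensity information in Q2.2, which is exactly the trade-off to flag in a write-up.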

HTH for now, and apologies for the long delay.

Phil
 
I've been struggling with this for a couple of weeks. Any help or references would be immensely appreciated!

Best regards,
Patrick Edwards
