On the other hand, if you want to include manual weights, and don't necessarily require the likelihood/Bayesian sampling variability information, then you can always fall back to more ad-hoc estimators such as the (weighted) least squares family. In this case you can focus on low-level model-implied information, such as the expected probability/score functions, and compare these to the observed responses so as to find the latent trait value that "best matches" the observations in terms of squared discrepancy. Below I show how to estimate the latent trait for a given response pattern using ULS and WLS, where the weights are chosen to give a more 'expert weight' flavour to the last two items in the test.
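Concretely, both estimators pick the $\theta$ that minimizes

$$\sum_{i=1}^{n} w_i \big(x_i - E(x_i \mid \theta)\big)^2,$$

where $x_i$ is the observed response to item $i$, $E(x_i \mid \theta)$ is the model-implied expected item score, and $w_i$ is the weight given to item $i$; setting every $w_i = 1$ gives ULS: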
############
library(mirt)
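# expand the tabulated LSAT7 response patterns into a full person-by-item matrix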
dat <- expand.table(LSAT7)
mod <- mirt(dat, 1)
# target pattern
pat <- c(0,1,1,0,1)
# ML estimate
fscores(mod, response.pattern = pat, method = 'ML')
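# build the model-implied expected item score functions and evaluate them at a few theta values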
items <- lapply(1:5, function(i) extract.item(mod, i))
score <- function(items, theta) sapply(items, function(x)
  expected.item(x, matrix(theta)))
score(items, theta = 0)
score(items, theta = 1)
score(items, theta = -.25)
#---------------------------
# ULS
uls <- function(theta, pat) sum( (pat - score(items, theta))^2 )
uls(theta = 1, pat=pat)
optimize(uls, interval = c(-15, 15), pat=pat)
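# $minimum is the ULS estimate of theta; $objective is the criterion value at that point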
#---------------------------
# WLS
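# weights giving the last two items double weight, normalized to sum to 1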
w <- c(1,1,1,2,2)
w <- w / sum(w)
wls <- function(theta, pat) sum( w * (pat - score(items, theta))^2 )
wls(theta = 1, pat=pat)
optimize(wls, interval = c(-15, 15), pat=pat)
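#---------------------------
# A possible extension (a sketch): minimize the same WLS criterion for every
# response pattern in the data by looping over the rows; this assumes the
# wls() helper and the weights w defined above are still in the workspace
wls_thetas <- apply(dat, 1, function(p)
  optimize(wls, interval = c(-15, 15), pat = p)$minimum)
head(wls_thetas)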