Need some help in making use of OSCATS


Uday

Nov 18, 2011, 7:45:32 AM
to OSCATS Discussion Group
Dear Michael,

I have gone through various books on Computerized Adaptive Testing,
and now I have started the implementation work. I have an item bank with
difficulty levels in the range [-2 < b < 2] and ability levels in the same
range. My problem is to conduct a test and display the final ability of each
examinee using OSCATS 0.5. I have added the libraries [.jars and DLLs]
provided, and the GLib files too, and I am able to execute and see the
results of the four tests considered.
But after going through the examples provided with OSCATS, I was
confused about how to make the work successful. So please help me with the
implementation process.

Thanks & Regards
Uday


Michael Culbertson

Nov 18, 2011, 9:59:05 AM
to osc...@googlegroups.com
Hello,

I would recommend using version 0.6 instead of 0.5. There were some
fundamental changes in the basic data structures between the two
versions, not to mention a number of fixed bugs.

I take it you're using Java.

What item response model(s) have you chosen? What item selection
algorithm have you chosen? Would you like maximum likelihood or
expected a posteriori estimation? What will you be using for the
initial ability estimate?
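
In case the distinction helps: maximum likelihood picks the theta that
maximizes the likelihood of the observed responses, while expected a
posteriori (EAP) averages theta over the posterior under a prior, which
pulls the estimate toward the prior mean. A rough plain-Python sketch for
the 1PL model (not OSCATS code; the difficulties and responses are made up):

```python
import math

def p_1pl(theta, b):
    # 1PL probability of a correct response
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_lik(theta, items, resps):
    # Log-likelihood of a 1/0 response pattern
    return sum(math.log(p_1pl(theta, b) if r else 1.0 - p_1pl(theta, b))
               for b, r in zip(items, resps))

def mle(items, resps, grid):
    # Maximum likelihood estimate over a grid of theta values
    return max(grid, key=lambda t: log_lik(t, items, resps))

def eap(items, resps, grid):
    # Expected a posteriori: posterior mean under a standard normal prior
    post = [math.exp(-t * t / 2.0 + log_lik(t, items, resps)) for t in grid]
    return sum(t * w for t, w in zip(grid, post)) / sum(post)

grid = [i / 100.0 for i in range(-400, 401)]
items = [-1.0, 0.0, 1.0]   # made-up difficulties
resps = [1, 1, 0]          # two correct, one incorrect
print("MLE:", mle(items, resps, grid))
print("EAP:", eap(items, resps, grid))
```

Note how the EAP estimate sits between the MLE and the prior mean of 0;
with very short tests (or all-correct patterns, where the MLE diverges),
this shrinkage is often desirable.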

With these questions answered, it should be relatively
straightforward to adapt one of the example programs to your
specifications.

Michael Culbertson

QUERIES Division
Department of Educational Psychology
University of Illinois, Urbana-Champaign


Uday

Nov 20, 2011, 11:59:22 PM
to OSCATS Discussion Group
Dear Michael,
Thanks for the reply. I will certainly follow your suggestion; I have
already started working on version 0.6.

The various parameters/models chosen are:

Item Response Model: One-Parameter Logistic (1PL) Model and
Two-Parameter Logistic (2PL) Model

Item Selection Algorithm: Random selection, as most of the other
algorithms won't work well with multiple items that have the same
optimality metric.

Maximum likelihood is needed; I didn't understand expected a
posteriori estimation.
The initial ability estimate for the pre-test [1PL] is -2 [theta
ranges from -2 to 2], and the resultant ability becomes the input for the
post-test [2PL]. At the end of each test I have to display the
final ability of each user, and after the post-test I have to update the
difficulty level of each item based on the examinee's responses [if
required].

Kindly suggest any modification to the above parameters that would
give me better results.

Thanks & Regards Uday

Michael Culbertson

Nov 22, 2011, 11:12:50 AM
to osc...@googlegroups.com
Hello,

In general, the random item selection method is only for comparison
of the performance of other item selection methods---random item
selection isn't adaptive, and thus defeats the purpose of a CAT.
Unless you have many items with exactly the same item parameters, the
items will all have a different optimality metric (FI, KLI, or
location), so the caveat in the documentation won't apply. If you
have many items with exactly the same item parameters, I wonder how
you got the item parameters: It would be exceedingly unusual to see
parameters calibrated from actual item response patterns come out the
same. If you're generating the item parameters yourself, I wouldn't
recommend creating many items with the same parameters, as this
doesn't mimic real item banks (unless you have a very specific
research aim that requires it).
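
To make the point concrete, here is a small plain-Python sketch of the
Fisher information for 2PL items (made-up parameters, not OSCATS code):
two items tie on the optimality metric only when their parameters are
literally identical.

```python
import math

def fisher_info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta: I = a^2 * p * (1-p)
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

theta = 0.0
bank = [(1.0, -0.5), (1.2, 0.3), (1.0, -0.5)]  # (a, b); items 0 and 2 identical
infos = [fisher_info_2pl(theta, a, b) for a, b in bank]
best = max(range(len(bank)), key=lambda i: infos[i])  # maximum-information item
```

With calibrated parameters, exact ties like items 0 and 2 essentially
never happen, so maximum-information selection picks a unique item.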

Example 1 can be adapted fairly easily to your scenario. You would
need two item banks: One with the 1PL items for the pre-test and one
with the 2PL items for the post-test. For setting the item
parameters, though, I would recommend using the setParamByName()
function, as in example 2, instead of the setParamByIndex() function
used in example 1. The parameter names are "Diff" for the item
location (note the 2PL parameterization in the documentation: you may
need to multiply your item difficulties by the discrimination,
depending on how you obtained your item parameters) and "Discr.Cont.1"
for the discrimination (unless you have changed the name of the
dimension in the latent space).
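
Concretely, the conversion I have in mind is this (a sketch, assuming the
slope-intercept parameterization P = logistic(a*theta - d), so that "Diff"
corresponds to d = a*b):

```python
import math

def p_slope_intercept(theta, a, d):
    # 2PL in slope-intercept form: P = logistic(a*theta - d)
    return 1.0 / (1.0 + math.exp(-(a * theta - d)))

def p_difficulty(theta, a, b):
    # 2PL in the usual difficulty form: P = logistic(a*(theta - b))
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

a, b = 1.5, 0.8   # made-up discrimination and difficulty
d = a * b         # multiply difficulty by discrimination to get "Diff"
```

The two forms give identical probabilities once d = a*b, which is why the
multiplication is needed when your parameters come in difficulty form.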

You'll need to register with each of your two tests AlgSimulate,
AlgEstimate, AlgFixedLength (currently the only stopping criterion
available), and whichever item selection algorithm you go with. By default,
AlgEstimate will use MLE. Then, set the initial value of -2 for the
examinee using setEstTheta() and administer the pre-test. If you
follow the pre-test immediately with the post-test, the estimated
latent ability from the pre-test will be used as the initial value for
the post-test. As in example 1, you can display the final ability
using getEstTheta() and getCont().

As for updating the item parameters: It sounds like you're doing
some kind of online calibration, which isn't built into OSCATS at the
moment, so you'll have to write your own code to calculate and set the
updated parameters. I would think this would be relatively
straightforward using getParamByName() and setParamByName().
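
As a sketch of the kind of update you might write (this is not an OSCATS
API, just a single stochastic-gradient step on the 1PL log-likelihood with
an arbitrary learning rate):

```python
import math

def p_1pl(theta, b):
    # 1PL probability of a correct response
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_difficulty(b, theta, resp, lr=0.1):
    # The gradient of the 1PL log-likelihood with respect to b is (p - resp),
    # so an ascent step moves b down after a correct response (the item
    # looks easier) and up after an incorrect one.
    return b + lr * (p_1pl(theta, b) - resp)

b = 0.0
b_after_correct = update_difficulty(b, theta=-1.0, resp=1)
b_after_incorrect = update_difficulty(b, theta=-1.0, resp=0)
```

In practice you would batch responses and recalibrate with a proper
estimation routine; the point is only that getParamByName() and
setParamByName() give you the read/write hooks for whatever scheme you use.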

In any case, study example 1 (including the comments) for the gist
of creating and running tests, and you can refer to the documentation
for more details about any particular function.

Hope this gets you going in the right direction.

Michael Culbertson

QUERIES Division
Department of Educational Psychology
University of Illinois, Urbana-Champaign

Uday

Dec 1, 2011, 1:20:45 AM
to OSCATS Discussion Group
Dear Michael,

The information provided in the above post helped me a lot in solving
all my problems; I truly appreciate the assistance. I want to set the
response [which is 1/0] of the user's attempt, and I need the ID of the
item that is going to be displayed from the item bank [each item has an
ID].

I have gone through the steps provided for administering the test, but
I didn't understand how to set the user response so that the next item
can be delivered. In example 1 the test is executed in a single
shot when test[j].administer(examinees[i]) is fired.

Please explain the steps mentioned below:

The test proceeds in the following order:
1) The examinee's item/response vectors are reset.
2) The OscatsTest initialize signal is emitted.
3) The item eligibility vector is initialized with the current hinted
value (default: all items).
4) The OscatsTest filter, select, and approve signals are emitted.
5) If the approval handler returns TRUE, go to step 3 (not more than
OscatsTest:itermax_select times).
6) The OscatsTest administer signal is emitted (which should add the
item/response pair to the examinee as necessary).
7) The OscatsTest administered and stopcrit signals are emitted.
8) If the stopping criterion has not been met, go to step 3 (not more
than OscatsTest:itermax_items times).
9) The OscatsTest finalize signal is emitted.
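
To check my understanding of this order, I tried mimicking the loop in
plain Python (control flow only, with toy callbacks standing in for the
real signals; this is not the actual OscatsTest class):

```python
def run_test(select_item, approve, administer, stop_crit,
             bank_size=10, itermax_select=5, itermax_items=50):
    # Control-flow sketch of steps 3-9 above (toy version)
    administered = []                           # examinee's item/response log
    for _ in range(itermax_items):              # step 8 bound
        eligible = set(range(bank_size))        # step 3: all items eligible
        for _ in range(itermax_select):         # step 5 bound
            item = select_item(eligible, administered)  # step 4: select
            if not approve(item):               # approval FALSE: accept item
                break                           # approval TRUE re-selects
        resp = administer(item)                 # step 6: obtain the response
        administered.append((item, resp))       # step 7: administered signal
        if stop_crit(administered):             # step 7: stopcrit signal
            break                               # step 9: finalize
    return administered

result = run_test(
    select_item=lambda elig, done: min(elig - {i for i, _ in done}),
    approve=lambda item: False,                 # accept every selection
    administer=lambda item: 1,                  # pretend every response correct
    stop_crit=lambda done: len(done) >= 3)      # fixed test length of 3
```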


Thanks & Regards
Uday

Michael Culbertson

Dec 1, 2011, 9:49:47 AM
to osc...@googlegroups.com
Hello,

By default, OSCATS is equipped for simulating CAT. As such, the
test administration can proceed all by itself. If you want to specify
the examinee responses through some other means, you will need to
write your own administration class (a class that connects to the
OscatsTest::administer signal). For an example of this and a
description of how the OscatsTest object operates, see:

http://hdl.handle.net/2142/27706

as well as example 4 provided with OSCATS 0.6.


Michael Culbertson

QUERIES Division
Department of Educational Psychology
University of Illinois, Urbana-Champaign

Nimje Himanshu Hari

Apr 28, 2015, 4:42:09 AM
to osc...@googlegroups.com
Hi Michael,

Thanks for the information you have provided in this thread; it has been quite helpful.

With the help of the link you provided, I was able to build a CAT in Python, where I used the CustomSimulateAlg() class to define a function to be called when the 'administer' signal is received. Following is the code:
    def administer(self, examinee, item):
        # Read the examinee's response from the terminal (Python 2 raw_input)
        resp = int(raw_input("Enter 1 for correct, 0 for incorrect: "))
        resp = resp if resp == 1 else 0   # coerce anything other than 1 to 0
        examinee.add_item(item, resp)
        return resp

    def __reg__(self, test):
        test.connect_object("administer", CustomSimulateAlg.administer, self)

While in the above function I use the terminal to enter the examinee responses, I would need a bit more help understanding how I could incorporate code in this function to call a view, take the response from the client, and then pass the response to the examinee.add_item function.
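
Roughly, what I imagine is something like the following, where get_response
is a placeholder for whatever view/client call I would plug in (this part
is just my sketch, not tested against OSCATS):

```python
class CustomAdminAlg(object):
    def __init__(self, get_response):
        # get_response: a callable taking an item and returning 1 or 0,
        # e.g. a function that renders a view, waits for the client's
        # answer, and returns it parsed
        self.get_response = get_response

    def administer(self, examinee, item):
        # Same handler shape as above, but the response source is injected
        resp = 1 if self.get_response(item) == 1 else 0
        examinee.add_item(item, resp)   # record the item/response pair
        return resp
```

That way, swapping the terminal for a web client would only mean passing a
different callable when the algorithm is constructed, instead of rewriting
the handler itself.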

Any help in this regard would be appreciated.

Thanks,
Himanshu