Bernard Veldkamp, University of Twente: Item selection, exposure control, and test specifications in CAT

The process of Computerized Adaptive Testing (CAT) has five basic steps: (1) an initial ability estimate is made for the candidate, (2) an item is selected, (3) the item is administered, (4) the ability estimate is updated, and (5) steps 2-4 are repeated until a stopping criterion has been met. Although seemingly straightforward, a number of issues have to be dealt with at each of these steps. In this workshop we will focus on step (2) of the algorithm, the selection of the next item in computerized adaptive testing. We will deal with three important issues:

1. Which item selection criterion to apply.
2. How to deal with exposure control.
3. How to deal with test specifications.

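Before turning to these issues, it may help to have the basic loop in view. Below is a minimal, purely illustrative sketch of the five steps under a 2PL model; the simulated 200-item bank, the maximum-information selection rule, the EAP ability update, and the fixed test length of 20 items are assumptions made only for the sake of the example.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_estimate(responses, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate with a standard normal prior, evaluated on a grid."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for a, b, u in responses:
        p = p_2pl(grid, a, b)
        like *= p ** u * (1.0 - p) ** (1 - u)
    post = prior * like
    return float(np.sum(grid * post) / np.sum(post))

rng = np.random.default_rng(1)
# Illustrative 200-item bank: discriminations a and difficulties b.
bank = np.column_stack([rng.uniform(0.8, 2.0, 200), rng.normal(0.0, 1.0, 200)])

true_theta = 0.7                 # simulated examinee
theta_hat = 0.0                  # step 1: initial ability estimate
administered, responses = set(), []

for _ in range(20):              # step 5: stop after a fixed test length of 20 items
    info = [fisher_info(theta_hat, a, b) if i not in administered else -np.inf
            for i, (a, b) in enumerate(bank)]
    item = int(np.argmax(info))                       # step 2: select the item (maximum information)
    administered.add(item)
    a, b = bank[item]
    u = int(rng.random() < p_2pl(true_theta, a, b))   # step 3: administer (simulated response)
    responses.append((a, b, u))
    theta_hat = eap_estimate(responses)               # step 4: update the ability estimate

print(f"final estimate: {theta_hat:.2f} (true value {true_theta})")
```
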
Several item selection rules have been proposed in the literature to address the first issue. In the workshop, the focus will be on maximum Fisher information, Fisher interval information, Kullback-Leibler information, and several Bayesian item selection criteria. How do they differ, what are their advantages and disadvantages, and which of them should be applied?

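To make the comparison concrete, the sketch below scores a single 2PL item under three of these rules: Fisher information at the current ability estimate, Fisher information integrated over an interval around the estimate, and a Kullback-Leibler index integrated over the same interval. The interval half-width delta, the grid resolution, and the item parameters are illustrative assumptions.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def max_fisher(theta_hat, a, b):
    """Fisher information of the item at the current ability estimate."""
    p = p_2pl(theta_hat, a, b)
    return a ** 2 * p * (1.0 - p)

def interval_fisher(theta_hat, a, b, delta=0.5, n_points=41):
    """Fisher information integrated over an interval around the estimate
    (a simple grid approximation of the interval information criterion)."""
    grid = np.linspace(theta_hat - delta, theta_hat + delta, n_points)
    p = p_2pl(grid, a, b)
    return float(np.mean(a ** 2 * p * (1.0 - p)) * 2 * delta)

def kl_index(theta_hat, a, b, delta=0.5, n_points=41):
    """Kullback-Leibler divergence between the item response distributions at
    theta and at the current estimate, integrated over the same interval."""
    grid = np.linspace(theta_hat - delta, theta_hat + delta, n_points)
    p0, p = p_2pl(theta_hat, a, b), p_2pl(grid, a, b)
    kl = p0 * np.log(p0 / p) + (1.0 - p0) * np.log((1.0 - p0) / (1.0 - p))
    return float(np.mean(kl) * 2 * delta)

# The same item scored under the three criteria at a current estimate of 0.0:
a, b = 1.4, 0.3
print(max_fisher(0.0, a, b), interval_fisher(0.0, a, b), kl_index(0.0, a, b))
```
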
The Sympson-Hetter method is the most commonly applied method for exposure control. This method will be introduced and several modifications of it will be discussed. Alternatives such as the alpha-stratified method, the progressive method, and the item eligibility method will also be compared.

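As a preview of that discussion, the sketch below separates the two ingredients of the Sympson-Hetter method: a probabilistic filter applied at selection time, and an iterative calibration of the exposure control parameters K so that no item is administered more often than a target rate r_max. The fallback rule when all candidates are rejected and the example numbers are illustrative assumptions.

```python
import numpy as np

def sympson_hetter_select(ranked_items, K, rng):
    """Walk down the list of candidates (ranked by the selection criterion)
    and administer item i with probability K[i]; otherwise try the next one."""
    for item in ranked_items:
        if rng.random() < K[item]:
            return item
    return ranked_items[-1]          # fallback if every candidate is rejected

def update_exposure_parameters(selection_rates, r_max):
    """One iteration of the Sympson-Hetter calibration: items whose simulated
    selection rate exceeds the target maximum exposure rate r_max get K < 1."""
    rates = np.maximum(np.asarray(selection_rates, dtype=float), 1e-12)  # avoid division by zero
    return np.minimum(1.0, r_max / rates)

# Calibration is iterative: simulate many CATs with the current K, record how
# often each item is selected, update K, and repeat until the observed
# exposure rates stay below r_max.
rng = np.random.default_rng(0)
K = update_exposure_parameters([0.60, 0.20, 0.05, 0.10, 0.05], r_max=0.25)
print(K)                                               # item 0 gets K = 0.25 / 0.60 ~ 0.42
print(sympson_hetter_select([2, 0, 4, 1, 3], K, rng))  # item 2 has K = 1.0
```
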
Finally, several approaches for implementing test specifications in the item selection algorithm will be presented to address the third issue.

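One simple way in which specifications can enter the item selection step is shown below: a constrained version of maximum-information selection in which an item is eligible only while its content area is still below its quota. The bookkeeping of content areas and quotas is an illustrative assumption, and this is only one of the several approaches that will be presented.

```python
def constrained_select(info, content_area, counts, quota, administered):
    """Maximum-information selection restricted to items whose content area
    has not yet reached its maximum number of items in the test."""
    eligible = [i for i in range(len(info))
                if i not in administered
                and counts[content_area[i]] < quota[content_area[i]]]
    if not eligible:
        # In practice the quotas and test length are set so that this does not
        # happen; fall back to unconstrained selection here.
        eligible = [i for i in range(len(info)) if i not in administered]
    return max(eligible, key=lambda i: info[i])

# Example: a 6-item bank, two content areas, and the specification
# "at most 2 items from area A and at most 3 items from area B".
info = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]            # information at the current estimate
content_area = ["A", "A", "A", "B", "B", "B"]
quota = {"A": 2, "B": 3}
counts = {"A": 2, "B": 0}                        # the quota for area A is already met
print(constrained_select(info, content_area, counts, quota, administered=set()))  # -> 3
```
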
This workshop will be a mix of theory, discussion, sharing experiences, and exercises. It will deal with some of the more advanced issues in computerized adaptive testing. In order to participate, you need to have some basic knowledge of Item Response Theory (IRT) models, such as the Rasch model, the 2PLM, the 3PLM, and polytomous IRT models, as well as an understanding of the basic principles underlying CAT.