Computerized adaptive text-based testing in psychological and educational measurement, Qiwei He

Computerized adaptive testing (CAT) has become increasingly popular during the past decade in both educational and psychological measurement. The flexibility of CAT, combined with the possibilities of internet-based testing, appears attractive for many operational testing programs. In CAT, the items are adapted to the level of the respondent; that is, the difficulty of the items is matched to the respondent's estimated ability. If the performance on previous items has been rather weak, an easier item will be presented next, and if the performance on previous items has been rather strong, a more difficult item will be selected for administration. The main advantage of this approach is that the test length can be reduced considerably without losing measurement precision. In addition, respondents are administered items at their specific ability level, which implies that they will neither get bored by too easy items nor frustrated by too difficult ones.
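The select-and-update loop described above can be sketched in a few lines of code. The following is a minimal illustration only, not part of the project itself: it assumes a Rasch (1PL) response model, selects the unused item whose difficulty lies closest to the current ability estimate (which maximizes information under the Rasch model), and refines the ability estimate with Newton-Raphson steps. All function names and the clamping range are illustrative choices.

```python
import math

def prob_correct(theta, b):
    # Rasch (1PL) model: probability of a correct response given
    # ability theta and item difficulty b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_item(theta, bank, used):
    # Pick the unused item whose difficulty is closest to the current
    # ability estimate; under the Rasch model this is the item with
    # maximum Fisher information
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

def update_theta(theta, responses):
    # Newton-Raphson steps toward the maximum-likelihood ability
    # estimate, given (difficulty, score) pairs observed so far;
    # theta is clamped to keep the estimate finite when all responses
    # are correct or all incorrect
    for _ in range(10):
        grad = sum(u - prob_correct(theta, b) for b, u in responses)
        info = sum(p * (1.0 - p)
                   for b, _ in responses
                   for p in [prob_correct(theta, b)])
        if info == 0.0:
            break
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta
```

A test administration would then alternate `select_item` and `update_theta` until a stopping rule (e.g. a target standard error or a maximum test length) is met.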

The technology of CAT has been developed primarily for multiple-choice items. For these items, both the correct and the incorrect answers are precisely defined, and automated scoring can be implemented on the fly. For other item types, the application of CAT is less straightforward. For open-ended questions, for example, automated scoring rules can be much more complicated.

In this PhD project, the focus is on open-ended questions, for which more complicated automated scoring algorithms have to be developed. Applications lie within both psychological and educational measurement. Initially, the project will focus on the assessment of post-traumatic stress disorder (PTSD).

The project is funded by Stichting Achmea Slachtoffer en Samenleving.