Assessing writing ability: a study on reliability and validity, Hiske Feenstra

The assessment of writing ability is notoriously complex. A typical writing assessment consists of a writing task, the result of which is judged by one or more raters. The scores given by different raters usually diverge, as do the scores given by the same rater when asked to re-evaluate an essay, posing a threat to the reliability of the assessment. Furthermore, rating schemes tend to cover only some linguistic aspects of text quality and often ignore aspects of the writing process, raising questions about the validity of this type of writing assessment.

In this study, three alternative methods to assess writing ability will be evaluated:

  • Anchor essays
    The use of anchor essays as a fixed reference is expected to improve reliability.
  • Revision tests
    Adding an objective test on revision ability will arguably add to the validity of the assessment.
  • Automated essay scoring
    The evaluation of specific linguistic text features is expected to provide an informative measure of text quality.
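
Automated essay scoring systems typically begin by extracting linguistic surface features from the text. As a minimal sketch (not the scoring model used in this project), the features below are common examples: essay length, average sentence length, and lexical diversity (type-token ratio):

```python
# Sketch: extracting simple surface features of the kind automated
# essay scoring systems use as input. Illustrative only.
import re

def text_features(text):
    """Return a few basic linguistic features of an essay text."""
    # Split on sentence-final punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens (letters and apostrophes only).
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "n_words": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

essay = "The cat sat. The cat ran away quickly."
print(text_features(essay))
```

In a full system, such features would be combined (for example, in a regression model trained on human-scored essays) to predict a quality score; the sketch shows only the feature-extraction step.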

By evaluating these three assessment methods, this research project aims to answer the following question: How can writing ability be assessed reliably and validly halfway through and at the end of primary education?

This research project is conducted within PPON (Periodieke Peiling Onderwijsniveau; Periodical Survey Educational Level), a national assessment on primary education executed by Cito.