Evaluation of construct-irrelevant variance yielded by machine and human scoring of a science teacher PCK constructed response assessment

Title: Evaluation of construct-irrelevant variance yielded by machine and human scoring of a science teacher PCK constructed response assessment
Publication Type: Journal Article
Year of Publication: 2020
Authors: Zhai, X., Haudek, K. C., Stuhlsatz, M. A. M., Wilson, C.
Journal: Studies in Educational Evaluation
Volume: 67
Pagination: 100916
ISSN: 0191-491X
Keywords: Construct-irrelevant variance, Constructed response assessment, Machine learning, Pedagogical content knowledge, Science teacher
Abstract: Machine learning has been frequently employed to automatically score constructed-response assessments. However, there is a lack of evidence of how this predictive scoring approach might be compromised by construct-irrelevant variance (CIV), which is a threat to test validity. In this study, we evaluated machine scores and human scores with regard to potential CIV. We developed two assessment tasks targeting science teacher pedagogical content knowledge (PCK); each task contains three video-based constructed-response questions. 187 in-service science teachers watched the videos, each presenting a given classroom teaching scenario, and then responded to the constructed-response items. Three human experts rated the responses, and the human consensus scores were used to develop machine learning algorithms that predict ratings of the responses. Including the machine as another independent rater alongside the three human raters, we employed the many-facet Rasch measurement model to examine CIV due to three sources: variability of scenarios, rater severity, and rater sensitivity to the scenarios. Results indicate that scenario variability impacts teachers' performance, but the impact depends significantly on the construct of interest. For each assessment task, the machine was the most severe of the four raters, yet it was less sensitive than the human raters to the task scenarios: machine scoring was more consistent and stable across scenarios within each of the two tasks.
URL: http://www.sciencedirect.com/science/article/pii/S0191491X20301644
DOI: 10.1016/j.stueduc.2020.100916
Refereed Designation: Refereed
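To make the pipeline described in the abstract concrete, two hedged sketches follow; neither reproduces the authors' implementation, and the names and data in them are illustrative.

In Linacre's standard many-facet Rasch formulation, one common way to model the facets the abstract names (the paper's exact specification may differ), the log-odds of teacher n receiving score k rather than k−1 from rater j on task scenario i is

\[
\ln\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k,
\]

where \(\theta_n\) is the teacher's PCK measure, \(\delta_i\) the scenario difficulty (the "variability of scenarios" source), \(\alpha_j\) the severity of rater j, human or machine (the "rater severity" source), and \(\tau_k\) the threshold of score category k. Rater sensitivity to the scenarios is typically examined through rater-by-scenario interaction (bias) terms added to this baseline model.

On the scoring side, a minimal Python sketch of the general approach, assuming a scikit-learn text classifier trained on human consensus scores (the response texts, scores, and model choice below are hypothetical stand-ins, not the study's data or algorithm):

```python
# A minimal sketch, not the authors' implementation: fit a text classifier
# to human consensus scores, then treat its predictions as one more rater.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for the real data: constructed-response texts
# and the consensus rating assigned by the human experts.
train_texts = [
    "the teacher elicits prior ideas before the demonstration",
    "the teacher tells students the answer immediately",
    "students compare predictions against the observed result",
    "the lesson moves on without addressing the misconception",
]
consensus_scores = [2, 0, 2, 1]

machine_rater = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # simple lexical features
    LogisticRegression(max_iter=1000),    # one plausible scoring model
)
machine_rater.fit(train_texts, consensus_scores)

# Predicted scores for new responses can then enter the many-facet
# Rasch analysis as a fourth rater alongside the three human experts.
new_response = ["the teacher probes why students expected that outcome"]
print(machine_rater.predict(new_response))
```

The predictions from such a model are what enter the Rasch analysis above as the fourth, machine rater.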

This material is based upon work supported by the National Science Foundation (DUE grants 1438739, 1323162, 1347740, 0736952, and 1022653). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.