Assessing Argumentation Using Machine Learning and Cognitive Diagnostic Modeling
In this study, we developed machine learning algorithms to automatically score students’ written arguments and then applied the cognitive diagnostic modeling (CDM) approach to examine students’ cognitive patterns of scientific argumentation. We abstracted three types of skills (i.e., attributes) critical for successful argumentation practice: making claims, using evidence, and providing warrants. We developed 19 constructed-response items, with each item requiring multiple cognitive skills. We collected responses from 932 students in Grades 5 to 8 and developed machine learning algorithmic models to automatically score their responses. We then applied CDM to analyze their cognitive patterns. Results indicate that machine scoring achieved an average machine–human agreement of Cohen’s κ = 0.73 (SD = 0.09). We found that students clustered into 21 groups based on their argumentation performance, each revealing a different cognitive pattern. Within each group, students showed different abilities regarding making claims, using evidence, and providing warrants to justify how the evidence supports a claim. The 9 most frequent groups accounted for more than 70% of the students in the study. Our in-depth analysis of individual students suggests that students with the same total ability score may vary in the specific cognitive skills required to accomplish argumentation. This result illustrates the advantage of CDM in assessing students’ fine-grained cognition during argumentation and other scientific practices.
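As a concrete illustration of the agreement metric reported above, Cohen’s κ compares observed rater agreement against agreement expected by chance. The sketch below computes it in pure Python for two label sequences; the human and machine score vectors are hypothetical examples, not data from this study.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of p_a(label) * p_b(label)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical human vs. machine scores (0 = incorrect, 1 = correct)
human = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
machine = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa(human, machine)
print(round(kappa, 2))  # → 0.58
```

In practice, a study would compute κ per item on a held-out validation set and report the mean and standard deviation across items, as done here.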