Using automated analysis to assess middle school students' competence with scientific argumentation

Abstract

Argumentation is fundamental to science education, both as a prominent feature of scientific reasoning and as an effective mode of learning, a perspective reflected in contemporary frameworks and standards. The successful implementation of argumentation in school science, however, requires a paradigm shift in science assessment from the measurement of knowledge and understanding to the measurement of performance and knowledge in use. Performance tasks requiring argumentation must capture the many ways students can construct and evaluate arguments in science, yet such tasks are expensive and resource-intensive to score. In this study we explore how machine learning text classification techniques can be applied to develop efficient, valid, and accurate constructed-response measures of students' competency with written scientific argumentation that are aligned with a validated argumentation learning progression. Data come from 933 middle school students in the San Francisco Bay Area and are based on three sets of argumentation items in three different science contexts. The findings demonstrate that we were able to develop computer scoring models that achieve substantial to almost perfect agreement between human-assigned and computer-predicted scores. Model performance was slightly weaker for harder items targeting higher levels of the learning progression, largely due to the linguistic complexity of these responses and the sparsity of higher-level responses in the training data set. Comparing the efficacy of different scoring approaches revealed that breaking down students' arguments into multiple components (e.g., the presence of an accurate claim or providing sufficient evidence), developing computer models for each component, and combining scores from these analytic components into a holistic score produced better results than holistic scoring approaches. However, this analytic approach was found to be differentially biased when scoring responses from English learner (EL) students as compared to responses from non-EL students on some items. Differences in the severity between human and computer scores for EL students across these approaches are explored, and potential sources of bias in automated scoring are discussed.
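The article does not publish its scoring pipeline; the following is a minimal illustrative sketch, assuming a scikit-learn text-classification setup and quadratic-weighted Cohen's kappa (a common basis for "substantial to almost perfect" agreement claims), of how analytic component models might be trained and their predictions combined into a holistic score for comparison against human ratings. All response texts, component labels, and model choices below are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: hypothetical data and model choices, not the authors' pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Hypothetical student responses with human-assigned analytic labels
# (1 = component present, 0 = absent) for two argument components.
responses = [
    "The ice melted because the water was warmer, as shown in trial 2.",
    "I think it just melted.",
    "The data show the mass stayed the same, so matter was conserved.",
    "Because it changed.",
] * 25  # repeated so the toy training set is large enough to fit

labels = {
    "accurate_claim":      [1, 0, 1, 0] * 25,
    "sufficient_evidence": [1, 0, 1, 0] * 25,
}

# Analytic approach: train one text classifier per argument component.
component_models = {}
for component, y in labels.items():
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(responses, y)
    component_models[component] = model

def holistic_score(text: str) -> int:
    """Combine component predictions into a holistic score by summing them."""
    return sum(int(m.predict([text])[0]) for m in component_models.values())

computer_scores = [holistic_score(r) for r in responses]
human_scores = [a + b for a, b in zip(labels["accurate_claim"],
                                      labels["sufficient_evidence"])]

# Quadratic-weighted kappa is a standard way to report human-computer agreement.
print("Weighted kappa:", cohen_kappa_score(human_scores, computer_scores,
                                           weights="quadratic"))
```

In practice such models would be trained and evaluated on held-out human-scored responses, and agreement would be examined separately for subgroups (e.g., EL and non-EL students) to check for differential scoring severity of the kind the abstract describes.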

Author

Christopher Wilson, Kevin Haudek, Jonathan Osborne, Zoë Bracey, Tina Cheuk, Brian Donovan, Molly Stuhlsatz, Marisol Santiago, Xiaoming Zhai

Keywords

Assessment, machine learning, automated analysis, bias, argumentation

Year of Publication

2024

Journal

Journal of Research in Science Teaching

Volume

61

Start Page

38

Issue

1

Date Published

01/2024

ISSN Number

0022-4308

URL

https://doi.org/10.1002/tea.21864

DOI

10.1002/tea.21864