Utilizing Deep Learning AI to Analyze Scientific Models: Overcoming Challenges

Abstract

Scientific modeling is a vital educational practice that helps students apply scientific knowledge to real-world phenomena. Despite advances in AI, challenges in accurately assessing such models persist, primarily due to the complexity of cognitive constructs and data imbalances in educational settings. This study addresses these challenges by employing diverse analytic strategies, including the Synthetic Minority Over-sampling Technique (SMOTE), aimed at enhancing fairness and efficacy in automated scoring systems. We analyze the impact of these strategies through a robust methodology that combines tenfold cross-validation with independent testing phases to ensure the reliability of AI assessments. Our findings highlight the effectiveness of deep learning AI in mirroring human judgment, with improvements in accuracy, precision, recall, and F1 scores across varied model assessments. Specifically, applying SMOTE significantly improved scoring fairness for minority-class instances, which are often underrepresented in educational datasets. The study also examines discrepancies between AI and human evaluations, particularly in interpreting creatively expressed student models, revealing areas where AI technologies require further enhancement to better align with human evaluative standards. This work lays a foundation for future research on advanced AI techniques and training strategies, promoting fair and supportive feedback mechanisms that enhance student learning and creativity. By advancing AI applications in science education, this research addresses essential challenges in the automated analysis of complex student responses and supports broader academic goals.
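
The abstract names SMOTE and tenfold cross-validation with an independent test phase; as a minimal illustrative sketch only (not the authors' actual deep learning scoring pipeline), a workflow of this kind can be approximated with imbalanced-learn and scikit-learn, applying SMOTE inside each training fold so synthetic minority examples never leak into validation or test data. The dataset, classifier, and all parameter values below are placeholder assumptions.

# Illustrative sketch: SMOTE inside stratified 10-fold CV plus an
# independent held-out test set. Placeholder data and classifier only.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate, train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced placeholder data standing in for scored student models
# (roughly 10% of responses in the minority rubric category).
X, y = make_classification(n_samples=2000, n_features=50,
                           weights=[0.9, 0.1], random_state=0)

# Independent test split, untouched by oversampling or model selection.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# In an imbalanced-learn Pipeline, SMOTE resampling is applied only when
# fitting on the training folds, never when scoring validation data.
pipeline = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("clf", MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(pipeline, X_train, y_train, cv=cv,
                        scoring=["accuracy", "precision_macro",
                                 "recall_macro", "f1_macro"])
for metric, values in scores.items():
    if metric.startswith("test_"):
        print(metric, round(values.mean(), 3))

# Final check on the independent test set.
pipeline.fit(X_train, y_train)
print("held-out accuracy:", round(pipeline.score(X_test, y_test), 3))

Keeping the oversampling step inside the cross-validation pipeline, rather than resampling the whole dataset up front, is what preserves an honest estimate of minority-class precision, recall, and F1.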

Authors

Tingting Li, Kevin Haudek, Joseph Krajcik

Year of Publication

2025

Journal

Journal of Science Education and Technology

Date Published

April 2025

ISSN Number

1573-1839

URL

https://doi.org/10.1007/s10956-025-10217-0

DOI

10.1007/s10956-025-10217-0