How can computers analyze writing?
Computers do not, of course, understand student writing the way a human does, but they can “learn” to identify and categorize student writing much as an expert would. Our tools extract terms and phrases from student responses and use them as variables in machine learning (ML) algorithms. This forms the basis of the Constructed Response Classifier (CRC) tool, which you can use at this site.
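To give a flavor of that extraction step, here is a toy sketch in Python. It is not the actual CRC pipeline; the example responses and the tiny vocabulary of terms and two-word phrases are invented for illustration.

```python
# Toy sketch (not the actual CRC pipeline): count how often each
# vocabulary term or two-word phrase appears in a response, producing
# a numeric feature vector an ML algorithm could use as variables.
from collections import Counter

def extract_features(response, vocabulary):
    """Count occurrences of each vocabulary term or two-word phrase."""
    words = response.lower().split()
    phrases = [" ".join(pair) for pair in zip(words, words[1:])]
    counts = Counter(words + phrases)
    return [counts[term] for term in vocabulary]

# Invented examples: two short responses and a tiny vocabulary.
vocabulary = ["sunlight", "glucose", "make glucose", "soil"]
r1 = extract_features("Plants use sunlight to make glucose", vocabulary)
r2 = extract_features("The plant eats soil to grow", vocabulary)
print(r1)  # [1, 1, 1, 0]
print(r2)  # [0, 0, 0, 1]
```

Each response becomes a row of numbers, one column per term or phrase; those rows are the variables the ML algorithms learn from.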
How does the scoring work?
The ML models are built by training the computer on hundreds of responses that experts have already scored. This allows the computer to “learn” important patterns in the text of student responses using information from discipline experts! The ML model is actually making a prediction: a prediction of how an expert would assign a student response to a category within a scoring rubric. For the models available here, the computer's predictions agree with expert scores about as well as a second expert scoring the same responses would.
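The train-then-predict cycle can be illustrated with a toy classifier. This is an assumed sketch, not the actual CRC model (real systems use far more training data and stronger algorithms), but it shows the key idea: expert-scored examples teach the program to predict the category an expert would assign to a new response.

```python
# Toy sketch of train-then-predict (not the actual CRC model).
# "Training" here is a simple nearest-centroid rule: average the
# feature vectors of each expert-assigned category, then assign a new
# response to the category whose average it sits closest to.
from collections import Counter

def featurize(text, vocab):
    """Turn a response into word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

# Invented training set: responses already scored by experts.
training = [
    ("plants make glucose from sunlight", "scientific"),
    ("sunlight lets plants make their own food", "scientific"),
    ("plants eat soil to get food", "naive"),
    ("the plant takes food from the dirt", "naive"),
]
vocab = ["sunlight", "glucose", "make", "soil", "dirt", "eat"]

# Training: compute one average feature vector (centroid) per category.
centroids = {}
for label in {"scientific", "naive"}:
    vecs = [featurize(t, vocab) for t, l in training if l == label]
    centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]

def predict(text):
    """Predict the rubric category an expert would assign."""
    v = featurize(text, vocab)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(v, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(predict("plants use sunlight to make sugar"))  # scientific
```

A new, unscored response is classified by comparing its features to the patterns learned from the expert-scored examples; with enough training data, such predictions can match expert judgment closely.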
What type of score is assigned to responses?
When we say ‘score’ we really mean: assign each student response to pre-defined rubric categories. We do not give a score in the sense of points or grades. Some models predict placement of each student response into a single category, either on a scale from naive to scientific or at an ordered level of complexity (what we call a holistic score). Other models categorize responses by the multiple specific ideas that students express, for example the various inputs, products, and processes of photosynthesis (what we call an analytic score). The interactive report from each model includes the basics of the scoring rubric applied by that model.
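The difference between the two score types comes down to how many categories one response can receive. The category names and keyword checks below are invented for illustration, but the shape of the output reflects the distinction.

```python
# Hypothetical illustration of the two score types.
response = "plants use sunlight and water to make glucose"

# Holistic: the whole response maps to ONE category on an ordered scale.
holistic = "scientific"

# Analytic: each specific idea is detected independently, so one
# response can fall into several categories at once.
analytic = {
    "light_as_input": "sunlight" in response,
    "water_as_input": "water" in response,
    "glucose_as_product": "glucose" in response,
    "soil_as_food": "soil" in response,
}
print(holistic)
print(sorted(idea for idea, present in analytic.items() if present))
```

A holistic model returns one label per response; an analytic model returns a yes/no decision for each rubric idea, so its report lists every idea a response expresses.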
Want to know even more?
Check out some of our research publications, which explain the scoring tool in more detail.