CALICO Journal, Vol 28, No 1 (2011)

The Nature of Automated Essay Scoring Feedback

Semire Dikli
Issued Date: 7 Aug 2014


The purpose of this study is to explore the nature of feedback that English as a Second Language (ESL) students received on their writing from either an automated essay scoring (AES) system or their teacher. The participants were 12 adult ESL students attending an intensive English center at a university in Florida. The students' drafts were analyzed in depth from a case-study perspective. While document (essay) analysis was the main data collection method, observations and interviews provided crucial information about the context in which the students wrote and the nature of each type of feedback they received. The results revealed that the AES feedback and the written teacher feedback (TF) differed in nature. Whereas the written TF was shorter and more focused, the AES feedback was quite long, generic, and redundant. The findings suggest that AES systems are not yet ready to meet the needs of ESL or English as a Foreign Language (EFL) students. Developers need to improve these programs' feedback capabilities for nonnative English-speaking students, that is, provide less redundant, shorter, and linguistically simpler feedback, as well as feedback on short, off-topic, or repetitious essays.


DOI: 10.11139/cj.28.1.99-134


Equinox Publishing Ltd - 415 The Workstation 15 Paternoster Row, Sheffield, S1 2BX United Kingdom