The Short-Term and Long-Term Effects of AWE Feedback on ESL Students’ Development of Grammatical Accuracy

Authors

  • Zhi Li, Paragon Testing Enterprises, Inc./Iowa State University
  • Hui-Hsien Feng, Iowa State University
  • Aysel Saricaoglu, TED University

DOI:

https://doi.org/10.1558/cj.26382

Keywords:

automated writing evaluation, corrective feedback, short-term effects, long-term effects, grammatical accuracy

Abstract

This classroom-based study employs a mixed-methods approach to explore both the short-term and long-term effects of Criterion feedback on ESL students’ development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students at both the intermediate-high and advanced-low levels reduce errors in eight of nine categories from the first drafts to the final drafts of the same papers (short-term effects). However, only one statistically significant error reduction, in the Run-on Sentence category, is found from the first drafts of the first paper to the first drafts of subsequent papers at both proficiency levels (long-term effects). Findings from interviews with the participants reveal students’ perceptions of Criterion feedback and help explain these feedback effects. Implications for more effective use of AWE tools in ESL classrooms are discussed.
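
As a rough illustration of the kind of analysis the abstract describes, the sketch below fits a two-level growth model of error counts across drafts and papers. It is a minimal, hypothetical example in Python using statsmodels: the synthetic data, the variable names (error_count, draft, paper, student_id), and the model specification are assumptions for illustration only and do not reproduce the authors’ actual analysis.

    # Illustrative sketch only: a simple multilevel growth model of error counts,
    # in the spirit of the analysis described in the abstract. Data and variable
    # names are hypothetical, not taken from the study.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Synthetic data: 30 students, 4 papers, 2 drafts per paper.
    rows = []
    for student in range(30):
        baseline = rng.normal(8, 2)            # student-specific starting error rate
        for paper in range(4):
            for draft in (0, 1):               # 0 = first draft, 1 = final draft
                errors = max(0.0, baseline - 2.5 * draft - 0.3 * paper
                             + rng.normal(0, 1))
                rows.append({"student_id": student, "paper": paper,
                             "draft": draft, "error_count": errors})
    df = pd.DataFrame(rows)

    # Random-intercept, random-slope model: does error_count drop from first to
    # final draft within a paper, and across papers' first drafts over time?
    model = smf.mixedlm("error_count ~ draft + paper", df,
                        groups=df["student_id"], re_formula="~draft")
    result = model.fit()
    print(result.summary())

In a specification like this, the fixed effect of draft would correspond to within-paper change (the short-term effect) and the fixed effect of paper to change across successive first drafts (the long-term effect), with a random intercept and draft slope allowed to vary by student.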

Author Biographies

  • Zhi Li, Paragon Testing Enterprises, Inc./Iowa State University
    Zhi Li is a language assessment specialist at Paragon Testing Enterprises, BC, Canada. He holds a PhD in applied linguistics and technology from Iowa State University, USA, and an MA in applied linguistics from Hunan University, China. His research interests include language assessment, computer-assisted language learning, corpus linguistics, and systemic functional linguistics. He has presented his work at a number of professional conferences, such as AAAL, LTRC, and TESOL. His research papers have been published in System and Language Learning & Technology.
  • Hui-Hsien Feng, Iowa State University
    Hui-Hsien Feng is a postdoctoral research associate at Iowa State University. She holds an MA in TESOL from the Ohio State University and a PhD in Applied Linguistics and Technology from Iowa State University. Her research interests include second language writing, automated writing evaluation, English for specific purposes, computational linguistics, and computer-assisted language learning. She has regularly disseminated her research findings at national and international conferences, including the conference of the American Association for Applied Linguistics (AAAL), the Computer Assisted Language Instruction Consortium (CALICO), the Second Language Research Forum (SLRF), the Symposium on Second Language Writing (SSLW), and the International Conference on Computers in Education (ICCE).
  • Aysel Saricaoglu, TED University
    Aysel Saricaoglu (PhD, Applied Linguistics and Technology, Iowa State University) is an assistant professor of English Language Education at TED University. She investigates academic writing with a focus on automated formative assessment and corpus linguistics. Her work has appeared in journals such as Computer Assisted Language Learning and the CALICO Journal.

Published

2017-08-24

Section

Articles

How to Cite

Li, Z., Feng, H.-H., & Saricaoglu, A. (2017). The Short-Term and Long-Term Effects of AWE Feedback on ESL Students’ Development of Grammatical Accuracy. CALICO Journal, 34(3), 355–375. https://doi.org/10.1558/cj.26382