Exploring the Feedback Quality of an Automated Writing Evaluation System Pigai

Jianmin Gao


This study explored the feedback quality of Pigai, an Automated Writing Evaluation (AWE) system that has been widely applied in English teaching and learning in China. The study focused not only on the diagnostic precision of the feedback but also on students' perceptions of its use in their daily writing practice. Taking 104 university students' final exam essays as the research materials, a paired-samples t-test was conducted to compare the mean number of errors identified by Pigai with that identified by professional teachers. It was found that Pigai feedback could not diagnose the essays as well as the human feedback given by experienced teachers; however, it was quite competent at identifying lexical errors. The analysis of students' perceptions indicated that most students considered Pigai feedback multi-functional but inadequate in identifying collocation errors and offering suggestions on syntactic use. The implications and limitations of the study are discussed at the end of the paper.


Keywords: feedback quality; Automated Writing Evaluation system; Pigai



Copyright (c) 2021 Jianmin Gao

International Journal of Emerging Technologies in Learning (iJET) – eISSN: 1863-0383