Show simple item record

dc.contributor.advisor	Link, Stephanie
dc.contributor.author	Koltovskaia, Svetlana
dc.date.accessioned	2023-04-05T16:20:59Z
dc.date.available	2023-04-05T16:20:59Z
dc.date.issued	2022-07
dc.identifier.uri	https://hdl.handle.net/11244/337290
dc.description.abstract	The purpose of this dissertation is to investigate automated writing evaluation (AWE) from both system- and user-centric perspectives. The system-centric research focused on the error-correction/detection performance of the AWE system Grammarly. The study was based on fifty-three argumentative essay drafts written by undergraduate students enrolled in a second language (L2) writing course. Grammarly's feedback on those essay drafts was measured using precision (accuracy) and recall (system coverage) and compared to human annotators' feedback. Results revealed that Grammarly's precision rates for flagging and correction (92% and 91%, respectively) exceeded a benchmark of 80%, indicating that Grammarly was accurate in detecting and correcting common L2 errors. However, Grammarly's recall rate was low (51%), meaning that Grammarly missed roughly half of the errors found by human annotators. Two user-centric studies focused on teachers and students. The first study explored six postsecondary L2 writing teachers' use and perceptions of Grammarly as a complement to their feedback. The participants' feedback was analyzed to understand Grammarly's impact on their feedback activity. The participants then took part in semi-structured interviews exploring their perceptions of Grammarly as a supplementary tool. Findings revealed that despite using Grammarly to complement their feedback, teachers still provided feedback on sentence-level issues. Overall, the majority of teachers were positive about using Grammarly to complement their feedback, notwithstanding its limitations. The second study explored two English as a second language (ESL) college students' behavioral, cognitive, and affective engagement with Grammarly's feedback when revising a final draft. Behavioral engagement was explored through the analysis of QuickTime-based screencasts of students' Grammarly usage. Cognitive and affective engagement were measured through the analysis of students' comments during stimulated recall of the aforementioned screencasts and semi-structured interviews. According to the findings, one student showed greater cognitive engagement through his questioning of automated written corrective feedback (AWCF) but did little to verify the accuracy of the feedback, which resulted in moderate changes to his draft. The other student's overreliance on AWCF indicated more limited cognitive engagement, which led to blind acceptance of the feedback. Nevertheless, this also resulted in moderate changes to her draft. The dissertation offers implications for the meaningful use of AWE in L2 writing classrooms.
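The precision and recall metrics reported in the abstract can be illustrated with a minimal sketch. The counts below are hypothetical, chosen only so the resulting rates match the reported 92% flagging precision and 51% recall; they are not the study's actual data.

```python
# Illustrative precision/recall computation for AWE feedback evaluation.
# Counts are hypothetical, not taken from the dissertation's dataset.
true_positives = 46   # errors flagged by the system and confirmed by human annotators
false_positives = 4   # system flags that annotators judged incorrect
false_negatives = 44  # annotator-identified errors the system missed

# Precision: of everything the system flagged, how much was correct (accuracy).
precision = true_positives / (true_positives + false_positives)

# Recall: of all real errors, how many the system caught (system coverage).
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.0%}, recall = {recall:.0%}")
# prints "precision = 92%, recall = 51%"
```

A high-precision, low-recall profile like this one means the tool's flags can generally be trusted, but its silence cannot: absence of a flag does not imply absence of error.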
dc.format	application/pdf
dc.language	en_US
dc.rights	Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title	Automated writing evaluation for formative second language assessment: Exploring performance, teacher use, and student engagement
dc.contributor.committeeMember	Cheng, An
dc.contributor.committeeMember	Sicari, Anna
dc.contributor.committeeMember	Thompson, Penny
osu.filename	Koltovskaia_okstate_0664D_17733.pdf
osu.accesstype	Open Access
dc.type.genre	Dissertation
dc.type.material	Text
dc.subject.keywords	automated writing evaluation
dc.subject.keywords	Grammarly
dc.subject.keywords	L2 writing
dc.subject.keywords	precision and recall
dc.subject.keywords	student engagement
dc.subject.keywords	teachers' perceptions
thesis.degree.discipline	English
thesis.degree.grantor	Oklahoma State University


