AI partner versus human partner: comparing AI-based peer assessment with human-generated peer assessment in examining writing skills

dc.rights.license CC BY eng
dc.contributor.author Al-Obayidi, Liqaa Habeb cze
dc.contributor.author Pikhart, Marcel cze
dc.date.accessioned 2025-12-05T16:03:35Z
dc.date.available 2025-12-05T16:03:35Z
dc.date.issued 2025 eng
dc.identifier.issn 2229-0443 eng
dc.identifier.uri http://hdl.handle.net/20.500.12603/2415
dc.description.abstract This paper delves into the critical role of feedback in students’ peer assessment, highlighting its variability across different educational settings and its profound potential to enhance learning outcomes, particularly within traditional and AI learning environments. It explores the evolution of feedback under technological advancements such as AI, focusing on their application to improving EFL college students’ writing skills through peer assessment. Using a rigorous qualitative-dominant mixed-methods approach that integrates quantitative and qualitative data, the study contrasts the effectiveness of traditional peer assessment (group A) with AI-based assessment using ChatGPT (group B) among fourth-year students from two different contexts, namely the University of Diyala, Iraq, and the University of Hradec Kralove, Czech Republic. While group A shows consistent improvement in writing skills, group B demonstrates slightly lower scores but receives quicker, more accurate, and more precise feedback. The results of the study reveal significant differences between group A, utilizing traditional peer assessment, and group B, employing ChatGPT for AI-based assessment. In both contexts, group A’s students demonstrated consistent improvement in writing skills, with final scores ranging from 7 to 14 out of 15. Group B also showed improvement, albeit slightly less pronounced, with scores ranging from 6 to 12 out of 15. Teachers’ evaluations indicated that while group A benefited from reciprocal learning processes (writing and assessment) and greater social and cognitive engagement, group B received more accurate, quicker, and more comprehensive feedback from AI during the peer assessment process, albeit with less emotional and cognitive engagement. Ultimately, both methods contributed positively to students’ writing skills, highlighting the strengths and trade-offs between human and AI feedback mechanisms.
Correction (publisher's correction of author names and affiliations): https://www.webofscience.com/wos/woscc/full-record/WOS:001528974100001 ; Accession Number WOS:001528974100001 eng
dc.format p. "Article Number: 38" eng
dc.language.iso eng eng
dc.publisher SPRINGERNATURE eng
dc.relation.ispartof Language Testing in Asia, volume 15, issue: 1 eng
dc.subject Artificial intelligence in education eng
dc.subject Feedback eng
dc.subject Foreign language learning eng
dc.subject Peer assessment eng
dc.title AI partner versus human partner: comparing AI-based peer assessment with human-generated peer assessment in examining writing skills eng
dc.type article eng
dc.identifier.obd 43882105 eng
dc.identifier.doi 10.1186/s40468-025-00375-8 eng
dc.publicationstatus postprint eng
dc.peerreviewed yes eng
dc.source.url https://languagetestingasia.springeropen.com/articles/10.1186/s40468-025-00383-8 cze
dc.relation.publisherversion https://languagetestingasia.springeropen.com/articles/10.1186/s40468-025-00383-8 eng
dc.rights.access Open Access eng