Original Research
Rater reliability, the consistency of marking across different raters and marking occasions, is an important component of the overall reliability of test scores. It is essential in performance assessment, such as writing, where the fairness of assessment results can be called into question because of the subjectivity involved in scoring. The present study, part of a larger-scale funded research project, addressed this overlooked area in the Omani context, namely the reliability of scoring the writing section of the final exams at the University of Technology and Applied Sciences (UTAS). More specifically, the study investigated estimates of inter-rater and intra-rater reliability among 10 writing markers who assessed 286 and 156 students' writing scripts, respectively, across four proficiency levels and at three levels of analysis: the whole writing test, Task 1 and Task 2, and the constituent criteria of both tasks. The results indicated a rather high level of inter-rater reliability and a moderate level of intra-rater reliability overall. However, when interpreted in light of the raters' personal and background information, some low estimates highlighted the importance of factors that influence scoring consistency across different assessors and marking occasions.
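The abstract does not state which statistic was used to estimate reliability; the sketch below illustrates one common approach, correlating the scores awarded to the same scripts by two raters (inter-rater) and by one rater on two marking occasions (intra-rater). The score arrays are hypothetical and serve only as a worked illustration, not a description of the study's actual procedure.

```python
# Illustrative sketch only: hypothetical scores, not data from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
true_quality = rng.normal(70, 10, size=50)           # hypothetical "true" script quality
rater_a = true_quality + rng.normal(0, 3, size=50)   # rater A's scores
rater_b = true_quality + rng.normal(0, 3, size=50)   # rater B's scores on the same scripts
rater_a_later = rater_a + rng.normal(0, 2, size=50)  # rater A re-marking the scripts later

inter_r, _ = pearsonr(rater_a, rater_b)        # consistency across raters
intra_r, _ = pearsonr(rater_a, rater_a_later)  # consistency across occasions

print(f"Inter-rater reliability (Pearson r): {inter_r:.2f}")
print(f"Intra-rater reliability (Pearson r): {intra_r:.2f}")
```

Intraclass correlation coefficients or Cohen's kappa are often preferred for rated data; the Pearson correlation is used here only because it keeps the example self-contained.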
Keywords
Intra-rater Reliability; Inter-rater Reliability; UTAS; Writing Test
Acknowledgments
Not applicable.
Funding
Not applicable.
Conflict of Interests
The authors declare that there are no conflicting interests.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. You may view a copy of the Creative Commons Attribution 4.0 International License here: http://creativecommons.org/licenses/by/4.0/