Show simple item record

dc.contributor.advisor  Barnes, Laura
dc.contributor.author  Keener, Ashley
dc.date.accessioned  2020-09-09T20:49:13Z
dc.date.available  2020-09-09T20:49:13Z
dc.date.issued  2020-05
dc.identifier.uri  https://hdl.handle.net/11244/325442
dc.description.abstract  In order to quantify the degree of agreement between raters when classifying subjects into predefined categories, inter-rater reliability (IRR) experiments are often conducted in the medical field. Originally, percent agreement was used to calculate the extent of agreement between raters; however, it was criticized for not taking chance agreement into account. Chance agreement refers to the propensity for raters to guess when classifying nondeterministic subjects into categories. In other words, raters can be certain that some subjects are textbook and are associated with a true category membership, whereas other subjects are ambiguous and require true random guessing (Schuster & Smith, 2002). A commonly used chance-corrected agreement coefficient has been Cohen's Kappa. Limitations have been associated with the Kappa statistic, such as its tendency to overcorrect for chance agreement in the presence of high prevalence rates (i.e., highly skewed data). Due to such issues, Gwet (2014) proposed a new chance-corrected agreement coefficient called the AC1 statistic. The purpose of this study was to examine Cohen's Kappa and Gwet's AC1 with respect to prevalence rates and rater uncertainty using a newly developed classification system for mass shooters. A new methodology for identifying textbook and ambiguous subjects was demonstrated. Specifically, the purposes of the present study were (1) to examine how Cohen's Kappa and Gwet's AC1 are affected by prevalence rates and (2) to determine whether there are differences in the observable discrepancies between Cohen's Kappa and Gwet's AC1 for subjects classified as textbook compared to subjects classified as ambiguous. Findings indicated that observable discrepancies between Cohen's Kappa and Gwet's AC1 could be seen in both the textbook and ambiguous conditions. Specifically, analyses suggested that percent agreement was likely to overestimate the extent of true agreement among raters, whereas Cohen's Kappa was likely to underestimate it. The ambiguous analysis revealed larger discrepancies between Gwet's AC1 and Cohen's Kappa in the presence of highly skewed data; in the textbook analysis, the discrepancies appeared to depend more on the number of observable disagreements between raters. Recommendations for practice and future research are discussed.
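
For readers unfamiliar with the three coefficients the abstract compares, the following Python sketch (not part of the record or the dissertation; the function name and example data are hypothetical) shows how percent agreement, Cohen's Kappa, and Gwet's AC1 are computed for two raters, using the standard formulas from Cohen (1960) and Gwet (2014).

    # Minimal sketch: percent agreement, Cohen's Kappa, and Gwet's AC1 for two raters.
    # Assumes complete ratings and at least two categories present in the data.
    from collections import Counter

    def agreement_coefficients(rater1, rater2):
        """Return (percent agreement, Cohen's Kappa, Gwet's AC1)."""
        n = len(rater1)
        categories = sorted(set(rater1) | set(rater2))
        q = len(categories)

        # Observed agreement: proportion of subjects both raters classify identically.
        p_a = sum(a == b for a, b in zip(rater1, rater2)) / n

        # Marginal classification proportions for each rater.
        c1, c2 = Counter(rater1), Counter(rater2)
        m1 = {k: c1[k] / n for k in categories}
        m2 = {k: c2[k] / n for k in categories}

        # Cohen's chance agreement: sum over categories of the product of the marginals.
        pe_kappa = sum(m1[k] * m2[k] for k in categories)
        kappa = (p_a - pe_kappa) / (1 - pe_kappa)

        # Gwet's chance agreement: (1 / (q - 1)) * sum_k pi_k * (1 - pi_k),
        # where pi_k is the average prevalence of category k across the two raters.
        pi = {k: (m1[k] + m2[k]) / 2 for k in categories}
        pe_ac1 = sum(pi[k] * (1 - pi[k]) for k in categories) / (q - 1)
        ac1 = (p_a - pe_ac1) / (1 - pe_ac1)

        return p_a, kappa, ac1

    # Hypothetical highly skewed (high-prevalence) binary example: raw agreement is
    # 0.90, yet Kappa falls slightly below zero while AC1 remains near 0.89.
    r1 = ["yes"] * 18 + ["no", "yes"]
    r2 = ["yes"] * 18 + ["yes", "no"]
    print(agreement_coefficients(r1, r2))

The skewed example illustrates the prevalence effect the abstract describes: Kappa's chance-agreement term grows with skew and can drive the coefficient down despite high raw agreement, whereas AC1's term shrinks under the same conditions.
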
dc.format  application/pdf
dc.language  en_US
dc.rights  Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title  Comparison of Cohen's Kappa and Gwet's AC1 with a mass shooting classification index: A study of rater uncertainty
dc.contributor.committeeMember  Mwavita, Mwarumba
dc.contributor.committeeMember  Wheeler, Denna
dc.contributor.committeeMember  Beaman, Jason
osu.filename  Keener_okstate_0664D_16644.pdf
osu.accesstype  Open Access
dc.type.genre  Dissertation
dc.type.material  Text
dc.subject.keywords  chance agreement
dc.subject.keywords  cohen's kappa
dc.subject.keywords  gwet's ac1
dc.subject.keywords  inter-rater agreement
dc.subject.keywords  inter-rater reliability
dc.subject.keywords  rater uncertainty
thesis.degree.discipline  Educational Psychology
thesis.degree.grantor  Oklahoma State University

