Measuring and Promoting Inter-Rater Agreement of Teacher and Principal Performance Ratings

Indicators of the Nursing Outcomes Classification (NOC) must be tested for validity and reliability. One way to assess the reliability of NOC indicators is interrater reliability. Kappa and percent agreement are statistical methods commonly used to measure the interrater reliability of an instrument, and they are often reported together because both are simple to interpret. However, two potential conflicts can arise when the kappa value and the percent agreement do not point in the same direction. This article is intended as a guideline for researchers confronted with these two potential conflicts; it deals with the measurement of interrater reliability with two raters.
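
To make the conflict concrete, the sketch below computes both statistics for two raters in Python. This is a minimal illustration, not code from the article: the function names and the yes/no ratings are hypothetical, chosen only to reproduce the well-known "high agreement, low kappa" paradox discussed by Feinstein and Cicchetti.

    # Minimal sketch, assuming two raters and nominal (yes/no) ratings; the
    # data below are invented for illustration and are not from the article.
    from collections import Counter

    def percent_agreement(rater_a, rater_b):
        # Proportion of items on which both raters give the same rating.
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return matches / len(rater_a)

    def cohens_kappa(rater_a, rater_b):
        # Cohen's kappa: observed agreement corrected for chance agreement
        # estimated from each rater's marginal distribution.
        n = len(rater_a)
        p_observed = percent_agreement(rater_a, rater_b)
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical ratings with very unbalanced marginals.
    rater_a = ["yes"] * 45 + ["no"] * 5
    rater_b = ["yes"] * 43 + ["no"] * 2 + ["yes"] * 3 + ["no"] * 2

    print(percent_agreement(rater_a, rater_b))  # 0.90: high percent agreement
    print(cohens_kappa(rater_a, rater_b))       # about 0.39: much lower kappa

With these made-up ratings the raters agree on 45 of 50 items (90 percent agreement), yet kappa is only about 0.39, because the strongly unbalanced marginals push the expected chance agreement up to roughly 0.84. This is precisely the kind of discrepancy the guideline addresses.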

References
1. Kvålseth T. Measurement of interobserver disagreement: correction of Cohen's kappa for negative values. J Probab Stat. 2015;2015.
2. O'Leary S, Lund M, Ytre-Hauge TJ, Holm SR, Naess K, Dalland LN, et al. Pitfalls in the use of kappa when interpreting agreement between multiple raters in reliability studies. Physiotherapy. 2014;100(1):27-35.
3. Landis J, Koch G. The measurement of observer agreement for categorical data. Biometrics. 1977:159-74.
4. Fleiss JL. Measuring nominal scale agreement among many raters. Psychological Bulletin. 1971;76(5):378-82.
5. Feinstein A, Cicchetti D. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543-9.
6. McCray G. Assessing inter-rater agreement for nominal judgement variables. The Language Testing Forum.
7. House A, House B, Campbell M. Measures of interobserver agreement: calculation formulas and distribution effects. J Behav Assess. 1981;3(1):37-57.
8. Kottner J, Audigé L, Brorson S, Donner A, Gajewski B, Hróbjartsson A, et al. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. Int J Nurs Stud. 2011;48(6):661-71.
9. Graham M, Milanowski A, Miller J. Measuring and promoting inter-rater agreement of teacher and principal performance ratings. Online Submission: Center for Educator Compensation Reform.
10. Viera A, Garrett J. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-3.
11. Sim J, Wright C. The kappa statistic in reliability studies: use, interpretation, and sample size requirements.