Abstract
Numerous pediatric research protocols are designed to assess the degree of concordance between two observers; in other words, to determine the extent of their agreement. A frequently used statistical tool for measuring interobserver concordance is the kappa coefficient (k). The present article explains the theoretical background of this coefficient, the methodology employed for its calculation, and the way in which its value is correctly interpreted. In simple terms, the kappa coefficient corresponds to the proportion of agreements observed among the total number of observations, once all agreements attributable to chance have been excluded. The kappa coefficient takes a value between -1 and +1, with +1 representing the strongest degree of interobserver concordance; conversely, a value of k = 0 indicates that the observed concordance is exactly what would be expected by chance. The interpretation of the kappa coefficient is performed by mapping its value onto a qualitative scale comprising six levels of strength of agreement ("poor", "slight", "fair", "moderate", "substantial" and "almost perfect"), which simplifies its comprehension.
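The calculation described above (observed agreement minus chance agreement, divided by the maximum possible agreement beyond chance) can be sketched as follows. This is an illustrative implementation, not code from the article; the contingency table, category labels, and function names are assumptions, and the qualitative cut-points follow the six-level scale the abstract cites.

```python
def cohens_kappa(table):
    """Compute the kappa coefficient k = (Po - Pe) / (1 - Pe) from a
    square contingency table (rows: observer A, columns: observer B)."""
    n = sum(sum(row) for row in table)  # total number of observations
    # Po: observed agreement = proportion of observations on the diagonal
    po = sum(table[i][i] for i in range(len(table))) / n
    # Pe: agreement expected by chance = sum over categories of the
    # product of each observer's marginal proportions
    pe = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (po - pe) / (1 - pe)

def strength_of_agreement(kappa):
    """Six-level qualitative scale mentioned in the abstract."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Hypothetical example: two observers classify 100 radiographs
# as positive or negative.
table = [[40, 10],   # A positive: B positive / B negative
         [5, 45]]    # A negative: B positive / B negative
k = cohens_kappa(table)        # Po = 0.85, Pe = 0.50, so k = 0.70
print(k, strength_of_agreement(k))
```

Note that with perfect agreement (all observations on the diagonal) Po = 1 and the formula yields k = +1, matching the maximum value stated above; when Po equals Pe, k = 0.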
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright (c) 2008 Revista Chilena de Pediatría
