Measuring nominal scale agreement among many raters

Joseph L. Fleiss. doi:10.1037/h0031619

Citations: 790
Introduced the statistic kappa to measure nominal scale agreement between a fixed pair of raters. Kappa was generalized to the case where each of a sample of 30 patients was rated on a nominal scale by the same number of psychiatrist raters (n = 6), but where the raters rating one subject were not necessarily the same as those rating another. Large sample standard errors were derived.
Journal: Psychological Bulletin, vol. 76, no. 5, pp. 378-382, 1971
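
The kappa generalization summarized in the abstract is computed from a subjects-by-categories table of counts, where each entry gives how many raters assigned that subject to that category and every subject is rated by the same number of raters n. The Python sketch below illustrates the standard calculation of that statistic (the function name, example data, and table layout are assumptions for illustration, not taken from the paper): the mean observed pairwise agreement P-bar is compared with the chance-expected agreement P-e, giving kappa = (P-bar - P-e) / (1 - P-e).

from typing import Sequence

def many_rater_kappa(counts: Sequence[Sequence[int]]) -> float:
    # counts[i][j] = number of raters assigning subject i to category j;
    # every row must sum to the same number of raters n.
    N = len(counts)                  # number of subjects
    k = len(counts[0])               # number of nominal categories
    n = sum(counts[0])               # raters per subject
    if any(sum(row) != n for row in counts):
        raise ValueError("each subject must be rated by the same number of raters")

    # proportion of all assignments falling in each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

    # per-subject agreement: proportion of rater pairs that agree on that subject
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P_i) / N             # mean observed agreement
    P_e = sum(pj * pj for pj in p)   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 5 subjects, 3 categories, 6 raters per subject.
ratings = [
    [6, 0, 0],
    [2, 2, 2],
    [0, 5, 1],
    [1, 1, 4],
    [3, 3, 0],
]
print(round(many_rater_kappa(ratings), 3))

A table of counts, rather than a record of which rater gave which label, is all the statistic needs, which is what allows the raters rating one subject to differ from those rating another.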