Replication data for: Coder Reliability and Misclassification in the Human Coding of Party Manifestos
Harvard Dataverse
Title
Replication data for: Coder Reliability and Misclassification in the Human Coding of Party Manifestos

Identifier
https://doi.org/10.7910/DVN/4KJZJL

Creator
Slava Mikhaylov
Michael Laver
Kenneth Benoit

Publisher
Harvard Dataverse

Description
The Comparative Manifesto Project (CMP) provides the only time series of estimated party policy positions in political science and has been extensively used in a wide variety of applications. Recent work (e.g. Benoit, Laver, and Mikhaylov 2009; Klingemann et al. 2006, chs. 4–5) focuses on non-systematic sources of error in these estimates that arise from the text generation process. Our concern here, by contrast, is with error that arises during the text coding process, since nearly all manifestos are coded only once by a single coder. First, we discuss reliability and misclassification in the context of hand-coded content analysis methods. Second, we report results of a coding experiment that used trained human coders to code sample manifestos provided by the CMP, allowing us to estimate the reliability of both coders and coding categories. Third, we compare our test codings to the published CMP "gold standard" codings of the test documents to assess accuracy, and produce empirical estimates of a misclassification matrix for each coding category. Finally, we demonstrate the effect of coding misclassification on the CMP's most widely used index, its left-right scale. Our findings indicate that misclassification is a serious and systemic problem with the current CMP dataset and coding process, suggesting the CMP scheme should be significantly simplified to address reliability issues.
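
As a rough illustration of the last two steps in the abstract, the Python sketch below estimates a misclassification matrix from coder labels measured against gold-standard labels, then propagates it through a RILE-style left-right index (right-category share minus left-category share). Everything here is invented for illustration: the three-category scheme stands in for the much larger CMP scheme, the codings are made up, and the sketch is not part of the replication materials.

import numpy as np

# Toy setup: three invented coding categories. The real CMP scheme
# has far more, but three are enough to show the mechanics.
categories = ["left", "neutral", "right"]

# Hypothetical codings of ten quasi-sentences: the published
# gold-standard labels and one trained coder's labels.
gold  = np.array([0, 0, 1, 2, 2, 2, 0, 1, 2, 0])
coder = np.array([0, 1, 1, 2, 0, 2, 0, 1, 1, 0])

# Empirical misclassification matrix M: M[i, j] estimates the
# probability that a quasi-sentence whose true category is i
# gets coded as category j.
k = len(categories)
M = np.zeros((k, k))
for g, c in zip(gold, coder):
    M[g, c] += 1
M /= M.sum(axis=1, keepdims=True)

# True category shares, and the shares we expect to observe
# after a single pass of (mis)coding.
p_true = np.bincount(gold, minlength=k) / len(gold)
p_observed = p_true @ M

# A RILE-style left-right index: right share minus left share.
def left_right(p):
    return p[2] - p[0]

print("index from gold-standard coding:", left_right(p_true))      # 0.0
print("expected index after miscoding: ", left_right(p_observed))  # -0.2

Even in this toy example, asymmetric coding error pulls the expected index away from its gold-standard value; the study estimates such matrices empirically for each CMP coding category.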