Record Details

Replication data for: Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results

Harvard Dataverse

Title Replication data for: Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results
 
Identifier https://doi.org/10.7910/DVN/28766
 
Creator Franco, Annie
Malhotra, Neil
Simonovits, Gabor
 
Publisher Harvard Dataverse
 
Description The accuracy of published findings is compromised when researchers fail to report and adjust for multiple testing. Pre-registration of studies and the requirement of pre-analysis plans for publication are two proposed solutions to combat this problem. Some have raised concerns that such changes in research practice may hinder inductive learning. Without knowing the extent of underreporting, it is difficult to assess the costs and benefits of institutional reforms. This paper examines published survey experiments conducted as part of the Time-sharing Experiments in the Social Sciences (TESS) program, where the questionnaires are made publicly available, allowing us to compare planned design features against what is reported in published research. We find that: (1) 30% of papers report fewer experimental conditions in the published paper than in the questionnaire; (2) roughly 60% of papers report fewer outcome variables than what is listed in the questionnaire; and (3) about 80% of papers fail to report all experimental conditions and outcomes. These findings suggest that published statistical tests understate the probability of Type I errors.
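
A minimal sketch of why unreported tests understate the Type I error rate, assuming k independent tests each run at a nominal level of alpha = 0.05 (the values of k are illustrative assumptions, not figures from this study): the probability of at least one false positive is 1 - (1 - alpha)^k.

    # Familywise error rate for k independent tests at nominal level alpha
    alpha = 0.05
    for k in (1, 3, 5, 10):
        fwer = 1 - (1 - alpha) ** k
        print(f"k={k:2d} tests -> P(at least one Type I error) = {fwer:.3f}")
    # k= 1 -> 0.050, k= 3 -> 0.143, k= 5 -> 0.226, k=10 -> 0.401

If a paper reports only the subset of conditions and outcomes that reached significance, readers see the nominal 0.05 level rather than the larger familywise rate implied by the full questionnaire.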
 
Subject underreporting, research transparency
 
Date 2015-01-21