Title: Replication Data for: How dropping subjects who failed manipulation checks can bias your experimental results. An illustrative case
Identifier: https://doi.org/10.7910/DVN/7DXBGG
Creator: Varaine, Simon
Publisher: Harvard Dataverse
Description: Manipulation checks are post-experimental measures widely used to verify that subjects understood the treatment. Some researchers drop subjects who failed manipulation checks in order to restrict their analyses to attentive subjects. This short report offers a novel illustration of how this practice can bias experimental results: in the present case, by confirming a hypothesis that is likely false. In a survey experiment, subjects were primed with fictional news stories depicting economic decline versus prosperity. Subjects were then asked whether the news story depicted an economic decline or prosperity. Results indicate that responses to this manipulation check captured subjects' pre-existing beliefs about the economic situation. As a consequence, dropping subjects who failed the manipulation check conflates the effects of pre-existing and induced beliefs, increasing the risk of false-positive findings. Researchers should avoid dropping subjects based on post-treatment measures and should instead rely on pre-treatment measures of attentiveness.
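For intuition, the selection mechanism described above can be reproduced in a few lines of simulation. The sketch below is not part of the deposited replication files; all variable names and parameter values are illustrative assumptions. It generates data in which the true treatment effect is zero, lets manipulation-check answers depend on both the prime and pre-existing beliefs, and shows that restricting the analysis to subjects who "passed" produces a spurious treatment effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Pre-existing belief that the economy is in decline (latent scale).
    belief = rng.normal(0.0, 1.0, n)

    # Random assignment: 1 = "decline" news story, 0 = "prosperity" story.
    treated = rng.integers(0, 2, n)

    # Outcome driven only by pre-existing beliefs: the TRUE effect is zero.
    outcome = 0.5 * belief + rng.normal(0.0, 1.0, n)

    # Manipulation check ("Did the story depict a decline?"): the answer
    # reflects both the prime and pre-existing beliefs, as the report argues.
    says_decline = (1.0 * treated + 0.8 * belief + rng.normal(0.0, 1.0, n)) > 0.5

    # A subject "passes" when the answer matches the assigned condition.
    passed = says_decline == (treated == 1)

    def diff_in_means(keep):
        """Treatment-minus-control mean outcome among kept subjects."""
        return (outcome[keep & (treated == 1)].mean()
                - outcome[keep & (treated == 0)].mean())

    print(f"full sample:  {diff_in_means(np.ones(n, dtype=bool)):+.3f}")  # close to 0
    print(f"passers only: {diff_in_means(passed):+.3f}")                  # clearly positive

Under these assumptions, passers in the "decline" condition tend to hold prior decline beliefs while passers in the "prosperity" condition tend to hold the opposite, so the filtered contrast is driven entirely by who is kept, not by the prime itself: exactly the false-positive risk the description warns about.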
Subject: Social Sciences
Keywords: Manipulation checks; Survey experiments; Type I error
Contributor: Varaine, Simon