"Ours is the first study to evaluate the effectiveness of sugary drink warning labels," touts Grant Donnelly, a lead author of a joint study by the Harvard Business School and Harvard University Behavioral Insights Group. Kudos for their smart approach to testing the effect of images as part of those warning labels (objective measures showed that images indeed brought about the desired reduction in purchases).
But shame on the researchers for ignoring or missing decades of psychological and behavioral-economics research on the best ways of investigating cause and effect. For the study also incorporated a naive direct question asking participants "how seeing a graphic warning label would influence their drink purchases." An abundant literature, from Nisbett and Wilson (1977) to my own recent article, shows how foolish it is to trust such subjective interpretations of the factors behind one's own decision-making. After acquiring such good, objective information, why would Donnelly et al. water it down with subjective findings that are sure to introduce bias?

UPDATE: The original study materials, made available by the authors at the Open Science Framework, tell a different story than the Harvard Gazette summary quoted above. The survey did not ask respondents "how seeing a graphic warning label would influence their drink purchases." Instead, it asked for reactions to the images and then, separately, about intention to buy a soft drink. Asked this way, each topic was far more amenable to unbiased reporting by a participant than that person's own causal assessment would have been. The researchers would then link the responses "in the back end" to investigate any causal connection. A good design after all.
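To make that last point concrete, here is a minimal sketch of what such "back end" linking might look like: each respondent contributes two separately collected ratings, and the analyst, not the respondent, tests whether they are related. Nothing below comes from the study's actual materials; the variable names, scales, and data are all hypothetical.

```python
# A minimal sketch (not the authors' actual analysis) of linking two
# separately collected survey items "in the back end": each row is one
# respondent, with a reaction rating to the warning image and a separate
# purchase-intention rating. All names and data here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Hypothetical 1-7 ratings: strength of negative reaction to the graphic
# label, and intention to buy a sugary drink, asked as separate questions.
reaction = rng.integers(1, 8, size=n)
intention = np.clip(8 - reaction + rng.integers(-2, 3, size=n), 1, 7)

# The causal question is never posed to respondents; instead the analyst
# tests whether stronger reactions predict lower purchase intention.
r, p = stats.pearsonr(reaction, intention)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```

The design's virtue is visible in the code: neither question asks anyone to theorize about their own motives, and any causal story is tested statistically across respondents rather than taken from their introspection.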