Analysts, researchers, and statisticians tend to fall into one of four categories with respect to the way they handle claims about causation.

A. Many report bivariate results (e.g., correlations or group differences) as if they indicated causal effects, plain and simple. This approach is unfortunately common among those of us just beginning our research careers; I encounter this group frequently when I peer-review manuscripts.

B. Some build in controls for obvious or easily obtainable variables and then (maybe with unjustified optimism, or even hubris) report those results as if they indicated causal effects.

C. Some try their best to use statistical or other means to control for relevant variables as fits the situation; try multiple methods; and then take pains to report those results as indicating causal effects to one degree or another.

D. A few purists, such as Gregory Miller, Jean Chapman, Donald Rubin, and Elazar Pedhazur, stand by the phrase "no causation without randomization" and claim that almost no published analysis of non-experimental data ever succeeds at revealing causal effects.

Being a card-carrying member of group C, I am constantly on the lookout for good ways to isolate causes and effects using quantitative methods. Somewhere I ran across the following ingenious approach.

Suppose we want to explain the daily volume of men's shoe sales using the amount of money spent daily on radio ads for a store's men's shoes. Such ads sometimes coincide with days when shoe sales would be high anyway; there is "anticipation" in the timing of the ads. On that score alone there would be correlation even in the absence of causation.

A surprisingly helpful check is to see whether that correlation is much higher than the correlation between the advertising spend and the volume of sales at another shoe store down the street, or between the spend and the first store's sales of women's shoes. This is a low-tech method requiring no specialized statistical skills, yet it promises to productively isolate the connection of interest, keeping at bay the confounding variable that threatens the causal claim. A minimal sketch of this check appears below.
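To make the idea concrete, here is a minimal sketch in Python. The column names (ad_spend, mens_sales, rival_sales, womens_sales) are hypothetical, and the simulated numbers stand in for a real store's daily records:

```python
# A minimal sketch of the placebo-outcome check described above.
# Assumes a daily DataFrame with hypothetical columns: 'ad_spend'
# (radio ad dollars for Store A's men's shoes), 'mens_sales'
# (Store A's men's shoe sales), and two placebo outcomes the ads
# should NOT cause: 'rival_sales' and 'womens_sales'.
import numpy as np
import pandas as pd

def placebo_check(df: pd.DataFrame,
                  treatment: str = "ad_spend",
                  outcome: str = "mens_sales",
                  placebos: tuple = ("rival_sales", "womens_sales")) -> pd.Series:
    """Pearson correlations of the treatment with the focal outcome
    and with each placebo outcome. If the ads merely 'anticipate'
    generally good sales days, the placebo correlations should be
    nearly as large as the focal one."""
    cols = [outcome, *placebos]
    return df[cols].corrwith(df[treatment])

# Toy illustration with simulated data (purely hypothetical numbers).
rng = np.random.default_rng(0)
n = 365
demand = rng.normal(size=n)              # unobserved "good sales day" factor
ads = 0.6 * demand + rng.normal(size=n)  # ads timed to anticipate good days
df = pd.DataFrame({
    "ad_spend": ads,
    "mens_sales": 0.5 * demand + 0.4 * ads + rng.normal(size=n),
    "rival_sales": 0.5 * demand + rng.normal(size=n),
    "womens_sales": 0.5 * demand + rng.normal(size=n),
})
print(placebo_check(df).round(2))
```

If the correlation with mens_sales towers over the two placebo correlations, the causal story gains credibility; if all three are of similar size, "anticipation" is the likelier explanation.

What smart quantitative designs have you encountered lately?

***

Contact: [email protected]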
3 Comments
David Dodenhoff
10/21/2024 12:50:28 pm
One of my professors used to say, "Your data don't demonstrate causation. That's what your theory is for. The data is just evidence."
David Dodenhoff
10/21/2024 12:52:23 pm
Don't ask me why my professor treated "data" as plural in one sentence and singular in a nearby sentence.