The role of exploratory analyses in accounting research
There is a fine line in accounting research between not having enough theory to support a prediction and having so much theory that there is no tension or surprise in the findings. Occasionally, I have found my research straddling that line uncomfortably, with some readers saying I need to add tension, and others saying that they don’t buy my hypotheses and that I need more literature to support them.
What role, then, do experiments have in advancing theory? Can we only make predictions that are so robustly supported by prior research that there is no tension? Or are we limited to situations where different lines of research have different predictions? Should our theory be so airtight that we could replace the actual experiment with a thought experiment or an analytical model?
Sir Arthur Conan Doyle’s Sherlock Holmes famously said, “I have no data yet. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” The quote comes from the context of solving crimes, but I often think it applies to research as well.
The problem with having data before developing theory is that the predictions are post hoc, meaning we can come up with pretty much any story to fit the observed pattern of results. As a single, stand-alone study, that type of research typically does not carry much weight. How do we know the results are robust and not just a random outcome?
I heard a story once of a researcher who made a prediction before running a study. To avoid embarrassing anyone, I will refer to his theory simply as “Increasing X will increase Y”. Well, after he ran the study, he was surprised to find that X actually decreased Y. So, he spun a brand-new story and convinced his co-authors that “increasing X will decrease Y” should have been their prediction all along. They started to rewrite the paper accordingly, only to have him come back a week later and say, “Wait! I accidentally reverse-coded Y.” The co-authors asked about the new theory and his reply was, “Yeah, that was all junk, let’s go back to the original theory.”
Many of us have seen or been part of that same phenomenon. Although our situation may not have been quite so egregious, most of us can admit to having learned a thing or two from doing research. Things do not always work out as we predicted; we are sometimes surprised! Heck, I have occasionally even found myself gleaning insights from participants’ comments about the study. So, what do we do?
Conan Doyle, through Holmes, suggests that we revise our theory to fit our data. I agree, but I suggest we do so in a measured fashion, following a few guiding principles.
- First, be honest with yourself. If you did not predict something ex-ante, you should not use one-tailed t-tests, let alone planned contrasts. Take a step back and ask yourself, “Do I believe this?” If you do not believe it, you should not be trying to sell it in a paper.
- Second, be open to exploratory analyses. Run pilot studies. Include extra variables that you think might be interesting, even if you do not have theory to back them up. Run post hoc tests. Throw spaghetti at the wall and see what sticks!
- Third, learn from your exploratory analyses and failed experiments. If you find something unexpected, think about it. Could it be robust? Is there some theory that would support that finding? Is the result generalizable or just an artifact of your setting?
- Finally, realize that there is a difference between exploratory analysis and a final study. If you think you may have stumbled upon something interesting, run with it! However, do not try to spin that exploratory analysis into a test of predictions that gets published as though it were a masterfully designed study with ex-ante predictions. Instead, use it to guide your theory, and then test that new theory! Design a new experiment based on what you learned in the exploratory analysis. Now you have ex-ante predictions, and the new study can be a test of hypotheses.
As a brief example, I once ran a study with some co-authors where we predicted that X would increase Y, but we actually found that X decreased Y. As a team, we sat down and thought about it, A LOT. We mulled over why the results worked out differently than we predicted. We reran analyses to check that we had not miscoded something. We did power analyses to see whether it was just an artifact or a real finding (it came in at p < 0.001 with a large sample, and all co-authors independently arrived at the same conclusion). When we finally admitted that this finding was real, we started looking for theory that might support it. We found theory. Then we decided to redo our experiment based on our new theory and new predictions. We designed a new scenario (including a new manipulation) with new participants. Our findings supported the predictions. Now we had a publishable experiment! Truth be told, the original experiment never made it into the paper. We just did not feel that the exploratory analysis could take up space when we had several other experiments we needed to fit in. That being said, we learned more from that original experiment than we did from the final experiment. That is the one that surprised us, and that is the one that guided the rest of our research. However, we cut it because it was exploratory, not a robust test of theory, and readers did not want to hear about our thought process in a “what we did this summer” fashion.
The point is that exploratory analysis led us to a new way of thinking and guided the design of our “real” experiment. That “failed” experiment taught us more about human nature than any “successful” experiments we ran afterward. Do not be afraid of exploratory analyses, pilot studies, or the chance to have your priors proved wrong. That is when we can really start to learn.