Breaking out of the Methodological Cage
Esteemed corruption author and academic Michael Johnston encourages us to look up from our data from time to time, to challenge the evidence we often rely on to define and analyze corruption. In this latest post from our series on corruption in fragile states, Professor Johnston asks us to take a fresh look at the lives and histories of those living in corrupt contexts, in hopes of finding richer, more useful possibilities for peacebuilding and corruption control.
What does corruption research have in common with a drunk looking for his car keys?
An old and probably apocryphal story has it that a drunk, searching for his lost keys only under a lamppost, was asked why he looked nowhere else. "Because the light is better there," he responded.
Even if mythical, the story points to problems with much research on corruption as an issue linked to peacebuilding. Confronted with the rich variety of corruption issues, too often we fall back upon crunching one-dimensional corruption indices because we think the evidence is best there.
As a result, we flatten out critical contrasts, focus on the wrong levels of analysis, and often confuse minute methodological tinkering with telling a substantively rich story. We lose important opportunities to explore corruption control as an aspect of peacebuilding and, worse yet, miss the implications of peacebuilding for sustaining reform coalitions in real societies.
Every good blog post contains a Don’t-Get-Me-Wrong paragraph, and here’s mine: we social scientists don’t carry on like drunks, ordinarily, and I’m hardly anti-statistics on principle. Quantification is a mainstay of good social science, even if it should not be the whole story. Corruption indices have definite uses – not least, focusing attention on leaders who wish we would look the other way. Corruption control is by no means the only issue in peacebuilding, and peacebuilding will not be simplified by better corruption research alone.
Still, our research methods can narrow as well as sharpen our vision. Perceptions of corruption are not the same as corruption itself. One-dimensional indices tell us, in effect, that corruption is the same thing everywhere, varying only in amount — which we cannot directly measure in any event. When we plug index scores into an equation we are arguing, in most instances, that the causes and effects of corruption are the same everywhere.
Follow the conversation between Michael Johnston and Matthew Stephenson! Read next: "The Level-of-Aggregation Question in Corruption Measurement," "1.39 Cheers for Quantitative Analysis," and "Are Aggregate Corruption Indicators Coherent and/or Useful?: Further Reflections."
Not even the right lamppost
Most problematic of all, relying only on country-level data assumes that corruption is a national attribute, like GDP per capita or land area, when the truth is that corruption arises in highly specific processes, structural niches, and relationships. Factors such as a country's public-sector share of the economy, or its aggregate economic dependence upon extractive resources, may shape opportunities and constraints. But real people decide to take advantage of them — or not — in complex situations, defined by perceived alternatives and consequences. National-level indicators tell us little about those complexities.
Yet I continue to spend time reviewing manuscripts based on panel regressions of national indicators that are diced and spliced in ever more arcane ways. I have seen – and argued unsuccessfully for rejecting – manuscripts that offer (say) some highly-advanced technique for dealing with error terms, yet have no substantive story to tell. Should sophisticated treatments of data that cannot measure what we claim they do, and causal arguments pitched at the wrong level of analysis, even be taken seriously?
The worst – apologies, here, to some unknown author – was a journal submission that claimed to incorporate quantitative measures of culture. That would have been interesting, but for the fact that they were dummy variables for locations in Africa, Asia, and so forth. They were handled in a sound manner; left unsaid was why we might believe that everyone in Asia, or Africa, holds one common cultural outlook, much less what those values might be. That one, fortunately, was rejected.
Not just R-squares, but richness
Why get worked up about such issues? After all, at times in my own misspent youth I have engaged in those same methodological guilty pleasures. Papers that develop useful statistical techniques ought to be published, although perhaps in more methodologically focused journals. But these things matter because neither corruption control nor peacebuilding can be accomplished by manipulating aggregate national characteristics. Both must mobilize lasting interests, reshape expectations, and build trust among people living in complex situations.
That requires understanding their histories, needs and wants, how officials treat and mistreat them, how they perceive risks and alternatives, and how the pitfalls of collective action might be overcome. It requires, as Susan Rose-Ackerman points out, a standard of goodness: we don’t like corruption, but what would be better, and can we recognize it in a messy reality?
We would do far better to go back to basics. That means, first of all, remembering why we care about peacebuilding and corruption. We need to avoid reducing those processes to abstract entities, and reifying attributes just so we can count occurrences. We definitely need theory, but not only about whole countries; theories should also sort out how people and groups act and respond in actual situations. Those theories might rest on qualitative evidence and standards of disproof, but could tell us what to look for in a given case and offer a rich understanding of what we find. Over time we can, and should, reach for higher-level generalizations, but generalizations based on actual situations and an appreciation of how people act. Not all of our work has to be immediately useful, but it must connect with observed reality.
Of course, other traps are out there too – notably, getting so wrapped up in details that we ignore broader similarities and contrasts that might aid our understanding. Still, developing a better appreciation of the commonalities between controlling corruption and peacebuilding – and, of course, of their contrasts – means looking up from our data from time to time, and taking a fresh look at people as they live and deal with their histories. They have complex and important stories to tell us, if we will listen and learn.
About this article
This post is part of the corruption in fragile states series. The series provides a space for conversation about corruption in fragile states. Since its inception in 2016 as part of the CDA Perspectives Blog, the series has sought to challenge status quo thinking with a particular emphasis on exploring systems-based approaches to understanding and acting on corruption dynamics. Topics in the series range from new research findings in Uganda, Iraq or the DRC to provocative thought pieces intended to contest dominant paradigms or practices.
Now hosted by the Institute for Human Security at the Fletcher School of Law and Diplomacy, series contributions are inspired by, but not limited to, the Corruption, Justice and Legitimacy project as well as the now-concluded Central Africa Accountable Service Delivery Initiative. All blog posts published after March 1, 2018, information about submitting guest posts, and a subscription link for future series updates are available here.
To receive blog posts on other topics from CDA subscribe here. You may contact email@example.com if you are interested in submitting a guest post on the latest work in the fields of accountability and feedback loops, conflict sensitivity, peacebuilding effectiveness, and responsible business.
About the author(s)
Michael Johnston is Charles A. Dana Professor of Political Science Emeritus at Colgate University. He lives, works, and overindulges in enchiladas in Austin, Texas.