Empiricness

Posted in Spanish on May 26, 2015.

(@LeopoldoTweets)

Paul Romer sparked a debate about “mathiness” in the theory of economic growth with a recent article presented at the latest meetings of the American Economic Association. The paper blends several arguments, not all of them entirely fair and some tied to the ideological dispute over state intervention in the economy, but his main message is simple and powerful: math is not science.

Romer is worried that scholars use math (and sometimes sloppy math) not as a tool to explore abstractions and rigorously investigate the answers to relevant questions, but as a way to pass off a defense of their preferred theories as science. This mathiness, he fears, may become so prevalent that the market for ideas collapses: it will be impossible to separate mathiness from mathematical theory. He concludes:

«Only mathiness will be left. It will be worth little, but cheap to produce, so it might survive as entertainment (…) Presenting a model is like doing a card trick. Everybody knows that there will be some sleight of hand. (…) Perhaps our norms will soon be like those in professional magic; it will be impolite, perhaps even an ethical breach, to reveal how someone’s trick works.»

A lot has been said about the debate from the theory side. I want to focus on Romer’s optimism about empirical work: “In the new equilibrium: empirical work is science; theory is entertainment.”

In particular, and without suggesting that we dispense with these methods, empirical work has its own phenomena akin to mathiness, and similar risks. Mathiness stems from a certain obsession, healthy to some extent, with formal economic analysis. Similarly, in empirical work many risks arise from a healthy concern with being more rigorous when analyzing data: the Identification Taliban.

I am a (moderate) member of this sect, which takes very seriously that correlation is not causation. It thus believes that one of the main challenges of the social sciences (where controlled experiments are hard, though not impossible) is to disentangle the complex web of causality underlying any empirical relationship.

To start the discussion, I summarize a few risks.

  1. Trivializing science

Doing controlled social experiments is complicated. Even when we can do them, the type of questions we can tackle is limited (we cannot, for instance, inject democracy into some societies the way we inject a medicine into a mouse to study its consequences). Therefore, economists (and social scientists in general) obsessed with identifying the causal effect (yes, the phrase is redundant, and yet we love it) can fall into the trap of studying comparatively minor problems: those on which we can run experiments or for which we can find natural accidents in history.

In his (otherwise great) article on writing advice for PhD students, John Cochrane asks: “What are the three most important things for empirical work?” His response: “Identification, Identification, Identification”.

Wrong. The most important thing, always, is that we tackle an interesting question.

  2. Relegating theory

The urge to find a clean and convincing causal effect may lead to the bad habit of relegating theory to a secondary role. This has been widely discussed in development economics, where a certain fascination with finding “what works and what does not” has driven us away from the big questions and big theories that, perhaps, we also need to understand the world.

On the topic, I recommend this interesting analysis by Jimena Hurtado on the tension between (big) theories and the current emphasis on controlled experiments, framed as a discussion of Albert Hirschman’s perspective on economic development. See also this special issue of the Journal of Economic Perspectives and this paper by Daron Acemoglu.

  3. Ethical hazards

As we do more and more experiments in the field, the discussion of the ethical implications of our interventions will grow. Serious mistakes such as the Montana experiment, though perhaps not typical, underscore the importance of discussing the impact that our experiments have on society. Going back to the earlier example: even if it were possible, are we ethically entitled to inject a bit more or a bit less democracy into societies?

  4. Confuse and rule I

Romer, in his attack on mathiness, criticizes the distance between mathematical abstractions and real phenomena. Exploiting that distance, researchers can defend their preferred positions under a fake scientific halo.

The truth is that in empirical work, the distraction around identification sometimes allows researchers to get away with conclusions that are quite distant from their results. Maybe this is not a major problem in good publications, where careful referees are supposed to make sure there is a strict connection between empirical results and proposed implications. But it still happens to some extent.

My favorite example is this extreme case (unpublished, and lacking a good identification strategy, but it illustrates the point). The paper concludes, about guerrillas in Colombia: “Far from social and political ideals being what causes conflict, rebel leaders need to invoke them to justify their business.” The evidence? Municipal-level regressions showing that guerrilla attacks are more prevalent in areas where there have been attacks in the past, close to where there are attacks, and where there are rents to grab. The conclusion may well be true, but I hope I do not have to say much to show that the statistical evidence hardly proves it. Let’s just say that a deeply ideological guerrilla, just like a band of mere drug traffickers, could be interested in concentrating its activity where the spoils are larger.
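To see why such a regression cannot settle the question, here is a toy simulation of my own (not from the cited paper; the variable names are hypothetical): two opposite motives generate exactly the same statistical pattern, so the fitted slope cannot tell them apart.

```python
# Toy simulation (my own construction, not from the paper discussed
# above): whether rebels are ideological or pure profit-seekers, both
# motives imply more attacks where there are rents to grab, so the
# regression yields the same slope under either hypothesis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
rents = rng.normal(size=n)  # lootable rents in each (hypothetical) municipality

for motive in ("ideological", "profit-seeking"):
    # Same data-generating process under both motives: attacks rise
    # with rents (even ideologues need to finance their war).
    attacks = 2.0 + 1.5 * rents + rng.normal(size=n)
    slope = sm.OLS(attacks, sm.add_constant(rents)).fit().params[1]
    print(f"{motive}: estimated slope on rents = {slope:.2f}")
```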

This point relates to the earlier one on theory. Without at least the rudiments of a theory, it is easy for academics to approach the data blindly. They will be thrilled to find a causal effect. But they may fail to draw the right implications from the finding.

  5. Confuse and rule II

The best way to solve the problem of causality is, of course, to run a controlled experiment: sleeping pills for some mice, a placebo for the rest. When this can be done, finding the effect we are interested in is no more complicated than comparing the average hours of sleep for mice with and without the medicine.
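That comparison is literally a difference in means. A minimal sketch with simulated data (all numbers are made up, purely to illustrate):

```python
# Minimal sketch of the mice example: under random assignment, the
# difference in group means estimates the causal effect of the pill.
# All numbers are simulated and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

control = rng.normal(loc=6.0, scale=1.0, size=50)  # placebo group
treated = rng.normal(loc=7.5, scale=1.0, size=50)  # sleeping-pill group

effect = treated.mean() - control.mean()             # difference in means
t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test

print(f"Estimated effect: {effect:.2f} extra hours of sleep (p = {p_value:.3f})")
```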

Lacking an experiment, the economists of the Identification Taliban use some slightly more sophisticated methods that simply try to replicate experimental conditions with observational (as opposed to experimental) data: difference in differences, matching methods, instrumental variables, and regression discontinuity are the most famous approaches.
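To give a flavor of the simplest of these methods, here is a hedged sketch of difference in differences on simulated data; the variable names and effect sizes are my own assumptions, not drawn from any study. The causal effect is read off the coefficient on the interaction between belonging to the treated group and being observed after the intervention.

```python
# Hedged sketch of difference in differences on simulated data; the
# variable names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 if unit belongs to the treated group
    "post": rng.integers(0, 2, n),     # 1 if observed after the intervention
})
df["y"] = (
    1.0                                  # baseline outcome
    + 0.5 * df["treated"]                # fixed difference between groups
    + 0.3 * df["post"]                   # common time trend
    + 2.0 * df["treated"] * df["post"]   # the true causal effect
    + rng.normal(0, 1, n)                # noise
)

# The coefficient on the interaction is the diff-in-diff estimate (~2.0).
model = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```

The estimate is only as good as the parallel-trends assumption behind it, and that is precisely the kind of card that can stay up a sleeve.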

The problem is that as methods get more complicated, the trick Romer refers to becomes easier to pull off. Amid the confusion, economists can be tempted to keep a card up their sleeve, especially with audiences that understand the problem of correlation versus causation (which anyone can understand) but are not experts in these methods (which few have reason to bother learning).

Of course this is not a problem if our application of the methods is rigorous. But that is not always the case, and it can have real implications, as when the trick’s victim is a public official evaluating the effectiveness of a social program.

These risks are no more than that: risks. Part and parcel of the job. And I repeat that the interest in correctly identifying causal effects is essentially healthy. It is the extreme obsession, on the one hand, and the sloppy pursuit, on the other, that may materialize these risks. What can be done? First, do not mix up ends and means: there is no point in cute causal identification without an interesting question and a reasonable theory. Second, maintain strict rigor. For this we have the famous peer-review process, which is far from perfect (as the recent incident with a Science article revealed). And although we may want to revise existing incentives to guarantee rigor in academia, at least in our profession the debate around transparency and replication of scientific results has been gaining attention.