Economics focus: Cause and defect
Aug 13th 2009 | From The Economist print edition

Instrumental variables help to isolate causal relationships. But they can be taken too far

[Illustration by Jac Depczyk]

"LIKE elaborately plumed birds…we preen and strut and display our t-values." That was Edward Leamer's uncharitable description of his profession in 1983. Mr Leamer, an economist at the University of California in Los Angeles, was frustrated by empirical economists' emphasis on measures of correlation over underlying questions of cause and effect, such as whether people who spend more years in school go on to earn more in later life. Hardly anyone, he wrote gloomily, "takes anyone else's data analyses seriously". To make his point, Mr Leamer showed how different (but apparently reasonable) choices about which variables to include in an analysis of the effect of capital punishment on murder rates could lead to the conclusion that the death penalty led to more murders, fewer murders, or had no effect at all.

In the years since, economists have focused much more explicitly on improving the analysis of cause and effect, giving rise to what Guido Imbens of Harvard University calls "the causal literature". The techniques at the heart of this literature, in particular the use of so-called "instrumental variables", have yielded insights into everything from the link between abortion and crime to the economic return from education. But these methods are themselves now coming under attack.

Instrumental variables have become popular in part because they allow economists to deal with one of the main obstacles to the accurate estimation of causal effects: the impossibility of controlling for every last influence. Mr Leamer's work on capital punishment demonstrated that the choice of controls matters hugely. Putting too many variables into a model ends up degrading the results. Worst of all, some relevant variables may simply not be observable.
For example, the time someone stays in school is probably influenced by his innate scholastic ability, but this is very hard to measure. Leaving such variables out can easily lead econometricians astray. What is more, the direction of causation is not always clear. Working out whether deploying more policemen reduces crime, for example, is confounded by the fact that more policemen are allocated to areas with higher crime rates.

Instrumental variables are helpful in all these situations. Often derived from a quirk in the environment or in public policy, they affect the outcome (a person's earnings, say, to return to the original example) only through their influence on the input variable (in this case, the number of years of schooling) while at the same time being uncorrelated with what is left out (scholastic ability). The job of instrumental variables is to ensure that the omission of factors from an analysis (in this example, the impact of scholastic ability on the amount of schooling) does not end up producing inaccurate results.

In an influential early example of this sort of study, Joshua Angrist of the Massachusetts Institute of Technology (MIT) and Alan Krueger of Princeton University used America's compulsory-schooling laws to create an instrumental variable for years of schooling, based on children's dates of birth. These laws mean that children born earlier in the year are older when they start school than those born later in the year, and so have received less schooling by the time they reach the legal leaving age. Since a child's birth date is unrelated to intrinsic ability, it is a good instrument for teasing out schooling's true effect on wages. Over time, such instrumental variables have become a standard part of economists' set of tools. Freakonomics, the 2005 bestseller by Steven Levitt and Stephen Dubner, provides a popular treatment of many of the techniques.
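The schooling example can be sketched in a small simulation. All the numbers below are made up purely for illustration, not taken from the Angrist-Krueger study: unobserved "ability" raises both schooling and wages, so a naive regression overstates the return to a year of schooling, while two-stage least squares (2SLS) using a birth-quarter instrument recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)             # unobserved confounder
quarter = rng.integers(0, 4, size=n)     # birth quarter: the instrument
# The instrument shifts schooling but is unrelated to ability.
schooling = 10 + 0.5 * quarter + 2.0 * ability + rng.normal(size=n)
true_return = 0.10                       # assumed causal effect per year
log_wage = 1.0 + true_return * schooling + 0.8 * ability + rng.normal(size=n)

def ols(y, x):
    """Least-squares coefficients [intercept, slope] of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased upward because ability is omitted.
ols_return = ols(log_wage, schooling)[1]

# 2SLS: regress schooling on the instrument, then wages on the fitted values.
first_stage = ols(schooling, quarter)
schooling_hat = first_stage[0] + first_stage[1] * quarter
iv_return = ols(log_wage, schooling_hat)[1]

print(f"true return: {true_return:.2f}")
print(f"OLS estimate: {ols_return:.2f}")   # well above 0.10
print(f"2SLS estimate: {iv_return:.2f}")   # close to 0.10
```

The second stage uses only the part of schooling predicted by birth quarter, which by construction carries no information about ability; that is the whole trick.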
Mr Levitt's analysis of crime during American election cycles, when police numbers rise for reasons unconnected to crime rates, is a celebrated example of an instrumental variable.

Two recent papers, one by James Heckman of the University of Chicago and Sergio Urzua of Northwestern University and another by Angus Deaton of Princeton, are sharply critical of this approach. The authors argue that the causal effects that instrumental strategies identify are uninteresting because such techniques often give answers to narrow questions. The results from the quarter-of-birth study, for example, do not say much about the returns from education for college graduates, whose choices were unlikely to have been affected by when they were legally eligible to drop out of school. According to Mr Deaton, using such instruments to estimate causal parameters is like choosing to let light "fall where it may, and then proclaim[ing] that whatever it illuminates is what we were looking for all along."

IV leagues

This is too harsh. It is no doubt possible to use instrumental variables to estimate effects on uninteresting subgroups of the population. But the quarter-of-birth study, for example, shone light on something that was both interesting and significant. The instrumental variable in this instance allows a clear, credible estimate of the return from extra schooling for those most inclined to drop out of school early. These are precisely the people whom a policy that sought to prolong the amount of education would target. Proponents of instrumental variables also argue that accurate answers to narrower questions are more useful than unreliable answers to wider questions.

A more legitimate fear is that important questions for which no good instrumental variables can be found are getting short shrift because of economists' obsession with solving statistical problems. Mr Deaton says that instrumental variables encourage economists to avoid "thinking about how and why things work".
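The "narrow question" point can also be made concrete with a simulation, again with invented numbers: suppose the return to schooling differs across people, and the leaving-age law only moves schooling for those inclined to drop out early (the "compliers"). The IV estimate then recovers the compliers' return, not the population average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

complier = rng.random(n) < 0.3            # would-be dropouts, moved by the law
quarter = rng.integers(0, 4, size=n)      # instrument: birth quarter
schooling = 12 + rng.normal(size=n)
schooling = schooling + np.where(complier, 1.0 * quarter, 0.0)  # law binds only here

returns = np.where(complier, 0.10, 0.05)  # heterogeneous causal effects
log_wage = 1.0 + returns * schooling + rng.normal(size=n)

# Simple IV (Wald) estimate: cov(wage, instrument) / cov(schooling, instrument)
iv = np.cov(log_wage, quarter)[0, 1] / np.cov(schooling, quarter)[0, 1]

print(f"population-average return: {returns.mean():.3f}")  # about 0.065
print(f"IV estimate: {iv:.3f}")                            # about 0.10, the compliers' return
```

Whether that is a defect depends on the question: for a policy aimed at would-be dropouts, the compliers' return is exactly the parameter of interest, which is the article's rejoinder to the critics.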
Striking a balance between accuracy of result and importance of issue is tricky. If economists end up going too far in emphasising accuracy, they may succeed in taking "the con out of econometrics", as Mr Leamer urged them to, only to leave more pressing questions on the shelf.