
Two New Papers on Militarized Police

The so-called militarization of police has become a major issue both in libertarian policy circles and in the civil rights community. Radley Balko has done yeoman's work showing the harms, including outrageous civil liberties violations, generated by the use of military-grade armor and weapons, the rise of the SWAT team, and the intimidating clothing preferred by many modern police. The photos of tanks on the streets of Ferguson were particularly galling. As a literal card-carrying member of the ACLU, I will let you imagine my own opinion about this trend.

That said, the new issue of AEJ: Policy has two side-by-side papers – one from a group at the University of Tennessee, and one by researchers at Warwick and NHH – that give quite shocking evidence about the effects of militarized police. They both use the "1033 Program", under which surplus military equipment is transferred to police departments, to investigate how military equipment affects crime, citizen complaints, violence by officers, and violence against police. Essentially, when the military has a surplus, such as when it changed a standard gun in 2006, the decommissioned supplies are given to centers located across the country, which then send those supplies out to police departments within a few weeks. The application forms are short and straightforward, and the process is not terribly competitive. About 30 percent of the distributions are things like vests, clothing and first aid kits, while the rest is more tactical: guns, drones, vehicles, and so on.

Causal identification is, of course, a worry here: places that ask for military equipment are obviously unusual. The two papers use rather different identification strategies. The Tennessee paper uses the distance to a distribution center as an instrument, since the military wants to reduce the cost of decommissioning and hence prefers closer departments. The first stage therefore predicts whether a sheriff's department receives new military items from the total materiel being decommissioned combined with the department's distance to decommissioning centers. The Warwick-NHH paper uses the fact that some locations apply frequently for items, and others only infrequently. When military spending is high, there is a lot more excess to decommission. Therefore, an instrument combining overall military spending with previous local requests for "1033" items can serve as a first stage for predicted surplus items received.
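To make the mechanics concrete, here is a minimal two-stage least squares sketch on simulated data. This is purely my illustration of the general logic of the distance-based instrument, not either paper's actual specification; every variable name and the data-generating process are invented.

```python
# Purely illustrative: a hand-rolled two-stage least squares in the spirit of the
# distance-based strategy described above, run on simulated data. Variable names
# and the data-generating process are invented for this sketch, not taken from
# either paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000                                   # hypothetical counties
dist = rng.uniform(10, 500, n)             # miles to the nearest distribution center
surplus = rng.uniform(0.5, 2.0, n)         # national decommissioning intensity that year
confound = rng.normal(0, 1, n)             # unobserved local conditions driving both requests and crime

z = surplus / dist                         # instrument: more surplus plus a closer center => more predicted equipment
equipment = 10.0 * z + 0.5 * confound + rng.normal(0, 0.1, n)       # endogenous treatment
crime = -1.0 * equipment + 0.5 * confound + rng.normal(0, 0.1, n)   # true causal effect is -1.0

first = sm.OLS(equipment, sm.add_constant(z)).fit()                 # first stage
second = sm.OLS(crime, sm.add_constant(first.fittedvalues)).fit()   # second stage on predicted equipment

print("naive OLS slope:", sm.OLS(crime, sm.add_constant(equipment)).fit().params[1])
print("2SLS slope:", second.params[1])     # roughly recovers -1.0; the naive slope is biased toward zero
# (Hand-rolled 2SLS gives correct point estimates but not correct standard errors.)
```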

Despite the different local margins these two instruments imply, the findings in both papers are nearly identical. In places that get more military equipment, crime falls, particularly for crime that is easy to deter like carjacking or low-level drug crime. Citizen complaints, if anything, go down. Violence against police falls. And there is no increase in officer-caused deaths. In terms of magnitudes, the fall in crime is substantial given the cost: the Warwick-NHH paper finds the value of reduced crime, using standard metrics, is roughly 20 times the cost of the military equipment. Interestingly, places that get this equipment also hire fewer cops, suggesting some sort of substitutability between labor and capital in policing. The one negative finding, in the Tennessee paper, is that arrests for petty crimes appear to rise in a minor way.

Both papers are very clear that these results don’t mean we should militarize all police departments, and both are clear that in places with poor community-police relations, militarization can surely inflame things further. But the pure empirical estimates, that militarization reduces crime without any objectively measured cost in terms of civic unhappiness, are quite mind-blowing in terms of changing my own priors. It is similar to the Doleac-Hansen result that “Ban the Box” leads to worse outcomes for black folks, for reasons that make perfect game theoretic sense; I couldn’t have imagined Ban the Box was a bad policy, but the evidence these serious researchers present is too compelling to ignore.

So how are we to square these results with the well-known problems of police violence, and poor police-citizen relations, in the United States? Consider Roland Fryer's recent paper on police violence and race, where the big predictor of experiencing police violence is simply having an interaction with police at all, not individual characteristics. A unique feature of the US compared to other developed countries is that there really is more violent crime, hence police are rationally more worried about violence, and hence people who interact with police are in turn worried about violence from police. Policies that reduce the extent to which police and civilians interact in potentially dangerous settings weaken this cycle. You might argue – I certainly would – that policing is no more dangerous than, say, professional ocean fishing or taxicab driving, and you wouldn't be wrong. But as long as the perception of a possibility of violence remains, things like military-grade vests or vehicles may help break the violence cycle. We shall see.

The two AEJ: Policy papers are "Policeman on the Frontline or a Soldier?" (V. Bove & E. Gavrilova) and "Peacekeeping Force: Effects of Providing Tactical Equipment to Local Law Enforcement" (M. C. Harris, J. S. Park, D. J. Bruce and M. N. Murray). I am glad to see that the former paper, particularly, cites heavily from the criminology literature. Economics has a reputation in the social sciences both for producing unbiased research (as these two papers, and the Fryer paper, demonstrate) and for refusing to acknowledge quality work done in the sister social sciences, so I am particularly glad to see the latter problem avoided in this case!


Site Note: 2014 Job Market

Apologies for the slow rate of posting throughout the fall. Though I have a big backlog of posts coming, the reason for the delay has been that I’ve spent the last few months preparing for the job market. As the academic job market is essentially a matching problem, below I describe my basic research program; if you happen to be a reader of the blog on the “demand side” from an economics department, business school or policy school, and my research looks like it might be a good fit, I would be eternally grateful for a quick email (k-bryan AT kellogg.northwestern.edu).

Most broadly, I study the process of innovation. Innovation (inclusive of the diffusion of technology) is fundamental to growth, and growth is fundamental to improving human welfare. What, then, could be a more exciting topic to study? Methodologically, I tend to find theory the most useful way to answer the questions I want to answer. Because the main benefit of theory is generalizability (to counterfactuals, welfare estimates, and the like), I try to ensure that my theory is well grounded both by using detailed historical examples within the papers, and by drawing heavily on existing empirical work by economists, historians and sociologists. Beyond innovation, I have a side interest in pure theory and in the history of thought; both these areas provide the “tools” that an applied theorist uses.

Recently, I’ve worked primarily on two questions: why do firms work on the type of research they do, and how does government policy affect the diffusion of invention? On the first question, I have three papers.

My coauthor Jorge Lemus and I have developed an analytically tractable model of direction choice in invention, where there are many inventions available at any time, and successful invention by some firm affects which research targets become available next. We shut down all sources of inefficient firm behavior in the existing literature, and still find three sources of inefficiency generated by direction choice alone. We fully characterize how this inefficiency operates on a number of “invention graphs”. This is actually a pretty cool model which is really easy to use if you are familiar with the patent race literature.

In my job market paper, I use the invention graph model to study how government R&D policy works when firms may have distorted directional incentives. The principal result is a bit sobering: many policies like patents and R&D tax credits that are effective at increasing the rate of invention on socially valuable projects will, in general, exacerbate distortions in the direction of invention. Essentially, firms may distort toward projects that are easy even though those projects are not terribly profitable. With this type of distortion, any policy that increases the payoff to R&D generally will increase the payoff of the inefficient research target by a larger percentage than the payoff of the efficient research target. I show how these policy distortions may have operated in the early nuclear reactor industry, where historians have long worried that the eventual dominant technology, light water reactors, was ex-ante inefficient.

My third paper on directional inefficiency is more purely historical. How can a country invent a pioneer technology but wind up having no important firms in the commercial industry building on that technology? I suggest that commercial products are made up of a set of inventions. A nation may have within its borders everything necessary for a technological breakthrough, but lack comparative advantage in the remaining steps between that breakthrough and a commercial product; roughly, Teece's notion of complementary assets operates at the national and not only the firm level. Worse, patent policy can give pioneer inventors incentives to limit the diffusion of knowledge necessary for the eventual commercial product. I show how these ideas operate in the airplane industry, where ten years after the Wright Brothers' first flight, there was essentially no domestic production of US-designed planes.

More squarely on diffusion, I have three papers in progress. The first, which we are preparing for a medical journal and hence cannot put online, suggests that open access to medical research makes that research much more likely to be used in an eventual invention. My coauthor Yasin Ozcan and I generate this result from a dataset that merges every research article in 46 top medical journals since 2005 with every US patent application since that date; if you've ever worked with the raw patent application files, for which there is no standard citation practice, you will understand the data challenge here! We have a second paper in the works taking this merged dataset back to the 1970s, with the goal of providing a better descriptive understanding of the trends in lab-to-market diffusion. My third paper on diffusion is more theoretical. I consider processes that diffuse simultaneously across multiple networks (or a "multigraph"): inventions may diffuse via trade routes or pure geography, recessions may spread across geography or the input-output chain, diseases may spread via sexual or non-sexual contact, and so on. I provide an axiomatic measure of the "importance" of each network even when only a single instance of the diffusion can be observed, and show how this measure can answer counterfactual questions like "what if the diffusion began in a different place?"

My CV, a teaching statement, and an extended research statement can be found on my academic website. I am quite excited about the research program above; I would love to chat if you are at a convivial department filled with bright students and curious academics that is hiring this year.

A Brief Word on the 2013 Nobel

The 2013 Economics Nobel was awarded this morning to Fama, Shiller and Hansen: the first two are major figures in finance, and the third is a somewhat odd choice who, I would think, is much more associated with econometrics and general macro via his introduction of GMM; indeed, the Nobel press conference seemed to be straining to explain how Hansen fits into the trio. Even stranger, Hansen would have been a natural fit only a couple of years ago when the prize went to Sargent and Sims. In any case, he is fully deserving.

Finance is well outside my area of expertise, but I do recommend Shiller's 2003 article in the Journal of Economic Perspectives, which nicely summarizes persistent anomalies in the financial world. Note that, as Shiller also points out, irrationality or behavioral quirks on the part of investors are not sufficient to generate deviations from the standard efficient markets model. We must also have some reason that rational traders do not simply take advantage of investors with these deviations. There is sufficient reason to believe that Malkiel's prescription from the 1970s – that you as an individual investor are wasting your time trying to predict stocks – is correct (though Jeff Ely pointed out in an offhand comment that if you are risk-neutral and transaction costs are small enough to ignore, efficient markets also imply that whatever crazy trading strategy you want to use will generate identical returns in expectation!).

So why is it that the market doesn’t just knock out the idiots, in Larry Summers’ diplomatic phrasing? One pretty compelling reason, due to Summers and three coauthors, is roughly that noise traders behave unpredictably, hence it is risky to bet against them, hence you need to be compensated for doing so if you are rational, hence prices can deviate from fundamentals. Alp Simsek has a very nice recent paper on when optimists will be able to get loans to continue bidding up the bubble; consider the housing crisis, where clearly “irrational exuberance” required the exuberant to somehow get loans from supposedly staid bankers. There is also a literature about how limits on the ability to short assets restrict the rational from betting the other way, but I have not followed the extent to which these limits are empirically important.

(Even more fundamental than puzzles about why prices deviate from a random walk is the puzzle of why trades in an efficient market happen at all. Take any common-value good like a stock. If I, given my information, believe the future dividend payments mean the stock is worth 5 bucks, and you offer it at 4, then I should infer that you know something I don't; this is the fundamental principle from Aumann and Milgrom/Stokey. There are many solutions, but I'm partial to the adverse selection models begun with Glosten and Milgrom, which generate bid-ask spreads in perfectly competitive broker markets.)

Nobel 2012, Roth and Shapley

There will be many posts summarizing the modern market design aspect of Roth and Shapley, today's winners of the Econ Nobel. So here let me briefly discuss certain theoretical aspects of their work, and particularly my read of the history as it relates to game theory more generally. I also want to point out that the importance of the matching literature goes way beyond the handful of applied problems (school choice, etc.) with which most people are familiar.

Pure noncooperative game theory is insufficient for many real-world problems, because we think that single-person deviations are not the only deviations worth examining. Consider marriage, as in Gale and Shapley's famous 1962 paper. Let men and women be matched arbitrarily. Do we find such a set of marriages reasonable, meaning an "equilibrium" in some sense? Assuming that every agent prefers being married (to anyone) to being unmarried, then any set of marriages is a Nash equilibrium. But we find it reasonable to think that two agents, a man and a woman, can commit to jointly deviate, breaking their current marriages and forming a new one together. Gale and Shapley prove that there always exists a match that is "pairwise stable", meaning that no man and woman wish to jointly deviate in this way.
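Their existence proof is constructive: the man-proposing deferred acceptance algorithm always terminates at a pairwise stable match. Here is a minimal Python sketch, with toy preference lists invented purely for illustration.

```python
# A minimal sketch of man-proposing deferred acceptance (Gale-Shapley 1962).
# The preference lists at the bottom are invented purely for illustration.
def deferred_acceptance(men_prefs, women_prefs):
    """Return a pairwise-stable match as a dict mapping woman -> man."""
    rank = {w: {m: i for i, m in enumerate(prefs)}       # woman w's ranking of each man
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}              # index of the next woman m will propose to
    free_men = list(men_prefs)
    engaged = {}                                         # woman -> man she tentatively holds

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]                 # m proposes to his best remaining option
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                               # w holds her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])                  # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free_men.append(m)                           # w rejects m; he will propose elsewhere
    return engaged

men = {"arthur": ["xena", "yara"], "bart": ["yara", "xena"]}
women = {"xena": ["bart", "arthur"], "yara": ["arthur", "bart"]}
print(deferred_acceptance(men, women))                   # arthur-xena and bart-yara: no pair wants to deviate
```

The man-proposing version happens to return the stable match that every man weakly prefers to any other stable match, one of many structural facts about the set of stable matches.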

Now, if you know your game theory, you may be thinking that such deviations sound like a subset of cooperative games. After all, cooperative (or coalitional) games involve checking for deviations by groups of agents, who may or may not be able to arbitrarily distribute their joint utility within the coalition; how that cooperation is actually carried out is left unmodeled rather than derived from noncooperative play. It turns out (and I believe this is a result due to Mr. Roth, though I'm not sure) that pairwise stable matches are equivalent to the (weak) core of the corresponding cooperative game in one-to-one or many-to-one matching problems. That means that checking deviations by a single man-woman pair is equivalent to checking deviations by coalitions of any size. More importantly, this link between the core and pairwise stability allows us to use many results from cooperative game theory, known since the 1950s and 60s, to answer questions about matching markets.

Indeed, the link between cooperative games and matching, and between cooperative and noncooperative games, allows for a very nice mathematical extension of many well-known general problems: the tools of matching are not restricted to school choice and medical residents, but can answer important questions about search in labor markets, about financial intermediation, and so on. But to do so requires reframing matching problems as mechanism design problems with heterogeneous agents and indivisibilities. Ricky Vohra, of Kellogg and the Leisure of the Theory Class blog, has made a start at giving tools for such a program in his recent textbook; perhaps this post can serve as a siren call across the internet for Vohra and his colleagues to discuss some examples on this point on their blog. The basic point is that mechanism design problems can often be reformulated as linear programs with a particular set of constraints (say, integer solutions, or "fairness" requirements, and so on). The most important constraints, surely, are those arising from incomplete information, which allows for strategic misreporting, as Roth discovered when he began working on "strategic" matching theory in the 1980s.

My reading of much of the recent matching literature – and there are obviously exceptions, Roth and Shapley among them, as well as younger researchers like Kojima – is that many applied practitioners do not understand how tightly linked matching is to classic results in mechanism design and cooperative games. I have seen multiple examples, published in top journals, of awkward proofs related to matching that seem to completely ignore this historical link. In general, economists are very well trained in noncooperative game theory, but less so in the other two "branches", cooperative and evolutionary games. Fixing that imbalance is worthwhile.

As for extensions, I offer you a free paper idea, which I would be glad to discuss at greater length. "Repeated" matching has been studied far less often. Consider school choice. Students arrive every period to match, but schools remain in the game every period. In theory, I can promise the schools better matches in the future in exchange for not deviating today. The use of such dynamic but consistent promises is vastly underexplored.

Finally, who is left for future Nobels in the areas of particular interest to this blog, micro theory and innovation? In innovation, the obvious names are Rosenberg, Nelson and Winter; Nelson and Winter's evolutionary econ book is one of the most cited texts in the history of our field, and that group will hopefully win soon as they are all rather old. Shapley's UCLA colleagues Alchian and Demsetz are pioneers of agency theory. I can't imagine that Milgrom and Holmstrom will be left off the next micro theory prize given their many seminal papers (along with Myerson, they made the "new" game theory of the 70s and 80s possible!), and a joint prize with either Bob Wilson or Roy Radner would be well deserved. An econ history prize related to the Industrial Revolution would have to include Joel Mokyr. There are of course many more who could win, but these five or six prizes seem the most likely to come next.

"The Future of Taxpayer-Funded Research," Committee for Economic Development (2012)

It's one month after SOPA/PIPA. Congress is currently considering two bills. The Federal Research Public Access Act would require federal funders to insist on open-access publication of funded research papers after an embargo period. The NIH currently has such a policy, with a one year embargo. As of now, the FRPAA has essentially no chance of passing. On the other hand, the Fair Copyright in Research Works Act would reverse the current NIH policy and ban any other federal funders from setting similar access mandates. It has heavy Congressional support. How should you think of this as an economist? (A quick side note for economists: the world we live in, where working papers are universally available on authors' personal websites, is almost unheard of in other fields. Only about 20% of academic papers published last year were available online in ungated versions. That share is about 100% in economics, high energy physics and a few other fields, and close to 0% otherwise.)

I did some consulting in the fall for a Kauffman-funded CED report released yesterday called The Future of Taxpayer-Funded Research. There is a simple necessary condition that any government policy concerning new goods should not violate: call it The First Law of Zero Marginal Product Goods. The First Law says that if some policy increases consumption of something with zero marginal cost (an idea, an academic paper, a song, an e-book, etc.), then a minimum necessary condition for restricting that policy is that the variety of new goods being created must decrease. So if music piracy increases the number of songs consumed (and the number of songs illegally downloaded in any period of time is currently much higher than worldwide sales during that period), a minimum economic justification for a government crackdown on piracy is that the number of new songs created has decreased (in this case, it has not). Applying The First Law to open access mandates, a minimum economic justification for opposing such mandates is that either open access has no benefits, or that open access will make peer reviewed journals economically infeasible. To keep this post from becoming a mess of links, I leave out citations, but you can find all of the numbers below in the main report.

On the first point, open access has a ton of benefits even when most universities subscribe to nearly all the important journals. It "speeds up" the rate at which knowledge diffuses, which is important because science is cumulative. It helps solve access difficulties for private sector researchers and clinicians, who generally do not have subscriptions due to the cost; this website is proof that non-academics have interest in reading academic work, as I regularly receive email from private sector workers or the simply curious. Most importantly, even the minor access difficulties caused by the current gated system – having to go to a publisher website, clicking through "Accept terms & conditions", and so on, rather than just reading a pdf – matter. Look at the work by Fiona Murray, Scott Stern, Heidi Williams and others, much of which has been covered on this website: minor restrictions on ease of access can cause major efficiency losses in a world where results are cumulative. Such effects are only going to become more important as we move into a world where computer programs search, synthesize and translate research results.

The second point, whether open access makes peer review infeasible, is more important. The answer is that open access appears to have no such effect. Over time, we have seen many funders and universities, from MIT to the Wellcome Trust, impose open access mandates on their researchers. This has, to my knowledge, not led to the shutdown of even a single prominent journal. Not one. Profits in science publishing remain really, really high, as you'd expect in an industry with a lot of market power due to lock-in. Cross-sectionally, there is a ton of heterogeneity in norms: every high energy physicist and mathematician puts their work on arXiv, and every economist posts working papers online, yet none of this has led to the demise of peer reviewed journals and their dissemination function in those fields. Even within fields, radically different policies have proven sustainable. The New England Journal of Medicine makes all articles freely accessible after 6 months. The PLoS journals are totally open access, charging only a publication fee of $1350 upon acceptance. Other journals keep their entire archive gated. All are financially sustainable models, though of course they may differ in terms of how much profit the journal can extract.

One more point, and it's an important one. Though the American Economics Association has not taken a position on these bills – as far as I know, the AEA does very little lobbying at all, keeping its membership fee low, for which I'm glad! – many other scholarly societies have. And I think many of their members would be surprised that their own associations oppose public access, a policy that can safely be said to be supported by nearly all of those members. Here is a full list of responses to the recent White House RFI on public access mandates. The American Anthropological Association opposes public access. The American Sociological Association and the American Psychological Association both strongly oppose public access. These groups all claim, first, that there is no access problem to begin with – simply untrue for the reasons above, all of which are expanded on in the CED paper – and, second, that open access is incompatible with social science publishing, where articles are long and even rejected articles regularly receive many comments from peer review. But we know from the cross section that this isn't true. Many learned societies publish open access journals, even in the social sciences, and many of them don't charge any publication fee at all. The two main societies in economics, thankfully, both publish OA journals: the AEA's Journal of Economic Perspectives, and the Econometric Society's TE and QE. And even non-OA economics journals essentially face an open access mandate with a 0-month embargo, since everyone puts their working papers online. Econ is not unique in the social sciences: the Royal Society's Philosophical Transactions, for instance, is open access. If you're a member of the APA, ASA or AAA, you ought to voice your displeasure!

http://www.ced.org/images/content/issues/innovation-technology/DCCReport_Final_2_9-12.pdf (Final published version of CED report – freely available online, of course!)

A Note on Openness

While the NBER continues its rather ridiculous policy of gating access to NBER Working Papers – they are nearly all available freely after a quick search on Google Scholar, so why not just make the link in my NBER New Papers emails go to a pdf I can read? – Yale’s wonderful Cowles Foundation has taken a great step in the opposite direction and made a huge number of their classic papers and monographs freely available online. A few of the books you might be particularly interested in if you like the regular content on this site are Marschak and Radner’s legendary work on team incentives and the internal organization of firms, and Debreu’s Theory of Value (which still might be suitable as a textbook on general equilibrium analysis).

The full collection of Cowles’ work online can be found here.

“How to Count to One Thousand,” J. Sobel (1992)

You have a stack of money, supposedly containing one thousand coins. You want to make sure that count is accurate. However, at each step of the counting, with probability p you will make a mistake, and you will know you've made the mistake ("five hundred and twelve, five hundred and thirteen, five hundred and… wait, how many was I at?"). What is the optimal way to count the coins? And what does this have to do with economics?

The optimal way to count to one thousand turns out to be precisely what intuition tells you. Count a stack of coins, perhaps forty of them, set that stack aside, count another forty, set that aside, and so on, then count at the end to make sure you have twenty-five stacks. If your probability of making a mistake is very high, you may wish to count only ten coins at a time, set them aside, group those stacks ten at a time into superstacks of one hundred, and then count at the end to make sure you have ten superstacks. The higher the number of coins, and the higher your probability of making a mistake, the more "levels" you will need to build. Proving this is a rather straightforward dynamic programming exercise.
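To see the tradeoff numerically, here is a rough single-level sketch of my own (it is not Sobel's model, just an illustration): a mistake forces you to restart whatever you are currently counting, and we search over stack sizes for the one that minimizes expected total counting steps.

```python
# Rough illustration, not Sobel's formulation: count N coins in stacks of size s,
# where each counting step fails with probability p and a failure forces you to
# restart the stack you are currently counting; then count the stacks themselves.
def expected_steps(n, p):
    """Expected counting steps for n items when any error restarts the count.

    With i items counted, E_i = 1 + (1 - p) * E_{i+1} + p * E_0 and E_n = 0.
    Writing E_i = a_i + b_i * E_0 and iterating down from i = n solves for E_0.
    """
    a, b = 0.0, 0.0
    for _ in range(n):
        a, b = 1 + (1 - p) * a, (1 - p) * b + p
    return a / (1 - b)

def one_level_cost(N, s, p):
    """Count N/s stacks of size s, then count the stacks at the end."""
    return (N // s) * expected_steps(s, p) + expected_steps(N // s, p)

N, p = 1000, 0.01
sizes = [s for s in range(1, N + 1) if N % s == 0]             # stack sizes that divide 1000 evenly
best = min(sizes, key=lambda s: one_level_cost(N, s, p))
print(best, one_level_cost(N, best, p), expected_steps(N, p))  # modest stacks beat one heroic count of 1000
```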

Imagine you've hired workers to perform these tasks. If tasks cannot be subdivided, the fastest workers should be assigned to count the first layer of stacks (since they will be repeating the task most often after mistakes are made) and the most accurate workers should be assigned to the later counts (since they "destroy more value" when a mistake is made, as in Kremer's O-Ring paper). The counting process will suffer from decreasing returns to scale – the more coins to count, the more value is destroyed on average by a mistake. With optimal subdivision, the number of extra counts needed to make sure the number of stacks is accurate grows more slowly than the number of coins to be counted, and the optimal stack size is independent of the total number of coins, so counting technology has almost-constant returns to scale.

The basic idea here tells us something about the boundary and optimal organization of a firm, but in a very stylized way. If workers only imperfectly know when mistakes are made, the problem is more difficult, and is not solved by Sobel. If workers definitely do not know when a mistake is made, there still can be gains to subdividing. Sobel mentions a parable about prisoners told by Rubinstein. There are two prisoners who want to coordinate an escape 89 days from now. Both prisoners can see the sun out their window. The odds of one of the two mistaking the day count after that long are quite high, causing a lack of coordination. If both prisoners can also see the moon, though, they need only count three full moons plus five days.

http://www.jstor.org/stable/pdfplus/2234847.pdf?acceptTC=true (JSTOR gated version – I couldn’t find an ungated copy. Prof. Sobel, hire one of your students to put all of your old papers up on your website!)

“Secrets,” D. Ellsberg (2002)

Generally, the public won’t know even the most famous economists – mention Paul Samuelson to your non-economist friends and watch the blank stares – but a select few manage to enter the zeitgeist through something other than their research. Friedman had a weekly column and a TV series, Krugman is regularly in the New York Times, and Greenspan, Summers and Romer, among many others, are famous for their governmental work. These folks at least have their fame attributable to their economics, if not their economic research. The real rare trick is being both a famous economist and famous in another way. I can think of two.

First is Paul Douglas, of the Cobb-Douglas production function. Douglas was a Chicago economist who went on to become a long-time U.S. Senator. MLK Jr. called Douglas "the greatest of all Senators" for his work on civil rights. In '52, with Truman's popularity at a nadir, Douglas was considered a prohibitive favorite for the Democratic nomination had he chosen to run. I think modern-day economists would very much like Douglas' policies: he was a fiscally conservative, socially liberal reformist who supported Socialists, Democrats and Republicans at various times, generally preferring the least-corrupt technocrat.

The other famous-for-non-economics economist, of course, is Daniel Ellsberg. Ellsberg is known to us for the Ellsberg Paradox, which in many ways did more than the work of Tversky and Kahneman to push decision theorists toward non-expected utility models. Ellsberg would have been a massive star had he stayed in econ: he got his PhD in just a couple of years, published his undergrad thesis ("The Theory of the Reluctant Duelist") in the AER and his PhD thesis in the QJE, and was elected to the Harvard Society of Fellows, joining Samuelson and Tobin in that still-elite group.

As with many of the “whiz kids” of the Kennedy and Johnson era, he consulted for the US government, both at RAND and as an assistant to the Undersecretary of Defense. Government was filled with theorists at the time – Ellsberg recounts meetings with Schelling and various cabinet members where game theoretic analyses were discussed. None of this made Ellsberg famous, however: he entered popular culture when he leaked the “Pentagon Papers” early in the Nixon presidency. These documents were a top secret, internal government report on presidential decisionmaking in Vietnam going back to Eisenhower, and showed a continuous pattern of deceit and overconfidence by presidents and their advisors.

Ellsberg's description of why he leaked the data, and the consequences thereof, are interesting in and of themselves. But what interests me in this book – from the perspective of economic theory – is what the Pentagon Papers tell us about secrecy within organizations. Governments and firms regularly make decisions, as an entity, where optimal decisionmaking depends on correctly aggregating information held by various employees and contractors. Standard mechanism design is actually very bad at dealing with desires for secrecy within this context. That is, imagine that I want to aggregate information but I don't want to tell my contractors what I'm going to use it for. A paper I'm working on currently says this goal is basically hopeless. A more complicated structure is one where a firm has multiple levels (in a hierarchy, let's say), and the bosses want some group of low-level employees to take an action, but don't want anyone outside the branch of the organizational tree containing those employees to know that such an action was requested. How can the boss send the signal to the low-level employees without those employees thinking their immediate boss is undermining the CEO? Indeed, something like this problem is described in Ellsberg's book: Nixon and Kissinger were having low-level soldiers fake flight reports so that it would appear that American planes were not bombing Laos. The Secretary of Defense, Laird, did not support this policy, so Nixon and Kissinger wanted to keep it secret from him. The jig was up when some soldier on the ground contacted the Pentagon because he thought that his immediate supervisors were bombing Laos against the wishes of Nixon!

In general, secrecy concerns make mechanism problems harder because they can undermine the use of the revelation principle – we want the information transmitted without revealing our type. More on this to come. Also, if you can think of any other economists who are most famous for their non-economic work, like Douglas and Ellsberg, please post in the comments.

(No link – Secrets is a book and I don’t see it online. Amazon has a copy for just over 6 bucks right now, though).

“Understanding PPPs and PPP-based National Accounts,” A. Deaton & A. Heston (2010)

Every economist knows what PPP adjustments are: we adjust consumption/GDP/whatever comparisons to account for differences in the price of nontradables and to remove the effect of economically insignificant swings in market exchange rates. But how exactly is this done? Is the data reliable? What precautions should be taken? Anyone who has seen how economic data is created – I've worked briefly at the Fed and at the Dept of Commerce – is rightfully worried: even simple statistics in a developed country like the US are often surprisingly inaccurate. In the new AEJ: Macro, Deaton and Heston explain what procedures were used in the recent 2005 International Comparison Project, which gathers the prices used in World Bank and PWT data; you may remember that China's measured GDP was nearly halved as a result of this data.

First, we don't even have "a" definition of PPP. GEKS PPP (usually EKS, though Deaton and Heston think Gini should be credited for the idea as well) ensures transitivity of bilateral price levels, and in a limited sense allows welfare comparisons if we assume identical preferences across any two countries, but it does not allow GDP to be disaggregated into PPP-adjusted consumption, investment, and so on. GK PPP does allow such disaggregation, but in so doing overstates the value of nontraded goods in poor countries, therefore overstating living standards in poor countries; further, GK has no link to welfare theory.
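For reference, the GEKS construction (as I understand the standard definition; this is not quoted from the paper) makes every bilateral comparison transitive by taking the geometric mean of Fisher-index comparisons routed through each possible bridge country i:

P^{GEKS}_{jk} = \prod_{i=1}^{M} \left( P^{F}_{ji} \, P^{F}_{ik} \right)^{1/M}

where P^{F}_{ji} is the bilateral Fisher price index between countries j and i and M is the number of countries in the comparison. Because every pair is compared through the same set of bridges, and the Fisher index satisfies P^{F}_{il} P^{F}_{li} = 1, the resulting parities satisfy P^{GEKS}_{jk} = P^{GEKS}_{jl} \cdot P^{GEKS}_{lk}.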

Once an index has been selected, the data themselves are problematic. How do we account for different consumption bundles in different regions (the authors use Ethiopian teff and Thai rice as an example of the bilateral problem)? First compute PPP within regions with similar product availability, then use a "ring" of countries with good data availability to link the regions. Even if price data is good, is the underlying GDP calculation in poor countries any good? Probably not. How do we account for services? This is generally problematic, though some "quality adjustments", such as adjusting education for internationally comparable test scores, were being made as of 2005. Are prices nationally representative, or are only urban areas sampled? Prices are not representative in many countries, particularly China, where only 11 cities were sampled. How do we adjust for quality? Each good is very specifically described in terms of packaging and content, though this specificity leads to problems of data availability.

The list of problems is long. Should we worry? When it comes to claims like "variable x is important for growth based on this regression using PPP data, and y is not," the above data problems can obviously be very important. But I think the "smell test" generally works: I travel heavily and generally find that when countries "feel richer", they tend to be so under PPP income per capita comparisons, so there must be some value in the exercise. On the other hand, these types of data problems are a major reason I see my future work as lying in theory rather than empirics!

http://www.princeton.edu/~deaton/downloads/deaton_heston_complete_nov10.pdf (Final WP – published in AEJ: Macro 2.4)

A Useful Link

As is probably clear from my comments here, I think philosophy, and in particular epistemology, is an area that is both incredibly important for social scientists and also, in general, a source of woeful ignorance among them. Given that the study of causal relations is a huge part of economics, it blows my mind how many economists are unfamiliar with really, really basic philosophical arguments like preemption or the link between causality and counterfactuals.

Luckily, the Internet continues to pay dividends in its role as an information diffuser with Philosophy TV. This week's episode is a conversation between two well-known philosophers, Ned Hall and L.A. Paul, on what contemporary philosophy thinks about causality. Well worth a watch, along with many others on the site. And with that, a promise from me to severely limit posts here that do not directly comment on economic research – I'm a good Ricardian and therefore believe in specialization!
