Category Archives: Political Economy

“A Theory of Democratic Transitions,” D. Acemoglu & J. Robinson (2001)

In this 2001 AER, Acemoglu and Robinson propose a simple theoretical model of coups and revolutions meant to explain the economic aspects of political transition and consolidation. The basic tradeoff is straightforward: in a democracy, the proletariat must set taxes low enough that the elite are not tempted to overthrow the democratic government in order to escape onerous taxation, and in a dictatorship, the elite must provide sufficient transfers to the proletariat that they do not begin a revolution. Coups are uncommon because a coup leads to a one-time downward shock to the economy. Revolutions are uncommon because they in turn lead to temporarily lower output, and because the elite can coopt revolutions by extending the democratic franchise, which de facto allows commitment to higher transfers to the poor in the future by shifting the median voter.

More extensively, the model is as follows. There are two classes of fixed size, an elite and a proletariat. The elite are in the minority, and at time zero, the elite control political power. The level of income is stochastic, with two states, an uncommon low level (recession) and a common high level (“normal times”). A stock of assets, with higher levels held by members of the elite, provides the stochastic income. Taxation with a convex deadweight loss can be imposed on income, with revenue transferred equally to all members of society; since their income is lower, the proletariat prefer higher taxes and transfers than the elite, who prefer none. In any period, the poor may begin a permanent revolution, seizing an exogenous portion of the elite’s assets forever, at the cost of destroying an exogenous amount of output in the first period of the revolution. The elite can voluntarily extend democracy in order to prevent revolutions; in that case, the median voter rather than the elite will set the tax rate. If the current state is democracy, the elite can mount a coup, at the cost of destroying an exogenous portion of the economy’s output in the period of the coup. The equilibrium concept is Markov perfection, where each agent chooses a (possibly mixed) strategy in each of six states: dictatorship, democracy, and revolutionary government, each under recession and under normal times. The action space includes whether to mount coups or revolutions, whether to extend the democratic franchise, and what tax rate to set. Agents are not myopic: they maximize their total future welfare, conditional on equilibrium Markov perfect actions by both players. There is no coordination problem: every member of the proletariat agrees on the optimal action, as does every member of the elite. Assumptions on parameters ensure that coups will not take place during normal times, and that no transfers will be offered to the poor in normal times.

Since there is no threat of coup in normal times, a democratic government (or a revolutionary government) will choose taxes and transfers in order to maximize the income of the proletariat. Coups are more likely when recessions are severe, since the income destroyed by a coup (a fraction of current output) is lower in absolute terms, yet the gains from not having to offer transfers in the future are unchanged. Coups are also more likely in unequal societies – where inequality is defined in terms of assets, though this maps directly into pre-tax income – because the optimal tax rate in democracy is higher. In particularly unequal societies with particularly strong recessions, democratic governments cannot set their optimal tax rate: rather, they lower taxes during recessions because they are worried about a coup occurring. When the level of inequality gets very high, the democratic government cannot prevent coups during recessions even with zero taxes.
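To see the inequality comparative static concretely, here is a minimal sketch of the median-voter tax choice (a quadratic deadweight loss of my own choosing, not the paper's exact functional forms):

```python
# Median-voter tax choice with a quadratic deadweight loss (toy version,
# not Acemoglu and Robinson's exact functional forms). The poor receive
# (1 - tau) * y_p + (tau - tau**2 / 2) * y_bar: after-tax income plus the
# lump-sum transfer net of the deadweight loss.
def preferred_tax(y_p, y_bar):
    """Tax rate maximizing the poor's income; the FOC gives tau = 1 - y_p / y_bar."""
    return max(0.0, 1.0 - y_p / y_bar)

# The further the poor's income falls below the mean, the higher their
# preferred tax, and hence the more the elite gain from a coup.
print(preferred_tax(0.8, 1.0))  # mild inequality
print(preferred_tax(0.3, 1.0))  # severe inequality: higher democratic tax rate
```

Higher asset inequality maps into a lower y_p relative to the mean, and thus a higher democratic tax rate, which is exactly why coups pay off more in unequal societies.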

If the economy is in dictatorship, the elite may wish to give transfers to the poor during recessions to prevent a revolution which could expropriate the elite's income forever. The elite always want to prevent revolutions, so they will do this whenever the economy is in recession. However, for some parameter values, even offering the transfer-maximizing tax rate to the poor is not enough to prevent a revolution. In that case, the democratic franchise will be extended; this allows the elite to commit to higher taxes in nonrecessionary future periods by shifting the median voter, although coups may occur during future recessions, and the proletariat take that into account when deciding whether to revolt in the face of an offer of democracy. In particular, when recessions are relatively common, it is easier for the elite to avoid extending democracy, because the elite can credibly commit to sharing wealth more often: they will always do so in recessionary periods due to the threat of revolution.

There are then, depending on parameter values, four paths for society: the dictatorship remains forever; society democratizes in the first recession and thereafter sets transfers optimally from the perspective of the poor; society democratizes but sets taxes lower than the level optimal for the poor in order to preemptively prevent coups; or society oscillates between democratic and nondemocratic government. The last two cases are more likely in more unequal societies with more severe recessions.

A final section of the paper discusses how asset redistribution may be used to consolidate democracy or dictatorship. Redistribution is irreversible and causes a negative shock to the total level of assets. Democracies can redistribute assets in the first period they come to power. In particularly unequal societies, the democratic government will redistribute in order to prevent future coups and thus consolidate democracy, since coups are mounted more often in more unequal societies. This redistribution may actually make the democratic proletariat myopically worse off: in the absence of a coup threat, they would prefer to leave the land with the elite, tax them, and transfer income, since under some parameters the deadweight loss of taxation is smaller than the loss from assets destroyed during redistribution. Acemoglu and Robinson also consider the case where redistribution takes another period to implement; if a recession occurs during the implementing period, the elite may wish to mount a coup and reverse the land reform. For some parameters, the democratic government will choose to implement land reform which will fully stabilize democracy forever if the following period is nonrecessionary, but which will lead to a coup otherwise. Finally, if the current state is dictatorship and inequality is high, the dictators themselves may find it cheaper to redistribute assets rather than to extend the democratic franchise; with the post-redistribution state more equal, the threat of revolution is less salient, and hence following redistribution, taxes can be lower.

I’m not totally convinced either that political transitions are particularly driven by the business cycle (try explaining 1989 or 2011 that way!) or that, in such a model, the punishment-free Markov perfect equilibrium is the right refinement. That said, this is a solid parsimonious model in the usual Acemoglu fashion – that is one productive dude. (Final AER version; three cheers to Acemoglu for putting the final version on his website!)


“Learning While Voting: Determinants of Collective Experimentation,” B. Strulovici (2010)

Politics has a well-known status quo bias. Surely one could explain this as the result of some psychological factors. But can a preference for the status quo result from the voting mechanism itself, even if all voters are expected utility maximizers? Bruno Strulovici develops some results from optimal control (“If the proofs seem hard, it’s because they actually are rocket science”) to show that the answer is yes.

In a standard multi-armed bandit problem, agents choose whether to pull a “safe” arm with a known payoff or a “risky” arm with an unknown payoff. Pulling the risky arm has an option value, because if it turns out that the risky arm payoff is high, then I will get that high payoff from now until the game ends. If the risky arm payoff is low, then at some point I will just switch to the safe arm, and will play that arm forever since I never learn anything more about either arm. In the presence of externalities – we both pull arms and I can see the result of your pulls – there is too little experimentation since everyone wants to free ride. Results of this type are well known by now.
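The option value in this standard problem is easy to compute in the simplest version, where a single pull of the risky arm fully reveals its payoff (illustrative numbers of my own, not from any particular paper):

```python
# Two-armed bandit where a single pull of the risky arm reveals its payoff
# (high h or low l) for good; the safe arm pays s every period. Illustrative
# numbers of my own, not from any particular paper.
def value_safe(s, delta):
    return s / (1 - delta)

def value_experiment(p, s, h, l, delta):
    """Pull risky once: if high (probability p), play it forever; otherwise
    collect l once and switch to the safe arm forever."""
    return p * h / (1 - delta) + (1 - p) * (l + delta * s / (1 - delta))

s, h, l, delta = 1.0, 2.0, 0.0, 0.9
p = 0.2
myopic_risky = p * h + (1 - p) * l   # 0.4, below the safe payoff of 1.0
experiment = value_experiment(p, s, h, l, delta)
# Experimenting beats playing safe even though the risky arm looks worse today.
print(myopic_risky < s and experiment > value_safe(s, delta))
```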

Strulovici’s model is a little different. We all (in continuous time) are voting on whether society plays a safe arm or a risky arm, but the individual payoff from the risky arm is different for each individual. That is, some people are “winners” from a policy, and some are “losers”. Everyone begins unsure about which type they are, and with some Poisson process, winners receive information that they are indeed winners. Anyone who has not received news that they are a winner considers themselves a loser with more and more probability over time, simply by Bayes’ Law. Let x be the cutoff for what percentage of voters must approve the risky arm if we are to continue pulling it; in majority voting, x is just fifty percent. Note that there is no learning externality here: we all have independent types, and anyway, if society decides to play risky, then everyone in society is forced to play the risky arm.
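The no-news belief dynamics are just Bayes' Law, and can be sketched in a few lines (the prior and Poisson rate here are my own illustrative numbers):

```python
import math

# Bayes' Law for the no-news posterior: winners get news at Poisson rate lam,
# losers never do. Prior and rate are my own illustrative numbers.
def belief_winner(p0, lam, t):
    """Probability of being a winner given no news through time t."""
    no_news_if_winner = math.exp(-lam * t)
    return p0 * no_news_if_winner / (p0 * no_news_if_winner + (1 - p0))

p0, lam = 0.5, 1.0
# Silence is bad news: the posterior declines monotonically toward zero.
print([round(belief_winner(p0, lam, t), 3) for t in (0.0, 1.0, 2.0)])
```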

In such a world, there is too little experimentation, vis-a-vis the utilitarian social optimum. That is, there is a bias for status quo policies. Why? On the one hand, experimenting and finding out you are a winner is less valuable than in the single-agent bandit problem, because even if you pay experimentation costs and learn you are a winner, society may have enough non-winners that the majority votes for the safe arm at some later date. The discounted profits of learning you are a winner, then, are lower than in the single-agent problem. On the other hand, if you are more and more sure you are a loser, you will want to end the risky experiment quickly, because there is a chance that a sufficiently high number of other agents will later find out they are winners and therefore trap you, by majority vote, into playing the risky arm forever.

What if we, for instance, lowered or raised the cutoff at which the risky policy is continued? Instead of majority vote, we could require unanimity in order to keep implementing the risky arm. This will satisfy the potential losers: they never need fear that their vote to continue experimentation will trap them in a policy they don’t like. But it only makes things worse for potential winners: finding out I am a winner is even less valuable than in the majority rule case, since a single agent can end the risky policy which benefits me, and which I paid to learn benefits me. It turns out that for any fixed cutoff, there is suboptimal experimentation under some parameter values. This is worrying: majority rule, for example, violates what Strulovici calls nonadversity, the property that I cannot be made worse off by experimenting and finding out that I am a winner. Consider three voters, voting under majority rule on which arm to pull. If one agent receives notice that she is a winner, the other two know that if one of them also receives a winner signal, the risky arm will be pulled forever. In order to avoid being trapped in a policy they won’t like, these two remaining voters will stop the risky policy sooner. If learning is slow and the harm of being trapped in the risky policy when you are a loser is high, then the value of experimentation is negative even if, given your current Bayesian beliefs, the immediate payoff from experimentation is positive.

However, there is a way to save voting under experimentation: make the cutoff an increasing function of time. If you require more and more people to vote for the risky arm as time increases, in a particular parameter-dependent way, the socially optimal level of experimentation is achieved. The intuition here is that the number of sure winners – those that have received notification from the Poisson process that they are in fact winners – is an increasing function of time.

All of the results stated are robust to correlated types, and to making the revelation of who is and is not a winner private information (in the particular sense where the number of votes for the risky policy at any time is public knowledge). (Final WP – final version published in May 2010 Econometrica)

“Harmonization and Side Payments in Political Cooperation,” B. Harstad (2007)

Two phenomena are widespread in politics: policies with externalities are often required to be harmonized, and side payments are often not allowed. That is, two EU countries are forced to implement the same environmental regulation, without regard to local preferences. Also, one country or state is not allowed to request a payment from the other agent to go along with the scheme. More generally, you can think of “side payments” as a form of horse-trading: I will support environmental policy X which is optimal for you if you support trade policy Y which is optimal for me. Negotiation rules often limit the amount of such horse-trading. At first glance, these phenomena seem suboptimally restrictive. They also seem to have little in common with each other.

This 2007 AER by Harstad explains both phenomena. Consider a bargaining game, where each of two states chooses to buy a certain amount of the public good. The public good is not necessarily pure: rather, I get a percentage x, weakly above 50, of the public good that I buy, and I get 100-x of the public good that you buy. Utility from the public good is linear in the amount of the good multiplied by (privately known) bounded coefficients v(i). One unit of the public good costs 1. The total amount of public good is assumed to be capped at 1, and the coefficients v(i) are high enough that in the social optimum, each agent prefers a total amount of the public good equal to 1. Bargaining occurs over who pays for it. We consider the cases both with and without side payments.

At time 0, agent 1 makes an offer specifying what percentage of the public good should be bought by him, and what percentage by agent 2. Both agents discount at common discount factor delta. At any time after time 0, agent 2 can accept or reject the offer. If she rejects, then she proposes a new split. Following that second offer, agent 1 can, after any delay of his choosing, accept or reject, and make a new offer if he rejects. This continues until an offer is accepted. The equilibrium concept is sequential equilibrium satisfying the Cho-Kreps intuitive criterion. (A technical note: though time is continuous, the assumption that offers can only be made at discrete times, with arbitrarily small intervals between them, is necessary for the sequential equilibrium concept to have meaning. Subgames in continuous time without such an assumption are often a giant mathematical mess.)

There is a unique equilibrium. If agent 1 has the highest value possible for the public good, he proposes at time 0 an equal split. This is accepted immediately by agent 2 if agent 2 is also of the highest type. Otherwise, agent 2 will delay a sufficiently long time, and then propose a split where agent 1 pays a greater share of the good. This is credible because agent 2 discounts time: she will only delay if she in fact has a lower value for the public good than agent 1. Likewise, if agent 1 has the lowest possible type, he will propose that agent 2 pay for everything, waiting a sufficiently long time after time 0 to make the offer. After the offer is made, an agent 2 of the highest possible type will accept immediately; otherwise agent 2 will delay long enough that her proposal for a “more just split” is seen as credible. In any case, the final split is precisely what would be achieved in the unique equilibrium if there were no imperfect information; in some sense, this means that no one will want to renegotiate. But note that with perfect information, we would agree on the split immediately without any delay, so welfare would be higher.
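The logic of delay as a credible signal can be illustrated with a stylized two-type calculation (my own numbers, and a simplification of the actual continuous-time game):

```python
import math

# Stylized separating delay (my own numbers; a simplification of the actual
# game). Accepting a split where you pay share k at time t is worth
# delta**t * (v - k) to a type who values the unit of public good at v.
v_h, v_l = 2.0, 1.2      # high and low valuations
k_eq, k_low = 0.5, 0.3   # equal split vs. the split favoring the low type
delta = 0.95

# The delay t* that makes the high type exactly indifferent between taking
# the equal split now and the favorable split after waiting:
t_star = math.log((v_h - k_eq) / (v_h - k_low)) / math.log(delta)

low_waits = delta**t_star * (v_l - k_low)
# Single crossing: at t*, the low type strictly prefers to wait, so delay
# credibly signals a low valuation.
print(t_star > 0 and low_waits > v_l - k_eq)
```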

What if the law required harmonized policy where each agent contributes equally? In that case, agent 1 proposes and agent 2 accepts right away. There is no point in delaying (if we are using the intuitive criterion) since when the final agreement is reached, each agent will pay for half of the public good anyway. The tradeoff then is less delay in exchange for less payment by the agent who values the public good more, and therefore would prefer more parks, for instance, in his town than in the neighboring town. This tradeoff argues in favor of harmonization when preferences are fairly similar (the range of v(i) is small) and spillovers are large.

What about when side payments are allowed? If we require harmonized policy, the only reason to delay with side payments is for the low value agents to extract money from the high value agents, since the loss from delay is higher for the high-value agents. If policy does not need to be harmonized, the high value type in equilibrium will provide more of the public good, but depending on parameter values, he may either pay the low value type a side payment (to agree to a split more quickly) or get paid by the low value type (in exchange for paying for more of the public good in a classic gains-from-trade scenario). In either case, agreement is reached quicker and the distribution of who buys public goods is more efficient than in the case with side payments alone. That is, legality of side payments and unharmonized policy are in some sense complementary goods! Again, if preferences are fairly similar and spillovers are large, we are better off requiring both no side payments and harmonization.

One final comparison: what about allowing differentiated policies but not allowing side payments? We noted above when differentiated policies alone are better or worse than harmonization with no side payments. How, though, does differentiation alone compare to allowing differentiation and side payments? If the possible “gains from trade” are large, then side payments allow those gains to be reached more efficiently and more quickly. If the possible gains from trade are small, then side payments allow low value types to extract rents through delay, and therefore side payments should not be allowed.

You may wonder how robust these results are to the particular bargaining game chosen. It turns out they are in a sense very robust. In particular, consider any mechanism mapping revealed types into required public good purchases (with or without side payments). The equilibria solved for above implement the outcomes of the most efficient dominant strategy mechanisms that are “fair” in the sense that no one wants to renege ex-post. (Final WP – final version published in AER 2007)

“Multiple Referrals and Multidimensional Cheap Talk,” M. Battaglini (2002)

Mechanism design and game theory are often radically different when the state is multidimensional instead of unidimensional: finding the differences has been one of the most productive parts of economic theory over the past decade. This classic paper by Battaglini is the one to read when it comes to multidimensional cheap talk.

Consider a president listening to two expert advisors, or a median voter in Congress listening to two members of a committee. Everyone is biased. The experts know exactly the results of some policy, but the receiver does not: that is, the outcome x=y+a, where y is the policy chosen, and a is some noise whose realization is known only to the experts. When the policy and state are unidimensional, a number of classic results (Gilligan & Krehbiel 1989, for example) note that cheap talk from the experts can only be influential in equilibrium if the biases of the experts are small. Even then, equilibrium existence relies on out-of-equilibrium beliefs (the solution concept is Perfect Bayesian Nash) that are in some sense crazy.

This turns out not to be true in a multidimensional world. Consider potential policies which will affect both global warming and unemployment, where these are mapped into utilities in two-dimensional Euclidean space. The two experts know exactly how these policies will affect the environment and the economy, while the receiver only knows the effect of policy y, and knows that the signal a has expected value of zero. In this case, it turns out full revelation is almost always possible, no matter what the biases are; this result does not rely on crazy out-of-equilibrium beliefs and it is robust to a specific form of collusion among the experts.

What magic is being used? The basic idea is to find dimensions upon which each agent has preferences that are aligned with those of the receiver, and ask agents only about those preferences. Intuitively, ask the environmentally-conscious guy which policy is best for the economy given that the environment is handled optimally, and ask the economically-minded guy which policy is best for the environment given that the economy is handled optimally. Mathematically, let the optimal outcome of the receiver be represented at the origin, and consider the vectors tangent to each expert’s indifference curves at the origin. Ask each expert to reveal only the dimension of the state lying along his tangent line in two-dimensional space. By construction, if an expert has to choose from only that line, he will choose the origin. This intuition will always work as long as utility is quasiconcave and the gradients of each agent’s utilities are linearly independent at the origin.
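The geometric construction can be verified numerically in a toy quadratic-utility version (the bliss points here are made up, not from the paper):

```python
import numpy as np

# Toy version of the construction: the receiver's bliss point is at the
# origin; experts have quadratic utility with (made-up) bliss points b1, b2.
b1 = np.array([1.0, 0.5])
b2 = np.array([0.3, 1.0])

# The tangent to expert i's indifference circle at the origin is the
# direction perpendicular to his bias vector.
def tangent(b):
    return np.array([-b[1], b[0]])

d1, d2 = tangent(b1), tangent(b2)

# Restricted to his own line {c * d_i}, each expert's optimum is the origin:
# the argmax of -|c*d_i - b_i|^2 is c = (d_i . b_i) / |d_i|^2 = 0.
assert abs(d1 @ b1) < 1e-12 and abs(d2 @ b2) < 1e-12

# As long as b1 and b2 are not parallel, d1 and d2 span the plane, so the
# receiver can reconstruct any state theta from the two truthful reports.
theta = np.array([0.7, -0.4])
c = np.linalg.solve(np.column_stack([d1, d2]), theta)
print(np.allclose(c[0] * d1 + c[1] * d2, theta))
```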

This clears up some puzzles in political economy. For instance, the unidimensional result suggested that biased committees are uninformative, yet committees in Congress tend to be made up of the Congressmen with the strongest biases. So why do such committees persist if they aren’t influential? Battaglini’s result shows that on multidimensional problems, committees are indeed useful, even when made up of very biased members, because they still transmit information to Congress at large in equilibrium.

A quick mathematical caveat: Ambrus and Takahashi note in a 2008 Theoretical Economics that Battaglini’s result is not just a dimensionality of state space argument, but also one that relies on the state space being the entire Euclidean space. When the state space is compact (say, the policy is spending on education and military, and there is a fixed budget), it is not true, under some robustness conditions, that information is always fully revealed. The trick is dealing with out-of-equilibrium cases that are “impossible”, such as when the strategies of the experts imply that the optimal spending is strictly greater than the budget. If you like Battaglini’s paper, it’s probably worth taking a look at Ambrus & Takahashi. (Final WP – published in Econometrica 2002)

“The Persistent Effects of Peru’s Mining Mita,” M. Dell (2010)

We’ve had quite a few papers on this site recently by job market candidates, so let’s up the ante with a paper written by a PhD student mostly before she even started her doctoral degree, yet nonetheless published in Econometrica.

Institutions and their effect on long-run growth have been one of the most productive areas of economic research in the past decade or so. There are a number of results that discuss broad trends – English legal system colonies tend to have done better than Spanish ones, for instance. The exact mechanisms by which an institution from 200 years previous can still affect economic outcomes are less well understood. Dell discusses Engerman and Sokoloff’s contention that high inequality in Latin America in the colonial era led to bad economic outcomes today. Rather than compare across countries, she examines a particular colonial policy, Peru’s mita system of forced labor, shows large modern differences across the mita region boundary, and traces what historical processes may have led the mita to have effects persisting hundreds of years into the future.

The mita was a colonial system, begun in the 16th century, whereby villages in some areas were required to send a fraction of their working-age men to work in the state’s silver and mercury mines (how the colonial government avoided the agency problem here and wound up with anything but the most feeble workers, I don’t know…). Regions were sometimes included in the mita for geographical reasons, but often were included solely because of their proximity to a colonial-era path leading to the mines. There was (and is) no significant difference in language, percentage indigenous, etc., along the mita border. The mita boundary has had no official meaning in 200 years.

Running a regression discontinuity (in two directions, since the boundary is located in geographical space) shows that health outcomes (stunted growth) and consumption are quite a bit lower for villages within the old mita boundary even today. For instance, people are nine percentage points more likely to have stunted growth, a sign of poverty. There are a number of potential explanations, but most come down to the fact that large haciendas did not develop in the colonial period within the mita region, since the state didn’t want competition for labor. Those haciendas later used their political power to ensure road networks and other inputs to production were built in their regions. Further, when the hacienda system was dismantled in the 1960s, the hacienda land was distributed to peasants, giving them properly-titled land. That is, a case can be made that, at least in Peru, the particularly unequal regions, with large-scale landowners, were in some sense good for growth; this is the opposite of Engerman and Sokoloff’s conclusion, and a suggestion that idiosyncratic features can overwhelm more obvious theoretical insights when we talk about processes lasting hundreds of years.
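For readers unfamiliar with the estimator, here is what it does in its simplest one-dimensional form, on simulated data rather than Dell's (her actual specification is two-dimensional, in latitude and longitude):

```python
import numpy as np

# Minimal one-dimensional RD sketch on simulated data (not Dell's data or
# her two-dimensional specification). Running variable: signed distance to
# the boundary, with a true discontinuity in the outcome at zero.
rng = np.random.default_rng(0)
n = 4000
dist = rng.uniform(-2, 2, n)          # signed distance to the boundary
inside = dist < 0                     # inside the old mita region
true_jump = -0.25
y = 0.1 * dist + true_jump * inside + rng.normal(0, 0.1, n)

# Local linear fit on each side within a bandwidth; the RD estimate is the
# difference in intercepts at the cutoff.
h = 1.0
left = (dist < 0) & (dist > -h)
right = (dist >= 0) & (dist < h)
b_left = np.polyfit(dist[left], y[left], 1)    # returns [slope, intercept]
b_right = np.polyfit(dist[right], y[right], 1)
rd_estimate = b_left[1] - b_right[1]
print(round(rd_estimate, 2))
```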

I am confused a bit here, though this may be simply because I’m a terrible econometrician. Doesn’t the use of regression discontinuity require that the effect of the treatment is discontinuous at the boundary? Consider regional roads. A village on one side of the boundary is x kilometers from the nearest road. A village right on the other side is x+1 kilometers away. How is this a discontinuity? This has implications for interpreting the results as well. When the paper says the mita lowers household consumption by 25%, RD implies that household consumption falls by 25% at the boundary of the Mita region. If roads and network infrastructure are the reason, it’s tough to see first why you would have such a large effect at the boundary, and second, why I care particularly about the effect at the boundary vis-a-vis the average effect within the mita region. Perhaps someone can explain to me why RD is appropriate here. (Final WP – published in Econometrica 2010)

“Overcoming Adverse Selection: How Public Intervention Can Restore Market Functioning,” J. Tirole (2010)

Jean Tirole stopped here in Chicago this week to present his new paper, an explanation of how mechanism design can aid government policy when credit markets freeze. Since I’ve got Jean at number 3 on my Fantasy Econ Nobel draft board (you don’t have one? Our group of old Fed colleagues goes eight rounds deep!), that’s not a presentation I could miss.

The model is simple. Let firms all hold an asset which pays xR, where R is a constant, and x is a variable in [0,1] representing the asset’s type. Financial institutions know the type of their own asset, but no one knows anyone else’s type. Let each firm want to invest in a new project with positive net present value. If the project is financed at cost I, it returns more than I if the seller “behaves”, but returns nothing but a private benefit to the seller if he “misbehaves”. As in a standard mechanism design problem, the seller will need to be given a stake in the new project in order to be induced to behave. Though sellers own assets of varying quality, they all are proposing identical new projects.

The problem in the market at large is that some shift in beliefs about asset quality has caused the distribution of x to be such that, because of asymmetric information, no one is able to sell their asset, and because of this, no one is able to finance their new project. This occurs even though people with high quality assets would be able to finance the new project should the market know they had high quality assets. But assume that the government can take some action first, then allow the market for assets to open, with every seller deciding whether to participate in the government mechanism or to wait for the market. Is there any welfare-enhancing way to “unfreeze” the market? Note the two problems, from a theoretical standpoint. First, the participation constraint in the government mechanism is endogenous, since market outcomes depend on who joins the government mechanism. Second, all of the standard informational issues from a signaling/adverse selection problem are present.

Tirole shows the optimal government policy is as follows. First, the government buys the assets (or some share of the assets) only of people with low quality assets; of course, the government cannot see asset quality, but it is able to make such purchases by offering a sufficiently low price. Second, sellers with “medium-quality” assets have only a share of their assets bought by the government. Third, the government purchases must be of a sufficiently large scale in order to “unfreeze” the market. Fourth, the government will always lose money when it buys assets, even though sellers are desperate to sell assets for less than their worth in order to finance the new positive net present value project.

Why? Buying bad assets instead of good is optimal because it leaves only good assets to the market, which will then be able to make money by financing the remaining sellers. Only a portion of low quality assets are bought in order to keep as many of the assets sold in the market as possible. The government must buy a sufficiently large number of bad assets in order for the market to earn at least I, in expectation, from the assets they buy. And the government always loses money because, even though the sellers are eager to sell their old assets, they were also eager to sell even before the government intervention. The government thus is paying more than what the sellers could have gotten in the market without intervention, and since the market after intervention is competitive (has a zero-profit condition), the government will be the one losing money. This does not suggest intervention should not happen: the government can still lose money even when intervention is socially optimal.
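A toy uniform-types version of this logic (my own numbers, far simpler than Tirole's general mechanism) shows both key features: the government overpays for the bad assets it buys, and removing them raises the average quality of the pool left to the market.

```python
import numpy as np

# Toy version of the intervention with asset types x ~ Uniform[0,1] and
# return x * R; my own numbers, not Tirole's general mechanism.
R = 1.0
p = 0.4                          # the government's (deliberately low) offer price
x = np.linspace(0.0, 1.0, 100001)
sells_to_gov = x * R < p         # only holders of bad assets accept a low price

# The government overpays: it pays p for assets worth E[xR | xR < p] = p/2 on average.
gov_loss_per_asset = p - (x[sells_to_gov] * R).mean()

# Removing the bad tail raises the average quality of the pool left to the
# market, which is what lets buyers break even and the market unfreeze.
pool_before = (x * R).mean()
pool_after = (x[~sells_to_gov] * R).mean()
print(gov_loss_per_asset > 0, pool_after > pool_before)
```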

Finally, Tirole points out that this policy is optimal given a credit freeze because of adverse selection (e.g., a shock to the return of assets of varying quality currently on financial institution balance sheets). Of course, alternative policies (such as government-provided liquidity) might be better if the reason for credit market seize-up is different. In any case: mechanism design is a quality tool.

“Beyond Markets and States: Polycentric Governance of Complex Economic Systems,” E. Ostrom (2010)

It’s summer, which economists mark not with sun and beaches but rather with the appearance of expanded Nobel lectures in the June AER. Often these articles can be classified as “nothing new” – hopefully we know the greatest hits of Nobel prize winners! Last year, however, Indiana’s Elinor Ostrom won. Though she is very well known in her field (common property), she was generally unknown to the profession as a whole. This article, perhaps, does something to rectify that.

Ostrom is at heart an applied game theorist. She studies what happens when people face tragedy of the commons problems, whether they be fisheries, forests, municipal policing, or whatever. The basic insight is that, though these situations look like prisoner’s dilemmas, there are countless empirical examples of common property management without direct government involvement. In particular, she notes that the rules, meaning the available actions and payoffs, are not fixed as they might be assumed to be in a simplistic analysis. Rather, when games are repeated, the players themselves can agree on penalties, on restricted future actions, etc., such that prisoner’s dilemmas are avoided. This “altering of the rules” does not require a social planner, but rather can be done within the properly-written larger game.
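The folk-theorem calculation behind this "altering of the rules" is standard, and a minimal sketch with textbook prisoner's dilemma payoffs (my own numbers) shows the condition under which an agreed-upon penalty sustains cooperation:

```python
# Repeated prisoner's dilemma with an agreed grim-trigger penalty (textbook
# payoffs, my own numbers): cooperating pays R each period, defecting once
# pays T, after which both sides punish with P forever.
T_payoff, R_payoff, P_payoff = 5.0, 3.0, 1.0

def cooperation_sustainable(delta):
    """True if cooperating forever beats a one-shot defection followed by punishment."""
    cooperate_forever = R_payoff / (1 - delta)
    defect_once = T_payoff + delta * P_payoff / (1 - delta)
    return cooperate_forever >= defect_once

print(cooperation_sustainable(0.4))  # impatient players: the commons collapses
print(cooperation_sustainable(0.6))  # patient players: self-governance works
```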

This insight would not be enough for a Nobel, though. Ostrom, along with her workshop partners, essentially gathered every empirical study on common property management done by sociologists, historians, economists and political scientists, and, in a consistent, game-theoretic manner, coded the methods used to overcome (or not overcome!) the tragedy of the commons. Such paradigmatic theory is close to economists' hearts – it is the reason we use math as our language, and, I would argue, the reason economics has been so successful in exporting the results of our field to policymakers and other social scientists.

The original goal of such standardization appears to have been extracting which rules were successful, in general, and which failed, in general. This was unsuccessful; there is simply too much heterogeneity between fishery rules used in a Sakhalin port and rules for lumber extraction in a forest in the Congo. Nonetheless, “lessons”, in Ostrom’s words, could be extracted, which provide guides to successful common property management. I am reminded of the urban planner Christopher Alexander and his beautiful A Pattern Language: social science is not about rules, but rather about sets of principles which can guide decisionmaking. That is, social science is about finding general patterns which help us think about specific situations. This, as I like to discuss on this site, is very different from the scientific method of the hard sciences.

Ostrom’s empirical method strikes me as the right one. Empirics do not tell us which theory to develop. Theory does not tell us which empirics to examine. Rather, theory and data develop together, feeding back on each other, in order to help us find the patterns above. Though Ostrom is known as a researcher who spent a great amount of time in the field, her influence lies in the standardized theoretical lens through which she examines her field results. No one, except the Congolese government, really cares about the specific results of a study of forest management in the Congo, but to the extent that the data from that experiment can be compared to similar studies across time and space, we can begin to learn something about humanity more generally, a much more valuable result. (Html version of AER article; I cannot find a non-gated pdf. Why does the American Economic Association, a non-profit dedicated to advancing economics knowledge, gate their articles? It’s nonsensical. Hopefully we have a reader with enough sway to end such ridiculousness!)

“Persistence of Civil Wars,” D. Acemoglu, D. Ticchi & A. Vindigni (2010)

The average length of civil wars has more than doubled in the postcolonial era. Might this lengthening be the result of rational action on the part of governments? Acemoglu et al consider an infinite-horizon model with Elites, Citizens (who may work, or join the military), and Rebels. The economy begins in a state of civil war. The government chooses one of three army sizes. At the minimum size, the war ends with probability p each period, and there is no chance of a coup. At the medium size, the war ends immediately, and the military can be shrunk to minimum size in future periods, but a coup can be staged as long as the military remains medium-sized. In the "oversized" case, the war ends immediately and coups can be attempted, but the military is sufficiently strong that the government cannot reduce its size after the war ends. All three choices can arise in Markov Perfect Equilibria. In particular, if the civil war has little effect on Elite revenue, and if the small military has a decent chance of ending the war, the government will not purchase a military large enough to stop the rebels immediately. If the army is made large enough to end the war, the government may prefer to oversize it as a commitment device: a medium-sized army may stage a coup right after the war ends because it knows the rational Elites will shrink the army in the next period, and paying extra (in discounted terms) from now until forever for an oversized army may cost the Elites less than being removed in a coup.
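The Elite's three-way comparison can be sketched with a stylized calculation — illustrative functional forms and parameters, not the paper's actual model — in which a cheap army that only ends the war gradually can still beat both larger options.

```python
beta = 0.9   # discount factor
R    = 1.0   # per-period Elite revenue in peacetime
d    = 0.3   # per-period revenue lost while the civil war continues
p    = 0.5   # chance the minimum-size army ends the war each period
q    = 0.4   # chance a medium-size army stages a coup once war ends
w_m  = 0.2   # one-shot extra cost of the medium army
w_o  = 0.1   # permanent extra cost of the oversized army

peace = R / (1 - beta)  # value of peace with no military threat

# Minimum army: bear the war cost until the war ends with probability p.
# V = (R - d) + beta * (p * peace + (1 - p) * V), solved for V.
v_min = ((R - d) + beta * p * peace) / (1 - beta * (1 - p))

# Medium army: the war ends now, but a coup (Elite payoff 0) may follow,
# since the army anticipates being shrunk next period.
v_med = (R - w_m) + beta * (1 - q) * peace

# Oversized army: the war ends now and no coup occurs, but the wage
# bill is permanent -- the oversize is the commitment device.
v_over = (R - w_o) / (1 - beta)

best = max(("minimum", v_min), ("medium", v_med), ("oversized", v_over),
           key=lambda kv: kv[1])
print({"minimum": round(v_min, 2), "medium": round(v_med, 2),
       "oversized": round(v_over, 2)}, "->", best[0])
```

With these numbers the minimum army wins: the Elites rationally let the war linger rather than pay for an army that either invites a coup or must be funded forever.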

Interestingly, a more developed state, which can more effectively raise tax revenue, is more likely to let a civil war linger than a weaker state: if the state has effective institutions, it is "more valuable" to potential coup plotters, and therefore the wages necessary to avoid a coup increase as a function of state effectiveness. Note, though, that the model assumes coups are possible at all, so essentially we are discussing countries which can raise a lot of revenue but do not have institutions effective enough to prevent their militaries from enacting a coup.

More generally, and as usual in a paper by Acemoglu (my pick for the best “young” economist), this is also a great example of a parsimonious but useful model.

“Quis Custodiet Ipsos Custodes?: Civilian Control over the Military,” T. Besley & J. Robinson (2010)

Militaries are needed to protect the state from external threats, but what protects the state from coups led by the military? As Plato asked, who guards the guards? Besley and Robinson develop a simple two-period model of coups. In period one, the government collects revenue (perhaps a function of natural resources), pays a wage to the military, and purchases public goods, which the military values at some discount; for instance, the military would not value money that is siphoned off by lower ministers, but would value money pumped into military depots. In period two, the military can either mount a coup or not; if not, the government then pays it a new wage. If the government cannot commit to the second-period wage, then it essentially pays the military only enough to keep the army from deserting. Whether or not it can commit, states where the military does not value the public goods the government likes to buy will see more coups, and therefore the state will choose a suboptimal army size in order to prevent the coup. This paper is somewhat less convincing than the model by Acemoglu et al also discussed today on this site.
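The commitment problem at the heart of the model can be sketched with made-up numbers (not the paper's parameterization): the military compares the value of loyalty — its wage plus the share of public spending it values — against what it would grab in a coup.

```python
R         = 1.0   # state revenue per period
lam       = 0.3   # fraction of public-good spending the military values
w_res     = 0.1   # reservation wage (just enough to prevent desertion)
coup_take = 0.5   # what the military captures by seizing the state

def loyalty_value(w2):
    """Military's period-two payoff from staying loyal at wage w2: the
    wage plus its valuation of public goods bought with the rest."""
    return w2 + lam * (R - w2)

# Without commitment, the government pays only the reservation wage in
# period two, and a coup is strictly profitable for the military:
assert loyalty_value(w_res) < coup_take

# With commitment, the cheapest coup-proof wage solves
# w2 + lam * (R - w2) = coup_take  =>  w2 = (coup_take - lam*R)/(1 - lam)
w_star = (coup_take - lam * R) / (1 - lam)
print(f"coup-proof committed wage: {w_star:.3f}")
```

Note how a smaller lam — a military that values little of what the government buys — raises the coup-proof wage, which is the comparative static driving the "more coups, smaller armies" result above.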
