“Epistemic Game Theory,” E. Dekel & M. Siniscalchi (2014)

Here is a handbook chapter that is long overdue. The theory of epistemic games concerns a fairly novel justification for solution concepts under strategic uncertainty – that is, situations where what I want to do depends on what other people do, and vice versa. We generally analyze these as games, and have a bunch of equilibrium (Nash, subgame perfection, etc.) and nonequilibrium (Nash bargain, rationalizability, etc.) solution concepts. So which should you use? I can think of four classes of justification for a game solution. First, the solution might be stable: if you told each player what to do, no one person (or sometimes group) would want to deviate. Maskin mentions this justification is particularly worthy when it comes to mechanism design. Second, the solution might be the outcome of a dynamic selection process, such as evolution or a particular learning rule. Third, the solution may be justified by certain axiomatic first principles; the Shapley value is a good example in this class. The fourth class, however, is the one we most often teach students: a solution concept is good because it is justified by individual behavior assumptions. Nash, for example, is often thought to be justified by “rationality plus correct beliefs”. Backward induction is similarly justified by “common knowledge of rationality at all states.”

Those are informal arguments, however. The epistemic games (or sometimes, “interactive epistemology”) program seeks to formally analyze assumptions about the knowledge and rationality of players and what they imply for behavior. There remain many results we don’t know (for instance, I asked around and could only come up with one paper on the epistemics of coalitional games), but the results proven so far are actually fascinating. Let me give you three: rationality and common belief in rationality imply that rationalizable strategies are played, the requirements for Nash are different depending on how many players there are, and backward induction is surprisingly difficult to justify on epistemic grounds.

First, rationalizability. Take a game and remove any strictly dominated strategy for each player. Now in the reduced game, remove anything that is strictly dominated. Continue doing this until nothing is left to remove. The remaining strategies for each player are “rationalizable”. If players can hold any belief they want about what potential “types” opponents may be – where a given (Harsanyi) type specifies what an opponent will do – then as long as we are all rational, we all believe the opponents are rational, we all believe the opponents all believe that we all are rational, ad infinitum, the only possible outcomes to the game are the rationalizable ones. Proving this is actually quite complex: if we take as primitive the “hierarchy of beliefs” of each player (what do I believe my opponents will do, what do I believe they believe I will do, and so on), then we need to show that any hierarchy of beliefs can be written down in a type structure, then we need to be careful about how we define “rational” and “common belief” on a type structure, but all of this can be done. Note that many rationalizable strategies are not Nash equilibria.
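
To make the procedure concrete, here is a minimal sketch of iterated elimination of strictly dominated strategies for a finite two-player game; the payoff matrices are made up for illustration, and for brevity it only checks domination by pure strategies, whereas the full definition of rationalizability allows domination by mixed strategies as well.

```python
import itertools
import numpy as np

def strictly_dominated(payoff, own_actions, opp_actions):
    """Own pure actions strictly dominated by some other pure action,
    checked against every surviving opponent action."""
    dominated = set()
    for a, b in itertools.permutations(own_actions, 2):
        if all(payoff[b, o] > payoff[a, o] for o in opp_actions):
            dominated.add(a)
    return dominated

def rationalizable(u1, u2):
    """Iteratively delete strictly dominated strategies for both players."""
    rows, cols = set(range(u1.shape[0])), set(range(u1.shape[1]))
    while True:
        bad_rows = strictly_dominated(u1, rows, cols)
        bad_cols = strictly_dominated(u2.T, cols, rows)
        if not bad_rows and not bad_cols:
            return sorted(rows), sorted(cols)
        rows -= bad_rows
        cols -= bad_cols

# A hypothetical 3x3 game: the column player's third action is strictly dominated,
# and everything that survives the deletion rounds is rationalizable.
u1 = np.array([[3, 0, 1], [1, 1, 1], [0, 3, 2]])
u2 = np.array([[2, 1, 0], [1, 1, 0], [0, 2, 0]])
print(rationalizable(u1, u2))
```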

So what further assumptions do we need to justify Nash? Recall the naive explanation: “rationality plus correct beliefs”. Nash takes us from rationalizability, where play is based on conjectures about opponents’ play, to an equilibrium, where play is based on correct conjectures. But which beliefs need to be correct? With two players and no uncertainty, the result is actually fairly straightforward: if our first order beliefs are (f,g), we mutually believe our first order beliefs are (f,g), and we mutually believe we are rational, then the beliefs (f,g) constitute a Nash equilibrium. You should notice three things here. First, we only need mutual belief (each of us believes X), not common belief (the infinite hierarchy of “I believe you believe I believe…”), in rationality and in our first order beliefs. Second, the result is that our first-order beliefs are that a Nash equilibrium strategy will be played by all players; the result is about beliefs, not actual play. Third, with more than two players, we are clearly going to need assumptions about how my beliefs about our mutual opponent are related to your beliefs; that is, Nash will require more, epistemically, than “basic strategic reasoning”. Knowing these conditions can be quite useful. For instance, Terri Kneeland at UCL has investigated experimentally the extent to which each of the required epistemic conditions is satisfied, which helps us to understand situations in which Nash is harder to justify.

Finally, how about backward induction? Consider a centipede game. The backward induction rationale is that if we reached the final stage, the final player would defect, hence if we are in the second-to-last stage I should see that coming and defect before her, hence if we are in the third-to-last stage she will see that coming and defect before me, and so on. Imagine, however, that player 1 does not defect in the first stage. What am I to infer? Was this a mistake, or am I perhaps facing an irrational opponent? Backward induction requires that I never make such an inference, and hence that I defect in stage 2.
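
For concreteness, here is a small sketch, of my own construction with an illustrative payoff structure, of the backward induction computation in a linear centipede game: at each node the mover either takes the larger share of the current pot or passes, the pot grows after every pass, and the subgame-perfect plan turns out to be “take” at every node.

```python
def centipede_backward_induction(n_stages=6, pot=2.0, growth=2.0, take_share=0.75):
    """Solve a linear centipede game by backward induction.

    At stage t the mover can 'take' (keeping take_share of the current pot and
    leaving the rest to the other player) or 'pass', after which the pot is
    multiplied by `growth` and the other player moves. Returns, for each stage,
    the subgame-perfect action and the payoff pair (mover, other)."""
    plan = [None] * n_stages
    continuation = None  # (mover, other) payoffs of the subgame starting next stage
    for stage in reversed(range(n_stages)):
        current_pot = pot * growth ** stage
        take = (take_share * current_pot, (1 - take_share) * current_pot)
        if stage == n_stages - 1:
            best = ("take", take)               # at the last node, taking is trivially optimal
        else:
            next_mover, next_other = continuation
            passing = (next_other, next_mover)  # roles swap after a pass
            best = ("take", take) if take[0] >= passing[0] else ("pass", passing)
        plan[stage] = best
        continuation = best[1]
    return plan

for stage, (action, payoffs) in enumerate(centipede_backward_induction()):
    print(stage, action, payoffs)   # 'take' at every stage: the backward induction outcome
```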

Here is a better justification for defection in the centipede game, though. If player 1 doesn’t defect in the first stage, then I “try my best” to retain a belief in his rationality. That is, if it is possible for him to have some belief about my actions in the second stage which rationally justified his first stage action, then I must believe that he holds those beliefs. For example, he may believe that I believe he will continue again in the third stage, hence that I will continue in the second stage, hence he will continue in the first stage then plan to defect in the third stage. Given his beliefs about me, his actions in the first stage were rational. But if that plan to defect in stage three were his justification, then I should defect in stage two. He realizes I will make these inferences, hence he will defect in stage 1. That is, the backward induction outcome is justified by forward induction. Now, it can be proven that rationality and common “strong belief in rationality” as loosely explained above, along with a suitably rich type structure for all players, generates a backward induction outcome. But the epistemic justification is completely based on the equivalence between forward and backward induction under those assumptions, not on any epistemic justification for backward induction reasoning per se. I think that’s a fantastic result.

Final version, prepared for the new Handbook of Game Theory. I don’t see a version on RePEc IDEAS.

“The Tragedy of the Commons in a Violent World,” P. Sekeris (2014)

The prisoner’s dilemma is one of the great insights in the history of the social sciences. Why would people ever take actions that make everyone worse off? Because we all realize that if everyone took the socially optimal action, we would each be better off individually by cheating and doing something else. Even if we interact many times, that incentive to cheat will remain in our final interaction, hence cooperation will unravel all the way back to the present. In the absence of some ability to commit or contract, then, it is no surprise we see things like oligopolies who sell more than the quantity which maximizes industry profit, or countries who exhaust common fisheries faster than they would if the fishery were wholly within national waters, and so on.

But there is a wrinkle: the dreaded folk theorem. As is well known, if we play frequently enough, and the probability that any given game is the last is low enough, then any feasible outcome which is better than what players can guarantee themselves regardless of the other players’ actions can be sustained as an equilibrium; this, of course, includes the socially optimal outcome. And the punishment strategies necessary to get to that social optimum are often fairly straightforward. Consider oligopoly: if your firm produces more than half the monopoly output, then I produce the Cournot duopoly quantity in the next period. If you think I will produce Cournot, your best response is also to produce Cournot, and we will do so forever. Therefore, if we are choosing quantities frequently enough, the benefit to you of cheating today is not enough to overcome the lower profits you will earn in every future period, and hence we are able to collude at the monopoly level of output.
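
As a back-of-the-envelope illustration of that logic, the sketch below uses linear inverse demand P = a - Q and constant marginal cost c, both made up for the example: it computes each firm's collusive, deviation, and Cournot profits and the critical discount factor above which grim-trigger reversion to Cournot sustains collusion at the monopoly output.

```python
def collusion_threshold(a=1.0, c=0.0):
    """Per-firm collusive, deviation, and Cournot profits, plus the critical
    discount factor for grim-trigger collusion in a Cournot duopoly with
    inverse demand P = a - Q and marginal cost c."""
    q_m = (a - c) / 2                       # monopoly output, split equally
    pi_collude = (a - c - q_m) * q_m / 2    # each firm's share of monopoly profit
    q_dev = (a - c - q_m / 2) / 2           # best response to the rival producing q_m / 2
    pi_dev = (a - c - q_m / 2 - q_dev) * q_dev
    q_cournot = (a - c) / 3
    pi_cournot = (a - c - 2 * q_cournot) * q_cournot
    # Collusion is sustainable iff pi_collude / (1 - d) >= pi_dev + d * pi_cournot / (1 - d),
    # i.e. iff the discount factor d is at least d_star below.
    d_star = (pi_dev - pi_collude) / (pi_dev - pi_cournot)
    return pi_collude, pi_dev, pi_cournot, d_star

print(collusion_threshold())   # d_star = 9/17, roughly 0.53
```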

Folk theorems are really robust. What if we only observe some random public signal of what each of us did in the last period? The folk theorem holds. What if we only privately observe some random signal of what the other people did last period? No problem, the folk theorem holds. There are many more generalizations. Any applied theorist has surely run into the folk theorem problem – how do I let players use “reasonable” strategies in a repeated game but disallow crazy strategies which might permit tacit collusion?

This is Sekeris’ problem in the present paper. Consider two nations sharing a common pool of resources like fish. We know from Hotelling how to solve the optimal resource extraction problem if there is only one nation. With more than one nation, each party has an incentive to overfish today because they don’t take sufficient account of the fact that their fishing today lowers the amount of fish left for the opponent tomorrow, but the folk theorem tells us that we can still sustain cooperation if we interact frequently enough. Indeed, Ostrom won the Nobel a few years ago for showing how such punishments operate in many real world situations. But, but! – why then do we see fisheries and other common pool resources overdepleted so often?

There are a few ways to get around the folk theorem. First, it may just be that players do not interact forever, at least probabilistically; some firms may last longer than others, for instance. Second, it may be that firms cannot change their strategies frequently enough, so that punishment for a deviation from the cooperative optimum arrives only with delay and therefore stings less. Third, Mallesh Pai and coauthors show in a recent paper that with a large number of players and sufficient differential obfuscation of signals, it becomes too difficult to “catch cheaters” and hence the stage game equilibrium is retained. Sekeris proposes an alternative to all of these: allow players to take actions which change the form of the stage game in the future. In particular, he allows players to fight for control of a bigger share of the common pool if they wish. Fighting requires expending resources from the pool to build arms, and the fight itself also diminishes the size of the pool by destroying resources.

As the remaining resource pool gets smaller and smaller, then each player is willing to waste fewer resources arming themselves in a fight over that smaller pool. This means that if conflict does break out, fewer resources will be destroyed in the “low intensity” fight. Because fighting is less costly when the pool is small, as the pool is depleted through cooperative extraction, eventually the players will fight over what remains. Since players will have asymmetric access to the pool following the outcome of the fight, there are fewer ways for the “smaller” player to harm the bigger one after the fight, and hence less ability to use threats of such harm to maintain folk-theorem cooperation before the fight. Therefore, the cooperative equilibrium partially unravels and players do not fully cooperate even at the start of the game when the common pool is big.

That’s a nice methodological trick, but also somewhat reasonable in the context of common resource pool management. If you don’t overfish today, it must be because you fear I will punish you by overfishing myself tomorrow. If you know I will enact such punishment, then you will just invade me tomorrow (perhaps metaphorically via trade agreements or similar) before I can enact such punishment. This possibility limits the type of credible threats that can be made off the equilibrium path.

Final working paper (RePEc IDEAS). Paper published in Fall 2014 RAND.

“Housing Market Spillovers: Evidence from the End of Rent Control in Cambridge, MA,” D. Autor, C. Palmer & P. Pathak (2014)

Why don’t people like renters? Looking for rental housing up here in Toronto (where, under any reasonable set of parameters, there looks to be a serious housing bubble at the moment), I’ve noticed it is very rare for houses to be rented and also very rare for rental and owned homes to appear in the same neighborhood. Why might this be? Housing externalities are one answer: a single run-down house on the block greatly harms the value of surrounding houses. Social opprobrium among homeowners may be sufficient to induce them to internalize these externalities in a way that is not true of landlords. The very first “real” paper I helped with back at the Fed showed a huge impact of renovating run-down properties on neighborhood land values in Richmond, Virginia.

Given that housing externalities exist, we may worry about policies that distort the rent-buy decision. Rent control may not only limit incentives for landlords to upgrade the quality of their own property, but may also damage the value of neighboring properties. Autor, Palmer and Pathak investigate a quasi-experiment in Cambridge, MA (right next door to my birthplace of Boston; I used to hear Cambridge referred to as the PRC!). In 1994, Massachusetts held a referendum on banning rent control, a policy that had been enforced very strongly in Cambridge. The ban passed 51-49.

The units previously under rent control, no surprise, saw a big spurt of investment and a large increase in their value. If the rent-controlled house was in a block with lots of other rent-controlled houses, however, the price rose even more. That is, there was a substantial indirect impact whereby upgrades on neighboring houses increase the value of my previously rent-controlled house. Looking at houses that were never rent controlled, those close to previously rent-controlled units rose in price much faster than otherwise-similar houses in the same area which didn’t have rent-controlled units on the same block. Overall, Autor et al estimate that rent decontrol raised the value of Cambridge property by $2 billion, and that over 80 percent of this increase was due to indirect effects (aka housing externalities). No wonder people are so worried about a rental unit popping up in their neighborhood!

Final version in June 2014 JPE (IDEAS version).

“Upstream Innovation and Product Variety in the U.S. Home PC Market,” A. Eizenberg (2014)

Who benefits from innovation? The trivial answer would be that everyone weakly benefits, but since innovation can change the incentives of firms to offer different varieties of a product, heterogeneous tastes among buyers may imply that some types of innovation make large groups of people worse off. Consider computers, a rapidly evolving technology. If Lenovo introduces a laptop with a faster processor, they may wish to discontinue production of a slower laptop, because offering both types flattens the demand curve for each, and hence lowers the profit-maximizing markup that can be charged for the better machine. This effect, combined with a fixed cost of maintaining a product line, may push firms to offer too little variety in equilibrium.

As an empirical matter, however, things may well go the other direction. Spence’s famous product selection paper suggests that firms may produce too much variety, because they don’t take into account that part of the profit they earn from a new product is just cannibalization of other firms’ existing product lines. Is it possible to separate things out from data? Note that this question has two features that essentially require a structural setup: the variable of interest is “welfare”, a completely theoretical concept, and lots of the relevant numbers like product line fixed costs are unobservable to the econometrician, hence they must be backed out from other data via theory.

There are some nice IO tricks to get this done. Using a near-universe of laptop sales in the early 2000s, Eizenberg estimates heterogeneous household demand using standard BLP-style methods. Supply is tougher. He assumes that firms receive a fixed-cost shock for each potential product line, then pick their product mix each quarter, then observe consumer demand, and finally play a Nash-Bertrand differentiated-products pricing game. The problem is that the pricing game often has multiple equilibria (e.g., with two symmetric firms, one may offer a high-end product and the other a low-end one, or vice versa). Since the pricing game equilibria are going to be used to back out fixed costs, we are in a bit of a bind. Rather than select equilibria using some ad hoc approach (how would you even do so in the symmetric case just mentioned?), Eizenberg cleverly partially identifies fixed costs using bounds, in the style of Pakes, Porter, Ho and Ishii, that are consistent with any possible pricing-game equilibrium. This means that welfare effects are also only partially identified.
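
To get a feel for what the pricing stage involves, here is a toy version, not Eizenberg's actual code: single-product firms play Nash-Bertrand under plain logit demand, whereas the paper uses richer BLP-style heterogeneous demand and multi-product firms. The qualities, costs, and price coefficient below are invented.

```python
import numpy as np

def logit_shares(delta, alpha, prices):
    """Market shares under logit demand, with the outside good's utility normalized to zero."""
    u = np.exp(delta - alpha * prices)
    return u / (1.0 + u.sum())

def bertrand_equilibrium(delta, alpha, costs, tol=1e-10, max_iter=10_000):
    """Iterate on the single-product logit first-order condition
    p_j = c_j + 1 / (alpha * (1 - s_j)) until prices converge."""
    prices = costs + 1.0 / alpha            # crude starting point
    for _ in range(max_iter):
        shares = logit_shares(delta, alpha, prices)
        new_prices = costs + 1.0 / (alpha * (1.0 - shares))
        if np.max(np.abs(new_prices - prices)) < tol:
            return new_prices, shares
        prices = new_prices
    raise RuntimeError("no convergence")

delta = np.array([2.0, 1.5, 1.0])   # mean utilities of three hypothetical laptops
costs = np.array([0.8, 0.6, 0.5])   # hypothetical marginal costs
prices, shares = bertrand_equilibrium(delta, alpha=1.5, costs=costs)
print(prices.round(3), shares.round(3))
```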

Throwing this model at the PC data shows that the mean consumer in the early 2000s wasn’t willing to pay any extra for a laptop, but there was a ton of heterogeneity in willingness to pay both for laptops and for faster speed on those laptops. Every year, the willingness to pay for a given computer fell $257 – technology was rapidly evolving and lots of substitute computers were constantly coming onto the market.

Eizenberg uses these estimates to investigate a particularly interesting counterfactual: what was the effect of the introduction of the lighter Pentium M mobile processor? As Pentium M was introduced, older Pentium III based laptops were, over time, no longer offered by the major notebook makers. The M raised predicted notebook sales by 5.8 to 23.8%, raised mean notebook price by $43 to $86, and lowered Pentium III share in the notebook market from 16-23% down to 7.7%. Here’s what’s especially interesting, though: total consumer surplus is higher with the M available, but all of the extra consumer surplus accrues to the 20% least price-sensitive buyers (as should be intuitive, since only those with high willingness-to-pay are buying cutting edge notebooks). What if a social planner had forced firms to keep offering the Pentium III models after the M was introduced? Net consumer plus producer surplus may have actually been positive, and the benefits would have especially accrued to those at the bottom end of the market!

Now, as a policy matter, we are (of course) not going to force firms to offer money-losing legacy products. But this result is worth keeping in mind anyway: because firms are concerned about pricing pressure, they may not be offering a socially optimal variety of products, and this may limit the “trickle-down” benefits of high tech products.

2011 working paper (No IDEAS version). Final version in ReStud 2014 (gated).

Laboratory Life, B. Latour & S. Woolgar (1979)

Let’s do one more post on the economics of science; if you haven’t heard of Latour and the book that made him famous, all I can say is that it is 30% completely crazy (the author is a French philosopher, after all!), 70% incredibly insightful, and overall a must read for anyone trying to understand how science proceeds or how scientists are motivated.

Latour is best known for two ideas: that facts are socially constructed (and hence science really isn’t that different from other human pursuits) and that objects/ideas/networks have agency. He rose to prominence with Laboratory Life, which grew out of two years spent observing a lab, that of future Nobel winner Roger Guillemin at the Salk Institute in La Jolla.

What he notes is that science is really strange if you observe it proceeding without any priors. Basically, a big group of people use a bunch of animals and chemicals and technical devices to produce beakers of fluids and points on curves and colored tabs. Somehow, after a great amount of informal discussion, all of these outputs are synthesized into a written article a few pages long. Perhaps, many years later, modalities about what had been written will be dropped; “X is a valid test for Y” rather than “W and Z (1967) claim that X is a valid test for Y” or even “It has been conjectured that X may be a valid test for Y”. Often, the printed literature will later change its mind; “X was once considered a valid test for Y, but that result is no longer considered convincing.”

Surely no one denies that the last paragraph accurately describes how science proceeds. But recall the schoolboy description, in which there are facts in the world, and then scientists do some work and run some tests, after which a fact has been “discovered”. Whoa! Look at all that is left out! How did we decide what to test, or what particulars constitute distinct things? How did we synthesize all of the experimental data into a few pages of formal writeup? Through what process did statements begin to be taken for granted, losing their modalities? If scientists actually discover facts, then how can a “fact” be overturned in the future? Latour argues, and gives tons of anecdotal evidence from his time at Salk, that providing answers to those questions basically constitutes the majority of what scientists actually do. That is, it is not that the fact is out there in nature waiting to be discovered, but that the fact is constructed by scientists over time.

That statement can be misconstrued, of course. That something is constructed does not mean that it isn’t real; the English language is both real and it is uncontroversial to point out that it is socially constructed. Latour and Woolgar: “To say that [a particular hormone] is constructed is not to deny its solidity as a fact. Rather, it is to emphasize how, where and why it was created.” Or later, “We do not wish to say that facts do not exist nor that there is no such thing as reality. In this simple sense we are not relativist. Our point is that ‘out-there-ness’ is the consequence of scientific work rather than its cause.” Putting their idea another way, the exact same object or evidence can at one point be considered up for debate or perhaps just a statistical artefact, yet later is considered a “settled fact” and yet later still will occasionally revert again. That is, the “realness” of the scientific evidence is not a property of the evidence itself, which does not change, but a property of the social process by which science reifies that evidence into an object of significance.

Latour and Woolgar also have an interesting discussion of why scientists care about credit. The story of credit as a reward, or credit-giving as some sort of gift exchange is hard to square with certain facts about why people do or do not cite. Rather, credit can be seen as a sort of capital. If you are credited with a certain breakthrough, you can use that capital to get a better position, more equipment and lab space, etc. Without further breakthroughs for which you are credited, you will eventually run out of such capital. This is an interesting way to think about why and when scientists care about who is credited with particular work.

Amazon link. This is a book without a nice summary article, I’m afraid, so you’ll have to stop by your library.

“Why Did Universities Start Patenting?: Institution Building and the Road to the Bayh-Dole Act,” E. P. Berman (2008)

It goes without saying that the Bayh-Dole Act had huge ramifications for science in the United States. Passed in 1980, Bayh-Dole permitted (indeed, encouraged) universities to patent the output of federally-funded science. I think the empirical evidence is still not complete on whether this increase in university patenting has been good (more, perhaps, incentive to develop products based on university research), bad (patents generate static deadweight loss, and exclusive patent licenses limit future developers) or “worse than the alternative” (if the main benefit of Bayh-Dole is encouraging universities to promote their research to the private sector, we can achieve that goal without the deadweight loss of patents).

As a matter of theory, however, it’s hard for me to see how university patenting could be beneficial. The usual static tradeoff with patents is deadweight loss after the product is developed in exchange for the quasirents that incentivize the initial developer to pay the fixed costs of research. With university research, you don’t even get that benefit, since the research is being done anyway. This means you have to believe the “increased incentive for someone to commercialize” under patents is enough to outweigh the static deadweight loss; it is not even clear that there is any increased incentive in the first place. Scientists seem to understand what is going on: witness the license manager of the enormously profitable Cohen-Boyer recombinant DNA patent, “[W]hether we licensed it or not, commercialisation of recombinant DNA was going forward. As I mentioned, a non-exclusive licensing program, at its heart, is really a tax … [b]ut it’s always nice to say technology transfer.” That is, it is clear why cash-strapped universities like Bayh-Dole regardless of the social benefit.

In today’s paper, Elizabeth Popp Berman, a sociologist, poses an interesting question. How did Bayh-Dole ever pass, given the widespread antipathy toward “locking up the results of public research” in the decades before its passage? She makes two points of particular interest. First, it’s not obvious that there is any structural break in 1980 in university patenting, as university patents increased 250% in the 12 years before the Act and about 300% in the 12 years afterward. Second, this pattern holds because the development of the institutions and interested groups necessary for the law to change was a fairly continuous process, beginning perhaps as early as the creation of the Research Corporation in 1912. What this means for economists is that we should be much more careful about treating changes in law as “exogenous”, since law generally just formalizes already-changing practice, and that our understanding of economic events driven by rational agents acting under constraints ought sometimes to focus more on the constraints and how they develop than on the rational action.

Here’s the history. Following World War II, the federal government became a far more important source of funding for university and private-sector science in the United States. Individual funding agencies differed in their patent policy; for instance, the Atomic Energy Commission essentially did not allow university scientists to patent the output of federally-funded research, whereas the Department of Defense permitted patents from their contractors. Patents were particularly contentious since over 90% of federal R&D in this period went to corporations rather than universities. Through the 1960s, the NIH began to fund more and more university science, and they hired a patent attorney in 1963, Norman Latker, who was very much in favor of private patent rights.

Latker received support for his position from two white papers published in 1968 which suggested that HEW (the parent of the NIH) was letting medical research languish because it wouldn’t grant exclusive licenses to pharma firms, who in turn argued that without an exclusive license they wouldn’t develop the research into a product. The politics of these reports gave Latker enough bureaucratic power to freely develop agreements with individual universities allowing them to retain patents in some cases. The rise of these agreements led many universities to hire patent officers, who would later organize into a formal lobbying group pushing for more ability to patent federally-funded research. Note essentially what is going on: individual actors or small groups take actions in each period which change the payoffs of future games (partly by incurring sunk costs) or introduce additional constraints (reports that limit the political space for patent opponents, for example). The eventual passage of Bayh-Dole, and its effects, necessarily depend on that sort of institution building, which is often left unmodeled in economic or political analysis. Of course, the full paper has much more detail about how this program came to be, and is worth reading in full.

Final version in Social Studies of Science (gated). I’m afraid I could not find an ungated copy.

“How do Patents Affect Follow-On Innovation: Evidence from the Human Genome,” B. Sampat & H. Williams (2014)

This paper, by Heidi Williams (who surely you know already) and Bhaven Sampat (who is perhaps best known for his almost-sociological work on the Bayh-Dole Act with Mowery), made quite a stir at the NBER last week. Heidi’s job market paper a few years ago, on the effect of openness in the Human Genome Project as compared to Celera, is often cited as an “anti-patent” paper. Essentially, she found that portions of the human genome sequenced by the HGP, which placed their sequences in the public domain, were much more likely to be studied by scientists and used in tests than portions sequenced by Celera, who initially required fairly burdensome contractual steps to be followed. This result was very much in line with research done by Fiona Murray, Jeff Furman, Scott Stern and others which also found that minor differences in openness or accessibility can have substantial impacts on follow-on use (I have a paper with Yasin Ozcan showing a similar result). Since the cumulative nature of research is thought to be critical, and since patents are a common method of “restricting openness”, you might imagine that Heidi and the rest of these economists were arguing that patents were harmful for innovation.

That may in fact be the case, but note something strange: essentially none of the earlier papers on open science are specifically about patents; rather, they are about openness. Indeed, on the theory side, Suzanne Scotchmer has a pair of very well-known papers arguing that patents effectively incentivize cumulative innovation if there are no transaction costs to licensing, no spillovers from sequential research, no incentive for early researchers to limit licenses in order to protect their existing business (consider the case of Armstrong and FM radio), and if potential follow-on innovators can be identified before they sink costs. That is a lot of conditions, but it’s not hard to imagine industries where inventions are clearly demarcated, where holders of basic patents are better off licensing than sitting on the patent (perhaps because potential licensees are not also competitors), and where patentholders are better off not bothering academics who technically infringe on their patent.

What industry might have such characteristics? Sampat and Williams look at gene patents. Incredibly, about 30 percent of human genes have sequences that are claimed under a patent in the United States. Are “patented genes” still used by scientists and developers of medical diagnostics after the patent grant, or is the patent enough of a burden to openness to restrict such use? What is interesting about this case is that the patentholder generally wants people to build on their patent. If academics find some interesting genotype-phenotype links based on their sequence, or if another firm develops a disease test based on the sequence, there are more rents for the patentholder to garner. In surveys, it seems that most academics simply ignore patents of this type, and most gene patentholders don’t interfere in research. Anecdotally, licenses between the sequence patentholder and follow-on innovators are frequent.

In general, however, it is really hard to know whether patents have any effect on anything; there is very little variation over time and space in patent strength. Sampat and Williams take advantage of two quasi-experiments. First, they compare applied-for-but-rejected gene patents to applied-for-but-granted ones. At least for gene patents, there is very little difference in terms of measurables before the patent office decision across the two classes. Clearly this is not true for patents as a whole – rejected patents are almost surely of worse quality – but gene patents tend to come from scientifically competent firms rather than backyard hobbyists, and tend to have fairly straightforward claims. Why are any rejected, then? The authors’ second trick is to look directly at patent examiner “leniency”. It turns out that some examiners have rejection rates much higher than others, despite roughly random assignment of patents within a technology class. Much of the difference in rejection probability is driven by the random assignment of examiners, which justifies the first rejected-vs-granted technique, and also suggests an instrumental variable to further investigate the data.
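
Here is a schematic of the examiner-leniency design on simulated data; it is not the authors' code, and every variable name and number is invented. The instrument is the assigned examiner's leave-one-out grant rate, and a hand-rolled 2SLS recovers the true (here, zero) effect of a grant on follow-on use even though naive OLS is biased by unobserved application quality.

```python
import numpy as np

rng = np.random.default_rng(0)
n_apps, n_examiners = 5000, 100
examiner = rng.integers(0, n_examiners, n_apps)            # roughly random assignment
leniency = rng.uniform(0.2, 0.8, n_examiners)              # examiner-specific grant propensity
quality = rng.normal(size=n_apps)                          # unobserved application quality
grant_prob = np.clip(leniency[examiner] + 0.15 * quality, 0, 1)
granted = (rng.uniform(size=n_apps) < grant_prob).astype(float)
# Follow-on use depends on quality but not on the grant itself (true effect = 0).
followon = 1.0 + 0.5 * quality + rng.normal(size=n_apps)

# Instrument: the assigned examiner's leave-one-out grant rate.
grant_sum = np.bincount(examiner, weights=granted, minlength=n_examiners)
grant_cnt = np.bincount(examiner, minlength=n_examiners)
z = (grant_sum[examiner] - granted) / (grant_cnt[examiner] - 1)

def ols(y, X):
    return np.linalg.solve(X.T @ X, X.T @ y)

def two_sls(y, d, z):
    """2SLS with a constant, instrumenting the endogenous regressor d with z."""
    Z = np.column_stack([np.ones_like(z), z])
    X = np.column_stack([np.ones_like(d), d])
    X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)           # first-stage fitted values
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

print("OLS grant coefficient: ", ols(followon, np.column_stack([np.ones_like(granted), granted]))[1])
print("2SLS grant coefficient:", two_sls(followon, granted, z)[1])   # close to the true zero
```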

With either technique, patent status essentially generates no difference in the use of genes by scientific researchers and diagnostic test developers. Don’t interpret this result as turning over Heidi’s earlier genome paper, though! There is now a ton of evidence that minor impediments to openness are harmful to cumulative innovation. What Sampat and Williams tell us is that we need to be careful in how we think about “openness”. Patents can be open if the patentholder has no incentive to restrict further use, if downstream innovators are easy to locate, and if there is no uncertainty about the validity or scope of a patent. Indeed, in these cases the patentholder will want to make it as easy as possible for follow-on innovators to build on their patent. On the other hand, patentholders are legally allowed to put all sorts of anti-openness burdens on the use of their patented invention by anyone, including purely academic researchers. In many industries, such restrictions are in the interest of the patentholder, and hence patents serve to limit openness; this is especially true where private sector product development generates spillovers. Theory as in Scotchmer-Green has proven quite correct in this regard.

One final comment: all of these types of quasi-experimental methods are always a bit weak when it comes to the extensive margin. It may very well be that individual patents do not restrict follow-on work on that patent when licenses can be granted, but at the same time the IP system as a whole can limit work in an entire technological area. Think of something like sampling in music. Because all music labels have large teams of lawyers who want every sample to be “cleared”, hip-hop musicians stopped using sampled beats to the extent they did in the 1980s. If you investigated whether a particular sample was less likely to be used conditional on its copyright status, you very well might find no effect, as the legal burden of chatting with the lawyers and figuring out who owns what may be enough of a limit to openness that musicians give up samples altogether. Likewise, in the complete absence of gene patents, you might imagine that firms would change their behavior toward research based on sequenced genes since the entire area is more open; this is true even if the particular gene sequence they want to investigate was unpatented in the first place, since having to spend time investigating the legal status of a sequence is a burden in and of itself.

July 2014 Working Paper (No IDEAS version). Joshua Gans has also posted a very interesting interpretation of this paper in terms of Coasean contractability.

“Agricultural Productivity and Structural Change: Evidence from Brazil,” P. Bustos et al (2014)

It’s been a while – a month of exploration in the hinterlands of the former Soviet Union, a move up to Canada, and a visit down to the NBER Summer Institute really put a cramp in my posting schedule. That said, I have a ridiculously long backlog of posts to get up, so they will be coming rapidly over the next few weeks. I saw today’s paper presented a couple days ago at the Summer Institute. (An aside: it’s a bit strange that there isn’t really any media presence at SI – the paper selection process results in a much better set of presentations than at the AEA or the Econometric Society meetings, which simply have too long a lag from the application date to the conference, and too many half-baked papers.)

Bustos and her coauthors ask, when can improvements in agricultural productivity help industrialization? An old literature assumed that any such improvement would help: the newly rich agricultural workers would demand more manufactured goods, and since manufactured and agricultural products are complements, rising agricultural productivity would shift workers into the factories. Kiminori Matsuyama wrote a model (JET 1992) showing the problem here: roughly, if in a small open economy productivity goes up in a good you have a Ricardian comparative advantage in, then you want to produce even more of that good. A green revolution which doubles agricultural productivity in, say, Mali, while keeping manufacturing productivity the same, will allow Mali to earn twice as much selling the agriculture overseas. Workers will then pour into the agricultural sector until the marginal product of labor is re-equated in both sectors.

Now, if you think that industrialization has a bunch of positive macrodevelopment spillovers (via endogenous growth, population control or whatever), then this is worrying. Indeed, it vaguely suggests that making villages more productive, an outright goal of a lot of RCT-style microdevelopment studies, may actually be counterproductive for the country as a whole! That said, there seems to be something strange going on empirically, because we do appear to see industrialization in countries after a Green Revolution. What could be going on? Let’s look back at the theory.

Implicitly, the increase in agricultural productivity in Matsuyama was “Hicks-neutral” – it increased the total productivity of the sector without affecting the relative marginal factor productivities. A lot of technological change, however, is factor-biased; to take two examples from Brazil, modern techniques that allow for double harvesting of corn each year increase the marginal productivity of land, whereas “Roundup Ready” GE soy that requires less tilling and weeding increases the marginal productivity of farmers. We saw above that Hicks-neutral technological change in agriculture increases labor in the farm sector: workers choosing where to work means that the world price of agriculture times the marginal product of labor in that sector must equal the world price of manufacturing times the marginal product of labor in manufacturing. A Hicks-neutral improvement in agricultural productivity raises the MPL in that sector no matter how much land or labor is currently being used, hence wage equality across sectors requires workers to leave the factory for the farm.

What of biased technological change? As before, the only thing we need to know is whether the technological change increases the marginal product of labor. Land-augmenting technical change, like double harvesting of corn, means a country can produce the same amount of output with the old amount of farm labor and less land. If one more worker shifts from the factory to the farm, she will be farming less marginal land than before the technological change, hence her marginal productivity of labor is higher than before the change, hence she will leave the factory. Land-augmenting technological change always increases the amount of agricultural labor. What about farm-labor-augmenting technological change like GM soy? If land and labor are not very complementary (imagine, in the limit, that they are perfect substitutes in production), then trivially the marginal product of labor increases following the technological change, and hence the number of farm workers goes up. The situation is quite different if land and farm labor are strong complements. Where previously we had 1 effective worker per unit of land, following the labor-augmenting technology change it is as if we have, say, 2 effective workers per unit of land. Strong complementarity implies that, at that point, adding even more labor to the farms is pointless: the marginal productivity of labor is decreasing in the technological level of farm labor. Therefore, labor-augmenting technology with a strongly complementary agriculture production function shifts labor off the farm and into manufacturing.
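
A quick numerical check of this argument, under an assumed CES production function with arbitrary parameters: doubling the labor-augmenting term raises the marginal product of farm labor when land and labor substitute easily, but lowers it when they are strong complements.

```python
def mpl(A_L, L=1.0, T=1.0, A_T=1.0, rho=0.5, a=0.5):
    """Marginal product of labor for CES output Y = [a(A_L*L)^rho + (1-a)(A_T*T)^rho]^(1/rho)."""
    inner = a * (A_L * L) ** rho + (1 - a) * (A_T * T) ** rho
    Y = inner ** (1.0 / rho)
    return Y ** (1.0 - rho) * a * A_L ** rho * L ** (rho - 1.0)

for rho, label in [(0.5, "substitutes (sigma = 2)"), (-3.0, "strong complements (sigma = 0.25)")]:
    before, after = mpl(1.0, rho=rho), mpl(2.0, rho=rho)
    print(f"{label}: MPL goes from {before:.3f} to {after:.3f} when A_L doubles")
```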

That’s just a small bit of theory, but it really clears things up. And even better, the authors find empirical support for this idea: following the introduction to Brazil of labor-augmenting GM soy and land-augmenting double harvesting of maize, agricultural productivity rose everywhere, the agricultural employment share rose in areas that were particularly suitable for modern maize production, and the manufacturing employment share rose in areas that were particularly suitable for modern soy production.

August 2013 working paper. I think of this paper as a nice complement to the theory and empirics in Acemoglu’s Directed Technical Change and Walker Hanlon’s Civil War cotton paper. Those papers ask how changes in factor prices endogenously affect the development of different types of technology, whereas Bustos and coauthors ask how the exogenous development of different types of technology affect the use of various factors. I read the former as most applicable to structural change questions in countries at the technological frontier, and the latter as appropriate for similar questions in developing countries.

Debraj Ray on Piketty’s Capital

As mentioned by Sandeep Baliga over at Cheap Talk, Debraj Ray has a particularly interesting new essay on Piketty’s Capital in the 21st Century. If you are theoretically inclined, you will find Ray’s comments to be one of the few reviews of Piketty that proves insightful.

I have little to add to Ray, but here are four comments about Piketty’s book:

1) The data collection effort on inequality by Piketty and coauthors is incredible and supremely interesting; not for nothing does Saez-Piketty 2003 have almost 2000 citations. Much of this data can be found in previous articles, of course, but it is useful to have it all in one place. Why it took so long for this data to become public, compared to things like GDP measures, is an interesting question, one the sociologist Dan Hirschman is currently working on. Incidentally, the data quality complaints by the Financial Times seem to me of rather limited importance to the overall story.

2) The idea that Piketty is some sort of outsider, as many in the media want to make him out to be, is very strange. His first job was at literally the best mainstream economics department in the entire world, he won the prize given to the best young economist in Europe, he has published a paper in a Top 5 economics journal every other year since 1995, his most frequent coauthor is at another top mainstream department, and that coauthor himself won the prize for the best young economist in the US. It is also simply not true that economists only started caring about inequality after the 2008 financial crisis; rather, Autor and others were writing on inequality well before that date, in response to clearer evidence that the “Great Compression” of the income distribution in the developed world during the middle of the 20th century had begun to reverse itself sometime in the 1970s. Even I coauthored a review of income inequality data in late 2006/early 2007!

3) As Ray points out quite clearly, the famous “r>g” of Piketty’s book is not an explanation for rising inequality. There are lots of standard growth models – indeed, all standard growth models that satisfy dynamic efficiency – where r>g holds with no impact on the income distribution. Ray gives the Harrod model: let output be produced solely by capital, and let the capital-output ratio be constant. Then Y=r*K, where r is the return to capital net of depreciation, or the capital-output ratio K/Y=1/r. Now savings in excess of that necessary to replace depreciated assets is K(t+1)-K(t), or

Y(t+1)[K(t+1)/Y(t+1)] – Y(t)[K(t)/Y(t)]

Holding the capital-output ratio constant, savings are [Y(t+1)-Y(t)][K/Y]=gY(t)[K/Y], so the savings rate is s=g[K/Y], where g is the growth rate of the economy. Finally, since K/Y=1/r in the Harrod model, we have s=g/r, and hence r>g will hold in a Harrod model whenever the savings rate is less than 100% of current income. This model, however, has nothing to do with the distribution of income. Ray notes that the Phelps-Koopmans theorem implies that a similar r>g result will hold along any dynamically efficient growth path in much more general models.
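
A tiny numerical restatement of Ray's example, with arbitrary numbers of my own choosing: fixing the capital-output ratio forces s = g/r, so r exceeds g whenever less than all of income is saved.

```python
def harrod(s=0.3, g=0.02, periods=5, Y0=100.0):
    """Harrod accounting: output produced solely by capital, with K/Y fixed at 1/r."""
    r = g / s                  # implied return, since s = g * (K/Y) = g / r
    K, Y = Y0 / r, Y0
    for _ in range(periods):
        K = K + s * Y          # all savings become new capital
        Y_next = r * K         # output produced by capital alone
        assert abs(Y_next / Y - (1 + g)) < 1e-9   # the economy grows at exactly rate g
        Y = Y_next
    return r, g

print(harrod())   # r = 0.067 > g = 0.02 precisely because s = 0.3 < 1
```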

You may wonder, then, how we can have r>g and yet not have exploding income held by the capital-owning class. Two reasons: first, as Piketty has pointed out, r in these economic models (the return to capital, full stop) and r in the sense important to growing inequality are not the same concept, since wars and taxes lower the r received by savers. Second, individuals presumably also dissave according to some maximization concept. Imagine an individual has $1 billion, the risk-free market return after taxes is 4%, and the economy-wide growth rate is 2%, with both numbers exogenously holding forever. It is of course true that this individual could increase their share of the economy’s wealth without bound. Even with the caveat that as the capital-owning class owns more and more, surely the portion of r due to time preference, and hence r itself, will decline, we still oughtn’t conclude that income inequality will become worse or that capital income will increase. If this representative rich individual simply consumes about 1.92% of their wealth (principal plus this year’s return) each year – a savings rate of over 98 percent of that position! – the ratio of income among the idle rich to national income will remain constant. What’s worse, if some of the savings is directed to human capital rather than physical capital, as is clearly true for the children of the rich in the US, the ratio of capital income to overall income will be even less likely to grow.
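
Here is a quick simulation of that consumption rule (my own arithmetic, not Ray's or Piketty's): with r = 4% and g = 2%, consuming the fraction 1 - (1+g)/(1+r), about 1.92%, of the gross position each year makes the rich household's wealth, and hence its capital income, grow at exactly the rate of national income, so the ratio stays flat.

```python
def wealth_share(r=0.04, g=0.02, years=50, W0=1e9, Y0=1e12):
    """Track a rich household's wealth relative to national income under the
    rule 'consume a fixed share of principal plus this year's return'."""
    consume_rate = 1 - (1 + g) / (1 + r)       # about 0.0192 for r = 4%, g = 2%
    W, Y = W0, Y0
    for _ in range(years):
        W = W * (1 + r) * (1 - consume_rate)   # save the rest of the gross position
        Y = Y * (1 + g)                        # national income grows at g
    return consume_rate, (W / Y) / (W0 / Y0)

print(wealth_share())   # consumption rate ~0.0192, ratio of wealth shares stays ~1.0
```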

These last couple paragraphs are simply an extended argument that r>g is not a “Law” that says something about inequality, but rather a starting point for theoretical investigation. I am not sure why Piketty does not want to do this type of investigation himself, but the book would have been better had he done so.

4) What, then, does all this mean about the nature of inequality in the future? Ray suggests an additional law: that there is a long-run tendency for capital to replace labor. This is certainly true, particularly if human capital is counted as a form of “capital”. I disagree with Ray about the implication of this fact, however. He suggests that “to avoid the ever widening capital-labor inequality as we lurch towards an automated world, all its inhabitants must ultimately own shares of physical capital.” Consider the 19th century as a counterexample. There was enormous technical progress in agriculture. If you wanted a dynasty that would be rich in 2014, ought you have invested in agricultural land? Surely not. There has been enormous technical progress in RAM chips and hard drives in the last couple decades. Is the capital related to those industries where you ought to have invested? No. With rapid technical progress in a given sector, the share of total income generated by that sector tends to fall (see Baumol). Even when the share of total income is high, the social surplus of technical progress is shared among various groups according to the old Ricardian rule: rents accrue to the (relatively) fixed factor! Human capital which is complementary to automation, or goods which can maintain a partial monopoly in an industry complementary to those affected by automation, are much likelier sources of riches than owning a bunch of robots, since robots and the like are replicable and hence the rents accrued to their owners, regardless of the social import, will be small.

There is still a lot of work to be done concerning the drivers of long-run inequality, by economists and by those more concerned with political economy and sociology. Piketty’s data, no question, is wonderful. Ray is correct that the so-called Laws in Piketty’s book, and the predictions about the next few decades that they generate, are of less interest.

A Comment on Thomas Piketty, inclusive of appendix, is in pdf form, or a modified version in html can be read here.

On Gary Becker

Gary Becker, as you must surely know by now, has passed away. This is an incredible string of bad luck for the University of Chicago. With Coase and Fogel having passed recently, and Director, Stigler and Friedman dying a number of years ago, perhaps Lucas and Heckman are the only remaining giants from Chicago’s Golden Age.

Becker is of course known for using economic methods – by which I mean constrained rational choice – to expand economics beyond questions of pure wealth and prices to questions of interest to social science at large. But this contribution is too broad, and he was certainly not the only one pushing such an expansion; the Chicago Law School clearly was doing the same. For an economist, Becker’s principal contribution can be summarized very simply: individuals and households are producers as well as consumers, and rational decisions in production are as interesting to analyze as rational decisions in consumption. As firms must purchase capital to realize their productive potential, humans must purchase human capital to improve their own possible utilities. As firms take actions today which alter constraints tomorrow, so do humans. These may seem to be trite statements, but they are absolutely not: human capital, and dynamic optimization of fixed preferences, offer a radical framework for understanding everything from topics close to Becker’s heart, like educational differences across cultures or the nature of addiction, to the great questions of economics like how the world was able to break free from the dreadful Malthusian constraint.

Today, the fact that labor can augment itself with education is taken for granted, which is a huge shift in how economists think about production. Becker, in his Nobel Prize speech: “Human capital is so uncontroversial nowadays that it may be difficult to appreciate the hostility in the 1950s and 1960s toward the approach that went with the term. The very concept of human capital was alleged to be demeaning because it treated people as machines. To approach schooling as an investment rather than a cultural experience was considered unfeeling and extremely narrow. As a result, I hesitated a long time before deciding to call my book Human Capital, and hedged the risk by using a long subtitle. Only gradually did economists, let alone others, accept the concept of human capital as a valuable tool in the analysis of various economic and social issues.”

What do we gain by considering the problem of human capital investment within the household? A huge amount! By using human capital along with economic concepts like “equilibrium” and “private information about types”, we can answer questions like the following. Does racial discrimination wholly reflect differences in tastes? (No – because of statistical discrimination, underinvestment in human capital by groups that suffer discrimination can be self-fulfilling, and, as in Becker’s original discrimination work, different types of industrial organization magnify or ameliorate tastes for discrimination in different ways.) Is the difference between men and women in traditional labor roles a biological matter? (Not necessarily – with gains to specialization, even very small biological differences can generate very large behavioral differences.) What explains many of the strange features of labor markets, such as jobs with long tenure, firm boundaries, etc.? (Firm-specific human capital requires investment, and following that investment there can be scope for hold-up in a world without complete contracts.) The parenthetical explanations in this paragraph require completely different policy responses from previous, more naive explanations of the phenomena at play.

Personally, I find human capital most interesting in understanding the Malthusian world. Malthus conjectured the following: as productivity improves for some reason, excess food will appear. With excess food, people will have more children and population will grow, necessitating even more food. To generate more food, people will begin farming marginal land, until we wind up with precisely the living standards per capita that prevailed before the productivity improvement. We know, by looking out our windows, that the world in 2014 has broken free from Malthus’ dire calculus. But how? The critical factor must be that as productivity improves, population does not grow, or else grows more slowly than the continued endogenous increases in productivity. Why might that be? The quantity-quality tradeoff. A productivity improvement generates surplus, leading to demand for non-agricultural goods. Increased human capital generates more productivity in those goods. Parents have fewer kids but invest more heavily in their human capital so that they can work in the new sector. Such substitution is only partial, so in order to get wealthy, we need a big initial productivity improvement to generate demand for the goods in the new sector. And thus Malthus is defeated by knowledge.

Finally, a brief word on the origin of human capital. The idea that people take deliberate and costly actions to improve their productivity, and that formal study of this object may be useful, is modern: Mincer and Schultz in the 1950s, and then Becker with his 1962 article and famous 1964 book. That said, economists (to the chagrin of some other social scientists!) have treated humans as a type of capital for much longer. A fascinating 1966 JPE [gated] traces this early history. Petty, Smith, Senior, Mill, von Thunen: they all thought an accounting of national wealth required accounting for the productive value of the people within the nation, and 19th century economists frequently mention that parents invest in their children. These early economists made such claims knowing they were controversial; Walras clarifies that in pure theory “it is proper to abstract completely from considerations of justice and practical expediency” and to regard human beings “exclusively from the point of view of value in exchange.” That is, don’t think we are imagining humans as being nothing other than machines for production; rather, human capital is just a useful concept when discussing topics like national wealth. Becker, unlike the caricature where he is the arch-neoliberal, was absolutely not the first to “dehumanize” people by rationalizing decisions like marriage or education in a cost-benefit framework; rather, he is great because he was the first to show how powerful an analytical concept such dehumanization could be!
