In the real world, there is a lot of information that simply can’t be contracted on, whether for legal reasons, because the relevant information can’t be verified, or because contracting itself is too costly. Nonetheless, we still see attempts to maintain incentive structures even without contracts when a relationship is repeated: consider a worker in a long-term relationship with a firm who expects a bonus, given at the firm’s discretion, each year. Jon Levin – fair bet for this year’s Clark? – calls these “relational contracts,” where the incentive not to break the implicit equilibrium contract comes from a desire to avoid being minmax-punished for the rest of the game. What might an optimal relational contract look like if this is the only incentive agents have not to deviate ex post, under various informational assumptions?
In general, this is a very difficult problem; even today, fully specifying general optimal dynamic mechanisms has made little progress since Laffont and Tirole (1988). Levin has a clever trick, though, that imports some intuition from the auction theory literature. If every actor in the game is risk-neutral with quasilinear utility, then there is no scope for risk-offloading in the optimal contract, and further, simple money transfers in any period can stand in, in one shot, for potentially complicated multi-period reward and punishment strategies. In particular, if any self-enforcing contract can achieve total average surplus per period s, then any outcome giving each player at least her minmax, with total surplus no greater than s, is achievable in equilibrium. This is not just a variation on Fudenberg, Levine and Maskin’s 1994 folk theorem for repeated games (since the discount rate is arbitrary here); it comes simply from making one of the actors pay the other a lump sum in period one: incentives at all future times do not change, and each actor still gets at least her minmax, so the equilibrium remains. But now note that the maximum social surplus is achievable with a stationary incentive structure, meaning incentives that depend only on current-period variables. The reason is that if I’m going to maintain some incentive with a complicated string of future rewards and punishments, those rewards and punishments have an equilibrium expected value to the actor. By risk-neutrality and quasilinearity, I can just transfer the expected discounted sum of that string to (or from) the actor in the current period. A brief argument ensures that in equilibrium, since the principal’s action is perfectly observed by both parties, there is no reason the principal would destroy or create social surplus in the future, so the total social surplus is just a fixed value to be shifted between the two players.
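The lump-sum transfer argument can be sketched in a couple of lines; the notation below is mine, for illustration, not Levin’s own:

```latex
% Sketch of the transfer argument, in illustrative notation.
% A self-enforcing contract gives the agent and the principal continuation
% values $(u, \pi)$ with $u + \pi = S$, $u \ge \bar{u}$, $\pi \ge \bar{\pi}$
% (the outside-option/minmax values).
Take any alternative split $(u', \pi')$ with
\[
  u' + \pi' = S, \qquad u' \ge \bar{u}, \qquad \pi' \ge \bar{\pi}.
\]
A single date-zero transfer $t = u' - u$ from the principal to the agent
implements it: play from period one onward is untouched, so every future
incentive constraint still holds, and each party still clears her outside
option. Likewise, any history-contingent continuation reward $W_{t+1}$ can be
replaced by the current transfer $\delta\,\mathbb{E}\!\left[W_{t+1}\right]$:
under risk-neutrality and quasilinearity the actor's expected payoff from any
action is unchanged, so incentives are preserved period by period.
```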
With these nice properties in hand, it turns out that optimal relational contracts have a relatively simple form. With perfect information, the relevant constraints are that neither the principal nor the agent wants to walk away from the promised continuation utility, which is just the discounted sum of all future stage-game payoffs in excess of the outside option. The IC constraint inducing optimal effort for the agent is the usual one, but there is also a dynamic enforcement constraint: the largest total payment to the agent in any period minus the smallest total contingent payment must be bounded, since otherwise one of the actors would rather walk away and take her outside option forever at the end of the current period than pay the specified bonus transfer. This limitation on incentives is essentially the cost of not being able to contract.
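The dynamic enforcement constraint is easy to state numerically. The sketch below uses my own toy numbers and illustrative names (`max_bonus_spread` and its arguments are not from the paper):

```python
# Dynamic enforcement constraint, numerically (illustrative notation): the
# spread between the largest and smallest contingent payment in any period
# cannot exceed the discounted value the two sides jointly place on keeping
# the relationship alive, or someone walks away rather than pay.

def max_bonus_spread(delta, surplus, outside_agent, outside_principal):
    """Largest payment variation a self-enforcing contract can support."""
    joint_gain = surplus - outside_agent - outside_principal
    return delta / (1 - delta) * joint_gain

# Toy numbers: per-period surplus 10, outside options worth 3 and 4.
spread = max_bonus_spread(delta=0.9, surplus=10.0,
                          outside_agent=3.0, outside_principal=4.0)
print(round(spread, 1))   # 27.0 with these numbers

# A promised bonus beyond this spread is not credible: whoever owes it
# prefers to renege and take the outside option forever.
print(35.0 <= spread)     # False -> such a bonus is unenforceable
```

Note how patience matters: as `delta` falls, the supportable spread shrinks toward zero, and with it the strength of any relational incentive.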
What about the limited information cases, moral hazard and adverse selection? Let the agent have a cost of production that is unobservable by the principal, and let the agent choose a level of effort which is observable. Make the standard assumptions on the cost function that allow full separation of types in the static hidden-information problem. The lack of contracts in the dynamic problem gives a highest-total-surplus equilibrium where equilibrium effort for all types is lower than the first-best. By self-selection arguments, getting more effort from a higher-cost type means raising the slope of the bonus schedule in effort. But the total variation in incentives is bounded as described above. So sometimes, relatively high-cost types are all pooled at a suboptimal level of effort.
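A stylized way to see the pooling result, with entirely made-up numbers (the payment “ladder” and cap below are my illustration, not Levin’s actual program):

```python
# Full separation of cost types needs a ladder of payments: each lower-cost
# type must be paid enough more than its neighbor that mimicking the
# neighbor's lower effort is unattractive.  The dynamic enforcement
# constraint caps the total payment spread, so when the ladder is too tall
# the top (highest-cost) rungs collapse: those types pool at one effort.

required_gaps = [4.0, 3.0, 2.0]   # toy payment gaps between adjacent types
cap = 6.0                          # toy cap on total payment variation

spread_used, separated = 0.0, 1    # the lowest-cost type is always served
for gap in required_gaps:
    if spread_used + gap > cap:
        break                      # no room left: remaining types pool
    spread_used += gap
    separated += 1

pooled = len(required_gaps) + 1 - separated
print(separated, pooled)           # 2 types separated, 2 types pooled here
```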
If there is moral hazard rather than hidden information (the agent’s cost is observed by everyone, but not the agent’s effort), then assuming the standard Rogerson conditions so the first-order approach can be used to solve the program, risk-neutrality allows us to use a “one-step” incentive scheme: if output is high enough, pay the maximal bonus; otherwise, pay the minimal one.
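A quick numerical check that a one-step bonus gives the risk-neutral agent an interior effort choice. Everything here (normal noise, quadratic cost, the particular bonus and threshold) is my toy parameterization, not the paper’s model:

```python
import math

# One-step bonus: pay B if output y = e + noise clears a threshold ybar,
# pay nothing otherwise.  A risk-neutral agent with effort cost e**2/2
# maximizes  B * P(e + noise >= ybar) - e**2/2.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_payoff(e, bonus=4.0, ybar=1.0):
    return bonus * (1.0 - normal_cdf(ybar - e)) - e**2 / 2.0

# Grid search for the agent's best response to the step contract.
efforts = [i / 100 for i in range(301)]
best = max(efforts, key=expected_payoff)
print(0 < best < 3)   # True: the step contract induces interior effort
```

The step shape is what matters: with risk-neutral parties, all the incentive power can be concentrated at a single output threshold rather than spread over a complicated payment schedule.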
A couple of final notes. In the case of subjective performance measures (only the principal observes final output, which has some stochastic component), the optimal contract is a termination contract: if output is sufficiently low, terminate the job; otherwise, pay a bonus. The reason termination is necessary is that the principal must be punished for trying to cheat the agent by reporting low output, and terminating the job punishes the principal by giving him only his outside option forever. Second, there is no worry here about unrealistically using an infinite game, since we discount: an exogenous chance of the contract ending in any period can simply be folded into the discount rate, and by risk-neutrality this reinterpretation changes nothing.
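The truth-telling logic behind termination can be checked with toy numbers (mine, purely for illustration): the principal’s report must not affect his own total payoff, or he will always claim output was low.

```python
# Subjective evaluation: only the principal sees output, so his report must
# leave him indifferent, or he will always report low.  Toy numbers.

delta = 0.9
principal_share = 2.0                         # principal's per-period surplus
V_p = delta / (1 - delta) * principal_share   # value of continuing, to him

# Without termination he keeps V_p whatever he reports, so any positive
# bonus makes lying strictly profitable:
bonus = 5.0
payoff_report_high = -bonus + V_p   # pay the bonus, relationship continues
payoff_report_low = 0.0 + V_p       # pay nothing, relationship continues
print(payoff_report_low > payoff_report_high)   # True: he lies

# With termination after a low report, calibrating bonus = V_p restores
# truth-telling: paying the bonus and losing the relationship cost the same.
payoff_report_low_term = 0.0        # no bonus paid, but future surplus gone
print(-V_p + V_p == payoff_report_low_term)     # True: indifferent
```

The indifference is exactly why surplus must be burned: the agent can only be punished by an outcome (termination) that costs the principal too.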
http://www.stanford.edu/~jdlevin/Papers/RIC.pdf (Final version, AER 2003)