The mathematician and game theorist David Blackwell, who to my mind is the most important African-American in the history of economics, passed away this week. Incredibly, early in his career, both Princeton (when he was there as a fellow) and Berkeley (where he was offered a position by the mathematics department) denied him the opportunity to teach because of his race, and therefore much of his early work was done while teaching at historically black colleges.
Aside from “The Big Match,” which will be discussed presently, Blackwell shows up in economics most often for two reasons. First are “Blackwell games”, or (countably) infinitely repeated simultaneous-move zero-sum games. Despite their importance in economics – society itself is basically a Blackwell game! – I am unaware of any general proof of determinacy (that is, the existence of a value V such that player 1 can guarantee at least V, and player 2 can keep him from getting more than V) for such games. Eran Shmaya, here at Kellogg, has a recent working paper which I believe represents the weakest restriction on Blackwell games for which a value is known to exist.
Blackwell also shows up very frequently in economics for his “Blackwell sufficiency” conditions (in his 1965 “Discounted Dynamic Programming”): an operator satisfying monotonicity and discounting is a contraction mapping, and therefore has a unique fixed point by the Banach fixed point theorem. In particular, many types of games have simple proofs of the existence of a value using Blackwell's conditions.
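A minimal numerical sketch of why these conditions matter (the rewards `R`, transitions `P`, and discount factor here are entirely made up for illustration): the Bellman operator of a discounted dynamic program satisfies both monotonicity and discounting, so successive value-iteration steps contract in the sup norm at rate beta.

```python
import numpy as np

# Toy discounted dynamic program (hypothetical 3-state, 2-action example).
beta = 0.9                       # discount factor = contraction modulus
R = np.array([[1.0, 0.0],        # R[s, a]: reward in state s from action a
              [0.0, 2.0],
              [0.5, 0.5]])
P = np.array([                   # P[a][s, s']: transition probabilities
    [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3], [0.0, 0.3, 0.7]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],
])

def bellman(v):
    # (Tv)(s) = max_a { R[s, a] + beta * E[v(s') | s, a] }
    return np.max(R + beta * np.stack([P[a] @ v for a in range(2)], axis=1),
                  axis=1)

v = np.zeros(3)
gaps = []                        # sup-norm distance between successive iterates
for _ in range(50):
    v_next = bellman(v)
    gaps.append(np.abs(v_next - v).max())
    v = v_next
# Because T is a beta-contraction, each gap is at most beta times the last,
# so the iterates converge to the unique fixed point: the value function.
```

Blackwell's conditions are what let you skip checking the contraction inequality directly: monotonicity and discounting are usually trivial to verify for a Bellman operator.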
Less known is that, as far as I know, Blackwell’s 1954 book “Theory of Games and Statistical Decisions” is the origin of the modern concept of a strategy. He describes a strategy, in the context of chess, as a list of instructions given to a deputy who, given those instructions, plays the game for you. Such instructions clearly must specify what is to be done under any contingency. Game theory under uncertainty was in its infancy when the book was released, and Blackwell’s insistence that a strategy include instructions even for branches of the game tree that will never be reached is particularly important in the context of such games.
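Blackwell's deputy metaphor translates directly into code: a strategy is simply a table naming a move after every possible history, including histories that will never be reached. A toy sketch (the two-move game, the actions, and the names `strategy` and `deputy` are all invented for illustration):

```python
from itertools import product

ACTIONS = ("a", "b")

# The deputy's complete instructions: a move after EVERY possible pair of
# prior moves, even pairs that our own earlier play would rule out.
strategy = {history: ("a" if history.count("a") >= 1 else "b")
            for history in product(ACTIONS, repeat=2)}

def deputy(history):
    # The deputy needs no further guidance: every contingency is covered.
    return strategy[history]
```

The point is that `strategy` has an entry for all four histories, not just the ones that arise on the path of play.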
So back to “the Big Match”. Consider the following repeated game in which each player maximizes his average payoff across periods. In each stage, players simultaneously choose 0 or 1. If they match, Player 1 wins the stage; otherwise Player 2 wins it. However, Player 1 can also try, in any period, for a “Big Match”: if the two choose the same number when Player 1 is going for a Big Match, the game ends, with Player 1 getting payoff 1 forever and Player 2 getting zero; if they don’t match, Player 2 gets payoff one forever and Player 1 zero. In particular, if Player 1 chooses “1”, that is a Big Match attempt, while if she chooses “0”, that is just a normal match.
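These rules can be written as a one-stage transition function (a sketch; `big_match_step` is my own name, returning Player 1's stage payoff and a flag for whether the game has absorbed):

```python
def big_match_step(a1, a2):
    """One stage of the Big Match.

    a1, a2: the simultaneous choices (0 or 1) of Players 1 and 2.
    Returns (Player 1's stage payoff, absorbed), where absorbed means the
    game has ended and that stage payoff is frozen forever."""
    if a1 == 1:                            # Player 1 goes for the Big Match
        return (1, True) if a2 == 1 else (0, True)
    return (1, False) if a2 == 0 else (0, False)
```

For example, `big_match_step(1, 0)` returns `(0, True)`: a failed Big Match ends the game with Player 2 winning forever.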
It’s pretty clear that Player 2 can guarantee .5 by just tossing a coin every day: whatever Player 1 does, she wins any given stage (or any Big Match attempt) only half the time. What’s interesting about this game, though, is Player 1’s value. Player 1 can guarantee .5N/(N+1) for any natural number N, but doing so requires knowledge of the entire history of the game (so there is no Markovian strategy), and Player 1 can come epsilon-close to, but cannot attain, the value .5. The proof is not terribly long, but here I’ll just give some idea why stationary strategies cannot work for Player 1. If Player 1 chooses “0” with probability one, then Player 2 can always play “1”, never match, and win every stage. If Player 1 chooses “1” with some fixed probability p>0, then Player 2 can always play “0”: eventually Player 1 will attempt a Big Match, fail to match, and hand Player 2 payoff one for the rest of time (hence average payoff 1 in the limit). Either way, Player 1 gets average payoff 0, which is much worse than the value of the game, .5. One further note: the existence of a value in average-payoff zero-sum stochastic games was eventually proved by Mertens and Neyman (1981). I see a note in Nature that Nicolas Vieille at HEC Paris has proven, in a very complicated way, the solvability of non-zero-sum stochastic games, but I do not have a cite; perhaps a reader can add one in the comments.
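To see the coin-flip guarantee numerically, here is a small Monte Carlo sketch (the horizon, strategies, and the names `avg_payoff`, `always_zero`, and `stationary` are all my own, chosen for illustration): whatever strategy Player 1 uses, a fair coin by Player 2 holds her to roughly .5 on average.

```python
import random

def avg_payoff(p1_strategy, runs=2000, T=200, seed=0):
    """Average Player 1's long-run payoff over many plays of the Big Match
    against a Player 2 who tosses a fair coin every period.
    p1_strategy(rng, t) returns 0 or 1; playing 1 attempts the Big Match."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        payoff = 0
        for t in range(T):
            a1 = p1_strategy(rng, t)
            a2 = rng.randint(0, 1)            # Player 2's fair coin
            if a1 == 1:                       # Big Match attempt: absorbing
                payoff += (T - t) if a1 == a2 else 0
                break
            payoff += 1 if a1 == a2 else 0    # ordinary period: win on a match
        total += payoff / T
    return total / runs

# Two illustrative strategies for Player 1: never attempt the Big Match,
# and attempt it with a fixed probability each period.
always_zero = lambda rng, t: 0
stationary = lambda rng, t: 1 if rng.random() < 0.2 else 0
```

Both strategies average about .5 against the coin flip: each ordinary stage is won with probability one half, and a Big Match attempt succeeds with probability one half as well.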
http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.aoms/1177698513 (available open access from Project Euclid, as are all Annals of Mathematical Statistics articles from before the journal's split into the Annals of Probability and the Annals of Statistics)
Here is a link to Nicolas’ papers http://www.springerlink.com/content/10428qu45474378h/ (this is the first of two papers). Nicolas’ theorem is for two-player stochastic games. The question of the existence of equilibrium in stochastic games with more than two players is still open.
A note about Blackwell games — the existence of value in games with simultaneous moves was proven by Donald Martin in 1998. My paper is about more general monitoring structures.
Thanks Eran…and I read that Martin paper earlier this year, which just shows how bad my memory is!
Throughout the second half of the 20th century, Dave Blackwell was a role model for many young African American academics with a mathematical bent, and I am no exception. I taught at the U of Pittsburgh from ’76 to ’88 and was fortunate enough to be at the ceremony where he was given an honorary doctorate at Carnegie-Mellon. I introduced myself and we have been friends ever since.
I have met few individuals as gentle and as fundamentally decent as David Blackwell.
Senior Scholar (ret.)
The Carnegie Foundation
University of North Carolina, Greensboro