Bart Lipman, a great micro theorist back at my alma mater Boston University, has argued that the lack of interest among economists in language is puzzling. We write volumes on signaling games, on contracts and their incompleteness, on game-theoretic message spaces, yet in the world we are describing, all of these things are expressed in natural language. The present paper has to do with vagueness, where a statement is vague if it describes a set that is not well-defined, and precise otherwise. That is, “obese” meaning “BMI of 30+” is precise (intervals can be precise!), whereas “large” is not. Lipman’s paper has been floating around for at least a decade and is still unpublished, despite a lot of interesting ideas. Perhaps the fantastic abstract has something to do with it: “Abstract: I don’t know.”
I have three points about vagueness that are worth keeping in mind before discussing Lipman’s result. First, vagueness is not a property of language itself. That is, we generally choose to use vague terms like “tall” or “red” even though the English language allows us to say “6 foot 5.2 inches” or to define a color in terms of CMYK. Second, vagueness is often determined by who is speaking to whom. Two NBA scouts will describe a potential signee as “7 foot 2” whereas you or I would just say he is tall. We would probably just say he is tall even if we knew his actual height in inches. Third, vagueness is not solely determined by the identity of the speaker and listener, but by the context. Two graphic designers working on a project may describe precisely the color they wish to use, but when out for a walk may simply say that the sunset was a lovely red that evening.
Lipman imagines the following model. I have a message space of some fixed cardinality, say 2. I observe a signal h from a distribution H (say, someone’s height). My observation may be a point or a subjective distribution, so we are not assuming that height is precisely known by the speaker. The speaker then chooses a message from the message space and sends it to the listener. The listener then takes an action conditional on the message. For example, the speaker may need the listener to pick up a friend at the airport, and want to describe the friend’s height. Both players have the same utility function, so there is no conflict of interest, though see Blume and Board (2009) for an interesting discussion of “strategic” vagueness.
In this story, a vague statement is just a mixed strategy conditional on the signal. For example, imagine the message space is “tall” and “short”. A vague language has the speaker say “tall” when the friend is above six feet, say “short” if below five foot eight, and play a mixed strategy if the height is in between. A precise strategy picks some cutoff, says “tall” if the height is above the cutoff, and “short” otherwise. A one-line proof shows that the precise language always gives higher utility to both players than the vague language.
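Lipman’s comparison is easy to check numerically. Here is a minimal sketch, with the caveat that all the specifics are my own illustrative assumptions rather than anything from the paper: heights uniform on 60–80 inches, a shared quadratic loss, a precise cutoff at 70 inches, and a vague language that mixes 50/50 between 68 and 72 inches. In each case the listener best-responds with the conditional expectation of height given the message.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from Lipman's paper): heights in inches,
# uniform on [60, 80]; both players get utility -(action - height)^2.
h = rng.uniform(60, 80, 200_000)

def expected_utility(prob_tall):
    """Common expected utility when the speaker says "tall" with the given
    signal-dependent probability and the listener best-responds with
    E[height | message]."""
    tall = rng.random(h.size) < prob_tall(h)
    eu = 0.0
    for mask in (tall, ~tall):
        action = h[mask].mean()                 # listener's optimal action
        eu += -((h[mask] - action) ** 2).sum()  # quadratic loss, summed
    return eu / h.size

# Precise language: hard cutoff at 70 inches.
precise = expected_utility(lambda x: np.where(x >= 70, 1.0, 0.0))

# Vague language: "short" below 68, "tall" above 72, a coin flip in between.
vague = expected_utility(
    lambda x: np.where(x < 68, 0.0, np.where(x > 72, 1.0, 0.5))
)

print(f"precise: {precise:.2f}, vague: {vague:.2f}")
```

The mixing spreads middle heights across both messages, raising the within-message variance, so the precise language comes out ahead, exactly as the one-line proof says it must.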
So what explains vagueness when interests are aligned? It’s not a matter of using a more limited vocabulary: in the example above, “tall” and “short” are the only words in both the vague and the precise cases. It’s also not a matter of context-dependent flexibility. In both the vague and precise cases, we still need some sense of what tall means when referring to coffee and what tall means when referring to NBA players. It’s not even a matter of the impossibility of precision: the word “tall” can precisely refer to an interval, or precisely refer to a distribution; in any case, the first and third uses of vagueness I mentioned above seem to militate against the idea that vagueness simply happens because we can’t measure or speak precisely.
There are a few better stories. First, a lot of vagueness is not really vague, as the computer scientist Kees van Deemter, among others, has pointed out. When we say exercise is “good for young and old”, the phrases “young” and “old” are vague in and of themselves, but as a whole, the phrase precisely means “everybody”. Nouns affect the meaning of adjectives. A second response is that people use vague speech because they have a vague understanding of the world; that is, people do not actually form, say, a Savage-style subjective probability distribution over the height of whomever they are talking about. This is roughly Lipman’s best stab at an explanation, but given that the same people often vary between vague and precise language, I don’t find it terribly convincing. A better reason, also due to van Deemter and a game theorist named Rohit Parikh, is that vagueness can actually help search. Imagine asking someone to grab a blue book for you, and imagine that we have slight perceptual differences in how we see color. If blue is precisely defined, then your friend will first look through your blue books (as he perceives them), and if he does not see the book you want, will have to search through the rest of your collection at random. If blue is vaguely defined, your friend will first look through all the books he considers “bluish”, and only after doing that will search the rest of your collection. When there is a sufficient lack of overlap in our conceptions of blue, the vague search will be quicker.
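The van Deemter–Parikh search intuition can be illustrated with a toy simulation. Everything here is a hypothetical parameterization of my own, not theirs: book hues on a 0–360 color wheel, the owner’s “blue” as the band 200–250, the searcher perceiving every hue shifted by 20 degrees, and “bluish” as a wider 180–270 band. The searcher checks his candidate set first, then the rest of the shelf at random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: 360 books, hues uniform on a 0-360 color wheel,
# and the searcher perceives every hue shifted +20 degrees from the owner.
N, SHIFT, TRIALS = 360, 20, 2000

def in_band(hue, lo, hi):
    return (hue >= lo) & (hue <= hi)

def books_inspected(band):
    """Average number of books inspected before finding the target:
    first a random pass over the searcher's candidate set for `band`,
    then the remaining books in random order."""
    total = 0
    for _ in range(TRIALS):
        owner_hue = rng.uniform(0, 360, N)
        searcher_hue = (owner_hue + SHIFT) % 360
        # The owner asks for a book that *he* perceives as blue (200-250).
        target = rng.choice(np.flatnonzero(in_band(owner_hue, 200, 250)))
        candidates = np.flatnonzero(in_band(searcher_hue, *band))
        rest = np.setdiff1d(np.arange(N), candidates)
        order = np.concatenate([rng.permutation(candidates),
                                rng.permutation(rest)])
        total += np.flatnonzero(order == target)[0] + 1
    return total / TRIALS

precise = books_inspected((200, 250))  # searcher's strict "blue"
vague = books_inspected((180, 270))    # searcher's looser "bluish"

print(f"precise search: {precise:.1f} books, vague search: {vague:.1f} books")
```

With the perceptual shift, the strict “blue” pass often misses the target entirely and the searcher falls back to scanning the whole shelf; the “bluish” pass is larger but reliably contains the target, so vague search inspects fewer books on average.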
So where to go from here? The intuition about “bluishness” is very tough to incorporate into a standard state-space model of knowledge and learning – perhaps we somehow need to incorporate fuzzy logic. A lot of useful results can come from this, though. Applying the above idea to, for example, communication within firms, I think you can learn a lot about why certain types of communication are used at different times. I have the beginnings of a paper along these lines, but any comments about modeling vagueness are vastly appreciated.
http://people.bu.edu/blipman/Papers/vague5.pdf (Nov. 2009 working paper.) Much of the discussion here is informed by a great follow-up by a computer scientist named Kees van Deemter, who has written a lot about vague speech. His paper is called “Utility and Language Generation: The Case of Vagueness”. Ariel Rubinstein has also written an interesting (and free to download!) book called Economics and Language; it is worth looking through, but I wouldn’t say it’s Rubinstein’s best work. Admittedly, matching the quality of Rubinstein’s best is difficult for anyone, even Rubinstein himself!
Hi, it would be more interesting if he talked about the vagueness of economic research, like his own.
Making a model completely detached from psychological knowledge about human behavior is clearly not very important.
Economic research is taking a road of leaving psychology behind, but psychology is all that economists have to describe human behavior, so why shouldn’t we use it more often?
Here are my two cents.
Once upon a time I aimed to write a paper relating modern economic thinking (particularly the idea of equilibrium) to Aristotelian concepts of physics. But I stopped studying philosophy and am no longer able to pursue this project. Anyway, I think it may help you.
Aristotle conceived of heavy and light not as quantitative variables, but as qualitative ones. Something was heavy if it tended to go toward the center of the earth, and light if it tended to go up to the sky.
Now, you may be asking, what the hell does all this have to do with vagueness? Well, my hunch is that we are vague because we see things in a qualitative way, not in a quantitative way. Actually, we have to think hard to think in a quantitative way (only after Galileo and Descartes did we start to approach science as quant guys).
So, if I were you, I would read how Aristotle explains these ideas – or at least some commentary on Aristotle, because reading Aristotle directly is a real pain.
You may even connect this with emergent properties of phenomena. For instance, the transition of water from liquid to solid and vice versa is a quantitative phenomenon, but we only observe the qualitative difference. So we are vague because that’s how we process the world around us.
I hope it may help you,
That’s an interesting statement, Monoel. Can revealed preferences be vague as well?