The game got its name from the following hypothetical situation: imagine two criminals arrested on suspicion of having committed a crime together. However, the police do not have sufficient proof to convict them. The two prisoners are isolated from each other, and the police visit each of them and offer a deal: the one who offers evidence against the other will be freed. If neither of them accepts the offer, they are in effect cooperating against the police, and both of them will get only a small punishment for lack of proof; they both gain. However, if one of them betrays the other by confessing to the police, the defector gains more, since he is freed; the one who remained silent, on the other hand, receives the full punishment, since he did not help the police and there is now sufficient proof. If both betray, both will be punished, but less severely than if they had refused to talk. The dilemma resides in the fact that each prisoner has a choice between only two options, but cannot make a good decision without knowing what the other will do.
Action of A \ Action of B    Cooperate            Defect
Cooperate                    Fairly good [+5]     Bad [-10]
Defect                       Good [+10]           Mediocre [0]
Table: outcomes for actor A (in words, and in hypothetical "points") depending on the combination of A's action and B's action, in the "prisoner's dilemma" game situation. A similar scheme applies to the outcomes for B.
In the mid-1920's, the Nebraska State Police achieved what may still be
their finest moment. After a 400-mile car chase over dirt roads and cornfields,
they finally caught up with the notorious bank robbers Bonnie and Clyde. The
two criminals were brought back to the police station in Omaha for further
interrogation.
Bonnie and Clyde were questioned in separate rooms, and each was offered the
same deal by the police. The deal went as follows (since both are the same, we
need only describe the version presented to Bonnie):
``Bonnie, here's the offer that we are making to both you and Clyde. If you
both hold out on us, and don't confess to bank robbery, then we admit that we
don't have enough proof to convict you. However, we will be able to
jail you both for one year, for reckless driving and endangerment of corn. If
you turn state's witness and help us convict Clyde (assuming he doesn't
confess), then you will go free, and Clyde will get twenty years in prison. On
the other hand, if you don't confess and Clyde does, then he will go
free and you will get twenty years.''
``What happens if both Clyde and I confess?'' asked Bonnie.
``Then you both get ten years,'' said the interrogator.
Bonnie, who had been a math major at Cal Tech before turning to crime,
reasoned this way: ``Suppose Clyde intends to confess. Then if I don't confess,
I'll get twenty years, but if I do confess, I'll only get ten years. On the
other hand, suppose Clyde intends to hold out on the cops. Then if I don't
confess, I'll go to jail for a year, but if I do confess, I'll go free. So no
matter what Clyde intends to do, I am better off confessing than holding out.
So I'd better confess.''
Naturally, Clyde employed the very same reasoning. Both criminals confessed, and both went to jail for ten years.^{1} The police, of course, were triumphant, since the criminals would have been free in a year had both remained silent.
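Bonnie's dominance argument can be checked mechanically. Here is a minimal Python sketch (the dictionary layout and names are illustrative, not part of the story); lower numbers are better, since the payoffs are years in prison:

```python
# Years Bonnie serves, indexed by (Bonnie's move, Clyde's move).
# The numbers come from the interrogator's offer in the story.
years = {
    ("confess", "confess"): 10,
    ("confess", "hold out"): 0,
    ("hold out", "confess"): 20,
    ("hold out", "hold out"): 1,
}

# For each thing Clyde might do, find Bonnie's best reply:
# the move that minimizes her prison time.
best_reply = {}
for clyde in ("confess", "hold out"):
    best_reply[clyde] = min(("confess", "hold out"),
                            key=lambda bonnie: years[(bonnie, clyde)])
```

Whatever Clyde does, `best_reply` comes out as "confess": confessing is a dominant strategy for Bonnie, just as she reasoned.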
By the time Tucker invented the Prisoners' Dilemma, game theory was already a growing field. But most of the earlier work had focused on a special class of games: zero-sum games.
In his earliest work, von Neumann made a striking discovery. He found that if poker players maximize their rewards, they do so by bluffing; and, more generally, that in many games it pays to be unpredictable. This was not qualitatively new, of course; baseball pitchers were throwing change-up pitches before von Neumann wrote about mixed strategies. But von Neumann's discovery was a bit more than just that. He discovered a unique and unequivocal answer to the question "how can I maximize my rewards in this sort of game?" without any markets, prices, property rights, or other institutions in the picture. It was a major extension of the concept of absolute rationality in neoclassical economics. But von Neumann had bought his discovery at a price. The price was a strong simplifying assumption: von Neumann's discovery applied only to zero-sum games.
For example, consider the children's game of "Matching Pennies." In this game, the two players agree that one will be "even" and the other will be "odd." Each one then shows a penny. The pennies are shown simultaneously, and each player may show either a head or a tail. If both show the same side, "even" wins the penny from "odd"; if they show different sides, "odd" wins the penny from "even." Here is the payoff table for the game, with payoffs listed as (Even's payoff, Odd's payoff):

                  Odd
                  Head        Tail
Even    Head      1, -1       -1, 1
        Tail      -1, 1       1, -1

If we add up the payoffs in each cell, we find 1 + (-1) = 0. This is a "zero-sum game."
DEFINITION: Zero-Sum Game If we add up the wins and losses in a game, treating losses as negatives, and we find that the sum is zero for each set of strategies chosen, then the game is a "zero-sum game."
In less formal terms, a zerosum game is a game in which one player's winnings equal the other player's losses. Do notice that the definition requires a zero sum for every set of strategies. If there is even one strategy set for which the sum differs from zero, then the game is not zero sum.
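The "zero sum for every set of strategies" requirement is easy to test in code. Here is a minimal Python sketch, assuming the game is stored as a mapping from strategy pairs to payoff pairs (an illustrative layout, using the matching pennies payoffs from above):

```python
# Matching pennies: payoffs are (Even's payoff, Odd's payoff).
matching_pennies = {
    ("head", "head"): (1, -1),
    ("head", "tail"): (-1, 1),
    ("tail", "head"): (-1, 1),
    ("tail", "tail"): (1, -1),
}

def is_zero_sum(game):
    """A game is zero-sum only if the payoffs sum to zero in every cell."""
    return all(sum(payoffs) == 0 for payoffs in game.values())
```

A single cell whose payoffs sum to something other than zero makes `is_zero_sum` return False, exactly as the definition requires.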
In a two-person zero-sum game, each of the two players is given a choice between several prescribed strategies at each turn, and each player's loss is equal to the other player's gain.
The payoff matrix of a two-person zero-sum game has rows labeled by the row player's strategies and columns labeled by the column player's strategies. When each player has two strategies, I and II, the matrix has the form

                 Column player
                 I        II
Row       I      a        b
player    II     c        d

The a entry of the matrix is the payoff that accrues to the row player in the event that the row player uses strategy I and the column player uses strategy I. The b entry is the row player's payoff when the row player uses strategy I and the column player uses strategy II. The c entry is the row player's payoff when the row player uses strategy II and the column player uses strategy I. The d entry is the row player's payoff when the row player uses strategy II and the column player uses strategy II.
Fundamental Principles of Game Theory
When analyzing any game, we make the following assumptions about both players:
· Each player makes the best possible move.
· Each player knows that his or her opponent is also making the best possible move.
A player uses a pure strategy if he or she uses the same strategy at each round of the game.
A saddle point is a payoff that is simultaneously a row minimum and a column maximum. To locate saddle points, circle the row minima and box the column maxima. The saddle points are those entries that are both circled and boxed.
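The circle-and-box procedure translates directly into code. This is a minimal Python sketch, assuming the payoff matrix is stored as a list of rows of payoffs to the row player:

```python
def saddle_points(matrix):
    """Return (row, col) positions of entries that are both a row
    minimum and a column maximum (payoffs to the row player)."""
    points = []
    for i, row in enumerate(matrix):
        for j, entry in enumerate(row):
            column = [r[j] for r in matrix]
            # The "circled and boxed" test from the text.
            if entry == min(row) and entry == max(column):
                points.append((i, j))
    return points
```

On the matching pennies payoffs to Even, [[1, -1], [-1, 1]], the function returns an empty list: the game has no saddle point, which is why mixed strategies are needed later on.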
Another Example
Here is another example of a zero-sum game. It is a very simplified model of price competition. Like Augustin Cournot (writing in the 1840's) we will think of two companies that sell mineral water. Each company has a fixed cost of $5000 per period, regardless of whether it sells anything or not. We will call the companies Perrier and Apollinaris, just to take two names at random.
The two companies are competing for the same market and each firm must choose a high price ($2 per bottle) or a low price ($1 per bottle). Here are the rules of the game:
1) At a price of $2, 5000 bottles can be sold for a total revenue of $10000.
2) At a price of $1, 10000 bottles can be sold for a total revenue of $10000.
3) If both companies charge the same price, they split the sales evenly between them.
4) If one company charges a higher price, the company with the lower price sells the whole amount and the company with the higher price sells nothing.
5) Payoffs are profits: revenue minus the $5000 fixed cost.
Here is the payoff table for these two companies:

                         Perrier
                         Price=$1         Price=$2
Apollinaris   Price=$1   0, 0             5000, -5000
              Price=$2   -5000, 5000      0, 0
(Verify for yourself that this is a zero-sum game.) For two-person zero-sum games, there is a clear concept of a solution. The solution to the game is given by the maximin criterion: each player chooses the strategy that maximizes her minimum payoff. In this game, Apollinaris' minimum payoff at a price of $1 is zero, and at a price of $2 it is -5000, so the $1 price maximizes the minimum payoff. The same reasoning applies to Perrier, so both will choose the $1 price. Here is the reasoning behind the maximin solution: Apollinaris knows that whatever she loses, Perrier gains; so whatever strategy she chooses, Perrier will choose the strategy that gives the minimum payoff for that row. Perrier reasons conversely.
SOLUTION: Maximin criterion For a two-person, zero-sum game it is rational for each player to choose the strategy that maximizes the minimum payoff, and the pair of strategies and payoffs such that each player maximizes her minimum payoff is the "solution to the game."
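The maximin criterion is equally mechanical. Here is a minimal Python sketch, using the row player's payoffs from the mineral water game (the variable names are illustrative):

```python
def maximin(rows):
    """Return (best_row_index, guaranteed_payoff) for the row player:
    the strategy whose worst-case payoff is largest."""
    worst = [min(row) for row in rows]   # worst case of each strategy
    best = max(range(len(rows)), key=lambda i: worst[i])
    return best, worst[best]

# Apollinaris' payoffs: row 0 is Price=$1, row 1 is Price=$2.
apollinaris = [[0, 5000], [-5000, 0]]
```

`maximin(apollinaris)` picks row 0, the $1 price, with a guaranteed payoff of zero, matching the argument in the text.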
A game is strictly determined if it has at least one saddle point. The following statements are true about strictly determined games:
· All saddle points in a game have the same payoff value.
· Choosing the row and column through any saddle point gives optimal strategies for both players.
· The value of a strictly determined game is the value of the saddle point entry. A fair game has a value of zero; otherwise it is unfair or biased.
A mixed strategy is a method of playing a game where the rows or columns are played at random so that each is used a given fraction of the time.
We represent a mixed strategy for a player by the pair of probability values (p, 1 - p) or (q, 1 - q), where p and q represent the fraction of the time strategy I is played (or the probability of using that strategy).
The formulas for evaluating p and q are:

p = (d - c)/(a - b - c + d)    and    q = (d - b)/(a - b - c + d)

The expected value of the game with payoff matrix P corresponding to the mixed strategies S = (p, 1 - p) and T = (q, 1 - q) is given by

e = S P T (with S written as a row and T as a column),    or equivalently    e = (ad - bc)/(a - b - c + d)
The expected value of the game is the average payoff per round if each player uses the associated mixed strategies for a large number of rounds.
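These 2x2 formulas can be wrapped in a short function. This is a minimal Python sketch using the a, b, c, d entries defined earlier (payoffs to the row player); it assumes the game has no saddle point, so the denominator is nonzero:

```python
def solve_2x2(a, b, c, d):
    """Optimal mixing probabilities and expected value for a 2x2
    zero-sum game with payoff matrix [[a, b], [c, d]] (row player)."""
    denom = a - b - c + d
    p = (d - c) / denom            # row player's probability of strategy I
    q = (d - b) / denom            # column player's probability of strategy I
    e = (a * d - b * c) / denom    # expected value of the game
    return p, q, e
```

For matching pennies from Even's point of view, with a = d = 1 and b = c = -1, this yields p = q = 0.5 and an expected value of zero.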
Now let's look back at the game of matching pennies. It appears that this game does not have a unique solution. The minimum payoff for each of the two pure strategies is the same: -1. But this is not the whole story. This game can have more than two strategies. In addition to the two obvious strategies, head and tail, a player can "randomize" her strategy by offering either a head or a tail, at random, with specific probabilities. Such a randomized strategy is called a "mixed strategy." The obvious two strategies, heads and tails, are called "pure strategies." There are infinitely many mixed strategies, corresponding to the infinitely many ways probabilities can be assigned to the two pure strategies.
DEFINITION: Mixed strategy If a player in a game chooses among two or more strategies at random according to specific probabilities, this choice is called a "mixed strategy."
The game of matching pennies has a solution in mixed strategies, and it is to offer heads or tails at random with probabilities 0.5 each way. Here is the reasoning: if odd offers heads with any probability greater than 0.5, then even can have better than even odds of winning by offering heads with probability 1. On the other hand, if odd offers heads with any probability less than 0.5, then even can have better than even odds of winning by offering tails with probability 1. The only way odd can get even odds of winning is to choose a randomized strategy with probability 0.5, and there is no way odd can get better than even odds. The 0.5 probability maximizes the minimum payoff over all pure or mixed strategies. And even can reason the same way (reversing heads and tails) and come to the same conclusion, so both players choose 0.5.
An optimal mixed strategy for the row player is a mixed strategy for which the lowest expected payoff (over all possible column player mixed strategies) is as large as possible. An optimal mixed strategy for the column player is a mixed strategy for which the highest expected payoff (over all possible row player mixed strategies) is as small as possible.
The process of finding optimal mixed strategies for both players is called solving the game.
Here are some facts about optimal mixed strategies:
· In a strictly determined game, the optimal mixed strategies are pure strategies, and are the optimal pure strategies.
· Every game has optimal mixed strategies for both players.
· The value of a game is its expected value if optimal mixed strategies are used. A fair game has a value of zero; otherwise it is unfair or biased.
Von Neumann's Discovery
We can now say more exactly what von Neumann's discovery was. Von Neumann showed that every two-person zero-sum game has a maximin solution, in mixed if not in pure strategies. This was an important insight, but it probably seemed more important at the time than it does now. In limiting his analysis to two-person zero-sum games, von Neumann had made a strong simplifying assumption. Von Neumann was a mathematician, and he had used the mathematician's approach: take a simple example, solve it, and then try to extend the solution to the more complex cases. But the mathematician's approach did not work as well in game theory as it does in some other cases. Von Neumann's solution applies unequivocally only to "games" that share the zero-sum property. Because of this assumption, von Neumann's brilliant solution was and is applicable only to a small proportion of all "games," serious and nonserious. Arms races, for example, are not zero-sum games: both participants can and often do lose. The Prisoners' Dilemma, as we have already noticed, is not a zero-sum game, and that is the source of a major part of its interest. Economic competition is not a zero-sum game. It is often possible for most players to win; in principle, economics is a win-win game. Environmental pollution and the overexploitation of resources, again, tend to be lose-lose games: it is hard to find a winner in the destruction of most of the world's ocean fisheries in the past generation. Thus, von Neumann's solution does not, without further work, apply to these serious interactions.
The serious interactions are instances of "nonconstant sum games," since the winnings and losses may add up differently depending on the strategies the participants choose. It is possible, for example, for rival nations to choose mutual disarmament, save the cost of weapons, and both be better off as a result, so the sum of the winnings is greater in that case. In economic competition, increasing division of labor, specialization, investment, and improved coordination can increase "the size of the pie," leading to "that universal opulence which extends itself to the lowest ranks of the people," in the words of Adam Smith. In cases of environmental pollution, the benefits to each individual from the polluting activity are so swamped by others' losses from polluting activity that all can lose, as we have often observed.
Poker and baseball are zero-sum games. It begins to seem that the only zero-sum games are literal games that human beings have invented, and made zero-sum, for our own amusement. "Games" that are in some sense natural are nonconstant sum games. And even poker and baseball are somewhat unclear cases. A "friendly" poker game is zero-sum, but in a casino game the house takes a proportion of the pot, so the sum of the winnings is smaller the more the players bet. And even in the friendly game, we are considering only the money payoffs, not the thrill of gambling and the pleasure of the social event, without which presumably the players would not play. When we take those rewards into account, even gambling games are not really zero-sum.
Von Neumann and Morgenstern hoped to extend their analysis to nonconstant sum games with many participants, and they proposed an analysis of these games. However, the problem was much more difficult, and while a number of solutions have been proposed, there is no one generally accepted mathematical solution for nonconstant sum games. To put it a little differently, there seems to be no clear answer to the question, "Just what is rational in a nonconstant sum game?" The well-defined rational policy in neoclassical economics, maximization of reward, is extended to zero-sum games but not to the more realistic category of nonconstant sum games.
The maximin strategy is a "rational" solution to all two-person zero-sum games. However, it is not a solution for nonconstant sum games. The difficulty is that there are a number of different solution concepts for nonconstant sum games, and none is clearly the "right" answer in every case. The different solution concepts may overlap, though. We have already seen one possible solution concept for nonconstant sum games: the dominant strategy equilibrium. Let's take another look at the example of the two mineral water companies. Their payoff table was:

                         Perrier
                         Price=$1         Price=$2
Apollinaris   Price=$1   0, 0             5000, -5000
              Price=$2   -5000, 5000      0, 0
We saw that the maximin solution was for each company to cut price to $1. That is also a dominant strategy equilibrium. It's easy to check: Apollinaris can reason that Perrier will either cut to $1 or not. If Perrier cuts, Apollinaris is better off cutting to $1 to avoid a loss of $5000. But if Perrier doesn't cut, Apollinaris can earn a profit of $5000 by cutting. Perrier can reason in the same way, so cutting is a dominant strategy for each competitor.
But this is, of course, a very simplified, even unrealistic, conception of price competition. Let's look at a more complicated, perhaps more realistic pricing example:
Another Price Competition Example
Following a long tradition in economics, we will think of two companies selling "widgets" at a price of one, two, or three dollars per widget. The payoffs are profits, after allowing for costs of all kinds, and are shown in the table below as (Widgeon's profit, Acme's profit). The general idea behind the example is that the company that charges a lower price will get more customers and thus, within limits, more profits than the high-price competitor. (This example follows one by Warren Nutter.)

                             Acme Widgets
                             p=1          p=2          p=3
Widgeon Widgets    p=1       0, 0         50, -10      40, -20
                   p=2       -10, 50      20, 20       90, 10
                   p=3       -20, 40      10, 90       50, 50
We can see that this is not a zero-sum game. Profits may add up to 100, 20, 40, or zero, depending on the strategies that the two competitors choose. Thus, the maximin solution does not apply. We can also see fairly easily that there is no dominant strategy equilibrium. Widgeon company can reason as follows: if Acme were to choose a price of $3, then Widgeon's best price is $2, but otherwise Widgeon's best price is $1; neither strategy is dominant.
We will need another, broader concept of equilibrium if we are to do anything with this game. The concept we need is called the Nash Equilibrium, after Nobel Laureate (in economics) and mathematician John Nash. Nash, a student of Tucker's, contributed several key concepts to game theory around 1950. The Nash Equilibrium conception was one of these, and is probably the most widely used "solution concept" in game theory.
DEFINITION: Nash Equilibrium If there is a set of strategies with the property that no player can benefit by changing her strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute a Nash equilibrium.
Let's apply that definition to the widget-selling game. First, for example, we can see that the strategy pair p=$3 for each player (bottom right) is not a Nash equilibrium. From that pair, each competitor can benefit by cutting price if the other player keeps her strategy unchanged. Or consider the bottom middle, where Widgeon charges $3 but Acme charges $2. From that pair, Widgeon benefits by cutting to $1. In this way, we can eliminate every strategy pair except the upper left, at which both competitors charge $1.
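The elimination argument can be automated by checking every cell for profitable unilateral deviations. Here is a minimal Python sketch, with the widget game's (Widgeon, Acme) payoffs entered row by row:

```python
# Rows: Widgeon's price p=1, 2, 3. Columns: Acme's price p=1, 2, 3.
widget = [
    [(0, 0),    (50, -10), (40, -20)],
    [(-10, 50), (20, 20),  (90, 10)],
    [(-20, 40), (10, 90),  (50, 50)],
]

def pure_nash(game):
    """Return (row, col) cells where neither player can gain by
    changing strategy while the other stands pat."""
    equilibria = []
    for i, row in enumerate(game):
        for j, (u_row, u_col) in enumerate(row):
            # Best the row player could do against this column,
            # and best the column player could do against this row.
            best_row = max(game[k][j][0] for k in range(len(game)))
            best_col = max(payoff[1] for payoff in row)
            if u_row == best_row and u_col == best_col:
                equilibria.append((i, j))
    return equilibria
```

Running `pure_nash(widget)` eliminates every cell except (0, 0), the pair in which both companies charge $1, in agreement with the hand elimination above.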
We see that the Nash equilibrium in the widget-selling game is a low-price, zero-profit equilibrium. Many economists believe that result is descriptive of real, highly competitive markets, although there is, of course, a great deal about this example that is still "unrealistic."
Let's go back and take a look at the dominant-strategy equilibrium in the Prisoners' Dilemma. It, too, is a Nash equilibrium. In fact, any dominant strategy equilibrium is also a Nash equilibrium. The Nash equilibrium is an extension of the concepts of dominant strategy equilibrium and of the maximin solution for zero-sum games.
It would be nice to say that that answers all our questions. But, of course, it does not.
Here is just the first of the questions it does not answer: could there be more than one Nash equilibrium in the same game? And what if there were more than one?
If you wish to learn more about game theory, there are a variety of good books on the topic. For the complete novice, I would recommend Thinking Strategically by A. Dixit and B. Nalebuff (Norton, 1991). This is good general reading. If you are interested in getting deeper into the subject (at roughly the undergraduate level) you should try Game Theory with Economic Applications by H. Bierman and L. Fernandez (Addison-Wesley, 1993). The next step up is the graduate-level text Game Theory by D. Fudenberg and J. Tirole from MIT Press. There are also two other advanced textbooks worth taking a look at: Fun and Games: A Text on Game Theory by K. Binmore (D.C. Heath, 1992), a very focused and mathematical treatment aimed at the advanced undergraduate, and R. Myerson's Game Theory: Analysis of Conflict from Harvard University Press. Also of interest is D. Kreps' A Course in Microeconomic Theory (Princeton University Press, 1990). This is a general-purpose graduate-level textbook in microeconomics, but the treatment of game-theoretic issues is especially good.