


Game Theory and Von Neumann's Minimax Theorem

Megan Hall
Foundations of Mathematics
Interim 2011
Dr. Kinney

Abstract: We encounter many situations of conflict on a day-to-day basis. When trying to find a solution, we consider all the possible consequences of our choices. Sometimes the outcome of a situation depends on the decisions of others; this is where the discussion of game theory begins. Game theory seeks to construct a mathematical model for these decision-making processes. The players in a game want to find the best outcome for themselves; is it possible to guarantee a particular outcome, no matter what the other player does? Perhaps the most important result of game theory is the Minimax Theorem, proved by John von Neumann. He showed that in a game with two players, in which the loss of one is the gain of the other, there are strategies each player should use that result in the best outcome for both players. Since this theorem was proved, a number of extensions have also been found.

Table of Contents
I. Introduction to Game Theory
II. Definitions and Examples
III. The Minimax Theorem
IV. Linear Programming
V. Minimax Theorem Proof
VI. Illustration of the Minimax Theorem for a Two-Person, Zero-Sum Game
VII. Computation of Optimal Strategies
VIII. Classic Example of a Two-Person, Zero-Sum Game
IX. Extensions of the Minimax Theorem
X. Conclusion
XI. Resources

I. Introduction to Game Theory

In everyday life, a "game" usually refers to a pastime: card games, board games, or athletic games come to mind. Consider, however, the game of chess. This is a game that involves much more than entertainment.
It requires seeking strategies to gain an advantage over the other player. It is in these situations, where the outcome of the game depends on the strategies used by the players, that we may begin a conversation about game theory. Life requires us to make choices in many domains, whether social, personal, economic, or otherwise. The purpose of game theory is to develop a mathematical model for these decision processes, or situations of conflict.

The first mathematical work in game theory appeared in 1912 as an article by Ernst Zermelo, in which he proved that the game of chess is determined: either White has a strategy that always wins, or Black has a strategy that always wins, or each player has a strategy that guarantees at least a tie; exactly one of these holds. However, his result did not state which of the three actually occurs, and therefore has no practical application. Emile Borel laid the foundations of game theory in 1921, posing the question of which situations have an optimal strategy and how to find it. John von Neumann proved the first theorem of game theory in 1928. This theorem established that in two-person zero-sum games (games in which one player's gain equals the other player's loss) with perfect information (each player knows the possible strategies, and resulting consequences, of the other player), there exists a minimax point: a pair of strategies at which each player minimizes his maximum loss, and which is therefore optimal for both players. The Minimax Theorem, proved by von Neumann, is the pivotal result in the mathematical theory of games, for it forms the foundation of a long string of further results. Von Neumann extended and improved the theorem in later work, building up to the publication of Theory of Games and Economic Behavior, co-written with the economist Oskar Morgenstern in 1944.

II.
Definitions and Examples

Before a proof of the Minimax Theorem is developed, it is necessary to cover the fundamentals of game theory and introduce important definitions. A game will be technically defined as a situation of conflict. In a game, the participants are called players. For our purposes, we will focus on games with only two players, call them I and II. Conflict exists because each player wants to improve his circumstances by gaining as much as possible, to the loss of the other player. Each player has strategies to use, one at a time, in an attempt to achieve the desired outcome.

Let us consider a simple game called Penny-Matching to illustrate some of these terms. Suppose players I and II each hold a penny and display them simultaneously. If the pennies match (both show heads or both show tails), then Player I keeps both pennies, thereby gaining one penny. Otherwise, the pennies do not match, and Player II keeps the pennies. In this game, each player has two strategies, "heads" or "tails." The game can be summarized in the following matrix:

             H        T
    H     (1,-1)   (-1,1)
    T     (-1,1)   (1,-1)

This matrix is said to be in normal form. Each row represents a strategy of Player I and each column represents a strategy of Player II. Each entry in the matrix is called a payoff vector, with the first component representing the payoff to Player I and the second component the payoff to Player II. For example, the entry (1,-1) implies a gain of 1 for Player I and a gain of -1 (that is, a loss of 1) for Player II. The matrix can be represented more simply as

             H    T
    H        1   -1
    T       -1    1

An entry of 1 is a gain and -1 a loss for Player I; since Player I gains a penny exactly as Player II loses a penny, Player II's payoff is the opposite of Player I's. Therefore, an entry of 1 implies that Player I gained 1 and Player II lost 1. This is why we include only the first component of the payoff vector; we will call this simply the payoff.
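This reduction from payoff vectors to a single payoff matrix can be sketched in a few lines (an illustrative sketch; the variable names are our own). A game is zero-sum exactly when each payoff vector sums to zero, in which case keeping only the first components loses no information:

```python
# Penny-Matching in normal form: each entry is (payoff to I, payoff to II).
bimatrix = [[(1, -1), (-1, 1)],
            [(-1, 1), (1, -1)]]

# Zero-sum check: every payoff vector sums to zero.
is_zero_sum = all(p1 + p2 == 0 for row in bimatrix for (p1, p2) in row)
print(is_zero_sum)   # True

# Keep only Player I's component to get the simpler payoff matrix.
payoff_matrix = [[p1 for (p1, _) in row] for row in bimatrix]
print(payoff_matrix)   # [[1, -1], [-1, 1]]
```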
It is understood that the second player gives this amount to the first player. This game is said to be zero-sum, since everything won by a player must be lost by someone else.

Let us assume a general knowledge of matrix algebra. We can now make a few general statements. A two-person zero-sum game can be reduced to a matrix, A, with as many rows as Player I has strategies and as many columns as Player II has strategies. Thus, the payoff when Player I chooses his ith strategy and Player II chooses his jth strategy is the element $a_{ij}$ in the ith row and jth column of the matrix. The players are interested in deciding which strategy is best in terms of optimizing their payoff. An element $a_{ij}$ will be an equilibrium point if and only if it is the largest in its column and the smallest in its row. An element of this kind is called a saddle point. For example, the game matrix

    $\begin{pmatrix} 0 & -1 \\ 2 & 3 \end{pmatrix}$

has a saddle point in the second row and the first column. Naturally, Player I would choose his second strategy in an attempt to maximize his payoff, and Player II, choosing between paying two units or three, would choose his first strategy to minimize how much is lost. Notice that two is the smallest in its row and the largest in its column. The game matrix

    $\begin{pmatrix} 5 & 1 & 3 \\ 3 & 2 & 4 \\ -3 & 0 & 1 \end{pmatrix}$    (1.1)

has a saddle point in the second row and the second column. However, the game matrix of the Penny-Matching game,

    $\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$,

has no such point. In the case where all the entries of a matrix are equal, each entry is considered a saddle point. If a game has a saddle point, it is said to be strictly determined, i.e. there exists an entry that is simultaneously the largest in its column and the smallest in its row, and both players do best to use the corresponding strategies.

Let us suppose that Players I and II are actually engaged in a game. In choosing strategies, Player I chooses, in effect, a row, i, while Player II chooses a column, j. The payoff is given by entry $a_{ij}$ of the game matrix, A.
Since this is the amount that Player I will receive from Player II, and assuming the players are playing rationally, Player I will want to maximize $a_{ij}$, while Player II wants to minimize it. However, neither player knows exactly which strategy the other will choose.

Now suppose we are considering the game (1.1). In this game, Player I would probably decide to play his first or second strategy, since these have higher payoffs than the third strategy. Player I may reason in this way: "I should use either my first or second strategy, but if Player II thinks I will do this, he will choose his second strategy, so I had better choose my second strategy." Even if Player II correctly guesses Player I's strategy, Player I is better off sticking with his chosen strategy. Thus, the fact that this game matrix has a saddle point plays a significant role in how the players choose their strategies.

The majority of games, however, do not contain saddle points. Consider the following game matrix:

    $\begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$    (1.2)

There is no element $a_{ij}$ that is the greatest in its column and the smallest in its row, so we cannot tell immediately what will happen in this game. But let us assume that Player II will correctly guess whatever Player I decides. (When the players are informed of each other's strategies, the game is said to have perfect information.) In this case, Player I will choose his first strategy, so that he cannot win less than two units, whereas with the second strategy he might win as little as one unit. We call this win of at least two units the "gain-floor", denoted by $v_1'$:

    $v_1' = \max_i \{ \min_j a_{ij} \}$

Player II should choose, under the same conditions, the second strategy, giving a "loss-ceiling" of three units, meaning that no matter which strategy Player I chooses, Player II can choose his strategy so that his maximum loss will be three units. The loss-ceiling will be denoted $v_2'$:

    $v_2' = \min_j \{ \max_i a_{ij} \}$

Now we have the idea of a gain-floor and a loss-ceiling.
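These two quantities can be read directly off the payoff matrix. The following short sketch (illustrative; the helper names `gain_floor` and `loss_ceiling` are our own) checks them for the game (1.2) and for the earlier 2x2 example with a saddle point:

```python
def gain_floor(A):
    # Player I: for each row take the worst (smallest) payoff, then pick the best row.
    return max(min(row) for row in A)

def loss_ceiling(A):
    # Player II: for each column take the worst (largest) payoff, then pick the best column.
    n = len(A[0])
    return min(max(row[j] for row in A) for j in range(n))

# Game (1.2): no saddle point, so the gain-floor is strictly below the loss-ceiling.
A = [[4, 2], [1, 3]]
print(gain_floor(A), loss_ceiling(A))   # 2 3

# The earlier example with a saddle point: the two quantities coincide at 2.
B = [[0, -1], [2, 3]]
print(gain_floor(B), loss_ceiling(B))   # 2 2
```

When the two numbers agree, their common value sits at a saddle point of the matrix.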
It would be absurd for Player I's gain-floor to be greater than Player II's loss-ceiling, so we have

    $v_1' \le v_2'$    (1.3)

If there is equality in (1.3), then there is a saddle point; if not, the game has no saddle point. In that case, we cannot be certain what will happen, but we can say that Player I should not win less than $v_1'$ and Player II should not lose more than $v_2'$. If the players are allowed to know each other's strategies, then the best Players I and II can do is the gain-floor and loss-ceiling, respectively. If the players want to do better than this, the only option is to refuse to let the opponent know the strategy chosen. However, this is difficult when the players are playing rationally, since it is easy for each opponent to reconstruct for himself the reasoning behind the other player's choices. This fact seems to suggest that the players should choose their strategies irrationally; but then there would be no point to all this analysis. The answer is that the strategies should be chosen at random (the irrational piece), but the randomization should be chosen rationally. This is the idea behind a mixed strategy. The players select their strategies "randomly" so as to keep the opponent guessing and deny him any opportunity to predict their moves.

A mixed strategy for Player I is formally defined as a list of numbers $x = (x_1, x_2, \ldots, x_m)$ such that $0 \le x_i \le 1$ for each $i = 1, 2, \ldots, m$ and $\sum_{i=1}^m x_i = 1$, where $x_i$ indicates the frequency with which Player I chooses strategy i, the strategy in the ith row. Similarly, Player II's mixed strategy is $y = (y_1, y_2, \ldots, y_n)$, with $0 \le y_j \le 1$ for each $j = 1, \ldots, n$ and $\sum_{j=1}^n y_j = 1$. Let X denote the set of all mixed strategies for Player I and let Y denote the set of all mixed strategies for Player II.

A pure strategy is one that calls for the exclusive use of a single row or column, choosing a single course of action.
A mixed strategy is one that is not pure; the player uses some strategies more than others. Therefore, a pure strategy can be considered a special case within the set of mixed strategies.

Let us consider an example to help us understand mixed strategies. Suppose Player I has a mixed strategy of $(2/3, 1/3)$. Notice that these two numbers add to 1 and both lie between 0 and 1. This mixed strategy does not necessarily mean that Player I should use the first strategy twice and then the second strategy once; rather, at each play of the game, Player I should play strategy 1 with probability $2/3$ and strategy 2 with probability $1/3$. For example, Player I may roll a die at each play and choose strategy 1 if 1, 2, 3, or 4 comes up, and choose strategy 2 if 5 or 6 comes up.

Suppose the two players are playing the matrix game $A = (a_{ij})$, with $1 \le i \le m$ and $1 \le j \le n$. If Player I chooses mixed strategy $x = (x_1, x_2, \ldots, x_m)$ and Player II chooses $y = (y_1, y_2, \ldots, y_n)$, then the expected payoff is

    $A(x, y) = \sum_{i=1}^m \sum_{j=1}^n x_i a_{ij} y_j$    (1.4)

To explain: if Player I chooses mixed strategy x and Player II chooses mixed strategy y, the probability of Player I choosing the ith row and Player II choosing the jth column is $x_i y_j$. Since $a_{ij}$ gives the payoff of the ith row and jth column, the expected payoff for this specific outcome is $x_i y_j a_{ij}$, and the total expected payoff when Player I uses strategy x and Player II uses strategy y is the sum of all these contributions, giving us equation (1.4). It can easily be shown that the summation in matrix notation is

    $A(x, y) = x A y^T$.    (1.5)

This is shown below:

    $(x_1\ \cdots\ x_m) \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = (x_1\ \cdots\ x_m) \begin{pmatrix} a_{11} y_1 + \cdots + a_{1n} y_n \\ \vdots \\ a_{m1} y_1 + \cdots + a_{mn} y_n \end{pmatrix}$
    $= x_1 a_{11} y_1 + \cdots + x_1 a_{1n} y_n + \cdots + x_m a_{m1} y_1 + \cdots + x_m a_{mn} y_n = \sum_{i=1}^m \sum_{j=1}^n x_i a_{ij} y_j$

Note that since mixed strategies are expressed as row vectors, the transpose of the vector y gives us the appropriate column vector for this matrix multiplication.

Let us consider an example. Let A be a game with payoff matrix

    $\begin{pmatrix} 1 & 3 \\ 4 & 0 \end{pmatrix}$.

Suppose Player I is using mixed strategy $x = (1/2, 1/2)$. If Player II uses his first strategy, then the expected payoff is $1(1/2) + 4(1/2) = 5/2$, and if Player II uses his second strategy the expected payoff is $3(1/2) + 0(1/2) = 3/2$. Thus, Player I guarantees an expected payoff of at least $3/2$. This is not to say that Player I can never receive less than this on any given play, but Player I will win on average at least this payoff per game.

Let us consider this more generally. Consider a game with payoff matrix A. Player I will attempt to maximize his gain-floor. So for each mixed strategy x, Player I must determine the worst possible outcome of the game if he were to use this particular strategy, knowing that Player II is also using mixed strategies. This can be represented as $\min_{y \in Y} x A y^T$. This minimum is Player I's minimum gain using strategy x when Player II is using any of his strategies. Player I wants to gain as much as possible, so he should seek a strategy that attains the maximum of these minimums. This means Player I should seek a strategy $x_0 \in X$ such that

    $\min_{y \in Y} x_0 A y^T = \max_{x \in X} \min_{y \in Y} x A y^T$.

This is called Player I's maximin strategy, meaning it maximizes the minimum gain. Let

    $v_1 = \max_{x \in X} \min_{y \in Y} x A y^T$.

Player II will attempt to minimize his loss-ceiling. That is, for each mixed strategy y, Player II must determine the worst possible outcome of the game, knowing that Player I is using mixed strategies. This is represented as $\max_{x \in X} x A y^T$. This maximum represents the most Player II could lose using strategy y, given that Player I is using any of his mixed strategies.
Player II wants to find the strategy $y_0$ that minimizes his maximum loss, i.e.

    $\max_{x \in X} x A y_0^T = \min_{y \in Y} \max_{x \in X} x A y^T$.

This is Player II's minimax strategy, minimizing the maximum loss. Let

    $v_2 = \min_{y \in Y} \max_{x \in X} x A y^T$.

Now we have obtained the two numbers $v_1$ and $v_2$. These numbers are called the values of the game for Players I and II, respectively.

For simplicity's sake, let us also find a way to rewrite the product $x A y^T$. Let $A_i$ denote the ith row of A and let $A^j$ denote the jth column of A. For a game with payoff matrix A, if Player I uses his ith pure strategy and Player II uses strategy y, the expected payoff is $A_i y^T$. Similarly, if Player II uses his jth pure strategy and Player I uses a strategy x, the payoff is $x A^j$. Using this notation, we can decompose the product $x A y^T$ in the following ways:

    $x A y^T = (xA) y^T = \sum_{j=1}^n (x A^j) y_j = x (A y^T) = \sum_{i=1}^m x_i (A_i y^T)$

where $x = (x_1, x_2, \ldots, x_m)$ and $y = (y_1, y_2, \ldots, y_n)$. This leads us to the following theorem:

Theorem: For a fixed strategy x for Player I,

    $\min_{y \in Y} x A y^T = \min_{1 \le j \le n} x A^j$

and for a fixed strategy y for Player II,

    $\max_{x \in X} x A y^T = \max_{1 \le i \le m} A_i y^T$.

The proof of these statements is as follows. Consider a fixed strategy x for Player I. For each j, $x A^j$ is a real number. Let $w = \min_{1 \le j \le n} x A^j$. Then, for any $y \in Y$,

    $x A y^T = \sum_{j=1}^n (x A^j) y_j \ge \sum_{j=1}^n w y_j = w \sum_{j=1}^n y_j = w \cdot 1 = w$.

But the outcomes $x A^j$ are among the values of $x A y^T$ for $y \in Y$, as they correspond to the outcomes when Player II uses his pure strategies. Therefore,

    $\min_{y \in Y} x A y^T = \min_{1 \le j \le n} x A^j$

and similarly $\max_{x \in X} x A y^T = \max_{1 \le i \le m} A_i y^T$ for a fixed $y \in Y$. //

This theorem allows us to rewrite the definitions of $v_1$ and $v_2$ as

    $v_1 = \max_{x \in X} \min_{y \in Y} x A y^T = \max_{x \in X} \min_{1 \le j \le n} x A^j$
    $v_2 = \min_{y \in Y} \max_{x \in X} x A y^T = \min_{y \in Y} \max_{1 \le i \le m} A_i y^T$    (1.6)

These are the definitions we will use to prove the Minimax Theorem.

III. The Minimax Theorem

Von Neumann first proved the Minimax Theorem in 1928 and offered a different proof, written with Morgenstern, in 1944.
The Minimax Theorem asserts that there exist mixed strategies for the two players such that the average payoff is the same when these strategies are used; moreover, this payoff is the best each player can hope to achieve given that the opponent plays rationally. Stated formally, von Neumann proved that

    $v_1 = v_2$.    (1.7)

Through proving this theorem, von Neumann established the existence of a rational choice that either player can determine and announce before the start of the game without giving the opponent any advantage. Neither player can improve his expected payoff by departing from the optimal mixed strategy. Von Neumann reflects, "As far as I can see, there could be no theory of games…without that theorem…I thought there was nothing worth publishing until the 'Minimax Theorem' was proved" (Casti 19).

IV. Linear Programming

We will use linear programming to prove the Minimax Theorem. There are a few definitions and a theorem to state before we can begin the proof. A linear programming problem stated in the following form is said to be in primal form:

    Maximize $z = c_1 x_1 + \cdots + c_n x_n$
    Subject to
        $a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1$
        $a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \le b_2$
        $\vdots$
        $a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m$
        $x_1, x_2, \ldots, x_n \ge 0$

The dual of the primal problem is the following linear programming problem:

    Minimize $v = b_1 y_1 + \cdots + b_m y_m$
    Subject to
        $a_{11} y_1 + a_{21} y_2 + \cdots + a_{m1} y_m \ge c_1$
        $a_{12} y_1 + a_{22} y_2 + \cdots + a_{m2} y_m \ge c_2$
        $\vdots$
        $a_{1n} y_1 + a_{2n} y_2 + \cdots + a_{mn} y_m \ge c_n$
        $y_1, y_2, \ldots, y_m \ge 0$

Matrix notation can help us to express these definitions. Define the matrix A and column vectors b, c, and x as follows:

    $A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}, \quad c = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$

Therefore, the primal problem is equivalent to maximizing $z = c \cdot x$ subject to $Ax \le b$, $x \ge 0$. Now let y be the m-dimensional column vector $(y_1, y_2, \ldots, y_m)$.
Then the dual problem is to minimize $v = b \cdot y$ subject to $A^T y \ge c$, $y \ge 0$. In summary, we have

    Maximize $z = c \cdot x$ subject to $Ax \le b$, $x \ge 0$
    Minimize $v = b \cdot y$ subject to $A^T y \ge c$, $y \ge 0$.    (1.8)

The Duality Theorem will also be used in this proof. It states that if either the primal problem or the dual problem has a finite optimal solution, then so does the other; moreover, the optimal values of the objective functions are equal, that is, $\max z = \min v$. We are now able to prove the Minimax Theorem using linear programming techniques.

V. Minimax Theorem Proof

Theorem: $v_1 = v_2$.

Proof: Suppose the game has an $m \times n$ payoff matrix $A = (a_{ij})$, $1 \le i \le m$, $1 \le j \le n$. In order to use linear programming techniques in this proof, we need $v_1$ and $v_2$ to be positive. This will obviously be the case if all the entries $a_{ij}$ of the matrix A are positive. Thus, we will consider this proof in two cases, the first case being all $a_{ij} > 0$. This first case contains the heart of the proof; the second case is more general and uses the results of the first case.

Case 1. All $a_{ij} > 0$.

Let us consider the game from Player I's viewpoint. We know that $v_1 = \max_{x \in X} \min_{1 \le j \le n} x A^j$. In order to determine $v_1$, for any strategy $x \in X$ we must determine the minimum of the $x A^j$. That is, we must find the minimum of the n quantities

    $a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m$
    $a_{12} x_1 + a_{22} x_2 + \cdots + a_{m2} x_m$
    $\vdots$
    $a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m$

The minimum of these n quantities is less than or equal to each of them, and is actually equal to one of them (the smallest one). Let w be this minimum; that is, w is the largest real number satisfying the n inequalities

    $a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m \ge w$
    $a_{12} x_1 + a_{22} x_2 + \cdots + a_{m2} x_m \ge w$
    $\vdots$
    $a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m \ge w$    (1.9)

Thus, for each $x \in X$ the maximum w satisfying (1.9) must be determined. Now let us consider $v_1$, which is the maximum over all $x \in X$ of these w's.
Since

    $X = \{(x_1, x_2, \ldots, x_m) \mid x_1 + x_2 + \cdots + x_m = 1,\ x_i \ge 0,\ 1 \le i \le m\}$,

it follows that $v_1$ is the maximum w satisfying

    $a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m \ge w$
    $\vdots$
    $a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m \ge w$
    $x_1 + \cdots + x_m = 1$
    $x_i \ge 0,\ 1 \le i \le m$    (1.10)

Thus, the strategy $x = (x_1, x_2, \ldots, x_m)$ at which this maximum is attained is the best possible strategy for Player I. Since all the $a_{ij} > 0$, we know that $v_1 > 0$, so we can consider only the $w > 0$ that satisfy (1.10). Dividing the equation and inequalities in (1.10) by w gives the problem of

    Maximizing w
    Subject to
        $a_{11} \frac{x_1}{w} + \cdots + a_{m1} \frac{x_m}{w} \ge 1$
        $\vdots$
        $a_{1n} \frac{x_1}{w} + \cdots + a_{mn} \frac{x_m}{w} \ge 1$
        $\frac{x_1}{w} + \cdots + \frac{x_m}{w} = \frac{1}{w}$
        $\frac{x_i}{w} \ge 0,\ 1 \le i \le m$    (1.11)

Let $x_i' = x_i / w$, $1 \le i \le m$. Since the problem of maximizing w is equivalent to the problem of minimizing $1/w = x_1' + \cdots + x_m'$, the problem (1.11) can be rewritten as

    Minimizing $x_1' + x_2' + \cdots + x_m'$
    Subject to
        $a_{11} x_1' + \cdots + a_{m1} x_m' \ge 1$
        $\vdots$
        $a_{1n} x_1' + \cdots + a_{mn} x_m' \ge 1$
        $x_i' \ge 0,\ 1 \le i \le m$    (1.12)

We can express this more concisely in vector notation. Let $x' = (x_1', x_2', \ldots, x_m')^T$, $b = (1, 1, \ldots, 1)^T$, and $c = (1, 1, \ldots, 1)^T$, where b is an m-tuple and c is an n-tuple. We can then rewrite (1.12) as the problem of

    Minimizing $b \cdot x'$
    Subject to $A^T x' \ge c$, $x' \ge 0$

To summarize, the reciprocal of the minimum value of the function $b \cdot x'$ is equal to $v_1$. Also, since $w(x_1', \ldots, x_m') = (x_1, \ldots, x_m)$, multiplying the coordinates of an $x'$ at which this minimum is attained by $v_1$ gives us the maximin strategy for Player I.

In a similar manner, we will now show that the problem of determining $v_2$ and the minimax strategy for Player II leads to the dual problem of (1.12). Recall that $v_2 = \min_{y \in Y} \max_{1 \le i \le m} A_i y^T$. For a $y \in Y$, the maximum of the m quantities $A_i y^T$ is the smallest real number z such that

    $a_{11} y_1 + a_{12} y_2 + \cdots + a_{1n} y_n \le z$
    $a_{21} y_1 + a_{22} y_2 + \cdots + a_{2n} y_n \le z$
    $\vdots$
    $a_{m1} y_1 + a_{m2} y_2 + \cdots + a_{mn} y_n \le z$

Since $v_2$ is the minimum over all $y \in Y$ of these z's, it follows that $v_2$ is equal to the minimum z satisfying

    $a_{11} y_1 + a_{12} y_2 + \cdots + a_{1n} y_n \le z$
    $\vdots$
    $a_{m1} y_1 + a_{m2} y_2 + \cdots + a_{mn} y_n \le z$
    $y_1 + y_2 + \cdots + y_n = 1$
    $y_j \ge 0,\ 1 \le j \le n$    (1.13)

The point at which this minimal value is achieved is the minimax strategy for Player II. As before, we know that $v_2 > 0$, and therefore any z that satisfies (1.13) must also be positive. Dividing the equation and inequalities of (1.13) by z gives the problem of

    Minimizing z
    Subject to
        $a_{11} \frac{y_1}{z} + \cdots + a_{1n} \frac{y_n}{z} \le 1$
        $\vdots$
        $a_{m1} \frac{y_1}{z} + \cdots + a_{mn} \frac{y_n}{z} \le 1$
        $\frac{y_1}{z} + \cdots + \frac{y_n}{z} = \frac{1}{z}$
        $\frac{y_j}{z} \ge 0,\ 1 \le j \le n$    (1.14)

Let $y_j' = y_j / z$, $1 \le j \le n$. Since minimizing z is equivalent to maximizing $1/z = y_1' + \cdots + y_n'$, problem (1.14) is equivalent to the following problem:

    Maximizing $y_1' + y_2' + \cdots + y_n'$
    Subject to
        $a_{11} y_1' + \cdots + a_{1n} y_n' \le 1$
        $\vdots$
        $a_{m1} y_1' + \cdots + a_{mn} y_n' \le 1$
        $y_j' \ge 0,\ 1 \le j \le n$    (1.15)

As before, we put this in vector notation to obtain

    Maximize $c \cdot y'$
    Subject to $Ay' \le b$, $y' \ge 0$    (1.16)

The reciprocal of the maximum value of the function $c \cdot y'$ is equal to $v_2$. Also, since $z(y_1', \ldots, y_n') = (y_1, \ldots, y_n)$, multiplying the coordinates of a $y'$ at which this maximum is attained by $v_2$ gives us the minimax strategy for Player II.

Now we can see that (1.12) and (1.16) are dual linear programming problems: maximizing $c \cdot y'$ subject to $Ay' \le b$, $y' \ge 0$ is the primal problem, and minimizing $b \cdot x'$ subject to $A^T x' \ge c$, $x' \ge 0$ is its dual. Moreover, the dual problem must have a finite optimal solution, because the objective function $b \cdot x' = x_1' + \cdots + x_m'$ is bounded below by zero, and since all $a_{ij}$ are positive, there exist feasible solutions to the system of constraints $A^T x' \ge c$. Thus the primal problem also has a finite optimal solution. Therefore, by the Duality Theorem, both problems have finite optimal solutions attaining the same optimal value. Hence there exist optimal strategies, the maximin strategy and the minimax strategy, for Player I and Player II, respectively, and the corresponding values are equal, i.e. $v_1 = v_2$.

Case 2. General Case.
Suppose that some entries $a_{ij}$ are non-positive. Choose any constant r with the property that $a_{ij} + r > 0$ for all i, j. Let E be the $m \times n$ matrix with all entries equal to 1, and consider the game with payoff matrix $A + rE$. The expected payoff for any pair of strategies (x, y) is

    $x(A + rE)y^T = x A y^T + r\, x E y^T$

But $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_n)$ are strategies, so $\sum_{i=1}^m x_i = 1$ and $\sum_{j=1}^n y_j = 1$. Then xE equals the n-vector $(1, 1, \ldots, 1)$, and so $x E y^T = (1, 1, \ldots, 1) y^T = 1$. The expected payoff therefore becomes $x A y^T + r$, which is the expected payoff for the original game plus the constant r. Since the expected payoffs $x A y^T$ and $x(A + rE) y^T$ differ only by the constant r, games with payoff matrices A and $A + rE$ have the same optimal strategies (maximin and minimax strategies), with the game values differing by the constant r. But all the entries of the game matrix $A + rE$ are positive, and thus we can apply the results of Case 1 to this game. Therefore, for the game with payoff matrix A, the maximin and minimax strategies exist and $v_1 = v_2$. Q.E.D.

Since $v_1 = v_2$, we say they both equal v, the value of the game. The strategies x and y (the maximin and minimax strategies) are called the optimal strategies for Player I and Player II, respectively. A pair of optimal strategies, (x, y), is called the solution of the game.

VI. Illustration of the Minimax Theorem for a Two-Person, Zero-Sum Game

Suppose we are given the general 2x2 matrix game

    $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$

for some numbers a, b, c, and d. Let us also assume the game has no saddle point, so the solution of the game is not obvious. Let $x = (x_1, x_2)$ be a mixed strategy for Player I, where $0 \le x_1, x_2 \le 1$ and $x_1 + x_2 = 1$. Similarly, let $y = (y_1, y_2)$ be a mixed strategy for Player II, where $0 \le y_1, y_2 \le 1$ and $y_1 + y_2 = 1$. The expected payoff, given by (1.4), is then

    $A(x, y) = x A y^T = \sum_{i=1}^2 \sum_{j=1}^2 x_i a_{ij} y_j = x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2$.

By the definitions of $v_1$ and $v_2$, we know that

    $v_1 = \max_{x \in X} \min_{y \in Y} A(x, y) = \max_{x \in X} \min_{y \in Y} (x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2)$
    $v_2 = \min_{y \in Y} \max_{x \in X} A(x, y) = \min_{y \in Y} \max_{x \in X} (x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2)$.

Additionally, by the Minimax Theorem, we know that $v_1 = v_2$. We will now consider a specific 2x2 game matrix, calculate $v_1$ and $v_2$, and show that these values are indeed equal. Consider the matrix

    $A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}$.

Then

    $v_1 = \max_{x \in X} \min_{y \in Y} (4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2)$
    $v_2 = \min_{y \in Y} \max_{x \in X} (4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2)$.

Let $X = \{(x_1, x_2) \in \mathbb{R}^2 : 0 \le x_1, x_2 \le 1,\ x_1 + x_2 = 1\}$ and let $Y = \{(y_1, y_2) \in \mathbb{R}^2 : 0 \le y_1, y_2 \le 1,\ y_1 + y_2 = 1\}$. We will view these sets as line segments between the points (0,1) and (1,0) in the two-dimensional plane and then use contour plots to help us find the maximin ($v_1$) and minimax ($v_2$) values.

To calculate $v_1$, the maximin, we must first minimize the quantity $4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2$ when $x_1$ and $x_2$ are fixed values, both between 0 and 1 inclusive and summing to 1. Then we must maximize that minimum by letting $x_1$ and $x_2$ vary over X. Fixing $x = (x_1, x_2) \in X$, we have a linear function of $y_1$ and $y_2$ given by

    $A_x(y_1, y_2) = 4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2 = (4 x_1 + x_2) y_1 + (2 x_1 + 3 x_2) y_2$.

As we let $y = (y_1, y_2)$ vary over Y, the linearity of the function and the constraints defining Y imply that the minimum must occur at one of the endpoints of Y, either (0,1) or (1,0), or else at all points along Y. At the endpoint $(y_1, y_2) = (1, 0)$ the value of the function is $A_x(1, 0) = 4 x_1 + x_2$, and at the endpoint $(y_1, y_2) = (0, 1)$ the value is $A_x(0, 1) = 2 x_1 + 3 x_2$. Therefore, if the minimum occurs only at (1,0), we have $4 x_1 + x_2 < 2 x_1 + 3 x_2$ and the minimum value is $4 x_1 + x_2$.
If the minimum occurs only at (0,1), we have $2 x_1 + 3 x_2 < 4 x_1 + x_2$ and the minimum value is $2 x_1 + 3 x_2$. If the minimum occurs at all points along Y, the minimum value is $4 x_1 + x_2 = 2 x_1 + 3 x_2$. In every case, the minimum is $4 x_1 + x_2$ if $4 x_1 + x_2 \le 2 x_1 + 3 x_2$, and the minimum is $2 x_1 + 3 x_2$ if $2 x_1 + 3 x_2 \le 4 x_1 + x_2$; these statements remain consistent if $4 x_1 + x_2 = 2 x_1 + 3 x_2$.

We therefore have two cases to consider in finding the maximin value $v_1$. The first is when $4 x_1 + x_2 \le 2 x_1 + 3 x_2$, which is equivalent to $2 x_1 \le 2 x_2$, or $x_1 \le x_2$. The second is when $2 x_1 + 3 x_2 \le 4 x_1 + x_2$, or equivalently $2 x_2 \le 2 x_1$, i.e. $x_2 \le x_1$.

Let us first consider the case when $x_1 \le x_2$. Then we get

    $\min_{y \in Y} A_x(y_1, y_2) = \min_{y \in Y} (4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2) = 4 x_1 + x_2$.

To find the maximin, we now want to maximize this quantity by letting $x = (x_1, x_2)$ vary over X, while still maintaining that $x_1 \le x_2$. Geometrically, this means we must maximize the quantity $4 x_1 + x_2$ subject to the condition that $x = (x_1, x_2)$ lies on the line segment between (0,1) and (1/2, 1/2); these points are determined by the conditions $x_1 \le x_2$, $0 \le x_1, x_2 \le 1$, and $x_1 + x_2 = 1$. The contour plot below shows that the quantity $4 x_1 + x_2$ is maximized at the point (1/2, 1/2). In the plot, $x_1$ and $x_2$ vary (note the labels on the axes) and the quantity $4 x_1 + x_2$ is plotted from (0,1) to (1/2, 1/2). Since larger values are represented as lighter, we see the maximum of the quantity occurring at (1/2, 1/2). We then calculate the maximin as $4 \cdot \frac{1}{2} + \frac{1}{2} = 2 + \frac{1}{2} = \frac{5}{2} = 2.5$. Thus, when $x_1 \le x_2$, $v_1 = 2.5$.

Now let us consider the case when $x_1 \ge x_2$.
Here, we get

    $\min_{y \in Y} A_x(y_1, y_2) = \min_{y \in Y} (4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2) = 2 x_1 + 3 x_2$

and we now want to maximize the quantity $2 x_1 + 3 x_2$ as $x = (x_1, x_2)$ varies over X while maintaining that $x_1 \ge x_2$. Geometrically, this means that we must maximize the quantity $2 x_1 + 3 x_2$ subject to the condition that $x = (x_1, x_2)$ lies on the line segment between (1/2, 1/2) and (1,0); these points are determined by the conditions $x_1 \ge x_2$, $0 \le x_1, x_2 \le 1$, and $x_1 + x_2 = 1$. The contour plot below shows that the quantity $2 x_1 + 3 x_2$ is maximized at the point (1/2, 1/2). In the plot, $x_1$ and $x_2$ vary (note the labels on the axes) and the quantity $2 x_1 + 3 x_2$ is plotted from (1/2, 1/2) to (1,0). Since larger values are represented as lighter, we see the maximum of the quantity again occurring at (1/2, 1/2). We then calculate the maximin as $2 \cdot \frac{1}{2} + 3 \cdot \frac{1}{2} = 1 + \frac{3}{2} = \frac{5}{2} = 2.5$. Thus, when $x_1 \ge x_2$, $v_1 = 2.5$.

In both cases, we found that $v_1 = 2.5$. We have also found that the optimal strategy for Player I is $(x_1, x_2) = (\frac{1}{2}, \frac{1}{2})$.

To calculate $v_2$, the minimax, we follow the same process as for $v_1$. We must first maximize the quantity $4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2$ when $y_1$ and $y_2$ are fixed values, both between 0 and 1 inclusive and summing to 1. Then we must minimize that maximum by letting $y_1$ and $y_2$ vary over Y. Fixing $y = (y_1, y_2) \in Y$, we have a linear function of $x_1$ and $x_2$ given by

    $A_y(x_1, x_2) = 4 x_1 y_1 + 2 x_1 y_2 + x_2 y_1 + 3 x_2 y_2 = (4 y_1 + 2 y_2) x_1 + (y_1 + 3 y_2) x_2$.

As we let $x = (x_1, x_2)$ vary over X, the linearity of the function and the constraints defining X imply that the maximum must occur at one of the endpoints of X, either (0,1) or (1,0), or else at all points along X.
At the endpoint (x_1, x_2) = (1,0), the value of the function is Ay(1,0) = 4y_1 + 2y_2, and at the endpoint (x_1, x_2) = (0,1) the value of the function is Ay(0,1) = y_1 + 3y_2. Therefore, if the maximum occurs at (1,0), we have 4y_1 + 2y_2 > y_1 + 3y_2 and the maximum value is 4y_1 + 2y_2. If the maximum occurs at (0,1), we have y_1 + 3y_2 > 4y_1 + 2y_2 and the maximum value is y_1 + 3y_2. If the maximum occurs at all points along X, the maximum value is 4y_1 + 2y_2 = y_1 + 3y_2. We can even say that the maximum will be 4y_1 + 2y_2 if 4y_1 + 2y_2 ≥ y_1 + 3y_2 and the maximum will be y_1 + 3y_2 if y_1 + 3y_2 ≥ 4y_1 + 2y_2; these statements remain consistent if 4y_1 + 2y_2 = y_1 + 3y_2.

We have two cases to consider, then, to find the minimax value v_2. The first is when 4y_1 + 2y_2 ≥ y_1 + 3y_2, which is equivalent to 3y_1 ≥ y_2. The second is when 4y_1 + 2y_2 ≤ y_1 + 3y_2, or equivalently 3y_1 ≤ y_2.

Let us first consider the case when 3y_1 ≥ y_2. Then we will get

    max_{x∈X} Ay(x_1, x_2) = max_{x∈X} (4x_1y_1 + 2x_1y_2 + x_2y_1 + 3x_2y_2) = 4y_1 + 2y_2.

To find the minimax, we now want to minimize this quantity by letting y = (y_1, y_2) vary over Y, while still maintaining that 3y_1 ≥ y_2. Geometrically, this means we must minimize the quantity 4y_1 + 2y_2 subject to the condition that y = (y_1, y_2) lies on the line segment between (1/4, 3/4) and (1,0); these points are determined by the conditions 3y_1 ≥ y_2, 0 ≤ y_1, y_2 ≤ 1, and y_1 + y_2 = 1. A contour plot of 4y_1 + 2y_2 along this segment, with smaller values shown darker, shows that the quantity is minimized at the point (1/4, 3/4). We then calculate the minimax as 4·(1/4) + 2·(3/4) = 1 + 3/2 = 5/2 = 2.5. Thus, when 3y_1 ≥ y_2, v_2 = 2.5.

Now let us consider the case when 3y_1 ≤ y_2.
Here, we will get

    max_{x∈X} Ay(x_1, x_2) = max_{x∈X} (4x_1y_1 + 2x_1y_2 + x_2y_1 + 3x_2y_2) = y_1 + 3y_2,

and we now want to minimize the quantity y_1 + 3y_2 as y = (y_1, y_2) varies over Y while maintaining that 3y_1 ≤ y_2. Geometrically, this means that we must minimize the quantity y_1 + 3y_2 subject to the condition that y = (y_1, y_2) lies on the line segment between (0,1) and (1/4, 3/4); these points are determined by the conditions 3y_1 ≤ y_2, 0 ≤ y_1, y_2 ≤ 1, and y_1 + y_2 = 1. A contour plot of y_1 + 3y_2 along this segment, with smaller values shown darker, shows that the quantity is again minimized at the point (1/4, 3/4). We then calculate the minimax as 1/4 + 3·(3/4) = 1/4 + 9/4 = 10/4 = 2.5. Thus, when 3y_1 ≤ y_2, v_2 = 2.5.

In both cases, we found that v_2 = 2.5. We have also found that the optimal strategy for Player II is (y_1, y_2) = (1/4, 3/4). Therefore, we have verified the Minimax Theorem for this specific example by showing that v_1 = v_2 = 2.5. We have also discovered the optimal strategies for Players I and II, which are (x_1, x_2) = (1/2, 1/2) and (y_1, y_2) = (1/4, 3/4), respectively.

VII. Computation of Optimal Strategies

While the Minimax Theorem guarantees that in every two-person, zero-sum game there exist optimal strategies for Players I and II, it does not tell us how to compute these strategies. Of course, if a saddle point exists, i.e. if there exists an a_ij that is the maximum in its column and the minimum in its row, the pure strategies i and j, or equivalently the mixed strategies x and y with x_i = 1 and y_j = 1 corresponding to this entry, will be the optimal strategies for Players I and II, respectively.
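The minimax computation of section VI can likewise be confirmed by brute force over Y; a minimal sketch (again, the grid search is our device, not part of the paper's argument):

```python
# Brute-force check of the minimax value v2 for the game A = [[4, 2], [1, 3]].
# For fixed y, the inner maximum over X is attained at a pure row.

def minimax(A, steps=1000):
    best_y, best_val = None, float("inf")
    for k in range(steps + 1):
        y1 = k / steps
        y2 = 1 - y1                      # y ranges over the simplex Y
        # Expected payoff against each pure row of Player I
        row_payoffs = [A[i][0] * y1 + A[i][1] * y2 for i in range(2)]
        worst = max(row_payoffs)         # Player II's loss ceiling at y
        if worst < best_val:
            best_val, best_y = worst, (y1, y2)
    return best_y, best_val

y, v2 = minimax([[4, 2], [1, 3]])
print(y, v2)   # → (0.25, 0.75) 2.5
```

The ceiling bottoms out at y = (1/4, 3/4) with v_2 = 2.5, so the search reproduces v_1 = v_2 numerically.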
The other way to determine the optimal strategy is through a method called domination. Let us first define this term. In a matrix A, we say the ith row dominates the kth row if

    a_ij ≥ a_kj for all j, and
    a_ij > a_kj for at least one j.

Similarly, we say the jth column dominates the lth column if

    a_ij ≤ a_il for all i, and
    a_ij < a_il for at least one i.

To interpret what this means, a pure strategy (represented by its row or column) is said to dominate a second pure strategy if the choice of the first (dominating) strategy is at least as good as the second (dominated) strategy, and in some cases better. Thus, a player can always eliminate any dominated strategies and use only the dominating strategies. Once the dominated rows and columns are eliminated, we can work with a smaller matrix. Consider the game with matrix

    [ 2  0  1  4 ]
    [ 1  2  5  3 ]
    [ 4  1  3  2 ].

We can see that the second column dominates the fourth column; each entry in the second column is less than or equal to the corresponding entry in the fourth column (keep in mind, Player II is trying to minimize the maximum payoff, so smaller numbers are better). Therefore, we can eliminate the fourth column, leaving the matrix

    [ 2  0  1 ]
    [ 1  2  5 ]
    [ 4  1  3 ].

We also see that the third row dominates the first. Player I is trying to maximize the minimum payoff, and the entries of the third row are at least as big as those of the first. So we eliminate the first row, leaving

    [ 1  2  5 ]
    [ 4  1  3 ].

In this matrix, the third column is dominated by the second, and now we have

    [ 1  2 ]
    [ 4  1 ].

Now we only need to look for optimal strategies in this 2x2 matrix game. We will derive a formula for computing the optimal strategies and the value for such a game with no saddle points. The solution and optimal strategies of a 2x2 game with no saddle points can be expressed in terms of the adjoint matrix. Suppose we are given the 2x2 matrix game

    A = [ a  b ]
        [ c  d ],

and let A* denote the adjoint of A.
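Stepping back for a moment, the domination reductions just carried out can be sketched in code (the helper names are ours):

```python
# Iteratively delete dominated rows and columns, reproducing the
# reduction of the 3x4 example above down to a 2x2 game.

def dominates_row(A, i, k):
    """Row i dominates row k: a_ij >= a_kj for all j, with > somewhere."""
    n = len(A[0])
    return all(A[i][j] >= A[k][j] for j in range(n)) and \
           any(A[i][j] > A[k][j] for j in range(n))

def dominates_col(A, j, l):
    """Column j dominates column l: a_ij <= a_il for all i, with < somewhere."""
    m = len(A)
    return all(A[i][j] <= A[i][l] for i in range(m)) and \
           any(A[i][j] < A[i][l] for i in range(m))

def reduce_by_domination(A):
    changed = True
    while changed:
        changed = False
        m, n = len(A), len(A[0])
        # Drop the first dominated row found, then restart the scan.
        for i in range(m):
            for k in range(m):
                if i != k and dominates_row(A, i, k):
                    A = [A[r] for r in range(m) if r != k]
                    changed = True
                    break
            if changed:
                break
        if changed:
            continue
        # Otherwise drop the first dominated column found.
        for j in range(n):
            for l in range(n):
                if j != l and dominates_col(A, j, l):
                    A = [[A[r][c] for c in range(n) if c != l] for r in range(m)]
                    changed = True
                    break
            if changed:
                break
    return A

A = [[2, 0, 1, 4],
     [1, 2, 5, 3],
     [4, 1, 3, 2]]
print(reduce_by_domination(A))   # → [[1, 2], [4, 1]]
```

The eliminations occur in the same order as in the text: column 4, then row 1, then column 3.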
For a 2x2 matrix, A* is defined as

    A* = [  d  −b ]
         [ −c   a ].

If the matrix game A has a saddle point, then we have no problem; we already know the optimal strategies and the value of the game. Suppose, then, that there is no saddle point. Then we have optimal strategies x = (x_1, x_2) with x_i > 0 and y = (y_1, y_2) with y_j > 0. Now, if the value of the game is v, we have

    ax_1y_1 + bx_1y_2 + cx_2y_1 + dx_2y_2 = v,

or

    x_1(ay_1 + by_2) + x_2(cy_1 + dy_2) = v.   (1.16)

The two terms in parentheses on the left side of (1.16) are both less than or equal to v, since y is an optimal strategy by hypothesis and Player II will never lose more than v. Suppose that one of them were strictly less than v, i.e.

    ay_1 + by_2 < v,
    cy_1 + dy_2 ≤ v.

Then, since x_i > 0 and x_1 + x_2 = 1, the left side of (1.16) would be less than v, which is a contradiction. Thus, both of the terms in parentheses must equal v; hence

    ay_1 + by_2 = v,
    cy_1 + dy_2 = v.

Similarly, it is shown that

    ax_1 + cx_2 = v,
    bx_1 + dx_2 = v.

In matrix form, we have the equations

    Ayᵀ = (v, v)ᵀ,   xA = (v, v).   (1.17)

Recall also that

    x_1 + x_2 = 1,   y_1 + y_2 = 1.   (1.18)

Let us now solve for x, y, and v. Starting with xA = (v, v) from (1.17), we rewrite (v, v) as vJ, where J is the vector (1, 1), to obtain xA = vJ. Multiply both sides by A* to get

    xAA* = vJA*.   (1.19)

It can be shown that AA* = |A| I, where I is the identity matrix. Thus, we rewrite (1.19) as x|A|I = vJA*, and since |A| is a scalar, we can write |A| xI = vJA*. Since xI = x, we have

    |A| x = vJA*.   (1.20)

Then multiply both sides by Jᵀ to get |A| xJᵀ = vJA*Jᵀ. We can now find v by noting that the sum of the components of x equals 1, i.e. xJᵀ = 1. Thus, |A| = vJA*Jᵀ and

    v = |A| / (JA*Jᵀ).   (1.21)

By substituting v into (1.20), we find that

    x = JA* / (JA*Jᵀ).   (1.22)

Similarly, we can find that

    yᵀ = A*Jᵀ / (JA*Jᵀ).   (1.23)

Thus, we can collect these results into the following theorem:

Theorem: Let A be a 2x2 matrix game.
Then if A does not have a saddle point, its unique optimal strategies and value are given by

    x = JA* / (JA*Jᵀ),   yᵀ = A*Jᵀ / (JA*Jᵀ),   v = |A| / (JA*Jᵀ),   (1.24)

where A* is the adjoint of A, |A| is the determinant of A, and J is the vector (1, 1).

One of the key points of this theorem is that the matrix A does not have a saddle point. First of all, if A has a saddle point, there is no need for calculation; one only needs to find the saddle point and use the pure strategy that corresponds to that entry in the game matrix. If there is no saddle point, then the optimal strategy must be calculated. Issues with the above theorem arise when the formulas are used on matrices that do have saddle points. Consider the matrix

    [ 6  4 ]
    [ 4  2 ].

This matrix has a saddle point at a_12. Suppose we were to use the formulas to calculate the optimal strategies and the value of the game. The expression JA*Jᵀ is equal to the expression a + d − b − c; here, a + d − b − c = 6 + 2 − 4 − 4 = 0. Since JA*Jᵀ = 0 in this case, and JA*Jᵀ is the denominator in the formulas, we cannot follow through with our calculations. We can prove, however, that if a matrix has no saddle point, then the formulas will always work. Let us prove that if a matrix A has no saddle points, then det A ≠ 0 and a + d − b − c ≠ 0, or equivalently a + d ≠ b + c.

Theorem: Let

    A = [ a  b ]
        [ c  d ]

be a matrix with no saddle points. Then |A| ≠ 0 and a + d ≠ b + c.

Proof: Since a cannot be a saddle point, either a < c or a > b, by definition of a saddle point. Let us consider each case separately. If a < c, then c > d, since c also cannot be a saddle point. Now since d is not a saddle point, d < b. Thus, since a < c and d < b, we have ad ≠ bc (i.e. |A| ≠ 0) and also a + d ≠ b + c (i.e. JA*Jᵀ ≠ 0). If a > b, then b < d, since b also cannot be a saddle point. Now since d is not a saddle point, d > c. Thus, since a > b and d > c, we have ad ≠ bc (i.e. |A| ≠ 0) and also a + d ≠ b + c (i.e. JA*Jᵀ ≠ 0).
This completes the proof. Therefore, we find that when the matrix A has no saddle points, det A ≠ 0 and a + d ≠ b + c, and we may use the derived formulas to calculate x, y, and v.

We have two methods, then, to find the optimal strategies and the value of a 2x2 matrix game, depending on whether or not the matrix has a saddle point. If there is a saddle point at a_ij, the game is strictly determined. The saddle point gives the value of the game, and the optimal strategies are the pure strategies in which Player I always chooses strategy i and Player II always chooses strategy j. If there is no saddle point, we must calculate the optimal strategies for each player and the value of the game using the formulas given in (1.24).

Let us consider an example to demonstrate how the formulas for the optimal strategies and the value are used. We will find the optimal strategies for Player I and Player II, and also the value, of the game

    [  1  0 ]
    [ −1  2 ].

This game has no saddle point because there is no entry that is both the minimum of its row and the maximum of its column. We compute

    A* = [ 2  0 ]
         [ 1  1 ],

|A| = 2, JA* = (3, 1), A*Jᵀ = (2, 2)ᵀ, and JA*Jᵀ = 4, so

    x = (3/4, 1/4),   y = (2/4, 2/4) = (1/2, 1/2),   v = 2/4 = 1/2.

It can be checked that the strategies x and y do guarantee a payoff of 1/2. Thus, we have successfully found a way to compute the optimal strategies and value of a 2x2 matrix game.

VIII. Classic Example of a Two-Person, Zero-Sum Game

Let us discuss a "real-life" example of how the Minimax Theorem may be applied. This example is called Fighters and Bombers. During World War II, fighter pilots typically attacked bombers by coming down at them from the direction of the sun. This tactic is known as "Hun-in-the-Sun." But when every plane started using this strategy, the bombers put on their sunglasses and stared into the direction of the sun to look for the fighter pilots.
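Pausing the story for a moment: the formulas (1.24) used in the worked example of section VII translate directly into code; a minimal sketch (the function name is ours):

```python
# Optimal strategies and value of a 2x2 game with no saddle point,
# transcribing (1.24): x = JA*/(JA*J^T), y^T = A*J^T/(JA*J^T), v = |A|/(JA*J^T),
# where A* = [[d, -b], [-c, a]] is the adjoint and J = (1, 1).

def solve_2x2(a, b, c, d):
    det = a * d - b * c                     # |A|
    denom = a + d - b - c                   # JA*J^T; nonzero when no saddle point
    x = ((d - c) / denom, (a - b) / denom)  # JA* / (JA*J^T)
    y = ((d - b) / denom, (a - c) / denom)  # A*J^T / (JA*J^T)
    v = det / denom
    return x, y, v

x, y, v = solve_2x2(1, 0, -1, 2)            # the example game above
print(x, y, v)   # → (0.75, 0.25) (0.5, 0.5) 0.5
```

The same function reproduces the Fighters and Bombers solution computed below: solve_2x2(0.95, 1, 1, 0) returns x = y = (20/21, 1/21) and v = 20/21 ≈ 0.9524.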
So the fighter pilots came up with a new tactic: an attack from straight below. If the fighter was not spotted, this strategy was very effective, but if he was spotted, it was invariably fatal, since planes climb more slowly than they dive. This strategy is called "Ezak-Imak," kamikaze spelled backward. Thus, we have a two-person, zero-sum game between fighter pilots and bombers. The fighters can use the Hun-in-the-Sun or the Ezak-Imak strategy, and the bombers can either look up or look down. We can agree to measure the payoff as the chance of survival in a single mission. Then the game matrix is

                                    Bomber Crew
                                 Look Up   Look Down
    Fighter   Hun-in-the-Sun      0.95         1
    Pilot     Ezak-Imak           1            0

This game has no saddle point, so there are no pure strategies for the fighters or the bombers that cannot be exploited by the opponent if the opponent knows which strategy is going to be used. Therefore, both the fighters and the bombers must mix their actions, i.e. use mixed strategies, randomly selecting an action from among the possibilities. This does not mean that the strategies should be chosen with equal likelihood; clearly, the Hun-in-the-Sun is almost always successful for the fighter pilots. Intuitively, this strategy should be used significantly more than the Ezak-Imak tactic, which involves certain death if the bomber happens to look down. Therefore, the mixed strategy will undoubtedly favor the Hun-in-the-Sun strategy, while only employing the Ezak-Imak strategy occasionally to keep the bombers guessing. Since this is a two-person, zero-sum game, the Minimax Theorem tells us that there are optimal strategies for each player such that the values of the game for each are equal. Let us use our theorem to compute the optimal mixed strategies and value for this game. It is clear that there is no saddle point in this game matrix. So we have

    A* = [  0     −1   ]
         [ −1    0.95  ],

|A| = (0.95)(0) − (1)(1) = −1, and J = (1, 1), and then we can compute the following:

    JA* = (−1, −0.05),   A*Jᵀ = (−1, −0.05)ᵀ,   JA*Jᵀ = −1.05.

Thus, we have strategies and value

    x = JA*/(JA*Jᵀ) = (−1/−1.05, −0.05/−1.05) = (20/21, 1/21) ≈ (0.9524, 0.0476),
    y = A*Jᵀ/(JA*Jᵀ) = (−1/−1.05, −0.05/−1.05) = (20/21, 1/21) ≈ (0.9524, 0.0476),
    v = |A|/(JA*Jᵀ) = −1/−1.05 = 20/21 ≈ 0.9524.

So we conclude that when using these mixed strategies, the fighter pilots will survive 95.24% of the time; this is the best survival rate possible. When the fighter pilots use the Hun-in-the-Sun strategy 20 out of 21 times, their survival rate is 0.9524, which is slightly higher than 0.95, the survival rate if they use the Hun-in-the-Sun strategy exclusively. The bombers, too, will fare better to look up 20 out of 21 times. We must note, however, that these strategies should be employed randomly. The fighter pilot, for example, should not use the Hun-in-the-Sun strategy 20 times in a row and then use the Ezak-Imak strategy; this would defeat the purpose of coming up with a mixed strategy! These ratios are to be employed on the average.

IX. Extensions of the Minimax Theorem

The Minimax Theorem has been improved and extended several times by Von Neumann. He was able to draw conclusions about games with imperfect information and about games with more than two players, which opens up the possibility of cooperation among some of the players in the form of alliances. Von Neumann's work is crucial to the arena of game theory; it truly provides the foundation upon which much of the theory rests. Perhaps the most astounding achievement, however, was made by John Nash. Nash proved a theorem that generalizes the Minimax Theorem to the case of nonzero-sum games involving two or more players when these players are in competition, i.e. non-cooperative games.
Stated informally, Nash's Theorem says that any n-person, non-cooperative game, whether zero-sum or nonzero-sum, for which each player has a finite number of pure strategies, has at least one equilibrium set of strategies. This differs from Von Neumann's Minimax Theorem because 1) it guarantees the existence of an equilibrium for both zero-sum and nonzero-sum games, 2) it shows that there may be more than one solution, and 3) it extends to the case of any finite number of players. John Nash, along with John Harsanyi and Reinhard Selten, won the Nobel Prize in Economic Sciences in 1994 for his work in game theory. This was the first time in the 93-year history of the Nobel prizes that an award had been won for work done purely in mathematics.

Trying to find satisfactory solutions to n-person cooperative games has proved to be elusive. Many solution concepts have been presented, some offering unique solutions and others not. None of the solutions offered has achieved a consensus in the game theory community as a way of determining rational action among the players when some degree of cooperation is allowed. There are many difficulties in trying to make these predictions, as it is no longer even possible to agree on what behaving rationally means in these n-person cooperative games.

X. Conclusion

It is natural to wonder how the theory of games is actually useful; is it just an intellectual exercise for mathematicians, or is there something valuable in it for the rest of humankind? Truthfully, it is difficult to find a direct connection between the theory of games and an actual real-world problem. Game theory is, as yet, too underdeveloped, and rests on too many idealistic assumptions, to entirely bridge the gap between the mathematics and reality. The essential assumption of game theory is that the players are acting rationally, making decisions that are completely self-serving.
However, we know that humans do not always behave rationally, in both the traditional and the game-theoretic sense of the word. The abstract players of game theory do not have much in common with players in the real world. Even if game theory is not directly applicable to reality, it does offer many important insights into competition and cooperation in real-world situations. Economics, voting theory, and auctions are three examples of areas where game theory can be applied. Business is likely the prime example of successfully applying game theory. A business can be thought of as a giant nonzero-sum game. Businesses interact with other businesses and people, and a business's success depends on decisions regarding those with whom it interacts. In turn, these agents are making their own decisions. It is easy to see how business can be thought of as one very big, complicated game. So although game theory can be very mathematical and technical, it also has important implications for reality.

Von Neumann's Minimax Theorem was the catalyst for much deeper study in the arena of game theory, and although much has been learned, there is still much more that we do not know. As we further our study of game theory, we will hopefully gain even more insight into interactive and strategic decision-making. After all, in the words of American poet Edwin Arlington Robinson, "Life is the game that must be played."

XI. Resources

Casti, John L. Five Golden Rules: Great Theories of 20th-Century Mathematics and Why They Matter. New York: Wiley & Sons, 1996. Print.

Odifreddi, Piergiorgio. The Mathematical Century: The 30 Greatest Problems of the Last 100 Years. Princeton, NJ: Princeton UP, 2004. Print.

Owen, Guillermo. Game Theory. Philadelphia: W. B. Saunders, 1968. Print.

Stahl, Saul. A Gentle Introduction to Game Theory. Providence, RI: American Mathematical Society, 1999. Print.

Stevens, Scott P.
"Game People Play: Game Theory in Life, Business, and Beyond." The Teaching Company Great Courses, 2008. DVD lecture series.

Thie, Paul R. An Introduction to Linear Programming and Game Theory. New York: John Wiley & Sons, 1979. Print.
Title  Game Theory and Von Neumann's Minimax Theorem 
Creator  Hall, Megan 
Date Accepted/Awarded  2011 Spring 
Department/Program  Department of Mathematics and Computer Science 
Document Type  Honors Paper 
Keywords  Game theory; Minimax Theorem; Von Neumann, John, 1903-1957 
Digital Collection  Bethel Undergraduate Research and Scholarship; Bethel Honors Projects 
Collection ID  bdlurs0001; bdlhon0001 
Rights Information  Copyright 2011 Megan Hall 
Type  text 
Format  application/pdf 
Identifier  MHall_2011_MAT.pdf 
Transcript 
It requires seeking strategies to gain an advantage over the other player. It is in these situations, where the outcome of the game depends on the strategies used by the players, that we may begin a conversation about game theory. Life requires us to make choices in many domains, whether social, personal, economic, or otherwise. The purpose of game theory is to develop a mathematical model for these decision processes, or situations of conflict.

The first mathematical work in game theory appeared in 1912 as an article by Ernst Zermelo, in which he proved that the game of chess is determined: either there is a strategy for White that always wins, or a strategy for Black that always wins, or a strategy for each player that always results in at least a tie. However, his result did not state which of these three actually occurs, and therefore has no practical application. Émile Borel laid the foundations of game theory in 1921. He posed the question of determining which situations have an optimal strategy and how to find it. John Von Neumann proved the first theorem of game theory in 1928. This theorem established that in two-person zero-sum games (i.e. games in which one player's gain equals the other player's loss) with perfect information (each player knows the possible strategies, and resulting consequences, of the other player), there exists a minimax point. At this minimax point, both players minimize their maximum loss, and the corresponding strategies are optimal for both players. The Minimax Theorem, proved by Von Neumann, is the pivotal result in the mathematical theory of games, for it forms the foundation of a long string of further results. The theorem was extended and improved by Von Neumann, all building up to the publication of Theory of Games and Economic Behavior, co-written with the economist Oskar Morgenstern in 1944.

II.
Definitions and Examples

Before a proof of the Minimax Theorem is developed, it is necessary to cover the fundamentals of game theory and introduce important definitions. A game will be technically defined as a situation of conflict. In a game, the participants are called players. For our purposes, we will focus on games with only two players; call them I and II. Conflict exists because each player wants to improve his circumstances by gaining as much as possible, to the loss of the other player. Each player has strategies to use, one at a time, in an attempt to achieve the desired outcome. Let us consider a simple game called Penny-Matching to illustrate some of these terms.

Suppose Players I and II each hold a penny and they display them simultaneously. If the pennies match (either both show heads or both show tails), then Player I keeps both pennies, therefore gaining one penny. Otherwise, the pennies do not match, and Player II keeps the pennies. In this game, the players each have two strategies, "heads" or "tails." The game can be summarized in the following matrix:

             H          T
    H    (1, −1)   (−1, 1)
    T    (−1, 1)   (1, −1)

This matrix is said to be in normal form. Each row represents a strategy of Player I and each column represents a strategy of Player II. Each entry in the matrix is called a payoff vector, with the first component representing the payoff to Player I and the second component representing the payoff to Player II. For example, the entry (1, −1) implies a gain of 1 for Player I and a gain of −1 (or a loss of 1) for Player II. The matrix can be represented more simply as

          H    T
    H     1   −1
    T    −1    1

An entry of 1 is a gain and −1 is a loss for Player I; since Player I gains a penny as Player II loses a penny, Player II's payoff is the opposite of Player I's. Therefore, an entry of 1 implies that Player I gained 1 and Player II lost 1. This is why we only include the first component of the payoff vector; we will call this simply the payoff.
It is understood that the second player gives this amount to the first player. This game is said to be zero-sum, since everything won by a player must be lost by someone else. Let us assume a general knowledge of matrix algebra. We can now make a few general statements. A two-person zero-sum game can be reduced to a matrix, A, with as many rows as Player I has strategies and as many columns as Player II has strategies. Thus, the payoff when Player I chooses his ith strategy and Player II chooses his jth strategy is the element a_ij in the ith row and jth column of the matrix. The players are interested in deciding which strategy is best in terms of optimizing their payoff. An element a_ij will be an equilibrium point if and only if it is the largest in its column and the smallest in its row. An element of this kind is called a saddle point. For example, the game matrix

    [ 0  −1 ]
    [ 2   3 ]

has a saddle point in the second row and the first column. Naturally, Player I would choose his second strategy in an attempt to maximize his payoff, and Player II, choosing between paying two units or three, would choose his first strategy, to minimize how much is lost. Notice that two is the smallest in its row and the largest in its column. The game matrix

    [  5  1  3 ]
    [  3  2  4 ]   (1.1)
    [ −3  0  1 ]

has a saddle point in the second row and the second column. However, the game matrix of the Penny-Matching game,

    [  1  −1 ]
    [ −1   1 ],

has no such point. In the case where all the entries of a matrix are equal, each entry is considered a saddle point. If there is a saddle point in a game, the game is said to be strictly determined, i.e. there exists an entry at which a column maximum equals a row minimum, and both players would do best to use the corresponding strategies. Let us suppose that Players I and II are actually engaged in a game. In choosing strategies, Player I chooses, in effect, a row, i, while Player II chooses a column, j. The payoff is given by entry a_ij of the game matrix, A.
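The saddle-point test used in the examples above is easy to mechanize; a minimal sketch (the function name is ours):

```python
# Find a saddle point: an entry that is the minimum of its row
# and the maximum of its column.  Returns (row, column, value) or None.

def find_saddle(A):
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            col = [A[k][j] for k in range(len(A))]
            if entry == min(row) and entry == max(col):
                return i, j, entry
    return None

print(find_saddle([[0, -1], [2, 3]]))                     # → (1, 0, 2)
print(find_saddle([[5, 1, 3], [3, 2, 4], [-3, 0, 1]]))    # → (1, 1, 2)
print(find_saddle([[1, -1], [-1, 1]]))                    # → None  (Penny-Matching)
```

The three calls reproduce the three examples: the 2x2 game with saddle 2, the game (1.1) with saddle 2, and Penny-Matching with no saddle point.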
Since this is the amount that Player I will receive from Player II, and assuming the players are playing rationally, Player I will want to maximize a_ij, while Player II wants to minimize it. However, neither player knows exactly which strategy the other will choose. Now suppose we are considering the game (1.1). In this game, Player I would probably decide to play his first or second strategy, since these have higher payoffs than the third strategy. Player I may reason in this way: "I should use either my first or second strategy, but if Player II thinks I will do this, he will choose his second strategy, so I had better choose my second strategy." Even if Player II correctly guesses Player I's strategy, Player I is better off sticking with his chosen strategy. Thus, the fact that this game matrix has a saddle point plays a significant role in how the players choose their strategies. The majority of games, however, do not contain saddle points. Consider the following game matrix:

    [ 4  2 ]
    [ 1  3 ]   (1.2)

There is no element a_ij that is the greatest in its column and the smallest in its row. We cannot tell what will happen in this game. But let us assume that Player II will correctly guess whatever Player I decides. (It is called perfect information when the players are informed of each other's strategies.) In this case, Player I will choose his first strategy, so that he cannot win less than two units, whereas with the second strategy he could win as little as one unit. We call this win of at least two units the "gain-floor," denoted by v_1':

    v_1' = max_i { min_j a_ij }.

Player II should choose, under the same conditions, the second strategy, giving a "loss-ceiling" of three units, meaning that no matter which strategy Player I chooses, Player II can choose his strategy so that his maximum loss will be three units. The loss-ceiling will be denoted v_2':

    v_2' = min_j { max_i a_ij }.

Now we have the idea of a gain-floor and a loss-ceiling.
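Both quantities can be read off the matrix directly; a minimal sketch for game (1.2):

```python
# Gain-floor v1' = max over rows of the row minimum;
# loss-ceiling v2' = min over columns of the column maximum.

def gain_floor(A):
    return max(min(row) for row in A)

def loss_ceiling(A):
    return min(max(A[i][j] for i in range(len(A))) for j in range(len(A[0])))

A = [[4, 2], [1, 3]]                     # game (1.2)
print(gain_floor(A), loss_ceiling(A))    # → 2 3
```

Since 2 < 3, the inequality (1.3) below holds strictly for this game, consistent with its having no saddle point.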
It would be absurd for Player I's gain-floor to be greater than Player II's loss-ceiling; therefore we say

    v_1' ≤ v_2'.   (1.3)

If there is equality in (1.3), then there is a saddle point; if not, then it is a game without saddle points. If this is the case, we cannot be certain what will happen, but we can say that Player I should not win less than v_1' and Player II should not lose more than v_2'. If the players are allowed to know each other's strategies, then the best Player I and Player II can do is the gain-floor and loss-ceiling, respectively. If the players want to do better than this, the only option is to refuse to let the opponent know the strategy chosen. However, this is difficult if the players are playing rationally, since it is easy for the opponent to reconstruct for himself the reasoning behind the other player's choices. This fact seems to suggest that the players should choose their strategies irrationally; but then there would be no point to all this analysis. The answer is that the strategies should be chosen at random (the irrational piece), but the randomization should be chosen rationally. This is what lies behind the idea of a mixed strategy. The players "randomly" select their strategies so as to keep the opponent guessing and not give him an opportunity to predict the next move. A mixed strategy is formally defined as a list of numbers x = (x_1, x_2, ..., x_m) such that 0 ≤ x_i ≤ 1 for each i = 1, 2, ..., m and

    Σ_{i=1}^{m} x_i = 1,

where x_i indicates the frequency with which Player I chooses strategy i, the strategy in the ith row. Similarly for Player II: Player II's mixed strategy is y = (y_1, y_2, ..., y_n), with 0 ≤ y_j ≤ 1 for each j = 1, ..., n and Σ_{j=1}^{n} y_j = 1. Let X denote the set of all mixed strategies for Player I and let Y denote the set of all mixed strategies for Player II. A pure strategy is one that calls for the exclusive use of a row/column, choosing a single course of action.
A mixed strategy is one that is not pure; the player uses some strategies more often than others. A pure strategy can therefore be considered a special case within the set of mixed strategies.

Let us consider an example to help us understand mixed strategies. Suppose Player I has the mixed strategy $(2/3, 1/3)$. Notice that these two numbers are both between 0 and 1 and add to 1. This mixed strategy does not mean that Player I should use the first strategy twice and then the second strategy once; rather, at each play of the game Player I should play strategy 1 with probability $2/3$ and strategy 2 with probability $1/3$. For example, Player I may roll a die at each play and choose strategy 1 if 1, 2, 3, or 4 comes up, and choose strategy 2 if 5 or 6 comes up.

Suppose the two players are playing the matrix game $A = (a_{ij})$, with $1 \le i \le m$ and $1 \le j \le n$. If Player I chooses mixed strategy $x = (x_1, x_2, \ldots, x_m)$ and Player II chooses $y = (y_1, y_2, \ldots, y_n)$, then the expected payoff is

$$A(x, y) = \sum_{i=1}^{m} \sum_{j=1}^{n} x_i a_{ij} y_j \qquad (1.4)$$

To explain: if Player I chooses mixed strategy x and Player II chooses mixed strategy y, the probability of Player I choosing the $i$th row and Player II choosing the $j$th column is $x_i y_j$. Since $a_{ij}$ gives the payoff for the $i$th row and $j$th column, the expected payoff for this specific outcome is $x_i y_j a_{ij}$, and the total expected payoff when Player I uses strategy x and Player II uses strategy y is the sum of all these contributions, giving equation (1.4). In matrix notation, the summation is

$$A(x, y) = xAy^T. \qquad (1.5)$$

This is shown below:

$$(x_1 \; \cdots \; x_m) \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = (x_1 \; \cdots \; x_m) \begin{pmatrix} a_{11}y_1 + \cdots + a_{1n}y_n \\ \vdots \\ a_{m1}y_1 + \cdots + a_{mn}y_n \end{pmatrix}$$
$$= x_1 a_{11} y_1 + \cdots + x_1 a_{1n} y_n + \cdots + x_m a_{m1} y_1 + \cdots + x_m a_{mn} y_n = \sum_{i=1}^{m} \sum_{j=1}^{n} x_i a_{ij} y_j$$

Note that since mixed strategies are expressed as row vectors, the transpose of the vector y gives us the appropriate column vector for this matrix multiplication.

Let us consider an example. Let A be the game with payoff matrix

$$\begin{pmatrix} 1 & 3 \\ 4 & 0 \end{pmatrix}.$$

Suppose Player I uses the mixed strategy $x = (1/2, 1/2)$. If Player II uses his first strategy, the expected payoff is $1(1/2) + 4(1/2) = 5/2$, and if Player II uses his second strategy the expected payoff is $3(1/2) + 0(1/2) = 3/2$. Thus Player I guarantees an expected payoff of $3/2$. This is not to say that Player I can never receive less than this on a given play; rather, Player I will win on average at least this payoff per game.

Let us consider this more generally. Consider a game with payoff matrix A. Player I will attempt to maximize his gain-floor. So for each mixed strategy x, Player I must determine the worst possible outcome of the game if he were to use this particular strategy, knowing that Player II is also using mixed strategies. This worst case is $\min_{y \in Y} xAy^T$, Player I's minimum gain using strategy x when Player II may use any of his strategies. Player I wants to gain as much as possible, so he should seek a strategy attaining the maximum of these minimums, that is, a strategy $x_0 \in X$ such that

$$\min_{y \in Y} x_0 A y^T = \max_{x \in X} \min_{y \in Y} xAy^T.$$

This is called Player I's maximin strategy, meaning it maximizes the minimum gain. Let

$$v_1 = \max_{x \in X} \min_{y \in Y} xAy^T.$$

Player II will attempt to minimize his loss-ceiling. That is, for each mixed strategy y, Player II must determine the worst possible outcome of the game, knowing that Player I is using mixed strategies. This is $\max_{x \in X} xAy^T$, the most Player II could lose using strategy y against any mixed strategy of Player I.
Player II wants to find the strategy $y_0$ that minimizes his maximum loss, i.e.

$$\max_{x \in X} x A y_0^T = \min_{y \in Y} \max_{x \in X} xAy^T.$$

This is Player II's minimax strategy, minimizing the maximum loss. Let

$$v_2 = \min_{y \in Y} \max_{x \in X} xAy^T.$$

We have now obtained the two numbers $v_1$ and $v_2$, called the values of the game for Players I and II, respectively.

For simplicity's sake, let us also find a way to rewrite the product $xAy^T$. Let $A_i$ denote the $i$th row of A and let $A^j$ denote the $j$th column of A. For a game with payoff matrix A, if Player I uses the pure strategy i and Player II uses strategy y, the expected payoff is $A_i y^T$. Similarly, if Player II uses the pure strategy j and Player I uses strategy x, the payoff is $xA^j$. Using this notation, we can decompose the product $xAy^T$ in the following ways:

$$xAy^T = (xA)y^T = \sum_{j=1}^{n} (xA^j)\, y_j = x(Ay^T) = \sum_{i=1}^{m} x_i\, (A_i y^T)$$

where $x = (x_1, x_2, \ldots, x_m)$ and $y = (y_1, y_2, \ldots, y_n)$. This leads us to the following theorem:

Theorem: For a fixed strategy x for Player I,

$$\min_{y \in Y} xAy^T = \min_{1 \le j \le n} xA^j,$$

and for a fixed strategy y for Player II,

$$\max_{x \in X} xAy^T = \max_{1 \le i \le m} A_i y^T.$$

Proof: Consider a fixed strategy x for Player I. For each j, $xA^j$ is a real number. Let $w = \min_{1 \le j \le n} xA^j$. Then, for any $y \in Y$,

$$xAy^T = \sum_{j=1}^{n} (xA^j)\, y_j \ge \sum_{j=1}^{n} w\, y_j = w \sum_{j=1}^{n} y_j = w \cdot 1 = w.$$

But the values $xA^j$ are among the values $xAy^T$, $y \in Y$, since they correspond to the outcomes when Player II uses his pure strategies. Therefore

$$\min_{y \in Y} xAy^T = \min_{1 \le j \le n} xA^j,$$

and similarly $\max_{x \in X} xAy^T = \max_{1 \le i \le m} A_i y^T$ for a fixed $y \in Y$. //

This theorem allows us to rewrite the definitions of $v_1$ and $v_2$ as

$$v_1 = \max_{x \in X} \min_{y \in Y} xAy^T = \max_{x \in X} \min_{1 \le j \le n} xA^j$$
$$v_2 = \min_{y \in Y} \max_{x \in X} xAy^T = \min_{y \in Y} \max_{1 \le i \le m} A_i y^T \qquad (1.6)$$

These are the definitions we will use to prove the Minimax Theorem.

III. The Minimax Theorem

Von Neumann first proved the Minimax Theorem in 1928 and offered a different proof, written with Morgenstern, in 1944.
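Before stating the theorem formally, the definitions in (1.6) can be probed numerically. The sketch below (plain Python; the grid resolution is an arbitrary choice) sweeps a grid of mixed strategies for each player in the 2x2 game of matrix (1.2), using the fact just proved that the inner optimization may be taken over the opponent's pure strategies. The two values it finds coincide, which is exactly what the theorem will assert:

```python
def v1_approx(A, steps=1000):
    # v1 = max over mixed x = (t, 1-t) of the minimum over Player II's pure columns
    best = float("-inf")
    for k in range(steps + 1):
        t = k / steps
        worst = min(t * A[0][j] + (1 - t) * A[1][j] for j in range(len(A[0])))
        best = max(best, worst)
    return best

def v2_approx(A, steps=1000):
    # v2 = min over mixed y = (t, 1-t) of the maximum over Player I's pure rows
    best = float("inf")
    for k in range(steps + 1):
        t = k / steps
        worst = max(A[i][0] * t + A[i][1] * (1 - t) for i in range(len(A)))
        best = min(best, worst)
    return best

A = [[4, 2], [1, 3]]                 # the game matrix (1.2)
print(v1_approx(A), v2_approx(A))    # both approximately 2.5
```

This is only a brute-force search over a finite grid, not a proof; but the agreement of the two numbers is suggestive of the general result.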
The Minimax Theorem asserts that there exists a mixed strategy for each player such that the average payoff to each player is the same when these strategies are used, and that this payoff is the best each player can hope to achieve given that the opponent plays rationally. Stated formally, Von Neumann proved that

$$v_1 = v_2. \qquad (1.7)$$

By proving this theorem, Von Neumann established the existence of a rational choice that either player can determine and announce before the start of the game without giving the opponent any advantage: neither player can improve his expected payoff by departing from his optimal mixed strategy. Von Neumann reflected, "As far as I can see, there could be no theory of games…without that theorem…I thought there was nothing worth publishing until the 'Minimax Theorem' was proved" (Casti 19).

IV. Linear Programming

We will use linear programming to prove the Minimax Theorem. There are a few definitions and a theorem to state before we can begin the proof. A linear programming problem stated in the following form is said to be in primal form:

Maximize $z = c_1 x_1 + \cdots + c_n x_n$
Subject to
$$a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1$$
$$a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \le b_2$$
$$\vdots$$
$$a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m$$
$$x_1, x_2, \ldots, x_n \ge 0$$

The dual of the primal problem is the following linear programming problem:

Minimize $v = b_1 y_1 + \cdots + b_m y_m$
Subject to
$$a_{11} y_1 + a_{21} y_2 + \cdots + a_{m1} y_m \ge c_1$$
$$a_{12} y_1 + a_{22} y_2 + \cdots + a_{m2} y_m \ge c_2$$
$$\vdots$$
$$a_{1n} y_1 + a_{2n} y_2 + \cdots + a_{mn} y_m \ge c_n$$
$$y_1, y_2, \ldots, y_m \ge 0$$

Matrix notation helps us express these definitions. Define the matrix A and the column vectors b, c, and x as follows:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}, \quad c = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

The primal problem is therefore equivalent to maximizing $z = c \cdot x$ subject to $Ax \le b$, $x \ge 0$. Now let y be the m-dimensional column vector $(y_1, y_2, \ldots, y_m)$.
Then the dual problem is to minimize $v = b \cdot y$ subject to $A^T y \ge c$, $y \ge 0$. In summary, we have

Maximize $z = c \cdot x$ subject to $Ax \le b$, $x \ge 0$
Minimize $v = b \cdot y$ subject to $A^T y \ge c$, $y \ge 0$ (1.8)

The Duality Theorem will also be used in this proof. It states that if either the primal problem or the dual problem has a finite optimal solution, then so does the other; moreover, the optimal values of the objective functions are equal, that is, $\max z = \min v$.

We are now able to prove the Minimax Theorem using linear programming techniques.

V. Minimax Theorem Proof

Theorem: $v_1 = v_2$.

Proof: Suppose the game has an $m \times n$ payoff matrix $A = (a_{ij})$, $1 \le i \le m$, $1 \le j \le n$. In order to use linear programming techniques in this proof, we need to assume that $v_1, v_2$ are positive. This is certainly the case if all the entries $a_{ij}$ of the matrix A are positive. Thus we consider the proof in two cases, the first being that all $a_{ij} > 0$. This first case contains the heart of the proof; the second case is general and uses the results of the first.

Case 1. All $a_{ij} > 0$.

Let us consider the game from Player I's viewpoint. We know that $v_1 = \max_{x \in X} \min_{1 \le j \le n} xA^j$. In order to determine $v_1$, for any strategy $x \in X$ we must determine the minimum of the $xA^j$; that is, we must find the minimum of the n quantities

$$a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m$$
$$a_{12} x_1 + a_{22} x_2 + \cdots + a_{m2} x_m$$
$$\vdots$$
$$a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m$$

The minimum of these n quantities is less than or equal to each of them, and is actually equal to one of them (the smallest). Let w be this minimum; that is, w is the largest real number satisfying the n inequalities

$$a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m \ge w$$
$$a_{12} x_1 + a_{22} x_2 + \cdots + a_{m2} x_m \ge w$$
$$\vdots$$
$$a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m \ge w \qquad (1.9)$$

Thus, for each $x \in X$ the maximum w satisfying (1.9) must be determined. Now let us consider $v_1$, which is the maximum of these w's over all $x \in X$.
Since

$$X = \{(x_1, x_2, \ldots, x_m) \mid x_1 + x_2 + \cdots + x_m = 1,\; x_i \ge 0,\; 1 \le i \le m\},$$

it follows that $v_1$ is the maximum w satisfying

$$a_{11} x_1 + a_{21} x_2 + \cdots + a_{m1} x_m \ge w$$
$$\vdots$$
$$a_{1n} x_1 + a_{2n} x_2 + \cdots + a_{mn} x_m \ge w$$
$$x_1 + \cdots + x_m = 1$$
$$x_i \ge 0,\; 1 \le i \le m \qquad (1.10)$$

The strategy $x = (x_1, x_2, \ldots, x_m)$ at which this maximum is attained is the best possible strategy for Player I. Since all $a_{ij} > 0$, we know that $v_1 > 0$, so we may restrict attention to the $w > 0$ satisfying (1.10). Dividing the equation and inequalities in (1.10) by w, we obtain the problem of

Maximizing w
Subject to
$$a_{11} \frac{x_1}{w} + \cdots + a_{m1} \frac{x_m}{w} \ge 1$$
$$\vdots$$
$$a_{1n} \frac{x_1}{w} + \cdots + a_{mn} \frac{x_m}{w} \ge 1$$
$$\frac{x_1}{w} + \cdots + \frac{x_m}{w} = \frac{1}{w}$$
$$\frac{x_i}{w} \ge 0,\; 1 \le i \le m \qquad (1.11)$$

Let $x_i' = x_i / w$, $1 \le i \le m$. Since maximizing w is equivalent to minimizing $1/w = x_1' + \cdots + x_m'$, problem (1.11) can be rewritten as

Minimizing $x_1' + x_2' + \cdots + x_m'$
Subject to
$$a_{11} x_1' + \cdots + a_{m1} x_m' \ge 1$$
$$\vdots$$
$$a_{1n} x_1' + \cdots + a_{mn} x_m' \ge 1$$
$$x_i' \ge 0,\; 1 \le i \le m \qquad (1.12)$$

We can express this more concisely in vector notation. Let $x' = (x_1', x_2', \ldots, x_m')^T$, $b = (1, 1, \ldots, 1)^T$, and $c = (1, 1, \ldots, 1)^T$, where b is an m-tuple and c is an n-tuple. Then (1.12) becomes the problem of

Minimizing $b \cdot x'$ subject to $A^T x' \ge c$, $x' \ge 0$.

To summarize, the reciprocal of the minimum value of the function $b \cdot x'$ is equal to $v_1$. Also, since $w(x_1', \ldots, x_m') = (x_1, \ldots, x_m)$, multiplying the coordinates of an $x'$ at which this minimum is attained by $v_1$ gives us the maximin strategy for Player I.

In a similar manner, we now show that the problem of determining $v_2$ and the minimax strategy for Player II leads to the dual problem of (1.12). Recall that $v_2 = \min_{y \in Y} \max_{1 \le i \le m} A_i y^T$. For a $y \in Y$, the maximum of the m quantities $A_i y^T$ is the smallest real number z such that

$$a_{11} y_1 + a_{12} y_2 + \cdots + a_{1n} y_n \le z$$
$$a_{21} y_1 + a_{22} y_2 + \cdots + a_{2n} y_n \le z$$
$$\vdots$$
$$a_{m1} y_1 + a_{m2} y_2 + \cdots + a_{mn} y_n \le z$$

Since $v_2$ is the minimum of these z's over all $y \in Y$, it follows that $v_2$ is the minimum z satisfying

$$a_{11} y_1 + a_{12} y_2 + \cdots + a_{1n} y_n \le z$$
$$\vdots$$
$$a_{m1} y_1 + a_{m2} y_2 + \cdots + a_{mn} y_n \le z$$
$$y_1 + y_2 + \cdots + y_n = 1$$
$$y_j \ge 0,\; 1 \le j \le n \qquad (1.13)$$

The point at which this minimal value is achieved is the minimax strategy for Player II. As before, we know that $v_2 > 0$, and therefore any z satisfying (1.13) must also be positive. Dividing the equation and inequalities of (1.13) by z gives the problem of

Minimizing z
Subject to
$$a_{11} \frac{y_1}{z} + \cdots + a_{1n} \frac{y_n}{z} \le 1$$
$$\vdots$$
$$a_{m1} \frac{y_1}{z} + \cdots + a_{mn} \frac{y_n}{z} \le 1$$
$$\frac{y_1}{z} + \cdots + \frac{y_n}{z} = \frac{1}{z}$$
$$\frac{y_j}{z} \ge 0,\; 1 \le j \le n \qquad (1.14)$$

Let $y_j' = y_j / z$, $1 \le j \le n$. Since minimizing z is equivalent to maximizing $1/z = y_1' + \cdots + y_n'$, (1.14) is equivalent to the problem of

Maximizing $y_1' + y_2' + \cdots + y_n'$
Subject to
$$a_{11} y_1' + \cdots + a_{1n} y_n' \le 1$$
$$\vdots$$
$$a_{m1} y_1' + \cdots + a_{mn} y_n' \le 1$$
$$y_j' \ge 0,\; 1 \le j \le n \qquad (1.15)$$

As before, we put this in vector notation:

Maximizing $c \cdot y'$ subject to $Ay' \le b$, $y' \ge 0$.

The reciprocal of the maximum value of the function $c \cdot y'$ is equal to $v_2$. Also, since $z(y_1', \ldots, y_n') = (y_1, \ldots, y_n)$, multiplying the coordinates of a $y'$ at which this maximum is attained by $v_2$ gives us the minimax strategy for Player II.

Now we can see that (1.12) and (1.15) form a dual pair of linear programming problems: maximizing $c \cdot y'$ subject to $Ay' \le b$, $y' \ge 0$ is the primal problem, and minimizing $b \cdot x'$ subject to $A^T x' \ge c$, $x' \ge 0$ is its dual. Moreover, the dual problem must have a finite optimal solution, because the objective function $b \cdot x' = x_1' + \cdots + x_m'$ is bounded below by zero, and since all $a_{ij}$ are positive, there exist feasible solutions to the system of constraints $A^T x' \ge c$. Thus the primal problem also has a finite optimal solution. Therefore, by the Duality Theorem, both problems have finite optimal solutions attaining the same optimal value. Hence there exist optimal strategies, the maximin strategy for Player I and the minimax strategy for Player II, and the corresponding values are equal, i.e. $v_1 = v_2$.

Case 2. General Case.
Suppose that some entries $a_{ij}$ are non-positive. Choose any constant r with the property that $a_{ij} + r > 0$ for all i, j, and let E be the $m \times n$ matrix with all entries equal to 1. Consider the game with payoff matrix $A + rE$. The expected payoff for any pair of strategies (x, y) is

$$x(A + rE)y^T = xAy^T + r\,xEy^T.$$

But $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_n)$ are strategies, so $\sum_{i=1}^{m} x_i = 1$ and $\sum_{j=1}^{n} y_j = 1$. Then xE equals the n-vector $(1, 1, \ldots, 1)$, and so $xEy^T = (1, 1, \ldots, 1)y^T = 1$. The expected payoff therefore becomes $xAy^T + r$, which is the expected payoff for the original game matrix plus the constant r. Since the expected payoffs $xAy^T$ and $x(A + rE)y^T$ differ only by the constant r, the games with payoff matrices A and $A + rE$ have the same optimal (maximin and minimax) strategies, with the game values differing by the constant r. But all the entries of the game matrix $A + rE$ are positive, so we can apply the results of Case 1 to this game. Therefore, for the game with payoff matrix A, the maximin and minimax strategies exist and $v_1 = v_2$. Q.E.D.

Since $v_1 = v_2$, we say they both equal v, the value of the game. The strategies x and y (the maximin and minimax strategies) are called the optimal strategies for Players I and II, respectively. A pair of optimal strategies (x, y) is called the solution of the game.

VI. Illustration of the Minimax Theorem for a Two-Person, Zero-Sum Game

Suppose we are given the general 2x2 matrix game

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

for some numbers a, b, c, and d. Let us also assume the game has no saddle point, so the solution of the game is not obvious. Let $x = (x_1, x_2)$ be a mixed strategy for Player I, where $0 \le x_1, x_2 \le 1$ and $x_1 + x_2 = 1$. Similarly, let $y = (y_1, y_2)$ be a mixed strategy for Player II, where $0 \le y_1, y_2 \le 1$ and $y_1 + y_2 = 1$. The expected payoff, given by (1.4), is then

$$A(x, y) = xAy^T = \sum_{i=1}^{2} \sum_{j=1}^{2} x_i a_{ij} y_j = x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2.$$

By the definitions of $v_1$ and $v_2$, we know that

$$v_1 = \max_{x \in X} \min_{y \in Y} xAy^T = \max_{x \in X} \min_{y \in Y} (x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2)$$
$$v_2 = \min_{y \in Y} \max_{x \in X} xAy^T = \min_{y \in Y} \max_{x \in X} (x_1 a y_1 + x_1 b y_2 + x_2 c y_1 + x_2 d y_2).$$

Additionally, by the Minimax Theorem, we know that $v_1 = v_2$. We will now consider a specific 2x2 game matrix, calculate $v_1$ and $v_2$, and show that these values are indeed equal. Consider the matrix

$$A = \begin{pmatrix} 4 & 2 \\ 1 & 3 \end{pmatrix}.$$

Then

$$v_1 = \max_{x \in X} \min_{y \in Y} (4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2)$$
$$v_2 = \min_{y \in Y} \max_{x \in X} (4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2).$$

Let $X = \{(x_1, x_2) \in \mathbb{R}^2 : 0 \le x_1, x_2 \le 1,\; x_1 + x_2 = 1\}$ and let $Y = \{(y_1, y_2) \in \mathbb{R}^2 : 0 \le y_1, y_2 \le 1,\; y_1 + y_2 = 1\}$. We view each of these sets as the line segment between the points (0,1) and (1,0) in the two-dimensional plane, and we will use contour plots to help us find the maximin ($v_1$) and minimax ($v_2$) values.

To calculate $v_1$, the maximin, we must first minimize the quantity $4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2$ for fixed values $x_1$ and $x_2$ (both between 0 and 1 inclusive and summing to 1), and then maximize that minimum by letting $x_1$ and $x_2$ vary over X. Fixing $x = (x_1, x_2) \in X$, we have a linear function of $y_1$ and $y_2$ given by

$$A_x(y_1, y_2) = 4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2 = (4x_1 + x_2)y_1 + (2x_1 + 3x_2)y_2.$$

As we let $y = (y_1, y_2)$ vary over Y, the linearity of this function and the constraints defining Y imply that the minimum must occur at one of the endpoints of Y, either (0,1) or (1,0), or else at every point of Y. At the endpoint $(y_1, y_2) = (1, 0)$ the value of the function is $A_x(1, 0) = 4x_1 + x_2$, and at the endpoint $(y_1, y_2) = (0, 1)$ the value is $A_x(0, 1) = 2x_1 + 3x_2$. Therefore, if the minimum occurs at (1,0), we have $4x_1 + x_2 < 2x_1 + 3x_2$ and the minimum value is $4x_1 + x_2$.
If the minimum occurs at (0,1), we have $2x_1 + 3x_2 < 4x_1 + x_2$ and the minimum value is $2x_1 + 3x_2$. If the minimum occurs at all points of Y, the minimum value is $4x_1 + x_2 = 2x_1 + 3x_2$. In all cases, the minimum is $4x_1 + x_2$ if $4x_1 + x_2 \le 2x_1 + 3x_2$ and is $2x_1 + 3x_2$ if $2x_1 + 3x_2 \le 4x_1 + x_2$; these statements remain consistent when $4x_1 + x_2 = 2x_1 + 3x_2$.

We therefore have two cases to consider in finding the maximin value $v_1$. The first is when $4x_1 + x_2 \le 2x_1 + 3x_2$, which is equivalent to $2x_1 \le 2x_2$, or $x_1 \le x_2$. The second is when $2x_1 + 3x_2 \le 4x_1 + x_2$, or equivalently $x_1 \ge x_2$.

Let us first consider the case $x_1 \le x_2$. Then we get

$$\min_{y \in Y} A_x(y_1, y_2) = \min_{y \in Y} (4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2) = 4x_1 + x_2.$$

To find the maximin, we now maximize this quantity by letting $x = (x_1, x_2)$ vary over X while maintaining $x_1 \le x_2$. Geometrically, this means we must maximize the quantity $4x_1 + x_2$ subject to the condition that $(x_1, x_2)$ lies on the line segment between (0,1) and (1/2, 1/2); these points are determined by the conditions $x_1 \le x_2$, $0 \le x_1, x_2 \le 1$, and $x_1 + x_2 = 1$. The contour plot below shows that $4x_1 + x_2$ is maximized at the point (1/2, 1/2): the axes show $x_1$ and $x_2$ varying, the quantity $4x_1 + x_2$ is plotted along the segment from (0,1) to (1/2, 1/2), and since larger values are shown lighter, the maximum occurs at (1/2, 1/2). We then calculate the maximin as

$$4 \cdot \tfrac{1}{2} + \tfrac{1}{2} = \tfrac{5}{2} = 2.5.$$

Thus, when $x_1 \le x_2$, $v_1 = 2.5$.

Now let us consider the case $x_1 \ge x_2$.
Here we get

$$\min_{y \in Y} A_x(y_1, y_2) = 4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2 = 2x_1 + 3x_2,$$

and we now maximize the quantity $2x_1 + 3x_2$ as $x = (x_1, x_2)$ varies over X while maintaining $x_1 \ge x_2$. Geometrically, this means we must maximize $2x_1 + 3x_2$ subject to the condition that $(x_1, x_2)$ lies on the line segment between (1/2, 1/2) and (1,0); these points are determined by the conditions $x_1 \ge x_2$, $0 \le x_1, x_2 \le 1$, and $x_1 + x_2 = 1$. The contour plot below shows that $2x_1 + 3x_2$ is maximized at the point (1/2, 1/2): the axes show $x_1$ and $x_2$ varying, the quantity $2x_1 + 3x_2$ is plotted along the segment from (1/2, 1/2) to (1,0), and since larger values are shown lighter, the maximum again occurs at (1/2, 1/2). We then calculate the maximin as

$$2 \cdot \tfrac{1}{2} + 3 \cdot \tfrac{1}{2} = 1 + \tfrac{3}{2} = \tfrac{5}{2} = 2.5.$$

Thus, when $x_1 \ge x_2$, $v_1 = 2.5$. In both cases we found $v_1 = 2.5$, and we have also found that the optimal strategy for Player I is $(x_1, x_2) = (1/2, 1/2)$.

To calculate $v_2$, the minimax, we follow the same process used for $v_1$. We must first maximize the quantity $4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2$ for fixed values $y_1$ and $y_2$ (both between 0 and 1 inclusive and summing to 1), and then minimize that maximum by letting $y_1$ and $y_2$ vary over Y. Fixing $y = (y_1, y_2) \in Y$, we have a linear function of $x_1$ and $x_2$ given by

$$A_y(x_1, x_2) = 4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2 = (4y_1 + 2y_2)x_1 + (y_1 + 3y_2)x_2.$$

As we let $x = (x_1, x_2)$ vary over X, the linearity of this function and the constraints defining X imply that the maximum must occur at one of the endpoints of X, either (0,1) or (1,0), or else at every point of X.
At the endpoint $(x_1, x_2) = (1, 0)$ the value of the function is $A_y(1, 0) = 4y_1 + 2y_2$, and at the endpoint $(x_1, x_2) = (0, 1)$ the value is $A_y(0, 1) = y_1 + 3y_2$. Therefore, if the maximum occurs at (1,0), we have $4y_1 + 2y_2 > y_1 + 3y_2$ and the maximum value is $4y_1 + 2y_2$. If the maximum occurs at (0,1), we have $y_1 + 3y_2 > 4y_1 + 2y_2$ and the maximum value is $y_1 + 3y_2$. If the maximum occurs at all points of X, the maximum value is $4y_1 + 2y_2 = y_1 + 3y_2$. In all cases, the maximum is $4y_1 + 2y_2$ if $4y_1 + 2y_2 \ge y_1 + 3y_2$ and is $y_1 + 3y_2$ if $y_1 + 3y_2 \ge 4y_1 + 2y_2$; these statements remain consistent when $4y_1 + 2y_2 = y_1 + 3y_2$.

We therefore have two cases to consider in finding the minimax value $v_2$. The first is when $4y_1 + 2y_2 \ge y_1 + 3y_2$, which is equivalent to $3y_1 \ge y_2$. The second is when $4y_1 + 2y_2 \le y_1 + 3y_2$, or equivalently $3y_1 \le y_2$.

Let us first consider the case $3y_1 \ge y_2$. Then we get

$$\max_{x \in X} A_y(x_1, x_2) = \max_{x \in X} (4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2) = 4y_1 + 2y_2.$$

To find the minimax, we now minimize this quantity by letting $y = (y_1, y_2)$ vary over Y while maintaining $3y_1 \ge y_2$. Geometrically, this means we must minimize $4y_1 + 2y_2$ subject to the condition that $(y_1, y_2)$ lies on the line segment between (1/4, 3/4) and (1,0); these points are determined by the conditions $3y_1 \ge y_2$, $0 \le y_1, y_2 \le 1$, and $y_1 + y_2 = 1$. The contour plot below shows that $4y_1 + 2y_2$ is minimized at the point (1/4, 3/4): the axes show $y_1$ and $y_2$ varying, the quantity $4y_1 + 2y_2$ is plotted along the segment from (1/4, 3/4) to (1,0), and since smaller values are shown darker, the minimum occurs at (1/4, 3/4). We then calculate the minimax as

$$4 \cdot \tfrac{1}{4} + 2 \cdot \tfrac{3}{4} = 1 + \tfrac{3}{2} = \tfrac{5}{2} = 2.5.$$

Thus, when $3y_1 \ge y_2$, $v_2 = 2.5$.

Now let us consider the case $3y_1 \le y_2$.
Here we get

$$\max_{x \in X} A_y(x_1, x_2) = \max_{x \in X} (4x_1 y_1 + 2x_1 y_2 + x_2 y_1 + 3x_2 y_2) = y_1 + 3y_2,$$

and we now minimize the quantity $y_1 + 3y_2$ as $y = (y_1, y_2)$ varies over Y while maintaining $3y_1 \le y_2$. Geometrically, this means we must minimize $y_1 + 3y_2$ subject to the condition that $(y_1, y_2)$ lies on the line segment between (0,1) and (1/4, 3/4); these points are determined by the conditions $3y_1 \le y_2$, $0 \le y_1, y_2 \le 1$, and $y_1 + y_2 = 1$. The contour plot below shows that $y_1 + 3y_2$ is minimized at the point (1/4, 3/4): the axes show $y_1$ and $y_2$ varying, the quantity $y_1 + 3y_2$ is plotted along the segment from (0,1) to (1/4, 3/4), and since smaller values are shown darker, the minimum again occurs at (1/4, 3/4). We then calculate the minimax as

$$\tfrac{1}{4} + 3 \cdot \tfrac{3}{4} = \tfrac{1}{4} + \tfrac{9}{4} = \tfrac{10}{4} = 2.5.$$

Thus, when $3y_1 \le y_2$, $v_2 = 2.5$. In both cases we found $v_2 = 2.5$, and we have also found that the optimal strategy for Player II is $(y_1, y_2) = (1/4, 3/4)$.

Therefore, we have verified the Minimax Theorem for this specific example by showing that $v_1 = v_2 = 2.5$. We have also discovered the optimal strategies for Players I and II, which are $(x_1, x_2) = (1/2, 1/2)$ and $(y_1, y_2) = (1/4, 3/4)$, respectively.

VII. Computation of Optimal Strategies

While the Minimax Theorem guarantees that every two-person, zero-sum game has optimal strategies for Players I and II, it does not tell us how to compute these strategies. Of course, if a saddle point exists, i.e. if there is an entry $a_{ij}$ that is the maximum in its column and the minimum in its row, then the pure strategies i and j, or equivalently the mixed strategies x and y with $x_i = 1$ and $y_j = 1$ corresponding to this entry, will be the optimal strategies for Players I and II, respectively.
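A saddle point, when one exists, can be found by a direct search over the matrix entries. The sketch below is a minimal Python check of the definition just given, an entry that is the minimum of its row and the maximum of its column; the function name is chosen here for illustration:

```python
def saddle_point(A):
    # Return (i, j) such that a_ij is the minimum of row i and the
    # maximum of column j, or None if the game has no saddle point.
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            if entry == min(row) and entry == max(r[j] for r in A):
                return (i, j)
    return None

print(saddle_point([[4, 2], [1, 3]]))   # None: the matrix of (1.2) has no saddle point
print(saddle_point([[6, 4], [4, 2]]))   # (0, 1): the entry 4 in row 1, column 2
```

When a saddle point (i, j) exists, the corresponding pure strategies are optimal and the entry itself is the value of the game, so no further computation is needed.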
Another way to simplify the search for optimal strategies is a method called domination. Let us first define this term. In a matrix A, we say the $i$th row dominates the $k$th row if

$$a_{ij} \ge a_{kj} \text{ for all } j, \quad \text{and} \quad a_{ij} > a_{kj} \text{ for at least one } j.$$

Similarly, we say the $j$th column dominates the $l$th column if

$$a_{ij} \le a_{il} \text{ for all } i, \quad \text{and} \quad a_{ij} < a_{il} \text{ for at least one } i.$$

To interpret this: a pure strategy (represented by its row or column) dominates a second pure strategy if choosing the first (dominating) strategy is always at least as good as choosing the second (dominated) strategy, and in some cases better. Thus a player can always eliminate dominated strategies and use only dominating ones. Once the dominated rows and columns are eliminated, we can work with a smaller matrix. Consider the game with matrix

$$\begin{pmatrix} 2 & 0 & 1 & 4 \\ 1 & 2 & 5 & 3 \\ 4 & 1 & 3 & 2 \end{pmatrix}.$$

We can see that the second column dominates the fourth column: each entry in the second column is less than or equal to the corresponding entry in the fourth column (keep in mind that Player II is trying to minimize the maximum payoff, so smaller numbers are better for him). Therefore we can eliminate the fourth column, leaving

$$\begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & 5 \\ 4 & 1 & 3 \end{pmatrix}.$$

We also see that the third row dominates the first: Player I is trying to maximize the minimum payoff, and the entries of the third row are at least as large as those of the first. So we eliminate the first row, leaving

$$\begin{pmatrix} 1 & 2 & 5 \\ 4 & 1 & 3 \end{pmatrix}.$$

In this matrix the third column is dominated by the second, and we are left with

$$\begin{pmatrix} 1 & 2 \\ 4 & 1 \end{pmatrix}.$$

Now we only need to find optimal strategies for a 2x2 matrix game. We will derive a formula for computing the optimal strategies and the value of such a game when it has no saddle point. The solution and optimal strategies of a 2x2 game with no saddle point can be expressed in terms of the adjoint matrix. Suppose we are given the 2x2 matrix game

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

and let $A^*$ denote the adjoint of A.
For a 2x2 matrix, $A^*$ is defined as

$$A^* = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

If the matrix game A has a saddle point, then we have no problem; we already know the optimal strategies and the value of the game. Suppose instead that there is no saddle point. Then the optimal strategies $x = (x_1, x_2)$ and $y = (y_1, y_2)$ are strictly mixed, with $x_i > 0$ and $y_j > 0$. Now, if the value of the game is v, we have

$$ax_1 y_1 + bx_1 y_2 + cx_2 y_1 + dx_2 y_2 = v,$$

or

$$x_1(ay_1 + by_2) + x_2(cy_1 + dy_2) = v. \qquad (1.16)$$

The two terms in parentheses on the left side of (1.16) are both less than or equal to v, since y is an optimal strategy by hypothesis and Player II will never lose more than v. Suppose one of them were strictly less than v, say

$$ay_1 + by_2 < v, \qquad cy_1 + dy_2 \le v.$$

Then, since $x_i > 0$ and $x_1 + x_2 = 1$, the left side of (1.16) would be strictly less than v, which is a contradiction. Thus both terms in parentheses must equal v; hence

$$ay_1 + by_2 = v$$
$$cy_1 + dy_2 = v$$

Similarly, it can be shown that

$$ax_1 + cx_2 = v$$
$$bx_1 + dx_2 = v$$

In matrix form, we have the equations

$$Ay^T = \begin{pmatrix} v \\ v \end{pmatrix}, \qquad xA = (v, v). \qquad (1.17)$$

Recall also that

$$x_1 + x_2 = 1, \qquad y_1 + y_2 = 1. \qquad (1.18)$$

Let us now solve for x, y, and v. Starting with $xA = (v, v)$ from (1.17), we rewrite $(v, v)$ as $vJ$, where J is the vector (1, 1), to obtain $xA = vJ$. Multiplying both sides by $A^*$ gives

$$xAA^* = vJA^*. \qquad (1.19)$$

It can be shown that $AA^* = |A|\,I$, where I is the identity matrix and $|A|$ is the determinant of A. Thus we rewrite (1.19) as $x\,|A|\,I = vJA^*$, and since $|A|$ is a scalar and $xI = x$, we have

$$|A|\,x = vJA^*. \qquad (1.20)$$

Now multiply both sides by $J^T$ to get $|A|\,xJ^T = vJA^*J^T$. We can find v by noting that the components of x sum to 1, i.e. $xJ^T = 1$. Thus $|A| = vJA^*J^T$ and

$$v = \frac{|A|}{JA^*J^T}. \qquad (1.21)$$

Substituting v into (1.20), we find that

$$x = \frac{JA^*}{JA^*J^T}. \qquad (1.22)$$

Similarly, we can find that

$$y^T = \frac{A^*J^T}{JA^*J^T}. \qquad (1.23)$$

We can collect these results into the following theorem:

Theorem: Let A be a 2x2 matrix game.
Then if A does not have a saddle point, its unique optimal strategies and value are given by

$$x = \frac{JA^*}{JA^*J^T}, \qquad y^T = \frac{A^*J^T}{JA^*J^T}, \qquad v = \frac{|A|}{JA^*J^T}, \qquad (1.24)$$

where $A^*$ is the adjoint of A, $|A|$ is the determinant of A, and J is the vector (1, 1).

One of the key hypotheses of this theorem is that the matrix A does not have a saddle point. First of all, if A has a saddle point, there is no need for calculation: one only needs to find the saddle point and use the pure strategy corresponding to that entry of the game matrix. If there is no saddle point, then the optimal strategies must be calculated. Trouble arises if the formulas are applied to matrices that do have saddle points. Consider the matrix

$$\begin{pmatrix} 6 & 4 \\ 4 & 2 \end{pmatrix}.$$

This matrix has a saddle point at $a_{12}$. Suppose we were to use the formulas to calculate the optimal strategies and the value of the game. The expression $JA^*J^T$ is equal to $a + d - b - c$, and here $a + d - b - c = 6 + 2 - 4 - 4 = 0$. Since $JA^*J^T = 0$ in this case, and $JA^*J^T$ is the denominator in the formulas, we cannot follow through with our calculations. We can prove, however, that if a matrix has no saddle point, then the formulas always work, because the denominator cannot vanish: $a + d \ne b + c$, that is, $JA^*J^T \ne 0$.

Theorem: Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ be a matrix with no saddle point. Then $a + d \ne b + c$.

Proof: Since a cannot be a saddle point, either a < c or a > b, by the definition of a saddle point. Let us consider each case separately. If a < c, then c > d, since c also cannot be a saddle point; and since d is not a saddle point, d < b. Adding the strict inequalities a < c and d < b gives $a + d < b + c$, so $a + d \ne b + c$ (i.e. $JA^*J^T \ne 0$). If a > b, then b < d, since b also cannot be a saddle point; and since d is not a saddle point, d > c. Adding a > b and d > c gives $a + d > b + c$, so again $a + d \ne b + c$ (i.e. $JA^*J^T \ne 0$).
//

Therefore, when the matrix A has no saddle point, $a + d \ne b + c$, and we may use the derived formulas to calculate x, y, and v.

We thus have two methods for finding the optimal strategies and the value of a 2x2 matrix game, depending on whether or not the matrix has a saddle point. If there is a saddle point at $a_{ij}$, the game is strictly determined: the saddle point gives the value of the game, and the optimal strategies are the pure strategies in which Player I always chooses strategy i and Player II always chooses strategy j. If there is no saddle point, we calculate the optimal strategies for each player and the value of the game using the formulas given in (1.24).

Let us consider an example to demonstrate how the formulas for the optimal strategies and the value are used. We will find the optimal strategies for Player I and Player II, and also the value, of the game

$$\begin{pmatrix} 1 & 0 \\ -1 & 2 \end{pmatrix}.$$

This game has no saddle point, because no entry is both the minimum of its row and the maximum of its column. We compute

$$A^* = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}, \quad |A| = 2, \quad JA^* = (3, 1), \quad A^*J^T = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad JA^*J^T = 4,$$

so

$$x = \left(\tfrac{3}{4}, \tfrac{1}{4}\right), \qquad y = \left(\tfrac{2}{4}, \tfrac{2}{4}\right) = \left(\tfrac{1}{2}, \tfrac{1}{2}\right), \qquad v = \tfrac{2}{4} = \tfrac{1}{2}.$$

It can be checked that the strategies x and y do guarantee a payoff of 1/2. Thus, we have successfully found a way to compute the optimal strategies and value of a 2x2 matrix game.

VIII. Classic Example of a Two-Person, Zero-Sum Game

Let us discuss a "real-life" example of how the Minimax Theorem may be applied. This example is called Fighters and Bombers. During World War II, fighter pilots typically attacked bombers by coming down at them from the direction of the sun, a tactic known as "Hun-in-the-Sun." But when every fighter started using this strategy, the bomber crews put on their sunglasses and stared into the direction of the sun to look for the fighters.
So the fighter pilots came up with a new tactic: an attack from straight below. If the fighter was not spotted, this strategy was very effective, but if he was spotted, it was invariably fatal, since planes climb more slowly than they dive. This strategy is called "Ezak‐Imak," kamikaze spelled backward.

Thus, we have a two‐person, zero‐sum game between fighter pilots and bombers. The fighters can use either the Hun‐in‐the‐Sun or the Ezak‐Imak strategy, and the bombers can either look up or look down. We can agree to measure the payoff as the fighter's chance of survival in a single mission. Then the game matrix is

                                  Bomber Crew
                               Look Up   Look Down
Fighter   Hun‐in‐the‐Sun         0.95        1
Pilot     Ezak‐Imak                 1        0

This game has no saddle point, so there is no pure strategy for either the fighters or the bombers that cannot be exploited by an opponent who knows which strategy is going to be used. Therefore, both the fighters and the bombers must mix their actions; that is, use mixed strategies, randomly selecting an action from among the possibilities. This does not mean the strategies should be chosen with equal likelihood; clearly, Hun‐in‐the‐Sun is almost always successful for the fighter pilots. Intuitively, it should be used significantly more often than the Ezak‐Imak tactic, which means certain death if the bomber happens to look down. The mixed strategy will therefore heavily favor Hun‐in‐the‐Sun, employing Ezak‐Imak only occasionally to keep the bombers guessing.

Since this is a two‐person, zero‐sum game, the Minimax Theorem tells us that there are optimal strategies for each player such that the values of the game for each are equal. Let us use our theorem to compute the optimal mixed strategies and value for this game. It is clear that there is no saddle point in this game matrix. So we have

A* = [  0   −1  ]     |A| = (.95)(0) − (1)(1) = −1     J = (1, 1)
     [ −1   .95 ]

and then we can compute the following:

JA* = (−1, −.05)     A*Jᵀ = (−1, −.05)ᵀ     JA*Jᵀ = −1.05

Thus, we have strategies and value

x = JA*/(JA*Jᵀ) = (−1/−1.05, −.05/−1.05) = (20/21, 1/21) ≈ (.9524, .0476)
yᵀ = A*Jᵀ/(JA*Jᵀ) = (−1/−1.05, −.05/−1.05)ᵀ = (20/21, 1/21)ᵀ ≈ (.9524, .0476)
v = |A|/(JA*Jᵀ) = −1/−1.05 = 20/21 ≈ .9524

So we conclude that when these mixed strategies are used, the fighter pilots will survive 95.24% of the time, the best survival rate possible. When the fighter pilots use the Hun‐in‐the‐Sun strategy 20 out of 21 times, their survival rate is .9524, slightly higher than the .95 they achieve by using Hun‐in‐the‐Sun exclusively. The bombers, too, fare better by looking up 20 out of 21 times. We must note, however, that these strategies should be employed randomly. The fighter pilot, for example, should not use the Hun‐in‐the‐Sun strategy 20 times in a row and then use the Ezak‐Imak strategy; this would defeat the purpose of having a mixed strategy! The ratios are to be realized on average.

IX. Extensions of the Minimax Theorem

The Minimax Theorem has been improved and extended several times, beginning with von Neumann himself. He was able to draw conclusions about games with imperfect information and about games with more than two players, the latter raising the possibility of cooperation among some of the players in the form of alliances. Von Neumann's work is crucial to the arena of game theory; it truly provides the foundation upon which much of the theory rests. Perhaps the most astounding achievement, however, was made by John Nash. Nash proved a theorem that generalizes the Minimax Theorem to the case of nonzero‐sum games involving two or more players who are in competition, i.e. non‐cooperative games.
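An equilibrium of the kind discussed here, including the optimal pair just computed for Fighters and Bombers, can be verified directly: against either pure reply, each player's optimal mix does at least as well as the value of the game. A quick numeric check in Python (the variable names are my own, and the payoffs are entered as exact fractions):

```python
from fractions import Fraction as F

# Fighters-and-bombers payoff matrix (fighter's survival probability).
A = [[F(19, 20), F(1)],   # Hun-in-the-Sun vs. (look up, look down)
     [F(1),      F(0)]]   # Ezak-Imak      vs. (look up, look down)

x = (F(20, 21), F(1, 21))   # fighter's optimal mix
y = (F(20, 21), F(1, 21))   # bomber's optimal mix
v = F(20, 21)               # value of the game

# Against each pure bomber action, the fighter's mix earns at least v ...
for j in range(2):
    assert sum(x[i] * A[i][j] for i in range(2)) >= v
# ... and against each pure fighter action, the bomber's mix concedes at most v.
for i in range(2):
    assert sum(A[i][j] * y[j] for j in range(2)) <= v
```

Since the fighter's guaranteed payoff and the bomber's conceded payoff both equal 20/21, neither side can gain by unilaterally deviating, which is exactly the equilibrium property at stake here.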
Stated informally, Nash's Theorem says that any n‐person, non‐cooperative game, whether zero‐sum or nonzero‐sum, in which each player has a finite number of pure strategies has at least one equilibrium set of strategies. This differs from von Neumann's Minimax Theorem in three ways: 1) it guarantees the existence of an equilibrium for both zero‐sum and nonzero‐sum games, 2) it shows that there may be more than one solution, and 3) it extends to the case of any finite number of players. John Nash, along with John Harsanyi and Reinhard Selten, won the Nobel Prize in Economic Sciences in 1994 for his work in game theory. This was the first time in the 93‐year history of the Nobel prizes that an award had been given for work done purely in mathematics.

Finding satisfactory solutions to n‐person cooperative games, however, has proved elusive. Many solution concepts have been presented, some offering unique solutions and others not, but none has achieved a consensus in the game theory community as a way of determining rational action among the players when some degree of cooperation is allowed. A major difficulty in making such predictions is that it is no longer even possible to agree on what behaving rationally means in these n‐person cooperative games.

X. Conclusion

It is natural to wonder how the theory of games is actually useful; is it just an intellectual exercise for mathematicians, or does it offer something valuable to the rest of humankind? Truthfully, it is difficult to find a direct connection between the theory of games and actual real‐world problems. Game theory is, as of yet, too under‐developed, and rests on too many idealistic assumptions, to entirely bridge the gap between the mathematics and reality. The essential assumption of game theory is that the players act rationally, making decisions that are completely self‐serving.
However, we know that humans do not always behave rationally, in both the traditional and the game‐theoretic sense of the word. The abstract players of game theory do not have much in common with players in the real world. Even if game theory is not directly applicable to reality, though, it offers many important insights into competition and cooperation in real‐world situations. Economics, voting theory, and auctions are three areas where game theory can be applied. Business is likely the prime example of a successful application. A business can be thought of as a giant nonzero‐sum game: it interacts with other businesses and with people, and its success depends on its decisions regarding those with whom it interacts, while those agents are simultaneously making their own decisions. It is easy to see how business can be thought of as one very big, complicated game. So although game theory can be very mathematical and technical, it also has important implications for reality. Von Neumann's Minimax Theorem was the catalyst for much deeper study in the arena of game theory, and although much has been learned, there is still much more that we do not know. As we further our study of game theory, we will hopefully gain even more insight into interactive and strategic decision‐making. After all, in the words of American poet Edwin Arlington Robinson, "Life is the game that must be played."

Resources

Casti, John L. Five Golden Rules: Great Theories of 20th‐Century Mathematics and Why They Matter. New York: Wiley & Sons, 1996. Print.

Odifreddi, Piergiorgio. The Mathematical Century: The 30 Greatest Problems of the Last 100 Years. Princeton, NJ: Princeton UP, 2004. Print.

Owen, Guillermo. Game Theory. Philadelphia: W. B. Saunders, 1968. Print.

Stahl, Saul. A Gentle Introduction to Game Theory. Providence, RI: American Mathematical Society, 1999. Print.

Stevens, Scott P. "Games People Play: Game Theory in Life, Business, and Beyond." The Teaching Company Great Courses. 2008. DVD lecture series.

Thie, Paul R. An Introduction to Linear Programming and Game Theory. New York: John Wiley & Sons, 1979. Print.