Prefatory to more basketball talk, I want to take a bit of time out to describe what I think is a rather elegant area of mathematics: game theory. Even the name is elegant—simple and to-the-point. As its name implies, game theory is the study of game strategies and tactics from a mathematical point of view. Rather than describe its foundations and move on from there, as a textbook would, I'm going to leap right in and use game theory in a couple of simple situations, which I hope will be a less obscure way of conveying what it's all about.

Suppose that David and Joshua are playing a friendly game of Global Thermonuclear War. At some point, both players have to decide whether to launch an attack or not. If David attacks and Joshua does not, then David wins and earns 5 points (this is a game, after all) and Joshua earns 1 point for being peaceable. Conversely, if Joshua attacks and David does not, Joshua earns 5 points and David earns just 1 point. If both attack, the Earth is rendered a wasteland and neither side earns any points; if both sides do not attack, everybody wins and both sides earn 6 points. The foregoing can be summarized in tabular form, as below.

|                      | Joshua attacks    | Joshua stands down |
|----------------------|-------------------|--------------------|
| **David attacks**    | David 0, Joshua 0 | David 5, Joshua 1  |
| **David stands down**| David 1, Joshua 5 | David 6, Joshua 6  |

David's payoffs are shown in blue, Joshua's in red. Let's run through a simple game-theoretical analysis of GTW. If we focus on just the blue numbers (David's payoffs), we see that if Joshua attacks, David's best strategy is to stand down (1 > 0). If Joshua stands down, David's best strategy is, again, to stand down (6 > 5). No matter what Joshua does, in short, David should stand down.

Moving over to the red numbers, we come to a similar conclusion for Joshua's strategy: No matter what David does, Joshua is better off standing down (either 1 > 0, or 6 > 5). As a result, both sides stand down; the only way to win is, indeed, not to play. Whew!
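The best-response reasoning above is mechanical enough to automate. Here's a small Python sketch (my own, not from the original post; the dictionary layout and helper name are mine) that finds each player's best response to a fixed opponent move in the payoff matrix above:

```python
# Payoff matrix for the Global Thermonuclear War game described above.
# Each cell maps (David's move, Joshua's move) -> (David's points, Joshua's points).
payoffs = {
    ("attack", "attack"): (0, 0),
    ("attack", "stand down"): (5, 1),
    ("stand down", "attack"): (1, 5),
    ("stand down", "stand down"): (6, 6),
}

MOVES = ["attack", "stand down"]

def best_response(player, opponent_move):
    """Return the move maximizing `player`'s payoff (0 = David, 1 = Joshua)
    when the opponent's move is held fixed."""
    if player == 0:  # David: vary the first coordinate of the cell key
        return max(MOVES, key=lambda m: payoffs[(m, opponent_move)][0])
    else:            # Joshua: vary the second coordinate
        return max(MOVES, key=lambda m: payoffs[(opponent_move, m)][1])

# No matter what the opponent does, standing down is the best response:
for move in MOVES:
    print("David vs", move, "->", best_response(0, move))
    print("Joshua vs", move, "->", best_response(1, move))
```

Running this confirms that "stand down" is a dominant strategy for both players in this matrix.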

But maybe we shouldn't all relax quite yet. Suppose that we were to adjust the payoff matrix (which is what we call that table up there). Heads of state often get a little nationalistic, and they may well decide that a world without the enemy is better after all, even if we do have to suffer from a little radioactive fallout. At the same time, perhaps it is better to go out fighting and take out the enemy, even as we ourselves are getting wiped out. Then, possibly, the payoff matrix would look like this:

The numbers have changed only a little, but the conclusion is quite different: This time, no matter what Joshua does, David is better off attacking, and no matter what David does, Joshua is better off attacking. The upshot is that both sides end up attacking and wiping each other off the map. The rest of you, I hope you look forward to serving your cockroach overlords...

In the two examples above, the eventual solution has the property that the strategy for both sides was the best they could do, no matter what the opponent did. Such a solution is called a Nash equilibrium, after John Forbes Nash, who won a Nobel Prize in economics for his work on such games. (Yes, the Beautiful Mind guy.) In fact, in each case, the equilibrium strategy was a single choice: either "always attack" or "always stand down."

That is not always the case. Consider a rather less violent game of football (well, somewhat less violent). On a crucial third down, the Steelers can choose to run the ball, or throw the ball; the Packers, on the other hand, can choose to defend the pass or defend the run.

Here's how we might model things. We'll let the payoff be simply the chance, the probability, that the Steelers make a first down. We'll also say that if the Steelers pass, they make a first down 60 percent of the time when the Packers defend the run, but only 20 percent of the time when they defend the pass. If the Steelers run, they make a first down 50 percent of the time when the Packers defend the pass, but only 30 percent of the time when they defend the run. Here's the payoff table:

|                   | Packers defend pass        | Packers defend run         |
|-------------------|----------------------------|----------------------------|
| **Steelers pass** | Steelers 20%, Packers 80%  | Steelers 60%, Packers 40%  |
| **Steelers run**  | Steelers 50%, Packers 50%  | Steelers 30%, Packers 70%  |

(We've included the Packers' payoffs in red, although they can be derived from the Steelers' payoff by subtracting from 100 percent.) This time, matters are not as clear cut: For obvious reasons, the Steelers' best option depends on what the Packers do, and the Packers' best option depends on what the Steelers do. But, for our purposes, they both have to show their hands at the same time. What does game theory have to say about this kind of situation?

To figure that out, we'll have to consider a new kind of strategy, called a mixed strategy. A mixed strategy (as opposed to the pure strategies we considered above) is simply one that chooses each of the options with a certain probability. For instance, one possible mixed strategy the Steelers could employ is to run the ball half the time, and pass it the other half. Similarly, the Packers could defend the pass 60 percent of the time, and defend the run 40 percent of the time. There are an infinite number of different mixed strategies both teams could employ. How do we figure out what mixed strategies are actually the best for each side?
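To see a mixed strategy in action, here's a quick Monte Carlo sketch (my own, not from the post; the function name and trial count are arbitrary) that plays out the particular pair just mentioned, the Steelers passing half the time against the Packers defending the pass 60 percent of the time, and estimates the resulting first-down rate. The per-play success probabilities are the ones given above.

```python
import random

# First-down probability for each (Steelers call, Packers call), from the text.
success = {
    ("pass", "pass"): 0.20,  # pass into a pass defense
    ("pass", "run"): 0.60,   # pass against a run defense
    ("run", "pass"): 0.50,   # run against a pass defense
    ("run", "run"): 0.30,    # run into a run defense
}

def simulate(p_pass, q_def_pass, trials=200_000, seed=0):
    """Estimate the Steelers' first-down rate when they pass with
    probability p_pass and the Packers defend the pass with probability
    q_def_pass, each side choosing independently on every play."""
    rng = random.Random(seed)
    made = 0
    for _ in range(trials):
        offense = "pass" if rng.random() < p_pass else "run"
        defense = "pass" if rng.random() < q_def_pass else "run"
        if rng.random() < success[(offense, defense)]:
            made += 1
    return made / trials

# Steelers pass half the time; Packers defend the pass 60% of the time.
print(simulate(0.5, 0.6))  # roughly 0.39
```

Different choices of the two probabilities give different conversion rates, which is exactly the question the rest of the post tackles: which pair neither side would want to move away from.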

Here's where game theory gets a bit hairy (hence the "kind of" in the title). Essentially, what the Steelers want to do is to make their strategy "resistant" against the Packers, in the sense that no matter what the Packers do, they can't damage the Steelers' chances of making their first down. And the Packers want to set their strategy so that no matter what the Steelers do, they can't improve their chances of making the first down. Such a situation, where neither side can do any better without the other side changing what they do, is also a Nash equilibrium. The brilliant thing that Nash did, which earned him that Nobel Prize, was to show that in such games, there is always a set of mixed or pure strategies that yields a Nash equilibrium.

What follows is pretty heavy mathematical stuff. If you don't want me to go all calculus upside your head, feel free to skip it and go to the conclusion in bold, below. Here's what we do. We characterize the Steelers' strategy by p, the probability that the Steelers pass the ball, and the Packers' strategy by q, the probability that they defend the pass. From the Steelers' perspective, there are four distinct possibilities:

- Steelers pass (p), Packers defend the pass (q): Payoff is 20 percent.
- Steelers pass (p), Packers defend the run (1-q): Payoff is 60 percent.
- Steelers run (1-p), Packers defend the pass (q): Payoff is 50 percent.
- Steelers run (1-p), Packers defend the run (1-q): Payoff is 30 percent.

Putting it all together, we get an expression for the Steelers' payoff S:

S = 0.2 pq + 0.6 p(1-q) + 0.5 (1-p)q + 0.3 (1-p)(1-q)

S = 0.3 + 0.3 p + 0.2 q - 0.6 pq
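As a sanity check on that algebra (my own sketch, not in the original), we can confirm numerically that the expanded form agrees with the cell-by-cell expression for any p and q:

```python
def S_cells(p, q):
    # Steelers' first-down probability, summed over the four cases above.
    return (0.2 * p * q + 0.6 * p * (1 - q)
            + 0.5 * (1 - p) * q + 0.3 * (1 - p) * (1 - q))

def S_expanded(p, q):
    # The simplified form: S = 0.3 + 0.3p + 0.2q - 0.6pq
    return 0.3 + 0.3 * p + 0.2 * q - 0.6 * p * q

# The two expressions agree on a grid of strategy pairs.
for i in range(11):
    for j in range(11):
        p, q = i / 10, j / 10
        assert abs(S_cells(p, q) - S_expanded(p, q)) < 1e-12
print("expansion checks out")
```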

We want to find the value of p that makes the partial derivative of S with respect to q equal to 0. That is, we need the value of p that makes it utterly irrelevant what the Packers do with their q (as it were).

∂S/∂q = 0.2 - 0.6 p = 0

which happens when p = 1/3. We can write a similar expression for the Packers' payoff P from their side of the payoff matrix:

P = 0.8 pq + 0.4 p(1-q) + 0.5 (1-p)q + 0.7 (1-p)(1-q)

P = 0.7 - 0.3 p - 0.2 q + 0.6 pq

and then the optimal strategy for the Packers is dictated by

∂P/∂p = -0.3 + 0.6 q = 0

which happens when q = 1/2. **So the Nash equilibrium happens when the Steelers pass the ball 1/3 of the time and run the ball 2/3 of the time, and the Packers defend the run 1/2 of the time and defend the pass 1/2 of the time.** Under these strategies, the Steelers make their first down 40 percent of the time, and nothing the Steelers do on their own can increase it, and nothing the Packers do on their own can decrease it (given the payoff table). That's what makes it a Nash equilibrium.
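The defining "resistance" property of this equilibrium is easy to verify numerically. This sketch (mine, using the two payoff formulas derived above) checks that at p = 1/3 the Steelers' payoff is pinned at 40 percent regardless of q, and at q = 1/2 the Packers' payoff is pinned at 60 percent regardless of p:

```python
def S(p, q):
    # Steelers' first-down probability: S = 0.3 + 0.3p + 0.2q - 0.6pq
    return 0.3 + 0.3 * p + 0.2 * q - 0.6 * p * q

def P(p, q):
    # Packers' payoff: P = 0.7 - 0.3p - 0.2q + 0.6pq (equals 1 - S)
    return 0.7 - 0.3 * p - 0.2 * q + 0.6 * p * q

p_star, q_star = 1 / 3, 1 / 2

# At p = 1/3, the Steelers' payoff is 40% no matter what the Packers do...
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(S(p_star, q) - 0.4) < 1e-9

# ...and at q = 1/2, the Packers' payoff is 60% no matter what the Steelers do.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(P(p, q_star) - 0.6) < 1e-9

print("Nash equilibrium verified: S =", S(p_star, q_star))
```

Since neither side can gain by deviating unilaterally, neither has any reason to move, which is exactly the equilibrium condition.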

Notice that none of the pure strategies work as well as the mixed strategies do. If the Steelers always ran the ball on third down, and the Packers knew that, they would just defend the run and limit the Steelers to making first downs 30 percent of the time. It's even worse if the Steelers passed all the time; they'd make a first down only 20 percent of the time. Conversely, if the Packers always defended the run, and the Steelers knew that, they'd just pass all the time and make their first down with 60 percent efficiency. And so on.

The salient thing to take from all this, though, which I'll get into in my next post, is that although the Steelers' odds of making the first down at the Nash equilibrium don't depend on the Packers' strategy, their best strategy does depend on how good the Packers are at defending the various options (which is represented by the payoff matrix). Although that is hardly earth-shattering in this particular case, we'll see that it has interesting repercussions when trying to rate individual player achievement.

