Thursday, February 17, 2011

A Little Learning (Game Theory, Part Deux)

Here, as promised, is the dangerous thing.

Suppose you're getting a sequence of playing cards, and you're trying to figure out some statistics for the playing cards. At first, the cards seem utterly random, but after a while, a pattern emerges: There are slightly more black face cards than red ones, and there are slightly more red low-rank cards than black ones. You're a statistician, so you can quantify the bias—measure the correlation coefficient between color and rank, estimate the standard error in the observed proportions, and so forth. There are rigorous rules for computing all these things, and they're quite straightforward to follow.

Except, you're playing gin rummy, and the reason you're receiving a biased sequence of cards is that you're trying to collect particular cards. If you change your collection strategy, you'll affect the bias. You may have followed all the statistical rules, but you've forgotten about the context.

It might seem entirely obvious to you, now that I've told you the whole story, what the mistake is, and how to avoid it, but I contend that a wholly parallel thing is happening in sports statistics. I'm going to talk about basketball again, because I'm most familiar with it, but the issue transcends that individual sport, yes?

I've previously touched upon this, but this time, with the first post on game theory as background, I'm actually going to go through some of the analysis. Again, we won't be able to entirely avoid the math, but I'll try to describe in words what's going on at the same time. If calculus makes you squeamish, feel free to skip the following and move down to the part in bold.

In our simple model, the offense has two basic options: have the perimeter player shoot the ball, or pass it in to the post player and have him shoot the ball. The defense, in turn, can vary its defensive pressure on the two players, and it can do so continuously: It can double-team the perimeter player aggressively, double the post player off the ball, or anything in between. We'll use the principles of game theory to figure out where the Nash equilibrium for this situation is.

We'll denote the defensive strategy by b, for on-ball pressure: If b = 1, then all of the pressure is on the perimeter ball-handler; if b = 0, all of it's on the post player. An intermediate value, like b = 1/2, might mean that the defense is equally split between the two of them (man-to-man defense on each), but the exact numbers are not important; the important thing is that the defensive strategy varies smoothly, and its effects on the offensive efficiency also vary smoothly.

Each of the two offensive options has an associated efficiency, which represents how many points on average are scored when that player attempts a shot. We'll call the perimeter player's efficiency r, and the post player's efficiency s. As you might expect, both efficiencies depend on the defensive strategy, so we'll actually be referring to the efficiency functions r(b) and s(b). The perimeter player is less efficient when greater defensive pressure is placed on him, naturally, so r(b) is a decreasing function of b. On the other hand, the post player is more efficient when greater defensive pressure is placed on the perimeter player, so s(b) is an increasing function of b.
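
To make this concrete, here is a small Python sketch with made-up linear curves for r(b) and s(b). The particular slopes and intercepts are purely illustrative assumptions, not measurements of anything; any decreasing r and increasing s would tell the same story.

```python
# Illustrative efficiency curves, in points per shot attempt, as functions of
# defensive ball pressure b in [0, 1]. The linear forms and the specific
# numbers are invented for demonstration purposes only.

def r(b):
    """Perimeter player's efficiency: decreases as pressure on him increases."""
    return 1.1 - 0.5 * b

def s(b):
    """Post player's efficiency: increases as pressure shifts to the perimeter."""
    return 0.7 + 0.6 * b

print(r(0.0), s(0.0))  # all pressure on the post: perimeter thrives, post struggles
print(r(1.0), s(1.0))  # all pressure on the ball: the reverse
```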

Now let's look at this situation from a game theory perspective. Will the Nash equilibrium of this system involve pure strategies, or mixed strategies? (A pure defensive strategy in this instance consists of either b = 0 or b = 1.) Right away, we can eliminate the pure strategies as follows: If the offense funnelled all of its offense through one of those players, and the defense knew it, the defense would concentrate all of its pressure on that player. On the other hand, if the defense always pressured one of the players, and the offense knew it, the offense would always have the other player shoot. Since those two scenarios are incompatible with one another, the Nash equilibrium must involve mixed strategies. Our objective, then, is to figure out what those mixed strategies are.
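
We can check that logic numerically with the illustrative curves from the earlier sketch (again, the numbers are assumptions, not data): against either pure defensive strategy, the offense's best response is exactly the option the defense left open.

```python
# Quick check that neither pure defensive strategy can be an equilibrium,
# using the same invented curves as before.

def r(b): return 1.1 - 0.5 * b
def s(b): return 0.7 + 0.6 * b

for b in (0.0, 1.0):
    best = "perimeter" if r(b) > s(b) else "post"
    print(f"b = {b}: offense's best response is the {best} player "
          f"(r = {r(b):.2f}, s = {s(b):.2f})")
# b = 0.0 -> the offense goes to the perimeter, so the defense would want to raise b;
# b = 1.0 -> the offense goes to the post, so the defense would want to lower b.
# Either way, someone wants to switch, so no pure-strategy equilibrium exists.
```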

The offensive mix, or strategy, we'll represent by p, the fraction of time that the perimeter player shoots the ball. The rest of the time, 1-p, the post player shoots the ball. The overall efficiency function of the offense, as a function of defensive strategy b, is then

Q(b) = p r(b) + (1-p) s(b)
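
In code, with the same invented r and s as in the earlier sketch, this is just a weighted average of the two options:

```python
# Overall offensive efficiency as a weighted average of the two options,
# with the invented r and s from the earlier sketch.

def r(b): return 1.1 - 0.5 * b
def s(b): return 0.7 + 0.6 * b

def Q(b, p):
    """Expected points per shot when the perimeter player takes a fraction p
    of the shots and the defense applies ball pressure b."""
    return p * r(b) + (1 - p) * s(b)

print(Q(0.5, 0.5))  # e.g., balanced defense, balanced offensive mix
```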

The objective of the defense, in setting its defensive strategy, will be to ensure that the offense cannot improve its outcome by varying its strategy p. That is, it will set the value of b such that the partial derivative of Q with respect to p (not b) is equal to 0:

∂Q/∂p = r(b) - s(b) = 0

which happens when

r(b) = s(b)

in other words, when the efficiencies of the two options are equal. The offense, in setting its strategy p, will aim to zero out the partial derivative of Q with respect to b:

∂Q/∂b = p r'(b) + (1-p) s'(b) = 0

which happens when

p = s'(b) / [s'(b) - r'(b)]

where b is taken to be the point where the two efficiency curves meet, since the offense knows the defense will play there.
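
Putting the two conditions together for the illustrative linear curves (the functions and numbers are still assumptions chosen for the sketch, nothing more), we can solve for the equilibrium numerically:

```python
# Solving the sketch model for the Nash equilibrium: the defense picks b* where
# r(b*) = s(b*); the offense picks p* = s'(b*) / (s'(b*) - r'(b*)). With linear
# curves everything has a closed form, but the bisection below works for any
# decreasing r and increasing s.

def r(b): return 1.1 - 0.5 * b   # r'(b) = -0.5
def s(b): return 0.7 + 0.6 * b   # s'(b) = +0.6

def equilibrium_pressure(lo=0.0, hi=1.0, tol=1e-9):
    """Find b* with r(b*) = s(b*) by bisection on the gap r - s."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if r(mid) > s(mid):
            lo = mid   # perimeter player is still the better option: raise pressure
        else:
            hi = mid
    return (lo + hi) / 2

b_star = equilibrium_pressure()
p_star = 0.6 / (0.6 - (-0.5))    # s' / (s' - r') for these linear curves
print(f"b* = {b_star:.3f}, efficiency at equilibrium = {r(b_star):.3f}, p* = {p_star:.3f}")
# Roughly: b* = 0.364, shared efficiency = 0.918 points per shot, p* = 0.545
```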

But let's not worry about the offensive strategy; the important thing to take away is that at the Nash equilibrium, the defense will adjust its pressure until the efficiencies of the two offensive options are equal. Let's show what that looks like graphically.

[Figure: the decreasing curve r(b) and the increasing curve s(b) plotted against defensive pressure b, with the Nash equilibrium at their crossing point.]
We'll see here how game theory tells us what should be common sense: If the current defensive strategy were somewhere other than the Nash equilibrium—say, if it were further to the left—the offense could improve its outcome by shifting more of its offensive load to the perimeter player, since he's the more efficient option on that side of the graph. The reverse holds on the right side of the graph. Only at the point where the two curves cross is the offense powerless to improve its situation by changing its offensive mix, which is exactly the outcome the defense wants.
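
Here is the same point in numbers rather than a picture, still with the made-up curves: park the defense on either side of the crossing point, and the offense can do better by leaning harder on whichever player is more efficient there.

```python
# Away from the crossing point the offense has a profitable deviation,
# using the invented curves from the earlier sketches.

def r(b): return 1.1 - 0.5 * b
def s(b): return 0.7 + 0.6 * b
def Q(b, p): return p * r(b) + (1 - p) * s(b)

for b in (0.2, 0.6):   # left of, then right of, the crossing point near 0.36
    print(f"b = {b}: Q(p=0.25) = {Q(b, 0.25):.3f}, "
          f"Q(p=0.50) = {Q(b, 0.50):.3f}, Q(p=0.75) = {Q(b, 0.75):.3f}")
# At b = 0.2, Q rises with p (shoot from the perimeter more);
# at b = 0.6, Q falls with p (go to the post more). Only at the crossing
# point does the offensive mix stop mattering.
```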

As a corollary, the exact location of the Nash equilibrium depends vitally on the efficiency functions of the offensive components. If, for instance, one of the efficiency functions drops, the observed efficiency of the offense (that is, the efficiency measured by statistics) will also drop. Let's take a look at that graphically:

[Figure: the same two curves, with the post player's curve s(b) shifted downward; the crossing point moves down and to the right.]
In this figure, the efficiency function of the post player, represented by s(b), has dropped. This has the effect of sliding the Nash equilibrium point down and to the right, which indicates increased ball pressure and a decrease in the observed efficiency of both the post player and the perimeter player. It's important to recognize that the efficiency function of a player refers to the entire curve, from b = 0 to b = 1, but when we gather basketball statistics, we merely get the observed efficiency, the value of that curve at a single point—the point where the team strategies actually reside (in this case, the Nash equilibrium).
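
To see the effect in numbers, here is the sketch again with the post player's whole curve lowered by a constant (purely an assumed perturbation): the equilibrium pressure rises and the observed efficiency of both players falls, even though r(b) never changed.

```python
# Re-solving the sketch model after shifting the post player's entire curve
# down by 0.1 (an invented perturbation, for illustration only).

def r(b): return 1.1 - 0.5 * b

def equilibrium_pressure(s, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection for b* with r(b*) = s(b*)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if r(mid) > s(mid) else (lo, mid)
    return (lo + hi) / 2

s_before = lambda b: 0.7 + 0.6 * b
s_after  = lambda b: 0.6 + 0.6 * b   # the post player's curve, shifted down

for label, s in (("before", s_before), ("after ", s_after)):
    b_star = equilibrium_pressure(s)
    print(f"{label}: b* = {b_star:.3f}, observed efficiency = {r(b_star):.3f}")
# before: b* = 0.364, observed efficiency = 0.918
# after : b* = 0.455, observed efficiency = 0.873  <- the perimeter player's
# measured numbers drop too, even though his own curve is untouched.
```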

Consider: Why might the efficiency function of the post player drop, as depicted above? It might be because the backup post player came in. It might be because a defensive specialist post player came in. In short, it might be because of a variety of things, none of which have to do with the perimeter player and his efficiency function—and yet the perimeter player's observed efficiency (whether we're talking about PER, or WP48, or whatever) drops as a result.

There's nothing special about the perimeter player in this regard; we would see the same effect on the post player if the perimeter player (or his defender) were swapped out. In general, the observed efficiency of a player goes up or down owing, in part, to the efficiency functions of his teammates.

We see here an analogy to the distinction, drawn in economics, between demand and quantity demanded. Suppose we see that sales of a particular brand of cheese spread have dropped over the last quarter. That is to say, the quantity demanded has decreased. Does that necessarily mean that demand itself has dropped? Not necessarily. It could be that a new competing brand of cheese spread has arrived on the market. Or, it could be that production costs of the cheese spread have increased, leading to a corresponding increase in price. Both of these decrease the quantity demanded, but only the former represents a decrease in actual demand. Demand is a function of price; quantity demanded is just a number. If all we measure is quantity demanded, and we ignore the price, we haven't learned all we need to carry on our business. As economists, we would be roundly criticized (and rightly so) for neglecting this critical factor.

We are, in the basketball statistics world (and that of sports statistics in general), at a point where all we measure is the number. We don't, as a rule, measure the function. We apply our statistical rules with rigor and expect our results to acquire the patina of that rigor. But we mustn't be hypnotized by that patina and forget what we are measuring. If our aim is to describe the observed situation, then the number may be all we need. But if our aim is to describe some persistent quality of the situation—as must be the case if we are attempting to (say) compare players, or if we are hoping to optimize strategies—then we are obligated to measure the function. Doing so is very complex indeed for basketball; there is an array of variables to account for, and we have at present only the most rudimentary tools for capturing them. It is OK to punt that problem for now. But in the meantime, we must not delude ourselves into thinking that by measuring that one number, we have all we need to carry on our business.

3 comments:

  1. Thank you, thank you, thank you.

    Now try explaining this to someone whose argument against Player X is to compare his stats to Player Y, accompanied by "stats are facts!". It is near impossible.

  2. Thanks Gil. As you say, it's a tough row to hoe. But I've been thinking about this for a while, and I'm glad I got it down. I can just use this URL. :)

  3. You might have caught this already, but this paper http://www.justinmrao.com/goldman_rao_sloan.pdf was presented at the MIT Sloan conference. It deals with ... Mixed Strategy Nash Equilibrium in relation to the optimal time to shoot vs the 24 second shot clock!
