Wednesday, November 24, 2010

Too Many Damned Monkeys

What do you need more monkeys to do: (a) guarantee the writing of all of Shakespeare's plays, or (b) be able to sink an infinite number of basketball shots in a row? OK, I realize that this is entirely inconsequential, but it actually came up a couple of days ago in what would otherwise have been fairly ordinary coffeehouse conversation, so let me bring you up to speed.

The anchor point is the notion that by having an infinite number of monkeys, each of them sitting in front of a typewriter, randomly typing away, you could guarantee that one of them would surely generate a perfect typescript of Hamlet. Or Macbeth. On the other hand, you'd also guarantee that one of them would generate a "perfect" version of Astrology for Dummies.

What this is really about (since few of us are likely to corral together an infinite number of monkeys) is the so-called cardinality of possible books of arbitrary (but finite) length. Now what's cardinality? The cardinality of a finite set is simply the number of things in the set. So, for example, the cardinality of the U.S. Supreme Court justices is nine, usually. The cardinality of the English alphabet is 26. And the cardinality of the sand grains on the Earth is some almost unimaginably large number. But it's still finite.

Infinite sets are a whole 'nother kettle of fish. Maybe the simplest example of an infinite set is ℕ, the set of natural numbers: 0, 1, 2, ... We use the ellipsis (...) to indicate that the natural numbers go on, forever, without end. There is no last number; in other words, infinity is not really a number in the usual sense. Nonetheless, we might say that the cardinality of ℕ is infinity, which is conventionally denoted ∞.


But in so doing, we would be ambiguous, for as it turns out, there are different varieties of infinity. The infinity of ℕ is the smallest possible infinity, but there are larger infinities. That sounds kind of paradoxical: How could a set go on longer than forever?

Well, let's see if we can construct an infinity that's larger than the cardinality of ℕ. The first thing we might do is add some more numbers to ℕ and see if that yields a set with larger cardinality: we might add in all the negative whole numbers, to get ℤ, the set of all integers. Shouldn't ℤ, which is (naively) almost twice as big as ℕ, have nearly twice as large a cardinality?


No, and here we run into one of the fundamental differences between finite sets and infinite sets. Suppose we divide ℕ into two disjoint subsets: the odds O (1, 3, 5, ...) and the evens E (0, 2, 4, ...). Intuitively, both O and E are infinite sets. But if ℕ is the union (the sum set, so to speak) of O and E, is ℕ then doubly infinite?

Mathematicians decided that was too much. So cardinality is defined, less intuitively but more consistently, as follows. We say that the cardinality of the English alphabet is 26, because there are 26 letters in the alphabet. Another way of saying the same thing is that the letters of the alphabet can be placed into a one-to-one correspondence with the set of numbers from 1 through 26: 1-A, 2-B, 3-C, and so on, up to 26-Z. You can try a similar exercise with the U.S. Supreme Court justices.

If we define the notion of cardinality this way, then it follows that two sets have the same cardinality if there exists a one-to-one correspondence between the sets. Somewhat amazingly, then, the set of odd numbers O has exactly the same cardinality as ℕ, because one can define a one-to-one correspondence that matches each number in ℕ with a number in O, and vice versa: 0-1, 1-3, 2-5, 3-7, ..., in each case pairing a number n from ℕ with the number 2n+1 from O. It doesn't matter that one can define a correspondence in which the two sets don't match one-to-one; all that matters is that there exists at least one correspondence where they do match.

Pretty clearly, we can do the same thing with E, matching n from ℕ with 2n from E. So all three sets, ℕ, O, and E, have the same cardinality, even though O and E combine to make up ℕ. The question then arises: Are there infinite sets that can't be matched up one-to-one with ℕ, no matter how you try? Not ℤ, at any rate: we can match it up with ℕ by pairing all odd numbers m in ℕ with (-1-m)/2 from ℤ, and all even numbers n in ℕ with n/2 from ℤ.
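If it helps to see those pairings spelled out, here's a minimal sketch in Python (the function names are just mine, for illustration) of the correspondences between ℕ and O, E, and ℤ:

```python
def nat_to_odd(n):       # n in N  ->  2n + 1 in O
    return 2 * n + 1

def nat_to_even(n):      # n in N  ->  2n in E
    return 2 * n

def nat_to_int(n):       # n in N  ->  Z: odd n go negative, even n go non-negative
    return (-1 - n) // 2 if n % 2 == 1 else n // 2

print([nat_to_odd(n)  for n in range(6)])   # [1, 3, 5, 7, 9, 11]
print([nat_to_even(n) for n in range(6)])   # [0, 2, 4, 6, 8, 10]
print([nat_to_int(n)  for n in range(7)])   # [0, -1, 1, -2, 2, -3, 3]
```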

Well then, what about ℚ, the set of rational numbers, that is, all possible fractions involving only whole numbers in the numerator and denominator? Surely that is a bigger set. But as it turns out, ℚ also has the same cardinality as ℕ, even though there are an infinite number of possible numerators and an infinite number of denominators. This state of affairs has led people to write such semi-sensical equations as

∞ + ∞ = ∞

since O and E combine to make ℕ, and

∞ × ∞ = ∞

since all the infinite pairings of ℕ make up ℚ. (By the way, in case you're wondering, ℕ stands for Natural Numbers, of course; ℤ stands for Zahlen, the German word for numbers; and ℚ stands for Quotient.)


All right, what about ℝ, the set of real numbers? Can that set be placed into a one-to-one correspondence with ℕ? Based on the way things have been going, you might suppose that they could, but in 1891, the German mathematician Georg Cantor (1845-1918) showed that in fact they could not, that ℝ was a strictly larger set than ℕ.

His argument was a clever one, employing proof by contradiction. Suppose, Cantor said, that you could find such a one-to-one correspondence. You could then write out a catalogue of real numbers, as follows:

1 - 0.14159265...
2 - 0.71828182...
3 - 0.41421356...
 
and so forth. Now, suppose you construct a new number g, using the following process: The first digit of g will be the first digit of the first number in your catalogue, plus one (wrapping around from 9 to 0); the second digit of g will be the second digit of the second number, plus one; the third digit of g will be the third digit of the third number, plus one; and so on. We could read out g along the diagonal of our catalogue of real numbers, with each diagonal digit bumped up by one, like this:
 
1 - 0.24159265...
2 - 0.72828182...
3 - 0.41521356...
 
So g would be the number 0.225... This number g has an amazing property: it cannot appear anywhere in our catalogue of real numbers. Why not? Because it differs from the first number at the first digit, it differs from the second number at the second digit, it differs from the third number at the third digit, ... in short, it differs from every single number in the catalogue.
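Here's a toy version of the construction in Python, run on just the three-entry catalogue above. (For a fully rigorous proof you also have to worry about numbers with two decimal expansions, like 0.1999... = 0.2000..., but the idea is the same.)

```python
def diagonal_number(catalogue):
    """Build a decimal that differs from the k-th entry of the catalogue
    at its k-th decimal place, by bumping each diagonal digit up by one
    (with 9 wrapping around to 0)."""
    new_digits = []
    for k, entry in enumerate(catalogue):
        d = int(entry[2 + k])                  # k-th digit after the "0."
        new_digits.append(str((d + 1) % 10))
    return "0." + "".join(new_digits)

catalogue = ["0.14159265", "0.71828182", "0.41421356"]
print(diagonal_number(catalogue))   # 0.225, which matches none of the listed numbers
```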

We have a contradiction: Either g is not a real number, or our catalogue is not as complete as we thought it was. Well, g is clearly a real number, so the problem must lie with the other part: our catalogue is not complete. After all, we only assumed we could create such a catalogue. Since it seems we cannot, no one-to-one correspondence exists between ℝ and ℕ.

You might think that there's a simple way around this, if we simply add g to our catalogue, or rearrange it in some way. But Cantor's diagonalization argument, as it is usually called, would apply just as well to this new catalogue. No matter what catalogue you attempt to compile and amend, there's no way to avoid the construction of a real number that's nowhere in the list. Those two sets have fundamentally different cardinalities. And because of that, we can't use the single symbol ∞ to denote their cardinalities. Instead, mathematicians use the aleph-numbers: The cardinality of ℕ is ℵ₀ (pronounced "aleph-null"), and under certain commonly held assumptions (namely, the continuum hypothesis), that of ℝ is ℵ₁ (pronounced "aleph-one").

So what about all those scripts for Shakespeare? Each of them can clearly be entered into a computer document, which is represented by a finite string of digits in the computer. We can therefore place the set of possible scripts into a one-to-one correspondence with the integers in ℕ, meaning that the set of scripts has cardinality ℵ₀, so ℵ₀ monkeys would be enough for at least one monkey to write any given script. (In fact, ℵ₀ monkeys would write that script.)
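One concrete way to set up that correspondence: list every possible text in shortlex order (shorter texts first, ties broken alphabetically), and each text gets its own natural number. A quick Python sketch, using a toy alphabet of my own choosing:

```python
def shortlex_rank(s, alphabet="abcdefghijklmnopqrstuvwxyz .,'"):
    """Assign every finite string over a fixed alphabet a distinct natural
    number by counting in shortlex order (shorter strings first, then
    alphabetically). This pairs the set of all possible finite texts
    one-to-one with the natural numbers."""
    b = len(alphabet)
    shorter = (b ** len(s) - 1) // (b - 1)     # how many strictly shorter strings exist
    value = 0
    for ch in s:
        value = value * b + alphabet.index(ch)
    return shorter + value

print(shortlex_rank("to be, or not to be"))    # some (large) natural number
```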

But what about the infinite string of makes in a basketball game? These are infinitely long strings of basketball shots (each one with ℵ₀ shots), so there would be a one-to-one correspondence between those strings and infinitely long sequences of digits, i.e., ℝ, the reals. So it would take ℵ₁ monkeys to guarantee that at least one monkey would shoot any given sequence (in particular, the one sequence consisting of all makes).
I don't even want to know about the bananas.

Friday, September 17, 2010

An Unusual Series

Which may not be all that interesting to you, unless you're interested in recreational math. For lots of you, that may be sort of an oxymoron. (Although, I'm hoping it's less likely among readers of my blog than it would be among the general population.)

Here's the idea. Start with an integer. Add its digits together. If that sum is even, halve the number (not the sum of digits) to get the next number. If the sum is odd, add one to the number.

For instance, suppose we start with the number 10. Its digits sum to 1+0 = 1, so we add 1 to get 11. Those digits sum to 1+1 = 2, so we halve it to get 11/2 = 5.5. Those digits sum up to 5+5 = 10, so we again halve the number to get 2.75. Those digits sum up to 2+7+5 = 14, so we again halve the number to get 1.375...well, I think you get the idea.
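If you'd like to generate the series yourself without any roundoff, here's a sketch in Python that keeps every term as an exact integer over a power of ten (the representation is my choice; any exact scheme would do):

```python
from itertools import islice

def digit_sum(m):
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(c) for c in str(m))

def series(start):
    """Generate the series exactly, keeping each term as m / 10**k.

    Halving m / 10**k gives (5*m) / 10**(k+1); adding one gives
    (m + 10**k) / 10**k. Working with integers avoids roundoff entirely.
    """
    m, k = start, 0
    while True:
        digits = str(m).rjust(k + 1, "0")
        yield digits if k == 0 else digits[:-k] + "." + digits[-k:]
        if digit_sum(m) % 2 == 0:
            m, k = 5 * m, k + 1              # digit sum even: halve the number
        else:
            m += 10 ** k                     # digit sum odd: add one
        while k > 0 and m % 10 == 0:         # tidy up trailing zeros (e.g. 6.0 -> 6)
            m, k = m // 10, k - 1

print(list(islice(series(10), 6)))
# ['10', '11', '5.5', '2.75', '1.375', '0.6875']
```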

On the other hand, suppose you start out with the number 1. Its one digit sums to 1, so we add 1 to get 2. Its single digit sums to 2, so we halve it to get 1 again. Obviously, this series repeats forever: 1, 2, 1, 2, 1, etc.

The first eight numbers, 1 through 8, all end up at that same repeating sequence. The next number, 9, leads immediately to 10, which starts out as I worked out above, and then goes on indefinitely: Each number has one more digit after the decimal point than the preceding number, so the series never repeats, and it never reaches zero, either.

In my limited trials, every integer I've started out with either ends up with the repeating sequence 1, 2, 1, 2, 1, ..., or else it eventually merges with the same series that you get if you start with 10 (or 9, for that matter). So, two questions for those of you who might like to play with this kind of thing:
  1. Is it true that the series for any integer always either ends with the sequence 1, 2, 1, 2, 1, ..., or else merges with the series that starts with 10?
  2. Consider the series that starts with 10. As we said, it goes on forever, without repeating. What is the average of the numbers in that infinite series?
Neither of these questions can be answered definitively (as far as I can tell) with brute-force computation, although the results might be suggestive. If you do want to try some computations, use an arbitrary-precision package; our friend Bernie has already tried it with ordinary floating-point numbers (eight-byte doubles, I think), and roundoff error quickly rendered everything after about the 15th number invalid.

P.S. Don't ask me how I got started thinking about the series. It's inspired in part by this guy, but I've already forgotten how I decided to think about this variant.

Friday, September 3, 2010

Grasping at Genius

No, this isn't about me trying to become a genius. My aim is a lot more modest: trying to draw a bead on what genius is. Partly this is motivated by my last post about music, but mostly it came out of a discussion I had several years ago with a co-worker over whether athletes could be geniuses at their sport. I thought they could, and he thought not. He conceded that they had some outstanding skill, but felt that it would be demeaning the word "genius" to call it that. I was willing to be a bit more expansive with the term. One does have to be a little careful—probably half the parents out there think their precious little ones are geniuses—but limiting genius to a specified list of fields seemed unnecessarily restrictive to me.

The discussion more or less had to end there because we never really grappled with the larger issue of what genius really is, and without that any debate over whether it means anything in sports is putting the cart before the horse. I want to tackle that now, so I can go back and win the original argument.

First of all—because I'm sick and tired of hearing about it, even now—what is genius not? It is not a high IQ, or intelligence quotient. Lots of folks are intimidated by numbers (especially, but not exclusively, those who do not feel comfortable around them), to the point that any description using them feels more objective and unassailable. Well, they might be that, but what's lost when a number is attached to anything is the process by which that number was derived. If you don't know and understand that process, the number—while not exactly meaningless—is not as reliable as it sounds.

In the case of IQ, the formula is generally straightforward; what's not so clear are the principles on which questions are selected for IQ tests. If you've ever taken one, you know that questions on such tests are fairly narrowly circumscribed: which one of these things doesn't belong, how many blocks are there, numerical or word analogies, etc. The only thing that we can be sure IQ tests measure is how well someone takes IQ tests. Beyond that patently circular assertion, it gets hazy. Does it measure intelligence? How about genius? There are lots of folks who have very high IQs (Marilyn vos Savant—really? that kind of name?—comes to mind) who nonetheless evince no obvious signs of genius. To her credit, vos Savant doesn't make any claims of genius for herself.

If we can't rely on a test to identify genius, we are back to Potter Stewart's famous dictum (in his concurring opinion in Jacobellis v. Ohio regarding hard-core pornography): "I know it when I see it." So where do we see it?

If we start with the so-called hard sciences (physics and chemistry), plus mathematics, I think you'll find little argument that folks like Archimedes, Isaac Newton, Carl Friedrich Gauss, and Albert Einstein were geniuses. Expand that to all of letters and sciences, and you embrace other noted geniuses, such as Charles Darwin, Louis Pasteur, and B.F. Skinner. But maybe these choices are a little dicier. These are great scientists, to be sure, but what about them elevates them above the ordinary rabble?

You might expect that things would get dicier still when we go to the fine arts, but at least in my experience I find less argument about ascribing genius to artists like Leonardo da Vinci (also an engineer), William Shakespeare, and Auguste Rodin. How about musicians? Ludwig van Beethoven, Richard Wagner, and Igor Stravinsky all wear the mantle of genius, and wear it rather comfortably at that. (Yes, I realize these are all dead white dudes. I'll get to that in a moment.)

Let's pause a while and take stock of what we have. Accepting for the sake of discussion that these people are all geniuses, what makes them so? They don't just do what ordinary people in their professions do, only better—although by and large, they do do those things better. They also don't just do what ordinary people can't do—although, again, they do do that, too. What sets them apart is that they do things that ordinary people in their profession could never even conceive of, before the geniuses did. Arthur Schopenhauer put it this way:
"Talent hits a target no one else can hit; genius hits a target no one else can see."
I must emphasize that innovation is a vital part of this. One of Newton's most important contributions to physics was a mathematical demonstration of the law of universal gravitation (the so-called "inverse square law" of gravitation) from Kepler's observations and laws of planetary orbits. That same law is derived countless times over by students in undergraduate physics classes around the world (albeit using analysis, rather than the essentially geometrical means that Newton employed). That doesn't mean that any of them, let alone each of them, is a budding Newton, for likely none of them, plucked at birth and set down in a pre-Newtonian world, could have done what Newton did. Newton's genius lay in blazing the trail that future scientists and students would follow.

In that context, then, let me add a few other names to the list: Charlie Parker, Miles Davis, Herbie Hancock. Jazz is an art form, among others, that combines composition and performance in a single moment, adding for the first time—to my list, anyway—the element of dynamism. (I don't mean to slight other performance geniuses, such as actors and stand-up comedians, but I'm trying to make a point!) Although jazz tunes are composed to a certain extent, a fundamental aspect of jazz performance is improvisation. No two jazz performances are ever exactly the same—not, at any rate, to the extent that classical music performances are alike. The music is constantly written and rewritten by each new performer that approaches it, and each new performer must contend not only with the structure of the music, but with the performers around him or her, in an endeavor that is, in the best of cases, at once collaborative and competitive. And genius denotes the ability, moment to moment, to conceive and perform what others in that situation could not even imagine.

From that point, how far of a step can it be to arrive at sports? I'm going to talk about basketball, because it's the sport with which I'm most familiar, but similar arguments could be made for other sports. (Imagine, for instance, the shots that Tiger Woods can execute that others would never even attempt, or the sudden volley, deft but fierce, of Pete Sampras.) Basketball, like jazz, requires the constant attention of the athlete to the ever-changing state of the game, from the highest level down to the smallest detail, and the ability to respond to that state, all on the spur of the moment. Where's that pick going to be in five seconds? What are the possible tactical options available to me, given the current score and time remaining? Seeing the passing lane halfway down the court is a geometric exercise in negotiating tangled world-lines in the four dimensions of space and time; to actually complete the pass, when everyone else is watching, one must summon the legerdemain of a practiced conjurer.

We think of sports as an essentially physical activity (which is probably why my co-worker could never attach the genius label to an athlete), but in its own way it is as demanding on the intellect as the most abstruse mathematical theorem, and unlike the mathematicians, who can return now and again to their labors when it suits them, the athlete has only the splittiest of split-seconds to act—or else the instant is gone. Who are we to say that genius could not act here, as well as anywhere else?



We may debate whether or not Wilt Chamberlain, Michael Jordan, or Magic Johnson merit the label of genius, whether or not what they do exceeds the conception of their colleagues. But not, in my opinion, whether the question makes sense. Even we non-geniuses can see that, I think.

Friday, July 30, 2010

The Sound of Music

I've always been intrigued by music; there's something almost incomprehensible about its appeal, which, nevertheless, you desperately want to comprehend. At least I do. And the best I can do is sort of nibble 'round the edges.

For one thing, it's a temporal art form. Mostly you experience it over time, however long it takes to hear a performance (or a recording thereof). And if you feel its impact, be it sadness, suspense, gladness, or even a kind of horror, that too is felt over the duration of the music. It never happens that a piece of music saves up all of its emotional impact for a single whap in the face, like a painting or a sculpture might. Yes, I'm aware that those art forms have nuances that can take extended or repeated viewings to appreciate. But for those forms, it is possible for the entire gestalt to strike you at a single moment, followed by a sustained decay of gradual discovery.

To be sure, trained musicians can look at a musical score and apprehend it. But even then—unless they are familiar with the music, and sometimes even then—they hear the music in their head, once again over time.

And the emotions you feel—oh! Music seems to speak to us in a language that is uniquely suited, not for communication, but only for emotional transference. A strain of music can connote hope or despair, struggle or triumph, seemingly no matter your roots or background. You almost think that if only somehow that universality could be harnessed, you could solve the world's problems in a single swoop—but then, that sounds like a travesty to be visited on music. At times I feel as though it should be protected from that kind of directed use.

Music stays in us. We have a tune stuck in our head. As much as we may appreciate the Mona Lisa or the David, how often do we complain that one of those (or their modern counterparts) is stuck in the same way? Maybe music gets a leg up from being a primarily auditory art form. We get so much of our information about the world from our eyes; our ears are generally accompanists, not the featured performer. As a result, though, it works its magic subliminally, providing a soundtrack for us. Seeing a visual art form may put us in an ecstatic trance of exploration, but rarely does it pull something directly out of us, something we recognize. Whereas surely all of us have songs that invariably draw forth some sharp memory. Music makes us aware that we have a story.

None of which brings me much closer to being able to comprehend its appeal in any meaningful way.

Thursday, July 15, 2010

A Tale to Tell

People love to tell stories. It's something that I think is fundamentally built into the human psyche. Having others' attention and entertaining them with a good story is as strong a rush as there is. I've heard that the vast majority of criminals, when arrested, will simply confess because the urge to tell their story to a captive audience is just too strong.

This tendency manifests itself even when there is, quite literally, no story to tell. The clustering illusion denotes the human impulse to see significance in random patterns. Suppose a series of ten coin flips goes as follows: T, H, H, H, T, T, T, T, T, T. A lot of people (but hopefully not too many of my own readers) would see the coin as streaky, though how they would react to that perception might vary: Some might conclude that the coin was "due" for heads and bet that way, while others might conclude that it was on a "tails" streak and bet that way. (For what it's worth, I flipped a quarter ten times and that's exactly the way they came out.)
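If you want to convince yourself how ordinary such "streaks" are, here's a quick simulation in Python (the run length of five is just my arbitrary threshold): it estimates how often ten fair flips contain a run of five or more identical outcomes, and the answer lands in the neighborhood of one sequence in five.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def estimate(trials=100_000, n_flips=10, threshold=5):
    """Estimate how often n_flips fair coin flips contain a run of at
    least `threshold` identical outcomes."""
    hits = sum(
        longest_run([random.choice("HT") for _ in range(n_flips)]) >= threshold
        for _ in range(trials)
    )
    return hits / trials

print(f"P(run of 5+ in 10 flips) ~ {estimate():.2f}")
```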


This has major implications for how we watch and remember sporting events. Maybe the most obvious example of this is the so-called "hot hand" in basketball: the idea that a shooter is "in the zone," and more likely than normal to hit any given shot. Various studies have looked for and failed to find evidence for the hot hand. It's entirely possible that the hot hand is wholly illusory, that it's just the clustering illusion in play. However, as Carl Sagan was wont to say, absence of evidence is not evidence of absence. Except for free throws, in which shot selection and defense play no part, shooting accuracy is highly contextual. Some shots are wide open, while others are tightly contested. They are shot from all over the field. Some are shot on the run, others on a step-back, while still others are spot-up shots. What's more, players are intensely aware that they're hot, and as a result may shoot any hot hand they have in the foot (as it were). All these factors conspire to make the hot hand difficult indeed to discern. (For free throws, there is apparently a moderate hot hand; see this paper (or at least its abstract) by Jeremy Arkes.)

But a more basic example is in how we all remember and talk about the game afterward. We talk about the shooting struggles of such and such a player, and how (if our team won) he overcame that adversity and pushed through to get the win. We look back in our memory and find events that, although they seemed minor at the time, turned out to have momentous impact on the outcome of the game. Consider this account of Game 7 of the 2010 NBA Finals:
With 8:24 left in the third quarter, Celtics point guard Rajon Rondo picked up a loose rebound off Paul Pierce's miss from 19 feet, and pushed it back in to put the Celtics up 49-36. And through 28 minutes of play, Kobe Bryant had had an abysmally poor night on the offensive end. He had shot three of 17 from the field and one of three from the free throw line for seven points and a true shooting percentage of only 19 percent. Largely as a result of his terrible performance, the Lakers found themselves down by 13. To be sure, Bryant had eight rebounds (four of them on the offensive end), but that hardly put a dent in his overall play.

On the play, however, Pierce injured his shoulder and had to sit out for a spell. Bryant thought he saw something that he could exploit as a result, and went to work. On the very next play, he drove into the lane and drew a shooting foul on forward Rasheed Wallace. He only made one of his two free throws, but from then on his performance surged abruptly upward. Starting with that play and for the rest of the game, Bryant gathered seven more rebounds and shot three of seven from the field and 10 of 12 from the free throw line for 16 points and a true shooting percentage of 65 percent, leading his team to an 83-79 win for the title.
Sounds pretty interesting, doesn't it? Makes you wonder what it was that Kobe saw that he could take advantage of. I would wonder, too, except that I just now made it up. Everything else is true, but the sentence about Bryant thinking he saw something he could exploit is conjured out of whole cloth. Actually, Kobe simply tossed his hands in frustration for a second before taking the inbounds pass and dribbling it upcourt. In trying this narrative out on a couple of folks, though, I found that it was compelling because once people see the remarkable contrast between Kobe's play before that moment and his play after it, they assume that something equally remarkable must have happened to precipitate it. We will latch onto any little thing as an explanation, even if it had no more to do in fact with the game than any other little thing. Right place, right time.

As far as I can tell, though, nothing actually happened to Kobe in that game to explain the shift. Aside from a trio of truly horrible shots that he took with the shot clock running down, his shot selection was not noticeably worse while the Lakers were falling behind than it was during their comeback. Sometimes, you know, a cigar really is just a cigar.

Friday, July 2, 2010

Points on the Board

In the wake of the Lakers' mud-slogging Game 7 win in the NBA Finals over the Boston Celtics by the score of 83-79, some fans were incredulous that a team could shoot 32.5 percent (27 of 83) and still win. In fact, many of them felt that the Celtics lost the game, rather than the Lakers winning it. To me, that sounds a little silly, inasmuch as basketball is a head-to-head sport. If the Lakers were shooting that poorly, presumably the Celtics had something to do with that, and just as presumably, the Lakers were doing something else to win the game.

So what was that something? I'll give you a little hint. It begins with "offensive," and it rhymes with "rebounding."

In the unlikely event you haven't caught on, a major key to the Lakers' victory was their offensive rebounding; they won that battle 23-8 over the Celtics. To be sure, gathering 23 offensive rebounds is usually a dubious feat, for it requires the team to miss far in excess of 23 shots. So to a large extent, the dominance of the Lakers on the offensive boards was a reflection of their miserable 32.5 percent shooting clip.

However, the Celtics only gathered 32 defensive rebounds, meaning that of the 55 rebounds available after Lakers misses, the Lakers collected almost 42 percent of them. So not only did the Lakers get a lot of offensive rebounds, they got them at a stunning rate, and that doesn't depend on how many shots they missed. To give you an idea of just how stunning that is, the NBA league average is about 26 percent. The Lakers were more than half again as effective at getting offensive rebounds. By contrast, there were 38 rebounds available on the Celtics' offensive end, and they got only 8 of them, for an offensive rebounding rate of 21 percent, a bit lower than average.

That suggested the following little puzzle: All those offensive rebounds increased the Lakers' overall efficiency at the offensive end, by giving them extra shots at the basket on each possession. Can we express that increased efficiency in terms of shooting percentage—in effect, collapsing the two figures into one?

I believe we can. Suppose for the moment that we don't care about free throws, three-point shots, and all those aspects of scoring that in truth are rather important. We only care about the raw shooting percentage. The Lakers hit 0.325 of their shots. If their offensive rebounding rate was 0 percent, then the fraction of their shooting possessions (as opposed to possessions that end with a turnover, say) that they score on is 0.325.

However, in truth, they rebounded 0.42 of their misses. They miss 1 - 0.325 = 0.675 of the time, so out of all their shooting possessions, they end up with the ball again on 0.42 × 0.675 = 0.28 of the time. Then they'll score 0.28 × 0.325 = 0.09 of the time, and so on. If they miss, they can rebound again, which they'll do 0.28 × 0.675 × 0.42 of the time. And so on.

It's much more concise to put this symbolically, as follows:

Fraction of shooting possessions ending with a score = 0.325 + 0.675 × 0.42 × 0.325 + 0.675 × 0.42 × 0.675 × 0.42 × 0.325 + ...

Each time, there's an extra factor of 0.675 × 0.42, representing the Lakers missing and then picking up the rebound. Since this can happen an arbitrary number of times in a given possession, this equation can have an infinite number of terms (well, limited only by the length of the game). This is called a geometric series, and fortunately, there's a simple formula that allows you to calculate the sum without adding and multiplying an infinite number of terms: if the first term is a and each successive term picks up another factor of r (with r between 0 and 1, here r = 0.675 × 0.42), the whole sum is a ÷ (1 - r). Therefore,

Fraction of shooting possessions ending with a score = 0.325 ÷ (1 - 0.675 × 0.42) = 0.45

That is to say, 45 percent of shooting possessions end in a score for the Lakers. Not to put too fine a point on it, that's still fairly awful. But not as awful as the original shooting percentage suggested.
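As a sanity check, here's a short Python sketch (the function name is mine) that sums the series term by term and compares it against the closed form:

```python
def efficiency(fg_pct, oreb_rate, terms=50):
    """Fraction of shooting possessions ending in a score, given a raw
    field-goal percentage and an offensive rebounding rate.

    Sums the geometric series fg + (1-fg)*oreb*fg + ... directly, and
    also evaluates the closed form fg / (1 - (1-fg)*oreb) for comparison.
    """
    r = (1 - fg_pct) * oreb_rate                    # miss, then get the ball back
    series_sum = sum(fg_pct * r**k for k in range(terms))
    closed_form = fg_pct / (1 - r)
    return series_sum, closed_form

print(efficiency(0.325, 0.42))   # Lakers: both values come out near 0.45
print(efficiency(0.41, 0.21))    # Celtics: both near 0.47
```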

But now, as I said, I'm going to try to combine the offensive rebounding and the shooting percentage into a single composite figure, by asking this question: Suppose the Lakers gathered only 26 percent of their misses as rebounds (the league average), instead of the 42 percent they actually gathered. How much better would their shooting have had to be in order to match that 45 percent per-possession efficiency? In symbolic terms, solve for x:

x ÷ (1 - (1- x) × 0.26) = 0.45

I'm not going to make you do that for homework; I'll just give you the answer: It turns out that x = 0.38. In other words, if the Lakers had crashed the offensive boards like an average team, they would have had to shoot 38 percent in order to score on 45 percent of their shooting possessions. Like I said: Bad, but not historically bad—not bad like 32.5 percent bad.

To put it another way, their tremendous offensive rebounding was worth 5.5 percentage points on their shooting. That's huge: 5.5 percentage points is usually worth about 10 points on the scoreboard by the end of the game.

We can turn this approach to the Celtics, too. They shot 41 percent (29 of 71), and had an offensive rebounding rate of 21 percent. That means that they ended 47 percent of their shooting possessions with scores:

0.41 ÷ (1 - 0.59 × 0.21) = 0.47

However, if they had just rebounded like an average team on their offensive end, they could have shot a bit worse and still matched that per-possession efficiency. Solve for y:

y ÷ (1 - (1- y) × 0.26) = 0.47

Again, I'll save you the algebra and give you a peek in the back of the book: y = 0.395. That is, if the Celtics were an average rebounding team, they would have achieved that efficiency by shooting just 39.5 percent. A bit better than the Lakers, but I think you'll agree that 39.5 to 38 is a lot closer than 41 to 32.5. Almost six times closer, even.
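For anyone who does want the algebra, or just a way to check my back-of-the-book answers, the equation solves in closed form; here's a sketch in Python:

```python
def equivalent_fg_pct(per_possession, league_oreb=0.26):
    """Shooting percentage a league-average offensive rebounding team
    would need in order to score on the same fraction of its shooting
    possessions. Solves x / (1 - (1 - x) * R) = per_possession for x,
    where R is the league-average offensive rebounding rate."""
    return per_possession * (1 - league_oreb) / (1 - per_possession * league_oreb)

lakers  = 0.325 / (1 - 0.675 * 0.42)   # ~0.45 of shooting possessions end in a score
celtics = 0.41  / (1 - 0.59  * 0.21)   # ~0.47
print(round(equivalent_fg_pct(lakers), 3))   # prints about 0.381: the 38 percent above
print(round(equivalent_fg_pct(celtics), 3))  # prints about 0.394: the 39.5 percent above, modulo rounding
```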

Now, the Lakers actually won, which means they must have done other things as well to get the win. For one, they turned the ball over somewhat less often, even with all the extra cracks at their offensive end: just 11 turnovers to Boston's 14. And the Lakers also visited the foul line more often (although some of those free throws were toward the end of the game, when the Celtics were fouling to stop the clock, and the Lakers shot poorly on their extra free throws just the same). Those two factors were enough to put the Lakers over the top. But the dominant factor in overcoming an awful shooting performance was their persistence in rebounding on the offensive end.

Friday, June 4, 2010

Say It, You Know You Want To

I think it's safe to say the time has arrived.


You should be able to get a somewhat larger version by clicking on the above image.

Friday, April 30, 2010

Bending Over Backwards

One of my favorite science bedtime stories (didn't you have those when you were a kid? or now, if you're still one?) involves the French physicist Prosper-René Blondlot (1849-1930), whose principal claim to fame, sadly, was a non-discovery.

In this particular case, Blondlot was working in his laboratory in the wake of a flush of discoveries concerning radioactivity and X-rays. Apparently, he was trying to polarize X-rays (a tricky task owing to their high frequency and short wavelength), and as part of his attempt he placed a spark gap in front of an X-ray beam. After a few experiments with this set-up, it seemed to him that the spark was brighter when the beam was on than when it was off.

He attributed this to a new form of radiation, which he called N-rays after his home town and university of Nancy. He may have been influenced by all the work on radioactivity and X-rays then going on, but at any rate, he set about immediately to investigate attributes of the new radiation. It appeared, he said, to be emanated by many objects, including the human body. It was refracted by prisms made from various metals, although these had to be specially treated in order to prevent them from radiating N-rays themselves.

It was all very interesting, and for some time, there was a burst of scientific activity on N-rays. The problem was, the N-rays themselves were very shy and retiring, and many physicists had trouble reproducing the results obtained by Blondlot and his staff. But Blondlot always maintained that they had either inferior equipment or inferior perception.

You see, there was no objective recording of N-rays. All one had was a subtle brightening of a spark, which Blondlot and his colleagues were already prepared to see. To lend at least some notion of objectivity to the research, Blondlot took photographs of the sparks and other N-ray phenomena, but this merely replaced subjective judgment of a live spark with subjective judgment of a photograph. Means for measuring the light output were not sufficiently reliable or accurate yet to resolve the matter.

What did resolve the matter in the end was a visit to Nancy by the American physicist Robert Wood (1868-1955). Wood had himself tried to detect N-rays and had failed signally. Frustrated at his wasted efforts, and curious as to the differences between Blondlot's staff and equipment and his own, he travelled across the ocean to see for himself.

Wood had by this time in his career established himself as something of a debunker, a sort of turn-of-the-century James Randi. But Blondlot was no charlatan; on the contrary, he was firmly convinced of his own discovery. So he had no misgivings about demonstrating his N-rays before Wood and others. He darkened the laboratory (the better to see the increase in brightness). He set his aluminum prism on a platform to refract the N-rays, made some measurements, rotated the platform a bit, made some more measurements, and so on, all the while casually detecting the N-rays. For his own part, Wood could see nothing of what Blondlot was describing. But he kept quiet, waiting for the experiments to conclude.

When they did, and the lights were turned back on, there was general astonishment, for despite all the careful measurements on the refraction of N-rays, there was no aluminum prism sitting on the platform. Wood had, it turned out, pocketed the prism early on in the experiment. The entire time, Blondlot and his staff had been obtaining gradually changing measurements of an unchanging experimental set-up. That spelled the end, for all intents and purposes, of N-rays.

What happened? Intentional deception can be ruled out rather easily, since Blondlot would have known that careful experimentation would eventually disprove N-rays; it would have been a most temporary fame. Nor was he a shoddy scientist. Before the N-ray affair, he was known for having measured both the speed of light and the speed of electricity through wires, a task that had stymied others, and which established that the two were very close (though not quite the same).

Consensus today is that Blondlot had simply wanted to believe in N-rays, expected and wanted to see the predicted brightening, so much that he really did see it, sincerely. It has been suggested that he may have been motivated by nationalism; X-rays were discovered by the German physicist Wilhelm Roentgen, and Germany had recently taken a sizable chunk of France, so that Nancy was now uncomfortably close to the French-German border. But my own feeling is that it almost doesn't matter. At some point, the desire to see his discovery of N-rays vindicated became its own driving force.

The N-ray affair is often cited in support of what is, in my opinion, a central—perhaps even the central—insight of scientific discovery: The easiest person to fool is yourself. And fooling yourself is a necessary prelude to fooling others; charlatanry would have been easier to expose. Exhibit A in support of this position is the sad fact that although N-rays essentially died a hard death in 1904, Blondlot lived on for another quarter century, continued to be productive in science, and took his belief in the existence of N-rays to his death.


It is because it is so easy to fool oneself that science is, and must be, an essentially social activity. It is often said that in science, experimental data rules the day. That's overstating it a bit. Experimental data is indeed necessary for science to progress, but that data means little without scientific theory to organize it (and vice versa). It's not that the data is more important than the theory, but that it validates it, makes it less likely to fool yourself or anyone else. And there's a strong social pressure, within the scientific community, for one to bend over backwards in an attempt to subject one's theories to as much scrutiny as possible. It's that intense examination, which eliminates many theories but marks the ones that survive with an imprimatur of robustness, that distinguishes science from so many other human activities (ahem, politics?) and has made it one of the most successful endeavors of all.

Wednesday, April 14, 2010

Back, Slash, Back!

Remember this post? Probably not, but now I have some company and/or vindication. Check out this xkcd comic, drawn by the redoubtable Randall Munroe.


Observe: Friends don't let friends say "backslash" in their URLs.

Tuesday, March 23, 2010

A Beginning, a Middle, and an End

One thing I alluded to in my previous post, but never made entirely explicit, is the notion that there are distinct phases to a basketball game (and indeed to many sports competitions), which we might call—by analogy to chess—the opening, the midgame, and the endgame. The difference between the opening and the midgame is pretty ill-defined. My conception is based on the feeling that teams like to start games by trying out the various things they've worked on in practice, but within a general framework; by the time they've gotten some ways into the game (after the first set of substitutions, say), they've got an idea of what's going to work in this game, and they put it into practice in earnest. As I say, it's not a clear-cut distinction and we could argue endlessly (and, I think, pointlessly, though I'd be happy to be proved wrong) about where the exact division is.

But in my opinion, from a stats geek point of view, there is a clear-cut distinction between the midgame and the endgame. And the strategies are, empirically, different in the two parts of the game.

The whole objective of a basketball game (and in most games that involve points) is to outscore your opponent. And as basketball consists primarily of a sequence of alternating possessions, the goal should be to score more in each possession than your opponent does, by and large. That's why statistics such as points per possession are supplanting others like points per game, and rightly so. The former accounts for the fact that a game consists of a rather arbitrary but evenly matched number of possessions for each team, and the latter doesn't.

In fact, I'd argue that that objective—outscoring your opponent on a per-possession basis—is exactly the definition of the midgame. During this phase, which lasts for most of the game, you are trying to be as efficient as you can on the offensive end, while preventing your opponent from doing the same. Makes sense, doesn't it?

The question that you might be asking, though, is why this isn't your objective the entire game, why this is only the goal for the midgame. And the answer to that (you knew I had one coming, didn't you?) is that during practically any game, there comes a point where the actual scoring margin outweighs average efficiency.

Perhaps the simplest example is the decision about whether to shoot a two-point shot (a "deuce") or a three-point shot (a "trey"). Suppose the shooting percentage on the former is x percent, and on the latter is y percent. In the midgame, where all you're concerned about is the average number of points scored on the shot, you prefer the deuce if 2x > 3y, and you prefer the trey otherwise (ignoring offensive rebounding and the like, which we shouldn't do in a more extensive example).

In the endgame, however, it can be quite different. Suppose you're down two, and you have the ball with the shot clock off. You're going to hold for the final shot. The question is, what shot should that be?

If you shoot the deuce and you make it, you'll tie the game and go into overtime, where you'll win about half the time (studies apparently show that any apparent "skill" at winning overtime games is just a matter of small sample size). The winning probability is therefore x/2. On the other hand, if you shoot the trey and make it, you'll win the game outright, with probability y. So in this case, in the endgame, you prefer the deuce only if x > 2y (a strictly stronger condition than in the midgame), and you prefer the trey otherwise. (And as the defensive team, you probably want to shift more of your attention to the three-point line than you would during the midgame.) The point of this little example is that your objective is shifted, from efficiency in the midgame, to winning probability in the endgame.
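The shift in criterion is easy to see in code. Here's a tiny Python sketch (the percentages are made up for illustration) comparing the midgame rule, which maximizes expected points, with the down-two endgame rule, which maximizes the chance of winning:

```python
def midgame_choice(x, y):
    """Maximize expected points: a deuce is worth 2x, a trey 3y
    (x and y are make probabilities)."""
    return "deuce" if 2 * x > 3 * y else "trey"

def endgame_choice(x, y, ot_win_prob=0.5):
    """Down two, last shot: a made deuce forces overtime (won with
    probability ot_win_prob), a made trey wins outright."""
    return "deuce" if x * ot_win_prob > y else "trey"

# Hypothetical percentages: 55 percent on deuces, 33 percent on treys.
x, y = 0.55, 0.33
print(midgame_choice(x, y))   # deuce: 1.10 expected points beats 0.99
print(endgame_choice(x, y))   # trey: 0.33 win probability beats 0.275
```

With those (hypothetical) numbers, the same shooter should take the deuce in the midgame and the trey at the death.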

The next question: When does this shift take place?

There's no one right answer, but I think one place to start is one I mentioned in connection with a rule of thumb I came up with for determining when a game is mostly out of reach. (Not to put too fine a point on it, a fellow by the name of Bill James also came up with the same rule.) To first order, I think, that same epoch in the game is where the switch between midgame and endgame happens (or "ought" to happen). After that point, the team that's trailing tries tactics that are not the most efficient (and therefore wouldn't be used during the midgame) but nevertheless maximize one's chances of winning the game; the team that's ahead plays to prevent their opponents from utilizing their preferred endgame tactics.

There's a bit of a catch, though, in that my rule (OK, Bill James's and my rule), strictly speaking, applies only to evenly matched teams. For the most part, that's not a stretch in the NBA, but you could imagine a game between an NBA team and a college team, even a very good college team. If both teams just try to be as efficient as they can, the NBA team will blow out the college team. In order to win, the college team would have to play their endgame practically from the opening jump, by employing some kind of gimmick, such as a non-stop trapping defense. Lest you think this is some kind of merely theoretical possibility, such a ploy has been tried in some circles, with some success.

And it likely has some statistical validity, for inferior teams can generally win only by introducing more chaos into the game (in the non-technical sense), which increases scoring variance. And there's no question gimmicks usually do that. Most of the time, they still won't work, but they'll give you a puncher's chance.

What's the point, in the end? As a kind of pie-in-the-sky proposal, since the objectives in the various phases are different, analyze them differently. Collect or synthesize different statistics for them. And maybe, as a result, you learn something new about why some teams can finish, and others can't.

Thursday, March 11, 2010

Unifying Statistics

As a sometime scientist, I love to unify things—that is, discover that two things that look completely different are actually intimately related at some abstract level. Without unification, science is largely stamp collecting, to paraphrase Ernest Rutherford. (Actually, he said that all science is either physics or stamp collecting, but I like to think that by "physics," he really meant unification, so it's all the same.)

The state of basketball statistics is one of substantial disunion. The box score is a hodgepodge of parameters with little or nothing tying them together. Points, rebounds, assists, steals, blocks, turnovers, fouls, etc.: These all clearly have some role to play in a team's overall goal—to outscore its opponent—but comparing one to another is impossible from those statistics alone. It would be useful if all of these aspects of performance could be put on equal footing. That would enable a proper assessment of the relative importance of the box score statistics.

Maybe, even, it would enable something else: That "equal footing" might just be able to stand on its own two feet as an independent statistic.

This thought grew out of a couple of recent posts I found on ESPN's TrueHoop blog. One was Henry Abbott's take on Kobe Bryant's crunch-time performance, which by subjective standards has been through the roof this year, and certainly (one would think) well above average in any year, given his long history of hitting game winners. By most objective quantifiers thus far, however, Kobe is human—a good, but by no means great, clutch player. Abbott has a fair point to make against these quantifiers: Kobe's pedestrian shooting percentage at the ends of games might be an indicator not of substandard crunch-time shooting, but of the fact that his skill allows him to fight his way to shots that lesser players would never even be able to take. The same shots that lower his endgame shooting percentage (but which give his team a puncher's chance to win) are ones that never end up in the box score at all for other players.



Abbott's solution to this statistical problem is to find video of any situation where big-time players have the ball in crunch time, whether they hit, miss, or even fail to get a shot off at all, and watch it all. That certainly would give a better visceral idea of how stars perform at the ends of games, but it doesn't quite help in quantifying endgame performance.

The second post was an examination on Hardwood Paroxysm of a new way to view assists. In the box score, all assists are created equal, whether they lead to a highly contested three that just happened to swish through, or to an automatic, wide open dunk. Tom Haberstroh's suggestion is to weight those assists based on the expected scoring from the shot. So an assist to a dunk that scores 60 percent of the time would be worth 1.2, while one to a long deuce that scores 40 percent of the time would be worth 0.8, and one that goes to a wide open trey that scores 35 percent of the time would be worth 1.05. And so on.

My immediate thought on this proposal was that it sort of leaves unsuccessful attempted assists out in the cold. Suppose Chris Paul puts the ball on a dime to David West at the rim ten times throughout the course of a game, and West scores four times on those passes. (We'll assume for the sake of simplicity that he never gets fouled on these.) By the traditional count, CP3 gets 4 assists. By Haberstroh's count, he gets 4 times 1.2, or 4.8 adjusted assists. He gets a boost for having made West's job easier; West just didn't make very many of them. But why should Paul get penalized for West's misses? There was, plausibly, no real difference between the passes that led to scores and the ones that led to misses. Shouldn't they all count the same?

My not-so-immediate thought was that one could unify all this by putting it on a consistent statistical foundation. The foundation? Expected scoring at the beginning of any usage, where a usage is the period of time during which the ball is in a player's possession. Put aside, for the moment, all notions of personal points, assists, rebounds, etc. Define a usage to start when a player gains possession of the ball. He can optionally dribble it for some period of time. That usage ends when he releases the ball, either on a shot (which goes in or it doesn't; if it doesn't, the ball ends up with either the defense or the offense on the rebound), a pass to a teammate, or a turnover. There are some interesting corner cases to deal with, but let's ignore those for the sake of discussion.

The statistic I'm proposing is this: What are the expected points scored on the possession when a player starts his usage, and what are they when he ends it? The difference between those two is a measure of his offensive value for that usage.

Example: Chris Paul dribbles the ball up court, with everybody already set in a halfcourt stance. In this scenario, the Hornets score, let's say, 0.8 points per possession on average. (Lower than their typical points per possession because all the high-value transition points are eliminated.) He dribbles around, locates David West open underneath the basket, and gets the ball to him, whereupon the Hornets' expected scoring at this juncture is 1.5 points. (Not exactly 2.0 because maybe he geeks the dunk, gets fouled, or whatever.) Let's suppose West actually does score the basket. The ledger for this possession is as follows:

Initial expected scoring: 0.8
Increment by Chris Paul: +0.7
Increment by David West: +0.5
Actual score: 2.0

Let's take another, somewhat more complicated case. Jason Williams comes up the floor in semi-transition. The Magic's expected score in this situation is, let's say, 1.1 points per possession. He dribbles around for a few seconds, however, and doesn't locate anything easy, so he pulls the ball back out and passes it to Vince Carter on the left wing with 16 seconds left on the shot clock. Williams hasn't done anything terribly negative with the ball (no turnover), but he hasn't broken anyone down, and in the meantime he's frittered away 8 seconds, and that lowers the expected score for the possession to 0.7 points. Vince shot fakes a few times, then takes it toward the baseline, drawing a few defenders to him, and then passes to Dwight Howard in the lane. Doing so increases the Magic's expected score up to 1.2 points. Howard dribbles left, fakes, goes back to his right, then tosses up a right hand hook that bounces off the rim and is rebounded by the other team. Final score on this possession is, of course, 0.0 points. So the ledger looks like this:

Initial expected scoring: 1.1
Increment by Jason Williams: -0.4
Increment by Vince Carter: +0.5
Increment by Dwight Howard: -1.2
Actual score: 0.0

On average, the initial expected scoring equals the actual score, so the typical player would score an average increment of 0.0. (For instance, suppose that 60 percent of the time, Howard makes that shot and scores an increment of 0.8; then, 40 percent of the time, he misses it and scores an increment of -1.2. Those two balance each other out exactly.) Higher is better, naturally, and lower is worse. This approach dispenses with the coarse categorization of basketball actions into scores, turnovers, assists, rebounds, and non-box-score actions, and assesses every single usage in terms of its contribution to the final score. I think it would be much more representative of everybody's activity. (One thing that is left out: screens.) One could also rate defense this way, to a certain extent, although zone defenses and double teams definitely make things challenging.
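Here's roughly what the bookkeeping might look like in code; a minimal sketch in Python, with the expected values hard-coded from the hypothetical Magic possession above:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    player: str
    expected_before: float   # expected points on the possession when the usage starts
    expected_after: float    # expected points when the player releases the ball

    @property
    def increment(self) -> float:
        return self.expected_after - self.expected_before

# The Magic possession described above, with the made-up expected values.
possession = [
    Usage("Jason Williams", 1.1, 0.7),
    Usage("Vince Carter",   0.7, 1.2),
    Usage("Dwight Howard",  1.2, 0.0),   # missed hook, defensive rebound
]

for u in possession:
    print(f"{u.player:>14}: {u.increment:+.1f}")
# Jason Williams: -0.4, Vince Carter: +0.5, Dwight Howard: -1.2
```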

The drawback is that it's tremendously more work to encode all this information about the game. But diagnostically it might be worth it for teams to pay someone to do it; if you could figure out what a player is doing when his increment is 0.4 lower than average, that'd be very useful information. One benefit to this approach is that it only cares about what happens when the ball changes hands. Whatever a player does throughout his usage can be discarded as far as this statistic is concerned, so that would reduce the burden of encoding information.

The application to crunch-time shooting? I think it's pretty obvious. You've got 3.4 seconds left, down two, inbounding the ball 40 feet from the basket. In this case, you're in the endgame, not the midgame, so your objective is not to maximize scoring, but to maximize your chance of winning. (A two-pointer is better than a three-pointer in midgame if it succeeds more than one and a half times as often, but it's only better in a two-point endgame if it succeeds about twice as often.) When you start this possession, your probability of winning is, let's say, 0.15. You get the ball, and you can the trey. Your actual winning probability is 1.0 (you won the game). Your win increment is therefore +0.85. If you had missed it, it would have been -0.15. So, when the situation looks dire, success is rewarded much more than failure is penalized.

Now, on the other hand, suppose you went for the deuce. If you miss it, the winning probability still goes to 0.0 and the increment is -0.15, but if you make it, the increment is only +0.35 (assuming you have a 50 percent chance of winning in OT). You've improved matters significantly, but you still haven't won the game. By this analysis, the cold-blooded assassin quality that Kobe Bryant supposedly personifies is not just bravado, but potentially sound tactical thinking, and this aspect would be captured by compiling expected win increments.


You could even go so far as to assess the impact on winning the title (much as Hollinger's playoff calculator does). By that metric, LeBron's fadeaway three against Hedo Turkoglu in Game 2 of last season's ECF was an absolute monster. Assuming that the Cavaliers would have been even money against the Lakers in the NBA Finals, that shot (which took the Cavaliers from at best a 0.1 win to a 1.0 win) was worth in the neighborhood of 0.1 to 0.2 of a title, an incredible value for a pre-Finals make. The fact that the Cavaliers did not go on to even make the Finals is immaterial in this valuation, as it couldn't have been known at the time. On the other side of the balance sheet would be Frank Selvy's miss at the end of regulation in Game 7 of the 1962 Finals, which ended up being worth an increment of about -0.2 or -0.3 of a title, as instead of winning the title outright on the shot, the Lakers had to go on to play OT, where they eventually lost.

Friday, January 22, 2010

The Suspension of Belief

You may think I've mistitled that, but no, not really. Suppose I put to you two ways to say a common sentiment:
  1. All that glitters is not gold.
  2. Not all that glitters is gold.
Now, put aside all notions of poetic rhythm or provenance. (Or that the original version in Shakespeare's Merchant of Venice had "glisters" instead of "glitters." The former comes from Dutch, while the latter comes from Norse. In our day, the Norse version has entirely displaced the Dutch version, but in Shakespeare's day, they both had currency. Or at least so Shakespeare would have us believe.) Does either of these seem "righter" to you than the other?

I've put little quizlets like this to various people and they seem to fall mostly into two groups. One group of people can't see anything at all to recommend one over the other. Moreover, when the particular distinguishing feature is pointed out, they either don't see it or can't see why anyone would care. (You might, if you fall into this group, see if you can figure out before reading on what this distinguishing feature is, if you don't already know.)

The second group, of course, sees a logical distinction between the two and what's more, they're irritated that there's a mismatch between intent and wording. What's still more, they're irritated that the first group doesn't acknowledge this. To this group, the above two sentences are logically equivalent to the following:
  1. All glittery things are non-gold.
  2. Some glittery things are non-gold.
A quick glance at the script for Merchant of Venice indicates that Shakespeare chose the first wording ("All that glitters...") but his meaning is clearly the second. Does this bother you?

OK, that's not really all that important, as we all know what Shakespeare meant. Here's another one:
  1. I don't believe we have a coherent plan for the Middle East.
  2. I believe we don't have a coherent plan for the Middle East.
Obviously, when it's presented this baldly, it's clear what the difference between these two is (especially, I hope, in light of the previous example), but I can't count the number of times that people have interpreted #1 (or minor variations thereof) as #2. And honestly, I don't think it's because they can't think logically. I think it's because they're impatient with disbelief.

Nowhere is this more evident than in politics. It's practically a cliché to demand politicians give their position on some issue or another, to the point that it's considered a weakness if they can't immediately spit one out. While I'm all for politicians being prepared for new situations (and as a by-product, for questions from the press), is having a response for all such questions really preferable to being able to suspend belief when the situation warrants? We've seen the dangers that feigned certainty can bring. And it's not as though suspension of belief necessarily means suspension of action. We can act rationally on uncertainty just as well as we can act on strong belief.

As prominent as it is in politics, though, this rejection of uncertainty permeates our whole world, including science, where it has no business. Political truths may last for a generation or two (think about how long the Democratic party has been on the side of civil rights), but scientific truths, once verified, last essentially for eternity, subject only to occasional refinement. Given that, what's the rush to judgment? Why not suspend belief until we know for sure? Impatience with uncertainty is fine as long as it motivates us to reduce it, but not if it forces belief before we're ready.

Monday, January 11, 2010

Cutting Your Losses

I was standing at the vending machine at work today, buying some chips with lots of small coins (nickels and dimes). And as I often do, I carefully inserted the nickels first, then the dimes; if I had used any quarters, they'd have come last.

You may—assuming you've read this far—have wondered why this is. To be fair, having done this for a long time, I wondered myself for a moment. And then I remembered.

See, when I first started doing this, I was in college. I was living in the dorms. The dorms had vending machines, which were balky, much like anything in the dorms. They would, occasionally, find something objectionable about your change. They were even particular about the way you inserted your change; sometimes, it would take six or seven tries for you to get it to accept a specific dime. I would bring extra change just in case, if I had any, but sometimes even that would run out. So there I would be standing, with 45 cents that the machine was refusing to take, and more money back in the dorm room that I could try out on the Keeper of the Fizzies. But in order to get that money, I'd actually have to go back to the dorm room. Away from the vending machine.

I'd run downstairs, get the change, run back upstairs, and hope that in the meantime, no dormitory Grinch had decided to get a 30-cent discount on his Coke.

Because, as it happens, sometimes they would. I'd get back and there would be no credit at all in the vending machine. You might suppose that Whoever It Was would at least leave the credit they had benefited from in change on the side, but noooooo.

That's when this business with inserting change in ascending order of value started. It was a way of cutting my losses. You might think that it would be simpler for me to just push the coin return and withdraw my change before heading downstairs, but in the first place, the coin return lever was balky, like everything else, and in the second place, it had often taken me lots of effort to get those coins in and I was reluctant to relinquish those hard-won gains.

Eventually, I managed to obtain a small dorm fridge and thereafter bought my drinks at the market. But this was before all that. Just the same, I continued my coin-sorting practice even to the present day, where (I daresay) my co-workers are far less likely to stiff me out of a handful of change than my dormmates were.

You know me, always looking for something mathy about the situation, so here's the question: Suppose that I only used n nickels and d dimes (no quarters), that I foolishly brought exact change, and that the vending machine refuses to take exactly one coin, randomly and uniformly selected from all the coins. On average, how much less money did I place at risk going nickels first than I did going dimes first?

The answer: The average reduction in risk was equal to the value of the nickels multiplied by the fraction of coins that were dimes.
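If you'd rather trust a computer than my algebra, here's a small Python check of that claim. It computes the expected amount already inserted before the refused coin, for both orderings, and compares the difference against the formula:

```python
from fractions import Fraction

def expected_risk(coins):
    """Expected value (in cents) already inserted before a single refused
    coin, with the refused coin chosen uniformly from all the coins."""
    total = Fraction(0)
    for k in range(len(coins)):            # coin k is the one refused
        total += sum(coins[:k])
    return total / len(coins)

def reduction(n, d):
    """Average risk going dimes-first minus going nickels-first."""
    nickels_first = [5] * n + [10] * d
    dimes_first   = [10] * d + [5] * n
    return expected_risk(dimes_first) - expected_risk(nickels_first)

for n, d in [(3, 2), (4, 7), (1, 9)]:
    claim = Fraction(5 * n) * Fraction(d, n + d)   # value of nickels x fraction of dimes
    print(n, d, reduction(n, d), claim, reduction(n, d) == claim)
```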

I had thought to try to tie this story to something deeper, but I just can't bring myself to do it.