Saturday, June 27, 2009

Do Not Pass Text Design, Do Not Collect $200

A long time ago (I won't say just how long, but it'll soon become fairly obvious), I worked my first summer job at a computer-controlled font engraving place in Mountain View called Xybergraphics. Lots of stories from that summer, which I'll eventually get to when I want to talk about what happens when a 95-pounder drinks 42 ounces of caffeine-laden soda pop, or the beginnings of my fascination with the Police, or what a cubic spline is.

At any rate, my job, which paid me the princely sum of $2.85 an hour (workers under 16 could be paid somewhat under minimum wage), required me to encode fonts for the aforementioned computer-controlled engraver. Typically, the engraved letters would be in the neighborhood of an inch or two in height, but for better precision, the letterforms I worked from, designed by my immediate supervisor, were about 14 inches tall and drafted in pencil on vellum. My job was to take a mouse (this was before the Mac, mind you) and trace along the letterforms, clicking at appropriately spaced points, until the letter was entirely traced. A simple letter, like a capital I, might require 50 points; a more involved letter, such as a lower-case m, might require as many as 150. This was a tedious and time-consuming job, as you can well imagine.

But I'm nothing if not efficient at boring tasks, especially if I've got my tunes in the background, and in the meantime, I learned quite a lot about fonts—what they should look like, which features various letters share, what design rules not to break, and so forth. All of this is both fascinating (to me) and almost entirely useless, which means that it has lodged tight in my memory banks and won't budge.

As a result, I'm exceptionally sensitive to bad typography. For example, I was walking one day in the Denver airport, waiting for my next flight, when a store sign caught my eye. Bad Typography Alert! The capital A in the sign was reversed; its broad stroke went down to the left, rather than to the right, as it should in traditional serifed fonts. And this on the sign for a stationery store! I think you will properly apprehend the depth of my mania when I tell you that I actually reported it to the clerk. She seemed quite receptive to my complaint, although she probably forgot or dismissed it as soon as I turned my head.

There's a similar problem with one of the signs where I work—the name of the building has a capital V in it, with its broad stroke, again, going down to the left. Alas, this is welded on and probably unmulliganable.

But my encounter with the very bottom, the absolute worst, came when I began frequenting a supermarket that opened near our house. There was little wrong with the typography at the supermarket, but across the street was this abomination:

This signage is simply staggering in its wrongness. It's hard for me to convey just how staggering, but the fact that you're reading this post is some indication. Click on the image to see an enlarged version and look at this tangle of thorns.

Practically every letter that could have been misplaced was, and those that weren't seem to have been correctly placed to highlight how wrong their identical partners were. To wit: The E in GERMAN is upside-down. The M and A in the same word are backwards. Unbelievably, the N is correctly placed. A mistake must have been made.

In CAR, the C is upside-down. The A is backwards. After the bad E in GERMAN, the E's in SERVICE are right, but the S and the C (again!) are upside-down, and the V is backwards.

The M in BMW is backwards. The W is very strange; it seems to have been made with a pair of leftover V's, both with one stroke broken in half. If so, they should have broken the other stroke on the right half, but I'll give them credit for showing some resourcefulness.

The horror continues in Volkswagen. This V is correct (what happened to the V in SERVICE?), but the l is backwards, the s is upside-down, and the w and even the g (!) are backwards again. How do you screw up the g? The lower-case s is upside-down again in Porsche, and the lower-case c joins its capital brethren in being inverted also.

The Audi is correct, but is set in a narrower font. Must have been added on later.

Considering that many of the letters couldn't have been placed incorrectly (either because they're totally symmetrical or totally unsymmetrical), the percentage of letters placed incorrectly runs at about 15/22 = 68 percent, by my reckoning. A dolphin without opposable thumbs flinging letters up randomly with its tail could have done better.

OK, I realize that I'm probably clinically unhinged on this point, but can we agree that someone screwed up royally here? I mean, please.

Tuesday, June 23, 2009

Game Theory and the Wing-Block Dynamic

In 2004, when the Lakers played the Pistons in the NBA Finals, a lot was made of Kobe Bryant continually jacking up outside jumper after outside jumper—none too efficiently, most of the time—while monster center Shaquille O'Neal was taking fewer shots, but making them much more efficiently. On the surface, it sure seemed as though Shaq should have been getting more shots, and of course Shaq, never a wallflower at the quietest of times, was not loath to point this out.

In 2009, when the Lakers played the Magic in the NBA Finals, a lot was made of Kobe Bryant continually taking jumper after jumper—somewhat more efficiently than before—while his "newly tough" post player Pau Gasol was taking far fewer shots, but making them more efficiently. On the surface, it sure seemed as though Pau should have been getting more shots, and surprisingly Pau, generally a quiet fellow, pointed this out with a certain degree of mordacity.

Obviously, in retrospect, the two series turned out rather differently for the Lakers, which is why the former case was judged by many as the reason the Lakers lost the series, and the latter is just a footnote. Bryant's reputation as a ballhog, already in force before the 2004 Finals, was substantially bolstered by that series, and has only just faded within the last year or two. But is that fair? Is that the only possible interpretation for Kobe's shot-taking? Or could ballhoggery conceivably help a team?

Let me be clear here. There's no question in my mind that Kobe could stand to take fewer shots than he does (unless he's just red hot). The question isn't whether he should take as many shots as he does, but whether he should take shots even when he's shooting them at a lower percentage than the post players. And this really goes for any wing player who dominates the ball (e.g., LeBron, Wade, etc.). I just mention Kobe because I watch all the Lakers games.

I'm going to look at this from a game theory standpoint. Put into elementary game theory terms, Kobe and the Lakers have a set of tactical options, and the defenders have a set of tactical options. If each side optimizes its strategy with respect to the other side, then in the end, the game will reach what's called a Nash equilibrium: Neither side can improve its result by changing its strategy unless its opponent changes it too. (The equilibrium is not named after award-winning point guard Steve Nash of the Phoenix Suns, but John Nash, award-winning mathematician and subject of the award-winning book/movie, A Beautiful Mind.)

Suppose we simplify matters by assuming that the Lakers have just two options: Kobe shoots, or Kobe passes to the post, which then shoots. And the opponents likewise have just two options: double Kobe, or play man-to-man. And naturally, we assume that Kobe shoots a better percentage over man defense than over a double team, and the post shoots better when Kobe draws a double team than when the defense plays man-to-man.

The conditions of the game do not require either side to do the same thing each time. Strategies can be mixed. So Kobe can shoot 60 percent of the time and pass 40 percent of the time. The defense can double 70 percent of the time and play man 30 percent of the time. The defense can even have partial strategies, like a weak double versus a strong double. Under these simple assumptions, it's fairly straightforward to find the Nash equilibrium, where neither side can unilaterally improve its result. What's interesting about this Nash equilibrium is that both Kobe and the post should shoot exactly the same percentage.
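For the curious, the equilibrium arithmetic is easy to sketch. The shooting percentages below are invented for illustration; the only assumptions that matter are the ones stated above (Kobe shoots better against man coverage, the post shoots better when Kobe draws the double):

```python
# Hypothetical shooting percentages; all that matters is that Kobe shoots
# better against man coverage and the post shoots better when Kobe is doubled.
KOBE_VS_MAN = 0.50
KOBE_VS_DOUBLE = 0.40
POST_VS_MAN = 0.45
POST_VS_DOUBLE = 0.55

def defense_mix():
    """Double-team probability p that leaves the offense indifferent:
    p*KOBE_VS_DOUBLE + (1-p)*KOBE_VS_MAN == p*POST_VS_DOUBLE + (1-p)*POST_VS_MAN."""
    return (KOBE_VS_MAN - POST_VS_MAN) / (
        (KOBE_VS_MAN - KOBE_VS_DOUBLE) + (POST_VS_DOUBLE - POST_VS_MAN))

p = defense_mix()
kobe_pct = p * KOBE_VS_DOUBLE + (1 - p) * KOBE_VS_MAN
post_pct = p * POST_VS_DOUBLE + (1 - p) * POST_VS_MAN
print(p, kobe_pct, post_pct)  # doubles 25% of the time; both options come out to 0.475
```

With these numbers, the defense doubles a quarter of the time at equilibrium, and both the Kobe shot and the post shot come out to the same 47.5 percent, which is the point of the claim above.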

Plainly, that doesn't happen very often. More often than not, Kobe shoots a lower percentage than the post (even when factors such as free throws and the three-point line are taken into account); it's comparatively rare for it to happen the other way around. Ostensibly, with Kobe shooting the ball so much, he's not adequately punishing the defense for doubling him. He should instead pass the ball into the post more, gradually causing the defense to double less and play more man defense, up to the point where his shooting percentage rises to match that of the post.

[EDIT: The rest of this post is largely different from what it used to be, because what follows totally swamps in significance what used to be here.]

Having said all that, I'm going to go back and suggest that that strategy actually isn't optimal. How can it be sub-optimal, if it's at the Nash equilibrium? Because the game doesn't stop when the ball hits the rim, so the game theory shouldn't, either.

When players shoot the ball against straight-up defense, the defense has the advantage on rebounding any misses, because they're usually between their man and the basket. However, when a perimeter player shoots against a double team, the rest of the players have a man advantage. In our scenario, this advantage plays out in the post, which means that (a) the chances are much improved for an offensive rebound, and (b) if an offensive rebound is gained, it usually leads to a high-percentage shot.

What effect does that have? Suppose that the man advantage on rebounding leads to an increase of 15 percent in the offensive rebound rate; for example, if the offense used to get 20 percent of the rebounds, they now get 35 percent. And suppose also that this leads to a successful shot 60 percent of the time. If the wing player misses, let's say, 60 percent of his shots against a double team, and he faces a double team 50 percent of the time, the offensive rebounds effectively amount to an increase in shooting percentage of 0.5 × 0.6 × 0.6 × 0.15, or 2.7 percent. That doesn't sound like much, perhaps, but it's about a standard deviation's worth, the difference between a top-10 guard and a middle-of-the-road guard. And it's how much worse the wing should shoot than the post at the true optimal strategy.
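Spelled out as code, the back-of-the-envelope arithmetic above (all four inputs are the assumed figures from the paragraph, not measured values):

```python
# The assumed figures from the paragraph above.
double_rate = 0.5    # wing faces a double team half the time
miss_rate = 0.6      # wing misses 60% of his shots against the double
putback_rate = 0.6   # an offensive rebound becomes a score 60% of the time
oreb_bonus = 0.15    # man-advantage bump in offensive rebound rate

effective_bump = double_rate * miss_rate * putback_rate * oreb_bonus
print(effective_bump)  # about 0.027, i.e. 2.7 percentage points
```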

Again, I'm not suggesting that this is how Kobe thinks (although I'm pretty sure he does think that his misses can lead to easy baskets for his team), or that Kobe shoots exactly as much as he ought to. But it might explain why, even if he's shooting a lower (true) percentage than his post players are, he shouldn't necessarily shoot it less.

Thursday, June 18, 2009

Inconsistent Bracketology and Non-Transitivity

Does anyone who routinely does NCAA playoff brackets know the answer to this one? Can you fill in a bracket inconsistently, so that you have (let's say) team A beating team B and team C beating team D in the first round, and yet you have either team B or team D coming out of the second round?

Because it's not hard for the probabilities to come out that way. One simple way is for A and C to be mild favorites over B and D, respectively, but for B and D to be prohibitive favorites over C and A respectively. (Matchups between A and C, or between B and D, can be pick-ems.) So you fill out your bracket to have A and C come out of the first round, but either B or D to come out of the second round. This requires a certain amount of non-transitivity in the teams: For instance, A edges B, which trounces C, which edges D, which trounces A again. But that's hardly unknown in the basketball world, and is usually trotted out as the inevitable "matchup issue" between two teams.

Somewhat more surprising is that it's possible for the same inconsistency to happen without any non-transitivity. Suppose A is a huge favorite over B in the first round, while C is a mild favorite over D in the first round. So you have A and C come out of the first round. But suppose C is also a mild favorite over A, but D is a huge favorite over A. There's no non-transitivity—you can place the teams in the total ordering C > D > A > B—but D is nevertheless the favorite to come out of the second round, despite not being the favorite to come out of the first.
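Here's a quick check of this with concrete, made-up win probabilities chosen to fit the setup above:

```python
# Made-up head-to-head win probabilities consistent with the setup above:
# A is a huge favorite over B, C a mild favorite over D and over A,
# D a huge favorite over A, and B a heavy underdog against everyone,
# all compatible with the total ordering C > D > A > B.
P = {
    ("A", "B"): 0.95, ("C", "D"): 0.55,
    ("C", "A"): 0.55, ("D", "A"): 0.95,
    ("C", "B"): 0.95, ("D", "B"): 0.95,
}

def win_prob(x, y):
    """P(x beats y) in a single game."""
    return P[(x, y)] if (x, y) in P else 1.0 - P[(y, x)]

def second_round(bracket):
    """P(each team emerges from round two) for a [[A,B],[C,D]] bracket."""
    (a, b), (c, d) = bracket
    probs = {}
    for team, mate, opps in ((a, b, (c, d)), (b, a, (c, d)),
                             (c, d, (a, b)), (d, c, (a, b))):
        o1, o2 = opps
        reach = win_prob(team, mate)            # survive round one
        beat_survivor = (win_prob(o1, o2) * win_prob(team, o1)
                         + win_prob(o2, o1) * win_prob(team, o2))
        probs[team] = reach * beat_survivor
    return probs

probs = second_round([["A", "B"], ["C", "D"]])
print(probs)  # D leads at about 0.43, even though C was favored in round one
```

With these numbers, A and C are the first-round favorites, yet D is the most likely team to emerge from the second round (roughly 43 percent, versus about 31 for C): the inconsistent bracket, with no non-transitivity anywhere.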

Even though there's no non-transitivity in the second example, it's vaguely unsatisfying because it doesn't match our intuition. We'd like to think that since C beats D, C should be a bigger favorite over A than D is. But the inconsistent bracket result only comes about here because that intuition is violated. So, the semi-open question ("semi-open" because I suspect it won't be that difficult to resolve): Is it possible for a set of tournament contestants to fall under a total ordering in the intuitive way suggested above, and still yield an inconsistent playoff bracket in a binary, single-elimination tournament? It need not be limited to a four-team bracket, but it does have to be 2^n teams for some n.

Sunday, June 14, 2009

Kobe, Once More Unto the Light

The bare facts: The Los Angeles Lakers dominated the Orlando Magic over the last three quarters to take Game 5 and clinch the NBA title, winning going away, 99-86. After a 16-0 run (capped by a nifty Lamar Odom reverse lay-up) took the Lakers from a 40-36 deficit to a 52-40 lead, the Magic never seriously threatened to take the game back, getting no closer than five points the rest of the way and spending most of the second half down by double digits.

Bryant was, I felt, the clear-cut MVP of this series, and of the playoffs, and even when his game was somewhat off in the middle three games of the series, he cast his enormous shadow over how the games were contested. Whether or not you thought he was over-dominating the ball, whenever he was on the floor, he set the tone for the other nine players.

In some sense, for most of his career, he has cast that same shadow on the NBA. For better or for worse (and there has been no shortage of those who see it for the worse), he has been the top talking point of the league. From his unbelievable moves on the court to his embarrassing personal problems in Colorado, his life trajectory thus far has been an eventful one. His triumphs and travails have galvanized public opinion like no other player, possibly in the history of the league. To Kobe haters, Kobe fans are as thin-skinned as their hero, reacting to any perceived slight as though it were heresy; to Kobe fans, Kobe haters seize any opportunity, twist any circumstance, and trample any logic to put the target of their envy in a negative light. Each group sees the other as the yin to its yang, a state of affairs that would be ludicrous with respect to any other player. But apparently it's de rigueur in Kobe's World.

Through it all, Bryant was insouciant, an outwardly joyous 18-year-old rookie; then a driven talent, rising with center Shaquille O'Neal to dominate a trembling league; and then a fallen hero, commonly considered to have forced O'Neal and then coach Phil Jackson off the team. The haters had a field day watching Kobe try, and fail, to lead a ragtag crew to even the lower echelons of the playoffs, pride going before the fall. Jackson returned the following season, but the next two years were barely an improvement, with the Lakers falling to the Phoenix Suns each year in the first round. His undeniable skills on the court were only further testament, it seemed, to his failure to lead his team off it. Bryant himself appeared to adopt the demeanor of a flawed, secretive superhero with a dark past and a darker future, Batman to O'Neal's Superman. The 2007 off-season was the darkest yet, with Bryant railing to all within hearing range about the front office's inability to provide him with a sufficient supporting cast.

The next season brought a pleasant surprise, however, in the unexpected form of a contending team. And when rising young center Andrew Bynum went down with what turned out to be a season-ending knee injury, the beleaguered Lakers' front office obtained multi-talented Pau Gasol from the Memphis Grizzlies for a song, and the Lakers barely missed a beat. Bryant seemed readier than ever to share the ball with his teammates, making the team less predictable, more formidable. There was a regular season MVP for Bryant, his first, matching O'Neal's award from 2000. Even with Bynum out, the Lakers manhandled the rest of the Western Conference on the way to the NBA Finals. The Batsuit was ready to crack. But the Celtics sunk the Lakers in six games, trouncing L.A. by 39 in the clincher.

Back to the cave. Not alone, not to sulk, but this time with all his teammates, forging something of a defensive identity. Bryant and the Lakers were determined that this time would not be the monstrous disappointment of the previous season. There would be no MVP award this year. That would go to LeBron James, the new King. Bryant had no time for regular season plaudits anyway. He wasn't looking for redemption, either; he never felt he had anything he had to redeem himself for. What he was looking for, I like to imagine, was a lighter Kobe Bryant...

Saturday, June 13, 2009

The Power of Flexibility

With 10.8 seconds left in regulation in Game 4 of the NBA Finals between the Lakers and the Magic, the Lakers had the ball out of a timeout after Dwight Howard had just missed two free throws to leave the Lakers down three, 87-84. Lakers coach Phil Jackson decided to take the ball near their own baseline (where the timeout had been called), rather than advance it to halfcourt. Trevor Ariza inbounded the ball to Kobe Bryant, who was immediately double-teamed. Bryant advanced the ball near halfcourt back to Ariza, who then cross-courted the ball to Derek Fisher. Fisher dribbled the ball up toward the three-point line on the right wing. Since Jameer Nelson was playing so far off Fisher, Fisher decided to hoist up the trey at that point and sank it to tie the game with 4.6 seconds remaining. The Magic failed to score in their final possession of regulation and the Lakers would go on to win the game in overtime to take a 3-1 lead in the series.

Tim Legler of ESPN later suggested that Jackson's decision made it easier on the Magic, because of the extra time that bringing the ball the length of the court would consume. I think this takes a narrow and unnecessarily time-centric view of the play.

In the first place, 10.8 seconds is a lot of time for a "last-second" play. It's nearly half of a full shot clock. The Phoenix Suns could probably run three whole plays in that amount of time. It's unlikely the Magic could delay the Lakers long enough to avoid giving them a decent look. Indeed, Fisher sank the three-pointer with 4.6 seconds left, but he actually released it with 6.2 seconds; the whole play took less than five seconds to execute.

Secondly, Legler underestimates the pressure that having to play full-court defense places on the Magic. If the Lakers had inbounded the ball at halfcourt, they would have had to set their offensive positions for the most part, showing their hand on the playcall and allowing the Magic to set their defense. Whether or not the decision to bring the ball up surprised the Magic, it concealed the Lakers' play from them and required them to cover a multitude of options.

As it happens, the Magic decided to double Kobe, and the Lakers took advantage by quickly advancing the ball out of the double-team to give the Lakers a 4-3 man advantage on the rest of the court. The Lakers had used this ploy, a kind of basketball aikido, several times in the second half of Game 5 of the Western Conference Finals against the Denver Nuggets. In that game, the Nuggets decided to double team Kobe aggressively, pushing him all the way toward the halfcourt line. Kobe obliged them, drawing his two defenders so far away from the basket that by the time Kobe passed out of the double team, they were effectively out of the play, giving the Lakers a man advantage for long enough to get an easy shot. In hindsight, this strategic decision by the Nuggets was a main reason they lost the game and the series.

But even had the Magic chosen not to double Kobe, the Lakers still had a multitude of options to run, starting from the backcourt, and the Magic would have had to anticipate them all. Most options put the Lakers in a kind of semi-transition game, placing the Magic defense in jeopardy. Normally, teams run very unimaginative sets at the end of a period, and the Lakers are no different in this regard, typically putting the ball in Kobe's hands and letting him go 1-on-N. The fantastic play run by the Magic at the end of Game 2, freeing up Courtney Lee for an alley-oop attempt, was very much the exception rather than the rule. And in this game, Fisher still had to make the jumper. But Jackson's decision to bring the ball up the length of the court broke the usual mold and gave the Lakers their best chance at tying the game.

Wednesday, June 10, 2009

See, Why Would You Do This?

Look everybody: It's pet peeve time! (I'm writing this because it's against my beliefs to blog about basketball after a Lakers loss, heh. Or, rather, I'm copying this—it's originally from another site of mine. But it's recent.)

I like to bicycle. Not like some friends, who like to bicycle like they like to breathe. (Credit goes to Sandra Boynton for that delectable quote, adapted for the nonce.) But I still like to pedal here and there. So I take it personally when a cyclist every now and then decides to flout one of the basic rules of bicycle behavior—even more so when they do it in the name of safety.

They prefer, they say, to ride on the left side of the road.

Note that I'm not talking about people who ride briefly on the left side of the road because their home is on that side, and they're about to make a left turn at the corner, but about people who speak of riding on the left side as though it's some sort of closely guarded Success Secret® of the cycling guild. The justification, apparently, is that it allows them to see oncoming traffic, the better to avoid it. This is wrongheaded on so many levels that it's mind-boggling. Don't be one of those mind-bogglers. Ride on the right side of the road. (Unless you ride in the U.K., or some other of those countries where they drive on the left side of the road.) The reasons are legion:
  • First of all, the most important: Collisions will happen from time to time, whether you can see them coming or not. But riding toward cars means that your velocity is added to theirs, not subtracted, not to mention the prone-to-flipping head-on collision rather than the more stable rear-ender. Drivers will suffer a somewhat higher repair cost. You will suffer a substantially higher chance of dying, and if not that, of serious injury.
  • The higher relative velocity has another penalty: Cars are much larger than you, are moving much faster than you, and take much longer (both in distance and in time) to stop than you. As a result, the car has by far the greater part of control over whether there's an accident. Your ability to see oncoming traffic is as nothing, in terms of importance, when compared to the drivers' ability to see you. And by riding toward cars, you give drivers correspondingly less time to see you and avoid an accident.
  • What's more, most cyclists ride on the right side of the road. Drivers know this, whether they know that they know it, and scan the road accordingly, especially when they're making a right turn onto a major road: At such times, they look back down the road to their left for traffic to avoid. Not traffic to the right, which is where you will be coming if you ride down the left (i.e., wrong) side of the road. A similar comment applies to cars parked on the side of the road pulling out into traffic.
  • The one time when a car is relatively maneuverable on the road is when it is stopped to make an unprotected left turn (i.e., yielding to oncoming traffic). Drivers are then looking into oncoming traffic, where they will see all potential hazards, except—you guessed it—cyclists riding down the left side of the road.
  • Pedestrians are advised to walk down the left side of the road, because they can stop on a dime, unlike bicycles. If you ride down the left side of the road, you will be coming up behind such pedestrians, who may choose that moment to turn on a dime and smack right into you (potentially throwing both of you into oncoming traffic).
I hear this less and less, fortunately, but it still crops up from time to time, from people who should know better. (That's approximately everybody, in my opinion.) I think it stems from the natural inclination to think that if you only have all the information you want, you'll be in complete control over your destiny, but whatever the reason, it's just a vastly inferior decision.

EDIT: Since I first wrote this, I've been informed that there is a specific term for this: salmoning. So don't salmon. You know what happens to salmon after they make their way upstream, right? It's not exactly an unalloyed happy ending.

Monday, June 8, 2009

Points are Points, Sort Of

In the wake of last night's Finals Game 2 between the Lakers and the Magic, which the Lakers won in overtime, 101-96, a lot of attention was focused on various plays that the Lakers made down the stretch and the Magic didn't. Now, obviously, in a game that close, there were plays—even down the stretch, at least in regulation—that the Magic made and the Lakers didn't, and if the game had gone the other way, we'd be talking about those plays. But this just by the way.

Some folks pointed out that although those plays late in the game are magnified in our mind, they aren't worth more on the scoreboard than plays earlier in the game, even in the first quarter. A clutch shot made with the game clock running down is not given more points than an identical shot made in the opening minutes. So undue attention, it is claimed, is placed on, say, Courtney Lee's missed lay-up with 0.6 seconds left in regulation, a shot that would have given Orlando the win and a split in the first two games of the series. (I was watching the game, by the way, and I somehow totally missed the play developing: the jab step, the perfect screen from Rashard Lewis, everything. Thank goodness!) If the Magic simply make one of the two-pointers they missed earlier in the game, it doesn't come down to Lee's make or miss at the end of the game.

It all sounds reasonable, doesn't it? It did to me, too, at first.

Except how do we square this line of thought with the end of Game 1, which the Lakers won going away? At the very end of the game, the Lakers are leading 97-75, and they inbound the ball with the game clock showing ever so slightly more time than the shot clock. If it had been the other way around, one of the Great Unbreakable Rules of the game says you are not supposed to shoot; you just let time run out. But for some reason, if the shot clock isn't turned off, you get to shoot with impunity. Never mind that the Magic couldn't possibly have fired off a 22-point shot with only a couple of seconds left in the game. Anyway, with time running out, end-of-the-bench Lakers forward Josh Powell dribbles to his left and hoists up a three-pointer that amazingly goes in. It is the first three-pointer of his entire career, playoffs or regular season.

So, I don't think you'd have any problem convincing anyone that this shot was meaningless. It turned a 97-75 blowout into a 100-75 blowout. It almost certainly didn't mean much in Vegas: I'm sure the Lakers beat the spread, pretty sure that this kept the game in the under.

The problem is, if this shot is meaningless, and three points is three points, then isn't every other shot the Lakers made similarly meaningless? Are we supposed to think this shot was almost meaningless? Perhaps, if we add up enough "almost meaningless" shots, we actually get a meaningful result. Personally, I don't buy that. In terms of the actual game and series (in other words, ignoring Vegas, which probably had some incredibly tangential bet involving Powell and a trey at the end of the game), this shot was not just mostly meaningless, it was entirely meaningless.

What I'm going to propose is a kind of probabilistic importance—the idea being that points matter to the extent that the game is in doubt at the moment, to the extent that they bear on the result of the game. I've seen, as a kind of experimental thing with the NFL on some sports Web sites, a play-by-play measure of the winning probability for the team that makes the play. If the Baltimore Ravens score a touchdown, it increases their chance of winning from, let's say, 43 percent to 59 percent. And so on.

Now, imagine the same gadget being used for basketball. How much do you suppose a two-point basket is worth in the opening moments of the game, when the winning probability for both of two evenly matched teams is 50 percent? Actually, more than you might think. Suppose the standard deviation on scoring difference between the two teams is 10 points, and that teams score about a point per possession, close enough. A two-point basket is an increase of one point over what was expected for that possession, and a single point—0.1 standard deviations—is worth about 3.6 percent. In other words, that two-point basket would increase the winning percentage from 50 percent to 53.6 percent. If, on the other hand, the shot was missed, the winning percentage would drop from 50 percent to 46.4 percent. That shot is a swing of 7.2 percent, believe it or not.
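Here's that back-of-the-envelope normal approximation as code. Note that the precise swing is sensitive to the assumed standard deviation of the final margin: a σ of exactly 10 gives a make-versus-miss swing closer to 8 percent, while a σ of around 11 reproduces the 3.6/7.2 percent figures above.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def early_swing(sigma):
    """Win-probability swing between making an early two-pointer (one point
    above the expected point per possession) and missing it (one point below),
    for two evenly matched teams, under a normal approximation with
    scoring-margin standard deviation sigma."""
    return norm_cdf(1.0 / sigma) - norm_cdf(-1.0 / sigma)

print(round(early_swing(10), 3))  # about 0.08
print(round(early_swing(11), 3))  # about 0.072
```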

Now let's consider the same shot in the closing seconds of the game. The team with the ball is down one, and is holding for the final shot. Obviously, if they make the shot, their winning probability is 100 percent; if they miss it, it's 0 percent. The percentage swing here is 100 percent, and clearly 100 >> 7.2.

But this huge swing is counteracted by the fact that in most cases, the game doesn't come down to that. In those cases, that shot would be worth 0.4 percentage points, or 1.1, or something like that. At the very end of the game, it would be worth 0 most of the time. On average, that two-pointer would be worth 7.2 percent, just like the earlier shot was. It's sort of like the lottery: Would you rather have 35 cents, or a lottery ticket that gives you a one in 100,000,000 chance of winning 35 million dollars? On average, they're both worth 35 cents. But I think you'd have a hard time convincing yourself they're exactly the same.

So, I guess, I'm not letting Courtney Lee off the hook. Make the shot, and the winning probability swings from 50 percent (overtime) to 100 percent (game over, Magic win). Two points is two points, but I think people's intuition is right: When the points happen matters, and matters a lot.

EDIT: I corrected some of the above exposition to account for the fact that the hypothetical early-game two-pointer can be missed, which is one point lower than expected for the possession.

Secondly, here's a more self-contained example of this kind of probabilistic importance. Suppose that the two teams are evenly matched—50/50 to win each game, home or away. In a seven-game series, the swing for the series win in a Game 7 is obviously 100 percent: The team that wins Game 7 wins the series. However, Game 7 only gets played when the series goes 3-3, which happens about 31.2 percent of the time. In contrast, Game 1 gets played 100 percent of the time. However, it isn't as pivotal as Game 7: It can be shown that the Game 1 winner's odds of winning the series go from 50 percent to 65.6 percent, and the losing team's odds from 50 percent to 34.4 percent. That's a swing of 31.2 percent. So Game 1 swings the odds by 31.2 percent, 100 percent of the time, whereas Game 7 swings the odds 100 percent, 31.2 percent of the time. They therefore have exactly the same average importance, but Game 7 is obviously more important when it does get played.
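The series arithmetic above can be verified with the classic "problem of points" recursion, treating each game as an independent coin flip per the 50/50 setup:

```python
from math import comb

def win_series_prob(we_need, they_need, p=0.5):
    """Problem-of-points recursion: P(we take `we_need` more games before
    the opponent takes `they_need`), with win probability p in each game."""
    if we_need == 0:
        return 1.0
    if they_need == 0:
        return 0.0
    return (p * win_series_prob(we_need - 1, they_need, p)
            + (1 - p) * win_series_prob(we_need, they_need - 1, p))

# The Game 1 winner needs 3 of the remaining games; the loser needs 4.
game1_swing = win_series_prob(3, 4) - win_series_prob(4, 3)
# Game 7 swings the series 100 percent, but only occurs when the
# first six games split 3-3.
p_game7 = comb(6, 3) * 0.5 ** 6
print(game1_swing, p_game7)  # both equal 0.3125
```

The two quantities come out identical, 31.25 percent, which is exactly the equal-average-importance point made above.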

Saturday, June 6, 2009

The Infamous Fisher "0.4" Shot


Perhaps no playoff shot has been dissected, debated, or deconstructed as much as the "0.4 Shot" made by Derek Fisher in Game 5 of the 2004 Western Conference Semifinals between the Los Angeles Lakers and the San Antonio Spurs. The Lakers did not win the title that year (they went on to be defeated by the Detroit Pistons in five games), but the closeness of the timing and the marquee nature of the two teams, who had combined to win the last five championships, conspired to focus unprecedented attention on the game-ending jumper.

Much speculation centered around whether Fisher could humanly have caught the ball, turned around, and released the ball, all in the 0.4 seconds available to him. Spurs partisans insisted that he couldn't possibly have done all of those things in so short a time; Lakers fans responded that Fisher didn't do all of those things sequentially, but combined them so that he could do them all. My own personal impression (possibly colored by my bias as a Lakers fan) was that the clock started somewhat late, but not substantially so.

Fortunately, there's no need to rely on anything so nebulous as whether Fisher's shot was plausible or not. Missing from all these speculations was an examination of the actual footage. Video from the game captures instants of the game that, for the live angle at least, are equally spaced in time. The video can therefore be used as a kind of "clock" to determine the interval of time that Fisher had possession of the ball. In assembling this particular look at the Fisher shot, I used a video file that was encoded at 25 frames per second (as I determined by stepping through frames at the end of each quarter). Unfortunately, this was not the native frame rate of the original broadcast, and this increases the random error involved in timing intervals between events. It should not, however, produce any systematic bias one way or the other.

By figuring out how many frames pass between the time that Fisher catches the ball and the time he releases it, and dividing by 25 frames per second, the elapsed time can be calculated. The bottom line, for those who are impatient or don't care about analysis: About five to six tenths of a second elapsed between the time that Fisher caught the ball and the time that he released it.

The Game

On May 13, 2004, the San Antonio Spurs played host to the Los Angeles Lakers in Game 5 of the Western Conference Semifinals. After leading most of the game by as many as 16 points, the Lakers went cold from the outside while the Spurs came steadily back, eventually going ahead 71-68 on a layup by Tony Parker with a little more than two minutes left in regulation.

After a timeout, Shaquille O'Neal responded with a turnaround eight-foot jumper in the lane to bring the Lakers to within a point. The teams traded empty possessions until Kobe Bryant sank a 19-footer from the left wing on a screen by Karl Malone, putting the Lakers ahead 72-71 with 11.5 seconds remaining.

After a non-shooting foul by Derek Fisher, the Spurs inbounded the ball in their frontcourt with 5.4 seconds left. Manu Ginobili passed the ball into Tim Duncan, and tried to cut to the basket for a return pass, but got tangled up with O'Neal and was out of the play. With no other clear options, Duncan faked one way, then dribbled the other toward the top of the key, taking a blind fadeaway jumper that touched nothing but net. The clock read just 0.4 seconds.

The Lakers called timeout. Dejected and weary players trudged slowly back to the bench, none wearier than Bryant, who was exhausted not only by the 47 minutes he had played in the game, but also by the constant jetting back and forth between the team and his legal troubles in Colorado. The Lakers' play out of the timeout called for the players to stand in a stack near the top of the key, in an attempt to break out one of their stars, O'Neal or Bryant, for a quick shot or a tip-in. But before the Lakers could inbound the ball, the Spurs called a timeout. They hoped they had seen enough to defend the play well.

After the timeout, the Lakers came out in a different set, with the players scattered across the halfcourt. Players cut, especially Bryant, but with Robert Horry doubling Bryant rather than guarding Payton on the inbounds, Payton couldn't find an open teammate and had to call the Lakers' final timeout.

When the ball was brought into play for the final time, the Lakers returned to their original set. Bryant broke out from the stack toward halfcourt, tailed by Horry and Devin Brown. O'Neal curled toward the basket, while Malone drifted toward the top of the key. Finally, Fisher broke toward Payton.

Payton tossed the ball, leading Fisher toward a spot about 18 feet from the basket on the left wing. Fisher began angling his body for the turn before catching the ball in mid-air, then coiled on his legs and prepared to shoot over Ginobili's outstretched arms. At seemingly the same instant, Fisher released the ball, the game horn sounded, and the backboard's red light came on. Nineteen thousand people held their collective breath. The ball arced upward and came down; Fisher thought he had pushed it off too hard, but it was offset just enough by his backward motion from the basket, and the ball fell perfectly through.

A hush fell over the crowd as Fisher ran down the court in celebration, eluding his mobbing teammates and streaming down the tunnel toward the locker rooms. Rasho Nesterovic and Kevin Willis waved their hands to indicate the shot got off late. Duncan stood unmoving, hoping they were right. The referees, who had called the shot good when it happened live, convened at the scorer's table to examine the video of the play from the ABC cameras. A few tense minutes passed before the referees confirmed their initial call was correct: The shot was good. The Lakers had won Game 5, 74-73, and returned home to trounce the Spurs in Game 6 to win the series, 4-2.

The Aftermath

Writing on May 14, the day after Game 5, Dusty Garza, the editor of Spurs Report, relayed news that the Spurs had filed a formal protest with the league office, claiming that the clock started too late after Fisher touched the ball, and that the shot should not have counted. The league denied the protest that same day.

Garza also offered his personal opinion that Fisher's shot did not get off in time—indeed, could not have gotten off in time—based on the notion that human reaction time is, on average, three-fourths of a second (750 milliseconds). Since the clock couldn't have started any faster than that, Garza wrote, Fisher could have had anywhere up to a bit more than a second to shoot the ball.

This seems an unreasonable conclusion. In the first place, Garza contends that the average human reaction time is three-fourths of a second, then says that unless the referees are superhuman, they couldn't possibly have pushed the button less than three-fourths of a second after Fisher touched the ball. Well, if the three-fourths of a second is an average, wouldn't half the human population be able to do it faster (assuming negligible skew in the distribution)? And presumably NBA referees are trained to be a bit faster than average.

Secondly, research indicates (Laming 1968, Welford 1980) that simple reaction time—the time required to do something simple like push a button after a visual stimulus—is more like one-fifth of a second (200 milliseconds), rather than three-fourths.

What's more, it's unclear that human reaction time is involved here at all. Bennett Salvatore once said, speaking to Henry Abbott of ESPN's TrueHoop blog, that NBA referees don't anticipate calls; they only observe the game. However, that can't possibly be literally true all the time. When Payton passes the ball in-bounds, it is immediately evident that the ball will be caught (or at least touched) first by Fisher. It is human nature to anticipate this first contact, and act accordingly. But what does it mean to "act accordingly"?

For years, the clock was operated manually, by the timekeeper, based on the rules of the game and the whistles of the referees. The system worked well most of the time, but placed a lot of reliance on the alertness of the timekeeper.

In 1999, the NBA installed a new system developed by Mike Costabile, an NCAA referee who previously officiated in the NBA. Each referee carries a small transmitter attached to his or her belt, with a button. When the clock is to start, each referee pushes the button at the exact instant at which he or she believes the ball to be in play. The first button push triggers an automatic start to the clock. The system also includes a microphone that is sensitive to the particular frequency of the whistles used by NBA referees, and stops the clock when the whistle is blown.

In order to activate the clock, at least one of the referees must push a button at the instant he or she believes the ball to be first touched. Obviously, this generally doesn't happen right at the moment the clock is "supposed" to start. There are two potential delays here: reaction time, and execution time (the time it takes the finger to actually push the button).

To understand the relationship between these two, and how the actual delay is affected by context, suppose I ask you to clap your hands as soon as one of the following happens:

  • A basketball I drop from four feet hits the floor.
  • I clap my hands.
  • I move my hands at all.

In the first case, it takes about half a second for a basketball released from four feet up to hit the floor. That is enough time for you to react and execute the act of clapping your hands at the precise moment the ball hits the floor. In the second case, both of us need our execution times to clap our hands, but you have to react to the start of my clapping motion. In counting the delay, your execution time is cancelled out by my execution time, leaving just your reaction time. And in the last case, I can move my hands without warning, meaning that the delay is your reaction time plus your execution time.

In the case of the play in question, it took about half a second for the ball to pass from Payton's hands to Fisher's hands. For a referee who is ordinarily alert, this is plenty of time to predict the path of the ball and press the button almost immediately upon contact. Even if we accept that referees do not anticipate events, they must at least be prepared for the potential event of contact between Fisher and the ball; there is no reason, at any rate, for the delay to be anywhere near three-fourths of a second.

But let's not be too hard on Dusty Garza. He was writing in the heat of the moment, and from honest feeling. Besides, let any of us without team favoritism cast the first stone. Let's get down to brass tacks: Garza sincerely believes that the video shows that Fisher took about a second to get the shot off. Did he?

The Video

The video file I used in putting this article together is encoded at 25 frames per second. (I determined this by advancing the video frame by frame, 200 frames, at the end of each of the four quarters, when the clock is counting down at tenths of a second. Each time, 200 frames corresponded to an interval of exactly 8 seconds, so the video must be progressing at 200/8 = 25 frames per second.) Therefore, each frame represents 1/25 = 0.04 second. This is not the frame rate of the original broadcast, which would probably have been 30 frames per second. As part of the re-encoding process, frames from the original broadcast would be blended or dropped, increasing the error involved in estimating precisely when events happen. Since these errors do not accumulate, however, they add a small random error, but they do not systematically bias estimates of interval lengths.

One problem in reviewing this particular game video is that only the live shot actually keeps time accurately. In all subsequent replays, the ABC crew slowed the video at a variable rate, in order to allow Al Michaels and Doc Rivers to comment on it. But the live shot has the camera in line with Fisher and Payton, making the determination of the instant Fisher first caught the ball difficult. Here, for instance, are three successive frames of the live shot, obtained using the snapshot function of the xine video player. Which one of these frames do you think shows Fisher actually catching the ball?

I think it's pretty clear that this angle can't be used to reliably determine when Fisher touched the ball (which would have started the clock). Fortunately, we can still make use of other camera angles. This has precedent in the NFL, in which "composite" video reviews are conducted. This allows the referee (and video replay official) to assess multiple angles in order to come to a firm conclusion, even when no single angle provides all of the information necessary. This isn't to say that the NFL uses fancy three-dimensional visualization tools (à la The Matrix), since they can't do that in time, and neither is it necessary here. We'll just combine the angles mentally.

Here are video stills from the opposite baseline. It shows the ball and Fisher approaching one another. In this first frame, it's not very easy to see the ball, but you can see it superimposed on referee Joe Forte's right foot. At this point, the ball is still a couple of feet from Fisher's outstretched hands.

The frame below shows Fisher and the ball considerably closer to one another, but there still appear to be several inches in between them.

This third frame shows Fisher's hands possibly touching the ball for the first time. They don't clearly touch, but this is the first frame from this angle where contact is plausible.

Note the positions of the other players. The positioning of the left foot of Duncan (guarding Malone at the free-throw line) is especially revealing. That foot covers the free-throw line, as seen from this camera angle. Duncan's foot must therefore be in reality at least as far out as the free-throw line. It could be beyond it, if it's above the floor, but it certainly cannot be between the basket and the free-throw line. This crucial piece of video detail shows that Fisher does not touch the ball until the second of the three live video frames above. Below are successive frames from the live video, starting from the one where Fisher first touches the ball, and running until he releases it.

Frame 1:

Frame 2:

Frame 3:

Frame 4:

Frame 5:

Frame 6:

Frame 7:

Frame 8:

Frame 9:

Frame 10:

Frame 11:

Frame 12:

Frame 13:

Frame 14:

Frame 15:

In my opinion, Frame 14 (which shows the clock switching from 0.1 to 0.0) shows Fisher apparently having released the ball—about as apparently as he touches it in Frame 1. If we take those two frames as the endpoints of Fisher's possession of the ball, then he has the ball for 14 minus 1, or 13 frames in all. At 25 frames per second, that works out to 13/25 = 0.52 seconds. I estimate the method used to produce our composite review to have an error of a frame or two in either direction, which works out to plus or minus about 0.06 seconds; add another 0.02 seconds for the video re-encoding at 25 frames per second.
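The arithmetic, with the frame numbers identified in the stills and the error terms just mentioned:

```python
FPS = 25                              # frames per second of the encoded video
catch_frame, release_frame = 1, 14    # frames identified in the stills

frames_held = release_frame - catch_frame    # 13 frames
elapsed = frames_held / FPS
print(elapsed)                               # 0.52 seconds

# Uncertainty: roughly +/-0.06 s from the composite review itself,
# plus another 0.02 s from the 25 fps re-encoding -- call it +/-0.08 s.
```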

In addition, I should account for my status as a Lakers fan. (Who else would go through this much trouble for Fisher's shot?) I remember sitting in bed, having twisted my ankle in my own basketball game earlier that afternoon, and feeling pretty good about the Lakers until the fourth quarter, then anxious, then frustrated, then angry, and finally elated. To account for this possible systematic bias, it is sensible to add a frame's worth of time to the figure above, yielding 0.56 seconds. Note that the ball has clearly left Fisher's hands in Frame 15, which also shows the red light on the backboard going on for the first time.

To summarize, this video shows that Fisher had possession of the ball for about 0.5 to 0.6 seconds. One corollary of this finding is that the referees started the clock approximately 0.1 to 0.2 seconds after he caught the ball. This is entirely typical and in line with usual execution times; it would be unreasonable to claim the clock was started "late." It's certainly shorter than the three-quarters of a second that Garza claimed was necessarily human reaction time; after all, Fisher executed his entire possession in less time than that.

Final Thoughts

Some Lakers fans pointed out, in the aftermath of the series, that prior to Fisher's game-winner, Duncan's shot swished through the hoop with considerably more than 0.4 seconds left on the clock. See, for instance, this video frame:

If so, claimed Lakers fans, the Lakers should have had more time on the clock, possibly rendering the above dispute moot. NBA rules stipulate that the clock should be stopped at the moment the ball exits the bottom of the basket, including the nylon, not when it enters the basket. The frame above shows the ball exiting the basket with the clock switching from 0.8 to 0.7 seconds. By that token, it must have taken somewhere between 0.3 and 0.4 seconds for the referees to whistle the clock stopped after the shot swished through. Why did it take longer for the clock to stop after Duncan's shot than it did for it to start after Fisher's contact?

It's impossible to state for certain, but one possibility is that because it's less predictable that Duncan's shot will exit the bottom of the basket than it is that Fisher will touch the ball, the referees had to wait longer to be sure that the basket was made before whistling the clock stopped. Then, too, it's a whistle blow that stops the clock, as opposed to a button press that starts it again, and those two actions may well have different execution times. But it seems plausible that had perfect timekeeping prevailed in the final seconds, Fisher's shot would have been good by about 0.1 seconds.

Friday, June 5, 2009

Superstars and the PER

And now, a few words about the Player Efficiency Rating, or PER.

As a statistics guy, I am generally wary of how statistics are used in sports. This is not a matter of not believing in what I do; it's more that I know where the numbers come from, so I know what they can say and what they can't. And it drives me a little batty to see some statisticians—people who I think should know better—put too much stock in their statistics, especially if it's statistics that they had a hand in crafting.

Take, for instance, the PER, which has its roots in the Sabermetric movement in baseball and is the basketball equivalent of OPS (On-Base Percentage plus Slugging Average). Roughly speaking, we can divide all basketball statistics into two broad groups. One group consists of raw observables, such as steals, blocks, minutes played, three-pointers attempted and made, and so forth. PER does not fall into this category.

PER falls in the second category of aggregate statistics, which are combinations (often but not always linear combinations) of other statistics. As a way of accounting for all the various things that a player might do to help his team out, PER combines a slew of raw observables into a formula, which reduces to a single number. There is no unique PER formula, but the most popular one was developed by John Hollinger. Its output is normalized, so that the league average is 15. Hollinger has developed a heuristic for judging players based on PER:

  • A Year for the Ages: 35.0
  • Runaway MVP Candidate: 30.0
  • Strong MVP Candidate: 27.5
  • Weak MVP Candidate: 25.0
  • Bona Fide All-Star: 22.5
  • Borderline All-Star: 20.0
  • Solid 2nd Option: 18.0
  • 3rd Banana: 16.5
  • Pretty Good Player: 15.0
  • In the Rotation: 13.0
  • Scrounging for Minutes: 11.0
  • Definitely Renting: 9.0
  • On the Next Plane to Yakima: 5.0

Forget for the moment the bottom end of the ranking. By definition, the average player has a PER of 15, and since there are so many apparently average players in the NBA, there should be a lot of players around 15, and there are.

What about the top end? We would expect that there would be precious few players beyond a PER of 25 for any given season, and that turns out to be true. Doesn't that on its own mean that PER is a good measure of player performance?

On its own, no. The PER formula is not derived from first principles; it's an individual attempt to capture the effectiveness of a player, and as such is a carrier of the arbitrary priorities of the PER designer. One could also design PER to positively weight turnovers, missed shots, and personal fouls, and still have most of the players in the league around 15, and a precious few above 25. Only now it would be the very worst players who would show up at the top. That's an extreme example, of course—no one would actually design PER that way—but all that means is that the arbitrary nature of PER is more constrained.

To see what I mean, suppose for the sake of simplicity that we're only interested in capturing two raw observables: points and rebounds. Let's look at a few hypothetical players.

Wade James: 30 points, 5 rebounds
Howard Williams: 20 points, 14 rebounds
Chris Bryant: 28 points, 8 rebounds

And suppose also that the league average for points is 10 and the league average for rebounds is 5. So one possible formula for PER would be points + rebounds. It's easy to see that the league average for this PER would be 10 + 5 = 15. By this measure, Wade James has a PER of 35, Howard Williams a PER of 34, and Chris Bryant a PER of 36. So Bryant has the highest PER. But it's close.

It's so close, in fact, that it's almost an incidental consequence of the way we designed PER. If we wanted a higher weighting for rebounds and a lower one for points, we could have another formula for PER: 0.5 × points + 2 × rebounds. In that case, the PERs would be 25 for James, 38 for Williams, and 30 for Bryant. Here, it's a runaway for Williams. Or, we could do the reverse, and make the formula 1.25 × points + 0.5 × rebounds. Then the PERs would be 40 for James, 32 for Williams, and 39 for Bryant, and James now has the highest PER. In all these cases, the league average PER is 15, and yet any of the three superstars could end up on top, depending on which PER formulation you choose.
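The three weightings can be put side by side in a few lines. The player names and stat lines are the hypothetical ones above:

```python
players = {
    "Wade James":      (30, 5),    # (points, rebounds)
    "Howard Williams": (20, 14),
    "Chris Bryant":    (28, 8),
}

# Each weighting keeps the league average (10 points, 5 rebounds) at a PER of 15.
formulas = {
    "even":          (1.0, 1.0),
    "rebound-heavy": (0.5, 2.0),
    "scoring-heavy": (1.25, 0.5),
}

top = {}
for name, (w_pts, w_reb) in formulas.items():
    assert w_pts * 10 + w_reb * 5 == 15       # league average stays at 15
    pers = {p: w_pts * pts + w_reb * reb for p, (pts, reb) in players.items()}
    top[name] = max(pers, key=pers.get)

print(top)  # a different "superstar" tops each formula
```

Every formula is normalized to the same league average, and yet each one crowns a different player.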

There is, in mathematics, the notion of vector domination. In these terms, one player dominates another if none of his statistics are lower than the other's, and at least one is higher. For instance, 20 points and 6 rebounds dominates 14 points and 4 rebounds, and is in turn dominated by 25 points and 6 rebounds. None of them dominates, or is dominated by, 28 points and 3 rebounds. It can be shown that with any sensible definition of PER, in our limited context where we're only interested in points and rebounds, if one player dominates another, his PER is guaranteed to be higher. That's not surprising, since there should be no doubt that he's better, if we only care about points and rebounds.
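Domination is easy to state in code. Here's a sketch over (points, rebounds) pairs, using the examples just given:

```python
def dominates(a, b):
    """True if no component of a is lower than b's and at least one is higher."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

assert dominates((20, 6), (14, 4))        # 20/6 dominates 14/4...
assert dominates((25, 6), (20, 6))        # ...and is itself dominated by 25/6
assert not dominates((28, 3), (20, 6))    # 28/3: neither dominates...
assert not dominates((20, 6), (28, 3))    # ...nor is dominated

# Any PER with strictly positive weights must rank a dominating player
# higher: if a_i >= b_i everywhere and a_i > b_i somewhere, then
# sum(w_i * a_i) > sum(w_i * b_i) whenever every w_i > 0.
```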

Note that none of our three hypothetical players is dominated by any of the others. That's almost inevitable when you're comparing superstars. Because they're superstars, chances are good that each one does one thing better than all the rest, which means that no superstar can dominate another. Superstars will dominate the majority of players in the league, but not each other. As a result, one can define PER in such a way to put almost any given superstar on top, and which one ends up on top says as much (if not more) about the PER designer's predilections for skills as it does about the top players.

The crazy thing is that PER is probably very good indeed for comparing journeyman players, and Hollinger routinely uses it for that. But most PER fans don't seem to be interested in that. They only want to compare the top players with PER, and as you've just seen if you read this far, I think it's a pretty subjective way to do that. But most people associate statistics with objectivity, and people with subjectivity, with the end result that (a) fans of the player that ends up with the highest PER lord over fans of the other stars, and (b) those fans of the other stars start hurling invective and accusations of bias at the PER designer (usually Hollinger). I can't count the number of times Hollinger has been called a Lakers hater just because Kobe Bryant doesn't end up with the highest PER.

To be fair, I think Hollinger brings some of that on himself, since he himself uses PER to compare the top players. Although I think he should know better than to do that, I don't really blame him; if I designed a PER, I'd probably use it for that, too.

Which is the very first reason I've never been tempted to design a PER.

EDIT: Here's a graphical representation of the various PER formulas in our hypothetical scenario. (Click to enlarge.) Points are plotted along the vertical axis, and rebounds along the horizontal axis. The green lines represent "iso-PERs": lines along which the initial PER is constant, at either 15, 25, or 35. Red lines represent the rebound-heavy iso-PERs, and blue lines represent the scoring-heavy iso-PERs.

Thursday, June 4, 2009

The NBA Finals and the 2/3/2 Format

I'm not going to talk about these upcoming NBA Finals, actually. I want to, but I also want to avoid the wrath of the Gods of Woof.

Instead, I'm going to say a few words about the so-called "2/3/2 format" of the NBA Finals, which refers to the sequence of venues for the seven games. As in, the first two games and last two games are played at the court of the team with the better record, while the middle three games are played at the court of the team with the worse record. (Usual tiebreakers apply.) This is in contrast to the other 14 series in the NBA playoffs—and, for what it's worth, every series in the NHL playoffs—which use the 2/2/1/1/1 format. You'll excuse me if I don't spell out in gory detail how that format goes.

Each year, at around this time, we read and hear the same time-worn opinion pieces about how the difference in format (which is intended to minimize travel for the teams) affects the chances for the two teams. Some think that the difference favors the underdog, because if home-court advantage holds for the first five games, the favorite has to return to its own court having to win the last two games. Others think the difference favors the favorite, because to get to that point after five games, the underdog has to win three games in a row.

The first thing I want to do is dispense with this notion that the format confers any kind of inherent advantage to either team. Occasionally, one sees it pointed out that the odds for the two teams are unaffected by the difference in format. One relatively simple way to see this is that the result of the series is not changed at all if we somehow force both teams to play the full seven games, even if the series is already decided before that point. (I'm sure Madison Avenue is all for that.) Since the favorite hosts four games and the underdog hosts three, no matter which format is used, the odds should be the same.

It's important to note that this line of reasoning assumes that the game results are independent of each other. If the results of earlier games can statistically affect the results of later games, that argument loses force and it becomes quite possible that the series result could in fact be affected by the format.

In this light, one interesting observation that I haven't seen before (and it might just be that I haven't looked hard enough) is that the two formats are identical except for one small change: The venues for Games 5 and 6 are switched. Otherwise, Games 1, 2, 3, 4, and 7 are played in the same place in both formats. So let's restrict our analysis of the format to just those two games.

What are the possible situations going into Game 5? We can eliminate series sweeps, because in those cases, Game 5 never gets played. So either the series is tied 2-2, or else one team is up 3-1. And let's suppose that we believe in the notion that players tighten up under pressure (which we'll assume is the case if they're playing to stay in the series), lowering their winning percentage. What effect does the difference in format have under these assumptions?

If the series is 3-1, the team that's down is under the gun for both Games 5 and 6, and the difference in format doesn't have much effect at all. If, however, the series is tied 2-2, there's no more pressure on one team than the other in Game 5, but the loser of Game 5 has the pressure on them in Game 6. Now, look at this from the perspective of the underdog. Let h represent their winning percentage at home, v their winning percentage on the road, and µ the effect of pressure. In the 2/2/1/1/1 format, with Game 5 on the road for the underdog, the expected number of wins in Games 5 and 6 is then

E[wins] = v (1 + h + µ) + (1 - v) (h - µ) = v + h + 2µv - µ

In the 2/3/2 format, with Game 5 at home for the underdog, the expected number of wins is

E[wins] = h (1 + v + µ) + (1 - h) (v - µ) = h + v + 2µh - µ

Under these assumptions, the 2/3/2 format is better for the underdog, yielding on average 2µ (h - v) more wins for them when the series is tied 2-2 going into Game 5. In ordinary terms, playing Game 5 at home gives them a better chance of taking advantage of the pressure factor in Game 6, and a lower chance of suffering from pressure themselves. What's more, the win differential goes up in proportion with both the pressure factor µ and the home-court advantage (h - v).
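A quick numerical check of that algebra. This is a sketch; the values of h, v, and µ are arbitrary illustrations, not estimates:

```python
def expected_wins(p_g5, p_g6, mu):
    """Underdog's expected wins in Games 5-6, where the loser of Game 5
    plays Game 6 under pressure (winning percentage shifted by mu).
    p_g5 and p_g6 are the underdog's baseline win probabilities by venue."""
    return p_g5 * (1 + p_g6 + mu) + (1 - p_g5) * (p_g6 - mu)

h, v, mu = 0.60, 0.45, 0.05   # home pct, road pct, pressure effect (illustrative)

e_22111 = expected_wins(v, h, mu)   # 2/2/1/1/1: Game 5 on the road, Game 6 at home
e_232   = expected_wins(h, v, mu)   # 2/3/2:     Game 5 at home, Game 6 on the road

assert abs(e_22111 - (v + h + 2*mu*v - mu)) < 1e-12   # closed forms from the text
assert abs(e_232   - (h + v + 2*mu*h - mu)) < 1e-12
assert abs((e_232 - e_22111) - 2*mu*(h - v)) < 1e-12  # the 2/3/2 underdog edge
```

With these illustrative numbers, the 2/3/2 format is worth 2 × 0.05 × (0.60 − 0.45) = 0.015 extra expected wins to the underdog over Games 5 and 6, conditional on a 2-2 series.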

Of course, you should take this with a sizable grain of salt. I'm using a very simplistic model of home-court advantage and pressure. If you like, you can extend the model to let the two teams have different pressure adjustments, or even have lack of pressure (instead of pressure itself) depress winning percentage. The more important thing to take out of this is the set of factors that bear on the difference between the two formats, because under most conditions, they really aren't as different as they're made out to be.

Hello? Is this thing on?

Someone is always listening, even if it's just the googlebots.

I have no really clear idea what this is going to be about. The name of the blog is supposed to conjure up images of an inoculation of statistical levelheadedness, but beyond that, who knows? I have this vague notion that I'll throw my hat in the ring on my various passions: in no particular order, astronomy, sports (basketball especially), music, poetry, the way people think and act, etc. But of course I feel no compulsion to stay within those bounds.

Deep breath. We'll see how this goes.