Monday, November 26, 2012

Going Whole Ballhog

If you're one of the tens of readers who follow me, then unless the bottom of your rock doesn't carry ESPN, you've probably heard something about this kid from Grinnell who dropped 138 on a hapless Faith Baptist Bible College basketball team.  Now, granted, this was a Division III basketball game—hardly the acme of organized basketball.  Still, as Kobe Bryant said, "I mean, I don't care what level you're at, scoring 138 points is pretty insane."  Jack Taylor is a household name now, people.

Rather predictably, there was some backlash, with some people claiming that it was rigged, or that it was selfish basketball, or at least not The Way That Basketball Should Be Played (because anything that portentous has to be written upstyle).   I can't say anything as to whether it was rigged, although it didn't look like it to me, and as with any conspiracy theories, it's easy to say something like that when you don't have to offer any proof.  All you have to do is throw out your hands and say, "It's common sense!"

But we can say something about whether it was selfish or bad basketball.  Some folks have taken it upon themselves to make a virtue out of evenly distributed teamwork.  That's fine as a matter of personal opinion, but they make a mistake, I think, who believe that it's an intrinsic virtue of basketball.  It wasn't an intrinsic virtue of basketball when Naismith put up the first peach baskets, and until someone invents a game that makes teamwork an explicit scoring feature, there won't be a sport where it's an intrinsic virtue.  (I also think that some of these folks could benefit from playing with a scoring phenom, just to see what it's like, but that's neither here nor there.)

What makes it a virtue—when it is a virtue—is that it makes a team more efficient, by and large.  On the occasions when a player goes out and consciously attempts to score a bunch, it quite frequently turns out that the other players on the team are more efficient, and thus the team as a whole would have been more efficient if the offense had been more evenly distributed.  This is a basic result from game theory.

But that didn't turn out to be the case here.  Taylor scored 138 out of his team's 179 points.  That's 77 percent.  To get those points, of course, he used up a lot of his team's possessions: 69 percent, according to ESPN.  It is a lot, but it shouldn't overshadow the fact that the rest of his team used up the remaining 31 percent of the possessions and ended up scoring only 23 percent of the points.

Let's see how that stacks up against two other phenomenal scoring performances of the past: Wilt Chamberlain's mythic 100-point night in Hershey, and Kobe's own 81-point barrage at home against the Toronto Raptors.  (Taylor nearly had 81 just in the second half.)  I'm going to ignore claims that the Warriors game was a farce in the second half, or that the Toronto Raptors were a defensive sieve; I'm only interested in the efficiency figures.

Chamberlain's Warriors scored 169 points that night, so Chamberlain scored 59 percent of his team's points, using (again according to ESPN) 47 percent of his team's possessions.  Kobe's Lakers scored 122 points, so he contributed 66 percent of his team's points, while using (ESPN again) just 51 percent of the team's possessions.

One way to look at these feats is to consider how much more efficient the individual players were than the rest of the team.  So, on a percentage basis, Taylor scored 77 percent of the points on 69 percent of the possessions, whereas the rest of the team scored 23 percent of the points on 31 percent of the possessions.  Taylor, therefore, was (77/69) / (23/31) = 1.50 times as efficient as his teammates.  Similarly, Chamberlain was (59/47) / (41/53) = 1.62 times as efficient, and Kobe was (66/51) / (34/49) = 1.87 times as efficient.
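The arithmetic can be sketched in a few lines of Python (the percentages are the ones quoted above, via ESPN; the helper name is mine, for illustration):

```python
# Relative efficiency: the player's points-per-possession share divided
# by the rest of the team's.  Inputs are the percentages quoted above.
def relative_efficiency(pts_pct, poss_pct):
    player = pts_pct / poss_pct
    rest = (100 - pts_pct) / (100 - poss_pct)
    return player / rest

for name, pts, poss in [("Taylor", 77, 69),
                        ("Chamberlain", 59, 47),
                        ("Bryant", 66, 51)]:
    print(f"{name}: {relative_efficiency(pts, poss):.2f}x")
# Taylor: 1.50x, Chamberlain: 1.62x, Bryant: 1.87x
```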

However, such a measure can easily be misleading.  If someone plays a single minute, puts up a single three-pointer, and makes it, they might (as a normal example) have 3 percent of the team's points with only 1 percent of its possessions.  By the same metric, such a player would be (3/1) / (97/99) = 3.06 times as efficient as his teammates.  What's missing is some measure of the magnitude of the player's impact.

A more representative measure of the player's efficiency impact can be obtained by considering how efficient the team would have been if the other players had used up all of the team's possessions, at the same efficiency they had been exhibiting.  For instance, Taylor's teammates used up 31 percent of the possessions, scoring 23 percent of the points the team eventually scored.  If they had continued at that same clip, but used up 100 percent of the possessions, they would have eventually scored 133 points—about 74 percent as much as the team actually did.  To put it another way, the team with Taylor was 31/23 = 1.35 times as efficient as they would have been without him.

Using that as our guideline, the Warriors with Chamberlain were 53/41 = 1.29 times as efficient as they would have been without him, and Kobe's Lakers were 49/34 = 1.44 times as efficient as they would have been without him.
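In code, the with-versus-without comparison is a one-liner (same teammate percentages as above; the function name is mine):

```python
# Team efficiency with the star, relative to the teammates-only rate:
# (teammates' possession share) / (teammates' points share).
def with_vs_without(teammate_pts_pct, teammate_poss_pct):
    return teammate_poss_pct / teammate_pts_pct

print(round(with_vs_without(23, 31), 2))  # Taylor's Grinnell: 1.35
print(round(with_vs_without(41, 53), 2))  # Chamberlain's Warriors: 1.29
print(round(with_vs_without(34, 49), 2))  # Bryant's Lakers: 1.44

# Counterfactual: Taylor's teammates at their own rate on all possessions.
print(round(179 * 23 / 31))  # about 133 points
```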

Just as a demonstration of how amazing all of these numbers are, if a team averages a true shooting percentage of 50 percent amongst four players, and the remaining player uses up half the possessions with a true shooting percentage of 70 percent, that team is only 1.20 times as efficient as they would be without that player.  To increase their teams' efficiency as much as they did, these three athletes had to be remarkably efficient and prolific.
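That hypothetical checks out numerically (a sketch; "efficiency" here is points per possession, taken as proportional to true shooting percentage):

```python
# Four players at 50% TS use half the possessions; one star at 70% TS
# uses the other half.  Team efficiency is the possession-weighted mean.
with_star = 0.5 * 0.50 + 0.5 * 0.70   # team efficiency with the star
without_star = 0.50                   # everyone at the teammates' rate
print(round(with_star / without_star, 2))  # 1.2
```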

Thursday, October 25, 2012

The Tragedy of Optimality

I have children.  And because I have had them in the era of DVDs and iPods, I have watched all of the Pixar feature films, many of them well over a hundred times.  It's an occupational hazard, although there are some countervailing benefits; for one, my impression of Shark, doing his impression of Woody, is spot on.

Some time ago, as I was watching Finding Nemo for probably about the 267th time, I thought: Now isn't it too bad that Nemo just didn't stay away from the boat (butt)?  All this hassle could have been avoided.  But on the other hand, if they avoided the hassle, Nemo would have spent the rest of his childhood being helicopter-parented around by Marlin.  And it occurred to me that a lot of the other Pixar films had similar moments.  If only Woody hadn't sent Buzz flying out the window.  But then they would never have bonded or saved the neighborhood from a bully.  If only Bob hadn't sent his boss Mr. Huph flying through four office walls and gotten fired.  But then he never would have stumbled onto a plot against the lives of the former supers and regained the trust of the people.  And so on.  I think this pattern, as formulaic as it might seem in retrospect, is part of the strength of Pixar stories.

I was reminded of all of this a couple of days ago as I was making my way through James Gleick's Genius, a biography of Richard Feynman that, for a wonder, wasn't written by the man himself.  As a boy, Gleick writes, possibly in reaction to not having been blessed with extraordinary or even ordinary athleticism, Feynman disdained the fine arts—music, drawing, poetry, and so forth—as not being masculine enough, for being too impractical.  You almost want to go back through the decades and slap some sense into the boy...and yet, if he had not disdained those things, would the world have been deprived of the great genius of Feynman?  As the mathematician G.H. Hardy said of the Indian prodigy Ramanujan,
He would probably have been a greater mathematician if he had been caught and tamed a little in his youth; he would have discovered more that was new, and that, no doubt, of greater importance.  On the other hand he would have been less of a Ramanujan, and more of a European professor and the loss might have been greater than the gain.
The general notion is one of the trade-off as yet unseen.  We hear all the time about the value of being willing to fail, of being ready to risk substantial loss in search of almost inestimable gain.  But it's easier to be brave, I think, when you know what you might gain.  Few will wholly fault you then.  As both fiction and fact tell us, though, there are plenty of moments when things of value are risked, and seemingly without even the hope of gain, simply through recklessness or stubbornness, and yet things of value too are gained nevertheless.  Under the circumstances, without willful insistence, it seems an error even to call these risks.  More accurate to call it routine imperfection (although certainly also more of a mouthful).

In light of my posts on game theory and the like, it may sound as though I'm advocating for occasionally suboptimal behavior as a way to obtain optimal results.  That's not exactly right; as far as they go, the results of game theory are inviolate.  You can't get optimal results from suboptimal choices.  But what you can do is discover that your measure of what's optimal wasn't quite right.  You can optimize perfectly for dollars (or regular-season wins, or family time, or whatever), and yet thereby miss greatness.

Or, you might miss nothing.  In fact, most of the time, and for the vast majority of people, that's exactly what you miss.  And that's what makes routine imperfection so unappealing on an individual level, because it's regularly unrewarding.  But on a social level, with millions or billions of people operating in general autonomy, it's at once unavoidable and essential.

Tuesday, October 2, 2012

Speaking of the Electoral College

About four years ago, I made a somewhat long-winded post (not by the standards of this blog, I suppose, but generally) about the electoral college, prompted by Bernie visiting my office to ask me about it.  Of course, I had primed him by saying I had something nerdy to say about it, and he's unable to resist that kind of bait.  One of the best things about nerdy posts of this sort is that timeliness is not a big concern, so here it is, four years on:

Bernie just came into my office because he wanted to hear my spiel on the electoral college. Put aside for the moment the question of whether this indicates he's some kind of pedagogical masochist; what started this was the question of whether voters in a big state like California suffer because their vote is diluted, or are favored because the state's electoral power is so huge. The short answer is that it's mostly the latter, but there are a few interesting wrinkles along the way.

One way to approach the question is to consider how many Missouris (11 electoral votes) it would take to match the electoral power of California (55 electoral votes). The obvious answer is five Missouris, but this assumes that the Missouris all vote as a bloc, as California would (in a presidential election). In general, assuming independent Missouris, this is unlikely. Because some of the Missouris would be likely to cancel others out, the swing power they hold would not be 11 times 5, but 11 times the square root of 5, or about 24 electoral votes. (Remember random walks and square roots?) If all Missouris were independent—and why shouldn't they be?—it would take 25 Missouris to match one California (in terms of the states' electoral power).
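The square-root rule is easy to sanity-check by simulation (a sketch, not a proof: each "Missouri" flips a fair coin, and swing power is measured as the root-mean-square of the net electoral swing):

```python
import random

# RMS net swing of n independent states, each delivering +/- v votes.
# For independent coin-flip states this comes out to v * sqrt(n).
def rms_swing(n_states, votes, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.choice((-votes, votes)) for _ in range(n_states))
        total += s * s
    return (total / trials) ** 0.5

print(round(rms_swing(5, 11), 1))   # about 24.6 = 11 * sqrt(5)
print(round(rms_swing(25, 11), 1))  # about 55, matching one California
```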

Now, each individual voter in California has less power to impact the state's overall direction, just because there are oodles of people in California—roughly five times as many as in Missouri, to match the disparity in electoral votes. (It's actually a little more than that, but we'll deal with that in a bit.) That means that it takes roughly five Californians to make the same percentage impact on their state's result as one Missourian. Again, that only happens if the Californians vote as a bloc; assuming they vote independently, it would take 25 Californians to equal one Missourian.

So at first blush, it seems that these two effects cancel each other out: California has 25 times as much electoral power as Missouri, but each Missouri voter has 25 times as much individual impact on the Missouri result as a California voter has on the California result. However, there is one additional effect of California's large population: The required swing in close votes in California is smaller, percentage-wise, than it is in Missouri. It's basically the law of large numbers: In any evenly contested election, the outcome probably won't ever be exactly 50-50, but the more populous the state, the closer it will be to 50-50, and the smaller the percentage swing required to change that outcome. This factor is again equal to the square root of 5, and it's what tips the final result—a California voter has a larger impact on the national electoral result than a Missouri voter.
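That law-of-large-numbers effect can be simulated too (a sketch: in a dead-even two-way race, the typical percentage margin shrinks like one over the square root of the electorate, so a state 25 times as populous has margins about 5 times as tight):

```python
import random

# Average absolute margin, as a percentage of votes cast, in a 50-50
# race with n voters (each voter is an independent fair coin flip).
def typical_margin_pct(n_voters, trials=2_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        margin = sum(rng.choice((-1, 1)) for _ in range(n_voters))
        total += abs(margin) / n_voters
    return 100 * total / trials

small = typical_margin_pct(100)     # a toy "Missouri"
large = typical_margin_pct(2_500)   # a toy "California", 25x as populous
print(round(small / large, 1))      # about 5, i.e. sqrt(25)
```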

One complicating factor is that the number of electoral votes belonging to a state is not quite proportional to that state's population, not even when rounding is taken into account. The reason is that only the number of Representatives belonging to the state is proportional to the state's population; there are also the Senators, of which each state has two. Since there is one electoral vote per member of Congress (Representatives and Senators combined), small states have a much higher representation per capita than large states.

The upshot is that the most overall power is held by voters in the largest states, like California, Texas, and New York. Intermediate are voters in states with moderately large populations, such as Ohio or Illinois, as well as the smallest states. The weakest are voters in states like Arizona and Colorado, which are too large to gain much advantage from the "bonus" two electoral votes corresponding to Senators, but are too small to gain advantage from the enormous impact of a large population (and large electoral college representation).

It should be pointed out that the foregoing discussion only applies to votes where each state is contested—a "battleground" or "swing state," in recent election parlance. In practice, the impact a California voter has in the 2008 presidential election is nearly nil, since the state is almost guaranteed to go to Obama. (We'll see if I eat those words. [Obviously, I didn't. —brian]) The necessary swing is way too large for a reasonable number of California voters to overcome. That the predisposition of a state's voters is more than enough to swamp the effect of the largest population in the Union is, in my opinion, an indication that those trying to "fix" the election system (typically by replacing it with direct popular vote) are barking up the wrong tree, often in an irrational attempt to right a wrong—the 2000 Bush victory, say, which went against the popular vote—that ultimately had to do with factors distinctly different from the structure of the electoral college.

Thursday, August 16, 2012

Sealing Up Time

"But I thoughtwhat about changing your own past?  What about the paradoxes?"

Dr. Vanner pursed her lips.  "Yes, I wondered about that too."

"So what happens if I shoot my grandfather?  Not that I would, but I could."

"Well, Jason, it turns out that's a bit of an interesting question, whether you could or not.  But grandfathers are very large, complicated things.  People are always trying to figure out how time travel could possibly work with grandfathers, and bullets, and messy macroscopic objects like that.  It's easier just to deal with simple particles first.  You figure out the particles, the grandfathers take care of themselves."


"Well, grandfathers are made out of particles, aren't they?"

"I guess that's one way to think of them."

"You know, I had a grandfather too."  Dr. Vanner smiled warmly.  "Anyway, I think the best way to answer your question is by way of example.  Say you're a particle.  An electron.  You didn't come from nowhere, you started out as a muon.  But muons don't live very long; they decay in a few microseconds to yield a couple of neutrinos, and you."  She sketched on the board as she spoke.

"Meoh right, I'm an electron."

"That's right.  Now you, as an electron, can live essentially forever.  You step into the time machine (or a smaller version of it), and you go back in time.  Just a little bit: say, a microsecond."

"Ahh, I think I see where you're going.  I'm going to bump the muon just enough so that it decays somewhere else, and even if it decays into me, I'm nowhere near the time machine to go back in time.  Paradox."

"Exactly.  So what's the resolution?  The resolution is that particles aren't billiard balls.  As an electron, you don't really bump into the muon.  You 'interact' with it."

"What difference does that make?"

"The difference is that the interaction has a random element.  If I hit a cue ball into another billiard ball in the same exact way, over and over again, both balls will go off in the same directions, over and over again.  It's predictable, deterministic.  That's why you can have expert billiard players.  But subatomic particles aren't the same way.  They can hit in exactly the same way, as far as we can tell, but the results may be completely different from one time to the next.  There are no expert electron players.

"And that's the key.  There's going to be one way or another that you could end up hitting that muon that will end up with it decaying into you at the right place at the right time.  Maybe you give it an extra nudge, and it goes a bit faster in the same direction, but it decays sooner than it would have.  Maybe it goes off in a different direction, but when it decays into you, you still end up heading toward the time machine."

"But if there's so many different ways it can happen, which one actually does happen?"

"That's a complicated question.  The simplest way I can think of to understand it is to imagine the universe as a kind of simulation.  If we conduct an ordinary quantum-mechanical experiment, there's a certain probability that the experiment will end up one way, and the rest of the time, it ends up another way.  It can do that because the experiment is anchored on only one end: the start.

"But in the time travel case, it's actually anchored on both ends.  When you the electron exist at a particular time and place, there's an anchor at that point.  The universe is in a more or less definitive state at that point.  Normally, that's the only anchor.  But in this case, when you travel back in time, there's a second anchor, in that we know you have to end up back (or should I say 'forward'?) in the time machine.  In between, nearly anything can happen—subject to the laws of physics.

"So imagine that the universe runs a simulation.  How many different ways can you start at the first anchor point, and end up at the second anchor point?  Which ones are most likely, when you adhere to the laws of physics?  We don't even know which ones are most likely beforehand, except in the very simplest of cases."

"So as an electron, I end up taking the most likely path back to the time machine?"

"No, not quite.  If the chances of you taking that path are three in five, then three times out of five, that's the path you'll take.  Or you could end up taking a once-in-a-million path (like bouncing off of three other particles before entering the time machine); it's just that you only have a one-in-a-million chance of doing that."

"But I always end up back in the time machine."

"That's right."

"But then it sounds like I can't ever change anything.  If the universe is anchored on both ends, what point is there in going back in time?"

"Very good question!  The point is that the second anchor point is not a 'complete' anchor point.  The first point is.  It covers the whole universe.  But the second anchor point only consists of you.  The only thing that's required is that youthe original you, remember, not the one that goes back in timeyou have to end up in the time machine.  Everything else can change."

"Wait a minute.  So forget about me being an electron and everything.  I'm me, Jason Sawyer.  I enter the time machine, and I go back a day or so.  I could see anything.  I could see methe original me.  And anything might happen, but in a day or so, that original me has to end up getting into that same time machine.  But everything else could change.  I might forget to do yardwork that I actually did earlier today.  When Ithe time-traveller mecatches back up to this present, I would know that the yardwork didn't get done.  But if you were watching, you'd all of a sudden see the leaves suddenly strewn across the yard, instead of put away in the yard trash?"

"Not quite.  Remember, when you rake the leaves, it affects more than just the leaves.  If I were watching and the leaves never got done, I'd never have any memory of the leaves being in the trash, would I?  So in fact, yes, the leaves would still be on the ground, but it wouldn't look to me like they suddenly appeared on the lawn.  In my brain, there would be lost any impression that the leaves were ever raked in the first place.  At least, I assume that's the most probable outcome.  There are all sorts of other outlandish possibilities that are more drastic but which I probably wouldn't have to worry about."

"All right, forget about the trash.  What about if I went back, really far back, far back enough to...well, let's not say I shot my grandfather, but I somehow set him up with someone other than my grandmother.  How can I possibly end up in the time machine then?"

"Hmm, let's think..."  Dr. Vanner considered this.  "OK, well, how well did you know your grandfather?"

"Huh?  Uhh, kind of?  Ihe died when I was eight.  How does that matter?"

"It matters because it's not sufficient that you end up in the time machine.  You also have to end up going into the time machine in a state that's sufficiently 'consistent' (in a technical sense) with the way you ended up going into it 'the first time.'  And that state includes your brain: everything you remember and know about yourself and your experiences.

"So what happens?  You obviously have to go into the time machine.  So somehow, some collection of particles comes together to form you.  The way it actually happened is just one possibility: Your parents conceive you, and you start out as a small number of particles.  Over time, you take on some particles, and lose some other particles, and eventually, grow up to be who you are today.

"Now one other possibility is that somewhere, near here, just a few moments ago, a collection of particles just randomly happened to show up to make...well, you, but including all the memories you currently have (which would, in that case, be completely fictional).  In a classical world, that's impossible.  In a quantum-mechanical world, it's merely improbablealthough we're talking really, really improbable.  Like you could run the universe a googol times and it wouldn't even come close to happening.  Still, it's possible.  And because by setting your grandfather up with someone other than your grandmother, you've already eliminated a bunch of probable outcomes that end up with you in the time machine, all the improbable options get a boost, so to speak.  As Sherlock Holmes said, 'When you have eliminated the impossible, whatever remains, however improbable, must be the truth.'  In this case, by setting up your grandfather on a date, you've made certain options impossible, and they're eliminated.  More than one path remainsall of them improbable a priori, perhapsbut one of them has to happen, in order for you to get back in that time machine.  The option of just a bunch of particles coming together to make a fully-formed you is always available.  If nothing is left besides that, then that's what'll happen.  In this case, though, I bet something else is more likely than that."

"Like what?"

"Like...suppose that after you set up your grandfather with someone else, you go off to visit the world.  You're not going to hang out with him forever, do you?  So after you leave him, suppose he breaks it off with the other person, and gets back with your grandmother.  And everything else happens more or less the same, so far as producing you is concerned.  His life history would be a bit different, but not in ways that are really all that critical.  That's why I asked you how well you knew him.  Do you know who he was with before he met your grandmother?"

"Hmm...not really."

"Exactly.  Remember, what happens to everyone else can change, but you have to stay more or less the same.  So my guess is that the most probable outcome is that his life would change in ways that you never knew about in the first place, so that when you go into the time machine 'the second time,' whatever you knew about your grandfather the first time remains true."

"Whoa.  Wait.  That means that I have...I can't really change too much about the people that I know really well.  Like Mom, Dad, my sister, and even you a little bit.  But, everything else and everyone else could change drastically?  Maybe I'd go back, and as a result of my getting my grandfather to go out on even one date with someone different, I still come back with the rest of my family, butoh, let's just sayno World War II, ever?"

"Well, that's possible, but still unlikely, even after you eliminate the impossible.  Remember that World War II wasn't started by just one thing.  There were triggers, but there were broad forces too that were behind it.  The second anchor, and the fact that what happens is likely the most probable thing, means that the whole 'butterfly effect' thing is not as chaotic as one might think.  Just setting up your grandfather on one date is unlikely to reset all of world history.  In order to change that, you'd probably have to go back quite a bit further in time.

"Also, by the way, keep in mind that your time-traveled self is still aging.  If you go back far enough to set up your grandfather on a date, by the time you get back here to 2027, you'll probably be about as old as your grandfather would have been today.  That's assuming you make it all the way back.  There's no guarantee of that."

"Wait, I thought I had to get back.  To get into the time machine."

"No, remember, that's the original you who has to do that, not the time-traveled you."

Jason hesitated for a second.  "Right," he said finally.  "It's very confusing.  And it's weird to think that the results of going back in time are so intimately connected with me.  I can change really distant things a lot, but everything I know well is saved—at least to the degree that I know them.  How is that possible?  I mean, this machine doesn't know a thing about me."

"It doesn't know it in the usual sense, Jason, but when you enter it and go back in time, it knows everything about your current state, at the moment you go back.  It fixes it.  That's the second anchor.  The thing that makes all the other changes possible is that it is only you who is anchored in time and space.  Everything else is floating partly freeand the less you know about them, the freer they are.  Even me, to a point.  Although I still have to be able to invent the machine.  So I feel pretty safe, especially if I know the person well who's traveling in it."

"Why don't you get into it?"

Dr. Vanner looked for a moment as though she were going to answer that.  "I—I can't explain that to you yet," she said finally.  "You'll have to take my word for it that there's a good reason for me not to get into it."

"Hmm, OK."  Jason looked thoughtful again.  "All right, one more question.  Suppose I do something really drastic.  I shoot myself before I get into the machine.  Or I do something really memorable to myself, something that didn't happen the first time.  How can Ithe original meend up back in this time machine in a...what did you call it?"

"A consistent state."

"Yeah, that."

"Well, there's a limit to how reliable our senses are.  How do you know you didn't already go back in time and make a big noise in front of yourself?  Because you don't remember it.  But what makes you so sure that it didn't happen?  Is your memory that reliable?  There must have been some things that have happened that you don't remember.  Normally, it's because those things happened so far in the past that the memory has faded.  We have the notion that the memory is still there, locked inside us somewhere, that we just can't find it.  But what if the memory really went away?  My guessalthough I really don't know, I haven't tested itis that you'd just lose all memory that it happened."

"Weird.  But what if I shot myself?"

"That depends.  Maybe the shot misses, even though you think you hit yourself?  Maybe you miraculously heal in seconds?  Those sound ridiculously improbable, but you've already eliminated as impossible all the normal paths, so the truth must be something outlandish.  Even if you stay to watch yourself die, at some point, your body, reasonably healthy, must make its way into the time machine.  In that case, you might very well see your dead body vanish in front of your eyes, just in time to make it into the time machine at the right moment.  Again, impossible in the classical world, but possible in the quantum-mechanical world, andif you've already eliminated everything elseeven, in a sense, inevitable in that world."

"So the original me is immortal."

"The original you, yes," agreed Dr. Vanner.  "But only up to the point you get in that machine.  From that point on, that you vanishes, and the time-traveled you reappears at some point further back in time.  And that you is vulnerable.  Anything at all could happen to that you."

Excerpt from "Time Binder" copyright (c) 2012 Brian Tung

Friday, July 27, 2012

Review: Ready Player One

Ernest Cline's Ready Player One (Crown Publishing, 2011) features no grand, sweeping philosophical statements, no startling revelation about human nature, no moral judgments or object lessons.  Like the game that forms the backbone of its plot, it is an adventure with a beginning, a middle, and an end, and it rests its case on that straightforward simplicity.

Wade Watts, like the protagonist of many a science fiction novel, is a high-school student-cum-computer-geek in a dystopian society, but unlike many a science fiction novel, RP1 hardly dwells a second on the dystopia part.  The world of 2044 is in ruins, due to a catastrophic shortfall in fossil fuels, but this crisis is put in primarily to motivate the near-universal emotional investment in OASIS (Ontologically Anthropocentric Sensory Immersive Simulation), a sort of virtual-reality massively multiplayer online game that serves simultaneously as school, work, and escape for most citizens, including the continually impoverished Wade.

OASIS was the brainchild of James Halliday, a reclusive, Wozniakian genius with an intense penchant for 1980s pop culture, who, with his more affable business partner, Ogden Morrow, built a multi-billion dollar computer game empire starting in the 1990s.  Over the years, they gradually drifted apart, as Morrow focused on sustaining their company after the death of his wife (his and James's childhood friend) and Halliday seemed to fall deeper into mental illness.

The events of RP1 are set in motion when Halliday dies in 2039.  His death is announced not on the obituary page, but in a video will and testament shot by Halliday himself.  Halliday had no wife, no children, no surviving relatives at all, so in his video, he explains that he will bequeath his entire estate (valued at about a quarter of a trillion dollars) to the first person to solve a series of puzzles embedded into OASIS and its millions of fictional worlds.

Wade had a boundless admiration for Halliday even before his death, so he knows all about 1980s culture, an asset that will stand him in good stead in his quest for the billions.  The problem is, so do many of the other OASIS users, including his best friend Aech (pronounced "H"), geek-girl blogger Art3mis, and the obligatory bad guys, the faceless multi-national corporation Innovative Online Industries (or IOI)—none of whom Wade has actually met face-to-face.  Throughout RP1, Wade will have to contend with each of them and his other rivals, some of whom are ready and willing to commit murder and worse in their race for the prize, as well as Halliday's own devilish imagination and his obsession with the 1980s.

RP1 is written in a quick, breezy style with pulpish overtones.  (Of course, for those who grew up with the golden age of science fiction, the pulp might be a positive.)  In developing his story, Cline feels compelled to explain a bit of Halliday's world creation, and as a result occasionally gets caught up in world-building of his own.  From time to time, we are treated to technical details on how his characters connect to OASIS—details that will abruptly jar many readers from his otherwise breathlessly scripted (and somewhat thin) plot.

The 1980s pop culture references are another matter.  They are dotted liberally throughout the book, sometimes merely for flavor, other times integral to the plot.  And Cline's novel features a slightly implausible ending, albeit one that mirrors that of popular movies in its time frame.  For those of us who grew up in the 1980s, reading RP1 is a bit like watching retrospective "clip" episodes of shows like Silver Spoons and Family Ties (one of Halliday's favorites); others will just be bemused by the constant parade of cultural touchstones they have no connection with. 

To a certain extent, Cline is trying to maintain his footing on a slippery slope.  The technical and pop-culture references are consistent enough to suggest that he had more in his back pocket, details that would have appealed to a very specific audience, but which he held back in aiming for a broader audience.  On the other hand, if he had held back even more, RP1 might have been more accessible, but it would have lost much of its childlike appeal.

Ultimately, RP1 lays it on thick with its geek and pop culture details, thick enough to turn off readers who don't sympathize with its emphasis.  One gets the distinct impression, however, that it otherwise wouldn't be substantial enough to satisfy those of us who do, and while it teeters on the precipice from time to time, RP1 just does get the job done.

Brian's 0-10 score: 6.0

Friday, July 20, 2012

Sense and Mind

One of the staples of parapsychology is extrasensory perception—ESP.  Apparently, in the early days of ESP investigations, a standard deck of playing cards was used, but it had some infelicities: Some of the cards had complex designs that, it was claimed, would interfere with the measuring of ESP ability, and the backs could be used by charlatans to identify cards by means other than honest ESP.  Thus were born the Zener cards.

Zener cards are those specially designed ESP cards that you've no doubt seen: a circle, a plus, wavy lines, a square, and a star—five of each in a 25-card deck.

Not only are they simple, straightforward designs, but when placed in the foregoing order, they also embody (in some intuitive way) the numbers one, two, three, four, and five.  Nonetheless, when they were first introduced, Zener cards had many of the same problems as did the ordinary playing cards.  The first Zener cards were made of thin enough paper that it soon became evident that some purported ESPers were simply looking through the cards.  They were subsequently made with thicker paper with opaque backs.

Another problem that arose was that some of the first ESP experiments allowed the participants to see the cards as they were guessed.  In a 25-card deck, random guessing should permit you to correctly guess one-fifth of the cards on average, or five of them.

However, if you are permitted to see each card after guessing, you can determine which pattern is most likely to show up on the next card.  For instance, suppose you guess that the first card is a circle.  It comes up, let us say, a square.  That one is guessed wrong, but by seeing that the first card is actually a square, you gain the knowledge that the second card is slightly less likely to be a square than any of the other patterns (since there are only four squares left, but five of each of the others).  Each succeeding card gives you even more information.  By the end, with careful counting, the last card is precisely determined; its pattern is the only one that has shown up just four times.

If you always guess optimally, you will correctly guess almost nine cards out of the 25.  That is a level of accuracy that one would otherwise obtain with a probability of only 0.05—the level at which one is provisionally determined to possess genuine ESP.  Needless to say, such experiments were quickly barred.

Suppose, though, that you did an ESP exhibition.  You are not permitted to see the cards after each guess, but you do get to hear the response of the audience to each successive guess and card.  Even without collusion, it isn't much of a stretch to imagine that you'd be able to determine whether you guessed correctly or not.  How many will you guess correctly now, on average?  It should be clear that the number should be somewhere between five and nine, since you have more information than when you didn't get any feedback, but somewhat less information than when you saw each card.

To see some of the issues in determining the expectation, consider a much shorter deck: a five-card deck, with one of each design.  If you receive no feedback at all, you should be able to guess each card correctly with probability 1/5, or an average of one correct card in all.

On the other hand, suppose you see each card after you guess it.  The first card, you guess correctly with probability 1/5.  Having seen what card it actually is, you know which four cards remain, so you guess the second card with probability 1/4.  The third card is guessed with probability 1/3, the fourth card with probability 1/2, and the last card with complete certainty—probability 1.  To determine the average number of correct cards, simply add up the probabilities: 1/5 + 1/4 + 1/3 + 1/2 + 1 = 137/60, a bit more than two and a quarter cards.
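As a sanity check, that sum is easy to confirm with exact fractions:

```python
from fractions import Fraction

# See-each-card case for a five-card deck: the successive guesses are correct
# with probability 1/5, 1/4, 1/3, 1/2, and 1.  Adding up the probabilities
# gives the expected number of correct guesses.
expected = sum(Fraction(1, k) for k in (5, 4, 3, 2, 1))
print(expected, float(expected))  # 137/60, about 2.28
```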

Now, suppose you find out only if your guess was correct.  For the first card, you have no reason to expect that any pattern is more likely than any other; therefore, without loss of generality, suppose that you guess a circle.  You will be correct with probability 1/5.

Suppose you guess correctly.  You are left with a four-card deck, each card equally likely.  Again, since you have no reason to believe any pattern is more likely to be in the second position than any other, you can guess any pattern; let us suppose you guess a plus sign.  This time, you will be correct with probability 1/4.

On the other hand, suppose your guess of a circle on the first card was incorrect.  In that case, the circle must be somewhere in the four remaining cards; it is 1/4 likely to be in the second position.  The other patterns, however, could still have been in any of the five original positions, including the first one.  You know only that it was not a circle.  Therefore, you should also guess that the second card is a circle; you are once again correct with probability 1/4.

It is with the third card that matters become more interesting.  Suppose you guessed correctly on both of the first two cards: the first was a circle, and the second was a plus sign.  You can guess any of the three remaining patterns for the third card; let us suppose that you guess the wavy lines.  You will be correct with probability 1/3.

Or, if you guessed both cards wrong—neither of the first two cards was a circle—you should guess a circle once again for the third card, which will be correct with probability 1/3 again.

Or, if you guessed right on the first card but wrong on the second—that is, if the first card was a circle, but the second card was not a plus sign—you should guess a plus sign again on the third card, which will be correct with probability 1/3 yet again.

The last case is the difference.  If you guessed wrong on the first card, but right on the second, then you know only that the second card was a circle.  The other remaining cards could have been any of the four remaining patterns with equal likelihood.  You can guess any of them, but you can do no better than a probability of 1/4 of guessing correctly.

The analyses of the fourth and fifth cards are more complex.  So the questions of this post are: What is a good way of approaching the problem?  And what is the average number of cards you will guess correctly?

EDIT: The second question of this post is: Suppose you have k cards, all distinct.  If you always guess optimally, and find out whether each guess was correct (but not what the card actually was), then what is the expected number of correct guesses?  Does this number approach a limit as k increases without bound?  If so, what is that limit?  The answer may surprise you. 

(If you are not given any feedback on your guess, then the expected number of correct guesses is always k × 1/k = 1.  If you get to see each card after you guess it, the expected number of correct guesses is 1 + 1/2 + 1/3 + ··· + 1/k, which goes up as the natural log of k (and is therefore unbounded).  The question above has to do with the situation where you only get feedback on whether your guess was correct or not.)

Friday, May 11, 2012

The Limitations of Sense

As I've mentioned previously, I lived in the dorms in college.  In addition to balky vending machines, the dorms also had a number of lounges—common areas on selected floors for people to gather for the purpose of studying (if they didn't mind a bit of noise), watching TV, or generally screwing around.  And, from time to time, there was the occasional Bible study group.

I hasten to emphasize that the study group people (who generally lived in the dorms themselves) were very reasonable about their use of the lounge.  They were perfectly willing to wander around in search of a mostly unused lounge, and they asked the others instead of just plopping themselves down and using the space.  In my own turn, I was perfectly willing to move over to defrag the chairspace in the lounge and allow them their own section.

Once, though, they did manage to irritate me.

I had settled in with my Walkman, listening to an album.  (For the benefit of those of you who were born in this millennium: Songs used to be sold on physical media, called "vinyl" or "records."  These records could be "singles," or they could be multiple songs sold on one "album."  We had this innovation—developed by Sony, a company that existed even then—called a Walkman, which played "tapes," on which songs could be transferred from the record.  It was called a Walkman because you could walk around with it.  You could listen to a whole entire album and not be tethered to your "component stereo system," which was a collection of devices used to play music at a time when computers had memory sizes measured in kilobytes.  We thought it was great.)

Anyway, the Bible group came in and said they wanted to use the lounge and they promised not to be too loud.  Since I was the only other one in the room and I didn't want to be a complete jackass, I cheerfully agreed and moved over to the other side.  But in doing so, I took off my headphones.  And so, as they began discussing the Bible, I listened to them.  It was interesting, after all.

After some time, however, I guess it became increasingly evident that I was listening to them, and since it was apparently one of their objectives to spread the word to as many people as they could, they began working on me.  Now, I was brought up without any religious background.  (Oddly, I do recall that we had a napkin holder that had some strange incantation on it about "daily bread," although that was never explained to me.  I had to find out about it on my own.  But that's a story for another time.  Essentially, there was no religion in my upbringing, at all.)

What's more, I had by this time become fascinated by science, and the scientific method.  I didn't have a firm idea, perhaps, of how science got done, exactly, but I did have the notion that people were fallible, and experiments were conducted so that we could find things out without relying solely on fallible humans.  And it seemed to me that the more fantastical stories in the Bible (as opposed to the moral precepts, say) simply would not stand up to any kind of scientific inquiry.  I did not believe that there existed anything like the Christian god.  And I'm sorry to say that, somehow, that came out.

Well, the floodgates opened up after that.  And I just could not get them closed back up.  For some reason, I was made to answer for the slightest failing or shortcoming of science as it pertained to anything, and I mean anything, in the Bible.  To be sure, I was not blameless in this; at that age, I had not learned to adopt the sort of detached self-doubt that I can effect these days, and I was unfoundedly certain about the points I made, which landed me in some hot water.

I don't remember how I managed to extricate myself from the "discussion," but I do know that it took a couple of hours, after which I went to my room and lay down.  I was exhausted.

A few of them came up to me the next day, and apologized for their aggressiveness.  I said I understood, and apologized for my unseemly certainty.  But it set me to thinking: I did feel pretty certain about my atheism.  Why?  What made me feel so certain?  I had some vague sense that it had something to do with a kind of epistemological conservatism (though I wouldn't have known to put it in such a way)—the idea that one believes in as few things as possible in order to understand the world—and the proposition that extraordinary claims require extraordinary evidence.

It took me some years, however, before I could fully work out what my situation was with regard to atheism, and agnosticism, and all that.  It came about like this:

Much later, I was talking to this fellow, and I mentioned some of this mess I got in with the Bible study group.  And so he asked me, what did make me so certain?  He thought that people who could feel so certain that there was no god were just as scientifically irresponsible as those who could feel certain that there was one.

Fortunately, by this time, I had read Wittgenstein (I'll bet that's the only time you'll hear anyone consider it fortunate to have read Wittgenstein, and by the way, he looks just about that crazy in every picture of him I've ever seen), and I knew he had, too, so I could express it a bit more concisely.  I said that I was about as certain that there was no god as Wittgenstein was that he had a hand.  What good ol' Wittgenstein—and I, by extension—meant by that was that the knowledge that one has a hand represents an upper limit of certainty: a limit imposed by our senses.  We know it not because it is logically proven beyond a shadow of a doubt, but because doubt itself is pointless in this regard.  In other words, the degree to which we know it is a milestone of certainty—in a very real sense, it defines certainty.  In fact, I think Wittgenstein says as much, right at the very start of his final work, On Certainty:
If you do know that here is one hand, we'll grant you all the rest.
My friend was satisfied by that, I believe, and he walked away.  As he walked out, though, it hit me that that was it—that the limitations of my senses were the basis of my "certainty" that there was no god.

To begin with: From time to time, some atheist wag will remark that we have no more evidence for the existence of the Christian god than we do for, say, the Flying Spaghetti Monster.  Which is true, so far as it goes, but it doesn't really establish atheism (the belief that there is no god) so much as it does agnosticism (the lack of a belief that there is a god).

So then, the hypothetical line of questioning goes, what would it take to establish the existence of a god in any kind of scientific way?  Because, as I tell others, if you take a position against something, then as a self-check, you must ask yourself what it would take to convince yourself you were wrong.  Because if there's no amount of evidence that would do it, then your position isn't a scientific one; it can't be falsified.

I thought about all the miracles that are said to be the work of some god or another, all the things that happened that could not be explained.  In most cases, I rather thought that these were evidence less for a god than for the selective ingenuity of humans: If people wanted to believe in something, they were remarkably ingenious about how they managed to assemble evidence in its favor.  But if they didn't want to believe it, that ingenuity mysteriously went away.  In other cases, I couldn't come up with a plausible explanation, except to say that the people who related these stories (thousands of years ago, remember) were either mistaken or, possibly, exaggerating.  That might not have satisfied anyone who was truly on the fence, but it satisfied me.

It boiled down, therefore, to what I could personally witness that would convince me I was wrong.  What could a supernatural being do that would sway me?  It quickly occurred to me that whatever evidence could possibly support the claim to existence of a god had to be much more extraordinary than the possibility that my senses were fallible.  When it came to the existence of a god, I could not grant that I had a hand.

We hear "Seeing is believing," but we see things all the time that, it later turns out, aren't true.  And so, not as an expression of any desire, but simply as an acknowledgement that my senses can fail, catastrophically at times, I flatly admit an incapacity to believe in a god, any god (as normally represented—I obviously don't mean just a super-powerful being, but someone who brought about the world).  It's a personal incapacity, not one that I could possibly extend to anyone else, but it's insuperable just the same.

Sunday, April 15, 2012

If It's Negative in Area, Do I Get a Refund for Buying It?

I realize this is mostly crazy on my part, but honestly, I really wish real estate people would stop using the plus-minus sign (±) in this jackass way.

Sunday, April 1, 2012

The Tip of the Iceberg

A couple of weeks ago, as I write this, Dharun Ravi was found guilty of invasion of privacy and a host of other charges in a sequence of incidents, including spying via webcam, that ultimately culminated in the suicide of his roommate Tyler Clementi.  Ravi faces up to ten years' imprisonment, and deportation to his native India.

Now, since it's been a couple of weeks, a lot has already been written about whether or not Ravi was culpable, whether others had a role, what it says about us as a society that we continue to demonize and ridicule homosexuality (or conversely, what it says about us that we are able to demonize and ridicule someone for being a peeping Tom and a loudmouth).  I'm not going to say anything about that.  As is my wont, I'm going to talk about statistics, but with an eye toward how we perceive events like this.

In a way, those who wonder how we can hound Ravi the way we do have a point, even if I disagree with their larger perspective: What Ravi did, as wrong as it was, is probably happening all over the country—or the world—as we speak.  Is Ravi wronger because what he did led to Clementi's suicide?  Should he, in effect, be the scapegoat on which we place all the otherwise indistinguishable wrongs that, by sheer dumb luck, resulted in nothing more than a change of roommates?  I've been following the Ravi/Clementi case for a few months, after Clementi's suicide but before the trial began, and I seem to recall that Clementi did in fact look into switching rooms, but for whatever reason did not manage to do so before his death.  If he had changed rooms, where would we be now?  Would we be up in arms about homophobia and scapegoating?

This is only part of a general problem that human beings have with assessing rare events.  To be sure, it's not simply a matter of placing too great an emphasis on the result of those events, although we do do that.  (Many of us greatly fear the rare airplane crashes, even though they are at least an order of magnitude safer than road travel by practically any metric you care to choose.)  More than that, it's that we just do not have the vocabulary to compare these rare events, and their consequences, with their more typical brethren.

Interestingly, we don't really run into significant roadblocks with their opposite number, the rare non-events.  If someone intentionally shoots a bullet into a crowd and, against incredible odds, manages to hit no one at all, we still find them guilty of reckless endangerment.  The rare non-homicide doesn't conceal from us the essential wrongness of the act.

But Ravi's case, and others like it, put us in a quandary.  Despite what others have said, I don't believe what Ravi did led inevitably to Clementi's suicide.  We tend to think so because Clementi did in fact die, and what Ravi did is reprehensible and did in fact lead materially to Clementi's death.  But to think that it was the unavoidable outcome of what Ravi did is to assume that his actions are as rare as Clementi's suicide, that whenever this kind of thing happens, we will hear of it.  This strikes me as burying one's head in the sand.  It's not appealing, because many of us really do want to blame Ravi, but one can't consistently believe both that Ravi inevitably caused Clementi's suicide, and that their situation is common.

But if the opposite is true, and similar situations are playing themselves out all the time (just with much lighter consequences), then what are we, as a society, to do with Ravi?  What are we to do with anyone who does something criminal that then leads, against (let us say) hundred-to-one odds, to someone's death?  In effect, their actions put them in a lottery of sorts.  We punish the lottery losers, and everyone else goes unscathed, perhaps even unnoticed.

Is this justice?  Does the punishment really fit the crime, or is it more that it fits the consequences?  If it fits the crime, what should we do about those whose crimes do not lead to any substantive damage?  On a more abstract level, are we doing what we should to protect potential victims?  Even from the point of view of American jurisprudence, in which the results matter, the situation is unclear.  By throwing the book at Ravi, and missing the others, do we send the message that what Ravi did was wrong?  Or do we merely send the message that one just needs to avoid getting caught?

Someday, perhaps, situations like Ravi/Clementi will cease to happen.  It seems unlikely to me, but just perhaps!  But in the meantime, we must think hard about the consequences of punishing people for the results of their crimes, when those results are rare.

Wednesday, March 14, 2012

No Two Alike

Another meandering post.  You've been warned.

I'm re-reading Isaac Asimov's informal autobiography, I. Asimov (a play on his collection of robot stories, entitled I, Robot, and to be distinguished from his formal autobiographies published earlier in his life), and finding it quite entertaining.  Partly, this is because I'm an inveterate re-reader and re-watcher.  My enjoyment of a piece of writing or a movie or a TV program doesn't diminish because I know how it goes.  If I enjoyed it the first time, I'll enjoy it just as much the seventh time, or the fifty-seventh.  Even a sporting event isn't diminished because I know the final score (although I do prefer to watch it live the first time, if I can).  All this just by the way.

Anyway, in this book, Asimov mentions his facility at giving impromptu talks, and mentions by way of illustration that he has given a couple of thousand talks, no two exactly alike.

And that phrase, "no two exactly alike," is so characteristic of snowflakes that I immediately thought of them.  In fact, I'd go so far as to wager that if you asked people what the first thing was that they thought about snowflakes, it would be that no two are alike.

But is that actually so?  Have there really never been two snowflakes alike?  If you're like most people, you'd probably just as soon leave well enough alone and assume it's true.  For the heck of it, though, take a trip with me down the rabbit hole.

The whole idea that no two snowflakes are exactly alike has been around since time immemorial, but things really got moving with a man named Wilson Bentley (1865-1931), who grew up in Vermont.  When he was fifteen, his mother gave him an old microscope to experiment with.  Well, Vermont winters being what they were, I suppose it's natural that Wilson should have been drawn to snowflakes.  And so he took to maneuvering snowflakes under his microscope and sketching them.

It turned out, however, that they melted quickly—far too quickly for him to sketch in time.  So Bentley assembled a contrivance, a camera attached to a microscope attached to a board covered with black velvet, which permitted him to take pictures of the snowflakes before they melted.  Over his lifetime, he took images of over five thousand snowflakes, and sure enough, no two of them were exactly alike.

Five thousand, though a lot to take pictures of, is still a minuscule fraction of all the snowflakes that ever were, or even of those that are currently in existence (a constantly changing population, to be sure).  Surely there is no way that we could possibly take pictures of all the ones that currently exist, let alone those that have ever existed.  Is there perhaps another way of answering the question?

Consider: Each year, a substantial portion of the Earth is hit by snowstorms sufficient to dump several meters of snow on the ground.  I'm not sure of my statistics, but we probably wouldn't be far off if we assumed that the total annual snowfall amounted to a depth of, let's say, two tenths of a meter over the entire surface of the Earth, if it was spread around evenly.  Since the surface area of the Earth is about 5×10^14 square meters, we're talking about 10^14 cubic meters of snow.  When packed tightly (tightly enough to crush them), snowflakes might occupy a cube about a tenth of a millimeter on a side.  So each year, we get something like 10^26 snowflakes.  Taking into account the fact that there has been snowfall for a few billion years, there have been perhaps 10^36 snowflakes, ever, in the Earth's history.  That's a lot of snowflakes.
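None of these figures is precise, of course, but the arithmetic itself is easy to check.  Here's a sketch, in which every number is one of the rough assumptions above (good to an order of magnitude at best), not measured data:

```python
# Back-of-the-envelope snowflake census.
earth_area_m2 = 5.1e14   # surface area of the Earth, in square meters
snow_depth_m  = 0.2      # assumed annual snowfall, spread evenly over the globe
flake_side_m  = 1e-4     # a tightly packed snowflake as a 0.1 mm cube
years_of_snow = 3e9      # "a few billion years"

flakes_per_year = earth_area_m2 * snow_depth_m / flake_side_m**3
flakes_ever     = flakes_per_year * years_of_snow
print(f"{flakes_per_year:.0e} per year, {flakes_ever:.0e} ever")  # ~1e26 and ~3e35
```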

However, there are also lots of different shapes that any particular snowflake might take on.  Snowflakes exhibit six-fold symmetry because they're constructed from ice crystals, which have six-fold symmetry.  So let's represent a snowflake as a hexagonal lattice, a bit like a honeycomb of cells, each of which might be occupied by an ice crystal, or not.  An individual hexagonal ice crystal is a few tenths of a nanometer across, whereas an entire snowflake might be a few tenths of a millimeter across.  So the hexagonal lattice representing our snowflake would have a diameter of about a million cells, and would contain about 750 billion cells in all.

Does that mean that there are nearly a trillion possible snowflakes?  No, because each one of those cells could either have an ice crystal, or not.  We could represent the snowflake by filling each one of those cells with a 1 if it had an ice crystal, or a 0 if it did not.  In other words, each snowflake would be represented by a huge binary number with 750 billion digits.  Such a tremendous number is on the order of 10 raised to the 230 billionth power.

It's hard to overstate how big a number this is.  Even if you were, somehow, to write 100 digits a second, every second of every hour of every day, without interruption for sleep or eating, you would have perhaps only an even-money chance of just writing this number out during your entire lifetime.  It goes without saying that it's much, much, much larger than 10^36.  (It is, however, much smaller than a googolplex.  I just thought I'd point that out.)

However, we're not playing quite fair, because we've completely neglected the symmetry exhibited by most snowflakes.  If we take that into account, it turns out that the number of possible snowflakes drops to something more like 10 raised to the 40 billionth power.  Quite a bit smaller, but still much larger than 10^36.
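These estimates, too, can be checked mechanically.  The sketch below uses the assumed figures from above: a hexagonal lattice about a million cells across (a hexagon covers roughly three-quarters of its circumscribing square, hence about 750 billion cells), each cell independently occupied or not:

```python
import math

diameter_cells = 1e6                      # snowflake is ~a million crystals across
cells = 0.75 * diameter_cells**2          # hexagon of that diameter: ~750 billion cells
digits = cells * math.log10(2)            # decimal digits in 2**cells
digits_sym = (cells / 6) * math.log10(2)  # crediting six-fold symmetry

print(f"{cells/1e9:.0f} billion cells, about 10^({digits/1e9:.0f} billion) "
      f"snowflakes, or 10^({digits_sym/1e9:.0f} billion) with symmetry")
```

The digit counts come out around 230 billion and 40 billion, matching the powers quoted above.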

There's another thing, too.  Bentley took his photographs with an optical microscope, which was of course incapable of resolving ice crystals down to the individual molecular level.  These days, we are capable of doing that, but it would be unfair to insist that snow crystals, which in an ordinary atmospheric environment would be constantly changing anyway, be identical to that level of precision.  A typical photograph of a snowflake might be able to resolve crystals to a level of detail that would take a hundred thousand cells to fill the entire snowflake.  Remembering to take into account the symmetry of snowflakes, there could still be on the order of 10^5,000 different snowflakes, at this reduced level of resolution.  Still much larger than 10^36.

OK, how about this?  If one looks at an array of Bentley's photographs, one notices that the ice crystals are not arranged haphazardly around the snowflakes, even after one takes into account the six-fold symmetry.  Instead, there is order at all different scales.  In fact, people have likened snowflakes to fractals; there are even simulations of snowflake generation that build upon the fractal arrangement.

That reduces the level of variation accessible to the snowflake.  It's hard to say for sure, but in most of the Bentley images, I think one can make out about six levels of detail.  (That's consistent with a scale ratio of about two to three.)  What's more, each unit of detail has within it detail that only goes about three or four levels down, which means that each level can be represented using about fifty bits or so.  That means a total of three hundred bits might suffice to denote a snowflake to the level of precision needed to figure out whether they match or not.  That would still mean about 10^90 distinct snowflakes, though.

All right, one last thing, which at first will seem to be a significant digression.  There is, in probability, something called the birthday paradox, which goes something like this: Suppose you get fifty otherwise randomly selected people together in a room.  What are the odds that at least one pair of them will share the same birthday (possibly different year)?  One in four?  One in ten?  How many people do you think you need to make the odds even?  Would forty do it?  How about sixty?  A hundred?

The answer, surprising to most people who haven't heard this question before, is that the odds are about even that out of just 23 people, at least one pair will share a birthday in common.  It's a bit surprising because there are 365 days in a year (not counting leap day), but consider what happens if you choose the people one by one.  The first, of course, can have any birthday at all.  In order to avoid a pair sharing the birthday, the second must not share a birthday with the first.  The third must avoid sharing a birthday with both the first and the second.  The fourth must avoid sharing a birthday with the first, the second, and the third.  And so on.  By the time you get to 23 people, there are about 250 birthday sharings that must be independently avoided.  It's not surprising that such sharings are avoided only half the time.

It turns out that this "paradox" (not truly a paradox at all, naturally, but just a counter-intuitive result of probability theory) has very wide applicability.  The number of samples that can be randomly selected before you stand a good chance of getting a pair is much smaller than the total number of choices.  In fact, it's on the same order as the square root of the number of choices.  (There's that square root again!)  The square root of 365 is a bit over 19, and sure enough, 23 isn't very far over 19.  If one takes into account the year of birth over the course of a century, then there are about 36,500 different birthdates, but the square root of 36,500 is only about 191, so that only about 200 randomly selected people are needed before you have a good chance of matching the entire birthdate.  And the square root of 10^90 is 10^45, so the size of the collection of snowflakes you need to have a good chance of pairing two of them is about 10^45.
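The thresholds themselves are easy to compute directly, with no approximation beyond floating point.  Here's a sketch:

```python
def collision_threshold(days, target=0.5):
    """Smallest group size at which the probability that some pair shares a
    birthday (out of `days` equally likely ones) reaches `target`."""
    p_all_distinct = 1.0
    n = 0
    while p_all_distinct > 1 - target:
        p_all_distinct *= (days - n) / days  # person n+1 avoids the first n birthdays
        n += 1
    return n

print(collision_threshold(365))    # 23 people for a 365-day year
print(collision_threshold(36500))  # square-root scaling: a couple hundred people
```

The same square-root rule is what turns the 10^90 possible snowflakes into a collection of "only" about 10^45 needed for an even-money match.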

It's more than 10^36, but not much more.  (What's a factor of a billion between friends?)  And there are a lot of back-of-the-envelope manipulations in what I wrote, so perhaps there are other deeper symmetries to take advantage of.  But I think it's rather magical that the numbers work out nicely enough that it's quite possible that somewhere, across the vast mists of time, there were, at two (probably very different) points, two identical snowflakes!

Wednesday, February 15, 2012

Slip Sliding Away

Here's a counter-sliding game I came up with a while back, while visiting my parents.

My parents have these small flags of various countries, which can be stood up, UN-style, because they're on flagpoles that are stuck into circular bases.  The flags can be removed from the bases to be waved, and when they are, you're left with just the circular bases.  One day, while idly sliding them around the table, I thought about using them for various geometrical exercises.  Of course, if one doesn't have small circular flagpole bases, one can use any kind of equally sized circular tokens; any kind of circular coin should work just fine.

The rules I set up for myself were as follows:
  1. One starts out with two touching counters.  This counts as two moves.  (For "historical reasons.")
  2. On any subsequent move, one may add a counter; this counter must touch two existing counters on the table.  (There is an exception, which I will mention later, in connection with an outstanding puzzle.)
  3. Or, one may remove a counter.  One must remove the counter by sliding it, however, not by lifting it up off the table.
The following picture shows an example.

Here, counters 1 and 2 are placed first.  One may then place counters 3, 4, and 5 in that order.  Removing counters 3 and 4 then leaves a straight line of three counters.  One could not construct that straight line directly, by just putting down counters 1, 2, and 5, because counter 5 would not have been placed in contact with two counters.

One could continue twice more around counter 2, creating a filled hexagon of seven counters.  If, however, one wanted to create a hollow hexagon, one would have to remove that middle counter at some point.  It seems tempting to place one more counter below counters 2 and 5, and then remove counter 2 to place at the last corner of the hexagon, but the following diagram shows why that won't work:


The space between the two counters is not wide enough to fit the center counter through (in fact, that space is only √3 - 1 = 0.732+ times as wide as necessary), so it cannot be slid out in accordance with Rule 3, above.  You might like to see if you can figure out a solution for the hollow hexagon before reading on.
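That 0.732 figure is easy to verify with coordinates.  Here's a quick Python sketch (the layout is my reconstruction of the five-counter trap, with unit-diameter counters):

```python
from math import sqrt, cos, sin, pi

d = 1.0  # counter diameter; outer centers sit distance d from the center

# Five outer counters at 60-degree spacing around the trapped center
# counter, with the sixth position vacant.
angles = [k * pi / 3 for k in range(5)]
centers = [(d * cos(a), d * sin(a)) for a in angles]

# The widest escape route is between the two counters flanking the
# vacant position (positions 0 and 4), whose centers are sqrt(3) apart.
(x0, y0), (x4, y4) = centers[0], centers[4]
between = sqrt((x0 - x4) ** 2 + (y0 - y4) ** 2)
gap = between - d  # clearance between their rims

print(round(gap, 4))  # clearance in diameters: sqrt(3) - 1, short of 1
```

The clearance comes out to √3 - 1 ≈ 0.7321 diameters, so the center counter, needing a full diameter, is indeed stuck.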

The trick is to set up support for the fifth corner first, then slide out the center counter to become the fifth corner; the sixth corner is then easily slid into place.  Begin by placing six counters in a parallelogram arrangement:

Then slide counter 2 to touch counters 4 and 6:

Now slide counter 4 into the place previously occupied by counter 2:

Finally, slide counter 1 around to touch counters 2 and 4, at the sixth and last corner of the hexagon.  Voilà!

I leave you with two puzzles, one fairly simple, and one open (that is, unsolved):
  1. Follow the above rules to construct a hollow triangle of side 4 (just like the arrangement in ten-pin bowling, but without the center pin), in as few moves as possible.  There is more than one solution.
  2. Suppose we add an exception to Rule 2, above: We permit a counter to be placed in an arbitrary location on the table, but with the proviso that no required property of the final arrangement can depend on the exact location of that counter.  (For instance, a construction of a rectangle that depends on a counter being placed somewhere between 1 and 2 counter widths away from another is OK, but one that depends on it being placed exactly 1-1/2 counter widths away is not.)  In that case, is it possible to construct a perfect square of four counters, of any side length?  The sides of the square need not be filled in with any counters.

Monday, February 13, 2012

Weighted Fair Division

I'm sure this is an old puzzle somewhere in the world, but it came upon us a few years ago here at work in connection with driving to lunch.

Where I work, the company provides a cafeteria where one may purchase lunch.  Unfortunately, the lunch is either too expensive or not good enough, depending on your point of view, so we generally eat out.  We're lucky that we can do that.  Anyway, in general, we try to take turns driving so that we're all likely to drive about equally often.  It doesn't always work out that way, but that's the aim.

If we all ate out every meal, it'd be simple; we'd all drive with equal probability.  But what happens if, as has been the case occasionally throughout the years I've worked here, one of us can only eat out once per week?  How often should that person drive, when they do eat out?

To make things simpler, let's assume that there are two daily diners (each eating out five times per week) and one single-day diner.  Four days out of the week, there are only two diners.  If each one drives one-half of the time, then both of them end up driving two days out of the four.

On the last remaining day of the week, when there are three diners, should each drive one-third of the time?  Well, if we do things that way, then each of the two diners drives 2-1/3 days, on average, whereas the single-day diner drives just 1/3 day per week, on average.  That's not fair, because the two daily diners drive seven times as much as the single-day diner, even though they only eat five times as often.  The single-day diner should have to shoulder more of the driving burden on that one day.

Let's denote by p the probability that the single-day diner drives on that day.  Then the two daily diners drive on that day with probability (1-p)/2, and over the course of the week, they drive (5-p)/2 days, on average.  According to our fairness metric, we must find p such that (5-p)/2 = 5p, which yields

5-p = 10p

11p = 5

p = 5/11

So the single-day diner should drive nearly half of the time, on those days when he or she joins the two daily diners.  By a similar line of reasoning, if there are three daily diners, that probability drops to 5/16, and in general, with n daily diners, the probability is 5/(1+5n), with each of the daily diners driving five times more often, or 25/(1+5n).

What happens if there are m one-day diners (each of them eating on the same day)?  Then the probability p that each one-day diner drives on that one day drops even further, to 5/(m+5n).
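The general formula is easy to verify with exact rational arithmetic.  Here's a small Python sketch (the function names are mine) checking that, with p = 5/(m+5n), each daily diner really does drive exactly five times as often as each one-day diner:

```python
from fractions import Fraction as F

def one_day_prob(n, m=1):
    """Probability that each of m one-day diners drives on the shared
    day, given n daily diners, per the formula p = 5/(m+5n)."""
    return F(5, m + 5 * n)

def daily_diner_load(n, m=1):
    """Expected days per week each daily diner drives: 4 days among
    themselves, plus the shared day with the one-day diners."""
    p = one_day_prob(n, m)
    return (F(4, 1) + (1 - m * p)) / n

for n in (2, 3):
    p = one_day_prob(n)
    print(n, p, daily_diner_load(n))
    # Fairness: daily diners eat 5x as often, so they drive 5x as much.
    assert daily_diner_load(n) == 5 * p
```

For n = 2 this prints p = 5/11, matching the derivation above, and for n = 3 it gives 5/16.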

One might well consider (providing one is still reading) extending these to k-day diners, and whether the results depend on the k-day diners eating on the same k days, or if the results are insensitive to the distribution of those k days.

Tuesday, February 7, 2012

Roll Over, You Pats!

This past weekend's Super Bowl XLVI (that's forty-six) provided yet another confluence of probability, tactics, and sports.  That's never a bad thing.

I'm speaking, of course, of the decision on the part of Patriots coach Bill Belichick to permit the Giants to score on second down and goal from the Patriots' six-yard line, with about a minute left in the game.  The Patriots put up only token defense, so that when Ahmad Bradshaw took the handoff from Eli Manning, he was able to waltz into the end zone.  Almost literally: Bradshaw had a moment of indecisiveness as he reached the one-yard line, but soon backed into the end zone for the touchdown.

Even before that play began, color commentator Cris Collinsworth had already suggested that the Patriots might permit the Giants to score easily, because the Patriots only had one timeout remaining.  They would therefore be able to stop the clock after second down, but not after third down.  Since the play clock starts at forty seconds once the ball is set, the Giants would attempt a field goal on fourth down with only about ten to fifteen seconds remaining on the game clock.  Collinsworth reasonably contended that the Patriots should prefer trying to score a touchdown with a minute left (plus their one timeout) over trying to score a field goal with ten to fifteen seconds left (without any timeouts).

(It's worth pointing out that then-Packers coach Mike Holmgren had been roundly criticized for making a similar tactical decision fourteen years earlier, in Super Bowl XXXII against the Broncos.  Times change.)

And now, once Bradshaw had scored, Collinsworth decried Bradshaw's touchdown as a tactical error.  Well, setting aside the tendency of sports broadcasters to exaggerate practically anything, was it a tactical error?  Which outcome is better for each team?

Well, first of all, there's the intuitive argument that if one team wants you to do something, then your best strategy ought to be to resist that.  So if the Patriots are parting the Red Sea, maybe your best bet is to lie down.  And indeed, the Giants had considered that.  Manning later reported that he was telling Bradshaw to go down in the field of play.  The Patriots, for their part, said that it was immaterial, that they would have shoved Bradshaw into the end zone, but that tactic would not have worked if Bradshaw had taken a knee: Any subsequent bump by a defender, even the lightest touch, would have made Bradshaw down by contact at the one-yard line.

But let's not let psychological ploys decide the question.  Which tactical choice is the right one here?

The Patriots have two choices—allow the touchdown, or play straightforward defense—but there are more than two possible outcomes.  If the Patriots play defense, there are still multiple possibilities:
  • The Giants might score on second down anyway.
  • Or they might score on third down.
  • Or they might score a field goal on fourth down.  (We'll assume they wouldn't try to score a touchdown.)
  • Or they might fail to score at all, either because of a turnover or a missed field goal.
If the Patriots allow the touchdown, and we assume for the time being that the Giants don't refuse that touchdown, then the Patriots would have to score a touchdown in about a minute, with one timeout remaining.  Let's say they're able to do that with some probability qTD.

On the other hand, if the Patriots play defense, then there are those four possibilities:
  • If the Giants score on second down, the Patriots still have to score a touchdown with about a minute remaining, and one timeout.
  • If the Giants score on third down, the Patriots have to score a touchdown with about a minute remaining, but no timeouts.
  • If the Giants score a field goal, the Patriots have to score a field goal with ten to fifteen seconds left, and no timeouts.
  • If the Giants fail to score at all, the Patriots can simply run out the clock.
If the Giants score on second or third down against straightforward defense, the Patriots are left in pretty much the same situation as if they just let them score on second down, modulo that timeout.  So as it stands, they're just a bit worse off if they play defense.

Now let's take a look at those last two cases.  If they don't score on second or third down, the remaining possibilities are a turnover, a missed field goal, or a made field goal.  Out of those, I'd guess the made field goal happens nineteen times out of twenty.  In the remaining cases, the Patriots just have to sit on the ball, which I'd also guess would happen nineteen times out of twenty (remember, they might have to avoid the safety).  So the question roughly boils down to, which is more likely: Scoring a touchdown in a minute, or one of the following happening—scoring a field goal in ten to fifteen seconds, securing a turnover, or the Giants missing a field goal?  If it's the touchdown, the Patriots should let the Giants score.  If it's any of the remaining three choices, they should play straightforward defense.
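One can put rough numbers on that comparison.  Here's a Python sketch; every probability below is an invented assumption for illustration, not a measured figure, and it simplifies by ignoring the branches where the Giants score on second or third down anyway (which, as noted above, leave the Patriots in roughly the same spot either way):

```python
# Assumed probabilities -- all illustrative, none measured.
p_td_one_minute = 0.20  # Patriots score a TD in ~1 min with a timeout
p_fg_made       = 0.95  # Giants make the short FG (19 times out of 20)
p_fg_15_sec     = 0.03  # Patriots score a FG in 10-15 s, no timeouts
p_run_out       = 0.95  # Patriots safely kneel out after a Giants miss

# Allow the touchdown: the Patriots win iff they answer with a TD.
p_win_allow = p_td_one_minute

# Play defense (assuming it reaches a 4th-down FG try): win if the
# kick misses and they run out the clock, or if it's made and they
# answer with a field goal of their own.
p_win_defend = (1 - p_fg_made) * p_run_out + p_fg_made * p_fg_15_sec

print(round(p_win_allow, 4), round(p_win_defend, 4))
print("allow the score" if p_win_allow > p_win_defend else "play defense")
```

Under these made-up numbers, allowing the score wins out comfortably, but the gap narrows quickly if you think a one-minute touchdown drive is much less likely than one in five.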

Given that Lawrence Tynes hadn't missed a field goal of thirty yards or less in forever, the Giants were going to play possession football, and the Patriots would have no timeouts left for a field goal attempt, I'd go with letting them score, just as Belichick did.  But there's no way this is a foregone conclusion.  Sometimes, it's just a close call.