Friday, December 11, 2009

Square Roots, Lasers, and Mobilization

I promised (threatened) that I would say more about square roots, and so I am. This is me, talking about square roots again. In typical fashion, though, I'm going to start with something else that will seem, for a time, completely unrelated.

Galileo, he of the telescope, the balls rolling down inclined planes (and probably not in actuality from the Tower of Pisa), the sotto voce thumbing of the nose at the Inquisition—Galileo also discovered, or more likely rediscovered, that pendulums mark out roughly even time, no matter how far they swing. It isn't perfectly even time, owing to friction and to the circular track of the pendulum bob (although both of those can be—and were—accounted for, starting with Huygens's employment of cycloid guides). But it's pretty close.

Since the pendulum keeps fairly even time, if it swings through twice as big an arc, it must also be moving twice as fast, in order to keep beating out even time. Now, as it's defined in Newtonian physics, the kinetic energy of the pendulum bob—that is, the energy of the bob due to its motion—goes as the square of its velocity:

KE = ½ mv²

So, twice the arc, twice the velocity, four times the kinetic energy; three times the arc, three times the velocity, nine times the kinetic energy. And so on.
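That scaling is easy to spot-check with a few lines of code; here's a minimal sketch (the mass and velocities are arbitrary illustrative numbers, not anything physical):

```python
def kinetic_energy(m, v):
    """Newtonian kinetic energy, KE = 1/2 m v^2."""
    return 0.5 * m * v ** 2

# Doubling or tripling the velocity (i.e., the swing) scales the
# energy by the square of that factor.
base = kinetic_energy(1.0, 2.0)                # arbitrary mass and velocity
assert kinetic_energy(1.0, 4.0) == 4 * base    # twice the velocity, 4x the energy
assert kinetic_energy(1.0, 6.0) == 9 * base    # three times, 9x
```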

That swinging motion of the pendulum bob is an example of periodic or wave motion, so called by virtue of its swinging back and forth, much as a water wave swings up and down as it passes a buoy. Wave motion is primarily characterized by two parameters: its frequency, which is how often it returns to its starting point; and its amplitude, which is how wide it swings. So the arc through which the pendulum bob swings is essentially its amplitude. (Actually, for historical reasons, the amplitude is defined as half of that arc, from the center point of the swing to either of its extremes, but this won't affect our discussion.) So we can say that the pendulum's energy is proportional to the square of its amplitude.

This turns out to be common to many different kinds of waves—including light waves. Light is a wave. (It's also a particle, in many ways, but we'll ignore that for now.) And being a wave, it has an amplitude, which is the extent to which the light oscillates. What is it that's oscillating, anyway? In the case of water waves, it's water, and in the case of sound, it's the molecules in the air. You can't have water waves without the water, and you can't have sound waves without the air; that's why sound doesn't travel in a vacuum. But light does travel in a vacuum, so what's waving the light, so to speak? Well, the answer is that the light itself is waving, or less opaquely (heh heh), the electromagnetic fields that permeate space are waving.

In any event, like other waves, light waves also carry energy that is proportional to the square of the light's amplitude. If you double the amplitude, you quadruple the energy; triple the amplitude, and the energy goes up nine-fold. And so on.

How would light's amplitude be doubled, though? You might imagine that if you put two flashlights together, the amplitude of the two together would be twice that of each individual flashlight, and the combined light output—the energy of the two together—would be four times that of each flashlight. But I think, intuitively, we know this to be false: the combination is only twice as bright as each flashlight. And if you measure the light carefully, in a dark room, this turns out to be perfectly true.

What happened? Light waves, like other waves, have a secondary property, called phase. Two waves of the same frequency are said to be in phase if they swing in the same "direction" (in some not altogether well-defined sense); imagine two pendulums swinging in unison, so that when one swings left, the other does, too. They are out of phase if when one swings left, the other swings right, and vice versa. Or, they may be partly in phase, partly out of phase.

When you combine two light waves of the same frequency and the same amplitude, you get for all intents and purposes a single wave that is the two original waves added together. If they're in phase, the peaks get peakier and the valleys get, err, valleyier, and the amplitude of the waves is in fact doubled. On the other hand, if they're out of phase, the peaks of one get cancelled out by the valleys of the other (and vice versa), and the resultant wave has no amplitude at all.

More typically, though, the two waves are partly in phase and partly out of phase, and the resulting wave's amplitude is somewhere in between zero and two times the original. On average, one can show that the amplitude is the original times √2. What's more, if you add three waves together at random phases, the amplitude of the sum is the original times √3. And so on. Aha, the square root!

And since the energy of the final wave is the square of the amplitude, what comes out has two, three, or whatever times the original energy. Which is, of course, exactly what you'd expect. And good thing, too, because if it came out otherwise, we'd have a violation of the conservation of energy. Clearly, it takes n times as much energy to run n flashlights as it does to run one, and if their combined output were something other than n times the original, we'd have to seriously rethink our physics.
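You can check the √n claim numerically by representing each wave as a phasor (a unit-length complex number) at a random phase and adding them up. In this sketch (the function name, trial count, and seed are mine, purely for illustration), the average energy of the sum comes out to n times one wave's energy, which means the average amplitude is √n times the original:

```python
import cmath
import math
import random

def mean_energy_of_sum(n, trials=20000, seed=1):
    """Add n unit-amplitude waves at uniformly random phases and return
    the average energy of the sum, taking one wave's energy as 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(n))
        total += abs(s) ** 2          # energy goes as the square of the amplitude
    return total / trials

# Two random-phase flashlights: about 2x the energy of one,
# i.e. the amplitude of the sum averages out to sqrt(2) times the original.
assert abs(mean_energy_of_sum(2) - 2.0) < 0.1
assert abs(mean_energy_of_sum(3) - 3.0) < 0.15
```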

You might wonder if there isn't a way to get the waves to line up properly in phase so that the amplitudes do add up in the normal way, and you get a dramatic ramp up in energy. And there is; it's called a laser. A laser essentially gets n individual photons to line up in phase so that what comes out is a sort of super-photon (or super-wave, equivalently) with n² times the energy of any of the input photons. The physics-saving catch is that it takes more energy to line up, or lase, the light than you get as a result.

Nevertheless, that single photon or wave, coordinated as it is, can do things that you couldn't do with the individual photons separately. You can shine a bunch of flashlights at your eye and nothing will happen, other than a rather annoying afterimage and perhaps a headache. But even a modest laser can be used to reshape your cornea and render your eyeglasses superfluous. Of course, it should go without saying that it's not such a great idea to randomly shine lasers into your eye!

Or out, for that matter.

I see in this a kind of metaphor for human nature, and I hasten to say it's only that; as far as I know, one can't really take this and apply it rigorously in any scientific sense. But I think it's a useful metaphor all the same. I like to say that religion, among other things, is a laser of people. What on earth do I mean by that? A single human being can do a certain amount of work (in physics, work is the energy transferred by a force acting through a distance). What happens if you get two human beings together? Well, if they work against each other—if they're out of phase, in other words—less work gets done. Maybe none, if they spend all their time squabbling. Even if they're not exactly out of phase, if they're not particularly coordinated, their combined output is rather less than you might think, like the drunkard making slow and halting progress homeward because he can't put one foot directly in front of the other.

On the other hand, if they cooperate—if they're in phase—they can do twice the work. In fact, maybe they can get even more done, for there's no arguing that a coordinated combination of two people can do things that each individual person couldn't do, even adding their results together. Two people can erect a wall, for instance, that neither person could individually. Maybe, in some sense, those two people can do what it would take four people, working randomly, to achieve. And perhaps three coordinated people can do what it would take nine randomly working people to. And so on.

It's pretty straightforward to get two or three people to work together, if they're of a mind to. But what about a hundred, or a thousand, or a million? That's where ideologies can be enormously effective; through them, a thousand can achieve what would otherwise require a million. And there may be no ideology better suited for the purpose than religion, although other ideologies—sociological, fiscal, even autocratic—may suffice. That's not to say that all that these various ideologies achieve is beneficial: for every great liberation, there may be a dozen pogroms. But they are part and parcel of a society's capacity for achievement; without them, we get only as far as a drunkard's walk will take us.

Monday, December 7, 2009

Square Roots and Great Comebacks

From the time I learned about them, I've been fascinated (probably to an unseemly degree) by the square root. I remember reading about a method for calculating square roots by longhand. There's no point, really; we have calculators to do that for us. (If you have some spare time and you enjoy this sort of thing, see if you can figure out the algorithm from the example at left.)

What use are square roots, anyway, aside from solving math problems about the diameters of circular lawns? (Have you ever seen any of those? They must encircle those conical swimming pools we dealt with in calculus class.) Here's one use: They can tell you how big a lead your favorite basketball team needs to be secure in a win.

A few years ago, I derived a rule for determining when a lead was safe in a basketball game—specifically, an NBA game. (It matters, because the shot clock is different between an NBA game and a WNBA game and an NCAA men's game and an NCAA women's game.) You take the square root of the number of seconds left, and add three. For instance, if there's 3:45 left in the game, that's 225 seconds. Square root of 225 is 15, and you add 3, so an 18-point lead is pretty darned safe with 3:45 left. The "add 3" is for a trey at the buzzer. Go ask the Miami Heat about that 'round about now.
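The rule is one line of arithmetic; here's a sketch (the function name is just for illustration). Note that doubling the time remaining multiplies the square-root term by √2, which is where the "about 24 points in 7:30" figure later in this post comes from:

```python
import math

def safe_lead(seconds_left):
    """The sqrt-of-seconds rule: a lead of sqrt(t) + 3 points is
    'pretty darned safe' with t seconds left in an NBA game."""
    return math.sqrt(seconds_left) + 3

assert safe_lead(225) == 18.0      # 3:45 left: sqrt(225) = 15, plus 3
print(round(safe_lead(450), 1))    # twice the time: 15*sqrt(2) + 3, about 24
```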

Pretty keen, huh? Although—not to put too fine a point on it—well-known sports statistician Bill James also came up with this very same rule. We'll call it independent discovery, at least on my part. I have no idea whether James stole it from me. Give him the benefit of the doubt, though.

But why? Why should this rule work? Why isn't it just the time remaining divided by some rate at which the team that's behind catches up? If a team can make up 15 points in 225 seconds and then cap that with a trey to make up the 18, why can't it make up 33 points in 7:30? Or 63 points in 15:00?

And the sort-of answer to that is, it can. It's just terribly unlikely. Of course, it's already unlikely that a team can make up 15 points in 3:45, but it's still in the realm of possibility. Asking a team to do that twice in a row is just too much. If it were 100 to 1 against doing it once, doing it twice in a row would be 10,000 to 1 against. On the other hand, making up the same 15 points in twice the time is obviously easier. So in twice the time (7:30, natch), you should be able to make up some deficit in between. According to both me and Bill James—and honestly, are you going to go against both of us?—that deficit is 15 times the square root of 2. That's about 21, and if you add the 3 at the end it makes it 24.

Where on earth does this come from? One place is the drunkard's walk, otherwise known as the random walk (but I think "drunkard's walk" is more evocative). In this mathematical scenario, the eponymous drunkard starts off at some placemark—a lamppost, say. Each moment in time, he takes a step, but in a completely random direction. Might be in the same direction as the last step, might be in the opposite direction, might be anything. So after a bunch of steps, he might end up back at the lamppost where he started...or he might be home.

Odds are, though, he'll be at some intermediate distance. How far from the lamppost? Well, the first step is going to take him one step away for sure. We'll represent this by saying that d(1) = 1, where d(t) is the distance of the drunkard from the lamppost at time t. OK, now what about d(2)? Before that second step, he's one step away from the lamppost. His second step might take him two steps away, if he walks in the same direction, or zero steps away, if he walks in the opposite direction (back toward the lamppost). On average, though, he'll walk in some intermediate direction: let's say, perpendicular to his current progress from the lamppost. The Pythagorean theorem says then that

[d(2)]² = [d(1)]² + 1² = 1² + 1² = 1 + 1 = 2

or, in other words, d(2) = √2. We can go further. We've already got two examples where d(t) = √t and we'd like to get more. To do that, we'll use a process called induction. Suppose that you have a value of t for which d(t) = √t; we'll now try to show that d(t + 1) = √(t + 1). Using the same argument as before—that the drunkard walks in some intermediate direction—we get

[d(t + 1)]² = [d(t)]² + 1² = [√t]² + 1² = t + 1

and then we directly get d(t + 1) = √(t + 1). So as long as we can find a t where d(t) = √t, we're set; it's true for all greater values of t. But we already have such a value: t = 1! (And t = 2, for that matter.) It turns out, then, that the drunkard's walk, after time t, takes him a distance √t away from the lamppost.
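A quick Monte Carlo makes the √t behavior visible. This sketch (walker count and seed are arbitrary choices of mine) takes each step in a uniformly random direction in two dimensions and measures the root-mean-square distance from the lamppost:

```python
import math
import random

def rms_distance(steps, walkers=4000, seed=7):
    """Root-mean-square distance from the origin after a 2-D random
    walk of unit steps in uniformly random directions."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(steps):
            theta = rng.uniform(0, 2 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / walkers)

# The RMS distance after t steps comes out close to sqrt(t):
for t in (4, 25, 100):
    print(t, round(rms_distance(t), 1))
```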

Now, a couple of things. First, this isn't anything like a rigorous demonstration of the square root property of the drunkard's walk. You can look that up if you like. But if you work at it a little, it gives you an inkling of the intuition behind it. Secondly, though, and here we're back on track a bit: What has all this got to do with basketball games?

A basketball game is an alternating sequence of possessions. In each possession, the team with the ball is of course trying to score, and the other team is of course trying to prevent it from scoring. When the ball changes hands, the roles are reversed. In each individual possession, the effect on the score is biased: Only the team with the ball can score, usually. But in each pair of possessions, that bias cancels out, since both teams get a chance with the ball. The margin in the game can move in any direction—just like the drunkard's walk.

If the drunkard starts off 50 steps from home, he could conceivably get home in just 50 steps. But it's ridiculously unlikely: Each of those 50 steps would have to be in exactly the right direction. The square root property tells us he'll probably be just a bit over 7 steps from the lamppost; it would take 2500 steps to get him, on average, 50 steps from his starting point. After those 2500 steps, is he guaranteed to be home? Nope. He still has to be walking in the right direction. But it's at least plausible now.

In the same way, a basketball team that's down 18 points could conceivably make that up by scoring six three-pointers in a row while holding their opponents scoreless. If they did that by fouling and their opponents obliged by missing all of their free throws, the whole deficit could be made up in half a minute or so. But that's as unlikely as the drunkard walking 50 steps in exactly the right direction. Instead, a team will make up its deficit in halting fashion, sometimes making up three points, but other times giving up a point, or staying even, in any particular pair of possessions. The drunkard's walk, in other words, and that's why the square root rules great comebacks.

I was going to follow this up with a discussion of sociology and mobilizing people, but this post is getting long (see, I do notice it!) and I'll defer that till next time.

Wednesday, December 2, 2009

Basketball Math Fail

Today's miscreant: The Boston Globe's Celtics Notebook. The offending paragraph reads:
Rasheed Wallace has eight technical fouls in 18 games, which would equate to 36 over a full season. That number is astronomical, of course, especially since the NBA suspends players one game for each technical after the 16th.
First of all, the NBA does no such thing. It does suspend players one game for every other technical, starting with the 16th. (See the NBA Rule Book, Rule 12, Section VII.) But that's not the math fail in this instance. The math fail is figuring that Wallace would get 36 technicals over a full 82-game season, when by their own admission, he wouldn't even play 82 games because of the suspensions for all those technicals.

So how many technicals would he get, if he were to get them at the same rate for the rest of the season, and he didn't miss any games to injury or other reasons besides the suspensions from the technicals?

At the current rate, Wallace would pick up his 16th technical in his 36th game, meaning he would be suspended for the team's 37th game. (We'll assume that Wallace doesn't appeal any of his technicals or suspensions.) He would then pick up four technicals in every nine games he played in thereafter. For the sake of argument, let's say he picks up the even-numbered technicals (16th, 18th, 20th, etc.) in the fifth and ninth game of every cycle of nine games he plays in. Since each of those technicals would carry with them a one-game suspension, these cycles would actually span 11 games for the Celtics.

As a result, Wallace would pick up those even-numbered technicals in the Celtics' 42nd, 47th, 53rd, 58th, 64th, 69th, 75th, and 80th games, in each case being suspended for the next game. That technical in the 80th game, his 32nd, would be his last, since he'd only be able to play in one additional game, and we'll charitably assume that he wouldn't get called for a technical in that one. So he'd get suspended for nine games in all, drawing 32 technicals in 73 games.
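Those cycles can be checked mechanically. Here's a sketch under one formalization of "the same rate" (my assumption, not the Globe's): a technical is assessed whenever Wallace's cumulative 4/9-per-game-played accrual crosses a whole number, and every even-numbered technical from the 16th on draws a one-game suspension. Exact fractions avoid floating-point drift:

```python
from fractions import Fraction

def project_wallace(total_games=82, rate=Fraction(8, 18)):
    """Project technicals over a season at 8 per 18 games played, with a
    one-game suspension for every even-numbered technical from the 16th on."""
    techs = played = suspensions = 0
    accrued = Fraction(0)
    suspended_next = False
    for _game in range(1, total_games + 1):
        if suspended_next:
            suspensions += 1
            suspended_next = False
            continue                      # he sits this one out
        played += 1
        accrued += rate
        while techs < int(accrued):       # accrual crossed a whole number
            techs += 1
            if techs >= 16 and techs % 2 == 0:
                suspended_next = True
    return techs, played, suspensions

print(project_wallace())   # (32, 73, 9): 32 technicals in 73 games, 9 suspensions
```

Reassuringly, this reproduces the schedule worked out above by hand, even technicals landing in team games 42, 47, 53, 58, 64, 69, 75, and 80.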

Tuesday, December 1, 2009

Analogies for Better or for Worse

Douglas Hofstadter wrote about the relationship between analogies and intelligence in the September 1981 installment of his Scientific American column series Metamagical Themas, entitled "Analogies and Roles in Human and Machine Thinking." His central point is that being able to see similarities between different situations and to capitalize on those similarities to make predictions is core to the nature of human intelligence (and by extension, to fruitful research on machine intelligence as well). "Being attuned to vague resemblances," he writes, "is the hallmark of intelligence, for better or for worse."

As if to highlight the "worse" side of the ledger, somewhere toward the middle of the column, he discusses the pitfalls of taking analogies too far. Ultimately, situations don't map perfectly onto each other, and the greater the demands placed on any given analogy, the more likely it will stretch so far it snaps.

Analogies are particularly useful for teaching purposes. Students seem often to learn something better when it is explained in terms of something they already know. We might learn about electrons orbiting an atomic nucleus by analogy with planets orbiting the Sun, for instance. To the extent that principles in one domain apply to the other, we can understand and explain behaviors in the new, unfamiliar domain in terms of the old, familiar one.

There are dangers to this path to learning, though. The famed Caltech physicist Richard Feynman—surely one of the great physics teachers of all time—was extremely conscientious when it came to teaching by analogy. He avoided analogies that he found misleading or circular. It might be natural to think of electromagnetism as being mediated by imaginary "rubber bands," he said, but in the first place, rubber bands draw things together more the further apart they get, whereas electromagnetism gets weaker with distance, and secondly, rubber bands themselves work through electromagnetic interactions at the molecular level, so any understanding students derived through this analogy must needs be circular.

Care must be taken, too, not to stretch the analogy beyond its limits. The fact is that electrons don't orbit the nucleus in neat circles (or even ellipses) like planets orbiting the Sun. If we study further, we find that although planets can apparently orbit the Sun at any distance whatsoever, electrons are constrained to orbit the nucleus only at specific distances, which we can characterize as those distances which allow an integral number of electron waves to circle the nucleus. If we study still further, we find that electrons don't travel in any kind of orbit at all, but instead can be found at any location around the nucleus according to a probability distribution (or, equivalently, are simultaneously at all different points according to that distribution—at least prior to observation).
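That "integral number of electron waves" condition can be checked directly, at least within the Bohr model. In this sketch (using standard CODATA constants and the Bohr-model orbital speed v_n = αc/n), exactly n de Broglie wavelengths fit around the nth allowed orbit:

```python
import math

# Physical constants (SI, CODATA values)
h = 6.62607015e-34          # Planck constant
m_e = 9.1093837015e-31      # electron mass
c = 2.99792458e8            # speed of light
alpha = 7.2973525693e-3     # fine-structure constant

hbar = h / (2 * math.pi)
a0 = hbar / (m_e * c * alpha)        # Bohr radius

def waves_in_orbit(n):
    """How many de Broglie wavelengths fit around the nth Bohr orbit."""
    r_n = n ** 2 * a0                # allowed orbital radius in the Bohr model
    v_n = alpha * c / n              # orbital speed in the Bohr model
    wavelength = h / (m_e * v_n)     # de Broglie wavelength at that speed
    return (2 * math.pi * r_n) / wavelength

for n in (1, 2, 3):
    print(n, round(waves_in_orbit(n), 6))   # comes out to n exactly
```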

The problem is that analogies are so darned appealing. The good ones yield correct answers to our questions so often that we lose track of where the limits of the analogies are, or even that there are any. We simply trust the analogies, often to our detriment. It's tempting to understand the budgetary situation of, say, the United States in relation to our personal budget; after all, there are many similar concepts and relationships: income, expenses, debt, balance, and so forth. It's tempting, but it's often misleading. But because we do understand many things correctly using that analogy, we become overconfident in areas where the analogy was never going to hold water.

My pet peeve in this regard is the rubber sheet analogy for general relativity. Given that general relativity was one of the major developments of 20th-century physics, you'd expect that there'd be significant time spent in explaining it to the lay public. I mean, even people who only vaguely have a notion of what physics is about have heard of Albert Einstein and "warped space."

Gravity is everywhere; we feel its effects all the time. And we've sort of internalized the Newtonian theory of gravity, which is that any two particles exert a gravitational force on each other, no matter how far apart they are; although the degree of force drops off quite rapidly with distance, it never quite shrinks down to zero. We've internalized it so well that we hardly ever wonder how that force is mediated. How does that force get exerted across all that distance? By the Newtonian theory, I wiggle my finger here, and my finger's gravitational influence on the most distant galaxy, however faint, oscillates with the same frequency as my wiggling finger. Newton himself felt this conundrum most keenly, never mind his insistence that he did not "feign hypotheses."

Einstein's general theory of relativity ostensibly resolves all of that. It posits space not simply as a theater in which gravitational interactions take place, but a physical, almost tangible thing that is affected by masses and in turn affects them. The usual term for this is curved space—a term that is justified in a mathematical sense but which is almost certain to mean nothing directly to anyone who isn't already a physicist. I imagine that the most common response is mute incomprehension.

So we explain what we mean by "curved space" by analogy. First of all, we should really be calling it "curved space-time," since in Einstein's theory time and space are interwoven almost irrevocably. With three dimensions of space and one of time—well, that's a lot of dimensions. People don't visualize four dimensions very well. So we abstract away two of them: one of the spatial dimensions, and the one time dimension, leaving two spatial dimensions. Dropping the one spatial dimension is probably OK, but already there are problems: you've lost the one temporal dimension you have, and it's possible that something essential goes with it!

But we're pressing on. We lay down an infinite rubber sheet, typically marked with grid lines. We plop down a big heavy ball, like a bowling ball. This is the Sun, we are told. It bends or curves or warps space. Sure enough, the rubber sheet is seen to dimple significantly. Then, we roll a smaller ball around the bowling ball, and because of the warping caused by the bowling ball—err, Sun—the smaller ball (representing the Earth, say) sweeps around in a neat circular or elliptical orbit. Just like the real planets.

This is an enormously popular representation of general relativity; even Carl Sagan's Cosmos, my favorite science documentary series of all time, uses it. And yet, in my opinion, it's fatally flawed. In the first place, it's circular, just like Feynman's rubber bands. We're told that the effect of the Sun's gravity can be interpreted in terms of the Sun's warping of nearby space, by analogy with the warping of the rubber sheet caused by the bowling ball. But what is it that causes the bowling ball to warp the rubber sheet? Gravity itself! We can't rightly claim to understand gravity if gravity is involved in the explanation as well.

Even that would be excusable for pedagogical purposes if the analogy were actually accurate. But it's not. In all of the rubber-sheet depictions of general relativity I've seen, and I've seen quite a few, only one includes a disclaimer that demonstrates what's wrong with it—a little-known primer on relativity written by Lewis Carroll Epstein called, appropriately enough, Relativity Visualized. (I heartily recommend it.) He makes the following point: In space, there is no universally preferred direction up or down; those directions are only understood in reference to some gravitational field. So the rubber sheet analogy, if it's really right, should work just as well if you flip the rubber sheet upside down, so that the warp goes upward (like a volcano) rather than downward. After all, it's not supposed to be the bowling ball itself that makes the other ball go 'round, but the warp. But if you roll the smaller ball toward the volcano, what happens? As any miniature golfer knows, it certainly doesn't orbit the volcano; instead, it either goes into the volcano, or it veers away from it, never to return.

But even that's not the worst of it. The irony of this analogy is that even though it's not a very accurate depiction of general relativity, it's a dead-on match for Newtonian potential energy wells. That's right: This immensely popular analogy, which is supposed to highlight how general relativity differs from Newtonian gravity, is instead a much better illustration of the very theory general relativity was intended to supplant! I was so struck by this that I wrote up an exposition of general relativity for my astronomy Web site, which (on the off chance you've actually read this far) you can find here. In it, you'll find an analogy to general relativity which is hopefully understandable but hits much closer to the mark. (I even asked a physicist!)
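The "dead-on match" is easy to illustrate numerically. If the sheet's dimple is a graph of the Newtonian potential (height proportional to -1/r), then in the shallow-slope limit the force on a rolling ball is proportional to the slope of the sheet, and the slope of -1/r falls off exactly as the inverse square. A sketch in arbitrary units (the function names are mine):

```python
def slope(h, r, dr=1e-6):
    """Numerical derivative of the sheet's height profile at radius r."""
    return (h(r + dr) - h(r - dr)) / (2 * dr)

# Height profile of a Newtonian potential well, -GM/r in arbitrary units:
well = lambda r: -1.0 / r

# If slope(r) * r^2 is constant, the implied force is inverse-square.
for r in (1.0, 2.0, 4.0):
    print(r, round(slope(well, r) * r ** 2, 4))   # constant: 1.0 at every radius
```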

But does anyone care? Nooooo, I'm sure we'll continue to see the rubber-sheet analogy trotted out at regular intervals on the Discovery Channel, with no disclaimer regarding its appropriateness.