"But I thought—what about changing your own past? What about the paradoxes?"
Dr. Vanner pursed her lips. "Yes, I wondered about that too."
"So what happens if I shoot my grandfather? Not that I would, but I could."
"Well, Jason, it turns out that's a bit of an interesting question, whether you could or not. But grandfathers are very large, complicated things. People are always trying to figure out how time travel could possibly work with grandfathers, and bullets, and messy macroscopic objects like that. It's easier just to deal with simple particles first. You figure out the particles, the grandfathers take care of themselves."
"OK..."
"Well, grandfathers are made out of particles, aren't they?"
"I guess that's one way to think of them."
"You know, I had a grandfather too." Dr. Vanner smiled warmly. "Anyway, I think the best way to answer your question is by way of example. Say you're a particle. An electron. You didn't come from nowhere, you started out as a muon. But muons don't live very long; they decay in a few microseconds to yield a couple of neutrinos, and you." She sketched on the board as she spoke.
"Me—oh right, I'm an electron."
"That's right. Now you, as an electron, can live essentially forever. You step into the time machine (or a smaller version of it), and you go back in time. Just a little bit: say, a microsecond."
"Ahh, I think I see where you're going. I'm going to bump the muon just enough so that it decays somewhere else, and even if it decays into me, I'm nowhere near the time machine to go back in time. Paradox."
"Exactly. So what's the resolution? The resolution is that particles aren't billiard balls. As an electron, you don't really bump into the muon. You 'interact' with it."
"What difference does that make?"
"The difference is that the interaction has a random element. If I hit a cue ball into another billiard ball in the same exact way, over and over again, both balls will go off in the same directions, over and over again. It's predictable, deterministic. That's why you can have expert billiard players. But subatomic particles aren't the same way. They can hit in exactly the same way, as far as we can tell, but the results may be completely different from one time to the next. There are no expert electron players.
"And that's the key. There's going to be one way or another that you could end up hitting that muon that will end up with it decaying into you at the right place at the right time. Maybe you give it an extra nudge, and it goes a bit faster in the same direction, but it decays sooner than it would have. Maybe it goes off in a different direction, but when it decays into you, you still end up heading toward the time machine."
"But if there's so many different ways it can happen, which one actually does happen?"
"That's a complicated question. The simplest way I can think of to understand it is to imagine the universe as a kind of simulation. If we conduct an ordinary quantum-mechanical experiment, there's a certain probability that the experiment will end up one way, and the rest of the time, it ends up another way. It can do that because the experiment is anchored on only one end: the start.
"But in the time travel case, it's actually anchored on both ends. When you the electron exist at a particular time and place, there's an anchor at that point. The universe is in a more or less definitive state at that point. Normally, that's the only anchor. But in this case, when you travel back in time, there's a second anchor, in that we know you have to end up back (or should I say 'forward'?) in the time machine. In between, nearly anything can happen—subject to the laws of physics.
"So imagine that the universe runs a simulation. How many different ways can you start at the first anchor point, and end up at the second anchor point? Which ones are most likely, when you adhere to the laws of physics? We don't even know which ones are most likely beforehand, except in the very simplest of cases."
"So as an electron, I end up taking the most likely path back to the time machine?"
"No, not quite. If the chances of you taking that path are three in five, then three times out of five, that's the path you'll take. Or you could end up taking a once-in-a-million path (like bouncing off of three other particles before entering the time machine); it's just that you only have a one-in-a-million chance of doing that."
"But I always end up back in the time machine."
"That's right."
"But then it sounds like I can't ever change anything. If the universe is anchored on both ends, what point is there in going back in time?"
"Very good question! The point is that the second anchor point is not a 'complete' anchor point. The first point is. It covers the whole universe. But the second anchor point only consists of you. The only thing that's required is that you—the original you, remember, not the one that goes back in time—you have to end up in the time machine. Everything else can change."
"Wait a minute. So forget about me being an electron and everything. I'm me, Jason Sawyer. I enter the time machine, and I go back a day or so. I could see anything. I could see me—the original me. And anything might happen, but in a day or so, that original me has to end up getting into that same time machine. But everything else could change. I might forget to do yardwork that I actually did earlier today. When I—the time-traveller me—catches back up to this present, I would know that the yardwork didn't get done. But if you were watching, you'd all of a sudden see the leaves suddenly strewn across the yard, instead of put away in the yard trash?"
"Not quite. Remember, when you rake the leaves, it affects more than just the leaves. If I were watching and the leaves never got done, I'd never have any memory of the leaves being in the trash, would I? So in fact, yes, the leaves would still be on the ground, but it wouldn't look to me like they suddenly appeared on the lawn. In my brain, there would be lost any impression that the leaves were ever raked in the first place. At least, I assume that's the most probable outcome. There are all sorts of other outlandish possibilities that are more drastic but which I probably wouldn't have to worry about."
"All right, forget about the trash. What about if I went back, really far back, far back enough to...well, let's not say I shot my grandfather, but I somehow set him up with someone other than my grandmother. How can I possibly end up in the time machine then?"
"Hmm, let's think..." Dr. Vanner considered this. "OK, well, how well did you know your grandfather?"
"Huh? Uhh, kind of? I—he died when I was eight. How does that matter?"
"It matters because it's not sufficient that you end up in the time machine. You also have to end up going into the time machine in a state that's sufficiently 'consistent' (in a technical sense) with the way you ended up going into it 'the first time.' And that state includes your brain: everything you remember and know about yourself and your experiences.
"So what happens? You obviously have to go into the time machine. So somehow, some collection of particles comes together to form you. The way it actually happened is just one possibility: Your parents conceive you, and you start out as a small number of particles. Over time, you take on some particles, and lose some other particles, and eventually, grow up to be who you are today.
"Now one other possibility is that somewhere, near here, just a few moments ago, a collection of particles just randomly happened to show up to make...well, you, but including all the memories you currently have (which would, in that case, be completely fictional). In a classical world, that's impossible. In a quantum-mechanical world, it's merely improbable—although we're talking really, really improbable. Like you could run the universe a googol times and it wouldn't even come close to happening. Still, it's possible. And because by setting your grandfather up with someone other than your grandmother, you've already eliminated a bunch of probable outcomes that end up with you in the time machine, all the improbable options get a boost, so to speak. As Sherlock Holmes said, 'When you have eliminated the impossible, whatever remains, however improbable, must be the truth.' In this case, by setting up your grandfather on a date, you've made certain options impossible, and they're eliminated. More than one path remains—all of them improbable a priori, perhaps—but one of them has to happen, in order for you to get back in that time machine. The option of just a bunch of particles coming together to make a fully-formed you is always available. If nothing is left besides that, then that's what'll happen. In this case, though, I bet something else is more likely than that."
"Like what?"
"Like...suppose that after you set up your grandfather with someone else, you go off to visit the world. You're not going to hang out with him forever, do you? So after you leave him, suppose he breaks it off with the other person, and gets back with your grandmother. And everything else happens more or less the same, so far as producing you is concerned. His life history would be a bit different, but not in ways that are really all that critical. That's why I asked you how well you knew him. Do you know who he was with before he met your grandmother?"
"Hmm...not really."
"Exactly. Remember, what happens to everyone else can change, but you have to stay more or less the same. So my guess is that the most probable outcome is that his life would change in ways that you never knew about in the first place, so that when you go into the time machine 'the second time,' whatever you knew about your grandfather the first time remains true."
"Whoa. Wait. That means that I have...I can't really change too much about the people that I know really well. Like Mom, Dad, my sister, and even you a little bit. But, everything else and everyone else could change drastically? Maybe I'd go back, and as a result of my getting my grandfather to go out on even one date with someone different, I still come back with the rest of my family, but—oh, let's just say—no World War II, ever?"
"Well, that's possible, but still unlikely, even after you eliminate the impossible. Remember that World War II wasn't started by just one thing. There were triggers, but there were broad forces too that were behind it. The second anchor, and the fact that what happens is likely the most probable thing, means that the whole 'butterfly effect' thing is not as chaotic as one might think. Just setting up your grandfather on one date is unlikely to reset all of world history. In order to change that, you'd probably have to go back quite a bit further in time.
"Also, by the way, keep in mind that your time-traveled self is still aging. If you go back far enough to set up your grandfather on a date, by the time you get back here to 2027, you'll probably be about as old as your grandfather would have been today. That's assuming you make it all the way back. There's no guarantee of that."
"Wait, I thought I had to get back. To get into the time machine."
"No, remember, that's the original you who has to do that, not the time-traveled you."
Jason hesitated for a second. "Right," he said finally. "It's very confusing. And it's weird to think that the results of going back in time are so intimately connected with me. I can change really distant things a lot, but everything I know well is preserved—at least to the degree that I know it. How is that possible? I mean, this machine doesn't know a thing about me."
"It doesn't know it in the usual sense, Jason, but when you enter it and go back in time, it knows everything about your current state, at the moment you go back. It fixes it. That's the second anchor. The thing that makes all the other changes possible is that it is only you who is anchored in time and space. Everything else is floating partly free—and the less you know about them, the freer they are. Even me, to a point. Although I still have to be able to invent the machine. So I feel pretty safe, especially if I know the person well who's traveling in it."
"Why don't you get into it?"
Dr. Vanner looked for a moment as though she were going to answer that. "I—I can't explain that to you yet," she said finally. "You'll have to take my word for it that there's a good reason for me not to get into it."
"Hmm, OK." Jason looked thoughtful again. "All right, one more question. Suppose I do something really drastic. I shoot myself before I get into the machine. Or I do something really memorable to myself, something that didn't happen the first time. How can I—the original me—end up back in this time machine in a...what did you call it?"
"A consistent state."
"Yeah, that."
"Well, there's a limit to how reliable our senses are. How do you know you didn't already go back in time and make a big noise in front of yourself? Because you don't remember it. But what makes you so sure that it didn't happen? Is your memory that reliable? There must have been some things that have happened that you don't remember. Normally, it's because those things happened so far in the past that the memory has faded. We have the notion that the memory is still there, locked inside us somewhere, that we just can't find it. But what if the memory really went away? My guess—although I really don't know, I haven't tested it—is that you'd just lose all memory that it happened."
"Weird. But what if I shot myself?"
"That depends. Maybe the shot misses, even though you think you hit yourself? Maybe you miraculously heal in seconds? Those sound ridiculously improbable, but you've already eliminated as impossible all the normal paths, so the truth must be something outlandish. Even if you stay to watch yourself die, at some point, your body, reasonably healthy, must make its way into the time machine. In that case, you might very well see your dead body vanish in front of your eyes, just in time to make it into the time machine at the right moment. Again, impossible in the classical world, but possible in the quantum-mechanical world, and—if you've already eliminated everything else—even, in a sense, inevitable in that world."
"So the original me is immortal."
"The original you, yes," agreed Dr. Vanner. "But only up to the point you get in that machine. From that point on, that you vanishes, and the time-traveled you reappears at some point further back in time. And that you is vulnerable. Anything at all could happen to that you."
Excerpt from "Time Binder" copyright (c) 2012 Brian Tung
Wednesday, March 14, 2012
No Two Alike
Another meandering post. You've been warned.
I'm re-reading Isaac Asimov's informal autobiography, I. Asimov (a play on his collection of robot stories, entitled I, Robot, and to be distinguished from his formal autobiographies published earlier in his life), and finding it quite entertaining. Partly, this is because I'm an inveterate re-reader and re-watcher. My enjoyment of a piece of writing or a movie or a TV program doesn't diminish because I know how it goes. If I enjoyed it the first time, I'll enjoy it just as much the seventh time, or the fifty-seventh. Even a sporting event isn't diminished because I know the final score (although I do prefer to watch it live the first time, if I can). All this just by the way.
Anyway, in this book, Asimov mentions his facility at giving impromptu talks, and mentions by way of illustration that he has given a couple of thousand talks, no two exactly alike.
And that phrase, "no two exactly alike," is so characteristic of snowflakes that I immediately thought of them. In fact, I'd go so far as to wager that if you asked people what the first thing was that they thought about snowflakes, it would be that no two are alike.
But is that actually so? Have there really never been two snowflakes alike? If you're like most people, you'd probably just as soon leave well enough alone and assume it's true. For the heck of it, though, take a trip with me down the rabbit hole.
The whole idea that no two snowflakes are exactly alike has been around since time immemorial, but things really got moving with a man named Wilson Bentley (1865-1931), who grew up in Vermont. When he was fifteen, his mother gave him an old microscope to experiment with. Well, Vermont winters being what they were, I suppose it's natural that Wilson should have been drawn to snowflakes. And so he took to maneuvering snowflakes under his microscope and sketching them.
It turned out, however, that they melted quickly—far too quickly for him to sketch in time. So Bentley assembled a contrivance, a camera attached to a microscope attached to a board covered with black velvet, which permitted him to take pictures of the snowflakes before they melted. Over his lifetime, he took images of over five thousand snowflakes, and sure enough, no two of them were exactly alike.
Five thousand, though a lot to take pictures of, is still a minuscule fraction of all the snowflakes that ever were, or even of those that are currently in existence (a constantly changing population, to be sure). Surely there is no way that we could possibly take pictures of all the ones that currently exist, let alone those that have ever existed. Is there perhaps another way of answering the question?
Consider: Each year, a substantial portion of the Earth is hit by snowstorms sufficient to dump several meters of snow on the ground. I'm not sure of my statistics, but we probably wouldn't be far off if we assumed that the total annual snowfall amounted to a depth of, let's say, two tenths of a meter over the entire surface of the Earth, were it spread around evenly. Since the surface area of the Earth is about 5×10^14 square meters, we're talking about 10^14 cubic meters of snow. When packed tightly (tightly enough to crush them), snowflakes might each occupy a cube about a tenth of a millimeter on a side. So each year, we get something like 10^26 snowflakes. Taking into account the fact that there has been snowfall for a few billion years, there have been perhaps 10^36 snowflakes, ever, in the Earth's history. That's a lot of snowflakes.
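These are the kinds of figures it's easy to get wrong by a factor of a sextillion, so here's the arithmetic spelled out as a minimal Python sketch (I'll use Python for these numerical asides). Every input is a round-number guess, as above, so only the orders of magnitude mean anything:

```python
import math

# Round-number guesses, as in the text.
earth_area   = 5e14   # surface area of the Earth, in square meters
annual_depth = 0.2    # global-average annual snowfall depth, in meters
flake_side   = 1e-4   # side of the cube one packed flake occupies, in meters
snow_years   = 1e10   # snowfall has occurred for billions of years

per_year = earth_area * annual_depth / flake_side**3
ever     = per_year * snow_years

print(f"snowflakes per year: about 10^{math.log10(per_year):.0f}")  # 10^26
print(f"snowflakes ever:     about 10^{math.log10(ever):.0f}")      # 10^36
```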
However, there are also lots of different shapes that any particular snowflake might take on. Snowflakes exhibit six-fold symmetry because they're constructed from ice crystals, which have six-fold symmetry. (You can find a picture of one in this article.) So let's represent a snowflake as a hexagonal lattice, a bit like a honeycomb of cells, each of which might be occupied by an ice crystal, or not. An individual hexagonal ice crystal is a few tenths of a nanometer across, whereas an entire snowflake might be a few tenths of a millimeter across. So the hexagonal lattice representing our snowflake would have a diameter of about a million cells, and would contain about 750 billion cells in all.
Does that mean that there are nearly a trillion possible snowflakes? No—there are far more than that, because each one of those cells could either have an ice crystal, or not. We could represent the snowflake by filling each one of those cells with a 1 if it had an ice crystal, or a 0 if it did not. In other words, each snowflake would be represented by a huge binary number with 750 billion digits. Such a tremendous number is on the order of 10 raised to the 230 billionth power.
It's hard to overstate how big a number this is. Even if you were, somehow, to write 100 digits a second, every second of every hour of every day, without interruption for sleep or eating, you would have perhaps only an even-money chance of writing this number out during your entire lifetime. It goes without saying that it's much, much, much larger than 10^36. (It is, however, much smaller than a googolplex. I just thought I'd point that out.)
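These numbers are far too large to compute directly, but their logarithms are perfectly tame. A quick sketch checks the cell count, the exponent, and the writing-time claim all at once (the 3/4 is my rough factor for how much of the square on its diameter a hexagon fills):

```python
import math

diameter = 1e6                    # cells across the snowflake
cells = 0.75 * diameter**2        # a hexagon fills ~3/4 of its bounding square
digits = cells * math.log10(2)    # decimal digits in 2**cells

print(f"cells: {cells:.3g}")                         # 7.5e11: 750 billion
print(f"2^cells ~ 10^({digits / 1e9:.0f} billion)")  # ~226 billion digits; the
                                                     # text rounds up to 230 billion
# Writing those digits at 100 per second, around the clock:
years = digits / 100 / (3600 * 24 * 365.25)
print(f"{years:.0f} years of nonstop writing")       # ~72 years: one long lifetime
```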
However, we're not playing quite fair, because we've completely neglected the symmetry exhibited by most snowflakes. If we take that into account, it turns out that the number of possible snowflakes drops to something more like 10 raised to the 40 billionth power. Quite a bit smaller, but still much larger than 10^36.
There's another thing, too. Bentley took his photographs with an optical microscope, which was of course incapable of resolving ice crystals down to the individual molecular level. These days, we are capable of doing that, but it would be unfair to insist that snow crystals, which in an ordinary atmospheric environment would be constantly changing anyway, be identical to that level of precision. A typical photograph of a snowflake might be able to resolve crystals to a level of detail that would take a hundred thousand cells to fill the entire snowflake. Taking into account the symmetry of snowflakes again, there could still be on the order of 10^5,000 different snowflakes at this reduced level of resolution. Still much larger than 10^36.
OK, how about this? If one looks at an array of Bentley's photographs, one notices that the ice crystals are not arranged haphazardly around the snowflakes, even after one takes into account the six-fold symmetry. Instead, there is order at all different scales. In fact, people have likened snowflakes to fractals; there are even simulations of snowflake generation that build upon the fractal arrangement.
That reduces the level of variation accessible to the snowflake. It's hard to say for sure, but in most of the Bentley images, I think one can make out about six levels of detail. (That's consistent with a scale ratio of about two to three.) What's more, each unit of detail has within it detail that only goes about three or four levels down, which means that each level can be represented using about fifty bits or so. That means a total of three hundred bits might suffice to denote a snowflake to the level of precision needed to figure out whether they match or not. That would still mean about 10^90 distinct snowflakes, though.
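The later, smaller estimates all follow the same one-line pattern—bits times log₁₀ 2—so here they are in one place. The divisors and bit counts are just the rough guesses from the text, nothing more:

```python
import math

def exponent(bits):
    """Decimal exponent of the number of distinct patterns in `bits` bits."""
    return bits * math.log10(2)

print(exponent(750e9 / 6))  # ~3.8e10: 10^(40 billion), with six-fold symmetry
print(exponent(100e3 / 6))  # ~5017:   10^5,000 at photographic resolution
print(exponent(300))        # ~90:     10^90 assuming fractal structure
```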
All right, one last thing, which at first will seem to be a significant digression. There is, in probability, something called the birthday paradox, which goes something like this: Suppose you get fifty otherwise randomly selected people together in a room. What are the odds that at least one pair of them will share the same birthday (possibly different year)? One in four? One in ten? How many people do you think you need to make the odds even? Would forty do it? How about sixty? A hundred?
The answer, surprising to most people who haven't heard this question before, is that the odds are about even that out of just 23 people, at least one pair will share a birthday in common. It's a bit surprising because there are 365 days in a year (not counting leap day), but consider what happens if you choose the people one by one. The first, of course, can have any birthday at all. In order to avoid a pair sharing a birthday, the second must not share a birthday with the first. The third must avoid sharing a birthday with both the first and the second. The fourth must avoid sharing a birthday with the first, the second, and the third. And so on. By the time you get to 23 people, there are about 250 birthday sharings that must be independently avoided. It's not surprising that such sharings are avoided only half the time.
It turns out that this "paradox" (not truly a paradox at all, naturally, but just a counter-intuitive result of probability theory) has very wide applicability. The number of samples that can be randomly selected before you stand a good chance of getting a pair is much smaller than the total number of choices. In fact, it's on the same order as the square root of the number of choices. (There's that square root again!) The square root of 365 is a bit over 19, and sure enough, 23 isn't very far over 19. If one takes into account the year of birth over the course of a century, then there are about 36,500 different birthdates, but the square root of 36,500 is only about 191, so that only about 200 randomly selected people are needed before you have a good chance of matching the entire birthdate. And the square root of 10^90 is 10^45, so the size of the collection of snowflakes you need to have a good chance of pairing two of them is about 10^45.
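The birthday computation itself fits in a few lines, and the same square-root rule of thumb scales straight up to snowflakes (a minimal sketch; the function name is mine):

```python
import math

def p_match(k, n=365):
    """Probability that at least two of k uniform draws from n values coincide."""
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (n - i) / n
    return 1.0 - p_distinct

print(p_match(23))           # ~0.507: just past even odds
print(p_match(200, 36500))   # ~0.42:  nearly even odds on full birthdates

# Even odds arrive near sqrt(2 ln 2) * sqrt(n), or about 1.18 * sqrt(n):
print(1.18 * math.sqrt(365))   # ~22.5 people
print(1.18 * math.sqrt(1e90))  # ~1.2e45 snowflakes
```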
It's more than 10^36, but not much more. (What's a factor of a billion between friends?) And there are a lot of back-of-the-envelope manipulations in what I wrote, so perhaps there are other deeper symmetries to take advantage of. But I think it's rather magical that the numbers work out so nearly that it's quite possible that somewhere, across the vast mists of time, there fell—at two probably very different moments—two identical snowflakes!
Friday, April 30, 2010
Bending Over Backwards
Around the turn of the twentieth century, the French physicist René Blondlot (1849-1930), a well-regarded professor at the University of Nancy, became the central figure in one of science's classic cautionary tales about how easily scientists can fool themselves.
In this particular case, Blondlot was working in his laboratory in the wake of a flush of discoveries concerning radioactivity and X-rays. Apparently, he was trying to polarize X-rays (a tricky task owing to their high frequency and short wavelength), and as part of his attempt he placed a spark gap in front of an X-ray beam. After a few experiments with this set-up, it seemed to him that the spark was brighter when the beam was on than when it was off.
He attributed this to a new form of radiation, which he called N-rays after his home town and university of Nancy. He may have been influenced by all the work on radioactivity and X-rays then going on, but at any rate, he set about immediately to investigate attributes of the new radiation. It appeared, he said, to be emanated by many objects, including the human body. It was refracted by prisms made from various metals, although these had to be specially treated in order to prevent them from radiating N-rays themselves.
It was all very interesting, and for some time, there was a burst of scientific activity on N-rays. The problem was, the N-rays themselves were very shy and retiring, and many physicists had trouble reproducing the results obtained by Blondlot and his staff. But Blondlot always maintained that they had either inferior equipment or inferior perception.
You see, there was no objective recording of N-rays. All one had was a subtle brightening of a spark, which Blondlot and his colleagues were already prepared to see. To lend at least some notion of objectivity to the research, Blondlot took photographs of the sparks and other N-ray phenomena, but this merely replaced subjective judgment of a live spark with subjective judgment of a photograph. Means for measuring the light output were not sufficiently reliable or accurate yet to resolve the matter.
Matters came to a head in 1904, when the American physicist Robert W. Wood, whose account of the episode would later appear in the journal Nature, visited Blondlot's laboratory to see the experiments for himself.
Wood had by this time in his career established himself as something of a debunker, a sort of turn-of-the-century James Randi. But Blondlot was no charlatan; on the contrary, he was firmly convinced of his own discovery. So he had no misgivings about demonstrating his N-rays before Wood and others. He darkened the laboratory (the better to see the increase in brightness). He set his aluminum prism on a platform to refract the N-rays, made some measurements, rotated the platform a bit, made some more measurements, and so on, all the while casually detecting the N-rays. For his own part, Wood could see nothing of what Blondlot was describing. But he kept quiet, waiting for the experiments to conclude.
When they did, and the lights were turned back on, there was general astonishment, for despite all the careful measurements on the refraction of N-rays, there was no aluminum prism sitting on the platform. Wood had, it turned out, pocketed the prism early on in the experiment. The entire time, Blondlot and his staff had been obtaining gradually changing measurements of an unchanging experimental set-up. That spelled the end, for all intents and purposes, of N-rays.
What happened? Intentional deception can be ruled out rather easily, since Blondlot would have known that careful experimentation would eventually disprove N-rays; it would have been a most temporary fame. Nor was he a shoddy scientist. Before the N-ray affair, he was known for having measured both the speed of light and the speed of electricity through wires, a task that had stymied others, and which established that the two were very close (though not quite the same).
Consensus today is that Blondlot had simply wanted to believe in N-rays, expected and wanted to see the predicted brightening, so much that he really did see it, sincerely. It has been suggested that he may have been motivated by nationalism; X-rays were discovered by the German physicist Wilhelm Roentgen, and Germany had recently taken a sizable chunk of France, so that Nancy was now uncomfortably close to the French-German border. But my own feeling is that it almost doesn't matter. At some point, the desire to see his discovery of N-rays vindicated became its own driving force.
The N-ray affair is often cited in support of what is, in my opinion, a central—perhaps even the central—insight of scientific discovery: The easiest person to fool is yourself. And fooling yourself is a necessary prelude to fooling others; charlatanry would have been easier to expose. Exhibit A in support of this position is the sad fact that although N-rays essentially died a hard death in 1904, Blondlot lived on for another quarter century, continued to be productive in science, and took his belief in the existence of N-rays to his death.
It is because it is so easy to fool oneself that science is, and must be, an essentially social activity. It is often said that in science, experimental data rules the day. That's overstating it a bit. Experimental data is indeed necessary for science to progress, but that data means little without scientific theory to organize it (and vice versa). It's not that the data is more important than the theory, but that the data validates the theory, making it less likely that you fool yourself or anyone else. And there's a strong social pressure, within the scientific community, for one to bend over backwards in an attempt to subject one's theories to as much scrutiny as possible. It's that intense examination, which eliminates many theories but marks the ones that survive with an imprimatur of robustness, that distinguishes science from so many other human activities (ahem, politics?) and has made it one of the most successful endeavors of all.
Friday, December 11, 2009
Square Roots, Lasers, and Mobilization
I promised (threatened) that I would say more about square roots, and so I am. This is me, talking about square roots again. In typical fashion, though, I'm going to start with something else that will seem, for a time, completely unrelated.
Galileo, he of the telescope, the balls rolling down inclined planes (and probably not in actuality from the Tower of Pisa), the sotto voce thumbing of the nose at the Inquisition—Galileo also discovered, or more likely rediscovered, that pendulums mark out roughly even time, no matter how far they swing. It isn't perfectly even time, owing to friction and to the circular track of the pendulum bob (although both of those can be—and were—accounted for, starting with Huygens's employment of cycloid guides). But it's pretty close.
Since the pendulum keeps fairly even time, that must mean that if the pendulum swings in twice as big an arc, it must also be moving twice as fast, in order to keep beating out even time. Now, as it's defined in Newtonian physics, the kinetic energy of the pendulum bob—that is, the energy of the bob due to its motion—goes as the square of its velocity:
KE = ½ mv²
So, twice the arc, twice the velocity, four times the kinetic energy; three times the arc, three times the velocity, nine times the kinetic energy. And so on.
That swinging motion of the pendulum bob is an example of periodic or wave motion, so called by virtue of its swinging back and forth as a water wave swings up and down, if you were to watch it passing by a buoy. Wave motion is primarily characterized by two parameters: its frequency, which is how often it returns to its starting point; and its amplitude, which is how wide it swings. So the arc through which the pendulum bob swings is essentially its amplitude. (Actually, for historical reasons, the amplitude is defined as half of that arc, from the center point of the swing to either of its extremes, but this won't affect our discussion.) So we can say that the pendulum's energy is proportional to the square of its amplitude.
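(A small step glossed over there: for small swings, the bob's displacement along its arc is x(t) = A cos(ωt), so its peak speed is v = Aω, and plugging that in gives

KE = ½ mv² = ½ mω²A²

which is proportional to the square of the amplitude A, as promised.)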
This turns out to be common to many different kinds of waves—including light waves. Light is a wave. (It's also a particle, in many ways, but we'll ignore that for now.) And being a wave, it has an amplitude, which is the extent to which the light oscillates. What is it that's oscillating, anyway? In the case of water waves, it's water, and in the case of sound, it's the molecules in the air. You can't have water waves without the water, and you can't have sound waves without the air; that's why sound doesn't travel in a vacuum. But light does travel in a vacuum, so what's waving the light, so to speak? Well, the answer is that the light itself is waving, or less opaquely (heh heh), the electromagnetic fields that permeate space are waving.
In any event, like other waves, light waves also carry energy that is proportional to the square of the light's amplitude. If you double the amplitude, you quadruple the energy; triple the amplitude, and the energy goes up nine-fold. And so on.
How would light's amplitude be doubled, though? You might imagine that if you put two flashlights together, the amplitude of the two would be twice that of each individual flashlight, and the combined light output—the energy of the two together—would be four times that of each flashlight. But I think, intuitively, we know this to be false: the combination is only twice as bright as each flashlight. And if you measure the light carefully, in a dark room, this turns out to be perfectly true.
What happened? Light waves, like other waves, have a secondary property, called phase. Two waves of the same frequency are said to be in phase if they swing in the same "direction" (in some not altogether well-defined sense); imagine two pendulums swinging in unison, so that when one swings left, the other does, too. They are out of phase if when one swings left, the other swings right, and vice versa. Or, they may be partly in phase, partly out of phase.
When you combine two light waves of the same frequency and the same amplitude, you get for all intents and purposes a single wave that is the two original waves added together. If they're in phase, the peaks get peakier and the valleys get, err, valleyier, and the amplitude of the waves is in fact doubled. On the other hand, if they're out of phase, the peaks of one get cancelled out by the valleys of the other (and vice versa), and the resultant wave has no amplitude at all.
More typically, though, the two waves are partly in phase and partly out of phase, and the resulting wave's amplitude is somewhere in between zero and two times the original. On average, one can show that the amplitude is the original times √2. What's more, if you add three waves together at random phases, the amplitude of the sum is the original times √3. And so on. Aha, the square root!
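That "on average" is most naturally an average in the root-mean-square sense, and it's easy to check numerically. Here's a minimal Python sketch (all the names are mine), treating each wave as a unit phasor with a random phase:

```python
import cmath, math, random

def rms_amplitude(n, trials=20000):
    """RMS amplitude of the sum of n unit waves with independent random phases."""
    total = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * random.uniform(0.0, 2 * math.pi)) for _ in range(n))
        total += abs(s) ** 2
    return math.sqrt(total / trials)

for n in (1, 2, 3, 100):
    print(n, round(rms_amplitude(n), 2), round(math.sqrt(n), 2))  # columns agree
```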
And since the energy of the final wave is the square of the amplitude, what comes out has two, three, or whatever times the original energy. Which is, of course, exactly what you'd expect. And good thing, too, because if it came out otherwise, we'd have a violation of the conservation of energy. Clearly, it takes n times as much energy to run n flashlights as it does to run one, and if their combined output were something other than n times the original, we'd have to seriously rethink our physics.
You might wonder if there isn't a way to get the waves to line up properly in phase so that the amplitudes do add up in the normal way, and you get a dramatic ramp up in energy. And there is; it's called a laser. A laser essentially gets n individual photons to line up in phase so that what comes out is a sort of super-photon (or super-wave, equivalently) with n² times the energy of any of the input photons. The physics-saving catch is that it takes more energy to line up, or lase, the light than you get as a result.
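In the same toy phasor picture, "lasing" just amounts to setting all the phases equal—a sketch, of course, not actual laser physics, but it shows the n² contrast plainly:

```python
import cmath, math, random

n = 100
# All phases aligned ("lased"): amplitudes add directly, energy goes as n**2.
aligned = abs(sum(cmath.exp(0j) for _ in range(n))) ** 2
# Random phases: a typical uncoordinated combination, energy near n.
jumbled = abs(sum(cmath.exp(1j * random.uniform(0.0, 2 * math.pi))
                  for _ in range(n))) ** 2
print(aligned)  # exactly 10000 = n**2
print(jumbled)  # fluctuates around 100 = n from run to run
```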
Nevertheless, that single photon or wave, coordinated as it is, can do things that you couldn't do with the individual photons separately. You can shine a bunch of flashlights at your eye and nothing will happen, other than a rather annoying afterimage and perhaps a headache. But even a modest laser can be used to reshape your cornea and render your eyeglasses superfluous. Of course, it should go without saying that it's not such a great idea to randomly shine lasers into your eye!
Or out, for that matter.
I see in this a kind of metaphor for human nature, and I hasten to say it's only that; as far as I know, one can't really take this and apply it rigorously in any scientific sense. But I think it's a useful metaphor all the same. I like to say that religion, among other things, is a laser of people. What on earth do I mean by that? A single human being can do a certain amount of work (in physics, work is the energy transferred when a force is applied over a distance). What happens if you get two human beings together? Well, if they work against each other—if they're out of phase, in other words—less work gets done. Maybe none, if they spend all their time squabbling. Even if they're not exactly out of phase, if they're not particularly coordinated, their combined output is rather less than you might think, like the drunkard making slow and halting progress homeward because he can't put one foot directly in front of the other.
On the other hand, if they cooperate—if they're in phase—they can do twice the work. In fact, maybe they can get even more done, for there's no arguing that a coordinated combination of two people can do things that each individual person couldn't do, even adding their results together. Two people can erect a wall, for instance, that neither person could individually. Maybe, in some sense, those two people can do what it would take four people, working randomly, to achieve. And perhaps three coordinated people can do what it would take nine randomly working people to. And so on.
It's pretty straightforward to get two or three people to work together, if they're of a mind to. But what about a hundred, or a thousand, or a million? That's where ideologies can be enormously effective; through them, a thousand can achieve what would otherwise require a million. And there may be no ideology better suited for the purpose than religion, although other ideologies—sociological, fiscal, even autocratic—may suffice. That's not to say that all that these various ideologies achieve is beneficial: for every great liberation, there may be a dozen pogroms. But they are part and parcel of a society's capacity for achievement; without them, we get only as far as a drunkard's walk will take us.
Tuesday, December 1, 2009
Analogies for Better or for Worse
Douglas Hofstadter wrote about the relationship between analogies and intelligence in the September 1981 installment of his Scientific American column series Metamagical Themas, entitled "Analogies and Roles in Human and Machine Thinking." His central point is that being able to see similarities between different situations and to capitalize on those similarities to make predictions is core to the nature of human intelligence (and by extension, to fruitful research on machine intelligence as well). "Being attuned to vague resemblances," he writes, "is the hallmark of intelligence, for better or for worse."
As if to highlight the "worse" side of the ledger, somewhere toward the middle of the column, he discusses the pitfalls of taking analogies too far. Ultimately, situations don't map perfectly onto each other, and the greater the demands placed on any given analogy, the more likely it is to stretch so far that it snaps.
Analogies are particularly useful for teaching purposes. Students seem often to learn something better when it is explained in terms of something they already know. We might learn about electrons orbiting an atomic nucleus by analogy with planets orbiting the Sun, for instance. To the extent that principles in one domain apply to the other, we can understand and explain behaviors in the new, unfamiliar domain in terms of the old, familiar one.
There are dangers to this path to learning, though. The famed Caltech physicist Richard Feynman—surely one of the great physics teachers of all time—was extremely conscientious when it came to teaching by analogy. He avoided analogies that he found misleading or circular. It might be natural to think of electromagnetism as being mediated by imaginary "rubber bands," he said, but in the first place, rubber bands draw things together more strongly the further apart they get, whereas electromagnetism gets weaker with distance; and in the second place, rubber bands themselves work through electromagnetic interactions at the molecular level, so any understanding students derived through this analogy must needs be circular.
Care must be taken, too, not to stretch the analogy beyond its limits. The fact is that electrons don't orbit the nucleus in neat circles (or even ellipses) like planets orbiting the Sun. If we study further, we find that although planets can apparently orbit the Sun at any distance whatsoever, electrons are constrained to orbit the nucleus only at specific distances, which we can characterize as those distances which allow an integral number of electron waves to circle the nucleus. If we study still further, we find that electrons don't travel in any kind of orbit at all, but instead can be found at any location around the nucleus according to a probability distribution (or, equivalently, are simultaneously at all different points according to that distribution—at least prior to observation).
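Incidentally, that "integral number of electron waves" picture can be made quantitative in the old Bohr-de Broglie model: an electron of momentum mv has wavelength λ = h/mv, and requiring a whole number n of those wavelengths to wrap around a circular orbit of radius r gives

2πr = nλ = nh/mv

which, combined with the Coulomb attraction supplying the circular motion, yields allowed radii that grow as n². Those are the specific distances in question—though, as just noted, this tidy orbital picture is itself an analogy that the full quantum treatment eventually discards.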
The problem is that analogies are so darned appealing. The good ones yield correct answers to our questions so often that we lose track of where the limits of the analogies are, or even that there are any. We simply trust the analogies, often to our detriment. It's tempting to understand the budgetary situation of, say, the United States in relation to our personal budget; after all, there are many similar concepts and relationships: income, expenses, debt, balance, and so forth. It's tempting, but it's often misleading. But because we do understand many things correctly using that analogy, we become overconfident in areas where the analogy was never going to hold water.
My pet peeve in this regard is the rubber sheet analogy for general relativity. Given that general relativity was one of the major developments of 20th-century physics, you'd expect significant effort to be spent explaining it to the lay public. I mean, even people with only a vague notion of what physics is about have heard of Albert Einstein and "warped space."
Gravity is everywhere; we feel its effects all the time. And we've sort of internalized the Newtonian theory of gravity, which is that any two particles exert a gravitational force on each other, no matter how far apart they are; although the force drops off quite rapidly with distance, it never quite shrinks down to zero. We've internalized it so well that we hardly ever wonder how that force is mediated. How does that force get exerted across all that distance? By the Newtonian theory, I wiggle my finger here, and my finger's gravitational influence on the most distant galaxy, however faint, oscillates with the same frequency as my wiggling finger, and it does so instantaneously. Newton himself felt this conundrum most keenly, never mind his insistence that he did not "feign hypotheses."
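To put a rough number on "however faint," here's a back-of-the-envelope sketch in Python; every figure in it is my own order-of-magnitude guess, not a measured value.

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    G = 6.674e-11     # gravitational constant, N m^2 / kg^2
    m_finger = 0.05   # kg, a generous guess for a finger
    m_galaxy = 2e42   # kg, order of magnitude for a large galaxy
    r = 1e26          # m, roughly ten billion light-years away

    F = G * m_finger * m_galaxy / r**2
    print(F)          # about 7e-22 newtons: absurdly faint, but never zero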
Einstein's general theory of relativity ostensibly resolves all of that. It posits space not simply as a theater in which gravitational interactions take place, but as a physical, almost tangible thing that is affected by masses and in turn affects them. The usual term for this is curved space—a term that is justified in a mathematical sense but which is almost certain to mean nothing directly to anyone who isn't already a physicist. I imagine that the most common response is mute incomprehension.
So we explain what we mean by "curved space" by analogy. First of all, we should really be calling it "curved space-time," since in Einstein's theory time and space are almost inextricably interwoven. With three dimensions of space and one of time—well, that's a lot of dimensions. People don't visualize four dimensions very well. So we abstract away two of them: one of the spatial dimensions, and the one time dimension, leaving two spatial dimensions. Dropping a spatial dimension is probably OK, but the other cut is riskier: we've thrown away the only temporal dimension we had, and it's possible that something essential went with it!
But we're pressing on. We lay down an infinite rubber sheet, typically marked with grid lines. We plop down a big heavy ball, like a bowling ball. This is the Sun, we are told. It bends or curves or warps space. Sure enough, the rubber sheet is seen to dimple significantly. Then, we roll a smaller ball around the bowling ball, and because of the warping caused by the bowling ball—err, Sun—the smaller ball (representing the Earth, say) sweeps around in a neat circular or elliptical orbit. Just like the real planets.
This is an enormously popular representation of general relativity; even Carl Sagan's Cosmos, my favorite science documentary series of all time, uses it. And yet, in my opinion, it's fatally flawed. In the first place, it's circular, just like Feynman's rubber bands. We're told that the effect of the Sun's gravity can be interpreted in terms of the Sun's warping of nearby space, by analogy with the warping of the rubber sheet caused by the bowling ball. But what is it that causes the bowling ball to warp the rubber sheet? Gravity itself! We can't rightly claim to understand gravity if gravity is involved in the explanation as well.
Even that would be excusable for pedagogical purposes if the analogy were actually accurate. But it's not. In all of the rubber-sheet depictions of general relativity I've seen, and I've seen quite a few, only one includes a disclaimer that demonstrates what's wrong with it—a little-known primer on relativity written by Lewis Carroll Epstein called, appropriately enough, Relativity Visualized. (I heartily recommend it.) He makes the following point: In space, there is no universally preferred direction up or down; those directions are only understood in reference to some gravitational field. So the rubber sheet analogy, if it's really right, should work just as well if you flip the rubber sheet upside down, so that the warp goes upward (like a volcano) rather than downward. After all, it's not supposed to be the bowling ball itself that makes the other ball go 'round, but the warp. But if you roll the smaller ball toward the volcano, what happens? As any miniature golfer knows, it certainly doesn't orbit the volcano; instead, it either goes into the volcano, or it veers away from it, never to return.
But even that's not the worst of it. The irony of this analogy is that even though it's not a very accurate depiction of general relativity, it's a dead-on match for Newtonian potential energy wells. That's right: This immensely popular analogy, which is supposed to highlight how general relativity differs from Newtonian gravity, is instead a much better illustration of the very theory general relativity was intended to supplant!
I was so struck by this that I wrote up an exposition of general relativity for my astronomy Web site, which (on the off chance you've actually read this far) you can find here. In it, you'll find an analogy to general relativity that is hopefully understandable and hits much closer to the mark. (I even asked a physicist!)
But does anyone care? Nooooo, I'm sure we'll continue to see the rubber-sheet analogy trotted out at regular intervals on the Discovery Channel, with no disclaimer regarding its appropriateness.
Wednesday, August 26, 2009
How Random is Random?
We all think that we know when something is random. But how random is random?
Part of the aim of mathematics is to unify concepts. It's what makes mathematics more than just a collection of ways to figure things out. As a side effect, though, mathematical definitions tend to be a bit counterintuitive. For example, I think we all know what the difference between a rectangle and a square is: A square has all four sides of equal length, and a rectangle doesn't.
Except that a mathematician says that squares are rectangles, because to a mathematician, it's inefficient and non-unifying to say that a rectangle is a four-sided figure with four right angles, except when all four sides have the same length. It makes more sense, from a mathematical perspective, to make squares a special case of rectangles.
So hopefully it won't come as too much of a surprise if I say that a completely deterministic process, such as flipping a coin that always comes up heads, is still considered a random process by mathematicians who study that sort of thing. So is a coin that comes up heads 90 percent of the time. Or 70 percent. Or—and maybe this is the surprise, now—50 percent. The cheat coin is simply a special case of a random process. To a mathematician, none of these processes is "more random" than the others. They just have different parameters.
What we think of as randomness, mathematicians call entropy. This is related to, but not the same thing as, the thermodynamic entropy that governs the direction of chemical reactions and is supposed to characterize the eventual fate of the universe. (Another post, another time, perhaps.) It turns out that this "information-theoretic" notion of entropy corresponds pretty well to what the rest of us call randomness. For those of you who are even the slightest bit curious, the definition of entropy for a flipped coin is
S = - ( p_H lg p_H + p_T lg p_T )
where p_H and p_T are the probabilities for heads and tails, respectively, and lg is the logarithm to base 2 (with the convention that 0 lg 0 counts as 0). For a 50-50 coin, the entropy is S = 1; for a completely deterministic coin (a two-headed one, for instance), the entropy is S = 0. For something in between—say, one that comes up heads 70 percent of the time—the entropy is intermediate: in this case, S ≈ 0.88.
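If you find code clearer than formulas, here's a minimal sketch of that definition in Python (my own, purely illustrative):

    import math

    def coin_entropy(p_heads):
        """Entropy, in bits, of a coin that comes up heads with probability p_heads."""
        entropy = 0.0
        for p in (p_heads, 1.0 - p_heads):
            if p > 0:                      # convention: 0 * lg 0 counts as 0
                entropy -= p * math.log2(p)
        return entropy

    print(coin_entropy(0.5))   # 1.0 -- a fair coin is maximally entropic
    print(coin_entropy(1.0))   # 0.0 -- a two-headed coin is deterministic
    print(coin_entropy(0.7))   # about 0.88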
So, all right, how entropic is a real coin? The answer is that it's probably less entropic—less random, that is—than you think it is, especially if you spin it. A paper by researchers from Stanford University and UC Santa Cruz (via Bruce Schneier, in turn via Coding the Wheel) has seven basic conclusions about coin flips:
- If the coin is tossed and caught, it has about a 51 percent chance of landing on the same face it started with. (If it starts out as heads, for instance, there's a 51 percent chance it will end as heads.)
- If the coin is spun, rather than tossed, it can have a much larger than 50 percent chance of ending with the heavier side down. Spun coins can exhibit huge bias (some spun coins will fall tails up 80 percent of the time).
- If the coin is tossed and allowed to clatter to the floor, this probably adds randomness.
- If the coin is tossed and allowed to clatter to the floor where it spins, as will sometimes happen, the above spinning bias probably comes into play.
- A coin will land on its edge about once in 6000 throws.
- The same initial coin-flipping conditions produce the same coin flip result. That is, there's a certain amount of determinism to the coin flip.
- A more robust coin toss (more revolutions) decreases the bias.
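To get a feel for how little entropy that first result costs, here's a small simulation sketch, again in Python and again my own; only the 51 percent figure comes from the paper, and the all-heads starting condition is an assumption for illustration.

    import math
    import random

    p_same = 0.51   # the paper's same-side probability for a tossed-and-caught coin
    n = 1_000_000   # tosses, all assumed to start heads-up
    heads = sum(1 for _ in range(n) if random.random() < p_same)

    p = heads / n
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(p, entropy)   # roughly 0.51 and 0.9997 bits -- barely short of a fair coin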

Which brings me to buttered toast. You may have seen Murphy's Law of Buttered Toast put to the test by flipping slices of toast into the air, coin-style, and tallying how they land; done that way, the buttered side hits the floor only about half the time. The physicist Robert Matthews decided that was not quite kosher. (Ian Stewart, among others, has written about Matthews' analysis.) People, Matthews thought, don't usually toss buttered toast into the air; they accidentally slide it off the plate or table. That ought to be taken into account when analyzing Murphy's Law of Buttered Toast. And when he did take it into account, he found something rather unusual. A process that you might have thought was fairly entropic turned out to be almost wholly deterministic, given some not-so-unusual assumptions about how fast the toast slides off the table. Unless you flick the toast off the table with significant speed, the buttered side lands face down almost all of the time. And it has nothing to do with the butter making that side heavier; it's that the rotation put on the toast as it creeps off the edge is just enough to give it a half spin. Since the toast starts out buttered side up (one presumes), it ends up buttered side down.
Stewart recommends that if you see the toast beginning to slide off the table, and you can't catch it, you give it that fast flick, so that it can't make the half flip and lands buttered side up. You won't save the toast, unless you keep your floor fastidiously clean, but you might save yourself the mess of cleaning up the butter.
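Here's a toy version of that arithmetic in Python. Every number in it is an assumption of mine rather than anything from Matthews' actual analysis, but it shows how a modest spin plus a table-height fall adds up to about half a turn.

    import math

    g = 9.81       # m/s^2, gravitational acceleration
    h = 0.75       # m, an assumed table height
    omega = 8.0    # rad/s, an assumed spin picked up while pivoting off the edge

    t_fall = math.sqrt(2 * h / g)           # time for the toast to reach the floor
    turns = omega * t_fall / (2 * math.pi)  # total rotation during the fall
    print(turns)   # about 0.5 -- half a turn, so buttered side down

In this toy model, a hard flick amounts to a much smaller omega: the toast completes far less than half a turn and lands buttered side up.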
On the other hand, maybe there's another solution.