Sunday, August 3, 2014

Open and Shut

The other day, I was listening to sports radio, which I used to do quite a bit.  It's been a while now, though.  This time, they were chatting about how the Los Angeles Dodgers and Anaheim Angels*, the local baseball teams, did very little at the trade deadline.  (One of the trade deadlines, rather.  There are a few of them, apparently.)

One of the speakers thought the Dodgers should have done something at least.  He based his assertion on the notion that there is such a thing as a championship window, and that many teams, including the Dodgers, don't pay enough attention to that, but instead meander from season to season, doing their best to maintain the best team they can within the strictures of their finances.  He felt that the Dodgers should instead opportunistically go "all in" for a season or two, to maximize their chances of winning a title within that window, and pay the cost of mediocrity (or worse) down the road, rather than maintain respectability on a continual basis at the cost of never winning a title.

Actually, I rather think he overplayed the extent to which teams are unaware of their championship windows, the way that he was describing them.  I tend to believe the Dodgers are perfectly aware that there is a finite window for them, since that is true for everyone.  (Even the Yankees.)  Nonetheless, let's take a look at the championship window, and maybe there's something interesting to be divined from it.

Normally, when people think of a championship window, they tend to think of it as having a certain length—of time, that is, usually measured in years.  The Miami Heat have had a window of about five years, during which they went to the NBA Finals four times and won two championships.  It appears to be closing, given the departure of LeBron James, but it hasn't shut entirely, I think most people would say.

The fact that people do think of a championship window as having gradations of openness suggests that there's a second dimension to the championship window: its height, which we might conceive of as representing a team's probability of winning a championship during any given year.  For instance, if the Dodgers have, let's say, a 15 percent chance of winning the World Series any time in the next three years, we might say that the window is three years long—or wide, perhaps it's better to say—and 15 percent tall.

The sports talk host's opinion might then be construed as being that the Dodgers should have made some kind of deal that might shorten the window to two years, but increase its height to 22 percent, or 25 percent.  Would that be worth it?  Well, let's think about that a bit.  If you start off with a 15 percent chance of winning a title in each of three consecutive years, that means that at the end of the window, you'll have won 0.45 titles on average.

If, instead, you have a two-year, 25-percent window, you'll win an average of 0.50 titles.  On that basis, we might consider that kind of deal to be worth making (if you can make it).  On the other hand, if you have a two-year, 22-percent window, you'll win an average of 0.44 titles, which would seem to make that deal just barely not worth making.

The average title count isn't all that matters, however.  Extra importance is attached to the first title; there's a much bigger jump perceived from zero titles to one title than there is from one title to two titles (or, conceivably, to any larger number of titles).  We might evaluate championship windows based on the probability of winning at least one title during that window.

A three-year, 15-percent window wins at least one title about 38.6 percent of the time, a two-year, 25-percent window wins one about 43.8 percent of the time, and a two-year, 22-percent window wins one about 39.2 percent of the time, which would (according to this standard) make that deal just barely worth making.

Of course, a window need not be uniformly high.  Maybe the Dodgers could make a deal that would put their title probability up to 30 percent in 2014, but have it drop to 10 percent in 2015, and just 5 percent in 2016.  That would yield an average title count of 0.45—same as the initial situation—but now the probability of winning at least one title would be 40.2 percent.
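These figures all follow from treating each season as an independent trial: the average title count is the sum of the per-year probabilities, and the chance of at least one title is one minus the product of the per-year failure probabilities.  A few lines of Python (the function name is mine) will check any window you like:

```python
def window_stats(probs):
    """Evaluate a championship window from per-year title probabilities.

    Assumes each season is an independent trial.  Returns the average
    number of titles won and the probability of winning at least one.
    """
    avg = sum(probs)
    p_miss = 1.0
    for p in probs:
        p_miss *= 1.0 - p          # probability of missing every year
    return avg, 1.0 - p_miss

# Uniform windows:
print(window_stats([0.15] * 3))    # three years at 15 percent
print(window_stats([0.25] * 2))    # two years at 25 percent
print(window_stats([0.22] * 2))    # two years at 22 percent

# The front-loaded, non-uniform window:
print(window_stats([0.30, 0.10, 0.05]))
```

The last call, for instance, confirms the 0.45 average and the 40.2 percent chance of at least one title.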

At this point, it occurred to me that there's one aspect of championship windows that people don't talk about a lot, and when they do, it's not really couched in terms of the window.  The fact is that multiple teams can have championship windows at the same time, and when they do, they tend to squeeze against each other.  Imagine a top-heavy league in which two teams each have a three-year, 40-percent window, and the remaining 20 percent of a title is parceled out to all the rest of the teams.  Those two teams would each win, on average, 1.20 titles in the next three years, and each would have a whopping 78.4 percent chance of winning at least one title during that time.

Now, suppose that one of those two teams can make a deal that, in isolation, would front-load their window, raising it to 65 percent this year, but dropping it to 40 percent the following year, and only 15 percent the year after that.  The average title count would remain at 1.20, but the probability of winning at least one title goes to 82.2 percent.  Seems like a marginally better deal, right?

But what if the other team could make the same deal?  Worse yet, what if the other team could make the same caliber of deal, but an entirely different one, so that both deals could be made at the same time?  They can't both win a title this year with a probability of 65 percent; the best they can do is win one with 50 percent.  And in fact, it would very likely be less than that—let's say, 45 percent.  Perhaps, as a result of both front-runners making that deal, they would win the following year at 40 percent each, and the year after that at 20 percent each.

That yields an average title count of "only" 1.05, and a probability of winning at least one title of "just" 73.6 percent.  In other words, both teams are still good, but somehow worse off now than if neither of them had made a deal.  On the other hand, it's also the case that either team would be better off making the deal, whether or not the other team made their deal, which makes this situation a little Prisoner's Dilemma-ish.  (This reminds me that I've never written a post on the Prisoner's Dilemma, and I really should get to that at some point.)  It intrigues me that two of the also-ran teams could screw the front-runners up by conspiring to offer them both "good" deals.
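To see the squeeze in numbers, here's a quick sketch of the three scenarios, again assuming independent seasons and using the illustrative probabilities above:

```python
def p_at_least_one(probs):
    """Chance of at least one title, assuming independent seasons."""
    p_miss = 1.0
    for p in probs:
        p_miss *= 1.0 - p
    return 1.0 - p_miss

scenarios = {
    "neither deals": [0.40, 0.40, 0.40],   # both front-runners stand pat
    "one deals":     [0.65, 0.40, 0.15],   # the dealing team's window
    "both deal":     [0.45, 0.40, 0.20],   # each team's window if both deal
}

for label, window in scenarios.items():
    print(f"{label}: {sum(window):.2f} avg titles, "
          f"{p_at_least_one(window):.1%} chance of at least one")
```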

In practice, of course, you probably couldn't force the two front-runners to pull the trigger on the deals at the same time.  One deal would almost certainly make the news before the other.  At that point, it's not clear what the other team would do.  The rational thing to do might be to search for some other deal that has the same general impact, but perhaps back-loading it so that the championship windows dovetail with each other, rather than squeezing each other out.  My intuition, though, suggests that the other team would probably try to engage in the arms race, so perhaps my Prisoner's Dilemma-ish scenario would still play out.

*Yes, I realize that they are technically the Los Angeles Angels of Anaheim.  You'll pardon me if I refuse to employ that ungainly circumlocution.

(Also, this post would probably benefit from some figures.  I'll try to add them at some point.)

Monday, June 2, 2014

Fine, I'll Take It

So, this happened.  And I have to wonder—are we supposed to be impressed by this fine?  Because I'm pretty sure Phil Jackson isn't.

I don't know if Phil was aware that this was a violation of league rules.  I kind of suspect that he was; it doesn't strike me as the sort of thing he'd do without even considering whether it broke the rules.  I don't say that just because I'm somehow impressed with his knowledge of league restrictions.  I say it because this tampering makes sense strategically.

Listen: The Clippers are going to be sold for somewhere in the neighborhood of $2 billion.  If you didn't hear that correctly, do not pass GO, just return to the beginning of this paragraph.  Two billion dollars.  The Clippers.  I really admire (I won't go so far as to say "love" or even "like") the current incarnation of this team.  They hustle, they want to win, and for once, they have the talent to do it.  They remind me of the Lakers in the late 1990s.  But even the Lakers of the 1990s had some history.  What do the Clippers have?

And yet a former Microsoft CEO, whose previous claim to Internet fame was a clip in which he repeated the word "developers" approximately a zillion times, but who otherwise doesn't actually seem insane, felt the Clippers were worth $2 billion.  (Sorry if this grosses you out.)

Against that backdrop, consider what Phil Jackson has to gain by mentioning Derek Fisher's name in advance of the Thunder's ouster from the Western Conference Finals: Fisher now knows that he's wanted, on the short list for the Knicks job.  Is Fisher the best man for the job?  I don't know.  He has a reputation for clutch (built in part upon this shot), he's earned respect from much of the league (Salt Lake City fans excepted), and he's done it with seemingly very little in the way of natural physical gifts.  He's not a preternatural baller the way his longtime backcourt mate Kobe Bryant is.  It's quite conceivable that he could turn out to be a successful NBA coach.  Given the Knicks' recent history, that bar is not set excessively high.  Jackson's words have made it a bit more likely that Fisher will lean toward New York than he would have otherwise.

So let's suppose that the Knicks are currently worth as much as the Clippers are, that their current state of basketball inferiority is compensated for by the fact that they are New Friggin' York.  The team finished with 37 wins this past season, a .451 clip.  How much do you think they'd be worth if they finished at .500 (41 wins)?  How much if they finished at .600 (49 wins)?  I think conservatively, the team would increase their net value by at least $10 million per additional win to start with, and each successive win would only increase that margin.  And Jackson's supposed to be worried about $25,000?

Admittedly, Jackson doesn't get all of that increase in value.  That's James Dolan's.  Still, Dolan has to pay Jackson, and he'd be a lot happier about paying Jackson if his team were suddenly worth $100 million more.  The more candidates Jackson has to choose from, the more likely it is that the team will make that leap.  That's the real value of the so-called tampering with Derek Fisher: It makes it more likely that Jackson will have him to choose from.  Nothing in his words binds him to choose Fisher at all.  There's very little downside, compared to that negligible $25,000 fine.

So what's it worth, exactly?  I'll take a look at that in a future post, but for now, I'm confident Phil Jackson knows what he's doing.

Friday, May 30, 2014

Stages of Prejudice

I don't want to become a downer, I really don't, but when the urge to write hits, I write, and here I am again with another post on prejudice.

I will admit that the immediate trigger for this post is the Elliot Rodger case, but although that's obviously at the forefront of our minds right now, let's not kid ourselves into thinking that a year-and-a-half from now, anyone not directly connected with the case will still be thinking hard about it.  There will always be new tragedies—that's part of being human—and focusing too closely on one of them leads to the fallacy that one makes progress in apprehending a forest by understanding each tree individually.

Still, let's start with that case.  It seems evident that there were some serious problems with Elliot Rodger, to say the least.  It also seems evident to me, however, that those problems mostly just exaggerated attitudes that are already floating around in society all the time: that for men, women are prizes to win, plot devices to negotiate; that because some men are awful, a man deserves a woman's love merely for not being awful; etc.

Now, it may seem ridiculous to say that people think this all the time.  I'm quite certain that if I were to ask a hundred people if they thought like that, and if all hundred were to answer the question sincerely (a big if, I concede), very few—maybe none—would admit to thinking like that.  Because when you put it that baldly, very few people—though not none—do think like that.

But those attitudes are there, all the same.  I don't think there is anyone, myself certainly included, who is completely free of these attitudes.  Such a person would probably have had to grow up completely isolated from everyone else.  Do you think there are many such people around?  I don't.

Listen: Arthur Schopenhauer, whom I've quoted before, once said, famously,

"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident."

That's probably painting with an overly broad brush, but there's a kernel of truth in it.  (I guess I don't know if it passed through three stages.)  At the risk of oversimplification, a similar process happens to prejudices on their way out, but in the reverse order.  First, a prejudice is accepted as being self-evident.  When racism was at its post–Civil War peak in the United States, in the late nineteenth and early twentieth century, it was received wisdom that African Americans were simply inferior to European Americans, and could appropriately be treated as such by the latter.  (Other groups were subjected to racism, too, but none so violently as African Americans.)  To be sure, African Americans themselves didn't tend to feel that way, but their opinion was roundly ignored, coming as it did from inferior African Americans.

Eventually, there was violent opposition to this attitude, coming to a boil in the middle part of the twentieth century.  There was always some violent opposition, even before the twentieth century, but it never managed to change societal attitudes.  We can speak all we like of peaceful opposition, but I'm not sure we get what advances were made in the 1950s and 1960s, and later, without violent opposition of some sort.  (I know that's not necessarily the kind of violence that Schopenhauer was talking about, but I find the parallel poetic, OK?)

Finally, it is ridiculed.  And I do think we have, to a certain extent, reached a point where racism, open racism, is ridiculed.  Even an assay at a kind of academically-treated racism (in The Bell Curve) was ridiculed, albeit in a meticulous, academic sort of way.

(Incidentally, you would think that we're fighting thousands of years of racism, but racism in the way that we think of it today—based principally on skin color—is a comparatively new phenomenon.  Several hundred years ago, Europeans considered Africans unusual-looking, but not inherently inferior.  It was only when they found they could manipulate them with marginally more advanced technology that they then had to justify the manipulation.  We don't even know for sure whether the Egyptians of the dynastic era were "Nubians."  As I understand it, we think they were, but we don't know, because people of the time didn't think it noteworthy enough to comment on consistently.  That's not to say that they weren't prejudiced, but their prejudices apparently had more to do with place of origin than with skin color.)

And do we have it there, are we done with it?  Have we put racism to bed?  Not by a long shot, for after all, it is still around to be ridiculed.  We do not have to go back into the history books, to find decades-old instances of racism to poke fun at.  There's more than enough to go around now.

Part of the problem, of course, is that racism, like most prejudices worthy of the name, is subconscious, automatic.  It's difficult to reason away, even if you realize it would be better to do so.  We are not so different from little children who, when scolded for drinking directly from the milk jug, don't actually stop drinking directly from the milk jug.  They just figure out how to do it without getting caught.

So we've learned, as a group, how to be prejudiced without getting caught.  We learn that if we denounce racism openly, we're less likely to get caught for being racist covertly.  We learn that if we apologize for our prior racism, we're less likely to get caught for our present racism.  We learn that if we have a non-racist cover story for a racist act, we're less likely to get caught.  We even learn—in one of the few examples of the random person on the street "getting" statistics—that if our racist decisions are parsimoniously made, we're less likely to get caught, because the sample size is too small.

There's nothing special about racism, in this regard (and this regard alone).  The kind of sexist prejudice that reigned in the Elliot Rodger case is at about the same stage.  Open sexism is (mostly) ridiculed, so it's been sublimated, suppressed in favor of covert sexism.  You know, the kind where we root for the loyal nerd friend crushing on a girl over the glib jock, because he's, well, loyal, and all guys know how irresistible it is when an otherwise plain girl is always there for us.  Well, don't we?

In some sense, there's been progress made, because it's now clear that We Won't Stand For That Anymore (In Public).  On the other hand, it might be a lot harder to stamp out the covert sexism, the hidden racism, especially if it's the 90 percent of the iceberg hanging out under water.  It might be insuperably difficult to root out every last bit of it.

In fact (and I recognize that this is a controversial question to even ask), is there much point in trying to root out every last bit?  Before you excoriate me, let me draw an analogy with science.  Science is a social endeavor in which the community at large attempts to address questions about nature in an objective manner.  It does this, not by attempting to eliminate bias in scientists (for it's recognized that this is plainly too hard), but by having procedures in place for recognizing biases and even potential biases, and compensating appropriately for them.  These procedures, when properly applied, are so successful that it is difficult for scientists to influence their results materially without getting caught—so difficult, in truth, that it must be done intentionally and consciously, if it be done at all.  It cannot be just the result of subconscious bias.

The measures that we take in society at large to deal with biases are not at that relatively advanced stage yet.  These so-called "social programs" are bluntly applied, and although they can and do help, that bluntness also tends to make them easy targets for their detractors.  To be fair, the biases they deal with are probably more intractable.  Science has the luxury of dealing with one almost infinitesimally small question at a time.  Society is, at least with our present understanding, much more tangled.  But the present practice of avoiding getting caught won't work in the long run, and I have a sneaking suspicion that when prejudices are finally dealt with successfully, if they ever are, it will be by having such measures in place that are considered culturally de rigueur, and not by eliminating them entirely.

Tuesday, February 4, 2014

One Language, Under Force

I watched the Super Bowl.  Well, "watched" might be putting it a bit strongly.  I watched the first part, a very short part, in which the Broncos seemed as though they might have had a decent chance to win.  After that, I watched mostly to see what other parts could fall off the Denver bandwagon.  Congratulations to Seattle; they thoroughly outclassed their opponents.

That left the halftime show and the commercials.  I have to say that I didn't watch those very assiduously either, though I find the complaint that they aren't as good as they used to be about a step or two shy of yelling at the neighborhood kids to get off the lawn.  It's Cranky Old Geezer time!

But even through my haze of disappointment in the football game, I did manage to get a look at the Coca-Cola "America the Beautiful" commercial.

The one-minute spot consists of a sequence of short video vignettes of the broad span of Americana, against which is sung "America the Beautiful."  There's nothing at all contentious about that, as far as it goes, of course.  What seems to have gotten lots of people in a lather is the fact that, except for the first and last phrase, the song is sung in several different languages.  No doubt Coca-Cola wanted to evoke the idea that part of what makes America beautiful is the wide variety of people that make it up, and that's what the commercial does.  In fact, Coca-Cola went so far as to follow the commercial up with a tweet, just in case someone missed the point:

Apparently, that's not the message that many people got.  I imagine the reaction of Coca-Cola to some of the retweets ranged from bemused concern to horrified astonishment.  (Or maybe they're more cynical than that; it's quite plausible.)  I don't have the patience to drag them all out, obviously, so I'll just link to a collection of some of them here.

As might be expected, there's also been a backlash against those reactions, lambasting them as racist or ignorant or condescending, or who knows what.  I won't attempt to characterize them one way or the other; as I like to say, people feel what they feel, and it's pointless to tell them they're "wrong" to feel that way.  But I think it is interesting to try to suss out just why they feel that way.  What is it about diversity, in what seems like such a harmless context, that spooks some people?  Is our sense of national pride so fragile that it relies upon the exclusive use of a language that was brought forth onto this continent for the first time not half a millennium ago?  I have no idea whether one of the languages used in the commercial was an Amerind tongue (maybe someone can tell me), but I wonder what the reaction to that would be.

The melting-pot metaphor used to be a point of pride for us; it's a central point of one of those Schoolhouse Rock shorts, for those of you who remember those.  I don't remember anyone lashing back at those a few decades ago.  Shall we say to those who object to singing "America the Beautiful" in anything other than English that they are simply being too sensitive?

In connection with that possibility, let me introduce another commercial, which aired last year (and was brought back to mind by a friend of mine):

Notice anything out of the ordinary?  I have to admit that the first time I watched this, I didn't.  Then, my friend pointed out, "Look at who's in the box."  My reaction to this was, "Oh, PoCs in a box," since the forward-thinking hotel guest (who incidentally spouts some meaningless marketing mumbo-jumbo, but that's neither here nor there) is a white male, and all the persons of color, along with a white male or two (for variety I suppose), are in the box.  Even the guy who thinks to venture out before scurrying back to the safety of the box is a white male.  (I must say that the look of relief on the woman next to him is hilarious.  She should get a Clio for that.)

A "natural" reaction by some people, in response to such a comment, might well be "Oh, you're being too sensitive.  They had to put someone outside the box; it just so happens it was a white guy."  In isolation, that point might be arguable.  However, it happens too frequently for it to be just random chance.  The vast majority of business travellers I work with are white males, and it's not surprising that they (the primary target of the commercial, after all) would prefer to see someone like themselves as the hero of the story.

I see too frequently, however, the objection that people of color have an inferiority complex, that they play the race card too readily, that they are too comfortable in the victim role.  Does it really make sense that a group of people who are actually empowered would feel that way, that instead of doing what they're capable of, they would rather lie down and cry foul?  I'm not sure that there's been a significant group of people like that in the history of ever.  Regardless of whether that group is a victim of discrimination as they claim, or for some constitutional reason is less capable, or both, it's utterly implausible to me that they would rather blame someone else than have more power.  Blame may be a salve for what ails them, but equal power is the cure.  A small number of them might miss that, but not the whole group.

Could something similar be at work with the reaction to the Coca-Cola commercial?  Is it that people feel upset about the commercial because it represents a situation they have little or no immediate control over?  Undoubtedly that's part of it.  After all, the tweets are rife with threats to boycott Coke, but these threats would have essentially no impact on Coke's bottom line even were they credible.  As it is, I suspect the vast majority of those would-be boycotters will be back to drinking Coke before the month is out.  Inexpensive habits can be terribly hard to break.  And at any rate, Coca-Cola is here serving only as a proxy for what some evidently see as a distressing trend toward inclusiveness.

Isn't it provocative, though, that each side sees a given cultural portrayal as betraying an awful truth, and often speaks out vigorously against it—something that the other side views as oversensitive and tiresome?  And this may be the crux of the matter: that there is a kind of massive joint cognitive dissonance between the way that the various groups perceive the current cultural situation, and the way that the various groups think the situation should be.  This dissonance is made all the more contentious by the striking symmetry between the views.

There is one thing, however, that distinguishes the two cases, as exemplified by these commercials, and that is the distinction between equality and uniformity, something that has stuck with me ever since it was first explained to me in stark simplicity in Madeleine L'Engle's A Wrinkle in Time.  What is at the root of the desire for uniformity (for I see no other way to describe, as succinctly, the demand that people sing this song in English) that the Coca-Cola tweets share?  It seems to me that it aims for a feeling of security: that if we only trust those people who cleave to the majority culture, then all will be well in this world gone mad.  But if that's so, is it necessary to demand uniformity?  Can't we feel secure without insisting on the elimination of the traces of other cultures?  Why not cut the middleman of uniformity out of the picture entirely?

I fear, though, that this is not likely until people see that this kind of uniformity not only isn't the end goal, but is actually counter-productive as far as any real kind of security is concerned.  I like to say that religion is a laser of the people, by which I mean that it moves people to behave and operate in unison, almost as though they constituted a single being, which can do certain things that the individuals couldn't do separately.  But that same uniformity has a cost, because if all the individuals uniformly share a weakness, that weakness is passed on to the group as a whole, and is not amortized, so to speak.  I'm reminded of the old Aesop's fable in which an old man, near the end of his life, demonstrates to his sons the value of unity by tying together a bundle of sticks.  That bundle, of course, could not be broken by vigorous effort, even as the individual sticks were easily snapped.  It's ironic to think, though, that one could quickly slice through the bundle if one were to cut lengthwise.

The amortization of weaknesses is what makes diverse groups so robust.  It's why a farm made up of a single strain of high-yield crops is not a good long-term strategy.  It's why a diversified investment portfolio is safer than one that relies on a single kind of asset.  And why shouldn't the same kind of reasoning apply just as well to people as to crops or funds?  Yes, uniformity is good in moderate doses, for it enables feats that could not be achieved otherwise, but in doses large enough to dominate an entire country, it's dangerous.  It's dangerous not only because it makes the country more vulnerable, but also because it is such an appealing dogma.

Who knows if there will come a time when ads like Coca-Cola's will not produce such a strong negative reaction.  But if that time comes, it will be because people understand, viscerally, the value of diversity, and do not mistake it for the demise of national security.

Tuesday, December 17, 2013

The Travelling Santa Problem

This is a problem that briefly entertains me each year around this time, because it's mathematical and I'm me.

The question is, "How fast does Santa have to go to visit all those homes?"  We're not going to assume he has to go down chimneys or anything like that; he just has to get to all of the homes.  But assuming that Santa is real, he is still subject to the laws of physics.  No getting around those.

The Travelling Santa Problem bears a distinct resemblance to another classic problem of computation, the Travelling Salesman Problem.  A typical statement of this problem (made as non-gender-specific as I can manage) goes as follows: A sales rep must visit the capital cities of all 48 contiguous states, in whatever order desired.  What order minimizes the total travel distance?  It doesn't matter whether the sales rep drives or flies; what matters is that there is a definite and known distance between any pair of capitals.

For small numbers of capitals, this problem is trivial.  Consider three cities: Sacramento CA, Carson City NV, and Phoenix AZ.  The air distances between these cities are S-C = 160 km, C-P = 930 km, and P-S = 1016 km (I got these figures from the City Distance Calculator).  The sales rep, in order to minimize the total travel distance, should avoid the long Phoenix-to-Sacramento leg, and visit the cities in the order Sacramento, Carson City, Phoenix (or the reverse).

Adding a fourth city does complicate matters somewhat.  The cost of adding one city is three new distances.  If we add, say, Boise ID, the new distances are S-B = 712 km, C-B = 580 km, and P-B = 1179 km.  And instead of only three essentially different routes, there are now twelve: B-C-P-S, B-C-S-P, B-P-C-S, B-P-S-C, B-S-C-P, B-S-P-C, C-B-P-S, C-B-S-P, C-P-B-S, C-S-B-P, P-B-C-S, and P-C-B-S.  (There are twelve other orders, for a total of 4! = 24, but the other twelve are the reverse of those already listed, and are the same for the purpose of total distance.)  By exhaustive calculation, we find that the minimal path is B-C-S-P (or P-S-C-B), with a total length of 580+160+1016 = 1756 km.
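For four cities, the exhaustive calculation is short enough to sketch in Python; itertools.permutations generates all 24 orders, and each route and its reverse come out the same length:

```python
from itertools import permutations

# Air distances in km between the four capitals, keyed by unordered pair:
# S = Sacramento, C = Carson City, P = Phoenix, B = Boise.
dist = {
    frozenset("SC"): 160, frozenset("CP"): 930, frozenset("PS"): 1016,
    frozenset("SB"): 712, frozenset("CB"): 580, frozenset("PB"): 1179,
}

def path_length(route):
    """Total length of a route, summed over consecutive pairs of cities."""
    return sum(dist[frozenset(pair)] for pair in zip(route, route[1:]))

best = min(permutations("BCPS"), key=path_length)
print("".join(best), path_length(best))
```

The same brute force is fine up to maybe ten cities or so; beyond that, the factorial growth in the number of routes makes it hopeless.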

One thing that becomes quickly apparent about this problem is that you can't solve it just by picking the three shortest distances, because those three distances may not connect all of the cities, or may not do so in a path.  (Here, for instance, the three shortest distances, 160, 580, and 712 km, form a closed triangle among Sacramento, Carson City, and Boise, leaving Phoenix out entirely.)  Instead, in this case at least, we had to try all the different routes and pick the shortest overall route.  In fact, the Travelling Salesman Problem is NP-hard; it belongs to a family of problems whose solution times are all expected to grow exponentially with the size of the problem, barring some unexpected theoretical advance.

In the case of the Travelling Santa Problem, however, we are not interested in knowing how Santa knows what order to visit the homes, or even what order he actually visits the homes.  We just need to know, to a rough order of magnitude, how far he must travel to visit every home.

Let us consider that the current population of the Earth is about seven billion.  How many homes is that (if by home we mean any single living unit)?  There are some homes with lots of people in them, living as a unit; on the other hand, there are many homes with only one person in them.  We probably would not be too far off if we assume an average of two people per home.  That would mean 3.5 billion homes to visit.

Now, if these 3.5 billion homes were evenly distributed across the surface of the Earth, which has a surface area of about 510 million square kilometers, each home would have to itself an average of about 0.15 square kilometers, which means the mean home-to-home distance (roughly the square root of that area) would be about 0.4 km, and Santa would have to travel about 3.5 billion times 0.4 km, or about 1.4 billion km.  If we assume that Santa has to travel all that way in a single day (86,400 seconds), that means he must travel about 16,000 km/s, a little over a twentieth of the speed of light.  So, very fast (about 35 million mph), but at least doable in principle.
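This first back-of-the-envelope estimate is easy to check directly.  A sketch in Python, using the round figures above (the prose rounds the hop distance up to 0.4 km, so the printed values come out slightly lower):

```python
EARTH_AREA_KM2 = 510e6   # Earth's total surface area
HOMES = 3.5e9            # seven billion people, ~2 per home
DAY_SECONDS = 86_400     # one day to make all deliveries

area_per_home = EARTH_AREA_KM2 / HOMES  # ~0.15 km^2 per home
hop_km = area_per_home ** 0.5           # ~0.4 km between neighbors
total_km = HOMES * hop_km               # ~1.4 billion km overall
speed_km_s = total_km / DAY_SECONDS     # ~16,000 km/s

print(f"{hop_km:.2f} km per hop, {total_km:.2e} km total, {speed_km_s:,.0f} km/s")
```

The point isn't precision; it's that even the crudest version of the estimate already pins down the order of magnitude.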

In truth, it's a bit better than that.  In the first place, most of the Earth's surface is water; only about 30 percent of it is land.  Of that, the polar lands, especially around Antarctica, are not readily habitable in the usual way, so that perhaps only about 25 percent of the Earth's surface has any appreciable habitation.  That cuts the total distance in half to about 700 million km, and the necessary speed to 8,000 km/s.

Even better, human homes are not evenly distributed across the habitable 25 percent of the Earth's surface, but are clumped together in towns, villages, and cities large and small.  We might consider a clump to be any collection of homes that are within 100 meters of at least one other home.  This means, among other things, that a single isolated home is considered to be a clump.

It's hard to know the exact number of such clumps in the world, but perhaps we would not be too far off if we let the average clump size be 350 homes.  In that case, the total number of clumps would be 3.5 billion, divided by 350, or 10 million clumps.  Spread over the 130 million square kilometers of habitable surface, each clump would have about 13 square kilometers to itself, so the average clump-to-clump distance would be about 3.6 km, and the total clump-to-clump travel distance would be 10 million times 3.6 km, or about 36 million km.  To that would have to be added the home-to-home travel in each clump of 350 homes.  If each pair of neighboring homes is separated by 100 meters, and there are 350 homes, then each clump requires an additional 35 km, times 10 million clumps, or 350 million km, for a grand total of about 390 million km.  That cuts the necessary speed to about 4,500 km/s.

Of course, 100 meters is just the maximum cutoff distance between homes in a clump.  The average distance would be rather smaller.  In a major city like New York, for instance, the average distance is probably closer to 10 meters; in other, less dense areas, it would fall somewhere between that and the cutoff.  If the overall average is on the order of 10 meters, the total internal distance for a clump of 350 homes would be more like 3 or 4 km, or about 35 million km across all clumps, for a total distance of about 70 million km, with a required speed of about 800 km/s.

Finally, statistical studies show that if N clumps are randomly distributed over an area of about A = 130 million square kilometers (as we've assumed here), the unevenness caused by that random distribution creates some clumping in the clumps, so that the total clump-to-clump travel distance is given approximately by  √(NA/2) = √(650 million million square kilometers) = 25 million km, lowering the total distance to 60 million km, and the speed to 700 km/s.
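The final tally can be sketched the same way, combining the √(NA/2) clump-to-clump estimate with the roughly 35 million km of within-clump travel:

```python
from math import sqrt

HABITABLE_KM2 = 130e6  # ~25% of Earth's surface
CLUMPS = 10e6          # 3.5 billion homes / 350 homes per clump
INTERNAL_KM = 35e6     # ~3.5 km of travel inside each clump, x 10 million
DAY_SECONDS = 86_400

clump_to_clump = sqrt(CLUMPS * HABITABLE_KM2 / 2)  # ~25 million km
total = clump_to_clump + INTERNAL_KM               # ~60 million km
speed = total / DAY_SECONDS                        # ~700 km/s

print(f"{clump_to_clump:.1e} km between clumps, "
      f"{total:.1e} km total, {speed:.0f} km/s")
```

Four successive refinements, and each one just shaves the same order-of-magnitude estimate rather than overturning it.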

That's still about 1.6 million mph—fast enough to go around the Earth in a single minute—so perhaps Rudolph better get going.

Wednesday, May 15, 2013

A Pair of Potter Poems (Picked by Peter Piper?)

Here are a couple of poems.  Sonnets again.  The schtick here is that they concern a pair of Harry Potter characters.  It should be trivial to figure out who they are (although it might require a dictionary for our younger readers).

(I admit I felt compelled to put these here so that the three of you coming from Ash's poetry blog don't feel like you got put on a bus to Hoboken, Nerd Jersey.)


The boy stepped forth and took his place beneath
the brim.  A minute passed, now two, then three,
within which time the shades of bravery
and justice armed their forces to the teeth.
Though all saw brav'ry take the palm and wreath,
it lay in waiting, seeming idly:
At length, his courage glowed for one to see,
demure, as though he'd drawn it from its sheath.
It wavered, unaccustomed to the light;
it felt about, uncertain of its tread.
Till blunt necessity called out its right,
to cleave the foul ophidian at its head.
Oh say! where night left off and day began,
to slumber off a boy and wake a man.


He stands, a glower made inscrutable,
ambiguous.  He wreaths his honest thoughts
in coronets of random noise, in knots
of truths both blank and indisputable.
The swollen ranks, beneath his gaze, bear gloom.
Their dully thronging stride stamps out the time
left to his bitter charge, and neither rhyme
nor reason can forestall his chosen doom.
Though he may carp or cavil over weights
none else has will or wherewithal to bear,
that memory, besmirched, of onetime mates
does focus his poor genius in its glare.
So pity not the fool who plays the lie--
once! twice! now thrice!--to gamble and to die.

Copyright © 2011 Brian Tung

Tuesday, May 7, 2013

Why CPU Utilization is a Misleading Architectural Specification


Actually, this post only has a little to do with queueing theory.  But I can't help tagging it that way, just 'cause.

Once upon a time, before the Internet, before ARPANet, even before people were born who had never done homework without Google, computer systems were built.  These systems often needed to plow their way through enormous amounts of data (for that era) in a relatively short period, and they needed to be robust.  They could not break down or fall behind if, for instance, all of a sudden, there was a rush in which they had to work twice as fast for a while.

The companies that were under contract to build these systems were therefore compelled to build to a specified requirement.  This requirement often took a form something like, "Under typical conditions, the system shall not exceed 50 percent CPU utilization."  The purpose of this requirement was to ensure that if twice the load did come down the pike, the system would be able to handle it: that the system could handle twice the throughput it experienced under typical conditions, if it needed to.

One might reasonably ask, if the purpose was to ensure that the system could handle twice the load, why not just write the requirement in terms of throughput, using words something like, "The system shall be able to handle twice the throughput of a typical workload"?  Well, for one thing, CPU utilization is, in many situations, easier to measure on an ongoing basis.  If you've ever run the system monitor on your computer, you know how easy it is to track how hard your CPU is working, every second of every day.  Whereas, to test how much more throughput your system could handle, you'd actually have to measure how much work your CPU is doing, then run a test to see if it could do twice as much work without falling behind.  A requirement written in terms of CPU utilization would simply be easier to check.

For another thing, at the time these requirements were being written, CPU utilization was an effective proxy for throughput.  That is to say, in the single-core, single-unit, single-everything days, the computer could essentially be treated like a cap-screwing machine on an assembly line.  If your machine could screw caps onto jars in one second, but jars only came down the line every two seconds, then your cap-screwing machine had a utilization of 50 percent.  And, on the basis of that measurement, you knew that if there was a sudden burst of jars coming twice as fast—once per second—your machine could handle it without jars spilling all over the production room floor.
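In the single-everything world, the cap-screwing arithmetic is all there is to it.  A minimal sketch (the one-second and two-second figures are just the example numbers from above):

```python
service_time = 1.0  # seconds for the machine to cap one jar
interarrival = 2.0  # seconds between jars under typical load

utilization = service_time / interarrival  # fraction of time busy
max_rate = 1.0 / service_time              # jars/second at full tilt
burst_rate = 1.0                           # jars/second during the rush

print(f"utilization = {utilization:.0%}")         # 50 percent
print("burst is survivable:", burst_rate <= max_rate)
```

With one machine and one stream of jars, utilization and headroom really are two views of the same number, which is why the old requirement worked.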

In other words, CPU utilization was quite a reasonable way to write requirements to spec out your system—once upon a time.

Since those days, computer systems have undergone significant evolution, so that we now have computers with multiple CPUs, CPUs with multiple cores, cores with multi-threading/hyper-threading.  These developments have clouded the once tidy relationship between CPU utilization and throughput.

Without getting too deep into the technical details, let me give you a flavor of how the relationship can be obscured.  Suppose you have a machine with a single CPU, consisting of two cores.  The machine runs just one single-threaded task.  Because this task has only one thread, it can only run in one core at a time; it cannot split itself to work on both cores at the same time.

Suppose that this task is running so hard that it uses up just exactly all of the one core it is able to use.  Very clearly, if the task is suddenly required to work twice as hard, it will not be able to do so.  The core it is using is already working 100 percent of the time, and the task will fall behind.  All the while, of course, the second core is sitting there idly, with nothing to do except count the clock cycles.

But what does the CPU report as its utilization?  Why, it's 50 percent!  After all, on average, its cores are being used half the time.  The fact that one of them is being used all of the time, and the other is being used none of the time, is completely concealed by the aggregate measurement.  Things look just fine, even though the task is running at maximum throughput.
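The arithmetic of the concealment fits in a few lines.  A sketch of the two-core scenario just described:

```python
# Busy fraction for each core: one saturated, one completely idle.
per_core = [1.0, 0.0]

# What the system monitor reports: the average across cores.
aggregate = sum(per_core) / len(per_core)

# What actually matters for the single-threaded task: is any core pegged?
saturated = any(u >= 1.0 for u in per_core)

print(f"aggregate utilization: {aggregate:.0%}")  # looks comfortably low
print(f"some core saturated: {saturated}")        # yet there is no headroom
```

Averaging throws away exactly the information the requirement was meant to capture: the 50 percent figure is true, and useless.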

In the meantime, while all of these developments were occurring, what was happening with the requirements?  Essentially nothing.  You might expect that at some point, people would latch onto the fact that computing advances were going to affect this once-firm relationship between CPU utilization (the thing they could easily measure) and throughput (the thing that they really wanted).

The problem is that requirements-writing is mind-numbing drudge work, and people will take any reasonable measure to minimize the numbness and the drudge.  Well, one such reasonable measure was to see what the previous system had done for its requirements.  What's more, those responsible for creating the requirements were, in many cases, not computer experts themselves, so unless the requirements were obviously wrong (which these were not), the inclination was to duplicate them.  That would explain the propagation of the old requirement down to newer systems.

At any rate, whatever the explanation, the upshot is that there is often an ever-widening disconnect between the requirement and the property the system is supposed to have.  There are a number of ways to address that, to incrementally improve how well CPU utilization tracks throughput.  There are tools that measure per-core utilization, for instance.  And even though hyper-threading can also obscure the relationship, it can be turned off for the purposes of a test (although this then systematically underestimates capacity).  And so on.

But all this is beside the point, which is that CPU utilization is not the actual property one cares about.  What one cares about is throughput (and, on larger time scales, scalability).  And although one does not measure maximum throughput capacity on an ongoing basis, one can measure it each time the system is reconfigured.  And one can measure what the current throughput is.  And if the typical throughput is less than half of the maximum throughput—why, that is exactly what you want to know.  It isn't rocket science (although, to be sure, it may be put in service of rocket science).

<queueingtheory>And you may also want to know that the throughput is being achieved without concomitantly high latency.  This is a consideration of increasing importance as the task's load becomes ever more unpredictable.  Yet another reason why CPU utilization can be misleading.</queueingtheory>
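To put one number on that queueing-theory aside: the textbook M/M/1 model (Poisson arrivals, exponential service times; a standard idealization, not a claim about any particular system) gives the mean time a job spends in the system as W = 1/(μ − λ).  A sketch of how latency climbs as utilization approaches 100 percent:

```python
# Textbook M/M/1 result: mean time in system W = 1 / (mu - lambda).
def mean_latency(arrival_rate, service_rate):
    """Average time a job spends waiting plus being served, in seconds."""
    assert arrival_rate < service_rate, "queue is unstable at or above 100%"
    return 1.0 / (service_rate - arrival_rate)

service_rate = 100.0  # jobs/second the system can complete (assumed figure)
for utilization in (0.5, 0.9, 0.99):
    arrival_rate = utilization * service_rate
    w_ms = mean_latency(arrival_rate, service_rate) * 1000
    print(f"utilization {utilization:.0%}: mean latency {w_ms:.0f} ms")
```

Going from 50 percent to 99 percent utilization multiplies the mean latency fifty-fold in this model; a system can be "achieving its throughput" while its response time quietly falls off a cliff.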