“Extremism” is a Stupid Word

Something that has been fashionable to complain about in the last few decades is “extremism”. What is actually meant is something that is extremely out of the mainstream, but the problem is that it’s well known that the majority of people can be quite wrong. Indeed, given how recently deep racism was the mainstream in the United States (and analogous things can easily be found elsewhere, such as anti-Semitism in Europe), simply accusing people of holding a minority view is not viable. So this is done under the cover of claiming that the problem isn’t that the person is in a minority position, but that his views are on the extreme of some spectrum, making them wrong, since truth always lies in the middle of all extremes.

The thing is, this is—to put it mildly—extremely stupid.

The truth is not always in the middle. This is very easy to see by just considering cases where no one is a moderate today. Everyone is an extremist about how much slavery should be allowed. Everyone is an extremist on whether the human race should be extinguished (most people hold an extreme ‘no’, some people hold an extreme ‘yes’, but no one holds a moderate position). You will find exceedingly few moderates on the subject of whether the Jews should be wiped out.

This gets even clearer if you consider cases where no one was ever on the opposing side. Should we build as many nuclear weapons as possible and detonate them over all of the landmasses of the earth? The correct position is not a moderate one between “yes” and “no”. Should we salt all of the farmland on the earth? Should we beat people who go out in public? Should we mandate that anyone who leaves his house drag a 500 pound rock? Should rape be legal? Should rape be state-sponsored, with generous bounties paid to rapists?

The correct position on all of these is not moderation.

By the way, I just happened to pick examples where the correct position is an extreme ‘no’. One can easily come up with examples where the correct answer is an extreme ‘yes’: should babies be suffered to live? Should we permit people to breathe air? Should we appreciate beauty? Should we love people? Should we try to be good? Should we be honest?

The correct position on all of these is not moderation, either.

Now, if you point this out to the people who say that the problem is extremism, they will get angry at you because this isn’t what they meant. And, indeed, it’s not. The problem is that it’s what they said. The reason that they’re getting angry is that they don’t like being called out on the fact that they’re lying to you. They are actually complaining about people being outside of the mainstream despite knowing that that isn’t a valid criticism.

Curiously, liars always seem to think it rude to point out that they’re lying.

There is an exception to the above, by the way, and that is what amounts to a heresy. It is always a mistake to take a single virtue (that isn’t the all-encompassing virtue of love, as in agape, the love of God) and to value only it at the expense of all other virtues. Thus courage is good, but not to the exclusion of honesty, mercy, faith, love, etc. Mercy is good, but not to the exclusion of justice, honesty, etc. This is not normally how people use the term “extremism” as a critique, but if they were to, that would be a valid critique. It would be better to criticize this as a heresy, since the idea of a heresy is that it takes one or a few parts of orthodoxy and leaves the rest, and this is what picking a virtue and valuing only it to the exclusion of other virtues is doing. But it is also going to an extreme, and a bad form of doing so, so the word would not be inapplicable.

As I said, that’s not what people normally mean by using “extremism” as a derogative, but if anyone does, my critique above does not apply to them.

Onion Breaking News

From many years ago, back when The Onion was funny (warning: colorful language):

It’s really quite accurate, and possibly even better than Charlie Brooker’s version (warning: colorful language):

I can’t help but think that the days of this sort of news broadcast have to be numbered, if for no other reason than that the median age of people who still watch TV news is probably 75. The damage that news has done to society and to the people who watch it is incalculable; I hope that it does end soon and that it isn’t replaced by something worse. (Not everything is replaced by something worse, after all. That only happens quite often, not every time.)

How Many Times Must the Cannon Balls Fly Before They’re Forever Banned?

This is a profoundly stupid lyric in an otherwise truly great song. If you’re not familiar with it, it’s Blowin’ in the Wind by Bob Dylan. He sang it first, but it was covered by nearly all of the folk singers of the 1960s, and pretty much all of them sang it better than Dylan sang it. My favorite version is by Peter, Paul, and Mary:

I should note that this song is truly great if it’s taken as a lament of sin; if the questions are rhetorical because the poor we will always have with us. And I do think that this was—though not, perhaps, with perfect understanding—Bob Dylan’s intention:

There ain’t too much I can say about this song except that the answer is blowing in the wind. It ain’t in no book or movie or TV show or discussion group. Man, it’s in the wind — and it’s blowing in the wind. Too many of these hip people are telling me where the answer is but oh I won’t believe that. I still say it’s in the wind and just like a restless piece of paper it’s got to come down some … But the only trouble is that no one picks up the answer when it comes down so not too many people get to see and know … and then it flies away.

The answer to all of the questions is, “when everyone becomes a saint,” and indeed the answer is readily available but people won’t pick it up.

The one question in the song which cannot be taken as anything but an infantile cry for mommy and daddy to make everything better is the line I quoted in the title. The first and most obvious reason it’s just an infantile cry is: who will ban the cannon balls, and how will they enforce this ban? The only way to enforce a ban is by force, so in a direct sense whoever has better weapons than cannon balls can ban them. Nations possessing nuclear weapons could impose a cannon ball ban, for example. This is directly opposite to the intention of the song, but it is in fact the only possible answer.

Except that cannon balls were obsolete long before Blowin’ in the Wind was written. Cannon balls were a military weapon that answered the musket and pike formation which superseded bows on the battlefield. Musketeers were individually very slow, but a lot of them, shooting in alternation, could be fast. Because they were individually slow they were very vulnerable to cavalry, who could close the distance to musketeers before the latter could get off a second shot. The answer was to keep the musketeers close together and defend them with pikemen. Pikes were, basically, very long spears, which were good at keeping horses at bay.

The answer to the musket-and-pike formation was the cannon ball. An enormous ball of iron traveling at high speed would do enormous damage to a dense formation of men, as it would roll through them injuring or killing everyone in a straight line. This is why it was a cannon ball rather than a shell; the rolling was integral to how it was used.

The answer to the cannon ball was the wide lines, only 3 men deep, which characterized infantry battles in the 1700s. With the lines being only 3 men deep, a cannon ball would injure at most a few men as it rolled on (unless the lines made the grave mistake of maneuvering perpendicular to the enemy’s cannons). Cannon balls were a specialty item, and as military tactics adapted to them, they ceased to be used. Things like exploding shells with shrapnel, “grapeshot” (basically loading cannons with shrapnel rather than a single large projectile), etc., displaced the cannon ball because they were better suited to the new military conditions.

Cannon balls were on their way out during the American Civil War and were not used during World War 1, World War 2, or the Korean War. They were most certainly not used during the “police action” in Vietnam with which much of the 1960s protest movement was concerned.

No one knows how many times cannon balls flew in history, but by the time Blowin’ in the Wind was written, the number of times they flew before their use was discontinued was a definite, if uncertainly known, quantity.

But they weren’t banned. They were only put down in favor of better weapons.

As a curious historical side-note, the use of poison gases on the battlefield was banned. But the ban was enforced by the threat to retaliate with chemical weapons. So, in effect, the use of chemical weapons was banned by the threat of even more chemical weapons.

Not the answer that the protest movement wanted, but very much in keeping with the rest of the song.

The answer is blowin’ in the wind, and people don’t stop to pick it up.

UPDATE: Thanks to Paul for pointing out that I used the wrong spelling of “cannon”.

Has Gone With the Wind Went With the Wind?

One of the curious things about being a parent is that it raises a question which really clarifies how good one thinks a movie is: is it worth showing this movie to my child?

Curiously, despite Gone With the Wind being one of the all-time classics, when I ask myself this the answer is a resounding “no”.

There’s only one scene in it that I can think of which is worth passing on (the first, roughly, 15 seconds of this clip):

Unfortunately, this scene lacks most of its power if you haven’t seen the movie, so I don’t know that I’ll even pass this on. It would probably be more effective to just tell the kids about it.

This is not to say that I think Gone With the Wind is not a good movie. It is a good movie. I think that my problem with it is that its main theme is that if a person makes relentlessly bad decisions and suffers misfortune, they will have neither the consolation of virtue nor the consolation of pleasant circumstances.

Which is certainly true.

It’s just one of those things which seems to me obviously true, and if you try to orient your life such that your primary concern is to be a saint, you hardly need it symbolically represented for over four hours. It is very true that the wages of sin are death. At the end of the day, I think my reaction comes down to this (using the American generic “you”): if you need a movie like Gone With the Wind to realize that the wages of sin are death, you’ve got bigger problems than (I hope that) my children have. This could, of course, be wishful thinking on my part.

On the other hand, a lot of great art in the last 200 or so years was people rediscovering what, as G.K. Chesterton put it, they could have learnt in their catechism—had they ever read it. Much of the power of it was people asking the question, “but perhaps it is true after all?”

Just in a very limited sense.

The Reason for Post WW1 Revolutions

TIK has a very interesting video about Hitler’s Socialism:

I’m only about 1 hour into this nearly 5 hour video, and it covers (as you might imagine) a wide range of topics, but something TIK mentioned almost casually as an aside really struck me: the reason for the revolutions after World War 1 was that the nations that took part in it took an enormous amount of wealth from their people and destroyed it. This immense destruction of wealth impoverished the people, who grew sick of it and revolted.

Something very important to understand about war is that it is bad for business. There are some select businesses that it is good for; gun makers and cannon makers and the like benefit from war, though in practice only somewhat, because they have a tendency to get squeezed for profits: making a profit during a war is generally seen as unpatriotic, and the governments buying the weapons have far more negotiating power than the weapon-makers do, since only one gets to send the police to put the other in prison.

Apart from these extremely limited cases, most business suffers greatly from war. Raw materials get diverted from industrial uses to war-time uses, labor gets taken away, and demand for goods shrinks because heavy taxation removes the money with which people would buy products. Moreover, in the 1900s there was something like a 90% chance that goods would be rationed, so people couldn’t buy as many of your products as they wanted even if they had the money and weren’t off fighting in a war.

Worse for the economy, this isn’t even a temporary re-allocation of resources that can be shifted back afterwards. Tanks and battleships and the like can be scrapped for iron, but it’s a difficult and costly process and they have no value other than as scrap (or as museum pieces). Bombs and bullets simply blow up when they’re used, so they get expended in use and all of the resources and labor that went into making them literally goes up in a puff of smoke. Going to war is taking a nation’s resources and burning them (in many cases literally).

And all of this assumes that the war isn’t on your soil, so that your factories are getting demolished in the fighting. If that’s happening, it’s even more economically destructive.

War is always a waste of labor and resources (even a just war; it may simply be a necessary waste), but World War 1 was an especially wasteful war, and moreover was perceived to be an especially wasteful war. Enormous quantities of men and materiel were ground up in order to do basically nothing. For everyone but France and Germany, this largely consisted of taking one’s men and resources and sending them to far away lands to be ground up to accomplish nothing for other people.

This really helps to explain why the Russian Revolution happened. I had always wondered why a mostly agricultural society would undergo a marxist revolution. Marxist revolutions never make sense, but Russia didn’t even have the mass of factory workers necessary to have a worker’s revolution. And farmers don’t revolt, for the most part, unless you try to heavily tax them.

Well, there’s my answer.

You pay for wars with taxes. You pay for big wars with heavy taxes. And heavy taxes that aren’t perceived to bring massive benefits tend to produce revolutions.

Obviously this is painting the cause of a complex historical event with a ludicrously broad brush, and I’m not describing it very well. But this does make a lot more sense of the Russian Revolution than I had understood until now.

The Baby Boom Had a Lot of Babies

I was talking with my parents, recently, about children and childhoods. My mother lamented that Halloween was on the wane, and attributed it, in no small part, to helicopter parents who won’t let their children roam the streets unattended. There may be some truth to this, but it struck me that the Baby Boomers’ childhoods were different in no small part because their generation was named for a very real boom in the number of babies. Here’s an interesting graph of the number of births in the US:

The population of the US has been far from constant, though, so let’s put that into context (births per thousand people):

One interesting thing to note is that the baby boom was only a boom relative to what came shortly before and especially what came after it. It was more common to have children in the early 1900s than during the baby boom, but that’s a subject for another day. The other key thing to consider, with regard to the baby boom, was that it lasted for a while. There aren’t hard edges on it, but it’s traditionally dated from 1945 to 1962, which is 17 years long. I think that’s significant in the experience of people like my parents, who were born in the middle of the baby boom.
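The per-capita normalization behind that second graph is just simple division; here is a minimal sketch of the arithmetic, with purely hypothetical figures (not read off the actual graph):

```python
# Births per thousand people: normalize a raw birth count by population.
# Both figures below are hypothetical, purely to illustrate the arithmetic.
def births_per_thousand(births, population):
    return births / population * 1000

# e.g. 4,000,000 births in a population of 150,000,000
rate = births_per_thousand(4_000_000, 150_000_000)
print(round(rate, 1))  # prints 26.7
```

The point of the normalization is exactly the one made above: a raw birth count can rise simply because the population grew, so only the rate tells you whether having children became more or less common.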

Childhood, as described by people in their late sixties here in the year of our Lord 2021, was a fun time of independence and play, with children roaming neighborhoods without parental supervision. Part of that, though, was that older kids were expected to look out for the younger kids (and usually did). And I think a big reason why that worked out was that there were plenty of older kids around to do it.

Another thing that contributed to this phenomenon, I suspect, was the housing boom which happened (in America) after World War 2. Part of it was developments like Levittowns, but housing, in general, became much less labor-intensive as large machines and industrial processes replaced human labor. The development of trucks (for World War 2) which could carry heavy things really helped with this, since building materials could be manufactured efficiently in one place and then transported cheaply. We don’t tend to think of how trucks improve efficiency by letting production and use be separated by distance, but it’s far less efficient to make something on-site than in a place designed around making it.

There were also effects from the G.I. Bill, which made it possible for many returning veterans to take out mortgages, and which also helped to spur the market for cheap housing. This is often a cycle: once a thing becomes cheaper, you start getting additional demand from elsewhere. While that will drive prices up in the short term, it will also tend to drive up volume, which (absent restricted resources) will tend to increase economies of scale and lower prices further overall.

When you put this all together it resulted in a lot of communities which were predominantly made up of people of child-bearing age, rather than the more normal age distribution one gets in stable communities. Baby Boomers who grew up in these communities would have experienced an especially large number of children around.

This will have effects on things like secular Halloween celebrations (Halloween is, after all, the celebration of the coming of All Saints’ Day, i.e. “All Hallows’ Eve”). When you have a ton of kids who will come out for candy, it becomes fun to stock up on candy and give it out. When you expect between 0 and 3 kids to show up, it takes a lot of the fun out of it. You’re just more likely to turn off your lights and pretend you’re not home.

The fewer kids who go out, the more the children who go out are alone, too. It’s one thing to send one’s children out on their own when the streets are crawling with children. It’s another thing to send them out into the night with no one around. What I’ve discovered is that, in practice, young kids really don’t want to go out alone at night when “alone” means “alone” and not “surrounded by other people, many of whom one knows, just not one’s parents”.

I think the absence of young kids also tends to discourage teenagers. It’s one thing to show up when unescorted children are around; you’re at least partially escorting them yourself by your presence. It’s another thing to be a teenager and the only person within view and be asking for candy from adults.

When you put all of this together, I think that much of how baby boomers experienced childhood differently than later generations was at least as much because they were born during a baby boom—and during a housing boom that often concentrated child-bearing families—as it was because of cultural shifts. Yes, this was before the news did its best to constantly scare parents about letting their children out of their sight, and yes this was when parents tended to have more children so they didn’t worry as much about each individual child because they had spares, and yes this was before designer children and helicopter parents. There are many threads that go together to weave a cloth.

All that said, I think that the boom in babies is an often under-estimated factor in what life was like for baby-boomers.

Are They Really Christmas Songs?

I don’t know if people still complain about Christmas songs being played early; like most things about “people” I suppose it depends on who one talks to. Anyway, while I’m sympathetic to the idea of “keep the waiting in advent,” it has occurred to me that there is a reason that recently traditional secular Christmas songs are sung before Christmas and not after: if you look at them, they are really advent songs. Secular advent songs, of course, but advent songs. (I’m taking the list of Christmas songs from XKCD’s list which I discussed earlier.)

Have Yourself a Merry Little Christmas and Have a Holly Jolly Christmas both have titles (and main lyrics) in the future tense. Santa Claus is Coming To Town is technically in the present progressive tense, but all of the lyrics are anticipatory—primarily warning about present behavior in light of future rewards. Chestnuts Roasting on an Open Fire is set on Christmas Eve, but that is still, technically speaking, during Advent (unless you’re measuring days from sundown to sundown, in which case I think that the present-tense of the song would have to be taken as anticipatory).

I’ll Be Home for Christmas, though I rarely hear it played or sung, is another one clearly set in the future tense and thus an advent song. I’m Dreaming of a White Christmas would most naturally be taken to be about anticipation, though it could, technically, be set on Christmas. That is, until you get to the later lyrics where he dreams of a white Christmas with every Christmas card he writes. It would be absurd to suppose the song is about somebody who sends out Christmas cards after Christmas, since their purpose is to wish someone a happy Christmas.

Rocking Around the Christmas Tree is harder to place, temporally. Its subject is a Christmas party, which I’m used to being held prior to Christmas, but in 1958, when it was released, it might have been the custom to have Christmas parties on Christmas day itself, though I am inclined to doubt it.

Blue Christmas (which, again, I never hear anyone sing and don’t hear played) clearly talks about Christmas in the future tense in the lyrics (“I’ll have a blue Christmas without you”).

Silver Bells could be set on Christmas or even after it. That said, it’s about Christmas decorations and such which are generally put up before Christmas, so the smart money is on it being an anticipation of Christmas.

It’s Beginning to Look a Lot Like Christmas is another one whose very title shows it to be set before Christmas.

It’s the Most Wonderful Time of the Year is, like Silver Bells, not explicit, but it seems to be about the (secular) season of preparation for Christmas, placing it before Christmas.

The other songs on the XKCD list (with one exception) aren’t about Christmas at all, or at least not a present Christmas. Winter Wonderland, Let It Snow, Jingle Bell Rock, and Sleigh Ride are all just about winter. (So, for that matter, is Baby It’s Cold Outside, which is increasingly played as if it’s a Christmas song.) Rudolph the Red Nosed Reindeer is primarily about the time before Christmas, and culminates in Rudolph’s triumph on Christmas Eve, but all of this was in the distant past. Frosty the Snowman is about a magical snowman and has nothing whatever to do with Christmas. (Admittedly, the animated movie Frosty the Snowman is set on Christmas Eve, but that’s still anticipating Christmas.)

The only real exception on the entire list is Little Drummer Boy, which is actually set after Christmas. It seems to be based on the visitation of the Magi, which is traditionally celebrated on Epiphany, which for many years in the western Church has been celebrated in January. Since the song doesn’t reference anything that sets its date, it could be anywhere from the day of Christ’s birth (e.g. when the angels gave the good news to the shepherds) to months after the Magi visited. I suspect that no one pays attention to the lyrics of this song, though, since approximately 20% of them are “pa”, 20% are “rum” and 45% are “pum”.

So, all things considered, I think we have some of the reason why these songs are all played before Christmas, rather than after it—they are, in fact, (secular) advent songs. As Chesterton often noted, the common man often has his heart in the right place, even when it’s there for the wrong reasons in his head.

Warm Feet While Hunting in Western Pennsylvania

I’m a bowhunter who hunts in western Pennsylvania, so one of the problems that I face in the late season is keeping warm when it’s cold out. Much of this is pretty easy, and is the same answer as anywhere else—layers. The only difference is that the outer layer is camouflage. That said, this does not apply as much to the feet, since most people out in the cold are doing different things than hunters are. In particular, hunters need to walk to their hunting spot, then sit or stand still in the cold for hours on end. That last part is particularly important, because they don’t generate as much body heat as someone moving does.

Before I get to that, I should mention that it also doesn’t apply as much to the hands, and I’ve found some very good hunting mittens made by Hot Shot Gear. They’re called pop top hunting mittens, and they’re warm mittens that also allow you to slide your fingers, in thin gloves, out of the mittens, which is essential for your string hand. I shoot with a trigger release, so I only need one finger, though a thumb release or traditional finger guard would require more fingers, and this allows that. They’re warm and very functional for archery. As a bonus, the index finger has a tab on it that can work with capacitive touchscreens, which is very helpful for texting someone to complain about the deer not coming by.

Anyway, there are two main solutions I’ve been able to find to the problem of keeping one’s feet warm. The first is enormous boots, which are both heavy and cumbersome. They’re tiring to walk in and they clomp noisily, as one can only really step with one’s full foot in them; the best one can do to not clomp is a mild heel-to-toe motion. On the plus side, they’re warm and waterproof.

The other main solution is things like mukluks. Mukluks were designed for seriously cold conditions, but they are also lightweight and flexible. The only downside is that they’re not waterproof. In the places where they were developed this isn’t really a problem, since below about 15F (-9.4C) water can be relied upon to be hard and stay hard (that is, to be ice and not melt), and so being waterproof is irrelevant because the only liquid water you will be exposed to is in your water bottle. My understanding is that in places like North Dakota, Canada, Alaska, etc.—where people really love mukluks—15F is spring weather, and people tend to wear tennis shoes and light jackets in it. I may be exaggerating slightly, but they’re concerned with whether the boots are good below -30F, not 15F. (Moreover, if the weather is frequently colder than 15F, a warm day that gets up to there or even into the 20s isn’t going to melt any ice.)

In western Pennsylvania, though, winter frequently oscillates between being a bit below freezing and just above it. Even on fairly cold days it’s not uncommon to find mud in places where the sun hits for a few hours or leaves provide some insulation, unless it’s been well below freezing every day for a few days. We need waterproof footgear, but I really don’t want to pay the penalty of clomping around in massive, inflexible boots. So I got a good idea from this post: making a winter boot out of galoshes and a thick felt bootliner plus insoles. I tried it out and it worked extremely well. The results were light, flexible, comfortable, waterproof, and warm.

I’m a size (men’s US) 11 wide and ordered the boot liner true to size and the overboot sized to 11-13 shoes. The result had plenty of room inside without being too big, and comfortably fit me wearing a thick winter sock plus a second, even thicker winter sock. I absolutely loved their performance and feel.

To give a list of the particulars that I used:

Something to note about this approach is that the total cost for the parts that weren’t the socks (which you’d have to buy separately with any boot) was $67.46 (not including tax or shipping) which is extremely cheap for a pair of insulated boots. With the socks it came to $113.36 (the Darn Tough socks were expensive, but in my experience Darn Tough socks are worth it, especially because they honor their no-questions-asked lifetime guarantee). For a comfortable way to avoid pain and possibly frostbite, I found it well worth it.
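The sock cost isn’t stated directly above, but it falls out of the two totals by subtraction; a trivial sketch of the arithmetic:

```python
# The two totals stated in the post; the sock cost is implied by subtraction.
parts_total = 67.46   # liner + overboot + insoles, before tax/shipping
full_total = 113.36   # the same plus the socks
sock_cost = full_total - parts_total
print(f"{sock_cost:.2f}")  # prints 45.90
```

So the socks were roughly two-thirds as expensive as all of the boot parts put together, which fits the remark that the Darn Tough socks were the pricey part of the setup.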

One thing I need to note is that this approach gives no “support” of any kind. I hate “support” in shoes because it mostly means some sort of uncomfortable rigid thing that prevents the foot from bending naturally and makes a natural gait extremely difficult. That said, I spent a year or two wearing Vibram FiveFingers, so I developed strong feet whose arch comes from the muscles and tendons in the foot, as it’s supposed to, and not from resting on top of something that pushes the middle of the foot up. If you haven’t developed the muscles and tendons in your foot to be able to walk naturally, you will probably not find this approach nearly as comfortable. If you haven’t, I recommend trying to do so. (If you can get them to fit your foot, or make do with ones that are too large as I had to, Vibram FiveFingers are a great way to do this. Just take it slowly. You don’t want to walk through a large box store your first time out—concrete is very tiring to walk on naturally. After a few weeks, your feet will be strong enough that it’s not tiring anymore, but you have to walk a little before you can walk a lot. Once you’ve done this, though, walking is a lot more pleasurable, and it pays dividends for hunting, where you can more easily use the ball-first walking style that lets you feel if you’re stepping on a branch and pick your foot up, so you don’t announce to everything with ears in the forest that you’re coming.)

Thoughts on the Soul, While Hunting

A quick video I made while bow hunting while the deer weren’t coming. I share some thoughts on the soul, and how some people go wrong by thinking of the soul like a ghost in a machine, or like some sort of physical pure-energy matter that operates the body in a purely physical way, except not physical. I also talk about how everyone actually believes in the soul, because being a strict materialist would be absurd, and give examples.

Dr. Thorndyke’s Scientific Wizardry

I recently read the Dr. Thorndyke short story A Message From the Deep Sea. I’m not sure when it was first published, but it was collected in John Thorndyke’s Cases, the first collection of Thorndyke short stories, published in 1909. It’s a good example of the scientific wizardry that Thorndyke typified—you can loosely describe Dr. Thorndyke as “Sherlock Holmes with all of the humanity removed”. The police detective and police surgeon come to the wrong conclusion in a case where the murderer was trying to frame someone. Only Thorndyke, through his very careful examination and encyclopedic knowledge of everything, was able to see through it. The case, by the way, was that a single woman in her twenties—a German immigrant who had been lodging in England for several years, generally liked—was murdered in the middle of the night by having her throat slashed while she slept. In one of her hands she held a few strands of long red hair, pointing to the daughter of the landlord as the murderer, because the victim had stolen the other woman’s fiancé from her.

I find it interesting that Thorndyke was able to see through the framing because of a setup designed to allow him to do it. In some sense, of course, this always has to be true in fiction because nothing happens without the story being written to allow it to happen. Somewhat analogous to God, nothing can happen in a story without being in at least the permissive will of the author. In this case, though, the story was really designed around Thorndyke seeing through it. That is, he required a lot of the story to be unusual in order for his scientific wizardry to work.

The titular message from the deep sea was sand on the murdered woman’s pillow that turned out, under the microscope, to be deep-sea sand from the Mediterranean Sea. In fact, among the micro-shells of the Foraminifera in the sand was a species that only lives near the Levant, making it possible to identify where in the Mediterranean the sand came from.

At first it seems very strange that sand from the bottom of the Mediterranean Sea should show up on the pillow of a dead woman, but it turns out that the man who murdered her—her former boyfriend, whom she threw over for the fiancé of the landlord’s daughter—worked in a factory that imported and processed Turkish sponges. In the early 1900s these would have been literal sponges from the sea floor, rather than the synthetic replicas we use today, so collecting them would have involved copious quantities of sand being brought up along with them. Such sand is everywhere in these factories—the floors are often covered in it ankle-deep—and the men who work there get thoroughly dusted in it. If such a man were to bend over, some would naturally spill out of his pockets and the various folds of his clothing.

There were also some details about damp footprints, which could only have been made during the rain that fell for about an hour before the victim was murdered, no rain having fallen for the preceding fortnight. There were also some candle-grease marks, and a bit of candle in a common candle-box which bore the octagonal mark of an unusual candle-holder in the victim’s room.

Oh, also, a tiny bit of the knife used to kill the victim was chipped off on one of her neck vertebrae (which Thorndyke found but the police surgeon missed) which corresponded exactly to a chip in the blade of the knife which the ex-boyfriend used to try to kill Thorndyke at the inquest once Thorndyke had proved him guilty.

Actually, I forgot to mention the part where Thorndyke explained that the victim’s hand wasn’t holding the hairs in a death-grip; they had only been placed there afterwards. Also, the hairs were clearly taken from a brush, because there were hair bulbs on both ends of the bundle, not all on the same end. Furthermore, the hairs had clearly fallen out naturally, because they lacked the surrounding part of the follicle which comes out when live hair is ripped out but not when it sheds naturally.

The explanation of all of the evidence which Thorndyke collected, which took several pages of slow and exacting exposition occasionally interrupted by questions from the coroner, does make Thorndyke look something like a wizard, especially when the other experts in the room missed it all. I can see why it was popular at the time, especially since forensic science was quite new in 1909. Looking at stuff under a microscope to prove what it was was hot stuff at the time. Having an encyclopedic knowledge of anything is always impressive.

The thing is, these are all very strange coincidences. How often is someone murdered by a person who works in a factory that coats them with extremely distinctive powder? (One might object that they don’t change out of their work clothes, but in the early 1900s people had far less clothing and a bachelor might well not change his clothes after coming home from work.) How often is a murder committed during the one hour it rained in the last two weeks? (Something I’m less familiar with—how often does it go two weeks without rain in England?)

The knife getting chipped is not wildly out of the ordinary. (I’ve seen this fairly often with broadheads going through deer.) Without the murderer having been identified, though, it would not have been useful as evidence, except perhaps to exculpate the accused woman because her knife had no chip in it.

The hair with roots on both ends struck me as the only really solid evidence in the case that was not put there merely to make Thorndyke look good. A person trying to frame someone with unusual hair might well try to plant that hair at the scene of the crime. Closing the victim’s hand on the hair but not being able to turn it into a death-grip is a mistake any murderer might make. The roots of the hair showing that they were shed and not ripped out would naturally follow from hair taken from a brush, and the roots being on both ends would probably show up as well. How many murderers would take the time to orient the hairs with all of their roots on the same side?

One other curious thing about this case is that Thorndyke uses fingerprints as evidence. He found fingerprints on the discarded candle, and then matched them to fingerprints he stealthily took from the former boyfriend in a pretended chance encounter. (He gave the former boyfriend a picture to hold, ostensibly to help identify someone, then dusted it for fingerprints.) Using fingerprints is quite unusual in detective fiction, in my experience. Indeed, Thorndyke made his first appearance in the novel The Red Thumb Mark, in which he revealed his scientific wizardry by proving that the fingerprint in blood which was the chief evidence against his client had been forged. The fingerprint here is not very strong evidence, though, since it was taken from a candle in a common box, and the former boyfriend had until very recently been a lodger in the house. It wasn’t nothing, but it certainly wasn’t the main evidence used.

Incidentally, this reminds me of S.S. Van Dine’s rule of detective fiction number 20A: “[Do not use, because it has been over-used] determining the identity of the culprit by comparing the butt of a cigarette left at the scene of the crime with the brand smoked by a suspect.”

Murderers smoking exotic brands of cigarettes was common, for a while. Thorndyke, you must recall, solved the crime of the sea-sand twenty years before Van Dine wrote this list. That said, even Sherlock Holmes did not consider the butt-ends of cigarettes very often; he had trained himself in the much more difficult identification of cigar ash.

All in all, this case is entertaining, though only just. Back when it was first published, read in a magazine or newspaper much in the same way we might watch an episode of a TV show, it would have been more entertaining. Thorndyke reminds me a bit, though, of the superhero Aquaman. Since Aquaman’s powers depended on water, the writers were forced to always work water into the scene of his fights with the bad guys. Thorndyke’s super-powers depend upon the microscopic traces of unusual conditions, so the writer must always work very unusual circumstances into his stories.

I’ve really come to appreciate Poirot’s line, in Murder on the Links, “Mon ami, a clue of two feet long is every bit as valuable as one measuring two millimetres!” He elaborates a bit later:

“One thing more, Poirot, what about the piece of lead piping?”

“You do not see? To disfigure the victim’s face so that it would be unrecognizable. It was that which first set me on the right track. And that imbecile of a Giraud, swarming all over it to look for match ends! Did I not tell you that a clue of two feet long was quite as good as a clue of two inches?”

Ultimately, I think that the clues that are two feet long have tended to win out over the clues that are two millimetres long. The clues which require a microscope are now the domain of technicians whom one hires at an hourly wage to examine crime scenes. We like to read about the people who analyze the clues, not the people who gather them up with specialized equipment.

At the end of the day, I am not surprised that I only discovered that Dr. Thorndyke ever existed from an off-hand line in a Lord Peter Wimsey story. It’s still interesting to see what’s been forgotten, though. And also interesting to see what readers will forgive when a genre is new.

Christmas Traditions

There’s an XKCD that a lot of people have seen which plots most-played Christmas songs by decade of release:

The conclusion it presents, “every year, American culture embarks on a massive project to carefully recreate the Christmases of Baby Boomers’ childhoods,” is true in a sense, but mostly wrong.

The biggest problem with it is that it’s using radio songs. There are several problems with this. Radio songs are largely constrained by technology not to have been recorded prior to the 1940s, because sound recording was awful back then. Having done a fair amount of swing dancing, I can tell you that recordings made in the 1930s—or, worse, the 1920s—are barely listenable. You simply need a modern band to play those songs now in order for them not to hurt your ears. On the flip side, there just haven’t been any good popular Christmas songs composed since the 1960s, because of cultural shifts—but that’s a different story that I’ll get to later. The really big issue, though, is that the radio doesn’t play the really popular Christmas carols; it only plays things recorded by popular recording artists. Even where popular recording artists record traditional carols, the radio will play versions by all sorts of different people, so a song which gets a lot of play time will not get it all on the same recording. To have this sort of concentration, we need the songs to still be in copyright, so that there’s only one, or a very few, versions available for the radio to play.

To really see the point, consider the popular Christmas carols—the ones that people actually sing—and when they were composed:

Jingle Bells: 1857
Hark! The Herald Angels Sing: 1739 (current musical arrangement: 1840)
Joy To the World: 1719 (current musical arrangement: 1848)
God Rest Ye Merry Gentlemen: traditional; at least the 16th century
O Holy Night: 1847 in French; English version by a guy who died in 1893, so before then
Silent Night: 1818 in German, English translation in 1859
O Come, All Ye Faithful: 1751
What Child Is This: 1871
Away in a Manger: 1897
The First Noel: 1833
We Three Kings: 1857

So yeah, the first problem is that if you only look at recordings—something that largely couldn’t exist before the baby boomers were born—you won’t find anything from before them. In a sense we’re done; the only thing which is trying to recreate the boomers’ childhood is a medium that barely pre-existed the baby boomers (commercial radio in its modern format).

Christmas songs after the 1960s tended to be either novelty songs or songs that really aren’t family friendly. As people got less religious and more sex-obsessed, they sang about having sex on Christmas, with various degrees of veiling their meaning. That’s not actually going to be very interesting when it’s competing with songs about having sex five times a day, so it’s not shocking that these haven’t been popular. (In short: religious people won’t like them and irreligious people can get better.)

There’s another aspect, which is that there was a short time period, as popular culture was becoming hardcore secular, where the newly secular people could enjoy the religion of their parents without participating. That’s the sort of thing that only lasts a decade or two; after that the energy just goes out of it.

Here in 2021 I think that the secular energy for Christmas is fading fast; one of the more popular things for adults to do is to agree with other adults to not exchange Christmas presents because it’s just a pain in the neck. No one really likes getting together with family to eat dry turkey and too many store-bought pies—that’s why they only do it when it’s an obligation they can’t get out of—and the concept of universal good will just doesn’t make any secular sense and has been long-since abandoned.

The grain of truth to the XKCD is that there is an attempt to LARP the most recent sincere Christmas celebration anyone can remember, which happens to be the baby boomers’ childhood Christmases. That’s mostly a coincidence, though, and in any event it includes many things which pre-dated the baby boomers. ’Twas the Night Before Christmas was first published in 1823, and the general depiction of Santa Claus as dressed in red and white originated at the latest with Puck magazine in the early 1900s and was set in the popular imagination by the 1930s with widespread soft drink advertising campaigns (most notably Coca-Cola’s).

So yes, the baby boomers were influential. The world did exist before them, though, and they don’t explain most of it.

The Tuskegee Experiment Was Weird

I recently read up on the Tuskegee Experiment, and it was really weird. (If you’re not familiar, it was an experiment run from 1932 to 1972 to study the effects of untreated syphilis on African Americans, in which researchers pretended to treat 600 poor, male, African American sharecroppers for decades, resulting in over 100 of them dying from an entirely treatable disease.) What’s weird about it is not that it was cruel. Human beings are very frequently cruel. What’s weird about it is that it was both cruel and scientifically pointless. It’s not surprising when people do unethical things for some sort of benefit they could not get otherwise. It is very surprising when people do unethical things for no possible benefit to themselves or anyone else.

So I looked a little further, and like so many things that don’t make sense, it was the way it was because of a strange series of historical events which changed it repeatedly until it kept going simply because it was already going—it wasn’t something anyone would ever have started on purpose. Even more curiously, it was kept up for forty years in large part because no one would ever do another study like it again (since it was utterly pointless).

Let me explain.

(Note: I’m just using the Wikipedia page on the Tuskegee study as my source for this; take it with a grain of salt, but it’s good enough for my purpose here.)

The Tuskegee experiment was motivated by a 1928 retrospective study in Oslo, Norway, called the “Oslo Study of Untreated Syphilis.” It looked at several hundred white males in various stages of untreated syphilis and documented their symptoms. This is medically important in a disease which can present differently over time (syphilis takes a long time to kill you, if it does)—if a doctor is looking at a patient and only is aware of the symptoms at one stage of the disease while the patient is at a different stage, the doctor could easily mis-diagnose the patient as not having the disease.

So, doctors had this very useful information for treating white patients, but is it also applicable to black patients? Perhaps they present differently (i.e. have different symptoms, or at least different severity of symptoms). There are some diseases more prevalent in white people than in black people, and vice versa; there isn’t really a good reason to assume that the two populations are identical. To do a good job treating black people who have the disease, doctors really would benefit from evidence that they present the same way as the white patients in the Oslo study do. (There are issues with lumping all people of European descent together as one homogeneous “white” population, just as there are with lumping all people of African descent together as “black”, though in the latter case most black Americans in the 1920s came from a small region of Africa so it wasn’t quite as bad.)

So far, this is fairly reasonable given the state of medical science in the 1920s. Now it starts to get a little iffy: the researchers at the US Public Health Service at Tuskegee decided to conduct a prospective study in order to complement the retrospective study from Oslo. This is not at all, ethically, the same thing, since not treating people and finding out what symptoms they had before you treated them are very different. Their reasoning was that the study participants, being poor sharecroppers, were unlikely to ever get treatment otherwise; thus it was a trade of six months of not treating them (during which time they would not have gotten treated anyway), after which they would give the participants treatment. Not great, but in a slow-moving disease, this could be defensible if informed consent had been obtained (it wasn’t).

Something else to consider, here, is that the treatments of the time were mostly ineffective. They consisted of things like the arsenic-based arsphenamine and mercury-based ointments. Penicillin, the actually effective treatment for syphilis, had only been discovered in 1928, and the technology to refine the compound into a medicine was only developed in 1940. (The first proof that it could cure a disease was with an eye disease in 1930, in a laboratory setting.) So part of what needs to be considered is that, at the time, the treatments that they were temporarily withholding weren’t actually all that effective anyway.

Somehow or other this became six months to one year, which was still in the realm of the defensible if informed consent had been obtained (which, again, it hadn’t). However, this is where things really start going off the rails. Before the conclusion of the study, when they were planning to administer the standard treatments, they lost their funding and could not afford to treat the patients. At this point Taliaferro Clark, head of the USPHS, decided to extend the study without treatment (which involved pretending to treat the participants). He resigned before the study was actually extended, however. It’s a bit unclear (just from the Wikipedia page) who took over extending the study; various people contributed.

With the advent of penicillin as a safe and reliable treatment for syphilis, the entire study became even more pointless than it was at the start, but it continued for much the same reason: it would never be possible to get this data again. It would never be possible because there was absolutely no point in getting the data and it was horribly unethical to get it, but that was beside the point. The fact that something was going on that could never be restarted made the people involved feel like they needed to keep it going, since once lost, it was lost forever. True, it had no apparent value, but I suspect they figured that perhaps one day someone would find the value in it that they couldn’t see right now.

In the baptismal vows a catechumen makes (or their parent makes for them at infant baptism), there are the questions: “Do you reject Satan? And all his empty promises?”

It’s interesting how good an example the Tuskegee study was of an empty promise.

Proofs for God’s Intelligence and Why Atheists Won’t Accept Them

Answering a question I’ve been asked, because there are a fair number of atheists who hear the argument from motion or the argument from contingency and necessity and then ask, “why would the uncaused cause or the unmoved mover need to be intelligent?” In this video I look into the answers to that, and why atheists won’t accept them.

And the Rock Cried Out, No Hiding Place

In the third season of Babylon 5, there is the episode And the Rock Cried Out, No Hiding Place. As with most Babylon 5 episodes, it’s complicated, but there’s a very interesting section of it which more-or-less explains itself. It’s the intertwining of a scene where one of the main characters, Londo Mollari, finally defeats his nemesis Refa, with a scene of a preacher and gospel singer visiting the space station Babylon 5 and singing a gospel song:

This is apparently based on an old spiritual song; I’m not sure if they changed the lyrics. The spiritual is probably based on the sixth chapter of the Book of Revelation:

Then the kings of the earth, the princes, the generals, the rich, the mighty, and everyone else, both slave and free, hid in caves and among the rocks of the mountains. They called to the mountains and the rocks, “Fall on us and hide us from the face of him who sits on the throne and from the wrath of the Lamb!”

The two scenes meld together well, though Refa trying to run away is not necessarily realistic. A great many evil people, when they see that their time is up, basically shut down and don’t struggle. That said, many do not. Evil is always based upon believing an illusion. As such, believing the illusion that escape is still possible fits well. And, more to the point, it’s more symbolically accurate: the evil one is evil because he believes the lies he tells himself to the end. He does not heed the instruction μετανοεῖτε (metanoeite), “repent!” He does not change his mind; he does not turn himself around. He sticks to the lie he has chosen and runs as hard as he can from reality towards it.

How Can You Say Someone Is Great…

…who’s never had his picture on a bubblegum card? This is the question posed by Lucy van Pelt in A Charlie Brown Christmas. And before anyone jumps down my throat about it being too early for Christmas stuff, A Charlie Brown Christmas is clearly an advent movie, not a Christmas movie. It is set during the time when people are getting ready for Christmas (hence rehearsing a Christmas play, rather than performing it), and it was first aired on December 9, in the year of our Lord 1965.

So go ahead and jump down my throat for it being too early for advent stuff—to be fair, it is still ordinary time—but be warned that I have sharp teeth and strong jaws.

Anyway, back to the question Lucy poses: how can you say someone is great who’s never had his picture on a bubblegum card? This joke was funny back in 1965, but I think that it’s gained in humor, over the years, because bubblegum cards are no longer something children collect. I believe that they’re technically still made, or at least trading cards are. The Topps company still exists and still makes baseball cards, though I’ve no idea who buys them. I collected baseball cards for about a year, back in the 1980s, and rapidly lost interest. So far as I knew no one else collected them back then, and in the intervening three decades I’ve never heard of anyone collecting them. (There are still trading cards that are popular such as Magic: The Gathering and Pokémon, but these are not relevant because they do not feature the pictures of real people.)

This was a childish question when Lucy asked it, but it was also an ephemeral question, which she would have had no way of knowing back then. This works with the theme of the show, though; it’s all about how people were caught up in the ephemeral world and had no idea of what really matters. The way that Lucy’s question works with this theme has only become better with age.

Fun fact: if Lucy was 11 when A Charlie Brown Christmas aired she would be 67 now (in the year of our Lord 2021).