The Early Days of the Detective Story

As I mentioned, I’ve been reading the book Masters of Mystery: A Study of The Detective Story. The first chapter deals with the question of whether the detective story is literature, and if so, whether it is good literature. There are two things that particularly caught my attention: the enormous popularity of the detective story, and the basic morality of the detective story.

The first is very interesting because I’ve seen it in detective fiction from the era, but I never knew what to make of it. The example which most leaps out at me is Harriet Vane’s reception by the dons in Gaudy Night. A great many of them had read her books and were fans. It almost has the same feeling as the near-universal name recognition of Jessica Fletcher in Murder, She Wrote. In Jessica’s case, however, we know this to be a tremendous exaggeration. It was more plausible in the case of Harriet Vane, though, because television had not yet been invented and talkies (movies with recorded dialog) were only in their infancy. It is, therefore, interesting to see a description, granted from an interested party, of how widespread the interest in detective stories was around 1930. It was popular with educated people, with common people, with respectable people—in short, there was no notable group of people not reading detective stories at this time.

The other interesting thing which leapt out at me was the critique of the detective story as dangerous to morals, and the response that the detective story was, fundamentally, a moral story. That is, the detective story takes as given the ordinary moral framework of right and wrong and man’s duty to do right and to refrain from doing wrong. This interests me so much, not because it is a revelation—it is, after all, obviously true—but because I’ve seen it used as an explanation for why the detective story is so enduringly popular even until our own times (I write this at the end of the year of our Lord 2019).

It has been argued (possibly even by me) that the detective story and its modern television cousin, the police procedural, is the only modern story in which basic morality is taken for granted. It is curious to see that this was to some degree true even in the early days of detective stories.

An example given as contrast was An American Tragedy, which was the only assigned reading in high school I never finished. I just couldn’t stand the book; I made it about halfway through and gave up, reading the Cliff’s Notes instead of finishing the wretched thing. The short short version of it is that a young man makes all sorts of awful life choices and is eventually executed for murdering a woman he seduced (in order to be available to marry a rich woman). The main character is a bad man who learns nothing, and the book does not even appreciate the justice of him paying for his crime.

It was published in 1925.

Bad books have been around for quite a long time.

Talking of the Past in the Past

A few years ago a dear friend of mine gave me the book Masters of Mystery: A Study of The Detective Story, and I’ve finally started reading it. I’ll be writing about what it says about the detective story in another post; here I want to talk about something interesting in the timing of the book, and of the introduction which came later, as the copy I was given is actually a reprint.

Masters of Mystery was written by H. Douglas Thomson and originally published in 1931. The reprint and its foreword were made in 1978, three years short of the book’s fiftieth anniversary.

The book itself was written at an interesting time, given that 1931 was only the middle of the golden age of detective fiction and had yet to see most of the work of Agatha Christie and Dorothy L. Sayers, to name just two giants of the genre.

Further making it an interesting time, detective fiction was not that old. Granted, the first detective stories are generally reckoned to be Edgar Allan Poe’s Dupin stories, the first of which, The Murders in the Rue Morgue, was published in 1841. There seems to be fairly little—in English—before Conan Doyle published Sherlock Holmes in 1887. 1931 was a scant 44 years later. That is enough time for much to have happened, but it was still early days.

We come, now, to the foreword which interests me, being written a slightly longer time in the future, and taking a historical look at how Masters of Mystery held up. It was written by E.F. Bleiler, who according to Wikipedia was “an American editor, bibliographer, and scholar of science fiction, detective fiction, and fantasy literature.” He worked as an editor at the American publisher Charles Scribner’s Sons at the time of the reprint, but since he only left Dover in 1977 and it was Dover that did the reprint, it is possible that he wrote the foreword while still an editor there. He may have been, therefore, less an expert sought out for his opinion and more a man who happened to be around.

He praises the book, but also notes some weaknesses. Some may be fair, such as noting that Thomson leaves off much about the early days of detective fiction—for the understandable reason that not much was known of it, especially then, and not much is known even now.

He makes the somewhat odd claim that detective mysteries were, at the time Thomson wrote, predominantly “house party” crimes. This is odd in that it’s simply false if predicated of the famous stories of the time. It was a common enough setting, but among the detective stories which have come down to us at the time of my writing, it certainly did not predominate. How common it was amongst the stories which have long since been forgotten, I cannot say.

The really interesting claim, though, is rooted firmly in its time:

Thomson’s critical standards were often a function of his day, but two more personal flaws in his work must be mentioned. His worst gaffe, of course, is his failure to estimate Hammett’s work adequately. While Hammett-worship may be excessive at the moment, it is still perplexing that Thomson could have missed Hammett’s imagination, powerful writing, and ability to convey a social or moral message. Related to this lacuna is Thomson’s lack of awareness of the other better American writers of his day, men who stood just as high as the better English writers that he praises. It was inexcusable to be unaware of the work of Melville D. Post, F.I. Anderson and T.S. Stribling. It is also surprising, since all three men were writers of world reputation at this time.

To deal with the last, first: I’ve never heard of Post, Anderson, or Stribling. F.I. Anderson does not even have a Wikipedia page. Such is the short duration of fame, I suppose, that a man can be castigated, 47 years after his book, for not talking about famous men who, another 41 years later, are generally unknown.

Dashiell Hammett, I do of course know of. That said, it is funny to me to speak of Hammett as some sort of master that everyone must talk about. I’ve met exactly one person who seriously likes Dashiell Hammett’s writing, and I don’t even know his name—I struck up a conversation with him while waiting to pick up Chinese food one night.

I suspect that Hammett’s reputation in the 1970s was a product of the success of the movies based upon his books. The casting for The Maltese Falcon and The Thin Man was excellent, and anyone having seen them—as an editor working for Dover in 1977 almost certainly would have—cannot help but read the tremendous performances of the actors into the words on the page. If one does not picture Humphrey Bogart as Sam Spade, much of the magic is lost.

Again, I should note that in the main Bleiler’s foreword is positive and mostly about how Masters of Mystery is worth reading. I was merely struck by how much the retrospective criticisms of it were a product of their time, but were phrased as if they were timeless.

Especially the Lies

There was a very interesting, deeply enigmatic character in Star Trek: Deep Space 9 who was basically a spy and/or secret police officer who had possibly defected. More or less he was in the position of a Gestapo agent who fled from Nazi Germany prior to the Nazis losing WWII. Instead of the Nazis, it was the Cardassians, and instead of the Gestapo, it was the Obsidian Order, but the basic structure holds.

This is an interesting character because one doesn’t know whether he left as a matter of principle, or if he was driven out merely by political considerations, or if he never left at all and his job as a tailor and status as a refugee is merely a cover. He is, of course, charming and charismatic, and denies ever having been of any importance, or a member of the Obsidian Order, and always claims that he’s “Just plain simple Garak.”

There’s an episode (or possibly a few episodes) in which his past is explored. I should note, in passing, that my suspicion is that, in usual TV fashion, the writers never did decide on a backstory. TV writers are much better at hints than worked-out ideas. Be that as it may, it was interesting, and a number of highly conflicting stories surfaced about Garak’s past. When the episode (or arc) ended, Garak spoke with his friend, Dr. Bashir, who asked him about the stories.

Bashir: You know, I still have a lot of questions to ask you about your past.
Garak: I’ve given you all the answers I’m capable of.
Bashir: You’ve given me answers all right, but they were all different. What I want to know is: out of all the stories you told me, which ones were true and which ones weren’t?
Garak: My dear doctor, they’re all true.
Bashir: Even the lies?
Garak: Especially the lies.

If you want to watch the exchange, here’s a clip of it on YouTube:

This was a great exchange, and, in a different context, it would have been a brilliant conclusion. The problem, of course, is that it gets its power by hinting at a cohesive story behind the fragments Bashir (and hence the viewer) is allowed to see. This is a problem because there was no cohesive story behind the fragments; they were just fragments thrown out in order to contradict previous fragments.

I don’t mean that they had literally no ideas; it was clearly established that Garak was in fact, at least at one point, a high ranking member of the Obsidian Order. What was not established was what principles he actually had.

Nebulous hints are only interesting if there is something good at the back of them. If a man simply lies because he is so warped and twisted that he doesn’t know the truth, this is not interesting. This gets back to something I’ve said more than a few times: it is a man’s virtues, not his flaws, which are interesting. Flaws are, at most, a crutch to make it easy to show off a man’s virtues.

What would have made this great is if there was some principle—that was not just loose consequentialism plus a goal—which was being served, and, therefore, all of the lies actually conveyed a truth, if properly understood. That is, this would be great if all of the lies were actually cyphers, and at some time later the key would be given which would decypher the lies into truths.

You can see an example of this, though not a great example, in the retcon explaining why Obi-Wan Kenobi said that Anakin Skywalker was killed by Darth Vader. When he said it, he meant that the good man who called himself Anakin Skywalker was gone forever, replaced by the evil man who called himself Darth Vader. It wasn’t great, but the lie does make sense as containing a truth, when interpreted under that rubric.

That’s what enigmatic characters should all be, though in general it works best if the writers create the cypher key before encrypting things with it. When the writers do that, they do have the potential to create something great.

For it is good, indeed, when it turns out that the lies are all true.

Throwing Is Not Automatic

I’m a fan of Tom Naughton, and his movie Fat Head helped me out a lot. But recently he had something of a head-scratcher of a blog post. Mostly he just mistakes coaching cues that happen to work for him for the One True Way to swing a golf club—which is a very understandable mistake when in the grips of the euphoria of finally figuring out a physical skill one has been working on for years—but there was this really odd bit that I thought worth commenting on:

If you ask someone to throw a rock or a spear or a frisbee towards a target, he’ll always do the same thing, without fail: take the arm back, cock the wrist, plant the lead foot, rotate the hips, sling the arm toward the target, then release. Ask him exactly when he cocked his wrist, or planted his foot, or turned his hips, he’ll have no idea – but he’ll do it correctly every time. That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA. We don’t have to learn it. Our bodies and brains already know it.

The basic problem is: throwing is not automatic. It’s learned.

I can say this with certainty because I’ve spent time, recently, trying to teach children to throw a frisbee. They do not, in fact, instinctively do it correctly. Humans have very few actual instincts, at least when it comes to voluntary activities. We instinctively breathe, and we will instinctively withdraw our hand from pain, but that’s about it. Oh, and we can instinctively nurse from our mother, though even there we need to learn better technique than we come equipped with pretty quickly, or Mom will not be happy.

Now, what we do, in fact, come with naturally is the predisposition to learn activities like throwing. This is like walking: we aren’t born knowing how to walk, but we are born with a predisposition to learn to walk. We’re good at learning how to walk and we want to do the sorts of things that make us learn how to walk. Language is the same way—we’re not born speaking or understanding language, but we are predisposed to learn it.

Another odd thing is the “he’ll do it correctly every time”—no he won’t. Even people who know how to throw things pretty well occasionally just screw up and do it wrong. When teaching my boys to throw a frisbee, occasionally I just make a garbage throw. It’s not just when my conscious thoughts get in the way of my muscle memory—muscle memory needs to be correctly activated, and not paying sufficient attention is a great way to do that wrong.

Finally, the evolutionary biology part is just odd: “That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA.”

There’s an element of truth to this, in that we can find evidence of spear use in humans going back hundreds of thousands of years. The problem is that the kinematic sequence to throw a spear and the kinematic sequence to hit a golf ball are not the same thing at all.

Here’s a golf swing:

By contrast, here’s someone throwing a javelin:

And just for fun, here are some Masai warriors throwing spears:

Something you’ll notice about the Masai, who throw actual weapons meant to kill, is that the thing is heavy, and they throw it very close. Alignment is incredibly important, since a weak throw that hits point-on is vastly more effective than a strong throw that hits side-on. The other thing is that the ability to actually throw quickly without a big wind-up matters, since they’re practicing to hit moving targets. They don’t have time for a huge wind-up. Also, they tend to face their target, rather than be at a 90 degree angle to it—when your target has teeth and claws, you need to be able to protect yourself if the target starts coming for you.

Anyway, if you look at these three activities, they’re just very kinematically different. Being good at one of those things will not transfer to being good at the others. The Masai warrior needs accuracy, timing, and power on a heavy projectile. The javelin thrower needs to whip his arm over his body as fast as possible, from a sprint. His arm is straight and his shoulder hyper-extended. The golfer needs to whip the head of a long stick as fast as possible, below his body, from a standing position. His arms are bent and his elbows are kept in to generate more force than arm-velocity, since the greater force translates to greater velocity on the end of the stick. The golf swing probably has more in common with low sword-strikes using a two-handed sword than it does with swinging a spear.

Anyway, I don’t have a major point. I just think it’s interesting what we will tell ourselves in order to try to figure out motion patterns.

On The Seventh Day God Rested

On the seventh day, God rested.

This is an interesting thing to contemplate since, as an American Northerner, I don’t really understand the concept of rest.

Granted, every now and again I take breaks, and every night I sleep. The thing is, I can’t help but think of these as weaknesses, as concessions to a fallen world. Chesterton described this attitude toward work and rest very well in Utopia of Usurers, though he was talking about employers and not individuals:

The special emblematic Employer of to-day, especially the Model Employer (who is the worst sort) has in his starved and evil heart a sincere hatred of holidays. I do not mean that he necessarily wants all his workmen to work until they drop; that only occurs when he happens to be stupid as well as wicked. I do not mean to say that he is necessarily unwilling to grant what he would call “decent hours of labour.” He may treat men like dirt; but if you want to make money, even out of dirt, you must let it lie fallow by some rotation of rest. He may treat men as dogs, but unless he is a lunatic he will for certain periods let sleeping dogs lie.

But humane and reasonable hours for labour have nothing whatever to do with the idea of holidays. It is not even a question of ten hours day and eight-hours day; it is not a question of cutting down leisure to the space necessary for food, sleep and exercise. If the modern employer came to the conclusion, for some reason or other, that he could get most out of his men by working them hard for only two hours a day, his whole mental attitude would still be foreign and hostile to holidays. For his whole mental attitude is that the passive time and the active time are alike useful for him and his business. All is, indeed, grist that comes to his mill, including the millers. His slaves still serve him in unconsciousness, as dogs still hunt in slumber. His grist is ground not only by the sounding wheels of iron, but by the soundless wheel of blood and brain. His sacks are still filling silently when the doors are shut on the streets and the sound of the grinding is low.

Again, Chesterton is talking about employers, but this also encompasses an American attitude toward the self which need have nothing to do with money. Chesterton goes on:

Now a holiday has no connection with using a man either by beating or feeding him. When you give a man a holiday you give him back his body and soul. It is quite possible you may be doing him an injury (though he seldom thinks so), but that does not affect the question for those to whom a holiday is holy. Immortality is the great holiday; and a holiday, like the immortality in the old theologies, is a double-edged privilege. But wherever it is genuine it is simply the restoration and completion of the man. If people ever looked at the printed word under their eye, the word “recreation” would be like the word “resurrection,” the blast of a trumpet.

And here we come back to where I started—that on the seventh day, God rested. We are not to suppose, of course, that God was tired. Nor are we even to suppose that God stopped creating creation—for if he were to do that, there would not be another moment, and creation would be at an end. Creation has no independent existence that could go on without God.

So what are we to make of God’s resting on the seventh day, for it must be very unlike human rest?

One thing I’ve heard is that the ancient Jewish idea of rest is a much more active one than our modern concept of falling down in exhaustion. It involves, so I’ve heard, the contemplation of what was done. Contemplation involves the enjoyment of what is done. What we seem to have is a more extended version of “and God looked on all that he had made and saw that it was good”.

There is another aspect, I think, too, which is that God’s creative action can be characterized into two types, according to our human ability to understand it—change and maintenance. In the first six days we have change, as human beings easily understand it. New forms of being arise, different enough that we can have words to describe them. We can, in general, so reliably tell the difference between a fish and a bush that we give them different names. But we cannot so reliably tell the difference between a fish at noon and that same fish ten minutes later, even though it has changed; we just call them both “fish” and let that suffice because we cannot do better. Thus God’s rest can also be seen as the completion of the large changes, which we easily notice, and the transition to the smaller changes, which we have a harder time noticing or describing.

I’m thinking about this because I recently sent the manuscript of Wedding Flowers Will Do for a Funeral off to the publisher. It’s not done, because there will be edits from the editor, but for the moment there is nothing for me to do on it. I finally have time—if still very limited time owing to having three young children—to do other projects, but I’m having a hard time turning to them.

My suspicion is that I need to spend some time resting, which is what put me in mind of this.

Wedding Flowers Is Off to the Editor

For anyone who is interested in my novels: a few days ago I sent the manuscript of Wedding Flowers Will Do For a Funeral (the second chronicle of Brother Thomas) off to Silver Empire publishing (they published the first Chronicle of Brother Thomas). Next come edits, and if all goes well it will be published in the first half of 2020. It’s been a long time coming, and I’m really looking forward to finally having it published.

Sequels Shouldn’t Reset To the Original

One of the great problems that writers have when writing sequels is that, if there was any character development in a story at all, its sequel begins with different characters, and therefore different character dynamics. If you tell a coming-of-age story, in the sequel you’ve got someone who already came of age, and now you have to tell a different sort of story. If you tell an analog to it, such as a main character learning to use his magical powers or his family’s magic sword or his pet dragon growing up or what-have-you, you’ve then got to start the next story with the main character being powerful, not weak.

One all-too-common solution to this problem is to reset the characters. The main character can lose his magic powers, or his pet dragon flies off, or his magic sword is stolen. This can be done somewhat successfully, in the sense of the change not being completely unrealistic, depending on the specifics, but I argue that in general, it should not be.

Before I get to that, I just want to elaborate on the depending-on-the-specifics part. It is fairly viable for a new king with a magic sword to lose the sword and have to go on a quest to get it back, though it’s better if he has to entrust it to a knight who will rule in his absence while he goes off to help some other kingdom. Probably the most workable version of this is the isekai story—a type of story, common in Japanese manga, light novels, and animation, where the main character is magically abducted to another world and needs to help there. Being abducted to another world works pretty well.

By contrast, it does not work to do any kind of reset in a coming-of-age story. It’s technically viable to have the character fall and hit his head and forget everything he learned, but that’s just stupid. Short of that, people who have come of age don’t just turn back into inexperienced people who’ve never learned any life lessons.

So why should resets be avoided even when they work? There are two main reasons:

  1. It’s throwing out all of the achievements of the first story.
  2. It’s lazy writing.

The first is the most important reason. We hung in with a character through his trials and travails to see him learn and grow and achieve. If the author wipes this away, it takes away the fact that any of it happened. And there’s something worse: it’s Lucy pulling the football away.

If the author is willing to say, “just kidding” about character development the first time, why should we trust that the second round of character development was real this time? Granted, some people are gullible—there will be people who watch the sequel to The Least Jedi. I’m not saying that it’s not commercially viable. Only that it makes for bad writing.

Which brings me to point #2: it’s lazy writing to undo the events of the original in order to re-write it a second time. If one takes the lazy way out in the big picture, it sets one up to take the lazy way out in the details, too. Worse, since the second story will be an echo of the first, everything about it will either be the first warmed over or merely a reversal of what happened the first time. Except that these reversals will have to work out to the same thing, since the whole reason for resetting everything is to be able to write the same story. Since it will not be its own story, it will take nearly a miracle to make the second story true to itself given that there will be some changes.

A very good example of not taking the lazy way out is the movie Terminator 2. Given that it’s a movie about a robot from the future which came back in time to stop another robot from the future from killing somebody, it’s a vastly better movie than it has any right to be. Anyway, there’s a very interesting bit in the director’s commentary about this. James Cameron pointed out that in most sequels, Sarah Connor would have gone back to being a waitress, just like she was in the first movie.

But in Terminator 2, she didn’t. James Cameron and the other writer asked themselves what a reasonable person would do if a soldier from the future came back and saved her from a killer robot from the future, and impregnated her with the future leader of the rebellion against the robots? And the answer was that she would make ties with gun runners, become a survivalist, and probably seem crazy.

We meet her doing pullups on her upturned bed in a psychiatric ward.

Terminator 2, despite having the same premise, is a very different movie from Terminator because Terminator 2 takes Terminator seriously. There are, granted, some problems because it is a time travel story and time travel stories intrinsically have plot holes. (Time travel is, fundamentally, self-contradictory.) That said, Terminator and Terminator 2 could easily be rewritten to be about killer robots from the Robot Planet where the robots have a prophecy of a human who will attack them. That aside, Terminator 2 is a remarkably consistent movie, both with itself and as a sequel.

Another good example, which perhaps illustrates the point even better, is Cars 2. The plot of Cars, if you haven’t seen it, is that a famous race car (Lightning McQueen) gets sentenced to community service for traffic violations in a run-down town on his way to a big race. There he learns personal responsibility, what matters in life, and falls in love. Then he goes on to almost win the big race, but sacrifices first place in order to help another car who got injured. (If you didn’t figure it out, the cars are alive in Cars.)

The plot of Cars 2 is that McQueen is now a champion race car and takes part in an international race. At the same time, his buddy from the first movie, Mater, is mistaken for a spy and joins a James Bond-style espionage team to find out why and how an international organization of evil (I can’t recall what they’re called; think KAOS from Get Smart or S.P.E.C.T.R.E. from James Bond) is sabotaging the race. McQueen is not perfect, but he is more mature and does value the things he learned to value in the first movie. The main friction comes from him relying on Mater and Mater letting him down.

As you can see, Cars 2 did not reset Cars, nor did it try to tell Cars over again. In fact, it was so much of a sequel to Cars, which was a coming-of-age movie, that it was a completely different sort of movie. This was a risk, and many of the adults who liked Cars did not like Cars 2, because it was so different. This is the risk to making sequels that honor the first story—they cannot be the first story over again, so they will not please everyone who liked the first story.

Now, Cars 2 is an interesting example because there was no need to make it a spy thriller. Terminator 2 honored the first movie and was still an action/adventure where a killer robot has come to, well, kill. But there was a practical reason why Cars 2 was in a different genre from its predecessor but Terminator 2 was not: most everyone knows how to grow up enough to not be a spoiled child, but pretty few people in Hollywood have any idea how to keep growing up to become a mature adult from a minimally functioning adult.

If one wants to tell a true sequel to a coming-of-age film, which mostly means a film in which somebody learns to take responsibility for himself, the sequel will be about him learning to take responsibility for others. In practice, this means either becoming a parent or a mentor.

This is a sort of story that Hollywood has absolutely no skill in telling.

If you look at movies about parents or mentors, they’re almost all about how the parent/mentor has to learn to stop trying to be a parent/mentor and just let the child/mentee be whatever he wants to be.

Granted, trying to turn another human being into one’s own vision, materialized, is being a bad parent and a bad mentor, but just letting them be themselves is equally bad parenting and mentoring. What you’re supposed to do as a parent or a mentor is to help the person become themselves. That is, they need to become fully themselves. They must overcome their flaws and become the perfect human being which God made them to be. That’s a hard, difficult process for a person, which is why it takes so much skill to be a parent or a mentor.

There’s a lot of growth necessary to be a decent parent or mentor, but it’s more subtle than growing up from a child. Probably one of the biggest things is learning how much self-sacrifice is necessary—how much time the child or mentee needs, and how little time one will have for one’s own interests. How to balance those things, so one gives freely but does not become subsumed—that is a difficult thing to learn, indeed. That has the makings of very interesting character development.

The problem, of course, is that only people who have gone through it and learned those lessons are in a position to tell it—one can’t teach what one doesn’t know.

At least on purpose.

Art is a great testament to how much one can teach by accident—since God is in charge of the world, not men.

But I think that the world really could do with some (more) decent stories about recent adults learning to be mature adults. I think that they can be made interesting to general audiences.

The Scientific Method Isn’t Worth Much

It’s fairly common, at least in America, for kids to learn that there is a “scientific method” which tends to look something like:

  1. Observation
  2. Hypothesis
  3. Experiment
  4. Go back to 1.

It varies; there is often more detail. In general it’s part of the myth that there was a “scientific revolution” in which at some point people began to study the natural world in a radically different way than anyone had before. I believe (though am not certain) that this myth was propaganda during the Enlightenment, which was a philosophical movement primarily characterized by being a propagandistic movement. (Who do you think gave it the name “The Enlightenment”?)

In truth, people have been studying the natural world for thousands of years, and they’ve done it in much the same way all that time. There used to be less money in it, of course, but in broad strokes it hasn’t changed all that much.

So if that’s the case, why did Science suddenly get so much better in the last few hundred years, I hear people ask. Good question. It has a good answer, though.

Accurate measurement.

Suppose you want to measure how fast objects fall. Now suppose that the only time-keeping device you have is the rate at which a volume of sand (or water) falls through a restricted opening. (That is, your best stopwatch is an hourglass.) How accurately do you think you’ll be able to write the formula for it? How accurately can you test it in experimentation?

To give you an idea: in my high school physics class we did an experiment with an electronic device that fed a long, thin paper tape through it and burned a mark onto the tape exactly ten times per second, with high precision. We then attached a weight to one end of the tape and dropped the weight. It was then very simple to calculate the acceleration due to gravity, since we just had to accurately measure the distance between the burn marks.

The groups in class got values between 2.8 m/s² and 7.4 m/s² (it’s been 25 years, so I might be a little off, but those are approximately correct). For reference, the correct answer, albeit in a vacuum while we were in air, is 9.8 m/s².
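The arithmetic behind that experiment is simple enough to sketch. The distances below are idealized (generated from g = 9.8 m/s² rather than taken from the actual class data), but they show how evenly spaced time marks turn into an acceleration estimate:

```python
# Estimate acceleration from marks burned every 0.1 s on a falling tape.
# The positions below are idealized (d = 0.5 * g * t^2 with g = 9.8),
# standing in for the distances one would measure between burn marks.
DT = 0.1  # seconds between burn marks

# cumulative distance fallen at each tick, in meters
positions = [0.5 * 9.8 * (DT * i) ** 2 for i in range(6)]

# average velocity over each interval between adjacent marks
velocities = [(b - a) / DT for a, b in zip(positions, positions[1:])]

# acceleration from successive velocity differences
accelerations = [(b - a) / DT for a, b in zip(velocities, velocities[1:])]

g_estimate = sum(accelerations) / len(accelerations)
print(round(g_estimate, 2))  # recovers 9.8 from the ideal data
```

With real tape measurements the positions would carry ruler error, and that error gets amplified twice by the differencing, which is exactly why the class results scattered so widely.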

The point being: until the invention of the mechanical watch, the high precision measurement of accurate time was not really possible. It took people a while to think of that.

It was a medieval invention, by the way. Well, not hyper-precise clocks, but the technology needed to make them. Clocks powered by falling weights were common during the high medieval period, and the earliest surviving spring-driven clock was given to Philip the Good, Duke of Burgundy, around 1430.

Another incredibly important invention for accurate measurement was the telescope. Telescopes were first invented in 1608 and spread like wildfire, because they were basically just variations on eyeglasses (the first inventor, Hans Lippershey, was an eyeglass maker). Eyeglasses were another medieval invention, by the way.

And if you trace the history of science in any detail, you will discover that its advances were mostly due not to the magical properties of a method of investigation, but to increasing precision in the ability to measure things and make observations of things we cannot normally observe (e.g. the microscope).

That’s not to say that literally nothing changed; there have been shifts in emphasis, as well as the creation of an entire type of career which gives an enormous number of people the leisure to make observations and the money with which to pay for the tools to make these observations. But that’s economics, not a method.

One could try to argue that mathematical physics was something of a revolution, but it wasn’t, really. Astronomers had mathematical models of things they didn’t actually know the nature of nor inquire into since the time of Ptolemy. It’s really increasingly accurate measurements which allow the mathematicization of physics.

The other thing to notice is that anywhere that taking accurate measurements of what we actually want to measure is prohibitively difficult or expensive, the science in those fields tends to be garbage. More specifically, it tends to be the sort of garbage science commonly called cargo cult science. People go through the motions of doing science without actually doing science. What that means, specifically, is that people take measurements of something and pretend it’s measurements of the things that they actually want to measure.

We want to know what eating a lot of red meat does to people’s health over the long term. Unfortunately, no one has the budget to put a large group of people into cages for 50 years and feed them controlled diets while keeping out confounding variables like stress, lifestyle, etc.—and you couldn’t get this past an ethics review board even if you had the budget for it. So what do nutrition researchers who want to measure this do? They give people surveys asking them what they ate over the last 20 years.

Hey, it looks like science.

If you don’t look too closely.

Sherlock Holmes and the Valley of Fear

I recently read the fourth and final Sherlock Holmes novel, The Valley of Fear. It’s an interesting book, or in some sense two books, the first of which I found interesting and the second of which I’m not really interested in reading.

(If anyone doesn’t want spoilers, now’s the time to stop reading.)

The book begins with Sherlock Holmes working out a cryptogram by reasoning to the key from the cipher. It’s a book cipher, and since the key book has many pages and two columns, Holmes is able to guess that it’s an almanac. This is clever and enjoyable; the decoded message warns that something bad is going to happen to a Douglas in Birlstone. Shortly after they decrypt it, a detective from Scotland Yard arrives to consult Sherlock Holmes about the brutal murder of Mr. Douglas of Birlstone. The plot thickens, as it were. This is an excellent setup for what is to follow.
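For anyone unfamiliar with the mechanics of a book cipher, here is a minimal sketch. The “page” and the ciphertext below are invented for illustration (loosely echoing the novel’s warning), not Conan Doyle’s actual almanac or message:

```python
# Toy book cipher: sender and receiver share a book; the key is a
# particular page, and each number in the message is the position of
# a word on that page. The "page" here is invented for illustration.
page_words = (
    "there is danger may come very soon one "
    "Douglas rich country now at Birlstone house confidence is pressing"
).split()

# An invented ciphertext: 1-based indices into the page's word list.
cipher = [3, 4, 5, 9, 14]

plaintext = " ".join(page_words[i - 1] for i in cipher)
print(plaintext)  # "danger may come Douglas Birlstone"
```

The security rests entirely on the attacker not knowing which book (and page) was used, which is why Holmes’s inference from “many pages, two columns” to “an almanac” is the whole game.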

When Holmes arrives, we get the facts of the case, that Mr. Douglas lives in a house surrounded by a moat with a drawbridge, and was found in his study with his head blasted off with a sawed-off shotgun fired at close range. Any avid reader of detective fiction—possibly even at the time, given how detective fiction had taken off in short story form by 1914, when The Valley of Fear was written—will immediately suspect that the body is not the body it is supposed to be. However, Conan Doyle forestalls this possibility by the presence of a unique brand on the forearm of the corpse, which Mr. Douglas was known to have had. This helps greatly to heighten the mystery.

The mystery is deepened further by the confusing evidence that Mr. Douglas’s friend forged a footprint on the windowsill which was used to suggest that the murderer escaped by wading in the moat—which was only 3′ deep at its deepest—and ran away. Further confusing things, Dr. Watson accidentally observes Mrs. Douglas and Mr. Douglas’ friend being lighthearted and happy together.

Holmes then finds some additional evidence which convinces him of what really happened, which he does not tell us or the police about, which is not exactly fair play. He then sets in motion a trap where he has the police tell Mr. Douglas’ friend that they are going to drain the moat. This invites the reader to guess, and I’m not sure that we really have sufficient evidence at this point to guess. That’s not entirely true; we have sufficient evidence to guess, but not to pick among the many possible explanations of the facts given to us. It turns out that the dead man was the intruder, but it could have turned out otherwise, too. The facts, up till then, would have supported Mr. Douglas’ friend having been in on the crime, for example. That said, the explanation given does cover the facts very well, and is satisfying. It does rely, to some degree, on happenstance; none of the servants heard the gunshot, except for one half-deaf woman who supposed it to be a door banging. This is a little dubious, but investigation must be able to deal with happenstance because happenstance is real.

We then come to the part where Mr. Douglas is revealed and the mystery explained, at which point the narrative shifts over to explaining his history in America and why it was that there were people tracking him from America to England in order to murder him. This, I find very strange.

It is the second time in a novel that Conan Doyle did it. The first time was in A Study in Scarlet, where the middle half of the book (approximately) took place in America. I really don’t get this at all.

I suspect it makes more sense in the original format of the novels, which were serialized in magazines. It would not be so jarring, in a periodical magazine, to have to learn new characters, since one would to some degree need to reacquaint oneself with the already-known characters anyway. Possibly it also speaks to Conan Doyle having not paced himself well, being more used to short stories, and needing to fill the novel with something else.

The very end of the book, when we return to the present in England, is a very short epilogue. Douglas was acquitted as having acted purely in self-defense, but is then murdered by Moriarty while taking Holmes’s advice to flee England because Moriarty would be after him.

That the book takes such an interest in Moriarty is very curious, given that it was written in 1914 while Holmes killed Moriarty off in 1893. Actually in 1891, but The Final Problem was published in 1893. Holmes was brought back in 1903, in The Adventure of the Empty House, where it is confirmed that Moriarty died at the Reichenbach Falls. So we have a novel which is clearly set prior to the death of Moriarty, establishing him as a criminal mastermind, almost 15 years after he was killed off. What’s even stranger about it is that Moriarty barely features in the story. He’s in the very beginning, mentioned only in connection with the cryptogram and as having something to do with the murder, but neither he nor his men actually tried to carry out the murder. His involvement was limited to finding out where Douglas was, so the American who was trying to murder Douglas could try. He naturally makes no appearance in the story of Douglas’ adventures in America, and only shows up in a note at the end of the book:

Two months had gone by, and the case had to some extent passed from our minds. Then one morning there came an enigmatic note slipped into our letter box. “Dear me, Mr. Holmes. Dear me!” said this singular epistle. There was neither superscription nor signature. I laughed at the quaint message; but Holmes showed unwonted seriousness.

Moriarty is indicated to have killed Douglas off the cape of South Africa, and the book ends with Holmes’s determination to bring Moriarty to justice.

Which would be a great setup for Holmes bringing Moriarty to justice in a later book, but we already read about it in an earlier book. It doesn’t really help to flesh the character out, it’s not really needed for the plot of the book, and it serves to end the book on a note of failure rather than of triumph. I do not understand it. Perhaps its purpose is to help increase the grandeur of Holmes’ previous victory over Moriarty? But that is a strange thing to do. Perhaps it was the reverse—a note of caution to fans of Holmes that no man, not even Sherlock Holmes, is omnipotent?

Why Moderns Always Modernize Stories

Some friends of mine were discussing why it is that modern tellings of old stories (like Robin Hood) are always disappointing. One put forward the theory it’s because they can’t just tell the story, they have to modernize it. He’s right, but I think it’s important to realize why it is that modern storytellers have to modernize everything.

It’s because they’re Modern.

Before you click away because you think I’m joking, notice the capital “M”. I mean that they subconsciously believe in Modern Philosophy, which is the name of a particular school of philosophy which was born with Descartes, died with Immanuel Kant, and has wandered the halls of academia ever since like a zombie—eating brains but never getting any smarter for it.

The short, short version of this rather long and complicated story is that Modern Philosophy started with Descartes’ work Discourse on Method, though it was put forward better in Meditations on First Philosophy. In those works, Descartes began by doubting literally everything and seeing if he could trust anything. Thus he started with the one thing he found impossible to doubt—his own existence. It is from this that we get the famous cogito ergo sum: I think, therefore I am.

The problem is that Descartes had to bring in God in order to guarantee that our senses are not always being confused by a powerful demon. In modern parlance we’d say that we’re not in The Matrix. They mean the same thing—that everything we perceive outside of our own mind is not real but being projected to us by some self-interested power. Descartes showed that from his own existence he can know that God exists, and from God’s existence he can know that he is not being continually fooled in this way.

The problem is that Descartes was in some sense cheating—he was not doubting that his own reason worked correctly. The problem is that this is doubtable, and once doubted, completely irrefutable. All refutations of doubting one’s intellect necessarily rely on the intellect being able to work correctly to follow the refutations. If that is itself in doubt, no refutation is possible, and we are left with radical doubt.

And there is only one thing which is certain, in the context of radical doubt: oneself.

To keep this short, without the senses being considered at least minimally reliable there is no object for the intellect to feed on, but the will can operate perfectly well on phantasms. So all that can be relied upon is will.

After Descartes and through Kant, Modern Philosophers worked to avoid this conclusion, but progressively failed. Kant killed off the last attempts to resist this conclusion, though it is a quirk of history that he could not himself accept the conclusion and so basically said that we can will to pretend that reason works.

Nietzsche pointed out how silly willing to pretend that reason works is, and Modern Philosophy has, for the most part, given up that attempt ever since. (Technically, with Nietzsche, we come to what is called “post-modernism”, but post-modernism is just modernism taken seriously and thought out to its logical conclusions.)

Now, modern people who are Modern have not read Descartes, Kant, or Nietzsche, of course, but these thinkers are in the water and the air—one must reject them to not breathe and drink them in. Modern people have not done that, so they hold these beliefs but for the most part don’t realize it and can’t articulate them. As Chesterton observed, if a man won’t think for himself, someone else will think for him. Actually, let me give the real quote, since it’s so good:

…a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy…

(From The Revival of Philosophy)

In the context of the year of our Lord’s Incarnation 2019, what Christians like my friends mean by “classic stories” are mostly stories of heroism. (Robin Hood was given as an example.) So we need to ask what heroism is.

There are various useful definitions of what a hero is; for the moment I will define a hero as somebody who gives of himself (in the sense of self-sacrifice) that someone else may have life, or have it more abundantly. Of course, stated like this it includes trivial things. I think that there is simply a difference of degree, not of kind, between trivial self-gift and heroism; heroism is to some degree merely extraordinary self-gift.

If you look at the classic “hero’s journey” according to people like Joseph Campbell, but less insipidly as interpreted by George Lucas, the hero is an unknown and insignificant person who is called to do something very hard, which he has no special obligation to do, but who answers this call and does something great, then after his accomplishment, returns to his humble life. In this you see the self-sacrifice, for the hero has to abandon his humble life in order to do something very hard. You further see it as he does the hard thing; it costs him trouble and pain and may well get the odd limb chopped off along the way. Then, critically, he returns to normal life.

You can see elements of this in pagan heroes like Achilles, or to a lesser degree in Odysseus (who is only arguably a hero, even in the ancient Greek sense). They are what C.S. Lewis would call echoes of the true myth which had not yet been fulfilled.

You really see this in fulfillment in Christian heroes, who answer the call out of generosity, not out of obligation or desire for glory. They endure hardships willingly, even unto death, because they follow a master who endured death on a cross for their sake. And they return to a humble life because they are humble.

Now let’s look at this through the lens of Modern Philosophy.

The hero receives a call. That is, someone tries to impose their will on him. He does something hard. That is, it’s a continuation of that imposition of will. Then he returns, i.e. finally goes back to doing what he wants.

This doesn’t really make any sense as a story, after receiving the call. It’s basically the story of a guy being a slave when he could choose not to be. It is the story of a sucker. It’s certainly not a good story; it’s not a story in which a character’s actions flow out of his character.

This is why we get the modern version, which is basically a guy deciding on whether he’s going to be completely worthless or just mostly worthless. This is necessarily the case because, for the story to make sense through the modern lens, the story has to be adapted into something where he wills what he does. For that to happen, and for him not to just be a doormat, he has to be given self-interested motivations for his actions. This is why the most characteristic scene in a modern heroic movie is the hero telling the people he benefited not to thank him. Gratitude robs him of his actions being his own will.

A Christian who does a good deed for someone may hide it (“do not let your left hand know what your right is doing”) or he may not (“no one puts a light under a bushel basket”), but if the recipient of his good deed knows about it, the Christian does not refuse gratitude. He may well refuse obligation; he may say “do not thank me, thank God”, or he may say “I thank God that I was able to help you,” but he will not deny the recipient the pleasure of gratitude. The pleasure of gratitude is the recognition of being loved, and the Christian values both love and truth.

A Modern hero cannot love, since to love is to will the good of the other as other. The problem is that the other cannot have any good beside his own will, since there is nothing besides his own will. To do someone good requires that they have a nature which you act according to. The Modern cannot recognize any such thing; the closest he can come is the other being able to accomplish what he wills, but that is in direct competition with the hero’s will. The same action cannot at the same time be the result of two competing wills. In a zero-sum game, it is impossible for more than one person to win.

Thus the modern can only tell a pathetic simulacrum of a hero who does what he does because he wants to, without reference to anyone else. It’s the only way that the story is a triumph and not the tragedy of the hero being a victim. Thus instead of the hero being tested, and having the courage and fortitude to push through his hardship and do what he was asked to do, we get the hero deciding whether or not he wants to help, and finding inside himself some need that helping will fulfill.

And in the end, instead of the hero happily returning to his humble life out of humility, we have the hero filled with a sense of emptiness because the past no longer exists and all that matters now is what he wills now, which no longer has anything to do with the adventure.

The hero has learned nothing because there is nothing to learn; the hero has received nothing because there is nothing to receive. He must push on because there is nothing else to do.

This is why Modern tellings of old stories suck, and must suck.

It’s because they’re Modern.

Meek is an Interesting Word

Somebody asked me to do a video on the beatitude about meekness, so I’ve been doing some research on the word “meek”. Even though I don’t speak from a place of authority, talking about the beatitudes still carries a lot of responsibility.

The first problem that we have with the word “meek” is that it is not really a modern English word. It’s very rarely used as a character description in novels, and outside of that, pretty much never. So we have to delve back into history and etymology.

The OED defines meek as “Gentle. Courteous. Kind.” It comes from a Scandinavian root. Various Scandinavian languages have an extremely similar word which means, generally, “soft” or “supple”.

Next, we turn to the original Greek:

μακάριοι οἱ πραεῖς, ὅτι αὐτοὶ κληρονομήσουσιν τὴν γῆν

To transliterate, for those who don’t read the Greek alphabet:

makarioi hoi praeis, hoti autoi kleronomesousin ten gen.

Much clearer, I’m sure. Bear with me, though, because I will explain. (I’m going to refer to the words in the English transliteration to make it easier to follow.)

The beatitudes generally have two halves. The first half says that someone is blessed, while the second half gives some explanation as to why. This beatitude has this form. Who is blessed is the first three words, “makarioi hoi praeis”. In the original the verb is left understood, but this is usually translated as “blessed are the meek”. The second half, “hoti autoi kleronomesousin ten gen” is commonly translated “for they shall inherit the earth”.

Let’s break the first half down a little more, because both major words in it are very interesting (“hoi” is just an article; basically it’s just “the”). The first word, “makarioi”, can actually be translated in English either as “blessed” or as “happy”, though it should be noted happy in a more full sense than just the pleasant sensation of having recently eaten on a sunny day with no work to do at the moment.

I’ve noticed that a lot of people, or at least a lot of my fellow Americans, want to take “blessed”, not as an adjective, but as a future conditional verb. Basically, they want to take Christ, not as describing what presently is, but as giving rules with rewards that are attached. This doesn’t work even in English, but it’s even more obvious in Greek where makarioi is declined to agree with the subject, “hoi praeis”. Christ isn’t telling us what to do and offering rewards. He’s telling us that we’re looking at the world all wrong, and why.

The other part, “hoi praeis”, is what gets translated as “the meek”, though I’ve also seen “the gentle”. It is the noun form of an adjective, “praios” (“πρᾷος”), which (not surprisingly) tends to mean mild or gentle.

Now, to avoid a connotation which modern English has accrued over hundreds of years of character descriptions in novels, it does not mean weak, timid, or mousy. The Wiktionary entry for praios has some usage examples. If one peruses them, they are things like asking a god to be gentle, or saying that a king is gentle with his people.

So translating the first half very loosely, we might render the beatitude:

Those who restrain their force have been blessed, for they will inherit the earth.

This expanded version of the beatitude puts it in the group of the beatitudes which refer to something under the control of the people described as “makarios” (blessed, happy). Consider the other groups of people, which are roughly half of the beatitudes: “the poor in spirit,” “those who mourn”, “those who hunger and thirst for righteousness”, “those who are persecuted in the cause of righteousness,” and “you when people abuse you and persecute you and speak all kinds of calumny against you falsely on my account”.

I think that this really makes it clear that what is being described is a gift, though a hard-to-understand one. So what do we make of the other beatitudes, the ones under people’s control?

Just as a quick refresher, they are: “the meek”, “the merciful”, “the pure in heart”, and “the peacemakers”. They each have the superficial form of there being a reward for those who do well, but if we look closer, the reward is an intrinsic reward. That is, it is the natural outcome of the action.

So if we look closely at the second half of the meek beatitude, we see that indeed it is connected to the first half: “for they will inherit the earth”. This is often literally the case: those who fight when they don’t have to, die when they don’t have to, and leave the world to those who survive them.

Now, I think too much can be made of “the original context”—our Lord was incarnate in a particular time and spoke to particular people, but they were human beings and he was also speaking to all of us. Still, I think it is worth looking at that original context, and how in the ancient world one of the surest paths to glory was conquest. Heroes were, generally, warriors. They were not, as a rule, gentle. Even in more modern contexts where war is mechanized and so individuals get less glory, there are still analogs where fortune favors the bold. We laud sports figures and political figures who crush their enemies in metaphorical, rather than literal, senses.

Even on a simpler level, we can only appreciate the power that a man has when he demonstrates it by using it.

And here Christ is saying that those are happy who do not use their power when they don’t have to. And why? Because they inherit the earth. Glory is fleeting, and in the end one can’t actually do very much with it. Those who attain glory by the display of power do not, in putting that power on display, use it to do anything useful. They waste their power for show, rather than using it to build. And having built nothing, they will end up with nothing.

You can see this demonstrated in microcosm in a sport I happen to like: power lifting. It is impressive to see people pick up enormous weights. But what do they do with them once they’ve picked them up? They just put them back down again.

Now, the fact that this is in microcosm means that there can be good justifications for it; building up strength by lifting useless weights can give one the strength to lift useful weights, such as children, furniture, someone else who has fallen down, etc. And weightlifting competitions do serve the useful role of inspiring people to develop their strength; a powerlifting meet is not the same thing as conquering a country. But there is, none the less, a great metaphor for it, if one were to extend the powerlifting competition to being all of life. Happy are those who do not.

Strength vs. Skill

Many years ago, I was studying judo from someone who had done judo since he was a kid and was teaching for fun. He was not a very large man, but he was a very skilled one. One time, he told a very interesting story.

He was in a match with a man who was a body builder or a power lifter or something of that ilk—an immensely, extraordinarily strong man. He got the strong man into an arm bar, which is a hold in which the elbow is braced against something and the arm is being pulled back at the wrist. Normally if a person is in a properly positioned arm bar, this is inescapable and the person holding it could break his arm if he wanted to; this (joint locks) is one of the typical ways of a judo match ending—the person in the joint lock taps out, admitting defeat.

The strong man did not tap out.

He just curled his way out of the arm bar.

That is, his arm—in a very weak position—was so much stronger than my judo teacher’s large core muscles that he was able to overpower them anyway.

Next, my judo teacher pinned him down. In western wrestling, one can win a match by pinning the opponent’s shoulders to the ground for a brief count. In judo it’s a little more complicated, but the point important for the moment is that you have to pin the opponent such that he can’t escape for a set time (30 seconds, under the classic rules). Once he had pinned the strong man, the strong man asked him, “you got me?” My teacher replied, “yeah, I got you.” The strong man asked, “are you sure about that?” “Yes, I’m sure,” my teacher replied.

The strong man then grabbed my teacher by the gi (the stout clothing worn in judo) and floor-pressed him into the air, then set him aside. (Floor pressing is like bench pressing, only the floor keeps your elbows from going low enough to generate maximum power.)

Clearly, this guy was simply far too strong to ever lose by joint locks or pinning. So my teacher won the match by throwing him to the ground (“ippon”).

The moral of the story is not that skill will always beat strength, because clearly it didn’t, two out of three times. The moral of the story is also not that strength will always beat skill, since it didn’t, that final time.

The moral of the story is to know your limits and always stay within them.

It cost 1 billion dollars to tape out 7nm chip

Making processors is getting very expensive. According to this report, the R&D needed to take a processor design and turn it into something that can be fabricated at the latest silicon node is $1B.

https://www.fudzilla.com/news/49513-it-cost-1-billion-dollars-to-tape-out-7nm-chip

Each fabrication node (where the transistors shrink) has gotten more expensive. I suspect it’s likely that economics will play as big a role in killing off Moore’s Law as physics will. Eventually no one will be able to afford new nodes, even if they are physically possible to create.

This is what an s-curve looks like.

A Michaelmas Book Sale

My friend and publisher, Russell Newquist, is having a Michaelmas sale this weekend on his books since they feature a modern day paladin who fights with the sword of Saint Michael (the archangel). If you’re in the mood for Catholic action-horror (Amazon calls it “Christian fantasy”) check out:

“Jim Butcher’s Harry Dresden collides with Larry Correia’s Monster Hunter
International in this supernatural thriller that goes straight to Hell!”

Also, the sequel:

“There’s a dragon in the church.”

I have to confess that these are still on my shelf waiting to be read, but I have read Russell’s short story Who’s Afraid of the Dark? (which is about a character who appears in War Demons and Vigil) and it was very good. So if you’re not busy writing murder mysteries and have time to read other people’s work, I strongly recommend checking them out.

This weekend the sale prices for War Demons are:
Ebook: $0.99
Paperback: $9.99
Hardcover: $19.99

The sale prices for Vigil are:
Ebook: $0.99
Paperback: $4.99

History Suffers From Academia

Academia has problems. This is an obvious statement, since it is an institution in a fallen world. It is worth looking at these problems in some depth, however, because they affect the various academic disciplines to varying degrees, and I think that History may be hit the hardest.

This occurred to me as I was reading the book A Catholic Introduction to the Bible: The Old Testament by Brant Pitre and John Bergsma. (It’s a massive tome that could be used to bludgeon a water buffalo to death, and I’m still in the introductory materials.) In the prefatory materials is an overview of biblical scholarship over the last two centuries, and this includes some theories which went from being novel, to being dominant, to mostly in the rubbish bin in recent times. And this got me thinking about the problems of academia and how they affect history.

The big problem of academia is that its currency is novelty. You can see this in the economic angle of publish or perish, to be sure, but even if publication didn’t affect people’s job prospects and salaries, it would still affect their reputations and standing as scholars. This selective pressure has a selective effect on scholars, very akin to what the same pressure has on scientists. Those who come up with novelty, for whatever reason, will tend to publish more, and will thus receive more academic status. This has generational effects, since new scholars learn from and have to work with old scholars. (For more on that, see Who Works for Bad Scientists?.)

While this is true and important (and, as I say, no one takes evolution seriously enough), it’s not what I want to focus on today. Rather, I want to focus on what limits are inherent in each field to protect against novelty which is novel by being purely fictitious.

The prototypical example of a system which corrects against novelty-through-pure-fiction is science, but this is painting with an overly broad brush. Not all sciences do this, only the experimental sciences. Theoretical physicists can spin theories about 11-dimensional string vibrations until the cows come home, and no one will ever notice that they don’t work because no one can run an experiment on the things.

The fiction-detecting aspect of experimental science only really works the way it’s advertised to in crowded fields of science. That is, it only works in fields of science where people will be doing the same experiments many times, or at least experiments whose results depend upon previous results. This is quite true of experimental physics, especially of basic physics such as Newtonian mechanics. It gets less true the less crowded the field is. The more low-hanging fruit there is to pick, the more people will spend their time picking it rather than looking in each other’s baskets.

Biology, and to the degree that it is a sub-field of that, medicine, are good examples of non-dense fields. There are far more questions to ask than there are researchers with funding to try to find answers. There are exceptions for particularly hot topics where at least several researchers will try to ask the same question, but you just don’t find much in the way of people duplicating each other’s experiments to find out if they get the same results.

Worse, in medicine certain types of answers can make experiments unethical to replicate. If a drug shows a statistically significant result in a clinical trial, running such a trial again with a placebo group becomes unethical. All is not lost, since new drugs get run against “standard care” (i.e. the already approved drug) as a control group, so if the original drug is really just a placebo that got lucky, a real drug that comes along will prove better than it. There isn’t much of an ethical way of discovering that the drug is actually just a placebo (with side effects), though.

Chemistry may be the best field simply because chemistry is so closely tied to engineering, and the purpose of engineering is to replicate the heck out of whatever experiment was run. That is, engineering replicates findings by putting things into production on industrial scales, and a million LED lights shipped to Home Depot and Lowes and other such stores replicate the finding that indium, gallium, and nitrogen, when mixed correctly and run through with an electric current, emit blue light. (Indium Nitride and Gallium Nitride are the semiconductors used in the blue LED.) But this is far and away the best case, and since it is industrial, it is definitionally not academic.

So what is it about the sciences, where there is some limit on fictitious novelty, that provides this limit? It’s that the test is not whether or not it seems plausible to another human being, but whether or not it actually works when one tries it.

This is, largely, not available in history. It is not entirely absent, as there are historical theories which can be disproved or confirmed by subsequent archaeological finds. These are, however, fairly rare. They are extremely rare in ancient history, such as biblical studies.

The particular theory which got me thinking about this was the Documentary Hypothesis, which was, more or less, a theory created by a liberal Protestant in the 1800s that the traditional attribution of the Pentateuch to Moses was ahistorical.

What amazing new archaeological evidence was unearthed that gave rise to this theory? Why, none at all. The guy who came up with it looked at the text and spun his theory out of thin air. He decided to categorize the various lines and paragraphs and stories in the Pentateuch on the basis of which he thought similar and which different, then attributed the resulting groupings to different authors, at different times.

Of course, being a liberal Protestant, he decided that the religion started pure and simple and later got corrupted by law and liturgy. This appealed to his prejudices, and also gave him an interpretive framework to pull out of thin air approximate dates for the various authors he had also pulled out of thin air.

And this was a huge hit in academia. Why? There are many threads which went into it, of course, but a significant one is that it was novel. It was new, and fresh, and exciting. It produced an enormous amount of work for scholars to do—someone had to go through the Pentateuch line by line and classify every line according to which theoretical author wrote it.

I think even more than being novel, this produced low-hanging fruit. That is, it made for relatively easy work.

One of the problems that a modern Christian who wants to write about scripture is up against is that he’s implicitly competing with twenty centuries of the most brilliant people in the world, because the brilliant people who had interesting things to say about scripture wrote them down, and Christians valued those writings and kept them and passed them on. There is, at present, an astonishing amount of excellent reading material, if one wants it, available from the saints and doctors of the church. What can one say, today, that has not already been said, and better?

Here, a new theory which overturns everything is the savior of the man in the bottom 99.99% of humanity by genius or talent or whatever metric one wants to use (i.e. one who is not in the top 0.01%). With everything overturned, there is new, fresh work to be done that most anyone can do, but that no one has done before, because no one could have done it before.

And who will bite the hand that feeds him? Certainly, the academic is not known for this counter-productive activity any more than any of his fellow men are.

So we get a century of people arguing over imaginary authors that they have not a shred of evidence for, mostly because it’s easier and more fun than real work.

From what I understand, a similar thing happened with Plato. At some point someone decided that most of the Socratic dialogues weren’t written by Plato because… they didn’t sound the same to him. Skip forward by about a century, and scholarly consensus once again becomes that Plato wrote the stuff commonly attributed to him. What did the intervening century have to show for it? A lot of sound and fury, signifying nothing.

Oh, and a lot of employed academics.

America’s Sweethearts

I’ve written before about the movie America’s Sweethearts. I would like to add to those thoughts, since I’ve watched it a few more times since then. (It’s one of a handful of movies I watch while debugging code because it helps to keep me from getting distracted while I wait for compiles, and because I know it so well it doesn’t distract me from doing the work because I always know what happens next.)

One of the very curious things about the movie America’s Sweethearts is that all of its characters are bad. (For those who are not familiar with it, America’s Sweethearts is a romantic comedy.) The movie opens with the information that the titular couple of Eddie Thomas and Gwen Harrison has split. During the filming of their most recent and now last movie together, Time Over Time, Gwen took up with a Spanish actor and left Eddie. Eddie went crazy and tried to kill them, then retreated to a sort of faux-Hindu wellness center and stayed there.

This is recapped fairly early on; the plot of America’s Sweethearts begins with the director of Time Over Time refusing to show the movie to the head of the studio until the press junket, when the press would see it at the same time as everyone else. This causes the head of the studio to panic and re-hire Lee, the studio’s publicist, whom he had fired as a cost-saving measure, to put together the junket, because his talents really do match his salary. The only other major character is Kiki, Gwen’s sister (it’s unspecified who is older; they might even be fraternal twins, which would help to explain shared high school experiences). She’s a mousy creature whose life is mostly taken up with pleasing the whims of her famous sister, but she’s played by Julia Roberts, so you know that won’t last through the end of the movie.

We now have all of the major characters: an adulterer, a lunatic, an unscrupulous businessman, a wimpy woman who lets herself be tyrannized by her awful sister, and a publicist of whom one could say what Hercule Poirot’s friends said of him: he would never tell the truth if a lie would suffice.

And what’s really weird is that they’re a loveable cast, and it’s a really enjoyable movie, even though it is not a redemption arc for most of them.

I think that part of what makes it work—apart from the massive charisma of all of the actors, which cannot be overstated as a causal element—is that the characters’ vices, while not repented of, are not excused, either.

The movie has something like a happy ending for about half of the characters in it, but it is very fitting because it’s a very small happy ending. The head of the studio gets a movie which has a lot of legal liabilities but which might make enough money to cover them. The publicist has what is probably going to be a successful movie. The adulterer is embarrassed, but she stays with her Spaniard for whatever that is worth. Eddie and Kiki wind up together, but shortly before they decide to give it a try, Kiki prognosticates that it’s never going to work, and she might well be right.

I think that ultimately what makes the movie work is the subconsciously stoic theme that vice is its own punishment, and so successful vice is still punished vice. America’s Sweethearts is all about people who do not deserve their natural virtues—beauty, fame, wealth, power—who are punished by getting to keep them. But—and this is an important but—the movie is so short that one is left with the hope that the punishment may serve its purpose and the people may in time learn to repent.

This may be the formula for all successful movies about vicious people (that is, people who practice vice). At least where they do not repent. Redemption stories are probably better. But if a story about vicious people is not going to be about their redemption, I think the story of how they are punished by success may be the only other option for a good story.

Because good stories need to be true to life.

Science Fiction vs. Fantasy

In a Twitter thread, I proposed the idea that the main distinction between Science Fiction and Fantasy is whether people prefer spandex uniforms or robes.

I did mean this in a tongue-in-cheek way. Obviously the wardrobe is not the only difference between Science Fiction and Fantasy. The distinction is curiously harder to define than one would first suspect, though.

Before proceeding, I’d like to make a note that genres are not, or at least are not best considered as, normative things which dictate what books should be. Rather, they are descriptions of books for the sake of potential readers. The purpose of a genre is to say, “if you like books that have X in them, you might like this book.” (The normative aspect comes primarily from the idea of not deceiving readers, but that runs into problems.)

Science Fiction is often described as extrapolating the present. The problem is that this is simply not true in almost all cases. It is very rare for Science Fiction to include only technology which is known to be workable within the laws of nature which we currently know. This is doable, and from what I’ve heard The Martian does an excellent job of this. At least by reputation, the only thing it projects into the future which is not presently known to be possible is funding. This is highly atypical, though.

The most obvious example is faster-than-light travel. This utterly breaks the laws of nature as we know them. Any Science Fiction story with faster-than-light travel is as realistic a projection of the future as is one in which people discover magic and the typical mode of transportation is flying unicorns.

I have seen attempts to characterize science fiction based on quantitative measures of how much of the science is fictional. This fails in general because fantasy typically requires only the addition of one extra energy field (a “mana” field, if you will) to presently known physics. And except for stories in which time travel is possible, the addition of a mana field is far more compatible with what we know of the laws of nature than faster-than-light travel is.

Now, one possibility (which I dislike and am not committed to) is that Science Fiction is inherently atheistic fantasy: fantasy without the numinous. An alternative is that Science Fiction is fantasy in which there is no limit to the power which any random human being can acquire.

What I think might be the better distinction between Science Fiction and Fantasy is that Science Fiction is fantasy in which the author can convince the reader that the story is plausibly a possible future of the present. What matters is not whether, on strict examination, the possible future is actually possible. What matters is whether the reader doesn’t notice. And for a great many readers of Science Fiction, I suspect that they don’t want to notice.

In many ways, the work of a Science Fiction writer might be like that of an illusionist: to fool someone who wants to be fooled.

This puts Star Wars in a very curious place, I should note, since Star Wars is very explicitly not a possible future. But Star Wars has always been very dubiously Science Fiction. Yes, people who like Science Fiction often like Star Wars, but this doesn’t really run the other way: people who like Star Wars are not especially likely to like other Science Fiction. I personally know plenty of people who like space wizards with fire swords who do not, as a rule, read Science Fiction.

Anyway, even this is a tentative distinction between the two genres. It’s not an easy thing to get a handle on because it’s impossible to know hundreds of thousands of readers well enough to identify the commonalities between their preferences. Even the classification of books into genres by publishers and bookstores is only a guess as to what will get people to buy books, made by fallible people.

Murder For Revenge

In broad strokes, there are only a few reasons to murder someone:

  1. Gaining money or other forms of power
  2. To pave the way for love
  3. Revenge
  4. To gain status that properly belongs to the victim
  5. To protect one’s status

These correspond, roughly, to the deadly sins:

  1. Greed
  2. Lust
  3. Wrath
  4. Envy
  5. Vanity

Today I want to consider murder for revenge. It further subdivides into two possible situations:

  1. The murderer is fine with being destroyed in the process
  2. The murderer wishes to suffer no repercussions

The former can make an interesting story (such as the sub-plot in Chesterton’s The Sins of Prince Saradine), but it’s not easy for it to sustain a mystery. The main problem is that the murderer should, by hypothesis, confess. This can, however, be handled.

The first way to handle this is to have the murderer leave. This is hard to make work unless he thinks the crime won’t be discovered and so no explanation is necessary. That can be done, though, especially for historical crimes discovered and investigated only years later.

The second way to handle this is to have the murderer leave a confession before leaving, only to have the confession intercepted by someone who wants to use the occasion to murder someone else by framing him for the crime. This is a very workable sort of plot, though it will be complicated.

The third major way is to kill the murderer before he can confess. This may be the most interesting option, especially if he is killed by the victim. Of course, if the murderer is murdered by his victim, this will not be mysterious unless at least one of them uses a scheme for which he does not need to be present, which is where the interesting part comes from. It is very hard to suspect a dead man of murder. If there is anyone one will leave off suspecting of a crime, it’s a dead man.

In a Cadfael story (Saint Peter’s Fair) Hugh Beringar remarks that babes and drunks are the world’s only innocents. But this is not an exhaustive list. Who is so incapable of harm as a man already dead?

What’s especially interesting about two people who have murdered each other is that with any contriving at all, the author can arrange for everyone to suspect them of having been murdered by the same person, and this will be a very strange person indeed to have two enemies with so little in common. It also means that the murders will seem to have been done very craftily when they were in fact done very simply. Or at least one of them will seem that way. There are absolutely wonderful possibilities for misdirection here.

(I really want to write a story like this some day. I probably should first write a story with at least two victims at the start who were killed by the same person, so it’s not obvious, though.)

The other major option, which is more common because it can far more easily sustain a mystery, is for the one seeking revenge to wish to avoid repercussions for his crime. This provides a simple reason why he does not confess, and it can sustain a mystery with little difficulty.

It can, of course, be made far more complex than the simple case. The variation that I suspect is most interesting, or at least that I personally find most interesting, is of introducing the complication of the passage of time. This can either be put between the original offense and the present, or between both the original offense and the revenge, and the present.

Of the two, my favorite is probably the one where the revenge is recent but the offense is in the past. This is probably most classically done with the child who grows up to avenge a parent, though this should possibly be avoided because it is common enough that, these days, the average reader might count the years since the crime in the past and guess the killer simply based on his age.

It comes to mind that an interesting way around that problem might be to give the murderer some scruple in his revenge, such as waiting for the 18th birthday of the victim’s youngest child, on the theory that his children should not be punished for the crime of their father. Something like that would throw a wrench into figuring out the culprit by simple calculations, at least.

There are more variations on murder for revenge, but this post is getting long enough that I think I’ll leave them for later. Enjoy writing your murder mysteries about revenge, and God bless you.

Dragnet

Something I find interesting on occasion is to look up the history of television shows. Television is a very young medium. Though the device itself was invented in the 1930s, the Great Depression and the Second World War and their attendant economic privations meant that televisions were not widely owned until the late 1940s. Without an audience, not much was made to broadcast to it. It was, therefore, really the early 1950s in which television got its start.

This makes it easy to research, but also makes the chain of influences fairly short.

Dragnet actually started as a radio drama, starring Jack Webb as Detective Joe Friday. In 1951, it became a television show, with much the same cast as the radio drama, though his partner had to be changed out part way through. This show lasted until 1959. It was later revived in 1967, this time in color. This is the version which I think most people are familiar with, starring Harry Morgan as Detective Bill Gannon alongside Jack Webb reprising his role as Joe Friday. Certainly it’s the version I’m most familiar with. It lasted until 1970.

There were other versions made, but none with Jack Webb, since he died in 1982 (at the age of 62). In 1987 there was a comedic movie starring Dan Aykroyd and Tom Hanks. It’s almost a parody of the original, though it is not a mean-spirited parody, and I can testify that it is a lot of fun. In 1989 there was a short-lived series called The New Dragnet, and in 2003 there was an even shorter-lived revival series called LA Dragnet.

Though Dragnet was not able to survive in the modern world of police procedurals, or possibly just it was not able to outlive its star, Jack Webb, it did have an enormous impact on television. Counterfactuals are impossible to state with certainty, but it seems likely that police procedurals would not have the form they have today if Dragnet had never happened.

Episodes of Dragnet, which are (surprisingly) easily found on YouTube, are interesting to watch. The detectives are in the homicide division, so in a very technical sense the cases are murder mysteries. However, they are not detective stories in the sense of Poirot or Agatha Christie. The detectives do a lot of work, of course, but they don’t really do anything particularly clever. They just keep talking to people until they get enough facts to convict the murderer.

What I find curious—given that I’m a huge fan of detective fiction with genius detectives and write some of it myself—is that, bare-bones as Dragnet is, it still satisfies the impulse to see a mystery solved. This is true of modern police procedurals as well. In both cases, they feel somewhat like empty calories—enjoyable while watching but they don’t really have any substance which sticks with one.

This is not true of the great detective stories. Murder on the Orient Express, Have His Carcase, Saint Peter’s Fair—these stories really stick with one. There are interesting ideas in them to chew on long after one’s read them.

But it’s a testament to the human craving for the solving of mysteries that even Dragnet, which was told in an almost deliberately un-entertaining style, still makes you want to watch to the end to find out what happens, if you watch the beginning. This may partially be a testament to the power of charisma, though. I can watch Harry Morgan in just about anything.

Calories In vs. Calories Out

When it comes to the subject of losing weight—more specifically, reducing excess fat stores in the body—it’s fairly common to come across somebody who puts it like this:

It’s just calories in versus calories out. Thermodynamics says that if you take in more calories than you burn, you’ll store them as fat. If you take in fewer, you’ll burn fat. So weight loss is very simple: just burn more calories than you take in. That’s it. Anything else is just people trying to kid themselves that there’s a magic bullet.

This represents a confusion one sees in many fields: making no distinction between the cause of something and the mechanism by which the cause makes it happen. It is quite true that when somebody stores fat in their body, the energy that went into making the fat cannot also have been burned, and therefore the amount of energy they took in was higher than the amount of energy they burned. No one, anywhere, disputes this. It’s also entirely uninteresting to the subject of fat gain or loss in people with excess fat.

(NOTE: when talking about healthy people—typically lean athletes—regulating what little fat they have, this simplification is probably accurate. This post is not talking about how a bodybuilder can force his body to levels of fat which are dangerously low, or how an athlete can cut to a lower weight class. Those goals will almost certainly have to be achieved by simple calorie restriction, because they involve manipulating a healthy body into going outside of the homeostasis it wants to maintain for optimal health.)

The question which is actually interesting to the subject of fat gain or loss is why the body stores energy as fat. And this is where the people who love to talk about calories-in-calories-out show their reductionist colors. They will tell you that since fat cells are energy storage, if you take in more calories than you burn, you will necessarily store them as fat. But they give no reason for this, while there are excellent reasons to doubt it.

The reason to doubt that extra calories eaten can only go to fat is that the human metabolism is a highly variable thing. Though I should clarify what I mean here because by “metabolism” some people mean “resting metabolism”, while I mean “total metabolism”. Our bodies spend calories on a lot of things—walking, talking, maintaining our temperature, repairing our bodies, and other things. Very few of these things are fixed costs. One possible reaction to being in a cold environment is moving more or just burning energy for heat. Another is feeling cold and putting on a sweater. Those do not use the same number of calories over the course of an hour.

Let’s consider a very analogous system: finances.

If a person makes an additional $1000 per month, it is possible that his bank account will grow by $1000/month. It is also possible that he will start eating at expensive restaurants, and his bank account won’t change at all. On the flip side, a person whose income doesn’t change can decide to stop eating out and can grow his bank account with no additional income, merely by cutting expenses. And he could do both: he could decide his bank account isn’t nearly large enough, work a second job to bring in an extra $1000, move to a tiny, unheated apartment, and eat nothing but porridge for his meals so that his bank account swells rapidly.

It’s that last part that’s most interesting to the moment, because it’s what seems to be the case in people who are, shall we say, famine resistant. Because there’s a really fascinating question about people carrying excess fat which is rarely asked: why do they get hungry?

Seriously, why is it that a person with excess fat feels hungry when he has plenty of energy at his ready disposal? That’s not how the body normally works. The human body, when working correctly, tries to maintain a homeostasis. Granted, it’s a homeostasis with more fat than a bodybuilder would like, but the body tends to regulate hunger on the basis of energy availability. Or in other words, normal people usually stop being hungry when their calories in is roughly equal to their calories out.

At this point, a word is necessary about what we might call the balloon theory of hunger. Basically, it is the model of hunger where the stomach is a balloon with pressure sensors and hunger is merely the pressure sensors detecting whether there’s still room in the stomach to fit something without literally bursting it.

There is some minor truth to this, in that the stomach does in fact have sensors in it which detect the degree to which it is stretched, but a few years of living as a human being should be sufficient to show this model as the rubbish that it is. Consider a few counter-examples:

  1. Dessert. A person can eat until “they’re so stuffed they can’t eat another bite,” then the moment dessert comes out they can somehow fit enough additional food to fill a grapefruit.
  2. Exercise makes people hungry. Starting a new exercise routine can make one feel ravenously hungry for days. Exercise does not drastically increase the size of someone’s stomach in the first few days.
  3. Teenage boys can out-eat their parents combined. I did it often as a teenage boy. (I was on the rowing team in high school and relatively lean, too.) Teenage boys do not have stomachs which are larger than their mother’s and father’s stomachs combined.
  4. Tests show that a stomach can stretch to around the size of an entire human torso before bursting from pressure. They’re incredibly expandable.
  5. People who win hot-dog eating contests do not ordinarily have to eat that much food before they feel full.

In short, the theory that being hungry is entirely, or even primarily, about whether your stomach is full is nonsense.

There is also the always-hungry model, which tends to involve some pretend evolutionary biology about humans having evolved in circumstances of constant famine and so we are always hungry in order to pack on as much fat as possible for the next famine which we know is right around the corner.

The main problem with this is that it directly contradicts experience. Americans live in an environment with truly enormous food surpluses always available, and there are plenty of not-fat people who eat until they are not hungry and who nevertheless do not eat the 10,000+ calories that they easily could and that this model predicts they would.

In short, a little bit of experience shows that human beings are not normally ravenous eating machines consuming every calorie that they can get their mouths on.

With these models of human hunger out of the way, the question then comes up and is very pressing: why do fat people get hungry?

It is not the purpose of this post to give the answer to this question, chief among the reasons being that there are almost certainly many answers to it; people’s energy regulation can get screwed up for a variety of unrelated reasons. Its purpose is only to highlight how important finding an answer is for a person who wants to lose excess fat.

(So as to not completely shirk the question, I think that one of the most common is excessive fructose consumption causing insulin insensitivity in the liver, which cascades into general insulin insensitivity, which then disrupts energy regulation, though even that is probably an over-simplification since in general nothing in biology involves just a single hormone. This model, however, at least corresponds well to my own experience of when I gain and lose weight.)

There’s a really good metaphor for the issue in Tom Naughton’s post Toilet Humor: The How vs. Why of Getting Fat. I’m going to give a variant of this metaphor to keep things more pleasant: the kitchen sink.

Suppose that your sink is clogged, filling up with water, and about to overflow. It is entirely true that the problem, in an acute sense, is that there is more water going into the sink than coming out of it. If one applied the standard dietary advice to a clogged sink, one would just drastically reduce the flow of water into the sink until the sink was empty.

And it will work if you do that. Cut off the water, and the sink will eventually not be full of water. Evaporation, if nothing else, will see to that.

There’s just one problem: you have the sink for a reason, and that reason is not merely to keep it empty. You want the sink to do work. And the water-in-water-out approach of just cutting off the water in means that your sink can’t do its job. The correct solution to a clogged sink is not to stop washing your dishes. It’s to find out why it’s clogged and clear the clog. Maybe the drain strainer is full. Maybe the pipe is clogged later on. Fixing the problem depends on what the problem is, and there isn’t one problem. But whatever the obstruction in the drain, that’s what you need to fix so that the sink can do its job.

Similarly, a human being almost certainly has things to do besides sitting around not being fat. Many of us are parents. Some of us have jobs. A few of us have friends. Whatever it is, we have more to do than just sitting around not being fat. Just cutting off our food without fixing why we’re hungry when we’ve got excess fat is like just cutting off the water to the sink. Whatever you’ve got to get done in life, you’re going to do a bad job.

Further, people who are constantly hungry tend to be irritable, short-tempered, and lethargic. Even if they manage to fulfill their primary responsibilities well (and they’re probably only doing it passably), they’re going to make life less pleasant for everyone around them. I once had a housemate who was doing a calorie-restricted cut, and I was nearly at the point of begging him to stop because he was just so unpleasant to be around during it.

Interestingly, you can see the same sort of indifference to function in sports medicine vs. regular medicine. If an athlete has a problem where something really hurts when he uses it, the conventional medicine approach is to just stop playing the sport and (I’m exaggerating) get months of bed rest. People into sports medicine know that this is hyper-focusing on a mechanism—in this case, rest—while ignoring that the person is a human being with a life. Sports medicine tries very hard to figure out how to restore athletes to normal function in the context of still living life as an athlete; it does not consider wheelchair-bound-but-alive to be an equivalent outcome.

So, in conclusion, the real question when it comes to someone who wants to lose excess fat is not how to get rid of excess fat. It’s how to fix the fact that they’re hungry when they shouldn’t be. If you fix that, then the person will certainly lose excess fat—people who aren’t hungry don’t eat as many calories. But they’ll do it while still being a functional human being.

In short: one should treat the problem, not the symptom. To do that, one must first identify the problem.

The Best Laid Schemes O’ Mice an’ Men Gang Aft Agley

This ~~week~~ ~~month~~ summer has really not been going the way I hoped it would. I’m going to talk about why that’s OK, but first I want to quote the stanza from which the title comes, because the original poem, To a Mouse, on Turning Her Up in Her Nest With the Plough, November, 1785, is not quoted often enough:

But Mousie, thou art no thy-lane,
In proving foresight may be vain:
The best laid schemes o’ Mice an’ Men
          Gang aft agley,
An’ lea’e us nought but grief an’ pain,
          For promis’d joy!

So, the reason for the strike-through up above is that I began this post in, if my memory serves me, July, and I am now finishing it in August. Between various things, mostly family related, as well as an annual trip to visit my parents, most things have gotten pushed to the side. About the only creative thing I’ve managed to do is work on the second chronicle of Brother Thomas, Wedding Flowers Will Do For a Funeral.

On the plus side, I’ve finished the first draft and, as of the time of this writing, have edited the first 100 pages (actually, 99¼, but the word processor is on page 100). It’s going slower than I would like, of course, but that’s something of a theme, lately.

And just to make life more crowded, I’m finally going back to the gym to lift weights 3 times a week. In the long run, it’s very good that I’m doing it, but it means even less time.

And that’s OK.

I’d really like to be a lot more productive on this blog and on my YouTube channel. I’ve got a notepad of videos to do which is up to about 10 items now. It’s a backlog. And I’ve got tons of blog posts to write. I want to finish reviewing the Lord Peter Wimsey novels, I want to review all of the Cadfael novels, and after that, probably the Poirot novels. I want to talk more about mystery writing, and I’ve got lots of things to write about theology and philosophy, too.

And, God willing, some day I will.

But it’s that first part that’s really important to keep in mind. It’s our job to do our best; it’s God’s job to figure out whether—and how—we should succeed. Running the world is a big and complex task, and God doesn’t ask of us that we do it. All He asks of us is that we do our best to do what He’s given us to do in the moment.

So, the world frequently doesn’t turn out like we expect. But we can trust that it does turn out for the best.

That’s really all we can ever do: do our best and trust God.

Interesting Video On Why Germany Lost World War II

In an interesting video, TIK talks about Germany’s access to oil and oil supplies and why these dictated its actions during World War II, and why they made its downfall all but certain:

It is said that when it comes to war, amateurs think in terms of tactics and professionals in terms of logistics. This is related to the saying that an army marches on its stomach—that is, if it’s not fed, it doesn’t fight.

Feeding and watering an army—both men and horses—has been the concern of generals for thousands of years. (Horses were often relatively self-sustaining, since they eat grass, but they do better on grain if you want them to be constantly working.) Thus tactics like burning crop fields during retreat, so as to starve an invading army.

World War II was in many ways the first truly mechanized war, and thus the problem of logistics expanded into the economic sphere. Machines are produced only by a thriving economy, and machines run only on oil. In order to fight an effective mechanized war, one must have a strong economy and lots of fuel.

This, by the way, has strong social implications outside of war. In order to remain in peace, one must have the strength to defeat attackers. In order to do this in the modern context of mechanized warfare, one must have a high-production modern economy. One doesn’t need to be able to produce the weapons of war oneself, but one must be able to buy them. That requires a modern economy, which requires at least much of modern social organization.

Those who want to bring back the good parts of traditional social organization need to understand this well. Whatever form modern society takes, it must be one that powers a modern economy which can power a modern army. If it’s not, it will be short-lived.

Studies That Test Diets And Compliance

I’ve had good results from using an extremely low-carb (i.e. low carbohydrate) diet to lose weight, so I’m highly skeptical whenever a study shows that such diets don’t work. There are studies that show that they do, too, in addition to my experience, so something is going on when the studies conflict this sharply. The only thing to do is to actually dig into the studies.

And the thing one finds with many of the “low carb” diets in such studies is that they are frequently quite high carb. “Low carb” will often be defined as less than 100 grams of carbohydrate per day. People who have success eat well under 50 and frequently less than 20 grams of carbohydrate per day. A diet with 5-10 times as much carbohydrate being tested as a “low carb” diet simply doesn’t tell anyone anything useful.

But another big problem one sees is studies which test compliance at the same time they test efficacy. That is, the study breaks people up into groups and tells them what to do, but then records whatever they actually do as part of the group that they were assigned to. So if someone in the low carb group eats nothing but pasta, his weight performance will count toward the low carb diet average in that study.

There are legitimate reasons for this, but they’re all for medical practitioners. Basically, such studies tell a doctor how likely he is to see results if he prescribes a diet to all of his patients. Great for doctors, useless for the rest of us.

The other problem is that we largely already know what compliance with any behavioral change is in human beings: very low. It doesn’t much matter what you’re talking about; people don’t, typically, change for the better.

Where this is really egregious is where people look at these studies and don’t distinguish between the efficacy of the behavioral change and the degree to which the study told us what we already know about human beings: they don’t comply.

Hell, the compliance rates on taking a single pill a day are far from perfect; just look at all the people one knows who forget the pill from time to time. The compliance rate on 2x, 3x, and 4x pills per day is progressively worse, just from simple observation. Who, having been prescribed a pill 3x per day, actually manages to take it 3x per day for all the days of the prescription?

When this comes to bigger stuff like diet and exercise, a simple and only somewhat inaccurate model is that people don’t comply. So a study which measures compliance + some change will mostly show no effect. But that’s uninteresting for people who will actually change.

Consider other areas of life: lifting weights or running. If you did a study to find out if lifting weights makes you stronger in which you also measured compliance, you’d find out that lifting weights doesn’t make you stronger. If you did a study measuring whether running makes you a better runner, which also measures compliance, you’d find out that running doesn’t make you better at running. Hell, as long as the study is also measuring compliance, you’d find out that practicing piano doesn’t make you play piano better and taking dance lessons doesn’t teach you how to dance. Because in all these studies, the fact that most people stopped lifting weights, running, practicing piano, and never went to the dance lessons would dominate the results.

Or to put it simply, doing something only has an effect if you actually do it. No kidding.

Which is why what we need are studies which also measure compliance and separate people into groups based on compliance. This does introduce problems. Probably the biggest problem is that it will cost a lot of money because it will require really large groups of people. With 90%+ of people non-complying, you need a ten times larger group of people to study, and that costs a lot of money.
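The dilution described above is easy to see in a toy simulation. This is a minimal sketch with made-up numbers—the 10% compliance rate, the assumed effect size, and the noise are all illustrative assumptions, not figures from any real study—showing how an everyone-counts (“intention-to-treat” style) average nearly erases an effect that is plainly visible among the compliers:

```python
# Toy simulation: a diet that works, measured two ways.
# All parameters below are made-up assumptions for illustration.
import random

random.seed(0)

N = 10_000               # people assigned to the diet
COMPLIANCE_RATE = 0.10   # assume ~90% don't stick with the change
TRUE_EFFECT_KG = 8.0     # assumed fat loss for someone who actually complies

losses = []
compliers = []
for _ in range(N):
    complied = random.random() < COMPLIANCE_RATE
    # Everyone's weight drifts a little regardless (random noise);
    # only compliers get the true effect on top of it.
    loss = random.gauss(0.0, 2.0) + (TRUE_EFFECT_KG if complied else 0.0)
    losses.append(loss)
    compliers.append(complied)

# "Intention-to-treat" style: average everyone assigned, compliant or not.
itt_mean = sum(losses) / N

# Compliers-only: average just the people who actually followed the diet.
per_protocol = [l for l, c in zip(losses, compliers) if c]
pp_mean = sum(per_protocol) / len(per_protocol)

print(f"everyone-assigned average loss: {itt_mean:.1f} kg")   # ~0.8 kg
print(f"compliers-only average loss:    {pp_mean:.1f} kg")    # ~8 kg
print(f"compliers in sample:            {len(per_protocol)}")
```

Note that the compliers-only group is only about a thousand people out of ten thousand assigned, which is exactly the cost problem: to get a decently sized compliant group, the study has to recruit roughly ten times as many people.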

The second problem is that this switches out measuring the efficacy of the diet (or whatever) together with compliance for measuring the efficacy of the diet together with whatever preconditions (genetics, preferences, etc) make one likely to actually stick with it.

However, this is clearly a much more useful thing for an individual to measure. If I’m considering lifting weights, I want to know how much stronger I might get if I can stick with it. If I find I can’t stick with it, I don’t really care what it would do, anyway. And I don’t much care why I can stick with it, either.

If it turns out that only 10% of the population can stick with some diet, then I will consider taking my chances on finding out if I’m in the lucky 10%. Weight lifting works that way, to a limited degree. Everyone can get somewhat stronger, but only a fraction of the population can get hugely strong.

But there’s another issue at play, which has to do with motivation: knowing that something will work if I stick to it makes it vastly more likely that I will stick to it. If I actually believe that there is a causal connection between an action and a benefit, it is much easier to keep doing the action until I get the benefit.

Which is yet another reason that studies which measure compliance as well as an effect are worthless: the study participants didn’t know whether sticking to the plan even had any potential benefit.

So, in short, when it comes to studies showing no benefit to something, always check to see whether it’s a study that’s just telling you that human beings rarely change. It’s not completely worthless, but it’s only telling you what you already know.

The First Mary Sue

The first Mary Sue was a character in a parody of Star Trek fan fiction, published in the fanzine Menagerie in 1973. (Fanzines were magazines, often distributed by photocopying them and handing out the results, but always made cheaply and without advertiser sponsorship, and typically given away for free or for a nominal charge to cover the cost of printing.) The parody was called A Trekkie’s Tale. It’s only a few paragraphs long, so I’ll quote it in full:

“Gee, golly, gosh, gloriosky,” thought Mary Sue as she stepped on the bridge of the Enterprise. “Here I am, the youngest lieutenant in the fleet – only fifteen and a half years old.” Captain Kirk came up to her.

“Oh, Lieutenant, I love you madly. Will you come to bed with me?” “Captain! I am not that kind of girl!” “You’re right, and I respect you for it. Here, take over the ship for a minute while I go get some coffee for us.” Mr. Spock came onto the bridge. “What are you doing in the command seat, Lieutenant?” “The Captain told me to.” “Flawlessly logical. I admire your mind.”

Captain Kirk, Mr. Spock, Dr. McCoy and Mr. Scott beamed down with Lt. Mary Sue to Rigel XXXVII. They were attacked by green androids and thrown into prison. In a moment of weakness Lt. Mary Sue revealed to Mr. Spock that she too was half Vulcan. Recovering quickly, she sprung the lock with her hairpin and they all got away back to the ship.

But back on board, Dr. McCoy and Lt. Mary Sue found out that the men who had beamed down were seriously stricken by the jumping cold robbies, Mary Sue less so. While the four officers languished in Sick Bay, Lt. Mary Sue ran the ship, and ran it so well she received the Nobel Peace Prize, the Vulcan Order of Gallantry and the Tralfamadorian Order of Good Guyhood.

However the disease finally got to her and she fell fatally ill. In the Sick Bay as she breathed her last, she was surrounded by Captain Kirk, Mr. Spock, Dr. McCoy, and Mr. Scott, all weeping unashamedly at the loss of her beautiful youth and youthful beauty, intelligence, capability and all around niceness. Even to this day her birthday is a national holiday of the Enterprise.

The story was originally attributed to “Anonymous” but is known to be the work of the editor, Paula Smith. The basic story was a common submission; as such it’s a collection of common features, exaggerated. It’s very interesting to look at those features.

  1. Main character is a teenage girl.
  2. She’s beautiful and wonderful.
  3. Everyone loves her.
  4. She dies and everyone laments her death.

The standard meaning of “Mary Sue,” used as a criticism of a character in a work of fiction, is to impute that a character is an authorial stand-in for the purpose of wish fulfillment. And while the original Mary Sue is an author stand-in, the story is actually more of a Greek tragedy. Mary Sue is initially blessed by the gods, but when she tries to climb Mount Olympus she is cast down and destroyed.

Among the criticisms heaped on the Mary Sue character is that her excellence is always unearned. She appears out of nowhere in fully formed perfection and everyone loves her just for being her. This is generally derided as being horribly unrealistic.

And it is.

For men.

It should not be glossed over that Mary Sue stories are written by teenage girls about themselves. If Mary Sue is realistic to teenage girls, it would be utterly unsurprising that she would be unrealistic to adult men. So, is she realistic to teenage girls?

And here I think that the answer is: yes, actually.

The onset of puberty in a girl does come from nowhere, and transforms her into something beautiful and wonderful, that is, an adult woman capable of bearing children. And everyone loves her, at least if by “everyone” you mean males, and by “love” you mean “is interested in.”

A newly adult female is bursting with potential and, as such, everyone is (suddenly) very interested in her and what she does with this potential. It’s not always as benign and comfortable as in the Mary Sue story, of course, but life rarely is as comfortable as fiction.

And if we look further at the inspiration for Mary Sue, we also see why she had to die. Potential cannot last forever in this world. If Mary Sue does not choose a mate, she will eventually hit menopause and cease to have any potential (in the relevant sense; she might still have potential in a thousand other ways, of course, but an allegory only ever describes one aspect of life). If she does choose a mate, she will have children and her potential will be reduced by turning into actuality. But actuality is, in a fallen world, never as interesting as potential; Mary Sue with children does not excite the universal interest which Mary Sue without children did. (In a healthy society she excites respect, instead, but that’s a topic for another day.)

And so it must be that, not long after Mary Sue is blessed by the gods, she is cast down by them, too; Mary Sue cannot remain universally loved for long.

The story of Mary Sue leaves off at the most important part, since after all it was a parody, but it is worth mentioning. That the first flower of youth cannot last is something all people must come to terms with. Some will forswear actuality for some other actuality, as in the case of nuns, who cover themselves to hide their potential so people may forget it. Others will give up their potential by trading it for actuality; an actuality which is flawed because we live in a flawed world, but still a real actuality that’s better than the nothingness of pure potentiality.

They both require faith, but all good things require faith. Trying to remain in potentiality is trying to eat one’s cake and still have it afterwards. It promises happiness that it will never deliver.

I think it’s well to remember that the story of Mary Sue is only a bad story if it’s the story of a man, or an adult woman. Though that remains true even if a young woman is cast in the part.

Real Lawyer Reacts to My Cousin Vinny—And Likes It!

I ran across a really curious video on YouTube where a (putatively real) lawyer examined the movie My Cousin Vinny and talked about how accurate it was. To my great surprise he said that—allowing for parts that were obviously just comedic—it was actually very well done and parts of it could be used for teaching lawyers!

If you’ve never seen it, by the way, I highly recommend the movie My Cousin Vinny. It’s a ton of fun and has a lot of quotable lines.

Rearranging Deck Chairs on the Titanic

The common phrase, that something is like “rearranging the deck chairs on the Titanic” is often taken to mean “putting one’s effort where it won’t do good”, but it has another, slightly more subtle meaning: futility. (I’m writing this post because a friend was so used to the first meaning he hadn’t thought about the second, and what one man has done, another might do.)

Once the Titanic has been hit by the iceberg, there are two reasons why it doesn’t matter how the deck chairs are arranged:

  1. No one is going to sit on them while the boat is sinking.
  2. Once the boat sinks, their arrangement will be destroyed by the water washing the deck chairs away from the deck.

Rearranging the deck chairs on the Titanic, therefore, suggests an activity is not only secondary to one’s primary concern but moreover one doomed to have no effect whatever.

You can see this by contrasting the Titanic, which sank, to a ship lost at sea where the rations have run out and the crew is starving. Rearranging the deck chairs will not give them food, but they might still take comfort sitting on them in a better arrangement, and whoever eventually finds the empty ship could take advantage of a particularly well thought out arrangement of the deck chairs which has remained after its first crew can no longer use them. (In theory, though admittedly not likely in practice.)

Nerf Gun as Cognitive Behavioral Therapy

Here’s an interesting post about some creative cognitive behavioral therapy. It’s not that long but out of courtesy I don’t want to quote the whole thing. Here’s the key setup:

i say, are you gonna shoot me with a nerf gun in this professional setting.
he happily informs me that that’s really up to me, isn’t it. and sits back down. and gestures, like, go ahead, what were you saying?
and i squint suspiciously and start back up about how i’m having too much anxiety to leave the house to run errands, like it was a miracle to even get here, like i’ve forgone getting groceries for the past week and that’s so stupid, what a stupid issue, i’m an idiot, how could i–
a foam dart hits me in the leg.


There’s a curious issue brought up in the specifics of the example linked. Self-criticism is a very important ability. People who can’t diagnose their own faults can’t improve, and worse, tend to blame everyone but themselves, which has a strong alienating effect. Yet, in the example in the link (and partially quoted above), what’s being done is not really self-criticism. It looks like it because the language is negative, but it’s, to use modern cant, disempowering. That is, it makes the one being criticized helpless.

It does this by attributing the failing, not to the will, but to the intellect. That is, it places the defect in the origin, not in the execution. By placing the defect in the origin, nothing can be done about it. A bad tree can’t produce good fruit, or perhaps more aptly, you can’t get blood from a stone.

The problem, in short, is that every time the person complains about himself, he’s giving up. He’s saying, not how he can do better, but that he can’t do better. And this is, indeed, the exact opposite of doing better. How he rephrases his complaint illustrates the point nicely:

i say, slowly, it’s– not a stupid issue, i’m not stupid, but it’s frustrating me and i don’t want it to be a problem i’m having.

This reframes it from despair to frustration, i.e. from having given up to facing one’s problems. Giving up may look like facing problems, but in reality it’s the exact opposite. It’s burying one’s head in the sand so that one doesn’t have to face one’s problems. It is the false hope that one can fix problems without facing them, while pretending to face them.

You see this a lot with problems; non-solutions love to pretend that they’re actually solutions.

This is related to why my favorite of the baptismal vows is, “Do you reject Satan? And all his empty promises?”

Supporting Indie Writers

Over at Amatopia, Alex writes about an interesting problem: supporting indie writers. Specifically, from the perspective of supporting indie writers who are on the conservative side of the culture war, relating to the goal of trying to rehabilitate our culture into a healthy one.

It starts with a comment which Alex found on another blog, which I think encapsulates some of the problems inherent in the issue:

Given that I have limited means, and thus cannot simply give donations but can buy only for my personal consumption, what exactly is my OBLIGATION here? The simple fact is that I have never liked the culture I live in, and the older I get the stronger that is. I know it speaks ill of my character, but I find myself ever more drawn to Evelyn Waugh as a kindred spirit. I started rejecting Boomer culture in the 60s, and I’ve seen nothing to tempt me to change. So, why should I, when I’d rather read older books, have to read the stuff they come out with now? This is especially so given that a lot of the energy seems to be in sci-fi, which is not my thing, and from what I have tried, I find mysteries are just worse.

Alex essentially proposes two solutions, though I’m paraphrasing heavily:

  1. Doing something is better than nothing, and it may be worth occasionally making the sacrifice of reading something which is not really to your taste in order to help fight the good fight.
  2. If the books aren’t really your cup of tea, they might be the cup of tea of someone you know, and social media makes it really easy to tell them about it these days.

I agree wholeheartedly with Alex on the second point, and do my best to let people know about the works of other authors who think that good is better than evil. There is a proviso here, though: a person can only do so much of this before his passing on the word about books becomes like the advertisements in a magazine—annoying and ignored—but all things done by men have their limitations. What each of us is given to do is finite and often less than we might like. To quote the Venerable Pierre Toussaint, we must take it as God sends it.

On the former point, I must, however, give a qualified disagreement. For two reasons: one particular to the man and one more general.

The particular reason is that the man who left the comment is almost certainly in his sixties and quite possibly in his seventies. It’s easy to forget, but as I write this in 2019, 1960 was 59 years ago—and presumably he didn’t start disagreeing with boomer culture while still in diapers. Fighting the culture war, like fighting all wars, is really the province of younger men like, for the time being, Alex and me. The commenter has, presumably, put in many decades fighting the culture war when he was a younger man, and I think is entitled, at long last, to some rest. Fighting for too long is bad for a man’s soul. It may well be time for him to devote most of his attention to those immediately around him, and save his strength for them.

The more general objection is a highly practical one. In my experience as an indie author, people who don’t normally read the sort of book you wrote don’t really do you any good by buying it. (This will, of course, not be true of famous people with large audiences; Oprah finding your book boring and not worth reading would probably still be good for selling 1,000 copies.)

The problem is two-fold. First, they have all the wrong graphs in big-data sites like Amazon, either because they’re not normally readers or because they are, but not of the sort of books that you write. In the first case, their lack of reading means that the Amazon algorithms have no one to recommend your book to. In the second case, your book just looks like noise to the algorithm.

And it doesn’t work to say “but if lots of people did this” because they won’t have the same reading graphs, and so you’ll just get lots of noise. If you don’t believe me, take a look at “the Castalia ghetto.” That is, go to one of the Castalia House books on Amazon and start looking at the recommended books. They’re all Castalia House books. But good luck finding links to Castalia books from other, non-Castalia books (books with sales ranks in the hundreds of thousands or worse don’t count, since basically that means that they don’t sell). You basically won’t find them, and the reason is that outside of books published by Castalia House, the readers of Castalia House books don’t really have tastes in common. So their dedication to Castalia House is good for Castalia House, because there are clearly a lot of these people (relative to the number of people who bought my self-published books, say), but it doesn’t attract attention past the money that they give.

Amazon is the clearest case of this, but you get similar things on less important social platforms, too. People acting atypically simply don’t produce results with algorithms that work off of statistical trends.

And this is a reflection of how human social interaction works. A few oddballs simply don’t have enough influence to move anything.

Unless they’re rich.

This is where things are highly asymmetric between conservative and liberal. Rich degenerates have an enormous motivation to spend their money trying to wreck the culture, but rich decent people have a thousand worthy causes to spend their money on. Culture is important, but so is supporting the Church, so is supporting orphans, so is feeding the hungry, and on and on.

And, come to think of it, there is another problem with the scheme of supporting “conservatives”. There are a lot of different things that people want to conserve, and many of them aren’t worth conserving. As is sometimes noted, a great many conservatives are just liberals from 30 years ago. But it’s not really that much better when they’re liberals from 300 years ago.

The destruction of American culture, so widely noted, is not a recent thing. In truth, it’s the necessary outcome of the protestant reformation.

There is an asterisk I should put here, which is that there are really two types of protestants. One, which makes up probably the majority of individuals, is protestant because of historical reasons. Historically, most protestants were made not by protesting anything but by their prince seizing on a great excuse for stealing Church lands. (One of the great problems of the middle ages was that the Church owned about a third of Europe and could prevent princes from going to war whenever they wanted by permitting serfs to live on Church lands. This check on their rapacity and eagerness for war was not tolerable to a great many European princes, especially German ones. And then there were the princes who couldn’t abide restrictions on divorce…) These are people doing their best to follow the teachings of Christ, bereft of sacred tradition. My heart goes out to these people for the predicament that they’re in, and many of them are quite admirable.

The other kind of protestant wants to admire his own reflection on the glossy cover of his bible. This was the sort of protestant Martin Luther was; the origin of the protestant reformation was, basically, the cry, “nolo servire!”

“I will not serve!”

You might have heard that before from one of the characters in Dante.

This attempt to turn Christianity from a religion in which reason plays a key role—Christ is the word, that is, the logos, of God, not the feelings of God—into a religion of emotion is doomed to end in the Modern world. Or more properly in the Post-Modern world; Nietzsche is the inevitable outcome of Kant. It’s really not a coincidence that neither of the great fathers of the protestant reformation—Martin Luther and John Calvin—believed in free will. They disbelieved in it for different reasons, but both for bad reasons, and the results are equally bad. Lady Gaga’s song Born This Way is, fundamentally, a protestant song.

So it doesn’t really do any good to talk about supporting conservatives without talking about what they want to conserve. Do they want to return to the living vine, or do they want to go back to the moment after the branch was severed from the vine but before the sap still in it gave out and it began to wither? It makes quite a difference.

And when you get specific enough about this, I think what you will find is that (fundamentally) protestant authors will find support among (fundamentally) protestant readers, Catholic and Orthodox authors among Catholic and Orthodox readers (Catholics and Orthodox are both orthodox, just not in communion), and so on. Not because we’re bigots, but because these identities describe our fundamental goals and beliefs about the world.

And I think that what Alex will find is that if you confine the idea of helping authors because they’re on the right side of the culture war to these groups who actually have consonant goals, people will be far more willing to support authors because of that. Or in other words, posting on a Greek Orthodox forum about supporting Greek Orthodox authors isn’t going to be met with the same sort of reticence. I know from experience that it’s not in Catholic fora.

This is a disappointing conclusion because it necessarily means shrinking one’s support base. The problem is that a “big tent” only works in politics where all of the goals are short-term, imprecise, and desirable for many different reasons. One can be in favor of free speech because one is a libertine, or because one does not trust men with power, or because one simply doesn’t have the power to be the censor. When it comes to votes for particular laws, the motivation doesn’t matter.

You can’t really build a big tent in the culture war because the culture war is long term, precise, and about principles rather than specific actions. There’s only one reason to consider divorce a sin—because one holds it to be sinful. There’s only one reason to consider charity a virtue—because one holds it to be virtuous. When it comes to ideals, it’s not enough to do the right thing—one must also do it for the right reason. A book which celebrates a man who, in the ancient tradition, is a hero only because he wants glory, is not a book I want my children to read. At the end of the day, it’s not really better than a book about a man who isn’t a hero because he prefers heroin.

They’re just two different ways to go to hell.

William Gillette: The First To Play Sherlock Holmes

Thanks to frequent commenter Mary, I recently learned about the existence of William Gillette, the first man to play Sherlock Holmes, mostly on the stage but also in a silent film.

Born in 1853, in Connecticut, William Gillette was a stage director, writer, and actor in America. In 1897, his play, Secret Service, was sufficiently successful in America that his producer took it to England. There, a Sherlock Holmes play written by Conan Doyle—who wrote it because he needed money after killing Holmes off but before he brought him back—was not having success at getting produced. It happened to come to Gillette's producer, who recommended Gillette for extensive rewrites. The deal was made and Gillette began the rewrites.

The story of when Gillette and Conan Doyle met for the first time is quite interesting:

Conan Doyle’s shock was understandable… when the train carrying Gillette came to a halt and Sherlock Holmes himself stepped onto the platform instead of the actor, complete with deerstalker cap and gray ulster. Sitting in his landau, Conan Doyle contemplated the apparition with open-mouthed awe until the actor whipped out a magnifying lens, examined Doyle’s face closely, and declared (precisely as Holmes himself might have done), “Unquestionably an author!” Conan Doyle broke into a hearty laugh and the partnership was sealed with the mirth and hospitality of a weekend at Undershaw. The two men became lifelong friends.

(Undershaw was the name of Conan Doyle’s home.)

The play which Gillette wrote, or rather, rewrote, was enormously successful, both in America and in England. In total, Gillette performed it approximately 1,300 times, while it was put on under license—and not infrequently, without license—by actors in other countries.

Perhaps most interesting is the effect which Gillette had on the image of Sherlock Holmes. It was Gillette who introduced the curved briar pipe—prior to Gillette, the famous illustrations in the Strand magazine had depicted Holmes with a straight pipe. He also performed in the deerstalker hat and ulster coat, which seem likely to have had a strong impact on depictions of Holmes in those particular clothes. His use of a magnifying glass as a stage prop also likely helped to cement the iconography of the magnifying glass with the detective.

Also curious is that Gillette, as a writer, may have had an influence on the classic phrase, never to be found in the actual Holmes stories, "Elementary, my dear Watson." Gillette's Holmes never said the exact phrase, but he did say, "Oh, this is elementary, my dear fellow." This line, which would have been well known by the late 1920s and early 1930s, may well have led to the final version, which appeared in one of the first Sherlock Holmes talkies, starring Clive Brook. (At least according to Wikipedia; I haven't watched any of the Clive Brook Holmes movies, though apparently at least parts of them are available on YouTube. A task for another time, perhaps. The first few minutes of part 1 of 6 weren't encouraging.)

Mystery Commandment #10: Disguises

In this series, I examine the Mystery Decalogue of Fr. Ronald Knox.

The tenth commandment of Detective fiction is:

Twin brothers, and doubles generally, must not appear unless we have been duly prepared for them.

In his 1939 commentary on his decalogue, Fr. Knox said:

The dodge is too easy, and the supposition too improbable. I would add as a rider, that no criminal should be credited with exceptional powers of disguise unless we have had fair warning that he or she was accustomed to making up for the stage. How admirably is this indicated, for example, in Trent’s Last Case!

A few of these commandments have, over the years, become less applicable simply because people have developed the good sense to not violate them. I think that this commandment may be the one for which that is most the case. I can’t think of a story I’ve read—good or bad—in which twins and other doubles appear.

Well, that’s not quite true. There’s an episode of Scooby Doo where a woman was being framed as a witch by her (unknown) twin sister. And there was a Poirot where a murderer established her alibi by having a famous impersonator pretend to be her at a dinner party—but that certainly follows the commandment since the main thing we know about the impersonator is that she was extraordinarily skilled at pretending to be other people. But those are the only two examples which come to mind.

I should note that I'm thinking about really skillful disguises, where a person can interact with others, in person, for quite some time, and be taken to be someone else who wasn't really there. Minor disguise, by contrast, is a fairly common device in mysteries. It's a time-honored tradition to have the murderer pretend to be the victim so as to fake the time of death to a later time for which the murderer has an alibi. So much so that these days, if a person overhears a conversation the victim was having through a closed door, or sees the victim at a great distance with his face obscured but recognizable by the bright red scarf he always wore, one's first thought is that it was the murderer pretending to be the victim. In such a case, woe to anyone who has an alibi for the time the murder is supposed to have happened.

With regard to twins, Fr. Knox’s commentary is interesting: “The dodge is too easy, and the supposition too improbable.” These are two different objections, and not particularly related to each other, though I think the conjunction is important here.

The first objection—that the dodge is too easy—is interesting because it is in a sense the essence of a twist that it is something which explains a lot once you know it. But this is not an intellectual twist; it is, rather, a natural twist. It is an oddity of nature that there should be such things as identical twins. And it is the essence of a mystery that the thing unraveled should have been twisted by the hand of man, not of God. It is legitimate to try to understand the mysteries of God, but it is a very different book in which that is done.

The second objection—the supposition is too improbable—is also interesting because it is the heart and soul of a mystery that the obvious solution is not the correct solution. And twins are not that uncommon. According to the statistics I found when googling, about 1 in 250 births is of identical twins. It’s possible that it’s a little less common in England, but this is not so uncommon that no one would think of it. It’s not nearly as esoteric as, say, a poison which hasn’t been discovered by science yet.

I think it’s the combination of being uncommon and explaining everything which makes it unfair. It’s not the sort of thing so likely that anyone in the story will do anything to rule it out, and it certainly will explain away just about anything inconvenient in the story. As such it’s a perennial possibility that the reader has no good way to rule out. That being the case, it should be ruled out as a matter of course and positive hints as to its possibility included if one is going to go down that route.

Mystery Commandment #9: The Watson

In this series, I examine the Mystery Decalogue of Fr. Ronald Knox.

The ninth commandment of Detective fiction is:

The stupid friend of the detective, the Watson, must not conceal any thoughts which pass through his mind; his intelligence must be slightly, but very slightly, below that of the average reader.

In his 1939 commentary on his decalogue, Fr. Knox said:

This is a rule of perfection; it is not of the esse of the detective story to have a Watson at all. But if he does exist, he exists for the purpose of letting the reader have a sparring partner, as it were, against whom he can pit his brains. ‘I may have been a fool,’ he says to himself as he puts the book down, ‘but at least I wasn’t such a doddering fool as poor old Watson.’

This is an interesting commandment because, as Fr. Knox notes in his commentary, a Watson is entirely optional. Plenty of good detective stories have no Watson. In fact, thinking over my favorite detective series, the only one which has a Watson is Sherlock Holmes—that is, the only Watson in my favorite detective stories is the original.

Occasionally Poirot had Captain Hastings, but he's much rarer in the actual Poirot stories than he is in the David Suchet TV series. In the Lord Peter Wimsey stories Charles Parker was more of a co-detective than a Watson and Harriet Vane certainly was a co-detective. Hugh Beringar was a co-detective with Cadfael. Jessica Fletcher usually didn't have anyone investigating with her and the gang in Scooby Doo was a team.

Interestingly, I've also read all but one of Fr. Knox's Miles Bredon mysteries and there is no Watson character in those, either. His wife is sometimes his foil, but she is generally a co-detective, using skills very complementary to his.

As something of an aside, but also somewhat on point, police characters who occupy an in-between state as a sort-of Watson and a sort-of co-detective don't seem to last. I'm basing this on an admittedly small sample size, but in the Lord Peter Wimsey stories Charles Parker was a major character in the first two books, a fairly prominent character in the third, then progressively dwindled in significance until he became just a minor footnote in the last few (he married Lord Peter's sister before his slide into irrelevance).

In the Miles Bredon stories, Inspector Leyland is a major character in the first two novels, then a mostly ancillary character in the third, and absent entirely from the fourth and fifth novels.

By contrast, the under-sheriff (and later sheriff) Hugh Beringar is absent from only a few Cadfael stories—The Summer of the Danes and Brother Cadfael's Penance come to mind—which are, admittedly, later on, but The Holy Thief is between them and Hugh is a significant character in it.

It's interesting to contrast the character of Hugh Beringar with Charles Parker and Inspector Leyland because it gets somewhat to the problems with a partial-Watson. By contrast to the other two, Hugh Beringar was intelligent and quick-witted. A scene which particularly stands out in my memory was from Saint Peter's Fair, where, after telling Cadfael that he too had deduced something Cadfael did, he said, "I may not pick up on all the subtleties, but since knowing you I've had to keep my wits about me" (or words to that effect). Since he was intelligent, he was allowed to have a personality.

Charles Parker and Inspector Leyland, by contrast, partially serving the function of a Watson, couldn't really have much in the way of personality. An everyman simply can't be very distinctive or he ceases to be an everyman. It's not, of course, strictly true that Charles Parker had no personality—we did learn that he read theology in his off hours to relax from his official duties. But we never found out that he learned anything from it; this pastime never informed anything he said.

Leyland didn’t even have any hobbies that I can recall reading about.

What makes these police inspectors different from Watson was, I think, the nature of their attachment to the detective—happenstance. Watson, by contrast, was attached to Holmes by friendship. Oh, granted, Charles Parker was in theory a friend of Wimsey, but we never saw any of it and Wimsey wasn’t really the sort of man to have friends. Holmes and Watson, by contrast, really loved each other and were comrades. Watson accompanied Holmes purely because he was devoted to him and Holmes brought him because Watson was his friend.

Cadfael and Hugh form an interesting comparison to both; Hugh was an officer of the law but also a close friend of Cadfael. In fact, Hugh and Cadfael were close enough that Cadfael was godfather to Hugh's first son. Even when Hugh had no part in an investigation he might show up to spend time with Cadfael merely for the pleasure of his company. And therein we see what's necessary for a police friend to stay a character—his office must be his secondary connection to the detective, even if it was his original connection.

It is not viable, long-term, to have the same police inspector working with the same detective on every case. (Though I will grant that Monk made it work to some degree, since Monk was a consultant. Ditto for Shawn Spencer in Psych. That said, police consultants come with their own problems since they need to operate under police rules, and there's an inherent tension with the police constantly hiring someone to do their job for them. Psych got around this by being a comedy and playing this tension for laughs.)

So coming back to the Watson in a story—I think that Fr. Knox is mostly correct, but a true Watson is the exception rather than the rule. It is common for detectives to not act entirely on their own—it is not good for man to be alone—but co-detectives are far more common and I think generally a better choice. And co-detectives should be intelligent; they are characters in their own right, but they are also somewhat of a stand-in for the reader helping the detective, and who would prefer to think of himself as incompetent?

Either way, it works much better for the detective and his associates to have a genuine affection for each other.

America’s Sweethearts

One of my favorite movies to watch when I'm in the mood for something comfortable is a mostly forgotten film starring John Cusack, Catherine Zeta-Jones, Julia Roberts, and Billy Crystal called America's Sweethearts.

The premise is that Eddie Thomas (Cusack) and Gwen Harrison (Zeta-Jones) were an incredibly popular Hollywood couple until Gwen cheated on Eddie with Hector, another actor in a movie they were in. That movie, Time Over Time, during the filming of which those events happened, is about to be released, but the eccentric director, Hal Weidmann, won't show anyone the movie until the press junket. So the publicist for the film (Crystal) must put together the press junket with the two stars of the movie not on speaking terms and no film to show the press. Hilarity ensues.

And hilarity does ensue; it's a very funny movie. It pokes a lot of fun at Hollywood and the selfishness and complete dishonesty that characterize the movie industry. Which brings me to the modern difficulty in watching movies: knowing how awful the people who make movies are.

I think this may be best summarized by sci-fi author Rob Kroese, a few years ago, in response to some idiocy out of Hollywood in the wake of some disaster or other:

Nice to see celebrities taking time off from raping each other to condemn prayer.

(As a side note, there seems to be a law of human behavior that a person's private virtue is inversely proportional to the number of public statements he makes condemning vice in others. Or, more briefly: virtue signaling is often camouflage.)

So the question comes up, unavoidably: does one go on watching movies in spite of their deeply flawed origins?

I think that the answer is yes, but it’s not a question which can simply be dismissed; people who simply say “who cares?” about this are just people who don’t know enough—they’ve never looked in the kitchen to see how the sausage is made.

(I should note that I'm talking about things which do not enrich Hollywood further, or do so very minimally. I already own the DVD of America's Sweethearts, so watching it again puts no more money in the hands of Hollywood. And even buying DVDs of older movies does little to support the current degeneracy of Hollywood, though strictly speaking more than zero. But for older movies, much of the money goes to people who are no longer working in the industry, or to their descendants because they're dead. Life is more complicated when you're talking about watching a new movie in a theater.)

There are two reasons why the answer is yes—that we should still enjoy the movies made by the wretches of Hollywood. The first is practical (and probably more accessible), the second is philosophical (and more conclusive).

The practical reason is that this is a fallen world and everything is made by wretches. Some are worse than others, but even the best men will inevitably have their work tainted by their imperfections. Worse still, from a practical perspective, many men (rightly) keep their vices secret (so as not to encourage others in vice), and so one will not know what vices secretly infect their work. When it comes to the near-devil-worshippers of Hollywood, one is at least forewarned (and thus fore-armed) against their messages of lust, sloth, and pride. This does not remove the danger, and certainly doesn’t make their work preferable to people who aren’t consciously trying to promote evils, but it does put it in the realm of what can be done safely—or at least as safely as anything can be done in this fallen world.

The philosophical reason is more complicated, but at its heart is the philosophical insight that evil is a negative, not a positive, thing. Evil is the (partial) absence of being—it is a thing being only partially itself. This partial being warps and twists things, but it is impossible to be purely evil—a thing which is pure evil would completely not exist. There’s a sense in which Nothing (with a capital N) is pure evil, but that’s not really different from saying that nothing (without the capital) is pure evil.

This means that in all things which exist, there is good. Evil does not, properly speaking, taint the good in a thing. What it does do is disguise the good. This is not, however, an insurmountable problem. A tainted thing cannot be safely consumed, since the taint has a positive existence—you can’t drink a poisoned glass of wine and drink only the wine but not the poison. But a disguise can be seen through.

Seeing through disguised good is a skill and thus a person can be good or bad at it; this is highly contextual to the person, the good, and the disguise. What one person may watch safely another may be misled by; it requires wisdom to tell the difference.

And there is no substitute for wisdom.