Throwing Is Not Automatic

I’m a fan of Tom Naughton, and his movie Fathead helped me out a lot. But recently he had something of a headscratcher of a blog post. Mostly he just mistakes coaching cues that happen to work for him for the One True Way to swing a golf club—which is a very understandable mistake when in the grip of the euphoria of finally figuring out a physical skill one has been working on for years—but there was this really odd bit that I thought worthy of commenting on:

If you ask someone to throw a rock or a spear or a frisbee towards a target, he’ll always do the same thing, without fail: take the arm back, cock the wrist, plant the lead foot, rotate the hips, sling the arm toward the target, then release. Ask him exactly when he cocked his wrist, or planted his foot, or turned his hips, he’ll have no idea – but he’ll do it correctly every time. That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA. We don’t have to learn it. Our bodies and brains already know it.

The basic problem is: throwing is not automatic. It’s learned.

I can say this with certainty because I’ve spent time, recently, trying to teach children to throw a frisbee. They do not, in fact, instinctively do it correctly. Humans have very few actual instincts, at least when it comes to voluntary activities. We instinctively breathe, and we will instinctively withdraw our hand from pain, but that’s about it. Oh, and we can instinctively nurse from our mother, though even there we need to learn better technique than we come equipped with pretty quickly or Mom will not be happy.

Now, what we do, in fact, come with naturally is the predisposition to learn activities like throwing. This is like walking: we aren’t born knowing how to walk, but we are born with a predisposition to learn to walk. We’re good at learning how to walk and we want to do the sorts of things that make us learn how to walk. Language is the same way—we’re not born speaking or understanding language, but we are predisposed to learn it.

Another odd thing is the “he’ll do it correctly every time”—no he won’t. Even people who know how to throw things pretty well occasionally just screw up and do it wrong. When teaching my boys to throw a frisbee, occasionally I just make a garbage throw. It’s not just when my conscious thoughts get in the way of my muscle memory—muscle memory needs to be correctly activated, and not paying sufficient attention is a great way to do that wrong.

Finally, the evolutionary biology part is just odd: “That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA.”

There’s an element of truth to this, in that we can find evidence of spear use in humans going back hundreds of thousands of years. The problem is that the kinematic sequence to throw a spear and the kinematic sequence to hit a golf ball are not the same thing at all.

Here’s a golf swing:

By contrast, here’s someone throwing a javelin:

And just for fun, here are some Masai warriors throwing spears:

Something you’ll notice about the Masai, who throw actual weapons meant to kill, is that the thing is heavy, and they throw it very close. Alignment is incredibly important, since a weak throw that hits point-on is vastly more effective than a strong throw that hits side-on. The other thing is that the ability to actually throw quickly without a big wind-up matters, since they’re practicing to hit moving targets. They don’t have time for a huge wind-up. Also, they tend to face their target, rather than be at a 90 degree angle to it—when your target has teeth and claws, you need to be able to protect yourself if the target starts coming for you.

Anyway, if you look at these three activities, they’re just very kinematically different. Being good at one of those things will not transfer to being good at the others. The Masai warrior needs accuracy, timing, and power on a heavy projectile. The javelin thrower needs to whip his arm over his body as fast as possible, from a sprint. His arm is straight and his shoulder hyper-extended. The golfer needs to whip the head of a long stick as fast as possible, below his body, from a standing position. His arms are bent and his elbows are kept in to generate more force than arm-velocity, since the greater force translates to greater velocity on the end of the stick. The golf swing probably has more in common with low sword-strikes using a two-handed sword than it does with swinging a spear.

Anyway, I don’t have a major point. I just think it’s interesting what we will tell ourselves in order to try to figure out motion patterns.

On The Seventh Day God Rested

On the seventh day, God rested.

This is an interesting thing to contemplate since, as an American Northerner, I don’t really understand the concept of rest.

Granted, every now and again I take breaks, and every night I sleep. The thing is, I can’t help but think of these as weaknesses, as concessions to a fallen world. Chesterton described this attitude toward work and rest very well in Utopia of Usurers, though he was talking about employers and not individuals:

The special emblematic Employer of to-day, especially the Model Employer (who is the worst sort) has in his starved and evil heart a sincere hatred of holidays. I do not mean that he necessarily wants all his workmen to work until they drop; that only occurs when he happens to be stupid as well as wicked. I do not mean to say that he is necessarily unwilling to grant what he would call “decent hours of labour.” He may treat men like dirt; but if you want to make money, even out of dirt, you must let it lie fallow by some rotation of rest. He may treat men as dogs, but unless he is a lunatic he will for certain periods let sleeping dogs lie.

But humane and reasonable hours for labour have nothing whatever to do with the idea of holidays. It is not even a question of ten hours day and eight-hours day; it is not a question of cutting down leisure to the space necessary for food, sleep and exercise. If the modern employer came to the conclusion, for some reason or other, that he could get most out of his men by working them hard for only two hours a day, his whole mental attitude would still be foreign and hostile to holidays. For his whole mental attitude is that the passive time and the active time are alike useful for him and his business. All is, indeed, grist that comes to his mill, including the millers. His slaves still serve him in unconsciousness, as dogs still hunt in slumber. His grist is ground not only by the sounding wheels of iron, but by the soundless wheel of blood and brain. His sacks are still filling silently when the doors are shut on the streets and the sound of the grinding is low.

Again, Chesterton is talking about employers, but this also encompasses an American attitude toward the self which need have nothing to do with money. Chesterton goes on:

Now a holiday has no connection with using a man either by beating or feeding him. When you give a man a holiday you give him back his body and soul. It is quite possible you may be doing him an injury (though he seldom thinks so), but that does not affect the question for those to whom a holiday is holy. Immortality is the great holiday; and a holiday, like the immortality in the old theologies, is a double-edged privilege. But wherever it is genuine it is simply the restoration and completion of the man. If people ever looked at the printed word under their eye, the word “recreation” would be like the word “resurrection,” the blast of a trumpet.

And here we come back to where I started—that on the seventh day, God rested. We are not to suppose, of course, that God was tired. Nor are we even to suppose that God stopped creating creation—for if he were to do that, there would not be another moment, and creation would be at an end. Creation has no independent existence that could go on without God.

So what are we to make of God’s resting on the seventh day, for it must be very unlike human rest?

One thing I’ve heard is that the ancient Jewish idea of rest is a much more active one than our modern concept of falling down in exhaustion. It involves, so I’ve heard, the contemplation of what was done. Contemplation involves the enjoyment of what is done. What we seem to have is a more extended version of “and God looked on all that he had made and saw that it was good”.

There is another aspect, I think, too, which is that God’s creative action can be characterized into two types, according to our human ability to understand it—change and maintenance. In the first six days we have change, as human beings easily understand it. New forms of being arise that are different enough that we can have words to describe them. We can, in general, so reliably tell the difference between a fish and a bush that we give them different names. But we cannot so reliably tell the difference between a fish at noon and that same fish ten minutes later, even though it has changed; we just call them both “fish” and let that suffice because we cannot do better. Thus God’s rest can also be seen as the completion of the large changes, which we easily notice, and the transition to the smaller changes, which we have a harder time noticing or describing.

I’m thinking about this because I recently sent the manuscript of Wedding Flowers Will Do for a Funeral off to the publisher. It’s not done, because there will be edits from the editor, but for the moment there is nothing for me to do on it. I finally have time—if still very limited time owing to having three young children—to do other projects, but I’m having a hard time turning to them.

My suspicion is that I need to spend some time resting, which is what put me in mind of this.

Wedding Flowers Is Off to the Editor

For anyone who is interested in my novels: a few days ago I sent the manuscript of Wedding Flowers Will Do For a Funeral (the second chronicle of Brother Thomas) off to Silver Empire publishing (they published the first Chronicle of Brother Thomas). Next come edits, and if all goes well it will be published in the first half of 2020. It’s been a long time coming, and I’m really looking forward to finally having it published.

Why Do Moderns Write Morally Ambiguous Good Guys?

(Note: if you’re not familiar with Modern spelled with a capital ‘M’, please read Why Moderns Always Modernize Stories.)

When Moderns tell a heroic story—or more often a story which is supposed to be heroic—they almost invariably write morally ambiguous good guys. Probably the most common form of this is placing the moral ambiguity in the allies whom the protagonist trusts. It turns out that they did horrible things in the past, they’ve been lying to the protagonist (often by omission), and their motives are selfish now.

Typically this is revealed in an unresolved battle partway through the story, where the main villain has a chance to talk with the protagonist, and tells him about the awful things that the protagonist’s allies did, or are trying to do. Then the battle ends, and the protagonist confronts his allies with the allegations.

At this point two things can happen, but almost invariably the path taken is that the ally admits it, the hero gets angry and won’t let the ally explain, then eventually the ally gets a chance to explain (or someone else explains for him), and the protagonist concludes that the ally was justified.

In general this is deeply unsatisfying. So, why do Moderns do it so much?

It has its root in the modern predicament, of course. As you will recall, in the face of radical doubt, the only certainty left is will. To the Modern, therefore, good is that which is an extension of the will, and evil is the will being restricted. It’s not that he wants this; it’s that in his cramped philosophy, nothing else is possible. In general, Moderns tend to believe it but try hard to pretend that it’s not the case. Admitting it tends to make one go mad and grow one’s mustache very long:

(If you don’t recognize him, that’s Friedrich Nietzsche, who lamented the death of God—a poetic way of saying that people had come to stop believing in God—as the greatest tragedy to befall humanity. However, he concluded that since it happened, we must pick up the pieces as best we may, and that without God to give us meaning, the best we could do is to try to take his place, that is, to use our will to create values. Trying to be happy in the face of how awful life without God is drove him mad. That’s probably why atheists since him have rarely been even half as honest about what atheism means.)

The problem with good being the will and evil being the will denied is that there’s no interesting story to tell within that framework.

A Christian can tell the story of a man knowing what good is and doing the very hard work of trying to be good in spite of temptation, and this is an interesting story, because temptation is hard to overcome and so it’s interesting to see someone do it.

A Modern cannot tell the story of a man wanting something then doing it; that’s just not interesting because it happens all the time. I want a drink of water, so I pick up my cup and drink water. That’s as much an extension of my will as is anything a hero might do on a quest. In fact, it may easily be more of an extension of my will, because I’m probably more thirsty (in the moment) than I care about who, exactly, rules the kingdom. Certainly I achieve the drink more perfectly as an extension of my will than I am likely to change who rules the kingdom, since I might (if I have a magical enough sword) pick the man, but I can’t pick what the man does. And what he does is an extension of his will, not mine. (This, btw, is why installing a democracy is so favored as a happy ending—it’s making the government a more direct extension of the will of the people.)

There’s actually a more technical problem which comes in because one can only will what is first perceived in the intellect. In truth, that encompasses nothing, since we do not fully know the consequence of any action in this world, but this is clearer the further into the future an action is and the more people it involves. As such, it is not really possible for the protagonist to really will a complex outcome like restoring the rightful king to the throne of the kingdom. Moderns don’t know this at a conscious level at all, but it is true and so does influence them a bit. Anyway, back to the main problem.

So what is the Modern to do, in order to tell an interesting story? He can’t tell an interesting story about doing good, since to him that’s just doing anything, and if he does, the reader is not the protagonist, so it doesn’t do him any good. Granted, the reader might possibly identify with the protagonist, but that’s really hard to pull off for large audiences. It requires the protagonist to have all but no characteristics. For whatever reason, this seems to be done successfully more often with female protagonists than with male protagonists, but it can never be done with complete success. The protagonist must have some response to a given stimulus, and this can’t be the same response that every reader will have.

The obvious solution, and for that reason the most common solution, is to tell the story of the protagonist not knowing what he wants. Once he knows what he wants, the only open question is whether he gets it or not, which is to say, is it a fantasy story or a tragedy? When he doesn’t know what he wants, the story can be anything, which means that there is something (potentially) interesting to the reader to find out.

Thus we have the twist, so predictable that I’m not sure it really counts as a twist, that the protagonist, who thought he knew what he wants—if you’re not sitting down for this, you may want to sit now so you don’t fall down from shock—finds out that maybe he doesn’t want what he thought he wanted!

That is, the good guys turn out to be morally ambiguous, and the hero has to figure out if he really wants to help them.

It’s not really that the Moderns think that there are no good guys. Well, OK, they do think that. Oddly, despite Modern philosophy only allowing good and evil to be imputed onto things by the projection of values, Moderns are also consequentialists, and consequentialists only see shades of grey. So, yes, Moderns think that there are no good guys.

But!

But.

Moderns are nothing if not inconsistent. It doesn’t take much talking to a Modern to note that he’s rigidly convinced that he’s a good guy. Heck, he’ll probably tell you that he’s a good person if you give him half a chance.

You’ll notice that in the formula I’ve described above, which we’re all far too familiar with, the protagonist never switches sides. Occasionally, if the show is badly written, he’ll give a speech in which he talks the two sides into compromising. If the show is particularly badly written, he will point out some way of compromising where both sides get what they want and no one has to give up anything that they care about, which neither side thought of because the writers think that the audience is dumb. However it goes, you almost never see the protagonist switching sides. (That’s not quite a universal, as you will occasionally see it in spy thrillers, but there are structural reasons for that which are specific to that genre.) Why is that?

Because the Modern believes that he’s the good guy.

So one can introduce moral ambiguity to make things interesting, but it does need to be resolved so that the Modern, who identifies with the protagonist, can end up as the good guy.

The problem, of course, is that the Modern is a consequentialist, so the resolution of the ambiguity almost never involves the ambiguity actually being resolved. The Modern thinks it suffices to make the consequences—or as often, curiously, the intended consequences—good, i.e. desirable to the protagonist. So this ends up ruining the story for those who believe in human nature and consequently natural law, but this really was an accident on the part of the Modern writing it. He was doing his best.

His best just wasn’t good enough.

Sequels Shouldn’t Reset To the Original

One of the great problems that writers have when writing sequels is that, if there was any character development in a story at all, its sequel begins with different characters, and therefore different character dynamics. If you tell a coming-of-age story, in the sequel you’ve got someone who already came of age, and now you have to tell a different sort of story. If you tell an analog to it, such as a main character learning to use his magical powers or his family’s magic sword or his pet dragon growing up or what-have-you, you’ve then got to start the next story with the main character being powerful, not weak.

One all-too-common solution to this problem is to reset the characters. The main character can lose his magic powers, or his pet dragon flies off, or his magic sword is stolen. This can be done somewhat successfully, in the sense of the change not being completely unrealistic, depending on the specifics, but I argue that in general, it should not be.

Before I get to that, I just want to elaborate on the depending-on-the-specifics part. It is fairly viable for a new king with a magic sword to lose the sword and have to go on a quest to get it back, though it’s better if he has to entrust it to a knight who will rule in his absence while he goes off to help some other kingdom. Probably the most workable version of this is the isekai story—a type of story, common in Japanese manga, light novels, and animation, where the main character is magically abducted to another world and needs to help there. Being abducted to another world works pretty well.

By contrast, it does not work to do any kind of reset in a coming-of-age story. It’s technically viable to have the character fall and hit his head and forget everything he learned, but that’s just stupid. Short of that, people don’t come of age and then just become people with no experience who’ve never learned any life lessons again.

So why should resets be avoided even when they work? There are two main reasons:

  1. It’s throwing out all of the achievements of the first story.
  2. It’s lazy writing.

The first is the most important reason. We hung in with a character through his trials and travails to see him learn and grow and achieve. If the author wipes this away, it takes away the fact that any of it happened. And there’s something worse: it’s Lucy pulling the football away.

If the author is willing to say, “just kidding” about character development the first time, why should we trust that the second round of character development was real this time? Granted, some people are gullible—there will be people who watch the sequel to The Least Jedi. I’m not saying that it’s not commercially viable. Only that it makes for bad writing.

Which brings me to point #2: it’s lazy writing to just undo the events of the original in order to just re-write it a second time. If one takes the lazy way out in the big picture, it sets one up to take the lazy way out in the details, too. Worse, since the second will be an echo of the first, everything about it will either be the first warmed over or merely a reversal of what happened the first time. Except that these reversals will have to work out to the same thing, since the whole reason for resetting everything is to be able to write the same story. Since it will not be its own story, it will take nearly a miracle to make the second story true to itself given that there will be some changes.

A very good example of not taking the lazy way out is the movie Terminator 2. Given that it’s a movie about a robot from the future which came back in time to stop another robot from the future from killing somebody, it’s a vastly better movie than it has any right to be. Anyway, there’s a very interesting bit in the director’s commentary about this. James Cameron pointed out that in most sequels, Sarah Connor would have gone back to being a waitress, just like she was in the first movie.

But in Terminator 2, she didn’t. James Cameron and the other writer asked themselves what a reasonable person would do if a soldier from the future came back and saved her from a killer robot from the future, and impregnated her with the future leader of the rebellion against the robots. And the answer was that she would make ties with gun runners, become a survivalist, and probably seem crazy.

We meet her doing pullups on her upturned bed in a psychiatric ward.

Terminator 2, despite having the same premise, is a very different movie from Terminator because Terminator 2 takes Terminator seriously. There are, granted, some problems because it is a time travel story and time travel stories intrinsically have plot holes. (Time travel is, fundamentally, self-contradictory.) That said, Terminator and Terminator 2 could easily be rewritten to be about killer robots from the Robot Planet where the robots have a prophecy of a human who will attack them. That aside, Terminator 2 is a remarkably consistent movie, both with itself and as a sequel.

Another good example, which perhaps illustrates the point even better, is Cars 2. The plot of Cars, if you haven’t seen it, is that a famous race car (Lightning McQueen) gets sentenced to community service for traffic violations in a run-down town on his way to a big race. There he learns personal responsibility, what matters in life, and falls in love. Then he goes on to almost win the big race, but sacrifices first place in order to help another car who got injured. (If you didn’t figure it out, the cars are alive in Cars.)

The plot of Cars 2 is that McQueen is now a champion race car and takes part in an international race. At the same time, his buddy from the first movie, Mater, is mistaken for a spy and joins a James Bond-style espionage team to find out why and how an international organization of evil (I can’t recall what they’re called; it’s something like C.H.A.O.S. from Get Smart or S.P.E.C.T.R.E. from James Bond) is sabotaging the race. McQueen is not perfect, but he is more mature and does value the things he learned to value in the first movie. The main friction comes from him relying on Mater and Mater letting him down.

As you can see, Cars 2 did not reset Cars, nor did it try to tell Cars over again. In fact, it was so much of a sequel to Cars, which was a coming-of-age movie, that it was a completely different sort of movie. This was a risk, and many of the adults who liked Cars did not like Cars 2, because it was so different. This is the risk to making sequels that honor the first story—they cannot be the first story over again, so they will not please everyone who liked the first story.

Now, Cars 2 is an interesting example because there was no need to make it a spy thriller. Terminator 2 honored the first movie and was still an action/adventure where a killer robot has come to, well, kill. But there was a practical reason why Cars 2 was in a different genre from its predecessor but Terminator 2 was not: most everyone knows how to grow up enough to not be a spoiled child, but pretty few people in Hollywood have any idea how to keep growing from a minimally functioning adult into a mature adult.

If one wants to tell a true sequel to a coming-of-age film, which mostly means a film in which somebody learns to take responsibility for himself, the sequel will be about him learning to take responsibility for others. In practice, this means either becoming a parent or a mentor.

This is a sort of story that Hollywood has absolutely no skill in telling.

If you look at movies about parents or mentors, they’re almost all about how the parent/mentor has to learn to stop trying to be a parent/mentor and just let the child/mentee be whatever he wants to be.

Granted, trying to turn another human being into one’s own vision, materialized, is being a bad parent and a bad mentor, but just letting them be themselves is equally bad parenting and mentoring. What you’re supposed to do as a parent or a mentor is to help the person to become themselves. That is, they need to become fully themselves. They must overcome their flaws and become the perfect human being which God made them to be. That’s a hard, difficult process for a person, which is why it takes so much skill to be a parent or a mentor.

There’s a lot of growth necessary to be a decent parent or mentor, but it’s more subtle than growing up from a child. Probably one of the biggest things is learning how much self-sacrifice is necessary—how much time the child or mentee needs, and how little time one will have for one’s own interests. How to balance those things, so one gives freely but does not become subsumed—that is a difficult thing to learn, indeed. That has the makings of very interesting character development.

The problem, of course, is that only people who have gone through it and learned those lessons are in a position to tell it—one can’t teach what one doesn’t know.

At least on purpose.

Art is a great testament to how much one can teach by accident—since God is in charge of the world, not men.

But I think that the world really could do with some (more) decent stories about recent adults learning to be mature adults. I think that they can be made interesting to general audiences.

The Scientific Method Isn’t Worth Much

It’s fairly common, at least in America, for kids to learn that there is a “scientific method” which tends to look something like:

  1. Observation
  2. Hypothesis
  3. Experiment
  4. Go back to 1.

It varies; there is often more detail. In general it’s part of the myth that there was a “scientific revolution” in which at some point people began to study the natural world in a radically different way than anyone had before. I believe (though am not certain) that this myth was propaganda during the Enlightenment, which was a philosophical movement primarily characterized by being a propagandistic movement. (Who do you think gave it the name “The Enlightenment”?)

In truth, people have been studying the natural world for thousands of years, and they’ve done it in much the same way all that time. There used to be less money in it, of course, but in broad strokes it hasn’t changed all that much.

So if that’s the case, why did Science suddenly get so much better in the last few hundred years, I hear people ask. Good question. It has a good answer, though.

Accurate measurement.

Suppose you want to measure how fast objects fall. Now suppose that the only time-keeping device you have is the rate at which a volume of sand (or water) falls through a restricted opening (i.e., your best stopwatch is an hourglass). How accurately do you think that you’ll be able to write the formula for it? How accurately can you test that in experimentation?

To give you an idea, in physics class in high school we did an experiment where we had an electronic device that let long, thin paper go through it and it burned a mark onto the paper exactly ten times per second, with high precision. We then attached a weight to one end of the paper and dropped the weight. It was then very simple to calculate the acceleration due to gravity, since we just had to accurately measure the distance between the burn marks.

The groups in class got values between 2.8 m/s² and 7.4 m/s² (it’s been 25 years, so I might be a little off, but those are approximately correct). For reference, the correct answer, albeit in a vacuum while we were in air, is 9.8 m/s².
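The arithmetic behind that ticker-tape experiment is simple enough to sketch. With marks burned every 0.1 s, each gap between successive marks grows by a·t² under constant acceleration, so averaging the second differences of the gap lengths and dividing by t² recovers the acceleration. A minimal sketch (the mark positions below are idealized for illustration, not our actual lab data):

```python
# Estimate constant acceleration from ticker-tape burn marks made
# at fixed time intervals (here, ten marks per second).
TICK = 0.1  # seconds between successive marks

def acceleration_from_gaps(gaps):
    # Under constant acceleration, each gap exceeds the previous one
    # by a * TICK**2; average those second differences to estimate a.
    diffs = [b - a for a, b in zip(gaps, gaps[1:])]
    return sum(diffs) / len(diffs) / TICK ** 2

# Idealized mark positions for a = 9.8 m/s^2: x_n = 0.5 * a * (n * TICK)^2
marks = [0.5 * 9.8 * (n * TICK) ** 2 for n in range(6)]
gaps = [b - a for a, b in zip(marks, marks[1:])]
print(round(acceleration_from_gaps(gaps), 2))  # prints approximately 9.8
```

With real tape the gaps would be hand-measured with a ruler, which is exactly where our classroom spread from 2.8 to 7.4 m/s² came from.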

The point being: until the invention of the mechanical watch, the high precision measurement of accurate time was not really possible. It took people a while to think of that.

It was a medieval invention, by the way. Well, not hyper-precise clocks, but the technology needed to do it. Clocks powered by falling weights were common during the high medieval time period, and the earliest existing spring-driven clock was given to Philip the Good, Duke of Burgundy, in 1430.

Another incredibly important invention for accurate measurement was the telescope. These were first invented in 1608, and spread like wildfire because they were basically just variations of eyeglasses (the first inventor, Hans Lippershey, was an eyeglass maker). Eyeglasses were another medieval invention, by the way.

And if you trace the history of science in any detail, you will discover that its advances were mostly due not to the magical properties of a method of investigation, but to increasing precision in the ability to measure things and make observations of things we cannot normally observe (e.g. the microscope).

That’s not to say that literally nothing changed; there have been shifts in emphasis, as well as the creation of an entire type of career which gives an enormous number of people the leisure to make observations and the money with which to pay for the tools to make these observations. But that’s economics, not a method.

One could try to argue that mathematical physics was something of a revolution, but it wasn’t, really. Astronomers had mathematical models of things they didn’t actually know the nature of nor inquire into since the time of Ptolemy. It’s really increasingly accurate measurements which allow the mathematicization of physics.

The other thing to notice is that anywhere that taking accurate measurements of what we actually want to measure is prohibitively difficult or expensive, the science in those fields tends to be garbage. More specifically, it tends to be the sort of garbage science commonly called cargo cult science. People go through the motions of doing science without actually doing science. What that means, specifically, is that people take measurements of something and pretend it’s measurements of the things that they actually want to measure.

We want to know what eating a lot of red meat does to people’s health over the long term. Unfortunately, no one has the budget to put a large group of people into cages for 50 years and feed them controlled diets while keeping out confounding variables like stress, lifestyle, etc.—and you couldn’t get this past an ethics review board even if you had the budget for it. So what do nutrition researchers who want to measure this do? They give people surveys asking them what they ate over the last 20 years.

Hey, it looks like science.

If you don’t look too closely.

Taking a Page from Scooby Doo

It strikes me that an interesting plot for a murder mystery would be to borrow a page from Scooby Doo and to have the murderer disguise a murder by taking advantage of a local legend.

Of course, this is hardly original to Scooby Doo. This basic structure for a plot can be found in The Hound of the Baskervilles, published some 67 years before Scooby Doo began his crime fighting in the classic TV show, Scooby Doo, Where Are You!

We live in a different time from either, which would make a somewhat different approach necessary, but I think it’s interesting to consider how, and why.

Scooby Doo originated at a time when there was tremendous interest in the “paranormal.” I’m not sure exactly when it started, and I think it had mostly died down by the 1990s, but for a while, in America at least, there was great interest in things like the Loch Ness monster, the Bermuda triangle, alien abductions, big foot, and such-like. I think that there was a relationship to the great popularity of self-help, self-actualization, consciousness-raising, and other such things that gave rise to a lot of cult activity. People knew that there was more to the world than the official explanation (that is, what they learned in public school and what was said in newspapers), and searched for it in some very strange places.

Scooby Doo, Where Are You! took this cultural pervasiveness as a starting point, and just ran with it. The eagerness of people to believe in more than the oversimplification they learned as children made for a ready setting to find strange things behind every tree and under every rock. (Scooby Doo was also a comedy, of course, so the American preference for exaggeration in comedy must also be taken into account.)

The Hound of the Baskervilles, by contrast, comes from an age which is more content with the simplistic answers that a rationalist oversimplification tends to give. Coming before the two world wars, when technology turned on its creators and the promised heaven-on-earth of Science became hell-on-earth, people had a different sort of relationship to Science than they did shortly after the second world war, which was when Scooby Doo was written and set. Given the long stretch of comparative peace within the United States, our modern time has come much more to hope in Science once again, and accordingly to be more content with rationalistic oversimplifications, so I think that we are culturally closer to The Hound of the Baskervilles. So we must look more closely at the roles that the putatively supernatural plays.

Rationalistic ages tend to reject superstition, but very curiously they do it for a very different reason than Christian ages do. To the Christian, superstition is sinful because it is primarily a means of trying to step outside the natural order to control it. Bear in mind that in a Christian context things like Angels are natural; a properly Christian distinction between natural and super-natural is created and creator. To a Christian, only God is super-natural. So stepping outside of the natural order largely means things like consorting with fallen angels, who are willing to abuse their power for their own dark ends. Superstition is sinful, then, because it is trying to be something one is not, and abusing the natural order in order to do it.

So to take an example of a superstition, trial by combat and trial by ordeal are both superstitious because they are an attempt to force the hand of God to serve men’s purposes. This is inverting the natural order; we were given senses to find out who is guilty, but the superstitious man does not want to use our senses and our reason to determine guilt. So the superstitious man resorts to something which he thinks will give him control and force the world to do his will, that is, to tell him guilt without taking the trouble to determine it.

(Laziness is very ingrained in fallen humanity, and it took the Church many centuries to finally extirpate trial by combat and trial by ordeal.)

Also worth noting is that the first casualty in such abuse will tend to be a person’s reason; in order to act badly one must also think badly. Thus superstition always goes along with lousy reasoning; it must in order to seem like a good idea. Hence, too, the immense practicality of the monk who was trying to help a woman who said that she had the power to fit through keyholes. He locked his door, took the key out, then chased her around the room hitting her with a stick and telling her to get out of the room if she could. It might be difficult to get past a Human Subjects Review Board these days, but it’s a sound experimental design, and proved the point quite well.

Rationalistic ages, by which I mean ages which believe themselves to know everything, and approximate this by refusing to acknowledge the existence of anything they do not know, hold a radically different view of superstition. To them, superstition is anything which is super-natural where nature is defined as, basically, what they know. Thus to a rationalist, the super-natural is anything outside of his knowledge. (This is why things like big-foot will be considered supernatural even though they are supposed to be exactly as much flesh-and-blood as a Spaniard or an orangutan.)

The problem which rationalists have is that on some level they do know that they do not, in fact, know everything. They are confident, but they know that their confidence has no basis in reality. The only way to prove a negative is by contradiction; the Christian has a contradiction to people being able to achieve total power by stepping outside the natural order. (That contradiction is the providence of God; demons tremble at the name of Christ, etc.) But the rationalist has only the fallacy of ignorance (assuming that an absence of evidence is evidence of absence). Material fallacies are not very comforting at night, when it’s cold and hard to see, and one hears a sound which one cannot identify.

Thus rationalistic ages will always lend themselves to superstition (in both senses, really). Fear will never leave a man forever, and if he has no comfort from a higher power, he has no protection. There’s a section from Chesterton’s The Everlasting Man which describes it quite well:

Superstition recurs in all ages, and especially in rationalistic ages. I remember defending the religious tradition against a whole luncheon table of distinguished agnostics; and before the end of our conversation every one of them had procured from his pocket, or exhibited on his watch-chain, some charm or talisman from which he admitted that he was never separated. I was the only person present who had neglected to provide himself with a fetish. Superstition recurs in a rationalist age because it rests on something which, if not identical with rationalism, is not unconnected with scepticism. It is at least very closely connected with agnosticism. It rests on something that is really a very human and intelligible sentiment, like the local invocations of the numen in popular paganism. But it is an agnostic sentiment, for it rests on two feelings: first that we do not really know the laws of the universe; and second that they may be very different to all we call reason. Such men realise the real truth that enormous things do often turn upon tiny things. When a whisper comes, from tradition or what not, that one particular tiny thing is the key or clue, something deep and not altogether senseless in human nature tells them that it is not unlikely.

So when it comes to writing a story about someone using a legend as a disguise, the best place to put it will be in a group of rationalists. Some will violently protest against it, but all will be liable to be haunted by it. You can see this sort of thing in Chesterton’s story The Blast of the Book, where there is a book which is supposed to have some dark power to make the people who read it disappear. It is followed around a bit, with various people seeming to disappear because of it, and it turns out to be a practical joke by the subordinate of the main character of the story (Professor Openshaw, who is, of course, a rationalist).

There was another long silence and then Professor Openshaw laughed; with the laugh of a great man who is great enough to look small. Then he said abruptly:

‘I suppose I do deserve it; for not noticing the nearest helpers I have. But you must admit the accumulation of incidents was rather formidable. Did you never feel just a momentary awe of the awful volume?’

‘Oh, that,’ said Father Brown. ‘I opened it as soon as I saw it lying there. It’s all blank pages. You see, I am not superstitious.’

Chesterton might, I suppose, be accused of poking fun at his enemies in this fashion, but it is actually rather good psychology. In the same way that one cannot write a devout Catholic racked by guilt—if he is so racked, he will go to confession and discharge the guilt—it does not work to write a devout Christian trembling in superstition. If one is really devout, one would make the sign of the cross, invoke the name of Christ, and open the thing one is not supposed to open. Or, failing that, take it to a priest to have a blessing said over it or perhaps an exorcism performed. A simple nameless dread does not make sense because the Christian has a definite idea of what to do with things he does not personally understand, because while he doesn’t know all the particulars, he does know the hierarchy.

Now, there is what might be called a middle ground, which we can describe as undiscovered beasts. It is possible that there is a sixty-foot-long alligator swimming in the swamps by some campground, and the thing to do with alligators, even with large ones, is to not stand next to them. Everyone should be cautious of an insect with a poison so deadly one sting can kill twelve grown men. While these things would be superstition to the rationalist, they would not be superstition to the Christian, and would be things to investigate within the ordinary course of probability. Demons do not leave footprints, but sixty-foot alligators do.

Legends of a species of sixty foot alligators in a deep, unexplored swamp are of course possible, but do not lend themselves as well to detection, I think, for the simple reason that faking their presence requires the sort of effort that a single person is not likely to be able to put into things. Making footprints that big, and tail-tracks, and such-like would be time consuming and difficult.

This sort of unknown beast is very doable, but is more problematic in that it’s the sort of thing which should be observable, and moreover would make a lot of people interested in observing them. And since they should be observable, the perpetrator will be almost obligated to provide some physical evidence of the beastie. If there is a hyper-deadly wasp in the area, blaming deaths on it will not be very plausible unless somebody swats one of the things and it can be analyzed to find its hyper-deadly poison. Moreover, by the time one is faking the cause of death in a remote area, one might as well fake a more conventional cause of death, or even just find a way to inject the poor victim with a real disease, like tetanus or malaria or what-not. Or poison them plus give them a real disease.

So I think that the thing one would want to fake, as a murderer, would be the sort of supernatural which would not leave physical evidence of its crimes. Ghosts, etc. are a much better patsy than an undiscovered beast; they head off all sorts of problems of having to produce evidence afterwards.

I think that the time is ripe for such stories again.

Sherlock Holmes and the Valley of Fear

I recently read the fourth and final Sherlock Holmes novel, The Valley of Fear. It’s an interesting book, or in some sense two books, the first of which I know to be interesting and the second I’m not really interested in reading.

(If anyone doesn’t want spoilers, now’s the time to stop reading.)

The book begins with Sherlock Holmes working out a cryptogram by reasoning to the key from the cipher. It’s a book cipher, and since the numbers imply a key with many pages and two columns, Holmes is able to guess that it’s an almanac. This is clever and enjoyable; the decoded message warns that something bad is going to happen to a Douglas in Birlstone. Shortly after they decrypt it, a detective from Scotland Yard arrives to consult Sherlock Holmes about the brutal murder of Mr. Douglas of Birlstone. The plot thickens, as it were. This is an excellent setup for what is to follow.
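The mechanics of a book cipher are simple enough to sketch. Sender and receiver share a common key text (in the novel, a page of an almanac), and the message is just a list of numbers, each indexing a word in that text. Here is a minimal illustration in Python; the key text and numbers below are invented for the example, not taken from the novel:

```python
# Minimal book-cipher sketch. Both parties share KEY_TEXT; the
# message itself is only a sequence of word positions, so anyone
# intercepting it learns nothing without the key text.

KEY_TEXT = (
    "there is danger may come very soon one "
    "Douglas rich country now at Birlstone house"
)

def decode(numbers, key_text):
    """Replace each number with the corresponding word (1-indexed)."""
    words = key_text.split()
    return " ".join(words[n - 1] for n in numbers)

# An intercepted message: positions into the shared key text.
message = [3, 4, 5, 6, 7]
print(decode(message, KEY_TEXT))  # prints "danger may come very soon"
```

Holmes’s feat, of course, is the reverse and harder problem: deducing from the bare numbers what the key text must be, which is only possible because the numbers themselves leak information about the shape of the book.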

When Holmes arrives, we get the facts of the case, that Mr. Douglas lives in a house surrounded by a moat with a drawbridge, and was found in his study with his head blasted off with a sawed-off shotgun fired at close range. Any avid reader of detective fiction—possibly even at the time, given how detective fiction had taken off in short story form by 1914, when The Valley of Fear was written—will immediately suspect that the body is not the body it is supposed to be. However, Conan Doyle forestalls this possibility by the presence of a unique brand on the forearm of the corpse, which Mr. Douglas was known to have had. This helps greatly to heighten the mystery.

The mystery is deepened further by the confusing evidence that Mr. Douglas’s friend forged a footprint on the windowsill which was used to suggest that the murderer escaped by wading in the moat—which was only 3′ deep at its deepest—and ran away. Further confusing things, Dr. Watson accidentally observes Mrs. Douglas and Mr. Douglas’ friend being lighthearted and happy together.

Holmes then finds some additional evidence which convinces him of what really happened, which he does not tell us or the police about, which is not exactly fair play. He then sets in motion a trap where he has the police tell Mr. Douglas’ friend that they are going to drain the moat. This invites the reader to guess, and I’m not sure that we really have sufficient evidence at this point to guess. That’s not entirely true; we have sufficient evidence to guess, but not to pick among the many possible explanations of the facts given to us. It turns out that the dead man was the intruder, but it could have turned out otherwise, too. The facts, up till then, would have supported Mr. Douglas’ friend having been in on the crime, for example. That said, the explanation given does cover the facts very well, and is satisfying. It does rely, to some degree, on happenstance; none of the servants heard the gunshot, except for one half-deaf woman who supposed it to be a door banging. This is a little dubious, but investigation must be able to deal with happenstance because happenstance is real.

We then come to the part where Mr. Douglas is revealed and the mystery explained, at which point the narrative shifts over to explaining his history in America and why it was that there were people tracking him from America to England in order to murder him. This, I find very strange.

It is the second time in a novel that Conan Doyle did it. The first time was in A Study in Scarlet, where the middle half of the book (approximately) took place in America. I really don’t get this at all.

I suspect it makes more sense in the original format of the novels, which were serialized in magazines. It would not be so jarring, in a periodical magazine, to have to learn new characters, since one would to some degree need to reacquaint oneself with the already-known characters anyway. Possibly it also speaks to Conan Doyle having not paced himself well, being more used to short stories, and needing to fill the novel with something else.

The very end of the book, when we return to the present in England, is a very short epilogue. Douglas was acquitted as having acted purely in self-defense, but then is murdered by Moriarty while taking Holmes’s advice to flee England because Moriarty would be after him.

That the book takes such an interest in Moriarty is very curious, given that it was written in 1914 while Holmes killed Moriarty off in 1893. Actually in 1891, but The Final Problem was published in 1893. Holmes was brought back in 1903, in The Adventure of the Empty House, where it is confirmed that Moriarty died at the Reichenbach Falls. So we have a novel which is clearly set prior to the death of Moriarty, establishing him as a criminal mastermind, almost 15 years after he was killed off. What’s even stranger about it is that Moriarty barely features in the story. He’s in the very beginning, mentioned only in connection to the cryptogram and as having something to do with the murder, but neither he nor his men actually tried to carry out the murder. His involvement was limited to finding out where Douglas was, so the American who was trying to murder Douglas could try. He naturally makes no appearance in the story of Douglas’ adventures in America, and only shows up in a note at the end of the book:

Two months had gone by, and the case had to some extent passed from our minds. Then one morning there came an enigmatic note slipped into our letter box. “Dear me, Mr. Holmes. Dear me!” said this singular epistle. There was neither superscription nor signature. I laughed at the quaint message; but Holmes showed unwonted seriousness.

Moriarty is indicated to have killed Douglas off the Cape of South Africa, and the book ends with Holmes’s determination to bring Moriarty to justice.

Which would be a great setup for Holmes bringing Moriarty to justice in a later book, but we already read about it in an earlier book. It doesn’t really help to flesh the character out, it’s not really needed for the plot of the book, and it serves to end the book on a note of failure rather than of triumph. I do not understand it. Perhaps its purpose is to help increase the grandeur of Holmes’ previous victory over Moriarty? But that is a strange thing to do. Perhaps it was the reverse—a note of caution to fans of Holmes that no man, not even Sherlock Holmes, is omnipotent?

Why Moderns Always Modernize Stories

Some friends of mine were discussing why it is that modern tellings of old stories (like Robin Hood) are always disappointing. One put forward the theory it’s because they can’t just tell the story, they have to modernize it. He’s right, but I think it’s important to realize why it is that modern storytellers have to modernize everything.

It’s because they’re Modern.

Before you click away because you think I’m joking, notice the capital “M”. I mean that they subconsciously believe in Modern Philosophy, which is the name of a particular school of philosophy which was born with Descartes, died with Immanuel Kant, and has wandered the halls of academia ever since like a zombie—eating brains but never getting any smarter for it.

The short, short version of this rather long and complicated story is that Modern Philosophy started with Descartes’ work Discourse on Method, though it was put forward better in Meditations on First Philosophy. In those works, Descartes began by doubting literally everything and seeing if he could trust anything. Thus he started with the one thing he found impossible to doubt: his own existence. It is from this that we get the famous cogito ergo sum, “I think, therefore I am.”

The problem is that Descartes had to bring in God in order to guarantee that our senses are not always being confused by a powerful demon. In modern parlance we’d say that we’re not in The Matrix. They mean the same thing—that everything we perceive outside of our own mind is not real but being projected to us by some self-interested power. Descartes showed that from his own existence he can know that God exists, and from God’s existence he can know that he is not being continually fooled in this way.

The problem is that Descartes was in some sense cheating—he was not doubting that his own reason worked correctly. The problem is that this is doubtable, and once doubted, completely irrefutable. All refutations of doubting one’s intellect necessarily rely on the intellect being able to work correctly to follow the refutations. If that is itself in doubt, no refutation is possible, and we are left with radical doubt.

And there is only one thing which is certain, in the context of radical doubt: oneself.

To keep this short, without the senses being considered at least minimally reliable there is no object for the intellect to feed on, but the will can operate perfectly well on phantasms. So all that can be relied upon is will.

After Descartes and through Kant, Modern Philosophers worked to avoid this conclusion, but progressively failed. Kant killed off the last attempts to resist this conclusion, though it is a quirk of history that he could not himself accept the conclusion and so basically said that we can will to pretend that reason works.

Nietzsche pointed out how silly willing to pretend that reason works is, and Modern Philosophy has, for the most part, given up that attempt ever since. (Technically, with Nietzsche, we come to what is called “post-modernism”, but post-modernism is just modernism taken seriously and thought out to its logical conclusions.)

Now, modern people who are Modern have not read Descartes, Kant, or Nietzsche, of course, but these thinkers are in the water and the air—one must reject them to not breathe and drink them in. Modern people have not done that, so they hold these beliefs but for the most part don’t realize it and can’t articulate them. As Chesterton observed, if a man won’t think for himself, someone else will think for him. Actually, let me give the real quote, since it’s so good:

…a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy…

(From The Revival of Philosophy)

In the context of the year of our Lord’s Incarnation 2019, what Christians like my friends mean by “classic stories” are mostly stories of heroism. (Robin Hood was given as an example.) So we need to ask what heroism is.

There are varied definitions of what a hero is which are useful; for the moment I will define a hero as somebody who gives of himself (in the sense of self-sacrifice) that someone else may have life, or have it more abundantly. Of course, stated like this it includes trivial things. I think that there simply is a difference of degree but not of kind between trivial self-gift and heroism; heroism is to some degree merely extraordinary self-gift.

If you look at the classic “hero’s journey” according to people like Joseph Campbell, but less insipidly as interpreted by George Lucas, the hero is an unknown and insignificant person who is called to do something very hard, which he has no special obligation to do, but who answers this call and does something great, then after his accomplishment, returns to his humble life. In this you see the self-sacrifice, for the hero has to abandon his humble life in order to do something very hard. You further see it as he does the hard thing; it costs him trouble and pain and may well get the odd limb chopped off along the way. Then, critically, he returns to normal life.

You can see elements of this in pagan heroes like Achilles, or to a lesser degree in Odysseus (who is only arguably a hero, even in the ancient Greek sense). They are what C.S. Lewis would call echoes of the true myth which had not yet been fulfilled.

You really see this in fulfillment in Christian heroes, who answer the call out of generosity, not out of obligation or desire for glory. They endure hardships willingly, even unto death, because they follow a master who endured death on a cross for their sake. And they return to a humble life because they are humble.

Now let’s look at this through the lens of Modern Philosophy.

The hero receives a call. That is, someone tries to impose their will on him. He does something hard. That is, it’s a continuation of that imposition of will. Then he returns, i.e. finally goes back to doing what he wants.

This doesn’t really make any sense as a story, after receiving the call. It’s basically the story of a guy being a slave when he could choose not to be. It is the story of a sucker. It’s certainly not a good story; it’s not a story in which a character’s actions flow out of his character.

This is why we get the modern version, which is basically a guy deciding on whether he’s going to be completely worthless or just mostly worthless. This is necessarily the case because, for the story to make sense through the modern lens, the story has to be adapted into something where he wills what he does. For that to happen, and for him not to just be a doormat, he has to be given self-interested motivations for his actions. This is why the most characteristic scene in a modern heroic movie is the hero telling the people he benefited not to thank him. Gratitude robs him of his actions being his own will.

A Christian who does a good deed for someone may hide it (“do not let your left hand know what your right is doing”) or he may not (“no one puts a light under a bushel basket”), but if the recipient of his good deed knows about it, the Christian does not refuse gratitude. He may well refuse obligation; he may say “do not thank me, thank God”, or he may say “I thank God that I was able to help you,” but he will not deny the recipient the pleasure of gratitude. The pleasure of gratitude is the recognition of being loved, and the Christian values both love and truth.

A Modern hero cannot love, since to love is to will the good of the other as other. The problem is that the other cannot have any good beside his own will, since there is nothing besides his own will. To do someone good requires that they have a nature which you act according to. The Modern cannot recognize any such thing; the closest he can come is the other being able to accomplish what he wills, but that is in direct competition with the hero’s will. The same action cannot at the same time be the result of two competing wills. In a zero-sum game, it is impossible for more than one person to win.

Thus the modern can only tell a pathetic simulacrum of a hero who does what he does because he wants to, without reference to anyone else. It’s the only way that the story is a triumph and not the tragedy of the hero being a victim. Thus instead of the hero being tested, and having the courage and fortitude to push through his hardship and do what he was asked to do, we get the hero deciding whether or not he wants to help, and finding inside himself some need that helping will fulfill.

And in the end, instead of the hero happily returning to his humble life out of humility, we have the hero filled with a sense of emptiness because the past no longer exists and all that matters now is what he wills now, which no longer has anything to do with the adventure.

The hero has learned nothing because there is nothing to learn; the hero has received nothing because there is nothing to receive. He must push on because there is nothing else to do.

This is why Modern tellings of old stories suck, and must suck.

It’s because they’re Modern.

Meek is an Interesting Word

Somebody asked me to do a video on the beatitude about meekness, so I’ve been doing some research on the word “meek”. Even though I don’t speak from a place of authority, talking about the beatitudes still carries a lot of responsibility.

The first problem that we have with the word “meek” is that it is not really a modern English word. It’s very rarely used as a character description in novels, and outside of that, pretty much never. So we have to delve back into history and etymology.

The OED defines meek as “Gentle. Courteous. Kind.” It comes from a Scandinavian root. Various Scandinavian languages have an extremely similar word which means, generally, “soft” or “supple”.

Next, we turn to the original Greek:

μακάριοι οἱ πραεῖς, ὅτι αὐτοὶ κληρονομήσουσιν τὴν γῆν

To transliterate, for those who don’t read the Greek alphabet:

makarioi hoi praeis, hoti autoi kleronomesousin ten gen.

Much clearer, I’m sure. Bear with me, though, because I will explain. (I’m going to refer to the words in the English transliteration to make it easier to follow.)

The beatitudes generally have two halves. The first half says that someone is blessed, while the second half gives some explanation as to why. This beatitude has this form. Who is blessed is the first three words, “makarioi hoi praeis”. In the original the verb is left understood, but this is usually translated as “blessed are the meek”. The second half, “hoti autoi kleronomesousin ten gen” is commonly translated “for they shall inherit the earth”.

Let’s break the first half down a little more, because both major words in it are very interesting (“hoi” is just an article; basically it’s just “the”). The first word, “makarioi”, can actually be translated in English either as “blessed” or as “happy”, though it should be noted happy in a more full sense than just the pleasant sensation of having recently eaten on a sunny day with no work to do at the moment.

I’ve noticed that a lot of people, or at least a lot of my fellow Americans, want to take “blessed”, not as an adjective, but as a future conditional verb. Basically, they want to take Christ, not as describing what presently is, but as giving rules with rewards that are attached. This doesn’t work even in English, but it’s even more obvious in Greek where makarioi is declined to agree with the subject, “hoi praeis”. Christ isn’t telling us what to do and offering rewards. He’s telling us that we’re looking at the world all wrong, and why.

The other part, “hoi praeis”, is what gets translated as “the meek”, though I’ve also seen “the gentle”. It is the noun form of an adjective, “praios” (“πρᾷος”), which (not surprisingly) tends to mean mild or gentle.

Now, to avoid a connotation which modern English has accrued over hundreds of years of character descriptions in novels, it does not mean weak, timid, or mousy. The wiktionary entry for praios has some usage examples. If one peruses them, they are things like asking a god to be gentle, or saying that a king is gentle with his people.

So translating the first half very loosely, we might render the beatitude:

Those who restrain their force have been blessed, for they will inherit the earth.

This expanded version of the beatitude puts it in the group of the beatitudes which refer to something under the control of the people described as “makarios” (blessed, happy). Consider the other groups of people, which are roughly half of the beatitudes: “the poor in spirit,” “those who mourn”, “those who hunger and thirst for righteousness”, “those who are persecuted in the cause of righteousness,” and “you when people abuse you and persecute you and speak all kinds of calumny against you falsely on my account”.

I think that this really makes it clear that what is being described is a gift, though a hard-to-understand one. So what do we make of the other beatitudes, the ones under people’s control?

Just as a quick refresher, they are: “the meek”, “the merciful”, “the pure in heart”, and “the peacemakers”. They each have the superficial form of there being a reward for those who do well, but if we look closer, the reward is an intrinsic reward. That is, it is the natural outcome of the action.

So if we look closely at the second half of the meek beatitude, we see that indeed it is connected to the first half: “for they will inherit the earth”. This is often literally the case: those who fight when they don’t have to, die when they don’t have to, and leave the world to those who survive them.

Now, I think too much can be made of “the original context”—our Lord was incarnate in a particular time and spoke to particular people, but they were human beings and he was also speaking to all of us. Still, I think it is worth looking at that original context, and how in the ancient world one of the surest paths to glory was conquest. Heroes were, generally, warriors. They were not, as a rule, gentle. Even in more modern contexts where war is mechanized and so individuals get less glory, there are still analogs where fortune favors the bold. We laud sports figures and political figures who crush their enemies in metaphorical, rather than literal, senses.

Even on a simpler level, we can only appreciate the power that a man has when he demonstrates it by using it.

And here Christ is saying that those are happy who do not use their power when they don’t have to. And why? Because they inherit the earth. Glory is fleeting, and in the end one can’t actually do very much with it. Those who attain glory by the display of power do not, in putting that power on display, use it to do anything useful. They waste their power for show, rather than using it to build. And having built nothing, they will end up with nothing.

You can see this demonstrated in microcosm in a sport I happen to like: power lifting. It is impressive to see people pick up enormous weights. But what do they do with them once they’ve picked them up? They just put them back down again.

Now, the fact that this is in microcosm means that there can be good justifications for it; building up strength by lifting useless weights can give one the strength to lift useful weights, such as children, furniture, someone else who has fallen down, etc. And weightlifting competitions do serve the useful role of inspiring people to develop their strength; a powerlifting meet is not the same thing as conquering a country. But there is, nonetheless, a great metaphor in it, if one were to extend the powerlifting competition to all of life. Happy are those who do not.

Investigating Suicides

One of the popular plots in detective stories is the investigation of a murder which has been—so far—successfully disguised as a suicide. It’s a popular plot for a reason, offering some very interesting possibilities for stories. It does, however, come with some requirements on the stories containing it, which I’d like to discuss.

The first and most obvious requirement which faux-suicide poses is that of means. People tend to kill themselves in one of a limited number of ways, and in any event must kill themselves in a way that is plausible to do alone. A man cannot shoot himself in the back with a rifle from a great distance. Further complicating the faux suicide, the murder weapon must be plausibly accessible to the victim when the body is found. Dead men do not move murder weapons. The murderer must then have access to the body after the murder in order to plant the murder weapon in some fashion. This precludes, or at least makes very difficult, the locked room murder.

Of course, that’s really just a challenge to the writer, and there have been some clever solutions. I think my favorite is a room that had both a deadbolt and a latch; the murderer locked the deadbolt but left the latch unlocked, then broke the door open, staged the suicide, then locked the latch. When the detectives broke in, they assumed that the deadbolt had been broken then, and not earlier. That was quite clever. (This is the Death in Paradise episode at a nursing home.)

The limitations on the means of suicide are more strict, though. For example, any poisons used must be very fast acting. Poisons like arsenic which cause pain for days before death finally comes are simply not plausible as a means of suicide. Elaborate traps which catch the victim by surprise are also implausible. Simple drowning is right out.

Additionally, the means of murder have to be something one can force on a person without leaving bruising that will contradict the idea of suicide. It will not work to knock a man unconscious with a frying pan before staging his suicide with a gun. Sedatives are the easy way out, but they’re a gamble because a toxicology report will then prove that it was not suicide. Another alternative is to provide an explanation for the bruising, such as bruising known to pre-date the death, staging the victim as having changed his mind at the last minute, or damaging the body post-mortem, such as by throwing it off of a cliff.

The second sort of requirement which faux-suicide imposes is on the conditions of the victim, pre-mortem. The victim must have some sort of plausible reason to have killed himself. This significantly limits the sort of victims one can have. It would be very difficult to disguise the murder of a successful man in good health as suicide, for example. It’s not impossible, of course, but the attempt will tend to involve faking a scandal which would ruin the man’s life. It’s doable—it’s certainly doable—it just introduces other problems which need to be solved in order to make it work.

The final requirement imposed by a faux-suicide is about the detective: why on earth is he investigating the crime? If it’s suicide, what is there to investigate? The perpetrator of the crime is already known.

Proximally, there’s only one reason: because someone thinks that the faux suicide was not suicide. In a sense this is just a sub-case of the more general case of there being someone who is widely accepted as guilty, but there is someone who does not accept their guilt. In both cases, there can even be a confession (a suicide note, in the case of the faux suicide). That said, I think that there are enough differences to consider the faux-suicide on its own, rather than just as a special case of the more general pattern.

The reasons for the detective investigating the faux-suicide seem to me to come in roughly two main classes:

  1. Some of the facts of the scene of the crime are inconsistent with the suicide theory.
  2. Someone who knows the victim does not believe they could have killed themselves.

There is a very good example of #1 in Death in Paradise. Detective Richard Poole does not believe that a woman could have killed herself because she had drunk only half of her cup of extremely expensive tea. (He also thinks it unlikely that she drowned herself by sheer force of will; she drowned, yet had no bruising anywhere on her, nor any sedatives in her system.)

Another example of this is the death of Paul Alexis in Have His Carcase. His blood being liquid and there being only one set of footprints—his—up to the Flat-Iron rock suggest suicide, but on the other hand it seems implausible for a man with a full beard to buy a cutthroat razor, then take a train (with a return ticket) and walk 5 miles to sit on a hot rock for several hours before cutting his own throat with the razor.

Have His Carcase also gives an example of the latter category—a wealthy widow who was engaged to Paul Alexis thinks it is impossible that he killed himself and begs Harriet Vane to find out who murdered her intended husband.

The first category is, I think, far more common than the second sort. I can’t, offhand, think of any examples in which a detective investigated a murder solely on the strength of someone thinking it impossible their friend committed suicide. The closest I can come to that may be Five Little Pigs, in which Poirot investigates an old murder because the convicted woman’s daughter is certain her mother is innocent. Her certainty comes from a letter in which her mother assured her that she was innocent, together with the knowledge that her mother always told the truth. In that story, though, Poirot did not accept the mother’s innocence and was explicit that he would tell the daughter if his investigation made him think the mother did it.

It seems, then, that a faux suicide usually requires some amount of inconsistent facts in order to be a viable story. The question then becomes how to balance these facts such that the detective understands their meaning but the authorities do not. In a sense, this is just a sub-class of the problem of how to give clues the detective understands but others don’t; still, again, I think that it is worth looking into the specific case.

I think that, as a rule, the evidence in favor of suicide should be the main physical evidence, while the evidence against should be the more subtle psychological evidence. This is certainly the common pattern, at least, but I think it makes sense since small psychological inconsistencies are easier to brush away as explained by information not present. People occasionally do strange things, and suicide is almost definitionally the strangest. No one kills himself more than once in his life.

But, then, why does the detective—who knows better than anyone that life is sometimes just unaccountably strange—place such high value on the evidence which others dismiss?

One common answer is that the detective has a compulsion to make sure that everything is neat and orderly and makes sense and is explained. This is not very satisfying, though, since this trait must be selectively applied as life is very rarely neat and orderly, with everything having an explanation which makes sense.

Another approach, which is better but still not great, is that the detective has a hunch. It’s not really satisfying because it violates rule #6:

No accident must ever help the detective, nor must he ever have an unaccountable intuition which proves to be right.

That said, this is a fuzzy line. One man’s unaccountable hunch which proves right is another man’s feeling that he can’t articulate but bears further examination.

One variation of this which is not so much a hunch is giving the detective highly domain-specific knowledge. “No deep sea diver would ever drink tea at this time of day”—that sort of thing. The problem is that unless one can prevent the authorities from checking up on this, they will be immediately forced to conclude the suicide was not suicide. (This is sometimes dealt with by making the authorities very pig-headed or otherwise very budget-constrained, so that they will jump at the chance to classify every death as a suicide so as to avoid having to investigate it. It can be pulled off, of course.)

I think what probably works out the best is inconclusive evidence that the suicide is fake combined with someone other than the detective acting as the driving force. The detective may not have unaccountable hunches, but others may. This sets up the interplay that the other person is sure that it wasn’t suicide, while the detective sees only some evidence which supports this conclusion but which at least justifies further investigation. By making the motive force a hunch, there does not need to be sufficient evidence to justify the hunch. This allows the faux suicide to be generally taken as suicide without all of the people involved being dimwits. Unless the murder mystery is also a comedy, it is preferable to populate the world with reasonably intelligent people.

Progress Report

In case anyone is interested in my progress on my Brother Thomas series, I’m currently editing the second chronicle of Brother Thomas, Wedding Flowers Will Do for a Funeral. The current draft is out to test readers, and I’ve already gotten some valuable feedback which I’ve begun to incorporate into my edits. I’m going to do another round or two of edits, which I hope to complete by the end of October, then send it off at the beginning of November to my publisher, Silver Empire, for final edits and publication.

It’s taken a lot longer than I’d hoped, but it’s happening.

Looking to the next Brother Thomas novel, I’ve started kicking around ideas. I’ve got a tentative setting of a family resort camp in the Adirondack mountains in upstate NY. It has a lot going for it:

  • a remote, isolated location which limits the suspect pool
  • a picturesque place that would be nice to visit in person, and so pleasant to visit in a book
  • limited technology: there are real camps with no cell phone service, no WiFi, and no electricity
  • the ability to bring together an interesting and eclectic group of suspects, most of whom—supposedly—don’t know each other
  • a setting in which there are people with (relatively) stable lives, where for the most part the same people have been doing the same work for decades

I’m not entirely decided on it, yet. I’m still in the early stages of working out who the guests might be, who the victim and murderer are, and why the brothers would be called in. Only after that can I really come up with a title, though for some reason the title Thank God He Didn’t Drown in the Lake is kicking around in my mind. We’ll see.

For the Math Nerds

I was just thinking about the song Finite Simple Group (Of Order 2), and if you have studied graduate-level math and haven’t heard it, you really should:

If you haven’t studied graduate-level math, the many, many puns will not be funny—in many cases they get the meaning at least approximately correct in both senses, which is the ideal form for a pun. There is something interesting to contemplate even without watching the video, though.

It is curious how context-dependent humor can be. This can, of course, become a problem. For about a year after I left grad school, I could barely make jokes which other people would understand. In fact, I often could barely make jokes because I was constantly interrupting them with, “oh, wait, that won’t make any sense to you.”

The problem was not that I couldn’t think of things to joke about that would be of general interest, but that all of the similes and analogies which sprang to mind were esoteric. Since the essence of wit is making suddenly obvious connections which are normally hidden, it proved disastrous because I couldn’t find the things which would make the connections obvious to others.

One of the things necessary for the skill of comedy, then, is to keep familiar with the things one’s audience will be familiar with, whatever those are. As can be seen by the laughter which the Klein Four (the a cappella group in the video) got, those things can be esoteric if your audience happens to be made up of people who all share that esoteric knowledge.

Just a subset of the dictum, know your audience, I suppose.

Strength vs. Skill

Many years ago, I was studying judo from someone who had done judo since he was a kid and was teaching for fun. He was not a very large man, but he was a very skilled one. One time, he told a very interesting story.

He was in a match with a man who was a body builder or a power lifter or something of that ilk—an immensely, extraordinarily strong man. He got the strong man into an arm bar, which is a hold in which the elbow is braced against something and the arm is pulled back at the wrist. Normally if a person is in a properly positioned arm bar, it is inescapable, and the person holding it could break his arm if he wanted to; joint locks are one of the typical ways a judo match ends—the person caught in the lock taps out, admitting defeat.

The strong man did not tap out.

He just curled his way out of the arm bar.

That is, his arm—in a very weak position—was so much stronger than my judo teacher’s large core muscles that he was able to overpower them anyway.

Next, my judo teacher pinned him down. In western wrestling, one can win a match by pinning the opponent’s shoulders to the ground for 3 seconds. In judo it’s a little more complicated, but the point which is important to the moment is that you have to pin the opponent such that he can’t escape for 45 seconds. Once he had pinned the strong man, the strong man asked him, “you got me?” My teacher replied, “yeah, I got you.” The strong man asked, “are you sure about that?” “Yes, I’m sure,” my teacher replied.

The strong man then grabbed my teacher by the gi (the stout clothing worn in judo) and floor-pressed him into the air, then set him aside. (Floor pressing is like bench pressing, only the floor keeps your elbows from going low enough to generate maximum power.)

Clearly, this guy was simply far too strong to ever lose by joint locks or pinning. So my teacher won the match by throwing him to the ground (“ippon”).

The moral of the story is not that skill will always beat strength, because clearly it didn’t, two out of three times. The moral of the story is also not that strength will always beat skill, since it didn’t, that final time.

The moral of the story is to know your limits and always stay within them.

It Cost 1 Billion Dollars to Tape Out a 7nm Chip

Making processors is getting very expensive. According to this report, the R&D required to take a processor design and turn it into something that can be fabricated at the latest silicon node runs about $1B.

https://www.fudzilla.com/news/49513-it-cost-1-billion-dollars-to-tape-out-7nm-chip

Each fabrication node (where the transistors shrink) has gotten more expensive. I suspect that economics will play as big a role in killing off Moore’s Law as physics will. Eventually no one will be able to afford new nodes, even if they are physically possible to create.

This is what an s-curve looks like.
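For concreteness, the canonical s-curve is the logistic function: growth that looks exponential at the start, then flattens as it approaches a ceiling. Here is a minimal sketch; the parameters are purely illustrative, not chip-industry data:

```python
import math

def logistic(t, midpoint=0.0, rate=1.0, ceiling=1.0):
    """Classic s-curve: slow start, rapid middle, flattening top."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Sampling a few points shows the shape: near-exponential growth
# early on, then diminishing gains as the curve nears its ceiling.
for t in [-4, -2, 0, 2, 4]:
    print(f"t={t:+d}  value={logistic(t):.3f}")
# logistic(0) is exactly half the ceiling; the curve is symmetric
# about the midpoint.
```

The relevant feature for Moore’s Law is the top of the curve: each step toward the ceiling costs as much as the last but yields less, which is what rising per-node costs amount to.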