Throwing Is Not Automatic

I’m a fan of Tom Naughton, and his movie Fathead helped me out a lot. But recently he had something of a headscratcher of a blog post. Mostly he just mistakes coaching cues that happen to work for him for the One True Way to swing a golf club—which is a very understandable mistake when in the grips of the euphoria of finally figuring out a physical skill one has been working on for years—but there was this really odd bit that I thought worthy of commenting on:

If you ask someone to throw a rock or a spear or a frisbee towards a target, he’ll always do the same thing, without fail: take the arm back, cock the wrist, plant the lead foot, rotate the hips, sling the arm toward the target, then release. Ask him exactly when he cocked his wrist, or planted his foot, or turned his hips, he’ll have no idea – but he’ll do it correctly every time. That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA. We don’t have to learn it. Our bodies and brains already know it.

The basic problem is: throwing is not automatic. It’s learned.

I can say this with certainty because I’ve spent time, recently, trying to teach children to throw a frisbee. They do not, in fact, instinctively do it correctly. Humans have very few actual instincts, at least when it comes to voluntary activities. We instinctively breathe, and we will instinctively withdraw our hand from pain, but that’s about it. Oh, and we can instinctively nurse from our mother, though even there we need to learn better technique than we come equipped with pretty quickly or Mom will not be happy.

Now, what we do, in fact, come with naturally is the predisposition to learn activities like throwing. This is like walking: we aren’t born knowing how to walk, but we are born with a predisposition to learn to walk. We’re good at learning how to walk and we want to do the sorts of things that make us learn how to walk. Language is the same way—we’re not born speaking or understanding language, but we are predisposed to learn it.

Another odd thing is the “he’ll do it correctly every time”—no he won’t. Even people who know how to throw things pretty well occasionally just screw up and do it wrong. When teaching my boys to throw a frisbee, occasionally I just make a garbage throw. It’s not just when my conscious thoughts get in the way of my muscle memory—muscle memory needs to be correctly activated, and not paying sufficient attention is a great way to do that wrong.

Finally, the evolutionary biology part is just odd: “That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA.”

There’s an element of truth to this, in that we can find evidence of spear use in humans going back hundreds of thousands of years. The problem is that the kinematic sequence to throw a spear and the kinematic sequence to hit a golf ball are not the same thing at all.

Here’s a golf swing:

By contrast, here’s someone throwing a javelin:

And just for fun, here are some Masai warriors throwing spears:

Something you’ll notice about the Masai, who throw actual weapons meant to kill, is that the spear is heavy, and they throw it from very close. Alignment is incredibly important, since a weak throw that hits point-on is vastly more effective than a strong throw that hits side-on. The other thing is that the ability to throw quickly without a big wind-up matters, since they’re practicing to hit moving targets. They don’t have time for a huge wind-up. Also, they tend to face their target, rather than stand at a 90 degree angle to it—when your target has teeth and claws, you need to be able to protect yourself if the target starts coming for you.

Anyway, if you look at these three activities, they’re just very kinematically different. Being good at one of those things will not transfer to being good at the others. The Masai warrior needs accuracy, timing, and power on a heavy projectile. The javelin thrower needs to whip his arm over his body as fast as possible, from a sprint. His arm is straight and his shoulder hyper-extended. The golfer needs to whip the head of a long stick as fast as possible, below his body, from a standing position. His arms are bent and his elbows are kept in to generate more force than arm-velocity, since the greater force translates to greater velocity on the end of the stick. The golf swing probably has more in common with low sword-strikes using a two-handed sword than it does with swinging a spear.

Anyway, I don’t have a major point. I just think it’s interesting what we will tell ourselves in order to try to figure out motion patterns.

On The Seventh Day God Rested

On the seventh day, God rested.

This is an interesting thing to contemplate since, as an American Northerner, I don’t really understand the concept of rest.

Granted, every now and again I take breaks, and every night I sleep. The thing is, I can’t help but think of these as weaknesses, as concessions to a fallen world. Chesterton described this attitude toward work and rest very well in Utopia of Usurers, though he was talking about employers and not individuals:

The special emblematic Employer of to-day, especially the Model Employer (who is the worst sort) has in his starved and evil heart a sincere hatred of holidays. I do not mean that he necessarily wants all his workmen to work until they drop; that only occurs when he happens to be stupid as well as wicked. I do not mean to say that he is necessarily unwilling to grant what he would call “decent hours of labour.” He may treat men like dirt; but if you want to make money, even out of dirt, you must let it lie fallow by some rotation of rest. He may treat men as dogs, but unless he is a lunatic he will for certain periods let sleeping dogs lie.

But humane and reasonable hours for labour have nothing whatever to do with the idea of holidays. It is not even a question of ten hours day and eight-hours day; it is not a question of cutting down leisure to the space necessary for food, sleep and exercise. If the modern employer came to the conclusion, for some reason or other, that he could get most out of his men by working them hard for only two hours a day, his whole mental attitude would still be foreign and hostile to holidays. For his whole mental attitude is that the passive time and the active time are alike useful for him and his business. All is, indeed, grist that comes to his mill, including the millers. His slaves still serve him in unconsciousness, as dogs still hunt in slumber. His grist is ground not only by the sounding wheels of iron, but by the soundless wheel of blood and brain. His sacks are still filling silently when the doors are shut on the streets and the sound of the grinding is low.

Again, Chesterton is talking about employers, but this also encompasses an American attitude toward the self which need have nothing to do with money. Chesterton goes on:

Now a holiday has no connection with using a man either by beating or feeding him. When you give a man a holiday you give him back his body and soul. It is quite possible you may be doing him an injury (though he seldom thinks so), but that does not affect the question for those to whom a holiday is holy. Immortality is the great holiday; and a holiday, like the immortality in the old theologies, is a double-edged privilege. But wherever it is genuine it is simply the restoration and completion of the man. If people ever looked at the printed word under their eye, the word “recreation” would be like the word “resurrection,” the blast of a trumpet.

And here we come back to where I started—that on the seventh day, God rested. We are not to suppose, of course, that God was tired. Nor are we even to suppose that God stopped creating creation—for if he were to do that, there would not be another moment, and creation would be at an end. Creation has no independent existence that could go on without God.

So what are we to make of God’s resting on the seventh day, for it must be very unlike human rest?

One thing I’ve heard is that the ancient Jewish idea of rest is a much more active one than our modern concept of falling down in exhaustion. It involves, so I’ve heard, the contemplation of what was done. Contemplation involves the enjoyment of what is done. What we seem to have is a more extended version of “and God looked on all that he had made and saw that it was good”.

There is another aspect, I think, too, which is that God’s creative action can be characterized into two types, according to our human ability to understand it—change and maintenance. In the first six days we have change, as human beings easily understand it. New forms of being arise that are different enough from each other that we can have words to describe them. We can, in general, so reliably tell the difference between a fish and a bush that we give them different names. But we cannot so reliably tell the difference between a fish at noon and that same fish ten minutes later, even though it has changed; we just call them both “fish” and let that suffice because we cannot do better. Thus God’s rest can also be seen as the completion of the large changes, which we easily notice, and the transition to the smaller changes, which we have a harder time noticing or describing.

I’m thinking about this because I recently sent the manuscript of Wedding Flowers Will Do for a Funeral off to the publisher. It’s not done, because there will be edits from the editor, but for the moment there is nothing for me to do on it. I finally have time—if still very limited time owing to having three young children—to do other projects, but I’m having a hard time turning to them.

My suspicion is that I need to spend some time resting, which is what put me in mind of this.

Wedding Flowers Is Off to the Editor

For anyone who is interested in my novels: a few days ago I sent the manuscript of Wedding Flowers Will Do For a Funeral (the second chronicle of Brother Thomas) off to Silver Empire publishing (they published the first Chronicle of Brother Thomas). Next come edits, and if all goes well it will be published in the first half of 2020. It’s been a long time coming, and I’m really looking forward to finally having it published.

Sequels Shouldn’t Reset To the Original

One of the great problems that writers have when writing sequels is that, if there was any character development in a story at all, its sequel begins with different characters, and therefore different character dynamics. If you tell a coming-of-age story, in the sequel you’ve got someone who already came of age, and now you have to tell a different sort of story. If you tell an analog to it, such as a main character learning to use his magical powers or his family’s magic sword or his pet dragon growing up or what-have-you, you’ve then got to start the next story with the main character being powerful, not weak.

One all-too-common solution to this problem is to reset the characters. The main character can lose his magic powers, or his pet dragon flies off, or his magic sword is stolen. This can be done somewhat successfully, in the sense of the change not being completely unrealistic, depending on the specifics, but I argue that in general, it should not be.

Before I get to that, I just want to elaborate on the depending-on-the-specifics part. It is fairly viable for a new king with a magic sword to lose the sword and have to go on a quest to get it back, though it’s better if he has to entrust it to a knight who will rule in his absence while he goes off to help some other kingdom. Probably the most workable version of this is the isekai story—a type of story, common in Japanese manga, light novels, and animation, where the main character is magically abducted to another world and needs to help there. Being abducted to another world works pretty well.

By contrast, it does not work to do any kind of reset in a coming-of-age story. It’s technically viable to have the character fall and hit his head and forget everything he learned, but that’s just stupid. Short of that, people don’t come of age and then turn back into people with no experience who’ve never learned any life lessons.

So why should resets be avoided even when they work? There are two main reasons:

  1. It’s throwing out all of the achievements of the first story.
  2. It’s lazy writing.

The first is the most important reason. We hung in with a character through his trials and travails to see him learn and grow and achieve. If the author wipes this away, it is as if none of it ever happened. And there’s something worse: it’s Lucy pulling the football away.

If the author is willing to say, “just kidding” about character development the first time, why should we trust that the second round of character development was real this time? Granted, some people are gullible—there will be people who watch the sequel to The Least Jedi. I’m not saying that it’s not commercially viable. Only that it makes for bad writing.

Which brings me to point #2: it’s lazy writing to undo the events of the original in order to re-write it a second time. If one takes the lazy way out in the big picture, it sets one up to take the lazy way out in the details, too. Worse, since the second will be an echo of the first, everything about it will either be the first warmed over or merely a reversal of what happened the first time. Except that these reversals will have to work out to the same thing, since the whole reason for resetting everything is to be able to write the same story. Since it will not be its own story, it will take nearly a miracle to make the second story true to itself given that there will be some changes.

A very good example of not taking the lazy way out is the movie Terminator 2. Given that it’s a movie about a robot from the future which came back in time to stop another robot from the future from killing somebody, it’s a vastly better movie than it has any right to be. Anyway, there’s a very interesting bit in the director’s commentary about this. James Cameron pointed out that in most sequels, Sarah Connor would have gone back to being a waitress, just like she was in the first movie.

But in Terminator 2, she didn’t. James Cameron and the other writer asked themselves what a reasonable person would do if a soldier from the future came back and saved her from a killer robot from the future, and impregnated her with the future leader of the rebellion against the robots? And the answer was that she would make ties with gun runners, become a survivalist, and probably seem crazy.

We meet her doing pullups on her upturned bed in a psychiatric ward.

Terminator 2, despite having the same premise, is a very different movie from Terminator because Terminator 2 takes Terminator seriously. There are, granted, some problems because it is a time travel story and time travel stories intrinsically have plot holes. (Time travel is, fundamentally, self-contradictory.) That said, Terminator and Terminator 2 could easily be rewritten to be about killer robots from the Robot Planet where the robots have a prophecy of a human who will attack them. That aside, Terminator 2 is a remarkably consistent movie, both with itself and as a sequel.

Another good example, which perhaps illustrates the point even better, is Cars 2. The plot of Cars, if you haven’t seen it, is that a famous race car (Lightning McQueen) gets sentenced to community service for traffic violations in a run-down town on his way to a big race. There he learns personal responsibility, what matters in life, and falls in love. Then he goes on to almost win the big race, but sacrifices first place in order to help another car who got injured. (If you didn’t figure it out, the cars are alive in Cars.)

The plot of Cars 2 is that McQueen is now a champion race car and takes part in an international race. At the same time, his buddy from the first movie, Mater, is mistaken for a spy and joins a James Bond-style espionage team to find out why and how an international organization of evil (I can’t recall what they’re called; think C.H.A.O.S. from Get Smart or S.P.E.C.T.R.E. from James Bond) is sabotaging the race. McQueen is not perfect, but he is more mature and does value the things he learned to value in the first movie. The main friction comes from him relying on Mater and Mater letting him down.

As you can see, Cars 2 did not reset Cars, nor did it try to tell Cars over again. In fact, it was so much of a sequel to Cars, which was a coming-of-age movie, that it was a completely different sort of movie. This was a risk, and many of the adults who liked Cars did not like Cars 2, because it was so different. This is the risk to making sequels that honor the first story—they cannot be the first story over again, so they will not please everyone who liked the first story.

Now, Cars 2 is an interesting example because there was no need to make it a spy thriller. Terminator 2 honored the first movie and was still an action/adventure where a killer robot has come to, well, kill. But there was a practical reason why Cars 2 was in a different genre from its predecessor while Terminator 2 was not: most everyone knows how to grow up enough to not be a spoiled child, but precious few people in Hollywood have any idea how to keep growing from a minimally functioning adult into a mature one.

If one wants to tell a true sequel to a coming-of-age film, which mostly means a film in which somebody learns to take responsibility for himself, the sequel will be about him learning to take responsibility for others. In practice, this means either becoming a parent or a mentor.

This is a sort of story that Hollywood has absolutely no skill in telling.

If you look at movies about parents or mentors, they’re almost all about how the parent/mentor has to learn to stop trying to be a parent/mentor and just let the child/mentee be whatever he wants to be.

Granted, while trying to turn another human being into one’s own vision, materialized, is being a bad parent and a bad mentor, just letting them be themselves is equally bad parenting and mentoring. What you’re supposed to do as a parent or a mentor is to help the person to become themselves. That is, they need to become fully themselves. They must overcome their flaws and become the perfect human being which God made them to be. That’s a difficult process for a person, which is why it takes so much skill to be a parent or a mentor.

There’s a lot of growth necessary to be a decent parent or mentor, but it’s more subtle than growing up from a child. Probably one of the biggest things is learning how much self-sacrifice is necessary—how much time the child or mentee needs, and how little time one will have for one’s own interests. How to balance those things, so one gives freely but does not become subsumed—that is a difficult thing to learn, indeed. That has the makings of very interesting character development.

The problem, of course, is that only people who have gone through it and learned those lessons are in a position to tell it—one can’t teach what one doesn’t know.

At least on purpose.

Art is a great testament to how much one can teach by accident—since God is in charge of the world, not men.

But I think that the world really could do with some (more) decent stories about recent adults learning to be mature adults. I think that they can be made interesting to general audiences.

The Scientific Method Isn’t Worth Much

It’s fairly common, at least in America, for kids to learn that there is a “scientific method” which tends to look something like:

  1. Observation
  2. Hypothesis
  3. Experiment
  4. Go back to 1.

It varies; there is often more detail. In general it’s part of the myth that there was a “scientific revolution” in which at some point people began to study the natural world in a radically different way than anyone had before. I believe (though am not certain) that this myth was propaganda during the Enlightenment, which was a philosophical movement primarily characterized by being a propagandistic movement. (Who do you think gave it the name “The Enlightenment”?)

In truth, people have been studying the natural world for thousands of years, and they’ve done it in much the same way all that time. There used to be less money in it, of course, but in broad strokes it hasn’t changed all that much.

So if that’s the case, why did Science suddenly get so much better in the last few hundred years, I hear people ask. Good question. It has a good answer, though.

Accurate measurement.

Suppose you want to measure how fast objects fall. Now suppose that the only time-keeping device you have is the rate at which a volume of sand (or water) falls through a restricted opening. (I.e. your best stopwatch is an hourglass.) How accurately do you think that you’ll be able to write the formula for it? How accurately can you test that in experimentation?

To give you an idea, in physics class in high school we did an experiment where we had an electronic device that let long, thin paper go through it and it burned a mark onto the paper exactly ten times per second, with high precision. We then attached a weight to one end of the paper and dropped the weight. It was then very simple to calculate the acceleration due to gravity, since we just had to accurately measure the distance between the burn marks.

The groups in class got values between 2.8 m/s² and 7.4 m/s² (it’s been 25 years, so I might be a little off, but those are approximately correct). For reference, the correct answer, albeit in a vacuum while we were in air, is 9.8 m/s².
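To make the arithmetic concrete, here is a minimal sketch of the calculation, with fabricated mark positions standing in for a real strip of paper. Under constant acceleration, the second difference of consecutive mark positions equals g·Δt², so the spacing between the marks alone recovers the acceleration:

```python
# Hypothetical illustration of the burn-mark calculation. Under constant
# acceleration g, a mark made every dt seconds sits at d_n = 0.5 * g * (n*dt)^2,
# so the second difference of consecutive positions equals g * dt^2.
dt = 0.1  # seconds between burn marks (ten marks per second)
g_true = 9.8  # m/s^2, used only to fabricate the example data

# Fabricated mark positions along the paper strip, in meters.
positions = [0.5 * g_true * (n * dt) ** 2 for n in range(10)]

# Recover the acceleration from the spacing between the marks alone.
second_diffs = [
    positions[n + 1] - 2 * positions[n] + positions[n - 1]
    for n in range(1, len(positions) - 1)
]
g_estimate = sum(second_diffs) / len(second_diffs) / dt**2
print(f"estimated g = {g_estimate:.2f} m/s^2")  # ~9.80 with perfect data
```

With real paper the positions carry measurement error, which is exactly why our class got answers scattered between 2.8 and 7.4.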

The point being: until the invention of the mechanical clock, high-precision measurement of time was not really possible. It took people a while to think of that.

It was a medieval invention, by the way. Well, not hyper-precise clocks, but the technology needed to make them. Clocks powered by falling weights were common during the high medieval period, and the earliest existing spring-driven clock was given to Philip the Good, Duke of Burgundy, in 1430.

Another incredibly important invention for accurate measurement was the telescope. Telescopes first appeared in 1608, and spread like wildfire because they were basically just variations on eyeglasses (the first inventor, Hans Lippershey, was an eyeglass maker). Eyeglasses were another medieval invention, by the way.

And if you trace the history of science in any detail, you will discover that its advances were mostly due not to the magical properties of a method of investigation, but to increasing precision in the ability to measure things and make observations of things we cannot normally observe (e.g. the microscope).

That’s not to say that literally nothing changed; there have been shifts in emphasis, as well as the creation of an entire type of career which gives an enormous number of people the leisure to make observations and the money with which to pay for the tools to make these observations. But that’s economics, not a method.

One could try to argue that mathematical physics was something of a revolution, but it wasn’t, really. Astronomers had had mathematical models of things whose nature they didn’t actually know or inquire into since the time of Ptolemy. It’s really increasingly accurate measurements which allowed the mathematization of physics.

The other thing to notice is that anywhere that taking accurate measurements of what we actually want to measure is prohibitively difficult or expensive, the science in those fields tends to be garbage. More specifically, it tends to be the sort of garbage science commonly called cargo cult science. People go through the motions of doing science without actually doing science. What that means, specifically, is that people take measurements of something and pretend they’re measurements of the thing that they actually want to measure.

We want to know what eating a lot of red meat does to people’s health over the long term. Unfortunately, no one has the budget to put a large group of people into cages for 50 years and feed them controlled diets while keeping out confounding variables like stress, lifestyle, etc.—and you couldn’t get this past an ethics review board even if you had the budget for it. So what do nutrition researchers who want to measure this do? They give people surveys asking them what they ate over the last 20 years.

Hey, it looks like science.

If you don’t look too closely.

Sherlock Holmes and the Valley of Fear

I recently read the fourth and final Sherlock Holmes novel, The Valley of Fear. It’s an interesting book, or in some sense two books, the first of which I know to be interesting and the second of which I’m not really interested in reading.

(If anyone doesn’t want spoilers, now’s the time to stop reading.)

The book begins with Sherlock Holmes working out a cryptogram by reasoning to the key from the cipher. It’s a book cipher, keyed to a book which has many pages and two columns of text, so Holmes is able to guess that it’s an almanac. This is clever and enjoyable; the decoded message says that something bad is going to happen to a Douglas in Birlstone. Shortly after they decrypt it, a detective from Scotland Yard arrives to consult Sherlock Holmes about the brutal murder of Mr. Douglas of Birlstone. The plot thickens, as it were. This is an excellent setup for what is to follow.
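For anyone who hasn’t run into one, a book cipher is simple enough that a toy sketch shows the whole mechanism. The words below are made-up stand-ins, not the actual almanac text from the novel:

```python
# A toy sketch of a book cipher of the sort Holmes faces: sender and
# receiver share a common book, and the message is a list of word
# positions within an agreed page and column. The words here are
# hypothetical stand-ins, not the actual almanac page.
column_words = "there is danger may come very soon one rich country now at".split()

def decode(word_numbers, words=column_words):
    # Word numbers are 1-indexed, the way a person would count them.
    return " ".join(words[n - 1] for n in word_numbers)

# A made-up cipher: page and column are agreed in advance, then "3 4 5"
# means the 3rd, 4th, and 5th words of that column.
print(decode([3, 4, 5]))  # -> "danger may come"
```

The cleverness in the novel is that Holmes reasons backward from the structure of the cipher (large page numbers, two columns) to which book it must be, without having the book in hand.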

When Holmes arrives, we get the facts of the case, that Mr. Douglas lives in a house surrounded by a moat with a drawbridge, and was found in his study with his head blasted off with a sawed-off shotgun fired at close range. Any avid reader of detective fiction—possibly even at the time, given how detective fiction had taken off in short story form by 1914, when The Valley of Fear was written—will immediately suspect that the body is not the body it is supposed to be. However, Conan Doyle forestalls this possibility by the presence of a unique brand on the forearm of the corpse, which Mr. Douglas was known to have had. This helps greatly to heighten the mystery.

The mystery is deepened further by the confusing evidence that Mr. Douglas’s friend forged a footprint on the windowsill which was used to suggest that the murderer escaped by wading in the moat—which was only 3′ deep at its deepest—and ran away. Further confusing things, Dr. Watson accidentally observes Mrs. Douglas and Mr. Douglas’ friend being lighthearted and happy together.

Holmes then finds some additional evidence which convinces him of what really happened, which he does not tell us or the police about, which is not exactly fair play. He then sets in motion a trap where he has the police tell Mr. Douglas’ friend that they are going to drain the moat. This invites the reader to guess, and I’m not sure that we really have sufficient evidence at this point to guess. That’s not entirely true; we have sufficient evidence to guess, but not to pick among the many possible explanations of the facts given to us. It turns out that the dead man was the intruder, but it could have turned out otherwise, too. The facts, up till then, would have supported Mr. Douglas’ friend having been in on the crime, for example. That said, the explanation given does cover the facts very well, and is satisfying. It does rely, to some degree, on happenstance; none of the servants heard the gunshot, except for one half-deaf woman who supposed it to be a door banging. This is a little dubious, but investigation must be able to deal with happenstance because happenstance is real.

We then come to the part where Mr. Douglas is revealed and the mystery explained, at which point the narrative shifts over to explaining his history in America and why it was that there were people tracking him from America to England in order to murder him. This, I find very strange.

It is the second time in a novel that Conan Doyle did it. The first time was in A Study in Scarlet, where the middle half of the book (approximately) took place in America. I really don’t get this at all.

I suspect it makes more sense in the original format of the novels, which were serialized in magazines. It would not be so jarring, in a periodical magazine, to have to learn new characters, since one would to some degree need to reacquaint oneself with the already-known characters anyway. Possibly it also speaks to Conan Doyle having not paced himself well, being more used to short stories, and needing to fill the novel with something else.

The very end of the book, when we return to the present in England, is a very short epilogue. Douglas was acquitted as having acted purely in self-defense, but was then murdered by Moriarty while taking Holmes’s advice to flee England because Moriarty would be after him.

That the book takes such an interest in Moriarty is very curious, given that it was written in 1914 while Moriarty was killed off in 1893 (actually in 1891, but The Final Problem was published in 1893). Holmes was brought back in 1903, in The Adventure of the Empty House, where it is confirmed that Moriarty died at the Reichenbach Falls. So we have a novel which is clearly set prior to the death of Moriarty, establishing him as a criminal mastermind, almost 15 years after he was killed off. What’s even stranger about it is that Moriarty barely features in the story. He’s in the very beginning, mentioned only in connection to the cryptogram and as having something to do with the murder, but neither he nor his men actually tried to carry out the murder. His involvement was limited to finding out where Douglas was, so the American who was trying to murder Douglas could try. He naturally makes no appearance in the story of Douglas’ adventures in America, and only shows up in a note at the end of the book:

Two months had gone by, and the case had to some extent passed from our minds. Then one morning there came an enigmatic note slipped into our letter box. “Dear me, Mr. Holmes. Dear me!” said this singular epistle. There was neither superscription nor signature. I laughed at the quaint message; but Holmes showed unwonted seriousness.

Moriarty is indicated to have killed Douglas off the Cape of South Africa, and the book ends with Holmes’s determination to bring Moriarty to justice.

Which would be a great setup for Holmes bringing Moriarty to justice in a later book, but we already read about it in an earlier book. It doesn’t really help to flesh the character out, it’s not really needed for the plot of the book, and it serves to end the book on a note of failure rather than of triumph. I do not understand it. Perhaps its purpose is to help increase the grandeur of Holmes’ previous victory over Moriarty? But that is a strange thing to do. Perhaps it was the reverse—a note of caution to fans of Holmes that no man, not even Sherlock Holmes, is omnipotent?

Why Moderns Always Modernize Stories

Some friends of mine were discussing why it is that modern tellings of old stories (like Robin Hood) are always disappointing. One put forward the theory that it’s because they can’t just tell the story; they have to modernize it. He’s right, but I think it’s important to realize why it is that modern storytellers have to modernize everything.

It’s because they’re Modern.

Before you click away because you think I’m joking, notice the capital “M”. I mean that they subconsciously believe in Modern Philosophy, which is the name of a particular school of philosophy which was born with Descartes, died with Immanuel Kant, and has wandered the halls of academia ever since like a zombie—eating brains but never getting any smarter for it.

The short, short version of this rather long and complicated story is that Modern Philosophy started with Descartes’ work Discourse on Method, though it was put forward better in Meditations on First Philosophy. In those works, Descartes began by doubting literally everything and seeing if he could trust anything. Thus he started with the one thing he found impossible to doubt—his own existence. It is from this that we get the famous cogito ergo sum: I think, therefore I am.

The problem is that Descartes had to bring in God in order to guarantee that our senses are not always being confused by a powerful demon. In modern parlance we’d say that we’re not in The Matrix. They mean the same thing—that everything we perceive outside of our own mind is not real but being projected to us by some self-interested power. Descartes showed that from his own existence he can know that God exists, and from God’s existence he can know that he is not being continually fooled in this way.

The problem is that Descartes was in some sense cheating—he was not doubting that his own reason worked correctly. But this is itself doubtable, and once doubted, completely irrefutable. All refutations of doubting one’s intellect necessarily rely on the intellect being able to work correctly to follow the refutations. If that is itself in doubt, no refutation is possible, and we are left with radical doubt.

And there is only one thing which is certain, in the context of radical doubt: oneself.

To keep this short, without the senses being considered at least minimally reliable there is no object for the intellect to feed on, but the will can operate perfectly well on phantasms. So all that can be relied upon is will.

After Descartes and through Kant, Modern Philosophers worked to avoid this conclusion, but progressively failed. Kant killed off the last attempts to resist this conclusion, though it is a quirk of history that he could not himself accept the conclusion and so basically said that we can will to pretend that reason works.

Nietzsche pointed out how silly willing to pretend that reason works is, and Modern Philosophy has, for the most part, given up that attempt ever since. (Technically, with Nietzsche, we come to what is called “post-modernism”, but post-modernism is just modernism taken seriously and thought out to its logical conclusions.)

Now, modern people who are Modern have not read Descartes, Kant, or Nietzsche, of course, but these thinkers are in the water and the air—one must actively reject them in order not to breathe and drink them in. Modern people have not done that, so they hold these beliefs but for the most part don’t realize it and can’t articulate them. As Chesterton observed, if a man won’t think for himself, someone else will think for him. Actually, let me give the real quote, since it’s so good:

…a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy…

(From The Revival of Philosophy)

In the context of the year of our Lord’s Incarnation 2019, what Christians like my friends mean by “classic stories” are mostly stories of heroism. (Robin Hood was given as an example.) So we need to ask what heroism is.

There are various useful definitions of what a hero is; for the moment I will define a hero as somebody who gives of himself (in the sense of self-sacrifice) that someone else may have life, or have it more abundantly. Of course, stated like this it includes trivial things. I think that there is simply a difference of degree but not of kind between trivial self-gift and heroism; heroism is to some degree merely extraordinary self-gift.

If you look at the classic “hero’s journey” according to people like Joseph Campbell, but less insipidly as interpreted by George Lucas, the hero is an unknown and insignificant person who is called to do something very hard, which he has no special obligation to do, but who answers this call and does something great, then after his accomplishment, returns to his humble life. In this you see the self-sacrifice, for the hero has to abandon his humble life in order to do something very hard. You further see it as he does the hard thing; it costs him trouble and pain and may well get the odd limb chopped off along the way. Then, critically, he returns to normal life.

You can see elements of this in pagan heroes like Achilles, or to a lesser degree in Odysseus (who is only arguably a hero, even in the ancient Greek sense). They are what C.S. Lewis would call echoes of the true myth which had not yet been fulfilled.

You really see this in fulfillment in Christian heroes, who answer the call out of generosity, not out of obligation or desire for glory. They endure hardships willingly, even unto death, because they follow a master who endured death on a cross for their sake. And they return to a humble life because they are humble.

Now let’s look at this through the lens of Modern Philosophy.

The hero receives a call. That is, someone tries to impose their will on him. He does something hard. That is, it’s a continuation of that imposition of will. Then he returns, i.e. finally goes back to doing what he wants.

This doesn’t really make any sense as a story, after receiving the call. It’s basically the story of a guy being a slave when he could choose not to be. It is the story of a sucker. It’s certainly not a good story; it’s not a story in which a character’s actions flow out of his character.

This is why we get the modern version, which is basically a guy deciding on whether he’s going to be completely worthless or just mostly worthless. This is necessarily the case because, for the story to make sense through the modern lens, the story has to be adapted into something where he wills what he does. For that to happen, and for him not to just be a doormat, he has to be given self-interested motivations for his actions. This is why the most characteristic scene in a modern heroic movie is the hero telling the people he benefited not to thank him. Gratitude robs him of his actions being his own will.

A Christian who does a good deed for someone may hide it (“do not let your left hand know what your right is doing”) or he may not (“no one puts a light under a bushel basket”), but if the recipient of his good deed knows about it, the Christian does not refuse gratitude. He may well refuse obligation; he may say “do not thank me, thank God”, or he may say “I thank God that I was able to help you,” but he will not deny the recipient the pleasure of gratitude. The pleasure of gratitude is the recognition of being loved, and the Christian values both love and truth.

A Modern hero cannot love, since to love is to will the good of the other as other. The problem is that the other cannot have any good beside his own will, since there is nothing besides his own will. To do someone good requires that they have a nature which you act according to. The Modern cannot recognize any such thing; the closest he can come is the other being able to accomplish what he wills, but that is in direct competition with the hero’s will. The same action cannot at the same time be the result of two competing wills. In a zero-sum game, it is impossible for more than one person to win.

Thus the modern can only tell a pathetic simulacrum of a hero who does what he does because he wants to, without reference to anyone else. It’s the only way that the story is a triumph and not the tragedy of the hero being a victim. Thus instead of the hero being tested, and having the courage and fortitude to push through his hardship and do what he was asked to do, we get the hero deciding whether or not he wants to help, and finding inside himself some need that helping will fulfill.

And in the end, instead of the hero happily returning to his humble life out of humility, we have the hero filled with a sense of emptiness because the past no longer exists and all that matters now is what he wills now, which no longer has anything to do with the adventure.

The hero has learned nothing because there is nothing to learn; the hero has received nothing because there is nothing to receive. He must push on because there is nothing else to do.

This is why Modern tellings of old stories suck, and must suck.

It’s because they’re Modern.

Meek is an Interesting Word

Somebody asked me to do a video on the beatitude about meekness, so I’ve been doing some research on the word “meek”. Even though I don’t speak from a place of authority, talking about the beatitudes still carries a lot of responsibility.

The first problem that we have with the word “meek” is that it is not really a modern English word. It’s very rarely used as a character description in novels, and outside of that, pretty much never. So we have to delve back into history and etymology.

The OED defines meek as “Gentle. Courteous. Kind.” It comes from a Scandinavian root. Various Scandinavian languages have an extremely similar word which means, generally, “soft” or “supple”.

Next, we turn to the original Greek:

μακάριοι οἱ πραεῖς, ὅτι αὐτοὶ κληρονομήσουσιν τὴν γῆν

To transliterate, for those who don’t read the Greek alphabet:

makarioi hoi praeis, hoti autoi kleronomesousin ten gen.

Much clearer, I’m sure. Bear with me, though, because I will explain. (I’m going to refer to the words in the English transliteration to make it easier to follow.)

The beatitudes generally have two halves. The first half says that someone is blessed, while the second half gives some explanation as to why. This beatitude has this form. Who is blessed is the first three words, “makarioi hoi praeis”. In the original the verb is left understood, but this is usually translated as “blessed are the meek”. The second half, “hoti autoi kleronomesousin ten gen” is commonly translated “for they shall inherit the earth”.

Let’s break the first half down a little more, because both major words in it are very interesting (“hoi” is just an article; basically it’s just “the”). The first word, “makarioi”, can actually be translated in English either as “blessed” or as “happy”, though it should be noted happy in a more full sense than just the pleasant sensation of having recently eaten on a sunny day with no work to do at the moment.

I’ve noticed that a lot of people, or at least a lot of my fellow Americans, want to take “blessed”, not as an adjective, but as a future conditional verb. Basically, they want to take Christ, not as describing what presently is, but as giving rules with rewards that are attached. This doesn’t work even in English, but it’s even more obvious in Greek where makarioi is declined to agree with the subject, “hoi praeis”. Christ isn’t telling us what to do and offering rewards. He’s telling us that we’re looking at the world all wrong, and why.

The other part, “hoi praeis”, is what gets translated as “the meek”, though I’ve also seen “the gentle”. It is the noun form of an adjective, “praios” (“πρᾷος”), which (not surprisingly) tends to mean mild or gentle.

Now, to avoid a connotation which modern English has accrued over hundreds of years of character descriptions in novels, it does not mean weak, timid, or mousy. The Wiktionary entry for praios has some usage examples. If one peruses them, they are things like asking a god to be gentle, or saying that a king is gentle with his people.

So translating the first half very loosely, we might render the beatitude:

Those who restrain their force have been blessed, for they will inherit the earth.

This expanded version of the beatitude puts it in the group of the beatitudes which refer to something under the control of the people described as “makarios” (blessed, happy). Consider the other group of people, which covers roughly half of the beatitudes: “the poor in spirit,” “those who mourn”, “those who hunger and thirst for righteousness”, “those who are persecuted in the cause of righteousness,” and “you when people abuse you and persecute you and speak all kinds of calumny against you falsely on my account”.

I think that this really makes it clear that what is being described is a gift, though a hard-to-understand one. So what do we make of the other beatitudes, the ones under people’s control?

Just as a quick refresher, they are: “the meek”, “the merciful”, “the pure in heart”, and “the peacemakers”. They each have the superficial form of there being a reward for those who do well, but if we look closer, the reward is an intrinsic reward. That is, it is the natural outcome of the action.

So if we look closely at the second half of the meek beatitude, we see that indeed it is connected to the first half: “for they will inherit the earth”. This is often literally the case: those who fight when they don’t have to, die when they don’t have to, and leave the world to those who survive them.

Now, I think too much can be made of “the original context”—our Lord was incarnate in a particular time and spoke to particular people, but they were human beings and he was also speaking to all of us. Still, I think it is worth looking at that original context, and how in the ancient world one of the surest paths to glory was conquest. Heroes were, generally, warriors. They were not, as a rule, gentle. Even in more modern contexts where war is mechanized and so individuals get less glory, there are still analogs where fortune favors the bold. We laud sports figures and political figures who crush their enemies in metaphorical, rather than literal, senses.

Even on a simpler level, we can only appreciate the power that a man has when he demonstrates it by using it.

And here Christ is saying that those are happy who do not use their power when they don’t have to. And why? Because they inherit the earth. Glory is fleeting, and in the end one can’t actually do very much with it. Those who attain glory by the display of power do not, in putting that power on display, use it to do anything useful. They waste their power for show, rather than using it to build. And having built nothing, they will end up with nothing.

You can see this demonstrated in microcosm in a sport I happen to like: power lifting. It is impressive to see people pick up enormous weights. But what do they do with them once they’ve picked them up? They just put them back down again.

Now, the fact that this is in microcosm means that there can be good justifications for it; building up strength by lifting useless weights can give one the strength to lift useful weights, such as children, furniture, someone else who has fallen down, etc. And weightlifting competitions do serve the useful role of inspiring people to develop their strength; a powerlifting meet is not the same thing as conquering a country. But there is, none the less, a great metaphor for it, if one were to extend the powerlifting competition to being all of life. Happy are those who do not.

Strength vs. Skill

Many years ago, I was studying judo from someone who had done judo since he was a kid and was teaching for fun. He was not a very large man, but he was a very skilled one. One time, he told a very interesting story.

He was in a match with a man who was a body builder or a power lifter or something of that ilk—an immensely, extraordinarily strong man. He got the strong man into an arm bar, which is a hold in which the elbow is braced against something and the arm is being pulled back at the wrist. Normally if a person is in a properly positioned arm bar, this is inescapable and the person holding it could break his arm if he wanted to; this (joint locks) is one of the typical ways of a judo match ending—the person in the joint lock taps out, admitting defeat.

The strong man did not tap out.

He just curled his way out of the arm bar.

That is, his arm—in a very weak position—was so much stronger than my judo teacher’s large core muscles that he was able to overpower them anyway.

Next, my judo teacher pinned him down. In western wrestling, one can win a match by pinning the opponent’s shoulders to the ground for 3 seconds. In judo it’s a little more complicated, but the point which is important to the moment is that you have to pin the opponent such that he can’t escape for 45 seconds. Once he had pinned the strong man, the strong man asked him, “you got me?” My teacher replied, “yeah, I got you.” The strong man asked, “are you sure about that?” “Yes, I’m sure,” my teacher replied.

The strong man then grabbed my teacher by the gi (the stout clothing worn in judo) and floor-pressed him into the air, then set him aside. (Floor pressing is like bench pressing, only the floor keeps your elbows from going low enough to generate maximum power.)

Clearly, this guy was simply far too strong to ever lose by joint locks or pinning. So my teacher won the match by throwing him to the ground (“ippon”).

The moral of the story is not that skill will always beat strength, because clearly it didn’t, two out of three times. The moral of the story is also not that strength will always beat skill, since it didn’t, that final time.

The moral of the story is to know your limits and always stay within them.

It Cost $1 Billion to Tape Out a 7nm Chip

Making processors is getting very expensive. According to this report, the R&D needed to take a processor design and turn it into something that can be fabricated at the latest silicon node costs $1B.

https://www.fudzilla.com/news/49513-it-cost-1-billion-dollars-to-tape-out-7nm-chip

Each fabrication node (where the transistors shrink) has gotten more expensive. I suspect that economics will play as big a role in killing off Moore’s Law as physics will. Eventually no one will be able to afford new nodes, even if they are physically possible to create.

This is what an s-curve looks like.
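For anyone who wants to see the shape I mean, here is a minimal sketch of a logistic function, the standard mathematical model of an s-curve; the parameters are made up for illustration and are not fit to any real semiconductor data:

```python
# A minimal sketch of a logistic (s-shaped) curve: slow start, rapid
# middle, then saturation at a ceiling. Parameters are illustrative only.
import math

def logistic(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early on, growth looks exponential; near the ceiling, it stalls.
for t in range(-6, 7):
    bar = "#" * round(40 * logistic(t))
    print(f"t={t:+d} |{bar}")
```

The early part of an s-curve is indistinguishable from exponential growth, which is part of why Moore’s Law looked like a law for fifty years.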

A Michaelmas Book Sale

My friend and publisher, Russell Newquist, is having a Michaelmas sale this weekend on his books since they feature a modern day paladin who fights with the sword of Saint Michael (the archangel). If you’re in the mood for Catholic action-horror (Amazon calls it “Christian fantasy”) check out:

“Jim Butcher’s Harry Dresden collides with Larry Correia’s Monster Hunter
International in this supernatural thriller that goes straight to Hell!”

Also, the sequel:

“There’s a dragon in the church.”

I have to confess that these are still on my shelf waiting to be read, but I have read Russell’s short story Who’s Afraid of the Dark? (which is about a character who appears in War Demons and Vigil) and it was very good. So if you’re not busy writing murder mysteries and have time to read other people’s work, I strongly recommend checking them out.

This weekend the sale prices for War Demons are:
Ebook: $0.99
Paperback: $9.99
Hardcover: $19.99

The sale prices for Vigil are:
Ebook: $0.99
Paperback: $4.99

History Suffers From Academia

Academia has problems. This is an obvious statement, since it is an institution in a fallen world. It is worth looking at these problems in some depth, however, because they affect the various academic disciplines to varying degrees, and I think that History may be hit the hardest.

This occurred to me as I was reading the book A Catholic Introduction to the Bible: The Old Testament by Brant Pitre and John Bergsma. (It’s a massive tome that could be used to bludgeon a water buffalo to death, and I’m still in the introductory materials.) In the prefatory materials is an overview of biblical scholarship over the last two centuries, and this includes some theories which went from being novel, to being dominant, to mostly in the rubbish bin in recent times. And this got me thinking about the problems of academia and how they affect history.

The big problem of academia is that its currency is novelty. You can see this in the economic angle of publish or perish, to be sure, but even if publication didn’t affect people’s job prospects and salaries, it would still affect their reputations and standing as scholars. This selective pressure has an effect on scholars very akin to the effect the same pressure has on scientists. Those who come up with novelty, for whatever reason, will tend to publish more, and will thus receive more academic status. This has generational effects, since new scholars learn from and have to work with old scholars. (For more on that, see Who Works for Bad Scientists?.)

While this is true and important (and, as I say, no one takes evolution seriously enough), it’s not what I want to focus on today. Rather, I want to focus on what limits are inherent in a field to protect against novelty which is novel by being purely fictitious.

The prototypical example of a system which corrects against novelty-through-pure-fiction is science, but this is painting with an overly broad brush. It is not all sciences that do this, but rather experimental sciences. Theoretical physicists can spin theories about 11-dimensional string vibrations until the cows come home, and no one will ever notice that they don’t work because no one can run an experiment on the things.

The fiction-detecting aspect of experimental science only really works the way it’s advertised to in crowded fields of science. That is, it only works in fields of science where people will be doing the same experiments many times, or at least experiments whose results depend upon previous results. This is quite true of experimental physics, especially of basic physics such as newtonian mechanics. It gets less true the less crowded the field is. The more low-hanging fruit there is to pick, the more people will spend their time picking it rather than looking in each other’s baskets.

Biology, and to the degree that it is a sub-field of that, medicine, are good examples of non-dense fields. There are far more questions to ask than there are researchers with funding to try to find answers. There are exceptions for particularly hot topics where at least several researchers will try to ask the same question, but you just don’t find much in the way of people duplicating each other’s experiments to find out if they get the same results.

Worse, in medicine certain types of answers can make experiments unethical to try to replicate. If a drug shows a statistically significant result in a clinical trial, running such a clinical trial with a placebo group becomes unethical. All is not lost, since new drugs get run against “standard care” (i.e. the already approved drug) as a control group, so if the original drug is really just a placebo that got lucky, a real drug that comes along will prove better than it. There isn’t much of an ethical way of discovering that the drug is actually just a placebo (with side effects), though.

Chemistry may be the best field simply because chemistry is so closely tied to engineering, and the purpose of engineering is to replicate the heck out of whatever experiment was run. That is, engineering replicates findings by putting things into production on industrial scales, and a million LED lights shipped to Home Depot and Lowe’s and other such stores replicate the finding that Indium, Gallium, and Nitrogen, when mixed correctly and an electric current is run through them, emit blue light. (Indium Nitride and Gallium Nitride are the semiconductors used in the blue LED.) But this is far and away the best case, and since it is industrial, it is definitionally not academic.

So what is it about the sciences, where there is some limit on fictitious novelty, that provides this limit? It’s that the test is not whether or not it seems plausible to another human being, but whether or not it actually works when one tries it.

This is, largely, not available in history. It is not entirely absent, as there are historical theories which can be disproved or confirmed by subsequent archaeological finds. These are, however, fairly rare. They are extremely rare in ancient history, such as biblical studies.

The particular theory which got me thinking about this was the Documentary Hypothesis, which was, more or less, a theory created by a liberal Protestant in the 1800s that the traditional attribution of the Pentateuch to Moses was ahistorical.

What amazing new archaeological evidence was unearthed that gave rise to this theory? Why, none at all. The guy who came up with it looked at the text and spun his theory out of air. He decided to categorize the various lines and paragraphs and stories in the Pentateuch on the basis of which he thought similar and which he thought different, then took these groupings he thought similar and different from other groupings and attributed them to different authors, at different times.

Of course, being a liberal Protestant, he decided that the religion started pure and simple and later got corrupted by law and liturgy. This appealed to his prejudices, and also gave him an interpretive framework to pull out of thin air approximate dates for the various authors he also pulled out of thin air.

And this was a huge hit in academia? Why? There are many threads which went into it, of course, but a significant one is that it was novel. It was new, and fresh, and exciting. It produced an enormous amount of work for scholars to do—someone had to go through the Pentateuch line by line and classify every line according to which theoretical author wrote it.

I think even more than being novel, this produced low-hanging fruit. That is, it made for relatively easy work.

One of the problems that a modern Christian who wants to write about scripture is up against is that he’s implicitly competing with twenty centuries of the most brilliant people in the world, because the brilliant people who had interesting things to say about scripture wrote them down, and Christians valued those writings and kept them and passed them on. There is, at present, an astonishing amount of excellent reading material, if one wants it, available from the saints and doctors of the church. What can one say, today, that has not already been said, and better?

Here, a new theory which overturns everything is the savior of the man in the bottom 99.99% of humanity by genius or talent or whatever metric one wants to use (i.e. the man not in the top 0.01%). With everything overturned, there is new, fresh work to be done that most anyone can do, but that no one has done before, because no one could have done it before.

And who will bite the hand that feeds him? Certainly, the academic is not known for this counter-productive activity any more than any of his fellow men are.

So we get a century of people arguing over imaginary authors that they have not a shred of evidence for, mostly because it’s easier and more fun than real work.

From what I understand, a similar thing happened with Plato. At some point someone decided that most of the Socratic dialogues weren’t written by Plato because… they didn’t sound the same to him. Skip forward by about a century, and scholarly consensus once again becomes that Plato wrote the stuff commonly attributed to him. What did the intervening century have to show for it? A lot of sound and fury, signifying nothing.

Oh, and a lot of employed academics.

America’s Sweethearts

I’ve written before about the movie America’s Sweethearts. I would like to add to those thoughts, since I’ve watched it a few more times since then. (It’s one of a handful of movies I watch while debugging code: it helps to keep me from getting distracted while I wait for compiles, and I know it so well that it doesn’t pull me away from the work, since I always know what happens next.)

One of the very curious things about the movie America’s Sweethearts is that all of its characters are bad. (For those who are not familiar with it, America’s Sweethearts is a romantic comedy.) The movie opens with the information that the titular couple, Eddie Thomas and Gwen Harrison, has split. During the filming of their most recent and now last movie together, Time Over Time, Gwen took up with a Spanish actor and left Eddie. Eddie went crazy and tried to kill them, then retreated to a sort of faux-Hindu wellness center and stayed there.

This is recapped fairly early on; the plot of America’s Sweethearts begins with the director of Time Over Time refusing to show the movie to the head of the studio until the press junket, when the press would see it at the same time as everyone else. This causes the head of the studio to panic and re-hire Lee, the studio’s publicist whom he had fired as a cost-saving measure, to put together the junket, because his talents really do match his salary. The only other major character is Kiki, Gwen’s sister (it’s unspecified who is older; they might even be fraternal twins, which would help to explain shared high school experiences). She’s a mousy creature whose life is mostly taken up with pleasing the whims of her famous sister, but she’s played by Julia Roberts, so you know that won’t last through the end of the movie.

We now have all of the major characters: an adulterer, a lunatic, an unscrupulous businessman, a wimpy woman who lets herself be tyrannized by her awful sister, and a publicist who follows the line which Hercule Poirot’s friends said of him: he would never tell the truth if a lie would suffice.

And what’s really weird is that they’re a loveable cast, and it’s a really enjoyable movie, even though it is not a redemption arc for most of them.

I think that part of what makes it work—apart from the massive charisma of all of the actors, which cannot be overstated as a causal element—is that the characters’ vices, while not repented of, are not excused, either.

The movie has something like a happy ending for about half of the characters in it, but it is very fitting because it’s a very small happy ending. The head of the studio gets a movie which has a lot of legal liabilities but which might make enough money to cover them. The publicist has what is probably going to be a successful movie. The adulterer is embarrassed, but she stays with her Spaniard for whatever that is worth. Eddie and Kiki wind up together, but shortly before they decide to give it a try, Kiki prognosticates that it’s never going to work, and she might well be right.

I think that ultimately what makes the movie work is the subconsciously stoic theme that vice is its own punishment, and so successful vice is still punished vice. America’s Sweethearts is all about people who do not deserve their natural virtues—beauty, fame, wealth, power—who are punished by getting to keep them. But—and this is an important but—the movie is so short that one is left with the hope that the punishment may serve its purpose and the people may in time learn to repent.

This may be the formula for all successful movies about vicious people (that is, people who practice vice), at least where they do not repent. Redemption stories are probably better. But if a story about vicious people is not going to be about their redemption, I think the story of how they are punished by success may be the only other option for a good story.

Because good stories need to be true to life.

Science Fiction vs. Fantasy

On a Twitter thread, I proposed the idea that the main distinction between Science Fiction and Fantasy is whether people prefer spandex uniforms or robes.

I did mean this in a tongue-in-cheek way; obviously wardrobe is not the only difference between Science Fiction and Fantasy. The distinction is curiously harder to define than one would first suspect, though.

Before proceeding, I’d like to note that genres are not, or at least are not best considered as, normative things which dictate what books should be. Rather, they are descriptions of books for the sake of potential readers. The purpose of a genre is “if you like books that have X in them, you might like this book”. (The normative aspect comes primarily from the idea of not deceiving readers, but that runs into problems.)

Science Fiction is often described as extrapolating the present. The problem is that this is simply not true in almost all cases. It is very rare for Science Fiction to include only technology which is known to be workable within the laws of nature which we currently know. This is doable, and from what I’ve heard The Martian does an excellent job of this. At least by reputation, the only thing it projects into the future which is not presently known to be possible is funding. This is highly atypical, though.

The most obvious example is faster-than-light travel. This utterly breaks the laws of nature as we know them. Any Science Fiction story with faster-than-light travel is as realistic a projection of the future as is one in which people discover magic and the typical mode of transportation is flying unicorns.

I have seen attempts to characterize science fiction based on quantitative measures of how much of the science is fictional. This fails in general because fantasy typically requires only the addition of one extra energy field (a “mana” field, if you will) to presently known physics. And except for stories in which time travel is possible, the addition of a mana field is far more compatible with what we know of the laws of nature than faster-than-light travel is.

Now, one possibility, which I dislike and am not committed to, is that Science Fiction is inherently atheistic fantasy: fantasy without the numinous. An alternative formulation is that Science Fiction is fantasy in which there is no limit to the power which any random human being can acquire.

What I think might be the better distinction between Science Fiction and Fantasy is that Science Fiction is fantasy in which the author can convince the reader that the story is plausibly a possible future of the present. What matters is not whether, on strict examination, the possible future is actually possible. What matters is whether the reader doesn’t notice. And for a great many readers of Science Fiction, I suspect that they don’t want to notice.

In many ways, the work of a Science Fiction writer might be like that of an illusionist: to fool someone who wants to be fooled.

This puts Star Wars in a very curious place, I should note, since Star Wars is very explicitly not a possible future. But Star Wars has always been very dubiously Science Fiction. Yes, people who like Science Fiction often like Star Wars, but this doesn’t really run the other way: people who like Star Wars are not especially likely to like other science fiction. I personally know plenty of people who like space wizards with fire swords who do not, as a rule, read Science Fiction.

Anyway, even this is a tentative distinction between the two genres. It’s not an easy thing to get a handle on, because it’s impossible to know hundreds of thousands of readers well enough to identify the commonalities in their preferences. Even the classification of books into genres by publishers and book stores is only a guess, made by fallible people, as to what will get people to buy books.

Murder For Revenge

In broad strokes, there are only a few reasons to murder someone:

  1. Gaining money or other forms of power
  2. To pave the way for love
  3. Revenge
  4. To gain status that properly belongs to the victim
  5. To protect one’s status

These correspond, roughly, to the deadly sins:

  1. Greed
  2. Lust
  3. Wrath
  4. Envy
  5. Vanity

Today I want to consider murder for revenge. It further subdivides into two possible situations:

  1. The murderer is fine with being destroyed in the process
  2. The murderer wishes to suffer no repercussions

The former can make an interesting story (such as the sub-plot in Chesterton’s The Sins of Prince Saradine), but it’s not easy for it to sustain a mystery. The main problem is that the murderer should, by hypothesis, confess. This can, however, be handled.

The first way to handle this is to have the murderer leave. This is hard to make work unless he thinks the crime won’t be discovered and so no explanation is necessary. It can be done, though, especially for historical crimes discovered and investigated only years later.

The second way to handle this is to have the murderer leave a confession before departing, but to have the confession intercepted by someone who wants to use the occasion to murder a third party by framing him for the crime. This is a very workable sort of plot, though it will be complicated.

The third major way is to kill the murderer before he can confess. This may be the most interesting option, especially if he is killed by the victim. Of course, if the murderer is murdered by his victim, this will not be mysterious unless at least one of them uses a scheme for which he does not need to be present, which is where the interesting part comes from. It is very hard to suspect a dead man of murder. If there is anyone one will leave off suspecting of a crime, it’s a dead man.

In a Cadfael story (Saint Peter’s Fair) Hugh Beringar remarks that babes and drunks are the world’s only innocents. But this is not an exhaustive list. Who is so incapable of harm as a man already dead?

What’s especially interesting about two people who have murdered each other is that with any conniving at all, the author can contrive to have everyone suspect that they were both murdered by the same person, and that person would have to be a very strange one indeed, having two enemies with so little in common. It also means that the murders will seem to have been done very craftily when they were in fact done very simply. Or at least one of them will seem so. There are absolutely wonderful possibilities for misdirection here.

(I really want to write a story like this some day. I probably should first write a story with at least two victims at the start who were killed by the same person, so it’s not obvious, though.)

The other major option, which is more common because it can far more easily sustain a mystery, is for the one seeking revenge to wish to avoid repercussions for his crime. This provides a simple reason why he does not confess, and it can sustain a mystery with little difficulty.

It can, of course, be made far more complex than the simple case. The variation I suspect is most interesting, or at least that I personally find most interesting, is the complication of the passage of time. The years can separate the original offense from a present-day revenge, or they can separate both the offense and the revenge from the present.

Of the two, my favorite is probably the one where the revenge is recent but the offense is in the past. This is probably most classically done with the child who grows up to avenge a parent, though that version possibly should be avoided: it is common enough that, these days, the average reader might count the years since the old crime and guess the killer simply based on his age.

It comes to mind that an interesting way around that problem might be to give the murderer some scruple in his revenge, such as waiting for the 18th birthday of the victim’s youngest child, on the theory that his children should not be punished for the crime of their father. Something like that would throw a wrench into figuring out the culprit by simple calculations, at least.

There are more variations on murder for revenge, but this post is getting long enough that I think I’ll leave them for later. Enjoy writing your murder mysteries about revenge, and God bless you.

Dragnet

Something I find interesting on occasion is to look up the history of television shows. Television is a very young medium. Though the device itself was invented in the 1930s, the Great Depression and the Second World War, with their attendant economic privations, meant that televisions were not widely owned until the late 1940s. Without an audience, not much was made to broadcast to it. It was, therefore, really the early 1950s in which television got its start.

This makes it easy to research, but also makes the chain of influences fairly short.

Dragnet actually started as a radio drama, starring Jack Webb as Sergeant Joe Friday. In 1951, it became a television show with much the same cast as the radio drama, though Friday’s partner had to be changed out partway through. This show lasted until 1959. It was later revived in 1967, this time in color. This is the version I think most people are familiar with, starring Harry Morgan as Officer Bill Gannon alongside Jack Webb reprising his role as Joe Friday. Certainly it’s the version I’m most familiar with. It lasted until 1970.

There were other versions made, but none with Jack Webb, since he died in 1982 (at the age of 62). In 1987 there was a comedic movie starring Dan Aykroyd and Tom Hanks. It’s almost a parody of the original, though not a mean-spirited one, and I can testify that it is a lot of fun. In 1989 there was a short-lived series called The New Dragnet, and in 2003 there was an even shorter-lived revival series called LA Dragnet.

Though Dragnet was not able to survive in the modern world of police procedurals, or possibly just was not able to outlive its star, Jack Webb, it did have an enormous impact on television. Counterfactuals are impossible to state with certainty, but it seems likely that police procedurals would not have the form they have today if Dragnet had never happened.

Episodes of Dragnet, which are (surprisingly) easily found on YouTube, are interesting to watch. The detectives are in the homicide division, so in a very technical sense the cases are murder mysteries. However, they are not detective stories in the sense of Poirot or Agatha Christie. The detectives do a lot of work, of course, but they don’t really do anything particularly clever. They just keep talking to people until they get enough facts to convict the murderer.

What I find curious—given that I’m a huge fan of detective fiction with genius detectives and write some of it myself—is that, bare-bones as Dragnet is, it still satisfies the impulse to see a mystery solved. This is true of modern police procedurals as well. In both cases, they feel somewhat like empty calories—enjoyable while watching but they don’t really have any substance which sticks with one.

This is not true of the great detective stories. Murder on the Orient Express, Have His Carcase, Saint Peter’s Fair—these stories really stick with one. There are interesting ideas in them to chew on long after one’s read them.

But it’s a testament to the human craving to see mysteries solved that even Dragnet, told in an almost deliberately un-entertaining style, still makes you want to watch to the end to find out what happens, once you’ve watched the beginning. This may partially be a testament to the power of charisma, though. I can watch Harry Morgan in just about anything.