Forgotten Literary Influences

As I’ve mentioned, I’m reading the book Masters of Mystery: A Study of the Detective Story, written by H. Douglas Thomson in 1931. One of the things I’ve been getting out of it is an idea of which mystery novels were popular at the time, most of which I’d never heard of.

For full disclosure, the mystery authors I’ve actually read something by, from the early days and the golden age of mystery, are, in no particular order: Edgar Allan Poe, Sir Arthur Conan Doyle, G.K. Chesterton, Fr. Ronald Knox, Dorothy L. Sayers, Agatha Christie, S.S. Van Dine, and Mary Roberts Rinehart.

It’s not a long list, and not all of them have been by recommendation. I read Poe’s The Murders in the Rue Morgue because it was the first detective story. I read S.S. Van Dine’s The Benson Murder Case out of curiosity, since Van Dine had written up a set of rules of detective fiction I’ve seen referenced numerous times on Wikipedia. I read The Door by Mary Roberts Rinehart because it was supposed to be the origin of the phrase, “The Butler Did It”. (See my series on that phrase, if you haven’t read it yet.) And I’ve read most of Fr. Knox’s mysteries, but I started because he was a friend of G.K. Chesterton and because he wrote a famous ten commandments of detective fiction.

So if we subtract those, the mystery writers from that era which I’ve actually read because someone recommended them to me are: Conan Doyle, Chesterton, Sayers, and Christie. In my youth, that was my impression of the time period.

As I grew older, I realized that there must be other mystery writers of the time period that I was just unfamiliar with, but it was only in recent years that I came to appreciate just how popular a genre mystery was in those days, both to read and to write.

The thing which really drove it home to me was a short story entitled What, No Butler? about the accidental detective, Broadway. Here’s what I wrote about it at the time:

Incidentally, I looked up the two works cited. “What, No Butler?” seems to be a short story by Damon Runyon. I can’t find much information about it; according to Wikipedia it was in a book called Runyon on Broadway. It was performed on radio in 1946 and that performance is available on YouTube. I don’t know when it was originally published. The story does have humor in it, but to call it satire seems like quite a stretch. Early in the story, the character Broadway (who I believe is a theater critic) says authoritatively, upon finding out that a man was murdered, that the butler did it. When he’s told that the victim didn’t have a butler, he insists that they have to find the butler, because in every play he sees with a murder in it, the butler did it.

What caught my attention was the reference, not to novels or even to magazine stories, but to plays. I know of literally one detective play, The Mousetrap, by Agatha Christie, which I know about only because I was reading the Wikipedia article about Ms. Christie. (Incidentally, it is the longest-running play ever, having been performed continuously since 1952. Its 25,000th performance was in 2012.) There is evidence, though, that detective plays were fairly common.

This escaped me in no small part because plays have largely gone away as a form of common entertainment. Aside from high schools and community theater, plays are mostly a Broadway affair for wealthy people and tourists in NYC. (This is not quite true, as there are plays elsewhere, but it is approximately true.) Back in the day, however, they seem to have filled something like the role television fills now, with plays frequently written and performed for only a short time before being replaced by others. Television is a superior medium for this sort of fast-paced churn of mediocre writing, so it is natural that it displaced the theater. But in that vein, we might take all of the episodes of a show like Murder, She Wrote to be somewhat representative of what plays of the era might have been like. Here today, gone tomorrow, and only meant for an evening’s entertainment.

Another blind spot in my knowledge of the time was short stories printed in magazines. Because novels are the dominant form of written fiction in our day, I tend to think primarily of the novels written during the early days of detective fiction, or of collections of short stories. But in the late 1800s and early 1900s, magazines had enormous circulations and were apparently where the real money was in writing fiction. Even novels from the time period which we read today as novels were frequently first published as serializations in a magazine. And of course short stories were extremely popular.

Of these blind spots I was to some degree cognizant. What Masters of Mystery really drove home to me was the great number of popular detectives, even in novels, whom I had never heard of.

I had seen a few references to Dr. Thorndyke in the Lord Peter Wimsey stories—it turns out that he was the detective in R. Austin Freeman’s popular stories. There were many others I had not heard of, though, and the push and pull over what constitutes the ideal detective story, as each writer takes his turn at writing his own detective, is quite interesting to see.

Possibly the most interesting to me at the moment is Mr. A.E.W. Mason’s Inspector Hanaud. First appearing in a story published in 1910, he is thought to have had some influence on Agatha Christie’s Hercule Poirot, especially in physical description, but also in his brilliant intuition and psychological approach. Interestingly, in looking this up on Wikipedia to confirm some points, I ran into this:

Poirot’s name was derived from two other fictional detectives of the time: Marie Belloc Lowndes’ Hercule Popeau and Frank Howel Evans’ Monsieur Poiret, a retired Belgian police officer living in London.

So it seems that perhaps the second most famous detective of all time (the most famous being Sherlock Holmes) drew very heavy inspiration from a number of sources, most of which (aside from Holmes) have been long forgotten.

It is yet more evidence that it is not originality which matters, but the quality of execution.

That said, Agatha Christie was a very original writer. Not, precisely, in her subject matter, but in her approach to it. She managed to pull off things which others could not. Perhaps the greatest example of this is The Murder of Roger Ackroyd. Prior to this, Fr. Knox, in his decalogue, had given two rules which are here relevant:

  1. Rule 1: The criminal must be mentioned in the early part of the story, but must not be anyone whose thoughts the reader has been allowed to know.
  2. Rule 7: The detective himself must not commit the crime.

The Murder of Roger Ackroyd broke both, and did so not only well, but even fairly. So well and so fairly that in a 1938 commentary on his rules, Fr. Knox said:

The second half of the rule is more difficult to state precisely, especially in view of some remarkable performances by Mrs. Christie. It would be more exact to say that the author must not imply an attitude of mystification in the character who turns out to be the criminal.

One such ingenious story would be enough for anyone, but Mrs. Christie pulled off at least a second with her Murder on the Orient Express. This one did not break any rule of the decalogue, but it did break the generally unstated rule that there should be only one or two murderers. Instead, Mrs. Christie pulled off a story in which everyone (with a few minor exceptions) did it. Every suspect (and several non-suspects) turned out to be guilty. Her originality consisted not in the idea—“everyone did it” is the sort of thing anyone might think of—but in figuring out how to make it work.

This is something those of us writing today should take to heart. In English class in high school we hear much about originality and genius. The reality of writing novels is that what really matters is doing a good job.

Question for Readers

One of the downsides to blog statistics is that one gets relatively little information about what people are reading. (Except in the archives, but that’s more or less about search engine results than anything else.) So I have a question for you, gentle reader.

Of the various things that I write about, is there anything you’d like to see me write more about? Or, somewhat equivalently, when do you think this blog is at its best?

I thank you in advance for any answers you might give.

The Early Days of the Detective Story

As I mentioned, I’ve been reading the book Masters of Mystery: A Study of The Detective Story. The first chapter deals with the question of whether the detective story is literature, and if so, whether it is good literature. There are two things that particularly caught my attention: the enormous popularity of the detective story, and the basic morality of the detective story.

The first is very interesting because I’ve seen it in detective fiction from the era, but I never knew what to make of it. The example which most leaps out at me is Harriet Vane’s reception by the dons in Gaudy Night. A great many of them had read her books and were fans. It has almost the same feeling as the near-universal name recognition of Jessica Fletcher in Murder, She Wrote. In Jessica’s case, however, we know this to be a tremendous exaggeration. It was more plausible in the case of Harriet Vane, because television had not yet been invented and talkies (movies with recorded dialog) were only in their infancy. It is, therefore, interesting to see a description—granted, from an interested party—of how widespread the interest in detective stories was around 1930. It was popular with educated people, with common people, with respectable people—in short, there was no notable group of people not reading detective stories at the time.

The other interesting thing which leapt out at me was the critique of the detective story as dangerous to morals, and the response that the detective story was, fundamentally, a moral story. That is, the detective story takes as given the ordinary moral framework of right and wrong and man’s duty to do right and to refrain from doing wrong. This interests me so much, not because it is a revelation—it is, after all, obviously true—but because I’ve seen it used as an explanation for why the detective story is so enduringly popular even until our own times (I write this at the end of the year of our Lord 2019).

It has been argued (possibly even by me) that the detective story and its modern television cousin, the police procedural, are the only modern stories in which basic morality is taken for granted. It is curious to see that this was to some degree true even in the early days of detective stories.

An example given as contrast was An American Tragedy, which was the only assigned reading in high school I never finished. I just couldn’t stand the book; I made it about halfway through and gave up, reading the CliffsNotes instead of finishing the wretched thing. The very short version of it is that a young man makes all sorts of awful life choices and is eventually executed for murdering a woman he seduced (in order to be free to marry a rich woman). The main character is a bad man who learns nothing, and the book does not even appreciate the justice of his paying for his crime.

It was published in 1925.

Bad books have been around for quite a long time.

Talking of the Past in the Past

A few years ago a dear friend of mine gave me the book Masters of Mystery: A Study of The Detective Story, and I’ve finally started reading it. I’ll be writing about what it says about the detective story in another post; here I want to talk about something interesting in the timing of the book, and of the introduction which came later, as the copy I was given is actually a reprint.

Masters of Mystery was written by H. Douglas Thomson and originally published in 1931. The reprint and its foreword were made in 1978, three years short of the book’s fiftieth anniversary.

The book itself was written at an interesting time, given that 1931 was only the middle of the golden age of detective fiction and had yet to see most of the work of Agatha Christie and Dorothy L. Sayers, to name just two giants of the genre.

Further making it an interesting time, detective fiction was not that old. Granted, the first detective stories are generally reckoned to be Edgar Allan Poe’s Dupin stories, the first of which, The Murders in the Rue Morgue, was published in 1841. There seems to be fairly little—in English—before Conan Doyle published Sherlock Holmes in 1887. 1931 was a scant 44 years later. That is enough time for much to have happened, but it was still early days.

We come now to the foreword which interests me, written a somewhat longer time in the future and taking a historical look at how Masters of Mystery held up. It was written by E.F. Bleiler, who according to Wikipedia was “an American editor, bibliographer, and scholar of science fiction, detective fiction, and fantasy literature.” He worked as an editor at the American publisher Charles Scribner’s Sons at the time of the reprint, but as he had only left Dover in 1977 and it was Dover that did the reprint, it is possible that he wrote the foreword while still an editor there. He may have been, therefore, less an expert sought out for his opinion and more a man who happened to be around.

He praises the book, but also notes some weaknesses. Some criticisms may be fair, such as noting that Thomson leaves out much about the early days of detective fiction—for the understandable reason that not much was known of them, then or even now.

He makes the somewhat odd claim that detective mysteries at the time Thomson wrote were predominantly “house party” crimes. This is odd in that it’s simply false if predicated of the famous stories of the time. It was a common enough setting, but among the detective stories which have come down to us as of my writing, it certainly did not predominate. How common it was amongst the stories which have long since been forgotten, I cannot say.

The really interesting claim, though, is rooted firmly in its time:

Thomson’s critical standards were often a function of his day, but two more personal flaws in his work must be mentioned. His worst gaffe, of course, is his failure to estimate Hammett’s work adequately. While Hammett-worship may be excessive at the moment, it is still perplexing that Thomson could have missed Hammett’s imagination, powerful writing, and ability to convey a social or moral message. Related to this lacuna is Thomson’s lack of awareness of the other better American writers of his day, men who stood just as high as the better English writers that he praises. It was inexcusable to be unaware of the work of Melville D. Post, F.I. Anderson and T.S. Stribling. It is also surprising, since all three men were writers of world reputation at this time.

To deal with the last first: I’ve never heard of Post, Anderson, or Stribling. F.I. Anderson does not even have a Wikipedia page. Such is the short duration of fame, I suppose, that a man can be castigated, 47 years after his book, for not discussing famous men who, 41 years later still, are generally unknown.

Dashiell Hammett, I do of course know of. That said, it is funny to me to speak of Hammett as some sort of master that everyone must talk about. I’ve met exactly one person who seriously likes Dashiell Hammett’s writing, and I don’t even know his name—I struck up a conversation with him while waiting to pick up Chinese food one night.

I suspect that Hammett’s reputation in the 1970s was a product of the success of the movies based upon his books. The casting for The Maltese Falcon and The Thin Man was excellent, and anyone having seen them—as an editor working for Dover in 1977 almost certainly would have—cannot help but read the tremendous performances of the actors into the words on the page. If one does not picture Humphrey Bogart as Sam Spade, much of the magic is lost.

Again, I should note that in the main Bleiler’s foreword is positive and mostly about how Masters of Mystery is worth reading. I was merely struck by how much its retrospective criticisms were a product of their own time, but were phrased as if they were timeless.

Especially the Lies

There was a very interesting character in Star Trek: Deep Space 9: a deeply enigmatic figure who was basically a spy and/or secret police officer who had possibly defected. He was more or less in the position of a Gestapo agent who had fled Nazi Germany before the Nazis lost WWII. Instead of the Nazis it was the Cardassians, and instead of the Gestapo it was the Obsidian Order, but the basic structure holds.

This is an interesting character because one doesn’t know whether he left as a matter of principle, or if he was driven out merely by political considerations, or if he never left at all and his job as a tailor and status as a refugee are merely a cover. He is, of course, charming and charismatic, and denies ever having been of any importance, or a member of the Obsidian Order, and always claims that he’s “just plain, simple Garak.”

There’s an episode (or possibly a few episodes) in which his past is explored. I should note, in passing, my suspicion that, in usual TV fashion, the writers never did decide on a backstory. TV writers are much better at hints than at worked-out ideas. Be that as it may, it was interesting, and a number of highly conflicting stories surfaced about Garak’s past. When the episode (or arc) ended, Garak spoke with his friend, Dr. Bashir, who asked him about the stories.

Bashir: You know, I still have a lot of questions to ask you about your past.
Garak: I’ve given you all the answers I’m capable of.
Bashir: You’ve given me answers all right, but they were all different. What I want to know is: out of all the stories you told me, which ones were true and which ones weren’t?
Garak: My dear doctor, they’re all true.
Bashir: Even the lies?
Garak: Especially the lies.


This was a great exchange, and, in a different context, it would have been a brilliant conclusion. The problem, of course, is that it gets its power by hinting at a cohesive story behind the fragments Bashir (and hence the viewer) is allowed to see. This is a problem because there was no cohesive story behind the fragments; they were just fragments thrown out in order to contradict previous fragments.

I don’t mean that they had literally no ideas; it was clearly established that Garak was in fact, at least at one point, a high ranking member of the Obsidian Order. What was not established was what principles he actually had.

Nebulous hints are only interesting if there is something good at the back of them. If a man simply lies because he is so warped and twisted that he doesn’t know the truth, this is not interesting. This gets back to something I’ve said more than a few times: it is a man’s virtues, not his flaws, which are interesting. Flaws are, at most, a crutch to make it easy to show off a man’s virtues.

What would have made this great is if there was some principle—that was not just loose consequentialism plus a goal—which was being served, and, therefore, all of the lies actually conveyed a truth, if properly understood. That is, this would be great if all of the lies were actually cyphers, and at some time later the key would be given which would decypher the lies into truths.

You can see an example of this, though not a great one, in the retcon explaining why Obi-Wan Kenobi said that Anakin Skywalker was killed by Darth Vader. When he said it, he meant that the good man who called himself Anakin Skywalker was gone forever, replaced by the evil man who called himself Darth Vader. It wasn’t great, but the lie does make sense as containing a truth when interpreted under that rubric.

That’s what enigmatic characters should all be, though in general it works best if the writers create the cypher key before encrypting things with it. When the writers do that, they do have the potential to create something great.

For it is good, indeed, when it turns out that the lies are all true.

Throwing Is Not Automatic

I’m a fan of Tom Naughton, and his movie Fat Head helped me out a lot. But recently he had something of a head-scratcher of a blog post. Mostly he just mistakes coaching cues that happen to work for him for the One True Way to swing a golf club—a very understandable mistake when in the grips of the euphoria of finally figuring out a physical skill one has been working on for years—but there was one really odd bit that I thought worth commenting on:

If you ask someone to throw a rock or a spear or a frisbee towards a target, he’ll always do the same thing, without fail: take the arm back, cock the wrist, plant the lead foot, rotate the hips, sling the arm toward the target, then release. Ask him exactly when he cocked his wrist, or planted his foot, or turned his hips, he’ll have no idea – but he’ll do it correctly every time. That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA. We don’t have to learn it. Our bodies and brains already know it.

The basic problem is: throwing is not automatic. It’s learned.

I can say this with certainty because I’ve spent time, recently, trying to teach children to throw a frisbee. They do not, in fact, instinctively do it correctly. Humans have very few actual instincts, at least when it comes to voluntary activities. We instinctively breathe, and we will instinctively withdraw our hand from pain, but that’s about it. Oh, and we can instinctively nurse from our mother, though even there we need to learn better technique than we come equipped with pretty quickly, or Mom will not be happy.

Now, what we do, in fact, come with naturally is the predisposition to learn activities like throwing. This is like walking: we aren’t born knowing how to walk, but we are born with a predisposition to learn to walk. We’re good at learning how to walk and we want to do the sorts of things that make us learn how to walk. Language is the same way—we’re not born speaking or understanding language, but we are predisposed to learn it.

Another odd thing is the “he’ll do it correctly every time”—no, he won’t. Even people who know how to throw things pretty well occasionally just screw up and do it wrong. When teaching my boys to throw a frisbee, I occasionally make a garbage throw myself. It’s not just a matter of conscious thought getting in the way of muscle memory—muscle memory needs to be correctly activated, and not paying sufficient attention is a great way to get that wrong.

Finally, the evolutionary biology part is just odd: “That’s because humans have been throwing things at predators and prey forever, and the kinematic sequence to make that happen is hard-coded into our DNA.”

There’s an element of truth to this, in that we can find evidence of spear use in humans going back hundreds of thousands of years. The problem is that the kinematic sequence to throw a spear and the kinematic sequence to hit a golf ball are not the same thing at all.

Here’s a golf swing:

By contrast, here’s someone throwing a javelin:

And just for fun, here are some Masai warriors throwing spears:

Something you’ll notice about the Masai, who throw actual weapons meant to kill, is that the thing is heavy, and they throw it very close. Alignment is incredibly important, since a weak throw that hits point-on is vastly more effective than a strong throw that hits side-on. The other thing is that the ability to actually throw quickly without a big wind-up matters, since they’re practicing to hit moving targets. They don’t have time for a huge wind-up. Also, they tend to face their target, rather than be at a 90 degree angle to it—when your target has teeth and claws, you need to be able to protect yourself if the target starts coming for you.

Anyway, if you look at these three activities, they’re just very kinematically different. Being good at one of them will not transfer to being good at the others. The Masai warrior needs accuracy, timing, and power behind a heavy projectile. The javelin thrower needs to whip his arm over his body as fast as possible, out of a sprint, his arm straight and his shoulder hyper-extended. The golfer needs to whip the head of a long stick as fast as possible, below his body, from a standing position. His arms are bent and his elbows kept in to generate force rather than arm speed, since that force translates into greater velocity at the end of the stick. The golf swing probably has more in common with low sword strokes from a two-handed sword than it does with throwing a spear.

Anyway, I don’t have a major point. I just think it’s interesting what we will tell ourselves in order to try to figure out motion patterns.

On The Seventh Day God Rested

On the seventh day, God rested.

This is an interesting thing to contemplate since, as an American Northerner, I don’t really understand the concept of rest.

Granted, every now and again I take breaks, and every night I sleep. The thing is, I can’t help but think of these as weaknesses, as concessions to a fallen world. Chesterton described this attitude toward work and rest very well in Utopia of Usurers, though he was talking about employers and not individuals:

The special emblematic Employer of to-day, especially the Model Employer (who is the worst sort) has in his starved and evil heart a sincere hatred of holidays. I do not mean that he necessarily wants all his workmen to work until they drop; that only occurs when he happens to be stupid as well as wicked. I do not mean to say that he is necessarily unwilling to grant what he would call “decent hours of labour.” He may treat men like dirt; but if you want to make money, even out of dirt, you must let it lie fallow by some rotation of rest. He may treat men as dogs, but unless he is a lunatic he will for certain periods let sleeping dogs lie.

But humane and reasonable hours for labour have nothing whatever to do with the idea of holidays. It is not even a question of ten hours day and eight-hours day; it is not a question of cutting down leisure to the space necessary for food, sleep and exercise. If the modern employer came to the conclusion, for some reason or other, that he could get most out of his men by working them hard for only two hours a day, his whole mental attitude would still be foreign and hostile to holidays. For his whole mental attitude is that the passive time and the active time are alike useful for him and his business. All is, indeed, grist that comes to his mill, including the millers. His slaves still serve him in unconsciousness, as dogs still hunt in slumber. His grist is ground not only by the sounding wheels of iron, but by the soundless wheel of blood and brain. His sacks are still filling silently when the doors are shut on the streets and the sound of the grinding is low.

Again, Chesterton is talking about employers, but this also encompasses an American attitude toward the self which need have nothing to do with money. Chesterton goes on:

Now a holiday has no connection with using a man either by beating or feeding him. When you give a man a holiday you give him back his body and soul. It is quite possible you may be doing him an injury (though he seldom thinks so), but that does not affect the question for those to whom a holiday is holy. Immortality is the great holiday; and a holiday, like the immortality in the old theologies, is a double-edged privilege. But wherever it is genuine it is simply the restoration and completion of the man. If people ever looked at the printed word under their eye, the word “recreation” would be like the word “resurrection,” the blast of a trumpet.

And here we come back to where I started—that on the seventh day, God rested. We are not to suppose, of course, that God was tired. Nor are we even to suppose that God stopped creating creation—for if he were to do that, there would not be another moment, and creation would be at an end. Creation has no independent existence that could go on without God.

So what are we to make of God’s resting on the seventh day, for it must be very unlike human rest?

One thing I’ve heard is that the ancient Jewish idea of rest is a much more active one than our modern concept of falling down in exhaustion. It involves, so I’ve heard, the contemplation of what was done. Contemplation involves the enjoyment of what is done. What we seem to have is a more extended version of “and God looked on all that he had made and saw that it was good”.

There is another aspect, I think, too, which is that God’s creative action can be characterized into two types, according to our human ability to understand it—change and maintenance. In the first six days we have change, as human beings easily understand it. New forms of being arise, different enough from each other that we have words to describe them. We can, in general, so reliably tell the difference between a fish and a bush that we give them different names. But we cannot so reliably tell the difference between a fish at noon and that same fish ten minutes later, even though it has changed; we just call them both “fish” and let that suffice because we cannot do better. Thus God’s rest can also be seen as the completion of the large changes, which we easily notice, and the transition to the smaller changes, which we have a harder time noticing or describing.

I’m thinking about this because I recently sent the manuscript of Wedding Flowers Will Do for a Funeral off to the publisher. It’s not done, because there will be edits from the editor, but for the moment there is nothing for me to do on it. I finally have time—if still very limited time owing to having three young children—to do other projects, but I’m having a hard time turning to them.

My suspicion is that I need to spend some time resting, which is what put me in mind of this.

Wedding Flowers Is Off to the Editor

For anyone who is interested in my novels: a few days ago I sent the manuscript of Wedding Flowers Will Do For a Funeral (the second chronicle of Brother Thomas) off to Silver Empire publishing (they published the first chronicle of Brother Thomas). Next come edits, and if all goes well it will be published in the first half of 2020. It’s been a long time coming, and I’m really looking forward to finally having it published.

Sequels Shouldn’t Reset To the Original

One of the great problems that writers have when writing sequels is that, if there was any character development in a story at all, its sequel begins with different characters, and therefore different character dynamics. If you tell a coming-of-age story, in the sequel you’ve got someone who already came of age, and now you have to tell a different sort of story. If you tell an analog to it, such as a main character learning to use his magical powers or his family’s magic sword or his pet dragon growing up or what-have-you, you’ve then got to start the next story with the main character being powerful, not weak.

One all-too-common solution to this problem is to reset the characters. The main character can lose his magic powers, or his pet dragon flies off, or his magic sword is stolen. This can be done somewhat successfully, in the sense of the change not being completely unrealistic, depending on the specifics, but I argue that in general, it should not be.

Before I get to that, I just want to elaborate on the depending-on-the-specifics part. It is fairly viable for a new king with a magic sword to lose the sword and have to go on a quest to get it back, though it’s better if he has to entrust it to a knight who will rule in his absence while he goes off to help some other kingdom. Probably the most workable version of this is the isekai story—a type of story, common in Japanese manga, light novels, and animation, where the main character is magically abducted to another world and needs to help there. Being abducted to another world works pretty well.

By contrast, it does not work to do any kind of reset in a coming-of-age story. It’s technically viable to have the character fall and hit his head and forget everything he learned, but that’s just stupid. Short of that, people don’t come of age and then turn back into people with no experience who’ve never learned any life lessons.

So why should resets be avoided even when they work? There are two main reasons:

  1. It’s throwing out all of the achievements of the first story.
  2. It’s lazy writing.

The first is the most important reason. We hung in with a character through his trials and travails to see him learn and grow and achieve. If the author wipes this away, it takes away the fact that any of it happened. And there’s something worse: it’s Lucy pulling the football away.

If the author is willing to say, “just kidding” about character development the first time, why should we trust that the second round of character development was real this time? Granted, some people are gullible—there will be people who watch the sequel to The Least Jedi. I’m not saying that it’s not commercially viable. Only that it makes for bad writing.

Which brings me to point #2: it’s lazy writing to undo the events of the original in order to rewrite it a second time. If one takes the lazy way out in the big picture, it sets one up to take the lazy way out in the details, too. Worse, since the second will be an echo of the first, everything about it will either be the first warmed over or merely a reversal of what happened the first time. Except that these reversals will have to work out to the same thing, since the whole reason for resetting everything is to be able to write the same story. Since it will not be its own story, it will take nearly a miracle to make the second story true to itself given that there will be some changes.

A very good example of not taking the lazy way out is the movie Terminator 2. Given that it’s a movie about a robot from the future which came back in time to stop another robot from the future from killing somebody, it’s a vastly better movie than it has any right to be. Anyway, there’s a very interesting bit in the director’s commentary about this. James Cameron pointed out that in most sequels, Sarah Connor would have gone back to being a waitress, just like she was in the first movie.

But in Terminator 2, she didn’t. James Cameron and the other writer asked themselves what a reasonable woman would do if a soldier from the future came back and saved her from a killer robot from the future, and impregnated her with the future leader of the rebellion against the robots. And the answer was that she would make ties with gun runners, become a survivalist, and probably seem crazy.

We meet her doing pullups on her upturned bed in a psychiatric ward.

Terminator 2, despite having the same premise, is a very different movie from Terminator because Terminator 2 takes Terminator seriously. There are, granted, some problems because it is a time travel story and time travel stories intrinsically have plot holes. (Time travel is, fundamentally, self-contradictory.) That said, Terminator and Terminator 2 could easily be rewritten to be about killer robots from the Robot Planet where the robots have a prophecy of a human who will attack them. That aside, Terminator 2 is a remarkably consistent movie, both with itself and as a sequel.

Another good example, which perhaps illustrates the point even better, is Cars 2. The plot of Cars, if you haven’t seen it, is that a famous race car (Lightning McQueen) gets sentenced to community service for traffic violations in a run-down town on his way to a big race. There he learns personal responsibility, what matters in life, and falls in love. Then he goes on to almost win the big race, but sacrifices first place in order to help another car who got injured. (If you didn’t figure it out, the cars are alive in Cars.)

The plot of Cars 2 is that McQueen is now a champion race car and takes part in an international race. At the same time, his buddy from the first movie, Mater, is mistaken for a spy and joins a James Bond-style espionage team to find out why and how an international organization of evil (I can’t recall what they’re called; it’s C.H.A.O.S. from Get Smart or S.P.E.C.T.R.E. from James Bond) is sabotaging the race. McQueen is not perfect, but he is more mature and does value the things he learned to value in the first movie. The main friction comes from him relying on Mater and Mater letting him down.

As you can see, Cars 2 did not reset Cars, nor did it try to tell Cars over again. In fact, it was so much of a sequel to Cars, which was a coming-of-age movie, that it was a completely different sort of movie. This was a risk, and many of the adults who liked Cars did not like Cars 2, because it was so different. This is the risk to making sequels that honor the first story—they cannot be the first story over again, so they will not please everyone who liked the first story.

Now, Cars 2 is an interesting example because there was no need to make it a spy thriller. Terminator 2 honored the first movie and was still an action/adventure where a killer robot has come to, well, kill. But there was a practical reason why Cars 2 was in a different genre from its predecessor while Terminator 2 was not: most everyone knows how to grow up enough to not be a spoiled child, but pretty few people in Hollywood have any idea how to keep growing from a minimally functioning adult into a mature adult.

If one wants to tell a true sequel to a coming-of-age film, which mostly means a film in which somebody learns to take responsibility for himself, the sequel will be about him learning to take responsibility for others. In practice, this means either becoming a parent or a mentor.

This is a sort of story that Hollywood has absolutely no skill in telling.

If you look at movies about parents or mentors, they’re almost all about how the parent/mentor has to learn to stop trying to be a parent/mentor and just let the child/mentee be whatever he wants to be.

Granted, trying to turn another human being into one’s own vision, materialized, is being a bad parent and a bad mentor, but just letting them be themselves is equally bad parenting and mentoring. What you’re supposed to do as a parent or a mentor is to help the person to become themselves. That is, they need to become fully themselves. They must overcome their flaws and become the perfect human being which God made them to be. That’s a hard process for a person, which is why it takes so much skill to be a parent or a mentor.

There’s a lot of growth necessary to be a decent parent or mentor, but it’s more subtle than growing up from a child. Probably one of the biggest things is learning how much self-sacrifice is necessary—how much time the child or mentee needs, and how little time one will have for one’s own interests. How to balance those things, so one gives freely but does not become subsumed—that is a difficult thing to learn, indeed. That has the makings of very interesting character development.

The problem, of course, is that only people who have gone through it and learned those lessons are in a position to tell it—one can’t teach what one doesn’t know.

At least on purpose.

Art is a great testament to how much one can teach by accident—since God is in charge of the world, not men.

But I think that the world really could do with some (more) decent stories about recent adults learning to be mature adults. I think that they can be made interesting to general audiences.

The Scientific Method Isn’t Worth Much

It’s fairly common, at least in America, for kids to learn that there is a “scientific method” which tends to look something like:

  1. Observation
  2. Hypothesis
  3. Experiment
  4. Go back to 1.

It varies; there is often more detail. In general it’s part of the myth that there was a “scientific revolution” in which at some point people began to study the natural world in a radically different way than anyone had before. I believe (though am not certain) that this myth was propaganda during the Enlightenment, which was a philosophical movement primarily characterized by being a propagandistic movement. (Who do you think gave it the name “The Enlightenment”?)

In truth, people have been studying the natural world for thousands of years, and they’ve done it in much the same way all that time. There used to be less money in it, of course, but in broad strokes it hasn’t changed all that much.

So if that’s the case, why did Science suddenly get so much better in the last few hundred years, I hear people ask. Good question. It has a good answer, though.

Accurate measurement.

Suppose you want to measure how fast objects fall. Now suppose that the only time-keeping device you have is the rate at which a volume of sand (or water) falls through a restricted opening (i.e., your best stopwatch is an hourglass). How accurately do you think that you’ll be able to write the formula for it? How accurately can you test that in experimentation?

To give you an idea, in physics class in high school we did an experiment with an electronic device that fed a long, thin paper tape through it and burned a mark onto the tape exactly ten times per second, with high precision. We then attached a weight to one end of the tape and dropped the weight. It was then very simple to calculate the acceleration due to gravity, since we just had to accurately measure the distances between the burn marks.

The groups in class got values between 2.8 m/s² and 7.4 m/s² (it’s been 25 years, so I might be a little off, but those are approximately correct). For reference, the correct answer, albeit in a vacuum while we were in air, is 9.8 m/s².
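The arithmetic behind that experiment is simple enough to sketch. In the toy calculation below, the mark positions are idealized, invented numbers (not our actual tape): for an object falling from rest, the gap between successive marks grows by a constant amount each interval, and that constant divided by dt² is the acceleration.

```python
# Sketch of the spark-timer calculation. Marks are burned ten times per
# second, so dt = 0.1 s between marks. For constant acceleration a, the
# gap between successive marks grows by exactly a * dt^2 each interval.
dt = 0.1  # seconds between marks

# Idealized mark positions (meters) for free fall from rest: y = 0.5*g*t^2
g_true = 9.8
positions = [0.5 * g_true * (i * dt) ** 2 for i in range(6)]

# Distances between successive marks
gaps = [b - a for a, b in zip(positions, positions[1:])]

# The second difference is constant: a * dt^2
deltas = [b - a for a, b in zip(gaps, gaps[1:])]
a_estimate = sum(deltas) / len(deltas) / dt ** 2

print(a_estimate)  # ≈ 9.8 m/s^2
```

In a real lab, of course, the measured positions carry error, which is exactly why our class got such a wide spread of answers from the same method.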

The point being: until the invention of the mechanical clock, the high-precision measurement of time was not really possible. It took people a while to think of that.

It was a medieval invention, by the way. Well, not hyper-precise clocks, but the technology needed to make them. Clocks powered by falling weights were common during the high medieval period, and the earliest surviving spring-driven clock was given to Philip the Good, Duke of Burgundy, in 1430.

Another incredibly important invention for accurate measurement was the telescope. It was invented in 1608 and spread like wildfire, because telescopes were basically just variations on eyeglasses (the first inventor, Hans Lippershey, was an eyeglass maker). Eyeglasses were another medieval invention, by the way.

And if you trace the history of science in any detail, you will discover that its advances were mostly due not to the magical properties of a method of investigation, but to increasing precision in the ability to measure things and make observations of things we cannot normally observe (e.g. the microscope).

That’s not to say that literally nothing changed; there have been shifts in emphasis, as well as the creation of an entire type of career which gives an enormous number of people the leisure to make observations and the money with which to pay for the tools to make these observations. But that’s economics, not a method.

One could try to argue that mathematical physics was something of a revolution, but it wasn’t, really. Astronomers had mathematical models of things they didn’t actually know the nature of nor inquire into since the time of Ptolemy. It’s really increasingly accurate measurements which allow the mathematicization of physics.

The other thing to notice is that anywhere that taking accurate measurements of what we actually want to measure is prohibitively difficult or expensive, the science in those fields tends to be garbage. More specifically, it tends to be the sort of garbage science commonly called cargo cult science. People go through the motions of doing science without actually doing science. What that means, specifically, is that people take measurements of something and pretend it’s measurements of the things that they actually want to measure.

We want to know what eating a lot of red meat does to people’s health over the long term. Unfortunately, no one has the budget to put a large group of people into cages for 50 years and feed them controlled diets while keeping out confounding variables like stress, lifestyle, etc.—and you couldn’t get this past an ethics review board even if you had the budget for it. So what do nutrition researchers who want to measure this do? They give people surveys asking them what they ate over the last 20 years.

Hey, it looks like science.

If you don’t look too closely.

Sherlock Holmes and the Valley of Fear

I recently read the fourth and final Sherlock Holmes novel, The Valley of Fear. It’s an interesting book, or in some sense two books, the first of which I found interesting and the second of which I’m not really interested in reading.

(If anyone doesn’t want spoilers, now’s the time to stop reading.)

The book begins with Sherlock Holmes working out a cryptogram by reasoning to the key from the cipher. It’s a book cipher, and the book in question has many pages and two columns, so Holmes is able to guess that it’s an almanac. This is clever and enjoyable; the decoded message warns that something bad is going to happen to a Douglas in Birlstone. Shortly after they decrypt it, a detective from Scotland Yard arrives to consult Sherlock Holmes about the brutal murder of Mr. Douglas of Birlstone. The plot thickens, as it were. This is an excellent setup for what is to follow.
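As an aside, the mechanics of a book cipher are simple enough to sketch in a few lines. Here is a toy version in Python; the page number, word positions, and page contents are all invented for illustration, not quoted from the novel:

```python
# A toy book cipher: the sender transmits a page number and word positions,
# and anyone holding the same book can look the words up. Without knowing
# which book was used, the message is opaque -- which is why Holmes's first
# task is to reason out which book the sender must have meant.
almanac_pages = {
    534: ["There", "is", "danger", "may", "come", "very", "soon",
          "one", "Douglas", "now", "at", "Birlstone"],
}

def decode(page: int, word_numbers: list[int]) -> str:
    """Look up each 1-indexed word number on the given page."""
    words = almanac_pages[page]
    return " ".join(words[n - 1] for n in word_numbers)

print(decode(534, [1, 2, 3]))  # prints "There is danger"
```

The security of the scheme rests entirely on the shared book staying secret, which is why Holmes’s guess that a fat, two-column, widely owned book must be an almanac breaks it so quickly.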

When Holmes arrives, we get the facts of the case, that Mr. Douglas lives in a house surrounded by a moat with a drawbridge, and was found in his study with his head blasted off with a sawed-off shotgun fired at close range. Any avid reader of detective fiction—possibly even at the time, given how detective fiction had taken off in short story form by 1914, when The Valley of Fear was written—will immediately suspect that the body is not the body it is supposed to be. However, Conan Doyle forestalls this possibility by the presence of a unique brand on the forearm of the corpse, which Mr. Douglas was known to have had. This helps greatly to heighten the mystery.

The mystery is deepened further by the confusing evidence that Mr. Douglas’s friend forged a footprint on the windowsill which was used to suggest that the murderer escaped by wading in the moat—which was only 3′ deep at its deepest—and ran away. Further confusing things, Dr. Watson accidentally observes Mrs. Douglas and Mr. Douglas’ friend being lighthearted and happy together.

Holmes then finds some additional evidence which convinces him of what really happened, which he does not tell us or the police about, which is not exactly fair play. He then sets in motion a trap where he has the police tell Mr. Douglas’ friend that they are going to drain the moat. This invites the reader to guess, and I’m not sure that we really have sufficient evidence at this point to guess. That’s not entirely true; we have sufficient evidence to guess, but not to pick among the many possible explanations of the facts given to us. It turns out that the dead man was the intruder, but it could have turned out otherwise, too. The facts, up till then, would have supported Mr. Douglas’ friend having been in on the crime, for example. That said, the explanation given does cover the facts very well, and is satisfying. It does rely, to some degree, on happenstance; none of the servants heard the gunshot, except for one half-deaf woman who supposed it to be a door banging. This is a little dubious, but investigation must be able to deal with happenstance because happenstance is real.

We then come to the part where Mr. Douglas is revealed and the mystery explained, at which point the narrative shifts over to explaining his history in America and why it was that there were people tracking him from America to England in order to murder him. This, I find very strange.

It is the second time in a novel that Conan Doyle did it. The first time was in A Study in Scarlet, where the middle half of the book (approximately) took place in America. I really don’t get this at all.

I suspect it makes more sense in the original format of the novels, which were serialized in magazines. It would not be so jarring, in a periodical magazine, to have to learn new characters, since one would to some degree need to reacquaint oneself with the already-known characters anyway. Possibly it also speaks to Conan Doyle having not paced himself well, being more used to short stories, and needing to fill the novel with something else.

The very end of the book, when we return to the present in England, is a very short epilogue. Douglas was acquitted as having acted purely in self-defense, but is then murdered by Moriarty while taking Holmes’s advice to flee England because Moriarty would be after him.

That the book takes such an interest in Moriarty is very curious, given that it was written in 1914 while Holmes killed Moriarty off in 1893. Actually in 1891, but The Final Problem was published in 1893. Holmes was brought back in 1903, in The Adventure of the Empty House, where it is confirmed that Moriarty died at the Reichenbach Falls. So we have a novel which is clearly set prior to the death of Moriarty, establishing him as a criminal mastermind, almost 15 years after he was killed off. What’s even stranger is that Moriarty barely features in the story. He’s in the very beginning, mentioned only in connection with the cryptogram and as having something to do with the murder, but neither he nor his men actually tried to carry out the murder. His involvement was limited to finding out where Douglas was, so that the American who was trying to murder Douglas could try. He naturally makes no appearance in the story of Douglas’ adventures in America, and only shows up in a note at the end of the book:

Two months had gone by, and the case had to some extent passed from our minds. Then one morning there came an enigmatic note slipped into our letter box. “Dear me, Mr. Holmes. Dear me!” said this singular epistle. There was neither superscription nor signature. I laughed at the quaint message; but Holmes showed unwonted seriousness.

Moriarty is indicated to have killed Douglas off the cape of South Africa, and the book ends with Holmes’s determination to bring Moriarty to justice.

Which would be a great setup for Holmes bringing Moriarty to justice in a later book, but we already read about it in an earlier book. It doesn’t really help to flesh the character out, it’s not really needed for the plot of the book, and it serves to end the book on a note of failure rather than of triumph. I do not understand it. Perhaps its purpose is to help increase the grandeur of Holmes’ previous victory over Moriarty? But that is a strange thing to do. Perhaps it was the reverse—a note of caution to fans of Holmes that no man, not even Sherlock Holmes, is omnipotent?

Why Moderns Always Modernize Stories

Some friends of mine were discussing why it is that modern tellings of old stories (like Robin Hood) are always disappointing. One put forward the theory it’s because they can’t just tell the story, they have to modernize it. He’s right, but I think it’s important to realize why it is that modern storytellers have to modernize everything.

It’s because they’re Modern.

Before you click away because you think I’m joking, notice the capital “M”. I mean that they subconsciously believe in Modern Philosophy, which is the name of a particular school of philosophy which was born with Descartes, died with Immanuel Kant, and has wandered the halls of academia ever since like a zombie—eating brains but never getting any smarter for it.

The short, short version of this rather long and complicated story is that Modern Philosophy started with Descartes’ work Discourse on Method, though it was put forward better in Meditations on First Philosophy. In those works, Descartes began by doubting literally everything and seeing if he could trust anything. Thus he started with the one thing he found impossible to doubt—his own existence. It is from this that we get the famous cogito ergo sum: I think, therefore I am.

The problem is that Descartes had to bring in God in order to guarantee that our senses are not always being confused by a powerful demon. In modern parlance we’d say that we’re not in The Matrix. They mean the same thing—that everything we perceive outside of our own mind is not real but being projected to us by some self-interested power. Descartes showed that from his own existence he can know that God exists, and from God’s existence he can know that he is not being continually fooled in this way.

The problem is that Descartes was in some sense cheating—he was not doubting that his own reason worked correctly. The problem is that this is doubtable, and once doubted, completely irrefutable. All refutations of doubting one’s intellect necessarily rely on the intellect being able to work correctly to follow the refutations. If that is itself in doubt, no refutation is possible, and we are left with radical doubt.

And there is only one thing which is certain, in the context of radical doubt: oneself.

To keep this short, without the senses being considered at least minimally reliable there is no object for the intellect to feed on, but the will can operate perfectly well on phantasms. So all that can be relied upon is will.

After Descartes and through Kant, Modern Philosophers worked to avoid this conclusion, but progressively failed. Kant killed off the last attempts to resist this conclusion, though it is a quirk of history that he could not himself accept the conclusion and so basically said that we can will to pretend that reason works.

Nietzsche pointed out how silly willing to pretend that reason works is, and Modern Philosophy has, for the most part, given up that attempt ever since. (Technically, with Nietzsche, we come to what is called “post-modernism”, but post-modernism is just modernism taken seriously and thought out to its logical conclusions.)

Now, modern people who are Modern have not read Descartes, Kant, or Nietzsche, of course, but these thinkers are in the water and the air—one must reject them to not breathe and drink them in. Modern people have not done that, so they hold these beliefs but for the most part don’t realize it and can’t articulate them. As Chesterton observed, if a man won’t think for himself, someone else will think for him. Actually, let me give the real quote, since it’s so good:

…a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy…

(From The Revival of Philosophy)

In the context of the year of our Lord’s Incarnation 2019, what Christians like my friends mean by “classic stories” are mostly stories of heroism. (Robin Hood was given as an example.) So we need to ask what heroism is.

There are varied definitions of what a hero is which are useful; for the moment I will define a hero as somebody who gives of himself (in the sense of self-sacrifice) that someone else may have life, or have it more abundantly. Of course, stated like this it includes trivial things. I think that there is simply a difference of degree, not of kind, between trivial self-gift and heroism; heroism is to some degree merely extraordinary self-gift.

If you look at the classic “hero’s journey” according to people like Joseph Campbell, but less insipidly as interpreted by George Lucas, the hero is an unknown and insignificant person who is called to do something very hard, which he has no special obligation to do, but who answers this call and does something great, then after his accomplishment, returns to his humble life. In this you see the self-sacrifice, for the hero has to abandon his humble life in order to do something very hard. You further see it as he does the hard thing; it costs him trouble and pain and may well get the odd limb chopped off along the way. Then, critically, he returns to normal life.

You can see elements of this in pagan heroes like Achilles, or to a lesser degree in Odysseus (who is only arguably a hero, even in the ancient Greek sense). They are what C.S. Lewis would call echoes of the true myth which had not yet been fulfilled.

You really see this in fulfillment in Christian heroes, who answer the call out of generosity, not out of obligation or desire for glory. They endure hardships willingly, even unto death, because they follow a master who endured death on a cross for their sake. And they return to a humble life because they are humble.

Now let’s look at this through the lens of Modern Philosophy.

The hero receives a call. That is, someone tries to impose their will on him. He does something hard. That is, it’s a continuation of that imposition of will. Then he returns, i.e. finally goes back to doing what he wants.

This doesn’t really make any sense as a story, after receiving the call. It’s basically the story of a guy being a slave when he could choose not to be. It is the story of a sucker. It’s certainly not a good story; it’s not a story in which a character’s actions flow out of his character.

This is why we get the modern version, which is basically a guy deciding on whether he’s going to be completely worthless or just mostly worthless. This is necessarily the case because, for the story to make sense through the modern lens, the story has to be adapted into something where he wills what he does. For that to happen, and for him not to just be a doormat, he has to be given self-interested motivations for his actions. This is why the most characteristic scene in a modern heroic movie is the hero telling the people he benefited not to thank him. Gratitude robs him of his actions being his own will.

A Christian who does a good deed for someone may hide it (“do not let your left hand know what your right is doing”) or he may not (“no one puts a light under a bushel basket”), but if the recipient of his good deed knows about it, the Christian does not refuse gratitude. He may well refuse obligation; he may say “do not thank me, thank God”, or he may say “I thank God that I was able to help you,” but he will not deny the recipient the pleasure of gratitude. The pleasure of gratitude is the recognition of being loved, and the Christian values both love and truth.

A Modern hero cannot love, since to love is to will the good of the other as other. The problem is that the other cannot have any good beside his own will, since there is nothing besides his own will. To do someone good requires that they have a nature which you act according to. The Modern cannot recognize any such thing; the closest he can come is the other being able to accomplish what he wills, but that is in direct competition with the hero’s will. The same action cannot at the same time be the result of two competing wills. In a zero-sum game, it is impossible for more than one person to win.

Thus the modern can only tell a pathetic simulacrum of a hero who does what he does because he wants to, without reference to anyone else. It’s the only way that the story is a triumph and not the tragedy of the hero being a victim. Thus instead of the hero being tested, and having the courage and fortitude to push through his hardship and do what he was asked to do, we get the hero deciding whether or not he wants to help, and finding inside himself some need that helping will fulfill.

And in the end, instead of the hero happily returning to his humble life out of humility, we have the hero filled with a sense of emptiness because the past no longer exists and all that matters now is what he wills now, which no longer has anything to do with the adventure.

The hero has learned nothing because there is nothing to learn; the hero has received nothing because there is nothing to receive. He must push on because there is nothing else to do.

This is why Modern tellings of old stories suck, and must suck.

It’s because they’re Modern.

Meek is an Interesting Word

Somebody asked me to do a video on the beatitude about meekness, so I’ve been doing some research on the word “meek”. Even though I don’t speak from a place of authority, talking about the beatitudes still carries a lot of responsibility.

The first problem that we have with the word “meek” is that it is not really a modern English word. It’s very rarely used as a character description in novels, and outside of that, pretty much never. So we have to delve back into history and etymology.

The OED defines meek as “Gentle. Courteous. Kind.” It comes from a Scandinavian root. Various Scandinavian languages have an extremely similar word which means, generally, “soft” or “supple”.

Next, we turn to the original Greek:

μακάριοι οἱ πραεῖς, ὅτι αὐτοὶ κληρονομήσουσιν τὴν γῆν

To transliterate, for those who don’t read the Greek alphabet:

makarioi hoi praeis, hoti autoi kleronomesousin ten gen.

Much clearer, I’m sure. Bear with me, though, because I will explain. (I’m going to refer to the words in the English transliteration to make it easier to follow.)

The beatitudes generally have two halves. The first half says that someone is blessed, while the second half gives some explanation as to why. This beatitude has this form. Who is blessed is the first three words, “makarioi hoi praeis”. In the original the verb is left understood, but this is usually translated as “blessed are the meek”. The second half, “hoti autoi kleronomesousin ten gen” is commonly translated “for they shall inherit the earth”.

Let’s break the first half down a little more, because both major words in it are very interesting (“hoi” is just an article; basically it’s just “the”). The first word, “makarioi”, can actually be translated in English either as “blessed” or as “happy”, though it should be noted happy in a more full sense than just the pleasant sensation of having recently eaten on a sunny day with no work to do at the moment.

I’ve noticed that a lot of people, or at least a lot of my fellow Americans, want to take “blessed”, not as an adjective, but as a future conditional verb. Basically, they want to take Christ, not as describing what presently is, but as giving rules with rewards attached. This doesn’t work even in English, but it’s even more obvious in Greek, where makarioi is declined to agree with the subject, “hoi praeis”. Christ isn’t telling us what to do and offering rewards. He’s telling us that we’re looking at the world all wrong, and why.

The other part, “hoi praeis”, is what gets translated as “the meek”, though I’ve also seen “the gentle”. It is the noun form of an adjective, “praios” (“πρᾷος”), which (not surprisingly) tends to mean mild or gentle.

Now, to avoid a connotation which modern English has accrued over hundreds of years of character descriptions in novels, it does not mean weak, timid, or mousy. The wiktionary entry for praios has some usage examples. If one peruses them, they are things like asking a god to be gentle, or saying that a king is gentle with his people.

So translating the first half very loosely, we might render the beatitude:

Those who restrain their force have been blessed, for they will inherit the earth.

This expanded version of the beatitude puts it in the group of the beatitudes which refer to something under the control of the people described as “makarios” (blessed, happy). Consider the other groups of people, which make up roughly half of the beatitudes: “the poor in spirit,” “those who mourn”, “those who hunger and thirst for righteousness”, “those who are persecuted in the cause of righteousness,” and “you when people abuse you and persecute you and speak all kinds of calumny against you falsely on my account”.

I think that this really makes it clear that what is being described is a gift, though a hard-to-understand one. So what do we make of the other beatitudes, the ones under people’s control?

Just as a quick refresher, they are: “the meek”, “the merciful”, “the pure in heart”, and “the peacemakers”. They each have the superficial form of there being a reward for those who do well, but if we look closer, the reward is an intrinsic reward. That is, it is the natural outcome of the action.

So if we look closely at the second half of the meek beatitude, we see that indeed it is connected to the first half: “for they will inherit the earth”. This is often literally the case: those who fight when they don’t have to, die when they don’t have to, and leave the world to those who survive them.

Now, I think too much can be made of “the original context”—our Lord was incarnate in a particular time and spoke to particular people, but they were human beings and he was also speaking to all of us. Still, I think it is worth looking at that original context, and how in the ancient world one of the surest paths to glory was conquest. Heroes were, generally, warriors. They were not, as a rule, gentle. Even in more modern contexts where war is mechanized and so individuals get less glory, there are still analogs where fortune favors the bold. We laud sports figures and political figures who crush their enemies in metaphorical, rather than literal, senses.

Even on a simpler level, we can only appreciate the power that a man has when he demonstrates it by using it.

And here Christ is saying that those are happy who do not use their power when they don’t have to. And why? Because they inherit the earth. Glory is fleeting, and in the end one can’t actually do very much with it. Those who attain glory by the display of power do not, in putting that power on display, use it to do anything useful. They waste their power for show, rather than using it to build. And having built nothing, they will end up with nothing.

You can see this demonstrated in microcosm in a sport I happen to like: power lifting. It is impressive to see people pick up enormous weights. But what do they do with them once they’ve picked them up? They just put them back down again.

Now, the fact that this is in microcosm means that there can be good justifications for it; building up strength by lifting useless weights can give one the strength to lift useful weights, such as children, furniture, someone else who has fallen down, etc. And weightlifting competitions do serve the useful role of inspiring people to develop their strength; a powerlifting meet is not the same thing as conquering a country. But it is, nonetheless, a great metaphor, if one extends the powerlifting competition to all of life. Happy are those who do not.

Strength vs. Skill

Many years ago, I was studying judo from someone who had done judo since he was a kid and was teaching for fun. He was not a very large man, but he was a very skilled one. One time, he told a very interesting story.

He was in a match with a man who was a body builder or a power lifter or something of that ilk—an immensely, extraordinarily strong man. He got the strong man into an arm bar, which is a hold in which the elbow is braced against something and the arm is pulled back at the wrist. Normally, if a person is in a properly positioned arm bar, it is inescapable, and the person holding it could break his arm if he wanted to; joint locks like this are one of the typical ways a judo match ends—the person in the joint lock taps out, admitting defeat.

The strong man did not tap out.

He just curled his way out of the arm bar.

That is, his arm—in a very weak position—was so much stronger than my judo teacher’s large core muscles that he was able to overpower them anyway.

Next, my judo teacher pinned him down. In western wrestling, one can win a match by pinning the opponent’s shoulders to the ground for 3 seconds. In judo it’s a little more complicated, but the point that matters here is that you have to pin the opponent such that he can’t escape for 45 seconds. Once he had pinned the strong man, the strong man asked him, “You got me?” My teacher replied, “Yeah, I got you.” The strong man asked, “Are you sure about that?” “Yes, I’m sure,” my teacher replied.

The strong man then grabbed my teacher by the gi (the stout clothing worn in judo) and floor-pressed him into the air, then set him aside. (Floor pressing is like bench pressing, only the floor keeps your elbows from going low enough to generate maximum power.)

Clearly, this guy was simply far too strong to ever lose by joint locks or pinning. So my teacher won the match by throwing him to the ground (“ippon”).

The moral of the story is not that skill will always beat strength, because clearly it didn’t, two out of three times. The moral of the story is also not that strength will always beat skill, since it didn’t, that final time.

The moral of the story is to know your limits and always stay within them.

It Costs $1 Billion to Tape Out a 7nm Chip

Making processors is getting very expensive. According to this report, the R&D to take a processor design and turn it into something that can be fabricated at the latest silicon node is $1B.

Each fabrication node (where the transistors shrink) has gotten more expensive. I suspect that economics will play as big a role in killing off Moore’s Law as physics will. Eventually no one will be able to afford new nodes, even if they are physically possible to create.

This is what an s-curve looks like.
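As an aside, the s-curve in question is just the logistic function: growth that looks exponential early on, hits a steep middle, and then flattens toward a ceiling. Here’s a minimal Python sketch; the parameters are purely illustrative, not real semiconductor data:

```python
import math

def logistic(t, ceiling=1.0, midpoint=0.0, rate=1.0):
    """Classic s-curve: near-exponential early growth that saturates at a ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Gain per unit of time at different phases of the curve:
early  = logistic(-4.0) - logistic(-5.0)   # small gains while getting started
middle = logistic(0.5) - logistic(-0.5)    # the steep part: rapid progress
late   = logistic(5.0) - logistic(4.0)     # small gains again as it saturates

# The middle of the curve outpaces both ends; by the late phase,
# each step forward buys almost nothing.
```

The point of the metaphor: if each step along the time axis costs more than the last, the flat late phase is where the spending stops making sense.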

A Michaelmas Book Sale

My friend and publisher, Russell Newquist, is having a Michaelmas sale this weekend on his books, since they feature a modern-day paladin who fights with the sword of Saint Michael (the archangel). If you’re in the mood for Catholic action-horror (Amazon calls it “Christian fantasy”) check out:

“Jim Butcher’s Harry Dresden collides with Larry Correia’s Monster Hunter
International in this supernatural thriller that goes straight to Hell!”

Also, the sequel:

“There’s a dragon in the church.”

I have to confess that these are still on my shelf waiting to be read, but I have read Russell’s short story Who’s Afraid of the Dark? (which is about a character who appears in War Demons and Vigil) and it was very good. So if you’re not busy writing murder mysteries and have time to read other people’s work, I strongly recommend checking them out.

This weekend the sale prices for War Demons are:
Ebook: $0.99
Paperback: $9.99
Hardcover: $19.99

The sale prices for Vigil are:
Ebook: $0.99
Paperback: $4.99