Strength vs. Skill

Many years ago, I was studying judo under someone who had practiced it since he was a kid and taught for fun. He was not a very large man, but he was a very skilled one. One time, he told a very interesting story.

He was in a match with a man who was a bodybuilder or a power lifter or something of that ilk—an immensely, extraordinarily strong man. He got the strong man into an arm bar, which is a hold in which the elbow is braced against something and the arm is pulled back at the wrist. Normally, a properly positioned arm bar is inescapable, and the person holding it could break the arm if he wanted to; joint locks are one of the typical ways a judo match ends—the person caught in the lock taps out, admitting defeat.

The strong man did not tap out.

He just curled his way out of the arm bar.

That is, his arm—in a very weak position—was so much stronger than my judo teacher’s large core muscles that he was able to overpower them anyway.

Next, my judo teacher pinned him down. In western wrestling, one can win a match by pinning the opponent’s shoulders to the ground for 3 seconds. In judo it’s a little more complicated, but the point that matters here is that you have to pin the opponent such that he can’t escape for 45 seconds. Once he had pinned the strong man, the strong man asked him, “You got me?” My teacher replied, “Yeah, I got you.” The strong man asked, “Are you sure about that?” “Yes, I’m sure,” my teacher replied.

The strong man then grabbed my teacher by the gi (the stout clothing worn in judo) and floor-pressed him into the air, then set him aside. (Floor pressing is like bench pressing, only the floor keeps your elbows from going low enough to generate maximum power.)

Clearly, this guy was simply far too strong to ever lose by joint locks or pinning. So my teacher won the match by throwing him to the ground (“ippon”).

The moral of the story is not that skill will always beat strength, because clearly it didn’t, two out of three times. The moral of the story is also not that strength will always beat skill, since it didn’t, that final time.

The moral of the story is to know your limits and always stay within them.

It cost 1 billion dollars to tape out 7nm chip

Making processors is getting very expensive. According to this report, the R&D required to take a processor design and turn it into something that can be fabricated at the latest silicon node is $1B.

https://www.fudzilla.com/news/49513-it-cost-1-billion-dollars-to-tape-out-7nm-chip

Each fabrication node (where the transistors shrink) has gotten more expensive. I suspect that economics will play as big a role in killing off Moore’s Law as physics will. Eventually no one will be able to afford new nodes, even if they are physically possible to create.

This is what an s-curve looks like.
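For anyone who wants to see that shape concretely, here is a minimal sketch of the classic s-curve, the logistic function, in Python. The ceiling, midpoint, and steepness numbers are made up for illustration; they are not fitted to any real transistor or cost data.

    # A minimal sketch of an s-curve (the logistic function).
    # All parameters here are illustrative, not fitted to real data:
    # growth looks exponential at first, then flattens near a ceiling.
    import math

    def logistic(t, ceiling=100.0, midpoint=0.0, steepness=1.0):
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    for t in range(-6, 7, 2):
        print(f"t={t:+d}  value={logistic(t):6.2f}")

Early on, each step roughly doubles the value, which looks like Moore’s Law; near the ceiling, each step adds almost nothing, which looks like the end of it.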

A Michaelmas Book Sale

My friend and publisher, Russell Newquist, is having a Michaelmas sale this weekend on his books since they feature a modern day paladin who fights with the sword of Saint Michael (the archangel). If you’re in the mood for Catholic action-horror (Amazon calls it “Christian fantasy”) check out:

“Jim Butcher’s Harry Dresden collides with Larry Correia’s Monster Hunter
International in this supernatural thriller that goes straight to Hell!”

Also, the sequel:

“There’s a dragon in the church.”

I have to confess that these are still on my shelf waiting to be read, but I have read Russell’s short story Who’s Afraid of the Dark? (which is about a character who appears in War Demons and Vigil) and it was very good. So if you’re not busy writing murder mysteries and have time to read other people’s work, I strongly recommend checking them out.

This weekend the sale prices for War Demons are:
Ebook: $0.99
Paperback: $9.99
Hardcover: $19.99

The sale prices for Vigil are:
Ebook: $0.99
Paperback: $4.99

History Suffers From Academia

Academia has problems. This is an obvious statement, since it is an institution in a fallen world. It is worth looking at these problems in some depth, however, because they affect the various academic disciplines to varying degrees, and I think that History may be hit the hardest.

This occurred to me as I was reading the book A Catholic Introduction to the Bible: The Old Testament by Brant Pitre and John Bergsma. (It’s a massive tome that could be used to bludgeon a water buffalo to death, and I’m still in the introductory materials.) In the prefatory materials is an overview of biblical scholarship over the last two centuries, and this includes some theories which went from being novel, to being dominant, to mostly in the rubbish bin in recent times. And this got me thinking about the problems of academia and how they affect history.

The big problem of academia is that its currency is novelty. You can see this in the economic angle of publish or perish, to be sure, but even if publication didn’t affect people’s job prospects and salaries, it would still affect their reputations and standing as scholars. This pressure has a selective effect on scholars, much as the analogous pressure has on scientists. Those who come up with novelty, for whatever reason, will tend to publish more, and will thus receive more academic status. This has generational effects, since new scholars learn from and have to work with old scholars. (For more on that, see Who Works for Bad Scientists?.)

While this is true and important (and, as I say, no one takes evolution seriously enough), it’s not what I want to focus on today. Rather, I want to focus on what limits are inherent in a field to protect against novelty which is novel by being purely fictitious.

The prototypical example of a system which corrects against novelty-through-pure-fiction is science, but this is painting with an overly broad brush. It is not all sciences that do this, but rather experimental sciences. Theoretical physicists can spin theories about 11-dimensional string vibrations until the cows come home, and no one will ever notice that they don’t work because no one can run an experiment on the things.

The fiction-detecting aspect of experimental science only really works the way it’s advertised to in crowded fields of science. That is, it only works in fields of science where people will be doing the same experiments many times, or at least experiments whose results depend upon previous results. This is quite true of experimental physics, especially of basic physics such as Newtonian mechanics. It gets less true the less crowded the field is. The more low-hanging fruit there is to pick, the more people will spend their time picking it rather than looking in each other’s baskets.

Biology, and medicine (to the degree that it is a sub-field of biology), are good examples of non-dense fields. There are far more questions to ask than there are researchers with funding to try to find answers. There are exceptions for particularly hot topics, where at least several researchers will try to ask the same question, but you just don’t find much in the way of people duplicating each other’s experiments to find out if they get the same results.

Worse, in medicine certain types of answers can make experiments unethical to replicate. If a drug shows a statistically significant result in a clinical trial, running such a clinical trial with a placebo group becomes unethical. All is not lost, since new drugs get run against “standard care” (i.e. the already approved drug) as a control group, so if the original drug is really just a placebo that got lucky, a real drug that comes along will prove better than it. There isn’t much of an ethical way of discovering that the drug is actually just a placebo (with side effects), though.

Chemistry may be the best field simply because chemistry is so closely tied to engineering, and the purpose of engineering is to replicate the heck out of whatever experiment was run. That is, engineering replicates findings by putting things into production on industrial scales, and a million LED lights shipped to Home Depot and Lowe’s and other such stores replicate the finding that indium, gallium, and nitrogen, when mixed correctly and with an electric current run through them, emit blue light. (Indium nitride and gallium nitride are the semiconductors used in the blue LED.) But this is far and away the best case, and since it is industrial, it is definitionally not academic.

So what is it about the sciences, where there is some limit on fictitious novelty, that provides this limit? It’s that the test is not whether or not it seems plausible to another human being, but whether or not it actually works when one tries it.

This is, largely, not available in history. It is not entirely absent, as there are historical theories which can be disproved or confirmed by subsequent archaeological finds. These are, however, fairly rare. They are extremely rare in ancient history, such as biblical studies.

The particular theory which got me thinking about this was the Documentary Hypothesis, which was, more or less, a theory created by a liberal Protestant in the 1800s that the traditional attribution of the Pentateuch to Moses was ahistorical.

What amazing new archaeological evidence was unearthed that gave rise to this theory? Why, none at all. The guy who came up with it looked at the text and spun his theory out of air. He categorized the various lines and paragraphs and stories in the Pentateuch on the basis of which he thought similar and which different, then attributed these groupings to different authors, writing at different times.

Of course, being a liberal Protestant, he decided that the religion started pure and simple and later got corrupted by law and liturgy. This appealed to his prejudices, and also gave him an interpretive framework with which to pull out of thin air approximate dates for the various authors he had also pulled out of thin air.

And this was a huge hit in academia. Why? There are many threads which went into it, of course, but a significant one is that it was novel. It was new, and fresh, and exciting. It produced an enormous amount of work for scholars to do—someone had to go through the Pentateuch line by line and classify every line according to which theoretical author wrote it.

I think even more than being novel, this produced low-hanging fruit. That is, it made for relatively easy work.

One of the problems that a modern Christian who wants to write about scripture is up against is that he’s implicitly competing with twenty centuries of the most brilliant people in the world, because the brilliant people who had interesting things to say about scripture wrote them down, and Christians valued those writings and kept them and passed them on. There is, at present, an astonishing amount of excellent reading material, if one wants it, available from the saints and doctors of the church. What can one say, today, that has not already been said, and better?

Here, a new theory which overturns everything is the savior of the man in the bottom 99.99% of humanity (i.e. one who is not in the top 0.01%) by genius or talent, or whatever metric one wants to use. With everything overturned, there is new, fresh work to be done that most anyone can do, but no one has done before, because no one could have done it before.

And who will bite the hand that feeds him? Certainly, the academic is not known for this counter-productive activity any more than any of his fellow men are.

So we get a century of people arguing over imaginary authors that they have not a shred of evidence for, mostly because it’s easier and more fun than real work.

From what I understand, a similar thing happened with Plato. At some point someone decided that most of the Socratic dialogues weren’t written by Plato because… they didn’t sound the same to him. Skip forward by about a century, and scholarly consensus once again becomes that Plato wrote the stuff commonly attributed to him. What did the intervening century have to show for it? A lot of sound and fury, signifying nothing.

Oh, and a lot of employed academics.

America’s Sweethearts

I’ve written before about the movie America’s Sweethearts. I would like to add to those thoughts, since I’ve watched it a few more times since then. (It’s one of a handful of movies I watch while debugging code: it helps to keep me from getting distracted while I wait for compiles, and since I know it so well, it doesn’t pull me away from the work, because I always know what happens next.)

One of the very curious things about the movie America’s Sweethearts is that all of its characters are bad. (For those who are not familiar with it, America’s Sweethearts is a romantic comedy.) The movie opens with the information that the titular couple, Eddie Thomas and Gwen Harrison, has split. During the filming of their most recent and now last movie together, Time Over Time, Gwen took up with a Spanish actor and left Eddie. Eddie went crazy and tried to kill them, then retreated to a sort of faux-Hindu wellness center and stayed there.

This is recapped fairly early on; the plot of America’s Sweethearts begins with the director of Time Over Time refusing to show the movie to the head of the studio until the press junket, when the press would see it at the same time as everyone else. This causes the head of the studio to panic and re-hire Lee, the studio’s publicist whom he had fired as a cost-saving measure, to put together the junket, because his talents really do match his salary. The only other major character is Kiki, Gwen’s sister (it’s unspecified who is older; they might even be fraternal twins, which would help to explain shared high school experiences). She’s a mousy creature whose life is mostly taken up with pleasing the whims of her famous sister, but she’s played by Julia Roberts, so you know that won’t last through the end of the movie.

We now have all of the major characters: an adulterer, a lunatic, an unscrupulous businessman, a wimpy woman who lets herself be tyrannized by her awful sister, and a publicist who follows the line which Hercule Poirot’s friends said of him: he would never tell the truth if a lie would suffice.

And what’s really weird is that they’re a loveable cast, and it’s a really enjoyable movie, even though it is not a redemption arc for most of them.

I think that part of what makes it work—apart from the massive charisma of all of the actors, which cannot be overstated as a causal element—is that the characters’ vices, while not repented of, are not excused, either.

The movie has something like a happy ending for about half of the characters in it, but it is very fitting because it’s a very small happy ending. The head of the studio gets a movie which has a lot of legal liabilities but which might make enough money to cover them. The publicist has what is probably going to be a successful movie. The adulterer is embarrassed, but she stays with her Spaniard for whatever that is worth. Eddie and Kiki wind up together, but shortly before they decide to give it a try, Kiki prognosticates that it’s never going to work, and she might well be right.

I think that ultimately what makes the movie work is the subconsciously stoic theme that vice is its own punishment, and so successful vice is still punished vice. America’s Sweethearts is all about people who do not deserve their natural virtues—beauty, fame, wealth, power—who are punished by getting to keep them. But—and this is an important but—the movie is so short that one is left with the hope that the punishment may serve its purpose and the people may in time learn to repent.

This may be the formula for all successful movies about vicious people (that is, people who practice vice). At least where they do not repent. Redemption stories are probably better. But if a story about vicious people is not going to be about their redemption, I think the story of how they are punished by success may be the only other option for a good story.

Because good stories need to be true to life.

Science Fiction vs. Fantasy

In a Twitter thread, I proposed the idea that the main distinction between Science Fiction and Fantasy is whether people prefer spandex uniforms or robes.

I did mean this in a tongue-in-cheek way. Obviously the wardrobe is not the only difference between Science Fiction and Fantasy. The distinction is curiously harder to define than one would first suspect, though.

Before proceeding, I’d like to note that genres are not, or at least are not best considered as, normative things which dictate what books should be. Rather, they are descriptions of books for the sake of potential readers. The purpose of a genre is to say, “if you like books that have X in them, you might like this book”. (The normative aspect comes primarily from the idea of not deceiving readers, but that runs into problems.)

Science Fiction is often described as extrapolating the present. The problem is that this is simply not true in almost all cases. It is very rare for Science Fiction to include only technology which is known to be workable within the laws of nature which we currently know. This is doable, and from what I’ve heard The Martian does an excellent job of this. At least by reputation, the only thing it projects into the future which is not presently known to be possible is funding. This is highly atypical, though.

The most obvious example is faster-than-light travel. This utterly breaks the laws of nature as we know them. Any Science Fiction story with faster-than-light travel is as realistic a projection of the future as is one in which people discover magic and the typical mode of transportation is flying unicorns.

I have seen attempts to characterize science fiction based on quantitative measures of how much of the science is fictional. This fails in general because fantasy typically requires only the addition of one extra energy field (a “mana” field, if you will) to presently known physics. And except for stories in which time travel is possible, the addition of a mana field is far more compatible with what we know of the laws of nature than faster-than-light travel is.

Now, one possibility (which I dislike, and to which I am not committed) is that Science Fiction is inherently atheistic fantasy; that is, fantasy without the numinous. An alternative is that Science Fiction is fantasy where there is no limit to the power which any random human being can acquire.

What I think might be the better distinction between Science Fiction and Fantasy is that Science Fiction is fantasy in which the author can convince the reader that the story is plausibly a possible future of the present. What matters is not whether, on strict examination, the possible future is actually possible. What matters is whether the reader doesn’t notice. And for a great many readers of Science Fiction, I suspect that they don’t want to notice.

In many ways, the work of a Science Fiction writer might be like that of an illusionist: to fool someone who wants to be fooled.

This puts Star Wars in a very curious place, I should note, since Star Wars is very explicitly not a possible future. But Star Wars has always been very dubiously Science Fiction. Yes, people who like Science Fiction often like Star Wars, but this doesn’t really run the other way: people who like Star Wars are not highly likely to like other science fiction. I personally know plenty of people who like space wizards with fire swords who do not, as a rule, read Science Fiction.

Anyway, even this is a tentative distinction between the two genres. It’s not an easy thing to get a handle on, because it’s impossible to know hundreds of thousands of readers well enough to identify the commonalities between their preferences. Even the classifications of books into genres by publishers and bookstores are only guesses, made by fallible people, as to what will get people to buy books.

Murder For Revenge

In broad strokes, there are only a few reasons to murder someone:

  1. Gaining money or other forms of power
  2. To pave the way for love
  3. Revenge
  4. To gain status that properly belongs to the victim
  5. To protect one’s status

These correspond, roughly, to the deadly sins:

  1. Greed
  2. Lust
  3. Wrath
  4. Envy
  5. Vanity

Today I want to consider murder for revenge. It further subdivides into two possible situations:

  1. The murderer is fine with being destroyed in the process
  2. The murderer wishes to suffer no repercussions

The former can make an interesting story (such as the sub-plot in Chesterton’s The Sins of Prince Saradine), but it’s not easy for it to sustain a mystery. The main problem is that the murderer should, by hypothesis, confess. This can, however, be handled.

The first way to handle this is to have the murderer leave. This is hard to make work unless he thinks the crime won’t be discovered and so no explanation is necessary. That can be done, though, especially for historical crimes being discovered and investigated only years later.

The second way to handle this is to have the murderer leave a confession before leaving but to have the confession intercepted by someone who wants to use the occasion to murder someone by framing him for murder. This is a very workable sort of plot, though it will be complicated.

The third major way is to kill the murderer before he can confess. This may be the most interesting option, especially if he is killed by the victim. Of course, if the murderer is murdered by his victim, this will not be mysterious unless at least one of them uses a scheme for which he does not need to be present, which is where the interesting part comes from. It is very hard to suspect a dead man of murder. If there is anyone one will leave off suspecting of a crime, it’s a dead man.

In a Cadfael story (Saint Peter’s Fair) Hugh Beringar remarks that babes and drunks are the world’s only innocents. But this is not an exhaustive list. Who is so incapable of harm as a man already dead?

What’s especially interesting about two people who have murdered each other is that, with any conniving at all, the author can contrive to have everyone suspect them of being murdered by the same person, and this will be a very strange person indeed, to have two enemies with so little in common. It also means that the murders will seem to have been done very craftily when they were in fact done very simply. Or at least one of them will seem that way. There are absolutely wonderful possibilities for misdirection here.

(I really want to write a story like this some day. I probably should first write a story with at least two victims at the start who were killed by the same person, so it’s not obvious, though.)

The other major option, which is more common because it can far more easily sustain a mystery, is for the one seeking revenge to wish to avoid repercussions for his crime. This provides a simple reason for why he does not confess. It can sustain a mystery with little difficulty.

It can, of course, be made far more complex than the simple case. The variation that I suspect is most interesting, or at least that I personally find most interesting, is of introducing the complication of the passage of time. This can either be put between the original offense and the present, or between both the original offense and the revenge, and the present.

Of the two, my favorite is probably the one where the revenge is recent but the offense is in the past. This is probably most classically done with the child who grows up to avenge a parent, but that should possibly be avoided, because it is common enough that, these days, the average reader might count the years since the crime in the past and guess the killer simply based on his age.

It comes to mind that an interesting way around that problem might be to give the murderer some scruple in his revenge, such as waiting for the 18th birthday of the victim’s youngest child, on the theory that his children should not be punished for the crime of their father. Something like that would throw a wrench into figuring out the culprit by simple calculations, at least.

There are more variations on murder for revenge, but this post is getting long enough that I think I’ll leave them for later. Enjoy writing your murder mysteries about revenge, and God bless you.

Dragnet

Something I find interesting on occasion is to look up the history of television shows. Television is a very young medium. Though the device itself was invented in the 1930s, the Great Depression and the Second World War, with their attendant economic privations, meant that televisions were not widely owned until the late 1940s. Without an audience, not much was made to broadcast to it. It was, therefore, really the early 1950s in which television got its start.

This makes it easy to research, but also makes the chain of influences fairly short.

Dragnet actually started as a radio drama, starring Jack Webb as Sergeant Joe Friday. In 1951, it became a television show, with much the same cast as the radio drama, though Friday’s partner had to be changed out partway through. This show lasted until 1959. It was later revived in 1967, this time in color. This is the version which I think most people are familiar with, starring Harry Morgan as Officer Bill Gannon alongside Jack Webb reprising his role as Joe Friday. Certainly it’s the version I’m most familiar with. It lasted until 1970.

There were other versions made, but none with Jack Webb, since he died in 1982 (at the age of 62). In 1987 there was a comedic movie starring Dan Aykroyd and Tom Hanks. It’s almost a parody of the original, though it is not a mean-spirited parody, and I can testify that it is a lot of fun. In 1989 there was a short-lived series called The New Dragnet, and in 2003 there was an even shorter-lived revival series called LA Dragnet.

Though Dragnet was not able to survive in the modern world of police procedurals (or possibly it simply was not able to outlive its star, Jack Webb), it did have an enormous impact on television. Counterfactuals are impossible to state with certainty, but it seems likely that police procedurals would not have the form they have today if Dragnet had never happened.

Episodes of Dragnet, which are (surprisingly) easily found on YouTube, are interesting to watch. The detectives are in the homicide division, so in a very technical sense the cases are murder mysteries. However, they are not detective stories in the sense of Poirot or Agatha Christie. The detectives do a lot of work, of course, but they don’t really do anything particularly clever. They just keep talking to people until they get enough facts to convict the murderer.

What I find curious—given that I’m a huge fan of detective fiction with genius detectives and write some of it myself—is that, bare-bones as Dragnet is, it still satisfies the impulse to see a mystery solved. This is true of modern police procedurals as well. In both cases, they feel somewhat like empty calories—enjoyable while watching but they don’t really have any substance which sticks with one.

This is not true of the great detective stories. Murder on the Orient Express, Have His Carcase, Saint Peter’s Fair—these stories really stick with one. There are interesting ideas in them to chew on long after one’s read them.

But it’s a testament to the human craving for the solving of mysteries that even Dragnet, which was told in an almost deliberately un-entertaining style, still makes you want to watch to the end to find out what happens, if you watch the beginning. This may partially be a testament to the power of charisma, though. I can watch Harry Morgan in just about anything.

Calories In vs. Calories Out

When it comes to the subject of losing weight—more specifically, reducing excess fat stores in the body—it’s fairly common to come across somebody who puts it like this:

It’s just calories in versus calories out. Thermodynamics says that if you take in more calories than you burn, you’ll store them as fat. If you take in fewer, you’ll burn fat. So weight loss is very simple: just burn more calories than you take in. That’s it. Anything else is just people trying to kid themselves that there’s a magic bullet.

This represents a confusion one sees in many fields: making no distinction between the cause of something and the mechanism by which the cause makes it happen. It is quite true that when somebody stores fat in their body, they require energy to make the fat, they can’t also burn that energy, and therefore the amount of energy they took in was higher than the amount of energy they burned. No one, anywhere, disputes this. It’s also entirely uninteresting to the subject of fat gain or loss in people with excess fat.

(NOTE: when talking about healthy people—typically lean athletes—regulating what little fat they have, this simplification is probably accurate. This post is not talking about how a bodybuilder can force his body to levels of fat which are dangerously low, or how an athlete can cut to a lower weight class. Those goals will almost certainly have to be achieved by simple calorie restriction, because they involve manipulating a healthy body into going outside of the homeostasis it wants to maintain for optimal health.)

The question which is actually interesting to the subject of fat gain or loss is why the body stores energy as fat. And this is where the people who love to talk about calories-in-calories-out show their reductionist colors. They will tell you that since fat cells are energy storage, if you take in more calories than you burn, you will necessarily store them as fat. But they give no reason for this, while there are excellent reasons to doubt it.

The reason to doubt that extra calories eaten can only go to fat is that the human metabolism is a highly variable thing. I should clarify what I mean here: by “metabolism” some people mean “resting metabolism”, while I mean “total metabolism”. Our bodies spend calories on a lot of things—walking, talking, maintaining our temperature, repairing our bodies, and other things. Very few of these things are fixed costs. One possible reaction to being in a cold environment is moving more, or just burning energy for heat. Another is feeling cold and putting on a sweater. Those do not use the same number of calories over the course of an hour.

Let’s consider a very analogous system: finances.

If a person makes an additional $1000 per month, it is possible that his bank account will grow by $1000/month. It is also possible that he will start eating at expensive restaurants, and his bank account won’t change at all. On the flip side, a person whose income doesn’t change can decide to stop eating out, and can grow his bank account with no additional income, merely by cutting expenses. And he could do both: he could decide his bank account isn’t nearly large enough, work a second job to bring in an extra $1000, move to a tiny, unheated apartment, and eat nothing but porridge for his meals, so that his bank account swells rapidly.
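To put the analogy in code (a toy sketch; all of the dollar figures are made up for illustration), the point is just that savings are the difference of two variables, and the second variable can move:

    # Toy model of the bank-account analogy (all numbers made up).
    # Savings are a difference of two variables; neither one alone decides it.
    def monthly_savings(income, expenses):
        return income - expenses

    print(monthly_savings(4000, 4000))  # baseline: saves nothing
    print(monthly_savings(5000, 4000))  # extra $1000 income, spending fixed: saves $1000
    print(monthly_savings(5000, 5000))  # extra income, spending rises to match: still nothing
    print(monthly_savings(4000, 3000))  # same income, expenses cut: saves $1000 anyway

The body’s version is the same arithmetic, with “calories out” as the expense column that can quietly adjust itself.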

It’s that last part that’s most interesting to the moment, because it’s what seems to be the case in people who are, shall we say, famine resistant. Because there’s a really fascinating question about people carrying excess fat which is rarely asked: why do they get hungry?

Seriously, why is it that a person with excess fat feels hungry when he has plenty of energy at his ready disposal? That’s not how the body normally works. The human body, when working correctly, tries to maintain a homeostasis. Granted, it’s a homeostasis with more fat than a bodybuilder would like, but the body tends to regulate hunger on the basis of energy availability. Or, in other words, normal people usually stop being hungry when their calories in are roughly equal to their calories out.

At this point, a word is necessary about what we might call the balloon theory of hunger. Basically, it is the model of hunger where the stomach is a balloon with pressure sensors and hunger is merely the pressure sensors detecting whether there’s still room in the stomach to fit something without literally bursting it.

There is some minor truth to this, in that the stomach does in fact have sensors in it which detect the degree to which it is stretched, but a few years of living as a human being should be sufficient to show this model as the rubbish that it is. Consider a few counter-examples:

  1. Dessert. A person can eat until “they’re so stuffed they can’t eat another bite,” then the moment dessert comes out they can somehow fit enough additional food to fill a grapefruit.
  2. Exercise makes people hungry. Starting a new exercise routine can make one feel ravenously hungry for days. Exercise does not drastically increase the size of someone’s stomach in the first few days.
  3. Teenage boys can out-eat their parents combined. I did it often as a teenage boy. (I was on the rowing team in high school and relatively lean, too.) Teenage boys do not have stomachs which are larger than their mother’s and father’s stomachs combined.
  4. Tests show that a stomach can stretch to around the size of an entire human torso before bursting from pressure. They’re incredibly expandable.
  5. People who win hot-dog eating contests do not ordinarily need to eat that much food to feel full.

In short, the theory that being hungry is entirely, or even primarily, about whether your stomach is full is nonsense.

There is also the always-hungry model, which tends to involve some pretend evolutionary biology about humans having evolved in circumstances of constant famine and so we are always hungry in order to pack on as much fat as possible for the next famine which we know is right around the corner.

The main problem with this is that it directly contradicts experience. Americans live in an environment with truly enormous food surpluses always available, and there are plenty of not-fat people who eat until they are not hungry, and who nevertheless do not eat the 10,000+ calories that they easily could, and that this model predicts they would.

In short, a little bit of experience shows that human beings are not normally ravenous eating machines consuming every calorie that they can get their mouths on.

With these models of human hunger out of the way, the question then comes up and is very pressing: why do fat people get hungry?

It is not the purpose of this post to give the answer to this question. Chief among the reasons why is that there are almost certainly many answers to it; people’s energy regulation can get screwed up for a variety of unrelated reasons. Its purpose is only to highlight how important finding an answer to this question is for a person who wants to lose excess fat.

(So as to not completely shirk the question, I think that one of the most common is excessive fructose consumption causing insulin insensitivity in the liver, which cascades into general insulin insensitivity, which then disrupts energy regulation, though even that is probably an over-simplification since in general nothing in biology involves just a single hormone. This model, however, at least corresponds well to my own experience of when I gain and lose weight.)

There’s a really good metaphor for the issue in Tom Naughton’s post Toilet Humor: The How vs. Why of Getting Fat. I’m going to give a variant of this metaphor to keep things more pleasant: the kitchen sink.

Suppose that your sink is clogged and filling up with water and about to overflow. It is entirely true that the problem, in an acute sense, is that there is more water going into the sink than coming out of it. If you applied the standard dietary advice to a clogged sink, you would just drastically reduce the flow of water into the sink until the sink was empty.

And it will work if you do that. Cut off the water, and the sink will eventually not be full of water. Evaporation, if nothing else, will see to that.

There’s just one problem: you have the sink for a reason, and that reason is not merely to keep it empty. You want the sink to do work. And the water-in-water-out approach of just cutting off the water in means that your sink can’t do its job. The correct solution to a clogged sink is not to stop washing your dishes. It’s to find out why it’s clogged and clear the clog. Maybe the drain strainer is full. Maybe the pipe is clogged farther down. Fixing the problem depends on what the problem is, and there isn’t just one possible problem. But whatever the obstruction in the drain, that’s what you need to fix so that the sink can do its job.

Similarly, a human being almost certainly has things to do besides sitting around not being fat. Many of us are parents. Some of us have jobs. A few of us have friends. Whatever it is, we have more to do than just sitting around not being fat. Just cutting off our food without fixing why we’re hungry when we’ve got excess fat is like just cutting off the water to the sink. Whatever you’ve got to get done in life, you’re going to do a bad job.

Further, people who are constantly hungry tend to be irritable, short-tempered, and lethargic. Even if they manage to fulfill their primary responsibilities well (and they’re probably only doing it passably), they’re going to make life less pleasant for everyone around them. I once had a housemate who was doing a calorie-restricted cut, and I was nearly at the point of begging him to stop because he was just so unpleasant to be around during it.

Interestingly, you can see the same sort of indifference-to-function in sports-medicine vs. regular medicine. If an athlete has a problem where something really hurts when he uses it, the conventional medicine approach is to just stop playing the sport and (I’m exaggerating) get months of bed rest. People into sports medicine know that this is hyper-focusing on a mechanism—in this case, rest—while ignoring that the person is a human being with a life. Sports medicine tries very hard to figure out how to restore athletes to normal function in the context of still living life as an athlete and not considering being wheelchair-bound-but-alive to be an equivalent outcome.

So, in conclusion, the real question when it comes to someone who wants to lose excess fat is not how to get rid of excess fat. It’s how to fix the fact that they’re hungry when they shouldn’t be. If you fix that, then the person will certainly lose excess fat—people who aren’t hungry don’t eat as many calories. But they’ll do it while still being a functional human being.

In short: one should treat the problem, not the symptom. To do that, one must first identify the problem.

The Best Laid Schemes O’ Mice an’ Men Gang Aft Agley

This ~~week~~ ~~month~~ summer has really not been going the way I hoped it would. I’m going to talk about why that’s OK, but first I want to quote the stanza from which the title comes, because the original poem, To a Mouse, on Turning Her Up in Her Nest With the Plough, November, 1785, is not quoted often enough:

But Mousie, thou art no thy-lane,
In proving foresight may be vain:
The best laid schemes o’ Mice an’ Men
          Gang aft agley,
An’ lea’e us nought but grief an’ pain,
          For promis’d joy!

So, the reason for the strike-through up above is that I began this post in, if my memory serves me, July, and I am now finishing it in August. Between various things, mostly family related, as well as an annual trip to visit my parents, most things have gotten pushed to the side. About the only creative thing I’ve managed to do is work on the second chronicle of Brother Thomas, Wedding Flowers Will Do For a Funeral.

On the plus side, I’ve finished the first draft and, as of the time of this writing, have edited the first 100 pages (actually, 99¼, but the word processor is on page 100). It’s going slower than I would like, of course, but that’s something of a theme, lately.

And just to make life more crowded, I’m finally going back to the gym to lift weights 3 times a week. In the long run, it’s very good that I’m doing it, but it means even less time.

And that’s OK.

I’d really like to be a lot more productive on this blog and on my YouTube channel. I’ve got a notepad of videos to do which is up to about 10 items now. It’s a backlog. And I’ve got tons of blog posts to write. I want to finish reviewing the Lord Peter Wimsey novels, I want to review all of the Cadfael novels, and after that, probably the Poirot novels. I want to talk more about mystery writing, and I’ve got lots of things to write about theology and philosophy, too.

And, God willing, some day I will.

But it’s that first part that’s really important to keep in mind. It’s our job to do our best; it’s God’s job to figure out whether—and how—we should succeed. Running the world is a big and complex task, and God doesn’t ask of us that we do it. All He asks of us is that we do our best to do what He’s given us to do in the moment.

So, the world frequently doesn’t turn out like we expect. But we can trust that it does turn out for the best.

That’s really all we can ever do: do our best and trust God.

Interesting Video On Why Germany Lost World War II

In an interesting video, TIK talks about Germany’s access to oil and oil supplies, why these dictated its actions during World War II, and why they made its downfall all but certain.

It is said that when it comes to war, amateurs think in terms of tactics and professionals in terms of logistics. This is related to the saying that an army marches on its stomach; that is, if it’s not fed, it doesn’t fight.

Feeding and watering an army—both men and horses—has been the concern of generals for thousands of years. (Horses were often relatively self-sustaining, since they eat grass, but they do better on grain if you want them to be constantly working.) Thus tactics like burning crop fields during retreat, so as to starve an invading army.

World War II was in many ways the first truly mechanized war, and thus the problem of logistics expanded into the economic sphere. Machines are produced only by a thriving economy, and machines run only on oil. In order to fight an effective mechanized war, one must have a strong economy and lots of fuel.

This, by the way, has strong social implications outside of war. In order to remain in peace, one must have the strength to defeat attackers. In order to do this in the modern context of mechanized warfare, one must have a high-production modern economy. One doesn’t need to be able to produce the weapons of war oneself, but one must be able to buy them. That requires a modern economy, which requires at least much of modern social organization.

Those who want to bring back the good parts of traditional social organization need to understand this well. Whatever form modern society takes, it must be one that powers a modern economy which can power a modern army. If it’s not, it will be short-lived.

Studies That Test Diets And Compliance

I’ve had good results using an extremely low-carb (i.e. low carbohydrate) diet to lose weight, so I’m highly skeptical whenever a study shows that such diets don’t work. There are studies that show that they do, too, in addition to my experience, so something is going on when one has highly conflicting studies. The only thing to do is to actually dig into the studies.

And the thing one finds with many of the “low carb” diets in such studies is that they are frequently quite high carb. “Low carb” will often be defined as less than 100 grams of carbohydrate per day. People who have success eat well under 50, and frequently less than 20, grams of carbohydrate per day. Testing a diet with 5-10 times as much carbohydrate as a real “low carb” diet simply doesn’t tell anyone anything useful.

But another big problem one sees is studies which test compliance at the same time they test efficacy. That is, the study breaks people up into groups and tells them what to do, but then records what they do as part of the group they were assigned to. So if someone in the low carb group eats nothing but pasta, his weight performance will count toward the low carb diet average in that study.

There are legitimate reasons for this, but they’re all for medical practitioners. Basically, such studies are useful for knowing how likely one is to see results if one prescribes a diet to all of one’s patients. Great for doctors, useless for the rest of us.

The other problem is that we largely already know what compliance with any behavioral change is in human beings: very low. It doesn’t much matter what you’re talking about; people don’t, typically, change for the better.

Where this is really egregious is where people look at these studies and don’t distinguish between the efficacy of the behavioral change and the degree to which the study told us what we already know about human beings: they don’t comply.

Hell, the compliance rates on taking a single pill a day are far from perfect; just look at all the people one knows who forget the pill from time to time. The compliance rate on 2x, 3x, and 4x pills per day is progressively worse, just from simple observation. Who, having been prescribed a pill 3x per day, actually manages to take it 3x per day for all the days of the prescription?

When this comes to bigger stuff like diet and exercise, a simple and only somewhat inaccurate model is that people don’t comply. So a study which measures compliance + some change will mostly show no effect. But that’s uninteresting for people who will actually change.
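Here is a back-of-the-envelope simulation of that dilution. Every number in it (the size of the true effect, the compliance rate, the noise) is assumed purely for illustration, not taken from any real study:

    # Toy simulation of a study that measures "diet + compliance" together.
    # All numbers are assumed: suppose the diet genuinely works for
    # compliers, but only 10% of the assigned group actually complies.
    import random

    random.seed(0)

    TRUE_EFFECT = 20.0      # assumed weight loss (lbs) for someone who complies
    COMPLIANCE_RATE = 0.10  # assumed: roughly 90% of people don't comply
    N = 10_000

    total = 0.0
    for _ in range(N):
        complied = random.random() < COMPLIANCE_RATE
        noise = random.gauss(0, 5)  # ordinary weight fluctuation
        total += (TRUE_EFFECT if complied else 0.0) + noise

    print(f"measured average effect: {total / N:.1f} lbs")
    # Prints roughly 2.0 lbs: a 20 lb effect in compliers, diluted to
    # near-nothing by averaging over everyone assigned to the diet.

This is also why separating groups by compliance requires much larger studies: at 10% compliance, you have to enroll ten people to observe one complier.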

Consider other areas of life: lifting weights or running. If you did a study to find out whether lifting weights makes you stronger, in which you also measured compliance, you’d find out that lifting weights doesn’t make you stronger. If you did a study measuring whether running makes you a better runner, which also measures compliance, you’d find out that running doesn’t make you better at running. Hell, as long as the study is also measuring compliance, you’d find out that practicing piano doesn’t make you play piano better and taking dance lessons doesn’t teach you how to dance. Because in all these studies, the fact that most people stopped lifting weights, running, and practicing piano, and never went to the dance lessons, would dominate the results.

Or to put it simply, doing something only has an effect if you actually do it. No kidding.

Which is why what we need are studies which also measure compliance and separate people into groups based on compliance. This does introduce problems. Probably the biggest problem is that it will cost a lot of money because it will require really large groups of people. With 90%+ of people non-complying, you need a ten times larger group of people to study, and that costs a lot of money.

The second problem is that this switches out measuring the efficacy of the diet (or whatever) together with compliance for measuring the efficacy of the diet together with whatever preconditions (genetics, preferences, etc.) make one likely to actually stick with it.

However, this is clearly a much more useful thing for an individual to measure. If I’m considering lifting weights, I want to know how much stronger I might get if I can stick with it. If I find I can’t stick with it, I don’t really care what it would do, anyway. And I don’t much care why I can stick with it, either.

If it turns out that only 10% of the population can stick with some diet, then I will consider taking my chances on finding out if I’m in the lucky 10%. Weight lifting works that way, to a limited degree. Everyone can get somewhat stronger, but only a fraction of the population can get hugely strong.

But there’s another issue at play, which has to do with motivation: knowing that something will work if I stick to it makes it vastly more likely that I will stick to it. If I actually believe that there is a causal connection between an action and a benefit, it is much easier to keep doing the action until I get the benefit.

Which is yet another reason that studies which measure compliance as well as an effect are worthless: the study participants didn’t know whether sticking to the plan even had any potential benefit.

So, in short, when it comes to studies showing no benefit to something, always check to see whether it’s a study that’s just telling you that human beings rarely change. It’s not completely worthless, but it’s only telling you what you already know.

The First Mary Sue

The first Mary Sue was a character in a parody of Star Trek fan fiction, published in the fanzine Menagerie in 1973. (Fanzines were magazines, often distributed by photocopying them and handing out the results, but always made cheaply and without advertiser sponsorship, and typically given away for free or for a nominal charge to cover the cost of printing.) The parody was called A Trekkie’s Tale. It’s only a few paragraphs long, so I’ll quote it in full:

“Gee, golly, gosh, gloriosky,” thought Mary Sue as she stepped on the bridge of the Enterprise. “Here I am, the youngest lieutenant in the fleet – only fifteen and a half years old.” Captain Kirk came up to her.

“Oh, Lieutenant, I love you madly. Will you come to bed with me?” “Captain! I am not that kind of girl!” “You’re right, and I respect you for it. Here, take over the ship for a minute while I go get some coffee for us.” Mr. Spock came onto the bridge. “What are you doing in the command seat, Lieutenant?” “The Captain told me to.” “Flawlessly logical. I admire your mind.”

Captain Kirk, Mr. Spock, Dr. McCoy and Mr. Scott beamed down with Lt. Mary Sue to Rigel XXXVII. They were attacked by green androids and thrown into prison. In a moment of weakness Lt. Mary Sue revealed to Mr. Spock that she too was half Vulcan. Recovering quickly, she sprung the lock with her hairpin and they all got away back to the ship.

But back on board, Dr. McCoy and Lt. Mary Sue found out that the men who had beamed down were seriously stricken by the jumping cold robbies, Mary Sue less so. While the four officers languished in Sick Bay, Lt. Mary Sue ran the ship, and ran it so well she received the Nobel Peace Prize, the Vulcan Order of Gallantry and the Tralfamadorian Order of Good Guyhood.

However the disease finally got to her and she fell fatally ill. In the Sick Bay as she breathed her last, she was surrounded by Captain Kirk, Mr. Spock, Dr. McCoy, and Mr. Scott, all weeping unashamedly at the loss of her beautiful youth and youthful beauty, intelligence, capability and all around niceness. Even to this day her birthday is a national holiday of the Enterprise.

The story was originally attributed to “Anonymous” but is known to be the work of the editor, Paula Smith. The basic story was a common submission; as such, the parody is a collection of common features, exaggerated. It’s very interesting to look at those features.

  1. Main character is a teenage girl.
  2. She’s beautiful and wonderful.
  3. Everyone loves her.
  4. She dies and everyone laments her death.

The standard meaning of “Mary Sue,” used as a criticism of a character in a work of fiction, is to impute that the character is an authorial stand-in for the purpose of wish fulfillment. And while the original Mary Sue is an authorial stand-in, the story is actually more of a Greek tragedy: Mary Sue is initially blessed by the gods, but when she tries to climb Mount Olympus she is cast down and destroyed.

Among the criticisms heaped on the Mary Sue character is that her excellence is always unearned. She appears out of nowhere in fully formed perfection and everyone loves her just for being her. This is generally derided as being horribly unrealistic.

And it is.

For men.

It should not be glossed over that Mary Sue stories are written by teenage girls about themselves. If Mary Sue is realistic to teenage girls, it would be utterly unsurprising that she would be unrealistic to adult men. So, is she realistic to teenage girls?

And here I think that the answer is: yes, actually.

The onset of puberty in a girl does come from nowhere, and transforms her into something beautiful and wonderful, that is, an adult woman capable of bearing children. And everyone loves her, at least if by “everyone”, you mean males, and by “love,” you mean “is interested in”.

A newly adult female is bursting with potential and, as such, everyone is (suddenly) very interested in her and what she does with this potential. It’s not always as benign and comfortable as in the Mary Sue story, of course, but life rarely is as comfortable as fiction.

And if we look further at the inspiration for Mary Sue, we also see why she had to die. Potential cannot last forever in this world. If Mary Sue does not choose a mate, she will eventually hit menopause and cease to have any potential (in the relevant sense; she might still have potential in a thousand other ways, of course, but an allegory only ever describes one aspect of life). If she does choose a mate, she will have children and her potential will be reduced by turning into actuality. But actuality is, in a fallen world, never as interesting as potential; Mary Sue with children does not excite the universal interest which Mary Sue without children did. (In a healthy society she excites respect, instead, but that’s a topic for another day.)

And so it must be that, not long after Mary Sue is blessed by the gods, she is cast down by them, too; Mary Sue cannot remain universally loved for long.

The story of Mary Sue leaves off at the most important part, since after all it was a parody, but that part is worth mentioning. That the first flower of youth cannot last is something all people must come to terms with. Some will forswear one actuality for another, as in the case of nuns, who cover themselves to hide their potential so people may forget it. Others will give up their potential by trading it for actuality; an actuality which is flawed because we live in a flawed world, but still a real actuality that’s better than the nothingness of pure potentiality.

They both require faith, but all good things require faith. Trying to remain in potentiality is trying to eat one’s cake and still have it afterwards. It promises happiness that it will never deliver.

I think it’s well to remember that the story of Mary Sue is only a bad story if it’s really the story of a man or of an adult woman. And that remains true even if a young woman is cast in the part.

Real Lawyer Reacts to My Cousin Vinny—And Likes It!

I ran across a really curious video on YouTube where a (putatively real) lawyer examined the movie My Cousin Vinny and talked about how accurate it was. To my great surprise he said that—allowing for parts that were obviously just comedic—it was actually very well done and parts of it could be used for teaching lawyers!

If you’ve never seen it, by the way, I highly recommend the movie My Cousin Vinny. It’s a ton of fun and has a lot of quotable lines.

Rearranging Deck Chairs on the Titanic

The common phrase that something is like “rearranging the deck chairs on the Titanic” is often taken to mean “putting one’s effort where it won’t do any good,” but it has another, slightly more subtle meaning: futility. (I’m writing this post because a friend was so used to the first meaning that he hadn’t thought about the second, and what one man has done, another might do.)

Once the Titanic has been hit by the iceberg, there are two reasons why it doesn’t matter how the deck chairs are arranged:

  1. No one is going to sit on them while the boat is sinking.
  2. Once the boat sinks, their arrangement will be destroyed by the water washing the deck chairs away from the deck.

Rearranging the deck chairs on the Titanic, therefore, suggests an activity which is not only secondary to one’s primary concern but moreover doomed to have no effect whatever.

You can see this by contrasting the Titanic, which sank, to a ship lost at sea where the rations have run out and the crew is starving. Rearranging the deck chairs will not give them food, but they might still take comfort sitting on them in a better arrangement, and whoever eventually finds the empty ship could take advantage of a particularly well thought out arrangement of the deck chairs which has remained after its first crew can no longer use them. (In theory, though admittedly not likely in practice.)

Nerf Gun as Cognitive Behavioral Therapy

Here’s an interesting post about some creative cognitive behavioral therapy. It’s not that long but out of courtesy I don’t want to quote the whole thing. Here’s the key setup:

i say, are you gonna shoot me with a nerf gun in this professional setting.
he happily informs me that that’s really up to me, isn’t it. and sits back down. and gestures, like, go ahead, what were you saying?
and i squint suspiciously and start back up about how i’m having too much anxiety to leave the house to run errands, like it was a miracle to even get here, like i’ve forgone getting groceries for the past week and that’s so stupid, what a stupid issue, i’m an idiot, how could i–
a foam dart hits me in the leg.


There’s a curious issue brought up in the specifics of the example linked. Self-criticism is a very important ability. People who can’t diagnose their own faults can’t improve, and worse, tend to blame everyone but themselves, which has a strong alienating effect. Yet, in the example in the link (and partially quoted above), what’s being done is not really self-criticism. It looks like it because the language is negative, but it’s, to use modern cant, disempowering. That is, it makes the one being criticized helpless.

It does this by attributing the failing, not to the will, but to the intellect. That is, it places the defect in the origin, not in the execution. By placing the defect in the origin, nothing can be done about it. A bad tree can’t produce good fruit, or perhaps more aptly, you can’t get blood from a stone.

The problem, in short, is that every time the person complains about himself, he’s giving up. He’s saying, not how he can do better, but that he can’t do better. And this is, indeed, the exact opposite of doing better. What he rephrases his complaint into illustrates the point nicely:

i say, slowly, it’s– not a stupid issue, i’m not stupid, but it’s frustrating me and i don’t want it to be a problem i’m having.

This reframes it from despair to frustration, i.e. from having given up to facing one’s problems. Giving up may look like facing problems, but in reality it’s the exact opposite. It’s burying one’s head in the sand so that one doesn’t have to face one’s problems. It is the false hope that one can fix problems without facing them, pretending to be facing them.

You see this a lot with problems; non-solutions love to pretend that they’re actually solutions.

This is related to why my favorite of the baptismal vows is, “Do you reject Satan? And all his empty promises?”