Dorothy L. Sayers and Clever Murders

Dorothy L. Sayers, with her famous detective Lord Peter Wimsey, is best known for writing literary detective novels, while Agatha Christie is known for writing clever detective novels. Until we come to Gaudy Night, however, Dorothy L. Sayers's writing more literary than clever novels was not really for lack of trying. As she said in her chapter in Titles to Fame:

When in a light-hearted manner I set out, fifteen years ago, to write the first “Lord Peter” book, it was with the avowed intention of producing something “less like a conventional detective story and more like a novel.” Re-reading Whose Body? at this distance of time I observe, with regret, that it is conventional to the last degree…

Whose Body? was conventional not merely in the form of its dialog and the actions of its hero—the best example that comes to mind is that Lord Peter took measurements and examined all manner of things carefully with a magnifying glass. Whose Body? was also conventional in that the mystery had, at its heart, a clever twist. As I alluded to before, she would keep this up for most of the Lord Peter novels until she got to Gaudy Night. The thing I find curious is that, unlike Agatha Christie, the twists mostly wouldn’t have worked. (If it’s not obvious, spoilers will follow.)

Whose Body? is the main exception to the twists not actually working, because I think it would have worked. A surgeon with access to cadavers for dissection could probably have made the switch and done the relevant dissection work well enough to get the head to look like it fit on the wrong body.

In Unnatural Death, the murder weapon—injecting air into the veins—would not be much of a problem at all unless the syringe was comically large. One estimate I saw was that it would need to be the size of a bicycle pump. Since the victim was drugged at the time of the injection, this is not an entirely insurmountable problem as the murderer had time to pump air in with many strokes, but that would be exceedingly difficult to do without making the injection site obvious, which it needed to not be.

The Unpleasantness at the Bellona Club put the twist in the domain of human relations rather than in the method of murder itself, which meant that the murder would have worked. That said, I am dubious that forensic science in 1928 could measure the amount of digitalis in a person’s blood post-mortem, especially since according to Wikipedia digitalis was first isolated in 1930.

Strong Poison relied on the murderer being able to develop a tolerance to lethal doses of arsenic and thus to give himself a lethal dose at the same time as his victim, by poisoning a shared meal. While this was believed to be possible in the 1920s and 1930s, it turns out to not be possible at all. (The evidence that had been used at the time was the “arsenic eaters” who would eat large lumps of arsenic. It turns out that the thing that saved them was not tolerance but rather the lack of bio-availability of arsenic eaten in lump form. While they were consuming large doses of arsenic, they were also excreting virtually all of it in their solid waste. This does not apply to arsenic dissolved into liquid and put in an omelette, which would have been as fatal to them as to anyone else.)

The Five Red Herrings has as its twist the forging of a railway ticket which, in some strange way, provided an alibi. This one might work out, for all I know; it depends upon the details of the working of the Scottish railway system in 1929 or 1930, which is a thing I doubt is knowable with certainty in the year of our Lord 2023. I couldn’t stand anything about this book, and I still don’t know how I feel about the twist ending making the unbearable time-tables pointless. That said, “he forged the railway ticket” isn’t really a clever twist. Anyone could do it. It’s just in the category of “This obvious thing was surprising because I thought it was against the rules.”

Have His Carcase is a brilliant book and quite possibly my third favorite Lord Peter novel (after Gaudy Night and Busman’s Honeymoon.) The twists and turns are done extremely well, with evidence of suicide and evidence of murder alternating masterfully. The solution of hemophilia is both not-obvious and well-laid. The problem, though, is that I don’t think that blood behaves the way that it was described in the book. Granted, I’ve never slashed a healthy man’s throat on a hot rock in the sun but I’ve butchered deer and not cleaned up until the next day and the blood looked liquid enough. Even if human blood behaves differently, the timing doesn’t work out. Harriet took about twenty minutes to take pictures and collect things from the body such as a shoe. It was stated that for the blood to be in the condition described the man could have been killed ten minutes before at the outside. Thus either Harriet should have noticed the blood clotting as expected before she left twenty minutes after finding the body, or else blood doesn’t actually clot that way, or else Harriet mistook what clotted blood looked like, or else something was wrong with the blood. Whichever alternative you prefer, the characters should really have known that the timing was not as tight as they thought. That said, it was great to watch the characters deal with the problem of contradictory evidence and persevere.

Murder Must Advertise doesn’t really have a twist, so it’s an exception to the rule. It does have a massive drug-gang and action which is almost more in the realm of the spy-thriller than the detective story, which I suspect takes the place of the twist. That said, using a slingshot to hit someone in the head with a stone scarab in order to knock them unconscious so they die by falling down the stairs is… an uncertain way to commit murder. It could certainly work—blows to the head can be surprisingly fatal. That said, if I wanted to commit murder, hitting a moving target in the head with an irregularly shaped rock using someone else’s slingshot would not be high on my list of methods. It would be too easy to miss the vital few square inches and then there would be a lot of explaining to do.

The Nine Tailors is, perhaps, my second-least favorite of the Lord Peter stories, so I’m probably not the best person to do it justice. That said, the twist in it was that the death was accidental, not intentional. The victim had been left tied-up in a belltower and couldn’t be retrieved before an hours-long bellringing event, and the loud noise killed him. The problem is that a bell, even close by, isn’t nearly loud enough to kill. To rupture the eardrums, maybe. To cause long-term hearing loss, sure. But to kill with sound requires sound energy approximately on par with explosions—or being way too close to a jet engine. (Sound at that level kills by causing air embolisms in the lungs; it does not kill through the ears.)

Then we come to Gaudy Night, which had no twist at all, and I think was also the greatest of the Lord Peter novels. It’s not perfect, but it is a masterpiece.

In fairness, I should mention that Busman’s Honeymoon did have a twist, or at least a very clever trap used to commit the murder. While it would have worked to kill the victim, I am a bit dubious that it could have been set up quite as described without the victim noticing, despite his age and it being dark. This is a minor quibble, though, since the basic premise was sound, and it would not have been too hard to have made the trap less obtrusive.

I don’t really know what to make of all of this, other than that the clever mystery seems to have been very much in the water during the golden age, so much so that even writers who set out to not write them still ended up including elements of them. I don’t mean that there’s anything wrong with the clever mystery, either—Agatha Christie did them brilliantly. To some degree I’m just “thinking out loud” as I find it curious that Dorothy L. Sayers wrote them even though it was not really her thing.

What If Jessica Killed All Those People!

I don’t know why there are so many people who think it would somehow be clever if Murder, She Wrote ended by revealing that Jessica actually killed all of the people and framed those who supposedly did it in each episode. Not only is this an obvious joke, it would be a truly terrible ending.

It is a comparatively minor objection, but this would require contradicting many of the episodes; in probably more than half of them, the murderer confesses at the end. I suppose you could get around that by making it a science fiction show and giving Jessica mind control powers.

Which brings me to the real problem with this: it’s completely wrong for the genre. If the writers did this, it would just make Murder, She Wrote a comedy, or perhaps some sort of psychological horror show or something. What it would not be is a mystery show.

In this spirit, I’d like to propose a few similarly terrible final episodes for Star Trek: The Next Generation.

  • Picard wakes up from surgery after being stabbed by the Tusken Raider (er, Nausicaan). He’s still a young ensign. He goes back to the bar and all of the main cast are patrons or servers. (Guinan is a dance hall girl wearing mostly huge pink feathers.)
  • Picard wakes up from taking magic mushrooms in a Native American ceremony and the rest of the cast are his fellow shamans. All of the dialog is in Navajo, with subtitles.
  • Instead of the poker game, we flash to a bunch of kids playing in the back yard of a late-1960s house, with the suggestion that they’ve just watched an episode of the original series and TNG has all been them playing pretend. One of the kids is named Beaver. Another is named Dennis.
  • It is revealed that the entire show was actually the Star Trek fanfic of a prematurely balding teenager who looks kind of like Patrick Stewart, which he just finished reading to a hot girl who then flat-out rejects him the moment he’s done, and he commits suicide by drinking a mixture of crushed glass and wine.
  • In the final episode, it is revealed that the main cast are all patients in a lunatic asylum in the 1980s, and Q is actually one of the nurses.
  • We pull back from the final episode to see Lee Harvey Oswald in a chair with one of those things on his face holding his eyes open. It turns out that TNG was actually a torture device that people wearing grey robes used to break down his will and make him complacent. We hear him saying, over and over, “I will kill President Kennedy.” This fades to black and Leonard Nimoy walks out and says, “And perhaps this was how the Illuminati killed JFK.”

Aren’t those all brilliant? Wouldn’t it have been so cool if they’d done those instead of what they actually did? Can there be a better way to end a long-running TV show than with a giant out-of-genre F.U. to the fans?

Yes, actually. There can be.

Admittedly, though, this still isn’t as bad as the people who blame superheroes for the actions of supervillains.

GPT4 And Movie Scripts

I asked a friend to have GPT4 (the successor to chatGPT) write a prompt for itself to write a movie script. Here’s the prompt it came up with:

Craft an engaging and original movie script that revolves around an enigmatic object with mysterious powers. The story should include a diverse and dynamic ensemble of characters, each with their own unique backstory, who must come together to uncover the truth behind the object. The plot should feature unexpected twists and turns, with a focus on character development and emotional depth, while incorporating elements of suspense, action, and heartwarming moments. Please provide a complete script, including character dialogues, scene descriptions, and visual details.

I love how generic this description is. To a great degree that’s what chatGPT/GPT4 specializes in; it’s an engine for generating the most probable text that the people who wrote what it was trained on would say. That is, it is a cliché engine.

I should also note, in fairness, that chatGPT/GPT4 is woefully underpowered to generate a full movie script. Those are usually about 100 pages long (the rule of thumb is 1 minute of screen time per page for a competently written script), while chatGPT/GPT4 is limited to 4096 “tokens” (where a “token” is defined a bit weirdly, almost more like a syllable than a word, though not quite). That said, it’s interesting to take a look at what little it can do.

So, here’s the script generated from the prompt:

Title: The Enigma of the Eon Orb

INT. MUSEUM – NIGHT

A dimly lit room is filled with ancient artifacts. We focus on a particular exhibit: a small, glowing ORB, encased in glass. This is the EON ORB.

Right off the bat, this is not right for a movie script. A movie script describes what the audience sees and hears; it does not give exposition to the reader as if it’s a narrator in a novel. Movie scripts are essentially the blueprints for creating a movie; you can’t do anything with “This is the EON ORB.” You could probably get away with “The camera focuses in on the exhibit with the EON ORB, though it’s only labeled ‘mysterious artifact.’” That lets the prop department know which prop to use.

EXT. MUSEUM – NIGHT

Establishing shot of the museum. A heist is taking place; masked THIEVES break in and steal the Eon Orb.

Again, this is not how movie scripts work. In a movie script you have to say things like how many people we see, vaguely what they look like, and it needs to describe in detail what they do. This is more like the prompt you might give to an improv group.

Further, how are we supposed to see that they steal the EON ORB from outside the museum? There is a lot of detail available in HD movies these days, especially if you spring for the 4K version, but showing a heist inside a museum from the outside of the museum is asking a lot even of 4K.

CUT TO:

INT. CIA HEADQUARTERS – DAY

AGENT JAMES PARKER (40s, experienced, determined) is briefed about the theft. He learns about the Eon Orb’s mysterious powers and its ability to manipulate time.

Again, not how movie scripts work.

Also, if people knew about the Eon Orb’s mysterious powers and its ability to manipulate time, why was it just left in a museum, rather than in a securely guarded vault?

PARKER
(serious)
We need to assemble a team to retrieve this artifact before it falls into the wrong hands.

Who is he talking to? Why does he believe that it has not already fallen into the wrong hands?

CUT TO:

INT. VARIOUS LOCATIONS – DAY

That’s not how movie scripts work! You can’t cut to several different places. Or is this meant to be split-screened like the Ang Lee Hulk movie, which I never saw but heard was unwatchable because he showed several scenes simultaneously in split-screen?

Parker recruits a diverse group of experts for his team:

This is more like a summary than a script, and barely even that.

DR. OLIVIA MARTINEZ (30s, intelligent, strong-willed), an archaeologist with knowledge about the Eon Orb
LENA JONES (late 20s, athletic, tech-savvy), a skilled thief with a troubled past
RAYMOND “RAY” SMITH (40s, charismatic, mysterious), a history professor with expertise in ancient artifacts
RUBY ZHANG (early 20s, quiet, empathetic), a gifted artist who can recreate any object she sees

OK, the archaeologist with knowledge about the Eon Orb I get. A history professor with expertise in ancient artifacts… I guess is defensible. Why is the CIA recruiting a tech-savvy skilled thief with a troubled past? Don’t they have tech-savvy people with thief-like skills but without the troubled past in the CIA already? And what on earth could the point be of recruiting an artist who can recreate any object she sees? Is she supposed to make commemorative figurines to help them remember their adventures once it’s over?


INT. CIA HEADQUARTERS – DAY

The team gathers, and Parker briefs them on the mission.

Yeah, this isn’t how you write a movie script, but it’s also pointless because it’s summarizing one of the few lines of dialog that comes immediately after.

PARKER
We have intel that a powerful criminal organization is behind the theft. Our mission is to retrieve the Eon Orb before they discover its true power.

The team agrees to work together, despite their differences.

GPT4 really believes in “tell, don’t show.” That said, why does having different jobs mean that they have to agree to work together “despite their differences”? Also, isn’t that what they did when they were recruited? Were they only recruited to hear the briefing and then they’d decide on what they were committing to?

EXT. VARIOUS LOCATIONS – DAY

The team travels to different parts of the world, following clues and leads. Along the way, they encounter dangerous situations and form strong bonds.

This is impressively generic, even by chatGPT standards. Also, it’s not a movie script. I need a macro for that…

INT. ABANDONED WAREHOUSE – NIGHT

The team discovers the ORGANIZATION’s hideout. They devise a plan to retrieve the Eon Orb.

The place where they discover the hideout of the ORGANIZATION (what a creative name for a criminal organization!) is outside of an abandoned warehouse? This is also where they devise their plan? They walked up to it planless and only when they were sitting around outside did they devise a plan???

Also, why is the team who is going to infiltrate a hideout in an abandoned warehouse a CIA agent, an archaeologist, a history professor, a thief with a troubled past, and an artist? Remember, these weren’t thrown together by coincidence, they were recruited by the CIA agent. When it comes to sneaking into a criminal organization’s hideout, there’s no one he’d rather have by his side than these people with (possibly aside from the thief) no relevant skills?

INT. ABANDONED WAREHOUSE – NIGHT

The team infiltrates the hideout in an intense action sequence. They succeed in retrieving the Orb, but they accidentally activate its powers.

CUT TO:

EXT. ANCIENT CITY – DAY

The team is transported back in time to an ancient civilization, where they meet the original GUARDIANS of the Eon Orb. They learn about the Orb’s true purpose: to maintain balance between the past, present, and future.

GUARDIAN
You must return the Orb to its rightful place in your time. The balance of the world depends on it.

Wow. What a plot. So the Eon Orb’s purpose is to maintain the balance of past, present, and future. It does this by being in the time where it’s supposed to be. So why does it have a user interface which you can accidentally trigger that sends it to the wrong time? Seriously. The thing’s function is best accomplished by not doing anything with it. Why does it have the ability to allow people to screw that up when literally nothing good can come of it? That would be like putting a self-destruct button on a sandbag used for holding floodwaters back. That’s not helpful.

The team faces moral dilemmas and confronts their personal pasts. They grow and learn from their experiences.

Man did this follow the prompt to “feature twists and turns with an emphasis on character development”.

INT. ABANDONED WAREHOUSE – NIGHT

They return to the present, but are discovered by the Organization. A thrilling chase ensues.

I guess we’re supposed to take the script’s word for it that the chase was thrilling.

EXT. CITY STREETS – NIGHT

The team is cornered, but they use their individual skills to outsmart the Organization and escape.

I really want to know which skill the history professor used to outsmart the Organization and escape. No, scratch that, I want to know which object the soft-spoken artist copied perfectly during a thrilling chase which allowed her to escape.

INT. MUSEUM – NIGHT

The team returns the Eon Orb to its rightful place, restoring balance to the world.

The script never actually mentioned anything bad happening because the Eon Orb was gone, but OK.

EXT. MUSEUM – DAY

The team says their goodbyes, having formed lifelong bonds.

MARTINEZ
We’ve come a long way. I’m proud to call you all my friends.

RAY
Our paths crossed for a reason. I’ll never forget our journey together.

LENA
Who knows? Maybe we’ll cross paths again someday.

RUBY
(softly)
I hope so.

PARKER
Until then

Oh wow is that dialog original!

GPT4 suggests that the title of the movie should be “The Enigma of the Eon Orb,” but I suspect “Action Team Cliché: Origins” would be a better title.

I guess this wasn’t an entirely fair request. It’s not easy to write an original, engaging, heart-warming movie that focuses on character development. Let’s give it something easier.

Here’s GPT4’s attempt at a script for a sequel to Star Wars Episode VII: The Force Awakens:

Title: Star Wars Episode VIII: Shadows of the Past

INT. RESISTANCE BASE – DAY

Commander POE DAMERON is briefing a group of pilots, including REY and FINN.

POE
We’ve discovered an old Imperial outpost on the distant planet, Tython. Our intel indicates that there might be valuable information on the whereabouts of other potential Force-sensitive individuals.

REY
We can’t let the First Order get their hands on that information.

FINN
Agreed. We have to get there first.

EXT. SPACE – DAY

The Millennium Falcon and several X-wing fighters blast off towards Tython.

EXT. TYTHON – DAY

The Resistance team arrives on Tython and enters the ancient outpost.

INT. IMPERIAL OUTPOST – DAY

The team explores the outpost, uncovering hidden rooms and ancient artifacts.

REY
(sensing something)
There’s a strong presence of the Force here.

FINN
Let’s find that information and get out of here.
As they continue to search, they discover a holocron with the data they need. However, they also awaken an ancient DARK SIDE ENTITY, which attacks them.

INT. IMPERIAL OUTPOST – BATTLE

The team fights the dark side entity. Rey’s powers are tested as she struggles to protect her friends.

EXT. TYTHON – DAY

The team escapes the outpost, but the entity follows them. A massive battle ensues between the Resistance and the dark side entity.

REY
We can’t defeat it alone. We need help.

POE
Rey, you have the power within you. Trust the Force.

Rey takes a deep breath and focuses her energy. She harnesses the power of the Force and defeats the entity.

EXT. RESISTANCE BASE – DAY

The team returns with the information and is greeted by General Leia Organa.

LEIA
Well done. With this information, we can find and train others like you, Rey.

FINN
The First Order won’t know what hit them.

REY
We’ll be ready.

The movie ends with the promise of a new generation of Jedi and the hope of defeating the First Order once and for all.

Yup. It was better than the one Rian Johnson wrote. (If you’re curious, here’s my twenty-seven-thousand-word review of Star Wars Episode VIII: The Least Jedi.)

Of course, that’s a very low bar to clear. You have to dig in order to find it.

Agatha Christie and Miss Marple

I’m going out on a limb a little, but Miss Marple is perhaps as iconic a character as Hercule Poirot. At least, nearly everyone who has heard of Poirot will have heard of Miss Marple. And while I think Poirot has been adapted into movies more often, Miss Marple might have been more influential. I can’t think of any other detectives who exist because of Poirot, but Murder, She Wrote would not have happened without Miss Marple.

Having said that, there can be no question of which one Agatha Christie wrote more of. There are something like three times as many Poirot stories as Miss Marple stories. Part of this, of course, is that Poirot had a seven year head-start. Those were seven very productive years for Agatha Christie, too—in addition to several Poirot novels, she wrote a great many short stories.

The subject of short stories is interesting, here, for while Poirot first appeared in a novel, Miss Marple first appeared in a short story. Most of the Miss Marple short stories were gathered together into the book The Thirteen Problems, and in the introduction to it—written years after most of the short stories themselves—Agatha Christie opines that Miss Marple is at her best in short stories, while Poirot demands novels. This is curious because she only wrote a few more Miss Marple short stories, while she had yet to write eleven of the twelve Miss Marple novels that she would ever write. This may, perhaps, be attributed to the changing economics of fiction. In the twenties and thirties the real money was in short stories; this was much less the case in the 1950s and 1960s. (I’m not sure how things were in the 1940s; television had not yet taken over from short stories as popular short entertainment but magazines and newspapers were hit hard by the war, especially since paper was in short supply.)

She also remarked in that introduction that Miss Marple was as popular as Poirot—she got about an equal number of letters requesting that she drop Poirot in favor of Miss Marple as requested that she drop Miss Marple in favor of Poirot. And here we come to one of the things that intrigues me about Miss Marple: it was twelve years between the first Miss Marple novel (The Murder at the Vicarage, 1930) and the second (The Body in the Library, 1942). While the stories themselves were written (and published) between 1927 and 1930, the foreword to The Thirteen Problems was written in 1932. After declaring Miss Marple to be as popular as Poirot, it would be ten more years before she published another Miss Marple novel. And so far as I can tell looking at original publication dates, she only wrote three Miss Marple short stories in that time, one in 1939 (commissioned by the BBC as a radio play) and two in 1941. Two more would appear in 1942 after the publication of The Body in the Library, another in 1954, and the final Miss Marple short story in 1956.

I can’t help but wonder why it is that Agatha Christie thought that Miss Marple was better in short stories but after the first collection wrote her mostly in novels, and thought her as popular as Poirot but waited twelve years to write more of her.

That said, the wait did her no great harm. Miss Marple was always a creature from a previous age so she did not suffer greatly from how the world changed during and after the second world war—if anything, she felt less out of place after the war than before it. Young people in the 1920s and 1930s were concerned with being Modern in a way that only the hippies of the 1960s came close to. By the 1950s, people simply didn’t look down on their elders as they once did and Miss Marple’s Victorian girlhood was not the object of (indulgent) ridicule that it once was.

Which reminds me: I wonder if Agatha Christie, in writing Miss Marple, was at all inspired by G.K. Chesterton’s Father Brown. Certainly not entirely; she mentions inspiration from her own grandmother who had a sheltered Victorian upbringing and thought the worst of everyone. That said, the indulgent way that people assumed that Miss Marple knew nothing of the world is very reminiscent of the indulgent way that people assumed that Father Brown knew nothing of the world—and they had been assuming that of Father Brown for seventeen years by the time Miss Marple came on the scene. If so, she was inspired well.

But getting back to my main question, I wonder if it was merely that it was harder to come up with plots for Miss Marple stories than it was for Poirot stories. Miss Marple was a very different kind of sleuth and her stories (almost) always involved a great deal more investigation by people other than Miss Marple herself. It would be an exaggeration, but not too much of an exaggeration, to say that until the last few novels Miss Marple didn’t appear in the novels more than she did in the short stories; there was just a lot more of other people in the novels padding them out to full length. This means that the stories for Miss Marple require a setting in which amateurs (or ordinary policemen) can do the investigating and find the clues, but not understand them. This is not so easy to do. It is much more convenient for the author, especially of a novel, to have a detective who can look for clues that most people would not. Often this is unraveling the mystery of a red herring so that it can be set aside and further clues looked for, but in any event it helps.

This is also, I think, why Agatha Christie said that Miss Marple is at her best in short stories. Short stories allow there to be a summarizing of the evidence without much time to think about it, then Miss Marple can give her brilliant interpretation. If this is to be done in a novel, it’s trickier both to come up with a mystery which will last without anyone figuring it out, and also to come up with red herrings that don’t need to be cleared away for the right evidence to be discovered.

This is only speculation and I have little confidence that this is correct. Unfortunately, Agatha Christie says nothing on the subject in her autobiography, so I doubt that I’ll be able to advance much beyond speculation.

The Golden Age and Theories of Detection

One of the characteristics I’ve noticed quite a bit in detective stories from the golden age of mysteries (roughly, from 1890 until the start of World War 2) is how many detectives had a theory of detection which they discussed.

In the very early days, the detectives differentiated themselves from the police through their use of forensic investigation. In the 1890s, Sherlock Holmes performed chemical analysis to prove a stain was blood and wrote a monograph on how to identify cigar ash. In the early 1900s, Dr. Thorndyke looked at everything he could under a microscope, and what he couldn’t he would look at with enlarged photographs.

Sherlock Holmes did not long predate real forensics, though. By 1901 Scotland Yard was using fingerprints to identify people, and in 1902 the first conviction was obtained with the use of fingerprint evidence. (See Fingerprints And Forensic Evidence.) It did not take the police long to make use of this kind of forensic evidence, and private detectives began to shift their methods. G.K. Chesterton would revolutionize the field of private detection in 1910 with Father Brown’s psychological approach to solving crimes, and to varying degrees this has been the primary tool of detectives ever since; no advances in forensic technology can make psychology obsolete.

Through all of these changes, there remained an air of novelty. The brilliant detective during the golden age was not merely brilliant; he had a method. He got his results because he brilliantly followed his method while others either followed the wrong method or else had no method.

Perhaps nothing exemplifies this as well as the unjustly neglected second Poirot novel, The Murder on the Links. Poirot’s method is contrasted very strongly with that of the indomitably forensic M. Giraud. Giraud examines the crime scene with the utmost care and uncovers impressively small clues. Yet Giraud dismisses a section of pipe as being of no importance because it’s not the kind of clue he’s looking for. As Poirot remarks to Hastings, “Mon Ami, a clue of two feet long is every bit as valuable as one measuring two millimetres! But it is the romantic idea that all important clues must be infinitesimal!”

Poirot considers all clues because his method is to adjust his theory until nothing is out of place; Giraud’s method is to ignore whatever does not fit his preferred kind of evidence. The point, here, is not the specifics of the contrast, but that the contrast is so important.

Another, though less important, example that comes to mind is in The Strange Case of Mr. Challoner. In it the detective, Malcolm Sage, delivers a lecture on how the police misunderstand the evidentiary value of photographs and fingerprints. They think that photographs are for identification while fingerprints are evidence; in reality fingerprints are for identification and photographs are evidence. He takes a series of photographs of the crime scene and announces that they will be the principal evidence at trial, and then uses fingerprint evidence to show that the butler is actually a wanted criminal. I don’t know that the police ever ignored the identifying value of fingerprints or the evidentiary value of photographs, but that’s not the point. In a short story written for entertainment value, the writer and editor thought that the audience of the newspaper would be entertained by a lecture on how the police don’t understand the proper use of evidence.

I’m not sure exactly when this aspect of detective fiction died off. Certainly you can’t find it in the Cadfael series, which started in the 1970s. I can’t think of any detective fiction I’ve read from the 1940s through the 1960s except for Miss Marple. I haven’t read any of the Poirot stories written after 1947 (yet). I don’t remember this in the Miss Marple stories from that time period, but then I don’t recall it in the Miss Marple stories from the golden age, either. (To be fair, that’s only one novel, though it is also most of the short stories.) Miss Marple was never really a detective, though. People told her things and then she would give them the solution. With the exception of Nemesis, and to a lesser extent A Caribbean Mystery, she never went looking for clues of any kind. On the other hand, there were her typical reminiscences of people who committed similar sins in St. Mary Mead, which was certainly a unique style of detection.

By the time we get to television detectives like Columbo in 1971, the aspect of a unique method is missing. While it might be objected that Columbo is a policeman and therefore cannot contrast with policemen, he is still a contrast with the other officers who do not get nearly the same results.

There is similarly no trace of it in the 1980s’ Murder, She Wrote.

So, what happened?

Alternatively, what was special about the golden age?

I’m really not sure which of these questions we should be asking. It is tempting to think that there was something special about the time that the golden age happened. To some degree it was the first time police forces were getting organized and police detectives were becoming a real thing. Advances in technology also made various kinds of detection newly possible, or at least newly practical, and so the whole thing had an air of novelty to it.

On the other hand, it’s also possible that there was simply a fundamental split in the mystery genre, with mysteries taking the psychological and logical aspects of detection and police procedurals taking the forensic aspects of detection.

On the third hand, it may just be that all of the possible theories of detection have been expounded and all that remains is to do one of them well.

Perhaps it’s a bit of all three.

Just Imagine What It Will Be Able To Do!

Something I hear a lot when it comes to the recent AI stuff like Stable Diffusion, Dall-E, chatGPT, etc. etc. etc. is some version of “this technology is just in its infancy, imagine what it will be able to do in a few years!” I’m not saying that these AI technologies won’t improve, but the thing is, it’s just not true that these technologies are in their infancy. These technologies are all developments of technologies which have been worked on for decades.

At their core, these approaches are various ways of doing massive quantities of massive matrix multiplications in order to encode the relationships between data. The T in chatGPT stands for “Transformer,” which is a variant of the previous types of model, which had generally had some form of “neural” in their name, such as Convolutional Neural Networks or Recurrent Neural Networks. In particular, Transformers (which were first mentioned publicly by a team at Google in 2017) replaced RNNs as the model of choice in natural language processing by being simpler and by being able to do the pre-training in parallel, which made vastly larger training data sets feasible.
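The parallelism point can be sketched in a few lines of code. This is a deliberately toy illustration, not any real model: the RNN-style loop must process positions one after another because each step depends on the previous state, while the attention-style computation produces each output position independently from all the inputs, which is what lets the work be spread across many processors during training.

```python
import math

# Toy contrast between sequential RNN-style processing and
# attention-style processing. Purely illustrative numbers; real
# models use large learned matrices, not scalars.

def rnn_encode(xs):
    # Each step depends on the previous state, so the positions
    # must be processed strictly in order.
    state = 0.0
    states = []
    for x in xs:
        state = 0.5 * state + x
        states.append(state)
    return states

def attention_encode(xs):
    # Each output is a softmax-weighted average over all inputs,
    # computed independently of the other outputs -- so every
    # position could be computed in parallel.
    outs = []
    for q in xs:
        scores = [q * k for k in xs]          # dot-product scores
        m = max(scores)                        # for numerical stability
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        outs.append(sum(w / total * v for w, v in zip(weights, xs)))
    return outs

print(rnn_encode([1.0, 2.0, 3.0]))       # → [1.0, 2.5, 4.25]
print(attention_encode([1.0, 2.0, 3.0]))
```

Each attention output is a convex combination of the inputs, so it always lands between the smallest and largest input; the sequential version has no such property because state accumulates step by step.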

Transformers were not a radically new idea that created a field which didn’t exist before; they were a new approach which was created because a large number of well funded smart people had been working in the field for a long time on relatively similar approaches. It’s an innovation which yielded noticeably better results; it might even be a breakthrough. What it’s not is the first dipping of humanity’s toe into something no one had ever done before. It may be the first supersonic flight; it is not the first flight at Kitty Hawk.

Moreover, the hardware to execute these things has been under development for a very long time. A huge breakthrough in performance came when the AI algorithms were adapted to run on GPUs (graphics processing units, the things that do all of the calculations for 3D graphics). This provided a relatively inexpensive source of incredibly high performance in number crunching that made the massive amount of processing involved in AI far more accessible. The thing is, this was like a decade ago. Since then special-purpose GPUs have been created to do the work even more cost-effectively (one current-generation example I know of is the NVIDIA A100, which costs around $10,000). But wait, there’s more!

Cerebras developed the Wafer Scale Engine—an AI processing chip the size of an entire silicon wafer—back in 2019. It’s an impressive piece of technology: it consumes about 22kW of electricity in a silicon wafer that’s 300mm in diameter (basically, 1 foot wide). It’s quite a technical achievement, but it has been on sale since 2019. There will be newer and better ones, to be sure, but it’s not a new idea with completely untapped potential.

Don’t get me wrong. I’m not saying that this is the end of technological development, or that AI won’t get any better. It would be outright shocking if there were no further improvements. My point is that the improvements we’re going to see are most likely to be much slower than people who don’t know anything about the history of AI development think they will be. We’re not at the very beginning of an exponential curve.

I do strongly suspect that generative AI is going to be useful, just as classificational AI has proven useful. The thing is, classificational AI has been with us for a while—it’s things like face unlock on phones and de-noising of video and audio and actually usable speech-to-text. It’s gotten better, and it continues to get better, but speaking as someone who develops technology: a technology becomes viable when it works all of the time for a use, not merely when it can do an impressive demo under favorable circumstances. And in the real world, edge cases are often 99% of the work, and not being able to handle them often means that a tool is more work than it saves. The result is frequently a limited-use tool for cases which the new technology is good at, and it’s one more tool in the toolbox of a human being who can handle all the edge cases.

That’s why the result of all of the labor-saving devices is people being so busy all the time.

Poirot, ITV, and the 1920s

Something curious about the ITV version of Poirot is that (with the exception of The Mysterious Affair at Styles) all of its episodes were set in the 1920s. Not literally the 1920s, per se; I’m sure that plenty of the technology or fashions were from the 1930s, but neither the Great Depression nor the looming war due to the military buildup of Germany ever feature.

This is not true at all of the novels.

The Poirot novels are always set contemporaneously to when they were written and current events, or at least current conditions, play into the plot. The only anachronism is Poirot himself; when Agatha Christie first wrote him, she presented him as being at least in his sixties. In her autobiography she mentioned that this was an unfortunate choice on her part, but she had no idea how popular he would be or how long he would last; by the time she wrote about it he would have had to be over 100. She simply ignored this problem and made Poirot always an old man of unspecified age.

When ITV made its version of the stories with David Suchet, they chose to set all of the stories in the same few years, though rarely with anything that would date them. There were practical reasons for this, of course. For example, it would be difficult to age the actors appropriately by decades in order to follow the real stories. Wardrobe and set decoration would be far more difficult if they kept track of the changing styles. Moreover, a series of episodes (or short movies) would be far more jarring if they skipped forward by years every few weeks or months, while the books always skipped ahead by however long it had been since the last one.

However many practical reasons to set Poirot in the span of a few years, though, I suspect that the biggest reason was that the 1920s are simply far more interesting, and far prettier, than later decades. This isn’t the totality of the 1920s, of course. Poirot was a celebrity and tended to deal with clients of means. Accordingly, the stories are set largely among the prettier parts of the 1920s. This is as it should be. Detective stories are stories for the common man, and so they should deal with things that he will not normally come across. Fiction about the lower classes is the domain of the upper classes, who need to read about drudgery and difficulty to find variety from their lives.

There are complex reasons why this should be, but the one thing I think it isn’t is rose-colored glasses from anyone’s past. By the 1990s when ITV was making the Agatha Christie’s Poirot series, the 1920s were seventy years before. No one remembered them. Instead, if we look to the specifics, we will find a decent answer. The 1930s were an interesting time but heavily influenced by the world-wide Great Depression and in the later portion by the looming war on the European continent. The 1940s were dominated by the second World War, to the point where no one ever talks about the events of 1946-1950. The 1950s had a primarily industrial aesthetic, as people took refuge in the post-war plenty which was so different from the Great Depression and the war years. In more rarified circles, architects and designers were greatly attracted to anything which was not beautiful. This was the era of the Helvetica font and the beginning of the era of buildings which no one likes. The 1960s spiraled off into kaleidoscopic colors that meant nothing but were fun and new. The 1970s were, of course, varied, but let us leave it with two words: shag carpet. That takes us to the end of when Poirot stories were written, but for completeness: the 1980s were the era of big shoulder pads and bigger hair with leather jackets and denim jackets, while the 1990s… I wonder what the style of the 1990s even was? T-shirts and jeans or shorts? It’s been thirty years since 1993, and has anyone figured out anything to be nostalgic for? Classic video games are the only thing that I can think of.

Anyway, I think that I’ve made the point. The 1920s are an era with a fascinating aesthetic that’s pleasing to look at, and it was the last time to have that for quite some time. (Portions of the 1930s were aesthetically continuous with the 1920s, so I’m counting them as part of the same era.) There will be others, of course. At some point our fascination with trying to see how little clothing people can wear will be over, and people will try to make their clothing interesting rather than revealing, again.

This is not the same thing as nostalgia for the 1920s, by the way. I don’t think that it being fun for Poirot to be set in the 1920s is nearly the same thing as wishing to live in the 1920s. It’s merely a recognition that the interesting parts of the 1920s were very interesting, while the interesting parts of later decades weren’t nearly so interesting.

There is also the argument to be made that the 1920s (and 30s) were the last real era of the private detective. After World War 2 we live much more in the era of the spy thriller. In the spy thriller people kill and are killed for governments and large organizations; we don’t care nearly so much for the concerns of the individual. There may be some truth to this, though for all that people still go on murdering people for their own reasons even in the 2020s, and people even still care when people are murdered. It may be fewer than in former times, but detective stories were always about unusual people.

ChatGPT is Frequently Misunderstood

A while back I looked into a paper that described the actual mathematics/programming behind chatGPT (really, behind GPT3, which is basically the same thing but with a less sophisticated front-end). I find it interesting how often the thing is misunderstood.

ChatGPT is, as it will candidly tell you, a Large Language Model. That language part is very important. It does not model facts, or concepts; it has no understanding of anything. It does not try to have an understanding of anything. What it does is model language, but not in the way that grammarians model language. It models how language is used in practice. That is, it is a model of the sort of things that people actually say.

Without getting into the details of how the model works, it is enough to know that it was trained by taking in the appearance and order of words within approximately everything that the people at OpenAI were able to scrape off of the internet about two years before going public with chatGPT.

(This, btw, is the big improvement in chatGPT over previous large language models; the T stands for “transformer” and is a particular kind of use of matrices which allows the model to be “trained” in parallel, which allows for massively larger training sets than had previously been used in large language models. That said, it’s interesting to note that you can’t increase the training size to much larger than “approximately all of the text that has ever been written”, so on this basis alone we’d expect to see improvements in large language models slowing down after chatGPT because improved training is no longer an option. By contrast, this is not a problem for image-generating AI training. Generating massively larger numbers of images is quite straightforward.)

The output of chatGPT can loosely but accurately be described as “the words that the people who wrote the text chatGPT was trained on would probably say following whatever you said to it.” It doesn’t understand subjects and has no concept of what it is saying or of the truth or falsity of what it is saying; it only has a concept of the probability of words appearing in an order on the basis of what words came earlier.
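The idea of “probable next words, with no concept of meaning” is easiest to see in a toy model. The sketch below is a bigram model, which is enormously simpler than a transformer, but the principle is the same: count which words followed which in the training text, then predict the most probable continuation. The training text and all the numbers are made up for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": for each word, count which words
# followed it in the training text, then predict the most probable
# next word. It has no notion of truth or meaning -- only of which
# continuations were common in its training data.

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = most_probable_next("the")
print(word, round(p, 2))  # → cat 0.33  ("cat" follows "the" 2 times out of 6)
```

A real large language model conditions on a long window of preceding words rather than just one, and uses learned matrices rather than raw counts, but the output is still “a probable continuation,” not “a true statement.”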

Like all modern “AI” stuff, the “AI” part takes input and produces output that a human being would not recognize as related to what the AI is being used for. Translating from a use case into the input of the AI model, and translating from its output back to the use case, is the work of programs written in normal programming languages that the programmers understand quite well. This front-end software is responsible for things like chatGPT being able to use references to previous subjects, or handle special cases like refusing to tell people how to commit crimes or adding a disclaimer to any fitness-related answer saying that you should consult a doctor. They seem to be constantly adding more to this front-end, such as the ability to take instructions like, “write a sentence that ends in the word apple.” A language model only produces a set of words in order that are probable based on its training set; following directions is not related to this. Thus any amount of following directions is entirely in the front-end and consists of programmers looking at what kinds of instructions it’s been given and writing front-end code to handle those cases.
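The kind of rule-based front-end described above can be sketched in ordinary code. To be clear, everything in this sketch is hypothetical: the topic lists, the function names, and the wrapper structure are invented for illustration, and this is not OpenAI’s actual code or a claim about how their system is really built.

```python
# A toy sketch of a rule-based "front-end" wrapped around a language
# model. All names and rules here are hypothetical illustrations.

BLOCKED_TOPICS = ("how to pick a lock", "how to hotwire")
HEALTH_WORDS = ("diet", "exercise", "medication")

def raw_model(prompt):
    # Stand-in for the language model itself, which just produces
    # a probable continuation of its input.
    return "Here is a plausible continuation of: " + prompt

def front_end(prompt):
    lowered = prompt.lower()
    # Refuse certain requests before they ever reach the model.
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    reply = raw_model(prompt)
    # Bolt a disclaimer onto certain categories of answer.
    if any(word in lowered for word in HEALTH_WORDS):
        reply += " (Consult a doctor before acting on this.)"
    return reply

print(front_end("Tell me about a good diet"))
```

The point of the sketch is only that this layer is ordinary, hand-written programming: the model underneath produces probable words, and everything that looks like policy or judgment here is explicit code.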

I’ve heard people say things like, “if they add the ability for it to check whether the things it is saying are true,” but no one has developed an AI which can identify what parts of a sentence constitute facts that could be checked. Consider a simple sentence like, “Mary, Queen of Scots, once placed a bet on a game of tennis.”

What even are the facts that the sentence asserts? Let’s list them:

  • A person named Mary exists (or perhaps is a fictional character in a published work)
  • She was queen of the Scots
  • The Scots are a people, specifically one living in Scotland
  • Scotland exists (or existed)
  • There was at least one event at which this Mary made a bet
  • That bet was on a game of tennis (anywhere)
  • A game of tennis was played during the lifetime of Mary, Queen of Scots

Some of these facts do imply others. For example: if Mary did in fact bet on a game of tennis, then it must be the case that a game of tennis was played during Mary’s lifetime. That is not necessarily the order you want to check them, though; it is common when fact checking to start with the easy-to-check facts.

There are complications, though. When we say “tennis” do we mean the medieval game played in an indoor court in which the walls (and some roofs) were in play, or do we mean “lawn tennis”, which is the modern use of “tennis”? Is the sentence asserting something about Mary relating to the modern game of tennis, or only to the game which was more popularly played in her day and from which the modern game called “tennis” (more properly, “lawn tennis”) is derived?

ChatGPT doesn’t even begin to have anything within its model relevant to answering any of these questions.

Was She Pretty?

In the novel The A.B.C. Murders, Hercule Poirot asks an interesting question about a murder victim. There are two versions of it I’m aware of; one is the version that Agatha Christie wrote and the other the version in the ITV version starring David Suchet. I’m going to quote both versions because they’re interesting to compare.

First, the original:

“Pas ça. I wondered — if she were pretty?”

“As to that I’ve no information,” said Inspector Crome with a hint of withdrawal. His manner said: “Really — these foreigners! All the same!”

A final look of amusement came into Poirot’s eyes.

“It does not seem to you important, that? Yet, pour une femme, it is of the first importance. Often it decides her destiny!”

Then, the ITV version (which replaced Inspector Crome with Chief Inspector Japp):

Poirot: Was she pretty?
Inspector Japp: There he goes again.
Poirot: That does not seem to be important? Mais pour une femme, it is of the first importance. It often decides her destiny.

Curiously, that’s rather different than how I remembered it, and much closer to the book. I remembered the exchange in the ITV version as something like:

Poirot: Was she pretty?
Japp: What does that matter?
Poirot: Poor girl, it mattered a great deal to her. It decided the whole course of her life.

It is interesting to me that I misremembered the ITV version so much, though to be fair to me I like my version better. Since you, dear reader, are not me, I presume that Agatha Christie’s version is the most interesting, here, and quite rightly so.

A great deal of detective fiction might be written by a male or female author, but occasionally one comes across a passage that seems like it could only have been written by one or the other. This is one such passage. I can only imagine a woman writing this. It’s not that only a woman would know it; we all know that physical beauty affects the lives of both sexes. Perhaps the best way I can describe what I mean is another example of this, from the Hamish MacBeth story Death of a Gossip.

I had mentioned to a female friend of mine that the story was very markedly written by a woman and she jokingly asked, “what, did it have no descriptions of women’s breasts?”

“Oh, no, it’s got plenty of descriptions of women’s breasts,” I replied. “Just never in admiration.”

In the exchange above, whether the woman was pretty was quite relevant to the detection. She was strangled with her own belt and it takes an unusual kind of man to charm the belt off of a pretty woman for the simple reason that she will be used to getting attention from men and so to charm her he will need to be above average. Or as Poirot puts it:

Betty Barnard was a flirt. She liked attention from a personable male. Therefore A.B.C., to persuade her to come out with him, must have had a certain amount of attraction — of le sex appeal! He must be able, as you English say, to ‘get off.’ He must be capable of the click!

Since it is directly relevant to the solving of the murder, any author might have thought of it or mentioned it. There is just something about how it was mentioned which seems distinctly feminine to me, even though it is put in the mouth of a male character. It’s hard to articulate what, since it’s subtle.

I think it’s the sympathy involved.

Males are tempted to treat beautiful women better than plain women, and so it is a mark of virtue in a male to treat plain women as well as he treats beautiful women. A male recognizes the temptation, but (if virtuous) regrets it as the effect of a fallen world. Since women are affected by this temptation but are not themselves tempted by it, their primary concern is with its effects, not with avoiding it. When Poirot says that whether a woman is pretty may decide her whole destiny, it speaks only to concern with the effect.

However that goes, it is a relatively subtle point that Agatha Christie handled very deftly. Her writing tended toward the plain side, but her psychology and her plots were masterful. This may well be why she is one of the best selling authors of all time; the plain style of her writing makes it extremely accessible, while at the same time the brilliance of the plot is easy to see.

The Man Born Blind vs. Pontius Pilate

Last Sunday’s reading at Mass was from the Gospel of John, and was the story about Jesus giving sight to the man born blind. Towards the end of the story, after the man born blind is questioned by the Pharisees, he runs into Jesus, who asks the man whether he believes in the Son of Man. The man born blind asks a very interesting question: “Who is he, Sir, that I may believe in him?” (emphasis mine.)

It is interesting to contrast this with Pontius Pilate when Jesus said, “For this I was born and for this I came into the world: to testify to the truth. And those who are of the truth hear my voice.” Pilate’s response was, “Truth? What is truth?”

Jesus answered only one of these men, though they were, in a sense, asking the same question.

It’s interesting to contemplate why.

They were asking essentially the same question, but for opposite reasons. The man born blind was asking so that he might believe. A man cannot believe in something he does not know; faith is not the opposite of knowledge, but actually impossible without knowledge. The man who was born blind was willing to have faith, but he did not yet have the knowledge which would let him have faith, so he asked for it.

Pontius Pilate asked for knowledge in order to avoid believing in it. His question was not the seeking of truth but rather the denial of the possibility of attaining truth.

Despite what internet trolls will tell you, questions are not neutral things. We do not encounter questions floating in a vacuum. Questions always come from questioners, and questioners always have a goal in asking their questions.

As G.K. Chesterton said in Orthodoxy, motives matter:

But there is an anti-patriot who honestly angers honest men, and the explanation of him is, I think, what I have suggested: he is the uncandid candid friend; the man who says, “I am sorry to say we are ruined,” and is not sorry at all. And he may be said, without rhetoric, to be a traitor; for he is using that ugly knowledge which was allowed him to strengthen the army, to discourage people from joining it. Because he is allowed to be pessimistic as a military adviser he is being pessimistic as a recruiting sergeant. Just in the same way the pessimist (who is the cosmic anti-patriot) uses the freedom that life allows to her counsellors to lure away the people from her flag. Granted that he states only facts, it is still essential to know what are his emotions, what is his motive. It may be that twelve hundred men in Tottenham are down with smallpox; but we want to know whether this is stated by some great philosopher who wants to curse the gods, or only by some common clergyman who wants to help the men.

Looking at USA All-Cause Mortality

Back during the pandemic I did a number of posts looking at all-cause mortality in the USA. As a reminder, all-cause mortality is worth looking at because of the clarity of its definition. To give an example, if somebody has COPD and gets COVID-19 and dies, but probably wouldn’t have died if they only had one (not then, anyway; COPD is eventually fatal), do you classify that as a COVID-19 death or a COPD death? Or as both? Different medical systems will reasonably differ on this question. (Then there are far less reasonable diagnostic criteria, like recording all deaths where a person had COVID-19 regardless of the cause of death.) This stuff can vary from hospital to hospital and state to state.

All-cause mortality data gets around these problems because, while it can be hard to agree on why a person died, it’s easy to agree on whether they died. Eventually. There is still the problem that it can take months for a death to actually be reported to the CDC. So much so that the CDC doesn’t even bother publishing all-cause mortality data for the most recent two weeks, and there’s very little point in looking at the last 3-6 weeks of data that they do publish. (They have algorithms that try to predict how many deaths will be reported eventually based on the data that has been reported so far, but these have a tendency to under-count what eventually gets reported.)
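The basic shape of that kind of prediction is easy to sketch: scale a partial count up by how complete reporting usually is at a given lag. The completeness fractions below are invented for illustration; the CDC’s actual weighting method is more sophisticated than this.

```python
# A sketch of adjusting recent weeks' death counts for reporting lag.
# The completeness factors are made up for illustration only.

# Hypothetical fraction of deaths typically reported N weeks after
# they occur.
completeness = {1: 0.55, 2: 0.75, 3: 0.85, 4: 0.93, 5: 0.97}

def predict_final_count(reported, weeks_ago):
    # Scale up a partial count by how complete reporting usually is
    # at this lag; beyond ~5 weeks, treat the count as final.
    frac = completeness.get(weeks_ago, 1.0)
    return reported / frac

# A week that is 2 weeks old with 45,000 deaths reported so far
# would be predicted to finish around 60,000.
print(round(predict_final_count(45_000, 2)))  # → 60000
```

You can also see from this why such predictions systematically under-count when the completeness factors are estimated from calmer periods: if deaths are surging, reporting tends to lag even more than usual, so the historical fractions are too optimistic.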

As I said, I put up several posts looking at this during the pandemic, and I recently became curious to look at it again now that the pandemic is in retrospect. So, here’s the data from the CDC as of February 20, 2023:

Let’s look at it a bit closer. To make the time frame of the data a little easier to follow, I’ve marked the approximate location of January 1 with yellow lines and the approximate location of July 1 with green lines:

If you’re curious, the Pfizer vaccine received emergency use authorization in December of 2020. The more infectious Delta variant of SARS-CoV-2 was named in March of 2021. The Delta variant had mutations in the spike protein, which is both how the virus enters cells and the thing the vaccine gives the immune system to detect. This simultaneously made it better at entering cells and reduced the efficacy of the immune response acquired through vaccination or infection with the original variant.

Because I sometimes would look at Sweden’s data, here’s their COVID-19 deaths/day as reported by the Swedish government:

I’m told by a Swedish friend that “avlidna” literally means something like “release from suffering.”

This data is in no way directly comparable to the all-cause mortality data above, but it is none-the-less interesting to note how, with the exception of Summer 2020, the spikes line up pretty well. It is widely theorized that the US’s summer spikes correspond to air-conditioning season in the southern United States, when people stay indoors during waking hours. To the best of my knowledge, Swedish summers are far more mild than are the summers in the American south, so one would expect such spikes to be absent there.

That hypothesis brings up an interesting question looking at the USA data, though: why was there no summer spike in 2021? There was a spike in deaths in the fall of 2021, not the summer. One explanation is that COVID-19 deaths started taking longer from the onset of infection to kill people (or at least to contribute to their death). If that is the case we would expect all subsequent spikes in deaths to also come later and, indeed, they do. The winter spike in deaths (in early 2022) also came later than the spike in the winter of 2020/2021.

If that is the case, why should it be? One hypothesis which covers these facts—though is in no way certain—is that later mutations of SARS-CoV-2 took much longer to kill people than the original ones did. Another hypothesis which would explain the delay is that the most vulnerable people were killed off in the first waves, and everyone who is left are less susceptible. (Though they are less susceptible, it can still contribute to their deaths when they are weakened by other diseases.) These are just two hypotheses; the truth could involve some version of either, both, or neither.

Of course, another explanation which covers this data is that the response to the increases in prevalence of SARS-CoV-2, or, very technically, the response to increasing numbers of positive SARS-CoV-2 tests, was responsible for the increase in deaths. This will, of course, vary among the states as they had very different responses to COVID; some states like California were known for draconian measures, while other states imposed very few restrictions, and many were in between. It is reasonable to suppose that the extra stress of lockdowns, closure of businesses, etc. would have some negative effect. There are no actions without consequences and it would be absurd to suppose that drastic actions like the ones taken in response to COVID-19 are free.

For reasons relating to other data I’ve looked into but don’t have time to get into here, I am skeptical that this explains all of the increase in mortality over the years before COVID, but I haven’t seen any data to conclusively rule that possibility out. I am also a bit resistant to this explanation because it would be too convenient; I think that the responses to COVID-19, after the first few months, were wildly overblown and a massive overreach of government power. Lockdowns could be justified in the face of a pandemic of the magnitude of the Black Death—something that could kill off a third of the population in a short time. During the very early rapid spread of SARS-CoV-2 there were reasons to believe that SARS-CoV-2 was an escaped bio-weapon that could have been that kind of threat—the kind of threat which could potentially justify temporarily suspending all of the normal rules of society. After a few months it was obvious that SARS-CoV-2, escaped bio-weapon or not, was in no way another Black Death. Since I think that what ensued was an unjustified massive overreaction, it would be very convenient if SARS-CoV-2 was barely worse than a normal flu and most of the bad consequences of it were actually due to what I consider to be an overreaction.

I like to be careful of convenient conclusions, especially when conclusive evidence is intrinsically hard to come by.

Whatever the cause and whatever exactly happened, it does seem very clear that it’s over. A few people still wear masks, but few enough that they might all be immuno-compromised people for whom trying to filter the air that they breathe in public places makes sense anyway. We still have some lingering excess mortality, though only slightly. It was never all that high—this is more clear when you look on a multi-year timescale rather than zooming in—and it is very possible that the last few years were a perfect example of Alexander Pope’s line, “a little learning is a dangerous thing.”

We had tests to detect the SARS-CoV-2 virus, but we didn’t know how it spread, how bad it was, or how bad it would be. Thirty years before, without the tests to detect the virus, the entire experience might have been radically different. The absolute worst weeks had increases in mortality of about 40%, but a 40% increase in a small number is still small. Throughout everything, there were only a few cases of hospitals becoming so over-full that they had to send patients elsewhere and there were no (or at least statistically no) instances of people dying because of a lack of treatment because the hospitals were full. Without the PCR tests that could detect the virus without symptoms (or with symptoms that could be a cold or the flu), and thus allow us to map out its spread, we might not have done much more than think that there was a nasty flu for a few years.
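For anyone who wants to see the arithmetic behind a claim like “a 40% increase in a small number is still small,” excess mortality is just observed deaths minus an expected baseline, usually the average of the same week across pre-pandemic years. The numbers below are invented for illustration, not actual CDC figures.

```python
# Excess mortality: observed deaths minus an expected baseline
# (here, the average of the same week in prior years).
# All numbers are hypothetical, for illustration only.

def excess(observed, baseline_years):
    baseline = sum(baseline_years) / len(baseline_years)
    absolute = observed - baseline
    percent = 100 * absolute / baseline
    return absolute, percent

# A week averaging 55,000 deaths across three earlier years that
# then saw 77,000 deaths is a 40% increase -- large in relative
# terms, but an extra 22,000 in a population of hundreds of millions.
abs_excess, pct = excess(77_000, [54_000, 55_000, 56_000])
print(round(abs_excess), round(pct, 1))  # → 22000 40.0
```

Looking at the percentage alone is exactly the kind of zoomed-in view that makes a modest absolute change look dramatic, which is the point about multi-year timescales above.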

Samsung Cameras And the Moon

I recently came across this interesting video on a quasi-scandal involving Samsung smartphone cameras taking better pictures of the moon than the physical camera elements actually allow:

The video brings up an interesting question about what the pictures that smartphones take actually are. In the video, Marques proposes that the image the smartphone generates is something like what the smartphone “thinks” you want the slice of reality you were trying to capture to look like.

It’s no secret that smartphones these days do massive amounts of processing on the photos that they take and that this goes way beyond removing noise and compensating for camera shake; for years now they’ve been actively recognizing the subject in front of them and adjusting focus, faking bokeh (the way in which subjects behind the focal plane are blurred), punching up colors, adjusting contrast in only some parts of the picture, etc. etc. etc.

There is a problem with this when it comes to taking pictures of the moon, though, because there is only one moon, we only ever see one part of it because it’s tidally locked to the earth, and we’re so far away from it that there is effectively only one angle to take the pictures from. In short, except for haze in the atmosphere or objects in front, there’s only one picture you can take of the moon.

Using AI to improve pictures of the moon is thus not easily distinguished from just replacing your picture with a better picture of the moon. It is different; the approach Samsung uses preserves whatever color in the moon you see due to haze in the atmosphere (a honey moon, a red moon, etc.) and won’t override a cloud or bird in front of the moon when you take the picture. But if you’re not capturing weird lighting or something in front of the moon, a cleared-up version of your picture of the moon isn’t really different from just using a better picture instead.

Smartphones have been clearing up the pictures that they take for a long time now, and for the most part people don’t really object. (Every now and then when posting pictures of my superdwarf reticulated python to Instagram I have to note that the camera punched up the color, though it’s not a big deal because it’s what he looks like outdoors in sunlight, so it’s only a slight inaccuracy.)

It’s just weird that there happens to be a subject where you can only take one picture, and so the AI image enhancement doesn’t need your original photo to present a clearer version of the photo you took. From what we can tell it does use your photo and doesn’t improve every photo of the moon to a pixel-perfect photo of the moon, but in some sense those are just an implementation detail and imperfect photo enhancement, respectively.

Of course, the same thing that makes this a problem makes it purely academic; there’s no important reason to take photos of the moon because at best they look exactly like photos you can easily look up. And if you’re doing it for fun, you’re going to use a real camera, not a smartphone camera.

It is an interesting academic problem, though.

Mysteries and Changing Society

During the golden age of mysteries, a great many of the stories were (of necessity) set against the backdrop of drastic changes in society. These changes often provided motives as well as opportunities for the murders. Motives would often be the desire for money to be used on something other than maintaining the vestiges of an old way of life that the new generation is not interested in. The opportunity provided is often along the lines of a large house with few people in it. It’s that latter part that really interests me at the moment.

Large, derelict houses make great settings for mysteries, and I think that this is especially the case in mysteries for children. Scooby Doo was very frequently set in large houses with few people in them, isolated from their neighbors by large plots of land. These are things that most easily happen when societal changes make things that had been popular, or at least populous, less so. When things get abandoned, or even just partially abandoned, the remnants of things that people used to do are left behind without the people around to explain what they were. This makes such a setting intrinsically mysterious. Whatever crimes a villain is currently committing, there are many things that need an explanation but no people present who know what they are to give it. Figuring them out, then figuring out which of these is innocuous and which nefarious, can provide a wealth of things for the detective to use his intellect on.

This scope for investigation provided by the former scene of a bustling community now in some state of abandonment can be amplified by the intertwining of the current mystery with previous events. This can take the form of treasure which can be discovered or inherited, but it can also take the form of the deeds or misdeeds of the past influencing revenge in the present. It can take the form of both, separately or intertwined.

So how do we make use of this in contemporary murder mysteries? (I mean, murder mysteries set in at least approximately the time of their writing, as opposed to historical murder mysteries.) Many of the social changes which formed fertile ground for Golden Age murder mysteries are, in the twenty-first century, over. The remnants of the medieval system are now pretty much entirely gone in England and, to the degree that the southern plantations and robber barons of the United States formed some counterpart, they’re gone too. We still have billionaires, of course, but for a variety of reasons they have fewer servants. (Part of this is technology; part of it is a more efficient economic system where things like cleaning and landscaping are more efficiently done by companies with specialized equipment who service multiple clients.) Even where a billionaire has something potentially interesting like a hundred-million-dollar yacht, the things are all new. An American billionaire’s household was assembled fairly recently. The odds are pretty good that his house was built fairly recently. The odds of a billionaire’s parents being billionaires are… not high. There are wealthy families, of course, and some of them even have history. I think these can work for this kind of murder mystery, since the wealth of wealthy families tends to diminish substantially with each generation. There are exceptions, of course, but children are so frequently different from their parents that it’s rare for the grandchildren of someone who built up a fortune to have even a quarter of their grandfather’s talent for making money, still less his luck of being in the right place at the right time to take advantage of that talent.

I suspect that there is more, in the contemporary United States, that can be made of institutions falling on hard times. That happens in all ages, but especially in our contemporary industrial times. Businesses, schools, hospitals, and more go out of business all the time; plenty come close to it or shrink before they’re bought out by competitors. Not every business would be ripe for this kind of setting, but I suspect a lot would. If one couples this with the advisability of Fun Settings for a Murder Mystery, there’s a lot of fertile ground here.

How To Get a Girlfriend

This is, admittedly, very similar to a previous video, but this one is one hour long instead of two! And it covers how to keep an acquaintanceship ambiguous in order to create space for the two people to get to know each other without having to make rushed judgements.

Some day I will break this into several videos by specific subjects, and with God’s grace they’ll be like 15 minutes each…

Three Act Tragedy

Published in 1935, Three Act Tragedy was the eighth novel by Agatha Christie to feature Hercule Poirot. It is unusual among (early) Poirot novels in that Poirot is not the main detective in the story.

The basic setup is that Famed Actor Sir Charles Cartwright is hosting a dinner party which Poirot is attending (I can’t recall why Poirot was in the neighborhood; he might have been retired at this point) and one of the attendees of the party—a charming older vicar—keels over dead with no obvious cause. A few months later, one of the other attendees at the party, a psychiatrist by the name of Sir Bartholomew Strange, keels over dead at a party at his own house in the same way. This time, instead of the death being attributed to heart trouble, it is discovered that he died of nicotine poisoning. Sir Charles and another of the guests, Mr. Satterthwaite, investigate, along with a precocious young woman who goes by the nickname Egg. The three occasionally consult with Poirot during their investigation, which is the extent of his involvement until he reveals the murderer at the end.

NOTE: there are spoilers after this point.

This book was very much about the theater, or at least about theatrics. It begins and ends with theatrics, and much of it is taken up with the theatrical personality of Sir Charles Cartwright. It is even divided into three acts which are titled, in theatrical terms, Suspicion, Certainty, and Discovery. It’s a bit hard to relate to this; stage actors are a different breed from movie actors. By 1935 movies were well on their way to replacing the theater as the dominant form of acting-based entertainment, but this novel was not really about 1935. Sir Charles had retired from the stage by now; Three Act Tragedy was about the aftermath of things that had been, not things that are currently.

The most memorable scene, to me, was Sir Charles employing his acting skill to reconstruct what the butler Ellis had done based on clues and to find the sheets of paper which no other detective had found. It’s a vivid scene, but it is diminished in the recollection by the fact that Sir Charles had planted the papers there himself, and Ellis had not, in fact, been interrupted.

The story is well constructed and, like most Christies, the plot is original and clever. The murder of the vicar being a dress rehearsal for the murder of Dr. Strange was certainly an original motive for murder and yet a plausible one. Not so plausible when described as Sir Charles following his actor’s instincts and doing a dress rehearsal, but if it’s not presented so theatrically, testing out a type of poisoning which is supposed to go undetected on a victim whom one has no motive to kill is reasonable, if diabolical. But demons still have their reason, and it makes sense.

It’s also curious that this book ends with an explanation of something about Poirot that readers probably often inquired about:

“You’ll excuse me—” said Mr. Satterthwaite.

“Yes, there is some point you wanted explained to you?”

“There is one thing I want to know.”

“Ask then.”

“Why do you sometimes speak perfectly good English and at other times not?”

Poirot laughed.

“Ah, I will explain. It is true that I can speak the exact, the idiomatic English. But, my friend, to speak the broken English is an enormous asset. It leads people to despise you. They say—a foreigner—he can’t even speak English properly. It is not my policy to terrify people—instead I invite their gentle ridicule. Also I boast! An Englishman he says often, ‘A fellow who thinks as much of himself as that cannot be worth much.’ That is the English point of view. It is not at all true. And so, you see, I put people off their guard. Besides,” he added, “it has become a habit.”

That is not the actual ending, though. A little after this comes the true ending:

Mr. Satterthwaite looked cheerful.

Suddenly an idea struck him. His jaw fell.

“My goodness,” he cried, “I’ve only just realized it. That rascal, with his poisoning cocktail! Anyone might have drunk it. It might have been me.”

“There is an even more terrible possibility that you have not considered,” said Poirot.

“Eh?”

“It might have been ME,” said Hercule Poirot.

Never Show a Good Movie in the Middle of Your Bad Movie

In the Mystery Science Theater 3000 episode Overdrawn At the Memory Bank, one of the call-outs, when they play a few seconds of Casablanca on a computer screen, is “never show a good movie in the middle of your bad movie” (or words to that effect). It’s funny, especially in the moment, but I wonder if it’s actually good advice. It can be applied with very little modification to having characters discuss a good book in what one may hope is a good book but what may not be as good a book as one hopes.

The intention behind the comment, during the episode, is that showing a good movie reminds the audience of how good movies can be, and thus makes it more difficult to enjoy whatever little good there is in the bad movie that they’re watching. And, indeed, this is possible. Reminding someone that they could be having a better time is not always a great strategy for an entertainer.

But.

The fact is, if someone is watching your movie or reading your book, they’re not watching that better movie or reading that good book, and in the modern world it’s unlikely that it’s because they would rather be doing it, but can’t. I’m a big re-reader and re-watcher, but you can’t watch Casablanca and read Pride & Prejudice every day.

If the viewer has chosen to not watch Casablanca today, then reminding him of Casablanca will not cause him to stop your movie and go watch Casablanca instead. In fact, it may have the opposite effect—it may make him happy with the reminiscence of a movie that he likes. Further, it may create the parasocial feeling in the viewer of being with people who like the same movie he likes.

Parasocial engagement is one of the great problems of our day, but that does not make it intrinsically bad. Like so many things, much of whether it’s healthy or harmful is in how it is managed and presented. A great deal of the parasocial engagement that exists on the internet is exploitative parasocial engagement; it is designed to encourage the mistake that it is not parasocial, but social. It is designed to be addictive. Movies and books, by contrast, have an end; they are a fantasy that comes to a definite conclusion and thus makes it easy to get back to the real world and remember that one needs to live in it.

By Vectron

Mitchell and Webb have a really funny sketch about mannerisms in a galactic empire:

I love the overall work that they put into the aesthetic—nice touches like the space station behind them and the guards standing in the hallway, motionless.

I also find it funny that there are gullible atheists of the dim-witted but aggressive sort who will think that this is an accurate description of why people believe in God, and how traditions came to be more generally. They’ll think this sketch is funny because, “see, that’s just how religion got formed!”

Whereas I think it funny because it’s obviously not how religion came to be, and such ideas are absolutely absurd when you put them into practical form.

By Vectron’s kindly claw.

Overdrawn at the Memory Bank and Maruba Fruit

Overdrawn at the Memory Bank may be my favorite episode of Mystery Science Theater 3000. There are a lot of interesting things to talk about in this episode, but I’ll have to do that later. For the moment, I want to share some interesting things I found while researching the episode.

I had a hunch, based on the apparent budget of Overdrawn, that none of the scenes of African animals were filmed for the movie. Most of them were close-cropped enough that they could have been filmed at a zoo, but it just seemed unlikely. I couldn’t find out what movie they came from, though—the source isn’t named in the credits and no one seems to list it.

I then tried to find out whether “maruba fruit” is real. It turns out it is, though its actual name is “marula fruit”. If you scroll down to the “Use by Other Species” section of the Wikipedia article on it, you find:

In the documentary Animals Are Beautiful People by Jamie Uys, released in 1974, some scenes portray elephants, ostriches, warthogs and baboons allegedly becoming intoxicated from eating fermented marula fruit, as do reports in the popular press. While the fruit is commonly eaten by elephants, the animals would need a huge amount of fermented marulas to have any effect on them, and other animals prefer the ripe fruit.

Now that’s interesting. Jumping over to the Wikipedia page for Animals Are Beautiful People, we find:

One scene depicts baboons, elephants, giraffes, warthogs and other African animals eating rotten, fermented fruit of the Marula tree. The animals are then intoxicated, and they stagger around to comic effect, before nightfall comes and they fall asleep. In the morning, we see one baboon wake up, disheveled, next to a warthog, and quietly exit the burrow, as not to wake her.

Well that’s quite promising. So jumping over to YouTube and searching for “Animals Are Beautiful People drunk animals” we find this clip:

And yes, this is definitely where they took the footage from. Some of the scenes are easy to recognize.

Interestingly, Overdrawn changed the order of the scenes. In the documentary, the elephant knocking the tree with the monkey in it happened while the marula fruit was ripe but not yet over-ripe. Later on the fruit over-ripened and started fermenting, and this is when we get the drunk animals. (In Overdrawn, the drunk animals come first and the elephant knocks the tree after, which is the precipitating incident for Fingal to demand removal with override priority.)

The Mystery of the Magi with Fr. Dwight Longenecker

A discussion with Fr. Dwight Longenecker about his book The Mystery of the Magi. It’s an interesting book which goes into the historicity of the Magi—did they exist, who were they really, where did they actually come from, how did they follow the star, and related questions.

Check out his books The Mystery of the Magi and The Bethlehem Shepherds, or all of his books. You can also check out Fr. Longenecker’s blog, or just visit his website.

Materialists Often Replace God With the Future

When listening to a review by Zarathustra’s Serpent of a David Bowie song whose theme was news reporters announcing that we had only five more years until the end of the world, and the ensuing chaos, I came across Arad’s suggestion that moderns had replaced God, as the foundation for morality and the reason for living, with The Future. As soon as I’d heard this, I knew it to be right. It is very interesting.

It has widely been pointed out that if God is dead then all things are permitted. This is so because nothing has a nature and so it cannot be a violation of anything’s nature to change it or even to destroy it. If things are just brute facts and we change them, they are, in their new form, just more brute facts, and there is no real way to choose between them. (This is why Nietzsche saw that humanity needed a superman who could create values and impose them by his will, such that it would become possible to live by something other than sheer will.) The superman has never come, though he has been long waited for. It seems that, while they waited, people came up with an alternative. If they could no longer live for today, they could yet live for the future.

It’s a solution that is not without its problems. It only works if one keeps up a very strict shortsightedness; any contemplation of The Future as an idea reveals how empty it is. Whatever future one sacrifices the present for will itself only be a present that must sacrifice for its own yet-to-come. Another fatal flaw is that even a cursory knowledge of history will show how utterly unpredictable the future is and how little one can realize any goal for it. To quote Jane Austen, however, desperate people are not always wise. Further, there is a sort of wisdom in a fool making plans which will only work if he remains a fool—remaining a fool is, after all, what he is most likely to do.

In short, it is not philosophically coherent, but it does explain why it is that so many materialists deny that if God is dead all things are permitted.

Parasocial Relationships Let Social Skills Atrophy

There’s a lot of money to be made in parasocial relationships, which is why, if you look closely, you’ll notice that a huge portion of the internet is engaged in selling or buying parasocial relationships. Some of these are unavoidable—you cannot help but feel some attachment to a person you hear and see, even if they’re just giving a lecture on theology. The more pernicious ones have a built-in vicious cycle, though.

A person who lacks the social skills (and possibly the self-discipline) to form real relationships with the human beings actually around them, or even to do the much less demanding task of forming and maintaining such relationships with people they only know online, will naturally be drawn to parasocial relationships all the more strongly, because their social needs are not lower just because they’re incompetent at meeting them. YouTubers and Instagram stars who tell their audience how much they love them are a minor example. On the more dangerous end of the spectrum you will find men who devote themselves to masculinity gurus or, more extreme yet, to women with OnlyFans pages. (I think these are sufficiently obvious that they need no further explanation.)

This end of the spectrum manifests somewhat differently for women; for example, a great deal of what gets called “feminism” is actually the attempt by lonely women with poor social skills to parasocially bond with other women. Women in real life often bond over what superficially looks like complaining but is actually an exercise in shared (detail-oriented) social analysis. A lot of the “feminism” that you see has at most the bare trappings of any kind of idea, not even of the ideas of the bad kinds of feminism, and really is just complaining gussied up with popular terminology, whose only purpose is to make it feel like the writer and reader are bonding over shared experiences. Or what they wish were shared experiences. (“Don’t you just hate it when hot guys stare at you instead of doing their own workouts at the gym???”)

These more extreme forms of parasocial bonding involve no actual interactions, or at most perhaps the interaction of leaving a comment and it occasionally being responded to. This requires few social skills on the part of the “content creator” and no social skills of any kind on the part of the content consumer. Especially on the side of the content consumer, that attracts people who have few social skills, and then it leaves them there. Real relationships are hard, but intrinsically teach people to improve their social skills. Even the thick-headed who don’t try much eventually learn to be more cautious after losing a bunch of friends, though they may blame the friends rather than themselves. Even if very imperfectly, they still learn. But people who consume parasocial content learn nothing. In the case of masculinity gurus and “feminist” complaining, they may even learn only bad lessons. Masculinity gurus sometimes advise the people who listen to them to substitute pointless aggression for confidence and to avoid social skills for fear of being mislabeled as a “nice guy”, and “feminist” complaining sometimes is complaining that women have to treat other people—especially male people—like human beings (e.g. articles that talk about “emotional labor”). As a sort of double-whammy, people who have come to this kind of “content” because they lack social skills are particularly poorly equipped to spot what’s wrong and reject it.

Even if people whose social interactions are primarily parasocial consumption don’t learn bad lessons from the people they flock to, they get their needs for social interaction temporarily met without having to engage in any of the social interactions that will build their social skills. Since social skills, like all skills, take work to maintain, this means that whatever they do have of social skills will atrophy through disuse. And, of course, the more these skills atrophy, the stronger the attraction of purely parasocial content.

It is by no means inevitable, but it is a dangerous trap.

A Bored Chinese Housewife Wrote Fictional Russian History on Chinese Wikipedia

I just came across an interesting article from, of all places, Vice, titled, A Bored Chinese Housewife Spent Years Falsifying Russian History on Wikipedia. The tl;dr is that the woman, who had the username Zhemao, pretended to be a scholar of Russian history, a subject that Chinese Wikipedia had little coverage of. It apparently began when she tried to understand real scholarly articles but couldn’t, and so started off filling in the missing pieces.

By the time she had written around 10,000 characters, she had gotten attached to it and didn’t want to delete it. Eventually she had a network of sock puppet accounts to boost herself, and had written or contributed to over 200 articles. She got very good at producing scholarly-sounding citations that were extremely difficult to verify. For the most part no one will spend the effort to verify citations to print books that are hard to get, especially on obscure topics that not many people are interested in.

This is, in general, the best way to deceive people—make it easy to believe you and hard to disprove you. The upshot of that is that one should be most careful about things that are easy to believe and hard to disprove—the more that is the case, the more important the trustworthiness of the source is. What can be dangerous is that this sort of thing can bootstrap itself. If you learn of a person through something that’s easy to believe and hard to disprove, and conditionally believe them, the more this goes on the more you will tend to feel like you’ve already trusted them and haven’t been disappointed, so they must be trustworthy.

Incidentally, this applies remarkably well to Science—by which I mean the academic industry of publishing papers. It’s all well and good to say that people’s results can be independently verified—but for the most part no one independently verifies them. When people actually try to, well, there’s a reason that if you Google “reproducibility crisis” you’ll get a lot of results.

Wikipedia gets a lot of criticism because “anyone can edit it,” but that’s not actually all that different from what the industry of scientific paper publication is like. Yes, there is peer review. There’s also peer review on Wikipedia, at least much of the time. In both places, the amount of rigor varies widely. And peer review, in academia, never includes actually running the experiments described in the paper to see if one gets the same results. No one has the money, or even the time, to do that. For some reason a lot of people think that “peer review” in science means “this paper is guaranteed to be good,” when in fact what it means is, “this paper is not guaranteed to be garbage.”

Finite Simple Group of Order Two

Not long after I graduated with my master’s degree in mathematics, I came across the song Finite Simple Group (of Order Two). It’s an a cappella love song that consists almost entirely of puns from graduate-level math.

The number of people who will get all of the jokes is substantially lower than the number of people who have PhDs in math, though getting a master’s degree will allow you to get many of the jokes, at least from having heard other students talking about classes you didn’t take. (For example, some of these come from differential geometry, which I never took, but heard people talking about.)

Even the name is a mathematical pun. A Klein bottle is a mathematical manifold somewhat akin to a Möbius strip, except that you cannot create one in three dimensions without a self-intersection. It can be created without a self-intersection in four-dimensional space, hence why four is associated with “Klein”. (The name is likely also a nod to the Klein four-group, the non-cyclic group of order four. There appear to be five people in Klein Four, but this is something of an implementation detail.)
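As for the title itself: up to isomorphism there is exactly one group of order two, and it is simple. Its defining quirk—every element is its own inverse—can even be checked mechanically. A throwaway sketch in Python, modeling the group as {0, 1} under XOR (addition mod 2):

```python
from itertools import product

# The (unique) group of order two, modeled as {0, 1} under XOR.
G = (0, 1)
op = lambda a, b: a ^ b
e = 0  # identity element

assert all(op(a, b) in G for a, b in product(G, repeat=2))   # closure
assert all(op(e, a) == a == op(a, e) for a in G)             # identity
assert all(op(a, a) == e for a in G)                         # each element is its own inverse
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(G, repeat=3))              # associativity

print("Z/2Z satisfies all the group axioms")
```

Simplicity is immediate, since a group of order two has no subgroups other than itself and the trivial group, let alone proper nontrivial normal ones.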

This video was uploaded to YouTube 16 years ago (as of the time of writing), which was several years after it was recorded. The website kleinfour.com doesn’t seem to exist anymore. (Last time I checked it, it said that the members of kleinfour had long ago got their PhDs and went on to be professors at different universities, so were not in a position to make more music.)

I find it interesting to look at it now, after so much time has passed. As I recall it came out roughly at the time that I was in grad school, and it felt incredibly relatable. Grad students in math do an enormous amount of work; it’s not even that they’re assigned so much as that they are there because they live and breathe math. That creates a lot of pressure of its own, quite apart from grades and assignments (which create their own pressure), and so they sometimes need outlets where they do something else. I can remember late at night taking a break with some other grad students I was working with to do some experiments dropping paper helicopters down the area next to the central stairs and seeing which designs took the longest to reach the bottom. (Of the kinds we tried, it was spirals with a small central weight.)

This song is well done; it is composed well and the singer sings well. The puns were generally not strained. I had hoped that they would go on to do other things, but it was not to be. I have no idea whether the members of the group ever think about the Klein Four anymore; sometimes it’s easier to forget work that one has done than it is to forget art by someone else that one really appreciated. (Sometimes not.) The fact that no one has bothered to keep up the Klein Four website makes me suspect that all of its members have moved on.

If they have, though, there is still this video up on YouTube. Video has the curious property that it never changes. If you watch it a hundred times, it is the same moment in history, every time. You change as you watch it, of course, but I think that, properly considered, this gives a bit of insight into that strange aspect of how this life works. The past is real, but we cannot get at it, and yet it still matters. I believe it was Saint Augustine who first proposed the image of our existence being shattered over the moments of time and God, at the end of time, gathering up those shattered fragments and putting them together into a coherent whole.

Ironically, it is the past mattering that is why one should not dwell on the past. Because we will, after the end of days, be reunited with the past, we need to put our efforts into the present—because it will become part of that past that will eventually become an eternal present. That is, of course, harder to see when the goodness of the moment is less clear than it was at some of its clearest moments in the past, but that’s only a defect in our sight.

Mitchell & Webb: The Confident Forger

Another great Mitchell & Webb sketch:

I really love the confidence that this forger has. It reminds me of what Chesterton remarked in Orthodoxy:

Thoroughly worldly people never understand even the world; they rely altogether on a few cynical maxims which are not true. Once I remember walking with a prosperous publisher, who made a remark which I had often heard before; it is, indeed, almost a motto of the modern world. Yet I had heard it once too often, and I saw suddenly that there was nothing in it. The publisher said of somebody, “That man will get on; he believes in himself.” And I remember that as I lifted my head to listen, my eye caught an omnibus on which was written “Hanwell.” I said to him, “Shall I tell you where the men are who believe most in themselves? For I can tell you. I know of men who believe in themselves more colossally than Napoleon or Caesar. I know where flames the fixed star of certainty and success. I can guide you to the thrones of the Super-men. The men who really believe in themselves are all in lunatic asylums.” He said mildly that there were a good many men after all who believed in themselves and who were not in lunatic asylums. “Yes, there are,” I retorted, “and you of all men ought to know them. That drunken poet from whom you would not take a dreary tragedy, he believed in himself. That elderly minister with an epic from whom you were hiding in a back room, he believed in himself. If you consulted your business experience instead of your ugly individualistic philosophy, you would know that believing in himself is one of the commonest signs of a rotter. Actors who can’t act believe in themselves; and debtors who won’t pay. It would be much truer to say that a man will certainly fail, because he believes in himself. Complete self-confidence is not merely a sin; complete self-confidence is a weakness. Believing utterly in one’s self is a hysterical and superstitious belief like believing in Joanna Southcote: the man who has it has ‘Hanwell’ written on his face as plain as it is written on that omnibus.”

By the way, here’s Van Gogh’s Sunflowers, for comparison:

I think that the ten pound note was closer to the original.

Trading Cards

I was recently reminded of the existence of trading cards, which of course calls to mind the scene in A Charlie Brown Christmas where Lucy asks whether Beethoven ever had his picture on a bubble gum card. This prompted me to look into the history of trading cards a little (i.e. on Wikipedia).

Trading cards as we know them today seem to have begun with cigarette manufacturers including trade cards in the packages of cigarettes. Trade cards were the business counterpart to calling cards—things you would give someone with your name and some information about you, including how to contact you, which you might leave if you called on someone and they weren’t home, or else as a means of introduction. Tradesmen would have cards for themselves in their professional capacity, and these might include instructions on how to find them (in the days before standardized addressing schemes). Businesses would also have similar cards for similar reasons, and cigarette companies started including theirs in the packages of cigarettes both to help protect the cigarettes (by stiffening the package) and also to help advertise their brand.

In 1875, the American tobacco company Allen & Ginter began to print pictures of more interesting things on the cards. According to Wikipedia these pictures were of “actresses, baseball players, Native American chiefs, boxers, national flags, or wild animals.” In the UK, John Player & Sons introduced a series of cigarette cards called “Castles and Abbeys.” People began to collect these cards and trade them to build up their collections, and the phenomenon quickly spread, with candy makers taking it up before long.

At approximately the same time, baseball cards were getting printed as trade cards. In the late 1800s baseball teams and players were posing for photographs, and these became available to businesses like sporting goods stores, which used them as trade cards (interesting picture on one side, information about the store on the other). It would not be too long before these sorts of trade cards began to be included in cigarette packs and confectionery, and have information about the player rather than about a business on the reverse side.

If we distinguish trading cards from the broader set of collectible cards by whether or not there’s anything one can do with the cards such as play a game that the cards were designed for—the latter category including standard playing cards with 52 cards in 4 suits, Pokémon, Magic the Gathering, etc—trading cards had mostly disappeared from popular consciousness by the time I was a kid in the 1980s. I remember a brief fad for Garbage Pail Kids, but they were a joke and lasted about as long as one would expect a joke to last. (I don’t know if anyone collected them for even two years.)

I had a very minor baseball card collection, for about a year, almost entirely because I’d heard about how much money old baseball cards were worth and figured I might as well try in case they’d be worth anything in thirty years. That’s not much of a reason to pay money for things and then store them, though, which is why I quickly gave it up. I had never heard of anyone else collecting baseball cards even for that reason, and certainly not with any interest in the things for themselves.

Thinking about this made me wonder why anyone ever collected trading cards. The best that I can come up with is that it mostly makes sense as a pre-television and pre-color-magazine phenomenon.

Magazines began being printed in color in the late 1930s, so I expect that they were (approximately) all printed in color by the late 1940s. This would provide a common source of pictures that people could cut out and look at any time they wanted. Television only turned to color in the 1960s, but even black-and-white television provided many opportunities to see celebrities and sports stars for free. Prior to these things, though, how would a kid know what his favorite baseball players looked like? There were pictures in newspapers, of course, but that was a bit more haphazard and only turned to color in the late 1970s and I’m not sure when the quality of the photographs actually became decent.

To put it very simply, in the era when color photographs were not common, it makes sense that color photographs printed on cards might have been valuable. I don’t mean in the monetary sense that rare cards from people’s youths eventually were bought for large amounts of money, but rather in the simpler sense that people would actually want the things and spend some effort to collect them or trade them.

Of course, like most things that built up cultural inertia they would have continued to be made and collected for a while after they really made sense. If I’m right that trading cards stopped making sense in the late 1940s, then it seems possible that people might still have been collecting them (in earnest) and talking about them into the 1960s, long enough for Lucy to ask, in 1965, how anyone could say that someone is great who has never had his picture on a bubblegum card.

Of course, this question of Lucy’s was written by adults, and moreover written as a joke. If it was an instance of adults thinking that kids still did what they themselves had done when they were kids, it would not have been the first.

G.K. Chesterton on Marriage

I was recently trying to find a quote from G.K. Chesterton on how the point of a wedding is the marriage vow, and the point of the marriage vow is that it’s daring. I wasn’t able to find the original; what I did find was a newspaper called The Holy Name Journal, which seems to have been from Michigan. In the August 1921 edition, someone quotes Chesterton’s article almost in full. Since it was only available as a photograph (though, thanks to Google, a text-searchable photograph), I transcribed it for easier quoting:

A writer of the Westminster Gazette recently made the proposal to alter the marriage formula: “As to the vow at the altar, it seems conceivable that under other conditions the form of words ordained by the Prayer Book might be revised.” And the writer adds that as some may have omitted the words “to obey”, others might omit the words “til death do us part.” The following is Mr. G.K. Chesterton’s rejoinder in The New Witness:

It never seems to occur to him that others might omit the wedding. What is the point of the ceremony except that it involves the vow? What is the point of the vow except that it involves vowing something dramatic and final? Why walk all the way to a church in order to say that you will retain a connection as long as you find it convenient? Why stand in front of an altar to announce that you will enjoy somebody’s society as long as you find it enjoyable? The writer talks of the reason for omitting some of the words, without realizing that it is an even better reason for omitting all the words. In fact, the proof that the vow is what I describe, and what Mr. Hocking apparently cannot even manage, a unique thing not to be confounded with a contract, can be found in the very form and terminology of the vow itself. It can be found in the very fact that the vow becomes verbally ridiculous when it is thus verbally amended. The daring dogmatic terms of the promise become ludicrous in face of the timidity and triviality, of the thing promised. To say “I swear to God, in the face of this congregation as I shall answer at the dreadful day of judgment, that Maria and I will be friends until we quarrel” is a thing of which the very diction implies the derision. It is like saying, “In the name of the angels and archangels and the whole company of heaven, I think I prefer Turkish to Egyptian cigarettes,” or “Crying aloud on the everlasting mercy, I confess I have grave doubts about whether sardines are good for me.” Obviously nobody would ever have invented such a ceremony, or invented any ceremony, to celebrate such a promise. Men would merely have done what they liked, as millions of healthy men have done, without any ceremony at all.

Divorce and re-marriage are simply a heavy and hypocritical masquerade for free love and no marriage; and I have far more respect for the revolutionists who from the first have described their free love as free. But the marriage service obviously refers to a totally different order of ideas; the rather unfashionable [stuff?] that may be called heroic ideas. Perhaps it is unreasonable to expect the fatigued fatalist of this school and period to understand these ideas; and I only ask here that they should understand their own ideas. Every one of their own arguments leads direct to promiscuity; and leaves no kind of use or meaning in marriage of any kind. But the idea of the vow is perhaps a little too bold and bracing for them at present, and is too strong for their heads, like sea air.

How One Treats People

I was recently in a conversation discussing how people treat each other, and it came up that the big problem, these days, is that there is substantial disagreement as to what a human being even is. Ideally, the first question a person should ask, upon asking “how should I treat this thing?”, is “what is it?” A person should treat a human being, a dog, and a cell phone differently because they are different things. So what, then, is a human being?

The Christian answer, from which one kind of answer about how to treat people follows, is that human beings are contingent beings created out of love by God so that he can give us more, and he put us in the same time and space so that we could take part in the act of creating each other (every time one gives food or knowledge or whatever to another, that is added to his being, and so one is becoming part of God’s act of creating him). Hence we should love other human beings, i.e. will their good for their sake.

The atheistic answer to “what is a human being?” is very different. The more accurate but less common answer is, “A joke with no punchline that no one has told,” or “A pointless illusion that no one is seeing.” More common, however, is to go with the creation myth of evolution (as distinct from the scientific theory of evolution, which doesn’t get enough attention): DNA wants to reproduce itself as much as possible and so it created us to do it. All that we call pleasure and happiness are just carrots that our blind and stupid master dangles in front of our faces to try to get us to do its idiot bidding. We were designed badly and will fall apart into nothing after a while. Fortunately for us, our master is blind, and we can sometimes trick him into giving us the carrots when we only pretend to pull his cart. Contraception is probably the best example of this; we can get the pleasure of reproduction without having to pay the price of reproducing.

So how should one treat human beings, if that is what they are and that is what happiness consists of? Being a human being oneself, obviously everyone else will either be a fool (in which case they are easy pickings) or will be trying to trick one into giving them pleasure without paying for it. So look out for the fools and try to trick them, while at the same time doing your best to avoid being tricked by the people more clever than you.

And there you have modern dating in a nutshell.

There’s So Much Bad Science And Worse Reporting

I recently ran into an article on a study which compared the “Green Mediterranean Diet” with the Mediterranean diet and a “healthy diet”. The article begins:

The green Mediterranean diet was pitted against the Mediterranean diet and a healthy diet in a large-scale clinical interventional trial- the DIRECT PLUS. Subsequent analysis found that the green Med diet reduced visceral fat by 14%, the Med diet by 7% and the healthy diet by 4.5%.

A bit later, we find out what the heck they mean by “green Mediterranean diet”:

The DIRECT-PLUS trial research team was the first to introduce the concept of the green-Mediterranean diet. This modified MED diet is further enriched with dietary polyphenols and lower in red/processed meat than the traditional healthy MED diet. On top of a daily intake of walnuts (28 grams), the participants consumed 3-4 cups of green tea/day and 100 grams (frozen cubes) of duckweed green shake/day. 

The first thing that jumps to mind is that the Mediterranean diet is popular among diet researchers because it is very low in red meat. How much lower could one make it? It is not plausible that a tiny amount of red meat causes enormous deleterious health effects since there’s obviously no corresponding dose-response to higher doses (people who eat a pound of red meat a day don’t die off at 15).

However that goes, worse is that this modification introduces three additional foods. This is the very opposite of controlling variables. Granted, the scientists in question probably think of it only as “introducing polyphenols”, but there’s a lot more in the foods they introduce than just polyphenols. Even worse, in terms of controlling variables, is that they are almost certainly not increasing the calories of the people on the diet, so they’re also going to be removing something or some things, introducing even more variables. You can tell that this is the case from the quote that they have from one of the professors who conducted the study (emphasis mine):

A healthy lifestyle is a strong basis for any weight loss program. We learned from the results of our experiment that the quality of food is no less important than the number of calories consumed and the goal today is to understand the mechanisms of various nutrients, for example, positive ones such as the polyphenols, and negative ones such as empty carbohydrates and processed red meat, on the pace of fat cell differentiation and their aggregation in the viscera

This strongly suggests that specific kinds of foods were removed from the diet at the same time that the polyphenols were added. That is really lousy variable control.

To be fair, it is possible to test many variables at once as a preliminary study to further, actually controlled studies, except that this isn’t a great way to do that unless you’re looking for something like an acute poison. If you design a study where on the one hand you test increasing protein intake and smoking cigarettes, and on the other hand reducing dietary trans-fats and moderate cocaine usage, who knows which group you’ll end up following up with, but either way you’re going to miss out on some important stuff.

Yet another variable in these kinds of studies is that they’re almost never free-feeding studies. By “free-feeding,” I mean, “eat however much seems appropriate to you, of whatever food you’re hungry for/seems appropriate to you, whenever you’re hungry.” That is, free feeding is what normal people do. The number of people who actually weigh out all of their meals and eat according to some macronutrient plan, every day of their lives, is approximately the number of bodybuilders there are. So we have yet another variable going on, which is that people who are watching their macronutrient intake eat differently than people who don’t.

The reporting on this is also extremely lacking. For example, the summary at the top indicated a reduction in “red/processed meat”, while the professor who did the study referred to “processed red meat”. These are not the same things; the former is a much larger category. “Red/processed meat” includes fresh red meat and processed chicken, while “processed red meat” includes nothing that the larger category doesn’t also include.

There’s also no mention of the limitations of the type of study performed. It’s an intervention study on 294 people (no indication if that was the starting or ending number) over 18 months. There’s no way that they had the money to keep the test subjects in a laboratory or hospital for all 18 months and strictly control all food given to them—not to mention, who would be willing to spend 18 months of their life under these conditions—so this had to have been an at-home study with self-reporting of compliance. Compliance rates for those are usually pretty bad, especially after the first month or two, and actual compliance as opposed to self-reported compliance is especially bad.

I’ve seen it argued that how difficult a diet (or other intervention) is to comply with is important, and this is certainly true. However, one of the things that affects compliance is the participant’s belief that compliance will reliably achieve something. Someone who wants to build muscle is far more likely to comply with a regimen of weight lifting because he is certain that if he complies, he will get the results he wants. An experimental method of muscle gain, where no one has any idea if it works, is far more psychologically difficult to comply with. Suppose one were to test out building muscle through twenty minutes a day of staring into the mirror and visualizing oneself with more muscle. Compliance with that will get difficult after the first day or two because there’s no obvious reason to keep going. (The flip side of this is that studies where one keeps participants in a lab and feeds them 100% of the food that they eat have phenomenal compliance, though higher drop-out rates, but they’re so expensive that in general they can only be run for 4-8 weeks and there’s very little of significance that one can find out in that short a span of time.)

The funding section is interesting, too:

This work was funded by grants from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project number 209933838- SFB 1052; the Rosetrees trust (grant A2623); Israel Ministry of Health grant 87472511; Israel Ministry of Science and Technology grant 3-13604; and the California Walnuts Commission.

None of the funding providers was involved in any stage of the design, conduct, or analysis of the study and they had no access to the study results before publication.

The California Walnuts Commission being one of the funders will, of course, sound alarms in the heads of the cynical or the conspiratorially minded. And, to be sure, the disclaimer that the funders of the study didn’t get to design it (etc) is no guarantee that the funders had no influence, because of course this isn’t the last study that the researchers are going to do and they will need funding again. By that same token, though, people who didn’t give a dime to this study may still have influenced the study by the researchers hoping to get funding from them next time. There is, as yet, no disclosure of funders who the researchers hope to get funding from in the future. All that said, it’s quite possible that the funding from the California Walnuts Commission wasn’t that significant; they might well throw money at anyone who is trying to prove that walnuts are good for one’s health without overmuch worrying about the results of any one study. If you fund enough studies looking into the health benefits of your product you’ll eventually get the favorable results you want by random chance, if nothing else, and this way your hands will be clean. It might even be cheaper, as it could cost more to get someone with a decent reputation to falsify their results on purpose. But everyone makes mistakes.