Sometimes People Surprise You

Human beings are, obviously, very complex creatures. For any given person we deal with, we understand them to a degree, but only so far. And then on top of that they have free will and can choose to do things contrary to their nature. So we’ll never fully understand another human being—on this side of death, anyway.

Having said that, sometimes when people do things we thought they would never do, it becomes clear that we misunderstood their motivations and thoughts. This happened to me recently with the YouTuber Logicked. A while back, as a joke for the beginning of Deflating Atheism’s 2,500 subscriber special livestream, I collaborated with Rob to make a satirical sketch with the premise that the atheist YouTuber “Rhetoricked” was criticizing the livestream before it even happened. The video I put up on my channel includes a few minutes of context in case you haven’t heard of Logicked before:

Just a few days ago, he made a serious response video to my comedic sketch! Here’s the description:

Missing the Mark is still mad that I didn’t like a few dumb things he said, so he parodied my videos in an evil beekeeper costume. I’m sure it will be a deeply honest representation and not remotely hypocritical.

For the record, I wasn’t mad. I found the idea of him making three videos criticizing things I said in the Deflating Atheism 1,000 subscriber special—which was a hangout among friends just chatting, having fun, and reminiscing—to be a little absurd, and since my sense of humor tends towards absurdism, I decided to add to the absurdity with a comedic sketch on the Deflating Atheism 2,500 subscriber special. I actually didn’t expect him to watch the video. Or, really, for all that many people to watch the video. Livestreams rarely get all that many views, and though it would probably be reasonably popular with my subscribers—all of my Just For Fun videos are—I don’t have all that many subscribers (at the time I uploaded it, I had around 1,500).

Anyway, I never dreamed that Logicked would do a response video to it. And yet he did. Being that wrong means I need to rethink some things. But first, I’ll explain my reasoning:

First, Logicked rarely does response videos. Full disclosure: I haven’t actually watched more than two of his videos in their entirety (one about me, which I responded to in two parts, and an early, very short one in which he’s tempted by some sort of devil in exchange for subscribers). But I skimmed the titles and also searched on YouTube, and he’s got the word “response” in something like two other videos besides this one. His videos are almost entirely critiques. That is, he takes videos which weren’t about him and then criticizes them. So this was just atypical.

Second, his YouTube channel is a business for Logicked. Keeping on-brand is good business. Giving free air-time to people criticizing you is not a good business practice. This is summarized in the phrase “never punch down”, though people have been using the phrase “punching down” to mean other things, so it might perhaps be best summarized as, “never give publicity to critics who can’t hurt your bottom line on their own”. Now, as several friends of mine have pointed out, judging by his comment section, Logicked’s core fans are several dozen light bulbs short of a full picnic basket (i.e. they’re not intelligent), but core fans typically draw much of their energy from peripheral fans, and peripheral fans are the people more likely to be swayed by criticism. Not that any one act of criticism will hurt all that much, but why take unnecessary risks with your primary source of income?

Third, the draw of YouTube atheists is the air of superiority which they assume. They are basically selling confidence. I described this in my video The Value of Atheist Hacks:

And it seemed to me that on some level Logicked understood this because by sticking to critiques he maintained his position of superiority from which his viewers could derive vicarious confidence. Doing a response video puts him in a position of equality with me. He can maintain as superior a tone as he wants in the video, but fundamentally in a response he is defending himself rather than being on the attack. Again, this isn’t going to change anyone’s opinion of him drastically—and certainly not consciously—but it comes back to the question: why take unnecessary risks with one’s primary source of income?

Being a professional YouTuber is one of the professions in which a person is professionally popular. Being popular—even with a sub-group—is not an easy thing to do. Humans are incredibly fickle creatures. The mob which one day shouts “Hosanna!” may be shouting “crucify him!” the next.

There’s also just the fact that as a professional YouTuber, Logicked needs to be charismatic, and seeming thin-skinned is very un-charismatic. And giving a serious rebuttal to an obvious joke does seem very thin-skinned.

Now, the problem with taking chances is not that they always go wrong, but that you become vulnerable to two things going wrong at once. And that’s where the bad stuff tends to happen to people—when two things go wrong at once. And that’s why people with responsibilities like a wife and child tend to avoid risks. This way, when the bad stuff that you can’t control happens, you’ve got a chance of surviving it without taking any damage.

And I thought that Logicked knew all this. And maybe he does, in which case there was some other force in his life which resulted in this very odd action on his part. For example, it could be that Max Kolbe is right and atheists are all narcissists. An older version of this would be Saint Thomas More’s maxim, “The devil, that proud spirit, cannot endure to be mocked”. (It should be noted he was commenting on prayer, as the beginning of that quote is, “Prayer makes mock of the devil”.) I would still need to personally be a little important to Logicked, though, and I really doubt that I personally matter to him at all.

It’s possible that the parody was too spot-on and since it was public he felt embarrassed, but the thing is, it wasn’t a very spot-on parody. I was just being silly—which I think is very obvious from the costume I used, as well as how over-the-top the things I said were—and obviously played very fast and loose for fun. I don’t think that anyone could take the specifics of what I said to be a cogent criticism of Logicked.

It could be that Logicked is desperate for material and is confident in his ability to pull off seeming thick-skinned and just having fun. Of the ideas I’ve seen, this may be the most likely.

Whatever the answer, it is clear that my misprediction of his behavior means that I misunderstood him. By which I really mean people like him, since I don’t know much about Logicked the man. It’s important to be able to recognize signs like these of being mistaken and to learn from them, both by holding one’s predictions with less confidence and by doing further research into understanding human beings.

Without Midwits, Geniuses Would be Useless

Over at Amatopia, Alex wrote an interesting post called, The Curse of the Midwit:

One of the worst things to be is a midwit. And I am one. Let me explain what I mean by “midwit.” I have seen the term used many ways, and they boil down to these six points: Someone who is not as smart as the truly intelligent, but is of above-average intelligence, Who wants other […]

As usual, it’s a post worth reading, but Alex only tells half the story. He talks about the dangers of midwits, but every danger is just the flip side of a virtue. (Of a natural virtue, specifically. The natural virtues are things like intelligence, strength, physical beauty, health, and so on; they are distinct from the moral virtues like courage, self-control, etc., which are again distinct from the theological virtues of faith, hope, and love.)

In short, Alex leaves out the virtue unique to midwits. Now, in what follows I’m going to paint with a very broad brush because I don’t have time to give a full description of the hierarchy of being, so I ask you to use your imagination to fill in all that I’m going to leave vague.

As I’ve said before, God’s fundamental theme within creation is delegation (technically, secondary causation). He doesn’t give each creature everything He gives it directly; instead, He gives some of His gifts to other creatures to give to their fellow creatures on His behalf. Through this He incorporates us into His love of creation and into His creative action. But within creation, this theme of delegation echoes. Instead of one intermediary, God orders the world so that there are several intermediaries. He spreads the love around, as it were.

The part of that which we’re presently concerned with is that it is not (usually) given to geniuses to be able to give their knowledge to the great mass of humanity directly. And since it is (usually) not given to them, they generally can’t do it. When a genius speaks to a common man, he’s usually quite unintelligible. If the common man knows the genius to be a genius by reputation, he’ll assume the man is saying something too genius for him to understand, rather than raving nonsense, but he will typically get about as much from it as if the genius were raving nonsense. This is where the midwits come in.

A midwit can understand a genius, but he can also speak in ways that common men can understand. Thus God’s knowledge is given to the common man not directly, but first to the genius, who gives it to the midwit, who then gives it to the common man. Geniuses need midwits at least as much as midwits need geniuses. In truth, all of creation needs the rest of creation since we were created to be together.

Of course the distinction of men into three tiers—genius, midwit, and common—is a drastic oversimplification. In reality there are levels of midwits and levels of geniuses, each of which tends to receive knowledge from the level above it and pass knowledge down to the level below it. For example, Aristotle would have had the merest fraction of the effect he has had were it not for an army of teachers, down through the millennia, who have explained what he taught to those who couldn’t grasp it directly.

Of course in this fallen world every aspect of this can and often does go wrong in a whole myriad of ways. And Alex is quite right that midwits can be very dangerous when they consider themselves geniuses—or really, any time that they’re wrong—because the sacred burden of teaching the great mass of common men has been given to them. Midwits have the power to do tremendous good, which means that they have the power to do tremendous harm.  But the tremendous good which midwits were given to do should never be forgotten just because many of them don’t do it.

Thinking about Hell

One of the questions within Christian theology is how many people (i.e., human beings) will end up in hell. There is no definitive answer. While there are people the Church knows to be in heaven (canonized saints), there are no people whom the Church definitively knows to be in hell. As such, it’s theoretically possible that the answer to the question of how many people wind up in hell is zero.

Theoretically possible, but not very likely. A bit of experience with humanity suggests that the number is definitely higher than zero. And our Lord Himself spoke rather more often about the narrowness of the gate to heaven than about anything which can be taken to be about universal salvation. Which is why many pre-modern scholars such as Saint Thomas Aquinas and Saint Augustine held that most people would be damned.

There’s a lot one can say on this subject, but it’s not really what I want to talk about now. Instead, the thing I want to talk about is how poorly suited to this subject human reason is. And the problem is that, as far as nature goes, we should all go to hell. That heaven is not devoid of human beings is super-natural. It is mercy surpassing justice.

And because it is not a natural state, but a super-natural state, which we are in, our intuition is pretty much useless on the subject.

Christ Changed the World Twice

There were two ways in which Christ utterly and completely changed the world forever.

First, by the incarnation, Christ forever elevated the status of matter. No longer could matter be looked down upon as something unworthy of spirit, because God took on a body.

Second, by rising from the dead Christ defeated death. No longer is death the victor over life; now we can say with the Apostle, O Death, where is thy sting?

I find this interesting because human reasoning would tend to expect the savior of the world to change the world in only one way—by saving it. Elevating its dignity as well seems like too much to ask.

The Evolution of Scientism

There’s a curious thing which happens to those who believe that the only real knowledge comes from science: they start to believe that nearly everything—except what they want to reject—is science. Ultimately this should not be shocking, since people who live with a philosophy will invariably change it—gradually—until it is livable.

The people who become Scientismists generally start out extremely impressed with the clear and convincing nature of the proofs offered in the physical sciences. It would be more accurate to say, with the few best proofs in the physical sciences which are offered to them in school—but the distinction isn’t of great import. In practice, most of the impressive results tend to be in the field of Chemistry. It doesn’t hurt that Chemistry is a bit akin to magic, with the astonishing substances it allows people to make, but what it’s really best at is interesting, counter-intuitive predictions. Physics, at least as presented in school, generally allows you to predict simple things like where a thrown object will land or how far a hockey puck will skid on the ice. These aren’t very practical, and the results tend to be intuitive. Chemistry, by contrast, involves the mixing of strange chemicals, with results ranging anywhere from nearly nothing, to things which glow, to explosions, to enormously strong plastics.

And Chemistry does this with astonishing accuracy. If you start with clean reagents and mix them in the appropriate steps, you actually do end up with close to the right amount of what you’re supposed to end up with. If you try to run a physics experiment, you’ll probably be nowhere close to correct simply because the experiments are so darn finicky. I still remember when my high school honors physics class broke into groups to run an experiment to calculate acceleration due to gravity at the earth’s surface. The results were scattered between 2.3 m/s² and 7.3 m/s² (the correct answer is 9.8 m/s²).

The problem for our budding Scientismist  is that virtually nothing outside of chemistry and (some of) physics is nearly as susceptible to repeatable experiment on demand. Even biology tends to be far less accommodating (though molecular biology is much closer to chemistry in this regard than the rest of biology is). Once you get beyond biology, things get much worse for the Scientismist; by the time you’re at things like morality, economics, crime & punishment, public decency, parenting and so forth, there aren’t any repeatable controlled experiments which you can (ethically) perform. And even if you were willing to perform unethical controlled experiments, the system involved is so complex that the very act of controlling the experiment (say, by raising a child inside of a box) affects the experiment. So what is the Scientismist to do?

What he should do, of course, is realize that Scientism is folly and give it up. The second best thing to do is to realize that (according to his theory) human beings live in near-complete ignorance and so he has nothing to say on any subject other than the hard sciences. What he actually does is declare all sorts of obviously non-scientific things to be science, and then accept them as knowledge. Which is to say, he makes Scientism livable. It’s neither rational nor honest, but it is inevitable. In this great clash of reality with his ideas, something has to give—and the least painful thing to give up is a rigorous criterion for what is and is not science.

Telling Reality From a Dream

“What if real life is actually a dream?”  is a favorite question of Modern philosophers and teenagers who want to sound deep. It’s a curious thought experiment, but in reality—that is, when we’re awake—we can all easily tell the difference between reality and a dream. But how? The answer is, I think, very simple, but also telling.

Thought experiments aside, we can tell reality from a dream because—to put it a little abstractly—reality contains so much more information than a dream does. Anything we care to focus on contains a wealth of detail which is immediately apparent to us. Whether it’s the threads in a blanket or the dust in the corner of the room or just the bumps in the paint on the drywall, reality has an inexhaustible amount of complexity and detail to it. And what’s more, it has this even in the parts we’re not focusing on. Our eyes take in a truly enormous amount of information that we don’t exactly notice and yet are aware of.

Dreams, by contrast, are very simple things. They do feel real while we are in them, but I think this comes from two primary causes. One is that we’re so caught up in the plot of our dream that we’re not paying enough attention to ask ourselves the simple question, “is this a dream?”

And I think that this is because dreams are natural to us. We often lose sight of this fact because dreams are involuntary and strange. But many things we do are involuntary, in the sense of sub-conscious; our breathing is mostly involuntary and our heartbeat always is. Our stomachs go on without our concentrating on them and our intestines wind our food through them whatever our conscious thoughts may be. Merely being involuntary does not make a thing unnatural. And since it is natural to us to dream, it is natural that we do not ordinarily try to escape our dreams. As with our other bodily functions, we ordinarily do what we’re supposed to do.

The other reason that dreams feel real to us is because our attention is so focused in a dream that we never consider the irrelevant details. If you ever try to call a dream back in your memory, though, you’ll notice that you can recall almost no detail in them—detail which was irrelevant at the time, I mean. The things in dreams only have properties where one is paying attention. The enormous amount of information we can see without paying attention to it is missing. This is also why they have a “dreamlike” quality to them—if we turn away then come back, they may not be the same because they stopped existing while we weren’t looking at them.

Dreams lack this stable, consistent, overwhelming amount of information in them precisely because they are our creations. We can’t create an amount of information so large that we can’t take it in.

And here we come to the fitting part: the difference in richness between reality and dreams shows what inadequate gods we are. Our creations are insubstantial, inconsistent wisps. We can tell reality from a dream at a glance because it only takes one glance at reality to know that we couldn’t have created what we’re looking at.

(Note: This is a heavily revised version of a previous post, Discerning Reality From a Dream.)

The Influence of Art

Over at Amatopia, Alex talks about The Influence of Art. As usual, it’s worth reading. Unfortunately, Alex fails to make the critical distinction which everyone else always fails to make when discussing the influence of art: moral influence versus behavioral influence.

This distinction is clearest when considering violence versus pornography. Violence is a behavior. It can be immoral, as in the case of murder, or moral, as in the case of self-defense. Pornography, by contrast, is inherently immoral because pornography specifically includes its purpose within its definition.

When people talk about whether violent video games cause people to be violent, they never specify what sort of violent video games. Do games in which people act in strictly moral ways, such as defending themselves and others or participating in a just war, make them more likely to act in immoral ways, such as committing murder or arson? Well, why would they? Let’s consider another case where people engage in morally justified violence: a surgeon’s job is to engage in morally upright violence—surgeons slice people’s bodies open with knives. Does anyone think that this practice of justified violence makes surgeons desperate to commit unjustified violence? This is ridiculous even to suggest. Yet it is the same sort of suggestion; we don’t think that surgeons wander the streets slashing away with daggers and swords because of their frequent exposure to cutting people.

And if this sounds like an absurd example, that’s only because it’s an example of an absurd idea. Suggesting that violent video games make people violent is absurd precisely because it is confusing an action—violence—with a moral dimension—justification for violence.

By contrast, consider pornography, which contains its immoral object within its definition. Pornography is art which is designed to arouse sexual desire at an object which is not the proper object of sexual desire. Exposing oneself to pornography and using it to indulge in lustful behavior is training oneself to direct sexual energy at objects other than those it should be directed at (one’s husband or wife). It is obvious how this will have an effect on behavior because it consists in training oneself to misuse a faculty. It repetitively breaks the link between the sexual act and its proper use; it violates the habit of looking only at one’s spouse as one’s spouse.

And if we look at other cases of people repetitively misusing sex, we expect them to misuse sex. We do not expect someone who engages in many one-night stands to be faithful in marriage. Nor do we expect a (voluntary) prostitute to be faithful in a marriage. Granted, they will loudly proclaim that it’s not impossible. Which is true, but unimportant. Liars will very loudly proclaim that just because they were lying the last twenty-three times, they could be telling the truth this time. And so they could. But the fact that a great many people want respect that they haven’t earned is of no consequence to the present subject.

Making this distinction solves the entire problem. It is not that art has no effect, or art has complete control over the viewer, or that art has 15% control. Art affects the viewer, but it affects the viewer along several dimensions. It affects them in what they do, and it affects them morally. I’ve no doubt that people who play violent video games with morally justified violence will be at least a little less sensitive to the sight of blood and the sound of gunfire, but that could easily mean that they are slightly more able to help people during a terrorist attack. Immoral video games, by contrast, will train people to act a little worse when they get the chance.

As is always the case with morality, it is not enough to ask what a man is doing, you must also ask why he is doing it.

Review: Whose Body?

Whose Body? is the first of Dorothy L. Sayers’ novels featuring her justly famous sleuth, Lord Peter Wimsey. There’s something which might almost be called a tradition in detective fiction that the first novel featuring the detective is not the place to start reading them, and though it is a good book, Whose Body? is not an exception. The author doesn’t really know her character in the first book—or more properly, characters, since half of what makes a detective great are usually his friends and occasionally his enemies. As such things go, Lord Peter does come onto the scene in Whose Body? close to fully formed. Still, I would recommend starting with Strong Poison or Clouds of Witness.

With that out of the way, Whose Body? is a good mystery as well as a good Lord Peter story. It has a great deal of wit in it, both in wry observations as well as in some excellent scenes involving Lord Peter’s mother, the Dowager Duchess of Denver. The mystery unfolds at a good pace, with new things for the reader to think about coming regularly. There is also the pleasure of reading about Lord Peter’s 1920s luxury. Though set contemporaneously, the novels are now period fiction, and Dorothy L. Sayers paints the scene vividly enough for them to work as period fiction for the modern reader. It is certainly a must-read for any Lord Peter fan.

(If you don’t want spoilers, don’t read any further.)

Analysis of the Story

(Note: please take everything that I say following in light of Whose Body? being a good novel. The purpose of this section is to try to learn from a master (Sayers) at work. Anything which sounds like harsh criticism should be taken merely as economy of speech.)

In light of some of Sayers’ later triumphs—such as Have His Carcase and Gaudy Night—it is clear that in Whose Body? she was still finding her way with Lord Peter and with detective fiction in general. It is important to bear in mind the relativity of that statement, because Whose Body? is still superior to most other writers’ polished detectives. But none the less, Whose Body? is more conventional and ultimately a little hesitant.

By more conventional, I mean that it follows the conventions of detective fiction more closely than do the other Lord Peter novels. Though that is a somewhat strange thing to say given that in 1923, detective fiction wasn’t that old. A Study in Scarlet (the first Sherlock Holmes story) was published in 1887, a mere 36 years earlier. Granted, detective fiction exploded after Sherlock Holmes, but the explosion was still in its relatively early days in the 1920s. But none the less there were plenty of conventions at the time, and Sayers did follow them more closely than she would later.

Part of this is also related to the distinction between short story mysteries and mystery novels. I’ve talked about this before, but the short explanation is that short story mysteries are quite commonly brain teasers, while novels are the story of a detective at work. This follows necessarily from the length. In the quintessential mystery short story, the detective comes onto the scene of a crime, takes in the clues, then realizes the solution to the problem and explains it. The shortness of the story allows the reader to take in all of the clues, then pause to consider them before finding out whether he guessed correctly. (This, by the way, is why in television shows the detective suddenly realizing the solution to the problem after somebody says something which stirs his imagination is so common. That is, why there’s the classic, “wait, say that again. You’ve solved it!” moment. After laying out the clues, they had to give the audience time to think about it, and it can’t be a new clue which solves the case for the detective, so something has to be the trigger for the detective realizing who did it so we can get to the reveal.)

This is structurally impossible in a novel, however. If the reader is given all of the information he needs in order to solve the mystery in the first ten pages of the novel, the rest of the novel becomes pointless and the brilliance of the detective becomes impossible to believe when it takes him 200 pages to figure out what any intelligent reader already figured out. Accordingly, the clues have to be revealed slowly, throughout the book, for the book to remain interesting. That forces the book to be about the process of finding the clues, rather than purely about understanding the clues presented in a jumble.

(This, incidentally, is one of the problems in the first Philo Vance novel, The Benson Murder Case. The author presented us with all the evidence we needed to know who the murderer was in the first chapter, and so the rest of the book dragged on a bit. Granted, Philo Vance also figured out who the murderer was in the first chapter, which made it a little odd that he didn’t tell anyone until the last chapter.)

Whose Body? does not give us all the evidence we need up front, but it does give us enough evidence early on that we can make an educated guess fairly early. This does not spoil the fun, as subsequent evidence is required to really substantiate the guess, and we get the fun of finding it out along with Lord Peter. It does, however, lessen the impact of the red herrings. The biggest of these is Crimplesham and his pince-nez, which were found on the corpse. There are several pages spent speculating about Crimplesham after he answers the advertisement Lord Peter put in the newspapers, but none of it is really credible at this point. There’s far too much we already know and/or suspect about Sir Reuben Levy’s connection to the corpse in the bathtub, and the latter’s connection—if not yet to Sir Julian Freke, at least to the hospital next door. Granted, it’s a little unfair to hold against a book that it’s too well written to have the second half make the first half a waste of time, but mystery has always been a self-conscious genre, so the idea that the murder was committed by a character as yet completely unknown and wholly unrelated to anyone already in the novel is not really credible. The result is that the extensive speculations about Crimplesham just feel like a waste of time.

In fact, the whole affair of the pince-nez was over-played. Since the body was clearly arranged by the murderer, it was not plausible that the pince-nez were any sort of solid clue. Since they had to be either a practical joke by, or an attempt at misdirection on the part of, the murderer, they were never going to lead anywhere directly. The only really plausible connection they could have to the murderer was pointing to the murderer’s enemy. Once the owner of the pince-nez turned out to be utterly unconnected with anything or anyone else in the book, they couldn’t really have pointed to the murderer’s enemy, so they had to be merely a practical joke.

The character of Inspector Charles Parker was very interesting in this book—it is perhaps his best role in any Lord Peter book. I can’t help but think that Sayers never really thought that Parker worked. He continued to appear in Lord Peter stories, but he got ever-smaller roles. I wonder whether this may have stemmed from the fundamental contradiction in the role which Sayers gave him and the way she began to characterize him. Parker read theology in his spare time, which was an extremely interesting thing for a police inspector to do. It also set things up wonderfully for him to be a contrast in personality with Lord Peter who, while well educated, was an instinctive atheist. As Sayers put it more than once, Lord Peter would have thought it an impertinence to believe he had a soul. That would be a fascinating contrast.

Unfortunately, Parker’s main role was to be the Watson to Lord Peter’s Holmes. What makes this so unfortunate for the characterization which Sayers started to give Parker is that the ninth rule of Ronald Knox’s 10 Commandments for Detective Fiction is commonly held to be true:

The “sidekick” of the detective, the Watson, must not conceal from the reader any thoughts which pass through his mind: his intelligence must be slightly, but very slightly, below that of the average reader.

That simply does not work for an interest in theology.

I should note that this is not actually a strict requirement for a Watson. The purpose behind this rule is that the detective must have some reason to explain himself. A beloved sidekick who doesn’t understand what’s going on and who constantly asks for explanations works very well for this job, hence its popularity. However, merely thinking differently will suffice. Thus an intelligent person with a different background from the detective works well. “I would have assumed it meant [plausible inference], but I’m guessing you conclude something different from it?” It’s more difficult, since there must generally be two plausible inferences to pull this off, but it’s very doable. In fact, Sayers herself did this with the introduction of Harriet Vane. While not Lord Peter’s equal, she was generally the most intelligent person in any room he wasn’t in. But she had a very different background and personality from him, and so they complemented each other in just this way.

The only other thing I want to remark on were the interactions with Sir Julian Freke. Lord Peter’s obsession with fair play and giving the murderer a chance to commit suicide before being taken was something I was glad that Sayers abandoned. I think she did it in only two cases. One was of course Whose Body? and the other was The Unpleasantness At the Bellona Club. It was perfectly fair to give Lord Peter his weaknesses, but this one just didn’t work. It wasn’t out of character, exactly, but neither did it feel like it was in character. Granted, Lord Peter tended to approach mysteries purely as a game, but his anguish at realizing that it was real was probably as unpleasant for the reader as it was for the character. The big problem being that this is all a game for the reader. Consulting detectives are not realistic. If one is going to indulge in them at all, one should see the fantasy through to the end. The detective has undertaken to put right, by a right use of reason, what was put wrong through a misuse of reason. He may conclude that justice would be better served by letting the murderer go, but it is not right for him to conclude that justice would be better served by not serving it.

And to be fair to Sayers, she did abandon this line of thought pretty quickly. Whose Body? is the only time Lord Peter gave the murderer the opportunity of escape. In The Unpleasantness At the Bellona Club, he merely gave the murderer the opportunity to shoot himself before he was taken for murder and hanged. Granted, this is offensive to my Christian principles, which hold suicide to be intrinsically evil, but it did at least still serve justice, if it served nothing better. And fortunately Sayers abandoned it entirely in her other stories.

Sir Julian Freke’s letter to Peter was also a little odd. First, it was strange he hadn’t prepared the bulk of it immediately after the murder on the assumption he would get away with it and the details should be preserved immediately for their scientific value. Second, it was largely a recapitulation of what we had already learned. Rather than being satisfying, I found it made for dull reading since we learned very little from it. It served in place of the denouement in an Agatha Christie where Poirot gathers everyone together and explains what happened, but with none of the revelation of when Poirot does it. There were no details commonly assumed to be one way but then put straight. There were barely any details even filled in—unless you count such trifles as the cotton wool placed under the surgical bandage to avoid bruising. Or that the bath running was to cover the sound of work rather than to actually bathe one of the corpses. And I think it’s telling that Sayers never repeated the many-page confession in her other books. Except possibly Inspector Sugg—who wasn’t really a character—no one learned anything from this confession.

In conclusion, Whose Body? is a fascinating first story for a detective. It clearly did a good job of introducing Lord Peter in 1923, and set the stage for some true masterpieces of detective fiction. It wasn’t uniformly great, as some of Sayers’s later works were, but where it was good it was very good. And I find it interesting that the character who changed the least in subsequent books was the Dowager Duchess. While Lord Peter took a little refinement through the books, Sayers really nailed the Dowager Duchess from the first page which contained her.


If you enjoy Lord Peter Wimsey stories even half as much as I do, please consider checking out my murder mystery, The Dean Died Over Winter Break.


Who Works For Bad Scientists?

One of the latest scandals in science is the shoddy research of Brian Wansink, with new scrutiny on his papers resulting in many of them being revised or withdrawn. Apparently this started in the aftermath of a post on his blog titled The Grad Student Who Never Said No. I bring this up because it ties into a previous post of mine, The Fundamental Principle of Science. But the entire blog post is very interesting so let’s look at it.

A PhD student from a Turkish university called to interview to be a visiting scholar for 6 months.  Her dissertation was on a topic that was only indirectly related to our Lab’s mission, but she really wanted to come and we had the room, so I said “Yes.”

So far, no problems.

When she arrived, I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others).

Right away you have a problem with whatever they’re trying to find out because there’s no realistic way to charge people half as much as others. Most people find out what a buffet costs before ordering it, with this influencing their choice to eat there or not. Further, many people will be regular customers and thus already know what the buffet costs.

 I said, “This cost us a lot of time and our own money to collect.  There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.”

This is a really bad sign. If your experiment fails, you’re not supposed to torture the data until it tells you what you want to hear. This is called p-hacking, and it results in an awful lot of garbage. Virtually all data sets have some correlations in them by sheer chance; finding them is simply misleading.
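To make the “correlations by sheer chance” point concrete, here’s a minimal simulation (pure noise, no real data; the group sizes and the 1.96 threshold are just the conventional ones):

```python
# A sketch of why mining noise "finds" results: run 100 comparisons of two
# groups of pure random numbers and count how many clear the usual p < 0.05
# bar (approximated here as |z| > 1.96). None of these "effects" are real.
import random
import statistics

random.seed(1)  # fixed seed so the run is repeatable

def z_score(a, b):
    # z statistic for the difference of two sample means
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

trials = 100
hits = sum(
    1
    for _ in range(trials)
    if abs(z_score([random.gauss(0, 1) for _ in range(50)],
                   [random.gauss(0, 1) for _ in range(50)])) > 1.96
)
print(f"{hits} of {trials} comparisons of pure noise were 'significant'")
```

On average about five of the hundred come up “significant”: test enough hypotheses against any data set and something will stick.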

I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).  I told her what the analyses should be and what the tables should look like.  I then asked her if she wanted to do them.

Granted, this isn’t quite as bad as the approach where one uses a computer to generate hundreds or thousands of “hypotheses” and test them against the dataset to find one that will stick to the wall, but it’s a bad sign. This is such a bad practice, in fact, that some scientific journals are requiring hypotheses to be pre-registered to prevent people from doing this.

Every day she came back with puzzling new results,

This is a very bad sign. It’s a huge red flashing neon sign that your data set has a lot of randomness in it.

and every day we would scratch our heads, ask “Why,” and come up with another way to reanalyze the data with yet another set of plausible hypotheses.

Now this is just p-hacking, except without a computer. You could call it artisanal, hand-crafted p-hacking.

Eventually we started discovering solutions that held up regardless of how we pressure-tested them.

I’m actually kind of curious what he means here by “pressure-testing”. Actual pressure-testing is putting fluid into pipes at significantly higher pressures than the working system will be under to ensure that all of the joints are strong and have no leaks. Given that the data set has already been collected, I can’t think of an analog to that. Perhaps he meant throwing out the best data points to see if the rest still correlated?

I outlined the first paper, and she wrote it up, and every day for a month I told her how to rewrite it and she did.

What was going on that 30 rewrites were necessary? Perhaps this grad student just sucked at writing, but at some point one really should pick an idea and stick with it. I really doubt the thirtieth rewrite was much better than the 23rd or the 17th.

This happened with a second paper, and then a third paper

So we’re up to about 90 rewrites in 3 months? That’s a lot of rewrites for papers about something as weak as tracking the behavior of people eating at a randomly discounted Italian buffet.

(which was one that was based on her own discovery while digging through the data).

This is pure snark, but I can’t resist: she learned to p-hack from the master.

At about this same time, I had a second data set that I thought was really cool that I had offered up to one of my paid post-docs (again, the woman from Turkey was an unpaid visitor).  In the same way this same post-doc had originally declined to analyze the buffet data because they weren’t sure where it would be published, they also declined this second data set.  They said it would have been a “side project” for them and they didn’t have the personal time to do it.

It’s really interesting that we have no idea what the post-doc actually said. It’s possible that the post-doc was just being polite and came up with an excuse to avoid p-hacking. It’s also possible that the post-doc said that this seemed like p-hacking and Wansink interpreted that as trying to cover for not thinking that it was prestigious enough work.

But it’s also possible that someone who wanted to work with an apparent p-hacker like Wansink actually was concerned only with how prestigious a journal the p-hacked results could be published in.

Boundaries.  I get it.

I strongly suspect that he doesn’t get boundaries. Most people who have to talk about them this way—saying that they respect other people’s boundaries—don’t. At least in the cases I’ve seen. People who respect boundaries do so as a matter of course. It’s a bit like how people who don’t stab others in the face with spoons don’t talk about it; they just do it.

Six months after arriving, the Turkish woman had one paper accepted, two papers with revision requests, and two others that were submitted (and were eventually accepted — see below).

P-hacking is far more productive than having to find real results. That’s why it’s so tempting.

In comparison, the post-doc left after a year (and also left academia) with 1/4 as much published (per month) as the Turkish woman.

Right, but how good was it?

 I think the person was also resentful of the Turkish woman.

This could mean several things, depending on what the person actually said and meant when they declined to p-hack the buffet data set. If it was purely self-aggrandizement, then this becomes a valid criticism. If they were actually demurring from p-hacking, then the resentment makes a lot of sense, since the Turkish woman made them look bad for standing on principle while she transgressed and didn’t get caught.

Balance and time management has its place, but sometimes it’s best to “Make hay while the sun shines.”

This part is certainly true. It’s rarely a good idea to disdain low hanging fruit. Unless it’s wax fruit, not real fruit.

About the third time a mentor hears a person say “No” to a research opportunity, a productive mentor will almost instantly give it to a second researcher — along with the next opportunity.

I really wonder what he thinks that the word “mentor” means. Whatever it is, it clearly doesn’t involve actually mentoring anyone. But don’t just pass over this, look at how glaring it is. The first half of the sentence, “About the third time a mentor hears a person say ‘No’ to a research opportunity”, is the setup for explaining how the mentor will then help the person to learn. Instead, the next three words are almost a contradiction in terms: “a productive mentor.” To mentor someone is to put time and energy into helping them learn. It’s the opposite of being productive. Craftsmen are productive. Mentors are supposed to be instructive. And then the rest of the sentence can be translated as, “…will just give up on the person.”

I think the word he was looking for was “foreman,” not “mentor”.

This second researcher might be less experienced, less well trained, from a lesser school, or from a lesser background, but at least they don’t waste time by saying “No” or “I’ll think about it.”  They unhesitatingly say “Yes” — even if they are not exactly sure how they’ll do it.

Yeah, the word he was looking for was “foreman”.

Facebook, Twitter, Game of Thrones, Starbucks, spinning class . . . time management is tough when there’s so many other shiny alternatives that are more inviting than writing the background section or doing the analyses for a paper.

I’ve got to say: if the reason that the post-doc wouldn’t p-hack the buffet data set was because they were too busy checking Facebook and Twitter, watching Game of Thrones, sipping chai lattes at Starbucks, and going to spinning class… that was actually a better use of time.

Yet most of us will never remember what we read or posted on Twitter or Facebook yesterday.  In the meantime, this Turkish woman’s resume will always have the five papers below.

Ironically, Wansink is likely to remember this blog post for a long time, since it drew attention to his p-hacking. And at this point, there’s a lot of it.

How To Get Your Posts Shared By Big Names

This is a post for people with small blog readerships who want to get links from people who have big followings. It will probably apply to other long-form media like YouTube videos and possibly even Facebook posts, but won’t apply to short-form media like Twitter. Also, I should mention that I’m talking about people who are big because of what they make. I have no idea how to get Kim Kardashian to notice blog posts.

If you’re wondering what my credentials are for making this post, I’ll start with the recent ones:

[screenshot: Ed Latimore’s tweets sharing my posts]

In case you don’t know him, Ed Latimore has (as of the time of this writing) about 40,000 Twitter followers. Recently he linked to How To End Conversations and Acting Confident vs. Being a Jerk.

In the bigger picture, I’ve been blogging on and off for twenty years. In that time I’ve had novels of mine mentioned by Mark Shea, I’ve had blogs linked to by The Volokh Conspiracy (several hosts ago), Steven Den Beste (back when he was a big-name political blogger, before he switched to anime blogging, may he rest in peace), and others. Probably the biggest link I got was from Slashdot. (That one generated around 35,000 page views, 16 years ago.)

Having (hopefully) established my credentials, much of what I’m going to talk about is actually basic human psychology. The most effective way to get anybody to do what you want is to align your interests with theirs. So the best way to get someone with a big following to link to your blog post is to write a blog post where linking to it benefits them. This makes the most important thing to consider:

What People With Followings Want

The only way to get a large following is to give a lot of people a meaningful amount of value. (Whether it is given for free or for a fee is irrelevant.) Loyalty is, however, a rare thing among fallen humanity and so one has to keep giving new value. This may be clearer if it’s considered from the reader’s perspective: if someone spends too much time writing what you don’t want to read, you’ll stop reading them. This creates, for the person with the big following, two complementary incentives:

  1. A constant need for value to give to their readers
  2. A strong disincentive to publish anything which is not valuable to their readers

The techniques for getting blog posts shared by big names all follow from these two elementary forces. (It doesn’t hurt to know how readers behave, too.)

The Basics

The single best way to get your blog post shared by someone is to write a blog post which provides real value to the person’s readers. This can work if it’s a large subset of his readers but it’s better if it’s all of his readers. And the best way to do that is for the post to be the sort of post which that person would write. Now, I don’t mean that you should ape his style. In fact, that’s probably a bad idea. But your post should be on a subject which he might write about and from a viewpoint which is at least compatible with his. Also, critically, it’s got to be a post he hasn’t already written.

That’s not to say that it can’t be on the same subject that he’s written about before, but if it is, it’s got to be different: a different take, a different angle, some additional background information which augments or clarifies his point—in short, your post has to add something to his. If it doesn’t, what’s the point in him sharing it?

A follow-up post to the big name’s post can be a good approach to getting shared. Saving the big name the trouble of writing the followup can provide a lot of value. (This is especially true if he’d have had to do research to write it.)

A funny take on a serious post, or a serious take on a funny post can also both work as follow-ups. If you’re going the route of being funny, make sure that your sense of humor will make the writer laugh out loud. If you’re going serious, a good bet is to be an answer to all the nit-pickers on the writer’s humorous post, saving him the tedium of replying and/or explaining.

Subjects that the writer with the big following hasn’t written about also work, depending on why he hasn’t written about them. Good reasons are: he hasn’t gotten around to it, or writing it would require research he didn’t feel like doing. Bad reasons include: he isn’t interested, he thinks it will turn off readers, it’s off-brand.

Merely tangential relation to the writer’s main subject probably won’t cut it. For example, it’s probably a waste of everyone’s time to tell a Paleo blogger about your post on “how to hook up with hot paleo chicks”.  Even if your technique is to carry around a clear acrylic cooler packed with ice and raw steaks, your primary topic is picking up women, not eating paleo. By contrast, “how to eat paleo on a date without being a nuisance” would be primarily about eating paleo. Specifically, it would be about navigating the difficulties of the modern food environment without sacrificing paleo goals. And if you’re pitching a paleo blogger, posts about changing the oil in your car are right out.

Don’t Waste People’s Time

If your post is not likely to be of interest to a big name’s readers/viewers/followers/etc, don’t waste their time asking them to share it. That’s just asking for free advertising. It wouldn’t even do you any good if they did share it—their followers won’t click on it. If you follow someone because you love their bass fishing blog and they publish a link to someone’s post on how to sell afghans on eBay, it will just be noise to you. And people who’ve spent more than a few months on the internet are very good at filtering out noise. At the very least, if you insist on doing this, make the subject line, “I want free advertising for no reason”.

Joking aside, for most people the reason they do this is because their hope has clouded their judgment. Try not to do this. Hope is important, but other people’s time being finite means that hope should always be tempered by realism. There will be other opportunities.

Present Your Material Politely

This isn’t hard, but the big thing is to avoid looking like what you’re not. Everyone gets spam asking them for things; when you are offering value to someone you want to be careful to avoid looking like spam. The best way to do this is to avoid asking for things. I don’t mean to be passive aggressive, just to trust the person you’re talking to to run their own blog/channel/twitter/whatever. They can figure out on their own that if something is valuable to their readers, they should share it. So trust them that they know that.

Just be simple and direct. If you think they would like it, then say so. “I’ve got a blog post on [subject] which I think you might like: [link] [excerpt]”. If you think that it’s primarily their readers who would be interested, then say that. “I’ve got a blog post on [subject] which I think your readers might find interesting: [link] [excerpt]”.

A brief excerpt is valuable because it will give the person a sense of the post’s quality in a few seconds. This lets them judge whether reading the post will be a good use of their time. Unfortunately on the internet, a lot of people are wrong about how valuable their posts are and while I trust that the reader’s post is one of the valuable ones, the big name can’t. That means that they have to spend time to find out that it is valuable. An excerpt lets them do that in a few seconds, rather than having to judge whether to invest a minute or two. That may not sound like a lot if your inbox is empty, but when you’ve got thirty unread messages, it can feel like quite a lot.

More Advanced

You’ll note from the basics section that familiarity with the big name you’re hoping to get linked by is important. The best way to achieve this, of course, is to be a regular reader. There is a world of difference between how well a regular reader knows a writer’s interests and how well someone who’s read an article or two does. (Though even the article or two is a world again of difference with someone who’s read nothing by a writer.)

There’s another big advantage to being a regular reader: you’re probably going to end up being a regular commenter. Whatever the medium, almost all writers have some sort of avenue of feedback available. If you read someone regularly you will naturally use it on occasion. As long as you are doing so from a consistent persona (ideally one that has a real picture of your face, since human beings key recognition off of faces), you will start to build up an acquaintanceship.  In addition to having a bit of a human connection—which is valuable in its own right—this will automatically elevate your request that the big name check out your blog post in their attention. Obviously this can backfire if you get annoying, so be careful to only send them your best stuff, but this is a huge leg up on getting their attention.

You’ll probably also benefit from all the exposure to the writing (etc.) of people with large followings, including becoming a better writer yourself.

Trying to Get Noticed by Really Really Popular People

First, it’s important to be realistic. This is extremely hard. There are actually several reasons for this, but the numbers are enough on their own: if someone has a million readers and 0.1% of them contact the writer per month, that’s 33 people per day. Thirty-three strangers is a lot of people to talk to in a day, on top of trying to eat, get to the gym, do one’s job, and possibly interact with friends and family.
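The back-of-envelope arithmetic above is worth spelling out (using the round numbers from the paragraph, which are just illustrative):

```python
# Back-of-envelope arithmetic: a million readers, 0.1% of whom write in
# each month, works out to about 33 strangers a day.
readers = 1_000_000       # a very popular writer's audience
contact_rate = 0.001      # 0.1% of readers contact the writer per month
per_month = readers * contact_rate
per_day = per_month / 30  # roughly a month's worth of days
print(per_month, round(per_day))
```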

But things are much worse than that. When someone is that popular, your competition for their attention will not be entirely made up of incompetent people. Other people will be offering value too. That means that there’s a lot to pick through, and that doesn’t just take time. It takes emotional energy. Some of the work required in meeting strangers is also involved in reading a stranger’s words; it takes some amount of adaptation to their style, their turns of phrase, their way of thinking, etc. This is one reason why newspapers traditionally had standardized voices, diction, etc.—by making the writers interchangeable, reading a newspaper requires significantly less effort on the part of the reader. The great variety of voices offered by blogging (etc) allows us to find people we enjoy far more, but it requires a great deal more energy to sift through. Just consider how many new bloggers you read a day—it’s not many, at least on a typical day. There’s no reason to expect it to be easier for more popular people.

None of which is to say that it is impossible, just that maintaining realism is especially important. You will of course want a title which is obviously interesting (but not click-bait), and a great pull-quote from the article to go with the title. But more than this, you also need some way of getting your post noticed. And the best way to do that is to get it noticed by smaller venues so that the really really popular person sees it all over the place. This was the technique I used to get my article linked by Slashdot. It was a different take on a hot topic of the day, and I submitted it to many smaller blogs and news sites before I submitted it to Slashdot. When the article went up on Slashdot, they weren’t even that enthusiastic about my post but noted that they were getting told about it by everyone. There was unquestionably an aspect of it being the right-place/right-time since my post hooked into a news story which was exploding at the moment. I wouldn’t have been able to get an article linked which wasn’t satisfying such a large, though short-lived, demand. But I also wouldn’t have been able to get it linked if it weren’t for all the smaller sites linking to it and making it something people were talking about. And this plays into the numbers I mentioned above. If all thirty-three people in a day are telling the big writer the same thing, he’ll probably notice.

Be Realistic About the Results

And finally, if you do manage to get your post shared by someone with a big following, be realistic about the results. It’s great for your blog post that it got a lot of traffic, but this probably won’t have much of an effect on your blog itself. The number of people who will check out more of your blog varies with a lot of factors, but unless something is very well aligned, expect the number of readers you’ve gained to be below 1% of the number who visited. Again, just think about your own reading habits: how many blogs do you follow links to, versus how many do you become a long-term reader of?

This is, by the way, another reason to focus on making the blog post high quality. If you get linked by someone with a large following, this blog post is probably going to be the only thing of yours most people who read the post will ever read. So this is your one chance to give them something of value to carry with them for the rest of their life. Make your shot count.

Acting Confident vs. Being a Jerk

For some, the difference between acting confident and being a jerk is extremely obvious. For others, it’s mysterious to the point of thinking that the one is just code for the other. Unfortunately, the former are rarely able to explain how to tell the difference between the two to the latter. It’s actually not that hard, though.

Confident people speak like what they are saying is true and is important. Jerks speak like they are an authority and that they personally are important.

You can also see this distinction in each person’s attitude toward agreement. Confident people speak like they don’t need people to agree with them. Jerks speak like the world will end if people don’t agree with them.

From the above, it should be obvious that if you want to learn how to be confident, the first step is developing your knowledge and skills to the point where what you say is both important and true. If you know that they are, then there’s nothing special to do. In that case, all you have to do to act confidently is to refrain from certain mannerisms. In particular, don’t weaken what you say with “I think”, “it seems”, “in my opinion”, etc. They’re completely redundant anyway. If you’re saying it, people know that you think it.  If you said it, people know that it was your opinion. To speak confidently, just say what’s true, stop talking, and trust your listener.

And if your knowledge and skills are not to the point where you’re certain that what you’re saying is both important and true, then don’t act confidently. Consider not speaking at all, but if you do, this is the time for qualifying what you say with “it might be” and “I’m not sure”. But as soon as you’ve gotten that over with, go develop your knowledge and skills until you have something to be confident about.

When Small is Big

On Twitter I recently posted this:

When it comes to seeming important, a small movement’s best friends are usually its enemies.

And several people promptly asked what I meant. So I might as well explain it here. There are three primary ways in which this is true:

1. Bogey Men Are Well Known

People often justify their own importance by the danger their enemies pose. Therefore they are prone to taking small groups and exaggerating the danger they pose to look important themselves. Doing this makes the bogey man they’ve chosen seem far more important too, though.

The classic example of this sort of enemy is somebody who is fighting a fight that’s already been won. They want to relive the glory days of old, but they just don’t matter any more. So they look far and wide to find some sign that the apocalypse is actually nigh and they are needed to fend it off. The result is that they seize on small groups and make them out to be a world-wide threat. It’s great publicity for the small group that they’ve seized upon. Unfortunately if the small group is actually bad, this is a net negative for the world, but people concerned with their own importance aren’t worried about that.

2. Rallying Points Rally

The thing about large groups is that they all have enemies. So when a large group attacks a small group, that small group becomes the most active place to fight the big group. This means that people who want to fight the big group will be drawn to the small group not out of any sympathy for the small group but only because that’s the place to be to fight the big group.

If the small group is extreme enough this can actually look like a tactical advantage to the big group. By causing their enemies to rally around the small group of wackos, some of the stink of the small group will rub off. This can backfire yugely, however, if the small group is not as generally unacceptable as the big group thinks that it is. The more extreme the big group, the greater the danger of this happening since somewhat mainstream groups look extreme to them.

3. Martyrdom Is Convincing

Having enemies gives you the opportunity to prove that you’re serious. It is only by having enemies that one can prove one’s courage and conviction. There is an almost Chestertonian paradox in this, but one cannot prove one’s valor on one’s own schedule. Real adversity can only come from the outside; it’s only adversity if there is an adversary. Or in other words, you can only show that you believe in a truth so much that you are willing to die for it if someone is willing to kill you for it.

It’s Normal to be Normal

Over at Amatopia, Alex writes about The Pinnacle of Flatness. To give a bit of the flavor of the post:

Extrapolate this line of thinking to cities and towns the world over. I’m sure you’ve noticed that Toronto looks like London looks like Los Angeles looks like Berlin, and so on. Not identical, but close enough. Modern architecture is but one way in which ideas of design seem to be converging on something universal…and kind of beige.

And then there’s urban sprawl and the explosion of squat, concrete strip malls, fast-food joints and gas stations, and big box stores everywhere. It seems like that’s all some towns are.

And this, of course, goes for the arts as well. Movies all feel the same, screenwriting formulae aside. Music, books, television shows, education, pop culture…the list goes on.

Then he asks the crucial question:

Is this just where things always lead? Is there an “ultimate design” that we as human beings have finally reached? Or is it the natural consequence of a society that embraces Adam Smith’s “capitalism” while rejecting the “guided by moral principles” part of the equation? In other words, is function driving this sameness, or is commerce? Or is something else?

Though Alex does have an important point which I’ll get to, I think even more important is to point out that his question is in some senses a very odd one. For most people throughout history, the big things tended to be very utilitarian. Dwelling places, transport, and to some degree clothing tended to be first utilitarian and second aesthetic. You have to be fairly rich before you can afford to spend money on non-functional design.

A manor house might well have heavily aesthetic influences in its design, but the average peasant’s hut would not have had a wide variety of designs whose purpose was to express the individuality of the owner. In the era before heavy machinery, those things take a lot of labor which a peasant could not have easily paid for. I don’t mean that people wouldn’t have decorated their houses to their tastes, but they wouldn’t have done much design to their tastes, since that mostly means varying the design from the most affordable one. There would have been variation in terms of adapting to the exact lay of the land, of course, but again that’s a result of not having the heavy machinery to turn anyplace into a decent building site.

Clothing is probably the biggest exception, but for the average peasant one’s clothing was made from fabric spun and woven by the women of the family, tending to limit one’s palette to the colors wool and flax (etc) came in. And the designs tended toward those which required (and wasted) the least fabric to make, since so much work went into the making of that fabric. Modern clothing is unbelievably inexpensive since the spinning and weaving is all done by fast machines. There would have been variation because everyone made their own clothing and so it was all made differently, but that is a reflection of variations in workmanship, not in the expression of individuality.

But even more than the practical elements, aside from some people aspiring to fame and glory, human beings throughout most of history were not primarily concerned with distinguishing themselves from everyone else. They needed other people too much. Their primary concern was solidarity with their fellow man. Your brothers and sisters and cousins and neighbors were all people you depended on for survival. Showing how special and unusual you are is a preoccupation of the rich, not of the common man. Novelty is also a luxury of the rich, who can afford to pay for it. But it doesn’t at all follow that novelty is all that good for human beings. Historically there were such things as diseases of the rich—chief among them obesity and various types of malnutrition caused by being able to afford things like white flour and white rice and foods prepared in expensive ways that also happen to leach the nutrients out into the less tasty part that the rich don’t need to bother to eat. I actually strongly suspect that novelty really isn’t all that good for human beings above a relatively small dose. People of above average intelligence—like Alex—can probably take higher doses, but on average novelty may actually be more destructive of happiness than conducive to it. Consider how important ritual—doing the same thing over and over—is to sanity.

Now, all that having been said, there is a very legitimate form of variation which is not available to the modern secular world. That is variation of virtue. A world which doesn’t understand virtue can’t tell stories of the interplay of different virtues, or how different men balance virtues in different yet good ways. As I’ve said, there’s That Story That Modern Screenwriters Can Tell.

How To End Conversations

Recently the topic of ending conversations came up and so I thought I’d write down a brief guide to good ways to do that in case it’s helpful to someone who hasn’t seen good examples of it.

And just as a preface, if you want to exit from a conversation, don’t give the other person hints that you want to be out of it. You have very little control over how aggressively hints are interpreted, and in the best case people will wonder why you didn’t trust them enough to say what you meant. In general, passive-aggressiveness leaves a bad taste in people’s mouths. And further, if you want a job done, don’t delegate it to someone who may not want it done.

Before I get into specifics, we should first talk about the generalities of the situation, so that the specifics make sense. All conversations have one or two purposes. Conversations which might be said to have no purpose will generally have the purpose of fulfilling social obligations to interact with people in some circumstances. Common purposes include:

  • Wanting human connection (to stave off loneliness)
  • Enjoyment of a subject with someone who also enjoys it
  • Passing the time
  • Communicating information
  • Being polite

For the most part, people are in a conversation for one of these reasons. Exiting a conversation in a way that does not offend the other person is primarily a matter of acting consonant with two propositions:

  1. The other person’s concerns matter
  2. The reason you are ending the conversation is that something of greater importance than the current state of the conversation has come up

The specifics of this depend on the reason the other person has for being in the conversation. Though one generality is to make sure to smile as you’re ending the conversation. Smiling makes everyone less likely to be offended, as long as your smile is commensurate with the words you’re saying. (For more, I’ve got a whole video about the use of smiling as communication.) Taking them in increasing order of difficulty:

Being Polite

If the other person is in the conversation merely to be polite, which typically means something like the two of you are together and it would be rude to act as if the other person isn’t there, exiting the conversation politely is generally as simple as saying that you should do something else and saying it was pleasant to talk with them. (Note: there is no way to politely exit a conversation if you will still be in the situation where it would be impolite to not talk. “I’m going to stand here and ignore you while you stare at my forehead” will always be impolite no matter how you say it.)

Here’s my stop. It was nice talking to you, and good luck with [thing person said].

Passing the Time

Related to being polite, passing the time is where conversation isn’t necessary but someone finds it preferable to the alternatives. When only one person is passing the time, this can be unpleasant for the other person but may be done as an act of generosity. If you’re the one passing the time and the other person has things they’d rather be doing, generally the best way out is to apologize, since it implicitly recognizes their generosity.

Well, look at me going on and on. I didn’t mean to take up so much of your time but thanks and I’ll let you get back to [whatever they were doing or should be doing].

If the other person was passing the time, then the key is to not make them feel like they were a burden. (Even if they were; odds are very good they’ll realize it on some level even if you say nothing and anything you say will probably over-communicate that message. If a person is constantly doing this to you, greater firmness will be required, but if at all possible escalate slowly.)

Hey, it was good talking to you but unfortunately I’ve got to get to [whatever you should be doing]. See you around!

Communicating Information

On the plus side, people generally don’t have emotional investments in communicating information. On the downside, these sorts of conversations can easily get lost in the woods and wander endlessly. The key to ending them is making sure that the other person has all the information that they need and that the conversation doesn’t accidentally become mutual politeness, like the time I and a group of college friends walked to the ATM before getting food together only to stand there and look at each other to see who needed to get cash before eating when none of us did. How to get out of this conversation will depend on whether you are the one who needs information or the one who is giving it. If you’re the one giving it (at a suitable time when you’re not interrupting a thought):

OK. Well, does that answer your question / give you what you need?

If they say no, then go back to trying to answer the question. If they say yes:

OK, great! I’m glad I could help, and if there’s anything else you need, just let me know.

If you’re the one who was asking the questions, how you exit the conversation will depend on whether you got the information you were after. If you did, this is easy:

Hey, well, that answered all the questions I have. Thank you very much for all the information.

(At this point the other person may take a moment to point you to additional sources of information, such as books, websites, etc. Actually write this stuff down if you can because in the worst case a little effort here will make the other person feel better, and in the more common case you won’t have to ask for the recommendation all over again.)

If you didn’t get the information and it’s clear that you’re not going to, then it’s best to be a little vague, but of course within the bounds of honesty:

Hey, well, thanks. That gives me a sense of where to get started. I need to do some more research to come up with more focused, better-formed questions. But this gives me a good start for doing that.

On the real extreme end of having gotten nothing at all out of it, just thank them for their time. They’ll probably be more glad than you are to get out of the conversation. If they ask if that answered your question, I suggest discovering your inner skeptic. What can you really be certain of, anyway?

I’m not really sure, actually. I’ve got to think about it and figure out what it is I’m even trying to ask.

or

Possibly. I need some time to think it over and turn things over in my head and see if it makes sense or if there’s stuff I still need to ask about.

If it was such a cluster-fudge that you got information that was contradictory or you know to be wrong, stick to what’s true:

Well, thank you for taking the time to answer my questions.

Enjoyment of a Subject With Someone Who Also Enjoys It

This is the classic conversation between friends, at least when it’s going well. If this is actually going on with a friend, then it will probably be hard to go wrong, unless you have to leave early. With friends, openness is generally the best approach, so if something came up that means you have to run, say what it is.

Oh shoot. I promised my [blood relation] I’d [do something] now, so I’ve got to run. I need a few more hours in a day. Will you be available [time/date]?

This both gives them an entirely believable reason why you had to leave so quickly, and by making reference to when you next talk to them, communicates unambiguously that you want to continue the subject, or at least keep talking with them.

If the conversation has come to a natural close, then mostly all that’s needed is an acknowledgement that you enjoyed the conversation. Everyone has things to do in order to stay healthy and under shelter, so no real excuse is needed, though there’s no harm in providing one, either.

Well, it’s been great talking with you but I have to get going.

Or with an excuse:

Well, it’s been great talking with you, but unfortunately I need to [practical activity, such as eating or going to sleep].

If the conversation was not really symmetric, where the other person was far more into it than you were, the excuse is more important. And to limit such conversations without giving offense, try to pick an early but not abrupt point to consistently end them; the other person’s sense of you being as into it as them will depend heavily on how participatory you are, so limiting your participation will naturally encourage them to look elsewhere while still thinking of you as meaning well toward them. (I’m assuming that you do; if you dislike someone and wish them ill, you don’t need advice on how to communicate that. Everyone knows how to shriek obscenities and throw things.)

Wanting Human Connection

This may be the hardest one since ending a conversation is inherently—if temporarily—severing the human connection which the other person is seeking. Accordingly, there isn’t a great way of doing this. There are actually two bad outcomes you need to try to avoid:

  1. Making the person feel unwanted or like they’re a burden
  2. Making the person think that you have more time to give them than you do, so that they are set up for disappointment when you don’t talk to them again as soon or for as long as they were expecting.

As is probably obvious, navigating this isn’t easy, since the easiest way to avoid one is to run straight into the other. The best bet is to express happiness that you conversed and to be very realistic about the next time you’ll talk. It is far, far better to over-estimate how long it will be than to under-estimate it. People are always delighted to hear from someone earlier than expected but feel quite bad about not hearing from someone when they expect to. This is of course difficult because the further off an estimate one gives, the less happy the other person will be to hear it. This is what tends to push us into giving under-estimates and disappointing them.

If this is a relative or other close person, it’s ideal to establish some sort of regularity. Calling every Saturday afternoon or Tuesday evening or whatever. The regularity both gives the person something to look forward to and eases the ending of the conversation because less will feel like it’s at stake. If they feel like they can rely on hearing from you again, it will be painful—but not nearly as painful—to say goodbye.

That said, the key is to strike a balance between being cheerful and acknowledging that the ending of the conversation is not a happy thing for the other person. Much of this is in the tone of voice, of course; something gentle with a note of sadness among a generally positive sound is the goal. If you can stick to a schedule, something like this:

Well, it’s time for me to get going. It was great talking with you, and I hope you have a good rest of the [realistic time period until you talk again]. I look forward to talking with you [tomorrow/next week/etc].  [If appropriate, this is where you stick professions of love and affection.]

If you can’t stick to a schedule, then something like this:

Well, it’s time for me to get going. It was great talking with you, and I hope you have a good rest of your day. I look forward to talking with you again. [If appropriate, this is where you stick professions of love and affection.]

Honestly,

It is ironic that the English language does not have any literally-true colloquialisms for “what I am about to say would be too complex to say in a manner that complies with normal etiquette so I’m going to say it without normal etiquette but do not take it to mean that I think you are unworthy of etiquette and still less take it to mean what it would if you were to apply the normal etiquette-reversal filter we all use to know what the other person means”. The standard ways to say things I know of are:

  • “Honestly,”
  • “With respect,”
  • “With all due respect,” (this one really loses its effect since one isn’t bothering to figure out how much respect is actually due—which is not very respectful)
  • “I love him, but,”
  • “I consider him a friend, but,”
  • “To be blunt,”

None of these directly mean what is intended, though usually the listener understands it from context. There’s nothing wrong with this. It’s how a lot of language works. It is, however, ironic that the way one says that one will not use circumlocutions is with a circumlocution. (If you’re not familiar with the word, it means to talk around the subject rather than directly about it, circum=circle, locution=talking.)

The question might arise why we need to do this at all. Why not dispense with etiquette all the time and just speak directly? That would work in cases where everyone knows everyone else extremely well. In small, isolated groups of hunter-gatherers, for example. Outside of that, we mostly only have a basic sense of what someone means and have an instinctive tendency to take what other people mean in its most negative light. It’s safer that way. Etiquette exists in order to deal with this instinctive tendency. It softens what we say in a manner that doesn’t trigger our instinctive tendency to take everything strangers say as badly as possible, while its standardization means that we also know how to invert it to get at the original meaning at a higher cognitive level where our comprehension won’t trigger our fight-or-flight instincts. It’s cumbersome and time consuming but all safety is cumbersome and time consuming. This is also why there are protocols for temporarily setting it aside without losing all benefit from it.

Urban-Fantasy.com—An Opportunity

Silver Empire Publishing—the company who will be publishing my novel The Dean Died Over Winter Break—has just (as of February 2nd, 2018) announced an interesting opportunity for devoted fans of urban fantasy. (For those who don’t know what Urban Fantasy is but are reading this anyway, here’s the Wikipedia article on it. tl;dr fantasy in a modern-day setting.)

We’re looking for a few good contributors to our new blog! Applicants must be able and willing to provide regular blog posts on the following topics, all related to Urban Fantasy, Paranormal Fiction, or Supernatural Thrillers:

  • Book reviews
  • Movie reviews
  • TV reviews
  • Theory, critique and discussion
  • Analysis

They’re looking for people who will do this primarily for love of the genre because the perks (in addition to exposure) are related to free access to lots and lots of Urban Fantasy. If you’re interested, check out the link to the full announcement for details and how to apply.

The Problem With Outrage Quoting

I’m fairly careful to limit my intake of social media to people who say reasonable things. This is in part a survival strategy for Staying Sane on Social Media. However, this still leaves a fairly large vector for things which unbalance my mood and make me less effective at the main stuff I’m supposed to be doing: outrage quoting.

This is where a person who is themselves reasonable sees a very unreasonable thing, then quotes it to express their outrage at it. There’s also a variation on this where the person quotes it to make fun of it. The latter isn’t quite as bad as the former, but both do have the following problem: one is still being exposed to the crazy stuff one was trying to avoid.

Actually, it’s a bit worse than that—the people one follows are specifically filtering through the stuff from the unreasonable people to find the craziest stuff that they say. This can be extremely unbalancing to one’s state of mind. As I talked about in Social Media is Doomed, human beings aren’t designed to deal with a large number of strangers. We deal with people by acclimating to them, but it takes time and is harder the more different sorts of people we need to acclimate to. Even when we are careful to keep our reading to a set group of people to whom we’ve acclimated—there’s no requirement that these people agree with each other or with us, only that we’ve acclimated to them—outrage quoting constantly introduces new people to our notice who are saying crazy things that we haven’t acclimated to. This is extremely stressful to human beings.

Also, please note that I’m not talking about being exposed to new ideas as being stressful. There are some circumstances in which that can be stressful, but usually it’s quite manageable. I’m talking about running into expressions of ideas we’re not used to. Perhaps we know somebody who will say #KillAllMen and we’ve gotten used to this eccentricity. There is no new argument to be found in a person saying, instead, #CastrateAllMen (I made that up; who knows, perhaps I will have actually come up with an absurd example that the universe didn’t beat me to for once). But if we’re used to the former and not the latter, the latter will be far more stressful to run into. There’s a new person here, and people are complex. They’re also dangerous. A stress reaction to having to deal with a new person is actually entirely appropriate. Best case scenario is a big drain on your emotional energy is incoming.

Except that this being a one-off quote means that actually, a big drain on one’s emotional energy isn’t incoming because you don’t actually need to get used to this new person. You’re almost certainly never going to see them again. And therein lies one strategy to help mitigate the stress from encountering outrage quoting: focus on how this is a person you’ll never see again and how they don’t really matter.

I don’t have any other good suggestions, other than be careful about people who do a lot of outrage quoting. But certainly I think the golden rule applies, here: be very careful when quoting to make sure that one isn’t outrage quoting. For example, when I wrote a humorous blog post about that CNN article on cuckolding (CNN’s Love of Cuckolding), I started it off with explaining why it doesn’t matter and isn’t worth stressing over. And I’ve stopped myself from quoting outrageous things often enough that it’s now becoming a habit to not quote outrageous things. Still, it’s something I always keep in mind—if I’m quoting something, what effect will seeing that have on the people who read what I write?

MST3K’s Complaints About the 80s

I was just watching one of my favorite Mystery Science Theater 3000 episodes, Space Mutiny. During the end credits, Mike and the bots are complaining about the 1980s. Actually, I’ll just quote it since the people at the MST3K wikia kindly typed it up:

Crow: You and your ’80s!
Servo: Your precious ’80s!
Crow: You know it would’ve continued to be the ’70s if not for you!
Servo: Yeah!
Mike: All right, all right, that’s it, that tears it!
[Mike attacks Crow and the three begin fighting on the floor]
Crow: You want a piece of me! It’s go time, ’80s man!
Servo: Come on cool-breeze! Ow owie ow don’t!
[After a while Mike sits up]
Mike: Wait, wait you guys, wait, this isn’t us man.
[Pause of a second]
Servo: Yes it is, you hair-feathering freak! Get him!
Crow: No, no, Servo, he’s right, he’s right. This movie has us turning on each other! It won’t end! These credits just won’t end! [sobbing]
Servo: [sobbing] It’s just like the stupid ’80s, they never ended either!
Mike: No no, actually they did end Tom, there there, it’s okay. See, see there’s the copyright, that means it’s over.
Servo: [sobbing] I’m sorry, Mike!
Crow: [sobbing] Sorry, Mike!
Mike: It’s all over, you guys. I’m sorry too.

I’ve never blinked at that, but here I am watching this in the year of our Lord’s incarnation 2018, when the 1980s are a distant memory of my childhood. And of course tons of material from the time, like movies and songs and such, survives. But MST3K has been off the air for quite some time, and it occurred to me to wonder when this episode was first aired. It turns out that it was aired in 1998. That’s just 8 years after the 80s came to a close. The 1990s weren’t the same as the 1980s, to be sure, but my recollection is that they weren’t nearly as different as the 1980s were from the 1970s.

Granted the above interaction was exaggerated for comedic effect, but it’s curious to see a perspective on the 1980s from relatively close to it.

Incidentally, my recollection of the 2000s is that, culturally, they weren’t all that different from the 1990s and that the 2010s are even less different from the 2000s. Certainly things changed, of course. People do dress somewhat differently, though among the mainstream (rather than people who live and breathe fashion) not *that* differently. And of course streaming is a huge thing these days. But at the same time I wonder if the prevalence of recorded media, both VHS/DVDs/Blu-Ray and streaming, will act as something of a brake on cultural change. There’s money to be made in back-catalogs, and new stuff tends to be more expensive. Plus most new stuff is garbage—in comparison to the best stuff of the last 50 years. (And atheists can’t tell decent stories.) This may partially be why so much of what’s made these days is remakes. This isn’t a well developed thought, just something that occurred to me.

History Is Safe Because It’s Over

Some thoughts on historical fiction and our perspective on history. In particular, how knowing the outcome of history makes it hard to relate to the things historical people worried about, and how this colors our view of them and their actions. You can also watch this on YouTube:

Movie Magic

When I was a kid, there was a TV show on the Discovery Channel called Movie Magic. It was about special effects, I believe. I never watched it that I can recall. But its title has stuck with me all these years later. It strikes me that its title captures something fundamental about movies: movies are magic. Even bad movies. I’ve been reminded of this as I’ve been watching Hobgoblins.

hobgoblins

It was Rick Sloane’s third movie and had a budget of $15,000. According to an inflation calculator I tried, that’s the equivalent of $31,337 today. And they didn’t have digital photography or editing back then. It’s not a good movie by any stretch of the imagination, but the acting, camera work, editing, and so on were… competent. Not compared to big budget movies, but compared to other tiny-budget movies. There were characters who were written and played consistently from start to finish. And the result was that this movie—bad as it was—had that movie magic.

Movie Magic is, specifically, the creation of a world. Not merely a temporary world, but a world which lasts in the imagination of those who watched it. As cheesy as the scenes between Macready and his boss were, in some sense they happened. In some sense this was a movie studio boss’s office:

hobgoblins-boss

It doesn’t make much intuitive sense, and yet it’s true.

And I think that it’s the people who have that sense of movie magic who are the primary fans of Mystery Science Theater 3000. We’re fans of it because it’s an opportunity to laugh at ourselves. Because every one of us would jump at the chance to be part of movie magic. Every one of us would make the compromises which are unavoidable when you have a budget of $31,337. But for all those tradeoffs, the movie would still be a movie. It would still be a bit of reality with places and people we made out of thin air. Maybe we’d write better dialog, but even if we didn’t it sure would be something to be part of making a movie. And I think that we all know that on some level that’s ridiculous, which is why we enjoy laughing at ourselves so much.

Hobgoblins Arrived

I recently got the 20th anniversary DVD of the movie Hobgoblins. Not the MST3K of it, mind you. Look as hard as you want, you won’t see Mike and the bots:

hobgoblins

Granted, that’s not the best screenshot to show it. How about this:

hobgoblins-boss

What’s that, you say? You don’t recognize that from the MST3K episode? Indeed you don’t, and neither do I. I haven’t watched the movie yet, but I’ll be interested to see what it’s like as it was meant to be seen. In my previous experience watching the original movies (MST3K: Werewolf is the only one I wrote up so far), they didn’t cheat by editing the movie to make it more laughable, so I don’t expect that here, either. But while I’ve only watched up to the title so far, the tone is quite different from the MST3K episode. The impression I got from the episode was that the movie was goofy; watching the original it feels more like mostly-incompetent horror. Certainly the opening sequence is played entirely straight, even though the tension was so not-tense that you couldn’t cut it with a laser-chainsaw.

Once I watch the entire movie I’ll write up a review of it. I find this sort of dive into movie history to be very interesting.

Mills

I was recently looking up mills, and came across this fascinating picture of a Roman flour mill:

Urn_holder_of_Publius_Nonius_Zethus_01_-_Vatican_museum

(Photo Credit: By Chris 73 (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons)

I’m so extremely used to the wheel type of mill that this almost shocked me. Just to be clear, I mean this kind of mill:

Hacienda_La_Laguna-Museo_del_Olivar_Y_del_Aceite-Molino_antiguo-20110918-09618

(Photo Credit: By Daniel Villafruela. (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons)

Interestingly, the Wikipedia page called it an edge mill, and the Wikipedia page for edge mills claims that edge mills were invented in China in the third century. Which, if true, means that the sort of mill stone I normally think of as a mill stone wouldn’t have existed at the time of Christ. Perhaps even more interesting, the sort of mill which quite possibly did exist then (the Roman one in the first picture) looks to me far more complicated and advanced than the sort of flour mill which apparently superseded it. Very interesting.

The thing which led me to discover this was looking up Friedrich von Logau’s poem about the mills of God. The original poem is:

Gottes Mühlen mahlen langsam, mahlen aber trefflich klein,
ob aus Langmut er sich säumet, bringt mit Schärf ‘er alles ein.

Which was translated into English (according to Wikipedia, by Henry Wadsworth Longfellow) as:

Though the mills of God grind slowly; Yet they grind exceeding small;
Though with patience He stands waiting, With exactness grinds He all.

Snow is Peaceful

There’s something very peaceful about snow. Snow causes all sorts of problems, of course, but it is in fact these problems from which the peacefulness of snow arises. Snow is peaceful precisely because it causes most creatures to go home. But it does more; since it takes footprints so readily, snow also proves when it has kept men away from a place:

20180113_0018

Isolation is not the best kind of peace, of course. Peace, most properly considered, is the ordering of creation according to God’s will. Among human beings, peace refers to harmony, not merely to the cessation of fighting. But in a fallen world one must often accept second-bests, and snow gives us a respite from many of the troubles which burden us in a fallen world.

It is interesting to consider that rain also drives men indoors and away from causing mischief, but rain is not peaceful. It seems to me that this difference comes from three differences between rain and snow. The first is that rain is loud. Snow is not only quiet but even muffles sound a bit. Snow gives one quiet in which to think.

The second difference is that human beings, in our natural state, are rain-proof but are not snow-proof. Except in very cold weather—which is not our predominant experience of rain—going out in the rain will make one wet but do one no harm. This is actually most inconvenient when one is wearing clothes (which is, admittedly, almost all the time). Snow will kill a naked man. Rain is only really a problem because we wear clothing, and then it’s really only uncomfortable. Our retreat from snow is, therefore, more dignified.

The third reason is that snow is less dangerous to us when we have shelter. Rain is just an inconvenience until one gets too much of it, in which case it causes floods which are extremely destructive and deadly even when we have shelter. And while these are unusual circumstances, they’re not unheard of. In the places where people have floods, floods happen every few years, if not more often. In the places that get snow, enough snow to collapse buildings is very rare. Water moves and combines its power, but snow mostly stays where it lands. That’s not true of mountains where avalanches happen, of course, but I imagine that snow isn’t nearly as peaceful there.

But in more ordinary places, snow only keeps men indoors, where they give each other little trouble, and so a deep snowfall is very peaceful.

20180113_0007

The Internet Needs Distributed Recommendations

It is widely recognized that centralization, such as one sees in most social media (Facebook, Twitter, etc.) has strengths which bring concomitant dangers. Possibly the biggest danger, and certainly the most pressing on people’s minds, is censorship.

Distributed media, such as blogs, make censorship much harder. However, this has (so far) been at the cost of making discovery much more difficult. Finding new blogs is a very haphazard thing, to a great degree relying on cross-promotion between blogs. By contrast, YouTube is able to leverage its centralized information to provide a list of recommended videos to the user after each video. Given the massive data available to them about what people watched and how long they watched it for, this enables them to make recommendations for videos which are often good. Almost every YouTube channel I’m subscribed to I found through recommended videos.

It probably goes without saying, but unfortunately recommendation engines are extraordinarily susceptible to manipulation by hosts with an agenda. Moreover, it would be virtually impossible to discover such manipulation, as it’s only relevant to people who are not aware of particular video makers anyway.

In order to make distributed media truly competitive with centralized media, what we really need is a system for making distributed recommendations. It’s not immediately obvious that this is doable, unfortunately. A system of distributed recommendations would be a spammer’s dream if they could figure out how to manipulate it. In fact, most parties would be deeply desirous of manipulating such a system, which guarantees that a lot of effort would be put into trying to game it. Worse, the system would almost certainly need to be anonymous, so as not to track people’s reading habits, which makes fake recommendations all the harder to defend against.

The most obvious approach to avoiding spam would be to attach micro-payments to the recommendation system. That brings with it its own problems, but it also has benefits. There are probably other options, too. Especially if one were to somehow include negative reviews as well as positive reviews, the correlations required to spam people might be far too complex to allow for useful spamming.
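To make the idea concrete, here is a minimal sketch of how such a tally might work. Everything here is hypothetical—the tuple format, the function name, the numbers—but it shows the two spam-resistance ideas together: each recommendation carries a micro-payment (a “stake”) so that bulk spamming is expensive, and negative reviews are allowed so that readers can offset paid-for praise.

```python
from collections import defaultdict

def tally(recommendations):
    """Aggregate stake-weighted votes into a score per item.

    Each recommendation is a (item, vote, stake) tuple: vote is +1 or -1,
    and stake is the micro-payment attached to it. A recommendation's
    weight is proportional to what it cost, so flooding the system with
    cheap recommendations buys very little influence.
    """
    scores = defaultdict(float)
    for item, vote, stake in recommendations:
        scores[item] += vote * stake
    return dict(scores)

recs = [
    ("blog-a", +1, 0.05),
    ("blog-a", +1, 0.01),
    ("blog-b", +1, 0.02),
    ("blog-b", -1, 0.04),  # a negative review offsets paid-for praise
]
print(tally(recs))
```

In this toy example, blog-a outranks blog-b despite an equal number of positive votes, because one negative, well-staked review dragged blog-b’s score below zero. A real distributed system would need far more than this—identity-free payment rails, a way to distribute the tallies themselves—but the core scoring idea is this simple.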

Anyway, if you know anyone who likes to develop algorithms, try planting the seed of a distributed recommendations network. It may not be doable, but the internet would benefit tremendously from it if it is.

Generational Warfare

Over at Amatopia, Alex wrote a post titled On Boomer Hate. It’s a good post which I recommend reading. Here’s a sample:

It’s trendy to hate Boomers. Literally, everyone is doing it. I did as well. But when something is trendy, it’s usually garbage. But a funny thing happened on the way to critical thinking: I’ve changed my opinion.

The more I thought about generational struggles, the more I realized that generational warfare hurts us all: What I’m getting at is that I think generational warfare is stupid and counterproductive. And I’m not just talking about the young. Us older folks do it too and we should stop it.

The more I think about it, the more obvious it becomes that the righteous Gen X indignation against Boomers is pretty hypocritical, especially since many of us express the same sentiments towards Millennials.

As they say, read the whole thing. What I find interesting about this is the way my mother—who is towards the end of the baby boom, but still solidly inside it—spoke about the demonization of her generation and the lionization of her parents’ generation. She objected to both.

The “greatest generation” were the people who endured the great depression then fought in World War II. It is certainly true that they went through a lot. However, they didn’t do it voluntarily. It was not an ascetic practice, nor (in the main) a job they volunteered for. It happened and there was nothing to do about it and they put up with it as best they could. The great depression, which overlapped Prohibition, was filled with crime, both organized and disorganized. If you look at divorce statistics they had been trending up since the 1860s and showed a dip during World War II followed by a much larger spike afterward:

(Chart: marriage and divorce over time, 1867–2011, with trend line)

That spike in divorces afterwards is somewhat typical of how my mother characterized the generation before her: finally done with deprivation, they wanted to get theirs. By the way, I added that trend line, and it brings us to another thing blamed on the boomers. People complain about the introduction of no-fault divorce, but if you look at the data, it really seems that no-fault divorce led to a spate of stocked-up divorces which then let off once the backlog had been processed. Granted, marriage is down and so one would expect divorce to be as well, but it’s very far from obvious that the boomers had any real causal relationship to the boom of divorces which happened in the late 1960s and early 1970s. Especially when you look up the history of no-fault divorce and find out it was enacted because people were lying about having cause for divorce so often that people feared respect for the truth was going to disappear.

And so it goes with many of the problems of the boomers. To quote a famous boomer/songwriter, they didn’t start the fire. And they were handed quite a lot to deal with in the form of a deeply racist society, too.

Did the boomers do a lot wrong? Of course they did. Every generation does a lot wrong. We live in a fallen world. Which brings me to where it brought Alex: inter-generational warfare is stupid. There’s no way to judge the raw materials that any given generation was given to work with, and in any event it’s deeply ungrateful. The previous generation gave us life. Imperfect life, to be sure, but life that’s quite a lot better than nothing.

And thinking about it as a parent, it’s painfully obvious to me how imperfectly I’m raising my own children. I suspect something like this applied to every generation. Our children always suffer for our mistakes. It does no good to blame our parents. What we really should do is ask God to have mercy on us all.

Popularity in the Digital Age

I don’t know how many of my readers aspire to publish publicly and have their words read by an audience. I suspect that it’s a fairly large percentage. I know this is something that I have always been drawn to, since I was young. It’s not that I wanted to be famous, though I suspect that all human beings are tempted by fame. Fame makes some very empty promises very loudly. But there are other good reasons to want to have an audience. In particular, having an audience enables one to give away knowledge that one has been given. Next to learning, there is nothing more satisfying than teaching. (In learning we are looking at the goodness of God directly; in teaching we are (by God’s gift) taking part in God’s self-gift to others.)

The commonality of wanting an audience for one’s writing, combined with the way that technology has made publishing all but free, has resulted in there being so much writing that finding things is incredibly difficult. Further, with so many options on offer, we all look for those voices which speak to us very effectively. Since there’s so much available, there’s a lot of sifting to find the things we really like. Thus the problem in the age of handwriting was copying, the problem in the age of print was distribution, and the problem in the digital age is discovery. How does one find an audience, which is really the question: how does one’s audience find one?

Aside from large amounts of money, there do not seem to be any sure-fire answers. At least, no quick ones. How does one get a sizable audience in six months without spending a ton of money on advertising and cross-promotion? Heaven knows. But it does seem to be the case that longevity is a major component of finding an audience without a ton of spending. This is for two reasons, I think.

The first is that much of stumbling into an author that one enjoys reading is by luck, and luck takes time. Over the course of several years, some people will stumble into one’s blog and like it. The other is that recommendation (posting on social media, emailing, etc) is itself something which grows with the size of one’s audience. A small audience rarely recommends posts, a larger audience recommends posts more often. Thus the few people who find one initially will occasionally recommend one’s work in a way that puts other people who enjoy it together with that work. Over time that builds, as well, both because there’s more time for that to happen but also because there’s more time for older posts to become relevant to some conversation or topic.

In essence, the key to winning the lottery is to buy a large number of tickets; the way one does that in blogging is by writing a lot of blog posts over a lot of time. Something similar applies to YouTube channels, Twitter accounts, etc.

Once again it turns out that patience is the most practical of the virtues.

Game Design and the Rule of Cool

The Rule of Cool is, I believe, originally a TV trope, but it applies to video games as well. The variant I’m thinking of is the one in which all inconsistencies between gameplay and story are waved away because gameplay is more important than consistency with the story.

Now, in fairness, games must be unrealistic in order to be games. If a game were perfectly realistic, it would be a simulator, not a game. And people would mostly only use it to train for the real version. Thus wounds must heal in seconds or minutes, not weeks or months. Thus it should take minutes to build a hut, not days. And so on; there are a lot of things which need to be cheated in order to have a game and not a simulator. This I grant.

Having granted that, it’s important to point out that it does not follow that no thought should go into how one cheats reality in order to make a game. This is not true, of course, of pure games, such as tic-tac-toe or Tetris. But in games that have a story, it is extremely important to consider how the gameplay fits in with the story. Recently I’ve been playing ARK: Survival Evolved, so I’ll draw my examples from there. The one thing you need to know about ARK is that it’s a survival-type game (i.e. you gather resources and craft tools, structures, etc.) with dinosaurs. And the dinosaur models are gorgeous.

Of course, the first problem is that ARK isn’t really a survival game. It’s a team assault game where the weapons are gathered in a semi-survivalist sort of way. I say semi-survivalist because after a certain point all of the resources are gathered by heavy machines. It just so happens that the heavy machines are dinosaurs, but aside from having a setting where they can aimlessly wander around and having a breeding mechanic, they are designed just like heavy machines would be. There are heavy trucks (brontosauri), light trucks (diplodocuses), tanks (rexes), armored assault vehicles (allosauruses), and so on. There are even spy helicopters (pteranodons), attack helicopters (tapejaras) and cargo helicopters (quetzals). You might think that flying reptiles would be more like planes, but they’re all slow, very maneuverable, and extremely good at hovering. The more heavily laden they are, the slower they move. It makes sense as a game mechanic but makes absolutely no physical sense. If a slow moving animal flapped very slowly, it would fall like a rock.

And the problem is that this takes you right out of the story. When flying reptiles are actually swimming through the air in entirely impossible ways, the beauty of the models loses most of its effect. The same is true of the walk-cycles which don’t adapt to the ground, but I think for different reasons.

Granted, walk cycles which don’t use physics to adjust the skeleton in natural ways for locomotion will never look entirely right, but I think that we’d forgive scripted walk cycles far more if the dinosaur which was walking imperfectly was actually moving with a purpose. But in ARK they aren’t. Or rather, they almost never are. On occasion a predator does run at another dinosaur to attack it. But under normal circumstances the dinosaurs simply wander around completely aimlessly. The herbivores do not eat, nor do they walk towards plants. They are simply on a random walk. And I think that the fact that their movements are completely pointless makes you far more likely to notice that they’re not walking correctly.

And this problem carries over to appreciating the models for another reason. It’s great that the triceratops looks almost exactly how you’d picture it, but it’s hard to notice that when they’re not behaving at all like how you’d expect. They’re a herd animal. You should see them in groups and they should move around with some relationship to the others in the herd. That they don’t just breaks the illusion even more.

And of course everything has terrible eyesight in ARK. Predators don’t notice prey until they’re within about 50 yards. Prey doesn’t notice predators until the predators have actually bitten them. No creature in ARK has a nose.

Of course, none of these are likely to be overly noticeable if you’re playing in team-versus-team since you have to be on constant lookout for other teams who will try to kill you if you’re alone.

I should note that the dinosaur taming also suffers from the idea of gameplay-over-story. With exceptions, dinosaur taming is accomplished by using tranquilizers to knock a dinosaur unconscious, then feeding it its favorite foods while it’s unconscious. Once it’s eaten enough, it instantly forms a lifelong bond to you and is willing to go on suicide missions at your command. Granted, you have to cheat taming an animal somehow for this to be a game and not a simulator, but this is extremely stupid. Worse, as you are shooting the dinosaur with tranquilizer-soaked crossbow bolts in order to knock it out, once its torpor passes a certain point it realizes that you are trying to tranquilize it and runs away at its top speed. This is very stupid, because tranquilizers make animals slower, not faster, but it also makes taming dinosaurs frustrating. There’s also no way to vary the amount of tranquilizer delivered per shot, so larger, higher-level dinosaurs require very large numbers of shots to tranquilize. That’s tedious, not fun. (This is another case where the game is made for multi-player: using one of the many multi-person dinosaur mounts makes chasing after dinosaurs much easier, since one person drives the dinosaur while the other shoots. It’s also the case that, for example, four people can deliver 4x as many tranquilizing shots, so chasing may not even be necessary for teams.)

A mechanic where you feed the awake dinosaur until it likes you would have been much better. This does actually exist with a few dinosaurs, but even here this has been screwed up so that it isn’t too easy, by which I really mean, too fun. There’s a dolphin-like marine reptile which likes to come up to survivors (what the players are called) and nuzzle them. You can give them meat and this tames them, except that once they realize you’re trying to tame them, they run away. This makes no sense, and is no fun. Apparently the most important game mechanic is that the player must struggle for everything.

Ultimately, ARK is an absolutely beautiful game which isn’t very much fun to play in single-player mode, because its central theme is being a tribal warfare simulator in which it takes hundreds of hours to build up assets that get destroyed in a few minutes during a raid. The later stages of the game are even fought with automatic weapons and heavy artillery; the dinosaurs seem almost out of place among auto-turrets and C4 bombs.

But the upshot is that the game really doesn’t work as a single player game. It really looks like it should work as a single player game. There should be an enormous amount to do all on one’s own. But it’s mostly ruined by inattention to the story. It’s not possible to suspend one’s disbelief long enough to enjoy it. Which is a great pity because the dinosaur models are gorgeous.

Advance Review Copies of The Dean Died Over Winter Break

The first bit of news is that Silver Empire Publishing will be publishing my novel The Dean Died Over Winter Break. It’s due out in early February. And as you might be able to guess from the title, it’s a murder mystery.


And on that note, if you are interested in an advance review copy of The Dean Died Over Winter Break, please contact Russell at Silver Empire (russell at silverempire dot org). As I understand it the only requirement is that you agree to read it and leave an Amazon.com review on the publication date. Which, I should point out, is a very kind service to perform. Amazon reviews are extremely helpful in connecting books with people who might enjoy reading them.

Falcons Are Murderous Parrots, Not Raptors

At least, so says this biologist. Basically, the idea is that instead of being a splinter off of raptors (hawks and eagles) which specialized for speed, falcons are actually a splinter off of parrots who specialized for speed and meat eating.

The article goes on at length about how shocking this is, but having been very into studying falconry in my youth and having once had a pet parrot (well, a cockatiel, but it’s in the parrot family), I’m actually not very surprised. Eagles (mostly) look kind of like large hawks, but falcons just don’t look much like hawks at all. They actually look more like short-tailed parrots. This is especially true of their wing shape. Falcons have long, narrow wings where the flight feathers stick together, forming a (mostly) solid surface. Hawks’ and eagles’ flight feathers stick out independently, looking almost like outstretched fingers. Parrots’ wings look extremely similar to falcons’ wings, with the flight feathers touching each other.

Granted, looks aren’t dispositive, which is the point of the original article. I just think it’s worth noting that it goes out of its way to emphasize the ways that falcons look like hawks but not how they look dissimilar.

And it should be noted that the results of convergent evolution are, in the end, convergence. That falcons are more closely related to parrots than to hawks means very little; you can train hawks and falcons for falconry (hunting), but you can’t train parrots to hunt. Falcons don’t talk like parrots do, and you interact with them much more like hawks than like parrots. Hawks, eagles, and falcons are all fairly solitary creatures.

So while it’s fascinating trivia that falcons are more closely related to parrots than to hawks, it’s not actually useful information. You still shouldn’t name your gyrfalcon Polly, or offer it a cracker.

That Story That Modern Screenwriters Can Tell

Recently, I wrote about The Story Modern (Western) Screenwriters Can Tell. I realized that the modern story can be put even more succinctly: the main character decides whether he’s going to be completely worthless or only mostly worthless.

Usually the thing which precipitates this crisis is that the main character wants to be completely worthless, but the plot makes it such that if he is as completely worthless as he wants to be, many people will die (or at least suffer). In the end, we find out if he’s willing to go beyond himself to alleviate their suffering in the way that only he can do.

This has nothing to do with heroism, but it is mistakable for heroism by people who primarily think in terms of story beats (i.e. of plot points broken down by scene, the way that screenwriters do when writing or editing stories). Real heroism is not about whether someone will do the minimum necessary, but whether he will go beyond what is necessary. At its core, heroism is about generosity. That’s why it moves us so much—it’s about being a true image of God.

I suspect that it’s not a coincidence that modern writing is primarily concerned with how imperfectly the main character will be an image of hell.

What People Mean To Their Fans

I was recently reading about John Denver. Probably my favorite song of his is Thank God I’m a Country Boy (which described, to some degree, the life I aspired to as a child, not the one I had):

I was also extremely fond of Christmas for Cowboys:

Anyway, he had a somewhat tumultuous life and died in a plane crash while piloting an experimental plane. He was also somewhat politically active, championing environmental concerns, being against the NRA, backing Jimmy Carter, and so on. Still, this was from the time when celebrities didn’t—or weren’t allowed to—mix their politics into their art by way of expressing venomous hatred for fans who disagreed. And without the internet, one didn’t tend to run into their off-duty political rants nearly as often. Ah, the good old days. But it brings up a very interesting point: John Denver meant something very different to the ten-year-old me than he meant to himself.

In one sense that’s obvious. To me he was primarily his music, while to him he was primarily a man. But in another sense, it does bring in a fascinating point about God’s governance of the universe. As I’ve written about, You Rarely Know What Good You Do. Electronic reproduction, which brings our lives into contact with people we’ll never meet, makes this even more obvious. I don’t know whether John Denver was a humble man, but I do know that his song Thank God I’m a Country Boy did help to teach a very young me about humility. I don’t know if he even thought of that song as being about humility. He may well have thought of it as being about not being suckered in by the promises of city life and/or living within your means. But even if he did, it still taught me lessons about humility.

I’ve never understood when people get hung up on what their “identity” is. How on earth do they know? First, they’re a work in progress. Second, they don’t know most of what they do. How on earth are they supposed to know what their “identity” is? For the most part our identity is out of our control, anyway. How we relate to others is dominated by the world, not by us. Which means that it’s almost entirely under God’s direction, not ours, even in the limited sense in which our choices are not God’s direction of the world. (Which is a useful sense, even if not the truest sense.)

And John Denver is a good example of this. He was someone important to me but he never knew that I existed and consequently had no idea who he was to me.

Life must be lived in faith, since it sure as hell can’t be lived in present knowledge of what we’re doing.

Happy Christmas!

Somehow in America we switched to predominantly saying “merry Christmas” instead of “happy Christmas”. I haven’t had time to look up when, but I’m curious when it happened since it can’t have been that long ago. The final line of Twas The Night Before Christmas is:

And I heard him exclaim, ere he drove out of sight, “Happy Christmas to all, and to all a good night!”

That poem was published in 1823. And yes, I know the original title was A Visit From St. Nicholas. Anyway, I prefer the original (and what’s still said in England, as I understand it) because “happy” is one of the translations of the Greek “makarios”, the other translation being “blessed”. It’s the primary attribute described in the beatitudes in the sermon on the mount. Both are good, but happy just encompasses more than merry.

May you have a very happy Christmas.