People only Read What is Published

(In a sense this post is a generalization of the fundamental principle of science, but it’s worth looking at that generalization in detail.) It is obviously true that people cannot read what hasn’t been published because if it was not published, it would not be available to read. From this utterly trivial point we can predict several non-trivial things which in a fallen world will reliably be true about many of the people who create for publication.

Actually, there is a second fact which we need, but it is only slightly more controversial than the first: people do not re-read material often. If we put these two together, for a creator to be read as often as possible, they will need to publish a lot of work. There are exceptions, of course—I’ve re-read Pride & Prejudice around twenty times now—but in general this holds true and is especially true of anyone who wants to make an ongoing living from their creative work. (It’s also true of anyone who simply wants ongoing attention even if they don’t make any money from it.)

In order to publish frequently, a person must have many things to say, and this is the crux of the problem. There are several ways to have a lot to say, and—outside of explicit fiction—only one of them is good. The good way is to study the world and talk to the wise so that one becomes wise oneself. This is a long, hard road, and it is inevitable that topics will come up in popular discussion which might be well-read if one could write about them, but one simply doesn't know enough to write about them well. Many people take this long, difficult path, and it is a good idea to not lose track of them when you can find them.

There are much easier ways to have a lot to say, though. Making stuff up is the easiest, but also the most dangerous way, as a number of disgraced reporters and academics have proven. Outright lying is very hard to defend and also very offensive to readers. Several orders of magnitude safer is explicit speculation. You can see this in articles that have a question mark in their title. “Did [Famous Politician] Buy And Eat Sudanese Sex Slaves?” is an article that can be based on as little as a trip to the Sudan—or a neighboring country if necessary—and the politician being the sort of person who would do that sort of thing. It’s not hard to make things seem plausible, especially if one picks things that aren’t as extreme as this silly example. There are many variants of this approach, too. One can speculate about the implications of what it would mean if someone in a position of authority were to say something. One can also speculate on why a politician won’t say something at a particular time. Since a politician can’t say everything in every speech, there will always be a wasted opportunity to talk about. If the important people aren’t sufficiently obliging, one can also talk about what other people are saying about what was—or wasn’t—said.

Speculation on its own is not very interesting, however. One wants not only to publish material, but to have people read it. For that the writing must seem important as well as new. Now, it is possible to write about important things through hard work coupled with the patience to wait for important subjects to come along. But once again there is a much easier way to do this: throw perspective out the window. There are variants, of course, but at their heart they all consist of some sort of skewed perspective. Probably the most popular is to take whatever topic one is writing about and imply that it spells the end of civilization as we know it, or, if it isn't utterly trivial, even the death of any possibility of happiness in this world. Extrapolation is a very useful tool for this.

When exaggerating, the easiest approach is to assume that the world is static and project all trends out to infinity with no reactions to the trends or changes in behavior. Now, human beings have many flaws, and chief among them is that most of us do very little by principle. This is why so many people profess terrible principles—what's the point in considering the truth of something one has no intention of living by anyway? But there is an upside to this, and it is that extrapolating out from people's bad principles to their actions is usually quite misleading. The more terrible a principle's results, the more people ignore it—sometimes even going so far as to reinterpret it to mean the opposite of what it originally meant. Whether this speaks well of the people or not, it is simply unreasonable to pretend that they will stick to their principles as things get worse and worse. Civilizations do die off, but at vastly lower frequencies than publishing cycles demand.

There is also the flip side of this coin—science reporting always has to include some section about how the discovery will cure a disease, make people thinner, make phones thinner, finally bring about the electric car, or at least significantly impact half the population's life within the next few years. The overwhelming majority of them won't, of course, but on the plus side this provides some grist for the worry mill because [political bad guys] will prevent the good things from happening. And don't forget that every change hurts someone. Interestingly, this constant stream of good things coming in the future, rather than being here in the present, may also help to raise people's expectations of what life now should be—it really sucks in comparison to how good it will be ten years from now—so even without spin this works synergistically with the world-is-ending articles. Focusing people's attention on what they don't have is a great way to make them discontent and in need of an explanation for that unhappiness.

I should probably also point out that since really interesting new facts come along fairly infrequently, if a person is sloppy with their facts and doesn’t check into whether the things they have heard as facts are actually true, this will make them far more likely to come across “facts” which seem important. (Scientific studies with small sample sizes and no pre-registered hypothesis are a goldmine for this.)
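The arithmetic behind that goldmine is easy to check: under the null hypothesis a p-value is uniformly distributed, so a researcher who tests twenty post-hoc hypotheses at the 0.05 level stumbles on at least one "significant" finding about 64% of the time. A quick simulation confirms it (a minimal sketch; `fishing_expedition` is an illustrative name of my own):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fishing_expedition(hypotheses=20, alpha=0.05):
    """Under the null hypothesis every p-value is uniform on [0, 1];
    count it a 'discovery' if any of them dips below alpha."""
    return any(random.random() < alpha for _ in range(hypotheses))

trials = 100_000
hit_rate = sum(fishing_expedition() for _ in range(trials)) / trials
# Analytically: 1 - 0.95**20, which is about 0.64.
print(f"chance of at least one 'significant' finding: {hit_rate:.2f}")
```

Pre-registration closes the mine by forcing the hypothesis to be chosen before the data is looked at.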

The point, of course, is not nearly so much that all of this is a temptation to disciplined writers, but that it is a selective pressure which greatly rewards undisciplined writers and punishes disciplined writers. When considering the big picture, it doesn’t much matter whether disciplined writers resist temptation because the undisciplined writers will succeed and do very well regardless. And writing is not a zero-sum game. Undisciplined writers who trick people into reading material of exaggerated importance will increase the amount of reading that goes on. (Which editors who come up with headlines have known for as long as there have been headlines.)

But more reading is not always better than less reading; reading which unbalances the mind through doomsday predictions breathlessly uttered makes people less able to understand truth spoken calmly. People also have finite and often small amounts of time and mental energy for reading, so consuming large amounts of exaggerated fluff can squeeze out real reading, even where it doesn't habituate a person out of being able to do it.

(And everything I’ve said here applies to things that are watched or listened to just as much as to things that are read. As the saying goes, it’s not the medium, it’s the message.)

The takeaway is very simple: be very careful in how much news and news commentary you consume, and remember how big a selective pressure there is on the people who are giving you the news to exaggerate and distort it.

Control is the Worst But Most Certain Proof

The things we know, we know according to different levels of certainty. To illustrate the spectrum with its extremes: everyone knows with complete certainty that they themselves exist, and they know with virtually no certainty at all the things half-remembered that they heard from a known liar who thinks he heard it from his cousin one time. Most things, obviously, are somewhere in between those extremes. And in all but the most certain cases, our knowledge is only indirect, which requires us to trust the use of our own reason to know the truth from the evidence.

Consider the case of a woman who asks the question, “does my boyfriend really love me?” It is not possible to measure love, and it is always possible to respond to a direct question with a lie. Perhaps he doesn’t love her but is even more afraid of being alone while he waits for someone better to come along. Even worse for her certainty in his love, he could be mistaken. Perhaps he loves an ideal of her which he will someday discover is not the real her.

Worse, doubt can lead to imagining all of the possible ways he could not love her but still do the things he did which seemed like love. Consulting one’s imagination can be confused with looking at the world, which will further fuel her doubts. If she gives in to this, turning her attention away from the evidence of his love towards the counter-evidence of her doubts and suspicious imaginings, she could work herself into a state where all of the true things in real life which should make her convinced of her boyfriend’s love leave her empty and uncertain. What can she do?

This is where many people go wrong, because they know that control is powerful proof. If you can make something do what you want, it is very convincing evidence that you really know the thing. (This is why repeatable experiments are so critical to the scientific method.) If she can make him do things he would do only if he loved her, then this should finally assuage her doubt. But there is a problem: whatever she asks, he might have wanted to do anyway. This adds the temptation for the demands to become unreasonable or even anti-reasonable. The more self-destructive and unreasonable the demands, the more clearly the only reason he is complying is because he loves her so much.

Of course, this is bound for disappointment. In practice we can never fully control another person, and if she keeps this up for very long the boyfriend will almost certainly stop loving the woman. People dislike being manipulated and distrusted. And even if he doesn’t leave her, she’ll then know she’s with a man so desperate he’ll put up with being treated terribly. This makes his love worth very little since it’s really an indication of how desperate he is, not how lovable she is. In fact, there is literally no way that this attempt to prove his love through control will end well. Alas, to paraphrase Jane Austen, insecure people are not always wise.

A very similar problem can be seen among a certain sort of atheist. When they reject the evidence given (here’s a summary of what’s often offered) and are asked what sort of evidence they would accept, it’s rarely specific. It varies all over the place, but tends to have in common that it is something simply counterfactual to the world as we find it. But unlike when a Christian might say that the evidence he would accept that God does not exist is that nothing at all existed, this counterfactual isn’t related to the nature of God in a direct way at all. Creation not being created is evidence against the creator in a direct and sensible way. There being more of something or less of something is not directly related to the creator being our creator; it’s just something picked at random. And a moment’s thought shows that it is the counterfactual nature of the evidence that is important and not its being related to the creator. That is, this lack of relationship to what the evidence is supposed to prove is no accident. If the message “I exist. –God” burned forever in the sky in five hundred foot tall letters, atheists would just say that it was an unexplained natural phenomenon which influenced primitive people to come up with the myth of God to explain it. Also that it influenced our language so that these letters were meaningful to us. And some day we’d definitely have a natural explanation for it.

What people want is not just any sort of evidence, but specifically the evidence of control. It is not really different from people in Jesus’ time who wanted a sign, which is to say, a miracle done on command. They did not then and do not now want to have to discover what the world is. They want to know it by having it conform to their desires.

But the psychology of this is interesting, because I don’t think that it’s selfishness. More specifically, I mean that it isn’t pride. It isn’t the desire to be God, to be the lord of all. Rather, control is powerful evidence because it seems to make the thing controlled an extension of the self, which as Descartes noted is certain even if we doubt everything else. It is not, at its core, a desire to dominate. It’s a fear of trusting. It is the insecurity of a timid creature which will not venture out of the burrow of certainty to see what actually exists in the larger world where it is possible to doubt.

Charles Dodgson, Modus Ponens, Achilles, and the Tortoise

Eve explains why requiring all proof to be recursively, discursively proved is a form of skepticism that is simply a denial of reason. There is a real sort of skeptical attitude which is the thirst for truth, but there is also the simple refusal to believe, which lazily uses the impossible standard Eve very clearly describes.

Eve Keneinan, Last Eden

Charles Dodgson, probably better known to most by his pen name Lewis Carroll and his books Alice In Wonderland and Through the Looking Glass, was a logician and mathematician at Oxford University.  The Alice books are actually wonderfully full of logical puzzles and paradoxes, and I have heard the claim made that the reason that everyone in Wonderland is insane, is precisely because they are all perfectly logical, within their own parameters.

I want to talk about something else today, though.  At one point, Dodgson wrote a short dialogue between swift-footed Achilles and the Tortoise, sometime after Achilles, impossibly, has caught up with the Tortoise and is riding on his back.  For some reason, the conversation has turned to a discussion of the modus ponens, the logical validity of which Achilles is trying to persuade the Tortoise.

Modus ponens is one of the most basic valid argument…

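For readers who haven't seen it written out, modus ponens is the inference rule that from p, together with "if p then q," one may conclude q. As a sketch in Lean (my own illustration, not part of Dodgson's dialogue):

```lean
-- Modus ponens: given a proof of p and a proof of p → q, we obtain q.
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp
```

Dodgson's Tortoise, of course, refuses to use the rule and instead demands that it be written down as yet another premise, which generates a new rule to demand, and so on forever. That regress is precisely the impossible standard Eve describes.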

Material People are Immaterial

There is a problem which Materialists face that is rarely talked about. (Materialism is the belief that only matter and physical forces exist, i.e. that all disciplines are really a form of applied physics.) And the problem is a fairly basic one: what is an individual person?

In one sense this is a silly question because we all know. But the problem, for Materialists, is that what we all know directly contradicts Materialism. And it contradicts it because what makes a particular person that person transcends the particular matter which they’re made of. A Materialist denies that there is anything which can transcend the particular matter; all that exists are sub-atomic particles and a few forces acting on them. How, then, could a Materialist possibly define what a person is?

This is an especially hard problem over time, since the matter which makes up a particular body changes through the years. All proteins, fats, sugars etc. get recycled by the body in its process of continual renewal. Even more of a problem is that a person starts off weighing less than ten pounds and usually ends weighing over 100 pounds, often quite a bit more. By adulthood their original matter is largely long gone, and any matter which by chance is the same is a tiny fraction of the original. Other changes such as larger muscles, longer hair, shorter hair, losing a limb, growing extra teeth, and many other changes significantly change the physical configuration of the matter. Neurons in the brain are constantly being made and new synaptic connections forming and others going away. Neither the particular matter nor the shape of the matter can be used to define a person. And according to the Materialist, nothing else exists.

There’s even a further problem that Materialists face in defining people: if the only real things are sub-atomic particles and forces, there isn’t a good way to distinguish between the person and the chair he is sitting on. Individual molecules have inter-molecular attractions, but so do the molecules in the person and the molecules in the chair. The wood is a different density than the person’s skin and muscles, but those are a different density than the person’s bones. And if this is hard, what about when two people shake hands?

In my experience, when you point this out to a Materialist, their reaction is to get annoyed and say, “come on, you know what I mean.” Or, “and yet I can reliably tell what is a person and what isn’t.” I’ve never understood why it is supposed to be an argument in the Materialist’s favor that in practice not even he believes the nonsense he’s saying.

Believing our Imagination

After I posted about whether we can choose to believe something, my friend Eve Keneinan pointed out to me that I had left out the subject of imagination. In particular, that it is not merely a question of whether we close our eyes or look at reality: we can also choose to look at our imagination and mistake that for looking at reality. The phenomenon of falling in love with a theory is a subset of this practice.

Imagination is a very interesting subject and one remarked on probably less than it should be. Even the simple question of what is imagination is not asked very much. In broad terms, imagination appears to be the ability of the mind to take on the form of something with which it is not in contact. (This is in reference to the Aristotelian idea that knowledge consists of the mind taking on the form of the thing known; where form refers, very roughly, not to the physical shape of a thing but essentially to what makes it what it is.) The mind can take on the form of something not real, such as when one writes fiction, or it can take on the form of something real but simply not present, such as when one calls to mind the face of a friend.

There is a problem with the latter type of imagination, when it is derived from reality, because we are fallen creatures: we can call things to mind imperfectly. This immediately introduces problems, though it can largely (though rarely perfectly) be corrected by consulting other aspects of our memory to make sure that our reconstruction of our memory is in fact correct. Our imagination is notoriously misleading when it comes to eye-witness testimony, identifying a person we’ve never seen before, and other things courts of law rely on all too often, but that’s not the main point here.

When Immanuel Kant killed off knowledge in the last days of Modern Philosophy as a living endeavor, he proposed imagination as a substitute for it. Not pure imagination, of course, since that would be absurd even to a brilliant man, but imagination which is then checked against experience (where practical). If experience confirms it, then we continue to count our imagination as “knowledge”; if not, we must try to imagine something else which does conform to our experience. For a fuller explanation, check out Kant’s Version of Knowledge.

For many people this idea of “knowledge” has replaced actual knowledge, and interacting with the world becomes an almost solipsistic exercise in playing with the phantasms conjured up by our imaginations. Even where it hasn’t, it is a common practice to understand something by trying to imagine it from incomplete knowledge, very frequently supplying the gaps with pieces of ourselves. That a great many people assume that everyone else is just like them only makes this more misleading whenever it is applied to people or things which are not just like them.

Perhaps most dangerous of all, it is exceedingly easy to fool ourselves into thinking that by looking at things as we imagine them, we are actually looking at the world. Not only do we go astray but we don’t even realize our own ignorance. Having applied ourselves with great effort to learn about things which exist nowhere else but our imaginations, we feel like we’ve tried. Worse, it is painful to realize all that effort was wasted, making admitting our mistake to ourselves very difficult indeed.

It is possible to be lazy and ignorant, by not trying. But it is also possible to be very industrious and still ignorant, by looking in the wrong place.

Postscript

There is a saying that Modern Philosophy was born with Descartes, died with Kant, and has roamed the halls of academia ever since like a zombie: eating brains but never getting any smarter for it.

No One Preaches Radical Freedom to Children

Radical freedom, if you’re not familiar with the term, is basically just “do as you will is the whole of the law”. There are many variants of it, but in general it’s the proposition that there are no binding constraints upon a person’s actions—no good or evil—except what they themselves impose.

If this sounds like pure madness, it is, but it’s always coupled with some variant of the belief that humans are innately good and never (or very rarely) want to do wrong, so the people who profess it always assume that it will produce the exact same results but with less guilt. You can see this in the ads by an atheist group on the side of buses—I believe it was in London—saying, “There’s probably no God. Now stop worrying and enjoy your life.” A friend said that a famous atheist once answered a question about morality if nothing is forbidden because God is dead, “I’ve already murdered the number of people I want to: zero.”

In defense of the people who propose ideas like this, they’re not complete idiots and do know that there are people who do murder, steal, rape, etc. There isn’t a single response to that, but I think in most cases they classify anyone who does this as mentally ill and think all such behavior should be dealt with medically.

You could even make a case that Ayn Rand should be classified as a preacher of radical freedom, since her version of radical selfishness was somehow supposed to involve everyone working together towards the common good. (They were supposed to realize that cooperation to mutual benefit was their best way to benefit. I think they’re also supposed to recoil in horror from benefiting at the expense of another or by receiving anything which they haven’t earned. Because that’s obvious to everyone who is rational. I’ll wait until you’ve stopped laughing to type more.)

But something I’ve noticed about everyone who preaches radical freedom is that they never preach this to children. They always wait until somebody who doesn’t believe in radical freedom has painstakingly, over many years, trained children to do what is right rather than whatever they want to do, until the children largely want to do what is right, by habit. Only then do the preachers preach radical freedom. Then they look and notice that people who are largely set in their ways don’t much vary their ways if they start believing that anything goes and conclude they were right that radical freedom is harmless.

Or at least people don’t vary their ways much at first. Another thing I’ve noticed is that the people who preach radical freedom don’t tend to follow up, over decades, with the people they’ve converted. Not that it would matter, since if any of their followers do bad things, it is because they were defective, or mentally ill, or irrational, or whatever, and never because all human beings face temptation and need support in virtue.

And they never seem to ask what happens to the children raised by their followers. In part, of course, people tend to abandon radical freedom as a doctrine once they’re forced to raise children because telling a child that what they really want to do is share their favorite toy is just so utterly doomed to abject failure that almost no one ever tries it. And of course when followers practice some amount of realism raising their children, they are no longer followers, or are heretical followers, or just don’t show up to the monthly do-whatever-you-want meetings because children make it hard to belong to clubs. Whatever the reason, the preachers of radical freedom never talk about the practical aspects of raising children. And in the end, I suppose it shouldn’t be shocking that people who never consider how to raise children should be unaware that degeneration generally happens by generations.

Choosing to Believe

I recently saw the question posed whether it is possible to choose what one believes. The answer is obviously not. Having said that, it clearly is possible.

Before I get into either answer, I want to briefly define what I will mean by the word reality. It is that which, when you stop believing in it, doesn’t go away.

It is clear, then, that it is not possible to choose what one believes because belief is, simply, what reality appears to be. Beliefs are, in this sense, passive, like sight or hearing. We cannot choose what we see—we look and there it is.

But even in saying that, we can begin to see why it is possible to choose our beliefs: You can choose where you look.

If you hear a belief proposed which depends for intelligibility on knowledge which you don’t at present have, the belief will necessarily not be believable. You might have no reason to disbelieve it, or you might take it on the authority of whoever told you as likely to be true (whatever it means), but you will not actually believe it. To give a concrete example, suppose someone is telling you something about relativity, and says that some property is true of the Lagrangian near massive bodies. If you have no idea what the Lagrangian is, you can trust that he isn’t wrong, but you can’t believe what he’s saying because you don’t know what it means. For you to believe it, it must seem to you an accurate description of reality. Until you understand it, to you it is not in fact a description of anything at all. Now, it is quite possible to, by choice, refuse to ever learn any of the base knowledge necessary for the belief to be believable. If you did this, you would be choosing not to believe the belief.

A practical case I deal with all the time is that young children will not listen to any evidence about the toy store being closed because they are unwilling to believe the necessary corollary to it: that they cannot go to the toy store right now. Toy stores can’t close, and I’m a monster for not taking them there, now. It is true that they don’t believe the toy store is in fact closed, but in shutting themselves off from all evidence because they can’t deal with the consequences, they are clearly choosing to believe that the toy store is in fact open. (To be clear, I picked this example because it should be familiar to everyone and is ready to hand, I am not trying to subtly call all atheists children, nor anything like that. I do my best to restrict rhetoric to posts in the rhetoric category and with a warning up top about how to read them. I believe in active aggression, not passive aggression.)

In a similar way, it is also possible to choose to believe something: in the spirit of inquiry, one could seek out all of the knowledge necessary for a belief. Properly, one would be attempting to believe it. There is an asymmetry here, because the best one can do is try to believe something whereas ignorance can be guaranteed. It is always possible that, having all the necessary groundwork for a proposed belief to be believable—in other words, fully understanding the idea—it still does not seem to be an accurate description of the world. This is always going to be true of false beliefs, like the terrorist attacks of 9/11 on the World Trade Center being an inside job or the one-gene-one-protein theory which was recently chucked into the dustbin of biology. It may even be the case with true beliefs where we don’t understand them well enough, like people who rejected the Monty Hall problem despite knowing a lot about probability and thinking they fully understood the problem specification.
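The Monty Hall result, at least, can be checked by brute force; a short simulation (a minimal sketch, with illustrative names of my own) shows what the doubters would have seen if they had looked instead of consulting their intuitions:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the contestant wins."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the contestant's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials))
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(f"stay:   {stay_wins / trials:.3f}")    # close to 1/3
print(f"switch: {switch_wins / trials:.3f}")  # close to 2/3
```

Switching wins about two-thirds of the time, just as the problem specification implies, however strongly the imagination insists the odds must be even.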

But it is very important to note that what constitutes attempting to believe a belief is not purely an act of will. It is the will directing the intellect where to look. That is as far as the will can go; the intellect will see what it sees, just as the will can literally make your eyes look at something, but your eyes then see what they see, and not what you wished to see. It is a question of the will overcoming laziness or fear and putting in the work of learning, not a matter of the will overcoming the intellect and creating something in it. Human will is a powerful thing, but it cannot do the impossible, and it is not possible to create impressions upon the intellect through sheer will. The intellect is always fertilised by the reality it perceives. Will can no more create a belief in the intellect than a man can impregnate the color blue.

Update: My friend Eve Keneinan pointed out that I didn’t address the complication that we can choose to look at our imagination rather than at reality or nothing at all. I’ve fixed that in its own blog post.

The Argument Against God from the Existence of Atheism

I recently came across an argument which attempts to prove that God does not exist. It’s interesting for two reasons:

  1. It’s not the standard dodge of saying that the burden of proof is on others, as if all of life is a debate, rather than the burden of investigation being on all rational people to find out what is actually true of the world.
  2. It seems to be a novel argument, which would mean that Saint Thomas did not in fact give an exhaustive list in the Summa Theologica. (This is of course possible; Saint Thomas was only human.)

The original version of the argument apparently comes from a book, but is summarized here. It is fairly long and uses a term which it doesn’t define, “meaningful conscious relationship”. There are several possible meanings according to ordinary English usage, each of which makes the argument break down in different places. If it’s not obvious where, let me know and I’ll explain in detail, but suffice it to say it is not explained what would be wrong with a meaningful subconscious relationship.

It is not explained because “meaningful conscious relationship” is useful in this argument precisely insofar as it means “belief”, in the sense of “propositional belief”. That is, the sort of belief you state in words. If you have in your heart a conviction you can’t articulate that the world actually means something and isn’t just a bad joke with no punchline, that is a belief in God but not a propositional belief in God, since you can’t articulate it.

So right away, this argument can be more briefly stated, “If God existed, he would make everyone believe in him because to not know that God exists would be unthinkably cruel.” (There are variants which assume that God’s #1 priority is having people believe in him, as if he were Apollo from the Star Trek episode Who Mourns for Adonis? and the existence of atheists proves that he is not omnipotent, but this is idiotic and I prefer to focus on the most favorable interpretation of someone’s position.)

The problem with arguments from how unthinkably awful something is consists in this: they are never thought through. How can you know that something is so awful that no good could possibly be greater than it, except by thinking out in detail how bad it really is? And here we come to the real crux of the problem, for it should be obvious by now that this is just another phrasing of what C.S. Lewis called, “the problem of pain.” (It was Saint Thomas’ first objection to the existence of God.) No one can think out exactly how terrible something is in detail, nor can they think out what sort of goods might be better and available only if the bad thing is permitted. No one can do this because there are too many details. What a person can know is how afraid he is of some particular suffering as he imagines it, and this is invariably what we are actually presented with. This is not thinking, this is being afraid.

What we cannot know because our experience and our minds are finite, God can know because he is not finite. There is no suffering so terrible that it is not theoretically possible that permitting this evil allows greater good to be brought about. And so we come to the real answer to the problem of pain: trust God. God is good, wise, and powerful, and though we cannot see how things are presently being worked to the good, our sight is so very limited there is no reason to expect that we could see it. Not seeing it is, therefore, not only not a contradiction to faith in God, but actually consonant with what we would expect if we are being realistic.

Incidentally, this last part is also why freedom can only be found in obedience to God. To be free, one must be able to choose. But to choose, you must be able to apprehend what it is you are choosing. Since we have no idea what the full consequences of our actions are, and the consequences of our actions are in fact the content of the action, we cannot actually choose anything on our own. Apart from God, we are simply slaves to our environment. We can hope, but our hopes are invariably disappointed. Only by joining our will to the will of one who can apprehend our actions because he knows the consequences of our actions, can we actually do what we intend. It is true we do not apprehend the action in its fullness, but because we will to do good, and God wills that we do good, by joining our will to his in obedience, we actually do accomplish what we intended, though we find out what the intention was after it happens while God knows it in his eternal now. This is the most that freedom can mean to a finite creature that lives in time.

The Lessons of Beetles

I once heard a story which I have dearly loved ever since. It was originally told as a joke, I believe, but I think it actually captures an important theological insight:

Some time in the seventeenth century a naturalist, funded by the crown, returned from one of his voyages and came to an audience before the Queen, who was the one principally responsible for his being funded. After he recounted some of his more interesting discoveries the Queen asked him, “And what have your investigations into the natural world taught you about the Creator?” The naturalist paused for a moment to consider, then replied, “That he has an inordinate fondness for beetles.”

Beetles currently comprise about 25% of known life-forms and 40% of all known insects, with new species of beetles being described all the time (currently there are around 400,000 described species of beetles). Clearly, God loves beetles. But humans who love beetles are considered quite weird: in movies they’re usually played by scrawny guys wearing glasses and bad haircuts and given dialog which proves in every line that they have neither social skills nor friends. And in fairness, God does stand alone; “from whom does God take counsel?” and all that. But the critical difference is, of course, why.

Human beings, being fallen creatures, love things primarily out of need. We are a dying species in a dying world, and we seek scraps of life wherever we can get them. This is almost a literal description of eating food, but it is more relevantly a description of the things we enjoy. We go on hikes because the beauty of trees and rocks and sunshine fills us up for a little while. We go on roller coasters because the rush of power reminds us for a moment that we are alive. We’ll even go to the ruins of ancient buildings made by long-dead hands because, remote as it is, we can feed on the crumbs of life which spilled over when someone was so filled with life that he built something only that it might exist. Art, when it is not purely commercial, is an act of generosity, and therefore life, because things are generous precisely to the degree that they live.

God stands apart because God is fully alive, and therefore needs nothing. He is not just fully alive, he is life itself, or as Saint Thomas Aquinas put it, the “subsistent act of to be”. (Subsistent in this case meaning to be in itself, rather than in another as a subject; the terms of scholastic philosophy are rather specialized.) God loves things in a purely generous way. He does not love anything because it is interesting; it is interesting because he loves it. When Saint John famously said, “God is love”, that might reasonably be rendered, “God is generosity”. Generosity, after all, comes from the same root as “generate”.

God loves all things into existence that he may give them more and bring them from potentiality into full actuality with him in his eternal actuality, which is why God does not disdain the smallest thing. We disdain the small things because our needs are so great; God needs nothing, and so he disdains nothing. God is interested in everything because his ability to give is so great.

God loves beetles, and he even loves the dung which the dung beetles feed on. There is no speck of dust on any cold and lonely planet so far from its sun that the sun just looks like another star in its sky which is not immediately in the presence of God. Most of our lives are made up of mundane moments no one would ever make a movie about; perhaps we can all take comfort, as we trudge through the details of everyday life, from the fact that God is inordinately fond of beetles. For it means that the smallness and dullness of our lives is only a defect in our sight.

We Are All Beasts of Burden

If you spend much time in certain parts of the Internet you’re likely to come across the hot topic of the Burden of Proof. By which I mean people like to pass it around like they’re playing hot potato. And if you’re lucky enough to be in the right part of that part of the Internet, you will occasionally see my friend Eve Keneinan put on her oven mitts, reach into the oven, and pull out a second hot potato and stuff it down the pants of someone who was trying to pass the first hot potato to her. Her wording varies, but usually it looks something like this:

You say that the burden of proof is on the person making the positive claim. That itself is a positive claim, so by your own principle you now have the burden of proof to prove that it’s true. Go ahead, I’ll wait.

There’s a very interesting reason why she does that, but before we can talk about it we have to talk about what the Burden of Proof is. So, what is it? There’s no one answer because people have borrowed it and come up with variations of it, but it’s primarily a concept in courts of law and (by imitation) in debating clubs. It exists to solve a specific and big problem which courts have: what do you do when there isn’t a clear answer?

And what courts do varies. In American courts there is, at least in theory, the presumption of innocence for the accused so that if the prosecution does not meet the evidential criteria set forth at the beginning of the trial, the accused gets to go home like he wants to. This is the prosecution having the burden of proof. However, courts are not always set up this way. Many courts have been set up under the assumption that if the police or crown or what-have-you have gone to the trouble of arresting a man for a crime, it’s for good reasons, and so the accused must prove that the competent authorities are in fact wrong. Should he fail to meet the evidentiary threshold of proving them in error, he can’t make the police not put him in prison. In this case, the defendant has the burden of proof. Even in the American legal system, once convicted a person is presumed guilty and the burden of proof shifts to him on appeal to prove that something very wrong has happened.

So what is the unifying theme in all of this? It’s this: the person who most wants something to happen must demonstrate to the people he wants to do it why they should do it.

Which results in numerous conversations that go something like this:

Atheist: If you want me to believe in God, you have to prove God to me.

Theist: I’m fine with you not believing in God, but you now have the burden of proof to show me why I should treat you like you’re mentally competent.

Atheist: You awful, terrible person. You must treat me like I’m a genius, for some reason. It would be rude not to. Didn’t Jesus tell you to treat all atheists like they’re perfect?

Theist: No, and I’m a generic theist anyway, so why are you lecturing me about Jesus?

Atheist: If I’m honest, because of daddy issues. Officially because all theists look alike to me.

Theist: Am I supposed to pretend it’s for the official reason?

Atheist: It would be offensive of you not to.

Theist: Why? You just explicitly contradicted yourself, and for some reason I’m supposed to not notice?

Atheist: I didn’t make the rules. Don’t shoot the messenger.

Theist: I’m pretty sure you just did make that rule up.

Atheist: OK, maybe I did, but if you take everything I say seriously, we’d have nothing to talk about. I mean, I don’t believe in free will. For Christ’s sake, I don’t even believe that thought is valid! I will say, with a straight face, that all of our thoughts are just post-hoc explanations for warring instincts. If any of us took what I say are my beliefs as my actual beliefs, I’d make the guys who think that they’re Napoleon look sane!

Sorry, I get carried away with dialogs sometimes. It’s just so refreshing to talk with a self-aware atheist for once! The problem is that it’s not a stable position—self-aware atheists tend to cease being atheists after a while. It’s like my friend Michael’s question about why there seem to be no atheists today who really take Nietzsche seriously. There are, but typically they then stop, because they’ve become Christians. Nietzsche was a unique case because while he could see the stark raving irrationality of the atheist position, he couldn’t escape being an atheist. So he ended up dancing naked in his apartment and telling his Jewish landlord that out of gratitude for the landlord’s kindness he would wipe out all of the anti-semites. (I forget whether he was going to personally shoot them all or wipe them out with a mere thought.)

Please pardon the stream-of-consciousness of this post, but, after all, the subtitle of this blog is “Quick Observations on a Variety of Subjects”. You can’t fault me for truth in advertising, at least.

Anyway, getting back to the point, there are a great many people who were raised in a particular sort of mostly secular way peculiar to a christian heritage which I will call social hedonism. It is probably a kind of practical utilitarianism, but its basic tenets are very familiar to anyone who grew up among non-christians with a christian heritage: fulfill your emotional needs, primarily with human relationships, and have fun, constrained by being at least halfway decent to the people around you, especially with regards to arguments and disputes. It’s a stage in societal decay, so it is not stable and there will not be many generations of people who think like this. (If you prefer the term societal transformation to societal decay, I won’t argue it with you.) It is almost accidentally atheistic, but the real point is that it is a definite set of beliefs which people are raised with and therefore never examine. Most people never ask themselves whether the things they were raised with are true unless they run into someone who asserts something contrary. That’s why religious belief is often on the wane in pluralistic societies: it gets challenged more than other beliefs, some more true and some less true, ever are. And now, we’re finally able to get to the question that this post started with.

So why does Eve ask people to prove where the burden of proof lies? There are several answers which are suggested not infrequently by the people she gets into this particular argument with, all of which are wildly off the mark. They’re also good examples of why knowing a person really helps in understanding what they say.

She’s an idiot.

In fact, she is extremely intelligent. That does not mean that she’s right about everything—intelligent people are very capable of making huge mistakes and in fact are more likely to stick to such mistakes far longer than a less intelligent person because their intelligence allows them to plug holes in their theory for a long time. What it means is that she’s not trying to avoid the burden of proof because she can’t handle it.

She doesn’t have any reasons for what she believes.

In fact she is exceedingly well read, and could off the top of her head articulate at least 5 proofs for God and explain them in great detail. She has probably read another half dozen or more, as well as a great many arguments against God. Also everything Nietzsche wrote. And sometimes it seems like half of everything else that was ever written in philosophy. She says that her personal library contains over 10,000 books, and I believe her. I also suspect her library card has gotten a fair amount of use too. She reads Attic Greek and has studied Chinese philosophy. She’s probably seen 99% of the argument anyone has made for or against God, ever.

She’s never considered whether the religion she inherited from her parents is true.

First, she is American Orthodox, which neither of her parents ever was. Second, she spent many years as an atheist and then as a platonist, only finally coming to Orthodoxy. Each step was only after about ten thousand times more consideration than the average internet atheist puts into anything at all.

OK, so why, then?

Because being a philosophy teacher is not just what she does for a living, it’s who she is. Real philosophers aren’t content to know things; they must understand them, as well. Philosophers ask what everything is, and this includes mundane and ordinary things. She isn’t trying to shirk anything; she wants people to ask themselves what the burden of proof is, and whether it’s relevant.

She wants this because the burden of proof is a practical thing for certain cases where uncertainty is not a viable answer and so a mistake is preferable to indecision. This isn’t all of life, or even most of it. If you’re going to hang a man, you need to come to a decision whether to hang him or let him go, then you have to move on. Most of life does not have this urgency coupled with this finality, and this is especially true of big questions like, “is there a God?” or “is there anything better in life than sex and drugs, then killing myself quickly when they stop being fun?”

Just because we inherited an answer from our parents or rebelled as children against the answer we inherited from our parents does not mean that we may not think about these things any more. Just because we were told that there is nothing more important than getting along with friends, family, and co-workers does not in fact mean that these things are our highest good or even that they will make us happy.  The thing which should be unquestionable is reality itself, not what we’ve been assuming all along.

The point—the real point—is that in the truly important things of life, no one has the burden of proof. We all have a duty of investigation. Every man that lives has a burden of proof for the things he believes and denies. When it comes to the truth, no one may be a rider. We must all be our own beasts of burden.

Appendix A. Authority

Nothing I said above is meant as a disparagement of authority. Life is short and it is impossible to live without trusting. The key is to trust where it is appropriate. Like how helping people and accepting help are good, but adults should still blow their own noses. And all trust of human beings should be done with the fallibility of all human beings never forgotten.

Positively Negative Claims

If you spend much time on the Internet around atheists, you will inevitably hear something like this:

The burden of proof is on the person making the positive claim.

The burden of proof in any conversation is actually on the person who wants to be in the conversation, but if we accept the above statement for the moment it brings up a very important point: negative claims often have positive implications.

Let me start with a trivial example: suppose I were to deny the claim that the prime numbers are infinite. It’s a negative claim so I have nothing to defend, right? Ah, but here’s the problem: the natural numbers have properties, and in particular, they are well-ordered. If there are finitely many prime numbers, then there is a biggest prime number. Thus if my negative claim is true, so is a positive claim. My negative claim is, therefore, convertible with a positive claim. If I merely said, “I’m not claiming anything, I just don’t believe the prime numbers are infinite”, I either believe that there is a largest prime number or I haven’t thought through what I’m saying. This latter option is what one often sees on the Internet. Basically, “I haven’t considered the claim and you can’t make me consider it”, though it’s never stated so baldly, for obvious reasons.
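(As an aside, Euclid’s classic argument shows just how much positive content that negative claim would have to carry. Here’s a small sketch in Python—my illustration, not part of the original argument—of why no finite list of primes can be complete: the product of the list, plus one, leaves remainder 1 when divided by any prime on the list, so its prime factors are all new primes.)

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime


def new_prime_outside(primes):
    """Given a finite list of primes, produce a prime not on the list."""
    product = 1
    for p in primes:
        product *= p
    candidate = product + 1  # leaves remainder 1 when divided by any p in primes
    return smallest_prime_factor(candidate)


# Whatever finite list of primes you deny could be extended, it can be:
known = [2, 3, 5, 7, 11, 13]
p = new_prime_outside(known)
assert p not in known
```

So the fellow who “merely lacks belief” that the primes are infinite is, whether he likes it or not, committed to the positive claim that this procedure must somehow fail for some list—a claim Euclid refuted some twenty-three centuries ago.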

“But Math is different!” someone might say. If we’re unlucky, they’ll tell us that Math is empirically verifiable. If instead our objector actually knows something about Math he’ll probably say that Math is hypothetical and thus true in all possible worlds or that the properties of the numbers flow out of their definitions whereas real things have properties quite apart from whatever definitions we want to give them. This makes no difference, because real things still have properties, which is all that’s needed. Consider the following, very simplified example:

Everyone agrees with me that the color red exists. I deny that anything exists but the color red.

(I know that in ordinary life you’d assume the fellow who said this was joking or insane, but for the sake of this post not being twenty pages long, please just play along. Examples which are uncontroversial because they were made up on the spot require far fewer disclaimers.)

This necessarily entails the claim that everything we perceive to exist is one or the other of the following:

  1. An illusion.
  2. Made up of the color red.

The negative claim that nothing but the color red exists will also be false unless the positive claim that cats, chairs, and sounds are all made up of the color red is also true. If it turned out that gravity was, for example, a force that attracts mass and not some shade of the color red, this negative claim would be false.

If our very hypothetical a-non-redist were to actually discuss his claim instead of just using it to shut down all discussion by saying, “where’s your evidence?” like he’s a child’s doll with a cord in his back and only one recording, he would have to defend the positive claim that gravity is actually a shade of the color red. (Or he could maintain that getting fat is an illusion which doesn’t really happen, but unless he’s willing to argue circularly, he needs to make the case that it is an illusion.)

What is true of this silly example is also true of examples I wish were hypothetical, such as Materialism. The claim that there is nothing beyond matter and the forces so far elaborated by physics (or forces substantially similar) entails the claim that everything we experience is either an illusion or material. This is neither a tautology nor a self-refuting claim, so it is one which must be proven, not merely asserted.

Which is why when it is a mere assertion, it is typically asserted angrily. As the (purportedly lawyers’) saying goes:

When the law is on your side, argue the law. When the facts are on your side, argue the facts. When neither is on your side, bang on the table.

Postscript

Occasionally one will hear a defense of the claim that free will is an illusion which invokes experiments using fMRI machines. Aside from these experiments not proving what they purport to prove even if they were conducted perfectly, consider that you can use an fMRI to prove that a dead salmon can read emotion in human facial expressions.

Poe’s Law Isn’t Quite True

There isn’t an official version of Poe’s Law, but basically it is:

A parody of an extremist will be indistinguishable from the real thing.

In a sense this is all but definitionally true, since parody is making fun of something by presenting a more extreme version of it. If something is already maximally extreme, there is nowhere to go with a parody, so a parody will consist of saying the same things.

But… this is not quite true. It is possible to distinguish between an extremist and a parody because the extremist has a different goal than the parodist does. The parodist seeks to make people laugh. The extremist is trying to live life, and no matter who you are, life is primarily mundane. If you pay attention to what an extremist says, you will notice that most of what they say is actually fairly boring.

This stems from something Chesterton observed: a madman seems normal to himself. Since he is normal, he doesn’t think about his extreme views differently from his normal views, because to him none of them are weird. It’s not that he’s unaware that most people disagree with his extreme views, but that the disagreement is what will be weird to him, not his own views. We think of his extreme views as some oddity tacked on to the rest of his normal views (such as eating when hungry, sleeping when tired, and washing his hands after using the bathroom). He thinks of his extreme views as fitting in with the rest, since there’s one reality and so everything that’s true about it necessarily fits together. The result is that when he speaks, much of what he says will be prosaic, because he has no reason to speak only about his extreme views. People like to talk about the world, not merely the occasional isolated belief about it.

We thus have a way to tell the difference between an extremist and a parody: the density of extremism in the expression. Or, to put it another way, how funny the thing is. The true extremist isn’t in on the joke, so he doesn’t take care to only talk about the funny stuff. The funny stuff may not even interest the extremist all that much. The parodist, by contrast, is in on the joke, so he takes care to avoid the boring things a real extremist would say.

To put it succinctly, brevity is the soul of wit and the parodist can put on the extremist’s clothes, wear a wig, and even use makeup to change the color of his skin, but he can never change his soul.

The Odd Rhetoric of Atheist=Lack of Belief

(A word of warning: this is primarily a rhetorical, rather than philosophical, post.) Apparently, in the late 1960s a prominent atheist by the name of Antony Flew redefined atheism from the belief that there is no God to the lack of belief in God. This was in light, I think, of what was becoming the primary atheist argument, largely popularized (if not invented) by Bertrand Russell:

You can’t make me believe in God!

That’s not the standard phrasing, which is usually some variant of this:

I don’t see any evidence for the existence of God.

I’m not sure if Bertrand Russell was simply being dim-witted or if he was a liar—he was at least a serial adulterer so honesty was by no means his strong point—but in any event the problem with the “I’m not convinced” argument is that it’s always open to the rejoinder:

But how on earth does that prove your contention that there is no God?

And indeed it doesn’t. By refusing to rationally engage the subject, the atheist of yesteryear simply took himself out of all discussion. A great many people are fine with this—they’d rather not be in any sort of philosophical discussions at all, really—but it sits very badly with pretentious intellectuals who want to be admired for understanding the universe through gross oversimplification. I mean, for their brilliance. Hence the redefinition of atheism to something which doesn’t need defense because it’s not a proposition about the world. Now it’s the default position which doesn’t need to be defended! Hurrah! Even better, now all children start off as atheists, so it’s not weird, it’s normal! Could it get any better!?

Well yes, it could, in the sense of actually better, since aside from the few minor points mentioned above, this puts the atheist in a terrible (rhetorical) position. Just for starters, it is not usually a compliment to someone’s understanding to call it childish. Proudly proclaiming that one knows no more about the world than a babe in its mother’s arms is… a dubious compliment to give oneself.

Then if you really think about it—and by “really” I mostly mean, “for more than two thirds of a second”—anything without a mind lacks a belief in God. Trees lack a belief in God, as do algae and literal piles of what the Germans call “hund scheisse”. This means that the post-Flew atheist is in the position of proudly proclaiming that he’s no smarter than a gallon of dead krill.

This also puts the atheist in the embarrassing position of the best argument in favor of atheism being a tire-iron to the head.  Cause enough brain damage and you will guarantee that any theist will instantly become an atheist. Which does raise the question, “is atheism actually a form of brain damage?” Lesions to the brain can cause loss of memory or the inability to learn certain things. If atheism can be reliably induced through brain damage, is all atheism just brain damage? I’ll leave that one to the lack-of-belief atheists to figure out. (Or not, since they might be too brain-damaged to do it.)

This also puts the atheist into the very weird position of saying:

Intelligent people might believe in God—even partial idiots might believe in God—but complete idiots are all atheists.

Well, if that’s the company you want to keep… Of course being the sort of atheist whose goal was to cheat so he wouldn’t have to defend his position, the lack-of-belief atheists will immediately claim something to the effect of:

Obviously atheism is a lack of belief in people who are capable of belief.

And will then probably do some metaphorical version of throwing the hund scheisse at you, claiming that you’re as stupid as the stuff he claims to be as stupid as. Pointing out people’s inconsistencies usually makes them angry at you.

Anyway, unless he’s claiming that his lack of belief has some sort of positive aspect, it cannot be distinguished from the lack of belief of a brick. His lack of belief has no properties. The brick’s lack of belief also has no properties. There is, therefore, nothing by which they can be distinguished. On the other hand, if he claims that his lack of belief has a positive aspect, he has thrown away his argument because now that positive aspect is a claim which must be defended.

Of course what’s going on is plain to anyone who isn’t trying to eat his cake and still have it afterwards too. He’s trying to imply that a rational mind—which most atheists being Materialists don’t believe in, but whatever—would have come to belief if there really was a God. This always remains at the level of insinuation, however, because it’s obviously false.

Consider: I lack a belief that the prime minister of France had a pet dog as a child. I’ve got a mind capable of believing that he did. Does my lack of belief in his pet dog mean anything at all with regard to his possible pet dog’s existence? Obviously not. I’ve never so much as looked for any evidence that he had a dog or didn’t. I don’t even know what the prime minister of France’s name is. My ignorance about his childhood pets doesn’t mean anything at all except that it would be a bad idea to ask me for information on the subject.

So it is with lack-of-belief atheists, of course. The main difference between asking them and a dead bucket of krill about God is that only one of the two is likely to answer with verbal hund scheisse. Other than that, well, I’ll leave it to them to make the positive argument that the way that belief in God doesn’t exist in them is somehow different from the way it doesn’t exist in a brick. I mean, other than lack of belief in God possibly indicating brick damage to their brain but not brain damage in the brick.

Update: Fixed a spelling error in Antony Flew’s first name and tightened up the language in the conclusion slightly. Also included the “brick damage in their brain” joke at a reader’s request.

By the way, since this definition of atheism results in all inanimate objects being atheists (so far as we know), it means that more than 99.9999999999% of atheists are incapable of rational thought. So the next time an atheist gives you guff, ask them for evidence that they are capable of rational thought and remind them that extraordinary claims require extraordinary evidence. Remember: them being capable of rational thought is a positive claim and the default is to assume that they’re as dumb as a bag of bricks unless they provide you with clear and convincing evidence to the contrary.

And if they’ve really ticked you off, point out that you don’t need to bother listening to evidence presented by something which is incapable of rational thought since being incapable of rational thought there can’t be any evidence which shows that they are capable of it. (Do bear in mind, though, that whatever defect of intellect or character makes this joke about someone appropriate will almost guaranteedly prevent them from getting it. Like Chesterton said about madmen in Orthodoxy, if they could get the joke they would be sane, and it wouldn’t apply to them.)

Dualists Usually Aren’t Quite Dual

Dualists are people who believe that reality as we experience it is fundamentally different from reality as it actually is, which we can’t know (that is, we can’t know reality as it actually is). In the west this was popular before Socrates and after Descartes. A familiar example of modern dualists are Materialists who believe that there is nothing besides matter and therefore there is no such thing as free will. When it comes to actually living, they basically just shrug their shoulders and make decisions anyway because we experience free will, even if in reality it’s a complete illusion. (They’re wrong about this, of course, but I’m not going to bother with any further disclaimers to that effect; I trust you, dear reader, to supply the rest yourself.)

And there’s a curious thing about dualists: they usually believe that there is some link between reality as it actually is and the world of perception which we (supposedly) can’t escape. Most of them are more 1.95ists than true dualists. What’s significant about this is that this link is a source of power: it’s possible to use this link to modify the underlying reality in ways that affect the world of perception.

To keep with the example of Materialists (which New Atheists almost universally are), they believe that things like love, loyalty, curiosity, wonder, awe, compassion and so on are all the epiphenomena (that is, an accidental manifestation, analogous to a symptom) of base instincts which we have because they resulted in our ancestors producing us. This is not to say that the epiphenomena are themselves necessarily of any value, but the instincts which produce them must have been of some evolutionary benefit. To try to interact with these epiphenomena may be unavoidable, but it is not very likely to accomplish much since none of them are real. By contrast, there does exist an ability to probe reality. It’s limited, difficult, and tentative; and its name is science. The point is not, of course, to improve the evolutionary benefit. Just as evolution does not “care” about the individual, the individual does not care about evolution. The point is to understand the mechanisms which evolution produced in order to change those mechanisms into ones which are more convenient. A good example of this is anti-depressant medications. (Or perhaps it would be if anti-depressants were more effective.)

Even those who suffer greatly from clinical depression are often hesitant to take anti-depressant medications because psychoactive drugs are terrifying. There is of course the possibility that they will go wrong in dangerous ways—there are anti-depressants whose common side-effects include frequent thoughts of suicide—but the biggest fear is that the anti-depressants would work but turn the person into somebody else. This is not really a concern for the materialist because who he is is a mere epiphenomenon, and its only value is in being happy. If the medication changes him, all that was lost was an illusion anyway. (I should note that when this is practical rather than theoretical, Materialists may well be hesitant because they know on some level that Materialism isn’t true.)

This is why Materialism goes so well with recreational drug use. Caution is of course still warranted for the heavy-duty drugs like cocaine and heroin which can destroy one’s life, but Materialism is very compatible with non-addictive drugs like marijuana, LSD, and endorphin stimulation through promiscuous sex. The main reason to avoid even these safer drugs is that they falsify one’s sense of the world; they are not merely wastes of time but counter-productive, because they distort one’s view of reality and so pull one further away from the true source of happiness. Of course a single, low-dosage usage of such drugs is not likely to have much of an effect (ignoring quality control issues), and I don’t mean to suggest that a person who’s had a single puff on a reefer stick is doomed and bereft of hope. But this is the effect of such drugs; they are chemical lies which take a person further away from sanctity and therefore from happiness.

The situation is radically different for a Materialist, however. First, they start off massively disconnected from reality, so within their worldview their connection to reality (more-or-less) can’t be diminished. Second, there is no real happiness which is possible, so there is nothing to lose by telling oneself pleasing lies. Happiness is itself just an accidental manifestation of underlying chemical processes in the brain, and all high-level explanations which we have for happiness are illusions, so messing with the chemistry of the brain to produce happiness is not only more reliable, it is in fact more real. Not that being more real is a virtue for the Materialist, but the argument—using drugs recreationally divorces the user from reality—will not even make sense to a Materialist.

This is, incidentally, why one runs into the oddity of the evangelical atheist. If God is dead then clearly nothing matters. Even if nothing matters in theory, however, human beings don’t cease to be human beings merely because they believe they are only flesh robots, and as Aristotle observed all men desire to be happy. The significant difference in effectiveness between trying to achieve happiness by dealing with the world according to its epiphenomena (duty, honor, morality, etc) and dealing with it as it is (scientific fun drugs) is so stark that they are moved by pity to try to spread the word to live according to the latter and not the former.

Science, Magic, and Technology

There is an interesting observation made by Arthur C. Clarke, known as his third law:

Any sufficiently advanced technology is indistinguishable from magic.

This has been applied many times in science fiction to produce some form of techno-mage, but what’s more interesting is that the origins of modern science were in magic, specifically in astrology and alchemy. The goals of science were the same as those of magic: to control the natural elements. If you really study the history, it’s not even clear how to distinguish modern science from renaissance magic; in many ways the only real dividing line is success. There is some truth to the idea that alchemists whose techniques worked got called chemists to distinguish them from the alchemists whose ideas didn’t work. This is by no means a complete picture, because there was also at the same time natural philosophy, i.e. the desire to learn how the natural world worked purely for the sake of knowledge.

Natural philosophy has existed since the Greeks—Aristotle did no little amount of it—but it especially flourished in the renaissance with the development of optics which allowed for the creation of microscopes and telescopes. Probably more than anything else this marked the shift towards what we think of as modern science. As Edward Feser argues, the hallmark of modern science is viewing nature as a hostile witness. The ancients and medievals looked at the empirical evidence which nature gave, but they tended to trust it. Modern science tends to assume that nature is a liar. Probably more than any other single cause, being able to look at nature on scales we could not before and seeing that it looked different resulted in this shift towards distrusting nature. Some people feel a sense of wonder when looking through a microscope, but many people feel a sense of betrayal.

Another significant historical event was when the makers of technology started using the knowledge of natural philosophy in order to make better technology. This may sound strange to modern ears, who are used to thinking of technology as applied science, but in fact technological advancements very rarely rely on any new information about how the world works which was gained by disinterested researchers who published their results for the sake of curiosity. Technology mostly advances by trial and error modifying existing technology, and especially by trial and error on materials and techniques. In fact, no small amount of science has consisted of investigating why technology actually works.

But sometimes technology really does follow fairly directly from basic scientific research. One of the great examples is radio waves, which were discovered because Maxwell’s theory of electromagnetism predicted that they existed. Another of the great examples of technology following from basic scientific research is the atomic bomb.

I suspect that these as well as other, lesser, examples, helped to solidify the identification between science and engineering. And I don’t want to overstate the distinction. In some cases the views of the natural world brought about by science have certainly helped engineers to direct their investigations into suitable materials and designs for the technology they were creating. But counterfactuals are very difficult to consider well, and it is by no means clear that the material properties which were discovered by direct investigation but also explained by scientific theories would not have been discovered at roughly the same time, or perhaps only a little later.

However that would have gone, the association between science and technology is presently a very strong one, and I think that this is why Dawkinsian atheists so often announce an almost religious devotion to science. I’ve seen it expressed like this (not an exact quote):

Science has given us cars and smartphones, so I’m going to side with science.

Anyone who actually knows anything about orthodox Christianity knows that there is no antipathy between science and religion. Though it is important to note that I mean this in the sense of there being no antipathy between natural philosophy and religion. In this sense, Christianity has been a great friend to science, providing no small amount of the faith that the universe operates according to laws (i.e. that every created being has a nature) and that these laws are intelligible to human reason. Moreover, the world having been created by God, it is interesting, since to learn about creation is to learn about the creator. It is no accident that plenty of scientists have been Catholic priests. The world is a profoundly interesting place to a Christian.

But there is a sense in which the Dawkinsian atheist is right, because he doesn’t really care about natural philosophy. What he cares about is technology, and when he talks about science he really means the scheme of conquering nature and bending it to our will. And this is something towards which Christianity is sometimes antagonistic. Not really to the practice, since technology is mostly a legitimate extension of our role as stewards of nature, but to the spirit. And it is antagonistic because this spirit is an idolatrous one.

The great difference between pagan worship and Christian worship is that Christian worship is an act of love, whereas pagan worship is a trade. Pagan deities gain something by being worshiped, and are willing to give benefits in exchange for it. This relationship is utterly obvious in both the Iliad and the Odyssey, but it is actually nowhere so obvious as when the Israelites worshiped the golden calf. For whatever reason this often seems to be taken to be a reversion to polytheism, where the golden calf is an alternative god to Yahweh. That is not what it is at all. If you read the text, after the Israelites gave up their gold and it was cast into the shape of a calf, they worshiped it and said:

Here is your God, O Israel, who brought you out of the land of Egypt.

The Israelites were not worshipping some new god, or some old god, but the same god who brought them out of Egypt. The problem was that they were worshiping him not as God, but as a god. That is, they were not entering into a covenant with him, but were trying to control him in order to get as much as they could out of him. Granted, as in all of paganism it was control through flattery, but at its root flattery has no regard for its object.

And this is the spirit which I think we can see in the people who say, “Science has given me the car and the iPhone, I will stick with Science.” They are pledging their allegiance to their god, because they hope it will continue to give them favors. And it is their intention to make sacrifices at its altars. This is where scientists become the (mostly unwitting) high priests of this religion; the masses do not ordinarily make sacrifices themselves, but give the sacrifices to the priests of the god to make sacrifice on their behalf. And so scientists are given money (i.e. funded) as an offering.

To be clear, this is not the primary reason science gets funded. Dawkinsian atheists (and other worshipers of science) tend to be less powerful (and less numerous) than they imagine themselves. Still, this is, I think, how they view the world, except without the appropriate terminology because they look down on all other pagans.

And I think that it is largely this, and not the silly battles with fundamentalists and other young-earth creationists that result in their perception of a war between science and religion. There were other historical reasons for the belief in a war between science and religion, but I am coming to suspect that they had their historical time and then waned, and Dawkinsian atheism is resurrecting the battle for other reasons. They are idolaters, and they know Christianity is not friendly to idolatry. And idolaters always fear what will happen if their god does not get what it wants.

Authoritative Authorities

In my previous post I mentioned that people will use science’s scheme of self-correction as a support of its authority, and that this is utterly confused. In fact, here’s what I said (yes, I’m quoting myself. Think of it as saving you the trouble of clicking on the link):

(It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.)

Today is that day.

Before getting into it, I need to qualify what I mean by an authority. There are multiple meanings to the word “authority”, and the most common one—someone such as a king, judge, etc. who should be obeyed and who enforces their will through force—isn’t relevant. I’m using the term “authority” as in the material logical fallacy, “appeal to authority”. Unfortunately, appeal to authority is often misunderstood because it would be much better named “appeal to a false authority”. A true authority, in the logical sense, is anyone or anything which can be relied upon to only say things which are true. If you actually have one of those, it is not a fallacy to appeal to their statements.

A logical authority may of course remain silent; its defining characteristic is that if it says something, you may rely on the truth of what it says. These are of course hard to come by in this world of sin and woe, and you will find absolutely none which are universally agreed upon. That doesn’t mean anything, since you will find absolutely nothing which is universally agreed upon.

To give some examples of real authorities, Catholics hold that the bible, sacred tradition, the magisterium, and the pope when speaking ex cathedra are all authorities. God has guaranteed us that they will not lead us astray. Muslims hold that the Quran is an authority.

Not everyone believes there exist any authorities at all, of course. Buddhists don’t and neither (ostensibly) do Modern philosophers. If you insist on distinguishing Modern philosophers from Postmodernists, then Postmodernists don’t believe there exist any authorities either. In general, anyone who holds that truth is completely inaccessible will not believe in any authorities.

So we come to Science, and the curious thing is that science explicitly disqualifies itself as an authority. Everything in science is officially a guess which has so far not been disproved by all attempts which have so far been made to disprove it. And yet many people want to treat science as an authority. In some cases this is sheer cognitive dissonance, where people pick what they say on the basis of which argument they’re having at the moment, but in other cases there is an interesting sort of reasoning which is employed.

Both forms tend to piggy-back the bottom 99% of science on the success of (parts of) physics, chemistry, and to a lesser extent some parts of biology. This especially goes together with conflating science and engineering.

The first and stronger sort of argument used is that science may always be subject to disproof, but that after a sufficient amount of testing, any such disproof will be at the margins and not in the main part. The primary example of this is the move from Newtonian mechanics to Relativity, where the two differ by less than our ability to measure at most energies and speeds we normally interact with.
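To put rough numbers behind the Newtonian-to-Relativity example (my own arithmetic, not part of the original argument), one can compute the relativistic correction factor directly and see how far below everyday measurement precision it falls:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition of the metre)

def lorentz_gamma(v: float) -> float:
    """Relativistic time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Newtonian mechanics implicitly assumes gamma == 1. At highway speed
# (~30 m/s) the correction is on the order of 5 parts in 10^15, far
# below the precision of any everyday measurement.
highway = lorentz_gamma(30.0)
print(f"gamma - 1 at 30 m/s: {highway - 1:.2e}")

# Only at an appreciable fraction of c does the difference become large.
half_c = lorentz_gamma(0.5 * C)
print(f"gamma at 0.5c: {half_c:.4f}")  # about 1.1547
```

This is exactly the sense in which the disproof of Newtonian mechanics lay "at the margins": at the speeds we normally interact with, the two theories differ by less than we can measure.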

The problem with this argument is that there is relatively little of science to which it actually applies. Physics is rare in that most physicists study a relatively small set of phenomena. There are fewer than two hundred types of atoms, and fewer than two dozen elementary particles, and apparently no more than three forces. So thousands of physicists all work on basically the same stuff. (It’s not literally the same stuff, of course; physicists carve out niches, but these are small niches, and often rely on the more common things in a way where they would be likely to detect errors.) This is simply not true of other fields in science. You can study polar bears all your life and never do anything which tells you about the mating habits of zebra fish. You can study glucose metabolism for five decades straight without even incidentally learning anything about how DNA replication is error-checked. You can spend ten lifetimes in psychology doing studies where you ask people to rate perceptions on a scale of 1 to 10 and never learn anything about anything at all.

The result is that in most fields outside of physics and (to a lesser extent) chemistry, theories are not being constantly tested and re-tested by most people’s work. In some of the fluffier fields like human nutrition and psychology—where controlled experiments are basically unethical and in some cases may not even be theoretically possible—they may not even be tested the first time.

The second and weaker argument is that science is the best that we have, and so we must treat it as an authority. This is very frequently simply outright wrong. In fields where performing controlled experiments is unethical, science consists of untested guesses where the people making the guesses had a strong financial and reputational incentive to make interesting guesses, as well as often a strong financial incentive to make guesses which justify government policies that the government would like to enact anyway. But that only counts if the financial incentive is provided by tobacco companies or weight-loss companies. Other financial incentives leave people morally pure because most scientists have them.

Actually, there is a third argument too, though it’s almost never stated explicitly. A lot of people work hard in science and believe that they’re doing good work, so it would be rude to doubt them. This is, basically, a form of weaponized politeness. The sad truth is that lots of scientists aren’t more honest than other people, lots of scientists aren’t smart, and lots of scientists are wasting their time. It’s mean to say that. Sometimes the truth hurts. It always sucks when honesty and politeness are enemies, but if a person prefers politeness to honesty, he’s a liar, and there’s nothing to be said to him except that he’s working to make the world a worse place and should stop.

Ultimately, of course, the real reason science is held to be an authority—as opposed to a potential source of truth which must be evaluated on a case-by-case basis because a scientific theory is only as good as the evidence behind it—is because this is a cultural thing. People need authorities in order to feel secure, and if they won’t believe in the right authorities they will believe in the wrong authorities.

The Fundamental Principle of Science

In the philosophy of science, there have been many attempts to define what it is that distinguishes science from other attempts to know the world. There’s an interesting section of The Trouble With Physics where Lee Smolin discusses Paul Feyerabend’s work, and summarizes it something like this (I don’t have time to find the exact quote):

It can’t be that science has a method, because witch doctors have a method. It can’t be that science uses math, because astrologists use math. So what is it that distinguishes science?

Neither, so far as I know, came up with an answer. There is a hint in Smolin’s book that there is no answer; that each advance in science comes about because there is a weirdo whose approach to science works to make the discovery of the moment, but doesn’t work generally. This would explain why so few scientists tend to be really productive over their entire lives; usually they have a few productive years—maybe a productive decade or so—and then tend to fade: they spend a few years discovering everything that their personal quirks are suited to, then when it is exhausted, return to the normal state of discovering nothing.

There is something common, however, that one will find in all of these quirks, if one looks back over history. This is especially true if you go back far enough to notice how much of science turned out to be wrong. (It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.) There is one principle that you will find consistent between everything which has ever been science, right or wrong. That principle is: assume anything necessary in order to publish.

To see why, we must consider the evolutionary pressure that applies to science. For whatever reason, people rarely take the theory of evolution seriously. They consider it as a scientific doctrine, or an organizing principle for archaeology, or a creation myth or any number of other things, but very rarely as an operating force in the world. Yet selective pressures abound and have their effects.

Occasionally people will ask the question about what influence on science the academic doctrine of publish-or-perish has, and they are right to ask this, but it is really just a subset of a larger selective pressure: science consists exclusively of what is published. If someone were to do extensive research in his basement and discover all the secrets of the cosmos, but never tell anyone, none of his knowledge would be a part of Science. In the same sense that Chesterton said that government is force, Science is publication.

The big problem with trying to uncover the secrets of the cosmos is that they are well covered. Coming to know how the universe works is very difficult. It’s often much easier if one makes simplifying assumptions which get rid of variables or eliminate the need for expensive experiments because cheap ones will suffice. The problem is that an assumption being convenient is not a justification for making that assumption. But since science consists of what is published, there is a huge selective pressure on people to make these convenient assumptions. This may or may not influence any particular scientist, but the scientists who are willing to make these sorts of unjustified simplifying assumptions will certainly be included in Science, while the scientists who take the principled position and refuse to make unjustified assumptions may well not be, because they didn’t have results to publish. In fields where real results are difficult to come by, it’s entirely possible that this could come to dominate what is published. And as the pitchmen say, but wait, there’s more!

People who are willing to make unjustified assumptions tend to have some personality traits more than others. Arrogance and a certain sort of defensiveness tends to work well with making assumptions one can’t justify, since those discourage requests for justifications. It also works synergistically with making quick judgments based on superficial criteria (like holding unrelated unpopular opinions), since that tends to insulate the unjustified assumer from having to confront contrary arguments and evidence. And here we come to the question of evolution, because new scientists will have to get along with these people, since the scientists who have published largely serve as the gate-keepers of who gets to join science. What sort of candidates will these people accept? Who will find scientists like this tolerable?

In subsequent generations, there will be the further question of who will find tolerable the people who found the makers of unjustified assumptions tolerable. And so it will go through subsequent generations, each new generation being a mix of all sorts, but the presence of the makers of unjustified assumptions and those whom they trained will act as a selective pressure even on those who don’t work with them directly, since they still must be able to work with these people as colleagues and in many cases submit journal articles to them for peer review, etc.

For any institution, if you want to know how it tends to go wrong, a good place to start is to ask what selective pressures are affecting it.

Patience is the Most Practical of the Virtues

Most of the moral virtues have a reputation for being impractical. Honesty may be the best policy, but it often makes for a great deal more work for the person telling the truth, at least in the short term. Courage is necessary to practice any other virtue, but courage also means having the courage to do things that will cause oneself a great deal of trouble. Diligence is almost the definition of impracticality; it is at least literally the opposite of laziness. And so it goes with most of the others. But patience stands apart from the others in being not only virtuous, but highly practical.

It has been said that insanity is doing the same thing and expecting different results, but the truth is that one never does precisely the same thing twice. The first time always does something, so the second time takes place in a different world. This is especially true when it comes to dealing with people, who usually remember the past. And this is where patience shows how practical it can be.

Anyone with any experience of the world knows that talk is cheap and when it comes to actions, a great many people will try anything once. Accordingly, when people state an intention, or even when they try to do something, the most likely outcome is that this is the last you have heard of them. It does not take a great deal of experience with the world to become accustomed to delaying responses. It is true that if you leave the dishes in the sink, they will be harder to clean the next day. It is also true that if you leave them on the table, the dog will probably clean them off within a few minutes so that you can stick them straight into the dish washer without having to scrape them first. The reason that procrastination is so common in this world is because it is very effective. Many, if not most, problems simply go away if you ignore them long enough.

This is why there was the story of the importunate widow in the bible. (Importunate comes from the same root as importune.) There was a judge who neither feared God nor respected man, and a widow who never ceased to demand justice from him against her enemy. For a long time he ignored her, but eventually he said to himself, “Though I neither fear God nor respect man, I must give this widow her rights or she will come slap me in the face.”

There is another practical aspect to patience, because patience must come from a source, and that source will carry a person through the execution of what they undertake. This is especially important in organizations with limited resources; to give someone what he asks is to commit resources which could be used elsewhere, even if just time. When people are willing to wait, it shows that their zeal has a reasonable chance of surviving the execution of their undertaking. Especially since all human undertakings in this fallen world will meet with adversity.

Patience is also involved in every attempt at learning. Whether it is practicing a skill or reading entire books to find out which are the good parts (if one isn’t reading Chesterton), learning will never be acquired without patience. This is perhaps especially evident at dance classes; a great many people quit because they don’t have the patience to look like a fool for a short time. It is true everywhere, though. Many people give up ice skating because they do not have the patience to fall a few hundred times. People give up learning to knit because they cannot stand to make a single misshapen scarf whose stitches are far too tight. Many a potential juggler has juggled nothing because they got tired of chasing after balls thrown wildly.

It has, for these reasons, always struck me as odd that patience is not a more commonly practiced virtue. It comes up almost any time one wants to accomplish anything, even vice. Pickpockets must wait until the right target comes along. How much more, then, will patience be required to practice virtue?

Bogeymen

The classic Bogeyman is a tale told by parents to frighten children into good behavior. There is another type of Bogeyman, however. It is a tale told by adults to themselves to explain why they’re already frightened.

We live in a fallen world, which means that we are separated from God. This is a terrible state for us to be in and more to the point we instinctively know that it is a terrible state for us to be in. In this state we are not happy and since we want to be happy we seek to know why we are not happy. Of course, if we came to the right answer we would go to church, receive the sacraments, and make progress on being happy. But not everyone does this, and the people who don’t still have a deep-seated emotional need to have an explanation for why they are unhappy. So they come up with one that isn’t true.

This explanation for why they are unhappy is what I call a Bogeyman. Bogeymen invariably have a few key traits. In particular they always:

  1. Are something which is reasonably powerful.
  2. Are something that is in theory beatable.
  3. Are something that is not in practice beatable.

If something is not powerful, it has no explanatory value for unhappiness. If it is not in theory something the unhappy person can overcome, then misery is assured and the Bogeyman leads to despair, which (most) people know to be wrong. If it is something that is beatable in practice, it will be beaten, the unhappiness will not go away, and so another Bogeyman will need to be found. Vaguely analogous to the Peter Principle, Bogeymen will be defeated until an undefeatable one that satisfies conditions 1 and 2 is found.

Bogeymen can be nearly anything that satisfies these three criteria. Groups of people are very popular, such as Republicans, Democrats, the Rich, drug users, popular entertainers, foreigners, racists, men, women, etc. Social conditions such as poverty, inequality, factory farming, industrial pollution, etc. have been not uncommonly used. Widely held social theories like capitalism, Marxism, nationalism, internationalism, Catholicism, etc. work well as bogeymen too.

This is not to say that a person will have no legitimate complaints about the real thing they are using as a Bogeyman. They almost certainly will, since real complaints work much better than imaginary complaints to create the skeleton of a scary figure. Rare is the imagination so powerful it can keep a menacing figure in view without any recourse to reality. But though the complaints are real, they will never be considered in any sort of balance. A person who focuses their fears onto a Bogeyman is inherently a utopian—someone who believes that perfection can be achieved in this life—and utopians can never consider imperfections in the world as permanent compromises. Utopians don’t mind temporary compromises, of course—hence the guillotine and the gulag—but a permanent compromise because the world will never be perfect? That is unthinkable. If that were the case, happiness would be impossible.

It’s a problem of looking for happiness in the wrong place, of course. This transitory world is not the sort of place in which you can find happiness. But if a man gives up looking for God, the wrong places to find happiness are all that are left.

Theoretical Empiricists

If you go to the right places on the Internet it is fairly easy to come across Dawkinsian atheists who claim to be empiricists. They are not empiricists, of course—most haven’t done a single basic experiment themselves, let alone all of the basic experiments—but they will certainly claim to be, if not by name. When this is pointed out to them, they will take refuge in what might be called a collective empiricism: as long as someone has empirically verified it, and it is theoretically possible to empirically verify it again, that’s OK.

Being a retreat this isn’t well thought out, of course. Why should the bare theoretical possibility of an experiment being run again make human testimony about a previous experiment more believable? Still, that’s really a minor point; this new version doesn’t do what they want it to, anyway. They are hoping to divide knowledge up into reliable knowledge and everything else. It doesn’t do anything like that; their “knowledge” is just as unreliable as every other form of knowledge they denigrate, except for the kinds it’s less reliable than. What it does do is codify the reductionism which they practice. They want life to be simple, and so they rule out, as a simple matter of choice, types of knowledge which they don’t want to deal with. In practice, those are most types of knowledge. Ironically, given the high respect in which most such people hold mathematical physics, this includes mathematics.

What they are really trying to limit knowledge to is the substitute for knowledge proposed by Kant: basically, come up with a theory and then test it against experience. According to this concept of knowledge, nothing is actually known. Things are guessed at, and the best you can do is feel reasonably confident in your guesses when applying them to the parts of life in which you have tested them before.

The curious thing about this is that not even Kant tried to limit knowledge to this; he only limited knowledge of real things to this. That is, of things which exist. He fully recognized the universal validity of logic and reason; all he doubted was noesis, that is, perception of reality. Being the end of Modern Philosophy, he doubted that the senses could be trusted at all, and so the mind could not know anything which existed outside of it. But things which do not exist, such as hypothetical statements like the theorems of mathematics, he still thought fully knowable.

The Dawkinsian reductionists have eliminated this as well. They take somewhat seriously C.S. Lewis’s argument that if reason is the product of blind material processes, there can be no reason to trust it. (They probably didn’t actually hear his version of it; the problem is fairly obvious with only a moment’s thought. Unfortunately Lewis’s conclusion was that Dawkinsian evolution is self-refuting, which is not true. Dawkinsian evolution may be true, and if it is, it is intellectual suicide, but it is not self-contradictory.)

Given this semi-radical skepticism, the modern materialist is actually abjuring all knowledge. He doesn’t deny it, he merely disavows it. He’s uninterested. He will proceed with what amounts to a betting scheme, taking the Kantian approach as simply his preferred method for dealing with what may well be an irrational universe. Trouble emerges because—having no use for it—he redefines the word “reason” to mean this sort of bet, and “reasonable” to mean betting in the same manner as him. Thus anyone who makes any sort of real knowledge claim is “irrational”. The most common knowledge claim to excite this sort of opprobrium is to claim to know that God exists, probably mainly for practical reasons—Dawkinsian Atheists tend to strongly dislike traditional morality, which they associate with Christianity for mostly historical reasons—but also because God is known entirely through means which the Dawkinsian Atheist rejects (reason and testimony).

This phenomenon also gives rise to some very strange results when applied to mathematics, which the Dawkinsian Atheist must accept, despite his obvious rejection of it, because Physics (the field of study) is dependent upon mathematics. The compromise which this sort of extreme skeptic tends to employ is absurd in the abstract, but fits with his adopted approach to life: he tests mathematical theorems experimentally. I recently saw a rather striking example of this when just such an atheist offered to experimentally prove that 2 + 2 = 4 using apples. It's really beside the point that such a demonstration would fail using sub-atomic particles if two are electrons and two are positrons; it's actually most interesting that he doesn't understand that 2 + 2 = 4 by definition. There are actually several definitions of the Natural Numbers, but the most common uses the Peano axioms. Briefly: suppose there's something, call it one. Suppose there's a next number after it, call that two. Suppose there's a next number after that, call it three. And so on. Addition is defined by succession, so 2 + 2 is 4 because the number after the number after two is called four. No other possibility is conceivable, because this is simply the definition of the successor of the successor of 2. But this is not really a thinkable thought for the Dawkinsian atheist, so he's stuck offering to do demonstrations with apples.
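The point that 2 + 2 = 4 is a matter of definition, not experiment, can be made concrete. Here is a minimal sketch in Python of the Peano construction just described: a zero, a successor operation, and addition defined by recursion on successors. (The names `Nat`, `succ`, and `add` are my own illustration for this post, not anything from any standard library.)

```python
class Nat:
    """A natural number represented as an iterated successor of zero."""
    def __init__(self, pred=None):
        # pred is the number this one succeeds; None marks zero itself
        self.pred = pred

ZERO = Nat()

def succ(n):
    """The next number after n."""
    return Nat(n)

def add(m, n):
    """Addition defined by the Peano recursion:
       m + 0 = m;  m + succ(n) = succ(m + n)."""
    if n.pred is None:   # n is zero
        return m
    return succ(add(m, n.pred))

def to_int(n):
    """Count the successors, purely so we can print a familiar numeral."""
    count = 0
    while n.pred is not None:
        count += 1
        n = n.pred
    return count

two = succ(succ(ZERO))          # "the number after the number after zero"
four = add(two, two)            # the successor of the successor of two
print(to_int(four))             # prints 4
```

No apples are consulted anywhere: `add(two, two)` unwinds to the successor of the successor of two, which is simply what "four" names. That is the whole of the classical point.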

Actually, he’s not quite stuck doing that; he can also ridicule people for doubting. “If you don’t think that 2 + 2 = 4, the IRS would like to talk with you,” he says, and smirks in derision. He’s not interested in the definitions of the numbers, or of what addition actually is; all he cares about is practical results, because he has disavowed knowledge in favor of a betting scheme. And he can’t know that he’s betting his soul, because he doesn’t believe he has one. Pray for them.

Incidentally, these people correspond fairly well with the men Aristotle described as wanting to be horses. Each man, as a rational creature, has a duty to the truth: to seek it out and to know it as far as it has been given to him to do. These men find that unpleasant; they wish to do only the simpler tasks of caring for the body. They want the wail of Ecclesiastes to be true: they want man to be only the cleverest of the beasts that crawl the earth. Pray for them.

Writing the Story Is Left as an Exercise for the Reader

I came across one of the stupid things that replaced email forwards but doesn’t have a name yet which (erroneously) claimed Hemingway once bet people he could write a short story in six words, and wowed everyone with how moving and profound yadda yadda:

For sale, baby shoes, never worn.

We’re supposed to imagine the tragedy and the sorrow of the parents who lost their infant child, etc. And yes, there could indeed be a very sad story of loss and grief behind these words. Or there could be a story of someone with a healthy, happy baby who was given more baby shoes than they could actually use by friends and relatives. It could even be the story of someone who had baby shoes and realized that the things are utterly pointless, since babies can’t walk; once the child actually arrived and the practicality of real parenting set in, they set the stupid things aside. Maybe it’s a science fiction story in which an alien used a replicator to duplicate baby shoes. Maybe it’s a spy-thriller in which the baby shoes were used to hide secrets. It could be anything at all.

Now, I have heard people defend this sort of thing on the grounds that this provides a wide scope for the imagination. It does, but only because it doesn’t provide anything else. It provides as wide a scope for the imagination as a blank page, because it basically is a blank page. If the reader is willing to do all the work, this sort of thing is super easy. Here’s one:

“Bottoms up,” he said, and died.

That can be about someone in a suicide pact, or perhaps a man being executed who got a last drink before execution. Here’s one in five words, though I’ll grant it’s not original:

And then there were none.

That one is about a serial killer on an isolated island. Here’s one in four words:

I knew love once.

A person who fell in love while on military deployment but could never find her again when he went back. Three words:

Call me Ahab.

Modern retelling of Moby Dick from Ahab’s perspective, because I saw a poster for Wicked in the train station today. Or if that’s too derivative, how about this:

I died, once.

Story told by a ghost or sci-fi where medicine can bring the dead back to life? You decide! Two words:

Me: Tarzan.

A new Tarzan reboot by someone who can’t afford the rights to a superhero. One word:

Shit!

You write the rest, I’m tired of this.

There are Four Vocations

In Star Trek: The Next Generation there is an episode where Captain Picard has been captured and is being tortured. In order to break his will, he is shown four bright lights, told that there are five lights, and severely punished every time he says the correct number. American culture sometimes feels like that with vocations, though instead of insisting that there are five it insists that there is only one: marriage.

One is an especially unfortunate number of vocations for our culture to have settled on, since a single choice is no choice at all. Marriage thus becomes prescriptive, and considering the very idea of vocation comes to seem strange, if not outright mentally ill. Shoehorning all people into marriage also damages marriage, which might fairly be said to be splitting at the seams. The high rates of divorce generally, and of annulments within catholic culture, testify to a great many people whose vocation was not marriage—or if it was, who thoroughly misunderstood what marriage was—going through the motions of marriage mostly because they thought that they were supposed to in order to be human. (Fornication carries so few consequences in American culture that going through the motions of marriage cannot be for gratification of sexual desires.)

There are four vocations—marriage, committed single, consecrated religious, and holy orders (priesthood/diaconate)—because people are not all the same. In America we tend to look at this backwards: the person first, then the vocation to fit the person, just like you pick a job based on whether you like horses more than bridges, or cooking food more than both, etc. A more accurate way to look at it is that the vocation is part of the person, and therefore their personality is suited to their vocation. (It is more accurate because the “being” in “human being” is really a verb, like “running” or “swimming”, though really the only distinction between verbs and nouns is that verbs are relatively short-lived actions and nouns are very long lived actions. God’s name is not “The Thing” but “I am”, and so far as we exist we are all made in the image of God.)

Trying to cram people into the wrong vocation will necessarily hurt the people thus crammed. The proper definition of sin is “privation of form”, or slightly more intelligibly to those not familiar with scholastic philosophy, “diminishment of being”. We can all see what is meant if you consider the loss to a pianist of having his hands mangled in an accident. He simply ceases to be a pianist at all. He might be or become many other good things, of course. He might become a great piano teacher. He might fall back on the astrophysics degree he got in college and do excellent work examining the stars. He might concentrate on managing investments for family and friends which he had been doing in his spare time. He may become a much better man than ever he would have been as a pianist, but none the less he who once was a pianist is no longer; in that regard he is now less than he once was. His being has been diminished.

By analogy, sin diminishes a person too. Human beings were given language to enable us to tell the truth. To use language to tell a lie makes us less, because now we cannot be trusted and all our words convey less truth than they used to. So it goes with all sin; it is to make ourselves less than the fullness of what we are supposed to be. That God saves us means that this diminishment may not always be permanent, and it may not always be catastrophic, certainly it will not destroy us if we turn away from it and embrace God’s gift of salvation.

I think this analysis makes it obvious why a person being crammed into the wrong vocation diminishes them; it doesn’t destroy them and it certainly will provide many opportunities to practice the virtue of patience, but it will result in their life not being all that it could have been. But we must be clear that this does not mean that a person so crammed will grow new abilities and personality traits; they will have to make do as best they can with a personality which was adapted to something else. Swimming might here be a good analogy; the human body can swim, and some people can swim much better than others, but the human body is not made for swimming, and the fastest of us are slow compared to very average fish. A man who should have been a celibate priest might make a good husband and father, but that doesn’t mean he wouldn’t have made a better priest. The reverse is of course true, too, but that mistake is well understood within our current culture.

To some degree I think that part of the problem in understanding this is the degree to which catholics have acculturated to the predominantly protestant culture of America. In the aftermath of JFK, catholics went from being distrusted and hated to being accepted, and this caused many of the problems which are always associated with comfort. Chief among those problems is laziness. Comfortable people become very reluctant to hold onto difficult truths, and that people are not all the same is a difficult truth to hold onto.

Now, it is easy to be misled because “tolerance” is a common catchword, and we’re asked to be tolerant of seemingly everything. But these are all superficial differences which are accepted, and they are accepted precisely because they are superficial. Pretending that everyone is basically the same is much easier than loving people who are different, which is why one of the immediate actions of “tolerance” is to angrily call all discussion of difference bigotry. It may possibly be that much of the “tolerance” we see is the overcompensation of self-loathing bigots, but much of it is that this “toleration” consists primarily of pretending that there are no differences. To discuss real differences is to shatter this illusion, and since they have no metaphysical system in which (real) difference is not defect, this has no other interpretation to them.

There is another problem which our culture has that tends to deny any of the celibate vocations, and (unsurprisingly) it is derived from an essentially secular origin. The basic principle is that a person must have a committed sexual partner in order to be a full human being. It’s a crazy idea, and one might be tempted to think that it comes from watching far too many romantic comedies, but in fact I think it derives from the belief in an immanent soul made up of the accumulation of a person’s experiences, rather than a transcendent soul which pre-exists but changes with experiences. The former is the only real metaphysical possibility absent an intelligent creator, hence its prevalence in our largely secular culture. An immanent soul has no inherent value, however. It can only be valued if known, and it can only be known through a very great investment of time. That investment will only be made if a person is loved, but a person can’t be loved before they are known, so something must attract another person prior to knowledge. The only two candidates are desperation, which would take anybody, and sexual attraction, which is at least a little selective. Desperation is of no value because it is entirely focused on the self, not the other, which is why it will take anyone. Hence sexual attraction is the only option for a worthwhile life. And hence celibate vocations are a form of suicide, and why parents might discourage their children from throwing their lives away in that manner. In the best case, these might be noble sacrifices, like being an organ donor or the soldier who jumps on a grenade to save his fellows, but still, one hopes that the noble soul who sacrifices himself will be someone else’s child.

Accordingly, I think that the first thing we must do to help everyone get into their proper vocations is to attack the idea of homogeneity. Sex is nice and all, but massively overvalued, so to get it into its right proportion in human life, I think that we need to undermine the idea that sex is something which makes a human life whole. It’s a powerful and very animating idea, which is why I think relatively little headway will be made while it is still dominant. It’s not a very sensible idea if stated directly, however, so I think it is vulnerable to mockery. And I think that there is merit to Saint Thomas More’s saying that the devil, being a proud spirit, cannot endure to be mocked.

I think that this is also very important for recovering an authentic understanding of the vocation of marriage. Far too many people go into marriage looking to get something out of it, rather than looking at it as a way to pour themselves out like a libation. Marriage and raising children have their enjoyable parts, to be sure, but the idea that marriage is some sort of odd hybrid between entertainment and psychotherapy is very destructive to human happiness. Children are wonderful to watch and play with, but it is proper that they will take a great deal from one that they will never give back, and they will in their youthful ignorance cause a great deal of suffering which will form a heavy cross to carry. Pretending that marriage is something which will help to carry crosses, rather than something which will fashion them and load them onto one’s back, is to set people up for disappointment and misery. It is true that husband and wife will help each other, but they will also be one of the biggest sources of the other one’s problems. This does not mean that husband and wife will inevitably quarrel—though so far I’ve never heard of husband and wife who haven’t—but that the two are signing up to do something very difficult together, and the magnitude of the problems is always proportional to the magnitude of the undertaking from which they arise. Marriage is a thing which should be viewed like enlisting in the army during a war, not like booking a Caribbean vacation.

Though it should be noted that most soldiers survive going to war, whereas marriage has a 50% mortality rate.

The Terrible Effects of Sola Fide

I have called protestantism proto-atheism largely because the denial of reason which you find with people like Martin Luther (who famously said that reason was the devil’s whore) and John Calvin (whose doctrine of the total depravity of man makes reason at best unattainable for men) sets it on that course. However, I have recently realized that there is another way in which protestantism is proto-atheism, embedded in what the doctrine of Sola Fide often becomes. (I would like to emphasize that I am talking about protestantism and not protestants, many of whom share little in common with Martin Luther and have a healthy respect for reason.)

According to Peter Kreeft, there is a way in which the doctrine of Sola Fide is in fact compatible with orthodox Christianity (it’s towards the end of that video if you’re looking for it). I have grave doubts that this expansive and non-obvious meaning of Sola Fide was what Martin Luther meant, but since he’s dead that’s purely a question between him and God. What is relevant to us, however, is that a great many evangelicals and fundamentalists (and some other protestants) are quite sure that this orthodox interpretation is wrong. They hold that all that is needed to get into heaven is for a person to believe that Jesus is the son of God and died for their sins. Often this takes the form of “accepting Jesus into your life” by saying a prayer where you formally accept Jesus as your personal lord and savior. Often (but not always) it involves some feeling of “knowing that you are saved”. To distinguish this from possible other versions of Faith Alone, I will refer to this version as Belief Alone.

One of the problems which immediately crops up with salvation by belief alone—if you think about it for more than a few seconds—is that after people die and come to meet God face-to-face on the day of judgment, everyone will believe. (As the saying goes, Satan believes.) It is, therefore, not possible that there is anything operative in belief that contributes to or makes up part of the substance of salvation. Worse, since most evangelicals and fundamentalists seem to conceive of heaven and hell as two alternative rooms, one with a party and the other with far too many sharp things in the hands of unpleasant creatures with odd senses of humor, and in no way think of salvation as any sort of improvement from an imperfect state to a perfect state, belief during life can only be a criterion, like how having all six colors of pie piece allows you to attempt to win the game in Trivial Pursuit. It is a purely arbitrary rule.

The only possible purpose of this arbitrary rule—if entry into the infinite party room being only for people wearing the wristband of belief has any purpose at all—is to function as a test of obedience. But, the question must be asked: obedience to whom?

Now this is where the rejection of reason (more formally, fideism) comes up again. If evangelicals and fundamentalists (etc) believed in natural theology, i.e. reason’s ability to approach God, this test of obedience would be very harsh, but it would at least be a test of obedience to God, because a natural man unaided by divine revelation through miracle can still learn of God through reason and thus such belief could—by a great stretch of the imagination—be some sort of test of the individual’s worth. How it can be a meaningful concept for a fallen creature to merit salvation is still something that would need to be explained, but there would at least be some hope for how salvation through belief alone would not be completely self-contradictory (not to mention completely evil).

But when you add in fideism, it is not possible for one to use reason to arrive at the truth. The ticket into the party room thus consists of belief in something one has no reason to believe. Whatever the person proposing this idea may say about asking you to have faith in God, what he is really doing is asking you to have faith in him. Moreover, because—according to him—you cannot know who God is, it cannot be God in whom you believe. You cannot believe in what you cannot know. The end result is that this is nothing other than a demand that you obey the person who is making the demand of you.

As I understand it in the typical case the one making the demand is a person’s parent, but since the demand did not originate with them—they are just passing it on—this really ends up being a demand for absolute fealty to a person’s society. This leads to atheism in two ways.

The first is that this demand is so unreasonable that a reasonable person will utterly reject it. This is why so many of the people who come to the Catholic church from fundamentalism or evangelicalism do so by way of atheism. It is also why modern atheists so often seem like fundamentalists who have simply switched their holy book from the bible which they interpret in light of popular books about it to their high school biology textbook which they interpret in light of popular books about it. (I mean that last part metaphorically, not literally.)

The other way that salvation by belief alone leads to atheism is that it is a form of idolatry. Idolatry is worshiping a created thing in place of the creator, and in this case the created thing is the society. Idolatry is a matter of fealty, i.e. priority, but not necessarily of belief, so this is not atheism in name, even if it often looks like it in practice. What leads it to become avowed atheism is the existence of another society which the person wishes to be a part of. Sometimes it’s another sub-culture. Often it’s the larger culture of the society in which the fundamentalist/evangelical lives. Whatever it is, this sets up a conflict, and if the other culture wins, a strong rejection of the idol becomes necessary, because it is a jealous idol. Since its official belief in God is part of that idol, it will be rejected when the idol is. The attitude of total fealty to society may not, however, and I believe that this is where we get most of our evangelical atheists from. They have transferred their complete devotion from fundamentalist/evangelical society onto whatever new society they identify with, and will attack believing in God with the same ferocity that they used to attack not believing in God. And their theological knowledge will not have improved from the transition.

The Argument From Design

Until 150 years ago, or so, the argument for God’s existence from design was probably one of the more commonly understood arguments of natural theology. (Natural theology consists of the things we can say about God by the light of our own reason and nature, in contrast to revealed theology, which are the things God has told us about himself.) After the rise of NeoDarwinism (by which term I refer to the Dawkinsian creation myth and not the scientific theory of evolution), the argument from design is still intuitively understood by many people, but it has generally become misunderstood formally. If you were to ask an atheist on the level of Richard Dawkins—who is among the best of the worst atheists—what the argument from design was, and you lucked into a calm and concise one, you’d get something like this:

If you look at the natural world, many things in it are very simple, like rocks, but many of the things in it are far more complex than can reasonably be supposed to be assembled by blind chance. Things like plants and especially animals are too complex to be an accident, and so they must have been created by an intelligence more complex than they are. Since we, too, are part of the natural world, there must be something more intelligent than us which made us, and that thing is God.

This is not at all the classical argument from design, such as you can find in the Summa Theologica, though I will grant you that you can find something like it from young-earth creationists. It is, fundamentally, a god of the gaps argument. God of the gaps arguments are more repugnant to orthodox Christians than to atheists because they are an insult to God: they claim to show that God exists because the natural world doesn’t work and needs to be constantly fixed. This is a relatively new idea; it really only makes sense in the context of modern mathematical physics. Before that attempt to fit the workings of the universe into the human head, no one ever supposed that the universe didn’t actually work.

(At least next to nobody. There is probably some ancient Greek philosopher who argued that, because for pretty much any argument there is an ancient Greek philosopher who argued it. And technically (original) Buddhism is based on the idea that the universe doesn’t work, but at a higher and qualitatively different level than what I’m talking about here. Also, Buddhism is fundamentally atheistic. Since it holds that everything is an illusion, it holds that its gods are not real, and it certainly denies any uncreated creator. It’s much more akin to the zero-energy hypothesis.)

The classical argument from design is not based on probabilities and certainly does not depend on the idea that natural things do not fit together. It in fact contradicts the idea that the unfolding of nature couldn’t have been according to a natural process precisely because it argues from the fact that natural processes actually work. A fundamentally broken world would undermine the classical argument from design. So, without further introduction, here is a version of the classic argument from design (my words):

If you look at the world, it exists imperfectly according to a rational hierarchy of being. Things at lower levels work together to the advantage of better things, and these better things in turn order and improve the lower things. Quarks work together to form protons and neutrons. Protons, neutrons and electrons work together to form atoms. Atoms work together to form molecules. Molecules work together to form bodies. These bodies include plants, which turn sunlight into food, and animals, which eat the food the plants make. Some animals keep the other animals from over-eating the plant food. Other animals spread the seeds of the plants, as well as nutrients which the plants need. There are also less clever and more clever animals, with a rational animal at the apex, who directs the lower animals as well as the plants toward a harmonious function.  In all of this there is a rational order where the parts fit into each other and work together to create a good whole. This rational design reflects and points to a rational mind which orders the natural world according to the good. Any such rational mind which is itself a part of nature, such as a super-intelligent space alien or a little-g god or extra-cosmic aliens in a universe that created our big bang, or whatever, would themselves be a higher step in this rational order, since they are a part of it by virtue of shared time and causality. There must, therefore, be some rational mind which is not part of it, which stands utterly apart from it, like how Shakespeare stands apart from Hamlet or the characters in The Mousetrap (the play within the play). This rational mind which is utterly apart from all of the rational creation with a shared causality is what all men call God.

(Where the natural world varies from the rational order, this constitutes a rebellion of the rationally ordered creature against its creator, possibly very indirectly since things are supposed to receive their rational ordering according to the other things within their shared hierarchy. Thus we clearly live within a fallen world, but that means we live within a rationally ordered world that has partially broken, not within an irrational world that doesn’t work at all.)

To see the difference between this world and an irrational world, consider how any of the components could have gone wrong. Suppose up quarks weren’t compatible with down quarks: we’d have neither protons nor neutrons, and consequently neither atoms nor molecules nor bodies nor plants nor animals. All there would be is a vast sea of sub-atomic particles without any interesting organization. (And please bear in mind I’m only saying that world would be irrational; I’m not saying anything about how likely or unlikely it is—its probability is utterly irrelevant.) Or suppose electrons couldn’t orbit an atomic nucleus: the result would be an ever-dispersing gas of particles fleeing from each other since nothing held them together. And again, I don’t care whether that possible world is more or less likely than ours, I only care that it would be far less interesting, because that is just another way of saying it would not be rationally ordered. What interests us is intelligible order—no one is fascinated by noise.

The same can be seen if we look at evolution. Dawkinsian atheists love to talk about how order emerges from chaos because of simple rules, in this case the simple rule being natural selection. This is fair enough so far as it goes, but it doesn’t go very far because mere order is not the same thing as rational order. In science fiction one encounters stories of nanobot catastrophes where self-replicating nanobots which can use their environment for raw materials get out of control and turn the entire world into unimaginably many copies of themselves. This is called the grey goo scenario. But we in fact already have self-replicating nanobots which can use their environment for raw materials. They’re called bacteria. So why isn’t there a bacteria-goo scenario, where some bacteria hit on the winning combination of genes/proteins and turned everything into a copy of itself? A single species that has smashed all others is very compatible with survival of the fittest. (Yes, I know that survival of the fittest is only an approximation of the biological theory of evolution, but the more proper theory doesn’t differ in this regard.) Perhaps domination might be bad for the bacteria, but that downside could only emerge once they’ve wiped everything else out, at which point there would be no other species left to balance things out again. This also applies to other layers in the hierarchy of being. Why aren’t all plants poisonous? It would not be hard for a plant to have eliminated the first herbivore, and to have made a herbivore-free world. If wolves ate every last prey animal, they would starve to death, but only after they ate the last one. Then there would be neither predator nor prey and just an animal-free world left. (And it’s no answer to say that the changes happen so gradually that balance is always maintained, because we know that evolution often happens very quickly. The gradual accumulation of changes is more a just-so story for children than it is a description of how evolution has typically worked, and certainly is not a description of how it can work.)

Now once again, I’m not talking about what is more probable, but what is more rationally ordered. (That is why it’s irrelevant that one species could balance out against another’s recently gained advantage; that’s only a question of probability.) A world in which a super-bacteria ate everything else and so was the only thing left (and then died off if it wasn’t an autotroph) would be very orderly, but its extreme homogeneity would not be a rational hierarchy. It would be just as complex as the world we live in, since it would have just as many moving pieces, but it would be far less interesting. And as the way that every foreign animal introduced into Florida seems to kill off the native species shows, evolution does not of its nature tend to produce a more interesting world. It won’t for the same reason that the history of warfare shows weapons all converging on the same basic designs: optimizing for one thing rarely has more than one solution.

Now, the reason why probability does eventually enter the discussion is that for any configuration of matter, it is always possible that it got that way by sheer accident (“randomly”), and so a world organized according to a rational hierarchy of being must, of necessity, look like a possible accidental outcome of blind matter. (This is less true if one recognizes the existence of free will, but since people wish to entertain the notion that free will might actually be an illusion, the similarity is unavoidable.) Thus one must ask of a thing that is organized according to a rational hierarchy: how likely is this to really just be a pure accident rather than what it appears to be? But please note that this question is utterly different from a god-of-the-gaps argument. We are not asking whether this world could work without God. We’re asking whether this world that looks like it was made by God could in fact be an accidental similarity only. We’re asking whether the portrait of a man we’re looking at might have been the result of a canvas and some tubes of paint falling off of a table and the resulting mess just happened to look like a skillful portrait of the man. That could have happened; the right colors could have been on the table, and the dog might have carried the tubes of paint off back to its bed to chew on them. An excellent portrait of a man is not impossible without a painter. But between a skilled painter and a freak accident, my money is on the painter.

That being said, this is why the argument from contingency is much stronger than the argument from design: the argument from contingency shows that it is absolutely impossible that there is no God. This is also why Dawkinsian atheists value evolutionary anecdotes so much—vivid stories capture the imagination and make the whole thing seem more plausible. It’s also why Dawkins spends so much time angrily sneering. His alternative is to say, “Come on, guys, it’s not technically impossible!” and that would be poor salesmanship.

In closing, I would like to show the version of this argument which you can find in the Summa Theologica:

The fifth way is taken from the governance of the world. We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

There is a great difference of expression between my version of the argument and Saint Thomas’ version, but they mean roughly the same thing. Most of the differences arise from a very different standard education. We are taught almost nothing about goodness or about the relationships between different types of beings, very little about intelligence, and less about rationality, so in a modern context such things must be explained at length, whereas in Saint Thomas’s time, anyone with even a tiny bit of education was familiar with those concepts. Our scientific knowledge, by contrast, is far more advanced. Everyone has heard of protons and neutrons and electrons, and most people have heard of quarks.

Addendum

There are two addenda which I should discuss briefly: the weak anthropic principle and the infinite multiverse.

The weak anthropic principle is, roughly, “if the universe weren’t configured in its present way, we wouldn’t be asking why it is configured this way.” Typically its phrasing is adapted to the needs of the moment, but it always means as little. Probably the strongest statement of it—and this isn’t saying much—is that it is technically possible that our evaluation of a thing is influenced by our having grown up in a world where that thing happened. Usually it’s said in a way to suggest that our evaluation most likely was so influenced, but this is pure showmanship, without any admixture of a reason to believe it’s true. “I believe it, so you should too if you want my respect,” intimates the Dawkinsian atheist, as if any self-respecting person wouldn’t question his life choices if a Dawkinsian atheist did respect him.

The infinite multiverse hypothesis is a family of hypotheses which claim, essentially, that every possible world exists in a parallel universe. Basically, take Occam’s Razor and reverse it: multiply entities unnecessarily. I think that this originated with the question of why our physical constants (the charge of basic particles, the gravitational constant, etc.) are the way that they are, and one answer proposed—presumably by someone who read too much science fiction—was that every possible world happened, and we’re just in one that turned out to produce life. How anyone gets past the instant destruction of science, I can’t imagine. If every possible world happens, then there are an infinite number of worlds where all scientific experiments came up with their results by accident. There are an infinite number of worlds where some spiky demon-monster with amazing nano-technology to keep you alive whips you in a pit of fire until the heat-death of those universes for not believing in Jack T. Chick tracts. And so on. And there is precisely no way to tell which of these parallel universes you are in. Since there are infinitely many of the bad universes, there isn’t even a way to tell how likely any of them is. And all of this is relatively obvious with a few seconds of thought, which should tell you how seriously the proponents of the infinite multiverse hypothesis actually take the idea.

Awful Authorities

I was reading an article by Richard Dawkins about why there is almost certainly no God. It’s impressive in how aggressively he misunderstands the subject matter, but it’s even more impressive how much he misunderstands what people have said about it. The way that he casually assumes he completely understands scholastic terminology—as if the scholastic philosophers like Aquinas were writing in conversational English—is a masterwork of arrogant stupidity, to be sure, but that’s not what I want to talk about. It would also be interesting to consider Dawkins as a Martin Luther Lite—Martin Luther was both supremely arrogant and not very bright—but at the moment I’m more interested in the people who accept Dawkins as an authority on religious matters. (I mean authority in the logical sense; to accept his characterization of an opponent’s arguments instead of reading those arguments in full in their original context is to accept Dawkins as an authority in this sense.)

To anyone capable of understanding brilliant thinkers like Socrates, Plato, Aristotle, Aquinas, Nietzsche, or Heidegger, Richard Dawkins is notable only for how utterly average he is. To put it colloquially, as a philosopher, he’d probably make an OK—but not great—bricklayer. An intelligent atheist who has studied philosophy and religion would be embarrassed by Richard Dawkins. So why do so many people respect and follow him?

The answer lies, I think, in how varying intelligence levels relate to intelligibility. This is especially observable in how people of varying intelligence levels follow arguments. Logical arguments for non-trivial things are very rarely made with every step in the argument stated explicitly. It would take far too long, and explicitly stating connections between statements which are obvious makes an explanation seem dull, plodding, and even insulting. But which connections are obvious and which need to be stated explicitly depends on both the intelligence and the knowledge of the person trying to follow the argument. (For brevity, I will concentrate on the intelligence side of that; the reality is more complicated because of the knowledge dimension, but the generalization from intelligence to intelligence-and-knowledge is relatively straightforward.)

While explaining steps in an argument which are obvious to the reader can make the argument ponderous and boring, omitting steps which the reader cannot supply will make the argument entirely unintelligible. People can’t explain something at a higher level of intelligence than they possess, and most people will naturally explain an argument at the level of detail which they themselves don’t find ponderous. Now, while I think that intelligence is distributed among the population more like a Poisson distribution than a bell curve, even if it is a bell curve, the inability to read (from lack of mental capacity, not lack of teaching) forms a lower-level cutoff even to a bell curve, so either way, there is a large fraction of the population which is toward the effective bottom of the intelligence scale.

Given all of this, the most natural thing in the world is for people popular among people of average intelligence to be very slightly above them in intelligence. The slight edge will give them things to explain, but being very close means that (without much effort) their explanations will be intelligible. It is of course possible for a more intelligent person to condescend (in the etymological sense of the word—to come down and be with) to his less intelligent brethren; G.K. Chesterton is a great example of this because he was both brilliant and quite popular. Still, the gift of understanding people unlike oneself is relatively rare, as is the gift of being a good writer, and these two together with the willingness to expend the energy to condescend are rarer still. Still, it does happen, and so popularity does not give us any ability to predict the intelligence of the popular person.

But this does make Richard Dawkins’ popularity intelligible. A person who is in no position to judge whether Dawkins is right about religion will get the pleasure of being presented an intelligible thing, which can be convincing if it is in no way thought about. The less intelligent a person is, the more effort it takes to think about whether new information is congruent with what else is known about the world, making it especially unlikely for a person of average intelligence to think about whether Dawkins’ explanation is not only self-consistent but also consistent with the rest of the world.

Thus what Dawkins is doing may be regarded as a sort of unintentional seduction. His poor understanding has some explanatory power which is made very intelligible by it having been assembled specifically to appeal to an average intellect (his). It is then explained in a very intelligible way because he explains it at the ideal pace for a person of average intellect to understand it (i.e. at the pace he would want to read it).

This suggests that the best way to counter it is by presenting arguments which are similarly maximally intelligible to people of average intelligence. This is quite distinct from presenting the strongest arguments against Dawkins’ position, and it is why I am leery of relying too heavily on cosmological arguments. They are incredibly powerful, but they are not simple. They rely on things like understanding that there cannot be an actual infinite regress. I love the argument from contingency, and in fact when I teach The Catholic Moral System in RCIA that is my starting point, precisely because we can learn so much about God from it. But if people don’t always perfectly follow it, still, when I speak about conclusions like God existing outside of time and space, or that God is perfectly happy and doesn’t need us, or that God’s relationship to us is one of pure gift from Him to us with no reciprocity, it works for them to take my word for it that this is Catholic dogma, or even to recognize the truths as true once they are stated as the verbal formulation of something already intuitively known. They wouldn’t be in the Rite of Christian Initiation for Adults if they didn’t already believe the faith is true, or at least very strongly suspect it (people are welcome to use RCIA to learn more about the faith and drop out if they think it was a mistake).

When it comes to people who are skeptical about the faith, I think that they will generally need something which they can not only accept, but something which they can fully recognize as true. For that reason, I don’t think that the argument from contingency (or other cosmological arguments) are the ideal way to go in arguing with most atheists. A much more intuitive argument is the argument from design, but since one of the pillars of Dawkinsian atheism is a creation myth based on the scientific theory of evolution plus a little astronomy, the argument from design is much less effective than it should be.

(I should mention that I’m not talking about a god of the gaps argument like you find supported by people like Michael Behe in Darwin’s Black Box. Rather, I mean that if you look at the world, it is imperfect but in the main rationally ordered according to a hierarchy of being. A hierarchy implies that there is something at the top. More colloquially: the universe looks like a work of art, and art implies an artist.)

Since this very natural proof for God is no longer very effective, I think that a better approach would be to argue from morality. This is an argument which is not yet well developed. Atheists generally dismiss the version of it which runs, “why would you be good if you weren’t afraid of going to hell?”, and indeed this is not a great argument, though the way that the atheists dismiss it is worse. “I don’t need God to be good,” Christopher Hitchens famously said, and it would have sounded better had it not come from a drunkard who abandoned the mother of his children to take up with another woman. But in any event this misses the point, because no one ever asked atheists how they will do something moral if they happen to feel like doing it, but why they would do it even if they don’t feel like doing it. I’ve never yet heard an answer to that question, except a few indignant yet half-hearted attempts to prove that everyone feels like doing the right thing in all cases. (Except the mentally ill, who should be medically treated, of course.)

That being said, despite the weakness of the atheist answer to even a childish argument from morality, I think that a more adult form of it would be vastly better. In particular, the fact that we recognize morality at all means that the world matters. The existence of morality proves that the world is real and not reducible to the meaningless arrangement of sub-atomic particles that New Atheists would have us believe. The New Atheists have a number of just-so stories to explain away morality as post-hoc rationalizations for instinctual behavior, but that’s obviously not true, and in general I don’t think that these arguments could persuade even a child. Work is needed, to be sure, to explain how morality is necessarily tied to God, but I suspect if done well this line of argumentation is more likely to be persuasive to the sort of person who finds Dawkins credible on religion.

Consulting Detectives and the Police

(In this post I’m going to consider the relationship between a consulting detective and the police, from the perspective of writing about them. Nothing in this post is meant as literary criticism of any examples which are considered.)

In most murder mysteries, the police are investigating the murder, which presents the writer with the problem of what the relationship between the police and the detective will be. Authors have chosen all over the spectrum, from the police seeking out the help of the consulting detective to the police actively trying to deter the consulting detective. (This has even been true of murder mysteries in which the main detective is the police! In that case the spectrum runs from his superiors respecting him to his superiors assigning him elsewhere and forbidding him from investigating.)

Authors will also change things up. In The Cadfael Chronicles, Sheriff Gilbert Prestcote is mildly antagonistic to Cadfael, whereas his successor Hugh Beringar is a good friend of Cadfael’s and, though competent himself, values Cadfael’s opinion highly (it would probably be more accurate to say because he is competent himself). In Murder, She Wrote the different locations for the murders allowed the writers to try out the entire spectrum, though for some reason the Cabot Cove sheriffs tended to be more on the skeptical side. Perhaps the actors in question were just better at scowling than they were at smiling. Sherlock Holmes and Hercule Poirot had excellent reputations and friends in high places, which tended to make the police friendly toward them. Dorothy L. Sayers solved the problem for Lord Peter Wimsey by making the police deferential to his title of nobility. Philo Vance was a long-time friend of the district attorney. That’s only a small sampling, and it’s all over the place. Clearly anything will work, but that leaves the question of which is best.

Of course, to even ask the question that way is to highlight that the real question is: what sort of stories do the various points on the spectrum allow you to tell? It’s always easiest to start at the extremes. If the police are highly antagonistic to the detective—e.g. the detective is the prime suspect and there is an arrest warrant out for him—this tends to be more conducive to stories with a lot of action and suspense. In the examples I can think of (The Fugitive and Minority Report come to mind) most of the focus is on whether the detective will be caught before he can prove he didn’t do it. This also tends to raise the stakes by putting an innocent person in danger of being punished for a crime they didn’t commit.

On the other end of the spectrum, the police enthusiastically ask for the detective’s help and will do anything the detective tells them to. Some episodes of Murder, She Wrote come to mind. Some of the Lord Peter Wimsey stories come close to it as well, and come to think of it, so do a few of the Sherlock Holmes stories. The stakes tend to be lower—though not always; Lord Peter had police cooperation in Strong Poison, but Harriet Vane was on trial for a crime she didn’t commit—and most of the action tends to be the actual investigation. This tends to open up more space for theorizing and collaboration. Unless it’s an ongoing murder story—where live characters keep turning into dead bodies—these stories are more likely to have a slower pace and to focus more on dialog than action.

(It is of course possible to change locations on this spectrum throughout the story. A detective, once cleared, can be welcomed by the police. A detective who had full access can turn into a suspect (this is especially easy to do if there are ongoing murders). A story can start more in the middle and once the detective proves useful, they can become more welcome. Etc.)

I think that my own preference is for the friendlier side of the spectrum. I enjoy collaboration more than I do conflict. Conflict can certainly be interesting, and is often easier to make interesting than collaboration, but I think that collaboration done well has a greater potential for interest. Individuals are interesting, but people are more themselves in community. Of course, it must be a true community. False community obliterates the individual for the sake of the group, while real community brings each individual to the fullness of themselves, respecting each one’s unique virtues. (As a technical note, I mean their unique natural virtues. Moral virtues are—in an ideal world, at least—not distinct between people. All men should be perfectly honest, but each one’s identical perfect honesty will have a different natural content because they know different things.)

A friendly relationship between the police and a consulting detective is not easy to pull off, however, especially if one is striving for realism. There is something of a natural antagonism between a consulting detective and the police, and further there is a natural reticence the police will have in sharing information which is not public. Still, the police will certainly consult outside experts, and police departments have been known to consult psychics for help. In The Dean Died Over Winter Break the relationship was probably more neutral than welcoming, but the police were reasonably friendly. Still, the information mostly flowed from the detectives to the police, and not the other way around. In the circumstance, it seemed the most natural thing.

One of the more plausible ways of ingratiating the consulting detective with the police involves the police being short on resources. Resource shortages have a number of effects on people, most of them tending to increase flexibility. People with too few resources tend to see the upsides of shortcuts and other sorts of flexibility more clearly than do people with enough resources to get everything done. They tend to be less worried about possible downsides, because those downsides compare favorably to the downside of simply not getting their work done. Moreover, the people who are responsible for the short-staffing cannot credibly threaten to replace the overworked person with someone else. Finding people willing to be overworked is not easy, and in any event finding new people for a job is both difficult and expensive. Worse for the person responsible for the short-staffing, since overworked people often make mistakes and don’t get everything done, disciplinary issues will have come up before, and the overworked person will probably have gotten used to the toothlessness of any threats made. Thus by the time the consulting detective comes around, offering to take some of the work off of the overworked police detective’s shoulders, the upside will be all the more obvious while the downsides will already be known to be minimal. And since the worst case is that the overworked person finally stops being overworked, the downsides will seem especially minimal.

Also viable for making police collaboration with the consulting detective plausible is for the forensic evidence to be scant. Really it’s not just the forensic evidence, but all of the evidence which the police are best at obtaining: cell phone records, bank records, the sort of evidence for which warrants are generally attainable, etc. If the police don’t really know anything of value, they have very little to lose in a relationship with the consulting detective. The flip side of the fairly impressive powers to subpoena phone records and the like is that the police are bound by rules which private citizens are not. Moreover, the police are bound to enforce all laws, and though of course in practice they don’t always do so, this makes the police scary, since in the modern age virtually everyone is guilty of some crime or other. We have so many laws it’s impossible to know what they all are, and some of them run counter to common sense (especially copyright laws). Children and pets offer all sorts of judgment-based ways in which the police could make a person’s life miserable even if they haven’t technically broken any laws; a great many people are rightfully wary of anyone as powerful as the police. None of this applies to a consulting detective, who has no power and is therefore relatively safe. Further, with no superiors to whom a person can complain, a consulting detective is in a less vulnerable position if they take liberties with people who have valuable information (provided those liberties are within the law).

There are of course plenty of other ways for a consulting detective to get along with the police. Friends and relatives on the police force have been used innumerable times. If a consulting detective is likable a police detective might simply take a liking to them. Having a mutual friend and helping the consulting detective for the sake of the friend is certainly possible, as is there being someone in authority over the police who wants the consulting detective working on the case.  My memory might be deceiving me, but I think I’ve even seen it work for the consulting detective to—in effect—blackmail the police detective into sharing information. Since precedent is a powerful thing, I’ve also seen it done to bootstrap the consulting detective into a relationship with the police by some means which would only work once—a relative of the deceased having (politically expensive to use) power over the police, for example—which leaves the police eager to work with the detective again. I think that the choice of these techniques, if one wants to go this way, is going to depend on the detectives. In the case of my detectives—The Franciscan Brothers of Investigation—the choice varies with who it was that called the brothers in. In The Dean Died Over Winter Break, since it was the university president, this acted as something of a middle ground. The police were neutral, but they were not hostile, while the university president’s authority gave them full cooperation with the university staff, which was probably more valuable to them. In future mysteries, it’s likely to be different based on who is asking for help.

Review: The Benson Murder Case

Having become interested in American writers during the Golden Age of Detective Fiction (primarily because of research into the phrase The Butler Did It), I came across S. S. Van Dine and his detective Philo Vance. Since Philo Vance had been described as one of the most popular American detectives of the 1920s and 1930s, I bought a copy of The Benson Murder Case. Though I thought that it was merely OK as a story, it was certainly historically interesting.

The first thing which struck me about Philo Vance was how very reminiscent of Lord Peter Wimsey he is (Whose Body was published three years before The Benson Murder Case). Vance was educated at Oxford, at around the same time as Lord Peter, and has many of the same mannerisms, such as ending a declarative sentence with the question, “what?” Vance also uses a monocle, though he doesn’t wear it constantly as Lord Peter does. He is fashionable, wealthy, travels in high society, and dresses extremely well, just like Lord Peter. Whereas Lord Peter is knowledgeable about art and his real passion is music, Vance is knowledgeable about music and his real passion is art. Both like to quote classic literature while investigating cases. If so far the main difference between them seems to be their name, that is misleading. There is a significant, though subtle, difference, and I think that it traces back to their authors.

Willard Huntington Wright (S.S. Van Dine was a pen name) was a Nietzsche scholar. Dorothy L. Sayers was a devout Anglican, and even published some theology. Both detectives seem to lack any belief in God, and Sayers even went so far as to say, in private correspondence, that she thought Lord Peter would think it an impertinence to believe that he had a soul. Yet there is something religious in the character of Lord Peter. He did not believe in God, but he did believe in beauty. He might have been a worldling, but he knew somewhere in the back of his mind that it wasn’t true that the world is enough, and it saddened him because the better thing which beauty hinted at seemed unattainable. By contrast, Philo Vance might have been a celebrated art critic and collector, but he gave no indication that he actually saw any beauty in the world. The proof of it was that there was no sadness in his character. Lord Peter had suffered; Lord Peter’s heart had been broken, not just serving in World War I, but in other parts of life, as well. Philo Vance, by contrast, seemed to have an intact but very small heart. He does not seem to have suffered anything besides boredom, and as Rabbi Abraham Heschel said, “The man who has not suffered, what can he possibly know, anyway?” Joy is a greater wisdom than sadness, but there is no wisdom at all in being bored. As Chesterton put it:

There is no such thing on earth as an uninteresting subject; the only thing that can exist is an uninterested person.

There is also the curious element in the story of how Philo Vance lectures his friend, the district attorney, on the nature of investigation. This was a common feature of early detective fiction, especially the contrast between proper investigation and how the police went about investigating. It started with Poe’s explanation of C. Auguste Dupin’s ratiocination in The Murders in the Rue Morgue, was a common feature of Sherlock Holmes stories, and featured in a great many others of the time, too. So much so that Chesterton wrote a very interesting conversation about the very phenomenon in The Mirror of the Magistrate, published in The Secret of Father Brown:

“Ours is the only trade,” said Bagshaw, “in which the professional is always supposed to be wrong. After all, people don’t write stories in which hairdressers can’t cut hair and have to be helped by a customer; or in which a cabman can’t drive a cab until his fare explains to him the philosophy of cab-driving. For all that, I’d never deny that we often tend to get into a rut: or, in other words, have the disadvantages of going by a rule. Where the romancers are wrong is, that they don’t allow us even the advantages of going by a rule.”

“Surely,” said Underhill, “Sherlock Holmes would say that he went by a logical rule.”

“He may be right,” answered the other; “but I mean a collective rule. It’s like the staff work of an army. We pool our information.”

“And you don’t think detective stories allow for that?” asked his friend.

“Well, let’s take any imaginary case of Sherlock Holmes, and Lestrade, the official detective. Sherlock Holmes, let us say, can guess that a total stranger crossing the street is a foreigner, merely because he seems to look for the traffic to go to the right instead of the left. I’m quite ready to admit Holmes might guess that. I’m quite sure Lestrade wouldn’t guess anything of the kind. But what they leave out is the fact that the policeman, who couldn’t guess, might very probably know. Lestrade might know the man was a foreigner merely because his department has to keep an eye on all foreigners…”

Philo Vance takes it one step further than this, claiming that the police methods are not just ineffective, but counter-productive. It’s a theme which Vance hits upon so often as to come across as supercilious. Typical murders are not fiendishly cunning, and forensic evidence, though circumstantial, is actually useful. (I’m going to get into spoilers at this point, so if you want to read the novel for yourself without knowing who did it, I suggest you go read it now.)

Much of Vance’s point is made by the police being rather unbelievably thick-headed. Their first suspect is a woman whose handbag and gloves were found at the scene of the crime, and who chucked two cigarette butts into the fireplace. The victim, Benson, was known to have gone out with some woman the night he was killed (he was killed shortly past midnight), and that’s the sum total of evidence which the police have upon which they conclude she must have murdered him. That, plus she got home at around 1am, and might possibly have gotten the murder weapon from her fiancé, who presumably owned a military Colt automatic pistol because he had been in the Great War. Oh, and Benson was known to make inappropriate advances to women. Somehow this added up to her cold-bloodedly shooting him in the forehead from six feet away while he was seated. Had he been killed defensively, this might have been plausible, but why a woman who went to dinner with him would execute him in this fashion is never so much as broached.

There is also the evidence of who the real killer is, which is rather conclusive. Benson normally wore a toupee and was never seen without it; ditto his false front teeth. Both were on his nightstand, and he was wearing his comfortable slippers and an old smoking jacket on top of his evening clothes, without a collar. (In clothing of the time, collars were separate items from the shirts, and would attach by a button. It was therefore possible to take the collar off, and in fact when someone was at leisure and didn’t need to be presentable, he would often do that very thing for comfort’s sake.) The housekeeper is positive that the door was locked, for it locked automatically, and moreover that the doorbell was never rung. The windows were barred against break-in. Despite all of this evidence that the victim was on intimate terms with his murderer—he let the murderer in himself while in a state of comparative undress, without bothering to put his toupee and false teeth back on, and was sitting down and even reading a book when he was shot—the police never ask what any of this evidence means, even when Vance more-or-less points it out to them. No explanation for this incredible thickness on the part of the police is given, except when Vance mentions that there are height and weight requirements to join the police force, but no intelligence requirement.

This also basically gives away who the murderer is. This is doubly true because of the form of the fiction. Vance is a genius who is always right, and Vance declares he knows who the murderer is five minutes after looking at the crime scene. Granted, it is revealed later on that Vance knew the murderer for many years, and thus knew his personality—which I would normally call cheating—but the evidence which points to the murderer is so clear apart from odd psychological theories that this foreknowledge on the part of Vance is fairly irrelevant. As of chapter 2 or 3, I forget which, there is only one suspect, and all that remains for the rest of the book is to watch Vance disprove the red herrings for the district attorney. In general it would be possible for some other character to be introduced who also knew the victim on such intimate terms, but since Vance is always right, and Vance knew who the murderer was, that possibility was foreclosed.

It is especially interesting to consider this in light of Van Dine’s Twenty Rules for Writing Detective Stories, published in 1928 (two years after The Benson Murder Case). You can argue that he violated #3 (no love interest) because there was an affianced couple who would not have been able to marry had either of them been executed for the murder. He borders on violating #4 (none of the official investigators should be the culprit), since the old friend who asked the district attorney to personally investigate turned out to be the murderer. He violates #16 (no literary dallying with side-issues) a few times, blathering on about his theories of art at such length that I skimmed those sections. Also curious is that his adherence to rule #15 (the clever reader should be able to finger the culprit as soon as the detective does) made the book rather anti-climactic. In essence he took a short-story murder mystery and then inserted an entire book’s worth of padding in between the investigation and the revelation of the murderer.

As an addendum, as I was googling around to see whether anyone else talked about the similarity between Vance and Lord Peter, I found this blog post about S.S. Van Dine and his sleuth Philo Vance, which is a different take than mine, to be sure, and has some interesting historical information in it.

Reductio ad Absurdum Isn’t Straw

Reductio ad Absurdum is a criticism of a position which shows that it is false by demonstrating that absurd conclusions follow from it. A Straw Man is a fake position that sounds like someone’s real position, constructed by an opponent because it’s easier to disprove than the person’s real position. (It is often the case that the straw man is constructed accidentally, because the attacker has never understood his opponent’s real position.) These two are often confused for each other, which is a bit odd, and I think that a big part of the explanation is Kantian epistemology. (I wrote about Kant’s substitute for knowledge here, and this blog post won’t make much sense unless you read that first.)

The relevant part of Kantian epistemology is that each of the several contradictory universal theories held by a person is held only in the areas of life in which the person believes that it produces correct results. In all other aspects of life, the universal theory is ignored. To continue with the example of neck-down Darwinism, survival of the fittest is not even considered in the realm of politics, and all men being created equal is not even considered in the realm of science. Each theory has its proper domain, not in the sense of the domain where it makes claims, but rather the domain where its claims are heeded. This is the key ingredient in reductio ad absurdum being called a straw man.

Suppose Fred and James are arguing, and James holds a Kantian epistemology while Fred does not. Fred points out that James’ materialism implies that no action is any more moral than another, because no human action creates or destroys matter. James says that this is a straw man, because he never said that. Yet Fred never claimed that James said that; he claimed that James would have to say it if he were being consistent with what he (James) did say. Why is James so convinced that this is a straw man?

It’s because morality is not someplace that James applies materialism. To James’ mind, showing that one of his universal theories has implications is not enough to prove that James believes those implications. Instead, it must be shown that James actually believes that the universal theory should be applied to that part of life. James sees Fred’s reductio argument as a straw man because James does not believe his universal theory (materialism) should be applied to this part of life (morality), and so its implications in that area of life are in no way his position.

It’s difficult to know what to make of James’ contention that this is a straw man of his position. In a sense he’s right, because that implication of materialism is not his position. But that’s because materialism is not his position, despite the fact that he has claimed it to be. It’s not his position because he does not actually have a position. His claim to believe that truth is unknowable, and so the best we can do is to refine our theories as we “test” them against evidence, is basically a methodological form of blank skepticism. It makes no positive claims of any kind, other than the self-evidently true ones about what at the moment appears to be the memory of past experiences, and as such attributing any positive claim to it is mistaken. This is an utter failure of rational thinking, but that’s really the only criticism which can be leveled against it. By claiming no more knowledge of the world than is possessed by a worm, it cannot be proven wrong about anything. The real problem is that the people who claim to believe this are essentially committing the moral crime of stolen valor. Just as a deserter pretending to be a decorated war hero is reprehensible, so is a putative earthworm who still wants to be treated like a man. Such skeptics would be consistent enough if they didn’t complain about being treated like worms. In practice, they complain about it quite loudly. They rely on the fact that we don’t believe them to live a much better life than they are entitled to according to their philosophy of the world. In argument, they take advantage of good manners. If we were to take their words seriously, the only correct response amounts to, though it is possible to state it less bluntly, “shut your mouth among your betters, dog”.

Kant’s Version of Knowledge

For those who don’t know, there is a school of philosophy called, unfortunately enough given the passage of time, Modern Philosophy. It had several features, but the main one was that it denied that knowledge was really possible. It was rarely that explicit, and oddly enough started in the 1600s with René Descartes’ proof that knowledge is possible. It ended with Immanuel Kant’s work in the 1700s trying to come up with a workable substitute for knowledge. It’s a common school of philosophy, these days, and no one has ever been able to figure out how its adherents are acting in good faith—especially since its adherents deny that good faith is really possible—but everyone acts like they are anyway since they seem to claim to, and academia is a very polite place (in front of students, anyway). There’s a joke about Modern Philosophy which runs:

Modern Philosophy was born with Descartes, died with Kant, and has been roaming the halls of academia ever since like a zombie: eating brains but never getting any smarter for it.

The most pernicious effect of Modern Philosophy—and I say this despite Modern Philosophy’s causative relationship to the existence of Post-Modernism—is the version of knowledge which Kant came up with in order to try to solve the problems of Modern Philosophy. (In technical terms, Kantian epistemology.) What Kant proposed was, roughly, the following:

We can’t have any direct knowledge of things apart from ourselves, so the best that we can do is to ape the scientific method: create theories of the world and then test them, refining them over time as we get more evidence.

Kant went on to say that we must believe in God, free will, and the immortality of the soul, because the alternative hypotheses predict an irrational world, which is not what we live in.

Most everyone else who takes Modern Philosophy seriously was quite happy to believe that we live in an irrational world, and so they will happily reject all three. (Interestingly, Kant was reputed to be a creature of extreme, unvarying habit; I don’t know if that was of any significance to his intuitions.) But this has become the dominant idea of what knowledge is: no longer a direct communion of the mind with things outside of the mind, which is what everyone up until that point had meant by knowledge, whether they affirmed or denied that we have it.

The tricky thing about recognizing this is that Kant was very intelligent, and of a philosophical disposition. Most people are not very intelligent, and more importantly most people are not of a philosophical disposition. The result, taking these two things into account, is analogous to what happened in physics after Newtonian mechanics was shown to be false.

Someone unfamiliar with how physics is conducted might think that once Newton’s laws of motion were shown to be wrong, they would have been discarded, but they were not. The reason they were not is that they are not very far from correct in low-mass and low-velocity situations, and they are much easier to compute with than the theories which superseded them. Since most everything that happens on the earth is in a low-mass, low-velocity situation compared to where the errors in Newtonian mechanics become noticeable, people just go on using Newtonian mechanics whenever they know that the error would be small. Basically, they know that the laws are wrong, but since there is always measurement error and other sources of imprecision in practice, the laws can be used anywhere the error would be so small as to be insignificant compared to our measurement tolerances.
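To put a number on just how small these errors are, here is a minimal sketch in Python (the figures are hypothetical, chosen purely for illustration) comparing the Newtonian kinetic energy (1/2)mv² with the relativistic (γ − 1)mc² at everyday speeds:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def ke_newton(m, v):
    """Newtonian kinetic energy: (1/2) * m * v**2."""
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c**2.

    gamma - 1 is computed in a cancellation-free form,
        b / (sqrt(1 - b) * (1 + sqrt(1 - b)))  where b = (v/c)**2,
    so the tiny low-speed difference survives floating-point rounding.
    """
    b = (v / C) ** 2
    root = math.sqrt(1.0 - b)
    return m * C ** 2 * b / (root * (1.0 + root))

# Hypothetical example: a 1000 kg car at 30 m/s (highway speed).
m, v = 1000.0, 30.0
newton = ke_newton(m, v)        # exactly 450,000 joules
einstein = ke_relativistic(m, v)
rel_error = abs(einstein - newton) / einstein
# rel_error is on the order of (v/c)**2, roughly 1e-14: far below any
# real measurement tolerance, which is why the "wrong" Newtonian laws
# are still used for nearly everything that happens on earth.
```

At highway speed the two formulas disagree by about one part in a hundred trillion, which makes the point vivid: the old law is known to be false, yet using it introduces no error anyone could ever measure.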

People do the same thing with the theories of reality which they substitute for knowledge. Instead of, like Kant, coming up with one consistent theory which is the best theory they can possibly come up with, they will use several theories—which they know to be quite wrong in some cases—and just make sure to restrict their application of these theories to the parts of life where these theories produce correct results. (Also, emotional reaction is commonly used as the test of whether the theory is right—does the theory say something that makes people feel worse than the alternative does?) Neck-down Darwinism is probably the best example. (If you’re not familiar with it: below the neck, evolution explains everything about the human body, but above the neck, all men are created equal.)

The result is that people are completely unfazed when you point out the contradictions in their beliefs. They already knew that their beliefs contradicted each other. They just have some sort of rule (possibly a rule-of-thumb) for which belief they apply in the cases of contradiction. Most of them take this as part of the nature of knowledge: since a universally correct theory is impossible (so far) to construct, the best that you can do is several contradictory universal theories which are only applied where they have been experimentally verified to produce “correct” results. Many people with Kantian epistemology consider it a sign of mental weakness to be unaware that your own beliefs contradict; only the small-minded or extremely inexperienced think that one theory covers everything.

The truly sinister thing about this epistemology is that it deprives the victim of the obvious means of escape. For most wrong theories of the universe, running into an unresolvable (actual, rather than apparent) contradiction is evidence that the theory is wrong, and a sign that alternatives must be sought. Someone suffering from Kantian epistemology won’t even pause at contradictions, so God alone knows how they will know to look for something better.

The Butler Did It Again

(This is a follow-up to a series of blog posts on the subject, the most recent being here.) As I was reading another article on the origin of the phrase, “the butler did it,” my attention was drawn to the story The Strange Case of Mr. Challoner, by Herbert Jenkins. Published in 1921, it preceded The Door by nine years. (Interestingly, Herbert Jenkins owned the publishing house which published P.G. Wodehouse’s books, most famously the stories of Jeeves and Wooster.) I tracked down a copy and read it. (There’s a free ebook version of the book Malcolm Sage, Detective on Kindle, which collects all of Jenkins’ detective stories—if you want to read it, I suggest you do so now, because there will be spoilers below.)

Jenkins’ detective was Malcolm Sage, who was at least vaguely in the mold of Sherlock Holmes and Hercule Poirot, by which I mean that he was both very observant of physical details and very eccentric. All of the stories about Malcolm Sage were short stories, which is very significant to understanding the relationship of this story to the phrase, “the butler did it”.

Novels and short stories are very different things in any genre, but this is especially true of murder mysteries. Novels tend to focus on the unraveling of intertwining mysteries, which is to say the elimination of red herrings. This is somewhat necessitated by the length of a novel; each red herring forms a sort of sub-mystery, which allows one to enjoy the solving of mysteries over and over throughout the course of a novel. There are exceptions, of course. It is possible to combine a mystery with some other genre where the other genre takes up most of the page count. Adventure is the obvious example; a mystery/adventure works well where each clue is the reward at the end of an adventure. To some degree the Hardy Boys books were like this, and to a lesser extent this is often true of the Cadfael stories. The Virgin in the Ice and The Summer of the Danes are both great examples of where the adventure takes up more pages than the mystery. (Both are excellent novels.)

For related reasons—though there are notable exceptions—murder mystery novels don’t tend to focus on figuring out a single ingenious mechanism, for which the evidence was present at the crime scene, for concealing the murder(er). By contrast, this is extremely common in short stories. Among other things, short stories don’t have the space for disentangling red herrings. Short stories which were printed in magazines tended to be extremely short, sometimes only a few thousand words. A short story is also simply the right size for that sort of game.

The Strange Case of Mr. Challoner is a locked-room mystery. There is one obvious suspect: a nephew of whose impending marriage the deceased disapproves and who will be disinherited on the morrow. The butler was the last to see the deceased alive, and the body was discovered in the library, with all of the doors and windows locked from within. The death was staged to look like suicide, and the local police take it at face value. Malcolm Sage makes numerous measurements and observations, and also directs that the photographer attached to his detective agency take a number of photographs. Malcolm Sage is so fond of photographs as evidence that he gives a lecture on their importance to the local police detective inspector. Eventually he reveals that the butler, who had only been working in his position for six months and was highly praised for the excellence of his work, is the culprit. Sage had taken supposedly exclusionary fingerprints from everyone, and used those to find out that the butler had a criminal record and was still wanted. Further, he explained that the butler had put a small metal rod through the hole in the key’s handle and, using a string attached to the rod, turned the key from outside the closed door by pulling on the string. Once the key turned far enough, the metal rod fell out of the hole in the key’s handle, and he used the string to pull the rod under the door and retrieve it.

Unlike the butler in The Door, this time at least the butler was actually taking advantage of his role as butler in committing the murder. His master didn’t think anything of his coming up from behind, because it’s the sort of thing that butlers do, and moreover he had an excuse for being in the house after the rest of the household had gone to sleep, because he lived there. So at least in this case butling was relevant to the butler’s commission of the crime.

None of the articles I’ve seen so far have cited The Strange Case of Mr. Challoner as having had any influence on the phrase, but then again none of them have cited any evidence for why The Door did have influence, either. It leaves me wondering whether any of this is actually relevant to the phrase I’ve been considering. It might well not be. With murder mysteries having been quite popular ever since Sherlock Holmes first studied scarlet, I assume that there were a great many short stories in the weekly and monthly publications of the early 1900s which have largely been lost to the sands of time. In the days before television, and even before radio plays were particularly popular, theatrical plays were quite popular. Wherever there is a maw gaping for novelty, there will be people trying to fill it. Certainly the stage is the source that the character Broadway cited, in the 1933 short story What, No Butler?, as his authority that all murders were committed by butlers. I’m disinclined to think that much of the source was movies, though I don’t have any hard evidence for that. Murder mysteries don’t lend themselves well to silent films, though I have no doubt that somebody tried it at least once. The Jazz Singer was the first talkie, in 1927. Talkies took over quite quickly, as I gather, dominating film no later than the mid-1930s and probably in the early 1930s, but that’s rather too close to when What, No Butler? was written for talkies to have embedded the trope in the culture by then.

I’m left where I was before, wondering where this trope came from. Perhaps I’ll be successful in tracking down contemporary reviews of The Door, which might be illuminating, but unfortunately a quick google search didn’t turn up anything. I might have to resort to going to the library!

The Problem With Know-Nothing Atheism

A little while ago I wrote a post about The Problem With Agnostic Atheism. That was a more philosophical approach to the subject. This post is going to be basically the same thing, but from a rhetorical, rather than philosophical, perspective. Agnostic atheism is not really a philosophical position; one meets it almost exclusively as rhetoric. The purpose of this post, then, is to provide some rhetorical tools for meeting it. Accordingly, I’m going to refer to it, in this essay, as know-nothing atheism.

To save you the trouble of following the link above just to get a definition, here’s the position I mean by know-nothing atheism, in the sort of reasonable-sounding language used to pretty it up:

There is insufficient evidence to prove the existence of God, and the default in the absence of evidence that a thing exists is to assume it does not, so until such evidence exists I’m going to go with the default position that God does not exist.

This is a reasonably adequate translation of its use in practice:

I don’t care about whether there’s a God, so I’m not going to consider the question unless you can make me.

Just a word of warning: know-nothing atheists generally combine a great deal of arrogant confidence with incredibly thin skin. Because their position is one of refusing to think, they will never see any parallels between what you’re saying and what they said; they will call you arrogant the moment you counter their confidence with your own confidence, and they will call you mean if you counter their claims that you are mentally defective with claims that they are the one who is mentally defective. It’s like arguing with a ten-year-old, because in many ways it is; this is a position held by people who have refused to grow up, so they behave like they have refused to grow up. Complete with the certainty that not only do they know everything and those who disagree with them are idiots, but that they’re unappreciated geniuses suffering the slings and arrows of outrageous fortune. (Individuals will vary, of course.)

If you want to see this in action and verify it for yourself, just test them out. Here is a hypothetical exchange:

Atheist: The burden of proof is on the person making a positive claim.

Theist: Does France exist?

Atheist: Of course.

Theist: What evidence do you have that France exists?

Atheist: You can go there and see for yourself.

Theist: That isn’t evidence, that is a suggestion for how to get evidence—supposing France actually exists, as you claim—at great effort and expense on my part. [At this point the theist could say, “If that counts, then just commit suicide and you’ll go to hell and that will prove I’m right.” but I recommend against it, as it will just confuse the poor atheist.] Just as I thought, you don’t have any evidence.

Atheist: I don’t have the time for nonsense. I don’t need to show you the evidence that France exists; go look it up for yourself. We’re talking about whether God exists.

If you’re doing this on Twitter, you’ll probably get a number of epithets insulting your intelligence and honesty added in. But the key thing is that they clearly don’t believe in the standard, think anything they don’t understand—no matter how clear—is nonsense, and get upset with you if you try to actually explain what you mean rather than just bowing down to their superior intellects.

The whole goal of the know-nothing atheist is to try to get you to fight on his terms. In particular, he wants to make himself the jury for the argument. This may be tempting to give into, since a person sincerely inquiring into the truth must receive it according to their present understanding. However, the know-nothing atheist is not pursuing truth. He’s only after a rhetorical victory. (This can be an unpleasant conclusion to come to, because we would like to believe that everyone is acting in good faith, and moreover it is bad manners to accuse someone of acting in bad faith, but in real life people do act in bad faith, and pretending otherwise helps no one. I do recommend always coming to this conclusion reluctantly, because there is always the danger of dismissing someone honestly seeking the truth, which can do great harm.)

Because the know-nothing atheist is only after rhetorical victory, it is a complete mistake to allow him to set himself up as the jury who must be convinced. When he tries to do this, a strong counter is to shift the argument to whether he’s arguing in good faith. Since he’s not, this is a weak position for him. To give an example:

Atheist: what is your evidence that God exists?

Theist: To know what book to recommend you, I’ll need to know whether you want a philosophical approach or more of a practical, common-sense approach.

Atheist: I’m not going to read a book. I want to know what *your* evidence is.

Theist: What sort of evidence would you accept as proof for God, if I could produce it for you?

Atheist: Stop evading. The truth is you don’t have any evidence and you know it.

Theist: I have plenty of evidence. What evidence do you have that you’re capable of understanding it?

Now, at this point, the atheist is very likely to go one of several routes:

  • They will take this as a personal insult and claim it’s evidence you have nothing.
  • They will claim that you’re evading.
  • They will just repeat their demand for evidence like they’re a broken record.
  • They will make some weird epistemological claim like evidence doesn’t need to be understood, because evidence directly points to the thing it’s evidence for.

Any of these responses means the argument is not too far from its end, because the atheist is being brought onto uncomfortable ground. They will try various rhetorical tricks, mostly accusations of ad hominem fallacies and claims of having been insulted. You can explain that an ad hominem fallacy is arguing that an argument is false because of some bad quality in the person putting it forward; asking for evidence that the other person is free of a fault which would render them incapable of understanding the argument is not that. Mostly, though, I think that the best line is to just stick to the strong position, which amounts to asking, “What evidence do you have that you’re capable of understanding a reasonable argument?” If they can’t actually demonstrate this—and many people can’t; I’ve run into people who don’t know the difference between an assertion, an analogy, and an argument—then why you should spend time and effort trying to explain something to them is in fact a legitimate question. Most classes in school have prerequisites for a reason.

A slightly less confrontational tack to take—though I think a certain amount of blunt honesty is warranted; know-nothing atheists rarely want anything besides a confrontation and they’re hoping for the advantage of being the only person violating tea-time rules of politeness—is to shift the argument from burden of proof to duty to investigate. Basically this amounts to denying that you have an emotional investment in the other person’s holding any particular position. They want you to feel the need to convince them. Be clear you don’t feel that need. Basically, “I’m happy to help if you want recommendations for where to begin, but it’s your job to investigate the answers to the most important questions in life, not mine to do it for you.” To give an example dialog:

Atheist: Theism is irrational because there is no evidence for the existence of God.

Theist: There is plenty of evidence for the existence of God. You’re just defining evidence in an overly narrow way.

Atheist: if there was evidence, it wouldn’t be possible to deny that God exists.

Theist: anyone can deny anything if they want to. That’s a useless standard of evidence.

Atheist: do you deny science?

Theist: Do you affirm it? Even the parts that are wrong and will be contradicted by future discoveries?

Atheist: No, science is just the best method for finding the truth that we have.

Theist: leaving aside that you could only know that if you already had access to the truth to compare it to science, and further leaving aside the fact that “science” isn’t one thing nor do scientists only operate by one method, what you’ve said is that you don’t actually know anything. So the best we have are our guesses which seem to work?

Atheist: That’s right. Make a hypothesis, test it with evidence. That’s the best we can do.

Theist: But if the evidence confirms the hypothesis, you still don’t know that it’s right. Some evidence might come along later which contradicts it?

Atheist: of course. That’s the beauty of science—it’s self-correcting.

Theist: But if you need to make a decision, you will act as if the hypothesis is true?

Atheist: Yes. What would you do?

Theist: Actually, it would depend on how good the evidence is because evidence is not a binary yes/no thing, but that’s irrelevant. The point is that you will act as if a scientific hypothesis is true when you need to act, but outside of that case, you will hold that you don’t know anything because of course every theory might be contradicted by evidence which comes along later?

Atheist: Yes…

Theist: So you don’t know anything, you just have guesses which you are going to follow because you can’t think of anything better?

Atheist: I wouldn’t put it that way…

Theist: Of course not. That’s why I had to worm it out of you; it doesn’t sound very good without the poetic hand-waving to distract us from what you really mean. So that brings up the question: how are you any better than a horse? Horses have their guesses about the world that they will follow in default of some better guess, and don’t have any propositional knowledge which they affirm to be actually true.

Atheist: Why do you need to feel superior to other animals?

Theist: I don’t need to feel superior. The obvious fact that I am superior to a horse is evidence that your entire approach, which leaves you in the position of being no better than a horse, is wrong.

Atheist: Where is your evidence that you’re better than a horse?

Theist: I don’t argue with horses, which it is your contention to be no better than. Why should I argue with you?

Atheist: I can talk and a horse can’t.

Theist: But you have told me that what you say doesn’t mean anything more than a horse’s whinnies. Unless you’ve got some evidence that you’re more capable of rational understanding than a horse is, I can’t see why I should bother speaking with you any further. There are rational people whose words mean more than a horse’s whinnies with whom I could be speaking instead.

Atheist: !@#$ you.

Theist: I don’t believe in interspecies mating, but thanks for the offer.

Atheist: you’re just saying that because you’ve got nothing and you know it.

Theist: I’m saying that because I lack a minimally rational debating partner, and if I wanted to waste my time further, I could argue with the wall.

I’d just like to re-emphasize that this is a rhetorical approach, to be used in cases where someone is purely engaged in rhetoric, as distinct from honestly trying to find the truth. There is one other problem with a rhetorical approach like this: neutral observers will tend to blame one for using it rather than being maximally conciliatory. This is an odd reaction, somewhat akin to the person who looks for his keys under a lamp-post despite having lost them in the dark, because he won’t find them in the dark anyway. People who want peace at any price will often try to appeal to the person on the defensive, who is likely to be more reasonable precisely because they’re not the one initiating a rhetorical argument. I don’t think that there’s anything to be done about this besides being firm, when one is in the right, that one is. In any event the world seems to be getting less genteel, so I suspect that this will become less important.

So, The Butler Did It

I’ve been reading Mary Roberts Rinehart’s murder mystery The Door, which I talked about here and here, at five and twenty-two chapters in, respectively. This was started off by my wondering about the phrase, “the butler did it”. I’ve finally finished the book, so this post will finish off my review of The Door, and also discuss the idea of the butler being the murderer. I’d warn you about spoilers, but, well, I think that you already know that the butler did it. I might spoil a few side-mysteries too, though, so caveat lector.

The book was written, in its entirety, in the style of the memoirs of someone who observed a very strange situation. I am used to murder mysteries and detective fiction being, roughly, synonyms, but The Door is very clearly a murder mystery while it is not at all detective fiction. There is a police detective—who does solve the case—but he operates almost entirely outside of the narrative. Several members of the family play at a little detecting, but only occasionally. Only one of them does anything which does not simply anticipate a later discovery, and that was to effect a useful introduction, rather than any actual detection.

The story also maintains the style of foreshadowing hints until the end, abandoning it only as the police detective explains the solution, which is the last thing that happens in the book. I’ve concluded that I don’t like this style. It feels at best overwrought, and at worst like an attempt to spice up a dull narrative with chopped up bits of other parts of the same narrative. I don’t mean that all foreshadowing is bad, of course, but The Door seemed to use foreshadowing in place of a compelling plot.

There is also the very strange question of the narrator, Elizabeth Jane Bell, who tells the story in a very personal way. Throughout the story she alternately laments the tragedy, investigates it, and destroys evidence to try to protect the family. It’s that last part which is especially hard to reconcile with the narration: why on earth would she narrate all of these scandalous details in a memoir when the character of herself within the memoir would want all such scandal wiped out? Whether you take the inconsistency between herself in the story and herself as narrator to be a problem with the character or a problem with the narrator (I took it as the former), it is still an unsettling problem.

There is also the problem of the family which Elizabeth Jane was trying to protect. Her niece Judy was never really under any suspicion, having, as I recall, an alibi from the beginning. She was the only really sympathetic member of the whole family other than Elizabeth Jane herself, and even she was sympathetic mostly from a general pleasantness which seemed to be a combination of decent manners, comfortable circumstances, and little ambition. The rest were detestable. Towards the end I was hoping that the murder would be solved only after the good-for-nothing Jim was executed, just so the wretch would be out of the story. The other characters were similarly unpleasant, which left me very unsympathetic to the family’s desire to avoid scandal, which was to a fair degree their only major motivation in anything that they did. But this brings up an interesting point about murder mysteries in general: it’s hard for likable characters to be suspects.

The mystery in a murder mystery obviously depends on there being more than one suspect—more properly, on there being more than one credible suspect. The problem is that a character can fail to be credible as a suspect by being too likable. It’s very difficult to write an enjoyable story about a good person who stoops to murder but then cheerfully covers it up. It’s that much harder to write several characters who are all credible in that way; to pull it off one must write good characters with depth, rather than taking the common approach of paper-thin automatons who are good merely because they’re not tempted by ordinary temptations. It’s much easier to make suspects credible by simply giving them nothing they won’t do for gain.

Another important distinction between suspects in a mystery is between those with an obvious motive and those without an obvious motive. Very often this does not line up well with the moral probity of the characters. In order to put an innocent person in peril (to heighten the tension) a morally upright person will get an obvious motive, while a moral degenerate will get none. This helps to spread the doubtfulness around, to be sure, but because both of these suspects have something obviously going for them as suspects, it is especially common to make the culprit someone who is not very morally offensive (apart from their murders) who has a hidden motive. Which brings us to the butler.

How much was the butler a character and therefore a potential suspect? It’s hard for me to say fairly because I already knew that he did it, of course, but doing my best to be fair, I would say somewhat, but not much. Joseph (the butler) gets progressively more tired, worn out, and on edge as the story progresses, which certainly was a clue (that he was running around doing things while everyone else was asleep). He had originally come from one of the victims’ households, which should have been a clue but actually wasn’t—his prior connection to the rich victim had no significance so far as was revealed in the story. Nothing was ever made of his having the opportunity for the murders, because they happened at times when everyone had opportunity, and the house was small enough that a butler’s ability to go unnoticed had no significance. In fact, all three murders happened outside of the house, so his position as butler was—if anything—a disadvantage. He had to sneak off to commit them, or commit them while he was off-duty. The one time his being a butler was an advantage was when he answered the door when one of the victims came to see Elizabeth Jane, and he turned her away because Elizabeth Jane was sleeping. But any butler might have turned her away, and any murderer might have learned of her coming and consequently resolved to kill her before she could tell what she knew.

On balance, since the disadvantages of Joseph’s being a butler far outweigh the advantages, his being a butler is fairly irrelevant to his being a murderer. It’s really just his profession: most murderers have a day-job, and there’s no particular reason it shouldn’t be butling. In this case his being the butler of the narrator was something of a camouflage; it meant that she didn’t notice him. His many years of loyal service also made her affectionate toward him, and this, combined with the murders happening nowhere he was supposed to be and her always thinking of him as having no existence beyond being her butler, disguised him as a suspect. But it didn’t disguise him totally. One of the themes of the book is how little one really knows of the people one thinks one knows, and the fact that Joseph had a wife somewhere, though Elizabeth Jane had no idea where, does highlight this blindness in a way that makes it fair game for the reader not to be so blind. In fact, I would argue that that line by Elizabeth Jane is a well-crafted notice to the reader that Joseph is a potential suspect.

Further, if the test of victory in the contest between the reader and the writer of a murder mystery is that the writer wins if the reader doesn’t guess who the murderer is but blames himself rather than the writer for it, then I believe that The Door has the potential for victory. Reading it through while knowing what to look for, I think that Rinehart did play fair with the reader. Certainly it seems possible she knew who the murderer was from the first, and did not merely cast about for someone she hadn’t already ruled out when she came to the ending. So I don’t think that there’s any cogent criticism to be made of her choice of murderer. (Except, perhaps, that it’s a little odd for someone who engages in fraud, forgery, and conspiracy—which eventually leads to multiple murders to cover those up—to have no criminal history, but instead a long and unmarred career in positions of significant trust.)

So we come to the question of whether it is legitimate to say, as Wikipedia does (as of the time of this writing), “Rinehart is considered the source of the phrase ‘The butler did it’ from her novel The Door (1930), although the novel does not use the exact phrase.” Not only does the novel not use that exact phrase, it doesn’t use any even somewhat similar phrase. I’m going to quote the reveal in the novel, but I need to mention a little context first. Joseph had been mysteriously shot in the collar bone about a week before, but he was not killed and recovered enough to come back to his duties, though with his arm in a sling. Elizabeth Jane had, therefore, given him leave to go on holiday to recover. We have not learned up to this point who Joseph’s wife is, but we can mostly guess that it was a woman who figured into the plot elsewhere, whom we knew to be dying of inoperable cancer. We’re picking up with the tail-end of the explanation given privately to Elizabeth Jane by the police detective. During the explanation he had been calling the murderer “James C. Norton”, which he told her was the pseudonym the murderer had used to procure a safe deposit box. So, with that said, here is the reveal in the novel:

“So we got him. We’d had his house surrounded, and he hadn’t a chance. He walked out of that house tonight in a driving storm, and got into a car, the same car he had been using all along; the car he used to visit Howard Somers and the car in which he had carried Florence Gunther to her death, under pretext of bringing her here to you.

“But he was too quick for us, Miss Bell. That’s why I say I bungled the job. He had some cyanide ready. He looked at the car, saw the men in and around it, said, ‘Well Gentlemen, I see I am not to have my holiday—’

“Holiday! You’re not telling me—”

“Quietly, Miss Bell! Why should you be grieved or shocked? What pity have you for this monster, whose very wife crawled out of her deathbed to end his wickedness?”

“He is dead?”

“Yes,” he said, “Joseph Holmes is dead.”

And with that I believe that I fainted. [that’s the last line in the book]

There is nothing there remotely similar to the exact phrase “the butler did it.” As you can see, there is nothing there even related to his being a butler. There were a few things which happened in the house that his living in the house enabled, but much of the criminal activity that actually took place in the house was not in fact Joseph’s doing. The door referred to in the title was a hotel door where a fraud was performed; it was not in the house in which Joseph was a butler, nor even in the same city as the house in which Joseph buttled. Except possibly as a violation of the tacit convention that the butler is the one person who never, ever commits the murder(s) in a murder mystery, his being a butler is utterly irrelevant both to the murders and to whether one suspects him of those murders.

After a bit of research, I found what seems like evidence that Damon Runyon’s What, No Butler? was first published in Collier’s Weekly on August 5th, 1933. That is not early enough to prove that the joke that the butler always does it was already common when The Door was published, three years earlier, but I think it does suggest it. Given what the book actually is, and the timing of it relative to jokes about the butler always being the culprit, I really doubt that The Door was in any way the origin of the phrase. It’s not impossible, but I’d really like to see better evidence than this being the first (and nearly the only) book anyone can find in which a butler actually did it.