The Origin of Rights

The Enlightenment emphasized the rights of man, but a world which thinks only of rights will fall apart, and the Enlightenment gives no framework for reconciling rights with responsibilities. This has left many people unsure of how to reconcile them at all. It’s actually quite simple, as long as you look at the problem in the right way. The key to the whole mess is that rights come from responsibilities.

Obviously rights come from God, since all things come from God, but they don’t come directly from God. The most proximal intermediary in giving human beings rights is the responsibilities that they were given. Whatever a man has a responsibility to do, he has a right to do.

Consider, for example, feeding himself. A man has a responsibility to feed himself. Because of this, he has a right to the things intrinsically necessary to do it, such as the right to own property with which to obtain food, the right to do the labor necessary to procure it, and so on.

Now, it is important to distinguish what is intrinsically necessary to fulfill a responsibility from what may be accidentally necessary. If I don’t happen to have any bread on hand, that doesn’t automatically give me a right to your bread, because it is an accident of circumstances that you have bread on hand while I don’t. A responsibility conveys the rights that anyone would need in order to fulfill a task, not what would be necessary only for one person in some particular moment.

And this is the origin of all rights. Parental rights originate from the parental responsibility to care for one’s child. Speech rights originate from the responsibility to tell the truth. Religious rights originate from the duty to worship God.

Once you look at rights this way, the problem of reconciling them with responsibilities—or of reconciling conflicting rights—becomes a non-issue. Responsibilities exist in a hierarchy, and so whenever a right and a responsibility conflict, or two rights conflict, one merely has to trace each right back to the responsibility from which it derives, compare the two responsibilities, and always fulfill the more important one over the less important one.

This also very neatly solves the problem of how to strongly defend rights without becoming a libertine.

Why Is Determinism Attractive?

I used to assume that people believed in determinism (that human beings do not have free will) merely as a consequence of materialism, and that they weren’t really invested in it. More recently, however, I’ve come to suspect that it is determinism which they are primarily attracted to, and that atheism is a way to achieve that determinism. (Not that they would put it so explicitly, of course.)

One strong reason I suspect this is that we have direct, unequivocal experience of free will. If there weren’t a strong attraction to determinism, this experience would render anything which contradicted free will simply unbelievable. (And for many people, it does just that.) So there must be some deeply compelling reason to want to disbelieve in free will. What can it be?

Before I answer that question, I want to note that there have been several belief systems which denied free will, since there is a hint to the answer of this question in that fact. Hinduism is varied, but at least according to the Hindu philosophers the monism of everything being God leaves no room for individual free will. Free will implies the existence of sin, but since everything is God nothing can be sin. (Ordinary Hindus probably do believe in free will, I should note.) Buddhism does not believe in free will, which is just one of its many contradictions. (By Buddhism I mean the original Buddhism of Siddhartha Gautama, which was a reaction against his failure to achieve happiness as a Hindu yogi; I’m not talking about more modern, often syncretic Buddhisms.) And very interestingly, Martin Luther didn’t believe in free will either. In fact he wrote a whole book about how there’s no such thing as free will. (On the Bondage of the Will. It’s a terrible book.)

Now, what do all these things have in common, and what do they have in common with materialism? They are all reductionist systems. They all posit that reality is less than it seems, in some manner or other. But curiously only two of them are atheistic; the other two are theistic. This suggests that what people really object to is not God, but other people. And indeed, that makes sense in reductionist systems. People are messy. There are so many of them, and if they’re free they’re not explicable by a small number of easily understood rules.

To be content with understanding the universe but not being able to comprehend it (that is, to stand in right intellectual relationship to it but not to be able to fit it inside of one’s head) requires humility, and more than anything it requires trust. Trusting God, specifically (which seems to me to have been Martin Luther’s big hangup). So I suspect something like the following rule is the case:

Those who cannot trust God cannot deal with the existence of their fellow men, and will seek some philosophical means of getting rid of their fellow men as important.

In practice, the really thorny part of one’s fellow human beings is their free will. Thus to any such creature who finds trust in God to be impossible, determinism will have a huge appeal.

(As a post-script, I should note that reducing men to their base instincts is merely a less rigorous way of accomplishing the same denial of free will; wherever you find a man who reduces all men’s actions to greed or lust, you have found a man who doesn’t trust God.)

Admitting One’s Weird

In an interesting essay which I suggest reading, Ed Latimore gives “5 Lessons From Growing Up in the Hood.” One of them in particular caught my eye:

1. Good manners go a long way.

I fought a lot as a kid. That’s just par for the course growing up in the hood. I would have fought a lot more if it wasn’t for one simple phrase: “My bad.” For those of you that don’t speak hood, “My bad” is the equivalent of saying “I’m sorry.”

You bump somebody in a crowd? ‘My bad’ goes a long way. Step on someone’s foot on a crowded bus? Dude might get mad, but you can cool it quick by just saying ‘My bad.’ Say something a little too offensive that gets guys in the mood to fight? Just say ‘My bad’ and dial it down. It’s amazing what an apology can do to cool tempers in the hood.

I didn’t grow up in the hood, nor even particularly close to it, but I found the same thing applies to situations with much lower stakes: being willing to admit error where one can truthfully do so goes a long way to smoothing out human interactions. And the curious thing is that where one is telling the truth in admitting error, most people are very willing to accept that and move on. People, by and large, don’t tolerate affronts to their dignity, but they are very willing to tolerate other people’s human imperfection where it is acknowledged as such and where a person is willing to put in the work to make things right afterwards.

This applies quite a lot in the context of business. If one makes a mistake in a professional setting, simply admitting it in a straightforward way tends to turn such mistakes into a non-issue. Professionals are there to earn money, which they do by solving problems. Co-workers’ mistakes are just one more problem to solve. This can of course become excessive to the point where you are causing more problems than you are solving, but if that’s the case you’re probably a bad fit for your job and should move on for everyone’s sake. But where you are competent at your job, people just don’t really care deeply about the occasional mistake, and if you own up to it, there’s nothing left to talk about, so people just move on.

And it’s that last part that I want to talk about in another context. Most people are weird but hide it; and most people are made very uncomfortable by other people being different (which is just another way of saying that they’re weird). At its root this comes from a tribal instinct; it is not good for man to be alone—and we know it. Differences make us fear rejection, though a little bit of life experience and sense teaches us which differences matter and which don’t. But sense is surprisingly uncommon and learning from life experiences is—for quite possibly related reasons—similarly rare. So a great many people fear whatever is different from them. This can be people who look different but I think it’s far more common to be afraid of people who act differently. And one thing people do when they’re uncomfortable is talk about it.

And this is where admitting that one is weird can be a very useful strategy. To give a concrete example, I shoot an 80# bow. (For a long time it was actually 82#, but string creep eventually set in and for some reason they couldn’t get it back up.) That’s pretty uncommon, these days, especially for someone with a 30″ draw length. Most men shoot a bow somewhere in the range of 55#-70# (women tend to shoot in the 35#-50# range). You’d think that an 80# bow wouldn’t seem that odd to people shooting a 70# bow, but for reasons relating to how many reps you can do in weight-lifting being a function of how close you are to your one-rep max, it actually is a pretty big jump for a lot of people. They could draw the bow, but only a few times an hour. I’m not that strong, but I’m a relatively big guy (6′ tall, over 200lbs) and so I can comfortably shoot my bow for an hour or two at a stretch without losing more accuracy than if I was shooting a 70# or a 60# bow (really the main thing affecting accuracy is that your shoulders get tired of holding the bow up at arm’s length). So it’s a very reasonable thing for me, personally, to do, but it’s pretty odd among people at the archery shop I go to.

And moreover it’s not really necessary. Where I live the only common big game is whitetail deer, and you can reliably kill a whitetail with a 40# bow if you’ve got a good broadhead/arrow setup and are a good shot. I do it because I like it, and because it acts like insurance. With the double-edge single-bevel broadheads I use on top of 0.175″ deflection tapered carbon fiber arrows, the whole thing weighing 715 grains, shot from an 80# bow, if I make a bad shot and hit the large bones my arrow will most likely go right through and kill the animal anyway. And I could use the same setup for hunting moose or buffalo without modification, should I ever get the opportunity. (That would fill the freezer with meat in one shot!)

So, as you can see, from my perspective this is a reasonable thing to do. But from most everyone else’s perspective, it’s weird. And moreover, it’s more than most men at the archery shop I go to can do. Some people there can’t even draw my bow, and many who could would find the strain too much to do more than a few times. It would be easy for people to suspect that I look down on them as lesser because of it, and to reject me in self-defense. If someone you respect looks down on you, it’s painful. If someone you reject as mentally deranged looks down on you, it’s irrelevant.

So when people make jokes about me/my bow being atypical, I go along with it. I will cheerfully admit that I’m engaging in massive overkill; I will joke along with them about the way deer are wearing bullet-proof vests these days. (My setup could probably go through a lighter bullet-proof vest, since broadheads are razor sharp and can cut through kevlar. It has zero chance against the sort of vest with ceramic plates in it.) If someone characterizes me as crazy, I smile and say, “nuts, but I like it.” And in general the joking lasts for a minute, then is forgotten about and things are normal. This is, I think, for two reasons:

  1. I have signaled that I know I am abnormal and am happy with the status of being abnormal. I am clearly indicating that I am not the standard against which others should be measured so I am no threat to anyone’s social standing or sense of self.
  2. It smothers the impulse to joke about me, in the sense of taking the air away from a flame. If you say that someone’s crazy and he smiles and says, “certifiable,” you just don’t have anywhere to go. Joking/teasing requires a difference of opinion. If someone agrees with you, there’s nothing left to say since a man looks like an ass if all he does is repeat himself.

Of course, this does depend on the content of what’s being said about me being something which I can agree with. In this example, “crazy” just means “abnormal,” which is quite true. If someone were to accuse me of being a criminal I would defend myself, not agree with them. The point is not to be a carpet for people to walk on but rather to learn how to pick one’s battles and only fight the ones that need to be fought. That’s a general principle of skill, by the way; skill consists in applying the right amount of force to the right place to generate the best results. A lack of skill wastes force first in applying it to the wrong place and so needing far more force to achieve the desired result, and then in needing to apply more force to correct the problems caused by having applied force to the wrong place. That’s as true of picking one’s battles as it is of swing dancing or balancing in ice skating. Or, for that matter, archery; missing the target in archery often means that you have to spend a lot of effort to pull your arrow out of a tree.

Beauty and the Beholder

In a recent blog post, John C Wright discusses beauty and where it is located. Now, there is nothing wrong with what he says, but I do submit that he ties the meaning of a person who says the words too closely to the literal words that they say. You can see a similar thing when people criticize doing evil that good may come of it with the words, “the ends don’t justify the means”. Taken literally, this is nonsense. Of course means are justified by ends, because nothing else can justify means. There is no such thing as a self-justifying means. Pushing a sharpened piece of steel into somebody’s body is justified or not entirely on the basis of why you’re doing it. Is it to shiv him in revenge for a minor transgression? Or are you a surgeon cutting out a cancer to save his life? Plunging the metal into him is merely a means, and as such must find its justification in the ends for which it is used. What people really mean, of course, is one of “this end doesn’t justify those means” or, more commonly, “remote ends do not justify proximate, intermediate ends”. But it’s less catchy, so you can see why people don’t say what they literally mean. Plus most people would have to look up what proximate means (near, or right next to; compare approximate, an estimate that only comes close).

I submit that the same thing applies to the phrase, “beauty is in the eye of the beholder”. Taken literally, it is as Mr. Wright says a denial of beauty as a concept. But many if not most people do not mean it literally. (I’m on relatively safe ground here; when discussing anything less practical than passing the salt at dinner, people rarely mean what they say literally.) There is a real thing which is being described, which is that beauty is a direct perception of the goodness of God as reflected by the goodness in creation, and each person is given a different (if largely overlapping) perspective on the goodness of God, and hence what precise goodness each man is able to perceive does vary. Thus when beholding any particular beautiful thing, one man may see the goodness of God revealed clearly in it because it matches what he was made to see, while another may see it only dimly because he was given something else to see clearly. To generalize, there are those who like roller coasters and in them appreciate the power of God in velocity and turning; this is an aspect of God’s goodness I see only dimly, while I appreciate the stillness of a forest and the loudness of leaves falling to the ground in it quite a lot. Now, my inability to perceive God’s goodness in the rush of the roller coaster does not mean that it is not there, any more than a deaf man’s inability to hear the beauty in Mozart’s music means that it is not beautiful.

It is quite wrong to say that beauty is in the eye of the beholder, but it is quite accurate to say that perceiving beauty depends on the eye of the beholder. But the second phrase is harder on the ear, and when it comes to expressing truths most people are far more traditionalists than they are philosophers, and those of us who are capable of saying what we mean should always look out in charity for those who are not. On the other hand, it is always good to give people who misuse common phrases a (metaphorical) hard slap upside the head to try to bring them to their senses, which I think is what Mr. Wright intends. So please take this post as an elaboration on the subject Mr. Wright is speaking about, and not a contention with Mr. Wright’s post.

The Probability of Theology

This is the script to my video, The Probability of Theology:

As always, it was written (by me) for me to read aloud, but it should be pretty readable.

Today I’m going to be answering a question I got from the nephew of a friend of mine from the local Chesterton society. He’s a bright young man who was (I believe) raised without any religion, and has been introduced by his aunt to some real, adult theology, and has the intellectual integrity to seriously consider it until he can see how it’s either true or definitely wrong. Here’s his question:

I am an atheist, mostly due to a few primary objections I have with religion in general, the most prominent of which is that since there are infinite possible theologies, all with the same likelihood of being true, the probability of one single man-made theology such as Christianity, Judaism, or Islam being true is approximately zero. My aunt … is quite convinced that you can prove this idea false [and] we are both hoping that you could make a … video about this on your channel, if possible. We will be eagerly awaiting your response.

This is an excellent example of how it’s possible to ask in a few words a question which takes many pages to answer. I will attempt to be brief, but there’s a lot to unpack here, so buckle up, because it’s going to be quite a ride.

The first thing I think we need to look at is the idea of a man-made theology. And in fact there are two very distinct ideas in this, which we need to address separately. First is the concept of knowledge, which as I’ve alluded to in previous videos was hacked into an almost unrecognizable form in the Enlightenment. Originally, knowledge meant the conformity of the mind to reality, and though in no small part mediated by the senses, nonetheless, knowledge was understood to be a relatively direct thing. In knowledge, the mind genuinely came in contact with the world. All this changed in the aftermath of Modern Philosophy. It would take too long to give a history of it, so the short version is: blame Descartes and Kant. But the upshot is that the modern conception of knowledge is at best indirect and at worst nothing at all; knowledge—to the degree it’s even thought possible—is supposed to consist of creating mental models with one’s imagination and trying to find out whether they correlate with reality and, if so, to what degree. Thus there is, in the modern concept of “knowledge”—the scare quotes are essential—a complete disconnect between the mind and the world. The mind is trapped inside of the skull and cannot get out; it can only look through some dirty windows and make guesses.

This approach of making guesses and attempting (where practical) to verify them has worked well in the physical sciences, though both the degree to which it has worked and the degree to which this is even how physical science is typically carried on, is somewhat exaggerated. But outside of the physical sciences it has largely proved a failure. One need only look at the “soft sciences” to see that this is often just story-telling that borrows authority by dressing up like physicists. It is an unmitigated disaster if it’s ever applied to ordinary life; to friends and family, to listening to music and telling jokes.

There have been a few theologies which have been man-made in this modern sense; that is, created out of someone’s imagination then compared against reality—the deism that conceives of God as winding a clock and letting it go comes to mind—but this is quite atypical, and really only exists as a degeneration of a previous theology. Most theologies describe reality in the older sense; descriptively, not creatively. It is true that many of them use stories which are not literally true in order to convey important but difficult truths narratively. This is because anyone who wants to be understood—by more than a few gifted philosophers—communicates important truths as narratives. Comparatively speaking, it doesn’t matter at all whether George Washington admitted to cutting down a cherry tree because he could not tell a lie; the story conveys the idea that telling the truth is a better thing than avoiding the consequences of one’s actions, and that lesson is very true. It may well be that there was never a boy who cried “wolf!” for fun until people didn’t believe him; it’s quite possible no one was ever eaten by a wolf because he had sounded too many false alarms to be believed when he sounded a real one. But none of that matters, because it is very true that it is a terrible idea to sound false alarms, and that sounding false alarms makes true alarms less likely to be believed. None of these are theories someone made up then tested; they are knowledge of real life which is communicated through stories which are made up for the sake of clarity. And so it is with the mythology of religions. Even where they are not literally true, they are describing something true which people have encountered. I am not, of course, saying that this is what all religion is, but all religions do have this as an element, because all religions attempt to make deep truths known to simple people. 

So when considering anything from any religion, the first and most important question to ask about it is: what do the adherents mean by it? This is where fundamentalists of all stripes—theistic and atheistic alike—go wrong. They only ever ask what they themselves mean by what the adherents of a religion say.

So this is the first thing we must get clear: theologies are not man-made in the sense of having been created out of a man’s imagination. They are not all equally correct, of course; some theologies have far more truth in them than others, but all have some truth, and the real question about any religion is: what are the truths that it is trying to describe? Christianity describes far more truth than Buddhism does, but Buddhism is popular precisely because it does describe some truths: the world is not simply what it appears at first glance; the more we try to live according to the world the more entangled in it we get and the worse off we are; and by learning to be detached from the world we can improve our lot. It is not the case—as many Buddhisms hold—that we must reject the world outright; we need a proper relationship to it, which Saint Francis captured in his Canticle of the Sun. The world is our sibling, neither our master nor our slave. And so it goes with all religions: they are all right about at least something, because the only reason any of them existed at all was because somebody discovered something profoundly true about the world. (Pastafarianism being the exception which proves the rule; the flying spaghetti monster is a joke precisely because it was simply made up and does not embody anything true about the world. Even the Invisible Pink Unicorn falls short of this; it embodies the truth that some people don’t understand what mysteries actually are.)

The second thing we must address in the man-made part of “man-made theologies” is that—at least according to them—not all theologies are made by man, even in the more ancient sense of originating in human knowledge. The theology of Christianity originated with God, not with man. Christian theology is primarily the self-revelation of God to man. And we have every reason to believe that God would be entirely correct about Himself.

Now of course I can hear a throng of atheists screaming as one, “but how do you know that’s true?!? You didn’t hear God say it, all you’ve heard is people repeating what they say God said.” Actually, these days, they’re more likely to say, “where’s your evidence”, or accuse me of committing logical fallacies that I can’t be committing, and that they can’t even correctly define, but for the sake of time let’s pretend that only top-tier atheists watch my videos.

Oh what a nice world that would be.

Anyway, this gets to a mistake I’ve seen a lot of atheists make: evaluating religious claims on the assumption that they’re false. There’s a related example which is a bit clearer, so I’m going to give that example, then come back and show how the same thing applies here. There are people who question the validity of scripture on the basis of copying errors. “In two thousand years the texts were copied and recopied so many times we have no way of knowing what the originals said,” sums it up enough for the moment. This objection assumes that the rate of copying errors in the gospels is the same as for all other ancient documents. Actually, it also exaggerates the rate of copying errors on ancient documents, but that’s beside the point. It is reasonable enough to assume that the rate of copying errors in Christian scriptures does not greatly differ from that of other documents, if Christianity is false. Well, actually, even that is iffy since a document people hold in special reverence may get special care even if that reverence is mistaken, but forget about that for now. If Christianity is true, the gospels are not an ordinary document. They are an important part of God’s plan of salvation for us, which he entrusted to a church he personally founded and has carefully looked over throughout time, guarding it from error. In that circumstance, it would be absurd to suppose that copying errors would distort the meaning of the text despite the power of God preventing that from happening. Thus it is clear that the rate of copying errors is not a question which is independent of the truth of Christianity, and therefore a presumed rate of copying errors cannot be used as an argument against the truth of Christianity precisely because whatever rate is presumed will contain in it an assumption of the truth or falsehood of Christianity. 

(I should point out that what we would expect—and what the Church claims—is that God would safeguard the meaningful truth of revelation, not the insignificant details. That is, we would expect that if Christianity was true God would keep significant errors from predominating, not that he would turn scribes into photocopying machines—within Christianity God places a great deal of emphasis on free will and human cooperation. And as it happens, we have some very old copies of the gospels and while there have been the occasional copying errors, none of them have amounted to a doctrinally significant difference. Make of that what you will.)

So bringing this example back to the original point, whether Christian theology is man-made is not a question which is independent of the question of whether Christianity is true. If Christianity is false, then its theology is man-made. But if Christianity is true, then its theology is not man-made, but revealed. And as I said, while men often make mistakes, we can trust God to accurately describe himself.

So, to recap: theology is descriptive, not constructive, and in historically-based religions like Christianity, theology is revealed, not man-made. So now we can move onto the question of probabilities.

First, there is the issue that probability says nothing about one-offs. I covered this in my video The Problem with Probability, so I won’t go into that here, but since I’ve heard the objection that I only discussed the frequentist interpretation of probability, I will mention that if you want to go with a Bayesian interpretation of probability, all you’re saying by assigning a probability of zero to an event is that it’s not part of your model. Now in the question we’re addressing, it’s not a probability of zero that’s being assigned but rather “approximately zero”. But the thing about the Bayesian interpretation is that probability is at least as much a description of the statistician as it is of the real world. It is, essentially, a way to quantify how little you know. Now, sometimes you have to make decisions and take actions with whatever knowledge you have at the moment, but often the correct thing to do is: learn. There is no interpretation of statistics which turns ignorance into knowledge, or in Bayesian terms, the way to get better priors is outside of the scope of Bayesian statistics.
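The point that a Bayesian probability describes the statistician’s model rather than the world can be made concrete. Under Bayes’ rule, a hypothesis assigned a prior of zero stays at zero no matter how strong the evidence in its favor, so assigning zero (or “approximately zero”) up front is a statement about what your model will consider, not a conclusion. A minimal sketch (the function and the particular numbers are my own illustration, not from the video):

```python
def posterior(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Evidence that strongly favors the hypothesis (P(E|H) = 0.99)...
print(posterior(prior=0.0, likelihood=0.99, evidence_prob=0.5))   # 0.0
# ...cannot move a zero prior: the hypothesis was simply excluded
# from the model, and no amount of evidence can bring it back in.

# A small but nonzero prior, by contrast, grows under the same evidence:
print(posterior(prior=0.01, likelihood=0.99, evidence_prob=0.5))  # 0.0198
```

This is sometimes called Cromwell’s rule: reserve probability zero for the genuinely impossible, because anything assigned zero is unrevisable thereafter.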

But more importantly, this atomization of theologies is very misleading. Among all of the possible theologies, many of them have a great deal in common. They do not have everything important in common, obviously. There are some very substantial differences between, say, Greek Orthodoxy and say, Theravada Buddhism. But for all their differences, Islam, Christianity, Judaism, Baha’i, Sikhism, and several others have quite a lot in common. They all worship the uncreated creator of all that is. That’s actually a pretty big thing, which is to say that it’s very important. An uncreated creator who transcends time and space has all sorts of implications on the coherency of contingent beings within time (such as ourselves), the existence of a transcendent meaning to life, and lots of other things. This is in contrast to things that don’t matter much, like whether there is an Angel who has a scroll with all of the names of the blessed written on it. Whether there is one or isn’t doesn’t really matter very much. Grouping those two distinctions together as if they were of equal importance is highly misleading. Now, granted, there are all too many people who take a tribalistic, all-or-nothing approach to religion where the key thing is to pick the right group to formally pledge allegiance to. But one of the things which follows from belief in an uncreated creator is that this primitive, tribalistic approach is a human invention which is not an accurate description of reality. An uncreated creator cannot need us nor benefit from us, so he must have created us for our own sake, and so our salvation must be primarily not about something superficial like a formal pledge of allegiance, but about truth and goodness. And by goodness I mean conformity of action to the fullness of truth. 

For more on this, I’ll link my video debunking Believe-or-Burn, but for the moment, suffice it to say that being fairly correct, theologically, must be of some greater-than-zero value under any coherent theology with an uncreated creator behind all that exists. The correct approach is not to give up if you can’t be completely correct. It’s to try to be as correct as possible.

And in any event there is no default position. Atheism is as much a philosophical position as any theology is. Well, that’s not strictly true. There is a default position, which is that there is Nothing. But that’s clearly wrong: there is something, so the default position is out. And while in a dictionary sense atheism is nothing but the disbelief in God—or for the moment it doesn’t even matter if you’re too intellectually weak for that and want to define atheism as the mere lack of a belief in God—western atheists tend to believe in the existence of matter, at least, as well as immaterial things like forces and laws of nature. So each atheist has a belief system, even if some refuse to admit it. The only way to not have a belief system is to give yourself a lobotomy. But until you do, since you have a belief system, it is as capable of being wrong as any theology is. And does it really seem plausible that, if Christianity is true but the version of Christianity you’ve encountered is a little inaccurate, you’ll be better off as an atheist?

I think that nearly answers the question, but there is a final topic which I think may answer an implicit part of it: while there are infinitely many theologies which are theoretically possible, in practice there haven’t actually been all that many. This is something I’m going to cover more in my upcoming video series surveying the world’s religions, but while there is certainly more than one religion in the world, there aren’t nearly as many as many modern western people seem to think. Usually large numbers are arrived at by counting every pagan pantheon as a different religion, but this is not in fact how the pagans themselves thought of things. I don’t have the time to go into it—I addressed this somewhat in my video on fundamentalists, and will address it more in the future—but actual pagans thought of themselves as sharing a religion, just having some different gods and some different names for the same gods. It’s like how French and American zoos don’t have all the same animals, and don’t use the same names for the animals they do have in common, but each will certainly recognize the other as a zoo. American zookeepers do not disbelieve in the French “python réticulé” (reticulated python).

And so it goes with other differences; those who worship nature worship the same nature. All sun worshippers worship the same sun. Those who believe in an uncreated creator recognize that others who believe in an uncreated creator are talking about the same thing, and generally hold that he can be known to some degree through examination of his creation, so they will tend to understand others who believe in an uncreated creator as having stumbled into the same basic knowledge.

And this explains why minor religions tend to die out as small groups make contact with larger groups. Those religions which are more thoroughly developed—which present more truth in an intelligible way—will appeal to those who on their own only developed a very rudimentary recognition and expression of those truths. There has been conversion by the sword in history, though it is most associated with Islam and often exaggerated in other faiths, but it is not generally necessary. When people come into contact with a religion which has a fuller expression of truth than the one they grew up with, they usually want to convert, because people naturally want the truth and are attracted to intelligible expressions of it. And the key point is that the expressions of truth in better developed religions are intelligible precisely because they are fuller expressions of truths already found in one’s native religion. And this is so because religions are founded for a reason.

I know there’s a common myth that religion was invented as bad science, usually something to the effect that people invented gods of nature in order to make nature seem intelligible. The fact that this is exactly backwards from what personifying inanimate objects does should be a sufficient clue that this is not the origin of religion. Think about the objects in your own life that people personify: “the printer is touchy”, “the traffic light hates me”, “don’t let the plant hear that I said it’s doing well, because it will die on me out of spite”. Mostly this is just giving voice to our bewilderment at how these things work, but if it affects how mysterious the things are at all, it makes them more mysterious, not less. If you think the printer is picky about what it prints, you’ll wonder at great length what it is about your documents it disapproves of. If you think of it as a mere machine, you turn it off, take it apart, put it back together again, and turn it on. Or you call a repairman.
But if you personify it, you’ll wrap your life up in the mystery of its preferences. And anyone with any great experience of human beings has seen this. Especially if you’ve ever been the repairman to whom the printer is just a machine.

It’s also, incidentally, why many atheists have developed a shadowy, mysterious thing called “religion” which desires to subjugate humanity.

People personify what they don’t understand to communicate that it is mysterious, not to make it less mysterious. And they do this because people—having free will—are inherently and irreducibly mysterious.

So if you look past the mere surface differences, you will find that religions have generally originated for very similar reasons. So much so that more than a few people who haven’t studied the world’s religions enough are tempted to claim that there is only one religion universal to all of mankind, with all differences being mere surface appearance. That’s not true either, but that this mistake is possible at all is significant. Religions are founded for a reason, and that’s why there aren’t infinitely many of them.

Until next time, may you hit everything you aim at.

Authority Figures in Movies

One of the curious things about the roles of authority figures in movies is that they are very rarely played by people who have ever had any authority. One might think that this wouldn’t have too much of an impact, since the actors are just reciting dialog which other people wrote. (People who most of the time haven’t had any authority themselves, but that’s a somewhat separate matter.) And in the end, authority is the ability to use force to compel people, so does it matter much which mannerisms an actor uses?

Actually, yes, because in fact a great deal of authority, in practice, is about using social skills to get people to cooperate without having to use one’s authority. And a great deal of social skill is body language, tone of voice, emphasis, and pacing. Kind of like the famous advice Dalton gives in Road House: be nice.

For some reason, authority figures are usually portrayed as grim and stern—at this point, I think, because it’s a shorthand so you can tell who is who—but there is a great deal which can be accomplished by smiling. Many people seem to have the odd idea that smiling is only sincere if it is an instinctual, uncontrollable reaction. I’ve no idea where this crazy notion came from, but in fact smiling is primarily a form of communication. It communicates that one is not (immediately) a threat, that (in the moment) one intends cooperation, that the order of the moment is conversation rather than action. Like all communication it can of course be a lie, but the solution to that is very simple: don’t lie with your smile. Words can be lies, but the solution is not to refrain from speaking unless you can’t help yourself; it’s to tell the truth when you open your mouth. So tell the truth when you smile, too. And since actions are choices, one very viable option, if you smile at someone, is to follow through and (in the moment) be nice.

Anyone (sane) who has a dog knows that in many ways they’re terrible creatures. They steal your food, destroy everyday items, throw up on your floor when they’ve eaten things that aren’t food, get dog hair everywhere, and make your couches stink of dog. And yet, people love dogs who do these things to them for a very simple reason: any time you come home, your dog smiles at you and wags its tail and is glad to see you. And it’s human nature that it’s impossible to be angry at someone who is just so gosh darned happy that you’re in the same room as them.

People in authority are rarely there because they have a history of failure and incompetence at dealing with people; it may be a convenient movie shorthand that people in authority are stone-faced, grumpy, and stern, but in real life people in positions of authority are generally friendly. It’s easy to read too much into that friendliness, of course—they’re only friendly so long as you stay on the right side of what you’re supposed to be doing—but this unrealistic movie shorthand makes for far less interesting characters.

And I suppose I should note that there are some people in positions of authority who are often stone-faced and grim, but these are usually the people responsible for administering discipline to those already known to be transgressors. This is especially true of those dealing with children, who have little self-control and less of a grasp of the gravity of most situations they’re in, and who need all the help they can get in realizing that it’s not playtime. By contrast, during the short time I was able to take part in my parish’s prison ministry, I noticed that the prison guards were generally friendly (if guardedly so) with the inmates. Basically, being friendly can invite people to try to take liberties, but being grumpy gets far less cooperation; and outside of places like Nazi death camps, where you are actually willing to shoot people for being uncooperative, cooperation is worth far more than the occasional inconvenience of having to tell someone “no.”

But most of the actors who play authority figures don’t know any of this; and when you research the individual actors they often turn out to be goofballs who don’t like authority and whose portrayal of it is largely formed by what they most dislike about it.

Atheism is Not a Religion

This is the script to my video, Atheism is Not a Religion. As always, it was written to be listened to when I read it aloud, but it should be pretty readable as text, too.

Today we’re going to look at a topic which a casual survey of atheist youtube channels and twitter feeds suggests is of importance to many atheists: that atheism is not a religion. Now, since the one thing you can’t convict internet atheists of is originality, I assume that this is because there are Christians who claim that atheism is a religion. Of course, what they probably mean by this is that atheism entails a set of metaphysical beliefs. And this is true enough, at least as a practical matter, even if some atheists will scream at you until they’re blue in the face that it’s not what they believe in theory. But merely having metaphysical beliefs does not make something a religion; it makes it a philosophy, or in more modern terms, a world-view. But a religion is far more than merely a world-view or a set of beliefs. As Saint James noted, the demons believe in God.

The first and most obvious thing which atheism lacks is: worship. Atheists do not worship anything. I know that Auguste Comte tried to remedy this with his calendar of secular holidays, but that went nowhere and has been mostly forgotten except perhaps in a joke G. K. Chesterton made about it. A few atheists have made a half-hearted go of trying to worship science. And if that had any lasting power, Sunday services might include playing a clip from Cosmos: A Spacetime Odyssey. But the would-be science worshippers haven’t gotten that far, and it is highly doubtful they ever will.

Secular Humanism is sometimes brought up as something like a religious substitute, but so far it only appears to be a name, a logo, some manifestos no one cares about, and the belief that maybe it’s possible to have morality without religion. And humanity is not a workable object of worship anyway. First, because it’s too amorphous to worship—as Chesterton noted, a god composed of seven billion persons, neither dividing the substance nor confounding the persons, is hard to believe in. Second, because worshipping humanity involves worshipping Hitler and Stalin and Mao and so forth.

Which brings us to Marxism, which is perhaps the closest thing to a secular religion so far devised. But while Marxism does focus the believer’s attention on a utopia which will someday arrive, and certainly gets people to be willing to shed an awful lot of innocent blood to make it happen sooner, I don’t think that this really constitutes worship. It’s a goal, and men will kill and die for goals, but they can’t really worship goals. Goals only really exist in the people who have them, and you can only worship what you believe actually exists.

It is sometimes argued that within a Marxist state people worship the state, but while this is something put on propaganda posters, the people who lived in Marxist nations don’t report anyone actually engaging in this sort of worship, at least not sincerely.

And I know that some people will say that atheists worship themselves—I suspect because almost all atheists define morality as nothing more than a personal preference—but I’ve never seen that as anything more than a half-hearted attempt to answer the question of “what is the ground of morality,” rather than any sort of motivating belief. And in any event, it is inherently impossible to worship oneself. Worshipping something is recognizing it as above oneself, and it is not possible to place oneself above oneself. I think the physical metaphor suffices: if you are kneeling, you can’t look up and see your own feet. You might be able to see an image of yourself in a mirror, but that is not the same, and whatever fascination it might have is still not worship. So no, atheism does not worship anything.

The second reason why atheism is not a religion is that atheism gives you no one to pray to. Prayer is a very interesting phenomenon, and is much misunderstood by those who are not religious and, frankly, by many who are, but it is, at its core, talking with someone who actually understands what is said. People do not ever truly understand each other, because the mediation of words always strips some of the meaning away, and the fact that every word means multiple things always introduces ambiguity. Like all good things in religion this reaches its crescendo in Christianity, but even in the public prayers said over pagan altars there is the experience of real communication, in its etymological sense: com (together), unication (being one). It is in prayer—and only in prayer—that we are not alone. Atheists may decry this as talking with our imaginary friends if they like—and many of them certainly seem to like to—but in any event they are left where all men who are not praying are left: alone in the crowd of humanity, never really understood and so only ever loved very imperfectly at best. (I will note that this point will be lost on people who have never taken the trouble to find out what somebody else really means, and so assume that everyone else means exactly the same things that they would mean by those words, and so assume that all communication goes perfectly. You can usually identify such people by the way they think that everyone around them who doesn’t entirely agree with them is stupid. It’s the only conclusion left open to them.)

The third reason why atheism is not a religion is that it does not, in any way, serve the primary purpose of religion. The thing you find common to all religions—the thing at the center of all religions—is putting man into his proper relation with all that is; with the cosmos, in the Greek sense of the word. Anyone who looks at the world sees that there is a hierarchy of being; that plants are more than dust, and beasts are more than plants, and human beings are more than beasts. But if you spend any time with human beings—and I mean literally any time—you will immediately know that human beings are not the most that can be. All that we can see and hear and smell and taste and touch in this world forms an arrow which does not point at us but does run through us, pointing at something else. The primary purpose of a religion is to acknowledge that and to get it right. Of course various religions get it right to various degrees; those who understand that it points to an uncreated creator who loved the world into existence out of nothing get it far more right than those who merely believe in powerful intelligences beyond ours. Though if you look carefully, even those who apparently don’t often seem to have their suspicions that there’s something important they don’t know about. But be that as it may, all religions know that there is something more than man, and give their adherents a way of putting themselves below what they are below; of standing in a right relation to that which is above them. In short, the primary purpose of all religion is humility.

And this, atheism most certainly does not have. It doesn’t matter whether you define atheism as a positive denial or a passive lack; either way atheism gives you absolutely no way to be in a right relationship to anything above you, because it doesn’t believe in anything above you. Even worse, atheism has a strong tendency, at least in the west, to collapse the hierarchy of being in the other direction, too. It is no accident that pets are acquiring human rights and that there are some fringe groups trying to sue for the release of zoo animals under the theory of habeas corpus. Without someone who intended to make something out of the constituent particles which make us up, there is ultimately no reason why any particular configuration of quarks and electrons should mean anything more than any other one; human beings are simply the cleverest of the beasts that crawl the earth, and the beasts are simply the most active of the dust which is imprisoned on the earth.

We each have our preferences, of course, but anyone with any wide experience of human beings knows that we don’t all have the same preferences, and since the misanthropes are dangerous and have good reason to lie to us, those who don’t look out for themselves quickly become the victims of those who do. Call it foreigners or racists or patriarchy or gynocentrism or rape culture or the disposable male or communism or capitalism—or call it nature red in tooth and claw, if you want to be more poetic about it—but sooner or later you will find out that human beings, like the rest of the world, are dangerous.

Religious people know very well that other human beings are dangerous; there is no way in this world to get rid of temptation and sin. But religion gives the possibility of overcoming the collapsing in upon ourselves for which atheism gives no escape.

For some reason we always talk about pride puffing someone up, but this is almost the exact opposite of what it actually does. It’s an understandable mistake, but it is a mistake. Pride doesn’t puff the self up, it shrinks it down. It just shrinks the rest of the world down first.

In conclusion, I can see why my co-religionists would be tempted to say that atheism is a religion. There are atheist leaders who look for all the world like charismatic preachers, and atheist organizations that serve no discernible secular purpose. Though not all atheists believe the same things, most believe such extremely similar things that they could be identified on that basis. Individual atheists almost invariably hold unprovable dogmas with a blind certainty that makes the average Christian look like a skeptic. And so on; one could go on at length about how atheism looks like a religion. But all these are mere external trappings. Atheism is not a religion, which is a great pity, because atheists would be far better off if it were.

Two Interesting Questions

On Twitter, @philomonty, who I believe is best described as an agnostic (he can’t tell whether nihilism or Catholicism is true), made two video requests. Here are the questions he gave me:

  1. If atheism is a cognitive defect, how may one relieve it?
  2. How can an atheist believe in Christ, when he does not know him? Not everyone has mystical experiences, so not everyone has a point of contact which establishes trust between persons, as seen in everyday life.

I suspect that I will tackle these in two separate videos, especially because the second is a question which applies to far more than just atheists. They’re also fairly big questions, so it will take me a while to work out how I want to answer them. 🙂

The first question is especially tricky because I believe there are several different kinds of cognitive defects which can lead to atheism. Not everyone is a mystic, but if a person who isn’t demands mystical experience as the condition for belief, he will go very wrong. If a person who is a mystic has mystical experiences but denies them, he will go very wrong, but in a different way. There are also people who are far too trusting of the culture they’re in, thinking that fitting into it is the fullness of being human, so they will necessarily reject anything which makes it impossible, or even just harder, to fit in. These, too, will go very wrong, but in a different way from the previous ones.

To some degree this is a reference to my friend Eve Keneinan’s view that atheism is primarily caused by some sort of cognitive defect, such as an inability to sense the numinous (basically, lacking a sensus divinitatis). Since I’ve never experienced that myself, I’m certain it can’t be the entire story, though to the degree that it is part of the story, it would come under the category of non-mystics who demand mystical experience. Or, possibly, mystics who have been damaged by something, though I am very dubious about that possibility. God curtails the amount of evil possible in the world to what allows for good, after all, so while that is not a conclusive argument, it does seem likely to me that God would not permit anything to make it impossible for a person to believe in him.

Anyway, these are just some initial thoughts on the topic which I’ll be mulling over as I consider how to answer. Interesting questions.

The Dunning-Kruger Effect

(This is the script for my video about the Dunning-Kruger effect. While I wrote it to be read out loud by someone who inflects words like I do, i.e. by me, it should be pretty readable as text.)

Today we’re going to be looking at the Dunning-Kruger effect. This is the other topic requested by PickUpYourPantsPatrol—once again, thanks for the request!—and if you’ve disagreed with anyone on the internet in the last few years, you’ve probably been accused of suffering from it.

Perhaps the best summary of the popular version of the Dunning-Kruger effect was given by John Cleese:

The problem with people like this is that they have no idea how stupid they are. You see, if you are very very stupid, how can you possibly realize that you are very very stupid? You’d have to be relatively intelligent to know how stupid you are. There’s a wonderful bit of research by a guy called David Dunning who’s pointed out that to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place. This means, if you’re absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.

There are plenty of things to say about this summary, as well as about the curious problem that if an idiot is talking to an intelligent person, absent reputation being available, there is a near-certainty that each will think the other an idiot. But before I get into any of that, I’d like to talk about the Dunning-Kruger study itself, because I read the paper which Dunning and Kruger published in 1999, and it’s quite interesting.

The first thing to note about the paper is that it actually discusses four studies which the researchers did, trying to test specific ideas about incompetence and self-evaluation which, the paper itself points out, were already common knowledge. For example, they have a very on-point quotation from Thomas Jefferson. But, they note, this common wisdom that fools often don’t know that they’re fools had never been rigorously tested in the field of psychology, so they tested it.

The second thing to note about this study is that—as I understand is very common in psychological studies—their research subjects were all students taking psychology courses who received extra credit for participating. Now, these four studies were conducted at Cornell University, and the subjects were all undergraduates, so right away generalizing to the larger population is suspect, since there’s good reason to believe that undergraduates at an Ivy League university have more than a few things in common which they don’t share with the rest of humanity. This is especially the case because the researchers were testing self-evaluation of performance, which is something that Cornell undergraduates were selected for and have a lot invested in. They are, in some sense, the elite of society, or so at least I suspect most of them have been told, even if not every one of them believes it.

Moreover, the tests which they were given—which I’ll go into in detail in a minute—were all academic tests, given to people who were there because they had generally been good at academics. Ivy League undergraduates are perhaps the people most likely to have falsely high impressions of how good they are at academic tests. This is especially the case if any of these were freshman classes (they don’t say), since a freshman at an Ivy League school has impressed the admissions board but hasn’t had the opportunity to fail out yet.
So, right off the bat the general utility of this study in confirming popular wisdom is suspect; popular opinion may have to stand on its own. On the other hand, this may be nearly the perfect study to explain the phenomenon Nassim Nicholas Taleb described as Intellectual Yet Idiot—credentialed people who have the role of intellectuals yet little of the knowledge and none of the wisdom for acting the part.

Be that as it may, let’s look at the four studies described. The first study is in many ways the strangest, since it was a test of evaluating humor. They created a compilation of 30 jokes from several sources, then had a panel of 8 professional comedians rate these jokes on a scale from 1 to 11. After throwing out one outlier, they took the mean answers as the “correct” answers, then gave the same test to “65 cornell undergraduates from a variety of courses in psychology who earned extra credit for their participation”.
They found that the people in the bottom quartile of test scores, who by definition have an average rank at the twelfth percentile, guessed (on average) that their rank was the 66th percentile. The bottom three quartiles overestimated their rank, while the top quartile underestimated theirs, thinking that they were in the (eyeballing it from the graph) 75th percentile when in fact (again, by definition) they were in the 88th.
This is, I think, the least interesting of the studies, first because the way they came up with “right” and “wrong” answers is very suspect, and second because this isn’t necessarily about people mis-estimating their own ability; it could be entirely about mis-estimating their peers’ ability. The fact that everyone put their average rank in the class between the 66th and 75th percentiles may just mean that, in default of knowing how they did, Cornell students are used to guessing that they got somewhere between a B- and a B+. Given that they were admitted to Cornell, that guess may have a lot of history to back it up.

The next test, though unfortunately only given to 45 Cornell students, is far more interesting both because it used 20 questions on logical reasoning taken from an LSAT prep book—so we’re dealing with questions where there is an unambiguously right answer—and because in addition to asking students how they thought they ranked, they asked the students how many questions they thought that they got right. It’s that last part that’s really interesting, because that’s a far more direct measure of how much the students thought that they knew. And in this case, the bottom quartile thought that they got 14.2 questions right while they actually got 9.6 right. The top quartile, by contrast, thought that they got 14 correct when they actually got 16.9 correct.

So, first, the effect does in fact hold up with unambiguous answers. The bottom quartile of performers thought that they got more questions right than they did. So far, so good. But the magnitude of the error is not nearly as great as it was for the ranking error, especially for the bottom quartile. Speaking loosely, the bottom quartile knew half of the material and thought that they knew three quarters of it. That is a significant error, in the sense of being a meaningful error, but at the same time they thought that they knew about 48% more than they did, not 48,000% more than they did. The 11 or so Cornell undergraduates in this quartile did have an over-inflated sense of their ability, to be sure, but they also had a basic competence in the field. To put this in perspective, the top quartile only scored 76% better than the bottom quartile.
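The arithmetic behind those percentages is straightforward; here is a minimal sketch, using only the averages quoted above from the paper (the variable names are mine):

```python
# Averages from the logical-reasoning study, as quoted in the text above.
bottom_actual = 9.6      # bottom quartile: questions actually right (of 20)
bottom_estimated = 14.2  # bottom quartile: questions they thought they got right
top_actual = 16.9        # top quartile: questions actually right

# The bottom quartile's relative overestimate of its own score.
overestimate = (bottom_estimated - bottom_actual) / bottom_actual
print(f"{overestimate:.0%}")  # 48%

# How much better the top quartile actually did than the bottom quartile.
gap = (top_actual - bottom_actual) / bottom_actual
print(f"{gap:.0%}")  # 76%
```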

The next study was of 84 Cornell undergrads who were given a 20-question test of standard English grammar taken from a National Teacher Examination prep guide. This replicated the basic findings of the previous study, with the bottom quartile estimating that they got 12.9 questions right versus a real score of 9.2. (Interestingly, the top quartile very slightly over-estimated their score as 16.9 when it was actually 16.4.) Again, all of these are averages, so the numbers are a little wonky, but this time the bottom quartile over-estimated their performance by 3.7 questions, or 40%. And again, they got close to half the questions right, so this isn’t really a test of people who are incompetent.

There’s another thing to consider in both studies, which is how many questions the students thought they got wrong. In the logical reasoning study they estimated 5.4 errors, while in the grammar study they estimated 7.1, and while these were under-estimates, they were correct that they did in fact get at least that many wrong. Unfortunately these are aggregate numbers (asked after they handed the test in, I believe), so we don’t know how accurate they were at gauging whether they got particular questions wrong, but on the first test they correctly estimated about 40% of their error and on the second about 65% of their error. That is, while they did unequivocally have an over-inflated sense of their performance, they were not wildly unrealistic about how much they knew. But of course these are both subjects they had studied in the past, and their test scores did demonstrate at least basic competence with them.

The fourth study is more interesting, in part because it was on a more esoteric subject: it was a 10-question test, given to 140 Cornell undergrads, about set selection (the Wason selection task). Each problem described 4 cards and gave a rule which they might match; the question was which card or cards needed to be flipped over to determine whether those cards match the rule. Each question was like that, so we can see why they only asked ten questions.

They were asked to rate how they did in the usual way, but then half of them were given a short packet that took about 10 minutes to read explaining how to do these problems, while the other half was given an unrelated filler task that also took about 10 minutes. They were then asked to rate their performance again, and in fact the group who learned how to do the problems did revise their estimate of their performance, while the other group didn’t change it very much.

And in this test we actually see a gross mis-estimation of ability by the incompetent. The bottom quartile scored on average 0.3 questions correct, but initially thought that they had gotten about 5.5 questions correct. For reference, the top quartile initially thought that they had gotten 8.9 questions correct while they had in fact gotten all ten correct. And after the training, the untrained bottom quartile slightly raised their estimation of their score (by six tenths of a question), but among the trained people the bottom quartile reduced their estimation by 4.3 questions. (In fact the two groups had slightly different performances which I averaged together; so the bottom quartile of the trained group estimated that they got exactly one question right.)

This fourth study, it seems to me, is finally more of a real test of what everyone wants the Dunning-Kruger effect to be about. An average of 0.3 questions right corresponds roughly to 11 of the 35 people in the bottom quartile getting one question right while the rest got every question wrong. The incompetent people were actually incompetent. Further, they over-estimated their performance by over 1800%. So here, finally, we come to the substance of the quote from John Cleese, right?
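Both of those figures fall out of the reported averages; a quick sketch (numbers taken from the text above, not from re-reading the paper):

```python
# Fourth study, bottom quartile of the 140 students.
n_bottom = 140 // 4   # 35 students in the bottom quartile
avg_correct = 0.3     # average questions actually right (of 10)
avg_estimated = 5.5   # average questions they thought they got right

# Total right answers implied by the average: about 10-11 students getting
# a single question right while everyone else got all ten wrong.
print(avg_correct * n_bottom)  # 10.5

# Self-estimate as a percentage of the actual average score.
print(round(avg_estimated / avg_correct * 100))  # 1833, i.e. over 1800%
```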

Well… maybe. There are two reasons I’m hesitant to say so, though. The first is the fact that these are still all Cornell students, so they are people who are used to being above average and doing well on tests and so forth. Moreover, virtually all of them would never have been outside of academia, so it is very likely that they’ve never encountered a test which was not designed to be passable by most people. If nothing else, it doesn’t reflect well on a teacher if most of his class gets a failing grade. And probably most importantly, the skills necessary to solve these problems are fairly close to the sort of skills that Ivy League undergrads are supposed to have, so the similarity between this skillset, at which they were incompetent, and the skillsets at which they are presumably competent might well have misled them.

The second reason I’m hesitant to say that this study confirms the John Cleese quote is that the incompetent people estimated that they got 55% of the questions right, not 95% of the questions right. That is to say, incompetent people thought that they were merely competent. They didn’t think that they were experts.

In the conclusion of the paper, Dunning and Kruger talked about some limitations of their study, which I will quote because it’s well written and I want to do them justice.

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods.

They go on to note that in some domains, knowledge is largely the substance of skill, like in grammar, whereas in other places knowledge and skill are not the same thing, like basketball.

They also note that there is a minimum amount of knowledge required to mistake oneself for competent. As the authors say:

Most people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct an 8-cylinder engine, or diagnose acute disseminated encephalomyelitis.

So where does this leave us with regard to the quote from John Cleese? I think that the real issue is not so much about the inability of the incompetent to estimate their ability, but the inability of the incompetent to reconcile new ideas with what they do actually know. Idiots may not know much, but they still know some things. They’re not rocks. When a learned person tells them something, they are prone to reject it not because they think that they already know everything, but because it seems to contradict the few things they are sure of.

There is a complex interplay between intelligence and education—and I’m talking about education, mind, not mere schooling—where intelligence allows one to see distinctions and connections quickly, while education gives one the framework of what things there are that can be distinguished or connected. If a person lacks the one or the other—and especially if they lack both—understanding new things becomes very difficult because it is hard to connect what was said to what is already known, as well as to distinguish it from possible contradictions to what is already known. If the learned, intelligent person isn’t known by reputation to the idiot, the idiot has no way of knowing whether the things said don’t make sense to him because they are nonsense or because they make too much sense, and a little experience of the world is enough to make many if not most people sufficiently cynical to assume the former.

And I think that perhaps the best way to see the difference between this and the Dunning-Kruger effect is by considering the second half of the fourth experiment: the incompetent people learned how to do what they initially couldn’t. That is, after training they became competent. That is not, in general, our experience of idiots.
Until next time, may you hit everything you aim at.

Why I Cringe When People Criticize Capitalism (in America)

Every time I hear a fellow Christian (usually Catholic, often someone with the good sense to be a fan of G.K. Chesterton) criticize capitalism, I cringe, but not for the reason I suspect most of them would expect. Why I cringe will take a little explanation, but it’s rooted in the fact that there are actually two very different things which go by the name capitalism.

The first is a theory proposed by Adam Smith that, to oversimplify and engage in some revisionist history which is not fair to him but which would take too long to go into further, holds that virtue is unreliable: if we can harness vice to do the work of virtue, we can get the same effect much more reliably. Thus if we appeal to men’s self-interest, they will do what they ought with more vigor than if we appealed to their duty and love of their fellow man. Immanuel Kant’s essay Perpetual Peace has a section which may be taken as a summary of this attitude:

The problem of the formation of the state, hard as it may sound, is not insoluble, even for a race of devils, granted that they have intelligence. It may be put thus:—“Given a multitude of rational beings who, in a body, require general laws for their own preservation, but each of whom, as an individual, is secretly inclined to exempt himself from this restraint: how are we to order their affairs and how establish for them a constitution such that, although their private dispositions may be really antagonistic, they may yet so act as a check upon one another, that, in their public relations, the effect is the same as if they had no such evil sentiments.” Such a problem must be capable of solution. For it deals, not with the moral reformation of mankind, but only with the mechanism of nature; and the problem is to learn how this mechanism of nature can be applied to men, in order so to regulate the antagonism of conflicting interests in a people that they may even compel one another to submit to compulsory laws and thus necessarily bring about the state of peace in which laws have force.

Capitalism in this sense was this general problem applied to economics: we need men to work, but all men are lazy. We can try to appeal to men to be better, but it is much simpler and more reliable to show them how hard work will satisfy their greed.

This version of capitalism is a terrible thing, and by treating men as devils has a tendency to degrade men into a race of devils. But there is something important to note about it, which is that it doesn’t really demand much of government or of men. While it appeals to men’s greed, it does not impose a requirement that a craftsman charge an exorbitant price rather than a just price. It does not forbid a man taking a portion of his just profits and giving it to the poor. It tends to degrade men into devils, but it does not produce a form of government which demands that they become devils.

That was left to Marxism, which by its materialism demanded that all men surrender their souls to the state. Marxism is a theory of human beings every bit as wrong as the Capitalism of the enlightenment, but it demands a form of government which is far less compatible with human virtue. Further, it demands a form of government which is intrinsically incompatible with natural justice—depriving, as it does, all men of the property necessary to fulfill their obligations to their family and to their neighbors. Marxism inherently demands that all to whom it applies become a race of devils.

Of course, Marxism was never historically realized in its fullness since as Roger Scruton observed, it takes an infinite amount of force to make people do what is impossible. But enough force was applied to create the approximation of Marxism known as The Soviet Union (though according to a Russian friend of mine who escaped shortly before the Soviet Union collapsed, a more accurate translation would have been “The Unified Union of United Allies”). This global superpower was (at least apparently) bent on conquering the world in the name of Marx—well, in the name of Lenin, or communism, or The People; OK, at least bent on conquering the world. And to a marxist, who doesn’t really believe in personal autonomy and thus doesn’t believe in personal virtue, everyone else looks like a Capitalist, in the original sense of the word, since anything which is individual must inherently be greed.

So they called Americans capitalists. But if the devils in hell spit some criticism at you, it is only natural to take it as a compliment, and partly because of this and partly for lack of a better term, Americans started calling themselves capitalists. If the people with the overpopulated death camps for political prisoners in the frozen wastelands of Siberia despise us for being capitalists, then being a capitalist must be a pretty good thing. But in embracing the term capitalist, people were not thinking of Adam Smith’s economic theory or the problem Kant wrestled with in how to get a race of devils to cooperate, they were thinking of what they were and just using the name capitalist to describe that.

And here’s where we come to the part that makes me cringe when I hear fellow Christians complain about Capitalism. The United States of America has had many sins, but it has never been capitalist in the philosophical sense. Much of what became The United States was founded as religious colonies, though to be sure there were economic colonies as well. But the economic colonies, which had all of the vices that unsupervised people tend to, were still composed of religious people who at least acknowledged the primacy of virtue over vice in theory. And for all the problems with protestantism, the famous “Protestant Work Ethic” was the diametric opposite of philosophical capitalism. The whole idea of the protestant work ethic is that men should work far beyond what is needed, because it is virtue and because idleness is dangerous. Perhaps it was always more of a theory than a practice, but even so it was the opposite of the capitalist theory that men should work to satisfy their greed.

For perhaps the first century after the founding of The United States, it was a frontier nation in which people expanded and moved around with fairly low population densities. It takes time to set up governments, and small groups of people can resolve their own differences well enough, most of the time. So the paucity of government as we’re used to it today (though people in Europe in the middle ages would have been used to a similar paucity, in a different form) was largely due to the historical accident of low population densities, and not to any sort of philosophical ideal that greed is the highest good, making government practically unnecessary except for contract enforcement.

And while it is true that this environment gave birth to the robber barons who made a great deal of money treating their fellow men like dirt, it also gave rise to trust busters and government regulation designed to curb the vices of men who did not feel like practicing even minimal virtue to their fellow man. Laws and regulations take time to develop, especially in a land without computers and cell phone cameras; before the advent of radio it took more than a little time to convince many people of some proposition because the skilled orators could only do the convincing one crowd at a time.

Moreover, the United States has never had a government free from corruption, but powerful men buying off politicians was not what the United States was supposed to be; all things in this fallen world are degenerate versions of themselves. Slowness to act on common principles in a fallen world does not mean that a people does not hold those principles, only that hard things like overcoming corruption are difficult and time consuming to do.

But throughout the history of the United States, if you walked up to the average citizen and asked him, “ought we, as a people, to encourage men to be honest, hard working, and generous, or ought we to show each man that at least the first two are often in his self-interest and then encourage him to be as selfish and greedy as possible?” you would have had to ask a great many people indeed to come across someone who would cheerfully give you the second answer. Being willing to give that second answer is largely a modern degeneracy of secularists who know only enough economics and history to be dangerous, and for the most part think that you’re asking whether the government should micro-manage people’s lives to force them to be honest, hard working, and generous. Americans have many vices, but the least reliable way possible to find out what they are is to ask us.

I will grant that philosophical capitalism is also, to some degree, what is proposed by advertising. Indulge yourself! It’s sinfully delicious! You’re worth it! You deserve it! Everything is about making you happy!

I think that this may be why I cringe the most when my fellow Christians complain about our capitalist society; they should have learned by now not to believe everything they see on television.

Debunking Believe-or-Burn

This is the script from my video debunking believe-or-burn. It was written to be read aloud, but it should be pretty readable. Or you could just listen to it.

Today we’re going to be looking at how abysmally wrong the idea of “believe or burn”, which I prefer to render as, “say the magic words or burn,” is. And to be clear, I mean wrong, not that I don’t like it or this isn’t my opinion. I’m Catholic, not evangelical, so I’m talking about how it contradicts the consistent teaching of the church since its inception 2000 years ago (and hence is also the position of the Eastern Orthodox, the Copts, etc), and moreover how one can rationally see why “say the magic words or burn” cannot be true.

I’m not going to spend time explaining why non-Christian religions don’t believe you have to say the magic words or burn because for most of them, it’s not even relevant. In Hinduism, heavens and hells are related to your karma, not to your beliefs, and they’re all temporary anyway—as the story goes, the ants have all been Indra at some point. In Buddhism you’re trapped in the cycle of reincarnation and the whole point is to escape. To the degree that there even is a concept of hell in Buddhism, you’re there now and maybe you can get out. Many forms of paganism don’t even believe in an afterlife, and where they do—and what you do in life affects what happens to you in the afterlife—what happens to you is largely based on how virtuously you lived in society, not on worshipping any particular gods. Animistic religions are often either similar to pagan religions or they hold that the dead stick around as spirits and watch over the living. For the monotheistic religions, few of them have a well-defined theology on this point. Their attitude tends to be, “here is the way to be good, it’s bad to be evil, and for everyone else, well, that’s not a practical question.” For most of the world’s religions, “say the magic words or burn,” isn’t even wrong. And Islam is something of an exception to this, but I’m not going to get into Islam because the Quran doesn’t unambiguously answer this question and after Al Ghazali’s triumph over the philosophers in the 11th century, there really isn’t such a thing as Islamic theology in the same sense that you have Christian theology. Christianity holds human reason, being finite, to be unable to comprehend God, but to be able to reason correctly about God within its limits. Since Al-Ghazali wrote The Incoherence of the Philosophers, the trend in Islam has been to deny that human reason can say anything about God, past what he said about himself in the Quran.
As such, any question not directly and unambiguously answered in the Quran—which, recall, is poetry—is not really something you can reason about. So as a matter of practicality I think Islam should be grouped with the other monotheisms who hold the question of what happens to non-believers acting in good faith to be impractical. And in any event there are hadith and a passage in the Quran which do talk about some Jews and Christians entering paradise, so make of that what you will.

There isn’t an official name for the doctrine of “say the magic words or burn”, but I think it’s best known because of fundamentalists who say that anyone who doesn’t believe will burn in hell. I think that the usual form is saying that everyone who isn’t a Christian will burn in hell, for some definition of Christian that excludes Roman Catholics, Eastern Orthodox, Anglicans, and anyone else who doesn’t think that the King James version of the bible was faxed down from heaven and is the sole authority in human affairs. You generally prove that you’re a Christian in this sense by saying, “Jesus Christ is my personal lord and savior”, but there’s no requirement that you understand what any of that means, so it functions exactly like a magical incantation.

As I discussed in my video on fundamentalists, when they demand people speak the magic words, what they’re asking for is not in any sense a real religious formulation, but actually a loyalty pledge to the dominant local culture. (Which is fundamental—all tribes have a way of pledging loyalty.) But the concept of “say the magic words or burn,” has a broader background than fundamentalists, going all the way back to the earliest Protestant reformers and being, more or less, a direct consequence of how Martin Luther and John Calvin meant the doctrine of Sola Fide.

Before I get into the origin of “say the magic words or burn”, let me give an overly brief explanation of what salvation actually means, to make sure we’re on the same page. And to do that, I have to start with what sin is: sin means that we have made ourselves less than what we are. For example, we were given language so that we could communicate truth. When we lie, not only do we fail in living up to the good we can do, we also damage our ability to tell the truth in the future. Lying (and all vices) all too easily become habits. We have hurt others and damaged ourselves. Happiness consists of being fully ourselves, and so in order to be happy we must be fixed. This is, over-simplified, what it means to say that we need salvation. Christianity holds that Jesus has done the work of that salvation, with which, if we accept God’s offer, we will be united after death, and so we will be fixed, and thus, being perfect, will be capable of eternal happiness. That’s salvation. Some amount of belief is obviously necessary to this, because if you don’t believe the world is good, you will not seek to be yourself. This is why nihilists like pickup artists are so miserable. They are human but trying to live life like some sort of sex-machine. They do lots of things that do them no good, and leave off doing lots of things that would do them good. Action follows belief, and so belief helps us to live life well. We all have at least some sense of what is true, though, or in more classical language the natural law is written on all men’s hearts. It is thus possible for a person to do his best to be good, under the limitations of what he knows to be good. God desires the good of all of his creatures, and while we may not be able to see how a person doing some good, and some evil things under the misapprehension that they are good, can be saved, we have faith in God that he can do what men can’t.
Besides, it doesn’t seem likely that God would permit errors to occur if they couldn’t be overcome. While we don’t know who will be saved, it is permissible to hope that all will be saved. As it says in the Catechism of the Catholic Church, “Those who, through no fault of their own, do not know the Gospel of Christ or his Church, but who nevertheless seek God with a sincere heart, and, moved by grace, try in their actions to do his will as they know it through the dictates of their conscience – those too may achieve eternal salvation.”

OK, so given that, where did the evil and insane idea of “say the magic words or burn” come from? Well, Sola Fide originated with Martin Luther, who as legend has it was scrupulous and couldn’t see how he could ever be good enough to enter heaven (I say, “as legend has it” because this may be an overly sympathetic telling). For some reason he couldn’t do his best and trust God for the rest, so he needed some alternative to make himself feel better. Unfortunately being Christian he was stuck with the word faith, which in the context of Christianity means trusting God. Martin Luther’s solution was to redefine the word faith to mean—well, he wasn’t exactly consistent, but at least much of the time he used it to mean something to the effect of “a pledge of allegiance”—basically, a promise of loyalty. The problem with that is that pledging your allegiance is just words. There’s even a parable Jesus told about this very thing: a man had two sons and told them to go to work in his fields. The one son said no, but later thought better of it and went to work in the fields. The other said, “yes, sir” but didn’t go. Which did his father’s will? And please note, I’m not citing that to proof-text that Martin Luther was wrong. One bible passage with no context proves nothing. No, Martin Luther was obviously wrong. I’m just mentioning this parable because it’s an excellent illustration of the point about actions versus words. But as a side-note, it’s also an excellent illustration of why mainline protestants often have relatively little in common with Martin Luther and why it was left to the fundamentalists to really go whole-hog on Martin Luther’s theology: it was a direct contradiction of what Jesus himself taught.

John Calvin also had a hand in “say the magic words or burn”, though it was a bit different from the influence of Martin Luther. Though Luther and Calvin did agree on many points, they tended to agree for different reasons. While Martin Luther simply repudiated free will and the efficacy of reason—more or less believing that they never existed—John Calvin denied them because of the fall of man. According to Calvin man was free and his reason worked before the first sin, but all that was destroyed with the first sin, resulting in the total depravity of man. Whereas Martin Luther thought that free will was nonsensical even as a concept, John Calvin understood what it meant but merely denied it. Ironically, John Calvin’s doctrines being a little more moderate than Martin Luther’s probably resulted in them having a much larger impact on the world; you had to be basically crazy to agree with Martin Luther, while you only needed to be deeply pessimistic to agree with John Calvin. Luther held that God was the author of evil, while Calvin at least said that all of the evil was a just punishment for how bad the first sin was. If outsiders can’t readily tell the difference between Calvin’s idea of God and the orthodox idea of the devil, insiders can’t even tell the difference between them in Martin Luther’s theology. Luther literally said that he had more faith than anyone else because he could believe that God is good despite choosing to damn so many and save so few. The rest of us, who don’t even try to believe blatant logical contradictions about God, just didn’t measure up. In the history of the world, Martin Luther is truly something special.

However, since both Luther and Calvin denied that there is any such thing as free will these days, Sola Fide necessarily took on a very strange meaning. Even a pledge of allegiance can’t do anything if you’re not the one who made it. So faith ends up becoming, especially for Calvin, just a sign that you will be saved. The thing is, while this is logically consistent—I mean, it may contradict common sense, but it doesn’t contradict itself—it isn’t psychologically stable. No one takes determinism seriously. The closest idea which is at least a little psychologically stable is that God is really just a god, if a really powerful god, so pledging allegiance is like becoming a citizen of a powerful, wealthy country. You’ll probably be safe and rich, but if you commit a crime you might spend some time in jail or even be deported. I realize that’s not the typical metaphor, but it’s fairly apt, and no one born in the last several hundred years has an intuitive understanding of what a feudal overlord is. This understanding of Sola Fide can’t be reconciled with Christianity, the whole point of which is to take seriously that God is the creator of the entire world and thus stands apart from it and loves it all. But this understanding of Sola Fide can plug into our instinct to be part of a tribe, which is why if you don’t think about it, it can be a stable belief.

So we come again to the loyalty pledge to the group—in a sense we have to because that is all a statement of belief without underlying intellectual belief ever can be—but with this crucial difference: whereas the fundamentalist generally is demanding loyalty to the immediate secular culture, the calvinist-inspired person can be pledging loyalty to something which transcends the immediate culture. I don’t want to oversell this because every culture—specific enough that a person can live in it—is always a subculture in a larger culture. But even so the calvinist-inspired magic-words-or-burn approach is not necessarily local. It is possible to be the only person who is on the team in an entire city, just like it’s possible to be the only Frenchman in Detroit. As such this form of magic-words-or-burn can have a strong appeal to anyone who feels themselves an outsider.
And the two forms of magic-words-or-burn are not very far apart, and each can easily become the other as circumstances dictate. And it should be borne in mind that one of those circumstances is raising children, because a problem which every parent has is teaching their children to be a part of their culture. In this fallen world, no culture is fully human, and equally problematic is that no human is fully human, so the result is that child and culture will always conflict. Beatings work somewhat, but getting buy-in from the child is much easier on the arms and vocal cords, and in the hands of less-than-perfect parents, anything which can be used to tame their children probably will be.

This would normally, I think, be a suitable conclusion to this video, but unfortunately it seems like salvation is a subject on which people are desperate to make some sort of error of exaggeration, so if we rule out the idea that beliefs are the only things that matter, many people will start running for the opposite side and try to jump off the cliff of beliefs not mattering at all. Or in other words, if salvation is possible to pagans, why should a Christian preach to them?

The short answer is that the truth is better for people than mistakes, even if mistakes aren’t deadly. This is because happiness consists in being maximally ourselves, and the only thing which allows us to do that is the truth. Silly examples are always clearer, so consider a man who thinks that he’s a tree and so stands outside with his bare feet in the dirt, arms outspread, motionless, trying to absorb water and nutrients through his toes and photosynthesize through his fingers. After a day or two, he will be very unhappy and a few days later he will die if he doesn’t repent of his mistake. Of course very few people make a mistake this stark—if nothing else anyone who does will die almost immediately, leaving only those who don’t make mistakes this extreme around. But the difference between this and thinking that life is about having sex with as many people as possible is a matter of degree, not of kind. You won’t die of thirst and starvation being a sex-maniac, and it will take you longer than a few days to become noticeably miserable, but it will happen to those who think they’re mindless sex machines as reliably as it will to those who think they’re trees.

Pagans are in a similar situation to the pick-up-artists who think they’re mindless sex robots. Because paganism was a more widespread belief system that lasted much longer, it was more workable than pick-up-artistry, which is to say that it was nearer to the truth, but it was still wrong in ways that seriously affect human happiness. It varied with place and time, of course, but common mistakes were a focus on glory, the disposability of the individual, the inability of people to redeem themselves from errors, and so on. The same is true of other mistaken religions; they each have their mistakes, some more than others, and tend toward unhappiness to the degree that they’re wrong.

There is a second side to the importance of preaching Christianity to those who aren’t Christian, which is that life is real and salvation is about living life to the full, not skating by on the bare minimum. Far too many people think of this life as something unrelated to eternal life, as if once you make it to heaven you start over. What we are doing now is building creation up moment by moment. People who have been deceived will necessarily be getting things wrong and doing harm where they meant to help, and failing to help where they could have; it is not possible to be mistaken about reality and get everything right. That’s like asking a person with vision problems to be an excellent marksman. A person who causes harm where they meant to help may not be morally culpable for the harm they do, but when all is made clear, they cannot be happy about the harm they did, while they will be able to be happy about the good they did. To give people the truth is to give them the opportunity to be happier. That is a duty precisely because we are supposed to love people and not merely tolerate them. Though I suppose I should also mention the balancing point that we’re supposed to give people the truth, not force it down their throats. Having given it to them, if they won’t take it, our job is done.

OK, I think I can conclude this video now. Until next time, may you hit everything you aim at.

Our Love for Formative Fiction

I think that for most of us, there are things which we loved dearly when we were children which we still love now, often greatly in excess of how much others love these things. And I think we’re used to hearing this poo-pooed as mere nostalgia. But I think that for most of us, that’s not accurate.

Nostalgia is, properly speaking, a longing for the familiar. It is not merely a desire for comfort, but also a connection through the passage of time from the present to another time (usually our childhood, but it can be any previous time). As Saint Augustine noted, our lives are shattered across the moments of time, and on our own we have no power to put them back together. Nostalgia is, properly speaking, the hope that someone else’s power will eventually put the shattered moments of time back together into a cohesive whole.

But when we enjoy formative fiction, we’re not particularly thinking of the passage of time, or the connectedness of the present to the past. And the key way that we can see this is that we don’t merely relive the past, like putting on an old sweater or walking into a room we haven’t been in for years. Those are simple connections to the past, and are properly regarded as nostalgia. But when we watch formative fiction which we still enjoy (and no one enjoys all of the fiction they read/watched/etc as a child), we actually engage it as adults. We see new things that we didn’t see at first, and appreciate it in new ways.

What is really going on is not nostalgia, but the fact that everyone has a unique perspective on creation; for each of us there are things we see in ways no one else does. Part of this is our personality, but part of this is also our previous experiences. And the thing about formative fiction is that it helped to form us. The genuine teamwork in Scooby Doo, where the friends were really friends and really tried to help each other, helped me to appreciate genuine teamwork. It’s fairly uncommon on television for teammates to actually like each other—“conflict is interesting!” every lazy screenwriter in the world will tell you—so when I see it in Scooby Doo now, I appreciate it all the more because I’ve grown up looking for it and appreciating it where I see it. This is one of the things I love about the Cadfael stories, where Cadfael (the benedictine monk who solves murders) is on a genuine team with Hugh Berringar, the undersheriff of Shropshire. This is also one of the things I love about the Lord Peter stories with Harriet Vane—they are genuinely on each other’s side with regard to the mysteries.

And when I mention Scooby Doo, I am of course referring to the show from the 1960s, Scooby Doo, Where Are You! I have liked some of the more recent Scooby Doo shows, like Scooby Doo: Mystery Inc., but by and large the more modern stuff tends to add conflict in order to make the show more interesting, and consequently makes it far less interesting for me. Cynics will say that this is merely because none of these were from my childhood, but in fact when Scooby Doo: Mystery Inc. had episodes where the entire team was functioning like a team, where everyone liked each other and was on the same side, I genuinely enjoyed those episodes. (Being a father of young children means watching a lot of children’s TV.) The episodes where members of the team were fighting, or the episodes where they split up, were by far my least favorite episodes.

It is possible to enjoy fiction for ulterior motives, or at least to pretend to enjoy it for ulterior motives. Still, it’s also possible to enjoy fiction because one is uniquely well suited to enjoying it, and few things prepare us for life as much as our childhood did.

The Dishonesty of Defining Atheism as Lack of Belief in God

This is the script from a recent video of mine with the above title. It should be pretty readable, or you could just watch it.

Today we’re going to revisit the definition of atheism as a lack of belief in God, specifically to look at why it’s so controversial. As you may recall, Antony Flew first proposed changing the definition of atheism to lack of belief, from its traditional definition of “one who denies God,” in his 1976 essay, The Presumption of Atheism. By the way, you can see the traditional definition in the word’s etymology: atheos-ism, atheos meaning without God, and the -ism suffix denoting a belief system. Now, there’s nothing inherently wrong in changing a definition – all definitions are just an agreement that a given symbol (in this case a word) should be used to point to a particular referent. That is, any word can mean anything we all agree it does. And if a person is willing to define their terms, they can define any word to mean anything they want, so long as they stick to their own definition within the essay or book or whatever where they defined the term. Words cannot be defined correctly or incorrectly. But they can be defined usefully or uselessly. And more to the point here, they can be defined in good faith—clearly, to aid mutual understanding—or in bad faith—cleverly, in order to disguise a rhetorical trick.

And that second one is why atheism-as-lack-of-belief is so controversial. If atheism merely denoted a psychological state—which might in fact be common between the atheist and a dead rat—no one would much care. Unless, I suppose, one wanted to date the atheist or keep the rat as a pet. But merely lacking a belief isn’t what lack-of-belief atheists actually mean. They only talk about lacking a belief to distract from the positive assertion they’ve learned to say quickly and quietly: that in default of overwhelming evidence to the contrary, one should assume atheism in the old sense. That is, until one has been convinced beyond a shadow of a doubt that God exists, one should assume that God does not exist. I’ll discuss how reasonable this is in a minute—spoiler alert: it’s not—but I’d first like to note the subtle move of people who have more or less explicitly adopted a controversial definition of atheism in order to cover for begging the question. I suspect that this is more accidental than intentional—somewhat evolutionary, where one lack-of-belief atheist did it and it worked and caught on by imitation—but it’s a highly effective rhetorical trick. Put all your effort into defending something not very important and people will ignore your real weakness. By the way, the phrase “beg the question” means that you’re assuming the answer to the question. It comes from the idea of asking that the question be given to you as settled without having to argue for it. But it’s not just assuming your conclusion, it’s asking for other people to assume your conclusion too, hence the “begging”. (“Asking for the initial point” would have been a better, if less colorful, translation of the Latin “petitio principii”, itself a translation of the Greek “τὸ ἐξ ἀρχῆς αἰτεῖν”. Pointing out that it’s not valid to do this goes back at least to Aristotle.)

So, how reasonable is this assumption? The best argument I’ve ever heard for it is that in ordinary life we always assume things don’t exist until we have evidence for them. This is, properly speaking, something only idiots do. For example: oh look, here’s a hole in the ground. I’m going to assume it’s empty. It might be empty, of course, but in ordinary life only candidates for the Darwin Awards assume that. And in fact, taken to its logical conclusion, this default assumption would destroy all exploration. The only possible reason to try to find something is because you think it might be there. If you acted like planets in other solar systems don’t exist until someone has given you evidence for them, you wouldn’t point telescopes at other solar systems to see whether planets are there. That’s not acting like they don’t exist; that’s acting like maybe they exist. In fact, scientific discovery is entirely predicated on the idea that you shouldn’t discount things until you’ve ruled them out. It’s also the entire reason you should control your experiments. You can’t just assume that other variables besides the one you’re studying had no effect on the outcome of your experiment unless somebody proves it to you; you’re supposed to assume that other variables do affect the outcome until you’ve proven that they don’t. This principle is literally backwards from good science.

Now, examples drawn from science will probably be lost on lack-of-belief atheists, who are in general impressively ignorant of how science actually works. But many of them probably own clothes. To buy clothes, one must first find clothes which fit. Until one gets to the clothing store, one doesn’t have evidence that they have clothes there, or that if they have clothes, that the clothes they have will fit. Properly speaking, one doesn’t even have evidence that the clothes that they sell there will have holes so the relevant parts of your body can stick out, like neck holes or leg holes. For all you know, they might lack holes of any kind, being just spheres of cloth. Do any of these atheists assume that the clothes at the clothing store lack holes? Because if they did, they’d stay home, since there’s no point in going to a store with clothes that can’t be worn.

Now, if one is trying to be clever, one could posit an atheist who goes to the store out of sheer boredom to see whether they have clothes or hippogriffs or whether the law of gravity even applies inside of the store. But they don’t, and we all know that they don’t. They reason from things that they know to infer other knowledge, then ignore their stupid principle and go buy clothes.

Now, if you were to point this out to a lack-of-belief atheist, their response would be some form of Special Pleading. Special Pleading is just the technical name for asking for different evidentiary standards for two things which aren’t different. You should have different evidentiary standards for the existence of a swan and for a law of mathematics, because those are two very different things. Sense experience is good evidence for a swan, but isn’t evidence at all for a law of mathematics, which must hold in all possible worlds. Special pleading is where you say that sense experience suffices for white swans but not for black swans. Or that one witness is enough to testify to the existence of a white swan, but three witnesses are required for a black swan. That’s the sort of thing special pleading is.

And this is what you will find immediately with lack-of-belief atheists. Their terminology varies, of course, but they will claim that God is in a special category which requires the default assumption of non-existence, unlike most of life. In my experience they won’t give any reason for why God is in this special category, presumably because there is none. But I think I know why they do it.

The special category of things they believe God is in is, roughly, the category of controversial ideas. Lack-of-belief atheists—all the ones I’ve met, at least—are remarkably unable to consider ideas they don’t believe. This is a mark, I think, of limited intellect, and people of limited intellect are remarkably screwed over by the modern world. Unable to evaluate the mess of competing ideas that our modern pluralistic environment presents to everyone, they could get by by relying on a mentor: someone older and wiser who can tell them the correct answer until through experience they’ve learned how to navigate the world themselves. And please note that I don’t mean this in any way disparagingly. To be of limited intellect is like being short or weak or (like me) unable to tolerate capsaicin in food. It’s a limitation, but we’re all finite beings defined, to some degree, by our limits. God loves us all, and everyone’s limits are an opportunity for others to give to them. The strong can carry things for the weak, the tall can fetch things off of high shelves for the short, and people who can stand capsaicin can test the food and tell me if it’s safe. Limits are simply a part of the interdependence of creation. But the modern world, with its mandatory state education and the commonality of working outside the home, means that children growing up have few—and commonly no—opportunities for mentors. Their teacher changes every year and their parents are tired from work when they are around. What are they to do when confronted with controversial ideas they’re unequipped to decide for themselves?

I strongly suspect that lack-of-belief atheism is one result. I’m not sure yet what other manifestations this situation has—given the incredible similarities between lack-of-belief atheism and Christian fundamentalism, I strongly suspect that Christian fundamentalism is another result of this, but I haven’t looked into it yet.

This also suggests that the problem is not merely intellectual. That is, lack-of-belief atheists are probably not merely the victims of a bad idea. Having been deprived of the sort of stable role models they should have had growing up, and not being able to find substitutes in great literature or make their way on their own through inspiration and native ability, they have probably also grown up with what we might by analogy call a deformity in the organ of trust. They don’t know who to trust, or how to properly trust. Some will imprint on the wrong sort of thing—I think that this is what produces science-worshippers who know very little about science—but some of them simply become very mistrustful of everyone and everything.

Now, I don’t mean this as the only explanation of atheism, of course. For example, there are those who have so imprinted on the pleasure from a disordered activity that they can only see it as the one truly good thing in their life and so its incompatibility with God leads them to conclude God must not exist. There are the atheists Saint Thomas identified in the Summa Theologiae: those who disbelieve because of suffering and those who disbelieve because they think God is superfluous. But all these, I think, tend not to be lack-of-belief atheists and I’m only here talking about lack-of-belief atheists.

So finally the question becomes, what to do about lack-of-belief atheists? That is, how do we help them? I think that arguing with them is unlikely to bear much fruit, since most of what they say isn’t what they mean, and what they do mean is largely unanswerable. “I don’t know who to trust,” or, “I won’t trust anyone or anything,” can only be answered by a very long time of being trustworthy, probably for multiple decades. What I suspect is likely to be a catastrophic failure is any attempt to be “welcoming” or accommodating or inclusive. What lack-of-belief atheists are looking for—and possibly think they found already in the wrong place—is someone trustworthy who knows what they’re talking about. A person who is accommodating or inclusive is someone who thinks that group bonds matter more than what they claim is true, which means they don’t really believe it. The problem with “welcoming” is the scare quotes. There’s nothing wrong with being genuinely welcoming, since anyone genuinely welcoming is quite ready to let someone leave if he doesn’t want to stay. When you add the scare quotes you’re talking about people who are faking an emotional bond which doesn’t exist yet in order to try to manipulate someone into staying. Lack-of-belief atheists don’t need emotional manipulation, because no one needs emotional manipulation. What they need are people who are uncompromisingly honest and independent. The lack-of-belief atheist is looking for someone to depend on, not someone who will depend on them.

The good news is the same as the bad news: the best way to do this is to be a saint.

Imposter Syndrome Produces Many Fake Rules

Imposter Syndrome, which I’m using loosely rather than in its clinical sense, is the feeling that a person is not actually competent at a job which they are manifestly competent at. I think that for many people it stems from being overly impressed with other people, putting those others on a pedestal, and not realizing that everybody everywhere is just “winging it”. That is, doing their best without full knowledge of what they should be doing. That is in fact the human condition—we are finite creatures and must live life by trust—but some people seem unable to accept that and have the conviction that other people must know what they’re doing. Only God knows what he’s doing; he’s the only one who accomplishes all things according to the intentions of his will. But those who can’t accept that must turn others—often kicking and screaming—into God-substitutes and pretend that these people really know what they’re doing. (It’s part of the reason people turn so quickly and viciously on their idols—they view imperfection as treason, since they’ve elevated their idols to the status of God.)

Another coping mechanism which the sufferers of imposter syndrome have is to try to turn life into something they can actually be good at—though in that sense no human being can be good at life. Thus they come up with a myriad of byzantine and difficult but achievable rules, then need to have everything in life go according to those rules in order to “feel in control”. These rules tend to cluster around anything with an inherently high degree of flexibility, such as social interaction, writing fiction, etc. “When you visit someone, you must bring a food item” is really more of a ritual, being such a common rule, but it’s a way of showing that one cares and is not merely mooching. Especially in the modern world where food is absurdly available there’s little benefit to it, and so far as I know it was never the custom among rich people, but it gives one something to do such that if one has done it, one did a good job and is not open to criticism. This is one such rule which caught on (and I’m forced to use a rule which is not particular to an individual in order that it might be generally recognizable), but they abound. Some people must always check the stove before leaving the house, some people must always hand-write thank-you notes, or send thank-you notes on paper rather than by email. An alternative way of thinking of these things is as ad-hoc superstitions.

Satanic Banality

Here is the script of the most recent video I posted. Or if you’d prefer, you can go watch it on YouTube.

Some time ago, I made a video talking about the strange symbolism in the music video of Ke$ha’s song, Die Young. Here are all of the symbols she used:
[Image: the symbols Ke$ha used in the Die Young video]
The curious thing about them all is that despite the fact that the video is supposed to have a satanic theme, the symbols Ke$ha used are all actually Christian symbols. Here’s what I concluded in that video:

Ultimately what I think I find so frustrating about this video is that its use of symbolism is, essentially, magical thinking. Symbols have power, because they communicate something. A symbol stands in for something greater than itself, which is why it has more power than random scribbles. Using symbols without reference to what they mean is trying to get their power without invoking their function – it’s trying to steal their power.

But on further consideration, I’ve realized that this is actually quite fitting. Yes, this was rather incompetent satanism, but that is really the most consistent satanism possible. Diligence is a virtue; if she put a lot of work into her satanism—if she really tried to do a good job—that would undermine the entire point. Skillful Satanism is actually something of a contradiction in terms.

And this is something C.S. Lewis complained about in literature. In his preface to The Screwtape Letters, talking about artistic representations of the angelic and diabolic, he said: “The literary symbols are more dangerous because they are not so easily recognized as symbols. Those of Dante are the best. Before his angels we sink in awe. His devils, as Ruskin rightly remarked, in their rage, spite, and obscenity, are far more like what the reality must be than anything in Milton. Milton’s devils, by their grandeur and high poetry, have done great harm, and his angels owe too much to Homer and Raphael. But the really pernicious image is Goethe’s Mephistopheles. It is Faust, not he, who really exhibits the ruthless, sleepless, unsmiling concentration upon self which is the mark of Hell. The humorous, civilised, sensible, adaptable Mephistopheles has helped to strengthen the illusion that evil is liberating.”

There’s nothing all that particular to Satanism in these complaints, though. It’s really the same as a mistake that we tend to make about all evil. I think that the origin of this mistake is, roughly, the intuition that if a person is trading their soul for something, there must be something quite valuable which tempted them to do it. Consider the scene in A Man For All Seasons where Richard Rich has just perjured himself to produce false evidence that will get Sir Thomas More executed for treason:

More: There is one question I would like to ask the witness. That’s a chain of office you’re wearing. May I see it? The red dragon. What’s this?

Cromwell: Sir Richard is appointed Attorney General for Wales.

More: For Wales? Why Richard, it profits a man nothing to give his soul for the whole world. But for Wales?

(If you haven’t seen A Man for All Seasons, please do. It is an excellent movie.)

Why would somebody do something evil if it doesn’t benefit them? The answer to this question is straightforward, but we need a few concepts in order to be able to give the simple explanation. The first is the Greek concept of hamartia. It comes from the verb hamartanein, which described, for example, what an archer did when he didn’t hit his target. It means, roughly, to miss. Hamartia thus means an error, or a mistake, or by the time you get to the early Christian church, sin. The key insight is that evil is not something positive, but something negative.

I think that people go wrong here by not taking nihilism seriously enough. We think of a world working in perfect harmony and unity as the default, and of evil as a deviation from that. But in fact the default is nothing. There need not be anything at all. No matter, no energy, no space or time or physics. Just pure nothing is the default. And yet, there is something. I don’t even care at the moment whether you attribute that creation to God or to a “quantum fluctuation”—well, I care a little bit because the latter is still assuming that some sort of contingent laws of physics exist, but whatever. The point is that anything whatever that exists—in our contingent world—is more than had to exist. Whether you think of it as a gift or as something that fell off of some cosmic truck that was driving by, from our perspective it is all a positive addition to the nothingness which is logically prior to it.

When you look at it this way, you can see that good is not a maintenance of the status quo, but an addition to it. But of course good is not merely anything at all existing. This is why a table is better than a pile of splinters, and why in the ordinary course of events using an axe to turn a table into a pile of splinters is wrong. It is bringing the world closer to the default of nothing. Good is not just any existence, but existence ordered according to a rational relationship. By a rational ordering, small things can become something more than themselves. Put together in the right shape, splinters can be beautiful and hold things up off the ground. That is, they can be a table.

Incidentally, this is why hyper-reductionists have such an easy time seeing through everything. Because every good thing is a rational relationship of lesser things, it is always possible to deny that the relationship is real. You can look at a table and see no more than a pile of splinters. Why a reductionist is proud of seeing less than everyone else is a subject for another day, but if you look at anything you know to be good, you will see this. It is itself made up of a rational relationship of parts that form more than they would in some other relationship. Further, all good things themselves fit in a rational relationship with other good things. Anywhere you look, whether chickens or statues or vaccines or video games; all good things have this property. And all evil—murder, arson, terrorism, or just lying—all have the property that they destroy rational relationships between things. They destroy the whole which is greater than the sum of its parts.

It is also the case that there is no other possibility for what constitutes good and evil. I don’t have time to go into details, but if you examine any attempt to define good and evil which is not convertible into this definition, it invariably consists of taking one sort of rational relationship and calling that the only good. Good is doing your duty, or good is the family, or good is the state or good is pleasure. Every such thing, if you really spend some time looking into it and seeing what its proponents actually mean by their words and actions—they are all taking some rational relationships and elevating them above all other rational relationships. They are taking a part and treating it as the whole.

And this is why sin is analogous to an archer missing what he was shooting at. We all aim for doing the good, but it’s very rare that we actually hit our target. Sometimes our aim is off because we twitch—that is, we can’t hold steady—but very often it’s because we mistake what we’re looking at. We think it’s closer or further, or that we’re looking at one part when we’re looking at another. We go wrong not because we think, “oh man, would it be great to shoot this deer in the log under it!” but because we thought we were looking at its chest. We weren’t, as proved by where our arrow struck. Or we can go wrong by being mistaken about where we’re aiming, thinking that because we’re looking at something, that’s where we are pointing our arrow. “Know thyself” is often quoted by unpractical people, but it’s actually intensely practical advice.

The drug-addled, sex-crazed rock star doesn’t think she’s using Christian imagery when she’s trying to be Satanic. She has not traded looking like a buffoon for some amazing benefit we can’t see. In her mind, she doesn’t look like a buffoon. She thinks she looks awesome; that anyone sensible would cower in awe of her satanic majesty. She has missed her target, and hasn’t yet gone to see where her arrow has actually struck. There’s a reason why pop musicians rarely last a decade; once they realize what they’re doing, they stop believing in it; and once they stop believing in it, they can’t sell the illusion anymore. And then their popularity fades, because it was not them, but the illusion they were selling, which was so popular.

Satanic Majesty is always an illusion, which is why you can only ever encounter it in art. Art contrives to convey experience; to show you what the world looks like through someone else’s eyes. But Satanic Majesty always looks banal from the outside; it’s only from the inside that it looks spectacular. This is part of why pride is the deadliest of the sins: if you wrap yourself up inside yourself, you can fool yourself forever without anything to check your downward, inward progress. And this is why music videos feature so many reaction shots. It’s also why movies and TV and virtually everything fictive feature so many reaction shots. The thing itself rarely looks very impressive, but people’s reactions are limited only by their imagination and acting skills. It’s why in the Power Rangers series, after they lower the camera to the monster’s feet, the next shot is always the Power Rangers looking up. Our age has been called the age of many things, but it is the age of nothing so much as it is the age of the reaction shot. TV news shows the reactions of people on the street, but it never shows you the considered opinions of people on something that happened ten years ago. Collectively, we don’t like reality; you can tell a tree by its fruit, which is why we prefer to look at seedlings.

It’s everywhere in entertainment—in which category news most certainly belongs—but it can be found throughout life, too. We endlessly discuss people’s reactions, but we rarely discuss things and ideas. And if we look at ourselves, when we are tempted, we can see the same thing. We do not consider our temptations in themselves, but only how they will make us feel. I mean when we’re experiencing them, not when we’re regretting having given into them afterwards. In the actual moment of giving in, our attention is never on the reality of what we’re about to do; we’re concentrating on how happy it will make us. That’s why one of the techniques for avoiding temptation is to face up to what we’re actually doing. Of course sometimes we can’t avoid facing up to what we’re actually doing; in addiction it’s called hitting rock bottom. But when one is young and healthy, it’s very rare that reality makes us face up to what we’re doing. On TV they always pick pretty people who smile for the camera, and it’s so hard to believe that anything can be wrong when pretty people are happy. On Facebook people post pictures of when things are going well, and the very fact that it’s rude to tell people about how bad your day was means that we don’t often face up to the reality of what is going on in life. A person has to be very unhappy indeed before they won’t smile for the camera.

Which is a pity, because so many people use reactions to tell whether the thing being reacted to is good or bad. Since people will put their best foot forward, this doesn’t work; to know right from wrong we must investigate the things themselves. And in fact in our world whether an action is defended on its own or by the reactions to it is actually a good heuristic for figuring out whether it is moral or immoral—if you can say something good about the action itself, it is probably moral. If it is only defended by people’s reactions to it, it is probably immoral. That’s only a heuristic, of course; people dance because it’s fun, and dancing is legitimate. But dancing is also beautiful, at least when it’s done well. There’s very little you can say about heroin except that it’s fun.

That’s all for now. Until next time, may you hit everything you aim at.

Prayer to an Unchanging God

If you aren’t familiar with the properties of God, perhaps the strangest, to us, is that God is unchanging. It follows necessarily from the fact that God is simple, that is, he is not composed of separable parts that are capable of existing independently. That follows from the fact that God is necessary, unlike us, who are contingent. Since God is necessary, he cannot be composed of things which are not necessarily together. And since God is necessary, he cannot change, because change means some part coming into being or ceasing to be. Since God is necessary (and has no contingent parts), there is no part of him which is capable of not existing. So far, OK, but how, then, does prayer work if God doesn’t change? What does prayer do?

It’s easy enough if you only consider our side of prayer, that is, how prayer changes us. But that’s not all prayer does. Prayer can change the world. We can pray for good things to happen, and God can answer our prayers with good things, if often (having to take everyone’s good into account) in ways so complex we don’t understand them until much later if at all. Or we can get immediate answers to our prayers, as in the case of miracles. How can that possibly work if God is unchangeable?

I think that it will be easier to give the answer if we first look at the fact that we creatures are able to interact with each other. C.S. Lewis, addressing the question, “since God knows what’s best, how can it make sense to ask him for anything?”, pointed out that the same problem applies to umbrellas. Surely God knows whether we should be wet, so why give him our opinion on the subject by opening our umbrella?

The answer to that question is that God has given it to us to take part in designing creation. This is part of a general plan of delegation which God seems to have. For a great many things, instead of doing things directly God gives it to us to do his work for him. He could feed the hungry man himself, but he gives it to us to be his feeding of the hungry man by us giving the hungry man food. You can see this in the analogy of the parent who gives his child a present to give to someone else; the parent could have given the present directly but the parent is incorporating the child into the parent’s act of generosity. Unsurprisingly, God does a far more complete job of it than human parents do. This is part of why people can ignore God; they see only the action of the people incorporated into God’s generosity and ignore the rest.

When God gives us these things by way of delegation, what happens is that we end up acting sort of like a lens to the sunlight. From our perspective, we don’t change the sun, but we do change how the sunlight affects earthly objects. By holding our hands up we make a shadow; by holding up a lens we concentrate the light on a place; with a prism we break the light into distinct pieces and make a rainbow. Real life is vastly more complex than just lensing the sun, but it works as a metaphor to show us how you can change the effect of the sun without changing the sun itself.

Prayer is the same basic thing, except we can’t directly observe it. By prayer we interact with God such that we change not God, but how his unchanging love for creation is expressed in creation itself. Prayer is like holding up a magnifying glass in front of the sun, shaping where the light goes without doing anything to the sun.