Authority Figures in Movies

One of the curious things about the roles of authority figures in movies is that they are very rarely played by people who have ever had any authority. One might think that this wouldn’t have too much of an impact, since the actors are just reciting dialog which other people wrote. (People who most of the time haven’t had any authority themselves, but that’s a somewhat separate matter.) And in the end, authority is the ability to use force to compel people, so does it matter much which mannerisms an actor uses?

Actually, yes, because in fact a great deal of authority, in practice, is about using social skills to get people to cooperate without having to use one’s authority. And a great deal of social skill consists of body language, tone of voice, emphasis, and pacing. Kind of like the famous advice given by Dalton in Road House: be nice.

For some reason, authority figures are usually portrayed as grim and stern—at this point I think because it’s a shorthand so you can tell who is who—but there is a great deal which can be accomplished by smiling. There’s an odd idea many people seem to have that smiling is only sincere if it is an instinctual, uncontrollable reaction. I’ve no idea where this crazy notion came from, but in fact smiling is primarily a form of communication. It communicates that one is not (immediately) a threat, that (in the moment) one intends cooperation, that the order of the moment is conversation rather than action. Like all communication it can of course be a lie, but the solution to that is very simple: don’t lie with your smile. Words can be lies, but the solution is not to refrain from speaking unless you can’t help yourself; it’s to tell the truth when you open your mouth. So tell the truth when you smile with your mouth, too. And since actions are choices, one very viable option, if you smile at someone, is to follow through and (in the moment) be nice.

Anyone (sane) who has a dog knows that in many ways they’re terrible creatures. They steal your food, destroy everyday items, throw up on your floor when they’ve eaten things that aren’t food, get dog hair everywhere, and make your couches stink of dog. And yet, people love dogs who do these things to them for a very simple reason: any time you come home, your dog smiles at you and wags its tail and is glad to see you. And it’s human nature that it’s impossible to be angry at someone who is just so gosh darned happy that you’re in the same room as them.

People in authority are rarely there because they have a history of failure and incompetence at dealing with people; it may be a convenient movie shorthand that people in authority are stone-faced, grumpy, and stern, but in real life people in positions of authority are generally friendly. It’s easy to read too much into that friendliness, of course—they’re only friendly so long as you stay on the right side of what you’re supposed to be doing—but this unrealistic movie shorthand makes for far less interesting characters.

And I suppose I should note that there are some people in positions of authority who are often stone-faced and grim, but these are usually the people responsible for administering discipline to those already known to be transgressors. This is especially true of those dealing with children, who have little self-control and less of a grasp of the gravity of most situations they’re in, and who need all the help they can get in realizing that it’s not play time. By contrast, during the short time I was able to take part in my parish’s prison ministry, I noticed that the prison guards were generally friendly (if guardedly so) with the inmates. Basically, being friendly can invite people to try to take liberties, but being grumpy usually gets far less cooperation, and outside of places like Nazi death camps where you are actually willing to shoot people for being uncooperative, cooperation is usually worth far more than the inconvenience of people occasionally trying to take liberties and having to be told “no.”

But most of the actors who play authority figures don’t know any of this; and when you research the individual actors they often turn out to be goofballs who don’t like authority and whose portrayal of it is largely formed by what they most dislike about it.

Atheism is Not a Religion

This is the script to my video, Atheism is Not a Religion. As always, it was written to be listened to when I read it aloud, but it should be pretty readable as text, too.

Today we’re going to look at a topic which a casual survey of atheist YouTube channels and Twitter feeds suggests is of importance to many atheists: that atheism is not a religion. Now, since the one thing you can’t convict internet atheists of is originality, I assume that this is because there are Christians who claim that atheism is a religion. Of course what they probably mean by this is that atheism entails a set of metaphysical beliefs. And this is true enough, at least as a practical assumption, even if some atheists will scream at you until they’re blue in the face that it’s not what they believe in theory. But merely having metaphysical beliefs does not make something a religion; it makes it a philosophy or, in more modern terms, a world-view. But a religion is far more than merely a world-view or a set of beliefs. As Saint James noted, the demons believe in God.

The first and most obvious thing which atheism lacks is: worship. Atheists do not worship anything. I know that Auguste Comte tried to remedy this with his calendar of secular holidays, but that went nowhere and has been mostly forgotten except perhaps in a joke G. K. Chesterton made about it. A few atheists have made a half-hearted go of trying to worship science. And if that had any lasting power, Sunday services might include playing a clip from Cosmos: A Spacetime Odyssey. But the would-be science worshippers haven’t gotten that far, and it is highly doubtful they ever will.

Secular Humanism is sometimes brought up as something like a religious substitute, but so far it only appears to be a name, a logo, some manifestos no one cares about, and the belief that maybe it’s possible to have morality without religion. And humanity is not a workable object of worship anyway. First, because it’s too amorphous to worship—as Chesterton noted, a god composed of seven billion persons, neither dividing the substance nor confounding the persons, is hard to believe in. And second, because worshipping humanity involves worshipping Hitler and Stalin and Mao and so forth.

Which brings us to Marxism, which is perhaps the closest thing to a secular religion so far devised. But while Marxism does focus the believer’s attention on a utopia which will someday arrive, and certainly gets people to be willing to shed an awful lot of innocent blood to make it happen sooner, I don’t think that this really constitutes worship. It’s a goal, and men will kill and die for goals, but they can’t really worship goals. Goals only really exist in the people who have them, and you can only worship what you believe actually exists.

It is sometimes argued that within a marxist utopia people worship the state, but while this is something put on propaganda posters, the people who lived in marxist nations don’t report anyone actually engaging in this sort of worship, at least not sincerely.

And I know that some people will say that atheists worship themselves—I suspect because almost all atheists define morality as nothing more than a personal preference—but I’ve never seen that as anything more than a half-hearted attempt to answer the question of “what is the ground of morality?”, rather than any sort of motivating belief. And in any event, it is inherently impossible to worship oneself. Worshipping something means recognizing it as above oneself, and it is not possible to place oneself above oneself. I think the physical metaphor suffices: if you are kneeling, you can’t look up and see your own feet. You might be able to see an image of yourself in a mirror, but that is not the same, and whatever fascination it might have is still not worship. So no, atheism does not worship anything.

The second reason why atheism is not a religion is that atheism gives you no one to pray to. Prayer is a very interesting phenomenon, and is much misunderstood by those who are not religious and, frankly, many who are, but it is, at its core, talking with someone who actually understands what is said. People do not ever truly understand each other, because the mediation of words always strips some of the meaning away, and the fact that every word means multiple things always introduces ambiguity. Like all good things in religion this reaches its culmination in Christianity, but even in the public prayers said over pagan altars, there is the experience of real communication, in its etymological sense: com—together; unication—being one. It is in prayer—and only in prayer—that we are not alone. Atheists may decry this as talking with our imaginary friends if they like—and many of them certainly seem to like to—but in any event they are left where all men who are not praying are left: alone in the crowd of humanity, never really understood and so only ever loved very imperfectly at best. (I will note that this point will be lost on people who have never taken the trouble to find out what somebody else really means, and so assume that everyone else means exactly the same things that they would mean by those words, and so assume that all communication goes perfectly. You can usually identify such people by the way they think that everyone around them who doesn’t entirely agree with them is stupid. It’s the only conclusion left open to them.)

The third reason why atheism is not a religion is that it does not, in any way, serve the primary purpose of religion. The thing you find common to all religions—the thing at the center of all religions—is putting man into his proper relation with all that is; with the cosmos, in the Greek sense of the word. Anyone who looks at the world sees that there is a hierarchy of being; that plants are more than dust and beasts are more than plants and human beings are more than beasts. But if you spend any time with human beings—and I mean literally any time—you will immediately know that human beings are not the most that can be. All that we can see and hear and smell and taste and touch in this world forms an arrow which does not point at us but does run through us, pointing at something else. The primary purpose of a religion is to acknowledge that and to get it right. Of course various religions get it right to various degrees; those who understand that it points to an uncreated creator who loved the world into existence out of nothing get it far more right than those who merely believe in powerful intelligences which are beyond ours. Though if you look carefully, even those who apparently don’t will often seem to have their suspicions that there’s something important they don’t know about. But be that as it may, all religions know that there is something more than man, and give their adherents a way of putting themselves below what they are below; of standing in a right relation to that which is above them. In short, the primary purpose of all religion is humility.

And this, atheism most certainly does not have. It doesn’t matter whether you define atheism as a positive denial or a passive lack; either way atheism gives you absolutely no way to be in a right relationship to anything above you, because it doesn’t believe in anything above you. Even worse, atheism has a strong tendency, at least in the west, to collapse the hierarchy of being in the other direction, too. It is no accident that pets are acquiring human rights and there are some fringe groups trying to sue for the release of zoo animals under the theory of habeas corpus. Without someone who intended to make something out of the constituent particles which make us up, there is ultimately no reason why any particular configuration of quarks and electrons should mean anything more than any other one; human beings are simply the cleverest of the beasts that crawl the earth, and the beasts are simply the most active of the dust which is imprisoned on the earth.

We each have our preferences, of course, but anyone with any wide experience of human beings knows that we don’t all have the same preferences, and since the misanthropes are dangerous and have good reason to lie to us, those who don’t look out for themselves quickly become the victims of those who do. Call it foreigners or racists or patriarchy or gynocentrism or rape culture or the disposable male or communism or capitalism, or call it nature red in tooth and claw if you want to be more poetic about it, but sooner or later you will find out that human beings, like the rest of the world, are dangerous.

Religious people know very well that other human beings are dangerous; there is no way in this world to get rid of temptation and sin. But religion gives the possibility of overcoming the collapsing in upon ourselves, from which atheism offers no escape.

For some reason we always talk about pride puffing someone up, but this is almost the exact opposite of what it actually does. It’s an understandable mistake, but it is a mistake. Pride doesn’t puff the self up, it shrinks it down. It just shrinks the rest of the world down first.

In conclusion, I can see why my co-religionists would be tempted to say that atheism is a religion. There are atheist leaders who look for all the world like charismatic preachers and atheist organizations that serve no discernible secular purpose. Though not all atheists believe the same things, still, most believe such extremely similar things that they could be identified on that basis. Individual atheists almost invariably hold unprovable dogmas with a blind certainty that makes the average Christian look like a skeptic. And so on; one could go on at length about how atheism looks like a religion. But all these are mere external trappings. Atheism is not a religion, which is a great pity, because atheists would be far better off if it were.

Two Interesting Questions

On Twitter, @philomonty, who I believe is best described as an agnostic (he can’t tell whether nihilism or Catholicism is true), made two video requests. Here are the questions he gave me:

  1. If atheism is a cognitive defect, how may one relieve it?
  2. How can an atheist believe in Christ, when he does not know him? Not everyone has mystical experiences, so not everyone has a point of contact which establishes trust between persons, as seen in everyday life.

I suspect that I will tackle these in two separate videos, especially because the second is a question which applies to far more than just atheists. They’re also fairly big questions, so it will take me a while to work out how I want to answer them. 🙂

The first question is especially tricky because I believe there are several different kinds of cognitive defects which can lead to atheism. Not everyone is a mystic, but if a person who isn’t demands mystical experience as the condition for belief, he will go very wrong. If a person who is a mystic has mystical experiences but denies them, he will go very wrong, but in a different way. There are also people who are far too trusting of the culture they’re in, thinking that fitting into it is the fullness of being human, so they will necessarily reject anything which makes it impossible or even just harder to fit in. These, too, will go very wrong, but in a different way from the previous ones.

To some degree this is a reference to my friend Eve Keneinan’s view that atheism is primarily caused by some sort of cognitive defect, such as an inability to sense the numinous (basically, lacking a sensus divinitatis). Since I’ve never experienced that myself, I’m certain it can’t be the entire story, though to the degree that it is part of the story it would come under the category of non-mystics who demand mystical experience. Or, possibly, mystics who have been damaged by something, though I am very dubious about that possibility. God curtails the amount of evil possible in the world to what allows for good, after all, so while that is not a conclusive argument, it does seem likely to me that God would not permit anything to make it impossible for a person to believe in him.

Anyway, these are just some initial thoughts on the topic which I’ll be mulling over as I consider how to answer. Interesting questions.

The Dunning-Kruger Effect

(This is the script for my video about the Dunning-Kruger effect. While I wrote it to be read out loud by someone who inflects words like I do, i.e. by me, it should be pretty readable as text.)

Today we’re going to be looking at the Dunning-Kruger effect. This is the other topic requested by PickUpYourPantsPatrol—once again, thanks for the request!—and if you’ve disagreed with anyone on the internet in the last few years, you’ve probably been accused of suffering from it.

Perhaps the best summary of the popular version of the Dunning-Kruger effect was given by John Cleese:

The problem with people like this is that they have no idea how stupid they are. You see, if you are very very stupid, how can you possibly realize that you are very very stupid? You’d have to be relatively intelligent to know how stupid you are. There’s a wonderful bit of research by a guy called David Dunning who’s pointed out that to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place. This means, if you’re absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.

There are plenty of things to say about this summary, as well as the curious problem that if an idiot is talking to an intelligent person, absent reputation being available, there is a near-certainty that both will think the other an idiot. But before I get into any of that, I’d like to talk about the Dunning-Kruger study itself, because I read the paper which Dunning and Kruger published in 1999, and it’s quite interesting.

The first thing to note about the paper is that it actually discusses four studies which the researchers did, trying to test specific ideas about incompetence and self-evaluation which the paper itself points out were already common knowledge. For example, they have a very on-point quotation from Thomas Jefferson. But, they note, this common wisdom that fools often don’t know that they’re fools has never been rigorously tested in the field of psychology, so they did.

The second thing to note about this study is that—as I understand is very common in psychological studies—their research subjects were all students taking psychology courses who received extra credit for participating. Now, these four studies were conducted at Cornell University, and the subjects were all undergraduates, so right away generalizing to the larger population is suspect, since there’s good reason to believe that undergraduates in an Ivy League university have more than a few things in common which they don’t share with the rest of humanity. This is especially the case because the researchers were testing self-evaluation of performance, which is something that Cornell undergraduates were selected for and have a lot invested in. They are, in some sense, the elite of society, or so at least I suspect most of them have been told, even if not every one of them believes it.

Moreover, the tests which they were given—which I’ll go into detail about in a minute—were all academic tests, given to people who were there because they had generally been good at academics. Ivy League undergraduates are perhaps the people most likely to give falsely high impressions of how good they are at academic tests. This is especially the case if any of these were freshmen classes (they don’t say), since a freshman at an Ivy League school has impressed the admissions board but hasn’t had the opportunity to fail out yet.

So, right off the bat, the general utility of this study in confirming popular wisdom is suspect; popular opinion may have to stand on its own. On the other hand, this may be nearly the perfect study to explain the phenomenon Nassim Nicholas Taleb described as Intellectual Yet Idiot—credentialed people who have the role of intellectuals yet little of the knowledge and none of the wisdom for acting the part.

Be that as it may, let’s look at the four studies described. The first study is in many ways the strangest, since it was a test of evaluating humor. They created a compilation of 30 jokes from several sources, then had a panel of 8 professional comedians rate these jokes on a scale from 1 to 11. After throwing out one outlier, they took the mean answers as the “correct” answers, then gave the same test to “65 Cornell undergraduates from a variety of courses in psychology who earned extra credit for their participation”.

They found that the people in the bottom quartile of test scores, who by definition have an average rank at the twelfth percentile, guessed (on average) that their rank was at the 66th percentile. The bottom three quartiles overestimated their rank, while the top quartile underestimated theirs, thinking that they were at (eyeballing it from the graph) the 75th percentile when in fact (again, by definition) they were at the 88th.

This is, I think, the least interesting of the studies, first because the way they came up with “right” and “wrong” answers is very suspect, and second because this isn’t necessarily about mis-estimation of a person’s own ability, but could be entirely about mis-estimating their peers’ ability. The fact that everyone put their average rank in the class at between the 66th and 75th percentiles may just mean that, in default of knowing how they did, Cornell students are used to guessing that they got somewhere between a B- and a B+. Given that they were admitted to Cornell, that guess may have a lot of history to back it up.
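
As an aside, the “by definition” percentiles above are just midpoints: a quartile spans 25 percentile points, so its average member sits in the middle of its range. Here is that arithmetic as a trivial Python sketch (nothing in it comes from the paper itself; the rounding to “twelfth” and “88th” is the paper’s):

```python
# Each quartile spans 25 percentile points, so its average member
# sits at the midpoint of its range.
bottom_avg = (0 + 25) / 2    # 12.5, reported as the twelfth percentile
top_avg = (75 + 100) / 2     # 87.5, reported as the 88th percentile
print(bottom_avg, top_avg)   # 12.5 87.5
```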

The next test, though unfortunately only given to 45 Cornell students, is far more interesting both because it used 20 questions on logical reasoning taken from an LSAT prep book—so we’re dealing with questions where there is an unambiguously right answer—and because in addition to asking students how they thought they ranked, they asked the students how many questions they thought that they got right. It’s that last part that’s really interesting, because that’s a far more direct measure of how much the students thought that they knew. And in this case, the bottom quartile thought that they got 14.2 questions right while they actually got 9.6 right. The top quartile, by contrast, thought that they got 14 correct when they actually got 16.9 correct.

So, first, the effect does in fact hold up with unambiguous answers. The bottom quartile of performers thought that they got more questions right than they did. So far, so good. But the magnitude of the error is not nearly as great as it was for the ranking error, especially for the bottom quartile. Speaking loosely, the bottom quartile knew half of the material and thought that they knew three quarters of it. That is a significant error, in the sense of being a meaningful error, but at the same time they thought that they knew about 48% more than they did, not 48,000% more than they did. The 11 Cornell undergraduates in the bottom quartile of this test did have an over-inflated sense of their ability, to be sure, but they also had a basic competence in the field. To put this in perspective, the top quartile only scored 76% better than the bottom quartile.

The next study was on 84 Cornell undergrads who were given a 20-question test of standard English grammar taken from a National Teacher Examination prep guide. This replicated the basic findings of the previous study, with the bottom quartile estimating that they got 12.9 questions right versus a real score of 9.2. (Interestingly, the top quartile very slightly over-estimated their score as 16.9 when it was actually 16.4.) Again, all these are averages so the numbers are a little wonky, but anyway this time they over-estimated their performance by 3.7 points, or 40%. And again, they got close to half the questions right, so this isn’t really a test of people who are incompetent.
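
To make the percentages in the last two paragraphs explicit, here is the arithmetic as a small Python sketch. The averages are the ones quoted above from the paper; the function is mine and just computes relative over-estimation:

```python
def overestimate_pct(estimated, actual):
    """How much larger the self-estimate is than the actual score, in percent."""
    return 100 * (estimated - actual) / actual

# Bottom-quartile averages quoted above (estimated right vs. actually right):
print(f"logic test:   {overestimate_pct(14.2, 9.6):.0f}%")   # ~48%
print(f"grammar test: {overestimate_pct(12.9, 9.2):.0f}%")   # ~40%

# And the top quartile's actual logic score relative to the bottom quartile's:
print(f"top vs. bottom: {overestimate_pct(16.9, 9.6):.0f}%") # ~76%
```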

There’s another thing to consider in both studies, which is how many questions the students thought they got wrong. On the logic test they estimated 5.4 errors, and on the grammar test 7.1 errors, and while these were under-estimates, they were correct that they did in fact get at least that many wrong. Unfortunately these are aggregate numbers (asked after they handed the test in, I believe), so we don’t know their accuracy in gauging whether they got particular questions wrong, but on the logic test they correctly estimated about 40% of their error and on the grammar test they correctly estimated about 65% of their error. That is, while they did unequivocally have an over-inflated sense of their performance, they were not wildly unrealistic about how much they knew. But of course these are both subjects they had studied in the past, and their test scores did demonstrate at least basic competence with them.

The fourth study is more interesting, in part because it was on a more esoteric subject: it was a 10-question test, given to 140 Cornell undergrads, about set selection. Each problem described 4 cards and gave a rule which they might match. The question was which card or cards needed to be flipped over to determine whether those cards match the rule. (The classic version of this kind of problem: four cards showing A, K, 4, and 7, and the rule “if a card has a vowel on one side, it has an even number on the other.” To test the rule you have to flip the A and, less obviously, the 7; flipping the 4 tells you nothing.) Each question was like that, so we can see why they only asked ten questions.

They were asked to rate how they did in the usual way, but then half of them were given a short packet that took about 10 minutes to read explaining how to do these problems, while the other half was given an unrelated filler task that also took about 10 minutes. They were then asked to rate their performance again, and in fact the group who learned how to do the problems did revise their estimate of their performance, while the other group didn’t change it very much.

And in this test we actually see a gross mis-estimation of ability by the incompetent. The bottom quartile scored on average 0.3 questions correct, but initially thought that they had gotten about 5.5 questions correct. For reference, the top quartile initially thought that they had gotten 8.9 questions correct when they had in fact gotten all ten correct. And on the second rating, the untrained bottom quartile slightly raised their estimation of their score (by six tenths of a question), while among the trained people the bottom quartile reduced their estimation by 4.3 questions. (In fact the two groups had slightly different performances, which I averaged together; so the bottom quartile of the trained group estimated that they got exactly one question right.)

This fourth study, it seems to me, is finally more of a real test of what everyone wants the Dunning-Kruger effect to be about. An average of 0.3 questions right corresponds roughly to 11 of the 35 people in the bottom quartile getting one question right while the rest got every question wrong. The incompetent people were actually incompetent. Further, their estimate of their own performance was more than eighteen times their actual score. So here, finally, we come to the substance of the quote from John Cleese, right?
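
As a back-of-envelope check on those last two claims, here is the same sort of Python sketch. The averages are from the paper as quoted above; the assumption that everyone scored either 0 or 1 is my inference, not something the paper states:

```python
actual, estimated = 0.3, 5.5  # bottom quartile: mean actual score vs. mean self-estimate
quartile = 140 // 4           # 35 students per quartile

# The mean self-estimate as a multiple of the mean actual score:
print(f"{estimated / actual:.1f}x")         # 18.3x, i.e. more than eighteen times

# If everyone scored either 0 or 1, a mean of 0.3 requires about this many 1s:
print(f"{actual * quartile:.1f} students")  # 10.5, i.e. roughly 11 of the 35
```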

Well… maybe. There are two reasons I’m hesitant to say so, though. The first is the fact that these are still all Cornell students, so they are people who are used to being above average and doing well on tests and so forth. Moreover, virtually all of them would have never been outside of academia, so it is very likely that they’ve never encountered a test which was not designed to be passable by most people. If nothing else, it doesn’t reflect well on a teacher if most of his class gets a failing grade. And probably most importantly, the skills necessary to solve these problems are fairly close to the sort of skills that Ivy League undergrads are supposed to have, so this skillset at which they are incompetent being similar to a skillset at which they are presumably competent might well have misled them.

The second reason I’m hesitant to say that this study confirms the John Cleese quote is that the incompetent people estimated that they got 55% of the questions right, not 95% of the questions right. That is to say, incompetent people thought that they were merely competent. They didn’t think that they were experts.

In the conclusion of the paper, Dunning and Kruger talked about some limitations of their study, which I will quote because it’s well written and I want to do them justice.

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods.

They go on to note that in some domains knowledge is largely the substance of skill, as in grammar, whereas in others knowledge and skill are not the same thing, as in basketball.

They also note that there is a minimum amount of knowledge required to mistake oneself for competent. As the authors say:

Most people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct an 8-cylinder engine, or diagnose acute disseminated encephalomyelitis.

So where does this leave us with regard to the quote from John Cleese? I think that the real issue is not so much about the inability of the incompetent to estimate their ability, but the inability of the incompetent to reconcile new ideas with what they do actually know. Idiots may not know much, but they still know some things. They’re not rocks. When a learned person tells them something, they are prone to reject it not because they think that they already know everything, but because it seems to contradict the few things they are sure of.

There is a complex interplay between intelligence and education—and I’m talking about education, mind, not mere schooling—where intelligence allows one to see distinctions and connections quickly, while education gives one the framework of what things there are that can be distinguished or connected. If a person lacks the one or the other—and especially if they lack both—understanding new things becomes very difficult, because it is hard to connect what was said to what is already known, as well as to distinguish it from possible contradictions to what is already known. If the learned, intelligent person isn’t known by reputation to the idiot, the idiot has no way of knowing whether the things said don’t make sense to him because they are nonsense or because they make too much sense for him to follow, and a little experience of the world is enough to make many if not most people sufficiently cynical to assume the former.

And I think that perhaps the best way to see the difference between this and the Dunning-Kruger effect is by considering the second half of the fourth experiment: the incompetent people learned how to do what they initially couldn’t. That is, after training they became competent. That is not, in general, our experience of idiots.

Until next time, may you hit everything you aim at.

Why I Cringe When People Criticize Capitalism (in America)

Every time I hear a fellow Christian (usually Catholic, often someone with the good sense to be a fan of G.K. Chesterton) criticize capitalism, I cringe, but not for the reason I suspect most of them would expect. Why I cringe will take a little explanation, but it’s rooted in the fact that there are actually two very different things which go by the name capitalism.

The first is a theory proposed by Adam Smith that, to oversimplify and engage in some revisionist history which is not fair to him but which would take too long to go into further, holds that virtue is unreliable: if we can harness vice to do the work of virtue, we can get the same effect much more reliably. Thus if we appeal to men’s self-interest, they will do what they ought with more vigor than if we appealed to their duty and love of their fellow man. Immanuel Kant’s essay Perpetual Peace has a section which may be taken as a summary of this attitude:

The problem of the formation of the state, hard as it may sound, is not insoluble, even for a race of devils, granted that they have intelligence. It may be put thus:—“Given a multitude of rational beings who, in a body, require general laws for their own preservation, but each of whom, as an individual, is secretly inclined to exempt himself from this restraint: how are we to order their affairs and how establish for them a constitution such that, although their private dispositions may be really antagonistic, they may yet so act as a check upon one another, that, in their public relations, the effect is the same as if they had no such evil sentiments.” Such a problem must be capable of solution. For it deals, not with the moral reformation of mankind, but only with the mechanism of nature; and the problem is to learn how this mechanism of nature can be applied to men, in order so to regulate the antagonism of conflicting interests in a people that they may even compel one another to submit to compulsory laws and thus necessarily bring about the state of peace in which laws have force.

Capitalism in this sense was this general problem applied to economics: we need men to work, but all men are lazy. We can try to appeal to men to be better, but it is much simpler and more reliable to show them how hard work will satisfy their greed.

This version of capitalism is a terrible thing, and by treating men as devils has a tendency to degrade men into a race of devils. But there is something important to note about it, which is that it doesn’t really demand much of government or of men. While it appeals to men’s greed, it does not impose a requirement that a craftsman charge an exorbitant price rather than a just price. It does not forbid a man taking a portion of his just profits and giving it to the poor. It tends to degrade men into devils, but it does not produce a form of government which demands that they become devils.

That was left to Marxism, which by its materialism demanded that all men surrender their souls to the state. Marxism is as wrong a theory of human beings as the Capitalism of the Enlightenment, but it demands a form of government which is far less compatible with human virtue. Further, it demands a form of government which is intrinsically incompatible with natural justice—depriving, as it does, all men of the property necessary to fulfill their obligations to their family and to their neighbors. Marxism inherently demands that all to whom it applies become a race of devils.

Of course, Marxism was never historically realized in its fullness, since, as Roger Scruton observed, it takes an infinite amount of force to make people do what is impossible. But enough force was applied to create the approximation of Marxism known as The Soviet Union (though according to a Russian friend of mine who escaped shortly before the Soviet Union collapsed, a more accurate translation would have been “The Unified Union of United Allies”). This global superpower was (at least apparently) bent on conquering the world in the name of Marx—well, in the name of Lenin, or communism, or The People; OK, at least bent on conquering the world. And to a marxist, who doesn’t really believe in personal autonomy and thus doesn’t believe in personal virtue, everyone else looks like a Capitalist, in the original sense of the word, since anything which is individual must inherently be greed.

So they called Americans capitalists. But if the devils in hell spit some criticism at you, it is only natural to take it as a compliment, and partly because of this and partly for lack of a better term, Americans started calling themselves capitalists. If the people with the overpopulated death camps for political prisoners in the frozen wastelands of Siberia despise us for being capitalists, then being a capitalist must be a pretty good thing. But in embracing the term capitalist, people were not thinking of Adam Smith’s economic theory or the problem Kant wrestled with in how to get a race of devils to cooperate; they were thinking of what they were, and just using the name capitalist to describe that.

And here’s where we come to the part that makes me cringe when I hear fellow Christians complain about Capitalism. The United States of America has had many sins, but it has never been capitalist in the philosophical sense. Much of what became The United States was founded as religious colonies, though to be sure there were economic colonies as well. But the economic colonies, which had all of the vices that unsupervised people tend to, were still composed of religious people who at least acknowledged the primacy of virtue over vice in theory. And for all the problems with protestantism, the famous “Protestant Work Ethic” was the diametric opposite of philosophical capitalism. The whole idea of the protestant work ethic is that men should work far beyond what is needed, because work is virtuous and because idleness is dangerous. Perhaps it was always more of a theory than a practice, but even so it was not the capitalist theory that men should work to satisfy their greed.

For perhaps the first century after the founding of The United States, it was a frontier nation in which people expanded and moved around at fairly low population densities. It takes time to set up governments, and small groups of people can resolve their own differences well enough, most of the time, so the paucity of government, compared to what we’re used to today (and, in a different form, to what people were used to in Europe in the middle ages), was largely due to the historical accident of low population densities, and not to any sort of philosophical ideal that greed is the highest good, making government practically unnecessary except for contract enforcement.

And while it is true that this environment gave birth to the robber barons who made a great deal of money treating their fellow men like dirt, it also gave rise to trust-busters and government regulation designed to curb the vices of men who did not feel like practicing even minimal virtue toward their fellow man. Laws and regulations take time to develop, especially in a land without computers and cell phone cameras; before the advent of radio it took more than a little time to convince many people of some proposition, because skilled orators could only do the convincing one crowd at a time.

Moreover, the United States has never had a government free from corruption, but powerful men buying off politicians was not what the United States was supposed to be; all things in this fallen world are degenerate versions of themselves. Slowness to act on common principles in a fallen world does not mean that a people does not hold those principles, only that hard things like overcoming corruption are difficult and time-consuming to do.

But throughout the history of the United States, if you walked up to the average citizen and asked him, “ought we, as a people, to encourage men to be honest, hard working, and generous, or ought we to show each man that at least the first two are often in his self-interest and then encourage him to be as selfish and greedy as possible?” you would have had to ask a great many people indeed to come across someone who would cheerfully give you the second answer. Being willing to give that second answer is largely a modern degeneracy of secularists who know only enough economics and history to be dangerous, and who for the most part think that you’re asking whether the government should micro-manage people’s lives to force them to be honest, hard working, and generous. Americans have many vices, but the least reliable way possible to find out what they are is to ask us.

I will grant that philosophical capitalism is also, to some degree, what is proposed by advertising. Indulge yourself! It’s sinfully delicious! You’re worth it! You deserve it! Everything is about making you happy!

I think that this may be why I cringe the most when my fellow Christians complain about our capitalist society; they should have learned by now not to believe everything they see on television.

Debunking Believe-or-Burn

This is the script from my video debunking believe-or-burn. It was written to be read aloud, but it should be pretty readable. Or you could just listen to it.

Today we’re going to be looking at how abysmally wrong the idea of “believe or burn” (which I prefer to render as “say the magic words or burn”) is. And to be clear, I mean wrong, not merely that I don’t like it or that it’s just my opinion. I’m Catholic, not evangelical, so I’m talking about how it contradicts the consistent teaching of the church since its inception 2000 years ago (and hence is also the position of the Eastern Orthodox, the Copts, etc.), and moreover how one can rationally see why “say the magic words or burn” cannot be true.

I’m not going to spend time explaining why non-Christian religions don’t believe you have to say the magic words or burn, because for most of them, it’s not even relevant. In Hinduism, heavens and hells are related to your karma, not to your beliefs, and they’re all temporary anyway—as the story goes, the ants have all been Indra at some point. In Buddhism you’re trapped in the cycle of reincarnation and the whole point is to escape. To the degree that there even is a concept of hell in Buddhism, you’re there now and maybe you can get out. Many forms of paganism don’t even believe in an afterlife, and where they do—and what you do in life affects what happens to you in the afterlife—what happens to you is largely based on how virtuously you lived in society, not on worshipping any particular gods. Animistic religions are often either similar to pagan religions or hold that the dead stick around as spirits and watch over the living. As for the monotheistic religions, few of them have a well-defined theology on this point. Their attitude tends to be, “here is the way to be good, it’s bad to be evil, and for everyone else, well, that’s not a practical question.” For most of the world’s religions, “say the magic words or burn,” isn’t even wrong. And Islam is something of an exception to this, but I’m not going to get into Islam because the Quran doesn’t unambiguously answer this question, and after Al-Ghazali’s triumph over the philosophers in the 11th century, there really isn’t such a thing as Islamic theology in the same sense that you have Christian theology. Christianity holds human reason, being finite, to be unable to comprehend God, but to be able to reason correctly about God within its limits. Since Al-Ghazali wrote The Incoherence of the Philosophers, the trend in Islam has been to deny that human reason can say anything about God past what he said about himself in the Quran. As such, any question not directly and unambiguously answered in the Quran—which, recall, is poetry—is not really something you can reason about. So as a matter of practicality I think Islam should be grouped with the other monotheisms which hold the question of what happens to non-believers acting in good faith to be impractical. And in any event there are hadith and a passage in the Quran which do talk about some Jews and Christians entering paradise, so make of that what you will.

There isn’t an official name for the doctrine of “say the magic words or burn”, but I think it’s best known because of fundamentalists who say that anyone who doesn’t believe will burn in hell. I think that the usual form is saying that everyone who isn’t a Christian will burn in hell, for some definition of Christian that excludes Roman Catholics, Eastern Orthodox, Anglicans, and anyone else who doesn’t think that the King James version of the bible was faxed down from heaven and is the sole authority in human affairs. You generally prove that you’re a Christian in this sense by saying, “Jesus Christ is my personal lord and savior”, but there’s no requirement that you understand what any of that means, so it functions exactly like a magical incantation.

As I discussed in my video on fundamentalists, when they demand people speak the magic words, what they’re asking for is not in any sense a real religious formulation, but actually a loyalty pledge to the dominant local culture. (Which is fundamentalist—all tribes have a way of pledging loyalty.) But the concept of “say the magic words or burn” has a broader background than fundamentalists, going all the way back to the earliest Protestant reformers and being, more or less, a direct consequence of what Martin Luther and John Calvin meant by the doctrine of Sola Fide.

Before I get into the origin of “say the magic words or burn”, let me give an overly brief explanation of what salvation actually means, to make sure we’re on the same page. And to do that, I have to start with what sin is: sin means that we have made ourselves less than what we are. For example, we were given language so that we could communicate truth. When we lie, not only do we fail in living up to the good we can do, we also damage our ability to tell the truth in the future. Lying (and all vices) all too easily become habits. We have hurt others and damaged ourselves. Happiness consists of being fully ourselves, and so in order to be happy we must be fixed. This is, over-simplified, what it means to say that we need salvation. Christianity holds that Jesus has done the work of that salvation, with which we will be united after death if we accept God’s offer, and so we will become fixed, and thus, being perfect, will be capable of eternal happiness. That’s salvation.

Some amount of belief is obviously necessary to this, because if you don’t believe the world is good, you will not seek to be yourself. This is why nihilists like pickup artists are so miserable. They are human but trying to live life like some sort of sex-machine. They do lots of things that do them no good, and leave off doing lots of things that would do them good. Action follows belief, and so belief helps us to live life well.

We all have at least some sense of what is true, though, or in more classical language the natural law is written on all men’s hearts. It is thus possible for a person to do his best to be good, under the limitations of what he knows to be good. God desires the good of all of his creatures, and while we may not be able to see how a person doing some good things, and some evil things under the misapprehension that they are good, can be saved, we have faith in God that he can do what men can’t. Besides, it doesn’t seem likely that God would permit errors to occur if they couldn’t be overcome. While we don’t know who will be saved, it is permissible to hope that all will be saved. As it says in the Catechism of the Catholic Church, “Those who, through no fault of their own, do not know the Gospel of Christ or his Church, but who nevertheless seek God with a sincere heart, and, moved by grace, try in their actions to do his will as they know it through the dictates of their conscience – those too may achieve eternal salvation.”

OK, so given that, where did the evil and insane idea of “say the magic words or burn” come from? Well, Sola Fide originated with Martin Luther, who as legend has it was scrupulous and couldn’t see how he could ever be good enough to enter heaven (I say, “as legend has it” because this may be an overly sympathetic telling). For some reason he couldn’t do his best and trust God for the rest, so he needed some alternative to make himself feel better. Unfortunately, being Christian, he was stuck with the word faith, which in the context of Christianity means trusting God. Martin Luther’s solution was to redefine the word faith to mean—well, he wasn’t exactly consistent, but at least much of the time he used it to mean something to the effect of “a pledge of allegiance”—basically, a promise of loyalty. The problem with that is that pledging your allegiance is just words. There’s even a parable Jesus told about this very thing: a man had two sons and told them to go to work in his fields. The one son said no, but later thought better of it and went to work in the fields. The other said, “yes, sir” but didn’t go. Which did his father’s will? And please note, I’m not citing that to proof-text that Martin Luther was wrong. One Bible passage with no context proves nothing. No, Martin Luther was obviously wrong. I’m just mentioning this parable because it’s an excellent illustration of the point about actions versus words. But as a side-note, it’s also an excellent illustration of why mainline protestants often have relatively little in common with Martin Luther and why it was left to the fundamentalists to really go whole-hog on Martin Luther’s theology: it was a direct contradiction of what Jesus himself taught.

John Calvin also had a hand in “say the magic words or burn”, though it was a bit different from the influence of Martin Luther. Though Luther and Calvin did agree on many points, they tended to agree for different reasons. While Martin Luther simply repudiated free will and the efficacy of reason—more or less believing that they never existed—John Calvin denied them because of the fall of man. According to Calvin, man was free and his reason worked before the first sin, but all that was destroyed with the first sin, resulting in the total depravity of man. Whereas Martin Luther thought that free will was nonsensical even as a concept, John Calvin understood what it meant but merely denied it. Ironically, John Calvin’s doctrines being a little more moderate than Martin Luther’s probably resulted in them having a much larger impact on the world; you had to be basically crazy to agree with Martin Luther, while you only needed to be deeply pessimistic to agree with John Calvin. Luther held that God was the author of evil, while Calvin at least said that all of the evil was a just punishment for how bad the first sin was. If outsiders can’t readily tell the difference between Calvin’s idea of God and the orthodox idea of the devil, insiders can’t even tell the difference between them in Martin Luther’s theology. Luther literally said that he had more faith than anyone else because he could believe that God is good despite choosing to damn so many and save so few. The rest of us, who don’t even try to believe blatant logical contradictions about God, just didn’t measure up. In the history of the world, Martin Luther is truly something special.

However, since both Luther and Calvin denied that there is, these days, any such thing as free will, Sola Fide necessarily took on a very strange meaning. Even a pledge of allegiance can’t do anything if you’re not the one who made it. So faith ends up becoming, especially for Calvin, just a sign that you will be saved. The thing is, while this is logically consistent—I mean, it may contradict common sense, but it doesn’t contradict itself—it isn’t psychologically stable. No one takes determinism seriously. The closest idea which is at least a little psychologically stable is that God is really just a god, if a really powerful god, so pledging allegiance is like becoming a citizen of a powerful, wealthy country. You’ll probably be safe and rich, but if you commit a crime you might spend some time in jail or even be deported. I realize that’s not the typical metaphor, but it’s fairly apt, and anyone born in the last several hundred years doesn’t have an intuitive understanding of what a feudal overlord is. This understanding of Sola Fide can’t be reconciled with Christianity, the whole point of which is to take seriously that God is the creator of the entire world and thus stands apart from it and loves it all. But this understanding of Sola Fide can plug into our instinct to be part of a tribe, which is why, if you don’t think about it, it can be a stable belief.

So we come again to the loyalty pledge to the group—in a sense we have to because that is all a statement of belief without underlying intellectual belief ever can be—but with this crucial difference: whereas the fundamentalist generally is demanding loyalty to the immediate secular culture, the calvinist-inspired person can be pledging loyalty to something which transcends the immediate culture. I don’t want to oversell this because every culture—specific enough that a person can live in it—is always a subculture in a larger culture. But even so the calvinist-inspired magic-words-or-burn approach is not necessarily local. It is possible to be the only person who is on the team in an entire city, just like it’s possible to be the only Frenchman in Detroit. As such this form of magic-words-or-burn can have a strong appeal to anyone who feels themselves an outsider.

And the two forms of magic-words-or-burn are not very far apart, and each can easily become the other as circumstances dictate. And it should be borne in mind that one of those circumstances is raising children, because a problem which every parent has is teaching their children to be a part of their culture. In this fallen world, no culture is fully human, and equally problematic is that no human is fully human, so the result is that child and culture will always conflict. Beatings work somewhat, but getting buy-in from the child is much easier on the arms and vocal cords, and in the hands of less-than-perfect parents, anything which can be used to tame their children probably will be.

This would normally, I think, be a suitable conclusion to this video, but unfortunately it seems like salvation is a subject on which people are desperate to make some sort of error of exaggeration, so if we rule out the idea that beliefs are the only things that matter, many people will start running for the opposite side and try to jump off the cliff of beliefs not mattering at all. Or in other words, if salvation is possible to pagans, why should a Christian preach to them?

The short answer is that the truth is better for people than mistakes, even if the mistakes aren’t deadly. This is because happiness consists in being maximally ourselves, and the only thing which allows us to do that is the truth. Silly examples are always clearer, so consider a man who thinks that he’s a tree and so stands outside with his bare feet in the dirt, arms outspread, motionless, trying to absorb water and nutrients through his toes and photosynthesize through his fingers. After a day or two, he will be very unhappy, and a few days later he will die if he doesn’t repent of his mistake. Of course very few people make a mistake this stark—if nothing else, anyone who does will die almost immediately, leaving only those who don’t make mistakes this extreme around. But the difference between this and thinking that life is about having sex with as many people as possible is a matter of degree, not of kind. You won’t die of thirst and starvation being a sex-maniac, and it will take you longer than a few days to become noticeably miserable, but misery will come to those who think they’re mindless sex machines as reliably as it does to those who think they’re trees.

Pagans are in a similar situation to the pick-up-artists who think they’re mindless sex robots. Because paganism was a more widespread belief system that lasted much longer, it was more workable than pick-up-artistry, which is to say that it was nearer to the truth, but it was still wrong in ways that seriously affect human happiness. It varied with place and time, of course, but common mistakes were a focus on glory, the disposability of the individual, the inability of people to redeem themselves from errors, and so on. The same is true of other mistaken religions; they each have their mistakes, some more than others, and tend toward unhappiness to the degree that they’re wrong.

There is a second side to the importance of preaching Christianity to those who aren’t Christian, which is that life is real and salvation is about living life to the full, not skating by on the bare minimum. Far too many people think of this life as something unrelated to eternal life, as if once you make it to heaven you start over. What we are doing now is building creation up moment by moment. People who have been deceived will necessarily be getting things wrong and doing harm where they meant to help, and failing to help where they could have; it is not possible to be mistaken about reality and get everything right. That’s like asking a person with vision problems to be an excellent marksman. A person who causes harm where they meant to help may not be morally culpable for the harm they do, but when all is made clear, they cannot be happy about the harm they did, while they will be able to be happy about the good they did. To give people the truth is to give them the opportunity to be happier. That is a duty precisely because we are supposed to love people and not merely tolerate them. Though I suppose I should also mention the balancing point that we’re supposed to give people the truth, not force it down their throats. Having given it to them, if they won’t take it, our job is done.

OK, I think I can conclude this video now. Until next time, may you hit everything you aim at.

Our Love for Formative Fiction

I think that for most of us, there are things which we loved dearly when we were children which we still love now, often greatly in excess of how much others love these things. And I think we’re used to hearing this pooh-poohed as mere nostalgia. But I think that for most of us, that’s not accurate.

Nostalgia is, properly speaking, a longing for the familiar. It is not merely a desire for comfort, but also a connection through the passage of time from the present to another time (usually our childhood, but it can be any previous time). As Saint Augustine noted, our lives are shattered across the moments of time, and on our own we have no power to put them back together. Nostalgia is thus the hope that someone else’s power will eventually put the shattered moments of time back together into a cohesive whole.

But when we enjoy formative fiction, we’re not particularly thinking of the passage of time, or the connectedness of the present to the past. And the key way that we can see this is that we don’t merely relive the past, like putting on an old sweater or walking into a room we haven’t been in for years. Those are simple connections to the past, and are properly regarded as nostalgia. But when we watch formative fiction which we still enjoy (and no one enjoys all of the fiction they read/watched/etc as a child), we actually engage it as adults. We see new things that we didn’t see at first, and appreciate it in new ways.

What is really going on is not nostalgia, but the fact that everyone has a unique perspective on creation; for each of us there are things we see in ways no one else does. Part of this is our personality, but part of it is also our previous experiences. And the thing about formative fiction is that it helped to form us. The genuine teamwork in Scooby Doo, where the friends were really friends and really tried to help each other, helped me to appreciate genuine teamwork. It’s fairly uncommon on television for teammates to actually like each other—“conflict is interesting!” every lazy screenwriter in the world will tell you—so when I see it in Scooby Doo now, I appreciate it all the more because I’ve grown up looking for it and appreciating it where I see it. This is one of the things I love about the Cadfael stories, where Cadfael (the Benedictine monk who solves murders) is on a genuine team with Hugh Beringar, the undersheriff of Shropshire. This is also one of the things I love about the Lord Peter stories with Harriet Vane—they are genuinely on each other’s side with regard to the mysteries.

And when I mention Scooby Doo, I am of course referring to the show from the 1960s, Scooby Doo, Where Are You! I have liked some of the more recent Scooby Doo shows, like Scooby Doo: Mystery Inc., but by and large the more modern stuff tends to add conflict in order to make the show more interesting, and consequently makes it far less interesting for me. Cynics will say that this is merely because none of these were from my childhood, but in fact when Scooby Doo: Mystery Inc. had episodes where the entire team functioned like a team—where everyone liked each other and was on the same side—I genuinely enjoyed those episodes. (Being a father of young children means watching a lot of children’s TV.) The episodes where members of the team were fighting, or where they split up, were by far my least favorite.

It is possible to enjoy fiction for ulterior motives, or at least to pretend to enjoy it for ulterior motives. Still, it’s also possible to enjoy fiction because one is uniquely well suited to enjoying it, and few things prepare us for life as much as our childhood did.

The Dishonesty of Defining Atheism as Lack of Belief in God

This is the script from a recent video of mine with the above title. It should be pretty readable, or you could just watch it.

Today we’re going to revisit the definition of atheism as a lack of belief in God, specifically to look at why it’s so controversial. As you may recall, Antony Flew first proposed changing the definition of atheism to lack of belief, from its traditional definition of “one who denies God,” in his 1976 essay, The Presumption of Atheism. By the way, you can see the traditional definition in the word’s etymology: atheos-ism, atheos meaning without God, and the -ism suffix denoting a belief system. Now, there’s nothing inherently wrong in changing a definition—all definitions are just an agreement that a given symbol (in this case a word) should be used to point to a particular referent. That is, any word can mean anything we all agree it does. And if a person is willing to define their terms, they can define any word to mean anything they want, so long as they stick to their own definition within the essay or book or whatever where they defined the term. Words cannot be defined correctly or incorrectly. But they can be defined usefully or uselessly. And more to the point here, they can be defined in good faith—clearly, to aid mutual understanding—or in bad faith—cleverly, in order to disguise a rhetorical trick.

And that second one is why atheism-as-lack-of-belief is so controversial. If atheism merely denoted a psychological state—which might in fact be common between the atheist and a dead rat—no one would much care. Unless, I suppose, one wanted to date the atheist or keep the rat as a pet. But merely lacking a belief isn’t what lack-of-belief atheists actually mean. They only talk about lacking a belief to distract from the positive assertion they’ve learned to say quickly and quietly: that in default of overwhelming evidence to the contrary, one should assume atheism in the old sense. That is, until one has been convinced beyond a shadow of a doubt that God exists, one should assume that God does not exist. I’ll discuss how reasonable this is in a minute—spoiler alert: it’s not—but I’d first like to note the subtle move of people who have more or less explicitly adopted a controversial definition of atheism in order to cover for begging the question. I suspect that this is more accidental than intentional—somewhat evolutionary, where one lack-of-belief atheist did it and it worked and caught on by imitation—but it’s a highly effective rhetorical trick. Put all your effort into defending something not very important and people will ignore your real weakness. By the way, the phrase “beg the question” means that you’re assuming the answer to the question. It comes from the idea of asking that the question be given to you as settled without having to argue for it. But it’s not just assuming your conclusion, it’s asking for other people to assume your conclusion too, hence the “begging”. (“Asking for the initial point” would have been a better, if less colorful, translation of the Latin “petitio principii”, itself a translation of the Greek “τὸ ἐξ ἀρχῆς αἰτεῖν”. Pointing out that it’s not valid to do this goes back at least to Aristotle.)

So, how reasonable is this assumption? The best argument I’ve ever heard for it is that in ordinary life we always assume things don’t exist until we have evidence for them. This is, properly speaking, something only idiots do. For example: oh look, here’s a hole in the ground. I’m going to assume it’s empty. It might be empty, of course, but in ordinary life only candidates for the Darwin Awards assume that. And in fact, taken to its logical conclusion, this default assumption would destroy all exploration. The only possible reason to try to find something is because you think it might be there. If you acted like planets in other solar systems don’t exist until someone has given you evidence for them, you wouldn’t point telescopes at the sky to see whether they’re there. That’s not acting like they don’t exist; that’s acting like maybe they exist. In fact, scientific discovery is entirely predicated on the idea that you shouldn’t discount things until you’ve ruled them out. It’s also the entire reason you should control your experiments. You can’t just assume that other variables besides the one you’re studying had no effect on the outcome of your experiment unless somebody proves it to you; you’re supposed to assume that other variables do affect the outcome until you’ve proven that they don’t. This principle is literally backwards from good science.

Now, examples drawn from science will probably be lost on lack-of-belief atheists, who are in general impressively ignorant of how science actually works. But many of them probably own clothes. To buy clothes, one must first find clothes which fit. Until one gets to the clothing store, one doesn’t have evidence that they have clothes there, or that if they have clothes, that the clothes they have will fit. Properly speaking, one doesn’t even have evidence that the clothes that they sell there will have holes so the relevant parts of your body can stick out, like neck holes or leg holes. For all you know, they might lack holes of any kind, being just spheres of cloth. Do any of these atheists assume that the clothes at the clothing store lack holes? Because if they did, they’d stay home, since there’s no point in going to a store with clothes that can’t be worn.

Now, if one is trying to be clever, one could posit an atheist who goes to the store out of sheer boredom to see whether they have clothes or hippogriffs or whether the law of gravity even applies inside of the store. But they don’t, and we all know that they don’t. They reason from things that they know to infer other knowledge, then ignore their stupid principle and go buy clothes.

Now, if you were to point this out to a lack-of-belief atheist, their response would be some form of Special Pleading. Special Pleading is just the technical name for asking for different evidentiary standards for two things which aren’t different. You should have different evidentiary standards for the existence of a swan and for a law of mathematics, because those are two very different things. Sense experience is good evidence for a swan, but isn’t evidence at all for a law of mathematics, which must hold in all possible worlds. Special pleading is where you say that sense experience suffices for white swans but not for black swans. Or that one witness is enough to testify to the existence of a white swan, but three witnesses are required for a black swan. That’s the sort of thing special pleading is.

And this is what you will find immediately with lack-of-belief atheists. Their terminology varies, of course, but they will claim that God is in a special category which requires the default assumption of non-existence, unlike most of life. In my experience they won’t give any reason for why God is in this special category, presumably because there is none. But I think I know why they do it.

The special category of things they believe God is in is, roughly, the category of controversial ideas. Lack-of-belief atheists—all the ones I’ve met, at least—are remarkably unable to consider ideas they don’t believe. This is a mark, I think, of limited intellect, and people of limited intellect are remarkably screwed over by the modern world. Unable to evaluate the mess of competing ideas that our modern pluralistic environment presents to everyone, they could get by with the help of a mentor: someone older and wiser who can tell them the correct answer until, through experience, they’ve learned how to navigate the world themselves. And please note that I don’t mean this in any way disparagingly. To be of limited intellect is like being short or weak or (like me) unable to tolerate capsaicin in food. It’s a limitation, but we’re all finite beings defined, to some degree, by our limits. God loves us all, and everyone’s limits are an opportunity for others to give to them. The strong can carry things for the weak, the tall can fetch things off of high shelves for the short, and people who can stand capsaicin can test the food and tell me if it’s safe. Limits are simply a part of the interdependence of creation. But the modern world, with its mandatory state education and the commonality of working outside the home, means that children growing up have few—and commonly no—opportunities for mentors. Their teacher changes every year and their parents are tired from work when they are around. What are they to do when confronted with controversial ideas they’re unequipped to decide for themselves?

I strongly suspect that lack-of-belief atheism is one result. I’m not sure yet what other manifestations this situation has—given the incredible similarities between lack-of-belief atheism and Christian fundamentalism I strongly suspect that Christian fundamentalism is another result of this, but I haven’t looked into it yet.
This also suggests that the problem is not merely intellectual. That is, lack-of-belief atheists are probably not merely the victims of a bad idea. Having been deprived of the sort of stable role models they should have had growing up, and not being able to find substitutes in great literature or make their way on their own through inspiration and native ability, they have probably also grown up with what we might by analogy call a deformity in the organ of trust. They don’t know whom to trust, or how to trust properly. Some will imprint on the wrong sort of thing—I think that this is what produces science-worshippers who know very little about science—but some of them simply become very mistrustful of everyone and everything.

Now, I don’t mean this as the only explanation of atheism, of course. For example, there are those who have so imprinted on the pleasure from a disordered activity that they can only see it as the one truly good thing in their life and so its incompatibility with God leads them to conclude God must not exist. There are the atheists Saint Thomas identified in the Summa Theologiae: those who disbelieve because of suffering and those who disbelieve because they think God is superfluous. But all these, I think, tend not to be lack-of-belief atheists and I’m only here talking about lack-of-belief atheists.

So finally the question becomes, what to do about lack-of-belief atheists? That is, how do we help them? I think that arguing with them is unlikely to bear much fruit, since most of what they say isn’t what they mean, and what they do mean is largely unanswerable. “I don’t know who to trust,” or, “I won’t trust anyone or anything,” can only be answered by a very long time of being trustworthy, probably for multiple decades. What I suspect is likely to be a catastrophic failure is any attempt to be “welcoming” or accommodating or inclusive. What lack-of-belief atheists are looking for—and possibly think they found already in the wrong place—is someone trustworthy who knows what they’re talking about. A person who is accommodating or inclusive is someone who thinks that group bonds matter more than what they claim is true, which means they don’t really believe it. The problem with “welcoming” is the scare quotes. There’s nothing wrong with being genuinely welcoming, since anyone genuinely welcoming is quite ready to let someone leave if he doesn’t want to stay. When you add the scare quotes you’re talking about people who are faking an emotional bond which doesn’t exist yet in order to try to manipulate someone into staying. Lack-of-belief atheists don’t need emotional manipulation, because no one needs emotional manipulation. What they need are people who are uncompromisingly honest and independent. The lack-of-belief atheist is looking for someone to depend on, not someone who will depend on them.

The good news is the same as the bad news: the best way to do this is to be a saint.

Imposter Syndrome Produces Many Fake Rules

Imposter Syndrome, which I’m using loosely rather than according to its clinical definition, is the feeling that a person is not actually competent at a job at which they are manifestly competent. I think that for many people it stems from being overly impressed with other people, putting those others on a pedestal, and not realizing that everybody everywhere is just “winging it”. That is, doing their best without full knowledge of what they should be doing. That is in fact the human condition—we are finite creatures and must live life by trust—but some people seem unable to accept that and have the conviction that other people must know what they’re doing. Only God knows what he’s doing; he’s the only one who accomplishes all things according to the intentions of his will. But those who can’t accept that must turn others—often kicking and screaming—into God-substitutes and pretend that these people really know what they’re doing. (It’s part of the reason people turn so quickly and viciously on their idols—they view imperfection as treason, since they’ve elevated their idols to the status of God.)

Another coping mechanism which sufferers of imposter syndrome have is to try to turn life—something no one can be “good at” in the way they want—into something they can actually be good at. Thus they come up with a myriad of byzantine and difficult but achievable rules, then need to have everything in life go according to those rules in order to “feel in control”. These rules tend to cluster around anything with an inherently high degree of flexibility, such as social interaction, writing fiction, etc. “When you visit someone, you must bring a food item” is really more of a ritual, being such a common rule, but it’s a way of showing that one cares and is not merely mooching. Especially in the modern world where food is absurdly available there’s little benefit to it, and so far as I know it was never the custom among rich people, but it gives one something to do such that if one has done it, one has done a good job and is not open to criticism. This is one such rule that caught on (and I’m forced to use a rule which is not particular to an individual in order that it might be generally recognizable), but they abound. Some people must always check the stove before leaving the house; some must always hand-write thank-you notes, or send thank-you notes on paper rather than by email. An alternative way of thinking of these things is as ad-hoc superstitions.

Satanic Banality

Here is the script of the most recent video I posted. Or if you’d prefer, you can go watch it on youtube.

Some time ago, I made a video talking about the strange symbolism in the music video of Ke$ha’s song, Die Young. Here are all of the symbols she used:
(Image: the symbols used in the Die Young music video.)
The curious thing about them all is that despite the fact that the video is supposed to have a satanic theme, the symbols Ke$ha used are all actually Christian symbols. Here’s what I concluded in that video:

Ultimately what I think I find so frustrating about this video is that its use of symbolism is, essentially, magical thinking. Symbols have power, because they communicate something. A symbol stands in for something greater than itself, which is why it has more power than random scribbles. Using symbols without reference to what they mean is trying to get their power without invoking their function; it’s trying to steal their power.

But on further consideration, I’ve realized that this is actually quite fitting. Yes, this was rather incompetent satanism, but that is really the most consistent satanism possible. Diligence is a virtue; if she put a lot of work into her satanism—if she really tried to do a good job—that would undermine the entire point. Skillful Satanism is actually something of a contradiction in terms.

And this is something C.S. Lewis complained about in literature. In his preface to The Screwtape Letters, talking about artistic representations of the angelic and diabolic, he said: “The literary symbols are more dangerous because they are not so easily recognized as symbols. Those of Dante are the best. Before his angels we sink in awe. His devils, as Ruskin rightly remarked, in their rage, spite, and obscenity, are far more like what the reality must be than anything in Milton. Milton’s devils, by their grandeur and high poetry, have done great harm, and his angels owe too much to Homer and Raphael. But the really pernicious image is Goethe’s Mephistopheles. It is Faust, not he, who really exhibits the ruthless, sleepless, unsmiling concentration upon self which is the mark of Hell. The humorous, civilised, sensible, adaptable Mephistopheles has helped to strengthen the illusion that evil is liberating.”

There’s nothing all that particular to Satanism in these complaints, though. It’s really the same as a mistake that we tend to make about all evil. I think that the origin of this mistake is, roughly, the intuition that if a person is trading their soul for something, there must be something quite valuable which tempted them to do it. Consider the scene in A Man For All Seasons where Richard Rich has just perjured himself to produce false evidence that will get Sir Thomas More executed for treason:

More: There is one question I would like to ask the witness. That’s a chain of office you’re wearing. May I see it? The red dragon. What’s this?

Cromwell: Sir Richard is appointed Attorney General for Wales.

More: For Wales? Why Richard, it profits a man nothing to give his soul for the whole world. But for Wales?

(If you haven’t seen A Man for All Seasons, please do. It is an excellent movie.)

Why would somebody do something evil if it doesn’t benefit them? The answer to this question is straightforward, but we need a few concepts in order to be able to give the simple explanation. The first is the Greek concept of hamartia. It comes from the verb hamartanein, which was, for example, what an archer did when he didn’t hit his target. It means, roughly, to miss. Hamartia thus means an error, or a mistake, or by the time you get to the early Christian church, sin. The key insight is that evil is not something positive, but something negative.

I think that people go wrong here by not taking nihilism seriously enough. We think of a world working in perfect harmony and unity as the default, and of evil as a deviation from that. But in fact the default is nothing. There need not be anything at all. No matter, no energy, no space or time or physics. Just pure nothing is the default. And yet, there is something. I don’t even care at the moment whether you attribute that creation to God or to a “quantum fluctuation”—well, I care a little bit because the latter is still assuming that some sort of contingent laws of physics exist, but whatever. The point is that anything whatever that exists—in our contingent world—is more than had to exist. Whether you think of it as a gift or as something that fell off of some cosmic truck that was driving by, from our perspective it is all a positive addition to the nothingness which is logically prior to it.

When you look at it this way, you can see that good is not a maintenance of the status quo, but an addition to it. But of course good is not merely anything at all existing. This is why a table is better than a pile of splinters, and why in the ordinary course of events using an axe to turn a table into a pile of splinters is wrong. It is bringing the world closer to the default of nothing. Good is not just any existence, but existence ordered according to a rational relationship. By a rational ordering, small things can become something more than themselves. Put together in the right shape, splinters can be beautiful and hold things up off the ground. That is, they can be a table.

Incidentally, this is why hyper-reductionists have such an easy time seeing through everything. Because every good thing is a rational relationship of lesser things, it is always possible to deny that the relationship is real. You can look at a table and see no more than a pile of splinters. Why a reductionist is proud of seeing less than everyone else is a subject for another day, but if you look at anything you know to be good, you will see this. It is itself made up of a rational relationship of parts that form more than they would in some other relationship. Further, all good things themselves fit in a rational relationship with other good things. Anywhere you look—whether at chickens or statues or vaccines or video games—all good things have this property. And all evil—murder, arson, terrorism, or just lying—has the property that it destroys rational relationships between things. It destroys the whole which is greater than the sum of its parts.

It is also the case that there is no other possibility for what constitutes good and evil. I don’t have time to go into details, but if you examine any attempt to define good and evil which is not convertible into this definition, it invariably consists of taking one sort of rational relationship and calling that the only good. Good is doing your duty, or good is the family, or good is the state, or good is pleasure. Every such thing, if you really spend some time looking into it and seeing what its proponents actually mean by their words and actions, turns out to take some rational relationships and elevate them above all other rational relationships. They are taking a part and treating it as the whole.

And this is why sin is analogous to an archer missing what he was shooting at. We all aim at doing the good, but it’s very rare that we actually hit our target. Sometimes our aim is off because we twitch—that is, we can’t hold steady—but very often it’s because we mistake what we’re looking at. We think it’s closer or further than it is, or that we’re looking at one part when we’re looking at another. We go wrong not because we think, “oh man, would it be great to shoot the log under this deer!” but because we thought we were looking at its chest. We weren’t, as proved by where our arrow struck. Or we can go wrong by being mistaken about where we’re aiming, thinking that because we’re looking at something, that’s where we are pointing our arrow. “Know thyself” is often quoted by unpractical people, but it’s actually intensely practical advice.

The drug-addled, sex-crazed rock star doesn’t think she’s using Christian imagery when she’s trying to be Satanic. She has not traded looking like a buffoon for some amazing benefit we can’t see. In her mind, she doesn’t look like a buffoon. She thinks she looks awesome; that anyone sensible would cower in awe of her satanic majesty. She has missed her target, and hasn’t yet gone to see where her arrow has actually struck. There’s a reason why pop musicians rarely last a decade; once they realize what they’re doing, they stop doing it; once they stop believing in it, they can’t sell the illusion anymore. And then their popularity fades, because it was not them, but the illusion they were selling, which was so popular.

Satanic Majesty is always an illusion, which is why you can only ever encounter it in art. Art contrives to convey experience; to show you what the world looks like through someone else’s eyes. But Satanic Majesty always looks banal from the outside; it’s only from the inside that it looks spectacular. This is part of why pride is the deadliest of the sins: if you wrap yourself up inside yourself, you can fool yourself forever without anything to check your downward, inward progress. And this is why music videos feature so many reaction shots. It’s also why movies and TV and virtually everything fictive feature so many reaction shots. The thing itself rarely looks very impressive, but people’s reactions are limited only by their imagination and acting skills. It’s why in the Power Rangers series, after they lower the camera to the monster’s feet, the next shot is always the Power Rangers looking up. Our age has been called the age of many things, but it is the age of nothing so much as it is the age of the reaction shot. TV news shows the reactions of people on the street, but it never shows you the considered opinions of people on something that happened ten years ago. Collectively, we don’t like reality; you can tell a tree by its fruit, which is why we prefer to look at seedlings.

It’s everywhere in entertainment—in which category news most certainly belongs—but it can be found throughout life, too. We endlessly discuss people’s reactions, but we rarely discuss things and ideas. And if we look at ourselves, when we are tempted, we can see the same thing. We do not consider our temptations in themselves, but only how they will make us feel. I mean when we’re experiencing them, not when we’re regretting having given into them afterwards. In the actual moment of giving in, our attention is never on the reality of what we’re about to do; we’re concentrating on how happy it will make us. That’s why one of the techniques for avoiding temptation is to face up to what we’re actually doing. Of course sometimes we can’t avoid facing up to what we’re actually doing; in addiction it’s called hitting rock bottom. But when one is young and healthy, it’s very rare that reality makes us face up to what we’re doing. On TV they always pick pretty people who smile for the camera, and it’s so hard to believe that anything can be wrong when pretty people are happy. On Facebook people post pictures of when things are going well, and the very fact that it’s rude to tell people about how bad your day was means that we don’t often face up to the reality of what is going on in life. A person has to be very unhappy indeed before they won’t smile for the camera.

Which is a pity, because so many people use reactions to tell whether the thing being reacted to is good or bad. Since people will put their best foot forward, this doesn’t work; to know right from wrong we must investigate the things themselves. And in fact in our world whether an action is defended on its own or by the reactions to it is actually a good heuristic for figuring out whether it is moral or immoral—if you can say something good about the action itself, it is probably moral. If it is only defended by people’s reactions to it, it is probably immoral. That’s only a heuristic, of course; people dance because it’s fun, and dancing is legitimate. But dancing is also beautiful, at least when it’s done well. There’s very little you can say about heroin except that it’s fun.

That’s all for now. Until next time, may you hit everything you aim at.

Prayer to an Unchanging God

Among the properties of God, perhaps the strangest, to us, is that God is unchanging. It follows necessarily from the fact that God is simple, that is, not composed of separable parts that are capable of existing independently. That follows from the fact that God is necessary, unlike us, who are contingent. Since God is necessary, he cannot be composed of things which are not necessarily together. And since God is necessary, he cannot change, because change means some part coming into being or ceasing to be. Since God is necessary (and has no contingent parts), there is no part of him which is capable of not existing. So far, OK, but how, then, does prayer work if God doesn’t change? What does prayer do?

It’s easy enough if you only consider our side of prayer, that is, how prayer changes us. But that’s not all prayer does. Prayer can change the world. We can pray for good things to happen, and God can answer our prayers with good things, if often (having to take everyone’s good into account) in ways so complex we don’t understand them until much later, if at all. Or we can get immediate answers to our prayers, as in the case of miracles. How can that possibly work if God is unchangeable?

I think that it will be easier to give the answer if we first look at the fact that we creatures are able to interact with each other. C.S. Lewis, addressing the question, “since God knows what’s best, how can it make sense to ask him for anything?”, pointed out that the same problem applies to umbrellas. Surely God knows whether we should be wet, so why give him our opinion on the subject by opening our umbrella?

The answer to that question is that God has given it to us to take part in designing creation. This is part of a general plan of delegation which God seems to have. For a great many things, instead of doing things directly God gives it to us to do his work for him. He could feed the hungry man himself, but he gives it to us to be his feeding of the hungry man by us giving the hungry man food. You can see this in the analogy of the parent who gives his child a present to give to someone else; the parent could have given the present directly but the parent is incorporating the child into the parent’s act of generosity. Unsurprisingly, God does a far more complete job of it than human parents do. This is part of why people can ignore God; they see only the action of the people incorporated into God’s generosity and ignore the rest.

When God gives us these things by way of delegation, what happens is that we end up acting something like a lens held up to the sunlight. From our perspective, we don’t change the sun, but we do change how the sunlight affects earthly objects. By holding our hands up we make a shadow; by holding up a lens we concentrate the light on one place; with a prism we break the light into distinct pieces and make a rainbow. Real life is vastly more complex than just lensing the sun, but it works as a metaphor to show us how you can change the effect of the sun without changing the sun itself.

Prayer is the same basic thing, except we can’t directly observe it. By prayer we interact with God such that we change not God, but how his unchanging love for creation is expressed in creation itself. Prayer is like holding up a magnifying glass in front of the sun, shaping where the light goes without doing anything to the sun.

Atheist Fundamentalists

Over on my youtube channel, I posted a video called Atheist Fundamentalists. Here is the script I wrote for it. It was meant to be read aloud (I wrote it for how I speak), but if you bear that in mind I believe it’s quite readable. The video has some illustrative graphics, but they’re not critical. Or you can just go to my youtube channel and watch the video. 🙂

Today we’re going to talk about Fundamentalist Atheists. At the end of my video about the rhetoric of defining atheism as a lack of belief in God, I said that many lack-of-belief atheists seem just like fundamentalists. I got a request for clarification on that point, which I’m going to do a whole video about because it’s an interesting—and fairly large—subject.

To explain what an atheist fundamentalist is, we must first ask the question, what is a Christian Fundamentalist? In theory they are people who stick to the “fundamentals” of Christianity, but to other Christians, and especially to Christians with a valid apostolic succession (mostly the Catholics and the Eastern Orthodox), they don’t seem to know much about Christianity and are obsessed with things that aren’t at all fundamental.

They are probably best known for their supposedly literal interpretation of the bible and their young-earth creationism, but I think that these are red herrings. Epiphenomena, more properly. The bible is not merely figuratively an idol that they worship; it literally is an idol which they worship exactly in the way that ancient pagans used to worship their idols. There has arisen a very strange idea that the primary relationship of ancient peoples to their gods was roughly the same as that of a bad scientist to his pet theory. That’s quite wrong. In fact it is doubtful whether explaining the actions of the physical world had anything at all to do with how ancient people related to their gods. The Romans are a particularly good example of this, because they had such a large number of gods. They had gods of everything. They had gods of doorways and of beds, of hearths and of wine. No one needed an explanation of these natural phenomena because they weren’t natural phenomena. There was a good chance that the Romans knew, personally, who built the particular ones they used. They did not have a god of wine because they didn’t know where wine came from.

The primary relationship which pagans had with their gods was one of control. The gods offered a way to control the natural world. You made sacrifices so things would turn out the way you wanted. The pagan gods needed these sacrifices, or at least they really wanted them, and so human beings had a bargaining chip with nature. But even more than this, since the gods were capricious and often didn’t do what you asked, it offered a way to organize society, and this part actually worked. Everyone took part in the public ceremonies, and the games, and the plays. By being dedicated to something more than the people, the people could work together and become great. The Romans did not worship the emperor as a god because they thought the emperor explained the rain or the wind or the rocks. They worshipped him because every Roman citizen worshipping the emperor made them one people.

And if you look at Christian fundamentalists, you’ll see something very similar. They insist that the bible is the literal word of God, but they don’t seem to mean by that, that it’s true. They don’t even seem to read very much of it. Something that happened to me a few years ago is both an amusing story and a good illustration of the point. A fundamentalist I ran into was explaining his theory that the second creation story in the book of Genesis is really just the first story told backwards—he didn’t explain in what sense this is a literal interpretation—and when he was done, instead of addressing this weird idea, I pointed out that if you’re going to take everything in the bible literally, then you have to conclude that God repented. His response was, “where does it say that?”

For those of you who’ve never read the book of Genesis, it says that in chapter six. Right before the flood, before God called Noah, it says that God repented of having made man, for man’s works were evil from morning till night.

And it’s trivially easy to come up with other examples that fundamentalists don’t take literally. When Jesus said, of the eucharist, “this is my body,” of course for some reason the literal meaning of those words aren’t the literal meaning of those words. When Jesus said that unless you eat the flesh and drink the blood of the son of man, you will have no life in you, that’s purely symbolic… in some sort of literal sense. Examples abound; former fundamentalists are very fond of citing Leviticus, I believe.

And at this point a question which comes up fairly frequently from atheists, I’ve found, is: “how do you know which parts not to take literally?” I even had one fellow ask for a list of non-literal passages, and he never really understood when I tried to explain that no such list exists because only a fundamentalist could ever think it useful. I tried to explain that orthodox Christians read the bible to learn, so whether a given book or passage is to be taken literally is something that would come up in commentary on that passage. A list of non-literal passages would be about as useful as a list of special effects in movies which defy physics. What would you do with that list? Go watch only those scenes? Would you keep this list handy when watching a movie to check every time you see a special effect?

Anyway, the answer to the question of how do we know what to not interpret literally is, first and foremost, the living interpretive tradition of how we are supposed to interpret the scriptures. This predates the apostles, of course. The Jews had a living interpretive tradition of what we now call the old testament, which was taken up by the Apostles since they were all Jews. But for simplicity’s sake I’m going to stick with just the new testament. In the four gospels, we see clear accounts that Jesus selected a group of men who he asked to follow him, which they did. Literally. They left their trades and ordinary lives and spent pretty much the next three years going with Jesus everywhere he went. He talked with them, all the time, and taught them things which he didn’t teach more generally. If you think of the apostles as being in an apprenticeship program, you won’t go too far off. And these apostles went on to become the first bishops, after Jesus rose from the dead and ascended into heaven. And all bishops since have been successors to one of the apostles. They are men who were trained, formed, and selected by their predecessors to carry on the living tradition of the apostles. And this was how the church was organized: around the apostles, and later around their successors. Because these are the people who studied, in depth, what the faith means. The ending to the gospel of John summarizes it very succinctly: “There were many other things Jesus did. If they were all written down, the world itself, I suppose, would not be able to hold all the books which would have to be written.”

It is also the case that we have no record of Jesus having ever written anything down. That’s not quite true, as there is one story which mentions he was writing in the sand when people spoke to him, but there’s no mention of what he was writing. Jesus didn’t write the bible; he founded the Church. The Church wrote the bible. And it also passed on how to understand it.

And if you don’t understand why it is that Jesus would train the apostles rather than write the gospels, ask anyone who has studied martial arts how effective it would be to learn martial arts from a manual, with no teacher. There’s a reason why basic training in the military is not a study-at-home course.

Now, all of this is rejected by fundamentalists, who literally pretend that you can learn everything you need to know about how to live well by reading the bible on your own with no context or training. With nobody around who has any idea of how any of this is supposed to work in practice. Or what the people who wrote it actually meant by the words they wrote down. In a letter to some monks who were arguing about free will versus grace, Saint Augustine, who was a bishop, mentioned a useful interpretive strategy: if your interpretation contradicts most of the bible or makes it really, really stupid, it is a bad interpretation. The particular case he was talking about was the denial of free will: denying free will means that every time God said anything to man, it was pointless and stupid. Since God is not an idiot who engages in completely futile actions, determinism is, therefore, bad theology. But if you actually talk to fundamentalists, you’ll find they violate this common-sense principle all the time. They will take a passage, or a verse, or a quarter of a verse, and will with rocklike certainty conclude they know exactly what it means and that this meaning does not need to be reconciled with any other verses, not even with the rest of the sentence from which they drew it.

This is not the action of somebody who believes that the bible contains truth. And this is just one example, if you spend any time with fundamentalists you will rapidly conclude they don’t want people to think that the bible is true. At least, not in the literal sense of those words. What they want is for everyone to worship the bible. It is true that part of that worship is to say that the bible is literally true, but like with sacrifices to the emperor, the point is for everyone to do it, not to believe it.

Having finally said what a Christian fundamentalist is, we can now look at what an atheist fundamentalist is. They are people who do the exact same thing, but with a different idol. The idol is often science, but it can also be political theories like Objectivism, Marxism, Feminism, Environmentalism, and so on. Of course there isn’t just one science book, or one objectivist book, or one marxist book, etc, so they can’t worship just one book. On the other hand, the bible is properly a small library of books, so in that sense Christian fundamentalists don’t worship just one book either.

And just as Christian fundamentalists don’t seem all that interested in what Christianity actually is, atheist fundamentalists are often shockingly ignorant of real science. And I don’t just mean science’s sins, like the flaws in the peer review system, the problem with publish-or-perish, the infrequency of trying to reproduce results, and so on. Nor do I mean science’s self-imposed limitation to what is measurable and quantifiable. No, I mean that they’re often quite ignorant of science’s virtues, like interesting experimental results or what scientific theories actually are. It’s quite perplexing until you realize that they’re not interested in science as something true, but in science as an idol that everyone can worship to unify society. And you can see the same elsewhere, with environmentalists who know nothing about the environment but recycle religiously, or marxists who know next to nothing about actual marxism but always vote for Democrats and have a Che Guevara poster on their wall.

And it is not uncommon for an atheist fundamentalist to have a few favorite scientific “facts” which mirror the favorite bible verses of the Christian fundamentalist. “Atoms are made of mostly empty space”, though that’s actually an outdated model of the atom. “Nothing happens in Quantum Mechanics until an observer looks at it”, but “observer” doesn’t actually mean a person in quantum mechanics. “Evolution means that animals get smarter and faster and stronger over time—survival of the fittest”, though the theory of evolution actually refers only to the change in allele frequency in a population over time and, as in blind cave fish, might mean animals get weaker or smaller or dumber if the environment favors that.

And perhaps the most notable characteristic of fundamentalists, whether Christian or atheist, is their fierce tribalism. Being primarily concerned with group unity, they (rightly) view outsiders as a threat to the group. This leads them to be insular, but it also leads them to be hostile to outsiders. Christian fundamentalists talk about how everyone else is damned and will burn in hell; atheist fundamentalists talk about how everyone else is irrational and should be locked up in lunatic asylums. Richard Dawkins has said that teaching one’s children religion should be considered child abuse.

It is not really surprising that those who value people over truth should not have much truth, but they very often have little in the way of people, either. Fundamentalists are notorious for driving people away. Truth is a jealous God; if you love truth more than people you may well end up with both, but if you love people more than truth, you will usually end up with neither.

A Defense of Celebrating Christmas Early

(Originally published in Gilbert Magazine)

Most mistakes made by the human race are an attempt to fix some other mistake. Celebrating Christmas during Advent (and ordinary time, and one increasingly fears, Easter) is undoubtedly a mistake, but like most mistakes, to fix it we must find out what it is balancing. And when we ask ourselves what is being balanced, I think we will discover that on the other side of the scales from so great a holiday are several sins.

The first and most obvious reason for celebrating Christmas early is simply the extensive preparations which the secular celebration of Christmas has come to demand. That this preparation is a miserable experience scarcely needs defending. Indeed, when some months ago one of my atheist friends was complaining about all of the bother associated with Christmas, I suggested that the secular holiday should be moved to Black Friday, with the minor modification that people should buy presents for themselves instead of each other. If nothing else, under this scheme people would not have to worry that their gifts will be unappreciated. It is a sufficient sign of the times that he thought this transformation unachievable, but said nothing about it being inadvisable.

Whatever might reduce this stress, the stress still exists, and preparation would not, in itself, require the early celebration of Christmas. Women spend nine months preparing a child for birth, and do not ordinarily comfort themselves during that work by throwing the child birthday or graduation parties. When the connection between the difficulty of a job and the results of a job is well understood, it can be endured without aid. Where that connection is not apparent, unpleasant labor can still be undertaken as a penitential exercise. In the case of Christmas, however, modern culture has made it so unpleasant that nine people out of ten can’t conceive of their sins being that bad. Lacking any concept of vicarious atonement, the solution, to keep a weary race pulling its plow, is to borrow the enjoyment of the holiday to get people through its preparation.

The second reason to celebrate Christmas early is our culture’s slavehood to the calendar. Once December 26th hits, some are simply tired of Christmas celebrations, but for many it’s a yet lower idea: that one must always be up to date. It is acceptable to the chronological snobbery, by which people have flattered themselves for the last century and a half, to be in advance of the calendar but never to be behind it, for the devil will take the hindmost. Christmas is too great to confine its celebration to a mere twenty four hours, and the chronological snob can extend the celebration in only one direction which will keep him up to date.

The third reason is more subtle than the first two, but I think it is the most significant. Christmas, though it be no more than secular christmas, vigorously opposes the general nihilism of our time. Even watered down, Christmas still has flavor. Saint Nicholas, even when he is merely Santa Claus, still stands against Arianism. In the same manner that Arianism attempted to divorce the Son from the Father, modern culture tries to divorce happiness from goodness. This is not possible, and even bad christmas songs remind us it isn’t possible. The most theologically suspect lyrics about Santa Claus spying on people, with unspecified and probably magical technology, connect good behavior with happiness. It is true that they often connect the two in a mercenary way, but they nevertheless connect them in an unbreakable way. It is also true that the proponents of unconditional affirmation — an absurd attempt to ape the generous love of God — will complain that this is an awful message. And yet not a single one of them has made a Christmas movie in which a bully gets a present from Santa Claus as the bully finishes beating up a smaller child for his lunch money.

It is a theological point, but it is the incarnation which makes this connection unbreakable. Arianism, which was a milder form of Gnosticism, held that spirit could not marry matter, or in more Thomistic terms, that the unconditional could not truly know the conditional. It is a recurring suspicion of the human race that the infinite can have no regard for the finite, and against all this, the incarnation proves that omnipotence loves weakness. But God’s love is a generous love. It turns weakness into strength. And that is why happiness cannot be separated from goodness: they have the same source. Gnosticism claimed that you could have happiness apart from goodness because the material world and the spiritual world had different fathers. Arianism had God adopt the material world; the incarnation proved its true parentage. It was, after a fashion, the first paternity test. The modern world denies this paternity, since it denies God, but every winter Santa Claus declares that the goodness of children, no matter how unenlightened or materialistic, is loveable.

These three reasons, between them, compel our culture to celebrate Christmas early. Until we explain to people why they prepare, that the calendar is a good servant but a poor master, and that God loves them and not merely the idea of them, we shall have Christmas during Advent. We can take comfort that at least it’s not Advent during Christmas.

Happy Father’s Day

I submitted this to my parish’s bulletin as a potential father’s day message:

In one of the many instances of audacity which marks Christianity out as a stumbling block to the Jews and folly to the Gentiles, we call the unimaginable uncreated creator of all that is, our Father. Let us celebrate, then, all those men who have entered into the recklessly humble Christian spirit of emptying themselves to become the image of God’s fatherliness. Happy Father’s Day!

Since this blog is a more general venue than a parish bulletin, let me add that I mean this for all fathers, including those who don’t know this was what they did. 🙂

On Its Own, the Golden Rule is Fool’s Gold

There is a very strange error which many atheists make when debating theists: they think that the word morality means no more than “how you make decisions”. They will then propose some means by which they make decisions and say that this shows that atheists can be moral too. These rules never mandate or forbid anything, of course, and always seem suspiciously like what somebody raised with a real moral code would find comfortable, supposing that they’re reasonably well-to-do and live in a peaceful place with little crime.

I recently saw an example where somebody proposed the golden rule, which he claimed required no God. Of course there is absolutely no reason given why one should obey it, but for the moment, let’s ignore that. Suppose that the following were true:

If I were rich and owned a bank, I would really like it if people tried to rob my bank at gunpoint so I could have the fun of patrolling the branches to heroically stop the robbery should I arrive at the right time.

The conclusion, then, would be that a man who felt this way should go and rob banks at gunpoint. No God required.

For the moment, let’s leave voluntarism out of account, since anyone who believes in voluntarism has explicitly rejected reason anyway and so can’t be reasoned with (voluntarism is the idea that morality flows from God’s will rather than his intellect, so God could command rape and murder and forbid kindness and mercy). The only way to get an actual morality which has both positive and negative commands and actually works is for it to be grounded in the nature of the things to which the moral rules apply. I’ll give a fuller description of this later, but the short version is that all sin is a diminishment of being. God is love, which means that God is generosity, and in his generosity he has given it to us to be his generosity to the world. He could give my children food directly, but instead has given it to me to be his gift of food to my children. He could have created them directly, but gave it to my wife and me to be his act of creating them. To those of us who pass hungry beggars on the street, he gives it to us to be his gift of food to them. To those of us with tongues he gives it to us to be his speaking of the truth to those with ears to hear it. And so it goes for all moral rules: it is our nature to be God’s act of generous creation to the world, in ways big and small. To tell someone the truth is to create in them knowledge. And so it goes with all things we do that are good.

To sin is to refuse to do this work of creation we were given to do. Being is good, so to refuse to do this work is to diminish being, and is therefore evil because there is less good. (Evil is a negative, not a positive, thing, and has no existence on its own. Evil exists only in the manner of a shadow, which “exists” only where the light does not hit.)

All actually grounded moralities must have this in common as their ground. It is of course possible to take a morality on faith, without understanding its grounding, but it must of necessity come back to some ultimate source for our existence. Atheists will never succeed in grounding a real morality because they do not believe in a reality capable of grounding a morality. Blind matter mechanically acting according to arbitrary rules has no further being than merely existing. We might think particular configurations of it interesting, or like them, but this is merely to be entertained by illusions. To have a real morality, you need a real reality.

The Anti-Teleology of Materialism

Most Christians tend to assume, for historical reasons, that matters of natural law can be discussed with atheists. As the historical reasons become less relevant, this becomes less and less the case. The problem comes in that in the modern age most atheists are Materialists (that is, they believe that nothing exists besides matter and forces on it, or to put it another way, that all fields of study are just applied physics). And the problem is that Materialists do not hold that nature is good.

Actually, it is even worse than this, because if they ever think about it, Materialists must hold that nature is evil. That's not true of Materialism in all possible worlds, but it is true of Materialism in this world. In our world, a Materialist believes that we exist purely as an accident of evolution. Evolution cares about nothing; in a metaphorical sense, the only thing it "cares" about is maximizing the number of our descendants. In no sense does it care whether we're happy. So all happiness, according to a Materialist, must be something which happened to maximize the number of descendants our ancestors had, and therefore only serves, in us, to (probably) maximize the number of descendants we have. All pleasure is a carrot dangled in front of us to keep us going.

Of course this means that pleasure must be, if not strictly speaking minimized, at least kept relatively small. This is why the Romans would starve baboons and deprive them of water before letting them loose in an arena to kill prisoners for sport: a contented baboon usually won't see the point in ripping a man limb from limb. Contentment is the enemy of effort, and in the great battle of all against all that is nature, quite a bit of effort is needed. Evolution must ensure that happiness won't last.

The Materialist is, therefore, in the position of needing to cheat his creator in order to be as happy as possible. Nature is not the source of man’s happiness, it is a limit to be overcome and an enemy to be fought.

Of course it is not really possible to beat one's creator forever. In the end, the creature will always lose. This is why the right to suicide is so often of great importance to the Materialist. When you can no longer cheat life, the only thing left to do is to cheat death.

People only Read What is Published

(In a sense this post is a generalization of the fundamental principle of science, but it’s worth looking at that generalization in detail.) It is obviously true that people cannot read what hasn’t been published because if it was not published, it would not be available to read. From this utterly trivial point we can predict several non-trivial things which in a fallen world will reliably be true about many of the people who create for publication.

Actually, there is a second fact which we need, but it is only slightly more controversial than the first: people do not re-read material often. If we put these two together, for a creator to be read as often as possible, they will need to publish a lot of work. There are exceptions, of course—I’ve re-read Pride & Prejudice around twenty times now—but in general this holds true and is especially true of anyone who wants to make an ongoing living from their creative work. (It’s also true of anyone who simply wants ongoing attention even if they don’t make any money from it.)

In order to publish frequently, a person must have many things to say, and this is the crux of the problem. There are several ways to have a lot to say, and—outside of explicit fiction—only one of them is good. The good way is to study the world and talk to the wise so that one becomes wise oneself. This is a long, hard road, and it is inevitable that there will be things which come up in popular discussion which might be well-read if one could write them, but one simply doesn't know enough to write about them well. Many people take this long, difficult path, and it is a good idea not to lose track of them when you can find them.

There are much easier ways to have a lot to say, though. Making stuff up is the easiest, but also the most dangerous way, as a number of disgraced reporters and academics have proven. Outright lying is very hard to defend and also very offensive to readers. Several orders of magnitude safer is explicit speculation. You can see this in articles that have a question mark in their title. “Did [Famous Politician] Buy And Eat Sudanese Sex Slaves?” is an article that can be based on as little as a trip to the Sudan—or a neighboring country if necessary—and the politician being the sort of person who would do that sort of thing. It’s not hard to make things seem plausible, especially if one picks things that aren’t as extreme as this silly example. There are many variants of this approach, too. One can speculate about the implications of what it would mean if someone in a position of authority were to say something. One can also speculate on why a politician won’t say something at a particular time. Since a politician can’t say everything in every speech, there will always be a wasted opportunity to talk about. If the important people aren’t sufficiently obliging, one can also talk about what other people are saying about what was—or wasn’t—said.

Speculation on its own is not very interesting, however. One wants not only to publish material, but to have people read it. For that the writing must seem important as well as new. Now, it is possible to write about important things through hard work coupled with the patience to wait for important subjects to come along. But once again there is a much easier way to do this: throw perspective out the window. There are variants, of course, but at their heart they all consist of some sort of skewed perspective. Probably the most popular is to take whatever topic one is writing about and imply that it spells the end of civilization as we know it, or, if it isn't utterly trivial, at least the death of any possibility of happiness in this world. Extrapolation is a very useful tool for this.

When exaggerating, the easiest approach is to assume that the world is static and project all trends out to infinity, with no reactions to the trends or changes in behavior. Now, human beings have many flaws, and chief among them is that most of us do very little by principle. This is why so many people profess terrible principles—what's the point in considering the truth of something one has no intention of living by anyway? But there is an upside to this, and it is that extrapolating from people's bad principles to their actions is usually quite misleading. The more terrible the results of the principles, the more people ignore them—sometimes even going so far as to reinterpret them to mean the opposite of what they originally meant. Whether this speaks well of the people or not, it is simply unreasonable to pretend that they will stick to their principles as things get worse and worse. Civilizations do die off, but at vastly lower frequencies than publishing cycles demand.

There is also the flip side of this coin—science reporting always has to include some section about how the discovery will cure a disease, make people thinner, make phones thinner, finally bring about the electric car, or at least significantly impact half the population’s life within the next few years. The overwhelming majority of them won’t, of course, but on the plus side this provides some grist for the worry mill because [political bad guys] will prevent the good things from happening. And don’t forget that every change hurts someone. Interestingly, this constant stream of good things coming in the future, rather than being here in the present, may also help to raise people’s ideas of what can be expected about life now—it really sucks in comparison to how good it will be ten years from now—so even without spin this works synergistically with the world-is-ending articles. Focusing people’s attention on what they don’t have is a great way to make them discontent and in need of an explanation for that unhappiness.

I should probably also point out that since really interesting new facts come along fairly infrequently, if a person is sloppy with their facts and doesn’t check into whether the things they have heard as facts are actually true, this will make them far more likely to come across “facts” which seem important. (Scientific studies with small sample sizes and no pre-registered hypothesis are a goldmine for this.)
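
To make that parenthetical concrete: the sketch below (Python, made-up numbers, a toy rather than a claim about any particular study) runs thousands of small "studies" on pure noise and counts how often one turns up a "significant" difference. The answer is about one in twenty, so a writer who skims twenty small, unregistered studies can expect a publishable "finding" even when there is nothing there at all.

    import random, statistics

    def fake_study(n=10):
        # Two groups drawn from the SAME distribution: any "effect" is pure noise.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # Welch-style t statistic, computed by hand to avoid dependencies.
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        t = (statistics.mean(a) - statistics.mean(b)) / se
        return abs(t) > 2.1  # roughly p < 0.05 for samples this small

    random.seed(0)
    studies = 10_000
    hits = sum(fake_study() for _ in range(studies))
    print(hits / studies)  # ~0.05: one spurious 'discovery' per twenty null studies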

The point, of course, is not nearly so much that all of this is a temptation to disciplined writers, but that it is a selective pressure which greatly rewards undisciplined writers and punishes disciplined writers. When considering the big picture, it doesn’t much matter whether disciplined writers resist temptation because the undisciplined writers will succeed and do very well regardless. And writing is not a zero-sum game. Undisciplined writers who trick people into reading material of exaggerated importance will increase the amount of reading that goes on. (Which editors who come up with headlines have known for as long as there have been headlines.)

But more reading is not always better than less reading; reading which unbalances the mind through doomsday predictions breathlessly uttered makes people less able to understand truth spoken calmly. People also have finite and often small amounts of time and mental energy for reading, so consuming large amounts of exaggerated fluff can squeeze out real reading, even where it doesn't habituate a person out of being able to do it.

(And everything I've said here applies to things that are watched or listened to just as much as to things that are read. To invert McLuhan's famous line: it's not the medium, it's the message.)

The takeaway is very simple: be very careful in how much news and news commentary you consume, and remember how big a selective pressure there is on the people who are giving you the news to exaggerate and distort it.

Control is the Worst But Most Certain Proof

The things we know, we know according to different levels of certainty. To illustrate the spectrum by its extremes: everyone knows with complete certainty that they themselves exist, and they know with virtually no certainty at all the things half-remembered that they heard from a known liar who thinks he heard it from his cousin one time. Most things, obviously, are somewhere in between those extremes. And in all but the most certain cases, our knowledge is only indirect, which requires us to trust the use of our own reason to know the truth from the evidence.

Consider the case of a woman who asks the question, “does my boyfriend really love me?” It is not possible to measure love, and it is always possible to respond to a direct question with a lie. Perhaps he doesn’t love her but is even more afraid of being alone while he waits for someone better to come along. Even worse for her certainty in his love, he could be mistaken. Perhaps he loves an ideal of her which he will someday discover is not the real her?

Worse, doubt can lead to imagining all of the possible ways he could not love her but still do the things he did which seemed like love. Contemplating one's imagination is easily confused with looking at the world, and this confusion will further fuel her doubts. If she gives in to this, turning her attention away from the evidence of his love towards the counter-evidence of her doubts and suspicious imaginings, she could work herself into a state where all of the true things in real life which should convince her of her boyfriend's love leave her empty and uncertain. What can she do?

This is where many people go wrong, because they know that control is powerful proof. If you can make something do what you want, it is very convincing evidence that you really know the thing. (This is why repeatable experiments are so critical to the scientific method.) If she can make him do things he would do only if he loved her, then this should finally assuage her doubt. But there is a problem: whatever she asks, he might have wanted to do anyway. This adds the temptation for the demands to become unreasonable or even anti-reasonable. The more self-destructive and unreasonable the demands, the more clearly the only reason he is complying is because he loves her so much.

Of course, this is bound for disappointment. In practice we can never fully control another person, and if she keeps this up for very long the boyfriend will almost certainly stop loving the woman. People dislike being manipulated and distrusted. And even if he doesn’t leave her, she’ll then know she’s with a man so desperate he’ll put up with being treated terribly. This makes his love worth very little since it’s really an indication of how desperate he is, not how lovable she is. In fact, there is literally no way that this attempt to prove his love through control will end well. Alas, to paraphrase Jane Austen, insecure people are not always wise.

A very similar problem can be seen among a certain sort of atheist. When they reject the evidence given (here's a summary of what's often offered) and are asked what sort of evidence they would accept, the answer is rarely specific. It varies all over the place, but tends to have in common that it is something simply counterfactual to the world as we find it. But unlike when a Christian might say that the evidence he would accept that God does not exist is that nothing at all existed, this counterfactual isn't related to the nature of God in a direct way at all. Creation not being created is evidence against the creator in a direct and sensible way. There being more of something or less of something is not directly related to the creator being our creator; it's just something picked at random. And a moment's thought shows that it is the counterfactual nature of the evidence that is important, not its being related to the creator. That is, this lack of relationship to what the evidence is supposed to prove is no accident. If the message "I exist. –God" burned forever in the sky in five-hundred-foot-tall letters, atheists would just say that it was an unexplained natural phenomenon which influenced primitive people to come up with the myth of God to explain it. Also that it influenced our language so that these letters were meaningful to us. And some day we'd definitely have a natural explanation for it.

What people want is not just any sort of evidence, but specifically the evidence of control. It is not really different from people in Jesus’ time who wanted a sign, which is to say, a miracle done on command. They did not then and do not now want to have to discover what the world is. They want to know it by having it conform to their desires.

But the psychology of this is interesting, because I don’t think that it’s selfishness. More specifically, I mean that it isn’t pride. It isn’t the desire to be God, to be the lord of all. Rather, control is powerful evidence because it seems to make the thing controlled an extension of the self, which as Descartes noted is certain even if we doubt everything else. It is not, at its core, a desire to dominate. It’s a fear of trusting. It is the insecurity of a timid creature which will not venture out of the burrow of certainty to see what actually exists in the larger world where it is possible to doubt.

Material People are Immaterial

There is a problem which Materialists face that is rarely talked about. (Materialism is the belief that only matter and physical forces exist, i.e. that all disciplines are really a form of applied physics.) And the problem is a fairly basic one: what is an individual person?

In one sense this is a silly question, because we all know. But the problem, for Materialists, is that what we all know directly contradicts Materialism. And it contradicts it because what makes a particular person that person transcends the particular matter which they're made of. A Materialist denies that there is anything that can transcend the particular matter; all that exists are sub-atomic particles and a few forces acting on them. How, then, could a Materialist possibly define what a person is?

This is an especially hard problem over time, since the matter which makes up a particular body changes through the years. All proteins, fats, sugars, etc. get recycled by the body in its process of continual renewal. Even more of a problem is that a person starts off weighing less than ten pounds and usually ends weighing over 100 pounds, often quite a bit more. By adulthood their original matter is largely long gone, and any matter which by chance is the same is a tiny fraction of the original. Other changes such as larger muscles, longer hair, shorter hair, losing a limb, growing extra teeth, and many others significantly change the physical configuration of the matter. Neurons in the brain are constantly being made, and new synaptic connections form while others go away. Neither the particular matter nor the shape of the matter can be used to define a person. And according to the Materialist, nothing else exists.

There's even a further problem that Materialists face in defining people: if the only real things are sub-atomic particles and forces, there isn't a good way to distinguish between the person and the chair he is sitting on. Inter-molecular attractions hold the person's molecules together, but the same attractions exist between the person's molecules and the chair's. The wood is a different density than the person's skin and muscles, but those are a different density than the person's bones. And if this is hard, what about when two people shake hands?

In my experience, when you point this out to a Materialist, their reaction is to get annoyed and say, “come on, you know what I mean.” Or, “and yet I can reliably tell what is a person and what isn’t.” I’ve never understood why it is supposed to be an argument in the Materialist’s favor that in practice not even he believes the nonsense he’s saying.

Believing our Imagination

After I posted about whether we can choose to believe something, my friend Eve Keneinan pointed out to me that I had left out the subject of imagination. In particular, it is not merely a question of whether we close our eyes or look at reality; we can also choose to look at our imagination and mistake that for looking at reality. The phenomenon of falling in love with a theory is a subset of this practice.

Imagination is a very interesting subject and one remarked on probably less than it should be. Even the simple question of what is imagination is not asked very much. In broad terms, imagination appears to be the ability of the mind to take on the form of something with which it is not in contact. (This is in reference to the Aristotelian idea that knowledge consists of the mind taking on the form of the thing known; where form refers, very roughly, not to the physical shape of a thing but essentially to what makes it what it is.) The mind can take on the form of something not real, such as when one writes fiction, or it can take on the form of something real but simply not present, such as when one calls to mind the face of a friend.

There is a problem with the latter type of imagination, when it is derived from reality, because we are fallen creatures: we can call things to mind imperfectly. This immediately introduces problems, though it can largely (though rarely perfectly) be corrected by consulting other aspects of our memory to make sure that our reconstruction of our memory is in fact correct. Our imagination is notoriously misleading when it comes to eye-witness testimony, identifying a person we’ve never seen before, and other things courts of law rely on all too often, but that’s not the main point here.

When Immanuel Kant killed off knowledge, in the last days of Modern Philosophy as a living endeavor, he proposed imagination as a substitute for it. Not pure imagination, of course, since that would be absurd even to a brilliant man, but imagination which is then checked against experience (where practical). If experience confirms it, then we continue to count our imagination as "knowledge"; if not, we must try to imagine something else which does conform to our experience. For a fuller explanation, check out Kant's Version of Knowledge.

For many people this idea of “knowledge” has replaced actual knowledge, and interacting with the world becomes an almost solipsistic exercise in playing with the phantasms conjured up by our imaginations. Even where it hasn’t, it is a common practice to understand something by trying to imagine it from incomplete knowledge, very frequently supplying the gaps with pieces of ourselves. That a great many people assume that everyone else is just like them only makes this more misleading whenever it is applied to people or things which are not just like them.

Perhaps most dangerous of all, it is exceedingly easy to fool ourselves into thinking that by looking at things as we imagine them, we are actually looking at the world. Not only do we go astray but we don't even realize our own ignorance. Having applied ourselves with great effort to learn about things which exist nowhere but our imaginations, we feel like we've tried. Worse, it is painful to realize all that effort was wasted, making admitting our mistake to ourselves very difficult indeed.

It is possible to be lazy and ignorant, by not trying. But it is also possible to be very industrious and still ignorant, by looking in the wrong place.

Postscript

There is a saying that Modern Philosophy was born with Descartes, died with Kant, and has roamed the halls of academia ever since like a zombie: eating brains but never getting any smarter for it.

No One Preaches Radical Freedom to Children

Radical freedom, if you’re not familiar with the term, is basically just “do as you will is the whole of the law”. There are many variants of it, but in general it’s the proposition that there are no binding constraints upon a person’s actions—no good or evil—except what they themselves impose.

If this sounds like pure madness, it is, but it's always coupled with some variant of the belief that humans are innately good and never (or very rarely) want to do wrong, so the people who profess it always assume that it will produce the exact same results but with less guilt. You can see this in the ads placed by an atheist group on the side of buses—I believe it was in London—saying, "There's probably no God. Now stop worrying and enjoy your life." A friend said that a famous atheist, asked how there could be morality if nothing is forbidden because God is dead, answered, "I've already murdered the number of people I want to: zero."

In defense of the people who propose ideas like this, they’re not complete idiots and do know that there are people who do murder, steal, rape, etc. There isn’t a single response to that, but I think in most cases they classify anyone who does this as mentally ill and think all such behavior should be dealt with medically.

You could even make a case that Ayn Rand should be classified as a preacher of radical freedom, since her version of radical selfishness was somehow supposed to involve everyone working together towards the common good. (They were supposed to realize that cooperation to mutual benefit was their best way to benefit. I think they’re also supposed to recoil in horror from benefiting at the expense of another or by receiving anything which they haven’t earned. Because that’s obvious to everyone who is rational. I’ll wait until you’ve stopped laughing to type more.)

But something I've noticed about everyone who preaches radical freedom is that they never preach it to children. They always wait until somebody who doesn't believe in radical freedom has painstakingly, over many years, trained children to do what is right rather than whatever they want to do, until the children largely want to do what is right, by habit. Only then do the preachers preach radical freedom. Then they look and notice that people who are largely set in their ways don't much vary their ways when they start believing that anything goes, and conclude they were right that radical freedom is harmless.

Or at least people don’t vary their ways much at first. Another thing I’ve noticed is that the people who preach radical freedom don’t tend to follow up, over decades, with the people they’ve converted. Not that it would matter, since if any of their followers do bad things, it is because they were defective, or mentally ill, or irrational, or whatever, and never because all human beings face temptation and need support in virtue.

And they never seem to ask what happens to the children raised by their followers. In part, of course, people tend to abandon radical freedom as a doctrine once they’re forced to raise children because telling a child that what they really want to do is share their favorite toy is just so utterly doomed to abject failure that almost no one ever tries it. And of course when followers practice some amount of realism raising their children, they are no longer followers, or are heretical followers, or just don’t show up to the monthly do-whatever-you-want meetings because children make it hard to belong to clubs. Whatever the reason, the preachers of radical freedom never talk about the practical aspects of raising children. And in the end, I suppose it shouldn’t be shocking that people who never consider how to raise children should be unaware that degeneration generally happens by generations.

Choosing to Believe

I recently saw the question posed whether it is possible to choose what one believes. The answer is obviously not. Having said that, it clearly is possible.

Before I get into either answer, I want to briefly define what I will mean by the word reality. It is that which, when you stop believing in it, doesn’t go away.

It is clear, then, that it is not possible to choose what one believes because belief is, simply, what reality appears to be. Beliefs are, in this sense, passive, like sight or hearing. We cannot choose what we see—we look and there it is.

But even in saying that, we can begin to see why it is possible to choose our beliefs: You can choose where you look.

If you hear a belief proposed which depends for intelligibility on knowledge which you don't at present have, the belief will necessarily not be believable. You might have no reason to disbelieve it, or you might take it on the authority of whoever told you as likely to be true (whatever it means), but you will not actually believe it. To give a concrete example, suppose someone is telling you something about relativity, and says that some property is true of the Lagrangian near massive bodies. If you have no idea what the Lagrangian is, you can trust that he isn't wrong, but you can't believe what he's saying because you don't know what it means. For you to believe it, it must seem to you an accurate description of reality. Until you understand it, to you it is not in fact a description of anything at all. Now, it is quite possible, by choice, to refuse ever to learn any of the base knowledge necessary for the belief to be believable. If you did this, you would be choosing not to believe the belief.

A practical case I deal with all the time is that young children will not listen to any evidence about the toy store being closed because they are unwilling to believe the necessary corollary to it: that they cannot go to the toy store right now. Toy stores can’t close, and I’m a monster for not taking them there, now. It is true that they don’t believe the toy store is in fact closed, but in shutting themselves off from all evidence because they can’t deal with the consequences, they are clearly choosing to believe that the toy store is in fact open. (To be clear, I picked this example because it should be familiar to everyone and is ready to hand, I am not trying to subtly call all atheists children, nor anything like that. I do my best to restrict rhetoric to posts in the rhetoric category and with a warning up top about how to read them. I believe in active aggression, not passive aggression.)

In a similar way, it is also possible to choose to believe something: in the spirit of inquiry, one could seek out all of the knowledge necessary for a belief. Properly speaking, one would be attempting to believe it. There is an asymmetry here, because the best one can do is try to believe something, whereas ignorance can be guaranteed. It is always possible that, having all the necessary groundwork for a proposed belief to be believable—in other words, fully understanding the idea—it still does not seem to be an accurate description of the world. This will always be true of false beliefs, like the terrorist attacks of 9/11 on the World Trade Center being an inside job, or the one-gene-one-protein theory which was recently chucked into the dustbin of biology. It may even be the case with true beliefs where we don't understand them well enough, as with people who rejected the Monty Hall problem despite knowing a lot about probability and thinking they fully understood the problem specification.
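
(Since the Monty Hall problem came up: it is one of the rare disputes that a few lines of simulation can settle. Here is a minimal sketch in Python; the names are my own, purely for illustration.)

    import random

    def monty_hall_trial(switch):
        # A car hides behind one of three doors; the player picks one at random.
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that hides a goat and isn't the player's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        return pick == car

    trials = 100_000
    wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
    print(wins / trials)  # ~0.667: switching wins about two-thirds of the time

(The crucial assumption, and the one that trips people up, is that the host always opens a goat door; the simulation encodes that rule explicitly, and anyone who doubts the result can run it and watch the frequency converge.)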

But it is very important to note that what constitutes attempting to believe a belief is not purely an act of will. It is the will directing the intellect where to look. That is as far as the will can go; the intellect will see what it sees, just as will can literally make your eyes look at something but your eyes then see what they see, and not what you wished to see. It is a question of the will overcoming laziness or fear and putting in the work of learning, not a matter of the will overcoming the intellect and creating something in it. Human will is a powerful thing, but it cannot do the impossible, and it is not possible to create impressions upon the intellect through sheer will. The intellect is always fertilised by the reality it perceives. Will can no more create a belief in the intellect than a man can impregnate the color blue.

Update: My friend Eve Keneinan pointed out that I didn’t address the complication that we can choose to look at our imagination rather than at reality or nothing at all. I’ve fixed that in its own blog post.

The Argument Against God from the Existence of Atheism

I recently came across an argument which attempts to prove that God does not exist. It’s interesting for two reasons:

  1. It’s not the standard dodge of saying that the burden of proof is on others, as if all of life is a debate, rather than the burden of investigation being on all rational people to find out what is actually true of the world.
  2. It seems to be a novel argument, which would mean that Saint Thomas did not in fact give an exhaustive list in the Summa Theologica. (This is of course possible; Saint Thomas was only human.)

The original version of the argument apparently comes from a book, but is summarized here. It is fairly long and uses a term which it doesn’t define, “meaningful conscious relationship”. There are several possible meanings according to ordinary English usage, each of which makes the argument break down in different places. If it’s not obvious where, let me know and I’ll explain in detail, but suffice it to say it is not explained what would be wrong with a meaningful subconscious relationship.

It is not explained because “meaningful conscious relationship” is useful in this argument precisely insofar as it means “belief”, in the sense of “propositional belief”. That is, the sort of belief you state in words. If you have in your heart a conviction you can’t articulate that the world actually means something and isn’t just a bad joke with no punchline, that is a belief in God but not a propositional belief in God, since you can’t articulate it.

So right away, this argument can be more briefly stated: "If God existed, he would make everyone believe in him, because to not know that God exists would be unthinkably cruel." (There are variants which assume that God's #1 priority is having people believe in him, as if he were Apollo from the Star Trek episode Who Mourns for Adonais?, and that the existence of atheists proves that he is not omnipotent, but this is idiotic and I prefer to focus on the most favorable interpretation of someone's position.)

The problem with arguments from unthinkable awfulness is that they are never thought through. How can you know that something is so awful that no good could possibly be greater than it, except by thinking out in detail how bad it really is? And here we come to the real crux of the problem, for it should be obvious by now that this is just another phrasing of what C.S. Lewis called "the problem of pain." (It was Saint Thomas' first objection to the existence of God.) No one can think out exactly how terrible something is in detail, nor can they think out what sort of goods might be better and available only if the bad thing is permitted. No one can do this because there are too many details. What a person can know is how afraid he is of some particular suffering as he imagines it, and this is invariably what we are actually presented with. This is not thinking; this is being afraid.

What we cannot know because our experience and our minds are finite, God can know because he is not finite. There is no suffering so terrible that it is not theoretically possible that permitting this evil allows greater good to be brought about. And so we come to the real answer to the problem of pain: trust God. God is good, wise, and powerful, and though we cannot see how things are presently being worked to the good, our sight is so very limited there is no reason to expect that we could see it. Not seeing it is, therefore, not only not a contradiction to faith in God, but actually consonant with what we would expect if we are being realistic.

Incidentally, this last part is also why freedom can only be found in obedience to God. To be free, one must be able to choose. But to choose, you must be able to apprehend what it is you are choosing. On our own, since we have no idea what the full consequences of our actions are, and the consequences of our actions are in fact the content of the action, we cannot actually choose anything on our own. Apart from God, we are simply slaves to our environment. We can hope, but our hopes are invariably disappointed. Only by joining our will to the will of one who can apprehend our actions, because he knows their consequences, can we actually do what we intend. It is true we do not apprehend the action in its fullness, but because we will to do good, and God wills that we do good, by joining our will to his in obedience we actually do accomplish what we intended, though we find out what the intention was after it happens, while God knows it in his eternal now. This is the most that freedom can mean to a finite creature that lives in time.

The Lessons of Beetles

I once heard a story which I have dearly loved ever since. It was originally told as a joke, I believe, but I think it actually captures an important theological insight:

Some time in the seventeenth century a naturalist, funded by the crown, returned from one of his voyages and came to an audience before the Queen, who was the one principally responsible for his being funded. After he recounted some of his more interesting discoveries the Queen asked him, “And what have your investigations into the natural world taught you about the Creator?” The naturalist paused for a moment to consider, then replied, “That he has an inordinate fondness for beetles.”

Beetles currently comprise about 25% of known life-forms and 40% of all known insects, with new species of beetles being described all the time (currently there are around 400,000 described species of beetles). Clearly, God loves beetles. But humans who love beetles are considered quite weird: in movies they’re usually played by scrawny guys wearing glasses and bad haircuts and given dialog which proves in every line that they have neither social skills nor friends. And in fairness, God does stand alone; “from whom does God take counsel?” and all that. But the critical difference is, of course, why.

Human beings, being fallen creatures, love things primarily out of need. We are a dying species in a dying world, and we seek scraps of life wherever we can get them. This is almost a literal description of eating food, but it is more relevantly a description of the things we enjoy. We go on hikes because the beauty of trees and rocks and sunshine fills us up for a little while. We go on roller coasters because the rush of power reminds us for a moment that we are alive. We’ll even go to the ruins of ancient buildings made by long-dead hands because, remote as it is, we can feed on the crumbs of life which spilled over when someone was so filled with life that he built something only that it might exist. Art, when it is not purely commercial, is an act of generosity, and therefore life, because things are generous precisely to the degree that they live.

God stands apart because God is fully alive, and therefore needs nothing. He is not just fully alive, he is life itself, or as Saint Thomas Aquinas put it, the “subsistent act of to be”. (Subsistent in this case meaning to be in itself, rather than in another as a subject; the terms of scholastic philosophy are rather specialized.) God loves things in a purely generous way. He does not love anything because it is interesting; it is interesting because he loves it. When Saint John famously said, “God is love”, that might reasonably be rendered, “God is generosity”. Generosity, after all, comes from the same root as “generate”.

God loves all things into existence that he may give them more and bring them from potentiality into full actuality with him in his eternal actuality, which is why God does not disdain the smallest thing. We disdain the small things because our needs are so great; God needs nothing, and so he disdains nothing. God is interested in everything because his ability to give is so great.

God loves beetles, and he even loves the dung which the dung beetles feed on. There is no speck of dust on any cold and lonely planet, so far from its sun that the sun just looks like another star in its sky, which is not immediately in the presence of God. Most of our lives are made up of mundane moments no one would ever make a movie about; perhaps we can all take comfort, as we trudge through the details of everyday life, from the fact that God is inordinately fond of beetles. For it means that the smallness and dullness of our lives is only a defect in our sight.

We Are All Beasts of Burden

If you spend much time in certain parts of the Internet you're likely to come across the hot topic of the Burden of Proof. By which I mean people like to pass it around like they're playing hot potato. And if you're lucky enough to be in the right part of that part of the Internet, you will occasionally see my friend Eve Keneinan put on her oven mitts, reach into the oven, pull out a second hot potato, and stuff it down the pants of someone who was trying to pass the first hot potato to her. Her wording varies, but usually it looks something like this:

You say that the burden of proof is on the person making the positive claim. That itself is a positive claim, so by your own principle you now have the burden of proof to prove that it’s true. Go ahead, I’ll wait.

There’s a very interesting reason why she does that, but before we can talk about it we have to talk about what the Burden of Proof is. So, what is it? There’s no one answer because people have borrowed it and come up with variations of it, but it’s primarily a concept in courts of law and (by imitation) in debating clubs. It exists to solve a specific and big problem which courts have: what do you do when there isn’t a clear answer?

And what courts do varies. In American courts there is, at least in theory, the presumption of innocence for the accused, so that if the prosecution does not meet the evidential criteria set forth at the beginning of the trial, the accused gets to go home like he wants to. This is the prosecution having the burden of proof. However, courts are not always set up this way. Many courts have been set up under the assumption that if the police or crown or what-have-you have gone to the trouble of arresting a man for a crime, it's for good reasons, and so the accused must prove that the competent authorities are in fact wrong. Should he fail to meet the evidentiary threshold of proving them in error, to prison he goes. In this case, the defendant has the burden of proof. Even in the American legal system, once convicted a person is presumed guilty, and on appeal the burden of proof shifts to him to prove that something has gone very wrong.

So what is the unifying theme in all of this? It’s this: the person who most wants something to happen must demonstrate to the people he wants to do it why they should do it.

Which results in numerous conversations that go something like this:

Atheist: If you want me to believe in God, you have to prove God to me.

Theist: I’m fine with you not believing in God, but you now have the burden of proof to show me why I should treat you like you’re mentally competent.

Atheist: You awful, terrible person. You must treat me like I’m a genius, for some reason. It would be rude not to. Didn’t Jesus tell you to treat all atheists like they’re perfect?

Theist: No, and I’m a generic theist anyway, so why are you lecturing me about Jesus?

Atheist: If I’m honest, because of daddy issues. Officially because all theists look alike to me.

Theist: Am I supposed to pretend it’s for the official reason?

Atheist: It would be offensive of you not to.

Theist: Why? You just explicitly contradicted yourself, and for some reason I'm supposed to not notice?

Atheist: I didn’t make the rules. Don’t shoot the messenger.

Theist: I’m pretty sure you just did make that rule up.

Atheist: OK, maybe you did, but if you take everything I say seriously, we’d have nothing to talk about. I mean, I don’t believe in free will. For Christ’s sake, I don’t even believe that thought is valid! I will say, with a straight face, that all of our thoughts are just post-hoc explanations for warring instincts. If any of us took what I say are my beliefs as my actual beliefs, I’d make the guys who think that they’re Napoleon look sane!

Sorry, I get carried away with dialogs sometimes. It's just so refreshing to talk with a self-aware atheist for once! The problem is that it's not a stable position—self-aware atheists tend to cease being atheists after a while. It's like my friend Michael's question about why there seem to be no atheists today who really take Nietzsche seriously. There are some, but typically they soon stop, because they've become Christians. Nietzsche was a unique case: while he could see the stark raving irrationality of the atheist position, he couldn't escape being an atheist. So he ended up dancing naked in his apartment and telling his Jewish landlord that out of gratitude for the landlord's kindness he would wipe out all of the anti-semites. (I forget whether he was going to personally shoot them all or wipe them out with a mere thought.)

Please pardon the stream-of-consciousness of this post, but, after all, the subtitle of this blog is "Quick Observations on a Variety of Subjects". You can't fault me for truth in advertising, at least.

Anyway, getting back to the point, there are a great many people who were raised in a particular sort of mostly secular way, peculiar to a christian heritage, which I will call social hedonism. It is probably a kind of practical utilitarianism, but its basic tenets are very familiar to anyone who grew up among non-christians with a christian heritage: fulfill your emotional needs, primarily with human relationships, and have fun, constrained by being at least halfway decent to the people around you, especially with regards to arguments and disputes. It's a stage in societal decay, so it is not stable and there will not be many generations of people who think like this. (If you prefer the term societal transformation to societal decay, I won't argue it with you.) It is almost accidentally atheistic, but the real point is that it is a definite set of beliefs which people are raised with and have therefore never considered. Most people never ask themselves whether the things they were raised with were true unless they run into someone who asserts something contrary. That's why religious belief is often on the wane in pluralistic societies: it gets challenged more than other beliefs, some more true, some less true, do. And now we're finally able to get to the question that this post started with.

So why does Eve ask people to prove where the burden of proof lies? There are several answers which are suggested not infrequently by the people she gets into this particular argument with, all of which are wildly off the mark. They’re also good examples of why knowing a person really helps in understanding what they say.

She’s an idiot.

In fact, she is extremely intelligent. That does not mean that she's right about everything—intelligent people are very capable of making huge mistakes, and in fact are more likely to stick to such mistakes far longer than a less intelligent person would, because their intelligence allows them to plug holes in their theory for a long time. What it means is that she's not trying to avoid the burden of proof because she can't handle it.

She doesn’t have any reasons for what she believes.

In fact she is exceedingly well read, and could articulate at least five proofs for God off the top of her head and explain them in great detail. She has probably read another half dozen or more, as well as a great many arguments against God. Also everything Nietzsche wrote. And sometimes it seems like half of everything else that was ever written in philosophy. She says that her personal library contains over 10,000 books, and I believe her. I also suspect her library card has gotten a fair amount of use. She reads Attic Greek and has studied Chinese philosophy. She's probably seen 99% of the arguments anyone has ever made for or against God.

She’s never considered whether the religion she inherited from her parents is true.

First, she is American Orthodox, which neither of her parents ever was. Second, she spent many years as an atheist and then as a platonist, only finally coming to Orthodoxy. Each step was only after about ten thousand times more consideration than the average internet atheist puts into anything at all.

OK, so why, then?

Because being a philosophy teacher is not just what she does for a living, it's who she is. Real philosophers aren't content to know things; they must understand them as well. Philosophers ask what everything is, and this includes mundane and ordinary things. She isn't trying to shirk anything; she wants people to ask themselves what the burden of proof is, and whether it's relevant.

She wants this because the burden of proof is a practical thing for certain cases where uncertainty is not a viable answer, and so a mistake is preferable to indecision. This isn't all of life, or even most of it. If you're going to hang a man, you need to come to a decision whether to hang him or let him go, and then you have to move on. Most of life does not have this urgency coupled with this finality, and this is especially true of big questions like "is there a God?" or "is there anything better in life than sex and drugs, and then killing myself quickly when they stop being fun?"

Just because we inherited an answer from our parents or rebelled as children against the answer we inherited from our parents does not mean that we may not think about these things any more. Just because we were told that there is nothing more important than getting along with friends, family, and co-workers does not in fact mean that these things are our highest good or even that they will make us happy. The thing which should be unquestionable is reality itself, not what we've been assuming all along.

The point—the real point—is that in the truly important things of life, no one has the burden of proof. We all have a duty of investigation. Every man that lives has a burden of proof for the things he believes and denies. When it comes to the truth, no one may be a rider. We must all be our own beasts of burden.

Appendix A. Authority

Nothing I said above is meant as a disparagement of authority. Life is short and it is impossible to live without trusting. The key is to trust where it is appropriate. Like how helping people and accepting help are good, but adults should still blow their own noses. And all trust of human beings should be done with the fallibility of all human beings never forgotten.

Poe’s Law Isn’t Quite True

There isn’t an official version of Poe’s Law, but basically it is:

A parody of an extremist will be indistinguishable from the real thing.

In a sense this is all but definitionally true, since parody is making fun of something by presenting a more extreme version of it. If something is already maximally extreme, there is nowhere to go with a parody, so a parody will consist of saying the same things.

But… this is not quite true. It is possible to distinguish between an extremist and a parody because the extremist has a different goal than the parodist does. The parodist seeks to make people laugh. The extremist is trying to live life, and no matter who you are, life is primarily mundane. If you pay attention to what an extremist says, you will notice that most of what they say is actually fairly boring.

This stems from something Chesterton observed: a madman seems normal to himself. Since he is normal, he doesn't think about his extreme views differently from his normal views, because to him none of them are weird. It's not that he's unaware that most people disagree with his extreme views, but that the disagreement is what will be weird to him, not his own views. We think of his extreme views as some oddity tacked on to the rest of his normal views (such as eating when hungry, sleeping when tired, and washing his hands after using the bathroom). He thinks of his extreme views as fitting in with the rest, since there's one reality and so everything that's true about it necessarily fits together. The result is that when he speaks, much of what he says will be prosaic, because he has no reason to speak only about his extreme views. People like to talk about the world, not merely the occasional isolated belief about it.

We thus have a way to tell the difference between an extremist and a parody: the density of extremism in the expression. Or, to put it another way, how funny the thing is. The true extremist isn’t in on the joke, so he doesn’t take care to only talk about the funny stuff. The funny stuff may not even interest the extremist all that much. The parodist, by contrast, is in on the joke, so he takes care to avoid the boring things a real extremist would say.

To put it succinctly, brevity is the soul of wit and the parodist can put on the extremist’s clothes, wear a wig, and even use makeup to change the color of his skin, but he can never change his soul.

Dualists Usually Aren’t Quite Dual

Dualists are people who believe that reality as we experience it is fundamentally different from reality as it actually is, which we can't know (that is, we can't know reality as it actually is). In the west this was popular before Socrates and after Descartes. Familiar examples of modern dualists are the Materialists who believe that there is nothing besides matter and therefore that there is no such thing as free will. When it comes to actually living, they basically just shrug their shoulders and make decisions anyway, because we experience free will even if in reality it's a complete illusion. (They're wrong about this, of course, but I'm not going to bother with any further disclaimers to that effect; I trust you, dear reader, to supply the rest yourself.)

And there’s a curious thing about dualists: they usually believe that there is some link between reality as it actually is and the world of perception which we (supposedly) can’t escape. Most of them are more 1.95ists than true dualists. What’s significant about this is that this link is a source of power: it’s possible to use this link to modify the underlying reality in ways that affect the world of perception.

To keep with the example of Materialists (which New Atheists almost universally are), they believe that things like love, loyalty, curiosity, wonder, awe, compassion and so on are all epiphenomena (that is, accidental manifestations, analogous to symptoms) of base instincts which we have because they resulted in our ancestors producing us. This is not to say that the epiphenomena are themselves necessarily of any value, but the instincts which produce them must have been of some evolutionary benefit. To try to interact with these epiphenomena may be unavoidable, but it is not very likely to accomplish much, since none of them are real. By contrast, there does exist an ability to probe reality. It's limited, difficult, and tentative; and its name is science. The point is not, of course, to improve the evolutionary benefit. Just as evolution does not "care" about the individual, the individual does not care about evolution. The point is to understand the mechanisms which evolution produced in order to change those mechanisms into ones which are more convenient. A good example of this is anti-depressant medications. (Or perhaps it would be if anti-depressants were more effective.)

Even those who suffer greatly from clinical depression are often hesitant to take anti-depressant medications, because psychoactive drugs are terrifying. There is of course the possibility that they will go wrong in dangerous ways—there are anti-depressants whose common side-effects include frequent thoughts of suicide—but the biggest fear is that the anti-depressants would work but turn the person into somebody else. This is not really a concern for the Materialist, because who he is is a mere epiphenomenon, and its only value is in being happy. If the medication changes him, all that was lost was an illusion anyway. (I should note that when this is practical rather than theoretical, Materialists may well be hesitant too, because they know on some level that Materialism isn't true.)

This is why Materialism goes so well with recreational drug use. Caution is of course still warranted for the heavy-duty drugs like cocaine and heroin which can destroy one's life, but Materialism is very compatible with non-addictive drugs like marijuana, LSD, and endorphin stimulation through promiscuous sex. The main reason to avoid these safer drugs (for one who is not a Materialist) is that they falsify one's sense of the world and take one further away from reality, and hence from the true source of happiness. They're not just wastes of time but counter-productive, because they distort one's view of reality and pull one further away from the truth. Of course a single, low-dosage use of such drugs is not likely to have much of an effect (ignoring quality-control issues), and I don't mean to suggest that a person who's had a single puff on a reefer stick is doomed and bereft of hope. But this is the effect of such drugs; they are chemical lies which take a person further away from sanctity and therefore from happiness.

The situation is radically different for a Materialist, however. First, they start off massively disconnected from reality, so within their worldview their connection to reality (more or less) can't be diminished. Second, there is no real happiness which is possible, so there is nothing to lose by telling oneself pleasing lies. Happiness is itself just an accidental manifestation of underlying chemical processes in the brain, and all high-level explanations which we have for happiness are illusions, so messing with the chemistry of the brain to produce happiness is not only more reliable, it is in fact more real. Not that being more real is a virtue for the Materialist, but the argument—that using drugs recreationally divorces the user from reality—will not even make sense to a Materialist.

This is, incidentally, why one runs into the oddity of the evangelical atheist. If God is dead then clearly nothing matters. Even if nothing matters in theory, however, human beings don't cease to be human beings merely because they believe they are only flesh robots, and as Aristotle observed, all men desire to be happy. The significant difference in effectiveness between trying to achieve happiness by dealing with the world according to its epiphenomena (duty, honor, morality, etc.) and dealing with it as it is (scientific fun drugs) is so stark that they are moved by pity to try to spread the word to live according to the latter and not the former.

Science, Magic, and Technology

There is an interesting observation made by Arthur C. Clarke (his Third Law, though it is often misattributed):

Any sufficiently advanced technology is indistinguishable from magic.

This has been applied many times in science fiction to produce some form of techno-mage, but what's more interesting is that the origins of modern science were in magic, specifically in astrology and alchemy. The goals of science were the same as those of magic: to control the natural elements. If you really study the history, it's not even clear how to distinguish modern science from renaissance magic; in many ways the only real dividing line is success. There is some truth to the idea that alchemists whose techniques worked got called chemists to distinguish them from the alchemists whose ideas didn't work. This is by no means a complete picture, because there was also at the same time natural philosophy, i.e. the desire to learn how the natural world worked purely for the sake of knowledge.

Natural philosophy has existed since the Greeks—Aristotle did no little amount of it—but it especially flourished in the renaissance with the development of optics, which allowed for the creation of microscopes and telescopes. Probably more than anything else, this marked the shift towards what we think of as modern science. As Edward Feser argues, the hallmark of modern science is viewing nature as a hostile witness. The ancients and medievals looked at the empirical evidence which nature gave, but they tended to trust it; modern science tends to assume that nature is a liar. Being able to look at nature on scales we could not see before, and finding that it looked different there, probably did more than any other single cause to produce this distrust. Some people feel a sense of wonder when looking through a microscope, but many people feel a sense of betrayal.

Another significant historical event was when the makers of technology started using the knowledge of natural philosophy in order to make better technology. This may sound strange to modern ears, accustomed as we are to thinking of technology as applied science, but in fact technological advancements very rarely rely on new information about how the world works gained by disinterested researchers who published their results for the sake of curiosity. Technology mostly advances by trial-and-error modification of existing technology, and especially by trial and error on materials and techniques. In fact, no small amount of science has consisted of investigating why technology actually works.

But sometimes technology really does follow fairly directly from basic scientific research. One of the great examples is radio waves, which were discovered because Maxwell’s theory of electromagnetism predicted that they existed. Another of the great examples of technology following from basic scientific research is the atomic bomb.

I suspect that these, as well as other lesser examples, helped to solidify the identification of science with engineering. And I don’t want to overstate the distinction: in some cases the views of the natural world brought about by science have certainly helped engineers to direct their investigations into suitable materials and designs for the technology they were creating. But counterfactuals are very difficult to consider well, and it is by no means clear that the material properties which were in fact discovered by direct investigation, though also explained by scientific theories, would not have been discovered anyway at roughly the same time, or perhaps only a little later.

However that would have gone, the association between science and technology is presently a very strong one, and I think that this is why Dawkinsian atheists so often announce an almost religious devotion to science. I’ve seen it expressed like this (not an exact quote):

Science has given us cars and smartphones, so I’m going to side with science.

Anyone who actually knows anything about orthodox Christianity knows that there is no antipathy between science and religion—though it is important to note that I mean this in the sense of there being no antipathy between natural philosophy and religion. In this sense, Christianity has been a great friend to science, providing no small amount of the faith that the universe operates according to laws (i.e. that, being a creature, it has a nature) and that these laws are intelligible to human reason. Moreover, the world having been created by God, it is interesting, since to learn about creation is to learn about the creator. It is no accident that plenty of scientists have been Catholic priests. The world is a profoundly interesting place to a Christian.

But there is a sense in which the Dawkinsian atheist is right, because he doesn’t really care about natural philosophy. What he cares about is technology, and when he talks about science he really means the scheme of conquering nature and bending it to our will. And this is something towards which Christianity is sometimes antagonistic. Not really to the practice, since technology is mostly a legitimate extension of our role as stewards of nature, but to the spirit. And it is antagonistic because this spirit is an idolatrous one.

The great difference between pagan worship and Christian worship is that Christian worship is an act of love, whereas pagan worship is a trade. Pagan deities gain something by being worshiped, and are willing to give benefits in exchange for it. This relationship is utterly obvious in both the Iliad and the Odyssey, but it is actually nowhere so obvious as when the Israelites worshiped the golden calf. For whatever reason this often seems to be taken to be a reversion to polytheism, where the golden calf is an alternative god to Yahweh. That is not what it is at all. If you read the text, after the Israelites gave up their gold and it was cast into the shape of a calf, they worshiped it and said:

Here is your God, O Israel, who brought you out of the land of Egypt.

The Israelites were not worshiping some new god, or some old god, but the same god who brought them out of Egypt. The problem was that they were worshiping him not as God, but as a god. That is, they were not entering into a covenant with him, but were trying to control him in order to get as much as they could out of him. Granted, as in all of paganism it was control through flattery, but at its root flattery has no regard for its object.

And this is the spirit which I think we can see in the people who say, “Science has given me the car and the iPhone; I will stick with Science.” They are pledging their allegiance to their god, because they hope it will continue to give them favors. And it is their intention to make sacrifices at its altars. This is where scientists become the (mostly unwitting) high priests of this religion; the masses do not ordinarily make sacrifices themselves, but give the sacrifices to the priests of the god to make sacrifice on their behalf. And so scientists are given money (i.e. funded) as an offering.

To be clear, this is not the primary reason science gets funded. Dawkinsian atheists (and other worshipers of science) tend to be less powerful (and less numerous) than they imagine themselves to be. Still, this is, I think, how they view the world, though without the appropriate terminology, because they look down on all other pagans.

And I think that it is largely this, and not the silly battles with fundamentalists and other young-earth creationists, that results in their perception of a war between science and religion. There were other historical reasons for the belief in a war between science and religion, but I am coming to suspect that they had their historical time and then waned, and that Dawkinsian atheism is resurrecting the battle for other reasons. They are idolaters, and they know Christianity is not friendly to idolatry. And idolaters always fear what will happen if their god does not get what it wants.

Authoritative Authorities

In my previous post I mentioned that people will use science’s scheme of self-correction as support for its authority, and that this is utterly confused. In fact, here’s what I said (yes, I’m quoting myself; think of it as saving you the trouble of clicking the link):

(It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.)

Today is that day.

Before getting into it, I need to qualify what I mean by an authority. There are multiple meanings to the word “authority”, and the most common one—someone such as a king or judge who should be obeyed and who enforces his will through force—isn’t relevant here. I’m using the term “authority” as in the material logical fallacy, “appeal to authority”. Unfortunately, appeal to authority is often misunderstood, because it would be much better named “appeal to a false authority”. A true authority, in the logical sense, is anyone or anything which can be relied upon to only say things which are true. If you actually have one of those, it is not a fallacy to appeal to its statements.

A logical authority may of course remain silent; its defining characteristic is that if it says something, you may rely on the truth of what it says. These are of course hard to come by in this world of sin and woe, and you will find absolutely none which are universally agreed upon. That doesn’t mean anything, since you will find absolutely nothing which is universally agreed upon.

To give some examples of real authorities, Catholics hold that the bible, sacred tradition, the magisterium, and the pope when speaking ex cathedra are all authorities. God has guaranteed us that they will not lead us astray. Muslims hold that the Quran is an authority.

Not everyone believes that there exist any authorities at all, of course. Buddhists don’t, and neither (ostensibly) do Modern philosophers. If you insist on distinguishing Modern philosophers from Postmodernists, then Postmodernists don’t believe there exist any authorities either. In general, anyone who holds that truth is completely inaccessible will not believe in any authorities.

So we come to Science, and the curious thing is that science explicitly disqualifies itself as an authority. Everything in science is officially a guess which has not yet been disproved by any of the attempts so far made to disprove it. And yet many people want to treat science as an authority. In some cases this is sheer cognitive dissonance, where people pick what they say on the basis of which argument they’re having at the moment, but in other cases there is an interesting sort of reasoning which is employed.

Both forms tend to piggy-back the bottom 99% of science on the success of (parts of) physics, chemistry, and to a lesser extent some parts of biology. This especially goes together with conflating science and engineering.

The first and stronger sort of argument is that science may always be subject to disproof, but that after a sufficient amount of testing, any such disproof will be at the margins and not in the main part. The primary example of this is the move from Newtonian mechanics to Relativity: at most of the energies and speeds we normally interact with, the two differ by less than our ability to measure.
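
To put a number on just how small that difference is, here is a quick back-of-the-envelope check (a sketch in Python; the choice of highway speed, and of the Lorentz factor as the measure of disagreement, are my own illustrations, not anything claimed above):

```python
# How far Newtonian mechanics and Relativity disagree at an everyday speed.
# The Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) governs relativistic
# corrections; (gamma - 1) is roughly the fractional size of the disagreement.

import math

c = 299_792_458.0   # speed of light in m/s (exact, by definition)
v = 30.0            # roughly highway speed (~67 mph), in m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma - 1 at {v} m/s: {gamma - 1:.1e}")
# Prints about 5e-15: a correction of five parts in a quadrillion,
# vastly below what any speedometer or radar gun can measure.
```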

The problem with this argument is that there is relatively little of science to which it actually applies. Physics is rare in that most physicists study a relatively small set of phenomena. There are fewer than two hundred types of atoms, fewer than two dozen elementary particles, and apparently no more than three forces (counting the unified electroweak interaction as one). So thousands of physicists all work on basically the same stuff. (It’s not literally the same stuff, of course; physicists carve out niches, but these are small niches, and they often rely on the more common things in a way where errors would be likely to be detected.) This is simply not true of other fields in science. You can study polar bears all your life and never do anything which tells you about the mating habits of zebra fish. You can study glucose metabolism for five decades straight without even incidentally learning anything about how DNA replication is error-checked. You can spend ten lifetimes in psychology doing studies where you ask people to rate perceptions on a scale of 1 to 10 and never learn anything about anything at all.

The result is that in most fields outside of physics and (to a lesser extent) chemistry, theories are not being constantly tested and re-tested by most people’s work. In some of the fluffier fields like human nutrition and psychology—where controlled experiments are basically unethical and in some cases may not even be theoretically possible—they may not even be tested the first time.

The second and weaker argument is that science is the best that we have, and so we must treat it as an authority. This is very frequently simply outright wrong. In fields where performing controlled experiments is unethical, science consists of untested guesses made by people with a strong financial and reputational incentive to make interesting guesses, and often a strong financial incentive to make guesses which justify policies the government would like to enact anyway. But of course such incentives only count when they’re provided by tobacco companies or weight-loss companies; other financial incentives leave people morally pure, since most scientists have them.

Actually, there is a third argument too, though it’s almost never stated explicitly: a lot of people work hard in science and believe that they’re doing good work, so it would be rude to doubt them. This is, basically, a form of weaponized politeness. The sad truth is that lots of scientists aren’t more honest than other people, lots of scientists aren’t smart, and lots of scientists are wasting their time. It’s mean to say that; sometimes the truth hurts. It always sucks when honesty and politeness are enemies, but if a person prefers politeness to honesty, he’s a liar, and there’s nothing to be said to him except that he’s working to make the world a worse place and should stop.

Ultimately, of course, the real reason science is held to be an authority—as opposed to a potential source of truth which must be evaluated on a case-by-case basis, because a scientific theory is only as good as the evidence behind it—is cultural. People need authorities in order to feel secure, and if they won’t believe in the right authorities they will believe in the wrong ones.

The Fundamental Principle of Science

In the philosophy of science, there have been many attempts to define what it is that distinguishes science from other attempts to know the world. There’s an interesting section of The Trouble With Physics where Lee Smolin discusses Paul Feyerabend’s work, and summarizes it something like this (I don’t have time to find the exact quote):

It can’t be that science has a method, because witch doctors have a method. It can’t be that science uses math, because astrologists use math. So what is it that distinguishes science?

Neither of them, so far as I know, came up with an answer. There is a hint in Smolin’s book that there is no answer; that each advance in science comes about because there is a weirdo whose approach to science works for making the discovery of the moment, but doesn’t work generally. This would explain why so few scientists are really productive over their entire lives; usually they have a few productive years—maybe a productive decade or so—and then tend to fade: they spend a few years discovering everything that their personal quirks are suited to, then, when that is exhausted, return to the normal state of discovering nothing.

There is something common, however, that one will find in all of these quirks, if one looks back over history. This is especially true if you go back far enough to notice how much of science turned out to be wrong. (It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.) There is one principle that you will find consistent between everything which has ever been science, right or wrong. That principle is: assume anything necessary in order to publish.

To see why, we must consider the evolutionary pressure that applies to science. For whatever reason, people rarely take the theory of evolution seriously. They consider it as a scientific doctrine, or an organizing principle for paleontology, or a creation myth, or any number of other things, but very rarely as an operating force in the world. Yet selective pressures abound, and have their effects.

Occasionally people ask what influence the academic doctrine of publish-or-perish has on science, and they are right to ask, but it is really just a subset of a larger selective pressure: science consists exclusively of what is published. If someone were to do extensive research in his basement and discover all the secrets of the cosmos, but never tell anyone, none of his knowledge would be a part of Science. In the same sense that Chesterton said that government is force, Science is publication.

The big problem with trying to uncover the secrets of the cosmos is that they are well covered: coming to know how the universe works is very difficult. It’s often much easier if one makes simplifying assumptions which get rid of variables, or which eliminate the need for expensive experiments because cheap ones will suffice. The problem is that an assumption being convenient is not a justification for making it. But since science consists of what is published, there is a huge selective pressure on people to make these convenient assumptions. This may or may not influence any particular scientist, but the scientists who are willing to make these sorts of unjustified simplifying assumptions will certainly be included in Science, while the scientists who take the principled position and refuse to make them may well not be, because they have no results to publish. In fields where real results are difficult to come by, it’s entirely possible for this to come to dominate what is published. And, as the pitchmen say: but wait, there’s more!

People who are willing to make unjustified assumptions tend to have some personality traits more than others. Arrogance and a certain sort of defensiveness tend to go well with making assumptions one can’t justify, since they discourage requests for justification. These traits also work synergistically with making quick judgments of people based on superficial criteria (like their holding unrelated unpopular opinions), since that tends to insulate the unjustified assumer from having to confront contrary arguments and evidence. And here we come to the question of evolution, because new scientists have to get along with these people, since the scientists who have published largely serve as the gate-keepers of who gets to join science. What sort of candidates will such people accept? Who will find scientists like this tolerable?

In subsequent generations there will be the further question of who will find tolerable the people who found the makers of unjustified assumptions tolerable. And so it will go through the generations, each new one being a mix of all sorts, but the presence of the makers of unjustified assumptions, and of those whom they trained, will act as a selective pressure even on those who don’t work with them directly, since they must still be able to work with these people as colleagues, and in many cases submit journal articles to them for peer review.
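
Since this is, at bottom, an argument about selection dynamics, a toy model may make it concrete. To be clear, this is only a sketch: the two kinds of scientist, and the publication rates assigned to them, are invented for illustration, not measured from anywhere. The only point is that each generation tends toward the composition of the published literature, whatever that happens to be:

```python
# Toy model of selection-by-publication: scientists willing to make
# convenient unjustified assumptions publish more often than those who
# refuse, and new entrants are trained by (and resemble) the published.
# Both publication rates below are invented purely for illustration.

PUBLISH_RATE = {"assumer": 0.6, "principled": 0.2}

def next_generation(population):
    """Replace the population with one mirroring the published literature."""
    published = {kind: share * PUBLISH_RATE[kind]
                 for kind, share in population.items()}
    total = sum(published.values())
    return {kind: count / total for kind, count in published.items()}

pop = {"assumer": 0.5, "principled": 0.5}
for gen in range(5):
    print(f"generation {gen}: {pop['assumer']:.0%} assumers")
    pop = next_generation(pop)
# Starting from an even split, the assumers reach 90% of the field
# within two generations, without anyone ever choosing that outcome.
```

No one in this model is malicious, and nobody decides anything; the filter does all the work.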

For any institution, if you want to know how it tends to go wrong, a good place to start is to ask: what are the selective pressures acting on it?

Patience is the Most Practical of the Virtues

Most of the moral virtues have a reputation for being impractical. Honesty may be the best policy, but it often makes for a great deal more work for the person telling the truth, at least in the short term. Courage is necessary to practice any other virtue, but courage also means having the courage to do things that will cause oneself a great deal of trouble. Diligence is almost the definition of impracticality; it is at least literally the opposite of laziness. And so it goes with most of the others. But patience stands apart from the others in being not only virtuous, but highly practical.

It has been said that insanity is doing the same thing and expecting different results, but the truth is that one never does precisely the same thing twice. The first time always does something, so the second time takes place in a different world. This is especially true when it comes to dealing with people, who usually remember the past. And this is where patience shows how practical it can be.

Anyone with any experience of the world knows that talk is cheap, and that when it comes to actions, a great many people will try anything once. Accordingly, when people state an intention, or even when they try to do something, the most likely outcome is that this is the last you will hear of it. It does not take a great deal of experience with the world to become accustomed to delaying one’s responses. It is true that if you leave the dishes in the sink, they will be harder to clean the next day. It is also true that if you leave them on the table, the dog will probably clean them off within a few minutes, so that you can stick them straight into the dishwasher without having to scrape them first. The reason that procrastination is so common in this world is that it is very effective. Many, if not most, problems simply go away if you ignore them long enough.

This is why there is the story of the importunate widow in the bible. (“Importunate” means persistent to the point of being a nuisance.) There was a judge who neither feared God nor respected man, and a widow who never ceased to demand justice from him against her enemy. For a long time he ignored her, but eventually he said to himself, “Though I neither fear God nor respect man, I must give this widow her rights, or she will come slap me in the face.”

There is another practical aspect to patience: patience must come from a source, and that source will carry a person through the execution of what he undertakes. This is especially important in organizations with limited resources; to give someone what he asks is to commit resources which could be used elsewhere, even if only time. When people are willing to wait, it shows that their zeal has a reasonable chance of surviving the execution of their undertaking, especially since all human undertakings in this fallen world will meet with adversity.

Patience is also involved in every attempt at learning. Whether it is practicing a skill or reading entire books to find out which are the good parts (if one isn’t reading Chesterton), learning will never be acquired without patience. This is perhaps especially evident at dance classes; a great many people quit because they don’t have the patience to look like a fool for a short time. It is true everywhere, though. Many people give up ice skating because they do not have the patience to fall a few hundred times. People give up learning to knit because they cannot stand to make a single misshapen scarf whose stitches are far too tight. Many a potential juggler has juggled nothing because he got tired of chasing after wildly thrown balls.

It has, for these reasons, always struck me as odd that patience is not a more commonly practiced virtue. It comes up almost any time one wants to accomplish anything, even vice. Pickpockets must wait until the right target comes along. How much more, then, will patience be required to practice virtue?

Bogeymen

The classic Bogeyman is a tale told by parents to frighten children into good behavior. There is another type of Bogeyman, however. It is a tale told by adults to themselves to explain why they’re already frightened.

We live in a fallen world, which means that we are separated from God. This is a terrible state for us to be in and more to the point we instinctively know that it is a terrible state for us to be in. In this state we are not happy and since we want to be happy we seek to know why we are not happy. Of course, if we came to the right answer we would go to church, receive the sacraments, and make progress on being happy. But not everyone does this, and the people who don’t still have a deep-seated emotional need to have an explanation for why they are unhappy. So they come up with one that isn’t true.

This explanation for why they are unhappy is what I call a Bogeyman. Bogeymen invariably have a few key traits. In particular, they always:

  1. Are something which is reasonably powerful.
  2. Are something that is in theory beatable.
  3. Are something that is not in practice beatable.

If something is not powerful, it has no explanatory value for unhappiness. If it is not in theory something the unhappy person can overcome, then misery is assured, and the Bogeyman leads to despair, which (most) people know to be wrong. If it is something that is beatable in practice, it will be beaten, the unhappiness will not go away, and another Bogeyman will need to be found. Vaguely analogously to the Peter Principle, Bogeymen will be defeated until one is found that satisfies conditions 1 and 2 but cannot in practice be beaten.

Bogeymen can be nearly anything that satisfies these three criteria. Groups of people are very popular: Republicans, Democrats, the Rich, drug users, popular entertainers, foreigners, racists, men, women, etc. Social conditions such as poverty, inequality, factory farming, industrial pollution, etc. are not uncommonly used. Widely held social theories like capitalism, Marxism, nationalism, internationalism, Catholicism, etc. work well as Bogeymen too.

This is not to say that a person will have no legitimate complaints about the real thing they are using as a Bogeyman. They almost certainly will, since real complaints work much better than imaginary complaints to create the skeleton of a scary figure. Rare is the imagination so powerful it can keep a menacing figure in view without any recourse to reality. But though the complaints are real, they will never be considered in any sort of balance. A person who focuses their fears onto a Bogeyman is inherently a utopian—someone who believes that perfection can be achieved in this life—and utopians can never consider imperfections in the world as permanent compromises. Utopians don’t mind temporary compromises, of course—hence the guillotine and the gulag—but a permanent compromise because the world will never be perfect? That is unthinkable. If that were the case, happiness would be impossible.

It’s a problem of looking for happiness in the wrong place, of course. This transitory world is not the sort of place in which you can find happiness. But if a man gives up looking for God, the wrong places to find happiness are all that are left.