Jordan Peterson, Falsehoods and Consequences

A friend of mine (rather incautiously, given how little provocation it takes to get me to write a blog post) said,

[T]here’s a part in the trailer for this movie where Peterson says “Falsehoods have consequences. That’s what makes them false.” If you discern any meaning in that statement, please tell me.

I’m now going to explain what Peterson means. (Or what I think he means—I haven’t been given the gift of reading souls.) First, I think that we can rephrase this less poetically but more clearly as:

[Falsehoods have negative consequences. That’s intrinsic to them being false.]

To break this down, we need to start with what a “falsehood” is. It’s not merely something that’s not true, but it’s an idea of something that’s not true. An idea points to something. What a false idea points to is something that’s simply not there. That is, the falsity is a relationship between the idea and reality.

Take a really simple example from classic Bugs Bunny cartoons: someone walks off a cliff but doesn’t look down, so he keeps walking as if the ground were there. He only falls when he notices. This is funny because it’s the opposite of how reality works—in real life, if you believe the cliff is a flat plain and walk off the cliff, you fall immediately. Believing the cliff to be a prairie is the falsity. Falling when you try to stand on what’s not there is the consequence.

What Peterson is trying to point out is that this relationship is inherent because truth and falsity are not properties of the idea but of the relationship of the idea to reality. We live in such a pluralistic culture and want so badly to get along with each other that we try to pretend that truth and falsity are private things—that they only apply to the idea itself. If we can believe this, we can then not care about what awful beliefs someone else has because we can pretend it doesn’t really matter.

But ideas do matter—precisely because they either correspond to reality or don’t. If you treat reality as if it’s something else, very bad things will happen because what you’re actually doing is contrary to reality. That’s the primary meaning.

However, this quote also works the other way—you can use consequences as a test for truth. This is, basically, the entire approach of science. It’s got some major problems if you take it too seriously, but if it’s only one tool in your tool belt, pragmatic truth can be a useful one. To continue our original analogy—suppose that instead of thinking the cliff is a cliff, you think it’s a canyon whose opposite side is too far away to see. There’s a pragmatic sense in which this isn’t false—to put it in a more scientific way, your model corresponds to reality as far as you are able to measure.

A more practical example of this would be the “white lie”. Suppose your wife asks you if she looks good in a particular dress and suppose further that it’s really one of the least flattering dresses she owns. But suppose further that the question at hand—whether she knows it or not—is really, “should I be embarrassed to show my face while I wear this dress—will I be risking social ostracism by wearing it?”

If you give the answer, “yes, it looks good on you”, what is the difference between that and the strictly more accurate, “It doesn’t look very good on you but is still well within the range in which no one’s opinion of you is going to change because they love you, they will still think you put effort into your appearance for their sake, and realistically you would need to be wearing a rotting corpse or something equally extreme to change our friends’ opinion of you and hence your social standing, so by all means wear it if your favorite dress is in the wash and this is way more comfortable than the other dress which looks better on you and is clean”?

Assuming for the sake of the example the obviously unrealistic idea that your wife could accept such a robot-like answer at face value, neither of them has any sort of negative consequence to living—in both cases your wife will wear the dress, feel that she didn’t quite make the maximal effort she could have, and not worry more than she would regardless of what she was wearing. So in a practical sense, neither of these statements is false—that is, neither of them corresponds to reality so badly that you’re going to walk off a metaphorical cliff by acting according to it.

When you put these two things together, you have the meaning of the original quote:

Falsehoods have consequences. That’s what makes them false.

Models vs. Reality

A little-known change in the attempt to learn about nature happened, in a sense, several hundred years ago. People replaced Natural Philosophy with mathematical Science: the attempt to know what nature is gave way to mathematical models of nature which can predict its measurable aspects.

The difference between these two things is that a model may, possibly, tell you about what the underlying reality is. On the other hand, it may not. Models can be accurate entirely by accident.

Trivial examples are always easier, so consider the following model of how often Richard Dawkins is eaten by an alligator, where f is the number of times he’s been eaten by an alligator and t is the time (in the sense of precise date):

f(t) = 0

This model is accurate to more than 200 decimal places. If you conclude from this model that Richard Dawkins is alligator-proof and throw him in an alligator pit to enjoy the spectacle of frustrated alligators, you will be very sadly mistaken. But it’s so accurate!
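To see the joke concretely, here is a minimal sketch in Python (the observation data is made up purely for illustration) of how a model can fit every past observation perfectly while saying nothing about the underlying reality:

def model(t):
    # Predicted number of times Richard Dawkins has been eaten
    # by an alligator as of date t: always zero.
    return 0

# Hypothetical observations: one per year, all zero (so far).
observations = {year: 0 for year in range(1941, 2025)}

# The model's error on every past observation is exactly zero...
total_error = sum(abs(model(y) - seen) for y, seen in observations.items())
print(total_error)  # 0

# ...but a perfect fit to the past is not a description of the mechanism.
# Nothing in the model distinguishes "alligator-proof" from
# "has never been thrown into an alligator pit".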

This is of course a silly example; no one would ever confuse this model or its accuracy for a full description of reality. However, there’s a very interesting story from astronomy where people did exactly that.

I’m speaking, in particular, of the long-running Ptolemaic model of the planets and its eventual overthrow by the Copernican model. The Ptolemaic model was the one where the earth was at the center of the solar system and the planets traveled in cycles and epicycles around it. The thing about this model is that it was actually extremely accurate in its predictions.

(If you’re wondering how it could be so accurate while being so wrong, the thing you have to realize is that General Relativity actually means that it’s just fine for the earth to be taken as the center of the coordinate system. The math just gets harder for some calculations; this is basically what happened. The Ptolemaic model was, basically, a close approximation of that more complicated math.)
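To make the coordinate-change point concrete, here is a toy sketch in Python (idealized circular, coplanar orbits; the radii and periods are rough textbook values, used purely for illustration). Re-expressing heliocentric motion in earth-centered coordinates is just a subtraction, and the resulting geocentric path of Mars loops back on itself; this looping is the retrograde motion that cycles and epicycles approximated so well:

import math

AU_EARTH, AU_MARS = 1.0, 1.52   # orbital radii in AU (approximate)
YR_EARTH, YR_MARS = 1.0, 1.88   # orbital periods in years (approximate)

def heliocentric(radius, period, t):
    # Position of a planet on an idealized circular orbit at time t.
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:  # time in years
    ex, ey = heliocentric(AU_EARTH, YR_EARTH, t)
    mx, my = heliocentric(AU_MARS, YR_MARS, t)
    # Earth-centered coordinates: just subtract earth's position.
    # Nothing forbids this frame; the motion merely looks more complicated.
    gx, gy = mx - ex, my - ey
    print(f"t={t:.1f}y  Mars as seen from earth: ({gx:+.2f}, {gy:+.2f}) AU")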

However, there is a yet simpler example of incorrect models producing correct results: just consider, for two minutes, that for most of history everyone believed that the Sun orbited the earth, and yet they still had highly accurate calendars. Despite not thinking of a year as the time the earth takes to orbit the Sun, they nevertheless recorded the years and predicted the solstices with great precision.

Incidentally, if you’re interested in a full history of the shift from the Earth being the center of the solar system to the Sun being at the center, be sure to read the extraordinarily good series of articles by TOF, The Great Ptolemaic Smackdown (originally published in Analog magazine). It is very well worth your time.

Accuracy vs. Charity

A curious experience I have from time to time is when discussing some sort of sin or other moral error, when I identify the lesser good aimed at, I’m told that I’m very charitable. This confuses me somewhat because my goal is not to be charitable, but merely to be accurate.

All sin is the seeking after of some lesser good in place of a higher good. The clearest and easiest example is idolatry—this is worshiping some created good as if it were the Creator. But that means that the idolater is seeking God in a creature. It’s not particularly charitable to note this; it’s simply accurate.

To take a slightly less obvious example, when a person is wrathful—i.e. indulging in excessive anger—they are placing the rectification of some wrong above the good of the injured party and the culprit. In true justice, a wrong done is rectified and a balance restored between aggressor and victim, so that they can return to their proper relationship of friends. When one is wrathful one seeks only to redress the wrong done to the victim, but not to restore the relationship between creatures. By giving an infinite weight to the good of which the victim was deprived, the wrathful person is never satisfied at the restitution and therefore ignores the greater good (once proper restitution has been made) of restoration of the proper relationship between sinner and victim. But saying that the wrathful person goes wrong by over-valuing the victim (or the good of which the victim was deprived which constitutes the injury to the victim) is not—in any way that I can see, at least—being charitable to the wrathful person. It’s just being accurate.

I suppose it’s possible that this is taken for charity because commonly people ascribe sin to the desire to do evil, but this is not actually possible. It’s simply a point of metaphysics that the will can only move towards some good, though it can move toward a lesser good in place of a greater good. As such, whenever a person goes wrong, you know with iron certainty that they were seeking some good, however minor. This doesn’t lessen their sin since it’s inherent in their sin being sin that they are seeking some (lesser) good.

Perhaps people think I mean that the one whose sin I’m explaining must therefore have sinned by accident, or been misled through no fault of their own? That certainly does not follow; we know from the fact that sin is voluntary that one can knowingly choose a lesser good over a greater good.

Oh well. Perhaps some day I’ll understand this.

Pride Vs Stupidity

Over on his blog, Mr. John C. Wright asks the question:

Why is the proud man angry or peeved with the stupidity (real or imagined) of his fellows? I ask because one would think a saint would be very patient with someone who was stupid, if it were honest stupidity, and not merely laziness in thinking. Whereas the devil (or Lex Luthor) is always in a state of haughtiest annoyance, because he is brighter than those around him. Their stupidity proves his superiority – yet it irks him. Why?

To answer this question we have to first answer the question, “what is pride?” (I’m taking the distinction between pride and vanity as a given.) A generally workable description of pride is an inflated sense of the worth of the self. This is, however—when properly considered—a symptom rather than a cause.

The cause of pride is a mistake about the nature of the self. This is inescapable because the value placed on something is inherently a description of its nature. (I should probably clarify that pride is an inflation of the inherent worth of the self—it’s not a utilitarian measure of the worth of the self to someone else’s purposes, as a means to their end. That’s actually a form of vanity.)

There are two possible mistakes to make about the nature of the self which aggrandize it:

  1. That one is a higher creature than one is, but still subordinate to God
  2. That one is God

While #1 is possible, I suspect it’s not the common mode of pride since it’s too subject to correctives. A human being who thinks that he’s an angel, for example, will have a hard time not noticing that he has a physical body and is, therefore, actually a human being. If he still thinks himself subordinate to God, he will in humility accept this recognition. It is, therefore, hard to see how #1 can be a long-lived error. Even Gulliver couldn’t think himself a Houyhnhnm for long at a stretch.

This leaves #2 as the common form of pride, and it is this form of pride in which stupidity angers the proud man. It angers him because it is proof that he is not God. The proud man wills that the people around him are not stupid and yet they are. This proves his limitations and therefore disproves his opinion of his own power. The larger the difference between what he wills reality to be and what it is, the greater the proof that he is not God, and therefore the greater is his anger.

Most of Life is Unknown

We live so awash in stories and news that we get a very skewed perspective on how much of real life is known to more than a few people and God. What got me thinking about this was watching the following song:

It was the theme song used by a sketch comedy group at the university I went to for my undergraduate degree. They were called Friday Night Live (as an homage to the TV show of similar name) and would put on a show about three times a semester. They had apparently been hugely popular when they started, which was at least a few years before I attended. I was in the rival sketch comedy group, Pirate Theater. At the beginning of my freshman year the attendance at our shows was lower than that of Friday Night Live, but by the end of my senior year the positions had reversed. Our audiences were 3-4 times larger and theirs were smaller even than our audiences had been in my freshman year.

A few years ago I ran into someone who was a student at the university at the time, and I asked about the shows. He said that Pirate Theater was still reasonably popular but Friday Night Live no longer existed. In fact, he had never even heard of it.

There had been a reasonably friendly rivalry between the two shows, and it turned out that our efforts brought success, in a sense. I don’t think that any of us pirates wanted to kill off Friday Night Live, and ultimately I suspect that it was the quality of their writing which did them in. For whatever reason (I never wrote sketches for FNL, so I couldn’t say what it was like to do so), Pirate Theater managed to attract far more of the skilled writers on campus. It also didn’t help that FNL insisted on having an intermission and hiring a local band to play in it. I think in all the time I was there—and I didn’t miss an FNL show all four years—they had one band that I didn’t leave the auditorium to get away from. Really, really cheap bands tend to be so inexpensive for a reason.

As you might imagine, I was never alone outside the doors when the band was playing. And they were always ear-hurtingly loud, too. Adding injury to insult, I suppose.

Towards the end of my senior year, there were probably fewer than a hundred people attending the Friday Night Live shows. The total number of people who saw their trajectory over those four years was not very large; the number who remember it now is probably much smaller. And yet the actors did work on their sketches, however little humor was in them. The “Tourette’s family” sketch, where the “joke” was that there was a lot of yelling and cussing, may not have been entertaining to watch, but it’s no easier to memorize unfunny lines, or to say them at the right time when you’re live on stage. (There was approximately one of those per show, for all four years, by the way.)

One of the actors was limber and would do physical comedy with a folding chair, getting stuck in it. It wasn’t brilliant physical comedy, and (due to the lack of skilled writing) never really fit into the sketches it was in; one could see it coming a mile away, as the sketches it was used in were basically an excuse to do the wacky chair antics. But someone did write the sketch, and people memorized it, and the actor did twist himself through a chair on stage, which is not an easy thing to do. And I do have to say that the final time he did it—which was the last FNL show I attended since he graduated the same year I did—was actually kind of funny, because it was a protracted goodbye dance with the chair, complete with sad music and longing glances.

 

This was all very real; people put in real work, went through real happiness and sadness, and now it is mostly forgotten. That is ultimately the fate of (almost) all human endeavors. This was captured quite well, I think, by Percy Bysshe Shelley in his poem Ozymandias:

I met a traveller from an antique land,
Who said—“Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.

Most things fade from memory far faster, of course. How many people now remember more than a few fragments of Friday Night Live’s sketches? I don’t think I remember more than a few fragments of the Pirate Theater sketches which I wrote, let alone those I merely performed in. I do have a DVD with a collection of the video sketches we did (many of which I’m in), but it’s been quite some time since I watched it.

It may well be longer until I watch it again. In the intervening decade and a half, I’ve gotten a profession, married, bought a house, had three children, published three novels, made a YouTube channel with over 1,973 subscribers, and a whole lot more. As much as I value my memories of my college days, I don’t want to go back in the way that video takes one back.

But the thing is, that’s not a strength. I just don’t have the time and energy for it. But all the things I’ve done since which so occupy me now will also fade in time. Eventually I will die; eventually this house will fall down or be demolished; eventually my children will die. Nothing has any permanence within time.

So the only hope we have is for permanence outside of time. There’s a great metaphor, which Saint Augustine uses in his Confessions, of God, at the end of time, gathering up the shattered moments of our lives and putting them together as a unified whole. And that’s really the only hope we have for any of our lives to be real.

You Can’t Get an Ought From an Is In Hell

One of the questions which comes up in discussions of morality is whether you can get an “ought” from an “is”. This is relevant primarily to discussions of atheism, since to the atheist everything is a brute fact, i.e. an “is” which is not directed towards anything, and therefore an atheist cannot get any “oughts” out of their description of what is. Or in simpler language, if God is dead then all things are permitted. (Note for the unpoetic: by “God is dead” we mean “there is no God”.)

There are two reasons why if God is dead all things are permitted:

  1. If God is dead, who is there to forbid anything?
  2. If God is dead, then there is no ultimate good because all is change and therefore nothing has any lasting reality.

If you argue this sort of stuff with atheists long enough, somewhere along the line while you’re explaining natural ends (telos) and natural morality, you may come by accident to a very interesting point which the atheist will bring up without realizing it. It often goes something like this:

OK, suppose that what God says is actually the only way to be eternally happy. Why should you be eternally happy? Why shouldn’t you do what you want even though it makes you unhappy?

This question sheds some very interesting light on hell, and consequently on what we mean by morality. Our understanding of morality tends to be like what Saint Augustine said of our understanding of time:

What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.

Somehow or other atheists tend to assume that ought means something that you have to do, regardless of what you want to do. It’s very tempting to assume that this is a holdover from childhood, where ought meant that their parents would make them do it whether or not they wanted to. It’s tempting because it’s probably the case; but it’s not an adult understanding of ought. And it’s not, because ultimately we can’t be forced to be good. (Or if this raises your hackles because I’m “placing limits on God”, then just take it as meaning that in any event we won’t be forced to be good.)

Hell is a real possibility. Or in other words, it is possible to see two options and knowingly pick the worse option.

What we actually mean by saying that we ought to do something is that the thing is directed towards the good. And we can clarify this if we bring in a bit of Thomistic moral philosophy: being is what is good. Or as the scholastic phrase goes, good is convertible with being. But being, within creation, is largely a composite entity. A statue is not just one thing, but many things (atoms, molecules, etc.) which, in being ordered toward the same end, are also one thing which is greater than their parts.

And you can see a symphony of ordering to a greater being, in a human being. Atoms are ordered into proteins (and many other things like lipids, etc), which are ordered into cells, which are ordered into organs, which are ordered into human beings. But human beings are not at the top of the hierarchy of being, for we are also ordered into community with other created things. (Please note: being part of a greater whole does not rob the individual of his inherent dignity; the infinite goodness of God means that creation is not a competition. Also note that God so exceeds all of creation that He is not in the hierarchy of being, but merely pointed to by it.)

And so we come to the real meaning of ought. To say that we ought to do something is to say that the thing is ordered towards the maximum being which is given to us. But we need not choose being; we can instead choose non-being. The great lie which the modern project (and, perhaps not coincidentally, Satan) tells us is that there is some other being available to us besides what was given to us by God. That we can make ourselves; that we can give ourselves what we haven’t got. And these, not at all coincidentally, are the things which we ought not to do—that is, those things are not ordered toward being. They’re just what the atheist says that all of life is—stimulating nerve endings to fool ourselves that we’ve accomplished something.

And yet atheists complain when one says that, according to them, they’re in hell.

God, at least, has a sense of humor.

Superman’s Secret Identity

I had a conversation with my friend Andrew Stratelates recently about the question of why no one figures out that Clark Kent is actually Superman. When he pointed out that mannerisms can be very suggestive to people but that it would be very difficult to fool facial recognition software, I figured something out: trying to figure out Superman’s secret identity presupposes that he has a secret identity. And why on earth would anyone think that?

Superman doesn’t wear a mask, and is even clean shaven. Since one can plainly see his face, which he makes no effort to hide, there’s absolutely no reason to think that he has some sort of alternate persona he’s hiding. Moreover, if you think about it for a moment, it’s actually really quite strange that Superman does have an alternate persona. It serves no practical purpose. In most tellings, Superman is not a vigilante wanted by the police, and in any event he has a Fortress of Solitude a reasonable commute away, so it’s not as if he needs to rent an apartment to avoid capture. And if Superman did want money, he could take advantage of his super powers to earn hugely more than he could pretending to be an ordinary man. For instance, since he can travel at supersonic speeds while carrying multiple tons of material, he could make a fortune as a high-speed courier. The list of better ways to make money than working an office job would be quite long, and moreover, obviously quite long to everyone.

Further, there’s the fact that Superman is basically an Olympian god compared to ordinary men. Why would he choose to do the drudgery the rest of us are forced to do? It’s an imperfect analogy, but consider the following hypothetical:

Suppose you work for a company which makes inkjet printers, and suppose you have a co-worker in your office named Fred who looks like Donald Trump, except that he is polite, self-effacing, drives a 6-year-old Nissan Sentra, and wears glasses. And suppose another co-worker one day whispered to you, “You know what, I think that Fred is Donald Trump’s secret identity!”

Would you:

(A) Say, “You know what, if you take away the glasses he does look exactly like Donald Trump. You must be right!”

(B) Ask, “Why on earth would Donald Trump have a secret identity working a mediocre job in our printer company?”

(If Donald Trump is too polarizing a figure, you could easily substitute Jeff Bezos, Mark Zuckerberg, or the Duke of Cambridge (Prince William), and the point will remain unchanged.)

Superman’s having a secret identity makes about as much sense as his wearing his underwear on the outside of his clothing—it’s interesting, it’s very historically contingent, and it’s plausible only in the sense that life has a lot of quirks to it that we’d never expect. That it is plausible in the sense that life is stranger than fiction does lend some plausibility to people not discovering Superman’s secret identity. And I think the wild implausibility of Superman having a secret identity is the best defense he has, since it would be trivial to detect Superman otherwise, even if he wore an astonishingly realistic face mask. Just use an x-ray scanner and find the guy who’s completely solid. Alternatively, look for people the right height and build and poke them with a very thin, sharp pin until you find the guy where the pin breaks instead of going into his skin. And if you’re a villain, just do it like they did in the movie Pumaman and throw likely candidates out of high windows until you find someone who survives.

Over the Hills and Far Away

I recently discovered the singer and hurdy-gurdy player Patty Gurdy. Originally part of the band Storm Seeker, she seems to be striking out on her own. I’ve really been enjoying her songs on YouTube, and I’m particularly fond of her cover of a Storm Seeker song called The Longing:

However, the song I want to talk about is Over the Hills and Far Away:

It’s extremely reminiscent in theme of the Johnny Cash song The Long Black Veil, though I don’t know that there’s any influence:

Either way, it’s very interesting to compare the two songs. And despite the similarity of subject matter, the biggest difference is what kind of song they are: Over the Hills and Far Away is a (sort-of) love song, while The Long Black Veil is a tragedy.

This is of course facilitated by the different penalties for the different crimes. In The Long Black Veil, the man is accused of murder and his refusal to provide an alibi results in his execution, while in Over the Hills and Far Away he refuses to provide an alibi for a robbery and consequently is sentenced to 10 years in prison. This enables the latter to have the theme of eventual return, and it’s this theme which turns the song into a love song.

Which is unfortunate because the man should not return to the arms of his best friend’s wife. He should stay out of the arms of any man’s wife, but even more so those of his best friend’s wife. In the song where the adulterer died, it becomes possible to take it as a simple tragedy in which he was not directly punished for his adultery, but nonetheless was punished indirectly because his adultery prevented him from proving his innocence. He got what he deserved, if indirectly, sort of like the plot of The Postman Always Rings Twice.

Unfortunately that sort of interpretation isn’t possible for a man who doesn’t understand what he did to be wrong (only socially unacceptable). But I find it interesting that the woman sings a song about adultery as a love song and the man sings it as a tragedy. This touches on a theme I’ve noticed in stories written by women: a man is so captivated by a woman’s beauty that he’s willing to destroy himself (and often her) because of it. This isn’t a universal theme, nor anything like that, but I’ve noticed that this is a common theme in material that I didn’t usually read until recently.

There’s a lot to say about the theme of a man so entranced by a woman’s beauty that he becomes a monster, which alas I don’t have time for now. But it is an interesting question to ponder whether the becoming a monster is intrinsic to the fantasy, or whether it’s a way of defending against the accusations of wish-fulfillment which the story would face if the woman’s beauty captivated the man and helped him to overcome his vices and become a saint. That latter one would be a very good story, though.

Without Midwits, Geniuses Would be Useless

Over at Amatopia, Alex wrote an interesting post called, The Curse of the Midwit:

One of the worst things to be is a midwit. And I am one. Let me explain what I mean by “midwit.” I have seen the term used many ways, and they boil down to these six points: Someone who is not as smart as the truly intelligent, but is of above-average intelligence, Who wants other […]

As usual, it’s a post worth reading, but Alex only tells half the story. He talks about the dangers of midwits but every danger is just the flip side of a virtue. (Of a natural virtue, specifically. The natural virtues are things like intelligence, strength, physical beauty, health, and so on; they are distinct from the moral virtues like courage, self control, etc.; which are again distinct from the theological virtues of faith, hope, and love.)

In short, Alex leaves out the virtue unique to midwits. Now, in what follows I’m going to paint with a very broad brush because I don’t have time to give a full description of the hierarchy of being, so I ask you to use your imagination to fill in all that I’m going to leave vague.

As I’ve said before, God’s fundamental theme within creation is delegation (technically, secondary causation). He doesn’t give each creature everything He gives it directly, but instead gives some of His gifts to other creatures to give to their fellow creatures on His behalf. Through this He incorporates us into His love of creation and into His creative action. But within creation, this theme of delegation echoes. Instead of one intermediary, God orders the world so that there are several intermediaries. He spreads the love around, as it were.

The part of that which we’re presently concerned with is that it is not (usually) given to geniuses to be able to give their knowledge to the great mass of humanity directly. And since it is (usually) not given to them, they generally can’t do it. When a genius speaks to a common man, he’s usually quite unintelligible. If the common man knows the genius to be a genius by reputation, he’ll assume the man is saying something too brilliant for him to understand, rather than raving nonsense, but he will typically get about as much from it as if the genius were raving nonsense. This is where the midwits come in.

A midwit can understand a genius, but he can also speak in ways that common men can understand. Thus God’s knowledge is given to the common man not directly, but first to the genius, who gives it to the midwit, who then gives it to the common man. Geniuses need midwits at least as much as midwits need geniuses. In truth, all of creation needs the rest of creation since we were created to be together.

Of course the distinction of men into three tiers—genius, midwit, and common—is a drastic oversimplification. In reality there are levels of midwits and levels of geniuses, each of which tends to receive knowledge from the level above it and pass knowledge down to the level below it. For example, Aristotle would have had the merest fraction of the effect he has had were it not for an army of teachers, down through the millennia, who have explained what he taught to those who couldn’t grasp it directly.

Of course in this fallen world every aspect of this can and often does go wrong in a whole myriad of ways. And Alex is quite right that midwits can be very dangerous when they consider themselves geniuses—or really, any time that they’re wrong—because the sacred burden of teaching the great mass of common men has been given to them. Midwits have the power to do tremendous good, which means that they have the power to do tremendous harm. But the tremendous good which midwits were given to do should never be forgotten just because many of them don’t do it.

The Evolution of Scientism

There’s a curious thing which happens to those who believe that the only real knowledge comes from science: they start to believe that nearly everything—except what they want to reject—is science. Ultimately this should not be shocking, since people who live with a philosophy will invariably change it—gradually—until it is livable.

The people who become Scientismists generally start out extremely impressed with the clear and convincing nature of the proofs offered in the physical sciences. It would be more accurate to say, with the few best proofs in the physical sciences which are offered to them in school—but the distinction isn’t of great import. In practice, most of the impressive results tend to be in the field of Chemistry. It doesn’t hurt that Chemistry is a bit akin to magic, with the astonishing substances it allows people to make, but what it’s really best at is interesting, counter-intuitive predictions. Physics, at least as presented in school, generally allows you to predict simple things like where a thrown object will land or how far a hockey puck will skid on the ice. These aren’t very practical, and the results tend to be intuitive. Chemistry, by contrast, involves the mixing of strange chemicals, with results ranging from nearly nothing, to things which glow, to explosions, to enormously strong plastics.

And Chemistry does this with astonishing accuracy. If you start with clean reagents and mix them in the appropriate steps, you actually do end up with close to the right amount of what you’re supposed to end up with. If you try to run a physics experiment, you’ll probably be nowhere close to correct simply because the experiments are so darn finicky. I still remember when my high school honors physics class broke into groups to run an experiment to calculate acceleration due to gravity at the earth’s surface. The results were scattered between 2.3 m/s² and 7.3 m/s² (the correct answer is 9.8 m/s²).
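A little arithmetic shows why. Here is a minimal sketch in Python (the drop height and the size of the timing error are made-up but plausible classroom numbers): since g is recovered from a drop of height d in time t as g = 2d / t², a small error in a hand-timed measurement of t is amplified enormously:

import random

g_true = 9.8       # m/s^2
drop_height = 1.5  # meters (hypothetical classroom setup)
true_time = (2 * drop_height / g_true) ** 0.5  # about 0.55 seconds

random.seed(0)
for trial in range(5):
    # A hand-operated stopwatch can easily be off by a tenth of a second.
    measured_time = true_time + random.uniform(-0.1, 0.1)
    g_estimate = 2 * drop_height / measured_time ** 2
    print(f"trial {trial + 1}: g = {g_estimate:.1f} m/s^2")

The estimates scatter over a range about as wide as the one the class produced, without anyone doing anything wrong beyond timing by hand.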

The problem for our budding Scientismist is that virtually nothing outside of chemistry and (some of) physics is nearly as susceptible to repeatable experiment on demand. Even biology tends to be far less accommodating (though molecular biology is much closer to chemistry in this regard than the rest of biology is). Once you get beyond biology, things get much worse for the Scientismist; by the time you’re at things like morality, economics, crime & punishment, public decency, parenting and so forth, there aren’t any repeatable controlled experiments which you can (ethically) perform. And even if you were willing to perform unethical controlled experiments, the system involved is so complex that the very act of controlling the experiment (say, by raising a child inside of a box) affects the experiment. So what is the Scientismist to do?

What he should do, of course, is realize that Scientism is folly and give it up. The second best thing to do is to realize that (according to his theory) human beings live in near-complete ignorance and so he has nothing to say on any subject other than the hard sciences. What he actually does is declare all sorts of obviously non-scientific things to be science, and then accept them as knowledge. Which is to say, he makes Scientism livable. It’s neither rational nor honest, but it is inevitable. In this great clash of reality with his ideas, something has to give—and the least painful thing to give up is a rigorous criterion for what is and is not science.

Telling Reality From a Dream

“What if real life is actually a dream?”  is a favorite question of Modern philosophers and teenagers who want to sound deep. It’s a curious thought experiment, but in reality—that is, when we’re awake—we can all easily tell the difference between reality and a dream. But how? The answer is, I think, very simple, but also telling.

Thought experiments aside, we can tell reality from a dream because—to put it a little abstractly—reality contains so much more information than a dream does. Anything we care to focus on contains a wealth of detail which is immediately apparent to us. Whether it’s the threads in a blanket or the dust in the corner of the room or just the bumps in the paint on the drywall, reality has an inexhaustible amount of complexity and detail to it. And what’s more, it has this even in the parts we’re not focusing on. Our eyes take in a truly enormous amount of information that we don’t exactly notice and yet are aware of.

Dreams, by contrast, are very simple things. They do feel real while we are in them, but I think this comes from two primary causes. One is that we’re so caught up in the plot of our dream that we’re not paying enough attention to ask ourselves the simple question, “is this a dream?”

And I think that this is because dreams are natural to us. We often lose sight of this fact because dreams are involuntary and strange. But many things we do are involuntary, in the sense of sub-conscious; our breathing is mostly involuntary and our heartbeat always is. Our stomachs go on without our concentrating on them and our intestines wind our food through them whatever our conscious thoughts may be. Merely being involuntary does not make a thing unnatural. And since it is natural to us to dream, it is natural that we do not ordinarily try to escape our dreams. As with our other bodily functions, we ordinarily do what we’re supposed to do.

The other reason that dreams feel real to us is because our attention is so focused in a dream that we never consider the irrelevant details. If you ever try to call a dream back in your memory, though, you’ll notice that you can recall almost no detail in it—detail which was irrelevant at the time, I mean. The things in dreams only have properties where one is paying attention. The enormous amount of information we can see without paying attention to it is missing. This is also why they have a “dreamlike” quality to them—if we turn away then come back, they may not be the same because they stopped existing while we weren’t looking at them.

Dreams lack this stable, consistent, overwhelming amount of information in them precisely because they are our creations. We can’t create an amount of information so large that we can’t take it in.

And here we come to the fitting part: the difference in richness between reality and dreams shows what inadequate gods we are. Our creations are insubstantial, inconsistent wisps. We can tell reality from a dream at a glance because it only takes one glance at reality to know that we couldn’t have created what we’re looking at.

(Note: This is a heavily revised version of a previous post, Discerning Reality From a Dream.)

Discerning Reality From a Dream

“What if real life is actually a dream?”  is a favorite question of Modern philosophers and teenagers who want to sound deep. It’s a curious thought experiment, but in reality we can all easily tell the difference between reality and a dream. But how? The answer is, I think, very simple, but also telling.

Thought experiments aside, we can tell reality from a dream because—to put it a little abstractly—reality contains so much more information than a dream does. Anything we care to focus on contains a wealth of detail which is immediately apparent to us. Whether it’s the threads in a blanket or the dust in the corner of the room or just the bumps in the paint on the drywall, reality has an inexhaustible amount of complexity and detail to it.

Dreams, by contrast, are very simple things. They feel real only because we’re so caught up in the plot of our dream that we’re not paying enough attention to ask ourselves the simple question, “is this a dream?” But if you pay attention, dreams have almost no detail in them; the things in the dream only have properties where one is paying attention. This is also why they have a “dreamlike” quality to them—if we turn away then come back, they may not be the same because they stopped existing while we weren’t looking at them.

And here we come to the fitting part: the difference in richness between reality and dreams shows what inadequate gods we are. Our creations are insubstantial, inconsistent wisps. We can tell reality from a dream at a glance because it only takes one glance at reality to know that we couldn’t have created what we’re looking at.

UPDATE: I’ve rewritten and expanded this post in a way that makes its point clearer: Telling Reality From a Dream

The Problem With Outrage Quoting

I’m fairly careful to limit my intake of social media to people who say reasonable things. This is in part a survival strategy for Staying Sane on Social Media. However, this still leaves a fairly large vector for things which unbalance my mood and make me less effective at the main stuff I’m supposed to be doing: outrage quoting.

This is where a person who is themselves reasonable sees a very unreasonable thing, then quotes it to express their outrage at it. There’s also a variation on this where the person quotes it to make fun of it. The latter isn’t quite as bad as the former, but both do have the following problem: one is still being exposed to the crazy stuff one was trying to avoid.

Actually, it’s a bit worse than that—the people one follows are specifically filtering through the stuff from the unreasonable people to find the craziest stuff that they say. This can be extremely unbalancing to one’s state of mind. As I talked about in Social Media is Doomed, human beings aren’t designed to deal with a large number of strangers. We deal with people by acclimating to them, but it takes time and is harder the more different sorts of people we need to acclimate to. Even when we are careful to keep our reading to a set group of people to whom we’ve acclimated—there’s no requirement that these people agree with each other or with us, only that we’ve acclimated to them—outrage quoting constantly introduces new people to our notice who are saying crazy things that we haven’t acclimated to. This is extremely stressful to human beings.

Also, please note that I’m not talking about being exposed to new ideas as being stressful. There are some circumstances in which that can be stressful, but usually it’s quite manageable. I’m talking about running into expressions of ideas we’re not used to. Perhaps we know somebody who will say #KillAllMen and we’ve gotten used to this eccentricity. There is no new argument to be found in a person saying, instead, #CastrateAllMen (I made that up; who knows, perhaps I will have actually come up with an absurd example that the universe didn’t beat me to for once). But if we’re used to the former and not the latter, the latter will be far more stressful to run into. There’s a new person here, and people are complex. They’re also dangerous. A stress reaction to having to deal with a new person is actually entirely appropriate. The best-case scenario is that a big drain on your emotional energy is incoming.

Except that this being a one-off quote means that actually, a big drain on one’s emotional energy isn’t incoming because you don’t actually need to get used to this new person. You’re almost certainly never going to see them again. And therein lies one strategy to help mitigate the stress from encountering outrage quoting: focus on how this is a person you’ll never see again and how they don’t really matter.

I don’t have any other good suggestions, other than to be careful about people who do a lot of outrage quoting. But certainly I think the golden rule applies here: be very careful when quoting to make sure that one isn’t outrage quoting. For example, when I wrote a humorous blog post about that CNN article on cuckolding (CNN’s Love of Cuckolding), I started it off by explaining why it doesn’t matter and isn’t worth stressing over. And I’ve stopped myself from quoting outrageous things often enough that it’s now becoming a habit to not quote outrageous things. Still, it’s something I always keep in mind—if I’m quoting something, what effect will seeing that have on the people who read what I write?

We Live In Cycles

In C.S. Lewis’s The Screwtape Letters (if you haven’t read them, see the note at the bottom for context), he observed that human beings live according to cycles. It’s in Letter 8, near the beginning of the book:

Humans are amphibians—half spirit and half animal. (The Enemy’s determination to produce such a revolting hybrid was one of the things that determined Our Father to withdraw his support from Him.) As spirits they belong to the eternal world, but as animals they inhabit time. This means that while their spirit can be directed to an eternal object, their bodies, passions, and imaginations are in continual change, for to be in time means to change. Their nearest approach to constancy, therefore, is undulation—the repeated return to a level from which they repeatedly fall back, a series of troughs and peaks. If you had watched your patient carefully you would have seen this undulation in every department of his life—his interest in his work, his affection for his friends, his physical appetites, all go up and down. As long as he lives on earth, periods of emotional and bodily richness and liveliness will alternate with periods of numbness and poverty. The dryness and dullness through which your patient is now going are not, as you fondly suppose, your workmanship; they are merely a natural phenomenon which will do us no good unless you make a good use of it.

Our lives are lived according to many cycles, some independent, some interrelated. What Lewis refers to as troughs and peaks are actually the lining up of many troughs at the same time, or many peaks at the same time. What are these cycles?

There are some obvious cycles, like the diurnal cycle we live in every day (day/night). There are longer cycles, like weekly, monthly, and yearly cycles, too. Work weeks, weekends, pay days, construction seasons, busy season, and all sorts of other cycles affect us. But probably least well appreciated are feedback cycles.

It’s not uncommon when feeling well rested to make the mistake of staying up too late. If we do this a little bit we get progressively more exhausted during the days until we simply can’t do it and start getting enough sleep. Once we’ve gotten enough sleep, we’re ready to start getting too little sleep again.

Another common feedback cycle is the stress cycle. When we’ve got plenty of emotional energy, we tend to be more tolerant of people taking up our time and placing demands on us which consume a lot of emotional energy. More things on our to-do list, more leniency for people being annoying, more patience with people being rude or unappreciative. Lots of things can consume emotional energy, and we can deter them or allow them to consume more. The better we’re feeling, the more generous we tend to be. But as that continues, our surplus gets used up. Depending on what we tolerated, this might result in demands increasing past the rate at which we replenish emotional energy. This continues until we’re emotionally exhausted and start being defensive of our energy. This might result in simply turning things down, or it might result in bad temper. (Like all cycles, one deals with it best when one is realistic about it; letting oneself get pushed to complete exhaustion is a terrible idea because it makes us most likely to explode at small irritations.)

There are other feedback cycles in life, like entertainment versus unpaid work or spending time with friends versus solitude. They’re all around us, if we look for them. There’s value to identifying them, but life is complex enough that we also need to be able to recognize when there are cycles we don’t know about at work. Some days we just feel awful and if it’s the result of cycle troughs lining up, it may just be time to go to bed early and soon things will be better. Some days are great because of peaks lining up and it can be a good idea to take advantage of them rather than expect them to be the new normal. It’s also helpful to try to recognize the feedback loops and smooth them out—especially the troughs—by anticipating them and adjusting before things get too extreme.

We live tossed around in the waves. It’s a good idea to learn to surf instead of being tossed around, gasping for breath.

About The Screwtape Letters

The Screwtape Letters are written as a series of letters from the demon Screwtape to his “nephew”, the demon Wormwood. Wormwood is the demonic parody of a guardian angel, assigned to a human being to try to corrupt him and trick him into damning himself. We see only Screwtape’s side of the correspondence: letters of advice to his “nephew” on how to do his evil work. All of Screwtape’s letters are good advice on how to damn a soul; as such they are really advice on how to live well (in the sense of being upright or good) presented in what you might call photographic negative. What is good, Screwtape calls evil; what is evil, Screwtape calls good. But that’s true in all cases, so one very easily learns the habit of just flipping everything around.

Reading the book—which is excellent, and I highly recommend—is an interesting experience. Probably the closest analogy I can come to is honestly examining one’s conscience for faults with the intention of improving.

Facebook Had a Bad Year

Having recently talked about how Social Media is Doomed and Another Perspective on Facebook as Social Poison, I just saw this article: 2017 Was a Bad Year For Facebook, 2018 Will Be Worse.

The article is mostly about taxation, but it does mention this:

Facebook has reacted nervously to Palihapitiya’s accusations, saying he hadn’t worked at the company for a long time (he left in 2011) and wasn’t aware of Facebook’s recent initiatives. But I can’t see any practical manifestations of these efforts as a user who has drastically cut back on social networking this year for the very reasons cited by Parker and Palihapitiya.

To outsiders and regulators, Facebook looks like a dangerous provider of instant gratification in a space suddenly vital to the health of society. It’s also making abuse and aggression too easy — something the U.K. Committee on Standards in Public Life pointed out in a report published on Wednesday. Sounding one of the loudest alarm bells on social media yet, the panel urged the prime minister to back legislation to “shift the balance of liability for illegal content to the social media companies.”

The article also talks about concerns related to targeted advertising.

I haven’t talked about targeted advertising, but its problems are partially related to the problems of push-based social media. One part of targeted advertising is only showing advertisements to people who might want to see them. This is a net positive for all involved, since irrelevant advertisements are just a waste of everyone’s time. The part that’s about figuring out how to manipulate people into buying things they don’t think are a good idea, though, is far worse. It’s also related to the fundamental problem of push-based media because it’s trying to get around the adaptations people made to their environment in order to live in peace with it. Unfortunately for the advertiser, those adaptations involve a great deal of not buying things; hence the temptation on the part of advertisers to upset the balance which the viewer has constructed for himself.

I’d like to reiterate that my point is not that social media is evil, but rather that the push-based social media as we know it today is fundamentally flawed for human use; this makes changes to it inevitable. What form those changes take is less clear, but they are certainly coming.

Whence Comes the Book?

I read a curious article by a fan of The Mists of Avalon about her reaction to learning that the author of the book, Marion Zimmer Bradley, (allegedly) sexually abused her own daughter and other children. It’s curious because of the degree to which it regards the author’s indulging in astounding amounts of sexual evil as if it were simply a ritual impurity, rather than as something which might be woven into the book itself. A book which, by the reader’s own admission, was very unlike anything else:

I still cannot imagine anything more perfectly aligned with my thirteen-year-old sensibilities than Marion Zimmer Bradley’s masterpiece. Bradley opened my eyes to the idea that, when we look at the past, we are only ever seeing a small part of it — and usually, what we are seeing excludes the experiences of women. Encountering the vain, self-serving, diabolical Morgan le Fay transformed into the priestess Morgaine compelled me to question other received narratives in which women are to blame for the failures of men. The Mists of Avalon also gave me a glimpse of spiritual possibilities beyond male-dominated, male-defined religions. In retrospect, I can see that it gave me ways of seeing that helped me find the feminine even within patriarchal systems while studying religion as an undergrad. The impact of this book lingers in my feminism, certainly, but it also influenced my scholarly interest in folklore, and it still informs my personal spirituality.

And this is her analysis of the book in light of the revelations about the author:

The sexual act described [above] takes place around the Beltane fire. As a young reader, I was disturbed by it, but I saw it as a description of people who have passed beyond the normal world and into the sacred time of a fertility ritual. The scene was frightening for me as a child, and repellent, but also, I must admit, fascinating. In context, this passage made sense: The horror of the scene was an element of its power. And that was all I found. Everything I had always loved about the book was still there, and I didn’t find anything new to hate. So, what was I going to do with this book?

And finally, here is her conclusion:

So, what to do with this once-beloved book? I’ve read it once since Greyland spoke out, and I don’t know if I will read it again. Probably not, I’m guessing. Discovering that powerful men are predators is disturbing, but not surprising. Learning that the author who introduced me to feminine spirituality and the hidden side of history abused children — girls and boys, her own daughter — was horrifying in an existential kind of way. I’m a writer and an editor and I know that characters can exceed their creators. I would go so far as to say that that’s the goal. So I can keep Morgaine — what she has meant to me, what she has become in my personal mythology — while I reject Bradley.

This is a common thing I see in the modern world: assuming that all propositions stand alone, unconnected from all others, as if truth were not things fitting into each other but a butterfly collection of unconnected facts.

This woman never asks herself whether the book’s teaching her to “question other received narratives in which women are to blame for the failures of men” is just Bradley trying to escape the blame for her own evil, projected outward. If people who don’t rape their own (and other people’s) children generally teach taking responsibility for one’s own wrongs, while a rapist teaches how to shuffle the blame off onto others, perhaps the right course of action is not to keep the lesson that you should always shuffle the blame onto others.

Virtue is not a simple thing. Virtue is required for people to live together. Virtue is required for people to live together with everything, in fact, even nature. Virtue is what places us into a right relationship with the hierarchy of being. Evil people reject the hierarchy of being; they substitute their own for the real one. At the extremes you have Satan’s non serviam—I will not serve. The more vicious an author is, the more one expects this to permeate every aspect of their being, because the fundamental solipsism of their orientation to the world cannot but touch on every interaction they have with the world. To learn life lessons from the book of a thoroughly wicked man is a fool’s errand; such an author will be right only by accident. And since he will be right only by accident, his effort will not have gone into making the truth attractive.

In short, if you’re going to sell your soul to the devil, don’t do it in exchange for wisdom.

The Origin of Rights

In the aftermath of the Enlightenment, which emphasized the rights of man, the fact that a world which thinks only of rights will fall apart is something of a problem. But the Enlightenment gives no framework for reconciling rights and responsibilities, which has left many people very unsure of how to do so. It’s actually quite simple as long as you look at the problem in the right way. The key to the whole mess is that rights come from responsibilities.

Obviously rights come from God, since all things come from God, but they don’t come directly from God. The most proximal intermediary in giving human beings rights is the responsibilities that they were given. Whatever a man has a responsibility to do, he has a right to do.

Consider, for example, feeding himself. A man has a responsibility to feed himself. Because of this, he has a right to the things intrinsically necessary to do it, such as the right to own property with which to get food for himself, the right to do the labor necessary to procure food, and so on.

Now, it is important to distinguish what is intrinsically necessary to fulfill a responsibility from what may be accidentally necessary. If I don’t happen to have any bread on hand, that doesn’t automatically give me a right to your bread, because it is an accident of circumstances that you have bread on hand while I don’t. A responsibility conveys the rights that anyone would need in order to fulfill it, not what would be necessary only for one person in some particular moment.

And this is the origin of all rights. Parental rights originate from the parental responsibility to care for one’s child. Speech rights originate from the responsibility to tell the truth. Religious rights originate from the duty to worship God.

Once you look at rights this way, the problem of reconciling them with responsibilities—or of reconciling conflicting rights—becomes a non-issue. Responsibilities exist in a hierarchy, so whenever a right and a responsibility conflict, or two rights conflict, one merely has to look at the responsibilities from which the rights derive and always fulfill the more important responsibility over the less important one.

This also very neatly solves the problem of how to strongly defend rights without becoming a libertine.

Why Is Determinism Attractive?

I used to assume that people believed in determinism (that human beings do not have free will) merely as a consequence of materialism, and that they weren’t really invested in it. More recently, however, I’ve come to suspect that it is determinism which they are primarily attracted to, and that atheism is a way to achieve that determinism. (Not so explicitly, of course.)

One strong reason I suspect this is that we have direct, unequivocal experience of free will. If there wasn’t a strong attraction to determinism, this experience would render anything which contradicted free will simply unbelievable. (And for many people, it does just that.) So there must be some deeply compelling reason to want to disbelieve in free will. What can it be?

Before I answer that question, I want to note that there have been several belief systems which denied free will, since there is a hint to the answer in that fact. Hinduism is varied, but at least according to the Hindu philosophers the monism of everything being God leaves no room for individual free will. Free will implies the existence of sin, but since everything is God nothing can be sin. (Ordinary Hindus probably do believe in free will, I should note.) Buddhism does not believe in free will, which is just one of its many contradictions. (By Buddhism I mean the original Buddhism of Siddhartha Gautama, which was a reaction against his failure to achieve happiness as a Hindu yogi; I’m not talking about more modern, often syncretic Buddhisms.) And very interestingly, Martin Luther didn’t believe in free will either. In fact he wrote a whole book about how there’s no such thing as free will. (On the Bondage of the Will. It’s a terrible book.)

Now, what do all these things have in common, and what do they have in common with materialism? They are all reductionist systems. They all posit that reality is less than it seems, in some manner or other. But curiously only two of them are atheistic; the other two are theistic. This suggests that what people really object to is not God, but other people. And indeed, that makes sense in reductionist systems. People are messy. There are so many of them, and if they’re free they’re not explicable by a small number of easily understood rules.

To be content with understanding the universe but not being able to comprehend it (that is, to stand in right intellectual relationship to it but not to be able to fit it inside of one’s head) requires humility, and more than anything it requires trust. Trusting God, specifically (which seems to me to have been Martin Luther’s big hangup). So I suspect something like the following rule is the case:

Those who cannot trust God cannot deal with the existence of their fellow men, and will seek some philosophical means of getting rid of their fellow men’s importance.

In practice, the really thorny part of one’s fellow human beings is their free will. Thus to any such creature who finds trust in God to be impossible, determinism will have a huge appeal.

(As a post-script, I should note that reducing men to their base instincts is merely a less rigorous way of accomplishing the same denial of free will; wherever you find a man who reduces all men’s actions to greed or lust, you have found a man who doesn’t trust God.)

Admitting One’s Weird

In an interesting essay which I suggest reading, Ed Latimore gave “5 Lessons From Growing Up in the Hood.” One of them in particular caught my eye:

1. Good manners go a long way.

I fought a lot as a kid. That’s just par for the course growing up in the hood. I would have fought a lot more if it wasn’t for one simple phrase: “My bad.” For those of you that don’t speak hood, “My bad” is the equivalent of saying “I’m sorry.”

You bump somebody in a crowd? ‘My bad’ goes a long way. Step on someone’s foot on a crowded bus? Dude might get mad, but you can cool it quick by just saying ‘My bad.’ Say something a little too offensive that gets guys in the mood to fight? Just say ‘My bad’ and dial it down. It’s amazing what an apology can do to cool tempers in the hood.

I didn’t grow up in the hood, nor even particularly close to it, but I found the same thing applies to situations with much lower stakes: being willing to admit error where one can truthfully do so goes a long way to smoothing out human interactions. And the curious thing is that where one is telling the truth in admitting error, most people are very willing to accept that and move on. People, by and large, don’t tolerate affronts to their dignity, but they are very willing to tolerate other people’s human imperfection where it is acknowledged as such and where a person is willing to put in the work to make things right afterwards.

This applies quite a lot in the context of business. If one makes a mistake in a professional setting, simply admitting it in a straightforward way tends to turn such mistakes into a non-issue. Professionals are there to earn money, which they do by solving problems. Co-workers’ mistakes are just one more problem to solve. This can of course become excessive to the point where you are causing more problems than you are solving, but if that’s the case you’re probably a bad fit for your job and should move on for everyone’s sake. But where you are competent at your job, people just don’t really care deeply about the occasional mistake, and if you own up to it, there’s nothing left to talk about, so people just move on.

And it’s that last part that I want to talk about in another context. Most people are weird but hide it; and most people are made very uncomfortable by other people being different (which is just another way of saying that they’re weird). At its root this comes from a tribal instinct; it is not good for man to be alone—and we know it. Differences make us fear rejection, though a little bit of life experience and sense teaches us which differences matter and which don’t. But sense is surprisingly uncommon and learning from life experiences is—for quite possibly related reasons—similarly rare. So a great many people fear whatever is different from them. This can be people who look different but I think it’s far more common to be afraid of people who act differently. And one thing people do when they’re uncomfortable is talk about it.

And this is where admitting that one is weird can be a very useful strategy. To give a concrete example, I shoot an 80# bow. (For a long time it was actually 82#, but string creep eventually set in and for some reason they couldn’t get it back up.) That’s pretty uncommon these days, especially for someone with a 30″ draw length. Most men shoot a bow somewhere in the range of 55#-70# (women tend to shoot in the 35#-50# range). You’d think that an 80# bow wouldn’t seem that odd to people shooting a 70# bow, but for reasons relating to how many reps you can do in weight-lifting being a function of how close you are to your one-rep max, it actually is a pretty big jump for a lot of people. They could draw the bow, but only a few times an hour. I’m not that strong, but I’m a relatively big guy (6′ tall, over 200 lbs) and so I can comfortably shoot my bow for an hour or two at a stretch without losing more accuracy than if I were shooting a 70# or a 60# bow (really the main thing affecting accuracy is that your shoulders get tired of holding the bow up at arm’s length). So it’s a very reasonable thing for me, personally, to do, but it’s pretty odd among people at the archery shop I go to. And moreover it’s not really necessary. Where I live the only common big game is whitetail deer, and you can reliably kill a whitetail with a 40# bow if you’ve got a good broadhead/arrow setup and are a good shot. I do it because I like it, and because it acts like insurance. With the double-edge single-bevel broadheads I use on top of 0.175″ deflection tapered carbon fiber arrows, the whole thing weighing 715 grains, shot from an 80# bow, if I make a bad shot and hit the large bones my arrow will most likely go right through and kill the animal anyway. And I could use the same setup for hunting moose or buffalo without modification, should I ever get the opportunity. (That would fill the freezer with meat in one shot!)
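
For a rough sense of why those ten pounds are such a big jump, here is a back-of-the-envelope sketch (my own illustration, not from the original post) using the Brzycki one-rep-max formula from weight training. The 85# maximum draw is a made-up number, and drawing a bow is of course not exactly a barbell lift:

    # Brzycki's formula estimates a one-rep max as 1RM = w * 36 / (37 - reps).
    # Solved for reps, it says how many times you can manage a weight w
    # given a one-rep max m.  (A gym rule of thumb, applied here by analogy.)
    def reps_at(weight, one_rep_max):
        return 37 - 36 * (weight / one_rep_max)

    # A hypothetical archer whose maximum single draw is 85#:
    print(round(reps_at(70, 85), 1))  # ~7.4 comfortable draws in a row
    print(round(reps_at(80, 85), 1))  # ~3.1 -- a far bigger jump than 10# suggests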

So, as you can see, from my perspective this is a reasonable thing to do. But from most everyone else’s perspective, it’s weird. And moreover, it’s more than most men at the archery shop I go to can do. Some people there can’t even draw my bow, and many who could would find the strain too much to do more than a few times. It would be easy for people to suspect that I look down on them as lesser because of it, and to reject me in self-defense. If someone you respect looks down on you, it’s painful. If someone you reject as mentally deranged looks down on you, it’s irrelevant.

So when people make jokes about me/my bow being atypical, I go along with it. I will cheerfully admit that I’m engaging in massive overkill; I will joke along with them about the way deer are wearing bullet-proof vests these days. (My setup could probably go through a lighter bullet-proof vest, since broadheads are razor sharp and can cut through kevlar. It has zero chance against the sort of vest with ceramic plates in it.) If someone characterizes me as crazy, I smile and say, “nuts, but I like it.” And in general the joking lasts for a minute, then is forgotten about, and things are normal. This is, I think, for two reasons:

  1. I have signaled that I know I am abnormal and am happy with the status of being abnormal. I am clearly indicating that I am not the standard against which others should be measured so I am no threat to anyone’s social standing or sense of self.
  2. It smothers the impulse to joke about me, in the sense of taking the air away from a flame. If you say that someone’s crazy and he smiles and says, “certifiable,” you just don’t have anywhere to go. Joking/teasing requires a difference of opinion. If someone agrees with you, there’s nothing left to say since a man looks like an ass if all he does is repeat himself.

Of course, this does depend on the content of what’s being said about me being something which I can agree with. In this example, “crazy” just means “abnormal,” which is quite true. If someone were to accuse me of being a criminal I would defend myself, not agree with them. The point is not to be a carpet for people to walk on but rather to learn how to pick one’s battles and only fight the ones that need to be fought. That’s a general principle of skill, by the way; skill consists in applying the right amount of force to the right place to generate the best results. A lack of skill wastes force first in applying it to the wrong place and so needing far more force to achieve the desired result, and then in needing to apply more force to correct the problems caused by having applied force to the wrong place. That’s as true of picking one’s battles as it is of swing dancing or balancing in ice skating. Or, for that matter, archery; missing the target in archery often means that you have to spend a lot of effort to pull your arrow out of a tree.

Beauty and the Beholder

In a recent blog post, John C Wright discusses beauty and where it is located. Now, there is nothing wrong with what he says, but I do submit that he ties the meaning of a person who says the words too closely to the words themselves. You can see a similar thing when people criticize doing evil that good may come of it with the words, “the ends don’t justify the means.” Taken literally, this is nonsense. Of course means are justified by ends, because nothing else can justify means. There is no such thing as a self-justifying means. Pushing a sharpened piece of steel into somebody’s body is justified or not entirely on the basis of why you’re doing it. Is it to shiv him in revenge for a minor transgression? Or are you a surgeon cutting out a cancer to save his life? Plunging the metal into him is merely a means, and as such must find its justification in the ends for which it is used. What people really mean, of course, is either “this end doesn’t justify those means” or, more commonly, “remote ends do not justify proximate, intermediate ends.” But that’s less catchy, so you can see why people don’t say what they literally mean. Plus most people would have to look up what proximate means (in the proximity of, i.e. right next to; approximate, by analogy, means an estimate—not right next to).

I submit that the same thing applies to the phrase, “beauty is in the eye of the beholder.” Taken literally, it is as Mr. Wright says a denial of beauty as a concept. But many if not most people do not mean it literally. (I’m on relatively safe ground here; when discussing anything less practical than passing the salt at dinner, people rarely mean what they say literally.) There is a real thing which is being described, which is that beauty is a direct perception of the goodness of God as reflected by the goodness in creation, and each person is given a different (if largely overlapping) perspective on the goodness of God, and hence what precise goodness each man is able to perceive does vary. Thus when beholding any particular beautiful thing, one man may see the goodness of God revealed clearly in it because it matches what he was made to see, while another may see it only dimly because he was given something else to see clearly. To generalize, there are those who like roller coasters and in them appreciate the power of God in velocity and turning; this is an aspect of God’s goodness I see only dimly, while I appreciate the stillness of a forest and the loudness of leaves falling to the ground in it quite a lot. Now, my inability to perceive God’s goodness in the rush of the roller coaster does not mean that it is not there, any more than a deaf man’s inability to hear the beauty in Mozart’s music means that it is not beautiful.

It is quite wrong to say that beauty is in the eye of the beholder, but it is quite accurate to say that perceiving beauty depends on the eye of the beholder. But the second phrase is harder on the ear, and when it comes to expressing truths most people are far more traditionalists than they are philosophers, and those of us who are capable of saying what we mean should always look out in charity for those who are not. On the other hand, it is always good to give people who misuse common phrases a (metaphorical) hard slap upside the head to try to bring them to their senses, which I think is what Mr. Wright intends. So please take this post as an elaboration on the subject Mr. Wright is speaking about, and not a contention with Mr. Wright’s post.

The Probability of Theology

This is the script to my video, The Probability of Theology.

As always, it was written (by me) for me to read aloud, but it should be pretty readable.

Today I’m going to be answering a question I got from the nephew of a friend of mine from the local Chesterton society. He’s a bright young man who was (I believe) raised without any religion, has been introduced by his aunt to some real, adult theology, and has the intellectual integrity to seriously consider it until he can see how it’s either true or definitely wrong. Here’s his question:

I am an atheist, mostly due to a few primary objections I have with religion in general, the most prominent of which is that since there are infinite possible theologies, all with the same likelihood of being true, the probability of one single man-made theology such as Christianity, Judaism, or Islam being true is approximately zero. My aunt … is quite convinced that you can prove this idea false [and] we are both hoping that you could make a … video about this on your channel, if possible. We will be eagerly awaiting your response.

This is an excellent example of how it’s possible to ask in a few words a question which takes many pages to answer. I will attempt to be brief, but there’s a lot to unpack here, so buckle up, because it’s going to be quite a ride.

The first thing I think we need to look at is the idea of a man-made theology. And in fact there are two very distinct ideas in this, which we need to address separately. First is the concept of knowledge, which as I’ve alluded to in previous videos was hacked into an almost unrecognizable form in the Enlightenment. Originally, knowledge meant the conformity of the mind to reality, and though in no small part mediated by the senses, nonetheless knowledge was understood to be a relatively direct thing. In knowledge, the mind genuinely came in contact with the world. All this changed in the aftermath of Modern Philosophy. It would take too long to give a history of it, so the short version is: blame Descartes and Kant. But the upshot is that the modern conception of knowledge is at best indirect and at worst nothing at all; knowledge—to the degree it’s even thought possible—is supposed to consist of creating mental models with one’s imagination and trying to find out whether they correlate with reality and, if so, to what degree. Thus there is, in the modern concept of “knowledge”—the scare quotes are essential—a complete disconnect between the mind and the world. The mind is trapped inside of the skull and cannot get out; it can only look through some dirty windows and make guesses.

This approach of making guesses and attempting (where practical) to verify them has worked well in the physical sciences, though both the degree to which it has worked and the degree to which this is even how physical science is typically carried on are somewhat exaggerated. But outside of the physical sciences it has largely proved a failure. One need only look at the “soft sciences” to see that they are often just story-telling that borrows authority by dressing up like physics. It is an unmitigated disaster if it’s ever applied to ordinary life: to friends and family, to listening to music and telling jokes.

There have been a few theologies which have been man-made in this modern sense; that is, created out of someone’s imagination then compared against reality—the deism that conceives of God as winding a clock and letting it go comes to mind—but this is quite atypical, and really only exists as a degeneration of a previous theology. Most theologies describe reality in the older sense; descriptively, not creatively. It is true that many of them use stories which are not literally true in order to convey important but difficult truths narratively. This is because anyone who wants to be understood—by more than a few gifted philosophers—communicates important truths as narratives. Comparatively speaking, it doesn’t matter at all whether George Washington admitted to cutting down a cherry tree because he could not tell a lie; the story conveys the idea that telling the truth is a better thing than avoiding the consequences of one’s actions, and that lesson is very true. It may well be that there was never a boy who cried “wolf!” for fun until people didn’t believe him; it’s quite possible no one was ever eaten by a wolf because he had sounded too many false alarms to be believed when he sounded a real one. But none of that matters, because it is very true that it is a terrible idea to sound false alarms, and that sounding false alarms makes true alarms less likely to be believed.

None of these are theories someone made up then tested; they are knowledge of real life which is communicated through stories which are made up for the sake of clarity. And so it is with the mythology of religions. Even where they are not literally true, they are describing something true which people have encountered. I am not, of course, saying that this is what all religion is, but all religions do have this as an element, because all religions attempt to make deep truths known to simple people. So when considering anything from any religion, the first and most important question to ask about it is: what do the adherents mean by it? This is where fundamentalists of all stripes—theistic and atheistic alike—go wrong. They only ever ask what they themselves mean by what the adherents of a religion say.

So this is the first thing we must get clear: theologies are not man-made in the sense of having been created out of a man’s imagination. They are not all equally correct, of course; some theologies have far more truth in them than others, but all have some truth, and the real question about any religion is: what are the truths that it is trying to describe? Christianity describes far more truth than Buddhism does, but Buddhism is popular precisely because it does describe some truths: the world is not simply what it appears at first glance; the more we try to live according to the world the more entangled in it we get and the worse off we are; and by learning to be detached from the world we can improve our lot. It is not the case—as many Buddhisms hold—that we must reject the world outright; we need a proper relationship to it, which Saint Francis captured in his Canticle of the Sun. The world is our sibling, neither our master nor our slave. And so it goes with all religions: they are all right about at least something, because the only reason any of them existed at all was because somebody discovered something profoundly true about the world. (Pastafarianism being the exception which proves the rule; the flying spaghetti monster is a joke precisely because it was simply made up and does not embody anything true about the world. Even the Invisible Pink Unicorn falls short of this; it embodies the truth that some people don’t understand what mysteries actually are.)

The second thing we must address in the man-made part of “man-made theologies” is that—at least according to them—not all theologies are made by man, even in the more ancient sense of originating in human knowledge. The theology of Christianity originated with God, not with man. Christian theology is primarily the self-revelation of God to man. And we have every reason to believe that God would be entirely correct about Himself.

Now of course I can hear a throng of atheists screaming as one, “but how do you know that’s true?!? You didn’t hear God say it, all you’ve heard is people repeating what they say God said.” Actually, these days, they’re more likely to say, “where’s your evidence”, or accuse me of committing logical fallacies that I can’t be committing, and that they can’t even correctly define, but for the sake of time let’s pretend that only top-tier atheists watch my videos.

Oh what a nice world that would be.

Anyway, this gets to a mistake I’ve seen a lot of atheists make: evaluating religious claims on the assumption that they’re false. There’s a related example which is a bit clearer, so I’m going to give that example, then come back and show how the same thing applies here. There are people who question the validity of scripture on the basis of copying errors. “In two thousand years the texts were copied and recopied so many times we have no way of knowing what the originals said,” sums it up enough for the moment. This objection assumes that the rate of copying errors in the gospels is the same as for all other ancient documents. Actually, it also exaggerates the rate of copying errors on ancient documents, but that’s beside the point. It is reasonable enough to assume that the rate of copying errors in Christian scriptures does not greatly differ from that of other documents, if Christianity is false. Well, actually, even that is iffy since a document people hold in special reverence may get special care even if that reverence is mistaken, but forget about that for now. If Christianity is true, the gospels are not an ordinary document. They are an important part of God’s plan of salvation for us, which he entrusted to a church he personally founded and has carefully looked over throughout time, guarding it from error. In that circumstance, it would be absurd to suppose that copying errors would distort the meaning of the text despite the power of God preventing that from happening. Thus it is clear that the rate of copying errors is not a question which is independent of the truth of Christianity, and therefore a presumed rate of copying errors cannot be used as an argument against the truth of Christianity precisely because whatever rate is presumed will contain in it an assumption of the truth or falsehood of Christianity. (I should point out that what we would expect—and what the Church claims—is that God would safeguard the meaningful truth of revelation, not the insignificant details. That is, we would expect that if Christianity was true God would keep significant errors from predominating, not that he would turn scribes into photocopying machines—within Christianity God places a great deal of emphasis on free will and human cooperation. And as it happens, we have some very old copies of the gospels and while there have been the occasional copying errors, none of them have amounted to a doctrinally significant difference. Make of that what you will.)

So bringing this example back to the original point, whether Christian theology is man-made is not a question which is independent of the question of whether Christianity is true. If Christianity is false, then its theology is man-made. But if Christianity is true, then its theology is not man-made, but revealed. And as I said, while men often make mistakes, we can trust God to accurately describe himself.

So, to recap: theology is descriptive, not constructive, and in historically-based religions like Christianity, theology is revealed, not man-made. So now we can move onto the question of probabilities.

First, there is the issue that probability says nothing about one-offs. I covered this in my video The Problem with Probability, so I won’t go into that here, but since I’ve heard the objection that I only discussed the frequentist interpretation of probability, I will mention that if you want to go with a Bayesian interpretation of probability, all you’re saying by assigning a probability of zero to an event is that it’s not part of your model. Now in the question we’re addressing, it’s not a probability of zero that’s being assigned but rather “approximately zero.” But the thing about the Bayesian interpretation is that probability is at least as much a description of the statistician as it is of the real world. It is, essentially, a way to quantify how little you know. Now, sometimes you have to make decisions and take actions with whatever knowledge you have at the moment, but often the correct thing to do is: learn. There is no interpretation of statistics which turns ignorance into knowledge; in Bayesian terms, the way to get better priors is outside of the scope of Bayesian statistics.
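
To make the zero-prior point concrete, here is a minimal sketch of Bayesian updating (my own illustration; the numbers are arbitrary). A hypothesis assigned a prior of exactly zero stays at zero no matter what evidence arrives, which is what it means to say the event is not part of your model:

    # Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / evidence

    # Strong evidence (99% likely if H is true, 1% if not) cannot revive
    # a prior of exactly zero:
    print(posterior(0.0, 0.99, 0.01))    # 0.0, however many times you update

    # A small but nonzero prior, by contrast, climbs quickly:
    p = 0.001
    for _ in range(3):
        p = posterior(p, 0.99, 0.01)     # ~0.09, then ~0.91, then ~0.999
    print(p)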

But more importantly, this atomization of theologies is very misleading. Among all of the possible theologies, many of them have a great deal in common. They do not have everything important in common, obviously. There are some very substantial differences between, say, Greek Orthodoxy and, say, Theravada Buddhism. But for all their differences, Islam, Christianity, Judaism, Baha’i, Sikhism, and several others have quite a lot in common. They all worship the uncreated creator of all that is. That’s actually a pretty big thing, which is to say that it’s very important. An uncreated creator who transcends time and space has all sorts of implications for the coherency of contingent beings within time (such as ourselves), the existence of a transcendent meaning to life, and lots of other things. This is in contrast to things that don’t matter much, like whether there is an Angel who has a scroll with all of the names of the blessed written on it. Whether there is one or isn’t doesn’t really matter very much. Grouping those two distinctions together as if they were of equal importance is highly misleading. Now, granted, there are all too many people who take a tribalistic, all-or-nothing approach to religion where the key thing is to pick the right group to formally pledge allegiance to. But one of the things which follows from belief in an uncreated creator is that this primitive, tribalistic approach is a human invention which is not an accurate description of reality. An uncreated creator cannot need us nor benefit from us, so he must have created us for our own sake, and so our salvation must be primarily not about something superficial like a formal pledge of allegiance, but about truth and goodness. And by goodness I mean conformity of action to the fullness of truth. For more on this, I’ll link my video debunking Believe-or-Burn, but for the moment, suffice it to say that being fairly correct, theologically, must be of some greater-than-zero value under any coherent theology with an uncreated creator behind all that exists. The correct approach is not to give up if you can’t be completely correct. It’s to try to be as correct as possible.

And in any event there is no default position. Atheism is as much a philosophical position as any theology is. Well, that’s not strictly true. There is a default position, which is that there is Nothing. But that’s clearly wrong—there is something—so the default position is out. And while in a dictionary sense atheism is nothing but the disbelief in God—or, it doesn’t even matter for the moment if you’re too intellectually weak for that and want to define atheism as the mere lack of a belief in God—western atheists tend to believe in the existence of matter, at least, as well as immaterial things like forces and laws of nature. So each atheist has a belief system, even if some refuse to admit it. The only way to not have a belief system is to give yourself a lobotomy. But until you do, since you have a belief system, it is as capable of being wrong as any theology is. And does it seem plausible that, if Christianity is true but the version of Christianity you’ve encountered is a little inaccurate, you’ll be better off as an atheist?

I think that nearly answers the question, but there is a final topic which I think may answer an implicit part of the question: while there are infinitely many theologies which are theoretically possible, in practice there haven’t actually been all that many. This is something I’m going to cover more in my upcoming video series which surveys the world’s religions, but while there certainly is more than just one religion in the world, there aren’t nearly as many as many modern western people seem to think that there are. Usually large numbers are arrived at by counting every pagan pantheon as a different religion, but this is not in fact how the pagans themselves thought of things. I don’t have the time to go into it—I addressed this somewhat in my video on fundamentalists, and will address it more in the future—but actual pagans thought of themselves as sharing a religion, just having some different gods and some different names for the same gods, just like French and American zoos don’t have all the same animals, and don’t use the same names for the animals they do have in common. But each will certainly recognize the other as a zoo. American zookeepers do not disbelieve in the French “python réticulé.”

And so it goes with other differences; those who worship nature worship the same nature. All sun worshippers worship the same sun. Those who believe in an uncreated creator recognize that others who believe in an uncreated creator are talking about the same thing, and generally hold that he can be known to some degree through examination of his creation, so they will tend to understand others who believe in an uncreated creator as having stumbled into the same basic knowledge.

And this explains why minor religions tend to die out as small groups make contact with larger groups. Those religions which are more thoroughly developed—which present more truth in an intelligible way—will appeal to those who on their own only developed a very rudimentary recognition and expression of those truths. There has been conversion by the sword in history, though it is actually most associated with Islam and often exaggerated in other faiths, but it is not generally necessary. When people come into contact with a religion which has a fuller expression of truth than the one they grew up with, they usually want to convert, because people naturally want the truth, and are attracted to intelligible expressions of it. And the key point is that the expressions of truth in better developed religions are intelligible precisely because they are fuller expressions of truths already found in one’s native religion. And this is so because religions are founded for a reason. I know there’s a common myth that religion was invented as bad science, usually something to the effect that people invented gods of nature in order to make nature seem intelligible. The fact that this is exactly backwards from what personifying inanimate objects does should be a sufficient clue that this is not the origin of religion. Think about the objects in your own life that people personify: “the printer is touchy,” “the traffic light hates me,” “don’t let the plant hear that I said it’s doing well because it will die on me out of spite.” Mostly this is just giving voice to our bewilderment at how these things work, but if this affects how mysterious the things are in any way, it makes them more mysterious, not less. If you think the printer is picky about what it prints, you’ll wonder at great length what it is about your documents it disapproves of. If you think of it as a mere machine, you turn it off, take it apart, put it back together again, and turn it on. Or you call a repairman. But if you personify it, you’ll wrap your life up in the mystery of its preferences. And anyone with any great experience of human beings has seen this. Especially if you’ve ever been the repairman to whom the printer is just a machine.

It’s also, incidentally, why many atheists have developed a shadowy, mysterious thing called “religion” which desires to subjugate humanity.

People personify what they don’t understand to communicate that it is mysterious, not to make it less mysterious. And they do this because people—having free will—are inherently and irreducibly mysterious.

So if you look past the mere surface differences, you will find that religions have generally originated for very similar reasons. So much so that more than a few people who haven’t studied the world’s religions enough are tempted to claim that there is only one universal religion to all of mankind with all differences being mere surface appearance. That’s not true either, but that this mistake is possible at all, is significant. Religions are founded for a reason, and that’s why there aren’t infinitely many of them.

Until next time, may you hit everything you aim at.

Authority Figures in Movies

One of the curious things about the roles of authority figures in movies is that they are very rarely played by people who have ever had any authority. One might think that this wouldn’t have much of an impact, since the actors are just reciting dialog which other people wrote. (People who most of the time haven’t had any authority themselves, but that’s a somewhat separate matter.) And in the end, authority is the ability to use force to compel people, so does it matter much what mannerisms an actor uses?

Actually, yes, because in fact a great deal of authority, in practice, is about using social skills to get people to cooperate without having to use one’s authority. And a great deal of social skill is body language, tone of voice, emphasis, and pacing. Kind of like the famous advice given by Dalton in Road House.

For some reason, authority figures are usually portrayed as grim and stern—at this point I think because it’s a shorthand so you can tell who is who—but there is a great deal which can be accomplished by smiling. There’s an odd idea that many people seem to have that smiling is only sincere if it is an instinctual, uncontrollable reaction. I’ve no idea where this crazy notion came from, but in fact smiling is primarily a form of communication. It communicates that one is not (immediately) a threat, that (in the moment) one intends cooperation, that the order of the moment is conversation rather than action. Like all communication it can of course be a lie, but the solution to that is very simple: don’t lie with your smile. Words can be lies, but the solution is not to refrain from speaking unless you can’t help yourself; it’s to tell the truth when one opens one’s mouth. So tell the truth when you smile with your mouth, too. And since actions are choices, one very viable option, if you smile at someone, is to follow through and (in the moment) be nice.

Anyone (sane) who has a dog knows that in many ways they’re terrible creatures. They steal your food, destroy everyday items, throw up on your floor when they’ve eaten things that aren’t food, get dog hair everywhere, and make your couches stink of dog. And yet, people love dogs who do these things to them for a very simple reason: any time you come home, your dog smiles at you and wags its tail and is glad to see you. And it’s human nature that it’s impossible to be angry at someone who is just so gosh darned happy that you’re in the same room as them.

People in authority are rarely there because they have a history of failure and incompetence at dealing with people; it may be a convenient movie shorthand that people in authority are stone-faced, grumpy, and stern, but in real life people in positions of authority are generally friendly. It’s easy to read too much into that friendliness, of course—they’re only friendly so long as you stay on the right side of what you’re supposed to be doing—but this unrealistic movie shorthand makes for far less interesting characters.

And I suppose I should note that there are some people in positions of authority who are often stone-faced and grim, but these are usually the people responsible for administering discipline to those already known to be transgressors. This is especially true of those dealing with children, who have little self control and less of a grasp of the gravity of most situations they’re in, and who need all the help they can get in realizing that it’s not play time. By contrast, during the short time I was able to take part in my parish’s prison ministry, I noticed that the prison guards were generally friendly (if guardedly so) with the inmates. Basically, being friendly can invite people to try to take liberties, but being grumpy usually gets far less cooperation, and outside of places like Nazi death camps where you are actually willing to shoot people for being uncooperative, cooperation is usually worth far more than the inconvenience of having to tell people “no” when they try to take liberties.

But most of the actors who play authority figures don’t know any of this; and when you research the individual actors they often turn out to be goofballs who don’t like authority and whose portrayal of it is largely formed by what they most dislike about it.

Atheism is Not a Religion

This is the script to my video, Atheism is Not a Religion. As always, it was written to be listened to when I read it aloud, but it should be pretty readable as text, too.

Today we’re going to look at a topic which a casual survey of atheist youtube channels and twitter feeds suggests is of importance to many atheists: that atheism is not a religion. Now, since the one thing you can’t convict internet atheists of is originality, I assume that this is because there are Christians who claim that atheism is a religion. Of course what they probably mean by this is that atheism entails a set of metaphysical beliefs. And this is true enough, at least as a practical assumption, even if some atheists will scream at you until they’re blue in the face that it’s not what they believe in theory. But merely having metaphysical beliefs does not make something a religion; it makes it a philosophy or, in more modern terms, a world-view. But a religion is far more than merely a world-view or a set of beliefs. As Saint James noted, the demons believe in God.

The first and most obvious thing which atheism lacks is: worship. Atheists do not worship anything. I know that Auguste Comte tried to remedy this with his calendar of secular holidays, but that went nowhere and has been mostly forgotten except perhaps in a joke G. K. Chesterton made about it. A few atheists have made a half-hearted go of trying to worship science. And if that had any lasting power, Sunday services might include playing a clip from Cosmos: A Spacetime Odyssey. But the would-be science worshippers haven’t gotten that far, and it is highly doubtful they ever will.

Secular Humanism is sometimes brought up as something like a religious substitute, but so far it only appears to be a name, a logo, some manifestos no one cares about, and the belief that maybe it’s possible to have morality without religion. And humanity is not a workable object of worship anyway. First, because it’s too amorphous to worship—as Chesterton noted, a god composed of seven billion persons neither dividing the substance nor confounding the persons is hard to believe in. The other reason is that worshipping humanity involves worshipping Hitler and Stalin and Mao and so forth.

Which brings us to Marxism, which is perhaps the closest thing to a secular religion so far devised. But while Marxism does focus the believer’s attention on a utopia which will someday arrive, and certainly gets people to be willing to shed an awful lot of innocent blood to make it happen sooner, I don’t think that this really constitutes worship. It’s a goal, and men will kill and die for goals, but they can’t really worship goals. Goals only really exist in the people who have them, and you can only worship what you believe actually exists.

It is sometimes argued that within a marxist utopia people worship the state, but while this is something put on propaganda posters, the people who lived in marxist nations don’t report anyone actually engaging in this sort of worship, at least not sincerely.

And I know that some people will say that atheists worship themselves—I suspect because almost all atheists define morality as nothing more than a personal preference—but I’ve never seen that as anything more than a half-hearted attempt to answer the question of “what is the ground of morality,” rather than any sort of motivating belief. And in any event, it is inherently impossible to worship oneself. Worshipping something is recognizing something as above oneself, and it is not possible to place oneself above oneself. I think the physical metaphor suffices: if you are kneeling, you can’t look up and see your own feet. You might be able to see an image of yourself in a mirror, but that is not the same, and whatever fascination it might have is still not worship. So no, atheism does not worship anything.

The second reason why atheism is not a religion is that atheism gives you no one to pray to. Prayer is a very interesting phenomenon, and is much misunderstood by those who are not religious and, frankly, many who are, but it is, at its core, talking with someone who actually understands what is said. People do not ever truly understand each other, because the mediation of words always strips some of the meaning away and the fact that every word means multiple things always introduces ambiguity. Like all good things in religion this reaches its crescendo in Christianity, but even in the public prayers said over pagan altars, there is the experience of real communication, in its etymological sense: com—together; unication—being one. It is in prayer—and only in prayer—that we are not alone. Atheists may decry this as talking with our imaginary friends if they like—and many of them certainly seem to like to—but in any event they are left where all men who are not praying are left: alone in the crowd of humanity, never really understood and so only ever loved very imperfectly at best. (I will note that this point will be lost on people who have never taken the trouble to find out what somebody else really means, and so assume that everyone else means exactly the same things that they would mean by those words, and so assume that all communication goes perfectly. You can usually identify such people by the way they think that everyone around them who doesn’t entirely agree with them is stupid. It’s the only conclusion left open to them.)

The third reason why atheism is not a religion is that it does not, in any way, serve the primary purpose of religion. The thing you find common to all religions—the thing at the center of all religions—is putting man into his proper relation with all that is; with the cosmos, in the Greek sense of the word. Anyone who looks at the world sees that there is a hierarchy of being; that plants are more than dust and beasts are more than plants and human beings are more than beasts. But if you spend any time with human beings—and I mean literally any time—you will immediately know that human beings are not the most that can be. All that we can see and hear and smell and taste and touch in this world forms an arrow which does not point at us but does run through us, pointing at something else. The primary purpose of a religion is to acknowledge that and to get it right. Of course various religions get it right to various degrees; those who understand that it points to an uncreated creator who loved the world into existence out of nothing get it far more right than those who merely believe in powerful intelligences beyond ours. Though if you look carefully, even those who apparently don’t, often seem to have their suspicions that there’s something important they don’t know about. But be that as it may, all religions know that there is something more than man, and give their adherents a way of putting themselves below what they are below; of standing in a right relation to that which is above them. In short, the primary purpose of all religion is humility.

And this, atheism most certainly does not have. It doesn’t matter whether you define atheism as a positive denial or a passive lack; either way atheism gives you absolutely no way to be in a right relationship with anything above you, because it doesn’t believe in anything above you. Even worse, atheism has a strong tendency, at least in the west, to collapse the hierarchy of being in the other direction, too. It is no accident that pets are acquiring human rights and there are some fringe groups trying to sue for the release of zoo animals under the theory of habeas corpus. Without someone who intended to make something out of the constituent particles which make us up, there is ultimately no reason why any particular configuration of quarks and electrons should mean anything more than any other one; human beings are simply the cleverest of the beasts that crawl the earth, and the beasts are simply the most active of the dust which is imprisoned on the earth.

We each have our preferences, of course, but anyone with any wide experience of human beings knows that we don’t all have the same preferences, and since the misanthropes are dangerous and have good reason to lie to us, those who don’t look out for themselves quickly become the victims of those who do. Call it foreigners or racists or patriarchy or gynocentrism or rape culture or the disposable male or communism or capitalism, or call it nature red in tooth and claw if you want to be more poetic about it, but sooner or later you will find out that human beings, like the rest of the world, are dangerous.

Religious people know very well that other human beings are dangerous; there is no way in this world to get rid of temptation and sin. But religion gives the possibility of overcoming the collapsing in upon ourselves for which atheism gives no escape.

For some reason we always talk about pride puffing someone up, but this is almost the exact opposite of what it actually does. It’s an understandable mistake, but it is a mistake. Pride doesn’t puff the self up, it shrinks it down. It just shrinks the rest of the world down first.

In conclusion, I can see why my co-religionists would be tempted to say that atheism is a religion. There are atheist leaders who look for all the world like charismatic preachers and atheist organizations that serve no discernible secular purpose. Though not all atheists believe the same things, still, most believe such extremely similar things that they could be identified on that basis. Individual atheists almost invariably hold unprovable dogmas with a blind certainty that makes the average Christian look like a skeptic. And so on; one could go on at length about how atheism looks like a religion. But all these are mere external trappings. Atheism is not a religion, which is a great pity because atheists would be far better off if it was.

Two Interesting Questions

On Twitter, @philomonty, who I believe is best described as an agnostic (he can’t tell whether nihilism or Catholicism is true), made two video requests. Here are the questions he gave me:

  1. If atheism is a cognitive defect, how may one relieve it?
  2. How can an atheist believe in Christ, when he does not know him? Not everyone has mystical experiences, so not everyone has a point of contact which establishes trust between persons, as seen in everyday life.

I suspect that I will tackle these in two separate videos, especially because the second is a question which applies to far more than just atheists. They’re also fairly big questions, so it will take me a while to work out how I want to answer them. 🙂

The first question is especially tricky because I believe there are several different kinds of cognitive defects which can lead to atheism. Not everyone is a mystic, but if a person who isn’t demands mystical experience as the condition for belief, he will go very wrong. If a person who is a mystic has mystical experiences but denies them, he will go very wrong, but in a different way. There are also people who are far too trusting of the culture they’re in, thinking that fitting into it is the fullness of being human, so they will necessarily reject anything which makes it impossible or even just harder to fit in. These, too, will go very wrong, but in a different way from the previous ones.

To some degree this is a reference to my friend Eve Keneinan’s view that atheism is primarily caused by some sort of cognitive defect, such as an inability to sense the numinous (basically, lacking a sensus divinitatis). Since I’ve never experienced that myself, I’m certain it can’t be the entire story, though to the degree that it is part of the story it would come under the category of non-mystics who demand mystical experience. Or, possibly, mystics who have been damaged by something, though I am very dubious about that possibility. God curtails the amount of evil possible in the world to what allows for good, after all, so while that is not a conclusive argument, it does seem likely to me that God would not permit anything to make it impossible for a person to believe in him.

Anyway, these are just some initial thoughts on the topic which I’ll be mulling over as I consider how to answer. Interesting questions.

The Dunning-Kruger Effect

(This is the script for my video about the Dunning-Kruger effect. While I wrote it to be read out loud by someone who inflects words like I do, i.e. by me, it should be pretty readable as text.)

Today we’re going to be looking at the Dunning-Kruger effect. This is the other topic requested by PickUpYourPantsPatrol—once again, thanks for the request!—and if you’ve disagreed with anyone on the internet in the last few years, you’ve probably been accused of suffering from it.

Perhaps the best summary of the popular version of the Dunning-Kruger effect was given by John Cleese:

The problem with people like this is that they have no idea how stupid they are. You see, if you are very very stupid, how can you possibly realize that you are very very stupid? You’d have to be relatively intelligent to know how stupid you are. There’s a wonderful bit of research by a guy called David Dunning who’s pointed out that to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place. This means, if you’re absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.

There are plenty of things to say about this summary, as well as about the curious problem that if an idiot is talking to an intelligent person, absent reputation being available, there is a near-certainty that both will think the other an idiot. But before I get into any of that, I’d like to talk about the Dunning-Kruger study itself, because I read the paper which Dunning and Kruger published in 1999, and it’s quite interesting.

The first thing to note about the paper is that it actually discusses four studies which the researchers did, trying to test specific ideas about incompetence and self-evaluation which the paper itself points out were already common knowledge. For example, they have a very on-point quotation from Thomas Jefferson. But, they note, this common wisdom that fools often don’t know that they’re fools has never been rigorously tested in the field of psychology, so they did.

The second thing to note about this study is that—as I understand is very common in psychological studies—their research subjects were all students taking psychology courses who received extra credit for participating. Now, these four studies were conducted at Cornell University, and the classes were all undergraduate, so generalizing to the larger population is immediately suspect, since there’s good reason to believe that undergraduates at an Ivy League university have more than a few things in common which they don’t share with the rest of humanity. This is especially the case because the researchers were testing self-evaluation of performance, which is something that Cornell undergraduates were selected for and have a lot invested in. They are, in some sense, the elite of society, or so at least I suspect most of them have been told, even if not every one of them believes it.

Moreover, the tests which they were given—which I’ll go into detail about in a minute—were all academic tests, given to people who were there because they had generally been good at academics. Ivy League undergraduates are perhaps the people most likely to have falsely high estimates of how good they are at academic tests. This is especially the case if any of the subjects were freshmen (the paper doesn’t say), since a freshman at an Ivy League school has impressed the admissions board but hasn’t had the opportunity to fail out yet.
So, right off the bat the general utility of this study in confirming popular wisdom is suspect; popular opinion may have to stand on its own. On the other hand, this may be nearly the perfect study to explain the phenomenon Nassim Nicholas Taleb described as Intellectual Yet Idiot—credentialed people who have the role of intellectuals yet little of the knowledge and none of the wisdom for acting the part.

Be that as it may, let’s look at the four studies described. The first study is in many ways the strangest, since it was a test of evaluating humor. They created a compilation of 30 jokes from several sources, then had a panel of 8 professional comedians rate these jokes on a scale from 1 to 11. After throwing out one outlier, they took the mean answers as the “correct” answers, then gave the same test to “65 Cornell undergraduates from a variety of courses in psychology who earned extra credit for their participation”.
They found that the people in the bottom quartile of test scores—who by definition have an average rank around the twelfth percentile, the midpoint of the 0th through 25th—guessed (on average) that their rank was the 66th percentile. The bottom three quartiles overestimated their rank, while the top quartile underestimated theirs, thinking that they were in the (eyeballing it from the graph) 75th percentile when in fact (again by definition, as the midpoint of the 75th through 100th) they were in the 88th.
This is, I think, the least interesting of the studies, first because the way they came up with “right” and “wrong” answers is very suspect, and second because this isn’t necessarily about mis-estimation of a person’s own ability, but could be entirely about mis-estimation of their peers’ ability. The fact that everyone put their average rank in the class at between the 66th and 75th percentile may just mean that in default of knowing how they did, Cornell students are used to guessing that they got somewhere between a B- and a B+. Given that they were admitted to Cornell, that guess may have a lot of history behind it to back it up.

The next test, though unfortunately only given to 45 Cornell students, is far more interesting both because it used 20 questions on logical reasoning taken from an LSAT prep book—so we’re dealing with questions where there is an unambiguously right answer—and because in addition to asking students how they thought they ranked, they asked the students how many questions they thought that they got right. It’s that last part that’s really interesting, because that’s a far more direct measure of how much the students thought that they knew. And in this case, the bottom quartile thought that they got 14.2 questions right while they actually got 9.6 right. The top quartile, by contrast, thought that they got 14 correct when they actually got 16.9 correct.

So, first, the effect does in fact hold up with unambiguous answers. The bottom quartile of performers thought that they got more questions right than they did. So far, so good. But the magnitude of the error is not nearly as great as it was for the ranking error, especially for the bottom quartile. Speaking loosely, the bottom quartile knew half of the material and thought that they knew three quarters of it. That is a significant error, in the sense of being a meaningful error, but at the same time they thought that they knew about 48% more than they did, not 48,000% more than they did. The 11 or so Cornell undergraduates in that bottom quartile did have an over-inflated sense of their ability, to be sure, but they also had a basic competence in the field. To put this in perspective, the top quartile only scored 76% better than the bottom quartile.

The next study was of 84 Cornell undergrads who were given a 20-question test of standard English grammar taken from a National Teacher Examination prep guide. This replicated the basic findings of the previous study, with the bottom quartile estimating that they got 12.9 questions right versus a real score of 9.2. (Interestingly, the top quartile very slightly over-estimated their score as 16.9 when it was actually 16.4.) Again, all these are averages so the numbers are a little wonky, but anyway this time the bottom quartile over-estimated their performance by 3.7 points, or 40%. And again, they got close to half the questions right, so this isn’t really a test of people who are incompetent.
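Since I’m quoting a lot of percentages here, a quick sanity check may be useful. Here is a small Python sketch of the arithmetic for the logic and grammar tests, using the averages as I’ve reported them; since the inputs are rounded, treat the outputs as approximate:

    # Bottom-quartile averages (out of 20) from the logic and grammar tests.
    tests = {
        "logic":   {"estimated": 14.2, "actual": 9.6},
        "grammar": {"estimated": 12.9, "actual": 9.2},
    }
    for name, t in tests.items():
        over = t["estimated"] - t["actual"]
        print(f"{name}: over-estimated by {over:.1f} questions, "
              f"or {100 * over / t['actual']:.0f}%")
    # logic: over-estimated by 4.6 questions, or 48%
    # grammar: over-estimated by 3.7 questions, or 40%

    # The top quartile's actual score versus the bottom's, on the logic test:
    print(f"{100 * (16.9 - 9.6) / 9.6:.0f}% better")  # 76% better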

There’s another thing to consider in both studies, which is how many questions the students thought they got wrong. On the first of these tests they estimated 5.4 errors, and on the second, 7.1, and while these were under-estimates, they were right that they had gotten at least that many wrong. Unfortunately these are aggregate numbers (asked after they handed the test in, I believe), so we don’t know their accuracy in gauging whether they got particular questions wrong, but on the first test they correctly estimated about 40% of their errors and on the second test about 65%. That is, while they did unequivocally have an over-inflated sense of their performance, they were not wildly unrealistic about how much they knew. But of course these are both subjects they had studied in the past, and their test scores did demonstrate at least basic competence with them.

The fourth study is more interesting, in part because it was on a more esoteric subject: it was a 10-question test, given to 140 Cornell undergrads, about set selection (the format of the classic Wason selection task). Each problem described 4 cards and gave a rule which they might match. The question was which card or cards needed to be flipped over to determine whether those cards match the rule. Each question was like that, so we can see why they only asked ten questions.

They were asked to rate how they did in the usual way, but then half of them were given a short packet that took about 10 minutes to read explaining how to do these problems, while the other half was given an unrelated filler task that also took about 10 minutes. They were then asked to rate their performance again, and in fact the group who learned how to do the problems did revise their estimate of their performance, while the other group didn’t change it very much.

And in this test we actually see a gross mis-estimation of ability by the incompetent. The bottom quartile scored on average 0.3 questions correct, but initially thought that they had gotten about 5.5 questions correct. For reference, the top quartile initially thought that they had gotten 8.9 questions correct while they had in fact gotten all ten correct. And after the training, the untrained bottom quartile slightly raised their estimation of their score (by six tenths of a question), but among the trained people the bottom quartile reduced their estimation by 4.3 questions. (In fact the two groups had slightly different performances which I averaged together; so the bottom quartile of the trained group estimated that they got exactly one question right.)
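Since there are a few numbers in play, here is a similar Python sketch of the arithmetic for this fourth study, again using the averages as I’ve quoted them (the 35 below is simply my rounding of a quartile of the 140 subjects):

    # Fourth study, bottom quartile, initial self-assessment (out of 10).
    actual_avg = 0.3   # average number of questions actually right
    estimated  = 5.5   # average number they thought they got right
    quartile_n = 35    # roughly a quarter of the 140 subjects

    # If nearly everyone scored 0 or 1, an average of 0.3 implies that about
    # 0.3 * 35 = 10.5 people, call it 11, got a single question right.
    print(f"{actual_avg * quartile_n:.1f} people with one question right")  # 10.5

    # Their estimate as a percentage of their actual performance:
    print(f"{100 * estimated / actual_avg:.0f}%")  # 1833%, the "over 1800%" below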

This fourth study, it seems to me, is finally more of a real test of what everyone wants the Dunning-Kruger effect to be about. An average of 0.3 questions right corresponds roughly to 11 of the 35 people in the bottom quartile getting one question right while the rest got every question wrong. The incompetent people were actually incompetent. Further, they over-estimated their performance by over 1800%. So here, finally, we come to the substance of the quote from John Cleese, right?

Well… maybe. There are two reasons I’m hesitant to say so, though. The first is the fact that these are still all Cornell students, so they are people who are used to being above average and doing well on tests and so forth. Moreover, virtually all of them had never been outside of academia, so it is very likely that they’d never encountered a test which was not designed to be passable by most people. If nothing else, it doesn’t reflect well on a teacher if most of his class gets a failing grade. And probably most importantly, the skills necessary to solve these problems are fairly close to the sort of skills that Ivy League undergrads are supposed to have, so the similarity of this skillset (at which they were incompetent) to skillsets at which they are presumably competent might well have misled them.

The second reason I’m hesitant to say that this study confirms the John Cleese quote is that the incompetent people estimated that they got 55% of the questions right, not 95%. That is to say, incompetent people thought that they were merely competent. They didn’t think that they were experts.

In the conclusion of the paper, Dunning and Kruger talked about some limitations of their study, which I will quote because it’s well written and I want to do them justice.

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods.

They go on to note that in some domains, knowledge is largely the substance of skill, like in grammar, whereas in other places knowledge and skill are not the same thing, like basketball.

They also note that there is a minimum amount of knowledge required to mistake oneself for competent. As the authors say:

Most people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct an 8-cylinder engine, or diagnose acute disseminated encephalomyelitis.

So where does this leave us with regard to the quote from John Cleese? I think that the real issue is not so much about the inability of the incompetent to estimate their ability, but the inability of the incompetent to reconcile new ideas with what they do actually know. Idiots may not know much, but they still know some things. They’re not rocks. When a learned person tells them something, they are prone to reject it not because they think that they already know everything, but because it seems to contradict the few things they are sure of.

There is a complex interplay between intelligence and education—and I’m talking about education, mind, not mere schooling—where intelligence allows one to see distinctions and connections quickly, while education gives one the framework of what things there are that can be distinguished or connected. If a person lacks one or the other—and especially if he lacks both—understanding new things becomes very difficult, because it is hard to connect what was said to what is already known, as well as to distinguish it from possible contradictions of what is already known. If the learned, intelligent person isn’t known by reputation to the idiot, the idiot has no way of knowing whether the things said don’t make sense to him because they are nonsense or because they contain more sense than he can follow, and a little experience of the world is enough to make many if not most people sufficiently cynical to assume the former.

And I think that perhaps the best way to see the difference between this and the Dunning-Kruger effect is by considering the second half of the fourth experiment: the incompetent people learned how to do what they initially couldn’t. That is, after training they became competent. That is not, in general, our experience of idiots.
Until next time, may you hit everything you aim at.

Why I Cringe When People Criticize Capitalism (in America)

Every time I hear a fellow Christian (usually Catholic, often someone with the good sense to be a fan of G.K. Chesterton) criticize capitalism, I cringe, but not for the reason I suspect most of them would expect. Why I cringe will take a little explanation, but it’s rooted in the fact that there are actually two very different things which go by the name capitalism.

The first is a theory proposed by Adam Smith that, to oversimplify and engage in some revisionist history which is not fair to him but which would take too long to go into further, holds that virtue is unreliable: if we can harness vice to do the work of virtue, we can get the same effect much more reliably. Thus if we appeal to men’s self-interest, they will do what they ought with more vigor than if we appealed to their duty and love of their fellow man. Immanuel Kant’s essay Perpetual Peace has a section which may be taken as a summary of this attitude:

The problem of the formation of the state, hard as it may sound, is not insoluble, even for a race of devils, granted that they have intelligence. It may be put thus:—“Given a multitude of rational beings who, in a body, require general laws for their own preservation, but each of whom, as an individual, is secretly inclined to exempt himself from this restraint: how are we to order their affairs and how establish for them a constitution such that, although their private dispositions may be really antagonistic, they may yet so act as a check upon one another, that, in their public relations, the effect is the same as if they had no such evil sentiments.” Such a problem must be capable of solution. For it deals, not with the moral reformation of mankind, but only with the mechanism of nature; and the problem is to learn how this mechanism of nature can be applied to men, in order so to regulate the antagonism of conflicting interests in a people that they may even compel one another to submit to compulsory laws and thus necessarily bring about the state of peace in which laws have force.

Capitalism in this sense was this general problem applied to economics: we need men to work, but all men are lazy. We can try to appeal to men to be better, but it is much simpler and more reliable to show them how hard work will satisfy their greed.

This version of capitalism is a terrible thing, and by treating men as devils has a tendency to degrade men into a race of devils. But there is something important to note about it, which is that it doesn’t really demand much of government or of men. While it appeals to men’s greed, it does not impose a requirement that a craftsman charge an exorbitant price rather than a just price. It does not forbid a man taking a portion of his just profits and giving it to the poor. It tends to degrade men into devils, but it does not produce a form of government which demands that they become devils.

That was left to Marxism, which by its materialism demanded that all men surrender their souls to the state. Marxism is as wrong a theory of human beings as the capitalism of the Enlightenment, but it demands a form of government which is far less compatible with human virtue. Further, it demands a form of government which is intrinsically incompatible with natural justice—depriving, as it does, all men of the property necessary to fulfill their obligations to their family and to their neighbors. Marxism inherently demands that all to whom it applies become a race of devils.

Of course, Marxism was never historically realized in its fullness since, as Roger Scruton observed, it takes an infinite amount of force to make people do what is impossible. But enough force was applied to create the approximation of Marxism known as The Soviet Union (though according to a Russian friend of mine who escaped shortly before the Soviet Union collapsed, a more accurate translation would have been “The Unified Union of United Allies”). This global superpower was (at least apparently) bent on conquering the world in the name of Marx—well, in the name of Lenin, or communism, or The People; OK, at least bent on conquering the world. And to a Marxist, who doesn’t really believe in personal autonomy and thus doesn’t believe in personal virtue, everyone else looks like a capitalist, in the original sense of the word, since anything which is individual must inherently be greed.

So they called Americans capitalists. But if the devils in hell spit some criticism at you, it is only natural to take it as a compliment, and partly because of this and partly for lack of a better term, Americans started calling themselves capitalists. If the people with the overpopulated death camps for political prisoners in the frozen wastelands of Siberia despise us for being capitalists, then being a capitalist must be a pretty good thing. But in embracing the term capitalist, people were not thinking of Adam Smith’s economic theory or the problem Kant wrestled with of how to get a race of devils to cooperate; they were thinking of what they actually were and just using the name capitalist to describe that.

And here’s where we come to the part that makes me cringe when I hear fellow Christians complain about Capitalism. The United States of America has had many sins, but it has never been capitalist in the philosophical sense. Much of what became The United States was founded as religious colonies, though to be sure there were economic colonies as well. But the economic colonies, which had all of the vices that unsupervised people tend to, were still composed of religious people who at least acknowledged the primacy of virtue over vice in theory. And for all the problems with Protestantism, the famous “Protestant Work Ethic” was the diametric opposite of philosophical capitalism. The whole idea of the Protestant work ethic is that men should work far beyond what is needed, because work is virtuous and because idleness is dangerous. Perhaps it was always more of a theory than a practice, but even so it was not the capitalist theory that men should work to satisfy their greed.

For perhaps the first century after the founding of The United States, it was a frontier nation in which people expanded and moved around at fairly low population densities. It takes time to set up governments, and small groups of people can resolve their own differences well enough, most of the time. So the paucity of government as we’re used to it today (and as, in a different form, people would have been used to in Europe in the middle ages) was largely due to the historical accident of low population densities, and not to any sort of philosophical ideal that greed is the highest good, making government practically unnecessary except for contract enforcement.

And while it is true that this environment gave birth to the robber barons who made a great deal of money treating their fellow men like dirt, it also gave rise to trust busters and government regulation designed to curb the vices of men who did not feel like practicing even minimal virtue toward their fellow man. Laws and regulations take time to develop, especially in a land without computers and cell phone cameras; before the advent of radio it took more than a little time to convince many people of some proposition, because skilled orators could only do the convincing one crowd at a time.

Moreover, the United States has never had a government free from corruption, but powerful men buying off politicians was not what the United States was supposed to be; all things in this fallen world are degenerate versions of themselves. Slowness to act on common principles in a fallen world does not mean that a people does not hold those principles, only that hard things like overcoming corruption are difficult and time consuming to do.

But throughout the history of the United States, if you walked up to the average citizen and asked him, “ought we, as a people, to encourage men to be honest, hard working, and generous, or ought we to show each man that at least the first two are often in his self-interest and then encourage him to be as selfish and greedy as possible?” you would have had to ask a great many people indeed to come across someone who would cheerfully give you the second answer. Being willing to give that second answer is largely a modern degeneracy of secularists who know only enough economics and history to be dangerous, and who for the most part think that you’re asking whether the government should micro-manage people’s lives to force them to be honest, hard working, and generous. Americans have many vices, but the least reliable way possible to find out what they are is to ask us.

I will grant that philosophical capitalism is also, to some degree, what is proposed by advertising. Indulge yourself! It’s sinfully delicious! You’re worth it! You deserve it! Everything is about making you happy!

I think that this may be why I cringe the most when my fellow Christians complain about our capitalist society; they should have learned by now not to believe everything they see on television.

Debunking Believe-or-Burn

This is the script from my video debunking believe-or-burn. It was written to be read aloud, but it should be pretty readable. Or you could just listen to it.

Today we’re going to be looking at how abysmally wrong the idea of “believe or burn”—which I prefer to render as “say the magic words or burn”—really is. And to be clear, I mean wrong, not that I don’t like it or that this isn’t my opinion. I’m Catholic, not evangelical, so I’m talking about how it contradicts the consistent teaching of the church since its inception 2000 years ago (and hence is also the position of the Eastern Orthodox, the Copts, etc.), and moreover how one can rationally see why “say the magic words or burn” cannot be true.

I’m not going to spend time explaining why non-Christian religions don’t believe you have to say the magic words or burn, because for most of them it’s not even relevant. In Hinduism, heavens and hells are related to your karma, not to your beliefs, and they’re all temporary anyway—as the story goes, the ants have all been Indra at some point. In Buddhism you’re trapped in the cycle of reincarnation and the whole point is to escape; to the degree that there even is a concept of hell in Buddhism, you’re there now and maybe you can get out. Many forms of paganism don’t even believe in an afterlife, and where they do—and what you do in life affects what happens to you in the afterlife—what happens to you is largely based on how virtuously you lived in society, not on worshipping any particular gods. Animistic religions are often similar to pagan religions, or else hold that the dead stick around as spirits and watch over the living. As for the monotheistic religions, few of them have a well-defined theology on this point. Their attitude tends to be, “here is the way to be good, it’s bad to be evil, and for everyone else, well, that’s not a practical question.” For most of the world’s religions, “say the magic words or burn” isn’t even wrong.

Islam is something of an exception to this, but I’m not going to get into Islam because the Quran doesn’t unambiguously answer this question, and after Al-Ghazali’s triumph over the philosophers in the 11th century, there really isn’t such a thing as Islamic theology in the same sense that you have Christian theology. Christianity holds human reason, being finite, to be unable to comprehend God, but to be able to reason correctly about God within its limits. Since Al-Ghazali wrote The Incoherence of the Philosophers, the trend in Islam has been to deny that human reason can say anything about God past what he said about himself in the Quran. As such, any question not directly and unambiguously answered in the Quran—which, recall, is poetry—is not really something you can reason about. So as a matter of practicality I think Islam should be grouped with the other monotheisms which hold the question of what happens to non-believers acting in good faith to be impractical. And in any event there are hadith and a passage in the Quran which do talk about some Jews and Christians entering paradise, so make of that what you will.

There isn’t an official name for the doctrine of “say the magic words or burn”, but I think it’s best known because of fundamentalists who say that anyone who doesn’t believe will burn in hell. I think that the usual form is saying that everyone who isn’t a Christian will burn in hell, for some definition of Christian that excludes Roman Catholics, Eastern Orthodox, Anglicans, and anyone else who doesn’t think that the King James version of the bible was faxed down from heaven and is the sole authority in human affairs. You generally prove that you’re a Christian in this sense by saying, “Jesus Christ is my personal lord and savior”, but there’s no requirement that you understand what any of that means, so it functions exactly like a magical incantation.

As I discussed in my video on fundamentalists, when they demand people speak the magic words, what they’re asking for is not in any sense a real religious formulation, but actually a loyalty pledge to the dominant local culture. (Which is not peculiar to fundamentalists—all tribes have a way of pledging loyalty.) But the concept of “say the magic words or burn” has a broader background than fundamentalism, going all the way back to the earliest Protestant reformers and being, more or less, a direct consequence of how Martin Luther and John Calvin meant the doctrine of Sola Fide.

Before I get into the origin of “say the magic words or burn”, let me give an overly brief explanation of what salvation actually means, to make sure we’re on the same page. And to do that, I have to start with what sin is: sin means that we have made ourselves less than what we are. For example, we were given language so that we could communicate truth. When we lie, not only do we fail in living up to the good we can do, we also damage our ability to tell the truth in the future. Lying (and all vices) all too easily become habits. We have hurt others and damaged ourselves. Happiness consists of being fully ourselves, and so in order to be happy we must be fixed. This is, over-simplified, what it means to say that we need salvation. Christianity holds that Jesus has done the work of that salvation, with which after death we will be united, if we accept God’s offer, and so we will become fixed, and thus, being perfect, will be capable of eternal happiness. That’s salvation.

Some amount of belief is obviously necessary to this, because if you don’t believe the world is good, you will not seek to be yourself. This is why nihilists like pickup artists are so miserable. They are human but trying to live life like some sort of sex-machine. They do lots of things that do them no good, and leave off doing lots of things that would do them good. Action follows belief, and so belief helps us to live life well. We all have at least some sense of what is true, though, or in more classical language the natural law is written on all men’s hearts. It is thus possible for a person to do his best to be good, under the limitations of what he knows to be good. God desires the good of all of his creatures, and while we may not be able to see how a person doing some good things, and some evil things under the misapprehension that they are good, can be saved, we have faith in God that he can do what men can’t. Besides, it doesn’t seem likely that God would permit errors to occur if they couldn’t be overcome. While we don’t know who will be saved, it is permissible to hope that all will be saved. As it says in the Catechism of the Catholic Church, “Those who, through no fault of their own, do not know the Gospel of Christ or his Church, but who nevertheless seek God with a sincere heart, and, moved by grace, try in their actions to do his will as they know it through the dictates of their conscience – those too may achieve eternal salvation.”

OK, so given that, where did the evil and insane idea of “say the magic words or burn” come from? Well, Sola Fide originated with Martin Luther, who as legend has it was scrupulous and couldn’t see how he could ever be good enough to enter heaven (I say “as legend has it” because this may be an overly sympathetic telling). For some reason he couldn’t do his best and trust God for the rest, so he needed some alternative to make himself feel better. Unfortunately, being Christian, he was stuck with the word faith, which in the context of Christianity means trusting God. Martin Luther’s solution was to redefine the word faith to mean—well, he wasn’t exactly consistent, but at least much of the time he used it to mean something to the effect of “a pledge of allegiance”—basically, a promise of loyalty. The problem with that is that pledging your allegiance is just words. There’s even a parable Jesus told about this very thing: a man had two sons and told them to go work in his fields. The one son said no, but later thought better of it and went to work in the fields. The other said, “yes, sir” but didn’t go. Which did his father’s will? And please note, I’m not citing that to proof-text that Martin Luther was wrong. One bible passage with no context proves nothing. No, Martin Luther was obviously wrong. I’m just mentioning this parable because it’s an excellent illustration of the point about actions versus words. But as a side-note, it’s also an excellent illustration of why mainline protestants often have relatively little in common with Martin Luther and why it was left to the fundamentalists to really go whole-hog on Martin Luther’s theology: it was a direct contradiction of what Jesus himself taught.

John Calvin also had a hand in “say the magic words or burn”, though his influence was a bit different from Martin Luther’s. Though Luther and Calvin did agree on many points, they tended to agree for different reasons. While Martin Luther simply repudiated free will and the efficacy of reason—more or less believing that they never existed—John Calvin denied them because of the fall of man. According to Calvin man was free and his reason worked before the first sin, but all that was destroyed with the first sin, resulting in the total depravity of man. Whereas Martin Luther thought that free will was nonsensical even as a concept, John Calvin understood what it meant but merely denied it. Ironically, John Calvin’s doctrines being a little more moderate than Martin Luther’s probably resulted in their having a much larger impact on the world; you had to be basically crazy to agree with Martin Luther, while you only needed to be deeply pessimistic to agree with John Calvin. Luther held that God was the author of evil, while Calvin at least said that all of the evil was a just punishment for how bad the first sin was. If outsiders can’t readily tell the difference between Calvin’s idea of God and the orthodox idea of the devil, insiders can’t even tell the difference between them in Martin Luther’s theology. Luther literally said that he had more faith than anyone else because he could believe that God is good despite choosing to damn so many and save so few. The rest of us, who don’t even try to believe blatant logical contradictions about God, just don’t measure up. In the history of the world, Martin Luther is truly something special.

However, since both Luther and Calvin denied that there is, these days, any such thing as free will, Sola Fide necessarily took on a very strange meaning. Even a pledge of allegiance can’t do anything if you’re not the one who made it. So faith ends up becoming, especially for Calvin, just a sign that you will be saved. The thing is, while this is logically consistent—I mean, it may contradict common sense, but it doesn’t contradict itself—it isn’t psychologically stable. No one takes determinism seriously. The closest idea which is at least a little psychologically stable is that God is really just a god, if a really powerful god, so pledging allegiance is like becoming a citizen of a powerful, wealthy country. You’ll probably be safe and rich, but if you commit a crime you might spend some time in jail or even be deported. I realize that’s not the typical metaphor, but it’s fairly apt, and anyone born in the last several hundred years doesn’t have an intuitive understanding of what a feudal overlord is. This understanding of Sola Fide can’t be reconciled with Christianity, the whole point of which is to take seriously that God is the creator of the entire world and thus stands apart from it and loves it all. But this understanding of Sola Fide can plug into our instinct to be part of a tribe, which is why, if you don’t think about it, it can be a stable belief.

So we come again to the loyalty pledge to the group—in a sense we have to, because that is all a statement of belief without underlying intellectual belief ever can be—but with this crucial difference: whereas the fundamentalist generally is demanding loyalty to the immediate secular culture, the Calvinist-inspired person can be pledging loyalty to something which transcends the immediate culture. I don’t want to oversell this, because every culture—specific enough that a person can live in it—is always a subculture in a larger culture. But even so the Calvinist-inspired magic-words-or-burn approach is not necessarily local. It is possible to be the only person who is on the team in an entire city, just like it’s possible to be the only Frenchman in Detroit. As such this form of magic-words-or-burn can have a strong appeal to anyone who feels themselves an outsider.
And the two forms of magic-words-or-burn are not very far apart, and each can easily become the other as circumstances dictate. And it should be borne in mind that one of those circumstances is raising children, because a problem which every parent has is teaching their children to be a part of their culture. In this fallen world, no culture is fully human, and equally problematic is that no human is fully human, so the result is that child and culture will always conflict. Beatings work somewhat, but getting buy-in from the child is much easier on the arms and vocal cords, and in the hands of less-than-perfect parents, anything which can be used to tame their children probably will be.

This would normally, I think, be a suitable conclusion to this video, but unfortunately it seems like salvation is a subject on which people are desperate to make some sort of error of exaggeration, so if we rule out the idea that beliefs are the only things that matter, many people will start running for the opposite side and try to jump off the cliff of beliefs not mattering at all. Or in other words, if salvation is possible to pagans, why should a Christian preach to them?

The short answer is that the truth is better for people than mistakes, even if the mistakes aren’t deadly. This is because happiness consists in being maximally ourselves, and the only thing which allows us to do that is the truth. Silly examples are always clearer, so consider a man who thinks that he’s a tree and so stands outside with his bare feet in the dirt, arms outspread, motionless, trying to absorb water and nutrients through his toes and photosynthesize through his fingers. After a day or two, he will be very unhappy, and a few days later he will die if he doesn’t repent of his mistake. Of course very few people make a mistake this stark—if nothing else, anyone who does will die almost immediately, leaving only those who don’t make mistakes this extreme around. But the difference between this and thinking that life is about having sex with as many people as possible is a matter of degree, not of kind. You won’t die of thirst and starvation being a sex-maniac, and it will take you longer than a few days to become noticeably miserable, but misery will come to those who think they’re mindless sex machines as reliably as it will to those who think they’re trees.

Pagans are in a similar situation to the pick-up-artists who think they’re mindless sex robots. Because paganism was a more widespread belief system that lasted much longer, it was more workable than pick-up-artistry, which is to say that it was nearer to the truth, but it was still wrong in ways that seriously affect human happiness. It varied with place and time, of course, but common mistakes were a focus on glory, the disposability of the individual, the inability of people to redeem themselves from errors, and so on. The same is true of other mistaken religions; they each have their mistakes, some more than others, and tend toward unhappiness to the degree that they’re wrong.

There is a second side to the importance of preaching Christianity to those who aren’t Christian, which is that life is real and salvation is about living life to the full, not skating by on the bare minimum. Far too many people think of this life as something unrelated to eternal life, as if once you make it to heaven you start over. What we are doing now is building creation up moment by moment. People who have been deceived will necessarily be getting things wrong and doing harm where they meant to help, and failing to help where they could have; it is not possible to be mistaken about reality and get everything right. That’s like asking a person with vision problems to be an excellent marksman. A person who causes harm where they meant to help may not be morally culpable for the harm they do, but when all is made clear, they cannot be happy about the harm they did, while they will be able to be happy about the good they did. To give people the truth is to give them the opportunity to be happier. That is a duty precisely because we are supposed to love people and not merely tolerate them. Though I suppose I should also mention the balancing point that we’re supposed to give people the truth, not force it down their throats. Having given it to them, if they won’t take it, our job is done.

OK, I think I can conclude this video now. Until next time, may you hit everything you aim at.

Our Love for Formative Fiction

I think that for most of us, there are things which we loved dearly when we were children which we still love now, often greatly in excess of how much others love these things. And I think we’re used to hearing this pooh-poohed as mere nostalgia. But I think that for most of us, that’s not accurate.

Nostalgia is, properly speaking, a longing for the familiar. It is not merely a desire for comfort, but also a connection through the passage of time from the present to another time (usually our childhood, but it can be any previous time). As Saint Augustine noted, our lives are shattered across the moments of time, and on our own we have no power to put them back together. Nostalgia is, properly speaking, the hope that someone else’s power will eventually put the shattered moments of time back together into a cohesive whole.

But when we enjoy formative fiction, we’re not particularly thinking of the passage of time, or the connectedness of the present to the past. And the key way that we can see this is that we don’t merely relive the past, like putting on an old sweater or walking into a room we haven’t been in for years. Those are simple connections to the past, and are properly regarded as nostalgia. But when we watch formative fiction which we still enjoy (and no one enjoys all of the fiction they read/watched/etc as a child), we actually engage it as adults. We see new things that we didn’t see at first, and appreciate it in new ways.

What is really going on is not nostalgia, but the fact that everyone has a unique perspective on creation; for each of us there are things we see in ways no one else does. Part of this is our personality, but part of this is also our previous experiences. And the thing about formative fiction is that it helped to form us. The genuine teamwork in Scooby Doo, where the friends were really friends and really tried to help each other, helped me to appreciate genuine teamwork. It’s fairly uncommon on television for teammates to actually like each other—“conflict is interesting!” every lazy screenwriter in the world will tell you—so when I see it in Scooby Doo now, I appreciate it all the more because I’ve grown up looking for it and appreciating it where I see it. This is one of the things I love about the Cadfael stories, where Cadfael (the Benedictine monk who solves murders) is on a genuine team with Hugh Beringar, the undersheriff of Shropshire. This is also one of the things I love about the Lord Peter stories with Harriet Vane—they are genuinely on each other’s side with regard to the mysteries.

And when I mention Scooby Doo, I am of course referring to the show from the 1960s, Scooby Doo, Where Are You!. I have liked some of the more recent Scooby Doo shows, like Scooby Doo: Mystery Inc., but by and large the more modern stuff tends to add conflict in order to make the show more interesting, and consequently makes it far less interesting for me. Cynics will say that this is merely because none of these were from my childhood, but in fact when Scooby Doo: Mystery Inc. had episodes where the entire team was functioning as a team, everyone liking each other and on the same side, I genuinely enjoyed those episodes. (Being a father of young children means watching a lot of children’s TV.) The episodes where members of the team were fighting, or where they split up, were by far my least favorite.

It is possible to enjoy fiction for ulterior motives, or at least to pretend to enjoy it for ulterior motives. Still, it’s also possible to enjoy fiction because one is uniquely well suited to enjoying it, and few things prepare us for life as much as our childhood did.

The Dishonesty of Defining Atheism as Lack of Belief in God

This is the script from a recent video of mine with the above title. It should be pretty readable, or you could just watch it.

Today we’re going to revisit the definition of atheism as a lack of belief in God, specifically to look at why it’s so controversial. As you may recall, Antony Flew first proposed changing the definition of atheism to lack of belief, from its traditional definition of “one who denies God,” in his 1976 essay, The Presumption of Atheism. By the way, you can see the traditional definition in the word’s etymology: atheos-ism, atheos meaning without God, and the -ism suffix denoting a belief system. Now, there’s nothing inherently wrong in changing a definition – all definitions are just an agreement that a given symbol (in this case a word) should be used to point to a particular referent. That is, any word can mean anything we all agree it does. And if a person is willing to define their terms, they can define any word to mean anything they want, so long as they stick to their own definition within the essay or book or whatever where they defined the term. Words cannot be defined correctly or incorrectly. But they can be defined usefully or uselessly. And more to the point here, they can be defined in good faith—clearly, to aid mutual understanding—or in bad faith—cleverly, in order to disguise a rhetorical trick.

And that second one is why atheism-as-lack-of-belief is so controversial. If atheism merely denoted a psychological state—which might in fact be common between the atheist and a dead rat—no one would much care. Unless, I suppose, one wanted to date the atheist or keep the rat as a pet. But merely lacking a belief isn’t what lack-of-belief atheists actually mean. They only talk about lacking a belief to distract from the positive assertion they’ve learned to say quickly and quietly: that in default of overwhelming evidence to the contrary, one should assume atheism in the old sense. That is, until one has been convinced beyond a shadow of a doubt that God exists, one should assume that God does not exist. I’ll discuss how reasonable this is in a minute—spoiler alert: it’s not—but I’d first like to note the subtle move of people who have more or less explicitly adopted a controversial definition of atheism in order to cover for begging the question. I suspect that this is more accidental than intentional—somewhat evolutionary, where one lack-of-belief atheist did it and it worked and caught on by imitation—but it’s a highly effective rhetorical trick. Put all your effort into defending something not very important and people will ignore your real weakness.

By the way, the phrase “beg the question” means that you’re assuming the answer to the question. It comes from the idea of asking that the question be given to you as settled, without having to argue for it. But it’s not just assuming your conclusion, it’s asking for other people to assume your conclusion too, hence the “begging”. (“Asking for the initial point” would have been a better, if less colorful, translation of the Latin “petitio principii”, itself a translation of the Greek “τὸ ἐξ ἀρχῆς αἰτεῖν”. Pointing out that it’s not valid to do this goes back at least to Aristotle.)

So, how reasonable is this assumption? The best argument I’ve ever heard for it is that in ordinary life we always assume things don’t exist until we have evidence for them. This is, properly speaking, something only idiots do. For example: oh look, here’s a hole in the ground. I’m going to assume it’s empty. It might be empty, of course, but in ordinary life only candidates for the Darwin Awards assume that. And in fact, taken to its logical conclusion, this default assumption would destroy all exploration. The only possible reason to try to find something is because you think it might be there. If you acted like planets in other solar systems don’t exist until someone has given you evidence for them, you would never point telescopes at the stars to see whether the planets are there. Pointing the telescope isn’t acting like they don’t exist; it’s acting like maybe they exist. In fact, scientific discovery is entirely predicated on the idea that you shouldn’t discount things until you’ve ruled them out. It’s also the entire reason you should control your experiments. You can’t just assume that variables besides the one you’re studying have no effect on the outcome of your experiment until somebody proves to you that they do; you’re supposed to assume that other variables do affect the outcome until you’ve proven that they don’t. This principle is literally backwards from good science.

Now, examples drawn from science will probably be lost on lack-of-belief atheists, who are in general impressively ignorant of how science actually works. But many of them probably own clothes. To buy clothes, one must first find clothes which fit. Until one gets to the clothing store, one doesn’t have evidence that they have clothes there, or that if they have clothes, that the clothes they have will fit. Properly speaking, one doesn’t even have evidence that the clothes that they sell there will have holes so the relevant parts of your body can stick out, like neck holes or leg holes. For all you know, they might lack holes of any kind, being just spheres of cloth. Do any of these atheists assume that the clothes at the clothing store lack holes? Because if they did, they’d stay home, since there’s no point in going to a store with clothes that can’t be worn.

Now, if one is trying to be clever, one could posit an atheist who goes to the store out of sheer boredom to see whether they have clothes or hippogriffs or whether the law of gravity even applies inside of the store. But they don’t, and we all know that they don’t. They reason from things that they know to infer other knowledge, then ignore their stupid principle and go buy clothes.

Now, if you were to point this out to a lack-of-belief atheist, their response would be some form of Special Pleading. Special Pleading is just the technical name for asking for different evidentiary standards for two things which aren’t different. You should have different evidentiary standards for the existence of a swan and for a law of mathematics, because those are two very different things. Sense experience is good evidence for a swan, but isn’t evidence at all for a law of mathematics, which must hold in all possible worlds. Special pleading is where you say that sense experience suffices for white swans but not for black swans. Or that one witness is enough to testify to the existence of a white swan, but three witnesses are required for a black swan. That’s the sort of thing special pleading is.

And this is what you will find immediately with lack-of-belief atheists. Their terminology varies, of course, but they will claim that God is in a special category which requires the default assumption of non-existence, unlike most of life. In my experience they won’t give any reason for why God is in this special category, presumably because there is none. But I think I know why they do it.

The special category of things they believe God is in is, roughly, the category of controversial ideas. Lack-of-belief atheists—all the ones I’ve met, at least—are remarkably unable to consider ideas they don’t believe. This is a mark, I think, of limited intellect, and people of limited intellect are remarkably screwed over by the modern world. Unable to evaluate the mess of competing ideas that our modern pluralistic environment presents to everyone, they could get by, by relying on a mentor: someone older and wiser who can tell them the correct answer until through experience they’ve learned how to navigate the world themselves. And please note that I don’t mean this in any way disparagingly. To be of limited intellect is like being short or weak or (like me) unable to tolerate capsaicin in food. It’s a limitation, but we’re all finite beings defined, to some degree, by our limits. God loves us all, and everyone’s limits are an opportunity for others to give to them. The strong can carry things for the weak, the tall can fetch things off of high shelves for the short, and people who can stand capsaicin can test the food and tell me if it’s safe. Limits are simply a part of the interdependence of creation. But the modern world, with its mandatory state education and the commonality of working outside the home, means that children growing up have few—and commonly no—opportunities for mentors. Their teacher changes every year and their parents are tired from work when they are around. What are they to do when confronted with controversial ideas they’re unequipped to decide for themselves?

I strongly suspect that lack-of-belief atheism is one result. I’m not sure yet what other manifestations this situation has—given the incredible similarities between lack-of-belief atheism and Christian fundamentalism I strongly suspect that Christian fundamentalism is another result of this, but I haven’t looked into it yet.
This also suggests that the problem is not merely intellectual. That is, lack-of-belief atheists are probably not merely the victims of a bad idea. Having been deprived of the sort of stable role-models they should have had growing up, and not being able to find substitutes in great literature or make their way on their own through inspiration and native ability, they probably have also grown up with what we might by analogy call a deformity in the organ of trust. They don’t know who to trust, or how to properly trust. Some will imprint on the wrong sort of thing—I think that this is what produces science-worshippers who know very little about science—but some of them simply become very mistrustful of everyone and everything.

Now, I don’t mean this as the only explanation of atheism, of course. For example, there are those who have so imprinted on the pleasure from a disordered activity that they can only see it as the one truly good thing in their life and so its incompatibility with God leads them to conclude God must not exist. There are the atheists Saint Thomas identified in the Summa Theologiae: those who disbelieve because of suffering and those who disbelieve because they think God is superfluous. But all these, I think, tend not to be lack-of-belief atheists and I’m only here talking about lack-of-belief atheists.

So finally the question becomes, what to do about lack-of-belief atheists? That is, how do we help them? I think that arguing with them is unlikely to bear much fruit, since most of what they say isn’t what they mean, and what they do mean is largely unanswerable. “I don’t know who to trust,” or, “I won’t trust anyone or anything,” can only be answered by a very long time of being trustworthy, probably for multiple decades. What I suspect is likely to be a catastrophic failure is any attempt to be “welcoming” or accommodating or inclusive. What lack-of-belief atheists are looking for—and possibly think they found already in the wrong place—is someone trustworthy who knows what they’re talking about. A person who is accommodating or inclusive is someone who thinks that group bonds matter more than what they claim is true, which means they don’t really believe it. The problem with “welcoming” is the scare quotes. There’s nothing wrong with being genuinely welcoming, since anyone genuinely welcoming is quite ready to let someone leave if he doesn’t want to stay. When you add the scare quotes you’re talking about people who are faking an emotional bond which doesn’t exist yet in order to try to manipulate someone into staying. Lack-of-belief atheists don’t need emotional manipulation, because no one needs emotional manipulation. What they need are people who are uncompromisingly honest and independent. The lack-of-belief atheist is looking for someone to depend on, not someone who will depend on them.

The good news is the same as the bad news: the best way to do this is to be a saint.

Imposter Syndrome Produces Many Fake Rules

Imposter Syndrome, which I’m using loosely and not in its clinical sense, is the feeling that a person is not actually competent at a job which they are manifestly competent at. I think that for many people it stems from being overly impressed with other people, putting those others on a pedestal, and not realizing that everybody everywhere is just “winging it”. That is, doing their best without full knowledge of what they should be doing. That is in fact the human condition—we are finite creatures and must live life by trust—but some people seem unable to accept that and have the conviction that other people must know what they’re doing. Only God knows what he’s doing; he’s the only one who accomplishes all things according to the intentions of his will. But those who can’t accept that must turn others—often kicking and screaming—into God-substitutes and pretend that these people really know what they’re doing. (It’s part of the reason people turn so quickly and viciously on their idols—they view imperfection as treason, since they’ve elevated their idols to the status of God.)

Another coping mechanism which the sufferers of imposter syndrome have is to try to turn life into something they can actually be good at, since no human being can be good at life itself in that God-like sense. Thus they come up with a myriad of byzantine, difficult, but achievable rules, then need to have everything in life go according to those rules in order to “feel in control”. These rules tend to cluster around anything with an inherently high degree of flexibility, such as social interaction, writing fiction, etc. “When you visit someone, you must bring a food item” is really more of a ritual, being such a common rule, but it’s a way of showing that one cares and is not merely mooching. Especially in the modern world where food is absurdly available there’s little benefit to it, and so far as I know it was never the custom among rich people, but it gives one something to do such that if one has done it, one did a good job and is not open to criticism. This is one such rule which caught on (and I’m forced to use a rule which is not particular to an individual in order that it might be generally recognizable), but such rules abound. Some people must always check the stove before leaving the house, some must always hand-write thank-you notes, or send thank-you notes on paper rather than by email. An alternative way of thinking of these things is as ad-hoc superstitions.

Satanic Banality

Here is the script of the most recent video I posted. Or if you’d prefer, you can go watch it on YouTube.

Some time ago, I made a video talking about the strange symbolism in the music video of Ke$ha’s song, Die Young. Here are all of the symbols she used:
[Image: the symbols Ke$ha used in the Die Young video]
The curious thing about them all is that despite the fact that the video is supposed to have a satanic theme, the symbols Ke$ha used are all actually Christian symbols. Here’s what I concluded in that video:

Ultimately what I think I find so frustrating about this video is that its use of symbolism is, essentially, magical thinking. Symbols have power because they communicate something. A symbol stands in for something greater than itself, which is why it has more power than random scribbles. Using symbols without reference to what they mean is trying to get their power without invoking their function – it's trying to steal their power.

But on further consideration, I've realized that this is actually quite fitting. Yes, it was rather incompetent satanism, but that is really the most consistent satanism possible. Diligence is a virtue; if she had put a lot of work into her satanism—if she had really tried to do a good job—that would have undermined the entire point. Skillful satanism is actually something of a contradiction in terms.

And this is something C.S. Lewis complained about in literature. In his preface to The Screwtape Letters, talking about artistic representations of the angelic and diabolic, he said: "The literary symbols are more dangerous because they are not so easily recognized as symbols. Those of Dante are the best. Before his angels we sink in awe. His devils, as Ruskin rightly remarked, in their rage, spite, and obscenity, are far more like what the reality must be than anything in Milton. Milton's devils, by their grandeur and high poetry, have done great harm, and his angels owe too much to Homer and Raphael. But the really pernicious image is Goethe's Mephistopheles. It is Faust, not he, who really exhibits the ruthless, sleepless, unsmiling concentration upon self which is the mark of Hell. The humorous, civilised, sensible, adaptable Mephistopheles has helped to strengthen the illusion that evil is liberating."

There’s nothing all that particular to Satanism in these complaints, though. It’s really the same as a mistake that we tend to make about all evil. I think that the origin of this mistake is, roughly, the intuition that if a person is trading their soul for something, there must be something quite valuable which tempted them to do it. Consider the scene in A Man For All Seasons where Richard Rich has just perjured himself to produce false evidence that will get Sir Thomas More executed for treason:

More: There is one question I would like to ask the witness. That’s a chain of office you’re wearing. May I see it? The red dragon. What’s this?

Cromwell: Sir Richard is appointed Attorney General for Wales.

More: For Wales? Why, Richard, it profits a man nothing to give his soul for the whole world. But for Wales?

(If you haven’t seen A Man for All Seasons, please do. It is an excellent movie.)

Why would somebody do something evil if it doesn't benefit them? The answer to this question is straightforward, but we need a few concepts in order to be able to give the simple explanation. The first is the Greek concept of hamartia. It comes from the verb hamartanein, which was, for example, what an archer did when he didn't hit his target. It means, roughly, to miss. Hamartia thus means an error, or a mistake, or, by the time you get to the early Christian church, sin. The key insight is that evil is not something positive, but something negative.

I think that people go wrong here by not taking nihilism seriously enough. We think of a world working in perfect harmony and unity as the default, and of evil as a deviation from that. But in fact the default is nothing. There need not be anything at all: no matter, no energy, no space or time or physics. Pure nothing is the default. And yet, there is something. I don't even care at the moment whether you attribute that creation to God or to a "quantum fluctuation"—well, I care a little, because the latter still assumes that some sort of contingent laws of physics exist, but whatever. The point is that anything whatever that exists—in our contingent world—is more than had to exist. Whether you think of it as a gift or as something that fell off of some cosmic truck that was driving by, from our perspective it is all a positive addition to the nothingness which is logically prior to it.

When you look at it this way, you can see that good is not a maintenance of the status quo, but an addition to it. But of course good is not merely anything at all existing. This is why a table is better than a pile of splinters, and why in the ordinary course of events using an axe to turn a table into a pile of splinters is wrong. It is bringing the world closer to the default of nothing. Good is not just any existence, but existence ordered according to a rational relationship. By a rational ordering, small things can become something more than themselves. Put together in the right shape, splinters can be beautiful and hold things up off the ground. That is, they can be a table.

Incidentally, this is why hyper-reductionists have such an easy time seeing through everything. Because every good thing is a rational relationship of lesser things, it is always possible to deny that the relationship is real. You can look at a table and see no more than a pile of splinters. Why a reductionist is proud of seeing less than everyone else is a subject for another day, but if you look at anything you know to be good, you will see this: it is itself made up of a rational relationship of parts that form more than they would in some other relationship. Further, all good things themselves fit into a rational relationship with other good things. Anywhere you look, whether at chickens or statues or vaccines or video games, all good things have this property. And all evils—murder, arson, terrorism, or just lying—have the property that they destroy rational relationships between things. They destroy the whole which is greater than the sum of its parts.

It is also the case that there is no other possibility for what constitutes good and evil. I don't have time to go into details, but if you examine any attempt to define good and evil which is not convertible into this definition, it invariably consists of taking one sort of rational relationship and calling that the only good. Good is doing your duty, or good is the family, or good is the state, or good is pleasure. Every such definition, if you really spend some time looking into it and seeing what its proponents actually mean by their words and actions, turns out to take some rational relationships and elevate them above all other rational relationships. In every case, a part is being taken and treated as the whole.

And this is why sin is analogous to an archer missing what he was shooting at. We all aim at doing the good, but it's very rare that we actually hit our target. Sometimes our aim is off because we twitch—that is, we can't hold steady—but very often it's because we mistake what we're looking at. We think it's closer or farther than it is, or that we're looking at one part when we're really looking at another. We go wrong not because we think, "oh man, would it be great to shoot this deer in the log under it!" but because we thought we were looking at its chest. We weren't, as proved by where our arrow struck. Or we can go wrong by being mistaken about where we're aiming, thinking that because we're looking at something, that's where we're pointing our arrow. "Know thyself" is often quoted by impractical people, but it's actually intensely practical advice.

The drug-addled, sex-crazed rock star doesn't think she's using Christian imagery when she's trying to be satanic. She has not traded looking like a buffoon for some amazing benefit we can't see. In her mind, she doesn't look like a buffoon. She thinks she looks awesome, that anyone sensible would cower in awe of her satanic majesty. She has missed her target, and hasn't yet gone to see where her arrow actually struck. There's a reason why pop musicians rarely last a decade: once they realize what they're doing, they stop doing it; once they stop believing in it, they can't sell the illusion anymore. And then their popularity fades, because it was not them, but the illusion they were selling, which was so popular.

Satanic Majesty is always an illusion, which is why you can only ever encounter it in art. Art contrives to convey experience: to show you what the world looks like through someone else's eyes. But Satanic Majesty always looks banal from the outside; it's only from the inside that it looks spectacular. This is part of why pride is the deadliest of the sins: if you wrap yourself up inside yourself, you can fool yourself forever without anything to check your downward, inward progress. And this is why music videos feature so many reaction shots. It's also why movies and TV and virtually everything fictive feature so many reaction shots. The thing itself rarely looks very impressive, but people's reactions are limited only by their imagination and acting skills. It's why, in the Power Rangers series, after they lower the camera to the monster's feet, the next shot is always the Power Rangers looking up. Our age has been called the age of many things, but it is the age of nothing so much as it is the age of the reaction shot. TV news shows the reactions of people on the street, but it never shows you the considered opinions of people on something that happened ten years ago. Collectively, we don't like reality; you can tell a tree by its fruit, which is why we prefer to look at seedlings.

It’s everywhere in entertainment—in which category news most certainly belongs— but it can be found throughout life, too. We endlessly discuss people’s reactions, but we rarely discuss things and ideas. And if we look at ourselves, when we are tempted, we can see the same thing. We do not consider our temptations in themselves, but only how they will make us feel. I mean when we’re experiencing them, not when we’re regretting having given into them afterwards. In the actual moment of giving in, our attention is never on the reality of what we’re about to do; we’re concentrating on how happy it will make us. That’s why one of the techniques for avoiding temptation is to face up to what we’re actually doing. Of course sometimes we can’t avoid facing up to what we’re actually doing; in addiction it’s called hitting rock bottom. But when one is young and healthy, it’s very rare that reality makes us face up to what we’re doing. On TV they always pick pretty people who smile for the camera, and it’s so hard to believe that anything can be wrong when pretty people are happy. On Facebook people post pictures of when things are going well, and the very fact that it’s rude to tell people about how bad your day was means that we don’t often face up to the reality of what is going on in life. A person has to be very unhappy indeed before they won’t smile for the camera.

Which is a pity, because so many people use reactions to tell whether the thing being reacted to is good or bad. Since people will put their best foot forward, this doesn't work; to know right from wrong we must investigate the things themselves. In fact, whether an action is defended on its own terms or by the reactions to it is a good heuristic for figuring out whether it is moral or immoral: if you can say something good about the action itself, it is probably moral; if it is only defended by people's reactions to it, it is probably immoral. That's only a heuristic, of course; people dance because it's fun, and dancing is legitimate. But dancing is also beautiful, at least when it's done well. There's very little you can say about heroin except that it's fun.

That’s all for now. Until next time, may you hit everything you aim at.