Authority Figures in Movies

One of the curious things about the roles of authority figures in movies is that they are very rarely played by people who have ever had any authority. One might think that this wouldn’t have too much of an impact since the actors are just reciting dialog which other people wrote. (People who most of the time haven’t had any authority themselves, but that’s a somewhat separate matter.) And in the end, authority is the ability to use force to compel people, so does it matter much what mannerisms an actor uses?

Actually, yes, because in fact a great deal of authority, in practice, is about using social skills to get people to cooperate without having to use one’s authority. And a great deal of social skill is body language, tone of voice, emphasis, and pacing. Kind of like the famous advice Dalton gives in Road House: “Be nice… until it’s time to not be nice.”

For some reason, authority figures are usually portrayed as grim and stern—at this point I think because it’s a shorthand so you can tell who is who—but there is a great deal which can be accomplished by smiling. There’s an odd idea that many people seem to have that smiling is only sincere if it is an instinctual, uncontrollable reaction. I’ve no idea where this crazy notion came from, but in fact smiling is primarily a form of communication. It communicates that one is not (immediately) a threat, that (in the moment) one intends cooperation, that the order of the moment is conversation rather than action. Like all communication it can of course be a lie, but the solution to that is very simple: don’t lie with your smile. Words can be lies, but the solution is not to refrain from speaking unless you can’t help yourself; it’s to tell the truth when one opens one’s mouth. So tell the truth when you smile with your mouth, too. And since actions are choices, one very viable option, if you smile at someone, is to follow through and (in the moment) be nice.

Anyone (sane) who has a dog knows that in many ways they’re terrible creatures. They steal your food, destroy everyday items, throw up on your floor when they’ve eaten things that aren’t food, get dog hair everywhere, and make your couches stink of dog. And yet, people love dogs who do these things to them for a very simple reason: any time you come home, your dog smiles at you and wags its tail and is glad to see you. And it’s human nature that it’s impossible to be angry at someone who is just so gosh darned happy that you’re in the same room as them.

People in authority are rarely there because they have a history of failure and incompetence at dealing with people; it may be a convenient movie shorthand that people in authority are stone-faced, grumpy, and stern, but in real life people in positions of authority are generally friendly. It’s easy to read too much into that friendliness, of course—they’re only friendly so long as you stay on the right side of what you’re supposed to be doing—but this unrealistic movie shorthand makes for far less interesting characters.

And I suppose I should note that there are some people in positions of authority who are often stone-faced and grim, but these are usually the people responsible for administering discipline to those already known to be transgressors. This is especially true of those dealing with children, who have little self control and less of a grasp of the gravity of most situations they’re in and who need all the help they can get in realizing that it’s not play time. By contrast, during the short time I was able to take part in my parish’s prison ministry, I noticed that the prison guards were generally friendly (if guardedly so) with the inmates. Basically, being friendly can invite people to try to take liberties, but being grumpy usually gets far less cooperation, and outside of places like Nazi death camps where you are actually willing to shoot people for being uncooperative, cooperation is usually worth far more than the inconvenience of having to tell people “no” when they try to take liberties.

But most of the actors who play authority figures don’t know any of this; and when you research the individual actors they often turn out to be goofballs who don’t like authority and whose portrayal of it is largely formed by what they most dislike about it.

Atheism is Not a Religion

This is the script to my video, Atheism is Not a Religion. As always, it was written to be listened to when I read it aloud, but it should be pretty readable as text, too.

Today we’re going to look at a topic which a casual survey of atheist youtube channels and twitter feeds suggests is of importance to many atheists: that atheism is not a religion. Now, since the one thing you can’t convict internet atheists of is originality, I assume that this is because there are Christians who claim that atheism is a religion. Of course what they probably mean by this is that atheism entails a set of metaphysical beliefs. And this is true enough, at least as a practical assumption, even if some atheists will scream at you until they’re blue in the face that it’s not what they believe in theory. But merely having metaphysical beliefs does not make something a religion; it makes it a philosophy, or in more modern terms, a world-view. But a religion is far more than merely a world-view or a set of beliefs. As Saint James noted, the demons believe in God.

The first and most obvious thing which atheism lacks is: worship. Atheists do not worship anything. I know that Auguste Comte tried to remedy this with his calendar of secular holidays, but that went nowhere and has been mostly forgotten except perhaps in a joke G. K. Chesterton made about it. A few atheists have made a half-hearted go of trying to worship science. And if that had any lasting power, Sunday services might include playing a clip from Cosmos: A Spacetime Odyssey. But the would-be science worshippers haven’t gotten that far, and it is highly doubtful they ever will.

Secular Humanism is sometimes brought up as something like a religious substitute, but so far it only appears to be a name, a logo, some manifestos no one cares about, and the belief that maybe it’s possible to have morality without religion. And humanity is not a workable object of worship anyway. First, because it’s too amorphous to worship—as Chesterton noted, a god composed of seven billion persons neither dividing the substance nor confounding the persons is hard to believe in. The other reason is that worshipping humanity involves worshipping Hitler and Stalin and Mao and so forth.

Which brings us to Marxism, which is perhaps the closest thing to a secular religion so far devised. But while Marxism does focus the believer’s attention on a utopia which will someday arrive, and certainly gets people to be willing to shed an awful lot of innocent blood to make it happen sooner, I don’t think that this really constitutes worship. It’s a goal, and men will kill and die for goals, but they can’t really worship goals. Goals only really exist in the people who have them, and you can only worship what you believe actually exists.

It is sometimes argued that within a marxist utopia people worship the state, but while this is something put on propaganda posters, the people who lived in marxist nations don’t report anyone actually engaging in this sort of worship, at least not sincerely.

And I know that some people will say that atheists worship themselves—I suspect because almost all atheists define morality as nothing more than a personal preference—but, at least I’ve never seen that as anything more than a half-hearted attempt to answer the question of “what is the ground of morality”, rather than any sort of motivating belief. And in any event, it is inherently impossible to worship oneself. Worshipping something is recognizing something as above oneself, and it is not possible to place oneself above oneself. I think the physical metaphor suffices: if you are kneeling, you can’t look up and see your own feet. You might be able to see an image of yourself in a mirror, but that is not the same, and whatever fascination it might have is still not worship. So no, atheism does not worship anything.

The second reason why atheism is not a religion is that atheism gives you no one to pray to. Prayer is a very interesting phenomenon, and is much misunderstood by those who are not religious and, frankly, many who are, but it is, at its core, talking with someone who actually understands what is said. People do not ever truly understand each other because the mediation of words always strips some of the meaning away and the fact that every word means multiple things always introduces ambiguity. Like all good things in religion this reaches its crescendo in Christianity, but even in the public prayers said over pagan altars, there is the experience of real communication, in its etymological sense: “com,” together, and “unication,” being one. It is in prayer—and only in prayer—that we are not alone. Atheists may decry this as talking with our imaginary friends if they like—and many of them certainly seem to like to—but in any event they are left where all men who are not praying are left: alone in the crowd of humanity, never really understood and so only ever loved very imperfectly at best. (I will note that this point will be lost on anyone who has never taken the trouble to find out what somebody else really means, and so assumes that everyone else means exactly the same things that he would mean by those words, and so assumes that all communication goes perfectly. You can usually identify such people by the way they think that everyone around them who doesn’t entirely agree with them is stupid. It’s the only conclusion left open to them.)

The third reason why atheism is not a religion is that it does not, in any way, serve the primary purpose of religion. The thing you find common to all religions—the thing at the center of all religions—is putting man into his proper relation with all that is; with the cosmos, in the Greek sense of the word. Anyone who looks at the world sees that there is a hierarchy of being; that plants are more than dust and beasts are more than plants and human beings are more than beasts. But if you spend any time with human beings—and I mean literally any time—you will immediately know that human beings are not the most that can be. All that we can see and hear and smell and taste and touch in this world forms an arrow which does not point at us but does run through us, pointing at something else. The primary purpose of a religion is to acknowledge that and to get it right. Of course various religions get it right to various degrees; those who understand that it points to an uncreated creator who loved the world into existence out of nothing get it far more right than those who merely believe in powerful intelligences which are beyond ours. Though if you look carefully, even those who apparently don’t, often seem to have their suspicions that there’s something important they don’t know about. But be that as it may, all religions know that there is something more than man, and give their adherents a way of putting themselves below what they are below; of standing in a right relation to that which is above them. In short, the primary purpose of all religion is humility.

And this, atheism most certainly does not have. It doesn’t matter whether you define atheism as a positive denial or a passive lack; either way atheism gives you absolutely no way to be in a right relationship to anything above you, because it doesn’t believe in anything above you. Even worse, atheism has a strong tendency, at least in the west, to collapse the hierarchy of being in the other direction, too. It is no accident that pets are acquiring human rights and there are some fringe groups trying to sue for the release of zoo animals under the theory of habeas corpus. Without someone who intended to make something out of the constituent particles which make us up, there is ultimately no reason why any particular configuration of quarks and electrons should mean anything more than any other one; human beings are simply the cleverest of the beasts that crawl the earth, and the beasts are simply the most active of the dust which is imprisoned on the earth.

We each have our preferences, of course, but anyone with any wide experience of human beings knows that we don’t all have the same preferences, and since the misanthropes are dangerous and have good reason to lie to us, those who don’t look out for themselves quickly become the victims of those who do. Call it foreigners or racists or patriarchy or gynocentrism or rape culture or the disposable male or communism or capitalism or call it nature red in tooth and claw, if you want to be more poetic about it, but sooner or later you will find out that human beings, like the rest of the world, are dangerous.

Religious people know very well that other human beings are dangerous; there is no way in this world to get rid of temptation and sin. But religion gives the possibility of overcoming the collapsing in upon ourselves from which atheism offers no escape.

For some reason we always talk about pride puffing someone up, but this is almost the exact opposite of what it actually does. It’s an understandable mistake, but it is a mistake. Pride doesn’t puff the self up, it shrinks it down. It just shrinks the rest of the world down first.

In conclusion, I can see why my co-religionists would be tempted to say that atheism is a religion. There are atheist leaders who look for all the world like charismatic preachers and atheist organizations that serve no discernible secular purpose. Though not all atheists believe the same things, still, most believe such extremely similar things that they could form an identity on that basis. Individual atheists almost invariably hold unprovable dogmas with a blind certainty that makes the average Christian look like a skeptic. And so on; one could go on at length about how atheism looks like a religion. But all these are mere external trappings. Atheism is not a religion, which is a great pity because atheists would be far better off if it were.

Two Interesting Questions

On Twitter, @philomonty, who I believe is best described as an agnostic (he can’t tell whether nihilism or Catholicism is true), made two video requests. Here are the questions he gave me:

  1. If atheism is a cognitive defect, how may one relieve it?
  2. How can an atheist believe in Christ, when he does not know him? Not everyone has mystical experiences, so not everyone has a point of contact which establishes trust between persons, as seen in everyday life.

I suspect that I will tackle these in two separate videos, especially because the second is a question which applies to far more than just atheists. They’re also fairly big questions, so it will take me a while to work out how I want to answer them. 🙂

The first question is especially tricky because I believe there are several different kinds of cognitive defects which can lead to atheism. Not everyone is a mystic, but if a person who isn’t demands mystical experience as the condition for belief, he will go very wrong. If a person who is a mystic has mystical experiences but denies them, he will go very wrong, but in a different way. There are also people who are far too trusting of the culture they’re in, thinking that fitting into it is the fullness of being human, so they will necessarily reject anything which makes it impossible or even just harder to fit in. These, too, will go very wrong, but in a different way from the previous ones.

To some degree this is a reference to my friend Eve Keneinan’s view that atheism is primarily caused by some sort of cognitive defect, such as an inability to sense the numinous (basically, lacking a sensus divinitatis). Since I’ve never experienced that myself, I’m certain it can’t be the entire story, though to the degree that it is part of the story it would come under the category of non-mystics who demand mystical experience. Or, possibly, mystics who have been damaged by something, though I am very dubious about that possibility. God curtails the amount of evil possible in the world to what allows for good, after all, so while that is not a conclusive argument, it does seem likely to me that God would not permit anything to make it impossible for a person to believe in him.

Anyway, these are just some initial thoughts on the topic which I’ll be mulling over as I consider how to answer. Interesting questions.

The Dunning-Kruger Effect

(This is the script for my video about the Dunning-Kruger effect. While I wrote it to be read out loud by someone who inflects words like I do, i.e. by me, it should be pretty readable as text.)

Today we’re going to be looking at the Dunning-Kruger effect. This is the other topic requested by PickUpYourPantsPatrol—once again thanks for the request!—and if you’ve disagreed with anyone on the internet in the last few years, you’ve probably been accused of suffering from it.

Perhaps the best summary of the popular version of the Dunning-Kruger effect was given by John Cleese:

The problem with people like this is that they have no idea how stupid they are. You see, if you are very very stupid, how can you possibly realize that you are very very stupid? You’d have to be relatively intelligent to know how stupid you are. There’s a wonderful bit of research by a guy called David Dunning who’s pointed out that to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place. This means, if you’re absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.

There are plenty of things to say about this summary as well as the curious problem that if an idiot is talking to an intelligent person, absent reputation being available, there is a near-certainty that both will think the other an idiot. But before I get into any of that, I’d like to talk about the Dunning-Kruger study itself, because I read the paper which Dunning and Kruger published in 1999, and it’s quite interesting.

The first thing to note about the paper is that it actually discusses four studies which the researchers did, trying to test specific ideas about incompetence and self-evaluation which the paper itself points out were already common knowledge. For example, they have a very on-point quotation from Thomas Jefferson. But, they note, this common wisdom that fools often don’t know that they’re fools has never been rigorously tested in the field of psychology, so they did.

The second thing to note about this study is that—as I understand is very common in psychological studies—their research subjects were all students taking psychology courses who received extra credit for participating. Now, these four studies were conducted at Cornell University, and the subjects were all undergraduates, so generalizing to the larger population is immediately suspect, since there’s good reason to believe that undergraduates in an Ivy League university have more than a few things in common which they don’t share with the rest of humanity. This is especially the case because the researchers were testing self-evaluation of performance, which is something that Cornell undergraduates were selected for and have a lot invested in. They are, in some sense, the elite of society, or so at least I suspect most of them have been told, even if not every one of them believes it.

Moreover, the tests which they were given—which I’ll go into detail about in a minute—were all academic tests, given to people who were there because they had generally been good at academics. Ivy League undergraduates are perhaps the people most likely to have falsely high impressions of how good they are at academic tests. This is especially the case if any of these were freshman classes (they don’t say), since a freshman at an Ivy League school has impressed the admissions board but hasn’t had the opportunity to fail out yet.

So, right off the bat the general utility of this study in confirming popular wisdom is suspect; popular opinion may have to stand on its own. On the other hand, this may be nearly the perfect study to explain the phenomenon Nassim Nicholas Taleb described as Intellectual Yet Idiot—credentialed people who have the role of intellectuals yet little of the knowledge and none of the wisdom for acting the part.

Be that as it may, let’s look at the four studies described. The first study is in many ways the strangest, since it was a test of evaluating humor. They created a compilation of 30 jokes from several sources, then had a panel of 8 professional comedians rate these jokes on a scale from 1 to 11. After throwing out one outlier, they took the mean answers as the “correct” answers, then gave the same test to “65 cornell undergraduates from a variety of courses in psychology who earned extra credit for their participation”.

They found that the people in the bottom quartile of test scores, whose average rank is by definition at the twelfth percentile, guessed (on average) that their rank was the 66th percentile. The bottom three quartiles overestimated their rank, while the top quartile underestimated theirs, thinking that they were in the (eyeballing it from the graph) 75th percentile when in fact (again, by definition) they were in the 88th.

This is, I think, the least interesting of the studies, first because the way they came up with “right” and “wrong” answers is very suspect, and second because this isn’t necessarily about mis-estimation of a person’s own ability, but could be entirely about mis-estimating their peers’ ability. The fact that everyone put their average rank in the class at between the 66th and 75th percentiles may just mean that, in default of knowing how they did, Cornell students are used to guessing that they got somewhere between a B- and a B+. Given that they were admitted to Cornell, that guess may have a lot of history behind it to back it up.
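(If you want to check the “by definition” figures above, the arithmetic is trivial; here is a quick sketch in Python, purely for illustration: a quartile’s average rank is just the midpoint of the percentile range it spans.)

```python
# Average percentile rank of each quartile: the midpoint of the range it spans.
quartiles = [(0, 25), (25, 50), (50, 75), (75, 100)]
for low, high in quartiles:
    print(f"{low}-{high}: mean rank = {(low + high) / 2}")
# bottom quartile -> 12.5 (the paper's "twelfth percentile")
# top quartile    -> 87.5 (the paper's "88th")
```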

The next test, though unfortunately only given to 45 Cornell students, is far more interesting both because it used 20 questions on logical reasoning taken from an LSAT prep book—so we’re dealing with questions where there is an unambiguously right answer—and because in addition to asking students how they thought they ranked, they asked the students how many questions they thought that they got right. It’s that last part that’s really interesting, because that’s a far more direct measure of how much the students thought that they knew. And in this case, the bottom quartile thought that they got 14.2 questions right while they actually got 9.6 right. The top quartile, by contrast, thought that they got 14 correct when they actually got 16.9 correct.

So, first, the effect does in fact hold up with unambiguous answers. The bottom quartile of performers thought that they got more questions right than they did. So far, so good. But the magnitude of the error is not nearly as great as it was for the ranking error, especially for the bottom quartile. Speaking loosely, the bottom quartile knew half of the material and thought that they knew three quarters of it. That is a significant error, in the sense of being a meaningful error, but at the same time they thought that they knew about 48% more than they did, not 48,000% more than they did. The 11 or so Cornell undergraduates in the bottom quartile did have an over-inflated sense of their ability, to be sure, but they also had a basic competence in the field. To put this in perspective, the top quartile only scored 76% better than the bottom quartile.
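(To make the arithmetic behind those percentages explicit, here is a quick sketch in Python; the numbers are just the reported averages from above, and the snippet is purely illustrative.)

```python
# Reported averages for the logical-reasoning test (20 questions):
bottom_actual = 9.6      # questions the bottom quartile actually got right
bottom_estimated = 14.2  # questions they thought they got right
top_actual = 16.9        # questions the top quartile actually got right

# How much more the bottom quartile thought it knew than it did:
print(f"{(bottom_estimated - bottom_actual) / bottom_actual:.0%}")  # ~48%

# How much better the top quartile actually scored than the bottom:
print(f"{(top_actual - bottom_actual) / bottom_actual:.0%}")  # ~76%
```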

The next study was on 84 Cornell undergrads who were given a 20-question test of standard English grammar taken from a National Teacher Examination prep guide. This replicated the basic findings of the previous study, with the bottom quartile estimating that they got 12.9 questions right versus a real score of 9.2. (Interestingly, the top quartile very slightly over-estimated their score as 16.9 when it was actually 16.4.) Again, all these are averages so the numbers are a little wonky, but anyway this time they over-estimated their performance by 3.7 points, or 40%. And again, they got close to half the questions right, so this isn’t really a test of people who are incompetent.

There’s another thing to consider in both studies, which is how many questions the students thought they got wrong. In the first study they estimated 5.4 errors while in the second 7.1 errors, and while these were under-estimates, they were correct that they did in fact get that many wrong. Unfortunately these are aggregate numbers (asked after they handed the test in, I believe) so we don’t know their accuracy on gauging whether they got particular questions wrong, but in the first test they correctly estimated about 40% of their error and on the second test they correctly estimated about 65% of their error. That is, while they did unequivocally have an over-inflated sense of their performance, they were not wildly unrealistic about how much they knew. But of course these are both subjects they had studied in the past, and their test scores did demonstrate at least basic competence with them.

The fourth study is more interesting, in part because it was on a more esoteric subject: it was a 10 question test, given to 140 Cornell undergrads, about set selection. Each problem described 4 cards and gave a rule which they might match. The question was which card or cards needed to be flipped over to determine whether those cards match the rule. Each question was like that, so we can see why they only asked ten questions.

They were asked to rate how they did in the usual way, but then half of them were given a short packet that took about 10 minutes to read explaining how to do these problems, while the other half was given an unrelated filler task that also took about 10 minutes. They were then asked to rate their performance again, and in fact the group who learned how to do the problems did revise their estimate of their performance, while the other group didn’t change it very much.

And in this test we actually see a gross mis-estimation of ability by the incompetent. The bottom quartile scored on average 0.3 questions correct, but initially thought that they had gotten about 5.5 questions correct. For reference, the top quartile initially thought that they had gotten 8.9 questions correct while they had in fact gotten all ten correct. And after the training, the untrained bottom quartile slightly raised their estimation of their score (by six tenths of a question), but among the trained people the bottom quartile reduced their estimation by 4.3 questions. (In fact the two groups had slightly different performances which I averaged together; so the bottom quartile of the trained group estimated that they got exactly one question right.)

This fourth study, it seems to me, is finally more of a real test of what everyone wants the Dunning-Kruger effect to be about. An average of 0.3 questions right corresponds roughly to 11 of the 35 people in the bottom quartile getting one question right while the rest got every question wrong. The incompetent people were actually incompetent. Further, they over-estimated their performance by over 1800%. So here, finally, we come to the substance of the quote from John Cleese, right?
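(And once more, a quick illustrative check of those numbers in Python, from the averages reported above.)

```python
# Reported averages for the card-selection test (10 questions):
actual, estimated = 0.3, 5.5
quartile_size = 35  # 140 students split into quartiles

# 0.3 right on average over 35 people is about 10-11 correct answers total:
print(actual * quartile_size)  # 10.5

# Estimated score as a multiple of the actual score:
print(estimated / actual)  # ~18.3x, i.e. an over-estimate on the order of 1800%
```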

Well… maybe. There are two reasons I’m hesitant to say so, though. The first is the fact that these are still all Cornell students, so they are people who are used to being above average and doing well on tests and so forth. Moreover, virtually all of them would never have been outside of academia, so it is very likely that they’ve never encountered a test which was not designed to be passable by most people. If nothing else, it doesn’t reflect well on a teacher if most of his class gets a failing grade. And probably most importantly, the skills necessary to solve these problems are fairly close to the sort of skills that Ivy League undergrads are supposed to have, so the similarity between this skillset, at which they were incompetent, and skillsets at which they are presumably competent might well have misled them.

The second reason I’m hesitant to say that this study confirms the John Cleese quote is that the incompetent people estimated that they got 55% of the questions right, not 95% of the questions right. That is to say, incompetent people thought that they were merely competent. They didn’t think that they were experts.

In the conclusion of the paper, Dunning and Kruger talked about some limitations of their study, which I will quote because it’s well written and I want to do them justice.

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods.

They go on to note that in some domains, knowledge is largely the substance of skill, like in grammar, whereas in other places knowledge and skill are not the same thing, like basketball.

They also note that there is a minimum amount of knowledge required to mistake oneself for competent. As the authors say:

Most people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct an 8-cylinder engine, or diagnose acute disseminated encephalomyelitis.

So where does this leave us with regard to the quote from John Cleese? I think that the real issue is not so much about the inability of the incompetent to estimate their ability, but the inability of the incompetent to reconcile new ideas with what they do actually know. Idiots may not know much, but they still know some things. They’re not rocks. When a learned person tells them something, they are prone to reject it not because they think that they already know everything, but because it seems to contradict the few things they are sure of.

There is a complex interplay between intelligence and education—and I’m talking about education, mind, not mere schooling—where intelligence allows one to see distinctions and connections quickly, while education gives one the framework of what things there are that can be distinguished or connected. If a person lacks the one or the other—and especially if they lack both—understanding new things becomes very difficult because it is hard to connect what was said to what is already known, as well as to distinguish it from possible contradictions to what is already known. If the learned, intelligent person isn’t known by reputation to the idiot, the idiot has no way of knowing whether the things said don’t make sense to him because they are nonsense or because they make too much sense for him to follow, and a little experience of the world is enough to make many if not most people sufficiently cynical to assume the former.

And I think that perhaps the best way to see the difference between this and the Dunning-Kruger effect is by considering the second half of the fourth experiment: the incompetent people learned how to do what they initially couldn’t. That is, after training they became competent. That is not, in general, our experience of idiots.

Until next time, may you hit everything you aim at.

You Have the Right to Remain Innocent

I recently saw the news that the defense attorney / law professor who made the videos Don’t Talk to Cops (part 1, part 2) wrote a book on the subject. It’s called You Have the Right to Remain Innocent, and it’s a short and easy-to-read book which covers much of the same material, but in greater depth, with updates for recent caselaw, and without the speed-talking.

Since the basic thesis of the book is stated in its title, which is also a reasonable summary of the book’s actionable advice, it is fair to ask what is in the book that justifies actually opening it and reading the pages. There’s actually a lot.

The book does start with some caveats, perhaps most notably that he clarifies he’s talking about speaking with the police when they come to you, unsolicited, to ask you questions about the past. It is both a legal requirement and good sense to readily comply with a request to identify yourself and explain what you are doing at that moment, in the place where you currently are. One of his examples: if you are breaking into your own house because you locked yourself out and a policeman asks you what you are doing, do tell him that this is your house and you don’t have your key. He mentions some other cases when you must talk with the police.

The other very notable caveat is that he takes some pains to point out that every member of society owes a great debt to the men and women who serve as police, who take personal risk to do a difficult job that keeps us safe. Throughout the book, he makes it clear that he isn’t talking about bad people, but (in the main) good people in a bad situation, which is the present criminal legal system in the United States. It is a system which sometimes convicts innocent people along with guilty people, and for reasons he makes clear throughout the book, his primary concern is giving innocent people the tools needed to avoid the pitfalls of this dangerous system. Good people make mistakes, and the mistake of a police officer or a prosecutor or a judge can cost an innocent person decades in prison. (He uses more than a few cases where the person convicted was later conclusively proved innocent by DNA evidence (often decades later) to show how wrong things can go for innocent people.)

The book has more than a few interesting insights into problems with the criminal justice system—perhaps most notably the way that no living person has any idea even how many crimes are defined by the law, let alone what they all are—but I think its greatest value lies in the examination of particular cases where he goes on to show how even very trivial statements, which are true, can become damning evidence in light of other things which a person may not know and has no control over. In one case, a man’s admission that he had dated a woman some time before the crime, in the neighborhood where the crime happened, helped send him to prison; he was later exonerated by DNA evidence. Coincidences happen, but not all juries believe that they do.

And it is this sort of thing which is the main value of reading the entire book, I think. It is so very easy to slip into the mindset of wanting to give in to the urge to cooperate, to be helpful, to be willing to answer any question which is not directly incriminating (and if I’m innocent, how could any question be directly incriminating?), a mindset which takes more than a little beating down by seeing over and over again how even minor admissions of completely true and innocent things can be disastrous. The book presents information, but I think that reading it equally constitutes training. If one were ever to face a police interview it would be a very stressful situation, and when stressed we tend to forget what we know and fall back on our habitual reactions. Only by training ourselves through seeing many situations we could all too easily be in is it likely that we will remember to do what we should.

The final two chapters of the book, which are much shorter than the first, deal with the specifics of how to go about exercising one’s right to remain innocent in a practical sense. He covers many instances of how people have accidentally incriminated themselves when invoking their fifth amendment right, as well as how people have accidentally failed at refusing to talk to the police and asking for a lawyer. And again, it’s not so much knowing what to do that’s the real benefit of reading this book, but learning what not to do, and why not to do it.

The book is a short, easy read which is well written, and I think valuable for anyone living in America. I found it a valuable read even after watching the videos I linked above, and strongly recommend it.

Who’s Afraid of the Dark

I recently read Russell Newquist’s short story, Who’s Afraid of the Dark. As you may recall, he recently reviewed my novel, Ordinary Superheroes, so in gratitude I made the time to read his short story sooner rather than later (with three small children in the house, I have very little time for leisure reading these days). I’m glad that I did, and I’m really looking forward to reading the stories to which this short story serves as a rather cunning introduction.

I’m really not sure how to review this short story without revealing any of the surprises in it, so let me apologize in advance for this review being a little oblique, but since I’ve already given away that it’s not simply what it seems, let me emphasize that it’s really not what it seems at first: it’s quite a lot more.

The beginning and middle of the story are suspenseful, while the ending both satisfies and promises much larger stories to come. Stories on very interesting subjects.

If you like stories in which people who have a real chance of winning fight monsters, this story is likely to be for you. Mr. Newquist clearly understands two important ingredients in a good story of humanity fighting monsters: (1) this must always take place in a fundamentally good world, that is, one where it is possible, with blood and sweat and tears and sacrifice, to actually achieve something good and (2) the monsters must be genuinely dangerous and scary.

Update: Russell told me that there is a Peter Bishop story in Between the Wall and the Fire, which I just bought.

Why I Cringe When People Criticize Capitalism (in America)

Every time I hear a fellow Christian (usually Catholic, often someone with the good sense to be a fan of G.K. Chesterton) criticize capitalism, I cringe, but not for the reason I suspect most of them would expect. Why I cringe will take a little explanation, but it’s rooted in the fact that there are actually two very different things which go by the name capitalism.

The first is a theory proposed by Adam Smith that, to oversimplify and engage in some revisionist history which is not fair to him but which would take too long to go into further, holds that virtue is unreliable: if we can harness vice to do the work of virtue, we can get the same effect much more reliably. Thus if we appeal to men’s self-interest, they will do what they ought with more vigor than if we appealed to their duty and love of their fellow man. Immanuel Kant’s essay Perpetual Peace has a section which may be taken as a summary of this attitude:

The problem of the formation of the state, hard as it may sound, is not insoluble, even for a race of devils, granted that they have intelligence. It may be put thus:—“Given a multitude of rational beings who, in a body, require general laws for their own preservation, but each of whom, as an individual, is secretly inclined to exempt himself from this restraint: how are we to order their affairs and how establish for them a constitution such that, although their private dispositions may be really antagonistic, they may yet so act as a check upon one another, that, in their public relations, the effect is the same as if they had no such evil sentiments.” Such a problem must be capable of solution. For it deals, not with the moral reformation of mankind, but only with the mechanism of nature; and the problem is to learn how this mechanism of nature can be applied to men, in order so to regulate the antagonism of conflicting interests in a people that they may even compel one another to submit to compulsory laws and thus necessarily bring about the state of peace in which laws have force.

Capitalism in this sense was this general problem applied to economics: we need men to work, but all men are lazy. We can try to appeal to men to be better, but it is much simpler and more reliable to show them how hard work will satisfy their greed.

This version of capitalism is a terrible thing, and by treating men as devils has a tendency to degrade men into a race of devils. But there is something important to note about it, which is that it doesn’t really demand much of government or of men. While it appeals to men’s greed, it does not impose a requirement that a craftsman charge an exorbitant price rather than a just price. It does not forbid a man taking a portion of his just profits and giving it to the poor. It tends to degrade men into devils, but it does not produce a form of government which demands that they become devils.

That was left to Marxism, which by its materialism demanded that all men surrender their souls to the state. Marxism is a theory of human beings just as wrong as the Capitalism of the Enlightenment, but it demands a form of government which is far less compatible with human virtue. Further, it demands a form of government which is intrinsically incompatible with natural justice—depriving, as it does, all men of the property necessary to fulfill their obligations to their family and to their neighbors. Marxism inherently demands that all to whom it applies become a race of devils.

Of course, Marxism was never historically realized in its fullness since, as Roger Scruton observed, it takes an infinite amount of force to make people do what is impossible. But enough force was applied to create the approximation of Marxism known as The Soviet Union (though according to a Russian friend of mine who escaped shortly before the Soviet Union collapsed, a more accurate translation would have been “The Unified Union of United Allies”). This global superpower was (at least apparently) bent on conquering the world in the name of Marx—well, in the name of Lenin, or communism, or The People; OK, at least bent on conquering the world. And to a marxist, who doesn’t really believe in personal autonomy and thus doesn’t believe in personal virtue, everyone else looks like a Capitalist, in the original sense of the word, since anything which is individual must inherently be greed.

So they called Americans capitalists. But if the devils in hell spit some criticism at you, it is only natural to take it as a compliment, and partly because of this and partly for lack of a better term, Americans started calling themselves capitalists. If the people with the overpopulated death camps for political prisoners in the frozen wastelands of Siberia despise us for being capitalists, then being a capitalist must be a pretty good thing. But in embracing the term capitalist, people were not thinking of Adam Smith’s economic theory or the problem Kant wrestled with in how to get a race of devils to cooperate; they were thinking of what they were, and just using the name capitalist to describe that.

And here’s where we come to the part that makes me cringe when I hear fellow Christians complain about Capitalism. The United States of America has had many sins, but it has never been capitalist in the philosophical sense. Much of what became The United States was founded as religious colonies, though to be sure there were economic colonies as well. But the economic colonies, which had all of the vices that unsupervised people tend to, were still composed of religious people who at least acknowledged the primacy of virtue over vice in theory. And for all the problems with protestantism, the famous “Protestant Work Ethic” was the diametric opposite of philosophical capitalism. The whole idea of the protestant work ethic is that men should work far beyond what is needed, because it is virtue and because idleness is dangerous. Perhaps it was always more of a theory than a practice, but even so it was not the capitalist theory that men should work to satisfy their greed.

For perhaps the first century after the founding of The United States, it was a frontier nation in which people expanded and moved around with fairly low population densities. It takes time to set up governments, and small groups of people can resolve their own differences well enough, most of the time, so the paucity of government as we’re used to it today (and, in a different form, as people would have been used to in Europe in the middle ages) was largely due to the historical accident of low population densities, and not to any sort of philosophical ideal that greed is the highest good, making government practically unnecessary except for contract enforcement.

And while it is true that this environment gave birth to the robber barons who made a great deal of money treating their fellow men like dirt, it also gave rise to trust busters and government regulation designed to curb the vices of men who did not feel like practicing even minimal virtue to their fellow man. Laws and regulations take time to develop, especially in a land without computers and cell phone cameras; before the advent of radio it took more than a little time to convince many people of some proposition because the skilled orators could only do the convincing one crowd at a time.

Moreover, the United States has never had a government free from corruption, but powerful men buying off politicians was not what the United States was supposed to be; all things in this fallen world are degenerate versions of themselves. Slowness to act on common principles in a fallen world does not mean that a people does not hold those principles, only that hard things like overcoming corruption are difficult and time consuming to do.

But throughout the history of the United States, if you walked up to the average citizen and asked him, “ought we, as a people, to encourage men to be honest, hard working, and generous, or ought we to show each man that at least the first two are often in his self-interest and then encourage him to be as selfish and greedy as possible?” you would have had to ask a great many people indeed to come across someone who would cheerfully give you the second answer. Being willing to give that second answer is largely a modern degeneracy of secularists who know only enough economics and history to be dangerous, and who for the most part think that you’re asking whether the government should micro-manage people’s lives to force them to be honest, hard working, and generous. Americans have many vices, but the least reliable way possible to find out what they are is to ask us.

I will grant that philosophical capitalism is also, to some degree, what is proposed by advertising. Indulge yourself! It’s sinfully delicious! You’re worth it! You deserve it! Everything is about making you happy!

I think that this may be why I cringe the most when my fellow Christians complain about our capitalist society; they should have learned by now not to believe everything they see on television.

Rambling about Online Discourse

I used to have a much better opinion of atheists before I talked with so many of them on twitter and youtube. And to clarify, it’s not that my opinion of atheists in general has gone down, only that I’ve come to realize that the atheists I had been in contact with before were a sub-set of all atheists. I had lucked into especially good ones. There are honest, decent people who don’t believe in God, but I’m coming to believe that they’re the exception, not the rule. (To be clear, each person must be dealt with as an individual, and never as merely an exemplar of a group, so whenever you come across an atheist, you must deal with him as him, and not as “an atheist”.) The longer I spend online, the more I come to believe that honest people may be quite atypical among atheists.

It does get tiring being insulted by dimwits on the internet, of course—and the average twitter/youtube atheist seems like they’d have trouble passing high school, at least if they had to take all honors classes, so poor is their grasp of entirely secular subjects—but I really don’t think that’s why my opinion is shifting. It’s really that the average twitter/youtube atheist says things which they clearly don’t mean and claims to believe things which they clearly don’t believe, and then takes advantage of the standard rules of politeness in order to try to force others into being complicit in their… if not exactly lies, then at least their reckless and culpable disregard for the truth.

Take for example the trope about “atheism is merely a lack of belief” which actually means, “I’m going to act like there’s no God even though I don’t believe that’s the case”. One could make an argument for probabilistic action—that when we don’t know something we have to operate on our best guesses—but even if that’s the route one went (and lack-atheists rarely argue this explicitly) one still has to make the positive case that the probability for action is above the threshold, or one is acting purely irrationally. Which is, in fact, what lack-atheists usually claim if you push them to be explicit. They don’t think, they just act; reason doesn’t actually work anyway; we’re just the most clever of the beasts who crawl the earth; etc. Which, OK, fine, but if one abjures all truth claims, one shouldn’t go on to make truth claims. But they almost always do, and expect to be taken seriously.

And that’s the part that’s really so frustrating. It’s that they demand that one take part in their lies—what else should we call truth claims they make but don’t believe? And then sometimes they’re even more explicit. I met one fellow who claimed that Jesus said we have to take the bible literally. And here’s the thing: there is no benefit of the doubt to give the guy. If he was beaten in the head with a tire iron for two hours by a team of professional strong-men, he wouldn’t be stupid enough to think that Jesus said, “you have to take the bible literally”. Because here’s the thing about literal interpretations: they’re literal. If Jesus said you have to take everything in the bible literally, it would include what he said, which to literally mean “you must take everything in the bible literally” would have to be phrased, “you must take everything in the bible literally”. Alternate phrasings would of course be fine, “you must interpret everything in the bible literally” etc. But it would have to be clear and unambiguous and require no interpretation of any kind in order to be an instruction to take it (and everything else in the bible) as clear and unambiguous and requiring no interpretation. Even the most cursory familiarity with the bible—and if one is making claims that a book says something, one has a responsibility to find out that it said it—is sufficient to know that there are no such passages. There was literally no honest way this guy could have claimed what he did. And it seems very likely that he was lying as boldly as he did because it is rude to call him a liar. But when someone unambiguously is a liar, what else are we supposed to do? It coarsens discourse, but to treat a liar like he’s honest is itself dishonest. As Tycho from Penny Arcade said:

You aren’t supposed to call people liars; it’s one of those things you aren’t supposed to do.  It seems like a rule cooked up by liars, frankly.  But what if a person dissembles madly, and writhes rhetorically, in the service of a goal oblique to their stated aims?  I see no reason to invent another word.

It’s really normal for Christians to go out of their way to try to make out atheists as being merely misguided, the victims of bad Christians who didn’t teach them well, etc., and I certainly get the impulse. There are some people who are like that. But at the end of the day, when somebody professes something obviously false, like that we don’t have free will, or that reason doesn’t work, or whatever it is, they’re still human and still have a duty to actually investigate the world and try to be right about it. And so the best case that you can make out for someone saying things like this, then ignoring them and moving on, is that they’re doing no better a job of being honest than you could expect of them given how badly they were raised. Which may be true, but so what? We’re not their judges. It’s not our job to judge whether they’re culpable for their lies; it’s first to not be complicit and second, if possible, to help them to stop lying. And I don’t think that failing at step 1 is likely to help succeed at step 2.

Monotheism vs Polytheism

A fascinating description of the history of the words polytheism and monotheism:

Last Eden

I have long been under the impression that “monotheism” and “polytheism” are two of the most unfortunate words in English, or in any language.  The two words present one who hears them with a near intellectual necessity to think the concepts are speaking about the same sort of thing, in exactly the same way, as “monosyllabic” means a word of only one syllable, and “polysyllabic” means a word with more than one syllable.

“Polytheism” does mean “many gods.” However, “monotheism” does not mean “only one god,” but rather “God, rather than the gods.” The two terms seem to be saying something parallel, on the same level, but they are not. This generates endless confusion, because monotheists are NOTHING AT ALL LIKE POLYTHEISTS. In fact, it may be the case that all polytheists ARE ALSO monotheists—at least in one sense of the term.

I was trying to track down the…


Pernicious Modernism: Cartesian Dishonesty

A very interesting in-depth look into some of Descartes’ writing which I haven’t read.

Last Eden

Modernism is pernicious.

Just about everything wrong with the world today comes from the pernicious thought of modernity, particularly the thought of self-named ‘Enlightenment.’

Most people would understand me if I said “Postmodernism is pernicious,” since they understand that the thoroughgoing relativism and subjectivism of postmodernism, the replacement of truth with rhetoric, in the belief that all truth claims reduce to attempts to assert power or domination, its nihilism, are all pernicious.

Supposing we divide Western history into periods, as is traditional, we might say there are three or four: Antiquity, Christendom, Modernity and, perhaps, Postmodernity. One reason to question whether Postmodernity is really distinct from Modernity, though, is that Postmodernity is simply Hypermodernity. It is Modernity taken to its conclusion according to its inner logic and nature.

This is not true of the other epochs.  Christendom was able to incorporate much of classical Antiquity, but it both added things which were…



An excellent explanation of why falsificationism failed as a theory of what counts as scientific knowledge.

Last Eden

One problem with professional philosophy—and this holds for some of the sciences too, like physics and biology—is that the subject matter is difficult to master and requires a great deal of time and technical training.

This does not, however, stop philosophical concepts from spilling over into popular discourse, where they are usually poorly understood, or even more commonly, completely misunderstood.

When I hear the terms “falsification” or “falsificationism” thrown around wildly, I experience something much like what I imagine a biologist experiences when he hears the term “evolution” being wildly and recklessly misapplied in contexts where it is misleading or meaningless.

What is falsificationism? It is a specific answer to a specific philosophical question given by a specific philosopher to solve a specific problem—and it is a failed attempt at that, which left in its wake a sometimes useful methodological tool and an unfortunate extra-philosophical cult following.

The philosopher in…


Debunking Believe-or-Burn

This is the script from my video debunking believe-or-burn. It was written to be read aloud, but it should be pretty readable. Or you could just listen to it.

Today we’re going to be looking at how abysmally wrong the idea of “believe or burn” is—an idea I prefer to render as “say the magic words or burn.” And to be clear, I mean wrong, not merely that I don’t like it or that it’s just my opinion. I’m Catholic, not evangelical, so I’m talking about how it contradicts the consistent teaching of the church since its inception 2000 years ago (and hence is also the position of the Eastern Orthodox, the Copts, etc.), and moreover how one can rationally see why “say the magic words or burn” cannot be true.

I’m not going to spend time explaining why non-Christian religions don’t believe you have to say the magic words or burn, because for most of them it’s not even relevant. In Hinduism, heavens and hells are related to your karma, not to your beliefs, and they’re all temporary anyway—as the story goes, the ants have all been Indra at some point. In Buddhism you’re trapped in the cycle of reincarnation and the whole point is to escape. To the degree that there even is a concept of hell in Buddhism, you’re there now and maybe you can get out. Many forms of paganism don’t even believe in an afterlife, and where they do—and what you do in life affects what happens to you in the afterlife—what happens to you is largely based on how virtuously you lived in society, not on worshipping any particular gods. Animistic religions are often similar to pagan religions, or else they hold that the dead stick around as spirits and watch over the living. For the monotheistic religions, few of them have a well-defined theology on this point. Their attitude tends to be, “here is the way to be good, it’s bad to be evil, and for everyone else, well, that’s not a practical question.” For most of the world’s religions, “say the magic words or burn,” isn’t even wrong. And Islam is something of an exception to this, but I’m not going to get into Islam because the Quran doesn’t unambiguously answer this question and after al-Ghazali’s triumph over the philosophers in the 11th century, there really isn’t any such thing as Islamic theology in the same sense that you have Christian theology. Christianity holds human reason, being finite, to be unable to comprehend God, but to be able to reason correctly about God within its limits. Since al-Ghazali wrote The Incoherence of the Philosophers, the trend in Islam has been to deny that human reason can say anything about God beyond what he said about himself in the Quran. As such, any question not directly and unambiguously answered in the Quran—which, recall, is poetry—is not really something you can reason about. So as a matter of practicality I think Islam should be grouped with the other monotheisms which hold the question of what happens to non-believers acting in good faith to be impractical. And in any event there are hadith and a passage in the Quran which do talk about some Jews and Christians entering paradise, so make of that what you will.

There isn’t an official name for the doctrine of “say the magic words or burn,” but I think it’s best known from fundamentalists who say that anyone who doesn’t believe will burn in hell. The usual form, I think, is that everyone who isn’t a Christian will burn in hell, for some definition of Christian that excludes Roman Catholics, Eastern Orthodox, Anglicans, and anyone else who doesn’t think that the King James version of the bible was faxed down from heaven and is the sole authority in human affairs. You generally prove that you’re a Christian in this sense by saying, “Jesus Christ is my personal lord and savior,” but there’s no requirement that you understand what any of that means, so it functions exactly like a magical incantation.

As I discussed in my video on fundamentalists, when they demand people speak the magic words, what they’re asking for is not in any sense a real religious formulation, but actually a loyalty pledge to the dominant local culture (which, in their case, is fundamentalist; all tribes have a way of pledging loyalty). But the concept of “say the magic words or burn” has a broader background than fundamentalism, going all the way back to the earliest Protestant reformers and being, more or less, a direct consequence of what Martin Luther and John Calvin meant by the doctrine of Sola Fide.

Before I get into the origin of “say the magic words or burn,” let me give an overly brief explanation of what salvation actually means, to make sure we’re on the same page. And to do that, I have to start with what sin is: sin means that we have made ourselves less than what we are. For example, we were given language so that we could communicate truth. When we lie, not only do we fail to live up to the good we can do, we also damage our ability to tell the truth in the future. Lying (like all vices) all too easily becomes a habit. We have hurt others and damaged ourselves. Happiness consists of being fully ourselves, and so in order to be happy we must be fixed. This is, over-simplified, what it means to say that we need salvation. Christianity holds that Jesus has done the work of that salvation, and that after death, if we accept God’s offer, we will be united with it and so be fixed; thus, being made perfect, we will be capable of eternal happiness. That’s salvation.

Some amount of belief is obviously necessary to this, because if you don’t believe the world is good, you will not seek to be yourself. This is why nihilists like pickup artists are so miserable: they are human, but trying to live life like some sort of sex machine. They do lots of things that do them no good, and leave off doing lots of things that would do them good. Action follows belief, and so belief helps one to live life well. We all have at least some sense of what is true, though; or, in more classical language, the natural law is written on all men’s hearts. It is thus possible for a person to do his best to be good, within the limitations of what he knows to be good. God desires the good of all of his creatures, and while we may not be able to see how a person who does some good things, and some evil things under the misapprehension that they are good, can be saved, we have faith that God can do what men can’t. Besides, it doesn’t seem likely that God would permit errors to occur if they couldn’t be overcome. While we don’t know who will be saved, it is permissible to hope that all will be saved. As it says in the Catechism of the Catholic Church, “Those who, through no fault of their own, do not know the Gospel of Christ or his Church, but who nevertheless seek God with a sincere heart, and, moved by grace, try in their actions to do his will as they know it through the dictates of their conscience – those too may achieve eternal salvation.”

OK, so given that, where did the evil and insane idea of “say the magic words or burn” come from? Well, Sola Fide originated with Martin Luther, who, as legend has it, was scrupulous and couldn’t see how he could ever be good enough to enter heaven (I say “as legend has it” because this may be an overly sympathetic telling). For some reason he couldn’t do his best and trust God for the rest, so he needed some alternative to make himself feel better. Unfortunately, being Christian, he was stuck with the word faith, which in the context of Christianity means trusting God. Martin Luther’s solution was to redefine the word faith to mean—well, he wasn’t exactly consistent, but at least much of the time he used it to mean something to the effect of “a pledge of allegiance”—basically, a promise of loyalty. The problem with that is that pledging your allegiance is just words. There’s even a parable Jesus told about this very thing: a man had two sons and told them to go work in his fields. The one son said no, but later thought better of it and went to work in the fields. The other said, “yes, sir,” but didn’t go. Which did his father’s will? And please note, I’m not citing that to proof-text that Martin Luther was wrong; one bible passage with no context proves nothing, and Martin Luther was obviously wrong anyway. I’m just mentioning this parable because it’s an excellent illustration of the point about actions versus words. But as a side note, it’s also an excellent illustration of why mainline protestants often have relatively little in common with Martin Luther, and why it was left to the fundamentalists to really go whole-hog on Martin Luther’s theology: it was a direct contradiction of what Jesus himself taught.

John Calvin also had a hand in “say the magic words or burn,” though his influence was a bit different from Martin Luther’s. Though Luther and Calvin did agree on many points, they tended to agree for different reasons. While Martin Luther simply repudiated free will and the efficacy of reason—more or less believing that they never existed—John Calvin denied them because of the fall of man. According to Calvin, man was free and his reason worked before the first sin, but all that was destroyed with the first sin, resulting in the total depravity of man. Whereas Martin Luther thought that free will was nonsensical even as a concept, John Calvin understood what it meant but merely denied that we still have it. Ironically, John Calvin’s doctrines being a little more moderate than Martin Luther’s probably resulted in their having a much larger impact on the world; you had to be basically crazy to agree with Martin Luther, while you only needed to be deeply pessimistic to agree with John Calvin. Luther held that God was the author of evil, while Calvin at least said that all of the evil was a just punishment for how bad the first sin was. If outsiders can’t readily tell the difference between Calvin’s idea of God and the orthodox idea of the devil, insiders can’t even tell the difference between them in Martin Luther’s theology. Luther literally said that he had more faith than anyone else because he could believe that God is good despite choosing to damn so many and save so few. The rest of us, who don’t even try to believe blatant logical contradictions about God, just don’t measure up. In the history of the world, Martin Luther is truly something special.

However, since both Luther and Calvin denied that people today have free will, Sola Fide necessarily took on a very strange meaning. Even a pledge of allegiance can’t do anything if you’re not the one who made it. So faith ends up becoming, especially for Calvin, just a sign that you will be saved. The thing is, while this is logically consistent—I mean, it may contradict common sense, but it doesn’t contradict itself—it isn’t psychologically stable. No one takes determinism seriously. The closest idea which is at least a little psychologically stable is that God is really just a god, if a really powerful one, so pledging allegiance is like becoming a citizen of a powerful, wealthy country: you’ll probably be safe and rich, but if you commit a crime you might spend some time in jail or even be deported. I realize that’s not the typical metaphor, but it’s fairly apt, and no one born in the last several hundred years has an intuitive understanding of what a feudal overlord is. This understanding of Sola Fide can’t be reconciled with Christianity, the whole point of which is to take seriously that God is the creator of the entire world and thus stands apart from it and loves it all. But it can plug into our instinct to be part of a tribe, which is why, so long as you don’t think about it, it can be a stable belief.

So we come again to the loyalty pledge to the group—in a sense we have to, because that is all a statement of belief without underlying intellectual belief ever can be—but with this crucial difference: whereas the fundamentalist is generally demanding loyalty to the immediate secular culture, the calvinist-inspired person can be pledging loyalty to something which transcends the immediate culture. I don’t want to oversell this, because every culture specific enough that a person can live in it is always a subculture within a larger culture. But even so, the calvinist-inspired magic-words-or-burn approach is not necessarily local. It is possible to be the only person in an entire city who is on the team, just like it’s possible to be the only Frenchman in Detroit. As such, this form of magic-words-or-burn can have a strong appeal to anyone who feels themselves an outsider.

And the two forms of magic-words-or-burn are not very far apart; each can easily become the other as circumstances dictate. And it should be borne in mind that one of those circumstances is raising children, because a problem which every parent has is teaching their children to be a part of their culture. In this fallen world, no culture is fully human, and, equally problematic, no human is fully human, so the result is that child and culture will always conflict. Beatings work somewhat, but getting buy-in from the child is much easier on the arms and vocal cords, and in the hands of less-than-perfect parents, anything which can be used to tame their children probably will be.

This would normally, I think, be a suitable conclusion to this video, but unfortunately salvation seems to be a subject on which people are desperate to make some error of exaggeration, so if we rule out the idea that beliefs are the only things that matter, many people will run for the opposite side and try to jump off the cliff of beliefs not mattering at all. Or in other words: if salvation is possible for pagans, why should a Christian preach to them?

The short answer is that the truth is better for people than mistakes, even when the mistakes aren’t deadly. This is because happiness consists in being maximally ourselves, and the only thing which allows us to do that is the truth. Silly examples are always clearer, so consider a man who thinks that he’s a tree and so stands outside with his bare feet in the dirt, arms outspread, motionless, trying to absorb water and nutrients through his toes and photosynthesize through his fingers. After a day or two he will be very unhappy, and a few days later he will die, if he doesn’t repent of his mistake. Of course very few people make a mistake this stark—if nothing else, anyone who does will die almost immediately, leaving around only those who don’t make mistakes this extreme. But the difference between this and thinking that life is about having sex with as many people as possible is a matter of degree, not of kind. You won’t die of thirst and starvation being a sex maniac, and it will take you longer than a few days to become noticeably miserable, but misery will come to those who think they’re mindless sex machines as reliably as it will to those who think they’re trees.

Pagans are in a similar situation to the pickup artists who think they’re mindless sex machines. Paganism, being a belief system which was far more widespread and lasted far longer, was clearly more workable than pickup-artistry, which is to say that it was nearer to the truth, but it was still wrong in ways that seriously affect human happiness. It varied with place and time, of course, but common mistakes were a focus on glory, the disposability of the individual, the inability of people to redeem themselves from errors, and so on. The same is true of other mistaken religions; they each have their mistakes, some more than others, and tend toward unhappiness to the degree that they’re wrong.

There is a second side to the importance of preaching Christianity to those who aren’t Christian, which is that life is real, and salvation is about living life to the full, not skating by on the bare minimum. Far too many people think of this life as something unrelated to eternal life, as if once you make it to heaven you start over. What we are doing now is building creation up moment by moment. People who have been deceived will necessarily be getting things wrong: doing harm where they meant to help, and failing to help where they could have. It is not possible to be mistaken about reality and get everything right; that’s like asking a person with vision problems to be an excellent marksman. A person who causes harm where they meant to help may not be morally culpable for the harm they do, but when all is made clear, they cannot be happy about the harm they did, while they will be able to be happy about the good they did. To give people the truth is to give them the opportunity to be happier. That is a duty precisely because we are supposed to love people and not merely tolerate them. Though I suppose I should also mention the balancing point: we’re supposed to give people the truth, not force it down their throats. Having given it to them, if they won’t take it, our job is done.

OK, I think I can conclude this video now. Until next time, may you hit everything you aim at.

Book Review: Sense and Sensibility by Jane Austen

An interesting review of a Jane Austen book I’ve been meaning to get to. Pride & Prejudice is a truly excellent book, which I’ve read something like twenty times now, but for some reason I’ve read very little else of Austen’s works. I really do need to pick up my copy of Sense and Sensibility…

Jane Austen is chick lit.

Yeah, I know. So the next question is: why is such a manly man of manliness like me reading this?

Let’s get the manly credentials out of the way. The collection of her complete works that I’m reading belongs to my dad, a rather manly man, who read and enjoyed them very much. My brother, who is also quite manly, enjoys them too. And then there’s my good buddy, the author, English scholar, and manly man, who recommended them to me. So my manly credentials remain intact.

And second, most importantly: if something is good, it’s good, regardless of whether I’m a member of its so-called intended audience.

I recently finished reading Sense and Sensibility, Austen’s first novel, and let me tell you, it’s good. And I think men should read this book, and Jane Austen in general. Why?


View original post 675 more words

Our Love for Formative Fiction

I think that for most of us there are things which we loved dearly when we were children and which we still love now, often greatly in excess of how much others love them. We’re used to hearing this pooh-poohed as mere nostalgia. But I think that for most of us, that’s not accurate.

Nostalgia is, properly speaking, a longing for the familiar. It is not merely a desire for comfort, but also a connection through the passage of time from the present to another time (usually our childhood, but it can be any previous time). As Saint Augustine noted, our lives are shattered across the moments of time, and on our own we have no power to put them back together. Nostalgia is, in this sense, the hope that someone else’s power will eventually put the shattered moments of time back together into a cohesive whole.

But when we enjoy formative fiction, we’re not particularly thinking of the passage of time, or of the connectedness of the present to the past. The key way we can see this is that we don’t merely relive the past, like putting on an old sweater or walking into a room we haven’t been in for years. Those are simple connections to the past, and are properly regarded as nostalgia. But when we watch formative fiction which we still enjoy (and no one enjoys all of the fiction they read or watched as a child), we actually engage it as adults. We see new things that we didn’t see at first, and appreciate it in new ways.

What is really going on is not nostalgia, but the fact that everyone has a unique perspective on creation; for each of us there are things we see in ways no one else does. Part of this is our personality, but part of it is also our previous experiences. And the thing about formative fiction is that it helped to form us. The genuine teamwork in Scooby Doo, where the friends were really friends and really tried to help each other, helped me to appreciate genuine teamwork. It’s fairly uncommon on television for teammates to actually like each other—“conflict is interesting!” every lazy screenwriter in the world will tell you—so when I see it in Scooby Doo now, I appreciate it all the more, because I’ve grown up looking for it and appreciating it where I see it. This is one of the things I love about the Cadfael stories, where Cadfael (the Benedictine monk who solves murders) is on a genuine team with Hugh Beringar, the undersheriff of Shropshire. This is also one of the things I love about the Lord Peter stories with Harriet Vane—they are genuinely on each other’s side with regard to the mysteries.

And when I mention Scooby Doo, I am of course referring to the show from the 1960s, Scooby-Doo, Where Are You! I have liked some of the more recent Scooby Doo shows, like Scooby Doo: Mystery Inc., but by and large the more modern stuff tends to add conflict in order to make the show more interesting, and consequently makes it far less interesting to me. Cynics will say that this is merely because none of these were from my childhood, but in fact when Scooby Doo: Mystery Inc. had episodes where the entire team was functioning like a team, with everyone liking each other and on the same side, I genuinely enjoyed those episodes. (Being a father of young children means watching a lot of children’s TV.) The episodes where members of the team were fighting, or where they split up, were by far my least favorites.

It is possible to enjoy fiction for ulterior motives, or at least to pretend to enjoy it for ulterior motives. Still, it’s also possible to enjoy fiction because one is uniquely well suited to enjoying it, and few things prepare us for life as much as our childhood did.

“The Gem” or The Worst Argument in the World

Last Eden

In 1985, Australian philosopher David Stove held a “Competition to Find the Worst Argument in the World.” He split the points between how bad the argument itself was, and how influential it has been in the history of thought. In the end, he awarded the prize to himself, for the following argument (or rather, argument schema), which he christened “The Gem”:

We can know things only if condition C, which is necessary for knowledge, is satisfied.

Therefore, we cannot know things as they are in themselves.

Condition C can be any number of things:

  • as they are related to us
  • under the forms of our perception and understanding
  • insofar as they fall under our conceptual schemes
  • insofar as they enter our minds
  • insofar as they are conceptualized by means of language
  • insofar as they are mediated by society or culture

As Stove points out, this argument is a good…

View original post 598 more words
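
A note of my own on the excerpt above, since it cuts off just before the analysis: the shape of the Gem is easiest to see in predicate-logic notation. This formalization is mine, not Stove’s, and the symbols are labels I am assuming for illustration: let $Kx$ mean “we know $x$,” let $Cx$ mean “our knowing of $x$ satisfies condition C,” and let $K^{*}x$ mean “we know $x$ as it is in itself.” The Gem then runs:

$$\forall x\,(Kx \rightarrow Cx) \quad\therefore\quad \forall x\,\neg K^{*}x$$

The premise is a near-tautology (of course we can only know things under whatever conditions our knowing involves), which is what makes it feel irresistible. But the predicate $K^{*}$ never appears in the premise, so no conclusion about it follows; the inference only goes through with the suppressed premise $\forall x\,(Cx \rightarrow \neg K^{*}x)$, that is, the claim that knowing a thing under some condition is not knowing the thing as it is. And that suppressed premise is the entire point in dispute, assumed rather than argued.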