Authority Figures in Movies

One of the curious things about the roles of authority figures in movies is that they are very rarely played by people who have ever had any authority. One might think that this wouldn’t have too much of an impact since the actors are just reciting dialog which other people wrote. (People who most of the time haven’t had any authority themselves, but that’s a somewhat separate matter.) And in the end, authority is the ability to use force to compel people, so does it matter much what the mannerisms an actor uses are?

Actually, yes, because in fact a great deal of authority, in practice, is about using social skills to get people to cooperate without having to use one’s authority. And a great deal of social skill is body language, tone of voice, emphasis, and pacing. Kind of like the famous advice given by Dalton in Road House: be nice.

For some reason, authority figures are usually portrayed as grim and stern—at this point I think because it’s a shorthand so you can tell who is who—but there is a great deal which can be accomplished by smiling. There’s an odd idea that many people seem to have that smiling is only sincere if it is an instinctual, uncontrollable reaction. I’ve no idea where this crazy notion came from, but in fact smiling is primarily a form of communication. It communicates that one is not (immediately) a threat, that (in the moment) one intends cooperation, that the order of the moment is conversation rather than action. Like all communication it can of course be a lie, but the solution to that is very simple: don’t lie with your smile. Words can be lies, but the solution is not to refrain from speaking unless you can’t help yourself; it’s to tell the truth when one opens one’s mouth. So tell the truth when you smile with your mouth, too. And since actions are choices, one very viable option, if you smile at someone, is to follow through and (in the moment) be nice.

Anyone (sane) who has a dog knows that in many ways they’re terrible creatures. They steal your food, destroy everyday items, throw up on your floor when they’ve eaten things that aren’t food, get dog hair everywhere, and make your couches stink of dog. And yet, people love dogs who do these things to them for a very simple reason: any time you come home, your dog smiles at you and wags its tail and is glad to see you. And it’s human nature that it’s impossible to be angry at someone who is just so gosh darned happy that you’re in the same room as them.

People in authority are rarely there because they have a history of failure and incompetence at dealing with people; it may be a convenient movie shorthand that people in authority are stone-faced, grumpy, and stern, but in real life people in positions of authority are generally friendly. It’s easy to read too much into that friendliness, of course—they’re only friendly so long as you stay on the right side of what you’re supposed to be doing—but this unrealistic movie shorthand makes for far less interesting characters.

And I suppose I should note that there are some people in positions of authority who are often stone-faced and grim, but these are usually the people responsible for administering discipline to those already known to be transgressors. This is especially true of those dealing with children, who have little self-control and less of a grasp of the gravity of most situations they’re in, and who need all the help they can get in realizing that it’s not play time. By contrast, during the short time I was able to take part in my parish’s prison ministry, I noticed that the prison guards were generally friendly (if guardedly so) with the inmates. Basically, being friendly can invite people to try to take liberties, while being grumpy usually gets far less cooperation, and outside of places like Nazi death camps, where you are actually willing to shoot people for being uncooperative, cooperation is usually worth far more than the occasional inconvenience of someone trying to take liberties and having to be told “no”.

But most of the actors who play authority figures don’t know any of this; and when you research the individual actors they often turn out to be goofballs who don’t like authority and whose portrayal of it is largely formed by what they most dislike about it.

Atheism is Not a Religion

This is the script to my video, Atheism is Not a Religion. As always, it was written to be listened to when I read it aloud, but it should be pretty readable as text, too.

Today we’re going to look at a topic which a casual survey of atheist YouTube channels and Twitter feeds suggests is of importance to many atheists: that atheism is not a religion. Now, since the one thing you can’t convict internet atheists of is originality, I assume that this is because there are Christians who claim that atheism is a religion. Of course what they probably mean by this is that atheism entails a set of metaphysical beliefs. And this is true enough, at least as a practical assumption, even if some atheists will scream at you until they’re blue in the face that it’s not what they believe in theory. But merely having metaphysical beliefs does not make something a religion; it makes it a philosophy or, in more modern terms, a world-view. But a religion is far more than merely a world-view or a set of beliefs. As Saint James noted, the demons believe in God.

The first and most obvious thing which atheism lacks is: worship. Atheists do not worship anything. I know that Auguste Comte tried to remedy this with his calendar of secular holidays, but that went nowhere and has been mostly forgotten except perhaps in a joke G. K. Chesterton made about it. A few atheists have made a half-hearted go of trying to worship science. And if that had any lasting power, Sunday services might include playing a clip from Cosmos: A Spacetime Odyssey. But the would-be science worshippers haven’t gotten that far, and it is highly doubtful they ever will.

Secular Humanism is sometimes brought up as something like a religious substitute, but so far it only appears to be a name, a logo, some manifestos no one cares about, and the belief that maybe it’s possible to have morality without religion. And humanity is not a workable object of worship anyway. First, because it’s too amorphous to worship—as Chesterton noted, a god composed of seven billion persons neither dividing the substance nor confounding the persons is hard to believe in. Second, because worshipping humanity involves worshipping Hitler and Stalin and Mao and so forth.

Which brings us to Marxism, which is perhaps the closest thing to a secular religion so far devised. But while Marxism does focus the believer’s attention on a utopia which will someday arrive, and certainly gets people to be willing to shed an awful lot of innocent blood to make it happen sooner, I don’t think that this really constitutes worship. It’s a goal, and men will kill and die for goals, but they can’t really worship goals. Goals only really exist in the people who have them, and you can only worship what you believe actually exists.

It is sometimes argued that within a Marxist utopia people worship the state, but while this is something put on propaganda posters, the people who lived in Marxist nations don’t report anyone actually engaging in this sort of worship, at least not sincerely.

And I know that some people will say that atheists worship themselves—I suspect because almost all atheists define morality as nothing more than a personal preference—but I’ve never seen that as anything more than a half-hearted attempt to answer the question of “what is the ground of morality?”, rather than any sort of motivating belief. And in any event, it is inherently impossible to worship oneself. Worshipping something is recognizing it as above oneself, and it is not possible to place oneself above oneself. I think the physical metaphor suffices: if you are kneeling, you can’t look up and see your own feet. You might be able to see an image of yourself in a mirror, but that is not the same, and whatever fascination it might have is still not worship. So no, atheism does not worship anything.

The second reason why atheism is not a religion is that atheism gives you no one to pray to. Prayer is a very interesting phenomenon, and is much misunderstood by those who are not religious and, frankly, many who are, but it is, at its core, talking with someone who actually understands what is said. People do not ever truly understand each other because the mediation of words always strips some of the meaning away and the fact that every word means multiple things always introduces ambiguity. Like all good things in religion this reaches its crescendo in Christianity, but even in the public prayers said over pagan altars, there is the experience of real communication, in its etymological sense: com, “together,” and unication, “being one.” It is in prayer—and only in prayer—that we are not alone. Atheists may decry this as talking with our imaginary friends if they like—and many of them certainly seem to like to—but in any event they are left where all men who are not praying are left: alone in the crowd of humanity, never really understood and so only ever loved very imperfectly at best. (I will note that this point will be lost on people who have never taken the trouble to find out what somebody else really means, and so assume that everyone else means exactly the same things that they would mean by those words, and so assume that all communication goes perfectly. You can usually identify such people by the way they think that everyone around them who doesn’t entirely agree with them is stupid. It’s the only conclusion left open to them.)

The third reason why atheism is not a religion is that it does not, in any way, serve the primary purpose of religion. The thing you find common to all religions—the thing at the center of all religions—is putting man into his proper relation with all that is; with the cosmos, in the Greek sense of the word. Anyone who looks at the world sees that there is a hierarchy of being; that plants are more than dust and beasts are more than plants and human beings are more than beasts. But if you spend any time with human beings—and I mean literally any time—you will immediately know that human beings are not the most that can be. All that we can see and hear and smell and taste and touch in this world forms an arrow which does not point at us but does run through us, pointing at something else. The primary purpose of a religion is to acknowledge that and to get it right. Of course various religions get it right to various degrees; those who understand that it points to an uncreated creator who loved the world into existence out of nothing get it far more right than those who merely believe in powerful intelligences which are beyond ours. Though if you look carefully, even those who apparently don’t seem, often enough, to have their suspicions that there’s something important they don’t know about. But be that as it may, all religions know that there is something more than man, and give their adherents a way of putting themselves below what they are below; of standing in a right relation to that which is above them. In short, the primary purpose of all religion is humility.

And this, atheism most certainly does not have. It doesn’t matter whether you define atheism as a positive denial or a passive lack; either way atheism gives you absolutely no way to be in a right relationship to anything above you, because it doesn’t believe in anything above you. Even worse, atheism has a strong tendency, at least in the West, to collapse the hierarchy of being in the other direction, too. It is no accident that pets are acquiring human rights and there are some fringe groups trying to sue for the release of zoo animals under the theory of habeas corpus. Without someone who intended to make something out of the constituent particles which make us up, there is ultimately no reason why any particular configuration of quarks and electrons should mean anything more than any other one; human beings are simply the cleverest of the beasts that crawl the earth, and the beasts are simply the most active of the dust which is imprisoned on the earth.

We each have our preferences, of course, but anyone with any wide experience of human beings knows that we don’t all have the same preferences, and since the misanthropes are dangerous and have good reason to lie to us, those who don’t look out for themselves quickly become the victims of those who do. Call it foreigners or racists or patriarchy or gynocentrism or rape culture or the disposable male or communism or capitalism, or call it nature red in tooth and claw if you want to be more poetic about it, but sooner or later you will find out that human beings, like the rest of the world, are dangerous.

Religious people know very well that other human beings are dangerous; there is no way in this world to get rid of temptation and sin. But religion gives the possibility of overcoming the collapsing in upon ourselves for which atheism gives no escape.

For some reason we always talk about pride puffing someone up, but this is almost the exact opposite of what it actually does. It’s an understandable mistake, but it is a mistake. Pride doesn’t puff the self up, it shrinks it down. It just shrinks the rest of the world down first.

In conclusion, I can see why my co-religionists would be tempted to say that atheism is a religion. There are atheist leaders who look for all the world like charismatic preachers and atheist organizations that serve no discernible secular purpose. Though not all atheists believe the same things, still, most believe such extremely similar things that they could be identified as a group on that basis. Individual atheists almost invariably hold unprovable dogmas with a blind certainty that makes the average Christian look like a skeptic. And so on; one could go on at length about how atheism looks like a religion. But all these are mere external trappings. Atheism is not a religion, which is a great pity, because atheists would be far better off if it were.

Two Interesting Questions

On Twitter, @philomonty, who I believe is best described as an agnostic (he can’t tell whether nihilism or Catholicism is true), made two video requests. Here are the questions he gave me:

  1. If atheism is a cognitive defect, how may one relieve it?
  2. How can an atheist believe in Christ, when he does not know him? Not everyone has mystical experiences, so not everyone has a point of contact which establishes trust between persons, as seen in everyday life.

I suspect that I will tackle these in two separate videos, especially because the second is a question which applies to far more than just atheists. They’re also fairly big questions, so it will take me a while to work out how I want to answer them. 🙂

The first question is especially tricky because I believe there are several different kinds of cognitive defect which can lead to atheism. Not everyone is a mystic, but if a person who isn’t demands mystical experience as the condition for belief, he will go very wrong. If a person who is a mystic has mystical experiences but denies them, he will go very wrong, but in a different way. There are also people who are far too trusting of the culture they’re in, thinking that fitting into it is the fullness of being human, so they will necessarily reject anything which makes it impossible or even just harder to fit in. These, too, will go very wrong, but in a different way from the previous ones.

To some degree this is a reference to my friend Eve Keneinan’s view that atheism is primarily caused by some sort of cognitive defect, such as an inability to sense the numinous (basically, lacking a sensus divinitatis). Since I’ve never experienced that myself, I’m certain it can’t be the entire story, though to the degree that it is part of the story it would come under the category of non-mystics who demand mystical experience. Or, possibly, mystics who have been damaged by something, though I am very dubious about that possibility. God curtails the amount of evil possible in the world to what allows for good, after all, so while that is not a conclusive argument, it does seem likely to me that God would not permit anything to make it impossible for a person to believe in him.

Anyway, these are just some initial thoughts on the topic which I’ll be mulling over as I consider how to answer. Interesting questions.

The Dunning-Kruger Effect

(This is the script for my video about the Dunning-Kruger effect. While I wrote it to be read out loud by someone who inflects words like I do, i.e. by me, it should be pretty readable as text.)

Today we’re going to be looking at the Dunning-Kruger effect. This is the other topic requested by PickUpYourPantsPatrol—once again thanks for the request!—and if you’ve disagreed with anyone on the internet in the last few years, you’ve probably been accused of suffering from it.

Perhaps the best summary of the popular version of the Dunning-Kruger effect was given by John Cleese:

The problem with people like this is that they have no idea how stupid they are. You see, if you are very very stupid, how can you possibly realize that you are very very stupid? You’d have to be relatively intelligent to know how stupid you are. There’s a wonderful bit of research by a guy called David Dunning who’s pointed out that to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place. This means, if you’re absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.

There are plenty of things to say about this summary, as well as about the curious problem that if an idiot is talking to an intelligent person, absent reputation being available, there is a near-certainty that both will think the other an idiot. But before I get into any of that, I’d like to talk about the Dunning-Kruger study itself, because I read the paper which Dunning and Kruger published in 1999, and it’s quite interesting.

The first thing to note about the paper is that it actually discusses four studies which the researchers did, trying to test specific ideas about incompetence and self-evaluation which the paper itself points out were already common knowledge. For example, they have a very on-point quotation from Thomas Jefferson. But, they note, this common wisdom that fools often don’t know that they’re fools has never been rigorously tested in the field of psychology, so they did.

The second thing to note about this study is that—as I understand is very common in psychological studies—their research subjects were all students taking psychology courses who received extra credit for participating. Now, these four studies were conducted at Cornell University, and the participants were all undergraduates, so generalizing to the larger population is immediately suspect, since there’s good reason to believe that undergraduates at an Ivy League university have more than a few things in common which they don’t share with the rest of humanity. This is especially the case because the researchers were testing self-evaluation of performance, which is something that Cornell undergraduates were selected for and have a lot invested in. They are, in some sense, the elite of society, or so at least I suspect most of them have been told, even if not every one of them believes it.

Moreover, the tests which they were given—which I’ll go into detail about in a minute—were all academic tests, given to people who were there because they had generally been good at academics. Ivy League undergraduates are perhaps the people most likely to have falsely high impressions of how good they are at academic tests. This is especially the case if any of these were freshman classes (they don’t say), since a freshman at an Ivy League school has impressed the admissions board but hasn’t had the opportunity to fail out yet.

So, right off the bat the general utility of this study in confirming popular wisdom is suspect; popular opinion may have to stand on its own. On the other hand, this may be nearly the perfect study to explain the phenomenon Nassim Nicholas Taleb described as Intellectual Yet Idiot—credentialed people who have the role of intellectuals yet little of the knowledge and none of the wisdom for acting the part.

Be that as it may, let’s look at the four studies described. The first study is in many ways the strangest, since it was a test of evaluating humor. They created a compilation of 30 jokes from several sources, then had a panel of 8 professional comedians rate these jokes on a scale from 1 to 11. After throwing out one outlier, they took the mean answers as the “correct” answers, then gave the same test to “65 Cornell undergraduates from a variety of courses in psychology who earned extra credit for their participation”.

They found that the people in the bottom quartile of test scores, who by definition have an average rank at about the twelfth percentile, guessed (on average) that their rank was at the 66th percentile. The bottom three quartiles overestimated their rank, while the top quartile underestimated theirs, thinking that they were in the (eyeballing it from the graph) 75th percentile when in fact (again, by definition) they were in the 88th.

This is, I think, the least interesting of the studies, first because the way they came up with “right” and “wrong” answers is very suspect, and second because this isn’t necessarily about people mis-estimating their own ability; it could be entirely about mis-estimating their peers’ ability. The fact that everyone put their average rank in the class at between the 66th and 75th percentiles may just mean that, in default of knowing how they did, Cornell students are used to guessing that they got somewhere between a B- and a B+. Given that they were admitted to Cornell, that guess may have a lot of history behind it to back it up.
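
If it helps to see where those “by definition” percentiles come from, here is a trivial sketch (my own illustration, not anything from the paper): a quartile spans 25 percentile points, so its average member sits at the midpoint of that span.

```python
# The bottom quartile occupies percentiles 0-25 and the top quartile 75-100,
# so their average members sit at roughly the 12th and 88th percentiles.
bottom_quartile_average = (0 + 25) / 2    # 12.5
top_quartile_average = (75 + 100) / 2     # 87.5
print(bottom_quartile_average, top_quartile_average)
```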

The next test, though unfortunately only given to 45 Cornell students, is far more interesting both because it used 20 questions on logical reasoning taken from an LSAT prep book—so we’re dealing with questions where there is an unambiguously right answer—and because in addition to asking students how they thought they ranked, they asked the students how many questions they thought that they got right. It’s that last part that’s really interesting, because that’s a far more direct measure of how much the students thought that they knew. And in this case, the bottom quartile thought that they got 14.2 questions right while they actually got 9.6 right. The top quartile, by contrast, thought that they got 14 correct when they actually got 16.9 correct.

So, first, the effect does in fact hold up with unambiguous answers. The bottom quartile of performers thought that they got more questions right than they did. So far, so good. But the magnitude of the error is not nearly as great as it was for the ranking error, especially for the bottom quartile. Speaking loosely, the bottom quartile knew half of the material and thought that they knew three quarters of it. That is a significant error, in the sense of being a meaningful error, but at the same time they thought that they knew about 48% more than they did, not 48,000% more than they did. The 11 or so Cornell undergraduates in the bottom quartile on this test did have an over-inflated sense of their ability, to be sure, but they also had a basic competence in the field. To put this in perspective, the top quartile only scored 76% better than the bottom quartile.
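
To make those percentages concrete, here is a quick back-of-the-envelope check using the quartile averages quoted above (the variable names and the little script are mine, purely for illustration; they are not from the paper):

```python
# Quartile averages for the logical-reasoning (LSAT-style) test, out of 20 questions.
bottom_actual, bottom_estimated = 9.6, 14.2
top_actual, top_estimated = 16.9, 14.0

# How much more the bottom quartile thought they knew than they actually did.
print(f"{(bottom_estimated - bottom_actual) / bottom_actual:.0%}")  # ~48%

# How much better the top quartile actually scored than the bottom quartile.
print(f"{(top_actual - bottom_actual) / bottom_actual:.0%}")        # ~76%
```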

The next study was on 84 Cornell undergrads who were given a 20-question test of standard English grammar taken from a National Teacher Examination prep guide. This replicated the basic findings of the previous study, with the bottom quartile estimating that they got 12.9 questions right versus a real score of 9.2. (Interestingly, the top quartile very slightly over-estimated their score as 16.9 when it was actually 16.4.) Again, all these are averages so the numbers are a little wonky, but anyway this time the bottom quartile over-estimated their performance by 3.7 points, or 40%. And again, they got close to half the questions right, so this isn’t really a test of people who are incompetent.

There’s another thing to consider in both studies, which is how many questions the students thought they got wrong. In the first study they estimated 5.4 errors while in the second 7.1 errors, and while these were under-estimates, they were correct that they did in fact get that many wrong. Unfortunately these are aggregate numbers (asked after they handed the test in, I believe) so we don’t know their accuracy on gauging whether they got particular questions wrong, but in the first test they correctly estimated about 40% of their error and on the second test they correctly estimated about 65% of their error. That is, while they did unequivocally have an over-inflated sense of their performance, they were not wildly unrealistic about how much they knew. But of course these are both subjects they had studied in the past, and their test scores did demonstrate at least basic competence with them.

The fourth study is more interesting, in part because it was on a more esoteric subject: it was a 10-question test, given to 140 Cornell undergrads, about card selection (the Wason selection task). Each problem described 4 cards and gave a rule which they might match. The question was which card or cards needed to be flipped over to determine whether those cards match the rule. Each question was like that, so we can see why they only asked ten questions.

They were asked to rate how they did in the usual way, but then half of them were given a short packet that took about 10 minutes to read explaining how to do these problems, while the other half was given an unrelated filler task that also took about 10 minutes. They were then asked to rate their performance again, and in fact the group who learned how to do the problems did revise their estimate of their performance, while the other group didn’t change it very much.

And in this test we actually see a gross mis-estimation of ability by the incompetent. The bottom quartile scored on average 0.3 questions correct, but initially thought that they had gotten about 5.5 questions correct. For reference, the top quartile initially thought that they had gotten 8.9 questions correct while they had in fact gotten all ten correct. And after the training, the untrained bottom quartile slightly raised their estimation of their score (by six tenths of a question), but among the trained people the bottom quartile reduced their estimation by 4.3 questions. (In fact the two groups had slightly different performances which I averaged together; so the bottom quartile of the trained group estimated that they got exactly one question right.)

This fourth study, it seems to me, is finally more of a real test of what everyone wants the Dunning-Kruger effect to be about. An average of 0.3 questions right corresponds roughly to 11 of the 35 people in the bottom quartile getting one question right while the rest got every question wrong. The incompetent people were actually incompetent. Further, they over-estimated their performance by over 1800%. So here, finally, we come to the substance of the quote from John Cleese, right?
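
The “roughly 11 of the 35” figure is just a back-calculation from the quartile average, which a couple of lines make explicit (again, my own illustration, under the assumption that everyone in that quartile scored either zero or one):

```python
# 140 undergrads split into quartiles of 35; the bottom quartile averaged 0.3 out of 10.
quartile_size = 140 // 4                      # 35 students
students_with_one_right = 0.3 * quartile_size
print(students_with_one_right)                # 10.5, i.e. roughly 10 or 11 students

# The scale of the mis-estimation: they guessed about 5.5 right but averaged about 0.3.
print(5.5 / 0.3)                              # roughly an 18-fold over-estimate
```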

Well… maybe. There are two reasons I’m hesitant to say so, though. The first is the fact that these are still all Cornell students, so they are people who are used to being above average and doing well on tests and so forth. Moreover, virtually none of them would ever have been outside of academia, so it is very likely that they’ve never encountered a test which was not designed to be passable by most people. If nothing else, it doesn’t reflect well on a teacher if most of his class gets a failing grade. And probably most importantly, the skills necessary to solve these problems are fairly close to the sort of skills that Ivy League undergrads are supposed to have, so the similarity of this skill, at which they were incompetent, to skills at which they are presumably competent might well have misled them.

The second reason I’m hesitant to say that this study confirms the John Cleese quote is that the incompetent people estimated that they got 55% of the questions right, not 95%. That is to say, the incompetent people thought that they were merely competent. They didn’t think that they were experts.

In the conclusion of the paper, Dunning and Kruger talked about some limitations of their study, which I will quote because it’s well written and I want to do them justice.

We do not mean to imply that people are always unaware of their incompetence. We doubt whether many of our readers would dare take on Michael Jordan in a game of one-on-one, challenge Eric Clapton with a session of dueling guitars, or enter into a friendly wager on the golf course with Tiger Woods.

They go on to note that in some domains knowledge is largely the substance of skill, as in grammar, whereas in other domains knowledge and skill are not the same thing, as in basketball.

They also note that there is a minimum amount of knowledge required to mistake oneself for competent. As the authors say:

Most people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct an 8-cylinder engine, or diagnose acute disseminated encephalomyelitis.

So where does this leave us with regard to the quote from John Cleese? I think that the real issue is not so much about the inability of the incompetent to estimate their ability, but the inability of the incompetent to reconcile new ideas with what they do actually know. Idiots may not know much, but they still know some things. They’re not rocks. When a learned person tells them something, they are prone to reject it not because they think that they already know everything, but because it seems to contradict the few things they are sure of.

There is a complex interplay between intelligence and education—and I’m talking about education, mind, not mere schooling—where intelligence allows one to see distinctions and connections quickly, while education gives one the framework of what things there are that can be distinguished or connected. If a person lacks the one or the other—and especially if they lack both—understanding new things becomes very difficult because it is hard to connect what was said to what is already known, as well as to distinguish it from possible contradictions to what is already known. If the learned, intelligent person isn’t known by reputation to the idiot, the idiot has no way of knowing whether the things said don’t make sense to him because they are nonsense or because they contain more sense than he can follow, and a little experience of the world is enough to make many if not most people sufficiently cynical to assume the former.

And I think that perhaps the best way to see the difference between this and the Dunning-Kruger effect is by considering the second half of the fourth experiment: the incompetent people learned how to do what they initially couldn’t. That is, after training they became competent. That is not, in general, our experience of idiots.

Until next time, may you hit everything you aim at.

You Have the Right to Remain Innocent

I recently saw the news that the defense attorney / law professor who made the videos Don’t Talk to Cops (part 1, part 2) wrote a book on the subject. It’s called You Have the Right to Remain Innocent, and it’s a short, easy-to-read book which covers much of the same material, but in greater depth, with updates for recent case law, and without the speed-talking.

Since the basic thesis of the book is stated in its title, which is also a reasonable summary of the book’s actionable advice, it is fair to ask what is in the book that justifies actually opening it to look at its pages. There’s actually a lot.

The book does start with some caveats, perhaps most notably that he clarifies he’s talking about speaking with the police when they come to you, unsolicited, to ask you questions about the past. It is both a legal requirement and good sense to readily comply with a request to identify yourself and to explain what you are doing, right then and there, in the place where you currently are. One of his examples: if you are breaking into your own house because you locked yourself out and a policeman asks you what you are doing, do tell him that this is your house and you don’t have your key. He mentions some other cases when you must talk with the police.

The other very notable caveat is that he takes some pains to point out that every member of society owes a great debt to the men and women who serve as police, who take personal risk to do a difficult job that keeps us safe. Throughout the book, he makes it clear that he isn’t talking about bad people, but (in the main) good people in a bad situation, which is the present criminal legal system in the United States. It is a system which sometimes convicts innocent people along with guilty people, and for reasons he makes clear throughout the book, his primary concern is giving innocent people the tools needed to avoid the pitfalls of this dangerous system. Good people make mistakes, and the mistake of a police officer or a prosecutor or a judge can cost an innocent person decades in prison. (He uses more than a few cases where the person convicted was later conclusively proved innocent by DNA evidence (often decades later) to show how wrong things can go for innocent people.)

The book has more than a few interesting insights into problems with the criminal justice system—perhaps most notably the way that no living person has any idea even of how many crimes are defined by the law, let alone what they all are—but I think its greatest value lies in the examination of particular cases, where he shows how even very trivial statements, which are true, can become damning evidence in light of other things which a person may not know and has no control over. In one case, a man’s admission that he had dated a woman some time before the crime, in the neighborhood where that crime happened, helped to send him to prison; he was later exonerated by DNA evidence. Coincidences happen, but not all juries believe that they do.

And it is this sort of thing which is the main value of reading the entire book, I think. It is so very easy to slip into the mindset of wanting to give in to the urge to cooperate, to be helpful, to be willing to answer any question which is not directly incriminating (and if I’m innocent, how could any question be directly incriminating?). That mindset takes more than a little beating down, which the book provides by showing, over and over again, how even minor admissions of completely true and innocent things can be disastrous. The book presents information, but I think reading it equally constitutes training. If one were ever to face a police interview it would be a very stressful situation, and when stressed we tend to forget what we know and fall back on our habitual reactions. Only by training ourselves, by seeing many situations we could all too easily be in, are we likely to remember to do what we should.

The final two chapters of the book, which are much shorter than the first, deal with the specifics of how to go about exercising one’s right to remain innocent in a practical sense. He covers many instances of how people have accidentally incriminated themselves when invoking their Fifth Amendment right, as well as how people have accidentally failed at refusing to talk to the police and at asking for a lawyer. And again, it’s not so much knowing what to do that’s the real benefit of reading this book, but learning what not to do, and why not to do it.

The book is a short, easy read which is well written, and I think valuable for anyone living in America. I found it a valuable read even after watching the videos I linked above, and strongly recommend it.

Who’s Afraid of the Dark

I recently read Russell Newquist’s short story, Who’s Afraid of the Dark. As you may recall, he recently reviewed my novel, Ordinary Superheroes, so in gratitude I made the time to read his short story sooner rather than later (with three small children in the house, I have very little time for leisure reading these days). I’m glad that I did, and I’m really looking forward to reading the stories which this short story serves as a rather cunning introduction to.

I’m really not sure how to review this short story without revealing any of the surprises in it, so let me apologize in advance for this review being a little oblique, but since I’ve already given away that it’s not simply what it seems, let me emphasize that it’s really not what it seems at first: it’s quite a lot more.

The beginning and middle of the story are suspenseful, while the ending is both satisfying and promises much larger stories to come. Stories on very interesting subjects.

If you like stories in which people who have a real chance of winning fight monsters, this story is likely to be for you. Mr. Newquist clearly understands two important ingredients in a good story of humanity fighting monsters: (1) this must always take place in a fundamentally good world, that is, one where it is possible, with blood and sweat and tears and sacrifice, to actually achieve something good and (2) the monsters must be genuinely dangerous and scary.

Update: Russell told me that there is a Peter Bishop story in Between the Wall and the Fire, which I just bought.