The Development of Psycho-Analysis Makes Sense if you Assume it Doesn’t Work

I recently read the transcript of Freud’s lectures explaining to a Clark University audience what Psycho-Analysis is (Five Lectures on Psycho-Analysis). One of the things that struck me was that the development of Psycho-Analysis that he outlined makes sense if you assume that Psycho-Analysis doesn’t work.

The background we need was provided by Freud in the first lecture: a description of hysteria, which was the condition he was trying to treat. Basically, it’s a catch-all for severe idiopathic symptoms in a female. That is, if there’s something really wrong in a woman and doctors can find no physical cause, that’s then called hysteria. This isn’t trivial stuff—one example Freud gave was a woman who suffered paralysis in part of her body for extended periods. But, here’s the background we need: according to Freud, instead of despairing, doctors tended to give a good, if indefinite, prognosis. That is, the symptoms often went away on their own, though on their own time frame and not a predictable one.

So before we look at Psycho-Analysis, let’s look at the properties that a scheme of treatment which doesn’t work needs to have in order for the person developing it to be able to convince himself that it works, if it’s applied to conditions which tend to eventually get better on an unpredictable time frame.

The first and most obvious property it needs to have is that it can’t be supposed to work immediately. If it were supposed to work immediately, it would be obvious that it doesn’t work. Any such scheme of treatment must, therefore, be a process. However, it cannot be a definite process, because the patient might get better before the process is finished (which would not be a disaster because it could be credited to the process working extra well, somehow, though it would sow seeds of doubt) or else they might still be ill when the definite process has finished. It must, therefore, be an indefinite process.

What sort of properties would an indefinite process need to have, given that it’s not actually doing anything? Well, it will be tremendously helpful if it consists of a series of steps, each of which does have a definite conclusion, since that will give a feeling of accomplishment. If the indefinite process were just endless repetition of the same thing (e.g. identical breathing exercises), most people would get bored. By breaking the process up into steps, the feeling of completion of each step will give a sense of accomplishment, even if the total number of steps is not known. There will be a feeling that something has happened.

It would also be helpful if at least parts of this process are enjoyable or fulfill some other human need such as companionship, sympathy, etc. People will be a lot more inclined to believe that a process is doing what they want if it’s at least doing something that they want. This one you nearly get for free, though, since it’s hard to see another human being on a recurring basis without it feeling like some amount of companionship. As long as the process doesn’t feel entirely adversarial, most any process that involves regularly meeting another human being will check this box.

The indefinite process also needs to be able to be explained as completed whenever the patient gets better. If you were supposed to keep doing something forever and the patient gets better, that creates a big credibility problem. And remember that we’re not talking about credibility to the patient, but credibility to the practitioner. A patient can just think he got lucky and who wants to question being well too soon? But a practitioner can only get lucky so many times before he starts to think that there’s something wrong with his theory.

If the indefinite process consists of some kind of peeling back of layers, that will do a pretty good job with this, so long as there’s no way to tell how many layers there are before you hit the last layer. Each layer being peeled back will feel like an accomplishment, and whenever the patient gets better anyway, you can declare that the layer you most recently peeled back was the last layer and this explains why the patient is cured.

Another requirement for the indefinite process is that the steps involved need to be something that everyone can do. You can only remove a splinter from the skin of someone who has a splinter, but you can massage anyone who has a body. If the process is a peeling back of layers, the process needs to be something where anyone can think that they have those layers.

OK, so, given all of that, what do we see in Psycho-Analysis?

The basic premise is that the patients’ symptoms are caused by unresolved conflicts from the past which they have purposely forgotten in order to not have to deal with them (“repressed”). These must be dealt with in reverse chronological order, that is, you have to resolve the most recent first. There are various techniques for uncovering the memories so that the patient can deal with the repressed conflict but one of the chief ones is doing free association with dreams, guided by the therapist.

So, how does this correspond to what we’d expect to see in a treatment that doesn’t work for a condition which will eventually get better on its own?

Perfectly.

We have an indefinite process with distinct steps—the uncovering of each individual repressed conflict (and its resolution, though that’s often easy once it’s faced directly). This allows a feeling of accomplishment with each step. We also check the box of fulfilling some other need—regularly spending time with someone who is interested in us usually feels good. Indeed, a noted feature of psychotherapy is “transference,” which is the patient feeling for the therapist feelings that they “actually” have for someone else. Often this is sexual attraction, but it can be anything—friendship, a parent-child relationship, etc. Of course, another interpretation of this is that the patient, who is lonely in some way, is starting to believe that the therapist is meeting this need. That will certainly provide the reason to keep coming back.

We also have a peeling back of layers. Each repressed conflict must be dealt with before the next one, starting from the most recent to the oldest. This can be terminated at any time—once the symptoms stop, you conclude that you’ve finally uncovered the original repressed conflict. We also have the feature that anyone can do the work. One of the main techniques is to free associate on the substance of one’s dreams. We all dream, and anyone can say whatever comes into one’s head when thinking of some part of the dream. The analyst’s chief job in this free association is to direct it. The analyst picks up on the key parts and asks for more free association on that, as well as asking questions about the subject. Whenever that stops working, there are always more dreams and more free associations to be made. Truly, anyone can do it.

In short, I could not have predicted Psycho-Analysis merely by the assumption that it doesn’t work at treating conditions which tend to get better on their own, but nothing about it surprised me at all.

Well, that’s not quite true. I didn’t expect Freud to redefine “sexual” to mean “sensory.” Which means that a lot of his theories about things like the Oedipal complex aren’t nearly as whackadoodle as they sound when you first hear them. I’m dubious that they’re true, but they’re not “had your brains surgically replaced with rat droppings” insane.

Unsustainable Things Give the Biggest Short-Term Benefits

Change in dynamic systems always brings with it opportunities, and, in particular, unsustainable opportunities. These opportunities come from the mismatch between the parts of the system adapted to the new system and the parts which have not yet adapted. And unsustainable things usually give the biggest short-term benefits, which creates an incentive for people to instigate change in order to take advantage of the huge short-term benefits available before the system has adapted.

A simple example can be seen in the inflation or deflation of a currency. Let’s take deflation since it’s less common and less likely to have negative associations. In deflation, money is removed from an economy. The same amount of economic activity can go on as long as the price of everything falls, and indeed this is what will eventually happen as the people who still have money offer less of it for goods and services and, out of desperation, the sellers accept it. The money then flows from the people who have it to the people who don’t, prices tend to fall, and we’re eventually back to where we started but with different numbers. Instead of the average wage being one Florentine per hour, it’s now half a Florentine per hour, and instead of a loaf of bread costing one Florentine it now costs half a Florentine. (Florentine is, I hope, a made-up currency purely for the purpose of illustration. It can be paper or gold or platinum, it doesn’t matter.) So the same amount of labor buys the same amount of bread, but the numbers have changed. We’re back to a stable situation, because a human economy needs (roughly) a certain relationship between the price of labor and the price of bread in order to function. It will go back to that. But what happened along the way? A lot of things, including a lot of suffering, but the relevant part here is a lot of opportunity.

If a person foresees the coming deflation, he will do what he can to save money, knowing that it will go up in value. He will forgo luxury goods and save, while he works extra hours to amass even more money. Then when the deflation hits he finally pays himself back, with all the money he saved buying twice what it would have back when he earned it. His new riches will only last as long as his savings; eventually he will have to go back to work and there will be the same relationship between his labor and the things he can buy with it as before the deflation. But while it lasts, he’s living high. And people who realize this will have a motivation to try to influence government policy to create deflationary periods. If his country is on a gold standard, he will have a temptation to help revolutionaries who want to sink ships carrying other people’s gold.
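The Florentine arithmetic above can be sketched numerically. (All figures here are the essay’s hypothetical illustration, not real economic data.)

```python
# Hypothetical Florentine example: deflation halves all prices and wages,
# so routine purchasing power is unchanged, but money saved *before* the
# deflation buys twice as much after it.

wage_before, bread_before = 1.0, 1.0   # Florentines per hour / per loaf
deflation_factor = 0.5                 # prices and wages both halve

wage_after = wage_before * deflation_factor
bread_after = bread_before * deflation_factor

# An hour of labor buys the same amount of bread before and after:
loaves_per_hour_before = wage_before / bread_before
loaves_per_hour_after = wage_after / bread_after
print(loaves_per_hour_before, loaves_per_hour_after)  # 1.0 1.0

# But 100 Florentines saved before the deflation now buy twice the bread:
savings = 100.0
print(savings / bread_before, savings / bread_after)  # 100.0 200.0
```

The asymmetry in the last two lines is the entire short-term opportunity: the worker’s ongoing wage gains nothing, while the hoarded savings double in purchasing power until they are spent.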

(We don’t see deflation nearly as often because far more people appreciate the potential for personal short-term benefit in inflation, but that’s a discussion for another day.)

You see similar opportunities for short-term gain in social changes as you do in economic ones, though because society is more complex and also more subtle than economics, these are often better disguised. Let’s take a simple case, though. Suppose a man in the 1950s desires to insert his penis into the vaginas of many women who, unlike him, are not interested in being promiscuous. The number of promiscuous women is irrelevant to this man since promiscuous women are, by hypothesis, not the object of his desire. If you need a story to make this more plausible, suppose that he is attracted to the feeling of conquest in bedding a woman who is saving herself for marriage, or if that is too old fashioned for you, who only feels sexual attraction within the context of what she believes to be a long-term relationship. In stable times, this will not work. His dreams of many such penis-insertions will result in very few actual insertions, and most of those will end up being with women who deceived him while he was trying to deceive them. He may, however, have the opportunity to realize his dream during times of social change.

If the social norms protecting women who are only interested in coitus within the confines of marriage or at least a long-term relationship are shifting, some of these women will rely on the old social protections while they are no longer afforded and will, because of that, be easily deceived. To give a concrete example, suppose that women no longer tend to stay near family members but instead are exposed to unrelated young men whose reputation they do not know. Let us suppose, for example, that public schooling has been instituted and that automotive transport has brought a large number of people together, and moreover it has become normal for teenagers to use cars to go to places where none of their family are. While people are still getting used to this new normal, some young women may still rely on reputation, and on their family keeping males of ill intent away, to filter out such males. And so a pretty face coupled with charming words may well convince one of them that she is consummating a marriage that the two of them effected (the sacrament of marriage is confected by the couple, not by any priest or officiant), while he has simply lied to her because he is a bad man.

This state of affairs will not last; young women will, fairly quickly, learn to rely on different things to vet males than applied in their old environment. But during this transition, they will have none of these things, and some will be easy prey.

It is interesting to note, though few will care because people are naturally less sympathetic to males and even less so to bad males, that the changing social norms will also result in young women who are eager to be promiscuous having a better shot at this hypothetical male who only desires to insert his genitalia into women who wouldn’t want him if they knew what he was doing. During these hypothetical changes in social norms, he will be far more easily misled into thinking that all women are shrinking violets who object to using sexual intercourse like heroin because that might as well have been the case under the previous social norm and the exceptions were easy to spot.

When everyone gets used to the new circumstances, things will return to their previous difficulty, albeit with small modifications for differences in exact circumstance. People will develop new ways of getting to know a person’s reputation, people will treat strangers as unfiltered by people they trust, etc. etc. etc. There will be no lasting benefit, but there can be huge short-term benefits.

(Bear in mind that this example was a change in social circumstances that didn’t alter people’s fundamental preferences. It’s not an example of temporary sterilization. That will still cause changes that can be taken advantage of, but it also alters people’s fundamental preferences and the changes that will be adapted to are in things affected by it but not its direct consequences.)

The example I gave above was of a social change induced by a shift in (transportation) technology, which our hypothetical cad had no real control over. Yet even there, you can imagine, if he was sufficiently far-sighted, how he could champion government funding for roads as well as mandatory public schooling.

In practice, of course, the sorts of advocacy that people can have on social changes tend to be far more limited in effect and tend to look far more like simple bad advice. Loosen up, don’t be such a prude, you only want to treat sex like it’s not a safer form of heroin because the mean Christians are trying to control you, etc. etc. etc. These people are not, in the main, Machiavellian masterminds who are trying to create chaos to take advantage of it before they settle down. Mostly they are fools who think that the good times will last forever. In ten or fifteen years they’ll probably be writing op-eds about how great jumping off the cliff was but you don’t want to take things to their logical conclusion, you just want to keep falling forever because it’s a lot more fun. What they’re trying to do is to get the advantages of the change.

A big part of why they don’t realize that this is what they’re doing is because a lot of people never consider that human beings have two phases: childhood and adulthood. Childhood is a time of change, when human beings are easily molded. People can still change in adulthood, but nowhere nearly as easily. Accordingly, if you institute a social change in all of society, it will take far more hold in the young than in older people. The young will take it to its logical conclusions because they’re not held back by being stuck on adaptations to a previous order.

To give an example (painted with an absurdly broad brush), social norms were changed in the 1970s to where family, friends, and acquaintances no longer protected young women from the sexual advances of bad men. So for a decade or so, bad men could sexually harass women to their heart’s content and it was a cad’s paradise. But then young women who were raised without the expectation of social connections helping them adapted to the circumstance and sought the protection of law, and we had the crime of sexual harassment, as well as all sorts of corporate policies against it. And things went back to more-or-less normal.

As a brief aside, it is amusing to see people who grew up at just the right time think that the 1970s were representative of how society worked throughout all of human history up until some people agitated for legal protections. These people have clearly never watched movies from before the 1970s! Back then, important customers could get thrown out of an office for making advances on a secretary in terms sufficiently veiled that they’d never get past the initial stages of filing a sexual harassment lawsuit. Heaven help an employee who was sexually aggressive with fellow employees! This weird historical myopia is a subject for another day, but it is funny how people have managed to continuously think of their grandparents as downtrodden slaves and themselves as the first generation to be free for several generations in a row.

Anyway, the amnesiac attitude towards the developmental stages of human beings is often behind quite a bit of agitation for social change; the people doing the agitating only ever think about what things will be like when people set in the old ways partially change over, and are always shocked at what people who grow up with the changes do in order to lead human lives within the new order.

Admittedly, part of that is that people rarely adapt to change well within a single generation. They go to excess on some things and utterly miss out on others. It takes time to refine complex systems. The people having to do the adapting often suffer for it, too. Adults have fewer needs because their lives are already largely set; children have a ton of work to do in setting up their lives and will often do less of it due to the uncertainty of tumultuous times. The adults who advocate for social change thus reap more of the rewards and pay fewer of the costs, then blame the new generation for not doing as well as them. It’s a bit cheeky to burn the furniture then complain that people don’t sit down, but then most people are not philosophers.

One final note I should add is that none of the above means that social change is always and everywhere bad. Much of it is inevitable with a changing environment (such as is caused by developments in technology). Some of it is needed merely in order to fix the mistakes of the past. Indeed, as the paradox of Chesterton’s post states, you need constant change merely to be conservative. As he so rightly said, if you leave a white post alone, it will, in short order, become a dirty grey post. Only by continually repainting it white will you and future generations have a white post.

Change there must be, but it’s often best to limit it to fixing mistakes. And have a thought for the people who have to grow up in the new system because they won’t have the advantages of having grown up in the old one. Only their descendants will have that advantage, and only if the people who have to grow up in the new system don’t change it again.

Psycho-Analysis Began in Hypnosis

In my (low-key) quest to understand how on earth Freud’s theories were ever respected, I’ve recently read Five Lectures on Psycho-Analysis. It’s definitely been interesting. (If you don’t know, this is the transcript of five lectures he gave on five consecutive days at Clark University in Worcester, Massachusetts in 1909 which were meant to give a concise summary of Psycho-Analysis.)

Something I did not realize, but which makes perfect sense in retrospect, is that Psycho-Analysis began in hypnosis. A tiny bit of background is necessary, here: In the 1800s and early 1900s, the term “hysteria” seems to refer to any idiopathic problem in women with severe physical symptoms. Basically, when a woman developed bad symptoms and called in a doctor and he could find no physical cause, the diagnosis was “hysteria,” which meant, in effect, “I don’t know, in a woman.” At this point, since the symptoms had no physical cause, it was assumed that they must have mental causes, and so doctors of the mind would step in to try to help, supposing, of course, that the patient or her family could afford it.

Freud begins with an interesting story about a patient that a colleague of his, Dr. Breuer, was treating. It was a young woman under great stress (nursing her dying father) who started developing a bunch of really bad symptoms that sound, to my ear, like a series of small strokes. She couldn’t use her right arm or leg for a while, sometimes she couldn’t use her left side, she forgot her native language (German) and could only speak English, etc. She also developed a severe inability to drink water and survived for several weeks on melons and other high-water foods. And here’s where it gets interesting. Dr. Breuer hypnotized her and in a hypnotic state she related the story of having gone into a companion’s room and seen the woman’s dog drinking from a glass. This disgusted her terribly but she gave no indication of it because she didn’t want to offend the woman. He then gave the young woman a glass of water, brought her out of hypnosis, and she was able to drink normally from then on.

Freud moved away from hypnosis for several reasons, but the big one seems to be that most people can’t be hypnotized, which makes it a therapeutic tool of dubious value. The particulars of how he moved away are interesting, but I’ll get to that in a little bit. Before that, I want to focus on the hypnosis.

The history of hypnosis is interesting in itself, but a bit complex, and the relevant part is really how it was popularly perceived rather than what it was intended to be. In its early stages, hypnosis was seen as something very different from normal waking life and, as a result, excited an enormous amount of interest from people who desired secret knowledge of the universe’s inner secrets. There were plenty of people who wanted to believe in a hidden world that they could access if only they had the key (spurred on, I suspect, by the many discoveries of the microscope in the late 1600s and the continued discoveries as a result of better and better microscopes). Hypnotism, where a man’s mind seemed to alter to a completely different state, and in particular where it could receive commands that it would obey without remembering in a subsequent waking state, was perfect for just such a belief. Here there seemed to be another mind behind the mind we observe, one which seemed to govern the observable mind’s operation. This is the sort of stuff of which real power is made—if you can control the real source of the mind, you can control the mind!

This context really makes Psycho-Analysis’s model of the compartmentalized mind, and further its insistence on the power of the sub-conscious mind, make sense.

As I said, Freud abandoned hypnotism, and the means by which he did it really should have been a tip-off to his whole theory being wrong. What led him to discard hypnotism were some experiments he became aware of in which a person who could not remember what he did under hypnosis could be induced, without any further hypnosis, to remember. Freud took this only instrumentally, rather than considering that it undermined the whole idea of the powerful subconscious, and went about bringing up the “repressed” memories which were (putatively) causing physical symptoms by talking with the patient without hypnotism. I suppose that the idea of this secret knowledge was too attractive to give up.

How to Balance Gratitude With Ambition

I was watching a Chris Williamson Q&A video recently and a question he was asked was how to balance gratitude with ambition (or aspiration for improvement, if you dislike the term ambition). The exact phrasing of the question was:

How do I manage the dichotomy between being grateful for how far I’ve come and wanting to become more? The dichotomy between working for my future and being present in the moment.

There are several answers to this, and the thing is, they’re all primarily religious. It’s actually kind of interesting how often hard-won, top-level secular wisdom is beginner-level religious education. The Jewish sabbath is exactly this. God created the heavens and the earth in six days, and on the seventh day God rested, so human beings will work for six days and rest on the seventh. (Bear in mind that rest implies contemplation, not merely sleeping.) There you go, there’s your management of the dichotomy between working and gratitude. (The Christian moving of the day of rest to Sunday is an interesting and rich topic, but all of that rich symbolism doesn’t materially affect the current subject.) To put this in secular terms, a regular 6-to-1 balance of time dedicated to work with time dedicated to contemplation will keep you in balance. If you keep it regular (that is, according to a rule), it will ensure that the effects of contemplation do not wear off. And guess what: you need to impose rules on yourself to make yourself do it because human beings don’t perfectly auto-regulate. (Just don’t make the rules so rigid you can’t live; the sabbath was made for man, not man for the sabbath.)

Another answer, here, is to keep God always in mind. This will make you strive to be perfect as your Father in heaven is perfect and also make you grateful for all that He’s already given you.

Here’s where Jordan Peterson’s language of “God is the highest good” falls a bit short, since keeping the highest good in mind will stimulate ambition, but it doesn’t tend nearly so much to gratitude. For gratitude you need to keep in mind the nothingness from which you came and which you could, apart from the positive action of The Good, become again. This requires a leap of faith that the world is not evil, though. If you can do this, you’re not going to be secular for long, and the whole exercise of trying to put this into secular language will be unnecessary. If you can’t take this leap of faith that the world exists because of good, then you’ll never actually be grateful anyway. People try to treat gratitude as if it needed no object, but it does: gratitude is always gratitude to someone. You don’t have to conceive of God as a person to be grateful to Him, though it helps. But if the world is just a cruel joke with no punchline which no one told, gratitude is nonsensical. But here’s the thing: if you aren’t sure whether life is a cruel joke with no punchline that no one has told, that is equally paralyzing.

To see why, consider this thought experiment: you receive a text message from a friend which says something complimentary about you, but there are enough odd word choices that you think it might just be his phone unlocked in his pocket interacting with auto-correct. Try to feel grateful for this message which you think might be a real compliment and might just be random noise that accidentally looks like a message. You will find that you can’t do it.

Nevertheless, it can still be interesting to say what is true, even if it will do no one any good: the way you keep perspective is by comparing, not to one thing, but to two things. If you want to keep perspective on your achievements, you must compare them both to the fullness of what you can achieve as well as to the nothing which is the least you could have achieved. Comparing to only one will not give you a proper perspective, because neither, on its own, is the full picture. Only by looking at the full picture will you have a correct perspective on where your achievements are within it. This is as true of metaphorical photographs as it is of literal photographs.

Socially Awkward Women Have a Really Hard Time

I came across the subject of how women interact with each other socially when studying female bullying, originally with the books Queen Bees and Wannabes and Odd Girl Out. (They’re both very interesting books and I recommend them.) I’ve studied more about it since then and one of the conclusions I’ve come to is that socially awkward women have an incredibly hard time. (This probably includes, but certainly is not limited to, women on the autism spectrum.)

The background you need to know (and will probably know better than I do if you are female, in which case please bear with me) is that women tend to prefer, within social interactions, subtle interactions to explicit ones. You can tell Just So evopsych stories about women being more vulnerable and needing to not offend people to explain it if you like, but the preference for subtle nudging over direct confrontation means that women are (as a rule) highly attuned to subtle signals. (None of this comes with any value judgement attached; like all natural substrates it is the canvas upon which moral virtues are painted—in other words, it can be used well or badly.) In general this works out, in much the same way that if you have a quiet speaker and a sensitive microphone, you get a recording at a normal volume. Or to vary the metaphor, if you have a dim light and a wide-open pupil, your eye sees clearly.

By contrast—and of course I’m painting with a broad brush—men tend to dislike subtlety in social interactions. We value openness and directness. It does need to be said that that’s not the same thing as being a bull in a china shop. You can be direct, quiet, and precise—hence Teddy Roosevelt’s famous advice to speak softly and carry a big stick.

Now, it’s fairly obvious that these two strategies don’t mesh perfectly; when the male is trying to communicate to the female this can be like shouting into a sensitive microphone, and when the female is trying to communicate to the male this can be like whispering into a mic with the gain turned really low. This often causes problems for males and females who are just starting to communicate with each other (i.e. teenagers), but women pretty quickly learn to stop looking for subtle cues from men, often with the explanation that “men are simple” or “men are dumb.” A similar phenomenon happens when a woman is first married—she’ll often be trying to figure out what’s wrong all the time until she figures out that if something’s wrong the man will say so, and that most of the time when she can’t figure out what’s going on with him, it’s not that he’s being too subtle or she not sensitive enough, it’s that nothing (relevant) is going on. This is the classic case of the woman wondering why the man is staring off into space and trying to guess why he’s angry at her while he’s just trying to figure out whether he thinks it’s actually plausible that Batman could beat Superman in a fight. I mean, Superman has super-speed, so even if Batman has kryptonite…

And, again, after a while most young wives figure out that a husband staring off into space probably doesn’t mean anything, and “men are just weird/simple/stupid/big children/different”.

All well and good for women interacting with males.

But for the most part, it seems that women can’t learn to make these allowances for other women.

And this causes enormous problems for women who need them.

I’m speaking, of course, of socially awkward women. They don’t give off appropriate subtle cues, especially the positive ones, which often causes other women to take offense. This probably needs some explanation.

Often, the way women communicate that they have been offended is to somewhat reduce the amount of positive signals they’re giving, or to still give them but to make them less enthusiastic. Since the other woman is hyper-vigilant and analyzes her behavior in great detail to see where she might have given offense, she’ll probably figure this out and take action to repair the relationship. If the woman does not do this analysis and take that action, this communicates her disinclination to a close relationship, i.e. is an insult. Hence the offense.

A socially awkward woman may or may not notice the subtle variations in the other woman’s positive signals, but if she does she’ll have no idea how to respond and so the other woman is highly likely to take offense when she gets it wrong.

There’s also a pretty good chance that the socially awkward woman will have no idea how to respond properly when her female friends try to do collaborative emotional processing with her, making the experience unsatisfying for them, if they don’t instead interpret her actions as judgmental or negative and take offense.

All of this will cause female friendships to be very stressful for the socially awkward woman, and in all likelihood, short-lived.

None of these problems apply to friendships with males, though, so there’s a pretty good chance that you’ll find socially awkward women having mostly male friends. This has its own pitfalls, of course, because a woman who shares a man’s interests and likes talking to him about them is extraordinarily attractive to males who are looking for a wife. There’s the further issue that women of marriageable age usually won’t talk (extensively) to males of marriageable age unless they’re open to romantic interest because they’re very sensitive to whether there’s interest and careful to not encourage it. Again, I’m painting with a very broad brush and there are tons of exceptions to that—especially in contexts which are not purely social, such as workplaces. But the point is, there’s a real danger in her friendships with males that the male will develop romantic interest in the socially awkward woman and if she’s not interested that will kill the friendship.

So we come back to the title of this post. Life is really hard for socially awkward women, and I think they deserve more sympathy than they often get.

Testing Computer Programs

My oldest son, who does not yet know how to program, told me a great joke about programmers testing the programs they’ve written:

A programmer writes the implementation of a bartender. He then goes into the bar and orders one beer. He then orders two beers. He orders 256 beers. He orders 257 beers. He orders 9,999 beers. He orders 0.1 beers. He orders zero beers. He orders -1 beers. Everything works properly.

A customer walks in and asks where the bathroom is. The bar catches fire.

It’s funny ’cause it’s true.

It’s easy, when you design a tool, to test that it works for the purpose the tool exists for. What it’s very easy to miss is all of the other possible uses of the tool. To take a simple example: when you’re making a screwdriver, it’s obvious to test the thing for driving screws. It’s less obvious to test it as a pry bar, a chisel, an awl, or a tape dispenser.
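The joke above can be sketched as a short, entirely hypothetical Python program: the author tests every numeric edge case he can think of, but the one path nobody tested is the request that isn’t an order at all.

```python
def serve(order):
    """A toy 'bartender': serve an order for some number of beers."""
    if not isinstance(order, (int, float)) or isinstance(order, bool):
        # The path nobody tested: a request that isn't an order at all.
        raise TypeError(f"bar catches fire: unexpected request {order!r}")
    if order != int(order):
        raise ValueError("beers are served whole")
    if order < 0:
        raise ValueError("cannot serve a negative number of beers")
    return f"{int(order)} beer(s) coming up"

# The programmer's test suite: every numeric edge case behaves sensibly.
for n in (1, 2, 256, 257, 9999, 0):
    print(serve(n))
for bad in (0.1, -1):
    try:
        serve(bad)
    except ValueError as reason:
        print("politely refused:", reason)

# Then a customer walks in and asks where the bathroom is.
try:
    serve("where is the bathroom?")
except TypeError as fire:
    print(fire)
```

The point isn’t the validation logic itself; it’s that the `TypeError` branch only exists because someone imagined an input outside the tool’s intended purpose, which is precisely the kind of input a test suite written by the tool’s maker tends to omit.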

This disparity is inherent in the nature of making tools versus using them. Tools are made by tool-makers. The best tool makers use their own tools, but they are only one person. Each person has his way of solving a problem, and he tends to stick to that way because he’s gotten good at it. When he goes to make a tool, he makes it work well for how he will use it, and often adds features for variations on how he can think to use it to solve the problems he’s making the tool to solve. If he’s fortunate enough to have the resources to talk to other people who will use the tool, he’ll ask them and probably get some good ideas on alternative ways to use it. But he can’t talk to everyone, and he especially can’t talk to the people who haven’t even considered using the tool he hasn’t made yet.

That last group is especially difficult, since there’s no way to know what they will need. But they will come, because once the tool exists, people who have problems where this new tool will at least partially solve their problem will start using it to do so, since they’re better off with it than they were before, even though the tool was never meant to do that.

This isn’t much of a problem with simple tools like a screwdriver, since it doesn’t really have any subtleties to it. This can be a big problem with complex tools, and especially with software. When it comes to software design, you can talk to a bunch of people, but mostly you have to deal with this through trial-and-error, with people reporting “bugs” and you going, “why on earth would you do that?” and then you figure it out and (probably) make changes to make that use case work.

The flip side is a bit more generally practical, though: when considering tools, you will usually have the most success with them if you use them for what they were designed to do. The more you are using the tool for some other purpose, the more likely you are to run into problems with it and discover bugs.

For me this comes up a lot when picking software libraries. Naive programmers will look at a library and ask, “can I use this to do what I want?” With more experience, you learn to ask, “was this library designed to do what I want to do?” Code re-use is a great thing, as is not re-inventing the wheel, but this needs to be balanced out against whether the tool was designed for the use for which you want to use it, or whether you’re going to be constantly fighting it. You can use the fact that a car’s differential means that its drive wheels will spin in the mud to dig holes, but that will stop working when car manufacturers come out with limited-slip differentials because they’re making cars for transportation, not digging holes.

That’s not to say that one should never be creative in one’s use of a tool. Certainly there are books which work better for propping up a table than they do for being read. Just be careful with it.

The Idolatry of Art

Something I’ve come across in real life, but far more in (English) literature from the early-through-mid 1900s, is a weird idolatry of art. In real life this tends to be an excuse by young women to tolerate things they shouldn’t tolerate from good looking men they’re attracted to. In literature, though, there is generally far less of an obvious explanation for it.

Chesterton talked about the phenomenon as “art for art’s sake” and the thing always strikes me as having one of the great hallmarks of desperation: a mighty struggle to pretend that a thing is what one wants it to be.

I think I would do well, at this point, to give an example of what I mean. A good one that comes to mind is in Dorothy L. Sayers’ masterpiece, Gaudy Night.

“You see how easy it is, when you stick to the rules,” said Wimsey. “Miss Vane feels no compunction. She wipes me out with a firm hand, rather than damage my reputation. But the question isn’t always so simple. How about the artist of genius who has to choose between letting his family starve and painting pot-boilers to keep them?”

“He has no business to have a wife and family,” said Miss Hillyard.

“Poor devil! Then he has the further interesting choice between repressions and immorality. Mrs. Goodwin, I gather, would object to the repressions and some people might object to the immorality.”

“That doesn’t matter,” said Miss Pyke. “You have hypothesized a wife and family. Well—he could stop painting. That, if he really is a genius, would be a loss to the world. But he mustn’t paint bad pictures—that would be really immoral.”

“Why?” asked Miss Edwards. “What do a few bad pictures matter, more or less?”

“Of course they matter,” said Miss Shaw. She knew a good deal about painting. “A bad picture by a good painter is a betrayal of truth—his own truth.”

Now that I’ve typed it out it’s not quite what I had in mind. You can see it, perhaps more clearly, in The Unsolved Puzzle of the Man With No Face. I can’t give details without spoiling the story (it’s a short story), but murder is committed because of an obsession with art and offense taken at the quality of the art not being recognized.

You also see this kind of thing, though not shared by the rest of the cast, in the character of Henrietta Savernake in The Hollow. She is disconnected from the rest of humanity because she is so intensely an artist, and art is more important than life. She went around in a daze trying to find the perfect model for a statue she was sculpting, then destroyed it because she realized she had, in some indefinable way, included the spite of the model (who blathered on self-importantly while modeling) into the face which otherwise had exactly what she wanted. But she wasn’t just discontent with it, she woke up from sleeping with this terrible revelation and had to run and destroy the sculpture immediately while she still had the power to do it and wasn’t too attached to it. You can also see this in how she couldn’t mourn the victim, she could only make a sculpture to express her grief.

You can see a similar thing, though in negative, in the discussion of Ann Dorland’s paintings in the Lord Peter story The Unpleasantness at the Bellona Club. Ann Dorland’s paintings were judged terrible. Not merely incompetent, but outright bad. It has something of the flavor of the ancient Greek horror at hubris.

I’ve seen many similar things which, unfortunately, are not coming to mind; hopefully you have too and know to what I am referring.

The phenomenon of artist-as-creative-god seems to be a phenomenon of, primarily, the first half of the nineteen hundreds. As far as I can tell it did predate the first world war, though it does not seem to have outlasted the second.

I can’t help but wonder if this is related to what G.K. Chesterton said (in Orthodoxy) about the will-worshipers:

At the beginning of this preliminary negative sketch I said that our mental ruin has been wrought by wild reason, not by wild imagination. A man does not go mad because he makes a statue a mile high, but he may go mad by thinking it out in square inches. Now, one school of thinkers has seen this and jumped at it as a way of renewing the pagan health of the world. They see that reason destroys; but Will, they say, creates. The ultimate authority, they say, is in will, not in reason. The supreme point is not why a man demands a thing, but the fact that he does demand it. I have no space to trace or expound this philosophy of Will. It came, I suppose, through Nietzsche, who preached something that is called egoism. That, indeed, was simpleminded enough; for Nietzsche denied egoism simply by preaching it. To preach anything is to give it away. First, the egoist calls life a war without mercy, and then he takes the greatest possible trouble to drill his enemies in war. To preach egoism is to practise altruism. But however it began, the view is common enough in current literature. The main defence of these thinkers is that they are not thinkers; they are makers. They say that choice is itself the divine thing. Thus Mr. Bernard Shaw has attacked the old idea that men’s acts are to be judged by the standard of the desire of happiness. He says that a man does not act for his happiness, but from his will. He does not say, “Jam will make me happy,” but “I want jam.” And in all this others follow him with yet greater enthusiasm. Mr. John Davidson, a remarkable poet, is so passionately excited about it that he is obliged to write prose. He publishes a short play with several long prefaces. This is natural enough in Mr. Shaw, for all his plays are prefaces: Mr. Shaw is (I suspect) the only man on earth who has never written any poetry. But that Mr. Davidson (who can write excellent poetry) should write instead laborious metaphysics in defence of this doctrine of will, does show that the doctrine of will has taken hold of men. Even Mr. H. G. Wells has half spoken in its language; saying that one should test acts not like a thinker, but like an artist, saying, “I FEEL this curve is right,” or “that line SHALL go thus.” They are all excited; and well they may be. For by this doctrine of the divine authority of will, they think they can break out of the doomed fortress of rationalism. They think they can escape.

The Modern world, which was very much confronting the problems of Modern Philosophy in the late 1800s, faces the problem of the radical skepticism which defined Modern Philosophy. It is in the prison of doubt and has trouble bringing itself to that faith required even for simple things like getting up in the morning. (If anyone doubts this, one merely needs to look at the rate of prescriptions for antidepressants.) It strikes me that there might be a relation, here. That is, the worship of art was, perhaps, a moderately disguised worship of will in an attempt to evade the mental paralysis of Modern Philosophy. It was not sensible because it was driven by desperation.

I don’t know if this is the explanation, but it does explain the phenomenon.

Speaking Ill of the Dead

It is a very old aphorism that one should not speak ill of the dead. According to Wikipedia, it dates back at least to Chilon of Sparta. To the degree that justification is given for it, it’s usually that the dead are not here to defend themselves from accusations. Like many aphorisms, it has some wisdom to it, but it can be taken too far.

The main thing to say for it is that, in the ordinary course of life, the wicked deeds of the dead are no longer relevant. The obvious practical exception is when the dead leave behind them some means to give restitution for their wickedness; if a man stole a horse and dies, the horse should be given back to its owner, and establishing this will necessarily entail some speaking of the fact that the horse was stolen. But, leaving aside this kind of restitution, whatever bad deeds a man did while alive, he no longer has the power to harm anyone, so there is no benefit to be gained.

Not often spoken about but also relevant is that anyone who valued something good about the dead man will have that tarnished by accusations against him. There is, generally, nothing gained by diminishing their ability to enjoy what good the dead man did.

The more common reason given—that a dead man is not around to defend himself—does also have some merit to it. The dead man would usually be in the position to give the strongest defense of himself, so any such accusations will have the suspicion attached to them that they could not have stood up to defense.

So much for it.

There is a place, however, where it is clearly inapplicable: when the dead man has published things which are still read/watched/have influence. A good example is Christopher Hitchens. He was an atheist popular among atheists in the first decade and a half of the third millennium. He is still often quoted, though, like most people, his influence has diminished since his death. It has diminished, but it has not gone away. People still quote him, and find him inspirational in their rejection of religion. And the problem is that it is his personality that attracts people, not the quality of his arguments. In fact, so far as I know, he never made arguments. All he ever made was impassioned rhetoric. (See my video, Christopher Hitchens Isn’t Serious: No, Heaven Is Not A Spiritual North Korea. In at least one place I break up his flow just to show that, absent his voice carrying one through, his conclusion in no way followed from what he said before it.)

Impassioned rhetoric is a kind of argument, though mostly an implicit argument. It rests upon the premise that the one who is impassioned has a reasonable cause for the passion in his rhetoric. That is, the man himself is one of the premises in his argument. This is not unreasonable, but it does mean that the man must be examinable if his argument is to be considered. Since Christopher Hitchens’ impassioned rhetoric had, as its premise, the correctness of his judgment, we must be free to examine whether his judgment actually was correct. And there we get to the fact that it was not. Hitchens was, in fact, a habitual drunkard who didn’t think that anything he talked about so passionately when drunk was worth bothering about when he was sober. That is, the passion in his rhetoric came not from his own good judgment, but from a bottle.

The reasons why one should not speak ill of the dead do not apply here, for several reasons. In the first case, the man is still doing damage, so it is relevant to work to end that damage. In the second case, this is counteracting the man’s bad work, which itself gets in the way of people privately remembering him for whatever virtues he might have had. In the third case, death is not a free pass to cause as much harm as one can, and in the special case of Christopher Hitchens he very candidly admitted to being a drunk, so there’s no question of defending himself anyway.

The inexpensive written word (since publishing, and especially since digital distribution) and even more so video have created a new context which requires some revision to this ancient heuristic.

Chesterton on the Consequences of Not Having a Philosophy of Life

The best reason for a revival of philosophy is that unless a man has a philosophy certain horrible things will happen to him.  He will be practical; he will be progressive; he will cultivate efficiency; he will trust in evolution; he will do the work that lies nearest; he will devote himself to deeds, not words.  Thus struck down by blow after blow of blind stupidity and random fate, he will stagger on to a miserable death with no comfort but a series of catchwords; such as those which I have catalogued above.  Those things are simply substitutes for thoughts.  In some cases they are the tags and tail-ends of somebody else’s thinking.  That means that a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts.  He will only have the used-up scraps of somebody else’s philosophy; which the beasts do not have to inherit; hence their happiness.  Men have always one of two things: either a complete and conscious philosophy or the unconscious acceptance of the broken bits of some incomplete and shattered and often discredited philosophy.  Such broken bits are the phrases I have quoted: efficiency and evolution and the rest.  The idea of being “practical”, standing all by itself, is all that remains of a Pragmatism that cannot stand at all.  It is impossible to be practical without a Pragma.  And what would happen if you went up to the next practical man you met and said to the poor dear old duffer, “Where is your Pragma?” Doing the work that is nearest is obvious nonsense; yet it has been repeated in many albums.  In nine cases out of ten it would mean doing the work that we are least fitted to do, such as cleaning the windows or clouting the policeman over the head.  “Deeds, not words” is itself an excellent example of “Words, not thoughts”.  It is a deed to throw a pebble into a pond and a word that sends a prisoner to the gallows.  
But there are certainly very futile words; and this sort of journalistic philosophy and popular science almost entirely consists of them.

Some people fear that philosophy will bore or bewilder them; because they think it is not only a string of long words, but a tangle of complicated notions.  These people miss the whole point of the modern situation.  These are exactly the evils that exist already; mostly for want of a philosophy.  The politicians and the papers are always using long words.  It is not a complete consolation that they use them wrong.  The political and social relations are already hopelessly complicated.  They are far more complicated than any page of medieval metaphysics; the only difference is that the medievalist could trace out the tangle and follow the complications; and the moderns cannot.  The chief practical things of today, like finance and political corruption, are frightfully complicated.  We are content to tolerate them because we are content to misunderstand them, not to understand them.  The business world needs metaphysics — to simplify it.

Philosophy is merely thought that has been thought out.  It is often a great bore.  But man has no alternative, except between being influenced by thought that has been thought out and being influenced by thought that has not been thought out.  The latter is what we commonly call culture and enlightenment today.  But man is always influenced by thought of some kind, his own or somebody else’s; that of somebody he trusts or that of somebody he never heard of, thought at first, second or third hand; thought from exploded legends or unverified rumours; but always something with the shadow of a system of values and a reason for preference.  A man does test everything by something.  The question here is whether he has ever tested the test.

­—G.K. Chesterton, The Common Man, The Revival of Philosophy—Why?

It is, indeed, a problem that so many people have never put so much as a few minutes’ thought into their idea of what the world that they live in is and what are the things that they find in it. One runs into this constantly.

It is not new, of course. It is merely more obvious at present. It’s more obvious because modern cultures do not have any dominant philosophy of life and so unthinking people do not have that accidental consistency which can give the misimpression that they believe consistent things. Each “used-up scrap of somebody else’s philosophy” comes, in modern times, from a different somebody else, which makes this lack of understanding of what he is saying far more obvious.

Which is not to say, of course, that there never was a time and place where it was more common for people to be taught to think and actually do a little bit of it. One of the effects of modern culture being a muddle of many different people’s philosophies is that it discourages a great many people from doing any thinking, just as a storm discourages a great many people from going outside.

It is worth noting, though, that human beings have tended toward not thinking rigorously since the fall of man. Times and places vary with how much people actually bother to think, but they vary far more with how obvious it is that they haven’t.

G.K. Chesterton on Marriage

I was recently trying to find a quote from G.K. Chesterton on how the point of a wedding is the marriage vow, and the point of the marriage vow is that it’s daring. I wasn’t able to find the original; what I did find was a newspaper called The Holy Name Journal, which seems to have been from Michigan. In the August 1921 edition, someone quotes Chesterton’s article almost in full. Since it was only available as a photograph (though, thanks to Google, a text-searchable photograph), I transcribed it for easier quoting:

A writer in the Westminster Gazette recently made the proposal to alter the marriage formula: “As to the vow at the altar, it seems conceivable that under other conditions the form of words ordained by the Prayer Book might be revised.” And the writer adds that some might omit the words “to obey”, while others might omit the words “til death do us part.” The following is Mr. G.K. Chesterton’s rejoinder in The New Witness:

It never seems to occur to him that others might omit the wedding. What is the point of the ceremony except that it involves the vow? What is the point of the vow except that it involves vowing something dramatic and final? Why walk all the way to a church in order to say that you will retain a connection as long as you find it convenient? Why stand in front of an altar to announce that you will enjoy somebody’s society as long as you find it enjoyable? The writer talks of the reason for omitting some of the words, without realizing that it is an even better reason for omitting all the words. In fact, the proof that the vow is what I describe, and what Mr. Hocking apparently cannot even manage, a unique thing not to be confounded with a contract, can be found in the very form and terminology of the vow itself. It can be found in the very fact that the vow becomes verbally ridiculous when it is thus verbally amended. The daring dogmatic terms of the promise become ludicrous in face of the timidity and triviality, of the thing promised. To say “I swear to God, in the face of this congregation as I shall answer at the dreadful day of judgment, that Maria and I will be friends until we quarrel” is a thing of which the very diction implies the derision. It is like saying, “In the name of the angels and archangels and the whole company of heaven, I think I prefer Turkish to Egyptian cigarettes,” or “Crying aloud on the everlasting mercy, I confess I have grave doubts about whether sardines are good for me.” Obviously nobody would ever have invented such a ceremony, or invented any ceremony, to celebrate such a promise. Men would merely have done what they liked, as millions of healthy men have done, without any ceremony at all.

Divorce and re-marriage are simply a heavy and hypocritical masquerade for free love and no marriage; and I have far more respect for the revolutionists who from the first have described their free love as free. But the marriage service obviously refers to a totally different order of ideas; the rather unfashionable [stuff?] that may be called heroic ideas. Perhaps it is unreasonable to expect the fatigued fatalist of this school and period to understand these ideas; and I only ask here that they should understand their own ideas. Every one of their own arguments leads direct to promiscuity; and leaves no kind of use or meaning in marriage of any kind. But the idea of the vow is perhaps a little too bold and bracing for them at present, and is too strong for their heads, like sea air.

Empathy Is Such a Stupid Basis For Morality

If you’ve spent more than a few minutes arguing with atheists on the internet, the subject of how they justify morality will have come up and they will have tried to justify it by saying that “they have empathy”. Usually, though not always, in very self-satisfied tones. It is curious that they are oblivious to how stupid this is. And not just in one way.

The first problem, of course, is that empathy doesn’t inevitably lead to treating people well. It’s very easy to lie to people because one doesn’t want them to suffer, to give too much candy to a child because you can’t bear to hear them cry, to give alcohol to an alcoholic because he feels miserable without it, etc. Empathy also provides no check against suffering that cannot be seen. It’s hard to shoot a man standing in front of you, and not so hard to shoot him when he’s 200 yards away, and not nearly as hard when he’s inside of a building that you’re bombing. It can be downright easy when it’s giving orders to people who don’t feel empathy to execute people in a camp hundreds of miles away.

For that matter, empathy can even lead to being cruel; if two people’s needs conflict and one feels more empathy for one person than another, that empathy can lead one to harm the other for the sake of the one more empathized with. Parents are notorious for being willing to go to great lengths for the sake of their children, even to the point of doing all sorts of immoral things to spare their children far less suffering than the harm they cause to spare it. I can testify to the temptation. If I were to consult only my feelings and not my principles, there’s no limit to the number of people I would kill for the sake of my children.

Which brings us to another problem: empathy is merely a feeling. To claim that the basis of morality is empathy is to claim that the basis of morality is a feeling. In other words, “morality is based on empathy” means “do what you feel like.” That’s not morality, that’s the absence of morality. Moreover, human beings demonstrably feel like doing bad things to each other quite often.

(Unless, of course, the atheist is trying to claim that one should privilege the feeling of empathy over feelings experienced more strongly at the time, in which case there would need to be some rational argument given, not based in empathy, for why it should be thus privileged. But if one were to try this, one would run into a sort of Euthyphro dilemma—if empathy is good because it conforms to the good, then it is not the source of goodness, and it is a distraction to talk about it; if good is good because it conforms to empathy, then to call empathy good is merely to say that it is empathy, and there is no rational basis for preferring it to other feelings.)

The fact that people feel like doing bad things to each other really gets to the heart of the problem for the atheist. It’s all very well for the atheist to say “I prefer to harm no one.” He can have no real answer to someone else replying, “but I do.” Indeed, he has no answer. If you ever suggest such a thing, the atheist merely shrieks and yells and tries to shout down the existence of such a thing. His ultimate recourse is to law, of course, which means to violence, for law is the codified application of violence by people specially charged with carrying that violence out.

(It’s hardly possible to arrest someone, try them, convict them, and imprison them all without at least the threat of force from the police; if you don’t think so, try the following experiment: construct a medium sized steel box (with windows), walk up to some random person while manifestly carrying no weapons, and say “In my own name I arrest you and sentence you to twenty years inside of my steel box. Now come along and get in. I will not force you, but I warn you that if you do not comply I shall tell you to get in again.” Do this twenty or thirty times and count how many of them the person comes along and gets in.)

Of course, when the atheist appeals to the laws which enforce his preferred morality, we may ask where his empathy for the transgressor is. Where is his empathy for all of the people in prison? It must be a terrible feeling to be arrested by the police; where is the atheist’s empathy for them?

If you go looking for it, you will find that the atheist’s empathy is often in short supply, though he credits himself in full.

Contingency and Space

The natural theology argument for the existence of God from contingency and necessity rests on the existence of something contingent. This is remarkably easy to supply, since any telling of this argument is, itself, contingent, and supplies the necessary contingent thing. However, explaining why it is contingent sometimes confuses people, because the non-existence of the contingent thing at some point in time is most typically used.

There’s nothing intrinsically wrong with this, but it can accidentally mislead people into thinking that the causal chain that must be finite (since there cannot be an actual infinity) is a temporal chain of causation. E.g. I’m here because of my parents, who are here because of their parents, and so on back to the Big Bang, which is here because of God. This can be helpful to illustrate the concept of a causal chain, but it’s not the kind that’s actually used in the argument, since it’s not the sort referenced by “actual infinity”. What’s discussed is why the contingent thing is here, now, as in, what is giving it the power to exist this moment. It cannot be something that doesn’t exist, because things which don’t exist have no power. So it must be something that also exists right now. That thing which exists right now can either be contingent or necessary, and if contingent, it too must be dependent for its existence on something else which also exists right now. And so on; this is what must terminate in something necessary because there cannot be an actual infinity.

Something that my attention was drawn to by a commenter asking me a question in one of my videos is that one can use the existence of a thing in one part of space but not another as a demonstration of contingency. If a thing were necessary and not contingent, it would exist at every point in space, since a particular location cannot cause a necessary thing to not exist. Thus anything which is in one place but not another must be contingent. The advantage to demonstrating contingency in this fashion is that space is simultaneous, so a temporal sequence will not be suggested. A person is then less likely to be accidentally led astray into thinking of a temporal sequence of events, where the argument that an actual infinity cannot exist is less clear, since the moments of time don’t exist side-by-side. (From our perspective; all moments are present to God in His eternity, of course.)

New Religions Don’t Look Like Christianity Either

To those familiar with religions throughout the world, new religions like environmentalism, veganism, wokism, marxism, etc. are pretty obviously religions and are causing a lot of damage because that’s what bad religions do. People who are not familiar with any world religions besides Christianity frequently miss this because they think that all (real) religions look like Christianity but with different names and vestments.

I suspect that the idea that all religions look like Christianity was partially due to the many Protestant sects which superficially looked similar, since even the ones that did away with priests and sacraments still met in a building on Sundays for some reason. I suspect the other major part is that there is a tendency to describe other religions in (inaccurate) Christian terms in order to make them easier to understand. Thus, for example, Shaolin “monks”. There are enough similarities that if you don’t plan to learn about the thing, it works. It’s misleading, though.

You can see the same sort of thing in the Greek pantheon as it is presented to children in school, worked out so that each god has specific roles and relationships. It’s easy to learn, because it’s somewhat familiar, but it’s not very accurate to how paganism actually worked.

All of this occurred to me when I was talking with a friend who said that the primary feature of a religion, it seemed to him, was belief in the supernatural. The thing is, the nature/supernature distinction is a Christian distinction, largely worked out as we understand it today in the Middle Ages. Pagans didn’t have a nature/supernature distinction, and if you asked them whether Poseidon was supernatural they wouldn’t have known what you meant.

Would the ancient pagans have said that there were things that operated beyond human power and understanding? Absolutely, they would. Were they concerned about whether a physics textbook entirely described these things? No, not at all. For one thing, they didn’t have a physics textbook. For another, they didn’t care.

The modern obsession that atheists have with whether all of reality is described in a physics textbook is not really about physics, per se, but about one of two things:

  1. whether everything is (at least potentially) under human control
  2. whether final causality is real, i.e. do things have purposes, or can we fritter our lives away on entertainment without being a failure in life?

The first one is basically an Enlightenment-era myth. Anyone with a quarter of a brain knows that human life is not even potentially under human control. That it is can be believed, basically, only by rich people while they’re in good health and while they’re distracted by entertainment from considering things like plagues, asteroids, war, etc. Anyone who isn’t all of these things will reject number 1.

Regarding the second: ancient pagans didn’t tend to be strict Aristotelians, so they wouldn’t have been able to describe things in terms of final causality, but they considered people to be under all sorts of burdens: to the family, to the city, and possibly beyond that.

If you look at the modern religions, you will find the same thing. Admittedly, they don’t tend to talk about gods as much as the ancient pagans did, though even that language is on the rise these days. In what sense the Greeks believed in Poseidon as an actual human-like being vs. Poseidon was the sea is… not well defined. Other than philosophers, who were noted for being unlike common people, I doubt you could have pinned ancient pagans down on what they meant by their gods even if you could first establish the right terminology to ask them.

As for other things, environmentalism doesn’t have a church, but pagans didn’t have churches, either. Buddhists don’t have churches, and Hindus don’t have churches, and Muslims don’t have churches. Heck, even Jews don’t have churches. Churches are a specifically Christian invention. Now, many of these religions had temples. Moderns have a preference for museums. Also, being young religions, their rites and festivals aren’t well established yet. Earth Day and Pride Month and so on are all fairly recent; people haven’t had time to build buildings in order to be able to celebrate them well. (Actually, as a side note, it also takes time to commercialize these things. People underestimate the degree to which ancient pagan temples were businesses.)

Another stumbling block is that modern environmentalists, vegans, progressives, etc. don’t identify these things as religions—but to some degree this is for the same reason that my atheist friend doesn’t. They, too, think of religions as basically Christianity but maybe with different doctrines and holy symbols. They don’t stop to consider that most pagans in the ancient world were not in official cults. There were cults devoted to individual gods, and they often had to do with the running of temples. Normal people were not in these cults. Normal people worshiped various gods as convenient and as seemed appropriate.

There is a passage in G.K. Chesterton’s book The Dumb Ox which is related:

The ordinary modern critic, seeing this ascetic ideal in an authoritative Church, and not seeing it in most other inhabitants of Brixton or Brighton, is apt to say, “This is the result of Authority; it would be better to have Religion without Authority.” But in truth, a wider experience outside Brixton or Brighton would reveal the mistake. It is rare to find a fasting alderman or a Trappist politician, but it is still more rare to see nuns suspended in the air on hooks or spikes; it is unusual for a Catholic Evidence Guild orator in Hyde Park to begin his speech by gashing himself all over with knives; a stranger calling at an ordinary presbytery will seldom find the parish priest lying on the floor with a fire lighted on his chest and scorching him while he utters spiritual ejaculations. Yet all these things are done all over Asia, for instance, by voluntary enthusiasts acting solely on the great impulse of Religion; of Religion, in their case, not commonly imposed by any immediate Authority; and certainly not imposed by this particular Authority. In short, a real knowledge of mankind will tell anybody that Religion is a very terrible thing; that it is truly a raging fire, and that Authority is often quite as much needed to restrain it as to impose it. Asceticism, or the war with the appetites, is itself an appetite. It can never be eliminated from among the strange ambitions of Man. But it can be kept in some reasonable control; and it is indulged in much saner proportion under Catholic Authority than in Pagan or Puritan anarchy.

Mr. Rudyard Kipling and World Travel

In an essay about Rudyard Kipling, G.K. Chesterton commented on what the globe trotter misses out on:

Mr. Rudyard Kipling has asked in a celebrated epigram what they can know of England who know England only. It is a far deeper and sharper question to ask, “What can they know of England who know only the world?” for the world does not include England any more than it includes the Church. The moment we care for anything deeply, the world–that is, all the other miscellaneous interests–becomes our enemy. Christians showed it when they talked of keeping one’s self “unspotted from the world;” but lovers talk of it just as much when they talk of the “world well lost.” Astronomically speaking, I understand that England is situated on the world; similarly, I suppose that the Church was a part of the world, and even the lovers inhabitants of that orb. But they all felt a certain truth–the truth that the moment you love anything the world becomes your foe. Thus Mr. Kipling does certainly know the world; he is a man of the world, with all the narrowness that belongs to those imprisoned in that planet. He knows England as an intelligent English gentleman knows Venice. He has been to England a great many times; he has stopped there for long visits. But he does not belong to it, or to any place; and the proof of it is this, that he thinks of England as a place. The moment we are rooted in a place, the place vanishes. We live like a tree with the whole strength of the universe.

The globe-trotter lives in a smaller world than the peasant. He is always breathing an air of locality. London is a place, to be compared to Chicago; Chicago is a place, to be compared to Timbuctoo. But Timbuctoo is not a place, since there, at least, live men who regard it as the universe, and breathe, not an air of locality, but the winds of the world. The man in the saloon steamer has seen all the races of men, and he is thinking of the things that divide men–diet, dress, decorum, rings in the nose as in Africa, or in the ears as in Europe, blue paint among the ancients, or red paint among the modern Britons. The man in the cabbage field has seen nothing at all; but he is thinking of the things that unite men–hunger and babies, and the beauty of women, and the promise or menace of the sky. Mr. Kipling, with all his merits, is the globe-trotter; he has not the patience to become part of anything. So great and genuine a man is not to be accused of a merely cynical cosmopolitanism; still, his cosmopolitanism is his weakness. That weakness is splendidly expressed in one of his finest poems, “The Sestina of the Tramp Royal,” in which a man declares that he can endure anything in the way of hunger or horror, but not permanent presence in one place. In this there is certainly danger. The more dead and dry and dusty a thing is the more it travels about; dust is like this and the thistle-down and the High Commissioner in South Africa. Fertile things are somewhat heavier, like the heavy fruit trees on the pregnant mud of the Nile. In the heated idleness of youth we were all rather inclined to quarrel with the implication of that proverb which says that a rolling stone gathers no moss. We were inclined to ask, “Who wants to gather moss, except silly old ladies?” But for all that we begin to perceive that the proverb is right. The rolling stone rolls echoing from rock to rock; but the rolling stone is dead. The moss is silent because the moss is alive.

There is nothing inherently wrong with travel, or even travel for amusement. But Chesterton is fundamentally on to something when he takes issue with the people who think that traveling enlarges the soul. What travel does is broaden the soul. The problem is that there are not merely two dimensions but three; travel broadens the soul but tends to make it shallow. It makes it shallow because traveling is seeing life from the outside.

Life seen from the inside is love and all that that entails—labor and suffering and hardship and patience. Life seen from the outside—especially when you’re paying to see it—is all triumph and success. It would seem that this is getting the best of the bargain—all of the rewards without any of the work—but it fails for the same reason that going to a trophy shop and ordering yourself an extra large trophy is not nearly as satisfying as earning it in a karate tournament, despite all of the bruises and sore muscles. It fails because we were not put on this earth merely to enjoy it, but also to help build it up. Or to use a less extreme example, it is a much more rewarding thing to make a decent wine than to drink an excellent wine.

The technical term for this is secondary causation, though I prefer to call it delegation. God could have created the world without anything for us to do but to enjoy it, but instead he delegates part of the act of creation to us so that we can become part of his creative act. When we give someone food, we become part of his act of creating the body. When we teach somebody something, we become part of his act of creating the mind. When we labor to help create something within creation it is not the suffering, in itself, which brings us fulfillment, but rather the taking part in its existence. The work brings suffering because we are in a fallen world and do not work right; the work is suffering because we aren’t strong enough for it.

This gets to what Chesterton said at the beginning of his essay on Kipling:

There is no such thing on earth as an uninteresting subject; the only thing that can exist is an uninterested person.

We are bored by things while God isn’t, not because our intellect is stronger than God’s, but because it is weaker. It is natural enough that as a man’s capacity to enjoy something good which he has already experienced diminishes, he will seek a stronger stimulus to make up for his weakness, just as the weaker a man’s legs, the more he looks around for stairs instead of a ladder, and a ramp instead of stairs, and ultimately an elevator instead of a ramp. And such a man may well look with pity on someone who is still climbing the ladder, who knows only this one, difficult way of ascending, while he himself has sampled all of the means of going up that mankind has ever devised. And he will keep feeling this pity even as he struggles to reach the button to make the elevator go up.

One of Chesterton’s great themes was paradoxes, and indeed there is a Chestertonian paradox in the fact that the most interesting people lead the least interesting lives. This is so because unhappy people seek variety while happy people seek homogeneity. To the man who loves something, even if it is a beetle, that beetle is as big as the world, because that beetle is a world. To the man who loves nothing, the whole world is as small as a beetle. Of the two, it is the man who loves the beetle who is right, and you can tell that he is right because he is happy.

After all, God is inordinately fond of beetles.

It’s Curious How Many People Want to Use 19th Century Philosophy

G.K. Chesterton once observed:

The best reason for a revival of philosophy is that unless a man has a philosophy certain horrible things will happen to him. He will be practical; he will be progressive; he will cultivate efficiency; he will trust in evolution; he will do the work that lies nearest; he will devote himself to deeds, not words. Thus struck down by blow after blow of blind stupidity and random fate, he will stagger on to a miserable death with no comfort but a series of catchwords; such as those which I have catalogued above. Those things are simply substitutes for thoughts. In some cases they are the tags and tail-ends of somebody else’s thinking. That means that a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy; which the beasts do not have to inherit; hence their happiness.

I’ve noticed a surprising number of people who seem to want to pretend that we are in the 19th century so that they can apply 19th century philosophy, unmodified. The real problem with that is not the 19th century philosophy part, per se (though there was a notably large amount of bad philosophy in the 19th century), but the unmodified part. This is directly related to what Chesterton said above, that a person who will not do philosophy for himself will end up with the used-up scraps of somebody else’s philosophy.

A lot of people who have never done any philosophy for themselves think that doing philosophy for oneself entails being original. This is the opposite of the truth. The true study of philosophy has, as its only legitimate goal, to be entirely unoriginal. At least in content. A philosopher may be forced by circumstances to be original in expression, though the true philosopher will usually try to avoid that whenever he can.

If a man is a philosopher, that is, if he is a lover of wisdom (philos = love, sophia = wisdom), his entire goal is to come to understand what is; that is, he conforms his mind to what pre-exists him. God understands what he creates, so the wisdom of God is creative; man loves what he did not create, so the wisdom of man is purely receptive.

Philos, though, is not any old kind of love—it is the love of friends. This has something of a dual meaning when it comes to philosophy: a man seeks to be a friend of Wisdom, but also to be the friend of other men who love wisdom. As such, the true philosopher will read other philosophers to see what his friends can tell him about what they both love. This is not harmed by the minor detail of their friend having died after writing, not even by his having died twenty-four hundred years ago. But as with all true friends, their goal is not a meeting of the ears, but a meeting of the minds. That is, they want to understand the whole truth in what their friends have written, not merely to pick up a few bits and pieces of it.

Every man, by using language, communicates by using the things around him, because they are the things to which the symbols called words point. When we read things written by people long dead, to understand the contents we must know to what the words pointed when they were used, so that we can see the relationships between the things the words pointed at. When the world changes, the words no longer point to the same things, so we cannot read the words today the same way they were written. More importantly, though, things themselves change. A horse is replaced by a horseless carriage. Telegrams are replaced by telephones. Sometimes the relationships persist, sometimes they do not. This is inconvenient. It takes work to be able to separate the relationships between things from the things themselves, that is, to separate the idea contained within the expression from the expression. And here we come to the title of this post, because human beings are lazy.

It is work to read someone carefully and to separate the ideas from the expressions. It is far less work to pretend that the world has not changed, and so there is no separation required. Since we live in a profoundly lazy time, we see a great many people trying to pretend this very thing. It is much easier to pretend that people are still forced by grinding poverty (caused, everyone now forgets, by the collapse of the price of food grown on farms) to take the few jobs available in factories which routinely kill and maim the workers, who are quickly replaced because of the legions of unemployed fleeing unprofitable farms. If one does that, then one can take a whole host of 19th century writers and apply their writings unmodified. (This does extend into the early 1900s, btw.)

Why don’t people do this with, say, medieval philosophers, or ancient Greek philosophers, or Chinese philosophers? I think that there are two main answers:

  1. The further back in time one goes, the harder it is to pretend that nothing has changed.
  2. The further back in time one goes, the less familiar is the expression of the philosophy.

I don’t mean to suggest that people have actually read Das Kapital, or even that they routinely quote Karl Marx. Far more common is for the process to be iterative, where people much closer in time to Marx rephrase his ideas, often updating the terminology but not otherwise changing the expression, and these again get rephrased a few decades later, and so on, so that what people get is a modern phrasing of the antiquated expression. Along the way, they may easily get updated to things which no longer had the original relationships. People who are starved for ideas because they don’t do much thinking may be very tempted to not care, because starving people are not picky.

This explains rather a great deal of modern discourse.

Subjectivists Don’t Really Mean It

Bishop Barron recently put out a video on the suffocating quality of subjectivism:

He’s entirely correct that one of the problems with subjectivism is that without objective value, people cannot talk to each other; they can only ignore each other or try to subjugate each other by force. It is only by appeal to transcendent truths, to which human beings are morally bound to conform themselves, that it is possible to try to persuade someone (because persuasion is by pointing to the transcendent truths, upon the perception of which the other will voluntarily conform himself because it is the right thing to do).

In practice, though, Subjectivists never mean it. Once they have convinced someone else of subjectivism and thereby gotten the person to stop trying to persuade anyone, the very next move is always to try to smuggle objectivism in again, but only as available to the original Subjectivist.

The normal technique for trying to smuggle objectivism back in is through innovations in language. You may not call art good or bad, because that is implying objective evaluation of it. But you can call it subversive or conventional, which sound objective, despite just meaning good or bad. You cannot say that a person wearing immodest clothing in public is bad. That’s horribly patriarchal and body-shaming of you. You are free, however, to call it problematic.

The technique is always the same, because it has to be. First there is a move to shut down all criticism that a person doesn’t like by universally disallowing criticism. Once that is achieved, criticism becomes re-allowed using different language, which initially only lends itself to criticizing whatever the putative Subjectivist wants to criticize.

The Subjectivist’s victory tends to be short-lived, though. Semantic drift inevitably sets in. Whatever new word the Subjectivist has introduced in order to have a monopoly on the right to criticize quickly becomes adapted to all criticism and the Subjectivist is back to seeing criticized the things he was trying to protect. If he still has the energy for it—after a certain age, criticism tends to sting less—this begins another cycle of subjectivism.

Vox Day’s Socio-Sexual Hierarchy

Back in the year of our Lord’s incarnation 2011, Theodore Beale, under his pen name Vox Day, posted his selectively famous socio-sexual hierarchy (famous in some corners of the internet). It was, in those places, hailed as a revolution over the far more reductive alpha-beta hierarchy of males that had dominated those sorts of conversations before. I got into a conversation about it recently and want to relay a few things that came up in that conversation, but I suspect I need to go over a little background, first.

Why do such things exist? Well, we live in a culture where a great many young people aren’t raised past some basics like being trained to eat with tableware or washing their hands after going to the bathroom, so people don’t have any sort of operating model of the opposite sex—or even of their own. Yet they still have basic human desires such as finding a mate and having a family and fitting into the society in which they live. How are they to go about these things? All that’s on offer (outside of traditional homes) are non-answers such as, “you can be anything that you want to be,” “you are special just the way you are,” and “just be you.” (For the record, I don’t think that these things are as much based in trying to raise children’s self-esteem as much as they are in the fear of telling things to children because children believe what they’re told, which means that you need to be very confident in what you say to them; people who don’t believe in anything have, therefore, nothing to say to children. But silence is awkward.)

So for a young man, if he concludes that finding a young woman to pair up with is part of the path to happiness, how is he to go about making this happen? Even if he happened to come across traditional answers, they tend to only work with women who have been raised traditionally, and women who were raised traditionally won’t be interested in men who weren’t because of the deficiencies in character caused by the deficiencies in their upbringing. (Note: young men can learn and improve themselves and make up for the deficiencies in their upbringing, just as young women can. It’s just not the statistical norm for these victims of Modernity.)

So what are these poor souls to do, given that they literally haven’t heard of the good options? How is someone who was raised to believe that genitalia are for fornication and pregnancy is a type of cancer supposed to navigate male-female interactions, especially when they were also raised to believe that there are absolutely no differences between males and females except purely accidental ones that shouldn’t be talked about lest anyone mistake them for essential differences?

As a side note that is relevant, it’s helpful in understanding a lot of the frustration that one sees that—given the modern assumptions about how fornication is a meaningless act which is only about pleasure, as binding as shaking someone’s hand or riding a roller coaster next to them—the phenomenon where women say ‘no’ to offers of sex from most men makes absolutely no sense. There’s no way, under these assumptions, for refusing sex to be anything but selfishness. If it is rude to refuse somebody at a dance, why should it not be rude to refuse a quick trip to the lavatory? Yet women, for some reason, are not considered rude for saying ‘no.’ The real answer, of course, is that people cannot become as completely debased as their theories, so they cling to some shreds of reality, but this cognitive dissonance must be painful, and that pain explains a lot.

So, from the secular male perspective, women are basically defective, since they constantly don’t do what—according to the generally accepted theory—they should be doing. (Generally accepted except among traditionalist Christians, Jews, Muslims, Hindus, and other religious wackos.) One can cry out to the empty heavens about how unjust the world is, of course, but there is no one to hear these wails, so it does no good. The alternative is to figure out how to work around the defects of this broken world, which is most of what one has to do anyway. Hence is born the Pick Up Artist.

Pick Up Artistry is best understood in theory as an attempt by broken males to deal with the few bits that aren’t broken in broken females. In practice, it consists of a sort of practical psychology for figuring out how to manipulate women into wanting to fornicate with the man in front of them. Unless the man has STDs and intends to fornicate without using a condom, there is precisely no reason for her to not want to do this, which is the key to understanding why PUAs do not understand themselves to be predatory. PUAs are not, however, the really interesting group, here.

There are people who are not quite so far gone—that is, not quite so secular—who still think that there could be something more than the sum of its parts in a man and a woman sticking with each other for a prolonged period of time. They typically were raised in the same way as the pick up artists but just didn’t believe it quite as much. In consequence, they have no idea how to make their goals happen. Looking around for some ideas, they find that PUAs are nearly the only people who aren’t religious wackos who are willing to make definite claims about the nature of reality, rather than give nice-sounding non-answers like “do what brings you joy.” (Parenting tip: if your life lessons could be the slogans of credit card companies, rethink your life and do it quickly before your child grows up and it’s too late.)

It’s true that the PUAs are secular wackos, but that’s much better than being religious wackos, so maybe they have some insights, and that’s better than nothing.

The problem is that the PUA model of reality is more than a little… anemic. PUA models vary with each person writing a book or blog post (much as gnosticism varied with each gnostic teacher), but there are some broad strokes that are very common: according to the PUA model women have one trait, physical beauty, and it can be rated on a scale of 1-10. (Some permit the use of rational numbers (fractions), turning this into a continuum, while others reject that individuals are so unique.) Men also have one relevant quality, attractiveness, which is not quite the same as beauty, but its distribution is simpler: men are either attractive or unattractive. The former is called alpha, the latter called beta. (This is in somewhat explicit reference to the well-known characterization of feeding patterns among unrelated wolves who are strangers to each other but forced to occupy the same place, such as a pen in a zoo.)

Women want to fornicate with alphas and not with betas. This is presumably for the superior genetics for their offspring which alphas offer. (Secular people always make up evolutionary just-so stories when they need to explain anything about human nature.) Since the invention of contraceptives, sex is no longer about reproduction, so this is only relevant insofar as it gives insight into how to be attractive to women: the key is to identify the external traits by which a woman identifies alphas, and then to mimic those. (Again, it’s important to realize that since the woman isn’t going to have any offspring, this is a maladaptive trait, and so fooling it is more akin to how glasses fool an eye with an astigmatism into presenting an accurate image to the retina. Women come in only two varieties: hoes and unhappy women, just as men come in only two varieties: players and unhappy men. The goal of the PUA is not to make a woman do what she doesn’t want to, but to make her want to do what the PUA wants to do. Pick Up Artistry is 100% about obtaining consent.)

As I said, this world view, even in the dim vision of a mostly secular person, is more than a little bit anemic. It obviously leaves a lot of even the secular world out of account. Still, when it’s the only thing on offer, beggars can’t be choosers.

Into this near-void comes Vox Day’s socio-sexual hierarchy. Instead of dividing males into just two groups, he divides them into six. Right away, we can see that this is at least three times better. His groupings still use Greek letters (though not in alphabetical order): alphas, betas, deltas, gammas, omegas, and sigmas. (Technically there is a seventh classification, lambdas, but that’s for men who have no interest in women, so it’s irrelevant to his audience.)

In Vox Day’s description, alphas are the most attractive; betas are also attractive, but not enough to be alphas. They tend to hang around alphas. Alphas get 10s while betas get 7s, 8s, and 9s. Deltas are ordinary men and make up the bulk of males; they can only get 6s and below, but frequently do get them if they’re not fixated on getting 7s and above. Gammas are socially inept men of typically moderately above-average intelligence who aren’t able to be as attractive as ordinary men despite thinking that they’re above them. They don’t tend to get women because they are too much in their own heads to be attractive. Omegas are ugly, socially awkward men who either can’t get any kind of woman or might luck into an ugly woman who will deal with them. Sigmas are exceedingly unusual: men who are uninterested in society but who end up with 10s anyway. (I suspect that this is mostly just Vox Day himself, who was a semi-nerd who got wealthy at a young age and who has a wife of noted beauty; he may be over-generalizing from his own experience, rather than taking this counter-example as a defect in his approach.)

This is vastly more satisfying to a secular man who wants to figure out how to get a girlfriend than is the PUA model, since it’s got more recognizable parts. It’s not as easy to think of exceptions (in the very limited experience of a secular man) and these exceptions will be closer to one of the categories, if for no other reason than that there are more categories.

As a side note: there is probably an optimal model size for superficial plausibility, where anything smaller seems too reductionist and anything bigger is too hard to keep track of. If so, I suspect that Vox’s socio-sexual model is near to that ideal size.

The problem with this model, of course, is that it still leaves out most of life. It carries over from the PUA model the idea that a woman can be rated on a scale of 1-10, though it expands the male side to a scale of 1-5 from a scale of 1-2. Even if we stick to numerical scales which oversimplify things for the sake of simplicity, though, there are still far more traits to a human being that are important in a wife, or even in a girlfriend, and these traits tend to be uncorrelated with physical beauty. (I mean that they can be coincident or not, I don’t mean that they are negatively correlated.) There are traits that matter in choosing a mate such as honesty, loyalty, courage, prudence, wisdom, temperance, piety, etc. The idea that an alpha male should choose a woman who is a 10 for physical beauty without any regard for her other traits is absurd on its face, even if secular people will leave off piety. Further, since these traits are uncorrelated, and if we assume that the alpha can attract the best mate, he might well have a mate who is only a 6 for physical beauty but a 9 for honesty, a 10 for temperance, an 8 for courage, a 9 for prudence, etc. But this will depend, to no small degree, on his own wisdom, prudence, and temperance.

There is a further issue that if you read the original, he describes each group with an estimated number of lifetime sexual partners. Alphas have 4 times the average number of sexual partners or more. Betas have 2-3 times the average number of sexual partners. Etc. But the only way for a man to have more than one lifetime sexual partner is via tragedy—either the tragedy of his first wife dying or the tragedy of sin. Since first wives don’t die very often, this is mostly about the tragedy of sin.

The entire thing is predicated on going about life all wrong. It’s a bit like a guide to golf that measured success by the number of spectators that the golfer hit with a golf ball. It could give tips on how to lull the spectators into a false sense of security or how to aim out of the corner of one’s eye. Someone who followed it might really hit quite a few more spectators than someone who didn’t. Such a guide would be internally consistent, but it would be missing the entire point of the game.

Chemical Composition, or, Substance and Accidents

The Catholic doctrine of transubstantiation means that in the Eucharist, when the priest speaks Christ’s words of consecration (“this is my body”, “this is my blood”) over the bread and wine on the altar, the power of Christ is invoked, by the authority he gave to his apostles and they delegated to their successors and they delegated to the priests whom they consecrate, and it changes the bread and wine on the altar to become the body and blood of Christ. (This is sometimes called the “real presence.”) Much difficulty arises over exactly what is meant because the bread doesn’t turn into muscle tissue and the wine doesn’t develop red blood cells.

The Eastern Orthodox basically just say “it’s a mystery” and leave it at that. (I liked the styling I saw someplace, “eeeet’sss aaaaa myyyysssterrrryyyyy”.) The Catholic Church says that it’s a mystery, but it gives a few helpful details. You can actually see this in the word “transubstantiation.”

“Transubstantiation” is derived from two words: “trans” and “substance”. “Trans” meaning “change” and “substance” being that part of being which is not the accidents. Accidents, in this case, not meaning “something unintended” but rather the properties a thing has which, if they were changed or removed, would not make the thing something else. A chair might be made out of wood, but if you made it out of plastic it would still be a chair. The ability to hold up someone sitting is the substance of a chair, the material it is made out of is an accident (again, not in the colloquial sense of accident but in a technical sense). You can also do the reverse. You can take the wood a chair is made out of and rearrange it into a collection of splintery spikes protruding up. It has the same accidents (the wood), but the substance has changed. “Transubstantiation” just means that the accidents (the gluten, starch, etc. in the bread and the water, sugar, alcohol, etc. in the wine) remain the same but the substance—what it is—is what has changed.

Or, to put this more simply: in the Eucharist, the body and blood of Christ has the same chemical composition as bread and wine. Something to consider, when trying to understand this, is that a living human being has exactly the same chemical composition as a human corpse.

If Real Socialism Has Never Been Tried

One sometimes hears the claim that real socialism has never been tried. The many things that have claimed to be socialism—German National Socialism (Nazism), Italian Fascism, Soviet Communism, Chinese Communism, East German Communism, North Korean Communism, Vietnamese Communism, etc. etc. etc.—were not socialism, they were authoritarianism. I’m not, here, interested in debating the point, though I can’t help but note that defining socialism to be, roughly, “a system where people voluntarily share things rather than selling them” makes it not a political system but just a free market with impressively effective preachers of the gospel and extraordinarily receptive listeners to it (since it would be pretty much exactly how the early Christian community operated in the pagan world, as described in the Acts of the Apostles, before the church expanded much outside of Jerusalem).

No, what I propose to do in this post is to just grant the proposition that no one has actually tried real socialism and see what follows from it. If we grant this premise, we come to some pretty strange conclusions. Well, perhaps not so strange.

The first question we must ask ourselves, if no one has ever tried real socialism, is: why did all of the people who set out to try real socialism fail to try it?

This is a very important question. We have had many people in many places throughout the last 100 or so years who have tried to set up socialism. People like Vladimir Lenin, Adolf Hitler, Benito Mussolini, Mao Zedong, Kim Il-Sung and Hồ Chí Minh, were not joking. They thought that capitalism was evil and that the government and the economy should exist to benefit the people, not a rich minority or the well-born or an elite of any kind. There are plenty of others who thought the same thing, too. Karl Liebknecht and Rosa Luxemburg formed the Communist Party of Germany, which merged with the Social Democratic Party of Germany (itself a merger of other, earlier parties) to form the Socialist Unity Party of Germany, which was the ruling communist party of East Germany. They weren’t kidding. Hugo Chavez formed the Movimiento V República, which went on to join with other socialist parties to become the United Socialist Party of Venezuela. He wasn’t kidding. Does anyone think that Fidel Castro was joking?

By hypothesis, all of these people—and others—failed to try real socialism. They tried to try real socialism but just couldn’t succeed enough to actually give it a try. So what is so difficult about trying socialism that, so far in human history, every single one of the many people who have tried to try it has failed? And they didn’t just fail a little bit, either. They have generally produced the worst hell-holes that the world has ever seen. Some of that is, undoubtedly, owing to the more advanced state of technology in the world when all of these people tried to try socialism and failed to try it. Still, they didn’t try to try socialism and end up trying multi-party democracies with thriving free-market economies. A bit like trying to catch a bullet someone shot at you with your teeth or riding a unicycle over a rope stretched across the Grand Canyon, failure has a pretty high cost.

So we must ask the person suggesting that we give real socialism a try because it’s never been tried before—how does he know that he’ll actually be able to try it, unlike all of the other people who have tried to try it and plunged their nations into misery when they accidentally tried something else instead? Has the world simply been waiting around for someone as great as this kindly intentioned person, that finally the human race has produced the pinnacle of evolution, with all of the multitude of powers required to actually try real socialism?

Now, supposing that the answer is yes, a further question arises—and I don’t mean how can we find out if this lovely soul is correct that they can do what so many others failed to without giving them the power necessary to try to try real socialism—supposing this wonderful fellow is right and has that rare combination of qualities necessary to try real socialism, what happens if trying real socialism doesn’t work? The human race has finally produced a member great enough to succeed at trying real socialism—what if he really tries it, but fails to achieve it? I can really try to throw a three-point shot in basketball, but most of the time this very real attempt fails to succeed in actually putting the ball in the net. What if really trying socialism and failing is even worse than trying to try real socialism and failing to try it?

Let us, however, assume that this greatest human being ever is sufficiently great not only to try real socialism, but even to succeed at real socialism. What if real socialism is awful? Remember that, by hypothesis, real socialism is completely untested. What happens to the millions of souls who would live under the result if it turns out that, say, real socialism is even worse in practice than fake socialism, or whatever you get when you try to try real socialism but fail? No one’s ever tried real socialism, so how on earth do we know what will happen if that attempt were to actually take place?

Another curious problem is introduced by the fact that it requires the pinnacle of human evolution to succeed in trying to try real socialism—in order for this attempt at an attempt to work, we’re going to have to put this most magnificent achievement of our species in charge. If he shared responsibility with anyone, those others, being inferior, would drag him down, and then how would we possibly succeed at trying to try real socialism? I suppose that the magnificent one could be so great that even as one among a large group of his inferiors he would lift them up to the heights required to succeed at trying to try real socialism. That seems like asking a lot of evolution, though. We so far haven’t produced one human who can bring about real socialism and all of a sudden we have one that can turn a group of people who can’t try real socialism into a group that can? How could that much incomparable magnificence possibly be achieved in just one generation?

There is a further problem, though, even if we just assume for some reason that real socialism, if attempted, will be good instead of even worse than fake socialism—and I, for one, would much rather drink fake poison than real poison—and that this pinnacle of evolution is so magnificent he doesn’t need to be a dictator but can, by his magnificence, make an entire parliament of people who cannot, on their own, succeed even at trying to try real socialism not only succeed at trying to try real socialism but actually achieve real socialism, too. If we assume all this, what happens when this pinnacle of evolution comes to die? It happens to all of the descendants of men, after all. How are we to replace the greatest human being the world has ever produced? And if we can’t, what will happen to this real socialism now that it is run by people who, left to themselves as they now are, could not succeed even in trying to try it? Are we to suppose that this thing which is so difficult that no one has hitherto succeeded even in trying to try it will go along merrily when run by ordinary people who, in the whole course of history, have never gotten anything right until now?

And, if so—if we are to suppose that real socialism is so difficult to get going that no one has yet succeeded in trying to try it but so easy to keep going that anyone can do it—can I interest the person claiming this in buying a bridge? It’s a real nice bridge. Very popular. Tons of people drive over it. I hate to part with it.

He doesn’t even need to keep the tolls for himself. He can use the money he’ll get from it in order to fund his local socialist party.

Determinism as a Fairy Tale for Philosophers

I was recently speaking with a friend of mine who’s a Franciscan friar and retired philosophy professor. During a discussion of Spinoza he mentioned that he tends to take determinism in philosophers not literally but as a metaphor for the limits of philosophical argumentation. This is a very interesting idea.

The basic problem with determinism (other than it being false) is that, if taken seriously, it would result in complete paralysis, or at least complete philosophical paralysis. If determinism is true, there is no purpose in telling anybody anything because there is nothing he can do with it. His thoughts are all predetermined. There isn’t even a point in telling him that there is no point, because even that cannot change what he will do—even to help him to make peace with being a slave. Of course, the same applies to the philosopher himself, so he can always say that he’s engaging in pointlessly telling people they are not free because he is predetermined to do so. Determinism means never having to say you’re sorry. Unless you have to.

Even so, determinist philosophers do not, as a rule, tend to acknowledge that everything that they write is pointless. Why is that? The utter pointlessness of telling a man that he has no more choice than does a rock is obvious. So obvious that we instinctively know it about rocks. Never yet has a man preached to the rocks that they cannot do other than what their nature and circumstances foreordain them to do. Not even to rocks which someone has carved ears into. Why, then, if one’s philosophy dictates that men are no more free than rocks will the philosopher preach this bad news to men?

(If it be objected that they do not preach to rocks because rocks cannot hear, I answer that this is irrelevant. There is no practical difference between something that cannot hear and something that, though it can hear, can do nothing with the words spoken to it. A man who speaks only Russian can still hear English, but we do not waste our time preaching to him in English. To object that he cannot understand changes nothing; we do not preach things a man cannot do even to men who understand. No one walks around in their native language telling men that it would be exhilarating to jump to the moon or very convenient if they were to grow thirty feet tall. It is the usefulness of the words which governs whether they leave our lips, not the intelligibility of them.)

So why on earth do determinist philosophers preach determinism? The answer, suggests my friend, and I suspect he’s right, is that they don’t mean it. What they actually mean is a metaphor for how little philosophical argumentation sways people, even the philosopher himself. As Saint Paul complained, “I do not understand my own actions. For I do not do what I want, but I do the very thing I hate.” Determinism is, thus, a metaphor for the fallen state of humanity. And, in practice, it does seem to be a popular idea more with men who have difficulty restraining their passions. I can’t think of any extremely virtuous men who were determinists.

That said, this probably is more about how little philosophical argumentation sways the masses than about the philosopher himself. When the philosopher sees things clearly which seem to him momentous and the ordinary man shrugs his shoulders if he takes note at all—this requires some sort of explanation. There are better explanations than determinism but determinism does have a sort of superficial appeal. “Men do not care about the thoughts of the gods because they are mere beasts” is an explanation that does, to some degree, explain. That’s an important quality in an explanation—one that is missing in far more explanations than it should be.

Speaking of explanations which explain, I think that this idea makes sense of determinism, and especially why it is that its proponents never seem to take the idea seriously. They do take it seriously, just as a fairy tale to comfort them for why no one listens to them.

(I should note that I think this works synergistically with what has seemed to me to be the primary motivator of determinism: it simplifies the world. If one has to take into account many origins of causation, the world will not fit inside of one’s head. Determinism is appealing, then, because it eliminates a great deal of complexity. Even where the determinist is not an atheist, I think it functions as a sort of vicarious solipsism.)

Money: What Is It & Why Is It?

Money is an often misunderstood subject, especially because there are so many accidental things which grow up around it that are common and often mistaken for its substance. In this video I look at the history of how money develops as a medium for intermediating barter between people where only one person has something the other wants and how that develops into the sorts of monetary systems we have now. This also leads to what properties are essential to money and which are merely accidental, as well as what conditions are necessary for money to work and what conditions destroy money’s utility.

The Wages of Cynicism?

I’ve come to wonder about a trend I’ve seen in baby boomers that they tend to be very cynical but then have a streak of unbelievable credulity. It’s not all baby boomers, of course; no generation of people is homogeneous. This is merely a surprisingly large number of them, in my experience, and I’m wondering if it points to a more general human tendency (rather than merely being a strange product of the times in which it came to be). In particular, I’m wondering if, in general, being extremely cynical has a tendency to produce a sort of pressure-valve of credulity in some one thing.

The thing I most notice this in is the absurd credulity that many of the baby boomers in my life have towards news, especially newspapers. News, so they will tell me, is a bastion of the people and our one safeguard of liberty and all sorts of other nonsense, and all this in the face of things like newspaper articles which one can tell are lies simply by looking up the actual sources that the article references.

To give an example of what I mean, I read an article (sent to me by one of these boomers) which justified a claim that the Obama administration begged a hard-hearted Republican congress to expand PPE stockpiles by linking to an article which actually said that Obama’s refusal to compromise on the budget triggered automatic across-the-board 5% cuts to everything (a provision in the previous budget) that no one wanted. The most charitable interpretation of this event is that Obama wanted unlimited money and with it would have increased the budget for everything. This still gets nowhere near what the original article was trying to say, and if we limit ourselves to non-silly interpretations, Obama was clearly willing to take a 5% cut to the budget for medical stockpiles so that he wouldn’t have to compromise on things which were a higher priority to him. This is, literally, the opposite of what the source was invoked to claim. Unless one invokes the insanity defense, the article was, simply, lying. It didn’t matter, though. No matter how many lies an article tells, it is still the bulwark of the people, our sole preserver of liberty, etc.

(Oh, and the putatively supporting article also mentioned that even had the medical stockpiles seen a funding increase, they would have spent the money on rare drugs, which is their top priority, because (under normal circumstances) PPE is cheap and plentiful and easy to get more of on short notice. I did start to wonder if I was the only person who actually clicked through to verify the claims about the cited source. The degree to which it destroyed the article it was linked from was shocking, even to me.)

The more general human fault that I suspect this is an example of is the difficulty in living with ignorance. As human beings we necessarily do live in ignorance; we know very little about the very large world in which we live. The only real solution to this problem is to trust God, to whom the world is small and known. Since we are fallen creatures, however, this is hard. To be uniformly cynical to flawed sources of knowledge requires that we be able to repose in trust in God. This suggests that human credulity is, approximately, a fixed quantity; however much we fail to trust in God, that much will we be credulous. The only question is whether we will concentrate that credulity narrowly, trusting a few things far too much, or whether we will spread it out and trust many things a bit too much.

And to bring this back to the baby boomers with which this started (who are, by definition, American). I suspect that this is where history shaped the particular outcome. Having grown up during the civil rights era, the Vietnam era, and to a lesser extent the Watergate era, they learned to be cynical toward leaders and more generally the people that society normally trusts (priests, elders, etc). So they contracted their credulity towards a few sources like university professors and newsmen. Thus they were generally cynical, but with a few glaring gaps of credulity.

As I said, this is by no means all of the baby boomers, and my interest is at most only partially in the baby boomers I’m describing. Far more interesting is what general human weaknesses this is an expression of, and how to avoid them, even with different expression.

The Danger of Finding Your Meaning in Another Human Being

In this video I talk about the danger of finding one’s meaning in another human being.

(It has been pointed out, correctly, that this would constitute idolatry, but it’s a specific kind of idolatry which is somewhat easier to fall into because it doesn’t involve casting gold jewelry into statues, and bears some specific investigation.)

Why Do Moderns Write Morally Ambiguous Good Guys?

(Note: if you’re not familiar with Modern spelled with a capital ‘M’, please read Why Moderns Always Modernize Stories.)

When Moderns tell a heroic story—or more often a story which is supposed to be heroic—they almost invariably write morally ambiguous good guys. Probably the most common form of this is placing the moral ambiguity in the allies whom the protagonist trusts. It turns out that they did horrible things in the past, they’ve been lying to the protagonist (often by omission), and their motives are selfish now.

Typically this is revealed in an unresolved battle partway through the story, where the main villain has a chance to talk with the protagonist, and tells him about the awful things that the protagonist’s allies did, or are trying to do. Then the battle ends, and the protagonist confronts his allies with the allegations.

At this point two things can happen, but almost invariably the path taken is that the ally admits it, the hero gets angry and won’t let the ally explain, then eventually the ally gets a chance to explain (or someone else explains for him), and the protagonist concludes that the ally was justified.

In general this is deeply unsatisfying. So, why do Moderns do it so much?

It has its root in the modern predicament, of course. As you will recall, in the face of radical doubt, the only certainty left is will. To the Modern, therefore, good is that which is an extension of the will, and evil is the will being restricted. It’s not that he wants this; it’s that in his cramped philosophy, nothing else is possible. In general, Moderns tend to believe it but try hard to pretend that it’s not the case. Admitting it tends to make one go mad and grow one’s mustache very long.

(The mustache in question belonged to Friedrich Nietzsche, who lamented the death of God—a poetic way of saying that people had come to stop believing in God—as the greatest tragedy to befall humanity. However, he concluded that since it happened, we must pick up the pieces as best we may, and that without God to give us meaning, the best we could do is to try to take his place, that is, to use our will to create values. Trying to be happy in the face of how awful life without God is drove him mad. That’s probably why atheists since him have rarely been even half as honest about what atheism means.)

The problem with good being the will and evil being the will denied is that there’s no interesting story to tell within that framework.

A Christian can tell the story of a man knowing what good is and doing the very hard work of trying to be good in spite of temptation, and this is an interesting story, because temptation is hard to overcome and so it’s interesting to see someone do it.

A Modern cannot tell the story of a man wanting something then doing it; that’s just not interesting because it happens all the time. I want a drink of water, so I pick up my cup and drink water. That’s as much an extension of my will as is anything a hero might do on a quest. In fact, it may easily be more of an extension of my will, because I’m probably more thirsty (in the moment) than I care about who, exactly, rules the kingdom. Certainly I achieve the drink more perfectly as an extension of my will than I am likely to change who rules the kingdom, since I might (if I have a magical enough sword) pick the man, but I can’t pick what the man does. And what he does is an extension of his will, not mine. (This, btw, is why installing a democracy is so favored as a happy ending—it’s making the government a more direct extension of the will of the people.)

There’s actually a more technical problem which comes in because one can only will what is first perceived in the intellect. In truth, that encompasses nothing, since we do not fully know the consequence of any action in this world, but this is clearer the further into the future an action is and the more people it involves. As such, it is not really possible for the protagonist to really will a complex outcome like restoring the rightful king to the throne of the kingdom. Moderns don’t know this at a conscious level at all, but it is true and so does influence them a bit. Anyway, back to the main problem.

So what is the Modern to do, in order to tell an interesting story? He can’t tell an interesting story about doing good, since to him that’s just doing anything; and in any event the reader is not the protagonist, so the protagonist getting what he wills does the reader no good. Granted, the reader might possibly identify with the protagonist, but that’s really hard to pull off for large audiences. It requires the protagonist to have all but no characteristics. For whatever reason, this seems to be done successfully more often with female protagonists than with male protagonists, but it can never be done with complete success. The protagonist must have some response to a given stimulus, and this can’t be the same response that every reader will have.

The obvious solution, and for that reason the most common solution, is to tell the story of the protagonist not knowing what he wants. Once he knows what he wants, the only open question is whether he gets it or not, which is to say, is it a fantasy story or a tragedy? When he doesn’t know what he wants, the story can be anything, which means that there is something (potentially) interesting to the reader to find out.

Thus we have the twist, so predictable that I’m not sure it really counts as a twist, that the protagonist, who thought he knew what he wants—if you’re not sitting down for this, you may want to sit now so you don’t fall down from shock—finds out that maybe he doesn’t want what he thought he wanted!

That is, the good guys turn out to be morally ambiguous, and the hero has to figure out if he really wants to help them.

It’s not really that the Moderns think that there are no good guys. Well, OK, they do think that. Oddly, despite Modern philosophy only allowing good and evil to be imputed onto things by the projection of values, Moderns are also consequentialists, and consequentialists only see shades of grey. So, yes, Moderns think that there are no good guys.

But!

But.

Moderns are nothing if not inconsistent. It doesn’t take much talking to a Modern to note that he’s rigidly convinced that he’s a good guy. Heck, he’ll probably tell you that he’s a good person if you give him half a chance.

You’ll notice that in the formula I’ve described above, which we’re all far too familiar with, the protagonist never switches sides. Occasionally, if the show is badly written, he’ll give a speech in which he talks the two sides into compromising. If the show is particularly badly written, he will point out some way of compromising where both sides get what they want and no one has to give up anything that they care about, which neither side thought of because the writers think that the audience is dumb. However it goes, though, you almost never see the protagonist switching sides. (That’s not quite a universal, as you will occasionally see that in spy-thrillers, but there are structural reasons for that which are specific to that genre.) Why is that?

Because the Modern believes that he’s the good guy.

So one can introduce moral ambiguity to make things interesting, but it does need to be resolved so that the Modern, who identifies with the protagonist, can end up as the good guy.

The problem, of course, is that the Modern is a consequentialist, so the resolution of the ambiguity almost never involves the ambiguity actually being resolved. The Modern thinks it suffices to make the consequences—or as often, curiously, the intended consequences—good, i.e. desirable to the protagonist. So this ends up ruining the story for those who believe in human nature and consequently natural law, but this really was an accident on the part of the Modern writing it. He was doing his best.

His best just wasn’t good enough.

The Scientific Method Isn’t Worth Much

It’s fairly common, at least in America, for kids to learn that there is a “scientific method” which tends to look something like:

  1. Observation
  2. Hypothesis
  3. Experiment
  4. Go back to 1.

It varies; there is often more detail. In general it’s part of the myth that there was a “scientific revolution” in which at some point people began to study the natural world in a radically different way than anyone had before. I believe (though am not certain) that this myth was propaganda during the Enlightenment, which was a philosophical movement primarily characterized by being a propagandistic movement. (Who do you think gave it the name “The Enlightenment”?)

In truth, people have been studying the natural world for thousands of years, and they’ve done it in much the same way all that time. There used to be less money in it, of course, but in broad strokes it hasn’t changed all that much.

So if that’s the case, why did Science suddenly get so much better in the last few hundred years, I hear people ask. Good question. It has a good answer, though.

Accurate measurement.

Suppose you want to measure how fast objects fall. Now suppose that the only time-keeping device you have is the rate at which a volume of sand (or water) falls through a restricted opening (i.e., your best stopwatch is an hourglass). How accurately do you think that you’ll be able to write the formula for it? How accurately can you test that in experimentation?

To give you an idea, in physics class in high school we did an experiment with an electronic device that fed a long, thin strip of paper through it and burned a mark onto the paper exactly ten times per second, with high precision. We then attached a weight to one end of the paper and dropped the weight. It was then very simple to calculate the acceleration due to gravity, since we just had to accurately measure the distance between the burn marks.

The groups in class got values between 2.8 m/s² and 7.4 m/s² (it’s been 25 years, so I might be a little off, but those are approximately correct). For reference, the correct answer, albeit in a vacuum while we were in air, is 9.8 m/s².
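The arithmetic behind that experiment is worth spelling out. Under constant acceleration from rest, the distance between consecutive marks (each 0.1 s apart) grows by exactly g·Δt² per interval, so g falls straight out of the differences of successive spacings. Here’s a minimal sketch of that calculation; the mark spacings below are hypothetical ideal values, not the actual classroom data:

```python
# Estimate g from the paper-tape experiment described above:
# marks are burned every 0.1 s while a weight falls freely.
# Under constant acceleration, the spacing between consecutive
# marks grows by g * DT**2 each interval.

DT = 0.1  # seconds between burn marks

def estimate_g(spacings_m):
    """spacings_m: measured distances (metres) between consecutive marks."""
    # Each difference of successive spacings equals g * DT**2;
    # average them to smooth out measurement error.
    diffs = [b - a for a, b in zip(spacings_m, spacings_m[1:])]
    return sum(diffs) / len(diffs) / DT**2

# Hypothetical ideal free-fall data (s = 0.5 * g * t**2, g = 9.8 m/s^2),
# standing in for what the burn marks on the tape would show:
positions = [0.5 * 9.8 * (DT * k) ** 2 for k in range(6)]
spacings = [b - a for a, b in zip(positions, positions[1:])]
print(round(estimate_g(spacings), 2))  # → 9.8
```

With real tape, the spacings carry measurement error, which is why the class’s answers scattered so widely: a millimetre of error in reading a burn mark translates to a tenth of a metre per second squared in the estimate.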

The point being: until the invention of the mechanical clock, high-precision measurement of time was not really possible. It took people a while to think of that.

It was a medieval invention, by the way. Well, not hyper-precise clocks, but the technology needed to make them. Clocks powered by falling weights were common during the high medieval period, and the earliest surviving spring-driven clock was given to Philip the Good, Duke of Burgundy, in 1430.

Another incredibly important invention for accurate measurement was the telescope. These were first invented in 1608, and spread like wildfire because they were basically just variations on eyeglasses (the first inventor, Hans Lippershey, was an eyeglass maker). Eyeglasses were another medieval invention, by the way.

And if you trace the history of science in any detail, you will discover that its advances were mostly due not to the magical properties of a method of investigation, but to increasing precision in the ability to measure things and make observations of things we cannot normally observe (e.g. the microscope).

That’s not to say that literally nothing changed; there have been shifts in emphasis, as well as the creation of an entire type of career which gives an enormous number of people the leisure to make observations and the money with which to pay for the tools to make these observations. But that’s economics, not a method.

One could try to argue that mathematical physics was something of a revolution, but it wasn’t, really. Since the time of Ptolemy, astronomers had had mathematical models of things whose nature they did not actually know, nor inquire into. It is really increasingly accurate measurement which allowed the mathematization of physics.

The other thing to notice is that anywhere that taking accurate measurements of what we actually want to measure is prohibitively difficult or expensive, the science in those fields tends to be garbage. More specifically, it tends to be the sort of garbage science commonly called cargo cult science. People go through the motions of doing science without actually doing science. What that means, specifically, is that people take measurements of something and pretend they are measurements of the thing they actually want to measure.

We want to know what eating a lot of red meat does to people’s health over the long term. Unfortunately, no one has the budget to put a large group of people into cages for 50 years and feed them controlled diets while keeping out confounding variables like stress, lifestyle, etc.—and you couldn’t get this past an ethics review board even if you had the budget for it. So what do nutrition researchers who want to measure this do? They give people surveys asking them what they ate over the last 20 years.

Hey, it looks like science.

If you don’t look too closely.

Why Moderns Always Modernize Stories

Some friends of mine were discussing why it is that modern tellings of old stories (like Robin Hood) are always disappointing. One put forward the theory it’s because they can’t just tell the story, they have to modernize it. He’s right, but I think it’s important to realize why it is that modern storytellers have to modernize everything.

It’s because they’re Modern.

Before you click away because you think I’m joking, notice the capital “M”. I mean that they subconsciously believe in Modern Philosophy, which is the name of a particular school of philosophy which was born with Descartes, died with Immanuel Kant, and has wandered the halls of academia ever since like a zombie—eating brains but never getting any smarter for it.

The short, short version of this rather long and complicated story is that Modern Philosophy started with Descartes’ work Discourse on Method, though it was put forward better in Meditations on First Philosophy. In those works, Descartes began by doubting literally everything and seeing if he could trust anything. Thus he started with the one thing he found impossible to doubt: his own existence. It is from this that we get the famous cogito ergo sum: I think, therefore I am.

The problem is that Descartes had to bring in God in order to guarantee that our senses are not always being confused by a powerful demon. In modern parlance we’d say that we’re not in The Matrix. They mean the same thing—that everything we perceive outside of our own mind is not real but being projected to us by some self-interested power. Descartes showed that from his own existence he can know that God exists, and from God’s existence he can know that he is not being continually fooled in this way.

The problem is that Descartes was in some sense cheating—he was not doubting that his own reason worked correctly. But this is doubtable, and once doubted, completely irrefutable. All refutations of doubting one’s intellect necessarily rely on the intellect working correctly in order to follow the refutations. If that is itself in doubt, no refutation is possible, and we are left with radical doubt.

And there is only one thing which is certain, in the context of radical doubt: oneself.

To keep this short, without the senses being considered at least minimally reliable there is no object for the intellect to feed on, but the will can operate perfectly well on phantasms. So all that can be relied upon is will.

After Descartes and through Kant, Modern Philosophers worked to avoid this conclusion, but progressively failed. Kant killed off the last attempts to resist this conclusion, though it is a quirk of history that he could not himself accept the conclusion and so basically said that we can will to pretend that reason works.

Nietzsche pointed out how silly willing to pretend that reason works is, and Modern Philosophy has, for the most part, given up that attempt ever since. (Technically, with Nietzsche, we come to what is called “post-modernism”, but post-modernism is just modernism taken seriously and thought out to its logical conclusions.)

Now, modern people who are Modern have not read Descartes, Kant, or Nietzsche, of course, but these thinkers are in the water and the air—one must actively reject them in order not to breathe and drink them in. Modern people have not done that, so they hold these beliefs but for the most part don’t realize it and can’t articulate them. As Chesterton observed, if a man won’t think for himself, someone else will think for him. Actually, let me give the real quote, since it’s so good:

…a man who refuses to have his own philosophy will not even have the advantages of a brute beast, and be left to his own instincts. He will only have the used-up scraps of somebody else’s philosophy…

(From The Revival of Philosophy)

In the context of the year of our Lord’s Incarnation 2019, what Christians like my friends mean by “classic stories” are mostly stories of heroism. (Robin Hood was given as an example.) So we need to ask what heroism is.

There are varied definitions of what a hero is which are useful; for the moment I will define a hero as somebody who gives of himself (in the sense of self-sacrifice) so that someone else may have life, or have it more abundantly. Of course, stated like this it includes trivial things. I think that there is simply a difference of degree, not of kind, between trivial self-gift and heroism; heroism is merely extraordinary self-gift.

If you look at the classic “hero’s journey” according to people like Joseph Campbell, but less insipidly as interpreted by George Lucas, the hero is an unknown and insignificant person who is called to do something very hard, which he has no special obligation to do, but who answers this call and does something great, then after his accomplishment, returns to his humble life. In this you see the self-sacrifice, for the hero has to abandon his humble life in order to do something very hard. You further see it as he does the hard thing; it costs him trouble and pain and may well get the odd limb chopped off along the way. Then, critically, he returns to normal life.

You can see elements of this in pagan heroes like Achilles, or to a lesser degree in Odysseus (who is only arguably a hero, even in the ancient Greek sense). They are what C.S. Lewis would call echoes of the true myth which had not yet been fulfilled.

You really see this in fulfillment in Christian heroes, who answer the call out of generosity, not out of obligation or desire for glory. They endure hardships willingly, even unto death, because they follow a master who endured death on a cross for their sake. And they return to a humble life because they are humble.

Now let’s look at this through the lens of Modern Philosophy.

The hero receives a call. That is, someone tries to impose their will on him. He does something hard. That is, it’s a continuation of that imposition of will. Then he returns, i.e. finally goes back to doing what he wants.

This doesn’t really make any sense as a story after the call is received. It’s basically the story of a guy being a slave when he could choose not to be. It is the story of a sucker. It’s certainly not a good story; it’s not a story in which a character’s actions flow out of his character.

This is why we get the modern version, which is basically a guy deciding on whether he’s going to be completely worthless or just mostly worthless. This is necessarily the case because, for the story to make sense through the modern lens, the story has to be adapted into something where he wills what he does. For that to happen, and for him not to just be a doormat, he has to be given self-interested motivations for his actions. This is why the most characteristic scene in a modern heroic movie is the hero telling the people he benefited not to thank him. Gratitude robs him of his actions being his own will.

A Christian who does a good deed for someone may hide it (“do not let your left hand know what your right is doing”) or he may not (“no one puts a light under a bushel basket”), but if the recipient of his good deed knows about it, the Christian does not refuse gratitude. He may well refuse obligation; he may say “do not thank me, thank God”, or he may say “I thank God that I was able to help you,” but he will not deny the recipient the pleasure of gratitude. The pleasure of gratitude is the recognition of being loved, and the Christian values both love and truth.

A Modern hero cannot love, since to love is to will the good of the other as other. The problem is that the other cannot have any good beside his own will, since there is nothing besides his own will. To do someone good requires that they have a nature which you act according to. The Modern cannot recognize any such thing; the closest he can come is the other being able to accomplish what he wills, but that is in direct competition with the hero’s will. The same action cannot at the same time be the result of two competing wills. In a zero-sum game, it is impossible for more than one person to win.

Thus the modern can only tell a pathetic simulacrum of a hero who does what he does because he wants to, without reference to anyone else. It’s the only way that the story is a triumph and not the tragedy of the hero being a victim. Thus instead of the hero being tested, and having the courage and fortitude to push through his hardship and do what he was asked to do, we get the hero deciding whether or not he wants to help, and finding inside himself some need that helping will fulfill.

And in the end, instead of the hero happily returning to his humble life out of humility, we have the hero filled with a sense of emptiness because the past no longer exists and all that matters now is what he wills now, which no longer has anything to do with the adventure.

The hero has learned nothing because there is nothing to learn; the hero has received nothing because there is nothing to receive. He must push on because there is nothing else to do.

This is why Modern tellings of old stories suck, and must suck.

It’s because they’re Modern.

Young Scientist, Old Scientist

There’s a very interesting Saturday Morning Breakfast Cereal webcomic about a scientist as a young woman and an old woman:

This is remarkably correct, and one sees it all the time. Science is, by its nature, the examination of things which it is productive to examine in the way that science examines things.

Speaking broadly, this means that science is the study of things which are easily classified, or can easily be experimented upon in controlled experiments, or the relationships between things which can be measured in standardized units. By limiting inquiry to these things, the scientist can use a set of tools which has been developed over the centuries to analyze such things.

I meant that metaphorically, but it’s actually as often true literally as metaphorically. Scientists frequently use tools which were developed for other scientists: accurate scales, measurements of distance, radio trackers, microscopes, telescopes, etc.—all these things the modern scientist buys ready-made. (This is an oft-neglected aspect of how what has been studied before determines what is studied now, but that’s a subject for another day.)

This limitation of investigation to such subjects as lend themselves to such investigation is very narrowing; most interesting questions in life do not lend themselves to being studied in this way. Most answers that scientists come up with are not interesting to most people. In fact, outside of science, almost the only people who study real science with any rigor are engineers. Even the degree to which they study the results of science can be exaggerated; the good old 80/20 rule applies, where 80% of the utility comes from 20% of the science. But, still, it’s very limiting.

This is part of why scientists are so often stereotyped as hyper-focused nerds uninterested and incompetent at the ordinary business of living. The stereotype is actually quite often not true, but this is in no small part because science has become an institutional career in which the science itself is only one part of a scientist’s day-to-day life.

That said, the stereotype exists for a reason: science is just not normal.

There are two ways of dealing with this fact. One of them is to engage in the hyper-focus of science during the day and then to hang up one’s lab coat and focus on being a full human being at night. This is not really any different from a carpenter or a plumber putting away his tools at the end of the day and focusing on all the things in life which are not carpentry or plumbing.

The other way of dealing with this is to shrink the world until one’s narrow focus encompasses it. This is what the comic I linked to at the start of this article captures so very well.

The cobbler should stick to his last as an authority, but it is a tragedy if he sticks to his last as a man.

Carrying One’s Cross

In the sixteenth chapter of the Gospel according to Matthew, it says:

Then Jesus said to his disciples, “If anyone wants to be a follower of mine, let him renounce himself and take up his cross and follow me. Anyone who wants to save his life will lose it; but anyone who loses his life for my sake will find it. What, then, will anyone gain by winning the whole world and forfeiting his life? Or what can anyone offer in exchange for his life?”

Carrying one’s cross is a common expression, though it’s often treated as an atypical thing. People talk about something troublesome as “a cross they must bear”. But I think that there are two things to note about the above passage in this regard.

The first is that carrying one’s cross is a prerequisite of discipleship. That is, carrying one’s cross is normal. Or, in other words, suffering, in this fallen world, is normal.

There was a Twitter conversation I was in, recently, where an acquaintance who goes by the nom-de-plume of Brometheus was pointing out that a lot of people are afraid of having children because they’ve been taught that “having children is a miserable experience, a thankless martyrdom of bleak misery and self-denial.”

There is much that can be said about this arising from misguided attempts to get people to avoid fornication, such as by having to “care” for robot babies whose programming is to be a periodic nuisance, but I’ll leave that for another day.

Instead, I just want to point out that having children actually is a miserable experience, an occasionally-thanked martyrdom of joyful misery and self-denial. Or in other words, it’s good work.

All good work involves suffering and self-denial; it involves this because we are imperfect. We do the wrong thing, at the wrong time, and quite often for the wrong reasons. And being a parent quite often involves having to do the right thing at the right time, and if at all possible, for the right reasons. To a creature with unhelpful inclinations, that involves suffering.

And that’s OK. Everything worthwhile involves suffering, because worthwhile things make the world better, and that hurts in a world that’s flawed. Or in other words, if you want to be Jesus’ disciple, you have to renounce yourself, take up your cross, and follow him.

The problem is not that people think that having children involves suffering and self-denial. It’s that they think it’s bad that it involves suffering and self-denial. The problem is that they want to put down their cross and follow the path of least resistance.

The other thing to note about the passage above is that people often talk about it like life gets more comfortable when it’s finally time to put down your cross. Perhaps it’s because execution by crucifixion is no longer practiced in the western world.

Something to remember is that if it’s your cross that you’re carrying, when you finally get to put it down the next thing that happens is that the Romans nail you to it, then hoist you up to die.

It’s not the full story, but there’s a lot of wisdom in these lines, spoken by the Dread Pirate Roberts to Princess Buttercup in The Princess Bride:

Life is suffering, highness. Anyone who tells you differently is selling something.

Thoughts From an Aging Sex Symbol

One of my better videos, now two years old, is Satanic Banality:

In it, I mention that celebrities can only sell the image of the bad life turning out well for a while, and when they wise up they lose their relevance. Which reminded me of this article by Raquel Welch, back in 2010. As the kids would say, here’s the nut graf:

Seriously, folks, if an aging sex symbol like me starts waving the red flag of caution over how low moral standards have plummeted, you know it’s gotta be pretty bad. In fact, it’s precisely because of the sexy image I’ve had that it’s important for me to speak up and say: Come on girls! Time to pull up our socks! We’re capable of so much better.

But in 2010, so far as I can tell, Raquel Welch no longer had any influence, so it didn’t matter. That’s the resilience of an engine which feeds on ignorance and spits out wiser people as spent fuel. When they were ignorant, the machine gave them their power. Once spit out, their knowledge is powerless.

(Except in individual cases; saving souls tends to be a personal business, not done over television screens.)

Why Consequentialists See Only Shades of Grey

There’s an infuriating thing which consequentialists do where they say that life is never black and white, it’s all shades of grey. For a long time I thought that this was just because they wanted to be evil without being caught, and were trying to disguise it. This may still be the case, but I realized that this is actually inherent in their position.

Consequentialism means judging an action as good or evil not by principles—i.e. not by what the action is—but only by the consequences of the action. To a consequentialist it doesn’t mean anything to say “it is impermissible to do evil that good may result” since, according to their moral theory, if good results, it wasn’t evil that you did. So rape, treason, murder, etc. are all to be judged on the basis of whatever good or evil comes out of them, not on whether they are intrinsically evil.

There is a problem with consequentialism, which is that one cannot foresee all the consequences of an action. In fact, one cannot foresee most of the consequences of an action. Indeed, people often have trouble foreseeing even the very immediate consequences of their actions. This makes consequentialism impossible for a human being to actually evaluate, rendering it completely useless as a moral theory.

(As a side-note, consequentialism and principalism are identical in God, since he both knows all of the effects of all actions and created the world such that the consequences of principled actions are good. Consequentialism is completely un-evaluatable for anyone who is not God, however.)

But, while this is completely useless as a moral theory for making decisions, it can be applied somewhat better historically. Not actually well, of course, but at least better. And this is where the consequentialist sees everything as shades of grey. Every action has both good and bad consequences. This is intrinsic, because every action opens up some possibilities and forecloses others. To marry one woman is to not marry all of the others. To save a man’s life in the hospital is to take money from the undertaker. To save the life of a worm who crawled onto the pavement is to deprive the ants, who would have eaten its corpse, of food. Every action disappoints someone. And this much, the consequentialist can see in hindsight.

And since, to a consequentialist, (naturally) good consequences are identical to an action being (morally) good, and (naturally) evil consequences are identical to an action being (morally) evil, an action having both naturally good and naturally evil consequences makes the action both morally good and morally evil. Since all actions, intrinsically, have both naturally good and naturally evil consequences, all actions must, to the consequentialist, be a mixture of moral good and moral evil.

That this disguises the consequentialist’s own evil is just a side benefit.