God’s blessings to you on this the thirty-first day of January, in the year of our Lord’s incarnation 2017.
Today is going to be exceedingly short because I’m crazy-busy today. First, I noticed an article by John C. Wright about the history of Buck Rogers which I’m really looking forward to reading.
Yesterday I read an article by Jasyn Jones about the disappearance of pulp SciFi, Star Wars Stole Pulp. It was an interesting article, but I was even more intrigued because of a comment which gave a counter-point. First, the point:
Post-WWII was the era of the Campbellian Silver Age, the era of “Men with Screwdrivers” SF. Action and adventure were childish and frankly embarrassing, as were purple prose and laser swords. Barsoom? Silly. Buck Rogers? Childish. Northwest Smith? A gunslinger, not a scientist. And this was the age of SCIENCE.
Science was the focus, technology the touchstone. Stories had to be cerebral, intellectual. They had to be REALISTIC. Real science, none of this fuzzy-headed soft science stuff. SF had to shake off the wooly-headed thinking of Fantasy, the embarrassing antics of Space Opera, the adolescent focus on Adventure and Action. SF was serious business. Real Literature. It was time to grow up.
Then the counter-point, by someone calling himself K-bob:
I grew up reading the pulps because I could get a stack of them for 75 cents. I loved them more than comics, and some even had a few great illustrations. But I was also a kid when the Mercury 7 program began.
To me, the screwdriver period was new and exciting. Maybe it’s because I lived on the Space Coast back then and got to see astronauts live a few times. So I shifted to the New Kids because of the general level of excitement for real space exploration and engineering.
It’s very interesting to see that perspective, and the point about how Big Men With Screwdrivers (which is, I believe, a Niemeierian phrase, if inspired by Mystery Science Theater 3000) would have been fresh when it came out and moreover something that was exciting because it tied into the zeitgeist of an age which expected nuclear-powered flying cars in a decade or two. Going by descriptions of people who lived through the early post-war period, real life was a bit like living in the preface to a SciFi book. Basically, people thought of this as the near future:
It didn’t work out that way, of course. But if you think that the Stanford Torus is realistic, it makes a lot more sense why realistic tales of engineering in the near future would be so fascinating. A friend of mine is very interested in space travel (he watches rocket launches over the internet, and seeing one in person was a bucket list item he has since checked off). One of the things he loved so much about Andy Weir’s The Martian was its realism; how it was set in a plausible near future. And my friend does not really like literature; his favorite entertainment is usually about giant robots. One of his favorite giant robot shows involved robots so giant that they could hurl galaxies like frisbees and punch holes in the fabric of reality in order to get at different dimensions.
I recommend reading the rest of K-bob’s comment, because he talked about how this fresh and exciting new trend grew stale, as most fresh and exciting things do. And I’ve no doubt that the cultural Marxists and the snobs had a hand in making SciFi worse—it is in their (fallen) nature to do so. It’s a bit like expecting scorpions to sting. But when that is given proper weight, I think that K-bob is onto something: that Big Men With Screwdrivers was able to push aside older and better things in part because it was fresh and new, and in part because it spoke to an age that lived in very unusual conditions. Most people these days think of nuclear power in terms of weapons and disasters; those who are familiar with nuclear power (I know a nuclear engineer) think of it in terms of cheaper electricity with no carbon footprint. But in the post-war period, nuclear power was going to turn us into gods and propel us to the stars. Given how detached from reality those expectations were, it is perhaps understandable why they found realism to be fantastic.
In case you’re interested, Russell Newquist and I recently had a long conversation about the state of social media up to the present, and the future of it. Plus other things, because no conversation I’m in is ever free from digressions. But at least they’re all interesting! 🙂
God’s blessings to you on this the thirtieth day of January in the year of our Lord’s incarnation 2017.
A life tip which is on my mind today for very practical reasons: what to do if one is out of sorts for whatever reason; in my case, a lot of work to get done by a deadline which is not far off, and a lack of sleep making it hard to get that work done. Now, one should always try to avoid making one’s problems someone else’s problems, and that extends very much to being out of sorts. If I’m having a bad day, that should not mean that anyone else around me has a bad day because of it.
Alas, for people—like me—who are not perfect, keeping our troubles to ourselves doesn’t always happen, even though it should. (N.B. I’m not referring to keeping things bottled up, but rather to keeping them from affecting one’s patience, tone of voice, charity, etc.) Because this sort of failure is something one can often predict, one should be on the lookout for it, and wherever it seems to be happening, it’s a good idea to tell the people affected about your stressors, so that they have context. I’m not talking about complaining at them, because that just makes their day worse. (An aside: if you know how to use self-deprecating humor to make complaining palatable, that can work, though it does take a lot of skill. That’s more something that should be plan B for when you catch yourself complaining, rather than plan A.) Rather, one should warn others not to take you too seriously right now because you are under stressors that make your actions and reactions atypical. Now, to be clear, this is not something that others owe one; it is asking for a specific type of charity. But it’s usually not a difficult charity to give, and if people are forewarned they’re usually pretty indulgent, provided they don’t need to indulge you in this way too often. And one doesn’t have to be highly specific; something as general as, “I’m sorry, I’m just having a really bad day, so please don’t take anything I say today too seriously? Thanks, and I’m sorry,” will do. In my experience, people are very understanding of that sort of thing, since we all have bad days where we could benefit from some charity applied to the things we say and how we say them.
Also very important on bad days: don’t forget to smile at people. Smiles which are unconscious reflexes are cute in babies, but really fairly private things in adults. Smiling at someone is primarily a form of communication, conveying:
I mean you good, not harm
I consider you a net positive in my life
Things are, for the next few moments anyway, OK
Whether or not you feel these things to be true, if you know them to be true, you should smile at people to communicate those things to them. Feelings can be highly misleading, and in the same way that, if the “gas tank is empty” light on your car has burned out, you should still put fuel in the tank when the needle is on empty, you should communicate true things to people even if you don’t feel them. This will improve:
their day
the day of everyone else who comes in contact with them, including
your day
and at least as importantly, your honesty
Yes, your honesty. Honesty consists of giving people truth. There are lies of omission, and if your honesty is not “authentic” in the sense of being spontaneously done without thinking about it, all that means is that you need to build better habits. In the meantime, you’re supposed to use your rational control over yourself to act according to what you know to be true, and that includes what you communicate to other people. Because, unfortunately for our laziness, a neutral expression or an unhappy expression communicates things too, and often things which aren’t true. Being a social animal requires more work than being a hermit. That may be inconvenient, but it beats the alternative.
God’s blessings to you on this the twenty-ninth day of January in the year of our Lord’s incarnation 2017.
Today’s a super busy day so I don’t really have time to write much. So I wanted to share a fun quote with you:
Marriage, n: the state or condition of a community consisting of a master, a mistress, and two slaves, making in all, two. –Ambrose Bierce
Though I will say that my preferred metaphor for marriage is a two-person military unit. Two people bound together to accomplish great things under very harsh conditions. No metaphor is ever perfect, though.
God’s blessings to you on this the twenty-eighth day of January in the year of our Lord’s incarnation 2017.
I was reminded recently of the advice I’ve given to young people a few times on picking a career. At least when I was in school, the vast array of possible choices meant that a lot of emphasis was placed on this, and it was generally suggested that one should figure out what one was most passionate about and try to pursue that as a career. Obviously, it had to be something that could actually be a career; taking naps was right out. And many things require modification to be a career, such as how painting landscapes of dogs might have to turn into painting portraits of people’s pets.
And of course some dreams are just very hard to follow, like being an astronaut or a professional novelist. In many cases, you have only a slightly better chance at these than at winning the lottery for a living.
But, still, those caveats aside, it was the general advice given, and it never struck me as good advice. It has, for a very long time, struck me as a much better idea to pick one’s second favorite thing and turn that into a career. All work done for pay involves compromise, because the person paying the money has a say in the work. If this is one’s favorite thing in life, those compromises are extraordinarily painful, and there is little one can do for solace. By contrast, compromising on one’s second favorite thing isn’t great, but it’s not too bad (assuming one is talking about things like aesthetic or prudential judgment and not morality), and one can always take solace in something one loves better. The compromises necessary to make such an activity something other people will pay for are also much easier to tolerate.
God’s blessings to you on this the twenty-seventh day of January, in the year of our Lord’s incarnation 2017.
I didn’t post a God’s Blessings post yesterday, but I did post an interview, so I’m going to call that a wash.
I recently came across a private discussion about the nature of forgiveness, and how my Christian friend was having to point out to a secular co-worker that forgiveness does not mean automatically pretending that nothing has happened, especially when there has been no repentance. Let’s call the people A and B, and stipulate that as co-workers B betrayed A’s trust and in fact stabbed him in the back on some occasion to A’s significant detriment. Let’s further stipulate that B does not admit to any wrongdoing, and has never apologized, repented of his wickedness, nor tried to make any sort of amends.
Now, I know what Christians mean when they say that they are not required to forgive in such a circumstance, but that’s technically incorrect. Christians are to forgive in all circumstances, because forgiveness just means that one does not cease loving a person. And as Bishop Barron puts it, love is to desire the good of the other as other. Which means that, pouring out the infinite goodness of God which we ourselves are given, we give to others according to our ability to give and their ability to receive. That last part is key, and is the key to this whole problem.
Forgiveness means that we should not withhold any good from a man that we can give him, but it does not mean that we should give goods to a man who cannot receive them. In this case, a man who betrays trust is not trustworthy. Forgiving him means that if he needs help, one should help him. By all means A should (if practical) take a day off work to help B move his stuff from one apartment to another. If B is hungry, A should feed him. But there is absolutely nothing in the concept of forgiveness that means that A should trust B when there is no reason to believe that B is trustworthy and good reason to believe that B is not. Forgiveness means not holding grudges, it does not mean being unrealistic. Now, I should probably add that it is possible for people to reform, and for a man who was untrustworthy to become trustworthy. And forgiveness should be open to that possibility. But that does not in any sense mean that forgiveness should assume that such a thing has happened in default of evidence that it has, and still less in the face of evidence that it hasn’t.
And in fact, it is uncharitable to tempt a man who struggles with temptation. If B has a hard time keeping trust, it is uncharitable to place trust in him and thus expose him to the temptation to violate that trust. Telling secrets to a gossip is not only unwise, but it is unkind.
There are those who want to simply forget the past, of course, mostly because they had conflict and want it to magically disappear. That’s not forgiveness, that’s cowardice. Of course, cowardice will always try to disguise itself as something else; that’s part of the nature of cowardice. After all, you can’t expect cowardice to have the bravery to admit what it is.
In short, forgiveness means being willing to give what you can, even to a man who has hurt you. It does not mean being willing to give what you can’t.
I recently had the pleasure of interviewing P. Alexander, editor of Cirsova Magazine, over email. If you’re unfamiliar with Cirsova magazine, here’s the cover of their first issue:
They’ve got four issues out, and are working on their 2017 issues. Anyway, without further ado, here’s the interview:
Cirsova is subtitled, “Heroic Fantasy and Science Fiction Magazine”. In modern western culture, stories of heroism are often associated, I think, with children, because children are not yet sufficiently beaten down by the all-doubt of modern culture. But as Harry Potter showed, there are plenty of adults who are not cowed, even in adulthood. So who is the audience you’re looking for with Cirsova Magazine?
Well, the appellation of Heroic Fantasy is, in part, to distinguish it from much of post-modern fantasy where there are no heroes or the heroes are so horribly flawed that they are often shown as being as bad as, or worse than, the villains. While we have the occasional exception, most of our stories are about individuals performing brave and heroic deeds in fights against injustice and absolute evils.
I think that stories of heroism may be associated with children because of the post-war movement to dismiss the notion of heroism as childish. During their heyday, pulps were wildly popular not with children but with adults. In fact, Argosy, the first pulp magazine, abandoned children as its target demographic very early on in an attempt to save the publication from going under; it went on to decades of success and was the home of the most iconic and influential literary hero of the 20th century – Tarzan.
By the late 40s and into the 50s, the notion of heroes was dismissed and deconstructed. Romance and heroism as they had once been understood were being undermined in and removed from many outlets of publication and media in the maelstrom created by post-war cynicism, the Cold War, and critical theoreticians in pop culture. Heroes became the domain of the cheesy and banal Comics Code-censored rags of the Silver Age, making it that much easier to place SFF heroics in the ‘age ghetto’.
The desire to see heroes and heroics, however, can’t be stamped out. It’s why everyone loves and still talks about Die Hard, Star Wars, and Alien. People like it when good guys stop bad guys and monsters against all odds. Cirsova Magazine is for every person who has loved science fiction and fantasy adventure, opened a book or a magazine with a spaceship or a guy or gal with a sword on the cover, and been let down. I can’t tell you how many times people have said that they’re tired of actionless, sometimes storyless, fluffy-puff pieces of thinkery masquerading as SFF that one seems to see coming out of the more well known houses and publications. Cirsova is for those people who have been let down and want those stories about heroes again. We will not let them down.
That sounds great! You mentioned the pulps, and indeed I’ve heard Cirsova mentioned in connection with a revival of the pulps as well. (In this article by Jasyn Jones.) Did you conceive of Cirsova as being a revival of the pulps? Or do you even think of it as a revival of the pulps now? Or is it more like a spiritual fellow-traveller to the pulps? In short, what is Cirsova’s relationship to the pulps?
I originally conceived Cirsova when I’d been reading Planet Stories and thinking “I want to create something like THAT!” We ended up leaning a bit heavier towards fantasy than the sort of Raygun Romance that PS published, but that’s really the magazine that has the most direct influence on the publication, even down to the choice of interior fonts.
That said, we aren’t really “retro pulp”, but it may not be easy to explain why. I won’t say that pulp revival is a new thing or that we’re even an important part of what’s referred to as Pulp Revival or New Pulp. Those have been going on to varying degrees of success or influence for decades now. Pulp Revolution, on the other hand, is a fairly recent term that a few of our fans have tossed around to describe us and those like us and our approach to pulp.
Here is the biggest difference in my mind: a lot of what is “Pulp Revival” and “New Pulp” seems to focus largely on the campy aesthetic aspects of pulps, almost as though playing off the assumptions one would have from merely seeing a catalog of magazine covers rather than from actually reading the stories within. It’s cheesy and fun, I suppose, but the best way I can describe it is that it’s like the little kid who puts on dad’s shoes and suit from the closet to play businessman. You also see a lot of strange politicization in submission guidelines – I’ve actually seen a recently launched “Retro Pulp” zine that specifically positions itself as a ‘progressive’ outlet, warning off a lot of common (or assumed to be common) pulp tropes. The only things we don’t really want to see at Cirsova are Big Men With Screwdrivers SF stories or stories about elves. As a result, we’ve ended up with a pretty incredible array of stories ranging from those that could be considered “progressive” to those that HAVE been ridiculed as puerile and full of unexamined privilege.
What we’re doing with Cirsova is not about being part of “Pulp Revival” or being part of a “retro” movement. We don’t want to confine ourselves to that niche. What we really want to do is bring the kind of story that was being told in the pulps, not the aesthetic, into the mainstream conversation about SFF fiction. We’ve been accused of being a “regressive” publication by those who are ignorant of pulps, those who assume that pulps were full of nothing but racist and sexist trash, but in a sense they’re right about us – we’ve embraced the idea that SFF needs to regress harder. We’re using the pulps as a starting point and going forward as though the Campbellian Revolution never happened, as though Burroughs were still held above ‘the Big Three’, as though Leigh Brackett were still the Queen of Science Fiction rather than LeGuin or Atwood, as though fun, adventure, heroics, and romance were still a good thing in SciFi.
I have nothing against any of the Pulp Revival, New Pulp, or Retro Pulp folks or those movements; we’re just doing something different and have very different goals.
It’s interesting that you mention people whose ideas of pulps are drawn purely from a catalog of magazine covers. One thing which comes to mind is that magazine covers often have little to do with the stories they are meant to represent, and I believe they were in fact often done by artists who had not read the stories but only had the barest description of what the cover should look like. And further, many of those publishing sci-fi magazines were businessmen at least enough to avoid going out of business, and had some realistic notions about what would catch the eye and how different that might be from the story which captures the imagination.
Have you drawn inspiration from other places, such as, for example, the penny dreadfuls which predated pulp fiction? And either way, have you read G.K. Chesterton’s A Defense of Penny Dreadfuls, which I think has much wider applicability than just to the penny dreadfuls of his day?
Back then, as today, artists were frequently working on the cheap and under tight deadlines. Only back then, you didn’t have digital, which is a huge time-saver for many artists today. While Anderson’s work on Planet Stories turned out some gorgeous pieces, it’s pretty noticeable that he’d re-use several of the same reference photos for poses – there are a number of covers where the only real differences are the color of the dame’s hair, her outfit, and what she’s holding over her head (usually a sword or a whip). It was also part of a magazine’s brand, as much as anything else. Weird Tales often had lurid and provocative danger. Planet Stories had sexy and romantic action. Astounding had “wholesome” action (i.e. no dames), and as it reshaped itself post-war, it tended to have a lot of men standing around, or floating heads to indicate how much more serious it was, I suppose. Similarly, the Magazine of Fantasy & Science Fiction has often had a more abstract or impressionist aesthetic that jibed with its status as the ‘literary’ science fiction magazine.
As far as other inspirations, from the publishing end, I can’t really think of any, but from the storytelling end, I’d recommend Appendix N as a starting point for those curious to see just what it is I’d be looking for. When putting together the Dungeon Masters Guide for the 1st edition of Advanced Dungeons & Dragons, Gary Gygax, Tim Kask and others at TSR compiled a list of works and authors that had influenced the design and development of the game so that players would have a better understanding of where they were coming from. Any arguments about the cultural significance of the list itself aside, Appendix N undeniably includes many of the best action-packed SFF stories ever written. Jeffro Johnson, who has regularly contributed columns to Cirsova, recently published a bestseller on the topic.
I’d be lying if I said I was particularly familiar with the penny dreadfuls, though I am aware of them and did manage to make it a decent way into James Malcolm Rymer’s Varney the Vampire before setting it down and not getting around to finishing it (the story itself was great; however, the critical modern edition made the poor layout choice of using a single column with tiny font in a volume roughly the size of a phone book).
You mentioned how modern technology helps artists to be more productive, and thus how art is cheaper for those who commission it. That segues rather nicely into the business side of things, which is, I think, a question on a lot of people’s minds. The pulps are today best known for certain types of writing and—to be blunt—a certain quality of writing. Not that reputation is more accurate here than it tends to be anywhere else in life, but one of the things which the pulps certainly were was cheap. Wood pulp paper was much cheaper than smooth, glossy paper, and so the pulp magazines could make money on fewer readers or smaller margins. Are you using a similar business model to the pulps but relying on the cheaper-still nature of digital distribution, or are you using a new sort of business model, or what?
Well, I would like to clarify that I don’t mean modern technology necessarily makes art cheaper, but it does offer a much greater degree of flexibility that did not exist in the past to collaborate and hash out what the art should look like (quick turn-around time on sketches and, in some cases, quick digital touch-ups and tweaks which were otherwise impossible).
Print on demand publishing has made indie and self-publishing more viable than it has ever been. Until fairly recently, self-publishing usually meant going to a vanity publisher and having several hundred books you’d never sell printed up only to languish in a closet. For Cirsova, Print on Demand through Createspace and Lulu means we don’t have any real overhead on stock. Our biggest per issue operating costs are paying our authors, commissioning cover art, and ordering proof copies, in that order. Once the issue is out there and we fulfill pre-orders/subscriptions, it doesn’t cost us anything.
We do have digital distribution, but Cirsova is meant to be something on your shelf. Interestingly enough, about half of our sales or more are physical, which is counter-intuitive to conventional wisdom about today’s market. Our softcovers are pretty cheap – with the exception of our first issue, which we have for $7.50, our regular issue price has settled into $8.50 on Amazon with an SRP of $10 (we did have a double-sized winter issue that’s $14.99). They’re a little over 100 pages, and 50k-60k words per issue. We do offer an edition for connoisseurs through Lulu that is hardbound with a dust jacket and foil lettering; these are a bit pricier, but they’re absolutely gorgeous. Prior to the Print on Demand revolution, putting out hardcovers with that degree of quality in those small quantities would be prohibitively expensive. This way, we can offer a quality product for a reasonable price, get a reasonable cut, and not have to be sunk out of pocket on physical copies we can’t move. Best of all, since we are not actually the ones selling physical products directly to our readers, we don’t have to keep track of sales tax receipts. When I had a record label, the worst thing was having to fill out and mail in reports of 0 sales for months where we didn’t have any tables at shows, or cutting checks for one or two bucks when we only sold a couple buttons or patches.
Our digital distribution includes typical eBooks and PDFs and the sort of stuff you’d expect from any sort of work published these days, but I’m not a fan and mostly make them available out of obligation to our fans and readers who prefer e-Readers or just don’t want the clutter of owning physical books. I always hate them, because to make Cirsova e-Reader friendly, we have to strip out all of the layout work that we put into it to give it a pulp magazine look and feel (columns, dropcaps, etc.).
The part about art being cheaper is more drawn from my own experience commissioning covers for my novels. A skilled digital painter can take advantage of the medium to be more efficient than, say, an oil painter can be. (Layers, undo, no need to wait for anything to dry, etc. seem to permit faster work for those who know how to take full advantage of them, allowing an artist to serve more clients.) I certainly don’t mean to suggest that digital painting has devalued art, because good art is of tremendous value. Anyway, thank you very much for your time, it’s been very interesting.
Fair point. Both have their advantages when it comes to economy. For instance, an artist working in a physical medium can sell the rights while still holding onto the physical piece, which they can then still sell; this allows for art to be more affordable for someone commissioning it because the artist can sell the rights and still have something of monetary value. As an example, I don’t own physical copies of any of the first four covers by Jabari Weathers, though I believe that one was purchased as a gift for a writer; if Jabari wanted to sell the originals, he absolutely could, and I know that’s part of why I was able to get such a great deal on his incredible artwork.
With all-digital art, it can be cheaper, as there’s no physical media required and an experienced artist can knock out a commission quickly, but there’s not an “original” that can be sold as well, so the artist makes their money solely on what they charge for the commission and whatever use rights they retain.
Interestingly enough, our cover for issue 5 is a hybrid – it’s digitally colored, but I got both the commercial use rights and the original line-art. (I was tempted to put it up on the block for this Kickstarter to defray costs, but decided I wanted to own at least one piece of original Cirsova artwork).
Thank you for taking the time to talk with me about the Magazine! I enjoyed it.
Recently, John C. Wright blogged about the formula used by Lester Dent to write the Doc Savage stories. He began his post with this defense of writing formulas:
There are people who object to formula fiction. Myself, I like formula fiction much better than experimental fiction, because the formula at least means the story will be workmanlike. Some complain formulas make yarns too predictable. But that is like saying the recipe for cheesecake is predictable: it depends on how well the cook uses the recipe, does it not?
As a small aside, though in the main Mr. Wright’s point about cheesecake is well made, there are—I get the impression—far more people who like to be surprised by fiction than like to be surprised by food. This may be related to the relative difficulty of finding new fiction compared to finding new foods, but at least I’ve never encountered a restaurant review which gave spoiler warnings so one could avoid hearing about what the food tastes like. Or perhaps a better analogy would be that very few people go to a winery and refuse the wine tastings because they don’t want to tarnish the experience of drinking the wine they are going to buy with the knowledge of what it tastes like. For people who thus primarily enjoy surprise in fiction, formula fiction is somewhat poorly suited. Being more a re-reader than a reader of new things, I’m not very familiar with this myself, but I’ve heard that there are such people in the world—from their own lips, on some occasions.
Not long after Mr. Wright published his post, Brian Niemeier blogged about writing formulas, linking Mr. Wright’s blog post. Something in it caught my eye, and seemed related to what Mr. Wright said:
A long-running controversy in writing circles rages around the validity of formulas. Keep in mind that I don’t mean formulaic writing, which is just predictable and derivative.
The interplay of the two is this: formulas that work to make enjoyable stories will—with certainty, and possibly of necessity—result in more stories which are predictable and derivative. It will do this because it will result in more stories by fixing other aspects of the stories that otherwise would have died still-born on their author’s fingers or the editor’s in-box. This is in no way a criticism of formulas, but rather an entry-point into considering what it is that formulas actually do.
There are many ways in which a story can be good or bad. One dimension of good stories is characters. Specifically, are they interesting people? Does the story show off their virtues realistically? The wrong character in an interesting situation will be uninteresting because none of their virtues (especially natural virtues) will be relevant and they will remain background non-entities or automatons moved about because the plot requires it and for no other reason. A formula is not likely to help much with this.
Another dimension of good stories is the narration. A good narrator makes observations about human nature that are interesting to read. A formula will not help with this at all, and this is perhaps one of the most neglected aspects of storytelling. But consider such amazing narration as you find in Pride & Prejudice:
It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.
However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered the rightful property of some one or other of their daughters.
Or again:
“[Miss Bingley speaking at length to Mr. Darcy, criticizing every feature of Elizabeth.]”
Persuaded as Miss Bingley was that Darcy admired Elizabeth, this was not the best method of recommending herself; but angry people are not always wise; and in seeing him at last look somewhat nettled, she had all the success she expected.
Or again:
Miss Darcy and her brother appeared, and this formidable introduction took place. With astonishment did Elizabeth see that her new acquaintance was at least as much embarrassed as herself. Since her being at Lambton, she had heard that Miss Darcy was exceedingly proud; but the observation of a very few minutes convinced her that she was only exceedingly shy.
Fresh observations on human nature cannot be taught by formula, but only by extensive education and observation.
Another dimension of good stories is expression. This can be word choice, but it also is related to the judgment of what to describe and what not to; using an apt quotation in place of a lengthy explanation. Formulas cannot teach one a wealth of quotations and which are apt, and when a character might use a quotation rather than an explanation.
Another dimension of writing quality is whether the author has written something the reader did not expect. As Chesterton observed (I can’t find the quote with more than a little googling, unfortunately), the only reason to read anything is to read something one did not expect to read. I think in the original context he said that the only reason to listen to a man is because one expects him to say something one did not expect him to say. Otherwise you could leave off listening and still know what he said. This is why I find it so annoying whenever a hero has been all but utterly defeated by insurmountable numbers of enemies and all he has is a baseball bat and his baseball bat just broke. We know that on the next page—since it is not the last page—either his friends will show up or the magic amulet he’s been wearing will start to glow and imbue him with the power to defeat all his foes, depending on whether he’s been wearing a magic amulet. This is why I think it’s such a mistake to raise the stakes so much that they die of asphyxiation. The corpse of the stakes is very predictable, in the sense that one need not see how the author wrote the story to know how it was written. There was only one way to write it, excepting a deus-ex-machina which is pure cheating and worse than the predictable next scene. Formulas can help with this, in that a well-designed formula can keep a writer from getting into a situation in which there is only one way out, but it needs to be a very well designed formula to do that. And as a practical matter, it should be noted that formulas do seem especially attractive to people who have a hard time thinking of even one way out of the situations they’ve gotten their characters into, but that’s just from my personal experience of a not very representative sample of people.
And finally one dimension of good writing is a plot that holds together. A series of events with no rational relationship to each other is not interesting to read, at least for those who are sober. There is nothing in them to engage the rational mind. And this is where a formula can greatly improve writing. Formulas can help a great deal with structure, and keep the author from writing the protagonist into a corner, and thus keep the author from merely teleporting the protagonist to a new location. I mean a good formula, of course. A bad formula will not keep the author from writing the protagonist into a corner, but then a bad crutch will not support a man’s weight. That does not mean that crutches are a bad idea for a man with a broken leg, and equally it doesn’t mean that formulas are a bad idea for a writer. All tools must be judged by good examples of the tool, not the worst versions of them. Judging all formulas by a bad formula is as much a mistake as judging all saws by a dull saw.
But human nature being what it is, the ability to make a story good in several dimensions tends to go together. This is related in part to the many effects of intelligence and education, but in any event it will be rare to find a man who can write excellent characters, can write fascinating insights into human nature, can do all this with language that is well suited to the needs of the moment (tight in action, luxurious in moments of leisure, and so on), and can always foresee the possibilities of his present course to ensure that there are always multiple viable paths ahead, but who can’t for the life of him come up with a way to get his characters to do what they need to do. Such a man is not impossible, but he is uncommon. By contrast, a man who can’t write characters very well, can’t say much about human nature worth reading, can barely put words together grammatically, to say nothing of concisely and clearly, and who can’t foresee the present course enough to stock the future with possibilities, but who can follow a formula in order to come up with a plot that at least has his characters acting like human beings and connecting his scenes in some sort of rational way—such a man is far more common.
The upshot of this is that if good writing formulas are well known and widely used, it will result in writing which is more predictable and derivative, because while it will elevate the occasional good writing into very good writing, and some mediocre writing into good writing, it will elevate far more bad writing into mediocre writing. And the thing is—contrary to what snobs say—mediocre is not the same thing as bad, and mediocre writing is worth reading.
Now, it should be noted that I am not at all saying that formulas will turn good writing into mediocre writing. Such a thing is, I suppose, not entirely outside of the realm of possibility, but I think it very unlikely. My point is that writing formulas, by their very function of improving writing, will—because they improve it unevenly—result in more mediocre writing. But just as the doctrine of purgatory carves a chunk off of hell and results in fewer people being damned—even if, in a sense, it lowers the average quality of the blessed—so a writing formula which improves writing will carve a chunk out of terrible writing and make it mediocre. This is in no way a criticism of writing formulas, but instead a study in how counter-intuitive results can be. Perhaps it can be called a study in the law of unintended consequences.
So again, to be crystal clear—since writing formulas come under a lot of probably undeserved criticism—nothing in what I said is an argument against using writing formulas. At most, it is an argument for trying to improve one’s writing in more ways than just using a writing formula. And that, I suspect, no fool ever doubted. And certainly the men whose blog posts I linked above are no fools.
God’s blessing to you on this the twenty fifth day of January in the year of our Lord’s incarnation 2017.
So I missed yesterday, too. It’s not entirely my fault; I meant to write something in the evening but then fell asleep when I was putting my children to bed. It’s one of the hazards of parenthood, especially when tired. But at the same time I put it off because I didn’t have a subject I felt like writing about. But not because my mind was blank; I think I’m letting the times I wrote on a more important subject raise the threshold of what I consider fit for writing about, and the problem with that is that it’s not what the daily blog post is supposed to be about. I’d love for everything to be great, of course, and will certainly do the best I can, but when that means I’m not writing at least one post a day, this has certainly ventured into the territory of letting the perfect become the enemy of the good.
Incidentally, while it’s a great sentiment that one shouldn’t do that, I really don’t like how imprecise the saying is. In a truly strict sense, Hell consists in letting the good become the enemy of the perfect. In that sense, therefore, the perfect should always be the enemy of the good. And I mention this because I’ve seen people use the phrase “don’t let the perfect become the enemy of the good” to mean preferring mediocre solutions to spending more effort on better solutions. More fully specified, the saying would be, “don’t let an unattainable level of quality stop one from achieving an achievable level of quality”. Less catchy, but it can’t be the slogan of hell, so that’s a plus.
In other news, I put out a video yesterday in which I answered a rhetorical question posed by Deflating Atheism in one of his videos:
I’ve used the idea of answering a rhetorical question in some of my novels. I don’t do it much myself, but I really like it in fiction because it’s a nice reversal of expectations. I have done it in real life, though, like when I was speaking with an acquaintance who said, rhetorically, “You can’t have everything. If you did, where would you put it?” I immediately answered, “Right where it is, since you’d own that too, wouldn’t you?” Now, his saying did speak to a real truth: that while human greed is infinite the human capacity for enjoyment is finite, and so greed is pointless. But my reversal does speak to a real truth too, which is that if you owned everything you’d have a fundamentally different relationship to it than a man who owns only one house. Since you owned everything you wouldn’t need to change anything; and there’s the hint that you might as well let the people currently using your stuff go on using it because this way at least they’ll take care of it rather than letting it fall apart. It touches on a mistake atheists make, though not very directly, and only abstractly: a universal relationship is not a particular relationship multiplied out. It’s a fundamentally different kind of relationship.
God’s blessing to you on this the twenty third day of January in the year of our Lord’s incarnation 2017.
Well, I missed yesterday. The odd thing is I didn’t even notice that I did until I was writing out the date for today’s post. My apologies; yesterday was an extremely hectic day. I took my middle son to a classmate’s fifth birthday party, and there was a magician there performing for the children. I was surprised by how accurate depictions of magicians doing children’s parties on television turned out to be.
On a slightly related subject, I recently saw this rant by Harlan Ellison:
It’s a very interesting subject about which much can be said. First I’d like to mention that given the way he uses a German pronunciation of Dachau, I might not have realized he was talking about the Nazi concentration camp either, though I’m certainly aware of it. I’m only really used to seeing it written.
The copyright on the video is from 1993, and Ellison says that the story was from a few years ago, and given that at the time he was in his late fifties, that probably doesn’t mean just two years. (The older people get, the longer a period of time is covered by “a few years”.) So we’re likely talking about the late 1980s, or at the latest the very early 1990s. That makes this story especially egregious (if it’s not a matter of pronunciation) because Dachau was in living memory then; plenty of soldiers who fought in World War II were in their sixties and seventies in 1990; it was only 45 years after the liberation of the concentration camps. Incidentally, having grown up in the 1980s, I imprinted on the idea of World War II being in living memory for many people, which it isn’t any more. I’m sure that there are a few World War II veterans still alive as of the time of this writing, but World War II ended 71 years ago.
Anyway, it is a real problem that modern people are not well educated in history. The ever-increasing efficiency of distribution is making this all the more the case; with tens of thousands of books being published every year, it’s impossible to know more than the tiniest handful of them. Of course, this has long been the case; the great library at Alexandria is estimated to have had somewhere between 40,000 and 400,000 books. Even without television, no one could be familiar with them all.
This was, in part, why there was the idea, within education, of teaching the classics. It ensured that there was a common set of references that (educated) people could make that others would recognize. There was also the part about the classics being very, very good, of course, but that’s a different subject. Actually, it’s not, entirely, because the excellence which made the classics, well, classics, also made them not very accessible. This meant that the classics were in a head-on collision with widespread education, and not surprisingly widespread education won. But that’s not really what killed off the classics. Secular education was what killed off the classics, for the very simple reason that nothing secular transcends time.
This is almost true by definition, of course; especially if one permits materialists in the room, what counts as secular is purely bound by the moment, and inherently has no consequence past the conditions it has bequeathed to us as the present moment, together with our (pointless) memories of it. But even apart from that, even if we permit a little bit of humanity to leak in around the edges of strict secularism, such that we actually consider ourselves to have some continuity with the past in a more meaningful sense than the historical curiosity of how present conditions came to be, it doesn’t matter very much because there are too many crimes in history for remembering them to be practical. No one’s distant ancestors were innocent of others’ blood, so on purely practical grounds—to borrow a phrase from Pride & Prejudice—in matters such as these a good memory is unpardonable.
The result is that as education became secular, it forgot history. It also forgot classics of literature and art, because human nature is not a thing to be learned, but a thing to be made. Of course, having jettisoned all standards (it can take a generation or two), there becomes no reason to mold human nature into one shape rather than into another. The only thing we have to decide between alternatives is our inclinations, and simply doing what our impulses dictate requires no skill. One cannot become an adept at doing whatever one feels like. So in the end, the mastery over human nature which was the goal becomes a total passivity. Learning about the raw material of human nature is useful for forging a new human nature, but since in the end the secularist has nothing to forge it into besides what it already is, he quickly discovers there’s no point to even knowing what it is. When knowledge is conceived of as having no value beyond control, people who do not desire self-control have no use for self-knowledge.
At which point the only value to any sort of modern classics which replace older classics is that of shared reference, but the problem is that there’s not much call for shared references. They’re not of practical value, and people with nothing to communicate don’t need a language to do it in.
As I was writing this I was thinking of the book The Catcher in the Rye, which I recall disliking. It’s a relatively recent book which is highly praised in the sorts of places which should rightly make one suspicious of it. And there’s the fact that I couldn’t remember any of it so I had to read Wikipedia’s plot summary to refresh my memory. From what I can piece together from my memory augmented by Wikipedia’s plot summary, it’s the book of a horribly disaffected boy who doesn’t fit into his world and feels very dislocated because no one gets him. But there’s nothing to get; he’s empty. There’s nothing to say about nothing. He wanders around New York City trying to find something, but looks in all the wrong places. It wouldn’t have sold nearly so well if he actually wandered into a church and realized that nothing but God could fill the emptiness inside of him, but it would have been a far more worthwhile book. But then perhaps my memory is faulty. According to Wikipedia, it was highly praised by noted promise-breaker/tax-raiser George H. W. Bush.
Anyway, while the cultural illiteracy which Harlan Ellison complains about is certainly a problem for science fiction authors, it’s somewhat combatable by picking a niche audience and writing for them. For example, you can rely on Christians getting (at least fairly commonly made) biblical references. As Chesterton said, the modern world is one wild divorce court. But in himself, Christ brings all men together.
God’s blessing to you on this the twenty first day of January in the year of our Lord’s incarnation 2017.
Previously I talked about my new mirror lens. In that post I said I’d put up some pictures taken with it. I did manage to take a few. I’m still learning how to use the lens, and I think that my manual focusing is rusty, so the pictures didn’t look very good at full size, but since I think that they shrink down to decent photos, I’ve decided to put the four best up here.
The first is macro-photography of the trunk of my ficus. The minimum focusing distance is about 1.5 meters, so at 500mm it’s hardly the lens to take pictures of a fly’s eye, but it gets closer than many lenses do. The depth of field is incredibly short at that distance, though. (That’s a problem with most macro photography in general.)
The next is the roof of a shed my neighbor has. It’s been up for a long time, and has accumulated some interesting lichens. (I should note that it was a grey, overcast day.)
The clouds were looking interesting, so I decided to try some cloud photos. I figured that since they were so far away at the very least I should have a decent depth of field. On the other hand, my not being very skilled at manual focusing didn’t help.
And finally while I was out shooting, a red-tailed hawk landed on my neighbor’s roof to look for mice in the yards around it. Unfortunately I didn’t get any great photos, but this wasn’t bad:
I was shooting at ISO 1600 to get a 1/400th shutter speed, which is still sub-optimal for a 500mm lens. I suspect that when taking photos of living subjects, I really need a bright sunny day to get enough light to shoot at a fast shutter speed and low ISO (the higher the ISO, the more noise you get). If I get a chance to take pictures on a bright sunny day, I’ll post some of those for comparison. And I suspect that I need to practice at manual focusing, too.
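The arithmetic behind that trade-off is just powers of two: each "stop" is a doubling, whether of ISO sensitivity, of shutter time, or of scene light. A minimal sketch of the reasoning (the "four stops between heavy overcast and full sun" figure is an illustrative assumption, not a measurement, as is the common 1/focal-length handholding rule of thumb):

```python
from math import log2

# Common rule of thumb (an assumption, not from the post): handheld shots
# want a shutter speed of at least 1/focal-length, so a 500mm lens wants
# 1/500 s or faster -- which is why 1/400 s is sub-optimal.

def stops_between_isos(low_iso, high_iso):
    """Each doubling of ISO is one stop of extra sensitivity (and noise)."""
    return log2(high_iso / low_iso)

def faster_shutter(shutter_s, extra_stops_of_light):
    """With N extra stops of light, the shutter can be 2**N times faster
    for the same overall exposure."""
    return shutter_s / (2 ** extra_stops_of_light)

# Dropping from ISO 1600 back to ISO 100 gives up exactly four stops:
print(stops_between_isos(100, 1600))   # 4.0

# If a sunny day supplies roughly four more stops of light than heavy
# overcast (an illustrative figure), those stops could instead buy a much
# faster shutter at the same ISO:
print(faster_shutter(1/400, 4))        # 0.00015625, i.e. 1/6400 s
```

The point of the sketch is only that the choices are fungible: four stops of extra daylight can be spent either on dropping to a low, low-noise ISO at the same shutter speed, or on a far faster shutter at the same ISO, or split between the two.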
God’s blessings to you on this the twentieth day of January in the year of our Lord’s incarnation 2017.
Yesterday I wrote about some thoughts I had after reading Brian Niemeier’s short story, Izcacus. Brian mentioned my post in a post on his blog, where he clarified something I said:
One item I should point out, since Christopher discloses that he’s not normally a horror reader, is that I did some pretty extensive research before writing the story. One of my goals was to clear away the accretions artificially heaped upon vampire mythology since the 19th century and depict vampires closer to how they were understood in the original folklore. What I found wasn’t a clandestine society of suave, neck-biting supermodels. In the old tales, vampirism presents much more like a disease.
This is actually something which Brian had mentioned when I interviewed him, but had slipped my mind in the intervening time, so I really should have remembered it and clarified my thoughts. The relevant section from my post yesterday was this:
This is a very interesting take on vampirism, adding some very interesting technical detail to the mechanism of becoming a vampire. It’s not as blood-centric as vampirism traditionally is… So while it’s an interesting step forward for the mechanics of vampirism, it seems to come somewhat at the expense of some of the (recent) traditional lore of vampirism.
(That is not in itself bad, of course; I gather one staple of horror is re-interpreting older horror stories so as to create fresh lore; essentially producing a sense of realism by treating previous fiction as existing but inaccurate. Horror is not one of the genres I normally seek out, so I’m not very familiar with its conventions—or perhaps I should say its unconventions.)
There are two parts to what I said that should be distinguished, the more subtle one I stand by and the less subtle one I stand corrected on. Since corrections are more important than new material, I’ll address the part I stand corrected on first.
It was a simple mistake on my part to talk about Dracula as the beginning of vampire lore. Doubly so because Brian had mentioned in our conversation that he had gone back to earlier vampire mythology. This was partially an error of communication and partially an error of thought on my part. By lore, I meant the world-building done by authors writing for entertainment. I’ve heard the term used that way metaphorically in science-fiction, where it is unambiguous because no one has actually told putatively true stories of the distant future or other planets. I then used it without thinking in a context in which there is a great deal of literal lore. This was just poor use of language on my part, because there was no way for someone unable to read my thoughts to read my words as I meant them. So for that, mea maxima culpa.
This was also an error of thought on my part because my strong interest in the stories of vampires in fiction (that is, relatively modern entertainment) obscured, in my thinking, the significance of the vampire lore before the advent of modern fiction. This also was an error, and Brian’s goal of going back to the source and making it fresh again is a legitimate and noble goal. I didn’t mean to imply that it wasn’t, but I may have implied that by omission—I can certainly see how my words can be reasonably read in that way—in which case mea culpa.
I have only a passing familiarity with a little bit of the traditional vampire lore that Brian mentioned, by the way. I don’t mean to imply that I already knew it and I simply forgot to mention it. I was aware that the idea of vampires being able to be seductive is relatively new, possibly originating with Dracula. I was under the (possibly mistaken) impression that vampires were originally closer to sorcerers, that is, men who sold their soul to the devil for power and lost much of their humanity in the process, becoming recluses who may fall into outright cannibalism. I’ve got no sources for any of this, and might well be mistaken in my recollections of what I heard more knowledgeable people say many years ago.
Now, as to the more subtle part which I do stand by, but would like to elaborate on, I want to defend the modern accretions which have been artificially heaped upon vampire mythology. Not as better, mind you, but merely as something with enough meaning in them as to deserve their existence as something separate.
As a small bit of background, the way that Greek mythology was presented to me in school as a child was as a unified thing, with particular gods and beliefs and stories about them. That is, it was presented as if there was a canon. I’m not sure if that was intentional or just a by-product of being so familiar with Christianity that the people who wrote books for children just naturally presented it in more-or-less that way, which was then filtered through the simplicity of a child unfamiliar with religions without a canon. Anyway, this turned out to be wildly inaccurate. The Greeks had different gods, different conceptions of the same gods, the same stories about different gods, and wildly different stories about the same gods. Probably the best analogy today would be if you were to ask about Spiderman. He’s appeared in at least four different comic books about him (five if you count Spiderman 2099, though that was about Miguel O’Hara, not Peter Parker), countless others where he showed up for an issue or a few issues, at least four animated series that I know of and probably several more, and maybe a dozen movies. His origin story has been retold half a dozen times, and differently. His personality has varied widely with different authors, and it is absolutely impossible to even come close to giving a chronology of his life and actions which is consistent with half of the things he’s been in, let alone all of them. And so it was with the Greek myths. The gods had very different personalities when different poets were telling their stories, and again when playwrights were. None of them were official; to a great degree you just picked what you liked and stuck with that.
And so it is with popular re-interpretations of folklore. Dracula portrayed a vampire as someone only very slightly inhuman, but attractive rather than repulsive. Fast forwarding to Interview with the Vampire—which was an excellent movie I really need to write a review of—you get to what Brian described as neck-biting supermodels. Well, that’s not quite true because I don’t think it’s implied that all vampires become beautiful. They’re seductive, but it’s not the same thing. All vampires becoming beautiful really comes into its own with Twilight, I think. In Interview with the Vampire the vampires are still relegated to the darkness where their seduction is by candlelight. And if I recall properly there were vampires who (by movie standards) were not particularly nice-looking in the vampire theatre. There is the minor detail that the leading roles are played by movie stars, who are beautiful, but in fact I believe that was something Anne Rice objected to in the casting of Tom Cruise. And later repented of, I heard, because he undeniably did an amazing job in the role of Lestat. Anyway, in Interview the vampires are seductive because they are hypnotic; they are not naturally attractive but rather supernaturally attractive. And this seductiveness does work with the idea of damnation, which certainly is a theme in Interview. Satan lures people with empty promises, and so too do the vampires in Interview. It is suggested, though not outright stated, that their seductiveness is only active when they are hunting; that is, it is generated by their intention to kill. It is not the sort of thing which can be used as a superpower precisely because it is only in giving into their bloodlust that they have the power at all.
And this, I should note, is quite representative of something real. We’re all familiar with art that has its power by titillation; and there is no good use of this titillation. It only has its specific power as a misuse of something because the proper use necessarily curtails it. Kind of like how a wine bottle only becomes a weapon if you break it. This is a very good representation of the sort of empty promise with which Satan tempts people. Vampires live forever, but only by killing human beings. Vampires are attractive, but only when they are hunting. You can see the same thing in people who use sex to gain influence; as soon as they stop the sex they lose the influence. Thus as soon as pop stars gain enough wisdom to stop peddling sex, drugs, and rock-and-roll, they lose their platform to distribute this wisdom. Someone else still peddling those things has replaced them on stage.
So these new vampires are very different from the old vampires, though as we see in Twilight, they can easily go bad. By which I mean, reveal nothing about the human condition. (The central conundrum of Twilight is, what if you were irresistibly attracted to someone who could barely control their desire to kill you? Even if it is mildly interesting, it’s not exactly a question with broad applicability. The answer is, move out of town and change your phone number. Sometimes putting yourself out of the reach of temptation is the right answer. When someone can barely control their urge to kill you, that’s one of those times. Relatedly, if you can barely control your urge to kill someone, leave town. Leave the country if you have to. Even if it means having to call in sick to high school more often because there is less cloud coverage.) Still, abusus non tollit usum. There are good vampire stories about modern vampires left to tell.
(N.B. I don’t mean that last point to sound contradictory to Brian, who so far as I know has never claimed otherwise. I mean it more in contrast to the generally sound heuristic that modern things are bad. It’s one of the exceptions, I’m arguing.)
God’s blessings to you on this the nineteenth day of January in the year of our Lord’s incarnation 2017.
I recently read Brian Niemeier’s free short story, Izcacus. It was an interesting read, both while I was reading it and afterwards. It’s a good use of fifteen minutes. Unfortunately short stories lend themselves to short reviews, because (when well written) they’re so tightly written that talking about them gives away too much information. At least I have that problem. Russell Newquist would probably find a way around it, as he’s very good at writing reviews, I’ve noticed.
But I am going to talk about Izcacus, so this is your warning that there will be spoilers. If you don’t like spoilers, stop reading here (until you’ve gone and read the story, at which point please come back).
Or here, that would work too.
Even here, really. But that’s it. The next paragraph will have spoilers in it, so stop reading now if you haven’t read it and don’t want to encounter spoilers.
I should begin by saying that I went in knowing that Izcacus was written as an attempt to bridge the gap between religious vampires and scientific vampires. So I didn’t come at it with perfectly fresh eyes, as it were. That will naturally color my thoughts on the story, though it probably has a bigger impact on my reaction to it than on my considered thoughts about it.
The first thing I find interesting about Izcacus is that it uses what my friend Michael referred to as epistolary narration. That is, several characters narrate the story in the form of emails, letters, blog posts, journal entries, and most interestingly letters to a dead brother. It’s by no means an unheard of device, but it’s not overly common, and as Michael reminded me, it is also the narrative device in Dracula, by Bram Stoker. I doubt that parallel is accidental, though I haven’t asked Brian about it. He uses the device well and avoids its weakness—it can easily become very confusing to have multiple narrators—while taking advantage of its strength. In particular, it allows a lot of character development in few words, since the voice of the character tells you a lot about them. Not merely the words they choose or their commentary, but also what they choose to talk about and what they leave out. Editorial decisions tell you as much about a person as creative decisions, though they tell it to you more subtly.
Second is that one of the problems that every horror author is faced with in the modern world is that horror and modern technology don’t blend well. I don’t mean that they can’t, but a person with a cell phone can—in normal circumstances—call for help so that they won’t feel alone. Of course, that doesn’t always do much. (There was a news story a while back about a Russian teenager who called her mother on the phone while a bear was eating her. She died before any help could arrive. More locally, there was a hunter who shot himself with a crossbow and called 911 but was dead before they arrived. If a broadhead cuts a major blood vessel, you can bleed to death in as little as about 45 seconds. I’ve seen a deer pass out in about 20 seconds.) But there is still a big difference in mood between knowing that help is on its way and won’t arrive in time versus not even being able to call for help. By setting the story on a remote mountain without cell service, and further where they had to break Russian law to even be, this problem was solved very neatly. There are plenty of very remote places in the world and if you haven’t told anyone that you’re going there, no one will ever come looking for you there. (One reason why the Pennsylvania hunter safety course emphasizes telling people where you are going hunting and when you will be back, every single time.) Structurally, I really like this.
The mood of isolation and danger is done well, but in general I’m far more interested in structure than mood—possibly because I have a very powerful and active imagination and can imagine the mood for myself even if it is not described, but my philosophical side rebels against plot holes. Pleasantly, there are no plot holes in Izcacus, which I appreciated. And the structure is very interesting indeed when we come to the central point of the story: vampirism. Izcacus, we find out, means “blood-drinker” in the local dialect, and the mountain climbers eventually find a cave with some old but suspiciously fresh corpses. And here is where Brian marries religious vampires with scientific ones. Vampirism is a form of demonic possession, but possession requires the cooperation of the possessed. And so the demons have created a virus—which walks the line between living and inanimate—as a means of entering healthy hosts. The virus acts in its natural fashion to weaken the host; by putting them in extremes of pain and weakness, the host becomes more willing to accept the possession which will rid them of the pain. And as the story (or rather, one of its characters) noted, after death the body becomes merely material. This is a very interesting take on vampirism, adding some very interesting technical detail to the mechanism of becoming a vampire. It’s not as blood-centric as vampirism traditionally is, and in fact one weakness of the story is that it isn’t made very clear why the vampires are called blood-drinkers at all. No one is exsanguinated that I can recall, and any wound seems to suffice for entrance of the virus. Granted, one of the characters was bitten on the neck, but another seemed to be infected by a cut on her shoulder. And this is somewhat inherent in the nature of blood-borne viruses. If saliva will work for transmission, blood-to-blood contact will as well. (As will semen-to-blood transmission, but fortunately Izcacus is not that sort of story.)
So while it’s an interesting step forward for the mechanics of vampirism, it seems to come somewhat at the expense of some of the (recent) traditional lore of vampirism. (Update: Brian clarified what I misunderstood.)
(That is not in itself bad, of course; I gather one staple of horror is re-interpreting older horror stories so as to create fresh lore; essentially producing a sense of realism by treating previous fiction as existing but inaccurate. Horror is not one of the genres I normally seek out, so I’m not very familiar with its conventions—or perhaps I should say its unconventions. And if you want to take that as a semi-punning reference to the undead, I’m powerless to stop you. But if you do, please feel a deep and lasting sense of shame because of it. That’s not really a pun.)
But, what it sacrifices in traditional vampire lore, it makes up for in the reason why anyone is going near the wretched things in the first place. My two favorite vampire stories are Dracula (by Bram Stoker) and Interview with the Vampire (the movie; I’ve never read the book, which a good friend has told me isn’t as good; the screenplay for the movie was written by Anne Rice, who also wrote the book, so it is plausible that her second try was better than her first). In both cases the vampires can pass as living men and come into human society on their own, though in Dracula he does at first lure Jonathan Harker to his castle in Transylvania by engaging his legal services. But it really is Harker’s legal services which are required there; he isn’t interested in Harker as food (at least not for himself). In Izcacus the vampires are not nearly so able to pass in human society, so the humans must come to them. This is in line with other stories (most of which I haven’t seen or read) where the humans venture into the vampire’s territory. I think that there the lure is some sort of treasure, whether real or rumored, but while greedy protagonists make for relatively pity-free vampire chow, they don’t make for sympathetic protagonists. In Izcacus there are really two motives which drive the characters; a noble motive which drives all but one of them, and a far more sinister motive which drives her. The official reason for this clandestine expedition is to recover the bodies of people who had died trying to summit Izcacus, while the hidden reason is to recover samples of the disease which was the reason the Russians sealed off access to Izcacus in the first place. Thus it is the backers of terrorism who are funding the expedition in the hope of retrieving such a virulent virus to be used as a bio-terrorism weapon (thinking of it only as deadly, and not as diabolical).
I find that very satisfying because instead of a pedestrian tale like greed going wrong (who doesn’t know greed will go wrong?), it’s the much more richly symbolic tale of the problem with making deals with the devil. As Chesterton noted, the devil is a gentleman and doesn’t keep his word. The devil may promise power, but has no interest in delivering on it. I’m told there’s a line in one of the tellings of Faust where, after Faust sells his soul for knowledge, Mephistopheles tells him he doesn’t have that knowledge to give, whereupon Faust is indignant that he had been lied to. As I understand it, Mephistopheles basically said, “I’m a devil, what did you expect?” It’s one of the reasons why I’m so fond of the short form of the baptismal vows in the Catholic rite of baptism. “Do you reject Satan? And all his works? And all his empty promises?” It’s a terrible idea to expect the devil to keep his promises; it’s more his style to bite the hand he’s shaking.
God’s blessings to you on this the eighteenth day of January in the year of our Lord’s incarnation 2017.
There’s a really interesting question in why it is that we teach children to say “thank you” even when they don’t “mean it,” by which we mean that it’s something done by choice or habit rather than a spontaneous outpouring of gratitude. When a child says “thank you” without meaning it, it is an acknowledgement of a situation in which something was done for them which they were not owed. They may not recognize this in the moment, but the acknowledgement still exists, and is something which they can contemplate (unconsciously) as time permits. A great deal of childhood is the building up of data to be understood later, even if not consciously recalled later, and taking a physical action (like speaking) to recognize something that has happened makes it far more memorable. This gets to the incarnational nature of human beings; we are body and soul united, not merely joined.
The “Cartesian dualism” which predates Descartes by quite a bit—the gnostics divided soul and body, and they were not the first to do so—is an interesting thing. It seems to be very natural to our fallen nature to turn this distinction into a division. In some sense I suppose that this is inevitable because our souls can survive the death of our bodies, so the things are in fact divisible, but fallen humanity seems to want to divide them earlier than necessary. I’ve noticed that in other places, too, where people seem to want to die before they’re dead. A person having an identity is an example of that. You’ll have an identity after you’re dead. Right now, you’re a work in progress and might end up being anything. I wonder if this isn’t a fear of making choices. Making choices means that we can make bad choices, and the reality of that can be scary. It might also be related to how many people like to make the world comprehensible by reductionism. People, as origins of causality, make the world horribly complicated. Far too complicated for our finite minds to comprehend. It might be something else too; it’s an interesting question to contemplate, anyway. (Usually things don’t have single causes but are caused by multiple threads intertwining; this is especially true when multiple people do the same thing, with different threads having different amounts of influence in each person.)
In other news, I finally put up my interview with professional science fiction and horror author Brian Niemeier:
We talk about writing, fiction, theology, and more. It’s almost three hours long, but I found it very interesting when I was in it, and when I listened to it afterwards during editing. 🙂
God’s blessing to you on this the seventeenth day of January in the year of our Lord’s incarnation 2017.
I’m thinking about doing a video about the topic of the burden of proof. This is something of a pet peeve for my friend Eve Keneinan, who has a hilarious post on The Burden of Proof Fairy and the You Have To Believe Everything Monster. The topic under consideration is usually phrased, “the person making the claim has the burden of proof”. Which, as Eve rightly points out, is a claim, so she immediately invites the claimant to abide by their own principle and shoulder the burden of proof for that claim. For some reason she attracts a lot of stupid atheists on Twitter, so the results can be funny. The best are the people who add “this is not a claim” to the end of their claims, as if they’re children saying, “no tag-backs” in a game of tag.
I’m not sure what direction I want to go in my video. I’m thinking of starting out talking about the burden of proof in law, which is where one man (the prosecution) claims the right to punish another man (the defendant). The prosecution must meet some threshold of evidence for his claim to be granted, while the defense may try to poke holes in the prosecution’s attempt to demonstrate this. The thing is, the threshold for what evidence the prosecution must bring varies widely. In some places merely alleging the guilt of the defendant is meeting it, and the defendant must work very hard to show that the prosecution is in error. In other places, at least in theory, the prosecution must work hard to show that he’s correct beyond a reasonable doubt, while the defense does not need to prove the prosecution mistaken, only to cast doubt that the prosecution is correct. Whoever has the harder job is said to have the burden of proof, though in truth the prosecution always must meet some threshold in order to prosecute, and a defense which merely rested without saying anything will virtually never win.
Now, ordinarily no fool ever thought that courts of law provided epistemological certainty. I think many people—possibly not just fools—thought courts generally reliable. But no one ever thought the courts infallible. I’m not sure who ever thought to try to make this practical principle an epistemological one, but certainly one meets people who try to establish it as such. (Epistemology is the study of knowledge.) Of course, no one consistently applies this as an epistemological principle. I’ve yet to hear of the man who replied to, “Hi, my name is Brian” with “Prove it.” Or, “It’s a nice day, isn’t it?” with “Where’s your evidence that it’s a nice day?” No, in general the burden-of-proofers will just look up, investigate the natural world for themselves, come to their own conclusion, and then share it. That is, they’ll say, “yes it is” or possibly, “for now, but it looks like it’s going to rain”.
Of course what’s going on is that this isn’t a principle at all, it’s more of a heuristic. When it isn’t just an excuse to get out of thinking, that is. I wrote about that in We Are All Beasts of Burden. And that is really my main critique of the concept of the burden of proof as it is commonly used. It’s an attempt to avoid thinking while retaining the respect accorded to one who thinks. That’s almost a theme of the modern world. What is divorce but the attempt to retain the respectability of marriage while breaking the vows of marriage? As Chesterton said, our world is one wild divorce court, divorcing all things from each other but pretending not to.
And it’s that last part that I think is so especially troubling. A society which is pretending it is doing something other than it is doing is very far from recovery. On the other hand this is just restating the truism that the first step in solving your problem is admitting that you have a problem.
In any event, it is amusing to ask somebody who states that the burden of proof is on the person making the claim whether they have any evidence that they’re not a moron. In my experience they will stutter and be outraged that you would transgress the social norm of assuming that they’re not. It’s always amusing when people are angry with you for following their principles.
God’s blessing to you on this the sixteenth day of January in the year of our Lord’s incarnation 2017.
So a Twitter atheist I had been talking to for weeks went silent. To sum up the backstory, he discovered me because of my alinguism tweets, and then we tangled when I pointed out that “morality is subjective” just means “there is no such thing as morality” (when it isn’t just a clumsy way of saying that it depends on the circumstances; e.g. plunging a sharp piece of metal into someone is right or wrong depending on whether you’re a surgeon cutting out a cancer or a street-tough murdering someone for his shoes). Then he started asking me questions, and said that he wanted to understand what I thought. Later on, he suggested a very stupid interpretation of the story of Abraham and Isaac where he was trying to show that God had to know the choices men would make without men ever making those choices. His eventual goal was to show how God shouldn’t have created people who would end up in hell because he would know before they rejected salvation that they would. Except that “before” in this case means “logically prior”, since I had explained that God is not inside of time, and so what he was trying to claim was that God should have known what choices people would have made had they existed and made choices. That is, he’s the latest person in a long line of people who deny the existence of free will and consequently the reality of human life. (Though he generally denied it when it was put that bluntly.) And when I pointed out that his interpretation of the (almost) sacrifice of Isaac is very stupid, he criticized me by saying that calling it stupid doesn’t help prove my claims about God. (I did also point out that under his interpretation, it should be described as “the pretend test of Abraham”, since according to him the test ended before anything was actually proved anyway, and therefore the entire thing just makes no sense; why go part-way through a charade which doesn’t prove anything to anyone?
As Saint Augustine advised some monks who were questioning free will, interpretations of scripture which make scripture very stupid are bad interpretations.)
Anyway, I reiterated that I wasn’t trying to prove anything to him. He wanted to know what I thought, so I told him. And that his interpretation of the test of Abraham was stupid was one of the things I thought. I haven’t heard from him since. Perhaps I will again; we’ve had lulls of several days before. But in any event it comes back to something I’ve noticed with Twitter and YouTube atheists. They really, really want their interactions with Christians to be Christopher Hitchens-style debates. Or perhaps even more, they really, really want a Christian to be trying to convince them that God exists so they can criticize the attempt. Now, I appreciate artistic heckling as much as the next guy. Probably more, considering how many boxed sets of Mystery Science Theater 3000 I own. But like most things it’s best done honestly. Honest both in terms of being open about what you’re doing, and in terms of giving credit where credit is due and not merely refusing to be pleased by anything. There’s a great line in The Importance of Being Earnest where John Worthing was asked if he could deny that his name was John. He replied, “I could deny it, if I liked. I could deny anything, if I liked. But I won’t. It certainly is John and has always been John.” I wonder if there were people who were surprised by Oscar Wilde’s deathbed conversion to Catholicism. Presumably not. After all, he gave one of the highest praises I’ve ever heard of the Catholic Church:
The Catholic church is for saints and sinners alone. For respectable people, the Anglican church will do.
God’s blessing to you on this the fifteenth day of January in the year of our Lord’s incarnation 2017.
John C. Wright posted recently about the villains in Ayn Rand’s novels, giving them more praise than I’ve generally seen, but for a somewhat plausible reason. And in fairness I’ve seen an interesting argument that Ayn Rand is a sci-fi author. Certainly her plots seem to involve technologies far beyond what we have at present. At the very least Rearden Metal is closer to Star Trek’s duranium than it is to anything we have at present.
Anyway, I do have to concur that my favorite villains are true villains, who have made bad decisions, and not merely misunderstood good guys. This takes skill to write, since reasonably successful people who have made bad decisions tend to generally make good decisions and to have their bad decisions somewhat constrained in scope. That is to say, realistic characters aren’t easy to write. Big surprise.
I think that the most successful of these was Shakespeare’s Iago, the main villain of Othello. Iago is a soldier under Othello’s command, and has conceived a hatred of Othello for promoting someone else over him. So he is out to ruin the other fellow and Othello, and has chosen to do this by convincing Othello that his wife is cheating on him with the fellow he promoted. But the real cunning of the plan is that he gets Othello to force him to plant this idea in his head. Having given Othello the idea that he (Iago) has some suspicion, he then becomes coy and refuses to divulge it, until Othello demands to hear it. Then Iago says:
Good my lord, pardon me,
Though I am bound to every act of duty
I am not bound to that all slaves are free to.
Utter my thoughts? Why, say they are vile and false,
As where’s that palace whereinto foul things
Sometimes intrude not? Who has that breast so pure
Wherein uncleanly apprehensions
Keep leets and law-days and in sessions sit
With meditations lawful?
I don’t know that it can get better than the line, “I am not bound to that all slaves are free to.” And then when Othello pushes Iago further, Iago says:
I do beseech you,
Though I perchance am vicious in my guess,
As, I confess, it is my nature’s plague
To spy into abuses, and oft my jealousy
Shapes faults that are not, that your wisdom,
From one that so imperfectly conceits,
Would take no notice, nor build yourself a trouble
Out of his scattering and unsure observance.
It were not for your quiet nor your good,
Nor for my manhood, honesty, and wisdom
To let you know my thoughts.
And when Othello asks why, Iago explains:
Good name in man and woman, dear my lord,
Is the immediate jewel of their souls.
Who steals my purse steals trash. ‘Tis something, nothing:
‘Twas mine, ’tis his, and has been slave to thousands.
But he that filches from me my good name
Robs me of that which not enriches him
And makes me poor indeed.
And with more prodding, Iago cautions Othello:
Oh, beware, my lord, of jealousy!
It is the green-eyed monster which doth mock
The meat it feeds on. That cuckold lives in bliss
Who, certain of his fate, loves not his wronger,
But, oh, what damnèd minutes tells he o’er
Who dotes, yet doubts— suspects, yet soundly loves!
…
Poor and content is rich, and rich enough,
But riches fineless is as poor as winter
To him that ever fears he shall be poor.
Good heaven, the souls of all my tribe defend
From jealousy!
And the thing is, this is excellent analysis and sound advice. Here and elsewhere Iago explains with exacting precision exactly why what he’s doing is wrong. He even explains clearly that he’s not benefiting in any way. “He that filches from me my good name robs me of that which not enriches him and makes me poor indeed” is an exact description of what Iago is doing. And what makes this play so great is that Iago is a very believable villain.
This is, in a sense, the counterpoint to my contention that a protagonist does not need flaws to be interesting because it is the virtues and not the flaws which are the interesting thing in a character. This is also where the interest comes even in a villain, but the villain does need to be flawed in order to be a villain. I was going to say it’s strange that the modern world has just about reversed this, with heroes that are misunderstood bad guys and villains who are misunderstood good guys. But with a nod to Captain Renault, well… maybe not so strange.
God’s blessings to you on this the fourteenth day of January in the year of our Lord’s incarnation 2017.
I managed to get to the monthly meeting of my local Chesterton society. It’s a chapter of the American Chesterton Society, and if you’re at all interested in G. K. Chesterton’s writings and would enjoy talking about them with other people who do as well, I suggest checking the website to see if there’s a chapter near you.
In other news, I recently was reminded of the concept of “custody of the eyes”. Throwing that phrase into Google I get about 8.5 million results, so there’s plenty of reading material on it, virtually all of which I haven’t read. But the basic idea is to look at what one chooses to look at, rather than merely letting one’s eyes wander. Properly speaking it’s a form of Christian asceticism, but it must be remembered that the point of Christian asceticism—unlike most other forms—is not to conquer the body but to rightly order it. One of the more pressing problems addressed by custody of the eyes is that the body naturally reacts to the sight of attractive people by getting excited. In itself this is natural and not a problem, but since we are fallen creatures who do not rightly order ourselves with our reason in charge of our passions, allowing this to happen can lead us into trouble because though the initial reaction is natural, what follows (in this case, lust) is not. What should happen is that the intellect notices the excitement and merely takes it in as information and does nothing else with it (where sexual excitement is inappropriate, of course; a husband and wife in their bedroom may properly cooperate with this excitement and encourage it).
There are two big points to note. The first is that the degree to which one should guard against this sort of reaction is of course commensurate with the degree to which one is liable to fall into error. Those who are very excitable must be very careful; those less so need not be as careful. For example, most doctors have no trouble examining the bodies of people of the opposite sex, so there’s no call for them to avert their eyes while doing so, and obvious reasons that they should look at their patients. As a counterpoint, of course, it’s easy to suppose one is less tempted than one is, but the chief point here is realism, not blind adherence to general rule. (Realism in fallen creatures must always include caution, of course.) But it is important to note that since we cannot know to what degree our fellow creatures are tempted by what they see, we are in no position to judge whether they are being careless or properly cautious. And advice which assumes that people will always err in one direction and so the advice itself errs in the other direction must always be taken with a grain of salt. Practical advice should never be confused with theoretical truths.
The other big point to note is that lust is only one vice to which we can be tempted by looking at things which we have no good reason to look at. To dwell upon someone’s fatness or ugliness is a temptation to judgement and pride. Even looking too long at one pretty thing may make us ignore all of the other beauty around us; we can get so enamored of paintings that we forget to enjoy trees.
Now, of course all of this is supposed to be in service of becoming perfect. Of always doing the right thing, at the right time, in the right way, for the right reasons. And if we do that, we shall be immensely happy, because one of the primary purposes for which God made us was to enjoy his goodness. In our disordered state we are far too prone to enjoy some small thing instead of enjoying better things, so we should be on our guard against getting caught in these small traps. But if we do nothing but worry about avoiding traps, we’ve only traded one form of focusing on creation to the exclusion of God for another. The Catholic Church has tended to be legalistic (in practice, rather than in theory) because the people in its care want it to be legalistic; rules are much easier to follow for many people than a wholesale dedication of one’s every thought to God and to his creation in light of Him. Legalism is basically a series of sign posts warning you that you’re off the trail; when regarded properly, legalism is entirely compatible with being a saint; a saint will follow all the laws without thinking of them as laws because he sees the reason for their existence. Many people have trouble seeing that connection, but they can see the sign posts, and so can keep to a trail they have trouble seeing by staying clear of the signs warning that they’ve left it. As long as one never forgets that the sign posts are an aid to walking the trail, and are not the trail itself, all will be well. Alas, it’s a thing too often forgotten and all too often not even taught to children because it’s harder, and children are exhausting.
And of course as with all attempts to be perfect, it’s important to remember that we do not achieve perfection through our own efforts, but by God’s grace, so however often we fail, God has more than enough grace to make up for our deficiencies. All we have to do is the best that we possibly can; God will make up the difference between that and what’s needed. And when we fail, it is well to remember that actions can fail but a man cannot be a failure so long as he’s alive because he’s not finished yet. It is utterly pointless to attempt to judge a work in progress. If it’s a bad idea to judge a book by its cover, it’s a worse idea to judge a book only by its front cover because there’s no back cover yet.
God’s blessings on this the thirteenth day of January in the year of our Lord’s incarnation 2017.
I apologize for not posting yesterday. I don’t mean to let that become a habit.
So I discovered that the method I had been using to record video of me talking for my YouTube channel fails after a few minutes. It actually fails after a nondeterministic amount of time, but the problem is that if the camera skips any frames, guvcview doesn’t know about it, and so the video and the audio go out of sync. I’m pretty sure that the video I have where this happened is just lost, because it’s not a gradual problem which can be cured by speeding up the audio; at some point the timing between the two just permanently changes. On the plus side, I can always record it again.
But not with guvcview. And Cheese is out too. (I use Ubuntu, and these are all Linux programs.) A friend recommended using ffmpeg, which is a command line utility. I’ve been a Linux user for about 20 years now, so that’s not scary, but it does mean a ton of command line options to get ffmpeg to do what I want (because it’s an incredibly flexible piece of software). On the plus side, I’ve downloaded the source and compiled it, so I’ve got it working with nvenc, which lets it do video encoding on my video card. So after a bunch of playing around, I got it to work. If anyone got to this blog post because they’re searching for a decent way to reliably capture a webcam in Linux, this is the command line I ended up using:
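(The exact flags I used may differ; here is a sketch consistent with the setup described—mjpeg from the C920, the default PulseAudio source, nvenc encoding, an $OUTPUT variable. The device path, resolution, framerate, and bitrate are illustrative and will need adjusting for other hardware.)

```shell
#!/bin/sh
# Sketch: capture mjpeg video from a webcam and sound from PulseAudio,
# re-encoding the video to h264 on the GPU via nvenc. The device path,
# resolution, and bitrate below are example values, not gospel.
OUTPUT="$1"

ffmpeg \
    -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
    -i /dev/video0 \
    -f pulse -i default \
    -c:v h264_nvenc -b:v 8M \
    -c:a flac \
    "$OUTPUT"
```

Invoked as, say, ./record.sh whatever.mkv; ffmpeg infers the Matroska container from the .mkv extension.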
I put it in a shell script (so I wouldn’t have to type that out all the time), so $OUTPUT is the name of the file to store the video in (whatever.mkv). This is for capturing video from my Logitech C920 and audio from whatever I’ve configured as the input source in PulseAudio (which is to say, in the default sound configuration utility). Normally I use a shotgun mic, but of course one can also use the microphone built into the webcam. Using mjpeg (motion jpeg) from the webcam seems to produce higher quality output than getting h264 from it, and since I’m recompressing it anyway, I might as well use mjpeg as the source. By the way, you’ll need to use the aperture priority mode for the automatic exposure control in order to achieve 30 frames per second; otherwise there’s a good chance the camera won’t have enough light to achieve that framerate, and it will go to a lower framerate, which looks awful.
Video is a surprisingly tricky thing. Video codecs are kind of amazing, when you get down to it. Using inter-frame compression, they’re able to achieve huge levels of compression. But the tradeoff is that they’re very complex things, and complexity and reliability are usually enemies. And so it is with video standards: being standards enables them to be implemented widely, in both expensive and inexpensive devices, but they are very complex, so implementations vary and not all implementations work with each other. And what I’m finding is a common thing: it’s often best to go with the most popular solution because it will be best debugged. That’s not strictly true, though, because I still use guvcview to manipulate the camera’s settings. For example, I’ve turned off automatic focus because I stay in one place and this way the camera doesn’t try to re-focus if I lift my hands up.
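(For what it’s worth, the same camera controls guvcview exposes can also be set from the command line with v4l2-ctl, from the v4l-utils package. The control names below are the ones the C920 has commonly exposed, but they vary between driver versions, so list the controls first rather than trusting these names.)

```shell
# List the controls the driver actually exposes; names differ between
# kernel/driver versions (e.g. exposure_auto vs. auto_exposure), so
# treat the names below as examples.
v4l2-ctl -d /dev/video0 --list-ctrls

# Aperture-priority auto exposure (value 3 on the C920), needed to
# hold 30 fps instead of letting the camera lengthen the exposure:
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_auto=3

# Turn off autofocus and pin the focus manually:
v4l2-ctl -d /dev/video0 --set-ctrl=focus_auto=0
v4l2-ctl -d /dev/video0 --set-ctrl=focus_absolute=0
```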
Anyway, I’m also considering investing in a decent camcorder, like this. The solution I have works, but a camcorder produces much better video because it has a much larger lens and sensor. That means far less noise in the pixels. And it’s got thoroughly tested software on board to make sure that it never drops frames and that the audio and video are perfectly synced up. It also has a zoom mode which means I could set the camera up further away from me, which would make my face look more normal. Cameras close to their subjects distort faces because they exaggerate roundness. That’s why portrait artists often use 100mm lenses from 10-12 feet away. (100mm is about a 2x magnification, if you’re comparing it to binocular magnification.) It makes the face look more natural to have some distance between you and the face. The magnification is irrelevant except to get quality because in portraiture you want the face, not the background. It’s all about the change in angle between the surface and the camera sensor; the further away one is, the smaller the change in angularity.
But, while that’s not very expensive, it’s not cheap either, so I’m going to see if I can get decent results using my current setup, at least for a while.
God’s blessing to you on this the eleventh day of January in the year of our Lord’s incarnation 2017.
Yesterday I put up a quick video I did about Occam’s Razor:
So far it got 105 views in the less than 24 hours it’s been up compared to 85 views for my quick review of Groucho Marx’s autobiography which has been up for almost four days.
This is something of a testament to the effect of making videos on subjects which people are interested in. I’ve heard some popular YouTubers describe this as the viewers not letting you make videos off of your main subject, which seems unfair to me. People have limited time and watch what they find interesting. I’m certainly no exception to that, and I doubt that more popular YouTubers are either. Not everyone finds everything interesting, and while certainly some people grow to trust a YouTuber and watch whatever they make because that trust has been rewarded in the past, most subscribers subscribe because they like something that a YouTuber does and want to see if he does more of that. Which strikes me as entirely reasonable.
But it also brings up the interesting and complicated question which Russell Newquist and I talked about as the chief one of the age in which distribution is nearly free: discoverability. If a YouTuber makes videos about swords—a fairly popular subject—and one of his videos gets widely shared, that will result in him getting discovered by a fairly large number of people. This has compound effects, because YouTube makes recommendations on the basis of how many people have watched a video before, so more-watched videos tend to get more recommendations, and hence even more views. Good for server caching for fast playback, bad for unknown YouTubers. Be that as it may, it does mean that a great deal of discovery for YouTubers tends to be relatively narrow. It also poses a big problem for people just starting out: with no views on your videos, YouTube (effectively) won’t recommend them, including ranking them in search results.
What eventually got my videos views and hence my channel subscribers was when I met some people on Twitter with similar interests and more twitter followers, and who told their followers about my videos. Technically this can be called promotion, but it’s actually far more organic than that. I made friends based on shared interests, then made something which I thought my friends would find interesting and so I showed it to them, and because of the shared interests they told their twitter followers about it. That got me enough views to start getting youtube recommendations, and my channel has been growing since. As of this writing it has 217 subscribers, which is up by 54 subscribers over the last 30 days. (In the last few months I also did a few hangouts with other youtubers, which helped to gain me subscribers because there was enough overlap in what we did that some of their viewers checked it out and found my stuff interesting too.)
So this does illustrate the importance of sticking with things; every video, or every blog post, or every novel, or every whatever is a lottery ticket, and most pay out at least a very small amount; but small adds up over time and a few big ones can really be significant since viewers and readers (etc) tend to stay. It also illustrates the benefit of making friends. And contrary to the sleazy salesmen depicted in movies, the best thing about this is that friends helping each other is mostly in mutual interest. At least these days, in the age of cheap distribution. Back when having a book printed required an investment of thousands of dollars in a print run and then access to a distribution network that was very expensive to maintain, it was possible for someone to do you a big favor in which they gave you a lot and didn’t get anything commensurate in return. But for someone on twitter, telling their followers about something their followers will probably find interesting reinforces why the followers are following. After all, they’re following in order to come across interesting things. And there’s no point in them promoting something which their followers won’t find interesting, because their followers won’t click through and won’t stick around if they do; the result being that everyone’s interests line up. This sort of promotion is not asking for something like a free commercial from a TV network; it’s much closer to telling a friend about a book they’ll enjoy reading. Only you happen to be the author.
This requires honesty, of course, but the good news is that the incentives are lined up such that it only requires ordinary amounts of honesty and not heroic honesty. Compulsive liars will have problems, but by and large ordinary people with ordinary amounts of honesty and patience will only tell their friends about things their friends are likely to find interesting, and their friends are only likely to tell their followers/readers/whatever about things they will find interesting, and things will work out to everyone’s benefit. The only downside is that this requires time, but that’s not all that big a downside, when you think about it, because the alternative where stardom can happen in an instant is that it doesn’t last. Today’s hot model is replaced by next year’s hot model and forgotten about. And to some degree that works whether you’re talking about cars or people.
God’s blessings to you on this the tenth day of January in the year of our Lord’s incarnation 2017.
I apologize for not posting anything yesterday. A subject hadn’t recommended itself to me in the morning and then things got very hectic. On the plus side—at least if you watch my youtube channel—I interviewed Brian Niemeier last night. I hope to have it edited and published in the next few days. It was a very interesting conversation covering a variety of topics, but generally linked to writing. (Brian is a professional writer of fiction.)
In other news, The Daytime Renegade wrote an interesting blog post about what he calls people dressed in grey. That is, the sort of mandarin class America has saddled itself with, where almost twenty years of schooling has taught the managerial class to be masters at conformity, if at little else. It’s an interesting take on a societal problem which I recommend reading, but there’s one part I wanted to comment on. He talks about how the sort of bad managers most of us have gotten used to are—however imperfect—at least familiar, and therefore after a fashion comfortable, and how when given the opportunity for change most of us end up preferring the devil we know. Not being willing to go quietly into that good night, he says:
Maybe we should support those who want to shake things up, or at the very least think about said changes, before reflexively dismissing them. If we say we really want change and resent these non-entities, maybe we should act like it.
In the limited sense in which he means it, I believe he’s right. But in another sense, I’m not so sure. Americans all (or almost all) grow up with what I can only call a sense of potential greatness. I don’t mean that there’s something special about us as Americans, but rather that we all have the sense that greatness is something actually achievable if only we work hard enough. That should be tempered with the caveat, “and if God smiles on our endeavors,” but, well, there’s a reason why we’re a nation in decline. Anyway, this is something at the back of why Americans do most of the things we do—whether we’re motivated by it or shamed by it and compensating, we have the sense that everyone should be aiming high.
And this sort of makes sense in a nation of immigrants because a nation of immigrants is self-selected from the general pool of humanity to be the especially ambitious ones. But something which befalls all self-selected societies is that however uniform the personalities of the people who self-selected into a group of like-minded individuals, their children will be representative of the variety of humanity. This is why the only narrow societies which last are those that are made up of people who have forsworn having children and live within a larger society where they can recruit similarly unusual people to join their ranks. Basically, monastics. (The Shakers made a go of living in what can be thought of as co-ed monasteries, but for the most part men and women find that if they’re not going to be having children, the opposite sex is far more trouble than it’s worth. If you are going to be having children, then of course the opposite sex is indispensable not just for the engendering of children but the raising of them into healthy adults. It’s all a matter of figuring out which cross is yours to carry and carrying it rather than someone else’s. Like Simon of Cyrene, sometimes you must carry someone else’s cross for a bit, but that’s a temporary thing, and temporary things work very differently than life-long ones.)
So while we were a nation of immigrants and frontiersmen, this idea of greatness was a fairly viable one, even if it was typically more theory than practice. Though considering it more theory than practice may underestimate the difficulty of raising a family in which the children are better off than their parents; in any event it is not the norm for children to be better off than their parents, and in a sense it’s even somewhat unnatural. The nature of begetting is to make something like yourself, and in this sense it is most natural for children to be neither better nor worse off than their parents, but like them. However that goes, it is not statistically normal for children to be better off than their parents, except in the sense of a universally rising standard of living by dint of technological improvement.
And here’s where we come to the Daytime Renegade’s point: if we can’t make things much better, it is often a better bet to try to keep them the same. It’s all too easy to slip up and make things worse; and so I think that many people would prefer the bosses dressed in grey because they seem a good bet for stability. It may well be that the best course for those of us who want to pursue the dreams of greatness that being an American makes unavoidable (the dreams, not the pursuing) is to form small enclaves within society from which we recruit other like-minded people. It’s a good argument in favor of small companies, because exceptions must always be small.
As a sort of post-script, I should add that I don’t mean that the bosses dressed in grey in fact are our best bet for stability. As Chesterton said:
We have remarked that one reason offered for being a progressive is that things naturally tend to grow better. But the only real reason for being a progressive is that things naturally tend to grow worse. The corruption in things is not only the best argument for being progressive; it is also the only argument against being conservative. The conservative theory would really be quite sweeping and unanswerable if it were not for this one fact. But all conservatism is based upon the idea that if you leave things alone you leave them as they are. But you do not. If you leave a thing alone you leave it to a torrent of change. If you leave a white post alone it will soon be a black post. If you particularly want it to be white you must be always painting it again; that is, you must be always having a revolution. Briefly, if you want the old white post you must have a new white post.
God’s blessings to you on this the eighth day of January in the year of our Lord’s incarnation 2017.
I was talking on twitter with someone recently who apparently hadn’t encountered the idea of aeviternity. It’s a scholastic term (scholasticism being most closely associated with Saint Thomas Aquinas) which denotes a created eternity. And eternity refers to, not an infinite amount of time, but timelessness. We live in linear time, that is, we exist in a succession of moments, one after another, which have no access to each other except that each is causally related to the moment directly following. Thus our existence is spread over a collection of moments we have no access to; we are not so much beings as becomings. We are continually coming into being, but at the same time, departing from it; what we are is, at any given moment, a razor-thin slice. Through our memory we remember the past, that is, we re-member it, but this is only calling it to mind and does not make it any more real; it does, however, enable us to forget how our being is scattered over myriad moments we have no control of.
By contrast, eternity is an eternal present, where there is neither coming into being nor fading out of being, but the fullness of being. You can spell it with a capital B, i.e. Being, if you like; but it’s what our memory integrating our past moments merely hints at. Since eternity is not a succession of moments, it does not interact with us as if it were a succession of moments, but rather it interacts with all of our moments simultaneously—from eternity’s perspective. Of course from our perspective, which consists of nothing but moments, eternity interacts with us moment by moment. But it has this advantage over us: since our future is equally present to eternity as our now is and our past is, eternity can foretell our future (where our future is not disturbed by this revelation).
A common analogy for this interaction of eternity with time is an author writing a book. It’s far from perfect because human authors exist in linear time, but they at least exist in a different linear time from the sequence of events which takes place in the book they’re writing. Thus they can put foreshadowing or even prophecy of events to come into the earlier parts of books because they’ve already read the later parts of the book. This is also not a great analogy because characters in a book don’t really have free will—though, I will say, having written several novels by now, it can really feel like they have free will to the author. I’ve had characters decide to do things that I never meant for them to do, and even a few times didn’t want them to do. I don’t mean that this feeling proves that they have free will—I don’t think that they do. I’m just noting it in case I’m somehow wrong, and they in fact do. 🙂
When I had explained this, my interlocutor brought up a curious objection I hadn’t heard before:
[The idea of aeviternity] negates the punishment of Satan somewhat. He is in hell forever, but also enjoying his actions as a sinner forever as well.
Of course my first thought is, “who cares?” I mean, given that separation from God is the worst possible thing, if there were some minor consolation, well, why would one begrudge that to Satan? Doesn’t vindictiveness miss the point? But then a moment’s actual consideration shows that to be anthropomorphising Satan. In particular, thinking of him as being in time. It is invoking the separation between sin and action which is possible to creatures in time but not creatures out of time. Because for those of us in time, sin is said to be pleasurable not in itself, but because of its effects. The pleasurable effects are, themselves, good. The pleasure “of sin” is thus derived from natural goods which were used incorrectly.
This will be easiest to explain by example. Take adultery. When a man cheats on his wife and has sex with another woman, this is both sinful and pleasurable. But it is the cheating which is sinful and the sex which is pleasurable. The sex is, in itself, good, and so in the moment when the sex is happening it is this good which is enjoyed. Sex, whether in wedlock or not, is cooperating with God in the creation of new people, and our bodies know this. Or rather they presume it, because of course we can use contraceptives and lie to our bodies, etc. (Sex during infertile periods is still ordered towards procreation, even if it doesn’t achieve it, and thus is still taking part in the goodness from which the pleasure is naturally derived.) The main problem with this procreation is that the man is in no position to be a good father to any children which he engenders, and further that if he does engender children he will cease to be a good father to the children he has made with his wife—if he ever was a good father to them. There are other damages which it causes, though most of them are dependent on this (whether the people so injured understand it as such or not). The sin consists in the damage caused (or very technically, in the good not participated in), and since no one can take pleasure in harming his children, it is clear, I think, that the pleasure of this sin is not in the sin, but in the goodness which is obtained sinfully. This is possible only because the good obtained and the damage caused are separated by time; even in more direct cases the good obtained and the knowledge of the damage caused are separated by time such that it is possible to enjoy the goodness before receiving the knowledge of the evil caused by taking the good incorrectly.
This is not possible for an aeviternal being; there can be no separation between a good participated in and the damage caused by it such that there is space between them to enable the enjoyment of that good. Aeviternal creatures can sin by looking for good in the wrong place, but unlike temporal sinners they can’t be temporarily mistaken about whether they’ve found it. Satan may sin, but he can’t be sinning by obtaining illicit pleasure. He must be doing it for some other reason than that.
And while concupiscence (basically, inordinate desire) may be why many human beings sin, it cannot be why angels sin. For anyone who is confused at this point how an angel can sin, then, it might help to remember that there are other deadly sins besides lust, greed, and gluttony.
God’s blessings to you on this the seventh day of January in the year of our Lord’s incarnation, 2017.
The world of photography is an interesting one. I’m very much an amateur, but owing to my mother’s much greater interest and budget for pursuing her passion, together with the progress in DSLRs (Digital Single-Lens Reflex cameras), I have a hand-me-down Nikon D300S. It’s an 8-year-old camera which has been superseded by a subsequent model, but it was a great camera when it came out and it still takes better pictures than I’m capable of taking. Also, it takes the same lenses that all other Nikons do, and as a very rough rule of thumb, the lens is more important than the camera body. Some day I’ll probably invest in a newer body, but especially given the amount of time I have to devote to photography, it will undoubtedly be years before I’ve developed my skill to the point where the camera body is holding me back.
As a Christmas present, I was given a 500mm mirror lens. Whereas normal lenses focus light by refraction inside of glass, mirror lenses work like telescopes and focus light by reflection off of the surface of a pair of concave and convex mirrors. (The larger, concave mirror concentrates light onto the smaller, convex mirror, which then straightens it out and directs it at the camera’s sensor.) Oh, and a 500mm lens is a telephoto lens roughly equivalent to the 10x magnification of a telescope or pair of binoculars. The curious thing about mirror lenses is that they are wildly cheaper than glass lenses. The cheapest glass-based 500mm lens that Nikon makes is over 12x more expensive than the lens I was given, and why this is the case is, I think, quite interesting.
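That “10x” figure comes from the common rule of thumb that a roughly 50mm lens on a full-frame camera approximates the unaided eye’s view, so magnification is about focal length divided by 50mm. A quick sketch of the arithmetic (the 50mm reference and the D300S’s 1.5x crop factor are the only inputs):

```python
def binocular_style_magnification(focal_length_mm, reference_mm=50.0):
    """Approximate telescope-style magnification of a camera lens,
    using the rule of thumb that a ~50mm lens on a full-frame body
    roughly matches the unaided eye's field of view."""
    return focal_length_mm / reference_mm

print(binocular_style_magnification(500))        # 10.0
# On a DX (1.5x crop) body like the D300S, a 500mm lens frames the
# scene the way a 750mm lens would on full frame, i.e. roughly 15x:
print(binocular_style_magnification(500 * 1.5))  # 15.0
```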
Refracting light through glass has the problem that different wavelengths of light refract different amounts. This varies with the material, but the problem is that, to oversimplify, red, green, and blue light will actually have different focal points, which results in what is called “chromatic aberration”, or to be less technical, weird, slightly blurry colors. So to combat this, telephoto glass lenses have to be made out of very carefully engineered glasses. I use the plural because, in order to correct the light, telephoto lenses will actually have somewhere between 7 and 14 “elements” (i.e. a telephoto lens is really a system of a bunch of lenses), many of them made of different materials to correct imperfections in what the previous lenses did. As you can imagine, this is expensive, because of the careful engineering, the precision of assembling that many lenses together, and just making and grinding that many pieces of optically clear glass. It’s sort of a miracle that lenses are as cheap as they are. And the good ones run into the tens of thousands of dollars!
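To put rough numbers on this: a simple lens’s focal length depends on the glass’s refractive index via the thin-lens lensmaker’s equation, 1/f = (n − 1)(1/R1 − 1/R2), and the refractive index in turn depends on wavelength (Cauchy’s approximation, n(λ) ≈ A + B/λ²). The coefficients below are roughly those of common BK7 crown glass, and the radii are just picked to land near a 100mm focal length; the numbers are illustrative only:

```python
def refractive_index(wavelength_um, A=1.5046, B=0.00420):
    """Cauchy's approximation n(λ) ≈ A + B/λ²; the default
    coefficients are roughly those of BK7 crown glass."""
    return A + B / wavelength_um ** 2

def thin_lens_focal_length(n, r1_mm, r2_mm):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1)(1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# A symmetric biconvex lens with radii chosen to give f near 100mm:
r1, r2 = 103.0, -103.0
for color, wavelength in [("blue (486nm)", 0.486),
                          ("green (546nm)", 0.546),
                          ("red (656nm)", 0.656)]:
    f = thin_lens_focal_length(refractive_index(wavelength), r1, r2)
    print(f"{color}: f = {f:.2f}mm")
```

Blue focuses about a millimeter and a half short of where red does, for even this single element—which is exactly the smearing that all those extra corrective elements exist to cancel out, and which mirrors avoid entirely.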
In comparison, reflection works the same for all wavelengths of light, so a mirror lens can be made of just two mirrors as I described above. And mirrors are cheaper to make than polished optically clear glass with no internal distortions. So when you put it all together, mirror lenses are wildly cheaper than glass lenses. (Incidentally, you can also probably see why “mirror lens” is a contradiction in terms, and why the technical term for them is catadioptric optical systems.)
At this point you’re probably wondering why, if mirror lenses are so much cheaper to make at the same quality, they aren’t the standard. To some degree I wonder the same thing, but possibly one of the bigger reasons is that people generally don’t like the bokeh of a mirror lens. (Bokeh is basically how the things which are out of focus blur; glass lenses blur things into circles, while mirror lenses blur them into donuts.) There are shots where bokeh isn’t relevant, but it’s relevant in an awful lot of photography, hence the dominance of glass lenses. There’s also the fact that glass lenses come with auto-focus, and diaphragms to change the amount of light allowed onto the sensor (narrower openings give you greater depth-of-field, but require slower shutter speeds, while bigger openings give you a narrower depth of field; which is better depends on what sort of shot you’re going for). I’m not sure that’s inherent to glass lenses, though; I suspect that mirror lenses have basically found the niche of cheap, and as long as they’re going for that, they’ll sell best if they’re very cheap. At 12x cheaper than a glass lens, a 500mm mirror lens makes sense to play around with; if it were only half the cost of a glass lens, I suspect most people would just pay the extra money for the glass lens. Which brings up, once again, the curious topic that all sorts of things are technologically possible and would even make a lot of sense but aren’t done simply because there’s no market for them. Anyway, as I figure out how to get good results from my mirror lens, I’ll post some pictures on the blog.
(At 500mm, even slight shake in the camera makes the images blurry, so a tripod and a shutter-release remote are necessities, but it turns out that the camera shake caused by mirror-slap is a problem too. If you don’t know, an SLR uses a mirror in front of the sensor to allow you to look through the lens in the viewfinder. This mirror must get out of the way during photographs, and so it does, but you can’t move mass around quickly without it applying force to the body of the camera, and the sensor is mounted to the body of the camera, so it shakes. Normally this doesn’t matter, but for telephoto shots, and especially since a mirror lens is very light and thus doesn’t have enough mass to damp down the vibration, this is a real problem. Fortunately, there’s a mode my camera has where you can press the shutter button once to move the SLR mirror out of the way, and a second time to actually take the shot. Unfortunately, there’s nothing you can do about the shake from the shutter moving, at least unless you have a really top-of-the-line DSLR. Come to think of it, this is another reason to prefer glass telephoto lenses. The fact that they weigh anywhere from 5–15 pounds (for the really huge ones) damps vibrations, which will give a clearer shot.)
God’s blessings to you on this the sixth day of January in the year of our Lord’s incarnation, 2017.
There’s a fair amount of unhappiness in youtube-land for reasons relating to people not seeing videos from channels to which they are subscribed. There seem to be two main causes, the first being that youtube doesn’t actually notify you about new videos from channels unless you go to the channel page and click a button to specifically indicate you want to be notified of all of the channel’s videos. As Skallagrim said, I’d have thought that’s what subscribing does, but what do I know. The other issue is that from time to time people discover that they’ve been unsubscribed from channels and have to re-subscribe.
The first one makes a certain amount of sense as being consonant with YouTube’s interests. It is certainly the case that for many of the people I subscribe to I only watch some of their videos; this is especially true of people who put out several videos a week. I imagine it’s generally true; certainly for people with more than tiny subscriber counts the number of views on an average video seems to be somewhere between a tenth and a quarter of their subscriber number. (For videos that are a few days old, which is what notifications are for. Obviously the view numbers keep going up, but in the main not by subscribers being notified.) This wouldn’t be a problem except that the normal human reaction to being notified of things we’re not interested in is that we rapidly stop paying attention. This is why advertising has so little (direct) effect. I can understand why YouTube, which wants people to watch as many YouTube videos as possible, would want to adjust how often they show people notifications of new videos, ideally keeping it only to the new videos they think the subscriber will actually watch. I suspect the optimal hit rate for notifications is somewhere between 60% and 80%: high enough that the notifications are always worth checking out, but taking enough chances that not everything works. So while this is certainly counter-intuitive from a subscriber’s point of view, it does make a certain amount of sense from YouTube’s.
The other issue, though, is very strange. I’ve heard it explained that YouTube wants to get rid of a subscription model and move to a pure recommendation-based system. I haven’t seen the evidence for this, though, and there’s at least some counter-evidence. For one thing, they really encourage content creators (I loathe that term, but it’s the one that’s used) to try to get subscribers. They outright tell you in the first lessons that subscribers are very valuable because they tend to watch to the end, and that the best way to get more subscribers is to couple an on-screen request to subscribe with a verbal request to subscribe. Furthermore, they make resources available to content creators in several tiers, with the bottom tier (which is just a web interface) being available to everyone, and the higher tiers—which include perks like the ability to book studio time at YouTube studios—being available on the basis of the number of subscribers to the channel. Now, in big companies sometimes the left hand doesn’t know what the right hand is doing, and the foot may not even know that there is a right hand, etc. But still, this is certainly counter-evidence to the idea that YouTube wants to get rid of subscriptions entirely. They could just as easily have based the perk tiers on the number of views last month or the number of minutes watched last month. So while I have heard this idea from sources I’m not inclined to dismiss—and as a programmer I have no idea how one would have a bug that unsubscribes people from channels unless the code is very bad—I’m still skeptical and would like to see better evidence that it’s true. Like many things, it will be very interesting to see, a few months from now, what happened over the last few months. News is inherently unreliable, but once the dust has had a chance to settle things are usually clearer.
God’s blessings to you on this the fifth day of January in the year of our Lord’s incarnation 2017.
I’ve been watching a bunch of Camille and Kennerly’s videos. Well, mostly listening, but occasionally watching. Here’s one:
I find it interesting that unlike a lot of twins, they don’t seem to feel a need to distinguish themselves. It’s possible that this is just a show-business gimmick and that in daily life they always make sure to wear their hair differently, or wear differently colored clothes, or something like that. But in the videos they make no effort whatever to indicate which one is Camille and which one is Kennerly. According to the wikipedia page on them (which calls them the Kitt sisters), they seem to collaborate a lot. For example, they both did Tae Kwon Do together (until they gave it up to focus on the harp). Apparently they’re both third degree black belts, which suggests that they’re fairly confident or, what will suffice, goal-oriented. It’s curious to speculate that this might be why they don’t feel much need to differentiate themselves from each other. People with a sense of self don’t usually need to make sure others feel it. Anyway, I’ve got no conclusions about this; obviously I don’t actually know anything about them. I just find it interesting. (Fun fact: I have a friend who has an identical twin brother. At each one’s wedding the other wore a button saying, “Not The Groom”.)
Camille and Kennerly are fond of filming their videos in ruins, which are generally very pretty. Role playing games are very often set in ruins too, though for somewhat different reasons. RPGs need unrealistic arcs for characters to gain power (both heroes and villains, actually). Or more properly, they need unusual ways for characters to gain power. If there were a shop where for a day’s wages you could buy magically unbreakable swords of sharpness which could cleave through stone in a single blow, those swords would be an utterly unremarkable part of the world. Our modern steel knives are really quite amazing by the standards of the bronze age, but we can buy them for a few dollars at the store, and no one writes a story where the hook is that someone has a tempered, high-carbon steel knife. Of course high carbon steel knives still can’t easily cut through stone, so it’s not the same thing, but on the flip side whatever can make a sword unbreakable can make armor unbreakable too. So there must be an explanation for why the hero’s weapons and armor are rare. Their being created by a great sorcerer is a popular enough explanation, but it’s usually a good idea to make the great sorcerers rare, or some explanation must be given for why they aren’t the hero. After all, if they can create the hero’s weapon, they can probably kick the hero’s butt, and consequently the butt of whomever the hero has to kick in order to be the hero. A very practical solution to this problem is for the sorcerer to be dead. And not just technically dead, like a lich, but actually dead, as in, doing as much magic these days as the average doornail.
Plus this means that the hero gets to explore ruins to find his weapons of barely stoppable power (if they were unstoppable, where would the excitement be? and if they were very stoppable, why bother getting them?). And ruins are interesting because they’re so very suggestive. People lived in ruins, once. In fact, much of what makes ruins so interesting is that there were people who took them for granted. It’s a curious pseudo-paradox, but what makes most old things interesting is that long-dead people didn’t find them interesting. This is distinct from something like a monument, which, in general, we find interesting for the same reason that the people who erected it found it interesting, and so we don’t tend to appreciate it for being old nearly as much as we do with antiques. (The Statue of Liberty is impressive because it is large and detailed; we may appreciate the craftsmanship, but not generally the millions of tourists who came before us and appreciated the craftsmanship too.)
Second, yesterday and the day before, I talked about character growth. To continue with that idea, I think that the most interesting character arc to see in adult characters is character revelation, not character growth. That is, we don’t want the character himself to change, we want circumstances to reveal what his character actually is. There are two ways this can happen; one is through action and the other through conversation.
Action is fairly straightforward. Talk is cheap, and many virtues are simply never tried by real life. Thus it is interesting to see circumstances where a character is put in a situation which requires a virtue and he has it. Far more interesting, though, is when a character is put in a situation which requires a balance of virtues, and he has them in a reasonable balance. Merely showing one virtue is what results in flat characters. Thus the hero needs to be brave, and is, and no one much cares. Well, outside of fiction for children. They’re thrilled by simple things, as Chesterton noted. But unfortunately the reaction to adults finding this uninteresting has been to try to make it interesting by having the adult fail at the virtue. Usually not completely, or rather not consistently; it seems like about half the time the hero who failed at first gets a second try and succeeds then. Yay. The other half the time, he fails but the writer is with him and circumstances make him magically succeed anyway. Yay. Of course part of what I don’t like is that these approaches have been done to death, but what I dislike far more is that they all involve the hero failing through a lack of virtue. Moral virtue, I mean. 80s action movies consisted almost entirely of heroes who failed through lack of natural virtue but who then acquired natural virtue. Usually the ability to punch quickly, hard, and in the correct spot. The Karate Kid is perhaps one of the best examples of this, where Daniel gets beaten up, then trains at Karate and manages to win. Though of course there is that kid part. Mr. Miyagi is revealed over time, but he doesn’t really grow; it is his having already grown which allows Daniel to grow.
In terms of adults acquiring natural virtue, that is in part what the Christopher Nolan movie Batman Begins is about. Of course it does—sort of—have moral growth on the part of Bruce Wayne too, but most of that is in the first few minutes. Mostly Bruce Wayne knows that he wants to use his wealth to defeat crime, but he lacks the ability to do so and his transformation is gaining that ability. The Batman comic series which came after Knightfall—oh, right, Knightquest—is about Batman, his spine having been broken by Bane, going on a quest to regain his ability to walk. He isn't acquiring moral virtue, he's acquiring physical virtue. Virtually every episode of MacGyver was about MacGyver acquiring the power necessary to defeat the villains through knowledge, ingenuity, and courage.
The problem with requiring only one virtue of the hero is that a single virtue isn't all that hard. Don't get me wrong—in real life many people fail to be virtuous in situations which require only a single virtue. But that's between them and God. There's no intellectual problem to be solved, and therefore nothing to interest anyone who isn't that person or God. The thing that's really interesting is when virtues must be balanced against each other. When courage must be balanced against compassion, or compassion against justice, or truth against justice; these are always interesting stories, though they often have disappointing endings if the writers are not wise. That's the problem with writing really good stories: only good men can do it. There's an interesting section in the, I think second, preface to The Screwtape Letters, where C.S. Lewis says that the Letters are only half of the book, the other half being the letters from an archangel to the guardian angel of Wormwood's "patient". But, Lewis said, he couldn't possibly write them. The letters of a fallen creature like a devil can admit of faults, but the letters of a perfect creature would have to be faultless, and even if they contained no errors, the beauty of their style would be as integral to their perfection as would the wisdom of the words. A fallen man can reasonably presume to put words into the mouth of a devil, but not into the mouth of an angel. (One reason there's never been a successful novel with Jesus as a character.)
Telling the tale of a good but fallen man is accessible to other fallen men, but while you can fake virtue, you cannot fake knowledge. What the right balance is between two virtues which both have a legitimate claim requires quite a bit of that knowledge we call wisdom. There's really no way around this, and I don't think that the right solution is for fools to use crutches like making the hero vicious; I think the right solution is for writers to do their damnedest to become wise. It will have more benefits besides making their writing better.
And before I go, here's Camille and Kennerly playing Pachelbel's Canon in D:
God's blessings to you on this the third day of January in the year of our Lord's incarnation 2017.
I finally got the year right on the first try today. 🙂
Yesterday I mentioned the idea of character growth in stories. The way I was taking the idea of character growth was that the character himself changes, typically by learning to be more moral than he had been before. It would also be a character arc for a character to degenerate, and those are legitimate stories, both when they are degeneration-and-redemption arcs and when they're simply cautionary tales (e.g. The House of the Rising Sun). However, there are some very significant differences between those and growth, specifically because growth is (or can be, depending on the specifics) a natural thing to our species, while degeneration is not.
Now, it is true that in a proper sense we all grow in every moment, for time means that we become more, one moment at a time. (Not on our own, of course, but we're not alone; as Saint Augustine said, though not precisely in these words, God gathers up the shattered moments of our lives and puts them together into a whole.) But that's a very concrete sort of thing; each word spoken, each bite of food, each breath taken is adding to our being in this sense. Every act of charity is building ourselves, and in a strict sense is therefore changing us (since part of us is coming into being), but it's not changing in the more colloquial sense of becoming harder to recognize. That's not precisely what "change" means colloquially, but it's close enough for the moment. Actually what we mean is probably more like, "no longer corresponds exactly to a description that someone would give". When we talk about things changing, we mean change relative to an abstraction we would give, concentrating on what we would find important. I suspect a really precise definition would be nearly impossible to come up with, but for the most part most of us know what the rest of us mean. 🙂
Anyway, changing in this sense is something that's supposed to happen very slowly for adults, and not generally as a result of particular experiences. We're supposed to be sufficiently well formed by the time we reach adulthood so as to deal with the problems that come along. That doesn't always work, of course, and this is a fallen world, but that's why people get so fixated on flaws in characters. Flaws can be improved, which permits a character to grow during a story, despite the character already being (in theory) a grown human being. I can't help but think that this is roughly a lazy way of achieving a character arc. I'll talk about this more tomorrow after I've had a little time to think about it, but at the very least this is one reason why (I think) young adult fiction is so popular with adults. By placing the character growth where it belongs (in children), it permits stories with better people in them.
This is the script to my video, The Probability of Theology:
As always, it was written (by me) for me to read aloud, but it should be pretty readable.
Today I'm going to be answering a question I got from the nephew of a friend of mine from the local Chesterton society. He's a bright young man who was (I believe) raised without any religion, has been introduced by his aunt to some real, adult theology, and has the intellectual integrity to seriously consider it until he can see how it's either true or definitely wrong. Here's his question:
I am an atheist, mostly due to a few primary objections I have with religion in general, the most prominent of which is that since there are infinite possible theologies, all with the same likelihood of being true, the probability of one single man-made theology such as Christianity, Judaism, or Islam being true is approximately zero. My aunt … is quite convinced that you can prove this idea false [and] we are both hoping that you could make a … video about this on your channel, if possible. We will be eagerly awaiting your response.
This is an excellent example of how it’s possible to ask in a few words a question which takes many pages to answer. I will attempt to be brief, but there’s a lot to unpack here, so buckle up, because it’s going to be quite a ride.
The first thing I think we need to look at is the idea of a man-made theology. And in fact there are two very distinct ideas in this, which we need to address separately. First is the concept of knowledge, which as I've alluded to in previous videos was hacked into an almost unrecognizable form in the Enlightenment. Originally, knowledge meant the conformity of the mind to reality, and though in no small part mediated by the senses, nonetheless, knowledge was understood to be a relatively direct thing. In knowledge, the mind genuinely came in contact with the world. All this changed in the aftermath of Modern Philosophy. It would take too long to give a history of it, so the short version is: blame Descartes and Kant. But the upshot is that the modern conception of knowledge is at best indirect and at worst nothing at all; knowledge—to the degree it's even thought possible—is supposed to consist of creating mental models with one's imagination and trying to find out whether they correlate with reality and if so, to what degree. Thus there is, in the modern concept of "knowledge"—the scare quotes are essential—a complete disconnect between the mind and the world. The mind is trapped inside of the skull and cannot get out; it can only look through some dirty windows and make guesses.
This approach of making guesses and attempting (where practical) to verify them has worked well in the physical sciences, though both the degree to which it has worked and the degree to which this is even how physical science is typically carried on, is somewhat exaggerated. But outside of the physical sciences it has largely proved a failure. One need only look at the “soft sciences” to see that this is often just story-telling that borrows authority by dressing up like physicists. It is an unmitigated disaster if it’s ever applied to ordinary life; to friends and family, to listening to music and telling jokes.
There have been a few theologies which have been man-made in this modern sense; that is, created out of someone’s imagination then compared against reality—the deism that conceives of God as winding a clock and letting it go comes to mind—but this is quite atypical, and really only exists as a degeneration of a previous theology. Most theologies describe reality in the older sense; descriptively, not creatively. It is true that many of them use stories which are not literally true in order to convey important but difficult truths narratively. This is because anyone who wants to be understood—by more than a few gifted philosophers—communicates important truths as narratives. Comparatively speaking, it doesn’t matter at all whether George Washington admitted to cutting down a cherry tree because he could not tell a lie; the story conveys the idea that telling the truth is a better thing than avoiding the consequences of one’s actions, and that lesson is very true. It may well be that there was never a boy who cried “wolf!” for fun until people didn’t believe him; it’s quite possible no one was ever eaten by a wolf because he had sounded too many false alarms to be believed when he sounded a real one. But none of that matters, because it is very true that it is a terrible idea to sound false alarms, and that sounding false alarms makes true alarms less likely to be believed. None of these are theories someone made up then tested; they are knowledge of real life which is communicated through stories which are made up for the sake of clarity. And so it is with the mythology of religions. Even where they are not literally true, they are describing something true which people have encountered. I am not, of course, saying that this is what all religion is, but all religions do have this as an element, because all religions attempt to make deep truths known to simple people. 
So when considering anything from any religion, the first and most important question to ask about it is: what do the adherents mean by it? This is where fundamentalists of all stripes—theistic and atheistic alike—go wrong. They only ever ask what they themselves mean by what the adherents of a religion say.
So this is the first thing we must get clear: theologies are not man-made in the sense of having been created out of a man's imagination. They are not all equally correct, of course; some theologies have far more truth in them than others, but all have some truth, and the real question about any religion is: what are the truths that it is trying to describe? Christianity describes far more truth than Buddhism does, but Buddhism is popular precisely because it does describe some truths: the world is not simply what it appears at first glance; the more we try to live according to the world the more entangled in it we get and the worse off we are; and by learning to be detached from the world we can improve our lot. It is not the case—as many forms of Buddhism hold—that we must reject the world outright; we need a proper relationship to it, which Saint Francis captured in his Canticle of the Sun. The world is our sibling, neither our master nor our slave. And so it goes with all religions: they are all right about at least something, because the only reason any of them existed at all was because somebody discovered something profoundly true about the world. (Pastafarianism being the exception which proves the rule; the flying spaghetti monster is a joke precisely because it was simply made up and does not embody anything true about the world. Even the Invisible Pink Unicorn falls short of this; it embodies the truth that some people don't understand what mysteries actually are.)
The second thing we must address in the man-made part of “man-made theologies” is that—at least according to them—not all theologies are made by man, even in the more ancient sense of originating in human knowledge. The theology of Christianity originated with God, not with man. Christian theology is primarily the self-revelation of God to man. And we have every reason to believe that God would be entirely correct about Himself.
Now of course I can hear a throng of atheists screaming as one, “but how do you know that’s true?!? You didn’t hear God say it, all you’ve heard is people repeating what they say God said.” Actually, these days, they’re more likely to say, “where’s your evidence”, or accuse me of committing logical fallacies that I can’t be committing, and that they can’t even correctly define, but for the sake of time let’s pretend that only top-tier atheists watch my videos.
Oh what a nice world that would be.
Anyway, this gets to a mistake I’ve seen a lot of atheists make: evaluating religious claims on the assumption that they’re false. There’s a related example which is a bit clearer, so I’m going to give that example, then come back and show how the same thing applies here. There are people who question the validity of scripture on the basis of copying errors. “In two thousand years the texts were copied and recopied so many times we have no way of knowing what the originals said,” sums it up enough for the moment. This objection assumes that the rate of copying errors in the gospels is the same as for all other ancient documents. Actually, it also exaggerates the rate of copying errors on ancient documents, but that’s beside the point. It is reasonable enough to assume that the rate of copying errors in Christian scriptures does not greatly differ from that of other documents, if Christianity is false. Well, actually, even that is iffy since a document people hold in special reverence may get special care even if that reverence is mistaken, but forget about that for now. If Christianity is true, the gospels are not an ordinary document. They are an important part of God’s plan of salvation for us, which he entrusted to a church he personally founded and has carefully looked over throughout time, guarding it from error. In that circumstance, it would be absurd to suppose that copying errors would distort the meaning of the text despite the power of God preventing that from happening. Thus it is clear that the rate of copying errors is not a question which is independent of the truth of Christianity, and therefore a presumed rate of copying errors cannot be used as an argument against the truth of Christianity precisely because whatever rate is presumed will contain in it an assumption of the truth or falsehood of Christianity. 
(I should point out that what we would expect—and what the Church claims—is that God would safeguard the meaningful truth of revelation, not the insignificant details. That is, we would expect that if Christianity was true God would keep significant errors from predominating, not that he would turn scribes into photocopying machines—within Christianity God places a great deal of emphasis on free will and human cooperation. And as it happens, we have some very old copies of the gospels and while there have been the occasional copying errors, none of them have amounted to a doctrinally significant difference. Make of that what you will.)
So bringing this example back to the original point, whether Christian theology is man-made is not a question which is independent of the question of whether Christianity is true. If Christianity is false, then its theology is man-made. But if Christianity is true, then its theology is not man-made, but revealed. And as I said, while men often make mistakes, we can trust God to accurately describe himself.
So, to recap: theology is descriptive, not constructive, and in historically-based religions like Christianity, theology is revealed, not man-made. So now we can move on to the question of probabilities.
First, there is the issue that probability says nothing about one-offs. I covered this in my video The Problem with Probability, so I won't go into that here, but since I've heard the objection that I only discussed the frequentist interpretation of probability, I will mention that if you want to go with a Bayesian interpretation of probability, all you're saying by assigning a probability of zero to an event is that it's not part of your model. Now in the question we're addressing, it's not a probability of zero that's being assigned but rather "approximately zero". But the thing about the Bayesian interpretation is that probability is at least as much a description of the statistician as it is of the real world. It is, essentially, a way to quantify how little you know. Now, sometimes you have to make decisions and take actions with whatever knowledge you have at the moment, but often the correct thing to do is: learn. There is no interpretation of statistics which turns ignorance into knowledge, or in Bayesian terms, the way to get better priors is outside of the scope of Bayesian statistics.
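Incidentally, the point that a zero prior is a statement about one's model rather than about the world can be seen directly in Bayes' theorem itself: a hypothesis assigned a prior of exactly zero can never be revised upward, no matter how strong the evidence. Here is a minimal sketch of that arithmetic (the numbers are made up purely for illustration, not taken from anything above):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# with P(E) expanded over H and not-H.
# All numbers below are hypothetical, chosen only to illustrate.

def posterior(prior, likelihood, likelihood_given_not):
    """Posterior probability of hypothesis H given evidence E."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# A hypothesis with any nonzero prior can be revised upward by evidence:
print(posterior(0.01, 0.9, 0.1))    # rises well above the 0.01 prior

# But a prior of exactly zero never moves, however strong the evidence:
print(posterior(0.0, 0.99, 0.001))  # stays at 0.0
```

Which is just the formal way of saying what the text says: assigning zero declares the hypothesis outside your model, and no amount of data can bring it back in. Only changing the model—getting better priors—can do that, and that operation lies outside Bayesian statistics proper.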
But more importantly, this atomization of theologies is very misleading. Among all of the possible theologies, many of them have a great deal in common. They do not have everything important in common, obviously. There are some very substantial differences between, say, Greek Orthodoxy and, say, Theravada Buddhism. But for all their differences, Islam, Christianity, Judaism, Baha'i, Sikhism, and several others have quite a lot in common. They all worship the uncreated creator of all that is. That's actually a pretty big thing, which is to say that it's very important. An uncreated creator who transcends time and space has all sorts of implications for the coherence of contingent beings within time (such as ourselves), the existence of a transcendent meaning to life, and lots of other things. This is in contrast to things that don't matter much, like whether there is an angel who has a scroll with all of the names of the blessed written on it. Whether there is one or isn't doesn't really matter very much. Grouping those two distinctions together as if they were of equal importance is highly misleading. Now, granted, there are all too many people who take a tribalistic, all-or-nothing approach to religion where the key thing is to pick the right group to formally pledge allegiance to. But one of the things which follows from belief in an uncreated creator is that this primitive, tribalistic approach is a human invention which is not an accurate description of reality. An uncreated creator cannot need us nor benefit from us, so he must have created us for our own sake, and so our salvation must be primarily not about something superficial like a formal pledge of allegiance, but about truth and goodness. And by goodness I mean conformity of action to the fullness of truth.
For more on this, I'll link my video debunking Believe-or-Burn, but for the moment, suffice it to say that being fairly correct, theologically, must be of some greater-than-zero value under any coherent theology with an uncreated creator behind all that exists. The correct approach is not to give up if you can't be completely correct. It's to try to be as correct as possible.
And in any event there is no default position. Atheism is as much a philosophical position as any theology is. Well, that’s not strictly true. There is a default position, which is that there is Nothing. But that’s clearly wrong, there is something, so the default position is out. And while in a dictionary sense atheism is nothing but the disbelief in God—or for the moment it doesn’t even matter if you’re too intellectually weak for that and want to define atheism as the mere lack of a belief in God—western atheists tend to believe in the existence of matter, at least, as well as immaterial things like forces and laws of nature. So each atheist has a belief system, even if some refuse to admit it. The only way to not have a belief system is to give yourself a lobotomy. But until you do, since you have a belief system, it is as capable of being wrong as any theology is. And does it seem plausible that, if Christianity is true, if the version of Christianity you’ve encountered is a little inaccurate, you’ll be better off as an atheist?
I think that nearly answers the question, but there is a final topic which I think may answer an implicit part of the question: while there are infinitely many theologies which are theoretically possible, in practice there haven't actually been all that many. This is something I'm going to cover more in my upcoming video series which surveys the world's religions, but while there certainly are more than just one religion in the world, there aren't nearly as many as many modern western people seem to think that there are. Usually large numbers are arrived at by counting every pagan pantheon as being a different religion, but this is not in fact how the pagans themselves thought of things. I don't have the time to go into it—I addressed this somewhat in my video on fundamentalists, and will address it more in the future—but actual pagans thought of themselves as sharing a religion, just having some different gods and some different names for the same gods, just like French and American zoos don't have all the same animals, and don't use the same names for the animals they do have in common. But each will certainly recognize the other as a zoo. American zookeepers do not disbelieve in the French "python réticulé" (reticulated python).
And so it goes with other differences; those who worship nature worship the same nature. All sun worshippers worship the same sun. Those who believe in an uncreated creator recognize that others who believe in an uncreated creator are talking about the same thing, and generally hold that he can be known to some degree through examination of his creation, so they will tend to understand others who believe in an uncreated creator as having stumbled into the same basic knowledge.
And this explains why minor religions tend to die out as small groups make contact with larger groups. Those religions which are more thoroughly developed—which present more truth in an intelligible way—will appeal to those who on their own only developed a very rudimentary recognition and expression of those truths. There has been conversion by the sword in history, though it is actually most associated with Islam and often exaggerated in other faiths, but it is not generally necessary. When people come into contact with a religion which has a fuller expression of truth than the one they grew up with, they usually want to convert, because people naturally want the truth, and are attracted to intelligible expressions of it. And the key point is that the expressions of truth in better developed religions are intelligible precisely because they are fuller expressions of truths already found in one's native religion. And this is so because religions are founded for a reason. I know there's a common myth that religion was invented as bad science, usually something to the effect that people invented gods of nature in order to make nature seem intelligible. The fact that this is exactly backwards from what personifying inanimate objects does should be a sufficient clue that this is not the origin of religion. Think about the objects in your own life that people personify: "the printer is touchy", "the traffic light hates me", "don't let the plant hear that I said it's doing well because it will die on me out of spite". Mostly this is just giving voice to our bewilderment at how these things work, but if this affects how mysterious the things are in any way, it makes them more mysterious, not less. If you think the printer is picky about what it prints, you'll wonder at great length what it is about your documents it disapproves of. If you think of it as a mere machine, you turn it off, take it apart, put it back together again, and turn it on. Or you call a repairman.
But if you personify it, you’ll wrap your life up in the mystery of its preferences. And anyone with any great experience of human beings has seen this. Especially if you’ve ever been the repairman to whom the printer is just a machine.
It’s also, incidentally, why many atheists have developed a shadowy, mysterious thing called “religion” which desires to subjugate humanity.
People personify what they don’t understand to communicate that it is mysterious, not to make it less mysterious. And they do this because people—having free will—are inherently and irreducibly mysterious.
So if you look past the mere surface differences, you will find that religions have generally originated for very similar reasons. So much so that more than a few people who haven’t studied the world’s religions enough are tempted to claim that there is only one universal religion to all of mankind with all differences being mere surface appearance. That’s not true either, but that this mistake is possible at all, is significant. Religions are founded for a reason, and that’s why there aren’t infinitely many of them.
Until next time, may you hit everything you aim at.
God’s blessings to you on this the second day of January in the year of our Lord’s incarnation 2017.
I watched part of an interesting discussion of why other people than the lady making the video liked the movie Rogue One:
There was a point she came back to several times which I found interesting: that the characters had no arc. I don't know whether she has a rule that all good fiction has character arcs where the characters grow and develop; certainly that would rule out short fiction, which is usually about revealing interesting things about a character, not about developing that character in the sense of the character himself changing. (And, as a side note, I generally contend that in structure movies are far more like short stories than they are like novels, but that's a conversation for another day.) There's also the possibility that her problem with Rogue One was something else—such as boring characters with no personalities—and she is merely describing that as them not having an arc. I've read the advice from more than one screenwriter that feedback from non-writers tends to be correct about where the problems are and wrong about what the solutions are. This is, I think, a more general issue: people who are complaining about something will often reach for the most ready description to hand which might fit even a little bit, rather than give a truly accurate complaint. It results in a lot of complaints which at the same time—but in different senses—are correct, but also wrong. This is especially true whenever someone's real complaint is that another person displayed a group identity the speaker doesn't share, which made the speaker feel out-group; the stated complaint is then either that display itself, or that the person in question didn't do enough to make the speaker feel in-group anyway. Such complaints are almost never of that form, I think in part because people would feel childish saying, "I felt excluded because there are things we don't have in common." Unfortunately there's no way to say that which isn't childish, because it is a childish feeling.
Best to control one’s feelings (or rather, how much one pays attention to them and how one acts or doesn’t based on them), but at the very least accurately describing problems would be a step forward. But alas we live in a very fallen world and so such feelings are usually placed on the other person (“she’s trying too hard”, “she doesn’t care about her appearance”, “no one needs an 80# bow”, etc.) in order to preserve the dignity of the person acting in an undignified manner.
Anyway, if we assume for the moment that what might be imprecise passing comments describing a feeling are in fact carefully thought out critiques of story construction, Ms. Nicholson's comments bring up an interesting question about whether and to what degree we really want the characters in a story to change (or "grow," which usually means, "become morally better"). Certainly we don't want all of the characters to change. This is especially the case in one of my favorite genres—detective fiction. I want the detective unraveling the mystery, not personally growing. If he still has significant amounts of growth to do, he shouldn't be the detective at all. The same is true of wise old men. I want them to be wise and old, not learning and growing. Some people in life should be growing, and ideally they should be young. Others should have already grown. Star Wars wouldn't have been half as good if Obi-Wan had lots of room for character growth. If he had, there would have been no one to make Han Solo and Luke grow up. I don't know whether stories need characters who are effectively—if not chronologically—children in them, but they certainly need some adults in them, or the children in them have no way of growing.
This only scratches the surface of the topic, which I will certainly revisit later as time permits.
God’s blessings to you on this the first day of January, in the year of our Lord’s incarnation 2017.
For those of you who celebrate it, happy new year. Last night my wife and I played Oregon Trail: The Card Game. I loved the game I played as a child on Apple II computers in school. This was nothing like that, except that a few words were similar. The tagline on the box is “you died of dysentery,” which probably should have been a warning. The game consists of very little except getting calamity cards and trying to not die, which one often fails at. It’s a bit like starting out with a banker’s health and a farmer’s wealth, and being extraordinarily unlucky. If you possibly can, I recommend you avoid this game. We didn’t even have much fun complaining about it once it was clear that the game was no fun. The original game, by contrast, was a lot of fun. I don’t know how well it would hold up these days, and I can’t help but wonder if there might be a remake where the hunting scenes are 3D rendered first-person shooters. One can hope, anyway.
Russell Newquist’s Lyonesse project was funded, which was very cool to see. I hope it does well, because it would be wonderful for there to be a viable market in short fiction.
Since it’s the season for it, I hope you have a wonderful year in 2017.