Science vs. Christianity

Having seen enough Bishop Barron videos where he says that polls show that people who leave the church frequently cite the “conflict” between “science” and “religion”, I’m thinking that I might make some videos on my YouTube channel about how there is no conflict between science and (orthodox) Christianity. This is a multi-faceted topic, though, so it will probably result in multiple videos.

Sub-topics include:

Most People Don’t Know Science: Probably the biggest issue is that a lot of the “science” which is supposed to contradict religion isn’t actually scientific, or isn’t modern science. (E.g. the idea that human beings don’t have a common ancestor isn’t scientific, or, if you can find someone who does propose that, has no evidence to back it up.) The scientific theory of evolution is widely misunderstood, as is quite a lot of physics, too. Etc.

Many People Don’t Know Christianity: A lot of people don’t know what would and would not contradict Christianity, anyway. E.g. the universe being 12+ billion years old doesn’t contradict Christianity, evolution doesn’t contradict Christianity, etc.

Models vs. Reality: Science, or what most people think of as science, consists in the making of models, not in describing reality as it is. Models can be useful while being inaccurate, just as the Ptolemaic model of the solar system was quite useful for predicting the movement of the planets. A model having predictive power about a very narrow aspect of reality doesn’t tell you much about reality as it is.

Science Isn’t Engineering: A lot of people confuse science with engineering and think that engineering means that “science” must be “true”. In fact, only a very narrow bit of theoretical science actually precedes engineering rather than explaining it after the fact, and where it does precede engineering, the parts which are demonstrated to be “true” are things like the theory of electromagnetism.

Epistemology: The nature of the scientific method is that all it produces are hypotheses, and these hypotheses have often turned out to be wrong in the past. Even scientific experiments are often badly misinterpreted until later, when they are retroactively interpreted correctly (or, more precisely, in line with our current preferred interpretation).

Rationality: Science presumes the world to be rationally intelligible, which only makes sense within a framework where human beings are made in the image of a creator who created the world according to a rational purpose. (This includes Judaism and Islam, and some variants of philosophical theism, e.g. Platonism or Aristotelianism.)

Religion as Bad Science: A lot of people think of religions in general as merely being bad science, i.e. as a way to explain the world around them. As if, to quote my friend Andrew, the Romans worshiped Janus the god of doors to explain why doors existed.

No One Actually Believes Science Anyway: E.g. neck-down Darwinism (“from the neck down, our bodies are evolved; from the neck up, all men are created equal”). No one holds that physical determinism (assumed in science) means that rapists shouldn’t be punished because they couldn’t help it. The point is that people know how limited science really is when they care to.

Scientific Methodological Assumptions Aren’t Science: Science is well known for what is sometimes called “methodological naturalism,” i.e. the assumption that what it studies is purely natural. This doesn’t mean that the whole world is natural, only that science is only looking at the parts that are. In like manner, the assumption that the laws of physics are the same throughout space and time is purely an assumption, with no proof. And the assumption that what we can see (detect) is all that exists does not make it so, especially given that we are clearly at the wrong scale to see what electrons do in low-energy environments.

What other subjects should be included? (Please comment below.)


Update: I got a request on Twitter to go into depth on theistic evolution.

Models vs. Reality

A little-known change in the attempt to learn about nature happened, in a sense, several hundred years ago: Natural Philosophy was replaced with mathematical Science, and the attempt to know what nature is gave way to mathematical models of nature which can predict measurable aspects of it.

The difference between these two things is that a model may, possibly, tell you about what the underlying reality is. On the other hand, it may not. Models can be accurate entirely by accident.

Trivial examples are always easier, so consider the following model of how often Richard Dawkins is eaten by an alligator, where f is the number of times he’s been eaten by an alligator and t is the time (in the sense of precise date):

f(t) = 0

This model is accurate to more than 200 decimal places. If you conclude from this model that Richard Dawkins is alligator-proof and throw him in an alligator pit to enjoy the spectacle of frustrated alligators, you will be very sadly mistaken. But it’s so accurate!
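To see just how hollow that kind of accuracy can be, here is a minimal sketch in Python (the yearly observation dates are made up, though the zero counts themselves are not in dispute) of scoring the model against the historical record:

    # A minimal sketch: score the model f(t) = 0 against "observations".
    # The observation dates are arbitrary; the counts are
    # uncontroversially zero at every one of them.
    def model(t):
        """Predicted number of times Dawkins has been eaten by time t."""
        return 0

    observations = {year: 0 for year in range(1941, 2017)}

    max_error = max(abs(model(year) - count)
                    for year, count in observations.items())
    print(max_error)  # 0: a "perfect" fit

The fit is perfect, and it still tells you nothing about whether it is safe to introduce the man to alligators.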

This is of course a silly example; no one would ever confuse this model or its accuracy for a full description of reality. However, there’s a very interesting story from astronomy where people did exactly that.

I’m speaking, in particular, of the long-running Ptolemaic model of the planets and its eventual overthrow by the Copernican model. The Ptolemaic model was the one where the earth was at the center of the solar system and the planets traveled in cycles and epicycles around it. The thing about this model is that it was actually extremely accurate in its predictions.

(If you’re wondering how it could be so accurate while being so wrong, the thing you have to realize is that General Relativity actually means that it’s just fine for the earth to be taken as the center of the coordinate system. The math just gets harder for some calculations, and this is basically what happened: the Ptolemaic model was a close approximation of that more complicated math.)
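To make that concrete, here is a minimal sketch in Python, assuming circular, coplanar orbits (a serious simplification), showing that a deferent-plus-epicycle construction makes exactly the same predictions as the heliocentric model. The geocentric model here is nothing but the heliocentric one with the coordinate origin moved to the earth:

    import numpy as np

    # Radii in AU, periods in years; rounded values, purely for illustration.
    t = np.linspace(0, 10, 1000)                  # time in years
    earth = 1.00 * np.exp(2j * np.pi * t / 1.00)  # heliocentric Earth (complex plane)
    mars = 1.52 * np.exp(2j * np.pi * t / 1.88)   # heliocentric Mars

    observed = mars - earth  # geocentric Mars: what an astronomer measures

    deferent = -earth        # the Sun's apparent circle around the earth
    epicycle = mars          # a circle riding on the deferent
    predicted = deferent + epicycle

    print(np.allclose(observed, predicted))  # True: identical predictions

No observation of Mars’s position can distinguish the two, which is roughly the situation astronomers were in for centuries.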

However, there is a yet simpler example of incorrect models producing correct results: just consider, for two minutes, that for most of history everyone believed that the Sun orbited the earth, and yet they still had highly accurate calendars. Despite not thinking of a year as the time the earth takes to orbit the Sun, they nevertheless recorded the years and predicted the solstices with great precision.

Incidentally, if you’re interested in a full history of the shift from the Earth being the center of the solar system to the Sun being at the center, be sure to read the extraordinarily good series of articles by TOF, The Great Ptolemaic Smackdown (originally published in Analog magazine). It is well worth your time.

Who Works For Bad Scientists?

One of the latest scandals in science is the shoddy research of Brian Wansink, with new scrutiny of his papers resulting in many of them being revised or withdrawn. Apparently this started in the aftermath of a post on his blog titled The Grad Student Who Never Said No. I bring this up because it ties into a previous post of mine, The Fundamental Principle of Science. But the entire blog post is very interesting, so let’s look at it.

A PhD student from a Turkish university called to interview to be a visiting scholar for 6 months.  Her dissertation was on a topic that was only indirectly related to our Lab’s mission, but she really wanted to come and we had the room, so I said “Yes.”

So far, no problems.

When she arrived, I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others).

Right away you have a problem with whatever they’re trying to find out, because there’s no realistic way to charge some people half as much as others. Most people find out what a buffet costs before ordering it, and this influences their choice of whether to eat there at all. Further, many people will be regular customers and thus already know what the buffet costs.

 I said, “This cost us a lot of time and our own money to collect.  There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.”

This is a really bad sign. If your experiment fails, you’re not supposed to torture the data until it tells you what you want to hear. This is called p-hacking, and it results in an awful lot of garbage. Virtually all data sets have some correlations in them by sheer chance; finding them is simply misleading.

I had three ideas for potential Plan B, C, & D directions (since Plan A had failed).  I told her what the analyses should be and what the tables should look like.  I then asked her if she wanted to do them.

Granted, this isn’t quite as bad as the approach where one uses a computer to generate hundreds or thousands of “hypotheses” and tests them against the data set to find one that will stick to the wall, but it’s a bad sign. This is such a bad practice, in fact, that some scientific journals are requiring hypotheses to be pre-registered to prevent people from doing this.
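To see why journals bother with pre-registration, here is a minimal sketch in Python (entirely hypothetical data; nothing here comes from the actual study) of how testing many hypotheses against pure noise reliably produces “findings”:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 100 imaginary diners, randomly assigned to half-price or full-price.
    n_diners, n_hypotheses = 100, 50
    half_price = rng.integers(0, 2, n_diners).astype(bool)

    significant = 0
    for _ in range(n_hypotheses):
        # An outcome measure which, by construction, has nothing to do
        # with what anyone was charged.
        outcome = rng.normal(size=n_diners)
        _, p = stats.ttest_ind(outcome[half_price], outcome[~half_price])
        if p < 0.05:
            significant += 1

    print(significant)  # typically 2 or 3 "publishable" results from nothing

With fifty tries at the 0.05 level you expect two or three spurious hits even when there is, by construction, nothing whatsoever to find.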

Every day she came back with puzzling new results,

This is a very bad sign. It’s a huge red flashing neon sign that your data set has a lot of randomness in it.

and every day we would scratch our heads, ask “Why,” and come up with another way to reanalyze the data with yet another set of plausible hypotheses.

Now this is just p-hacking, except without a computer. You could call it artisanal, hand-crafted p-hacking.

Eventually we started discovering solutions that held up regardless of how we pressure-tested them.

I’m actually kind of curious what he means here by “pressure-testing”. Actual pressure-testing is putting fluid into pipes at significantly higher pressures than the working system will be under to ensure that all of the joints are strong and have no leaks. Given that the data set has already been collected, I can’t think of an analog to that. Perhaps he meant throwing out the best data points to see if the rest still correlated?

I outlined the first paper, and she wrote it up, and every day for a month I told her how to rewrite it and she did.

What was going on that 30 rewrites were necessary? Perhaps this grad student just sucked at writing, but at some point one really should pick an idea and stick with it. I really doubt that the thirtieth rewrite was much better than the 23rd or the 17th.

This happened with a second paper, and then a third paper

So we’re up to about 90 rewrites in 3 months? That’s a lot of rewrites for papers about something as weak as tracking the behavior of people eating at a randomly discounted Italian buffet.

(which was one that was based on her own discovery while digging through the data).

This is pure snark, but I can’t resist: she learned to p-hack from the master.

At about this same time, I had a second data set that I thought was really cool that I had offered up to one of my paid post-docs (again, the woman from Turkey was an unpaid visitor).  In the same way this same post-doc had originally declined to analyze the buffet data because they weren’t sure where it would be published, they also declined this second data set.  They said it would have been a “side project” for them they didn’t have the personal time to do it.

It’s really interesting that we have no idea what the post-doc actually said. It’s possible that the post-doc was just being polite and came up with an excuse to avoid p-hacking. It’s also possible that the post-doc said that this seemed like p-hacking and Wansink interpreted that as trying to cover for not thinking that it was prestigious enough work.

But it’s also possible that someone who wanted to work with an apparent p-hacker like Wansink actually was concerned only with how prestigious a journal the p-hacked results could be published in.

Boundaries.  I get it.

I strongly suspect that he doesn’t get boundaries. Most people who have to talk about them this way—saying that they respect other people’s boundaries—don’t. At least in the cases I’ve seen. People who respect boundaries do so as a matter of course. It’s a bit like how people who don’t stab others in the face with spoons don’t talk about not stabbing people; they just don’t do it.

Six months after arriving, the Turkish woman had one paper accepted, two papers with revision requests, and two others that were submitted (and were eventually accepted — see below).

P-hacking is far more productive than having to find real results. That’s why it’s so tempting.

In comparison, the post-doc left after a year (and also left academia) with 1/4 as much published (per month) as the Turkish woman.

Right, but how good was it?

 I think the person was also resentful of the Turkish woman.

This could mean several things, depending on what the person actually said and meant when they declined to p-hack the buffet data set. If it was purely self-aggrandizement, then this becomes a valid criticism. If they were actually demurring from p-hacking, then the resentment makes a lot of sense, since the Turkish woman made them look bad for standing on principle while others transgressed and didn’t get caught.

Balance and time management has its place, but sometimes it’s best to “Make hay while the sun shines.”

This part is certainly true. It’s rarely a good idea to disdain low-hanging fruit. Unless it’s wax fruit, not real fruit.

About the third time a mentor hears a person say “No” to a research opportunity, a productive mentor will almost instantly give it to a second researcher — along with the next opportunity.

I really wonder what he thinks that the word “mentor” means. Whatever it is, it clearly doesn’t involve actually mentoring anyone. But don’t just pass over this, look at how glaring it is. The first half of the sentence, “About the third time a mentor hears a person say ‘No’ to a research opportunity”, is the setup for explaining how the mentor will then help the person to learn. Instead, the next three words are almost a contradiction in terms: “a productive mentor.” To mentor someone is to put time and energy into helping them learn. It’s the opposite of being productive. Craftsmen are productive. Mentors are supposed to be instructive. And then the rest of the sentence can be translated as, “…will just give up on the person.”

I think the word he was looking for was “foreman,” not “mentor”.

This second researcher might be less experienced, less well trained, from a lesser school, or from a lesser background, but at least they don’t waste time by saying “No” or “I’ll think about it.”  They unhesitatingly say “Yes” — even if they are not exactly sure how they’ll do it.

Yeah, the word he was looking for was “foreman”.

Facebook, Twitter, Game of Thrones, Starbucks, spinning class . . . time management is tough when there’s so many other shiny alternatives that are more inviting than writing the background section or doing the analyses for a paper.

I’ve got to say: if the reason that the post-doc wouldn’t p-hack the buffet data set was because they were too busy checking Facebook and Twitter, watching Game of Thrones, sipping chai lattes at Starbucks, and going to spinning class… that was actually a better use of time.

Yet most of us will never remember what we read or posted on Twitter or Facebook yesterday.  In the meantime, this Turkish woman’s resume will always have the five papers below.

Ironically, Wansink is likely to remember this blog post for a long time, since it drew attention to his p-hacking. And at this point, there’s a lot of it.

Good Morning December 14, 2016

Good morning on this the fourteenth day of December in the year of our Lord 2016.

I’ve been reading TOF’s The Great Ptolemaic Smackdown. It’s excellent, and you really should read it.

And then by complete coincidence, I happened across this video, which is a description of a classroom lesson on scientific experimentation:

It looks like a really good lesson for students in a science class, btw. If you don’t have time to watch the video, the professor seals a rectangle of aluminum foil inside of a block of paraffin which is very slightly larger than the foil, then asks his class to make observations and try to figure out what it is without doing destructive experiments. Especially in the version where the students can only look at it and ask the professor to turn it around and shine lights on it, this actually does a good job of representing the difficulty often faced in science: for one reason or another you can’t do the experiment which would actually tell you what you have, so you have to be crafty and clever to try to find substitute experiments. This applies to a great degree in the problem of astronomy, especially in the seventeenth century, when celestial objects were so remote and barely observable.

It’s also interesting to hear about the mistakes which the students make along the way, which to some degree mirror the progression we saw in astronomy, where assumptions always start out simple and familiar, then are disproved by experimental evidence.

Which actually brings up a really interesting topic I don’t have time to get into, about The Scientific Method versus actual science. The very short version is that half of the scientific method as typically described comes from Modern Philosophy, where knowledge was reconceptualized[1] from being descriptive of the real world to being creative and limited to the inside of the human head. This corresponds roughly to the steps of the scientific method which are about forming a hypothesis and, to a lesser degree, devising tests. Since, as Chesterton said, the modern age is the age of publicity, Modern Philosophers have spread the idea that this is really the key to Science brand natural investigation (please read that like “Kleenex brand facial tissue”), when in fact it may be one of the less important parts.

Theories, history has shown us, are a dime a dozen.[2] The hard part is getting good experimental data, because, as history has also shown us, experimental data is easy to come by if you don’t care whether it means anything. Experimental data where you’ve tested for the existence of variables and then controlled for them is very difficult indeed (there’s a sketch of the problem after the quote below). It’s also often quite expensive. But I’d argue that it’s the experimentalists who really give science its glory. For some reason the theoretical physicists seem to have better publicity than the experimental physicists do, possibly because what they do is far less messy, and therefore sexier, than what the experimentalists do, and can therefore be packaged for retail much more easily. But a great many exciting theories have turned out to be at best mediocre fiction, while experiments are often inconclusive but always true. And when theory and experiment combine, the experiments are amazing.

In theory the experimentalists require the theorists to give them some idea what to experiment upon, but I’m not sure how true this is in practice. No one in the seventeenth century needed a theory in order to point a telescope at Jupiter and make detailed observations about its moons. As the saying goes:

In theory, there’s no difference between theory and practice. In practice, there often is.
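Here is the promised sketch of the controlled-variables problem, in Python with entirely made-up data: a hidden variable drives both the “treatment” and the “outcome”, so the raw correlation looks like a discovery until you test for the hidden variable and control for it.

    import numpy as np

    rng = np.random.default_rng(1)

    n = 10_000
    hidden = rng.normal(size=n)            # the variable nobody tested for
    treatment = hidden + rng.normal(size=n)
    outcome = hidden + rng.normal(size=n)  # depends on hidden, NOT on treatment

    print(np.corrcoef(treatment, outcome)[0, 1])  # ~0.5: looks like a result

    # Control for the hidden variable by holding it (nearly) constant:
    held = np.abs(hidden) < 0.05
    print(np.corrcoef(treatment[held], outcome[held])[0, 1])  # ~0: it vanishes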

God bless you.

 

1. Sorry, I just couldn’t resist this absurd, modern word to describe the absurd, modern project.

2. This is actually overstating the case; they cost about a scientist’s yearly salary divided by how often he publishes (with a hypothetical $100,000 salary and four papers a year, that works out to $25,000 a paper). This actually makes them fairly expensive unless you consider them to be side-effects of some other job, such as teaching students or increasing a university’s prestige to bring in donations.

Dualists Usually Aren’t Quite Dual

Dualists are people who believe that reality as we experience it is fundamentally different from reality as it actually is, and that we cannot know reality as it actually is. In the West this was popular before Socrates and after Descartes. A familiar example of modern dualists is Materialists, who believe that there is nothing besides matter and therefore that there is no such thing as free will. When it comes to actually living, they basically just shrug their shoulders and make decisions anyway, because we experience free will, even if in reality it’s a complete illusion. (They’re wrong about this, of course, but I’m not going to bother with any further disclaimers to that effect; I trust you, dear reader, to supply the rest yourself.)

And there’s a curious thing about dualists: they usually believe that there is some link between reality as it actually is and the world of perception which we (supposedly) can’t escape. Most of them are more 1.95ists than true dualists. What’s significant about this is that this link is a source of power: it’s possible to use this link to modify the underlying reality in ways that affect the world of perception.

To keep with the example of Materialists (which New Atheists almost universally are), they believe that things like love, loyalty, curiosity, wonder, awe, compassion and so on are all epiphenomena (that is, accidental manifestations, analogous to symptoms) of base instincts which we have because they resulted in our ancestors producing us. This is not to say that the epiphenomena are themselves necessarily of any value, but the instincts which produce them must have been of some evolutionary benefit. To try to interact with these epiphenomena may be unavoidable, but it is not very likely to accomplish much, since none of them are real. By contrast, there does exist an ability to probe reality. It is limited, difficult, and tentative; and its name is science. The point is not, of course, to improve the evolutionary benefit. Just as evolution does not “care” about the individual, the individual does not care about evolution. The point is to understand the mechanisms which evolution produced in order to change those mechanisms into ones which are more convenient. A good example of this is anti-depressant medications. (Or perhaps it would be if anti-depressants were more effective.)

Even those who suffer greatly from clinical depression are often hesitant to take anti-depressant medications, because psychoactive drugs are terrifying. There is of course the possibility that they will fail in dangerous ways—there are anti-depressants whose common side-effects include frequent thoughts of suicide—but the biggest fear is that the anti-depressants would work, but turn the person into somebody else. This is not really a concern for the Materialist, because who he is is a mere epiphenomenon, and its only value is in being happy. If the medication changes him, all that was lost was an illusion anyway. (I should note that when this is practical rather than theoretical, Materialists may well be hesitant because they know on some level that Materialism isn’t true.)

This is why Materialism goes so well with recreational drug use. Caution is of course still warranted for the heavy-duty drugs like cocaine and heroin which can destroy one’s life, but Materialism is very compatible with non-addictive drugs like marijuana, LSD, and endorphin stimulation through promiscuous sex. The main reason to avoid these safer drugs is that they falsify one’s sense of the world and take one further away from reality, and hence from the true source of happiness. They’re not just wastes of time but counter-productive, because they distort one’s view of reality and pull one further away from the truth. Of course a single, low-dosage usage of such drugs is not likely to have much of an effect (ignoring quality-control issues), and I don’t mean to suggest that a person who’s had a single puff on a reefer stick is doomed and bereft of hope. But this is the effect of such drugs: they are chemical lies which take a person further away from sanity and therefore from happiness.

The situation is radically different for a Materialist, however. First, they start off massively disconnected from reality, so within their worldview their connection to reality (more or less) can’t be diminished. Second, there is no real happiness which is possible, so there is nothing to lose by telling oneself pleasing lies. Happiness is itself just an accidental manifestation of underlying chemical processes in the brain, and all high-level explanations which we have for happiness are illusions, so messing with the chemistry of the brain to produce happiness is not only more reliable, it is in fact more real. Not that being more real is a virtue for the Materialist, but the argument—that using drugs recreationally divorces the user from reality—will not even make sense to a Materialist.

This is, incidentally, why one runs into the oddity of the evangelical atheist. If God is dead then clearly nothing matters. Even if nothing matters in theory, however, human beings don’t cease to be human beings merely because they believe they are only flesh robots, and, as Aristotle observed, all men desire to be happy. The significant difference in effectiveness between trying to achieve happiness by dealing with the world according to its epiphenomena (duty, honor, morality, etc.) and dealing with it as it is (scientific fun drugs) is so stark that they are moved by pity to spread the word: live according to the latter and not the former.

Science, Magic, and Technology

There is an interesting observation made by Arthur C. Clarke (it is his third law):

Any sufficiently advanced technology is indistinguishable from magic.

This has been applied many times in science fiction to produce some form of techno-mage, but what’s more interesting is that the origins of modern science were in magic, specifically in astrology and alchemy. The goals of science were the same as those of magic: to control the natural elements. If you really study the history, it’s not even clear how to distinguish modern science from renaissance magic; in many ways the only real dividing line is success. There is some truth to the idea that alchemists whose techniques worked got called chemists, to distinguish them from the alchemists whose techniques didn’t work. This is by no means a complete picture, because there was also, at the same time, natural philosophy, i.e. the desire to learn how the natural world worked purely for the sake of knowledge.

Natural philosophy has existed since the Greeks—Aristotle did no little amount of it—but it especially flourished in the renaissance with the development of optics which allowed for the creation of microscopes and telescopes. Probably more than anything else this marked the shift towards what we think of as modern science. As Edward Feser argues, the hallmark of modern science is viewing nature as a hostile witness. The ancients and medievals looked at the empirical evidence which nature gave, but they tended to trust it. Modern science tends to assume that nature is a liar. Probably more than any other single cause, being able to look at nature on scales we could not before and seeing that it looked different resulted in this shift towards distrusting nature. Some people feel a sense of wonder when looking through a microscope, but many people feel a sense of betrayal.

Another significant historical event was when the makers of technology started using the knowledge of natural philosophy in order to make better technology. This may sound strange to modern ears, accustomed as we are to thinking of technology as applied science, but in fact technological advancements very rarely rely on any new information about how the world works which was gained by disinterested researchers publishing their results for the sake of curiosity. Technology mostly advances by trial and error modifying existing technology, and especially by trial and error on materials and techniques. In fact, no small amount of science has consisted of investigating why technology actually works.

But sometimes technology really does follow fairly directly from basic scientific research. One of the great examples is radio waves, which were discovered because Maxwell’s theory of electromagnetism predicted that they existed. Another of the great examples of technology following from basic scientific research is the atomic bomb.

I suspect that these, as well as other, lesser examples, helped to solidify the identification between science and engineering. And I don’t want to overstate the distinction. In some cases the views of the natural world brought about by science have certainly helped engineers to direct their investigations into suitable materials and designs for the technology they were creating. But counterfactuals are very difficult to consider well, and it is by no means clear that the material properties which were discovered by direct investigation, but also explained by scientific theories, would not have been discovered at roughly the same time, or perhaps only a little later.

However that would have gone, the association between science and technology is presently a very strong one, and I think that this is why Dawkinsian atheists so often announce an almost religious devotion to science. I’ve seen it expressed like this (not an exact quote):

Science has given us cars and smartphones, so I’m going to side with science.

Anyone who actually knows anything about orthodox Christianity knows that there is no antipathy between science and religion. Though it is important to note that I mean this in the sense of there being no antipathy between natural philosophy and religion. In this sense, Christianity has been a great friend to science, providing no small amount of the faith that the universe operates according to laws (i.e. that, being a creature, it has a nature) and that these laws are intelligible to human reason. Moreover, the world having been created by God, it is interesting, since to learn about creation is to learn about the creator. It is no accident that plenty of scientists have been Catholic priests. The world is a profoundly interesting place to a Christian.

But there is a sense in which the Dawkinsian atheist is right, because he doesn’t really care about natural philosophy. What he cares about is technology, and when he talks about science he really means the scheme of conquering nature and bending it to our will. And this is something towards which Christianity is sometimes antagonistic. Not really to the practice, since technology is mostly a legitimate extension of our role as stewards of nature, but to the spirit. And it is antagonistic because this spirit is an idolatrous one.

The great difference between pagan worship and Christian worship is that Christian worship is an act of love, whereas pagan worship is a trade. Pagan deities gain something by being worshiped, and are willing to give benefits in exchange for it. This relationship is utterly obvious in both the Iliad and the Odyssey, but it is actually nowhere so obvious as when the Israelites worshiped the golden calf. For whatever reason this often seems to be taken to be a reversion to polytheism, where the golden calf is an alternative god to Yahweh. That is not what it is at all. If you read the text, after the Israelites gave up their gold and it was cast into the shape of a calf, they worshiped it and said:

Here is your God, O Israel, who brought you out of the land of Egypt.

The Israelites were not worshiping some new god, or some old god, but the same god who brought them out of Egypt. The problem was that they were worshiping him not as God, but as a god. That is, they were not entering into a covenant with him, but were trying to control him in order to get as much as they could out of him. Granted, as in all of paganism, it was control through flattery, but at its root flattery has no regard for its object.

And this is the spirit which I think we can see in the people who say, “Science has given me the car and the iPhone, I will stick with Science.” They are pledging their allegiance to their god, because they hope it will continue to give them favors. And it is their intention to make sacrifices at its altars. This is where scientists become the (mostly unwitting) high priests of this religion; the masses do not ordinarily make sacrifices themselves, but give the sacrifices to the priests of the god to make sacrifice on their behalf. And so scientists are given money (i.e. funded) as an offering.

To be clear, this is not the primary reason science gets funded. Dawkinsian atheists (and other worshipers of science) tend to be less powerful (and less numerous) than they imagine themselves. Still, this is, I think, how they view the world, except without the appropriate terminology because they look down on all other pagans.

And I think that it is largely this, and not the silly battles with fundamentalists and other young-earth creationists, that results in their perception of a war between science and religion. There were other historical reasons for the belief in a war between science and religion, but I am coming to suspect that they had their historical time and then waned, and that Dawkinsian atheism is resurrecting the battle for other reasons. They are idolaters, and they know Christianity is not friendly to idolatry. And idolaters always fear what will happen if their god does not get what it wants.

Authoritative Authorities

In my previous post I mentioned that people will use science’s scheme of self-correction as a support of its authority, and that this is utterly confused. In fact, here’s what I said (yes, I’m quoting myself. Think of it as saving you the trouble of clicking on the link):

(It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.)

Today is that day.

Before getting into it, I need to qualify what I mean by an authority. There are multiple meanings to the word “authority”, and the most common one—someone such as a king, judge, etc. who should be obeyed and who enforces their will through force—isn’t relevant. I’m using the term “authority” as in the material logical fallacy, “appeal to authority”. Unfortunately, appeal to authority is often misunderstood, because it would be much better named “appeal to a false authority”. A true authority, in the logical sense, is anyone or anything which can be relied upon to only say things which are true. If you actually have one of those, it is not a fallacy to appeal to their statements.

A logical authority may of course remain silent; its defining characteristic is that if it says something, you may rely on the truth of what it says. These are of course hard to come by in this world of sin and woe, and you will find absolutely none which are universally agreed upon. That doesn’t mean anything, since you will find absolutely nothing which is universally agreed upon.

To give some examples of real authorities, Catholics hold that the bible, sacred tradition, the magisterium, and the pope when speaking ex cathedra are all authorities. God has guaranteed us that they will not lead us astray. Muslims hold that the Quran is an authority.

Not everyone believes that there exist any authorities at all, of course. Buddhists don’t, and neither (ostensibly) do Modern philosophers. If you insist on distinguishing Modern philosophers from Postmodernists, then Postmodernists don’t believe there exist any authorities either. In general, anyone who holds that truth is completely inaccessible will not believe in any authorities.

So we come to Science, and the curious thing is that science explicitly disqualifies itself as an authority. Everything in science is officially a guess which has not yet been disproved by the attempts so far made to disprove it. And yet many people want to treat science as an authority. In some cases this is sheer cognitive dissonance, where people pick what they say on the basis of which argument they’re having at the moment, but in other cases there is an interesting sort of reasoning which is employed.

Both forms tend to piggy-back the bottom 99% of science on the success of (parts of) physics, chemistry, and to a lesser extent some parts of biology. This especially goes together with conflating science and engineering.

The first and stronger sort of argument used is that science may always be subject to disproof, but that after a sufficient amount of testing, any such disproof will be at the margins and not in the main part. The primary example of this is the move from Newtonian mechanics to Relativity, where the two differ by less than our ability to measure at most energies and speeds we normally interact with.

The problem with this argument is that there is relatively little of science to which it actually applies. Physics is rare in that most physicists study a relatively small set of phenomena. There are fewer than two hundred types of atoms, fewer than two dozen elementary particles, and apparently no more than three forces. So thousands of physicists all work on basically the same stuff. (It’s not literally the same stuff, of course; physicists carve out niches, but these are small niches, and they often rely on the more common things in a way where they would be likely to detect errors.) This is simply not true of other fields in science. You can study polar bears all your life and never do anything which tells you about the mating habits of zebra fish. You can study glucose metabolism for five decades straight without even incidentally learning anything about how DNA replication is error-checked. You can spend ten lifetimes in psychology doing studies where you ask people to rate perceptions on a scale of 1 to 10 and never learn anything about anything at all.

The result is that in most fields outside of physics and (to a lesser extent) chemistry, theories are not being constantly tested and re-tested by most people’s work. In some of the fluffier fields like human nutrition and psychology—where controlled experiments are basically unethical and in some cases may not even be theoretically possible—they may not even be tested the first time.

The second and weaker argument is that science is the best that we have, and so we must treat it as an authority. This is frequently just outright wrong. In fields where performing controlled experiments is unethical, science consists of untested guesses, made by people with a strong financial and reputational incentive to make interesting guesses, and often with a strong financial incentive to make guesses which justify policies the government would like to enact anyway. But that only counts if the financial incentive is provided by tobacco companies or weight-loss companies. Other financial incentives leave people morally pure, because most scientists have them.

Actually, there is a third argument too, though it’s almost never stated explicitly. A lot of people work hard in science and believe that they’re doing good work, so it would be rude to doubt them. This is, basically, a form of weaponized politeness. The sad truth is that lots of scientists aren’t more honest than other people, lots of scientists aren’t smart, and lots of scientists are wasting their time. It’s mean to say that. Sometimes the truth hurts. It always sucks when honesty and politeness are enemies, but if a person prefers politeness to honesty, he’s a liar, and there’s nothing to be said to him except that he’s working to make the world a worse place and should stop.

Ultimately, of course, the real reason science is held to be an authority—as opposed to a potential source of truth which must be evaluated on a case-by-case basis because a scientific theory is only as good as the evidence behind it—is because this is a cultural thing. People need authorities in order to feel secure, and if they won’t believe in the right authorities they will believe in the wrong authorities.

The Fundamental Principle of Science

In the philosophy of science, there have been many attempts to define what it is that distinguishes science from other attempts to know the world. There’s an interesting section of The Trouble With Physics where Lee Smolin discusses Paul Feyerabend’s work, and summarizes it something like this (I don’t have time to find the exact quote):

It can’t be that science has a method, because witch doctors have a method. It can’t be that science uses math, because astrologists use math. So what is it that distinguishes science?

Neither, so far as I know, came up with an answer. There is a hint in Smolin’s book that there is no answer; that each advance in science comes about because there is a weirdo whose approach to science works to make the discovery of the moment, but doesn’t work generally. This would explain why so few scientists tend to be really productive over their entire lives; usually they have a few productive years—maybe a productive decade or so—and then tend to fade: they spend a few years discovering everything that their personal quirks are suited to, then when it is exhausted, return to the normal state of discovering nothing.

There is something common, however, that one will find in all of these quirks, if one looks back over history. This is especially true if you go back far enough to notice how much of science turned out to be wrong. (It is a matter for another day that people take being wrong as one of the strengths of science, ignoring that a thing which may be wrong cannot be a logical authority, by definition.) There is one principle that you will find consistent between everything which has ever been science, right or wrong. That principle is: assume anything necessary in order to publish.

To see why, we must consider the evolutionary pressure that applies to science. For whatever reason, people rarely take the theory of evolution seriously. They consider it as a scientific doctrine, or an organizing principle for archaeology, or a creation myth or any number of other things, but very rarely as an operating force in the world. Yet selective pressures abound and have their effects.

Occasionally people will ask the question about what influence on science the academic doctrine of publish-or-perish has, and they are right to ask this, but it is really just a subset of a larger selective pressure: science consists exclusively of what is published. If someone were to do extensive research in his basement and discover all the secrets of the cosmos, but never tell anyone, none of his knowledge would be a part of Science. In the same sense that Chesterton said that government is force, Science is publication.

The big problem with trying to uncover the secrets of the cosmos is that they are well covered. Coming to know how the universe works is very difficult. It’s often much easier if one makes simplifying assumptions which get rid of variables or eliminate the need for expensive experiments because cheap ones will suffice. The problem is that an assumption being convenient is not a justification for making that assumption. But since science consists of what is published, there is a huge selective pressure on people to make these convenient assumptions. This may or may not influence any particular scientist, but the scientists who are willing to make these sorts of unjustified simplifying assumptions will certainly be included in Science, while the scientists who take the principled position and refuse to make unjustified assumptions may well not be, because they didn’t have results to publish. In fields where real results are difficult to come by, it’s entirely possible that this could come to dominate what is published. And as the pitchmen say, but wait, there’s more!

People who are willing to make unjustified assumptions tend to have some personality traits more than others. Arrogance and a certain sort of defensiveness tends to work well with making assumptions one can’t justify, since those discourage requests for justifications. It also works synergistically with making quick judgments based on superficial criteria (like holding unrelated unpopular opinions), since that tends to insulate the unjustified assumer from having to confront contrary arguments and evidence. And here we come to the question of evolution, because new scientists will have to get along with these people, since the scientists who have published largely serve as the gate-keepers of who gets to join science. What sort of candidates will these people accept? Who will find scientists like this tolerable?

In subsequent generations, there will be the further question of who will find tolerable the people who found the makers of unjustified assumptions tolerable. And so it will go through subsequent generations, each new generation being a mix of all sorts, but the presence of the makers of unjustified assumptions, and of those whom they trained, will act as a selective pressure even on those who don’t work with them directly, since they still must be able to work with these people as colleagues and in many cases submit journal articles to them for peer review, etc.

For any institution, if you want to know how it tends to go wrong, a good place to start is to ask what selective pressures are affecting it.