Programmed to Overeat?

One of the causes that you will see put forward as to why so many people are overweight, fat, or obese is that we evolved for a food-scarce environment and now live in a food-rich environment, so our natural inclination to eat everything available and store fat for the lean times is no longer adaptive. This hypothesis has a natural conclusion about how to not get fat: limit what you eat and always be hungry. To lose weight, limit what you eat even more and always be hungrier until you’re thin, then just limit what you eat and always be hungry.

Like the idea that carbs are more filling than fats because carbs have 4 Calories per gram while fats have 9 Calories per gram, so carbs take up more room in your stomach, this is one of those ideas it’s strange anyone says with a straight face, at least if they’ve spent more than a few days living as a human being. Because if you have any experience of living as a human being, this is just obviously false. And there’s a super-obvious thing which disproves both: dessert.

Observe any normal people eating dinner and they will eat until they are full and don’t want to eat anymore. Then bring out some tasty treats like pie, ice cream, etc. and suddenly they have room in their stomach after all. This simple experiment, which virtually all people have participated in themselves in one form or another, irrefutably disproves both of those hypotheses.

You can also easily see this if you have any experience of animals which actually do eat all food that’s available until they physically can’t, such as the cichlid fish called the Oscar.

By Tiia Monto – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=53098090

Feed oscars feeder fish and they will keep eating them until there is no more room left in their stomach, throat, and mouth. They literally stop eating only once their mouths are full and nothing more fits. They then swim around with several tails sticking out of their mouths until their stomach makes room and everything can move down.

That’s what a hunger signal with no feedback mechanism to stop because the creature evolved in a food-scarce environment looks like. (Oscars who are fed a lot grow extremely rapidly and very large.)

But you can also disprove this from the other direction. Yes, lots of people are fat, but they’re not fat-mouse fat.

Fat mouse was created by lesioning the part of the brain responsible for satiety. Fat mouse then kept eating and eating, without stopping, rapidly ballooning until it was nearly spherical. (Incidentally, are we to believe that normal mice have a satiety limit to their eating because mice evolved in a food-rich environment? When you look at field mice, is “abundant food” really the first thing that comes to mind?)

Now, it’s possible to attempt to save the food-scarce-environment hypothesis by modifying it: saying that we’re genetically predisposed to being fat and unhealthy because that worked out well in a food-scarce environment, but only up to a point, for whatever reason. This suffers from being arbitrary, but then it is the prerogative of evolution to be arbitrary (obviously nothing needs to make any sense if you’re an atheist, but for the rest of us the influence of fallen angels on evolution, within the limits God permits them to work, has the same result; that’s one of the things that confuses atheists).

Of course, the problem with even this modified hypothesis is that there are plenty of naturally thin people and if you talk to them they’re not constantly hungry and denying themselves the food needed for satiety at every moment.

There’s also the problem of the timing of the rapid fattening of the population. Yes, it took place at a time when food was abundant, but there have been sections of the population for whom food is abundant as far back as there is recorded history. They were not all obese. More recently, in the 1800s, upper middle class and rich people could easily afford enough food to get fat on, yet they were not all obese. And in much of history, when food was scarce, people’s preferences in women were for plump women. Just look up paintings of Venus:

Which makes sense in that context—when people mostly don’t have enough food, women who manage to be plump in this environment are healthier, can have more children, survive the rigors of pregnancy, take care of the children, etc. Hence when painting a goddess of beauty, they painted her to the standards of their day and made her plump. But they didn’t make her obese.

To be fair, you can find the Venus of Willendorf:

By Oke – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1152966

But this dates to a time (30,000 years ago) when food was supposed to be scarce and—so the hypothesis goes—no one actually looked like that because they were in the environment their constant food cravings were adapted to.

Ultimately, what I find so odd about the programmed-to-overeat hypothesis of modern obesity is not that it’s obviously false. It’s that it’s obviously false and the people who push it have clearly never considered the evidence against it.

You don’t see this with, for example, Young Earth Creationists. They have explanations for why radio-isotope dating doesn’t work and how geology is all wrong and fossil records are being misinterpreted because the dinosaurs were all animals that didn’t make it onto the Ark, etc. etc. etc. Say what you want about Young Earth Creationists, they at least take their ideas seriously.

As far as I can tell, the people saying that we’re programmed to overeat are just saying things.

Stupid Things Said About Saturated Fat

Dietary saturated fat has been blamed for all manner of health problems, but the evidence for this ranges from low quality to complete garbage. That the evidence quality is low is not surprising, since there are good reasons to believe that saturated fat is healthy for humans.

The first and most important reason is that saturated fat is the kind of fat that humans make when they have extra carbohydrates or protein around and need to store the energy. And that’s going to be a large fraction of the carbs we eat. When I say a large fraction, I do mean large. A 200 pound athlete can store about 500 grams of glycogen in his muscles and another 100 grams in his liver. (And less than 10 g of glucose in his bloodstream, which tends to be nearly constant anyway, so we can ignore it.) But the thing is: these stores are very rarely empty, especially if one regularly eats carbs. And if you’re following any kind of normal American diet, you’re eating a lot of carbs. If you follow the USDA food pyramid and eat a 2000 Calorie diet (the Calorie requirement of a small person who isn’t very active) you’re probably eating at least 250 grams of carbohydrate per day. So your glycogen stores will start off mostly full, and while your body will try to dispose of the glucose by using it in your muscles, your brain, etc., it can’t do that nearly as quickly as the glucose needs to be cleared, so the overwhelming majority of it will get converted to fat. (This is less true for people who spend most of the day moving, such as people who work some kinds of manual labor jobs, but that’s not typical. And humans love to rest after eating.)
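The arithmetic here is worth sketching out. A minimal illustration, using the rough figures from the text (500 g muscle glycogen, 100 g liver glycogen, 250 g of carbs per day); the 80% starting fill level is an assumption for demonstration, not a measured figure:

```python
# Rough glycogen-capacity arithmetic using the illustrative numbers
# from the text. The starting fill level is assumed, not measured.

MUSCLE_GLYCOGEN_G = 500   # ~capacity for a 200 lb athlete
LIVER_GLYCOGEN_G = 100

daily_carbs_g = 250       # USDA-style 2000 Calorie diet
fill_fraction = 0.80      # assumed: stores start mostly full

total_capacity_g = MUSCLE_GLYCOGEN_G + LIVER_GLYCOGEN_G
room_left_g = total_capacity_g * (1 - fill_fraction)
overflow_g = max(0.0, daily_carbs_g - room_left_g)

print(f"Room left in glycogen stores: {room_left_g:.0f} g")
print(f"Carbs that can't be stored as glycogen: {overflow_g:.0f} g")
```

With stores 80% full, only 120 of the day’s 250 grams of carbs fit into glycogen; the remaining 130 grams must be burned immediately or converted to fat, and the body can only burn so much so fast.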

(Whether a large fraction of the protein one eats gets converted to fat depends on whether one gets an unusually high amount of protein in one’s diet. Most people can’t use more than about 1 gram of protein per pound of lean bodymass per day, but most people also eat less than that in protein.)

Oh, I should mention that it’s actually very normal for the human body to use fat as fuel. When insulin isn’t high (it rises to make cells take up glucose and, in the process, suppresses fat cells from putting fatty acids into the blood), our fat cells regularly break fat (which is insoluble in water) down into fatty acids (which are soluble in water) and put them into our bloodstream so we have a constant, dependable supply of energy. Like anything which can be said about biology in human language this is a massive oversimplification, but at its level of generality it’s correct and important.

Anyway, the primary output of de novo lipogenesis (making fat from scratch) is palmitic acid, which is a saturated fatty acid. This can be converted into other fatty acids such as stearic acid (another saturated fat) and oleic acid (an omega-9 unsaturated fat) and many others, but human beings, and mammals in general, tend to leave it as palmitic acid, then take three of them and attach them to a glycerol backbone, making a fat (a triglyceride). We do this because it allows the fat to be stored very compactly without needing any water around it, which is extremely weight-efficient. This is important for animals because moving weight requires energy, so the lighter we can store the energy the more efficient it is. Saturated fats pack together especially well, which is why animals with very high energy needs like mammals prefer them.

So believing that saturated fat is bad for us requires believing that our bodies turn most of the carbohydrates we take in into something that’s bad for us.

Incidentally, this all happens in the liver. Since fats are insoluble in water (they don’t form a solution; this is why oil floats on top of water rather than dissolving in it like salt), the liver can’t get these fats to the rest of the body by just sticking them in the bloodstream. That would be a disaster. So it creates transport crates for the fats called “lipoproteins”. These start out as VLDL (Very Low Density Lipoprotein). They’re very low density because they’re crammed full of fats, which are less dense than water. These transport crates are then dumped into the bloodstream, where the proteins on the outside enable them to interact nicely with the water in our blood and move about without causing problems. The transport crates do something which can be analogized to docking at cells, and the cells then take some of the fats inside. As this process happens the lipoproteins shrink and their density goes up, until they eventually turn into plain old LDL (low density lipoprotein). Interestingly, High Density Lipoprotein (HDL) is not lipoprotein that has become depleted; instead, HDL is made empty in the liver and sent out to collect cholesterol and related molecules.

Interestingly, dietary fats get transported by a different system. The intestines create a similar but larger kind of lipoprotein transport crate called a chylomicron. These shuttle dietary fats from the intestines through the blood to our cells.

In both cases, you can see that the idea that “saturated fat congeals and clogs your arteries” is nonsense, even apart from the fact that saturated fat congeals at room temperature, not at body temperature. The most liquid fat in the world would still be terrible to have loose in one’s blood, since it doesn’t mix with water, which is exactly why the human body doesn’t do that. The fats don’t matter at all while they’re being transported.

Where they can matter is once they’ve been added to fat cells and the fat cells break them down into fatty acids and put those into the blood. (This is a tightly regulated process to make sure that energy is available at all times.) That’s because these fatty acids, in addition to being an energy source, are also precursors for hormones and can interact with various receptors. (This is where things like omega-3 versus omega-6 come in.)

This is also why you see claims that eating large amounts of saturated fat induces insulin resistance in rats. Now, before we proceed, it’s important to remember that, while animal models can be useful, rats aren’t humans and their exact dietary requirements are a bad guide to the ideal diet for human beings. You shouldn’t feed bears, pigs, dogs, or cats like rats for optimal health, and there’s no reason to believe you should feed us like rats (or bears, pigs, dogs, or cats), either. (You can’t feed us like cows; we’re not built to get a meaningful number of Calories from fibrous plant matter.) So these studies on rats are, at best, interesting. That very large grain of salt taken, what the studies find is that various kinds of pro-inflammatory fats, when taken in large quantities, promote inflammation which can induce insulin resistance. The study I linked to found that the effect went away for saturated fat if the rats were fed about 10% of their fat as fish oil, which is rich in omega-3 fatty acids like DHA and EPA, which are anti-inflammatory. That is, it’s all about the net effect of the entire diet, not one particular component, and not about the fact that the fats are fats. (Again, in rats; how pro- or anti-inflammatory the various fatty acids are in humans may be similar or very different on a per-molecule basis. And there’s probably significant individual variation, too.)

Inflammation, by the way, is not at all bad. Inflammation is a very useful reaction; it’s how our bodies deal with damage such as clotting in a cut, immune responses to foreign invaders, muscle damage from exercise, and so forth. The problem is when pro-inflammation factors dominate to produce more inflammation than is necessary for the circumstances. Quite a few problems happen when a balanced system becomes imbalanced.

Incidentally, while palmitic acid (the dominant fatty acid in mammal-produced fat) seems to be mildly pro-inflammatory, omega-6 fatty acids may be significantly more pro-inflammatory. And they’ve been making up a much larger proportion of western diets—especially of American diets—since the introduction of corn oil and other heavily processed seed oils.

The Science of Test Driving a Potential Spouse

I recently saw someone try to support the idea of “test driving” a potential marriage partner prior to getting married in order to ensure that they are “sexually compatible”, and then in the ensuing discussion I was told to look up the research on “the wide variability in female sexual responsiveness due to both psychological and anatomical reasons”. My understanding is that what research in this “field” exists doesn’t support the importance of “test driving” a potential marriage partner, but that’s irrelevant because there simply can’t be any good science on this subject. We can tell that by the simple expedient of asking what kinds of experiments could get us the data we want, and discovering that it’s not possible to do them.

So, what kind of experiment would show us that “there’s a wide variability in female sexual responsiveness due to both psychological and anatomical reasons”? Clearly, we’ll need to have a large number of females copulate with a wide variety of partners and measure their responsiveness during each copulation, then compare the things to which each female maximally responded in order to see how big the range is. You can’t leave off any of these things; if you only study a few women, you won’t have the statistical power to conclude anything. If you leave off the wide variety of partners, then you can’t differentiate between there being a wide variety in what women respond to versus there simply being a wide variety in the degree to which women respond at all. If you leave off measuring, instead relying on surveys, you can’t differentiate between there being a wide variety in what women respond to and there being a wide variety in how women describe their response.

This experiment is both impractical and impossible; let’s discuss the impracticality of it first. One obvious problem is recruitment: there are very few people willing to copulate with a large number of strangers in a laboratory, covered in probes to measure responsiveness, and observed by experts, on command. Also, since you will have to pay the participants and this amounts to prostitution, there are relatively few places you can legally conduct this experiment, especially since bringing in the variety of women you want may well count as sex trafficking, doubly so because of the use of blindfolds to eliminate attractiveness as a confounding factor when measuring the effect of physical variations of anatomy. Moreover, getting this approved by an IRB (ethics committee) is pretty dicey. Never say never, of course.

But supposing one were to manage to work all of these practicalities out and conduct the experiment, it would not produce any data relevant to real life, because people’s enjoyment and satisfaction in copulation is largely determined by their relationship to the person with whom they are copulating. Married people frequently report greater enjoyment of sex after five or ten years of marriage than right at the beginning, and it is impossible to have your experimental subjects form real, years-long relationships with each of the many partners with whom they will be paired. If nothing else, human beings don’t live that long, but repeated pair bonding is also well known to weaken subsequent bonds, especially without time between them. Plus people don’t form real bonds on command.

It is thus impossible, even in theory, to scientifically study the kinds of things which might support the idiotic idea of “test driving” a potential spouse. And bad science is worse than no science.


I should probably mention that the idea of test driving a spouse, in addition to being immoral, is also idiotic because it’s predicated on two premises, both of which are false:

  1. people can’t learn
  2. people don’t change

Young people are told to not pay too much attention to the looks of a potential husband or wife because looks are only skin deep and virtue, character, and personality matter far more. This is all quite true, but it’s also the case that selecting a husband or wife based on their looks is futile anyway because their looks will change as they age. You can find this with any celebrity who is in their sixties—just look at pictures of them from the various decades and while they are recognizable, they will be quite different. And celebrities tend to be selected for being people who change the least as they age.

In the same way, people’s tastes and preferences change. Women’s bodies change after pregnancy and childbirth. Quite apart from the immorality of the thing, the idea that finding people who happen to match each other in their sexual enjoyments will be conducive to lasting happiness is simply unrelated to reality. Everyone must learn and adapt. There are no exceptions to that in this world.

Science Is Only As Good As Its Instruments

There’s a popular myth that science progressed because of a revolution in the way people approach knowledge. This is a self-serving myth created in the 1600s by people who wanted to claim special authority. This is why they came up with the marketing term “The Enlightenment” for their philosophical movement. If you look into the actual history of science, scientific discoveries pretty much invariably arose a little while after the technology which enabled them was invented.

There is a reason the heliocentric (really, Copernican) theory of the solar system did not win out until a little while after the invention of the telescope. There is a reason why we did not get cell biology until a little while after the invention of the microscope. If you dig into the history of specific scientific discoveries, it’s often the case that several people discovered the same thing within months of each other, and the person we credit with the discovery is generally the one who published first.

This is not to say that there are never flashes of insight or brilliance. So far as I can tell, Einstein’s insight that E=mc² was not merely the obvious result of measuring things with new technology. That said, it would almost certainly never have happened had radioactivity not been discovered a decade earlier, which would not have been possible without certain kinds of photographic plates existing. (Radioactivity was discovered by Henri Becquerel in the 1890s when, while studying phosphorescence, he found that uranium exposed photographic plates wrapped in black paper, showing that something else was going on besides phosphorescence; many further experiments, notably by Marie Curie, had clarified what was going on by the time Einstein was working on mass-energy equivalence.)

Which gets me to modern science: there are a lot of things that we want to know for which the relevant technology does not seem to exist. Nutrition is a great example. What are the long-term health effects of eating a high carbohydrate diet? How can you find out? It’s not practical to run a double-blind study with one group of people eating a high-carbohydrate diet and the other eating a low-carbohydrate diet for fifty years. The current approach follows the fundamental principle of science (assume anything necessary in order to publish): study people for a few weeks or months, and measure various things assumed to correlate perfectly with good long-term health. That works for publishing, but if you’re more concerned with accuracy to reality than with being able to publish (and if you’re reading the study, you have to be), it’s more than a little iffy. Then if you spend any effort digging into the actual specifics, let’s just say that the top ten best reasons to believe these assumptions are all related to group-think and the unpleasantness of being in the out-group. (Please actually look into this for yourself; the only way you’ll know what happens when you don’t just take people’s word for something is by not taking their word for it, including mine.)

And the problem with science, at the moment, when it comes to things like long-term nutrition is that the technology to actually study it just isn’t there. (It’s different if you want to study things like acute stimulation of muscle protein synthesis related to protein intake timing or the effects on serum glucose in the six hours following a meal.) And when the technology to do good studies doesn’t exist, all that can exist are bad studies.

This is why we see so much of people turning to anecdotes and wild speculation. Anecdotes and wild speculation are at least as good as bad studies. And when the bad studies tend to cluster (for obvious reasons unrelated to truth) on answers that seem very likely to be wrong, anecdotes and wild speculation are better than bad studies.

That doesn’t mean that anecdotes and wild theories are good. It would be so much better to have good studies. But we can’t have good studies just because we want them, just as people before the microscope couldn’t have cell biology no matter how much they wanted it. The ancient Greeks would have loved to have known about bacteria and viruses, but without microscopes, x-ray crystallography, and PCR, they were never going to find out about them.

As, indeed, they didn’t.

Always Question Science

One of the great things about science is that, when done properly, it’s easy to scrutinize it. So whenever you see someone cite a scientific study, always look into it. A friend recently gave me a link to this article in the NY Post titled, A Third of Women Only Date Men Because of the Free Food: Study. (note: he didn’t endorse it, just provided it for context).

If you look at the article, it links to this article in The Society for Personality and Social Psychology. This article describes the study in slightly more detail, but we need to look at the actual study, which is titled Foodie Calls: When Women Date Men for a Free Meal (Rather Than a Relationship).

So, first question: what was the study? (There were actually two; since my purpose is to illustrate why one should read the original paper critically, for brevity I’m going to discuss only the first study; go read the paper for the second one.) It was a survey of 820 women on Amazon’s Mechanical Turk service who were paid $.26 to answer a survey. (If you’re not familiar, Mechanical Turk is Amazon’s service where people are paid small amounts to do extremely short, simple tasks; it works because Amazon streamlines the process of getting many small tasks in succession, making it worth it to the people doing them.) These respondents were then filtered down to 698 self-identified heterosexual women. They were given personality questions as well as the question which makes the headline:

Have you ever agreed to date someone (who you were not interested in a relationship with) because he might pay for your meal?

Right off the bat, I dislike the phrasing on this because I’m used to “date” as a transitive verb meaning to be in a relationship with someone where the couple regularly go on dates. Which would make this question nonsense because it would be asking whether the women have been in a relationship with someone they were not interested in a relationship with. Clearly, by “date someone” they mean “go on a date with someone,” but this weird usage is going to influence how people respond. Among the possible reactions is to interpret the question more loosely, which means that both “yes” and “no” answers will mean a wider variety of things depending on how the responder interpreted the question.

And that’s apart from the way that people may well vary in interpreting the question. I could easily see women interpreting this to mean, “Did you ever go on a date with a man who hadn’t piqued your interest but, since he was paying for the meal, you thought you’d give him a chance to see if he improved on acquaintance?”

If what they wanted to ask was whether the woman ever intentionally misled a man into thinking she was open to a relationship with him when all she wanted as free food, why didn’t they ask that? Because such harsh language would color the results? Because if they said what they actually meant women might be embarrassed to admit it? So what was the goal? To try to trick them into revealing the truth?

I’m going to get back to that in a moment, but let’s take a short break to point out that when you read the paper, a third of women answered positively to the question, which only asks if they’ve ever done this even once. The study had a followup question about frequency; 20% of the women who went on a “foodie call” did so frequently or very frequently; since that’s 20% of 33%, that works out to 6.6% of all women. This is a long ways away from “a third of women only date men because of the free food.”
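The base-rate arithmetic above is worth making explicit. A quick check of the numbers as reported:

```python
# "A third of women" answered yes to having *ever* done this even once;
# of that group, 20% reported doing it frequently or very frequently.
ever_fraction = 0.33
frequent_given_ever = 0.20

# Fraction of ALL respondents who are frequent "foodie callers":
frequent_overall = ever_fraction * frequent_given_ever
print(f"{frequent_overall:.1%}")
```

That 6.6% is the figure the headline should have led with, not “a third.”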

But back to the question: I imagine that people would try to defend the ambiguous language on the grounds that words like “deceive” imply judgement, and so will discourage respondents. Perhaps, but that’s because the thing being described is bad. Any way of describing the intentional deception of a person in order to defraud them of material goods will sound bad, because it is bad. The only way to make it sound not-bad is to phrase it in such a way that the respondent doesn’t know what you’re talking about.

Which gets me to the bigger point about this kind of psychological research: the simple expedient of phrasing your question ambiguously guarantees you publishable results. There’s no need to engage in p-hacking or other statistical tricks. Unlike with some of the stricter sciences like biology, getting fake results can be done with everything being completely above-board. It’s a great racket, which is why it will keep going for quite some time. Which is why you should never trust a summary of the results. Always track down the study and find out what the actual questions were.

Always question science. Good science is made to be questioned.

Calories In vs. Calories Out

When it comes to health and fitness, and in particular to reducing the amount of fat on one’s body, the dominant story within our culture, at least from the sort of people who present themselves as experts, is that fat gain or loss is just Calories-in-vs-Calories-out so just take however many Calories you burn and eat less than that until you’re thin.

Now, obviously there is some truth to this, because if you stop eating you will waste away until you die, and you will be very thin shortly before your death. (Though, interestingly, if you autopsy the corpses of people who’ve starved to death you will find tiny amounts of fat still remaining.) Of course, the problem with just not eating until you’re thin is that starvation makes you unfit for pretty much any responsibilities, and it’s also bad for your health. (Among many problems, if you literally stop eating your muscles will substantially atrophy, including your heart.)

So the big question is: is there a way to eat fewer Calories than you burn while remaining a functioning adult who can do what the people you have responsibilities to need you to do, which doesn’t wreck your health?

The good news is that there are methods that accomplish this balance. The bad news is that (at least as far as I can tell) there’s no one method that works for everyone.

Since this post is about the Calories-in-vs-Calories-out mantra (from here on out, Ci-Co), I’m only going to discuss moderate Calorie restriction—oversimplifying, aiming for a deficit that results in about a half a percent of bodyweight reduction per week, for a period of 6-12 weeks, before returning to maintenance for an approximately equal length of time. (This is a version of what bodybuilders do and they’re probably the experts at losing fat because bodybuilding can be described, not entirely inaccurately, as competitive dieting.)
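To make that target concrete, here is a minimal sketch of what a half-percent-of-bodyweight-per-week deficit works out to in Calories, using the rough 3600-Calories-per-pound-of-fat figure; the function name and example bodyweight are illustrative, not from any particular program:

```python
CAL_PER_LB_FAT = 3600  # rough energy content of a pound of body fat

def daily_deficit_cal(bodyweight_lb, weekly_loss_fraction=0.005):
    """Approximate daily Calorie deficit needed to lose the given
    fraction of bodyweight per week as fat."""
    weekly_loss_lb = bodyweight_lb * weekly_loss_fraction
    return weekly_loss_lb * CAL_PER_LB_FAT / 7

# A 200 lb person targeting 0.5%/week (about a pound a week):
print(f"{daily_deficit_cal(200):.0f} Cal/day deficit")
```

Note how modest this is: roughly 500 Calories a day, not the crash-diet numbers that make the body freak out.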

Now, at first glance, this isn’t too far off from what the Ci-Co people seem to be saying. However, it’s very different in practice, and those differences will be illuminating, because they’re all things that the Ci-Co people get wrong.

The first big problem with trying to implement Ci-Co is: what on earth is your daily Calorie expenditure? There are highly accurate ways of measuring this, but they are extremely expensive and mostly infeasible outside of a laboratory. Apart from those, there’s no good short-term way. The best way, which is what bodybuilders do, is to carefully measure your Calorie intake and your weight over a period of time, then calculate your Calorie expenditure from your intake plus what your weight did. For example: suppose you take in 3000 Cal/day and over 14 days lose a pound. A pound of fat contains roughly 3600 Calories, so your actual expenditure was 3000 + (3600/14) ≈ 3257 Cal/day. From there you can refine your intake to achieve what you want. (Bodybuilders also have phases where they put on muscle, which means gaining weight, so they will eat at a surplus to provide energy for building the extra muscle tissue.)
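The bodybuilder’s bookkeeping is easy to sketch. A minimal version using the example numbers from the text; the function is a hypothetical helper, not taken from any tracking app:

```python
CAL_PER_LB_FAT = 3600  # rough energy content of a pound of body fat

def estimate_expenditure(avg_intake_cal, weight_change_lb, days):
    """Estimate daily Calorie expenditure from measured intake and the
    observed weight change over the tracking period. A negative
    weight_change_lb means weight was lost (energy drawn from fat)."""
    stored_per_day = weight_change_lb * CAL_PER_LB_FAT / days
    return avg_intake_cal - stored_per_day

# Eating 3000 Cal/day and losing one pound over 14 days:
tdee = estimate_expenditure(3000, -1, 14)
print(f"Estimated expenditure: {tdee:.0f} Cal/day")
```

This treats all weight change as fat, which is the same simplifying assumption the in-text example makes; over short periods water weight adds noise, which is why the tracking window needs to be a couple of weeks rather than a couple of days.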

This looks nothing like what the Ci-Co people suggest, which usually amounts to either taking the USDA random number of 2000 or else using an online tool which estimates your Calorie expenditure from your height, weight, and some description of how active you are. These are generally accurate to +/- 50%, which is not obviously distinguishable from useless. Using myself as an example, entering 6′ and 215 pounds with high activity, one such tool estimated my maintenance Calories as 2900 and gave a weight loss target of 2450. I’ve actually been using the MacroFactor app to track approximately 100% of what I eat and weighing myself every morning when I wake up. It estimates my maintenance Calories as about 3900 Cal/day, and I’m losing a little over a pound a week with a target of 3200 Cal/day. On days when I eat about 2800 Calories I go to bed hungry and am very hungry the next day. If I tried to lose weight at 2400 Cal/day, within a week or two I’d be constantly ravenous, unable to concentrate, barely able to do my job (I’m a programmer), and miserable to be around.

Because here’s the thing: the human body can tolerate small (consistent) Calorie deficits without worrying, but if they become too large the body freaks out and concludes that something very, very bad is going on and the top priority for the foreseeable future is getting through it. That means two things, both very bad for losing fat:

  1. Spending all your waking hours trying to find enough food
  2. Reducing your Calorie expenditure as much as possible to conserve what energy you do have until the bad times have passed.

The second point is probably the bigger deal. What the Ci-Co people don’t realize is that your Calorie expenditure is nowhere near fixed. If your body thinks it’s a good idea, you can maintain on a surprisingly large number of Calories. If your body thinks it’s a good idea, you can maintain on a surprisingly small number of Calories. The former looks like having a lot of energy and feeling good. The latter looks like being tired and cold all the time.

Even worse, there is reason to believe—though this is nowhere near as well established—that if you make your body freak out and think it needs to survive a famine too many times, it will start to prepare for the next famine as soon as food becomes readily available again, much as people who’ve been broke a few times and also had good times tend to live like misers and save money the next time things go well. (In the case of your body, this means gaining the fat you will need to survive the next famine, just like bears put on a ton of fat in late summer and fall in order to get through the coming winter.)

This is why the other critical part of how bodybuilders diet is that they only do it for 6-12 weeks at a time, then take long maintenance breaks at their new weight. (The variability is because they pay attention to how their body reacts, and if it seems to be starting to freak out, they stop losing weight and start maintaining so it doesn’t have to adapt to the diet—there are many factors which go into how long it’s possible to diet before the body starts to freak out.) This relatively short fat-loss window ensures that the body never goes into surviving-famine mode. And the maintenance Calories are not a fixed number, either. They can easily increase for a few weeks as your body gets used to the extra food and raises your metabolism because it seems safe to do so.

When you put this all together, it’s why the Ci-Co people give the laws of thermodynamics a bad name. It may be perfectly true that losing weight is the result of one number that’s not easy to measure being lower than another number that’s impractically expensive to measure and impossible to usefully estimate, but knowing that that’s true has no practical value.

For a much more entertaining take on a closely related subject, check out Tom Naughton’s post Toilet Humor And The HOW vs. WHY Of Getting Fat.


This post was about the problems with Calories In vs Calories Out, but I would be remiss not to point out that everything I said up above about how bodybuilders reduce fat is predicated on having a reasonably well-regulated metabolism to begin with. There are all sorts of ways for the human metabolism to become dysregulated, and if yours is dysregulated your odds of successfully reducing fat are much lower until you figure out what’s wrong and fix it. In my own case, I’m about 99% certain that at times in my life I’ve induced insulin insensitivity in my body through excessive fructose consumption. (I can eat a pound of chocolate for lunch if I let myself, and there was a period back when I was in grad school when I was drinking full-sugar Mountain Dew and eating cake mix out of the box with a spoon. That stuff is little more than sugar and flour. This was during a period when I was unemployed and depressed as well as young and dumb, and I had not yet shaken off being raised during the low-fat craze of the 1980s and 1990s.) I believe some extensive low-carb eating has allowed my body to mostly reset its relationship with insulin, and at this point I’m only willing to eat candy/ice cream/etc. on Christmas, Easter, and my birthday. That said, when I’m cutting (reducing fat), I find it much easier and more successful if I go back to eating low carb or even keto.

That’s me; I suspect that many people are in a similar boat, because fructose is way more common in processed food than people normally realize and it’s reasonably well established that extremely high fructose consumption (much higher than anything you’d get from any reasonable intake of fresh fruit, btw) can induce non-alcoholic fatty liver disease, which seems to have a causative relationship with insulin resistance/metabolic syndrome. That said, this is not everyone who’s got excess fat. There are tons of things that can go wrong to dysregulate one’s metabolism/appetite, some of them dietary, some of them endocrine, and some I don’t even begin to have an idea about. The human body is unbelievably complex and there are a lot of ways it can malfunction. There’s really no substitute for trying things and seeing what works. And at least we know that it’s a good idea to get regular exercise no matter how much excess fat you’re carrying. It may not make you lean, but it will certainly make you healthier and happier than if you don’t do it. After the first few months.

Oh yeah—and I’m no expert, so please do your own research and don’t take my word for it.

Psycho-Analysis Began in Hypnosis

In my (low-key) quest to understand how on earth Freud’s theories were ever respected, I’ve recently read Five Lectures on Psycho-Analysis. It’s definitely been interesting. (If you don’t know, this is the transcript of five lectures he gave on five consecutive days at Clark University in Worcester, Massachusetts in 1909 which were meant to give a concise summary of Psycho-Analysis.)

Something I did not realize, but which makes perfect sense in retrospect, is that Psycho-Analysis began in hypnosis. A tiny bit of background is necessary, here: In the 1800s and early 1900s, the term “hysteria” seems to have referred to any idiopathic problem in women with severe physical symptoms. Basically, when a woman developed bad symptoms and called in a doctor and he could find no physical cause, the diagnosis was “hysteria,” which basically meant “I don’t know, in a woman.” At that point, since the symptoms had no physical cause, it was assumed that they must have mental causes, and so doctors of the mind would step in to try to help, supposing, of course, that the patient or her family could afford it.

Freud begins with an interesting story about a patient that a colleague of his, Dr. Breuer, was treating. It was a young woman under great stress (nursing her dying father) who started developing a bunch of really bad symptoms that sound, to my ear, like a series of small strokes. She couldn’t use her right arm or leg for a while, sometimes she couldn’t use her left side, she forgot her native language (German) and could only speak English, etc. She also developed a severe inability to drink water and survived for several weeks on melons and other high-water foods. And here’s where it gets interesting. Dr. Breuer hypnotized her, and in a hypnotic state she related the story of having gone into a companion’s room and seen the woman’s dog drinking from a glass. This disgusted her terribly but she gave no indication of it because she didn’t want to offend the woman. He then gave the young woman a glass of water, brought her out of hypnosis, and she was able to drink normally from then on.

Freud moved away from hypnosis for several reasons, but the big one seems to be that most people can’t be hypnotized, which makes it a therapeutic tool of dubious value. The particulars of how he moved away are interesting, but I’ll get to that in a little bit. Before that, I want to focus on the hypnosis.

The history of hypnosis is interesting in itself, but a bit complex, and the relevant part is really how it was popularly perceived rather than what it was intended as. In its early stages, hypnosis was seen as something very different from normal waking life and, as a result, excited an enormous amount of interest from people who desired secret knowledge of the universe’s inner workings. There were plenty of people who wanted to believe in a hidden world that they could access if only they had the key (spurred on, I suspect, by the many discoveries of the microscope in the late 1600s and the continued discoveries as a result of better and better microscopes). Hypnotism, where a man’s mind seemed to alter to a completely different state, and in particular where it could receive commands that it would obey without remembering in a subsequent waking state, was perfect for just such a belief. Here there seemed to be another mind, behind the mind we observe, which seemed to govern the observable mind’s operation. This is the sort of stuff out of which real power is made—if you can control the real source of the mind, you can control the mind!

This context really makes Psycho-Analysis’s model of the compartmentalized mind, and further its insistence on the power of the sub-conscious mind, make sense.

As I said, Freud abandoned hypnotism, and the means by which he did it really should have been a tip-off to his whole theory being wrong. What led him to discard hypnotism were some experiments he became aware of in which a person who could not remember what he did under hypnosis could be induced, without any further hypnosis, to remember. Freud only took this instrumentally rather than considering that it undermined the whole idea of the powerful subconscious and went about bringing up the “repressed” memories which were (putatively) causing physical symptoms by talking with the patient without hypnotism. I suppose that the idea of this secret knowledge was too attractive to give up.

Astronomers Find a Waterworld Planet With Deep Oceans in the Habitable Zone – Universe Today

I recently came across an article titled “Astronomers Find a Waterworld Planet With Deep Oceans in the Habitable Zone”. Curious what they actually found, I clicked through to the article. It was about what I expected.

The entire subject of discovering exoplanets is one that does not fill me with confidence. I get the basic approach used, which is looking for regular dimming of stars caused by the transit of a planet in front of the star as it orbits. And, indeed, you would expect a planet orbiting a star to (slightly) dim the light coming from that star if you’re lucky enough for the planet to pass right in front of it relative to us. That said, when I say slight, I mean slight. To put it into perspective, our sun has a diameter 109 times larger than the diameter of the earth. In terms of cross-sectional area, that means that the earth’s cross-section is about 1/10,000th of the sun’s. It will block out a little more of the sun than that, since it’s a few million miles in front of the sun rather than directly on it, but since we’re observing stars that are light-years away, it won’t be that much more. Jupiter, which is nearly as large as planets can get (as a gas giant’s mass goes up much past Jupiter’s, its gravity causes it to contract), would block out about 1/100th of the sun. So what astronomers are looking for is somewhere between a 1% dimming and a 0.01% dimming.
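That range is just a ratio of cross-sectional areas; a quick sketch of the arithmetic (the diameter ratios are the approximate figures used above):

```python
# Transit depth: the fraction of a star's light a planet blocks is
# roughly the ratio of cross-sectional areas, (r_planet / r_star)^2.

SUN_EARTH_DIAMETER_RATIO = 109     # sun is ~109 earth diameters across
JUPITER_EARTH_DIAMETER_RATIO = 11  # Jupiter is ~11 earth diameters across

earth_depth = (1 / SUN_EARTH_DIAMETER_RATIO) ** 2
jupiter_depth = (JUPITER_EARTH_DIAMETER_RATIO / SUN_EARTH_DIAMETER_RATIO) ** 2

print(f"Earth blocks {earth_depth:.4%} of the sun")      # ~0.008%
print(f"Jupiter blocks {jupiter_depth:.2%} of the sun")  # ~1%
```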

Even less confidence inspiring, when you look into the actual data, the stars in question are generally around 1 pixel big in the images that they’re using. This isn’t always the case, of course, but the stars are never more than a few pixels. In the article in question, when the researchers turned to a much higher resolution telescope, they were able to distinguish the two stars of the binary system where the “waterworld” orbits the larger of the two within the habitable zone. (If you’re not familiar, the habitable zone of a star is the distance away where the heat from the star would result in liquid surface water, as we have here on earth. Too close and the planet will be too hot and the oceans will boil off, too far and they will freeze.) Oh, and these two stars are orbiting each other from roughly twice the distance that Pluto is from the sun. And the high resolution telescope was able to make them out as two distinct sources of light.

No one has ever seen this supposed “water world.” What we have is a periodic dimming of the host star. From the magnitude of that dimming we can calculate the size of the thing crossing in front of it. From the time between dimmings we can calculate the orbital period, and thus the distance from the star. From separate measurements of the star’s radial-velocity wobble we can estimate the mass, and from the mass and size, the density.
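The first two steps of that chain can be sketched in code (illustrative only; a sun-like star and made-up numbers, not the actual TOI-1452 analysis):

```python
# What a transit light curve actually gives you, assuming you already
# have estimates of the host star's radius and mass from other methods.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
R_SUN = 6.957e8    # meters
M_SUN = 1.989e30   # kilograms
R_EARTH = 6.371e6  # meters
AU = 1.496e11      # meters

def planet_radius(transit_depth, star_radius):
    """Depth = (r_p / r_s)^2, so r_p = r_s * sqrt(depth)."""
    return star_radius * math.sqrt(transit_depth)

def orbital_distance(period_s, star_mass):
    """Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)."""
    return (G * star_mass * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# A 0.01% dip repeating every 365 days around a sun-like star:
print(planet_radius(1e-4, R_SUN) / R_EARTH)           # ~1.1 earth radii
print(orbital_distance(365 * 24 * 3600, M_SUN) / AU)  # ~1 AU
```

Note that both results lean on assumed properties of the star; get the star’s radius or mass wrong and everything downstream is wrong too.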

That last part is where the claim of a “water world” came from, by the way. The density of the planet that was detected is too low for a rocky planet like earth, and too high for a gaseous planet. Since it’s in the habitable zone of its star, it’s unlikely to be icy, and so it is a good candidate for being a water world. This in no way justifies calling it a water world, nor does it justify the artist’s rendition of what the surface of it might look like that’s in the article (which is just a picture of the sun setting over the ocean here on earth). It also doesn’t justify the Star Trek-like artist’s rendition of the planet near to a sun-like star. The star that the planet is orbiting is a red dwarf. They’re called red dwarfs because they don’t put out white light like our sun does. If you look up TOI-1452A (the red dwarf star; TOI-1452 b is the planet) it has a surface temperature of 3185 K. It’s not that it puts out literally no blue light, but it puts out very little. This is the dingy yellow-orange light of a low-wattage “warm white” incandescent bulb. Oh, and the star only puts out 0.7% of the light that our sun does.

These sort of articles really annoy me because they pretend to have an enormous amount of certainty that we don’t have. What’s actually going on is a little bit of data and a whole lot of calculations. This is interesting, but it does a great disservice to people to pretend that what we have is a lot of data. We don’t.

Moreover, these are all unverified calculations. No one alive today is ever going to set eyes on a photograph of one of these planets to get an independent source of data about their size or composition, or even their existence. It took nine years for the New Horizons probe to fly out to Pluto. Here’s the best picture Wikipedia has of Haumea, a dwarf planet in our solar system:

Haumea is only about 10 AU further away from the sun than Pluto is. (An AU is the distance from the earth to the sun.) Here’s Eris, which is more massive than Pluto, though not quite as large, and which is much further away:

Eris is, at its farthest, about twice as far away from the sun as Pluto. And this is the best picture that we have of it. (Or at least it’s the best picture that Wikipedia has.)

If this is the best that we have of dwarf planets in our own solar system, it suggests that a bit of humility is warranted when it comes to conclusions about planets orbiting other stars. Our galaxy is a big place. There’s no reason to suppose that there is nothing besides exoplanets which will regularly result in the slight dimming of a star’s light. That’s not to say that there’s something wrong with going with what we know—that is, with saying that if the slight regular dimming is caused by an exoplanet, then the exoplanet would have such and such properties. If people are going to get tired and drop the “if”, then perhaps it would be better to stop talking about the subject at all.

Dozens of Major Cancer Studies Can’t Be Replicated

I recently came across an interesting article in Science News on widespread replication failure in cancer studies. It’s interesting, though not particularly shocking, that the Replication Crisis has claimed one more field.

If you’re not familiar with the Replication Crisis, it has to do with how it was widely assumed that scientific experiments described in peer-reviewed journals were reproducible—that is, if someone else performed the experiment, they would get the same result. Reproducibility of experiments is the foundation of trust in the sciences. The theory is that once somebody has done the hard work of designing an experiment which produces a useful result, others can merely follow the experimental method to verify that the result really happens and that after an experiment has been widely reproduced, people can be very confident in the result because so many people have seen it for themselves and we have widespread testimony of it. Or, indeed, people can perform these experiments as they work their way through their scientific education.

That’s the theory.

Practice is a bit different.

The problem is that science became a well-funded profession. The consequence is that experiments became extraordinarily expensive and time-intensive to perform. The most obvious example would be particle-collision experiments in super-colliders. The Large Hadron Collider cost somewhere around $9,000,000,000 to build and requires teams of people to operate. Good luck verifying the experiments it performs for yourself.

Even when you’re working on radically smaller scales and don’t require expensive apparatus—say you want to assess the health effects of people cutting coffee out of their diet—putting together a study is enormously time-intensive. And it costs money to recruit people; you generally have to pay them for their participation, and you need someone skilled to periodically assess whatever health metrics you want to assess. Blood doesn’t draw itself and run lipid panels, after all.

OK, so amateurs don’t replicate experiments anymore. But what about other professionals?

Here we come to one of the problems introduced by “Publish Or Perish”. Academics only get status and money for achieving new results. For the most part people don’t get grants to do experiments that other people have already done and get the same results that they got. This should be a massive monkey wrench in the scientific machine, but for a long time people ignored the problem and papered over it by saying that experiments will get verified when other people try to build on the results of previous experiments and fail.

It turns out that doesn’t work, at least not nearly well enough.

The first field in which people got serious funding to actually replicate results to see if they replicate was psychology, and it turned out that most wouldn’t replicate. To be fair, in many cases this was because the experiment was not described well enough that one could even set up the same experiment again, though this is, to some degree, defending oneself against a charge of negligence by claiming incompetence. Of those studies which were described well enough that it was possible to try to replicate them, fewer than half replicated. They tended to fail to replicate in one of two ways:

  1. The effect didn’t happen often enough to be statistically significant
  2. The effect was statistically significant but so small as to be practically insignificant

To give a made-up example of the first, if you deprive people of coffee for a few months and one out of a few hundred see a positive result, then it may well be you just chanced onto someone who improved for some other reason while you were trying to study coffee. To give an example of the second, you might get a result like everyone’s systolic blood pressure went down by one tenth of a millimeter of mercury. There’s virtually no way you got a result that common in the group by chance, but it’s utterly irrelevant to any reasonable goal a human being can have.

Psychology does tend to be a particularly bad field when it comes to experimental design and execution, but other fields took note and wanted to make sure that they were as much better than the psychologists as they assumed.

And it turned out that many fields were not.

I find it interesting, though not very surprising, that oncology turns out to be another field in which experiments are failing to replicate. After all, in a field which isn’t completely new, it’s easier to get interesting results that don’t replicate than it is to get interesting results that do.

Awful Scientific Paper: Cognitive Bias in Forensic Pathology Decisions

I came across a rather bad paper recently titled Cognitive Bias in Forensic Pathology Decisions. It’s impressively bad in a number of ways. Here’s the abstract:

Forensic pathologists’ decisions are critical in police investigations and court proceedings as they determine whether an unnatural death of a young child was an accident or homicide. Does cognitive bias affect forensic pathologists’ decision-making? To address this question, we examined all death certificates issued during a 10-year period in the State of Nevada in the United States for children under the age of six. We also conducted an experiment with 133 forensic pathologists in which we tested whether knowledge of irrelevant non-medical information that should have no bearing on forensic pathologists’ decisions influenced their manner of death determinations. The dataset of death certificates indicated that forensic pathologists were more likely to rule “homicide” rather than “accident” for deaths of Black children relative to White children. This may arise because the base-rate expectation creates an a priori cognitive bias to rule that Black children died as a result of homicide, which then perpetuates itself. Corroborating this explanation, the experimental data with the 133 forensic pathologists exhibited biased decisions when given identical medical information but different irrelevant non-medical information about the race of the child and who was the caregiver who brought them to the hospital. These findings together demonstrate how extraneous information can result in cognitive bias in forensic pathology decision-making.

OK, let’s take a look at the actual study. First, it notes that black children’s deaths were more likely to be ruled homicides (instead of accidents) than white children’s deaths, in the state of Nevada, between 2009 and 2019. More accurately, of those deaths of children under 6 which were given some form of unnatural death ruling, the deaths of black children were significantly more likely to be rated a homicide rather than an accident than were the deaths of white children.

It’s worth looking at the actual numbers, though. Of all of the deaths of children under 6 in Nevada between 2009 and 2019, 8.5% of the deaths of black children were ruled a homicide by forensic pathologists while 5.6% of the deaths of white children were ruled a homicide. That’s not a huge difference. They use some statistics to make it look much larger, of course, because they need to justify why they did an experiment on this.

In fairness to the authors, they do correctly note that these statistics don’t really mean much on their own, since black children might have been murdered statistically more often than white children during that time period in Nevada. It doesn’t reveal cognitive bias if the pathologists were simply correct about real discrepancies.

So now we come to the experiment: They got 133 forensic pathologists to participate. They took a medical vignette about a child under six who was discovered motionless on the living room floor by their caretaker, brought to the ER, and died shortly afterwards. “Postmortem examination determined that the toddler had a skull fracture and subarachnoid hemorrhage of the brain.”

The participants were broken up into two groups, which I will call A and B. 65 people were assigned to A and 68 to B. All participants were given the same vignette, except that, to be consistent with typical medical information, the race of the child was specified. Group A’s information stated that the child was black, while group B’s information stated that the child was white. OK, so they then asked the pathologists to give a ruling on the child’s death as they normally would, right?

No. They included information about the caretaker. This is part of the experiment to determine bias, because information about the caretaker is not medically relevant.

OK, so they said that the caretaker had the same race as the child?

Heh. No. Nothing that would make sense like that.

The caretaker of the black child was described as the mother’s boyfriend, while the caretaker of the white child was the child’s grandmother. Their race was not specified, though for the caretaker of the white child it can be (somewhat) inferred from the blood relation, depending on what drop-of-blood rule one assumes the investigators are using to determine the child is white. Someone who is 1/4 black, where the caretaker grandmother was the black grandparent, might well be identified as white, or perhaps the one-drop rule is applied and the grandmother could be at most 1/8 black for her grandchild to qualify to the racist experimenters as white. Why do they leave out the race of the caretaker despite clearly wanting to draw conclusions about it? Why, indeed.

More to the point, these are not at all comparable things. It is basic human psychology that people are far less likely to murder their descendants than they are to murder people not related to them. Moreover, males are more likely to commit violent crimes than females are (with some asterisks; there is some evidence to suggest that women are possibly even more likely to hit children than men are but just get away with it more because people prefer to look away when women are violent, but in any event the general expectation is that a male is more likely to be violent than a female is). Finally, young people are significantly more likely to be violent than older people are.

In short, in the vignette given to group A, the dead child is black and the caretaker who brought them in is given 3 characteristics, each of which, on its own, makes violence more statistically likely. In group B, the dead child is white and the caretaker who brought them in is given 3 characteristics, each of which, on its own, makes violence more statistically unlikely. For Pete’s sake, culturally, we use grandmothers as the epitome of non-violence and gentleness! At this point, why didn’t they just give the caretaker of the black child multiple prior convictions for murdering children? Heck, why not have him give such medically extraneous information as repeatedly saying, “I didn’t hit him with the hammer that hard. I don’t get why he’s not moving.” I suppose that would have been too on-the-nose.

Now, given that we’re comparing a child in the care of mom’s boyfriend to a child in the care of the child’s grandmother, what do they call group A? Boyfriend Condition? Nope. Black Condition. Do they call group B Grandma Condition? Nope. White Condition.

OK, so now that we have a setup clearly designed to achieve a result, what are the results?

None of the pathologists rated the death “natural” or “suicide.” 78 of the 133 pathologists ruled the child’s death “undetermined” (38 from group A, 40 from group B). That is, 58.6% of pathologists ruled it “undetermined.” Of the minority who ruled conclusively, 23 ruled it accident and 32 ruled it homicide. (That is, 17.3% of all pathologists ruled it accident and 24.1% of all pathologists ruled it homicide.)

In group A, 23 pathologists ruled the case homicide, 4 ruled it accident, and 38 ruled it undetermined. In group B, 9 ruled it homicide, 19 ruled it accident, and 40 ruled it undetermined.

This is off from an exactly equal outcome by approximately 15 of the 133 pathologists. I.e., if about 7 pathologists in group A had ruled accident instead of homicide, and 7 pathologists in group B had ruled homicide instead of accident, the results would have been equal between the two groups. As it was, the difference is big enough to reach statistical significance, which is just a measure of whether the ordinary chance variation you see 95% of the time is sufficient to entirely explain the results. What it doesn’t show is a pervasive trend. If 11% of the participants had reversed their rulings, the experiment would have shown that the 18.6% of forensic pathologists on an email list of board-certified pathologists who responded to the study were paragons of impartiality.
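For the curious, here is that 2x2 table of conclusive rulings run through a hand-rolled Pearson chi-square (counts are from the study as described above; the second table is the hypothetical where about seven rulings in each group flip):

```python
# Pearson chi-square for a 2x2 table [[a, b], [c, d]] (homicide/accident
# counts for groups A and B), written out by hand with no dependencies.

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# As reported: group A ruled 23 homicide / 4 accident,
#              group B ruled  9 homicide / 19 accident.
print(round(chi_square_2x2(23, 4, 9, 19), 1))    # ~15.9, well past the
                                                 # 5% cutoff of 3.84

# After ~7 rulings flip in each group (16/11 vs 16/12):
print(round(chi_square_2x2(16, 11, 16, 12), 2))  # ~0.03, nowhere near it
```

So the headline finding really does sit on roughly a dozen people changing their answers.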

There’s an especially interesting aspect to the last paragraph of the conclusion:

Most important is the phenomenon identified in this study, namely demonstrating that biases by medically irrelevant contextual information do affect the conclusions reached by medical examiners. The degree and the detailed nature of these biasing effects require further research, but establishing biases in forensic pathology decision-making—the first study to do so—is not diminished by the potential limitation of not knowing which specific irrelevant information biased them (the race of the child, or/and the nature of the caretaker). Also, one must remember that the experimental study is complemented and corroborated by the data from the death certificates.

The first part is making a fair point, which is that the study does demonstrate that it is possible to bias the forensic pathologist by providing medically irrelevant information, such as the caretaker being far more likely to have intentionally hurt the child. Why didn’t they make all of the children white and just have half of the vignettes including the caretaker with multiple previous felony convictions, who was inebriated, repeatedly state, “I only hit the little brat with a hammer four times”? If we’re only trying to see whether medically irrelevant information can bias the medical examiner, that would do it too. But what’s up with varying the race of the child?

While it’s probably just to be sensationalist because race-based results are currently hot, it may also be a tie-in to that last sentence: “Also, one must remember that the experimental study is complemented and corroborated by the data from the death certificates.” This sentence shows a massive problem with the researcher’s understanding of the nature of research. Two bad data sources which corroborate each other do not improve each other.

To show this, consider a randomly generated data source. Instead of giving a vignette, just have another set of pathologists randomly answer “A”, “B,” or “C”. Then decide that A corresponds to undetermined, B to homicide, and C to accident. There’s a good chance that people won’t pick these evenly, so you’ll get a disparity. If it happens to be the same, it doesn’t bolster the study to say “the results, it must be remembered, also agreed with the completely-blinded study in which pathologists picked a ruling at random, without knowing what ruling they picked”.

Meaningless data does not acquire meaning by being combined with other meaningless data.

The conclusion of the study is, curiously, entirely reasonable. It basically amounts to the observation that if you want a medical examiner making a ruling based strictly on the medical evidence, you should hide all other evidence but the medical evidence from them. This, as the British like to say, no fool ever doubted. If you want someone to make a decision based only on some information, it is a wise course of action to present them only that information. Giving them information that you don’t want them to use is merely asking for trouble. It doesn’t require a badly designed and interpreted study to make this point.