A lot of people love Gmail because it filters out all of their spam. “I never see any spam!” they say, proudly. But Gmail achieves this by being far too aggressive about classifying things as spam, and as a result it loses a lot of legitimate email, too.
So the user is left with one of three options:
- Have things go wrong when they miss important emails.
- Check their spam folder once a day or so to make sure they don’t miss any important email.
- Don’t use email for anything important.
Option #1 is terrible, and option #3 is just another way of saying that Gmail is a bad email client. But the funny thing about option #2 is that such a user ends up reading more spam than I do with my spam filter, which is configured to let all of the important email through at the cost of letting in only a few spams. I never have to check my spam folder, so seeing 0-4 spams a day in my regular inbox means reading far less spam than if I had to skim through a spam folder every day.
This relates to the engineering question of “which way do you want to fail?” It’s almost never possible to do something perfectly, like getting absolutely every email classification right. And every system will have a bias: when your spam filter fails, would you rather it tend to misclassify legitimate email as spam, or spam as legitimate email?
The problem with focusing too much on getting the system perfect is that it becomes easy to forget that it won’t be perfect anyway, and then to never think about how it will fail when it does. A better-engineered system puts some thought into identifying its systemic biases and tweaking them to do the least harm, while also getting as close to perfect as is practical without changing the general direction in which failure happens.
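The tradeoff above can be sketched in code. This is a toy illustration with made-up scores, not any real filter’s API: each message gets a spam score, and the decision threshold is the knob that picks which way the system fails.

```python
# Toy spam filter: hypothetical scores in [0, 1], higher = more spammy.
# The threshold doesn't make the filter perfect; it chooses the bias.

def classify(score: float, threshold: float) -> str:
    """Mark a message as spam only if its score clears the threshold."""
    return "spam" if score >= threshold else "inbox"

# Hypothetical scored messages: (score, true label).
messages = [
    (0.95, "spam"), (0.80, "spam"), (0.60, "spam"),
    (0.55, "legit"), (0.30, "legit"), (0.05, "legit"),
]

def failures(threshold: float) -> tuple[int, int]:
    """Return (legit emails lost to the spam folder, spams reaching the inbox)."""
    lost_legit = sum(1 for score, label in messages
                     if label == "legit" and classify(score, threshold) == "spam")
    leaked_spam = sum(1 for score, label in messages
                      if label == "spam" and classify(score, threshold) == "inbox")
    return lost_legit, leaked_spam

# Aggressive threshold: fails toward losing legitimate mail.
print(failures(0.50))  # -> (1, 0): one legit email lost, no spam leaks

# Conservative threshold: fails toward letting a little spam through.
print(failures(0.70))  # -> (0, 1): nothing lost, one spam reaches the inbox
```

Neither threshold gets every message right; the point is that the second one fails in the direction where the failure is merely annoying rather than costly.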
Because failing in the wrong direction can be worse than useless. It can be actively harmful.
(The same principle applies to social engineering, by the way.)