The Maintenance Bias Effect: Why We Ignore Problems Until They Fail

Bias is entirely natural. It is the result of your mind taking shortcuts. Many biases are quicker to apply than thinking a problem through, and they seem to make life easier. But bias is a trap for anyone who wants to see the world as it is. There is no avoiding bias; there is only the possibility of awareness.

The problem with bias is that you do not see and react to reality. So, while relying on bias is quicker, it also risks danger or serious misunderstanding.

10 Traps of the Human Mind

Researchers studying human behavior have noticed a set of biases that seem to be universal. The human mind is powerful, but these naturally occurring biases undermine that power. They blind people to what is right in front of their eyes. None of us are immune.

The alert maintenance professional notices which bias is in play, investigates, and then makes thoughtful decisions.

1. Normalization of Deviance

This bias is one of the most common problems in maintenance. We build systems (like PM task lists), processes, and procedures to avoid this bias. It is everywhere. We normalize things around us that we see often (as long as nothing bad happens).

When a machine overloads a circuit, we reset the breaker, and after a while, with no other consequences, we tell the story, “Oh, that machine always does that; don’t sweat it.”

Normalization of deviance occurs when people repeatedly accept a lower performance standard until that lower standard becomes the “norm.”

NASA astronaut Colonel Mike Mullane relates the story of one famous catastrophe: the loss of the Space Shuttle Challenger.

The NASA team accepted a lower performance standard on the solid rocket booster O-rings until that lower standard became the “norm.” They had become so comfortable with seeing occasional O-ring damage and getting away with it that the original standard, in which ANY O-ring damage was intolerable deviance, was marginalized.

2. “After the Fact, Therefore Because of the Fact”

This mistake is so common and well known that it has its own Latin name: “post hoc ergo propter hoc,” which means “after this, therefore because of this.” The fallacy requires only that one event occur before the other.

This fallacy is closely related to the next bias. It states, “Since event Y followed event X, X must have caused Y.” If we change the grease in a bearing and the bearing then fails, we might be tempted to say the new grease caused the failure. In fact, we need evidence beyond the mere sequence in time for proof.

It is legitimate (and valuable) to ask what happened before the failure or what has changed. Making a leap (without good evidence) from something happening first to that thing being the cause is post hoc ergo propter hoc thinking.

3. “Correlation Implies Causation.”

If two facts are correlated, are they related by cause and effect? This is a trap because a causal link is easy to assume and tough to disprove. If we change oil vendors and immediately get a breakdown, did the change cause the failure?

Public interpretation of medical science gets caught in this trap. Some see a relationship (correlation) between the rise in autism diagnoses and the rise in vaccinations and conclude that vaccines cause autism. The bias does not depend on truth or falsity; the conclusion seems true, so people run with it.

A medical journal recently reported the results of an extensive study finding that kids who take one or more tablets of Tylenol have twice the rate of asthma. The news article led with “Tylenol causes asthma?” The writers wanted an unsophisticated public to think that correlation implies causation. However, correlation does not imply causation: a relationship between two variables does not automatically mean one causes the other. In this case, kids who have more colds take more Tylenol, and there is solid research showing that viruses are causally related to asthma. Correlation without proof of causation is a prevalent problem in research.
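To make the trap concrete, here is a minimal sketch in Python with entirely made-up numbers (not real medical data): both Tylenol doses and asthma flare-ups are driven by how many colds a child gets, and Tylenol has no effect on asthma in the model, yet the two still correlate strongly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_kids = 10_000

# Hypothetical confounder: how many colds each child gets in a year.
colds = rng.poisson(3, n_kids)

# In this toy model, Tylenol use rises with colds, and asthma flare-ups
# also rise with colds. Tylenol does NOT appear in the asthma formula,
# so it has no causal effect on asthma here.
tylenol_doses = 2 * colds + rng.poisson(1, n_kids)
asthma_flareups = 0.5 * colds + rng.normal(0, 1, n_kids)

r = np.corrcoef(tylenol_doses, asthma_flareups)[0, 1]
print(f"Correlation between Tylenol use and asthma flare-ups: {r:.2f}")  # clearly positive
```

The correlation in the output is real; the causal story behind it is not.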

4. The Fallacy of the Single Cause

Don’t bosses like to say, “What the heck happened? Find the dang cause and fix it!”

“What was the cause of this?” Such language implies that there is one cause when, instead, there are probably many causes. Even the phrase “Root Cause Analysis” encourages the idea that there is one cause that is the root of all evil.

In almost every case, a complex web of causes contributes to an incident. Rather than hunting for a single cause, we are looking for the cause that is most straightforward to fix.

Of course, we seek the cause that gives us the most leverage (least effort and significant impact). This is not the Root Cause but the cause we can eliminate or mitigate with the least effort.

5. Regression Fallacy

If we assume some attribute is randomly distributed in any population (like free throw percentages), then the appearance of outliers (making all the shots or missing all the shots) would generally be followed by a regression to the mean for that population.

An example is sometimes called the Sports Illustrated cover Jinx. An athlete has a great year. It is better than his past performance. He gets selected for the cover of Sports Illustrated. His subsequent performance regresses toward his old performance level.

This regression fallacy follows all kinds of results in many fields. The logical flaw is to make predictions that expect exceptional results to continue as if they were average.

People are most likely to act when the variance peaks (such as buying a stock at its high). Then, after the results become more regular, they believe their action caused the change when, in fact, the normal regression was reasserting itself.

The logic of the Regression Fallacy might look like this:

The problem: The students did exceptionally poorly last semester.

The cause: We assume it is because they are not motivated.

The action taken: We take away some privileges and punish them.

Result: They did much better this semester.

Therefore: Punishment is effective in improving students’ grades.
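A short simulation makes the flaw visible. This is a minimal sketch with hypothetical scores, assuming grades are mostly stable ability plus random semester-to-semester luck: the students who did worst last semester improve the next semester even though nothing at all is done to them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students = 1_000

# Each student's stable "true" ability, plus independent luck each semester.
ability = rng.normal(75, 5, n_students)
semester1 = ability + rng.normal(0, 8, n_students)
semester2 = ability + rng.normal(0, 8, n_students)  # no intervention of any kind

# The students who did exceptionally poorly in semester 1.
worst = semester1 < np.percentile(semester1, 10)

print(f"Worst group, semester 1 average: {semester1[worst].mean():.1f}")
print(f"Worst group, semester 2 average: {semester2[worst].mean():.1f}")
# The second average is noticeably higher -- pure regression to the mean,
# which any punishment applied between semesters would wrongly get credit for.
```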

6. Circular Cause

A circular cause occurs when the consequence of a phenomenon is claimed to be its cause.

There are many real-world examples of circular cause-and-effect (many of them constituting virtuous or vicious cycles). Where the circular cause is a cycle, it is a complex of events that reinforces itself. A virtuous circle has favorable results, and a vicious circle has detrimental consequences.

  • More jobs cause more money in people’s pockets, which increases consumption, which requires more production and thus more jobs.
  • The expectation of an economic downturn causes people to cut back and spend less, which reduces demand, which results in layoffs, which means people have less money to spend, causing an economic downturn.

7. Third-cause Fallacy, Sometimes Called Joint Effect

In this fallacy, a third, invisible factor drives both effects. A famous example is that a city’s ice cream sales are highest when drownings in city swimming pools are highest. To conclude that one caused the other is spurious. In this case, the invisible third factor could be a heatwave driving both the ice cream sales and the increased pool use (and drownings).
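One way to expose a suspected third cause is to control for it. The sketch below uses made-up monthly numbers: ice cream sales and pool incidents both follow temperature, so their raw correlation is high, but once the temperature effect is removed from each series, almost nothing is left.

```python
import numpy as np

rng = np.random.default_rng(2)
months = 120

# The hidden third factor: average monthly temperature.
temperature = rng.normal(20, 8, months)

# Both observed series are driven by temperature, not by each other.
ice_cream_sales = 50 + 3 * temperature + rng.normal(0, 10, months)
pool_incidents = 1 + 0.2 * temperature + rng.normal(0, 1, months)

print("Raw correlation:",
      round(np.corrcoef(ice_cream_sales, pool_incidents)[0, 1], 2))

def residuals(y, x):
    """What is left of y after removing a simple linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlate what remains of each series once the temperature effect is removed.
r_partial = np.corrcoef(residuals(ice_cream_sales, temperature),
                        residuals(pool_incidents, temperature))[0, 1]
print("Correlation after controlling for temperature:", round(r_partial, 2))
```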

8. Fundamental Attribution Error

The attribution of a problem to individuals in a system rather than to the system in which they find themselves is so pervasive that psychologists call it the “fundamental attribution error.” 

MIT System Dynamics professor John Sterman and Nelson Repenning made this point in a 2001 California Management Review article, “Nobody Ever Gets Credit for Fixing Problems that Never Happened.”

The people are not the problem; the system is. We need to fix the system so the people can do better work. We need everyone’s help to identify and fix the system, and only by working cross-functionally is it possible to succeed.

9. Cherry-picking

An observer who only sees a selected data set may wrongly conclude that most, or even all, of the data are like that. Cherry-picking can also be part of other logical fallacies. For example, the “fallacy of anecdotal evidence” overlooks large amounts of data in favor of a few cases known personally.

Cherry-picking is the bane of medicine. Advertisements tout a new cure for high blood pressure or high blood sugar. If you track down the testimonials, you might find that it worked for that person in that specific situation. The problem is that the burden of proof is much higher for a medication the public uses. Even with the higher burden of proof, the regulators sometimes get it wrong.

10. Name-calling

Name-calling substitutes emotional arguments for rational ones. People use the technique to incite fear (or arouse positive prejudice), and both the fear and the trust it invokes are baseless. When this tactic is used instead of a logical argument based on evidence or experience, name-calling crowds out rational, fact-based reasoning.

Current politics uses the technique widely. Some groups scare people as a group (such as immigrants in the US). Name-calling invokes those feared groups to convince the audience that the other candidate will destroy our way of life.
