The P-F curve is a core concept in maintenance, illustrating how a machine’s performance deteriorates over time, eventually leading to failure. It has shaped the way we understand and address machine reliability since its introduction in the 1970s. The P-F curve has been a cornerstone for maintenance management, guiding us in predicting and preventing failures.
The P-F curve keeps on giving us ideas. Since its introduction by Nowlan and Heap in 1978 and its clarification by John Moubray in his 1997 book RCM II, many leaders in maintenance management have written, explained, and taught us new things about the curve. I’ve learned something from each article or talk.
Why the Classic P-F Curve Wasn’t Enough
The basic idea is that an element of a machine operates at a particular performance until something happens. The “something” can be microscopic (maybe even molecular). It could be dirt falling into a bearing or a fatigue micro-crack. It could be almost anything that results in a loss of performance. Over time, the defect grows and performance declines. The slope, and hence the deterioration rate, depends on the engineering situation. The shapes of the curves for different elements are similar, even though the scales are radically different.
I have always argued that the traditional P-F curve tells a story about a problem but doesn’t provide any helpful answers. It explains what happened without pointing toward a solution.
The original P-F curve described failure, but it didn’t illuminate its true origins.
While the P-F curve was a step forward, it didn’t explain the underlying causes of performance decline, which continued to lead to ongoing issues and failures. That’s where the next P-F curve comes in.
That started to change almost two decades ago. Douglas Plucknette wrote an article titled “Expanding the Curve” in Uptime Magazine. In it, he introduced the D-I-P-F curve. The “D” and “I” stand for Design and Installation, respectively. His explanation was quite controversial at the time.
Plucknette’s D-I-P-F curve expanded on the traditional model by including the role of design and installation in machine reliability. It emphasized that many failures can be traced to poor design or improper installation—issues that, if addressed early, could prevent much of the performance degradation seen on the P-F curve.
How D-I-P-F Shifted the Conversation
The subsequent support from the Uptime publisher, Reliabilityweb, over the next decade helped bring the idea into the mainstream. It was a useful expansion.
It was clear to me that the D and I contributed significantly to the conversation. Many of the events that drive the P-F curve toward lower performance and failure are due to poor design or improper installation.
Issues like a critical lack of stiffness are typically rooted in design shortcomings. Failure modes like misalignment can usually be traced back to installation. So, D-I-P-F was an advance in thinking about the problem and helped identify the potential sources of issues.
Design and installation don’t just start the curve – they often dictate its entire shape.
I heard a new aspect of the P-F curve at CMC-Latam (Congreso Mantenimiento & Confiabilidad, 2025), a maintenance conference in Chile. The speaker was Noria’s president, Bennett Fitch. Fitch presented a new perspective that further enriches our understanding of the P-F curve. Building on the existing framework, he introduced a critical new idea: the impact of repair actions on performance.
The new element first concerns your actions during the repair. If we are talking about a bearing (as we so often do), we inspect it periodically. Let’s say this inspection detects some vibration denoting deterioration. We schedule a corrective work order to replace the bearing.
The Repair Effect – Why Machines Don’t Always Return to Baseline
Once replaced, we are back in a stable situation, with one difference. The repair restores functionality, but it may leave the machine operating at lower performance than before: essentially a “new normal” that is worse than the pre-failure baseline. The situation is stable, but performance may be diminished.
Another approach is to consider that any deterioration is an invitation for some level of root cause analysis (RCA). Without that added process, the lower performance might become permanent.
The replacement solves the immediate problem, but we may be leaving the cause on the table to return and harass us. Much of the maintenance workload stems from repetitive failures. The repetition is often due to avoidable causes left untreated when the repair addresses only the direct problem. We might have replaced the offender (the bearing) but not treated the cause (misalignment).
A repair without cause removal only resets the countdown to the next failure.
If the cause is misalignment, then replacing the bearing without treating the cause (misalignment) is just kicking the can down the road. Aligning the shafts and couplings restores the machine to like-new performance and puts us on the flat (good) part of the curve.
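The arithmetic of “resetting the countdown” is easy to sketch. The numbers below are assumptions chosen purely for illustration, not field data: a bearing that would last years on an aligned machine but only months on a misaligned one.

```python
# Toy model: how many bearing replacements each repair policy requires over a
# fixed horizon. The lives are illustrative assumptions, not field data.
DESIGN_LIFE_MONTHS = 60      # like-new life on an aligned machine (assumed)
MISALIGNED_LIFE_MONTHS = 9   # same bearing running misaligned (assumed)

def replacements_over(horizon_months: int, cause_removed: bool) -> int:
    """Count bearing replacements over the horizon under one repair policy."""
    life = DESIGN_LIFE_MONTHS if cause_removed else MISALIGNED_LIFE_MONTHS
    return horizon_months // life

# Replace-only repairs restart the short, misaligned countdown every time;
# fixing the alignment restores the full design life.
print(replacements_over(120, cause_removed=False))  # replace-only policy
print(replacements_over(120, cause_removed=True))   # cause-removal policy
```

Under these assumed numbers, a decade of replace-only repairs consumes several times as many bearings (and work orders) as a single repair that also corrects the alignment.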
The great thing about this approach is that you can gradually reduce your workload. Of course, it does require an investment. Winston Ledet’s recommendation (in Don’t Just Fix It, Improve It, 2007) is to limit this kind of activity to no more than 1% of your workload so that your current customers don’t suffer. He was talking about attacking defects of all kinds. At that pace, your overall pool of defects will drop by almost 50% in three years. The result is a lower workload and more time available to treat the complicated or intractable problems.
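As a back-of-the-envelope check (my illustrative model, not Ledet’s), halving a defect pool in 36 months corresponds to a steady net elimination rate of roughly 1.9% per month:

```python
# Sketch (assumed model): the defect pool shrinks geometrically when a small,
# steady slice of the workload removes causes instead of only symptoms.
def defect_pool_after(initial_defects: float, monthly_rate: float, months: int) -> float:
    """Remaining defect pool after `months` of steady elimination."""
    pool = initial_defects
    for _ in range(months):
        pool *= (1.0 - monthly_rate)
    return pool

# About 1.9% net elimination per month halves the pool in three years,
# consistent with the "almost 50% in 3 years" figure quoted above.
remaining = defect_pool_after(initial_defects=1000, monthly_rate=0.019, months=36)
print(round(remaining))  # roughly half the starting pool
```

The point of the sketch is that a small, sustained effort compounds: the rate looks trivial month to month, yet it transforms the workload over a few years.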
This activity is the most fun a maintenance person can have without getting arrested!