You’ve heard the catchphrases. You’ve read the success stories. You’ve seen the return on investment (ROI) calculations and the positive trends that reliability engineers deliver. You’ve even lobbied upper management. Your argument has been so convincing that you’ve secured approval in this year’s budget to hire a reliability engineer. The hiring process has begun, and you can’t wait for the results: more reliable equipment, fewer failures, and increased availability and production.
If only it were that easy. Building a reliability program led by competent professionals takes work. It requires setting expectations properly. Most of all, it requires time. How do you accelerate the process to ensure the maximum benefit in the shortest time? Avoiding these top five time wasters could help.
No, we aren’t talking about whether your reliability engineer could have been a fighter pilot or whether he needs bifocals, although not being able to read the computer screen while reviewing an asset hierarchy certainly will slow one down. The lack of an overall vision for your facility’s reliability program is the No. 1 source of inefficiency and wasted effort, not just in a reliability program but in any initiative.
Before I became a reliability engineer, I worked as a project engineer improving equipment performance through sound engineering principles. I was often involved in maintenance engineering activities, troubleshooting equipment failures as they occurred in the hope of getting the plant back up and running as soon as possible. My boss had read an article about the benefits of having a staff reliability engineer, and literally overnight the reliability engineer position was created. I was then asked to take the job.
What changed? The title on my business card and the signature line on my e-mails. I continued to perform the same maintenance engineering activities without any real strategic direction, for we had no vision of what we wanted our reliability program to look like.
Throughout the next year I worked within my means to learn more about what true reliability engineering is. I learned the difference between a condition-based maintenance strategy, a time-based maintenance strategy and a run-to-failure maintenance strategy, the last of which we had (by neglect) been practicing at our facility for years.
More importantly, I learned that a smart combination of the three strategies is ideal, depending on the criticality of the system in question. I educated my boss, and he educated his boss. We learned that reliability best practices are well-established, so there’s no need to reinvent the wheel.
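To make that idea concrete, here is a minimal sketch (in Python) of how a criticality score might steer the default strategy chosen for an asset. The scoring scale, thresholds and asset names are illustrative assumptions, not a standard; a real criticality assessment typically weighs safety, environmental, production and cost consequences.

```python
# Illustrative sketch only: the scale, thresholds and asset names are assumptions.
def default_strategy(criticality: int) -> str:
    """Map a 1-10 criticality score to a starting maintenance strategy."""
    if criticality >= 8:
        return "condition-based"   # monitor the most critical assets
    if criticality >= 4:
        return "time-based"        # scheduled inspections and overhauls
    return "run-to-failure"        # acceptable only where failure consequences are low


for asset, score in [("main compressor", 9), ("cooling tower fan", 5), ("break-room AC", 2)]:
    print(f"{asset}: criticality {score} -> {default_strategy(score)}")
```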
We also discovered that there are no shortcuts. For example, if you want accurate failure data, you need a computerized maintenance management system (CMMS) in place. It needs an accurate hierarchy that reflects the parent/child relationships of your plant or facility, and it needs to be populated by operations and maintenance with work requests, failure codes and material/labor dollars so that accurate analysis can be performed.
Building an accurate hierarchy takes time and money, and there are companies that have done it before and can help make the process less painful. The hierarchy is the foundation of any reliability program, and like the foundation of a house, if it’s built poorly the whole structure will eventually collapse.
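To illustrate what those parent/child relationships look like as data, here is a minimal Python sketch. The tags, field names and structure are hypothetical; any real CMMS defines its own schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a CMMS-style asset hierarchy; a real system has its own schema.
@dataclass
class Asset:
    tag: str                        # functional location or equipment tag
    description: str
    parent: "Asset | None" = None
    children: list["Asset"] = field(default_factory=list)

    def add_child(self, child: "Asset") -> "Asset":
        child.parent = self
        self.children.append(child)
        return child

# Plant -> area -> equipment -> component
plant = Asset("PLANT-01", "Example plant")
area = plant.add_child(Asset("AREA-100", "Utilities"))
pump = area.add_child(Asset("P-101", "Cooling water pump"))
motor = pump.add_child(Asset("P-101-MTR", "Pump motor"))

def path(asset: "Asset") -> str:
    """Walk up the parent links to build the full functional-location path."""
    return asset.tag if asset.parent is None else f"{path(asset.parent)}/{asset.tag}"

print(path(motor))   # PLANT-01/AREA-100/P-101/P-101-MTR
```

With a structure like this, work requests, failure codes and costs recorded against the motor can be rolled up to the pump, the area and the plant, which is what makes accurate analysis possible.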
Once there’s an understanding of reliability best practices, you need a plan that’s in line with the reliability program’s overall vision. Develop a clear methodology that can be understood at all levels of your organization. It should start with an assessment of current-state reliability practices — where are you and your facility now in your reliability journey? — and include well-defined tasks that can bring your program in line with your vision.
As with any good project plan, these tasks should have well-defined dates and persons responsible for completion. Resources such as people, money and time should be defined, and proper approval from site leadership must be obtained.
Communication is vital to ensure proper expectations are set and maintained. For example, if site leadership doesn’t understand that the site hierarchy must be established and a criticality assessment performed before maintenance plans can be developed and optimized, they’ll wonder what you’re working on in all of those cross-functional development meetings while not seeing the desired results.
Prioritization of effort is a challenge we all face, regardless of industry or occupation. In this economic environment, we’re constantly asked to do more with less, and there are only so many hours in the day. Where and how we, as reliability engineers, spend our time is of utmost importance.
The reliability engineer’s responsibilities are vast and overwhelming: developing cost-effective maintenance strategies for critical equipment; conducting root cause analysis investigations to eliminate or mitigate repetitive failures; implementing effective corrective actions; developing and ensuring proper use of facility management of change (MOC) policies and procedures; identifying limiting factors that lead to high energy, utility, maintenance and supply-chain costs. This list could go on and on, but the question remains the same: where do you start?
With a well-defined vision, a master plan and the proper education, you and your site leadership can determine where the priorities are and address the most critical items first.
How do you know how well you’re performing? Are you and the reliability program getting the results you desire? Are they in line with the overall vision set forth in your reliability program? How can you tell?
Waiting for the year-end numbers is like looking at the final score of a football game to see how your team did: you get the information you want, but far too late to make any adjustments. A robust reliability program has a strong mix of leading indicators, which tell you how you’re doing day in and day out, and lagging indicators, which tell you how you did.
If I wanted to evaluate the success of a facility’s predictive maintenance (PdM) program, I’d need to know the number of critical assets under surveillance, whether inspections are performed on time and the percentage of identified issues that get entered into the CMMS, as well as the overall program costs, the dollar value of saves and the annualized cost per asset.
Once these metrics are understood, they support informed decisions about where the program needs adjustment.
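As a hedged example of what tracking those PdM indicators might look like, here is a small Python sketch; every figure and name in it is invented for illustration rather than drawn from a real program.

```python
# Illustrative sketch only: all figures below are invented, not real program data.
critical_assets = 120            # critical assets identified at the site
assets_under_surveillance = 96   # of those, covered by PdM technologies
inspections_due = 400            # PdM inspections scheduled this quarter
inspections_on_time = 372
issues_found = 45                # defects identified by the PdM program
issues_entered_in_cmms = 41      # of those, written up as CMMS work requests
program_cost = 150_000.0         # annual program cost, $
value_of_saves = 410_000.0       # estimated value of avoided failures, $

coverage = assets_under_surveillance / critical_assets
schedule_compliance = inspections_on_time / inspections_due
capture_rate = issues_entered_in_cmms / issues_found
cost_per_asset = program_cost / assets_under_surveillance

print(f"PdM coverage of critical assets:     {coverage:.0%}")
print(f"Inspection schedule compliance:      {schedule_compliance:.0%}")
print(f"Issues captured in the CMMS:         {capture_rate:.0%}")
print(f"Annualized cost per monitored asset: ${cost_per_asset:,.0f}")
print(f"Value of saves per program dollar:   {value_of_saves / program_cost:.1f}x")
```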
Here again, there’s no sense in reinventing the wheel: industry-accepted indicators already exist. Whichever you adopt should be clear, concise and make sense for where you and your facility are on your reliability journey.
Whether you know it or not, all of us are on a reliability journey. Just as there’s no job that goes unplanned — even an unplanned job gets planned haphazardly during the execution process — we and the organizations we support make decisions every day that affect how smoothly the trip goes.
Proper alignment and planning can help you avoid the first three time wasters. Proper execution circumvents the fourth time waster, and proper control mitigates the final time waster. Comparison to industry-accepted standards shows us how far we’ve come and how much farther there is to go.
About the Author
Josh Rothenberg is a reliability subject matter expert at Life Cycle Engineering. Contact him at jrothenberg@lce.com.