There has been a great deal of debate over the years revolving around how maintenance or, more importantly, maintenance processes should and should not be measured.
In fact, there is even a committee within the Society for Maintenance & Reliability Professionals dedicated to answering those questions. The theories vary far and wide, but in essence, most of the debate centers on how to measure both the current status and the end result of what has been accomplished. Should it be measured by maintenance cost per unit of output, cost as a percentage of replacement value, equipment uptime, or something else? The debate continues to rage, but we contend that it is focused on the wrong timeframe and, perhaps, the wrong framework. All of these measurements, as stated, concern themselves with “outcomes”, or after-the-fact measurements of changed variables.
If we look back 70 or 80 years, to when most of the current accounting and measurement methods were developed, we find that not much has changed about how we report and measure effectiveness. But times have changed significantly. There are four primary reasons why new performance measures are required:
1) Traditional accounting and measures are no longer relevant to a company moving toward a world-class operating environment, although they portray a certain reality.
2) Customers are requiring higher standards; competition has increased, which, in turn, requires metrics that relate to how well the organization is meeting those standards and competition.
3) Management techniques, technology and reporting mechanisms used in plants have changed significantly.
4) Behavior change is now recognized as a key contributor to the success of any process initiative.
Leading vs. Lagging Indicators
The most compelling reason for a change in approach is that the measures stated previously are based on lagging, or outcome, indicators. These are after-the-fact results, unchangeable once the measurement period has been completed. Many decisions are now pushed down to the shop floor, and for those individuals, and for that level of focus, the old high-level outcome measures are inadequate. We want measures that are meaningful to the entire organizational hierarchy. We can then use those measures, and others, to monitor and promote particular behaviors by our employees. Using outcome indicators is like looking out the back window of a car to see where you’ve gone.
Today’s environment requires measurements that can predict, determine and influence desired outcomes. We need to be able to affect the final outcomes for whatever period we are measuring by developing and monitoring interim indicators.
We would like to use an analogy. Measuring maintenance is like investing in the stock market. Our investments should be geared to high-value returns that are predictable. A common strategy is to look at leading “market” indicators to judge how well the investments are going to pay back, which is the lagging indicator.
• If the leading indicators are “bearish”, you have time to correct your actions.
• If the leading indicators are “bullish”, you know that your efforts (investments) are going to produce the required return.
Managing the trends of the leading indicators is our view of successfully managing your investment in maintenance. So, the premise is to measure both leading and lagging indicators; but, it must be done in some context, some overall process that integrates with your direction. We call such a process the “Managing System.” This forms the basis for integrating people and processes within a common framework.
The “Managing System” is the umbrella process used to guide the organization on a day-to-day basis and is considered a foundation process. The Managing System integrates the overall strategy for the plant to a series of cascaded goals and objectives that are linked down through the organization. It then is used to establish the measurements of these goals at each level, and the key process indicators needed to ensure that the processes remain healthy. These measures are a mixed set of the proper leading and lagging indicators.
This is the mechanics of setting the system up; the key is in utilizing the system. The system is used as the primary vehicle to review how well we are performing against what we said we would do. It’s the “plan, do, check, act” model. To make this work, it should be used at all levels of the organization. Have the reviews on a scheduled frequency, publish the results of the measurements, and hold people accountable for the end results. By using a mix of leading and lagging indicators, as we review the leading indicators on a weekly basis, we have the ability and time to correct deviations from expectations by the time we review the lagging indicators at the end of the month.
As we assess plants, we see a semblance of this system in place, but it’s often disjointed and based on lagging indicators that do not tie the strategic direction to tactics used at the floor level. In most cases, the measurements used are wrong and don’t relate to the behaviors that you want to produce; so, it doesn’t provide a vehicle for change management.
It’s important to measure, we all know that, but it’s imperative to measure the right things! Maintenance measurements are a part of a global set of indicators that gauge your facilities’ viability. So, it is important to ensure that we measure these right sets of information. We next need to look at what are these right measures for maintenance within this set. We will start by looking at the comprehensive set of leading and lagging indicators, which we call Process and Outcome.
The fact is that lagging measures are reflective end results of what people are doing at the front end of the process. For example, there really is no such thing as wrench time without acknowledging that people are turning the wrenches. Time has to do with people planning to be at a certain location, at a specific time and having the necessary tools and materials. In order to have the tools and materials, someone has to identify and order these ahead of time. Someone else has to make the equipment available. Only then can we complete the work and reduce the amount of time spent turning the wrench. Planning, scheduling, making equipment available, showing up when requested and performing that work in a timely manner are all behaviors.
We don’t tend to think of metrics this way. In fact, the way we label results takes the people out of the equation – well, with one major exception: reprimands upon failure to meet targets. For us to truly manage a process, we have to place people and their behavior back into the equation. We must conceptualize a Managing System that includes people’s behavior.
What Do They Look Like?
For this discussion, we will concentrate on the Process type of indicators. We will not include all of the indicators in this discussion, just a key few. The Outcome indicators are also important, but most of them are well-known and used, requiring just a brief discussion toward the end of this presentation. Remember the general rule: You must look at multiple indicators to get the big picture of what is happening in your plant.
So, with that said, let’s begin our look at this set.
Estimated Backlog in Crew Weeks: Backlog is defined as the total amount of work that has been identified (planned or unplanned) but not yet completed, including work in progress; it may or may not include PM inspections for a certain time frame. It requires that all work requests receive a rough estimate from the planner before going into the backlog. It is calculated by dividing the estimated work contained in the crew’s area by the available crew hours in a week. We suggest tracking the same figure for each crew and for the department as a whole.
The backlog indicator should be managed to five to seven crew weeks. It is used to balance the amount of work being received vs. executed, and to justify the need for contractors as supplementary help before the backlog gets out of control.
The reason for using the five- to seven-week level as a control is to allow buffer time for planning and logistics to prepare and gather the resources required to perform the work. We are assuming that the organization in question uses a forward scheduling model. The backlog can also be viewed by type of work; for example, a growing trend in work resulting from inspections would show that your preventive maintenance (PM) program is becoming more effective (or that your equipment is quickly deteriorating).
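As a sketch of the calculation described above, backlog in crew weeks is simply estimated open work divided by the crew’s available hours per week. The work-order hour total and crew size below are hypothetical examples.

```python
# Backlog expressed in crew weeks, per the definition above.
# All figures are hypothetical examples.

def backlog_crew_weeks(estimated_hours, crew_size, hours_per_week=40):
    """Estimated open work divided by the crew's available hours per week."""
    available_hours = crew_size * hours_per_week
    return estimated_hours / available_hours

# Example: 2,400 estimated hours of open work for a 10-person crew.
weeks = backlog_crew_weeks(2400, crew_size=10)
print(round(weeks, 1))  # 6.0 -- inside the five- to seven-week control band
```

The same function can be applied per crew and for the department total, or run against subsets of the backlog (e.g., inspection-generated work only) to watch trends by work type.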
Percent PM/PdM Compliance: This is the measure of whether you are keeping up with the program as scheduled. If you have rationalized and smoothed the requirements, this process is easier to manage and the measurement should be more stable. Mature, proactive organizations settle for no less than 100 percent compliance, as this is the key to gaining control.
Percentage of Reactive Work: This metric will tell you how well you control the work management process. What you are measuring is the percentage of work that you are performing that is not on the schedule. In other words, this is the tracking measure for the “schedule breakers” — the work that you’re doing today that you didn’t know about when you walked in the door.
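As a minimal sketch, the reactive-work percentage is unscheduled executed hours over total executed hours. The hour totals below are hypothetical examples.

```python
# Percentage of reactive ("schedule-breaker") work, per the description above.
# Hour totals are hypothetical examples.

def percent_reactive(unscheduled_hours, total_hours):
    """Share of executed work that was not on the published schedule."""
    if total_hours == 0:
        return 0.0
    return 100.0 * unscheduled_hours / total_hours

# Example: 38 of 400 executed hours this week were schedule breakers.
print(percent_reactive(38, 400))  # 9.5
```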
Compliance to Schedule: Again, we presume that you have forward scheduling and that this week’s schedule was prepared last week. Once this is established, we measure how many jobs were completed against the schedule. This includes all PM work, which by definition is planned in advance and scheduled accordingly. This indicator should be at least 90 percent, which allows for the emergent work discussed above running at 10 percent or less.
This too is measured on a weekly basis by crew and by department total. As you can understand, this takes good cooperation from production to attain these figures. That’s the first piece of the measure of work management effectiveness.
Schedule Loading Factor: This measures the percent of available man-hours that are scheduled on a weekly basis. High schedule compliance with few people scheduled makes little sense. There has been a lot of debate about what this number should be: a 100 percent or a 90 percent loading factor. A 90 percent load assumes that you have about 10 percent emergent work on a continuing basis, so why load more? The 100 percent scenario is used by more stable operations where emergent work is low; there, about 5 percent of the man-hours are scheduled on very low-priority work, and when the schedule is broken, these people are sent to perform the emergent work. Schedule loading is the second factor in the work management measurement.
Wrench Time: Wrench time is the measure of the percent of a workday that craftsmen spend performing actual work. It is in reality a lagging indicator, but it is part of the overall measure of work management effectiveness. The world-class number for this indicator is about 65 percent, but most clients that we initially assess measure at 28 percent to 35 percent. We find many barriers placed in the way of the workers, keeping them from being effective. Typically, we see high “wait”, “travel” and “materials” delays, all of which are controllable by a proper work management process.
Wrench time should be measured quarterly by performing a “day in the life of …” study, using multiple assessors who each spend several days with a single craftsman and measure the time spent in the following categories: wrench time, travel, breaks, materials, instruction, waiting, meetings, administrative and tools. The studies are performed on non-PM work.
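Summarizing such a study is straightforward: wrench-time percentage is the wrench-time hours divided by the total observed hours across all categories. The category totals below are hypothetical study results, chosen to land in the typical initial-assessment range quoted above.

```python
# Rolling up a "day in the life of" study into a wrench-time percentage.
# Category hour totals are hypothetical study results.

study_hours = {
    "wrench time": 24.0, "travel": 8.0, "breaks": 6.0, "materials": 10.0,
    "instruction": 4.0, "waiting": 10.0, "meetings": 5.0,
    "administrative": 7.0, "tools": 6.0,
}

total_observed = sum(study_hours.values())           # 80.0 hours observed
wrench_pct = 100.0 * study_hours["wrench time"] / total_observed
print(wrench_pct)  # 30.0 -- within the 28-35 percent range typical at first assessment
```

The same breakdown also shows where the lost time goes (waiting, travel, materials), which is where the work management process can intervene.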
Gauging the Work Management Process
The method that we use to measure the overall effectiveness of the work management process is what we call the Proactive Work Capacity index (PWCi). The previous part of this discussion has given us the information we need to gauge this process. We now take the previous three figures and multiply them together to get an indicator:
PWCi = (Schedule Compliance) x (Schedule Load) x (Wrench Time)
Using the best-of-class numbers quoted for the individual components, world-class levels would be:
PWCi (World Class) = (0.90) x (0.90) x (0.65) = 0.53
This index should be calculated weekly and trended over time. It is the best overall measure of the effectiveness of your entire work management process as it measures the key factors that you are trying to improve: scheduling at a maximum number, complying with that schedule, and the productivity of the workers that are executing the individual jobs on the schedule.
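The PWCi calculation above can be sketched directly; the world-class figures are from the text, while the “typical initial assessment” inputs are hypothetical examples.

```python
# PWCi = schedule compliance x schedule load x wrench time, as defined above.

def pwci(schedule_compliance, schedule_load, wrench_time):
    """Proactive Work Capacity index; all inputs are fractions (0 to 1)."""
    return schedule_compliance * schedule_load * wrench_time

# World-class targets quoted in the text:
print(round(pwci(0.90, 0.90, 0.65), 2))  # 0.53
# A typical initial assessment (hypothetical figures):
print(round(pwci(0.70, 0.80, 0.30), 2))  # 0.17
```

Because the index is a product, a weakness in any one factor drags the whole measure down, which is exactly why it works as an overall gauge of the work management process.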
In most instances, a scorecard, or dashboard, is used to provide a quick visual presentation of the status of the metrics you have chosen. A red indicator means the metric is below target; a yellow indicator means it is slightly below or just meeting the target; a green indicator means it is above target.
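A minimal red/yellow/green rule along these lines might look as follows. The “slightly below” tolerance (here, within 10 percent of target) is an assumption for illustration; the article does not specify a threshold.

```python
# A minimal red/yellow/green scorecard rule, per the description above.
# The 10 percent "slightly below" tolerance is an assumed illustration value.

def rag_status(value, target, tolerance=0.10):
    """Return 'green' above target, 'yellow' if slightly below or just
    meeting it, 'red' otherwise."""
    if value > target:
        return "green"
    if value >= target * (1 - tolerance):
        return "yellow"
    return "red"

print(rag_status(0.95, target=0.90))  # green
print(rag_status(0.90, target=0.90))  # yellow (just meeting the target)
print(rag_status(0.70, target=0.90))  # red
```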
Are They the Answer – Why Not?
As with all well-intentioned tools, this suite of indicators is not the complete answer. If the indicators were taken and used in the spirit in which they were created, that would be one thing. However, human beings have a natural aversion to being measured and held accountable for performance. And therein lies the rub. No matter how well-intentioned the indicators we’ve set up, the people on the shop floor will find a way to circumvent them. In fact, we have found clients that we’ve worked with to be quite ingenious when it comes to finding ways to “beat the system.” So, while these indicators track the apparent success of the process, they don’t tell the whole story. The key to achieving results and sustaining the process is to combine process indicators with behavioral indicators.
We have found it extremely beneficial to focus on behaviors as part of any initiative. In the past, we used to talk about “best practices”; we now talk specifically about “best behaviors.” With any initiative, we often spend a great deal of time identifying those indicators that will give us reassurance that the process is working. If we’re really on the ball, we’ll not only develop results (or lagging) indicators, but often process (leading) indicators. Tied into other systems, these provide quantitative evidence of success, or lack thereof. What happens when the pressure is reduced (i.e., consultants or external help goes away)? Often, organizations revert to the old and comfortable ways, or we find that the quantitative evidence has been creatively dealt with and results aren’t what we think or wish.
We have discovered that it is not enough to just manage the numbers. What is most valuable is to develop, along with the process, a list of behaviors we want the organization to exhibit, and then behavioral metrics aligned with those desired behaviors. After process installation, or hard wiring, we then program the organization by coaching and facilitating to those desired behaviors and provide qualitative measures.
How Do You Establish Them?
For the sake of change management, a behavior is broken down into two main components. Changing beliefs, knowledge and vision is the intellectual, or cognitive, component. Changing what is done, how it is done, and what is gained is the action component.
Assessing current behaviors and beliefs is important when establishing baseline “as is” metrics and indicators. An example of this is the typical belief that “we are heroes if we drop everything to correct breakdowns.” In a reactive plant environment, this is the norm. There is a rush, a sense of pride and accomplishment: “See how quickly we responded and got production back on line.” This belief is often reinforced by promotions and pats on the back.
In most work management process improvements, our desire is to change the reactive belief to one that stresses zero breakdowns and planned maintenance. This leads to more profitability for the company and pays off for the individual by maintaining employment, providing a different level of satisfaction, and removing the chaos from the day. The new belief is one that states, “Responding to breakdowns means that the process has failed and, if not corrected, could lead to the demise of the company.”
Interviewing all levels of the workforce to find their beliefs and how they go about their jobs is important for establishing baselines. This is used to identify how the organization has moved, once the process improvement begins. It can also be used to establish scorecard “red light” behaviors.
It is extremely important that prior to commencing any installation or implementation activities that the new desired behaviors are identified. Can you think of some?
• People attend planning meetings and are prepared to make decisions.
• A craftsperson knows what he or she will be working on during the next week.
• A craftsperson is confident that he or she will be allowed to perform that identified work.
• An operator knows what equipment will be taken out of service tomorrow.
• A planner understands the importance of clear and concise work instructions.
• Work is not permitted to begin until the parts are available.
• Feedback is provided on PM activities.
And the list goes on.
The desired proactive behaviors and beliefs, once determined, become the scorecard green light behaviors. Observations and self-reports of sometimes doing things the “new way” but occasionally reverting to the old behaviors are yellow light conditions. It is important to praise this transition phase and refrain from focusing on “not good or fast enough” in order to encourage continuation toward green light behavior and belief. Reward and reinforcement create the desired behavioral change; punishment only causes resentment and resistive behavior.
How to Measure
There are several tools available to measure behaviors. Two that we use during a work management process improvement are the System Installation Status (SIS) and the Behavioral Pyramid. Both of these tools are developed (populated) during the design phase of an engagement and are used in two different ways.
The SIS is used to measure how well specific key elements of the engagement have been installed. This may apply to specific meetings, or to elements of the engagement that are critical to successful implementation. The design team determines at what rate they expect the installation to take place over the duration. The question is not merely whether the parts are in place but, more importantly, whether the people involved exhibit the behaviors required to sustain the process.
As you can see in Figure 2, the key process elements are given a rating from 0 to 6, from “not agreed to” to “evergreen,” or self-sustaining. As the engagement progresses, the required behaviors are assessed by the participants and scored on the chart. If a score is at the desired level, the number turns green; if not, red. This information can also be presented graphically, as shown in Figure 3.
The Behavioral Pyramid looks only at behaviors required to sustain a process once the consultants and internal forces are long gone. Looking at Figure 4, you can see where the monitored behaviors are based on those desired behaviors identified during the design phase.
Underlying each block of the pyramid is a series of questions designed to further help both the user and the evaluator determine how well that specific behavior is embedded. As with all things, these tools just tell part of the story.
Why Behavior Focus is the Key
Having all of these metrics and tools is important, but, as stated earlier, modified and new behaviors are the key to any change initiative. Toward the end of a recent engagement, the client asked us for a way to measure the success of the engagement. The immediate answer was, “We have met all of the targets.” But we were then challenged to assure the client that, once left alone, the process would continue and not revert to the past. This caused us to pause and think about what would be a prime indicator of continued progress. We had the metrics and the behavioral tool, but they were measured differently: the metrics quantitatively, the behaviors qualitatively. In the end, we came up with the idea that a combination of both would provide an ideal vehicle to certify the organization “competent” or “sustaining” for the process being modified. We would measure on two axes, performance and behavior, and, using a normalized scoring methodology, rate the organization on a 1 to 15 scale.
At the conclusion of an engagement, we can now certify an organization against both the performance metrics and the behavioral metrics. One cannot achieve sustainability without strong evidence of the presence of both elements. Simply set forth, we measure the client against normalized assessment points for both performance and behavior. We then grade or graph against two axes, and if the client passes a set point agreed to by the client, we can declare the organization competent, sustainable or high-performing. The distinction among the three grades matters less at this point than the fact that the questions depend on the process chosen and are used to evaluate performance (quantitative) and behaviors (qualitative). They can be used at the front end for baseline determination and at the back end as a measure of organizational movement.
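As a sketch of this two-axis certification, one plausible rule is to score each axis on the 1-to-15 scale and let the weaker axis govern, since sustainability requires strong evidence of both. The min-combination and the set points below are assumptions for illustration; the article does not specify the scoring mechanics.

```python
# A sketch of two-axis certification scoring. The min-combination rule and
# the set points (8, 11, 14) are illustrative assumptions, not the article's
# actual methodology.

def certify(performance_score, behavior_score, set_points=(8, 11, 14)):
    """Both axes are normalized scores on a 1-15 scale; the weaker axis
    governs, since certification requires strong evidence of both."""
    combined = min(performance_score, behavior_score)
    competent, sustainable, high_performing = set_points
    if combined >= high_performing:
        return "high-performing"
    if combined >= sustainable:
        return "sustainable"
    if combined >= competent:
        return "competent"
    return "not certified"

print(certify(13, 6))   # not certified: behaviors lag despite strong performance
print(certify(12, 12))  # sustainable
```

Note how a unit with green scorecards but weak observed behaviors (the first example) fails certification, mirroring the business-unit story below.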
Without going into the boring details, the certification tool has turned out to be a tremendous change management tool. In one organization, the leading business unit demanded to be “certified.” All of the scorecards were green, and all of the behavioral metrics (as measured by the business unit, not independently) were on track, meeting all established performance targets. For the certification activity, we used business unit personnel and consultants, with the consultants taking the lead position. And, lo and behold, the business unit, much to everyone’s chagrin, did not pass. Why? The observed behaviors of the individuals at both craft and management levels did not support the process or engage in the targeted new behaviors. After much soul searching, the manager in charge participated in determining the path forward and corrective actions. Three months later, they tried again, and passed. What we discovered was that the certification process became a tremendous focusing tool for the organization. It made them go back and reflect on those desired behaviors and galvanized them to get it right the next time.
Thus, from outward appearances, one might assume that the process is firmly in place and sustaining. This is especially true with early performance gains. However, without these requisite behaviors, starting at the very highest levels of the organization, firmly in place, one can expect a return to the status quo, once the training wheels have been removed.
In order to reap the benefits of any organizational initiative involving changes to process, qualitative as well as quantitative measures must be in place:
• Leading and lagging indicators inherently contain the fruits of workers’ behaviors, beliefs and knowledge.
• For organizations to understand the impact of change, and for that change to be measured or assessed as successful, measurement of the underlying behavior change must be part of both interim and “goals met” measurement.
• Quantitative and qualitative measures can be used together to understand whether, and how much, change is happening.
• Behavior change is the more important indicator when assessing the sustainability of change.
• Quantitative metrics do very little to predict sustainability, but they can give a snapshot of what is happening at a given point in time.
To learn more about this process, or to get additional details from the figures, contact Strategic Asset Management Inc. by phone (800-706-0702) or e-mail (firstname.lastname@example.org). You may also access the company’s Web site at www.samicorp.com.