We hear a great deal about what is happening in the condition monitoring space regarding the Industrial Internet of Things (IIoT) and other digital transformation strategies. The results promised by machine learning (ML) and artificial intelligence (AI) as a form of condition monitoring have encouraged organizations across a variety of industries to put data science to work for them.
In this way, they hope to improve the effectiveness of their maintenance efforts and safeguard the continued health of their critical assets. Like humans, computers can learn from past experience to make informed predictions about future outcomes.
But is condition monitoring really that simple?
The answer is no.
Imagine telling your organization that you can identify a particular failure mode, but only if they let the machine fail at least three times so you can learn from the data and recognize the patterns of that specific failure mode. You would likely be escorted off the premises and your technology scoffed at. Herein lies the problem with machine learning.
One could argue that we do not want to train models to recognize individual failure modes at all, and that we only need to be notified when a particular asset produces data that deviates from established norms. Machine learning can do a wonderful job of this. However, so can trend data, which has been in use for decades and requires no additional capital investment.
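To see why, consider how little machinery a basic deviation alert actually needs. The following is a minimal sketch of such a rule built on nothing more than trailing trend statistics; the window size, sigma limit and temperature values are hypothetical.

```python
import numpy as np

def trend_alert(readings, window=30, n_sigmas=3.0):
    """Flag the latest reading if it deviates from the recent trend.

    A plain statistical rule, no model training required: compare the
    newest value against the mean and spread of the trailing window.
    """
    history = np.asarray(readings[-(window + 1):-1], dtype=float)
    latest = readings[-1]
    mu, sigma = history.mean(), history.std()
    return abs(latest - mu) > n_sigmas * sigma

# Example: a stable bearing temperature trend with a sudden excursion.
temps = [71.2, 70.8, 71.5, 70.9, 71.1] * 6 + [78.4]
print(trend_alert(temps))  # True: the jump stands out from the trend
```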
So, what is the real value of creating these machine learning models?
Not much, if we were to end the story here. But we have a tremendous amount of data available to us, and with it we can train a machine learning model to learn what acceptable conditions look like in comparison to unacceptable ones.
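One way to approximate this is novelty detection trained only on data captured while the asset was known to be healthy. Here is a sketch using scikit-learn's IsolationForest; the sensor features (vibration level, temperature, motor current) and their values are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical training set: feature vectors (overall vibration in mm/s,
# temperature in C, motor current in A) from known-healthy operation.
healthy = rng.normal(loc=[2.0, 70.0, 40.0],
                     scale=[0.2, 1.5, 2.0],
                     size=(500, 3))

# Learn the envelope of "acceptable" conditions from healthy data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New observations: one typical, one drifting well outside the envelope.
new = np.array([[2.1, 70.5, 41.0],
                [4.8, 85.0, 55.0]])
print(model.predict(new))  # typically [ 1 -1 ]: -1 marks the outlier
```

Note what this does and does not do: it separates acceptable from unacceptable conditions, but it says nothing about which failure mode is developing.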
We can also fold multi-technology and process data into this strategy, and in doing so, accurately identify which piece of data, or which specific sensor, is producing the outlier. That channel can then become the targeted focus of the analysis team.
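A simple way to attribute an outlier to a specific channel is a per-sensor deviation score against a healthy baseline. A sketch, with hypothetical sensor names and synthetic data:

```python
import numpy as np

def flag_outlier_sensors(names, baseline, reading, n_sigmas=3.0):
    """Point the analyst at the specific channel that is deviating.

    names:    sensor labels, one per column.
    baseline: (samples, sensors) array from known-good operation.
    reading:  (sensors,) array for the current observation.
    """
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)
    z = np.abs(reading - mu) / sigma
    return {n: float(s) for n, s in zip(names, z) if s > n_sigmas}

names = ["vibration_mm_s", "oil_temp_C", "motor_amps"]
rng = np.random.default_rng(7)
baseline = rng.normal([2.0, 70.0, 40.0], [0.2, 1.5, 2.0], size=(500, 3))

# Only the motor-current channel is abnormal here; the result names it,
# so the analysis team knows exactly where to start.
print(flag_outlier_sensors(names, baseline, np.array([2.1, 70.4, 58.0])))
```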
But what is the value of doing this?
Historical data suggests that most facilities have roughly 80% of their assets in good health, meaning approximately 20% carry an identifiable defect. By applying this screening process, we can eliminate nearly 80% of the data review time required of the analysts.
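A back-of-the-envelope check of that arithmetic, using hypothetical route sizes and review times:

```python
assets = 1000                # hypothetical route size
healthy_fraction = 0.80      # assets screened out as in good health
minutes_per_asset = 3        # hypothetical manual review time per asset

before = assets * minutes_per_asset
after = assets * (1 - healthy_fraction) * minutes_per_asset
print(f"review time: {before} min -> {after:.0f} min "
      f"({(before - after) / before:.0%} saved)")
# review time: 3000 min -> 600 min (80% saved)
```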
This frees up their schedules and enables them to focus on higher-level data and more complex problems that require a combination of equipment, process, and domain knowledge to resolve. In doing so, they can increase the percentage of healthy equipment and reduce the number of identifiable defects.
Most engineers and analysts do not enjoy flipping through sets of data in the hope of finding a problem; in most cases, their real joy comes from figuring out the cause of the problem. Machine learning can be used to maximize the analyst's time, which enables a stronger maintenance and reliability response and allows the program to expand by adding assets or technologies.
As previously mentioned, algorithms can be built to identify anomalies down to the failure mode level, but they must be accompanied by robust domain knowledge spanning several disciplines, such as mechanical, electrical and stationary equipment. Subject matter experts should have a fundamental understanding of both the equipment and the measurement devices.
This process is not for the faint of heart, and while it does require the collaboration of software engineers, data scientists and condition monitoring domain experts to build these precise models, the benefits are profound.
When we consider oil analysis, for example, the algorithm must encode knowledge of the asset's individual components, parts and metadata.
Furthermore, mappings of the sampled lubricant to the specific test slates are a must, and knowledge of the appropriate threshold values is critical to building sound machine learning models for lubrication analysis.
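To make this concrete, here is a hypothetical schema sketch of that metadata layer. The slate name, elements and threshold values are illustrative inventions, not a real lubrication standard.

```python
from dataclasses import dataclass

@dataclass
class TestSlate:
    name: str
    thresholds_ppm: dict[str, float]   # element -> alarm limit (ppm)

@dataclass
class Component:
    asset_id: str
    component: str                     # e.g., "gearbox"
    lubricant: str
    test_slate: TestSlate              # mapping of sample to test slate

SLATE_GEAR = TestSlate("industrial_gear_oil",
                       thresholds_ppm={"Fe": 100.0, "Cu": 25.0, "Si": 15.0})

gearbox = Component("P-101", "gearbox", "ISO VG 220", SLATE_GEAR)

def evaluate(sample_ppm: dict[str, float], comp: Component) -> dict[str, float]:
    """Return the elements that exceed this component's slate thresholds."""
    limits = comp.test_slate.thresholds_ppm
    return {el: v for el, v in sample_ppm.items()
            if v > limits.get(el, float("inf"))}

print(evaluate({"Fe": 180.0, "Cu": 10.0, "Si": 4.0}, gearbox))  # {'Fe': 180.0}
```

Without this structure, a model has no way to know that 180 ppm of iron means something different in a gearbox than it does in a hydraulic system.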
Likewise, in vibration analysis, defining the regions of interest and discovering patterns within the time waveform and the Fast Fourier Transform (FFT) spectrum is just a starting point for your team. This base-level knowledge includes understanding the metadata and its unique calculations, which tie to specific failure modes and failure reasons.
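A minimal sketch of that starting point, using NumPy's FFT on a synthetic waveform; the sample rate, running speed and outer-race fault frequency (BPFO) are hypothetical values for illustration:

```python
import numpy as np

fs = 10_000                      # sample rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic waveform: running speed at 29.5 Hz plus a small tone at a
# hypothetical outer-race defect frequency (BPFO = 107 Hz), plus noise.
bpfo = 107.0
x = (np.sin(2 * np.pi * 29.5 * t)
     + 0.4 * np.sin(2 * np.pi * bpfo * t)
     + 0.05 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Region of interest: a narrow band around BPFO. Energy rising here points
# to a specific failure mode, not just a generic "vibration is high" alarm.
band = (freqs > bpfo - 2) & (freqs < bpfo + 2)
print(f"BPFO band peak = {spectrum[band].max():.3f}, "
      f"noise floor ~ {np.median(spectrum):.4f}")
```

The hard part is not computing the spectrum; it is knowing which bands matter for which component, which is exactly the domain knowledge the article describes.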
Your team must also bring a fundamental understanding of the measurements themselves, and that understanding is often missing from most, if not all, of the off-the-shelf offerings available today. When you strip away this fundamental knowledge and rely solely on simple linear regression, the number of inaccurate readings, both false positives and false negatives, grows tremendously. This only serves to give machine learning technology a bad reputation.
While the role of the condition monitoring analyst will evolve over time, this should be viewed as a positive transition; their involvement in creating and maintaining these machine learning applications, and their efforts to continuously update the models, will be invaluable to the organization.
These database creation and upkeep endeavors will be at the heart of every condition monitoring program, and every machine learning and artificial intelligence algorithm’s accuracy will depend on the analyst’s skill, tenacity and knowledge.