

Jun-2018

Machine learning in asset maintenance

Application of machine learning to asset performance management marks a transition from estimated and statistical models towards measured patterns of behaviour.

John Hague
AspenTech

Article Summary

New methods and cutting-edge technologies are driving asset performance management (APM) well beyond its historical capabilities, rapidly increasing its bottom-line value. Technologies such as cloud computing, data science and machine learning are now being integrated, along with automated methodologies, directly into APM solutions.

This wave of integration places advanced analytical techniques into the hands of operators and engineers with previously unimagined scale and ease of use. The incremental progress in APM over the last 20 years pales in comparison to what is now possible through digital transformation.

Low-touch machine learning is the key catalyst for scaling APM beyond existing first-principles-based solutions and ‘armies’ of consultant engineers and data scientists. The widespread integration of machine learning into APM marks a transition from estimated engineering and statistical models towards directly measured patterns of asset behaviour.

Operators of refineries can now readily extract value from decades of existing design and operations data to better manage and optimise asset performance. This low-touch machine learning method continuously embraces changes in asset behaviour, empowering real-time APM value creation. Vetted and tested across diverse industries, scalable across multiple assets, and powered by cloud and parallel computing, low-touch machine learning ushers in a new era of performance and optimisation for every industry.
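
To make this concrete, the short sketch below illustrates one way ‘measuring patterns of asset behaviour’ can work in practice: an unsupervised model is fitted to historical sensor data so that it learns normal operation, and new readings that deviate from the learned patterns are flagged for investigation. This is an illustration only, not AspenTech’s implementation; the sensor tags, the values and the choice of scikit-learn’s IsolationForest are assumptions made for the example.

# Minimal sketch (assumed approach, not a specific vendor implementation):
# learn 'normal' asset behaviour from historical sensor data, then flag
# new readings that fall outside the learned patterns.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for years of historian data: bearing temperature (deg C),
# vibration (mm/s RMS) and flow rate (m3/h) during healthy operation.
normal_history = np.column_stack([
    rng.normal(65.0, 2.0, 5000),
    rng.normal(1.2, 0.1, 5000),
    rng.normal(300.0, 10.0, 5000),
])

# Fit an unsupervised model of normal behaviour; no failure labels required.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_history)

# Two new readings: one consistent with history, one showing early degradation
# (hotter bearing, higher vibration, reduced flow).
new_readings = np.array([
    [66.0, 1.25, 298.0],
    [78.0, 2.10, 260.0],
])

scores = model.decision_function(new_readings)  # lower score = more anomalous
flags = model.predict(new_readings)             # -1 = anomaly, 1 = normal
for reading, score, flag in zip(new_readings, scores, flags):
    status = "ALERT" if flag == -1 else "ok"
    print(f"{reading} -> score {score:+.3f} [{status}]")

In a continuous, low-touch deployment the same model would simply be refitted on a rolling window of recent data, so that it adapts as asset behaviour changes rather than relying on a fixed engineering model.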

How we got here
The foundation for realising a new APM vision has existed for some time, with engineers having applied performance models for decades. These pioneering APM adopters faced the challenges and constraints of the technologies surrounding their models. Disparate systems evolved to manage and optimise maintenance functions, to assess risk and criticality, and to perform continuous condition monitoring.

These systems were isolated, resulting in limited connectivity and integration, as well as impaired workflows. Because of this limited integration, early computers processed small volumes of the available related data in batch mode rather than in real time, when insights are most valuable. Outputs came too late to act on, typically days or weeks after the fact. Computational power limited the advancement of new algorithms. And the models themselves were static: fixed, low frequency, and unable to adapt to new failure behaviours or incremental operational changes.

As the 2000s arrived, leading industries started to organise assets better for condition-based monitoring, and computational power continued to increase. Systems were still isolated, but within those separate systems engineers started to see something resembling real-time, asset-level data.

By the late 2000s, this situation had changed significantly, and multiple parallel technology innovations began to coalesce into the modern, state-of-the-art APM methodology. Best-in-class systems could now detect precise patterns of normal and failure behaviours, and computationally isolate key indicators of degradation. Especially important was the 2006 debut of Amazon Web Services for scalable cloud computing. Advances in structured and unstructured databases and in operational data pools were tested and improved at the enterprise level during this period.

Around the same time, smart sensors saw a dramatic shift in performance, size, reliability and price. Added to this was a dramatic improvement in the computational and analytical capability of machine learning through ‘deep belief networks’, or ‘deep learning’. This breakthrough was pioneered by Geoffrey Hinton at the University of Toronto, who now also works closely with Google.

The result was a quantum leap in capability, enabling machine learning to surpass the performance of previous analytical techniques built on limited modelling and statistical methods. Machine learning is now a dominant analytical method across IT fields around the world. It is used for credit card fraud detection, for facial recognition by Facebook, for voice recognition by Amazon, Apple and Google, for driving autonomous cars, for medical diagnoses, and more.

The smartphone rose to prominence during this period, led by the iPhone debut in 2007, which greatly advanced computer literacy and brought complex application (app) capabilities to the masses.

Between 2007 and 2010, culminating with the iPad debut, the process industry workforce moved from experimentation with the industrial internet of things (IIoT) to demands for smart devices and consumer-style applications at work. Industrial software and technology vendors began to update their offerings with user interfaces built around low-touch, readily navigable applications and displays. Vendors started delivering intuitive software that did not require deep specialist skills and experience to be productive.

At the same time, cross-industry initiatives, sponsored by many owner-operator companies, led to the development of open standards for connecting disparate systems and for inter-operation of work processes, particularly between operations and maintenance systems.

Such initiatives ensured that combinations of data could be used comprehensively to address problems and deliver solutions that were previously unattainable. Blending these methodologies with automated technology approaches set the stage for a major leap in APM performance and value.

During this period, emerging techniques for maintaining assets, particularly mechanical assets, came under scrutiny. The movement from fail-fix – through calendar-, usage- and condition-based planned maintenance events – all the way to reliability centred maintenance (RCM) techniques provided incremental improvements. However, the cost, complexity, time and skill-set requirements constrained deployments.

Today, there is a growing realisation that maintenance alone cannot solve the problems of unexpected asset breakdowns. According to the ARC Advisory Group,1 82% of mechanical breakdowns display a random failure pattern and are caused by process induced conditions that current maintenance practices do not monitor.

Market-leading companies realise that they have gone as far as they can go with traditional preventative maintenance techniques. Predictive maintenance represents the next frontier.
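
A hedged sketch of what that next step can look like follows, assuming that historical process conditions can be labelled with the breakdowns that followed them; the tag names, the synthetic data and the use of scikit-learn’s RandomForestClassifier are assumptions made for the illustration, not a description of any particular product.

# Minimal sketch of the predictive maintenance idea under the assumptions
# stated above: train a classifier on process-induced conditions that
# traditional maintenance rounds do not monitor, labelled by whether a
# breakdown followed, then score current conditions for failure risk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 4000

# Hypothetical process conditions: suction pressure (bar), seal-flush
# temperature (deg C) and normalised feed-composition drift.
X = np.column_stack([
    rng.normal(8.0, 1.0, n),
    rng.normal(55.0, 5.0, n),
    rng.normal(0.0, 1.0, n),
])
# Synthetic ground truth: low suction pressure plus a hot seal flush
# tends to precede a breakdown (label 1).
risk = (8.5 - X[:, 0]) + 0.2 * (X[:, 1] - 55.0)
y = (risk + rng.normal(0.0, 0.5, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)
clf = RandomForestClassifier(n_estimators=200, random_state=7)
clf.fit(X_train, y_train)

print("holdout accuracy:", round(clf.score(X_test, y_test), 3))
print("failure probability for current conditions:",
      round(clf.predict_proba([[7.2, 63.0, 0.4]])[0, 1], 3))

In practice the labels would come from maintenance and historian records rather than a synthetic rule, and the model’s feature importances can help identify which process-induced conditions should be brought under routine monitoring.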

