

Feb-2020

Value from data

Data must be refined and applied effectively in the refining industry to deliver its greatest value.

GEORGE WALKER
Novotek



Article Summary

The oil and gas industry is one of the most important to the global economy, with the combined economic value of upstream oil and gas sectors accounting for at least an estimated 5% of the entire global economy. For context, market research by IBISWorld found that oil drilling alone contributed between 2% and 3% of global gross domestic product (GDP) in 2017.

Despite its importance to the global economy – and sometimes because of it – the oil and gas industry is placed under a great deal of pressure to operate efficiently and effectively. So, it is no surprise that the oil and gas sector has been among the most active adopters of modern technologies to assist in operations: from remotely operated vehicles (ROVs) for inspecting subsea pipelines to thermal imaging drones for easier inspection of tank internals.

There is a clear trend here. Most of the assets that engineers are responsible for maintaining and overseeing in the oil industry are static assets, like pipelines, flare stacks, and tanks. Unfortunately, these assets are difficult to access and have traditionally not been fitted with sensors to relay performance information to a central control system. So, unmanned and robotic systems are playing a key role.

As we move more downstream, industrial automation and control software is more prevalent, with tasks such as management of a refinery’s crude slate largely using these systems today. However, downstream has a similar problem to upstream in that maintenance remains one of the biggest challenges for engineers, due to the sheer number of complex assets to be maintained. This is exacerbated by the need to minimise downtime.

According to an ARC survey of senior executives and engineering, operations and maintenance managers in oil and gas, 3-5% of production is lost to unplanned downtime. In theory, maintenance is a preventative measure that combats this unplanned downtime and maximises uptime. The challenge is that system complexity often means maintenance has to be carried out using planned downtime – something that still costs the refinery production time and money, but is begrudgingly accounted for rather than being a surprise.

For these reasons, more downstream oil businesses are looking at ways of reducing the frequency of planned downtime and maintenance to save costs and maximise productivity. This has driven an interest in the idea of predictive, preventative maintenance, as well as the process data collection that underpins it.

Prediction and prevention
Traditionally, maintenance has been scheduled on a rota-style basis in most oil refineries. A manager might keep a record of when a piece of equipment was last serviced, and routine maintenance will be arranged in accordance with the directions of the equipment’s manufacturer. This fails to account for abnormalities that can accelerate a decline in performance and puts the manager and the engineer at the mercy of circumstance because their approach is purely reactive.

This is changing as the industry digitalises. With more sophisticated sensors connected to equipment and assets, engineers can remotely view the performance data of equipment in use. The sensors transmit information about key parameters, such as the operating temperature of motors or the pressure in a centrifugal compressor, to a central control system, such as a SCADA system or manufacturing execution system (MES).
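The flow described here – a tagged sensor reading transmitted to a central system – can be sketched in miniature. The tag names, units, and values below are invented for illustration; a real SCADA or MES would receive these over an industrial protocol rather than an in-memory store:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    tag: str        # e.g. "MOTOR-07.TEMP" – naming convention is illustrative
    value: float
    unit: str
    timestamp: str  # ISO 8601, UTC

def publish(reading: SensorReading, store: dict) -> None:
    """Append a reading to a central store, keyed by its tag."""
    store.setdefault(reading.tag, []).append(reading)

store = {}
now = datetime.now(timezone.utc).isoformat()
publish(SensorReading("MOTOR-07.TEMP", 74.2, "degC", now), store)
publish(SensorReading("COMP-01.PRESSURE", 8.6, "barg", now), store)

print(sorted(store))  # tags currently held by the central system
```

Keying readings by tag mirrors how historians and SCADA systems organise time-series data, which is what makes the later analysis steps straightforward.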

MESs have been used in downstream oil and gas for much of the past decade, so most will be familiar with their benefits and functions. With modern MESs, alongside sensors capable of measuring a wide range of metrics and communicating over the latest networks and protocols, it is becoming possible for managers to receive all of their operational and process data in almost real time.

Real-time access to data means that engineers can visualise data and create a snapshot of any given moment of an operation (see Figure 1). With this, it becomes far easier to identify when equipment is underperforming.

However, the vast volumes of information generated by these operations can easily be too much for an engineer to manage effectively, which is why an increasing number of MESs and industrial internet of things (IIoT) platforms are turning to machine learning algorithms.

Machine learning, artificial intelligence, and algorithms in general have become equal parts buzzwords and breakthroughs in recent years. Owing in no small part to the high number of start-ups that claim to use AI without actually doing so, there is a lot of confusion surrounding exactly what these terms mean – and the benefits they provide.

In simple terms, an artificial intelligence is any computer software or algorithm that functions in a way that simulates the thinking processes of a human. And if you think of how human beings learn, much of our knowledge is acquired through our experiences and observations, as well as those of the people around us. Machine learning (ML) brings a similar concept to software, where algorithms are ‘trained’ on data to allow the algorithms to establish connections between data sets.
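The idea of 'training' on data to establish connections can be shown at its simplest: fitting a straight line that relates two process parameters from historical observations, then using the learned relationship to predict a new value. The motor load and temperature figures are invented for illustration – real training data would come from the plant's historian:

```python
# Minimal illustration of 'training': learn the linear relationship between
# motor load (%) and operating temperature (degC) from historical pairs.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

loads = [20, 40, 60, 80, 100]   # illustrative training data
temps = [35, 45, 55, 65, 75]

slope, intercept = fit_line(loads, temps)
predicted = slope * 70 + intercept   # expected temperature at 70% load
print(round(predicted, 1))           # → 60.0
```

A production ML model handles many more variables and non-linear relationships, but the principle is the same: the algorithm's parameters are derived from past observations rather than hand-coded rules.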

Analysis of industrial equipment performance data is a matter of understanding the wider implications of the presented data and spotting patterns. It is something that lends itself to automation quite nicely, and the benefit of putting it in the hands of computers is that the system can parse thousands of data sets far more quickly than a human.

This technology plays a key role in GE Digital’s Predix IIoT platform and MES, which is why Novotek often recommends it as our product of choice for industrial businesses. The platform uses ML algorithms that are trained on thousands of sets of industrial process data, so it can be integrated easily and run quickly, while using data collected by existing historian software to teach the algorithm what ‘normal’ looks like for a specific site’s operations.
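The notion of using historian data to teach an algorithm what 'normal' looks like can be sketched in miniature – here with a simple statistical baseline rather than a full ML model, and with vibration figures invented for illustration:

```python
from statistics import mean, stdev

# Historian readings for one tag during known-good operation define 'normal'.
# Values are illustrative compressor vibration readings in mm/s.
baseline = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(value, k=3.0):
    """Flag readings more than k standard deviations from the baseline mean."""
    return abs(value - mu) > k * sigma

print(is_anomalous(5.1))   # within the normal band → False
print(is_anomalous(7.4))   # far outside it → True
```

The key point carries over to more sophisticated models: 'normal' is not hard-coded but derived from the specific site's own historical data, so the same software adapts to different operations.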

The implications and potential of this are far-reaching, with the only real limitations being the sophistication and number of the sensors a business has in place to monitor its assets.

For example, it might be that a compressor is operating at too high an RPM or is vibrating abnormally while running. Even though the compressor could be an integral part of an oil refining operation, the problem might be overlooked in the deluge of operational data until the compressor trips due to overspeed or high vibration and the system starts flaring more than usual.

In a network of equipment monitored by a platform containing an ML algorithm, the software would detect the erroneous performance data and alert the most relevant maintenance engineer to attend to the compressor before tripping occurred. Potentially, this preventative maintenance could have helped the business avoid unexpected downtime – a rather understated accomplishment when you consider that unplanned downtime can cost offshore oil and gas companies an average of $49 million annually.
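The alert-before-trip logic can be illustrated with a simple rule-based check: warning limits set below the trip limits give an engineer time to intervene. The parameter names and limit values here are hypothetical, not taken from any real compressor datasheet:

```python
# Illustrative warning limits set below the trip limits, so the engineer
# is notified before the machine protection system shuts the unit down.
LIMITS = {
    "speed_rpm":      {"warn": 11500, "trip": 12000},
    "vibration_mm_s": {"warn": 7.1,   "trip": 11.2},
}

def check(readings):
    """Return (parameter, severity) alerts for readings at or above a limit."""
    alerts = []
    for param, value in readings.items():
        lim = LIMITS[param]
        if value >= lim["trip"]:
            alerts.append((param, "TRIP"))
        elif value >= lim["warn"]:
            alerts.append((param, "WARN"))  # route to maintenance engineer
    return alerts

print(check({"speed_rpm": 11800, "vibration_mm_s": 5.0}))
# → [('speed_rpm', 'WARN')]
```

In practice an ML-driven platform improves on fixed limits like these by learning the expected values for the current operating point, but the escalation pattern – warn the right person before the trip threshold is reached – is the same.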

This is a particularly time-sensitive example. For other applications, such as the management and mitigation of corrosion in pipework using sensors to track the total acid number (TAN) of fluids, the algorithms can recognise early symptoms of problems and automatically adjust maintenance schedules accordingly. This is where businesses can introduce effective predictive maintenance regimens to hydrocarbon engineering, minimising downtime across operations.
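Automatic schedule adjustment can be sketched as a trend extrapolation: project the rising TAN forward to estimate when it will cross an action limit, and pull the inspection forward if that date falls inside the planned interval. All figures below are invented for illustration; a real system would fit the full trend, not just the endpoints:

```python
# Sketch: extrapolate a rising total acid number (TAN) linearly to estimate
# the days remaining before it crosses an action limit.
def days_until_limit(days, tan_values, limit):
    # Slope from first and last observation – a real system would fit all points.
    rate = (tan_values[-1] - tan_values[0]) / (days[-1] - days[0])
    return (limit - tan_values[-1]) / rate

days = [0, 30, 60]
tan = [0.5, 0.8, 1.1]           # mg KOH/g, illustrative readings

remaining = days_until_limit(days, tan, limit=2.0)
planned_interval = 180           # days until the next scheduled inspection
next_inspection = min(planned_interval, remaining)

print(round(remaining))          # → 90: inspect well before the planned date
```

This is the essence of condition-based scheduling: the maintenance date becomes an output of the measured degradation rate rather than a fixed calendar entry.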

Although this application of industrial data can prove invaluable to oil engineers, it is just the beginning of the ways in which process data improves performance. Not only can the proper collection and analysis of data change the strategic and planning side of maintenance, it can also greatly enhance the process of conducting maintenance itself.

