Apr-2017

Profiting from plant data

Predictive analytics is an evolving technology with many potential applications in and gains for the process industries

DOUGLAS WHITE
Emerson Automation Solutions
Article Summary
Typical plant problems include:
1. You are a process engineer and suspect that the catalyst in one of the site’s reactors is deactivating faster than it has historically. Is your suspicion correct, and if it is, how much faster and what is the cause? Will the catalyst last until the next shutdown? If not, what operating changes does the plant need to make to ensure it will last?

2. You are a reliability engineer trying to understand why some pumps need frequent overhauls and some do not. What are the characteristics of those that repeatedly require this service? Are there leading indicators that will allow you to identify beforehand when a pump problem is about to occur?

3. There have been several significant flare releases over the past few years with associated visible plumes. The source and composition of the releases are not obvious. You have been asked to find the cause and recommend changes to avoid or at least reduce the incidence of these releases in the future.

What is the common factor in all of these issues? Their resolution requires deep analysis of plant data, including data beyond standard process measurements. Another commonality is that time is an explicit variable: solutions should generate predictions of future plant and equipment behaviour that support decisions. How can these questions be addressed more efficiently – more quickly and with fewer staff?

In this article, some of the new methods that are available to assist with this data analysis and development of solutions will be reviewed. In recent years, rapid progress has been made in innovative algorithms and approaches. How are the process industries taking advantage of these new techniques today and how might they apply them in coming years?

In a related trend, process plants are producing more and more data. ‘Big data’ is a popular topic and the process industries are no exception. One refining company indicated it produced 80 billion data items for storage in one year from four sites. Another process company referenced a corporate historian with 10 million tags across 15 sites and plans to implement online access to the most recent three years of one-minute-sampled historical data for these points.

Even more data can be expected from process plants in the future. Much of the new equipment and devices purchased for process plants is already equipped with multiple sensors to monitor internal conditions and performance, embedded computing to analyse the data produced, and enhanced connectivity to transmit the results. This continuing evolution in the number of sensors and their computing and communication capabilities is sometimes referred to as the Industrial Internet of Things (IIoT). But how can plant staff extract value from all of this data?

Figure 1 shows one example. In years past, there might be one (or even zero) feedback signal from a field control valve. Today there can be more than 100 including sophisticated diagnostic information and generated graphical performance representations.

Of course the increase in data is not limited to the process industries. Much larger increases are reported in other industries. Walmart is reported to collect and store 2.5 petabytes of unstructured data every hour from the more than one million customers around the world currently purchasing items.1 Amazon is estimated to have more than 1.5 million servers in their data centres.2

In this article the focus is on the plant operational impact of these developments and their potential business impact. One characteristic of most of these operational issues is that they have a temporal/dynamic character which leads to special data requirements. There has also been an equivalent and important impact on other corporate activities such as trading, customer and market analysis and financial services which has been covered in many other publications.

Data analytics
Data analytics is the theory and practice of capturing, organising, and analysing data to determine patterns, correlations, and conclusions. Engineers have, of course, been doing analytics in practice for as long as they have been employed. However, the continuing increase in computing capabilities has led to the development of practical algorithms that can address significantly larger data sets and more heterogeneous data types.

McKinsey3 has ranked big data analytics as one of the top potential technologies for increasing productivity and GDP over the next few years. Among business sectors, manufacturing led, with an estimated overall yearly GDP increase of $125-250 billion.

Data analytics can be classified in three categories based on the time scale of interest as illustrated in Figure 2.

Descriptive/historical analytics is analysis of what has happened, when it happened and why it happened. This includes producing retrospective performance measures. An example is calculation of actual versus planned energy usage for plant equipment as well as the overall site for the past day, week or month.
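The energy-usage comparison described above can be sketched in a few lines of Python. The unit names and MWh figures below are hypothetical illustration values, not data from the article:

```python
# Descriptive analytics sketch: actual vs. planned energy usage by unit,
# plus the overall site total. All names and numbers are illustrative.
daily_plan_mwh = {"crude_unit": 120.0, "reformer": 85.0, "utilities": 60.0}
daily_actual_mwh = {"crude_unit": 131.5, "reformer": 82.2, "utilities": 66.3}

for unit, plan in daily_plan_mwh.items():
    actual = daily_actual_mwh[unit]
    deviation_pct = 100.0 * (actual - plan) / plan
    print(f"{unit:12s} plan={plan:7.1f} actual={actual:7.1f} dev={deviation_pct:+.1f}%")

site_plan = sum(daily_plan_mwh.values())
site_actual = sum(daily_actual_mwh.values())
print(f"site total   plan={site_plan:7.1f} actual={site_actual:7.1f}")
```

The same calculation would typically be run per day, week, or month against data pulled from the plant historian.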

Real-time analytics is the use of the most recent and historical data to interpret current plant conditions. An example is comparison of plant operating conditions against the approved ‘operating window’ and issuing an alert if the operating window is being violated.
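A minimal sketch of the operating-window check follows. The tag names and limits are hypothetical; a real implementation would read live values from the control system or historian:

```python
# Real-time analytics sketch: flag measurements outside the approved
# operating window. Tags and limits below are illustrative assumptions.
operating_window = {
    "reactor_temp_C": (310.0, 345.0),
    "drum_level_pct": (20.0, 80.0),
}

def check_window(measurements):
    """Return alert strings for any variable outside its approved window."""
    alerts = []
    for tag, value in measurements.items():
        lo, hi = operating_window[tag]
        if not (lo <= value <= hi):
            alerts.append(f"{tag}={value} outside [{lo}, {hi}]")
    return alerts

print(check_window({"reactor_temp_C": 351.2, "drum_level_pct": 55.0}))
```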

Predictive analytics is prediction of what will happen and when it is likely to happen based on an analysis of current and past data trends and patterns. An example is estimating the likelihood that a pump will need servicing before the next shutdown. Predictive analytics is the focus of this article.

Predictive analytics applications
Figure 3 shows a typical predictive analytics application in the process industries.
Relevant sensor and event data are captured in real time and automatically analysed for patterns of significance. If one or more patterns are detected, a prediction is generated from the model. If appropriate, an alert is raised or a specific compensating action is initiated.

An example is the pump condition question at the beginning of this article. The spectral properties of the vibration measurement from the pump and the statistical properties of the outlet pressure measurement might be the inputs of significance. If both of these variables increase significantly at the same time the model might suggest a potential future pump problem and suggest action by operations and maintenance to address the problem.
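The "both indicators rise together" logic in the pump example can be sketched as below. The trend test (recent mean versus an earlier baseline mean) and all thresholds and sample values are simplifying assumptions for illustration; a production system would use proper spectral analysis of the vibration signal:

```python
import statistics

def rising(series, window=5, ratio=1.5):
    """True if the mean of the most recent `window` points exceeds the
    mean of the first `window` points by at least the factor `ratio`."""
    recent = statistics.mean(series[-window:])
    baseline = statistics.mean(series[:window])
    return baseline > 0 and recent / baseline >= ratio

def pump_warning(vibration_rms, outlet_pressure_std):
    # Flag a potential future pump problem only when both indicators
    # increase significantly at the same time, as described in the text.
    return rising(vibration_rms) and rising(outlet_pressure_std)

# Illustrative trend data: both indicators step up in the second half.
vib = [0.20, 0.21, 0.19, 0.20, 0.22, 0.35, 0.38, 0.40, 0.37, 0.41]
pstd = [0.05, 0.06, 0.05, 0.05, 0.06, 0.10, 0.11, 0.09, 0.12, 0.10]
print(pump_warning(vib, pstd))  # both rising together -> True
```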

Predictive model development
Central to the success of any predictive analytics project is the development of an appropriate model which accurately forecasts the occurrences under investigation. Many of the popular algorithms for model building have been known to statisticians for many years. What has changed is the ability to apply these algorithms to extremely large data sets and widespread availability of easy-to-use software for these applications. This has resulted in many recent applications in the consumer area including such familiar ones as fingerprint recognition on smart phones, recommendations for music and movies from previous selections, and driverless cars. Table 1 lists some of the major classes of algorithms and their existing and potential use in the process plant area and in more general consumer applications. There is much overlap in terms of potential applications and the approach used often depends on the preferences and experience of the user.
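As one concrete instance of the classification algorithms mentioned above, a k-nearest-neighbours classifier can be written in a few lines. The feature names, training points, and labels below are hypothetical illustrations of the pump-servicing question, not an algorithm or data set from the article:

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Hypothetical training data: (vibration_rms, pressure_std) -> condition.
train = [
    ((0.20, 0.10), "healthy"), ((0.30, 0.20), "healthy"),
    ((0.25, 0.15), "healthy"), ((0.90, 0.80), "needs_service"),
    ((1.10, 0.70), "needs_service"), ((0.95, 0.90), "needs_service"),
]
print(knn_predict(train, (1.0, 0.85)))  # prints "needs_service"
```

In practice the choice between such algorithms often depends, as noted, on the preferences and experience of the user as much as on the problem itself.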

For more background on these algorithms see references 4 and 5.