

Nov-2015

Using a commercial process historian for full-featured machinery condition monitoring

Can a commercial process historian be used to replace stand-alone condition monitoring software — including acquisition and display of high-speed vibration and surge waveform data? Historically, the answer has been “no.” Today, however, a very different story is unfolding.

Steve Sabin
SETPOINT™ Vibration




A Brief History
Permanent vibration monitoring systems have been with us since at least the 1960s. When the first edition of American Petroleum Institute Standard 670 was published in 1976, it helped push such systems from the pioneering few into the mainstream. Today, it is standard engineering practice to include such systems – almost without exception – on all critical turbomachinery, not just in the hydrocarbon processing industries but in any industry where critical machinery is found. These systems have expanded beyond vibration monitoring to include bearing temperatures, overspeed, surge detection, and other parameters, and have thus become “machinery protection systems” rather than merely vibration monitoring systems.

The 1980s saw the rise of something new to complement these protection systems: computer software that archived and displayed the vibration data – including detailed waveform snapshots. Today, an estimated 25% of API 670 systems ship with some form of online condition monitoring software, making it the fastest-growing segment of the machinery vibration measurement industry. Indeed, so prevalent have such systems become that the 5th edition of API 670 now includes an annex devoted specifically to condition monitoring software, augmenting the standard’s historical focus on machinery protection alone.

The 1980s also saw the rise of another form of online software: the commercial process historian. Its core innovation was capturing the reams of real-time data produced by process control systems and historizing that data in computer software instead of on strip chart recorders. The data could then be easily and securely saved, shared, and analyzed.

This concept exploded, and there are now tens of thousands of such systems around the world, collectively accounting for billions of process tags (Figure 1).

Perhaps the best known of these is the PI System™ software, developed by Pat Kennedy at what was then Oil Systems Inc. (and has since become OSIsoft, LLC). Such systems are no longer thought of as simply “process historians”: they are truly real-time data infrastructures that handle far more than just process data, encompassing events, spatial data, and asset hierarchies, to name just a few.
As process historians grew in market adoption and sophistication, so too did online condition monitoring software. However, the process historian concerned itself primarily with tens or even hundreds of thousands of points and associated update rates of hours, minutes, or seconds. Indeed, one-second process data archival was considered the “4-minute mile” of the industry. In contrast, condition monitoring software typically encompassed only a few hundred measurement points, but required data sampled much faster to capture much higher frequencies – roughly comparable to the audible spectrum (up to 20 kHz) in terms of sampling speeds and bandwidth requirements. As such, the two systems continued to exist as independent silos: one focused on hundreds of thousands of points with scan rates measured in seconds, and one focused on only a few hundred points with scan rates measured in milliseconds or even microseconds. Where one system could convey almost all of its information in terms of trends and statuses, the other required the highly specialized plots used by vibration analysts to visualize waveform data in both the time and frequency domains.
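To put those two scan-rate regimes in concrete terms, the following back-of-envelope sketch (in Python) compares the raw data rates involved. Every figure in it – tag counts, channel counts, sample rates, bytes per sample – is an illustrative assumption, not a number from any particular installation:

```python
# Back-of-envelope comparison of raw data rates: a large process historian
# versus a modest condition monitoring system. All figures are illustrative
# assumptions chosen only to show the orders of magnitude involved.

PROCESS_TAGS = 100_000      # historian tag count (assumed)
PROCESS_RATE_HZ = 1         # one sample per second: the "4-minute mile"
CM_CHANNELS = 300           # vibration/surge channels (assumed)
CM_RATE_HZ = 51_200         # sample rate covering ~20 kHz of bandwidth
                            # (Nyquist plus anti-alias margin, assumed)
BYTES_PER_SAMPLE = 8        # timestamp + value, simplified

process_mb_s = PROCESS_TAGS * PROCESS_RATE_HZ * BYTES_PER_SAMPLE / 1e6
cm_mb_s = CM_CHANNELS * CM_RATE_HZ * BYTES_PER_SAMPLE / 1e6

print(f"Process historian: {process_mb_s:7.1f} MB/s")  # ~0.8 MB/s
print(f"Condition monitor: {cm_mb_s:7.1f} MB/s")       # ~122.9 MB/s
```

On these assumptions, even a modest condition monitoring system out-produces an entire plant’s worth of one-second process tags by roughly two orders of magnitude, which goes a long way toward explaining why the two systems were architected so differently.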

Thus, these systems evolved along two very different paths with two different sets of users. One targeted everyone in the enterprise for whom process and event data was valuable, while the other targeted the few people in the organization trained to interpret the specialized waveform data produced by its critical machinery.

The Twin Silos
Most plants have an overall control system architecture similar to Figure 2. Process data, as well as machinery data, flows into a distributed control system (DCS) where operators can monitor and control the process; adjust machinery operating parameters such as speed, load, and flow; and monitor the status of subsystems such as anti-surge, vibration, and speed control. Generally, these subsystems communicate with the DCS via some type of open protocol, such as Modbus. Note that the focus of data flowing into the DCS is real-time control and monitoring of the process. Most DCS architectures are less capable (although they are continually improving) when it comes to historizing their data and providing a rich tool set for sharing and analyzing it.

Thus, plants often rely on a separate process historian, such as the OSIsoft PI System, to provide these capabilities. This is particularly true when there are different types of DCSs within a single plant or organization and data must be shared across all of them. Historians are very adept at communicating with virtually any underlying system. For example, the PI System has more than 400 published interfaces supporting a very wide variety of protocols. In contrast, the historians offered by DCS suppliers are typically designed to work only with their own control systems – not others. What is notable about Figure 2 is that virtually every type of data originating in the plant and its mechanical assets flows into the process historian – with two exceptions:
1) Machinery vibration data
2) Compressor surge/performance data

Both of these subsystems have instead relied on the development of their own specialized software “silos” to collect and display the data of interest: high-speed vibration waveform data in the one case, and surge-related data such as compressor maps and surge cycle analyses in the other. Although the vibration monitoring and surge control industries have evolved their tools over the years to become increasingly sophisticated, one thing has not changed: they exist as stand-alone software ecosystems with their own proprietary infrastructures to collect, store, and display their specialized data. The vibration software is separate from the surge software, and both are separate from the process historian software.

Silos Perpetuated
At this point, some will no doubt remark that vibration and surge data has been sent to the DCS – and subsequently to the process historian – for many years. While this is true, it is important to note that the data supplied was simply current values and status conditions – the type of data that can be conveyed with 4-20 mA loops and relay annunciation, and for which Modbus communication is perfectly adequate. What was not sent to the DCS and/or process historian was the high-speed vibration or surge waveform data from (for example) a dynamic pressure transducer or vibration probe, where the amplitude-versus-time plot requires a time scale in milliseconds – not seconds.
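To see how little a Modbus link of this kind actually carries, here is a short Python sketch that builds a Modbus/TCP “read holding registers” (function 0x03) request of the sort a DCS or historian interface might poll every few seconds. The register map and counts are hypothetical; the point is the size of the exchange:

```python
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (always 0), byte count of
    what follows, unit id. PDU: function code, start address, quantity.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Hypothetical poll: 16 registers holding overall vibration values/statuses.
frame = modbus_read_request(transaction_id=1, unit_id=1, start_addr=0, count=16)
print(len(frame), frame.hex())  # 12 bytes on the wire -- scalars only
```

Each such poll requests a handful of 16-bit registers – current values and statuses only. A waveform sampled tens of thousands of times per second simply does not fit this model.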

The reasons given for perpetuating these silos typically include the following:
• They produce too much data – there is not enough bandwidth in the process control network.
• They require too much storage space because this data is updated in milliseconds instead of minutes or seconds (a rough sizing sketch follows this list).
• They require specialized visualization tools to display this data – they are not just trends, bargraphs, and status lights.
• The DCS / process historian is not fast enough for the sample rates required by this data.
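The storage objection, at least, is easy to quantify. The sketch below estimates raw daily storage for one continuously captured waveform channel versus one once-per-second process tag; the sample rates and sample sizes are illustrative assumptions only:

```python
# Rough daily-storage estimate: one continuously captured waveform channel
# versus one once-per-second process tag. All figures are assumptions.

SECONDS_PER_DAY = 86_400

# Process tag: one ~8-byte sample (timestamp + value) per second.
tag_mb_day = 8 * SECONDS_PER_DAY / 1e6                 # ~0.7 MB/day

# Waveform channel: 25.6 kHz sampling, 16-bit (2-byte) samples.
channel_gb_day = 25_600 * 2 * SECONDS_PER_DAY / 1e9    # ~4.4 GB/day

print(f"Process tag:      {tag_mb_day:6.1f} MB/day")
print(f"Waveform channel: {channel_gb_day:6.1f} GB/day")
print(f"Ratio:            {channel_gb_day * 1e3 / tag_mb_day:,.0f}x")
```

On these assumptions a single waveform channel consumes thousands of times the storage of a one-second trend tag, which is the arithmetic behind the objection.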

