Enabling the refinery of the future – safety first
The recent downturn in oil prices has reduced feedstock purchasing costs for the hydrocarbon processing industry (HPI). As a result, refineries are running above and beyond their design parameters. This, in turn, has increased the pace of expansion, upgrades, modifications and the number of debottlenecking projects to accommodate increasing throughput.
Trinity Integrated Systems
Within this “new” economic environment, refinery automation systems must be capable of delivering new levels of productivity and scalability, while ensuring safety, managing globalisation and addressing changing workforce demographics. Is the current automation infrastructure ready for this, and are these issues being given the appropriate priority levels?
The refinery of the future (ROF) is focused on the improved capture of data, the retention and definition of knowledge, and overall clarity. It does not refer solely to new facilities; it also embraces the desired functionality and operation of existing systems and sites.
Justification must exist for requirements to be introduced into these expedited brownfield projects, and the catalyst for real change needs to be centred more on the avoidance of unacceptable risk. Since the explosion and resulting fire on the Piper Alpha North Sea oil production platform in 1988, which killed 167 men and caused a loss of approximately $3.4B, the industry has moved towards safer working practices under IEC 61511, or equivalent, guidelines.
When process industry incidents are examined, the majority of causes are attributed to engineering design or maintenance procedure errors. There are multiple mediums and languages used in the communication of specification requirements, formats, design details and engineering outputs, as well as different parties involved – including end users; licensors; engineering, procurement and construction (EPC) companies; consultants; and contractors/integrators. It is not surprising that challenges arise at each stage.
Beyond the design, poor system maintainability is also a main cause of incidents. The 2005 explosion at the ISOM isomerisation process unit at BP’s Texas City refinery in Texas, which killed 15 workers and injured 170 others, and Piper Alpha are well-publicised examples of traditional methodologies that have failed to protect the workforce, the asset and the environment.
Despite such compelling justification, the implementation of the functional safety management (FSM) practices prescribed by IEC 61511 remains slow. The FSM premise is the maintenance of a compliant safety life cycle (Fig. 1), but, because the industry is mainly brownfield, it is difficult to gain a starting point or foothold on that life cycle.
If the ROF is to be introduced in today’s refinery environment, an entry point to implement this technology on every modification, upgrade and maintenance work package must be found.
The one common obstacle is organisational inconsistency, ranging across differences in standardisation levels, centralised engineering, fixed vendor lists, contractor-to-staff ratios, etc. Even single suppliers use different engineering centres to deliver the best margin for a single customer. The underpinning principle of all forward-thinking strategies is standardisation. With the number of people, companies and cultures involved in a project, it can be difficult to implement document-based guidelines.
The safety industry has identified tools to support its safety life cycle, but management barriers have traditionally prevented their adoption. Although the desire to attain a maintainable life cycle (Fig. 2) is common, the transition from current systems can appear a distant prospect.
In older plants, it is understood that documents will be lost, changes will be undocumented and knowledgeable staff will leave. One of the biggest obstacles is that much of the expertise behind these systems is inside the heads of the engineers who built them, and it is not properly documented. This makes modifications difficult, potentially dangerous and certainly noncompliant. Gathering design intentions and the distilled knowledge that produces the downloaded operating code is imperative before improvements in data recording or plant expansion are possible.
There are products that enable translation of the downloaded legacy code into a future system. One such product translates the code of legacy systems into intermediate mathematical formulae, which can then be reviewed. The old flat logic is refined into object-oriented patterns aligned with new corporate requirements and exported to a new system or specification. Scenarios can easily be tested and refined, including negative testing to mimic potentially thousands of possibilities that explore what the system can achieve or withstand. The true value lies in documenting the increasingly complex code to demystify and fully understand its functionality.
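The translation-and-testing idea can be illustrated with a minimal sketch. Here, a hypothetical legacy trip function stands in for downloaded flat logic, a simplified intermediate formula stands in for the reviewed refinement, and exhaustive enumeration of every input combination stands in for the negative-testing step; the tags and logic are invented for illustration, not taken from any real product.

```python
from itertools import product

# Hypothetical legacy trip logic, written in a flat, relay-rung style:
# trip if (high pressure AND NOT bypass) OR (high temperature AND NOT bypass)
def legacy_trip(high_press: bool, high_temp: bool, bypass: bool) -> bool:
    return (high_press and not bypass) or (high_temp and not bypass)

# Equivalent intermediate formula after algebraic review/simplification:
# trip = (high_press OR high_temp) AND NOT bypass
def refined_trip(high_press: bool, high_temp: bool, bypass: bool) -> bool:
    return (high_press or high_temp) and not bypass

# Negative/exhaustive testing: enumerate every input combination and
# confirm the refined formula preserves the legacy behaviour exactly.
mismatches = [
    combo for combo in product([False, True], repeat=3)
    if legacy_trip(*combo) != refined_trip(*combo)
]
print(f"mismatches: {len(mismatches)}")  # prints "mismatches: 0"
```

With only three inputs the full truth table has eight rows, so exhaustive checking is trivial; real systems need the tool-assisted approach the article describes precisely because the combination count grows exponentially.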
Joining components in plant packages
An estimated 44% of incidents can be traced back to poor specification requirements (Fig. 3), and remedying this can be a complex and time-consuming task that requires expertise across multiple disciplines. Communicating control requirements across those disciplines requires a universally understandable platform or language.
Collaboration in the design process and on simulation models is a practical, inexpensive and invaluable first step to defining functionality and removing ambiguity. Specifications can be created and implemented in a soft environment quickly and without risk, and compliance is checked throughout the process. A database-driven lifecycle engineering environment can deliver this functionality as the components are referenced in the database.
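A database-driven environment of this kind can be pictured as a component registry that each specification entry is validated against as it is written. The sketch below is purely illustrative: the tags, fields and SIL-capability check are assumptions chosen to show the principle of continuous compliance checking, not the schema of any actual product.

```python
# Illustrative component database: each entry records what the device is
# and the safety integrity level (SIL) it is certified to support.
components = {
    "PT-101": {"type": "pressure transmitter", "sil_capability": 2},
    "XV-205": {"type": "shutdown valve", "sil_capability": 3},
}

# A specification references components by tag rather than restating them,
# so there is a single source of truth for each device's properties.
spec = [
    {"tag": "PT-101", "required_sil": 2},
    {"tag": "XV-205", "required_sil": 2},
]

def check_compliance(spec, components):
    """Return a list of compliance issues; an empty list means compliant."""
    issues = []
    for item in spec:
        comp = components.get(item["tag"])
        if comp is None:
            issues.append(f"{item['tag']}: not found in component database")
        elif comp["sil_capability"] < item["required_sil"]:
            issues.append(f"{item['tag']}: SIL capability too low")
    return issues

print(check_compliance(spec, components))  # prints "[]" - spec is compliant
```

Because every entry resolves against the shared registry, a dangling tag or an under-rated instrument surfaces immediately in the soft environment, before anything reaches the plant.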
In the ROF, system structures, form and content must be aligned and maintained by more global support infrastructures. However, attempts at standardisation across different geographic regions, languages, cultures and working practices have created problems in the past.
Moving away from pre-canned technologies
While the concept of entering domains with pre-canned technologies is easy to grasp, the execution is less straightforward. It is very rare for two “common” packages to remain the same: tag changes are necessary, physical environments may demand different setups, company standards mandate particular instrument suppliers, and so on.
Rather than developing packages that are static, the specifications, simulations, testing regimes and FSM can now be linked to the packaged plant. If an instrument type change is entered in one location, the automated tools ensure that the documents are updated, safety integrity level (SIL) calculations are rerun, maintenance regimes are updated, loop drawings are redrafted and system functionality is updated.
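One link in that automated chain, the rerun of SIL calculations after an instrument change, can be sketched as follows. The 1oo1 low-demand approximation PFDavg ≈ λDU × TI / 2 and the SIL bands come from IEC 61508; the tag, failure rates and proof-test interval are invented example values, and a real SIL verification involves many more factors (architecture, common cause, diagnostics) than this sketch.

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    tag: str
    lambda_du: float        # dangerous undetected failure rate, per hour
    proof_test_hours: float # proof-test interval TI, in hours

    def pfd_avg(self) -> float:
        # Simplified 1oo1 low-demand approximation: PFDavg = lambda_DU * TI / 2
        return self.lambda_du * self.proof_test_hours / 2

def sil_band(pfd: float) -> int:
    # Low-demand SIL bands per IEC 61508 (upper bounds of PFDavg).
    if pfd < 1e-4: return 4
    if pfd < 1e-3: return 3
    if pfd < 1e-2: return 2
    if pfd < 1e-1: return 1
    return 0

# An instrument-type change (here, a higher failure rate) triggers an
# automatic rerun of the SIL verification for the affected loop.
old = Instrument("PT-101", lambda_du=2e-7, proof_test_hours=8760)
new = Instrument("PT-101", lambda_du=5e-7, proof_test_hours=8760)
print(sil_band(old.pfd_avg()), sil_band(new.pfd_avg()))  # prints "3 2"
```

The point of the automation is exactly what the example shows: a seemingly minor substitution quietly drops the achievable SIL from 3 to 2, and a linked toolchain flags it, while a static document set would not.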
Modular, package-based systems are now specified based on functionality rather than components, allowing original equipment manufacturer (OEM) packages to align with corporate requirements without upsetting package suppliers’ sensitivities.