Mar-2018

Opening the crude flexibility window: enhancing crude unit capabilities

Innovative online analysers help refiners to mitigate crude unit threats in real time and provide new insights into existing problems.

PHILIP THORNTHWAITE and BRAD MASON
Nalco Champion


Article Summary

While much of the upstream oil and gas sector has struggled since crude oil prices collapsed in the summer of 2014, global refiners have enjoyed a period of relative prosperity. Falling crude prices buoyed refiners’ margins after a prolonged squeeze on their profitability. However, the good times for refiners may be fading; margins have already tightened as crude oil prices have stabilised, and refiners are once again looking at managing cash flow to maintain and maximise their profitability.

In this increasingly challenging environment, refiners are being tasked to find a compromise between competing objectives. On the one hand, they are being asked to maintain or improve asset reliability, increase the run length between turnarounds, and reduce both capex and opex. At the same time, they are looking to improve profitability by processing more opportunity crudes, which increases the level of risk.

The refineries that survive will be those that successfully process a more diverse range of opportunity feedstocks while improving, or at least maintaining, current levels of reliability and availability. However, managing the competing objectives of reliability versus profitability is easier said than done.

In the first article in this series (PTQ Q3 2017), we covered how ‘best in class’ desalting treatment, accompanied by a rigorous mechanical-operational-chemical optimisation programme, can deliver significant improvements in desalter performance to units that struggle to meet performance targets. This robustness is important because it gives the refiner the opportunity to capitalise on cost-advantaged crude slates when they are processed. However, delivering the next level of process improvement means changing established industry practice in the monitoring of these critical operations, which relies on spot samples and lagging metrics to make critical operational decisions and to troubleshoot problems.

The established approaches to monitoring desalter operations and overhead corrosion control programmes often struggle to provide the level of performance needed to mitigate all threats successfully, especially when processing a challenging feedstock. The crude slate to many crude units changes every two to three days, or sometimes even more frequently. Combining varying feed slates with significant batch-to-batch variation results in highly variable contaminant levels in the feed to many crude units. To make control even more difficult, the operation of the unit upstream of the atmospheric distillation tower is also subject to considerable variability. Slugs of water, solids, the addition of slop to the unit feed, and changes in wash water quality can all upset the desalter, resulting in a rapid increase in contaminant levels leaving the desalter that contributes to overhead corrosion or potentially upsets wastewater operations.

Compounding this, the variability of the process and of crude contaminant levels is exacerbated by the low measurement frequency of the key control variables in both the crude oil (for instance, salt-in-crude) and the overhead sour water (for instance, pH, chlorides, ammonia). At best, a variable such as pH is measured perhaps once per shift, while salt-in-crude and overhead chlorides are measured once per day. In some cases, these variables may be measured only two to three times per week.

With such a highly variable process, trying to maintain control with such infrequent measurements is impossible. The refiner learns what was happening at a single point in time (in the past), but has no visibility into what happens in the intervening periods between measurements.
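As a rough illustration, the short Python sketch below (all units, tag names and numbers are hypothetical, not data from any real unit) simulates a six-hour chloride excursion in the overhead sour water and compares what an observer sees with minute-level online measurement against once-per-shift and once-per-day grab samples.

```python
# A minimal sketch (all numbers hypothetical): a six-hour chloride excursion
# in the overhead sour water, observed at three sampling frequencies.
import numpy as np

rng = np.random.default_rng(seed=1)
minutes = np.arange(7 * 24 * 60)                   # one week at 1-minute resolution
baseline = 10 + rng.normal(0, 0.5, minutes.size)   # ~10 ppm chloride baseline

# Inject a 6-hour excursion on day 3, peaking roughly 30 ppm above baseline
start, duration = 3 * 24 * 60, 6 * 60
bump = 30 * np.exp(-0.5 * ((np.arange(duration) - duration / 2) / (duration / 6)) ** 2)
chloride = baseline.copy()
chloride[start:start + duration] += bump

def worst_seen(signal, interval_min):
    """Highest value an observer sees when sampling every interval_min minutes."""
    return signal[::interval_min].max()

print(f"True peak:              {chloride.max():5.1f} ppm")
print(f"Online (every minute):  {worst_seen(chloride, 1):5.1f} ppm")
print(f"Grab, once per shift:   {worst_seen(chloride, 8 * 60):5.1f} ppm")
print(f"Grab, once per day:     {worst_seen(chloride, 24 * 60):5.1f} ppm")
```

Depending on where the grab samples happen to fall, the per-shift and per-day observers can report nothing but baseline for the entire week, while the online measurement captures the full excursion.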

So, what is the solution? Feedstock and operational variability cannot be eliminated and, as already mentioned, processing discounted opportunity crudes represents significant margin potential for the refiner.

Therefore, we need to look at what we can influence: increasing the frequency and speed of measurement, reducing our dependence on lagging metrics, and providing the means to obtain leading metrics that allow us to respond to events as they happen.

Data into actionable intelligence

As outlined above, the key control indicators are measured infrequently and with a long response time. Therefore, the traditional approach to monitoring these parameters can easily miss problems that occur during narrow ‘event windows’, or may detect them only after a significant event has already occurred.

For example, if an operator goes out to the crude unit and measures a low pH in the overhead sour water, the typical response will be to increase the neutraliser rate. But what if that measurement is made towards the tail end of an excursion, when the pH is already recovering on its own? The operator has no way of knowing this and so makes the change based on the available data. Neutraliser is increased just as contaminant levels are falling, which results in a gross over-injection of chemical and a high pH that goes undetected for one or more shifts. The reverse can also happen, resulting in under-injection of critical process chemicals for significant periods and increased damage from any upset. The refiner is continually reactive and can find itself ‘chasing its tail’.
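A toy simulation makes the point. In the sketch below, the pH profile, shift length, trigger points and dose logic are all hypothetical assumptions for illustration: a six-hour pH dip is caught only by the grab sample taken near its tail, so the resulting neutraliser increase persists long after the pH has recovered.

```python
# A toy simulation of the over-correction scenario described above
# (pH profile, shift length and dose logic are hypothetical).
hours = range(48)
# True overhead sour water pH: a dip between hours 20 and 26
true_ph = [5.2 if 20 <= h < 26 else 6.0 for h in hours]

dose, dose_log = 1.0, []                 # relative neutraliser rate
for h in hours:
    if h % 8 == 0:                       # grab sample taken once per shift
        if true_ph[h] < 5.5:
            dose *= 1.5                  # operator reacts to a low reading
        elif true_ph[h] > 6.5:
            dose /= 1.5
    dose_log.append(dose)

# The hour-24 sample catches only the tail of the dip, so the dose stays
# elevated long after the pH has recovered on its own at hour 26
wasted = sum(1 for h in hours if dose_log[h] > 1.0 and true_ph[h] >= 6.0)
print(f"Hours of over-injection after recovery: {wasted}")   # -> 22
```

The samples at hours 8 and 16 miss the onset entirely; the sample at hour 24 triggers a dose increase two hours before the excursion ends, and the elevated dose then runs unchecked until the next low-frequency reading happens to justify reducing it.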

The key to effectively managing the desalter and corrosion control applications is the ability to capture accurate data in real time. By capturing and identifying events as they occur, deviations of the key control indicators outside their control bands can be identified quickly, and mitigation steps can be put in place before problems have a chance to escalate. Increasing the volume and speed of good data collection has huge value for the refiner, as it allows greater insight and provides the ability to identify the root cause of any process upset.
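In software terms, the real-time check itself is simple. The sketch below (the tag names, band limits and alert action are hypothetical illustrations) tests each incoming analyser reading against its control band the moment it arrives, rather than waiting for the next grab sample.

```python
# A minimal sketch of a real-time control-band check (tag names, band
# limits and the alert action are hypothetical).
from dataclasses import dataclass

@dataclass
class ControlBand:
    tag: str      # measurement identifier, e.g. "overhead_ph"
    low: float
    high: float

    def check(self, value: float, timestamp: str) -> None:
        # In practice this would raise an alarm or notify the board operator
        if not (self.low <= value <= self.high):
            print(f"{timestamp} ALERT {self.tag}={value} "
                  f"outside [{self.low}, {self.high}]")

bands = {b.tag: b for b in [
    ControlBand("overhead_ph", 5.5, 6.5),
    ControlBand("desalted_crude_salt_ptb", 0.0, 1.0),
]}

# Simulated stream of (tag, value, timestamp) readings from online analysers
for tag, value, ts in [
    ("overhead_ph", 6.1, "08:00"),
    ("overhead_ph", 5.1, "08:01"),              # excursion flagged within a minute
    ("desalted_crude_salt_ptb", 1.8, "08:01"),
]:
    bands[tag].check(value, ts)
```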

If we look at the relationship between data, information and knowledge, it is easy to understand why current monitoring fails to provide the insight needed for problem resolution.

With the current monitoring methodology, data are generated sporadically from grab samples, each a snapshot of a single point in time. Consequently, it is difficult to place the data in the correct context and spot the patterns that help identify the root causes of many problems.

Ultimately, without the required volume of accurate, quality data, it is very difficult to establish the context that turns data into the information needed to answer the ‘what, where and when’ questions. With these gaps in the data, we risk subjectively extrapolating what we think the issue is, which can amount to a best guess or rest on assumptions that never reach the true root cause. We then lack the knowledge to answer the ‘why’, and this may ultimately compromise our ability to take the right course of corrective action.

Data are the product of observation, so it goes without saying that the more data we can collect on specific parameters, the greater clarity we have as to what is truly happening. This allows us to place the data in the correct context so that we can fully answer the ‘what, where and when’ questions, giving us the correct information, which in turn provides the knowledge to understand why an event has occurred.

