Real-world catalyst testing
Catalysts are tested in laboratory and pilot plant units for purposes of catalyst research, catalyst development, and catalyst selection. Hoekstra Trading’s catalyst testing work is focused strictly on catalyst selection, which is also called application testing. Its purpose is to help refiners choose among competitive alternatives for refills of commercial hydroprocessing units.
George Hoekstra, Hoekstra Trading LLC
This article focuses on a specific question: what is the margin of error in application testing of hydroprocessing catalysts?
The catalyst landscape
Here is a list of catalyst brands that are marketed for diesel hydrotreating:
For someone trying to learn about the catalyst market, just sorting through these brands and working out which brand belongs to which supplier would be a big project.
Then, within brands, there are many flavours of catalyst. You can choose CoMo or NiMo; fresh, regenerated, or rejuvenated catalysts. There are flavours for high-pressure and low-pressure units. You can buy performance catalysts or value catalysts. You can use a single bed or stacked bed, cylindrical or quadrilobe shapes, high-density or low-density catalyst.
Eight flavours times 25 brands equals 200 options.
And the 200 options come with a multitude of performance claims. Each claim is based on its own data, mostly generated by the supplier making the claim, and applies to one of the brand/flavour permutations with no independent frame of reference to judge one claim versus competing claims.
It’s no surprise that refinery engineers get confused and frustrated by the many different brands, flavours, and claims.
For real-world application testing, there is no choice but to simplify catalyst selection.
Simplifying catalyst selection
As one step in simplifying things, we focus on cross-vendor testing of competitive CoMo products. Cross-vendor comparisons are where refiners most need data and are least likely to get it from suppliers. We rely on the catalyst suppliers to show us the differences between their CoMo, NiMo, stacked beds, and other variations of their technologies.
With this simplification, we reduced the 200 options to a grid showing the different generations of CoMo products offered by major competitors between 2000 and 2010:
These 17 CoMo catalysts were introduced by ART, Albemarle, Criterion, Haldor Topsoe, Johnson Matthey and Axens between 2000 and 2010. The products shown across the bottom row were each supplier’s first-generation Type II CoMo catalyst for diesel, introduced around 2000. As we move up the chart, we see new CoMo products as they were introduced in subsequent years.
Between 2009 and 2014, we tested 15 of these 17 CoMo products side by side using a standardised pilot plant test called the “10-20 test”. The testing programme was done on the industry’s best independent pilot plant testing platform at C Solutions in Thessaloniki, Greece, using methods that were developed and agreed by top technical representatives of all major catalyst suppliers.
This produced a relative activity ranking of CoMo catalysts spanning several generations of offerings from all major suppliers. The highest activity catalyst in the lot was 75% more active than the lowest. The catalysts clustered into three performance tiers separated by 30 points of relative desulphurisation activity. If we define the lowest activity catalyst to have activity = 100, the performance tiers are:
• Top tier: Activity 175 +/- 15
• Middle tier: Activity 145 +/- 15
• Bottom tier: Activity 115 +/- 15
This ranking of CoMo brands puts some structure into the catalyst landscape and helps make sense of the proliferation of brands, flavours, and claims.
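As a simple illustration, the tiering rule above can be written as a small classification function. This is a sketch of ours, not code from the testing programme; the boundaries at 130 and 160 are just the midpoints between adjacent tier bands:

```python
def performance_tier(activity: float) -> str:
    """Assign a CoMo catalyst to a performance tier.

    `activity` is relative desulphurisation activity, with the
    lowest-activity catalyst in the tested lot defined as 100.
    Tier midpoints (115, 145, 175) are separated by 30 points,
    each with a +/-15 band, so adjacent bands meet at 130 and 160.
    """
    if activity >= 160:
        return "top"       # 175 +/- 15
    elif activity >= 130:
        return "middle"    # 145 +/- 15
    else:
        return "bottom"    # 115 +/- 15

print(performance_tier(100))  # bottom
print(performance_tier(150))  # middle
print(performance_tier(175))  # top
```

The point of a rule this coarse is deliberate: only differences larger than the band width are treated as real differences between catalysts.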
A fit-for-purpose catalyst ranking
In years four and five of the programme, we tested variations of the core CoMo products, including NiMos, trimetallics, low density catalysts, regenerated and rejuvenated catalysts, bringing the total number of catalyst samples tested to 40. The final ranking from the five-year programme is fit for the purpose of choosing refill catalyst for commercial hydrotreating units; and for those who want to do their own catalyst testing, it provides a much better starting point than testing isolated candidates or trying to sort through the clutter on your own.
Catalyst ranking tiers
It is important to note that the vertical axis in Figure 2 is not activity; it represents the years in which the different catalysts were introduced. On an activity scale, the catalysts do not line up as Figure 2 suggests.
Our testing programme produced some surprising results. For example, the first-generation Type II CoMo catalysts from two of the suppliers tested equal to the third-generation Type II catalyst of a third supplier, even though the third supplier’s catalyst claimed to be 40% more active than first-generation Type II catalyst. As a rule, differences between different generations of catalysts are smaller than would be expected from published information and marketing claims.
With good test data in hand, we confront the need to draw lines between tiers, and the critical question arises: what is the margin of error? In addressing this, we will refer to a 2006 NPRA paper describing the standardised catalyst testing done by BP at C Solutions starting in 2001.
BP’s standardised catalyst testing programme
This excerpt from a 2006 NPRA presentation shows how BP’s hydroprocessing network ranked a group of early-vintage Type II catalysts for purposes of catalyst selection:
Two catalysts, represented in gold, shared the top ranking. Six catalysts, represented by silver bars, tied for second place. And three catalysts, represented in bronze, placed third.
BP did not divide these catalysts into more than three tiers because replicate testing showed that finer differences cannot be expected to translate into real differences in commercial unit performance. For purposes of catalyst selection, all catalysts falling within a tier were considered equal.
When catalyst rankings reflect only those differences that significantly exceed the margin of error, it lends credence to the rankings, and refinery engineers will use those rankings with confidence to drive catalyst refill decisions.
The margin of error
In every test run, we put a reference catalyst in one of the four reactors. When you draw aliquots of reference catalyst from a one-litre sample for testing and test them in different pilot plant runs over a period of time using the same batch of reference feed, you will see differences in the results. On even the best testing platform, it is common to see variations of at least 10% when testing the same reference catalyst over time.
Sources of variance include differences in the actual activity of different catalyst aliquots (not every extrudate has the same activity), variation in the reference feed (petroleum is not homogeneous, nor inert), experimental errors in measuring the catalyst quantity and loading reactors, variance in feed rates, reactor temperature, vapour/liquid separation and product analyses. These all contribute to inevitable differences in the measured activity.
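These combined sources of variance show up as a spread in replicate results for the same reference catalyst, which can be summarised as a percent range about the mean. The sketch below uses hypothetical activity numbers, not data from the programme:

```python
# Hypothetical relative activities measured for aliquots of the same
# reference catalyst in replicate pilot plant runs over time.
replicates = [100.0, 104.5, 96.2, 102.8, 99.1]

mean = sum(replicates) / len(replicates)

# Range of replicate results, expressed as a percent of the mean.
spread_pct = 100.0 * (max(replicates) - min(replicates)) / mean
print(f"range of replicate activities: {spread_pct:.1f}% of the mean")
```

A spread of this size on replicates of a single catalyst is why differences smaller than roughly 10% should not be read as real differences between competing catalysts.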
This chart of product sulphur versus days on stream shows the reproducibility of our 10-20 test as measured on a reference catalyst in the first three pilot plant runs which were done over a period of 18 months:
Feed and temperature are changed at three-day intervals. The slopes in each three-day segment show the transient line-out as the pilot plant and catalyst respond to each change in test conditions. The differences between these curves provide a realistic estimate of the margin of error that can be achieved in real-world testing over time; here, they correspond to a 10% range in desulphurisation activity.
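Translating differences in measured product sulphur into a percent range in desulphurisation activity requires a kinetic model. A common industry convention, shown here only as a sketch (the actual order and formula used in the 10-20 test are not stated in this article), is an apparent nth-order HDS rate expression with an order around 1.3, applied to candidate and reference catalysts tested on the same feed at the same conditions:

```python
def hds_rate_constant(s_feed_ppm: float, s_prod_ppm: float,
                      lhsv: float, order: float = 1.3) -> float:
    """Apparent HDS rate constant assuming nth-order kinetics (n != 1).

    k = LHSV/(n-1) * (Sp^(1-n) - Sf^(1-n)), a common convention for
    deep desulphurisation; the order 1.3 is an assumed value.
    """
    n = order
    return lhsv / (n - 1.0) * (s_prod_ppm ** (1.0 - n)
                               - s_feed_ppm ** (1.0 - n))

def relative_activity(s_prod_cat: float, s_prod_ref: float,
                      s_feed: float, lhsv: float,
                      order: float = 1.3) -> float:
    """Relative volumetric activity of a candidate vs a reference
    catalyst run on the same feed at the same LHSV (reference = 100)."""
    k_cat = hds_rate_constant(s_feed, s_prod_cat, lhsv, order)
    k_ref = hds_rate_constant(s_feed, s_prod_ref, lhsv, order)
    return 100.0 * k_cat / k_ref

# Hypothetical numbers: 10,000 ppm sulphur feed, reference catalyst
# reaching 20 ppm product, candidate reaching 10 ppm at the same LHSV.
rva = relative_activity(10.0, 20.0, 10000.0, 1.0)
print(f"relative activity: {rva:.0f}")  # ~127: candidate about 27% more active
```

Note how strongly the result leans on the assumed order: small analytical errors at very low product sulphur translate into sizeable swings in computed activity, which is one reason replicate reference runs are essential.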