# Cost is King

In these hard times, companies are looking to cut costs, which is always a wise and prudent goal. This article examines some practical ways to lower the cost of your weather services.

## Cost vs. Price

*Cost* and *price* are rarely the same thing.

For example, Paula buys a reconditioned laptop for $500 (price), believing it will last as long as a new one. Two months later, due to a faulty power supply, the hard drive crashes and the mainboard dies. Poor Paula loses all her data for a big contract ($30,000), and she has to fork out $2,500 for a brand new laptop. Her total cost is $500 + $2,500 + $30,000 = $33,000 (cost).

Why did Paula try to minimize price instead of cost?

Clearly it was because she believed the *reconditioned laptop would last as long as a new one*. Also, she *did not take into account how laptop failure adversely impacts her business*. Paula's problem is that she does not know how to estimate the cost of a reconditioned laptop. **Price** is obvious but **cost** is not.

## How to Estimate Cost?

Companies today face exactly the same challenge: how to estimate cost?

If you're buying printing paper for the office, you know that:

- Printing paper is a standardised product.
- It does not affect other aspects of your operations.

So, you make the assumption that cost = price and pick the supplier with the lowest price.

So, for such products, lowering cost means lowering price. But clearly, at least from Paula's example, not all products fall into this category.

- Most companies simply ignore the issue (like Paula) by assuming that all alternatives to a service or product are "about the same".
- A few companies have successfully found ways of estimating cost indirectly for specific categories of products or services.

In this article, I will discuss an objective method of estimating the cost of **weather forecast services**. It is useful to do so because a weather service is not a standardised product, and a poor service can very adversely impact your offshore operations.

I'll also show you the 'secrets' of companies that have successfully found practical ways of minimizing the cost of a weather forecast service. In that process, I will also reveal some hidden facts of the weather business so you'll know what to look out for.

## The Cost of Weather Forecasts

It helps to define our terms:

- **Bad weather** is weather that prevents some offshore activity from taking place (eg, oil production, or a tow from starting).
- We will call the percentage of bad weather b%. The percentage of good weather is then 100% - b%.
- We will call the average daily operating cost $H.
- We will call the average daily revenue $R.

FPSO *Equador* produces 12,000 bpd on average while at Block 555. At oil prices of $70/barrel, the average daily revenue is R = 12,000 × $70 = $840,000.

The *Equador*'s operating and hire costs are $300,000/day (ie, H = $300,000), and the percentage of bad weather (due to rough seas or storms) is b = 3%. Given **perfect** weather forecasts, the average income per day is (R - H)×(1 - b) - H×b = R×(1 - b) - H = $840,000×(1 - 0.03) - $300,000 = $514,800/day.

> With **perfect** weather forecasts, the average daily income = R×(1 - b) - H
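As a quick sanity check, the calculation above can be scripted in a few lines of Python (a minimal sketch; the function name is illustrative, not from any library):

```python
def perfect_forecast_income(R, H, b):
    """Average daily income given perfect weather forecasts.

    R: average daily revenue, H: average daily operating cost,
    b: fraction of bad-weather days (e.g. 0.03 for 3%).
    """
    # On good days (fraction 1 - b) we earn R - H; on bad days (fraction b)
    # operations stop but we still pay H. This simplifies to R*(1 - b) - H.
    return (R - H) * (1 - b) - H * b

# The FPSO Equador example: R = $840,000, H = $300,000, b = 3%.
income = perfect_forecast_income(R=840_000, H=300_000, b=0.03)
print(round(income))  # 514800
```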

How does the price of weather forecasts impact our bottom line? Simple: it just changes the daily operating cost.

The *Equador* receives **perfect** weather forecasts from Bob's Best Weather for $100/day. This means the *Equador*'s actual operating cost is $300,100/day. Running the calculation again, we find the average daily income is now $514,700.

There's hardly any change at all to the average income. We've gained a valuable insight:

> **Insight #1**: The **price** of a weather forecast service has a negligible impact on your average daily income.

This is just common sense. For most offshore operations, operating costs or revenues run in the hundreds of thousands of dollars daily. Weather forecasts comprise a *tiny* fraction of that amount.

## Imperfect Forecasts

Of course, weather forecasts are rarely perfect.

A forecast might predict bad weather when actual conditions turn out to be good. In the case of the *Equador*, this may cause unnecessary downtime. Conversely, the forecast may expect good weather when actual conditions are bad. This is a lot worse, and may result in structural damage to the *Equador* (or its riser system) or loss of life, in addition to downtime. The matrix below depicts the various outcomes:

| | Weather is Good | Weather is Bad |
|---|---|---|
| Forecast predicts Good Weather | Operations continue as planned | Possible structural damage or loss of life, and weather downtime |
| Forecast predicts Bad Weather | Unnecessary downtime | Unavoidable weather downtime |

How does this translate to cost? Let's call the daily cost of structural damage and loss of life (ie, amortized over the period the forecasts are received) $S. The cost matrix is therefore:

| | Weather is Good | Weather is Bad |
|---|---|---|
| Forecast predicts Good Weather | R - H | (S + H) |
| Forecast predicts Bad Weather | (H) | (H) |

*Cost Matrix for Imperfect Forecasts*

The figures in parentheses denote costs. The entries for erroneous forecasts (good forecast in bad weather, and bad forecast in good weather) are unnecessary costs, which we want to reduce or eliminate altogether. The cost matrix is associated with a *probability matrix*, which assigns a probability to each cost:

| | Weather is Good | Weather is Bad |
|---|---|---|
| Forecast predicts Good Weather | P_{g} | Q_{b} |
| Forecast predicts Bad Weather | Q_{g} | P_{b} |

*Probability Matrix for Imperfect Forecasts*

Q_{g} and Q_{b} are the probabilities of erroneous forecasts. To work out the average daily income, we multiply each cost by its associated probability and sum:

Average daily income = (R - H)×P_{g} - (S + H)×Q_{b} - H×Q_{g} - H×P_{b}

which, since the probabilities must always total 100%, simplifies to:

Average daily income = R×P_{g} - S×Q_{b} - H

> With **imperfect** forecasts, the average daily income = R×P_{g} - S×Q_{b} - H

Let's key in some figures. The operator of the *Equador* goes through a 'cost-cutting' exercise.

One bright bulb at HQ says they can get *free* forecasts off the internet from All Free Weather & Pizza Services. And the accuracy is great! Based on the *Equador*'s weather thresholds, All Free estimates Q_{b} = 1%, and Q_{g} is about the same too.

The **price** of All Free's service is **zero** (they make their real money selling pizzas). But what about the cost?

To find that out we need to calculate:

- S, which is the cost of structural repair or loss of life amortized over the period the forecasts are received by the *Equador*.
- P_{g}, which is the probability of correctly forecasting good weather.

Let's say the *Equador* receives forecasts from All Free for as long as it is producing at Block 555. This is a very conservative assumption, because if the forecast is used for a shorter period of time (as would usually be the case if something bad happens due to an incorrect forecast), the amortized cost increases dramatically. So, our cost estimate will be on the **low** side.

The total cost due to structural repairs or loss of life (excluding other operating costs) is, say, $5,000,000 for one incident. So, if production lasts for 5 years, the amortized daily cost for one incident is about $2,700. That is, S = $2,700.

P_{g} can be calculated exactly, since by definition P_{g} + Q_{g} = 100% - b%, ie, P_{g} + 1% = 100% - 3%. This means P_{g} = 96%.

Using these figures, the average daily income = R×P_{g} - S×Q_{b} - H = $840,000×0.96 - $2,700×0.01 - $300,000 = $506,373/day.

So, *compared* to using Bob's Best (price = $100/day), using All Free (price = $0/day) causes the operator to **lose** $514,700 - $506,373 = $8,327/day.

> The *free* forecast service costs the operator $8,327/day.
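The whole comparison can be reproduced with a short script (a sketch, using the imperfect-forecast income formula R×P_{g} - S×Q_{b} - H, with Q_{g} = Q_{b} = 1% as quoted by All Free; the function and variable names are mine):

```python
def daily_income(R, H, S, P_g, Q_b):
    # Average daily income with imperfect forecasts: R*P_g - S*Q_b - H.
    # The forecast's price is folded into the daily operating cost H.
    return R * P_g - S * Q_b - H

R, H, S, b = 840_000, 300_000, 2_700, 0.03

# All Free: price $0/day, Q_g = Q_b = 1%, so P_g = (100% - b%) - Q_g = 96%.
all_free = daily_income(R, H + 0, S, P_g=0.96, Q_b=0.01)

# Bob's Best: price $100/day, perfect forecasts (Q_b = 0, P_g = 100% - b% = 97%).
bobs_best = daily_income(R, H + 100, S, P_g=0.97, Q_b=0.0)

print(round(all_free))              # 506373
print(round(bobs_best - all_free))  # 8327 -- the daily loss from going "free"
```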

## Cost-Based Benchmarking

In the previous section, we compared an imperfect forecast to a perfect forecast, and called the difference in average daily income the "daily cost" of the imperfect service. The number we churn out with this approach is what might be called a **cost-based benchmark** of the forecast service. It's a benchmark or "rating" since there are no perfect forecast services in reality.

Another approach would be to compare the average daily incomes of two competing imperfect forecast services. This would yield **actual** estimates of loss/gain of using one service over the other.

Both approaches are valid. In fact, the actual loss/gain of using one service over another can be calculated directly if we know each cost-based benchmark. Just subtract one from the other.

In defining the cost-based benchmark, our baseline will be a hypothetical **zero-price, perfect** forecast. In effect, we're asking: *"How much money would we lose by using forecast service X compared to knowing the weather perfectly, for free?"* That loss is the cost-based benchmark.

> **The Cost-Based Benchmark**
>
> Compared to a free, perfect forecast, the **loss** in average daily income due to utilizing an imperfect forecast whose price is $F/day is: S×Q_{b} + R×Q_{g} + F

Joe, the senior procurement engineer for the *Equador*, receives two quotes from competing weather companies. Using this formula, he calculates the cost-based benchmark for each service:

| | Wally's Weather Wizards | Red Day Weather Services |
|---|---|---|
| Price/day | $25 | $100 |
| Q_{g} & Q_{b} | 0.2% | 0.1% |
| Cost-based benchmark | $1,710 | $943 |

*Comparing Competing Quotes*

Red Day is the clear winner. It **costs** about half as much as Wally's Wizards, although its **price** is four times higher.
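Joe's calculation is easy to reproduce (a sketch of the benchmark formula S×Q_{b} + R×Q_{g} + F, using the *Equador*'s figures; the function name is illustrative):

```python
def cost_benchmark(R, S, F, Q_g, Q_b):
    # Daily loss versus a free, perfect forecast: S*Q_b + R*Q_g + F.
    return S * Q_b + R * Q_g + F

R, S = 840_000, 2_700  # the Equador's daily revenue and amortized damage cost

wallys = cost_benchmark(R, S, F=25, Q_g=0.002, Q_b=0.002)    # Q's = 0.2%
red_day = cost_benchmark(R, S, F=100, Q_g=0.001, Q_b=0.001)  # Q's = 0.1%

print(round(wallys), round(red_day))  # 1710 943
```

Note how the revenue term R×Q_{g} dominates both benchmarks: this is why accuracy, not price, drives the cost.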

From this example, I hope you can see how sensitive the cost is to forecast inaccuracy. This is because the error probabilities are magnified by potentially large numbers (the repair costs and the loss in revenue).

> **Insight #2**: The most important feature of a forecast is its accuracy.

This is common sense, but it's good to see it supported by the math (and vice versa).

## Estimating Q_{g} and Q_{b}

Without question, the most challenging aspect of using this methodology is in accurately determining Q_{g} and Q_{b}.

**The Direct Approach**: The most accurate and direct method is by comparing forecast vs. actual observations. Here's the recipe for Q_{g}:

- Count the times the actual weather conditions were below your critical operating thresholds. Record the time of each incident. Call this number N.
- For each incident, check if the forecast was for conditions exceeding your thresholds. Call the number of times this happens n.

The estimate for Q_{g} is simply n ÷ N. For example, if there were 1,000 occasions of benign weather, while the forecast failed to agree on 10 occasions, then Q_{g} = 10 ÷ 1000 = 0.01 = 1%.

If you have multiple operations, then calculate Q_{g} separately for the most costly ones.

The recipe for Q_{b} is similar: Count the times the weather exceeded your thresholds (N), and for each incident, count the times the forecast failed to do so (n). Q_{b} = n ÷ N.
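The two recipes can be sketched as a single routine operating on paired forecast/observation records (the function and variable names are mine, for illustration):

```python
def estimate_q(weather_bad, forecast_bad):
    """Direct estimates of Q_g and Q_b from paired records.

    weather_bad[i]  -- True if observed conditions exceeded your thresholds
    forecast_bad[i] -- True if the forecast exceeded your thresholds
    Returns (Q_g, Q_b).
    """
    pairs = list(zip(weather_bad, forecast_bad))
    good = [f for w, f in pairs if not w]  # good-weather occasions
    bad = [f for w, f in pairs if w]       # bad-weather occasions
    # Q_g: fraction of good-weather occasions the forecast wrongly called bad.
    q_g = sum(good) / len(good)
    # Q_b: fraction of bad-weather occasions the forecast wrongly called good.
    q_b = sum(1 for f in bad if not f) / len(bad)
    return q_g, q_b

# Toy example: 8 good-weather occasions (1 falsely forecast bad) and
# 2 bad-weather occasions (1 missed by the forecast).
weather = [False] * 7 + [True, False, True]
forecast = [False] * 6 + [True, True, False, False]
print(estimate_q(weather, forecast))  # (0.125, 0.5)
```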

The limitation of the direct approach is that it requires both forecast and observed data at the outset. Unfortunately, these are only available once operations are underway, so the direct approach can only be used as an ongoing evaluation tool for your weather service.

**The Indirect Approach**: This approach is based on Insight #2, which says that the most important feature of a forecast is its accuracy. So, we're looking for evidence that the Q's are minimized.

In the second and third part of this series, we will examine some indirect ways to evaluate a forecast service's accuracy. We will also see how savvy offshore operators select their weather services.

# Q & A

We'll answer some questions you might have:

### Q: Is this a proven methodology?

The methodology we've presented here is nothing new. In fact, it is a cornerstone of probability theory called the "expectation value", developed by the French mathematician Blaise Pascal in 1654. So, it has been around for about 350 years. The idea is now pervasive, in fields from finance to ecology.

The cost-based benchmark is really just the expected loss of using an imperfect forecast instead of a free, perfect one.

### Q: Can Q_{g} and Q_{b} be estimated from the RMS and Bias values of our verification reports?

No. Not without making some very big assumptions. The problem is that the RMS errors & Bias do not take into account your operating thresholds. These statistics have their place and we will devote an upcoming article to them.

### Q: Is the given recipe for Q_{g} and Q_{b} reliable?

The reliability depends on N: the larger N is, the more reliable the estimate. The recipe given here is just the "sample mean" value of the Q's.

If you're serious about the reliability of the Q estimates, then we suggest determining the probable lower and upper bounds of the Q's. This is known as a *confidence interval*.

Assuming the Q's are stationary and normally distributed, the confidence intervals can be determined using the t-distribution, a standard statistical tool. You can find the t-distribution in popular spreadsheet programs like Excel or OpenOffice Calc.

For example, suppose you are given the values of the Q's on a monthly basis by your weather provider (we'll make this calculation for Q_{g} only. It's the same for Q_{b}):

| Month | Q_{g} |
|---|---|
| Jun | 0.1% |
| Jul | 0.3% |
| Aug | 0.2% |
| Sep | 0.1% |

- The sample mean of Q_{g} is (0.1 + 0.3 + 0.2 + 0.1)/4 = **0.175%**.
- The sample variance of Q_{g} is:

  [(0.1 - 0.175)^{2} + (0.3 - 0.175)^{2} + (0.2 - 0.175)^{2} + (0.1 - 0.175)^{2}] / (4 - 1)
  = [(-0.075)^{2} + (0.125)^{2} + (0.025)^{2} + (-0.075)^{2}] / 3
  = (0.005625 + 0.015625 + 0.000625 + 0.005625) / 3
  = 0.0275 / 3
  = 0.009166667

- Therefore the sample standard deviation (SD) = √variance = √0.009166667 = **0.095742713%**.

Suppose we want to know with **90%** confidence the interval that Q_{g} lies in. This is, unsurprisingly, called a "**90%** confidence interval". We have **4** values, so we need to look up the t-distribution value of T(100% - (100% - **90%**)/2, **4** - 1) = T(95%,3) = 2.353.

- The lower limit of the 90% confidence interval is then:

  mean - T×(SD/√n) = 0.175 - 2.353×(0.095742713/√4) = 0.175 - 2.353×(0.095742713/2) = **0.062%**

- The upper limit of the 90% confidence interval is:

  mean + T×(SD/√n) = 0.175 + 2.353×(0.095742713/√4) = 0.175 + 2.353×(0.095742713/2) = **0.288%**

So, we are 90% confident that the true mean of Q_{g} lies between 0.062% and 0.288%. This is a large range, but is reflective of the underlying variability of Q_{g}.
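The calculation fits in a few lines of Python using the standard library's `statistics` module (a sketch; the t value of 2.353 for 3 degrees of freedom is taken from the text, rather than computed):

```python
import statistics

q_g = [0.1, 0.3, 0.2, 0.1]       # monthly Q_g values from the table, in percent

mean = statistics.mean(q_g)      # sample mean, 0.175
sd = statistics.stdev(q_g)       # sample standard deviation, ~0.0957
t = 2.353                        # t-distribution value T(95%, 3)
half = t * sd / len(q_g) ** 0.5  # half-width of the 90% confidence interval

print(round(mean - half, 3), round(mean + half, 3))  # 0.062 0.288
```

In Excel or OpenOffice Calc, the same t value comes from the built-in t-distribution functions.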

We can run these figures through the cost-based benchmark to get the upper and lower limits for the benchmark.

### Q: Can forecast accuracy be improved by using multiple weather forecast services?

It depends.

For example, **IF** you know for certain that forecast A consistently overforecasts the weather while service B consistently underestimates it, then taking a weighted average will yield a more accurate forecast.

However, whenever this (rather stringent) condition is not met, the averaged forecast will be better than the worst-performing forecast but worse than the best-performing one.

In short, you **cannot** improve the accuracy, unless you have additional information which you can take advantage of. There is no such thing as a free lunch.

However, given alternatives, you have the **option of choosing** a better performing forecast, based on your observations of local weather conditions. So, if you receive multiple forecasts, we recommend continually monitoring the forecasts and simply choosing the best performing one for your work. Continuous monitoring is important because no forecast service gets it right all the time, especially when it comes to severe weather.

Having said that, multiple forecasts can **add value** (but not accuracy) because they give you an idea of the degree of uncertainty involved. Since uncertainty translates to risk, you can use this information in your decision making.

But again, the application is not straightforward. The uncertainty you're trying to gauge depends on the relative accuracies of each forecast. For example, since most automated forecasts do not reject scenarios based on current weather conditions, it is possible for them to be **completely** wrong. So, they might predict great uncertainty when in fact there is none. Worse, local weather might not reveal this error, so the offshore operator has no way of telling if this had occurred. We have seen this happen several times, especially late in the Northeast Monsoon of the South China Sea. Some weather models **persistently** forecast the development of Tropical Disturbances, when our current weather information shows no such possibility.

It is the responsibility of your weather provider to highlight uncertain weather whenever it is likely to impact your operations. After all, isn't that what you pay us for?

# Related Links

Little Known Facts about the Weather Business

11 Secrets of Successful Offshore Operators
