To evaluate a concession, a play, or a prospect with multiple objectives, it may be necessary to calculate the sum of individual expectation curves. That process is **addition**, or **aggregation**.
In other situations there may be more than one expectation curve for a single prospect or objective, because alternative evaluations were made under different assumptions. The latter problem is one of **merging**.

The following problems deserve attention:

- Addition of distribution parameters
- Dependence of volume distributions
- Analytical versus Monte Carlo addition
- Combination of Probabilities of Success (POS)
- Merging alternative hypotheses

Discovery | Expectation | P90 | P50 | P10
---|---|---|---|---
Abel | 1.467 | 0.502 | 1.349 | 2.686
Barnabas | 3.340 | 0.438 | 2.906 | 6.771
Cherub | 6.682 | 3.544 | 6.708 | 9.747
Total of the 3 discoveries (column totals) | 11.489 | 4.484 | 10.963 | 19.204
Stochastic addition | 11.49 | 7.11 | 11.28 | 16.17

Note the difference between the arithmetic sums of the P90 and P10 columns and the P90 and P10 of the stochastic addition. Adding independent expectation curves stochastically reduces the coefficient of variation (the standard deviation divided by the mean). That is reflected in the narrower P90 to P10 range in the last row of the table.

The procedure for a Monte Carlo addition is given schematically below. It may have to involve a "shake" subroutine to randomize the sequence of items in a vector. This can be done by creating a second vector, filling it with random numbers between 0 and 1 (the RND function), and then sorting this vector while carrying the original vector along.
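The "shake" step described above can be sketched as follows (a minimal Python illustration; the function name is ours, not from the original program):

```python
import random

def shake(values):
    """Randomize the order of a vector by pairing each element with a
    uniform random key (the RND step) and sorting on that key, carrying
    the original values along -- the procedure described above."""
    keys = [random.random() for _ in values]   # second vector of RND numbers
    paired = sorted(zip(keys, values))         # sort the keys, carry the values
    return [v for _, v in paired]

shuffled = shake([1.467, 3.340, 6.682, 0.502])
```

Sorting on a random key is equivalent to a random shuffle as long as the keys are unique, which is effectively always true for floating-point RND draws.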

Dependence means, statistically, that there is covariance. Prospects in the same setting may have part of their uncertainty (variance) in common. For instance, if there is doubt about the source rock, this doubt may apply to all the prospects there. This effect is called here "commonality". For this reason it is useful if an appraisal system gives an estimate of this commonality.

The procedure to estimate dependence between two prospects consists of:

- A statistical estimate, based on ANOVA (Nederlof, 1997).
- A geological adjustment of the statistical dependence through subjectively estimating the geological commonality.

The adjustment will, in general, reduce the statistical dependency estimate. The final dependency estimate is the correlation coefficient **r**.

Another suggestion to estimate interdependence is the "kriging" approach. In that case the geological reasoning is less detailed than in the method above. The kriging logic is that the success rate and/or size of the accumulations are spatially correlated. The zone of influence, or range, is used e.g. in the Gaea50 program for a Bayesian update of P[HC] (probability of charge) based on the evidence from previous drilling. For interdependence the notion of spatial dependence has been used by Wees et al. (2006).

How does dependence affect the addition of distributions? For the mean of the sum (Prospect 1 + Prospect 2) the addition is simply the sum of the means, regardless of the dependence: Mean_{(1+2)} = Mean_{1} + Mean_{2}.

For the variance with *independence*, the variance of the sum is the sum of the individual variances:

Var_{(1+2)} = Var_{1} + Var_{2}

With *dependence* a covariance term is added:

Var_{(1+2)} = Var_{1} + Var_{2} + 2 ρ σ_{1} σ_{2}

where σ_{1} and σ_{2} are the standard deviations and ρ is the correlation coefficient.

For instance: prospect A has a mean (expectation) of 100 mb and prospect B of 400 mb. The variance of A is 121 and that of B is 1296. The dependence (fraction) here is 0.40 (in terms of R-square), which means that we think that 40% of the variance is common to prospects A and B. This is similar to estimating the R-square of a regression. However, the formula requires the correlation coefficient, which is more difficult to estimate than the R-square. (So **ρ** = 0.6325, the square root of 0.40.)

Then the distribution of the sum A + B will have a mean of 100 + 400 = 500 and a variance of approximately: 121 + 1296 + 2 × 0.6325 × 11 × 36 ≈ 1918.

Note that in the case of total independence the variance of the sum would be 121 + 1296 = 1417, and in the case of complete dependence (100%, or ρ = 1.0) it would be (11 + 36)² = 2209.
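The variance formula and the worked example can be checked with a few lines of Python (the figures below follow directly from the stated variances 121 and 1296):

```python
import math

def var_of_sum(var_a, var_b, rho):
    """Variance of the sum of two dependent distributions:
    Var(A+B) = Var(A) + Var(B) + 2 * rho * sd(A) * sd(B)."""
    return var_a + var_b + 2.0 * rho * math.sqrt(var_a) * math.sqrt(var_b)

rho = math.sqrt(0.40)                    # R-square of 0.40 -> rho ~ 0.6325
v     = var_of_sum(121.0, 1296.0, rho)   # the dependent example, ~1918
v_ind = var_of_sum(121.0, 1296.0, 0.0)   # independence: 121 + 1296 = 1417
v_dep = var_of_sum(121.0, 1296.0, 1.0)   # full dependence: (11 + 36)^2 = 2209
```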

**Negative dependence.** In the example above we have not considered a negative correlation. In prospect appraisal a negative dependence can occur if two prospects or targets compete for the same hydrocarbon charge. Imagine a single structure with two objective horizons, both to be charged by a deeper source rock. If there are doubts about the seal separating the two reservoirs *and* the charge is thought to be limited, we get an "either/or" situation: the upper reservoir wins if the seal covering its deeper competitor is bad, while the alternative outcome is a charged lower target and a dry upper one.

An interesting aspect of this example is that the geological commonality is zero for the (top) seal aspect and 100% for the source or charge.

In the above formula for the variance of the sum of dependent distributions, the third term is then subtracted instead of added, due to the negative ρ. This results, of course, in a smaller variance than in the example above. This can be explained by imagining two vectors of numbers A and B, as used in a Monte Carlo analysis. Negative correlation means that relatively often a small value in A will be added to a large value in B. With positive correlation, large values in A will meet large values in B, causing a wider distribution range for A + B than in the negative ρ case.

A negative correlation can be simulated by partially sorting vector A in descending order, while sorting the corresponding part of vector B in ascending order.
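The partial opposite-sorting trick can be sketched as follows (function and parameter names are ours; the fraction controls how strong the induced negative correlation is):

```python
import random

def induce_negative_correlation(a, b, fraction):
    """Partially sort vector a in descending order and the corresponding
    part of vector b in ascending order, as described above, so that large
    values of a tend to meet small values of b (negative correlation)."""
    n = int(len(a) * fraction)            # how many pairs to sort oppositely
    a_out = sorted(a[:n], reverse=True) + a[n:]
    b_out = sorted(b[:n]) + b[n:]
    return a_out, b_out
```

With fraction = 1.0 the two vectors become fully opposite-sorted, which gives the most negative correlation attainable for those two sets of numbers.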

**Unrisked Sum**. Although the above calculation can be done by hand, the "unrisked" sum of prospects requires a Monte Carlo addition, including the correlation, whereby in each cycle a prospect resource is added to the total, but only if the binomial simulation of its POS yields a success. For the Mean Success Volume a hand calculation might again suffice, provided the means of the unrisked volumes of the prospects are available: each mean is multiplied by its POS and added to the sum.
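The cycle described above can be sketched as follows. The POS values and volume distributions are illustrative assumptions, and the prospects are treated as independent here for simplicity (the correlation machinery is omitted):

```python
import random

def risked_sum_trials(prospects, n_trials=10000, seed=42):
    """Monte Carlo addition of risked prospect volumes.  Each prospect is
    a (pos, sampler) pair; per cycle its volume is added to the total only
    if a uniform draw falls below its POS -- the binomial simulation of
    success described above."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for pos, sampler in prospects:
            if rng.random() < pos:        # binomial success?
                total += sampler(rng)     # draw an unrisked volume
        totals.append(total)
    return totals

# Illustrative prospects: POS plus a lognormal volume sampler each.
prospects = [
    (0.90, lambda rng: rng.lognormvariate(0.0, 0.5)),
    (0.50, lambda rng: rng.lognormvariate(1.0, 0.5)),
    (0.20, lambda rng: rng.lognormvariate(1.5, 0.5)),
]
totals = risked_sum_trials(prospects)
```

Sorting `totals` in descending order gives the expectation curve of the sum, zeros included, from which the combined POS can be read directly.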

However, it may happen that for aggregation only a table with percentiles for a set of discoveries is available. If the P90, P50, Mean and P10 are available, the following shortcut avoids the Monte Carlo addition procedure, but gives the result only under the assumption of complete independence. The sum of a set of distributions has a mean equal to the sum of the individual means. Also, provided that the prospects are not dependent, the variance of the total is the sum of the variances. So, for each prospect we estimate the variance from the difference between the P10 and the P90 in terms of standard deviations. For a normal distribution, (P10 - P90) = 2.5631 standard deviations. If the individual prospects show a lognormal distribution, the same formula holds in log space: (ln(P10) - ln(P90)) = 2.5631 standard deviations (of the logarithms). Here are the formulas for both cases.

The normal distribution case for the sum of n distributions, where the mean of the sum is the sum of the means, but the percentiles are:

σ_{i} = (P10_{i} - P90_{i}) / 2.5631

σ_{sum} = √(σ_{1}² + σ_{2}² + ... + σ_{n}²)

P90_{sum} = Mean_{sum} - 1.2816 σ_{sum}, P50_{sum} = Mean_{sum}, P10_{sum} = Mean_{sum} + 1.2816 σ_{sum}
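The normal-case shortcut can be sketched as follows (function name ours; input is a list of (mean, P90, P10) triples, assumed independent and normal):

```python
import math

Z10 = 1.2816   # standard normal z for the 10% tails (P10 and P90)

def add_normal_shortcut(prospects):
    """Analytical addition of independent, normally distributed prospects
    given (mean, P90, P10) per prospect -- the shortcut described above."""
    mean_sum = sum(m for m, _, _ in prospects)
    # (P10 - P90) spans 2 * 1.2816 = 2.5631 standard deviations
    var_sum = sum(((p10 - p90) / 2.5631) ** 2 for _, p90, p10 in prospects)
    sd = math.sqrt(var_sum)
    return {"mean": mean_sum,
            "P90": mean_sum - Z10 * sd,
            "P50": mean_sum,
            "P10": mean_sum + Z10 * sd}
```

Because the variances add but the standard deviations do not, the P90 to P10 range of the sum is narrower than the arithmetic sum of the individual ranges, exactly the effect seen in the table above.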

For lognormal distributions the distribution of the sum is probably neither lognormal nor normal. But we could use both models, realizing that reality will be somewhere in between; the larger n, the more closely the sum distribution will approach the normal. The procedure here is more complicated, as we need to estimate the linear variance of the individual distributions, which is not as straightforward as in the normal case. Note also that the mean of the lognormal (in logs) is the log of the P50.

In the above formula "ln" stands for the natural logarithm, to the base e (≈ 2.71828).
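The lognormal case can be sketched as follows: recover the log-space parameters from the percentiles, convert to linear mean and variance, add, and then report both bracketing models (function names ours; inputs are (P90, P50, P10) triples, assumed independent and lognormal):

```python
import math

Z10 = 1.2816   # standard normal z for the 10% tails

def lognormal_params(p90, p50, p10):
    """Log-space mean and sd from percentiles: mu = ln(P50),
    sigma = (ln(P10) - ln(P90)) / 2.5631."""
    mu = math.log(p50)
    sigma = (math.log(p10) - math.log(p90)) / 2.5631
    return mu, sigma

def add_lognormals(prospects):
    """Sum independent lognormal prospects given (P90, P50, P10) each.
    Returns the linear mean of the sum plus two approximations bracketing
    reality: treat the sum as normal, or moment-match a lognormal."""
    mean_sum = var_sum = 0.0
    for p90, p50, p10 in prospects:
        mu, sigma = lognormal_params(p90, p50, p10)
        mean_sum += math.exp(mu + sigma**2 / 2.0)                   # linear mean
        var_sum += (math.exp(sigma**2) - 1.0) * math.exp(2*mu + sigma**2)
    sd = math.sqrt(var_sum)
    normal = {"P90": mean_sum - Z10*sd, "P50": mean_sum, "P10": mean_sum + Z10*sd}
    # moment-matched lognormal for the sum
    s2 = math.log(1.0 + var_sum / mean_sum**2)
    mu_t = math.log(mean_sum) - s2 / 2.0
    s = math.sqrt(s2)
    logn = {"P90": math.exp(mu_t - Z10*s),
            "P50": math.exp(mu_t),
            "P10": math.exp(mu_t + Z10*s)}
    return mean_sum, normal, logn
```

For large n the normal answer is the better model; for a few strongly skewed prospects the lognormal answer is closer.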

The above shortcuts are based on the assumption that the individual distributions have the shape of a normal or lognormal. If there is risk in the form of many zero elements in the distribution (an exploration prospect), then addition is statistically unsound because of the distribution shapes, and the addition of the non-risked P90, P50 and P10 is meaningless and misleading. In the risk case, only add the expectations! And use a Monte Carlo system if the whole sum distribution is required.

This is a common problem in the case of several prospects in a concession or play. If we have estimated the individual prospect chances, what then is the probability of making at least one discovery? This is sometimes called the "**play risk**". The easy way to calculate this is to ask "what is the probability that none of the prospects is a discovery?" and then take the complement. Take an example with three **independent** prospects with POS values of 0.90, 0.50 and 0.20. In the case of complete dependence the combined POS would be 0.90 (the maximum individual POS), while the independent combined POS is 0.96: POS = 1 - (1 - 0.90)(1 - 0.50)(1 - 0.20) = 1 - 0.04 = 0.96.

The above formula becomes simpler if we have an equal probability of success p for all n prospects: POS = 1 - (1 - p)^{n}.
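Both the general complement rule and the equal-probability shortcut fit in a few lines (function name ours):

```python
def play_pos(pos_list):
    """Probability of at least one discovery among independent prospects:
    one minus the probability that every prospect fails."""
    p_all_fail = 1.0
    for pos in pos_list:
        p_all_fail *= (1.0 - pos)
    return 1.0 - p_all_fail

play_pos([0.90, 0.50, 0.20])        # the worked example, ~0.96
1.0 - (1.0 - 0.30) ** 4             # equal-p shortcut: 4 prospects at p = 0.30
```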

Above we have assumed independent prospects. For completely dependent prospects the combined POS reduces to the maximum individual POS, as discussed below.

See also Rose (2001, Appendix D) for a few more examples. By the way, in a Monte Carlo addition complete vectors of possible reserve volumes, including the zeros, are added together with various degrees of dependence. The final POS can then be read from the sorted vector of the result, this being the expectation curve of the sum. In that case the POS calculations above are not necessary.

When the prospects have full dependence, the POS of the sum (POS_{Dependent}) will be equal to the maximum POS in the prospect set.

In the case of complete independence of the prospects, the maximum combined POS (POS_{Independent}) is reached. Between these two extremes we can interpolate linearly, using the correlation coefficient **r**. Note that POS_{Independent} >= POS_{Dependent} in all cases.

In formulas:

For 100% dependent prospects (**r** = 1.0) the POS_{dep} of the sum becomes: POS_{dep} = max(POS_{1}, POS_{2}, ..., POS_{n})

For independent prospects (**r** = 0.0) the POS_{ind} of the sum becomes: POS_{ind} = 1 - (1 - POS_{1})(1 - POS_{2})...(1 - POS_{n})

For any **r** the formula is: POS_{sum} = POS_{ind} - r × (POS_{ind} - POS_{dep})
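The three cases above can be combined into one small function (name ours), interpolating linearly between the independent and fully dependent extremes:

```python
def combined_pos(pos_list, r):
    """Combined POS of a prospect set with dependence r: linear
    interpolation between the independent case (r = 0) and the fully
    dependent case (r = 1), as described above."""
    pos_dep = max(pos_list)                  # fully dependent: maximum POS
    p_all_fail = 1.0
    for pos in pos_list:
        p_all_fail *= (1.0 - pos)
    pos_ind = 1.0 - p_all_fail               # independent: complement rule
    return pos_ind - r * (pos_ind - pos_dep)

combined_pos([0.90, 0.50, 0.20], 0.5)        # halfway: ~0.93
```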

That the difference between the dependent and independent case is proportional to **r** can be demonstrated with the following Monte Carlo result, in which three prospects were summed with the individual POS values as indicated below the graph, of which 0.90 is the maximum:

The merge process is as follows:
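The details of the merge step are program-specific; as one plausible sketch (an assumption on our part, not necessarily the MAD procedure), alternative expectation curves for the same prospect can be merged by sampling each alternative in proportion to the weight assigned to its hypothesis:

```python
import random

def merge_curves(alternatives, n_trials=10000, seed=7):
    """Merge alternative evaluations of one prospect by Monte Carlo:
    per cycle, pick one hypothesis with probability equal to its weight
    and draw a volume from that hypothesis's distribution.  A sketch
    under assumed weights, not a definitive implementation."""
    rng = random.Random(seed)
    weights = [w for w, _ in alternatives]
    samplers = [s for _, s in alternatives]
    draws = []
    for _ in range(n_trials):
        sampler = rng.choices(samplers, weights=weights)[0]
        draws.append(sampler(rng))
    draws.sort(reverse=True)   # sorted descending = merged expectation curve
    return draws

# Two alternative evaluations, weighted 0.7 / 0.3 (illustrative numbers)
merged = merge_curves([
    (0.7, lambda rng: rng.lognormvariate(0.0, 0.4)),
    (0.3, lambda rng: rng.lognormvariate(0.8, 0.6)),
])
```

The merged curve is then a probability-weighted mixture of the alternatives: its mean is the weighted average of the alternative means, while its spread also reflects the disagreement between the hypotheses.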

The Gaeatools program **MAD** (Merging and Addition of distributions) allows addition with user-defined degrees of dependence (covariance).
It allows a hierarchical input of expectation curves.