The notion of probability was possibly first developed by
Gerolamo Cardano (1501-1576), an Italian Renaissance mathematician, physician, astrologer and gambler. Cardano was notoriously short of money and kept himself solvent by being an accomplished gambler and chess player. His book about games of chance, Liber de ludo aleae ("Book on Games of Chance"), written in 1526, contains the first systematic treatment of probability, as well as a section on effective cheating methods.
Assume we cast a die. The possible outcomes are 1, 2, 3, 4, 5 or 6. We repeat the experiment and count the number of cases in which 6 comes on top. That is the frequency. An experiment with 60 throws of the die will, theoretically, give a 6 ten times. The frequentistic probability is then 10/60 ≈ 0.167. This concept of probability depends on repeated experiments. In the practice of geology, uncertainty may exist about the "state" of a situation, which is known by God, but not by the geologist. If he does not have a database with known similar situations, he does not have any frequencies to use. But he does have a subjective feeling that this state is more probable than that state in this situation. Therefore we will accept the concept of a subjective probability, even in a basically quantitative appraisal model.
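The frequentistic idea can be illustrated with a short simulation; the function name below is my own, and the die is assumed fair:

```python
import random

def frequency_of_six(n_throws, rng=random):
    """Estimate P(6) as the relative frequency of sixes in n_throws casts of a fair die."""
    hits = sum(1 for _ in range(n_throws) if rng.randint(1, 6) == 6)
    return hits / n_throws

random.seed(42)
print(frequency_of_six(60))       # a small sample: may deviate noticeably from 1/6
print(frequency_of_six(600_000))  # a large sample: close to 1/6 ≈ 0.167
```

With only 60 throws the estimate scatters widely around 0.167, which is exactly why a subjective probability is needed when no large "sample" of comparable geological situations exists.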
The problems of estimating a probability or the distribution of an uncertain quantity have been studied by psychologists in a wide range of contexts (Tversky & Kahneman, 1974, 2007).
In an experiment, 30 subjects were asked to estimate the relative likelihoods of successive steps in this scale of degrees of confidence. Interestingly, subjects would switch to the next higher term, or the next lower, when the (true) likelihood changed by a factor of 2. This result is consistent with biological research on stimuli, where, for instance, increased pressure on the skin will trigger a response only when the pressure is doubled, and therefore appears to justify the above weights.
The mechanics of using this scheme are to calculate the expectation from the scores assigned to the 8 classes, using these weights. In the case of a probability we would end up with a single estimate of the probability, but one which should better represent a subject's thinking than asking immediately for this single number. Note that the mechanism should return zero if there is any score assigned to Impossible, or 1 if there is any score for Certain. These extremes simply override any intermediate statements. Although the method might appeal to some because of the use of common words as a starting point, it is not proven that it has any advantage over a simpler approach.
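The mechanics can be sketched as follows. The verbal class names and the likelihood weights below are hypothetical stand-ins, since the original weights are not listed here; only the override rules for Impossible and Certain come from the text:

```python
# Hypothetical verbal classes and weights; the source does not list them here.
CLASSES = ["Impossible", "Very unlikely", "Unlikely", "Doubtful",
           "Probable", "Likely", "Very likely", "Certain"]
WEIGHTS = [0.0, 0.05, 0.15, 0.30, 0.50, 0.70, 0.90, 1.0]  # assumed values

def expected_probability(scores):
    """Expectation over the 8 verbal classes; scores are non-negative weights."""
    if scores[0] > 0:    # any score on "Impossible" overrides everything
        return 0.0
    if scores[-1] > 0:   # any score on "Certain" overrides everything
        return 1.0
    total = sum(scores)
    assert total > 0, "at least one class must receive a score"
    return sum(w * s for w, s in zip(WEIGHTS, scores)) / total

print(expected_probability([0, 0, 10, 30, 40, 20, 0, 0]))  # 0.445 with these assumed weights
```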
In practice, the above scheme does not deviate significantly from one in which we let a geologist start from a numeric scheme of estimating a histogram. First the histogram classes are fixed. For a probability one could use e.g. 10 classes of width 0.1. Then the user assigns scores subjectively to these classes. These scores are the weights to be assigned to the class midpoints. Here the scores are percentages, the sum of which should be 100.
This method is the "user-defined histogram" or "UDF". The class boundaries are always chosen to be relevant to the variable to be estimated.
Here is an example:
|Classes|0.0 - 0.1|0.1 - 0.2|0.2 - 0.3|0.3 - 0.4|0.4 - 0.5|0.5 - 0.6|0.6 - 0.7|0.7 - 0.8|0.8 - 0.9|0.9 - 1.0|
|Scores (%)|0|5|15|50|20|10|0|0|0|0|
This statement would give a probability of 0.365, the weighted mean value of this histogram (the expectation):
(5 × 0.15 + 15 × 0.25 + 50 × 0.35 + 20 × 0.45 + 10 × 0.55) / 100 = 0.365.
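The calculation above can be written as a small routine; the function name is my own:

```python
def udf_expectation(class_bounds, scores):
    """Weighted mean of a user-defined histogram: percentage scores
    are applied to the class midpoints."""
    assert abs(sum(scores) - 100) < 1e-9, "scores should sum to 100"
    midpoints = [(lo + hi) / 2 for lo, hi in class_bounds]
    return sum(m * s for m, s in zip(midpoints, scores)) / 100

# The ten probability classes of width 0.1, and the scores from the example
bounds = [(i / 10, (i + 1) / 10) for i in range(10)]
scores = [0, 5, 15, 50, 20, 10, 0, 0, 0, 0]
print(udf_expectation(bounds, scores))  # ≈ 0.365
```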
There are some drawbacks to this approach. In the example it is impossible to get any probability less than 0.05, or greater than 0.95. This is quite unrealistic. However, how good are we at estimating extreme chances? My experiments with a fairly large number of subjects showed that, when subjects were asked to give confidence intervals for some variable to be estimated, these were most of the time too narrow. That is: too many true values fell outside their 95% confidence intervals (it should be only 5%). Even worse are the results for the 99% confidence interval. This is somewhat in contrast with the extremes of the probability range in the figure given above: people give a much narrower range for probabilities at the extreme ends, which means there is consensus about what a particular word means. However, consensus does not mean we are right!
For estimating a value, such as an average porosity, the UDF appears straightforward. But what about estimating a probability, as we did in the above example? Does it make sense to be uncertain about a probability? To answer that question we have to return for a moment to the frequentistic probability. If the probability is calculated as the ratio of the number of particular events over all events in a limited sample, we can use statistical rules to establish how uncertain our probability estimate is. In geological practice, we rarely have such data to lean on. Some statisticians would say that we have no "probability" at all if we estimate it subjectively (or, if you wish, intuitively), based on some experience database in our brains. They would call it a "degree of confidence", a term that we already used above for the scores. In practice, such a distinction is not too important. This leaves us with the fact, like it or not, that there is usually uncertainty about an estimated probability (P) or about a degree of confidence.
In further use of these probabilities it may matter whether we use a single number for P or a distribution ("histogram") of possible P values.
These probability rules are important when evaluating a concession with more than one prospect (Damsleth, 1993), to arrive at an expectation curve for the concession.
My definition of the word "risk" is the probability of failure. Hence it is the complement of the probability of success (Risk = 1 - POS). Risk is the upper part of the vertical axis of the expectation curve, down to where the curve turns off to the right.
In prospect appraisal, risk is used in the sense of "straight risk": the probability that a certain essential ingredient of the petroleum system is absent, for instance the risk that there is no viable reservoir. The straight risk is usually a subjective point estimate of a probability.
In some cases probabilities are added. For instance: what is the probability of throwing more than 3 with a die? That is P(4) + P(5) + P(6) = 3/6 = 0.50.
In prospect appraisal the more common situation is that we multiply probabilities. When a number of conditions affect prospectivity, the product of the individual probabilities gives the final probability of success. This simple solution requires that the individual factors are independent. If not, the covariance among the factors has to be taken into account.
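A minimal sketch of the multiplication rule, together with the risk definition above (Risk = 1 - POS); the four factor values are illustrative, not from the source:

```python
# Illustrative chance factors for four independent conditions
# (e.g. reservoir, seal, charge, trap); the values are assumed.
chance_factors = [0.9, 0.8, 0.7, 0.5]

pos = 1.0
for p in chance_factors:
    pos *= p          # valid only if the factors are independent

risk = 1 - pos
print(pos, risk)      # 0.252 and 0.748, up to floating-point rounding
```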
Here is an experiment based on Monte Carlo modeling with 100,000 cycles.
In a certain prospect appraisal scheme we use four chances of fulfillment.
The final chance of fulfillment (pf) is the product of the four chances. If it is instead calculated as the mean of the Monte Carlo-generated products, we call it "pfp".
Point estimates are given for these chances. If the average of these chances is low, say 0.10, then the error made when these chances are not exact can be significant. For instance, with a 10% uncertainty (the standard deviation of a normal distribution around the subjective point estimate), the final product of probabilities is some 25% too low!
If the average chance factor is 0.8, the bias is as much as a 5% overestimate for a standard deviation of 10%, a 9% overestimate for a standard deviation of 20%, and a 26% upward bias for a 30% standard deviation.
Here is the result for two cases of uncertainty in the estimates of the individual probabilities: one for a standard deviation of 10%, the other of 20%. We graph the ratio of pf over pfp versus the average of the four input probabilities:
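The experiment can be sketched as below. Two assumptions are made explicit here because the source does not fully specify its noise model: the "10% uncertainty" is taken as an absolute standard deviation of 0.10, and sampled probabilities are clipped to [0, 1]. The exact bias values will depend on these choices.

```python
import random

def pf_vs_pfp(point_estimates, sd, n_cycles=100_000, rng=random):
    """Compare the product of the point estimates (pf) with the mean of the
    Monte Carlo-generated products (pfp), where each probability receives
    additive normal noise of standard deviation sd, clipped to [0, 1]."""
    pf = 1.0
    for p in point_estimates:
        pf *= p
    total = 0.0
    for _ in range(n_cycles):
        prod = 1.0
        for p in point_estimates:
            prod *= min(1.0, max(0.0, rng.gauss(p, sd)))
        total += prod
    return pf, total / n_cycles

random.seed(7)
pf, pfp = pf_vs_pfp([0.10] * 4, sd=0.10)
print(pf / pfp)  # noticeably below 1: the point-estimate product is too low
```

For low chance factors the clipping at zero pushes the Monte Carlo mean up, so pf/pfp falls well below 1; for high chance factors the clipping at one works the other way.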
In summary, the final chance of fulfillment is the product of the individual chance factors for the "ingredients" of the petroleum system. If, on average, these chance factors are low, the probability of success will be underestimated, compared to a system that takes the uncertainty into account. For chance factors that average above 50%, the bias will be to inflate, or overestimate, the probability of success. The correct product of the individual probabilities must be the Monte Carlo-generated pfp, because it cannot be argued that our estimates of the p's are absolutely sure. Just think of a Delphi exercise in estimating the four p's. There would be a lot of disagreement, finally hidden by consensus numbers. Fortunately the discrepancies are not at all extreme. Only when considering a prospect with a small probability, but a high reward, could this bias play a role.
Possibly such bias can be removed, or reduced, by using a so-called "fuzzy" system (Roisenberg et al., 2008). The uncertainty of the probability estimate is then taken into account, but I have not seen experimental evidence that this works.
Prior and Posterior Probabilities
These terms apply to estimates of probability in Bayesian statistics. A prior probability is usually a distribution of possible probability values, because we only have a vague idea of what the probability is. In an undrilled area we do not know what the success ratio will be for wildcat drilling. However, worldwide experience tells us that it may be somewhere between 0.0 and 0.5, with a mean value of 0.15. As soon as, say, 5 wells have been drilled, two with success and three dry (a success ratio of 40%), this small sample of data allows us to update our prior success-rate probability to a posterior one. If we see the prior as a rather flat distribution of possible success fractions between 0 and 0.5, the posterior will be sharper, with a peak closer to the location of the sample mean of 0.4, e.g. 0.25.
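The update above can be sketched with a Beta prior, a common conjugate choice for a success fraction. The parameters Beta(1.5, 8.5) are my assumption, picked only to give the prior mean of 0.15 mentioned in the text; the source describes its prior qualitatively:

```python
# Assumed Beta prior with mean 1.5 / (1.5 + 8.5) = 0.15
a_prior, b_prior = 1.5, 8.5
prior_mean = a_prior / (a_prior + b_prior)

# The five wildcat wells: two successes, three dry holes
successes, failures = 2, 3

# Conjugate Beta-Binomial update: add counts to the prior parameters
a_post = a_prior + successes
b_post = b_prior + failures
posterior_mean = a_post / (a_post + b_post)   # 3.5 / 15 ≈ 0.233

print(prior_mean, posterior_mean)
```

The posterior mean lands between the prior mean (0.15) and the sample success ratio (0.4), consistent with the "e.g. 0.25" in the text; a sharper or flatter prior would shift it.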