Geohistory analysis for maturity

The Geohist program (Gaeatools) is described here.

General description of GEOHIST

The maturity of source rocks and vitrinite reflectance (VR, or VRE for "estimated VR") are strongly correlated with a parameter devised by Lopatin (1971). The maturity history of a SR can be modeled with Arrhenius-type equations, and several sophisticated programs are available for this purpose. However, in the context of the GAEAPAS program, maturity and maturity history are based on VR estimates combined with the SR type information given by the user. Therefore it is useful to model the progression of VR through geological time, regardless of the SR type, and then enter this information into the GAEAPAS input.

The Geohist program uses the Lopatin system of calculating the TTI (Time-Temperature Integral), and there are two correlations in the program that translate the TTI values into a VR estimate. The user can also add his own TTI-VR calibration in the form of a regression equation. Decompaction (backstripping) can optionally be applied. The output of the program is a tabulation and a choice of graphs.

Time-Temperature Integral (TTI)

The idea of a Time-Temperature Integral as a measure of maturity, or to relate this in turn to other maturity measures such as VR or Tmax, comes basically from the work of Lopatin (1971). Notably Waples has done much to promote this approach (Waples, 1980).

The effect of temperature on chemical reaction speed (such as transformation of kerogen into oil, or increase in VR %) can be modeled in a simple fashion by assuming that the reaction speed doubles with every 10 degrees C temperature increase. Not knowing what the actual reaction(s) is, nor the reaction speed, Lopatin worked with relative values denoted with λ. He took as a "pivot point" a reaction speed of 1.0 for the temperature interval between 100 and 110 degrees Celsius. Then for the interval 90° to 100° the reaction speed is λ = 0.5, and would be 2.0 for the interval 110° to 120°. Now the TTI for an interval is defined as the product of λ times the duration in millions of years. The TTI for a particular SR at the bottom of n intervals is defined as:

TTI = Σ_{i=1}^{n} λ_i Δt_i

where λ_i is the relative reaction speed in interval i and Δt_i its duration in million years.

Originally, Lopatin calculations were carried out by working out the ages connected to precise temperature intervals of ten degrees Celsius. This is rather inconvenient and unnecessary, as a continuous function of λ versus temperature can be obtained as:

λ(t) = r^{(t - pivot)/10}    (1)

where r is the rate constant (in the case of Lopatin r = 2.0) and the pivot of 105 is the average of 100 and 110 degrees. Although this approach has been published widely, it is not exact. The trouble is that when integrating the above equation over the interval 100 to 110, the result is not a mean reaction speed of 1 but 1.02014. Therefore a correction has to be applied to the "pivot temperature" of 105. But first the λ for any temperature interval has to be established as a continuous formula for the average λ:

λ_avg = (1/(t_2 - t_1)) ∫_{t_1}^{t_2} r^{(t - pivot)/10} dt

The integral is solved by first simplifying the exponent:

r^{(t - pivot)/10} = e^{c (t - pivot)},   with c = ln(r)/10

The average λ for any interval now becomes:

λ_avg = |e^{c (t_2 - pivot)} - e^{c (t_1 - pivot)}| / (c |t_2 - t_1|)

Hence for the interval 100 to 110 degrees the average λ becomes:

λ_avg = (2^{(110 - pivot)/10} - 2^{(100 - pivot)/10}) / ln 2

Setting this to 1 allows one to solve for "pivot". The result is then 105.2877 °C instead of the often used 105 °C. So:

pivot = 105 - (10/ln 2) ln( ln 2 / (2^{1/2} - 2^{-1/2}) ) = 105.2877

Note that absolute differences between temperatures have been used. This is because for some intervals the end temperature t_2 is less than the start temperature t_1 (uplift and erosion). The result would be wrong if negative λ values were used: the transformation of organic matter takes place whether a section is being exhumed or buried! In the special case of a hiatus, the start and end temperatures are the same, and the general formula for the average λ would involve a division by zero. Instead, formula (1) gives the result for λ directly.
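
As an illustration, a minimal Python sketch (not part of the Geohist distribution; the burial history values are invented) of formula (1), the average-λ formula with the corrected pivot, and the TTI summation:

    import math

    R = 2.0                  # Lopatin rate constant: reaction speed doubles per 10 °C
    PIVOT = 105.2877         # corrected pivot temperature in °C
    C = math.log(R) / 10.0   # exponent constant c = ln(r)/10

    def point_lambda(t):
        """Formula (1): relative reaction speed at temperature t (°C)."""
        return R ** ((t - PIVOT) / 10.0)

    def avg_lambda(t1, t2):
        """Average relative reaction speed between the start and end temperatures
        of an interval. Absolute differences handle uplift (t2 < t1); a hiatus
        (t1 == t2) falls back to the point formula (1)."""
        if t1 == t2:
            return point_lambda(t1)
        return abs(math.exp(C * (t2 - PIVOT)) - math.exp(C * (t1 - PIVOT))) / (C * abs(t2 - t1))

    # Sanity check: the mean over the 100-110 °C interval is 1 with the corrected pivot
    assert abs(avg_lambda(100.0, 110.0) - 1.0) < 1e-4

    # Invented burial history: (start T, end T, duration in m.y.) per interval
    history = [(20.0, 60.0, 30.0), (60.0, 95.0, 40.0), (95.0, 120.0, 25.0)]
    tti = 0.0
    for t1, t2, dt in history:
        tti += avg_lambda(t1, t2) * dt   # TTI contribution = average lambda x duration
        print(f"{t1:5.1f} -> {t2:5.1f} °C over {dt:4.1f} m.y.: cumulative TTI = {tti:7.2f}")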

Decompaction

Decompaction is the process of estimating the thickness of formations at different, often shallower, depths in geological history. An important concept in such estimation is the "solidity" of the formations, i.e. 1 - porosity, where porosity is a fraction; in other words, the ratio of the solid material (grains) to the bulk rock volume. During compaction, the solidity increases gradually to a certain maximum value, theoretically to 1 if all pore space has vanished. Basing decompaction on this concept alone means that we neglect the effect of cementation brought in from outside the rock volume studied, and that we do not take any elastic compression of the grains themselves into account either. This program assumes that the bulk of the compaction effect is related to the increase in solidity, and hence is only an approximation to the complex processes that determine compaction. Solidity can be estimated roughly by standard relationships with depth for formations that have only subsided, not uplifted.

Three equations of the same type have been derived for three lithologies.

1. Shale: The original equation of Baldwin & Butler (1985), translated into solidity:

S(z) = 0.258 z^{0.1575}

2. Carbonates: The equation of Schmoker & Halley (1982) re-fitted to a log/log relationship between solidity and depth.

S(z) = 0.290 z^{0.1371}

3. Sandstones: The average of the Maxwell (1964) envelope and the Sclater & Christie (1980) curves, combined to give for solidity:

S(z) = 0.207 z^{0.1672}

The advantage of these equations is ease of integration. The decompaction is based on the fact that, without removal or addition of solids, the amount of solids in a section of rock will remain the same during compaction. The solidity (1 - porosity) is a measure of compaction. It runs from, say, about 0.50 to 1.00, although the lower end depends very much on the lithology. For instance, clay may have a porosity of up to 80%, so that S_0 = 0.20. Now the solidity times the thickness is a constant during compaction, because it expresses just the "solid thickness":

S_1 T_1 = S_2 T_2
for a given formation at depths 1 and 2. For example, a layer 100 m thick with S_1 = 0.50 compacts to 62.5 m when its solidity reaches S_2 = 0.80, since 0.50 × 100 = 0.80 × 62.5.

For a given interval between z1 (top) and z2 (base) the mean solidity can be calculated as follows. First we express the solidity/depth relationship in a general form:

S(z) = β_0 z^{β_1}
The values of the two β's will depend on the lithology (see above).

The mean solidity between base and top is found by integration of the S(z) equation:

S_avg = (1/(z_2 - z_1)) ∫_{z_1}^{z_2} β_0 z^{β_1} dz = β_0 (z_2^{β_1+1} - z_1^{β_1+1}) / ((β_1 + 1)(z_2 - z_1))

The constant solid thickness is obtained by multiplying both sides of the above equation by the depth difference between base and top (the thickness), hence:

ST = S_avg (z_2 - z_1) = β_0 (z_2^{β_1+1} - z_1^{β_1+1}) / (β_1 + 1)

Now assume that the top of an interval is known: z_top. For decompaction we need to know the depth of the base, z_base. This means that we need to know the average solidity S_new for the same interval, but now with the top at the new depth z_top:

S_new = β_0 (z_base^{β_1+1} - z_top^{β_1+1}) / ((β_1 + 1)(z_base - z_top))

Multiplying again both sides of this equation by (z_base - z_top), we get:

S_new (z_base - z_top) = β_0 (z_base^{β_1+1} - z_top^{β_1+1}) / (β_1 + 1) = ST

Fortunately we can solve z_base from the above equation, as it is the only unknown:

z_base = ( (β_1 + 1) ST / β_0 + z_top^{β_1+1} )^{1/(β_1+1)}

The ST product refers to the present-day solidity and thickness. The restored thickness is the difference between z_base and z_top.
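
A minimal Python sketch of this single-interval restoration, assuming depths in metres and the β coefficients listed above (function names are illustrative, not those of the Geohist source):

    # Lithology coefficients (beta0, beta1) for S(z) = beta0 * z**beta1, z in metres
    BETAS = {"SHALE": (0.258, 0.1575), "CARB": (0.290, 0.1371), "SAND": (0.207, 0.1672)}

    def solid_thickness(z_top, z_base, lith):
        """Constant 'solid thickness' ST = mean solidity x thickness, from the
        closed-form integral of S(z) between z_top and z_base."""
        b0, b1 = BETAS[lith]
        return b0 * (z_base ** (b1 + 1) - z_top ** (b1 + 1)) / (b1 + 1)

    def restored_base(z_top_new, st, lith):
        """Solve z_base for a new top depth such that the solid thickness is preserved."""
        b0, b1 = BETAS[lith]
        return ((b1 + 1) * st / b0 + z_top_new ** (b1 + 1)) ** (1.0 / (b1 + 1))

    # Example: a shale interval now between 2000 and 2500 m, restored with its top at surface
    st = solid_thickness(2000.0, 2500.0, "SHALE")
    z_base = restored_base(0.0, st, "SHALE")
    print(f"solid thickness {st:.0f} m -> restored thickness {z_base:.0f} m")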

The first problem to solve, if only approximately, is to obtain solid thicknesses for the input intervals. The user is assumed to base his thickness estimates largely on well data. If there are no unconformities in the sequence, the cumulative thickness of the layers down to the base of a given layer is also the depth of the base of that interval. This can be used to get the solid thickness for each interval, taking the lithology class into account.

In the case of erosion, the section below this interval will be "exhumed". This reduces the depth at which it is found, but here it is assumed that compaction is simply irreversible. In reality there may be a little "rebound", especially in the case of shales, but this effect is neglected here. The irreversibility of compaction complicates the calculation to some extent, because the maximum state of compaction has to be remembered by the program. For the solid thickness estimation, the maximum burial depth for each interval top is calculated. The solid thickness is then calculated for that depth position.

It should be noted that erosion amounts should not exceed the cumulative thickness given for the intervals below the erosion interval. If this were the case, it would mean that the target interval, i.e. the bottom of the lowest, earliest interval, cannot be present! With a number of deposition intervals and a number of erosion intervals, such inconsistent input might be made. The validation in the program will return a fatal error if this occurs. Moreover, the last (lowermost, oldest or "target") interval cannot be an erosion period.

Decompaction of a number of intervals has to proceed by taking the top one at top depth zero. The depth of its base is then estimated by the formula for z_base above. This is then the top depth of the second interval, whose base depth is calculated next, and so on. For each interval that is added to the overburden, the top depth has to be checked against the maximum depth it ever attained. So for each interval the "minimum thickness" it has had is updated in an array: the thickness of the decompacted layer to be added cannot be more than the minimum thickness it had attained before. In this way the irreversible nature of compaction is taken into account.
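
The loop itself can be sketched as follows (a simplified illustration with invented numbers; the actual program also derives the solid thicknesses and minimum thicknesses from the input as described above):

    # One restoration step: intervals listed top-down as
    # (solid thickness ST in m, (beta0, beta1), minimum thickness attained so far in m).
    # All numbers are invented for illustration.
    intervals = [
        (140.0, (0.258, 0.1575), 300.0),   # shale
        (430.0, (0.207, 0.1672), 520.0),   # sandstone
        (610.0, (0.290, 0.1371), 900.0),   # carbonate
    ]

    z_top = 0.0                            # top interval restored to zero depth
    for i, (st, (b0, b1), min_thick) in enumerate(intervals):
        # Invert ST = beta0*(z_base^(b1+1) - z_top^(b1+1))/(b1+1) for z_base
        z_base = ((b1 + 1) * st / b0 + z_top ** (b1 + 1)) ** (1.0 / (b1 + 1))
        # Irreversible compaction: the decompacted thickness may not exceed the
        # minimum thickness the interval has had before (its maximum-burial state)
        thickness = min(z_base - z_top, min_thick)
        print(f"interval {i}: top {z_top:7.1f} m, restored thickness {thickness:6.1f} m")
        z_top += thickness                 # the base becomes the top of the next interval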

Although compaction is due to the "skeleton" pressure, or "grain pressure", of the overburden, and not to the water column in the overburden, this program does not take waterdepth into account in the decompaction process. This means that a given layer is decompacted by putting the top at zero depth, rather than at the prevailing waterdepth. The reason is that the decompaction curves derived from various publications do not appear to take this effect into account. It could be argued that the compaction curves take an unspecified "average waterdepth" into account. For most overburden layers this will not matter very much.

Restoration of eroded sections

Estimation of removed sections can often be done by lateral stratigraphic extrapolation, whereby a more complete section is compared with the location where erosion has taken place. Any lateral changes of a depositional nature should be taken into account, such as a regional thinning of formations that is not the result of erosion. Thickness changes due to compaction should also be taken into account. If the "complete" section is situated at a considerably greater depth, it probably underestimates the amount of removed section at the target location.

The other method is the use of sonic plots, or density plots, that can indicate whether the section below the un- or dis-conformity was at some time in the past more compacted than the post-erosion section. This becomes apparent especially on log(delta-t) vs. depth plots of shales ("delta-t" is "slowness"). If there is a clear break in the delta-t trend, the pre-erosion section can be shifted downwards with respect to the post-erosion section, until the trends below and above line up. It should be kept in mind that the delta-t is not only dependent on the compaction, but also on the "skeleton" pressure of the formation (also called grain pressure or effective pressure). The effect of skeleton pressure is to increase sound velocity, hence decreasing delta-t. This means that a section that is exhumed (uplifted because of the erosion) will now have a higher delta-t (at less skeleton pressure) than at its maximum depth of burial. So the lining up of the delta-t trends is slightly in error; the effect is a slight underestimation of the eroded section. Quantification of the skeleton pressure effect is no easy matter. However, a rough indication can be obtained from a set of data from Issler (1992).
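
For parallel linear trends of log(delta-t) versus depth, the shift follows directly from the fitted intercepts. A minimal sketch (the fit values are invented, and the equal-slope assumption is a simplification):

    # Linear trends of log10(delta-t) versus depth fitted above and below the
    # unconformity: log10(dt) = a + b*z. The values are invented, and equal
    # slopes (parallel trends) are assumed for simplicity.
    a_above, a_below, b = 2.05, 1.98, -0.00006   # z in m

    # The pre-erosion trend lines up with the post-erosion trend when shifted
    # down by E metres: a_below + b*z = a_above + b*(z + E)  =>  E = (a_below - a_above)/b
    eroded = (a_below - a_above) / b
    print(f"estimated eroded section: {eroded:.0f} m")
    # Note: a slight underestimate, since the exhumed section reads a higher
    # delta-t at reduced skeleton pressure (see the Issler data below).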

The Issler data set is a list of core-derived shale porosities, depths and delta-t measurements. The depth range is about 900 to 3500 m. Regression analysis shows that, although the delta-t is strongly and positively correlated with the porosity, there is a highly significant negative correlation with the depth. This depth effect can be interpreted as the pressure effect, because other factors, such as increased cementation with depth, did not arise in this sample of data. The effect is 33 microseconds per km, or about 10 microseconds per 1000 ft; a section exhumed by 1 km would thus read roughly 33 microseconds higher in delta-t than at its maximum burial depth. Although other well data show remarkably similar results, the petrophysical process is not fully explained as yet, and hence the size of a correction for the pressure effect is conjectural.

Yield curves

The yield curves from the GAEAPAS program are used by GEOHIST to estimate the percentages of generation and expulsion that have been reached at the various points in history. This requires the SR type, in order to find the right yield curve. The yield is expressed as a percentage of the Potential Ultimate Yield, but also as a percentage of the yield as perceived today (see below under Output).

The generation/expulsion percentages relative to yield-to-date are the most useful as inputs to the dynamic entrapment modeling of GAEAPAS.

Input requirements for Geohist

Input

The program input consists of some titles and global data about the geohistory graph and a table with the input for the calculation.

  • The title1 and title2 are used in the graphs and as a header for the tabulated results.

  • The Surface (or water-bottom) temperature and the temperature gradient in the general part of the input sheet are global values, i.e. valid for all intervals. If left blank, the program expects surface temperatures and gradients to be specified in the table below for all intervals.

  • The Source Rock Type is given in terms of the usual type indicators "I", "II" or "III", or combinations such as "I/II" to indicate a mixture. This input is optional, but must be supplied if, after "Run", a graph of the percentage generation for oil and gas is wanted. The percentage graph shows how much of the generation-to-date was achieved in history. So the percentage scale is from 0 to 100% on the Y-axis, and the X-axis is the time scale in m.y., as for the other graphs.

  • The input table has a number of columns:

    1. Interval labels (up to 16 characters)
    2. Interval thickness in m for curve 1
    3. Age to base of interval in millions of years
    4. Lithology (four codes possible: "SHALE", "CARB", "SAND" and "SOLID")
    5. Surface/water-bottom temperature in degrees Celsius
    6. Temperature gradient in degrees Celsius per 100 m
    7. Waterdepth in m as an average over the time interval
    8. Interval thickness for curve 2

The intervals are input in the normal vertical sequence sometimes referred to as the "driller's sequence". The oldest interval is entered last. Up to 99 intervals can be specified.

Two curves can be described. This facility can be used to construct a geohistory graph for two points, e.g. the shallowest and deepest point of a drainage area for prospect appraisal purposes. To keep the program simple, many of the inputs for curve 1 and curve 2 are assumed to be the same (items 3 through 7 in the above list). It is necessary to have exactly the same number of intervals for curve 1 and curve 2. If not, an error message will result.

The program will calculate results for the bottom and the top of the last, oldest interval described. This is also referred to as the "target" layer. This makes it possible to see results for e.g. a thick source rock section, if this is taken as the oldest interval. If the thickness is appreciable, the maturities at top and bottom may differ substantially.

Unit conversion can be done by using the menu item INPUT/Unit Transformation. This applies to the Surface temperature, Temperature gradient, depth and waterdepth units. A dialog box allows the user to input a value in e.g. degrees F, feet or fathoms, which is then translated into the metric equivalent. For this purpose the cursor has to be on the appropriate input cell, so that the metric input can be inserted automatically.
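
The conversions involved are the standard ones; a minimal sketch (the unit labels are illustrative, not the dialog's actual options):

    def to_metric(value, unit):
        """Translate common oilfield units to the metric values GEOHIST expects."""
        if unit == "degF":
            return (value - 32.0) * 5.0 / 9.0   # degrees Fahrenheit -> degrees Celsius
        if unit == "ft":
            return value * 0.3048               # feet -> metres
        if unit == "fathom":
            return value * 1.8288               # fathoms (6 ft) -> metres
        raise ValueError("unknown unit: " + unit)

    print(to_metric(212.0, "degF"))   # 100.0
    print(to_metric(100.0, "ft"))     # 30.48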

To facilitate inserting ages in million years, the menu item 'INPUT/Age in m.y., ABC' presents a list of stratigraphic time unit names in alphabetical order, from which a choice can be made. Alternatively, the age can be obtained from a list ordered in increasing geological time. The million years are then automatically inserted in the current cell, provided that it is in the age column. The age table in this program is a combination of various sources. The best reference is probably Haq & van Eysinga (1994).

Input can be saved in a file for later use, or re-running it with different options, by Files/Save. The user will be asked for a unique name for the input file. This will then be stored in the GEOHIST directory. The Files/Open menu item allows the user to start with an input previously made. A file box shows which files are available in the GEOHIST directory.

When providing the input, the user should start by dividing the geological history into a number of time intervals, and then attach a thickness to each interval. The thickness can be positive (deposition), zero (hiatus) or negative (erosion).

In the case of erosion, care should be taken not to:

  • Put an erosion period immediately after the "target" interval (this is the oldest interval).

  • Have so much cumulative erosion from erosion periods, in relation to the deposition intervals, that the target interval would be partially or wholly eroded. In other words, if you erode you have to deposit first! (A minimal validation sketch follows this list.)
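
A minimal sketch of such a validation (the interval representation and messages are illustrative; the program's own checks may differ in detail):

    def validate_erosion(thicknesses):
        """Check an interval list ordered young -> old (driller's sequence).
        Thicknesses in m: positive = deposition, zero = hiatus, negative = erosion."""
        if thicknesses[-1] <= 0:
            return "fatal: the target (oldest) interval must be a deposition interval"
        if len(thicknesses) > 1 and thicknesses[-2] < 0:
            return "fatal: an erosion period immediately after the target would erode the target"
        net = 0.0
        for th in reversed(thicknesses[:-1]):   # walk old -> young through the overburden
            net += th
            if net < 0:
                return "fatal: cumulative erosion exceeds the section deposited below it"
        return "ok"

    print(validate_erosion([100.0, -200.0, 250.0, 300.0]))   # ok
    print(validate_erosion([100.0, -300.0, 150.0, 200.0]))   # fatal: too much erosion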

Under Options in the GEOHIST menu there is the choice to invoke decompaction. The default of the program is no decompaction, so this option should be chosen if desired, before the calculation ("Run").
The second option is the choice of a VR/TTI calibration. There are two calibrations that are only valid if the decompaction option is not active: the Waples and the Lopatin/Karpov calibrations. These were based on comparison of TTIs derived from non-decompacted burial graphs with measured vitrinite reflectance. The third calibration is a user-supplied calibration. The program does not know whether such a calibration is based on decompaction or not; therefore, if it is used, the user must ensure that the decompaction option status corresponds with his calibration. Finally, the fourth calibration is the Dijkstra calibration, which is a Waples calibration corrected for the decompaction effect. In case the user has chosen calibration 1 or 2 but invoked decompaction, the program will automatically ignore 1 or 2 and use calibration 4 instead.

User calibration. The fourth option under menu item "Options" is the supply of regression parameters for the user's calibration. In the calculation program there is a routine that estimates VRE ("Estimated VR") on the basis of a formula of the following shape:

log(VRE) = β_0 + β_1 log(TTI) + β_2 (log(TTI))^2 + β_3 (log(TTI))^3

where logs are to base 10. The values for the regression constants β have to be entered in the dialog box that is presented. Once the constants for the user's calibration have been specified, they are stored in a file "GHUSERVR.TXT", so that they remain available after terminating the program. When new constants are entered under this option "Set user VR calibration", the earlier values are overwritten.
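
In code, the routine amounts to evaluating a cubic polynomial in log10(TTI); a minimal sketch with invented β constants (the real values come from the user's own regression):

    import math

    def vre_from_tti(tti, b0, b1, b2, b3):
        """User calibration: log10(VRE) as a cubic polynomial in log10(TTI)."""
        x = math.log10(tti)
        return 10.0 ** (b0 + b1 * x + b2 * x ** 2 + b3 * x ** 3)

    # Invented regression constants, for illustration only
    print(f"VRE = {vre_from_tti(15.0, -0.40, 0.20, 0.02, 0.001):.2f} %")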

Output

The output of the program is on screen or in print. The GEOHIST program tries to fit the tables and graphs on a single page, in landscape format. For long inputs the tables will extend over several pages, but this is unlikely to occur very often.

Apart from a facility to print the input sheet, there are three tables that give results:

  1. Table 1. TTI and VRE estimates
  2. Table 2. Generation & expulsion, relative to the yield-to-date, percentage
  3. Table 3. Generation & expulsion, relative to the ultimate yield, percentage

There are five graphs:

  1. Geohistory graph, depth vs. time
  2. TTI graph, TTI (logarithmic) vs. time
  3. VRE graph vs. time
  4. Generation & expulsion, relative to yield-to-date, %
  5. Generation & expulsion, relative to ultimate yield, %
