
WO2015175053A2 - Decision guidance - Google Patents

Decision guidance

Info

Publication number
WO2015175053A2
WO2015175053A2 (PCT/US2015/016036)
Authority
WO
WIPO (PCT)
Prior art keywords
data
information
field
property
history
Prior art date
Application number
PCT/US2015/016036
Other languages
French (fr)
Other versions
WO2015175053A3 (en)
Inventor
Aime Fournier
Natalia Ivanova
Marta Woodward
Konstantin S. Osypov
Original Assignee
Westerngeco Llc
Schlumberger Canada Limited
Westerngeco Seismic Holdings Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Westerngeco Llc, Schlumberger Canada Limited, Westerngeco Seismic Holdings Limited filed Critical Westerngeco Llc
Publication of WO2015175053A2 publication Critical patent/WO2015175053A2/en
Publication of WO2015175053A3 publication Critical patent/WO2015175053A3/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00 Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28 Processing seismic data, e.g. for interpretation or for event detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00 Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/40 Seismology; Seismic or acoustic prospecting or detecting specially adapted for well-logging
    • G01V1/44 Seismology; Seismic or acoustic prospecting or detecting specially adapted for well-logging using generators and receivers in the same well
    • G01V1/48 Processing data
    • G01V1/50 Analysing data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V11/00 Prospecting or detecting by methods combining techniques covered by two or more of main groups G01V1/00 - G01V9/00
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V3/00 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation
    • G01V3/38 Processing data, e.g. for analysis, for interpretation, for correction

Definitions

  • Various techniques e.g. electromagnetic or seismic techniques exist to perform surveys of subterranean structures for identifying subterranean elements of interest.
  • Examples of subterranean elements of interest include hydrocarbon bearing reservoirs, gas injection zones, thin carbonate or salt layers, and fresh water aquifers.
  • One type of electromagnetic (EM) survey technique is the controlled source electromagnetic (CSEM) survey technique, in which an electromagnetic transmitter, called a “source,” is used to generate electromagnetic signals.
  • Surveying units, called “receivers,” are deployed within an area of interest to make measurements from which information about the subterranean structure can be derived.
  • the receivers may include a number of sensing elements for detecting any combination of electric fields, electric currents, and/or magnetic fields.
  • a seismic survey technique uses a seismic source, such as an air gun, a vibrator, or an explosive to generate seismic waves.
  • the seismic waves are propagated into the subterranean structure, with a portion of the seismic waves reflected back to the surface (earth surface, sea floor, sea surface, or wellbore surface) for receipt by seismic receivers (e.g. geophones, hydrophones, etc.).
  • Measurement data (e.g. seismic measurement data and/or EM measurement data) can be processed to produce an output.
  • the output can include an image of the subterranean structure, a model of the subterranean structure, and so forth.
  • FIGs. 1-3 are flow diagrams of various example processes according to various implementations.
  • Fig. 4 is a three-dimensional graph showing isosurfaces of various properties, according to some examples.
  • Fig. 5 is a schematic diagram of a re-parameterized covariance, according to some implementations.
  • Fig. 6 is a diagram of a tetrahedral region of parameters of orthorhombic stiffness components enforcing positive-definiteness.
  • Fig. 7 is a graph of three-dimensional Gaussian random points in accordance with some examples.
  • Fig. 8 is a graph of three-dimensional points with Gaussian distribution in curvilinear coordinates, in accordance with some examples.
  • Fig. 9 is a block diagram of an example computer system according to some implementations.
  • techniques or mechanisms according to some implementations can also be applied to perform surveys of other structures, such as human tissue, a mechanical structure, plant tissue, animal tissue, a solid volume, a substantially solid volume, a liquid volume, a gas volume, a plasma volume, a volume of space near and/or outside the atmosphere of a planet, asteroid, comet, moon, or other body, and so forth.
  • In some examples, the survey equipment includes seismic sources and seismic receivers.
  • other types of survey equipment can be used, which can include other types of survey sources and survey receivers.
  • techniques or mechanisms according to some implementations can also be applied in contexts other than surveying of target structures, such as seismic guided drilling, reservoir modeling, field development planning, and so forth.
  • Decision guidance refers to providing information for guiding (e.g. assisting, instructing, etc.) an operator (human user or machine) in performing a subsequent action.
  • Data processing can refer to processing of data for characterizing a target structure or the properties in its surrounding volume of space, such as a subsurface structure or other target structure as discussed above.
  • the data that are processed can include any one or some combination of the following: data acquired by a survey operation using survey equipment (including seismic survey equipment, electromagnetic survey equipment, or borehole survey equipment); data acquired using sensors in other contexts; simulated or synthetic data; and so forth.
  • Characterizing a target structure can refer to building a model of the target structure, generating an image of the target structure (e.g. an earth formation, fractures in the subsurface, a reservoir, etc.), or producing other valuable information that provides a description or some indication regarding content of the target structure. For making decisions, characterizing that also estimates statistical uncertainty of such model, image, or other information, can be more useful. Uncertainty can refer to uncertainty in the model, image, or other information, produced by data processing.
  • Data processing can also refer to field development planning and management, e.g. deciding how many wells to drill, where to place the wells, and how long to operate the wells.
  • Data processing can also refer to other types of processing that can be performed on any type of data for any of various purposes.
  • a data-processing history can refer to information relating to past data processing that has been performed, including any or some combination of the following: information relating to where (geographically, jurisdictionally, etc.) the data were obtained; information relating to the project associated with the data processing; information relating to the capabilities, success records, etc. of those involved in the processing; information relating to options or values of parameters that were set for the data processing, such as options or parameter values in a computation job control file or a user interface; information relating to what files were read in or written out by a data processing job; information relating to how much time or computing resources were consumed; information relating to time duration between steps of a data processing job; information relating to usage of a user input device, such as a mouse; information relating to repetitive or erratic activities that may indicate problems with software, problem description, user training, etc.; or other information.
  • a data processor's decision can refer to a workflow decision or a workflow choice made among a large number of options (e.g. possibly mutually dependent options) in association with the successful performance of data processing.
  • a data processor can refer to a human using a computer, a collection of computers, a hardware processor within a computer, or a collection of hardware processors within one or more computers.
  • An automatically created decision guidance can refer to information that is easily useable by a human or an automated process to take effective action in a specific task, such as solving a problem or creating a product.
  • Collection of data for data-processing histories and a data processor's decisions can be performed at various time scales, such as fractions of a second for a mouse click or keyboard stroke, up to hours or days for expert intervention, and so forth.
  • Observed decisions can be ranked, e.g. using value of information (as defined by Eq. 17, for example).
  • the ranking of observed decisions can be based on the physical insight or qualitative understanding provided by or informing a decision, the value of a decision to immediate tasks or to an overall project, the impact of the decision on accuracy and/or confidence of particular tasks of a solution, and so forth.
  • Multiple ranks can be sorted lexicographically, i.e. sorting the best of each less-significant rank for each value of each more-significant rank.
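Lexicographic sorting of multiple ranks can be sketched in a few lines; Python tuple comparison already sorts the less-significant rank within each value of the more-significant rank. The decision names and rank values below are illustrative, and lower rank values are assumed better.

```python
# Sketch of lexicographic ranking of observed decisions (illustrative data):
# each decision carries ranks ordered from most significant to least significant.
decisions = [
    {"name": "reprocess", "ranks": (2, 0.7)},
    {"name": "acquire",   "ranks": (1, 0.9)},
    {"name": "reuse",     "ranks": (1, 0.4)},
]

# Sort best-first: tuples compare element by element, so more-significant
# ranks dominate and less-significant ranks break ties.
ranked = sorted(decisions, key=lambda d: d["ranks"])
print([d["name"] for d in ranked])  # → ['reuse', 'acquire', 'reprocess']
```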
  • the analysis can characterize decisions in different ranges, e.g. data-based versus model-based; critical versus inconsequential; quantitative versus qualitative; repetitive versus singular; little revised versus much revised; discrete versus continuous, and so forth.
  • a decision guidance that is created can use behavior-modeling technology, such as used in the attention-management field, data-mining field, machine-learning field, profiling field, targeted servicing field, social-media field, social networking field, commercial advertising and marketing fields, computer gaming field, public relations field, national security field, or other fields.
  • a quality-assurance software-use history can refer to information about the quality of a result produced by software in performing data processing.
  • the quality of a result produced by software can be measured using any of various techniques, such as based on comparing the result with expected results, or using other techniques.
  • a quality-assurance activity can refer to an activity in which the quality of software used to produce a result is determined.
  • a quality assurance activity can involve deriving results from data processing and performing any one or more of the following: comparing the derived results quantitatively and qualitatively against expected results or results that are plausible in a respective application, e.g. geology exploration application; testing stability of the derived results against data-processing alterations (e.g. alternative decisions); and inferring the consequences of the derived results for other goals and questions (e.g. if a project should be started or continued).
  • Fig. 1 is a flow diagram of an example process according to some implementations of the present disclosure.
  • the process can be performed by a computer system, for example.
  • a computer system can include a computer, an arrangement of computers, a hardware processor, or an arrangement of hardware processors.
  • the process of Fig. 1 includes collecting (at 102) one or more of at least one data-processing history or at least one quality-assurance software-use history.
  • the process further analyzes (at 104) at least one history with respect to decisions that support completing a data processing or quality assurance activity.
  • the analysis of a history is part of an analysis of a decision.
  • the analyzing of a decision can use any of the criteria discussed above for ranking or characterizing decisions. Further details relating to decision analysis are provided in the text accompanying Eqs. 14-17 discussed below. Details relating to building up a decision analysis framework are provided further below in Section III.
  • the process of Fig. 1 automatically creates (at 106) a decision guidance for a subsequent data processing or quality assurance activity.
  • a data perturbation can refer to use of data in a workflow that was not previously used in that workflow or that was previously used with different numerical values.
  • a model perturbation can refer to generating or updating a model, such as a model of a subsurface structure or other target structure or its surrounding volume of space. Both perturbation types can refer to relatively small volumes encompassed by a relatively much larger volume under consideration.
  • models tend to be at most three-dimensional, whereas data can be higher-dimensional depending on data-acquisition techniques used.
  • the ability to use or modify such small perturbations in a manner consistent with the larger data or model is a feature and motive of the information-based methods according to the present disclosure. Further details regarding the foregoing are provided in Section III below, and in Eqs. 70-71 and 75-77.
  • Computing the information metrics can combine multiple factors, such as factors represented by a system sensitivity matrix (also known as system design matrix, or Frechet operator, or Jacobian matrix, or matrix of covariates, regressors, or exogenous, explanatory, independent, input, or predictor variables), and/or covariance matrices of observed data (acquired by survey acquisition equipment, and also known as regressands or dependent, endogenous, measured, output, or response variables) or of properties (also known as effects, parameters, or regression coefficients) relating to a target structure.
  • covariance describes the direction and magnitude of joint random deviation of multiple variables from their expected values.
  • a system sensitivity matrix can relate a vector of properties to a vector of data.
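As a minimal numerical illustration (with made-up numbers, not values from the disclosure), a system sensitivity matrix J maps a vector of model properties m to a vector of predicted data d = J m:

```python
import numpy as np

# Minimal sketch: a system sensitivity (Jacobian) matrix J maps a vector of
# model properties m to a vector of predicted data d = J @ m.
J = np.array([[1.0, 0.5],
              [0.0, 2.0],
              [1.0, 1.0]])   # 3 data, 2 properties
m = np.array([2.0, 4.0])     # property vector
d = J @ m                    # predicted data vector
print(d)                     # [4. 8. 6.]
```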
  • Fig. 2 is a flow diagram of a process according to further implementations. The process of Fig. 2 can be performed by a computer system, for example.
  • the process computes (at 202) at least one information metric that combines multiple effects, such as the factors discussed above, including a system sensitivity matrix and a covariance matrix of one or both of data and at least one property of a target structure or properties of a volume in space surrounding the target structure.
  • the process uses (at 204) the at least one information metric automatically to create a decision guidance.
  • an information metric can include a system sensitivity matrix (e.g. a matrix related to tomography).
  • the tomography matrix can be computed by ray tracing perturbed by first- or second-order derivatives of trajectory and slowness with respect to model properties, such as described in connection with Eqs. 78-79.
  • the tomography matrix employs a dip field with anisotropic, multiscale coherence, such as described in Section III.f below.
  • an information metric can include a data-residual covariance operator.
  • a covariance operator relates a linear function of some variables to the covariance between the variables and that function.
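This relation can be checked numerically: for a linear function f(x) = a·x of a random vector x with covariance S, the covariance between x and f(x) equals S a. The covariance S and coefficient vector a below are illustrative, and Gaussian sampling is an assumption of the sketch.

```python
import numpy as np

# Sketch of the covariance-operator identity cov(x, a.x) = S @ a,
# verified against a large Gaussian sample (illustrative values).
rng = np.random.default_rng(0)
S = np.array([[2.0, 0.5], [0.5, 1.0]])
a = np.array([1.0, -1.0])
x = rng.multivariate_normal([0, 0], S, size=200_000)
f = x @ a
empirical = np.array([np.cov(x[:, i], f)[0, 1] for i in range(2)])
print(np.round(empirical, 2), S @ a)  # both close to [1.5, -0.5]
```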
  • This information metric can be used when data to be processed contains noise to be removed, e.g. by iterative compression. This is described further in Section III.a below.
  • the covariance operator can include an exponential correlation, such as described in Section III.b below.
  • Variance can be inflated to attain desired posterior-residual statistics, such as described further below.
  • an information metric is generalized by a data-dependent Hessian, such as described in connection with Eqs. 72-74.
  • a residual norm can be visualized in the space of a few hyper-parameters, such as described in connection with Section III.d below.
  • localized data perturbations or model perturbations can be computed quickly with consistent global effects, such as described in Section III.h below.
  • different data parts (e.g. checkshot data, seismic data, etc.) and model parts can be balanced in the residual norm or prior-model norm using separate eigenvector analysis of each part.
  • the eigenvectors enable projection to remove redundant components.
  • the eigenvectors enable weighting each part (data part or model part) by the part's reciprocal degrees-of-freedom, such as described in connection with Eq. 88.

Providing Prior Algebraic or Geometric Information To Build or Update a Model
  • Prior information can be used to build or update a model.
  • Examples of different types of prior information include different types of probability structures, including a geological probability structure, a rock-physics probability structure, and a seismic probability structure (discussed in detail further below). More generally, in some implementations, the prior information can include information describing a distribution of values of a property (or multiple properties) that relate to physical characteristics of the subterranean structure. There can be other types of prior information and other fields to which these statements apply.
  • the efficiency and/or utility of prior information can be improved by considering physical stability and/or statistical stability of properties relating to a model of a target structure or a volume in space surrounding the target structure, in constructing a prior probability structure.
  • a value is stable if its change in response to an input change vanishes when the input returns to normal.
  • physical stability refers to a characteristic energy (such as material strain energy) reaching a minimum value that is stable against (strain field) perturbations, as a spherical bead resting at the bottom of a bowl is stable against small impacts.
  • statistical stability refers to the mode (maximizing instance) of a random field interpreted as a stable minimum of the statistical mechanical potential, i.e. of the logarithm of the reciprocal probability density.
  • Both kinds of stability can be interpreted algebraically (in which a certain matrix is specified to be positive- definite) and geometrically (in which a certain surface is specified to have positive curvature).
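The algebraic interpretation can be sketched numerically: a matrix is positive-definite exactly when a Cholesky factorization exists, equivalently when all its eigenvalues are positive (the geometric view of positive curvature). The matrices below are illustrative, not stiffness values from the disclosure.

```python
import numpy as np

# Sketch: a stiffness-like matrix is algebraically stable (positive-definite)
# iff a Cholesky factorization exists, equivalently all eigenvalues positive.
def is_positive_definite(M):
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

stable   = np.array([[4.0, 1.0], [1.0, 3.0]])
unstable = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1
print(is_positive_definite(stable), is_positive_definite(unstable))  # True False
```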
  • properties of a target structure can be re-parameterized, where re-parameterization can refer to finding mathematical combinations of the properties which reveal which of the properties are more certain and which of the properties are less certain, or combinations which are useful in other ways.
  • Fig. 3 is a flow diagram of another example process according to further implementations.
  • the process of Fig. 3 can be performed by a computer system.
  • the process of Fig. 3 provides (at 302) prior information based at least in part on transforming at least one property relating to a model of a target structure or a spatial volume surrounding the target structure, where the transforming considers one or both of a physical stability and a statistical stability of the at least one property.
  • the process uses (at 304) the prior information to build or update the model. Building the model can refer to initially creating the model, while updating the model can refer to updating values of parameters of the model.
  • the prior information can constrain a stiffness (elasticity) matrix (which represents a stiffness tensor) to be positive-definite.
  • the stiffness tensor gives the relationship between stresses (resulting internal stresses) and strains (resulting deformations).
  • the prior information thus constrains the strain field to be physically stable by constraining the stiffness matrix (or more specifically, a stiffness tensor) to be positive-definite.
  • a strain field refers to a representation of strains within a given volume in a target structure.
  • the stiffness tensor obeys orthorhombic symmetry (the symmetry of a common building brick, i.e. having distinct lengths in its three dimensions but its vertex angles being 90°, implying 9 independent properties). This is discussed in Section II below.
  • positive-definiteness is expressed as a triplet of correlation-like properties belonging to a certain tetrahedron (optionally using barycentric coordinates). This is described in connection with Eqs. 54-57, 61 and in Section II.f below.
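A hypothetical version of such a tetrahedron membership test can be sketched with barycentric coordinates; the vertices below are illustrative, not the tetrahedron of the disclosure.

```python
import numpy as np

# Sketch: test whether a triplet of correlation-like properties lies inside a
# tetrahedron by computing its barycentric coordinates with respect to the
# 4 vertices; the point is inside iff all 4 coordinates are non-negative.
def barycentric(p, verts):
    # Solve [v1-v0 v2-v0 v3-v0] w = p - v0, then w0 = 1 - sum(w).
    T = (verts[1:] - verts[0]).T
    w = np.linalg.solve(T, p - verts[0])
    return np.concatenate([[1.0 - w.sum()], w])

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
inside = barycentric(np.array([0.2, 0.2, 0.2]), verts)
outside = barycentric(np.array([1.0, 1.0, 1.0]), verts)
print(np.all(inside >= 0), np.all(outside >= 0))  # True False
```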
  • six explicit eigenvalues and 6×1 eigenvectors of a Cholesky factor of the stiffness are used, as described in connection with Eqs. 58-59.
  • a compliance (inverse stiffness) matrix is used, as described in connection with Eq. 60.
  • an algebraically indefinite stiffness is minimally transformed to a positive-definite stiffness, as described in connection with Eq. 62 and in Section II.b.
  • Gaussian probability densities are used for bounded model-property transformations, as described in connection with Eqs. 64-65.
  • the stability of three Thomsen anisotropies is checked, such as according to Eq. 63.
  • the stiffness tensor obeys tilted or vertical transverse isotropic symmetry (symmetry with respect to rotations about an axis with a certain tilt, i.e. angle with respect to the vertical, or with zero tilt, and associated with five independent properties).
  • Various features as discussed above for a stiffness tensor that obeys orthorhombic symmetry are also applicable in this case.
  • prior information is improved by applying a nonlinear re-parameterization of properties (generally referred to as x_i), which decorrelates the properties.
  • Decorrelating properties refers to finding property combinations that are uncorrelated in the statistical sense. This is discussed further in Section I.a and in connection with Eqs. 28-29a, 38-39.
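A minimal sketch of decorrelation (not the disclosure's specific re-parameterization): rotating properties into the eigenvector basis of their covariance matrix yields combinations that are statistically uncorrelated. The covariance values are illustrative.

```python
import numpy as np

# Sketch: decorrelate correlated properties by rotating into the eigenvector
# basis of their covariance matrix (illustrative covariance values).
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])          # correlated property covariance
lam, V = np.linalg.eigh(C)          # C = V diag(lam) V^T
C_new = V.T @ C @ V                 # covariance in the rotated basis
print(np.round(C_new, 10))          # off-diagonals vanish; diagonal holds eigenvalues
```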
  • parameters x_i are assumed to be well resolved (resolved with high precision, i.e. low variance, in other words, small expected deviation from an expected value) and other properties are not. This is described further in Section I.b.
  • parameters x_i such as normal-moveout velocity v_n, anellipticity η, or property ratios like v_p/v_s have variances that can be estimated to be small, such as described in connection with Eqs. 29a, 29b.
  • the parameters x_i are assumed to be perfectly resolved (have zero variance) and other properties are not.
    o Certain parameters x_i, such as normal-moveout velocity v_n and anellipticity η, can be assumed to have zero variance, such as according to Eqs. 42, 44-48.
    o In case some x_i do not have zero variance, linear functionals such as their depth-interval averages can be assumed to have zero variance, so that all other variance derives from the null spaces of those functionals.
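The zero-variance constraint on a linear functional can be sketched by conditioning the covariance on that functional, so that the remaining variance lives in the functional's null space. The covariance values are illustrative, and the depth-interval-average functional a is an assumption of this sketch.

```python
import numpy as np

# Sketch: force a linear functional a.x (e.g. a depth-interval average) to
# have zero variance by conditioning the covariance on it (illustrative values).
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.0]])
a = np.ones(3) / 3.0                       # depth-interval-average functional
Ca = C @ a
C_constrained = C - np.outer(Ca, Ca) / (a @ Ca)
print(a @ C_constrained @ a)               # vanishes: the functional is now exact
```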
  • property probability densities created by rock-physics prior re-parameterization can be efficiently represented, sampled and computed by nonlinear transformation to 3D parabolic coordinates. This is described in further detail in Section I.e and in connection with Eqs. 51-52.
  • initial re-parameterization blocks can be updated by data-covariance estimates, such as described in connection with Section I.f.
  • Data can be assimilated into the update using a Kalman filter with linear sensitivity. See Eq. 53.
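A single Kalman update with linear sensitivity can be sketched as follows, in the spirit of Eq. 53 but with illustrative values; the symbols m, C, L, R, z follow common Kalman-filter conventions and are assumptions of this sketch.

```python
import numpy as np

# Sketch of one Kalman update: given prior mean m and covariance C,
# assimilate a datum z = L m + noise with noise covariance R.
m = np.array([1.0, 2.0])                   # prior property estimate
C = np.array([[1.0, 0.0], [0.0, 1.0]])     # prior covariance
L = np.array([[1.0, 1.0]])                 # linear sensitivity (1 datum)
R = np.array([[0.5]])                      # data-noise covariance
z = np.array([4.0])                        # observed datum

K = C @ L.T @ np.linalg.inv(L @ C @ L.T + R)   # Kalman gain
m_post = m + K @ (z - L @ m)                   # updated mean
C_post = (np.eye(2) - K @ L) @ C               # updated covariance
print(np.round(m_post, 3))                     # [1.4 2.4]
```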
  • an unscented transform can extend the Kalman filter to nonlinear sensitivity.
  • the extended Kalman filter can be applied to Seismic Guided Drilling.
  • property bounds can be applied by iterative Gaussian cumulative-distribution-function (CDF) transformations.
  • techniques or mechanisms can provide ways to combine information from both measurements (e.g. surface seismic data, well-log data, drilling data and other data about velocity, anisotropy, etc.) and prior earth-model building (EMB) and similar types of decisions, experiences and results from multiple projects and personnel, to update properties of a given model or for related purposes, including information-constrained property probability distributions and increased confidence both in those distributions and in their value to further decision making.
  • rock-physics estimation of prior covariance between earth-model properties can be employed in a workflow (e.g. according to techniques or mechanisms described in U.S. Patent Publication No. 2012/0281501), such as for seismic tomographic earth-model building (EMB). Techniques or mechanisms for updating a model are also described in U.S. Patent Publication No. 2009/0184958. Techniques or mechanisms for determining covariance are described in U.S. Patent Publication No. 2013/0261981. For multivariate Gaussian distributions of general model vectors x, the re-parameterization or reduction of covariance by locally linearized mappings of or constraints on x can be referred to as a re-parameterization method.
  • the re-parameterization method can be applied to the workflow that uses rock-physics or another estimation of prior covariance.
  • Re-parameterization can also be extended to either or both of potentially non-Gaussian distributions, or nonlinear constraints derived from curvilinear geometric consideration of the parameterization, while providing a natural way to include uncertainty in these constraints. See Eqs. 23-29a.
  • forward solution for u leads to full-waveform inversion for a and in principle, log ⁇ .
  • substituting u ∝ a e^{-iω(t+T)} yields, in the high circular-frequency limit ω → ∞, the characteristic system.
  • the phase velocity is ∇T/‖∇T‖² and the group velocity is (a ⊗ a) : Ξ : (a ⊗ ∇T).
  • C_0^{t/2} L^t R^{-1} L C_0^{1/2} = V (Λ ⊕ 0) V^t (Eq. 12) provides the truncated posterior covariance generator C^{1/2} ≈ C_0^{1/2} V ((I + Λ)^{-1/2} ⊕ 0). (Eq. 13)
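The posterior-covariance construction of Eqs. 12-13 can be sketched numerically: eigendecompose the preconditioned sensitivity C_0^{1/2} L^t R^{-1} L C_0^{1/2}, rescale the eigenvalues, and recover the exact posterior covariance (truncation would keep only the leading eigenpairs). The matrices below are illustrative, not from the disclosure.

```python
import numpy as np

# Sketch: build C^{1/2} = C0^{1/2} V (I + diag(lam))^{-1/2} from the
# eigendecomposition of C0^{1/2} L^T R^{-1} L C0^{1/2}, and check it against
# the direct posterior covariance (C0^{-1} + L^T R^{-1} L)^{-1}.
rng = np.random.default_rng(1)
L = rng.standard_normal((3, 2))    # illustrative sensitivity
C0 = np.eye(2) * 2.0               # prior covariance
R = np.eye(3) * 0.5                # data-noise covariance

C0h = np.linalg.cholesky(C0)
M = C0h.T @ L.T @ np.linalg.inv(R) @ L @ C0h
lam, V = np.linalg.eigh(M)
Ch = C0h @ V @ np.diag((1.0 + lam) ** -0.5)    # posterior covariance generator
C_from_factor = Ch @ Ch.T

C_direct = np.linalg.inv(np.linalg.inv(C0) + L.T @ np.linalg.inv(R) @ L)
print(np.allclose(C_from_factor, C_direct))    # True
```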
  • An example application is to decision analysis (DA) under a utility function (e.g. project profit, solution precision, etc.) of some discrete parameter(s) d to be decided, e.g. number of surveys, estimated variances, etc.
  • the decider compares the completely unaware case against the informed case, defining the value of information VOI := EMV - MEV, (Eq. 17) which picks out the optimal path through a decision tree that bifurcates at each discrete d choice.
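The VOI computation of Eq. 17 can be sketched for a toy two-outcome decision tree; the payoffs and probabilities are illustrative, and the reading of EMV as the expected value when deciding after the outcome is known, versus MEV as the best expected value when committing beforehand, is an interpretation assumed for this sketch.

```python
# Sketch of value of information VOI = EMV - MEV for a tiny decision tree
# (all numbers illustrative).
p_outcomes = [0.6, 0.4]                  # outcome probabilities
payoff = {                               # payoff[decision][outcome]
    "drill": [100.0, -40.0],
    "skip":  [0.0, 0.0],
}

# MEV: commit to the single best decision in advance.
mev = max(sum(p * v for p, v in zip(p_outcomes, vals)) for vals in payoff.values())
# EMV: per outcome, pick the best decision with perfect information.
emv = sum(p * max(vals[i] for vals in payoff.values())
          for i, p in enumerate(p_outcomes))
voi = emv - mev
print(mev, emv, voi)  # 44.0 60.0 16.0
```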
  • (1+2ε)^{1/2} is the ratio of transverse pressure-wave speed v_p⊥ to v_p.
  • v_p δ is the paraxial curvature of pressure-wave speed w.r.t. small declination angle from e.
  • the anelliptic normal-moveout (ANM) constraints can supplement an initial distribution of (v_p, ε, δ), such as provided by the rock-physics modeling.
  • Rock-physics modeling starts from compaction, mineral and temperature parameters and (v_p, ε₂, δ₂) measurements at wells, interpolates those data in the domain between wells, and infers non-Gaussian, e.g. Gaussian-mixture, statistical relationships within (v_p, ε₂, δ₂) space.
  • the first aspect of this system is to rank the decision itself (not its options) w.r.t. its value to the immediate tasks, overall project, impact on accuracy/confidence of particular solution steps etc. In this way appropriate resources (time, computation etc.) are spent on decisions according to their value.
  • the VOI formalism described above (Eq. 17) and improved below is one way to create this ranking.
  • DA incorporating decisions that may:
    o vary in a continuum; or
    o otherwise affect the estimation of the model-update probabilities.
  • any combination of interest could be the starting point, e.g.
  • Fig. 4 shows various isosurfaces of v_n, η and δ, and the curves of intersections of these surfaces. The following observations are relevant.
  • the η-surfaces also orthogonally intersect the red v_n-surface (404) along the 3 black curves (412). Therefore the orange curve (402) orthogonally intersects the black (412) and green (410) curves, and quantified ambiguity is proportional to that curve's arc length.
  • ambiguity is proportional to arc length along the orange curve (402), i.e. which orange curve we're on may be well resolved, but where we are on that curve is ambiguous.
  • the black ellipse in Fig. 2 represents 2 of the eigenvalues and eigenvectors of C_0.
  • let J_2 denote the first 2 columns of J.
  • the above linearized constraints correspond to the limit of coincident blue lines in Fig. 2.
  • the ellipse (502) in Fig. 5 represents 2 of the eigenvalues and eigenvectors of the reduced covariance
  • CDF refers to the Gaussian cumulative distribution function.
  • e) Transformation can also be designed using geometric insight.
  • a typical rock-physics (v_p, ε₂, δ₂) density does not appear to be Gaussian, but rather to be shaped approximately like a parabola P in (v_p, ε₂, δ₂)-space, characterized by its vertex point a_0 with axis and tangent unit vectors j_0 and k, and its focus point.
  • this density was approximated as a Gaussian mixture, each of whose terms' principal axes was approximately tangent to a point on P. This may or may not be a convenient representation.
  • Fig. 7 is a graph of three-dimensional (v_p, ε₂, δ₂) Gaussian random points.
  • Fig. 8 is a graph of three-dimensional (v_p, ε₂, δ₂) points with Gaussian distribution in curvilinear coordinates.
  • a and C can replace samples of the posterior. Note that just the computationally easier of the z_1 or z_2 subsystems is to be solved. This is especially useful when new data z are acquired frequently, e.g. in seismic guided drilling.
  • L can be replaced in these 3 equations by its potentially nonlinear function g (e.g. full ray tracing and migration, or an emulator of these) simply by replacing C_0 and L C_0 everywhere by the action (using vector inner products) of multiplying by ensemble estimates of <a a^t> and <g[a] a^t>.
  • the resulting g[a] statistics are called the unscented transform.
  • the eigenvalues of this matrix are its 6 diagonal entries and correspond to 6 eigenvectors expressed explicitly in terms of sines and cosines of the angles θ_1, θ_2, θ_3 (Eq. 59).
  • the inverse Cholesky factor can be written analytically in terms of cosines of the angles θ_i (Eq. 60), whence the Voigt compliance matrix can be expressed analytically.
  • Fig. 6 depicts a tetrahedral region 602 of orthorhombic stiffness components enforcing positive-definiteness.
  • Another aspect of this disclosure involves building up the DA and automatic guidance framework described above, using several related techniques to compute the 3 fundamental mathematical structures required for earth model building, namely the system sensitivity matrix and the covariance matrices of the data and of the model properties:
  • The SVD is an analysis of the Frechet derivative, that is, the mutual sensitivity of the "raw" preconditioned data R^{-1/2} z and the preconditioned model update, each of which has covariance I.
  • the case when either of R^(1/2) or C0^(1/2) is replaced by a singular matrix may be dealt with using the restricted SVD of De Moor and Golub (1991, SIAM Matrix Anal. Appl. 12), or similar decompositions.
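As a purely illustrative sketch (not part of the claimed method), the whitened-sensitivity SVD described above can be written in Python; the matrix sizes and random values below are invented for the example, and diagonal covariances are assumed so that the square roots are trivial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small system: 8 data, 5 model properties.
L = rng.standard_normal((8, 5))          # sensitivity (Frechet/Jacobian) matrix
R = np.diag(rng.uniform(0.5, 2.0, 8))    # data-residual covariance (diagonal here)
C0 = np.diag(rng.uniform(0.1, 1.0, 5))   # prior model covariance (diagonal here)

# Whitening: A = R^(-1/2) L C0^(1/2) relates unit-covariance data to
# unit-covariance model updates.
Rm12 = np.diag(1.0 / np.sqrt(np.diag(R)))
C012 = np.diag(np.sqrt(np.diag(C0)))
A = Rm12 @ L @ C012

# Singular values rank the mutual sensitivity of whitened data and model directions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A damped model-resolution operator built from the same decomposition.
resolution = Vt.T @ np.diag(s**2 / (s**2 + 1.0)) @ Vt
```

The restricted SVD mentioned above would replace `np.linalg.svd` when R^(1/2) or C0^(1/2) is singular; that case is not handled in this sketch.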
  • To begin constructing R^(1/2), note that the data z can be de-noised by initially assuming they contain just noise, then iteratively projecting the currently putative noise part onto compressing basis functions such as orthogonal wavelets, and re-defining the next putative noise part to be the residual of the z reconstruction from the part of z that was successfully compressed.
  • This algorithm converges and results in 2 orthogonal parts of z, namely a signal that greatly resembles the original z, and a noise part that has a much smaller norm than the signal does.
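The iterative compression split described in the two preceding items can be sketched as follows; this is an illustration only, using an orthonormal Haar basis and a simple robust threshold (both choices are the sketch's assumptions, not prescribed by the text):

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar transform matrix for n a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        m = H.shape[0]
        top = np.kron(H, [1.0, 1.0])           # coarse (averaging) rows
        bot = np.kron(np.eye(m), [1.0, -1.0])  # detail (differencing) rows
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

def denoise(z, nsigma=3.0, iters=10):
    """Split z into orthogonal signal and noise parts by iterative compression.

    Start by treating all of z as putative noise; at each pass, coefficients
    above a robust threshold are 'successfully compressed' and moved into the
    signal, and the residual becomes the next putative noise."""
    n = z.size
    H = haar_matrix(n)
    signal, noise = np.zeros(n), z.copy()
    for _ in range(iters):
        c = H @ noise
        sigma = np.median(np.abs(c)) / 0.6745   # robust noise-scale estimate
        keep = np.abs(c) > nsigma * sigma
        if not keep.any():
            break
        compressed = H.T @ (c * keep)
        signal += compressed
        noise -= compressed
    return signal, noise

# Invented demo: a blocky (Haar-compressible) signal plus small white noise.
z = np.repeat([0.0, 4.0, 2.0, 6.0], 16) + 0.1 * np.random.default_rng(0).standard_normal(64)
signal, noise = denoise(z)
```

Consistent with the text, the two parts are orthogonal, sum to z exactly, and the noise part has a much smaller norm than the signal.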
  • R^(1/2) can be parameterized using the leaky-derivative approach, in which R^(1/2) is a tensor product of bidiagonal matrices.
  • the entries of R can be estimated using the ergodic hypothesis over z coordinates (independent variables) other than depth and offset.
  • the 2 length scales for the exponentials in depth and offset coordinates can be estimated by iteratively finding a best linear fit to the ergodic estimate of ln R over coordinate separations where it is real-valued, then finding the unique minimum of a positive function of the length-scale ratio.
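A minimal numerical sketch of the exponential-correlation / bidiagonal structure in the preceding items (grid size, spacing, and length scale below are invented): for a 1-D exponential (AR(1)) correlation, the whitening factor is exactly bidiagonal, and a separable depth-offset covariance is the Kronecker (tensor) product of the 1-D pieces.

```python
import numpy as np

n, dx, ell = 16, 1.0, 3.0
rho = np.exp(-dx / ell)                          # one-grid-step exponential correlation

idx = np.arange(n)
C = rho ** np.abs(idx[:, None] - idx[None, :])   # C_ij = exp(-|i-j| dx / ell)

# Bidiagonal "leaky derivative" B: applying B to an exponentially correlated
# sequence yields unit-variance white noise, i.e. B C B^t = I.
B = np.eye(n)
B[1:, 1:] /= np.sqrt(1.0 - rho**2)
B[np.arange(1, n), np.arange(n - 1)] = -rho / np.sqrt(1.0 - rho**2)

# Separable depth-offset covariance and its tensor-product whitener.
C2 = np.kron(C, C)
B2 = np.kron(B, B)
```

The length scales themselves would be fitted from data as the text describes; here `ell` is simply assumed.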
  • Another use of R is in the case where the posterior uncertainty C has to be constrained to some acceptable range.
  • the residual-covariance scale σ0 tr[R]^(1/2) may be construed as a free inflation parameter to be tuned to keep C within that range.
  • e.g. by constraining the objective function not to randomly exceed its departure from its reference value φ[0]
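A toy sketch of tuning such an inflation parameter (all sizes, values, and the acceptance criterion below are invented for illustration; the text's criterion involves the objective function's reference value):

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((10, 3))    # made-up linear sensitivity
C0 = np.eye(3)                      # prior covariance

def posterior_cov(scale):
    # Posterior covariance when the residual covariance is inflated to scale * I.
    Rinv = np.eye(10) / scale
    return np.linalg.inv(G.T @ Rinv @ G + np.linalg.inv(C0))

# Inflate the residual-covariance scale until the largest posterior variance
# reaches an acceptable floor (illustrative tuning rule).
scale = 0.01
while np.diag(posterior_cov(scale)).max() < 0.05:
    scale *= 2.0
```

Because inflating the residual covariance discards data information, the posterior variance grows monotonically with `scale` and is bounded above by the prior variance, so the loop terminates.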
  • R^(−1/2) L C Lᵗ R^(−1/2) provides an information density of the posterior, represented in data space.
  • the first term is what is conventionally dropped. It employs (at least implicitly) a separate model-space Hessian matrix ∂²gi/∂a∂aᵗ for each datum prediction gi.
  • the sum of the 2nd and 3rd terms is the familiar posterior precision matrix, so the first term is a higher-order, z-dependent contribution to that C^(−1).
  • Newton's method prescribes update-vector iterates
  • I − J is effectively the posterior (data-conditional) update covariance normalized by the prior covariance, so its entries represent the reduction in update-component variance and correlation.
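A small linear-Gaussian sketch of these two items (sizes and values invented; with C0 = I, the identity I − J = C_post C0^(−1) reduces to I − J = C_post):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 4))     # linearized sensitivity about the reference model
Rinv = 2.0 * np.eye(6)              # inverse data-residual covariance (R = 0.5 I)
C0 = np.eye(4)                      # prior model covariance (identity for clarity)
z = rng.standard_normal(6)          # invented data residual

# (Gauss-)Newton Hessian and one update step, starting from a = 0, for
#   phi(a) = (z - G a)^t R^-1 (z - G a) + a^t C0^-1 a.
H = G.T @ Rinv @ G + np.linalg.inv(C0)
a = np.linalg.solve(H, G.T @ Rinv @ z)

# Resolution-like matrix J; I - J is the posterior update covariance
# normalized by the prior covariance.
J = np.linalg.solve(H, G.T @ Rinv @ G)
ImJ = np.eye(4) - J
Cpost = np.linalg.inv(H)
```

Since the problem here is linear, the single Newton step lands at the minimizer, and the eigenvalues of J lie in [0, 1), quantifying how much each update direction is constrained by the data.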
  • a challenge of generalizations such as these is their high dimensionality.
  • the dimensionality can be reduced by exploring a few hyperparameters, e.g. filter length scale, damping parameters, or update bounds, and studying their effect on φ[·] graphically.
  • the first term, depending on the variation of the total path length, is 0 if path k does not approach a location of aj, and otherwise has to be estimated numerically.
  • the second term can be computed accurately by evolving the auxiliary Hamiltonian dynamics
  • a 6D state (∂r/∂aj, ∂p/∂aj) stays at (0, 0) until the 3D trajectory r[t] enters a region where the local stiffness a∘r[t] is interpolated from aj, and after r[t] departs that region the state is controlled by (∂/∂p, −∂/∂r) ∂H/∂p.
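As an illustration of the Hamiltonian ray dynamics underlying the preceding items, here is a minimal tracer for the 6-D (r, p) state; the isotropic Hamiltonian H = (v² p·p − 1)/2 and the constant-velocity medium are this sketch's assumptions (the auxiliary (∂r/∂aj, ∂p/∂aj) sensitivity system would augment the same state but is not implemented):

```python
import numpy as np

def velocity(r):
    # Invented smooth velocity model; constant here, so the ray is a straight line.
    return 2.0

def grad_velocity(r):
    return np.zeros_like(r)

def ray_rhs(state):
    # Hamiltonian H = (v^2 p.p - 1)/2;  dr/dt = dH/dp,  dp/dt = -dH/dr.
    r, p = state[:3], state[3:]
    v, gv = velocity(r), grad_velocity(r)
    return np.concatenate([v**2 * p, -v * gv * (p @ p)])

def trace(r0, p0, T, nsteps=200):
    # Classical fourth-order Runge-Kutta on the 6-D (r, p) state.
    state = np.concatenate([r0, p0])
    h = T / nsteps
    for _ in range(nsteps):
        k1 = ray_rhs(state)
        k2 = ray_rhs(state + 0.5 * h * k1)
        k3 = ray_rhs(state + 0.5 * h * k2)
        k4 = ray_rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[:3], state[3:]

r0 = np.zeros(3)
p0 = np.array([0.0, 0.0, 0.5])   # slowness vector with |p| = 1/v on the ray
rT, pT = trace(r0, p0, 1.0)
```

In the constant-velocity case the slowness p stays fixed and the ray advances at speed v, which gives an easy consistency check.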
  • L also depends on the dip-field axis e, which can be estimated as the leading eigenvector of the structure tensor ∇ψ ⊗ ∇ψ, smoothed with some scale length s, where ψ is any spatial scalar that characterizes the reflectors.
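The structure-tensor dip estimate just described can be sketched in 2-D as follows; averaging over the whole window stands in for smoothing with scale length s, and the fields ψ below are invented:

```python
import numpy as np

def dip_axis(psi):
    """Leading eigenvector of the window-averaged structure tensor
    grad(psi) (x) grad(psi) for a 2-D scalar field psi."""
    gz, gx = np.gradient(psi)
    T = np.array([[(gz * gz).mean(), (gz * gx).mean()],
                  [(gz * gx).mean(), (gx * gx).mean()]])
    evals, evecs = np.linalg.eigh(T)
    return evecs[:, -1]            # eigenvector of the largest eigenvalue

# psi increasing along the first (depth) axis only -> axis e along depth.
psi_flat = np.outer(np.arange(32.0), np.ones(32))
e1 = dip_axis(psi_flat)

# psi increasing equally along both axes -> axis e along the diagonal.
zz, xx = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
e2 = dip_axis(zz + xx)
```

The eigenvector is defined only up to sign, so only its direction (here checked through absolute values) is meaningful.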
  • a and a r are close to (a): in a global sense if in a local sense if for the location and property associated with index j; and in a modal sense if
  • the ratios Λv,i,i / Λ(ε,δ),i,i of the squared singular spectra indicate the relative balance of velocity vs. anisotropy sensitivity to the data.
  • by right-multiplying C0^(1/2) by (Λv,i,i / Λ(ε,δ),i,i)^(−1/2) one expects to get a more balanced solution.
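A toy version of this balancing (block sizes, sensitivities, and the root-mean-square balancing rule below are this sketch's assumptions): each property class gets its own singular spectrum, and rescaling each block by the reciprocal RMS singular value equalizes the spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented whitened sensitivity: first 3 columns ~ velocity, last 3 ~ anisotropy,
# with anisotropy far less sensitive to the data than velocity.
A = np.hstack([10.0 * rng.standard_normal((20, 3)),   # velocity part
               0.1 * rng.standard_normal((20, 3))])   # anisotropy part

def spectrum(block):
    return np.linalg.svd(block, compute_uv=False)

sv = spectrum(A[:, :3])      # velocity singular spectrum
sa = spectrum(A[:, 3:])      # anisotropy singular spectrum

# Reciprocal root-mean-square singular value per part balances the spectra;
# right-multiplying A is equivalent to rescaling C0^(1/2) on the right.
wv = 1.0 / np.sqrt(np.mean(sv**2))
wa = 1.0 / np.sqrt(np.mean(sa**2))
Ab = A @ np.diag([wv] * 3 + [wa] * 3)
```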
  • Various techniques described in this disclosure can be performed by a computer system, such as a computer system 900 depicted in Fig. 9.
  • the computer system 900 includes at least one processor 902, which can be coupled to a network interface 904 (to communicate over a network) and a machine-readable storage medium 906 (to store machine-readable instructions 908 and data).
  • the machine-readable instructions can be loaded for execution by the at least one processor 902.
  • the storage medium 906 can be implemented as one or more non-transitory computer-readable or machine-readable storage media.
  • the storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices.
  • the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes.
  • Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture).
  • An article or article of manufacture can refer to any manufactured single component or multiple components.
  • the storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.


Abstract

In general, at least one history with respect to decisions that support completing a data processing or quality assurance is analyzed, the at least one history including a data-processing history and/or a quality assurance software-use history. Decision guidance is automatically created for a subsequent data processing or quality assurance.

Description

DECISION GUIDANCE
CROSS REFERENCE TO RELATED APPLICATION
[001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application Serial No. 61/940,729, filed February 17, 2014, which is hereby incorporated by reference.
BACKGROUND
[002] Various techniques (e.g. electromagnetic or seismic techniques) exist to perform surveys of subterranean structures for identifying subterranean elements of interest.
Examples of subterranean elements of interest include hydrocarbon bearing reservoirs, gas injection zones, thin carbonate or salt layers, and fresh water aquifers. One type of electromagnetic (EM) survey technique is the controlled source electromagnetic (CSEM) survey technique, in which an electromagnetic transmitter, called a "source," is used to generate electromagnetic signals. Surveying units, called "receivers," are deployed within an area of interest to make measurements from which information about the subterranean structure can be derived. The receivers may include a number of sensing elements for detecting any combination of electric fields, electric currents, and/or magnetic fields.
[003] A seismic survey technique uses a seismic source, such as an air gun, a vibrator, or an explosive to generate seismic waves. The seismic waves are propagated into the subterranean structure, with a portion of the seismic waves reflected back to the surface (earth surface, sea floor, sea surface, or wellbore surface) for receipt by seismic receivers (e.g. geophones, hydrophones, etc.).
[004] Measurement data (e.g. seismic measurement data and/or EM measurement data) can be analyzed to develop an output that represents a subterranean structure, where the output can include an image of the subterranean structure, a model of the subterranean structure, and so forth.
BRIEF DESCRIPTION OF THE DRAWINGS
[005] Some embodiments are described with respect to the following figures.
[006] Figs. 1-3 are flow diagrams of various example processes according to various implementations.
[007] Fig. 4 is a three-dimensional graph showing isosurfaces of various properties, according to some examples.
[009] Fig. 5 is a schematic diagram of a re-parameterized covariance, according to some implementations.
[009] Fig. 6 is a diagram of a tetrahedral region of parameters of orthorhombic stiffness components enforcing positive-definiteness.
[0010] Fig. 7 is a graph of three-dimensional Gaussian random points in accordance with some examples.
[001 1] Fig. 8 is a graph of three-dimensional points with Gaussian distribution in curvilinear coordinates, in accordance with some examples.
[0012] Fig. 9 is a block diagram of an example computer system according to some implementations.
DETAILED DESCRIPTION
[0013] Although reference is made to performing surveying to characterize a subsurface structure, techniques or mechanisms according to some implementations can also be applied to perform surveys of other structures, such as human tissue, a mechanical structure, plant tissue, animal tissue, a solid volume, a substantially solid volume, a liquid volume, a gas volume, a plasma volume, a volume of space near and/or outside the atmosphere of a planet, asteroid, comet, moon, or other body, and so forth. In addition, the following describes seismic sources and seismic receivers that are part of seismic survey equipment. In other implementations, other types of survey equipment can be used, which can include other types of survey sources and survey receivers.
[0014] Also, techniques or mechanisms according to some implementations can also be applied in contexts other than surveying of target structures, such as seismic guided drilling, reservoir modeling, field development planning, and so forth.
[0015] Creating Decision Guidance
[0016] In accordance with some embodiments, techniques or mechanisms are provided to collect data-processing histories and/or quality assurance software-use histories,
quantitatively and statistically analyze the present and other data processors' past decisions with respect to completing data processing successfully, and automatically create decision guidance to increase accuracy and/or efficiency of subsequent (new) data processing or quality assurance activities. Decision guidance refers to providing information for guiding (e.g. assisting, instructing, etc.) an operator (human user or machine) in performing a subsequent action.
[0017] Data processing can refer to processing of data for characterizing a target structure or the properties in its surrounding volume of space, such as a subsurface structure or other target structure as discussed above. The data that are processed can include any one or some combination of the following: data acquired by a survey operation using survey equipment (including seismic survey equipment, electromagnetic survey equipment, or borehole survey equipment); data acquired using sensors in other contexts; simulated or synthetic data; and so forth.
[0018] Characterizing a target structure can refer to building a model of the target structure, generating an image of the target structure (e.g. an earth formation, fractures in the subsurface, a reservoir, etc.), or producing other valuable information that provides a description or some indication regarding content of the target structure. For making decisions, characterizing that also estimates statistical uncertainty of such model, image, or other information, can be more useful. Uncertainty can refer to uncertainty in the model, image, or other information, produced by data processing.
[0019] Data processing can also refer to field development planning and management, e.g. deciding how many wells to drill, where to place the wells, and how long to operate the wells.
[0020] Data processing can also refer to other types of processing that can be performed on any type of data for any of various purposes.
[0021] A data-processing history can refer to information relating to past data processing that has been performed, including any or some combination of the following: information relating to where geographically, jurisdictionally, etc. the data were obtained; information relating to the project associated with the data processing; information relating to the capabilities, success records, etc. of operators that performed the data processing; information relating to options or values of parameters that were set for the data processing, such as options or parameter values in a computation job control file or a user interface; information relating to what files were read in or written out by a data processing job; information relating to how much time or computing resources were consumed; information relating to time duration between steps of a data processing job; information relating to usage of a user input device, such as a mouse; information relating to repetitive or erratic activities that may indicate problems with software, problem description, user training, etc.; or other information.
[0022] A data processor's decision can refer to a workflow decision or a workflow choice made among a large number of options (e.g. possibly mutually dependent options) in association with the successful performance of data processing. A data processor can refer to a human using a computer, a collection of computers, a hardware processor within a computer, or a collection of hardware processors within one or more computers. An automatically created decision guidance can refer to information that is easily useable by a human or an automated process to take effective action in a specific task, such as solving a problem or creating a product.
[0023] Collection of data for data-processing histories and a data processor's decisions can be performed at various time scales, such as fractions of a second for a mouse click or keyboard stroke, up to hours or days for expert intervention, and so forth.
[0024] Observed decisions can be ranked, e.g. using value of information (as defined by Eq. 17, for example). The ranking of observed decisions can be based on the physical insight or qualitative understanding provided by or informing a decision, the value of a decision to immediate tasks or to an overall project, the impact of the decision on accuracy and/or confidence of particular tasks of a solution, and so forth. Multiple ranks can be sorted lexicographically, i.e. sorting the best of each less-significant rank for each value of each more-significant rank.
[0025] The analysis can characterize decisions in different ranges, e.g. data-based versus model-based; critical versus inconsequential; quantitative versus qualitative; repetitive versus singular; little revised versus much revised; discrete versus continuous, and so forth.
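The lexicographic multi-rank sorting described above can be sketched as follows; the decision records and rank-field names are invented for the example (the actual ranks would come from value-of-information and impact analyses):

```python
# Invented decision records with two rank criteria, most significant first.
decisions = [
    {"id": "pick-mute", "value_of_info": 3, "accuracy_impact": 2},
    {"id": "set-aperture", "value_of_info": 5, "accuracy_impact": 1},
    {"id": "edit-horizon", "value_of_info": 3, "accuracy_impact": 4},
]

# Lexicographic sort: the best of each less-significant rank is sorted
# within each value of each more-significant rank.
ranked = sorted(decisions,
                key=lambda d: (-d["value_of_info"], -d["accuracy_impact"]))
```

Ties on the most significant rank (value of information) are broken by the next rank (accuracy impact), exactly as the text's lexicographic rule prescribes.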
[0026] A decision guidance that is created can use behavior-modeling technology, such as used in the attention-management field, data-mining field, machine-learning field, profiling field, targeted servicing field, social-media field, social networking field, commercial advertising and marketing fields, computer gaming field, public relations field, national security field, or other fields.
[0027] By automatically creating decision guidance based on data-processing histories and/or quality assurance software-use histories, and based on analysis of a data processor's decisions, subsequent data processing or other activities can be performed more quickly and/or efficiently. Also, accuracy, precision, and stability of answers can be increased.
[0028] A quality-assurance software-use history can refer to a quality of a result produced by software in performing data processing. The quality of a result produced by software can be measured using any of various techniques, such as based on comparing the result with expected results, or using other techniques.
[0029] A quality-assurance activity can refer to an activity in which the quality of software used to produce a result is determined. A quality assurance activity can involve deriving results from data processing and performing any one or more of the following: comparing the derived results quantitatively and qualitatively against expected results or results that are plausible in a respective application, e.g. geology exploration application; testing stability of the derived results against data-processing alterations (e.g. alternative decisions); and inferring the consequences of the derived results for other goals and questions (e.g. if a project should be started or continued).
[0030] Fig. 1 is a flow diagram of an example process according to some implementations of the present disclosure. The process can be performed by a computer system, for example. A computer system can include a computer, an arrangement of computers, a hardware processor, or an arrangement of hardware processors.
[0031] The process of Fig. 1 includes collecting (at 102) one or more of at least one data-processing history or at least one quality assurance software-use history. The process further analyzes (at 104) at least one history with respect to decisions that support completing a data processing or quality assurance activity.
[0032] The analysis of a history (data-processing history and/or quality assurance software- use history) is part of an analysis of a decision. The analyzing of a decision can use any of the criteria discussed above for ranking or characterizing decisions. Further details relating to decision analysis are provided in the text accompanying Eqs. 14-17 discussed below. Details relating to building up a decision analysis framework are provided further below in Section III.
[0033] The process of Fig. 1 automatically creates (at 106) a decision guidance for a subsequent data processing or quality assurance activity.
[0034] Computing Information Metrics to Create Decision Guidance
[0035] In further implementations, techniques or mechanisms are provided to compute traditional and new, generalized information metrics (e.g. Fisher operators, resolution operators, or other metrics), and to use the information metrics to create decision guidance in regard to data perturbations and/or model (property) perturbations, such as the decision guidance created at 106 in Fig. 1. A data perturbation can refer to use of data in a workflow that was not previously used in that workflow or that was previously used with different numerical values. A model perturbation can refer to generating or updating a model, such as a model of a subsurface structure or other target structure or its surrounding volume of space. Both perturbation types can refer to relatively small volumes encompassed by a relatively much larger volume under consideration. In some examples, models tend to be at most three-dimensional, whereas data can be higher-dimensional depending on data-acquisition techniques used. The ability to use or modify such small perturbations in a manner consistent with the larger data or model is a feature and motive of the information-based methods according to the present disclosure. Further details regarding the foregoing are provided in Section III below, and in Eqs. 70-71 and 75-77.
[0036] Computing the information metrics can combine multiple factors, such as factors represented by a system sensitivity matrix (also known as system design matrix, or Frechet operator, or Jacobian matrix, or matrix of covariates, regressors, or exogenous, explanatory, independent, input, or predictor variables), and/or covariance matrices of observed data (acquired by survey acquisition equipment, and also known as regressands or dependent, endogenous, measured, output, or response variables) or of properties (also known as effects, parameters, or regression coefficients) relating to a target structure.
In general, covariance describes the direction and magnitude of joint random deviation of multiple variables from their expected values.
[0037] In this manner, the computation and application of traditional and new, generalized information metrics can be improved. Generally, a system sensitivity matrix can relate a vector of properties to a vector of data.
[0038] Fig. 2 is a flow diagram of a process according to further implementations. The process of Fig. 2 can be performed by a computer system, for example.
[0039] The process computes (at 202) at least one information metric that combines multiple effects, such as the factors discussed above, including a system sensitivity matrix and a covariance matrix of one or both of data and at least one property of a target structure or properties of a volume in space surrounding the target structure. The process uses (at 204) the at least one information metric automatically to create a decision guidance.
[0040] In some examples, an information metric can include a system sensitivity matrix (e.g. a matrix related to tomography).
  • The tomography matrix can be computed by ray tracing perturbed by first- or second-order derivatives of trajectory and slowness with respect to model properties, such as described in connection with Eqs. 78-79.
  • The tomography matrix employs a dip field with anisotropic, multiscale coherence, such as described in Section III.f below.
  • The validity of linearization can be checked for various reference models, such as described in connection with Eqs. 80-84.
[0041] In other examples, an information metric can include a data-residual covariance operator.
• In general, a covariance operator relates a linear function of some variables to the covariance between the variables and that function.
  • This information metric can be used when data to be processed contains noise to be removed, e.g. by iterative compression. This is described further in Section III.a below.
  • The covariance operator can include an exponential correlation, such as described in Section III.b below.
  • Variance can be inflated to attain desired posterior-residual statistics, such as described in connection with Eqs. 67-69.
[0042] In further examples, an information metric is generalized by a data-dependent Hessian, such as described in connection with Eqs. 72-74.
  • A residual norm can be visualized in the space of a few hyper-parameters, such as described in connection with Section III.d below.
  • Information can be provided to rank a model space so that probability quantiles like P10, P90 apply, as described in Section III.d below.
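Once a model space (or an ensemble drawn from it) is ranked, quantiles such as P10 and P90 follow directly; a minimal sketch with an invented ensemble (note that percentile-labeling conventions vary in the industry; the plain percentile convention is used here):

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented ensemble of model outcomes, e.g. a depth or volume attribute.
samples = rng.normal(loc=100.0, scale=10.0, size=5000)

# Rank the outcomes and read off the probability quantiles.
p10, p50, p90 = np.percentile(samples, [10, 50, 90])
```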
[0043] In other examples, localized data perturbations or model perturbations can be computed quickly with consistent global effects, such as described in Section III.h below.
[0044] In further examples, different data parts (e.g. checkshot data, seismic data, etc.) and/or model parts can be balanced in the residual norm or prior-model norm using separate eigenvector analysis of each part.
  • The eigenvectors enable projection to remove redundant components, such as described in connection with Eqs. 85-87.
  • The eigenvectors (or other data structures) enable weighting each part (data part or model part) by the part's reciprocal degrees-of-freedom, such as described in connection with Eq. 88.
[0045] Providing Prior Algebraic or Geometric Information To Build or Update a Model
[0046] Prior information can be used to build or update a model. Examples of different types of prior information include different types of probability structures, including a geological probability structure, a rock-physics probability structure, and a seismic probability structure (discussed in detail further below). More generally, in some implementations, the prior information can include information describing a distribution of values of a property (or multiple properties) that relate to physical characteristics of the subterranean structure. There can be other types of prior information and other fields to which these statements apply.
[0047] The efficiency and/or utility of prior information can be improved by considering physical stability and/or statistical stability of properties relating to a model of a target structure or a volume in space surrounding the target structure, in constructing a mathematical transformation of the properties. A value is stable if its change in response to an input change vanishes when the input returns to normal. Thus, physical stability refers to a characteristic energy (such as material strain energy) reaching a minimum value that is stable against (strain field) perturbations, as a spherical bead resting at the bottom of a bowl is stable against small impacts. In similar manner, statistical stability refers to the mode (maximizing instance) of a random field interpreted as a stable minimum of the statistical mechanical potential, i.e. of the logarithm of the reciprocal probability density. Both kinds of stability can be interpreted algebraically (in which a certain matrix is specified to be positive-definite) and geometrically (in which a certain surface is specified to have positive curvature). These complementary points of view of stability, positive-definiteness and positive curvature, underlie many of the techniques described herein. The improved prior information can be used to build or update the model and to create a decision guidance.
[0048] In some implementations, properties of a target structure can be re-parameterized, where re-parameterization can refer to finding mathematical combinations of the properties which reveal which of the properties are more certain and which of the properties are less certain, or combinations which are useful in other ways.
[0049] Fig. 3 is a flow diagram of another example process according to further implementations. The process of Fig. 3 can be performed by a computer system.
[0050] The process of Fig. 3 provides (at 302) prior information based at least in part on transforming at least one property relating to a model of a target structure or a spatial volume surrounding the target structure, where the transforming considers one or both of a physical stability and a statistical stability of the at least one property. The process uses (at 304) the prior information to build or update the model. Building the model can refer to initially creating the model, while updating the model can refer to updating values of parameters of the model.
[0051] In some implementations, the prior information can constrain a stiffness (elasticity) matrix (which represents a stiffness tensor) to be positive-definite. The stiffness tensor gives the relationship between stresses (resulting internal stresses) and strains (resulting deformations). The prior information thus constrains the strain field to be physically stable by constraining the stiffness matrix (or more specifically, a stiffness tensor) to be positive-definite. A strain field refers to a representation of strains within a given volume in a target structure.
  • In some examples, the stiffness tensor obeys orthorhombic symmetry (the symmetry of a common building brick, i.e. having distinct lengths in its three dimensions but its vertex angles being 90°, implying 9 independent properties). This is discussed in Section II below. o In some implementations, positive-definiteness is expressed as a triplet of correlation-like properties belonging to a certain tetrahedron (optionally using barycentric coordinates). This is described in connection with Eqs. 54-57, 61 and in Section II.f below. o In some implementations, six explicit eigenvalues and 6×1 eigenvectors of a Cholesky factor stiffness are used, as described in connection with Eqs. 58-59. o In some implementations, a compliance (inverse stiffness) matrix is used, as described in connection with Eq. 60. o In some implementations, an algebraically indefinite stiffness is minimally transformed to a positive-definite stiffness, as described in connection with Eq. 62 and in Section II.b. o In some implementations, Gaussian probability densities are used for bounded model-property transformations, as described in connection with Eqs. 64-65. o In some implementations, the stability of three Thomsen anisotropies (δ1, δ2, δ3) is checked, such as according to Eq. 63.
• In further examples, the stiffness tensor obeys tilted or vertical transverse isotropic symmetry (symmetry with respect to rotations about an axis with a certain tilt, i.e. angle with respect to the vertical, or with zero tilt, and associated with five independent properties). Various features as discussed above for a stiffness tensor that obeys orthorhombic symmetry are also applicable in this case.
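As a sketch of the stability (positive-definiteness) constraint above: checking a Voigt 6×6 stiffness matrix amounts to an eigenvalue test, and one simple way to push an indefinite stiffness back to positive-definiteness is to floor its eigenvalues (this flooring is an illustrative choice, not necessarily the minimal transformation of Eq. 62):

```python
import numpy as np

def is_stable(C):
    """A (Voigt 6x6) stiffness matrix is physically stable iff positive-definite."""
    return bool(np.all(np.linalg.eigvalsh(C) > 0.0))

def nearest_pd(C, floor=1e-6):
    """Push an indefinite symmetric matrix to positive-definiteness by
    flooring its eigenvalues (one simple repair, for illustration)."""
    lam, V = np.linalg.eigh(C)
    return V @ np.diag(np.clip(lam, floor, None)) @ V.T

# Isotropic stiffness (Lame parameters lambda = 2, mu = 1) in Voigt notation,
# which is positive-definite.
lam_, mu_ = 2.0, 1.0
C_iso = np.diag([lam_ + 2 * mu_] * 3 + [mu_] * 3)
C_iso[:3, :3] += lam_ * (1.0 - np.eye(3))

# Corrupt one off-diagonal pair far beyond the stable region.
C_bad = C_iso.copy()
C_bad[0, 1] = C_bad[1, 0] = 10.0
```

The tetrahedral parameterization described above would instead build positive-definiteness in by construction, rather than testing and repairing after the fact.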
[0052] In other implementations, prior information is improved by applying a nonlinear property re-parameterization of properties (generally referred to as xi), which decorrelates the properties. Decorrelating properties refers to finding property combinations that are uncorrelated in the statistical sense. This is discussed further in Section I.a and in connection with Eqs. 28-29a, 38-39.
  • In some examples, the parameters xi are assumed to be well resolved (resolved with high precision, i.e. low variance, in other words, small expected deviation from an expected value) and other properties are not. This is described further in Section I.b. o In some examples, parameters xi such as normal-moveout velocity vn, anellipticity η, or property ratios like ε/δ or vP/vS have variances that can be estimated to be small, such as described in connection with Eqs. 29a, 29b. o In some implementations, parameters xi are known to create the remaining variance, e.g. x1 = −2vP²/vn² + (1+2ε)² + (1+2δ)², as described in connection with Eqs. 37-40.
  • In further examples, the parameters xi are assumed to be perfectly resolved (have zero variance) and other properties are not. o Certain parameters xi, such as normal-moveout velocity vn and anellipticity η, can be assumed to have zero variance, such as according to Eqs. 42, 44-48. o In case some xi do not have zero variance, linear functionals such as their depth-interval averages can be assumed to have zero variance, so that all other variance derives from the null spaces of those functionals.
• In additional implementations, property probability densities created by rock-physics prior re-parameterization can be efficiently represented, sampled and computed by nonlinear transformation to 3D parabolic coordinates. This is described in further detail in Section I.e and in connection with Eqs. 51-52.
  • In other examples, initial re-parameterization blocks can be updated by data-covariance estimates, such as described in connection with Section I.f. o Data can be assimilated into the update using a Kalman filter with linear sensitivity. See Eq. 53. o As an example, an unscented transform can extend the Kalman filter to include nonlinear sensitivity. See Section I.f.
The extended Kalman filter can be applied to Seismic Guided Drilling. See Section I.f.
The extended Kalman filter is applied to full ray tracing. See Section I.f. o Re-parameterization of data can be compressed using mesh-free multiresolution. See Section I.f.
  • In additional examples, property bounds can be applied by iterative Gaussian anamorphosis. See Eqs. 49-50.
[0053] Further Implementations
[0054] In some implementations, techniques or mechanisms can combine information from measurements (e.g. surface seismic data, well-log data, drilling data, and other data about velocity, anisotropy, etc.) with prior earth-model building (EMB) decisions, experiences and results from multiple projects and personnel. This combined information can be used to update properties of a given model or for related purposes, including deriving information-constrained property probability distributions and increasing confidence both in those distributions and in their value to further decision making.
[0055] In some examples, rock-physics estimation of prior covariance between earth-model properties can be employed in a workflow (e.g. according to techniques or mechanisms described in U.S. Patent Publication No. 2012/0281501), such as for seismic tomographic earth-model building (EMB). Techniques or mechanisms for updating a model are also described in U.S. Patent Publication No. 2009/0184958. Techniques or mechanisms for determining covariance are described in U.S. Patent Publication No. 2013/0261981. For multivariate Gaussian distributions of general model vectors x, the re-parameterization or reduction of covariance by locally linearized mappings of or constraints on x can be referred to as a re-parameterization method.
[0056] In some implementations, the re-parameterization method can be applied to the workflow that uses rock-physics or another estimation of prior covariance.
[0057] Re-parameterization can also be extended to either or both of potentially non-Gaussian distributions, or nonlinear constraints derived from curvilinear geometric consideration of the parameterization, while providing a natural way to include uncertainty in these constraints. See Eqs. 23-29a.
[0058] The workflow employing rock-physics estimation of prior covariance can be extended beyond tilted transverse isotropy and rock physics to:
• arbitrary anisotropy,
• arbitrary sources of prior constraints and information, such as EMB experience and decisions of an analyst, client or expert, and/or in a geographical region; and
• improving data utilization by projecting away the redundant contributions from different data types that inform the same model parameters.
[0059] Let us derive succinctly the objective function Φ to be minimized, from "first principles" of wave dynamics. In a linearly elastic medium of mass density ρ subject to body-force vector ρF, the evolution of the displacement vector u and the 3×3 symmetric stress-tensor field ρσ is governed by Newton's Second Law
ρ ∂²u/∂t² = ∇·(ρσ) + ρF (Eq. 1)
and Hooke's Law
σ = a:(∇⊗u + (∇⊗u)ᵗ)/2. (Eq. 2)
[0060] Thus an earth model comprises ρ and the 3×3×3×3 stiffness tensor ρa, whose symmetries a_ijkl = a_jikl = a_ijlk = a_klij enable it to be represented as a symmetric 6×6 Voigt matrix (≤ 21 independent entries). Given F, forward solution for u leads to full-waveform inversion for a and, in principle, log ρ. Alternately, substituting u ∝ A e^(-iω(t-T)) yields, in the high circular-frequency limit ω → ∞, the characteristic system
(Γ - 2H I)·A = 0 (Eq. 3)
for the polarization eigenvector A, where
Γ := (I⊗∇T):a:(I⊗∇T) (Eq. 4)
is the 3×3 Christoffel tensor and ρ can no longer be determined. Mutually orthogonal solutions A exist, each with eigenvalue
A·Γ·A = 2H = 1. (Eq. 5)
[0061] The phase velocity is ∇T/‖∇T‖² and the group velocity is (A⊗A):a:(A⊗∇T). A ray-path trajectory r[t] is conjugate to the slowness vector p[t] (defined as ∇T evaluated along r[t]) in 6D Hamiltonian dynamics:
d(r,p)/dt = (∂/∂p, -∂/∂r)H. (Eq. 6)
The travel time along path k is the path integral
t_k := ∫_{path k} ‖dr/dt‖⁻¹ ‖dr‖, (Eq. 7)
and each travel-time increment Δt_k vanishes given simultaneous updates z_i of reflector-depth deviations at offset and updates a_j of any a-components at any locations, as described by the sensitivity-kernel entries
L_ij := ∂z_i/∂a_j = -(∂t_k/∂z_i)⁻¹ ∂t_k/∂a_j (Eq. 8)
(Euler chain rule). Given estimates of z_i and prior updates a0_j, minimizing the objective function, a residual Mahalanobis norm
Φ[a] := ‖(R ⊕ C0)^(-1/2)((z′,a0′)′ - (L′,I)′a)‖², (Eq. 9)
is equivalent to maximizing the multivariate posterior probability density β[a] of a|z if β[a] is Gaussian. Here ⊕ denotes block-diagonal concatenation. This equivalence assumes the residual and update vectors z - La and a are random with joint probability density given by the multivariate Gaussian e^(-Φ[a]/2)/det[2π(R ⊕ C0)]^(1/2) with mean vector (0′,a0′)′ and covariance matrix R ⊕ C0. Then the posterior mean is the property update vector
a → ⟨a⟩ = C(LᵗR⁻¹z + C0⁻¹a0) = a0 + CLᵗR⁻¹(z - La0), (Eq. 10)
with posterior covariance matrix
C := (LᵗR⁻¹L + C0⁻¹)⁻¹ = C0 - C0Lᵗ(R + LC0Lᵗ)⁻¹LC0 (Eq. 11)
(Sherman-Morrison-Woodbury formula) and marginal z covariance matrix R + LC0Lᵗ. Various measurements and prior update information can be appended to (z′,a0′)′ and (L′,I)′, and residual or update constraints can be built into R or C0. The eigenvector decomposition truncated to rank rk[Λ],
C0^(1/2)LᵗR⁻¹LC0^(1/2) =: V(Λ ⊕ Λ_negligible)Vᵗ ≈ V(Λ ⊕ 0)Vᵗ, (Eq. 12)
provides the truncated posterior covariance generator
C^(1/2) ≈ C0^(1/2)V((I + Λ)^(-1/2) ⊕ I). (Eq. 13)
Given β[a] one can compute statistical expectations of functions x[a]:
⟨x⟩ := ∫ x[a] β[a] da ← K⁻¹ Σ_{k=1..K} x[⟨a⟩ + C^(1/2)ξ_k] as K → ∞, (Eq. 14)
over standard Gaussian samples ξ_k.
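As a minimal, purely illustrative sketch (the scalar values are hypothetical, not from the source), Eqs. 10, 11 and 14 can be checked in the single-property, single-datum case, where R, C0 and L are scalars and the Monte Carlo sum of Eq. 14 approximates an expectation under the posterior:

```python
import random
import statistics

# Hypothetical scalar example of Eqs. 10-11 (posterior mean/covariance)
# and Eq. 14 (Monte Carlo expectation); all numbers are illustrative.
R, C0 = 0.5, 2.0          # residual and prior covariances
L, a0, z = 1.5, 1.0, 3.0  # sensitivity, prior mean, observed depth residual

# Eq. 11, both forms: information form and Sherman-Morrison-Woodbury form.
C_info = 1.0 / (L * L / R + 1.0 / C0)
C_smw = C0 - C0 * L * (1.0 / (R + L * C0 * L)) * L * C0
assert abs(C_info - C_smw) < 1e-12

# Eq. 10, both forms of the posterior mean.
mean1 = C_info * (L * z / R + a0 / C0)
mean2 = a0 + C_info * L / R * (z - L * a0)
assert abs(mean1 - mean2) < 1e-12

# Eq. 14: Monte Carlo expectation of a function x[a] under the posterior.
random.seed(0)
K = 200_000
samples = [mean1 + C_info ** 0.5 * random.gauss(0.0, 1.0) for _ in range(K)]
x_mc = statistics.fmean(a * a for a in samples)  # x[a] = a^2
x_exact = mean1 ** 2 + C_info                    # exact second moment
print(C_info, mean1, x_mc, x_exact)
```

Both algebraic forms of the posterior mean and covariance agree, which is the scalar content of the Sherman-Morrison-Woodbury identity.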
An example application is to decision analysis (DA) under a utility function Y_d[a], e.g. project profit, solution precision, etc., of some discrete parameter(s) d to be decided, e.g. number of surveys, estimated variances, etc. The decider compares the completely ignorant case ("starting alone, from scratch" etc.) with the perfectly or imperfectly informed or guided case. In complete ignorance, the optimum utility that the decider can achieve is the maximum expected value
MEV = max_d ⟨Y_d⟩, (Eq. 15)
whereas in the informed or guided case, there is a greater perfect or imperfect expected maximum value
EMV = ⟨max_d Y_d⟩ or EMV = ∫ max_d ⟨Y_d ℒ[·|a′]⟩ da′, (Eq. 16)
where ℒ[a|a′] is the probabilistic likelihood of some known a′ being the perfect update a. The value of (im)perfect information or guidance is the difference
VOI := EMV - MEV, (Eq. 17)
which picks out the optimal path through a decision tree that bifurcates at each discrete d choice.
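A toy numeric sketch of Eqs. 15-17 (the utility function, decision set and property distribution are illustrative assumptions): with perfect information the decider chooses the best d per realization, so EMV is at least MEV and VOI is nonnegative:

```python
import random

random.seed(1)

def utility(d, a):
    # Hypothetical utility Y_d[a] of decision d given property value a:
    # d=0 is a safe choice, d=1 pays off only for large a.
    return 1.0 if d == 0 else 3.0 * a - 2.0

# Samples of the uncertain property a (toy posterior: Gaussian with mean 1).
samples = [1.0 + random.gauss(0.0, 1.0) for _ in range(100_000)]

decisions = (0, 1)
# Eq. 15: ignorant case -- pick the one d maximizing expected utility.
MEV = max(sum(utility(d, a) for a in samples) / len(samples) for d in decisions)
# Eq. 16, perfect information: the best d may be chosen for each realization.
EMV = sum(max(utility(d, a) for d in decisions) for a in samples) / len(samples)
# Eq. 17: value of perfect information.
VOI = EMV - MEV
print(MEV, EMV, VOI)
```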
[0062] Examples of additional symmetry assumptions on the tensor a, yielding fewer independent entries, are assumptions of: triclinic (21); monoclinic (13, including a_1323); orthorhombic (9); transverse isotropic (5); cubic (3); or isotropic (2) symmetry. Under the assumption of orthorhombic symmetry, one parameterization of an earth model comprises the values at all locations of: axial pressure- and shear-wave speeds
vP = vP,3 := a_3333^(1/2) and vS = vS,3 := a_1212^(1/2); (Eq. 18)
Thomsen pressure- and shear-wave anisotropies
ε_i := (a_iiii/a_3333 - 1)/2 = ((vP,i/vP)² - 1)/2, (Eq. 19)
together with the analogously defined shear-wave anisotropies γ_i,
δ_{3-i} := ((a_ii33 + a_i3i3)² - (a_3333 - a_i3i3)²)/2a_3333(a_3333 - a_i3i3),
δ3 := ((a_1122 + a_1212)² - (a_1111 - a_1212)²)/2a_1111(a_1111 - a_1212) (i = 1, 2);
and the unit axis vector e representing the dip field. Physically, (1+2ε_i)^(1/2) is the ratio of transverse wave speed vP,i to vP, and vP δ_i is the paraxial curvature of pressure-wave speed w.r.t. small declination angle from e. It has been observed that the (vP,ε2,δ2) variation in many models is approximately constrained to "equivalent solution" sets that have constant normal-moveout velocity vn and anellipticity η (or constant vn and η functionals, e.g. depth-interval averages), which have the following nonlinear and linearized expressions:
vn := (1+2δ2)^(1/2) vP (Eq. 20)
= (1+2δ_r)^(1/2) v_r + (1+2δ_r)^(1/2)(vP - v_r) + (1+2δ_r)^(-1/2) v_r(δ2 - δ_r) + R
= vn_r + (J11, 0, J31)(vP - v_r, ε2 - ε_r, δ2 - δ_r)ᵗ + R
and
η := (ε2 - δ2)/(1+2δ2) (Eq. 21)
= (ε_r - δ_r)/(1+2δ_r) + (1+2δ_r)⁻¹(ε2 - ε_r) - (1+2ε_r)(1+2δ_r)⁻²(δ2 - δ_r) + R
= η_r + (0, J22, J32)(vP - v_r, ε2 - ε_r, δ2 - δ_r)ᵗ + R,
where a general remainder after linearization around an arbitrary reference (v_r, ε_r, δ_r) is
R := O[(vP - v_r, ε2 - ε_r, δ2 - δ_r)ᵗ(vP - v_r, ε2 - ε_r, δ2 - δ_r)]. (Eq. 22)
[0063] The anelliptic normal-moveout (ANM) constraints can supplement an initial distribution of (vP,ε2,δ2), such as provided by rock-physics modeling. Rock-physics modeling starts from compaction, mineral and temperature parameters and (vP,ε2,δ2) measurements at wells, interpolates those data in the domain between wells, and infers non-Gaussian, e.g. Gaussian-mixture, statistical relationships within (vP,ε2,δ2) space.
[0064] The following describes additional details, and is divided into three main sections I, II, and III, each with respective sub-sections that have been referred to above.
I. Prior Information
[0065] There are numerous kinds of prior information to call upon in earth-model building, including geographical, geological, historical, operational and other data used in designing covariance matrices and in seeking efficient and accurate parameterizations, as described below. In general, we propose a software system that will use DA to return decision guidance to its user (the data processor, earth-model builder, quality assurer, etc.) at the junctures in his or her workflow where significant decisions are to be made.
• The first aspect of this system is to rank the decision itself (not its options) w.r.t. its value to the immediate tasks, the overall project, its impact on the accuracy/confidence of particular solution steps, etc. In this way appropriate resources (time, computation, etc.) are spent on decisions according to their value. The VOI formalism described above (Eq. 17) and improved below is one way to create this ranking.
• In order to perform and report useful DA automatically, we propose to:
o collect historical data on decisions by users on various time scales (from fractions of a second for a mouse click to hours or days for advanced expert intervention);
o characterize those decisions in various categories and ranges, e.g. critical-inconsequential, quantitative-qualitative, repetitive-singular, little-much revised, discrete-continuous;
o apply DA and behavior-modeling technology transferred from the social-media field, social networking field, commercial advertising field, computer gaming field, public relations field, national security field, or other fields (such as listed further above); and
o mine and model the decision patterns within these data to enable machine-learnt, artificially intelligent prediction and guidance of future decisions involved with data processing, quality assurance and other purposes.
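The collection and characterization steps above might be sketched as follows; the schema, field names and categories are purely hypothetical illustrations of the kind of decision-history record that could later be mined:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class DecisionRecord:
    """One logged user decision in an EMB workflow (illustrative schema)."""
    task: str           # e.g. "event picking", "velocity analysis"
    duration_s: float   # fraction of a second up to hours or days
    criticality: str    # "critical" .. "inconsequential"
    kind: str           # "quantitative" or "qualitative"
    revised: int = 0    # how many times the decision was revised

history = [
    DecisionRecord("event picking", 0.4, "inconsequential", "qualitative"),
    DecisionRecord("velocity analysis", 3600.0, "critical", "quantitative", 2),
    DecisionRecord("dip-field estimation", 120.0, "critical", "quantitative"),
]

# A simple mining step: tabulate decisions by category, a precursor to the
# pattern modeling and machine-learnt guidance described in the text.
by_criticality = Counter(r.criticality for r in history)
print(by_criticality)
```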
[0066] The propagation of uncertainty from one parameterization to another (described by updated covariance matrices, for example), from one decision to another (characterized by DA and VOI, for example), from one workflow to another (event picking, dip-field estimation, velocity analysis etc.), from one project to another (e.g. exploration, field assessment and development etc.) and so on, establishes a common, unifying framework over tasks and scales: to reduce uncertainty (hence risk) by combining appropriate information sources. With this unifying framework we propose that conventional EMB answer products be augmented to include new DA services to fracture modeling and other geomechanical modeling, reservoir modeling, field development planning and management etc.
[0067] Examples of prior information include:
• the ANM or other constraints in linear or nonlinear form;
• initial or partial data to inform the model-update prior;
• other representations of or constraints (e.g. geological context or plausibility) on the model-update prior;
• positive-definiteness of the (orthorhombic) stiffness tensor; and
• DA incorporating decisions that may:
o vary in a continuum; or
o otherwise affect the estimation of the model-update probabilities.
[0068] Prior information is useful to obtain an optimal model update ⟨a⟩, but is also useful to obtain a realistic representation of update uncertainty, i.e. confidence regions in the space of the properties. Our first example introduces property re-parameterization to constrain probability densities β[a] in regard to curvilinear (vP,ε2,δ2)-geometry or Gaussianity.
a) Given a completely arbitrary function x = x[a] of a, its induced probability density is
β_x[x′] = ⟨Δ[x′ - x[·]]⟩ = ∫ Δ[x′ - x[a]] β[a] da, (Eq. 23)
where Δ denotes the multivariate Dirac density. When x[a] is a re-parameterization inverted by a[x], then β_x becomes the functional composition
β_x = (β/|det J|)∘a[x], (Eq. 24)
where J := (∂x/∂aᵗ)ᵗ is the Jacobian matrix. Using a Taylor series about a_r, the covariance matrix cov[x] := ⟨(x-⟨x⟩)(x-⟨x⟩)ᵗ⟩ transforms from a to x as
cov[x] = J cov[a] Jᵗ + (remainder terms involving H[x]), (Eq. 25)
where H[x] := ∂²x/∂a∂aᵗ. The leading error term is annihilated by setting a_r = ⟨a⟩. Purely as a matter of differential geometry, certain x[a] might yield diagonal matrices
J Jᵗ → M := ⊕ diag[J Jᵗ] (Eq. 26)
throughout a-space. In this case we find that
W := Jᵗ M^(-1/2) = J⁻¹ M^(1/2) (Eq. 27)
is a 3×3 variable rotation matrix: W Wᵗ = Wᵗ W = I. Let var[x] := diag[cov[x]]. If we then make the purely statistical assertion of uncorrelated x entries,
cov[x] → ⊕ var[x], (Eq. 28)
then we can construct the implied, fully correlated covariance as
cov[a] → J⁻¹(⊕ var[x])J⁻ᵗ = W (⊕ var[x_i]/M_ii) Wᵗ, (Eq. 29a)
explicitly representing an ellipsoid with assigned semi-axis length (eigenvalue) (var[x_i]/M_ii)^(1/2) corresponding to semi-axis direction (eigenvector) in column i of W. The utility of Eq. 29a derives from the case where some var[x_i] are small, so that computation focuses on the large-var[x_i] properties.
b) In particular, given a probability density
β[vP,ε2,δ2] and information about x1 = vn and x2 = η or their functionals such as depth-interval averages, we use the transformed density
β_x = (β/|det J|)∘a[vn,η,ζ], (Eq. 30)
where up to now x3 = ζ can be any function that lets
x[a] := (vn, η, ζ)ᵗ (Eq. 31)
be invertible, and
∂ := (∂/∂vP, ∂/∂ε2, ∂/∂δ2)ᵗ (Eq. 32)
is a gradient covector operator w.r.t. a = (vP, ε2, δ2)ᵗ. Note the following:
• Instead of vn and η, any combination of interest could be the starting point, e.g. property ratios such as ε2/δ2 or vP/vS.
• To enforce det J ≠ 0 it is sufficient that ∂ζ be independent of the 2 covectors
∂vn = ((1+2δ2)^(1/2), 0, vP(1+2δ2)^(-1/2))ᵗ (Eq. 33)
and
∂η = (0, 1/(1+2δ2), -(1+2ε2)/(1+2δ2)²)ᵗ, (Eq. 34)
where c > 0 is an arbitrary parameter of the diagonal metric matrix
σ := c⁻² ⊕ 1 ⊕ 1 (Eq. 35)
in (vP,ε2,δ2)-space.
• The angle cosine between these last two covectors is proportional to
∂vnᵗ σ⁻¹ ∂η = -vP(1+2ε2)(1+2δ2)^(-5/2), (Eq. 36)
which is negative, in which sense vn and η are anticorrelated in (vP,ε2,δ2)-geometry. Thus we introduce a new, slight modification of vn as
v := (1 + 2η(1+η))^(1/4) vn (Eq. 37)
= (((1+2ε2)² + (1+2δ2)²)/2)^(1/4) vP = ((vP,2⁴ + vn⁴)/2)^(1/4) = (1 + η/2 + η²/8)vn + O[η³],
implying ∂vᵗ σ⁻¹ ∂η = 0, which is useful given the (vP,ε2,δ2)-geometry relationship to statistical independence shown above. Extending this reasoning, it is useful to design ζ so that
∂ζᵗ σ⁻¹ ∂v = 0 = ∂ζᵗ σ⁻¹ ∂η, (Eq. 38)
by assigning the cross-product covector
∂ζ := (1+2δ2)^(3/2) σ(∂vn × ∂η) = (-vP/c², 1+2ε2, 1+2δ2)ᵗ, (Eq. 39)
e.g.
ζ = -vP²/2c² + ((v/vP)⁴ - 1)/2 (Eq. 40)
= -vP²/2c² + ((1+2ε2)² + (1+2δ2)²)/4 - 1/2
= -v_r²/2c² + ε_r(1+ε_r) + δ_r(1+δ_r)
- c⁻²v_r(vP - v_r) + (1+2ε_r)(ε2 - ε_r) + (1+2δ_r)(δ2 - δ_r) + R
= ζ_r + (J13, J23, J33)(vP - v_r, ε2 - ε_r, δ2 - δ_r)ᵗ + R.
For this ζ,
det J = (vP²/c² + (1+2ε2)² + (1+2δ2)²)/(1+2δ2)^(3/2) > 0. (Eq. 41)
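The (vn, η, ζ) re-parameterization of Eqs. 20, 21, 37 and 40 can be checked numerically; the sketch below (with hypothetical property values) also verifies det J > 0 from Eq. 41 and the metric orthogonality of the ambiguity covector asserted around Eqs. 38-39:

```python
import math

# Hypothetical TTI property values and reference speed c.
vP, eps2, delta2, c = 3.0, 0.20, 0.10, 3.5
A, B = 1.0 + 2.0 * eps2, 1.0 + 2.0 * delta2

vn = math.sqrt(B) * vP                                     # Eq. 20
eta = (eps2 - delta2) / B                                  # Eq. 21
v = (1.0 + 2.0 * eta * (1.0 + eta)) ** 0.25 * vn           # Eq. 37
zeta = -vP**2 / (2 * c**2) + ((v / vP) ** 4 - 1.0) / 2.0   # Eq. 40

# Eq. 37 equivalent closed form in (vP, eps2, delta2):
v_alt = ((A**2 + B**2) / 2.0) ** 0.25 * vP
assert abs(v - v_alt) < 1e-9

# Gradient covectors w.r.t. (vP, eps2, delta2):
grad_vn = (math.sqrt(B), 0.0, vP / math.sqrt(B))           # Eq. 33
grad_eta = (0.0, 1.0 / B, -A / B**2)                       # Eq. 34
grad_zeta = (-vP / c**2, A, B)                             # Eq. 39
sigma_inv = (c**2, 1.0, 1.0)                               # inverse of Eq. 35

def dot_metric(x, y):
    return sum(m * xi * yi for m, xi, yi in zip(sigma_inv, x, y))

# Metric orthogonality of the ambiguity covector (Eqs. 38-39):
print(dot_metric(grad_zeta, grad_vn), dot_metric(grad_zeta, grad_eta))

detJ = (vP**2 / c**2 + A**2 + B**2) / B**1.5               # Eq. 41
assert detJ > 0.0
```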
[0069] Fig. 4 shows various isosurfaces of vn, η and ζ, and the curves of intersection of these surfaces. The following observations are relevant.
[0070] Fig. 4 shows isosurfaces in TTI property (vP,ε2,δ2)-space of normal-moveout velocity vn → c (404) and its intersection curve (402) with anellipticity η = 0 (406), which orthogonally intersects 3 surfaces of the new ambiguity property ζ → -1, 0, 1 (408) along 3 curves (410). The ζ-surfaces also orthogonally intersect the red vn-surface (404) along the 3 black curves (412). Therefore the orange curve (402) orthogonally intersects the black (412) and green (410) curves, and quantified ambiguity is proportional to that curve's arc length, which is proportional to ζ.
• The curve of fixed (vn,η) intersects the other curves at right angles too.
• If vn and η are well resolved (have small variance), it is likely that ζ is very poorly resolved; in this sense ζ may be referred to as "the" ambiguity.
• Also, ζ is proportional to arc length along the orange curve (402), i.e. which orange curve we are on may be well resolved, but where we are on that curve is ambiguous.
• Also note that for weak anisotropy and vP ≪ c, ζ ≈ ε2 + δ2, which is orthogonal to η ≈ ε2 - δ2.
[0071] To end this section, note that ∂ζᵗ σ⁻¹ ∂vn = 0 too. Having shown that geometry in (vP,ε2,δ2)-space is statistically relevant, the (vn,η,ζ) parameters are optimally independent.
c) If vn and η are known exactly, var[vn] = var[η] = 0, then the marginal and conditional probability densities combine as
β_x[vn,η,ζ] = Δ[vn - vn′] Δ[η - η′] β[ζ | vn′, η′]. (Eq. 42)
[0072] In this case the probability density of (vP,ε2,δ2) is weighted-Dirac along the curve defined by the intersection of the vn and η surfaces, i.e. zero off that curve. Now consider Gaussian (vP,ε2,δ2), for which
β[vP,ε2,δ2] = N[vP,ε2,δ2; ⟨(vP,ε2,δ2)⟩, C0] (Eq. 43)
:= (det 2πC0)^(-1/2) exp[-(vP-⟨vP⟩, ε2-⟨ε2⟩, δ2-⟨δ2⟩)C0⁻¹(vP-⟨vP⟩, ε2-⟨ε2⟩, δ2-⟨δ2⟩)ᵗ/2].
The black ellipse in Fig. 2 represents 2 of the eigenvalues and eigenvectors of C0. Letting J2 denote the first 2 columns of J, the above linearized constraints (limit of coincident blue lines in Fig. 2)
J2ᵗ(vP - v_r, ε2 - ε_r, δ2 - δ_r)ᵗ = (vn - vn_r, η - η_r)ᵗ (Eq. 44)
provide a classical result using a C0-weighted right J2ᵗ-inverse:
J2† := C0J2(J2ᵗC0J2)⁻¹. (Eq. 45)
[0073] As its thickness goes to zero, the ellipse (502) in Fig. 5 represents 2 of the eigenvalues and eigenvectors of the reduced covariance
C_< := C0 - J2†J2ᵗC0 ≤ C0 (Eq. 46)
(Löwner precedence), i.e. increased precision and confidence. Note that J2ᵗC_< = 0, infinite precision in the 2 constraint directions (J2 columns); thus (vP-⟨vP⟩, ε2-⟨ε2⟩, δ2-⟨δ2⟩)ᵗ is to be in null[J2ᵗ], i.e. parallel to the vector σ⁻¹∂ζ. Taking ((v_r/c)² + (1+2ε_r)² + (1+2δ_r)²)var[v] as the nonzero eigenvalue of C_<, the full rank-1 covariance can be written
C_< = var[v] u uᵗ, u := (-1, (1+2ε_r)/v_r, (1+2δ_r)/v_r)ᵗ, (Eq. 47)
or explicitly,
C_<,v,v = var[v], C_<,ε,ε = ((1+2ε_r)/v_r)²var[v], C_<,δ,δ = ((1+2δ_r)/v_r)²var[v],
C_<,ε,δ = ((1+2ε_r)(1+2δ_r)/v_r²)var[v], C_<,δ,v = -((1+2δ_r)/v_r)var[v], C_<,ε,v = -((1+2ε_r)/v_r)var[v].
(Eq. 48)
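A small numeric check (hypothetical reference values) that the explicit rank-1 covariance of Eqs. 47-48 is annihilated by the two constraint covectors, which is the statement J2ᵗC_< = 0:

```python
import math

vr, eps_r, delta_r, var_v = 3.0, 0.20, 0.10, 0.01   # hypothetical reference
A, B = 1.0 + 2.0 * eps_r, 1.0 + 2.0 * delta_r

# Rank-1 covariance C_< = var[v] u u^t with u = (-1, A/vr, B/vr)^t (Eqs. 47-48).
u = (-1.0, A / vr, B / vr)
C = [[var_v * ui * uj for uj in u] for ui in u]

# Constraint covectors (columns of J2): d(vn) and d(eta) w.r.t. (vP, eps2, delta2).
grad_vn = (math.sqrt(B), 0.0, vr / math.sqrt(B))
grad_eta = (0.0, 1.0 / B, -A / B**2)

def annihilation(g):
    # Components of g^t C; all should vanish.
    return [sum(g[i] * C[i][j] for i in range(3)) for j in range(3)]

print(annihilation(grad_vn), annihilation(grad_eta))
```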
[0074] Alternately, to obtain a rank-3 transformation of C0 with prescribed var[vn] > 0, var[η] > 0 and var[ζ] > 0, the approach in section I.a applies, and is especially useful when
var[ζ] ≫ max[var[vn]/c², var[η]], (Eq. 29b)
corresponding to the ellipse 502 (in Fig. 5) without its thickness going to zero. Fig. 5 shows a schematic diagram of a re-parameterized covariance.
d) The foregoing assumes Gaussian distributions, which are unbounded. When appropriate, bounds can be applied in the posterior by Gaussian anamorphosis at each grid point. That is, each single vector entry a_j with non-Gaussian probability density dΠ/da_j is transformed to a new variable x_j with a Gaussian probability density, by x_j = Π_G⁻¹[Π[a_j]], where
Π_G[x] := ∫_{-∞}^{x} e^(-t²/2) dt/(2π)^(1/2)
is the Gaussian cumulative distribution function (CDF). For a mutilated Gaussian, zero below a_j< and above a_j>, then
β[a_j] = G[a_j]/(Π_G[a_j>] - Π_G[a_j<]) (a_j< < a_j < a_j>), 0 (else), (Eq. 49)
where G denotes the standard Gaussian density. Therefore
x_j = -∞ (a_j ≤ a_j<), (Eq. 50)
Π_G⁻¹[(Π_G[a_j] - Π_G[a_j<])/(Π_G[a_j>] - Π_G[a_j<])] (a_j< < a_j < a_j>),
∞ (a_j> ≤ a_j),
which can be efficiently computed in parallel, e.g. by Taylor series (away from a_j< and a_j>). To include other entries of a than a_j, iteration can be used, by generalizing the technique of Rosenblatt (1952, Ann. Math. Statist. 23:470-472) from selections of conditionally distributed entries to invertible linear transformations of those selections.
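Eqs. 49-50 can be sketched as follows; this illustrative implementation builds the Gaussian CDF from math.erf and inverts it by bisection rather than by the Taylor-series evaluation mentioned in the text:

```python
import math

def Pi_G(x):
    # Gaussian cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Pi_G_inv(p, lo=-40.0, hi=40.0):
    # Inverse Gaussian CDF by bisection (illustrative, not optimized).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Pi_G(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def anamorphosis(a, a_lo, a_hi):
    # Eq. 50: map a bounded ("mutilated Gaussian") variable on (a_lo, a_hi)
    # to an unbounded Gaussian variable x.
    if a <= a_lo:
        return -math.inf
    if a >= a_hi:
        return math.inf
    p = (Pi_G(a) - Pi_G(a_lo)) / (Pi_G(a_hi) - Pi_G(a_lo))
    return Pi_G_inv(p)

# With symmetric bounds, the center of the interval maps to 0.
x_mid = anamorphosis(0.0, -1.0, 1.0)
print(x_mid, anamorphosis(-1.0, -1.0, 1.0), anamorphosis(2.0, -1.0, 1.0))
```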
e) Transformation can also be designed using geometric insight. For example, a typical rock-physics β[vP,ε2,δ2] does not appear to be Gaussian, but rather to be shaped approximately like a parabola P in (vP,ε2,δ2)-space, characterized by its vertex point a0 with tangent and axis unit vectors i0 and j0, and focus point a0 + f j0. Until now, this density was approximated as a Gaussian mixture, each of whose terms' principal axes was approximately tangent to a point on P. This may or may not be a convenient representation of such a β[vP,ε2,δ2] at a single domain location, but it is dubious to extrapolate Gaussian-mixture parameters throughout the domain. It seems more efficient to extrapolate a geometric representation such as the parameters (i0, j0, f, a0). An arbitrary point on P is
a_s = a0 + s i0 + (s²/4f) j0,
with unit tangent i_s := (i0 + (s/2f) j0)/(1 + (s/2f)²)^(1/2), in-plane unit normal j_s and arc length ∫_0^s (1 + (s′/2f)²)^(1/2) ds′. An arbitrary (vP,ε2,δ2) can be orthogonally projected to
a_s → arg min_s ‖a_s - (vP,ε2,δ2)‖, (Eq. 51)
as seen in Fig. 7, uniquely for non-axial (vP,ε2,δ2). Conversely, given a trivariate probability density in (s, s′, s″), non-Gaussian random (vP,ε2,δ2) may be synthesized as
(vP,ε2,δ2) = a_s + s′ k + s″ j_s, (Eq. 52)
where k := i0 × j0, as seen in Fig. 8. Fig. 7 is a graph of three-dimensional (vP,ε2,δ2) Gaussian random points, and Fig. 8 is a graph of three-dimensional (vP,ε2,δ2) points with Gaussian distribution in curvilinear coordinates.
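Eqs. 51-52 can be sketched with a brute-force scan; the parabola parameters (i0, j0, f, a0), the grid, and the in-plane normal construction are illustrative assumptions rather than the source's implementation:

```python
import math

# Illustrative parabola P in 3D: vertex a0, tangent i0, axis j0, focal length f.
a0 = (3.0, 0.1, 0.05)
i0 = (1.0, 0.0, 0.0)
j0 = (0.0, 1.0, 0.0)
k = (0.0, 0.0, 1.0)          # k = i0 x j0, out-of-plane unit vector
f = 2.0

def point_on_P(s):
    # a_s = a0 + s i0 + (s^2/4f) j0
    return tuple(a0[m] + s * i0[m] + (s * s / (4 * f)) * j0[m] for m in range(3))

def project(q, s_grid):
    # Eq. 51: orthogonal projection by scanning s for the nearest a_s.
    def dist2(s):
        p = point_on_P(s)
        return sum((p[m] - q[m]) ** 2 for m in range(3))
    return min(s_grid, key=dist2)

s_grid = [i / 1000 - 5.0 for i in range(10001)]

# A point exactly on P should project back to its own s value.
s_true = 1.25
s_hat = project(point_on_P(s_true), s_grid)
print(s_hat)

def synthesize(s, sp, spp):
    # Eq. 52: build a point from curvilinear coordinates (s, s', s'').
    t = (i0[0] + (s / (2 * f)) * j0[0], i0[1] + (s / (2 * f)) * j0[1], 0.0)
    n = math.hypot(t[0], t[1])
    i_s = (t[0] / n, t[1] / n, 0.0)       # unit tangent
    j_s = (-i_s[1], i_s[0], 0.0)          # in-plane unit normal
    p = point_on_P(s)
    return tuple(p[m] + sp * k[m] + spp * j_s[m] for m in range(3))

print(synthesize(1.25, 0.3, -0.2))
```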
f) There are other ways to modify a C0 estimate.
• For example, the formula for C^(1/2) above may be used in conjunction with initial or partial L and R information to refine C0 into a new prior covariance, for a subsequent eigenvector analysis.
• More generally, one can update C by the ensemble Kalman filter. Given samples a := a0 + C0^(1/2)ξ_a and z̃ := z + R^(1/2)ξ_z of the prior model and depth updates, then in the system
a1 = a + C0 z1 = a + C0Lᵗ z2, where (R + LC0Lᵗ)z2 = z̃ - La and z1 = Lᵗz2, (Eq. 53)
a1 can replace samples ⟨a⟩ + C^(1/2)ξ of the posterior. Note that just the computationally easier of the z1 or z2 subsystems is to be solved. This is especially useful when new data z are acquired frequently, e.g. in seismic guided drilling. Another advantage is that L can be replaced in these equations by a potentially nonlinear function g (e.g. full ray tracing and migration, or an emulator of these), simply by replacing the actions of C0Lᵗ and LC0Lᵗ everywhere by the action (using vector inner products) of multiplying by ensemble estimates of the covariances of a with g[a] and of g[a] with itself. When the a statistics use a simplex of dim[a]+1 samples whose discrete statistics match desired continuum statistics, the g[a] statistics are called the unscented transform.
• One can also seek a very efficient representation of a or z, in terms of compression of locally high spatial resolution, by using local basis functions such as mesh-free wavelets.
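The ensemble update of Eq. 53 can be sketched in the scalar case (hypothetical numbers), where the innovation subsystem reduces to a division; the perturbed-observation ensemble reproduces, approximately, the Gaussian posterior mean and variance of Eqs. 10-11:

```python
import random
import statistics

random.seed(2)
a0, C0 = 1.0, 2.0      # prior mean and covariance (scalar, illustrative)
L, R, z = 1.5, 0.5, 3.0

N = 200_000
a1 = []
for _ in range(N):
    a = a0 + C0 ** 0.5 * random.gauss(0.0, 1.0)  # prior sample
    zt = z + R ** 0.5 * random.gauss(0.0, 1.0)   # perturbed observation
    z2 = (zt - L * a) / (R + L * C0 * L)         # innovation subsystem of Eq. 53
    a1.append(a + C0 * L * z2)                   # a1 = a + C0 L^t z2

post_mean = statistics.fmean(a1)
post_var = statistics.pvariance(a1)

# Exact Gaussian posterior (Eqs. 10-11) for comparison.
C = 1.0 / (L * L / R + 1.0 / C0)
mean_exact = a0 + C * L / R * (z - L * a0)
print(post_mean, mean_exact, post_var, C)
```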
We end this section with DA and VOI. Two generalizations of the classical scheme are to let Y_d[a] become a function Y[d,a] of a continuum of decision parameters d, and to let β[a] also depend on d; e.g. d could be how much to invest in estimating β. Then classical decision trees are replaced by paths along mathematical surfaces.
II. Property Re-Parameterization
[0075] For orthorhombic earth models we introduce a re-parameterization to enforce the prior information of positive-definite stiffness, as follows.
a) The 6×6 Voigt matrix representation
(a_1111, a_1122, a_1133; a_1122, a_2222, a_2233; a_1133, a_2233, a_3333) ⊕ a_2323 ⊕ a_1313 ⊕ a_1212 (Eq. 54)
of a has to be positive-definite. Factoring out the 6×6 diagonal positive-definite matrix vP,1 ⊕ vP,2 ⊕ vP ⊕ vS,2 ⊕ vS,1 ⊕ vS on the left and right, and introducing the 3 new angle properties
0 < β_i := arccos[a_jjkk/(vP,j vP,k)] < π, j := (i mod 3)+1, k := (j mod 3)+1, (Eq. 55)
yields
(1, cos β3, cos β2; cos β3, 1, cos β1; cos β2, cos β1, 1) ⊕ 1 ⊕ 1 ⊕ 1. (Eq. 56)
The determinant of this matrix is
D := sin²β3 - cos²β1 - cos²β2 + 2 cos β1 cos β2 cos β3
= 4 sin[(-β1+β2+β3)/2] sin[(β1-β2+β3)/2] sin[(β1+β2-β3)/2] sin[(β1+β2+β3)/2] ≤ 1,
(Eq. 57)
and its Cholesky factor is
(1, 0, 0; cos β3, sin β3, 0; cos β2, (cos β1 - cos β2 cos β3)/sin β3, D^(1/2)/sin β3) ⊕ 1 ⊕ 1 ⊕ 1. (Eq. 58)
The eigenvalues of this are its 6 diagonal entries 1, sin β3, D^(1/2)/sin β3, 1, 1, 1, which correspond to the 6 eigenvectors
((sin β3 - D^(1/2))(1 - sin β3), (sin β3 - D^(1/2))cos β3, cos β1 cos β3 - cos β2(1 - sin β3), 0, 0, 0)ᵗ, (Eq. 59)
(0, sin²β3 - D^(1/2), cos β1 - cos β2 cos β3, 0, 0, 0)ᵗ,
(0, 0, 1, 0, 0, 0)ᵗ,
(0, 0, 0, 1, 0, 0)ᵗ,
(0, 0, 0, 0, 1, 0)ᵗ
and
(0, 0, 0, 0, 0, 1)ᵗ,
respectively. The inverse Cholesky factor is
(1, 0, 0; -cos β3/sin β3, 1/sin β3, 0; (cos β1 cos β3 - cos β2)/(D^(1/2) sin β3), (cos β2 cos β3 - cos β1)/(D^(1/2) sin β3), sin β3/D^(1/2)) ⊕ 1 ⊕ 1 ⊕ 1, (Eq. 60)
whence the Voigt compliance matrix can be expressed analytically.
b) The constraint D > 0 is equivalent to β membership in the tetrahedron
{β: β1+β2+β3 < 2π ∧ -β1+β2+β3 > 0 ∧ β1-β2+β3 > 0 ∧ β1+β2-β3 > 0} (Eq. 61)
that has 4 vertices (0,0,0), (0,π,π), (π,0,π) and (π,π,0), each opposite to an outward normal
(1,1,1)ᵗ/3^(1/2), (1,-1,-1)ᵗ/3^(1/2), (-1,1,-1)ᵗ/3^(1/2) and (-1,-1,1)ᵗ/3^(1/2) (Eq. 62)
in β-space (Fig. 6). Fig. 6 depicts a tetrahedral region 602 of orthorhombic stiffness components enforcing positive-definiteness. In Fig. 6, it can be straightforward to map any β outside this tetrahedron (D < 0) to the nearest β on its surface (D = 0), which is the plane containing the 3 nearest vertices. This map minimally transforms an a that is not positive-definite to an a that is.
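A numeric sketch (with hypothetical angles, not values from the source) of Eqs. 57 and 61: the two expressions for the determinant D agree, and D > 0 exactly when β lies inside the tetrahedron:

```python
import math

def D_direct(b1, b2, b3):
    # Eq. 57, first form.
    return (math.sin(b3) ** 2 - math.cos(b1) ** 2 - math.cos(b2) ** 2
            + 2.0 * math.cos(b1) * math.cos(b2) * math.cos(b3))

def D_product(b1, b2, b3):
    # Eq. 57, product-of-sines form.
    return 4.0 * (math.sin((-b1 + b2 + b3) / 2) * math.sin((b1 - b2 + b3) / 2)
                  * math.sin((b1 + b2 - b3) / 2) * math.sin((b1 + b2 + b3) / 2))

def inside_tetrahedron(b1, b2, b3):
    # Eq. 61 membership test.
    return (b1 + b2 + b3 < 2 * math.pi and -b1 + b2 + b3 > 0
            and b1 - b2 + b3 > 0 and b1 + b2 - b3 > 0)

inside = (1.2, 1.5, 1.7)     # hypothetical beta angles, radians
outside = (0.1, 3.0, 0.2)    # violates b1 - b2 + b3 > 0

for b in (inside, outside):
    print(b, D_direct(*b), D_product(*b), inside_tetrahedron(*b))
```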
c) The Thomsen properties can be regained from the angle-and-speed parameterization via Eq. 63.
d) Note that to express D in terms of δ1, δ2, δ3 and the velocities in these expressions is straightforward but so complicated that δ1, δ2, δ3 are almost useless for strictly enforcing positive-definiteness. In contrast, given positive vP,1, vP,2, vP, vS,2, vS,1, vS it is straightforward and simple to enforce positive-definiteness, and to repair a case of D < 0, using β1, β2, β3.
e) Also note that Gaussian distributions cover unbounded domains and so are inappropriate for the traditional set
vP > 0, vS > 0, ε1 > -1/2, ε2 > -1/2, γ1 > -1/2, γ2 > -1/2, δ1 > -1/2, δ2 > -1/2, δ3 > -1/2,
(Eq. 64)
because of their (pseudoacoustically approximately, for δ1, δ2, δ3) bounded values, but are not inappropriate for, for example,
ln[vP/c], ln[vP,1/c], ln[vP,2/c], ln[vS/c], ln[vS,1/c], ln[vS,2/c], -cot[β1], -cot[β2], -cot[β3],
(Eq. 65)
which are each appropriately unbounded, before applying the tetrahedral constraint on β1, β2, β3.
f) The tetrahedral constraint on β1, β2, β3 may be easily enforced using barycentric coordinates, at the cost of introducing a redundant parameter (β1+β2+β3)/2.
III. Building Up Decision Analysis and Guidance Framework
[0076] Another aspect of this disclosure involves building up the DA and automatic guidance framework described above, using several related techniques to compute the 3 fundamental mathematical structures required for earth-model building, namely:
• the data-residual standardizing operator R^(-1/2);
• the earth-model de-standardizing operator C0^(1/2); and
• the design matrix L that estimates the data z from the model a.
[0077] Using the eigenvector analysis as discussed above, we can define the column-space left-singular vectors U := R^(-1/2)LC0^(1/2)VΛ^(-1/2), i.e. the singular-value decomposition (SVD) UΛ^(1/2)Vᵗ = R^(-1/2)LC0^(1/2) provides the jointly standardized design matrix, factoring out the standardizing operations. From the L definition, the previous expression also equals
R^(-1/2)LC0^(1/2) = R^(-1/2)(∂z/∂aᵗ)C0^(1/2) = ∂(R^(-1/2)z)/∂(C0^(-1/2)a)ᵗ. (Eq. 66)
[0078] In this sense, SVD is an analysis of the Fréchet derivative, that is, the mutual sensitivity of the "raw" preconditioned depth R^(-1/2)z and model update C0^(-1/2)a that each have covariance I. The case when either of R^(1/2) or C0^(1/2) is replaced by a singular matrix may be dealt with using the restricted SVD of De Moor and Golub (1991, SIAM J. Matrix Anal. Appl. 12), or similar decompositions.
a) To begin constructing R^(-1/2), note that the data z can be de-noised by initially assuming it contains just noise, then iteratively projecting the currently putative noise part onto compressing basis functions such as orthogonal wavelets, and re-defining the next putative noise part to be the residual of the z reconstruction from the part of z that was successfully compressed. This algorithm converges and results in 2 orthogonal parts of z, namely a signal that greatly resembles the original z, and a noise part that has a much smaller norm than the signal does.
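The iterative de-noising of a) can be sketched with a toy compressing basis; the Haar-like transform, threshold and test signal below are illustrative assumptions, not the production orthogonal wavelets:

```python
import math
import random

def haar(v):
    # One full orthonormal Haar transform of a length-2^n list.
    out = list(v)
    n = len(out)
    while n > 1:
        half = n // 2
        a = [(out[2 * i] + out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        d = [(out[2 * i] - out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        out[:n] = a + d
        n = half
    return out

def ihaar(c):
    # Inverse of haar().
    out = list(c)
    n = 1
    while n < len(out):
        a, d = out[:n], out[n:2 * n]
        merged = []
        for ai, di in zip(a, d):
            merged += [(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5]
        out[:2 * n] = merged
        n *= 2
    return out

def compress(v, thresh):
    # Keep only coefficients with magnitude >= thresh (the "compressible" part).
    return ihaar([ci if abs(ci) >= thresh else 0.0 for ci in haar(v)])

# Toy "data": smooth signal plus small noise (illustrative).
random.seed(3)
z = [math.sin(2 * math.pi * i / 64) + 0.05 * random.gauss(0, 1) for i in range(64)]

signal = [0.0] * len(z)      # start by assuming z is all noise
noise = list(z)
for _ in range(20):
    keep = compress(noise, 0.2)                       # compressible part of putative noise
    signal = [s + k for s, k in zip(signal, keep)]    # move it into the signal
    noise = [zi - si for zi, si in zip(z, signal)]    # residual is the next putative noise

print(sum(n * n for n in noise) ** 0.5, sum(s * s for s in signal) ** 0.5)
```

The loop stabilizes once no putative-noise coefficient exceeds the threshold, leaving two (numerically) orthogonal parts whose sum reconstructs z.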
b) Once the data are de-noised, R^(-1/2) can be parameterized using the leaky-derivative method, which assumes that ρ_ij := R_ij/(R_ii R_jj)^(1/2) are the entries of a tensor product of exponential correlation matrices, so that R^(-1/2) is a tensor product of bidiagonal matrices. The R_ii can be estimated using the ergodic hypothesis over z coordinates (independent variables) other than depth and offset. The 2 length scales for the exponentials in the depth and offset coordinates can be estimated by iteratively finding a best linear fit to the ergodic estimate of ln[1/ρ_ij] over coordinate separations where it is real-valued, then finding the unique minimum of a positive function of the length-scale ratio.
c) Another use of R is in the case where the posterior uncertainty C has to be constrained to some acceptable range. In this case the residual-covariance scale λ ∝ tr[R]^(1/2) may be construed as a free inflation parameter to be tuned to keep C within that range. In practice, given an initial reference model m_r and the mean plus random update ⟨a⟩ + C^(1/2)ξ, one can find λ, e.g. by constraining the objective function not to randomly exceed its departure from its reference value Φ[0]:
0 ≤ Φ[0] - ⟨Φ[⟨a⟩+C^(1/2)ξ; λ]⟩ = Φ[0] - (⟨Φ[⟨a⟩+C^(1/2)ξ; λ=1]⟩ - k)/λ² - k, (Eq. 67)
where k = rk[Λ] above, and the equation follows from scaling by λ. Zeroing this implies
λ = ((⟨Φ[⟨a⟩+C^(1/2)ξ; 1]⟩ - k)/(Φ[0] - k))^(1/2). (Eq. 68)
Considering usual statistics of sums of norms of Gaussian random vectors yields
⟨Φ[⟨a⟩+C^(1/2)ξ; λ]⟩ = Φ[⟨a⟩; λ] + k. (Eq. 69)
d) The eigenvector analysis enables 2 useful diagnostics of the posterior uncertainty C, as manifest in the model precision-matrix increase C⁻¹ - C0⁻¹ = LᵗR⁻¹L and in the model and data resolution matrices I - CC0⁻¹ = CLᵗR⁻¹L and LCLᵗR⁻¹.
• Partitioning the design matrix across property blocks, e.g. L = (Lv, Lε, Lδ, ...), the diagonal of J_a,a′ := Laᵗ R⁻¹ La′ is an information density in the model space, in the sense that
daᵗ J_a,a′ da′ (Eq. 70)
is the change in relative entropy of the probability density of the residual (in principle, everywhere) when property vectors a and a′ are perturbed by da and da′.
• Also the diagonal of C0,a^(-1/2) C Laᵗ R⁻¹ La′ C C0,a′^(-1/2) is an information density, in the sense that the corresponding bilinear form
(C0,a^(-1/2) da)ᵗ C Laᵗ R⁻¹ La′ C (C0,a′^(-1/2) da′) (Eq. 71)
is the change in relative entropy of the probability density of the posterior.
• Similarly, R^(-1/2)LCLᵗR^(-1/2) provides an information density of the posterior, represented in data space.
To generalize model and data resolutions, and hence information densities, to non-linear problems, consider a more general, non-quadratic objective function
Φ[a] := ‖R^(-1/2)(z - g[a])‖² + ‖C0^(-1/2)(a - a0)‖². (Eq. 72)
Its gradient vector is
∂Φ/∂a = -(∂g/∂aᵗ)ᵗ R⁻¹(z - g[a]) + C0⁻¹(a - a0), (Eq. 73)
the matrix factor ∂g/∂aᵗ evaluated at a reference model being L. The Hessian matrix is
H[a] := ∂²Φ/∂a∂aᵗ = -(R⁻¹(z - g[a]))_i ∂²g_i/∂a∂aᵗ + (∂g/∂aᵗ)ᵗ R⁻¹ (∂g/∂aᵗ) + C0⁻¹, (Eq. 74)
summed over the datum index i in the first term.
[0079] The first term is what is conventionally dropped. It employs (at least implicitly) a separate model-space Hessian matrix ∂²g_i/∂a∂aᵗ for each datum prediction g_i. The sum of the 2nd and 3rd terms is the familiar posterior precision matrix C⁻¹, so the first term is a higher-order, z-dependent contribution to that C⁻¹. Newton's method prescribes update-vector iterates
a_{j+1} = a_j - b_j H[a_j]⁻¹ (∂Φ/∂a)[a_j], converging to some F[z], (Eq. 75)
with each b_j provided, e.g. by line search. Note that without the Hessian term in H[a] we would have F[z] linear in z. We introduce a new nonlinear model resolution matrix
J_a := ∂(F∘g)/∂aᵗ (Eq. 76)
that is independent of z if we neglect the ∂²g_i/∂a∂aᵗ terms, and is a nonlinear analog of CLᵗR⁻¹L above. In the limit that F∘g[a] = a we regain J_a = I. Similarly, we introduce a new nonlinear data resolution matrix
J_z := ∂(g∘F)/∂zᵗ (Eq. 77)
that is an analog of LCLᵗR⁻¹ above. In the limit that g∘F[d] = d we regain J_z = I. Observe the following.
• Generalizing the usual case, I - J_a is effectively the posterior (data-conditional) update covariance normalized by the prior covariance, so its entries represent the reduction in update-component variance and correlation.
• Similarly, I - J_z is effectively the precision matrix of the data (the analog of R(R + LC0Lᵗ)⁻¹) marginalized over all C0^(-1/2)-normalized updates.
• One could also consider J_a and J_z at arbitrary update values a, by applying the factors ∂F/∂zᵗ and ∂g/∂aᵗ in the appropriate order.
[0080] A challenge of generalizations such as these is their high dimensionality. For the system user's workflow, the dimensionality can be reduced by exploring a few hyperparameters θ, e.g. filter length scale, damping parameters, or update bounds, and studying their effect on Φ[a] graphically. To perform DA, we can visualize a utility Y[θ, a[θ]] → Φ[a[θ]; θ] as a curve, surface or volume, depending on θ having dimension 1, 2 or 3. The results of such explorations can be easily summarized and automatically recalled to guide future EMB decisions as described above.
[0081] Besides characterizing the random updates ⟨a⟩ + C^(1/2)ξ w.r.t. information or resolution, in some applications additional criteria may be used to rank them, e.g. depth and intensity of a feature such as velocity inversion. This generalizes the univariate CDF to higher dimensions, in that a realization can be assigned a unique probability such as P10, P50, P90 w.r.t. these ranks.
e) To assemble the L matrix itself one has to compute the travel-time sensitivities
∂t_k/∂a_j = (a path-length variation term) + (a Hamiltonian sensitivity term). (Eq. 78)
The first term, depending on the variation of the total path length, is 0 if path k does not approach a location of aj, and otherwise has to be estimated numerically. The second term can be computed accurately by evolving the auxiliary Hamiltonian dynamics
d(∂r/∂a_j, ∂p/∂a_j)/dt = (∂/∂p, −∂/∂r) ∂H/∂a_j (Eq. 79)
= (∂/∂p, −∂/∂r)((∂H/∂p)·(∂p/∂a_j) + (∂H/∂a)::(∂a/∂a_j)).
Generally the 6D state (∂r/∂a_j, ∂p/∂a_j) stays at (0, 0) until the 3D trajectory r[t] enters a region where the local stiffness a∘r[t] is interpolated from a_j, and after r[t] departs that region the state is controlled by (∂/∂p, −∂/∂r)∂H/∂p.
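A minimal 1D illustration of evolving such an auxiliary sensitivity state alongside the trajectory, assuming a toy Hamiltonian H = ½·a·p² with a single scalar parameter a (the Hamiltonian and all numerical values are invented for this sketch):

```python
import numpy as np

def flow(a, r0, p0, t, nsteps=1000):
    """Integrate (r, p) together with the sensitivity (dr/da, dp/da)
    for the toy Hamiltonian H = 0.5 * a * p**2.

    Hamilton's equations:   dr/dt = dH/dp = a*p,   dp/dt = -dH/dr = 0.
    Auxiliary dynamics (cf. Eq. 79):
        d(dr/da)/dt = p + a*(dp/da),   d(dp/da)/dt = 0.
    """
    r, p, dr_da, dp_da = r0, p0, 0.0, 0.0
    h = t / nsteps
    for _ in range(nsteps):        # forward Euler (exact here: the RHS is constant)
        r += h * a * p
        dr_da += h * (p + a * dp_da)
    return r, dr_da

a, r0, p0, T = 1.3, 0.0, 2.0, 1.0
r, dr_da = flow(a, r0, p0, T)

# Finite-difference check of the evolved sensitivity:
eps = 1e-6
r_plus, _ = flow(a + eps, r0, p0, T)
print(dr_da, (r_plus - r) / eps)   # both ≈ p0*T = 2.0
```

In a real tomography setting the state is 6D, the right-hand side varies along the ray, and a higher-order integrator would replace the Euler loop; the bookkeeping, however, is the same.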
f) L also depends on the dip-field axis e, which can be estimated as the leading eigenvector of the structure tensor ∇ψ ⊗ ∇ψ, smoothed with some scale length s, where ψ is any spatial scalar that characterizes the reflectors. To converge to an e estimate at some fine s, start at the coarsest acceptable s and iterate to finer s by smoothing, and finding a new e from a symmetrized ∇ψ ⊗ ∇ψ that had one factor projected onto the e computed at the coarser s.
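A sketch of the structure-tensor axis estimate in Python, using a synthetic plane-wave ψ with known axis e and, for simplicity, smoothing by averaging over the whole window rather than with a scale-s kernel (all values are illustrative):

```python
import numpy as np

# Synthetic "reflector" scalar: plane waves with known normal e_true.
theta = 0.3
e_true = np.array([np.cos(theta), np.sin(theta)])
x, z = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
psi = np.sin(0.2 * (e_true[0] * x + e_true[1] * z))

gx, gz = np.gradient(psi)          # components of grad(psi)
# Structure tensor, here smoothed by averaging over the whole window
# (in practice a Gaussian of scale length s would be used):
T = np.array([[np.mean(gx * gx), np.mean(gx * gz)],
              [np.mean(gx * gz), np.mean(gz * gz)]])
w, V = np.linalg.eigh(T)
e_hat = V[:, -1]                   # leading eigenvector = axis estimate
e_hat *= np.sign(e_hat @ e_true)   # resolve the sign ambiguity
print(np.round(e_hat, 3), np.round(e_true, 3))
```

The leading eigenvector recovers e_true up to finite-difference error; the coarse-to-fine iteration described above would repeat this with progressively smaller smoothing scales.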
g) To check that the L linearization at reference model a_r is valid, and determine how close a candidate a is to a yet-unknown optimum ⟨a⟩, note that

Φ_a − Φ_{a_r} = Ψ (a − a_r) (Eq. 80)

exactly, where

Φ_a := Lᵀ R⁻¹ (L a − z) + C₀⁻¹ (a − a₀) (Eq. 81)

is a model-space gradient covector. Thus a and a_r are close to ⟨a⟩: in a global sense if Φ_a is small in norm (Eq. 82); in a local sense if the component Φ_{a,j} is small (Eq. 83) for the location and property associated with index j; and in a modal sense if Vmᵀ Φ_a is small (Eq. 84) for eigenvector Vm (column m of V). Also note that the two terms of Φ_a are useful diagnostics to quantify the model-space directions toward which the residual L a − z and the prior-mean deviation a − a₀ push the posterior.
h) Turning finally to the method for solving linear systems, in some cases it is desirable to update part of a using data from part of z, while retaining the other parts.
• One way to achieve this is by updating the expression above for the posterior covariance using low-rank updates represented in spatially localized orthogonal vectors.
• Another way is to use domain decomposition, wherein the retained subdomain contributes to the update only through a static r.h.s., and the updated subdomain requires solving a much smaller system involving just the subdomain boundary nodes and a Schur complement of the full system.
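The Schur-complement route can be illustrated on a small dense SPD system (random toy matrices; a real application would exploit sparsity): eliminating the retained block leaves a static right-hand-side contribution and a much smaller system for the updated block.

```python
import numpy as np

rng = np.random.default_rng(1)
nk, nu = 6, 3                               # retained / updated block sizes
A = rng.normal(size=(nk + nu, nk + nu))
A = A @ A.T + (nk + nu) * np.eye(nk + nu)   # SPD full system
b = rng.normal(size=nk + nu)

Akk, Aku = A[:nk, :nk], A[:nk, nk:]
Auk, Auu = A[nk:, :nk], A[nk:, nk:]
bk, bu = b[:nk], b[nk:]

# Eliminate the retained block: its contribution becomes a static r.h.s.,
# and the updated block solves a much smaller Schur-complement system.
S = Auu - Auk @ np.linalg.solve(Akk, Aku)   # Schur complement of Akk
rhs = bu - Auk @ np.linalg.solve(Akk, bk)   # static contribution of retained part
xu = np.linalg.solve(S, rhs)

# The reduced solve reproduces the updated part of the full solution:
assert np.allclose(xu, np.linalg.solve(A, b)[nk:])
```

The reduced system S is only nu-by-nu, so repeated updates of one subdomain avoid refactoring the full matrix.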
i) When solving the system using SVD, there are two issues to address that may be referred to as balancing problems.
• In TTI the update can be partitioned as a = (v_P, ε, δ). Hence it is useful to perform the SVD of the two blocks

R^(−1/2)(L_v, L_(ε,δ))(C_v^(1/2) ⊕ C_(ε,δ)^(1/2)) = (U_v, U_(ε,δ))(Λ_v ⊕ Λ_(ε,δ))(V_v ⊕ V_(ε,δ))ᵀ. (Eq. 85)

The ratios Λ_{v,i,i}/Λ_{(ε,δ),i,i} of the squared singular spectra indicate the relative balance of velocity vs anisotropy sensitivity to the data. Thus by right-multiplying C₀^(1/2) by (Λ_{v,0,0} I ⊕ Λ_{(ε,δ),0,0} I)^(−1/2) one expects to get a more balanced solution.
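The following Python sketch illustrates this balancing idea on toy blocks (identity priors, R = I, and the scale factors are invented): the leading singular values of the whitened velocity and anisotropy blocks are equalized by rescaling each prior square root.

```python
import numpy as np

rng = np.random.default_rng(2)
m, nv, na = 8, 3, 3
Lv = 10.0 * rng.normal(size=(m, nv))   # "velocity" block: strong sensitivity
La = 0.1 * rng.normal(size=(m, na))    # "anisotropy" block: weak sensitivity
Cv_h = np.eye(nv)                      # prior square roots C^(1/2), one per block
Ca_h = np.eye(na)

sv = np.linalg.svd(Lv @ Cv_h, compute_uv=False)
sa = np.linalg.svd(La @ Ca_h, compute_uv=False)
print(sv[0] / sa[0])                   # large ratio: velocity dominates the update

# Right-multiply each prior square root by the reciprocal leading singular value
# (the (Lambda_0,0)^(-1/2) rescaling of the squared spectra):
Cv_h2 = Cv_h / sv[0]
Ca_h2 = Ca_h / sa[0]
sv2 = np.linalg.svd(Lv @ Cv_h2, compute_uv=False)
sa2 = np.linalg.svd(La @ Ca_h2, compute_uv=False)
print(sv2[0] / sa2[0])                 # ≈ 1: balanced sensitivities
```

After rescaling, neither block's sensitivity dominates the whitened system, which is the intent of the right-multiplication described above.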
• Conversely, consider the case of partitioned data L = (L_sᵀ, L_cᵀ)ᵀ, e.g. from seismic and checkshot. Two useful SVDs are then

R_s^(−1/2) L_s C₀^(1/2) = U_s Σ_s V_sᵀ and R_c^(−1/2) L_c C₀^(1/2) = U_c Σ_c V_cᵀ. (Eq. 86)
That is, we may project away the redundant information between the data sources with the operation

(Eq. 87)
on the checkshot data. A similar utility is obtained by weighting the objective-function (Eq. 9) terms due to different data types each by its reciprocal residual degrees-of-freedom

1/tr(I_y − J_y), (Eq. 88)

where J_y := R_y^(−1/2) L_y (L_yᵀ R_y^(−1) L_y + C₀^(−1))⁻¹ L_yᵀ R_y^(−1/2) for data type y. Substituting the eigenvectors (Eq. 86) shows that these weights depend just on their respective singular-value matrix Σ_y, possibly including any damping or other scale factors in R_y or C₀. When Σ_y is known in truncated form or for other purposes, the weights can be computed by fitting an extended curve to the known part of the curve of Σ_{y,i,i} versus i. In practice, the U operations may be replaced by their definitions of the form

U := R^(−1/2) L C₀^(1/2) V Λ⁻¹.
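A sketch of the reciprocal residual-degrees-of-freedom weighting in Python, assuming a diagonal noise covariance R = r²I per data type (matrices are random toy inputs); the final check confirms that tr(I − J_y) depends only on the singular values of the whitened sensitivity:

```python
import numpy as np

def resid_dof_weight(L, r, C0):
    """Weight 1/tr(I - J_y) for a data type with noise covariance R = r**2 * I,
    where J_y = R^(-1/2) L (L^T R^(-1) L + C0^(-1))^(-1) L^T R^(-1/2)."""
    m = L.shape[0]
    W = (L / r) @ np.linalg.cholesky(C0)   # whitened sensitivity R^(-1/2) L C0^(1/2)
    J = W @ np.linalg.solve(W.T @ W + np.eye(W.shape[1]), W.T)
    return 1.0 / (m - np.trace(J))

rng = np.random.default_rng(3)
C0 = np.eye(4)
Ls = rng.normal(size=(20, 4))   # "seismic": many data
Lc = rng.normal(size=(3, 4))    # "checkshot": few data

ws = resid_dof_weight(Ls, 1.0, C0)
wc = resid_dof_weight(Lc, 1.0, C0)
print(ws, wc)                   # the sparse data type receives the larger weight

# tr(I - J_y) depends only on the singular values s of the whitened block:
s = np.linalg.svd(Ls, compute_uv=False)
assert np.isclose(1.0 / ws, Ls.shape[0] - np.sum(s**2 / (1 + s**2)))
```

Because the weight is a function of the singular spectrum alone, it can be evaluated from a stored (possibly truncated and extrapolated) Σ_y without reassembling the full J_y.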
[0082] System Architecture
[0083] Various techniques described in this disclosure can be performed by a computer system, such as a computer system 900 depicted in Fig. 9. The computer system 900 includes at least one processor 902, which can be coupled to a network interface 904 (to communicate over a network) and a machine-readable storage medium 906 (to store machine-readable instructions 908 and data). The machine-readable instructions can be loaded for execution by the at least one processor 902.
[0084] The storage medium 906 can be implemented as one or more non-transitory computer-readable or machine-readable storage media. The storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
[0085] In the foregoing description, numerous details are set forth to provide an
understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.

Claims

What is claimed is:
1. A method comprising:
providing prior information based at least in part on transforming at least one property relating to a model of a target structure or a spatial volume surrounding the target structure, wherein the transforming considers one or both of physical stability and statistical stability of the at least one property; and
using the prior information to build or update the model.
2. The method of claim 1, wherein the prior information constrains a strain field to be physically stable by constraining a representation of a stiffness tensor to be positive-definite.
3. The method of claim 1, wherein the at least one property includes a plurality of properties, and the transforming includes applying a nonlinear property re-parameterizing of the properties to decorrelate the properties.
4. The method of claim 3, wherein a first subset of the transformed properties is well resolved, and a second subset of the transformed properties is not well resolved.
5. The method of claim 1, wherein the at least one property includes at least one selected from the group consisting of a normal-moveout velocity, an anellipticity, a Thomsen-anisotropy ratio, an axial pressure/shear velocity ratio, or functionals of any of the foregoing.
6. The method of claim 1, further comprising:
transforming the prior information comprising a rock physics probability structure describing a distribution of values of at least one parameter relating to a physical characteristic of the target structure, to curvilinear coordinates.
7. The method of claim 1, wherein the transforming comprises re-parameterizing the at least one property, the method further comprising updating initial re-parameterization blocks by data-covariance estimates.
8. The method of claim 7, further comprising assimilating data into the updating using a Kalman filter with linear or non-linear sensitivity.
9. The method of claim 7, wherein the transforming includes re-parameterizing the at least one property, the method further comprising compressing data using mesh-free multiresolution.
10. The method of claim 7, further comprising applying property bounds by iterative Gaussian anamorphosis.
11. A computer system comprising:
at least one processor to:
compute at least one information metric that combines multiple effects, including a system design matrix and a covariance matrix of one or both of data and at least one property of a target structure or properties of a spatial volume surrounding the target structure; and
use the at least one information metric to automatically create a decision guidance.
12. The computer system of claim 11, wherein the at least one information metric is computed using singular value decomposition or restricted singular value decomposition.
13. The computer system of claim 11, wherein the at least one information metric is generalized by a data-dependent Hessian.
14. The computer system of claim 11, wherein the at least one processor is to further compute localized data perturbations or model perturbations with consistent global effects.
15. The computer system of claim 11, wherein the at least one processor is to further balance different data parts or model parts in a residual norm or a prior norm, wherein the balancing uses separate eigenvector analysis for each of the parts.
16. A method comprising:
collecting one or more of at least one data-processing history or at least one quality assurance software-use history;
analyzing at least one history with respect to decisions that support completing a data processing or quality assurance, the at least one history selected from the at least one data-processing history and the at least one quality assurance software-use history; and
automatically creating decision guidance for a subsequent data processing or quality assurance.
17. The method of claim 16, wherein the at least one history includes at least one selected from the group consisting of: information relating to where data were obtained, information relating to a project, and information relating to capabilities and success records of operators.
18. The method of claim 16, wherein the at least one history includes at least one selected from the group consisting of: information relating to options or values of parameters that were set for the data processing or software-use; information relating to what files were read in or written out by a job; information relating to how much time or computing resources were consumed; and information relating to time duration between steps of a job.
19. The method of claim 16, wherein the at least one history includes at least one selected from the group consisting of: information relating to usage of a user input device; and information relating to repetitive or erratic activities that may indicate problems.
20. The method of claim 16, wherein creating the decision guidance uses a behavior-modeling technology for at least one field selected from the group consisting of an attention-management field, a data-mining field, a machine-learning field, a profiling field, a targeted servicing field, a social-media field, a social networking field, a commercial advertising or marketing field, a computer gaming field, a public relations field, and a national security field.
PCT/US2015/016036 2014-02-17 2015-02-16 Decision guidance WO2015175053A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461940729P 2014-02-17 2014-02-17
US61/940,729 2014-02-17

Publications (2)

Publication Number Publication Date
WO2015175053A2 true WO2015175053A2 (en) 2015-11-19
WO2015175053A3 WO2015175053A3 (en) 2016-04-21




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15792203

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15792203

Country of ref document: EP

Kind code of ref document: A2