Methodology

hyperModel uses a Monte Carlo simulation to build velocity models. A feedback loop iteratively solves a pseudo-random population of models, replacing repetitive manual tasks with massively parallelized compute resources and enabling convergence to a global solution much faster than conventional methods.

PGS hyperModel uses PGS hyperBeam and hyperTomo to accelerate turnaround. These technologies use a beam platform to establish the initial ray kinematics of the invariant data, which comprise wavelets extracted from the data through a multi-dimensional dip-scanning process. This process is performed in the model space and generates the observed data. Most velocity model building flows use a form of inversion; tomography follows three steps:

1. Residual picking. The picks are residual depth errors with offset or angle, measured on Common Image Gathers (CIGs) generated by an initial migration; the tomographic inversion uses these picks to establish the misfit function.

2. Ray-traced de-migration of the residuals using the migration model. This creates data that are independent of any particular migration model; these ‘invariant’ data may be re-migrated to create the ‘observed’ data for tomography.

3. A linear inversion to update the model by minimizing the misfit function based on the observed data.

De-migration and re-migration in step two remove the dependency of the initial residual observations on the migration model, allowing numerous linear inversions to be solved from the same invariant data.
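To make the third step concrete, the linearized update can be posed as a damped least-squares problem: find the model perturbation that best explains the picked residuals. The sketch below uses a random stand-in for the tomographic operator; the names, dimensions, and damping value are illustrative assumptions, not PGS’ implementation.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical sketch of a linearized tomographic update.
# G maps slowness perturbations to predicted depth residuals; d holds the
# residuals picked on the CIGs. Here G is a random stand-in operator.
rng = np.random.default_rng(0)
n_picks, n_cells = 500, 200                      # residual picks, model cells
G = rng.normal(size=(n_picks, n_cells))          # stand-in tomographic operator
true_dm = rng.normal(scale=0.01, size=n_cells)   # 'true' model perturbation
d = G @ true_dm + rng.normal(scale=0.001, size=n_picks)  # noisy residuals

# Solve min ||G dm - d||^2 + damp^2 ||dm||^2 for the model update dm
dm = lsqr(G, d, damp=0.1)[0]
print(f"relative residual after update: {np.linalg.norm(G @ dm - d) / np.linalg.norm(d):.3f}")
```

Because the invariant data can be re-migrated cheaply, many such linear solves can be run against the same picks, each from a different starting model.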

In hyperModel, we use a Monte Carlo simulation by generating a population of randomly perturbed input models. The residuals from each model in the population are solved using an inversion, and the combined output forms a posterior population set. A statistical feedback loop enables the process to build the model iteratively: with each iteration, we solve a new randomly perturbed population of models based on the previous loop’s updates.
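A minimal sketch of one pass of this population-based update is given below, assuming a generic `solve_inversion` callback; the function name, population size, and perturbation scale are placeholders, not hyperModel’s actual parameters.

```python
import numpy as np

def monte_carlo_pass(model, solve_inversion, n_models=50, scale=0.02, seed=0):
    """One pass: perturb, invert each member, average the posterior."""
    rng = np.random.default_rng(seed)
    # Build a randomly perturbed population around the current model
    population = [model * (1.0 + rng.normal(scale=scale, size=model.shape))
                  for _ in range(n_models)]
    # Solve each perturbed model independently (trivially parallelizable)
    posterior = [solve_inversion(m) for m in population]
    # Statistical feedback: the posterior average becomes the next model
    return np.mean(posterior, axis=0)

# Placeholder usage: an identity 'inversion' keeps the example self-contained
model = np.full((64, 64), 2000.0)                 # velocity in m/s
model = monte_carlo_pass(model, solve_inversion=lambda m: m)
```

The independence of the population members is what lets the workflow map naturally onto massively parallel compute.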

A number of variables may affect a tomographic inversion. They fall into two broad groups: the accuracy of the measurements feeding the inversion, and how the inversion is parameterized. The first depends on the data, whilst the second defines how we want to sample the subsurface for the inversion.

Before building the randomly perturbed population of input models, we determine the impact of these variables on the model by applying a perturbation to the initial model and computing the observed data from the invariants. An inversion then updates the perturbed model, and we analyze the result to understand how effectively the inversion resolves the perturbation. Following this, we perform a model sensitivity check by applying a checkerboard perturbation to the initial model, again followed by an inversion and an analysis phase.
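As a rough illustration, a checkerboard perturbation alternates positive and negative velocity anomalies across the model; the helper below is a hypothetical sketch, with placeholder cell size and amplitude.

```python
import numpy as np

def checkerboard_perturbation(shape, cell_size, amplitude=0.05):
    """Multiplicative checkerboard: alternating +/- 'amplitude' blocks."""
    iz, ix = np.indices(shape)
    sign = (-1.0) ** ((iz // cell_size) + (ix // cell_size))
    return 1.0 + amplitude * sign

initial_model = np.full((128, 256), 2500.0)        # background velocity, m/s
perturbed = initial_model * checkerboard_perturbation((128, 256), cell_size=16)
```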

The image below shows a schematic representation of this approach. The inversion resolves the wavelength and magnitude of the perturbation, and provides information for the generation of randomly perturbed models. The analysis phase uses a sequence of automatically derived metrics defining thresholds of suitability for the applied perturbations, fulfilling the requirement to understand the sources of uncertainty prior to a Monte Carlo simulation.

Figure: Checkerboard test to understand the variations in the model space that the data will support. This information is used to create the model population used in the Monte Carlo simulation.
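One way to automate the analysis phase is to score how well the inverted update recovers the applied checkerboard. The correlation metric below is a hypothetical example of such a suitability threshold, not PGS’ published criterion.

```python
import numpy as np

def recovery_correlation(applied, recovered):
    """Zero-lag normalized cross-correlation between the applied
    checkerboard and the inverted update (1.0 = perfect recovery)."""
    a = applied - applied.mean()
    r = recovered - recovered.mean()
    return float((a * r).sum() / np.sqrt((a ** 2).sum() * (r ** 2).sum()))

# Wavelengths whose recovery score exceeds some threshold (say, 0.7) would
# be judged resolvable and eligible for the random perturbations.
```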


Once the checkerboard test is complete, we create and apply a population of random perturbations to the initial model to produce a randomly perturbed model population. Each model is solved using an inversion, producing a set of posterior models. Statistical analysis of these models enables a feedback loop: the average of the posterior population is applied to the initial model, completing the first pass. A new population of randomly perturbed models is then created for the next iteration. The workflow continues iteratively, and we track convergence using a gather flatness metric. Once we reach a threshold of convergence, the final statistical loop is applied and the final velocity model is created. The schematic below outlines the workflow. A pass of model uncertainty analysis may be performed on this final velocity model.

Figure: Flow diagram illustrating PGS’ hyperModel.
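In code, the outer loop might look like the schematic below, with `one_pass` standing in for the population solve and posterior averaging, and `gather_flatness` for a CIG flatness measure such as semblance; the tolerance and iteration cap are placeholder values.

```python
def run_iterative_loop(model, one_pass, gather_flatness, tol=0.95, max_iter=20):
    """Iterate population passes until the gather-flatness threshold is met."""
    for iteration in range(max_iter):
        model = one_pass(model)               # solve population, apply average
        flatness = gather_flatness(model)     # e.g. mean CIG semblance
        print(f"iteration {iteration}: gather flatness = {flatness:.3f}")
        if flatness >= tol:                   # convergence threshold reached
            break
    return model
```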


An additional benefit of hyperModel’s use of randomly sampled model populations is that whilst sampling the model space, we also sample the misfit space. This reduces the likelihood of solutions driven by local minima, which are more likely with a traditional single-model solution.

hyperModel Runs Automatically and in Challenging Geological Environments

The hyperModel workflow is run as ‘hands-off’ as possible, as the goal is to produce an automated velocity model quickly. To achieve this, each inversion solves its problem globally, or as globally as possible. Once the analysis phase is complete and the workflow is set up, hyperModel runs and is only halted once convergence has been reached, or if we want to intervene to modify the anisotropy parameters.

In challenging geological environments with rapid vertical variations in velocity, masks can be used in hyperModel to exclude areas with poor reflectivity. This deviates from the ideal global solution but may be necessary in some settings, and when masks are used, hyperModel may take longer to run.
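A minimal sketch of such a mask, using a stand-in reflectivity attribute and a placeholder threshold, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
reflectivity = np.abs(rng.normal(size=(128, 256)))   # stand-in image attribute
update = rng.normal(scale=10.0, size=(128, 256))     # stand-in velocity update
mask = reflectivity > 0.1                            # True where picks are reliable
update[~mask] = 0.0                                  # no update where imaging is poor
```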