Save up to Six Months with Automation in Seismic Processing

What is the maximum time saving automation can deliver for a 3 000 sq. km seismic processing project that would normally take 9.5 months: 10, 12, or 24 weeks?

You chose 10 weeks. This is the amount of time you would save by using data mining and conventional velocity model building.

PGS automated data mining achieves a time saving of 10 weeks over the conventional signal processing and velocity model building (VMB) turnaround of nine and a half months. The pre-processing data mining stage takes two weeks to define and set up the workflow, with a further two weeks required for the post-processing workflow. Only the pre-processing data mining and conventional VMB stages are on the critical path. Data mining allows the subsequent production processing and deliverable generation to run very efficiently, as derivation of the signal processing flows is not constrained by a time-consuming parameter testing, review and sign-off process.
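As a back-of-the-envelope check of the figures above, the sketch below converts the quoted turnaround into weeks; only the 9.5-month baseline, the 10-week saving and the two 2-week data-mining stages come from the text, the rest is simple arithmetic.

```python
# Back-of-the-envelope check of the turnaround figures quoted above.
# Only the 9.5-month baseline, the 10-week saving and the two 2-week
# data-mining stages come from the article; the rest is arithmetic.
WEEKS_PER_MONTH = 365.25 / 12 / 7            # ~4.35 weeks per month

baseline_weeks = 9.5 * WEEKS_PER_MONTH       # conventional turnaround, ~41 weeks
mined_weeks = baseline_weeks - 10            # data mining + conventional VMB
mining_setup_weeks = 2 + 2                   # pre- and post-processing workflow definition

print(f"Conventional turnaround: {baseline_weeks:.0f} weeks")
print(f"With data mining:        {mined_weeks:.0f} weeks")
print(f"Mining workflow set-up:  {mining_setup_weeks} weeks (largely off the critical path)")
```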

Change your mind to 12 weeks? Learn what happens when conventional signal processing and automated model building are implemented.
Change your mind to 24 weeks? Learn what happens when PGS data mining and automated model building are implemented.


PGS Article Featured in First Break Data Processing Special Edition

Read more about current work on data mining in the June edition of First Break. It highlights a proof-of-concept project undertaken by PGS to measure the impact on data quality of automating key stages in a seismic processing project. Instead of time-consuming manual testing and analysis, the investigation used an automated data mining approach to assess whether the data quality would be equivalent and whether any efficiency gains could be made.

In a seismic processing project, the parameters used by each geophysical algorithm are tested. Combinations of parameters are checked, and semi-subjective decisions are made about the optimal values. For some steps, there may be numerous parameters to test. If an algorithm had 10 different parameters, each with three settings, say low, medium and high, there would be ~60 000 possible combinations to test. Add another setting per parameter, and the combinations exceed 1 000 000. In practice, an experienced geophysicist fine-tunes the starting point based on experience, so nowhere near this many tests are ever run.
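The combination counts quoted above are easy to verify; the short sketch below simply reproduces the arithmetic.

```python
# Reproduce the parameter-combination counts quoted above.
n_parameters = 10

three_settings = 3 ** n_parameters  # low / medium / high for every parameter
four_settings = 4 ** n_parameters   # one extra setting per parameter

print(f"{n_parameters} parameters x 3 settings: {three_settings:,} combinations")  # 59,049 (~60 000)
print(f"{n_parameters} parameters x 4 settings: {four_settings:,} combinations")   # 1,048,576 (> 1 000 000)
```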

The statistics in this example do not account for the dynamic nature of the data, which varies in all dimensions. The optimal parameters for one window of data might not be optimal for another, although they may be very close. If we want the best results everywhere, then for a typical ‘windowed’ process using 10 parameters and three settings on a data set of 5 000 sq. km, the chance of being right everywhere is less than one trillionth of one percent. There is always a parameter null space, but the example does raise an interesting question of what is a good set of parameters and what is not. This is particularly important for seismic data that has never been processed before.
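The "right everywhere" figure depends on how many independent windows the survey is split into, which is not stated; the sketch below shows one possible reading, with the window count as an explicit assumption.

```python
# Illustrative only: the number of independent windows is an assumption,
# not a figure from the article.
combos_per_window = 3 ** 10        # 10 parameters, 3 settings each
n_windows = 3                      # assumed number of independent data windows

# If one fixed parameter set is chosen and each window has its own optimum,
# the chance of that set being optimal in every window is:
p_all_windows = (1.0 / combos_per_window) ** n_windows
print(f"{p_all_windows:.1e}")      # ~4.9e-15, below one trillionth of one percent (1e-14)
```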

Read the full First Break article 'Seismic processing parameter mining – the past may be the key to the present'.

Is Any Image Degradation Worth the Time Saved?

The goal of the work was to understand the quality and turnaround impact of replacing the testing of seismic processing parameters with parameters mined from a collectivized, digitalized experience database. The database contains the parameters used for key steps in previous processing projects. The parameter mining extracts trends based on key criteria. All required parameters are determined in advance of the project and used to populate workflows that run back-to-back. The trial used a subset of seismic data from Sabah in Malaysia and compared the results to a full-integrity processing project on the same data.
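As a rough illustration of the idea (not the PGS implementation), the sketch below mines a single parameter from a hypothetical database of past-project settings; the field names, matching criteria and median trend are all assumptions.

```python
# Minimal sketch of parameter mining from a hypothetical experience database.
# Field names, criteria and the median trend are illustrative assumptions only.
from statistics import median

# Each record: parameters used for a key processing step on a previous project,
# tagged with criteria that can be matched against the new survey.
past_projects = [
    {"region": "SE Asia",   "water_depth_m": 80,  "step": "demultiple", "gap_ms": 24},
    {"region": "SE Asia",   "water_depth_m": 120, "step": "demultiple", "gap_ms": 28},
    {"region": "North Sea", "water_depth_m": 110, "step": "demultiple", "gap_ms": 16},
]

def mine_parameter(records, step, region, value_key):
    """Extract a trend (here, the median) for one parameter from matching past projects."""
    values = [r[value_key] for r in records if r["step"] == step and r["region"] == region]
    return median(values) if values else None

# Populate the workflow in advance of the project, with no interactive testing loop.
workflow = {"demultiple_gap_ms": mine_parameter(past_projects, "demultiple", "SE Asia", "gap_ms")}
print(workflow)  # {'demultiple_gap_ms': 26.0}
```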

The team created quantitative metrics of quality and of similarity between the two volumes. Both the raw seismic data sets and the metrics showed a striking similarity. The test was run up to and including the migration step, for which we relied on the velocity model from the full-integrity project. In the image below, see if you can spot the differences between the two volumes.

Full Integrity (left) vs. Data Mined (right)

Raw migration example of the Sabah fold and thrust belt data shows that very similar results are generated (in far less time and with far less human interaction) using parameter mining (right), compared to a full-integrity production project (left).

The figure shows stacked data, which is often a leveller of quality. Even so, there is almost no difference between the two data sets, while bypassing the parameter testing delivered a significant reduction in turnaround.
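The article does not specify which quantitative metrics were used; the sketch below shows two generic measures, normalized RMS difference and zero-lag cross-correlation, that are commonly used to compare seismic volumes, applied here to synthetic stand-in data.

```python
# Generic similarity metrics between two migrated volumes (illustrative only).
import numpy as np

def nrms(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized RMS difference in percent: 0 for identical data, ~141 for uncorrelated."""
    return 200.0 * np.sqrt(np.mean((a - b) ** 2)) / (
        np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(b ** 2))
    )

def xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-lag normalized cross-correlation between the two volumes."""
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

rng = np.random.default_rng(0)
full_integrity = rng.standard_normal((50, 50, 100))   # stand-in for the migrated volume
data_mined = full_integrity + 0.05 * rng.standard_normal(full_integrity.shape)

print(f"NRMS:              {nrms(full_integrity, data_mined):.1f}%")
print(f"Cross-correlation: {xcorr(full_integrity, data_mined):.3f}")
```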

The proof-of-concept project described in June’s First Break paper outlines an approach that uses relatively crude data mining while maintaining data quality and reducing project turnaround times. There are some caveats, but the collectivized experience of all PGS staff is undoubtedly a powerful tool to harness. It may help reduce testing time, or deliver high-quality fast-track products.

For more information on tailoring your workflow, contact imaging.info@pgs.com.