Supervised ML relies upon a relevant training dataset of known question-and-answer outcomes ('labeled data'); unsupervised ML uses no labeled training data and instead depends upon the existence of natural patterns in the data; and reinforcement learning builds its solution through an iterative process of trial and reward. In recent years, significant progress has been made in all flavors of ML as technology has developed and vast amounts of data for training and testing have become available. PGS is not alone in exploring the fields of seismic acquisition, processing, reservoir characterization, and interpretation for practical applications of AI and ML.
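For readers less familiar with these categories, a minimal sketch (using NumPy on made-up toy data; none of this reflects PGS's actual tooling) contrasts the first two flavors: a supervised fit learns a mapping from labeled pairs, while an unsupervised clustering relies only on natural structure in the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised: learn a mapping from labeled (x, y) pairs via least squares.
x = rng.uniform(0.0, 10.0, 50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 0.1, 50)   # the 'labeled data'
A = np.vstack([x, np.ones_like(x)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Unsupervised: no labels; exploit natural structure (two clusters) instead.
pts = np.concatenate([rng.normal(0.0, 0.5, 30), rng.normal(8.0, 0.5, 30)])
centers = np.array([pts.min(), pts.max()])      # crude initial guesses
for _ in range(10):                             # simple 1-D k-means
    labels = np.abs(pts[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([pts[labels == k].mean() for k in range(2)])
```

The supervised branch recovers the slope and intercept because every example carries its answer; the unsupervised branch finds the two cluster centers with no answers supplied at all.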
The two ML solutions briefly considered below target more automated coherent noise attenuation and more automated velocity model building, two of the most significant bottlenecks in seismic processing flows. Both benefit from the redundancy of information acquired in modern 3D seismic surveys.
Automating Velocity Model Building
Deep neural networks (DNNs) are a well-known methodology for training a computer to learn and apply a set of rules (for example, applying image-recognition principles to interpret certain data features). However, more general stochastic approaches such as Monte Carlo simulation still have great potential for manipulating data to achieve a desired outcome with minimal human intervention.
For example, starting from a simple velocity model, a sophisticated tomographic update of an extensive survey volume can be automated in only a few days, a process that historically takes weeks to months of manual iterative updates. First, an automated analysis establishes the local statistical characteristics of the dataset. An appropriate population of random perturbations is then generated and applied to the starting velocity model. Every velocity model in the population passes through a highly efficient tomographic inversion engine (PGS hyperTomo). After several cycles, a usable velocity model emerges, the convergence criterion being flat common-image gathers (CIGs).
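The perturb-evaluate-select cycle described above can be caricatured as a stochastic search. The sketch below is deliberately simplified: the 1-D "true" velocity profile and the `gather_flatness_residual` function are invented stand-ins for measuring residual moveout on CIGs, whereas the real workflow pushes each candidate model through a tomographic inversion engine such as PGS hyperTomo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'true' 1-D interval-velocity profile (m/s); in practice the target is
# unknown and gather flatness after migration acts as the convergence measure.
true_model = np.linspace(1500.0, 3500.0, 20)

def gather_flatness_residual(model):
    """Hypothetical proxy for residual moveout on CIGs: here simply the
    RMS error of the candidate against the toy true profile."""
    return np.sqrt(np.mean((model - true_model) ** 2))

# Simple starting model: a constant water-like velocity everywhere.
model = np.full(20, 1500.0)
scale = 500.0
for cycle in range(30):
    # Generate a population of random perturbations of the current model...
    population = model + rng.normal(0.0, scale, size=(50, 20))
    # ...score each candidate, and keep the best one if it improves things.
    scores = [gather_flatness_residual(m) for m in population]
    best = population[int(np.argmin(scores))]
    if gather_flatness_residual(best) < gather_flatness_residual(model):
        model = best
    scale *= 0.9  # anneal the perturbation size as the gathers flatten
```

Because candidates are only accepted when they reduce the residual, the model can never get worse; the shrinking perturbation scale plays the role of the "several cycles" of refinement in the text.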
Following proof-of-concept studies in 2019, PGS recently applied this hyperModel methodology to a large 3D volume (3 500 sq. km) straddling shallow- and deep-water environments in a survey offshore West Africa. The final results are remarkably accurate and insensitive to significant errors in the starting model (see the figure below). PGS is also exploring ways to automate Full Waveform Inversion (FWI) that benefit from recent developments in how FWI models data reflectivity: Latest results will be presented at the EAGE conference in June.
Automating Coherent Noise Attenuation

Noise occurs in many forms throughout a processing flow. Most initial seismic processing attempts to remove noise so that, once the velocity model is derived, the data can be migrated with optimum quality. Nevertheless, migrated images often exhibit contamination from uncompensated migration swings (see the upper part of the figure below). This noise, the result of suboptimal destructive interference of the migration response, arises from uneven subsurface illumination due to limited data coverage and/or propagation through complex media. Humans can usually distinguish these artifacts in the images visually, and the problem is conventionally addressed by designing filters that attenuate the noise. However, it is often challenging to create filters that remove the noise without damaging image resolution. In the figure below, a DNN was trained to attenuate coherent noise on seismic images. The application to a field dataset demonstrates the method's potential to attenuate migration swings without unacceptably compromising image resolution.
Slide the bar to compare migrated seismic data containing undesirable noise artifacts (left) and the outputs of a Machine Learning effort to attenuate much of the noise without compromising image resolution (right).
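To illustrate the supervised training idea behind such a denoiser, here is a toy sketch: in place of a real DNN, a single learned correction image is fitted by gradient descent on synthetic clean/noisy pairs, where the additive "migration swing" pattern is invented for the example. This is purely illustrative and is not the network or the data PGS used.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

def coherent_noise():
    # Invented stand-in for a repeatable migration-swing pattern:
    # a smooth, dipping oscillation across the image.
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return 0.5 * np.sin(0.4 * i + 0.8 * j)

# Supervised training pairs: clean synthetic images and noisy counterparts.
clean = np.stack([rng.normal(0.0, 1.0, (n, n)) for _ in range(64)])
noisy = clean + coherent_noise()

# Deliberately tiny stand-in for a denoising network: one learned correction
# image b, trained so that (noisy - b) approximates clean.
b = np.zeros((n, n))
lr = 0.25
for _ in range(100):
    residual = (noisy - b) - clean          # prediction error per pair
    grad = -2.0 * residual.mean(axis=0)     # gradient of mean squared error
    b -= lr * grad

denoised = noisy - b  # apply the trained correction
```

A real DNN replaces the single correction image with millions of parameters and generalizes to noise it has never seen, but the training principle, minimizing the mismatch between denoised output and a clean target, is the same.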
The implementation stage of ML solutions is typically fast and cheap, but the training stage requires large datasets. Indeed, the automated velocity model-building (PGS hyperModel) example here depends upon highly efficient tools such as beam migration and its extension to reflection tomography (PGS hyperTomo): a lot of data must be interrogated and processed in the background for the computer to make robust decisions. PGS is actively exploring solutions across all our activities, from predictive maintenance of seismic survey vessels to real-time acquisition QC, processing QC, the automation of human-intensive processes, and the classification and interpretation of data deliverables. These capabilities enable fundamentally different ways of thinking: applications of automation, ML, and AI will augment how we do things, allowing us to make better decisions using more data in less time.
The path to full automation will be a journey of collaboration with our clients that will, without a doubt, be very interesting but require a lot of careful thought.