Fault identification is a key step in seismic interpretation, and generating an accurate fault model is essential to any reservoir model or structural framework used to solve subsurface problems. Manual interpretation is the most common approach to mapping faults in seismic amplitude data; interpreters typically supplement the amplitude volume with specialized seismic attributes, such as variance or coherence, that enhance fault visualization and ease fault identification. More recently, enhanced attributes and machine learning techniques have been employed to highlight faults and other structural features. Methods such as ant tracking, coherence enhancement, self-organizing maps, convolutional neural networks, and other data analytics approaches enhance faults; nonetheless, additional interpretation work is still required to create a fault object and to differentiate faults from other complex features and noise within the seismic volume.

Probabilistic neural networks (PNN) are feedforward neural networks often used in classification problems, and recent applications have shown promise in delineating seismic facies, including salt and mass transport deposits, from the surrounding sedimentary matrix. In the first part of this research, I evaluate the ability of PNN to delineate faults. Although almost all seismic attributes are in some way sensitive to faults, a much smaller subset highlights faults with respect to the non-faulted background geology. For this reason, I employ an exhaustive PNN search to identify the optimal set of attributes for creating a fault probability volume. For a seismic survey from the Great South Basin, New Zealand, I found the best attributes to be aberrancy magnitude, gray-level co-occurrence matrix (GLCM) entropy and homogeneity, Sobel filter similarity, and envelope.

Although time-consuming, hand-picking faults on vertical slices through the seismic amplitude volume to generate fault "sticks," then interpolating those sticks into a fault surface, remains the most commonly used fault interpretation workflow. In the second part of this research, I use such carefully hand-picked faults as the standard and compare them to faults computed using coherence enhancement, PNN, and convolutional neural networks (CNN) for a Taranaki Basin data volume. Coherence enhancement, PNN, and CNN all provide a voxel-by-voxel likelihood of a fault rather than a fault surface. To address this limitation, I use active contours to convert voxel estimates of fault probability into a significantly smaller set of samples that define a named fault object. This semiautomatic approach scans for high-probability fault locations and moves the contour from voxels of low to high probability to fit the shape of the fault surface. Although each fault enhancement method produces a fault probability volume, the three differ in noise, artifacts, and fault locations. I compute the Hausdorff distance between each of the three fault delineation workflows and the hand-picked baseline and find that multispectral coherence enhancement produces the most accurate results for the Taranaki Basin dataset analyzed. Finally, I find that data conditioning prior to attribute calculation, including structure-oriented filtering, improves the fault enhancement result, facilitates human interpretation, and yields computer-generated faults with fewer artifacts.
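To make the exhaustive PNN attribute search concrete, the following is a minimal sketch in Python. The PNN follows the standard Specht formulation (one Gaussian Parzen-window density per class); the attribute names, the random stand-in data, and the train/test split are hypothetical placeholders rather than the thesis data or code.

```python
import itertools
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.5):
    """Specht-style PNN: one Gaussian Parzen-window density per class;
    each test sample takes the class with the largest averaged kernel."""
    classes = np.unique(train_y)
    scores = np.empty((test_X.shape[0], classes.size))
    for j, c in enumerate(classes):
        Xc = train_X[train_y == c]                       # patterns for class c
        # squared Euclidean distances, shape (n_test, n_c)
        d2 = ((test_X[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

# Hypothetical attribute table: one row per labeled voxel, one column per attribute.
attr_names = ["aberrancy_mag", "glcm_entropy", "glcm_homogeneity",
              "sobel_similarity", "envelope", "coherence", "variance"]
rng = np.random.default_rng(0)
X = rng.normal(size=(400, len(attr_names)))              # stand-in attribute values
y = rng.integers(0, 2, size=400)                         # 1 = fault, 0 = background
train, test = slice(0, 300), slice(300, 400)

# Exhaustive search: score every attribute combination, keep the best.
best_acc, best_combo = 0.0, None
for k in range(1, len(attr_names) + 1):
    for combo in itertools.combinations(range(len(attr_names)), k):
        cols = list(combo)
        pred = pnn_classify(X[train][:, cols], y[train], X[test][:, cols])
        acc = (pred == y[test]).mean()
        if acc > best_acc:
            best_acc, best_combo = acc, [attr_names[i] for i in cols]
print(f"best accuracy {best_acc:.3f} with attributes {best_combo}")
```

In practice the labeled voxels would come from interpreter picks, and the held-out score would be replaced by proper cross-validation, but the combinatorial loop is the essence of an exhaustive search.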
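The active-contour fitting described above is a custom semiautomatic workflow; as a rough illustration only, the sketch below uses scikit-image's classic snake (skimage.segmentation.active_contour) to pull an initial contour toward high-probability voxels on a synthetic 2D fault-probability slice. The synthetic slice and all parameter values are assumptions for demonstration, not the thesis implementation.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Hypothetical 2D fault-probability slice with one bright dipping "fault".
prob = np.zeros((200, 200))
rows = np.arange(200)
cols = np.clip((0.6 * rows + 40).astype(int), 0, 199)
prob[rows, cols] = 1.0
prob = gaussian(prob, sigma=3)                 # smooth into a probability ridge

# Rough initial contour near the fault; the snake is attracted toward
# high-probability voxels (positive w_line pulls it to bright ridges).
init = np.stack([rows.astype(float), np.full(200, 90.0)], axis=1)  # (row, col)
snake = active_contour(prob, init, alpha=0.01, beta=0.5,
                       w_line=1.0, w_edge=0.0,
                       boundary_condition="fixed")
print("fitted fault trace, first samples:\n", snake[:3])
```

The resulting snake coordinates play the role of the "significantly smaller set of samples" defining a fault object; extending the idea to 3D fault surfaces requires the thesis's own scanning scheme.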
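The accuracy comparison can likewise be illustrated with the symmetric Hausdorff distance, computed here with SciPy's directed_hausdorff. The point sets standing in for the hand-picked baseline and the three workflows are synthetic; with real data they would be (x, y, z) samples on each interpreted fault surface.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, 3) and (M, 3)
    point sets: the max of the two directed distances."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Hypothetical fault-surface samples; smaller perturbations stand in for
# workflows that track the hand-picked baseline more closely.
rng = np.random.default_rng(1)
hand_picked = rng.uniform(size=(500, 3))
workflows = {
    "multispectral_coherence": hand_picked + rng.normal(0, 0.01, (500, 3)),
    "pnn":                     hand_picked + rng.normal(0, 0.03, (500, 3)),
    "cnn":                     hand_picked + rng.normal(0, 0.05, (500, 3)),
}
for name, pts in workflows.items():
    print(f"{name:24s} Hausdorff distance = {hausdorff(hand_picked, pts):.4f}")
```

A smaller Hausdorff distance means every point on the computed fault lies close to the hand-picked surface and vice versa, which is why it serves as a single-number accuracy score for each workflow.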