Paper Summary
Noise is an ongoing issue in seismic processing, and the ability to process ocean bottom node data successfully depends on attenuating shear wave noise in vertical geophone recordings. This noise has traditionally been attenuated with co-denoise techniques in various transform domains; however, these methods can be hard to parametrize and costly to apply in practice. In this study, the authors demonstrate that machine learning (ML) algorithms can be used for shear wave noise attenuation, at least as a fast-track solution. The study presents two new findings. First, it is shown that, for shear wave noise attenuation, ML solutions using only the vertical geophone perform as well as dual-component solutions using both the hydrophone and the geophone. Second, the authors analyse the generalizability of ML solutions: trained networks are shown to generalize to seismic data from a different experiment not seen during training.
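The summary does not specify the authors' network architecture, so the following is only a minimal sketch of the general idea: a small convolutional encoder-decoder that maps a noisy vertical-geophone gather to a denoised one. The class name `DenoiseNet`, all layer sizes, and the toy data shapes are hypothetical; setting `in_channels=2` would emulate the dual-component (hydrophone + geophone) input variant compared against the single-component one.

```python
# Illustrative sketch only, not the authors' method: a simple CNN denoiser.
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Encoder: extract features from the noisy input gather.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Decoder: predict the shear-noise-free vertical geophone gather.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Toy training step on random stand-in data shaped
# (batch, components, time samples, traces).
model = DenoiseNet(in_channels=1)      # single-component (geophone-only) variant
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(4, 1, 256, 64)     # stand-in for noise-contaminated Z data
clean = torch.randn(4, 1, 256, 64)     # stand-in for the denoised target

loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```

In this framing, the generalization test described in the summary would amount to applying a trained `DenoiseNet` to gathers from a survey that contributed no training pairs and assessing the quality of the denoised output.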