
Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.

Digital Object Identifier 10.1109/ACCESS.2018.DOI

Toward Creating Subsurface Camera
WENZHAN SONG1, FANGYU LI1, MARIA VALERO1, AND LIANG ZHAO2
1 Center for Cyber-Physical Systems, University of Georgia, Athens, GA 30602 USA
2 Division of Math and Computer Science, University of South Carolina Upstate, Spartanburg, SC 29303 USA

Corresponding author: WenZhan Song (e-mail: [email protected]).

This work is partially supported by NSF-CNS-1066391, NSF-CNS-0914371, NSF-CPS-1135814, NSF-CDI-1125165, and Southern Company.

ABSTRACT In this article, the framework and architecture of the Subsurface Camera (SAMERA) is envisioned and described for the first time. A SAMERA is a geophysical sensor network that senses and processes geophysical sensor signals and computes a 3D subsurface image in-situ in real time. The basic mechanism is: geophysical waves propagating, reflected, or refracted through the subsurface enter a network of geophysical sensors, where a 2D or 3D image is computed and recorded; a control software may be connected to this network to allow viewing of the 2D/3D image and adjustment of settings such as resolution, filter, regularization and other algorithm parameters. System prototypes based on seismic imaging have been designed and built. SAMERA technology is envisioned as a game changer that will transform many subsurface survey and monitoring applications, including oil/gas exploration and production, subsurface infrastructure and homeland security, wastewater and CO2 sequestration, and earthquake and volcano hazard monitoring. Creating SAMERA requires an interdisciplinary collaboration and transformation of sensor networks, signal processing, distributed computing, and geophysical imaging.

INDEX TERMS Subsurface camera, geophysical sensor network, subsurface infrastructure security, distributed computing, real-time in-situ imaging.

I. INTRODUCTION

In the eighteenth century, the concept of the optical camera was conceived. The basic mechanism is: light rays reflected from a scene enter an enclosed box through a converging lens and an image is recorded on a light-sensitive medium (film or sensor); a display, often a liquid crystal display (LCD), permits the user to view the scene to be recorded and adjust settings such as ISO speed, exposure, and shutter speed. In this article, the concept of the Subsurface Camera (SAMERA) is envisioned and described for the first time. The basic mechanism (Figure 1) is as follows: geophysical waves propagating, reflected, or refracted through the subsurface enter a network of geophysical sensors where a 2D or 3D image is computed and recorded; a control software with a graphical user interface (GUI) can be connected to this network to visualize the computed images and adjust settings such as resolution, filter, regularization and other algorithm parameters.

A SAMERA is a geophysical sensor network that senses and processes geophysical waveform signals and computes a 3D subsurface image in-situ in real time. Just as a camera can become a video camera to record a sequence of images, SAMERA can generate a time sequence of subsurface images and enable searching, identifying and tracking underground dynamics for security and control applications. Also, just as a flash may be added to an optical camera to light the scene, geophysical transmitters may be added to a SAMERA to illuminate the subsurface for faster image generation and finer resolution, rather than relying only on passive natural events (such as earthquakes). The geophysical transmitters may be add-ons to the receivers (i.e., sensors), converting receivers into transceivers. For example, seismic exploration in the oil/gas industry often uses explosives or vibroseis to generate active seismic waves, and GPR (Ground Penetrating Radar) carries active electromagnetic wave transmitters. Geophysical transmitters can transmit waves at different wavelengths to enable subsurface imaging at different ranges and resolutions. Waves with longer wavelengths typically propagate deeper and farther, but generate lower-resolution images. If each geophysical transceiver is mounted on a mobile robot, a mobile and zoomable SAMERA becomes possible. Given an area, the geophysical transceivers can first spread out to form a sparse array, transmit waves with longer wavelengths, and generate a coarse subsurface image; if a region of interest is identified from the coarse image, the transceivers can gather closer to form a dense array, transmit waves with shorter wavelengths, and generate finer subsurface images.

FIGURE 1. SAMERA system architecture: sensing, processing, computing and control.

SAMERA technology is envisioned as a game changer to transform many subsurface survey and monitoring applications, including oil/gas exploration and production, subsurface infrastructures and homeland security, wastewater and CO2 sequestration, and earthquake and volcano hazard monitoring. The system prototypes for seismic imaging have been built (section III). Creating SAMERA requires an interdisciplinary collaboration and transformation of sensor networks, signal processing, distributed computing, and geophysical imaging.

II. SYSTEM FRAMEWORK AND ARCHITECTURE
A SAMERA system is a general subsurface exploration instrumentation platform and may incorporate one or more types of geophysical sensors and imaging algorithms based on application needs. Various geophysical sensors and methods have been used for subsurface exploration: seismic methods (such as reflection seismology, seismic refraction, and seismic tomography); seismoelectrical methods; geodesy and gravity techniques (such as gravimetry and gravity gradiometry); magnetic techniques (including aeromagnetic surveys and magnetometers); electrical techniques (including electrical resistivity tomography, induced polarization, spontaneous potential and marine controlled-source electromagnetics (mCSEM), or EM seabed logging); and electromagnetic methods (such as magnetotellurics, ground penetrating radar, transient/time-domain electromagnetics, and surface nuclear magnetic resonance, also known as magnetic resonance sounding). For environmental engineering applications, the seismic, EM and electrical resistivity methods are often used. All of these geophysical imaging methods can be implemented as a type of SAMERA with the same system framework and architecture, as illustrated in Figure 1.

For simplicity of presentation and illustration, the following sections describe SAMERA based on seismic imaging, while the framework, architecture, and algorithms apply similarly to other geophysical sensors and imaging approaches. Seismic imaging is chosen as the example also because seismic methods are widely used in subsurface exploration, ranging from meters to kilometers in distance and depth.

A. SENSING LAYER AND HARDWARE PLATFORM

For seismic imaging, two types of seismic waves are mainly used in the sensing layer: body waves (P and S) and surface waves (Rayleigh and Love waves), as illustrated in Figure 2. They have different particle-motion patterns, resulting in different waveform characteristics and velocities [1]. Section III introduces several seismic imaging algorithms using body and surface waves, respectively. Seismometers (often geophones) are used to sense/receive seismic waves. Some seismometers can measure motions with frequencies from 500 Hz down to 0.00118 Hz. Deep and planetary-scale studies often use sub-Hz broadband seismometers, while earthquake and exploration geophysics often use 2-120 Hz geophones. A digitizer is designed to amplify signals, suppress noise and digitize data via an Analog-to-Digital Converter (ADC) chip. A digitizer for seismic applications typically has 16-32 bit resolution and a sampling rate of 50-1000 Hz.

FIGURE 2. Illustration of seismic imaging with body waves (P and S) and surface waves (Love and Rayleigh).

The SAMERA hardware prototypes have been designed for seismic imaging methods (Figure 3). Each node has a geophone, GPS (global positioning system), computing board, wireless radio, solar panel and battery. Each sensor node is equipped with a wireless radio, and the nodes self-form a sensor network for communication and data exchange. Sensor networks have been successfully deployed in harsh environments for geophysical surveys [2]–[5]. The GPS provides a precise timestamp and location information for each node. The computing board inside is a Raspberry Pi 3 [6], which has a 1.2 GHz CPU, 1 GB RAM and a GPU for intensive local computing when needed, yet can be put to sleep for very low power consumption. Several seismic imaging methods based on this hardware platform have been built and demonstrated, as described in section III. However, for some geophysical imaging methods (such as Full Waveform Inversion (FWI)), there are concerns about whether they can ever be performed in sensor networks, as they require days or even months of computation on IBM mainframes at the time this article is written. Those concerns appear legitimate but will gradually fade. In the 1970s, an IBM mainframe computer ran at 12.5 MHz and cost $4.6 million. People in 1970 would similarly have doubted that a matchbox-sized board (like the Raspberry Pi) in 2018 could be 100 times faster than a mainframe and cost only $35. In the past three decades, processor performance has doubled roughly every 18 months, following the trend predicted by Moore's law, and this trend is expected to continue in the next decade. In addition to expected hardware improvements, big data, artificial intelligence and distributed computing are developing quickly as well and are becoming more and more efficient at handling these geophysical imaging problems.

B. PROCESSING LAYER
In the processing layer, signal processing techniques are used to process the raw time-series sensor signals and extract the information needed for 2D/3D image reconstruction. The signal processing functions include data conditioning, noise cancellation, change-point detection, signal disaggregation, cross-correlation, time and frequency analysis, and so on. Most signal processing tasks are performed on each node locally, while a few processing tasks may need data exchange among nodes, such as the cross-correlation used in ambient noise imaging (section III-C).

FIGURE 3. SAMERA hardware platform prototype.

Data conditioning is the very first step of the processing layer; it includes data interpolation, axis direction initialization, phase correction, and instrument response removal. The data acquired from the field may have missing samples, and the sampling rates of different types of geophysical sensors can differ, so interpolation is necessary for the subsequent processing steps [7]. For seismology experiments, the geophones are usually placed along the N (north), E (east), and Z (vertical) directions. For unconventional seismic explorations, however, the geophone axes may be placed arbitrarily (otherwise the deployment takes much longer), so an axis direction initialization may be needed, for example based on the perforation shots of hydraulic fracturing [8]. In addition, the recorded seismic data may have mixed phase because of wave propagation through complicated media, so a phase correction is applied to make the seismic peaks and troughs relate more closely to the geophysical structures. Finally, the instrument response needs to be removed from the seismic data [9].

To enhance the signals of interest, it is common to apply a bandpass (e.g., Butterworth) filter to attenuate low- and high-frequency noise and improve the SNR of the seismic data. Although a bandpass filter cannot filter out all noise components [10], it is still widely adopted as an initial processing step, and a subsequent filter with more comprehensive performance may be added. For noise cancellation, a filter is needed that reveals the seismic signals embedded in random noise in the seismograms with little prior knowledge of their spectral content or temporal features [11]. The Wiener filter [12] is an optimal filter that minimizes the mean-square-error criterion, under the assumption that the signal and noise are independent.
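As a concrete illustration of this conditioning step, the short Python sketch below applies a zero-phase Butterworth bandpass to a single trace; the corner frequencies, filter order and sampling rate are illustrative assumptions rather than settings prescribed by SAMERA.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(trace, fs, f_lo=1.0, f_hi=20.0, order=4):
    """Zero-phase Butterworth bandpass; corner frequencies are illustrative."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)  # forward-backward filtering avoids phase distortion

# Toy usage: an 8 Hz "signal" buried in white noise, sampled at 200 Hz
fs = 200.0
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)
clean = bandpass(raw, fs)
```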

Extracting signal event information from the filtered data is another challenge. First, signal disaggregation or wavefield separation is needed to isolate different types of seismic waves or separate different wave components [13]. Second, to acquire the time/velocity information from all wave components, the arrivals of body waves (P and S) can be picked based on change-point detection methods, which characterize the statistical properties of the time-series data [14]. However, for surface waves, which are highly distorted or suffer strong attenuation and dispersion, the arrival-picking methods fail to obtain the wanted information. Thus, cross-correlation techniques as well as temporal stacking are employed to extract the surface-wave propagation properties. Based on the extracted seismic wave components, to further analyze the wave properties related to velocity, space and depth, time-frequency analysis is usually applied to estimate the different velocities associated with different frequencies and thereby characterize different subsurface properties [15].
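As a simple illustration of the time-frequency analysis step, the sketch below computes a short-time Fourier transform of one trace; the window length and the synthetic chirp standing in for dispersed surface waves are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import stft

# Time-frequency analysis of a single trace (illustrative 2-second windows).
# |S(f, t)|^2 can be used to track frequency-dependent arrivals before
# dispersion/velocity analysis.
fs = 200.0
t = np.arange(0, 30, 1 / fs)
trace = np.sin(2 * np.pi * (2 + 0.2 * t) * t)   # toy chirp standing in for a dispersed wave train
f, tt, S = stft(trace, fs=fs, nperseg=int(2 * fs))
power = np.abs(S) ** 2                          # spectrogram: frequency content versus time
```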

C. COMPUTING LAYER
In the computing layer, the reconstruction of 2D/3D seismic images often involves computations such as linear/non-linear inversion and optimization, and spatial and temporal stacking. Those computations are traditionally performed on central servers and often need data from all sensors. To implement SAMERA, a key requirement is to perform those computations in sensor networks in real time. Thus the main research challenge is to develop distributed iterative computing algorithms under network bandwidth constraints. Song et al. [16]–[26] pioneered research on in-situ seismic imaging in distributed sensor networks. The idea is to let each node compute in an asynchronous fashion and communicate with neighbors only while solving the 2D/3D image reconstruction problem. By eliminating the synchronization points and multi-hop communication used in existing distributed algorithms, the approach can better scale to large numbers of nodes and exhibits better resilience and stability. Also, randomized gossip/broadcast-based iterative methods have been used. In these methods, each node asynchronously performs multiple rounds of iterations on its own data and gossips/broadcasts its intermediate result to neighbors for a weighted averaging operation. The iterative computing can be based on first-order or second-order methods (such as distributed ADMM [27] methods). The second-order methods are expected to have a faster convergence rate but a higher computation cost at each iteration. Each node repeats this process until reaching a consensus across the network.

FIGURE 4. Distributed iterative computing paradigm.

Distributed iterative computing is a paradigm-shifting computing problem and has received much attention [28]–[31] in the computer science, mathematics & statistics and machine learning communities in the past five years. It is advancing rapidly, because it is increasingly necessary for many big data and Internet of Things applications beyond subsurface imaging. In this computing paradigm, each node holds a privately known objective function and can only communicate with its immediate neighbors (avoiding multi-hop if possible). A great effort has been devoted to solving the decentralized (fully distributed) consensus optimization problem, especially in applications like distributed machine learning, multi-agent optimization, etc. Several algorithms have been proposed for general convex and (sub)differentiable functions. By setting the objective function as a least-square, the decentralized least-square problem can be seen as a special case of the following problem:

\[
\min_{x \in \mathbb{R}^n} \; F(x) := \sum_{i=1}^{p} F_i(x) \tag{1}
\]

where p nodes are in the network and they need to collaboratively estimate the model parameters x. Each node i locally holds the function F_i and can communicate only with its immediate neighbors. Figure 4 illustrates this paradigm. The literature can be categorized into two categories. (1) Synchronous algorithms, where nodes need to synchronize the iterations; in other words, each node needs to wait for all its neighbors' information in order to perform the next computation round. Considering the problem in (1), (sub)gradient-based methods have been proposed [32]–[37]. However, it has been shown that these methods can only converge to a neighborhood of an optimal solution in the case of a fixed step size [38]. Modified algorithms have been developed in [36] and [37], which use diminishing step sizes to guarantee convergence. Other related algorithms sharing similar ideas were discussed in [39]–[45]. The D-NC algorithm proposed in [37] was demonstrated to have an outer-loop convergence rate of O(1/k^2) in terms of objective value error. This rate is the same as that of the optimal centralized Nesterov accelerated gradient method, whereas decentralized algorithms usually converge more slowly than their centralized versions. However, the number of consensus iterations within the outer loop grows significantly along the iterations. Shi [46] developed a method based on a correction to the mixing matrix for the Decentralized Gradient Descent (DGD) method [38] that avoids diminishing step sizes. (2) Asynchronous algorithms, where nodes do not need to synchronize the iterations. Decentralized optimization methods for asynchronous models have been designed in [41], [47], [48]. The works in [47], [48] leverage the alternating direction method of multipliers (ADMM) for the computation part, and in each iteration one node needs to randomly wake up one of its neighbors to exchange information. However, the communication schemes in these two works are based on unicast, which is less preferable than broadcast in wireless sensor networks. Tsitsiklis [41] proposed an asynchronous model for distributed optimization, but in that model each node maintains only a partial vector of the global variable, which differs from our goal of decentralized consensus, in which each node holds an estimate of the global common interest. The first broadcast-based asynchronous distributed consensus method was proposed in [31]. However, that algorithm is designed only for the consensus averaging problem without a "real objective function". Nedic [49] first filled this gap by considering general decentralized convex optimization similar to (1) under the asynchronous broadcast setting; it adopts the asynchronous broadcast model of [31] and develops a (sub)gradient-based update rule for its computation. By replacing the (sub)gradient computation with full local optimization, an algorithm with an improved number of communication rounds has been designed [50].
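To make the consensus-optimization paradigm of (1) concrete, the sketch below implements a synchronous decentralized gradient descent in the spirit of [38], assuming each node i privately holds a least-squares term F_i(x) = ||A_i x - b_i||^2 and a doubly stochastic mixing matrix W over the network; it is an illustrative toy, not a SAMERA implementation.

```python
import numpy as np

def dgd(A_list, b_list, W, step=1e-3, iters=500):
    """Decentralized gradient descent: each node mixes neighbors' estimates,
    then takes a gradient step on its private objective F_i(x) = ||A_i x - b_i||^2."""
    p = len(A_list)                      # number of nodes
    n = A_list[0].shape[1]               # model dimension
    X = np.zeros((p, n))                 # row i = node i's current estimate of x
    for _ in range(iters):
        X_mix = W @ X                    # weighted averaging with neighbors (W doubly stochastic)
        grads = np.stack([2 * A_list[i].T @ (A_list[i] @ X[i] - b_list[i]) for i in range(p)])
        X = X_mix - step * grads         # local gradient step
    return X                             # rows converge toward a common (near-)optimal x

# Toy ring network of 4 nodes
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((20, 5)) for _ in range(4)]
x_true = rng.standard_normal(5)
b_list = [A @ x_true for A in A_list]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X = dgd(A_list, b_list, W)
```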

For either synchronous or asynchronous algorithms, the design goal should be to generate the same or near-same results as the centralized algorithm with minimal communication cost. Research on distributed iterative computing has advanced rapidly in the past several years. This section does not intend to survey all methods, but merely points out some related works and potential directions for the computing-layer design of SAMERA.

D. CONTROL LAYER
A control software with a GUI can be connected to the network to view the computed 2D/3D images and adjust system and algorithm parameter settings such as resolution, filter and regularization parameters. The sensing, processing and computing layers can execute automatically and autonomously; the control layer, on the other hand, allows users to control those layers, such as choosing different parameters or even different algorithm combinations, to achieve the desired effects. For example, a user may choose migration imaging versus travel-time tomography based on their sensitivity to different types of seismic waves from the subsurface properties and the SNR, or combine them with ambient noise imaging to view more or fewer details at the cost of more or less resource usage. This layer is currently application specific and depends on user preferences, but user interface standards will gradually emerge in the future.

III. SYSTEM PROTOTYPE DESIGN EXAMPLES
This section introduces several SAMERA system prototype design examples based on popular seismic imaging methods. In each of the following subsections, the presentation of the processing and computing layers is emphasized, as they are the main intellectual challenges. The sensing and control layers are, to a large extent, engineering and interface issues, as described in the previous section.

A. TRAVEL-TIME SEISMIC TOMOGRAPHY
Travel-time seismic tomography (TomoTT) uses the body wave (P and S) arrival times at sensor nodes to derive the subsurface velocity structure; the tomography model is continuously refined and evolves as more seismic events are recorded over time. This method is often used in earthquake seismology, where the event source is a natural or injection-induced (micro-)earthquake. TomoTT applies to scenarios where the signal-to-noise ratio (SNR) of the body waves is good enough for arrival-time picking. Body-wave arrival-time picking and tomographic inversion are performed in the processing and computing layers, respectively.

1) Processing Layer

When the SNR is acceptable, arrival-time picking techniques are used to identify the body wave (P or S) onset times. Because of the strong background noise in seismic data [11], [51], the arrivals can be hard to pick or even unidentifiable. Commonly used arrival-picking algorithms are based on statistical anomaly detection methods, including, but not limited to, the characteristic function (CF) [14], the short- and long-time average ratio (STA/LTA) [52], the Akaike information criterion (AIC) [53], the wavelet transform (WT) [54], cross-correlation [55], the modified energy ratio (MER) [51], and higher-order statistics (HOS) [56], [57]. In general, the arrival-picking problem [58] can be formulated as the ratio of a short-term characteristic function to a long-term characteristic function (STCF/LTCF). The characteristic function (CF) [14] can be some kind of statistical metric, including energy [51], moments [57] and likelihood estimates [59].
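As an illustration of the STCF/LTCF formulation, the sketch below implements a basic STA/LTA-style onset detector in the spirit of [52]; the window lengths and trigger threshold are illustrative choices.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Ratio of short-term to long-term average of signal energy (a simple CF)."""
    cf = trace ** 2                                # characteristic function: instantaneous energy
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(cf, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(cf, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)                     # avoid division by zero

def pick_onset(trace, fs, threshold=3.0):
    """Return the first sample index where STA/LTA exceeds an (illustrative) threshold."""
    ratio = sta_lta(trace, fs)
    above = np.flatnonzero(ratio > threshold)
    return above[0] if above.size else None
```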

Most arrival-picking methods were designed for single-channel (e.g., vertical-axis) seismic data. With triaxial geophones, or three-component seismic data, two polarization parameters can be used to distinguish between P waves, S waves, and noise [60], because the wave types and orientations affect the polarization of the signal onsets. As the data have three orthogonal ground-motion records corresponding to E, N, and Z, the onset polarization can indicate the type (surface or body) and phase (P or S) of the wave [12].
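One common way to obtain such polarization attributes is an eigen-decomposition of the three-component covariance matrix around a candidate onset. The sketch below computes a rectilinearity measure this way; it illustrates the general idea only and is not the specific picker of [60].

```python
import numpy as np

def rectilinearity(window_enz):
    """window_enz: (n_samples, 3) array of E, N, Z samples around a candidate onset.
    The eigenvalues of the covariance matrix describe the particle-motion ellipsoid;
    a value near 1 indicates nearly linear (body-wave-like) polarization."""
    cov = np.cov(window_enz.T)                 # 3x3 covariance of the three components
    lam = np.sort(np.linalg.eigvalsh(cov))     # ascending eigenvalues lam1 <= lam2 <= lam3
    return 1.0 - (lam[0] + lam[1]) / (2.0 * lam[2] + 1e-12)
```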

FIGURE 5. Processing and computing algorithm flows of travel-time seismic tomography (band-pass filtering, denoising, arrival picking, event location, ray tracing and tomographic inversion). The wireless sign in the figure means communication between nodes is needed.

2) Computing Layer
The picked arrival times are then used to estimate the event source location and origin time in the subsurface, as shown in Fig. 5. Thereafter, ray tracing and tomographic inversion are performed. Given the source locations of the seismic events and an initial velocity model, ray tracing finds the ray paths from the seismic source locations to the sensor nodes. After ray tracing, the seismic tomography problem is formulated as a large sparse matrix inversion problem. Suppose there are in total M seismic events, N sensors, and L cells in the 3D tomography model. Let A ∈ R^{NM×L} be the matrix of ray information between the M events and N sensors, t ∈ R^{NM×1} be the vector of travel times between the M events and N sensors, and s ∈ R^{L×1} be the 3D tomography model to be computed. The tomographic inversion problem can be formulated as

\[
s^{*} = \arg\min_{s} \; \|t - A s\|_2^2 + \lambda \|s\|_2^2 \tag{2}
\]

where λ is the regularization parameter. In centralized algorithms, this system of equations is solved by sparse matrix methods such as LSQR or other conjugate-gradient methods [61]. Various parallel algorithms have also been developed to speed up the execution of these methods [62]–[64]. However, being designed for high-performance computers, these centralized approaches need a significant amount of computational/memory resources and require global information (e.g., t and A).
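For reference, the damped least-squares problem in equation (2) maps directly onto a sparse LSQR solver, where the damping factor plays the role of the square root of λ; the matrix sizes below are toy values for illustration.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# Toy centralized solve of equation (2): rows = event-sensor ray paths, columns = model cells.
rng = np.random.default_rng(1)
A = sprandom(600, 200, density=0.05, random_state=1)   # sparse ray-path matrix (NM x L)
s_true = rng.standard_normal(200)                      # "true" slowness model
t = A @ s_true                                         # synthetic travel times
lam = 0.1
s_est = lsqr(A, t, damp=np.sqrt(lam))[0]               # minimizes ||t - A s||^2 + lam * ||s||^2
```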

In addition, double-difference tomography [65] was developed to simultaneously solve for the event locations and the three-dimensional tomography model. It is reported to produce more accurate event locations and velocity structure near the source region than standard tomography. Its mathematical formulation has the same form as equation (2).

3) Compute TomoTT in Sensor Networks
To implement travel-time seismic tomography in sensor networks, an effective approach is to let each node compute the tomography in an asynchronous fashion and communicate with neighbors only. By eliminating the synchronization points and multi-hop communication that are used in existing distributed algorithms, the system can scale better to a large number of nodes and exhibit better fault tolerance and stability. In the harsh geological field environment, network disruptions are not unusual and reliable multi-hop communication is not easy to achieve. Prototype systems based on TomoTT [16]–[20], [22]–[26] have been designed and demonstrated. In [26], a distributed computing algorithm based on vertical partitioning was proposed. The key idea is to split the least-square problem into vertical partitions, similar to the multisplitting method but aligned with the geometry of the tomography. A node in each partition is then chosen as a landlord to gather the necessary information from the other nodes in the partition and compute a part of the tomography. The computation on each landlord is entirely local and the communication cost is bounded. After the partial solution is obtained, it is combined with the other local solutions to generate the entire tomography model. In [17], [23], [25], a block-iterative Kaczmarz method with a component-averaging mechanism [66], [67] was proposed. The key idea is that each node runs multiple iterations of randomized Kaczmarz; the results are then aggregated through component averaging and distributed back to each node for the next iterations. After multiple iterations, the algorithm converges and generates the tomography model. Decentralized synchronous and asynchronous methods with random gossip and broadcast [18], [50], [68] were also developed to solve the inversion problem in equation (2).
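The sketch below illustrates the component-averaging idea in the spirit of [25], [66], [67]: each node runs randomized Kaczmarz sweeps over its own rows of equation (2), and the intermediate models are then averaged across nodes. The partitioning, iteration counts and plain-mean aggregation are simplifying assumptions.

```python
import numpy as np

def local_kaczmarz(A, t, s, sweeps=5, rng=None):
    """Randomized Kaczmarz sweeps over one node's rows (its locally observed rays)."""
    rng = rng or np.random.default_rng()
    for _ in range(sweeps * A.shape[0]):
        i = rng.integers(A.shape[0])
        a = A[i]
        s = s + (t[i] - a @ s) / (a @ a + 1e-12) * a   # project onto the i-th hyperplane
    return s

def distributed_cav(A_parts, t_parts, n_cells, rounds=20):
    """Each node iterates locally, then the models are averaged component-wise across
    nodes (a plain mean here; a weighted, communication-efficient step in a real network)."""
    s = np.zeros(n_cells)
    for _ in range(rounds):
        locals_ = [local_kaczmarz(A_i, t_i, s.copy()) for A_i, t_i in zip(A_parts, t_parts)]
        s = np.mean(locals_, axis=0)                   # component averaging / consensus step
    return s
```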

B. MIGRATION-BASED MICROSEISMIC IMAGING
Migration-based Microseismic Imaging (MMI) applies reverse-time migration (RTM) principles to locate microseismic source locations [69]–[72]. With a given velocity model, the time-reversed extrapolation of the observed wavefields can be calculated based on the wave equations. The extrapolated wavefields from different receivers stack together to light up the location of the seismic sources [73], as illustrated in Figure 6. MMI [73]–[79] typically uses body waves and has two main steps: forward modeling and stacking (also called the imaging condition), carried out in the processing and computing layers, respectively.

1) Processing Layer

In the first step, a bandpass filter with an appropriate bandwidth should be applied: narrow enough to contain the signals of interest, but not so narrow as to filter out signal components. For active-source migration, since the sources are controllable and the seismic acquisition array density is high, the raw data and the corresponding wavefields can be completely separated for specific source characterization [70], [80]. In contrast, for passive seismic source imaging, the detection of a seismic event becomes critical; otherwise there will not be enough time-series data for wavefield construction, or computation resources will be wasted on background noise [71], [72], [81]. To extract the windows of seismic signal segments containing events, time framing based on energy segmentation can be applied (Fig. 6). In addition, for better location, a localized normalization operator is necessary to deal with the issue of unbalanced amplitudes [71].

Let S(x; t; x_s) denote the source wavefield generated from the source location x_s and recorded at a spatial location x, which satisfies the wave equation

\[
\left( \frac{1}{v^2(x)} \frac{\partial^2}{\partial t^2} - \nabla^2 \right) S(x; t; x_s) = 0,
\]

where v is the velocity and ∇² is the (spatial) Laplacian operator [82]. RTM algorithms use the zero lag of the cross-correlation between the source and receiver wavefields to produce an image I_{r_i} at the receiver location x_{r_i} [70], [83]:

\[
I_{r_i}(x, t) = S(x; t; x_s)\, R_i(x; t; x_s) \tag{3}
\]

Here R_i(x; t; x_s) is the receiver wavefield, which is approximated using a finite-difference solution of the wave equation [82], [84], [85]. For microseismic imaging, under the virtual-source assumption, the source wavefield is eliminated by seismic interferometry using receiver wavefields [72], [73], [86].
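To illustrate what a finite-difference solution of the wave equation involves, the sketch below uses a deliberately simplified 1D, constant-velocity, second-order scheme that extrapolates a wavefield backward in time from a trace injected at a receiver position; practical RTM uses 2D/3D heterogeneous media, absorbing boundaries and higher-order stencils.

```python
import numpy as np

def extrapolate_1d(rec_trace, rec_idx, v, dx, dt, nx):
    """Time-reversed injection of a recorded trace at rec_idx into a 1D constant-velocity
    medium, using the standard second-order finite-difference wave-equation update."""
    c = (v * dt / dx) ** 2                        # Courant factor squared (needs v*dt/dx <= 1)
    u_prev, u_curr = np.zeros(nx), np.zeros(nx)
    wavefield = []
    for sample in rec_trace[::-1]:                # reverse time: last recorded sample first
        u_next = np.zeros(nx)
        u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                        + c * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
        u_next[rec_idx] += sample                 # inject the recorded data at the receiver location
        wavefield.append(u_next)
        u_prev, u_curr = u_curr, u_next
    return np.array(wavefield)                    # shape (nt, nx): the receiver wavefield R_i
```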

2) Computing Layer

Eq. (3) implies that every receiver generates a 4D wavefield I_{r_i}. The imaging-condition step combines all receivers' wavefields to form the final migration image, often by summation of the wavefields:

\[
I(x, t) = \sum_{i=0}^{N-1} I_{r_i}(x, t) \tag{4}
\]

Conventionally, this is done by backward-propagating the data from all sensors at once. Assuming the velocity model is accurate and the data contain no noise, the image I(x, t) should have non-zero values only if all the backward-propagated wavefields are non-zero at the seismicity location x and time t. However, this does not work well with real, noisy data. A hybrid imaging condition [71] was proposed for more effective microseismic imaging:

\[
I(x, t) = \prod_{j=0}^{N/n - 1} \sum_{k=0}^{n-1} I_{r_{jn+k}}(x, t) \tag{5}
\]

where n is the local summation window length. The length n should be selected such that neighboring receivers are backward-propagated together while far-apart receivers are cross-correlated. Equation (5) requires N/n computations of reverse-time modeling. Notice that choosing the hybrid imaging condition parameters needs careful design and evaluation, considering the trade-off between network resource constraints and image quality.

This method is capable of producing high-resolution images of multiple source locations. It adopts the migration imaging principles for locating microseismic hypocenters. It treats the wavefield back-propagated from each individual receiver as an independent wavefield, and defines microseismic hypocenters as the locations where all the wavefields coincide with maximum local energies in the final image in both space and time. Microseismic monitoring based on migration imaging is currently considered the most effective technique to track the geometry of stimulated fracture networks in resource extraction [8].

3) Compute MMI in Sensor Networks
In sensor networks, the temporal stacking and spatial stacking in Fig. 6 can be implemented based on equation (6), which is a slight modification of equation (5).

\[
I(x) = \prod_{j=0}^{N/n - 1} I_{c_j}(x) = \prod_{j=0}^{N/n - 1} \sum_{t} \sum_{k=0}^{n-1} I_{r_{jn+k}}(x, t) \tag{6}
\]

In equation (6), I_{r_{jn+k}}(x, t) is the 4D wavefield from each node, and I_{c_j}(x) is the 3D temporal-stacking image of each cluster. In other words, it is a dimension-reduction operation from 4D to 3D. The temporal stacking, including the summation of wavefields and the collapse of the time axis, can be performed within a cluster of sensors [87]. This decreases the communication cost in the next step, where the spatial stacking is performed between clusters. In the spatial stacking, the images of the same location x from different clusters are essentially cross-correlated. The communication cost of passing the 3D image I_{c_j}(x) is still considered expensive for sensor networks. Gaussian beam migration can be further applied to limit the computation and communication to a narrow beam [88], instead of the full wavefield I_{r_{jn+k}}(x, t). A primitive prototype of SAMERA for migration-based microseismic imaging has been designed [89]. By using Gaussian beams around these rays, the stacking of amplitudes is restricted to physically relevant regions only. This reduces the computational and communication burden by tens of times without damaging the imaging quality.
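A small sketch of the two-level stacking in equation (6), assuming each node has already computed its back-propagated wavefield as a (nt, nx, nz) array: nodes within a cluster sum their wavefields and collapse the time axis, and the resulting 3D cluster images are then multiplied across clusters. Network communication is abstracted away here.

```python
import numpy as np

def temporal_stack(cluster_wavefields):
    """Within a cluster: sum the member wavefields (nt, nx, nz) and collapse the time axis."""
    summed = np.sum(cluster_wavefields, axis=0)   # sum over the n nodes in the cluster
    return summed.sum(axis=0)                     # collapse time -> 3D image I_c(x)

def spatial_stack(cluster_images):
    """Across clusters: multiply the 3D cluster images (the cross-correlation-like product)."""
    return np.prod(cluster_images, axis=0)        # final image I(x)

# Toy example: 2 clusters of 3 nodes, on a tiny grid standing in for (x, z)
rng = np.random.default_rng(2)
wavefields = rng.random((2, 3, 50, 16, 16))       # (clusters, nodes, nt, nx, nz)
image = spatial_stack([temporal_stack(w) for w in wavefields])
```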

FIGURE 6. Processing and computing algorithm flows of seismic migration imaging (band-pass filtering, time framing, time reversal, forward modeling, temporal stacking and spatial stacking). The wireless sign in the figure means communication between nodes is needed.

FIGURE 7. Processing and computing algorithm flows of ambient noise seismic imaging (downsampling, denoising, band-pass filtering, cross-correlation, frequency-time analysis, construction of travel-time surfaces, spatial stacking and 3D inversion). The wireless sign in the figure means communication between nodes is needed.

C. AMBIENT NOISE SEISMIC IMAGING

To fully utilize a dense seismic array when there are few earthquakes or active sources, Ambient Noise Seismic Imaging (ANSI) [90]–[92] has been developed to image the subsurface using surface waves. ANSI uses radiation from random sources in the earth to first estimate the Green's function between pairs of stations [93]–[96] and then invert for a 3D earth structure [91], [92]. Many applications have relied on relatively low-frequency data (between 0.05-0.5 Hz) from ocean noise [97], which images structure at the scale of kilometers, while more local structures can be imaged with higher-order wave modes (higher frequencies) and denser networks [98]–[100]. The main algorithm flows of ANSI (Figure 7) are described as follows:

1) Processing Layer

This layer derives surface-wave travel times through noise cross-correlations. Data conditioning, such as downsampling, denoising and band-pass filtering, is first applied to the raw seismic signals. Thereafter, the noise cross-correlation C_{AB} between two stations is computed:

\[
C_{AB}(t) = \int_{-\infty}^{\infty} u_A(\tau)\, u_B(t + \tau)\, d\tau = \int_{-\infty}^{\infty} \left[ -G_{AB}(\tau) + G_{AB}(-\tau) \right] d\tau. \tag{7}
\]

where u_A and u_B are the noise recordings at locations A and B [101]. Theoretical studies have shown that if the noise wavefield is sufficiently diffusive, the cross-correlation between two stations can be used to approximate the Green's function G_{AB} between the two sensors or locations [94], [102]. By calculating the ambient noise cross-correlations between one center station and all other stations, the seismic wavefield excited by a virtual source located at the center station can be constructed. Based on the noise cross-correlations, the period-dependent surface-wave phase and group travel times can be determined between each pair of stations.
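A compact FFT-based sketch of the correlation in equation (7) for two equal-length noise records follows; in practice the records are cut into windows, normalized (e.g., one-bit normalization or spectral whitening) and the windowed correlations are stacked, which is omitted here.

```python
import numpy as np

def noise_cross_correlation(u_a, u_b):
    """Full cross-correlation of two equal-length noise records via FFT.
    The stacked result approximates the (two-sided) Green's function between the stations."""
    n = len(u_a)
    nfft = 2 * n                                     # zero-padding avoids circular wrap-around
    spec = np.fft.rfft(u_a, nfft).conj() * np.fft.rfft(u_b, nfft)
    cc = np.fft.irfft(spec, nfft)
    return np.concatenate((cc[-(n - 1):], cc[:n]))   # lags from -(n-1) to +(n-1)
```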

2) Computing Layer
This layer first generates a series of frequency-dependent 2D surface-wave phase velocity maps, and then 3D inversions are performed across the array to form the final 3D tomography. The eikonal and Helmholtz tomography methods can be adopted to determine the 2D phase velocity maps based on empirical wavefield tracking [15], [103]. For each event i, the surface-wave phase velocity at each location is measured directly from the spatial derivatives of the observed wavefield:

\[
\frac{1}{c_i^2(r)} = |\nabla \tau(r_i, r)|^2 - \frac{\nabla^2 A_i(r)}{A_i(r)\, \omega^2}, \tag{8}
\]

where τ and A represent the phase travel-time and amplitude measurements, and k_i ≅ ∇τ(r_i, r) c_i(r), c and ω denote the direction of wave propagation, the phase velocity and the angular frequency, respectively. k_i can be derived directly by solving the 2D Helmholtz wave equation; the eikonal equation, which drops the amplitude term, can be derived from equation (8) under the infinite-frequency approximation. While the above equations are defined for 'events,' it is important to note that the cross-correlation method of equation (7) effectively turns each station into an 'event' recorded at every other station, so the wavefield from a virtual source at each station, as well as the spatial derivatives in equation (8), can be approximated from the set of cross-correlations with that station. After the 2D image reconstruction, the frequency-dependent phase velocities at each location can be used to invert for vertical profiles. The combination of all 2D models and vertical profiles across the study area produces the final 3D model.
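Under the infinite-frequency (eikonal) approximation the amplitude term in equation (8) is dropped, and the phase velocity follows directly from the gradient of a travel-time surface. The sketch below shows this on a regular grid; the grid spacing and the absence of smoothing are simplifying assumptions.

```python
import numpy as np

def eikonal_phase_velocity(tau, dx, dy):
    """tau: 2D travel-time surface tau(r_i, r) on a regular grid (seconds), indexed [y, x].
    Returns the local phase-velocity map c(r) = 1 / |grad tau| (eikonal approximation)."""
    dtau_dy, dtau_dx = np.gradient(tau, dy, dx)         # numerical spatial derivatives
    slowness = np.sqrt(dtau_dx ** 2 + dtau_dy ** 2)     # |grad tau| = 1 / c
    return 1.0 / np.maximum(slowness, 1e-12)
```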

3) Compute ANSI in Sensor Networks
To compute ANSI in sensor networks, there are two main challenges, which can be addressed as follows. (1) The noise cross-correlation step requires every pair of nodes to exchange data with each other at the beginning. The communication cost can be reduced by subsampling, applying a bandpass filter, and limiting the cross-correlation to nodes in near range while approximating the far-range cross-correlations through distributed interpolation. (2) The eikonal tomography step requires all nodes to stack their locally calculated velocity maps to form the final 2D/3D subsurface image. The stacking can be done through in-network data aggregation or decentralized consensus. Each approach has its own advantages and disadvantages: aggregation works better when the network is reliable, while consensus may be better when the network is intermittent. A primitive prototype of SAMERA based on ANSI [21] has been built and demonstrated.
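As an illustration of the consensus option for the stacking step, the sketch below runs a randomized pairwise-gossip average of per-node 2D velocity maps; after enough exchanges every node holds an approximation of the network-wide mean map. The ring topology and iteration count are toy assumptions.

```python
import numpy as np

def gossip_average(maps, neighbors, rounds=200, rng=None):
    """maps: list of per-node 2D arrays; neighbors[i]: list of node i's neighbors.
    Randomized pairwise gossip: two neighboring nodes repeatedly replace their maps
    with the pairwise average, driving all nodes toward the global mean."""
    rng = rng or np.random.default_rng()
    maps = [m.astype(float).copy() for m in maps]
    for _ in range(rounds):
        i = int(rng.integers(len(maps)))
        j = int(rng.choice(neighbors[i]))
        avg = 0.5 * (maps[i] + maps[j])
        maps[i], maps[j] = avg, avg.copy()
    return maps

# Toy usage: 4 nodes on a ring, each with a noisy local phase-velocity map
rng = np.random.default_rng(3)
local_maps = [0.5 + 0.05 * rng.standard_normal((16, 16)) for _ in range(4)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
stacked = gossip_average(local_maps, ring)
```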

IV. CONCLUSION
Creating SAMERA requires an interdisciplinary collaboration and advancement of sensor networks, signal processing, distributed computing, and geophysical imaging. Prototypes based on several seismic imaging methods have been demonstrated, yet many research challenges and opportunities remain, such as: (1) Full automation: geophysical imaging today often involves a human in the loop, which is not a surprise, as the first-generation optical cameras were not fully automatic either. For example, the initial velocity model and some algorithm parameters are still chosen based on experience, so the questions are: how to self-learn and self-optimize parameters to make subsurface imaging fully automatic? how to build the initial velocity model automatically? how to integrate machine learning (e.g., data-driven approaches) with physics-based modeling to enable better automation? (2) Fast completion: distributed iterative computing frameworks and principles have been laid out, yet many practical issues remain, such as: how to decide stopping/pausing criteria to avoid over-fitting? how to generate a subsurface image faster under bandwidth constraints and random network failures? (3) Data fusion: different geophysical sensors/methods are sensitive to different geophysical properties of the subsurface; how to integrate different geophysical methods in a joint inversion to generate better subsurface images? These research questions are no longer unique to the creation of SAMERA; they have received much attention in big data, machine learning, Internet of Things and other domains beyond geophysics, and better and better solutions are being developed every day. With the recent rapid advancement and cost reduction of both hardware and computing algorithms, it is the right time to start creating SAMERA: the camera to see through the subsurface is coming!

REFERENCES[1] M. Wilde-Piórko, W. H. Geissler, J. Plomerová, M. Grad, V. Babuška,

E. Brückl, J. Cyziene, W. Czuba, R. England, E. Gaczynski, and Others,“PASSEQ 2006–2008: passive seismic experiment in Trans-Europeansuture zone,” Studia geophysica et geodaetica, vol. 52, no. 3, pp. 439–448, 2008.

[2] W.-Z. Song, R. Huang, M. Xu, B. A. Shirazi, and R. LaHusen,“Design and deployment of sensor network for real-time high-fidelityvolcano monitoring,” IEEE Transactions on Parallel and DistributedSystems, vol. 21, no. 11, pp. 1658–1674, 2010. [Online]. Available:http://dx.doi.org/10.1109/TPDS.2010.37

[3] W.-Z. Song, R. Huang, M. Xu, A. Ma, B. Shirazi, and R. LaHusen, “Air-dropped sensor network for real-time high-fidelity volcano monitoring,”in Proceedings of the 7th international conference on Mobile systems,applications, and services. ACM, 2009, pp. 305–318.

[4] W.-Z. Song, B. Shirazi, R. LaHusen, S. Kedar, S. Chien, F. Webb,J. Pallister, D. Dzurisin, S. Moran, M. Lisowski, D. Tran, A. Davis, andD. Pieri, “Optimized autonomous space in-situ Sensor-Web for volcanomonitoring,” in IEEE Aerospace Conference 2008, Big Sky, MT, USA,2008.

[5] R. Huang, W.-Z. Song, M. Xu, N. Peterson, B. Shirazi, and R. LaHusen,“Real-world sensor network for long-term volcano monitoring: Designand findings,” IEEE Transactions on Parallel and Distributed Systems,vol. 23, no. 2, pp. 321–329, 2012.

[6] E. Upton, “Raspberry pi 3,” URLhttps://www.raspberrypi.org/products/raspberry-pi-3-model-b, 2016.

[7] A. Antoniou, Digital signal processing. McGraw-Hill, 2016.[8] S. Maxwell, Microseismic Imaging of Hydraulic Fracturing: Improved

Engineering of Unconventional Shale Reservoirs. SEG, 2014.[9] G. D. Bensen, M. H. Ritzwoller, M. P. Barmin, A. L. Levshin, F. Lin,

M. P. Moschetti, N. M. Shapiro, and Y. Yang, “Processing seismicambient noise data to obtain reliable broad-band surface wave dispersionmeasurements,” Geophysical Journal International, vol. 169, no. 3, pp.1239–1260, 2007.

[10] A. Douglas, “Bandpass filtering to reduce noise on seismograms: Is therea better way?” Bulletin of the Seismological Society of America, vol. 87,no. 3, pp. 770–777, 1997.

[11] Z. Du, G. R. Foulger, and W. Mao, “Noise reduction for broad-band,three-component seismograms using data-adaptive polarization filters,”Geophysical Journal International, vol. 141, no. 3, pp. 820–828, 2000.

VOLUME 4, 2016 9

Page 10: Toward Creating Subsurface Camerasensorweb.engr.uga.edu/wp-content/uploads/2018/09/... · network of geophysical sensors, where a 2D or 3D image is computed and recorded; a control

Song et al.: Toward Creating Subsurface Camera: SAMERA

[12] F. Li and W. Song, “Automatic arrival identification system for real-time microseismic event location,” in SEG Technical Program ExpandedAbstracts 2017, S. E. of Geophysicists, Ed., 2017, pp. 2934–2939.

[13] N. Nakata, J. P. Chang, J. F. Lawrence, and P. Boué, “Body waveextraction and tomography at long beach, california, with ambient-noiseinterferometry,” Journal of Geophysical Research: Solid Earth, vol. 120,no. 2, pp. 1159–1173, 2015.

[14] R. V. Allen, “Automatic earthquake recognition and timing from singletraces,” Bulletin of the Seismological Society of America, vol. 68, no. 5,pp. 1521–1532, 1978.

[15] F. Lin, M. H. Ritzwoller, and R. Snieder, “Eikonal tomography:surface wave tomography by phase front tracking across a regionalbroad-band seismic array,” Geophysical Journal International, vol.177, no. 3, pp. 1091–1110, Jun. 2009. [Online]. Available:http://dx.doi.org/10.1111/j.1365-246X.2009.04105.x

[16] W. Song, L. Shi, G. Kamath, Y. Xie, Z. Peng, and Others, “Real-timein-situ seismic imaging: Overview and case study,” in 2015 SEG AnnualMeeting. Society of Exploration Geophysicists, 2015.

[17] G. Kamath, L. Shi, W.-Z. Song, and J. Lees, “Distributed travel-timeseismic tomography in large-scale sensor networks,” Journal of Paralleland Distributed Computing, vol. 89, pp. 50–64, 2016.

[18] L. Zhao, W.-Z. Song, L. Shi, and X. Ye, “Decentralised seismic to-mography computing in cyber-physical sensor systems,” Cyber-PhysicalSystems, vol. 1, no. 2-4, pp. 91–112, 2015.

[19] G. Kamath, L. Shi, E. Chow, and W.-Z. Song, “Distributed tomographywith adaptive mesh refinement in sensor networks,” International Journalof Sensor Networks, vol. 23, no. 1, pp. 40–52, 2017.

[20] P. Ramanan, G. Kamath, and W.-Z. Song, “Indigo: an in situ distributedgossip framework for sensor networks,” International Journal of Dis-tributed Sensor Networks, vol. 11, no. 10, p. 706083, 2015.

[21] M. Valero, J. Clemente, G. Kamath, Y. Xie, F.-C. Lin, andW. Song, “Real-Time ambient noise subsurface imaging in distributedsensor networks,” in Smart Computing (SMARTCOMP), 2017 IEEEInternational Conference on. IEEE, 2017, pp. 1–8. [Online]. Available:http://dx.doi.org/10.1109/SMARTCOMP.2017.7947040

[22] L. Shi, W.-Z. Song, F. Dong, and G. Kamath, “Sensor network forreal-time in-situ seismic tomography,” in International Conference onInternet of Things and Big Data (IoTBD 2016), 2016. [Online].Available: http://dx.doi.org/10.5220/0005897501180128

[23] G. Kamath, W.-Z. Song, P. Ramanan, L. Shi, and J. Yang, “Dristi: Dis-tributed real-time in-situ seismic tomographic imaging,” in Computer andInformation Technology; Ubiquitous Computing and Communications;Dependable, Autonomic and Secure Computing; Pervasive Intelligenceand Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE InternationalConference on. IEEE, 2015, pp. 1244–1251.

[24] G. Kamath, L. Shi, E. Chow, and W.-Z. Song, “Distributed multigridtechnique for seismic tomography in sensor networks,” in InternationalConference on Big Data Computing and Communications. Springer,2015, pp. 297–310.

[25] G. Kamath, L. Shi, and W.-Z. Song, “Component-average based dis-tributed seismic tomography in sensor networks,” in 2013 IEEE Interna-tional Conference on Distributed Computing in Sensor Systems. IEEE,2013, pp. 88–95.

[26] L. Shi, W.-Z. Song, M. Xu, Q. Xiao, J. M. Lees, and G. Xing, “Imagingvolcano seismic tomography in sensor networks,” in The 10th AnnualIEEE Communications Society Conference on Sensor and Ad Hoc Com-munications and Networks (IEEE SECON), 2013.

[27] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, and Others, “Dis-tributed optimization and statistical learning via the alternating directionmethod of multipliers,” Foundations and Trends in Machine learning,vol. 3, no. 1, pp. 1–122, 2011.

[28] R. Sun and W. Yin, “Asynchronous coordinate descent under morerealistic assumptions,” in UCLA CAM Report 17-30. UCLA, 2017.

[29] Z. Peng, Y. Xu, M. Yan, and W. Yin, “On the convergence of asyn-chronous parallel iteration with arbitrary delay,” in UCLA CAM Report16-86. UCLA, 2016.

[30] T. Wu, K. Yuan, Q. Ling, W. Yin, and A. H. Sayed, “Decentralizedconsensus optimization with asynchrony and delay,” in IEEE AsilomarConference on Signals, Systems, and Computers. IEEE, 2016.

[31] T. C. Aysal, M. E. Yildiz, A. D. Sarwate, and A. Scaglione,“Broadcast gossip algorithms for consensus,” IEEE Transactions onSignal Processing, vol. 57, no. 7, pp. 2748–2761, Jul. 2009. [Online].Available: http://dx.doi.org/10.1109/tsp.2009.2016247

[32] I. Matei and J. S. Baras, “Performance evaluation of the Consensus-Based distributed subgradient method under random communicationtopologies,” Selected Topics in Signal Processing, IEEE Journalof, vol. 5, no. 4, pp. 754–771, Aug. 2011. [Online]. Available:http://dx.doi.org/10.1109/JSTSP.2011.2120593

[33] A. Nedic and A. Ozdaglar, “Distributed subgradient methods forMulti-Agent optimization,” Automatic Control, IEEE Transactionson, vol. 54, no. 1, pp. 48–61, Jan. 2009. [Online]. Available:http://dx.doi.org/10.1109/TAC.2008.2009515

[34] A. Nedic and A. Olshevsky, “Distributed optimization over time-varyingdirected graphs,” in Decision and Control (CDC), 2013 IEEE 52ndAnnual Conference on, Dec. 2013, pp. 6855–6860. [Online]. Available:http://dx.doi.org/10.1109/CDC.2013.6760975

[35] ——, “Stochastic Gradient-Push for strongly convex functions on Time-Varying directed graphs,” arXiv:1406.2075, 2014.

[36] A, “Fast distributed first-order methods,” PhD thesis, MassachusettsInstitute of Technology, 2012.

[37] J. M. Dusan Jakovetic, “Fast distributed gradient methods,”arXiv:1112.2972v4, 2014.

[38] K. Yuan, Q. Ling, and W. Yin, “On the convergence of decentralizedgradient descent,” arXiv:1310.7063, 2013.

[39] M. Zargham, A. Ribeiro, and A. Jadbabaie, “A distributed linesearch for network optimization,” in American Control Conference(ACC), 2012, Jun. 2012, pp. 472–477. [Online]. Available:http://dx.doi.org/10.1109/ACC.2012.6314986

[40] L. Xiao, S. Boyd, and S. Lall, “A scheme for robust distributed sensorfusion based on average consensus,” in Proceedings of the 4th Interna-tional Symposium on Information Processing in Sensor Networks, ser.IPSN ’05, 2005.

[41] J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans, “Distributedasynchronous deterministic and stochastic gradient optimizationalgorithms,” Automatic Control, IEEE Transactions on, vol. 31,no. 9, pp. 803–812, Sep. 1986. [Online]. Available:http://dx.doi.org/10.1109/TAC.1986.1104412

[42] J. N. Tsitsiklis, “Problems in decentralized decision making and compu-tation,” Technical report, DTIC Document, 1984.

[43] H. Terelius, U. Topcu, and R. Murray, “Decentralized multi-agent opti-mization via dual decomposition,” IFAC, 2011.

[44] G. Shi and K. H. Johansson, “Finite-time and asymptotic convergenceof distributed averaging and maximizing algorithms,” arXiv:1205.1733,2012.

[45] M. Rabbat and R. Nowak, “Distributed optimization in sensor networks,”in Information Processing in Sensor Networks, 2004. IPSN 2004. ThirdInternational Symposium on, Apr. 2004, pp. 20–27. [Online]. Available:http://dx.doi.org/10.1109/IPSN.2004.1307319

[46] W. Shi, Q. Ling, G. Wu, and W. Yin, “EXTRA: An exact First-Orderalgorithm for decentralized consensus optimization,” arXiv:1404.6264,2014.

[47] E. Wei and A. Ozdaglar, “On the o(1/k) convergence of asynchronousdistributed alternating direction method of multipliers,” arXiv:1307.8254,2013.

[48] F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem, “Asynchronous dis-tributed optimization using a randomized alternating direction method ofmultipliers,” arXiv:1303.2837, 2013.

[49] A. Nedic, “Asynchronous Broadcast-Based convex optimizationover a network,” IEEE Transactions on Automatic Control,vol. 56, no. 6, pp. 1337–1351, Jun. 2011. [Online]. Available:http://dx.doi.org/10.1109/TAC.2010.2079650

[50] L. Zhao, W.-Z. Song, X. Ye, and Y. Gu, “Asynchronous broadcast-based decentralized learning in sensor networks,” International Journalof Parallel, Emergent and Distributed Systems, pp. 1–19, 2017.

[51] J. Wong, L. Han, J. Bancroft, and R. Stewart, “Automatic time-picking offirst arrivals on noisy microseismic data,” CSEG. 0 0.2 0.4 0.6 0.8, vol. 1,no. 1.2, pp. 1–4, 2009.

[52] M. Baer and U. Kradolfer, “An automatic phase picker for local and teleseismic events,” Bulletin of the Seismological Society of America, vol. 77, no. 4, pp. 1437–1445, 1987.

[53] T. Takanami and G. Kitagawa, “Estimation of the arrival times of seismic waves by multivariate time series model,” Annals of the Institute of Statistical Mathematics, vol. 43, no. 3, pp. 407–433, 1991.

[54] K. S. Anant and F. U. Dowla, “Wavelet transform methods for phase identification in three-component seismograms,” Bulletin of the Seismological Society of America, vol. 87, no. 6, pp. 1598–1612, 1997.


[55] J. B. Molyneux and D. R. Schmitt, “First-break timing: Arrival onset times by direct correlation,” Geophysics, vol. 64, no. 5, pp. 1492–1501, 1999.

[56] S. K. Yung and L. T. Ikelle, “An example of seismic time picking by third-order bicoherence,” Geophysics, vol. 62, no. 6, pp. 1947–1952, 1997.

[57] F. Li, J. Rich, K. J. Marfurt, and H. Zhou, “Automatic event detection on noisy microseismograms,” in SEG Technical Program Expanded Abstracts 2014. Society of Exploration Geophysicists, 2014, pp. 2363–2367.

[58] J. Akram and D. W. Eaton, “A review and appraisal of arrival-time picking methods for downhole microseismic data,” Geophysics, vol. 81, no. 2, pp. KS71–KS91, 2016.

[59] S. Li, Y. Cao, C. Leamon, Y. Xie, L. Shi, and W. Song, “Online seismic event picking via sequential change-point detection,” in Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on. IEEE, 2016, pp. 774–779. [Online]. Available: http://dx.doi.org/10.1109/ALLERTON.2016.7852311

[60] C. Baillard, W. C. Crawford, V. Ballu, C. Hibert, and A. Mangeney, “An automatic kurtosis-based P- and S-phase picker designed for local seismic networks,” Bulletin of the Seismological Society of America, vol. 104, no. 1, pp. 394–409, 2014.

[61] J. M. Lees and R. S. Crosson, “Bayesian ART versus conjugate gradient methods in tomographic seismic imaging: An application at Mount St. Helens, Washington,” Lecture Notes-Monograph Series, vol. 20, pp. 186–208, 1991.

[62] H. Urdaneta and B. Biondi, “Shortest-path calculation of first arrival traveltimes by expanding wavefronts,” Stanford Exploration Project, Report, vol. 82, pp. 1–7.

[63] J. S. Liu, F. T. Liu, J. Liu, and T. Y. Hao, “Parallel LSQR algorithms used in seismic tomography,” Diqiu Wuli Xuebao (Chinese Journal of Geophysics), vol. 49, no. 2, pp. 540–545, 2006.

[64] B. Butrylo, M. Tudruj, and L. Masko, “Distributed Formulation of Artificial Reconstruction Technique with Reordering of Critical Data Sets,” in Parallel and Distributed Computing, 2006. ISPDC’06. The Fifth International Symposium on. IEEE, 2006, pp. 90–98.

[65] H. Zhang and C. Thurber, “Development and applications of double-difference seismic tomography,” Pure and Applied Geophysics, vol. 163, no. 2-3, pp. 373–403, 2006.

[66] D. Gordon and R. Gordon, “Component-averaged row projections: A robust, block-parallel scheme for sparse linear systems,” SIAM Journal on Scientific Computing, vol. 27, pp. 1092–1117, 2005.

[67] Y. Censor, D. Gordon, and R. Gordon, “Component averaging: An efficient iterative parallel algorithm for large and sparse unstructured problems,” Parallel Computing, vol. 27, no. 6, pp. 777–808, 2001.

[68] L. Zhao, W.-Z. Song, and X. Ye, “Fast decentralized gradient descent method and applications to in-situ seismic tomography,” in Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015, pp. 908–917.

[69] B. Artman, “Imaging passive seismic data,” Geophysics, vol. 71, no. 4, pp. SI177–SI187, 2006.

[70] M. Wong, B. L. Biondi, and S. Ronen, “Imaging with primaries and free-surface multiples by joint least-squares reverse time migration,” Geophysics, vol. 80, no. 6, pp. S223–S235, 2015.

[71] J. Sun, T. Zhu, S. Fomel, W.-Z. Song, and others, “Investigating the possibility of locating microseismic sources using distributed sensor networks,” in 2015 SEG Annual Meeting. Society of Exploration Geophysicists, 2015.

[72] N. Nakata and G. C. Beroza, “Reverse time migration for microseismic sources using the geometric mean as an imaging condition,” Geophysics, vol. 81, no. 2, pp. KS51–KS60, 2016.

[73] S. Wu, Y. Wang, Y. Zheng, and X. Chang, “Microseismic source locations with deconvolution migration,” Geophysical Journal International, vol. 212, no. 3, pp. 2088–2115, 2017.

[74] H. Kao and S.-J. Shan, “The source-scanning algorithm: Mapping the distribution of seismic sources in time and space,” Geophysical Journal International, vol. 157, no. 2, pp. 589–594, 2004. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2004.02276.x

[75] D. Gajewski and E. Tessmer, “Reverse modelling for seismic event characterization,” Geophysical Journal International, vol. 163, no. 1, pp. 276–284, 2005. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2005.02732.x

[76] G. A. McMechan, “Determination of source parameters by wavefield extrapolation,” Geophysical Journal of the Royal Astronomical Society, vol. 71, no. 3, pp. 613–628, 1982. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.1982.tb02788.x

[77] B. Artman, I. Podladtchikov, and B. Witten, “Source location using time-reverse imaging,” Geophysical Prospecting, vol. 58, no. 5, pp. 861–873, 2010. [Online]. Available: http://dx.doi.org/10.1111/j.1365-2478.2010.00911.x

[78] B. Witten and B. Artman, “Signal-to-noise estimates of time-reverse images,” Geophysics, vol. 76, no. 2, pp. MA1–MA10, 2011. [Online]. Available: http://geophysics.geoscienceworld.org/content/76/2/MA1.abstract

[79] S. Kremers, A. Fichtner, G. B. Brietzke, H. Igel, C. Larmat, L. Huang, and M. Käser, “Exploring the potentials and limitations of the time-reversal imaging of finite seismic sources,” Solid Earth, vol. 2, no. 1, pp. 95–105, 2011. [Online]. Available: http://www.solid-earth.net/2/95/2011/

[80] M. E. Yavuz and F. L. Teixeira, “Space–frequency ultrawideband time-reversal imaging,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 4, pp. 1115–1124, 2008.

[81] H. Yang, T. Li, N. Li, Z. He, and Q. H. Liu, “Time-gating-based time reversal imaging for impulse borehole radar in layered media,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 5, pp. 2695–2705, 2016.

[82] Y. Zhang, S. Xu, N. Bleistein, and G. Zhang, “True-amplitude, angle-domain, common-image gathers from one-way wave-equation migrations,” Geophysics, vol. 72, no. 1, pp. S49–S58, 2007.

[83] J. F. Claerbout, Imaging the Earth’s Interior. Blackwell Scientific Publications, Oxford, 1985, vol. 1.

[84] M. Araya-Polo, J. Cabezas, M. Hanzich, M. Pericas, F. Rubio, I. Gelado, M. Shafiq, E. Morancho, N. Navarro, E. Ayguade, and others, “Assessing accelerator-based HPC reverse time migration,” IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 1, pp. 147–162, 2011.

[85] H. Liu, Z. Long, B. Tian, F. Han, G. Fang, and Q. H. Liu, “Two-dimensional reverse-time migration applied to GPR with a 3-D-to-2-D data conversion,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 10, pp. 4313–4320, 2017.

[86] G. T. Schuster, J. Yu, J. Sheng, and J. Rickett, “Interferometric/daylight seismic imaging,” Geophysical Journal International, vol. 157, no. 2, pp. 838–852, 2004. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2004.02251.x

[87] B. Witten and J. Shragge, “Extended wave-equation imaging conditions for passive seismic data,” Geophysics, vol. 80, no. 6, pp. WC61–WC72, 2015.

[88] S. Rentsch, S. Buske, S. Lüth, and S. A. Shapiro, “Location of seismicity using Gaussian beam type migration,” in 2004 SEG Annual Meeting, 2004.

[89] S. Wang, F. Li, and W. Song, “Microseismic source location with distributed reverse time migration,” 2018.

[90] F. Lin, M. P. Moschetti, and M. H. Ritzwoller, “Surface wave tomography of the western United States from ambient seismic noise: Rayleigh and Love wave phase velocity maps,” Geophysical Journal International, vol. 173, no. 1, pp. 281–298, 2008. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2008.03720.x

[91] M. P. Moschetti, M. H. Ritzwoller, F. Lin, and Y. Yang, “Crustal shear wave velocity structure of the western United States inferred from ambient seismic noise and earthquake data,” Journal of Geophysical Research, vol. 115, no. B10, Oct. 2010. [Online]. Available: http://dx.doi.org/10.1029/2010JB007448

[92] F.-C. Lin, M. H. Ritzwoller, Y. Yang, M. P. Moschetti, and M. J. Fouch, “Complex and variable crustal and uppermost mantle seismic anisotropy in the western United States,” Nature Geoscience, vol. 4, no. 1, pp. 55–61, 2011. [Online]. Available: http://dx.doi.org/10.1038/ngeo1036

[93] P. Roux, K. G. Sabra, P. Gerstoft, W. A. Kuperman, and M. C. Fehler, “P-waves from cross-correlation of seismic noise,” Geophysical Research Letters, vol. 32, no. 19, 2005.

[94] R. Snieder, “Extracting the Green’s function from the correlation of coda waves: A derivation based on stationary phase,” Physical Review E, vol. 69, no. 4, p. 046610, Apr. 2004. [Online]. Available: http://dx.doi.org/10.1103/PhysRevE.69.046610

[95] N. M. Shapiro, M. Campillo, L. Stehly, and M. H. Ritzwoller, “High-resolution surface wave tomography from ambient seismic noise,” Science, vol. 307, no. 5715, pp. 1615–1618, 2005.

[96] P. Gouedard, P. Roux, and M. Campillo, “Small scale seismic inversion using surface waves extracted from noise cross-correlation,” J. Acoust. Soc. Am., vol. 123, no. 3, pp. 26–31, 2008.

[97] M. S. Longuet-Higgins, “A theory of the origin of microseisms,” Phil. Trans. R. Soc. Lond. A, vol. 243, no. 857, pp. 1–35, 1950.


[98] M. Picozzi, S. Parolai, D. Bindi, and A. Strollo, “Characterization of shallow geology by high-frequency seismic noise tomography,” Geophysical Journal International, vol. 176, no. 1, pp. 164–174, Jan. 2009. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2008.03966.x

[99] Y. Yang, M. H. Ritzwoller, and C. H. Jones, “Crustal structure determined from ambient noise tomography near the magmatic centers of the Coso region, southeastern California,” Geochemistry, Geophysics, Geosystems, vol. 12, no. 2, Feb. 2011. [Online]. Available: http://dx.doi.org/10.1029/2010GC003362

[100] F. Lin, D. Li, R. W. Clayton, and D. Hollis, “High-resolution 3D shallow crustal structure in Long Beach, California: Application of ambient noise tomography on a dense seismic array,” Geophysics, vol. 78, no. 4, pp. Q45–Q56, Jul. 2013. [Online]. Available: http://dx.doi.org/10.1190/geo2012-0453.1

[101] G. D. Bensen, M. H. Ritzwoller, M. P. Barmin, A. L. Levshin, F. Lin, M. P. Moschetti, N. M. Shapiro, and Y. Yang, “Processing seismic ambient noise data to obtain reliable broad-band surface wave dispersion measurements,” Geophysical Journal International, vol. 169, no. 3, pp. 1239–1260, Jun. 2007. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2007.03374.x

[102] O. I. Lobkis and R. L. Weaver, “On the emergence of the Green’s function in the correlations of a diffuse field,” The Journal of the Acoustical Society of America, vol. 110, no. 6, p. 3011, 2001. [Online]. Available: http://dx.doi.org/10.1121/1.1417528

[103] F. Lin and M. H. Ritzwoller, “Helmholtz surface wave tomography for isotropic and azimuthally anisotropic structure,” Geophysical Journal International, vol. 186, pp. 1104–1120, 2011. [Online]. Available: http://dx.doi.org/10.1111/j.1365-246X.2011.05070.x
