% This file was created with JabRef 2.9.
% Encoding: ISO8859_1
%-----2024-----%
@PHDTHESIS{yin2024THsgi,
author = {Ziyi Yin},
title = {Solving geophysical inverse problems with scientific machine learning},
school = {Georgia Institute of Technology},
year = {2024},
month = {06},
address = {Atlanta},
abstract = {Solving inverse problems involves estimating unknown parameters of interest from indirect measurements. Specifically, geophysical inverse problems seek to determine various Earth properties critical for geophysical exploration, carbon control, monitoring, and earthquake detection. These problems pose unique challenges: the parameters of interest are often high-dimensional, and the mapping from parameters to observables is computationally demanding. Moreover, these problems are typically non-convex and ill-posed, meaning that multiple sets of model parameters can adequately fit the observations, and inversion algorithms depend on accurate initial model parameters.
This thesis introduces several innovative methods to tackle these challenges using scientific machine learning techniques. It discusses algorithms and software frameworks that utilize surrogate and generative models to achieve scalable and reliable inversion. It also examines the integration of conditional generative models with physics for Bayesian variational inference and uncertainty quantification. These methods have been applied to two critical inverse problems in geophysical applications: monitoring geological carbon storage and full-waveform inversion, both of which are plagued by the aforementioned computational challenges.
The thesis consists of six papers. The first two papers present a scalable, interoperable, and differentiable programming framework for learned multiphysics inversion, showcased through realistic synthetic case studies in geological carbon storage monitoring. The third paper introduces a computationally efficient and reliable algorithm that employs surrogate models, particularly Fourier neural operators, to accelerate inversion. The reliability of this algorithm is ensured by using normalizing flows as learned constraints to safeguard the accuracy of the surrogate models throughout the inversion process. The subsequent paper explores a joint inversion approach and an explainable deep neural classifier for time-lapse seismic imaging and carbon dioxide leakage detection during geological carbon storage. The final two papers introduce amortized and semi-amortized variational inference approaches that employ information-preserving physics-informed summary statistics and refinements to provide computationally feasible and reliable uncertainty quantification in high-dimensional full-waveform inversion problems. They also assess the impact of the inherent uncertainty in these ill-posed inversion problems on subsequent imaging tasks.},
keywords = {PhD, inverse problems, generative models, Bayesian inference, time-lapse, JRM, ccs, gcs, end-to-end, Fourier neural operators, normalizing flows, conditional normalizing flows, variational inference, WISE, WISER, monitoring, FWI, RTM, CIG, uncertainty quantification, scientific machine learning},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2024/yin2024THsgi/yin2024THsgi.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2024/yin2024THsgi/yin2024THsgi_pres.pdf}
}
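%-----2023-----%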
@PHDTHESIS{zhang2023THlsw,
author = {Yijun Zhang},
title = {Large scale wavefield reconstruction via weighted matrix factorization and seismic survey design},
school = {Georgia Institute of Technology},
year = {2023},
month = {06},
address = {Atlanta},
abstract = {Seismic data acquisition plays a crucial role in identifying potential oil and gas reservoirs during the early phases of exploration. However, obtaining finely sampled seismic data can be costly or even physically impossible. Recent developments in Compressive Sensing have resulted in seismic data being increasingly collected at random along spatial coordinates. Although random sampling improves acquisition efficiency, it shifts the burden from seismic acquisition to data processing. Wavefield recovery is one of the processes required for reconstructing fully sampled seismic data from coarsely subsampled data. Among the various techniques proposed for wavefield reconstruction, matrix completion methods are computationally efficient and straightforward to implement. These methods exploit the low-rank structure of fully sampled seismic data. However, matrix completion performs well at low-to-mid frequencies and degrades at higher frequencies because the low-rank structure fails to accurately approximate higher frequencies. To address this issue, this thesis proposes a recursively weighted matrix completion method. Although effective, this method is computationally expensive, and a more efficient method for handling 2D seismic data is also proposed. Compared to 2D seismic data, 3D seismic data can detect reflections outside of the 2D plane but poses a computational challenge due to its large scale. To overcome this challenge, this thesis proposes a parallel weighted reconstruction method to improve the reconstruction of 3D seismic data. Land seismic data presents a greater challenge due to contamination by ground roll, which consists of surface waves with a high spatial frequency content and large amplitude. To address this issue, a practical workflow is proposed to improve the recovery of land seismic data. Although matrix completion is an efficient technique for reconstructing fully sampled seismic data, the optimal acquisition design is still being investigated. Recent studies have shown that the spectral gap can be used to predict and characterize the quality of wavefield reconstruction via matrix completion for a given subsampling mask. Based on these findings, a simulation-free seismic survey design for both 2D and 3D seismic data is proposed to obtain an improved subsampling survey by minimizing the spectral gap ratio. Furthermore, this concept is extended to the design of a time-lapse seismic survey, which is essential for reservoir management and monitoring geological carbon storage but is difficult and expensive to acquire. To improve the reconstruction of the time-lapse wavefield, a joint recovery model is proposed that leverages the benefits of non-replicated baseline and monitor subsampled seismic data. A time-lapse seismic survey design that incorporates the joint recovery model with the spectral gap is proposed to generate sparse, non-replicated time-lapse acquisition geometries that favor wavefield recovery.},
keywords = {PhD, time-lapse, JRM, acquisition, survey design, wavefield reconstruction, spectral gap, matrix factorization, 5D reconstruction, compressed sensing, frequency-domain, parallel, signal processing},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2023/zhang2023THlsw/zhang2023THlsw.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2023/zhang2023THlsw/zhang2023THlsw_pres.pdf},
url2 = {https://slim.gatech.edu/Publications/Public/Thesis/2023/zhang2023THlsw}
}
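%-----2022-----%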
@PHDTHESIS{siahkoohi2022THdgmf,
author = {Ali Siahkoohi},
title = {Deep generative models for solving geophysical inverse problems},
school = {Georgia Institute of Technology},
year = {2022},
month = {07},
address = {Atlanta},
abstract = {My thesis presents several novel methods to facilitate solving large-scale
inverse problems by utilizing recent advances in machine learning, and
particularly deep generative modeling. Inverse problems involve reliably
estimating unknown parameters of a physical model from indirect observed data
that are noisy. Solving inverse problems presents primarily two challenges.
The first challenge is to capture and incorporate prior knowledge into
ill-posed inverse problems whose solutions cannot be uniquely identified. The
second challenge is the computational complexity of solving inverse problems,
particularly the cost of quantifying uncertainty. The main goal of this
thesis is to address these issues by developing practical data-driven methods
that are scalable to geophysical applications in which access to high-quality
training data is often limited.
There are six papers included in this thesis. A majority of these papers
focus on addressing computational challenges associated with Bayesian
inference and uncertainty quantification, while others focus on developing
regularization techniques to improve inverse problem solution quality and
accelerate the solution process. These papers demonstrate the applicability
of the proposed methods to seismic imaging, a large-scale geophysical inverse
problem with a computationally expensive forward operator for which
sufficiently capturing the variability in the Earth's heterogeneous
subsurface through a training dataset is challenging.
The first two papers present computationally feasible methods of applying a
class of methods commonly referred to as deep priors to seismic imaging and
uncertainty quantification. I also present a systematic Bayesian approach to
translate uncertainty in seismic imaging to uncertainty in downstream tasks
performed on the image. The next two papers aim to address the reliability
concerns surrounding data-driven methods for solving Bayesian inverse
problems by leveraging variational inference formulations that offer the
benefits of fully-learned posteriors while being directly informed by physics
and data. The last two papers are concerned with correcting forward modeling
errors where the first proposes an adversarially learned postprocessing step
to attenuate numerical dispersion artifacts in wave-equation simulations due
to coarse finite-difference discretizations, while the second trains a
Fourier neural operator surrogate forward model in order to accelerate the
quantification of uncertainty due to errors in the forward model
parameterization.},
keywords = {PhD, Geophysics, Inverse Problems, Generative models, Bayesian Inference, Deep Learning},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2022/siahkoohi2022THdgmf/siahkoohi2022THdgmf.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2022/siahkoohi2022THdgmf/siahkoohi2022THdgmf_pres.pdf}
}
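%-----2020-----%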
@PHDTHESIS{sharan2020THlsh,
author = {Shashin Sharan},
title = {Large scale high-frequency seismic wavefield reconstruction, acquisition via rank minimization and sparsity-promoting source estimation},
school = {Georgia Institute of Technology},
year = {2020},
month = {11},
address = {Atlanta},
abstract = {Seismic data reconstruction on a dense periodic grid from seismic
data acquired on a coarse grid is a common approach followed by most oil &
gas companies. This approach allows them to save on operationally
challenging and expensive dense seismic data acquisition. Dense seismic
data is one of the key requirements for generating high-resolution images
of the earth's subsurface for exploration and production decisions. Based
on the Compressive Sensing (CS) paradigm, low-rank matrix factorization
based seismic data reconstruction methods are computationally cheaper and
scalable to large datasets in comparison to sparsity-promotion based
methods, which rely on transformations to certain transform domains that
can be computationally expensive for large datasets. Although low-rank
matrix factorization based methods perform well at lower frequencies, their
performance degrades at higher frequencies due to the increase in rank of
the approximating matrix. One contribution of this thesis is a recursively
weighted matrix factorization approach that improves the quality of
reconstructed data at higher frequencies by exploiting the similarity
between adjacent frequency slices. Although the recursively weighted method
improves the data reconstruction quality at higher frequencies, it can be
computationally expensive for large scale seismic datasets because the
interdependence of frequencies prevents their simultaneous reconstruction.
Another contribution of this thesis is a computationally efficient
recursively weighted framework for large scale datasets that parallelizes
data reconstruction over the rows of the low-rank factors of each frequency
slice. To reduce the cost and turnaround time of seismic data acquisition,
simultaneous source acquisition has been adopted by the oil and gas
industry in the last few years. Another contribution of this thesis is a
low-rank based method for simultaneous separation and reconstruction of
seismic data on a dense periodic grid from large scale seismic data
acquired with simultaneous source acquisition. The next part of this thesis
focuses on accurate detection of fractures created by hydraulic fracturing
in unconventional reservoirs for economical production of oil and gas.
Fracturing of rocks during hydraulic fracturing gives rise to microseismic
events, which are localized along these fractures. In this work, a
sparsity-promoting microseismic source estimation framework is proposed to
detect closely spaced microseismic sources and to estimate their associated
source-time functions from noisy microseismic data recorded by receivers
along the earth's surface or along monitor wells. Detecting closely spaced
microseismic events helps in delineating fractures, and the estimated
source-time functions are useful for estimating a fracture's origin in time
and, potentially, the source mechanism. Moreover, this method makes no
prior assumption on the number of microseismic sources or the shape of
their source-time functions, and is therefore useful for detecting
microseismic sources with different source signatures and frequency
content. The last part of this thesis focuses on sparsity-promoting
photoacoustic imaging to detect photoabsorbers and estimate the associated
source-time functions. Traditional photoacoustic imaging can only estimate
the locations of photoacoustic absorbers and requires dense transducer
coverage, whereas the sparsity-promotion based method can work with reduced
transducer sampling, reducing the overall data storage cost.},
keywords = {PhD, Sparsity-promoting, Low-Rank, Wavefield Reconstruction, Compressed Sensing},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2020/sharan2020THlsh/sharan2020THlsh.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2020/sharan2020THlsh/sharan2020THlsh_pres.pdf}
}
@PHDTHESIS{louboutin2020THmfi,
author = {Mathias Louboutin},
title = {Modeling for inversion in exploration geophysics},
school = {Georgia Institute of Technology},
year = {2020},
month = {03},
address = {Atlanta},
abstract = {Seismic inversion, and more generally geophysical exploration, aims at better
understanding the earth's subsurface, which is one of today's most important
challenges. Firstly, it contains natural resources that are critical to our
technologies such as water, minerals and oil and gas. Secondly, monitoring
the subsurface in the context of CO2 sequestration, earthquake detection and
global seismology are of major interests with regard to safety and the
environment hazards. However, the technologies to monitor the subsurface or
find resources are scientifically extremely challenging. Seismic inversion
can be formulated as a mathematical optimization problem that minimizes the
difference between field recorded data and numerically modeled synthetic
data. Solving this optimization problem then requires modeling wave
propagation numerically, thousands of times, in large three-dimensional
representations of part of the earth's subsurface. The
mathematical and computational complexity of this problem, therefore, calls
for software design that abstracts these requirements and facilitates
algorithm and software development.
My thesis addresses some of the challenges that arise from these problems;
mainly the computational cost and access to the right software for research
and development. In the first part, I will discuss a performance metric that
improves the current runtime-only benchmarks in exploration geophysics. This
metric, the roofline model, first provides insight at the hardware level of
the performance of a given implementation relative to the maximum achievable
performance. Second, this study demonstrates that the choice of numerical
discretization has a major impact on the achievable performance depending on
the hardware at hand and shows that a flexible framework with respect to the
discretization parameters is necessary. In the second part, I will introduce
and describe Devito, a symbolic finite-difference DSL that provides a
high-level interface to the definition of partial differential equations
(PDE) such as the wave equation. Devito, from the symbolic definition of
PDEs, then generates and compiles highly optimized C code on-the-fly to
compute the solution of the PDE. The combination of the high-level
abstractions and the just-in-time compiler enables research in geophysical
exploration and PDE-constrained optimization based on the paradigm of
separation of concerns. This allows researchers to concentrate on their
respective field of study while having access to computationally performant
solvers with a flexible and easy-to-use interface to successfully implement
complex representations of the physics. The second part of my thesis will be
split into two sub-parts; first describing the symbolic application
programming interface (API), before describing and benchmarking the
just-in-time compiler. I will end my thesis with concluding remarks, the
latest developments and a brief description of projects that were enabled by
Devito.},
keywords = {PhD, Finite-differences, HPC, performance, Imaging, Modeling, Inversion, FWI, RTM},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2020/louboutin2020THmfi/louboutin2020THmfi.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2020/louboutin2020THmfi/louboutin2020THmfi_pres.pdf}
}
@PHDTHESIS{witte2020THsal,
author = {Philipp A. Witte},
title = {Software and algorithms for large-scale seismic inverse problems},
school = {Georgia Institute of Technology},
year = {2020},
month = {02},
address = {Atlanta},
abstract = {Seismic imaging and parameter estimation are an important class of inverse
problems with practical relevance in resource exploration, carbon control and
monitoring systems for geohazards. The goal of seismic inverse problems is to
image subsurface geological structures and estimate physical rock properties
such as wave speed or density. Mathematically, this can be achieved by
solving an optimization problem in which we minimize the mismatch between
numerically modeled data and observed data from a seismic survey. As wave
propagation through a medium is described by wave equations, seismic inverse
problems involve solving a large number of partial differential equations
(PDEs) during numerical optimization using finite difference modeling, making
them computationally expensive. Additionally, seismic inverse problems are
typically ill-posed, non-convex or ill-conditioned, thus making them
challenging from a mathematical standpoint as well. Similar to the field of
deep learning, this calls for software that is not only optimized for
performance, but also enables geophysical domain specialists to experiment
with algorithms in high-level programming languages and using different
computing environments, such as high-performance computing (HPC) clusters or
the cloud. Furthermore, they call for the adaptation of dimensionality
reduction techniques and stochastic algorithms to address computational cost
from the algorithmic side.
This thesis makes three distinct contributions to address computational
challenges encountered in seismic inverse problems and to facilitate
algorithmic development in this field. Part one introduces a large-scale
framework for seismic modeling and inversion based on the paradigm of
separation of concerns, which combines a user interface based on domain
specific abstractions with a Python package for automatic code generation to
solve the underlying PDEs. The modular code structure makes it possible to
manage the complexity of a seismic inversion code, while matrix-free linear
operators and data containers enable the implementation of algorithms in a
fashion that closely resembles the underlying mathematical notation. The
second contribution of this thesis is an algorithm for seismic imaging that
addresses its high computational cost and large memory footprint through a
combination of on-the-fly Fourier transforms, stochastic sampling techniques
and sparsity-promoting optimization. The algorithm combines the best of both
time- and frequency-domain inversion, as the memory footprint is independent of
the number of modeled time steps, while time-to-frequency conversions avoid
the need to solve Helmholtz equations, which involve inverting
ill-conditioned matrices. Part three of this thesis introduces a novel
approach for adapting the cloud for high-performance computing applications
like seismic imaging, which does not rely on a fixed cluster of permanently
running virtual machines. Instead, computational resources are automatically
started and terminated by the cloud environment during runtime and the
workflow takes advantage of cloud-native technologies such as event-driven
computations and containerized batch processing. The performance and cost
analysis shows that this approach is able to address current shortcomings of
the cloud such as inferior resilience, while at the same time reducing
operating costs by up to an order of magnitude. As such, the workflow provides a
strategy for cost effectively running large-scale seismic imaging problems in
the cloud and is a viable alternative to conventional HPC clusters.},
keywords = {PhD, seismic, imaging, RTM, FWI, AWS, software, algorithms},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2020/witte2020THsal/witte2020THsal.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2020/witte2020THsal/witte2020THsal_pres.pdf}
}
@PHDTHESIS{yang2020THsiw,
author = {Mengmeng Yang},
title = {Seismic imaging with extended image volumes and source estimation},
school = {Georgia Institute of Technology},
year = {2020},
month = {03},
address = {Atlanta},
abstract = {Seismic imaging is an important tool for the exploration and production of
oil & gas, carbon sequestration, and the mitigation of geohazards. Through
the process of seismic migration, images of subsurface geological structures
are created from data collected at the surface. These images reflect changes
in the physical rock properties such as wave speed and density. While
significant progress has been made in the development of 3D imaging
technology for complex geological areas, several challenges remain, some of
which are addressed in this thesis. The first main contribution of this
thesis is in the area of creating so-called subsurface-offset gathers, which
play an increasingly important role in seismic imaging because they provide a
multitude of information ranging from the reflection mechanism itself to
information about the dips of specific reflectors and the accuracy of the
background velocity model. Unfortunately, the formation and manipulation of
these gathers
come with exceedingly high computational and storage costs because extended
image volumes are quadratic in the image size. These high costs are avoided
by using techniques from modern randomized linear algebra that allow for
compression of extended image volumes
into low-rank factorized form, i.e., the image volume is approximately
written as an outer product of a tall and a wide matrix. It is demonstrated
that this factorization provides access to different types of
subsurface-offset gathers, including common-image (point)
gathers, without the need to explicitly form this outer product. As a result,
challenging steep dip imaging situations, where conventional horizontal
offset gathers no longer focus, can be handled. Moreover, extended image
volumes for one background velocity model can directly be mapped to those of
another background velocity model. As a result, factorization costs are
incurred only once when examining imaging scenarios for different background
velocity models. The second main contribution of this thesis is on the
development of computationally efficient sparsity-promoting imaging
techniques and on-the-fly source estimation. In this work, an adaptive
technique is proposed where the unknown time signature of the sources is
estimated during imaging. Without accurate knowledge of these source
signatures, seismic images can be wrongly positioned and can have the wrong
amplitudes hampering subsequent geophysical and geological interpretations.
With the presented technique, this problem is mitigated. Finally, a
contribution is made to address the detrimental effects of surface-related
multiples. If not handled correctly, these multiples give rise to unwanted
artifacts in the image. A new technique is introduced to address this issue
in realistic settings where there is a strong density contrast at the ocean
bottom. As a result, the surface-related multiples are mapped to the
reflectors. Because bounce points at the surface can be considered as
sources, this mapping of the multiples rather than removal increases the
subsurface illumination.},
keywords = {PhD, Extended image volumes, low rank, randomized linear algebra,
sparsity-promoting inversion, multiples, source estimation},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2020/yang2020THsiw/yang2020THsiw.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2020/yang2020THsiw/yang2020THsiw_pres.pdf}
}
%-----2019-----%
@PHDTHESIS{peters2019THiso,
author = {Bas Peters},
title = {Intersections and sums of sets for the regularization of inverse problems},
school = {The University of British Columbia},
year = {2019},
month = {05},
address = {Vancouver},
abstract = {Inverse problems in the imaging sciences encompass a variety of
applications. The primary problem of interest is the identification of
physical parameters from observed data that come from experiments governed
by partial differential equations. The secondary type of imaging problem
attempts to reconstruct images and video that are corrupted by, for
example, noise, subsampling, blur, or saturation. The quality of the
solution of an inverse problem is sensitive to issues such as noise and
missing entries in the data. The non-convex seismic full-waveform inversion
problem suffers from parasitic local minima that lead to wrong solutions
that may look realistic even for noiseless data. To meet some of these
challenges, I propose solution strategies that constrain the model
parameters at every iteration to help guide the inversion.
To arrive at this goal, I present new practical workflows, algorithms, and
software that avoid manual tuning parameters and that allow us to
incorporate multiple pieces of prior knowledge. As opposed to penalty
methods, I avoid balancing the influence of multiple pieces of prior
knowledge by working with intersections of constraint sets. I explore and
present advantages of constraints for imaging. Because the resulting
problems are often non-trivial to solve, especially on large 3D grids, I
introduce faster algorithms dedicated to computing projections onto
intersections of multiple sets.
To connect prior knowledge more directly to problem formulations, I also
combine ideas from additive models, such as cartoon-texture decomposition
and robust principal component analysis, with intersections of multiple
constraint sets for the regularization of inverse problems. The result is
an extension of the concept of a Minkowski set.
Examples from non-unique physical parameter estimation problems show that
constraints in combination with projection methods provide control over the
model properties at every iteration. This can lead to improved results when
the constraints are carefully relaxed.},
keywords = {PhD, sets, regularization, FWI, optimization},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2019/peters2019THiso/peters2019THiso.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2019/peters2019THiso/peters2019THiso_pres.pdf}
}
%-----2018-----%
@PHDTHESIS{fang2018THsea,
author = {Zhilong Fang},
title = {Source estimation and uncertainty quantification for wave-equation based seismic imaging and inversion},
school = {The University of British Columbia},
year = {2018},
month = {04},
address = {Vancouver},
abstract = {In modern seismic exploration, wave-equation-based inversion and imaging
approaches are widely employed for their potential of creating
high-resolution subsurface images from seismic data by using the wave
equation to describe the underlying physical model of wave propagation.
Despite their successful practical applications, some key issues remain
unsolved, including local minima, unknown sources, and the largely missing
uncertainty analyses for the inversion. This thesis addresses the following
two aspects: performing the inversion without prior knowledge of the
sources, and quantifying uncertainties in the inversion. An unknown source
can hinder the success of wave-equation-based approaches: a simple time
shift in the source can lead to misplaced reflectors in linearized
inversions or large disturbances in nonlinear problems. Unfortunately,
accurate sources are typically unknown in real problems. Given that the
wave equation depends linearly on the sources, the first major contribution
of this thesis is on-the-fly source estimation techniques for the following
wave-equation-based approaches: (1) time-domain sparsity-promoting
least-squares reverse-time migration; and (2) wavefield-reconstruction
inversion. Exploiting this linear dependence, I project out the sources by
solving a linear least-squares problem, which enables successful
wave-equation-based inversions without prior knowledge of the sources.
Wave-equation-based approaches also produce uncertainties in the resulting
velocity model due to the noisy data, which influence subsequent
exploration and financial decisions. The difficulties of practical
uncertainty quantification lie in: (1) the expensive computations related
to wave-equation solves, and (2) the nonlinear parameter-to-data map. The
second major contribution of this thesis is a computationally feasible
Bayesian framework to analyze uncertainties in the resulting velocity
models. By relaxing the wave-equation constraints, I obtain a less
nonlinear parameter-to-data map and a posterior distribution that can be
adequately approximated by a Gaussian distribution. I derive an implicit
formulation to construct the covariance matrix of the Gaussian
distribution, which allows us to sample the Gaussian distribution in a
computationally efficient manner. I demonstrate that the proposed Bayesian
framework can provide adequately accurate uncertainty analyses for
intermediate- to large-scale problems at an acceptable computational
cost.},
keywords = {PhD, WRI, LS-RTM, source estimation, UQ, FWI},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2018/fang2018THsea/fang2018THsea.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2018/fang2018THsea/fang2018THsea_pres.pdf}
}
%-----2017-----%
@PHDTHESIS{dasilva2017THlso,
author = {Curt Da Silva},
title = {Large-scale optimization algorithms for missing data completion and inverse problems},
school = {The University of British Columbia},
year = {2017},
month = {09},
address = {Vancouver},
abstract = {Inverse problems are an important class of problems
found in many areas of science and engineering. In
these problems, one aims to estimate unknown
parameters of a physical system through indirect
multi-experiment measurements. Inverse problems
arise in a number of fields including seismology,
medical imaging, and astronomy, among others. An
important aspect of inverse problems is the quality
of the acquired data itself. Real-world data
acquisition restrictions, such as time and budget
constraints, often result in measured data with
missing entries. Many inversion algorithms assume
that the input data is fully sampled and relatively
noise free and produce poor results when these
assumptions are violated. Given the multidimensional
nature of real-world data, we propose a new low-rank
optimization method on the smooth manifold of
Hierarchical Tucker tensors. Tensors that exhibit
this low-rank structure can be recovered from
solving this non-convex program in an efficient
manner. We successfully interpolate realistically
sized seismic data volumes using this approach. If
our low-rank tensor is corrupted with non-Gaussian
noise, the resulting optimization program can be
formulated as a convex-composite problem. This class
of problems involves minimizing a non-smooth but
convex objective composed with a nonlinear smooth
mapping. In this thesis, we develop a level set
method for solving convex-composite problems and
prove that the resulting subproblems converge
linearly. We demonstrate that this method is
competitive when applied to examples in noisy tensor
completion, analysis-based compressed sensing, audio
declipping, total-variation deblurring and
denoising, and one-bit compressed sensing. With
respect to solving the inverse problem itself, we
introduce a new software design framework that
manages the cognitive complexity of the various
components involved. Our framework is modular by
design, which enables us to easily integrate and
replace components such as linear solvers, finite
difference stencils, preconditioners, and
parallelization schemes. As a result, a researcher
using this framework can formulate her algorithms
with respect to high-level components such as
objective functions and Hessian operators. We
showcase the ease with which one can prototype such
algorithms in a 2D test problem and, with little
code modification, apply the same method to
large-scale 3D problems.},
keywords = {PhD, optimization, convex composite, tensor completion, low-rank, tensor, interpolation, software design},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2017/dasilva2017THlso/dasilva2017THlso.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2017/dasilva2017THlso/dasilva2017THlso_pres.pdf}
}
@PHDTHESIS{kumar2017THels,
author = {Rajiv Kumar},
title = {Enabling large-scale seismic data acquisition, processing and waveform-inversion via rank-minimization},
school = {The University of British Columbia},
year = {2017},
month = {08},
address = {Vancouver},
abstract = {In this thesis, I adapt ideas from the field of
compressed sensing to mitigate the computational and
memory bottleneck of seismic processing workflows
such as missing-trace interpolation, source
separation and wave-equation based inversion for
large-scale 3- and 5-D seismic data. For
interpolation and source separation using
rank-minimization, I propose three main ingredients,
namely a rank-revealing transform domain, a
subsampling scheme that increases the rank in the
transform domain, and a practical large-scale
data-consistent rank-minimization framework, which
avoids the need for expensive computation of
singular value decompositions. We also devise a
wave-equation based factorization approach that
removes computational bottlenecks and provides
access to the kinematics and amplitudes of
full-subsurface offset extended images via actions
of full extended image volumes on probing vectors,
which I use to perform the amplitude-versus-angle
analyses and automatic wave-equation migration
velocity analyses on complex geological
environments. After a brief overview of matrix
completion techniques in Chapter 1, we propose a
singular value decomposition (SVD)-free
factorization based rank-minimization approach for
large-scale matrix completion problems. Then, I
extend this framework to deal with large-scale
seismic data interpolation problems, where I show
that the standard approach of partitioning the
seismic data into windows, which relies on events
becoming approximately linear in these windows, is
not required when exploiting the low-rank structure
of seismic data. Carefully selected synthetic and
realistic seismic data examples validate the
efficacy of the interpolation framework. Next, I
extend the SVD-free rank-minimization approach to
remove the seismic cross-talk in simultaneous source
acquisition. Experimental results verify that source
separation using the SVD-free rank-minimization
approach is comparable to the sparsity-promotion
based techniques; however, separation via
rank-minimization is significantly faster and more
memory efficient. We further introduce a matrix-vector
formulation to form full-subsurface extended image
volumes, which removes the storage and computational
bottleneck found in conventional methods. I
demonstrate that the proposed matrix-vector
formulation can be used to form different image gathers
with which amplitude-versus-angle and wave-equation
migration velocity analyses are performed, without
requiring prior information on the geologic
dips. Finally, I conclude the thesis by outlining
potential future research directions and extensions
of the thesis work.},
keywords = {PhD, acquisition, processing, waveform inversion, rank minimization, extended image volumes, migration velocity analysis},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2017/kumar2017THels/kumar2017THels.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2017/kumar2017THels/kumar2017THels_pres.pdf}
}
@PHDTHESIS{wason2017THsss,
author = {Haneet Wason},
title = {Simultaneous-source seismic data acquisition and processing with compressive sensing},
school = {The University of British Columbia},
year = {2017},
month = {08},
address = {Vancouver},
abstract = {The work in this thesis adapts ideas from the field of
compressive sensing (CS) that lead to new insights
into acquiring and processing seismic data, where we
can fundamentally rethink how we design seismic
acquisition surveys and process acquired data to
minimize acquisition- and processing-related
costs. Current efforts towards dense source/receiver
sampling and full azimuthal coverage to produce
high-resolution images of the subsurface have led to
the deployment of multiple sources across survey
areas. A step ahead from multisource acquisition is
simultaneous-source acquisition, where multiple
sources fire shots at near-simultaneous/random times
resulting in overlapping shot records, in comparison
to no overlaps during conventional sequential-source
acquisition. Adoption of simultaneous-source
techniques has helped to improve survey efficiency
and data density. The engine that drives
simultaneous-source technology is
simultaneous-source separation --- a methodology
that aims to recover conventional sequential-source
data from simultaneous-source data. This is
essential because many seismic processing techniques
rely on dense and periodic (or regular)
source/receiver sampling. We address the challenge
of source separation through a combination of
tailored simultaneous-source acquisition design and
sparsity-promoting recovery via convex optimization
using l1 objectives. We use CS metrics to
investigate the relationship between marine
simultaneous-source acquisition design and data
reconstruction fidelity, and consequently assert the
importance of randomness in the acquisition system
in combination with an appropriate choice for a
sparsifying transform (i.e., curvelet transform) in
the reconstruction algorithm. We also address the
challenge of minimizing the cost of expensive,
dense, periodically-sampled and replicated
time-lapse surveying and data processing by adapting
ideas from distributed compressive sensing. We show
that compressive randomized time-lapse surveys need
not be replicated to attain acceptable levels of
data repeatability, as long as we know the shot
positions (post acquisition) to a sufficient degree
of accuracy. We conclude by comparing
sparsity-promoting and rank-minimization recovery
techniques for marine simultaneous-source
separation, and demonstrate that recoveries are
comparable; however, the latter approach readily
scales to large-scale seismic data and is
computationally faster.},
keywords = {PhD, acquisition, marine, simultaneous source, source separation, compressive sensing, optimization},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2017/wason2017THsss/wason2017THsss.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2017/wason2017THsss/wason2017THsss_pres.pdf}
}
@PHDTHESIS{oghenekohwo2017THetl,
author = {Felix Oghenekohwo},
title = {Economic time-lapse seismic acquisition and imaging---{Reaping} the benefits of randomized sampling with distributed compressive sensing},
school = {The University of British Columbia},
year = {2017},
month = {08},
address = {Vancouver},
abstract = {This thesis presents a novel viewpoint on the implicit
opportunities randomized surveys bring to time-lapse
seismic, which is a proven surveillance tool for
hydrocarbon reservoir monitoring. Time-lapse (4D)
seismic combines acquisition and processing of at
least two seismic datasets (or vintages) in order to
extract information related to changes in a
reservoir within a specified time interval. The
current paradigm places stringent requirements on
replicating the 4D surveys, which is an expensive
task often requiring uneconomical dense sampling of
seismic wavefields. To mitigate the challenges of
dense sampling, several advances in seismic
acquisition have been made in recent years including
the use of multiple sources firing at near
simultaneous random times, and the adaptation of
Compressive Sensing (CS) principles to design
practical acquisition engines that improve sampling
efficiency for seismic data acquisition. However,
little is known regarding the implications of these
developments for time-lapse studies. By conducting
multiple experiments modelling surveys adhering to
the principles of CS for 4D seismic, I propose a
model that demonstrates the feasibility of
randomized acquisitions for time-lapse seismic. The
proposed joint recovery model (JRM), which derives
from distributed CS, exploits the common information
in time-lapse data during recovery of dense
wavefields from measured subsampled data, providing
highly repeatable and high-fidelity vintages. I show
that we obtain better vintages when randomized
surveys are not replicated, in contrast to standard
practice, paving the way for an opportunity to relax
the rigorous requirement to replicate surveys
precisely. We assert that the vintages obtained
using our proposed model are of sufficient quality
to serve as inputs to processes that extract
time-lapse attributes from which subsurface changes
are deduced. Additionally, I show that recovery with
the JRM is robust with respect to errors due to
differences between actual and recorded postplot
information. Finally, I present an opportunity to
adapt our model to problems related to time-lapse
seismic imaging where the main finding is that we
can better delineate time-lapse changes by adapting
the joint recovery model to wave-equation based
inversion methods.},
keywords = {PhD, time lapse, acquisition, joint recovery, thesis, compressive sensing, distributed compressive sensing},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2017/oghenekohwo2017THetl/oghenekohwo2017THetl.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2017/oghenekohwo2017THetl/oghenekohwo2017THetl_pres.pdf}
}
%-----2015-----%
@PHDTHESIS{lin2015THpes,
author = {Tim T.Y. Lin},
title = {Primary estimation with sparsity-promoting bi-convex optimization},
school = {The University of British Columbia},
year = {2015},
month = {10},
address = {Vancouver},
abstract = {This thesis establishes a novel inversion methodology
for the surface-related primaries from a given
recorded seismic wavefield, called the Robust
Estimation of Primaries by Sparse Inversion (Robust
EPSI, or REPSI). Surface-related multiples are a
major source of coherent noise in seismic data, and
inferring fine geological structures from
active-source seismic recordings typically first
necessitates its removal or mitigation. For this
task, current practice calls for data-driven
approaches which produce only approximate multiple
models that must be non-linearly subtracted from the
data, often distorting weak primary events in the
process. A recently proposed method called
Estimation of Primaries by Sparse Inversion (EPSI)
avoids this adaptive subtraction by directly
inverting for a discrete representation of the
underlying multiple-free subsurface impulse response
as a set of band-limited spikes. However, in its
original form, the EPSI algorithm exhibits a few
notable shortcomings that impede adoption. Although
it was shown that the correct impulse response can
be obtained through a sparsest solution criteria,
the current EPSI algorithm is not designed to take
advantage of this finding, but instead approximates
a sparse solution in an ad-hoc manner that requires
practitioners to decide on a multitude of inversion
parameters. The Robust EPSI method introduced in
this thesis reformulates the original EPSI problem
as a formal bi-convex optimization problem that
makes obtaining the sparsest solution an explicit
goal, while also reliably admitting satisfactory
solutions using contemporary self-tuning gradient
methods commonly seen in large-scale machine
learning communities. I show that the Robust EPSI
algorithm is able to operate successfully on a
variety of datasets with minimal user input, while
also producing a more accurate model of the
subsurface impulse response when compared to the
original algorithm. Furthermore, this thesis makes
several contributions that improve the capability
and practicality of EPSI: a novel scattering-based
multiple prediction model that allows Robust EPSI to
deal with wider near-offset receiver gaps than
previously demonstrated for EPSI, as well as a
multigrid-inspired continuation strategy that
significantly reduces the computation time needed to
solve EPSI-type problems. These additions are
enabled by and built upon the formalism of the
Robust EPSI as developed in this thesis.},
keywords = {PhD, inversion, EPSI, biconvex, multiples, sparsity, optimization},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2015/lin2015THpes/lin2015THpes.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2015/lin2015THpes/lin2015THpes_pres.pdf}
}
@PHDTHESIS{tu2015THfis,
author = {Ning Tu},
title = {Fast imaging with surface-related multiples},
school = {The University of British Columbia},
year = {2015},
month = {08},
address = {Vancouver},
abstract = {Surface-related multiples, which are waves that bounce
more than once between the water surface and the
subsurface reflectors, constitute a significant part
of the data acquired in marine seismic surveys. If
left untreated, they can lead to misplaced phantom
reflectors in the image, and result in erroneous
interpretations of the subsurface structure. As a
result, these multiples are removed before the
imaging procedure in conventional seismic data
processing. However, because they interact more with
the subsurface medium, they may carry extra
information that is not present in the primaries.
Therefore instead of removing these multiples, a
more desirable alternative is to make active use of
them. Starting from the well-established
"Surface-Related Multiple Elimination" relation, we
arrive at a linearized expression of the
wave-equation based modelling that incorporates the
surface-related multiples. We then present a
computationally efficient approach to iteratively
invert this expression to obtain an image of the
subsurface from data that contain multiples. We
achieve the computational efficiency inside each
iteration by (i) using the wave-equation solver to
implicitly carry out the expensive multiple
prediction; and (ii) reducing the number of
wave-equation solves during data simulation by
subsampling the monochromatic source experiments. We
show that, compared with directly applying the
cross-correlation/deconvolutional imaging
conditions, the presented approach can suppress the
coherent imaging artifacts from multiples more
effectively. We also show that, by promoting
curvelet-domain sparsity and occasionally drawing new data
samples during the inversion, the proposed inversion
method gains improved robustness to velocity errors
in the background model, as well as modelling errors
incurred during linearization of the
wave-equation. To combine the information encoded in
both the primaries and the multiples, we then
propose a highly accurate source estimation method
to jointly invert the total upgoing wavefield. We
show with field data examples that we can reap
benefits from both the relatively noise-free primaries
and the extra illumination coverage of the
multiples. We also demonstrate that the inclusion of
multiples helps mitigate the amplitude ambiguity
during source estimation. We conclude the thesis
with an outlook for future research directions, as
well as potential extensions of the proposed work.},
keywords = {inversion, seismic imaging, multiples, least-squares},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2015/tu2015THfis/tu2015THfis.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2015/tu2015THfis/tu2015THfis_pres.pdf}
}
@PHDTHESIS{li2015THsps,
author = {Xiang Li},
title = {Sparsity promoting seismic imaging and full-waveform inversion},
school = {The University of British Columbia},
year = {2015},
month = {07},
address = {Vancouver},
abstract = {This thesis will address the large computational costs
of solving least-squares migration and full-waveform
inversion problems. Least-squares seismic imaging
and full-waveform inversion are seismic inversion
techniques that require iterative minimizations of
large least-squares misfit functions. Each iteration
requires an evaluation of the Jacobian operator and
its adjoint, both of which require two wave-equation
solves for all sources, creating prohibitive
computational costs. In order to reduce costs, we
utilize randomized dimensionality reduction
techniques, reducing the number of sources used
during inversion. The randomized dimensionality
reduction techniques create subsampling related
artifacts, which we mitigate by using
curvelet-domain sparsity-promoting inversion
techniques. Our method conducts least-squares
imaging at the approximate cost of one reverse-time
migration with all sources, and computes the
Gauss-Newton full-waveform inversion update at
roughly the cost of one gradient update with all
sources. Finally, during our research of the
full-waveform inversion problem, we discovered that
we can utilize our method as an alternative approach
to add sparse constraints on the entire velocity
model by imposing sparsity constraints on each model
update separately, rather than regularizing the
total velocity model as typically practiced. We also
observed this alternative approach yields a faster
decay of the residual and model error as a function
of iterations. We provide empirical arguments for why
and when imposing sparsity on the updates can lead
to improved full-waveform inversion results.},
keywords = {full-waveform inversion, Gauss-Newton method, sparsity promoting, least-squares imaging, seismic imaging},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2015/li2015THsps/li2015THsps.pdf},
presentation = {https://slim.gatech.edu/Publications/Public/Thesis/2015/li2015THsps/li2015THsps_pres.pdf}
}
%-----2010-----%
@PHDTHESIS{moghaddam10phd,
author = {Peyman P. Moghaddam},
title = {Curvelet-based migration amplitude recovery},
school = {The University of British Columbia},
year = {2010},
month = {05},
address = {Vancouver},
abstract = {Migration can accurately locate reflectors in the earth
but in most cases fails to correctly resolve their
amplitude. This might lead to misinterpretation of
the nature of the reflector. In this thesis, I
introduce a method to accurately recover the
amplitude of seismic reflectors. This method
relies on a new transform-based recovery that
exploits the expression of seismic images by the
recently developed curvelet transform. The elements
of this transform, called curvelets, are
multi-dimensional, multi-scale, and
multi-directional. They also remain approximately
invariant under the imaging operator. I exploit
these properties of the curvelets to introduce a
method called Curvelet Match Filtering (CMF) for
recovering the seismic amplitude in the presence of
noise in both the migrated image and the data. I detail the
method and illustrate its performance on a synthetic
dataset. I also extend the CMF formulation to other
geophysical applications and present results on
multiple removal. In addition, I investigate
preconditioning of the migration, which results in a
rapid convergence rate of the iterative method.},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2010/moghaddam10phd.pdf}
}
%-----2008-----%
@PHDTHESIS{hennenfent08phd,
author = {Gilles Hennenfent},
title = {Sampling and reconstruction of seismic wavefields in the curvelet domain},
school = {The University of British Columbia},
year = {2008},
month = {05},
address = {Vancouver},
abstract = {Wavefield reconstruction is a crucial step in the
seismic processing flow. For instance, unsuccessful
interpolation leads to erroneous multiple
predictions that adversely affect the performance of
multiple elimination, and to imaging artifacts. We
present a new non-parametric transform-based
reconstruction method that exploits the compression
of seismic data by the recently developed curvelet
transform. The elements of this transform, called
curvelets, are multi-dimensional, multi-scale, and
multi-directional. They locally resemble wavefronts
present in the data, which leads to a compressible
representation for seismic data. This compression
enables us to formulate a new curvelet-based seismic
data recovery algorithm through sparsity-promoting
inversion (CRSI). The concept of sparsity-promoting
inversion is in itself not new to
geophysics. However, the recent insights from the
field of {\textquoteleft}{\textquoteleft}compressed
sensing{\textquoteright}{\textquoteright} are new
since they clearly identify the three main
ingredients that go into a successful formulation of
a reconstruction problem, namely a sparsifying
transform, a sub-Nyquist sampling strategy that
subdues coherent aliases in the sparsifying domain,
and a data-consistent sparsity-promoting
program. After a brief overview of the curvelet
transform and our seismic-oriented extension to the
fast discrete curvelet transform, we detail the CRSI
formulation and illustrate its performance on
synthetic and real datasets. Then, we introduce a
sub-Nyquist sampling scheme, termed jittered
undersampling, and show that, for the same amount of
data acquired, jittered data are best interpolated
using CRSI compared to regular or random
undersampled data. We also discuss the large-scale
one-norm solver involved in CRSI. Finally, we extend
the CRSI formulation to other geophysical applications
and present results on multiple removal and
migration-amplitude recovery.},
keywords = {curvelet transform, reconstruction, SLIM},
note = {(PhD)},
url = {https://slim.gatech.edu/Publications/Public/Thesis/2008/hennenfent08phd.pdf}
}