\documentclass[extra, camera,%
% onecolumn, % comment for two columns
% referee, % uncomment for double lines
% final, % uncomment for changes
]{gji}
\usepackage{timet}
% Additional packages
% \usepackage[UKenglish]{babel}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
\usepackage{amssymb, amsmath, amsfonts}
\usepackage{booktabs}
\usepackage{xspace}
\usepackage[final]{graphicx}
\usepackage[position=top]{subfig}
\usepackage{ifthen}
\usepackage[addedmarkup=bf]{changes}
% \usepackage[final]{changes}
\usepackage[pdftex, final, allcolors=blue, colorlinks=true]{hyperref}
% workaround siunitx clashes unfortunately with gji
% https://tex.stackexchange.com/a/412886
\makeatletter
\let\zz@tabular\@tabular
\let\zzendtabular\endtabular
\let\zz@xtabularcr\@xtabularcr
\let\zz@tabclassz\@tabclassz
\let\zz@tabclassiv \@tabclassiv
\let\zz@tabarray\@tabarray
\makeatother
% Figures
% To print widths put this in text:
% \the\textwidth
% \the\columnwidth
%
% Widths: textwidth columnwidth tablewidth
% 1-c 504pt 252pt 504pt
% 2-c 504pt 240pt 240pt
% ref 468pt 234pt 468pt
%
\newlength{\fwidth}
\newlength{\cwidth}
\newlength{\twidth}
\ifthenelse{\boolean{GJ@referee}}{%
\setlength{\cwidth}{234pt}%
\setlength{\fwidth}{468pt}%
\setlength{\twidth}{468pt}}{%
\setlength{\fwidth}{504pt}%
\ifthenelse{\boolean{@twocolumn}}{%
\setlength{\cwidth}{240pt}%
\setlength{\twidth}{240pt}}{%
\setlength{\cwidth}{252pt}%
\setlength{\twidth}{504pt}%
}
}
\graphicspath{{./figures/}}
% Change default colour of «changes» package
% \definecolor{alizarin}{rgb}{0.82, 0.1, 0.26}
% \definecolor{mygreen}{rgb}{0, .5, 0}
% \setaddedmarkup{\color{mygreen}{\uline{#1}}}
% \setdeletedmarkup{\color{alizarin}{\uwave{#1}}}
\let\tablenum\relax
\usepackage{siunitx}
\usepackage{tabularx}
% To fix clash of gji with hyperref
% https://tex.stackexchange.com/a/384969
\AtBeginDocument{
\label{CorrectFirstPageLabel}
\def\fpage{\pageref{CorrectFirstPageLabel}}
}
% Own commands
\newcommand{\mr}[1]{\mathrm{#1}}
\newcommand{\emg}[2]{\texttt{emg#1#2}\xspace}
\newcommand{\empymod}{\texttt{empymod}\xspace}
\newcommand{\simpeg}{\texttt{SimPEG}\xspace}
\newcommand{\custem}{\texttt{custEM}\xspace}
\newcommand{\ohmm}{\ensuremath{\Omega\,}\text{m}\xspace}
% Remove once done
% \newcommand{\itodo}[1]{\todo[color=blue!40!white, inline]{\sffamily #1}}
% \newcommand{\mtodo}[1]{\todo[inline]{\sffamily #1}}
% \newcommand{\etodo}[1]{\todo[color=blue!40!white]{#1}}
% \newcommand{\dul}[1]{\underline{\underline{#1}}}
% \newcommand{\ul}[1]{\underline{#1}}
% \newcommand{\rmk}[1]{{\color{red}{#1}}}
% \newcommand{\rtodo}{{\color{red}\texttt{TODO:}}\xspace}
\title[Transforming expensive EM kernels]{Fast Fourier transform
of electromagnetic data for computationally expensive kernels}
\author[D. Werthmüller \emph{et al.}]
{\Large
Dieter Werthmüller$^1$, % orcid: 0000-0002-8575-2484
Wim A. Mulder$^{1,2}$ and % orcid: 0000-0001-7020-9297
Evert C. Slob$^1$ % orcid: 0000-0002-4529-1134
\\
{\footnotesize
$^1$ TU Delft, Building 23, Stevinweg 1 / PO-box 5048, 2628 CN Delft (NL).
E-mail: \href{mailto:[email protected]}{[email protected]}
}\\[-.3em]
{\footnotesize
$^2$ Shell Global Solutions International BV, Grasweg 31, 1031 HW
Amsterdam (NL)
}
}
\date{Accepted 2021 April 20. Received 2021 April 7; in original form 2020 June 30}
\pagerange{1336--1347}
\volume{226}
\pubyear{2021}
% Authors with AMS fonts and mssymb.tex can comment out the following
% line to get the correct symbol for Geophysical Journal International.
\let\leqslant=\leq
\newtheorem{theorem}{Theorem}[section]
\begin{document}
\label{firstpage}
% workaround siunitx clashes unfortunately with gji
% https://tex.stackexchange.com/a/412886
{\makeatletter
\let\@tabular\zz@tabular
\let\endtabular\zzendtabular
\let\@xtabularcr\zz@xtabularcr
\let\@tabclassz\zz@tabclassz
\let\@tabclassiv \zz@tabclassiv
\let\@tabarray\zz@tabarray
\maketitle
}
% \maketitle
\begin{summary}
%
Three-dimensional controlled-source electromagnetic data are often computed
directly in the domain of interest, either in the frequency domain or in the
time domain. Computing the data in one domain and transforming them via a
Fourier transform to the other domain is a viable alternative. It requires the
evaluation of many responses in the computational domain if standard Fourier
transforms are used. This can make it prohibitively expensive if the kernel is
time-consuming, as is the case in three-dimensional electromagnetic modelling.
The speed of modelling obtained through such a transform is defined by three
key points: solver, method and implementation of the Fourier transform, and
gridding.
%
The faster the solver, the faster modelling will be. It is important that the
solver is robust over a wide range of values (frequencies or times). The method
should require as few kernel evaluations as possible while remaining robust. As
the frequency and time ranges span many orders of magnitude, the required
values are ideally equally spaced on a logarithmic scale. The proposed fast
method uses either the digital linear filter method or the logarithmic fast
Fourier transform together with a careful selection of evaluation points and
interpolation. In frequency-to-time domain tests this methodology typically
requires 15 to 20 frequencies to cover a wide range of offsets.
%
The gridding should be frequency- or time-dependent, which is accomplished by
making it a function of skin depth. Optimising for the smallest number of
required cells should be combined with optimising for computational speed.
Careful attention to these points resulted in much smaller computation times,
with speed-up factors of ten or more over previous methods. A computation in
one domain followed by transformation can therefore be an alternative to
computation in the other domain if the required evaluation points and the
corresponding grids are carefully chosen.
%
\end{summary}
% https://academic.oup.com/DocumentLibrary/GJI/gji%20keyword%20list%20updated2017.pdf
\begin{keywords}
Controlled source electromagnetics (CSEM); Numerical modelling; Fourier
analysis.
\end{keywords}
\section{Introduction} % % % % % % % % % % % % % % % % % % % % % % % % % % % %
The controlled-source electromagnetic (CSEM) method is one of the common
non-seismic tools in exploration geophysics, not only in hydrocarbon
exploration \citep{GEO.10.Constable}, but also in the search for sulfides
\citep{GRL.19.Gehrmann}, water \citep{GEO.05.Pedersen}, geothermal sources
\citep{WGC.15.Girard}, or for geological purposes \citep{NAT.19.Johanson}.
While current sources with a few frequencies are used in the deep marine
environment, transient measurements are more common in the shallow marine
environment and on land \citep[e.g., ][]{GEO.07.Ziolkowski, SEG.07.Andreis,
SEG.07.Avdeeva}. One of the main reasons is the dominance of the airwave in
shallow marine and terrestrial measurements, which can be better separated in
the time domain. CSEM is usually divided into frequency- and time-domain
methods, depending on whether the source signal is a continuous waveform, such
as a sine, or a finite waveform, such as a pseudo-random binary sequence
(PRBS). A numerical comparison of the two methods is given by
\cite{GP.13.Conell}. Acquired CSEM data are subsequently often analysed
(modelled and inverted for) in their respective domain, either the frequency or
time domain. Modellers of layered media usually exploit the horizontal
shift-invariance by computing the responses in the wavenumber-frequency domain
followed by a 2D inverse spatial Fourier transform, also called Hankel
transform, to the space-frequency domain, and a regular inverse Fourier
transform if time-domain data are required \citep[e.g., ][]{GEO.15.Hunziker}.
CSEM codes for arbitrary 2D and 3D computations, on the other hand, often
compute their responses directly in the required domain, either frequency or
time.
Electromagnetic methods in geophysics span a wide range of model scenarios and
acquisition layouts, each with its own modelling-related implications. A
well-known example is the aforementioned land or shallow marine case versus the
deep marine case, where in the former case a dominating airwave has to be
accurately modelled, whereas it can be completely ignored in the latter case.
Recent areas of particular interest in time-domain modelling that pose
numerical challenges are, for instance, simulating the fields through
steel-cased wells \citep{CAG.19.Heagy} or the effects of induced polarization
\citep{EXP.20.Kang}. The former is challenging because of the high conductivity
contrasts, which require very detailed meshing; the latter is challenging for
time-domain modelling because the models are frequency dependent.
Under certain conditions it can be a viable alternative to model time-domain
data with a frequency-domain code, as shown by \cite{GEO.08.Mulder}. They
compared the computational complexity of different time-domain methods with the
computational complexity of frequency-domain computation followed by a Fourier
transform: explicit time-stepping schemes (Du Fort-Frankel method as used in,
e.g., \cite{GEO.04.Commer, GEO.07.Maao}) and implicit schemes \citep[e.g.,
][]{GEO.04.Haber,GEO.10.Um} have a complexity of $O(n^4)$, matrix exponentials
and Lanczos reduction schemes \citep[e.g., ][]{RS.94.Druskin} have a complexity
of $O(n^4\sqrt{\log n})$, and Fourier transforms of frequency-domain solutions
have a complexity $O(n_f n^3)$; $n$ is the number of grid points in each
direction, and $n_f$ the number of frequencies. Their conclusion was that the
Fourier transform method can be favourable if the number of required
frequencies is small relative to the number of grid points in each coordinate.
The conditions for fast computation of time-domain data with a frequency-domain
code are: a sufficiently powerful solver, appropriate frequency selection and
interpolation, and an automated gridding, for which they used the
multi-frequency CSEM approach presented by \cite{GEO.07.Plessix}. We build upon
these results but improve the run time from hours to minutes. The main reasons
for this significant speed-up are the further reduction of required frequencies
by introducing lower and upper thresholds of numerically important frequencies,
an adaptive, frequency-dependent gridding scheme that minimizes the required
cells in each dimension, and a logarithmic Fourier transform such as digital
linear filters \citep[DLF, ][]{GP.71.Ghosh} or the logarithmic fast Fourier
transform \citep[FFTLog, ][]{RAS.00.Hamilton} to go from the frequency to the
time domain. The latter also makes it possible to use only the imaginary part
of the frequency-domain response, which has advantages when it comes to
interpolation.
In the next section, we briefly review the methodology as introduced by
\cite{GEO.07.Plessix} and \cite{GEO.08.Mulder} and highlight their advantages
and shortcomings. This is followed by an outline of the methodology, our
changes to the Fourier transform, and the adaptive gridding. Finally, we
demonstrate the efficiency of the approach with some numerical results. These
comprise a homogeneous space, a layered model, a model which includes
dispersive media (induced polarization), and a three-dimensional model.
\section{Motivation} % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
Being able to model CSEM data both in the frequency domain and in the time
domain can be desirable, as both domains have advantages and disadvantages. One
way to achieve this is to implement Maxwell's equations in both domains as is
done, for instance, in \texttt{SimPEG} \citep{CAG.15.Cockett} or in
\texttt{custEM} \citep{GJI.21.Rochlitz}. Another approach is to have Maxwell's
equations only implemented in one domain, and use Fourier transforms to go to
the other. However, this approach can be costly, as many frequencies over a
wide range are required to go from the frequency domain to the time domain, or
many times over a wide range for the opposite direction. We present a
methodology which significantly reduces the number and range of the required
frequencies through a combination of extrapolation, interpolation, and setting
the responses for high frequencies to zero. This translates into a significant
reduction in computation time. The number of required frequencies is not the
only important point for reducing computation time. Another important aspect is
the computation grid, as the required grids for low frequencies (in our case in
the order of 0.001\,Hz) and high frequencies (around 100\,Hz) are hugely
different. Low frequencies can be computed on a coarser grid, but they require
a much larger domain in order to avoid boundary effects. High frequencies, on
the other hand, require denser gridding, but they are much more limited in
reach. An adaptive gridding scheme is therefore required, which is naturally
based on the skin depth, the distance over which the amplitude of the
electromagnetic field decays to $1/e\approx 37\,\%$ of its initial value. The
skin depth
$\delta$ is a function of conductivity and frequency, and for the diffusive
approximation of Maxwell's equations in an isotropic, homogeneous medium is
given by \citep[e.g.,][]{B.SEG.88.Ward}
%
\begin{equation}
\delta = \sqrt{\frac{2}{\omega\mu\sigma}}
\ \approx \ 503.3/\sqrt{f\sigma} \, ,
\label{eq:skindepth}
\end{equation}
%
where $\sigma$ is conductivity (S/m), $\omega=2\pi f$ is angular frequency of
frequency $f$ (Hz), and $\mu$ is magnetic permeability (H/m). The approximation
is obtained by using the free-space value of magnetic permeability,
$\mu_0=4\pi\times10^{-7}\,$H/m.
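%
As a minimal sketch (the function name and values are ours, purely for
illustration), equation~(\ref{eq:skindepth}) and its free-space approximation
can be evaluated in Python as
%
\begin{verbatim}
import numpy as np

def skin_depth(freq, cond, mu=4e-7*np.pi):
    # Skin depth (m) for frequency freq (Hz) and
    # conductivity cond (S/m); mu defaults to mu_0.
    return np.sqrt(2/(2*np.pi*freq*mu*cond))

# Exact value versus the approximation 503.3/sqrt(f*sigma),
# here for f = 1 Hz and sigma = 1 S/m (both roughly 503 m).
print(skin_depth(1.0, 1.0), 503.3/np.sqrt(1.0*1.0))
\end{verbatim}
%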
We build our approach upon \cite{GEO.07.Plessix}, who presented such an
adaptive gridding for multi-frequency (and multi-source) CSEM modelling. They
define the minimum cell width $\Delta_\mathrm{min}$ as a fraction of the
minimum skin depth $\delta_\mathrm{min}$, where $\delta_\mathrm{min}$ should be
2--3 times bigger than $\Delta_\mathrm{min}$. The cells have to be smallest
around the source; in the marine case, the minimum skin depth is therefore
defined by the conductivity of seawater. However, this can yield quite large
cells for low frequencies, so special care has to be taken around the source by
defining a maximum allowed $\Delta_\mathrm{min}$. The grid dimension, on the
other hand, is defined as a function of skin depth for the average conductivity
of the background, $\delta_\mathrm{ave}$. They use four times
$\delta_\mathrm{ave}$ for the $x$-, $y$-, and downward $z$-directions, and a
fixed 50\,km for the upward $z$-direction to account for the airwave. To reduce
the number of cells, it is desirable to introduce stretching, at least in the
buffer zone outside of the area where source and receivers are located.
\cite{GEO.08.Mulder} provide a computational complexity analysis of various
methods to model transient electromagnetic responses directly in the time
domain, and compare it to the computation of transient EM responses in the
frequency domain with a subsequent Fourier transform. They conclude the review
by stating \emph{\guillemotleft Although it remains to be seen which of the
four methods requires the least computer time for a given accuracy, the
frequency-domain approach appears to be attractive.\guillemotright} Their
approach is to minimize the computation time by having, in addition to the
adaptive gridding introduced above, an adaptive frequency selection scheme. This
scheme starts with computing the responses for a set of just five frequencies,
regularly sampled on a log-scale, from minimum to maximum required frequency.
All the other frequencies are interpolated with a shape-preserving
piecewise-cubic Hermite interpolation \citep[PCHIP, ][]{SIAM.80.Fritsch}.
By testing the stability of the obtained response after removing a single
frequency value at a time, their scheme decides whether more frequencies in
between the already computed ones are required. In this way frequencies are
only added if required, that is, if certain criteria of response stability are
not met. While this method is effective for a single offset, it loses all its
advantages if one tries to compute different offsets within one computation, as
each offset requires a different set of adaptive frequencies. Additionally, it
hampers the parallelization over frequencies.
We present improvements to both the adaptive gridding and the transform from
frequency domain to time domain using the finite-integration technique
\citep{AEU.77.Weiland}, which makes time-domain modelling with a
frequency-domain code viable in comparison with time-domain codes. It is
important to note that our ideas can be used with any solver and are not bound
to a particular frequency- or time-domain solver. The ideas for the frequency
selection can be applied to time selection, and time-dependent, adaptive
gridding exists as well \citep[e.g., ][]{RS.06.Commer}. Our recommendations for
speeding up the Fourier transform for expensive kernels are independent of
spatial complexity, as diffusive EM responses are smooth functions of both
frequency and time. However, the intrinsic requirement is an accurate solver
over a sufficiently wide range of frequencies or times.
\section{Methodology} % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
The requirement for any transform is a robust solver of sufficient accuracy
over a wide range of frequencies. We use for the numerical computations the
open-source (Apache License 2.0) codes \empymod \citep{GEO.17.Werthmuller} and
\emg3d \citep{JOSS.19.Werthmuller}. The former computes semi-analytical
responses of layered models, the latter is a 3D multigrid solver based on
\cite{GP.06.Mulder}, which can be used as a preconditioner for Krylov subspace
solvers or as a solver on its own. The multigrid approach works fine for the
diffusive approximation of Maxwell's equations, which assumes that
$\omega\varepsilon \ll \sigma$, where $\varepsilon$ is electric permittivity
(F/m). The remaining system to solve in the frequency domain is then given by
the second-order differential equation of the electric field,
%
\begin{equation}
\mathrm{i}\omega\sigma \mathbf{E} +
\nabla \times \mu^{-1} \nabla \times \mathbf{E}
= -\mathrm{i}\omega\mathbf{J}_\mathrm{s} \, ,
\label{eq:maxwell}
\end{equation}
%
where $\mathbf{E}$ is the electric field (V/m) and $\mathbf{J}_\mathrm{s}$ the
current source (A/m$^2$); time dependence is $\exp(\mathrm{i}\omega t)$. The
standard multigrid approach fails for severe stretching or strong anisotropy,
for which known improvements such as line-relaxation and semicoarsening
\citep{ECCFD.06.Jonsthovel} are implemented, with a non-standard Cholesky
decomposition to speed up the computation \citep{GEO.08.Mulder}. One of the big
advantages of the multigrid method is that it scales linearly (optimal) with
the grid size in both CPU and RAM usage \citep{B.Springer.20.Mulder}. This
makes it feasible to run even big models on standard computers, without the
need for big clusters. All examples in this article are run on a laptop with an
i7-6600U\,CPU\,@\,2.6\,GHz (x4) and 16\,GB of memory, using Ubuntu 20.04 and
Python 3.8. They were run using a single thread, but parallelization over
frequencies is straightforward, as the computations are independent.
Note that while we use these two algorithms for the numerical examples, any
solver that provides sufficiently accurate frequency-domain responses could be
used.
\subsection{Frequency selection} % % % % % % % % % % % % % % % % % % % % % % %
An important factor in terms of speed and accuracy for time-domain responses
obtained from frequency-domain computations is the selection of the required
frequencies. The fewer frequencies required, the quicker we obtain the
time-domain result. This is true in general but applies in particular to
expensive kernels, as in 3D modelling, where the computation of the response of
a single frequency takes much longer than the transformation itself. The same
applies to the reverse operation: the fewer times required, the quicker we
obtain frequency-domain results. Even though we limit our
analysis here to frequency-to-time transformations, most of the arguments are
reversible and applicable to time-to-frequency transformations. We decided to
use a regular spacing of frequencies on a log-scale, rather than an adaptive
scheme. This approach is favourable if a wide range of offsets is needed, as
the required frequencies change with offset and an adaptive frequency selection
is therefore often tailored to a single offset or a limited range of offsets.
Also, it allows for straightforward parallelization over frequencies, which is
not completely possible with an adaptive scheme.
For the actual transform we use either the digital linear filter (DLF) method
or the logarithmic fast Fourier transform (FFTLog). The DLF method was
introduced to geophysics by \cite{GP.71.Ghosh}, and is arguably the most common
method in EM geophysics for both its simplicity and its speed. It is
implemented for the Hankel and Fourier transforms in most EM modelling codes,
e.g., \cite{GEO.09.Key}. A simple tool to design digital linear filters was
recently presented by \cite{GEO.19.Werthmuller}, together with a comprehensive
overview of the history and development of DLF in geophysics. FFTLog,
introduced by \cite{RAS.00.Hamilton}, is another transform algorithm which
proved to be powerful for the frequency-to-time domain transformation of EM
responses, e.g., \cite{INT.14.Werthmuller}. In our tests they are both about
equal in speed and accuracy, although the DLF requires a wider range and many
more input frequencies than the FFTLog. Both methods share some important
characteristics
in comparison with the standard FFT: the required input frequencies are equally
spaced on a logarithmic scale (natural logarithm in the case of the DLF and
decimal logarithm in the case of the FFTLog), and they only require either the
real or the imaginary part of the frequency-domain response. We can take
advantage of that by using only the imaginary part of the frequency-domain
response. The imaginary part goes to zero when the frequency goes either to
zero or to infinity, with the advantage that knowing the endpoints makes it
possible to convert the extrapolation of missing frequencies into an
interpolation.
Any Fourier transform needs a certain range of frequencies to obtain
time-domain results. Our approach is not to minimize the required frequencies
for the transform, but to minimize the number of frequencies for which we
actually have to compute responses. We do this in three zones: above an upper
threshold $f_\mr{max}$, we set the frequency-domain response to zero. Below a
lower threshold $f_\mr{min}$, we extrapolate the frequency-domain response. As
we know that the imaginary response goes towards zero for zero frequency, we
can turn it into an interpolation, for which we use PCHIP. In between the
thresholds, we use cubic spline interpolation, so that we only have to
compute a few distinct frequencies per decade ($n_\mr{dec}^f$). Using this
approach, we found that computing the responses for 15--25 frequencies is
usually good enough to obtain time-domain responses for a large range of
offsets; even though the actual transform might need up to hundreds of
frequencies.
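%
A minimal sketch of this selection and interpolation scheme, using NumPy and
SciPy (the function names are ours and only illustrate the idea, not the
actual implementation), is
%
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

def freq_selection(fmin, fmax, n_dec):
    # Frequencies to compute: regularly spaced on a log
    # scale with n_dec values per decade, none above fmax.
    n = int(np.floor(n_dec*np.log10(fmax/fmin))) + 1
    return fmin*10**(np.arange(n)/n_dec)

def interp_imag(f_calc, im_calc, f_req):
    # Imaginary response at the frequencies f_req required
    # by the transform: zero above the highest computed
    # frequency; PCHIP below the lowest one, anchored at a
    # zero value at (almost) zero frequency; cubic spline
    # in between (all on a log-frequency scale).
    out = np.zeros(f_req.size)
    mid = (f_req >= f_calc[0]) & (f_req <= f_calc[-1])
    out[mid] = CubicSpline(
        np.log(f_calc), im_calc)(np.log(f_req[mid]))
    low = f_req < f_calc[0]
    pchip = PchipInterpolator(
        np.log(np.r_[1e-100, f_calc]), np.r_[0, im_calc])
    out[low] = pchip(np.log(f_req[low]))
    return out
\end{verbatim}
%
For the homogeneous-space example below ($f_\mr{min}=0.05\,$Hz,
$f_\mr{max}=21\,$Hz, five frequencies per decade), such a selection yields 14
frequencies between 0.05 and roughly 20\,Hz.
%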
The required frequencies depend naturally on the conductivities and on the
chosen acquisition type and layout. A trial-and-error approach with a 3D code
is very time-intensive. However, a simplified, layered model for the required
survey setup and a fast 1D modeller make it possible to estimate these
parameters easily. An example of this is shown in Figure~\ref{fig:FFTLog-log}.
%
\begin{figure}
\centering
\includegraphics[width=\cwidth]{01-FFTLog-log}
\caption{Example frequency selection for a simple layered model (the shown
model parameters are described in the text). This example shows the impulse
response at an offset of 5\,km, for which it uses FFTLog from 0.001\,Hz to
10\,Hz with five frequencies per decade.}
\label{fig:FFTLog-log}
\end{figure}
%
It shows the responses for a marine scenario with 1\,km water depth of
resistivity $\rho=0.3\,\ohmm$ ($\rho = \sigma^{-1}$) and a 100\,m thick target
of 100\,\ohmm at 1\,km below the seafloor in a background of 1\,\ohmm. The
source is 50\,m above the seafloor, the receiver is on the seafloor, and the
response is the inline $x$-directed $E$-field. The left subplot shows the
imaginary part in the frequency domain and the right plot the corresponding
impulse response in the time domain. The red lines are the semi-analytical
responses. The blue circles indicate the actually computed responses and the
black dots the frequencies which are interpolated or set to zero. The resulting
time-domain response has a relative error of less than 1\,\% everywhere except
for very early times.
Figure \ref{fig:FFTLog-lin} shows exactly the same on a linear scale for the
time and amplitude axes. It can be seen that with the chosen, computed
frequencies (blue dots) the time-domain response starts to deviate above about
100\,s and below 0.3\,s. It also shows the oscillating high-frequency part,
which is hard to interpolate and which we therefore set to zero.
Figure~\ref{fig:DLF-lin} is the same as Figure \ref{fig:FFTLog-lin}, but
transformed with the DLF method applying the 81-point sine-cosine filter from
\cite{GEO.09.Key}. The same frequencies were computed as in the FFTLog case and
the missing ones interpolated. The error of the corresponding time-domain
response is comparable, so either FFTLog or DLF can be used.
%
\begin{figure*}
\centering
\subfloat[]{{\includegraphics[width=0.5\fwidth]{02a-FFTLog-lin}}%
\label{fig:FFTLog-lin}}\hfill
\subfloat[]{{\includegraphics[width=0.5\fwidth]{02b-DLF-lin}}%
\label{fig:DLF-lin}}\\
\caption{(a) Same as Figure \ref{fig:FFTLog-log}, but on a linear scale.
(b) Using DLF instead of FFTLog, but computing the same frequency range as
for the FFTLog. The resulting time-domain response has comparable accuracy.}
\label{fig:lin-lin}
\end{figure*}
%
In the above example we used 20 frequencies, but that many would not be
required for the shown offset of 5\,km; a few of the lower and higher
frequencies could be left out. With this frequency selection, however, we can
model a wide range of offsets. This is shown in Figure \ref{fig:multi-offset},
where the same kernels were used to yield the responses at offsets $r=1.5,3,6$,
and 12\,km. We only need to compute 50\,\% of the required frequencies in the
case of the FFTLog, and only 14\,\% in the case of the DLF.
%
\begin{figure*}
\centering
\includegraphics[width=0.75\fwidth]{03-multi-offset}
\caption{Normalized (a) frequency- and (b) time-domain responses for offsets
$r=1.5,3,6$, and 12\,km using the parameters defined in Figure
\ref{fig:FFTLog-lin}. The coloured symbols are the actually computed
responses, the black dots are the responses which are set to zero (high
frequencies) or interpolated (low frequencies). The grey curves are the
analytical responses.}
\label{fig:multi-offset}
\end{figure*}
%
The shortest offset defines the highest required frequency, and the largest
offset the lowest required frequency. So the more one can restrict the
necessary offset range, the fewer frequencies are needed. Another important
factor is how to interpolate and extrapolate from the computed frequencies to
the frequencies required for the Fourier transform. For the FFTLog only
extrapolation for higher and lower frequencies is required. The EM response
becomes highly oscillatory for high frequencies, which makes it very hard to
extrapolate the response to frequencies $f>f_\mathrm{max}$. However, if
$f_\mathrm{max}$ is chosen judiciously, the importance of higher frequencies
for the Fourier transform can be neglected and we can set those responses to
zero. The extrapolation of frequencies $f<f_\mathrm{min}$ can be changed to an
interpolation by assuming a zero imaginary response at zero frequency, and we
then use PCHIP to interpolate the missing frequencies (as we work on a
logarithmic scale we can choose neither 0\,Hz nor 0\,V/m, but instead take
$10^{-100}$\,Hz and $10^{-100}$\,V/m). In the case of the DLF method we also
have to interpolate in between the computed frequencies, for which we found it
better to use a cubic spline. As can be seen by comparing Figure
\ref{fig:FFTLog-lin} with Figure \ref{fig:DLF-lin}, using the FFTLog or the DLF
with the same actually computed frequencies results in very similar responses.
The changes to the Fourier transform in comparison with \cite{GEO.08.Mulder}
can be summarized in three points: (1) regular log-scale spacing for the
frequency selection; (2) DLF or FFTLog instead of FFT; and (3) using only the
imaginary part of the frequency-domain response. The actual speed of the
transform is unimportant, as the computation of the frequency-domain responses
takes much longer than the transform itself. What matters is solely how many
frequencies are required by it to achieve the desired precision, and how long
it takes to compute the responses for these frequencies.
\subsection{Gridding} % % % % % % % % % % % % % % % % % % % % % % % % % % % %
The proposed Fourier transform requires a robust solver that can compute
accurate results over a wide range of frequencies (times) to obtain time-domain
(frequency-domain) responses. In our case the solver uses a regular, stretched
grid and computes the electric fields in the frequency domain. This setup is
the target of the following gridding recommendations. Whilst they will look
different for other mesh types or a time-domain code, some conclusions will
still hold.
The computation grid consists of a core or survey domain $D_\mr{s}$ that should
contain all source and receiver positions. The survey domain usually has no or
a very small cell stretching factor $\alpha_\mr{s}$. The minimum cell width is
defined as
%
\begin{equation}
\Delta_\mr{min}=\delta(f, \sigma_\mr{src})/n_\delta \, ,
\label{eq:minwidth}
\end{equation}
%
where $\sigma_\mr{src}$ is the conductivity of the media in which the source
resides, and $n_\delta$ is a positive number that defines how many cells there
should be per skin depth. The actual computational domain $D_\mr{c}$ is usually
much bigger than $D_\mr{s}$ in order to avoid artefacts from the boundary
condition. It can also have a much higher stretching factor $\alpha_\mr{c}$. In
the presented examples we have chosen $D_\mr{c}$ such that the distance for the
signal diffusing from the source to the boundary and back to the receiver
closest to the boundary is at least two wavelengths, after which the amplitude
of the signal is reduced to a millionth of its initial strength. The wavelength
$\lambda$ to compute $D_\mr{c}$ is given by
%
\begin{equation}
\lambda = 2\pi\delta(f, \sigma_\mr{ave})
\ \approx \ 3162/\sqrt{f\sigma_\mr{ave}}\, ,
\label{eq:lambda}
\end{equation}
%
where $\sigma_\mr{ave}$ is the average conductivity, which can vary for
different directions. However, the skin-depth approach fails for air, in which
the EM field propagates as a wave at the speed of light. A maximum size of the
computational domain is therefore enforced by limiting the distance from the
source to the boundary; this distance is by default set to 100\,km, but it can
be reduced in the marine case with increasing water depth. This also
applies to the horizontal dimensions, not only to the upward $z$-direction, and
it equally applies to very resistive basements, even in deep water. One way to
circumvent this difficulty is the use of a primary-secondary formulation, where
the primary field, including the air wave, is computed with a semi-analytical
code for layered media. We do not consider this approach here. Note that grid
stretching for complex-valued diffusion fields is essentially what is done for
wave fields with the perfectly matched layer (PML). PML is an absorbing layer
to avoid scattering from the boundary by letting the field decay to zero, which
is achieved by introducing a complex factor that causes damping. As
electromagnetic diffusion fields are damped fields by themselves, it suffices
to stretch the grid.
In summary, the adaptive gridding takes $f$, $D_\mr{s}$, $\sigma_\mr{src}$,
$\sigma_\mr{ave}$, $n_\delta$, and ranges for $\alpha_\mr{s}$, $\alpha_\mr{c}$,
where we usually fix $\alpha_\mr{s}=1$ or keep it at least below 1.05, and let
$\alpha_\mr{c}$ be anywhere in the interval $[1, 1.5]$. The minimum cell width
$\Delta_\mr{min}$ can further be restricted by a user-defined range. Given
these inputs, the adaptive gridding will search for the smallest possible
number of cells which fulfils these criteria. The implemented multigrid method
puts some constraints on the number of cells, of which the adaptive gridding
takes care (the number of cells in each direction has to be a small prime
multiplied by a power of two, e.g., $\{2,3,5\}\cdot2^n$).
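%
As a simplified sketch of these criteria (our own illustration with
hypothetical function names, not the actual implementation used by the codes),
the frequency-dependent cell width, the buffer around the survey domain, and an
admissible cell number could be obtained as
%
\begin{verbatim}
import numpy as np

MU0 = 4e-7*np.pi

def min_cell_width(freq, cond_src, n_delta, limits=None):
    # Minimum cell width: skin depth at the source divided
    # by n_delta, optionally clipped to a user-given range.
    dmin = np.sqrt(2/(2*np.pi*freq*MU0*cond_src))/n_delta
    return dmin if limits is None else np.clip(dmin, *limits)

def buffer_extent(freq, cond_ave, max_ext=100e3):
    # Buffer around the survey domain: one wavelength, so
    # that the path to the boundary and back amounts to
    # roughly two wavelengths; capped at a maximum distance
    # (100 km by default) because of the air layer.
    skin = np.sqrt(2/(2*np.pi*freq*MU0*cond_ave))
    return min(2*np.pi*skin, max_ext)

def n_cells(n_required):
    # Smallest admissible number of cells in one direction,
    # of the form {2, 3, 5}*2^n, that is >= n_required.
    admissible = sorted(p*2**n for p in (2, 3, 5)
                        for n in range(1, 12))
    return next(n for n in admissible if n >= n_required)
\end{verbatim}
%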
The main difference with \cite{GEO.08.Mulder} is that their adaptive gridding
searches for the optimal stretching factor $\alpha$ fulfilling certain
criteria, for a fixed number of cells. Our adaptive gridding, on the other
hand, searches for the smallest number of cells that still fulfils the given
criteria. The number of cells therefore also becomes a function of frequency.
It is important to note that this is our implementation of an adaptive grid,
but there are certainly other possibilities. The relevant point for fast
computations is that the adaptive gridding tries to minimize the number of
required cells. This is generally best done in a frequency- and
conductivity-dependent manner. To go from the model grid to the computational
grid, we use the volume-averaging technique on logarithmic resistivities, as
used in \cite{GEO.07.Plessix}. While this technique ensures that the total
resistivity in the subsurface remains the same, it does not consider
effective-medium theory \citep{GEO.03.Davydycheva}, for instance, the apparent
anisotropy from a stack of finely layered formations of varying resistivity.
\section{Numerical examples} % % % % % % % % % % % % % % % % % % % % % % % % %
\subsection{Homogeneous space} % % % % % % % % % % % % % % % % % % % % % % % %
The first example is the inline electric field from a source at the origin
measured by an inline receiver with an offset of 900\,m in a homogeneous space
of 1\,\ohmm. We chose this simple example to compare it with the analytical
solution and with previously published results. We used the following values to
define the required frequencies: $f_\mr{min}=0.05\,$Hz, $f_\mr{max}=21\,$Hz,
using FFTLog with 5 frequencies per decade. This results in 14 frequencies to
compute from 0.05\,Hz to 20.0\,Hz. The complete frequency range for the
transform, including the frequencies for which we use interpolation, includes
30 frequencies from 0.0002\,Hz to 126.4\,Hz. For the adaptive gridding the
following inputs were used: $n_\delta=12$, minimum cell width must be between
20 and 40\,m, and $\alpha_\mr{s}=1$, $\alpha_\mr{c}=[1,1.3]$. This yields grids
with between \num{46080} ($80\times24\times24$, for 20.0\,Hz) and
\num{128000} ($80\times40\times40$, for 0.05\,Hz) cells. The run times for each
frequency, the corresponding number of cells, minimum cell width, and
computation domain stretching factor are listed in Table~\ref{tbl:timefull}.
The total run time to compute this model was less than two minutes.
%
\begin{table}
\begin{minipage}{\twidth}
\centering
\caption{Run times per frequency for the homogeneous space example, with the
corresponding number of cells and minimum cell width as well as the
stretching factor in the computation domain; $\alpha_\mr{s}=1$ everywhere.}
\label{tbl:timefull}
\begin{tabular}{S[table-format=2.4]rrrr}
\toprule
%
{Frequency} & Time & nx$\times$ny$\times$nz & $\Delta_\mr{min}$ & $\alpha_\mr{c}$ \\
{(Hz)} & (s) & & (m) & \\
\midrule
%
20.0 & 4 & 80$\times$24$\times$24 & 20 & 1.26 \\
12.6 & 8 & 96$\times$32$\times$32 & 20 & 1.17 \\
7.98 & 8 & 96$\times$32$\times$32 & 20 & 1.20 \\
5.03 & 8 & 96$\times$32$\times$32 & 20 & 1.23 \\
3.18 & 7 & 80$\times$32$\times$32 & 24 & 1.23 \\
2.00 & 7 & 80$\times$32$\times$32 & 30 & 1.21 \\
1.26 & 5 & 64$\times$32$\times$32 & 37 & 1.21 \\
0.798 & 5 & 64$\times$32$\times$32 & 40 & 1.23 \\
0.503 & 5 & 64$\times$32$\times$32 & 40 & 1.26 \\
0.318 & 5 & 64$\times$32$\times$32 & 40 & 1.28 \\
0.200 & 8 & 64$\times$40$\times$40 & 40 & 1.27 \\
0.126 & 8 & 64$\times$40$\times$40 & 40 & 1.30 \\
0.0798 & 10 & 80$\times$40$\times$40 & 40 & 1.26 \\
0.0503 & 11 & 80$\times$40$\times$40 & 40 & 1.28 \\
%
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
%
Figure \ref{fig:fullspace} (a) shows the frequency-domain result, where the
blue dots are the computed responses and the black dots correspond to the
interpolated values or the values set to zero.
%
\begin{figure*}
\centering
\includegraphics[width=\fwidth]{04-fullspace}
\caption{(a) Frequency- and (b) time-domain results for the homogeneous space
model. The red lines are the analytical solutions, the blue circles are the
actually computed responses, the black dots are the interpolated responses,
and the dashed black line the obtained time-domain response.}
\label{fig:fullspace}
\end{figure*}
%
Most of the computed values stay below a relative error of 1\,\%; our chosen
adaptive gridding only starts to generate considerable errors at higher
frequencies. Figure \ref{fig:fullspace}(b) shows the corresponding time-domain
result, where the dashed black line is the result obtained by transformation of
the frequency-domain response, on top of the red line which is the analytical
result. The relative error is mostly below 1\,\%, except for early times.
However, for practical purposes that is more than enough.
Figure~\ref{fig:fullspace-log} shows the same on a logarithmic scale, with
times up to 10 seconds. It clearly shows that if later times are required, we
would need to adjust our Fourier transform parameters.
%
\begin{figure}
\centering
\includegraphics[width=\cwidth]{05-fullspace-log}
\caption{Same as in Figure~\ref{fig:fullspace} (b), but on a logarithmic
scale. To improve later times we would have to compute lower frequencies;
to improve earlier times we would have to compute more frequencies per
decade to get a better resolution.}
\label{fig:fullspace-log}
\end{figure}
%
Note that for the gridding we chose $n_\delta=12$, which is very dense. This
was necessary because we are relatively close to the source. If the offsets of
interest are larger this factor can be lowered considerably; 3--4 is often
enough.
This model corresponds to the one presented in Table 1 and in Figures 3 and 4
of \cite{GEO.08.Mulder}. The response here appears to be more accurate: their
reported peak error is roughly 1\,\%, whereas ours is below 0.1\,\% at the peak
(no error plots are presented there, so visual inspection is all we have).
However, the difference in run time is dramatic. Summing the run times for the
different frequencies of the original figure comes to a total computation time
of 3\,h 47\,min 12\,s; 0.01\,Hz was the slowest run with 31\,min 19\,s, and
2.37\,Hz was the fastest run with 2\,min 54\,s. Our example, on the other hand,
took less than two minutes in total, where the individual frequencies took
between 4 and 11\,s to run.
This massive speed-up has several reasons. Computers have become more powerful
in the last twelve years, and the codes were run on different computers. A
quick test with the old scripts on our test machine shows that they would run
roughly 2--3 times faster there, bringing the total to somewhere between 1 and
2 hours. The more important factors besides different hardware are: (1) we only
used 14 frequencies instead of the 26 frequencies between 0.01 and 100\,Hz of
the original; (2) our adaptive gridding used significantly fewer cells
($f$-dependent) in comparison to the fixed \num{2097152} cells ($128^3$) used
in the original example. We did not see a significant difference in the speed
of the actual codes; the kernel algorithm of the two implementations is the
same, but the original example is implemented in Matlab/C, whereas \emg3d is
written in Python/Numba (Numba is a just-in-time compiler for Python code,
\cite{Numba}).
\subsection{1D Model} % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
The second example is a shallow marine, layered model with 200\,m of seawater
(3\,S/m) above a halfspace of 1\,S/m, an embedded target layer at 2\,km depth,
100\,m thick, with a conductivity of 0.02\,S/m. The source is located 20\,m
above the seafloor and the receivers are on the seafloor. We chose the
frequency range such that we can model offsets from 3 to 7\,km, with
$f_\mr{min}=0.007\,$Hz and $f_\mr{max}=32\,$Hz, using FFTLog with 5 frequencies
per decade. This results in computations for 19 frequencies from 0.008\,Hz to
31.8\,Hz. The complete frequency range for the transform includes 35
frequencies from $2\times10^{-5}\,$Hz to $126.4\,$Hz. For the adaptive gridding, we
used a cell width of 100\,m in the core domain and stretching outside up to a
factor 1.5, where the computation domain extends up to 50\,km in each
direction. This yielded grids between \num{204800} (higher frequencies) and
\num{245760} (lower frequencies) cells. The run times for each frequency and
their corresponding parameters are listed in Table~\ref{tbl:timemarine}.
%
\begin{table}
\begin{minipage}{\twidth}
\centering
\caption{Run times per frequency for the marine 1D example, with the
corresponding number of cells and minimum cell width as well as the
stretching factor in the computation domain; $\alpha_\mr{s}=1$ everywhere.}
\label{tbl:timemarine}
\begin{tabular}{S[table-format=2.5]rrrr}
\toprule
%
{Frequency} & Time & nx$\times$ny$\times$nz & $\Delta_\mr{min}$ & $\alpha_\mr{c}$ \\
{(Hz)} & (s) & & (m) & \\
\midrule
%
31.8 & 16 & 128$\times$40$\times$40 & 100 & 1.36 \\
20.0 & 17 & 128$\times$40$\times$40 & 100 & 1.36 \\
12.6 & 17 & 128$\times$40$\times$40 & 100 & 1.36 \\
7.98 & 17 & 128$\times$40$\times$40 & 100 & 1.36 \\
5.03 & 17 & 128$\times$40$\times$40 & 100 & 1.44 \\
3.18 & 17 & 128$\times$40$\times$40 & 100 & 1.48 \\
2.00 & 17 & 128$\times$40$\times$40 & 100 & 1.49 \\
1.26 & 20 & 128$\times$40$\times$48 & 100 & 1.36 \\
0.798 & 24 & 128$\times$40$\times$48 & 100 & 1.36 \\
0.503 & 27 & 128$\times$40$\times$48 & 100 & 1.36 \\
0.318 & 27 & 128$\times$40$\times$48 & 100 & 1.36 \\
0.200 & 34 & 128$\times$40$\times$48 & 100 & 1.38 \\
0.126 & 34 & 128$\times$40$\times$48 & 100 & 1.40 \\
0.0798 & 37 & 128$\times$40$\times$48 & 100 & 1.41 \\
0.0503 & 37 & 128$\times$40$\times$48 & 100 & 1.44 \\
0.0318 & 44 & 128$\times$40$\times$48 & 100 & 1.44 \\
0.0200 & 44 & 128$\times$40$\times$48 & 100 & 1.47 \\
0.0126 & 37 & 128$\times$40$\times$48 & 100 & 1.48 \\
0.00798 & 43 & 128$\times$40$\times$48 & 100 & 1.49 \\
%
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
%
Figure~\ref{fig:marine} shows the result for an offset of 5\,km, (a) in the
frequency domain and (b) in the time domain. The recovered response with the 3D
code captures the airwave (first peak) and the subsurface (second peak) very
accurately. At later times the error starts to increase. We would need to
compute a few additional lower frequencies if we want to improve it.
%
\begin{figure*}
\centering
\includegraphics[width=\fwidth]{06-marine}
\caption{Electric inline response at an offset of 5\,km for a shallow marine,
layered scenario. (a) Frequency-domain response, where the blue circles
denote computed responses and the black dots interpolated responses or
responses set to zero. (b) Time-domain response.}
\label{fig:marine}
\end{figure*}
%
In the frequency-domain plot, it can be seen that the high frequencies are not
computed very accurately, but without too much influence on the time-domain
response. These frequencies could be left out if an offset of 5\,km is the only
objective. However, we also want to retrieve shorter offsets from the same
computation, for which these frequencies are required.
Figure~\ref{fig:marine-multioffset} shows the time-domain responses of the same
model for offsets of 3, 5, and 7\,km, all obtained with the same
frequency-domain computations and the same frequencies for the Fourier
transform. The computation of these frequencies took less than 9 minutes and
covers any offset between 3 and 7\,km. It can be seen that the chosen
frequency selection is sufficient for this offset range; again, more low
frequencies could be added to improve late-time values.
%
\begin{figure}
\centering
\includegraphics[width=\cwidth]{07-marine-multioffset}
\caption{Time-domain responses for offsets of 3, 5, and 7\,km for the same
model as shown in Figure~\ref{fig:marine}.}
\label{fig:marine-multioffset}
\end{figure}
%
\subsection{Horizontal extent of the computation domain} % % % % % % % % % % %
The skin-depth approach fails for the air layer, as explained in the section
\emph{Gridding}. The reason is that the EM field in the air travels at the
speed of light as a wave, and its amplitude is only reduced through geometrical
spreading. On land and in shallow marine scenarios one therefore has to use a
sufficiently large computational domain. The default in our scheme is
100\,km. The important point is that this does not only apply to the upward
$z$-direction, but also to the horizontal directions, as the airwave bounces
back horizontally and would continuously emit energy into the subsurface if the
boundaries are not chosen far enough away from the receivers. If models are
computed with very resistive layers or models with highly resistive basements,
this can even apply to deep marine scenarios.
Figure~\ref{fig:marine-wrong-x-y} shows this effect. It is the same model as in
the previous section; however, for the adaptive gridding in the horizontal
directions, $\rho_\mr{ave}=1\,\ohmm$ was used instead of
$\rho_\mr{ave}=\num{10000}\,\ohmm$. Having the boundaries too near in the
horizontal directions leads to worse results for most frequencies and entirely
wrong results for high frequencies. Comparison with the 1D result in the time
domain shows that it is the airwave whose amplitude is heavily overestimated.
%
\begin{figure*}
\centering
\includegraphics[width=\fwidth]{08-marine-wrong-x-y}
\caption{Same model as used for Figure~\ref{fig:marine}, but with the
horizontal boundaries not far enough away. Although the resulting time-domain
result looks plausible, the comparison with the 1D result shows that it
significantly overestimates the amplitude of the airwave.}
\label{fig:marine-wrong-x-y}
\end{figure*}
%
It can be difficult to spot these errors in the time-domain result, as the
response looks plausible. Only a comparison with the 1D result reveals that it
is actually wrong. One possibility to detect such problems in complicated cases,
where there is no semi-analytical result to compare with, is to carry out a
mesh-convergence test by computing two or more models while moving the boundary.
When the responses stop changing, one can assume that the boundary is far
enough away. Another possibility is to look at the amplitudes close to the
boundaries and ensure that they are small enough.
\subsection{Induced Polarization} % % % % % % % % % % % % % % % % % % % % % % %
The proposed Fourier-transform approach works independently of spatial
complexity, as diffusive electromagnetic fields are smooth functions of both
frequency and time. An interesting test is to see whether this also applies
to dispersive media where the model parameters depend on frequency, as is the
case in induced polarization. To test this we use the Cole-Cole model
\citep[CCM, ][]{JCP.41.Cole} which, written in terms of conductivities instead
of electric permittivities as by the original authors, is given by
%
\begin{equation}
\sigma(\omega) = \sigma_\infty + \frac{\sigma_0 - \sigma_\infty}{1 +
(\mathrm{i}\omega\tau)^c}\ ,
\label{eq:ccm}
\end{equation}
%
where $\sigma_0$ and $\sigma_\infty$ refer to the low-frequency and
high-frequency conductivity values, respectively, $\tau$ is the central
relaxation time (s), and $c$ the CCM exponent describing the broadness of the
relaxation time distribution. Note that this model slightly differs from the
one phrased in terms of resistivities given by \cite{GEO.78.Pelton}, see, e.g.,
\cite{GJI.13.Tarasov}.
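%
A minimal sketch of equation~(\ref{eq:ccm}) in Python, with example values and
a hypothetical function name purely for illustration, is
%
\begin{verbatim}
import numpy as np

def cole_cole(freq, cond_0, cond_8, tau, c):
    # Cole-Cole conductivity (S/m); cond_0 and cond_8 are
    # the low- and high-frequency conductivities (S/m),
    # tau the central relaxation time (s), c the exponent.
    iwt = 2j*np.pi*freq*tau
    return cond_8 + (cond_0 - cond_8)/(1 + iwt**c)

# Example: the dispersive layer described below (sigma_0 = 1,
# sigma_inf = 1.25 S/m, tau = 1 s, c = 0.5), at f = 1 Hz.
print(cole_cole(1.0, 1.0, 1.25, 1.0, 0.5))
\end{verbatim}
%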
\cite{GJI.17.Commer} have shown that time-domain IP modelling can be difficult,
particularly for low $c$-values. Modelling frequency-dependent models in the
frequency domain is, for any given frequency, no different from modelling a
frequency-independent model. The interesting point, however, is to see how the
Fourier transform behaves when used to obtain time-domain data from frequency-domain
responses. As an example, we modelled the response for a land case with air
above a halfspace of 1\,S/m. Within the halfspace is a dispersive, 100\,m thick
layer at a depth of 300\,m, with $\sigma_0=1.0\,$S/m and
$\sigma_\infty=1.25\,$S/m; $\tau=1\,$s, and we consider $c$-values of 0 (no
IP), 1.0, 0.75, 0.5, and 0.25. The $x$-directed step-off source and the receiver at an
inline offset of 900\,m are placed on the surface. The real and imaginary
frequency-domain responses for this model are shown in the left column of
Figure~\ref{fig:CCM}. It can be seen that the effect of the IP layer in the
frequency domain is that the real part starts to deviate from the DC value at
lower frequencies. The imaginary part decays more slowly towards zero with
decreasing frequency, and it deviates from a power-law function at lower
frequencies as well. This indicates that the smaller $c$ becomes, the lower the
frequency content we need for the Fourier transform.
%
\begin{figure*}
\centering
\includegraphics[width=\fwidth]{11-cole-cole-model}
\caption{IP example for different $c$ values, frequency-domain response on
the left and time-domain response on the right. The frequencies indicated
by the blue vertical bars in the lower-left figure were used for our
Fourier transform approach, requiring only the computation of 27
frequencies instead of the full 747 frequencies required by the digital
linear filter method.}
\label{fig:CCM}
\end{figure*}
%
For the Fourier transform, we used the 601-point sine-cosine filter from
\cite{GEO.09.Key}. This is a very long filter, but it seemed to be by far the
best filter for low $c$-values. However, designing a new filter particularly
for IP problems might be a better approach. The selected frequency thresholds
were chosen at $f_\mr{min}=10^{-4}\,$Hz and $f_\mr{max}=500\,$Hz, and within
the thresholds we only computed every 6th frequency. Using these values only 27
frequencies have to be computed, instead of the 747 frequencies required by the
actual transform. To obtain the step-off response we computed the step-on
response and subtracted it from the DC value at 1000\,s (without the need for
any additional frequencies).
\subsection{3D Model} % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
The last example consists of a resistive, three-dimensional block embedded in
the lower of two halfspaces, as depicted in Figure~\ref{fig:3d-model}. The
target has resistivity $\rho_\mr{tg}=100\,\ohmm$, the upper halfspace
corresponds to seawater with $\rho_\mr{sea}=0.3\,\ohmm$, and the lower
halfspace is the background with $\rho_\mr{bg}=1\,\ohmm$.
%
\begin{figure}
\centering
\includegraphics[width=\cwidth]{09-3d-model}
\caption{Three-dimensional block embedded in the lower of two halfspaces. The
100-m long, $x$-directed dipole source is located 50\,m above the seafloor
at the origin, and the receiver is on the seafloor at an offset of 2\,km.}
\label{fig:3d-model}
\end{figure}
%
The source is a 100-m long, $x$-directed dipole at the origin, 50\,m above the
seafloor, and we are using a step-off source function. The $x$-directed inline
receiver is at an offset of 2\,km. The dimensions of the target block are
$1.1\times1.0\times0.4$\,km; it is located 300\,m below the seafloor, centred
between source and receiver.
For the comparison, we use the open-source code \simpeg \citep{CAG.15.Cockett},
which is a framework for modelling and inversion of geophysical data such as
gravity, magnetics, and CSEM. It has Maxwell's equations implemented in both
the frequency and time domain. As such we can compare our result computed in
the frequency domain followed by a Fourier transform to a result computed
directly in the time domain. A principal difference between \simpeg and \emg3d
is that the former has various direct solvers implemented, whereas the latter
is an iterative multigrid solver. The 3D model is therefore a rather small
example in order to be able to run it on our test machine, as the memory
requirement of the direct solver would otherwise be too high. There are not
many open-source time-domain 3D CSEM codes available; \simpeg is the one we
found to be suitable. A step-off response was chosen as this is the response
currently implemented in it.
The model was discretised with $100\times100\times100\,$m cells in the survey
domain $D_\mr{s}$. For the time-domain model, 14 cells in $x$-direction and 12
cells in $y$- and $z$-directions were used on both sides with a stretching of
1.3 for the total computation domain $D_\mr{c}$, which yields a mesh of