Computation of stochastic observables in
electromagnetic interaction theory : applications to
electromagnetic compatibility
Sy, O.O.
DOI:
10.6100/IR652800
Published: 01/01/2009
Document Version
Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
Please check the document version of this publication:
• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences
between the submitted version and the official published version of record. People interested in the research are advised to contact the
author for the final version of the publication, or visit the DOI to the publisher's website.
• The final author version and the galley proof are versions of the publication after peer review.
• The final published version features the final layout of the paper including the volume, issue and page numbers.
Link to publication
Citation for published version (APA):
Sy, O. O. (2009). Computation of stochastic observables in electromagnetic interaction theory: applications to
electromagnetic compatibility. Eindhoven: Technische Universiteit Eindhoven. DOI: 10.6100/IR652800
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners
and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.
Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately
and investigate your claim.
Computation of stochastic observables in
electromagnetic interaction theory
Applications to electromagnetic compatibility
PROEFSCHRIFT

(Thesis) to obtain the degree of doctor at the
Eindhoven University of Technology, by authority of the
rector magnificus, prof.dr.ir. C.J. van Duijn, to be
defended in public before a committee appointed by the
Doctorate Board
on Tuesday 20 October 2009 at 16.00 hours

by

Ousmane Oumar Sy

born in Paris, France
This thesis has been approved by the promotor:
prof.dr.ir. A.G. Tijhuis

Copromotors:
dr.ir. M.C. van Beurden
and
ir. B.L. Michielsen
A catalogue record is available from the Eindhoven University of Technology Library
Sy, Ousmane O.
Computation of stochastic observables in electromagnetic interaction theory:
applications to electromagnetic compatibility / by Ousmane Oumar Sy - Eindhoven :
Technische Universiteit Eindhoven, 2009.
Proefschrift. - ISBN 978-90-386-2032-9
NUR 959
Trefwoorden: stochastische / kansberekening; elektromagnetisme / dunne draad
geleidende plaat; numerieke methoden / integratiemethoden / integraalvergelijkingen.
Subject headings: stochastic / probability; electromagnetic / thin wire
metallic surface; numerical methods / quadrature rules / integral equations.
Copyright © 2009 by O.O. Sy, Electromagnetics Section, Faculty of Electrical
Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.
The work leading to this thesis has been financially supported by the IOP-EMVT 04302
program of Senternovem, an agency of the Dutch ministry of Economic Affairs, and by
ONERA, the French Aerospace Research Center.
Cover design: Seyed Ehsan Baha, Eindhoven, The Netherlands
Cover photo: The fast combat support ship USNS Arctic, left, conducts an underway
replenishment with the aircraft carrier USS Dwight D. Eisenhower, January 15, 2007.
(Released U.S. Navy photo, by 2nd Class M.A. Contreras)
Typeset using LaTeX, printed by PrintPartners Ipskamp, Enschede, The Netherlands
Contents

1 Introduction .... 1
  1.1 CEM and EMC .... 1
  1.2 Deterministic numerical models - uncertainties .... 3
  1.3 Uncertainty quantification methods .... 4
  1.4 Stochastic uncertainty quantification .... 5
  1.5 Objective of the thesis .... 6
  1.6 Outline of the thesis .... 7

I Stochastic model .... 9

2 Deterministic setting .... 11
  2.1 Electromagnetic fields .... 12
    2.1.1 Maxwell's equations .... 12
    2.1.2 Boundary conditions .... 13
    2.1.3 Power balance .... 15
    2.1.4 Potentials .... 16
  2.2 Scattering by PEC objects .... 17
    2.2.1 Interaction configuration .... 17
    2.2.2 Boundary-value problem .... 19
    2.2.3 Method of moments .... 20
  2.3 Reciprocity theorem and observables .... 21
    2.3.1 Reciprocity theorem .... 21
    2.3.2 Equivalent Thévenin network .... 22
    2.3.3 Extensions .... 24
  2.4 Practical test cases .... 25
    2.4.1 Elementary dipole .... 25
    2.4.2 Thin wire .... 26
    2.4.3 PEC plate of finite extent .... 29
  2.5 Conclusion .... 30

3 Stochastic parameterisation .... 33
  3.1 Probability space .... 34
  3.2 Randomization of the EMC problem .... 35
    3.2.1 Random input .... 35
    3.2.2 Input probability measure PI .... 36
    3.2.3 Propagation of the randomness .... 36
  3.3 Random variables .... 37
    3.3.1 General definitions and properties .... 37
    3.3.2 Problematic definition of PY .... 38
    3.3.3 Fundamental theorem .... 38
  3.4 Characterization of the stochastic observables .... 39
    3.4.1 Characterization of components of Ve .... 39
    3.4.2 Full characterization of Ve .... 41
  3.5 Conclusion .... 42

II Computation of the statistical moments .... 45

4 Numerical integration on the space of stochastic inputs .... 47
  4.1 Integral of scalar-valued functions .... 48
    4.1.1 Discretization .... 48
    4.1.2 Monte-Carlo method .... 49
    4.1.3 Quadrature using polynomial interpolation .... 50
    4.1.4 Space-filling-curve quadrature rule .... 54
    4.1.5 Comparisons .... 56
  4.2 Vector-valued integrals .... 61
    4.2.1 Strategy .... 61
    4.2.2 Error and convergence .... 62
  4.3 Conclusion .... 66

5 Quadrature accelerator (1/2): Perturbation method .... 67
  5.1 Statement of the problem .... 67
  5.2 First-order Taylor expansions .... 69
    5.2.1 Expansion of the operator Zα .... 69
    5.2.2 Expansion of the traces on Sα of E[I0] and Eiβ .... 70
    5.2.3 Resulting first-order expansions of Jα and Ve .... 70
    5.2.4 Statistical moments .... 71
  5.3 First-order Neumann expansion of Zα^SS .... 72
    5.3.1 Resulting first-order expansion .... 73
    5.3.2 Statistical moments .... 74
  5.4 Second-order perturbation method .... 75
    5.4.1 Taylor and Neumann expansions .... 75
    5.4.2 Second-order expansions of Jα and Ve .... 76
    5.4.3 Statistical moments .... 77
  5.5 Applications .... 77
    5.5.1 First four statistical moments .... 77
    5.5.2 Accuracy of the standard deviation .... 79
    5.5.3 Economical sample generator .... 84
  5.6 Conclusion and extensions .... 86

6 Polynomial-chaos method .... 87
  6.1 General method .... 87
    6.1.1 Standardization .... 87
    6.1.2 Construction of orthogonal Wiener-Askey polynomials .... 88
    6.1.3 Spectral decomposition of Ve .... 89
    6.1.4 Statistical post-processing of the PC decomposition .... 91
  6.2 Varying wire under deterministic illumination .... 91
  6.3 Varying wire under random illumination .... 95
  6.4 Perturbation and polynomial-chaos methods .... 101
  6.5 Conclusion .... 103

7 Semi-intrusive characterization of stochastic observables .... 105
  7.1 Statement of the problem .... 106
  7.2 Reformulation of Ve .... 107
  7.3 Statistics of Ve in terms of the statistics of iα,ki .... 108
  7.4 Discrete representation .... 109
  7.5 Spectral decomposition of the covariance of iα,ki .... 110
  7.6 Example of a transversely varying thin wire .... 111
    7.6.1 Statistical moments of iα,ki .... 112
    7.6.2 Inference of E[Ve] and var[Ve] .... 114
    7.6.3 Computation time .... 116
  7.7 Extensions .... 118
    7.7.1 Random incident field .... 118
    7.7.2 Higher-order statistical moments .... 119
  7.8 Conclusion .... 121

III Post-processing of the statistics .... 123

8 Statistical moments of a complex-valued observable .... 125
  8.1 Non-intrusive probabilistic logic .... 126
  8.2 Observable Ve as a complex random variable .... 127
  8.3 Observable Ve as a real random vector .... 130
    8.3.1 Average vector and covariance matrix of Ve .... 131
    8.3.2 Principal component analysis (PCA) of CVn .... 132
  8.4 Higher-order moments .... 135
    8.4.1 Skewness .... 135
    8.4.2 Kurtosis .... 136
    8.4.3 Higher-order moments .... 140
  8.5 Maximum-entropy principle .... 140
  8.6 Numerical cost .... 142
  8.7 Conclusion .... 145

9 Discrete-inverse-Fourier-transform (DIFT) method .... 147
  9.1 Preliminary remarks .... 147
  9.2 Characteristic function .... 148
  9.3 Inversion of the characteristic function .... 149
    9.3.1 Probability density function fU .... 149
    9.3.2 Cumulative distribution function .... 150
  9.4 Application to a thin wire .... 151
    9.4.1 Characteristic function .... 152
    9.4.2 Effect of the support of ΦU on the DIFT pdf and cdf .... 152
    9.4.3 Numerical effort .... 154
    9.4.4 Limiting the complexity .... 156
    9.4.5 DIFT pdf versus maximum-entropy pdf .... 157
  9.5 Conclusion .... 160

10 Extensions .... 161
  10.1 Completion of the stochastic black box .... 161
  10.2 Influence of the distribution of the input .... 163
    10.2.1 Bank of input distributions .... 163
    10.2.2 Test case .... 164
    10.2.3 Effect of fα on the distribution of |Ve| .... 164
    10.2.4 Effect of fα on the distribution of Zi = Im(Ze) .... 170
    10.2.5 Conclusion on the importance of the input pdf .... 174
  10.3 Localized geometrical fluctuations .... 175
    10.3.1 Setup .... 175
    10.3.2 Conditional cdf .... 177
    10.3.3 Total cdf .... 179
    10.3.4 Conclusion on the wavelet geometry .... 179
  10.4 Random rectangular metallic plate .... 181
    10.4.1 Deterministic configuration .... 181
    10.4.2 Representative example .... 183
    10.4.3 Mean and standard deviation .... 184
    10.4.4 Comparison with deterministic samples at f = 182 MHz .... 186
    10.4.5 Conclusion on the example of the surface .... 191
  10.5 Circular metallic plate with a random radius .... 192
    10.5.1 Numerical effort .... 194
    10.5.2 Approximations of the pdf of |Ve| .... 195
    10.5.3 Approximation of the cdf of |Ve| .... 198
    10.5.4 Conclusion .... 199

11 Conclusions and recommendations .... 201
  11.1 Summary and conclusions .... 201
  11.2 Outlook .... 208

A Some concepts of probability theory .... 211
  A.1 Probability space .... 211
  A.2 Random variables .... 213
  A.3 Stochastic processes .... 214
  A.4 Random variables and stochastic processes .... 215

B Univariate polynomial-interpolation based quadrature rules .... 217
  B.1 Statement of the problem .... 217
  B.2 Polynomial-interpolation-based rules .... 218

C Multivariate integrals .... 221
  C.1 Statement of the problem .... 221
  C.2 Lattice rules .... 222
  C.3 Lattice rules and space-filling-curve rule .... 224

Bibliography .... 225
Summary .... 239
Samenvatting .... 242
List of publications .... 246
Curriculum vitae .... 250
Acknowledgements .... 252
Chapter 1
Introduction
1.1 CEM and EMC
One of the objectives of science is to build models to understand and represent real-life
physical phenomena. This can be achieved by elaborating theoretical frameworks based
on, and verified by, experiments. Over recent decades, the growing capabilities of
computers have fostered the establishment of computational techniques as intermediaries
between theory and experiment. Computational ElectroMagnetics (CEM) exemplifies
this evolution, as it employs the theory of Maxwell's equations to construct numerical
methods representing the electromagnetic interaction between fields and matter.
The development of CEM also came as an answer to the need for efficient and accurate
solvers of large mathematical problems associated with Maxwell's equations in a wide
variety of configurations. These configurations range from the cellular level, where the
electromagnetic response of biological tissues is investigated, all the way up to
macroscopic dimensions involving large antennas used in radar technology or astronomy.
In our everyday lives, the ever-increasing number of electronic devices in use must
also be designed correctly to co-exist “peacefully”. Problems may arise at an
internal level, when the immunity of components of an integrated circuit is analyzed, but
also at an external level, to assess the performance of multiple electronic systems present,
for instance, on terrestrial, naval, aeronautical, or space vehicles. This type of problem
is at the heart of ElectroMagnetic Compatibility (EMC), which tackles the issue of the
proper functioning of electronic devices in their electromagnetic environment.
One of the most spectacular and dramatic examples of the consequences of electromagnetic
interference occurred in 1967 off Vietnam, onboard the aircraft carrier
USS Forrestal [1, p. 10],[2]. An accidentally generated electric signal coupled into the
firing system of an aircraft-mounted missile, which fired the weapon into a number of
other armed and fully fueled aircraft on the carrier deck. The resulting explosions and
fire killed 134 people and caused 72 M$ of damage, not counting the 27 lost aircraft.
Another sign of the societal relevance of EMC is the EU directive 89/336/EEC,
which has been regulating the emission levels of all electronic devices used in Europe since
1995 [3]. In its Article 4, the directive states that
“The apparatus [. . .] shall be so constructed that: (a) the electromagnetic disturbance it generates does not exceed a level allowing radio and telecommunications equipment and other apparatus to operate as intended; (b) the apparatus
has an adequate level of intrinsic immunity to electromagnetic disturbance to
enable it to operate as intended.”
Article 4, Official Journal of the European Communities, No L 139/21.
The list of apparatus in question is provided in Annex III of the directive and includes
domestic radio and TV receivers, mobile radio and commercial radiotelephone equipment,
medical and scientific apparatus, aeronautical and marine radio apparatus, and lights and
fluorescent lamps. In the EU, it is a criminal offence to sell equipment that does not fulfill
the requirements of directive 89/336/EEC [1, p. 21]. Devices that comply with the criteria
of the directive are recognizable by their CE conformity marking, shown in Fig. 1.1.
Figure 1.1: CE conformity marking.
1.2 Deterministic numerical models - uncertainties
Several numerical techniques are commonly employed in EMC problems. Among the
most important ones are the method of moments (MoM), the finite-difference time-domain
(FDTD) method, the finite-element method (FEM), and the transmission-line
method (TLM). The features of these methods are reviewed in [4]. All these methods
can be perceived as input/output processes. The entries of these models are formed by
the parameters of the coupling scene, viz. the material objects, the properties of the
environment, and the characteristics of the incident field. The response variables, or
observables, can consist, e.g., of circuit variables such as induced currents or voltages,
or of scattering coefficients.
The range of applicability of these methods is assessed by their ability to faithfully
represent the reality of interactions in various situations. Their accuracy can, however, be
plagued by a) an inadequacy of the model to represent the physics of the interaction being
investigated, b) numerical errors stemming from the finite precision of
computers, or c) a parametric uncertainty originating from insufficient knowledge
of the actual configuration [5]. The first two sources of error are not investigated in this
work, and the method employed will be assumed numerically valid for the range of input
parameters considered. The objective of this thesis is hence to investigate the effect of
parametric uncertainties on the performance of the numerical method.
In practical applications, the uncertainties of the input parameters of the model are
generally rooted in a limited knowledge of the configuration of the electromagnetic
coupling. This limitation can stem from changing operational conditions of the
devices, as is the case with conformal antennas mounted on vibrating objects such as
the wings of an aircraft [6], or embedded in textile [7]. In certain problems involving
a large number of components, the resulting complexity of the setup can be too
prohibitive to be depicted in fine detail. An example hereof can be found in [8], where
the EMC analysis of the space shuttle on a launch site is considered. In an industrial
context, production drifts during the manufacturing of equipment can also lead to
significant alterations of a device's characteristics. Uncertainties may also affect the
knowledge of the incident field, in particular its amplitude, its direction of propagation
and its polarization, as is the case for signals propagating in an urban environment.
Neglecting these uncertainties can prove penalizing, especially for ill-conditioned models
or nearly chaotic resonant interactions, in which slight variations of the input variables
may provoke substantial and hazardous modifications of the observable.
1.3 Uncertainty quantification methods
The effect of the uncertainty of a model's inputs can be measured via various methods.
To begin with, a deterministic sweep, in which the model is systematically evaluated
for all possible input configurations, provides an exhaustive picture of the interaction.
However, depending on the underlying problem, such a procedure can prove
computationally intractable. Moreover, the resulting “mountain of data”, as
referred to by Hill [9], needs to be post-processed to extract
interpretable information. This approach is typically that of a statistician¹.
When the observable varies smoothly as a function of the changes of configuration, the
study of a few sample situations can already grant a satisfactory representation of the
overall interaction. This explains the computational efficiency of a sensitivity analysis for
smoothly varying interactions. Nonetheless, in the more general case, the local-variation
hypotheses at the core of a sensitivity analysis can limit the range of validity of the
results it yields.
Instead, a stochastic² approach offers an appealing alternative to both of the
aforementioned methods. In this approach, the global changes of the configuration are
assumed to be random according to a known probability distribution, chosen a priori.
In such a Bayesian approach, the choice of this distribution can be based on the level
of knowledge available about the interaction. Probability theory is then employed to
propagate the randomness of the input through the numerical model and to compute
relevant statistical information that characterizes the observable.
Unlike the deterministic-sweep method, such a rationale is expected to allow for the
direct calculation of the statistical information via a limited number of evaluations of the
model. Furthermore, the computed statistical information should be directly exploitable
and inform about the general features of the possibly large set of values of the observable.
In addition, the stochastic approach handles the global variations of the configuration,
which marks a difference with the sensitivity approach.
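This rationale can be sketched in a few lines. The sketch below is purely illustrative: the `observable` function is a cheap, hypothetical stand-in for the expensive numerical model, and the uniform input distribution is an assumption chosen for the example.

```python
import numpy as np

# Hypothetical stand-in for the numerical model: in practice each
# evaluation would require solving an electromagnetic problem.
def observable(length):
    return np.sin(2 * np.pi * length) / (1 + length)

rng = np.random.default_rng(0)

# A priori input distribution (assumed uniform here): the uncertain
# parameter fluctuates around its nominal value 1.0.
samples = rng.uniform(0.9, 1.1, size=2000)

# Propagate the randomness through the model...
values = observable(samples)

# ...and extract interpretable statistical information.
mean, std = values.mean(), values.std()
print(f"E[V] ~ {mean:.4f}, std[V] ~ {std:.4f}")
```

A deterministic sweep would instead tabulate the observable on a fine grid of input values and leave the interpretation of the resulting “mountain of data” to post-processing.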
¹ Statistic: The German word Statistik, first introduced by the political scientist Gottfried Achenwall
(1749), originally designated the analysis of data about the state, signifying the “science of state”.
² Stochastic: From the Greek “stokhos”: aim, guess, target. It can be considered a synonym of
“characterized by conjecture and randomness”.
1.4 Stochastic uncertainty quantification
The suitability of a stochastic approach to assess uncertainty can be illustrated by the
very genesis of the theory of probability, which is generally traced back to 1654. In that
year, the enquiry of the French Chevalier de Méré to the mathematician Pascal, about
the explanation of wins and losses in gambling bets, launched a correspondence between
Pascal and Fermat that laid the foundations of probability theory [10, 11]. Subsequently,
the work of scientists as remarkable as Christiaan Huygens (Van Rekeningh in Spelen
van Geluck, 1657), Jacob Bernoulli (Ars Conjectandi, 1713), and Pierre Simon de Laplace
(Théorie Analytique des Probabilités, 1812) extended the scope of this new theory to fields
as diverse as astronomical data analysis, economy, insurance, epidemiology and genetics,
social sciences, and even linguistics. In the latter domain, probability theory can serve to
derive the frequency of use of words in given languages [12, 13].
In electromagnetics, the use of stochastic methods is becoming more and more popular,
as can be seen from the flourishing literature dealing with Statistical ElectroMagnetics
(STEM or Stat-EM) [14–18]. Stochastic methods are common in rough-surface
scattering problems that arise when considering long-distance propagation of radio signals
over the ground or the sea, which is modeled as a random surface. In these applications,
the very large extent of the surface enables asymptotic simplifications of the mathematical
formulation of the electromagnetic coupling [19–22].
Statistical methods are also employed to study the propagation of waves in complex scenes
such as urban surroundings [23, 24] or vegetation [25, 26], where the use of a stochastic
reasoning allows for the inclusion of the multi-path effect created by undesired reflections.
Further, reverberation chambers, also known as mode-stirred chambers (MSC), are test
facilities employed to characterize the immunity of electronic devices. The presence of
the geometrically complex stirrer naturally invites a stochastic representation of the field
distribution inside the chamber [14]. By making the assumption of an ideal chamber with
a uniform power distribution in its test region, efficient models of the resulting random
incident field are then employed to test the response of deterministic devices [27–30].
Cables and wires are also ordinary elements of our everyday lives. They constitute one
of the earliest types of antennas and are present as interconnections in several electronic
systems. The generally intricate layout of wirings in harnesses, in vehicles, or in buildings
is more and more handled via a stochastic rationale. Most of the random models employed
in this case use analytical formulations derived from transmission-line theory [31–34].
1.5 Objective of the thesis
The objective of this work is to study integral-equation-based models of electromagnetic
interactions between a device of finite extent and an incident field. The entries of this
model correspond to the characteristics of the scattering object, or scatterer, and those of
the excitation. Since the devices are considered in a receiving state, the response variables,
or observables, are chosen as the parameters of the equivalent Thévenin³ network. The
method of moments (MoM) will yield accurate results, given the small dimensions of the
scatterers with respect to the wavelength. Nevertheless, resorting to the MoM translates
into a set of linear equations of the form

    Lu = f,    (1.1)

in which the full impedance matrix L needs to be filled and inverted to determine the
solution u.
Problems that involve deterministic objects and random incident fields translate into a
random right-hand side (rhs) f and a deterministic matrix L, which needs to be inverted
only once [27]. The present thesis constitutes an extension to the latter approach as it
addresses both the case of a random scatterer (random L) under deterministic illumination
(deterministic f ), and the completely stochastic coupling between a random object and a
random incident field. These different problems are summarized in Table 1.1.
Uncertainty                  L              f              u
Incident field               deterministic  random         random
Geometry                     random         deterministic  random
Incident field + Geometry    random         random         random

Table 1.1: Different types of stochastic problems.
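The practical consequence of these cases can be made concrete with a toy linear system; the matrices below are random stand-ins, not actual MoM impedance matrices. When only f is random, one inversion of L serves all realizations; when L itself is random, each realization requires refilling and re-solving the system.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # illustrative number of expansion functions

# Stand-in for a well-conditioned dense "impedance matrix" L.
L = np.eye(n) + 0.05 * rng.standard_normal((n, n))

# Deterministic scatterer, random incident field: invert L once,
# then reuse the inverse for every random right-hand side f.
L_inv = np.linalg.inv(L)
solutions = [L_inv @ rng.standard_normal(n) for _ in range(100)]

# Random scatterer: every realization perturbs L itself, so the
# system must be refilled and solved anew each time.
f = rng.standard_normal(n)
for _ in range(3):
    L_k = np.eye(n) + 0.05 * rng.standard_normal((n, n))
    u_k = np.linalg.solve(L_k, f)
```

For large n, the contrast is between a single O(n³) inversion followed by cheap O(n²) solves, and a full O(n³) cost per realization; the latter situation motivates the acceleration techniques of Part II.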
The stochastic methods applied in this thesis will be probabilistic rather than statistical.
The aim will be to determine relevant information about the probability distribution of
the observable via a limited number of computations, by using the theory of probability. A
statistical method would derive these statistical items by post-processing a large ensemble
of values of the observable, i.e., after having solved a large number of problems (1.1).
³ Léon Charles Thévenin (1857–1926) was a French telegraph engineer. His theorem, which was
published in 1883, extends Ohm's law to the analysis of complex electrical circuits [35, p. 698].
This Ph.D. thesis has been carried out within the IOP EMVT 04302 research innovation program
of the Dutch ministry of Economic Affairs. The overall project is entitled “Stochastische
veldberekeningen voor EMC-problemen” (stochastic methods for field computations in
EMC problems). It consists of a collaboration with the Ph.D. student ir. J.A.H.M. Vaessen,
who focuses on the development of deterministic numerical models of electromagnetic
interactions and their analysis via a statistical rationale.
1.6 Outline of the thesis
This thesis consists of three parts.
In the first part, a generic model of stochastic electromagnetic interactions is constructed.
This model is first described in a deterministic context, i.e., in the absence of uncertainty,
in Chapter 2. It is based on the solution of a frequency-domain electric-field integral
equation to obtain the observables that correspond to the equivalent Thévenin network.
In Chapter 3, we tackle the effect of uncertainties by randomizing the deterministic model.
This random parameterisation allows for the definition of statistical moments as multi-dimensional integrals, which involve integrands that are pointwise computable.
The second part of this thesis addresses the issue of the computation of the statistical
moments. Chapter 4 describes several quadrature rules that we have selected to efficiently
approximate multiple multi-dimensional integrals. These range from a Cartesian-product
rule and a Monte-Carlo rule to a sparse-grid rule and a space-filling-curve rule.
Three “quadrature-catalyzing” methods are then introduced. The first acceleration
technique is a perturbation method, detailed in Chapter 5, where local expansions around
a reference configuration are performed. Secondly, a so-called polynomial-chaos method is
employed in Chapter 6 to construct a spectral stochastic representation of the observable,
which eases the computation of its statistical moments. Thirdly, in Chapter 7, we present
a semi-intrusive approach that computes the statistics by decoupling the randomness of the
observable into a geometrically induced randomness and a randomness caused by the
incident field.
The third and last part of this thesis focuses on a variety of post-processing methods that
we have selected to interpret the key information embedded in the statistical
moments. In this respect, Chapter 8 illustrates how statistical moments can be exploited
to reveal essential features of the randomness of the complex-valued observables. We then
detail a discrete-inverse-Fourier-transform (DIFT) method, in Chapter 9, to recover the
probability distribution of a real-valued observable from its characteristic function. Last
but not least, several extensions to the stochastic methods employed in this thesis are
discussed in Chapter 10, viz. the management of the uncertainty of the observable,
the study of an alternative type of geometrical randomness, and the possibility to study
problems involving stochastic surfaces.
Part I
Stochastic model
Chapter 2
Deterministic setting
The aim of this chapter is to specify a mathematical and physical framework wherein
interactions between electromagnetic fields and matter can be modeled.
As a starting point, the general laws that govern the space-time evolution of
electromagnetic fields are introduced. These equations of Maxwell define operators
acting on the fields, and are complemented by boundary conditions at the interfaces
between media with different material properties. Under the assumption of time-harmonic
fields, these equations are cast in the frequency domain, where Maxwell’s operator can be
inverted thanks to the frequency-domain counterparts of retarded potentials.
A description of the geometrical and physical properties of material devices follows. The
consideration of these objects in an electromagnetic environment naturally leads to a
scattering problem formulated as a boundary-value problem represented by an integral
equation, which is solved by a method of moments. The field-matter interaction is
further characterized as a coupling phenomenon, described by macroscopic quantities.
The reciprocity theorem plays a crucial role in the definition of these electromagnetic
responses, also known as observables. In an EMC context, studying the susceptibility of
electronic apparatus to external electromagnetic fields is a key issue. For this reason, the
parameters of the equivalent Thévenin network will be chosen as observables.
The practical test cases that will come into play throughout this thesis are then presented,
viz. an elementary Hertzian dipole, a thin-wire structure over an infinite ground plane, and
finally, a plate of finite extent representing a patch antenna or a shielding surface.
2.1 Electromagnetic fields

2.1.1 Maxwell’s equations
A spatial domain Ω is considered, in which a right-handed orthonormal Cartesian frame
(O, ux , uy , uz ) is defined. Any point r can be represented with respect to the origin O
via its Cartesian coordinates as r = xux + yuy + zuz and the time variable is denoted t.
The domain Ω contains an electric current density J (in Am−2 ) and charges represented
by the volume density ρ (in Asm−3 ). These sources create electric fields E (in Vm−1 ) and
magnetic fields H (in Am−1 ), the space-time evolution of which is dictated by Maxwell’s
equations
∇ × E(r, t) = −∂t B(r, t),  (2.1a)
∇ × H(r, t) = J (r, t) + ∂t D(r, t),  (2.1b)
where D (in Asm−2 ) and B (in Vsm−2 ) represent the electric and magnetic flux density
respectively. The “curl” differential operator is written ∇×, while ∂t stands for the
differentiation with respect to time. The time variation of the charge distribution can be
linked to the spatial fluctuations of the current density through the equation of continuity
∇ · J (r, t) = −∂t ρ(r, t).  (2.2)
Since the fields E and H are causal by hypothesis, they cannot precede their sources.
When taken into account in Eqs (2.1), the causality of the fields together with
the law of charge conservation leads to the following equations
∇ · D(r, t) = ρ(r, t),  (2.3a)
∇ · B(r, t) = 0.  (2.3b)
Since the transient regime is not considered, the time dependence of the electromagnetic
quantities can be eliminated via a Fourier transformation with respect to time
X(ω) = ∫_R X (t) e−jωt dt, ∀ω ∈ R,  (2.4)

where X (t) is assumed to be integrable, and can be recovered by the inverse Fourier
transform

X (t) = (1/2π) ∫_R X(ω) ejωt dω, ∀t ∈ R.  (2.5)
Note that while the physical variable X (t) is real-valued, its Fourier transform X(ω) is
generally complex-valued. Consequently, Maxwell’s equations become
∇ × E(r, ω) = −jωB(r, ω),  (2.6a)
∇ × H(r, ω) = J (r, ω) + jωD(r, ω),  (2.6b)
and
∇ · D(r, ω) = ρ(r, ω),  (2.7a)
∇ · B(r, ω) = 0,  (2.7b)
with the continuity relation written as
∇ · J (r, ω) = −jωρ(r, ω).  (2.8)
The frequency can be expressed in terms of ω as f = ω/2π.
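As a sanity check of the transform pair (2.4)–(2.5), the forward transform of a Gaussian pulse can be approximated by a trapezoidal rule and compared with its known closed form. This numerical sketch is illustrative only and assumes the e^{−jωt} sign convention used above.

```python
import cmath
import math

def forward_transform(x, omega, T=8.0, n=4000):
    # Trapezoidal approximation of X(omega) = integral of x(t) exp(-j omega t) dt
    # over [-T, T]; the Gaussian below is negligible beyond |t| = 8.
    h = 2.0 * T / n
    total = 0.5 * (x(-T) * cmath.exp(1j * omega * T)
                   + x(T) * cmath.exp(-1j * omega * T))
    for k in range(1, n):
        t = -T + k * h
        total += x(t) * cmath.exp(-1j * omega * t)
    return h * total

gauss = lambda t: math.exp(-t * t)  # time-domain signal X(t) = exp(-t^2)

# Closed form of its transform: X(omega) = sqrt(pi) * exp(-omega^2 / 4).
X1 = forward_transform(gauss, 1.0)
exact = math.sqrt(math.pi) * math.exp(-0.25)
```

Because the real Gaussian is even, the imaginary part of the computed transform vanishes to machine precision, consistent with the remark that X(ω) is only generally complex-valued.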
The electromagnetic fields in Ω are linked to the material properties of the medium through
the constitutive relations, which state that
D(r, ω) = ε(r, ω)E(r, ω),  (2.9a)
B(r, ω) = µ(r, ω)H(r, ω),  (2.9b)
J (r, ω) = σ(r, ω)E(r, ω),  (2.9c)
where the physical properties of the medium in Ω are characterized by the electric
permittivity ε(r, ω) (in AsV−1 m−1 ), the magnetic permeability µ(r, ω) (in VsA−1 m−1 ),
and the conductivity σ(r, ω) (in AV−1 m−1 ). In isotropic media, ε(r, ω), µ(r, ω) and
σ(r, ω) correspond to scalars, which simplify to ε(r, ω) = ε(r), µ(r, ω) = µ(r), and
σ(r, ω) = σ(r) if the material properties vary slowly with frequency. Lossless media have
a vanishing conductivity σ = 0. Free space is a particularly interesting environment, with
physical parameters ε0 = (1/36π) · 10−9 AsV−1 m−1 and µ0 = 4π · 10−7 VsA−1 m−1 .
All the interactions studied in this thesis will take place in free space.
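These free-space constants, together with the wave speed and impedance derived from them, can be checked in a few lines (the variable names are illustrative):

```python
import math

eps0 = 1e-9 / (36.0 * math.pi)   # permittivity of free space, AsV^-1 m^-1
mu0 = 4.0 * math.pi * 1e-7       # permeability of free space, VsA^-1 m^-1

c0 = 1.0 / math.sqrt(eps0 * mu0)  # speed of light in vacuum, m/s
eta0 = math.sqrt(mu0 / eps0)      # free-space wave impedance, ohm
```

With the (1/36π) · 10−9 approximation of ε0 used above, c0 evaluates to exactly 3 × 10^8 m/s and eta0 to 120π ≈ 377 Ω.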
A complete system of first-order partial differential equations has hence been obtained,
which links the electromagnetic field F = (E, H) to its source J . For it to be uniquely
solvable, boundary conditions are specified via interface conditions between different media
and radiation conditions at infinity.
2.1.2 Boundary conditions

Interface conditions
Two adjacent domains Ω1 and Ω2 that have different material properties, (ε1 , µ1 , σ1 ) and
(ε2 , µ2 , σ2 ) respectively, are now considered. The surface S separating Ω1 from Ω2 is
assumed to be sufficiently smooth to permit the definition of a normal vector n pointing
towards Ω2 . In the immediate vicinity of S, the existing electromagnetic fields are written
F 1 = (E 1 , H 1 ) in Ω1 , and F 2 = (E 2 , H 2 ) in Ω2 . The interface conditions describe the
evolution of the components of the electromagnetic fields during the transition from one
medium to the other.
In the general case where Ω1 and Ω2 are penetrable, if S supports a surface current density
J S and surface charge distributions ρS , the interface conditions read
n × (E 2 − E 1 ) = 0,  (2.10a)
n × (H 2 − H 1 ) = J S ,  (2.10b)
n · (B 2 − B 1 ) = 0,  (2.10c)
n · (D 2 − D 1 ) = ρS .  (2.10d)
An equation of continuity linking ρS to J S can be established with the aid of a surface
divergence operator defined on S, as is done in [36, p. 150].
Hence, the tangential components of E and the normal components of B vary continuously
during the transition between Ω1 and Ω2 . When S is free of charge and current
densities, the corresponding interface relations are obtained by setting J S = 0 and ρS = 0
in Eqs (2.10).
Another situation of practical interest occurs when one of the domains, say Ω1 , is
impenetrable, as is the case with perfect electric conductors (PEC) in which σ → ∞.
In this case, the vanishing electromagnetic field in Ω1 simplifies Eqs (2.10) into
n × E 2 = 0,  (2.11a)
n × H 2 = J S ,  (2.11b)
n · B 2 = 0,  (2.11c)
n · D 2 = ρS .  (2.11d)
Radiation conditions
The behaviour of the electromagnetic fields at infinity also needs to be clarified to ensure
the uniqueness of the solution to Maxwell’s equations. To this end, radiation conditions
are formulated that respect the principle of causality by requiring that electromagnetic
fields propagate from their sources outwards, and also guarantee the asymptotic decay of
the magnitude of these fields.
To express these radiation conditions in the frequency domain, a given electric source J
is considered in free space. A sphere S 3 (0, R) of radius R, centered at the origin,
is also taken into account, together with its boundary, denoted ∂S, and a unit vector n
normal to ∂S, which points towards the exterior of S 3 (0, R). According to the radiation
conditions [37, 38], as R → ∞, the field F = (E, H) radiated by J is such that
E, H ∈ L2loc (R3 , C3 ),  (2.12a)
|E|∂S | = O(1/R),  (2.12b)
|H|∂S | = O(1/R),  (2.12c)
[(ωµ0 n × H + k0 E)|∂S ] = o(1/R),  (2.12d)
[(k0 n × E − ωµ0 H)|∂S ] = o(1/R),  (2.12e)
where Landau’s symbols O and o indicate comparability and negligibility,
respectively, as R increases to infinity. The first equation ensures that the product of
E or H with a function having compact support is square integrable. The following
two equations ascertain that the electromagnetic fields decay at least as fast as 1/R. The
last two equations, also known as the Silver-Müller conditions, describe the structure of F
at infinity by stating that E and H should be mutually orthogonal.
2.1.3 Power balance
The radiation conditions also guarantee that the energy flows from the source outwards.
This can be seen from the definition of the Poynting vector S as

S = E × H ∗,  (2.13)
which indicates the energy flowing per unit area in the electromagnetic field F . The flux
of S through a given surface equals the instantaneous power transmitted through that
surface. Given a volume Ω bounded by the surface ∂Ω and containing the electric source
J , the complex balance relation is derived from the divergence of S as
− ∫_Ω J ∗ · E dV = jω ∫_Ω (ε|E|2 − µ|H|2 ) dV + ∫_Ω σ|E|2 dV + ∮_∂Ω (E × H ∗ ) · n dS.  (2.14)
In this equation, the left-hand side corresponds to the power provided by the source, the
first term in the right-hand side is the harmonic fluctuation in the stored electromagnetic
power and the second term represents the Ohmic losses. In other words, the power
available from the source J is divided between a reactive electromagnetic power stored in
Ω, a portion lost by Ohmic effects, and a remainder radiated through the surface ∂Ω.
It is essential, in practice, that the power conveyed by the electromagnetic fields remain
finite. This constraint and the power balance equation naturally lead to the Lebesgue
space of locally square-integrable vector-valued functions, denoted L2loc (R3 , C3 ), for E and
H [36, 39].
2.1.4 Potentials
The link between electromagnetic sources in free space and the fields they radiate is
now explicitly enunciated by resorting to the frequency-domain counterparts of retarded
potentials. Although only electric current sources are considered, similar relations can be
established for magnetic sources by duality [40, 41].
Starting from the equation ∇ · H = 0, a magnetic vector potential A, and an electric
scalar potential Φ are defined such that
H = ∇ × A,  (2.15a)
E = −jωµA − (1/jωε) ∇Φ.  (2.15b)
The degree of freedom that exists in the choice of A and Φ is suppressed by Lorenz’ gauge
∇ · A + Φ = 0.  (2.16)
Inserting these relations into Eqs (2.6) yields the following set of independent equations
∇2 A + k02 A = −J ,  (2.17a)
∇2 Φ + k02 Φ = ∇ · J ,  (2.17b)

with the wavenumber k0 = ω√(ε0 µ0 ). The resulting inhomogeneous Helmholtz equations
have known solutions, A and Φ, which can be expressed in terms of J as

A(r) = ∫_{r′ ∈Ω} g(r ′ , r)J (r ′ ) dV ′ ,  (2.18)

Φ(r) = − ∫_{r′ ∈Ω} g(r ′ , r)∇r′ · J (r ′ ) dV ′ ,  (2.19)

where ∇r′ denotes the gradient operator with respect to the variable r ′ . The function
g(r ′ , r) = exp (−jk0 |r − r ′ |) / (4π|r − r ′ |) is the free-space Green’s function, in which
the point r ′ represents the source point, and r the observation point.
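The scalar Green’s function above is straightforward to evaluate pointwise; the helper below (an illustrative sketch) returns g(r′, r) for a given wavenumber k0:

```python
import cmath
import math

def green(r_src, r_obs, k0):
    # Free-space Green's function g(r', r) = exp(-j k0 R) / (4 pi R),
    # with R = |r - r'| the source-observer distance.
    R = math.dist(r_obs, r_src)
    return cmath.exp(-1j * k0 * R) / (4.0 * math.pi * R)

g = green((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0)
```

The magnitude decays as 1/(4πR) while the phase −k0R encodes the retardation; the singularity at R = 0 is why MoM discretizations must treat self terms with care.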
To summarize, the electromagnetic field F = (E, H) can be written in terms of its source
J via the so-called mixed-potentials formulas
H [J ] (r) = ∇r × ∫_{r′ ∈Ω} g(r ′ , r)J (r ′ ) dV ′ ,  (2.20a)

E [J ] (r) = −jωµ ∫_{r′ ∈Ω} g(r ′ , r)J (r ′ ) dV ′ − (1/jωε) ∇r ∫_{r′ ∈Ω} g(r ′ , r)∇r′ · J (r ′ ) dV ′ .  (2.20b)
The functional notation E [·], H [·] highlights the linearity of the fields with respect to
their source J . If the medium bounded by ∂Ω is impenetrable, the integrals in Eq. (2.20) reduce to
surface integrals over the boundary ∂Ω, and J then represents a surface current density
flowing on ∂Ω. The following Stratton-Chu formulas are thus obtained
H [J ] (r) = ∇r × ∫_{r′ ∈∂Ω} g(r ′ , r)J (r ′ ) dS ′ ,  (2.21a)

E [J ] (r) = −jωµ ∫_{r′ ∈∂Ω} g(r ′ , r)J (r ′ ) dS ′ − (1/jωε) ∇r ∫_{r′ ∈∂Ω} g(r ′ , r)∇r′ · J (r ′ ) dS ′ .  (2.21b)

2.2 Scattering by PEC objects
A passive electronic device, or scatterer, can be regarded as a contrast of constitutive
parameters with respect to its environment. In the presence of an incident field, this
contrast leads to the excitation of equivalent surface currents and charges on the surface
of the scatterer. An explicit description of this surface is hence a prerequisite for the study
of the scattering problem involving the device.
2.2.1 Interaction configuration

Geometrical setup
The electronic device occupies the interior of a volume Ωα bounded by the surface ∂Ωα,
and is normally parameterised by a fixed domain D and its boundary ∂D. This is achieved
by introducing a smooth mapping µα that uniquely associates points of ∂D with points of
the surface ∂Ωα , i.e.

µα : ∂D → ∂Ωα , r D ↦ r α = r D + hα (r D ) n(r D ),  (2.22)
where n is a unit vector normal to ∂D, and directed towards the exterior of D, as depicted
in Fig. 2.1(a). The real-valued function hα is defined on ∂D, and depends on parameters
gathered in the vector α = (α1 , . . . , αM ), which belongs to a given set A ⊂ RM . Note that
all the quantities depending on α are tagged with the subscript α. The smoothness of
∂Ωα is guaranteed by imposing that µα and its inverse be twice continuously differentiable.
The mapping µα is thus chosen as a C 2 -diffeomorphism [42].
Figure 2.1: Geometrical parameterisation of the scatterer Ωα by the domain D: (a) normal parameterisation of the surface ∂Ωα ; (b) entire geometry of Ωα .
As sketched in Fig. 2.1(b), ∂D can be partitioned into a surface ∂Df parameterising the
fixed portion of ∂Ωα, and a surface ∂Dv serving as a reference for the varying part of ∂Ωα.
Taking the physical properties of ∂Ωα into account leads to the distinction between
• a fixed surface SP ⊂ µα(∂Df ) gathering all the port regions of Ωα, which have small
dimensions compared to the wavelength λ and are generally filled with air,
• a remainder Sα consisting of a PEC surface.
Hence, ∂Ωα = SP ∪ Sα , where the union is non-overlapping.
A priori, the function hα appearing in Eq. (2.22) could take any form, as long as
it describes a smooth geometry. However, for modeling purposes, a more generic
representation of hα is desirable. This motivates the choice of hα as a Fourier or wavelet
representation, which is sufficiently general to describe most of the smooth geometries
of practical interest. To illustrate such a geometrical parameterisation, the surface ∂D
is chosen as a rectangle lying in the xy plane, i.e. ∂D = [0; 1]m × [0; 2]m × {0}m. The
reference of the non-varying domain is ∂Df = [0; 1]m × [0; 1]m × {0}m, and the reference
of the varying portion is ∂Dv = [0; 1]m × [1; 2]m × {0}m. The normal parameterisation
along the z-axis is chosen as

hα : ∂D ∋ r D = (x, y, 0) ↦ h0 if r D ∈ ∂Df , and h0 + Σ_{k=1}^{3} αk sin (kπx) sin (2kπy) otherwise,
where h0 = 0.5 m, and α = (α1 , α2 , α3 ) takes its values in A = [−0.1; 0.1]3 m. This type
of parameterisation mimics the modal representation of a vibrating plate with fixed edges.
The theory of differential geometry provides a wide variety of tools that can be written
explicitly in terms of µα, such as the tangent plane Tr α Sα, which contains all the vectors
that are tangential to Sα at r α ∈ Sα [43, p. 75]. The tangent bundle T Sα can also be
introduced as the set of all the tangent planes associated with points of Sα [43, p. 81]
T Sα = {Tr α Sα , where r α ∈ Sα }.  (2.23)
Incident field
The device Ωα described above is subjected to an electromagnetic field F i = (E i , H i )
radiated by external sources Qext = (J ext , M ext ). This field acts as the incident field, i.e.
the field radiated by Qext in the absence of Ωα. All the parameters specifying F i , such as
its amplitude, its direction of propagation or its polarization angle constitute the vector
β = (β1 , . . . , βN ) ∈ B ⊂ RN , used to index the incident field as F iβ = (E iβ , H iβ ).
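As an illustration of such a parameterisation, the sketch below builds a linearly polarized plane wave from a hypothetical parameter vector β = (amplitude, incidence angles θ and ϕ, polarization angle η); the wavenumber is fixed arbitrarily to 2π, i.e. a 1 m wavelength. None of the names are from the thesis.

```python
import cmath
import math

def incident_field(r, beta):
    # beta = (E0, theta, phi, eta): amplitude, direction of propagation in
    # spherical angles, and polarization angle in the plane orthogonal to k.
    E0, theta, phi, eta = beta
    k0 = 2.0 * math.pi  # arbitrary wavenumber (1 m wavelength)
    k_dir = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
    # Orthonormal pair spanning the polarization plane.
    e_theta = (math.cos(theta) * math.cos(phi),
               math.cos(theta) * math.sin(phi),
               -math.sin(theta))
    e_phi = (-math.sin(phi), math.cos(phi), 0.0)
    pol = tuple(math.cos(eta) * a + math.sin(eta) * b
                for a, b in zip(e_theta, e_phi))
    phase = cmath.exp(-1j * k0 * sum(k * c for k, c in zip(k_dir, r)))
    return tuple(E0 * p * phase for p in pol)

E = incident_field((0.0, 0.0, 0.0), (2.0, 0.0, 0.0, 0.0))  # x-polarized at origin
```

Varying β sweeps the amplitude, direction and polarization of the excitation without touching the geometry, which is exactly the decoupling between α and β exploited later.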
The electromagnetic configuration is thus entirely specified by the d-dimensional input
vector γ, where d = M + N, which gathers the parameters α and β, i.e.

γ = (γ1 , . . . , γd ) = (α1 , . . . , αM , β1 , . . . , βN ) = (α, β) ∈ G = A × B ⊂ Rd .  (2.24)
2.2.2 Boundary-value problem
On the perfectly conducting surface Sα, the incident electric field E iβ induces a surface
current density J γ , which in turn radiates the scattered field F s [J γ ] = (E s [J γ ] , H s [J γ ]).
Consequently, and owing to the linearity of Maxwell’s equations with respect to the sources,
the total electromagnetic field F results from the superposition of F iβ and F s [J γ ]

F = F iβ + F s [J γ ] = (E iβ + E s [J γ ] , H iβ + H s [J γ ]).  (2.25)
The tangential component of the total electric field on the PEC surface Sα vanishes, i.e.
n(r) × (E iβ (r) + E s [J γ ](r)) = 0, ∀r ∈ Sα .  (2.26)
Given the definition of E s [J γ ] via Eq. (2.21), Eq. (2.26) is an integral equation having the
current density J γ as unknown, and n × E iβ as excitation. An electric-field trace operator
Z α can now be introduced as
Z α : X → Y, ψ ↦ Z α ψ, where (Z α ψ)(r) = n(r) × E s [ψ](r) for r ∈ Sα .  (2.27)
The domain X of this linear operator corresponds to the space of currents defined on and
tangential to Sα, and that are square integrable, as well as their divergences. The image
of this operator Y is a set of traces of electric fields defined on Sα, and such that they
are measurable together with their curl and divergence. A more detailed characterization
of these spaces via Sobolev’s topology can be found in [36, 38, 39], as well as results
establishing the invertibility of the operator Z α. Finally, the electric-field integral
equation (EFIE) associated with the scattering problem can be written as
Z α J γ (r) = −n(r) × E iβ (r), ∀r ∈ Sα .  (2.28)
Note that in the case where Ωα is a PEC thin wire, the so-called reduced-kernel EFIE is
obtained by requiring the cancellation of the longitudinal component of the electric field
along the axis of Ωα [44].
2.2.3 Method of moments
The solution to Eq. (2.28) is determined via the method of moments (MoM), which belongs
to the family of projection methods [38, 45]. It aims at approximating the solution to the
integral equation, which belongs to an infinite-dimensional space, by a sequence of
finite-dimensional solutions that eventually converge towards the desired solution.
Let Xn and Yn be finite-dimensional subspaces of X and Y, respectively, in which they are ultimately
dense, meaning that each point of X (resp. Y) is the limit of a sequence of elements
of Xn (resp. Yn ). A basis BX = (x1 , . . . , xn ) of Xn is defined, the elements of which
form the expansion functions. Similarly, let BY = (y 1 , . . . , y n ) be a basis of Yn , and
BY′ = (λ1 , . . . , λn ) be a dual basis of BY associated with the system ⟨· ; ·⟩Y . In Galerkin’s
procedure, the testing and expansion functions are chosen to be identical [46].
The current density J γ can be approximated by an expansion on BX as

Ĵ γ = Σ_{k=1}^{n} jk xk , where jk ∈ C, for k = 1, . . . , n.  (2.29)
The error en between the exact and approximate solutions is given as en = J γ − Ĵ γ . The
residual error Rn is defined as the image of en and can be expressed by using the linearity
and invertibility of Z α with respect to the current distribution

Rn = Z α (J γ − Ĵ γ ) = −n(r) × E iβ (r) − Z α Ĵ γ .  (2.30)
This error is minimized by requiring that Rn be orthogonal to the set BY′ of testing
functions, i.e. ⟨λi ; Rn ⟩Y = 0, for i ∈ {1, . . . , n}, which can be re-written, by using
Eqs (2.29), (2.30) and the linearity of ⟨· ; ·⟩Y with respect to its second argument, as

Σ_{k=1}^{n} jk ⟨λi ; Z α xk ⟩Y = −⟨λi ; n × E iβ ⟩Y , ∀i ∈ {1, . . . , n}.  (2.31)
These n equations are summarized in the following matrix form

[Zα ][I] = [V ],  (2.32)

through the introduction of the matrix [Zα ] = [⟨λi ; Z α xk ⟩Y ]i∈[1,n],k∈[1,n] ∈ Cn×n , and
the vectors [V ] = [−⟨λi ; n × E iβ ⟩Y ]i∈[1,n] ∈ Cn , and [I] = [jk ]k∈[1,n] ∈ Cn . The algebraic
equation (2.32) is eligible for a numerical solution, except at some irregular frequencies [39].
When Eq. (2.32) is solvable, the amplitudes of the current are deduced as
[I] = [Zα ]−1 [V ].  (2.33)
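The pipeline (2.32)–(2.33) — fill a dense complex matrix, then solve — can be mimicked on a toy problem. The kernel and excitation below are illustrative placeholders (a regularized 1-D oscillatory kernel, not an actual EFIE discretization); the solver is a plain Gaussian elimination with partial pivoting.

```python
import cmath
import math

def solve(Z, V):
    # Gaussian elimination with partial pivoting for a dense complex system.
    n = len(V)
    A = [row[:] + [V[i]] for i, row in enumerate(Z)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))  # pivot row
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0j] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Toy "impedance" matrix: regularized oscillatory kernel on n pulse segments.
n, h = 8, 1.0 / 8
nodes = [(i + 0.5) * h for i in range(n)]
kernel = lambda x, xp: cmath.exp(-2j * math.pi * abs(x - xp)) / (abs(x - xp) + h)
Z = [[h * h * kernel(xi, xk) for xk in nodes] for xi in nodes]
V = [-h * cmath.exp(-2j * math.pi * xi) for xi in nodes]  # tested excitation

I = solve(Z, V)  # expansion coefficients, as in Eq. (2.33)
residual = max(abs(sum(Z[i][k] * I[k] for k in range(n)) - V[i]) for i in range(n))
```

In production codes the factorization of [Zα] dominates the cost, which is why the stochastic methods of later chapters try to reuse it across parameter samples.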
2.3 Reciprocity theorem and observables

2.3.1 Reciprocity theorem
Lorentz’ reciprocity theorem permits the comparison between two electromagnetic states,
which do not necessarily coexist in the volume Ωα. This volume is bounded by the surface
∂Ωα and does not contain any non-reciprocal element [47–49]. The first state, referred to
by the subscript A , is characterized by the sources QA = (J A , M A ), which create the field
F A = (E A , H A ), whereas in the second state, the sources QB = (J B , M B ) give rise to
the field F B = (E B , H B ).
At this point, the concept of reaction ⟨QA ; F B ⟩Ωα between the sources QA and the field
F B , as defined by Rumsey [50], is recalled

⟨QA ; F B ⟩Ωα = ⟨J A ; E B ⟩Ωα − ⟨M A ; H B ⟩Ωα = ∫_Ωα (J A · E B − M A · H B ) dV.  (2.34)
The reciprocity theorem can then be formulated as
∫_∂Ωα (E A × H B − E B × H A ) · n dS = ⟨QA ; F B ⟩Ωα − ⟨QB ; F A ⟩Ωα ,  (2.35)
where the unit vector n is normal to ∂Ωα and points outwards from Ωα. The structure
of the electromagnetic field on the boundary ∂Ωα, which is governed by the transition
or radiation conditions, can be utilized advantageously to cancel the left-hand side of
Eq. (2.35). On the other hand, when none of the sources QA and QB has its support in
the volume Ωα, the right-hand side of Eq. (2.35) vanishes.
2.3.2 Equivalent Thévenin network
The reciprocity theorem is a powerful theoretical tool for the definition of observables,
i.e. macroscopic variables that characterize electromagnetic interactions. This feature
is illustrated hereafter for a passive electronic system occupying the volume Ωα and
corresponding to the description given in Section 2.2.1.
In a receiving state, all the ports of the electronic system are in an open-circuit state and
under the illumination of the incident field F iβ = (E iβ , H iβ ). Our objective is to determine
the parameters of the equivalent Thévenin network that, as depicted in Fig. 2.2(b), consists
of the ideal voltage source Ve in series with the impedance Ze . Seen from the port, this
generic network replaces the entire configuration displayed in Fig. 2.2(a), which is made of
Sα and E iβ .
Figure 2.2: Interaction configuration (a) and equivalent Thévenin circuit (b).
In an EMC context, the Thévenin model is very helpful when another piece of electronic equipment
needs to be connected to the port P , as it permits the study of the immunity of the receiver
to voltages induced by external sources, as well as an impedance-matching analysis.
Thévenin voltage source Ve and impedance Ze
As detailed in [47, 49–51], the voltage induced at the port P can be represented as

Ve (γ) = −(1/I0 ) ⟨J α ; E iβ ⟩∂Ωα = −(1/I0 ) ∫_{r∈∂Ωα} J α (r) · E iβ (r) dS.  (2.36)
This reaction integral involves the transmitting-state current J α, which flows on ∂Ωα
when a unit current source I0 = 1 A is applied at P , in the absence of E iβ . Over the
surface SP , given the small dimensions of the port, and due to the presence of the current
source, J α equals J α = I0 τ P , where τ P is a unit vector tangential to SP . Since the
current distribution J α has ∂Ωα = SP ∪ Sα as support, Eq. (2.36) also expresses the
property that Ve (γ) results from the evaluation of the generalized function J α by the
testing field E iβ [50, 52]. It is essential to note the difference between J α and the current
density J γ induced on ∂Ωα by E iβ : unlike J γ , the density J α is independent of E iβ and
depends solely on α.
Given the structure of ∂Ωα as the disjoint union SP ∪ Sα , Ve can be decomposed as

Ve (γ) = Vp,e (β) + Vs,e (γ).  (2.37)
The first contribution Vp,e (β) = −⟨I 0 ; E iβ ⟩SP arises from the direct coupling of E iβ with
the port and depends only on SP and β. The second term, Vs,e (γ) = −⟨J α ; E iβ ⟩Sα ,
stands for the interaction between the port and the field scattered by the PEC surface Sα
while in reception. The voltage Vs,e hence depends on γ, i.e., on both Sα and E iβ [53].
Concerning the equivalent impedance seen from P , it is defined by considering the thin
wire in a transmitting state, thus in the absence of E iβ . This parameter is therefore
independent of the vector β and is denoted Ze (α). The voltage VZ , existing at P in the
transmitting state, is related to the current I0 and to the impedance Ze via Ohm’s law
Ze (α) = (1/I0 ) VZ (α).  (2.38)
The reasoning underlying the definition of Ve can be re-employed to obtain VZ . In the
transmitting state, the excitation is the electromagnetic field F [I 0 ] radiated by the current
source I 0 = I0 τ P . Thus, VZ is obtained by substituting E[I 0 ] for E iβ in Eq. (2.36)
VZ (α) = −(1/I0 ) ⟨J α ; E[I 0 ]⟩∂Ωα .  (2.39)
Transmitting-state current J α
Both Ve and Ze depend on the transmitting-state current distribution J α, which is found
by solving an EFIE that models the transmitting state. The excitation is expressed by
the field F [I 0 ] = (E[I 0 ], H[I 0 ]) and, given the presence of the device Ωα, a scattered
field F s = (E s , H s ) is created. On the perfectly conducting surface Sα, the tangential
component of the total electric field vanishes, which leads to the integral equation
n(r) × {E[I 0 ](r) + E s (r)} = 0, for any r ∈ Sα .  (2.40)
As previously stated, with a PEC thin wire, the EFIE can be obtained by enforcing the
cancellation of the longitudinal component of the electric field along the axis of the wire.
The solution to this integral equation, by the method of moments presented in Section 2.2,
provides an approximation of J α written as

J α ≈ − [Zα ]−1 · E[I 0 ],  (2.41)
where the matrix [Zα] results from the discretization of the EFIE operator on the set of
expansion and testing functions used in the MoM. The filling of [Zα ] and the solution
of Eq. (2.41) come at a computational cost that should not be overlooked, as it dominates
the numerical effort required by this approach.
2.3.3 Extensions
The general representation of the observables, as a reaction integral between a current
distribution characterizing the system and a testing field, can be extended beyond the
scope of a reception study.
When, for instance, the device Ωα is regarded as a transmitting antenna, the term Ve can
be interpreted as the radiation pattern of J α in the direction and polarization of E iβ [50].
The impedance Ze , on the other hand, is equivalent to an antenna impedance.
In a scattering study, the coefficients of the bi-static radar cross section (RCS) can be
obtained likewise. These RCS coefficients S1,2 are usually defined as the field scattered by
Ωα in a given direction (θ2 , ϕ2 ) and polarization η2 , and caused by an incident field F 1
impinging from the direction (θ1 , ϕ1 ) with a polarization η1 [50]
S1,2 (α) = −⟨J 1α ; E 2 ⟩∂Ωα .  (2.42)
The current J 1α is generated by the incident field F 1 on Ωα, and E 2 is a plane wave
incident from the direction (θ2 , ϕ2 ) with the polarization η2 .
Nonetheless, the primary focus of this thesis resides in the study of the reception problem
where the objective is to characterize the equivalent Thévenin network. Moreover, from
this point onwards, and without any loss of generality, the discussion will mostly focus on
the Thévenin voltage chosen as the only observable. The results obtained can be extended
without difficulty to the entire Thévenin circuit.
2.4 Practical test cases
Several practical test cases relevant for EMC problems can be handled by the model
established in this chapter. An elementary emitter can first be taken into account in the
form of a Hertzian dipole, often employed to model electromagnetic interactions that
involve electrically small structures. A second example worth investigating concerns
a thin wire lying above an infinite metallic ground plane. This type of setup, which
often arises in transmission-line or antenna problems, is the principal example employed
throughout the thesis. The last type of interaction presented involves a finite metallic
plate with a dipole above it. Such a configuration is relevant for shielding applications,
where the effect of the metallic plate on the electromagnetic properties of the dipole needs
to be studied.
These three examples also represent a hierarchy in the complexity of the electromagnetic
models of the transmitting-state current J α. For the Hertzian dipole, J α is modeled
analytically as a point distribution, whereas in the second test case, the thin-wire
formulation describes J α through a 1–D line current found as the solution of a 1–D
integral equation. Finally, with the metallic plate, J α consists of a 2–D surface current
density determined by solving a surface integral equation.
All these structures are analyzed by using a FORTRAN program run on a
DELL PWS690 personal computer with a 3 GHz processor.
2.4.1 Elementary dipole
A Hertzian dipole represents one of the most elementary examples of antennas. As
sketched in Fig. 2.3, the dipole, centered at r dip , consists of two metallic branches of
length Ldip that are aligned along the direction of the unit vector udip . A space of length
LP separating these branches represents the port region. By hypothesis, the dimensions of
this antenna are electrically very small compared to the wavelength, i.e. LP ≪ Ldip ≪ λ.
Consequently, the dipole is a very good model for electrically small devices [54, 55],
and can also be perceived as the “building block” of more complicated antennas. The
collection of all the parameters of the dipole forms the vector α introduced in the previous
section, viz. α = (Ldip , LP , r dip , udip ) ∈ R × R × R3 × R3 = A.
In a transmitting state, when a current source I0 = 1 A is impressed at the port, the
current density on the dipole can be expressed as
$$\boldsymbol{J}_\alpha(\boldsymbol{r}) = I_0 L_{dip}\,\boldsymbol{u}_{dip}\,\delta_{\boldsymbol{r}_{dip}}(\boldsymbol{r}), \qquad \forall \boldsymbol{r} \in \mathbb{R}^3, \tag{2.43}$$
Figure 2.3: Elementary dipole.
where δr dip (r) = δ(r dip − r) is a Dirac distribution centered at r dip and such that, for any continuous function f defined on R3, ⟨δr dip ; f⟩ = f(r dip). This expression of J α can be
obtained from the standing-wave approximation of the current [40], in the limit where the
length of the dipole is infinitesimal. The voltage Ve (γ) induced by an incident field E iβ at
the port of the dipole in a receiving state, reads
$$V_e(\gamma) = -\frac{1}{I_0}\,\big\langle \boldsymbol{J}_\alpha;\, \boldsymbol{E}^i_\beta \big\rangle_{\boldsymbol{r}_{dip}} = -L_{dip}\,\boldsymbol{u}_{dip}\cdot \boldsymbol{E}^i_\beta(\boldsymbol{r}_{dip}). \tag{2.44}$$
Hence, Ve (γ) represents the polarization of the field E iβ along the direction udip up to the
scaling factor Ldip . Due to this interesting property, Hertzian dipoles are often utilized as
electric field probes, for instance, in bio-medical imaging applications [55, 56].
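The probe behaviour of Eq. (2.44) can be sketched numerically. The plane-wave excitation, frequency, amplitude, and dipole length below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def dipole_voltage(L_dip, u_dip, E_inc, r_dip):
    """Receiving-state voltage of Eq. (2.44): V_e = -L_dip * (u_dip . E_inc(r_dip))."""
    u = np.asarray(u_dip, dtype=float)
    u = u / np.linalg.norm(u)              # enforce a unit orientation vector
    return -L_dip * np.dot(u, E_inc(np.asarray(r_dip)))

# Illustrative z-polarized plane wave E(r) = E0 * uz * exp(-j k0 x); the values
# below are arbitrary assumptions chosen only for demonstration.
k0 = 2 * np.pi / 0.3                        # wavenumber for lambda = 0.3 m (1 GHz)
E_inc = lambda r: 1.0 * np.array([0.0, 0.0, 1.0]) * np.exp(-1j * k0 * r[0])

V = dipole_voltage(L_dip=0.01, u_dip=[0, 0, 1], E_inc=E_inc, r_dip=[0.0, 0.0, 0.05])
print(V)   # the probe reads -L_dip times the z-component of the incident field
```

As expected, the voltage is simply the polarization of the field along udip scaled by −Ldip, which is why the dipole acts as a field probe.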
2.4.2 Thin wire
Until the Second World War, wire antennas were predominant [41]. Nowadays, they can still be encountered, for instance, in inexpensive radios. Thin-wire structures are also common in EMC problems, where they occur in integrated circuits, in wire bundles used in buildings, or in harnesses present in vehicles and aircraft [57]. In most of these applications, the wires lie on top of larger surfaces such as a printed-circuit board (PCB), in the soil, or below the water surface.
In the present case, the attention is focused on a setup that is derived from an EMC
benchmark [58]: it consists of a thin-wire frame mounted on top of a PEC ground plane
of infinite extent, as sketched in Fig. 2.4. The wire has a tubular geometry with a circular
cross-section of radius a. The geometry of the straight wire, denoted D, can be partitioned
into a fixed part Df and an undulating portion Dv . The domain Df comprises two
vertical 5 cm thin wires, one of which contains a port region denoted by P . These thin
wires are connected below to the horizontal PEC ground plane lying in the xy plane, and
above to two horizontal branches that are each 2 cm long. The remainder of the straight
wire, denoted Dv , corresponds to a 1 meter long horizontal wire that connects the 2 cm
horizontal branches.
Figure 2.4: Thin wire Ωα illuminated by E iβ .
With the straight wire D as reference, the geometry of the deformed thin wire follows
from the smooth mapping µα:
$$\mu_\alpha : \mathcal{D} \longrightarrow \mathcal{S}_\alpha, \qquad \boldsymbol{r}_D = (x, y, z) \longmapsto \boldsymbol{r}_\alpha = \begin{cases} \boldsymbol{r}_D & \text{if } \boldsymbol{r}_D \in \mathcal{D}_f, \\ \boldsymbol{r}_D + x_\alpha(y)\,\boldsymbol{u}_x + z_\alpha(y)\,\boldsymbol{u}_z & \text{if } \boldsymbol{r}_D \in \mathcal{D}_v, \end{cases} \tag{2.45}$$
where the functions xα and zα are smooth and at least twice continuously differentiable.
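The mapping of Eq. (2.45) can be sketched numerically. The sinusoidal profiles used for x_alpha and z_alpha below are hypothetical examples of smooth, twice continuously differentiable deformations; their amplitudes are arbitrary assumptions, not thesis data.

```python
import numpy as np

# Hypothetical deformation profiles x_alpha(y), z_alpha(y): smooth, twice
# continuously differentiable, with illustrative amplitudes (not thesis data).
Lv = 1.0                                    # length of the undulating portion Dv (m)
x_alpha = lambda y: 0.05 * np.sin(2 * np.pi * y / Lv)
z_alpha = lambda y: 0.02 * np.sin(np.pi * y / Lv)

def mu_alpha(r_D, in_Dv):
    """Mapping of Eq. (2.45): identity on the fixed part Df, deformation on Dv."""
    x, y, z = r_D
    if not in_Dv:
        return np.array([x, y, z])
    return np.array([x + x_alpha(y), y, z + z_alpha(y)])

p_fixed = mu_alpha((0.0, 0.0, 0.05), in_Dv=False)   # a point of Df is unchanged
p_moved = mu_alpha((0.0, 0.5, 0.05), in_Dv=True)    # a mid-span point of Dv shifts
print(p_fixed, p_moved)
```

Points of the fixed part Df (the vertical risers and short horizontal branches) are left untouched, while points of Dv are displaced in the x and z directions only, as in the thesis geometry.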
The literature treating this type of electromagnetic interaction is rich with methods that range from analytical transmission-line theory [31, 59, 60] to numerical integral-equation rationales [44, 58, 61–63]. We have chosen to adopt the latter methodology. The major steps of this approach are reported hereafter; the interested reader can find more extensive details in [51, 53].
Given the geometry of the setup, it is convenient to resort to a curvilinear cylindrical
coordinate system with the right-handed orthonormal basis (ur , uϕ , us ), where us is
a vector tangential to the axis of the wire Sα. Thus, the current density J α has a
circumferential component Jα,ϕ and a longitudinal one denoted Jα,s . Owing to the
negligible radius a in comparison to the length Lα of the wire which is itself negligible
with respect to the wavelength, i.e. a ≪ Lα ≪ λ, the thin-wire approximation [44] is
employed and results in the fact that J α is essentially directed in the longitudinal direction.
The thin-wire approximation also grounds the assumption that J α(r = (r, ϕ, s)) is circumferentially invariant, thus leading to the definition of the line current I α:
$$\boldsymbol{I}_\alpha(s) = I_\alpha(s)\,\boldsymbol{u}_s = \left[ \int_{-\pi}^{\pi} J_{\alpha,s}(a, \varphi, s)\, a\, d\varphi \right] \boldsymbol{u}_s. \tag{2.46}$$
Hence, this thin-wire hypothesis reduces the formulation of the scattering problem to line integrals along the axis of the thin wire¹.
In a transmitting state, the excitation corresponds to the field E[I 0 ], which is radiated by the unit current source I 0 connected to the port. The transmitting-state current that it induces on the geometry Sα is found by solving a Pocklington thin-wire EFIE [44, 63].
The EFIE is derived by observing that the total tangential electric field inside the perfectly
conducting wire vanishes. Applying this relation on the axis of the wire leads to
$$-\boldsymbol{u}_s(\boldsymbol{r}) \cdot \boldsymbol{E}^t(\boldsymbol{r}) = \boldsymbol{u}_s(\boldsymbol{r}) \cdot \boldsymbol{E}^s[\boldsymbol{J}_\alpha](\boldsymbol{r}) = -j\omega\mu \int_0^{L_\alpha} g_a(\boldsymbol{r}', \boldsymbol{r})\, I_\alpha(s')\, ds' - \frac{1}{j\omega\varepsilon} \int_0^{L_\alpha} \partial_s g_a(\boldsymbol{r}', \boldsymbol{r})\, \partial_{s'} I_\alpha(s')\, ds', \tag{2.47}$$
for any observation point r = (r, ϕ, s) = (0, 0, s) on the axis, while the source point
r ′ = (a, ϕ′ , s′ ) is on the mantle of Sα. The kernel of this integral equation is the so-called
reduced kernel ga , which depends on the distance |r − r ′ | between r and r ′ as follows
$$g_a(\boldsymbol{r}', \boldsymbol{r}) = \frac{e^{-jk_0 |\boldsymbol{r} - \boldsymbol{r}'|}}{4\pi |\boldsymbol{r} - \boldsymbol{r}'|}. \tag{2.48}$$
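The reduced kernel of Eq. (2.48) is straightforward to evaluate; a minimal sketch follows, with illustrative values for the wire radius, wavenumber, and source/observation points.

```python
import numpy as np

def reduced_kernel(r_src, r_obs, k0):
    """Reduced kernel of Eq. (2.48): exp(-j k0 R) / (4 pi R), with R = |r - r'|.
    With r' on the mantle and r on the axis, R >= a > 0, so no singularity occurs."""
    R = np.linalg.norm(np.asarray(r_obs) - np.asarray(r_src))
    return np.exp(-1j * k0 * R) / (4 * np.pi * R)

# Illustrative values: 1 mm wire radius, lambda = 0.3 m, 2 cm axial separation
a, k0 = 1e-3, 2 * np.pi / 0.3
g = reduced_kernel(r_src=(a, 0.0, 0.10), r_obs=(0.0, 0.0, 0.12), k0=k0)
print(abs(g))   # magnitude decays as 1 / (4 pi R)
```

Because the observation point is on the axis while the source point stays on the mantle, the distance R never vanishes, which is precisely why the reduced kernel avoids the singular treatment required by the exact kernel.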
An alternative EFIE is derived by expressing the nullity of the tangential electric field on the mantle of the wire, which leads to the so-called thin-wire EFIE with exact kernel [63]. However, the presence of the observation point on the mantle requires an additional circumferential integration, as well as a cautious handling of the singularity arising when the source and observation points coincide [63]. These difficulties explain the faster performance of the reduced-kernel EFIE in comparison with the exact-kernel EFIE.
¹ The decision to focus on J α,s can also be justified by considering the scattered field in the far-field region, which is dominated by the contribution of J α,s [36, 64].
Unlike the exact-kernel EFIE, the reduced-kernel EFIE is ill-posed, which can be
problematic if the mesh of the wire is refined excessively. However, for a given level of
discretization, the reduced-kernel EFIE provides accurate results as long as the criterion
a ≪ Lα is fulfilled, and the end effects on the current are neglected [36, 40]. This approach
is hence privileged to study the thin wire over the infinite ground plane.
The transmitting-state current I α is found numerically by applying a Galerkin procedure, which uses a set of quadratic-segment basis functions well suited for the modeling of curved wires [61].
The thin-wire example plays a key role throughout this thesis, as it will serve to illustrate
the forthcoming developments, unless stated otherwise.
2.4.3 PEC plate of finite extent
As a third example, the coupling between a Hertzian dipole, such as the one introduced
in Section 2.4.1, and a PEC plate of finite extent is investigated. The objective is to
study the effect of the metallic plate on the voltage induced by an incident field at the
port of the dipole. The plate can be regarded either as a shielding surface, or as a
scattering obstacle interfering with the direct coupling between the incident field and the
dipole.
Figure 2.5: Deformed surface Sα and elementary dipole located at r dip , in the direction udip .
The curved metallic plate, shown in Fig. 2.5, is normally parameterised with respect to
the flat plate D, which is fixed and included in the horizontal plane Oxy. To this end, a
mapping µα is introduced
$$\mu_\alpha : \mathcal{D} \longrightarrow \mathcal{S}_\alpha, \qquad \boldsymbol{r}_D = (x, y, 0) \longmapsto \boldsymbol{r}_\alpha = \boldsymbol{r}_D + z_\alpha(x, y)\, \boldsymbol{u}_z, \tag{2.49}$$
where the function zα is smooth and at least twice continuously differentiable.
The elementary dipole is located at the fixed position r dip with a fixed orientation udip
and a moment denoted J dip .
The voltage Ve induced at the center of the dipole by the incident field E iβ can be expressed by Eq. (2.37) as
$$V_e(\gamma) = V_{i,e}(\beta) + V_{s,e}(\gamma). \tag{2.50}$$
The voltage $V_{i,e} = -\frac{1}{I_0} \langle \boldsymbol{J}_{dip};\, \boldsymbol{E}^i_\beta \rangle$ arises from the direct coupling between E iβ and the dipole, whereas $V_{s,e}(\gamma) = -\frac{1}{I_0} \langle \boldsymbol{J}_\alpha;\, \boldsymbol{E}^i_\beta \rangle_{\mathcal{S}_\alpha}$ stems from the field scattered by Sα. By
definition, J α is the current distribution induced on Sα resulting from the field radiated
by the dipole in a transmitting state. This current density is determined by solving an
equivalent EFIE by a method of Galerkin, which uses Rao-Wilton-Glisson (RWG) basis
functions defined on Sα [65]. This test case will be analyzed in more depth at a later stage
of this dissertation (see Section 10.4).
2.5 Conclusion
A general representation of electromagnetic interactions has been established. This model
requires, as input, the specification of the geometry of the receiving device, and of the
properties of the incident field. For given values of such a configuration, an EFIE is solved
by the method of moments to obtain output variables, such as the Thévenin circuit, which
is chosen here as the observable. This deterministic model is schematically represented in
Fig. 2.6, with the induced voltage Ve taken as an example of observable.
Although this entire approach assumes a known deterministic configuration, the case of
multiple possible configurations needs to be tackled efficiently and is addressed in the
following chapter.
Figure 2.6: General representation of the deterministic model: inputs α ∈ A (geometry), β ∈ B
(incident field), output Ve ∈ ΩV (Thévenin voltage source).
Chapter 3
Stochastic parameterisation
Limits of the deterministic models
All the quantities that appear in the deterministic electromagnetic model established in
the previous chapter can be represented as mappings
$$Y : \mathcal{I} \ni i \longmapsto Y(i) \in \mathcal{W}. \tag{3.1}$$
The domain I contains the variables i which parameterise either the geometry of the scattering object (i = α, I = A), the incident field (i = β, I = B), or both (i = γ = (α, β), I = G = A × B). The value Y(i) can e.g. be a scalar such as Ve(γ) or a vector field such as J α and E iβ. The ensemble W is a known Banach space containing the range of Y, i.e. Y = Y(I) ⊂ W. For instance, for the voltage Ve defined on G = A × B, since each value Ve(γ) is a complex number, the choice W = C makes sense.
In the deterministic case, which corresponds to the absence of uncertainty, since the
values of all the inputs i are fixed and known in I, the corresponding response Y (i) can
be computed. In real interactions however, the parameters i of the configuration generally
vary in their ranges I. These variations may be caused by changing operational conditions due to fatigue or ageing, by a changing environment, or by an incomplete knowledge of the configuration of the interaction.
For sensitive or ill-conditioned problems, the uncertainty of i can produce significant
modifications of Y , with penalizing consequences on the electromagnetic compatibility of
the electronic device considered. Therefore, all these uncertainties need to be taken into
account and managed in the electromagnetic model.
Quantification of the uncertainty of Y
In the presence of uncertainties, a systematic evaluation of the deterministic model Y
for each value of i in I, can be very demanding given the numerical cost involved in the
integral-equation-based model. Moreover, the resulting set Y still requires statistical post-processing to reveal its essential patterns, such as its average, its variance or its empirical distribution function. These statistics are very valuable and achieve a data compression by summarizing the key features of the possibly large set Y.
An alternative quantification method lies in a sensitivity analysis where a nominal value
i0 of the input i is chosen in I, and where the effect of small variations around i0 on Y is
assessed. This approach is usually very time-efficient as it only requires the computation
of Y (i0 ) and of some directional derivatives of Y around i0 . Despite the efficiency of such
an approach for smoothly varying processes Y in terms of i, the main drawback of the
sensitivity method generally resides in the locality of the conclusions that can be drawn
from such an analysis. The range of validity of these methods can be broadened via a
“marching-on-in-anything” method [66].
The method privileged in this thesis hinges on a stochastic rationale, where the variations
of i in I are viewed as random. Consequently, the response Y (i), which is a function of
i, will also be randomly distributed in W. Probability theory can then be employed to
determine the statistical characteristics of Y , without resorting to extensive deterministic
calculations. In addition, the objective of this rationale is to obtain statistical information
that is globally valid regardless of the smoothness of Y as a function of i, as opposed to
the sensitivity analysis. Compared to the deterministic and sensitivity approaches, the
main advantage of the stochastic method is that it considers the sets I and Y as main
objects of interest, rather than the individual values of i and Y (i).
3.1 Probability space
At the heart of the stochastic approach lies the assumption that the input parameter i
is randomly distributed in I. The mathematical transcription of this hypothesis imposes
that I be equipped with a σ-algebra EI and a measure PI to form a probability space
denoted (I, EI , PI ) [67, p. 111],[68, p. 10]. A more detailed definition of a probability
space can be found in Appendix A. The σ-algebra EI is a collection of subsets of I that
are also called events in the probabilistic vocabulary. The number PI (B) measures the
probability of occurrence of the event B ∈ EI , i.e. the probability of having i ∈ B, hence
PI (I) = 1.
All the probability measures considered in this thesis are regular. This implies that a
probability density function (pdf) fI can be uniquely associated with PI , where fI is a
Riemann-integrable positive function such that
$$P_{\mathcal{I}}(B) = \int_B f_{\mathcal{I}}(i)\, di, \qquad \text{for any set } B \in \mathcal{E}_{\mathcal{I}}. \tag{3.2}$$
The definition of PI highlights that the σ-algebra EI contains the useful information about
the structure of the randomness of i in I [69, p. 65]. If i is a discrete random variable,
EI will consist of collections of single elements, and PI , defined by replacing the integral
in Eq. (3.2) by a sum, will therefore measure the probability of having i equal to given
values. If on the other hand I ⊂ R, EI will be a Borel space generated by open intervals
and accordingly, PI will quantify the probability of having i in a given interval, hereby
leading to the definition of a cumulative distribution function [67, p. 3].
3.2 Randomization of the EMC problem
3.2.1 Random input
From this point onwards, the vectors α, β and γ will refer to the random components of
the parameters of the geometry, the incident field, and the configuration, respectively. This
notation highlights the randomness of the input parameters as it treats the deterministic
components implicitly.
It is reasonable to consider the parameters α of the geometry, and β of the incident field
to be mutually statistically independent and, unless stated otherwise, this assumption will
be made, i.e.
$$P_\gamma(\gamma' \in \mathcal{G}) = P_\alpha(\alpha' \in \mathcal{A})\, P_\beta(\beta' \in \mathcal{B}), \qquad \text{for } \gamma' = (\alpha', \beta') \in \mathcal{G} = \mathcal{A} \times \mathcal{B}. \tag{3.3}$$
Interactions between random incident fields (random β) and deterministic objects
(deterministic α) arise in problems as diverse as Mode-Stirred Chamber (MSC)
problems [27, 28], indoor/outdoor propagation problems [23, 24], or transmission-line
theory [31, 32]. For the deterministic model defined in the previous chapter, this type of
interaction translates into a random excitation E iβ and a deterministic EFIE operator.
The converse situation, where a deterministic incident field (deterministic β) illuminates
a randomly varying object (random α) has been tackled in rough-surface scattering
problems where the infinite extent of the scattering surfaces allows for simplifications
of the electromagnetic equations [19, 20]. In MSC problems, the study of the effect of the
randomly varying stirrer is often avoided by making the assumption of an ideal chamber
with a prescribed distribution of the fields in the test region of the chamber [27, 28, 30].
This type of coupling has also been studied in thin-wire problems, where the randomly
varying wire structure is usually modeled by transmission-line theory [32, 52].
Our primary objective is to study integral-equation models of electromagnetic interactions
and to investigate the effect of random geometrical variations of scattering devices of finite
extent. The emphasis will therefore be put on the case where the scattering object is
random (i = α random) and submitted to a deterministic incident field (deterministic β),
whereas the fully stochastic problem (i = γ = (α, β) random) is treated as an extension.
3.2.2 Input probability measure PI
The choice of the probability measure PI is dictated by the available knowledge on the
distribution of the parameters i in I. Such knowledge can for instance be obtained from
measurements [70], or by a Kriging method applied to manufacturing data sheets [71, 72].
In rough-surface scattering problems, geometrical models of the surface of the soil or the
sea are based on measurements [73]. In propagation problems involving multiple obstacles,
the distribution of the parameters of the incident field can be deduced from the location
of intermediate obstacles that create parasitic signals [74].
In any case, the so-called maximum-entropy principle [75, 76] allows for an unambiguous
choice of PI according to the statistical knowledge available concerning the distribution
of i in I. When, for instance, I is bounded and nothing else is known about i, the
corresponding maximum-entropy distribution is uniform.
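This last statement can be checked numerically: among candidate pdfs on a bounded interval with no further constraints, the uniform density attains the largest differential entropy. The Beta(2,2) and linearly tilted competitors below are arbitrary choices introduced only for comparison.

```python
import numpy as np

# Candidate pdfs on I = [0, 1]; only the bounds are assumed known. The Beta(2,2)
# and linearly tilted densities are arbitrary competitors for comparison.
i = np.linspace(1e-6, 1 - 1e-6, 200001)

def entropy(f):
    """Differential entropy H[f] = -int f ln f di (trapezoidal rule)."""
    g = -f * np.log(f)
    return np.sum((g[:-1] + g[1:]) * (i[1] - i[0])) / 2

f_uniform = np.ones_like(i)        # maximum-entropy pdf under bounds only, H = 0
f_beta22  = 6 * i * (1 - i)        # Beta(2,2) pdf
f_tilted  = 2 * i                  # linearly tilted pdf

print(entropy(f_uniform), entropy(f_beta22), entropy(f_tilted))
```

Any extra constraint (a prescribed mean, say) would single out a different maximum-entropy pdf; the principle always selects the least informative distribution compatible with the available knowledge.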
3.2.3 Propagation of the randomness
Given its definition in Eq. (3.1) and the randomness of i ∈ I, the values of the response function Y(i) become in turn randomly distributed in W. For example, with the induced voltage defined in Eq. (2.36) as
$$V_e(\gamma) = -\frac{1}{I_0} \int_{\boldsymbol{r} \in \partial\Omega_\alpha} \boldsymbol{J}_\alpha(\boldsymbol{r}) \cdot \boldsymbol{E}^i_\beta(\boldsymbol{r})\, dS,$$
the randomness of the geometry of the device Ωα affects Ve through its support ∂Ωα and the current density J α. On the other hand, the effect of a random incident field appears via the term E iβ.
A new probability space, which is yet to be specified, is hence defined as (Y, EY , PY ), where
Y = Y (I), EY is a σ-algebra associated with Y, and PY is a probability measure on EY .
The determination of the image space Y is generally non-trivial. Unless Y is monotonic
or has a simple expression in terms of i, Y can only be identified via a deterministic sweep,
i.e. by computing Y (i) for every i in I. Alternatively, the random distribution of the
samples Y (i) can be assessed in the known space W containing Y. The σ-algebra EY can
always be chosen as the σ-algebra generated by the collection of subsets of W. Since the
randomness of Y is induced by the stochastic nature of i, its probability law PY can be
deduced from PI by regarding Y as a random variable, as is done hereafter.
3.3 Random variables
3.3.1 General definitions and properties
Random variable [77, p. 14]:
Given the probability space (I, EI , PI ) and the measurable Banach space (X , EX ), where
EX is a σ-algebra on X , an X -valued random variable L is a measurable function from
I to X .
The measurability should be understood in the sense that, for any set U ∈ EX , the
reciprocal image L−1 (U) belongs to EI . For all the deterministic quantities considered in
this thesis, the mapping Y is measurable and therefore defines a random variable valued
in the measurable space (Y, EY ).
Induced measure:
The random variable Y induces a measure PY on EY , also known as the image measure
$$P_Y : \mathcal{E}_{\mathcal{Y}} \longrightarrow \mathbb{R}^+, \qquad U_0 \longmapsto P_Y(U_0) = P_{\mathcal{I}}(Y^{-1}(U_0)). \tag{3.4}$$
This measure PY is hence expressed in terms of the measure PI . The knowledge of PY
provides a complete characterization of the randomness of Y in the probability space
(Y, EY , PY )¹. Similarly, provided that it exists, the pdf fY of Y is defined as
$$P_Y(C) = \int_C f_Y(Y')\, dY', \qquad \text{for any set } C \in \mathcal{E}_{\mathcal{Y}}, \tag{3.5}$$
as well as the expectation of Y, which is denoted E[Y]:
$$\mathrm{E}[Y] = \int_{\mathcal{Y}} Y'\, dP_Y(Y') = \int_{\mathcal{Y}} Y' f_Y(Y')\, dY'. \tag{3.6}$$
The definitions of fY and E[Y ] depend on the availability of the probability measure PY .
¹ Note the following equivalence: on the one hand, a random variable is randomly distributed in its range; on the other hand, the identity mapping of a probability space defines a random variable.
3.3.2 Problematic definition of PY
The definition of PY in Eq. (3.4) suffers from the mathematical and practical difficulty of
characterizing, even partially, the reciprocal mapping Y −1 . This particular issue is absent
for problems where the function Y is bijective or expressed as a tractable analytical expression. In thin-wire scattering problems, for instance, transmission-line theory provides an analytical expression for the induced voltage Ve, enabling the use of Eq. (3.4) [31, 32].
In our case however, a complete characterization of the randomness of Y through PY and
fY is, a priori, not practically feasible. Moreover, when the process Y is function-valued
(e.g. Y : i = α 7→ µα, Y : i = β 7→ E iβ or Y : i = α 7→ J α), it is generally tedious if
not impossible to express the inverse mapping Y −1 in terms of the inputs i. Even when
Y is scalar-valued (e.g. Y : i = γ 7→ Ve (γ)), Eq. (3.4) is of little use given the fact that
Y depends on i through the solution of a boundary-value problem modeled by an integral
equation.
3.3.3 Fundamental theorem
Regarding E[Y ], however, the following fundamental theorem allows for its computation
without resorting to PY or fY .
Fundamental transport theorem
Given a measurable function h on the probability space (I, EI , PI ), h(i) defines a measurable random variable whose expectation E[h(i)] can be written in terms of the pdf fI of i as
$$\mathrm{E}[h(i)] = \int_{\mathcal{I}} h(i) f_{\mathcal{I}}(i)\, di. \tag{3.7}$$
A proof of this measure-theoretical theorem can be found in [68, p. 124]. Thus E[h(i)]
can be evaluated without knowing the law Ph(i) but merely from PI and fI . For this very
interesting reason, this theorem is sometimes referred to as the Law Of The Unconscious
Statistician or LOTUS [78, p. 83].
Applied to the generic model described by Eq. (3.1), this theorem implies that, for any
measurable function g defined on Y, the expectation E[g(Y )] is obtained as
$$\mathrm{E}[g(Y)] = \int_{\mathcal{I}} g(Y(i)) f_{\mathcal{I}}(i)\, di. \tag{3.8}$$
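A minimal sketch of Eq. (3.8) on a toy model: i uniform on [0, 1] and Y(i) = i², both assumptions chosen so the result, E[Y] = 1/3, is known in closed form. The moment is computed entirely on the input space, without ever constructing PY.

```python
import numpy as np

# Toy model (an assumption for illustration): i ~ U(0, 1), Y(i) = i**2, g(Y) = Y,
# so that E[g(Y)] = int_0^1 i^2 di = 1/3 in closed form.
i = np.linspace(0.0, 1.0, 100001)
f_I = np.ones_like(i)                  # uniform pdf on I = [0, 1]
Y = i**2                               # deterministic model evaluated on the grid
gY = Y                                 # g is the identity (first-order moment)

# Trapezoidal evaluation of Eq. (3.8), entirely on the input space I
E_gY = np.sum((gY * f_I)[:-1] + (gY * f_I)[1:]) * (i[1] - i[0]) / 2
print(E_gY)   # close to 1/3
```

This is exactly the practical appeal of LOTUS: only forward evaluations Y(i) are needed, never the inverse mapping Y⁻¹ or the induced law PY.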
This Riemann integral can be evaluated numerically as it involves a known domain of
integration I, a prescribed probability measure fI , and a computable integrand g(Y (i)).
Nevertheless, probability spaces corresponding to function- or operator-valued random
variables are not always intuitive to manipulate. Moreover the definition of measurable
functions acting on these spaces requires some caution. This limitation can be circumvented
by regarding random functions or operators as indexed families of random variables
valued in spaces that are more intuitive to handle, viz R or C. The concept of stochastic
processes, presented in Appendix A, is then more suitable.
3.4 Characterization of the stochastic observables
The stochastic parameterisation of the inputs of the electromagnetic model leads to the
randomization of all the quantities that appear in the deterministic model, and most
importantly of the observable, which corresponds to the induced Thévenin voltage. Each
value Ve (γ) of this observable is a complex number obtained from the input
γ = (α, β) ∈ G = A × B defining the configuration, i.e.
$$V_e : \mathcal{G} \ni \gamma \longmapsto V_e(\gamma) \in \mathbb{C}. \tag{3.9}$$
The randomness of γ means that Ve (γ) becomes a complex-valued random variable, which
can be characterized either partially through one of its components, or completely by
taking into account its complex-valued nature.
3.4.1 Characterization of components of Ve
The complex random variable Ve can be partially studied through a real-valued function
of Ve , such as Y (Ve (γ)) = Re(Ve (γ)), Y (Ve (γ)) = Im(Ve (γ)) or Y (Ve (γ)) = |Ve (γ)|.
Statistical moments
Statistical moments of Y correspond to the expectation of polynomial functions of Y .
The first-order moment, or average or mean E[Y ], represents the center of gravity of PY .
It can serve to obtain a centered random variable Yc with vanishing average
$$Y_c = Y - \mathrm{E}[Y]. \tag{3.10}$$
The variance var[Y ] is a second-order moment, defined as
$$\mathrm{var}[Y] = \mathrm{E}[Y_c^2] = \mathrm{E}[Y^2] - (\mathrm{E}[Y])^2 = (\sigma[Y])^2 \ge 0, \tag{3.11}$$
where σ[Y] = √var[Y] is the standard deviation, or root-mean-square (rms), of Y. The standardized random variable Ys, with vanishing average and unit variance, is deduced from Y, E[Y], and σ[Y] as
$$Y_s = \frac{Y - \mathrm{E}[Y]}{\sigma[Y]} = \frac{Y_c}{\sigma[Y]}, \qquad \text{when } \sigma[Y] \neq 0, \quad \text{so that } \mathrm{E}[Y_s] = 0,\ \sigma[Y_s] = 1. \tag{3.12}$$
By using this normalization, the statistical properties of different random variables can
be compared on a common ground.
These first statistical moments E[Y ] and σ[Y ] have the same dimension as Y . They
quantify, both physically and statistically, the dispersion of Y by providing general bounds
for the distribution PY via, for example, Chebychev’s [68, p. 130] or Markov’s [68, p. 131]
inequalities. The average E[Y ] and the rms σ[Y ] are also praised for their ability to
completely define Gaussian probability distributions. Gaussian variables are of prime
importance in practice as they correspond to limiting random variables resulting from
summations of a very large number of random parameters with finite variances, as stated
by the Central Limit Theorem [68, p. 214]. Nonetheless, all the interactions considered
in this thesis will involve only a finite number of random inputs, which prevents us from
invoking asymptotic theorems to deduce the probability distribution of the observable.
The third- and fourth-order moments of Y , also known as the skewness sk[Y ] and the
kurtosis κ[Y ], respectively, are defined as
$$\mathrm{sk}[Y] = \mathrm{E}[Y_s^3] \in \mathbb{R}, \qquad \text{and} \qquad \kappa[Y] = \mathrm{E}[Y_s^4] \ge 0. \tag{3.13}$$
These dimensionless moments yield qualitative information on PY in comparison with a
Gaussian distribution for which sk[Y ] = 0 and κ[Y ] = 3. Tests based on the skewness
and the kurtosis are often employed in Radio-Frequency Interference (RFI) mitigation
problems to assess the Gaussianity of electromagnetic signals [79, 80].
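The moments of Eqs. (3.10)–(3.13) can be estimated from samples. The sketch below, with arbitrary sample sizes and seeds, checks the Gaussian reference values sk = 0 and κ = 3 against a non-Gaussian (uniform) sample, in the spirit of the Gaussianity tests mentioned above.

```python
import numpy as np

def moments(y):
    """Sample estimates of E[Y], sigma[Y], sk[Y] = E[Ys^3], kappa[Y] = E[Ys^4]."""
    m, s = y.mean(), y.std()
    ys = (y - m) / s                   # standardized variable of Eq. (3.12)
    return m, s, (ys**3).mean(), (ys**4).mean()

rng = np.random.default_rng(0)
_, _, sk_g, kap_g = moments(rng.standard_normal(1_000_000))   # Gaussian reference
_, _, sk_u, kap_u = moments(rng.uniform(-1, 1, 1_000_000))    # non-Gaussian sample

print(sk_g, kap_g)   # near 0 and 3 for the Gaussian
print(sk_u, kap_u)   # near 0 and 9/5 for the (platykurtic) uniform
```

Both distributions are symmetric (sk ≈ 0), but the kurtosis immediately separates them, which is why such tests can flag non-Gaussian interference.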
Higher-order moments of Y can also be computed, although their practical interpretation
is not always obvious. The determination of a large number of moments often serves to
approximate the probability distribution PY in approaches such as the maximum-entropy
principle [76], which is applied in Chapter 8.
Characteristic function:
Rather than focusing on the statistical moments of Y , we can study the characteristic
function ΦY , which reads
$$\Phi_Y(t) = \mathrm{E}\!\left[e^{jtY}\right] = \int_{\mathbb{R}} e^{jty'} f_Y(y')\, dy', \qquad \forall t \in \mathbb{R}. \tag{3.14}$$
For each value of t ∈ R, the value ΦY (t) corresponds to the expectation of the random
variable ejtY , and can hence be computed via the fundamental theorem (Eq. (3.8)). Once
ΦY has been accurately evaluated, successive differentiations of ΦY around t = 0 provide
the statistical moments of Y [68, p. 153]. Moreover, Eq. (3.14) highlights the fact that ΦY is the Fourier transform of the pdf fY. This crucial link permits the recovery of fY by an inverse Fourier transformation of ΦY [81–83], as will be demonstrated in Chapter 9.
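The Fourier link between ΦY and fY can be illustrated on a case with a known answer. The standard Gaussian below is an assumption chosen so the recovered pdf can be verified against the closed form; it is not a thesis result.

```python
import numpy as np

# Characteristic function of a standard Gaussian (a known-answer assumption):
# Phi_Y(t) = exp(-t^2 / 2); the inverse transform should return the Gaussian pdf.
t = np.linspace(-12.0, 12.0, 4001)
dt = t[1] - t[0]
Phi = np.exp(-t**2 / 2)

def f_Y(y):
    """f_Y(y) = (1 / 2 pi) int Phi_Y(t) exp(-j t y) dt (trapezoidal rule)."""
    integrand = Phi * np.exp(-1j * t * y)
    return np.real(np.sum((integrand[:-1] + integrand[1:]) * dt / 2)) / (2 * np.pi)

print(f_Y(0.0), f_Y(1.0))   # compare with exp(-y**2 / 2) / sqrt(2 pi)
```

In practice ΦY would first be estimated point by point via the fundamental theorem, after which the same inverse transform recovers the pdf.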
3.4.2 Full characterization of Ve
The sole study of Re(Ve ), Im(Ve ) or |Ve | only provides marginal statistical information
on Ve . The term “marginal” should be understood within the formalism of stochastic
processes, as Ve can also be regarded as the real vector with Re(Ve ) and Im(Ve ) as
components. Unless Re(Ve ) and Im(Ve ) are statistically independent, determining the
distributions PRe(Ve ) and PIm(Ve ) does not yield a complete characterization of Ve .
One way around this limitation consists in taking the vectorial nature of Ve into account as
is done in the sequel, where Ve is represented as the complex scalar
Ve = Re(Ve ) + j Im(Ve ) = (Re(Ve ), Im(Ve )).
Statistical moments
The average of Ve follows by linearity from the average of Re(Ve ) and Im(Ve )
$$\mathrm{E}[V_e] = \mathrm{E}[\mathrm{Re}(V_e)] + j\,\mathrm{E}[\mathrm{Im}(V_e)], \tag{3.15}$$
whereas the variance var[Ve ] is defined as
$$\mathrm{var}[V_e] = \mathrm{E}\big[|V_e - \mathrm{E}[V_e]|^2\big] = \mathrm{E}[|V_e|^2] - |\mathrm{E}[V_e]|^2 \ge 0. \tag{3.16}$$
These moments can still be used in the general probabilistic theorems to obtain bounds
on PVe , or to define a normalized variable Vn through Eq. (3.12). When Ve is identified
with the 2–D vector (Re(Ve ), Im(Ve )), as is done in Chapter 8, its covariance matrix can
be analyzed. The principal components of this matrix inform about the shape and orientation of confidence ellipses in the complex plane.
Difficulties arise when higher-order moments of Ve need to be computed. They are mainly
due to the fact that multiple third-order polynomials in Ve can be defined, viz. $V_e^2 V_e^*$, $V_e (V_e^*)^2$, $V_e^3$ or $(V_e^*)^3$, but these do not necessarily have a practical interpretation for PY. It is
thus better to focus on the higher-order moments of the components of Ve : the skewness of
Re(Ve ) and Im(Ve ) will for instance determine whether the distribution of Ve is symmetrical
in the complex plane, and the kurtosis of |Ve | will quantify the variations in magnitude of
Ve around E[Ve ].
Characteristic function
The characteristic function ΦVe of the vector Ve = (Re(Ve), Im(Ve)) ∈ R2 is a function on R2 such that, for any (s, t) ∈ R2,
$$\Phi_{V_e}(s, t) = \int_{\mathbb{R}^2} e^{j(s R_{V_e} + t I_{V_e})} f_{V_e}(R_{V_e}, I_{V_e})\, dR_{V_e}\, dI_{V_e}. \tag{3.17}$$
The pdf fVe can now be recovered from ΦVe via a 2–D inverse Fourier transformation [84].
3.5 Conclusion
The random parameterisation of the electromagnetic model has led to the introduction of
two probability spaces: a space (I, EI , PI ) chosen a priori, which contains the parameters
specifying the configuration, and a second probability space (Y, EY , PY ) induced by the
randomness of the configuration, which is unknown a priori. The objective is hence to
determine the statistical properties of the induced space, as sketched in Fig. 3.1.
A complete characterization is theoretically possible by determining the induced
probability measure PY , however, in practice, this measure is generally difficult to
express. Instead, the statistical moments provide partial but valuable information on
the patterns of the randomness of the outputs, and can be computed explicitly.
All these statistical moments are defined as Riemann integrals and can therefore be
computed by quadrature rules, as will be demonstrated in the upcoming chapter.
Figure 3.1: Stochastic parameterisation of the deterministic model.
Part II
Computation of the statistical moments
Chapter 4
Numerical integration on the space of stochastic inputs
Chapter 3 has shown that the randomness of the observable Y (Y = Ve or Y = Ze ) can be
assessed via its statistical moments defined as the expectations of measurable functions g
acting on Y
$$\mathrm{E}[g(Y)] = \int_{\mathcal{I}} g(Y(i)) f_{\mathcal{I}}(i)\, di. \tag{4.1}$$
This Riemann integral [85] can be evaluated, since it involves a known domain of integration I, which is generally multi-dimensional, a prescribed continuous pdf fI , and an integrand g ◦ Y which is computable. Nevertheless, given the complexity of the
dependence of Y on i, these integrals can seldom be evaluated in closed form, and
therefore need to be determined numerically. Quadrature or cubature rules are hence
employed that replace the integral in Eq. (4.1) by a discrete sum involving a finite
sequence of points {Y (ik ), ik ∈ I and k = 1, . . . , N}. The efficiency of such a
collocation method depends on the number N of function evaluations needed to
accurately approximate the integral.
Among the large variety of existing quadrature rules, we have selected and implemented
four. A Monte-Carlo approach is first employed and taken as a reference given its
well-established robustness. Next, a deterministic Cartesian-product rule, based on a
trapezoidal scheme, is considered as one of the most natural ways of performing
integrations over low-dimensional domains. A sparse-grid rule, constructed from a
Clenshaw-Curtis rule, is also used. Finally, we have programmed a space-filling-curve
quadrature rule, which is derived from a sensitivity analysis method. The performances of
these rules are mutually compared with respect to their complexities and their error levels.
The stochastic methods that will be discussed in the upcoming chapters will generally
require the computation of multiple statistical moments, i.e. the evaluation of multiple
integrals. For this reason, we also adopt a strategy to optimize the computation time of
multiple integrals.
4.1 Integral of scalar-valued functions
For the sake of clarity, a real-valued observable Y (i) such as Re(Ve (γ)) or Im(Ve (γ)) is
considered first. If Y is integrable, its mean reads
$$E[Y] = \int_I Y(i)\, f_I(i)\, di, \quad (4.2)$$
where the set I is a d-dimensional subset of Rd .
The possibly unbounded integration domain I can be handled using probability theory:
the random-variable transformation theorem (see [86, p. 28] or Appendix A Eq. (A.7))
permits the expression of i ∈ I in terms of a bounded random variable. The remainder
of the analysis can hence focus on integrals defined on a bounded domain I = [0, 1]d .
4.1.1 Discretization
Given an integer l ∈ N, the quadrature rule of level l approximates the continuous integral
defining E[Y ] by a discrete sum as follows
$$E[Y] = \int_I Y(i)\, f_I(i)\, di \;\approx\; \sum_{k=1}^{N(l)} Y(i_k)\, f_I(i_k)\, w_k = Q_l[Y \cdot f_I], \quad (4.3)$$
where the weights wk ≥ 0 and the grid Γl = {ik , k = 1, . . . , N(l)} ⊂ I are defined by the
type of the quadrature rule Ql . The number N(l) ∈ N, also called the complexity of Ql ,
is an increasing function of the level l. It represents the size of the grid Γl and thereby the
number of evaluations of the function Y , which strongly influences the overall numerical
cost of the quadrature rule. A first way of controlling the increase of N(l) consists in
resorting to nested grids that contain the end points of I, which is possible by setting, for
instance,
$$N(0) = 1, \quad \text{and} \quad N(l) = 2^l + 1 \quad \text{for } l \geq 1. \quad (4.4)$$
This choice entails that each grid Γl will be included in the grid Γl+1 , which allows for
re-using previously computed values of the observable to raise the level of the quadrature.
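This nesting property can be checked directly. The sketch below (an illustration, not taken from the thesis) builds the uniform grids of Eq. (4.4) on I = [0, 1] and verifies that raising the level only adds abscissae, so previously computed samples of Y remain usable:

```python
# Sketch of the nested grids of Eq. (4.4): N(0) = 1 and N(l) = 2**l + 1.
# With uniformly spaced abscissae on I = [0, 1], each grid Gamma_l is
# contained in Gamma_{l+1}, so stored samples of Y can be re-used when
# the level of the quadrature is raised.

def grid(level):
    """Uniform grid on [0, 1] with N(level) points, N as in Eq. (4.4)."""
    n = 1 if level == 0 else 2**level + 1
    if n == 1:
        return {0.5}                        # single midpoint sample
    return {k / (n - 1) for k in range(n)}  # includes both end points of I

# Nestedness: raising the level only adds new abscissae.
nested = all(grid(l) <= grid(l + 1) for l in range(6))
new_at_level_4 = len(grid(4) - grid(3))     # only these require new solves
```

At level 4 the grid holds 17 points, but only 8 of them are new, so half of the "expensive" evaluations of the observable are inherited from level 3.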
The quality of the approximation of E[Y ] by Ql [Y · fI ] is assessed through the absolute
error el , defined as
$$e_l = \left| E[Y] - Q_l[Y \cdot f_I] \right|. \quad (4.5)$$
The speed of convergence of Ql is monitored via the convergence rate, which describes the
decay rate of el as N(l) increases. Ideally, Ql should yield a suitable approximation with el
below a maximum admissible error, while the complexity N(l) is kept as low as possible.
The definition of el , however, assumes the knowledge of E[Y ], which is unavailable a
priori. This expectation could of course be obtained by determining the average of a very
large set of values of Y . Doing so would eventually translate into a deterministic sweep,
which contradicts the very principle of the stochastic approach. Nevertheless, for a stable
quadrature rule adapted to the smoothness of Y · fI , the value Ql [Y · fI ] should converge
to E[Y ]. The convergence will translate into a reduction of the relative error El , with
$$E_l = \frac{\left| Q_{l+1}[Y \cdot f_I] - Q_l[Y \cdot f_I] \right|}{\left| Q_l[Y \cdot f_I] \right|}, \quad \text{when } Q_l[Y \cdot f_I] \neq 0. \quad (4.6)$$
It is worth mentioning that, although a small relative error is a necessary condition for
convergence, it is not a sufficient one. It is therefore useful to compare the results of
different quadrature rules to ascertain the convergence of Ql [Y · fI ].
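The level-raising procedure driven by Eq. (4.6) can be sketched as a small driver loop. In the hedged example below, `converge` is a stand-in for such a driver applied to any nested rule, and the exponential integrand is purely illustrative:

```python
# Hedged sketch of the stopping criterion of Eq. (4.6): raise the level
# until the relative change between successive approximations falls below
# a tolerance. As noted above, a small relative error is necessary but
# not sufficient for convergence.
import math

def converge(quad, tol=1e-8, max_level=20):
    """Return (value, level) once |Q_{l+1} - Q_l| / |Q_l| < tol."""
    prev = quad(0)
    for level in range(1, max_level + 1):
        curr = quad(level)
        if prev != 0 and abs(curr - prev) / abs(prev) < tol:
            return curr, level
        prev = curr
    return prev, max_level

def trap_exp(level):
    """Toy nested rule: trapezoid of exp on [0, 1] with 2**level + 1 points."""
    n = 2**level + 1
    h = 1.0 / (n - 1)
    return h * (0.5 * (1.0 + math.e) + sum(math.exp(k * h) for k in range(1, n - 1)))

value, level = converge(trap_exp)   # value approaches the exact integral e - 1
```

Because the grids are nested, a practical implementation would also cache the samples computed at the previous level instead of re-evaluating them.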
4.1.2 Monte-Carlo method

The Monte-Carlo (MC) rule is equally weighted, as shown by the formula
$$Q_l[Y \cdot f_I] = \frac{1}{N(l)} \sum_{k=1}^{N(l)} Y(i_k), \quad (4.7)$$
where the abscissae (i1 , . . . , iN (l) ) are randomly distributed in I according to the law PI ,
and are drawn with a random-number generator.
The convergence rate of this estimator is obtained by regarding the samples Y (ik ) as
statistically independent and identically distributed random variables in R, and by
invoking the Central Limit Theorem [85, p. 204]
$$e_l = \left| E[Y] - Q_l[Y \cdot f_I] \right| = O\!\left(\frac{\sigma[Y]}{\sqrt{N(l)}}\right) \quad \text{as } l \to \infty, \quad (4.8)$$
where σ[Y ] is the standard deviation of Y . This convergence rate depends only on the
size N(l) of the grid Γl , and not on the dimension of I. Further, the convergence depends
on the statistical dispersion of the values of Y accounted for by σ[Y ], but not directly on
the smoothness of Y in terms of i. Hence, the MC rule will be taken as a reference rule.
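A minimal sketch of the MC estimator of Eq. (4.7) follows; the toy observable and the uniform law of i are illustrative assumptions, chosen only because the exact mean is known:

```python
# Minimal sketch of the equally weighted MC rule of Eq. (4.7): E[Y] is
# estimated by averaging Y over N random samples drawn according to the
# law of i. Here i is assumed uniform on [0, 1]^d.
import random

def mc_mean(y, d, n, rng):
    """MC estimate of E[Y] for i uniformly distributed on [0, 1]^d."""
    return sum(y([rng.random() for _ in range(d)]) for _ in range(n)) / n

# Y(i) = i_1 + ... + i_d has the known mean d/2, so the estimator can be
# checked directly; its error decays like sigma[Y]/sqrt(N), Eq. (4.8).
rng = random.Random(7)
estimate = mc_mean(sum, 3, 100_000, rng)   # exact value: 1.5
```

With 100 000 samples the standard error is sigma[Y]/sqrt(N) ≈ 0.0016, which illustrates the slow one-digit-per-hundredfold gain discussed next.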
The “pros” of the MC rule are however also its “cons”. First, the convergence rate evolving
as $1/\sqrt{N(l)}$ is extremely slow, as it signifies that, given a target accuracy reached via N(l)
samples, it would require 100 times more samples to increase the accuracy by a single
digit. Moreover, the MC rule does not exploit the smoothness of the integrand, which
is neutral for the integration of roughly-behaved integrands, but can be a disadvantage
for smooth functions. Therefore, alternative quadrature rules must be considered for the
stochastic method to be efficient.
4.1.3 Quadrature using polynomial interpolation
Polynomials constitute a very interesting approximation family for various functional
spaces. They offer the double advantage of being dense in these functional spaces,
and of being integrable in closed form. The principle of polynomial-interpolation-based
quadrature rules is hence to replace the integrand by an approximating polynomial, and
then to compute the integral of this polynomial. If the rule is stable, the integral of the polynomial converges towards the desired integral. In what follows, the
notation is simplified by re-writing Eq. (4.2) as
$$E[Y] = \int_I h(i)\, di, \quad \text{with } h(i) = Y(i)\, f_I(i). \quad (4.9)$$
A. Mono-dimensional I
To begin with, the domain I is assumed to be mono-dimensional, i.e. $I = [0; 1] \subset \mathbb{R}$. For
any integer $l_1 \in \mathbb{N}$, a set of $N(l_1)$ points $K_{l_1} = \{(i_k; h(i_k)),\, k = 1, \ldots, N(l_1)\}$ is given, with
$i_1 = 0$ and $i_{N(l_1)} = 1$; these abscissae partition I into the intervals $\{[i_k; i_{k+1}],\, k = 1, \ldots, N(l_1) - 1\}$.
Two interpolation strategies can be followed, viz. a p-refinement and an h-refinement
approach.
Simple interpolatory quadrature (p-refinement) A simple interpolatory quadrature
employs a single interpolating polynomial PN (l1 )−1 defined on I and passing through all
the points of Kl1 . This polynomial, which is of degree N(l1 ) − 1, is constructed from the
fundamental Lagrange polynomials. The weights of the quadrature rule are obtained as
the integrals of the Lagrange polynomials that are employed¹.
The error of the quadrature rule is bounded by a factor that involves the distance between
the integrand h and the space of real-valued univariate polynomials of degree lower than
¹ Appendix B provides additional information on the principle of simple univariate polynomial-interpolation-based quadrature rules.
or equal to N(l1 ) − 1. This distance can be made arbitrarily small for sufficiently smooth
functions h, by increasing the order of the interpolating polynomial. However, as the
order of the polynomial PN (l1 )−1 is increased, the number N(l1 ) of abscissae and function
evaluations required to define the polynomial increases accordingly. If the integrand
h ∈ C r (I) is r times continuously differentiable on I, with a bounded r-th order derivative,
then the absolute error of the quadrature rule evolves at least as fast as [85, p. 128]
$$e_{l_1} = O\!\left((N(l_1))^{-r}\right). \quad (4.10)$$
The choice of uniformly spaced abscissae ik in I produces a Newton-Cotes formula. This
type of rule is however unstable and suffers from parasitic oscillations when higher-order
interpolation is employed, viz. when N(l1 ) ≥ 10 (see [87, p. 142]).
More stable quadrature formulas are obtained by selecting a Clenshaw-Curtis formula
where the points $i_k$ correspond to the extrema of Chebyshev polynomials [85]

$$i_k = \frac{1}{2}\left[1 - \cos\!\left(\pi\, \frac{k-1}{N(l_1)-1}\right)\right], \quad \text{for } k = 1, \ldots, N(l_1). \quad (4.11)$$
These abscissae are more clustered towards the edges of the integration domain I. The
Clenshaw-Curtis weights are obtained by projecting the interpolating polynomial PN (l1 )−1
on the space of Chebyshev polynomials.
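Eq. (4.11) is simple enough to evaluate directly. The sketch below builds the grid only (the weights, obtained by the Chebyshev projection just mentioned, are omitted) and checks the edge clustering and the nestedness discussed above:

```python
# The Clenshaw-Curtis abscissae of Eq. (4.11) on I = [0, 1]; grid only,
# without the associated weights.
import math

def cc_abscissae(n):
    """i_k = (1 - cos(pi*(k-1)/(n-1))) / 2 for k = 1..n, Eq. (4.11)."""
    return [(1.0 - math.cos(math.pi * (k - 1) / (n - 1))) / 2.0
            for k in range(1, n + 1)]

fine = cc_abscissae(9)     # N = 2**3 + 1 in the nested scheme of Eq. (4.4)
coarse = cc_abscissae(5)   # N = 2**2 + 1
# The points cluster towards the edges of I, and every coarse abscissa
# reappears in the fine grid, so the grids are nested.
edge_gap = fine[1] - fine[0]
mid_gap = fine[5] - fine[4]
```

The gap near the end points is roughly five times smaller than the gap near the centre of the domain, which is precisely the clustering that stabilizes high-order interpolation.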
Gaussian rules represent a third example of quadrature formulas, in which the abscissae
correspond to the zeros of orthogonal polynomials and the weights are such that all
polynomials of degree lower than or equal to 2N(l1 ) − 1 are exactly integrated.
However, their grids are seldom nested², which increases the computational effort required
to refine the grid of the quadrature, i.e. to increase the level l1 .
Composite interpolatory quadrature (h-refinement) On the contrary, in a composite interpolation approach, a low-order interpolation (often linear) is performed on each sub-domain $c_k = [i_k; i_{k+1}]$, so that the oscillations hindering the high-order Newton-Cotes formulas are avoided.
With a repeated trapezoidal rule, the 1-D quadrature rule $Q^{(1)}_{l_1}$ of level $l_1 \in \mathbb{N}$ is given by

$$Q^{(1)}_{l_1}[h] = \sum_{k=1}^{N(l_1)} w_{k,1}\, h(i_k), \quad \text{where } w_{k,1} = \begin{cases} i_{k+1} - i_k & \text{if } 2 \leq k \leq N(l_1) - 1, \\ (i_{k+1} - i_k)/2 & \text{otherwise.} \end{cases} \quad (4.12)$$
² Gauss-Kronrod, Radau and Lobatto formulas can be regarded as variants of Gaussian quadrature rules, in which some abscissae are pre-assigned.
The error of such a rule is obtained via the Euler-Maclaurin formula [85, p. 146]. It reveals
the outstandingly accurate results of the compound trapezoidal rule for smooth integrands
with odd derivatives that have the same values at the edges of I. Thus, if the integrand
$h \in C^{2r+1}(I)$ is 2r + 1 times continuously differentiable on I, with periodic derivatives up
to the order 2r − 1, and if the derivative $h^{(2r+1)}$ has countably many discontinuities, then

$$e_{l_1} = O\!\left[(N(l_1))^{-(2r+1)}\right]. \quad (4.13)$$
If the integrand is analytic, exponential convergence can even be reached [88, p. 174].
The polynomial-interpolation-based rules presented in this section therefore exploit the
smoothness of the integrand. Their convergence rates are much faster than the $1/\sqrt{N(l_1)}$
rate of the MC rule. In this dissertation, we shall focus on the Clenshaw-Curtis and the
compound trapezoidal rules to approximate integrals over mono-dimensional domains.
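The Euler-Maclaurin behaviour described above can be observed numerically. In the sketch below, both test integrands are illustrative choices: a smooth 1-periodic function (all odd derivatives match at the edges of I) against a generic smooth one:

```python
# Sketch of the Euler-Maclaurin observation: the compound trapezoidal
# rule converges like N**(-2) for a generic smooth integrand, but
# spectrally fast when the integrand is smooth and periodic on I.
import math

def trap(f, n):
    """Repeated trapezoidal rule on [0, 1] with n + 1 equispaced points."""
    h = 1.0 / n
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(1.0))

periodic = lambda x: math.exp(math.sin(2.0 * math.pi * x))  # 1-periodic, analytic
generic = math.exp                                          # exp(x), not periodic

# With only 16 panels the periodic case is already converged to far
# below the N**(-2) level of the generic case.
err_periodic = abs(trap(periodic, 16) - trap(periodic, 4096))
err_generic = abs(trap(generic, 16) - (math.e - 1.0))
```

This contrast is what the SFC rule of Section 4.1.4 will exploit, since its transformation produces a periodic integrand in s by construction.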
B. Multi-dimensional I
In the multivariate case, the dimension of I is equal to d > 1 and the space I can be written
as the Cartesian product of d segments $\{I_k = [0, 1],\, k = 1, \ldots, d\}$, i.e., $I = [0, 1]^d$. On each
segment $I_k$, $N(l_k)$ abscissae are defined that form the grid $\Gamma^1_{l_k} = \{i^{(k)}_{t_k},\, t_k = 1, \ldots, N(l_k)\}$.
Cartesian product of univariate rules

A natural way of obtaining a multi-dimensional rule $Q_l$ consists in applying the univariate rule $Q^{(1)}_{l_k}$ to each sub-domain $I_k$ for $k = 1, \ldots, d$, and leads to

$$Q_l[h] = \overbrace{\left(Q^{(1)}_{l_1} \otimes \ldots \otimes Q^{(1)}_{l_d}\right)}^{d \text{ times}}[h] = \sum_{t_1=1}^{N(l_1)} \cdots \sum_{t_d=1}^{N(l_d)} h\!\left(i^{(1)}_{t_1}, \ldots, i^{(d)}_{t_d}\right) \omega_{t_1,1} \cdots \omega_{t_d,1}, \quad (4.14)$$

where $\omega_{t_k,1} > 0$ are the weights of $Q^{(1)}_{l_k}$ applied to the integral over $I_k$, and $l = (l_1, \ldots, l_d) \in \mathbb{N}^d$. Such a rule is known as a deterministic Cartesian-product (DCP) rule.
As such, the d-dimensional rule $Q_l$ inherits the advantageous property of favoring smooth
integrands over rough ones. However, the major drawback of this method is its complexity,
as the resulting grid $\Gamma^d_l = \Gamma^1_{l_1} \times \ldots \times \Gamma^1_{l_d}$ requires $N_{\mathrm{DCP}}(l) = \prod_{k=1}^{d} N(l_k)$ evaluations of h(i),
i.e. as many solutions of the boundary-value problem defining Y. Such an exponentially
increasing complexity, also called the “curse of dimensionality” [89], is very penalizing for
high dimensions. If, for instance, d = 10 and each sub-domain $I_k$ is sampled by 10 points,
then $N_{\mathrm{DCP}}(l) = 10^{10}$, which implies 10 billion function evaluations.
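A small sketch of the tensorisation in Eq. (4.14) makes the cost explicit. The multilinear test integrand is an illustrative assumption, chosen because the product trapezoidal rule integrates it exactly:

```python
# Sketch of the DCP rule of Eq. (4.14): a 1-D trapezoidal rule is
# tensorised over the d dimensions of I = [0, 1]^d, so the number of
# integrand evaluations is the product of the 1-D grid sizes.
import itertools
import math

def trap_1d(n):
    """1-D trapezoidal nodes and weights on [0, 1], n >= 2 points."""
    h = 1.0 / (n - 1)
    weights = [h] * n
    weights[0] = weights[-1] = h / 2.0
    return [k * h for k in range(n)], weights

def dcp(f, d, n):
    """Cartesian-product rule: exactly n**d evaluations of f."""
    xs, ws = trap_1d(n)
    return sum(f([xs[t] for t in idx]) * math.prod(ws[t] for t in idx)
               for idx in itertools.product(range(n), repeat=d))

# E[i_1 * i_2 * i_3] = (1/2)**3 for uniform i; the integrand is
# multilinear, so the product trapezoidal rule is exact here.
value = dcp(math.prod, 3, 5)   # 5**3 = 125 evaluations
# With d = 10 and n = 10 per axis, the count would already be 10**10.
```

Every call to `f` stands for one solution of the boundary-value problem, which is why the exponential growth of the loop is prohibitive.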
Sparse-grid rule

An alternative quadrature rule can be built by applying Smolyak's algorithm³. As stated
in [90, 91], this algorithm hinges on the idea that not all the abscissae are equally important
in the DCP rule. Therefore, Smolyak's rule aims at discarding the least important
abscissae without deteriorating the accuracy of the approximation of the integral.

Given the nested 1-D rule $Q^{(1)}_{l_1}$ associated with the grid $\Gamma^1_{l_1}$, which contains $N(l_1)$ elements,
the difference quadrature formula $\Delta_{l_1}$ is introduced as the functional

$$\Delta_{l_1}[h] = \left(Q^{(1)}_{l_1} - Q^{(1)}_{l_1-1}\right)[h], \quad \text{with } Q^{(1)}_0[h] = 0. \quad (4.15)$$

Since $Q^{(1)}_{l_1}$ is nested, $\Delta_{l_1}$ is defined on the grid $\Theta_{l_1} = \Gamma^1_{l_1} \setminus \Gamma^1_{l_1-1} \subset \Gamma^1_{l_1}$, which contains
$m(l_1)$ elements, with $m(l_1) < N(l_1)$.
Smolyak's d-variate quadrature formula of level $l \in \mathbb{N}$ is denoted $Q_l$ and it reads

$$Q_l[h] = \sum_{d \leq |k|_1 \leq d+l-1} \left(\Delta_{k_1} \otimes \ldots \otimes \Delta_{k_d}\right)[h], \quad (4.16)$$

with $k = (k_1, \ldots, k_d) \in \mathbb{N}^d$, and $|k|_1 = \sum_{r=1}^{d} k_r$. The sampling points of Smolyak's formula
are given by the union of pairwise disjoint grids of the form $\Theta_{k_1} \times \ldots \times \Theta_{k_d}$ [92]

$$\Gamma^d_l = \biguplus_{d \leq |k|_1 \leq d+l-1} \Theta_{k_1} \times \ldots \times \Theta_{k_d}, \quad (4.17)$$
which contains $N_{\mathrm{SG}}(l)$ points, where [90, p. 27]

$$N_{\mathrm{SG}}(l) = \sum_{|k|_1 = d}^{d+l-1} m(k_1) \cdots m(k_d) = O\!\left(N(l)\, [\log_2(N(l))]^{d-1}\right). \quad (4.18)$$

This complexity is significantly lower than that of the DCP rule ($N_{\mathrm{DCP}}(l_1, \ldots, l_1) = N(l_1)^d$), and justifies the denomination of a sparse-grid (SG) quadrature rule.
Through this formula, the curse of dimensionality is reduced to a logarithmic scale, as
attested by the presence of the factor $[\log(N(l_1))]^{d-1}$.
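The gain promised by Eq. (4.18) can be made tangible by counting points. The sketch below enumerates Smolyak's index set for a nested Clenshaw-Curtis-type scheme; the 1-based level convention used here is an assumed choice, one of several in use:

```python
# Counting sparse-grid points (Eqs. 4.17-4.18) against the full Cartesian
# product. m(k) is the number of *new* 1-D abscissae introduced at level
# k for the nested scheme N(1) = 1, N(k) = 2**(k-1) + 1 (assumed
# convention, levels starting at 1).
import itertools
import math

def n_1d(k):
    """Size of the nested 1-D grid at level k >= 1."""
    return 1 if k == 1 else 2**(k - 1) + 1

def m(k):
    """Points of Theta_k, i.e. Grid_k minus Grid_{k-1}: the new abscissae."""
    return n_1d(k) if k == 1 else n_1d(k) - n_1d(k - 1)

def n_sparse(d, l):
    """N_SG: total points of Smolyak's rule, summed over d <= |k|_1 <= d+l-1."""
    return sum(math.prod(m(ki) for ki in k)
               for k in itertools.product(range(1, l + 1), repeat=d)
               if sum(k) <= d + l - 1)

sparse = n_sparse(10, 3)   # 221 points in dimension d = 10
full = n_1d(3)**10         # 5**10 = 9 765 625 points for the DCP grid
```

In d = 10 the sparse grid of this level needs 221 evaluations where the full product grid needs almost ten million, which is exactly the logarithmic taming of the curse of dimensionality claimed above.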
The convergence of the SG rule is analyzed by considering the class $C^r_d(I)$ of d-variate
functions with bounded mixed derivatives up to the order r

$$C^r_d(I) = \left\{ f : I \to \mathbb{R},\; \|D^s f\|_\infty < \infty,\; \forall s_i < r \right\},$$

where $s = (s_1, \ldots, s_d) \in \mathbb{N}^d$, and where the differential operator $D^s f$ is defined as

$$D^s f = \frac{\partial^{|s|} f}{\partial x_1^{s_1} \cdots \partial x_d^{s_d}}, \quad \text{with } |s| = \sum_{k=1}^{d} s_k.$$

³ We would like to express our gratitude to Dr. John Burkardt, from Virginia Tech University, who provided us with the algorithm of the sparse-grid rule.
If the integrand h belongs to $C^r_d(I)$, then the absolute error is of the order [91, 92]

$$e_{l,d} = O\!\left(\frac{[\log(N(l))]^{(d-1)(r+1)}}{(N(l))^r}\right). \quad (4.19)$$
Hence the accuracy of the SG rule benefits from the smoothness of the integrand.
It is worth noting that in polynomial-interpolation rules the probability distribution plays
a marginal role, the key element being the entire integrand h(i) = Y (i)fI (i). This is in
contrast to the Monte-Carlo approach and the space-filling-curve rule presented hereafter.
4.1.4 Space-filling-curve quadrature rule
The space-filling-curve (SFC) quadrature rule is derived from a sensitivity-analysis method
known as the Fourier Amplitude Sensitivity Test (FAST) [93, 94]. This rule hinges on the
transformation of the d-dimensional integral over I into a mono-dimensional line integral
along a Peano search curve denoted χ.
With reference to Eq. (4.2), recalled below

$$E[Y] = \int_I Y(i)\, f_I(i)\, di, \quad (4.20)$$
if I is d-dimensional and all the components of $i = (i_1, \ldots, i_d)$ are statistically independent
and associated with the pdfs $f_1, \ldots, f_d$, then they can be expressed in terms of a single
scalar $s \in [-\pi; \pi]$ as follows

$$i_k(s) = G_k[\sin(\zeta_k s)], \quad \text{for } k \in [1, d]. \quad (4.21)$$
Each function $G_k : [-1, 1] \to \mathbb{R}$ is the solution to a differential equation established by
Weyl [95] and involving the pdfs $f_k$ [93],

$$\pi (1 - x^2)^{1/2} f_k(G_k(x)) \frac{dG_k}{dx}(x) = 1, \quad \text{with } x \in [-1; 1], \text{ and } G_k(0) = 0. \quad (4.22)$$
This equation ensures that the length of χ contained in a given region $R_0 \subset I$ equals the
probability of having samples i in $R_0$.
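For a uniform pdf on a symmetric interval [-a, a], Eq. (4.22) can be solved in closed form, which gives a concrete feel for the search curve. The sketch below rests on that symmetric-uniform assumption (it is not a general solver for Eq. (4.22)):

```python
# With f(g) = 1/(2a) on [-a, a], Weyl's equation (4.22),
# pi*sqrt(1 - x**2) * f(G(x)) * G'(x) = 1 with G(0) = 0,
# integrates to G(x) = (2a/pi) * asin(x); the curve component
# i(s) = G(sin(zeta*s)) is then a triangle wave sweeping [-a, a].
import math

def G(x, a):
    """Closed-form Weyl transform for the uniform pdf on [-a, a]."""
    return (2.0 * a / math.pi) * math.asin(x)

def curve_component(s, zeta, a):
    return G(math.sin(zeta * s), a)

a = 0.04   # illustrative amplitude range, cf. the wire of Eq. (4.28)
# Check the ODE at one point: G'(x) = 2a / (pi*sqrt(1 - x**2)).
x = 0.3
g_prime = 2.0 * a / (math.pi * math.sqrt(1.0 - x * x))
ode_lhs = math.pi * math.sqrt(1.0 - x * x) * (1.0 / (2.0 * a)) * g_prime
# Sample the component over s in [-pi, pi]: it never leaves [-a, a].
samples = [curve_component(k * math.pi / 100.0, 5, a) for k in range(-100, 101)]
```

Because the triangle wave spends equal time in equal sub-intervals of [-a, a], the arc length inside any region matches the uniform probability of that region, as Eq. (4.22) demands.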
As s varies in $[-\pi; \pi]$, the points i(s) describe an open curve χ which can be made to
come arbitrarily close to any point of I by selecting incommensurate frequencies $\zeta_i$, i.e.
such that no integral combination of the frequencies $\zeta_i$ cancels. Owing to this property,
Weyl's ergodicity theorem [95, 96] applied to Eq. (4.20) yields the following 1-D integral

$$E[Y] = \int_I Y(i')\, f_I(i')\, di' \approx \int_{i' \in \chi} Y(i')\, dl = \frac{1}{2\pi} \int_{[-\pi; \pi]} Y(i(s))\, ds, \quad (4.23)$$
which is evaluated numerically by a 1-D quadrature rule of level $l \in \mathbb{N}$

$$E[Y] \approx \frac{1}{N(l)} \sum_{n=1}^{N(l)} Y(i(s_n)). \quad (4.24)$$
The abscissae sn are equally spaced in [−π; π] to ensure an exponential convergence rate
for analytic functions, as is the case with trapezoidal rules.
In practice, the finite numerical precision of computers prevents the incommensurability
condition from being fulfilled. Instead, the frequencies $\zeta_i$ are chosen as integers that form
an incommensurate set of order M, as proposed in [97], i.e.

$$\sum_{i=1}^{d} \nu_i \zeta_i \neq 0, \quad \text{for any } \nu_i \in \mathbb{Z} \text{ such that } \sum_{i=1}^{d} |\nu_i| \leq M + 1. \quad (4.25)$$
This choice of frequencies causes the curve χ to be closed, i.e. $i(s = -\pi) = i(s = \pi)$, and
periodic in terms of $s \in [-\pi; \pi]$.
Nyquist's criterion provides a lower bound for the complexity N(l) of this rule as

$$N(l) \geq 2 M \zeta_{\max} + 1, \quad \text{with } \zeta_{\max} = \max_i \zeta_i. \quad (4.26)$$
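The order-M condition of Eq. (4.25) is straightforward to test by brute force for small d. In the sketch below, the candidate frequency set {11, 21, 27, 35} is an illustrative guess, not a set taken from the thesis:

```python
# Sketch of the order-M incommensurability test of Eq. (4.25): no signed
# integer combination of the frequencies with |nu|_1 <= M + 1 may vanish.
import itertools

def is_incommensurate(zetas, M):
    """True if sum(nu_i * zeta_i) != 0 for all nu != 0 with sum|nu_i| <= M + 1."""
    d = len(zetas)
    for nu in itertools.product(range(-(M + 1), M + 2), repeat=d):
        if any(nu) and sum(map(abs, nu)) <= M + 1:
            if sum(v * z for v, z in zip(nu, zetas)) == 0:
                return False
    return True

bad = is_incommensurate([2, 4], 2)            # False: 2*2 - 1*4 = 0
good = is_incommensurate([11, 21, 27, 35], 4) # passes the order-4 test
# Nyquist bound of Eq. (4.26) for this set and M = 4:
n_min = 2 * 4 * 35 + 1                        # at least 281 samples
```

The exhaustive loop grows like (2M + 3)^d, so this naive check is only practical for the moderate dimensions considered in this chapter.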
Cukier et al. [93] established an empirical link between N(l) and the dimension d of I

$$N(l) = O\!\left(d^{M/2 + \delta}\right), \quad \text{where } \delta > 0. \quad (4.27)$$

In particular, for M = 4, $N(l) \approx 2.6\, d^{2.5}$, which is significantly lower than the exponential
complexity of the DCP rule.
Two sources of error can affect the accuracy of the SFC rule: first, the ergodicity theorem,
which replaces the integral over I by an integral over χ, and second, the numerical
evaluation of the integral over χ. The ergodicity error is minimized by ensuring that
the frequencies $\zeta_i$ form an incommensurate set. This rule critically depends on the ability
of the search curve to conveniently fill the inner volume of the domain I. Otherwise, it is
possible to have an apparently converging quadrature rule, but with an erroneous result
due to an improperly chosen curve χ. A Fourier analysis of the integrand reveals that
the error of this approximation is given by the Fourier coefficients of the integrand at
interference frequencies, which are multiples of N(l). This error is minimized by delaying
the interferences to higher frequencies, typically by choosing M ≥ 4 [98].
The convergence rate of the SFC rule takes advantage of the smoothness of the integrand
via the coefficient M. This coefficient determines the level beyond which the interferences
in the spectrum of the integrand are rejected [98]. Thus, the SFC rule will perform suitably
for integrands that have smooth spectra in the interference-free range of frequencies.
The SFC rule can be regarded as a lattice rule (see Appendix C), with the specific property
that the definition of the search curve χ incorporates the probability distribution of i.
4.1.5 Comparisons
The performance of the aforementioned quadrature rules is now compared, for the
particular case of the thin wire. Similar results can be obtained for the dipole and the
surface. The wire is placed in free space and has a random geometry defined as

$$x_\alpha(y) = 0 \quad \text{and} \quad z_\alpha(y) = 0.05 + \sum_{k=1}^{d} \alpha_k \sin[k\pi(y - y_m)] \quad \text{(in meters)}, \quad (4.28)$$

for $y \in [y_m; y_M]$. The amplitudes $\alpha_k$ are mutually independent and uniformly
distributed in their domains $A_k = [a_k, b_k] = [-0.04/d; +0.04/d]$ m, which implies that, for
$k = 1, \ldots, d$, the pdf $f_k$ of $\alpha_k$ reads

$$f_k(\alpha_k = a') = \begin{cases} 1/(b_k - a_k), & \text{if } a' \in A_k, \\ 0, & \text{otherwise.} \end{cases} \quad (4.29)$$
This wire is excited by a deterministic incident field, which is a 500 MHz plane wave
propagating along the direction $\theta^i = 45°$, $\phi^i = 45°$. The electric field $E^i_\beta$ is parallel-polarized and has an amplitude of 1 V m⁻¹. These items of information are summarized
as $\{|E^i_\beta| = 1\ \mathrm{V\,m^{-1}},\ \theta^i = 45°,\ \phi^i = 45°,\ \text{parallel polarization},\ f = 500\ \mathrm{MHz}\}$.
Grids

The grids of the different quadrature algorithms are shown in Figs 4.1(a) to 4.1(d) for the
case d = 3. These graphs clearly reveal the high density of points of the DCP grid, as well
as the peculiar distribution of the samples in the SG rule. The search curve of the SFC is
shown in Fig. 4.1(d) together with its sampling points.

Figure 4.1: Grids of the quadrature rules: (a) Monte-Carlo grid, (b) Cartesian-product grid, (c) sparse grid, (d) space-filling-curve grid.
Convergence rates

To compare the convergence rates of the different algorithms, a wire parameterised by
two random coefficients is considered, i.e. d = 2, and the average of $|V_e|$ is evaluated.
Figure 4.2(a) depicts the evolution of the approximation of $E[|V_e|]$ in terms of the
complexity N(l), and demonstrates that all the rules converge to the same limit
$E[|V_e|] = 0.252$ V.

Figure 4.2: Convergence properties of $Q_l[|V_e|]$ versus the number of samples N(l): MC rule (solid line), DCP rule (dashed line), SG rule (△), SFC rule (◦). (a) Value of the integral; (b) relative error.
The relative error $E_l$ defined by Eq. (4.34) is displayed in Fig. 4.2(b). The SG rule shows
the fastest convergence, followed by the SFC rule. The DCP rule has a constant rate
of convergence, unlike the other rules, particularly the MC rule. Already in this low-dimensional example, the DCP rule is outperformed by the other rules in terms of their
complexities.
Effect of the dimension
The effect of d, the dimension of the integration domain I, on the convergence properties
of the quadrature formulas is now examined. An increasing sequence of values of d is
taken into account. For each of these values, the aim is to compute the variance var[Ve ]
of the voltage induced at the port of the corresponding wire described by Eq. (4.28). The
accuracy of the quadrature rules is monitored via their relative errors that are displayed
in Figs 4.3(a) to 4.3(d), for d ∈ {4, 6, 8, 10}.
Figure 4.3: Effect of the dimension d ∈ {4, 6, 8, 10} on the relative error of the quadrature rules: (a) d = 4, (b) d = 6, (c) d = 8, (d) d = 10.
The prohibitive complexity of the DCP rule can be noted in Fig. 4.3(a), since 6561 function
evaluations are necessary to reach an accuracy of 1%. Comparatively, the same level of
accuracy is attained using fewer than 65 samples in the other quadrature methods. Further,
with 6561 samples, the MC, the SG and the SFC rules have a relative error lower than $10^{-4}$.
As the value of d increases, the convergence properties of the SG rule gradually degrade
with respect to the MC and SFC rules. For d = 4, the SG rule is the fastest to converge,
whereas when d = 10, it becomes slower than the MC and the SFC algorithms.
Regarding the SFC results, their robustness to the increasing dimension is worth
mentioning. At lower dimensions, the SFC's relative error remains below that of
the MC rule, and as d becomes larger, the SFC rule becomes comparable to the MC rule.
All these observations are confirmed by Fig. 4.4, which displays the complexity N(l)
needed to compute var[$V_e$] with a relative error lower than $10^{-3}$, for the different values
of the dimension d.
Figure 4.4: Effect of the dimension d = dim(I) on the complexity N(l), with a maximum relative error $E_l \leq 10^{-3}$.
For this thesis, the range of dimensions we are interested in, viz. 1 ≤ d ≤ 10, is referred
to as the range of “moderate” dimensions [91]. For these cases, in view of the results
presented above, the SG rule and the SFC rule are the most suitable quadrature algorithms
to determine statistical moments.
4.2 Vector-valued integrals
The quadrature methods presented so far focussed on efficiently approximating the integral
of a real function defined on a d-dimensional space I, which is isomorphic to $[0; 1]^d$. The
case of an integrand valued in $\mathbb{R}^q$ also calls for efficient handling.
Such multidimensional integrands arise when one seeks the statistics of a complex variable
such as the induced voltage $V_e = (\mathrm{Re}(V_e), \mathrm{Im}(V_e))$. The computation of multiple statistical
moments of a real random variable such as $|V_e|$ is another example, as calculating $E[|V_e|^k]$
for $k = 1, \ldots, q$ can be done by evaluating the average of the vector $(|V_e|^1, \ldots, |V_e|^q)$. An
additional example worth mentioning concerns the average of a random process such as
the transmitting-state current $J_\alpha$, which is defined as $E[J_\alpha](r_D) = E[J_\alpha(r_\alpha(r_D))]$. For
a discrete indexing domain $D = \{r_{D,k},\, k = 1, \ldots, q\}$, rather than computing this average
for each value of $r_D \in D$, the average of the vector $[J_\alpha(r_\alpha(r_{D,1})), \ldots, J_\alpha(r_\alpha(r_{D,q}))]$ can
be determined.
At the core of all these examples lies the more general problem of the evaluation of the
average E[Y] of the vector-valued function Y

$$Y : I \longrightarrow \mathbb{R}^q, \quad i \longmapsto [Y_1(i), \ldots, Y_q(i)]. \quad (4.30)$$

4.2.1 Strategy
Using the fundamental theorem (see Section 3.3.3), the average E[Y] can be written as

$$E[Y] = (E[Y_1], \ldots, E[Y_q]) = \left( \int_I Y_1(i)\, f_I(i)\, di, \; \ldots, \; \int_I Y_q(i)\, f_I(i)\, di \right). \quad (4.31)$$
Since the q different integrals are defined over the same support I, they are approximated
via the same quadrature rule $Q_l$, chosen among the rules presented in Section 4.1

$$E[Y] \approx Q_l[Y] = (Q_l[Y_1], \ldots, Q_l[Y_q]) = \sum_{k=1}^{N(l)} (Y_1(i_k), \ldots, Y_q(i_k))\, f_I(i_k)\, w_k, \quad (4.32)$$
where wk ≥ 0, and Γl = {ik , k = 1, . . . , N(l)} ⊂ I. In other words, we choose to
compute multiple integrals simultaneously, rather than evaluating each of these integrals
individually. This strategy requires the simultaneous convergence of all the integrals
being computed. Nevertheless, it will still be very efficient if the integrands Yk (i) have
comparable mathematical properties. In the important case where Yk (i) = gk (Y0 (i)), with
known functions $g_k$, and the function $Y_0$ involving all the expensive calculations, such as
the solution of a boundary-value problem, Eq. (4.32) becomes

$$E[Y] \approx Q_l[Y] = \sum_{k=1}^{N(l)} (g_1(Y_0(i_k)), \ldots, g_q(Y_0(i_k)))\, f_I(i_k)\, w_k. \quad (4.33)$$
In this case, the samples {Y0 (ik ), k = 1, . . . , N(l)} are computed only once, and then
stored and re-used for the various functions gk . The memory requirements for storing the
samples {Y0 (ik ), k = 1, . . . , N(l)} can be a disadvantage. However, it is the price to pay
to achieve a significant gain in computation time by reducing the number of evaluations
of the “expensive” function $Y_0$. This method is for instance applied to compute the first
q statistical moments of $|V_e|$: the values $\{V_e(\gamma_k),\, k = 1, \ldots, N(l)\}$ are stored and then
re-used to approximate the integrals defining $\{E[|V_e|^t],\, t = 1, \ldots, q\}$.
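The re-use strategy of Eq. (4.33) can be sketched directly. Below, `y0` stands in for a costly field solver and $g_t(y) = |y|^t$ yields the first q moments of $|Y_0|$, mirroring the $|V_e|$ example; the cosine observable and the midpoint nodes are illustrative assumptions:

```python
# Sketch of the sample re-use strategy of Eq. (4.33): the expensive
# samples Y0(i_k) are computed once and recycled for every g_t.
import math

def shared_moments(y0, nodes, weights, pdf, q):
    """Approximate E[|Y0|**t] for t = 1..q from one stored sample set."""
    samples = [y0(i) for i in nodes]   # the only "expensive" loop
    return [sum(abs(y)**t * pdf(i) * w for y, i, w in zip(samples, nodes, weights))
            for t in range(1, q + 1)]

n = 2000
nodes = [(k + 0.5) / n for k in range(n)]   # midpoint grid on I = [0, 1]
weights = [1.0 / n] * n
moments = shared_moments(lambda i: math.cos(math.pi * i), nodes, weights,
                         lambda i: 1.0, 4)
# For uniform i: E[|cos(pi*i)|] = 2/pi and E[cos(pi*i)**2] = 1/2.
```

The q moments cost one solver sweep instead of q, at the price of storing the N(l) samples, which is the memory/time trade-off discussed above.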
4.2.2 Error and convergence
As was pointed out in Section 4.1, a proper definition of the error as the distance
between E[Y ] and Ql [Y · fI ] is impossible since E[Y ] is unknown. The approach adopted
instead consists in tracking the variations of Ql [Y · fI ] as the number of samples (i.e. l)
is increased, and to consider the convergence to be reached when increasing l leads to
“small” variations. Therefore, error indicators need to be introduced.
Considering the approximation of the average E[Y ] of the q-dimensional vector Y , the
error $E_l$ is defined as the highest relative error of the components of Y, i.e.

$$E_l = \max_{t=1,\ldots,q} \hat{E}_l(t), \quad \text{with } \hat{E}_l(t) = \frac{\left| Q_l[Y_t \cdot f_I] - Q_{l-1}[Y_t \cdot f_I] \right|}{\left| Q_{l-1}[Y_t \cdot f_I] \right|}, \text{ when } Q_{l-1}[Y_t \cdot f_I] \neq 0. \quad (4.34)$$

Figures 4.5(a) and 4.5(b) present a first illustration of this error for a transversely
undulating wire parameterised, using the notations of Section 4.1.5, as

$$x_\alpha(y) = \alpha_1 \sin[\pi(y - y_m)] \quad \text{and} \quad z_\alpha(y) = 0.05 + \alpha_2 \sin[\pi(y - y_m)], \quad (4.35)$$
where y ∈ [ym ; yM ]. The components of α = (α1 , α2 ) are statistically independent and
uniformly distributed in A1 = A2 = [−0.04; +0.04] m. The incident field is identical to
the excitation employed in Section 4.1.5. The convergence properties of the quadrature
rules are compared for the approximations of E[Ve ] = (E[Re(Ve )], E[Im(Ve )]). The final
values of these moments are E[Re(Ve )] = −0.019 V and E[Im(Ve )] = 0.243 V.
Figure 4.5: Convergence properties of $Q_l[\mathrm{Re}(V_e)]$ (Fig. (a)) and $Q_l[\mathrm{Im}(V_e)]$ (Fig. (b)): MC, DCP, SG and SFC rules.
In this case, $Q_l[\mathrm{Re}(V_e)]$ determines the complexity of the rule, as shown by Table 4.1,
which informs on the number of function evaluations needed to reach a relative error
below $10^{-3}$.
            Ql[Re(Ve)]   Ql[Im(Ve)]   Ql[Ve]
MC rule        2049         257        2049
DCP rule       1089         289        1089
SG rule          65          29          65
SFC rule        513         129         513

Table 4.1: Number of samples N(l) required for $E_l \leq 10^{-3}$.
The example depicted in Figs 4.6(a)-4.6(d) concerns the computation of the first four
moments of $|V_e|$ by the MC, DCP, SG and SFC rules, respectively. According to these graphs, the
higher-order moments require more sample evaluations, and hence dictate the complexity,
as confirmed in Table 4.2.
The fact that the SG and SFC rules take advantage of the smoothness of the integrand
explains their better performance with respect to the MC rule. As for the DCP rule,
despite benefiting from the regularity of the integrand, the curse of dimensionality hampers
its numerical efficiency assessed by its complexity.
            E[|Ve|]   var[|Ve|]   sk[|Ve|]   κ[|Ve|]   First four moments of |Ve|
MC rule        33        65         1025       1025              1025
DCP rule       81        81        >3000      >3000             >3000
SG rule        13        13          145        145               145
SFC rule        9         9          257        257               257

Table 4.2: Number of samples N(l) required for $E_l \leq 10^{-2}$.
Figure 4.6: First four statistical moments of $|V_e|$: (a) MC rule, (b) DCP rule, (c) SG rule, (d) SFC rule.
4.3 Conclusion
The question regarding the numerical computation of integrals that define the statistical
moments has been addressed in this chapter. Several quadrature rules have been reviewed
that efficiently handle integrals over multi-dimensional domains. Based on these rules,
integrals of vector-valued functions can be evaluated.
All these rules are collocation rules, as they utilize a discrete set of evaluations of the
integrand. Gaussian rules, often praised for their ability to exactly integrate polynomials
of high degree, could have been taken into account as well. However, we chose not to
retain them because their grids are seldom nested. Spline-interpolation-based quadrature
rules can also be considered, provided that they are correctly defined to evaluate integrals
over multidimensional sets. All the rules employed could also be made adaptive, refining
the grid according to the local roughness of the integrand.
Alternative polynomial interpolation methods, in which the interpolating polynomials are
no longer Lagrange polynomials, can also be considered. As will be shown in the next
chapter, the use of Taylor series generally leads to the so-called perturbation method,
whereas the polynomial-chaos approach utilizes a set of orthogonal polynomials.
Chapter 5
Quadrature accelerator (1/2):
Perturbation method
The price paid for using quadrature rules to evaluate the statistical moments is indicated
by the complexity of these rules. This complexity, which corresponds to the number of
deterministic evaluations of the observable required to meet a target level of accuracy, also
represents the number of EFIEs that need to be solved. This observation pinpoints the
origin of the computational cost, viz. the effort required to execute the deterministic model.
The present chapter tackles this issue by applying local expansions around a reference
configuration. Examples of applications of such a perturbation method can be found
in [45, 48, 52]. Our twofold objective is to reduce the computation time required by the
deterministic model, and thereby the duration of the evaluation of the statistics, whilst
preserving a suitable accuracy in the computation of the statistics.
Interactions between randomly varying geometries and deterministic incident fields are
considered, with the Thévenin voltage source Ve as observable. Rather than directly
applying a brute-force Taylor expansion to this observable, its definition is exploited to
perform physically and computationally motivated local transformations.
5.1 Statement of the problem
The definition of Ve in Section 2.3.2 is recalled as

V_e(\gamma) = -\langle I_0 ; E^i_\beta \rangle_P - \langle J_\alpha ; E^i_\beta \rangle_{S_\alpha},    (5.1)

where the factor 1/I_0 is omitted to simplify the notation. When the surface S_α is random and the incident field E^i_β is deterministic, the term \langle I_0 ; E^i_\beta \rangle_P is deterministic. Hence, the randomness of S_α imposes the evaluation of the current distribution J_α for each α ∈ A and the determination of the trace of E^i_β on S_α, even when this field is deterministic.
To better highlight the cost of the deterministic model, the different quantities appearing in Eq. (5.1) are expressed in terms of the EFIE operator. The EFIE operator Z_α associated with the entire structure ∂Ω_α = S_P ∪ S_α can be written as

Z_\alpha = \begin{bmatrix} Z^{PP} & Z^{SP}_\alpha \\ Z^{PS}_\alpha & Z^{SS}_\alpha \end{bmatrix},    (5.2)

where the operator Z^{PS}_α : T S_P → T S_α is defined on the tangent bundle of S_P and takes its values in the tangent bundle of S_α. The operators Z^{PP} : T S_P → T S_P, Z^{SP}_α : T S_α → T S_P and Z^{SS}_α : T S_α → T S_α are defined similarly¹. With this new notation, the traces on ∂Ω_α of the fields E[I_0] and E[J_α], excited by I_0 and J_α respectively, become

E[I_0]|_{S_P} = Z^{PP} I_0,      E[I_0]|_{S_\alpha} = Z^{PS}_\alpha I_0,    (5.3a)
E[J_\alpha]|_{S_P} = Z^{SP}_\alpha J_\alpha,      E[J_\alpha]|_{S_\alpha} = Z^{SS}_\alpha J_\alpha.    (5.3b)
The EFIE is based on the cancellation of the total electric field tangential to S_α, i.e. E[I_0]|_{S_α} + E[J_α]|_{S_α} = 0, which leads to the following definition of J_α

J_\alpha = -\left[ Z^{SS}_\alpha \right]^{-1} Z^{PS}_\alpha I_0.    (5.4)

As a result, the computation of the statistical moments of Ve by a quadrature rule of complexity N will translate into the need to 1) build the EFIE operator Z_α, N times; 2) perform the product [Z^{SS}_α]^{-1} Z^{PS}_α I_0 to obtain the transmitting current J_α, N times; and 3) evaluate the trace of E^i_β on S_α, also N times.
To illustrate this discussion, the coupling between an incident plane wave and a thin wire meshed into N_seg segments (≡ N_seg + 2 basis functions) is considered. The computation time of a single value of Ve is detailed in Table 5.1 for the case of a “typically” meshed wire (N_seg = 224) and a “densely” meshed wire² (N_seg = 424). All these computations are carried out on a DELL PWS690 personal computer with a 3 GHz processor. The fill time of the impedance matrix is the dominant step. When N_seg = 224, the current is computed quite efficiently owing to the limited size of the problem studied. This computation time increases with the density of the mesh, as can be seen when N_seg = 424.
¹Note that the discrete counterparts of I_0, E[I_0], J_α and E^i_β will be complex vectors, and those of Z_α, Z^{PP}, Z^{SP}_α, Z^{PS}_α and Z^{SS}_α will be complex matrices. The elements of these matrices correspond to reaction integrals that are evaluated either analytically or numerically by a quadrature rule.
²The fixed parts of the wire are meshed into 2×12 segments, and the undulating part into 200 or 400 segments.
                     |      Nseg = 224      |      Nseg = 424
                     | T [ms] | % of Ttotal | T [ms] | % of Ttotal
Geometry build       |   1    |    0.76     |  26    |    4.02
Filling [Z]          | 115    |   87.12     | 437    |   67.59
Computing J_α        |  15    |   11.36     | 112.5  |   25.14
Computing Ve(γ)      |   1    |    0.76     |  21    |    3.25
Ttotal               | 131.5  |  100        | 646.5  |  100

Table 5.1: Computation time of Ve(γ) for a given configuration γ = (α, β).
5.2 First-order Taylor expansions

The perturbation method proposed in this section aims at approximating the impedance matrix and the traces of the fields E[I_0] and E^i_β on S_α by their first-order Taylor expansions around a given configuration specified by α_0 = (α_{0,1}, ..., α_{0,d}) ∈ A (i.e. γ_0 = (α_0, β)), with β deterministic.
5.2.1 Expansion of the operator Z_α
Provided that Z_α is differentiable with respect to α = (α_1, ..., α_d) around a given α_0, a first-order Taylor expansion leads to

Z_\alpha = \tilde{Z}_{\alpha,\alpha_0} + O\left((|\alpha - \alpha_0|_2)^2\right),    (5.5a)

where

\tilde{Z}_{\alpha,\alpha_0} = Z_{\alpha_0} + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) Z^{(1)}_{\alpha_0,k}.    (5.5b)

In these equations, |\alpha|_2 = \left(\sum_{k=1}^{d} \alpha_k^2\right)^{1/2} is the L²-norm of α, and Z^{(1)}_{α_0,k} = \partial_{u_k} Z_u |_{u=\alpha_0} represents the derivative of Z_α, around α_0, with respect to the k-th component of α, for k = 1, ..., d.

The explicit calculation of these derivatives can be extremely tedious given the intricate dependence of Z_α on α. Since the general term of the EFIE operator Z_α is a reaction integral over ∂Ω_α, Reynolds' transport theorem could be employed to differentiate Z_α, as is discussed in [36, 99, 100]. In our case, however, these derivatives are approximated by a central finite-difference approach

\partial_{u_k} Z_u |_{u=\alpha_0} \approx \frac{1}{2\xi} \left( Z_{\alpha_0 + \xi t_k} - Z_{\alpha_0 - \xi t_k} \right),  with 0 < ξ ≪ λ,    (5.6)
where the components of the d-dimensional vector t_k are identically zero, except the k-th, which equals 1. Although this finite-difference approach is a brute-force one, it permits a straightforward approximation of the partial derivatives of Z_α.
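As a quick sketch of Eq. (5.6), the snippet below applies the same central finite-difference formula to a toy matrix-valued map (a stand-in for the EFIE operator, which is not reproduced here) and compares the result to the analytical derivative:

```python
import numpy as np

def central_diff(Z, u0, k, xi=1e-3):
    """Central finite difference of a matrix-valued map Z at u0 along the
    k-th coordinate (Eq. (5.6)): (Z(u0 + xi*tk) - Z(u0 - xi*tk)) / (2*xi)."""
    tk = np.zeros_like(u0)
    tk[k] = 1.0
    return (Z(u0 + xi * tk) - Z(u0 - xi * tk)) / (2.0 * xi)

# Toy stand-in for the EFIE operator: entries depend smoothly on u.
def Z_toy(u):
    return np.array([[np.sin(u[0]), u[0] * u[1]],
                     [np.exp(u[1]), np.cos(u[0] + u[1])]])

u0 = np.array([0.3, -0.2])
dZ0 = central_diff(Z_toy, u0, k=0)
# Exact derivative with respect to u[0], for comparison:
exact = np.array([[np.cos(u0[0]), u0[1]],
                  [0.0, -np.sin(u0[0] + u0[1])]])
assert np.allclose(dZ0, exact, atol=1e-5)
```

The step xi plays the role of ξ in Eq. (5.6); as in the text, it must stay small compared to the scale over which Z varies.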
5.2.2 Expansion of the traces on S_α of E[I_0] and E^i_β
Given its definition in Eq. (5.3), the approximant of the trace of E[I_0] follows directly from \tilde{Z}_{α,α_0} as

E[I_0]|_{S_\alpha} = \tilde{Z}^{PS}_{\alpha,\alpha_0} I_0 + O\left((|\alpha - \alpha_0|_2)^2\right).    (5.7)
Concerning the trace of the incident field E^i_β, a natural brute-force approach consists in regarding the vector E^i_β(r_α), at any given point r_α ∈ S_α, as a function of α, and then applying expansions in α around α_0, i.e.

E^i_\beta(r_\alpha) = E^i_\beta(r_{\alpha_0}) + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) E^{i(1)}_{\gamma_0,k} + O\left((|\alpha - \alpha_0|_2)^2\right),    (5.8)

in which E^{i(1)}_{\gamma_0,k} = \partial_{u_k} E^i_\beta(r_u)|_{u=\alpha_0}, for k = 1, ..., d, with u = (u_1, ..., u_d), and where the derivatives \partial_{u_k} E^i_\beta(r_u) are determined by using a central finite-difference formula.
Equation (5.8) can also be obtained by exploiting the vectorial nature of E^i_β to expand it around S_{α_0} as

E^i_\beta(r_\alpha) = E^i_\beta(r_{\alpha_0}) + \left[(r_\alpha - r_{\alpha_0}) \cdot \nabla_u\right] E^i_\beta(u)|_{u=r_{\alpha_0}} + O\left((\|r_\alpha - r_{\alpha_0}\|_2)^2\right),    (5.9)

where ‖·‖_2 is the Euclidean norm in R³. In the particular, yet interesting, case where E^i_β is a plane wave, E^i_β(r_α) = E_0 e^{j k_i · r_α}, where E_0 ∈ C³ and k_i ∈ R³ are given with k_i · E_0 = 0, Eq. (5.9) simplifies to

E^i_\beta(r_\alpha) = E^i_\beta(r_{\alpha_0}) + j\left[k_i \cdot (r_\alpha - r_{\alpha_0})\right] E^i_\beta(r_{\alpha_0}) + O\left((\|r_\alpha - r_{\alpha_0}\|_2)^2\right).    (5.10)
Given the geometrical parameterisation of S_α (see Section 2.2.1), r_α(r_D) can be written as r_α(r_D) = \sum_{k=1}^{d} α_k g_k(r_D), with deterministic functions g_1, ..., g_d defined on ∂D. Hence, the vector r_α − r_{α_0} can be expressed in terms of α − α_0, thereby casting Eq. (5.10) in a form similar to Eq. (5.8), i.e. a polynomial depending on the components of α − α_0.
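The quadratic decay of the residual in Eq. (5.10) can be checked numerically. The sketch below uses an illustrative plane wave (an assumed wavelength of 1 m, not a parameter from the text) and verifies that halving the displacement from the reference point divides the first-order expansion error by roughly four:

```python
import numpy as np

# First-order expansion of a plane wave around a reference point r0 (Eq. (5.10)):
# E(r) ~ E(r0) + j [k_i . (r - r0)] E(r0); the residual shrinks as O(|r - r0|^2).
E0 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # polarization, k_i . E0 = 0
ki = 2 * np.pi * np.array([0.0, 0.0, 1.0])     # wave vector for lambda = 1 m (assumed)

def E_inc(r):
    return E0 * np.exp(1j * np.dot(ki, r))

r0 = np.array([0.1, 0.2, 0.05])
errs = []
for h in (1e-2, 5e-3, 2.5e-3):                 # successive displacements along z
    r = r0 + np.array([0.0, 0.0, h])
    E_lin = E_inc(r0) + 1j * np.dot(ki, r - r0) * E_inc(r0)
    errs.append(np.linalg.norm(E_inc(r) - E_lin))

# Halving the displacement divides the error by about four (second-order residual).
ratios = [errs[i] / errs[i + 1] for i in range(2)]
assert all(3.5 < q < 4.5 for q in ratios)
```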
5.2.3 Resulting first-order expansions of J_α and Ve
Taking all the aforementioned expansions into account leads to the following approximation of the transmitting-state current

J_\alpha \approx \tilde{J}_{\alpha,\alpha_0} = -\left[\tilde{Z}^{SS}_{\alpha,\alpha_0}\right]^{-1} \tilde{Z}^{PS}_{\alpha,\alpha_0} I_0.    (5.11)
Although Z_α and E[I_0] are approximated by first-order terms depending on α, the current \tilde{J}_{α,α_0} contains higher-order terms originating from the inverse

\left[\tilde{Z}^{SS}_{\alpha,\alpha_0}\right]^{-1} = \left[ Z^{SS}_{\alpha_0} + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) Z^{SS(1)}_{\alpha_0,k} \right]^{-1}.

The resulting approximation of Ve, denoted Ṽe, is obtained by including the expansions of J_α and E^i_β in the definition of Ve, given by Eq. (5.1), i.e.

\tilde{V}_e(\gamma) = -\left[ \langle I_0 ; E^i_\beta \rangle_P + \langle \tilde{J}_{\alpha,\alpha_0} ; E^i_\beta \rangle_{S_{\alpha_0}} + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) \langle \tilde{J}_{\alpha,\alpha_0} ; E^{i(1)}_{\gamma_0,k} \rangle_{S_{\alpha_0}} \right].    (5.12)
The relations derived above involve only deterministic traces and operators depending
on the reference geometry specified by α0 . Therefore, these deterministic terms can be
evaluated during a pre-computation stage, then stored and re-used, thereby permitting a
significant gain in computation time.
Taylor expansions are by definition asymptotic in the sense that they are exact for α = α0
and accurate in the vicinity of α0 . Their range of validity is not trivial to determine since
it depends on the smoothness, as a function of α, of the operator Z α and of the traces
of the fields on ∂Ωα. The perturbation approximation should be all the more accurate as
the geometrical differences between the surfaces ∂Ωα and ∂Ωα0 are negligible compared
to the wavelength.
5.2.4 Statistical moments
The perturbation method is particularly helpful for the computation by quadrature of the statistical moments of Ve. Replacing Ve by its approximation Ṽe implies that the first evaluation of the voltage will generally be the most expensive, as it corresponds to the pre-computation of the deterministic terms necessary to build Ṽe. However, all the following evaluations of the observable will be advantageously cheap, as they re-use the deterministic terms computed during the initial run.
The perturbation approach is illustrated through the example of a coupling between a deterministic plane wave E^i_β {|E^i_β| = 1 Vm⁻¹, θ_i = 45°, φ_i = 45°, parallel polarization, f = 500 MHz} and a vertically varying wire meshed into 224 segments and described as

x_\alpha(y) = 0,  z_\alpha(y) = 0.05 + \alpha \sin[\pi(y - y_m)]  (in meters), for y \in [y_m; y_M].    (5.13)

The amplitude α is uniformly distributed in A = [−0.02; +0.02] m, and the elementary step ξ in the finite-difference formula (Eq. (5.6)) is chosen as ξ = 0.001 m. Table 5.2 details the computation time needed to evaluate Ve both with and without a perturbation
                        |   Without    | With Taylor expansions
                        | perturbation | First call | Following calls
Geometry build          |    1 ms      |    1 ms    |    1 ms
Filling [Z]             |  115 ms      |  360 ms    |    8 ms
Computing J_α           |   15 ms      |   24 ms    |   16 ms
Computing Ve(γ)         |    1 ms      |    8 ms    |    1 ms
Total time (1 evaluation of Ve)  | 132 ms | 393 ms | 26 ms
Total time (N evaluations of Ve) | ≈ N × 132 ms | ≈ 393 + N × 26 ms

Table 5.2: Computation time of Ve(γ) with and without perturbation transformations.
expansion. The first evaluation of Ṽe takes three times longer than a single evaluation of Ve without any transformation. Nonetheless, all the following runs of Ṽe are performed more than five times faster than in the absence of perturbation expansions.
5.3 First-order Neumann expansion of Z^{SS}_α
In spite of the gain in computation time achieved via the transformations that yield Ṽe, these approximations still require the inversion of the stochastic operator \tilde{Z}^{SS}_{α,α_0} for each α ∈ A to obtain the approximation of the current J_α (see Eq. (5.11)). The numerical price of this inversion generally increases as the mesh is refined. This difficulty is dealt with by applying a Neumann expansion [45, 101] to approximate the inverse of Z^{SS}_α. Under the assumption that Z^{SS}_{α_0} is invertible, [Z^{SS}_α]^{-1} can be written as

\left[Z^{SS}_\alpha\right]^{-1} = \left[ Z^{SS}_{\alpha_0} + \left( Z^{SS}_\alpha - Z^{SS}_{\alpha_0} \right) \right]^{-1} = \left[ 1 - \left[Z^{SS}_{\alpha_0}\right]^{-1} \left( Z^{SS}_{\alpha_0} - Z^{SS}_\alpha \right) \right]^{-1} \left[Z^{SS}_{\alpha_0}\right]^{-1},    (5.14)

where 1 stands for the identity operator on T S_α. Given an operator norm ‖·‖_op, as long as ‖[Z^{SS}_{α_0}]^{-1}(Z^{SS}_{α_0} − Z^{SS}_α)‖_op < 1, Eq. (5.14) can be approximated as

\left[Z^{SS}_\alpha\right]^{-1} \approx \left[Z^{SS}_{\alpha_0}\right]^{-1} + \left[Z^{SS}_{\alpha_0}\right]^{-1} \left( Z^{SS}_{\alpha_0} - Z^{SS}_\alpha \right) \left[Z^{SS}_{\alpha_0}\right]^{-1}.    (5.15)
This new expression requires only the single inversion of the deterministic operator Z^{SS}_{α_0}. By combining this Neumann expansion and the Taylor expansion Z^{SS}_α ≈ \tilde{Z}^{SS}_{α,α_0}, the following approximation of [Z^{SS}_α]^{-1} is deduced

\left[Z^{SS}_\alpha\right]^{-1} \approx \left[Z^{SS}_{\alpha_0}\right]^{-1} - \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) \left[Z^{SS}_{\alpha_0}\right]^{-1} Z^{SS(1)}_{\alpha_0,k} \left[Z^{SS}_{\alpha_0}\right]^{-1}.    (5.16)
The main effort in this transformation resides in the computation of the operator Z^{SS}_{α_0}, its inverse [Z^{SS}_{α_0}]^{-1}, and its d partial derivatives Z^{SS(1)}_{α_0,k}, for k = 1, ..., d, each of which requires the construction of two deterministic operators for use in the finite-difference formula³. All these 2 + 2d terms depend solely on the reference geometry associated with α_0. They can therefore be determined in a pre-computation stage, then stored and re-used to approximate [Z^{SS}_α]^{-1} for different values of α, thereby improving the computational efficiency of this method.
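A minimal numerical illustration of the first-order Neumann inverse of Eqs. (5.15)-(5.16), with small random matrices standing in for the discretised operators (the actual EFIE matrices are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Z0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # reference operator, well conditioned
Z1 = rng.standard_normal((n, n))                    # derivative-like perturbation direction
Z0_inv = np.linalg.inv(Z0)                          # the single inversion, done once

def neumann_inverse(delta):
    """First-order approximation of inv(Z0 + delta*Z1), as in Eq. (5.16):
    only products with the pre-computed Z0_inv, no new inversion."""
    return Z0_inv - delta * Z0_inv @ Z1 @ Z0_inv

errs = []
for delta in (1e-2, 1e-3):
    approx = neumann_inverse(delta)
    exact = np.linalg.inv(Z0 + delta * Z1)
    errs.append(np.linalg.norm(exact - approx))
# The residual is O(delta^2): shrinking delta tenfold cuts the error ~hundredfold.
```

The validity condition mirrors the one in the text: the perturbation must be small enough that the Neumann series converges.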
5.3.1 Resulting first-order expansion
By regrouping the aforementioned expansions in the form of Eq. (5.4), the resulting first-order approximation of the induced current J_α is derived as

J_\alpha = J_{\alpha_0} + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) J^{(1)}_{\alpha_0,k} + O\left((|\alpha - \alpha_0|_2)^2\right),    (5.17a)

with

J^{(1)}_{\alpha_0,k} = -\left[Z^{SS}_{\alpha_0}\right]^{-1} \left( Z^{PS(1)}_{\alpha_0,k} I_0 + Z^{SS(1)}_{\alpha_0,k} J_{\alpha_0} \right).    (5.17b)
The induced voltage is approximated by

V_e(\gamma) = \hat{V}_e(\gamma) + O\left((|\alpha - \alpha_0|_2)^2\right),    (5.18a)

with

\hat{V}_e(\gamma) = V_e(\gamma_0) + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) V^{(1)}_{s,e,k},    (5.18b)

and

V^{(1)}_{s,e,k} = -\left( \langle J_{\alpha_0} ; E^{i(1)}_{\gamma_0,k} \rangle_{S_{\alpha_0}} + \langle J^{(1)}_{\alpha_0,k} ; E^i_\beta \rangle_{S_{\alpha_0}} \right), for k = 1, ..., d.    (5.18c)
Unlike Ṽe defined in Eq. (5.12), the relation between V̂e and α is explicitly linear. The newly derived approximation of Ve involves only deterministic traces and currents that can be pre-computed. Compared to Ṽe, the pre-computation step for V̂e should amount to a similar numerical effort: this effort is mainly devoted to the Taylor expansions of the EFIE operator and of the traces of the field, from which the currents J_{α_0} and J^{(1)}_{α_0,k} can be determined rapidly. However, the subsequent evaluations of V̂e should be faster than those of Ṽe: to obtain Ṽe, the operator \tilde{Z}^{SS}_{α,α_0} needs to be inverted for each value of α, whereas for V̂e the only operations needed consist of products between the currents J^{(1)}_{α_0,k} and the components of (α − α_0).
³The matrix Z^{SS(1)}_{α_0,k} is a sub-matrix of [Z_{α_0}]^{(1)}, whereas the matrix [Z^{SS}_{α_0}]^{-1} cannot be deduced directly from [Z_{α_0}]^{-1}: [Z^{SS}_{α_0}] needs to be built first, as a sub-matrix of [Z_{α_0}], and then inverted to obtain [Z^{SS}_{α_0}]^{-1}.
These aspects are illustrated in Table 5.3, for the test case considered in Section 5.2.4. The first evaluation of V̂e amounts to 383 ms, instead of 393 ms for Ṽe. On the other hand, the following values of V̂e are obtained in 1 ms, whereas 26 ms are required to obtain Ṽe.
                        |   Without    | Taylor & Neumann expansions
                        | perturbation | First call | Following calls
Geometry build          |    1 ms      |    1 ms    |    0 ms
Filling [Z]             |  115 ms      |  327 ms    |    0 ms
Computing J_α           |   15 ms      |   39 ms    |    0 ms
Computing Ve(γ)         |    1 ms      |   16 ms    |    1 ms
Total time (1 evaluation of Ve)  | 132 ms | 383 ms | 1 ms
Total time (N evaluations of Ve) | ≈ N × 132 ms | ≈ 383 + N × 1 ms

Table 5.3: Computation time of Ve(γ) with and without perturbation transformations.
5.3.2 Statistical moments
Given the analytical relations in Eq. (5.18), it is possible to directly express the statistical moments of V̂e in terms of those of α. The average, for instance, becomes

E[V_e] \approx V_e(\gamma_0) + \sum_{k=1}^{d} \left( E[\alpha_k] - \alpha_{0,k} \right) V^{(1)}_{s,e,k},    (5.19)

and if the reference geometry is chosen such that α_0 = E[α], then E[V̂e(α)] ≈ V_e(γ_0). Assuming that α_0 = E[α] and that the components of α are mutually statistically independent yields the following approximation of the variance

var[V_e] \approx \sum_{k=1}^{d} var[\alpha_k] \left| V^{(1)}_{s,e,k} \right|^2.    (5.20)
Equations (5.19) and (5.20) are very interesting as they link the statistical moments of the voltage Ve to those of the geometry, represented by α, in a linear way. Similar relations can be established for higher-order moments of Ve, and the probability distribution of V̂e can even be obtained from the probability distribution of α via analytical formulas [67, 68, 102]. For instance, when α is one-dimensional, V̂e and α will have the same type of distribution. Alternatively, it is also possible to approximate the statistics of Ve with the aid of a quadrature rule in which Ve is replaced by V̂e, as is done in Section 5.5.
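The linear link between the moments of α and those of the voltage can be sketched as follows, with hypothetical complex first-order coefficients c_k playing the role of V^{(1)}_{s,e,k} (they are illustrative values, not taken from the thesis); a Monte Carlo run on the same linearised model confirms Eqs. (5.19)-(5.20):

```python
import numpy as np

# Linearised surrogate Vhat(alpha) = V0 + sum_k (alpha_k - alpha0_k) * c_k (Eq. (5.18b)),
# with alpha uniform on [-a_k, +a_k] and alpha0 = E[alpha] = 0.
rng = np.random.default_rng(1)
V0 = -0.0227 + 0.2582j                     # reference voltage (volts), cf. Table 5.5
c = np.array([0.4 + 0.1j, -0.2 + 0.3j])    # assumed sensitivities V^(1)_{s,e,k} (V/m)
a = np.array([0.02, 0.01])                 # half-widths of the uniform laws (m)

# Analytical moments: Eq. (5.19) gives E[Ve] ~ V0 since alpha0 = E[alpha];
# Eq. (5.20) gives the variance, using var of U(-a, a) = a^2 / 3.
mean_lin = V0
var_lin = np.sum(a**2 / 3.0 * np.abs(c) ** 2)

# Monte Carlo check on the same linear model:
alpha = rng.uniform(-a, a, size=(200_000, 2))
V = V0 + alpha @ c
assert abs(V.mean() - mean_lin) < 1e-4
assert abs(np.mean(np.abs(V - V.mean()) ** 2) - var_lin) < 1e-6
```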
Such a strategy leads to fast evaluations of the statistics, owing to the smoothness of V̂e as a function of α, and also owing to the reduction in the computational cost required to evaluate V̂e, as highlighted by Table 5.3.
5.4 Second-order perturbation method
Even when a first-order perturbation is performed, the variance of Ve yields second-order terms, as can be seen in Eq. (5.20). Nevertheless, these terms do not capture the entire second-order behaviour of Ve. Hence, it is interesting to investigate the effect of an extension of the perturbation approach via second-order Taylor and Neumann expansions.
5.4.1 Taylor and Neumann expansions
If Z_α is twice differentiable around α_0 ∈ A, a second-order Taylor expansion leads to

Z_\alpha = \tilde{\tilde{Z}}_{\alpha,\alpha_0} + O\left((|\alpha - \alpha_0|_2)^3\right),    (5.21a)

where

\tilde{\tilde{Z}}_{\alpha,\alpha_0} = \tilde{Z}_{\alpha,\alpha_0} + \frac{1}{2} \sum_{k=1}^{d} \sum_{l=1}^{d} (\alpha_k - \alpha_{0,k})(\alpha_l - \alpha_{0,l}) Z^{(2)}_{\alpha_0,k,l}.    (5.21b)

The operator \tilde{Z}_{α,α_0} is introduced in Eq. (5.5), and a central finite-difference formula is used to approximate the derivatives Z^{(2)}_{α_0,k,l} = \partial_{u_k}\partial_{u_l} Z_u |_{u=\alpha_0}, for k, l = 1, ..., d, i.e.

\partial_{u_k}\partial_{u_l} Z_u |_{u=\alpha_0} \approx \frac{ Z_{\alpha_0+\xi t_k+\xi t_l} + Z_{\alpha_0-\xi t_k-\xi t_l} - Z_{\alpha_0+\xi t_k-\xi t_l} - Z_{\alpha_0-\xi t_k+\xi t_l} }{4\xi^2}, if k ≠ l,

\partial^2_{u_k} Z_u |_{u=\alpha_0} \approx \frac{ Z_{\alpha_0+\xi t_k} + Z_{\alpha_0-\xi t_k} - 2 Z_{\alpha_0} }{\xi^2}, where 0 < ξ ≪ λ.

Thus, this formula requires the construction of three or four deterministic operators to obtain each second-order derivative of Z_α.
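These second-order finite-difference formulas can be sketched as follows, again on a toy matrix-valued map rather than the actual EFIE operator:

```python
import numpy as np

def second_diff(Z, u0, k, l, xi=1e-3):
    """Central finite differences for the second-order derivatives of a
    matrix-valued map Z at u0: mixed stencil for k != l, three-point
    stencil for the pure derivative k == l."""
    tk = np.zeros_like(u0); tk[k] = 1.0
    tl = np.zeros_like(u0); tl[l] = 1.0
    if k != l:
        return (Z(u0 + xi * (tk + tl)) + Z(u0 - xi * (tk + tl))
                - Z(u0 + xi * (tk - tl)) - Z(u0 - xi * (tk - tl))) / (4.0 * xi**2)
    return (Z(u0 + xi * tk) + Z(u0 - xi * tk) - 2.0 * Z(u0)) / xi**2

# Toy map with known derivatives: Z(u) = [[sin(u0) * cos(u1)]].
Z = lambda u: np.array([[np.sin(u[0]) * np.cos(u[1])]])
u0 = np.array([0.4, 0.7])
num = second_diff(Z, u0, 0, 1)
exact = np.array([[-np.cos(u0[0]) * np.sin(u0[1])]])   # d2Z/du0 du1
assert np.allclose(num, exact, atol=1e-5)
```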
The combination of the second-order Taylor and Neumann expansions of Z^{SS}_α around Z^{SS}_{α_0} produces the following approximation of [Z^{SS}_α]^{-1}

\left[Z^{SS}_\alpha\right]^{-1} = \left[Z^{SS}_{\alpha_0}\right]^{-1} - \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) \left[Z^{SS}_{\alpha_0}\right]^{-1} Z^{SS(1)}_{\alpha_0,k} \left[Z^{SS}_{\alpha_0}\right]^{-1}
  + \sum_{k=1}^{d} \sum_{l=1}^{d} (\alpha_k - \alpha_{0,k})(\alpha_l - \alpha_{0,l}) \left[Z^{SS}_{\alpha_0}\right]^{-1} R_{\alpha_0,k,l} \left[Z^{SS}_{\alpha_0}\right]^{-1} + O\left((|\alpha - \alpha_0|_2)^3\right),    (5.22)

where R_{\alpha_0,k,l} = Z^{SS(1)}_{\alpha_0,k} \left[Z^{SS}_{\alpha_0}\right]^{-1} Z^{SS(1)}_{\alpha_0,l} - \frac{1}{2} Z^{SS(2)}_{\alpha_0,k,l}, for k, l = 1, ..., d. The Neumann expansion still holds under the condition that ‖[Z^{SS}_{α_0}]^{-1}(Z^{SS}_α − Z^{SS}_{α_0})‖_op < 1.
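The gain brought by the second-order correction term of Eq. (5.22) can be illustrated on a single-parameter (d = 1) toy problem, with random matrices standing in for Z^{SS}_{α_0} and its derivatives:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Z0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in for Z^SS at alpha0
Z1 = rng.standard_normal((n, n))                     # first derivative along the parameter
Z2 = rng.standard_normal((n, n))                     # second derivative along the parameter
Z0i = np.linalg.inv(Z0)

def inv_second_order(delta):
    """Second-order Taylor/Neumann inverse of Z(delta) = Z0 + delta*Z1 + 0.5*delta^2*Z2,
    using R = Z1 Z0^{-1} Z1 - 0.5 Z2, as in Eq. (5.22) specialised to d = 1."""
    R = Z1 @ Z0i @ Z1 - 0.5 * Z2
    return Z0i - delta * Z0i @ Z1 @ Z0i + delta**2 * Z0i @ R @ Z0i

delta = 1e-2
exact = np.linalg.inv(Z0 + delta * Z1 + 0.5 * delta**2 * Z2)
e1 = np.linalg.norm(exact - (Z0i - delta * Z0i @ Z1 @ Z0i))  # first-order error
e2 = np.linalg.norm(exact - inv_second_order(delta))         # second-order error
assert e2 < e1   # the R term tightens the estimate
```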
The second-order expansion of the trace of E^i_β on S_α yields the equation

E^i_\beta(r_\alpha) = E^i_\beta(r_{\alpha_0}) + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) E^{i(1)}_{\gamma_0,k} + \frac{1}{2} \sum_{k=1}^{d} \sum_{l=1}^{d} (\alpha_k - \alpha_{0,k})(\alpha_l - \alpha_{0,l}) E^{i(2)}_{\gamma_0,k,l} + O\left((|\alpha - \alpha_0|_2)^3\right),    (5.23)

in which E^{i(2)}_{\gamma_0,k,l} = \partial_{u_k}\partial_{u_l} E^i_\beta(r_u)|_{u=\alpha_0}, for k, l = 1, ..., d, with u = (u_1, ..., u_d), and the derivatives \partial_{u_k}\partial_{u_l} E^i_\beta(r_u) are determined using a central finite-difference formula.

5.4.2 Second-order expansions of J_α and Ve
By gathering all the terms and retaining only those with an order lower than or equal to two, J_α becomes

J_\alpha = J_{\alpha_0} + \sum_{k=1}^{d} (\alpha_k - \alpha_{0,k}) J^{(1)}_{\alpha_0,k} + \frac{1}{2} \sum_{k=1}^{d} \sum_{l=1}^{d} (\alpha_k - \alpha_{0,k})(\alpha_l - \alpha_{0,l}) J^{(2)}_{\alpha_0,k,l} + O\left((|\alpha - \alpha_0|_2)^3\right),    (5.24)

where J^{(1)}_{α_0,k} is defined according to Eq. (5.17), and where, for any k, l = 1, ..., d,

J^{(2)}_{\alpha_0,k,l} = -\left[Z^{SS}_{\alpha_0}\right]^{-1} \left( Z^{PS(2)}_{\alpha_0,k,l} I_0 + Z^{SS(2)}_{\alpha_0,k,l} J_{\alpha_0} + 2 Z^{SS(1)}_{\alpha_0,k} J^{(1)}_{\alpha_0,l} \right).    (5.25)
The second-order expansion of the induced voltage is given by

V_e(\gamma) = \hat{\hat{V}}_e(\gamma) + O\left((|\alpha - \alpha_0|_2)^3\right),    (5.26a)

with

\hat{\hat{V}}_e(\gamma) = \hat{V}_e(\gamma) + \sum_{k=1}^{d} \sum_{l=1}^{d} (\alpha_k - \alpha_{0,k})(\alpha_l - \alpha_{0,l}) V^{(2)}_{e,k,l},    (5.26b)

and

V^{(2)}_{e,k,l} = -\left( \frac{1}{2} \langle J_{\alpha_0} ; E^{i(2)}_{\gamma_0,k,l} \rangle_{S_{\alpha_0}} + \langle J^{(1)}_{\alpha_0,k} ; E^{i(1)}_{\gamma_0,l} \rangle_{S_{\alpha_0}} + \frac{1}{2} \langle J^{(2)}_{\alpha_0,k,l} ; E^i_\beta \rangle_{S_{\alpha_0}} \right).    (5.26c)

This model captures the second-order behaviour of Ve in the neighbourhood of α_0. In comparison with the first-order approximation V̂e defined in Eq. (5.18), the additional numerical effort to obtain V̂̂e is imposed by the computation of the d² second-order derivatives V^{(2)}_{e,k,l}. In total, the 1 + 2d + 4d² deterministic terms that need to be evaluated can be computed at a preliminary stage.
The time required to execute such an approximation of Ve for the example of Section 5.2.4 is provided in Table 5.4. This table shows that the first evaluation of V̂̂e takes 946 ms, instead of 383 ms for V̂e and 393 ms for Ṽe. The successive computations are performed in merely 1 ms, similarly to V̂e.
                        |   Without    | Second-order expansions
                        | perturbation | First call | Following calls
Geometry build          |    1 ms      |    1 ms    |    0 ms
Filling [Z]             |  115 ms      |  858 ms    |    0 ms
Computing J_α           |   15 ms      |   63 ms    |    0 ms
Computing Ve(γ)         |    1 ms      |   24 ms    |    1 ms
Total time (1 evaluation of Ve)  | 132 ms | 946 ms | 1 ms
Total time (N evaluations of Ve) | ≈ N × 132 ms | ≈ 946 + N × 1 ms

Table 5.4: Computation time of Ve(γ) with a second-order perturbation transformation.
5.4.3 Statistical moments
Interestingly, the first two statistical moments can again be related to the statistics of the geometry. If, for instance, the components of α are mutually independent, then the average and the variance of Ve are approximated as

E[V_e] \approx V_e(\gamma_0) + \sum_{k=1}^{d} var[\alpha_k] \, V^{(2)}_{e,k,k},    (5.27)

var[V_e] \approx \sum_{k=1}^{d} var[\alpha_k] \left( \left| V^{(1)}_{s,e,k} \right|^2 + 2 \, \mathrm{Re}\left[ V_e(\gamma_0) V^{(2)*}_{e,k,k} \right] \right) + M_3^{>}(\alpha_0),    (5.28)

where the term M_3^{>}(α_0) contains statistical moments of α of orders higher than 2.
Given the increasing number of terms that appear when higher-order moments need to be computed, we choose instead to compute the statistical moments of the induced voltage by a quadrature rule in which Ve is replaced by V̂̂e.
5.5 Applications

The accuracy of the statistical moments obtained via the different forms of perturbation expansions is now investigated.
5.5.1 First four statistical moments

To begin with, the configuration of Section 5.2.4 is analyzed, i.e. a smoothly undulating wire parameterised by

x_\alpha(y) = 0,  z_\alpha(y) = 0.05 + \alpha \sin[\pi(y - y_m)]  (in m), for y \in [y_m; y_M],    (5.29)

where the amplitude α is uniformly distributed in A = [−0.02; +0.02] m, and where the elementary step ξ in the finite-difference formula (Eq. (5.6)) is chosen as ξ = 0.001 m. Since the incident field is a deterministic plane wave E^i_β {|E^i_β| = 1 Vm⁻¹, θ_i = 45°, φ_i = 45°, parallel polarization, f = 500 MHz}, the only random input parameter is α.

The average E[Ve] and the standard deviation σ[Ve] = (E[|Ve − E[Ve]|²])^{1/2} of Ve, together with the skewness sk[|Ve|] and the kurtosis κ[|Ve|] of |Ve|, are determined via a Clenshaw-Curtis quadrature rule with a maximum relative error of 1%. The statistics obtained without any perturbation transformation are taken as reference and compared with those obtained via Ṽe, V̂e and V̂̂e. Table 5.5 highlights the accuracy of the perturbation results, particularly for E[Ve], σ[Ve] and κ[|Ve|]. The skewness sk[|Ve|], which is a qualitative indicator for the symmetry of the distribution of |Ve| around its mean, is not accurately estimated by the perturbation calculations. Hence, in this example, the perturbation transformations do not maintain this symmetry.
                          |    Without    |    Taylor     |    Taylor     | Second-order
                          | perturbation  |  expansions   |  & Neumann    |  expansions
                          |               |               |  expansions   |
E[Ve] (in mV)             | -22.7 +j258.2 | -23.6 +j260.2 | -23.5 +j259.8 | -22.9 +j259.4
σ[Ve] (in mV)             |   9.1         |   9.3         |   9.1         |   9.2
sk[|Ve|]                  |  -0.486       |   0.117       |  -0.010       |  -0.132
κ[|Ve|]                   |   1.860       |   1.709       |   1.693       |   1.710
Time for 65 samples of Ve | ≈ 8.580 s     | ≈ 2.083 s     | ≈ 0.448 s     | ≈ 1.011 s
Computation time          |  13.230 s     |   6.990 s     |   5.410 s     |   5.792 s

Table 5.5: First four statistical moments of Ve.
From the point of view of the complexity of the quadrature rule, regardless of the transformation applied to Ve, the statistics are obtained through 65 samples of Ve. Hence, the perturbation expansions do not accelerate the convergence of the quadrature rule, which is due to the already smooth dependence of Ve on the parameters α of the geometry. However, the perturbation approaches reduce the total computation time by a factor of two when compared to the direct computations that use Ve. As can be seen in Table 5.5, the actual gain in computation time is lower than the gain initially estimated from the cost of running the transformed deterministic model. This difference stems from the error checks of the quadrature rule, as well as the writing of intermediate results to files, which represents an additional incompressible time consumption of approximately 4 s. Thus, the computer time differs from the computation time of the statistical moments.
5.5.2 Accuracy of the standard deviation

To assess the influence of the smoothness of the geometry of the scatterer on the accuracy of the perturbation method, a thin wire

S_l(\alpha) :  x_\alpha(y) = 0,  z_\alpha(y) = 0.05 + \alpha \sin[l\pi(y - y_m)]  (in m), for y \in [y_m; y_M],    (5.30)

is considered, where the integer l fixes the rate of undulation of the axis of the wire. As l increases, the fluctuations of S_l(α) increase accordingly, as does the total length of S_l(α). The random amplitude α is assumed to be uniformly distributed in the domain A(d) = [−d; +d], with d ∈ [0; 5) cm. A larger value of d is synonymous with a larger range of uncertainty of α, and should therefore translate into a higher variance of the induced voltage. The reference geometry for the perturbation method is chosen as the straight wire, which corresponds to α_0 = E[α] = 0. The incident field E^i_β is a plane wave with the following characteristics {|E^i_β| = 1 Vm⁻¹, θ_i = 45°, φ_i = 45°, parallel polarization} at the frequency f ∈ [100; 500] MHz. The presence of resonances in this range of frequencies allows for an evaluation of the accuracy of the perturbation results in these harsh situations. To this end, the different values of the standard deviation σ[Ve] are mutually compared. Additional examples of first-order perturbation studies that we have carried out can be found in [53, 103].
First, the “smooth” wire S1 (α), i.e. zα (y) = 0.05 + α sin [π(y − ym )], is studied with an
amplitude α that is distributed either in the domain A(d1 = 2 cm) or in A(d2 = 4 cm).
The corresponding standard deviation σ[Ve ] is computed by a Clenshaw-Curtis rule, with
a maximum relative error of 1%. These results, taken as references, are plotted in Fig. 5.1
and reveal a larger magnitude of σ[Ve ] in the case where α ∈ A(d2) than when α ∈ A(d1 ),
as expected from the fact that d2 > d1 . In both cases, the resonance situations are signaled
by the increase of σ[Ve ] around 210 MHz, 340 MHz and 480 MHz.
Figure 5.1: Wire S1(α): Reference values of σ[Ve] for α uniformly distributed in A(d) = [−d; +d]: case d = d1 = 2 cm (solid line) and case d = d2 = 4 cm (dashed line).

The standard deviations obtained via perturbation transforms are compared to the reference values of σ[Ve] through their relative errors, which are depicted in Fig. 5.2(a) for α ∈ A(d1) and in Fig. 5.2(b) for α ∈ A(d2). Figure 5.2(a) shows that, overall, the perturbation approach yields accurate results away from resonances, with a relative error smaller than 2%. The results obtained using Ṽe are generally more accurate than those obtained via V̂e and V̂̂e, which is understandable given the presence of higher-order terms in the definition of Ṽe. This observation also underscores the non-negligible effect of the Neumann expansion on the precision of the perturbation results. Furthermore, the second-order expansions do not improve the accuracy of σ[Ve] significantly; between 290 MHz and 310 MHz, the first-order expansion is even more accurate.
These observations are confirmed in Fig. 5.2(b), where the accuracy of the perturbation results is also slightly degraded compared to the case where α belongs to the narrower range A(d1). More generally, when dealing with such a smooth thin wire, the results obtained via the perturbation method can be trusted, as long as they are not computed around a resonance frequency, where the behaviour of Ve as a function of α becomes more irregular and non-linear.
As a second example, a “rougher” geometry S4(α) is analyzed. Consequently, the wire is such that z_α(y) = 0.05 + α sin[4π(y − y_m)]. The reference standard deviations computed directly by quadrature, when α ∈ A(d1) and α ∈ A(d2), are displayed in Fig. 5.3. The larger values of σ[Ve] relative to the values obtained with S1(α) (see Fig. 5.1) imply that the roughness of the geometry of the wire accentuates the spread of Ve. This result can be understood intuitively by noting that for S1(α), the wires S1(α = 0), S1(α = ±2 cm) and S1(α = ±4 cm) have similar geometries, whereas for S4(α) more undulations appear in the shape of the wire as α varies from 0 to ±2 cm or ±4 cm.
(a) α ∈ A(d1)   (b) α ∈ A(d2)

Figure 5.2: Wire S1(α): Relative error of the perturbation σ[Ve] with respect to the reference σ[Ve]: Ṽe ↔ Taylor expansions (solid line), V̂e ↔ first-order Taylor and Neumann expansions (circled line), V̂̂e ↔ second-order expansions (dashed line).
Figure 5.3: Wire S4(α): Reference values of σ[Ve] for α uniformly distributed in A(d) = [−d; +d]: case d = d1 = 2 cm (solid line) and case d = d2 = 4 cm (dashed line).
The higher undulation rate of S4 also leads to larger errors in the standard deviations computed by perturbation. This can be seen by comparing Fig. 5.4(a) to Fig. 5.2(a), and Fig. 5.4(b) to Fig. 5.2(b). Similarly to S1, the performance of the perturbation approach deteriorates in the neighbourhood of resonances, and the use of Ṽe produces more accurate results than when V̂e or V̂̂e are employed. From these examples, it can be concluded that, in terms of precision, a perturbation approach based only on Taylor expansions, i.e. using Ṽe, should be preferred. Moreover, the approximations V̂e and V̂̂e still provide accurate results, particularly away from resonances. In the present cases, second-order expansions do not yield notable improvements of the statistics.

These conclusions should also be put in perspective with the computation time of the different expansions, since using V̂e is faster than using V̂̂e, which in turn runs faster than Ṽe, as discussed in the previous sections. Therefore, a trade-off needs to be found between the desired level of accuracy and the time budget to be allocated.
(a) α ∈ A(d1)   (b) α ∈ A(d2)

Figure 5.4: Wire S4(α): Relative error of the perturbation σ[Ve] with respect to the reference σ[Ve]: Ṽe ↔ Taylor expansions (solid line), V̂e ↔ first-order Taylor and Neumann expansions (circled line), V̂̂e ↔ second-order expansions (dashed line).
5.5.3 Economical sample generator

Given its gain in computation time, the perturbation approach can also be viewed as an "economical" means of generating samples of Ve. This feature is exploited in this section to analyze the probability distribution of the modulus |Ve| of the voltage induced by a field E^i_β {|E^i_β| = 1 Vm⁻¹, θ_i = 45°, φ_i = 45°, parallel polarization, f = 500 MHz} at the port of a wire given by

x_\alpha(y) = \alpha_1 \sin[\pi(y - y_m)],  z_\alpha(y) = 0.05 + \sum_{k=1}^{2} \alpha_{1+k} \sin[k\pi(y - y_m)]  (in m), for y \in [y_m; y_M].    (5.31)
Assuming that the vector α = (α_1, α_2, α_3) is uniformly distributed in A = [−0.02; +0.02] × [−0.01; +0.01] × [−0.01; +0.01] m³, an ensemble of 10⁴ samples of Ve is constructed by varying α uniformly in A and then computing the associated values of the voltage, both with and without perturbation transformations. The resulting set of values is then sorted to obtain the empirical cumulative distribution function [67] of |Ve| that is plotted in Fig. 5.5.
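The sorting step that yields the empirical cdf can be sketched as follows, on illustrative stand-in samples rather than the actual ensemble of |Ve| values:

```python
import numpy as np

# Empirical cdf by sorting: F(x_(i)) = i/N at the i-th smallest sample.
rng = np.random.default_rng(3)
samples = np.abs(0.256 + 0.019 * rng.standard_normal(10_000))  # stand-in for |Ve| (volts)

x = np.sort(samples)                    # sorted sample values
F = np.arange(1, x.size + 1) / x.size   # empirical cdf evaluated at x

# Sanity checks: F reaches 1, and its 0.5-crossing sits at the sample median.
assert F[-1] == 1.0
median = x[np.searchsorted(F, 0.5)]
assert abs(median - np.median(samples)) < 1e-4
```

Plotting F against x then produces curves of the kind shown in Fig. 5.5.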
Figure 5.5: Empirical cumulative distribution functions of |Ve|: without perturbation (solid line), with Ṽe (dashed line), with V̂e (dash-dotted line) and with V̂̂e (dotted line).
The shift between the reference cdf of |Ve | and those of $|\tilde{V}_e|$, $|\hat{V}_e|$ and $|\hat{\hat{V}}_e|$ reveals the
error that is made in estimating E[|Ve |] through the perturbation method, as confirmed
5.5 Applications
85
in Table 5.6. Nonetheless, these first- and second-order expansions approximate the slope
of the cdf of |Ve | correctly, which also transpires in the accuracy of σ[|Ve |].
                    with Ve   with $\tilde{V}_e$   with $\hat{V}_e$   with $\hat{\hat{V}}_e$
E[|Ve |] (in mV)     256          262                 262                260
σ[|Ve |] (in mV)      19           19                  17                 17

Table 5.6: Average and standard deviation of |Ve |.
As such, the performances of the three perturbation expansions are similar. However, these
expansions exhibit different features in terms of the computational effort they imply. This
can be observed in Fig. 5.6, which depicts the computation time needed to obtain the 10⁴
deterministic samples by varying α in A.
Figure 5.6: Computation time of up to 10⁴ deterministic samples Ve : direct computation (solid
line), using $\tilde{V}_e$ (dashed line), using $\hat{V}_e$ (circled line) and using $\hat{\hat{V}}_e$ (squared line).
This graph illustrates the comparable pre-computation time required for the first-order
expansions $\tilde{V}_e$ and $\hat{V}_e$, as well as the heavier preliminary computations of the
second-order expansions. As the number of sample evaluations increases, the
computation times for $\hat{V}_e$ and $\hat{\hat{V}}_e$ have similar magnitudes, whereas the time required
to determine $\tilde{V}_e$ increases faster. In all cases, as soon as more than 30 samples need to be
computed, the cost of resorting to a perturbation approximation remains lower than the
cost of running the original deterministic model.
5.6 Conclusion and extensions
This chapter has presented and illustrated several methods to reduce the computational
cost of the deterministic electromagnetic model. Comparisons between three asymptotic
expansions, based on first- and second-order Taylor expansions, have shown that the
perturbation approach leads to significant gains in computation time when the statistical
moments of the observable are computed. It is worth noting that these expansions hold
regardless of the type of randomness of the geometry.
The accuracy of the perturbation statistical results has been demonstrated for smoothly
varying geometries. Second-order expansions slightly improve this accuracy. However,
higher-order expansions come at the cost of larger preliminary computations and can
produce “secular” or parasitic terms, which cause the perturbation approximation to
diverge away from the reference configuration [101, 104]. Moreover, in the vicinity of
resonance frequencies, where the behaviour of Ve becomes rougher, the accuracy of the
perturbation results degrades.
The general question of the range of validity of the perturbation methods is non-trivial.
Although the prerequisite to the Neumann expansion allows for the construction of
explicit bounds, the range of accuracy of the Taylor expansions is tedious to determine.
This pinpoints the core of the difficulty of the perturbation method, viz. the calculation
of statistical moments accounting for the global variations of the observable, by resorting
to local expansions that are usually only exact for the configuration chosen as a reference.
An alternative way to broaden the range of validity of the perturbation method consists
of a “sliding-window” technique or a “marching-on-in-anything” method [36, 66], in which
several reference geometries are considered, around which the expansions are performed.
Concerning the Taylor expansions, the first- and second-order derivatives can be evaluated
via more sophisticated methods. The use of Reynolds' transport theorem [99, 100], or of
an adjoint-variable approach [36, 105], represents an appealing candidate that is also more
involved than a central finite-difference formula.
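As an illustration of the central finite-difference route mentioned above, the following sketch estimates the first two derivatives of a scalar observable and forms the usual first- and second-order perturbation estimates of its moments. The observable, reference point, and variance are hypothetical stand-ins, not the electromagnetic solver of this thesis.

```python
import math

# Hypothetical smooth observable standing in for Ve(alpha); the real one would
# come from the deterministic electromagnetic solver.
def v(alpha):
    return math.sin(4.0 * alpha) + 2.0

alpha_ref = 0.0      # reference geometry around which the expansion is made
h = 1e-4             # step of the central finite-difference formulas
sigma2 = 0.01 ** 2   # assumed variance of the random amplitude alpha

# Central differences for the first two derivatives at the reference.
d1 = (v(alpha_ref + h) - v(alpha_ref - h)) / (2.0 * h)
d2 = (v(alpha_ref + h) - 2.0 * v(alpha_ref) + v(alpha_ref - h)) / h ** 2

# Standard perturbation estimates of the first two moments of Ve:
mean_est = v(alpha_ref) + 0.5 * d2 * sigma2   # E[Ve] to second order
var_est = d1 ** 2 * sigma2                    # var[Ve] to first order
```

Only three solver calls are needed here, which is the source of the computational gain; the price, as discussed above, is that the estimates are only locally accurate around alpha_ref.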
Chapter 6
Polynomial-chaos method
Unlike the perturbation method, which is based on local Taylor and Neumann expansions
around a reference configuration, the so-called polynomial-chaos (PC) method employs
orthogonal polynomials to interpolate over the entire random space. The resulting
interpolant has the double advantage of capturing the randomness of Ve entirely in
the polynomials and of easing the statistical post-processing of Ve . This chapter describes the
major steps towards deriving the PC representation. They include a re-parameterisation
of the problem in terms of standard random variables, the construction of the orthogonal
polynomials, and the projection of Ve on these polynomials. Examples then illustrate the
efficiency of the PC representation for the characterization of Ve ’s randomness.
6.1 General method
The random variable Ve (γ) can be regarded as a function defined on the probability space
(G, Eγ , Pγ ) which contains γ, and taking its values in C. Moreover, due to finite-energy
considerations that impose the finiteness of the variance, Ve belongs to the Hilbert space
\[ L^2_{P_\gamma}(\mathcal{G},\mathbb{C}) = \left\{ g : \mathcal{G} \to \mathbb{C} : E\left[|g(\gamma')|^2\right] = \int_{\mathcal{G}} |g(\gamma')|^2 f_\gamma(\gamma')\, d\gamma' < \infty \right\}, \tag{6.1} \]
with fγ being the pdf of γ.
6.1.1 Standardization
A prerequisite of the PC method is to express the initial random input γ = (γ1 , . . . , γd ),
which belongs to the probability space (G, Eγ , Pγ ) where G ⊂ Rd , in terms of a real
random vector x = (x1 , . . . , xd ) belonging to a known probability space (X , Ex, Px). The
vector x will have mutually independent components with vanishing averages E[xk ] = 0,
and unit variances, i.e. E[xk²] = 1. Regardless of the differences between the spaces
(G, Eγ , Pγ ) and (X , Ex, Px), such a transformation is always possible in the framework of
probability theory through the Karhunen-Loève and the random-variable-transformation
theorems [68, 106].
The original random input γ becomes, de facto, a function of x, and so does the observable
Ve , which can be rewritten as Ve (γ(x)). The finite-variance condition then implies that Ve
belongs to L²Px(X, C), which is isomorphic to the tensor product of the spaces L²Px(X, R)
and C [107, 108]
\[ L^2_{P_x}(\mathcal{X},\mathbb{C}) \simeq L^2_{P_x}(\mathcal{X},\mathbb{R}) \otimes \mathbb{C}, \tag{6.2} \]
where the elements of L²Px(X, R) ⊗ C correspond to linear combinations of products
between elements of L²Px(X, R) and elements of C. Since the complex plane C, which
is isomorphic to R², has a known basis, the next step of the PC method is to
construct a basis for L²Px(X, R) by resorting to polynomials.
6.1.2 Construction of orthogonal Wiener-Askey polynomials
Owing to the statistical independence of the components of x, its probability distribution
Px and its pdf fx follow directly from the marginal distributions Pxk and pdfs fxk ,
\[ P_x(dx') = P_{x_1}(dx'_1) \cdots P_{x_d}(dx'_d) \quad \text{and} \quad f_x(x') = f_{x_1}(x'_1) \cdots f_{x_d}(x'_d), \tag{6.3} \]
for x′ = (x′1 , . . . , x′d ) ∈ X = X1 × . . . × Xd ⊂ Rd. An additional consequence of the mutual
independence of the components of x is that L²Px(X, R) corresponds to the tensor product
of the elementary spaces L²Pxk(Xk, R) [107]
\[ L^2_{P_x}(\mathcal{X},\mathbb{R}) = L^2_{P_{x_1}}(\mathcal{X}_1,\mathbb{R}) \otimes \ldots \otimes L^2_{P_{x_d}}(\mathcal{X}_d,\mathbb{R}). \tag{6.4} \]
For any k = 1, . . . , d, the probability distribution Pxk defines an inner product ⟨·, ·⟩Pxk on
L²Pxk(Xk, R) as
\[ \langle g, h \rangle_{P_{x_k}} = \int_{\mathcal{X}_k} f_{x_k}(t)\, g(t)\, h(t)\, dt, \quad \text{for } g, h \in L^2_{P_{x_k}}(\mathcal{X}_k,\mathbb{R}), \tag{6.5} \]
which, in turn, induces the norm $\|\cdot\|_{P_{x_k}} = \langle \cdot, \cdot \rangle_{P_{x_k}}^{1/2}$ on L²Pxk(Xk, R).
Given an integer Nk , a sequence Φ1k(Nk) = {φ1k,s , s = 0, . . . , Nk } of real-valued
polynomials can be constructed such that 1) the degree of φ1k,s equals s and 2) for φ1k,m , φ1k,l
in Φ1k(Nk), the following orthogonality property holds: $\langle \phi^1_{k,m}, \phi^1_{k,l} \rangle_{P_{x_k}} = \|\phi^1_{k,m}\|^2_{P_{x_k}}\, \delta^l_m$, where
δlm is the Kronecker symbol¹. A family Ψ1k(Nk) = {ψ1k,s , s = 0, . . . , Nk } of orthonormal
polynomials is deduced from Φ1k(Nk) through the normalization $\psi^1_{k,m} = \phi^1_{k,m}/\|\phi^1_{k,m}\|_{P_{x_k}}$,
for m ∈ {0, . . . , Nk }. The set Ψ1k(Nk) forms an orthonormal basis for the ensemble of real-valued
polynomials with degree at most equal to Nk . In the limit where Nk → ∞, the
family Ψ1k ≡ Ψ1k(∞) constitutes an orthonormal basis which is dense in L²Pxk(Xk, R) [107,
109]. These orthogonal polynomials can be constructed in several ways, e.g. via a three-term
recurrence relation, or via a Rodrigues formula that provides the polynomials by
successive differentiations of a generating function [110]. More generally, the different
types of families of fxk-orthogonal polynomials follow from the Askey scheme. They can
be structured hierarchically into a tree that prescribes how to deduce certain families of
polynomials as limits of others [106].
A polynomial-chaos system can now be defined as the triplet consisting of the measure
fxk , its support Xk , and the orthonormal basis Ψ1k . Some examples of such PC systems
are listed in Table 6.1, among which the Hermite-Gauss system is worth mentioning as
the system initially introduced by Wiener [109].
Measure fxk    Support Xk    Polynomials Ψ1k
Uniform        (−1; 1)       Legendre
Gaussian       R             Hermite
Gamma          (0; +∞)       Laguerre

Table 6.1: Examples of continuous polynomial-chaos systems.
With the aid of the families Ψ1k of univariate polynomials, we deduce the d-variate
orthogonal polynomials as
\[ \psi^d_{\mathbf{n}}(x) = \left(\psi^1_{1,n_1} \otimes \ldots \otimes \psi^1_{d,n_d}\right)(x) = \psi^1_{1,n_1}(x_1) \cdots \psi^1_{d,n_d}(x_d), \tag{6.6} \]
with n = (n1 , . . . , nd ) ∈ Nd and ψ1k,nk ∈ Ψ1k , for 1 ≤ k ≤ d. The collection of these
multivariate polynomials constitutes the set ΨdPx, which is employed as a basis for L²Px(X, R).
6.1.3 Spectral decomposition of Ve
By projection onto the basis Ψdfx, the observable Ve can be represented as
\[ V_e(\gamma) = \sum_{\mathbf{k} \in \mathbb{N}^d} V_e^{PC}(\mathbf{k})\, \psi^d_{\mathbf{k}}(x), \quad \text{with } \mathbf{k} = (k_1, \ldots, k_d), \tag{6.7} \]
¹The Kronecker symbol is such that δlm = 1 if l = m and δlm = 0 otherwise.
where the series converges in mean square [101]. Practical constraints impose the
truncation of the sum in Eq. (6.7) at a finite order. Choosing the same order N∗ for
all the components of x leads to the approximation
\[ V_e(\gamma) \approx \sum_{k_1=0}^{N_*} \ldots \sum_{k_d=0}^{N_*} V_e^{PC}(\mathbf{k})\, \psi^d_{\mathbf{k}}(x), \quad \text{for } \mathbf{k} = (k_1, \ldots, k_d), \tag{6.8} \]
which contains $(N_* + d)!/(N_*!\, d!)$ multivariate monomial terms [107, 111].
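The growth of this term count with the dimension d and the order N∗ can be checked with a one-line helper (the function name is illustrative):

```python
from math import factorial

def pc_term_count(n_star, d):
    """Number of terms (N* + d)!/(N*! d!) kept in an order-N* PC representation."""
    return factorial(n_star + d) // (factorial(n_star) * factorial(d))
```

For example, a single random input truncated at order 19 keeps 20 coefficients, while already for d = 3 and N∗ = 2 the count is 10; the factorial growth with d is what makes high-dimensional inputs costly.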
Given the orthonormality of the basis ΨdPx(N∗ , . . . , N∗ ) with respect to the inner product
⟨·, ·⟩Px, the factors VePC(k) can be determined by a Galerkin projection
\[ V_e^{PC}(\mathbf{k}) = \langle \psi^d_{\mathbf{k}}, V_e \rangle_{P_x} = \int_{\mathcal{X}} \psi^d_{\mathbf{k}}(x)\, V_e(\gamma)\, f_x(x)\, dx = E\left[ \psi^d_{\mathbf{k}}(x)\, V_e(\gamma(x)) \right]. \tag{6.9} \]
The deterministic factors VeP C (k) are complex numbers, which have the same physical
dimension as Ve . They quantify the statistical correlation between Ve and the random
variable ψkd (x), where x is distributed according to Px in X .
From a computational point of view, VeP C (k) is the expectation of ψkd (x)Ve (γ(x)) and
is computable by using the probability distribution of x. However, the support X can
be unbounded and give rise to improper integrals in Eq. (6.9). To avoid this numerical
difficulty, it is preferable to interpret the average defining VeP C (k) in terms of γ, which
is bounded by hypothesis. This is achieved by expressing x in terms of γ, component by
component, by a random-variable transformation. As a result, VePC(k) becomes
\[ V_e^{PC}(\mathbf{k}) = E\left[ \psi^d_{\mathbf{k}}(x(\gamma))\, V_e(\gamma) \right] = \int_{\mathcal{G}} \psi^d_{\mathbf{k}}(x(\gamma'))\, V_e(\gamma')\, f_\gamma(\gamma')\, d\gamma'. \tag{6.10} \]
These averages are evaluated by a quadrature rule and represent most of the numerical
effort of the PC approach. For the sake of numerical efficiency, the same quadrature rule
is employed to compute the averages VePC (0, . . . , 0), . . . , VePC (N∗ , . . . , N∗ ), rather than
employing a different rule for each component. Nonetheless, for a high dimension d of
x, and large truncation orders N∗ , the number of components to be computed increases
drastically. The error arising from the use of a quadrature rule is controlled by the
convergence indicators described in Chapter 4.
6.1.4 Statistical post-processing of the PC decomposition
The PC decomposition described by Eq. (6.8) provides an explicit expression of Ve , in
terms of its random input x, which eases the statistical characterization of Ve .
Beginning with E[Ve ], for instance, one can show that the vanishing averages of all the
random variables ψdk(x), except for ψd0(x) which has a unit average [101], imply that
E[Ve (γ)] = VePC(0). Moreover, the orthogonality of the polynomials ψdk with respect to
the probability distribution Px simplifies the calculation of var[Ve ]
\[ \mathrm{var}[V_e] = E\left[\left|V_e - E[V_e]\right|^2\right] = \sum_{k_1=1}^{N_*} \ldots \sum_{k_d=1}^{N_*} |V_e^{PC}(\mathbf{k})|^2. \tag{6.11} \]
Finally, a highly appreciable feature of the PC approximation of Ve resides in the
separation between the “physical” coefficients VeP C (k), which require the numerically
costly computations, and the contribution of the random input x, which is entirely
expressed by the polynomials ψdk(x), which are numerically cheap to compute. Since the
factors VePC (k) are deterministic, they need to be computed only once, during a pre-computation stage. They can then be stored and reused for successive manipulations of
the PC approximation. The subsequent numerical efficiency permits the determination of
higher-order statistics of Ve (γ), and an approximation of its probability distribution, via
a deterministic sweep.
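The steps of Sections 6.1.1–6.1.4 can be sketched end-to-end for a one-dimensional Legendre-uniform chaos. The smooth toy observable below stands in for Ve, and a plain Simpson rule replaces the quadrature rules of Chapter 4; all names and values are illustrative, not taken from the thesis examples.

```python
import math

# Toy smooth observable standing in for Ve(alpha) (assumption: the real Ve comes
# from the electromagnetic solver).
def observable(alpha):
    return math.exp(3.0 * alpha)

a, b = -0.03, 0.03                    # alpha uniformly distributed in [a, b]

def alpha_of_x(x):
    """Inverse of the standardization of Eq. (6.13): x in [-1, 1] -> alpha in [a, b]."""
    return a + 0.5 * (x + 1.0) * (b - a)

def legendre_orthonormal(n, x):
    """Legendre polynomial, orthonormal w.r.t. the pdf f_x = 1/2 on [-1, 1]."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for l in range(2, n + 1):
        p_prev, p = p, ((2 * l - 1) * x * p - (l - 1) * p_prev) / l
    return p * math.sqrt(2 * n + 1)   # since ||phi_n||^2 = 1/(2n+1)

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule (stand-in for the quadrature rules of Chapter 4)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# Galerkin projection of Eq. (6.9): c_k = E[psi_k(x) Ve(alpha(x))], weight 1/2.
N = 8
coeffs = [simpson(lambda x, k=k: 0.5 * legendre_orthonormal(k, x)
                  * observable(alpha_of_x(x)), -1.0, 1.0)
          for k in range(N + 1)]

mean_pc = coeffs[0]                       # E[Ve] = c_0 (Section 6.1.4)
var_pc = sum(c * c for c in coeffs[1:])   # var[Ve] = sum of squared higher terms
```

Once `coeffs` is stored, any number of samples of the observable can be re-generated by evaluating the cheap polynomials, which is the separation of variables highlighted above.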
6.2 Varying wire under deterministic illumination
The PC method is first utilized to study a vertically varying thin wire illuminated by a
deterministic incident field. The vertical undulations of the wire are described by
xα (y) = 0, zα (y) = 0.05 + α sin[4π(y − ym )],
(in m), for y ∈ [ym ; yM ],
(6.12)
with the random amplitude α uniformly distributed in A = [a; b] = [−0.03; +0.03] m. The
incident field E iβ is a plane wave with the following properties {|E iβ |=1 Vm−1 , θi = 45◦ ,
φi = 45◦ , parallel polarization, f =500 MHz}. The only uncertain input of this problem is
the amplitude α, which is used to parameterise the standard PC random variable x as
x(α) = Fx−1 ◦ Fα (α),
(6.13)
where Fα and Fx are the cdfs of α and x, respectively. Since α is uniformly distributed,
\[ F_\alpha(\alpha') = \frac{\alpha' - a}{b - a} \in [0; 1], \quad \text{for } \alpha' \in \mathcal{A}. \tag{6.14} \]
Two different types of PC systems are tested, namely Legendre-uniform and Hermite-Gauss PC decompositions.
Legendre-uniform (LU) system [106]
In this type of polynomial chaos, the variable x is uniformly distributed in X = [−1; 1]
with a pdf fx (t) = 1/2 for −1 ≤ t < 1, and fx (t) = 0 otherwise. Its inverse cdf reads
\[ F_x^{-1}(u) = 2u - 1, \quad \text{for } u \in [0, 1]. \tag{6.15} \]
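Composing Eqs. (6.13)–(6.15), the LU standardization reduces to an affine map, as this minimal sketch shows (function names are illustrative):

```python
a, b = -0.03, 0.03   # support of the uniform amplitude alpha (Section 6.2)

def F_alpha(alpha):
    """cdf of the uniform amplitude, Eq. (6.14)."""
    return (alpha - a) / (b - a)

def F_x_inv(u):
    """Inverse cdf of x ~ U[-1, 1], Eq. (6.15)."""
    return 2.0 * u - 1.0

def x_of_alpha(alpha):
    """Standardization of Eq. (6.13): x = F_x^{-1}(F_alpha(alpha))."""
    return F_x_inv(F_alpha(alpha))
```

The endpoints a and b are mapped to −1 and +1, and the midpoint of the support to 0, as expected of the standardized variable.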
The non-normalized Legendre polynomials are obtained by recurrence as φ10(t) = 1,
φ11(t) = t, and
\[ \phi^1_l(t) = \frac{2l-1}{l}\, t\, \phi^1_{l-1}(t) - \frac{l-1}{l}\, \phi^1_{l-2}(t) \ \text{ for } l \ge 2, \quad \text{with } \|\phi^1_l\|^2 = \frac{1}{2l+1}, \ \text{ for } l \ge 0. \]
Hermite-Gauss (HG) system [106]
The standard variable x, which belongs to X = R, follows a normal distribution with a
pdf $f_x(t) = (2\pi)^{-1/2}\, e^{-t^2/2}$, and an inverse cdf
\[ F_x^{-1}(u) = \sqrt{2}\; \mathrm{erf}^{-1}(2u - 1), \quad \text{for } u \in (0; 1). \tag{6.16} \]
The inverse error function erf−1 is a special function that can be approximated through
a series expansion described in [110, p. 297]. The non-normalized Hermite polynomials
are defined recursively by φ10(t) = 1, φ11(t) = t, and
\[ \phi^1_l(t) = t\, \phi^1_{l-1}(t) - (l-1)\, \phi^1_{l-2}(t), \ \text{ for } l \ge 2, \quad \text{where } \|\phi^1_l\|^2 = l!, \ \text{ for } l \ge 0. \tag{6.17} \]
PC decompositions and statistical post-processing
The first 20 components of the PC approximation of Ve are determined via a Clenshaw-Curtis rule, with a maximum relative error of 1%. For k = 0, . . . , 19, if the quadrature rule
employs NV deterministic samples of Ve , the relative error of the quadrature
approximation of the coefficient VeP C (k) is denoted Ek (NV ), and the highest relative
error is written as Emax (NV ) = max_{k=0,...,19} Ek (NV ). Given these definitions, the computational
effort involved in the determination of the PC representation can be observed in Figs 6.1(a)
and 6.1(b), which depict Ek versus k and NV for the LU and HG polynomial chaos.
This pre-computation amounts to 15 s (NV =65) for the Legendre-uniform polynomial
chaos (LU-PC) and to 28 s (NV =129) for the Hermite-Gauss polynomial chaos (HG-PC).
Both plots highlight faster convergence for the lower-order PC coefficients, which is understandable given the higher degree of the polynomials appearing in the definition of the
higher-order coefficients.
(a) LU-PC
(b) HG-PC
Figure 6.1: Relative error Ek (NV ) of the PC coefficients.
Figure 6.2: Magnitude of the PC coefficients |VeLU (k)| (solid line), |VeHG (k)| (dashed line).
The amplitudes of these PC components, denoted {VeLU (k)}k=0,...,19 for the LU-PC case
and {VeHG (k)}k=0,...,19 for the HG-PC case, are displayed in Fig. 6.2. As expected, both
sets of components have a common initial value equal to |E[Ve ]| = 206 mV. Although the
evolution of the components is not monotonic as a function of k, a decreasing trend is
observed. This decay rate is steeper in the case of the LU-PC, as can be seen e.g. from a
comparison between VeHG (14) = 1.06 mV and VeLU (14) = 6 µV.
The rate of decay of the PC coefficients also affects the accuracy of the variance deduced
from the PC representation. In the present case, the variance obtained directly by quadrature
with a 1% relative error equals var[Ve ] = 0.0116 V², and is taken as reference, while
the variance deduced from a PC representation of order k, with the terms {VePC (l)}l=0,...,k ,
is equal to $\mathrm{var}[V_e]_{PC}(k) = \sum_{l=1}^{k} |V_e^{PC}(l)|^2$. These two variances are compared
in Fig. 6.3, where the relative error is defined as
\[ \Delta \mathrm{var}[V_e]_{PC}(k) = \frac{\left|\mathrm{var}[V_e]_{PC}(k) - \mathrm{var}[V_e]\right|}{\mathrm{var}[V_e]}. \tag{6.18} \]
This figure confirms the analysis of Fig. 6.2 and highlights the faster convergence of the
variance deduced from the LU representation to the actual variance var[Ve ]. A 1% relative
error in the variance is reached by retaining 4 components in the LU-PC, as opposed to 9
terms needed for the HG-PC.
Figure 6.3: Accuracy of the variance: ∆var[Ve ]LU (k) (solid line), ∆var[Ve ]HG (k) (dashed line).
Next, the probability distributions of the PC approximations are compared to the
distribution of deterministic samples of Ve . To this end, 1000 samples of Ve are
computed without PC expansion by varying α in A. Similarly, 1000 samples of the PC
approximations are computed by varying the normalized random variable x in X . The
samples are then sorted to obtain the empirical pdfs of Re[Ve ] and Im[Ve ] as depicted in
Figs 6.4(a) and 6.4(b), respectively. These graphs highlight the close agreement between
the PC-based distributions and the deterministic empirical distribution. The LU-PC
pdf matches the deterministic empirical pdf both in its values and in its support. In the
HG-PC case however, deviations are observed e.g. for Im[Ve ] ∈ [0.104; 0.127] V.
Computing these 1000 samples takes 160 s in the absence of PC approximation, and
merely 2 s, once the PC representation of Ve has been obtained.
6.3 Varying wire under random illumination
The wire described by Eq. (6.12), with the random amplitude α uniformly distributed in
A = [a; b] = [−0.03; +0.03] m, is now irradiated by a random incident field E iβ . This
excitation E iβ has the same characteristics as in the previous section, except for its
direction of propagation, which is specified by the elevation angle θi = 45◦ , and a random
azimuth angle φi = β uniformly distributed in B = [0◦ ; 90◦ ]. The random input parameter
now consists of the two-dimensional vector γ = (α, β) ∈ A × B.
(a) Real part
(b) Imaginary part
Figure 6.4: Distribution of 1000 samples of Ve computed without PC transformation (histogram),
with a LU-PC (solid line), and with a HG-PC (dashed line).
Based on the approach described in Section 6.1.2, the bi-variate Legendre and Hermite
polynomials are constructed. The N∗-th order PC decomposition of Ve now reads
\[ V_e(\gamma) \approx \sum_{k_1,k_2=0}^{N_*} V_e^{PC}(k_1,k_2)\, \psi^2_{k_1,k_2}(x_1,x_2) = \sum_{k_1=0}^{N_*} \sum_{k_2=0}^{N_*} V_e^{PC}(k_1,k_2)\, \psi^1_{1,k_1}(x_1)\, \psi^1_{2,k_2}(x_2). \tag{6.19} \]
The first 10 × 10 PC coefficients of Ve are determined via a sparse-grid quadrature rule,
with a maximum relative error of 5%, in 675 s for the LU and HG bases. The convergence
of the quadrature rule, plotted in Fig. 6.5, shows the need for 3365 samples of Ve before
the target accuracy is met.
Figure 6.5: Highest relative error Emax (NV ) of the PC coefficients (LU-PC and HG-PC).
The magnitudes of VeLU and VeHG are presented in Figs 6.6(a) and 6.6(b), respectively.
As k1 and k2 increase, the coefficients VeLU (k1 , k2 ) fall off faster than VeHG (k1 , k2 ). This is
illustrated by comparing, for instance, |VeLU (5, 5)| = 0.461 mV to |VeHG (5, 5)| = 3.385 mV.
In both cases, the first coefficient equals |E[Ve ]| = 105 mV.
The variance computed directly by quadrature equals var[Ve ] = 0.0442 V², and is taken as
a reference to assess the accuracy of the variance deduced from a finite PC
representation containing k1 × k2 terms, denoted var[Ve ]PC (k1 , k2 ). This accuracy
is monitored through the relative error ∆var[Ve ]PC (k1 , k2 ), defined as in Eq. (6.18) and
plotted in Figs 6.7(a) and 6.7(b). The LU-PC leads to a more accurate variance, as
∆var[Ve ]LU (k1 , k2 ) ≤ 0.01 once k1 ≥ 2, as opposed to ∆var[Ve ]HG (k1 , k2 ) ≥ 0.03 for any
k1 , k2 ∈ {0, . . . , 9}. Nonetheless, for k1 ≥ 2, the relative error ∆var[Ve ]HG (k1 , k2 ) of the
HG-PC representation remains below 0.09, which is still appreciable.

(a) LU-PC: |VeLU (k1 , k2 )|
(b) HG-PC: |VeHG (k1 , k2 )|
Figure 6.6: Magnitude of the PC components of Ve : |VePC (k1 , k2 )| for k1 , k2 ∈ {0, . . . , 9}.

(a) LU-PC
(b) HG-PC
Figure 6.7: Accuracy of the variances ∆var[Ve ]PC (k1 , k2 ) vs. the numbers (k1 , k2 ) of terms in
the PC representation, k1 , k2 ∈ {0, . . . , 9}.
The resulting distributions of the samples are obtained by computing 10⁴ samples of Ve ,
both with and without PC transformations². When no PC projection is applied, the
evaluation of these samples requires approximately 27 minutes, much more than the 7 s
needed to compute the samples by employing the PC decomposition of Ve . The empirical
distributions of the samples are depicted in Figs 6.8(a) and 6.8(b).

²Without PC expansions, the samples are computed by varying γ = (α, β) in G = A × B. With the
PC approximations, the normalized inputs x = (x1 , x2 ) are varied in X = X1 × X2 .
(a) Real part
(b) Imaginary part
Figure 6.8: Distribution of 10⁴ samples of Ve computed without PC transformation (histogram),
with a LU-PC (solid line), and with a HG-PC (dashed line).
The agreement between the PC-based distributions and the deterministic-sample-based
distribution is clearly visible, especially in the case of the LU-PC. For the HG-PC, in
spite of a good overall approximation, the support of the distribution of Re[Ve ] is slightly
shifted compared to that of the deterministic samples, and an overshoot can be observed
around 0.19 V on the HG-PC distribution of Im[Ve ]. These differences again indicate
that more HG-PC coefficients would be needed in the PC approximation.
PC pdfs versus the distribution of the samples of the SG quadrature rule
Given the non-negligible number of samples required to compute the PC coefficients via
the sparse-grid quadrature rule (NV =3325), it is worth comparing the empirical pdf of
these samples, which is denoted f SG , to the pdfs obtained from the LU- and HG-PC
representations, as well as from a systematic run of Ve without any PC transformation.
To do so, the LU and HG PC approximations are employed to compute 3325 deterministic
samples that are then sorted to deduce the empirical distributions corresponding to |Ve |.
The result is shown in Fig. 6.9.
Figure 6.9: Distribution of |Ve | obtained from NV = 3325 values of Ve : f SG (circled line), LU-PC
pdf (solid line), HG-PC pdf (dashed line) and deterministic samples without PC transformation
(histogram).
Unlike the HG-PC pdf, the support of f SG coincides with the support of the pdf of the
deterministic samples, which serves as a reference. Nonetheless, f SG does not reproduce
the shape of the reference pdf as faithfully as the LU-PC pdf does. The peaks
in the graph of f SG are caused by the very definition of the sparse grids, which concentrate
samples in preferential regions of the integration domain. Hence,
the sparse-grid samples may not be used directly to deduce the probability distribution of the
observable that is sought. Instead, one should use the sparse-grid samples together
with suitably defined interpolating polynomials to then compute the deterministic samples.
This problem, however, pertains to the field of sparse-grid interpolation, which is beyond
the scope of this dissertation: in a quadrature study the objective is to ensure the accuracy
with respect to an L¹-norm, whereas the aim of an interpolation approach is to guarantee
accuracy with respect to an L∞-norm.
6.4 Perturbation and polynomial-chaos methods
At this stage, the perturbation and polynomial-chaos methods can be compared.
First, the transformations applied in the perturbation method are intrusive, based on local
Taylor and Neumann expansions, whereas the polynomial-chaos method is non-intrusive
and hinges on projections via orthogonal polynomials defined on the entire domain of the
random variables.
The range of validity of the perturbation approach depends critically on the
smoothness of the observable as a function of its random inputs, which often limits
the range of good accuracy to a local region around the reference input used for the
perturbation expansions. By contrast, the PC method yields globally valid results,
the quality of which is primarily dictated by the suitability of the PC system employed to
project the observable.
When analyzing the computation time, we need to be aware that both methods are
organized around a pre-computation of the numerically costly deterministic terms. In
the perturbation method, this corresponds to a duration T0−pert, which increases with
the dimension of the stochastic space and the order of the expansions; in the PC
method, the duration T0−chaos increases with the dimension of the probability space, the
truncation order of the PC representation, and the efficiency of the quadrature rule
employed to compute the PC components. Hence, in general, T0−pert < T0−chaos.
After these pre-computations, evaluating Ve simply amounts to re-using the pre-calculated
terms, together with polynomials that are cheap to evaluate. This leads to a duration
Tα in the perturbation method, and Tγ in the PC method, which are comparable and
negligible with respect to the time TV,1 needed for a single evaluation of Ve without any
transformation, i.e. Tα ≈ Tγ ≪ TV,1 . If NV samples of the voltage are needed, the
total computation time will be Tdir = NV TV,1 without any transformation, as opposed to
Tpert = T0−pert + NV Tα with the perturbation approach, and Tchaos = T0−chaos + NV Tγ in
the case of the polynomial chaos, as sketched in Fig. 6.10.
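Equating the transformed and direct costs gives the break-even sample count NV ≈ T0 /(TV,1 − Tα); a minimal sketch with hypothetical timings, not measurements from the thesis:

```python
import math

def break_even(t0, t_v1, t_cheap):
    """Smallest number of samples NV for which pre-computation (t0) plus cheap
    per-sample evaluations (t_cheap) beats direct evaluations (t_v1 each).
    All timing values are hypothetical placeholders."""
    assert t_v1 > t_cheap
    return math.ceil(t0 / (t_v1 - t_cheap))
```

For instance, a 15 s pre-computation amortized against 16 ms direct and 0.2 ms cheap evaluations pays off after 950 samples; a smaller pre-computation yields the very low break-even counts observed in Chapter 5.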
Figure 6.10: Computation time after NV evaluations of Ve : 1st-order perturbation method (solid
line), polynomial chaos (dotted line), direct method (dashed line).
Concerning the statistical post-processing, the perturbation rationale, presented for the
case where only the geometry is random, leads to an explicit link between the statistical
moments of Ve and those of the geometry. These statistical moments of Ve are particularly
accurate for smoothly varying geometries. In the PC method, which can handle random
geometries and incident fields varying in a wider dynamic range, the average and the
variance of Ve can be obtained directly from the PC spectrum of the voltage.
Finally, the dimension of the random input mainly affects the pre-computation stage.
Given a d-dimensional random input vector, d first-order and d² second-order derivatives
of the observable need to be determined in the perturbation method. On the other hand,
in the polynomial-chaos approach, the dimension d will affect both the computation of the
PC coefficients (via a d-dimensional quadrature rule), and the total number of monomial
terms retained in a PC representation of order N∗ , which equals $(N_* + d)!/(N_*!\, d!)$.
6.5 Conclusion
This chapter has demonstrated the possibility of expressing the observable Ve in terms of
its random input via families of orthogonal polynomials. The formulation thus obtained
proved to be statistically advantageous by allowing for a direct evaluation of the average
and the variance of Ve from the coefficients of the PC representation. Numerically, the
separation of variables granted by the PC method improves the computational efficiency
of the stochastic approach, which enables a rapid deterministic sweep through which the
probability distribution of Ve can be reconstituted empirically.
In the examples presented in this chapter, a Legendre-uniform PC and a Hermite-Gauss
PC have been applied, with the best results being obtained for the LU-PC.
The already appreciable performance of the HG-PC could be improved by computing
more PC components of Ve , although this translates into costlier pre-computations. This
pre-computation effort is mainly dictated by the slower convergence rate of the quadrature
approximation of higher-order coefficients, which however have decreasing amplitudes.
It is tempting, at first sight, to explain these differences in performance by the fact that
in the LU-PC, the standard variable x and the initial random parameter γ had the same
type of distribution, viz. uniform, albeit on a different support. However, the higher
efficiency of the LU-PC compared to the HG-PC is essentially caused by the fact that
the LU-PC system is better suited to describing the randomness of Ve for the examples
discussed in Sections 6.2 and 6.3, i.e. it produces a faster-decaying PC spectrum of Ve .
This also raises the more general question of the a priori choice of the optimal PC
system, for which the effect of the truncation of the PC representation would be limited
by the rapid decay of the PC coefficients. This decay rate depends on the suitability of
the PC system employed to analyze the probability distribution PVe of Ve . Ideally, the PC
basis should be constructed by using PVe , which is unfortunately not available a priori.
Hence, the choice of the optimal PC system cannot be decided beforehand, and a trial-and-error procedure needs to be applied to seek the best PC system.
Finally, it should be noted that, in this chapter, the PC approach has been employed in a
non-intrusive manner by directly projecting the observable Ve. More intrusive approaches
could also be followed, in which each of the terms defining Ve, viz. the induced current J_α
and the incident field E^i_β, is projected individually on a PC basis. The PC representation
of Ve would then be deduced by combining the PC representations of J_α and E^i_β. Examples
of such approaches can be found in a large variety of domains [112–116].
Chapter 7
Semi-intrusive characterization of stochastic observables
The polynomial-chaos methods perform a separation of variables between the stochastic
and deterministic parts of the observable Ve , which eases the statistical post-processing,
as shown in Section 6.1.4. An alternative strategy consists in separating the randomness
of Ve according to its origin. Given an electronic system that occupies the volume Ωα
bounded by the surface ∂Ωα, the induced voltage Ve is defined as the reaction integral
$$V_e(\gamma) = -\left\langle \mathbf{J}_\alpha, \mathbf{E}^i_\beta \right\rangle_{\partial\Omega_\alpha} = -\int_{\mathbf{r}\in\partial\Omega_\alpha} \mathbf{J}_\alpha(\mathbf{r}) \cdot \mathbf{E}^i_\beta(\mathbf{r})\,\mathrm{d}S, \qquad (7.1)$$
where γ = (α, β) gathers the parameters α of the geometry, and β of the incident field.
The normalization factor 1/I0 present in Eq. (2.36) is omitted for notational convenience.
In this representation, the effect of the geometry on Ve is entirely expressed through the
current distribution J α, i.e. its value and its support ∂Ωα, whereas the incident field E iβ
accounts for the effect of external electromagnetic sources.
It is hence natural to aim for a statistical characterization of J α and E iβ separately,
and then to combine the thus obtained statistical information to deduce the statistical
moments of Ve . Such a modus operandi, referred to as a “semi-intrusive” approach, is
presented in this chapter.
106
7.1
Semi-intrusive characterization of stochastic observables
Statement of the problem
Semi-intrusive methods have already been applied to study interactions involving random
incident fields impinging on deterministic objects [27, 28, 117]. In these cases, the
randomness affects only E^i_β, and the first two statistical moments of Ve can be readily
determined from the average E[E^i_β] and the correlation C[E^i_β] as

$$\mathrm{E}[V_e] = -\left\langle \mathbf{J}_\alpha, \mathrm{E}[\mathbf{E}^i_\beta] \right\rangle_{\partial\Omega_\alpha} = -\int_{\mathbf{r}\in\partial\Omega_\alpha} \mathbf{J}_\alpha(\mathbf{r}) \cdot \mathrm{E}[\mathbf{E}^i_\beta](\mathbf{r})\,\mathrm{d}S, \qquad (7.2a)$$

$$\mathrm{E}\!\left[|V_e|^2\right] = \left\langle \mathbf{J}_\alpha, C[\mathbf{E}^i_\beta], \mathbf{J}_\alpha^* \right\rangle_{\partial\Omega_\alpha} = \int_{\mathbf{r}\in\partial\Omega_\alpha}\int_{\mathbf{r}'\in\partial\Omega_\alpha} \mathbf{J}_\alpha(\mathbf{r})^t \cdot \mathrm{E}\!\left[\mathbf{E}^i_\beta(\mathbf{r}) \otimes \mathbf{E}^i_\beta(\mathbf{r}')^*\right] \cdot \mathbf{J}_\alpha(\mathbf{r}')^*\,\mathrm{d}S'\,\mathrm{d}S, \qquad (7.2b)$$
with the notation A ⊗ B = A · B t , where B t stands for the transpose of the vector B,
and B ∗ for its complex conjugate. However, when conversely E iβ is deterministic and
the geometry Ωα varies randomly, the situation becomes more intricate and implies the
following difficulties:
1. The randomness of the surface ∂Ω_α, which represents the support of J_α and of the
integral in Eq. (7.1).
2. The randomness of the vector J α(r) ∈ C3 for each r ∈ ∂Ωα.
3. Although deterministic, E iβ must be evaluated at a point r ∈ ∂Ωα, which is random.
As a consequence of these observations, formulas that link the statistics of Ve to those of
J_α can only be established in a distributional sense as [34, 52]

$$\mathrm{E}[V_e] = -\left\langle \mathrm{E}[\mathbf{J}_\alpha], \mathbf{E}^i_\beta \right\rangle_{\mathbb{R}^3} = -\mathrm{E}\!\left[\left\langle \mathbf{J}_\alpha, \mathbf{E}^i_\beta \right\rangle_{\partial\Omega_\alpha}\right], \qquad (7.3a)$$

$$\mathrm{E}\!\left[|V_e|^2\right] = \left\langle \mathbf{E}^i_\beta, C[\mathbf{J}_\alpha], \mathbf{E}^{i*}_\beta \right\rangle_{\mathbb{R}^3} = \mathrm{E}\!\left[\left\langle \mathbf{E}^i_\beta, \mathbf{J}_\alpha \otimes \mathbf{J}_\alpha^*, \mathbf{E}^{i*}_\beta \right\rangle_{\partial\Omega_\alpha}\right]. \qquad (7.3b)$$
These formulas are essentially implicit, unlike Eqs (7.2). Moreover, explicit expressions
of the type $\mathrm{E}[V_e] = -\langle \mathrm{E}[\mathbf{J}_\alpha], \mathbf{E}^i_\beta\rangle_{\mathrm{E}[\partial\Omega_\alpha]}$ and $\mathrm{E}[|V_e|^2] = \langle \mathbf{E}^i_\beta, \mathrm{E}[\mathbf{J}_\alpha \otimes \mathbf{J}_\alpha^*], \mathbf{E}^{i*}_\beta\rangle_{\mathrm{E}[\partial\Omega_\alpha]}$
are generally erroneous: by testing the statistical tensors of J_α with the trace of E^i_β on
the average geometry E[∂Ω_α], one neglects the variations of the incident field over all the
possible surfaces ∂Ω_α, as α assumes different values in A.
The first step in the semi-intrusive method that we are proposing is to reformulate Ve so
as to allow for an explicit evaluation of its statistical moments in terms of those of J α.
The reasoning will first be conducted for a random geometry subjected to a deterministic
excitation, after which the case of a random excitation will be tackled as an extension.
7.2 Reformulation of Ve
7.2
107
Reformulation of Ve
The stochastic support of the integral in Eq. (7.1) is handled by a change of variables.
To this end, the parameterisation of the geometry by the fixed domain D, established in
Section 2.2.1, is recalled
$$\mu_\alpha : \partial D \ni \mathbf{r}_D \longmapsto \mathbf{r}_\alpha = \mu_\alpha(\mathbf{r}_D) \in \partial\Omega_\alpha, \qquad (7.4)$$
where the mapping µα is smooth and corresponds to a C 2 -diffeomorphism [42]. Including
this mapping in the definition of Ve leads to
$$V_e(\gamma) = -\int_{\mathbf{r}_D\in\partial D} \mathbf{j}_\alpha(\mathbf{r}_D) \cdot \mathbf{E}^i_\beta(\mu_\alpha(\mathbf{r}_D))\,\mathrm{d}S, \qquad (7.5)$$

$$\text{with } \mathbf{j}_\alpha(\mathbf{r}_D) = |\mu'_\alpha(\mathbf{r}_D)|\,\mathbf{J}_\alpha(\mu_\alpha(\mathbf{r}_D)), \qquad (7.6)$$
where |µ′α(r D )| is the Jacobian, i.e. the determinant of the Jacobian matrix of µα.
After this modification, the problem of the evaluation of E iβ at the random point µα(r D )
arises. This issue has already been encountered and discussed in the perturbation method
(Section 5.2.2). A natural way of tackling it is to apply a Taylor expansion to E^i_β
around the reference surface ∂D. A zeroth-order expansion would replace E^i_β(µ_α(r_D))
by E^i_β(r_D) in Eq. (7.5), thereby neglecting the phase variation of E^i_β between
the positions r_D and µ_α(r_D). Unfortunately, this manipulation generally produces a
physically significant error in the resulting approximation of Ve. An alternative resides
in a first-order Taylor expansion, which includes the effect of the phase variation of E iβ .
Nonetheless, as pointed out in Chapter 5 and in [48, 52], such an expansion introduces
additional terms, viz. a dyadic term and the contribution of the magnetic field, so that
the expression of Ve becomes more intricate.
Rather than aiming for these low-order Taylor expansions that hold for general
electromagnetic fields, provided they are sufficiently smooth, we focus on a particular,
yet useful, type of incident field in the form of a plane wave. This type of field is
commonly used in electromagnetic theory to approximate the field radiated by
electromagnetic sources located far from the receiving device [37]. In addition, plane
waves can be employed as a basis to decompose general electromagnetic fields, via their
angular plane-wave spectrum [118]. Hence, let $\mathbf{E}^i_\beta(\mathbf{r}) = \mathbf{E}_0\, e^{-j\mathbf{k}_i\cdot\mathbf{r}}$ be a fixed plane
wave, with a polarization $\mathbf{E}_0 \in \mathbb{R}^3$ and a wavevector $\mathbf{k}_i \in \mathbb{R}^3$ that are given, such that
$\mathbf{E}_0 \cdot \mathbf{k}_i = 0$. It then becomes possible to express E^i_β(µ_α(r_D)) exactly in terms of E^i_β(r_D)

$$\mathbf{E}^i_\beta(\mu_\alpha(\mathbf{r}_D)) = \psi(\mathbf{k}_i, \mu_\alpha(\mathbf{r}_D), \mathbf{r}_D)\,\mathbf{E}^i_\beta(\mathbf{r}_D), \quad \text{where } \psi(\mathbf{k}_i, \mathbf{r}, \mathbf{r}_0) = e^{-j\mathbf{k}_i\cdot(\mathbf{r}-\mathbf{r}_0)}. \qquad (7.7)$$
Grouping ψ(k_i, µ_α(r_D), r_D) with j_α leads to the definition of the distribution i_{α,ki} as

$$\mathbf{i}_{\alpha,k_i}(\mathbf{r}_D) = \psi(\mathbf{k}_i, \mu_\alpha(\mathbf{r}_D), \mathbf{r}_D)\,\mathbf{j}_\alpha(\mathbf{r}_D). \qquad (7.8)$$
By construction, iα,ki will depend on the wavevector ki of the incident field through the
multiplicative factor ψ(ki , µα(r D ), r D ). This dependence marks a difference with J α.
The distribution iα,ki also differs from the current density J γ (Eq. 2.28), induced on the
surface ∂Ωα by E iβ, as iα,ki is a scaled version of J α (Eq. 2.41) which is the solution to
a boundary-value problem that is independent of E iβ, whereas J γ is computed by solving
an EFIE having E iβ as excitation.
The newly introduced quantities modify the definition of the voltage to

$$V_e(\gamma) = -\int_{\mathbf{r}_D\in\partial D} \mathbf{i}_{\alpha,k_i}(\mathbf{r}_D) \cdot \mathbf{E}^i_\beta(\mathbf{r}_D)\,\mathrm{d}S = -\left\langle \mathbf{i}_{\alpha,k_i}, \mathbf{E}^i_\beta \right\rangle_{\partial D}. \qquad (7.9)$$
This equation has the double advantage that the fixed surface ∂D is involved as the domain
of integration, and that the functional iα,ki is tested by the trace of E iβ on the deterministic
surface ∂D. An additional strength of this formula resides in its exactness. The only
assumption made is that of a plane-wave excitation with a fixed direction of incidence.
Consequently, the reasoning exposed thus far will hold regardless of the amplitude and
the polarization of the incident plane wave, which could also be random1 .
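The exactness of the phase-factor identity in Eq. (7.7) can be checked numerically. The sketch below (not part of the thesis; the mapping µ_α and the numerical values are arbitrary illustrative choices) verifies that, for a plane wave, evaluating the field at the displaced point µ_α(r_D) is the same as multiplying its trace at r_D by ψ:

```python
import numpy as np

# Sketch: numerical check of Eq. (7.7), E_i(mu(r_D)) = psi * E_i(r_D),
# for a fixed plane wave. All numerical values are illustrative.
E0 = np.array([1.0, 0.0, 0.0])                   # polarization (V/m)
k = 2 * np.pi * 5e8 / 3e8                        # wavenumber at 500 MHz (rad/m)
ki = k * np.array([0.0, np.cos(np.pi / 4), -np.sin(np.pi / 4)])  # E0 . ki = 0

def E_inc(r):
    """Plane wave E_i(r) = E0 * exp(-j ki . r)."""
    return E0 * np.exp(-1j * np.dot(ki, r))

def psi(r, r0):
    """Phase factor psi(ki, r, r0) = exp(-j ki . (r - r0))."""
    return np.exp(-1j * np.dot(ki, r - r0))

r_D = np.array([0.0, 0.3, 0.05])                 # point on the reference surface
mu_rD = r_D + np.array([0.02, 0.0, 0.01])        # its image on the perturbed surface

assert np.allclose(E_inc(mu_rD), psi(mu_rD, r_D) * E_inc(r_D))  # Eq. (7.7)
```

The identity holds to machine precision for any displacement, which is precisely why the reformulation in Eq. (7.9) introduces no approximation error.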
7.3 Statistics of Ve in terms of the statistics of i_{α,ki}
With Eq. (7.9) at hand and given the linearity of the expectation operator E [·] with
respect to the duality product h·, ·i∂D , it becomes possible to express E [Ve ] and var [Ve ]
through an explicit tensor notation, similar to Eqs (7.2). For instance, E [Ve ] is given by
$$\mathrm{E}[V_e] = -\left\langle \mathrm{E}[\mathbf{i}_{\alpha,k_i}], \mathbf{E}^i_\beta \right\rangle_{\partial D} = -\int_{\mathbf{r}_D\in\partial D} \mathrm{E}[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D) \cdot \mathbf{E}^i_\beta(\mathbf{r}_D)\,\mathrm{d}S, \qquad (7.10)$$
with E [iα,ki ] (r D ) = E [iα,ki (rD )]. While the support of E [iα,ki ] corresponds to the known
surface ∂D, the direction of the vector values of E [iα,ki ] (r D ) results from the summation
of the currents iα,ki (r D ) obtained by varying α in its domain A.
¹ With the aid of Plancherel's theorem applied to the definition of Ve in Eq. (7.1), we have also derived a representation of Ve which decouples the effect of the random geometry from the incident field [119].
On the other hand, the variance of Ve can be written as

$$\mathrm{var}[V_e] = \mathrm{E}\!\left[|V_e|^2\right] - \left|\mathrm{E}[V_e]\right|^2 = \left\langle \mathbf{E}^i_\beta, V[\mathbf{i}_{\alpha,k_i}], \mathbf{E}^{i*}_\beta \right\rangle_{\partial D} = \int_{\mathbf{r}_D\in\partial D}\int_{\mathbf{r}'_D\in\partial D} \mathbf{E}^i_\beta(\mathbf{r}_D)^t \cdot V[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D, \mathbf{r}'_D) \cdot \mathbf{E}^i_\beta(\mathbf{r}'_D)^*\,\mathrm{d}S'\,\mathrm{d}S, \qquad (7.11)$$

in which the covariance tensor V[i_{α,ki}], for any r_D and r′_D in ∂D, is given by

$$V[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D, \mathbf{r}'_D) = \mathrm{E}\!\left[\left\{\mathbf{i}_{\alpha,k_i}(\mathbf{r}_D) - \mathrm{E}[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D)\right\} \otimes \left\{\mathbf{i}_{\alpha,k_i}(\mathbf{r}'_D) - \mathrm{E}[\mathbf{i}_{\alpha,k_i}](\mathbf{r}'_D)\right\}^*\right]. \qquad (7.12)$$
Equations (7.10) and (7.11) indicate that, once the tensors E [iα,ki ] and V [iα,ki ] have been
computed, the average and the variance of Ve can be determined for any incident plane
wave impinging with a propagation vector ki , regardless of its amplitude or polarization.
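In the discrete setting of the next section, this reuse amounts to a dot product and a quadratic form. The sketch below uses synthetic data (not the thesis solver) to illustrate how stored tensors E[i_{α,ki}] and V[i_{α,ki}] yield E[Ve] and var[Ve] for any sampled incident field along k_i, via Eqs (7.10) and (7.11):

```python
import numpy as np

# Sketch with synthetic data: once E[i] (length-3N vector) and V[i]
# (3N x 3N Hermitian PSD matrix) are stored, the first two moments of Ve
# follow for ANY plane wave impinging along ki.
rng = np.random.default_rng(0)
N = 12                                            # stands for 3 * N_elts
i_mean = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # E[i]
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Cov = A @ A.conj().T                              # V[i], Hermitian PSD

def moments_Ve(E_field):
    """E[Ve] via Eq. (7.10) and var[Ve] via Eq. (7.11), discrete form."""
    mean_Ve = -np.dot(i_mean, E_field)                 # -<E[i], E^i>_{S_D}
    var_Ve = np.real(E_field @ Cov @ E_field.conj())   # <E^i, V[i], E^i*>_{S_D}
    return mean_Ve, var_Ve

E1 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
m1, v1 = moments_Ve(E1)
assert v1 >= 0.0                                  # variance is non-negative
# Scaling the wave by a complex factor c scales E[Ve] by c, var[Ve] by |c|^2
c = 2.0 - 1.0j
m2, v2 = moments_Ve(c * E1)
assert np.isclose(m2, c * m1) and np.isclose(v2, abs(c) ** 2 * v1)
```

The final assertion mirrors the statement in the text: the stored tensors remain valid regardless of the amplitude and polarization of the plane wave.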
7.4 Discrete representation
By definition, iα,ki is tangential to ∂Ωα, a property that it inherits from J α. It will thus
generally have a random direction and components in the x, y and z directions. To make
this feature explicit, the vector i_{α,ki}(r_D) is identified in terms of its Cartesian components
i_{α,ki}(r_D) = [i^x_{α,ki}(r_D), i^y_{α,ki}(r_D), i^z_{α,ki}(r_D)]. The incident field E^i_β is similarly written
as E^i_β(r_D) = [E^{i,x}_β(r_D), E^{i,y}_β(r_D), E^{i,z}_β(r_D)]. These explicit notations will allow for an
unambiguous averaging of the components of the vectors2 .
For numerical implementation purposes, ∂D is meshed into Nelts elementary cells, the
centroids of which constitute the set SD = {cl ∈ ∂D, l = 1, . . . , Nelts }. The image of
SD by µα provides the associated sampling points on the surface ∂Ωα. The evaluation
of iα,ki over SD amounts to the consideration of the image set iα,ki (SD ), which contains
Nelts vectors of three components each. We choose to organize this set by writing the x
components of the vectors in iα,ki (SD ) first, followed by their y and z components. As a
result, the following vector is obtained

$$\mathbf{i}_{\alpha,k_i} = \mathbf{i}_{\alpha,k_i}(S_D) = \left[i^x_{\alpha,k_i}(S_D),\, i^y_{\alpha,k_i}(S_D),\, i^z_{\alpha,k_i}(S_D)\right], \qquad (7.13)$$

where $i^p_{\alpha,k_i}(S_D) = \left[i^p_{\alpha,k_i}(\mathbf{c}_1), \ldots, i^p_{\alpha,k_i}(\mathbf{c}_{N_{\mathrm{elts}}})\right]$, for $p \in \{x,y,z\}$. Likewise, the discrete representation of E^i_β corresponds to the vector

$$\mathbf{E}^i_\beta = \mathbf{E}^i_\beta(S_D) = \left[E^{i,x}_\beta(S_D),\, E^{i,y}_\beta(S_D),\, E^{i,z}_\beta(S_D)\right], \qquad (7.14)$$
² It is also possible to choose a system of local Cartesian coordinates associated with the unperturbed reference geometry ∂D.
where $E^{i,p}_\beta(S_D) = \left[E^{i,p}_\beta(\mathbf{c}_1), \ldots, E^{i,p}_\beta(\mathbf{c}_{N_{\mathrm{elts}}})\right]$, for $p \in \{x,y,z\}$. The corresponding definition of Ve then reads

$$V_e(\gamma) = -\sum_{p\in\{x,y,z\}} \sum_{l=1}^{N_{\mathrm{elts}}} i^p_{\alpha,k_i}(\mathbf{c}_l)\, E^{i,p}_\beta(\mathbf{c}_l) = -\left\langle \mathbf{i}_{\alpha,k_i}, \mathbf{E}^i_\beta \right\rangle_{S_D}. \qquad (7.15)$$
The discrete counterparts of Eqs (7.10) and (7.11) follow from Eq. (7.15) and involve the
statistical tensors of i_{α,ki}, i.e.

$$\mathrm{E}[\mathbf{i}_{\alpha,k_i}] = \left[\mathrm{E}[i^x_{\alpha,k_i}(S_D)],\, \mathrm{E}[i^y_{\alpha,k_i}(S_D)],\, \mathrm{E}[i^z_{\alpha,k_i}(S_D)]\right], \qquad (7.16)$$

$$V[\mathbf{i}_{\alpha,k_i}] = \begin{pmatrix} V^{x,x} & V^{x,y} & V^{x,z} \\ V^{y,x} & V^{y,y} & V^{y,z} \\ V^{z,x} & V^{z,y} & V^{z,z} \end{pmatrix}, \qquad (7.17)$$

where, for $p, q \in \{x,y,z\}$, $V^{p,q} = \mathrm{E}\!\left[i^p_{\alpha,k_i}(S_D) \otimes i^q_{\alpha,k_i}(S_D)^*\right] - \mathrm{E}[i^p_{\alpha,k_i}(S_D)] \otimes \mathrm{E}[i^q_{\alpha,k_i}(S_D)]^*$.
The elements of the covariance matrices V^{p,q} measure the statistical correlation
between the p-component of i_{α,ki}(r_D) and the q-component of i_{α,ki}(r′_D), for the different
pairs of points (r_D, r′_D) belonging to S_D. Therefore, V[i_{α,ki}] fulfills a double statistical
role, viz. that of an isotropy covariance matrix, by comparing the different components
of i_{α,ki}, and that of a spatial covariance matrix, by assessing the correlation of the
randomness of i_{α,ki} between different pairs of points along the surface ∂D.
From a numerical perspective, the previous reformulations lead to an average vector
E[iα,ki ] ∈ C3Nelts and to a covariance matrix V[iα,ki ] ∈ C3Nelts ×3Nelts , the dimensions of
which indicate the number of averages that need to be computed by quadrature. This
numerical cost can be lowered by exploiting the Hermitian nature of V[iα,ki ]. Fortunately,
E[iα,ki ] and V[iα,ki ] must be computed only once and can then be stored and reused to
determine the average and variance of Ve via Eqs (7.10) and (7.11), for whichever type of
plane wave, impinging along the direction of ki .
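The pre-computation step can be sketched as follows, with a hypothetical stand-in `i_of_alpha` for the EFIE-based evaluation of i_{α,ki}(S_D) (the toy current model and the sample counts are illustrative assumptions, not the thesis solver):

```python
import numpy as np

# Sketch of the pre-computation of Section 7.4: E[i] and V[i] are assembled
# once by averaging over samples of the geometrical parameter alpha, then
# stored and reused for any plane wave along ki.
rng = np.random.default_rng(1)
N = 9                                           # stands for 3 * N_elts

def i_of_alpha(alpha):
    """Hypothetical surrogate for the current i_{alpha,ki}(S_D)."""
    return (1.0 + alpha) * np.exp(-1j * alpha * np.arange(N))

alphas = rng.uniform(-0.03, 0.03, size=500)     # alpha ~ U([-0.03, 0.03])
samples = np.array([i_of_alpha(a) for a in alphas])

i_mean = samples.mean(axis=0)                   # E[i_{alpha,ki}]
centered = samples - i_mean
Cov = centered.T @ centered.conj() / len(alphas)  # V[i_{alpha,ki}], Eq. (7.12)

assert np.allclose(Cov, Cov.conj().T)             # Hermitian, as noted above
assert np.all(np.linalg.eigvalsh(Cov) > -1e-12)   # positive semi-definite
```

In practice the averages would be computed by a quadrature rule over α rather than by random sampling, but the assembled tensors play the same role.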
7.5 Spectral decomposition of the covariance of i_{α,ki}
If the Hermitian covariance tensor V[i_{α,ki}](r_D, r′_D), for r_D and r′_D in ∂D, is jointly
continuous in r_D and r′_D, then it can be regarded as the kernel of a positive semi-definite
linear operator acting on L²(∂D, C³). Mercer's theorem yields a spectral decomposition
of V[i_{α,ki}] by ascertaining the existence of an orthonormal basis of eigenfunctions of
V[i_{α,ki}] associated with non-negative eigenvalues [120, 121]. For a discrete covariance
matrix V[i_{α,ki}] with dimensions 3N_elts × 3N_elts, this theorem guarantees the existence of
a basis of $3N_{\mathrm{elts}}$ mutually orthogonal eigenvectors $\mathbf{u}_1, \ldots, \mathbf{u}_{3N_{\mathrm{elts}}}$, and $3N_{\mathrm{elts}}$ non-negative
eigenvalues $1 \geq \lambda_1 \geq \ldots \geq \lambda_{3N_{\mathrm{elts}}} \geq 0$ with $\sum_{l=1}^{3N_{\mathrm{elts}}} \lambda_l = 1$, such that V[i_{α,ki}] can be
represented as

$$V[\mathbf{i}_{\alpha,k_i}](m,n) = \lambda_{\mathrm{tot}} \sum_{l=1}^{3N_{\mathrm{elts}}} \lambda_l\, \mathbf{u}_l(m) \otimes \mathbf{u}_l(n)^*, \quad \text{for } m, n = 1, \ldots, 3N_{\mathrm{elts}}, \qquad (7.18)$$
where λtot equals the trace of V[iα,ki ]. In practice, the representation in Eq. (7.18) is
obtained by performing an eigenvalue decomposition (EVD) of V[iα,ki ]. Owing to the
orthogonality of the eigenvectors and the ordering of the eigenvalues, Eq. (7.18) yields a
hierarchical decomposition of V[iα,ki ]. The rapid decay rate of the eigenvalues λl , in terms
of $l$, allows for a truncation of the sum in Eq. (7.18) at a lower value $N_{\mathrm{red}} \ll 3N_{\mathrm{elts}}$. The
closer $\sum_{l=1}^{N_{\mathrm{red}}} \lambda_l$ is to one, the better the quality of the truncated approximation.
For any plane wave E^i_β incident along the direction k_i, var[Ve] can be approximated by
substituting Eq. (7.18) in Eq. (7.11)

$$\mathrm{var}[V_e] = \left\langle \mathbf{E}^i_\beta, V[\mathbf{i}_{\alpha,k_i}], \mathbf{E}^{i*}_\beta \right\rangle_{S_D} = \lambda_{\mathrm{tot}} \sum_{l=1}^{3N_{\mathrm{elts}}} \lambda_l |V_l|^2 \approx \lambda_{\mathrm{tot}} \sum_{l=1}^{N_{\mathrm{red}}} \lambda_l |V_l|^2, \qquad (7.19)$$

where the terms $V_l = \langle \mathbf{u}_l, \mathbf{E}^i_\beta \rangle_{S_D}$ are deterministic.
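The truncation in Eq. (7.19) can be sketched numerically. The example below (synthetic low-rank covariance, not thesis data; a Hermitian inner-product convention is assumed for the discrete duality products) shows that after an EVD, the dominant eigenpairs alone recover the full variance:

```python
import numpy as np

# Sketch of Eqs (7.18)-(7.19) on a synthetic Hermitian covariance matrix
# of low rank: the variance for any incident field is recovered from only
# the dominant eigenpairs.
rng = np.random.default_rng(2)
n = 30
B = rng.standard_normal((n, 3)) + 1j * rng.standard_normal((n, 3))
Cov = B @ B.conj().T                     # Hermitian PSD of rank 3, like V[i]

lam_tot = np.real(np.trace(Cov))         # lambda_tot = trace of V[i]
w, U = np.linalg.eigh(Cov)               # ascending eigenvalues
w, U = w[::-1] / lam_tot, U[:, ::-1]     # normalized, descending: sum(w) = 1

E_field = rng.standard_normal(n) + 1j * rng.standard_normal(n)
var_full = np.real(E_field @ Cov @ E_field.conj())    # Eq. (7.11), discrete

N_red = 3                                # eigenvalues beyond the rank vanish
Vl = E_field @ U[:, :N_red]              # V_l = <u_l, E^i>_{S_D}
var_trunc = lam_tot * np.sum(w[:N_red] * np.abs(Vl) ** 2)   # Eq. (7.19)
assert np.isclose(var_full, var_trunc)
```

Here the truncation is exact because the synthetic matrix has rank 3; for a general V[i_{α,ki}] the residual is controlled by the discarded eigenvalues, as stated above.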
7.6 Example of a transversely varying thin wire
The method developed thus far is now illustrated by considering a thin wire, in free space,
described by

$$x_\alpha(y) = \alpha_1 \sin[\pi(y - y_m)] \quad \text{and} \quad z_\alpha(y) = 0.05 + \alpha_2 \sin[\pi(y - y_m)], \qquad (7.20)$$
where y ∈ [ym ; yM ]. The coefficients α1 and α2 are uniformly distributed in the interval
A1 = A2 = [−0.03; +0.03] m. The wire is meshed into Nelts = 124 segments. Thus, the
surface ∂D (resp. ∂Ωα) corresponds to the mantle of the wire D (resp. Ωα). Owing to the
thin-wire approximation, the different currents that appear from now on will correspond to
line currents (in Am−1 ). The incident field is a 500 MHz plane wave with an electric field
of amplitude 1 Vm−1 , and a propagation direction given by θi = 45◦ and φi = 45◦ , in polar
coordinates. For this test case, where α1 and α2 are the random inputs, the computation
of the average and covariance tensors is carried out by a sparse-grid quadrature rule, with
a maximum relative error of 1% with respect to the L2 –norm of the statistical tensors
of iα,ki .
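Sampling realizations of the geometry in Eq. (7.20) can be sketched as follows. The span [y_m, y_M] = [0, 1] m is an assumption consistent with the 1 m horizontal wire mentioned below; the draws of α₁, α₂ follow the stated uniform distribution:

```python
import numpy as np

# Sketch: sampling the random wire centerline of Eq. (7.20);
# alpha_1, alpha_2 ~ U([-0.03, 0.03]) m. The span [0, 1] m is an assumption.
rng = np.random.default_rng(5)
ym, yM = 0.0, 1.0
y = np.linspace(ym, yM, 125)             # 124 segments -> 125 nodes

def wire_realization():
    a1, a2 = rng.uniform(-0.03, 0.03, size=2)
    x = a1 * np.sin(np.pi * (y - ym))
    z = 0.05 + a2 * np.sin(np.pi * (y - ym))
    return np.stack([x, y, z], axis=1)   # (125, 3) centerline coordinates

pts = wire_realization()
assert pts.shape == (125, 3)
assert np.all(np.abs(pts[:, 0]) <= 0.03)   # transverse deviation is bounded
assert np.allclose(pts[[0, -1], 0], 0.0)   # wire ends remain fixed
```

Each draw of (α₁, α₂) produces one deformed wire, over which the current i_{α,ki} would be computed before averaging.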
7.6.1 Statistical moments of i_{α,ki}
The magnitude of E[iα,ki ] is depicted in Fig. 7.1(a) together with the magnitude of
the current J 0 flowing on the straight wire D. As expected from the geometry of the
straight wire, J 0 only possesses longitudinal components, i.e. it does not have any
x-component, but only a z-component on the 5 cm long vertical wires, and a y-component
along the 1 m long horizontal wire. On the other hand, E[iα,ki ] has components in all three
directions. Figure 7.1(a) highlights the strong resemblance between J 0 and the longitudinal
components of E[iα,ki ]. This indicates that J 0 is present as a steady-state current in
E[iα,ki ], as confirmed in Fig. 7.1(b) where the residual current E[iα,ki ] − J 0 is plotted.
The y-components of this residual are dominant and one order of magnitude larger than
the x and z components, which are comparable.
Figure 7.1: Magnitude of the current densities E[i_{α,ki}] (solid line) and J_0 (dashed line) along ∂Ω_α (Fig. 7.1(a)); residual E[i_{α,ki}] − J_0 along ∂Ω_α (Fig. 7.1(b)).
Figure 7.2(a) displays the magnitude of the covariance coefficients both on a linear and
a logarithmic scale. The symmetry of the matrix |V[iα,ki ]| stems from the Hermitian
nature of V[iα,ki ]. Concerning the self-correlation matrices, V y,y exhibits the highest
values compared to V x,x and V z,z , which have similar orders of magnitude. As for the
cross-correlation matrices, the coupling with the y-components is the most important as
testified by the coefficients in V x,y and V y,z . On the contrary, the small values of V x,z
indicate the limited magnitude of the coupling between the x and z components of iα,ki .
Figure 7.2: Magnitude of the elements of V[i_{α,ki}], in (A·m⁻¹)² (Fig. 7.2(a)), and |V[i_{α,ki}]|_dB = 10 log(|V[i_{α,ki}]|), in dB normalized with respect to 1 (A·m⁻¹)² (Fig. 7.2(b)).
Next, an EVD³ is performed to obtain the spectral decomposition of V[i_{α,ki}]. The trace of
the covariance matrix equals λ_tot = 1.705 · 10⁻³ (A·m⁻¹)², and the normalized eigenvalues
are plotted in Fig. 7.3.

Figure 7.3: Normalized eigenvalues of V[i_{α,ki}].
This graph highlights the steep decay rate of the eigenvalues: the first, second, and third
eigenvalues bear 96.2%, 3.2% and 0.5% of λ_tot, respectively. The sum in Eq. (7.18) can
hence be truncated at N_red = 3 ≪ 375 = 3N_elts.
These three eigenvalues together with their eigenvectors, shown in Figs 7.4(a) and 7.4(b),
suffice to compute the variance of Ve , for any incident plane wave propagating along ki ,
with an accuracy equal to the sum of the eigenvalues, i.e. 99.9%. The orthogonality of
the eigenvectors implies that each of them captures a specific and non-redundant aspect
of the randomness of Ve .
7.6.2 Inference of E[Ve] and var[Ve]
The statistical moments of iα,ki are now post-processed by considering an impinging plane
wave E iβ with an amplitude of 1 Vm−1 , a frequency of 500 MHz, and propagating along
the direction of the vector ki employed hitherto, i.e. θi = 45◦ and φi = 45◦ in polar
coordinates. The polarization angle ηi is defined as the angle between the unit vector uθ
and E iβ, as shown in Fig. 7.5. The statistical moments of Ve are computed for 50 values
of ηi between 0◦ (parallel polarization) and 90◦ (perpendicular polarization).
³ This is achieved with the NAG library's F02HAF routine, which is based on a QR factorization of the Hermitian matrix V[i_{α,ki}] (see [122, p. 409]).
Figure 7.4: Real part (a) and imaginary part (b) of the first (solid line), second (dashed line) and third (dotted line) eigenvectors of V[i_{α,ki}]. These eigenvectors are dimensionless by construction.
Figure 7.5: Polarization of the incident field.
The average obtained in a semi-intrusive manner via Eq. (7.10) is compared to the average
computed directly by quadrature and taken as a reference. Figure 7.6(a) highlights the
match between the two types of results, with differences of the order of 1 mV, as expected
since the representation in Eq. (7.10) is exact for plane-wave incident fields. Likewise,
the variance obtained from Eq. (7.11) using V[iα,ki ], and the variance resulting from
the truncated principal component decomposition, i.e. Eq. (7.19), are compared to the
reference variance computed directly by quadrature. A close match between the three
approaches can be noted in Fig. 7.6(b), owing to the rapid decay of the eigenvalues: the
differences between the different methods for var[Ve ] are of the order of 10−6 V2 .
7.6.3 Computation time
Table 7.1 details the computation time needed to determine E[Ve] and var[Ve] via each
of the following methods:
• method A: computation and testing of E[iα,ki ], V[iα,ki ],
• method B: computation of E[iα,ki ] and V[iα,ki ], then EVD of V[iα,ki ], followed by
the testing of E[iα,ki ] and of the dominant eigenvectors of V[iα,ki ],
• method C: non-intrusive computation of E[Ve ] and var[Ve ] by quadrature.
In the presence of a single incident field E iβ, the dominant step concerns the determination
of E[iα,ki ] and V[iα,ki ], which takes 156 s, followed by the EVD of V[iα,ki ], which requires
5 s. Testing E[iα,ki ] and V[iα,ki ] to obtain E[Ve ] and var[Ve ] (method A) translates into 62
ms, while the testing of E[iα,ki ] and the 3 most prominent eigenvectors of V[iα,ki ] (method
B) lasts merely 12 ms. These durations can be compared to the average computation time
needed to evaluate E[Ve ] and var[Ve ] directly by quadrature (method C), viz. 9 s.
Step                                                          Duration
Method A:  Computing E[i_{α,ki}] and V[i_{α,ki}]              156 s
           Testing E[i_{α,ki}] and V[i_{α,ki}]                62 ms
Method B:  Computing E[i_{α,ki}] and V[i_{α,ki}]              156 s
           EVD of V[i_{α,ki}]                                 5 s
           Testing E[i_{α,ki}] and the dominant eigenvectors  12 ms
Method C:  Computing E[Ve] and var[Ve] by quadrature          9 s

Table 7.1: Details of the computation time for a single excitation E^i_β.
Figure 7.6: Statistical moments of Ve as a function of η_i, computed by quadrature (quad), by a semi-intrusive approach (tens), and by using the principal component decomposition of V[i_{α,ki}] (pca): (a) average of Ve; (b) variance of Ve.
Further, when Np different polarizations of E^i_β are studied, the computation time of
Method A evolves as T_A ≈ 156 + 0.062·Np s, that of Method B as T_B ≈ 156 + 5 + 0.012·Np s,
while a linear increase is observed for Method C, T_C ≈ 9·Np s. Table 7.2 illustrates these
features for the case Np = 50, in which the semi-intrusive methods (Methods A and B) are
roughly three times faster than the non-intrusive approach (Method C).
            Method A               Method B               Method C
Np          156 + 0.062·Np s       161 + 0.012·Np s       9·Np s
Np = 50     161 s                  162 s                  460 s

Table 7.2: Cumulative computation time for Np different polarizations of E^i_β.
More generally, for a given level N_elts of geometrical sampling, the computational cost
of the semi-intrusive approaches will vary according to the effort required to evaluate
E[i_{α,ki}] and V[i_{α,ki}], which in turn depends on the dimension of the unknown input vector
α, on the smoothness of i_{α,ki} as a function of α, and on N_elts. The EVD, on the other
hand, depends mainly on N_elts, since it evolves as $O(N_{\mathrm{elts}}^3)$ [122, p. 424]. Therefore, if the
geometrical mesh is refined, the numerical computation of E[i_{α,ki}] and V[i_{α,ki}], and the
EVD, will become costlier due to the increased size of these tensors.
7.7 Extensions

7.7.1 Random incident field
The semi-intrusive formalism, which holds for any incident plane wave polarized in the
plane P(ki ) normal to ki , can also handle a randomly polarized incident field. More
specifically, let E iβ be polarized in P(ki ), and have a random amplitude or polarization
angle. It is further assumed that β, which contains the random parameters of E iβ , is
independent of α, with the global random input vector being γ = (α, β). The average
and the variance of Ve(γ) can then be written as

$$\mathrm{E}[V_e] = -\left\langle \mathrm{E}_\alpha[\mathbf{i}_{\alpha,k_i}], \mathrm{E}_\beta[\mathbf{E}^i_\beta] \right\rangle_{\partial D}, \qquad (7.21)$$

$$\mathrm{var}[V_e] = \mathrm{Tr}\!\left(V_\alpha[\mathbf{i}_{\alpha,k_i}],\, V_\beta[\mathbf{E}^i_\beta]\right), \qquad (7.22)$$
with E_i[·] and V_i[·] denoting the expectation and the covariance with respect to the
random variable i, while Tr(A) is the trace of the operator A over ∂D. If V_α[i_{α,ki}] is
decomposed spectrally, then with the notations of Eq. (7.18), we obtain

$$\mathrm{var}[V_e] \approx \lambda_{\mathrm{tot}} \sum_{l=1}^{N_{\mathrm{red}}} \lambda_l \left\langle \mathbf{u}_l, V_\beta[\mathbf{E}^i_\beta], \mathbf{u}_l^* \right\rangle_{\partial D}. \qquad (7.23)$$
The covariance matrix V_β[E^i_β] can also be decomposed as follows

$$V_\beta[\mathbf{E}^i_\beta](m,n) = \chi_{\mathrm{tot}} \sum_{l=1}^{3N_{\mathrm{elts}}} \chi_l\, \mathbf{e}_l(m) \otimes \mathbf{e}_l(n)^*, \quad \text{for } m, n = 1, \ldots, 3N_{\mathrm{elts}}, \qquad (7.24)$$

where $\chi_{\mathrm{tot}}$ equals the trace of $V_\beta[\mathbf{E}^i_\beta]$, $1 \geq \chi_1 \geq \ldots \geq \chi_{3N_{\mathrm{elts}}} \geq 0$ with $\sum_{l=1}^{3N_{\mathrm{elts}}} \chi_l = 1$,
and the eigenvectors $\mathbf{e}_1, \ldots, \mathbf{e}_{3N_{\mathrm{elts}}}$ are mutually orthogonal. Truncating Eq. (7.24)
at an order $N_{\mathrm{red},2} \ll 3N_{\mathrm{elts}}$, based on the rate of decrease of $\chi_l$, and then substituting the
resulting expression in Eq. (7.23) yields the following approximation of var[Ve]

$$\mathrm{var}[V_e] \approx \lambda_{\mathrm{tot}}\,\chi_{\mathrm{tot}} \sum_{l=1}^{N_{\mathrm{red}}} \sum_{r=1}^{N_{\mathrm{red},2}} \lambda_l\,\chi_r \left|\left\langle \mathbf{u}_l, \mathbf{e}_r \right\rangle_{\partial D}\right|^2. \qquad (7.25)$$
Hence, var [Ve ] is obtained by summing the coefficients resulting from the interaction
between the main eigenvectors of Vα[iα,ki ] and Vβ [E iβ ]. Equation (7.25) is also very helpful
as it involves a sum of Nred Nred,2 terms, which is generally a small number. Nonetheless,
the additional cost of the EVD of Vβ [E iβ ] needs to be handled as well.
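The double truncation can be illustrated on synthetic low-rank covariances (not thesis data; a Hermitian inner-product convention is assumed for the discrete duality products, under which Eq. (7.22) reads as a matrix trace):

```python
import numpy as np

# Sketch of Eqs (7.22) and (7.25) with synthetic low-rank covariances:
# var[Ve] = Tr(V_alpha[i] V_beta[E^i]) collapses to a small double sum
# over the dominant eigenpairs of both covariances.
rng = np.random.default_rng(3)
n, ra, rb = 24, 2, 2
A = rng.standard_normal((n, ra)) + 1j * rng.standard_normal((n, ra))
B = rng.standard_normal((n, rb)) + 1j * rng.standard_normal((n, rb))
Cov_i = A @ A.conj().T                         # V_alpha[i], rank ra
Cov_E = B @ B.conj().T                         # V_beta[E^i], rank rb

lam_tot = np.real(np.trace(Cov_i))
chi_tot = np.real(np.trace(Cov_E))
lam, U = np.linalg.eigh(Cov_i)
lam, U = lam[::-1] / lam_tot, U[:, ::-1]       # normalized, descending
chi, W = np.linalg.eigh(Cov_E)
chi, W = chi[::-1] / chi_tot, W[:, ::-1]

var_full = np.real(np.trace(Cov_i @ Cov_E))    # Eq. (7.22)

# Truncated double sum of Eq. (7.25), kept at the true ranks
var_trunc = sum(lam_tot * chi_tot * lam[l] * chi[r]
                * np.abs(np.vdot(U[:, l], W[:, r])) ** 2
                for l in range(ra) for r in range(rb))
assert np.isclose(var_full, var_trunc)
```

Only ra·rb = 4 interaction coefficients are summed here, mirroring the N_red·N_red,2 terms of Eq. (7.25).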
7.7.2 Higher-order statistical moments
One of the helpful features of Eq. (7.9) resides in the fact that it permits the calculation
of the first two statistical moments of Ve directly from the first two statistical moments
of iα,ki . It is therefore natural to aim for a generalization of such a link to higher-order
moments of Ve . To illustrate this problem, we consider the calculation of the third-order
moment of Ve , i.e. E[Ve3 ], for a random geometry and a deterministic incident field.
Based on Eq. (7.9), the tensorial representation of Ve³ corresponds to

$$V_e^3 = -\left\langle \mathbf{i}_{\alpha,k_i} \otimes \mathbf{i}_{\alpha,k_i} \otimes \mathbf{i}_{\alpha,k_i}, \mathbf{E}^i_\beta, \mathbf{E}^i_\beta, \mathbf{E}^i_\beta \right\rangle_{\partial D} = -\int_{\mathbf{r}_1\in\partial D}\!\mathrm{d}S_1 \int_{\mathbf{r}_2\in\partial D}\!\mathrm{d}S_2 \int_{\mathbf{r}_3\in\partial D}\!\mathrm{d}S_3\, \left[\mathbf{i}_{\alpha,k_i}(\mathbf{r}_1) \otimes \mathbf{i}_{\alpha,k_i}(\mathbf{r}_2) \otimes \mathbf{i}_{\alpha,k_i}(\mathbf{r}_3)\right] \cdot \mathbf{E}^i_\beta(\mathbf{r}_1) \cdot \mathbf{E}^i_\beta(\mathbf{r}_2) \cdot \mathbf{E}^i_\beta(\mathbf{r}_3). \qquad (7.26)$$
A prerequisite to the determination of E[Ve3 ] is therefore the computation of the
expectation of the third-order tensor iα,ki ⊗ iα,ki ⊗ iα,ki , which can be numerically
demanding. For a geometry of the scatterer meshed into N_elts elementary cells, the tensor
E[i_{α,ki} ⊗ i_{α,ki} ⊗ i_{α,ki}] ∈ C^{3N_elts×3N_elts×3N_elts} would imply the need to compute each of its
(3N_elts)³ components by quadrature to perform the expectation operation.
An alternative to this tensor-based approach utilizes the Karhunen-Loève theorem [101,
p. 17], which states that, given the spectral decomposition of V[i_{α,ki}] described by Eq. (7.18),
i_{α,ki} can be represented as

$$\mathbf{i}_{\alpha,k_i}(\mathbf{r}_D) \approx \mathrm{E}[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D) + \sum_{l=1}^{N_{\mathrm{red}}} x_l \sqrt{\lambda_{\mathrm{tot}}\lambda_l}\, \mathbf{u}_l(\mathbf{r}_D), \quad \text{for } \mathbf{r}_D \in \partial D, \qquad (7.27)$$

where, if $\lambda_l \neq 0$,

$$x_l = \frac{1}{\sqrt{\lambda_{\mathrm{tot}}\lambda_l}} \int_{\mathbf{r}_D\in\partial D} \left\{\mathbf{i}_{\alpha,k_i}(\mathbf{r}_D) - \mathrm{E}[\mathbf{i}_{\alpha,k_i}](\mathbf{r}_D)\right\} \cdot \left(\mathbf{u}_l(\mathbf{r}_D)\right)^* \mathrm{d}S. \qquad (7.28)$$
The complex-valued random variables x_1, …, x_{N_red} are mutually uncorrelated, such that
E[x_l] = 0 and E[|x_l|²] = 1 for 1 ≤ l ≤ N_red. To simplify the notation, and without loss
of generality, let E[i_{α,ki}] = 0 and λ_tot = 1 (A·m⁻¹)². With the aid of Eq. (7.27), the
third-order statistical moment of i_{α,ki} becomes
$$\mathrm{E}[\mathbf{i}_{\alpha,k_i} \otimes \mathbf{i}_{\alpha,k_i} \otimes \mathbf{i}_{\alpha,k_i}](\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) \approx \sum_{l,m,n=1}^{N_{\mathrm{red}}} \mathrm{E}[x_l x_m x_n] \sqrt{\lambda_l \lambda_m \lambda_n}\, \mathbf{u}_l(\mathbf{r}_1) \otimes \mathbf{u}_m(\mathbf{r}_2) \otimes \mathbf{u}_n(\mathbf{r}_3),$$
for r_1, r_2, r_3 ∈ ∂D. In spite of the presence of third-order tensors, the notable
distinction with i_{α,ki} ⊗ i_{α,ki} ⊗ i_{α,ki} is that the (N_red)³ tensors u_l(r_1) ⊗ u_m(r_2) ⊗ u_n(r_3)
are deterministic and therefore need to be evaluated only once. Moreover, the only statistical
moments needed are the third-order moments of the random variables xl .
This reveals however the major difficulty underlying the Karhunen-Loève transform, viz.
the determination of the probability distribution of the random variables x1 , . . . , xNred .
The definition of these variables in Eq. (7.28) involves the current iα,ki , and thus the
computationally expensive current density J α, which is the solution to an EFIE.
By treating each of the xl as a function of α, the (Nred )3 averages E[xl xm xn ] can be
computed by quadrature. However, doing so implies a supplementary numerical cost that
needs to be added to the effort required to compute E[iα,ki ], V[iα,ki ] and to perform the
EVD of V[i_{α,ki}]. Instead, a rather brute-force approach, based on the fact that E[x_l] = 0
and E[|x_l|²] = 1 for 1 ≤ l ≤ N_red, would consist in assuming that all the variables x_l follow
a normal distribution. This assumption, although consistent with the maximum-entropy
principle [76], generally introduces an error, as the actual variables x_l need not be Gaussian.
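The extraction of the Karhunen-Loève variables and the empirical estimation of E[x_l x_m x_n] can be sketched as follows, on a synthetic (deliberately non-Gaussian) ensemble standing in for samples of i_{α,ki}:

```python
import numpy as np

# Sketch: extract the KL variables x_l of Eq. (7.28) (discrete form) from an
# ensemble of current samples, then estimate E[x_l x_m x_n] empirically.
# The ensemble is synthetic toy data, not an EFIE solution.
rng = np.random.default_rng(4)
M, n, N_red = 5000, 10, 3
S = rng.standard_normal((M, n)) ** 3 + 1j * rng.standard_normal((M, n))
centered = S - S.mean(axis=0)
Cov = centered.T @ centered.conj() / M           # sample covariance V[i]
lam_tot = np.real(np.trace(Cov))
w, U = np.linalg.eigh(Cov)
w, U = w[::-1] / lam_tot, U[:, ::-1]             # normalized, descending

# Discrete Eq. (7.28): x_l = <i - E[i], u_l*> / sqrt(lam_tot * lam_l)
x = (centered @ U[:, :N_red].conj()) / np.sqrt(lam_tot * w[:N_red])

# By construction, E[x_l] = 0 and E[|x_l|^2] = 1 over the ensemble
assert np.allclose(x.mean(axis=0), 0.0, atol=1e-8)
assert np.allclose(np.mean(np.abs(x) ** 2, axis=0), 1.0, atol=1e-8)

# Empirical third moments E[x_l x_m x_n], the ingredients of E[Ve^3]
third = np.einsum('sl,sm,sn->lmn', x, x, x) / M
```

Estimating the third moments this way requires the very samples of i_{α,ki} whose cost the text discusses; the sketch only illustrates the mechanics of the transform.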
Soize and Ghanem [107] apply a different method according to which the characteristic
function of x1 , . . . , xNred should be expressed in terms of the characteristic function of iα,ki .
Unfortunately, in our case the characteristic function of iα,ki is not known a priori, which
prevents us from applying such a rationale.
7.8 Conclusion
This chapter has presented a semi-intrusive formulation that granted a separation of
variables of the observable Ve according to the origin of its randomness. In return for
the assumption of an excitation by an incident plane wave with a fixed direction of
propagation, the effect of the randomness of the scatterer, described by a current
distribution, can be isolated from the effect of the amplitude and the polarization of the
excitation.
The representation of the observable thus obtained allows for a
statistical analysis of the current distribution characterizing the scatterer via its
average and covariance tensors. Post-processing these statistical items of information,
which can be viewed as signatures of the randomness of the scatterer, enables an accurate
characterization of the first two statistical moments of the observable.
The case of an incident field with a random polarization can be tackled efficiently by the
semi-intrusive model. However, the determination of higher-order statistical moments of
Ve can become quite intricate from a numerical perspective.
The chapters in part III of this dissertation will illustrate the essential role played by
the average and the variance of Ve in the quantification of its uncertainty, as well as the
usefulness of higher-order moments, which are not efficiently computable via a
semi-intrusive model.
Part III
Post-processing of the statistics
Chapter 8
Statistical moments of a complex-valued observable
In the previous chapters, the statistical moments of the observable Ve were computed by
first applying a change of representation to Ve : in the perturbation and polynomial-chaos
methods the deterministic part of Ve was separated from its random part, whereas in the
semi-intrusive method, the randomness stemming from the geometry of the scatterer was
separated from the effect of the incident field.
All of the aforementioned approaches have specific advantages and shortcomings.
The accuracy of the perturbation method, presented for randomly varying geometries, is
highly dependent on the smoothness of Ve as a function of the geometry. The construction
of the polynomial-chaos decomposition can prove computationally costly as the number
of random inputs is increased, particularly if the polynomial-chaos system employed is
not suitably chosen. The semi-intrusive method entails the computation of statistical
moments of the transmitting-state current and a spectral decomposition, which bear a
certain numerical cost. Moreover, the determination of higher-order statistical moments
of Ve via the semi-intrusive approach has proved to be computationally tedious.
To circumvent these limitations, a non-intrusive approach, also known as a stochastic
collocation method [123], is now considered. In this method, the observable Ve is viewed as
the result of a “black-box” process that has the parameters γ = (α, β) of the
configuration as input, and provides the corresponding value of the observable Ve (γ)
as output. The statistical moments of Ve are then computed directly with the aid of
quadrature rules. This chapter takes the complex nature of Ve into account by first regarding it as a complex scalar, i.e. Ve = Re(Ve )+j Im(Ve ) and then as a real-valued vector,
i.e. Ve = (Re(Ve ), Im(Ve )). The objective is to compute and interpret successive statistical
moments of increasing order. Both deterministic and random incident fields are handled
analogously with this method.
8.1 Non-intrusive probabilistic logic
The cornerstone of the non-intrusive approach is the double application of the
fundamental theorem (see Section 3.3.3, Eq. (3.7)). Given an observable defined as the
random variable Y : (I, EI, PI) ∋ i ↦ Y(i) ∈ Y = R or C, and a measurable function
h : Y → W = R or C, the expectation of h(Y) can be written in terms of the pdf fY of Y

    E[h(Y)] = ∫Y h(y) fY(y) dy.    (8.1)

In this equation, the average E[h(Y)] appears as the result of the evaluation of the
functional Th : F(h) ∋ g ↦ Th[g] = ∫Y h(y)g(y) dy on the testing function fY, where
the space F(h) is defined as F(h) = {g : R → R, |∫Y h(y)g(y) dy| < ∞}. Owing to this
feature, suitably chosen kernels h of Th will reveal several patterns of fY.
Due to the unavailability of fY , E[h(Y )] cannot be obtained directly from Eq. (8.1).
However, since Y is a function of the random variable i, so is h ◦ Y , which implies that
E[h(Y)] can also be expressed as a function of the pdf fI

    E[h(Y)] = ∫I h(Y(i)) fI(i) di.    (8.2)
This equation is utilized to compute E[h(Y )] by quadrature.
In a way, a dialogue can be established with fY , in which the questions are posed via
Eq. (8.1): (“fY , what is your response to the functional Th ?”) and where the answers of
fY are obtained via Eq. (8.2).
This idea is hence pursued in this section, by choosing a set of functions
{hk : C → R, k = 0, . . . , N} and evaluating the corresponding answers of fY in the
form of the statistical moments {mk = Thk[fY], k = 0, . . . , N}. To avoid redundancy, it
is desirable to have some orthogonality among the functions hk, by choosing them, for
instance, as orthogonal polynomials. However, the most commonly employed functions
are canonical polynomials, which lead to the definition of standard statistical moments.
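As an illustration of this non-intrusive logic, the sketch below (Python with NumPy, a language choice of this sketch rather than of the method) treats the solver as a hypothetical black-box function i ↦ Y(i), absorbs a uniform input pdf into the weights of a Gauss-Legendre rule, and evaluates the canonical kernels hk(y) = y^k to obtain the raw moments mk via Eq. (8.2):

```python
import numpy as np

def expectation(h, observable, nodes, weights):
    """Approximate E[h(Y)] via Eq. (8.2): quadrature over the input space."""
    return sum(w * h(observable(i)) for i, w in zip(nodes, weights))

# Hypothetical black-box solver i -> Y(i); stands in for the deterministic model.
observable = lambda i: np.sin(i) + 0.5 * i**2

# Gauss-Legendre nodes/weights on [-1, 1]; the uniform input pdf f_I = 1/2
# is absorbed into the weights.
nodes, w = np.polynomial.legendre.leggauss(16)
weights = 0.5 * w

# Canonical kernels h_k(y) = y**k yield the raw statistical moments m_k.
moments = [expectation(lambda y, k=k: y**k, observable, nodes, weights)
           for k in range(4)]
```

Replacing the canonical kernels by orthogonal polynomials would implement the orthogonality mentioned above with the same quadrature loop.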
8.2 Observable Ve as a complex random variable
To begin with, the observable Ve is regarded as a complex scalar Ve = Re(Ve ) + j Im(Ve ).
The mean Es[Ve] of Ve is then defined as¹

    Es[Ve] = E[Re(Ve)] + j E[Im(Ve)].    (8.3)
It represents the center of gravity of all the values of Ve and can be employed to center Ve
by defining Vc = Ve − Es[Ve]. The variance of Ve is

    var[Ve] = E[|Vc|²] = E[|Ve|²] − |Es[Ve]|² = var[Re(Ve)] + var[Im(Ve)] ≥ 0.    (8.4)

The standard deviation (st.dev.) σ[Ve] = √var[Ve] has the same dimension as Ve. With
Es[Ve] and σ[Ve] at hand, Ve can be normalized as follows

    Vn = (Ve − Es[Ve]) / σ[Ve],    when σ[Ve] ≠ 0.    (8.5)
The dimensionless variable Vn has a vanishing average Es[Vn] = 0 and a unit variance
(σ[Vn])² = 1. Since Vc and Vn are simply obtained by translating and scaling Ve, they
have the same types of probability distributions as Ve [67, p. 44]. Interestingly, the
definition of Vc permits the comparison of the physical spread (measured in Volts) of the
different random variables on a common graph, whereas Vn will highlight the statistical
spread (measured in σ[Ve ]) of the samples. These comparisons are for instance necessary
when the spread of Ve needs to be quantified for different frequencies or different random
configurations.
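In practice, the centering and normalization of Eqs (8.3)-(8.5) are one translation and one scaling of the sample set. A minimal sketch, with synthetic complex Gaussian samples standing in for solver output (the offset and spread below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for samples of Ve (arbitrary offset and spread, in volts).
Ve = (0.066 - 0.138j) + 0.1 * (rng.standard_normal(10_000)
                               + 1j * rng.standard_normal(10_000))

Es = np.mean(Ve.real) + 1j * np.mean(Ve.imag)   # scalar mean Es[Ve], Eq. (8.3)
var = np.mean(np.abs(Ve - Es) ** 2)             # variance var[Ve], Eq. (8.4)
sigma = np.sqrt(var)                            # standard deviation sigma[Ve]

Vc = Ve - Es        # centered samples: physical spread, in volts
Vn = Vc / sigma     # normalized samples: statistical spread, in units of sigma[Ve]
```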
With the aid of Es[Ve] and σ[Ve], Chebychev's inequality [68] provides a general bound for
the probability distribution PVe of Ve by stating that, for any m > 0,

    PVe(|Ve − Es[Ve]| > m) ≤ (σ[Ve]/m)².    (8.6)
This inequality is relevant for m ≥ σ[Ve] and allows for the definition of confidence domains
Cm = {Ve : |Ve − Es[Ve]| ≤ m σ[Ve]} as concentric circular discs centered around Es[Ve]
with radii proportional to σ[Ve]. In terms of Vn, the normalized domains Cm become
discs centered around the origin with integral radii. In practice, Eq. (8.6) ascertains that,
for instance, C2, C4 and C5 will contain at least 75 %, 93 % and 96 % of the samples of
Ve, respectively. For many families of random variables, these bounds are rather loose.
An example hereof is the family of Gaussian variables, for which more than 95 % of the
samples lie within 2σ of the mean. Nonetheless, the strength of Chebychev's inequality is
its general applicability: it holds for every probability distribution that has a finite variance.
¹The subscript s indicates that Ve is viewed as a scalar.
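The bound of Eq. (8.6) is straightforward to check numerically. In the sketch below (synthetic complex Gaussian samples standing in for Ve), the fraction of samples outside each disc Cm never exceeds 1/m², in line with the percentages quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for Ve with Gaussian real and imaginary parts.
Ve = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)

Es = np.mean(Ve)
sigma = np.sqrt(np.mean(np.abs(Ve - Es) ** 2))

# Fraction of samples outside the disc C_m of radius m * sigma[Ve].
frac_outside = {m: float(np.mean(np.abs(Ve - Es) > m * sigma))
                for m in (2, 3, 4)}
```

For this Gaussian stand-in the actual exceedance fractions are far below the Chebychev bounds, which illustrates how loose the inequality can be.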
Example
A vertically varying wire is considered in free space, with an axis described as

    xα(y) = 0   and   zα(y) = 0.05 + α1 sin[4π(y − ym)],   for y ∈ [ym; yM],    (8.7)
where the coordinates are given in meters. The amplitude α1 is uniformly distributed in
the interval A1 = [−0.04; +0.04] m. The wire is meshed into 224 quadratic segments with
200 segments assigned to the undulating portion of the wire. A few sample geometries are
shown in Fig. 8.1.

Figure 8.1: 10 sample geometries obtained by varying α1 in A1 = [−0.04, +0.04] m.
This wire is illuminated by a parallel-polarized plane wave with an electric-field
amplitude of 1 Vm−1 , a frequency f , and a random direction of incidence
(θi = 45◦ , φi = β1 ), where the azimuth angle φi = β1 is uniformly distributed in the
interval B1 = [0◦ ; 90◦]. Hence, this problem involves 2 random inputs that are
mutually independent and gathered in the vector γ = (γ1 , γ2 ) = (α1 , β1 ), which belongs
to the product space G = A1 × B1 .
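For illustration only, the random configuration γ = (α1, β1) could be sampled as follows; the wire extent [ym; yM] is not fixed by the description above, so the values used here are placeholders, and wire_axis is a hypothetical helper implementing Eq. (8.7):

```python
import numpy as np

rng = np.random.default_rng(0)

ym, yM = 0.0, 1.0        # placeholder wire extent in meters (not specified above)
n_samples = 10

# Independent uniform inputs gathered in gamma = (alpha_1, beta_1) on A1 x B1.
alpha1 = rng.uniform(-0.04, 0.04, n_samples)   # amplitude alpha_1 [m]
beta1 = rng.uniform(0.0, 90.0, n_samples)      # azimuth phi_i = beta_1 [deg]

def wire_axis(a1, y):
    """Wire axis of Eq. (8.7): x(y) = 0, z(y) = 0.05 + a1*sin(4*pi*(y - ym))."""
    return np.zeros_like(y), 0.05 + a1 * np.sin(4.0 * np.pi * (y - ym))

y = np.linspace(ym, yM, 201)
x0, z0 = wire_axis(alpha1[0], y)    # one sample geometry, cf. Fig. 8.1
```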
The mean and the st.dev. of the induced voltage have been computed via a sparse-grid
quadrature rule, for two frequencies, viz f1 = 268 MHz and f2 = 500 MHz. The results,
summarized in Table 8.1, are computed with a maximum relative error of 1% using NV
samples of Ve . The higher standard deviation at f2 indicates a larger physical spread of
Ve at f2 than at f1 .
To compare the statistics of Ve to the actual distribution of the samples of Ve, 10⁴
deterministic samples have been computed at f1 and f2, for different values of γ in G.
8.2 Observable Ve as a complex random variable
129
    f               Es[Ve] [mV]    σ[Ve] [mV]    NV
    f1 = 268 MHz    66 − j 138     168           145
    f2 = 500 MHz    8 − j 87       209           705

Table 8.1: Average Es[Ve], standard deviation σ[Ve] and complexity NV.
Computing these samples takes 27 minutes. The objective underlying the evaluation
of so many samples is to compare the accuracy of the statistical moments to the actual
distribution of Ve, as one would obtain it by systematically executing the deterministic
model. The centered samples Vc, depicted in Figs 8.2(a) and 8.2(b), confirm the analysis
of Table 8.1: the samples Vc at f2 fill a larger area of the complex plane².
(a) f1 = 268 MHz    (b) f2 = 500 MHz    [axes: Re(Vc) [V] versus Im(Vc) [V]]

Figure 8.2: 10⁴ deterministic samples of Ve centered to obtain Vc.
The normalized samples obtained from Eq. (8.5) are shown in Figs 8.3(a) and 8.3(b), where
the first three confidence discs derived from Chebychev's inequality are also displayed.
Unlike what is predicted by σ[Ve], the spread of the normalized samples is larger at f1,
where samples go beyond C3, than at f2, where almost all the samples are within C2. This
observation is also confirmed by Table 8.2, which reports the number of samples present
in the interior of the Chebychev discs. The Chebychev predictions are obtained from the
inequality (8.6), which guarantees that at least a fraction 1 − 1/m² of the 10⁴ deterministic
²Due to the non-linearity of Ve as a function of γ, the distribution of the deterministic samples Vc in
the complex plane does not correspond to a canonical shape such as a square or a circle.
(a) f1 = 268 MHz    (b) f2 = 500 MHz    [axes: Re(Vn) versus Im(Vn)]

Figure 8.3: 10⁴ normalized samples Vn versus the Chebychev circles Cm, for 1 ≤ m ≤ 3.
samples lie within the disc Cm . This table also highlights that the bounds obtained via
Chebychev’s inequality represent loose constraints for the distribution of the samples.
                               f = 268 MHz    f = 500 MHz    Chebychev bound
    Number of samples in C1    7015           7273           -
    Number of samples in C2    9633           9512           7500
    Number of samples in C3    9955           10 000         8888
    Number of samples in C4    10 000         10 000         9375

Table 8.2: Distribution of the 10⁴ deterministic samples of Ve versus Chebychev bounds.
Hence, although the physical spread of the samples Ve is larger at f2 than at f1 , their
statistical dispersion is conversely larger at f1 than at f2 . This behaviour will be
commented upon in further detail in Section 8.4.2.
8.3 Observable Ve as a real random vector
The circular shape of the confidence domains derived from Chebychev's inequality
indicates the isotropic nature of the quantification performed by σ[Ve]. This isotropy
should be understood in the sense that the specific randomness of Re(Ve) and Im(Ve) is
not described individually via σ[Ve]. To overcome this limitation, rather than handling Ve
as a complex scalar, it is regarded as a real-valued random vector Ve = (Re(Ve ), Im(Ve ))
by employing the natural isomorphism between C and R2 .
8.3.1 Average vector and covariance matrix of Ve
In this case, the average of Ve is defined as
E [Ve ] = (E [Re(Ve )] , E [Im(Ve )]) ,
(8.8)
and corresponds to the isomorphic image in R2 of Es [Ve ] ∈ C. Regarding the second-order
moments, the covariance matrix CVe is introduced as [67, p. 229]

    CVe = [ var[Re(Ve)]            Cov[Re(Ve), Im(Ve)] ]
          [ Cov[Re(Ve), Im(Ve)]    var[Im(Ve)]         ],    (8.9)
where var[X] = E[X²] − E[X]², and Cov[X, Y] = Cov[Y, X] = E[XY] − E[X]E[Y].
This symmetric matrix contains all the second-order statistical information on the value
distributions of Re(Ve ) and Im(Ve ). The diagonal entries of CVe measure the spread of the
samples of Re(Ve ) and Im(Ve ) separately, whereas the anti-diagonal term
Cov[Re(Ve ), Im(Ve )] assesses the correlation between Re(Ve ) and Im(Ve ). Whenever
σ[Re(Ve)] ≠ 0 and σ[Im(Ve)] ≠ 0, a correlation coefficient ρ[Ve] can be defined as

    ρ[Ve] = Cov[Re(Ve), Im(Ve)] / (σ[Re(Ve)] σ[Im(Ve)]) ∈ [−1; 1].    (8.10)
This coefficient indicates the strength of the linear correlation between Re(Ve ) and Im(Ve ).
As |ρ[Ve ]| → 1, the samples of Ve are more likely to be distributed along a preferential axis
in the complex plane. In addition, whenever ρ[Ve] ≠ 0, there exists a statistical cross-talk,
and hence a redundancy in the randomness of Re(Ve ) and Im(Ve ).
A comparison between Eqs (8.3)-(8.4) and Eqs (8.8)-(8.9) shows that the vectorial
statistical moments encompass the scalar statistical moments. In particular, the
variance σ[Ve]² represents the trace of CVe and can hence be employed to normalize CVe
by dividing its entries by σ[Ve]², which leads to the covariance matrix CVn of Vn, i.e.

    CVn = (1/σ[Ve]²) CVe.    (8.11)
The matrix CVn has a unit trace, and the correlation coefficient of Vn is identical to the
correlation coefficient of Ve .
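The quantities of Eqs (8.9)-(8.11) reduce to a 2 × 2 covariance computation. A sketch (NumPy assumed), with synthetic, negatively correlated real and imaginary parts standing in for Ve:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for (Re(Ve), Im(Ve)) with a built-in negative correlation.
re = rng.standard_normal(50_000)
im = -0.5 * re + 0.8 * rng.standard_normal(50_000)

X = np.stack([re, im])        # Ve viewed as a real-valued 2-D vector
C_Ve = np.cov(X, bias=True)   # covariance matrix of Eq. (8.9)

var_total = np.trace(C_Ve)    # trace = var[Ve] = sigma[Ve]**2
C_Vn = C_Ve / var_total       # normalized covariance, Eq. (8.11)

rho = C_Ve[0, 1] / np.sqrt(C_Ve[0, 0] * C_Ve[1, 1])   # correlation, Eq. (8.10)
```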
132
Statistical moments of a complex-valued observable
    f               σ[Re(Vn)]²    σ[Im(Vn)]²    Cov[Re(Vn), Im(Vn)]    ρ[Vn]
    f1 = 268 MHz    0.4863        0.5137        −0.2610                −0.52
    f2 = 500 MHz    0.4292        0.5708        −0.0538                −0.11

Table 8.3: Covariance elements of Vn.
The normalized covariance matrices corresponding to the example of Section 8.2 are given
in Table 8.3. Both at f1 and f2 , σ[Re(Vn )]2 and σ[Im(Vn )]2 have comparable values,
although at f2, σ[Im(Vn)]² increases, and σ[Re(Vn)]² decreases accordingly. Nevertheless,
ρ[Vn] demonstrates that the correlation between Re(Ve) and Im(Ve) is roughly five times
stronger at f1 than at f2. This observation is corroborated by the sample distribution
in Fig. 8.3(a) and Fig. 8.3(b): at f1 the samples are spread along an oblique direction,
whereas at f2 the samples describe a more circular pattern.
The non-vanishing value of ρ[Vn ] is also the sign of a statistical redundancy between the
randomness of Re(Ve ) and Im(Ve ). Hence, it is natural to seek a representation in which
the components of Ve are decoupled, as is possible by performing a principal-component
analysis (PCA) of CVn .
Based on E[Ve] and CVe, the 2-D Chebychev inequality reads, for any m > 0, [68]

    PVe( √[ (1/2) (Ve − E[Ve])ᵗ CVe⁻¹ (Ve − E[Ve]) ] > m ) ≤ 1/m²,    when CVe⁻¹ exists.    (8.12)
The confidence regions obtained via this inequality can be established by a PCA of CVn ,
as we explain in more detail now.
8.3.2 Principal component analysis (PCA) of CVn

Spectral decomposition of CVn
A representation of Ve on a basis of uncorrelated complex random variables can be
established by determining the principal components of CVn . The matrix CVn is real
symmetric and positive semi-definite. Thus, it can be decomposed spectrally by applying
an eigenvalue decomposition, which yields the following representation
    CVn = [u(1) u(2)] Λn [u(1) u(2)]ᵗ,    with Λn = diag(λ²n,1, λ²n,2),    (8.13)
with ordered eigenvalues λn,1 ≥ λn,2 ≥ 0 corresponding to the principal components. The
unit vectors u(1) = (u1(1), u2(1)) and u(2) = (u1(2), u2(2)) define the principal directions,
which are orthonormal by construction. Equation (8.13) shows that Λn is a representation
of CVn on the basis (u(1), u(2)), which, because of the normalization, implies that

    Tr(Λn) = λ²n,1 + λ²n,2 = Tr(CVn) = 1,    (8.14)

where Tr is the trace operator. The normalized voltage Vn is then projected orthogonally
on the basis u(1) , u(2) and becomes
    Vn = Vn(1) u(1) + Vn(2) u(2),    (8.15)

with Vn(1) = u1(1) Re(Vn) + u2(1) Im(Vn) and Vn(2) = u1(2) Re(Vn) + u2(2) Im(Vn). The
real-valued random variables Vn(1) and Vn(2) are uncorrelated and their variances are equal
to the eigenvalues λ²n,1 and λ²n,2, respectively. Consequently, the vector u(1) represents
the direction with the highest dispersion, and λ²n,1 (respectively λ²n,2) is the contribution
of Vn(1) (respectively Vn(2)) to the total variance of Vn, which is unitary.
The orthogonal frame (E[Ve], u(1), u(2)) can be deduced from (E[Ve], Re(Ve), Im(Ve)) by a
rotation over an angle

    θVe = arctan(u1(1) / u2(1)) ∈ [−π/2; +π/2],    if u2(1) ≠ 0.    (8.16)
The angle θVe yields more detailed statistical information than ρ[Ve], to which it is related
as cos(θVe) = ρ[Ve]. Applying Chebychev's inequality to the new representation of Vn leads
to the definition of the interior of ellipses Km as

    Km = { Vn ∈ C : (Vn(1) / (√2 λn,1))² + (Vn(2) / (√2 λn,2))² ≤ m² },    for m > 0.    (8.17)
These ellipses are centered around the origin and have their axes parallel to u(1) and u(2),
with lengths equal to √2 λn,1 and √2 λn,2. Chebychev's inequality can then be recast into

    PVn(Vn ∈ R² \ Km) ≤ 1/m².    (8.18)
If λn,1 ≫ λn,2, the values of Ve are mainly distributed along a line parallel to u(1), and
passing through E[Ve]. Then, the study of Vn(1) alone would already yield a very good
picture of the spread of Ve around E[Ve].
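The PCA described above amounts to a 2 × 2 eigenvalue problem. A sketch, assuming NumPy, with synthetic correlated samples standing in for Ve:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic correlated stand-in for (Re(Ve), Im(Ve)).
re = rng.standard_normal(50_000)
im = -0.5 * re + 0.8 * rng.standard_normal(50_000)

X = np.stack([re, im])
C = np.cov(X, bias=True)
C_Vn = C / np.trace(C)                 # unit-trace covariance of Vn

# Spectral decomposition of Eq. (8.13), eigenvalues sorted in decreasing order.
eigvals, U = np.linalg.eigh(C_Vn)
order = np.argsort(eigvals)[::-1]
lam = eigvals[order]                   # (lambda_n,1^2, lambda_n,2^2); sums to 1, Eq. (8.14)
U = U[:, order]                        # columns are u(1) and u(2)

# Orthogonal projection of the normalized samples, Eq. (8.15).
sigma = np.sqrt(np.trace(C))
Vn = (X - X.mean(axis=1, keepdims=True)) / sigma
V1, V2 = U.T @ Vn                      # uncorrelated principal components

theta = np.degrees(np.arctan(U[0, 0] / U[1, 0]))   # rotation angle of Eq. (8.16)
```

The variances of V1 and V2 recover the eigenvalues, so the ellipse test of Eq. (8.17) only involves scalar divisions.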
Example
Table 8.4 contains the principal components associated with the example of Section 8.2.

    f               λ²n,1    λ²n,2    θVe
    f1 = 268 MHz    0.761    0.239    −46.5°
    f2 = 500 MHz    0.589    0.411    +71.4°

Table 8.4: Principal components of CVn.
At f1 , the principal direction is predominant and bears more than 76 % of (σ[Vn ])2 , whereas
at f2 , both axes have comparable magnitudes with λ2n,1 amounting to approximately 59 %
of (σ[Vn ])2 . The Chebychev ellipses will therefore have more rounded shapes at f2 than
at f1, as is shown in Figs 8.4(a) and 8.4(b), where the first three ellipses are displayed.
(a) f1 = 268 MHz    (b) f2 = 500 MHz    [axes: Re(Vn) versus Im(Vn)]

Figure 8.4: 10⁴ normalized samples Vn versus the Chebychev ellipses Km (solid line) and the
Chebychev circles Cm (dashed line), for 1 ≤ m ≤ 3.
At f1, the ellipses are more oblong and follow the direction of highest dispersion of the
samples. As expected, the uncertainty quantification achieved via the principal components
is more detailed than the information obtained solely from σ[Ve]: the presence of a
preferential direction of dispersion is revealed by the PCA, while in the case of an even
spread in the complex plane, the results of the PCA reduce to those of the isotropic σ[Ve].
In addition, the PCA reveals the essence of the intrinsic information contained in the
covariance matrix. Owing to the limited dimension (2 × 2) of CVn, its spectral
decomposition can be performed in a computationally cheap manner.
8.4 Higher-order moments
Higher-order statistics of Ve can also be computed via quadrature. However, unlike the
average, the variance and the covariance, the practical interpretation of these moments is
not as straightforward. For instance, when the third-order moment is considered, one can
compute the averages E[(Ve)³], E[(Ve)²Ve∗] and their complex conjugates, or in a vectorial
case, the averages of the tensors E[Ve ⊗ Ve ⊗ Ve ], E[Ve ⊗ Ve ⊗ Ve∗ ] and their conjugates.
Nonetheless, the interpretation of the subsequent information as to its meaning for the
distribution of Ve is not obvious.
A more viable strategy consists in combining the information stemming from higher-order
moments of the components of Ve, i.e. (Re(Ve), Im(Ve)) or (Vn(1), Vn(2)), which can be
interpreted, to deduce higher-order information about Ve.
8.4.1 Skewness
Given a real-valued random variable Y associated with a pdf fY, its skewness, or third-
order statistical moment, reads

    sk[Y] = E[(Yn)³] ∈ R,    where Yn = (Y − E[Y]) / σ[Y], when σ[Y] ≠ 0.    (8.19)
The skewness measures the degree of asymmetry of fY about its average E[Y ]. As shown
in Fig. 8.5(a), a symmetric pdf fY , with respect to E[Y ], will possess a vanishing skewness
sk[Y ] = 0, as is the case with Gaussian pdfs. If on the other hand the mass of fY
is concentrated to the left (respectively right) of E[Y ], i.e. the median of Y is smaller
(respectively larger) than E[Y ], then sk[Y ] > 0 (respectively sk[Y ] < 0).
In the case of the complex random variable Ve , a joint utilization of sk[Re(Ve )] and
sk[Im(Ve )] permits the analysis of the symmetry of the pdf fVe in the complex plane,
as illustrated in Fig. 8.5(b).
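Eq. (8.19) is readily evaluated from samples. In the sketch below the data are synthetic: a Gaussian sample (theoretical skewness 0) and an exponential sample, whose mass lies to the left of its mean (theoretical skewness +2):

```python
import numpy as np

def skewness(y):
    """Skewness sk[Y] = E[(Yn)**3] of Eq. (8.19)."""
    yn = (y - np.mean(y)) / np.std(y)
    return float(np.mean(yn ** 3))

rng = np.random.default_rng(5)
sk_sym = skewness(rng.standard_normal(200_000))    # symmetric pdf -> ~0
sk_exp = skewness(rng.exponential(1.0, 200_000))   # right-tailed pdf -> ~+2
```

For the complex observable, applying the same routine to Re(Ve) and Im(Ve) gives the joint symmetry picture of Fig. 8.5(b).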
The third-order statistical moments corresponding to the test case of Section 8.2 are listed
in Table 8.5. These values confirm the distribution of the samples observed in Figs 8.2(a)
(a) Real-valued random variable Y    (b) Complex-valued variable Ve
Figure 8.5: Skewness of real- and complex-valued random variables.
and 8.2(b), or Figs 8.3(a) and 8.3(b). Furthermore, the computation of these third-order
moments is numerically more demanding than the computation of the variance, as
demonstrated by the increase of NV compared to Table 8.1.
    f               sk[Re(Ve)]    sk[Im(Ve)]    NV
    f1 = 268 MHz    1.36          −0.04         321
    f2 = 500 MHz    0.40          −1.16         1537

Table 8.5: Skewness of Ve, and complexity NV.
8.4.2 Kurtosis
The kurtosis, or fourth-order moment, of a real-valued random variable Y is defined as

    κ[Y] = E[(Yn)⁴] ≥ 0,    where Yn = (Y − E[Y]) / σ[Y], when σ[Y] ≠ 0.    (8.20)
As a reference, the kurtosis of a Gaussian random variable equals 3. The kurtosis
information can first be exploited qualitatively in the vicinity of E[Y ]. As κ[Y ]
increases beyond 3, the shape of fY becomes more peaked around E[Y ]. This indication
is at the core of Gaussianity tests in Radio-Frequency Interference mitigation [124, 125].
Secondly, κ[Y] weighs the tail of fY, i.e. the asymptotic rate of decay of fY. This
can be understood by observing, in Fig. 8.6, that κ[Y] is obtained by multiplying the
probability distribution of Yn by the rapidly increasing even function w : y ↦ y⁴, which
enhances the contribution for large values of Yn . Given the fact that Yn is normalized,
large values of Yn actually correspond to samples of Y located several σ[Y ]s away from
E[Y]. Chebychev's inequality guarantees that such samples occur only exceptionally,
which therefore correspond to rare events that are also risky given their large magnitudes³.
Figure 8.6: Principle of the kurtosis with the weighting function w : Yn 7→ (Yn )4 (dotted line).
Comparison of two distributions PYn,1 (solid line) and PYn,2 (dashed line).
In the case of the complex variable Ve, the observation that extreme samples of Ve will
give rise to large values of |Vn| motivates the definition of κ[Ve] as

    κ[Ve] = E[|Vn|⁴] = E[ |(Ve − E[Ve]) / σ[Ve]|⁴ ] ≥ 0.    (8.21)

It is worth noting that the detection of risky values of Ve can also be performed by studying
the kurtosis of the magnitude |Ve|, as is done in [129, 130]. The risk indicator κ[|Ve|] reads

    κ[|Ve|] = E[ ((|Ve| − E[|Ve|]) / σ[|Ve|])⁴ ] ≥ 0,    (8.22)
where E[|Ve|] and σ[|Ve|] come into play instead of E[Ve] and σ[Ve]. As long as E[|Ve|] is
negligible, employing κ[|Ve|] or κ[Ve] leads to equivalent results since extreme values of Ve
will necessarily give rise to extreme values of |Ve| and vice versa. However, when E[|Ve|]
becomes large, thereby indicating that Re(E[Ve]) or Im(E[Ve]) is significant, a sample
located around the conjugate E[Ve]∗ could be an extreme sample (if σ[Ve] ≪ |E[Ve]|),
which is not detected by κ[|Ve|] because |E[Ve]∗| = |E[Ve]|. The kurtosis κ[Ve] does not
suffer from this ambiguity.
³For this reason, the kurtosis is an essential tool in financial-risk analysis [126–128].
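Eqs (8.20)-(8.21) can be illustrated on synthetic data; the reference values are 3 for a real Gaussian variable, above 3 for a heavier-tailed Student-t variable, and 2 for a circular complex Gaussian variable (for which |Vn|² is exponentially distributed):

```python
import numpy as np

rng = np.random.default_rng(6)

def kurtosis(y):
    """Kurtosis kappa[Y] = E[(Yn)**4] of Eq. (8.20) (Gaussian reference: 3)."""
    yn = (y - np.mean(y)) / np.std(y)
    return float(np.mean(yn ** 4))

def kurtosis_complex(v):
    """kappa[Ve] = E[|Vn|**4] of Eq. (8.21) for a complex-valued observable."""
    vc = v - np.mean(v)
    sigma = np.sqrt(np.mean(np.abs(vc) ** 2))
    return float(np.mean(np.abs(vc / sigma) ** 4))

k_gauss = kurtosis(rng.standard_normal(200_000))        # ~3
k_heavy = kurtosis(rng.standard_t(10, size=200_000))    # >3: riskier tails
k_cplx = kurtosis_complex(rng.standard_normal(200_000)
                          + 1j * rng.standard_normal(200_000))   # ~2
```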
We have exploited this property to study resonance phenomena occurring, for instance,
in thin-wire structures [129, 130]. With the resonance dispersion interpreted as risky
behaviour, the combined analysis of σ[Ve ] and κ[Ve ] enables the distinction between
• a situation where a low variance is associated with a high kurtosis, which reveals
that although most of the samples are clustered around E[Ve ], the risk of obtaining
some extreme samples Ve is important,
• a situation where a high variance still corresponds to a limited kurtosis, which implies
that, in spite of being spread in a large domain surrounding the mean, Ve is unlikely
to assume extreme values beyond this range.
The test case considered hitherto in this chapter exemplifies the aforementioned situations,
as can be read in Table 8.6. Although σ[Ve] is larger at f2 than at f1, κ[Ve] is
conversely larger at f1 than at f2. The dominance of σ[Ve] at f2 has been observed in
Figs 8.2(a) and 8.2(b), where the centered samples Vc exhibit a larger physical dispersion
in the complex plane. On the other hand, the higher value of κ[Ve ] at f1 explains the
larger spread of the normalized samples Vn observed in Figs 8.3(a) and 8.3(b).
    f               σ[Ve]     κ[Ve]    NV
    f1 = 268 MHz    168 mV    2.47     705
    f2 = 500 MHz    209 mV    2.25     3329
    f3 = 215 MHz    773 mV    2.03     705
    f4 = 303 MHz    453 mV    10.12    705

Table 8.6: Standard deviation and kurtosis of Ve.
This effect is even more pronounced at the frequencies f3 = 215 MHz and f4 = 303 MHz.
Based on the results in Table 8.6, σ[Ve] at f3 is almost twice as large as σ[Ve] at f4, which
could lead one to expect a larger spread of the centered samples at f3 than at f4. In practice,
however, Fig. 8.7(a) shows that the samples at f3 form a cluster around the average and
that the most extreme samples are observed at f4 .
An analysis of κ[Ve] in Table 8.6 indicates that the statistical dispersion, measured in
units of σ[Ve], is larger at f4 than at f3. This indication is backed by Fig. 8.7(b),
where the normalized samples Vn are plotted: all the samples at f3 lie within 2σ[Ve] of
E[Ve], whereas at f4 the dispersion of Ve extends beyond 4σ[Ve] from E[Ve], as confirmed by
Table 8.7.
(a) Centered samples Vc [axes: Re(Vc) [V] versus Im(Vc) [V]]    (b) Normalized samples Vn [axes: Re(Vn) versus Im(Vn)]

Figure 8.7: Distribution of 10⁴ samples at f3 = 215 MHz (squares) and f4 = 303 MHz (stars).
                               f = 215 MHz    f = 303 MHz    Chebychev bound
    Number of samples in C1    6242           8496           -
    Number of samples in C2    10 000         9366           7500
    Number of samples in C3    10 000         9677           8888
    Number of samples in C4    10 000         9865           9375
    Number of samples in C5    10 000         10 000         9600

Table 8.7: Distribution of the 10⁴ deterministic samples of Ve versus Chebychev bounds.
8.4.3 Higher-order moments
The logic followed to define the skewness and the kurtosis can be extended to higher
orders. Moments having an odd order can be defined for Re(Ve ) and Im(Ve ), by regarding
Ve as a real-valued vector. These odd moments will again assess the symmetry of the
distribution PVe . As for the statistical moments with an even order, they can be defined
for |Vn | to evaluate the weight of the tail of PVe , although this information is already
partially provided by the kurtosis.
8.5 Maximum-entropy principle
In addition to the valuable information that higher-order moments provide about the
probability distribution, they can also be computed to approximate the pdf of the
observable.
The Edgeworth series expansion is a means of performing such an
approximation [67, p. 508]. This method employs the successive statistical moments to
build a series in which the unknown probability distribution is expressed in terms of a
given probability distribution chosen beforehand. Instead of this method, a more heuristic
approach is described in this section, one which directly exploits our knowledge of
the statistical moments.
General principle
The maximum-entropy (MaxEnt) method [75, 76, 113] described hereafter is applied to
the case of a real-valued observable Y = h(Ve) associated with the pdf fY, where the
function h : C → R is measurable. For the sake of simplicity, Y is normalized into the
variable U defined as

    U = (Y − E[Y]) / σ[Y],    when σ[Y] ≠ 0,    (8.23)

where E[U] = 0 and σ[U] = 1. The pdf fY of Y can be deduced from the pdf fU of U as

    fY : R ∋ y ↦ (1/σ[Y]) fU((y − E[Y]) / σ[Y]) ∈ [0; ∞).    (8.24)
To begin with, the MaxEnt method assumes that the first Nmom statistical moments of
U, denoted {mk , k = 1, . . . , Nmom }, are known. As explained in Section 8.1, each of these
moments can be regarded as the result of the evaluation of a functional on fU

    mk = ∫R uᵏ fU(u) du,    ∀k ∈ {0, . . . , Nmom}.    (8.25)
By definition of a pdf, the zero-order moment m0 = 1 is also available. The statistics at
hand, gathered in the set M(Nmom ) = {mk , k = 0, . . . , Nmom }, represent the totality of
our knowledge about the randomness of U, i.e. about fU .
The knowledge about fU can be quantified by an entropy functional S inspired from
information theory and defined as [68, p. 421], [76]

    S : L¹(R, R₊) ∋ fU ↦ −∫R fU(u) ln(fU(u)) du ∈ R,    (8.26)

with L¹(R, R₊) being the Lebesgue space of measurable functions that are positive-valued.
The idea at the heart of the MaxEnt method states that U should be as unbiased as
possible and compatible with the data contained in M(Nmom ), which correspond to the
Nmom + 1 conditions (8.25)[76]. In other words, fU must not carry more information than
is required by the constraints. Thus, among all the pdfs fulfilling the constraints, the most
suitable one is the pdf fU,Nmom, which maximizes the entropy S. This translates into the
formulation of an optimization problem

    fU,Nmom = arg max S[b]    subject to gk[b] = 0, k = 0, . . . , Nmom,    (8.27)

where gk[b] = ∫R uᵏ b(u) du − mk, for any b ∈ L¹(R, R₊). This problem is handled via
the method of Lagrange multipliers [131, p. 195] and yields the following generic form of
the pdf fU,Nmom
    fU,Nmom : R ∋ u ↦ exp(− Σ_{k=0}^{Nmom} λk uᵏ).    (8.28)
To obtain the Lagrange factors {λk , k = 0, . . . , Nmom }, Eq. (8.28) is inserted into the
constraints expressed by Eq. (8.25). The resulting set of non-linear equations is solved by
using a least-squares non-linear regression algorithm [132, p. 607].
The particular form of the MaxEnt pdf in Eq. (8.28) leads to specific types of smooth
pdfs. For instance, if only the average of U is available, fU,1 will be an exponential pdf.
With the average and the variance of Y at hand, the corresponding pdf fU,2 is Gaussian.
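The constrained problem (8.25)-(8.28) can be sketched with a generic nonlinear least-squares routine, as suggested above. The example below (Python with NumPy/SciPy; the truncated support, grid, starting point and bounds are choices of this sketch, not of the method) prescribes the Gaussian moments m = (1, 0, 1), so the recovered fU,2 should be the standard Gaussian, i.e. λ0 = ln √(2π), λ1 = 0 and λ2 = 1/2:

```python
import numpy as np
from scipy.optimize import least_squares

m = np.array([1.0, 0.0, 1.0])        # target moments m_0, m_1, m_2 of U
Nmom = len(m) - 1

u = np.linspace(-8.0, 8.0, 4001)     # truncated support for the integrals
du = u[1] - u[0]

def pdf(lam):
    """Generic MaxEnt form of Eq. (8.28): exp(-sum_k lam_k * u**k)."""
    return np.exp(-sum(l * u**k for k, l in enumerate(lam)))

def residuals(lam):
    """Constraint mismatches g_k[f] = int u**k f(u) du - m_k, Eq. (8.25)."""
    f = pdf(lam)
    return np.array([np.sum(u**k * f) * du - m[k] for k in range(Nmom + 1)])

# Keep lam_2 positive so the trial pdf stays integrable on the grid.
sol = least_squares(residuals, x0=np.array([0.0, 0.0, 0.1]),
                    bounds=([-10.0, -10.0, 1e-6], [10.0, 10.0, 10.0]))
lam = sol.x
```

In practice the targets mk would be the quadrature-computed moments of U, and the support would be chosen from the observed range of the samples.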
Application
The MaxEnt method is illustrated by considering the thin wire described in Section 8.2.
The study focuses on |Ve |, which is taken as an example of a real-valued function, the
statistical moments of which are computed using a sparse-grid rule. The empirical
distribution of the magnitudes of 10⁴ deterministic samples, computed in 27 minutes,
serves as a reference. The MaxEnt pdf is computed for an increasing number of moments
1 ≤ Nmom ≤ 6. The number NV of samples necessary to obtain these six statistical
moments with a relative error of 1 % equals NV =705 at f =268 MHz and NV =7169 at
f =500 MHz. The results are shown in Figs 8.8(a) and 8.8(b). These plots show that the
MaxEnt pdfs obtained with a low number of statistical moments give a coarse impression
of the spread of |Ve |. The quality of the MaxEnt pdf is improved by incorporating more
statistics, when they exist, as can be seen from the support and the shape of the MaxEnt
pdf obtained when Nmom = 6.
8.6 Numerical cost
The evaluation of statistical moments of increasing orders bears a certain numerical
cost. This can be seen in Figs 8.9(a) and 8.9(b), which display the relative error of the
quadrature approximation of the even moments E[|Ve|²], E[|Ve|⁴] and E[|Ve|⁶], as a function
of the complexity NV of the sparse-grid (SG) and the Monte-Carlo (MC) quadrature rules.
The quadrature rules are slower to converge at 500 MHz than at 268 MHz, which is
caused by a rougher behaviour of Ve, as a function of γ, at 500 MHz. Overall, the sparse-
grid rule converges faster than the Monte-Carlo rule, which is particularly obvious for the
computation of E[|Ve|²]. This feature can be explained by the ability of the sparse-grid rule
to take advantage of the smoothness of the integrand to accelerate its convergence.
These plots highlight a hierarchy in the numerical effort required, as the
convergence of E[|Ve|²] is more rapid than the convergence of E[|Ve|⁴], which in turn
(a) f = 268 MHz    (b) f = 500 MHz    [pdf f|Ve| versus |Ve| [V]; curves: deterministic reference and Nmom = 2, . . . , 6]

Figure 8.8: Maximum-entropy pdf of |Ve| using Nmom ∈ {1, . . . , 6} statistical moments.
(a) f = 268 MHz    (b) f = 500 MHz    [relative error versus complexity NV for E[|Ve|²], E[|Ve|⁴] and E[|Ve|⁶], each with the SG and MC rules]

Figure 8.9: Relative error versus the complexity NV of the quadrature rules: sparse-grid rule
(solid line) and Monte-Carlo rule (dashed line).
converges faster than E[|Ve |6 ]. Hence, although higher-order statistics refine the statistical
characterization of the observable Ve , it is essential to be aware of the supplementary
numerical effort necessary to obtain these higher-order statistics.
8.7 Conclusion
This chapter has addressed the question of the practical interpretation of the
statistical moments of the observable that are accurately computed by quadrature. This
interpretation was enabled by considering statistical moments as the results of functionals
tested by the pdf of the unknown.
The complex-valued nature of Ve has been handled by resorting to statistics defined either
by regarding Ve as a complex scalar, or by viewing Ve as a real-valued random vector.
Chebychev’s inequality allowed for the identification of sub-domains of the complex plane
containing the majority of the samples of Ve , either isotropically, in the form of circular
domains, or anisotropically with the aid of ellipses.
A systematic method has also been proposed to compute meaningful higher-order
statistical moments of Ve . The odd moments can be defined with respect to the real
and imaginary parts of Ve to deduce the location of the mass of Ve ’s distribution in the
complex plane. This analysis could be refined further by analyzing the odd moments
of the principal components of Ve . On the other hand, even statistical moments of the
magnitude of the normalized observable Vn can be employed to estimate the likelihood of
occurrence of rare or risky events.
Finally, a maximum-entropy method has been applied to approximate the probability
density function of |Ve |. This results in a smooth approximation of the pdf, the quality of
which can be improved by increasing the number of statistical moments taken into account
in the maximum-entropy algorithm. Such an improvement comes at the cost of a higher
numerical effort to compute higher-order statistics by quadrature.
Chapter 9
Discrete-inverse-Fourier-transform (DIFT) method
The non-intrusive rationale on which the previous chapter has been built consists in
the interpretation of classical statistical moments as results of functionals tested by the
unknown probability distribution. The reasoning of the present chapter is somewhat
different as it aims at inverting a computable transform of the probability density function
to achieve a complete stochastic characterization of the observable. The transform of the
pdf in question is computed in a non-intrusive fashion via a quadrature rule.
9.1 Preliminary remarks
Given a measurable function h : C → R, a real-valued observable Y is defined as
Y = h(Ve ). The observable Y could for instance be the real or imaginary part of Ve ,
as well as the magnitude of Ve . In any case, the dependence of Ve on the parameters
γ = (α, β) of the configuration induces a dependence of Y on γ, which is randomly
distributed in the domain G = A × B.
The normalization of Y introduces the variable U = (Y − E[Y ])/σ[Y ]. As stated in
Section 8.2, Y and U have the same type of randomness, as can be seen from the
correspondence between their pdfs fY and fU , respectively
fY (y) = (1/σ[Y ]) fU ((y − E[Y ])/σ[Y ]),   for any y ∈ R.   (9.1)
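The effect of this normalization can be checked numerically. The sketch below uses arbitrary illustrative samples (a hypothetical observable, not the thin-wire voltage) and verifies that the normalized variable U indeed has zero mean and unit standard deviation.

```python
import random
import statistics

random.seed(7)

# Illustrative stand-in samples for an observable Y (arbitrary distribution).
y_samples = [random.uniform(0.0, 0.7) for _ in range(20_000)]

mean_y = statistics.fmean(y_samples)
std_y = statistics.pstdev(y_samples)

# U = (Y - E[Y]) / sigma[Y], the normalized variable of Eq. (9.1).
u_samples = [(y - mean_y) / std_y for y in y_samples]

print(f"E[U]     = {statistics.fmean(u_samples):+.2e}")
print(f"sigma[U] = {statistics.pstdev(u_samples):.6f}")
```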
The quest for the yet unknown pdf fU can now begin by writing fU in terms of its Fourier
transform, as
fU (u) = Fk−1 [Fr [fU (r)] (k)] (u),   for any u ∈ R,   (9.2)
where Fr [·] stands for the Fourier transformation with respect to the variable r, and
Fk−1 is the inverse Fourier transformation with respect to the variable k. This seemingly
tautological equation is actually very useful as will be demonstrated below.
9.2 Characteristic function
The Fourier transform of the pdf fU can be written explicitly in terms of the characteristic
function of U, which is denoted ΦU : R → C, as
Fr [fU (r)] (k) = ∫_R fU (r) e^{−jkr} dr = ΦU (k)∗ = ΦU (−k),   for any k ∈ R.   (9.3)
This operation is well defined given the integrability of fU , which is an intrinsic property
of any pdf. In Eq. (9.3), ΦU can be regarded either as a functional or a function. As a
functional, ΦU represents the inverse Fourier transform of fU , whereas when considered
as a function, Eq. (9.3) expresses the fact that, for each k ∈ R, ΦU (k) is the average of
the random variable exp(jkU), i.e. ΦU (k) = E[exp(jkU)]. With the latter interpretation,
the dependence of U on the random parameters γ of the configuration can be exploited
together with the fundamental theorem (see Section 3.3.3, Eq. (3.7)), to rewrite Eq. (9.3)
in terms of the known pdf fγ
ΦU (k) = E [exp(jkU(γ))] = ∫_G fγ (γ) exp(jkU(γ)) dγ,   for any k ∈ R.   (9.4)
Consequently, ΦU (k) can be computed with the aid of a quadrature rule Q over the
domain G. In other words, the spectrum of fU is pointwise computable. Likewise, the
value of ΦU at several points k1 , . . . , kN ∈ R is computed by applying Q to the vector of integrands, which results in

Φ̂U = [ΦU (k1 ), . . . , ΦU (kN )] ≈ Q [fγ (γ) exp(jk1 U(γ)), . . . , fγ (γ) exp(jkN U(γ))] .

Given the similar forms of the components of Φ̂U , employing the same quadrature rule should prove numerically efficient. The convergence of this quadrature rule is monitored via the evolution of the L1 –norm (|x|1 = Σk |xk |) of Φ̂U .
The values of ΦU in the vicinity of the origin k = 0, are particularly interesting: provided
that U has finite moments of order n, ΦU is n-times differentiable around k = 0, and its
n-th derivative is related to the n-th order statistical moment of U as [67, p. 412]
E [U^n ] = j^{−n} (d^n ΦU (k)/dk^n) |_{k=0} .   (9.5)
If these differentiations are performed accurately, then the characteristic function can be
employed to generate statistical moments.
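Eq. (9.5) can be exercised on a characteristic function known in closed form. The sketch below assumes, purely for illustration, a standard normal variable with ΦU (k) = exp(−k²/2), and recovers E[U] and E[U²] by central finite differences.

```python
import cmath

def phi(k: float) -> complex:
    # Characteristic function of a standard normal variable (illustrative choice).
    return cmath.exp(-0.5 * k * k)

h = 1e-3  # finite-difference step for the derivatives at k = 0

# n = 1: E[U] = j^{-1} * (d Phi / dk)(0)
d1 = (phi(h) - phi(-h)) / (2.0 * h)
moment1 = (d1 / 1j).real

# n = 2: E[U^2] = j^{-2} * (d^2 Phi / dk^2)(0) = -(d^2 Phi / dk^2)(0)
d2 = (phi(h) - 2.0 * phi(0.0) + phi(-h)) / (h * h)
moment2 = (-d2).real

print(moment1, moment2)  # ~0 and ~1 for the standard normal
```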
9.3 Inversion of the characteristic function

9.3.1 Probability density function fU
With the computability of the spectrum of fU established by Eq. (9.4), the following issue
regards the calculation of the inverse Fourier transform of this spectrum. Starting from
its definition, the inverse Fourier transformation is given by
fU (u) = Fk−1 [ΦU (−k)] (u) = (1/2π) ∫_R ΦU (−k) e^{jku} dk,   for any u ∈ R.   (9.6)
The direct application of this formula is hindered by two practical limitations, viz. the
improper nature of the integral and the need to compute ΦU for each k ∈ R, which is
numerically impossible since a quadrature rule must be used for each evaluation of ΦU (k).
The latter difficulty is circumvented by approximating the integral in Eq. (9.6) by a
repeated midpoint rule of step τ , which yields the following Fourier series [133, 134]
fU (u) ≈ (1/2π) Σ_{l=−∞}^{+∞} τ ΦU (−lτ ) e^{jlτ u} ,   for any u ∈ R.   (9.7)
The sampling process produces a periodization of the spectrum of ΦU , i.e of fU . This
translates into the superposition of copies of the graph of fU at intervals of 1/τ , which
can give rise to aliasing when the copies are not suitably separated. Such a separation
is possible by ensuring that 1/τ is larger than the length ∆U of the support of fU .¹
Nonetheless, the determination of ∆U is in itself intricate: by construction, ∆U is a
function of the range of Y , and in turn a function of the range of Ve , which is not accessible
without performing a deterministic sweep, as signaled in Section 3.2.3. However, since U
is a normalized variable (E[U] = 0 and σ[U] = 1), Chebychev’s inequality offers general
bounds as to the maximum number of samples that can be expected at a certain distance
away from E[U]. If, for instance, ∆U is chosen as an interval centered around the origin
and of half-length 5 (respectively 10), Chebychev’s inequality guarantees that, regardless
of the randomness of U, ∆U will contain at least 96 % (respectively 99 %) of the samples
of U. Owing to their pessimistic nature, Chebychev’s bounds should perform suitably for
most probability distributions.
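These Chebychev guarantees can be spot-checked numerically. The sketch below normalizes samples of a deliberately skewed test distribution (an exponential law, an arbitrary illustrative choice) and verifies the 96% and 99% bounds.

```python
import random
import statistics

random.seed(1)

# A deliberately skewed test distribution (arbitrary illustrative choice).
samples = [random.expovariate(1.0) for _ in range(50_000)]
mu = statistics.fmean(samples)
sigma = statistics.pstdev(samples)
u = [(s - mu) / sigma for s in samples]

fractions = {}
for half_length, bound in [(5.0, 0.96), (10.0, 0.99)]:
    # Chebychev: P(|U| <= c) >= 1 - 1/c^2, whatever the distribution of U.
    frac = sum(abs(v) <= half_length for v in u) / len(u)
    fractions[half_length] = frac
    print(f"fraction within +/-{half_length:g}: {frac:.4f} (bound {bound})")
```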
The following numerical issue that requires caution concerns the infinite summation
involved in Eq. (9.7). This sum is truncated at a given order NΦ , which sets the highest
¹ For distributions with an unbounded support, this requirement can be loosened by considering the support of fU as the set of values u ∈ R for which fU (u) is higher than a given threshold.
value of k at which ΦU is evaluated. Moreover, by taking advantage of the property
ΦU (−k) = ΦU (k)∗ for any k ∈ R, fU is approximated as follows
fU (u) ≈ fU,NΦ (u) = (τ /π) [ 1/2 + Σ_{l=1}^{NΦ} Re( ΦU (lτ ) e^{−jlτ u} ) ] ,   for any u ∈ R.   (9.8)
This formula involves NΦ integrals to be computed by quadrature. In spatiotemporal
vocabulary, NΦ τ represents the highest “frequency” of the truncated spectrum of fU .
The function fU,NΦ , that we shall call the discrete-inverse-Fourier-transform (DIFT) pdf,
can hence be regarded as a low-frequency approximant of fU . The quality of fU,NΦ will be
all the more appreciable as the interval [−NΦ τ ; +NΦ τ ] captures most of the support of ΦU .
For a given value of the sample step τ , the ideal solution would then consist in choosing
NΦ as high as possible to broaden the interval [−NΦ τ ; +NΦ τ ]. On the other hand, NΦ
also determines the number of values of ΦU that are required and therewith the number
of integrals that need to be computed by quadrature. In addition, for values of NΦ τ
that are too large, the factors e±jNΦ τ u cause the integral defining ΦU to become highly
oscillatory, which implies more numerical effort to accurately approximate ΦU .
Unfortunately, since the values of Ve (and thereby of U) are obtained via the solution
of a boundary-value problem, it is not easy to determine the spectrum of fU a priori.
Therefore, as suggested by Waller et al [135], the choice of the parameters of this method
requires “some experiment on the part of the statistician”.
The rationale developed thus far puts forward the resolution of the pdf fU by
ensuring that the choice of the step size τ avoids aliasing. Once τ is fixed, the number NΦ of
evaluations of the characteristic function is chosen according to the support of ΦU .
Conversely, it is also possible to fix the number NΦ of samples, and then, depending
on the support of ΦU , to deduce the corresponding value of τ . With such a procedure, fU
may suffer from aliasing.
Further, there are no constraints as to the number of evaluations of fU . This is a key
difference with a discrete Fourier transformation or a fast Fourier transformation (FFT),
in which the number of evaluations of fU and ΦU are mutually related [121, 136].
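The pipeline of Eqs. (9.4) and (9.8) — pointwise computation of ΦU by quadrature, followed by the truncated Fourier series — can be sketched end-to-end for a toy configuration: a single parameter γ uniform on [0, 1] and the arbitrary stand-in observable U(γ) = 2γ − 1, whose exact pdf is known. None of these choices stems from the thin-wire setup.

```python
import cmath
import math

def observable(gamma: float) -> float:
    # Arbitrary stand-in for the normalized observable U(gamma): gamma is the
    # single random parameter, uniform on G = [0, 1], so f_gamma = 1 there.
    return 2.0 * gamma - 1.0

def char_fun(k: float, n_quad: int = 2048) -> complex:
    # Phi_U(k) = int_G f_gamma(g) exp(j k U(g)) dg, by a midpoint rule (Eq. (9.4)).
    return sum(cmath.exp(1j * k * observable((m + 0.5) / n_quad))
               for m in range(n_quad)) / n_quad

def dift_pdf(u: float, tau: float = 0.2, n_phi: int = 200) -> float:
    # Truncated Fourier series of Eq. (9.8), with k sampled at l*tau, l <= N_Phi.
    acc = 0.5
    for l in range(1, n_phi + 1):
        acc += (char_fun(l * tau) * cmath.exp(-1j * l * tau * u)).real
    return tau / math.pi * acc

# U(gamma) above is uniform on [-1, 1], so the exact pdf equals 1/2 there.
approx = dift_pdf(0.0)
print(f"f_U(0) ~ {approx:.3f} (exact 0.5)")
```

The step τ = 0.2 keeps the periodized copies of the pdf (spaced 2π/τ apart) well separated from the support [−1, 1], in line with the aliasing discussion above.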
9.3.2 Cumulative distribution function
In addition to the pdf, the cumulative distribution function (cdf) FU of U is also very
helpful in practice. It completely characterizes the randomness of U by quantifying the
likelihood of observing samples lower than a given threshold value. Such information is
essential for safety analysis. Owing to its definition as [68]
FU (u) = ∫_{−∞}^{u} fU (t) dt,   for any u ∈ R,   (9.9)
FU can be built by post-processing the pdf fU . Alternatively, FU can be calculated directly
from ΦU by virtue of the Gil-Pelaez formula [133, 134]
FU (u) = 1/2 − ∫_R (ΦU (k)/(j2πk)) e^{−jku} dk,   for any u ∈ R.   (9.10)
This equation is handled analogously to Eq. (9.6) to obtain a formula involving a finite sum.² The resulting DIFT cdf is denoted FU,NΦ and given by
FU,NΦ (u) = 1/2 + (τ /2π) u − (1/π) Σ_{l=1}^{NΦ} Im[ ΦU (lτ ) e^{−jlτ u} ] / l ,   for any u ∈ R.   (9.11)

9.4 Application to a thin wire
The example of a horizontally undulating wire is now studied. The axis of the random
wire is described by
xα (y) = α sin [9π(y − ym )] ,   zα (y) = 0.05,   (in m), for y ∈ [ym ; yM ],   (9.12)
where α is uniformly distributed in A = [−0.04; +0.04] m. This wire is illuminated
by a plane wave E iβ with the following characteristics {|E iβ |=1 Vm−1 , θi = 45◦ , φi =
45◦ , perpendicular polarization, f =100 MHz}. The magnitude of the induced voltage
Y = |Ve | is chosen as the observable and normalized in the variable U ≡ U(|Ve |) =
(|Ve | − E[|Ve |])/σ[|Ve |]. To begin with, 10³ samples of |Ve | are computed by varying α
uniformly in its range A = [−0.04; +0.04] m. The motivation of such an over-sampling is
to sort the resulting values of |Ve | and deduce the empirical pdf femp and the cdf Femp ,
which serve as references.
² To obtain this equation, the singularity in Eq. (9.10) around k = 0 is handled as suggested by Bohman [133], i.e. by taking advantage of the continuity of ΦU around the origin, with ΦU (0) = 1, and by using the approximation Im( ΦU (k) e^{−jku} /k ) ∼ −ΦU (0) sin(ku)/k ∼ −ΦU (0) u, when k → 0.
9.4.1 Characteristic function
The characteristic function ΦU is computed over the domain k ∈ [0; 100] with a sampling
step τ = 1/10. Since α is the only random input, the different values of ΦU are evaluated
via a Clenshaw-Curtis rule with a relative error smaller than 1%.
The value and the magnitude of ΦU are plotted in Figs 9.1(a) and 9.1(b), respectively. In
the complex plane, ΦU (k) describes a spiral as k varies between 0 and 100. The decaying
amplitude of ΦU (k), as a function of k, can be observed in Fig. 9.1(b). According to
Eq. (9.5), the slope of ΦU around k = 0 contains all the information necessary to obtain
the statistical moments of |Ve |.
Figure 9.1: Normalized characteristic function of |Ve | for k ∈ [0; 100]: (a) ΦU in the complex plane; (b) amplitude |ΦU |.
9.4.2 Effect of the support of ΦU on the DIFT pdf and cdf
Several sub-domains of the type [0; kmax ] are taken into account, with 0 ≤ kmax ≤ 100.
The DIFT method is applied to the corresponding values of ΦU to deduce the DIFT pdf
and cdf that are denoted f|Ve |,kmax and F|Ve |,kmax , respectively. The goal is to evaluate the
accuracy of these results with respect to the width of the support of ΦU employed. Ideally,
one seeks a suitable accuracy by using as limited a bandwidth as possible. The actual
values of kmax considered are kmax ∈ {10, 20, 30, 50, 100}, where kmax ≤ 30 is regarded as the “low-frequency” range and kmax ≥ 50 as the domain of “high frequencies”.
The different DIFT pdfs are displayed in Figs 9.2(a) and 9.2(b).
Figure 9.2: DIFT pdfs f|Ve |,kmax versus the empirical pdf femp (histogram): (a) “low-frequency” DIFT pdfs; (b) “high-frequency” DIFT pdfs.
The use of a truncated Fourier transform in the DIFT method explains the undulating
aspect of the pdfs f|Ve |,kmax and causes them to assume unrealistic negative values, e.g.
around |Ve | = 0.298 V. This undershoot, known as a Gibbs oscillation [121, p. 105], stems
from the discontinuity of the pdf of |Ve | around the point |Ve | = 0.298 V. Sections 10.2.3
and 10.2.4 will demonstrate that the discontinuities of the pdf of |Ve | are generally caused
by stationary points of |Ve |, i.e. values of α that cancel the derivative ∂α |Ve |.
The low-frequency pdfs, displayed in Fig. 9.2(a) coarsely approximate the support and
the shape of femp . By increasing kmax , finer details of femp are reconstituted, such as the
discontinuity that appears around |Ve | = 0.249 V. For this particular example, the pdf of
|Ve | is correctly approximated with kmax = 30, i.e. with the values of ΦU over the interval
[0;30], as can be seen by comparing Figs 9.2(a) and 9.2(b).
As for the DIFT cdfs F|Ve |,kmax , Fig. 9.3(a) shows that the low-frequency cdf F|Ve |,10
already provides a suitable estimation of Femp . The improvement achieved by broadening
the support of ΦU appears when zooming into a given portion of the graph, as is done
in Fig. 9.3(b). Similarly to the DIFT pdf, the use of a Fourier transformation produces
slowly undulating DIFT cdfs, which can occasionally assume unrealistic values that are
either negative, or larger than 1. As a larger support of ΦU is taken into account, these
oscillations dampen, thereby improving the accuracy of the DIFT cdf.
In the DIFT method, the robustness of the cdf when compared to the pdf can be understood
by referring back to their definitions given in Eqs (9.6) and (9.10). In the integral leading
to the pdf, all values of ΦU are weighted by a complex exponential term. Hence, the
accuracy of the entire spectrum is needed to ensure a suitable accuracy of the pdf. On the
other hand, in the cdf, the division of the characteristic function ΦU (k) by the factor k is
equivalent to a low-pass filter, which emphasizes the contribution of the “low-frequency”
components, i.e. the values of ΦU (k) for lower values of k. These terms correspond to
slowly oscillating integrals, which are easier to compute than rapidly oscillating ones.
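The robustness of the DIFT cdf can be illustrated with a characteristic function known in closed form. The sketch below implements Eq. (9.11) assuming a standard normal U — purely for illustration, since the actual ΦU is only available through quadrature — and recovers its cdf.

```python
import math

def phi_u(k: float) -> float:
    # Characteristic function of a standard normal variable, known in closed
    # form (an illustrative assumption standing in for the computed Phi_U).
    return math.exp(-0.5 * k * k)

def dift_cdf(u: float, tau: float = 0.05, n_phi: int = 1000) -> float:
    # Discretized Gil-Pelaez inversion of Eq. (9.11).
    acc = 0.5 + tau * u / (2.0 * math.pi)
    for l in range(1, n_phi + 1):
        # For a real-valued Phi_U, Im[Phi_U(l*tau) e^{-j*l*tau*u}] = -Phi_U(l*tau)*sin(l*tau*u).
        acc += phi_u(l * tau) * math.sin(l * tau * u) / (math.pi * l)
    return acc

cdf_mid = dift_cdf(0.0)     # exactly 1/2 by symmetry
cdf_q95 = dift_cdf(1.6449)  # close to 0.95, since 1.6449 is the normal 95% quantile
print(cdf_mid, cdf_q95)
```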
9.4.3 Numerical effort
The values of the characteristic function are computed via a quadrature rule, which at the level l ∈ N employs NV (l) samples of Ve . At a given level l, the thus obtained approximation of the characteristic function is written as ΦU^[l] , and the relative error EΦU
Figure 9.3: DIFT cdfs F|Ve |,kmax for kmax ∈ [0; 100] versus the empirical cdf Femp : (a) global shape of the DIFT cdfs; (b) zoom around |Ve | = 0.3 V.
is defined in terms of NV (l) and kmax as

EΦU (NV (l), kmax ) = ( ∫_{[0;kmax ]} |ΦU^[l] (k) − ΦU^[l−1] (k)| dk ) / ( ∫_{[0;kmax ]} |ΦU^[l−1] (k)| dk ).   (9.13)
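Eq. (9.13) can serve directly as a stopping criterion when the quadrature level is raised. A minimal sketch, with an arbitrary stand-in observable and a nested trapezoidal rule playing the role of the quadrature hierarchy:

```python
import cmath

def observable(gamma: float) -> float:
    # Arbitrary stand-in for U(gamma), with gamma uniform on [0, 1].
    return 2.0 * gamma - 1.0

def phi_level(level: int, ks: list) -> list:
    # Phi_U sampled at the points `ks`, using N_V(level) = 2**level + 1 trapezoid nodes.
    n = 2 ** level + 1
    out = []
    for k in ks:
        vals = [cmath.exp(1j * k * observable(m / (n - 1))) for m in range(n)]
        out.append((0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1]) / (n - 1))
    return out

ks = [0.5 * i for i in range(1, 61)]  # sample k in (0; 30]
errors = []
prev = phi_level(2, ks)
for level in range(3, 8):
    cur = phi_level(level, ks)
    # Relative error of Eq. (9.13), with the integrals replaced by sums over the k-samples.
    err = sum(abs(a - b) for a, b in zip(cur, prev)) / sum(abs(b) for b in prev)
    errors.append(err)
    print(f"level {level}: N_V = {2 ** level + 1:4d}, E_Phi = {err:.2e}")
    prev = cur
```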
Figure 9.4 displays the evolution of the complexity NV that is required to evaluate ΦU
over [0; kmax ] with a relative error EΦU below 1%. The performances of the Clenshaw-Curtis, the trapezoidal and the Monte-Carlo rules are mutually compared. With all the
Figure 9.4: Complexity NV to evaluate ΦU over [0; kmax ] with a maximum relative error of 1 %.
quadrature rules, the determination of ΦU (k) for high values of k translates into an increase
of the complexity NV . Further, the Clenshaw-Curtis and the trapezoidal rules, which have
comparable behaviours, converge faster than the Monte-Carlo rule: determining ΦU (k)
over k ∈ [0; 30], with a precision of 1%, requires NV =129 samples for the Clenshaw-Curtis
rule, NV =257 samples for the trapezoidal rule, against NV =1025 samples in the case of
the Monte-Carlo rule.
9.4.4 Limiting the complexity
The computation of the characteristic function over a wide range can translate into a
prohibitive complexity of the quadrature rules employed, which can seriously hinder the
numerical efficiency of the DIFT method. Rather than fixing the value of kmax and then
accurately computing ΦU over k ∈ [0; kmax ], it is also possible to limit, at a given value
NV,max , the maximum number of samples allowed for the quadrature rule. Doing so may
imply that not all values of ΦU have converged. The characteristic function must then be
truncated to conserve only those components that have been computed correctly.
This procedure is illustrated for the thin-wire example by choosing NV,max =127. The
amplitude of the characteristic function, after NV,max evaluations of Ve , is depicted in
Fig. 9.5(a). The values of ΦU are accurate up to k=60. Beyond this threshold, the values
of the characteristic function are no longer reliable. In Fig. 9.5(b), the pdfs f|Ve |,60 and
f|Ve |,100 are mutually compared. Unlike f|Ve |,60 which is correct, the graph of f|Ve |,100 reveals
several regions with important errors caused by the inaccurate values ΦU (k) for k > 60.
Hence, for such a complexity-limited approach to be exploitable, it ought to be accompanied by a cleansing step that conserves only the accurate values of the characteristic function. In a way, this technique is similar to the so-called “zero-padding” technique [136] that is commonly employed in an FFT.
9.4.5 DIFT pdf versus maximum-entropy pdf
The DIFT pdfs are now compared to the maximum-entropy (MaxEnt) pdfs. With the
aid of a Clenshaw-Curtis rule, the first nine statistical moments of |Ve | are calculated
with a 1% accuracy, which requires merely 33 samples of Ve . These moments are then
post-processed via the maximum-entropy principle described in Section 8.5. The resulting
MaxEnt pdfs, denoted fME,Nmom when Nmom moments are employed, are displayed in
Fig. 9.6(a). To have comparable complexities, the DIFT pdfs f|Ve |,kmax are determined
using NV ∈ {33, 65, 129} samples of Ve in the quadrature rule. For each value of NV , the
support of ΦU is truncated to a higher value kmax , by retaining only the accurate values
of the characteristic function. These pdfs are plotted in Fig. 9.6(b).
Interestingly, the MaxEnt pdfs are density functions by construction, i.e. they do not
assume negative values as is the case with the DIFT pdfs, which are also affected by
Gibbs oscillations. However, the MaxEnt pdfs are also smooth functions, which do not
approximate the support of the pdf of |Ve | as faithfully as the DIFT pdfs. Moreover,
several detailed features of femp are captured only by the DIFT density functions, such as
the discontinuity which occurs around |Ve |=0.249 V, and the intensity of the peak observed
at |Ve |=0.298 V. Therefore, despite the more important numerical effort imposed by the
computation of the characteristic function, the results of the DIFT method should be
preferred to those of the MaxEnt approach, mainly regarding their precision.
Figure 9.5: DIFT method with a maximum complexity of NV,max =127 samples of Ve : (a) characteristic function; (b) DIFT pdfs.
Figure 9.6: Comparison between the MaxEnt pdfs of |Ve | (Fig. (a)) and the DIFT pdfs of |Ve | (Fig. (b)).
9.5 Conclusion
The method presented in this chapter is based on the successive application of a forward
and an inverse Fourier transformation. The strength of this procedure resides in the
computability, by quadrature, of the Fourier transform of the pdf, which is directly linked
to the characteristic function. The inverse Fourier transform is evaluated in a discrete form
via a Fourier series. The knowledge of the characteristic function completely characterizes
the randomness of the observable by allowing for the deduction of its pdf and its cdf.
In this method, the decay rate of the characteristic function is of crucial importance.
Unfortunately, this decay rate is tedious to master given the intricate definition of the
observable via the solution of a boundary-value problem. A trade-off needs to be found
between the requirement of a satisfactory resolution in the resulting pdf, which
translates into a higher sampling rate of the characteristic function, and the need to have a
manageable complexity by limiting the number of evaluations of the characteristic function.
Comparisons with the maximum-entropy approach have shown that standard statistical
moments are generally faster to compute than the characteristic function. However,
despite their numerical costs, the DIFT pdfs and cdfs have a higher resolution and are
more accurate than the maximum-entropy pdfs and cdfs.
The extension of the DIFT method to higher-dimensional problems can suffer from the
prohibitive effort required to evaluate multi-dimensional oscillatory integrals. This
well-documented hindrance is mainly caused by the inadequacy of the quadrature rules
employed in this thesis for such integration problems [137, 138]. Several remedies to this
numerical difficulty have been proposed in the literature, such as asymptotic expansions
similar to the stationary-phase approximation [139, Chapter 1],[140], or quadrature rules
such as the Levin’s, which is better suited for highly-oscillatory integrals [85, p. 238], [141].
The DIFT method has been implemented for real-valued observables, but it can be
extended to handle complex-valued ones such as Ve : Ve is treated as a real-valued random
vector and its DIFT pdf and cdf follow from 2–D forward and inverse Fourier transforms.
Finally, the method presented in this chapter suggests the possibility to consider
transformations other than Fourier’s. A multi-resolution wavelet-transform approach
could be regarded as a promising candidate. Provided that it is suitably defined, such an
approach would imply using a wavelet basis that has a limited number of oscillations as
opposed to the periodic Fourier kernel.
Chapter 10
Extensions
Several applications and extensions of the stochastic methods presented hitherto are now
investigated. The objective of such a presentation is first to demonstrate the versatility of
the stochastic approach, and second to provide additional examples of stochastic
electromagnetic interactions. The examples will show that stochastic methods cannot be
applied in a blindfolded manner.
As in the case of deterministic methods, understanding of and intuition for the physical phenomena remain essential for extracting correct information from the computations.
First, the effect of the probability distribution of the input on the probability distribution of the output is investigated, by evaluating the output pdf with the aid of the discrete-inverse-Fourier-transform (DIFT) approach. In this case, both the equivalent voltage
source and the equivalent impedance are studied. Second, the mathematical representation
of geometrical fluctuations of the scatterer is modified by employing a wavelet geometrical
model. This setup is also analyzed via the DIFT method. Finally, the example of a
Hertzian dipole over a random PEC plate is tackled to demonstrate the possibility of
carrying out the stochastic analysis of a 2–D deterministic interaction.
10.1 Completion of the stochastic black box
The DIFT method presented in Chapter 9 completes the numerical construction of a
stochastic black box or transfer function. Given a probability distribution PI or a pdf
fI defined on the space I of uncertain input variables i, the stochastic method can be
employed to determine the pdf fY or the cdf FY of the observable Y .
The cumulative distribution function FY of the real-valued observable Y is defined in terms of the distribution PY of Y as [67, p. 3]

FY : R ∋ y ↦ PY (Y ≤ y) ∈ [0, 1],   (10.1)

with lim_{y→−∞} FY (y) = 0 and lim_{y→+∞} FY (y) = 1. This definition highlights the usefulness of
the cdf in practical applications, in particular from a decision-making point of view. For
instance, given an interval (ya ; yb ] ⊂ R, the probability of observing Y in (ya ; yb ] will be
expressed by FY (yb ) − FY (ya ). Similarly, the probability of observing values of Y above a
threshold Y0 ∈ R corresponds to 1 − FY (Y0 ) and is very useful to determine a probability
of failure. Moreover, the so-called “tail” of the probability distribution, which corresponds
to values of FY (Y0 ) for extreme values of Y0 , informs more precisely on the likelihood of
occurrence of extreme or rare events than statistical moments such as the kurtosis.
In addition, for any m ∈ (0; 100), FY offers the possibility to determine the m%-quantile
as the value of Y , denoted qm% , for which FY (Y = qm% ) = m/100. As an example, the
median corresponds to q50% and, for a positive-valued observable Y , q90% can be employed
to investigate the compliance of Y with some maximum admissible value of Y .
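A minimal empirical sketch of the exceedance probability and quantiles just described, using arbitrary illustrative samples in place of a computed observable (the threshold and sample law below are hypothetical):

```python
import bisect
import random

random.seed(3)

# Illustrative samples of a positive-valued observable Y (arbitrary distribution).
y_sorted = sorted(random.uniform(0.0, 0.7) ** 2 for _ in range(10_000))

def cdf(y0: float) -> float:
    # Empirical F_Y(y0): fraction of samples not exceeding y0.
    return bisect.bisect_right(y_sorted, y0) / len(y_sorted)

def quantile(m: float) -> float:
    # q_m%: smallest sample value q with F_Y(q) >= m/100.
    idx = min(len(y_sorted) - 1, int(m / 100.0 * len(y_sorted)))
    return y_sorted[idx]

y0 = 0.3
print(f"probability of failure 1 - F_Y({y0}) = {1.0 - cdf(y0):.3f}")
print(f"median q50% = {quantile(50.0):.4f}")
print(f"q90%        = {quantile(90.0):.4f}")
```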
These cdf-based statistical tools, schematically illustrated in Fig. 10.1, are precious for safety, reliability, and performance analyses. In the forthcoming section, they are
employed in an EMC context to study the equivalent Thévenin circuit of thin-wire setups.
Figure 10.1: Practical use of the cdf FY .
10.2 Influence of the distribution of the input

10.2.1 Bank of input distributions
Most of the input pdfs that came into play thus far were uniform. A first advantage of
such a distribution resides in its bounded support, which is convenient for the construction
of a realistic geometry of the wire by preventing it from going through the ground plane.
Furthermore, the limited support of fI also implies that the statistical moments of the
observable Y will consist of proper integrals. From a probabilistic point of view, uniform
pdfs arise in the maximum-entropy principle as the laws obtained with minimal knowledge
about the spread of i in its bounded range I [75, 76]. Hence, with a uniform pdf, all the
values in I are equally likely to occur.
However, distributions which privilege certain values over others are also of practical
interest. They can be obtained by truncating a distribution defined on a wider domain
than is required [86, p. 38]. Alternatively, the generic definition of Beta distributions
allows for the construction of a variety of pdfs supported on a bounded domain. In the
case where i is a scalar belonging to the interval [0, 1], the pdf of the Beta distribution
B(a, b) of parameters a > 0 and b > 0 is denoted fB(a,b) and reads [86, p. 428]

fB(a,b) (i) = [Γ(a + b)/(Γ(a)Γ(b))] i^{a−1} (1 − i)^{b−1} for i ∈ [0; 1], and fB(a,b) (i) = 0 otherwise,   (10.2)
where Γ is the Gamma function. Note that a uniform pdf is obtained by setting a = 1 and
b = 1 in Eq. (10.2). We restrict ourselves to the four distributions fB(1,1) , fB(4,4) , fB(2,4) ,
and fB(4,2) , the pdf and the cdf of which are plotted in Figs 10.2(a) and 10.2(b). An analogy
can be made with the theory of filtering, as indicated in Table 10.1, by considering fI as
a filter defined on I that sets the relative importance of the different elements of I.
pdf     | shape           | equivalent filter
fB(1,1) | uniform         | all-pass
fB(4,4) | bell-shaped     | bandpass
fB(2,4) | positive skewed | low-pass
fB(4,2) | negative skewed | high-pass

Table 10.1: Equivalence between pdfs and filters.
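Eq. (10.2) is straightforward to implement with the Gamma function of the standard library. The sketch below evaluates the four tabulated pdfs and checks that each integrates to unity by a midpoint rule.

```python
import math

def beta_pdf(i: float, a: float, b: float) -> float:
    # f_B(a,b)(i) of Eq. (10.2), with Gamma(a+b) / (Gamma(a) * Gamma(b)).
    if not 0.0 <= i <= 1.0:
        return 0.0
    coeff = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coeff * i ** (a - 1.0) * (1.0 - i) ** (b - 1.0)

# B(1, 1) reduces to the uniform pdf on [0, 1].
uniform_value = beta_pdf(0.3, 1.0, 1.0)

# Each pdf integrates to 1; checked here with a midpoint rule.
n = 20_000
masses = {}
for a, b in [(1, 1), (4, 4), (2, 4), (4, 2)]:
    masses[(a, b)] = sum(beta_pdf((m + 0.5) / n, a, b) for m in range(n)) / n
    print(f"B({a},{b}): unit-mass check = {masses[(a, b)]:.5f}")
```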
Figure 10.2: Beta distributions: B(1, 1) (solid line), B(4, 4) (circled line), B(2, 4) (dashed line), B(4, 2) (dotted line): (a) pdf fB(a,b) ; (b) cdf FB(a,b) .
10.2.2 Test case
The effect of these input pdfs on the cdf of the output is studied through the example of
a vertically undulating thin wire with an axis described by a single sine as follows
xα (y) = 0,   zα (y) = 0.05 + α sin [2π(y − ym )] ,   (in m), for y ∈ [ym ; yM ].   (10.3)
The only random input parameter is the amplitude i = α, which is distributed in
A = [−0.04; +0.04] m, according to the pdf fα . A fixed 500 MHz parallel-polarized plane
wave impinges on this wire, with an electric field of amplitude 1 Vm−1 , and a direction of
propagation specified by θi = 45◦ and φi = 45◦ . The aim is to assess the effect of different
input pdfs fα on the cdf of real-valued observables Y obtained from the Thévenin circuit,
i.e. Y ≡ Y (Ve , Ze ).
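A minimal sketch of the wire geometry of Eq. (10.3) follows. The endpoints ym and yM are not fixed in this excerpt, so ym = 0 m and yM = 1 m are assumptions made purely for illustration:

```python
import math, random

# Sketch of the undulating-wire axis of Eq. (10.3). The endpoints ym, yM are
# assumptions (the excerpt does not fix them); ym = 0 m, yM = 1 m here.
YM_MIN, YM_MAX = 0.0, 1.0

def z_axis(y, alpha):
    """Height of the wire axis at ordinate y for amplitude alpha (Eq. 10.3)."""
    return 0.05 + alpha * math.sin(2 * math.pi * (y - YM_MIN))

random.seed(0)
# Amplitude alpha uniformly distributed in A = [-0.04, +0.04] m (the B(1,1) case).
alpha = random.uniform(-0.04, 0.04)
heights = [z_axis(y, alpha) for y in (0.0, 0.25, 0.5, 0.75)]
print(alpha, heights)

# The straight wire (alpha = 0) sits at the constant height 0.05 m.
assert all(abs(z_axis(y, 0.0) - 0.05) < 1e-12 for y in (0.0, 0.3, 0.9))
```

Each draw of alpha produces one realization of the random geometry over which the observable Y(Ve, Ze) is evaluated.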
10.2.3 Effect of fα on the distribution of |Ve|
The magnitude of the induced voltage is chosen as observable, i.e. Y = |Ve |. As described
in the previous chapter, Y is normalized into the variable U to compute the characteristic
function ΦU (k), for k ∈ [0; 100]. By using a sampling step τ = 1/10, a total of 1000
integrals are calculated by a Clenshaw-Curtis quadrature rule. In terms of complexity,
513 function evaluations are required to reach a relative error lower than 1% with respect
to the L1 –norm of ΦU .
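The pipeline of this paragraph — evaluate the characteristic function ΦU(k) = E[exp(jkU)] by quadrature over the input domain — can be sketched with a toy observable. The Clenshaw-Curtis rule of the thesis is replaced here by a plain trapezoidal rule, and the map g is an invented placeholder:

```python
import cmath, math

# Toy version of the chapter's pipeline: the characteristic function
# Phi_U(k) = E[exp(j k U)] of an output U = g(alpha), with alpha uniform on
# [0, 1], computed by composite quadrature. The thesis uses Clenshaw-Curtis;
# the trapezoidal rule below is a simpler stand-in.
def g(a):
    return math.sin(2 * math.pi * a) ** 2   # toy observable, values in [0, 1]

def phi_U(k, n=512):
    h = 1.0 / n
    vals = [cmath.exp(1j * k * g(i * h)) for i in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# Phi_U(0) = 1 for any distribution, and |Phi_U(k)| <= 1 for all k.
assert abs(phi_U(0.0) - 1.0) < 1e-12
assert all(abs(phi_U(k)) <= 1.0 for k in (5.0, 20.0))
```

The DIFT step then approximates the pdf of U by a truncated inverse Fourier sum over the computed samples ΦU(kτ), which is where the Gibbs oscillations discussed below originate.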
The amplitudes |ΦU (k)| of the characteristic functions obtained for the four different
types of Beta pdfs are depicted in Figs 10.3(a) and 10.3(b) for k ∈ [0; 100]. These figures
10.2 Influence of the distribution of the input
165
reveal a polynomial decay of |ΦU (k)| as k increases, for all the different types of input
pdfs. By definition of the Fourier transformation, this decay indicates the presence of a
singularity in the graphs of the DIFT pdfs. Oscillations are present when fα = fB(1,1) and
fα = fB(4,2) , whereas when fα = fB(2,4) or fα = fB(4,4) , |ΦU (k)| decreases monotonically.
The oscillations stem from the presence of more than one singular point in the graph of
f|Ve | , while the monotonically decreasing characteristic functions correspond to pdfs that
contain at most one singularity.
The pdf and the cdf of |Ve| obtained for fα = fB(a,b) are denoted f^{a,b}_{|Ve|} and F^{a,b}_{|Ve|}, respectively.
The DIFT approximation of the pdf of |Ve | is determined for each of the input density
functions, and then plotted in Figs 10.4(a) and 10.4(b). There is no obvious resemblance
between the pdfs fα and f|Ve | , as was to be expected because of the intricate and nonlinear dependence of Ve on α. All the DIFT approximations of the pdfs have the same
support, viz. [Vmin ; Vmax ]=[0.196; 0.269] V. Moreover, Gibbs oscillations can be observed
around Vmax , which marks the upper limit of the support of f|Ve | , i.e. the highest value
taken by |Ve |. On the other hand, around |Ve |=0.196 V, these oscillations only appear
when fα = fB(1,1) and fα = fB(4,2) . These results are in agreement with the presence of
oscillations in the associated characteristic functions shown in Figs 10.3(a) and 10.3(b).
To ease the interpretation of the obtained pdfs, the values of |Ve | are plotted in Fig. 10.5
as a function of α. This figure reveals that the peaks observed in the pdfs occur when the
slope of |Ve| as a function of α vanishes, i.e. ∂α|Ve(α)| = 0. Such is the case around the
stationary points α0,1 = −0.012 m and α0,2 = 0.035 m, which even correspond to extrema
of |Ve |. By construction, all the input pdfs allocate a significant weight to α0,1 owing to its
proximity to the center of the interval A = [−0.04; +0.04] m. This explains the presence
of a peak at |Ve | = |Ve (α0,1 )| = 0.196 V in all the pdfs. On the other hand, α0,2 is located
close to the upper bound of A and receives a sufficient weight only when fα = fB(1,1) and
fα = fB(4,2) , which then translates into a second peak at |Ve | = |Ve (α0,2 )| = 0.269 V, only
for these pdfs.
The graphs of the different cdfs F^{a,b}_{|Ve|}, depicted in Fig. 10.6, confirm the support of |Ve| as
being equal to [Vmin ; Vmax ]=[0.196; 0.269] V. Moreover, the behaviour of F|Ve | around Vmin
and Vmax can be related to the graph of f|Ve | , by recalling that, for any v ∈ R, the value
f|Ve | (v) represents the derivative of F|Ve | at v. Hence, whenever Gibbs oscillations appear
in the graph of f|Ve | , signaling a discontinuity of f|Ve | , the slope of F|Ve | around v varies
abruptly. An illustration of this effect can be observed around Vmax for all the DIFT cdfs,
and around Vmin for F^{1,1}_{|Ve|} and F^{4,2}_{|Ve|}.
Figure 10.3: Amplitude of the characteristic function ΦU for k ∈ [0; 100]: (a) fα = fB(1,1) and fα = fB(4,4); (b) fα = fB(4,2) and fα = fB(2,4).
Figure 10.4: DIFT pdf of |Ve|: (a) f^{1,1}_{|Ve|} and f^{4,4}_{|Ve|}; (b) f^{4,2}_{|Ve|} and f^{2,4}_{|Ve|}.
Figure 10.5: Variation of |Ve| as a function of α ∈ A = [−0.04; +0.04] m, with the levels |Ve| = 0.269 V, 0.211 V, 0.204 V, and 0.196 V marked.
Figure 10.6: DIFT cdf of |Ve| for the different pdfs fα.
10.2 Influence of the distribution of the input
169
As outlined in Section 10.1, the cdf can be post-processed to extract useful practical
information. In an EMC context, it can be desirable to avoid excessive levels of induced
voltage. Hence, indicators such as the likelihood
P|Ve | (|Ve | > Vthresh ) = 1 − F|Ve | (Vthresh )
(10.4)
of having |Ve | above a maximum acceptable voltage Vthresh > 0, or the quantile at 90% are
helpful. These decision-making tools are summarized in Table 10.2.
fα       | median q50% | q90%    | P|Ve|(|Ve| > Vthresh = 0.260 V)
fB(1,1)  | 0.243 V     | 0.268 V | 0.29
fB(4,4)  | 0.258 V     | 0.269 V | 0.48
fB(2,4)  | 0.260 V     | 0.269 V | 0.51
fB(4,2)  | 0.231 V     | 0.267 V | 0.19

Table 10.2: Post-processing of F|Ve|.
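The post-processing of Eq. (10.4) — median, 90% quantile, and exceedance probability — can be sketched from samples of an observable. The samples below are a toy uniform stand-in, not the thesis data:

```python
import random

# Sketch of the post-processing of Eq. (10.4): estimate the median, the 90%
# quantile and P(|Ve| > Vthresh) = 1 - F(Vthresh) from an empirical cdf.
def empirical_cdf(samples, v):
    return sum(1 for x in samples if x <= v) / len(samples)

def quantile(samples, p):
    s = sorted(samples)
    return s[min(len(s) - 1, int(p * len(s)))]

random.seed(1)
# Toy |Ve| values, uniform on the support [0.196, 0.269] V quoted in the text.
samples = [0.196 + 0.073 * random.random() for _ in range(10_000)]

v_thresh = 0.260
p_exceed = 1.0 - empirical_cdf(samples, v_thresh)
print(quantile(samples, 0.5), quantile(samples, 0.9), p_exceed)

# For this uniform toy model, P(|Ve| > 0.260) should be close to
# (0.269 - 0.260) / 0.073 ≈ 0.123.
assert abs(p_exceed - (0.269 - 0.260) / 0.073) < 0.02
```

In the chapter the cdf comes from the DIFT reconstruction rather than from raw samples, but the indicator definitions are the same.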
Table 10.2 highlights the clear differences between the medians of the different cdfs, except for the cases F^{4,4}_{|Ve|} and F^{2,4}_{|Ve|}. The quantile q90% is robust: it remains close to 0.268 V for all the different fα considered. The risk of observing |Ve| above the threshold Vthresh = 0.260 V is significant, particularly for F^{4,4}_{|Ve|} and F^{2,4}_{|Ve|}, where it is close to 0.5. From this perspective, fα = fB(4,2) yields the lowest risk, i.e. 0.19, which, given the shape of fB(4,2), implies that the potentially risky zone of the input parameter α corresponds to the lower half of A. This is indeed the case according to Fig. 10.5.
This example also underscores the importance of knowing the distribution of the input
parameters, as can be seen through Figs 10.4(a) to 10.6 and Table 10.2. The choice, for
instance, of a pdf such as fα = fB(4,4) or fα = fB(2,4) to represent an actually uniformly
distributed parameter α will lead to an erroneous estimation of the median of |Ve |, as well
as an over-estimation of the risk of exceeding the threshold Vthresh .
10.2.4 Effect of fα on the distribution of Zi = Im(Ze)
The imaginary part of the Thévenin impedance of the thin wire is now investigated, i.e.
Y = Zi = Im[Ze]. This observable characterizes the reactive behaviour of the thin-wire setup.
The characteristic function of this new observable is computed in a similar way as in the
case of |Ve | and leads to identical computational performances for the different types of
input pdfs presented in Section 10.2.1. The amplitude of ΦU is shown in Figs 10.7(a) and
10.7(b). A similar polynomial decay is observed as for the characteristic function of |Ve |,
with ringing effects present in the cases where fα = fB(1,1) and fα = fB(4,2) .
The pdfs of Zi obtained with the aid of these characteristic functions are plotted in
Figs 10.8(a) and 10.8(b). The pdf and the cdf of Zi obtained for fα = fB(a,b) are written as f^{a,b}_{Zi} and F^{a,b}_{Zi}, respectively. Once more, little resemblance is found between the pdf of the input α and the pdf of the output Zi, due to the non-linear dependence of Ze on α.
Gibbs oscillations affect all the pdfs around Zmin = −774 Ω, thereby marking the lower
limit of the support of fZi , i.e. the lowest value taken by Zi . Since this limit is common to
all the pdfs and given the different shapes assumed by fα , it is possible to conclude that
this minimum of Zi arises for samples α around the center of A. The upper limit of the
support of fZi depends on the type of input pdf: for f^{1,1}_{Zi} and f^{2,4}_{Zi}, Zmax = −493 Ω, while for f^{4,4}_{Zi} and f^{4,2}_{Zi}, Zmax = −594 Ω. Furthermore, similarities can be observed between the shapes of f^{1,1}_{Zi} and f^{4,2}_{Zi}, particularly between Zmin and −594 Ω. As for f^{4,4}_{Zi} and f^{2,4}_{Zi}, they both decay when Zi increases above Zmin.
These patterns are compared to the deterministic representation of Zi in terms of α, which
is plotted in Fig. 10.9. Two zeros of the derivative ∂α Zi (α) are observed. The first one,
which is located at α′0,1 = −0.003 m, almost coincides with the center of A, and it gives rise to a peak in all the DIFT pdfs around Zi = Im[Ze(α′0,1)] = −774 Ω. Due to its location near the upper limit of A, the second zero α′0,2 = 0.036 m is highlighted by a peak at Zi = Im[Ze(α′0,2)] = −600 Ω only when fα = fB(1,1) or fα = fB(4,2). It is worth
noting that the largest values of Zi occur for α ≤ −0.03 m, and they affect the DIFT pdfs
when fα = fB(1,1) and fα = fB(2,4) .
Figure 10.10 shows that all the cdfs have a discontinuous slope around Zmin consistent with
the Gibbs oscillations present in the graph of fZi around this point. Moreover, the cdfs F^{1,1}_{Zi} and F^{4,2}_{Zi} have another break in their tangent around Zi = −594 Ω, as was suggested
by their pdfs. Differences are also observed in Fig. 10.10 concerning the upper limit of the
supports of FZi for the different input pdfs.
Figure 10.7: Amplitude of the characteristic function ΦU for k ∈ [0; 100]: (a) fα = fB(1,1) and fα = fB(4,4); (b) fα = fB(2,4) and fα = fB(4,2).
Figure 10.8: DIFT pdf of Zi: (a) f^{1,1}_{Zi} and f^{4,4}_{Zi}; (b) f^{4,2}_{Zi} and f^{2,4}_{Zi}.
Figure 10.9: Variation of Zi = Im(Ze) as a function of α ∈ A = [−0.04; +0.04] m, with the levels Zi = −493 Ω, −600 Ω, −621 Ω, and −774 Ω marked.
Figure 10.10: DIFT cdf FZi for the different pdfs fα.
The cdf is post-processed by assessing the likelihood of observing Zi in a desired interval
[Z0 ; Z1 ]. This probability is obtained as
PZi((Z0; Z1]) = FZi(Z1) − FZi(Z0).
(10.5)
Results corresponding to a partition of the range [−800; −500] Ω into intervals of length
100 Ω are given in Table 10.3.
fα       | PZi((−800; −700] Ω) | PZi((−700; −600] Ω) | PZi((−600; −500] Ω)
fB(1,1)  | 0.450               | 0.401               | 0.141
fB(4,4)  | 0.813               | 0.182               | 0.005
fB(2,4)  | 0.605               | 0.278               | 0.117
fB(4,2)  | 0.530               | 0.441               | 0.029

Table 10.3: Post-processing of FZi.
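The interval probabilities of Eq. (10.5) are simply cdf increments. A hedged sketch, using an invented smooth cdf in place of the thesis data:

```python
# Sketch of Eq. (10.5): the probability of Zi falling in (Z0, Z1] is the cdf
# increment F(Z1) - F(Z0). The cdf below is a toy closed form, not the thesis
# data; only its monotonicity and support [-800, -500] Ohm matter here.
def toy_cdf(z, z_min=-800.0, z_max=-500.0):
    """Monotone toy cdf supported on [z_min, z_max]."""
    if z <= z_min:
        return 0.0
    if z >= z_max:
        return 1.0
    u = (z - z_min) / (z_max - z_min)
    return u * u * (3 - 2 * u)   # smoothstep: a smooth, monotone cdf shape

def interval_prob(cdf, z0, z1):
    return cdf(z1) - cdf(z0)

# The probabilities over a partition of the support must sum to 1,
# as in each row of Table 10.3 (up to rounding and support edges).
edges = [-800.0, -700.0, -600.0, -500.0]
probs = [interval_prob(toy_cdf, a, b) for a, b in zip(edges, edges[1:])]
print(probs)
assert abs(sum(probs) - 1.0) < 1e-12
```

The telescoping sum over the partition is what guarantees that each row of Table 10.3 adds up to approximately one.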
Depending on the range of values of Zi that is considered, the different input pdfs
yield different results. For instance, if the pdf fB(2,4) is employed to model samples of
α that are distributed according to fB(4,4) , the predictions of the likelihood of having
Zi ∈ (−800; −700] Ω will be underestimated as 0.605 instead of 0.813, while the chances
of having Zi ∈ (−700; −600] Ω and Zi ∈ (−600; −500] Ω will be overestimated.
10.2.5 Conclusion on the importance of the input pdf
This example of a vertically varying thin wire has demonstrated the importance of the
choice of the probability distribution of the uncertain input parameter. Understanding the
physics is hence crucial in the selection of the appropriate pdfs to describe the variations of
the inputs. In practice, one way of characterizing the distribution of the input parameter
could be to exploit manufacturing data sheets, or to analyze recorded data.
10.3 Localized geometrical fluctuations

10.3.1 Setup
Up to this stage, all the examples analyzed involved a thin-wire setup that had an axis
parameterised by a sum of sines with random amplitudes. For a vertically undulating thin
wire, for instance, the axis is described as
xα(y) = 0,   zα(y) = 0.05 + Σ_{k=1}^{d} αk sin(kπ(y − ym)),   for y ∈ [ym; yM],   (10.6)
where the amplitudes αk are random. This model, which can be regarded as the modal
representation of a vibrating string (see [142, p. 268]), is well suited to describe global
geometrical variations over the range y ∈ [ym ; yM ]. However, the number d of Fourier
terms necessary to build localized geometrical fluctuations is generally large and can imply the use of many amplitudes αk. The dimension of the associated stochastic problem grows accordingly and raises the question of the computation of statistical moments
via quadrature rules over a high-dimensional space.
One way around this difficulty consists in employing a geometrical parametrization that is
more suitable for modeling local geometrical perturbations. This is possible with wavelets
such as the so-called “Mexican hat”, which is defined as (see [136, p. 80])
wδy(y) = [1 − (y/δy)²] e^{−(y/δy)²},   for y ∈ R, δy > 0.   (10.7)
This centered wavelet spreads horizontally over a range given by approximately 3δy and
has an amplitude between -0.135 and 1, as can be seen in Fig. 10.11.
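The quoted amplitude range can be checked directly from Eq. (10.7): the wavelet peaks at 1 for y = 0 and reaches its minimum −e⁻² ≈ −0.135 at y = ±√2 δy. A short sketch:

```python
import math

# Sketch of the Mexican-hat wavelet of Eq. (10.7). Its extreme values confirm
# the amplitude range quoted in the text: maximum 1 at y = 0, and minimum
# -exp(-2) ≈ -0.135 at y = ±sqrt(2)*dy.
def mexican_hat(y, dy):
    u = y / dy
    return (1.0 - u * u) * math.exp(-u * u)

dy = 0.01  # the value used for the wire deformation in Eq. (10.8)
assert mexican_hat(0.0, dy) == 1.0
y_min = math.sqrt(2.0) * dy
assert abs(mexican_hat(y_min, dy) + math.exp(-2.0)) < 1e-12
# Beyond ~3*dy the wavelet is negligible, matching its quoted spread.
assert abs(mexican_hat(3.0 * dy, dy)) < 1e-3
```

The minimum follows from setting the derivative −2u(2 − u²)e^{−u²} to zero at u = ±√2, which gives (1 − 2)e⁻² = −e⁻².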
In this section, local vertical deformations of a straight wire are constructed as follows
xα(y) = 0,
zα(y) = 0.05 − δz wδy (y − y∗ )
in meters,
(10.8)
where δy = 0.01 m and δz = −0.03 m. The resulting perturbation is centered around
y∗ ∈ [ym ; yM ], it spans the range [y∗ − 3δy ; y∗ + 3δy ], and has the fixed amplitude δz .
This type of deformation is for instance representative of a local defect in the manufacturing
process of a transmission line¹. The thin wire can be regarded as a transmission line that
is terminated at one end by an open-circuit state via the vertical thin wire containing the
port region and at the other end by a resistive load RL .
¹ In [130, 143], we have employed such a geometrical model to study the variance and the kurtosis of the induced voltage.
Figure 10.11: Mexican-hat wavelet wδy(y), for δy = 1.

Figure 10.12: Wavelet geometrical deformation.
10.3 Localized geometrical fluctuations
177
Two different types of incident plane waves are taken into account, both propagating in
the direction θi = 45◦ , φi = 0◦ , at a frequency of 500 MHz and with an amplitude of
1 Vm−1. First, a parallel-polarized plane wave denoted E^i_∥ is considered, and then a perpendicular-polarized plane wave E^i_⊥. Given the coordinate system employed, E^i_∥ lies in the plane of incidence xOz, while E^i_⊥ is directed along the y-axis. Since Ve is a reaction integral between the incident field and the transmitting-state current Jα, the voltage Ve,z that results from E^i_∥ will depend only on the z-component of Jα, while the voltage Ve,y induced by E^i_⊥ will capture the effect of the y-component of Jα. The objective is to study the probability distributions of |Ve,z| and |Ve,y|.
10.3.2 Conditional cdf
To begin with, the center y∗ ∈ [ym ; yM ] of the deformation is assumed to be random and
uniformly distributed along the axis of the wire, while the load RL takes the deterministic
values {0 Ω, 50 Ω, 100 Ω}. The resistance RL is hence a conditioning parameter. The
normalized characteristic function {ΦU (k), k ∈ [0; 100]} is computed for each value of RL
by a Clenshaw-Curtis quadrature rule.
Figures 10.13(a) and 10.13(b) display the DIFT cdfs Fz,RL and Fy,RL of |Ve,z | and |Ve,y |
respectively, for the different values of RL . The largest dispersion of |Ve,z | and |Ve,y | is
obtained when RL = 0 Ω, as can be noticed from Table 10.4, where the limits of the
support of the cdfs are given. It is worth noting that, with a slight abuse of terminology, we speak of the "support" of the cdfs to refer to the support of the associated pdfs. As RL increases from 0 Ω to
100 Ω, the support of Fz,RL is shifted upwards, whereas the support of Fy,RL is conversely
displaced towards lower voltages.
RL               | min |Ve,z| (E^i_∥) | max |Ve,z| (E^i_∥) | min |Ve,y| (E^i_⊥) | max |Ve,y| (E^i_⊥)
RL = 0 Ω         | 0.018 V | 0.082 V | 0.128 V | 0.216 V
RL = 50 Ω        | 0.050 V | 0.084 V | 0.093 V | 0.145 V
RL = 100 Ω       | 0.057 V | 0.089 V | 0.063 V | 0.105 V
RL ∈ [0; 100] Ω  | 0.018 V | 0.089 V | 0.063 V | 0.216 V

Table 10.4: Support of Fz,RL and Fy,RL.
Figure 10.13: DIFT cdfs for RL = 0 Ω, RL = 50 Ω, RL = 100 Ω, and RL random: (a) Fz,RL; (b) Fy,RL.
10.3 Localized geometrical fluctuations
179
The variations of RL provoke a distortion of the graphs of Fz,RL which indicates a
modification of the distribution of |Ve,z |. On the other hand, the shape of Fy,RL remains
robust under the modifications of RL , which indicates that the general pattern of the
distribution of |Ve,y | is conserved. This can be noted visually through the presence of a
slope discontinuity around the quantile at 60%, q60%, for all three values of RL.
The values of |Ve |, obtained when y ∗ and RL vary in their ranges, are plotted in Figs 10.14(a)
and 10.14(b) for E ik and E i⊥ respectively. These plots confirm the statistical conclusions
drawn from the cdfs. Unlike |Ve,z |, the evolution of |Ve,y | in terms of y ∗ is rather insensitive
to the value of RL .
10.3.3 Total cdf
Rather than being a conditioning variable, RL is now random, independent of y∗ and
uniformly distributed in [0; 100] Ω. In this case, the cdfs of |Ve,z | and |Ve,y | are denoted
Fz,rand and Fy,rand respectively. As can be seen in Table 10.4, the supports of Fz,rand and
Fy,rand encompass all the supports of the conditional cdfs. This is to be expected, given the
uniform distribution of RL which weighs all the possible values of RL equally. Regarding
the cdf of |Ve,z |, plotted in Fig. 10.13(a), a resemblance can be observed between Fz,rand
and Fz,RL=50Ω , once |Ve,z | ≥ 0.054 V. On the other hand, Fig. 10.13(b) reveals that Fy,rand
has a modified shape in comparison with the conditional cdfs Fy,RL . Moreover, the slope
discontinuity of Fy,RL now occurs around the quantile at 86%, q86%, which corresponds to
|Ve,y | = 0.140 V.
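The relation between the conditional and total cdfs follows the law of total probability: when RL is uniform on [0, 100] Ω and independent of y*, the total cdf is the average of the conditional cdfs over RL. A toy illustration, whose conditional cdf is an invented placeholder rather than the thesis data:

```python
import random

# Law of total probability for cdfs: if RL ~ Uniform[0, 100] independently of
# the geometry, then F_total(v) = E_RL[ F(v | RL) ]. The conditional cdf used
# here is an invented placeholder, not the thesis data.
def cond_cdf(v, rl):
    """Toy conditional cdf of |Ve| given RL: uniform on [0.05, 0.07 + rl/1000]."""
    lo, hi = 0.05, 0.07 + rl / 1000.0
    return min(1.0, max(0.0, (v - lo) / (hi - lo)))

def total_cdf(v, n=10_000, seed=0):
    rng = random.Random(seed)
    return sum(cond_cdf(v, rng.uniform(0.0, 100.0)) for _ in range(n)) / n

v = 0.08
print(total_cdf(v))
# The total cdf is bracketed by the extreme conditional cdfs.
assert cond_cdf(v, 100.0) <= total_cdf(v) <= cond_cdf(v, 0.0)
```

This averaging explains why, in Table 10.4, the supports of Fz,rand and Fy,rand encompass all the conditional supports.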
10.3.4 Conclusion on the wavelet geometry
This section illustrates the possibility to build local geometrical perturbations of thin-wire
scatterers, with the aid of wavelets. This type of geometrical parametrization should be
regarded as an alternative to the sinusoidal parametrization appearing in Eq. (10.6). Since
a sinusoidal basis is better suited to describe global undulations than a wavelet basis, and
vice versa [144], the choice of one geometrical model rather than the other is dictated by
the type of deformation one wishes to model.
A conditional statistical analysis has been conducted to assess the effect of a varying
load on the statistics of the magnitude |Ve | of the voltage. This analysis refined the
characterization of the randomness of |Ve | by revealing, e.g., a robustness of the randomness
of the voltage induced by the perpendicular-polarized incident field. This in turn implies
a robustness of the randomness of the y-component of the transmitting-state current J α.
Figure 10.14: Deterministic evolution of |Ve| as a function of y* and RL: (a) case of E^i_∥; (b) case of E^i_⊥.
In a way, this result backs the observations made in Section 7.6.1 about the statistics of
the transmitting-state current, where a steady state, corresponding to the current flowing
on the straight wire, has been pointed out.
10.4 Random rectangular metallic plate
A configuration consisting of an elementary dipole and a PEC plate of finite extent is
now analyzed. The presence of the PEC surface leads to a 2–D electromagnetic problem
unlike the case of the thin wire, which involved 1-D line integrals obtained via thin-wire approximations. This configuration is representative of conformal patch antennas
mounted on the wings of an aircraft [6, 145] or embedded in textile [7, 146, 147]. Some
micro-electromechanical systems (MEMS) used for sensing in biomedical and aeronautical
applications can also be modeled as plates of varying geometries [148–150].
10.4.1 Deterministic configuration
The metallic surface, denoted ∂Ωα², is described by a normal parametrization with respect to a fixed domain ∂D which is chosen as the rectangle [0; Lx] × [0; Ly] lying in the horizontal plane Oxy, as depicted in Fig. 10.15.

Figure 10.15: Scattering surface ∂Ωα and elementary dipole located at rdip in the direction udip.

² To be consistent with the notations employed for the thin wire, the surface is written as ∂Ωα even though its interior Ωα is empty.
182
Extensions
To this end, a mapping µα is introduced between ∂D and ∂Ωα:

µα : ∂D → ∂Ωα,   rD = (x, y, 0) ↦ rα = rD + zα(x, y) uz,   (10.9)
where the function zα is at least twice continuously differentiable, and uz is a unit
vector normal to ∂D. This function depends on parameters gathered in the matrix
α = (αk1 ,k2 )k1 =1,...,d1 ,k2 =1,...,d2 ∈ Rd1 d2 and can be chosen as a sum of sines to reproduce the
modal representation of a vibrating rectangular plate [151, 152], i.e.
zα : ∂D ∋ (x, y) ↦ Σ_{k1=1}^{d1} Σ_{k2=1}^{d2} αk1,k2 sin(k1πx/Lx) sin(k2πy/Ly).   (10.10)
Similar to the thin-wire geometry, α will be chosen as random.
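The double sine sum of Eq. (10.10) is straightforward to evaluate on the reference rectangle; the amplitudes below are illustrative placeholders, not thesis values:

```python
import math

# Sketch of the plate height of Eq. (10.10): a double sum of sine modes over
# the reference rectangle [0, Lx] x [0, Ly]. The amplitudes are illustrative
# placeholders.
def z_alpha(x, y, alpha, Lx=0.5, Ly=0.5):
    """Height of the deformed plate at (x, y); alpha[k1][k2] are mode amplitudes."""
    d1, d2 = len(alpha), len(alpha[0])
    return sum(
        alpha[k1 - 1][k2 - 1]
        * math.sin(k1 * math.pi * x / Lx)
        * math.sin(k2 * math.pi * y / Ly)
        for k1 in range(1, d1 + 1)
        for k2 in range(1, d2 + 1)
    )

alpha = [[0.03, -0.01], [0.02, 0.005]]  # d1 = d2 = 2, amplitudes in meters
print(z_alpha(0.25, 0.25, alpha))

# Every mode of Eq. (10.10) vanishes on the edges of the reference rectangle,
# so this parametrization clamps the plate along its boundary.
assert abs(z_alpha(0.0, 0.3, alpha)) < 1e-12
assert abs(z_alpha(0.2, 0.5, alpha)) < 1e-12
```

Note that the representative example of Eq. (10.15) uses half-period sines (arguments kπx/(2Lx)), which do not vanish on all edges; that is why the surfaces of Fig. 10.16 have undulating edges.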
This plate ∂Ωα is illuminated by an incident field E iβ . The electromagnetic coupling
between ∂Ωα and E iβ is observed through the voltage Ve induced at the center of a
Hertzian dipole, which is in a receiving state. This dipole is located at the fixed position
r dip , oriented along the direction of the unit vector udip , and has a length 2Ldip that is
much smaller than the wavelength λ. In a transmitting state, i.e. with a current source
I0 = 1A present at its port, the electric current density J dip on the dipole is given by [40]
Jdip : R³ ∋ r ↦ I0 Ldip udip δ(r − rdip).   (10.11)
The voltage Ve is a superposition of the voltage Vi,e (β) created by the direct incidence of
E iβ on the dipole, and the voltage Vs,e (γ) induced by the field scattered by ∂Ωα
Ve (γ) = Vi,e (β) + Vs,e (γ).
(10.12)
Given the small dimensions of the dipole compared to the wavelength, Vi,e is defined as
Vi,e(β) = −(1/I0) ⟨Jdip, E iβ⟩ = −Ldip udip · E iβ(rdip).   (10.13)

On the other hand, Vs,e(γ) is defined by the reaction integral

Vs,e(γ) = −(1/I0) ∫_{r∈∂Ωα} Jα(r) · E iβ(r) dS.   (10.14)
The transmitting-state current distribution J α is induced on ∂Ωα in the absence of the
incident field E iβ , when the unit current source I0 = 1A is applied at the port of the
dipole. The current density J α is obtained by solving an equivalent EFIE with a method
of moments using a set of Rao-Wilton-Glisson (RWG) basis functions defined on ∂Ωα [65].
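The direct-incidence voltage of Eq. (10.13) reduces to a dot product for a short dipole. A hedged sketch for a plane wave; the frequency, polarization vector, and propagation direction below are assumptions consistent with the representative example of Section 10.4.2, not values fixed by this derivation:

```python
import cmath, math

# Sketch of Eq. (10.13): Vi,e = -Ldip * (udip . E_i(r_dip)) for a short dipole
# in a plane wave. Frequency and polarization are assumptions (182 MHz,
# parallel polarization in the yz-plane, as in Section 10.4.2).
C0 = 299_792_458.0           # speed of light [m/s]
f = 182e6                    # assumed frequency [Hz]
k0 = 2 * math.pi * f / C0    # free-space wavenumber [rad/m]

def plane_wave_E(r, e0, k_dir):
    """E(r) = e0 * exp(-j k0 k_dir.r), with |e0| the field amplitude [V/m]."""
    phase = -1j * k0 * sum(kd * rc for kd, rc in zip(k_dir, r))
    return [e * cmath.exp(phase) for e in e0]

L_dip = 0.01                       # dipole half-length Ldip = 1 cm (2Ldip = 2 cm)
u_dip = (0.0, 1.0, 0.0)            # dipole oriented along uy
r_dip = (0.0, 0.0, 0.3)            # dipole position [m]

s45, c45 = math.sin(math.radians(45)), math.cos(math.radians(45))
k_dir = (0.0, s45, -c45)           # propagation in the yz-plane, downwards
e0 = (0.0, c45, s45)               # 1 V/m parallel-polarized field, e0 . k_dir = 0

E_at_dip = plane_wave_E(r_dip, e0, k_dir)
V_ie = -L_dip * sum(u * e for u, e in zip(u_dip, E_at_dip))
print(abs(V_ie))
# |Vi,e| <= Ldip * |E| = 0.01 V for a 1 V/m field, whatever the phase.
assert abs(V_ie) <= 0.01 + 1e-12
```

The scattered-field contribution Vs,e of Eq. (10.14) has no such closed form: it requires the method-of-moments current Jα on the plate.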
This electromagnetic configuration can be interpreted in several ways. Firstly, it can be
viewed as a shielding configuration in which the objective is to study the effect of the
randomly varying shielding surface ∂Ωα on the electromagnetic coupling between E iβ and
the elementary dipole. Secondly, this problem can be regarded as a scattering problem
in which the dipole probes one component of the total electromagnetic field at its center.
It is worth noting that in bi-static radar cross-section (RCS) studies the dipole is located
at infinity in a direction specified by the polar coordinates (θd , φd ). The current J α then
corresponds to the current induced on ∂Ωα in the absence of the incident field E iβ, due to
a plane wave E id that is created by a unit current source I0 = 1A impressed at the port
of the dipole. The direction of propagation of E id is then specified by the angles (θd , φd )
and the RCS coefficient can be deduced from Eq. (10.14).
10.4.2 Representative example
For the sake of illustration, the reference surface ∂D is chosen as a square centered around
the origin, with edges of length Lx = Ly = 0.5 m. The height zα of ∂Ωα is defined as
zα(rD) = Σ_{k=1}^{2} αk sin(kπx/(2Lx)) sin(kπy/(2Ly)),   with rD = (x, y, 0) ∈ ∂D.   (10.15)
The resulting surface, illustrated in Figs 10.16(a) and 10.16(b), has undulating edges and is meshed into 2 × 10 × 10 triangular patches (≡ 280 RWG basis functions).

Figure 10.16: Examples of surfaces ∂Ωα.
The random amplitudes α1 and α2 are independent and uniformly distributed in the
domains A1 = A2 = [−0.05; +0.05] m. The average surface therefore corresponds to
α1 = α2 = 0 m and coincides with the flat plate ∂D. The Hertzian dipole, at the center
of which Ve is defined, lies 30 cm above the center of ∂Ωα, i.e. r dip = (0, 0, 0.3) m, it has
a length 2Ldip = 2 cm, and is oriented along the vector udip = uy .
Two different types of incident fields are considered, viz. a deterministic plane wave and
a plane wave with a random direction of propagation. The deterministic plane wave has
a frequency denoted f and a parallel-polarized electric field of magnitude of 1 Vm−1 . It
propagates in the plane Oyz, following the direction θi = 45◦ , φi = 270◦ . In this case,
the uncertainty of Ve stems uniquely from the random surface ∂Ωα and the statistical
moments of Ve are indexed by the subscript α.
The stochastic incident field has the same characteristics as the deterministic field except
for its direction of propagation, which is such that φi = 270◦ and θi = β1 is random and
uniformly distributed in the domain B1 = [15◦ ; 75◦ ], with the average direction given by
E[θi ] = E[β1 ] = 45◦ . In this case, the stochastic nature of Ve is caused by the randomness
of γ = (α1 , α2 , β1 ) and the corresponding statistics are written with the index γ .
10.4.3 Mean and standard deviation
Using a sparse-grid rule, the mean and the variance of Ve are computed for several
frequencies ranging from 50 MHz (λ ≈ 6 m) to 300 MHz (λ ≈ 1 m). On average, the
statistics in the case of a deterministic excitation are obtained in 13 function evaluations,
whereas in the case of the random incident field, 441 samples of Ve are required for the
quadrature rule to converge, i.e. to reach a relative error below 1%.
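A brute-force Monte Carlo estimator of the same two moments illustrates what the 13 (resp. 441) sparse-grid samples replace; the complex voltage model below is an invented placeholder, not the thesis solver:

```python
import cmath, math, random

# Brute-force Monte Carlo stand-in for the sparse-grid rule of Section 10.4.3:
# estimate E[Ve] and sigma[Ve] for a toy complex voltage Ve(alpha1, alpha2).
# The model is an invented placeholder, not the thesis EFIE solver.
def toy_Ve(a1, a2):
    return (7.5e-3 + 0.1 * a1) * cmath.exp(1j * (a2 - 0.2 * a1))

rng = random.Random(42)
N = 50_000
samples = [toy_Ve(rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05))
           for _ in range(N)]

mean = sum(samples) / N
var = sum(abs(v - mean) ** 2 for v in samples) / (N - 1)
std = math.sqrt(var)
print(mean, std)

# Sanity checks: the spread is non-negative and the mean cannot exceed the
# largest possible |toy_Ve| = 7.5e-3 + 0.1 * 0.05 = 0.0125.
assert std >= 0.0
assert abs(mean) <= 0.0125
```

The contrast with the 13 or 441 function evaluations of the sparse-grid rule is the point: each sample of the actual Ve requires a full method-of-moments solve, so quadrature efficiency matters.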
The real and imaginary parts of E[Ve ]α and E[Ve ]γ are displayed in Figs 10.17(a) and
10.17(b), respectively, where they are compared to the voltage V0 corresponding to the
average configuration ∂D. These plots highlight the resemblance between the values of
E[Ve ]α and V0 for f ∈ [50, 300] MHz. As is often done in first-order perturbation methods,
the assumption that E[Ve ]α ≈ V0 can be made over this range of frequencies. Due to the
randomness of the incident field, the graphs of the real and imaginary parts of E[Ve ]γ are
shifted with respect to the graphs of E[Ve ]α . Nonetheless, the evolutions of E[Ve ]α , V0
and E[Ve ]γ as a function of the frequency are very similar. This can be observed around
f =182 MHz, where the real parts reach their highest values, whilst the imaginary parts
have an inflection point.
Figure 10.17: Averages E[Ve]α (solid line) and E[Ve]γ (stars) versus V0 (dashed line): (a) real part; (b) imaginary part.
The standard deviation σ[Ve ]α , plotted in Fig. 10.18(a), indicates that the dispersion of
Ve represents approximately 1% of the value of |E[Ve ]α |. Depending on the application or
on the device connected at the port of the dipole, this may be considered as a negligible
physical dispersion of Ve . On the other hand, the randomness of E iβ provokes an increase
in the spread of Ve accounted for by σ[Ve ]γ that is comparable in magnitude to |E[Ve ]γ |.
In terms of the frequency f , the evolutions of σ[Ve ]α and σ[Ve ]γ are different. When
f ≤ 130 MHz, σ[Ve ]α is approximately equal to 4 µV. As f varies from 130 MHz to
240 MHz, σ[Ve ]α increases and levels off around 36 µV when f ∈[240;300] MHz. On the
other hand, σ[Ve ]γ first decreases moderately when f varies from 50 MHz to 160 MHz,
before increasing steadily once f ≥ 160 MHz.
10.4.4 Comparison with deterministic samples at f = 182 MHz
At the frequency f =182 MHz (λ ≈ 1.65 m), the statistical data summarized in Table 10.5
is post-processed and compared to the actual distribution of sample values of Ve .
E iβ          | E[Ve] [mV]    | σ[Ve] [mV]
deterministic | 7.56 − j 1.97 | 0.02
random        | 7.25 − j 2.08 | 2.22

Table 10.5: Statistical moments at f = 182 MHz.
For this purpose, 10⁴ deterministic samples are computed both for the case of a deterministic and a random excitation E iβ. These samples are shown in Figs 10.19(a) and
10.19(b). These graphs reveal the structure of the distribution of Ve ’s values in the complex
plane. In the case of a deterministic incident field, the samples of Ve are grouped around
the average and describe an “open-hand”-like pattern. When E iβ is random, the samples
of Ve form clusters that are due to the sampling of the parameter θi . A confirmation
hereof is obtained by zooming into one of these clusters as shown in Fig. 10.20, where the
“open-hand” distribution, arising from the variation of the geometry, is retrieved.
Next, the deterministic samples are normalized and then plotted in the complex plane
together with normalized Chebychev circles in Figs 10.21(a) and 10.21(b). In spite of
σ[Ve ]γ being two orders of magnitude larger than σ[Ve ]α , the statistical spread is conversely
more pronounced in the case of a deterministic E iβ. These results would not have come as
a surprise had the kurtosis been analyzed, since κ[Ve ]α = 4.74 is higher than κ[Ve ]γ = 1.63.
Figure 10.18: Standard deviation σ[Ve] versus the frequency f: (a) σ[Ve]α, random geometry; (b) σ[Ve]γ, random geometry and incident field.
Figure 10.19: 10⁴ samples of Ve at f = 182 MHz: (a) random geometry; (b) random geometry and excitation.
Figure 10.20: Details of one of the clusters of Fig. 10.19(b).
These results suggest an analogy with the theory of telecommunications where, in
frequency modulation, a high-frequency carrier is used to convey a modulating signal
of lower frequency. Similarly, the results obtained for the surface indicate that,
in the presence of a random geometry and incident field, the larger randomness of the
geometry acts as the “carrier”, whereas the more limited randomness of the incident
field acts as the “modulator”.
Figure 10.21: 10^4 normalized samples of Ve at f = 182 MHz, plotted as (Re(Vn), Im(Vn)): (a) random geometry; (b) random geometry and excitation.
10.4.5 Conclusion on the example of the surface
In this chapter, we have studied the voltage induced at the port of a Hertzian dipole by the
interaction between a random metallic surface and an incident field. Such a configuration
represents an extension of the thin-wire examples employed throughout this thesis. This
test case is a higher-dimensional deterministic problem in comparison with the 1–D thin-wire
examples. However, from a stochastic point of view, the dimension of the problem is
dictated by the number of random variables.
Comparisons have been made between the case where the random geometry is immersed
in a deterministic incident field and the case where the incident field is also random.
The results obtained for the average and the variance have revealed similarities, mainly
regarding the evolution of the mean in terms of the frequency. Differences have been
noted concerning the values of the averages obtained in these two cases of excitation,
but also in the physical dispersion of the induced voltage, measured in volts through the
standard deviation. Conversely, the statistical spread of Ve is larger in the presence of a
deterministic incident field, as indicated by the higher value of the kurtosis of Ve.
The distribution of the values of the voltage under both deterministic and random
illumination has highlighted a phenomenon of modulation between the randomness of
the geometry and that of the incident field. Likewise, a modulation of the field scattered
by a linearly vibrating flat plate under oblique incidence has been reported and studied in
a deterministic framework in [153].
10.5 Circular metallic plate with a random radius
In the final example that we consider, we are interested in comparing the pdf and cdf
reconstruction methods that are presented in this dissertation. In view of the complexity
of the DIFT method, we restrict ourselves to a 1–D stochastic problem, which involves a
setup similar to that of Section 10.4, i.e. the coupling between an elementary dipole and an
incident field in the presence of a metallic surface. The metallic plate is now circular, as
illustrated in Fig. 10.22(a).
Figure 10.22: Configuration: surface ∂Ωα and y-directed Hertzian dipole located at r_dip: (a) setup; (b) mesh of the circular surface.
This plate, which lies in the Oxy plane, is centered around the origin. Its radius α is
random and uniformly distributed in the interval A = [0.225; 0.275] m. Thus, the average
geometry corresponds to a disc with a radius of 0.25 m. Regardless of the value of α in A,
the circular surface is meshed into 196 triangular patches (≡ 280 RWG basis functions),
as shown in Fig. 10.22(b). Hence, as α varies in A, this mesh is stretched accordingly.
The Hertzian dipole is oriented along the direction udip = uy. It has a length 2Ldip = 2 cm
and is located 10 cm above the center of the plate, i.e. r_dip = (0, 0, 0.1) m.
The entire setup is illuminated by a 300 MHz (λ ≈ 1 m) deterministic plane wave E^i_β
with the following characteristics: |E^i_β| = 1 V m⁻¹, y-polarization, normal incidence. The
voltage Ve created at the port of the dipole by E^i_β is obtained via Eqs (10.12)–(10.14).
Such a configuration may arise, e.g., when investigating the tolerance to drifts in the
manufacturing of the circular plate, which can be viewed as the reflector of an antenna.
Moreover, this test case is inspired by the sensitivity analysis presented in [100] and
by the marching-on-in-length extrapolation performed in [36], where the effect of length
variations of a rectangular plate is studied.
Our objective is to analyze the uncertainty of |Ve | resulting from the randomness of α.
To do so, we determine the pdf f|Ve | and the cdf F|Ve | by using the following approaches:
• Deterministic sweep: 1000 deterministic samples of |Ve | are computed and sorted to
obtain the pdf fref and the cdf Fref , which serve as references.
• Polynomial-chaos approach: First, the polynomial-chaos approximation of Ve is
constructed according to the lines of Section 6.1.3, i.e.

  Ve(α) ≈ V^PC_{e,N∗}(x) = Σ_{k=0}^{N∗} V^PC_e(k) ψ_k(x),        (10.16)

where x is randomly distributed in the domain X ⊂ R according to the distribution
Px, and where ψk are Px-orthogonal polynomials. A deterministic sweep is then
performed by computing 1000 samples of V^PC_{e,N∗} to approximate the polynomial-chaos
pdf and cdf of |Ve|, which are denoted f^PC_{N∗} and F^PC_{N∗}, respectively. The computation
of the coefficients V^PC_e(k), for k = 1, . . . , N∗, by a quadrature rule acting on A,
represents the main numerical effort. Both a Legendre-uniform polynomial-chaos
(LU-PC) approach and a Hermite-Gauss polynomial-chaos (HG-PC) approach are
applied (see Section 6.2).
• Maximum-entropy (MaxEnt) approach: A total of Nmom normalized statistical
moments of |Ve| are first computed by quadrature and then employed in a maximum-entropy
algorithm (see Section 8.5) to deduce the MaxEnt pdf f^ME_{Nmom} and the cdf
F^ME_{Nmom} of |Ve|. The demanding numerical step concerns the evaluation of the Nmom
normalized statistical moments of |Ve|.
• Discrete inverse-Fourier-transform (DIFT) approach: The normalized characteristic
function {ΦU(k), k ∈ [0; kmax]} of |Ve| is first determined by quadrature. Then, ΦU
is inverted to deduce the DIFT pdf f^DIFT_{kmax} and the DIFT cdf F^DIFT_{kmax} (see Section 8.5).
The numerical cost of this approach is almost entirely devoted to the evaluation of
ΦU by quadrature.
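The polynomial-chaos step can be sketched for a one-dimensional uniform input. The observable below is a hypothetical smooth stand-in for Ve(α) (each true sample requires an EFIE solve); the LU-PC coefficients are obtained by Gauss-Legendre quadrature on A, after which the deterministic sweep of the surrogate is numerically cheap:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def observable(alpha):
    """Hypothetical stand-in for Ve(alpha); the true model needs an EFIE solve."""
    return 9.8e-3 + 0.5e-3 * np.sin(8 * np.pi * alpha)

A_lo, A_hi = 0.225, 0.275   # radius interval A [m]
N_star = 10                 # polynomial-chaos order

# Gauss-Legendre nodes on [-1, 1], mapped to the radius interval A.
x, w = leggauss(30)
f_vals = observable(0.5 * (A_hi - A_lo) * x + 0.5 * (A_hi + A_lo))

# Orthonormal Legendre polynomials psi_k(x) = sqrt(2k+1) P_k(x) with respect
# to the uniform density 1/2 on [-1, 1]; c_k = E[Ve psi_k] by quadrature.
def psi(k, x):
    return np.sqrt(2 * k + 1) * legval(x, np.eye(k + 1)[k])

coeffs = np.array([0.5 * np.sum(w * f_vals * psi(k, x))
                   for k in range(N_star + 1)])

# Deterministic sweep of the surrogate: 1000 cheap evaluations.
xs = np.random.default_rng(0).uniform(-1.0, 1.0, 1000)
sweep = sum(c * psi(k, xs) for k, c in enumerate(coeffs))
print(f"PC mean = {coeffs[0]:.4e} V")   # psi_0 = 1, so c_0 is the average
```

Sorting the 1000 surrogate samples then yields the approximate pdf and cdf of |Ve| at a fraction of the cost of 1000 EFIE solves.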
10.5.1 Numerical effort
To begin with, the stochastic methods presented above are compared with respect to their
numerical performance, as indicated by their complexities. Since 1000 samples are used in
the deterministic sweep to obtain fref and Fref, this number is taken as the upper bound
for the complexity of the other stochastic methods.
Since α is the only random input, all the statistics are computed with the aid of a
Clenshaw-Curtis quadrature rule, which employs NV samples of Ve, i.e. NV evaluations
of the deterministic model based on an EFIE. Table 10.6 summarizes the values of NV
required by the different methods³. This table also contains the complexity obtained when
a Monte-Carlo quadrature rule is employed.
Stochastic method    level        Clenshaw-Curtis: NV    Monte-Carlo: NV
LU-PC                N∗ = 10      513                    1025
                     N∗ = 20      513                    > 1025
HG-PC                N∗ = 10      129                    > 1025
                     N∗ = 20      257                    > 1025
Maximum entropy      Nmom = 6     17                     129
                     Nmom = 12    33                     257
DIFT                 kmax = 10    33                     257
                     kmax = 20    129                    513
                     kmax = 30    513                    513
                     kmax = 50    1025                   1025

Table 10.6: Complexity NV using a Clenshaw-Curtis and a Monte-Carlo rule.
In each of the stochastic approaches, the use of a higher level⁴ translates into an increase
of the complexity NV. This increase originates from the need to integrate higher-order
polynomials in the polynomial-chaos and the maximum-entropy methods, whereas in the
DIFT approach, the increase of NV is caused by the highly oscillatory nature of
the integrals that define ΦU(k) for high values of k. In the cases considered, the
Clenshaw-Curtis rule generally outperforms the Monte-Carlo rule.
³ The convergence of the quadrature rule is monitored via the maximum relative error Emax of the statistical moments (Emax ≤ 0.01) and of the PC coefficients (Emax ≤ 0.05). For the characteristic function, the relative error is defined using the L1-norm of ΦU (see Eq. (9.13)) with the target EΦU ≤ 0.05.
⁴ The “level” is used as a generic term referring to N∗ in the PC method, to Nmom in the maximum-entropy approach and to kmax in the DIFT method.
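The behaviour reported in Table 10.6 can be mimicked on a toy integrand. The sketch below implements the textbook (n+1)-point Clenshaw-Curtis rule on [−1, 1] via the standard cosine-sum weights (not the thesis implementation) and compares its error with a Monte-Carlo estimate using the same number of samples; the smooth integrand is an arbitrary stand-in for the moment integrands of Ve:

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1] (n even)."""
    i = np.arange(n + 1)
    x = np.cos(np.pi * i / n)
    w = np.empty(n + 1)
    for m in i:
        s = sum((1.0 if j == n // 2 else 2.0) / (4 * j**2 - 1)
                * np.cos(2 * np.pi * j * m / n) for j in range(1, n // 2 + 1))
        c = 1.0 if m in (0, n) else 2.0
        w[m] = (c / n) * (1.0 - s)
    return x, w

f = lambda x: np.exp(x) * np.cos(4 * x)          # smooth stand-in integrand
exact = (np.exp(1) * (np.cos(4) + 4 * np.sin(4))
         - np.exp(-1) * (np.cos(4) - 4 * np.sin(4))) / 17

rng = np.random.default_rng(1)
for level in range(2, 7):                        # nested levels: NV = 2^level + 1
    n = 2**level
    x, w = clenshaw_curtis(n)
    err_cc = abs(np.dot(w, f(x)) - exact)
    err_mc = abs(2.0 * f(rng.uniform(-1, 1, n + 1)).mean() - exact)
    print(f"NV = {n + 1:3d}  CC error = {err_cc:.1e}  MC error = {err_mc:.1e}")
```

For smooth integrands the Clenshaw-Curtis error decays spectrally with the level, whereas the Monte-Carlo error decays only as NV^(−1/2); for roughly behaved integrands the gap narrows, consistent with the "generally outperforms" observation above.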
10.5.2 Approximations of the pdf of |Ve|
The LU-PC pdfs f^LU_{N∗} are shown in Fig. 10.23(a). Given the already high accuracy of f^LU_10,
the improvement achieved by employing N∗ = 20 coefficients instead of N∗ = 10 is not
significant. This is in contrast with the HG-PC pdf f^HG_{N∗}, which is refined when N∗ is
increased from 10 to 20, as highlighted by Fig. 10.23(b). This can be observed by
comparing the supports of f^HG_10 and f^HG_20 to the support of fref, or by comparing the
values of the HG-PC pdfs to the values of fref, for instance when |Ve| ≤ 9.15 mV or
|Ve| ≥ 10 mV. As in the thin-wire examples analyzed in Sections 6.2 and 6.3, the
improved performance of the LU-PC over the HG-PC stems from the steeper decay of the
LU-PC coefficients of Ve.

The MaxEnt pdfs f^ME_6 and f^ME_12 are displayed in Fig. 10.24(a). These graphs confirm the
results of Section 8.5, viz. the smoothness of the MaxEnt pdfs. Increasing Nmom leads to
slight adjustments of the support and of the values of f^ME_{Nmom}, e.g. for |Ve| ≥ 10.1 mV.

Figure 10.24(b) depicts the DIFT pdfs f^DIFT_{kmax} that correspond to kmax ∈ {10, 20, 30, 50}.
These curves oscillate and assume negative values, caused by Gibbs oscillations
around the points of discontinuity of fref, e.g. around |Ve| = 10.42 mV. The accuracy
of the “low-frequency” pdf f^DIFT_10 is comparable to that of f^ME_12. Further, when kmax ≥ 30,
f^DIFT_{kmax} restitutes details as fine as the discontinuities of fref around |Ve| = 9.30 mV,
|Ve| = 9.38 mV and |Ve| = 10.21 mV. Among all the results shown in Figs 10.23(a)–
10.24(b), these DIFT pdfs are the only ones that provide such a high resolution.
Figure 10.23: Comparisons between the polynomial-chaos pdfs and fref: (a) LU-PC pdf of |Ve|; (b) HG-PC pdf of |Ve|.
Figure 10.24: Comparisons between the maximum-entropy and the DIFT pdfs and fref: (a) maximum-entropy pdf of |Ve|; (b) DIFT pdf of |Ve|.
10.5.3 Approximation of the cdf of |Ve|
The cdfs obtained via the LU-PC, HG-PC, MaxEnt, and DIFT methods are now compared
to Fref, which is plotted in Fig. 10.25.
Figure 10.25: Reference cdf Fref, obtained using 1000 samples of |Ve|.
For this purpose, an example of practical exploitation of the cdf of |Ve| is described.
By hypothesis, the expected nominal value of Ve corresponds to the unperturbed
configuration. In this case, the radius of the circular plate equals α0 = 0.25 m and the
induced voltage is V0 = |Ve(α0)| = 9.846 mV. However, the uncertainty of α causes Ve
to vary and to differ from V0. If the maximum tolerated variation of Ve is set at 5%
of V0, it makes sense to determine the probability Ptol of observing |Ve| in the interval
[V−; V+], where V− = 0.95 V0 = 9.354 mV and V+ = 1.05 V0 = 10.338 mV. The accuracy of
Ptol = F|Ve|(V+) − F|Ve|(V−) depends on the accuracy of the cdf F|Ve| at V− and V+.
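Given any set of samples of |Ve|, Ptol follows directly from the empirical cdf. A minimal sketch, with synthetic samples standing in for the 1000 EFIE-based samples:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for the 1000 deterministic samples of |Ve| [V].
v = rng.normal(9.85e-3, 4e-4, 1000)

V0 = 9.846e-3                       # nominal |Ve(alpha_0)|
V_lo, V_hi = 0.95 * V0, 1.05 * V0   # 5% tolerance band

def ecdf(samples, t):
    """Empirical cdf F(t): fraction of samples not exceeding t."""
    return np.mean(samples <= t)

P_tol = ecdf(v, V_hi) - ecdf(v, V_lo)
print(f"P_tol = {P_tol:.3f}")
```

By construction, F(V_hi) − F(V_lo) equals the fraction of samples falling inside (V_lo, V_hi], so the result can be cross-checked against a direct count.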
Using Fref, we obtain the value which is taken as a reference, i.e.

  P^ref_tol = Fref(V+) − Fref(V−) = 0.925 − 0.205 = 0.720.        (10.17)
The values of Ptol obtained via the different stochastic approaches are gathered in
Table 10.7, together with their relative errors with respect to P^ref_tol.
The probability Ptol is well approximated by all the stochastic approaches, with a relative
error lower than 4 %. The largest errors arise when the MaxEnt cdfs are employed.
Stochastic method    level                    Probability Ptol    Relative error of Ptol
LU-PC                N∗ = 10                  0.730               +1.4 · 10⁻²
                     N∗ = 20                  0.725               +6.9 · 10⁻³
HG-PC                N∗ = 10                  0.744               +3.3 · 10⁻²
                     N∗ = 20                  0.714               −8.3 · 10⁻³
Maximum entropy      Nmom = 6                 0.742               +3.1 · 10⁻²
                     Nmom = 12                0.736               +2.2 · 10⁻²
DIFT                 kmax = 10                0.730               +1.4 · 10⁻²
                     kmax = 20                0.726               +8.3 · 10⁻³
                     kmax = 30                0.722               +2.8 · 10⁻³
                     kmax = 50                0.720               0
Fref                 1000 samples of |Ve|     0.720               0

Table 10.7: Probability Ptol of observing |Ve| in the interval [V− = 9.354; V+ = 10.338] mV.
The results obtained with F^DIFT_10, F^LU_10 and F^HG_10 are accurate up to approximately 1%.
A higher precision is achieved with the higher-order LU-PC and HG-PC cdfs, and more
notably with the DIFT cdfs. A match with Fref is even reached with F^DIFT_50. Apart from
F^HG_20, which underestimates Ptol, and F^DIFT_50, which is precise, all the other cdfs slightly
overestimate Ptol.
10.5.4 Conclusion

The configuration studied in this section demonstrates the capabilities of pdf- and cdf-estimation
techniques on an interaction problem that involves a 2–D scattering device, viz.
a circular plate with a randomly varying radius. A tolerance analysis has been performed
to assess the effect of the geometrical uncertainty of the plate on the voltage induced at
the port of a dipole located above the center of the plate.
Such a study permitted a comparison between the performances of various stochastic
methods that are proposed in this dissertation. In terms of their complexities, the
computation of standard statistical moments for use in the maximum-entropy method
proved to be very fast, when carried out with a Clenshaw-Curtis rule. In the DIFT
approach, the evaluation of the characteristic function ΦU (k) for lower values of k can
also be performed rapidly. The determination of the characteristic function for higher
values of k is more demanding, as is the construction of higher-order polynomial-chaos
representations.
On the other hand, concerning the accuracy of the pdfs and cdfs obtained, the MaxEnt
and the HG-PC results were not as accurate as the LU-PC and DIFT results, which is
in agreement with the observations made for the thin-wire setup. Further, the results
highlight the high resolution that can be attained by the DIFT approach.
Hence, a trade-off needs to be found between the requirement for a high accuracy and
the need to preserve the numerical efficiency of the stochastic approach by limiting the
complexity of the quadrature rule used to evaluate the statistical moments.
Chapter 11

Conclusions and recommendations

11.1 Summary and conclusions
Computational simulations play an essential role in modern-day engineering. These
simulations represent a versatile tool to model devices, both for design purposes during the
conception stage and for maintenance analyses of existing objects. This increasing influence
of numerical methods also poses the question of the robustness of these methods with
respect to uncertainties affecting their input parameters. The evaluation of this effect
makes it possible to build confidence in the performance and accuracy of the computational approach.
This thesis has addressed this issue by employing probability theory to randomize a
deterministic numerical model, to propagate the randomness through this model, and
to assess the effect of this randomness on the output of the model.
Deterministic models
The deterministic model in question, described in Chapter 2, was based on a frequency-domain
integral-equation formulation to represent interactions between material systems
in a receiving state and electromagnetic fields created by external sources. As schematically
represented in Fig. 11.1, the inputs i of this model consisted of the geometrical and
physical parameters of the scattering device together with the characteristics of the
incident field, while the output Y(i) was chosen as the equivalent Thévenin network,
which comprised an ideal voltage source in series with an impedance.
The example of a thin wire over an infinite ground plane has been employed as the leading
example throughout this thesis. Despite the apparent simplicity of this configuration,
the study of this test case has turned out to be very instructive. Various
modeling strategies have been adopted to gain in computational efficiency, viz. meshing of
the geometry of the thin wire via quadratic-segment basis functions, the reduction of the
scattering problem to line integrals via thin-wire approximations, and the use of a reduced
kernel in the EFIE. All these choices translated into a rapid and accurate performance
of the deterministic model. Working in the frequency domain also implied that most
of the electromagnetic quantities became complex variables rather than real variables.
Furthermore, the wire example constituted a resonant structure, which was helpful for
evaluating the behaviour of the stochastic methods under such harsh conditions.

Figure 11.1: Generic representation of the deterministic method.
In comparison with this wire example, the case of a metallic plate analyzed in Chapter 10
presented somewhat different features. The need for a fine geometrical mesh of the surface
led to a large numerical problem, which was slower to run than the thin-wire example.
The investigation of this example highlighted the impact of the numerical cost of the
deterministic model on the overall performance of the stochastic approach.
Randomization
The starting point of the stochastic rationale consists in regarding the uncertainties of
the inputs i in their range I as random variations that follow a prescribed
probability distribution PI. Chapters 3, 5, 6 and 10 have highlighted the importance
of the selection of the input probability distribution, as it allocates a weight to all the
possible values of the inputs i in I, according to their likelihood of occurrence.
This randomness has been propagated through the deterministic model from the input
variables to the response variables Y(i), as depicted in Fig. 11.2. The underlying boundary-value
problem did not allow for a direct determination of the probability distribution PY
of the observable from the input's probability distribution. Nevertheless, the fundamental
theorem of Section 3.3.3 has been a powerful tool that enabled the definition of
statistical moments of the observable as computable integrals over the space of random
inputs. As stated in Chapter 8, the fundamental theorem allowed us to establish a dialogue
with the yet unknown probability distribution PY, to determine its essential features.

Figure 11.2: Stochastic parameterisation of the deterministic model.
Computational strategy for the evaluation of the statistics
In the presence of multiple random inputs, the generally multi-dimensional input space
implies that statistical moments can only be computed at the cost of approximating multi-dimensional
integrals. To this end, several nested quadrature rules have been utilized,
ranging from a Cartesian-product rule to a Monte-Carlo rule, while including a sparse-grid
and a space-filling-curve rule. These different rules have been compared in Chapter 4 with
respect to their complexities, i.e. the number of deterministic problems that needed to
be solved. These comparisons revealed the suitability of the trapezoidal rule for a small
number of dimensions (lower than three), whereas the sparse-grid and the space-filling-curve
rules were efficient for a moderate number of dimensions (up to ten). Beyond a dozen
dimensions, the Monte-Carlo rule seems difficult to outperform.
In an attempt to reduce the complexity of these rules during the evaluation of multiple
integrals, the choice has been made to re-use the same nested quadrature rules to compute
multiple integrals simultaneously. The benefit of such a procedure resides in the possibility
of storing and re-using samples for all the integrals, which is particularly advisable when the
integrals to be computed have integrands of similar forms. One limitation of such a choice
is, however, that the simultaneous convergence of all the integrals is required. Furthermore,
such a strategy is also accompanied by memory requirements to store the values of the
observable, for use at higher levels of quadrature.
Apart from the Monte-Carlo rule, the examples analyzed in this thesis have shown that
the convergence of the various quadrature rules was strongly influenced by the regularity
of the integrand, which in turn was conditioned by the smoothness of the observable. In
cases where the observable was roughly behaved, owing for instance to resonances, the rate of
convergence generally deteriorated, leading to a non-negligible complexity. This increased
complexity imposes the necessity to solve a large number of deterministic problems, similar
to what would have been required by a statistical approach or a deterministic sweep.
Acceleration techniques for the computation of the statistics
Given the paramount importance of the computation of statistical moments in the
stochastic approach, several methods have been applied to lower the computational effort
required. The objective of all these methods was less to exactly interpolate the observable
than to derive numerically cheap methods for obtaining the statistical moments.
On the one hand, the different levels of local expansions used in the perturbation method
described in Chapter 5 have significantly lowered the computation time of the statistics.
The accuracy of the statistics evaluated in this way strongly depended on the smoothness of the
observable as a function of the configuration. Hence, for smoothly varying observables,
the reduced complexity of this method makes it a very interesting numerical approach.
On the other hand, the polynomial-chaos method explored in Chapter 6 was rooted in
a spectral representation of the observable via orthogonal polynomials associated with
prescribed probability distributions. The representation thus derived permitted a straightforward
evaluation of statistical moments such as the average and the variance of the
output variable. The numerical effort required to build this spectral decomposition strongly
depends on the correspondence between the family of orthogonal polynomials that is
employed and the yet unknown probability distribution of the observable.
Interestingly, similarities can be pointed out between the perturbation and polynomial-chaos
methods. Both methods involve a pre-computation stage that can be more or
less demanding, in particular in the case of the polynomial chaos, but after which the
subsequent evaluations of the observable are carried out at a significantly reduced numerical
cost. This feature even allows for a deterministic sweep to approximate the unknown
probability distribution of the observable, as was done in Chapters 5 and 6.
Semi-intrusive method
The semi-intrusive method presented in Chapter 7 can also be regarded as a technique
to accelerate the numerical evaluation of the statistics. This method characterizes the
geometrically induced randomness of the observable independently from the randomness
stemming from the incident field. The assumption of an incident plane wave having a fixed
direction of propagation made this method possible, thereby allowing for the evaluation
of the average and covariance of the transmitting-state current. These moments, which
need to be computed only once, can then be post-processed to deduce the average and
the variance of the induced voltage, regardless of the polarization of the incident field.
Nonetheless, the determination of higher-order moments using this approach is hindered
by the necessity to evaluate statistical moments of higher-order tensors, which can be
numerically tedious to manipulate.
Post-processing and interpretation of the statistical moments
The practical interpretation of the significance of the statistical moments for the dispersion
of the complex-valued observable has then been illustrated in Chapter 8. The physical
uncertainty quantification achieved by the first two statistical moments has been
demonstrated with the aid of Chebychev's inequality, which has been applied both by
viewing the observable as a complex-valued scalar, defining its average and its variance,
and by handling it as a real-valued vector, defining its average vector and covariance
matrix. In either case, the average and the variance have been utilized to normalize the
output variable. The skewness has then been introduced for the real and imaginary parts
of the observable as a measure of the asymmetry of its spread in the complex plane.
The analysis of the kurtosis of the observable has been advocated as a measure of the
extent of the statistical spread of the observable, which should not be considered as similar
to its physical spread: examples have been shown of cases where a large physical spread
still corresponded to a limited statistical spread and vice versa. Hence, we encourage
the combined use of the variance as a dimensioning parameter to quantify the physical
variations of the output, and the kurtosis as a risk indicator of the likelihood of observing
extreme samples that could damage the proper functioning of electronic devices.
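As a reminder of how the first two moments bound the physical spread, the sketch below checks Chebychev's inequality for a toy complex observable, with σ² = E[|V − μ|²]; the bound holds for any distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy complex observable; the bound below does not depend on the distribution.
v = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)
mu = v.mean()
sigma = np.sqrt(np.mean(np.abs(v - mu) ** 2))

# Chebychev's inequality: P(|V - mu| >= r sigma) <= 1 / r^2 for any r > 0.
for r in (2.0, 3.0):
    tail = np.mean(np.abs(v - mu) >= r * sigma)
    print(f"r = {r}: empirical tail = {tail:.4f}  <=  bound 1/r^2 = {1 / r**2:.4f}")
```

The empirical tail is typically far below the bound, which is precisely why the kurtosis is needed as a sharper indicator of extreme-sample risk.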
Methods of approximating the probability distribution of the output
The analysis of the statistical moments provided valuable but partial information about
the randomness of the output variable. A complete characterization of the randomness
of the observable is, however, only feasible by determining its probability distribution
or, equivalently, its probability density function, its cumulative distribution function
or its characteristic function. Two probabilistic methods achieving this goal have been
implemented to study real-valued observables deduced from the Thévenin circuit, such as
the modulus of the induced voltage source or the imaginary part of the Thévenin impedance.
The well-documented maximum-entropy principle is a heuristic approach, which permitted
the approximation of the pdf of the observable based on the known information about
the randomness of the observable, viz. the number of statistical moments available. The
output's pdf was approximated by a set of smooth parametric pdfs that were fine-tuned
by solving an optimization problem.
An alternative to the study of the statistical moments is provided by the characteristic
function, which is computable and can be inverted to deduce the pdf and the cdf of
the observable. A strategy to do so has been detailed in Chapter 9, and coined the
discrete-inverse-Fourier-transform (DIFT) method. The computation of the characteristic
function has been shown to be all the more efficient when the characteristic function was
evaluated over a limited range. The DIFT pdf exhibited Gibbs oscillations symptomatic
of its discontinuity, whilst the DIFT cdf has turned out to be very robust.
Among the various methods presented in this thesis, and summarized in Fig. 11.3, the
maximum-entropy and the DIFT methods are not the only ones capable of completely
assessing the randomness of observables. The perturbation and polynomial-chaos methods
also offer such a possibility, with the notable difference that they both take advantage of
a modified representation of the deterministic model to perform a deterministic sweep at
lower cost, whereas the maximum-entropy and the DIFT methods apply probabilistic tools
to the initial deterministic model. One could also consider employing the perturbation or
polynomial-chaos methods in conjunction with the maximum-entropy or DIFT methods;
however, the advantages of such combinations are not necessarily obvious.
Figure 11.3: Summary of the stochastic methods employed in this thesis to determine the pdf fY or the cdf FY, knowing the input distribution PI or the pdf fI. Via the perturbation and polynomial-chaos methods, a deterministic sweep is applied to a modified version of the deterministic model, which is numerically cheap to execute.
From the quantification to the management of uncertainties
Once the cdf of the observable has been computed, it yields a large amount of directly
usable information concerning the spread of the observable. This is particularly
valuable when decisions need to be made on the basis of this spread. The
examples presented in Chapter 10 have illustrated the possibility of determining quantiles as
well as probabilities of failure, which are all useful items for reliability and performance
analyses of electronic devices.
11.2 Outlook

Grail stochastic method
An ideal stochastic method would consist in the formulation of a physically motivated
equation having the output's probability distribution as unknown. All the efforts would
then be devoted to finding efficient ways of solving such an equation once, to obtain
the entire probability distribution, or equivalently the pdf, the cdf or the characteristic
function. In other words, such an approach would study the physics of the statistics of the
observable, rather than studying the statistics of the physical phenomenon that defines the
observable. One possible approach towards building such an equation might be to give a
physical interpretation to the probability distribution and then build an equation
with respect to this new unknown, as is done in statistical physics with the Fokker-Planck
equations [154, 155].
In our case however, the fundamental physical equation is the EFIE that provides the
transmitting-state current, which is at the core of the definitions of the Thévenin
parameters. Transforming this equation into a physical equation involving the probability
distribution of the transmitting-state current is not obvious. For this reason, all the
approaches considered in this thesis are based on successive evaluations of the observable,
to allow for the reconstitution of its probability distribution.
Starting points of the stochastic approaches
The randomization of the deterministic model can also be performed otherwise, by
considering the random inputs of the configuration, i.e. the parameters of the geometry
and the incident field, as stochastic processes rather than random variables. However, in
these cases the application of a Karhunen-Loève transform would lead to a parameterisation
by a discrete set of random variables.
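Such a Karhunen-Loève parameterisation can be sketched by discretizing a covariance kernel and truncating its eigen-expansion; the exponential kernel and correlation length below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

n, ell = 200, 0.2                    # grid size; hypothetical correlation length
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)   # exponential covariance

# Discrete Karhunen-Loeve: eigen-decomposition of the weighted covariance
# matrix (integration weight h = 1/n on a uniform grid).
h = 1.0 / n
vals, vecs = np.linalg.eigh(h * C)
vals, vecs = vals[::-1], vecs[:, ::-1]               # sort descending

m = 5                                # truncation: m dominant random variables
rng = np.random.default_rng(0)
xi = rng.standard_normal(m)          # uncorrelated random amplitudes
realization = (vecs[:, :m] / np.sqrt(h)) @ (np.sqrt(vals[:m]) * xi)

print(f"variance captured by {m} modes: {vals[:m].sum() / vals.sum():.2%}")
```

The truncation level m is exactly the "discrete set of random variables" mentioned above: a stochastic process is replaced by a handful of uncorrelated amplitudes weighted by the dominant eigenpairs of its covariance.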
Besides, rather than considering the geometry as random, it is also possible to start by
regarding the transmitting-state current as the random input, or alternatively the
scattering operator as a random operator. The latter assumption makes it possible to
employ random-matrix theory to characterize the randomness of the operator [156, 157].
However, there should be a means for the designer or engineer to ground the choice of
the type of random matrix on available or measurable data, and on physical insight. For
example, the so-called random-coupling model, developed by Anlage et al., is an interesting
approach, which combines random-matrix theory with wave chaos theory to perform
statistical predictions of electromagnetic interactions in complex enclosures [158, 159].
More intrusive stochastic methods can also be applied to study the properties of the
operators underlying the definition of the observables. The intrusive polynomial-chaos
methods proposed by Schwab et al. [145, 160] are well suited for this purpose. From this
perspective, it could be interesting to analyze the statistical properties of the spectrum
of the operators which generally capture the essence of the physical phenomenon studied.
Furthermore, if the observables can be defined directly in terms of the spectrum of the
operator, the link between the statistical properties of the operator and the statistical
moments of the observable will be more obvious.
Quadrature rules
The arsenal of quadrature rules can be complemented by adaptive rules, which can
advantageously cope with local regions where the observable behaves irregularly, as is the
case in the presence of resonances. Moreover, the DIFT method has highlighted the need
for specific quadrature rules, such as a Levin rule or an asymptotic-expansion method, to
efficiently handle highly oscillatory integrals.
Instead of grounding the quadrature formulas on polynomial interpolations, one can also
aim for an extrapolation-based approach. This is possible by resorting to a marching-on-in-anything method [36, 66] or a Padé approximant technique [161, p. 354]. The range
of accuracy of these expansions is likely to be larger than the range of accuracy of the
perturbation expansions that have been applied in Chapter 5.
Post-processing methods
The methods to recover the probability distribution via statistical techniques have been
described for the cases of real-valued observables. These methods can however be extended
to the case of complex-valued observables, by regarding the latter as real-valued vectors.
Multi-dimensional maximum-entropy methods have been documented in [162, 163] and
the application of multidimensional inversion methods is also discussed in [164, 165].
Larger deterministic model
Apart from the perturbation and the semi-intrusive approaches, all the other methods
considered in this thesis are non-intrusive, i.e. they regard the deterministic model as
a black box which yields a given value of the observable for known values of its inputs.
A corollary hereof is that these methods can be extended without difficulty to
different types of physical phenomena that may arise for instance in antenna or scattering
problems. This non-intrusiveness has also been advantageously exploited to adapt, in a
straightforward manner, the stochastic methods employed for the thin wire to the case of
the random geometry of a metallic plate.
Even though all the structures considered in this dissertation are perfectly electrically
conducting, the non-intrusive stochastic methods can also be applied to electromagnetic
interactions that involve homogeneous and inhomogeneous dielectric objects.
Similarly, the stochastic analysis of more elaborate physical interactions can be
contemplated. For this purpose, the different dimensions of the problem need to be
considered. First, from a deterministic point of view, larger problems should translate
into a more complex deterministic process, the numerical efficiency of which can strongly
influence the computation time of the entire stochastic approach. Secondly, the number
of random inputs should guide the choice of the quadrature rule that is employed to
compute the statistical moments, as was done in the applications considered in this
work. Thirdly, the presence of multiple outputs has also been dealt with in this thesis.
Heterogeneous outputs can be treated separately as was done in this work with the
equivalent Thévenin voltage and impedance. Otherwise, the mutual statistical coupling
between the outputs can be taken into account by defining vector-valued statistical
moments such as the covariance matrices, which can subsequently be decomposed
spectrally by applying a principal-component analysis or a Karhunen-Loève transformation.
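By way of illustration, the spectral decomposition mentioned above can be sketched numerically. The data below are purely synthetic stand-ins for joint samples of two statistically coupled outputs, and the variable names are ad hoc choices for this sketch only.

```python
import numpy as np

# Synthetic joint samples of two coupled outputs (illustrative data only).
rng = np.random.default_rng(0)
base = rng.standard_normal(1000)
samples = np.column_stack([base + 0.1 * rng.standard_normal(1000),
                           2.0 * base + 0.1 * rng.standard_normal(1000)])

# Covariance matrix of the vector-valued observable.
C = np.cov(samples, rowvar=False)

# Principal-component analysis: spectral decomposition of C.
# eigh applies since C is Hermitian and positive semi-definite.
eigenvalues, eigenvectors = np.linalg.eigh(C)

# The dominant eigenpair captures the statistical coupling between the outputs.
dominant = eigenvectors[:, np.argmax(eigenvalues)]
print(eigenvalues)   # variances along the principal directions
print(dominant)      # direction of strongest joint variation
```

The dominant eigenvector aligns with the direction (1, 2) built into the synthetic samples, i.e. the mutual coupling is concentrated in a single principal component.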
Appendix A
Some concepts of probability theory
This appendix gathers some tools from probability theory that have been employed in
this thesis. Firstly, an axiomatic definition of probability spaces is recalled, followed by a
summary of some of the properties of random variables and random processes. A wealth
of books and references are available for an in-depth description of these probabilistic
concepts [67–69, 86, 166].
A.1 Probability space
Sigma-algebra
If the ensemble I is nonempty and P(I) denotes the collection of all the subsets of I, the
subset E of P(I) is called a sigma-algebra (σ-algebra) or σ-field on I, if and only if it
fulfills the following conditions [67, p. 111]:
• I belongs to E,
• E is stable by complementation, i.e., for any set B ∈ E, its complement I \ B
also belongs to E,
• E is stable by countable union, i.e. for any countable sequence {Bn ∈ E, n ∈ N},
the union ∪_{n∈N} Bn also belongs to E.
The σ-algebra E = P(I) can always be defined, but it is usually too large and contains sets
that are not necessarily relevant for a probabilistic study. Given an ensemble A ∈ P(I)
there exists a σ-algebra generated by A denoted σ(A), which is the smallest σ-algebra
containing A. The pair (I, E) defines a measurable space, and the elements of E are called
measurable sets [67, p. 112].
Probability measure
A probability measure on the measurable space (I, E) is a positive function PI : E → R+
such that
• PI is additive: ∀B1, B2 ∈ E, B1 ∩ B2 = ∅ ⇒ PI(B1 ∪ B2) = PI(B1) + PI(B2),
• for any sequence {Bn ∈ E, n ∈ N}, if (Bn)n increases (Bn ⊂ Bn+1) and if the union
∪_{n∈N} Bn belongs to E, then PI(∪_{n∈N} Bn) = lim_{n→∞} PI(Bn),
• PI (∅) = 0 and PI (I) = 1.
Probability space
The triplet (I, E, PI ) defines a probability space. The elements B ∈ E are events, and the
number PI (B) measures the probability of occurrence of the event B.
Two events A, B ∈ E are statistically independent if and only if PI(A ∩ B) = PI(A)PI(B).
The σ-algebra E contains the useful information about the structure of the randomness
of i in I [69, p. 65].
The probability density function (pdf) of the law PI, when it exists, is the positive function
fI defined on I such that

PI(C) = ∫_C fI(i) di,  for any set C ∈ E.    (A.1)
The pdf characterizes the randomness of i completely, and is uniquely associated with the
probability law PI . A partial description of the randomness of i is also possible through
the expectation of i, denoted E[i]:

E[i] = ∫_I i′ dPI = ∫_I i′ fI(i′) di′,    (A.2)
where the first integral should be understood as a Stieltjes integral, and the second one
as a Riemann integral over I, when fI is Riemann integrable.
A.2 Random variables
Random variable [77, p. 14]: Given the probability space (I, EI, PI) and the
measurable Banach space (Y, EY), where EY is a σ-algebra on Y, a Y-valued random
variable Y is a measurable function from I to Y, i.e. ∀U ∈ EY, Y⁻¹(U) ∈ EI.
The random variable Y induces a measure PY on EY, also known as the image measure:

PY : EY → R+,  U0 ↦ PY(U0) = PI(Y⁻¹(U0)).    (A.3)
This induced measure is therefore expressed in terms of the measure PI. Once PY is
known, the probability space (Y, EY, PY) is fully characterized, and so is the randomness
of Y. Equivalently, and provided that it exists, the pdf fY of the law PY can
be defined through Eq. (A.1) such that

PY(C) = ∫_C fY(Y′) dY′,  for any set C ∈ EY,    (A.4)
as well as the expectation of Y, denoted E[Y]:

E[Y] = ∫_Y Y′ dPY = ∫_Y Y′ fY(Y′) dY′.    (A.5)
The definitions of fY and E[Y ] depend on the availability of the probability measure PY .
Regarding E[Y ] however, the following fundamental theorem allows for its computation
without resorting to PY or fY .
Fundamental transport theorem: Given a measurable function h on the
probability space (I, EI, PI), it defines a random variable h(i) that is also measurable,
and its expectation E[h(i)] reads

E[h(i)] = ∫_I h(i) dPI = ∫_I h(i) fI(i) di,    (A.6)

where fI is the pdf of i, when it exists. A proof of this theorem can be found in [68, p. 124].
Thus, this expectation can be evaluated merely from PI, without knowing the law Ph(i).
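A minimal numerical sketch of Eq. (A.6), assuming a uniform random variable i on [0, 1] (so that fI = 1) and the illustrative choice h(i) = i², whose exact expectation is 1/3: the expectation is evaluated from PI alone, both by a Riemann sum and by sampling, without ever constructing the law Ph(i).

```python
import random

# Uniform i on [0, 1] (f_I = 1) and illustrative h(i) = i**2; exact E = 1/3.
def h(i):
    return i * i

def f_I(i):
    return 1.0  # pdf of the uniform law on [0, 1]

# Riemann (midpoint) evaluation of E[h(i)] = integral of h(i) f_I(i) di.
n = 100000
expectation_quad = sum(h((k + 0.5) / n) * f_I((k + 0.5) / n) for k in range(n)) / n

# The same expectation via sampling from P_I, never constructing P_h(i).
random.seed(1)
expectation_mc = sum(h(random.random()) for _ in range(100000)) / 100000

print(expectation_quad, expectation_mc)  # both close to 1/3
```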
Random-variable transformation theorem: For known cumulative distribution
functions (cdf), FI and FX , of the real random variables i and x respectively, the random
variable i associated with the probability space (I, EI , PI ) can be expressed as a function
of the random variable x associated with (X , EX , PX ) by the following formula [86, p. 28]
i : X ∋ x ↦ FI⁻¹ ∘ FX(x) ∈ I.    (A.7)
The set X can therefore be chosen as a bounded interval [a; b] ⊂ R. This formula is often
employed in random-number generators.
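As a sketch of Eq. (A.7), assume x is uniform on X = [0; 1], so that FX(x) = x, and take as an illustrative target law a unit-rate exponential distribution, for which FI⁻¹ is known in closed form.

```python
import math
import random

# With x uniform on [0, 1], F_X(x) = x, so i = F_I^{-1}(x). Illustrative
# target law: unit-rate exponential, F_I(i) = 1 - exp(-i), hence
# F_I^{-1}(u) = -ln(1 - u).
def F_I_inv(u):
    return -math.log(1.0 - u)

random.seed(2)
samples = [F_I_inv(random.random()) for _ in range(200000)]

mean = sum(samples) / len(samples)
print(mean)  # close to the exact mean E[i] = 1 of the unit-rate exponential
```

This composition of FI⁻¹ with uniform samples is exactly the mechanism used by most random-number generators to produce non-uniform variates.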
A.3 Stochastic processes
Stochastic processes are intuitively defined as indexed families of random variables. For
the subsequent developments, the following sets need to be introduced:
• the probability space (I, EI , PI ),
• the fixed measure space (Θ, EΘ, Λ), i.e. a measurable space equipped with a measure Λ
that does not necessarily satisfy Λ(Θ) = 1,
• the known Banach space W.
Definition: A stochastic process Y is a measurable mapping from Θ × I to W [120, p. 59]:

Y : Θ × I ∋ (θ, i) ↦ Y(θ, i) ∈ W.    (A.8)
The set Θ is called the time space or indexing space and can be either countable or an
interval. At any given θ ∈ Θ, Y_θ = Y(θ, ·) is a random variable, and for any i ∈ I,
Y_i = Y(·, i) corresponds to a trajectory as a function of θ. The space W will correspond
either to Rn or Cn , with n ≥ 1.
Induced measure: At each θ ∈ Θ, the random variable Y_θ induces a 1-point
probability measure P^{(1)}_θ. For a finite sequence of times Θ_m = {θ_k ∈ Θ, k = 1, . . . , m}, an
m-point probability distribution denoted as P^{(m)}_{Θ_m} must be considered. The (m − 1)-point
distribution corresponding to the set Θ_{m−1} = Θ_m \ {θ_i} can be deduced from P^{(m)}_{Θ_m} as

P^{(m−1)}_{Θ_{m−1}}(y′_1, . . . , y′_{i−1}, y′_{i+1}, . . . , y′_m) = ∫_Y P^{(m)}_{Θ_m}(y′_1, . . . , y′_m) dy′_i.    (A.9)
This property is known as the consistency of the stochastic process Y. The distribution
P^{(m−1)}_{Θ_{m−1}} is called a marginal law of order m − 1. Although the marginal laws can be
expressed by using the m-point distribution, the converse is generally not true. For
consistent stochastic processes Y , Kolmogorov’s theorem [120, p. 62] ascertains the
existence of a unique probability distribution PY , if a multi-point probability distribution
can be defined for any finite sequence of points in Θ.
Despite the difficulty of explicitly defining PY, or the pdf fY, the advantage of the
stochastic-process formalism lies in the statistical moments, which provide partial but
valuable insight into the randomness of Y.
Expectation of Y: If Y is integrable, the mean of Y can be defined as

E[Y(θ, i)] = ∫_I Y(θ, i) dPI = ∫_I Y(θ, i) fI(i) di,  ∀θ ∈ Θ.    (A.10)
Second-order statistical moments of Y: If W is a Banach algebra, then L²_{PI}(I, W) is the
set of W-valued random variables which have finite second-order moments. Second-order
random processes are such that, for each θ ∈ Θ, the random variable Y(θ, i) belongs to
L²_{PI}(I, W). These processes are of prime importance as they correspond to finite-power
phenomena. The covariance V[Y] of a second-order process Y is defined as [69, p. 28]

V[Y](s, t) = E[{Y(s, i) − E[Y(s, i)]}{Y(t, i) − E[Y(t, i)]}∗],  for s, t ∈ Θ,    (A.11)

where ∗ indicates complex conjugation. The tensor V[Y] corresponds to the kernel of an
operator which is Hermitian and positive semi-definite [77, p. 48].
A.4 Random variables and stochastic processes
As illustrated in Figs A.1(a) and A.1(b), the concepts of random variable and random
process are equivalent: a random process is a random variable taking its values in the
space of functions defined on Θ; alternatively, a random variable can be perceived as a
stochastic process indexed by a set Θ that is shrunk to a single element.
Figure A.1: Distinction between random variables (Fig. (a)) and random processes (Fig. (b)).
Appendix B
Univariate polynomial-interpolation
based quadrature rules
B.1 Statement of the problem
A Riemann-integrable function f is considered, which is defined on the bounded interval
A = [a; b] ⊂ R and which is real-valued. The objective is to evaluate the integral
I[f] = ∫_A f(x) dx    (B.1)
with an N-point quadrature rule denoted QN , where N ∈ N. As a result, I[f ] will be
approximated by the discrete sum
I[f] = ∫_A f(x) dx ≈ QN[f] = Σ_{k=1}^{N} w_k f(x_k),    (B.2)
where the abscissae {x_k, k = 1, . . . , N} ⊂ A form the grid of the quadrature rule,
and where the weights wk should be positive to ensure the numerical stability of the
quadrature rule, i.e. to prevent small errors in the value of the integrand f from being
amplified [85, p. 127]. The grid and the weights of QN are entirely determined by the
quadrature rule, independently from the integrand f .
The complexity N represents the number of function evaluations needed to build QN [f ].
It is essential to keep this number as low as possible, while ensuring a sufficient accuracy
of the estimator QN [f ]. The increase of N can be controlled by using nested quadrature
rules, which offer the advantage of requiring only N new samples of f to evolve from
QN [f ] to Q2N [f ].
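The sample reuse afforded by nestedness can be sketched with the composite trapezoidal rule, whose grids are nested under halving of the step; the integrand exp and the interval [0, 1] are arbitrary illustrative choices, not taken from the applications of this thesis.

```python
import math

# Nested refinement of the composite trapezoidal rule on [0, 1]: doubling
# the number of panels reuses every previous sample and only evaluates f
# at the new midpoints.
def refine(f, a, b, t_coarse, n_coarse):
    """Halve the step of the trapezoidal estimate t_coarse (n_coarse panels),
    evaluating f only at the n_coarse new midpoints."""
    h = (b - a) / n_coarse
    midpoints = (a + (k + 0.5) * h for k in range(n_coarse))
    return 0.5 * t_coarse + 0.5 * h * sum(f(x) for x in midpoints), 2 * n_coarse

f, a, b = math.exp, 0.0, 1.0
n = 1
estimate = 0.5 * (b - a) * (f(a) + f(b))  # initial 2-point rule
for _ in range(15):
    estimate, n = refine(f, a, b, estimate, n)

print(n, estimate)  # 32768 panels; the estimate converges to e - 1
```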
B.2 Polynomial-interpolation-based rules
General principle
In polynomial-interpolation quadrature rules of level N ∈ N, the integrand f is first
interpolated by the unique polynomial PN −1 of degree N − 1, which passes through the N
distinct points {(xk , f (xk )), k = 1, . . . , N}, where the abscissae {xk , k = 1, . . . , N} are
given in A. In terms of Lagrange polynomials, PN −1 is defined as
PN −1 (x) =
where
λN −1,i (x) =
N
X
for x ∈ A,
f (xi )λN −1,i (x),
i=1
N
Y
j=1
j 6= i
x − xj
,
xi − xj
(B.3)
i ∈ [1, N]
(B.4)
wi f (xi ) = QN [f ].
(B.5)
The integral I[f ] is then approximated by I[PN −1 ] as follows
I[f ] ≈ I[PN −1 ] =
N
X
i=1
I[λN −1,i ]f (xi ) =
N
X
i=1
The weights wi = I[λN −1,i ] can be computed in closed form, since they only depend on
the Lagrange polynomials.
Alternative construction of the quadrature rule
The weights of the quadrature rule described above can be obtained via a different approach. Under the assumption that the abscissae {xk , k = 1, . . . , N} are distinct and
given in A, the weights wi of QN are built in such a way that QN exactly integrates
any polynomial of degree lower than or equal to N − 1. In terms of the monomial basis
{X k , k = 0, . . . , N − 1}, the exactness of QN then implies that
QN[X^k] = I[X^k],  for k = 0, . . . , N − 1,    (B.6)

which can be cast into the following matrix form:

[ 1           · · ·   1          ] [ w_1 ]   [ I[1]        ]
[ ⋮                   ⋮          ] [ ⋮   ] = [ ⋮           ]    (B.7)
[ x_1^{N−1}   · · ·   x_N^{N−1}  ] [ w_N ]   [ I[X^{N−1}]  ]

The weights w_k are then obtained by inverting the Vandermonde matrix (x_l^{k−1})_{k,l=1,...,N},
which yields the values obtained in Eq. (B.5).
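The construction of Eqs. (B.6)–(B.7) can be sketched numerically. With the illustrative abscissae (0, 1/2, 1) on A = [0; 1], solving the Vandermonde system recovers the weights of Simpson's rule.

```python
import numpy as np

# Abscissae on A = [0, 1]; three points should reproduce Simpson's rule.
x = np.array([0.0, 0.5, 1.0])
N = len(x)

# Row k of the system enforces QN[X^k] = I[X^k] = 1/(k+1) on [0, 1].
V = np.vander(x, N, increasing=True).T           # V[k, l] = x_l**k
moments = np.array([1.0 / (k + 1) for k in range(N)])
w = np.linalg.solve(V, moments)

print(w)  # Simpson's weights (1/6, 4/6, 1/6)
```

In practice the Vandermonde matrix becomes ill-conditioned as N grows, which is one reason the closed-form Lagrange construction of Eq. (B.5) is preferred for larger rules.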
Degree of polynomial exactness
The degree of polynomial exactness (dpe) of QN indicates the maximum degree of the
polynomials that can be integrated exactly by QN . If this degree equals D ∈ N, then
QN[p] = I[p],  for any polynomial p in R_D[X],
QN[p] ≠ I[p],  for at least one polynomial p in R_{D+1}[X],
where RD [X] is the set of real-valued univariate polynomials of degree at most equal to D.
In [85, p. 126], it is shown that for stable quadrature rules with a dpe equal to D, the
error made in Eq. (B.5) is bounded as follows:

|I[f] − QN[f]| ≤ (b − a) e_D(f),    (B.8)

where e_D(f) = inf_{p∈R_D[X]} ‖f − p‖∞ is the distance from f to R_D[X]. For sufficiently smooth
integrands, e_D(f) can be reduced by increasing D. Hence, the higher the value of D, the
faster the convergence of QN[f] to I[f].
With closed N-point Newton-Cotes formulas, i.e. where the abscissae are equidistantly
spaced in A with x_1 = a and x_N = b, the dpe is D = N − 1, or D = N when N is odd. For
practical Clenshaw-Curtis formulas, i.e. where the abscissae correspond to extrema of
Chebyshev polynomials, the dpe is D = N − 1. These values are due to the fact that the
aforementioned rules are constructed by first fixing the abscissae x_k in A, and then
building the weights to exactly integrate all the polynomials of degree up to N − 1.
On the other hand, Gaussian quadrature rules are constructed in such a way that they
reach a dpe as high as D = 2N − 1. This is achieved by explicitly constructing the abscissae
and the weights of the quadrature rule QN such that QN exactly integrates all polynomials
of degree up to 2N − 1. The algorithm underlying the construction of Gaussian rules,
which is similar to the rationale presented in Section B.2, can be found, e.g., in [85, p. 135].
The abscissae of Gaussian rules correspond to zeros of orthogonal polynomials. Hence,
despite their high dpe, and unlike closed Newton-Cotes and Clenshaw-Curtis formulas,
the grids of Gaussian quadrature rules are not nested [85, p. 141],[92]. Nested variants
of Gauss’ formulas, such as Gauss-Kronrod, Radau or Lobatto formulas, are obtained by
fixing some abscissae and then constructing the remaining abscissae and the weights to
reach the highest dpe [85, p. 141],[167, p. 101].
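The degree of exactness D = 2N − 1 of Gaussian rules can be verified numerically. The sketch below, an illustrative check rather than part of the original computations, uses a 4-point Gauss-Legendre rule on [−1, 1] and monomial integrands.

```python
import numpy as np

# 4-point Gauss-Legendre rule on [-1, 1]; its dpe should be 2N - 1 = 7.
N = 4
nodes, weights = np.polynomial.legendre.leggauss(N)

def gauss(power):
    return float(np.sum(weights * nodes**power))

def exact(power):
    # Exact value of the integral of x**power over [-1, 1].
    return 2.0 / (power + 1) if power % 2 == 0 else 0.0

# Exact for every monomial of degree up to 2N - 1 = 7 ...
errors_ok = [abs(gauss(k) - exact(k)) for k in range(2 * N)]
# ... but no longer exact at degree 2N = 8.
error_fail = abs(gauss(2 * N) - exact(2 * N))

print(max(errors_ok), error_fail)
```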
The main features of the aforementioned quadrature rules are summarized in Table B.1.

                                  Newton-Cotes (NC)   Clenshaw-Curtis (CC)   Gaussian
Stability                         unstable            stable                 stable
Grids                             nested              nested                 not nested
Degree of exactness D             N − 1 or N          N − 1                  2N − 1
Convergence for f ∈ C^{D+1}(A)    O(N^{−(D+2)})       O(N^{−(D+2)})          O(N^{−(D+2)})

Table B.1: Properties of the simple univariate polynomial-interpolation quadrature rules.
Appendix C
Multivariate integrals
Integrals defined over a multi-dimensional domain are now tackled. The emphasis is
put on the description of lattice quadrature rules. Differences and commonalities between
lattice rules and the space-filling-curve rule, described in Section 4.1.4, are then discussed.
C.1 Statement of the problem
To begin with, a real-valued Riemann-integrable function f is considered, which is defined
on the bounded d−dimensional hypercube A = [0; 1]d . The objective is to approximate
the value of the integral
I[f] = ∫_A f(x) dx    (C.1)
with an N-point quadrature rule denoted QN, where N ∈ N, which replaces I[f] by the
discrete sum

I[f] = ∫_A f(x) dx ≈ Σ_{k=1}^{N} w_k f(x_k) = QN[f].    (C.2)
The abscissae {xk = (xk,1 , . . . , xk,d ), k = 1, . . . N} belong to A and the weights wk should
be positive to ensure the stability of QN .
A natural way of building multi-dimensional quadrature rules, also known as cubature
rules, consists in starting from univariate integration formulas such as the ones presented
in Chapter 4 and in Appendix B. As pointed out in Section 4.1.3, a brute-force Cartesian
product of univariate rules leads to a so-called curse of dimensionality, i.e. a complexity
N that increases exponentially with the dimension d of A. Instead, sparse-grid rules can
be obtained, by using Smolyak’s algorithm, to sidestep the curse of dimensionality up to
a logarithmic extent (see Eq. (4.18) or [80]).
Randomized formulas such as Monte-Carlo rules can also be used to approximate I[f]. In
this case, the abscissae are random and drawn via a random-number generator. Monte-Carlo
rules are well suited for integrals over high-dimensional domains. However, their
convergence rate of 1/√N is often too slow in comparison with polynomial-based formulas,
which take advantage of the smoothness of the integrand in their convergence rates.
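This O(1/√N) behaviour can be sketched on an illustrative 5-dimensional integrand with a known integral; the sample sizes and the seed below are arbitrary choices.

```python
import random

# Illustrative 5-D integrand f(x) = x_1 * ... * x_5 over [0, 1]^5, with
# exact integral (1/2)^5.
random.seed(3)
d = 5
exact = 0.5 ** d

def mc_estimate(n):
    total = 0.0
    for _ in range(n):
        p = 1.0
        for _ in range(d):
            p *= random.random()
        total += p
    return total / n

err_small = abs(mc_estimate(1000) - exact)
err_large = abs(mc_estimate(100000) - exact)
print(err_small, err_large)  # the error shrinks roughly as 1/sqrt(N)
```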
C.2 Lattice rules
Lattice rules represent yet another type of quadrature approach, where the algorithm
is based on the error analysis of the quadrature rule [168, p. 212]. These rules are
equally weighted, as are Monte-Carlo rules, and they are applicable to smooth and periodic
functions.
Let f be a smooth function, which is 1-periodic in each of its arguments, i.e.
f(x) = f(x + z),  ∀x ∈ R^d, ∀z ∈ Z^d.    (C.3)
It is further assumed that f can be expanded in an absolutely convergent Fourier series
f(x) = Σ_{m∈Z^d} f̃(m) e^{j2πm·x},  ∀x ∈ R^d,    (C.4)
where m = (m_1, . . . , m_d), x = (x_1, . . . , x_d), m · x = Σ_{i=1}^{d} m_i x_i, and

f̃(m) = ∫_{[0;1]^d} f(x) e^{−j2πm·x} dx,  ∀m ∈ Z^d.    (C.5)
Given an N-point quadrature rule QN , owing to the absolute convergence of the Fourier
series of f , we can write
QN[f] = Σ_{m∈Z^d} f̃(m) QN[e^{j2πm·x}].    (C.6)
Since the coefficient f̃(m = 0) corresponds to the integral I[f], the error can be
represented as

QN[f] − I[f] = Σ_{m∈Z^d\{0}} f̃(m) QN[e^{j2πm·x}].    (C.7)
It can be shown that if QN is equally weighted, then

QN[e^{j2πm·x}] = 1 if m · x_k is an integer for every abscissa x_k, and 0 otherwise.    (C.8)

This property together with Eq. (C.7) motivates the definition of a lattice.
Lattice
A lattice L in Rd is a discrete subset of Rd such that [168, p. 20],[85, p. 216]
• if x1 ∈ L and x2 ∈ L, then x1 + x2 ∈ L and x1 − x2 ∈ L;
• L contains d linearly independent elements,
• 0 is isolated in L, i.e. there exists an open domain B, such that B ∩ L = {0}.
An integration lattice is a lattice that contains Zd . The dual lattice of L is denoted L⊥
and defined as

L⊥ = {m ∈ R^d, m · x ∈ Z, ∀x ∈ L}.    (C.9)
If L is an integration lattice, then L⊥ is a subset of Zd .
Lattice rule
A lattice rule QN is a quadrature rule of the form
QN[f] = (1/N) Σ_{k=1}^{N} f(x_k),    (C.10)
where the abscissae xk belong to the intersection between an integration lattice L and
[0; 1)d .
With these new concepts at hand, the error of QN (see Eq. (C.7)) can be re-written as

QN[f] − I[f] = Σ_{m∈L⊥\{0}} f̃(m) QN[e^{j2πm·x}].    (C.11)
To minimize the error, one must make sure that QN cancels the trigonometric
polynomials e^{j2πm·x} for which f̃(m) is large, with m ∈ L⊥. Since most smooth
functions have large Fourier coefficients when the components of m are low, the idea is to
choose L⊥ such that the components of its elements have high values.
The quality index of the lattice quantifies the ability of QN to cancel trigonometric
polynomials e^{j2πm·x} [85, p. 226].
For periodic integrands, the error of a lattice rule depends on the smoothness of the
integrand and on the lattice quality index [85, p. 226].
Method of Good Lattice Points
An example of particular interest is the method of Good Lattice Points (GLP). It is based
on formulas of the type
It is based on formulas of the type

QN[f] = (1/N) Σ_{i=0}^{N−1} f({i p / N}),    (C.12)

with p = (p_1, . . . , p_d) ∈ Z^d, where the braces {u} mean that each component of u is to be
replaced by its fractional part, which belongs to [0; 1). The integer vector p should be
chosen such that the greatest common divisor of p_1, . . . , p_d and N is 1, in order for the N
abscissae to be distinct.
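A minimal sketch of Eq. (C.12), assuming d = 2 and the classical Fibonacci-lattice choice N = 377, p = (1, 233); the periodic test integrand is an illustrative trigonometric polynomial with known integral 1, not one of the observables of this thesis.

```python
import math

# Fibonacci lattice: N = 377 points, generating vector p = (1, 233);
# 233 and 377 are consecutive Fibonacci numbers, a classical choice
# with gcd(1, 233, 377) = 1, so the N abscissae are distinct.
N = 377
p = (1, 233)

def f(x, y):
    # Smooth, 1-periodic test integrand with exact integral 1 over [0, 1)^2.
    return (1.0 + 0.5 * math.sin(2.0 * math.pi * x)) * \
           (1.0 + 0.5 * math.sin(2.0 * math.pi * y))

# Q_N[f] = (1/N) sum_i f({i p / N}), the braces denoting fractional parts.
estimate = sum(f((i * p[0] / N) % 1.0, (i * p[1] / N) % 1.0)
               for i in range(N)) / N

print(estimate)  # equals the exact integral 1 up to rounding
```

Because the low-order frequencies of this integrand avoid the dual lattice L⊥, the rule reproduces the integral essentially to machine precision, in line with Eq. (C.11).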
C.3 Lattice rules and space-filling-curve rule
A resemblance can be noted between lattice rules and the space-filling-curve (SFC) rule
presented in Section 4.1.4.
Lattice rules are defined for periodic integrands, and in the SFC rule, the integrand is
transformed via Eq. (4.21) into a 2π-periodic function that is parameterized by a single
variable.
Both rules hinge on the idea of rejecting possible error sources in the quadrature
formula towards higher frequencies, which ensures the accuracy of the rule for most smooth
integrands. This is done in the lattice rule by choosing a suitable lattice quality index
and in the SFC rule by rejecting interference frequencies beyond the fourth order (see
Section 4.1.4).
In the GLP method, the lattice is built such that it contains distinct abscissae by
carefully selecting the integer vector p in Eq. (C.12). Likewise, in the SFC rule the
frequencies that are used to build the space-filling curve are chosen such that they form
a nearly incommensurate set to guarantee that the curve comes arbitrarily close to any
point in the integration domain A.
The convergence rates of both rules take advantage of the smoothness of the integrand.
The main distinction between the two algorithms resides in the fact that, in the lattice
rule, the abscissae are chosen on a deterministic lattice, whereas the construction of the
space-filling curve incorporates the probability distribution of the random input.
Bibliography
[1] D. Morgan, A Handbook for EMC Testing and Measurement. IEE, London, UK,
1994, ch. Nature and Origins of Electromagnetic Compatibility, pp. 1–13.
[2] R. Leach and M. Alexander, “Electronic systems failures and anomalies attributed
to electromagnetic interference,” NASA Reference Publication, Tech. Rep. 1374,
1995.
[3] Council of the European Communities, “Directive EMC 89/336/EEC on the approximation of the laws of the Member States relating to electromagnetic compatibility.”
Official J. the European Communities, no. No L 139/21, 1989.
[4] H. D. Bruns, C. Schuster, and H. Singer, “Numerical electromagnetic field analysis
for EMC problems,” IEEE Trans. EMC, vol. 49, no. 2, pp. 253–262, May 2007.
[5] S. Adhikari, “On the quantification of damping model uncertainty,” J. Sound and
Vibration, vol. 305, no. 1-2, pp. 153–171, September 2007.
[6] J. Liu and J.-M. Jin, “Analysis of conformal antennas on a complex platform,”
Microwave and Optical Technology Letters, vol. 36, no. 2, pp. 139 – 142, 2002.
[7] P. Salonen and Y. Rahmat-Samii, “Textile antennas: Effects of antenna bending on
input matching and impedance bandwidth,” IEEE Aerospace and Electronic Systems
Magazine, vol. 22, no. 12, pp. 18–22, December 2007.
[8] R. Brewer and D. Trout, “Modern spacecraft - antique specifications,” in Proc. IEEE
International Symposium on EMC, Portland, vol. 1, August 2006, pp. 213–218.
[9] D. A. Hill, “Thirty years in electromagnetic compatibility: Projects and colleagues,”
IEEE Trans. EMC, vol. 49, no. 2, pp. 219–223, May 2007.
[10] L. Maistrov and S. Kotz, Probability Theory : a Historical Sketch. London : Academic Press, 1974.
[11] T. M. Apostol, Calculus, 2nd ed., vol. 2. Waltham, Mass. : Xerox College Publ., 1969.
[12] D. Sankoff, “Probability and linguistic variation,” Synthese, vol. 37, no. 2, pp. 217–
238, 1978.
[13] D. Jurafsky, A. Bell, M. Gregory, and W. D. Raymond, “The effect of language
model probability on pronunciation reduction,” in Proc. IEEE Internat. Conf. on
Acoustics, Speech Signal Processing (ICASSP ’01), Salt Lake City, vol. 2, May 2001,
pp. 801–804.
[14] H. W. L. Naus, “Statistical electromagnetics: Complex cavities,” IEEE Trans. EMC,
vol. 50, no. 2, pp. 316–324, May 2008.
[15] R. H. St John and R. Holland, “Simple deterministic solutions for cables over a
ground plane or in an enclosure,” IEEE Trans. EMC, vol. 44, no. 4, pp. 574–579,
November 2002.
[16] M. R. Lambrecht, C. Baum, J. Gaudet, C. Christodoulou, and E. Schamiloglu,
“Study of statistical electromagnetics and modeling of surrogate IED blasting caps,”
in IEEE Conf. Pulsed Power Plasma Science, Albuquerque, June 2007, pp. 743–743.
[17] R. Holland and R. St. John, Statistical Electromagnetics. CRC Press, 1999.
[18] D. Carpenter, “Statistical electromagnetics: an end-game to computational electromagnetics,” in Proc. IEEE International Symposium on EMC, Portland, vol. 3,
August 2006, pp. 736–741.
[19] G. S. Brown, “A Stochastic Fourier Transform Approach to scattering from perfectly
conducting randomly rough surfaces,” IEEE Trans. Ant. Prop., vol. AP-30, no. 6,
pp. 1135–1144, November 1982.
[20] ——, “Simplifications in the Stochastic Fourier Transform Approach to random
surface scattering,” IEEE Trans. Ant. Prop., vol. AP-33, no. 1, pp. 48–55, January
1985.
[21] S. Mudaliar and F. Lin, “Statistical characteristics of microwave signals scattered
from a randomly rough surface,” in Proc. IEEE Antennas and Propagation International Symposium, Hawaii, June 2007, pp. 4809–4812.
[22] E. Bahar, “Excitation of surface waves and the scattered radiation fields by rough
surfaces of arbitrary slope,” IEEE Trans. Microwave Theory Techniques, vol. 28,
no. 9, pp. 999–1006, September 1980.
[23] C. Charalambous and N. Menemenlis, “Stochastic models for short-term multipath
fading channels: chi-square and ornstein-uhlenbeck processes,” in Proc. 38th IEEE
Conference on Decision and Control, Phoenix, vol. 5, December 1999, pp. 4959–4964.
[24] C. Charalambous, N. Menemenlis, O. Kabranov, and D. Makrakis, “Statistical analysis of multipath fading channels using shot-noise analysis: an introduction,” in
Proc. IEEE International Conference on Communications, Beijing, vol. 4, June
2001, pp. 1011–1015.
[25] Y. de Jong and M. Herben, “A tree-scattering model for improved propagation
prediction in urban microcells,” IEEE Trans. Vehicular Technology, vol. 53, no. 2,
pp. 503–513, March 2004.
[26] J.-M. Lee, I.-S. Koh, and Y. Oh, “Simple statistical model of scattering by tree for
site-specific channel model for wireless communication applications,” in Proc. IEEE
International Geoscience and Remote Sensing Symposium (IGARSS ’05), Seoul,
vol. 1, July 2005, pp. 570–573.
[27] C. Fiachetti, “Modèles du champ électromagnétique aléatoire pour le calcul du couplage sur un équipement électronique en chambre réverbérante à brassage de modes
et validation expérimentale,” Ph.D. dissertation, Université de Limoges, November
2002.
[28] D. Hill, “Plane-wave integral representation for fields in reverberation chambers,”
IEEE Trans. EMC, vol. 40, no. 3, pp. 209–217, 1998.
[29] C. L. Holloway, D. A. Hill, J. M. Ladbury, and G. Koepke, “Requirements for an
effective reverberation chamber: unloaded or loaded,” IEEE Trans. EMC, vol. 48,
no. 1, pp. 187–194, February 2006.
[30] T. H. Lehman and E. K. Miller, “The statistical properties of electromagnetic fields
with application to radiation and scattering,” IEEE Antennas and Propagation Society Symposium AP-S, vol. 3, pp. 1616–1619, 1991.
[31] V. Rannou, F. Brouaye, P. De Doncker, M. Helier, and W. Tabbara, “Statistical
analysis of the end current of a transmission line illuminated by an elementary
current source at random orientation and position,” in Proc. IEEE International
Symposium on EMC, Montreal, vol. 2, August 2001, pp. 1078–1083.
[32] D. Bellan and S. Pignari, “A probabilistic model for the response of an electrically
short two-conductor transmission line driven by a random plane wave field,” IEEE
Trans. EMC, vol. 43, no. 2, pp. 130–139, May 2001.
[33] S. Sun, G. Liu, J. L. Drewniak, and D. J. Pommerenke, “Hand-assembled cable
bundle modeling for crosstalk and common-mode radiation prediction,” IEEE Trans.
EMC, vol. 49, no. 3, pp. 708–718, August 2007.
[34] B. L. Michielsen, “Probabilistic modelling of stochastic interactions between electromagnetic fields and systems,” Comptes Rendus de l’Académie des sciences:
Physique, vol. 7, pp. 543–559, 2006.
[35] L. Day and I. McNeil, Eds., Biographical Dictionary of the History of Technology.
London : Routledge, 1996.
[36] M. C. van Beurden, “Integro-Differential Equations for Electromagnetic Scattering:
Analysis and Computation for Objects with Electric Contrast,” Ph.D. dissertation,
Eindhoven University of Technology, 2003.
[37] J. Jackson, Classical Electrodynamics, 2nd ed. New York : J. Wiley and Sons, 1975.
[38] D. L. Colton and R. Kress, Integral Equation Methods in Scattering Theory. Wiley,
New York.
[39] G. Hsiao and R. Kleinman, “Mathematical foundations of error estimation in numerical solutions of integral equations in electromagnetics,” IEEE Trans. Ant. Prop.,
vol. 45, no. 3, pp. 316–328, March 1997.
[40] S. Orfanidis, Electromagnetic Waves and Antennas, 2008. [Online]. Available:
http://www.ece.rutgers.edu/~orfanidi/ewa/
[41] C. Balanis, “Antenna theory: a review,” Proceedings of the IEEE, vol. 80, no. 1, pp.
7–23, January 1992.
[42] M. D. Carmo, Differential Geometry of Curves and Surfaces. Prentice Hall, 1976.
[43] J. M. Lee, Introduction to Smooth Manifolds. Springer-Verlag, New York, 2003.
[44] K. K. Mei, “On the integral equation of thin wire antennas,” IEEE Trans. Ant.
Prop., vol. AP-13, no. 3, pp. 374–378, May 1965.
[45] R. Harrington, Field Computation by Moment Methods. New York : MacMillan, 1968.
[46] R. F. Harrington, “Origin and development of the method of moments for field
computation,” IEEE Ant. Propag. Mag., vol. 32, no. 3, pp. 31–35, June 1990.
[47] A. T. de Hoop, Handbook of Radiation and Scattering of Waves: Acoustic Waves
in Fluids, Elastic Waves in Solids, Electromagnetic Waves. Academic Press, 1995.
[48] B. L. Michielsen and C. Fiachetti, “Electromagnetic theory of mode stirring chambers,” ONERA/DEMR, Tech. Rep., September 2004.
[49] B. L. Michielsen, “A new approach to electromagnetic shielding,” Proc. International
Zürich Symposium on EMC, pp. 509–514, 1985.
[50] V. H. Rumsey, “Reaction concept in electromagnetic theory,” Physical Review, vol.
94 (6), pp. 1483–1491, 1954.
[51] J. Vaessen, O. Sy, M. van Beurden, A. Tijhuis, and B. Michielsen, “Monte-Carlo
method applied to a stochastically varying wire above a PEC ground plane,” in
Proceedings EMC Europe Workshop, Paris, 2007, pp. 1–5.
[52] B. Michielsen, “Analysis of the coupling of a deterministic plane wave to a stochastic
twisted pair of wires,” in Proc. International Zürich Symposium on EMC, 2005, pp.
439–442.
[53] O. Sy, J. Vaessen, M. van Beurden, A. Tijhuis, and B. Michielsen, “Probabilistic
study of the coupling between deterministic electromagnetic fields and a stochastic
thin-wire over a PEC plane,” in Proc. International Conference on Electromagnetics
in Advanced Applications ICEAA 2007, Torino, 2007, pp. 637–640.
[54] D. Steven, K. T. N. Harrah, and T. E. Batchman, “Voltage response and field
reconstruction for a miniature field probe in a spatially non-uniform electric field,”
IEEE Trans. Instrument. Measur., vol. 39, no. 1, pp. 27–31, 1990.
[55] R. Olsen and K. Yamazaki, “The interaction between ELF electric fields and RF
survey meters: Theory and experiment,” IEEE Trans. EMC, vol. 47, 2005.
[56] C. Furse, O. Gandhi, and G. Lazzi, “Dipole antennas,” Wiley Encyclopedia of Electrical and Electronics Engineering, 2007.
[57] H. Schippers, M. Bandinelli, and R. Cioni, “Antenna farm simulation on spacecraft: impact of uncertainties on electro-magnetic modeling,” NLR, The Netherlands, Tech. Rep. NLR-CR-98516.
[58] T. Cui and W. Chew, “Accurate model of arbitrary wire antennas in free space,
above or inside ground,” IEEE Trans. Ant. Prop., vol. 48, no. 4, pp. 482–493, 2000.
[59] D. Bellan and S. Pignari, “Estimation of crosstalk in non-uniform cable bundles,” in
Proc. International Symposium on EMC, Chicago, vol. 2, August 2005, pp. 336–341.
[60] Y. Choong, P. Sewell, and C. Christopoulos, “Accurate modelling of an arbitrary
placed thin wire in a coarse mesh,” IEE Proc. Science, Measurement and Technology,
vol. 149, no. 5, pp. 250–253, 2002.
[61] N. J. Champagne II, J. T. Williams, and D. R. Wilton, “The use of curved segments for numerically modeling thin-wire antennas and scatterers,” IEEE Trans.
Ant. Prop., vol. 40, no. 6, pp. 682–689, June 1992.
[62] S. D. Rogers and C. M. Butler, “An efficient curved-wire integral equation solution
technique,” IEEE Trans. Ant. Prop., vol. 49, no. 1, pp. 70–79, January 2001.
[63] M. C. van Beurden and A. G. Tijhuis, “Analysis and regularization of the thin-wire
integral equation with reduced kernel,” IEEE Trans. Ant. Prop., vol. 55, no. 1, pp.
120–129, 2007.
[64] C. Marasini, “Efficient Computation Techniques for Galerkin MoM Antenna Design,” Ph.D. dissertation, Eindhoven University of Technology, 2008.
[65] S. Rao, D. Wilton, and A. Glisson, “Electromagnetic scattering by surfaces of arbitrary shape,” IEEE Trans. Ant. Prop., vol. 30, no. 3, pp. 409–418, May 1982.
[66] A. Tijhuis, M. van Beurden, and A. Zwamborn, Scientific Computing in Electrical Engineering. Springer Berlin Heidelberg, 2006, ch. Iterative Solution of Field
Problems with a Varying Physical Parameter, pp. 253–257.
[67] W. Feller, An Introduction to Probability Theory and its Applications, Wiley and
sons, Eds., 1971.
[68] A. Papoulis, Probability & Statistics. Prentice-Hall International editions, 1990.
[69] T. Mikosch, Elementary Stochastic Calculus with Finance in View. World Scientific
Publishing, 1998, vol. 6.
[70] C. Charalambous and N. Menemenlis, “Stochastic models for long-term multipath
fading channels and their statistical properties,” in Proc. 38th IEEE Conference on
Decision and Control, Phoenix, vol. 5, December 1999, pp. 4947–4952.
[71] L. Lebensztajn, C. Marretto, M. Costa, and J.-L. Coulomb, “Kriging: a useful tool
for electromagnetic device optimization,” IEEE Trans. Magnetics, vol. 40, no. 2, pp.
1196–1199, 2004.
[72] J. Lefebvre, H. Roussel, E. Walter, D. Lecointe, and W. Tabbara, “Prediction from
wrong models: the Kriging approach,” IEEE Ant. Propag. Mag., vol. 38, no. 4, pp.
35–45, August 1996.
[73] A. Khenchaf, F. Daout, and J. Saillard, “Polarization degradation in the sea surface
environment,” in Proc. MTS/IEEE. Challenges of Our Changing Global Environment (OCEANS ’95), San Diego, vol. 3, October 1995, pp. 1517–1522.
[74] T. Zwick, C. Fischer, and W. Wiesbeck, “A stochastic multipath channel model
including path directions for indoor environments,” IEEE J. Selected Areas in Communications, vol. 20, no. 6, pp. 1178–1192, 2002.
[75] J. Einbu, “On the existence of a class of maximum-entropy probability density
function,” IEEE Trans. Information Theory, vol. IT-23, pp. 772–775, 1977.
[76] E. T. Jaynes, “Information theory and statistical mechanics,” Phys. Rev., vol. 106,
no. 4, pp. 620–630, May 1957.
[77] A. Bharucha-Reid, Random Integral Equations, R. Bellman, Ed. Mathematics in Science and Engineering, vol. 96, 1972.
[78] J. A. Gubner, Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press, 2006.
[79] R. De Roo, S. Misra, and C. Ruf, “Sensitivity of the kurtosis statistic as a detector
of pulsed sinusoidal RFI,” IEEE Trans. Geoscience Remote Sensing, vol. 45, no. 7,
pp. 1938–1946, July 2007.
[80] O. Gerek and D. Ece, “Detection of disturbances in energy system signals using
Gaussian distribution fitness test,” in Proc. IEEE 12th Signal Processing and Communications Applications Conference, Kusadasi, April 2004, pp. 220–223.
[81] J. Gubner and M. Hayat, “A method to recover counting distributions from their
characteristic functions,” IEEE Signal Processing Letters, vol. 3, no. 6, pp. 184–186,
June 1996.
[82] N. Bershad and L. Qu, “On the probability density function of the LMS adaptive
filter weights,” in Proc. IEEE International Conference on Acoustics, Speech, and
Signal Processing, Dallas, vol. 12, April 1987, pp. 109–112.
[83] F. McNolty and E. Hansen, “Probability densities and characteristic functions for
fluctuating targets,” IEEE Trans. Aerospace Electronic Systems, no. 4, pp. 474–480,
July 1979.
[84] A. Zoubir, C. Brown, and B. Boashash, “Testing multivariate Gaussianity with the
characteristic function,” in Proc. IEEE Signal Processing Workshop on Higher-Order
Statistics, Alberta, July 1997, pp. 438–442.
[85] A. R. Krommer and C. W. Ueberhuber, Computational Integration, SIAM, 1998.
[86] L. Devroye, Non-Uniform Random Variate Generation. Springer-Verlag, New York,
1986.
[87] C. B. Laney, Computational Gasdynamics. Cambridge University Press, 1998.
[88] W. Press, S. Teukolsky, W. Vetterling, and B. Flannery, Numerical Recipes: the Art
of Scientific Computing, 3rd ed. Cambridge University Press, 2007.
[89] E. Novak and K. Ritter, “The curse of dimension and a universal method for numerical integration,” pp. 177–187, 1998.
[90] H.-J. Bungartz and M. Griebel, “Sparse grids,” Acta Numerica, pp. 1–123, 2004.
[91] G. W. Wasilkowski, “Explicit cost bounds of algorithms for multivariate tensor
product problems,” J. Complexity, vol. 11, pp. 1–56, 1995.
[92] T. Gerstner and M. Griebel, “Numerical integration using sparse grids,” Numerical Algorithms, vol. 18, no. 3–4, pp. 209–232, 1998.
[93] R. Cukier, J. Schaibly, and K. Shuler, “Study of the sensitivity of coupled reaction
systems to uncertainties in rate coefficients. III. Analysis of the approximations,” J.
Chemical Physics, vol. 63, no. 3, pp. 1140–1149, 1975.
[94] A. Saltelli, S. Tarantola, and K.-S. Chan, “Global sensitivity analysis- a computational implementation of the Fourier Amplitude Sensitivity Test (FAST),” Technometrics, vol. 41, no. 1, pp. 39–56, 1999.
[95] H. Weyl, “Mean motion,” American J. Mathematics, vol. 60, no. 4, pp. 889–896, 1938.
[96] ——, “Mean motion. II,” American J. Mathematics, vol. 61, no. 1, pp. 143–148, 1939.
[97] J. Schaibly and K. Shuler, “Study of the sensitivity of coupled reaction systems
to uncertainties in rate coefficients. II. Applications,” J. Chemical Physics, vol. 59,
no. 8, pp. 3879–3888, 1973.
[98] R. Cukier, H. Levine, and K. Shuler, “Nonlinear sensitivity analysis of multiparameter model systems,” J. Comput. Physics, vol. 26, pp. 1–42, 1978.
[99] J. Ureel and D. De Zutter, “A new method for obtaining the shape sensitivities
of planar microstrip structures by a full-wave analysis,” IEEE Trans. Microwave
Theory Techniques, vol. 44, no. 2, pp. 249–260, February 1996.
[100] J. Ureel and D. de Zutter, “Shape sensitivities of capacitances of planar conducting
surfaces using the method of moments,” IEEE Trans. Ant. Prop., vol. 44, no. 2, pp.
198–207, February 1996.
[101] R. Ghanem and P. Spanos, Stochastic Finite Elements: A Spectral Approach. Dover
Publications, 1991.
[102] A. Sveshnikov and B. Gelbaum, Problems in Probability Theory, Mathematical
Statistics and Theory of Random Functions. W.B. Saunders Company, 1968.
[103] O. Sy, J. Vaessen, B. Michielsen, M. van Beurden, and A. Tijhuis, “Modelling the
interaction of stochastic systems with electromagnetic fields,” in Proc. IEEE Antennas and Propagation Society International Symposium, Albuquerque, 2006, pp.
931–934.
[104] B. J. Debusschere, H. N. Najm, P. P. Pébay, O. M. Knio, R. G. Ghanem, and O. P. Le Maître, “Numerical challenges in the use of polynomial chaos representations for
stochastic processes,” SIAM J. Sci. Comput., vol. 26, no. 2, pp. 698–719, 2005.
[105] I.-H. Park, B.-T. Lee, and S.-Y. Hahn, “Sensitivity analysis based on analytic approach for shape optimization of electromagnetic devices: interface problem of iron
and air,” IEEE Trans. Magnetics, vol. 27, no. 5, pp. 4142–4145, September 1991.
[106] D. Xiu and G. E. Karniadakis, “The Wiener-Askey polynomial chaos for stochastic
differential equations,” SIAM J. Sci. Comput., vol. 24, no. 2, 2002.
[107] C. Soize and R. Ghanem, “Physical systems with random uncertainties: Chaos
representations with arbitrary probability measure,” SIAM J. Sci. Comput., vol. 26,
no. 2, pp. 395–410, 2005.
[108] S. Kakutani, “Determination of the spectrum of the flow of Brownian motion.”
Proceedings of the National Academy of Sciences of the United States of America,
vol. 36, no. 5, pp. 319–323, May 1950.
[109] N. Wiener, “The homogeneous chaos,” American J. Mathematics, vol. 60, no. 4, pp.
897–936, 1938.
[110] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications, 1964.
[111] R. Field and M. Grigoriu, “On the accuracy of the polynomial chaos approximation,”
Probabilistic Engineering Mechanics, vol. 19, pp. 65–80, 2004.
[112] A. Monti, F. Ponci, and T. Lovett, “A polynomial chaos theory approach to uncertainty in electrical engineering,” in Proc. 13th International Conference on Intelligent Systems Application to Power Systems, Washington D.C., November 2005, pp.
534–539.
[113] G. D’Antona, A. Monti, F. Ponci, and L. Rocca, “Maximum-entropy multivariate
analysis of uncertain dynamical systems based on the Wiener-Askey polynomial
chaos,” IEEE Trans. Instrument. Measur., vol. 56, no. 3, pp. 689–695, 2007.
[114] P. Frauenfelder, C. Schwab, and R. Todor, “Finite elements for elliptic problems with
stochastic coefficients,” Computer Methods in Applied Mechanics and Engineering,
vol. 194, no. 2–5, pp. 205–228, 2005.
[115] C. P. Rupert and C. T. Miller, “An analysis of polynomial chaos approximations
for modeling single-fluid-phase flow in porous medium systems,” J. Comput. Phys.,
vol. 226, no. 2, pp. 2175–2205, 2007.
[116] D. Lucor and G. E. Karniadakis, “Adaptive generalized polynomial chaos for nonlinear random oscillators,” SIAM J. Sci. Comput., vol. 26, no. 2, pp. 720–735, 2005.
[117] B. L. Michielsen and C. Fiachetti, “Covariance operators, Green functions, and canonical stochastic electromagnetic fields,” Radio Science, vol. 40, no. 5, pp.
RS5001.1–RS5001.12, 2005.
[118] P. Clemmow, The Plane Wave Spectrum Representation of Electromagnetic Fields.
Wiley-IEEE Press, 1996.
[119] O. Sy, M. van Beurden, B. Michielsen, and A. Tijhuis, “Semi-intrusive quantification of uncertainties in stochastic electromagnetic interactions: Analysis of a spectral formulation,” in Proc. International Conference on Electromagnetics in Advanced Applications (ICEAA), Torino, 2009.
[120] G. Adomian, Stochastic Systems. Academic Press, inc., 1983.
[121] R. Courant and D. Hilbert, Methods of Mathematical Physics, J. Wiley and Sons,
Eds., 1953, vol. 1.
[122] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. Baltimore, MD, USA, 1989.
[123] L. Mathelin and M. Y. Hussaini, “A stochastic collocation algorithm for uncertainty
analysis,” NASA, Tech. Rep., 2003.
[124] C. Ruf, S. Gross, and S. Misra, “RFI detection and mitigation for microwave radiometry with an agile digital detector,” IEEE Trans. Geoscience Remote Sensing,
vol. 44, no. 3, pp. 694–706, 2006.
[125] E. Njoku, P. Ashcroft, T. Chan, and L. Li, “Global survey and statistics of radio-frequency interference in AMSR-E land observations,” IEEE Trans. Geoscience Remote Sensing, vol. 43, no. 5, pp. 938–947, 2005.
[126] B. Mandelbrot and R. Hudson, The (Mis)behaviour of Markets. Profile Business, 2004.
[127] A. Paulson, J. Scacchia, and D. Goldenberg, “Skewness and kurtosis in pricing
European and American options,” in Proc. Computational Intelligence for Financial
Engineering (CIFER) the IEEE/IAFE 1997, March 1997, pp. 171–176.
[128] S. Alvarez, “Risk management: an analysis of the low-tail behaviour of high frequency data for computing value at risk,” J. Academy of Business and Economics,
2004.
[129] O. Sy, J. Vaessen, M. van Beurden, B. Michielsen, and A. Tijhuis, “Probabilistic
characterization of resonant EM interactions with thin-wires: variance and kurtosis
analysis,” in Proc. Scientific Computing in Electrical Engineering (SCEE 2008),
Espoo, 2008, pp. 41–42.
[130] O. Sy, M. van Beurden, and B. Michielsen, “Analysis of stochastic resonances in
electromagnetic couplings to transmission lines,” in Proc. International Zürich Symposium on EMC, 2009, pp. 33–36.
[131] R. Fletcher, Practical Methods of Optimization, 2nd ed. John Wiley and Sons, 1987.
[132] F. Graybill and H. Iyer, Regression Analysis, Concepts and Applications. Duxbury
Press, 1994.
[133] H. Bohman, “From characteristic function to distribution function via Fourier analysis,” BIT Numerical Mathematics, vol. 12, no. 3, pp. 279–283, 1972.
[134] P. Hughett, “Error bounds for numerical inversion of a probability characteristic
function,” Society for Industrial and Applied Mathematics, vol. 35, no. 4, pp. 1368–
1392, August 1998.
[135] L. A. Waller, B. W. Turnbull, and J. M. Hardin, “Obtaining distribution functions
by numerical inversion of characteristic functions with application,” The American
Statistician, vol. 49, no. 4, pp. 346–350, 1995.
[136] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. Academic Press, September
1999.
[137] K. G. Guderley, “On the estimation of multi-dimensional integrals with strongly
oscillating integrands,” Zeitschrift für Angewandte Mathematik und Physik (ZAMP),
vol. 26, no. 5, pp. 505–519, September 1975.
[138] A. Iserles, S. Nørsett, and S. Olver, Numerical Mathematics and Advanced Applications. Springer Berlin Heidelberg, 2006, ch. Highly Oscillatory Quadrature: The
Story so Far, pp. 97–118.
[139] V. Guillemin and S. Sternberg, Geometric Asymptotics. AMS, 1977.
[140] S. Toriyama and N. Sano, “Probability distribution functions of threshold voltage
fluctuations due to random impurities in decanano MOSFETs,” Physica E: Lowdimensional Systems and Nanostructures, vol. 19, no. 1–2, pp. 44–47, 2003.
[141] S. Xiang and H. Wang, “On the Levin iterative method for oscillatory integrals,” J.
Comput. Appl. Math., vol. 217, no. 1, pp. 38–45, 2008.
[142] W. Thomson, Theory of Vibration with Applications. Taylor & Francis, 2004.
[143] O. Sy, M. van Beurden, B. Michielsen, and A. Tijhuis, “Variance and kurtosis-based characterization of resonances in stochastic transmission lines: local versus
global random geometries,” Turkish J. Electrical Engineering and Computer Sciences, 2009.
[144] E. Dautbegovic, “Wavelets in circuit simulation,” in Proc. Scientific Computing in
Electrical Engineering (SCEE 2008), Espoo, 2008, pp. 141–142.
[145] H. Schippers, J. H. van Tongeren, P. Knott, T. Deloues, P. Lacomme, and
M. R. Scherbarth, “Vibrating antennas and compensation techniques research in
NATO/RTO/SET 087/RTG 50,” in Proc. IEEE Aerospace Conference, Big Sky,
March 2007, pp. 1–13.
[146] P. Salonen, Y. Rahmat-Samii, and M. Kivikoski, “Wearable antennas in the vicinity
of human body,” in Proc. IEEE Antennas and Propagation Society International
Symposium, Monterey, vol. 1, June 2004, pp. 467–470.
[147] C. Hertleer, A. Tronquo, H. Rogier, L. Vallozzi, and L. Van Langenhove, “Aperture-coupled patch antenna for integration into wearable textile systems,” IEEE Antennas and Wireless Propagation Letters, vol. 6, pp. 392–395, May 2007.
[148] D. Polla, “BioMEMS applications in medicine,” in Proc. International Symposium
on Micromechatronics and Human Science, Nagoya, September 2001, pp. 13–15.
[149] A. Brown, “GPS/INS uses low-cost MEMS IMU,” IEEE Aerospace and Electronic
Systems Magazine, vol. 20, no. 9, pp. 3–10, September 2005.
[150] J. Maciel, J. Slocum, J. Smith, and J. Turtle, “MEMS electronically steerable antennas for fire control radars,” in Proc. IEEE Radar Conference, Edinburgh, April
2007, pp. 677–682.
[151] S. Borkar and R. Yang, “Scattering of electromagnetic waves from rough oscillating
surfaces using spectral Fourier method,” IEEE Trans. Ant. Prop., vol. 21, no. 5, pp.
734–736, September 1973.
[152] S. Borkar and R. F. Yang, “Reflection of electromagnetic waves from oscillating
surfaces,” IEEE Trans. Ant. Prop., vol. 23, no. 1, pp. 122–127, January 1975.
[153] D. De Zutter, “Reflections from linearly vibrating objects: plane mirror at oblique
incidence,” IEEE Trans. Ant. Prop., vol. 30, no. 5, pp. 898–903, September 1982.
[154] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 2nd ed. Elsevier,
1992.
[155] H. Risken, The Fokker-Planck equation: methods of solution and applications.
Springer, 1996.
[156] M. L. Mehta, Random Matrices, 3rd ed. Academic press, 1991.
[157] S. Adhikari, “Wishart random matrices in probabilistic structural mechanics,” J.
Engineering Mechanics, vol. 134, no. 12, pp. 1029–1044, 2008.
[158] X. Zheng, S. Hemmady, T. Antonsen, S. M. Anlage, and E. Ott, “Characterization
of fluctuations of impedance and scattering matrices in wave chaotic scattering,”
Phys. Rev. E, vol. 73, no. 046208, pp. 1–6, 2006.
[159] S. Hemmady, X. Zheng, J. Hart, T. Antonsen, E. Ott, and S. M. Anlage, “Universal
properties of two-port scattering, impedance, and admittance matrices of wavechaotic systems,” Phys. Rev. E, vol. 74, no. 036213, pp. 1–12, 2006.
[160] C. Schwab and R. Todor, “Karhunen-Loève approximation of random fields by generalized fast multipole methods,” J. Comput. Phys., vol. 217, no. 1, pp. 100–122,
2006.
[161] A. Sidi, Practical Extrapolation Methods: Their Mathematical Theory and Application. Cambridge University Press, 2002.
[162] R. Abramov, “The multidimensional moment-constrained maximum entropy problem: A BFGS algorithm with constraint scaling,” J. Comput. Physics, vol. 228, pp.
96–108, 2009.
[163] ——, “A practical computational framework for the multidimensional moment-constrained maximum entropy principle,” J. Comput. Phys., vol. 211, no. 1, pp.
198–209, 2006.
[164] A. A. Zinger, “On characteristic functions of multidimensional distributions,” J.
Mathematical Sciences, vol. 99, no. 2, pp. 1105–1109, 2000.
[165] M. Pinsky, “Fourier inversion for multidimensional characteristic functions,” J. Theoretical Probability, vol. 6, no. 1, pp. 187–193, 1993.
[166] A. Papoulis, Probability, Random Variables and Stochastic Processes. McGraw-Hill
Companies, February 1991.
[167] P. Davis and P. Rabinowitz, Methods of Numerical Integration. Academic Press, New York, 1984.
[168] I. Sloan and S. Joe, Lattice Methods for Multiple Integration. Clarendon Press, Oxford, 1994.
Summary
In computational engineering, the reliability of numerical models depends on the
accuracy of the characterization of the interaction configuration. However, in many cases, a precise description of this configuration cannot be guaranteed due to, e.g., excessive complexity, large variability, or insufficient knowledge. For sensitive or mathematically
ill-conditioned models, taking these uncertainties into account is crucial to preserve a
sufficient level of confidence in the numerical results.
The present dissertation addresses this need for electromagnetic compatibility (EMC)
problems, where couplings between electronic devices in a receiving state and
electromagnetic fields are investigated. In the frequency domain, an integral-equation
approach is adopted to generate a numerical input-output representation specified in
Chapter 2. The input of this deterministic black box consists of the parameters describing
the interaction configuration, viz the receiver’s geometrical and physical properties
together with the incident field. The response variables, or observables, are chosen as the
elements of the corresponding Thévenin circuit, which comprises an ideal voltage source
in series with an impedance. This numerical scheme is applied to the example of a thin-wire structure, which can be regarded as a transmission line and serves as the leading example throughout the thesis.
Uncertainties of the inputs of the aforementioned deterministic model are handled via a
stochastic rationale, as presented in Chapter 3. This rationale assumes that the partially or
completely unknown inputs vary randomly according to a probability distribution chosen
a priori. Once this initial randomness is propagated through the deterministic model, the
observables become random variables, which are described completely via their probability
distributions, or partially via their statistical moments. In practice, however, only the
statistical moments can be obtained in a numerically tractable manner from the known
probability distribution of the input variables. At this stage, two major issues arise, viz
the choice of a numerically efficient strategy to compute the statistical moments and the
practical interpretation and post-processing of the statistical data thus obtained.
Computation of statistical moments
The generally multi-dimensional integrals that define the statistical moments cannot be
evaluated in closed form. Instead, they are approximated by a quadrature rule
consisting of a finite weighted sum of values of the observable. Given the numerical cost
inherent to the deterministic model, the number of samples of the observable required by
the quadrature algorithm gives a direct indication of the numerical effort necessary to
compute the statistical moments. In Chapter 4, four different quadrature rules are compared with respect to their complexity, particularly as a function of the dimension of the stochastic problem at hand, i.e. the number of random input parameters.
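As an illustration of how the number of observable samples drives the cost, the sketch below contrasts a Monte Carlo estimate with a tensor-product Gauss-Legendre rule for the mean of a hypothetical smooth observable on the unit cube; the observable and all numerical choices are illustrative stand-ins, not the actual deterministic model of the thesis.

```python
import itertools
import math
import random

def observable(x):
    # Hypothetical smooth observable of d random inputs, a cheap stand-in for
    # the (far costlier) deterministic electromagnetic model.
    return math.exp(-sum(xi * xi for xi in x))

def mc_mean(d, n, seed=1):
    # Monte Carlo: n model evaluations, error O(n^-1/2) regardless of d.
    rng = random.Random(seed)
    return sum(observable([rng.random() for _ in range(d)]) for _ in range(n)) / n

def gauss_mean(d):
    # Two-point Gauss-Legendre per dimension on [0, 1]: 2^d model evaluations,
    # illustrating the exponential growth of tensor-product rules with d.
    nodes = (0.5 - 0.5 / math.sqrt(3.0), 0.5 + 0.5 / math.sqrt(3.0))
    weights = (0.5, 0.5)
    total = 0.0
    for idx in itertools.product(range(2), repeat=d):
        w = math.prod(weights[i] for i in idx)
        total += w * observable([nodes[i] for i in idx])
    return total
```

For d = 3 both estimates approach the exact mean, but the tensor rule already requires 2^d model evaluations, which is precisely the curse of dimension that motivates the sparse-grid-type rules compared in Chapter 4.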
In addition to lowering the complexity of the quadrature rule, it is possible to aim for
alternative representations of the deterministic model which are numerically cheaper to
run. A first attempt in this direction resides in the Perturbation method presented in
Chapter 5. The local transformations applied in this method translate into notable gains
in computation time, which accelerate the computation of statistical moments. However,
the accuracy of these moments depends critically on the smoothness of the observable.
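The idea can be caricatured in one dimension: linearize the observable around the mean input and read the first two moments off the derivative. This is only a sketch of the general principle, with a finite-difference derivative standing in for the local transformations; the actual method of Chapter 5 operates on the integral-equation model itself.

```python
def perturbation_moments(f, mu, sigma, h=1e-5):
    # First-order perturbation of a scalar observable f around the mean input mu:
    #   f(X) ~ f(mu) + f'(mu) (X - mu),   X with mean mu and variance sigma^2,
    # so mean ~ f(mu) and variance ~ (f'(mu) * sigma)^2. Only three evaluations
    # of f are needed, but the estimate degrades when f is not smooth near mu.
    fprime = (f(mu + h) - f(mu - h)) / (2.0 * h)
    return f(mu), (fprime * sigma) ** 2
```

For a linear observable the result is exact; for a resonant, rapidly varying observable the linearization, and hence the moments, break down, which is the accuracy caveat noted above.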
Alternatively, a spectral decomposition of the observable is considered in Chapter 6. This
Polynomial-Chaos method yields a globally accurate representation even for rougher configurations. Nevertheless, the pre-computation of the polynomial-chaos representation
can prove demanding, particularly if the polynomial-chaos system chosen a priori is not
adapted to the probability distribution of the observable, which is unknown a priori.
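For a single standard-normal input, the spectral decomposition can be sketched with probabilists' Hermite polynomials: the coefficients follow from a numerical projection, after which the mean and variance are read off directly from the spectrum. The observable exp(X) below is a toy stand-in, not one of the Thévenin observables of the thesis.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order, nquad=20):
    # Project f(X), X ~ N(0, 1), onto the probabilists' Hermite polynomials He_k:
    #   c_k = E[f(X) He_k(X)] / k!,   since E[He_k(X)^2] = k!
    # The expectation is evaluated by Gauss-Hermite quadrature.
    x, w = hermegauss(nquad)              # nodes/weights for weight exp(-x^2/2)
    w = w / math.sqrt(2.0 * math.pi)      # renormalize to the standard-normal density
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                       # coefficient vector selecting He_k
        ck = float(np.sum(w * fx * hermeval(x, ek))) / math.factorial(k)
        coeffs.append(ck)
    return coeffs

def pce_moments(coeffs):
    # The mean and variance follow directly from the spectral coefficients.
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
    return mean, var
```

For this lognormal toy observable the expansion reproduces the exact mean and, with increasing order, the exact variance; when the polynomial family does not match the distribution of the observable, convergence degrades, which is the caveat noted above.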
Chapter 7 describes a semi-intrusive technique, in which the average and variance of
the Thévenin voltage are obtained by characterizing the effect of the random geometry
of the receiver independently from the polarization of an incident plane wave. Such
a reasoning provides accurate estimates of the first two statistical moments. However,
obtaining higher-order statistical information is contingent upon efficient manipulations
of demanding higher-order tensor products.
Post-processing and interpretation of the statistical moments
In Chapter 8, the practical relevance of standard statistical moments is demonstrated.
The first two moments essentially measure the physical spread of the observable in the
complex plane, while higher-order moments, such as the skewness and the kurtosis, provide
qualitative information about the statistical dispersion of the observable. The Maximum-Entropy principle illustrates the usefulness of higher-order moments to approximate the
probability distribution function (pdf) of the observable.
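As a concrete reading of these quantities, the central moments of a set of samples can be standardized into the skewness and kurtosis; a Gaussian observable would give skewness 0 and kurtosis 3, so departures from these values flag asymmetry or unusually light or heavy tails. The sketch below is generic and not tied to the thesis' observables.

```python
def standardized_moments(samples):
    # Central moments of a sample, standardized into skewness and kurtosis.
    # A Gaussian sample has skewness ~0 and kurtosis ~3; other values indicate
    # asymmetry (skewness) or atypical tail weight (kurtosis).
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((s - mean) ** 2 for s in samples) / n   # variance (spread)
    m3 = sum((s - mean) ** 3 for s in samples) / n
    m4 = sum((s - mean) ** 4 for s in samples) / n
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2
    return mean, m2, skewness, kurtosis
```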
Rather than studying standard statistical moments, the random behaviour of the
observable can also be captured by inverting its characteristic function, which is
computable. Such a discrete inverse Fourier transform (DIFT) method, outlined in
Chapter 9, provides a complete description of the randomness of the observable via its
pdf and its cumulative distribution function (cdf). The results show that the DIFT cdf
is more robust than the DIFT pdf, which exhibits local Runge oscillations at the
discontinuities of the pdf.
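The inversion step can be sketched for a characteristic function that is known in closed form: truncating the inversion integral and applying the trapezoidal rule recovers the pdf. The Gaussian case below is purely illustrative; in the thesis the characteristic function is itself computed numerically from the stochastic model.

```python
import cmath
import math

def dift_pdf(phi, x, tmax=10.0, n=2001):
    # Discrete inverse Fourier transform of a characteristic function phi:
    #   p(x) = (1 / 2 pi) * integral of phi(t) exp(-i t x) dt over the real line,
    # truncated to [-tmax, tmax] and evaluated with the trapezoidal rule.
    dt = 2.0 * tmax / (n - 1)
    acc = 0.0 + 0.0j
    for k in range(n):
        t = -tmax + k * dt
        w = 0.5 if k in (0, n - 1) else 1.0   # trapezoidal end-point weights
        acc += w * phi(t) * cmath.exp(-1j * t * x)
    return (acc * dt).real / (2.0 * math.pi)

def phi_gauss(t):
    # Characteristic function of the standard normal, used as a known test case.
    return math.exp(-0.5 * t * t)
```

Integrating the recovered pdf yields the cdf; as noted above, that cumulative quantity is less sensitive to the truncation and discretization of the inversion integral than the pdf itself.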
Chapter 10 discusses several applications of the stochastic approach presented in this
thesis. First, the influence of the choice of the probability distribution of the input
parameters on the resulting statistical moments of the Thévenin voltage and impedance
is confirmed a posteriori. Next, a conditional stochastic analysis of a locally deformed
thin wire reveals some robustness in the randomness of the induced voltage. Finally,
results are shown for the case of the interaction between an incident field, a randomly
vibrating plate, and an elementary dipole. This test case exemplifies the possibility of
extending the stochastic uncertainty-quantification method of this thesis to more
elaborate models of electromagnetic interactions.
Samenvatting
Voor het gebruik van computer simulaties is de betrouwbaarheid van het onderliggende
numerieke model afhankelijk van de nauwkeurigheid waarmee een configuratie wordt
beschreven. Echter, in veel gevallen kan een precieze beschrijving van de situatie niet
worden gegarandeerd, bijvoorbeeld als gevolg van een buitensporige complexiteit of
onvoldoende kennis. Voor gevoelige of wiskundig slecht geconditioneerde modellen is het
meenemen van onzekerheden van cruciaal belang om voldoende vertrouwen te kunnen
stellen in de numerieke resultaten van een simulatie.
Dit proefschrift bekijkt het bovenstaande probleem voor problemen met betrekking tot
elektromagnetische compatibiliteit (EMC), waarbij de koppeling tussen elektronische
componenten en elektromagnetische velden worden bestudeerd. In hoofdstuk 2 vertrekken
we vanuit een integraalvergelijkingsformulering in het frequentiedomein om een numeriek
verband te leggen tussen invoer en uitvoer van het model. De invoer van een
deterministisch model bestaat uit parameters die de geometrische en fysische
eigenschappen van de configuratie en de eigenschappen van het invallende veld
beschrijven. De uitvoervariabelen, of observabelen, worden gekozen in overeenstemming
met het Thévenin circuit, dat een ideale spanning in serie met een impedantie omvat. We
gebruiken deze numerieke aanpak, toegepast op een dunne draad structuur die kan worden
opgevat als een transmissielijn, als rode draad door dit proefschrift.
Onzekerheden in de invoerparameters van het hiervoor beschreven deterministische model
worden aangepakt via een stochastische redenering, zoals beschreven in hoofdstuk 3. Deze
redenering vertrekt van geheel of gedeeltelijk onbekende waarden voor de invoerparameters,
waarbij de kansdichtheidsfunctie van de invoerparameters op voorhand is vastgelegd. Deze
kansbeschrijving verspreidt zich door het deterministische model, waardoor de observabelen
eveneens een stochastisch karakter krijgen, die op hun beurt volledig worden beschreven
door middel van hun eigen kansdichtheden of door middel van hun statistische momenten.
Echter, in de praktijk is het alleen voor de statistische momenten haalbaar om ze
numeriek te bereken via de reeds bekende kansdichtheid van de invoerparameters. Hieruit
244
Samenvatting
volgen twee hoofdthema’s, namelijk de keuze voor een numeriek efficiente strategie om de
statistische momenten te berekenen, alsmede de praktische interpretatie en verwerking
van de aldus verkregen statistische gegeven.
Bepaling van statistische momenten
De berekening van de statistische momenten leidt tot meer-dimensionale integralen, die
niet in gesloten vorm kunnen worden bepaald. Als alternatief worden ze benaderd door
middel van een kwadratuurregel die bestaat uit een eindige gewogen sum van waarden
van de observabele. Gegeven de numerieke inspanning die bij het deterministische model
hoort, is het aantal evaluaties van de observabele dat benodigd is voor de kwadratuurregel
een directe indicatie voor de numerieke inspanning die benodigd is voor het bepalen van de
statistische momenten.
In hoofdstuk 4 vergelijken we vier verschillende
kwadratuurregels aan de hand van hun complexiteit en in het bijzonder als functie van de
dimensie van het onderhavige stochastische probleem, dat wil zeggen het aantal
stochastische invoerparameters.
Naast het verlagen van de complexiteit van de kwadratuurregel is het mogelijk om andere
representaties van het deterministische model te gebruiken, die numeriek efficinter zijn.
Een eerste poging in die richting leidt tot de perturbatiemethode, zoals uiteengezet in
hoofdstuk 5. De lokale transformatie die in deze methode wordt toegepast leidt tot een
noemenswaardige winst in rekentijd en dus ook tot een versnelling van het berekenen van
de statistische momenten. De nauwkeurigheid van deze momenten hangt echter op een
kritische manier af van de mate van gladheid van de observabele.
Als alternatief wordt in hoofdstuk 6 een spectrale decompositie gebruikt.
Deze
zogenaamde Polynomiale Chaos methode leidt tot een representatie met een globale
nauwkeurigheid, zelfs als de observabele niet glad is.
Desalniettemin kan de
voorbereidende berekening hiervoor bijzonder rekenintensief blijken, met name indien het
gebruikte Polynomiale Chaos systeem, dat op voorhand wordt gekozen, niet goed past op
de kansdichtheid van de observabele.
In hoofdstuk 7 bekijken we een half-ingrijpende techniek, waarbij het gemiddelde en
de variantie van de Thévenin spanningsbron worden verkregen door het effect van een
stochastische geometrie van de ontvanger te karakteriseren, onafhankelijk van de polarisatie
en van een invallende vlakke golf. Een dergelijke aanpak levert nauwkeurige schattingen
op voor de eerste twee statistische momenten. Hogere orde statistische informatie vereist
echter het efficient manipuleren van rekenintensieve hogere orde tensorproducten.
Samenvatting
245
Verwerking en interpretatie van de statistische momenten
In Chapter 8, we demonstrate the practical relevance of a number of standard
statistical moments.
The first two moments essentially measure the physical
spread of the observable in the complex plane, while higher-order moments, such as
the skewness and the kurtosis, provide qualitative information about the statistical
spread of the observable. The Maximum Entropy principle illustrates the usefulness
of higher-order moments in approximating the probability density function of the observable.
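A short sketch of how these two higher-order moments are estimated from samples of an observable (the Maximum Entropy reconstruction itself requires a nonlinear optimization and is omitted here):

```python
import numpy as np

def skewness_kurtosis(samples):
    """Third and fourth standardized moments of a set of samples:
    the skewness measures the asymmetry of the distribution, and
    the kurtosis the weight of its tails (equal to 3 for a Gaussian)."""
    s = np.asarray(samples, dtype=float)
    z = (s - s.mean()) / s.std()
    return float(np.mean(z ** 3)), float(np.mean(z ** 4))

# Gaussian reference case: skewness ~ 0 and kurtosis ~ 3
rng = np.random.default_rng(1)
skew, kurt = skewness_kurtosis(rng.standard_normal(200_000))
```

Deviations of these two numbers from their Gaussian reference values (0 and 3) are what signals, qualitatively, an asymmetric or heavy-tailed observable.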
Instead of studying the standard statistical moments, the stochastic
behavior of the observable can also be characterized by inverting the characteristic
function, which can likewise be computed. This approach, known as the discrete inverse
Fourier transformation (DIFT), is set out in Chapter 9 and allows us
to obtain a complete description of the statistics of the observable, in the form of
its probability density function or cumulative distribution function. The
results show that the approximation of the cumulative distribution function obtained
in this way is more robust than the approximation of the probability density function,
which exhibits local Runge oscillations near discontinuities.
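The inversion step can be illustrated with a toy example in which the characteristic function is known in closed form; this is a plain discretized Fourier integral on a truncated frequency grid, not the thesis's DIFT implementation:

```python
import numpy as np

def pdf_from_cf(phi, x, L=40.0, n=4096):
    """Approximate the probability density at the points x by a
    discrete inversion of the characteristic function phi:
    p(x) ~ (1/2pi) * integral of phi(u) exp(-i u x) over [-L, L].
    The truncation length L must be large enough for phi to have
    decayed to (numerically) zero at the endpoints."""
    u = np.linspace(-L, L, n)
    du = u[1] - u[0]
    integrand = phi(u)[None, :] * np.exp(-1j * np.outer(x, u))
    return np.real(integrand.sum(axis=1)) * du / (2.0 * np.pi)

# Standard normal test case: phi(u) = exp(-u^2/2)
phi = lambda u: np.exp(-0.5 * u ** 2)
p = pdf_from_cf(phi, np.array([0.0, 1.0]))
# p recovers the N(0,1) density, approximately [0.3989, 0.2420]
```

Integrating the recovered density once more yields the cumulative distribution function; as noted above, that integrated quantity is the more robust of the two near discontinuities.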
In Chapter 10, a number of additional applications of the stochastic
method are demonstrated. First, we confirm the importance of the choice of the
probability density functions of the input parameters, by examining their influence
on the statistical moments of the Thévenin voltage source and impedance. The next
example is a conditional stochastic analysis of a locally deformed wire, which
reveals a certain robustness of the stochastic behavior of the induced voltage.
Finally, we show results for the interaction of an incident field with a
stochastically varying plate and an elementary dipole. This illustrates the possibilities
of extending the stochastic uncertainty quantification of this thesis
to more advanced models of electromagnetic interactions.
List of publications
Journal articles
O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Variance and kurtosis-based characterization of resonances in stochastic transmission lines: Local versus global
random geometries”. ELEKTRIK, Turkish Journal of Electrical Engineering & Computer
Sciences, 2009. (invited paper, accepted for publication)
O.O. Sy, B.L. Michielsen, M.C. van Beurden, J.A.H.M. Vaessen, A.G. Tijhuis, “Second-order statistics of fully stochastic electromagnetic interactions I: Theory and computational
techniques”. Radio Science, 2009. (submitted, under review)
O.O. Sy, M.C. van Beurden, B.L. Michielsen, J.A.H.M. Vaessen, A.G. Tijhuis, “Second-order statistics of fully stochastic electromagnetic interactions II: Field to thin wire
coupling”. Radio Science, 2009. (submitted, under review)
O.O. Sy, M.C. van Beurden, B.L. Michielsen, J.A.H.M. Vaessen, A.G. Tijhuis, “Higher-order stochastic analysis of EMC problems with thin wires: maximum-entropy versus
discrete inverse-Fourier transformation”. 2009. (in preparation)
O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Semi-intrusive stochastic
method for uncertainty quantification in EMC problems”. 2009. (in preparation).
Book chapter
O.O. Sy, M.C. van Beurden, B.L. Michielsen, J.A.H.M. Vaessen, A.G. Tijhuis,
“Probabilistic characterization of resonant electromagnetic interactions with thin wires:
variance and kurtosis analysis”. In Mathematics for industry: Scientific Computing in
Electrical Engineering 2008, Springer (accepted for publication in 2009).
International conferences
O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Semi-intrusive quantification
of uncertainties in fully stochastic electromagnetic interactions: analysis of a spectral
formulation.” International Conference on Electromagnetics in Advanced Applications
(ICEAA 09), September 2009 (Torino, Italy).
O.O. Sy, M.C. van Beurden, B.L. Michielsen, “Analysis of stochastic resonances in electromagnetic couplings to transmission lines”. International Zürich Symposium on Electromagnetic Compatibility (EMC Zürich 09), January 2009 (Zürich, Switzerland), pp. 33–36.
O.O. Sy, J.A.H.M. Vaessen, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, A.P.M.
Zwamborn, J. de Groot, “Experimental validation of the stochastic model of a randomly
fluctuating transmission line”. European Microwave Conference (EuMC 08), October
2008, (Amsterdam, The Netherlands), pp. 833–836.
O.O. Sy, J.A.H.M. Vaessen, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Probabilistic
characterization of resonant electromagnetic interactions with thin wires: variance and
kurtosis analysis”. Scientific Computing in Electrical Engineering (SCEE 08), September
2008, (Espoo, Finland), pp. 41–42.
O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Probabilistic study of fully
stochastic electromagnetic interactions: coupling between a stochastic PEC plate and a
stochastic incident plane wave”. 29th URSI General Assembly (URSI GA 08), August
2008, (Chicago, USA).
O.O. Sy, J.A.H.M. Vaessen, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Probabilistic
approach of electromagnetic interaction problems using quadrature rules”. Progress in
Electromagnetic Research Symposium (PIERS 08), July 2008, (Cambridge, USA), p. 624.
O.O. Sy, J.A.H.M. Vaessen, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis,
“Probabilistic study of the coupling between deterministic electromagnetic fields and a
stochastic thin-wire over a PEC plane”. International Conference on Electromagnetics
in Advanced Applications (ICEAA 07), September 2007 (Torino, Italy), pp. 637–640.
J.A.H.M. Vaessen, O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis,
“Comparison of computational methods for interaction between electromagnetic fields and
stochastic systems”. URSI EMTS Commission B, July 2007 (Ottawa, Canada).
O.O. Sy, J.A.H.M. Vaessen, B.L. Michielsen, M.C. van Beurden, A.G. Tijhuis, “Modeling
the interaction of stochastic systems with electromagnetic fields”. IEEE Antennas and
Propagation Society International Symposium (IEEE APS 06), July 2006 (Albuquerque,
USA), vol. 2, pp. 931–934.
International workshops
M.C. van Beurden, O.O. Sy, J.A.H.M. Vaessen, B.L. Michielsen, A.G. Tijhuis, “Computing
and interpreting statistical moments in electromagnetic analysis”. IET & NPL Seminar
on Statistical electromagnetic Methods for Analysing Complex Systems and Structures,
March 2009 (Teddington, UK).
O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Uncertainty quantification in
EMC models: a probabilistic approach”. URSI BeNeLux forum, May 2008 (Brussels, Belgium), pp. 29–31.
O.O. Sy, J.A.H.M. Vaessen, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis,
“Probabilistic study of the coupling between a deterministic field and stochastic system:
application to a thin wire over a PEC ground plane”. EMC Europe Workshop, June 2007
(Paris, France), pp. 21–25.
J.A.H.M. Vaessen, O.O. Sy, M.C. van Beurden, B.L. Michielsen, A.G. Tijhuis, “Monte-Carlo method applied to a stochastically varying wire above a PEC ground plane”.
EMC Europe Workshop, June 2007 (Paris, France), pp. 1–5.
Other publication
O.O. Sy, V. Gobin, “Adaptation d’une méthode intégrale aux calculs de diffractions en très
basses fréquences” (Adaptation of an integral method to very-low-frequency scattering
computations). Master’s thesis, Office National d’Études et de Recherches Aérospatiales
(ONERA), July 2003 (Toulouse, France).
Curriculum vitae
Ousmane Sy was born in Paris on March 22nd, 1980. He graduated from high school in
1998, after having attended the Lycée Guebre Mariam in Ethiopia and the Collège Sainte-Barbe in Paris. From September 1998 to June 2000, he studied mathematics and physics via
the Classes Préparatoires aux Grandes Écoles of the Lycée Chaptal, in Paris. Between
September 2000 and June 2003, he studied aerospace engineering at the École Nationale
de l’Aviation Civile (ENAC), in Toulouse, France. He graduated with an ingenieur-ENAC
diploma and a PEGASUS certificate in June 2003. In July 2003, he obtained an M.Sc. in
Microwaves and Optical Telecommunications (cum laude) from Paul Sabatier University
in Toulouse. His M.Sc. thesis was carried out at ONERA, the French aerospace research
center, under the supervision of Dr. Vincent Gobin. It dealt with the extension of an
integral-equation approach to very low frequencies, by using Loop-Tree basis functions.
Between October 2003 and March 2005, he was employed by Assystem Services as an
aerospace engineer. He first carried out a mission at EADS Astrium to perform pre-flight
tests on Eutelsat’s W3A satellite (launched in March 2004). He then worked on ESA’s
“optical wireless layer validation for data communications on-board satellites”. From
January 2004 to March 2005, he worked at Airbus as a test engineer on navigation systems.
In March 2005, he enrolled as a PhD student at the Eindhoven University of Technology
(TU/e). Under the supervision of Prof. Anton Tijhuis, Dr. Martijn van Beurden (both
from TU/e) and Ir. Bastiaan Michielsen (from ONERA), he investigated probabilistic
methods of quantifying uncertainties in electromagnetic compatibility problems.
The results of this thesis have been published in international journals and conferences. In
August 2008, Ousmane received a Young Scientist Award during the general assembly of
the International Union of RadioScience (URSI), in Chicago, USA. In September 2008, he
was invited as a distinguished PhD student, sponsored by NOKIA, to attend the conference
on Scientific Computing in Electrical Engineering, in Espoo, Finland. In January 2009,
he received the Best Student Paper Award at Zürich’s international symposium on EMC,
in Switzerland.
Acknowledgements
This dissertation is the fruit of four rewarding years, during which I have had the privilege
of working under the supervision of prof. Anton Tijhuis, dr. Martijn van Beurden and
ir. Bastiaan Michielsen. The respectful and enthusiastic consideration you have shown
for my work has been a great source of motivation for me. The wonderful social and
working atmosphere that prevails in the EM department owes a lot to the positive and
joyful personality of Anton. As my daily supervisor, Martijn did a tremendous job of
giving me friendly advice and encouraging my initiatives, while keeping me in
line with the priorities of my project. Getting to know and work with Bas has also
been an excellent experience. Our discussions on the theoretical and practical aspects of
my thesis have been of great help to me, as were our various informal conversations.
Further, I am grateful to all the members of my jury for their thorough and very
constructive review of the 200+ pages of my thesis.
I enjoyed the regular IOP project meetings that we had, which provided many occasions to
gather with fellow researchers from Delft, Enschede and Eindhoven. I wish to express my
gratitude to dr. Lex van Deursen, prof. Frank Leferink and dr. Rik Naus for their useful
comments and references during my PhD.
I am indebted to Dr. John Burkardt from Virginia Tech, who provided
me with the sparse-grid-rule algorithm. I also express my thanks to prof. Christian Soize
from Université Paris Est for his explanations of the polynomial-chaos method.
My colleagues at TU/e have largely contributed to make me feel at home in Eindhoven.
I have genuinely enjoyed my collaboration with Jean-Pierre Vaessen, aka my “stochastic
teammate”, who provided me with an efficient thin-wire code and often helped me
“communicate” with my computer. Dr. Bastiaan de Hon, dr. Emilia Motoasca and
dr. Rutger Smink have guided me through the paperwork that I had to face when I
arrived in Eindhoven (with my (in)famous car...). “Gift-hunting” with you in Eindhoven
was really fun. Your constant inspiration for new ideas has been a pleasure to witness.
Dr. Frank van de Water and David Duque Guerra have been very helpful as regards the
surface code. Thank you both. By the way, Tom, how are you doing? Many thanks to
Hans Moermans for his kind and much-needed help. Working with Martien Oppeneer has
been an enjoyable and rewarding experience. All the best with your thesis. Heartfelt thanks
to all my colleagues (students, TOIOs, AIOs, postdocs, Doret, Suzanne, and lecturers) of
the EMW capacity group. I would also like to thank Elie for his excellent language lessons.
I could not leave Alexandre Chabory out of these acknowledgements, given the good times
spent between bouwkunde, the zwarte doos, kennispoort, and the danssalon. Thank you,
Ehsan Baha, for your patience and for the great job you have done with the cover.
Many thanks to dr. Vincent Gobin for the enthusiasm with which he supervised my
graduation internship, and also for putting me in touch with Bas for this thesis.
I am also grateful to prof. Bernard Souny and prof. Paul Combes for their help
during my PhD applications.
I thank my friends, who will recognize themselves, for having been there in the good
moments and especially in the rougher ones. A “special dedication” to each and every
one of you. In particular, I want to say a big thank-you to Carole, the moral fairy.
Last but certainly not least, I dedicate this work to my mother and my father. I also
include the rest of the Sy family of Addis, i.e. Inna, Issa, Lalla and Malick. Thanks to
Tonton Amar and Tantie Christiane for their constant support over the years. Finally,
my thoughts go to Tonton Nidiaye Cheikh.
I give thanks to the Almighty for the favours He has granted us and continues to bestow
upon us. I pray that He keeps watching over those who are dear to me.
To Addis and to Africa, with Love.
Ousmane Oumar Sy